<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+
+ <refpurpose>
+ export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
+ </refpurpose>
</refnamediv>
<refsynopsisdiv>
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an SQL script file or an archive. The output contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ A plain-text SQL script will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option> option, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ This option can only be omitted when <option>--format</option> is <literal>plain</literal>.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of the dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other formats, <application>pg_dumpall</application> first creates two files,
+ <filename>toc.glo</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data (roles and tablespaces) in custom format. The second
+ contains a mapping between database OIDs and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in
+ the <filename>databases</filename> subdirectory, with each dump named using the database's OID.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into <application>pg_restore</application>. Each
+ database's directory is named using its OID.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into <application>pg_restore</application>. The archive
+ will be named <filename><replaceable>dboid</replaceable>.dmp</filename>, where
+ <replaceable>dboid</replaceable> is the OID of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into <application>pg_restore</application>. The archive
+ will be named <filename><replaceable>dboid</replaceable>.tar</filename>, where
+ <replaceable>dboid</replaceable> is the OID of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ See <xref linkend="app-pgdump"/> for details on how the
+ various non-plain-text archive formats work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
<listitem>
<para>
Dump only global objects (roles and tablespaces), no databases.
+ Note: <option>--globals-only</option> cannot be used together with
+ <option>--clean</option> when a non-text dump format is selected.
</para>
</listitem>
</varlistentry>
<refsect1 id="app-pg-dumpall-ex">
<title>Examples</title>
<para>
- To dump all databases:
-
+ To dump all databases in plain text format (the default):
<screen>
<prompt>$</prompt> <userinput>pg_dumpall > db.out</userinput>
</screen>
</para>
+ <para>
+ To dump all databases using other formats:
+<screen>
+<prompt>$</prompt> <userinput>pg_dumpall --format=directory -f db.out</userinput>
+<prompt>$</prompt> <userinput>pg_dumpall --format=custom -f db.out</userinput>
+<prompt>$</prompt> <userinput>pg_dumpall --format=tar -f db.out</userinput>
+</screen>
+ </para>
+
<para>
To restore database(s) from this file, you can use:
<screen>
the script will attempt to drop other databases immediately, and that
will fail for the database you are connected to.
</para>
+
+ <para>
+ If the dump was taken in a non-plain-text format, use
+ <application>pg_restore</application> to restore the databases:
+<screen>
+<prompt>$</prompt> <userinput>pg_restore db.out -d postgres -C</userinput>
+</screen>
+ This will restore all databases. To restore only some of them, use the
+ <option>--exclude-database</option> option to skip the ones you do not want.
+ </para>
</refsect1>
<refsect1>
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from archives
+ created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
+ <para>
+ A non-plain-text archive made using <application>pg_dumpall</application>
+ is a directory containing a <filename>toc.glo</filename> file with global
+ objects (roles and tablespaces), a <filename>map.dat</filename> file
+ listing the databases, and a subdirectory for each database containing
+ its archive. When restoring such an archive,
+ <application>pg_restore</application> first restores global objects from
+ <filename>toc.glo</filename>, then processes each database listed in
+ <filename>map.dat</filename>. Lines in <filename>map.dat</filename> can
+ be commented out with <literal>#</literal> to skip restoring specific
+ databases.
+ </para>
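The skip-by-comment behavior described above can be sketched with a mock archive directory. The file names (<filename>toc.glo</filename>, <filename>map.dat</filename>, <filename>databases/</filename>) follow the description in this section; the OIDs and database names are invented for illustration, and no real archive contents are created:

```shell
# Build a mock pg_dumpall directory archive (layout per the text above;
# toc.glo left empty, OIDs and database names invented for illustration)
rm -rf db.out
mkdir -p db.out/databases
: > db.out/toc.glo
printf '16384 sales\n16385 staging\n' > db.out/map.dat

# Comment out the "staging" line so a restore would skip that database
sed 's/^16385 staging$/#&/' db.out/map.dat > db.out/map.dat.new
mv db.out/map.dat.new db.out/map.dat

# Only the uncommented databases remain candidates for restore
grep -v '^#' db.out/map.dat
```

With the <literal>staging</literal> line commented out, only <literal>16384 sales</literal> remains active in <filename>map.dat</filename>.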
+
<para>
Obviously, <application>pg_restore</application> cannot restore information
that is not present in the archive file. For instance, if the
ignorable error messages will be reported,
unless <option>--if-exists</option> is also specified.
</para>
+ <para>
+ When restoring a <application>pg_dumpall</application> archive,
+ <option>--if-exists</option> is implied by <option>--clean</option>,
+ since global objects such as roles and tablespaces may not exist
+ in the target cluster.
+ </para>
</listitem>
</varlistentry>
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from a non-plain-text archive made using <application>pg_dumpall</application>.
</para>
<para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from a non-plain-text archive made using <application>pg_dumpall</application>.
+ Note: <option>--globals-only</option> cannot be used with
+ <option>--data-only</option>,
+ <option>--schema-only</option>,
+ <option>--statistics-only</option>,
+ <option>--statistics</option>,
+ <option>--exit-on-error</option>,
+ <option>--single-transaction</option>,
+ <option>--clean</option>, or
+ <option>--transaction-size</option>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from a non-plain-text archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
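As a rough illustration of how such a pattern selects database names, the following simplified sketch translates the two basic wildcards (<literal>*</literal> to <literal>.*</literal>, <literal>?</literal> to <literal>.</literal>) into an anchored regular expression; real <application>psql</application>-style patterns also handle quoting and other regex metacharacters, which this ignores:

```shell
# Simplified sketch of psql-style pattern matching as used by
# --exclude-database: * -> .* and ? -> . , anchored at both ends.
# (Real patterns also support quoting; this ignores that.)
pattern='test*'
regex="^$(printf '%s' "$pattern" | sed -e 's/\*/.*/g' -e 's/?/./g')\$"
printf 'testdb\nproduction\ntest1\n' | grep -E "$regex"   # matches: testdb, test1
```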
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
in <option>--clean</option> mode. This suppresses <quote>does not
exist</quote> errors that might otherwise be reported. This
option is not valid unless <option>--clean</option> is also
- specified.
+ specified. This option is implied when restoring a
+ <application>pg_dumpall</application> archive with
+ <option>--clean</option>.
</para>
</listitem>
</varlistentry>
</para>
</listitem>
+ <listitem>
+ <para>
+ When restoring from a non-plain-text archive made using
+ <application>pg_dumpall</application>, the <option>--section</option>
+ option may be used, but must include <literal>pre-data</literal>.
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
+ The following options cannot be used when restoring from a non-plain-text
+ archive made using <application>pg_dumpall</application>:
+ <option>-a/--data-only</option>,
+ <option>-l/--list</option>,
+ <option>-L/--use-list</option>,
+ <option>--no-schema</option>,
+ <option>--statistics-only</option>, and
+ <option>--strict-names</option>.
+ </para>
+ </listitem>
+
</itemizedlist>
</para>
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
't/006_pg_dump_compress.pl',
+ 't/007_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * Update the archive handle in the on_exit callback registered by
+ * on_exit_close_archive(). When pg_restore processes a pg_dumpall archive
+ * containing multiple databases, each database is restored from a separate
+ * archive. After closing one archive and opening the next, we update the
+ * shutdown_info to reference the new archive handle so the cleanup callback
+ * will close the correct archive on exit.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append the output to the file, as we are restoring
+ * a dump of multiple databases that was taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
if ((te->reqs & (REQ_SCHEMA | REQ_DATA | REQ_STATS)) == 0)
continue; /* ignore if not to be dumped at all */
+ /* Skip TABLESPACE entries if --no-tablespaces was given. */
+ if (ropt->noTablespace && te && te->desc &&
+ (strcmp(te->desc, "TABLESPACE") == 0))
+ continue;
+
+ /*
+ * Skip DROP commands for databases, roles, and tablespaces
+ * (DROP_GLOBAL entries) if --clean was not specified.
+ */
+ if (!ropt->dropSchema && te && te->desc &&
+ strcmp(te->desc, "DROP_GLOBAL") == 0)
+ continue;
+
switch (_tocEntryRestorePass(te))
{
case RESTORE_PASS_MAIN:
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
/*******************************
* Stuff below here should be 'private' to the archiver routines
+ *
+ * If append_data is set, append the output to the file, as we are restoring
+ * a dump of multiple databases that was taken by pg_dumpall.
*******************************/
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
/* initialize for backwards compatible string processing */
AH->public.encoding = 0; /* PG_SQL_ASCII */
- AH->public.std_strings = false;
+ AH->public.std_strings = true;
/* sql error handling */
AH->public.exit_on_error = true;
return 0;
}
+ /*
+ * Global object TOC entries (e.g., ROLEs or TABLESPACEs) must not be
+ * ignored.
+ */
+ if (strcmp(te->desc, "ROLE") == 0 ||
+ strcmp(te->desc, "ROLE PROPERTIES") == 0 ||
+ strcmp(te->desc, "TABLESPACE") == 0 ||
+ strcmp(te->desc, "DROP_GLOBAL") == 0)
+ return REQ_SCHEMA;
+
/*
* Process exclusions that affect certain classes of TOC entries.
*/
if (ropt->no_subscriptions &&
strncmp(te->tag, "SUBSCRIPTION", strlen("SUBSCRIPTION")) == 0)
return 0;
+
+ /*
+ * Comments on global objects (ROLEs or TABLESPACEs) should not be
+ * skipped, since global objects themselves are never skipped.
+ */
+ if (strncmp(te->tag, "ROLE", strlen("ROLE")) == 0 ||
+ strncmp(te->tag, "TABLESPACE", strlen("TABLESPACE")) == 0)
+ return REQ_SCHEMA;
}
/*
if (ropt->no_subscriptions &&
strncmp(te->tag, "SUBSCRIPTION", strlen("SUBSCRIPTION")) == 0)
return 0;
+
+ /*
+ * Security labels on global objects (ROLEs or TABLESPACEs) should not
+ * be skipped, since global objects themselves are never skipped.
+ */
+ if (strncmp(te->tag, "ROLE", strlen("ROLE")) == 0 ||
+ strncmp(te->tag, "TABLESPACE", strlen("TABLESPACE")) == 0)
+ return REQ_SCHEMA;
}
/* If it's a subscription, maybe ignore it */
else if (strcmp(type, "CAST") == 0 ||
strcmp(type, "CHECK CONSTRAINT") == 0 ||
strcmp(type, "CONSTRAINT") == 0 ||
+ strcmp(type, "DROP_GLOBAL") == 0 ||
+ strcmp(type, "ROLE PROPERTIES") == 0 ||
+ strcmp(type, "ROLE") == 0 ||
strcmp(type, "DATABASE PROPERTIES") == 0 ||
strcmp(type, "DEFAULT") == 0 ||
strcmp(type, "FK CONSTRAINT") == 0 ||
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
/*-------------------------------------------------------------------------
*
* pg_dumpall.c
+ * pg_dumpall dumps all databases and global objects (roles and
+ * tablespaces) from a PostgreSQL cluster.
+ *
+ * For text format output, globals are written directly and pg_dump is
+ * invoked for each database, with all output going to stdout or a file.
+ *
+ * For non-text formats (custom, directory, tar), a directory is created
+ * containing a toc.glo file with global objects, a map.dat file mapping
+ * database OIDs to names, and a databases/ subdirectory with individual
+ * pg_dump archives for each database.
*
* Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
- * pg_dumpall forces all pg_dump output to be text, since it also outputs
- * text into the same output stream.
- *
* src/bin/pg_dump/pg_dumpall.c
*
*-------------------------------------------------------------------------
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
static void dumpUserConfig(PGconn *conn, const char *username);
static void dumpDatabases(PGconn *conn);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts, char *dbfile);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
static void executeCommand(PGconn *conn, const char *query);
+static void check_for_invalid_global_names(PGconn *conn,
+ SimpleStringList *database_exclude_names);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
+static int createDumpId(void);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
+static Archive *fout = NULL;
+static int dumpIdVal = 0;
+static ArchiveFormat archDumpFormat = archNull;
+static const CatalogId nilCatalogId = {0, 0};
int
main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *format_name = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
int c,
ret;
int optindex;
+ DumpOptions dopt;
pg_logging_init(argv[0]);
pg_logging_set_level(PG_LOG_WARNING);
}
pgdumpopts = createPQExpBuffer();
+ InitDumpOptions(&dopt);
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ format_name = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
case 'U':
pguser = pg_strdup(optarg);
+ dopt.cparams.username = pg_strdup(optarg);
break;
case 'v':
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(format_name);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option %s=d|c|t requires option %s",
+ "-F/--format", "-f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
+ /* restrict-key is only supported with --format=plain */
+ if (archDumpFormat != archNull && restrict_key)
+ pg_fatal("option %s can only be used with %s=plain",
+ "--restrict-key", "--format");
+
+ /* --clean and -g/--globals-only cannot be used together with a non-text dump format */
+ if (archDumpFormat != archNull && output_clean && globals_only)
+ pg_fatal("options %s and %s cannot be used together with a non-text dump format",
+ "--clean", "-g/--globals-only");
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
+ /*
+ * Open the output file if required, otherwise use stdout. For non-text
+ * formats, create the output directory instead.
+ */
+ if (archDumpFormat != archNull)
+ {
+ Assert(filename);
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If you don't provide a restrict key, one will be appointed for you.
*/
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
- if (verbose)
- dumpTimestamp("Started on");
+ /* Create an archive file for the global commands. */
+ if (archDumpFormat != archNull)
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+ char global_path[MAXPGPATH];
+ const char *encname;
+ pg_compress_specification compression_spec = {0};
- /*
- * Enter restricted mode to block any unexpected psql meta-commands. A
- * malicious source might try to inject a variety of things via bogus
- * responses to queries. While we cannot prevent such sources from
- * affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
- */
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+ /*
+ * Check that no global object names contain newlines or carriage
+ * returns, which would break the map.dat file format. This is only
+ * needed for servers older than v19, which started prohibiting such
+ * names.
+ */
+ if (server_version < 190000)
+ check_for_invalid_global_names(conn, &database_exclude_names);
- /*
- * We used to emit \connect postgres here, but that served no purpose
- * other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
- */
+ /* Set the file path for the global SQL commands. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", filename);
+
+ /* Open the output file */
+ fout = CreateArchive(global_path, archCustom, compression_spec,
+ dosync, archModeWrite, NULL, DATA_DIR_SYNC_METHOD_FSYNC);
+
+ /* Make dump options accessible right away */
+ SetArchiveOptions(fout, &dopt, NULL);
- /* Restore will need to write to the target cluster */
- fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+ ((ArchiveHandle *) fout)->connection = conn;
+ ((ArchiveHandle *) fout)->public.numWorkers = 1;
- /* Replicate encoding and standard_conforming_strings in output */
- fprintf(OPF, "SET client_encoding = '%s';\n",
- pg_encoding_to_char(encoding));
- fprintf(OPF, "SET standard_conforming_strings = on;\n");
- fprintf(OPF, "\n");
+ /* Register the cleanup hook */
+ on_exit_close_archive(fout);
+
+ /* Let the archiver know how noisy to be */
+ fout->verbose = verbose;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release
+ * of our own major version. (See also version check in pg_dump.c.)
+ */
+ fout->minRemoteVersion = 90200;
+ fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
+ fout->numWorkers = 1;
+
+ /* Dump default_transaction_read_only. */
+ appendPQExpBufferStr(qry, "SET default_transaction_read_only = off;\n\n");
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "default_transaction_read_only",
+ .description = "default_transaction_read_only",
+ .section = SECTION_PRE_DATA,
+ .createStmt = qry->data));
+ resetPQExpBuffer(qry);
+
+ /* Put the correct encoding into the archive */
+ encname = pg_encoding_to_char(encoding);
+
+ appendPQExpBufferStr(qry, "SET client_encoding = ");
+ appendStringLiteralAH(qry, encname, fout);
+ appendPQExpBufferStr(qry, ";\n");
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "client_encoding",
+ .description = "client_encoding",
+ .section = SECTION_PRE_DATA,
+ .createStmt = qry->data));
+ resetPQExpBuffer(qry);
+
+ /* Put the correct escape string behavior into the archive. */
+ appendPQExpBuffer(qry, "SET standard_conforming_strings = 'on';\n");
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "standard_conforming_strings",
+ .description = "standard_conforming_strings",
+ .section = SECTION_PRE_DATA,
+ .createStmt = qry->data));
+ destroyPQExpBuffer(qry);
+ }
+ else
+ {
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+
+ if (verbose)
+ dumpTimestamp("Started on");
+
+ /*
+ * Enter restricted mode to block any unexpected psql meta-commands. A
+ * malicious source might try to inject a variety of things via bogus
+ * responses to queries. While we cannot prevent such sources from
+ * affecting the destination at restore time, we can block psql
+ * meta-commands so that the client machine that runs psql with the
+ * dump output remains unaffected.
+ */
+ fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+
+ /*
+ * We used to emit \connect postgres here, but that served no purpose
+ * other than to break things for installations without a postgres
+ * database. Everything we're restoring here is a global, so
+ * whichever database we're connected to at the moment is fine.
+ */
+
+ /* Restore will need to write to the target cluster */
+ fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+
+ /* Replicate encoding and standard_conforming_strings in output */
+ fprintf(OPF, "SET client_encoding = '%s';\n",
+ pg_encoding_to_char(encoding));
+ fprintf(OPF, "SET standard_conforming_strings = on;\n");
+ fprintf(OPF, "\n");
+ }
if (!data_only && !statistics_only && !no_schema)
{
* dependency analysis because databases never depend on each other,
* and tablespaces never depend on each other. Roles could have
* grants to each other, but DROP ROLE will clean those up silently.
+ *
+ * For non-text formats, pg_dumpall always includes the cleanup (DROP)
+ * commands in the archive; pg_restore then emits them only if the user
+ * explicitly passes --clean. This covers corner cases where pg_restore
+ * needs cleanup instructions that a pg_dumpall run without --clean
+ * would otherwise not have produced.
*/
- if (output_clean)
+ if (output_clean || archDumpFormat != archNull)
{
if (!globals_only && !roles_only && !tablespaces_only)
dropDBs(conn);
dumpTablespaces(conn);
}
- /*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
- */
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * Exit restricted mode just before dumping the databases. pg_dump
+ * will handle entering restricted mode again as appropriate.
+ */
+ fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ }
if (!globals_only && !roles_only && !tablespaces_only)
dumpDatabases(conn);
- PQfinish(conn);
+ if (archDumpFormat == archNull)
+ {
+ PQfinish(conn);
- if (verbose)
- dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+ if (verbose)
+ dumpTimestamp("Completed on");
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
- if (filename)
+ if (filename)
+ {
+ fclose(OPF);
+
+ /* sync the resulting file, errors are not fatal */
+ if (dosync)
+ (void) fsync_fname(filename, false);
+ }
+ }
+ else
{
- fclose(OPF);
+ RestoreOptions *ropt;
+
+ ropt = NewRestoreOptions();
+ SetArchiveOptions(fout, &dopt, ropt);
- /* sync the resulting file, errors are not fatal */
- if (dosync)
- (void) fsync_fname(filename, false);
+ /* Mark which entries should be output */
+ ProcessArchiveRestoreOptions(fout);
+ CloseArchive(fout);
}
exit_nicely(0);
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
+ printf(_("%s exports a PostgreSQL database cluster as an SQL script or to other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
i_rolname = PQfnumber(res, "rolname");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && archDumpFormat == archNull)
fprintf(OPF, "--\n-- Drop roles\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
{
const char *rolename;
+ PQExpBuffer delQry = createPQExpBuffer();
rolename = PQgetvalue(res, i, i_rolname);
- fprintf(OPF, "DROP ROLE %s%s;\n",
- if_exists ? "IF EXISTS " : "",
- fmtId(rolename));
+ if (archDumpFormat == archNull)
+ {
+ appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
+ if_exists ? "IF EXISTS " : "",
+ fmtId(rolename));
+ fprintf(OPF, "%s", delQry->data);
+ }
+ else
+ {
+ appendPQExpBuffer(delQry, "DROP ROLE IF EXISTS %s;\n",
+ fmtId(rolename));
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = psprintf("ROLE %s", fmtId(rolename)),
+ .description = "DROP_GLOBAL",
+ .section = SECTION_PRE_DATA,
+ .createStmt = delQry->data));
+ }
+
+ destroyPQExpBuffer(delQry);
}
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
dumpRoles(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer comment_buf = createPQExpBuffer();
+ PQExpBuffer seclabel_buf = createPQExpBuffer();
PGresult *res;
int i_oid,
i_rolname,
i_rolcomment = PQfnumber(res, "rolcomment");
i_is_current_user = PQfnumber(res, "is_current_user");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && archDumpFormat == archNull)
fprintf(OPF, "--\n-- Roles\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
}
resetPQExpBuffer(buf);
+ resetPQExpBuffer(comment_buf);
+ resetPQExpBuffer(seclabel_buf);
if (binary_upgrade)
{
if (!no_comments && !PQgetisnull(res, i, i_rolcomment))
{
- appendPQExpBuffer(buf, "COMMENT ON ROLE %s IS ", fmtId(rolename));
- appendStringLiteralConn(buf, PQgetvalue(res, i, i_rolcomment), conn);
- appendPQExpBufferStr(buf, ";\n");
+ appendPQExpBuffer(comment_buf, "COMMENT ON ROLE %s IS ", fmtId(rolename));
+ appendStringLiteralConn(comment_buf, PQgetvalue(res, i, i_rolcomment), conn);
+ appendPQExpBufferStr(comment_buf, ";\n");
}
if (!no_security_labels)
buildShSecLabels(conn, "pg_authid", auth_oid,
"ROLE", rolename,
- buf);
+ seclabel_buf);
+
+ if (archDumpFormat == archNull)
+ {
+ fprintf(OPF, "%s", buf->data);
+ fprintf(OPF, "%s", comment_buf->data);
- fprintf(OPF, "%s", buf->data);
+ if (seclabel_buf->data[0] != '\0')
+ fprintf(OPF, "%s", seclabel_buf->data);
+ }
+ else
+ {
+ char *tag = psprintf("ROLE %s", fmtId(rolename));
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = "ROLE",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
+ if (comment_buf->data[0] != '\0')
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = "COMMENT",
+ .section = SECTION_PRE_DATA,
+ .createStmt = comment_buf->data));
+
+ if (seclabel_buf->data[0] != '\0')
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = "SECURITY LABEL",
+ .section = SECTION_PRE_DATA,
+ .createStmt = seclabel_buf->data));
+ }
}
/*
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && archDumpFormat == archNull)
fprintf(OPF, "\n--\n-- User Configurations\n--\n");
for (i = 0; i < PQntuples(res); i++)
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
destroyPQExpBuffer(buf);
+ destroyPQExpBuffer(comment_buf);
+ destroyPQExpBuffer(seclabel_buf);
}
dumpRoleMembership(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer querybuf = createPQExpBuffer();
PQExpBuffer optbuf = createPQExpBuffer();
PGresult *res;
int start = 0,
i_inherit_option = PQfnumber(res, "inherit_option");
i_set_option = PQfnumber(res, "set_option");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && archDumpFormat == archNull)
fprintf(OPF, "--\n-- Role memberships\n--\n\n");
/*
/* Generate the actual GRANT statement. */
resetPQExpBuffer(optbuf);
- fprintf(OPF, "GRANT %s", fmtId(role));
- fprintf(OPF, " TO %s", fmtId(member));
+ resetPQExpBuffer(querybuf);
+ appendPQExpBuffer(querybuf, "GRANT %s", fmtId(role));
+ appendPQExpBuffer(querybuf, " TO %s", fmtId(member));
if (*admin_option == 't')
appendPQExpBufferStr(optbuf, "ADMIN OPTION");
if (dump_grant_options)
appendPQExpBufferStr(optbuf, "SET FALSE");
}
if (optbuf->data[0] != '\0')
- fprintf(OPF, " WITH %s", optbuf->data);
+ appendPQExpBuffer(querybuf, " WITH %s", optbuf->data);
if (dump_grantors)
- fprintf(OPF, " GRANTED BY %s", fmtId(grantor));
- fprintf(OPF, ";\n");
+ appendPQExpBuffer(querybuf, " GRANTED BY %s", fmtId(grantor));
+ appendPQExpBuffer(querybuf, ";\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", querybuf->data);
+ else
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = psprintf("ROLE %s", fmtId(role)),
+ .description = "ROLE PROPERTIES",
+ .section = SECTION_PRE_DATA,
+ .createStmt = querybuf->data));
}
}
PQclear(res);
destroyPQExpBuffer(buf);
+ destroyPQExpBuffer(querybuf);
+ destroyPQExpBuffer(optbuf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
"FROM pg_catalog.pg_parameter_acl "
"ORDER BY 1");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && archDumpFormat == archNull)
fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
exit_nicely(1);
}
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = psprintf("ROLE %s", fmtId(parowner)),
+ .description = "ROLE PROPERTIES",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
free(fparname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
"WHERE spcname !~ '^pg_' "
"ORDER BY 1");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && archDumpFormat == archNull)
fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
{
char *spcname = PQgetvalue(res, i, 0);
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ if (archDumpFormat == archNull)
+ {
+ appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
+ if_exists ? "IF EXISTS " : "",
+ fmtId(spcname));
+ fprintf(OPF, "%s", delQry->data);
+ }
+ else
+ {
+ appendPQExpBuffer(delQry, "DROP TABLESPACE IF EXISTS %s;\n",
+ fmtId(spcname));
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = psprintf("TABLESPACE %s", fmtId(spcname)),
+ .description = "DROP_GLOBAL",
+ .section = SECTION_PRE_DATA,
+ .createStmt = delQry->data));
+ }
- fprintf(OPF, "DROP TABLESPACE %s%s;\n",
- if_exists ? "IF EXISTS " : "",
- fmtId(spcname));
+ destroyPQExpBuffer(delQry);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
dumpTablespaces(PGconn *conn)
{
PGresult *res;
+ PQExpBuffer comment_buf = createPQExpBuffer();
+ PQExpBuffer seclabel_buf = createPQExpBuffer();
int i;
/*
"WHERE spcname !~ '^pg_' "
"ORDER BY 1");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && archDumpFormat == archNull)
fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
/* needed for buildACLCommands() */
fspcname = pg_strdup(fmtId(spcname));
+ resetPQExpBuffer(comment_buf);
+ resetPQExpBuffer(seclabel_buf);
+
if (binary_upgrade)
{
appendPQExpBufferStr(buf, "\n-- For binary upgrade, must preserve pg_tablespace oid\n");
if (!no_comments && spccomment && spccomment[0] != '\0')
{
- appendPQExpBuffer(buf, "COMMENT ON TABLESPACE %s IS ", fspcname);
- appendStringLiteralConn(buf, spccomment, conn);
- appendPQExpBufferStr(buf, ";\n");
+ appendPQExpBuffer(comment_buf, "COMMENT ON TABLESPACE %s IS ", fspcname);
+ appendStringLiteralConn(comment_buf, spccomment, conn);
+ appendPQExpBufferStr(comment_buf, ";\n");
}
if (!no_security_labels)
buildShSecLabels(conn, "pg_tablespace", spcoid,
"TABLESPACE", spcname,
- buf);
+ seclabel_buf);
+
+ if (archDumpFormat == archNull)
+ {
+ fprintf(OPF, "%s", buf->data);
- fprintf(OPF, "%s", buf->data);
+ if (comment_buf->data[0] != '\0')
+ fprintf(OPF, "%s", comment_buf->data);
+
+ if (seclabel_buf->data[0] != '\0')
+ fprintf(OPF, "%s", seclabel_buf->data);
+ }
+ else
+ {
+ char *tag = psprintf("TABLESPACE %s", fspcname);
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = "TABLESPACE",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
+
+ if (comment_buf->data[0] != '\0')
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = "COMMENT",
+ .section = SECTION_PRE_DATA,
+ .createStmt = comment_buf->data));
+
+ if (seclabel_buf->data[0] != '\0')
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = "SECURITY LABEL",
+ .section = SECTION_PRE_DATA,
+ .createStmt = seclabel_buf->data));
+ }
free(fspcname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ destroyPQExpBuffer(comment_buf);
+ destroyPQExpBuffer(seclabel_buf);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY datname");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && archDumpFormat == archNull)
fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ PQExpBuffer delQry = createPQExpBuffer();
/*
* Skip "postgres" and "template1"; dumpDatabases() will deal with
strcmp(dbname, "template0") != 0 &&
strcmp(dbname, "postgres") != 0)
{
- fprintf(OPF, "DROP DATABASE %s%s;\n",
- if_exists ? "IF EXISTS " : "",
- fmtId(dbname));
+ if (archDumpFormat == archNull)
+ {
+ appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
+ if_exists ? "IF EXISTS " : "",
+ fmtId(dbname));
+ fprintf(OPF, "%s", delQry->data);
+ }
+ else
+ {
+ appendPQExpBuffer(delQry, "DROP DATABASE IF EXISTS %s;\n",
+ fmtId(dbname));
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = psprintf("DATABASE %s", fmtId(dbname)),
+ .description = "DROP_GLOBAL",
+ .section = SECTION_PRE_DATA,
+ .createStmt = delQry->data));
+ }
+
+ destroyPQExpBuffer(delQry);
}
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
res = executeQuery(conn, buf->data);
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && archDumpFormat == archNull)
{
char *sanitized;
makeAlterConfigCommand(conn, PQgetvalue(res, i, 0),
"ROLE", username, NULL, NULL,
buf);
- fprintf(OPF, "%s", buf->data);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = psprintf("ROLE %s", fmtId(username)),
+ .description = "ROLE PROPERTIES",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
}
PQclear(res);
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && archDumpFormat == archNull)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If directory/tar/custom format is specified, create a "databases"
+ * subdirectory under the main output directory; pg_dump will then create
+ * each database's dump file or subdirectory inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+ char *map_preamble[] = {
+ "#################################################################",
+ "# map.dat",
+ "#",
+ "# This file maps oids to database names",
+ "#",
+ "# pg_restore will restore all the databases listed here, unless",
+ "# otherwise excluded. You can also inhibit restoration of a",
+ "# database by removing the line or commenting out the line with",
+ "# a # mark.",
+ "#################################################################",
+ NULL
+ };
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory named "databases" under the main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ for (char **line = map_preamble; *line; line++)
+ fprintf(map_file, "%s\n", *line);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
pg_log_info("dumping database \"%s\"", dbname);
sanitized = sanitize_line(dbname, true);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
free(sanitized);
/*
{
if (output_clean)
create_opts = "--clean --create";
+ /* Since pg_dump won't emit a \connect command, we must */
+ else if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
else
- {
create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
- }
}
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ /*
+ * If this is not a plain format dump, compute the per-database output
+ * path and append the database OID and name to the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Write one line with the database OID and name to the map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
+ ret = runPgDump(dbname, create_opts, dbfilepath);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (filename && archDumpFormat == archNull)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain format dump, append the output file name and
+ * dump format to the pg_dump command to produce an archive dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s -f %s %s", pg_dump_bin,
+ pgdumpopts->data, dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
}
+/*
+ * check_for_invalid_global_names
+ *
+ * Check that no database, role, or tablespace name contains a newline or
+ * carriage return character. Such characters in database names would break
+ * the map.dat file format used for non-plain-text dumps. Role and tablespace
+ * names are also checked because such characters were forbidden starting in
+ * v19.
+ *
+ * Excluded databases are skipped since they won't appear in map.dat.
+ */
+static void
+check_for_invalid_global_names(PGconn *conn,
+ SimpleStringList *database_exclude_names)
+{
+ PGresult *res;
+ int i;
+ PQExpBuffer names;
+ int count = 0;
+
+ res = executeQuery(conn,
+ "SELECT datname AS objname, 'database' AS objtype "
+ "FROM pg_catalog.pg_database "
+ "WHERE datallowconn AND datconnlimit != -2 "
+ "UNION ALL "
+ "SELECT rolname AS objname, 'role' AS objtype "
+ "FROM pg_catalog.pg_roles "
+ "UNION ALL "
+ "SELECT spcname AS objname, 'tablespace' AS objtype "
+ "FROM pg_catalog.pg_tablespace");
+
+ names = createPQExpBuffer();
+
+ for (i = 0; i < PQntuples(res); i++)
+ {
+ char *objname = PQgetvalue(res, i, 0);
+ char *objtype = PQgetvalue(res, i, 1);
+
+ /* Skip excluded databases since they won't be in map.dat */
+ if (strcmp(objtype, "database") == 0 &&
+ simple_string_list_member(database_exclude_names, objname))
+ continue;
+
+ if (strpbrk(objname, "\n\r"))
+ {
+ appendPQExpBuffer(names, " %s: \"", objtype);
+ for (char *p = objname; *p; p++)
+ {
+ if (*p == '\n')
+ appendPQExpBufferStr(names, "\\n");
+ else if (*p == '\r')
+ appendPQExpBufferStr(names, "\\r");
+ else
+ appendPQExpBufferChar(names, *p);
+ }
+ appendPQExpBufferStr(names, "\"\n");
+ count++;
+ }
+ }
+
+ PQclear(res);
+
+ if (count > 0)
+ pg_fatal("database, role, or tablespace names contain a newline or carriage return character, which is not supported in non-plain-text dumps:\n%s",
+ names->data);
+
+ destroyPQExpBuffer(names);
+}
+
+
/*
* dumpTimestamp
*/
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the user-specified dump format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
+
+/*
+ * createDumpId
+ *
+ * Return the next dumpId.
+ */
+static int
+createDumpId(void)
+{
+ return ++dumpIdVal;
+}
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data);
+static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts);
+
+static int restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(char *dumpdirpath,
+ SimplePtrList *dbname_oid_list);
+
+/*
+ * Stores a database OID and the corresponding name.
+ */
+typedef struct DbOidName
+{
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} DbOidName;
+
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"restrict-key", required_argument, NULL, 6},
+ {"exclude-database", required_argument, NULL, 7},
{NULL, 0, NULL, 0}
};
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global SQL commands */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
opts->restrict_key = pg_strdup(optarg);
break;
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
+
default:
/* getopt_long already emitted a complaint */
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option %s cannot be used together with %s",
+ "--exclude-database", "-g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
pg_fatal("options %s and %s cannot be used together",
"-1/--single-transaction", "--transaction-size");
+ if (opts->single_txn && globals_only)
+ pg_fatal("options %s and %s cannot be used together when restoring an archive created by pg_dumpall",
+ "--single-transaction", "-g/--globals-only");
+
+ if (opts->txn_size && globals_only)
+ pg_fatal("options %s and %s cannot be used together when restoring an archive created by pg_dumpall",
+ "--transaction-size", "-g/--globals-only");
+
+ if (opts->exit_on_error && globals_only)
+ pg_fatal("options %s and %s cannot be used together when restoring an archive created by pg_dumpall",
+ "--exit-on-error", "-g/--globals-only");
+
+ if (data_only && globals_only)
+ pg_fatal("options %s and %s cannot be used together",
+ "-a/--data-only", "-g/--globals-only");
+ if (schema_only && globals_only)
+ pg_fatal("options %s and %s cannot be used together",
+ "-s/--schema-only", "-g/--globals-only");
+ if (statistics_only && globals_only)
+ pg_fatal("options %s and %s cannot be used together",
+ "--statistics-only", "-g/--globals-only");
+ if (with_statistics && globals_only)
+ pg_fatal("options %s and %s cannot be used together",
+ "--statistics", "-g/--globals-only");
+
/*
* -C is not compatible with -1, because we can't create a database inside
* a transaction block.
opts->formatName);
}
+ /*
+ * If a toc.glo file is present, restore all the databases listed in
+ * map.dat, skipping those that match --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ (file_exists_in_directory(inputFileSpec, "toc.glo")))
+ {
+ char global_path[MAXPGPATH];
+ RestoreOptions *tmpopts = pg_malloc0_object(RestoreOptions);
+
+ opts->format = archUnknown;
+
+ memcpy(tmpopts, opts, sizeof(RestoreOptions));
+
+ /*
+ * Can only use --list or --use-list options with a single database
+ * dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option %s cannot be used when restoring an archive created by pg_dumpall",
+ "-l/--list");
+ if (opts->tocFile)
+ pg_fatal("option %s cannot be used when restoring an archive created by pg_dumpall",
+ "-L/--use-list");
+
+ if (opts->strict_names)
+ pg_fatal("option %s cannot be used when restoring an archive created by pg_dumpall",
+ "--strict-names");
+ if (globals_only && opts->dropSchema)
+ pg_fatal("options %s and %s cannot be used together when restoring an archive created by pg_dumpall",
+ "--clean", "-g/--globals-only");
+
+ /*
+ * For pg_dumpall archives, --clean implies --if-exists since global
+ * objects may not exist in the target cluster.
+ */
+ if (opts->dropSchema && !opts->if_exists)
+ {
+ opts->if_exists = 1;
+ pg_log_info("--if-exists is implied by --clean for pg_dumpall archives");
+ }
+
+ if (no_schema)
+ pg_fatal("option %s cannot be used when restoring an archive created by pg_dumpall",
+ "--no-schema");
+
+ if (data_only)
+ pg_fatal("option %s cannot be used when restoring an archive created by pg_dumpall",
+ "-a/--data-only");
+
+ if (statistics_only)
+ pg_fatal("option %s cannot be used when restoring an archive created by pg_dumpall",
+ "--statistics-only");
+
+ if (!(opts->dumpSections & DUMP_PRE_DATA))
+ pg_fatal("option %s cannot exclude %s when restoring a pg_dumpall archive",
+ "--section", "--pre-data");
+
+ /*
+ * To restore from a pg_dumpall archive, -C (create database) option
+ * must be specified unless we are only restoring globals.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option %s must be specified when restoring an archive created by pg_dumpall",
+ "-C/--create");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their specific archives.");
+ exit_nicely(1);
+ }
+
+ /*
+ * Always restore global objects, even if --exclude-database leaves
+ * zero databases to process. If --globals-only was given, skip the
+ * per-database restore entirely.
+ */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
+
+ n_errors = restore_global_objects(global_path, tmpopts);
+
+ if (globals_only)
+ pg_log_info("database restoring skipped because option %s was specified",
+ "-g/--globals-only");
+ else
+ {
+ /* Now restore all the databases from map.dat */
+ n_errors = n_errors + restore_all_databases(inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else
+ {
+ if (db_exclude_patterns.head != NULL)
+ {
+ simple_string_list_destroy(&db_exclude_patterns);
+ pg_fatal("option %s can be used only when restoring an archive created by pg_dumpall",
+ "--exclude-database");
+ }
+
+ if (globals_only)
+ pg_fatal("option %s can be used only when restoring an archive created by pg_dumpall",
+ "-g/--globals-only");
+
+ /* Process if toc.glo file does not exist. */
+ n_errors = restore_one_database(inputFileSpec, opts, numWorkers, false);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_global_objects
+ *
+ * This restores all global objects.
+ */
+static int
+restore_global_objects(const char *inputFileSpec, RestoreOptions *opts)
+{
+ Archive *AH;
+ int nerror = 0;
+
+ /* Set format to custom so that the toc.glo file can be read. */
+ opts->format = archCustom;
+ opts->txn_size = 0;
+
+ AH = OpenArchive(inputFileSpec, opts->format);
+
+ SetArchiveOptions(AH, NULL, opts);
+
+ on_exit_close_archive(AH);
+
+ /* Let the archiver know how noisy to be */
+ AH->verbose = opts->verbose;
+
+ /* Don't output TOC entry comments when restoring globals */
+ ((ArchiveHandle *) AH)->noTocComments = 1;
+
+ AH->exit_on_error = false;
+
+ /* Parallel execution is not supported for global object restoration. */
+ AH->numWorkers = 1;
+
+ ProcessArchiveRestoreOptions(AH);
+ RestoreArchive(AH, false);
+
+ nerror = AH->n_errors;
+
+ /* AH may be freed in CloseArchive? */
+ CloseArchive(AH);
+
+ return nerror;
+}
+
+/*
+ * restore_one_database
+ *
+ * This will restore one database using its toc.dat file.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, only update the archive handle used for
+ * cleanup: the previous entry is already in the array and its connection
+ * has been closed, so we can reuse the same array slot.
*/
- on_exit_close_archive(AH);
+ if (!append_data)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
+ printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
+ "combined and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Mark for skipping any entries in dbname_oid_list whose names match an
+ * entry in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PQExpBuffer db_lit;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+ db_lit = createPQExpBuffer();
+
+ /*
+ * Process all database names one by one; those matching an exclude
+ * pattern are marked to be skipped during the restore.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ bool skip_db_restore = false;
+
+ resetPQExpBuffer(query);
+ resetPQExpBuffer(db_lit);
+
+ appendStringLiteralConn(db_lit, dbidname->str, conn);
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * If there is an exact match then we don't need to try a pattern
+ * match
+ */
+ if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ /* Otherwise, try a pattern match if there is a connection */
+ else
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, db_lit->data,
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ dbidname->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if (PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database name \"%s\" matches --exclude-database pattern \"%s\"", dbidname->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /*
+ * Mark db to be skipped or increment the counter of dbs to be
+ * restored
+ */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dbidname->str);
+ dbidname->oid = InvalidOid;
+ }
+ else
+ count_db++;
+ }
+
+ destroyPQExpBuffer(query);
+ destroyPQExpBuffer(db_lit);
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names with their corresponding OIDs. Each line has the form
+ * "<oid> <dbname>"; lines that do not begin with a digit are ignored.
+ *
+ * Returns the total number of database names found in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(char *dumpdirpath, SimplePtrList *dbname_oid_list)
+{
+ StringInfoData linebuf;
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ int count = 0;
+ int len;
+
+ /*
+ * If there is no map.dat file in the dump, return early, as there are
+ * no databases to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
+ return 0;
+ }
+
+ len = strlen(dumpdirpath);
+
+ /* Trim slash from directory name. */
+ while (len > 1 && dumpdirpath[len - 1] == '/')
+ {
+ dumpdirpath[len - 1] = '\0';
+ len--;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ initStringInfo(&linebuf);
+
+ /* Append all the dbname/db_oid combinations to the list. */
+ while (pg_get_line_buf(pfile, &linebuf))
+ {
+ Oid db_oid = InvalidOid;
+ char *dbname;
+ DbOidName *dbidname;
+ int namelen;
+ char *p = linebuf.data;
+
+ /* Look for the database OID. */
+ while (isdigit((unsigned char) *p))
+ p++;
+
+ /* ignore lines that don't begin with a digit */
+ if (p == linebuf.data)
+ continue;
+
+ if (*p == ' ')
+ {
+ sscanf(linebuf.data, "%u", &db_oid);
+ p++;
+ }
+
+ /* dbname is the rest of the line */
+ dbname = p;
+ namelen = strlen(dbname);
+
+ /* Strip trailing newline */
+ if (namelen > 0 && dbname[namelen - 1] == '\n')
+ dbname[--namelen] = '\0';
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || namelen < 1)
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ count + 1);
+
+ dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
+ dbidname->oid = db_oid;
+ strlcpy(dbidname->str, dbname, namelen + 1);
+
+ pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
+ dbidname->str, db_oid, map_file_path);
+
+ simple_ptr_list_append(dbname_oid_list, dbidname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ pfree(linebuf.data);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * Restore all databases whose dumps are present in the dump directory,
+ * using the mapping in the map.dat file.
+ *
+ * Databases specified with the --exclude-database option are skipped.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimplePtrList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total = 0;
+ char *connected_db = NULL;
+ PGconn *conn = NULL;
+ RestoreOptions *original_opts = pg_malloc0_object(RestoreOptions);
+ RestoreOptions *tmpopts = pg_malloc0_object(RestoreOptions);
+
+ memcpy(original_opts, opts, sizeof(RestoreOptions));
+
+ /* Save the database name so it can be reused for all the databases. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile((char *) inputFileSpec, &dbname_oid_list);
+
+ pg_log_info(ngettext("found %d database name in \"%s\"",
+ "found %d database names in \"%s\"",
+ num_total_db),
+ num_total_db, "map.dat");
+
+ /*
+ * If any exclude patterns were given, connect to a database to process them.
+ */
+ if (db_exclude_patterns.head != NULL)
+ {
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "postgres");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "template1");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (!conn)
+ {
+ pg_log_error("could not connect to databases \"postgres\" or \"template1\"\n"
+ "Please specify an alternative database.");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+ }
+ }
+
+ /* Filter the db list according to the exclude patterns. */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+ PQfinish(conn);
+ }
+ else
+ num_db_restore = num_total_db;
+
+ /* Exit if no db needs to be restored. */
+ if (num_db_restore == 0)
+ {
+ pg_log_info(ngettext("no database needs restoring out of %d database",
+ "no database needs restoring out of %d databases", num_total_db),
+ num_total_db);
+ pg_free(original_opts);
+ pg_free(tmpopts);
+ return 0;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We have a list of databases to restore after processing the
+ * exclude-database switch(es). Now we can restore them one by one.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (!OidIsValid(dbidname->oid))
+ continue;
+
+ /*
+ * Since pg_backup_archiver.c may modify RestoreOptions during the
+ * previous restore, we must provide a fresh copy of the original
+ * "opts" for each call to restore_one_database.
+ */
+ memcpy(tmpopts, original_opts, sizeof(RestoreOptions));
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into an already created database. (used with -d/--dbname option)
+ */
+ if (tmpopts->cparams.override_dbname)
+ {
+ pfree(tmpopts->cparams.override_dbname);
+ tmpopts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", inputFileSpec);
+
+ /*
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise try to use a directory called
+ * {oid}.
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", inputFileSpec, dbidname->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", inputFileSpec, dbidname->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", inputFileSpec, dbidname->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", dbidname->str);
+
+ /* If the database already exists, don't set the createDB flag. */
+ if (tmpopts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(dbidname->str, NULL, tmpopts->cparams.pghost,
+ tmpopts->cparams.pgport, tmpopts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ tmpopts->createDB = 0;
+ tmpopts->cparams.dbname = dbidname->str;
+ }
+ else
+ {
+ /* We'll have to create it */
+ tmpopts->createDB = 1;
+ tmpopts->cparams.dbname = connected_db;
+ }
+ }
+
+ /* Restore the single database. */
+ n_errors = restore_one_database(subdirpath, tmpopts, numWorkers, true);
+
+ n_errors_total += n_errors;
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_ptr_list_destroy(&dbname_oid_list);
+
+ pg_free(original_opts);
+ pg_free(tmpopts);
+
+ return n_errors_total;
+}
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
+ 'pg_dumpall: unrecognized output format');
+
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'd', '--restrict-key=uu', '-f dumpfile' ],
+ qr/\Qpg_dumpall: error: option --restrict-key can only be used with --format=plain\E/,
+ 'pg_dumpall: --restrict-key can only be used with plain dump format');
+
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'd', '--globals-only', '--clean', '-f', 'dumpfile' ],
+ qr/\Qpg_dumpall: error: options --clean and -g\/--globals-only cannot be used together in non-text dump\E/,
+ 'pg_dumpall: --clean and -g/--globals-only cannot be used together in non-text dump');
+
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'd' ],
+ qr/\Qpg_dumpall: error: option -F\/--format=d|c|t requires option -f\/--file\E/,
+ 'pg_dumpall: non-plain format requires --file option');
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--data-only', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: options -a\/--data-only and -g\/--globals-only cannot be used together\E/,
+ 'pg_restore: error: options -a/--data-only and -g/--globals-only cannot be used together'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--schema-only', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: options -s\/--schema-only and -g\/--globals-only cannot be used together\E/,
+ 'pg_restore: error: options -s/--schema-only and -g/--globals-only cannot be used together'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--statistics-only', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: options --statistics-only and -g\/--globals-only cannot be used together\E/,
+ 'pg_restore: error: options --statistics-only and -g/--globals-only cannot be used together'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option --exclude-database can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --exclude-database is used in pg_restore with dump of pg_dump'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--globals-only', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option -g\/--globals-only can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --globals-only is used in pg_restore with the dump of pg_dump'
+);
done_testing();
--- /dev/null
+# Copyright (c) 2021-2026, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape tablespace locations on Windows.
+my $tablespace2_orig = $tablespace2;
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each of these test cases is named, and those names are used for failure
+# reporting and also to save the dump and restore output needed for the
+# test assertions.
+#
+# The "setup_sql" entry is a valid psql script containing SQL commands to
+# execute before running the tests. All setups are executed before any test
+# runs.
+#
+# The "dump_cmd" and "restore_cmd" are the commands that will be executed.
+# The "restore_cmd" must include the --file flag to save the restore output
+# so that we can assert on it.
+#
+# The "like" and "unlike" entries are regexps matched against the pg_restore
+# output. At least one of them must be set per test case, and both may be
+# present; see the "excluding_databases" test case for an example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match the "E" prefix as optional, since it is added to LOCATION when
+ # running on Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2_orig';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
+ CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
+ CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
+ CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
+ CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
+ CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE dbex3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE dbex4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
+ },
+
+ format_tar => {
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
+ },
+
+ format_custom => {
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^\n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ },);
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a new target cluster to restore each test case into, so that we
+ # don't need to clean up the target cluster after each run.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Dumpall from node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # pg_restore --file output file.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if ( !($pgdumpall_runs{$run}->{like})
+ && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+# Some negative test cases restoring a pg_dumpall dump with pg_restore.
+#
+# Report an error when -C is not used in pg_restore with a pg_dumpall dump.
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# report an error when the -l/--list option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--list',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# report an error when -L/--use-list option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--use-list' => 'use',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -L\/--use-list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When -L/--use-list is used in pg_restore with dump of pg_dumpall');
+
+# report an error when --strict-names option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--strict-names',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option --strict-names cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --strict-names is used in pg_restore with dump of pg_dumpall');
+
+# report an error when --clean and -g/--globals-only are used in pg_restore with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--clean',
+ '--globals-only',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: options --clean and -g\/--globals-only cannot be used together when restoring an archive created by pg_dumpall\E/,
+ 'When --clean and -g/--globals-only are used in pg_restore with dump of pg_dumpall'
+);
+
+# report an error when a nonexistent database is given with the -d option
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq',
+ ],
+ qr/\QFATAL: database "dbpq" does not exist\E/,
+ 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
+);
+
+# report an error when --no-schema is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--no-schema',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option --no-schema cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --no-schema is used in pg_restore with dump of pg_dumpall');
+
+# report an error when --data-only is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--data-only',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -a\/--data-only cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --data-only is used in pg_restore with dump of pg_dumpall');
+
+# report an error when --statistics-only is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--statistics-only',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option --statistics-only cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --statistics-only is used in pg_restore with dump of pg_dumpall');
+
+# report an error when --section excludes pre-data with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--section' => 'post-data',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option --section cannot exclude --pre-data when restoring a pg_dumpall archive\E/,
+ 'When --section=post-data is used in pg_restore with dump of pg_dumpall');
+
+# report an error when --globals-only and --data-only are used together
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--globals-only',
+ '--data-only',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: options -a\/--data-only and -g\/--globals-only cannot be used together\E/,
+ 'When --globals-only and --data-only are used together');
+
+# report an error when --globals-only and --schema-only are used together
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--globals-only',
+ '--schema-only',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: options -s\/--schema-only and -g\/--globals-only cannot be used together\E/,
+ 'When --globals-only and --schema-only are used together');
+
+# report an error when --globals-only and --statistics-only are used together
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--globals-only',
+ '--statistics-only',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: options --statistics-only and -g\/--globals-only cannot be used together\E/,
+ 'When --globals-only and --statistics-only are used together');
+
+# report an error when --globals-only and --statistics are used together
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--globals-only',
+ '--statistics',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: options --statistics and -g\/--globals-only cannot be used together\E/,
+ 'When --globals-only and --statistics are used together');
+
+# report an error when --globals-only and --exit-on-error are used together
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--globals-only',
+ '--exit-on-error',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: options --exit-on-error and -g\/--globals-only cannot be used together\E/,
+ 'When --globals-only and --exit-on-error are used together');
+
+# report an error when --globals-only and --single-transaction are used together
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--globals-only',
+ '--single-transaction',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: options --single-transaction and -g\/--globals-only cannot be used together\E/,
+ 'When --globals-only and --single-transaction are used together');
+
+# report an error when --globals-only and --transaction-size are used together
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--globals-only',
+ '--transaction-size' => '100',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: options --transaction-size and -g\/--globals-only cannot be used together\E/,
+ 'When --globals-only and --transaction-size are used together');
+
+# verify map.dat preamble exists
+my $map_dat_content = slurp_file("$tempdir/format_directory/map.dat");
+like(
+ $map_dat_content,
+ qr/^# map\.dat\n.*# This file maps oids to database names/ms,
+ 'map.dat contains expected preamble');
+
+# verify commenting out a line in map.dat skips that database
+$node->safe_psql($run_db, 'CREATE DATABASE comment_test_db;');
+$node->safe_psql('comment_test_db',
+	'CREATE TABLE comment_test_table (id int);');
+
+$node->command_ok(
+ [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/comment_test",
+ ],
+ 'pg_dumpall for comment test');
+
+# Modify map.dat to comment out the comment_test_db entry
+my $map_content = slurp_file("$tempdir/comment_test/map.dat");
+$map_content =~ s/^(\d+ comment_test_db)$/# $1/m;
+open(my $fh, '>', "$tempdir/comment_test/map.dat")
+ or die "Cannot open map.dat: $!";
+print $fh $map_content;
+close($fh);
+
+# Create a target node and restore; the commented-out database should be skipped
+my $target_comment = PostgreSQL::Test::Cluster->new("target_comment");
+$target_comment->init;
+$target_comment->start;
+
+$node->command_ok(
+ [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/comment_test_restore.sql",
+		'--host' => $target_comment->host,
+		'--port' => $target_comment->port,
+ "$tempdir/comment_test",
+ ],
+ 'pg_restore with commented out database in map.dat');
+
+my $restore_output = slurp_file("$tempdir/comment_test_restore.sql");
+unlike(
+ $restore_output,
+ qr/CREATE DATABASE comment_test_db/,
+ 'commented out database in map.dat is not restored');
+
+# Test that --clean implies --if-exists for pg_dumpall archives
+$node->command_ok(
+ [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--clean',
+ '--file' => "$tempdir/clean_test.sql",
+ "$tempdir/format_custom",
+ ],
+ 'pg_restore with --clean on pg_dumpall archive');
+
+my $clean_output = slurp_file("$tempdir/clean_test.sql");
+like(
+ $clean_output,
+ qr/DROP ROLE IF EXISTS/,
+ '--clean implies --if-exists: DROP ROLE IF EXISTS in output');
+
+$node->stop('fast');
+
+done_testing();
CustomScanState
CycleCtr
DBState
+DbOidName
DCHCacheEntry
DEADLOCK_INFO
DECountItem