By default, connections obtain an adapters map from the global map
exposed as `psycopg.adapters`: changing the content of this object will
affect every connection created afterwards. You may specify a different
- template adapters map using the *context* parameter on
+ template adapters map using the ``context`` parameter on
`~psycopg.Connection.connect()`.
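For instance, a minimal sketch of using a per-connection adapters map (the
custom map and connection string below are only illustrative):

.. code:: python

    import psycopg
    from psycopg.adapt import AdaptersMap

    # Copy the global map so customisations don't leak to other connections
    ctx = AdaptersMap(psycopg.adapters)
    # ... register custom dumpers/loaders on 'ctx' here ...

    conn = psycopg.connect("dbname=test", context=ctx)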
.. image:: ../pictures/adapt.svg
If you want to avoid starting to connect to the database at import time, and
want to wait for the application to be ready, you can create the pool using
-*open* = `!False`, and call the `~ConnectionPool.open()` and
+``open=False``, and call the `~ConnectionPool.open()` and
`~ConnectionPool.close()` methods when the conditions are right. Certain
frameworks provide callbacks triggered when the program is started and stopped
(for instance `FastAPI startup/shutdown events`__): they are perfect to
# the pool is now closed
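For instance, a hedged sketch of wiring the pool to FastAPI startup/shutdown
events (the application object and connection string are just examples):

.. code:: python

    from fastapi import FastAPI
    from psycopg_pool import ConnectionPool

    app = FastAPI()
    pool = ConnectionPool("dbname=test", open=False)

    @app.on_event("startup")
    def open_pool():
        pool.open()

    @app.on_event("shutdown")
    def close_pool():
        pool.close()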
When the pool is open, the pool's background workers start creating the
-requested *min_size* connections, while the constructor (or the `!open()`
+requested ``min_size`` connections, while the constructor (or the `!open()`
method) returns immediately. This allows the program some leeway to start
before the target database is up and running. However, if your application is
misconfigured, or the network is down, it means that the program will be able
Connections life cycle
----------------------
The pool background workers create connections according to the parameters
-*conninfo*, *kwargs*, and *connection_class* passed to `ConnectionPool`
+``conninfo``, ``kwargs``, and ``connection_class`` passed to `ConnectionPool`
constructor, invoking something like :samp:`{connection_class}({conninfo},
**{kwargs})`. Once a connection is created it is also passed to the
-*configure()* callback, if provided, after which it is put in the pool (or
+``configure()`` callback, if provided, after which it is put in the pool (or
passed to a client requesting it, if someone is already knocking at the door).
-If a connection expires (it passes *max_lifetime*), or is returned to the pool
+If a connection expires (it passes ``max_lifetime``), or is returned to the pool
in broken state, or is found closed by `~ConnectionPool.check()`, then the
pool will dispose of it and will start a new connection attempt in the
background.
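As a sketch, a pool configured this way might look like the following (the
callback body and the parameter values are only illustrative):

.. code:: python

    from psycopg_pool import ConnectionPool

    def configure(conn):
        # Run once on every new connection, before it is made available
        conn.execute("SET timezone TO 'UTC'")
        conn.commit()  # leave the connection in idle state

    pool = ConnectionPool(
        "dbname=test",                          # conninfo
        kwargs={"application_name": "myapp"},   # extra connect() arguments
        configure=configure,
        max_lifetime=60 * 60.0,                 # recycle connections after 1 hour
    )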
ones available in the pool are requested, the requesting threads are queued
and are served a connection as soon as one is available, either because
another client has finished using it or because the pool is allowed to grow
-(when *max_size* > *min_size*) and a new connection is ready.
+(when ``max_size`` > ``min_size``) and a new connection is ready.
The main way to use the pool is to obtain a connection using the
`~ConnectionPool.connection()` context, which returns a `~psycopg.Connection`
committed, or rolled back if the context is exited with an exception.
At the end of the block the connection is returned to the pool and shouldn't
-be used anymore by the code which obtained it. If a *reset()* function is
+be used anymore by the code which obtained it. If a ``reset()`` function is
specified in the pool constructor, it is called on the connection before
-returning it to the pool. Note that the *reset()* function is called in a
+returning it to the pool. Note that the ``reset()`` function is called in a
worker thread, so that the thread which used the connection can keep its
execution without being slowed down by it.
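In practice the pattern looks like this sketch (the query is only an example):

.. code:: python

    from psycopg_pool import ConnectionPool

    pool = ConnectionPool("dbname=test")

    with pool.connection() as conn:
        conn.execute("INSERT INTO test (num) VALUES (%s)", (42,))
        # committed on successful exit of the block,
        # rolled back if an exception escapes it

    # here the connection is back in the pool: don't use 'conn' anymore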
Pool connection and sizing
--------------------------
-A pool can have a fixed size (specifying no *max_size* or *max_size* =
-*min_size*) or a dynamic size (when *max_size* > *min_size*). In both cases, as
-soon as the pool is created, it will try to acquire *min_size* connections in
-the background.
+A pool can have a fixed size (specifying no ``max_size`` or ``max_size`` =
+``min_size``) or a dynamic size (when ``max_size`` > ``min_size``). In both
+cases, as soon as the pool is created, it will try to acquire ``min_size``
+connections in the background.
If an attempt to create a connection fails, a new attempt will be made soon
after, using an exponential backoff to increase the time between attempts,
-until a maximum of *reconnect_timeout* is reached. When that happens, the pool
-will call the *reconnect_failed()* function, if provided to the pool, and just
+until a maximum of ``reconnect_timeout`` is reached. When that happens, the pool
+will call the ``reconnect_failed()`` function, if provided to the pool, and just
start a new connection attempt. You can use this function either to send
alerts or to interrupt the program and allow the rest of your infrastructure
to restart it.
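As a sketch, a dynamically sized pool with a ``reconnect_failed()`` callback
might be created as follows (the callback behaviour is only an example):

.. code:: python

    import sys
    from psycopg_pool import ConnectionPool

    def reconnect_failed(pool):
        # invoked if no connection could be created within reconnect_timeout
        print("pool reconnection failed", file=sys.stderr)
        sys.exit(1)

    pool = ConnectionPool(
        "dbname=test",
        min_size=4, max_size=16,       # the pool may grow under load
        max_idle=600,                  # shrink back after 10 minutes unused
        reconnect_timeout=300,
        reconnect_failed=reconnect_failed,
    )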
-If more than *min_size* connections are requested concurrently, new ones are
-created, up to *max_size*. Note that the connections are always created by the
+If more than ``min_size`` connections are requested concurrently, new ones are
+created, up to ``max_size``. Note that the connections are always created by the
background workers, not by the thread asking for the connection: if a client
requests a new connection, and a previous client terminates its job before the
new connection is ready, the waiting client will be served the existing
.. __: https://github.com/brettwooldridge/HikariCP/blob/dev/documents/
Welcome-To-The-Jungle.md
-If a pool grows above *min_size*, but its usage decreases afterwards, a number
+If a pool grows above ``min_size``, but its usage decreases afterwards, a number
of connections are eventually closed: one every time a connection is unused
-after the *max_idle* time specified in the pool constructor.
+after the ``max_idle`` time specified in the pool constructor.
What's the right size for the pool?
-----------------------------------
Switching between using or not using a pool requires some code change, because
the `ConnectionPool` API is different from the normal `~psycopg.connect()`
function and because the pool can perform additional connection configuration
-(in the *configure* parameter) that, if the pool is removed, should be
+(in the ``configure`` parameter) that, if the pool is removed, should be
performed in some different code path of your application.
The `!psycopg_pool` 3.1 package introduces the `NullConnectionPool` class.
is closed immediately and not kept in the pool state.
A null pool is not only a configuration convenience, but can also be used to
-regulate the access to the server by a client program. If *max_size* is set to
-a value greater than 0, the pool will make sure that no more than *max_size*
+regulate the access to the server by a client program. If ``max_size`` is set to
+a value greater than 0, the pool will make sure that no more than ``max_size``
connections are created at any given time. If more clients ask for further
connections, they will be queued and served a connection as soon as a previous
client has finished using it, like for the basic pool. Other mechanisms to
-throttle client requests (such as *timeout* or *max_waiting*) are respected
+throttle client requests (such as ``timeout`` or ``max_waiting``) are respected
too.
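For instance, a hedged sketch of a null pool used as a throttle (the
connection string is only an example):

.. code:: python

    from psycopg_pool import NullConnectionPool

    # No connection is kept around when idle, but at most 10 connections
    # are open at the same time; further clients are queued.
    pool = NullConnectionPool("dbname=test", max_size=10)

    with pool.connection() as conn:
        conn.execute("SELECT 1")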
.. note::
Queued clients will be handed an already established connection, as soon
as a previous client has finished using it (and after the pool has
- returned it to idle state and called *reset()* on it, if necessary).
+ returned it to idle state and called ``reset()`` on it, if necessary).
Because normally (i.e. unless queued) every client will be served a new
connection, the time to obtain the connection is paid by the waiting client;
.. automethod:: connect
:param conninfo: The `connection string`__ (a ``postgresql://`` url or
- a list of ``key=value pairs``) to specify where and
+ a list of ``key=value`` pairs) to specify where and
how to connect.
:param kwargs: Further parameters specifying the connection string.
- They override the ones specified in *conninfo*.
+ They override the ones specified in ``conninfo``.
:param autocommit: If `!True` don't start transactions automatically.
See :ref:`transactions` for details.
:param row_factory: The row factory specifying what type of records
.. __: https://www.postgresql.org/docs/current/libpq-envars.html
.. versionchanged:: 3.1
- Added *prepare_threshold* parameter.
+ added ``prepare_threshold`` parameter.
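For instance, a minimal sketch (the connection string and parameter values
are only examples):

.. code:: python

    import psycopg
    from psycopg.rows import dict_row

    conn = psycopg.Connection.connect(
        "postgresql://localhost/test",  # conninfo
        connect_timeout=10,             # kwargs merged into the conninfo
        autocommit=True,
        row_factory=dict_row,
    )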
.. automethod:: close
:param withhold: Specify the `~ServerCursor.withhold` property of
the server-side cursor created.
:return: A cursor of the class specified by `cursor_factory` (or
- `server_cursor_factory` if *name* is specified).
+ `server_cursor_factory` if ``name`` is specified).
.. note::
within the TPC transaction: in this case a `ProgrammingError`
is raised.
- The *xid* may be either an object returned by the `xid()` method or a
+ The ``xid`` may be either an object returned by the `xid()` method or a
plain string: the latter allows creating a transaction using the
provided string as PostgreSQL transaction id. See also
`tpc_recover()`.
commit is performed. A transaction manager may choose to do this if
only a single resource is participating in the global transaction.
- When called with a transaction ID *xid*, the database commits the
+ When called with a transaction ID ``xid``, the database commits the
given transaction. If an invalid transaction ID is provided, a
`ProgrammingError` will be raised. This form should be called outside
of a transaction, and is intended for use in recovery.
When called with no arguments, `!tpc_rollback()` rolls back a TPC
transaction. It may be called before or after `tpc_prepare()`.
- When called with a transaction ID *xid*, it rolls back the given
+ When called with a transaction ID ``xid``, it rolls back the given
transaction. If an invalid transaction ID is provided, a
`ProgrammingError` is raised. This form should be called outside of a
transaction, and is intended for use in recovery.
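A hedged sketch of the whole two-phase flow (transaction identifiers and the
table used are only examples):

.. code:: python

    import psycopg

    conn = psycopg.connect("dbname=test")
    xid = conn.xid(1, "my-global-transaction", "my-branch")

    conn.tpc_begin(xid)
    conn.execute("INSERT INTO test (num) VALUES (42)")
    conn.tpc_prepare()
    conn.tpc_commit()            # commit the prepared transaction

    # Recovery, e.g. after a crash, from another connection:
    # for x in conn.tpc_recover():
    #     conn.tpc_commit(x)     # or conn.tpc_rollback(x)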
- ``raise Rollback()``: same effect as above
- :samp:`raise Rollback({tx})`: roll back any operation that happened in
- the `Transaction` *tx* (returned by a statement such as :samp:`with
+ the `Transaction` ``tx`` (returned by a statement such as :samp:`with
conn.transaction() as {tx}:`) and all the blocks nested within. The
- program will continue after the *tx* block.
+ program will continue after the ``tx`` block.
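For instance, a hedged sketch rolling back only an inner block (the ``conn``
object and table names are assumed):

.. code:: python

    from psycopg import Rollback

    with conn.transaction():
        conn.execute("INSERT INTO kept (num) VALUES (1)")
        with conn.transaction() as inner_tx:
            conn.execute("INSERT INTO discarded (num) VALUES (2)")
            raise Rollback(inner_tx)   # discard only the inner block
        # execution continues here; the outer transaction is still active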
.. autoclass:: Xid()
to a PostgreSQL database session. They are normally created by the
connection's `~Connection.cursor()` method.
-Using the *name* parameter on `!cursor()` will create a `ServerCursor` or
+Using the ``name`` parameter on `!cursor()` will create a `ServerCursor` or
`AsyncServerCursor`, which can be used to retrieve partial results from a
database.
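For instance, a hedged sketch (table and function names are only examples):

.. code:: python

    with conn.cursor(name="big_data") as cur:
        cur.execute("SELECT * FROM very_large_table")
        for record in cur:   # records are fetched from the server in batches
            process(record)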
.. autoclass:: ServerCursor()
This class also implements a `DBAPI-compliant interface`__. It is created
- by `Connection.cursor()` specifying the *name* parameter. Using this
+ by `Connection.cursor()` specifying the ``name`` parameter. Using this
object results in the creation of an equivalent PostgreSQL cursor in the
server. DBAPI-extension methods (such as `~Cursor.copy()` or
`~Cursor.stream()`) are not implemented on this object: use a normal
format (`!True`) or in text format (`!False`). By default
(`!None`) return data as requested by the cursor's `~Cursor.format`.
- Create a server cursor with given `name` and the *query* in argument.
+ Create a server cursor with the given `name` and the ``query`` passed as argument.
If using :sql:`DECLARE` is not appropriate (for instance because the
cursor is returned by calling a stored procedure) you can avoid using
This class implements a DBAPI-inspired interface as the `AsyncCursor`
does, but wraps a server-side cursor like the `ServerCursor` class. It is
- created by `AsyncConnection.cursor()` specifying the *name* parameter.
+ created by `AsyncConnection.cursor()` specifying the ``name`` parameter.
The following are the methods exposing a different (async) interface from
the `ServerCursor` counterpart, but sharing the same semantics.
This class implements a connection pool serving `~psycopg.Connection`
instances (or subclasses). The constructor has *a lot* of arguments, but
- only *conninfo* and *min_size* are the fundamental ones, all the other
+ only ``conninfo`` and ``min_size`` are the fundamental ones, all the other
arguments have meaningful defaults and can probably be tweaked later, if
required.
:param min_size: The minimum number of connections the pool will hold. The
pool will actively try to create new connections if some
are lost (closed, broken) and will try to never go below
- *min_size*.
+ ``min_size``.
:type min_size: `!int`, default: 4
:param max_size: The maximum number of connections the pool will hold. If
- `!None`, or equal to *min_size*, the pool will not grow or
- shrink. If larger than *min_size*, the pool can grow if
- more than *min_size* connections are requested at the same
+ `!None`, or equal to ``min_size``, the pool will not grow or
+ shrink. If larger than ``min_size``, the pool can grow if
+ more than ``min_size`` connections are requested at the same
time and will shrink back after the extra connections have
- been unused for more than *max_idle* seconds.
+ been unused for more than ``max_idle`` seconds.
:type max_size: `!int`, default: `!None`
:param kwargs: Extra arguments to pass to `!connect()`. Note that this is
:param reset: A callback to reset a connection after it has been returned to
the pool. The connection is guaranteed to be passed to the
- *reset()* function in "idle" state (no transaction). When
- leaving the *reset()* function the connection must be left in
+ ``reset()`` function in "idle" state (no transaction). When
+ leaving the ``reset()`` function the connection must be left in
*idle* state, otherwise it is discarded.
:type reset: `Callable[[Connection], None]`
:param max_idle: Maximum time, in seconds, that a connection can stay unused
in the pool before being closed, and the pool shrunk. This
- only happens to connections more than *min_size*, if
- *max_size* allowed the pool to grow.
+ only happens to connections more than ``min_size``, if
+ ``max_size`` allowed the pool to grow.
:type max_idle: `!float`, default: 10 minutes
:param reconnect_timeout: Maximum time, in seconds, the pool will try to
fails, the pool will try to reconnect a few
times, using an exponential backoff and some
random factor to avoid mass attempts. If repeated
- attempts fail, after *reconnect_timeout* second
+ attempts fail, after ``reconnect_timeout`` seconds
the connection attempt is aborted and the
- *reconnect_failed* callback invoked.
+ ``reconnect_failed()`` callback invoked.
:type reconnect_timeout: `!float`, default: 5 minutes
:param reconnect_failed: Callback invoked if an attempt to create a new
- connection fails for more than *reconnect_timeout*
+ connection fails for more than ``reconnect_timeout``
seconds. The user may decide, for instance, to
terminate the program (executing `sys.exit()`).
By default don't do anything: restart a new
connection attempt (if the number of connections
- fell below *min_size*).
+ fell below ``min_size``).
:type reconnect_failed: ``Callable[[ConnectionPool], None]``
:param num_workers: Number of background worker threads used to maintain the
.. versionchanged:: 3.1
- Added *open* parameter to init method.
+ added ``open`` parameter to init method.
- .. note:: In a future version, the default value for the *open* parameter
+ .. note:: In a future version, the default value for the ``open`` parameter
might be changed to `!False`. If you rely on this behaviour (e.g. if
you don't use the pool as a context manager) you might want to specify
this parameter explicitly.
`!AsyncConnectionPool` has a very similar interface to the `ConnectionPool`
class but its blocking methods are implemented as ``async`` coroutines. It
returns instances of `~psycopg.AsyncConnection`, or of its subclass if
-specified so in the *connection_class* parameter.
+specified so in the ``connection_class`` parameter.
Only the functions with different signature from `!ConnectionPool` are
listed here.
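For instance, a hedged sketch of asynchronous usage (the connection string is
only an example):

.. code:: python

    import asyncio
    from psycopg_pool import AsyncConnectionPool

    async def main():
        async with AsyncConnectionPool("dbname=test") as pool:
            async with pool.connection() as conn:
                await conn.execute("SELECT 1")

    asyncio.run(main())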
:param max_size: If None or 0, create a new connection at every request,
without a maximum. If greater than 0, don't create more
- than *max_size* connections and queue the waiting clients.
+ than ``max_size`` connections and queue the waiting clients.
:type max_size: `!int`, default: None
:param reset: It is only called when there are waiting clients in the
.. autoclass:: Json
.. autoclass:: Jsonb
-Wrappers to signal to convert *obj* to a json or jsonb PostgreSQL value.
+Wrappers to signal that ``obj`` should be converted to a json or jsonb PostgreSQL value.
Any object supported by the underlying `!dumps()` function can be wrapped.
-If a *dumps* function is passed to the wrapper, use it to dump the wrapped
+If a ``dumps`` function is passed to the wrapper, use it to dump the wrapped
object. Otherwise use the function specified by `set_json_dumps()`.
If you need an even more specific dump customisation only for certain objects
(including different configurations in the same query) you can specify a
-*dumps* parameter in the
+``dumps`` parameter in the
`~psycopg.types.json.Json`/`~psycopg.types.json.Jsonb` wrapper, which will
take precedence over what is specified by `!set_json_dumps()`.
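For instance, a hedged sketch (the ``cur`` cursor and table are assumed):

.. code:: python

    import json
    from psycopg.types.json import Jsonb

    obj = {"answer": 42}

    # wrap the object to pass it as a jsonb parameter
    cur.execute("INSERT INTO mytable (data) VALUES (%s)", [Jsonb(obj)])

    # use a custom dumps function only for this value
    cur.execute(
        "INSERT INTO mytable (data) VALUES (%s)",
        [Jsonb(obj, dumps=lambda x: json.dumps(x, sort_keys=True))])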
.. autofunction:: psycopg.types.composite.register_composite
After registering, fetching data of the registered composite will invoke
- *factory* to create corresponding Python objects.
+ ``factory`` to create corresponding Python objects.
If no factory is specified, a `~collections.namedtuple` is created and used
to return data.
- If the *factory* is a type (and not a generic callable), then dumpers for
+ If the ``factory`` is a type (and not a generic callable), then dumpers for
that type are created and registered too, so that passing objects of that
type to a query will adapt them to the registered type.
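For instance, a hedged sketch (the composite type and the ``conn`` object are
assumed):

.. code:: python

    from psycopg.types.composite import CompositeInfo, register_composite

    # assuming a type created with: CREATE TYPE card AS (value int, suit text)
    info = CompositeInfo.fetch(conn, "card")
    register_composite(info, conn)

    # fetching now returns namedtuple-like objects
    rec = conn.execute("SELECT '(8,hearts)'::card").fetchone()[0]
    print(rec.value, rec.suit)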
- Return results from all queries run through `~Cursor.executemany()`; each
result set can be accessed by calling `~Cursor.nextset()` (:ticket:`#164`).
- Add `pq.PGconn.trace()` and related trace functions (:ticket:`#167`).
-- Add *prepare_threshold* parameter to `Connection` init (:ticket:`#200`).
+- Add ``prepare_threshold`` parameter to `Connection` init (:ticket:`#200`).
- Add `Error.pgconn` and `Error.pgresult` attributes (:ticket:`#242`).
- Allow `bytearray`/`memoryview` data too as `Copy.write()` input
(:ticket:`#254`).
- Add :ref:`adapt-multirange` (:ticket:`#75`).
- Add `pq.__build_version__` constant.
- Don't use the extended protocol with COPY (:tickets:`#78, #82`).
-- Add *context* parameter to `~Connection.connect()` (:ticket:`#83`).
+- Add ``context`` parameter to `~Connection.connect()` (:ticket:`#83`).
- Fix selection of dumper by oid after `~Copy.set_types()`.
- Drop `!Connection.client_encoding`. Use `ConnectionInfo.encoding` to read
it, and a :sql:`SET` statement to change it.