By default, connections obtain an adapters map from the global map
exposed as `psycopg.adapters`: changing the content of this object will
affect every connection created afterwards. You may specify a different
- template adapters map using the ``context`` parameter on
+ template adapters map using the `!context` parameter on
`~psycopg.Connection.connect()`.
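For instance, a sketch of a per-connection map derived from the global one
(the connection string is hypothetical, and it is assumed that `!AdaptersMap`
can be built from a template map):

.. code:: python

    import psycopg
    from psycopg.adapt import AdaptersMap

    # Copy the global map, so customisations don't leak into other connections.
    context = AdaptersMap(psycopg.adapters)
    conn = psycopg.connect("dbname=test", context=context)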
.. image:: ../pictures/adapt.svg
`~AsyncCursor` supporting an `asyncio` interface.
The design of the asynchronous objects is pretty much the same as the sync
-ones: in order to use them you will only have to scatter the ``await`` keyword
+ones: in order to use them you will only have to scatter the `!await` keyword
here and there.
.. code:: python
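    # Illustrative sketch (connection string and query are hypothetical): the
    # async API mirrors the sync one, with `await` in front of the blocking calls.
    aconn = await psycopg.AsyncConnection.connect("dbname=test")
    acur = aconn.cursor()
    await acur.execute("SELECT now()")
    print(await acur.fetchone())
    await aconn.close()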
.. _async-with:
-``with`` async connections
---------------------------
+`!with` async connections
+-------------------------
As seen in :ref:`the basic usage <usage>`, connections and cursors can act as
context managers, so you can run:
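.. code:: python

    # An illustrative sketch of such a block (DSN and table are hypothetical).
    with psycopg.connect("dbname=test") as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT * FROM mytable")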
Notifications are received as instances of `Notify`. If you are reserving a
connection only to receive notifications, the simplest way is to consume the
`Connection.notifies` generator. The generator can be stopped using
-``close()``.
+`!close()`.
.. note::
blocking `Connection` is perfectly valid.
The following example will print notifications and stop when one containing
-the ``stop`` message is received.
+the ``"stop"`` message is received.
.. code:: python
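    # Illustrative sketch of the behaviour described above (connection string
    # and channel name are hypothetical).
    import psycopg

    conn = psycopg.connect("dbname=test", autocommit=True)
    conn.execute("LISTEN mychan")
    gen = conn.notifies()
    for notify in gen:
        print(notify)
        if notify.payload == "stop":
            gen.close()
    print("no more notifications")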
processed results, so it uses more memory and resources on the server.
Psycopg allows the use of server-side cursors using the classes `ServerCursor`
-and `AsyncServerCursor`. They are usually created by passing the *name*
+and `AsyncServerCursor`. They are usually created by passing the `!name`
parameter to the `~Connection.cursor()` method (which is why, in
`!psycopg2`, they are usually called *named cursors*). The use of these classes
is similar to their client-side counterparts: their interface is the same, but
closer look at the `PostgreSQL client-server message flow`__.
During normal querying, each statement is transmitted by the client to the
-server as a stream of request messages, terminating with a *Sync* message to
+server as a stream of request messages, terminating with a **Sync** message to
tell it that it should process the messages sent so far. The server will
execute the statement and describe the results back as a stream of messages,
-terminating with a *ReadyForQuery*, telling the client that it may now send a
+terminating with a **ReadyForQuery**, telling the client that it may now send a
new query.
For example, the statement (returning no result):
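.. code:: python

    # A hypothetical parameterised statement, used only as illustration.
    conn.execute("INSERT INTO mytable (data) VALUES (%s)", ["hello"])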
If you want to avoid starting to connect to the database at import time, and
want to wait for the application to be ready, you can create the pool using
-``open=False``, and call the `~ConnectionPool.open()` and
+`!open=False`, and call the `~ConnectionPool.open()` and
`~ConnectionPool.close()` methods when the conditions are right. Certain
frameworks provide callbacks triggered when the program is started and stopped
(for instance `FastAPI startup/shutdown events`__): they are perfect to
# the pool is now closed
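A minimal sketch of this pattern (the DSN and the startup/shutdown hooks are
hypothetical):

.. code:: python

    from psycopg_pool import ConnectionPool

    pool = ConnectionPool("dbname=test", open=False)

    def on_startup():
        pool.open()

    def on_shutdown():
        pool.close()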
When the pool is open, the pool's background workers start creating the
-requested ``min_size`` connections, while the constructor (or the `!open()`
+requested `!min_size` connections, while the constructor (or the `!open()`
method) returns immediately. This allows the program some leeway to start
before the target database is up and running. However, if your application is
misconfigured, or the network is down, it means that the program will be able
----------------------
The pool background workers create connections according to the parameters
-``conninfo``, ``kwargs``, and ``connection_class`` passed to `ConnectionPool`
+`!conninfo`, `!kwargs`, and `!connection_class` passed to `ConnectionPool`
constructor, invoking something like :samp:`{connection_class}({conninfo},
**{kwargs})`. Once a connection is created it is also passed to the
-``configure()`` callback, if provided, after which it is put in the pool (or
+`!configure()` callback, if provided, after which it is put in the pool (or
passed to a client requesting it, if someone is already knocking at the door).
-If a connection expires (it passes ``max_lifetime``), or is returned to the pool
+If a connection expires (it passes `!max_lifetime`), or is returned to the pool
in broken state, or is found closed by `~ConnectionPool.check()`, then the
pool will dispose of it and will start a new connection attempt in the
background.
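A sketch of a possible `!configure()` callback (the session setting used here
is only an illustration):

.. code:: python

    def configure(conn):
        # Hypothetical per-connection setup; leave the connection idle on exit.
        conn.execute("SET timezone TO 'UTC'")
        conn.commit()

    pool = ConnectionPool("dbname=test", configure=configure)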
ones available in the pool are requested, the requesting threads are queued
and are served a connection as soon as one is available, either because
another client has finished using it or because the pool is allowed to grow
-(when ``max_size`` > ``min_size``) and a new connection is ready.
+(when `!max_size` > `!min_size`) and a new connection is ready.
The main way to use the pool is to obtain a connection using the
`~ConnectionPool.connection()` context, which returns a `~psycopg.Connection`
committed, or rolled back if the context is exited with an exception.
At the end of the block the connection is returned to the pool and shouldn't
-be used anymore by the code which obtained it. If a ``reset()`` function is
+be used anymore by the code which obtained it. If a `!reset()` function is
specified in the pool constructor, it is called on the connection before
-returning it to the pool. Note that the ``reset()`` function is called in a
+returning it to the pool. Note that the `!reset()` function is called in a
worker thread, so that the thread which used the connection can keep its
execution without being slowed down by it.
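For instance (a minimal sketch, table name hypothetical):

.. code:: python

    with pool.connection() as conn:
        conn.execute("UPDATE mytable SET data = %s WHERE id = %s", ["hello", 1])
    # At this point the connection has been returned to the pool.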
Pool connection and sizing
--------------------------
-A pool can have a fixed size (specifying no ``max_size`` or ``max_size`` =
-``min_size``) or a dynamic size (when ``max_size`` > ``min_size``). In both
-cases, as soon as the pool is created, it will try to acquire ``min_size``
+A pool can have a fixed size (specifying no `!max_size` or `!max_size` =
+`!min_size`) or a dynamic size (when `!max_size` > `!min_size`). In both
+cases, as soon as the pool is created, it will try to acquire `!min_size`
connections in the background.
If an attempt to create a connection fails, a new attempt will be made soon
after, using an exponential backoff to increase the time between attempts,
-until a maximum of ``reconnect_timeout`` is reached. When that happens, the pool
-will call the ``reconnect_failed()`` function, if provided to the pool, and just
+until a maximum of `!reconnect_timeout` is reached. When that happens, the pool
+will call the `!reconnect_failed()` function, if provided to the pool, and just
start a new connection attempt. You can use this function either to send
alerts or to interrupt the program and allow the rest of your infrastructure
to restart it.
-If more than ``min_size`` connections are requested concurrently, new ones are
-created, up to ``max_size``. Note that the connections are always created by the
+If more than `!min_size` connections are requested concurrently, new ones are
+created, up to `!max_size`. Note that the connections are always created by the
background workers, not by the thread asking for the connection: if a client
requests a new connection, and a previous client terminates its job before the
new connection is ready, the waiting client will be served the existing
.. __: https://github.com/brettwooldridge/HikariCP/blob/dev/documents/
Welcome-To-The-Jungle.md
-If a pool grows above ``min_size``, but its usage decreases afterwards, a number
+If a pool grows above `!min_size`, but its usage decreases afterwards, a number
of connections are eventually closed: one every time a connection is unused
-after the ``max_idle`` time specified in the pool constructor.
+after the `!max_idle` time specified in the pool constructor.
What's the right size for the pool?
Switching between using or not using a pool requires some code change, because
the `ConnectionPool` API is different from the normal `~psycopg.connect()`
function and because the pool can perform additional connection configuration
-(in the ``configure`` parameter) that, if the pool is removed, should be
+(in the `!configure` parameter) that, if the pool is removed, should be
performed in some different code path of your application.
The `!psycopg_pool` 3.1 package introduces the `NullConnectionPool` class.
is closed immediately and not kept in the pool state.
A null pool is not only a configuration convenience, but can also be used to
-regulate the access to the server by a client program. If ``max_size`` is set to
-a value greater than 0, the pool will make sure that no more than ``max_size``
+regulate the access to the server by a client program. If `!max_size` is set to
+a value greater than 0, the pool will make sure that no more than `!max_size`
connections are created at any given time. If more clients ask for further
connections, they will be queued and served a connection as soon as a previous
client has finished using it, like for the basic pool. Other mechanisms to
-throttle client requests (such as ``timeout`` or ``max_waiting``) are respected
+throttle client requests (such as `!timeout` or `!max_waiting`) are respected
too.
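A sketch of such a throttling configuration (all the values are illustrative):

.. code:: python

    from psycopg_pool import NullConnectionPool

    pool = NullConnectionPool("dbname=test", max_size=10, max_waiting=100)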
.. note::
Queued clients will be handed an already established connection, as soon
as a previous client has finished using it (and after the pool has
- returned it to idle state and called ``reset()`` on it, if necessary).
+ returned it to idle state and called `!reset()` on it, if necessary).
Because normally (i.e. unless queued) every client will be served a new
connection, the time to obtain the connection is paid by the waiting client;
Statement preparation can be controlled in several ways:
-- You can decide to prepare a query immediately by passing ``prepare=True`` to
+- You can decide to prepare a query immediately by passing `!prepare=True` to
`Connection.execute()` or `Cursor.execute()`. The query is prepared, if it
wasn't already, and executed as prepared from its first use.
-- Conversely, passing ``prepare=False`` to `!execute()` will avoid to prepare
+- Conversely, passing `!prepare=False` to `!execute()` will avoid preparing
the query, regardless of the number of times it is executed. The default for
the parameter is `!None`, meaning that the query is prepared if the
conditions described above are met.
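For example, the first option above might look like this (cursor and query are
illustrative):

.. code:: python

    cur.execute("SELECT * FROM mytable WHERE id = %s", [42], prepare=True)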
:param conninfo: The `connection string`__ (a ``postgresql://`` url or
a list of ``key=value`` pairs) to specify where and how to connect.
:param kwargs: Further parameters specifying the connection string.
- They override the ones specified in ``conninfo``.
+ They override the ones specified in `!conninfo`.
:param autocommit: If `!True` don't start transactions automatically.
See :ref:`transactions` for details.
:param row_factory: The row factory specifying what type of records
.. __: https://www.postgresql.org/docs/current/libpq-envars.html
.. versionchanged:: 3.1
- added ``prepare_threshold`` and ``cursor_factory`` parameters.
+ added `!prepare_threshold` and `!cursor_factory` parameters.
.. automethod:: close
:param withhold: Specify the `~ServerCursor.withhold` property of
the server-side cursor created.
:return: A cursor of the class specified by `cursor_factory` (or
- `server_cursor_factory` if ``name`` is specified).
+ `server_cursor_factory` if `!name` is specified).
.. note::
.. autoattribute:: autocommit
The property is writable for sync connections, read-only for async
- ones: you should call ``await`` `~AsyncConnection.set_autocommit`
+ ones: you should call `!await` `~AsyncConnection.set_autocommit`
:samp:`({value})` instead.
The following three properties control the characteristics of new
within the TPC transaction: in this case a `ProgrammingError`
is raised.
- The ``xid`` may be either an object returned by the `xid()` method or a
+ The `!xid` may be either an object returned by the `xid()` method or a
plain string: the latter allows creating a transaction using the
provided string as PostgreSQL transaction id. See also
`tpc_recover()`.
commit is performed. A transaction manager may choose to do this if
only a single resource is participating in the global transaction.
- When called with a transaction ID ``xid``, the database commits the
+ When called with a transaction ID `!xid`, the database commits the
given transaction. If an invalid transaction ID is provided, a
`ProgrammingError` will be raised. This form should be called outside
of a transaction, and is intended for use in recovery.
When called with no arguments, `!tpc_rollback()` rolls back a TPC
transaction. It may be called before or after `tpc_prepare()`.
- When called with a transaction ID ``xid``, it rolls back the given
+ When called with a transaction ID `!xid`, it rolls back the given
transaction. If an invalid transaction ID is provided, a
`ProgrammingError` is raised. This form should be called outside of a
transaction, and is intended for use in recovery.
.. versionchanged:: 3.1
Automatically resolve domain names asynchronously. In previous
- versions, name resolution blocks, unless the ``hostaddr``
+ versions, name resolution blocks, unless the `!hostaddr`
parameter is specified, or the `~psycopg._dns.resolve_hostaddr_async()`
function is used.
.. autoclass:: Copy()
- The object is normally returned by ``with`` `Cursor.copy()`.
+ The object is normally returned by `!with` `Cursor.copy()`.
.. automethod:: write_row
Writers instances can be used passing them to the cursor
`~psycopg.Cursor.copy()` method or to the `~psycopg.Copy` constructor, as the
-``writer`` argument.
+`!writer` argument.
.. autoclass:: Writer
to a PostgreSQL database session. They are normally created by the
connection's `~Connection.cursor()` method.
-Using the ``name`` parameter on `!cursor()` will create a `ServerCursor` or
+Using the `!name` parameter on `!cursor()` will create a `ServerCursor` or
`AsyncServerCursor`, which can be used to retrieve partial results from a
database.
This class implements a `DBAPI-compliant interface`__. It is what the
classic `Connection.cursor()` method returns. `AsyncConnection.cursor()`
will instead create `AsyncCursor` objects, which have the same set of
- method but expose an `asyncio` interface and require ``async`` and
- ``await`` keywords to operate.
+ methods but expose an `asyncio` interface and require `!async` and
+ `!await` keywords to operate.
.. __: dbapi-cursor_
.. _dbapi-cursor: https://www.python.org/dev/peps/pep-0249/#cursor-objects
If the queries return data you want to read (e.g. when executing an
:sql:`INSERT ... RETURNING` or a :sql:`SELECT` with a side-effect),
- you can specify ``returning=True``; the results will be available in
+ you can specify `!returning=True`; the results will be available in
the cursor's state and can be read using `fetchone()` and similar
methods. Each input parameter will produce a separate result set: use
`nextset()` to read the results of the queries after the first one.
.. versionchanged:: 3.1
- - Added ``returning`` parameter to receive query results.
+ - Added `!returning` parameter to receive query results.
- Performance optimised by making use of the pipeline mode, when
using libpq 14 or newer.
.. attribute:: format
The format of the data returned by the queries. It can be selected
- initially e.g. specifying `Connection.cursor`\ ``(binary=True)`` and
+ initially e.g. specifying `Connection.cursor`\ `!(binary=True)` and
changed during the cursor's lifetime. It is also possible to override
the value for single queries, e.g. specifying `execute`\
- ``(binary=True)``.
+ `!(binary=True)`.
:type: `pq.Format`
:default: `~pq.Format.TEXT`
.. autoclass:: ServerCursor
This class also implements a `DBAPI-compliant interface`__. It is created
- by `Connection.cursor()` specifying the ``name`` parameter. Using this
+ by `Connection.cursor()` specifying the `!name` parameter. Using this
object results in the creation of an equivalent PostgreSQL cursor in the
server. DBAPI-extension methods (such as `~Cursor.copy()` or
`~Cursor.stream()`) are not implemented on this object: use a normal
format (`!True`) or in text format (`!False`). By default
(`!None`) return data as requested by the cursor's `~Cursor.format`.
- Create a server cursor with given `name` and the ``query`` in argument.
+ Create a server cursor with the given `!name` and `!query`.
If using :sql:`DECLARE` is not appropriate (for instance because the
cursor is returned by calling a stored procedure) you can avoid using
This class implements a DBAPI-inspired interface as the `AsyncCursor`
does, but wraps a server-side cursor like the `ServerCursor` class. It is
- created by `AsyncConnection.cursor()` specifying the ``name`` parameter.
+ created by `AsyncConnection.cursor()` specifying the `!name` parameter.
The following are the methods exposing a different (async) interface from
the `ServerCursor` counterpart, but sharing the same semantics.
.. autoexception:: Rollback
- It can be used as
+ It can be used as:
- ``raise Rollback``: roll back the operation that happened in the current
transaction block and continue the program after the block.
- ``raise Rollback()``: same effect as above
- :samp:`raise Rollback({tx})`: roll back any operation that happened in
- the `Transaction` ``tx`` (returned by a statement such as :samp:`with
+ the `Transaction` `!tx` (returned by a statement such as :samp:`with
conn.transaction() as {tx}:` and all the blocks nested within. The
- program will continue after the ``tx`` block.
+ program will continue after the `!tx` block.
Two-Phase Commit related objects
This class implements a connection pool serving `~psycopg.Connection`
instances (or subclasses). The constructor has *a lot* of arguments, but
- only ``conninfo`` and ``min_size`` are the fundamental ones, all the other
+ only `!conninfo` and `!min_size` are the fundamental ones, all the other
arguments have meaningful defaults and can probably be tweaked later, if
required.
:param min_size: The minimum number of connections the pool will hold. The
pool will actively try to create new connections if some
are lost (closed, broken) and will try to never go below
- ``min_size``.
+ `!min_size`.
:type min_size: `!int`, default: 4
:param max_size: The maximum number of connections the pool will hold. If
- `!None`, or equal to ``min_size``, the pool will not grow or
- shrink. If larger than ``min_size``, the pool can grow if
- more than ``min_size`` connections are requested at the same
+ `!None`, or equal to `!min_size`, the pool will not grow or
+ shrink. If larger than `!min_size`, the pool can grow if
+ more than `!min_size` connections are requested at the same
time and will shrink back after the extra connections have
- been unused for more than ``max_idle`` seconds.
+ been unused for more than `!max_idle` seconds.
:type max_size: `!int`, default: `!None`
:param kwargs: Extra arguments to pass to `!connect()`. Note that this is
:param reset: A callback to reset a connection after it has been returned to
the pool. The connection is guaranteed to be passed to the
- ``reset()`` function in "idle" state (no transaction). When
- leaving the ``reset()`` function the connection must be left in
+ `!reset()` function in "idle" state (no transaction). When
+ leaving the `!reset()` function the connection must be left in
*idle* state, otherwise it is discarded.
:type reset: `Callable[[Connection], None]`
:param timeout: The default maximum time in seconds that a client can wait
to receive a connection from the pool (using `connection()`
or `getconn()`). Note that these methods allow overriding
- the *timeout* default.
+ the `!timeout` default.
:type timeout: `!float`, default: 30 seconds
:param max_waiting: Maximum number of requests that can be queued to the
:param max_idle: Maximum time, in seconds, that a connection can stay unused
in the pool before being closed, and the pool shrunk. This
- only happens to connections more than ``min_size``, if
- ``max_size`` allowed the pool to grow.
+ only happens to connections more than `!min_size`, if
+ `!max_size` allowed the pool to grow.
:type max_idle: `!float`, default: 10 minutes
:param reconnect_timeout: Maximum time, in seconds, the pool will try to
fails, the pool will try to reconnect a few
times, using an exponential backoff and some
random factor to avoid mass attempts. If repeated
- attempts fail, after ``reconnect_timeout`` second
+ attempts fail, after `!reconnect_timeout` seconds
the connection attempt is aborted and the
- ``reconnect_failed()`` callback invoked.
+ `!reconnect_failed()` callback invoked.
:type reconnect_timeout: `!float`, default: 5 minutes
:param reconnect_failed: Callback invoked if an attempt to create a new
- connection fails for more than ``reconnect_timeout``
+ connection fails for more than `!reconnect_timeout`
seconds. The user may decide, for instance, to
terminate the program (executing `sys.exit()`).
By default don't do anything: restart a new
connection attempt (if the number of connection
- fell below ``min_size``).
+ fell below `!min_size`).
:type reconnect_failed: ``Callable[[ConnectionPool], None]``
:param num_workers: Number of background worker threads used to maintain the
.. versionchanged:: 3.1
- added ``open`` parameter to init method.
+ added `!open` parameter to init method.
- .. note:: In a future version, the default value for the ``open`` parameter
+ .. note:: In a future version, the default value for the `!open` parameter
might be changed to `!False`. If you rely on this behaviour (e.g. if
you don't use the pool as a context manager) you might want to specify
this parameter explicitly.
--------------------------------
`!AsyncConnectionPool` has a very similar interface to the `ConnectionPool`
-class but its blocking methods are implemented as ``async`` coroutines. It
+class but its blocking methods are implemented as `!async` coroutines. It
returns instances of `~psycopg.AsyncConnection`, or of its subclass if
-specified so in the ``connection_class`` parameter.
+specified so in the `!connection_class` parameter.
Only the functions with different signature from `!ConnectionPool` are
listed here.
:param max_size: If None or 0, create a new connection at every request,
without a maximum. If greater than 0, don't create more
- than ``max_size`` connections and queue the waiting clients.
+ than `!max_size` connections and queue the waiting clients.
:type max_size: `!int`, default: None
:param reset: It is only called when there are waiting clients in the
.. autofunction:: class_row
This is not a row factory, but rather a factory of row factories.
- Specifying ``row_factory=class_row(MyClass)`` will create connections and
+ Specifying `!row_factory=class_row(MyClass)` will create connections and
cursors returning `!MyClass` objects on fetch.
Example::
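    # Illustrative sketch: any class accepting column names as keyword
    # arguments will do; a dataclass is used here for brevity.
    from dataclasses import dataclass

    import psycopg
    from psycopg.rows import class_row

    @dataclass
    class Person:
        first_name: str
        last_name: str

    conn = psycopg.connect()
    cur = conn.cursor(row_factory=class_row(Person))
    cur.execute("SELECT 'John' AS first_name, 'Smith' AS last_name").fetchone()
    # Person(first_name='John', last_name='Smith')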
`~psycopg.Connection` or `~psycopg.Cursor` (e.g. `!conn.adapters.types`).
The global registry, from which the others inherit from, is available as
- `psycopg.adapters`\ ``.types``.
+ `psycopg.adapters`\ `!.types`.
.. automethod:: __getitem__
.. autoclass:: Json
.. autoclass:: Jsonb
-Wrappers to signal to convert ``obj`` to a json or jsonb PostgreSQL value.
+Wrappers to signal that `!obj` should be converted to a json or jsonb PostgreSQL value.
Any object supported by the underlying `!dumps()` function can be wrapped.
-If a ``dumps`` function is passed to the wrapper, use it to dump the wrapped
+If a `!dumps` function is passed to the wrapper, use it to dump the wrapped
object. Otherwise use the function specified by `set_json_dumps()`.
If you need an even more specific dump customisation only for certain objects
(including different configurations in the same query) you can specify a
-``dumps`` parameter in the
+`!dumps` parameter in the
`~psycopg.types.json.Json`/`~psycopg.types.json.Jsonb` wrapper, which will
take precedence over what is specified by `!set_json_dumps()`.
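A sketch of this per-object customisation (the cursor, the table, and the
custom `!dumps` function are hypothetical):

.. code:: python

    import json

    from psycopg.types.json import Jsonb

    def my_dumps(obj):
        # Hypothetical custom serialisation.
        return json.dumps(obj, default=str)

    cur.execute("INSERT INTO mytable (jdata) VALUES (%s)",
                [Jsonb({"answer": 42}, dumps=my_dumps)])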
Copy is supported using the `Cursor.copy()` method, passing it a query of the
form :sql:`COPY ... FROM STDIN` or :sql:`COPY ... TO STDOUT`, and managing the
-resulting `Copy` object in a ``with`` block:
+resulting `Copy` object in a `!with` block:
.. code:: python
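    # Illustrative sketch (file and table names are hypothetical): send a file
    # to the server block by block.
    with open("data.out") as f:
        with cur.copy("COPY mytable (col1, col2) FROM STDIN") as copy:
            while True:
                block = f.read(8192)
                if not block:
                    break
                copy.write(block)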
iterable (a list of tuples, or any iterable of sequences): the Python values
are adapted as they would be in normal querying. To perform such an operation use
a :sql:`COPY ... FROM STDIN` with `Cursor.copy()` and use `~Copy.write_row()`
-on the resulting object in a ``with`` block. On exiting the block the
+on the resulting object in a `!with` block. On exiting the block the
operation will be concluded:
.. code:: python
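    # Illustrative sketch (table name is hypothetical): Python values are
    # adapted row by row.
    records = [(10, 20, "hello"), (40, None, "world")]

    with cur.copy("COPY sample (col1, col2, col3) FROM STDIN") as copy:
        for record in records:
            copy.write_row(record)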
.. _from-psycopg2:
-Differences from ``psycopg2``
-=============================
+Differences from `!psycopg2`
+============================
Psycopg 3 uses the common DBAPI structure of many other database adapters and
tries to behave as close as possible to `!psycopg2`. There are however a few
In Psycopg 3 instead, all the results are available. After running the query,
the first result will be readily available in the cursor and can be consumed
-using the usual ``fetch*()`` methods. In order to access the following
+using the usual `!fetch*()` methods. In order to access the following
results, you can use the `Cursor.nextset()` method::
>>> cur_pg3.execute("SELECT 1; SELECT 2")
>>> conn.execute("SELECT * FROM foo WHERE id = ANY(%s)", [[10,20,30]])
-Note that `ANY()` can be used with ``psycopg2`` too, and has the advantage of
+Note that `ANY()` can be used with `!psycopg2` too, and has the advantage of
accepting an empty list of values as argument too, which is not supported by
the :sql:`IN` operator.
.. _diff-with:
-``with`` connection
--------------------
+`!with` connection
+------------------
In `!psycopg2`, using the syntax :ref:`with connection <pg2:with>`,
only the transaction is closed, not the connection. This behaviour is
.. _diff-callproc:
-``callproc()`` is gone
-----------------------
+`!callproc()` is gone
+---------------------
`cursor.callproc()` is not implemented. The method has simplistic semantics
which don't account for PostgreSQL positional parameters, procedures,
.. _diff-client-encoding:
-``client_encoding`` is gone
----------------------------
+`!client_encoding` is gone
+--------------------------
Psycopg automatically uses the database client encoding to decode data to
Unicode strings. Use `ConnectionInfo.encoding` if you need to read the
encoding. You can select an encoding at connection time using the
-``client_encoding`` connection parameter and you can change the encoding of a
+`!client_encoding` connection parameter and you can change the encoding of a
connection by running a :sql:`SET client_encoding` statement... But why would
you?
for.
-``execute()`` arguments
------------------------
+`!execute()` arguments
+----------------------
Passing parameters to a SQL statement happens in functions such as
`Cursor.execute()` by using ``%s`` placeholders in the SQL statement, and
Because not every PostgreSQL type supports binary output, by default, the data
will be returned in text format. In order to return data in binary format you
-can create the cursor using `Connection.cursor`\ ``(binary=True)`` or execute
-the query using `Cursor.execute`\ ``(binary=True)``. A case in which
+can create the cursor using `Connection.cursor`\ `!(binary=True)` or execute
+the query using `Cursor.execute`\ `!(binary=True)`. A case in which
requesting binary results is a clear winner is when you have large binary data
in the database, such as images::
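    # Illustrative sketch (table, column, and id are hypothetical).
    cur.execute(
        "SELECT image_data FROM images WHERE id = %s", [image_id], binary=True
    )
    data = cur.fetchone()[0]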
.. autofunction:: psycopg.types.composite.register_composite
After registering, fetching data of the registered composite will invoke
- ``factory`` to create corresponding Python objects.
+ `!factory` to create corresponding Python objects.
If no factory is specified, a `~collections.namedtuple` is created and used
to return data.
- If the ``factory`` is a type (and not a generic callable), then dumpers for
+ If the `!factory` is a type (and not a generic callable), then dumpers for
that type are created and registered too, so that passing objects of that
type to a query will adapt them to the registered type.
.. __: https://www.postgresql.org/docs/current/static/functions-range.html#RANGE-OPERATORS-TABLE
- `!Range` objects are immutable, hashable, and support the ``in`` operator
+ `!Range` objects are immutable, hashable, and support the `!in` operator
(checking if an element is within the range). They can be tested for
equivalence. Empty ranges evaluate to `!False` in a boolean context,
nonempty ones evaluate to `!True`.
--------------------
A more transparent way to make sure that transactions are finalised at the
-right time is to use ``with`` `Connection.transaction()` to create a
+right time is to use `!with` `Connection.transaction()` to create a
transaction context. When the context is entered, a transaction is started;
when leaving the context the transaction is committed, or it is rolled back if
an exception is raised inside the block.
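For instance (a minimal sketch, table name hypothetical):

.. code:: python

    with conn.transaction():
        cur.execute("INSERT INTO mytable (data) VALUES (%s)", ["hello"])
    # Committed here, or rolled back if an exception was raised in the block.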
developers is to:
- use a connection block: ``with psycopg.connect(...) as conn``;
- - use an autocommit connection, either passing ``autocommit=True`` as
+ - use an autocommit connection, either passing `!autocommit=True` as
`!connect()` parameter or setting the attribute ``conn.autocommit =
True``;
- use `!with conn.transaction()` blocks to manage transactions only where
If `!unreliable_operation()` causes an error, including an operation causing a
database error, all its changes will be reverted. The exception bubbles up
-outside the block: in the example it is intercepted by the ``try`` so that the
+outside the block: in the example it is intercepted by the `!try` so that the
loop can complete. The outermost block is unaffected (unless other errors
happen there).
- retrieve data from the database, iterating on the cursor or using methods
such as `~Cursor.fetchone()`, `~Cursor.fetchmany()`, `~Cursor.fetchall()`.
-- Using these objects as context managers (i.e. using ``with``) will make sure
+- Using these objects as context managers (i.e. using `!with`) will make sure
to close them and free their resources at the end of the block (notice that
:ref:`this is different from psycopg2 <diff-with>`).
.. index::
- pair: Connection; ``with``
+ pair: Connection; `!with`
.. _with-connection:
.. note::
This behaviour is not what `!psycopg2` does: in `!psycopg2` :ref:`there is
no final close() <pg2:with>` and the connection can be used in several
- ``with`` statements to manage different transactions. This behaviour has
+ `!with` statements to manage different transactions. This behaviour has
been considered non-standard and surprising so it has been replaced by the
more explicit `~Connection.transaction()` block.
.. warning::
If a connection is just left to go out of scope, the way it will behave
- with or without the use of a ``with`` block is different:
+ with or without the use of a `!with` block is different:
- - if the connection is used without a ``with`` block, the server will find
+ - if the connection is used without a `!with` block, the server will find
a connection closed INTRANS and roll back the current transaction;
- - if the connection is used with a ``with`` block, there will be an
+ - if the connection is used with a `!with` block, there will be an
explicit COMMIT and the operations will be finalised.
- You should use a ``with`` block when your intention is just to execute a
+ You should use a `!with` block when your intention is just to execute a
    set of operations and then commit the result, which is the most usual
    thing to do with a connection. If your connection life cycle and
    transaction pattern is different, and you want more control over it, the use
- without ``with`` might be more convenient.
+ without `!with` might be more convenient.
See :ref:`transactions` for more information.
------------------
- Add :ref:`null-pool` (:ticket:`#148`).
-- Add `ConnectionPool.open()` and *open* parameter to the pool init
+- Add `ConnectionPool.open()` and ``open`` parameter to the pool init
(:ticket:`#151`).
- Drop support for Python 3.6.
self, cls: Union[type, str, None], dumper: Type[Dumper]
) -> None:
"""
- Configure the context to use *dumper* to convert object of type *cls*.
+ Configure the context to use `!dumper` to convert objects of type `!cls`.
If two dumpers with different `~Dumper.format` are registered for the
same type, the last one registered will be chosen when the query
"`~PyFormat.AUTO`" placeholder).
:param cls: The type to manage.
- :param dumper: The dumper to register for *cls*.
+ :param dumper: The dumper to register for `!cls`.
- If *cls* is specified as string it will be lazy-loaded, so that it
+ If `!cls` is specified as string it will be lazy-loaded, so that it
will be possible to register it without importing it before. In this
case it should be the fully qualified name of the object (e.g.
``"uuid.UUID"``).
- If *cls* is None, only use the dumper when looking up using
+ If `!cls` is None, only use the dumper when looking up using
`get_dumper_by_oid()`, which happens when we know the Postgres type to
adapt to, but not the Python type that will be adapted (e.g. in COPY
after using `~psycopg.Copy.set_types()`).
def register_loader(self, oid: Union[int, str], loader: Type["Loader"]) -> None:
"""
- Configure the context to use *loader* to convert data of oid *oid*.
+ Configure the context to use `!loader` to convert data of oid `!oid`.
:param oid: The PostgreSQL OID or type name to manage.
- :param loader: The loar to register for *oid*.
+ :param loader: The loader to register for `!oid`.
If `oid` is specified as string, it refers to a type name, which is
looked up in the `types` registry.
:param cls: The class to adapt.
:param format: The format to dump to. If `~psycopg.adapt.PyFormat.AUTO`,
- use the last one of the dumpers registered on *cls*.
+ use the last one of the dumpers registered on `!cls`.
"""
try:
dmap = self._dumpers[format]
"""
Reduce a string to a valid Python identifier.
- Replace all non-valid chars with '_' and prefix the value with *prefix* if
+ Replace all non-valid chars with '_' and prefix the value with `!prefix` if
the first letter is an '_'.
"""
if not s.isidentifier():
def get_dumper(self, obj: Any, format: PyFormat) -> "Dumper":
"""
- Return a Dumper instance to dump *obj*.
+ Return a Dumper instance to dump `!obj`.
"""
# Normally, the type of the object dictates how to dump it
key = type(obj)
def register(self, context: Optional[AdaptContext] = None) -> None:
"""
- Register the type information, globally or in the specified *context*.
+ Register the type information, globally or in the specified `!context`.
"""
if context:
types = context.adapters.types
"""
def _added(self, registry: "TypesRegistry") -> None:
- """Method called by the *registry* when the object is added there."""
+ """Method called by the `!registry` when the object is added there."""
pass
supported are `~psycopg.types.range.RangeInfo` and
`~psycopg.types.multirange.MultirangeInfo`.
:param subtype: The name or OID of the subtype of the element to look for.
- :return: The `!TypeInfo` object of class *cls* whose subtype is
- *subtype*. `!None` if the element or its range are not found.
+ :return: The `!TypeInfo` object of class `!cls` whose subtype is
+ `!subtype`. `!None` if the element or its range are not found.
"""
try:
info = self[subtype]
class Dumper(Protocol):
"""
- Convert Python objects of type *cls* to PostgreSQL representation.
+ Convert Python objects of type `!cls` to PostgreSQL representation.
"""
format: pq.Format
...
def dump(self, obj: Any) -> Buffer:
- """Convert the object *obj* to PostgreSQL representation.
+ """Convert the object `!obj` to PostgreSQL representation.
:param obj: the object to convert.
"""
...
def quote(self, obj: Any) -> Buffer:
- """Convert the object *obj* to escaped representation.
+ """Convert the object `!obj` to escaped representation.
:param obj: the object to convert.
"""
...
def get_key(self, obj: Any, format: PyFormat) -> DumperKey:
- """Return an alternative key to upgrade the dumper to represent *obj*.
+ """Return an alternative key to upgrade the dumper to represent `!obj`.
:param obj: The object to convert
:param format: The format to convert to
In these cases, a dumper can implement `!get_key()` and return a new
class, or sequence of classes, that can be used to identify the same
dumper again. If the mechanism is not needed, the method should return
- the same *cls* object passed in the constructor.
+ the same `!cls` object passed in the constructor.
If a dumper implements `get_key()` it should also implement
`upgrade()`.
...
def upgrade(self, obj: Any, format: PyFormat) -> "Dumper":
- """Return a new dumper to manage *obj*.
+ """Return a new dumper to manage `!obj`.
:param obj: The object to convert
:param format: The format to convert to
Once `Transformer.get_dumper()` has been notified by `get_key()` that
- this Dumper class cannot handle *obj* itself, it will invoke
+ this Dumper class cannot handle `!obj` itself, it will invoke
`!upgrade()`, which should return a new `Dumper` instance, which will
be reused for every object for which `!get_key()` returns the same
result.
class Loader(Protocol):
"""
- Convert PostgreSQL values with type OID *oid* to Python objects.
+ Convert PostgreSQL values with type OID `!oid` to Python objects.
"""
format: pq.Format
class Dumper(abc.Dumper, ABC):
"""
- Convert Python object of the type *cls* to PostgreSQL representation.
+ Convert Python object of the type `!cls` to PostgreSQL representation.
"""
oid: int = 0
Implementation of the `~psycopg.abc.Dumper.get_key()` member of the
`~psycopg.abc.Dumper` protocol. Look at its definition for details.
- This implementation returns the *cls* passed in the constructor.
+ This implementation returns the `!cls` passed in the constructor.
Subclasses needing to specialise the PostgreSQL type according to the
*value* of the object dumped (not only according to its type)
should override this class.
Implementation of the `~psycopg.abc.Dumper.upgrade()` member of the
`~psycopg.abc.Dumper` protocol. Look at its definition for details.
- This implementation just returns *self*. If a subclass implements
+ This implementation just returns `!self`. If a subclass implements
`get_key()` it should probably override `!upgrade()` too.
"""
return self
class Loader(abc.Loader, ABC):
"""
- Convert PostgreSQL values with type OID *oid* to Python objects.
+ Convert PostgreSQL values with type OID `!oid` to Python objects.
"""
format: pq.Format = pq.Format.TEXT
`!True` if the connection was interrupted.
A broken connection is always `closed`, but wasn't closed in a clean
- way, such as using `close()` or a ``with`` block.
+ way, such as using `close()` or a `!with` block.
"""
return self.pgconn.status == BAD and not self._closed
def tpc_begin(self, xid: Union[Xid, str]) -> None:
"""
- Begin a TPC transaction with the given transaction ID *xid*.
+ Begin a TPC transaction with the given transaction ID `!xid`.
"""
with self.lock:
self.wait(self._tpc_begin_gen(xid))
Merge a string and keyword params into a single conninfo string.
:param conninfo: A `connection string`__ as accepted by PostgreSQL.
- :param kwargs: Parameters overriding the ones specified in *conninfo*.
- :return: A connection string valid for PostgreSQL, with the *kwargs*
+ :param kwargs: Parameters overriding the ones specified in `!conninfo`.
+ :return: A connection string valid for PostgreSQL, with the `!kwargs`
parameters merged.
Raise `~psycopg.ProgrammingError` if the input doesn't make a valid
def conninfo_to_dict(conninfo: str = "", **kwargs: Any) -> Dict[str, Any]:
"""
- Convert the *conninfo* string into a dictionary of parameters.
+ Convert the `!conninfo` string into a dictionary of parameters.
:param conninfo: A `connection string`__ as accepted by PostgreSQL.
- :param kwargs: Parameters overriding the ones specified in *conninfo*.
- :return: Dictionary with the parameters parsed from *conninfo* and
- *kwargs*.
+ :param kwargs: Parameters overriding the ones specified in `!conninfo`.
+ :return: Dictionary with the parameters parsed from `!conninfo` and
+ `!kwargs`.
- Raise `~psycopg.ProgrammingError` if *conninfo* is not a a valid connection
+ Raise `~psycopg.ProgrammingError` if `!conninfo` is not a valid connection
string.
.. __: https://www.postgresql.org/docs/current/libpq-connect.html
def _parse_conninfo(conninfo: str) -> List[pq.ConninfoOption]:
"""
- Verify that *conninfo* is a valid connection string.
+ Verify that `!conninfo` is a valid connection string.
Raise ProgrammingError if the string is not valid.
"""
Write a block of data to a table after a :sql:`COPY FROM` operation.
- If the :sql:`COPY` is in binary format *buffer* must be `!bytes`. In
+ If the :sql:`COPY` is in binary format `!buffer` must be `!bytes`. In
text mode it can be either `!bytes` or `!str`.
"""
data = self.formatter.write(buffer)
cls, conn: Union[Connection[Any], AsyncConnection[Any], "PGconn"]
) -> bool:
"""
- Return `!True` if the server connected to ``conn`` is CockroachDB.
+ Return `!True` if the server connected to `!conn` is CockroachDB.
"""
if isinstance(conn, (Connection, AsyncConnection)):
conn = conn.pgconn
def fetchmany(self, size: int = 0) -> List[Row]:
"""
- Return the next *size* records from the current recordset.
+ Return the next `!size` records from the current recordset.
- *size* default to `!self.arraysize` if not specified.
+ `!size` defaults to `!self.arraysize` if not specified.
:rtype: Sequence[Row], with Row defined by `row_factory`
"""
"""
Move the cursor in the result set to a new position according to mode.
- If *mode* is ``relative`` (default), value is taken as offset to the
- current position in the result set, if set to ``absolute``, *value*
- states an absolute target position.
+ If `!mode` is ``'relative'`` (default), `!value` is taken as offset to
+ the current position in the result set; if set to ``'absolute'``,
+ `!value` states an absolute target position.
Raise `!IndexError` in case a scroll operation would leave the result
set. In this case the position will not change.
Return an error message from a `PGconn` or `PGresult`.
The return value is a `!str` (unlike pq data which is usually `!bytes`):
- use the connection encoding if available, otherwise the *encoding*
+ use the connection encoding if available, otherwise the `!encoding`
parameter as a fallback for decoding. Don't raise exceptions on decoding
errors.
@property
def pgconn_ptr(self) -> Optional[int]:
- """The pointer to the underlying ``PGconn`` structure, as integer.
+ """The pointer to the underlying `!PGconn` structure, as integer.
`!None` if the connection is closed.
@property
def pgresult_ptr(self) -> Optional[int]:
- """The pointer to the underlying ``PGresult`` structure, as integer.
+ """The pointer to the underlying `!PGresult` structure, as integer.
`!None` if the result was cleared.
def class_row(cls: Type[T]) -> BaseRowFactory[T]:
- r"""Generate a row factory to represent rows as instances of the class *cls*.
+ r"""Generate a row factory to represent rows as instances of the class `!cls`.
The class must support every output column name as a keyword parameter.
def args_row(func: Callable[..., T]) -> BaseRowFactory[T]:
- """Generate a row factory calling *func* with positional parameters for every row.
+ """Generate a row factory calling `!func` with positional parameters for every row.
:param func: The function to call for each row. It must support the fields
returned by the query as positional arguments.
def kwargs_row(func: Callable[..., T]) -> BaseRowFactory[T]:
- """Generate a row factory calling *func* with keyword parameters for every row.
+ """Generate a row factory calling `!func` with keyword parameters for every row.
:param func: The function to call for each row. It must support the fields
returned by the query as keyword arguments.
a connection available when you will need to use it.
This function is relatively inefficient, because it doesn't cache the
- adaptation rules. If you pass a *context* you can adapt the adaptation
+ adaptation rules. If you pass a `!context` you can adapt the adaptation
rules used, otherwise only global rules are used.
"""
def join(self, joiner: Union["SQL", LiteralString]) -> "Composed":
"""
- Return a new `!Composed` interposing the *joiner* with the `!Composed` items.
+ Return a new `!Composed` interposing the `!joiner` with the `!Composed` items.
- The *joiner* must be a `SQL` or a string which will be interpreted as
+ The `!joiner` must be a `SQL` or a string which will be interpreted as
an `SQL`.
Example::
where to merge variable parts of a query (for instance field or table
names).
- The *string* doesn't undergo any form of escaping, so it is not suitable to
- represent variable identifiers or values: you should only use it to pass
- constant strings representing templates or snippets of SQL statements; use
- other objects such as `Identifier` or `Literal` to represent variable
- parts.
+ The `!obj` string doesn't undergo any form of escaping, so it is not
+ suitable to represent variable identifiers or values: you should only use
+ it to pass constant strings representing templates or snippets of SQL
+ statements; use other objects such as `Identifier` or `Literal` to
+ represent variable parts.
Example::
:param seq: the elements to join.
:type seq: iterable of `!Composable`
- Use the `!SQL` object's *string* to separate the elements in *seq*.
+ Use the `!SQL` object's string to separate the elements in `!seq`.
Note that `Composed` objects are iterable too, so they can be used as
argument for this method.
"""
Split a non-empty representation of a composite type into components.
- Terminators shouldn't be used in *data* (so that both record and range
+ Terminators shouldn't be used in `!data` (so that both record and range
representations can be parsed).
"""
for m in self._re_tokenize.finditer(data):
:param dumps: The dump function to use.
:type dumps: `!Callable[[Any], str]`
- :param context: Where to use the *dumps* function. If not specified, use it
+ :param context: Where to use the `!dumps` function. If not specified, use it
globally.
:type context: `~psycopg.Connection` or `~psycopg.Cursor`
By default dumping JSON uses the builtin `json.dumps`. You can override
it to use a different JSON library or to use customised arguments.
- If the `Json` wrapper specified a *dumps* function, use it in precedence
+ If the `Json` wrapper specified a `!dumps` function, use it in preference
to the one set by this function.
"""
if context is None:
:param loads: The load function to use.
:type loads: `!Callable[[bytes], Any]`
- :param context: Where to use the *loads* function. If not specified, use it
- globally.
+ :param context: Where to use the `!loads` function. If not specified, use
+ it globally.
:type context: `~psycopg.Connection` or `~psycopg.Cursor`
By default loading JSON uses the builtin `json.loads`. You can override
:param timeout: timeout (in seconds) to check for other interrupt, e.g.
to allow Ctrl-C.
:type timeout: float
- :return: whatever *gen* returns on completion.
+ :return: whatever `!gen` returns on completion.
- Consume *gen*, scheduling `fileno` for completion when it is reported to
- block. Once ready again send the ready state back to *gen*.
+ Consume `!gen`, scheduling `fileno` for completion when it is reported to
+ block. Once ready again send the ready state back to `!gen`.
"""
try:
s = next(gen)
:param timeout: timeout (in seconds) to check for other interrupt, e.g.
to allow Ctrl-C. If zero or None, wait indefinitely.
:type timeout: float
- :return: whatever *gen* returns on completion.
+ :return: whatever `!gen` returns on completion.
Behave like in `wait()`, but take the fileno to wait from the generator
itself, which might change during processing.
:param gen: a generator performing database operations and yielding
`Ready` values when it would block.
:param fileno: the file descriptor to wait on.
- :return: whatever *gen* returns on completion.
+ :return: whatever `!gen` returns on completion.
Behave like in `wait()`, but exposing an `asyncio` interface.
"""
(fd, `Ready`) pairs when it would block.
:param timeout: timeout (in seconds) to check for other interrupt, e.g.
to allow Ctrl-C. If zero or None, wait indefinitely.
- :return: whatever *gen* returns on completion.
+ :return: whatever `!gen` returns on completion.
Behave like in `wait()`, but take the fileno to wait from the generator
itself, which might change during processing.
Upon context exit, return the connection to the pool. Apply the normal
:ref:`connection context behaviour <with-connection>` (commit/rollback
the transaction in case of success/error). If the connection is no more
- in working state replace it with a new one.
+ in working state, replace it with a new one.
"""
conn = self.getconn(timeout=timeout)
t0 = monotonic()