See https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a7be2a6c262d5352756d909b29c419ea5e5fa1d9:
> drivers built on top of libpq should expose this function and its
> use should be generally encouraged over doing ALTER USER directly for
> password changes.
The test case assumes that the role connected to postgres has CREATEROLE
rights. If this is not true, the test is skipped.
Daniele Varrazzo [Wed, 29 May 2024 19:45:57 +0000 (21:45 +0200)]
fix(copy): fix count of chars to escape
We failed to reset the number of chars to escape at every field. As a
consequence, we ended up resizing and scanning all the fields after the
first one requiring an escape, and allocating a bit more memory than
needed.
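The per-field reset can be illustrated with a simplified Python sketch of a COPY TEXT escaping routine (the names here are illustrative; psycopg's actual implementation lives in C):

```python
# Simplified sketch of per-field escaping in a COPY TEXT writer.
# ESCAPES and escape_field are illustrative names, not psycopg's API.

ESCAPES = {ord("\t"): b"\\t", ord("\n"): b"\\n", ord("\\"): b"\\\\"}

def escape_field(field: bytes) -> bytes:
    # The fix: count the chars to escape afresh for *every* field.
    # Before the fix the counter leaked across fields, so every field
    # after the first escaped one was needlessly resized and rescanned.
    nesc = sum(1 for b in field if b in ESCAPES)
    if not nesc:
        return field  # nothing to escape: return the field unchanged
    out = bytearray()
    for b in field:
        out += ESCAPES.get(b, bytes((b,)))
    return bytes(out)
```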
Daniele Varrazzo [Tue, 14 May 2024 11:18:40 +0000 (13:18 +0200)]
fix: use the simple query protocol to execute BEGIN/COMMIT out of pipeline
We started using the extended protocol in e5079184 to fix #350, but,
probably to keep symmetry, we also changed the behaviour out of the
pipeline.
This turns out to be a problem for people connecting to the PgBouncer
admin console. They can use the `ClientCursor`, which tries to use the
simple protocol as much as it can, but they currently have to use
autocommit. With this changeset autocommit shouldn't be needed anymore.
See #808.
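The behaviour change can be modeled with a toy dispatcher (FakeConn and its method are illustrative, not psycopg internals):

```python
# Toy model of the protocol choice described above.

class FakeConn:
    def __init__(self, in_pipeline: bool):
        self.in_pipeline = in_pipeline
        self.sent = []  # (protocol, query) pairs "sent" to the server

    def _exec_command(self, query: str) -> None:
        # Pipeline mode requires the extended protocol; outside of it,
        # BEGIN/COMMIT go back to the simple protocol, which the
        # PgBouncer admin console understands.
        proto = "extended" if self.in_pipeline else "simple"
        self.sent.append((proto, query))
```

With this dispatch, a regular (non-autocommit) connection to the PgBouncer admin console should issue BEGIN/COMMIT in a form the console accepts.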
ci(macos): test and build macOS packages on M1 runners
Separate macos runners because:
- The macos-14 runner can build amd64 images, but doesn't have Python <
  3.10.
- The macos-12 runner can only build x86_64 images.
Run Postgres from CI rather than from cibuildwheel, as cibw won't use a
docker image on macOS anyway. It makes it more uniform w.r.t. other
runners and doesn't require a "before" script.
fix(pool): avoid possible deadlock (until timeout) on pool closing
With the previous change to avoid finding open connections in the pool
(#784), stopping the worker was moved into the critical section. This
can create a deadlock in case a worker is in the process of obtaining a
new connection, because putting it into the pool requires the lock. The
deadlock only lasts for the default 5s timeout passed to _stop_workers().
Solve the problem by guarding _add_to_pool() so that it doesn't try to
add the connection if the pool is closed.
However, also refactor the pool closing sequence, closing the workers and
the other resources (now moved out of the shared state) outside the
critical section, to keep the operations running under lock to a minimum.
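The guard can be sketched as follows (MiniPool is a minimal illustration, not the psycopg_pool implementation):

```python
import threading

# Minimal sketch of the guarded _add_to_pool() described above.

class FakeConn:
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

class MiniPool:
    def __init__(self):
        self._lock = threading.Lock()
        self._pool = []
        self.closed = False

    def _add_to_pool(self, conn):
        with self._lock:
            if self.closed:
                # A worker finished creating a connection after close():
                # discard it instead of putting it back into the pool.
                conn.close()
                return
            self._pool.append(conn)

    def close(self):
        with self._lock:
            self.closed = True
            pool, self._pool = self._pool, []
        # Connections (and, in the real pool, workers) are closed
        # *outside* the critical section, keeping the locked region
        # to a minimum.
        for conn in pool:
            conn.close()
```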
Try to fix the flakiness shown by deaf_listen() in CI. Maybe there is a
race condition in listen()/connect(), but I have also seen problems
related to localhost in the /etc/hosts file and IPv6, so let's first
try this.
feat: add a timeout parameter to Connection.cancel_safe()
This will only work for PGcancelConn, i.e. libpq >= 17, or thanks to
added waiting logic in AsyncConnection's implementation; so we add a
note about the limitation in Connection's documentation.
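The added waiting logic amounts to polling with a deadline; a hedged sketch (cancel_poll() stands in for polling the libpq PGcancelConn, it is not psycopg's actual API):

```python
import time

# Illustrative deadline loop for the timeout semantics described above.

def wait_cancel(cancel_poll, timeout: float, interval: float = 0.01) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if cancel_poll():  # True once the cancel request has completed
            return True
        time.sleep(interval)
    return False  # gave up after `timeout` seconds
```

Per the entry above, the public entry point gains the parameter, i.e. something like `conn.cancel_safe(timeout=5.0)` (the value 5.0 here is just an example).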
fix: avoid explicit str() call when logging exception
This was done as a paranoia check for Sentry, which might use the repr of
the exception even if we asked for `%s` and might therefore leak
secrets; but frankly it's not our responsibility.
Avoid catching NotSupported, just check for the libpq version.
Also avoid the half exception handler in `_cancel_gen()`: as in the
legacy branch, warn and ignore any error happening in the outermost
method, without adding an ignore exception policy in an implementation
method.
This method was useful before introducing cancel_safe, which is now the
function of choice for internal cancelling.
Also refactor the exception handling to account for possible errors in
`PGcancel.cancel()`, not only in `PGconn.get_cancel()`, to make sure not
to clobber an exception bubbling up with ours, whatever happens to the
underlying connection.
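The pattern can be sketched in a few lines (run_with_cancel() is illustrative, not psycopg's code):

```python
import warnings

# Sketch of the exception handling described above: if cancelling fails
# while we are already handling an error, warn and let the original
# exception keep bubbling up instead of replacing it.

def run_with_cancel(operation, cancel):
    try:
        return operation()
    except BaseException:
        try:
            cancel()  # may itself fail, e.g. if the connection is gone
        except Exception as e:
            warnings.warn(f"error ignored while cancelling: {e}")
        raise  # re-raise the *original* exception, not the cancel error
```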
fix(pool): make sure there are no connections in the pool after close()
The case has been reported in #784. While not easy to reproduce, it
seems that it might be caused by the pool being closed while a worker is
still trying to create a connection, which then gets put into the _pool
state after the point where supposedly no other operation should be
performed.
Stop the workers, and empty the pool only after they have stopped
running.
Also refactor the cleanup of the pool and waiting queue, moving them
to close(). There is no reason why a method called "stop workers" should
empty them, and no other code path uses such a feature.
tests: allow skipping the pool tests again
`pytest -m "not pool"` should allow skipping the pool tests. However,
because of an attribute access at import time to define the test
marker, the import failed as well.
Convert the markers to strings, which will be looked up with a getattr
by the fixture. Extend async-to-sync to convert the pool class names
from the strings too.
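The idea can be illustrated with a stand-in for the marker object (none of these names are the test suite's actual code):

```python
# Storing marker names as strings moves the attribute lookup that may
# fail from import/collection time into the fixture, at test time.

class Marks:
    # Stand-in for an object whose attributes may not be importable,
    # e.g. because an optional dependency is missing.
    def __getattr__(self, name):
        if name == "pool":
            raise ImportError("psycopg_pool is not installed")
        return f"mark:{name}"

MARKERS = ["pool", "other"]  # plain strings: always safe at import time

def fixture_lookup(name: str):
    try:
        return getattr(Marks(), name)
    except ImportError:
        return None  # the fixture can then skip the test
```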
Denis Laxalde [Fri, 24 Mar 2023 13:55:18 +0000 (14:55 +0100)]
feat: use non-blocking cancellation upon Copy termination
The logic of Copy termination, in finish(), is reworked so that
connection cancellation is invoked from there directly instead of from
_end_copy_out_gen() as we cannot call async code from the generator.
Denis Laxalde [Fri, 24 Mar 2023 13:55:18 +0000 (14:55 +0100)]
feat: add encrypted and non-blocking cancellation
We introduce Connection.cancel_safe() which uses the encrypted and
non-blocking libpq cancellation API available with PostgreSQL 17. As a
non-blocking entry point, cancel_safe() delegates to a generator function,
namely _cancel_gen(). If the libpq version is too old, the method raises
a NotSupportedError.
CTRL+C handling (in Connection.wait() or Cursor.stream()) also uses the
non-blocking cancellation, but falls back to the old method if the former
is not supported.
The behavior of the cancel() method (on either Connection or
AsyncConnection) is kept unchanged and uses the old cancellation API.
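The fallback amounts to a version check (the function names are stand-ins; 170000 is PostgreSQL 17's number in PQlibVersion()'s major*10000 format):

```python
# Sketch of the CTRL+C fallback described above.

def cancel_on_interrupt(libpq_version: int, cancel_safe, legacy_cancel):
    if libpq_version >= 170000:
        cancel_safe()    # encrypted, non-blocking (PGcancelConn API)
    else:
        legacy_cancel()  # blocking PGcancel/PQcancel API
```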