fix: don't clobber a Python exception on COPY FROM with QueryCanceled
We trigger the server to raise QueryCanceled; however, the original
Python exception carries more information (the traceback). We can
consider the server exception merely a notification that the
cancellation worked as expected.
This is a mild change in behaviour, as the updated tests show. However,
raising QueryCanceled is not explicitly documented and is not part of a
strict interface, so we can probably change the exception raised
without waiting for psycopg 4.
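In a minimal sketch (with stand-in callables, not the literal psycopg
code), the pattern is:

    from psycopg.errors import QueryCanceled

    # 'write_all', 'end_copy' and 'cancel' are stand-ins for the actual
    # copy machinery.
    def run_copy(write_all, end_copy, cancel):
        try:
            write_all()
        except BaseException as ex:
            cancel()  # triggers the QueryCanceled on the server side
            try:
                end_copy()
            except QueryCanceled:
                # Cancellation worked as expected; re-raise the original
                # exception, which carries the useful traceback.
                pass
            raise ex
        else:
            end_copy()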
The previously mentioned issue about the '# type: ignore[arg-type]' in
rows.py has been resolved.
The new '# type: ignore[comparison-overlap]' comments in
test_pipeline*.py are due to
https://github.com/python/mypy/issues/15509, a known regression
introduced in Mypy 1.4. We use the workaround documented in the release
blog post https://mypy-lang.blogspot.com/2023/06/mypy-140-released.html
(section "Narrowing Enum Values Using “==”").
Denis Laxalde [Fri, 19 May 2023 07:06:55 +0000 (09:06 +0200)]
fix: finish the PGconn upon connection failure
Attaching the non-finished PGconn to exceptions raised in connect() is
causing problems, as described in issue #565, because the PGconn might
not get finished soon enough in application code, leaving the socket
open.
On the other hand, simply removing the pgconn attribute from Error
would be a breaking change, and we'd lose the inspection features
introduced in commit 9220293dc023b757f2a57702c14b1460ff8f25b0.
As an alternative, we define a new PGconn implementation that is
error-specific: it captures all attributes of the original PGconn and
fails upon any other operation (method calls). Some attributes have a
default value since they are not available in old PostgreSQL versions.
Finally, the (real) PGconn is finished before raising the exception in
generators.connect().
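A rough sketch of the approach (the class name and the captured
attributes are only illustrative, not the actual implementation):

    from psycopg import OperationalError

    class ErrorPGconn:
        """Snapshot of a PGconn attached to an Error, for inspection."""

        def __init__(self, pgconn):
            # Capture the attributes of the original PGconn before it
            # gets finished.
            self.status = pgconn.status
            self.error_message = pgconn.error_message
            # Some attributes are not available with old PostgreSQL
            # versions; they get a default value.
            self.hostaddr = getattr(pgconn, "hostaddr", b"")

        def __getattr__(self, name):
            # Anything not captured above, in particular method calls,
            # fails.
            raise OperationalError(
                f"the connection is closed: {name!r} is not available"
            )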
Denis Laxalde [Tue, 16 May 2023 11:28:37 +0000 (13:28 +0200)]
tests: check that OperationalError raised by connect() holds a pgconn
Sort of a non-regression test for the part of commit
9220293dc023b757f2a57702c14b1460ff8f25b0 concerning generators. It uses
roughly the same logic as tests/pq/test_pgconn.py::test_used_password()
to determine whether the test connection needs a password, and is
skipped otherwise.
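The detection may look roughly like this (a sketch using the pq layer;
'connection_needs_password' is a hypothetical helper, not the actual
test code):

    from psycopg import pq

    def connection_needs_password(dsn: str) -> bool:
        # Connect once and check whether the password was actually used.
        pgconn = pq.PGconn.connect(dsn.encode())
        try:
            return (pgconn.status == pq.ConnStatus.OK
                    and pgconn.used_password)
        finally:
            pgconn.finish()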
Denis Laxalde [Thu, 8 Jun 2023 09:21:42 +0000 (11:21 +0200)]
fix: always validate PrepareManager cache in pipeline mode
Previously, when processing results in pipeline mode
(BasePipeline._process_results()), we'd run
'cursor._check_results(results)' early before calling
_prepared.validate() with prepared statement information. However, if
this check step fails, for example if the pipeline got aborted due to a
previous error, the latter step (PrepareManager cache validation) was
not run.
We fix this by reversing the logic and checking the results last.
However, this is not enough, because the results processing logic in
BasePipeline._fetch_gen() or _communicate_gen(), which sequentially
walked through fetched results, would typically stop at the first
exception, and thus possibly never reach the step of validating the
PrepareManager cache if a previous error happened.
We fix that by making sure that *all* results are processed, capturing
the first exception if any and re-raising it at the end. In both
_communicate_gen() and _fetch_gen(), we no longer store results in a
'to_process' list, but process them upon reception, as this buffering
is no longer needed.
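The result-processing pattern now looks roughly like this (a simplified
sketch, not the literal code; 'process_one' stands in for the real
per-result step):

    def process_all(results, process_one):
        # Walk through *all* results, so that steps such as
        # PrepareManager cache validation always run, even if an earlier
        # result raises (e.g. because the pipeline was aborted).
        first_ex = None
        for result in results:
            try:
                process_one(result)
            except Exception as ex:
                if first_ex is None:
                    first_ex = ex  # remember only the first error
        if first_ex is not None:
            raise first_ex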
Denis Laxalde [Thu, 8 Jun 2023 11:38:44 +0000 (13:38 +0200)]
docs: remove outdated comments in PrepareManager's docstrings
PrepareManager's methods maybe_add_to_cache() and validate() are said
to only be used in pipeline mode, but this is wrong, as can be seen in
BaseCursor._maybe_prepare_gen(). (The comments are probably a leftover
from a prior implementation of the pipeline mode.)
fix: don't reuse the same Transformer in composite dumper
We need different dumpers because, in case a composite contains another
composite, we need to call `dump_sequence()` on different sequences, so
the row dumpers must be distinct.
Daniele Varrazzo [Thu, 16 Mar 2023 21:15:51 +0000 (22:15 +0100)]
fix(pool): reinforce handling of errors in queued clients
Unlike in the async counterpart, I have no idea how to test this
condition, because I don't think you can cancel a Python thread. I
could put the pool and the first client in threads and use the main
thread as the queued client, but it seems pretty convoluted.
However, adding error handling in the same place that fixed #509 in the
async pools doesn't hurt.
Denis Laxalde [Sun, 5 Feb 2023 17:40:16 +0000 (18:40 +0100)]
test: use Windows asyncio policy in "non-asyncio" pool tests
In commit b6cc8343159fc0a27365e09a3beef06433f3f1b5, we dropped the
global asyncio.set_event_loop_policy() for Windows in conftest.py,
replacing it with asyncio options to be used by the anyio test runner.
However, test cases in test_pool_async_noasyncio.py are not "async
def", so they would not use that anyio runner (nor did they use
pytest-asyncio's runner before); but they still need the event loop
policy for Windows. Accordingly, we configure the loop policy from
"anyio_backend_options" in the asyncio_run() fixture.
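For illustration, the fixture might look like this (a sketch;
'asyncio_run' is the fixture named above, and 'anyio_backend_options'
is the options fixture provided by the anyio pytest plugin):

    import asyncio
    import pytest

    @pytest.fixture
    def asyncio_run(anyio_backend_options):
        # Apply the same event loop policy the anyio runner would use,
        # so that non-async tests running their own loop behave the same
        # on Windows.
        policy = anyio_backend_options.get("policy")
        if policy is not None:
            asyncio.set_event_loop_policy(policy)
        return asyncio.run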
Denis Laxalde [Wed, 17 Aug 2022 13:59:31 +0000 (15:59 +0200)]
test: use anyio instead of pytest-asyncio
This is in preparation for adding support for other async libraries,
through anyio. The AnyIO pytest plugin is used as a replacement for
pytest-asyncio:
- either the pytest.mark.asyncio marker is replaced by
  pytest.mark.anyio, or
- we rely on the 'anyio_backend' fixture, pulled in by the 'aconn_cls'
  fixture (and hence 'aconn'), which provides automatic detection for
  test functions using it.
The 'anyio_backend' fixture is parametrized to only use asyncio and
selects the event loop policy we need on the Windows platform, as
previously done in pytest_sessionstart(), but only for Python 3.8 or
higher.
This fixture is defined in the main conftest.py, as well as in
pool/conftest.py, since we'll change the former to support more async
backends while keeping the latter asyncio-only for now.
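For illustration, the fixture may look roughly like this (a sketch,
assuming anyio's asyncio backend accepts an event loop 'policy' option;
not necessarily the exact project code):

    import asyncio
    import sys

    import pytest

    @pytest.fixture(params=["asyncio"])
    def anyio_backend(request):
        backend, options = request.param, {}
        if sys.platform == "win32" and sys.version_info >= (3, 8):
            # Psycopg needs a selector event loop on Windows; since
            # Python 3.8 the default there is the proactor loop.
            options["policy"] = asyncio.WindowsSelectorEventLoopPolicy()
        return backend, options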
Function test_concurrency_async.py::test_ctrl_c is no longer 'async'
because its code does not directly use asyncio (it's done through a
subprocess); but the 'async def' was needed before in order for
pytest-asyncio to run it since the test module had a global
pytest.mark.asyncio (and we were using the "auto" mode).
Ran Benita [Tue, 24 Jan 2023 20:59:55 +0000 (22:59 +0200)]
refactor(pool): make constructor configuration parameters type-safe
Previously, arguments to ConnectionPool and friends would be forwarded
to `BasePool` using `**kwargs`. This, however, is not type-safe, and it
prevents code editors from auto-completing the parameters.
Drop the `**kwargs` and duplicate the parameters instead.
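The gist of the change, heavily abridged (class and parameter names
only illustrate the pattern):

    from typing import Any, Optional

    class _Base:
        def __init__(self, conninfo: str = "", min_size: int = 4,
                     max_size: Optional[int] = None) -> None:
            self.conninfo = conninfo
            self.min_size = min_size
            self.max_size = max_size

    # Before: parameters vanish into **kwargs, invisible to type
    # checkers and to editors' auto-completion.
    class KwargsPool(_Base):
        def __init__(self, conninfo: str = "", **kwargs: Any) -> None:
            super().__init__(conninfo, **kwargs)

    # After: the parameters are spelled out (duplicated) in the
    # subclass.
    class TypedPool(_Base):
        def __init__(self, conninfo: str = "", *, min_size: int = 4,
                     max_size: Optional[int] = None) -> None:
            super().__init__(conninfo, min_size=min_size,
                             max_size=max_size)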
Denis Laxalde [Sat, 7 Jan 2023 15:01:36 +0000 (16:01 +0100)]
fix: set rowcount to the first result in executemany(..., returning=True)
Previously, _rowcount was unconditionally set to the overall number of
rows of all queries in executemany(), and then reset only upon the
first call to nextset(). In the returning=True case, this led the
rowcount attribute to be wrong for the first result (i.e. it didn't
match the number of rows that would be returned by .fetchall(), as can
be seen in the updated tests).
Now we only set _rowcount to the cumulative number of rows of the
executed queries *if* executemany() is not returning (so the value is
still useful, e.g. to check the number of INSERTed rows):
>>> cur.executemany("INSERT INTO t(r) VALUES (%s)", [(1,), (2,)])
>>> cur.rowcount
2 # number of inserted rows
>>> cur.nextset()
>>> cur.executemany("INSERT INTO t(r) VALUES (%s) RETURNING r", [(1,), (2,)], returning=True)
>>> cur.rowcount
1 # number of rows in the first result set
>>> cur.fetchall()
[(1,)]
>>> cur.nextset()
True
>>> cur.rowcount
1
>>> cur.fetchall()
[(2,)]
>>> cur.nextset()
Besides, the code for processing results from executemany() in
_executemany_gen_no_pipeline() is now similar to that of
_set_results_from_pipeline().
Denis Laxalde [Sun, 8 Jan 2023 14:48:38 +0000 (15:48 +0100)]
refactor: drop superfluous _rowcount reset
In BaseCursor._set_results_from_pipeline() we were resetting _rowcount
if negative. This is superfluous as this code is reached when coming
from _executemany_gen_pipeline() in which we already do
'self._rowcount = 0' early on.
Daniele Varrazzo [Tue, 20 Dec 2022 20:19:23 +0000 (20:19 +0000)]
ci: configure the database to run two-phase-commit tests
We can't use the GitHub Actions PostgreSQL service because it doesn't
seem possible to configure the server for prepared transactions:
- the service YAML definition doesn't take args to pass to the
  container's run command
- we can't override the entrypoint with a script of ours because the
  service starts before actions/checkout
- the postgres image doesn't allow passing extra parameters to the
  server via an env var