Joel Jakobsson [Tue, 9 May 2023 00:49:03 +0000 (02:49 +0200)]
Add raw query support with PostgreSQL native placeholders
This commit introduces support for raw queries with PostgreSQL's native
placeholders ($1, $2, etc.) in psycopg3. By setting the use_raw_query attribute
to True in a custom cursor class, users can enable the use of raw queries with
native placeholders.
The code demonstrates how to create a custom RawQueryCursor class that sets the
use_raw_query attribute to True. This custom cursor class can be set as the
cursor_factory when connecting to the database, allowing users to choose between
PostgreSQL's native placeholders or the standard %s placeholder in their queries.
The code also demonstrates how both styles of placeholders can coexist. Test
cases are included to verify the correct behavior of the new feature.
The bad condition is only reached using COPY in executemany() in
pipeline mode and with prepared statements disabled. It should probably
never happen outside the unit tests.
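The difference between the two placeholder styles can be illustrated with a small stdlib-only sketch: psycopg normally translates client-side %s placeholders into PostgreSQL's native $1, $2 form, while a raw query is passed through as-is. The helper below is hypothetical, not psycopg's actual conversion code.

```python
def client_to_native(query: str) -> str:
    """Rewrite %s placeholders as PostgreSQL-native $1, $2, ... markers.

    Illustrative only: psycopg's real conversion also handles named
    placeholders and quoting; this shows just the positional mapping.
    """
    out = []
    n = 0
    i = 0
    while i < len(query):
        if query.startswith("%s", i):
            n += 1
            out.append(f"${n}")
            i += 2
        elif query.startswith("%%", i):  # escaped literal percent sign
            out.append("%")
            i += 2
        else:
            out.append(query[i])
            i += 1
    return "".join(out)


print(client_to_native("SELECT %s + %s"))  # SELECT $1 + $2
```

With use_raw_query enabled, no such translation happens and the query string reaches the server untouched, so both styles can coexist in the same program via different cursor classes.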
Daniele Varrazzo [Sat, 17 Dec 2022 03:47:18 +0000 (03:47 +0000)]
fix(numpy): fix dumpers registration order
If numpy dumpers are registered after numeric ones, then NPNumericBinaryDumper
is used instead of NumericBinaryDumper when looking up by oid. This
breaks dumping values with a decimal part.
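The order-dependence can be sketched with a minimal oid-keyed registry (hypothetical names, not psycopg's adaptation API): the last dumper registered for an oid wins lookups, so the generic numeric dumper must be registered after the numpy-specific one.

```python
NUMERIC_OID = 1700  # oid of PostgreSQL's numeric type


class Registry:
    """Toy stand-in for an oid-keyed dumper registry."""

    def __init__(self):
        self._by_oid = {}

    def register(self, oid, dumper):
        # later registrations overwrite earlier ones for the same oid
        self._by_oid[oid] = dumper

    def lookup(self, oid):
        return self._by_oid[oid]


reg = Registry()
reg.register(NUMERIC_OID, "NPNumericBinaryDumper")  # numpy dumper first...
reg.register(NUMERIC_OID, "NumericBinaryDumper")    # ...generic one last

print(reg.lookup(NUMERIC_OID))  # NumericBinaryDumper
```

Registering in the opposite order would make the numpy dumper answer oid lookups for plain Decimal values too, which is the breakage the commit fixes.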
fix: raise an error instead of a warning using nextset in pipeline mode
Note: the tests, intentionally, don't pass, because they expect
execute() results to clobber the previous ones rather than accumulating.
This behaviour is to be changed in a further commit.
fix: raise a warning if nextset is used after execute in pipeline mode
So far we have accumulated results, unlike in non-pipeline mode, where
results were discarded. But this proved to be unreliable, so we will
forbid it in 3.2.
docs: add clarification about transaction characteristics attributes
They don't affect autocommit connections as they used to in psycopg2, so
note it as a difference from psycopg2. Add more explicit warning about
this limitation.
fix: fix possible errors calling __repr__ from __del__.
The errors are ignored, but they print a warning on program exit and can
eclipse a genuine warning.
The error is caused by the `pq.misc` module getting gc'd on interpreter
shutdown before `connection_summary()` is called. The solution is to
import `connection_summary` in the module namespace, which is similar to
the solution that proved working for #198. It is less robust than the
solution used by the Python devs to import the function in the method
via an argument default, but it should work adequately (as nobody
complained about #198 anymore).
In the #591 discussion, I proposed that connection_summary is too
complex to be considered safe to call in __del__. Actually, looking at
it, it seems innocent enough, as it only calls object methods, not
functions from module namespaces. As a consequence we assume that this
commit fixes the issue. As I can't reproduce it, I will ask the OP
whether this is the case.
fix: don't clobber a Python exception on COPY FROM with QueryCanceled
We trigger the server to raise the QueryCanceled; however, the original
exception has more information (the traceback). We can consider the
server exception just a notification that cancellation worked as
expected.
This is a mild change in behaviour, as the fixed tests state. However,
raising QueryCanceled is not explicitly documented and not part of a
strict interface, so we can probably change the exception raised without
needing to wait for psycopg 4.
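A minimal sketch of the fixed control flow (illustrative names, not psycopg's internals): while unwinding from an original Python error we request cancellation; a QueryCanceled subsequently raised by the server is treated as confirmation that the cancellation worked, and the original exception, which carries the useful traceback, is the one propagated.

```python
class QueryCanceled(Exception):
    """Stand-in for psycopg.errors.QueryCanceled."""


def finish_copy(flush, original_exc=None):
    """Finish a COPY operation; prefer the original Python exception
    over the server's QueryCanceled notification."""
    try:
        flush()
    except QueryCanceled:
        if original_exc is None:
            raise  # a genuine cancellation: propagate it
        # otherwise: just confirmation that our cancel request worked
    if original_exc is not None:
        raise original_exc
```

When there is no original exception, a QueryCanceled still propagates normally, so genuine cancellations keep their current behaviour.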
James Johnston [Wed, 12 Jul 2023 00:08:10 +0000 (17:08 -0700)]
Update documentation related to async and Ctrl-C
Now that #543 is fixed, users shouldn't need to manually set up
signal handlers to cancel operations any more. Update the documentation
to reflect this.
The issue previously mentioned about '# type: ignore[arg-type]' in
rows.py has been resolved.
The new '# type: ignore[comparison-overlap]' in test_pipeline*.py are
due to https://github.com/python/mypy/issues/15509, a known regression
from Mypy 1.4. We use the workaround documented in the release blog post
https://mypy-lang.blogspot.com/2023/06/mypy-140-released.html (section
"Narrowing Enum Values Using “==”").
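The style of workaround used in the tests can be illustrated with a hypothetical enum (not psycopg's): depending on the Mypy version, an `==` comparison on an already-narrowed Enum value may be flagged as non-overlapping, and the blog-documented fix is a targeted ignore comment. The runtime behaviour is unaffected either way.

```python
from enum import Enum


class TxStatus(Enum):  # hypothetical enum for illustration
    IDLE = 0
    ACTIVE = 1


def is_active(status: TxStatus) -> bool:
    if status == TxStatus.IDLE:
        return False
    # Under Mypy 1.4, narrowing from the branch above can make this
    # comparison look non-overlapping (mypy issue 15509); the targeted
    # ignore silences only that error code.
    return status == TxStatus.ACTIVE  # type: ignore[comparison-overlap]


print(is_active(TxStatus.ACTIVE))  # True
```

Using the narrow `[comparison-overlap]` code rather than a bare `# type: ignore` keeps other genuine type errors on the same line visible.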
Denis Laxalde [Fri, 19 May 2023 07:06:55 +0000 (09:06 +0200)]
fix: finish the PGconn upon connection failure
Attaching the non-finished PGconn to exceptions raised in connect() is
causing problems, as described in issue #565, because the PGconn might
not get finished soon enough in application code and the socket would
remain open.
On the other hand, just removing the pgconn attribute from Error would
be a breaking change and we'd lose the inspection features introduced
in commit 9220293dc023b757f2a57702c14b1460ff8f25b0.
As an alternative, we define a new PGconn implementation that is
error-specific: it captures all attributes of the original PGconn and
fails upon other operations (method calls). Some attributes have a
default value since they are not available in old PostgreSQL versions.
Finally, the (real) PGconn is finished before raising exception in
generators.connect().
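A hypothetical sketch of such an error-specific connection object (attribute and class names here are illustrative, not psycopg's): it snapshots informational attributes of the live connection before it is finished, and any other access fails explicitly.

```python
class OperationalError(Exception):
    """Stand-in for psycopg's error class."""


class ErrorPGconn:
    # illustrative subset of the attributes the real class captures
    _ATTRS = ("status", "host", "port", "dbname")

    def __init__(self, pgconn):
        for name in self._ATTRS:
            # None as a default for attributes that old PostgreSQL
            # versions don't expose
            setattr(self, name, getattr(pgconn, name, None))

    def __getattr__(self, name):
        # only reached for names that were not snapshotted, i.e. the
        # methods of the real connection: fail loudly instead
        raise OperationalError(f"the connection is closed: no {name}")


class FakePGconn:  # minimal stand-in for a real libpq connection
    status = 1
    host = "localhost"
    port = 5432
    dbname = "app"


snap = ErrorPGconn(FakePGconn())
print(snap.host)  # localhost
```

The exception thus keeps its inspectable pgconn attribute while the real connection, and its socket, can be closed immediately.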
Denis Laxalde [Tue, 16 May 2023 11:28:37 +0000 (13:28 +0200)]
tests: check that OperationalError raised by connect() holds a pgconn
Sort of a non-regression test for the part of commit
9220293dc023b757f2a57702c14b1460ff8f25b0 concerning generators. It uses
roughly the same logic as tests/pq/test_pgconn.py::test_used_password()
to determine if the test connection needs a password, and is skipped
otherwise.
Denis Laxalde [Thu, 8 Jun 2023 09:21:42 +0000 (11:21 +0200)]
fix: always validate PrepareManager cache in pipeline mode
Previously, when processing results in pipeline mode
(BasePipeline._process_results()), we'd run
'cursor._check_results(results)' early before calling
_prepared.validate() with prepared statement information. However, if
this check step fails, for example if the pipeline got aborted due to a
previous error, the latter step (PrepareManager cache validation) was
not run.
We fix this by reversing the logic, and checking results last.
However, this is not enough, because the results processing logic in
BasePipeline._fetch_gen() or _communicate_gen(), which sequentially
walked through fetched results, would typically stop at the first
exception and thus possibly never go through the step of validating
the PrepareManager cache if a previous error happened.
We fix that by making sure that *all* results are processed, possibly
capturing the first exception and then re-raising it. In both
_communicate_gen() and _fetch_gen(), we no longer store results in the
'to_process' list, but process them upon reception, as this buffering
is no longer needed.
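The "walk every result, re-raise the first error" pattern described above can be sketched as follows (illustrative names, not psycopg's internals):

```python
def process_results(results, process):
    """Run `process` on every result so per-result bookkeeping (such as
    validating the PrepareManager cache) always happens, then re-raise
    the first exception encountered, if any."""
    first_exc = None
    for result in results:
        try:
            process(result)
        except Exception as exc:
            if first_exc is None:
                first_exc = exc  # remember only the first failure
    if first_exc is not None:
        raise first_exc
```

Stopping at the first exception, as the old code did, would leave the bookkeeping for the remaining results unexecuted, which is exactly the cache-desynchronisation bug being fixed.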
Denis Laxalde [Thu, 8 Jun 2023 11:38:44 +0000 (13:38 +0200)]
docs: remove outdated comments in PrepareManager's docstrings
PrepareManager's methods maybe_add_to_cache() and validate() are said to
only be used in pipeline mode, but this is wrong, as can be seen in
BaseCursor._maybe_prepare_gen(). (The comments are probably a leftover
from an earlier implementation of the pipeline mode.)
fix: don't reuse the same Transformer in composite dumper
We need different dumpers because, in case a composite contains another
composite, we need to call `dump_sequence()` on different sequences, so
the row dumpers must be distinct.
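The constraint can be sketched with a toy dumper (illustrative, not psycopg's Transformer): if `dump_sequence()` keeps per-call state on the dumper instance, a recursive call for a nested composite would clobber the outer call's state, so each composite gets a fresh dumper.

```python
class CompositeDumper:
    """Toy composite dumper with per-dump state."""

    def __init__(self):
        self._parts = []  # per-dump state: unsafe to share across calls

    def dump_sequence(self, seq):
        for item in seq:
            if isinstance(item, tuple):
                # a distinct dumper for the nested composite, so its
                # dump doesn't clobber this instance's _parts
                self._parts.append(CompositeDumper().dump_sequence(item))
            else:
                self._parts.append(str(item))
        return "(" + ",".join(self._parts) + ")"


print(CompositeDumper().dump_sequence((1, (2, 3))))  # (1,(2,3))
```

Reusing the same instance for the inner tuple would mix the inner items into the outer `_parts` list, which is the class of bug the commit avoids by not reusing the same Transformer.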