Daniele Varrazzo [Mon, 22 Feb 2021 03:20:33 +0000 (04:20 +0100)]
Use the connections in the pool uniformly
I feel this is better than using some connections more than others (e.g.
in terms of the bloat of the resources associated with each connection)
and gives more predictable connection performance (there won't be some
cold, some hot).
Now there aren't really "unused connections" to single out in order to
shrink the pool. So keep a tally of the number of unused connections and
use a worker thread to close some if more than minconn have remained
unused over a period.
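A minimal sketch of the idea described above, not psycopg's actual code: a FIFO queue hands connections out uniformly, and a running tally of the fewest connections left idle during a period tells a periodic worker how many it can close without dropping below minconn. All names (`UniformPool`, `shrink`, etc.) are illustrative.

```python
from collections import deque

class UniformPool:
    def __init__(self, minconn, conns):
        self.minconn = minconn
        self._pool = deque(conns)       # FIFO: pop from one end, return to the other
        self._nconns_min = len(conns)   # fewest connections left idle in the period

    def getconn(self):
        conn = self._pool.popleft()     # take the least recently used connection
        self._nconns_min = min(self._nconns_min, len(self._pool))
        return conn

    def putconn(self, conn):
        self._pool.append(conn)         # return it at the opposite end

    def shrink(self):
        """Run periodically by a worker: close connections that stayed
        unused for the whole period, but never drop below minconn."""
        to_close = max(0, min(self._nconns_min, len(self._pool) - self.minconn))
        closed = [self._pool.pop() for _ in range(to_close)]
        self._nconns_min = len(self._pool)   # reset the tally for the next period
        return closed
```

Because every `getconn()` takes the oldest idle connection, usage rotates evenly across the pool instead of hammering the same few connections.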
Daniele Varrazzo [Sun, 21 Feb 2021 13:04:10 +0000 (14:04 +0100)]
Allow proxy tests to fail on Travis
Don't know why they fail: it requires interactive investigation there.
Make sure the tests can run connecting on a TCP socket and avoid SSH in
the proxy anyway. Not that any of this worked...
Daniele Varrazzo [Sat, 20 Feb 2021 03:16:27 +0000 (04:16 +0100)]
Shrink the pool when connections have been idle long enough
Pool behaviour on start changed: block in __init__ until minconn
connections have been obtained, or raise PoolTimeout if timeout_sec has
passed. Not doing so makes it complicated to understand, when a
connection is requested, whether it happens during initialisation, and
to avoid an unneeded grow request.
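The blocking-start behaviour could look roughly like this sketch (the names `PoolTimeout` and `timeout_sec` come from the message; everything else is assumed and the real pool code differs): worker threads open connections while `__init__` waits on a condition until minconn are ready or the deadline passes.

```python
import threading
import time

class PoolTimeout(Exception):
    pass

class Pool:
    def __init__(self, connect, minconn, timeout_sec):
        self._conns = []
        self._lock = threading.Lock()
        self._grown = threading.Condition(self._lock)
        # Open the initial connections concurrently in worker threads.
        for _ in range(minconn):
            threading.Thread(target=self._add_conn, args=(connect,)).start()
        deadline = time.monotonic() + timeout_sec
        with self._grown:
            # Block until minconn connections are available or time runs out.
            while len(self._conns) < minconn:
                remaining = deadline - time.monotonic()
                if remaining <= 0 or not self._grown.wait(remaining):
                    raise PoolTimeout("pool initialization timeout")

    def _add_conn(self, connect):
        conn = connect()            # possibly slow: runs in a worker thread
        with self._grown:
            self._conns.append(conn)
            self._grown.notify_all()
```

With this shape, by the time `__init__` returns the pool is fully warmed up, so a client request can never be confused with initialisation work and never triggers a spurious grow.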
Daniele Varrazzo [Sun, 14 Feb 2021 01:22:20 +0000 (02:22 +0100)]
Add connection pool close()
When the pool is closed, raise an exception in the threads of the
clients already waiting and refuse new requests. Let any request
currently being served finish anyway.
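A hypothetical sketch of those close() semantics (the `PoolClosed` name and all structure are assumptions, not psycopg's code): closing flips a flag and wakes every waiter, which then raises in its own thread; new `getconn()` calls are refused; a client already holding a connection is unaffected and can finish.

```python
import threading
from collections import deque

class PoolClosed(Exception):
    pass

class ClosablePool:
    def __init__(self, conns):
        self._conns = deque(conns)
        self._lock = threading.Lock()
        self._avail = threading.Condition(self._lock)
        self._closed = False

    def getconn(self):
        with self._avail:
            while not self._conns:
                if self._closed:
                    # Raised in the waiting client's own thread.
                    raise PoolClosed("the pool is closed")
                self._avail.wait()
            if self._closed:
                # Refuse new requests after close().
                raise PoolClosed("the pool is closed")
            return self._conns.popleft()

    def putconn(self, conn):
        with self._avail:
            self._conns.append(conn)
            self._avail.notify()

    def close(self):
        with self._avail:
            self._closed = True
            self._avail.notify_all()   # wake waiters so they raise PoolClosed
```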
Daniele Varrazzo [Sat, 13 Feb 2021 23:45:50 +0000 (00:45 +0100)]
Make sure the pool can be deleted with no warning
Make sure to delete reference loops between the pool and the maintenance
tasks after they have run.
Do not raise a warning if a connection in a pool is deleted without
being closed, as this is a normal condition (imagine a pool created as
a global object).
Daniele Varrazzo [Sun, 21 Feb 2021 01:16:28 +0000 (02:16 +0100)]
Fix return error without exception on PQsocket call of broken connection
There is still some weirdness around here: the method raises an
exception on my box (both with unix and tcp sockets) but it seems to
still return a valid number on certain databases on Travis. In the
test, at least make sure that it is a reasonable value.
Daniele Varrazzo [Wed, 24 Feb 2021 14:23:23 +0000 (15:23 +0100)]
Make the row_maker non-optional
Use the `tuple` type as return value for `tuple_row()`, which has a
valid interface and can also be used in the Cython code to fast-path the
case where the tuples created internally are good enough.
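A toy illustration of the point above, assuming `tuple_row()` takes a result description (a hedged sketch, not psycopg's actual signature): returning the built-in `tuple` type itself gives a valid row maker, and fast-path code can simply check identity against `tuple`.

```python
def tuple_row(description):
    # `tuple` is callable on a sequence of values, so it is a valid
    # row maker; code can also check `row_maker is tuple` to skip any
    # conversion when the internally-created tuples are already fine.
    return tuple

make_row = tuple_row([("id",), ("name",)])
row = make_row([1, "ada"])
```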
Daniele Varrazzo [Wed, 24 Feb 2021 02:05:15 +0000 (03:05 +0100)]
Make the row_factory attribute non-nullable
Added a `tuple_row()` factory for completeness. Note that it returns
None, not a callable, and the row_maker on the Transformer hasn't
changed. The signature of RowFactory now allows that.
This makes it simpler to specify the row_factory option on
`conn.cursor()`: None means default (the connection's `row_factory`),
while specifying `tuple_row()` overrides it to the normal tuplish
behaviour.
Denis Laxalde [Fri, 12 Feb 2021 09:31:30 +0000 (10:31 +0100)]
Add row_factory as connection attribute and connect argument
When passing 'row_factory' to connect(), the respective attribute will
be set on the connection instance. It will be used as the default at
cursor creation and can be overridden with conn.cursor(row_factory=...)
or conn.execute(row_factory=...).
We use a '_null_row_factory' marker to handle a None value passed to
.cursor() or .execute(), which disables the default row factory.
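The None-vs-not-passed distinction can be handled with a sentinel default, which is what the `_null_row_factory` marker suggests. A minimal sketch with illustrative names (the real psycopg classes are far richer):

```python
# Sentinel meaning "the caller did not pass row_factory at all".
_null_row_factory = object()

class Cursor:
    def __init__(self, row_factory):
        self.row_factory = row_factory

class Connection:
    def __init__(self, row_factory=None):
        self.row_factory = row_factory

    def cursor(self, row_factory=_null_row_factory):
        if row_factory is _null_row_factory:
            # Argument omitted: inherit the connection's default.
            row_factory = self.row_factory
        # An explicit row_factory=None falls through unchanged,
        # disabling the default row factory.
        return Cursor(row_factory)
```

The sentinel is module-private, so callers can never pass it by accident, and `None` remains available as a meaningful value.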
Daniele Varrazzo [Fri, 12 Feb 2021 02:21:35 +0000 (03:21 +0100)]
Set up row maker and loaders only once in a server-side cursor lifetime
It wasn't happening once per movement, as I feared, but it was happening
exactly twice: once on DECLARE, once on describe_portal(). We actually
don't care about the DECLARE result: it was set on the cursor only to
detect errors, so now that's done manually.
Daniele Varrazzo [Fri, 12 Feb 2021 01:23:16 +0000 (02:23 +0100)]
Don't recalculate loaders when not needed
It's not needed when the new result is guaranteed to have the same
fields as the previous one, which happens when querying in single-row
mode and on server-side cursor fetches.
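The caching idea can be sketched like this (names such as `Transformer.set_result` are illustrative, not psycopg's internals): keep the fields of the last result and skip rebuilding the loaders when the new result's fields match.

```python
class Transformer:
    def __init__(self):
        self._fields = None
        self._loaders = None
        self.rebuilt = 0   # counter, for demonstration only

    def set_result(self, fields):
        # fields: sequence of (name, type_oid) pairs describing the result.
        if fields == self._fields:
            return          # same fields as before: keep the loaders
        self._fields = fields
        self._loaders = [self._make_loader(oid) for _, oid in fields]
        self.rebuilt += 1

    def _make_loader(self, oid):
        return lambda data: data   # placeholder for a per-type loader
```

In single-row mode, or while fetching from a server-side cursor, every result has the same shape, so the comparison short-circuits and loaders are computed only once.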