From: Walter Doerwald
Date: Wed, 22 Dec 2021 20:39:34 +0000 (+0100)
Subject: Fix typos and grammar in documentation.
X-Git-Tag: pool-3.1~68
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=d8f856a386fab2d7b27d22f866318354be1da6ca;p=thirdparty%2Fpsycopg.git

Fix typos and grammar in documentation.
---

diff --git a/docs/advanced/adapt.rst b/docs/advanced/adapt.rst
index 891c923d5..5eca13144 100644
--- a/docs/advanced/adapt.rst
+++ b/docs/advanced/adapt.rst
@@ -46,12 +46,12 @@ returned.
   sequence in a format understood by PostgreSQL. The string returned
   *shouldn't be quoted*: the value will be passed to the database using
   functions such as :pq:`PQexecParams()` so quoting and quotes escaping is
-  not necessary. The dumper usually also suggests the server what type to
+  not necessary. The dumper usually also suggests to the server what type to
   use, via its `~psycopg.abc.Dumper.oid` attribute.
 
 - Loaders (objects implementing the `~psycopg.abc.Loader` protocol) are
   the objects used to perform the opposite operation: reading a bytes
-  sequence from PostgreSQL and create a Python object out of it.
+  sequence from PostgreSQL and creating a Python object out of it.
 
 - Dumpers and loaders are instantiated on demand by a `~Transformer` object
   when a query is executed.
@@ -69,7 +69,7 @@ Writing a custom adapter: XML
 -----------------------------
 
 Psycopg doesn't provide adapters for the XML data type, because there are just
-too many ways of handling XML in Python.Creating a loader to parse the
+too many ways of handling XML in Python. Creating a loader to parse the
 `PostgreSQL xml type`__ to `~xml.etree.ElementTree` is very simple, using the
 `psycopg.adapt.Loader` base class and implementing the
 `~psycopg.abc.Loader.load()` method:
@@ -100,7 +100,7 @@ too many ways of handling XML in Python.Creating a loader to parse the
 
 
 The opposite operation, converting Python objects to PostgreSQL, is performed
-by dumpers. The `psycopg.adapt.Dumper` base class makes easy to implement one:
+by dumpers. The `psycopg.adapt.Dumper` base class makes it easy to implement one:
 you only need to implement the `~psycopg.abc.Dumper.dump()` method::
 
     >>> from psycopg.adapt import Dumper
@@ -177,7 +177,7 @@ PostgreSQL but not handled by Python:
     ...
     psycopg.DataError: Python date doesn't support years after 9999: got infinity
 
-One possibility would be to store Python's `datetime.date.max` to PostgreSQL
+One possibility would be to store Python's `datetime.date.max` as PostgreSQL
 infinity. For this, let's create a subclass for the dumper and the loader and
 register them in the working scope (globally or just on a connection or
 cursor):
diff --git a/docs/advanced/async.rst b/docs/advanced/async.rst
index 6d33b88da..a9a49d524 100644
--- a/docs/advanced/async.rst
+++ b/docs/advanced/async.rst
@@ -86,7 +86,7 @@ two steps instead, as in
 
 .. code:: python
 
-    aconn = await psycopg.AsyncConnection.connect():
+    aconn = await psycopg.AsyncConnection.connect()
     async with aconn:
         async with aconn.cursor() as cur:
             await cur.execute(...)
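
A minimal sketch of the two-step async pattern corrected in the hunk above, assuming a reachable database; the DSN and the query are placeholder assumptions, not part of the patched docs.

.. code:: python

    import asyncio

    import psycopg


    async def main() -> None:
        # Step 1: create the connection (no context manager yet).
        aconn = await psycopg.AsyncConnection.connect("dbname=test")  # placeholder DSN
        # Step 2: enter the connection block, then use cursors as usual.
        async with aconn:
            async with aconn.cursor() as cur:
                await cur.execute("SELECT 1")
                print(await cur.fetchone())


    asyncio.run(main())
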
diff --git a/docs/advanced/cursors.rst b/docs/advanced/cursors.rst
index a77ec6478..bfd530e20 100644
--- a/docs/advanced/cursors.rst
+++ b/docs/advanced/cursors.rst
@@ -48,7 +48,7 @@ reasonably small result sets.
 Server-side cursors
 -------------------
 
-PostgreSQL has also its own concept of *cursor* (sometimes also called
+PostgreSQL also has its own concept of *cursor* (sometimes also called
 *portal*). When a database cursor is created, the query is not necessarily
 completely processed: the server might be able to produce results only as they
 are needed. Only the results requested are transmitted to the client: if the
@@ -68,7 +68,7 @@ server (for instance when fetching new records or when moving using
 `~Cursor.scroll()`).
 
 Using a server-side cursor it is possible to process datasets larger than what
-would fit in the client memory. However for small queries they are less
+would fit in the client's memory. However for small queries they are less
 efficient because it takes more commands to receive their result, so you
 should use them only if you need to process huge results or if only a partial
 result is needed.
@@ -114,7 +114,7 @@ you can run a one-off command in the same connection to call it (e.g. using
     conn.execute("SELECT reffunc('curname')")
 
 after which you can create a server-side cursor declared by the same name, and
-call directly the fetch methods, skipping the `~ServerCursor.execute()` call:
+directly call the fetch methods, skipping the `~ServerCursor.execute()` call:
 
 .. code:: python
 
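
For context, a server-side (named) cursor of the kind described above can be used along these lines; the DSN and the table name are illustrative assumptions.

.. code:: python

    import psycopg

    with psycopg.connect("dbname=test") as conn:  # placeholder DSN
        # Passing a name makes this a server-side cursor: rows are fetched
        # from the server in batches instead of being transferred all at once.
        with conn.cursor(name="big_scan") as cur:
            cur.execute("SELECT id, payload FROM big_table")  # placeholder table
            for record in cur:
                print(record)
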
diff --git a/docs/advanced/pool.rst b/docs/advanced/pool.rst
index 7d4ce441b..e370183d3 100644
--- a/docs/advanced/pool.rst
+++ b/docs/advanced/pool.rst
@@ -6,9 +6,8 @@ Connection pools
 ================
 
 A `connection pool`__ is an object managing a set of connections and allowing
-their use to functions needing one. Because the time to establish a new
-connection can be relatively long, keeping connections open can reduce the
-latency of a program operations.
+their use in functions needing one. Because the time to establish a new
+connection can be relatively long, keeping connections open can reduce latency.
 
 .. __: https://en.wikipedia.org/wiki/Connection_pool
 
@@ -54,7 +53,7 @@ until the pool is full or will throw a `PoolTimeout` if the pool isn't ready
 within an allocated time.
 
 The pool background workers create connections according to the parameters
-*conninfo*, *kwargs*, *connection_class* passed to `ConnectionPool`
+*conninfo*, *kwargs*, and *connection_class* passed to `ConnectionPool`
 constructor. Once a connection is created it is also passed to the
 *configure()* callback, if provided, after which it is put in the pool (or
 passed to a client requesting it, if someone is already knocking at the door).
@@ -103,7 +102,7 @@ Pool connection and sizing
 A pool can have a fixed size (specifying no *max_size* or *max_size* =
 *min_size*) or a dynamic size (when *max_size* > *min_size*). In both cases, as
 soon as the pool is created, it will try to acquire *min_size* connections in
-background.
+the background.
 
 If an attempt to create a connection fails, a new attempt will be made soon
 after, using an exponential backoff to increase the time between attempts,
@@ -115,7 +114,7 @@ to restart it.
 
 If more than *min_size* connections are requested concurrently, new ones are
 created, up to *max_size*. Note that the connections are always created by the
-background workers, not by the thread asking the connection: if a client
+background workers, not by the thread asking for the connection: if a client
 requests a new connection, and a previous client terminates its job before the
 new connection is ready, the waiting client will be served the existing
 connection. This is especially useful in scenarios where the time to connect
@@ -134,7 +133,7 @@ What's the right size for the pool
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Big question. Who knows. However, probably not as large as you imagine. Please
-take a look at `this this analysis`__ for some ideas.
+take a look at `this analysis`__ for some ideas.
 
 .. __: https://github.com/brettwooldridge/HikariCP/wiki/About-Pool-Sizing
 
@@ -144,8 +143,8 @@ program, eventually adjusting the size of the pool using the
 `~ConnectionPool.resize()` method.
 
 
-Connections quality
--------------------
+Connection quality
+------------------
 
 The state of the connection is verified when a connection is returned to the
 pool: if a connection is broken during its usage it will be discarded on
@@ -159,7 +158,7 @@ return and a new connection will be created.
 Why not? Because doing so would require an extra network roundtrip: we want to
 save you from its latency. Before getting too angry about it, just think that
 the connection can be lost any moment while your program is using it. As your
-program should be already able to cope with a loss of a connection during its
+program should already be able to cope with a loss of a connection during its
 process, it should be able to tolerate to be served a broken connection:
 unpleasant but not the end of the world.
 
@@ -187,8 +186,8 @@ briefly unavailable and run a quick check on them, returning them to the pool
 if they are still working or creating a new connection if they aren't.
 
 If you set up a similar check in your program, in case the database connection
-is temporarily lost, we cannot do anything for the thread which had taken
-already a connection from the pool, but no other thread should be served a
+is temporarily lost, we cannot do anything for the thread which already had
+taken a connection from the pool, but no other thread should be served a
 broken connection, because `!check()` would empty the pool and refill it with
 working connections, as soon as they are available.
 
@@ -203,7 +202,7 @@ Pool stats
 The pool can return information about its usage using the methods
 `~ConnectionPool.get_stats()` or `~ConnectionPool.pop_stats()`. Both methods
 return the same values, but the latter reset the counters after its use. The
-values can be send to a monitoring system such as Graphite_ or Prometheus_.
+values can be sent to a monitoring system such as Graphite_ or Prometheus_.
 
 .. _Graphite: https://graphiteapp.org/
 .. _Prometheus: https://prometheus.io/
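
A rough usage sketch of the pool behaviour described above, assuming the `psycopg_pool` package is installed; the DSN and the pool sizes are placeholder assumptions.

.. code:: python

    from psycopg_pool import ConnectionPool

    # Background workers open min_size connections; the pool can grow up to max_size.
    pool = ConnectionPool("dbname=test", min_size=4, max_size=10)  # placeholder DSN

    with pool.connection() as conn:
        # The connection is borrowed here and returned to the pool on block exit.
        conn.execute("SELECT 1")

    print(pool.get_stats())  # counters suitable for a monitoring system
    pool.close()
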
diff --git a/docs/advanced/prepare.rst b/docs/advanced/prepare.rst
index 9e4400c05..7e633fc36 100644
--- a/docs/advanced/prepare.rst
+++ b/docs/advanced/prepare.rst
@@ -26,7 +26,7 @@ Statement preparation can be controlled in several ways:
   wasn't already, and executed as prepared from its first use.
 
 - Conversely, passing ``prepare=False`` to `!execute()` will avoid to prepare
-  the query, regardless of the number of times it is executed. The default of
+  the query, regardless of the number of times it is executed. The default for
   the parameter is `!None`, meaning that the query is prepared if the
   conditions described above are met.
 
diff --git a/docs/api/connections.rst b/docs/api/connections.rst
index 143fa63c2..aed62ddef 100644
--- a/docs/api/connections.rst
+++ b/docs/api/connections.rst
@@ -220,7 +220,7 @@ The `!Connection` class
 
         The `~pq.PGconn` libpq connection wrapper underlying the `!Connection`.
 
-        It can be used to send low level commands to PostgreSQL and access to
+        It can be used to send low level commands to PostgreSQL and access
         features not currently wrapped by Psycopg.
 
     .. autoattribute:: info
@@ -243,7 +243,7 @@ The `!Connection` class
 
     .. automethod:: notifies
 
-        Notifies are recevied after using :sql:`LISTEN` in a connection, when
+        Notifies are received after using :sql:`LISTEN` in a connection, when
        any sessions in the database generates a :sql:`NOTIFY` on one of the
        listened channels.
 
diff --git a/docs/api/module.rst b/docs/api/module.rst
index 653c349be..3c3d3c43b 100644
--- a/docs/api/module.rst
+++ b/docs/api/module.rst
@@ -23,7 +23,7 @@ it also exposes the `module-level objects`__ required by the specifications.
 The standard `DBAPI exceptions`__ are exposed both by the `!psycopg` module
 and by the `psycopg.errors` module. The latter also exposes more specific
 exceptions, mapping to the database error states (see
-:ref:`sqlstate-exceptions`.
+:ref:`sqlstate-exceptions`).
 
 .. __: https://www.python.org/dev/peps/pep-0249/#exceptions
 
@@ -47,7 +47,7 @@ exceptions, mapping to the database error states (see
    The default adapters map establishing how Python and PostgreSQL types are
    converted into each other.
 
-   This map is used as template when new connections are created, using
+   This map is used as a template when new connections are created, using
    `psycopg.connect()`. Its `~psycopg.adapt.AdaptersMap.types` attribute is a
    `~psycopg.types.TypesRegistry` containing information about every
    PostgreSQL builtin type, useful for adaptation customisation (see
diff --git a/docs/basic/adapt.rst b/docs/basic/adapt.rst
index 024f22692..03b1be5cb 100644
--- a/docs/basic/adapt.rst
+++ b/docs/basic/adapt.rst
@@ -271,7 +271,7 @@ If you need an even more specific dump customisation only for certain objects
 (including different configurations in the same query) you can specify a
 *dumps* parameter in the
 `~psycopg.types.json.Json`/`~psycopg.types.json.Jsonb` wrapper, which will
-take precedence over what specified by `!set_json_dumps()`.
+take precedence over what is specified by `!set_json_dumps()`.
 
 .. code:: python
 
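
As an illustration of the per-value *dumps* override mentioned in the hunk above, here is a hedged sketch; the DSN, table name, and serialisation function are assumptions.

.. code:: python

    import json

    import psycopg
    from psycopg.types.json import Json, set_json_dumps


    def compact_dumps(obj):
        # Example custom serialisation: no spaces after separators.
        return json.dumps(obj, separators=(",", ":"))


    set_json_dumps(json.dumps)  # global default

    with psycopg.connect("dbname=test") as conn:  # placeholder DSN
        conn.execute(
            "INSERT INTO mytable (data) VALUES (%s)",  # placeholder table
            [Json({"a": 1, "b": [1, 2, 3]}, dumps=compact_dumps)],  # overrides the default
        )
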
diff --git a/docs/basic/copy.rst b/docs/basic/copy.rst
index 2761b3dbf..a11ff12f1 100644
--- a/docs/basic/copy.rst
+++ b/docs/basic/copy.rst
@@ -45,7 +45,7 @@ Writing data row-by-row
 -----------------------
 
 Using a copy operation you can load data into the database from any Python
-iterable (a list of tuple, or any iterable of sequences): the Python values
+iterable (a list of tuples, or any iterable of sequences): the Python values
 are adapted as they would be in normal querying. To perform such operation use
 a :sql:`COPY ... FROM STDIN` with `Cursor.copy()` and use `~Copy.write_row()`
 on the resulting object in a ``with`` block. On exiting the block the
@@ -74,10 +74,10 @@ Binary copy
 
 Binary copy is supported by specifying :sql:`FORMAT BINARY` in the :sql:`COPY`
 statement. In order to load binary data, all the types passed to the database
-must have a binary dumper registered (see see :ref:`binary-data`).
+must have a binary dumper registered (see :ref:`binary-data`).
 
 Note that PostgreSQL is particularly finicky when loading data in binary mode
-and will apply *no cast rule*. This means that e.g. passing the value 100 to
+and will apply *no cast rules*. This means that e.g. passing the value 100 to
 an `integer` column will fail because Psycopg will pass it as a `smallint`
 value. You can work around the problem using the `~Copy.set_types()` method of
 the `!Copy` object and specify carefully the types to dump.
@@ -163,5 +163,5 @@ a fully-async copy operation could be:
             while data := await f.read():
                 await copy.write(data)
 
-The `AsyncCopy` object documentation describe the signature of the
+The `AsyncCopy` object documentation describes the signature of the
 asynchronous methods and the differences from its sync `Copy` counterpart.
diff --git a/docs/basic/from_pg2.rst b/docs/basic/from_pg2.rst
index 49a9c3836..383cfb1a4 100644
--- a/docs/basic/from_pg2.rst
+++ b/docs/basic/from_pg2.rst
@@ -7,7 +7,7 @@ Differences from ``psycopg2``
 =============================
 
-Psycopg 3 uses the common DBAPI structure of many other database adapter and
+Psycopg 3 uses the common DBAPI structure of many other database adapters and
 tries to behave as close as possible to `!psycopg2`. There are however a few
 differences to be aware of.
 
@@ -185,11 +185,11 @@ adaptation system `.
 
 .. _diff-copy:
 
-Copy is no more file-based
---------------------------
+Copy is no longer file-based
+----------------------------
 
 `!psycopg2` exposes :ref:`a few copy methods ` to interact with
-PostgreSQL :sql:`COPY`. Their file-based interface doesn't make easy to load
+PostgreSQL :sql:`COPY`. Their file-based interface doesn't make it easy to load
 dynamically-generated data into a database.
 
 There is now a single `~Cursor.copy()` method, which is similar to
@@ -237,7 +237,7 @@ function_name(...)` or :sql:`CALL procedure_name(...)` instead.
 ``client_encoding`` is gone
 ---------------------------
 
-Psycopg uses automatically the database client encoding to decode data to
+Psycopg automatically uses the database client encoding to decode data to
 Unicode strings. Use `ConnectionInfo.encoding` if you need to read the
 encoding. You can select an encoding at connection time using the
 ``client_encoding`` connection parameter and you can change the encoding of a
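
A minimal row-by-row :sql:`COPY` sketch matching the interface discussed above; the DSN, table, and sample rows are assumptions.

.. code:: python

    import psycopg

    records = [(10, "hello"), (40, "world")]  # sample data

    with psycopg.connect("dbname=test") as conn:  # placeholder DSN
        with conn.cursor() as cur:
            # Rows are adapted like regular query parameters and streamed to the server.
            with cur.copy("COPY mytable (id, data) FROM STDIN") as copy:  # placeholder table
                for record in records:
                    copy.write_row(record)
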
diff --git a/docs/basic/install.rst b/docs/basic/install.rst
index c9bea28bc..8cfbffa53 100644
--- a/docs/basic/install.rst
+++ b/docs/basic/install.rst
@@ -58,7 +58,7 @@ some cases though.
 At the time of writing we don't distribute binary packages for Apple M1 (ARM)
 processors.
 
-If you platform is not supported you should proceed to a :ref:`local
+If your platform is not supported you should proceed to a :ref:`local
 installation ` or a :ref:`pure Python installation
 `.
 
diff --git a/docs/basic/params.rst b/docs/basic/params.rst
index 4d08395f8..04f75108e 100644
--- a/docs/basic/params.rst
+++ b/docs/basic/params.rst
@@ -144,7 +144,7 @@ untrusted source (such as data coming from a form on a web site) an attacker
 could easily craft a malformed string, either gaining access to unauthorized
 data or performing destructive operations on the database. This form of attack
 is called `SQL injection`_ and is known to be one of the most widespread forms
-of attack to database systems. Before continuing, please print `this page`__
+of attack on database systems. Before continuing, please print `this page`__
 as a memo and hang it onto your desk.
 
 .. _SQL injection: https://en.wikipedia.org/wiki/SQL_injection
@@ -156,7 +156,7 @@ and reliable. We must stress this point:
 
 .. warning::
 
-    - Don't merge manually values to a query: hackers from a foreign country
+    - Don't manually merge values to a query: hackers from a foreign country
       will break into your computer and steal not only your disks, but also
       your cds, leaving you only with the three most embarrassing records you
      ever bought. On cassette tapes.
@@ -169,7 +169,7 @@ and reliable. We must stress this point:
      balaclava will find their way to your fridge, drink all your beer, and
      leave your toilet sit up and your toilet paper in the wrong orientation.
 
-    - You don't want to merge manually values to a query: :ref:`use the
+    - You don't want to manually merge values to a query: :ref:`use the
      provided methods ` instead.
 
 The correct way to pass variables in a SQL command is using the second
@@ -199,7 +199,7 @@ PostgreSQL has two different ways to transmit data between client and server:
 available most of the times but not always. Usually the binary format is more
 efficient to use.
 
-Psycopg can support both the formats for each data type. Whenever a value
+Psycopg can support both formats for each data type. Whenever a value
 is passed to a query using the normal ``%s`` placeholder, the best format
 available is chosen (often, but not always, the binary format is picked as
 the best choice).
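
A short sketch of the safe parameter-passing style the section above insists on; the DSN and table are assumptions.

.. code:: python

    import psycopg

    with psycopg.connect("dbname=test") as conn:  # placeholder DSN
        with conn.cursor() as cur:
            # Values travel separately from the query: no manual quoting, no injection.
            cur.execute(
                "INSERT INTO users (name, admin) VALUES (%s, %s)",  # placeholder table
                ("O'Reilly", False),
            )
            # %b forces the binary parameter format where a specific format is wanted.
            cur.execute("SELECT * FROM users WHERE name = %b", ("O'Reilly",))
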
diff --git a/docs/basic/pgtypes.rst b/docs/basic/pgtypes.rst
index dadfc5dbb..0588697cd 100644
--- a/docs/basic/pgtypes.rst
+++ b/docs/basic/pgtypes.rst
@@ -100,7 +100,7 @@ Range adaptation
 ----------------
 
 PostgreSQL `range types`__ are a family of data types representing a range of
-value between two elements. The type of the element is called the range
+values between two elements. The type of the element is called the range
 *subtype*. PostgreSQL offers a few built-in range types and allows the
 definition of custom ones.
 
@@ -125,8 +125,8 @@ different types.
 
     `!Range` objects are immutable, hashable, and support the ``in`` operator
     (checking if an element is within the range). They can be tested for
-    equivalence. Empty ranges evaluate to `!False` in boolean context,
-    nonempty evaluate to `!True`.
+    equivalence. Empty ranges evaluate to `!False` in a boolean context,
+    nonempty ones evaluate to `!True`.
 
     `!Range` objects have the following attributes:
 
@@ -207,7 +207,7 @@ sequence of `~psycopg.types.range.Range` elements.
     you try to add it a `Range[Decimal]`.
 
 Like for `~psycopg.types.range.Range`, built-in multirange objects are adapted
-automatically: if a `!Multirange` objects contains `!Range` with
+automatically: if a `!Multirange` object contains `!Range` with
 `~datetime.date` bounds, it is dumped using the :sql:`datemultirange` OID, and
 :sql:`datemultirange` values are loaded back as `!Multirange[date]`.
 
@@ -273,7 +273,7 @@ database using:
 Because |hstore| is distributed as a contrib module, its oid is not well
 known, so it is necessary to use `!TypeInfo`\.\
 `~psycopg.types.TypeInfo.fetch()` to query the database and get its oid. The
-resulting object you can use passed to
+resulting object can be passed to
 `~psycopg.types.hstore.register_hstore()` to configure dumping `!dict` to
 |hstore| and parsing |hstore| back to `!dict`, in the context where the
 adapter is registered.
diff --git a/docs/basic/transactions.rst b/docs/basic/transactions.rst
index 0cf8dfd54..e79f58ed4 100644
--- a/docs/basic/transactions.rst
+++ b/docs/basic/transactions.rst
@@ -9,7 +9,7 @@ Transaction management
 ======================
 
-Psycopg has a behaviour that may result surprising compared to
+Psycopg has a behaviour that may seem surprising compared to
 :program:`psql`: by default, any database operation will start a new
 transaction. As a consequence, changes made by any cursor of the connection
 will not be visible until `Connection.commit()` is called, and will be
@@ -79,13 +79,13 @@ sequence of database statements:
             cur.execute("INSERT INTO data VALUES (%s)", ("Hello",))
             # This statement is executed inside the transaction
 
-    # No exception the end of the block:
+    # No exception at the end of the block:
     # COMMIT is executed.
 
 This way we don't have to remember to call neither `!close()` nor `!commit()`
-and the database operation have actually a persistent effect. The code might
+and the database operations actually have a persistent effect. The code might
 still do something you don't expect: keep a transaction from the first
-operation to the connection closure. You can have a finer control on the
+operation to the connection closure. You can have a finer control over the
 transactions using an :ref:`autocommit transaction ` and/or
 :ref:`transaction contexts `.
 
@@ -227,7 +227,7 @@ context.
 .. hint::
     The interaction between non-autocommit transactions and transaction
     contexts is probably surprising. Although the non-autocommit default is
-    what demanded by the DBAPI, the personal preference of several experienced
+    what's demanded by the DBAPI, the personal preference of several experienced
     developers is to:
 
     - use a connection block: ``with psycopg.connect(...) as conn``;
@@ -278,7 +278,7 @@ transaction block, by raising the `Rollback` exception. The exception "jumps"
 to the end of a transaction block, rolling back its transaction but allowing
 the program execution to continue from there. By default the exception rolls
 back the innermost transaction block, but any current block can be specified
-as the target. In the following example, an hypothetical `!CancelCommand`
+as the target. In the following example, a hypothetical `!CancelCommand`
 may stop the processing and cancel any operation previously performed,
 but not entirely committed yet.
 
@@ -305,8 +305,8 @@ Transaction characteristics
 You can set `transaction parameters`__ for the transactions that Psycopg
 handles. They affect the transactions started implicitly by non-autocommit
 transactions and the ones started explicitly by `Connection.transaction()` for
-both autocommit and non-autocommit transactions. Leaving these parameters to
-`!None` will leave the behaviour to the server's default (which is controlled
+both autocommit and non-autocommit transactions. Leaving these parameters as
+`!None` will use the server's default behaviour (which is controlled
 by server settings such as default_transaction_isolation__).
 
 .. __: https://www.postgresql.org/docs/current/sql-set-transaction.html
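
A hedged sketch of a transaction block with a targeted rollback, as discussed above; the DSN, table, and condition are assumptions.

.. code:: python

    import psycopg
    from psycopg import Rollback

    undo = True  # placeholder condition

    with psycopg.connect("dbname=test") as conn:  # placeholder DSN
        with conn.transaction() as tx:
            conn.execute("INSERT INTO data VALUES (%s)", ("Hello",))  # placeholder table
            if undo:
                # Roll back only this block; execution continues after it.
                raise Rollback(tx)
        print("still running, insert rolled back")
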
diff --git a/docs/basic/usage.rst b/docs/basic/usage.rst
index b592465ad..3785690ab 100644
--- a/docs/basic/usage.rst
+++ b/docs/basic/usage.rst
@@ -189,7 +189,7 @@ equivalent of:
 Note that, while the above pattern is what most people would use, `connect()`
 doesn't enter a block itself, but returns an "un-entered" connection, so that
 it is still possible to use a connection regardless of the code scope and the
-developer is free to use (and responsible of calling) `~Connection.commit()`,
+developer is free to use (and responsible for calling) `~Connection.commit()`,
 `~Connection.rollback()`, `~Connection.close()` as and where needed.
 
 .. warning::
diff --git a/psycopg/psycopg/connection.py b/psycopg/psycopg/connection.py
index c0b695df0..d2bedf8e3 100644
--- a/psycopg/psycopg/connection.py
+++ b/psycopg/psycopg/connection.py
@@ -351,7 +351,7 @@ class BaseConnection(Generic[Row]):
         """
         Number of times a query is executed before it is prepared.
 
-        - If it is set to 0, every query is prepared the first time is
+        - If it is set to 0, every query is prepared the first time it is
           executed.
        - If it is set to `!None`, prepared statements are disabled on the
          connection.
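
Finally, a sketch of how the `prepare_threshold` attribute documented in the hunk above is typically used; the DSN and queries are placeholders.

.. code:: python

    import psycopg

    with psycopg.connect("dbname=test") as conn:  # placeholder DSN
        conn.prepare_threshold = 0  # prepare every statement from its first execution
        for i in range(5):
            conn.execute("SELECT %s::int", (i,))

        with conn.cursor() as cur:
            # Preparation can also be forced (or disabled) per call.
            cur.execute("SELECT now()", prepare=True)
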