From: Fardin Alizadeh
Date: Fri, 19 Dec 2025 20:24:23 +0000 (+0330)
Subject: fix typos (#13047)
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=81e72ca9cd4230bd29cebc3965cc6fa0913e26e3;p=thirdparty%2Fsqlalchemy%2Fsqlalchemy.git

fix typos (#13047)

* fix typos

* fix typos

* fix typos

---------

Co-authored-by: fardyn
---

diff --git a/doc/build/changelog/changelog_05.rst b/doc/build/changelog/changelog_05.rst
index c0125f7dee..9ae9470c62 100644
--- a/doc/build/changelog/changelog_05.rst
+++ b/doc/build/changelog/changelog_05.rst
@@ -1254,7 +1254,7 @@
         :tags: postgresql
         :tickets: 1327

-        Refection of unknown PG types won't crash when those
+        Reflection of unknown PG types won't crash when those
         types are specified within a domain.

     .. change::
@@ -2348,7 +2348,7 @@
         :tags: general
         :tickets:

-        global "propigate"->"propagate" change.
+        global "propagate"->"propagate" change.

     .. change::
         :tags: orm
@@ -3666,7 +3666,7 @@
         :tags: general
         :tickets:

-        global "propigate"->"propagate" change.
+        global "propagate"->"propagate" change.

     .. change::
         :tags: orm
diff --git a/doc/build/changelog/changelog_07.rst b/doc/build/changelog/changelog_07.rst
index 300985f021..02e0147076 100644
--- a/doc/build/changelog/changelog_07.rst
+++ b/doc/build/changelog/changelog_07.rst
@@ -1862,7 +1862,7 @@
         There's probably no real-world
         performance hit here; select() objects are almost always made
         ad-hoc, and systems that
-        wish to optimize the re-use of a select()
+        wish to optimize the reuse of a select()
         would be using the "compiled_cache" feature. A hit which
         would occur when calling select.bind has been reduced, but
         the vast majority
diff --git a/doc/build/changelog/changelog_12.rst b/doc/build/changelog/changelog_12.rst
index a0187bc857..4453260d56 100644
--- a/doc/build/changelog/changelog_12.rst
+++ b/doc/build/changelog/changelog_12.rst
@@ -1395,7 +1395,7 @@
     .. change::
         :tags: feature, orm

-        Added new argument :paramref:`.attributes.set_attribute.inititator`
+        Added new argument :paramref:`.attributes.set_attribute.initiator`
         to the :func:`.attributes.set_attribute` function, allowing an
         event token received from a listener function to be propagated
         to subsequent set events.
diff --git a/doc/build/changelog/changelog_14.rst b/doc/build/changelog/changelog_14.rst
index 5cdb6ec7b4..0f18ece0b6 100644
--- a/doc/build/changelog/changelog_14.rst
+++ b/doc/build/changelog/changelog_14.rst
@@ -5045,7 +5045,7 @@ This document details individual issue-level changes made throughout
         Fixed issue where using a :class:`_sql.Select` as a subquery in an ORM
         context would modify the :class:`_sql.Select` in place to disable
         eagerloads on that object, which would then cause that same
-        :class:`_sql.Select` to not eagerload if it were then re-used in a
+        :class:`_sql.Select` to not eagerload if it were then reused in a
         top-level execution context.

@@ -5380,7 +5380,7 @@ This document details individual issue-level changes made throughout
         :tags: usecase, orm
         :tickets: 6267

-        Established support for :func:`_orm.synoynm` in conjunction with
+        Established support for :func:`_orm.synonym` in conjunction with
         hybrid property, assocaitionproxy is set up completely, including
         that synonyms can be established linking to these constructs which
        work fully.
        This is a behavior that was semi-explicitly disallowed previously,
diff --git a/doc/build/changelog/changelog_20.rst b/doc/build/changelog/changelog_20.rst
index a091b0e743..4b1296d5f5 100644
--- a/doc/build/changelog/changelog_20.rst
+++ b/doc/build/changelog/changelog_20.rst
@@ -1580,10 +1580,10 @@
         Fixed issue in history_meta example where the "version" column in the
         versioned table needs to default to the most recent version number in the
         history table on INSERT, to suit the use case of a table where rows are
-        deleted, and can then be replaced by new rows that re-use the same primary
+        deleted, and can then be replaced by new rows that reuse the same primary
         key identity. This fix adds an additional SELECT query per INSERT in the
         main table, which may be inefficient; for cases where primary keys are not
-        re-used, the default function may be omitted. Patch courtesy Philipp H.
+        reused, the default function may be omitted. Patch courtesy Philipp H.
         v. Loewenfeld.

     .. change::
diff --git a/doc/build/changelog/migration_06.rst b/doc/build/changelog/migration_06.rst
index 320f34009a..a8fd5f573c 100644
--- a/doc/build/changelog/migration_06.rst
+++ b/doc/build/changelog/migration_06.rst
@@ -825,7 +825,7 @@ few changes there:
   subclasses NUMERIC, FLOAT, DECIMAL don't generate any
   length or scale unless specified. This also continues to
   include the controversial ``String`` and ``VARCHAR`` types
-  (although MySQL dialect will pre-emptively raise when
+  (although MySQL dialect will preemptively raise when
   asked to render VARCHAR with no length). No defaults are
   assumed, and if they are used in a CREATE TABLE statement,
   an error will be raised if the underlying database does
diff --git a/doc/build/changelog/migration_07.rst b/doc/build/changelog/migration_07.rst
index 4f1c98be1a..4fae00d500 100644
--- a/doc/build/changelog/migration_07.rst
+++ b/doc/build/changelog/migration_07.rst
@@ -163,7 +163,7 @@ scenarios. Highlights of this release include:
   ``cursor.execute`` for a large bulk insert of joined-
   table objects can be cut in half, allowing native DBAPI
   optimizations to take place for those statements passed
-  to ``cursor.executemany()`` (such as re-using a prepared
+  to ``cursor.executemany()`` (such as reusing a prepared
   statement).

 * The codepath invoked when accessing a many-to-one
@@ -199,7 +199,7 @@ scenarios. Highlights of this release include:
 * The collection of "bind processors" for a particular
   ``Compiled`` instance of a statement is also cached on
   the ``Compiled`` object, taking further advantage of the
-  "compiled cache" used by the flush process to re-use the
+  "compiled cache" used by the flush process to reuse the
   same compiled form of INSERT, UPDATE, DELETE statements.

 A demonstration of callcount reduction including a sample
diff --git a/doc/build/changelog/migration_09.rst b/doc/build/changelog/migration_09.rst
index 61cd9a3a30..835b0f43ee 100644
--- a/doc/build/changelog/migration_09.rst
+++ b/doc/build/changelog/migration_09.rst
@@ -1892,7 +1892,7 @@ Firebird ``fdb`` and ``kinterbasdb`` set ``retaining=False`` by default
 Both the ``fdb`` and ``kinterbasdb`` DBAPIs support a flag ``retaining=True``
 which can be passed to the ``commit()`` and ``rollback()`` methods of its
 connection. The documented rationale for this flag is so that the DBAPI
-can re-use internal transaction state for subsequent transactions, for the
+can reuse internal transaction state for subsequent transactions, for the
 purposes of improving performance.
 However, newer documentation refers to
 analyses of Firebird's "garbage collection" which expresses that this flag
 can have a negative effect on the database's ability to process cleanup
diff --git a/doc/build/changelog/migration_10.rst b/doc/build/changelog/migration_10.rst
index 1e61b30857..2e975253a2 100644
--- a/doc/build/changelog/migration_10.rst
+++ b/doc/build/changelog/migration_10.rst
@@ -2117,7 +2117,7 @@ for additional positions:
     [SQL: u'INSERT INTO my_table (id, data) VALUES (?, ?), (?, ?), (?, ?)']
     [parameters: (1, 'd1', 'd2', 'd3')]

-And with a "named" dialect, the same value for "id" would be re-used in
+And with a "named" dialect, the same value for "id" would be reused in
 each row (hence this change is backwards-incompatible with a system that
 relied on this):

diff --git a/doc/build/changelog/migration_11.rst b/doc/build/changelog/migration_11.rst
index 15ef6fcd0c..03a0d1202d 100644
--- a/doc/build/changelog/migration_11.rst
+++ b/doc/build/changelog/migration_11.rst
@@ -1115,7 +1115,7 @@ Note that upon invalidation, the immediate DBAPI connection used by
 being used subsequent to the exception raise, will use a new DBAPI connection
 for subsequent operations upon next use; however, the state of any transaction
 in progress is lost and the appropriate ``.rollback()`` method
-must be called if applicable before this re-use can proceed.
+must be called if applicable before this reuse can proceed.

 In order to identify this change, it was straightforward to demonstrate a pymysql or
 mysqlclient / MySQL-Python connection moving into a corrupted state when
diff --git a/doc/build/changelog/migration_13.rst b/doc/build/changelog/migration_13.rst
index a86e5bc089..529dee5cec 100644
--- a/doc/build/changelog/migration_13.rst
+++ b/doc/build/changelog/migration_13.rst
@@ -468,7 +468,7 @@ descriptor would raise an error. Additionally, it would assume that the first
 class to be seen by ``__get__()`` would be the only parent class it needed to
 know about. This is despite the fact that if a particular class has inheriting
 subclasses, the association proxy is really working on behalf of more than one
-parent class even though it was not explicitly re-used. While even with this
+parent class even though it was not explicitly reused. While even with this
 shortcoming, the association proxy would still get pretty far with its current
 behavior, it still leaves shortcomings in some cases as well as the complex
 problem of determining the best "owner" class.
@@ -1326,7 +1326,7 @@ built-in ``Queue`` class in order to store database connections waiting to be
 used. The ``Queue`` features first-in-first-out behavior, which is intended
 to provide a round-robin use of the database connections that are persistently
 in the pool. However, a potential downside of this is that
-when the utilization of the pool is low, the re-use of each connection in series
+when the utilization of the pool is low, the reuse of each connection in series
 means that a server-side timeout strategy that attempts to reduce unused
 connections is prevented from shutting down these connections.
 To suit this use case, a new flag :paramref:`_sa.create_engine.pool_use_lifo` is added
diff --git a/doc/build/changelog/migration_21.rst b/doc/build/changelog/migration_21.rst
index 78abbd4cee..7c2261a38e 100644
--- a/doc/build/changelog/migration_21.rst
+++ b/doc/build/changelog/migration_21.rst
@@ -369,7 +369,7 @@ when dealing with ORM-enabled :func:`_dml.insert` or :func:`_dml.update`::

 Additionally, a new helper :func:`_sql.from_dml_column` is added, which may be
 used with the :meth:`.hybrid_property.update_expression` hook to indicate
-re-use of a column expression from elsewhere in the UPDATE statement's SET
+reuse of a column expression from elsewhere in the UPDATE statement's SET
 clause::

     from sqlalchemy import from_dml_column
@@ -1211,7 +1211,7 @@ Examples to summarize the change are as follows::
     # omit the driver portion, will use the psycopg dialect
     engine = create_engine("postgresql://user:pass@host/dbname")

-    # indicate the psycopg driver/dialect explcitly (preferred)
+    # indicate the psycopg driver/dialect explicitly (preferred)
     engine = create_engine("postgresql+psycopg://user:pass@host/dbname")

     # use the legacy psycopg2 driver/dialect
@@ -1542,7 +1542,7 @@ Examples to summarize the change are as follows::
     # omit the driver portion, will use the oracledb dialect
     engine = create_engine("oracle://user:pass@host/dbname")

-    # indicate the oracledb driver/dialect explcitly (preferred)
+    # indicate the oracledb driver/dialect explicitly (preferred)
     engine = create_engine("oracle+oracledb://user:pass@host/dbname")

     # use the legacy cx_oracle driver/dialect
diff --git a/doc/build/changelog/whatsnew_20.rst b/doc/build/changelog/whatsnew_20.rst
index f7c2b74f03..8603fcb303 100644
--- a/doc/build/changelog/whatsnew_20.rst
+++ b/doc/build/changelog/whatsnew_20.rst
@@ -682,7 +682,7 @@ example below adds additional ``Annotated`` types in addition to our
 Above, columns that are mapped with ``Mapped[str50]``, ``Mapped[intpk]``,
 or ``Mapped[user_fk]`` draw from both the
 :paramref:`_orm.registry.type_annotation_map` as well as the
-``Annotated`` construct directly in order to re-use pre-established typing
+``Annotated`` construct directly in order to reuse pre-established typing
 and column configurations.

 Optional step - turn mapped classes into dataclasses_
diff --git a/doc/build/core/connections.rst b/doc/build/core/connections.rst
index 13cdbdcf78..6b634ab3bb 100644
--- a/doc/build/core/connections.rst
+++ b/doc/build/core/connections.rst
@@ -69,7 +69,7 @@ When the :class:`_engine.Connection` is closed at the end of the ``with:`` block
 referenced DBAPI connection is :term:`released` to the connection pool. From
 the perspective of the database itself, the connection pool will not actually
 "close" the connection assuming the pool has room to store this connection for
-the next use. When the connection is returned to the pool for re-use, the
+the next use. When the connection is returned to the pool for reuse, the
 pooling mechanism issues a ``rollback()`` call on the DBAPI connection so that
 any transactional state or locks are removed (this is known as
 :ref:`pool_reset_on_return`), and the connection is ready for its next use.
@@ -1264,7 +1264,7 @@ strings that are safe to reuse for many statement invocations, given a
 particular cache key that is keyed to that SQL string. This means
 that any literal values in a statement, such as the LIMIT/OFFSET values for
 a SELECT, can not be hardcoded in the dialect's compilation scheme, as
-the compiled string will not be re-usable.  SQLAlchemy supports rendered
+the compiled string will not be reusable.  SQLAlchemy supports rendered
 bound parameters using the :meth:`_sql.BindParameter.render_literal_execute`
 method which can be applied to the existing ``Select._limit_clause`` and
 ``Select._offset_clause`` attributes by a custom compiler, which
diff --git a/doc/build/core/pooling.rst b/doc/build/core/pooling.rst
index 6b75ea9fcd..991287ea30 100644
--- a/doc/build/core/pooling.rst
+++ b/doc/build/core/pooling.rst
@@ -6,7 +6,7 @@ Connection Pooling
 .. module:: sqlalchemy.pool

 A connection pool is a standard technique used to maintain
-long running connections in memory for efficient re-use,
+long running connections in memory for efficient reuse,
 as well as to provide management for the total number of
 connections an application might use simultaneously.

diff --git a/doc/build/errors.rst b/doc/build/errors.rst
index 122c2fb2c7..c7240bfcc7 100644
--- a/doc/build/errors.rst
+++ b/doc/build/errors.rst
@@ -65,7 +65,7 @@ familiar with.
   does not necessarily establish a new connection to the database at the
   moment the connection object is acquired; it instead consults the
   connection pool for a connection, which will often retrieve an existing
-  connection from the pool to be re-used. If no connections are available,
+  connection from the pool to be reused. If no connections are available,
   the pool will create a new database connection, but only if the
   pool has not surpassed a configured capacity.

diff --git a/doc/build/orm/dataclasses.rst b/doc/build/orm/dataclasses.rst
index 88560f9e20..ec2dbec2c9 100644
--- a/doc/build/orm/dataclasses.rst
+++ b/doc/build/orm/dataclasses.rst
@@ -306,7 +306,7 @@ Integration with Annotated

 The approach introduced at :ref:`orm_declarative_mapped_column_pep593`
 illustrates how to use :pep:`593` ``Annotated`` objects to package whole
-:func:`_orm.mapped_column` constructs for re-use. While ``Annotated`` objects
+:func:`_orm.mapped_column` constructs for reuse. While ``Annotated`` objects
 can be combined with the use of dataclasses, **dataclass-specific keyword
 arguments unfortunately cannot be used within the Annotated construct**. This
 includes :pep:`681`-specific arguments ``init``, ``default``, ``repr``, and
@@ -572,7 +572,7 @@ In the example below, the ``User`` class is declared using
 ``id``, ``name`` and ``password_hash`` as mapped features, but makes use
 of init-only ``password`` and ``repeat_password`` fields to
 represent the user creation process (note: to run this example, replace
-the function ``your_crypt_function_here()`` with a third party crypt
+the function ``your_hash_function_here()`` with a third party hash
 function, such as `bcrypt `_ or
 `argon2-cffi `_)::

@@ -603,7 +603,7 @@ function, such as `bcrypt `_ or
         if password != repeat_password:
             raise ValueError("passwords do not match")

-        self.password_hash = your_crypt_function_here(password)
+        self.password_hash = your_hash_function_here(password)

 The above object is created with parameters ``password`` and
 ``repeat_password``, which are consumed up front so that the ``password_hash``
@@ -611,7 +611,7 @@ variable may be generated::

     >>> u1 = User(name="some_user", password="xyz", repeat_password="xyz")
     >>> u1.password_hash
-    '$6$9ppc... (example crypted string....)'
+    '$6$9ppc... (example hashed string....)'
 .. versionchanged:: 2.0.0rc1  When using :meth:`_orm.registry.mapped_as_dataclass`
    or :class:`.MappedAsDataclass`, fields that do not include the
diff --git a/doc/build/orm/declarative_tables.rst b/doc/build/orm/declarative_tables.rst
index 3ecb67c4fe..9064a4da69 100644
--- a/doc/build/orm/declarative_tables.rst
+++ b/doc/build/orm/declarative_tables.rst
@@ -1085,7 +1085,7 @@ key style that is common to all mapped classes. There also may be common
 column configurations such as timestamps with defaults and other fields of
 pre-established sizes and configurations. We can compose these configurations
 into :func:`_orm.mapped_column` instances that we then bundle directly into
-instances of ``Annotated``, which are then re-used in any number of class
+instances of ``Annotated``, which are then reused in any number of class
 declarations. Declarative will unpack an ``Annotated`` object when provided
 in this manner, skipping over any other directives that don't apply to
 SQLAlchemy and searching only for SQLAlchemy ORM constructs.
diff --git a/doc/build/orm/extensions/asyncio.rst b/doc/build/orm/extensions/asyncio.rst
index 31d2248701..6985c4c34c 100644
--- a/doc/build/orm/extensions/asyncio.rst
+++ b/doc/build/orm/extensions/asyncio.rst
@@ -980,7 +980,7 @@ default pool implementation.

 If an :class:`_asyncio.AsyncEngine` is be passed from one event loop to another,
 the method :meth:`_asyncio.AsyncEngine.dispose()` should be called before it's
-re-used on a new event loop. Failing to do so may lead to a ``RuntimeError``
+reused on a new event loop. Failing to do so may lead to a ``RuntimeError``
 along the lines of
 ``Task got Future attached to a different loop``

diff --git a/doc/build/orm/extensions/baked.rst b/doc/build/orm/extensions/baked.rst
index 8e718ec98c..107424e257 100644
--- a/doc/build/orm/extensions/baked.rst
+++ b/doc/build/orm/extensions/baked.rst
@@ -176,7 +176,7 @@ Rationale
 The "lambda" approach above is a superset of what would be a more
 traditional "parameterized" approach. Suppose we wished to build
 a simple system where we build a :class:`~.query.Query` just once, then
-store it in a dictionary for re-use. This is possible right now by
+store it in a dictionary for reuse. This is possible right now by
 just building up the query, and removing its :class:`.Session` by calling
 ``my_cached_query = query.with_session(None)``::

@@ -193,7 +193,7 @@ just building up the query, and removing its :class:`.Session` by calling
     return query.params(id=id_argument).all()

 The above approach gets us a very minimal performance benefit.
-By re-using a :class:`~.query.Query`, we save on the Python work within
+By reusing a :class:`~.query.Query`, we save on the Python work within
 the ``session.query(Model)`` constructor as well as calling upon
 ``filter(Model.id == bindparam('id'))``, which will skip for us the
 building up of the Core expression as well as sending it to :meth:`_query.Query.filter`.
diff --git a/doc/build/orm/session_events.rst b/doc/build/orm/session_events.rst
index 8ab2842bae..4c192b9b7b 100644
--- a/doc/build/orm/session_events.rst
+++ b/doc/build/orm/session_events.rst
@@ -235,7 +235,7 @@ Above, a custom execution option is passed to
 :meth:`_sql.Select.execution_options` in order to establish a "cache key" that
 will then be intercepted by the :meth:`_orm.SessionEvents.do_orm_execute` hook. This
 cache key is then matched to a :class:`_engine.FrozenResult` object that may be
-present in the cache, and if present, the object is re-used.  The recipe makes
+present in the cache, and if present, the object is reused.  The recipe makes
 use of the :meth:`_engine.Result.freeze` method to "freeze" a
 :class:`_engine.Result` object, which above will contain ORM results, such that
 it can be stored in a cache and used multiple times. In order to return a live
diff --git a/doc/build/orm/versioning.rst b/doc/build/orm/versioning.rst
index 9c08acef68..941a2fcca4 100644
--- a/doc/build/orm/versioning.rst
+++ b/doc/build/orm/versioning.rst
@@ -21,9 +21,9 @@ the value held in memory matches the database value.

 The purpose of this feature is to detect when two concurrent transactions
 are modifying the same row at roughly the same time, or alternatively to provide
-a guard against the usage of a "stale" row in a system that might be re-using
+a guard against the usage of a "stale" row in a system that might be reusing
 data from a previous transaction without refreshing (e.g. if one sets ``expire_on_commit=False``
-with a :class:`.Session`, it is possible to re-use the data from a previous
+with a :class:`.Session`, it is possible to reuse the data from a previous
 transaction).

 .. topic:: Concurrent transaction updates
diff --git a/examples/versioned_history/history_meta.py b/examples/versioned_history/history_meta.py
index 88fb16a004..ab6e3583dd 100644
--- a/examples/versioned_history/history_meta.py
+++ b/examples/versioned_history/history_meta.py
@@ -179,7 +179,7 @@ def _history_mapper(local_mapper):
         "version",
         Integer,
         # if rows are not being deleted from the main table with
-        # subsequent re-use of primary key, this default can be
+        # subsequent reuse of primary key, this default can be
         # "1" instead of running a query per INSERT
         default=default_version_from_history,
         nullable=False,
diff --git a/lib/sqlalchemy/dialects/mssql/base.py b/lib/sqlalchemy/dialects/mssql/base.py
index 94f265ebed..b618564ef5 100644
--- a/lib/sqlalchemy/dialects/mssql/base.py
+++ b/lib/sqlalchemy/dialects/mssql/base.py
@@ -2037,14 +2037,14 @@ class MSSQLCompiler(compiler.SQLCompiler):

     def visit_aggregate_strings_func(self, fn, **kw):
         cl = list(fn.clauses)
-        expr, delimeter = cl[0:2]
+        expr, delimiter = cl[0:2]

         literal_exec = dict(kw)
         literal_exec["literal_execute"] = True

         return (
             f"string_agg({expr._compiler_dispatch(self, **kw)}, "
-            f"{delimeter._compiler_dispatch(self, **literal_exec)})"
+            f"{delimiter._compiler_dispatch(self, **literal_exec)})"
         )

     def visit_pow_func(self, fn, **kw):
diff --git a/lib/sqlalchemy/dialects/mysql/base.py b/lib/sqlalchemy/dialects/mysql/base.py
index 129c6e36ff..c47705b730 100644
--- a/lib/sqlalchemy/dialects/mysql/base.py
+++ b/lib/sqlalchemy/dialects/mysql/base.py
@@ -1363,7 +1363,7 @@ class MySQLCompiler(compiler.SQLCompiler):
         order_by = getattr(fn.clauses, "aggregate_order_by", None)

         cl = list(fn.clauses)
-        expr, delimeter = cl[0:2]
+        expr, delimiter = cl[0:2]

         literal_exec = dict(kw)
         literal_exec["literal_execute"] = True
@@ -1373,13 +1373,13 @@ class MySQLCompiler(compiler.SQLCompiler):
                 f"group_concat({expr._compiler_dispatch(self, **kw)} "
                 f"ORDER BY {order_by._compiler_dispatch(self, **kw)} "
                 f"SEPARATOR "
-                f"{delimeter._compiler_dispatch(self, **literal_exec)})"
+                f"{delimiter._compiler_dispatch(self, **literal_exec)})"
             )
         else:
             return (
                 f"group_concat({expr._compiler_dispatch(self, **kw)} "
                 f"SEPARATOR "
-                f"{delimeter._compiler_dispatch(self, **literal_exec)})"
+                f"{delimiter._compiler_dispatch(self, **literal_exec)})"
             )

     def visit_sequence(self, sequence: sa_schema.Sequence, **kw: Any) -> str:
diff --git a/lib/sqlalchemy/dialects/oracle/base.py b/lib/sqlalchemy/dialects/oracle/base.py
index bbcde831a1..63a8d45cc5 100644
--- a/lib/sqlalchemy/dialects/oracle/base.py
+++ b/lib/sqlalchemy/dialects/oracle/base.py
@@ -2687,7 +2687,7 @@ class OracleDialect(default.DefaultDialect):
             and ObjectKind.TABLE in kind
             and ObjectKind.MATERIALIZED_VIEW not in kind
         ):
-            # cant use EXCEPT ALL / MINUS here because we don't have an
+            # can't use EXCEPT ALL / MINUS here because we don't have an
             # excludable row vs. the query above
             # outerjoin + where null works better on oracle 21 but 11 does
             # not like it at all. this is the next best thing
diff --git a/lib/sqlalchemy/dialects/oracle/provision.py b/lib/sqlalchemy/dialects/oracle/provision.py
index 997ca3b589..9b17674c6d 100644
--- a/lib/sqlalchemy/dialects/oracle/provision.py
+++ b/lib/sqlalchemy/dialects/oracle/provision.py
@@ -211,7 +211,7 @@ def _oracle_post_configure_engine(url, engine, follower_ident):
        # https://github.com/oracle/python-cx_Oracle/issues/519
        # TODO: oracledb claims to have this feature built in somehow,
        # see if that's in use and/or if it needs to be enabled
-       # (or if this doesnt even apply to the newer oracle's we're using)
+       # (or if this doesn't even apply to the newer oracle's we're using)
        try:
            sc = dbapi_connection.stmtcachesize
        except:
diff --git a/lib/sqlalchemy/dialects/postgresql/base.py b/lib/sqlalchemy/dialects/postgresql/base.py
index 80bc93a3d2..edd726fa38 100644
--- a/lib/sqlalchemy/dialects/postgresql/base.py
+++ b/lib/sqlalchemy/dialects/postgresql/base.py
@@ -4455,7 +4455,7 @@ class PGDialect(default.DefaultDialect):
             if isinstance(coltype, DOMAIN):
                 if not default:
                     # domain can override the default value but
-                    # cant set it to None
+                    # can't set it to None
                     if coltype.default is not None:
                         default = coltype.default
diff --git a/lib/sqlalchemy/dialects/sqlite/pysqlite.py b/lib/sqlalchemy/dialects/sqlite/pysqlite.py
index 116c89e8b6..0d629e2d7e 100644
--- a/lib/sqlalchemy/dialects/sqlite/pysqlite.py
+++ b/lib/sqlalchemy/dialects/sqlite/pysqlite.py
@@ -255,7 +255,7 @@ parameter::

 It's been observed that the :class:`.NullPool` implementation incurs an
 extremely small performance overhead for repeated checkouts due to the lack of
-connection re-use implemented by :class:`.QueuePool`. However, it still
+connection reuse implemented by :class:`.QueuePool`. However, it still
 may be beneficial to use this class if the application is experiencing
 issues with files being locked.

diff --git a/lib/sqlalchemy/ext/asyncio/engine.py b/lib/sqlalchemy/ext/asyncio/engine.py
index dfc727a302..69b194db2d 100644
--- a/lib/sqlalchemy/ext/asyncio/engine.py
+++ b/lib/sqlalchemy/ext/asyncio/engine.py
@@ -581,7 +581,7 @@ class AsyncConnection( # type:ignore[misc]
         """
         if not self.dialect.supports_server_side_cursors:
             raise exc.InvalidRequestError(
-                "Cant use `stream` or `stream_scalars` with the current "
+                "Can't use `stream` or `stream_scalars` with the current "
                 "dialect since it does not support server side cursors."
             )
diff --git a/lib/sqlalchemy/ext/hybrid.py b/lib/sqlalchemy/ext/hybrid.py
index 84fe41114d..850348b457 100644
--- a/lib/sqlalchemy/ext/hybrid.py
+++ b/lib/sqlalchemy/ext/hybrid.py
@@ -253,7 +253,7 @@ is not available to SQLAlchemy under :pep:`484` compliance.
 In order to produce a reasonable syntax while remaining typing compliant, the
 :attr:`.hybrid_property.inplace` decorator allows the same
-decorator to be re-used with different method names, while still producing
+decorator to be reused with different method names, while still producing
 a single decorator under one name::

     # correct use which is also accepted by pep-484 tooling
@@ -1563,7 +1563,7 @@ class hybrid_property(interfaces.InspectionAttrInfo, ORMDescriptor[_T]):
         """Return the inplace mutator for this :class:`.hybrid_property`.

         This is to allow in-place mutation of the hybrid, allowing the first
-        hybrid method of a certain name to be re-used in order to add
+        hybrid method of a certain name to be reused in order to add
         more methods without having to name those methods the same, e.g.::

             class Interval(Base):
diff --git a/lib/sqlalchemy/orm/_orm_constructors.py b/lib/sqlalchemy/orm/_orm_constructors.py
index f2f99eac55..ac5ddefc95 100644
--- a/lib/sqlalchemy/orm/_orm_constructors.py
+++ b/lib/sqlalchemy/orm/_orm_constructors.py
@@ -2350,7 +2350,7 @@ def clear_mappers() -> None:
     are never discarded independently of their class. If a mapped class itself
     is garbage collected, its mapper is automatically disposed of as well. As
     such, :func:`.clear_mappers` is only for usage in test suites
-    that re-use the same classes with different mappings, which is itself an
+    that reuse the same classes with different mappings, which is itself an
     extremely rare use case - the only such use case is in fact SQLAlchemy's
     own test suite, and possibly the test suites of other ORM extension
     libraries which intend to test various combinations of mapper construction
diff --git a/lib/sqlalchemy/orm/bulk_persistence.py b/lib/sqlalchemy/orm/bulk_persistence.py
index a608f9a8ea..2397b00fca 100644
--- a/lib/sqlalchemy/orm/bulk_persistence.py
+++ b/lib/sqlalchemy/orm/bulk_persistence.py
@@ -1236,7 +1236,7 @@ class _BulkORMInsert(_ORMDMLState, InsertDMLState):
         # for ORM object loading, like ORMContext, we have to disable
         # result set adapt_to_context, because we will be generating a
         # new statement with specific columns that's cached inside of
-        # an ORMFromStatementCompileState, which we will re-use for
+        # an ORMFromStatementCompileState, which we will reuse for
         # each result.
         if not execution_options:
             execution_options = context._orm_load_exec_options
diff --git a/lib/sqlalchemy/orm/query.py b/lib/sqlalchemy/orm/query.py
index e07d7fc778..5d341d8595 100644
--- a/lib/sqlalchemy/orm/query.py
+++ b/lib/sqlalchemy/orm/query.py
@@ -2901,7 +2901,7 @@ class Query(
         try:
             yield from result  # type: ignore
         except GeneratorExit:
-            # issue #8710 - direct iteration is not re-usable after
+            # issue #8710 - direct iteration is not reusable after
            # an iterable block is broken, so close the result
            result._soft_close()
            raise
diff --git a/lib/sqlalchemy/orm/scoping.py b/lib/sqlalchemy/orm/scoping.py
index f610948ef6..149b3b9132 100644
--- a/lib/sqlalchemy/orm/scoping.py
+++ b/lib/sqlalchemy/orm/scoping.py
@@ -557,7 +557,7 @@ class scoped_session(Generic[_S]):
            :meth:`_orm.Session.close` and :meth:`_orm.Session.reset`.

            :meth:`_orm.Session.close` - a similar method will additionally
-           prevent re-use of the Session when the parameter
+           prevent reuse of the Session when the parameter
            :paramref:`_orm.Session.close_resets_only` is set to ``False``.
""" # noqa: E501 diff --git a/lib/sqlalchemy/orm/session.py b/lib/sqlalchemy/orm/session.py index 1f1333c3d7..9119a9f212 100644 --- a/lib/sqlalchemy/orm/session.py +++ b/lib/sqlalchemy/orm/session.py @@ -1735,7 +1735,7 @@ class Session(_SessionClassMethods, EventTarget): :param close_resets_only: Defaults to ``True``. Determines if the session should reset itself after calling ``.close()`` - or should pass in a no longer usable state, disabling re-use. + or should pass in a no longer usable state, disabling reuse. .. versionadded:: 2.0.22 added flag ``close_resets_only``. A future SQLAlchemy version may change the default value of @@ -2579,7 +2579,7 @@ class Session(_SessionClassMethods, EventTarget): :meth:`_orm.Session.close` and :meth:`_orm.Session.reset`. :meth:`_orm.Session.close` - a similar method will additionally - prevent re-use of the Session when the parameter + prevent reuse of the Session when the parameter :paramref:`_orm.Session.close_resets_only` is set to ``False``. """ self._close_impl(invalidate=False, is_reset=True) diff --git a/lib/sqlalchemy/orm/state_changes.py b/lib/sqlalchemy/orm/state_changes.py index 4581a6a006..1a9bdf2fb1 100644 --- a/lib/sqlalchemy/orm/state_changes.py +++ b/lib/sqlalchemy/orm/state_changes.py @@ -125,7 +125,7 @@ class _StateChange: ) else: raise sa_exc.IllegalStateChangeError( - f"Cant run operation '{fn.__name__}()' here; " + f"Can't run operation '{fn.__name__}()' here; " f"will move to state {moves_to!r} where we are " f"expecting {next_state!r}", code="isce", diff --git a/lib/sqlalchemy/orm/strategies.py b/lib/sqlalchemy/orm/strategies.py index e636ef7dd5..a529c4196f 100644 --- a/lib/sqlalchemy/orm/strategies.py +++ b/lib/sqlalchemy/orm/strategies.py @@ -1752,7 +1752,7 @@ class _SubqueryLoader(_PostLoader): loadopt, ): # note that because the subqueryload object - # does not re-use the cached query, instead always making + # does not reuse the cached query, instead always making # use of the current invoked query, while we have two queries # here (orig and context.query), they are both non-cached # queries and we can transfer the options as is without diff --git a/lib/sqlalchemy/orm/strategy_options.py b/lib/sqlalchemy/orm/strategy_options.py index 96d2024e52..83f02cbb68 100644 --- a/lib/sqlalchemy/orm/strategy_options.py +++ b/lib/sqlalchemy/orm/strategy_options.py @@ -1245,7 +1245,7 @@ class Load(_AbstractLoad): ) elif path_is_property(self.path): - # re-use the lookup which will raise a nicely formatted + # reuse the lookup which will raise a nicely formatted # LoaderStrategyException if strategy: self.path.prop._strategy_lookup(self.path.prop, strategy[0]) diff --git a/lib/sqlalchemy/sql/_elements_constructors.py b/lib/sqlalchemy/sql/_elements_constructors.py index 354db4e903..c4c2cfacc4 100644 --- a/lib/sqlalchemy/sql/_elements_constructors.py +++ b/lib/sqlalchemy/sql/_elements_constructors.py @@ -504,7 +504,7 @@ def from_dml_column(column: _OnlyColumnArgument[_T]) -> DMLTargetCopy[_T]: ) The :func:`_sql.from_dml_column` construct allows automatic copying - of an expression assigned to a different column to be re-used:: + of an expression assigned to a different column to be reused:: >>> stmt = t.insert().values(x=func.foobar(3), y=from_dml_column(t.c.x) + 5) >>> print(stmt) diff --git a/lib/sqlalchemy/sql/compiler.py b/lib/sqlalchemy/sql/compiler.py index 67a28b6601..53030dcf9e 100644 --- a/lib/sqlalchemy/sql/compiler.py +++ b/lib/sqlalchemy/sql/compiler.py @@ -3051,9 +3051,9 @@ class SQLCompiler(Compiled): 
literal_exec["literal_execute"] = True # break up the function into its components so we can apply - # literal_execute to the second argument (the delimeter) + # literal_execute to the second argument (the delimiter) cl = list(fn.clauses) - expr, delimeter = cl[0:2] + expr, delimiter = cl[0:2] if ( order_by is not None and self.dialect.aggregate_order_by_style @@ -3061,13 +3061,13 @@ class SQLCompiler(Compiled): ): return ( f"{use_function_name}({expr._compiler_dispatch(self, **kw)}, " - f"{delimeter._compiler_dispatch(self, **literal_exec)} " + f"{delimiter._compiler_dispatch(self, **literal_exec)} " f"ORDER BY {order_by._compiler_dispatch(self, **kw)})" ) else: return ( f"{use_function_name}({expr._compiler_dispatch(self, **kw)}, " - f"{delimeter._compiler_dispatch(self, **literal_exec)})" + f"{delimiter._compiler_dispatch(self, **literal_exec)})" ) def visit_extract(self, extract, **kwargs): @@ -6084,7 +6084,7 @@ class SQLCompiler(Compiled): # likely the least amount of callcounts, though looks clumsy if self.positional and visiting_cte is None: # if we are inside a CTE, don't count parameters - # here since they wont be for insertmanyvalues. keep + # here since they won't be for insertmanyvalues. keep # visited_bindparam at None so no counting happens. # see #9173 visited_bindparam = [] diff --git a/lib/sqlalchemy/sql/crud.py b/lib/sqlalchemy/sql/crud.py index 3d44b5c5f7..a87b930cbe 100644 --- a/lib/sqlalchemy/sql/crud.py +++ b/lib/sqlalchemy/sql/crud.py @@ -1685,7 +1685,7 @@ def _get_returning_modifiers(compiler, stmt, compile_state, toplevel): implicit_returning = ( # statement itself can veto it need_pks - # the dialect can veto it if it just doesnt support RETURNING + # the dialect can veto it if it just doesn't support RETURNING # with INSERT and dialect.insert_returning # user-defined implicit_returning on Table can veto it @@ -1697,7 +1697,7 @@ def _get_returning_modifiers(compiler, stmt, compile_state, toplevel): and ( # since we support MariaDB and SQLite which also support lastrowid, # decide if we should use lastrowid or RETURNING. for insert - # that didnt call return_defaults() and has just one set of + # that didn't call return_defaults() and has just one set of # parameters, we can use lastrowid. this is more "traditional" # and a lot of weird use cases are supported by it. # SQLite lastrowid times 3x faster than returning, diff --git a/lib/sqlalchemy/sql/elements.py b/lib/sqlalchemy/sql/elements.py index a43981152f..88ad1cb728 100644 --- a/lib/sqlalchemy/sql/elements.py +++ b/lib/sqlalchemy/sql/elements.py @@ -2644,7 +2644,7 @@ class TextClause(AbstractTextClause, inspection.Inspectable["TextClause"]): The :meth:`_expression.TextClause.bindparams` method can be called repeatedly, - where it will re-use existing :class:`.BindParameter` objects to add + where it will reuse existing :class:`.BindParameter` objects to add new information. For example, we can call :meth:`_expression.TextClause.bindparams` first with typing information, and a diff --git a/lib/sqlalchemy/sql/lambdas.py b/lib/sqlalchemy/sql/lambdas.py index 02fcd34131..da07cbce14 100644 --- a/lib/sqlalchemy/sql/lambdas.py +++ b/lib/sqlalchemy/sql/lambdas.py @@ -1202,7 +1202,7 @@ class AnalyzedFunction: """Run the tracker-generated expression through coercion rules. 
        After the user-defined lambda has been invoked to produce a statement
-       for re-use, run it through coercion rules to both check that it's the
+       for reuse, run it through coercion rules to both check that it's the
        correct type of object and also to coerce it to its useful form.
        """
diff --git a/lib/sqlalchemy/sql/selectable.py b/lib/sqlalchemy/sql/selectable.py
index e1bf22856c..8ac353dad9 100644
--- a/lib/sqlalchemy/sql/selectable.py
+++ b/lib/sqlalchemy/sql/selectable.py
@@ -2019,7 +2019,7 @@ class TableValuedAlias(LateralFromClause, Alias):
        """  # noqa: E501

        # note: don't use the @_generative system here, keep a reference
-       # to the original object. otherwise you can have re-use of the
+       # to the original object. otherwise you can have reuse of the
        # python id() of the original which can cause name conflicts if
        # a new anon-name grabs the same identifier as the local anon-name
        # (just saw it happen on CI)
diff --git a/pyproject.toml b/pyproject.toml
index 9da9ea25c0..4676b346b8 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -360,6 +360,6 @@ build = "*"
 archs = ["arm64"]

 # On an Linux Intel runner with qemu installed, build Intel and ARM wheels
-# NOTE: this is overriden in the pipeline using the CIBW_ARCHS_LINUX env variable to speed up the build
+# NOTE: this is overridden in the pipeline using the CIBW_ARCHS_LINUX env variable to speed up the build
 [tool.cibuildwheel.linux]
 archs = ["x86_64", "aarch64"]
diff --git a/test/dialect/mssql/test_compiler.py b/test/dialect/mssql/test_compiler.py
index f8d9d54886..62ef92bb89 100644
--- a/test/dialect/mssql/test_compiler.py
+++ b/test/dialect/mssql/test_compiler.py
@@ -530,7 +530,7 @@ class CompileTest(fixtures.TestBase, AssertsCompiledSQL):
        crit = q.c.myid == table1.c.myid

        if style.plain:
-           # the "plain" style of fetch doesnt use TOP right now, so
+           # the "plain" style of fetch doesn't use TOP right now, so
            # there's an order_by implicit in the row_number part of it
            self.assert_compile(
                select("*").where(crit),
diff --git a/test/dialect/postgresql/test_types.py b/test/dialect/postgresql/test_types.py
index d374a72e17..62966247c5 100644
--- a/test/dialect/postgresql/test_types.py
+++ b/test/dialect/postgresql/test_types.py
@@ -779,7 +779,7 @@ class NamedTypeTest(
    def test_enum_doesnt_construct_ENUM(self):
        """in 2.0 we made ENUM name required. check that Enum adapt to
-       ENUM doesnt call this constructor."""
+       ENUM doesn't call this constructor."""

        e1 = Enum("x", "y")
        eq_(e1.name, None)
diff --git a/test/engine/test_transaction.py b/test/engine/test_transaction.py
index d5c6094a04..948ce6a178 100644
--- a/test/engine/test_transaction.py
+++ b/test/engine/test_transaction.py
@@ -1148,7 +1148,7 @@ class AutoRollbackTest(fixtures.TestBase):
 class IsolationLevelTest(fixtures.TestBase):
    """see also sqlalchemy/testing/suite/test_dialect.py::IsolationLevelTest

-   this suite has sparse_backend so wont take place
+   this suite has sparse_backend so won't take place
    for every dbdriver under a nox run.
    the suite test should cover that end of it
diff --git a/test/ext/asyncio/test_engine.py b/test/ext/asyncio/test_engine.py
index 7d965854cb..ba31a6aa74 100644
--- a/test/ext/asyncio/test_engine.py
+++ b/test/ext/asyncio/test_engine.py
@@ -893,7 +893,7 @@ class AsyncEngineTest(EngineFixture):
        async with async_engine.connect() as c:
            with expect_raises_message(
                exc.InvalidRequestError,
-               "Cant use `stream` or `stream_scalars` with the current "
+               "Can't use `stream` or `stream_scalars` with the current "
                "dialect since it does not support server side cursors.",
            ):
                if method == "stream":
diff --git a/test/ext/test_mutable.py b/test/ext/test_mutable.py
index e83550aecd..c72f182f8f 100644
--- a/test/ext/test_mutable.py
+++ b/test/ext/test_mutable.py
@@ -269,7 +269,7 @@ class MiscTest(fixtures.TestBase):

        decl_base.registry.configure()

-       # the event hook itself doesnt do anything for repeated calls
+       # the event hook itself doesn't do anything for repeated calls
        # already, so there's really nothing else to assert other than there's
        # only one "set" event listener
diff --git a/test/orm/declarative/test_dc_transforms.py b/test/orm/declarative/test_dc_transforms.py
index 6bb07dec0d..1b485857d6 100644
--- a/test/orm/declarative/test_dc_transforms.py
+++ b/test/orm/declarative/test_dc_transforms.py
@@ -593,7 +593,7 @@ class DCTransformsTest(AssertsCompiledSQL, fixtures.TestBase):
        class B(A):
            b_data: Mapped[str] = mapped_column(default="bd")

-       # ensure we didnt break dataclasses contract of removing Field
+       # ensure we didn't break dataclasses contract of removing Field
        # issue #8880
        eq_(A.__dict__["some_field"], 5)
        assert "ctrl_one" not in A.__dict__
diff --git a/test/orm/declarative/test_dc_transforms_future_anno_sync.py b/test/orm/declarative/test_dc_transforms_future_anno_sync.py
index 5f7da5e5b7..851a950115 100644
--- a/test/orm/declarative/test_dc_transforms_future_anno_sync.py
+++ b/test/orm/declarative/test_dc_transforms_future_anno_sync.py
@@ -606,7 +606,7 @@ class DCTransformsTest(AssertsCompiledSQL, fixtures.TestBase):
        class B(A):
            b_data: Mapped[str] = mapped_column(default="bd")

-       # ensure we didnt break dataclasses contract of removing Field
+       # ensure we didn't break dataclasses contract of removing Field
        # issue #8880
        eq_(A.__dict__["some_field"], 5)
        assert "ctrl_one" not in A.__dict__
diff --git a/test/orm/dml/test_bulk_statements.py b/test/orm/dml/test_bulk_statements.py
index d9dbda2399..ba24704604 100644
--- a/test/orm/dml/test_bulk_statements.py
+++ b/test/orm/dml/test_bulk_statements.py
@@ -2772,7 +2772,7 @@ class DMLCompileScenariosTest(testing.AssertsCompiledSQL, fixtures.TestBase):
        # e.g. insert(A). In the update() case, the WHERE clause can also
        # pull in the ORM entity, which is how we found the issue here, but
        # for INSERT there's no current method that does this; returning()
-       # could do this in theory but currently doesnt. So for now, cheat,
+       # could do this in theory but currently doesn't. So for now, cheat,
        # and pretend there's some conversion that's going to propagate
        # from an ORM expression
        coercions.expect(
diff --git a/test/orm/dml/test_orm_upd_del_basic.py b/test/orm/dml/test_orm_upd_del_basic.py
index 34c0465cf7..9c9608db39 100644
--- a/test/orm/dml/test_orm_upd_del_basic.py
+++ b/test/orm/dml/test_orm_upd_del_basic.py
@@ -484,7 +484,7 @@ class UpdateDeleteTest(fixtures.MappedTest):
    ):
        """test #5664.
-       approach is revised in SQLAlchemy 2.0 to not pre-emptively
+       approach is revised in SQLAlchemy 2.0 to not preemptively
        unexpire the involved attributes

        """
diff --git a/test/orm/test_dynamic.py b/test/orm/test_dynamic.py
index 9661c22421..7871dcacf0 100644
--- a/test/orm/test_dynamic.py
+++ b/test/orm/test_dynamic.py
@@ -1667,7 +1667,7 @@ class WriteOnlyBulkTest(
        u1 = User(name="x")
        sess.add(u1)

-       # ha ha! u1 is not persistent yet. autoflush wont happen
+       # ha ha! u1 is not persistent yet. autoflush won't happen
        # until sess.scalars() actually runs. statement has to be
        # created with a pending parameter, not actual parameter
        assert inspect(u1).pending
@@ -1738,7 +1738,7 @@ class WriteOnlyBulkTest(
        )
        sess.add(u1)

-       # ha ha! u1 is not persistent yet. autoflush wont happen
+       # ha ha! u1 is not persistent yet. autoflush won't happen
        # until sess.scalars() actually runs. statement has to be
        # created with a pending parameter, not actual parameter
        assert inspect(u1).pending
@@ -1831,7 +1831,7 @@ class WriteOnlyBulkTest(
        )
        sess.add(u1)

-       # ha ha! u1 is not persistent yet. autoflush wont happen
+       # ha ha! u1 is not persistent yet. autoflush won't happen
        # until sess.scalars() actually runs. statement has to be
        # created with a pending parameter, not actual parameter
        assert inspect(u1).pending
diff --git a/test/orm/test_session_state_change.py b/test/orm/test_session_state_change.py
index e2635abc22..d4794b021a 100644
--- a/test/orm/test_session_state_change.py
+++ b/test/orm/test_session_state_change.py
@@ -201,7 +201,7 @@ class StateMachineTest(fixtures.TestBase):
            eq_(m._state, _NO_CHANGE)
            with expect_raises_message(
                sa_exc.IllegalStateChangeError,
-               r"Cant run operation '_inner_move_to_c\(\)' here; will move "
+               r"Can't run operation '_inner_move_to_c\(\)' here; will move "
                r"to state where we are "
                "expecting ",
            ):
diff --git a/test/sql/test_insert.py b/test/sql/test_insert.py
index eccb9d8ea2..9e00cbebaa 100644
--- a/test/sql/test_insert.py
+++ b/test/sql/test_insert.py
@@ -1405,7 +1405,7 @@ class InsertImplicitReturningTest(
            )
            params = None
        elif paramtype == "params":
-           # for params, compiler doesnt have the value available to look
+           # for params, compiler doesn't have the value available to look
            # at. we assume non-NULL
            stmt = t.insert()
            if insert_null_still_autoincrements:
diff --git a/test/sql/test_insert_exec.py b/test/sql/test_insert_exec.py
index d9476aec40..e362a011b9 100644
--- a/test/sql/test_insert_exec.py
+++ b/test/sql/test_insert_exec.py
@@ -1102,7 +1102,7 @@ class InsertManyValuesTest(fixtures.RemovesEvents, fixtures.TablesTest):
        multiple parameter sets, i.e. "INSERT INTO table (anycol) VALUES
        (DEFAULT) (DEFAULT) (DEFAULT) ... RETURNING col"

-       if the database doesnt support this (like SQLite, mssql), it
+       if the database doesn't support this (like SQLite, mssql), it
        actually runs the statement that many times on the cursor.
        This is much less efficient, but is still more efficient than
        how it worked previously where we'd run the statement that many
diff --git a/test/sql/test_resultset.py b/test/sql/test_resultset.py
index 189850c7ae..f5dbf2b46c 100644
--- a/test/sql/test_resultset.py
+++ b/test/sql/test_resultset.py
@@ -2856,7 +2856,7 @@ class KeyTargetingTest(fixtures.TablesTest):
        This copies the _keymap from one to the other in terms of the
        selected columns of a target selectable.
-       This is used by the statement caching process to re-use the
+       This is used by the statement caching process to reuse the
        CursorResultMetadata from the cached statement against the
        same statement sent separately.
diff --git a/test/sql/test_types.py b/test/sql/test_types.py
index 709d586d2e..44792d135a 100644
--- a/test/sql/test_types.py
+++ b/test/sql/test_types.py
@@ -2401,7 +2401,7 @@ class EnumTest(AssertsCompiledSQL, fixtures.TablesTest):
            ),
        )

-       # the base String() didnt create a constraint or even do any
+       # the base String() didn't create a constraint or even do any
        # events. But Column looked for SchemaType in _variant_mapping
        # and found our type anyway.
        eq_(
diff --git a/test/typing/plain_files/orm/relationship.py b/test/typing/plain_files/orm/relationship.py
index f818791970..737297fac0 100644
--- a/test/typing/plain_files/orm/relationship.py
+++ b/test/typing/plain_files/orm/relationship.py
@@ -50,7 +50,7 @@ class User(Base):
    name: Mapped[str] = mapped_column()
    group_id = mapped_column(ForeignKey("group.id"))

-   # this currently doesnt generate an error. not sure how to get the
+   # this currently doesn't generate an error. not sure how to get the
    # overloads to hit this one, nor am i sure i really want to do that
    # anyway
    name_this_works_atm: Mapped[str] = mapped_column(nullable=True)
diff --git a/tox.ini b/tox.ini
index f95e81ecc6..925c60aea7 100644
--- a/tox.ini
+++ b/tox.ini
@@ -8,7 +8,7 @@ extras=
    sqlite: aiosqlite
    sqlite_file: aiosqlite

-   # asyncpg doesnt build on free threading backends
+   # asyncpg doesn't build on free threading backends
    py{38,39,310,311,312,313,314}-postgresql: postgresql_asyncpg

    mysql: asyncmy