:tags: postgresql
:tickets: 1327
- Refection of unknown PG types won't crash when those
+ Reflection of unknown PG types won't crash when those
types are specified within a domain.
.. change::
:tags: general
:tickets:
- global "propigate"->"propagate" change.
+ global "propagate"->"propagate" change.
.. change::
:tags: orm
:tickets:
- global "propigate"->"propagate" change.
+ global "propagate"->"propagate" change.
.. change::
:tags: orm
There's probably no real-world
performance hit here; select() objects are
almost always made ad-hoc, and systems that
- wish to optimize the re-use of a select()
+ wish to optimize the reuse of a select()
would be using the "compiled_cache" feature.
A hit which would occur when calling select.bind
has been reduced, but the vast majority
.. change::
:tags: feature, orm
- Added new argument :paramref:`.attributes.set_attribute.inititator`
+ Added new argument :paramref:`.attributes.set_attribute.initiator`
to the :func:`.attributes.set_attribute` function, allowing an
event token received from a listener function to be propagated
to subsequent set events.
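A short sketch of the intended pattern, assuming a mapped class
``MyClass`` with attributes ``data`` and ``mirror_data`` (the names are
illustrative)::

    from sqlalchemy import event
    from sqlalchemy.orm import attributes

    @event.listens_for(MyClass.data, "set")
    def on_set(target, value, oldvalue, initiator):
        # forward the received event token so that downstream listeners
        # can recognize this set operation as originating from this
        # listener rather than from user code
        attributes.set_attribute(
            target, "mirror_data", value, initiator=initiator
        )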
Fixed issue where using a :class:`_sql.Select` as a subquery in an ORM
context would modify the :class:`_sql.Select` in place to disable
eagerloads on that object, which would then cause that same
- :class:`_sql.Select` to not eagerload if it were then re-used in a
+ :class:`_sql.Select` to not eagerload if it were then reused in a
top-level execution context.
:tags: usecase, orm
:tickets: 6267
- Established support for :func:`_orm.synoynm` in conjunction with
+ Established support for :func:`_orm.synonym` in conjunction with
- hybrid property, assocaitionproxy is set up completely, including that
+ hybrid property, association proxy is set up completely, including that
synonyms can be established linking to these constructs which work
fully. This is a behavior that was semi-explicitly disallowed previously,
Fixed issue in history_meta example where the "version" column in the
versioned table needs to default to the most recent version number in the
history table on INSERT, to suit the use case of a table where rows are
- deleted, and can then be replaced by new rows that re-use the same primary
+ deleted, and can then be replaced by new rows that reuse the same primary
key identity. This fix adds an additional SELECT query per INSERT in the
main table, which may be inefficient; for cases where primary keys are not
- re-used, the default function may be omitted. Patch courtesy Philipp H.
+ reused, the default function may be omitted. Patch courtesy Philipp H.
v. Loewenfeld.
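A sketch of what such a default function might look like, assuming
``history`` is the history ``Table`` (names are illustrative)::

    from sqlalchemy import func, select

    def default_version_from_history(context):
        # the extra SELECT per INSERT mentioned above: the next version
        # is one greater than the highest version recorded in the
        # history table for this primary key
        current = context.get_current_parameters()
        return context.connection.scalar(
            select(func.coalesce(func.max(history.c.version), 0) + 1)
            .where(history.c.id == current["id"])
        )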
.. change::
subclasses NUMERIC, FLOAT, DECIMAL don't generate any
length or scale unless specified. This also continues to
include the controversial ``String`` and ``VARCHAR`` types
- (although MySQL dialect will pre-emptively raise when
+ (although MySQL dialect will preemptively raise when
asked to render VARCHAR with no length). No defaults are
assumed, and if they are used in a CREATE TABLE statement,
an error will be raised if the underlying database does
``cursor.execute`` for a large bulk insert of joined-
table objects can be cut in half, allowing native DBAPI
optimizations to take place for those statements passed
- to ``cursor.executemany()`` (such as re-using a prepared
+ to ``cursor.executemany()`` (such as reusing a prepared
statement).
* The codepath invoked when accessing a many-to-one
* The collection of "bind processors" for a particular
``Compiled`` instance of a statement is also cached on
the ``Compiled`` object, taking further advantage of the
- "compiled cache" used by the flush process to re-use the
+ "compiled cache" used by the flush process to reuse the
same compiled form of INSERT, UPDATE, DELETE statements.
A demonstration of callcount reduction including a sample
Both the ``fdb`` and ``kinterbasdb`` DBAPIs support a flag ``retaining=True``
which can be passed to the ``commit()`` and ``rollback()`` methods of its
connection. The documented rationale for this flag is so that the DBAPI
-can re-use internal transaction state for subsequent transactions, for the
+can reuse internal transaction state for subsequent transactions, for the
purposes of improving performance. However, newer documentation refers
-to analyses of Firebird's "garbage collection" which expresses that this flag
+to analyses of Firebird's "garbage collection" which express that this flag
can have a negative effect on the database's ability to process cleanup
[SQL: u'INSERT INTO my_table (id, data) VALUES (?, ?), (?, ?), (?, ?)']
[parameters: (1, 'd1', 'd2', 'd3')]
-And with a "named" dialect, the same value for "id" would be re-used in
+And with a "named" dialect, the same value for "id" would be reused in
each row (hence this change is backwards-incompatible with a system that
relied on this):
being used subsequent to the exception raise, will use a new
DBAPI connection for subsequent operations upon next use; however, the state of
any transaction in progress is lost and the appropriate ``.rollback()`` method
-must be called if applicable before this re-use can proceed.
+must be called if applicable before this reuse can proceed.
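A sketch of handling this condition; the URL is illustrative::

    from sqlalchemy import create_engine, text
    from sqlalchemy.exc import DBAPIError

    engine = create_engine("postgresql://user:pass@host/dbname")

    conn = engine.connect()
    trans = conn.begin()
    try:
        conn.execute(text("SELECT 1"))
        trans.commit()
    except DBAPIError as err:
        if err.connection_invalidated:
            # the underlying DBAPI connection was discarded; any
            # transaction state is gone, so roll back before the
            # Connection is used again
            trans.rollback()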
In order to identify this change, it was straightforward to demonstrate a pymysql or
mysqlclient / MySQL-Python connection moving into a corrupted state when
class to be seen by ``__get__()`` would be the only parent class it needed to
know about. This is despite the fact that if a particular class has inheriting
subclasses, the association proxy is really working on behalf of more than one
-parent class even though it was not explicitly re-used. While even with this
+parent class even though it was not explicitly reused. While even with this
shortcoming, the association proxy would still get pretty far with its current
behavior, it still leaves shortcomings in some cases as well as the complex
problem of determining the best "owner" class.
to be used. The ``Queue`` features first-in-first-out behavior, which is
intended to provide a round-robin use of the database connections that are
persistently in the pool. However, a potential downside of this is that
-when the utilization of the pool is low, the re-use of each connection in series
+when the utilization of the pool is low, the reuse of each connection in series
means that a server-side timeout strategy that attempts to reduce unused
connections is prevented from shutting down these connections. To suit
this use case, a new flag :paramref:`_sa.create_engine.pool_use_lifo` is added
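A minimal sketch of enabling the LIFO strategy; the URL and the
companion ``pool_pre_ping`` setting are illustrative::

    from sqlalchemy import create_engine

    # LIFO: the connection most recently returned to the pool is the
    # first checked out again, so rarely-used connections stay idle
    # and can be reaped by server-side timeout policies
    engine = create_engine(
        "postgresql://user:pass@host/dbname",
        pool_use_lifo=True,
        # pre-ping guards against checking out a connection the server
        # has since closed
        pool_pre_ping=True,
    )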
Additionally, a new helper :func:`_sql.from_dml_column` is added, which may be
used with the :meth:`.hybrid_property.update_expression` hook to indicate
-re-use of a column expression from elsewhere in the UPDATE statement's SET
+reuse of a column expression from elsewhere in the UPDATE statement's SET
clause::
from sqlalchemy import from_dml_column
# omit the driver portion, will use the psycopg dialect
engine = create_engine("postgresql://user:pass@host/dbname")
- # indicate the psycopg driver/dialect explcitly (preferred)
+ # indicate the psycopg driver/dialect explicitly (preferred)
engine = create_engine("postgresql+psycopg://user:pass@host/dbname")
# use the legacy psycopg2 driver/dialect
# omit the driver portion, will use the oracledb dialect
engine = create_engine("oracle://user:pass@host/dbname")
- # indicate the oracledb driver/dialect explcitly (preferred)
+ # indicate the oracledb driver/dialect explicitly (preferred)
engine = create_engine("oracle+oracledb://user:pass@host/dbname")
# use the legacy cx_oracle driver/dialect
Above, columns that are mapped with ``Mapped[str50]``, ``Mapped[intpk]``,
or ``Mapped[user_fk]`` draw from both the
:paramref:`_orm.registry.type_annotation_map` as well as the
-``Annotated`` construct directly in order to re-use pre-established typing
+``Annotated`` construct directly in order to reuse pre-established typing
and column configurations.
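As a sketch, declarations along these lines would be defined once and
then shared by any number of mapped classes (the specific column
options are illustrative)::

    from typing import Annotated

    from sqlalchemy import ForeignKey, String
    from sqlalchemy.orm import mapped_column

    # declared once, then reused across mapped classes
    intpk = Annotated[int, mapped_column(primary_key=True)]
    str50 = Annotated[str, mapped_column(String(50))]
    user_fk = Annotated[int, mapped_column(ForeignKey("user_account.id"))]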
Optional step - turn mapped classes into dataclasses_
referenced DBAPI connection is :term:`released` to the connection pool. From
the perspective of the database itself, the connection pool will not actually
"close" the connection assuming the pool has room to store this connection for
-the next use. When the connection is returned to the pool for re-use, the
+the next use. When the connection is returned to the pool for reuse, the
pooling mechanism issues a ``rollback()`` call on the DBAPI connection so that
any transactional state or locks are removed (this is known as
:ref:`pool_reset_on_return`), and the connection is ready for its next use.
a particular cache key that is keyed to that SQL string. This means
that any literal values in a statement, such as the LIMIT/OFFSET values for
a SELECT, can not be hardcoded in the dialect's compilation scheme, as
-the compiled string will not be re-usable. SQLAlchemy supports rendered
+the compiled string will not be reusable. SQLAlchemy supports rendered
bound parameters using the :meth:`_sql.BindParameter.render_literal_execute`
method which can be applied to the existing ``Select._limit_clause`` and
``Select._offset_clause`` attributes by a custom compiler, which
.. module:: sqlalchemy.pool
A connection pool is a standard technique used to maintain
-long running connections in memory for efficient re-use,
+long running connections in memory for efficient reuse,
as well as to provide
management for the total number of connections an application
might use simultaneously.
does not necessarily establish a new connection to the database at the
moment the connection object is acquired; it instead consults the
connection pool for a connection, which will often retrieve an existing
- connection from the pool to be re-used. If no connections are available,
+ connection from the pool to be reused. If no connections are available,
the pool will create a new database connection, but only if the
pool has not surpassed a configured capacity.
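A brief illustration of this checkout lifecycle; the URL and pool
sizing are illustrative::

    from sqlalchemy import create_engine, text

    # QueuePool is the default pool; pool_size plus max_overflow bounds
    # the total number of simultaneous connections
    engine = create_engine(
        "postgresql://user:pass@host/dbname",
        pool_size=5,
        max_overflow=10,
    )

    with engine.connect() as conn:  # checks a connection out of the pool
        conn.execute(text("SELECT 1"))
    # leaving the block returns the connection to the pool for reuse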
The approach introduced at :ref:`orm_declarative_mapped_column_pep593`
illustrates how to use :pep:`593` ``Annotated`` objects to package whole
-:func:`_orm.mapped_column` constructs for re-use. While ``Annotated`` objects
+:func:`_orm.mapped_column` constructs for reuse. While ``Annotated`` objects
can be combined with the use of dataclasses, **dataclass-specific keyword
arguments unfortunately cannot be used within the Annotated construct**. This
includes :pep:`681`-specific arguments ``init``, ``default``, ``repr``, and
class is declared using ``id``, ``name`` and ``password_hash`` as mapped features,
but makes use of init-only ``password`` and ``repeat_password`` fields to
represent the user creation process (note: to run this example, replace
-the function ``your_crypt_function_here()`` with a third party crypt
+the function ``your_hash_function_here()`` with a third party hash
function, such as `bcrypt <https://pypi.org/project/bcrypt/>`_ or
`argon2-cffi <https://pypi.org/project/argon2-cffi/>`_)::
if password != repeat_password:
raise ValueError("passwords do not match")
- self.password_hash = your_crypt_function_here(password)
+ self.password_hash = your_hash_function_here(password)
The above object is created with parameters ``password`` and
``repeat_password``, which are consumed up front so that the ``password_hash``
>>> u1 = User(name="some_user", password="xyz", repeat_password="xyz")
>>> u1.password_hash
- '$6$9ppc... (example crypted string....)'
+ '$6$9ppc... (example hashed string....)'
.. versionchanged:: 2.0.0rc1 When using :meth:`_orm.registry.mapped_as_dataclass`
or :class:`.MappedAsDataclass`, fields that do not include the
common column configurations such as timestamps with defaults and other fields of
pre-established sizes and configurations. We can compose these configurations
into :func:`_orm.mapped_column` instances that we then bundle directly into
-instances of ``Annotated``, which are then re-used in any number of class
+instances of ``Annotated``, which are then reused in any number of class
declarations. Declarative will unpack an ``Annotated`` object
when provided in this manner, skipping over any other directives that don't
apply to SQLAlchemy and searching only for SQLAlchemy ORM constructs.
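For example, a timestamp configuration might be composed once and then
reused; this sketch follows the pattern described above::

    import datetime
    from typing import Annotated

    from sqlalchemy import func
    from sqlalchemy.orm import mapped_column

    # bundle a mapped_column() into Annotated; Declarative unpacks it
    # wherever Mapped[timestamp] is used
    timestamp = Annotated[
        datetime.datetime,
        mapped_column(nullable=False, server_default=func.CURRENT_TIMESTAMP()),
    ]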
If an :class:`_asyncio.AsyncEngine` is be passed from one event loop to another,
the method :meth:`_asyncio.AsyncEngine.dispose()` should be called before it's
-re-used on a new event loop. Failing to do so may lead to a ``RuntimeError``
+reused on a new event loop. Failing to do so may lead to a ``RuntimeError``
along the lines of
``Task <Task pending ...> got Future attached to a different loop``
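A minimal sketch of the pattern; the URL is illustrative::

    import asyncio

    from sqlalchemy.ext.asyncio import create_async_engine

    async def main():
        engine = create_async_engine(
            "postgresql+asyncpg://user:pass@host/dbname"
        )
        async with engine.connect() as conn:
            ...
        # release connections bound to this loop before the engine
        # object could be handed to a different event loop
        await engine.dispose()

    asyncio.run(main())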
The "lambda" approach above is a superset of what would be a more
traditional "parameterized" approach. Suppose we wished to build
a simple system where we build a :class:`~.query.Query` just once, then
-store it in a dictionary for re-use. This is possible right now by
+store it in a dictionary for reuse. This is possible right now by
just building up the query, and removing its :class:`.Session` by calling
``my_cached_query = query.with_session(None)``::
    def run_my_query(session, id_argument):
        query = my_cached_query.with_session(session)
        return query.params(id=id_argument).all()
The above approach gets us a very minimal performance benefit.
-By re-using a :class:`~.query.Query`, we save on the Python work within
+By reusing a :class:`~.query.Query`, we save on the Python work within
the ``session.query(Model)`` constructor as well as calling upon
``filter(Model.id == bindparam('id'))``, which will skip for us the building
up of the Core expression as well as sending it to :meth:`_query.Query.filter`.
:meth:`_sql.Select.execution_options` in order to establish a "cache key" that
will then be intercepted by the :meth:`_orm.SessionEvents.do_orm_execute` hook. This
cache key is then matched to a :class:`_engine.FrozenResult` object that may be
-present in the cache, and if present, the object is re-used. The recipe makes
+present in the cache, and if present, the object is reused. The recipe makes
use of the :meth:`_engine.Result.freeze` method to "freeze" a
:class:`_engine.Result` object, which above will contain ORM results, such that
it can be stored in a cache and used multiple times. In order to return a live
The purpose of this feature is to detect when two concurrent transactions
are modifying the same row at roughly the same time, or alternatively to provide
-a guard against the usage of a "stale" row in a system that might be re-using
+a guard against the usage of a "stale" row in a system that might be reusing
data from a previous transaction without refreshing (e.g. if one sets ``expire_on_commit=False``
-with a :class:`.Session`, it is possible to re-use the data from a previous
+with a :class:`.Session`, it is possible to reuse the data from a previous
transaction).
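A minimal mapping that enables this guard might look like the following
sketch (names are illustrative)::

    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

    class Base(DeclarativeBase):
        pass

    class User(Base):
        __tablename__ = "user_account"

        id: Mapped[int] = mapped_column(primary_key=True)
        version_id: Mapped[int] = mapped_column(nullable=False)

        # the ORM increments this column on UPDATE and raises
        # StaleDataError if the row's version no longer matches
        __mapper_args__ = {"version_id_col": version_id}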
.. topic:: Concurrent transaction updates
"version",
Integer,
# if rows are not being deleted from the main table with
- # subsequent re-use of primary key, this default can be
+ # subsequent reuse of primary key, this default can be
# "1" instead of running a query per INSERT
default=default_version_from_history,
nullable=False,
def visit_aggregate_strings_func(self, fn, **kw):
cl = list(fn.clauses)
- expr, delimeter = cl[0:2]
+ expr, delimiter = cl[0:2]
literal_exec = dict(kw)
literal_exec["literal_execute"] = True
return (
f"string_agg({expr._compiler_dispatch(self, **kw)}, "
- f"{delimeter._compiler_dispatch(self, **literal_exec)})"
+ f"{delimiter._compiler_dispatch(self, **literal_exec)})"
)
def visit_pow_func(self, fn, **kw):
order_by = getattr(fn.clauses, "aggregate_order_by", None)
cl = list(fn.clauses)
- expr, delimeter = cl[0:2]
+ expr, delimiter = cl[0:2]
literal_exec = dict(kw)
literal_exec["literal_execute"] = True
f"group_concat({expr._compiler_dispatch(self, **kw)} "
f"ORDER BY {order_by._compiler_dispatch(self, **kw)} "
f"SEPARATOR "
- f"{delimeter._compiler_dispatch(self, **literal_exec)})"
+ f"{delimiter._compiler_dispatch(self, **literal_exec)})"
)
else:
return (
f"group_concat({expr._compiler_dispatch(self, **kw)} "
f"SEPARATOR "
- f"{delimeter._compiler_dispatch(self, **literal_exec)})"
+ f"{delimiter._compiler_dispatch(self, **literal_exec)})"
)
def visit_sequence(self, sequence: sa_schema.Sequence, **kw: Any) -> str:
and ObjectKind.TABLE in kind
and ObjectKind.MATERIALIZED_VIEW not in kind
):
- # cant use EXCEPT ALL / MINUS here because we don't have an
+ # can't use EXCEPT ALL / MINUS here because we don't have an
# excludable row vs. the query above
# outerjoin + where null works better on oracle 21 but 11 does
# not like it at all. this is the next best thing
# https://github.com/oracle/python-cx_Oracle/issues/519
# TODO: oracledb claims to have this feature built in somehow,
# see if that's in use and/or if it needs to be enabled
- # (or if this doesnt even apply to the newer oracle's we're using)
+ # (or if this doesn't even apply to the newer Oracle versions we're using)
try:
sc = dbapi_connection.stmtcachesize
except:
if isinstance(coltype, DOMAIN):
if not default:
# domain can override the default value but
- # cant set it to None
+ # can't set it to None
if coltype.default is not None:
default = coltype.default
It's been observed that the :class:`.NullPool` implementation incurs an
extremely small performance overhead for repeated checkouts due to the lack of
-connection re-use implemented by :class:`.QueuePool`. However, it still
+connection reuse implemented by :class:`.QueuePool`. However, it still
may be beneficial to use this class if the application is experiencing
issues with files being locked.
"""
if not self.dialect.supports_server_side_cursors:
raise exc.InvalidRequestError(
- "Cant use `stream` or `stream_scalars` with the current "
+ "Can't use `stream` or `stream_scalars` with the current "
"dialect since it does not support server side cursors."
)
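For contrast, a dialect that does support server-side cursors permits
the streaming pattern; a sketch, with an illustrative URL::

    from sqlalchemy import text
    from sqlalchemy.ext.asyncio import create_async_engine

    engine = create_async_engine("postgresql+asyncpg://user:pass@host/dbname")

    async def stream_rows():
        async with engine.connect() as conn:
            # stream() uses a server-side cursor; dialects without that
            # support raise the InvalidRequestError shown above
            result = await conn.stream(text("SELECT * FROM big_table"))
            async for row in result:
                ...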
In order to produce a reasonable syntax while remaining typing compliant,
the :attr:`.hybrid_property.inplace` decorator allows the same
-decorator to be re-used with different method names, while still producing
+decorator to be reused with different method names, while still producing
a single decorator under one name::
# correct use which is also accepted by pep-484 tooling
"""Return the inplace mutator for this :class:`.hybrid_property`.
This is to allow in-place mutation of the hybrid, allowing the first
- hybrid method of a certain name to be re-used in order to add
+ hybrid method of a certain name to be reused in order to add
more methods without having to name those methods the same, e.g.::
class Interval(Base):
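    # (sketch of the continuation; attribute names are illustrative)

    length: Mapped[int]

    @hybrid_property
    def radius(self) -> float:
        return abs(self.length) / 2

    # a method under a different name mutates the same "radius" hybrid
    # in place rather than shadowing it
    @radius.inplace.setter
    def _radius_setter(self, value: float) -> None:
        self.length = value * 2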
are never discarded independently of their class. If a mapped class
itself is garbage collected, its mapper is automatically disposed of as
well. As such, :func:`.clear_mappers` is only for usage in test suites
- that re-use the same classes with different mappings, which is itself an
+ that reuse the same classes with different mappings, which is itself an
extremely rare use case - the only such use case is in fact SQLAlchemy's
own test suite, and possibly the test suites of other ORM extension
libraries which intend to test various combinations of mapper construction
# for ORM object loading, like ORMContext, we have to disable
# result set adapt_to_context, because we will be generating a
# new statement with specific columns that's cached inside of
- # an ORMFromStatementCompileState, which we will re-use for
+ # an ORMFromStatementCompileState, which we will reuse for
# each result.
if not execution_options:
execution_options = context._orm_load_exec_options
try:
yield from result # type: ignore
except GeneratorExit:
- # issue #8710 - direct iteration is not re-usable after
+ # issue #8710 - direct iteration is not reusable after
# an iterable block is broken, so close the result
result._soft_close()
raise
:meth:`_orm.Session.close` and :meth:`_orm.Session.reset`.
:meth:`_orm.Session.close` - a similar method will additionally
- prevent re-use of the Session when the parameter
+ prevent reuse of the Session when the parameter
:paramref:`_orm.Session.close_resets_only` is set to ``False``.
""" # noqa: E501
:param close_resets_only: Defaults to ``True``. Determines if
the session should reset itself after calling ``.close()``
- or should pass in a no longer usable state, disabling re-use.
+ or should move to a no longer usable state, disabling reuse.
.. versionadded:: 2.0.22 added flag ``close_resets_only``.
A future SQLAlchemy version may change the default value of
:meth:`_orm.Session.close` and :meth:`_orm.Session.reset`.
:meth:`_orm.Session.close` - a similar method will additionally
- prevent re-use of the Session when the parameter
+ prevent reuse of the Session when the parameter
:paramref:`_orm.Session.close_resets_only` is set to ``False``.
"""
self._close_impl(invalidate=False, is_reset=True)
)
else:
raise sa_exc.IllegalStateChangeError(
- f"Cant run operation '{fn.__name__}()' here; "
+ f"Can't run operation '{fn.__name__}()' here; "
f"will move to state {moves_to!r} where we are "
f"expecting {next_state!r}",
code="isce",
loadopt,
):
# note that because the subqueryload object
- # does not re-use the cached query, instead always making
+ # does not reuse the cached query, instead always making
# use of the current invoked query, while we have two queries
# here (orig and context.query), they are both non-cached
# queries and we can transfer the options as is without
)
elif path_is_property(self.path):
- # re-use the lookup which will raise a nicely formatted
+ # reuse the lookup which will raise a nicely formatted
# LoaderStrategyException
if strategy:
self.path.prop._strategy_lookup(self.path.prop, strategy[0])
)
The :func:`_sql.from_dml_column` construct allows automatic copying
- of an expression assigned to a different column to be re-used::
+ of an expression assigned to a different column to be reused::
>>> stmt = t.insert().values(x=func.foobar(3), y=from_dml_column(t.c.x) + 5)
>>> print(stmt)
literal_exec["literal_execute"] = True
# break up the function into its components so we can apply
- # literal_execute to the second argument (the delimeter)
+ # literal_execute to the second argument (the delimiter)
cl = list(fn.clauses)
- expr, delimeter = cl[0:2]
+ expr, delimiter = cl[0:2]
if (
order_by is not None
and self.dialect.aggregate_order_by_style
):
return (
f"{use_function_name}({expr._compiler_dispatch(self, **kw)}, "
- f"{delimeter._compiler_dispatch(self, **literal_exec)} "
+ f"{delimiter._compiler_dispatch(self, **literal_exec)} "
f"ORDER BY {order_by._compiler_dispatch(self, **kw)})"
)
else:
return (
f"{use_function_name}({expr._compiler_dispatch(self, **kw)}, "
- f"{delimeter._compiler_dispatch(self, **literal_exec)})"
+ f"{delimiter._compiler_dispatch(self, **literal_exec)})"
)
def visit_extract(self, extract, **kwargs):
# likely the least amount of callcounts, though looks clumsy
if self.positional and visiting_cte is None:
# if we are inside a CTE, don't count parameters
- # here since they wont be for insertmanyvalues. keep
+ # here since they won't be for insertmanyvalues. keep
# visited_bindparam at None so no counting happens.
# see #9173
visited_bindparam = []
implicit_returning = (
# statement itself can veto it
need_pks
- # the dialect can veto it if it just doesnt support RETURNING
+ # the dialect can veto it if it just doesn't support RETURNING
# with INSERT
and dialect.insert_returning
# user-defined implicit_returning on Table can veto it
and (
# since we support MariaDB and SQLite which also support lastrowid,
# decide if we should use lastrowid or RETURNING. for insert
- # that didnt call return_defaults() and has just one set of
+ # that didn't call return_defaults() and has just one set of
# parameters, we can use lastrowid. this is more "traditional"
# and a lot of weird use cases are supported by it.
- # SQLite lastrowid times 3x faster than returning,
+ # SQLite lastrowid timings are about 3x faster than RETURNING,
The :meth:`_expression.TextClause.bindparams`
method can be called repeatedly,
- where it will re-use existing :class:`.BindParameter` objects to add
+ where it will reuse existing :class:`.BindParameter` objects to add
new information. For example, we can call
:meth:`_expression.TextClause.bindparams`
first with typing information, and a second time with value information,
where the two sets of information are combined.
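A sketch of that sequence; the statement and values are illustrative::

    import datetime

    from sqlalchemy import DateTime, String, bindparam, text

    stmt = text(
        "SELECT id FROM user WHERE name=:name AND timestamp=:timestamp"
    )

    # first call: attach typing information only
    stmt = stmt.bindparams(
        bindparam("name", type_=String),
        bindparam("timestamp", type_=DateTime),
    )

    # second call: supply values; the existing typed parameters are
    # reused rather than replaced
    stmt = stmt.bindparams(
        name="jack", timestamp=datetime.datetime(2012, 10, 8, 15, 12, 5)
    )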
"""Run the tracker-generated expression through coercion rules.
After the user-defined lambda has been invoked to produce a statement
- for re-use, run it through coercion rules to both check that it's the
+ for reuse, run it through coercion rules to both check that it's the
correct type of object and also to coerce it to its useful form.
"""
""" # noqa: E501
# note: don't use the @_generative system here, keep a reference
- # to the original object. otherwise you can have re-use of the
+ # to the original object. otherwise you can have reuse of the
# python id() of the original which can cause name conflicts if
# a new anon-name grabs the same identifier as the local anon-name
# (just saw it happen on CI)
archs = ["arm64"]
-# On an Linux Intel runner with qemu installed, build Intel and ARM wheels
+# On a Linux Intel runner with qemu installed, build Intel and ARM wheels
-# NOTE: this is overriden in the pipeline using the CIBW_ARCHS_LINUX env variable to speed up the build
+# NOTE: this is overridden in the pipeline using the CIBW_ARCHS_LINUX env variable to speed up the build
[tool.cibuildwheel.linux]
archs = ["x86_64", "aarch64"]
crit = q.c.myid == table1.c.myid
if style.plain:
- # the "plain" style of fetch doesnt use TOP right now, so
+ # the "plain" style of fetch doesn't use TOP right now, so
# there's an order_by implicit in the row_number part of it
self.assert_compile(
select("*").where(crit),
def test_enum_doesnt_construct_ENUM(self):
"""in 2.0 we made ENUM name required. check that Enum adapt to
- ENUM doesnt call this constructor."""
+ ENUM doesn't call this constructor."""
e1 = Enum("x", "y")
eq_(e1.name, None)
class IsolationLevelTest(fixtures.TestBase):
"""see also sqlalchemy/testing/suite/test_dialect.py::IsolationLevelTest
- this suite has sparse_backend so wont take place
+ this suite has sparse_backend so won't take place
- for every dbdriver under a nox run. the suite test should cover
+ for every dbdriver under a tox run. the suite test should cover
that end of it
async with async_engine.connect() as c:
with expect_raises_message(
exc.InvalidRequestError,
- "Cant use `stream` or `stream_scalars` with the current "
+ "Can't use `stream` or `stream_scalars` with the current "
"dialect since it does not support server side cursors.",
):
if method == "stream":
decl_base.registry.configure()
- # the event hook itself doesnt do anything for repeated calls
+ # the event hook itself doesn't do anything for repeated calls
# already, so there's really nothing else to assert other than there's
# only one "set" event listener
class B(A):
b_data: Mapped[str] = mapped_column(default="bd")
- # ensure we didnt break dataclasses contract of removing Field
+ # ensure we didn't break dataclasses contract of removing Field
# issue #8880
eq_(A.__dict__["some_field"], 5)
assert "ctrl_one" not in A.__dict__
class B(A):
b_data: Mapped[str] = mapped_column(default="bd")
- # ensure we didnt break dataclasses contract of removing Field
+ # ensure we didn't break dataclasses contract of removing Field
# issue #8880
eq_(A.__dict__["some_field"], 5)
assert "ctrl_one" not in A.__dict__
# e.g. insert(A). In the update() case, the WHERE clause can also
# pull in the ORM entity, which is how we found the issue here, but
# for INSERT there's no current method that does this; returning()
- # could do this in theory but currently doesnt. So for now, cheat,
+ # could do this in theory but currently doesn't. So for now, cheat,
# and pretend there's some conversion that's going to propagate
# from an ORM expression
coercions.expect(
):
"""test #5664.
- approach is revised in SQLAlchemy 2.0 to not pre-emptively
+ approach is revised in SQLAlchemy 2.0 to not preemptively
unexpire the involved attributes
"""
u1 = User(name="x")
sess.add(u1)
- # ha ha! u1 is not persistent yet. autoflush wont happen
+ # ha ha! u1 is not persistent yet. autoflush won't happen
# until sess.scalars() actually runs. statement has to be
# created with a pending parameter, not actual parameter
assert inspect(u1).pending
)
sess.add(u1)
- # ha ha! u1 is not persistent yet. autoflush wont happen
+ # ha ha! u1 is not persistent yet. autoflush won't happen
# until sess.scalars() actually runs. statement has to be
# created with a pending parameter, not actual parameter
assert inspect(u1).pending
)
sess.add(u1)
- # ha ha! u1 is not persistent yet. autoflush wont happen
+ # ha ha! u1 is not persistent yet. autoflush won't happen
# until sess.scalars() actually runs. statement has to be
# created with a pending parameter, not actual parameter
assert inspect(u1).pending
eq_(m._state, _NO_CHANGE)
with expect_raises_message(
sa_exc.IllegalStateChangeError,
- r"Cant run operation '_inner_move_to_c\(\)' here; will move "
+ r"Can't run operation '_inner_move_to_c\(\)' here; will move "
r"to state <StateTestChange.c: 3> where we are "
"expecting <StateTestChange.b: 2>",
):
)
params = None
elif paramtype == "params":
- # for params, compiler doesnt have the value available to look
+ # for params, compiler doesn't have the value available to look
# at. we assume non-NULL
stmt = t.insert()
if insert_null_still_autoincrements:
multiple parameter sets, i.e. "INSERT INTO table (anycol) VALUES
(DEFAULT) (DEFAULT) (DEFAULT) ... RETURNING col"
- if the database doesnt support this (like SQLite, mssql), it
+ if the database doesn't support this (like SQLite, mssql), it
actually runs the statement that many times on the cursor.
This is much less efficient, but is still more efficient than
how it worked previously where we'd run the statement that many
This copies the _keymap from one to the other in terms of the
selected columns of a target selectable.
- This is used by the statement caching process to re-use the
+ This is used by the statement caching process to reuse the
CursorResultMetadata from the cached statement against the same
statement sent separately.
),
)
- # the base String() didnt create a constraint or even do any
+ # the base String() didn't create a constraint or even do any
# events. But Column looked for SchemaType in _variant_mapping
# and found our type anyway.
eq_(
name: Mapped[str] = mapped_column()
group_id = mapped_column(ForeignKey("group.id"))
- # this currently doesnt generate an error. not sure how to get the
+ # this currently doesn't generate an error. not sure how to get the
# overloads to hit this one, nor am i sure i really want to do that
# anyway
name_this_works_atm: Mapped[str] = mapped_column(nullable=True)
sqlite: aiosqlite
sqlite_file: aiosqlite
- # asyncpg doesnt build on free threading backends
+ # asyncpg doesn't build on free threading backends
py{38,39,310,311,312,313,314}-postgresql: postgresql_asyncpg
mysql: asyncmy