manager calling form, which invokes these methods automatically, is recommended
as a best practice.
-.. _connections_nested_transactions:
-
-Nesting of Transaction Blocks
------------------------------
-
-.. deprecated:: 1.4 The "transaction nesting" feature of SQLAlchemy is a legacy feature
- that is deprecated in the 1.4 release and will be removed in SQLAlchemy 2.0.
- The pattern has proven to be a little too awkward and complicated, unless an
- application makes more of a first-class framework around the behavior. See
- the following subsection :ref:`connections_avoid_nesting`.
-
-The :class:`.Transaction` object also handles "nested" behavior by keeping
-track of the outermost begin/commit pair. In this example, two functions both
-issue a transaction on a :class:`_engine.Connection`, but only the outermost
-:class:`.Transaction` object actually takes effect when it is committed.
-
-.. sourcecode:: python+sql
-
-    # method_a starts a transaction and calls method_b
-    def method_a(connection):
-        with connection.begin():  # open a transaction
-            method_b(connection)
-
-    # method_b also starts a transaction
-    def method_b(connection):
-        with connection.begin():  # open a transaction - this runs in the
-                                  # context of method_a's transaction
-            connection.execute(text("insert into mytable values ('bat', 'lala')"))
-            connection.execute(mytable.insert(), {"col1": "bat", "col2": "lala"})
-
-    # open a Connection and call method_a
-    with engine.connect() as conn:
-        method_a(conn)
-
-Above, ``method_a`` is called first, which calls ``connection.begin()``. Then
-it calls ``method_b``. When ``method_b`` calls ``connection.begin()``, it just
-increments a counter that is decremented when it calls ``commit()``. If either
-``method_a`` or ``method_b`` calls ``rollback()``, the whole transaction is
-rolled back. The transaction is not committed until ``method_a`` calls the
-``commit()`` method. This "nesting" behavior allows the creation of functions
-which "guarantee" that a transaction will be used if one was not already
-available, but will automatically participate in an enclosing transaction if
-one exists.
-
-.. _connections_avoid_nesting:
-
-Arbitrary Transaction Nesting as an Antipattern
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-With many years of experience, the above "nesting" pattern has not proven to
-be very popular, and where it has been observed in large projects such
-as Openstack, it tends to be complicated.
-
-The most ideal way to organize an application would have a single, or at
-least very few, points at which the "beginning" and "commit" of all
-database transactions is demarcated. This is also the general
-idea discussed in terms of the ORM at :ref:`session_faq_whentocreate`. To
-adapt the example from the previous section to this practice looks like::
-
-
-    # method_a calls method_b
-    def method_a(connection):
-        method_b(connection)
-
-    # method_b uses the connection and assumes the transaction
-    # is external
-    def method_b(connection):
-        connection.execute(text("insert into mytable values ('bat', 'lala')"))
-        connection.execute(mytable.insert(), {"col1": "bat", "col2": "lala"})
-
-    # open a Connection inside of a transaction and call method_a
-    with engine.begin() as conn:
-        method_a(conn)
-
-That is, ``method_a()`` and ``method_b()`` do not deal with the details
-of the transaction at all; the transactional scope of the connection is
-defined **externally** to the functions that have a SQL dialogue with the
-connection.
-
-It may be observed that the above code has fewer lines, and less indentation
-which tends to correlate with lower :term:`cyclomatic complexity`. The
-above code is organized such that ``method_a()`` and ``method_b()`` are always
-invoked from a point at which a transaction is begun. The previous
-version of the example features a ``method_a()`` and a ``method_b()`` that are
-trying to be agnostic of this fact, which suggests they are prepared for
-at least twice as many potential codepaths through them.
-
-.. _connections_subtransactions:
-
-Migrating from the "nesting" pattern
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-As SQLAlchemy's intrinsic-nested pattern is considered legacy, an application
-that for either legacy or novel reasons still seeks to have a context that
-automatically frames transactions should seek to maintain this functionality
-through the use of a custom Python context manager. A similar example is also
-provided in terms of the ORM in the "seealso" section below.
-
-To provide backwards compatibility for applications that make use of this
-pattern, the following context manager or a similar implementation based on
-a decorator may be used::
-
-    import contextlib
-
-    @contextlib.contextmanager
-    def transaction(connection):
-        if not connection.in_transaction():
-            with connection.begin():
-                yield connection
-        else:
-            yield connection
-
-The above contextmanager would be used as::
-
-    # method_a starts a transaction and calls method_b
-    def method_a(connection):
-        with transaction(connection):  # open a transaction
-            method_b(connection)
-
-    # method_b either starts a transaction, or uses the one already
-    # present
-    def method_b(connection):
-        with transaction(connection):  # open a transaction
-            connection.execute(text("insert into mytable values ('bat', 'lala')"))
-            connection.execute(mytable.insert(), {"col1": "bat", "col2": "lala"})
-
-    # open a Connection and call method_a
-    with engine.connect() as conn:
-        method_a(conn)
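The text above also mentions an implementation based on a decorator, which is not shown; one possible sketch (the ``with_transaction`` name is an assumption, not part of SQLAlchemy) relies on the same ``in_transaction()`` / ``begin()`` interface used by the context manager:

```python
import functools


def with_transaction(fn):
    """Wrap ``fn(connection, ...)`` so that a transaction is begun only
    if the connection is not already inside one (hypothetical helper)."""

    @functools.wraps(fn)
    def wrapper(connection, *args, **kwargs):
        if not connection.in_transaction():
            # no enclosing transaction; begin one around the call
            with connection.begin():
                return fn(connection, *args, **kwargs)
        # already in a transaction; just participate in it
        return fn(connection, *args, **kwargs)

    return wrapper
```

As with the context manager version, a decorated function begun inside another decorated function participates in the outermost transaction rather than opening a new one.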
-
-A similar approach may be taken such that connectivity is established
-on demand as well; the below approach features a single-use context manager
-that accesses an enclosing state in order to test if connectivity is already
-present::
-
-    import contextlib
-
-    def connectivity(engine):
-        connection = None
-
-        @contextlib.contextmanager
-        def connect():
-            nonlocal connection
-
-            if connection is None:
-                connection = engine.connect()
-                with connection:
-                    with connection.begin():
-                        yield connection
-            else:
-                yield connection
-
-        return connect
-
-Using the above would look like::
-
-    # method_a passes along connectivity context, at the same time
-    # it chooses to establish a connection by calling "with"
-    def method_a(connectivity):
-        with connectivity():
-            method_b(connectivity)
-
-    # method_b also wants to use a connection from the context, so it
-    # also calls "with:", but also it actually uses the connection.
-    def method_b(connectivity):
-        with connectivity() as connection:
-            connection.execute(text("insert into mytable values ('bat', 'lala')"))
-            connection.execute(mytable.insert(), {"col1": "bat", "col2": "lala"})
-
-    # create a new connection/transaction context object and call
-    # method_a
-    method_a(connectivity(engine))
-
-The above context manager acts not only as a "transaction" context but also
-as a context that manages having an open connection against a particular
-:class:`_engine.Engine`. When using the ORM :class:`_orm.Session`, this
-connectivity management is provided by the :class:`_orm.Session` itself.
-An overview of ORM connectivity patterns is at :ref:`unitofwork_transaction`.
-
-.. seealso::
-
- :ref:`session_subtransactions` - ORM version
-
.. _autocommit:
Library Level (e.g. emulated) Autocommit
    with connection.begin():
        connection.execute(<statement>)
-.. note:: The return value of
-   the :meth:`_engine.Connection.execution_options` method is a so-called
-   "branched" connection under the SQLAlchemy 1.x series when not using
-   :paramref:`_sa.create_engine.future` mode, which is a shallow
-   copy of the original :class:`_engine.Connection` object. Despite this,
-   the ``isolation_level`` execution option applies to the
-   original :class:`_engine.Connection` object and all "branches" overall.
-
-   When using :paramref:`_sa.create_engine.future` mode (i.e. :term:`2.0 style`
-   usage), the concept of these so-called "branched" connections is removed,
-   and :meth:`_engine.Connection.execution_options` returns the **same**
-   :class:`_engine.Connection` object without creating any copies.
+.. tip:: The return value of
+   the :meth:`_engine.Connection.execution_options` method is the same
+   :class:`_engine.Connection` object upon which the method was called,
+   meaning it modifies the state of the :class:`_engine.Connection`
+   object in place. This is a new behavior as of SQLAlchemy 2.0.
+   This behavior does not apply to the :meth:`_engine.Engine.execution_options`
+   method; that method still returns a copy of the :class:`.Engine` and,
+   as described below, may be used to construct multiple :class:`.Engine`
+   objects with different execution options, which nonetheless share the same
+   dialect and connection pool.
The :paramref:`_engine.Connection.execution_options.isolation_level` option may
also be set engine wide, as is often preferable. This is achieved by
============================
This package includes a relatively small number of transitional elements
-to allow "2.0 mode" to take place within SQLAlchemy 1.4. The primary
-objects provided here are :class:`_future.Engine` and :class:`_future.Connection`,
-which are both subclasses of the existing :class:`_engine.Engine` and
-:class:`_engine.Connection` objects with essentially a smaller set of
-methods and the removal of "autocommit".
+to allow "2.0 mode" to take place within SQLAlchemy 1.4.
-Within the 1.4 series, the "2.0" style of engines and connections is enabled
-by passing the :paramref:`_sa.create_engine.future` flag to
-:func:`_sa.create_engine`::
+In the 2.0 release of SQLAlchemy, the objects published here are the same
+:class:`_engine.Engine`, :class:`_engine.Connection` and
+:func:`_sa.create_engine` classes and functions that are
+used by default. The package is here for backwards compatibility with
+SQLAlchemy 1.4.
-    from sqlalchemy import create_engine
-    engine = create_engine("postgresql://user:pass@host/dbname", future=True)
-
-Similarly, with the ORM, to enable "future" behavior in the ORM :class:`.Session`,
-pass the :paramref:`_orm.Session.future` parameter either to the
-:class:`.Session` constructor directly, or via the :class:`_orm.sessionmaker`
-class::
-
-    from sqlalchemy.orm import sessionmaker
-
-    Session = sessionmaker(engine, future=True)
+The ``sqlalchemy.future`` package will be deprecated in a subsequent
+2.x release and eventually removed.
.. seealso::
:ref:`migration_20_toplevel` - Introduction to the 2.0 series of SQLAlchemy
-.. module:: sqlalchemy.future
-
-.. autoclass:: sqlalchemy.future.Connection
-    :members:
-
-.. autofunction:: sqlalchemy.future.create_engine
-
-.. autoclass:: sqlalchemy.future.Engine
-    :members:
-
-.. autofunction:: sqlalchemy.future.select
-
@event.listens_for(some_engine, "engine_connect")
def ping_connection(connection, branch):
    if branch:
-        # "branch" refers to a sub-connection of a connection,
-        # we don't want to bother pinging on these.
+        # this parameter is always False as of SQLAlchemy 2.0,
+        # but is still accepted by the event hook. In 1.x versions
+        # of SQLAlchemy, "branched" connections should be skipped.
        return
    try:
.. _error_8s2b:
-Can't reconnect until invalid transaction is rolled back
-----------------------------------------------------------
+Can't reconnect until invalid transaction is rolled back. Please rollback() fully before proceeding
+-----------------------------------------------------------------------------------------------------
This error condition refers to the case where a :class:`_engine.Connection` was
invalidated, either due to a database disconnect detection or due to an
explicit call to :meth:`_engine.Connection.invalidate`, but there is still a
-transaction present that was initiated by the :meth:`_engine.Connection.begin`
-method. When a connection is invalidated, any :class:`_engine.Transaction`
+transaction present that was initiated either explicitly by the :meth:`_engine.Connection.begin`
+method, or due to the connection automatically beginning a transaction as occurs
+in the 2.x series of SQLAlchemy when any SQL statements are emitted. When a connection is invalidated, any :class:`_engine.Transaction`
that was in progress is now in an invalid state, and must be explicitly rolled
back in order to remove it from the :class:`_engine.Connection`.
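The sequence can be reproduced in a short runnable sketch (using an in-memory SQLite database for illustration); the explicit ``Transaction.rollback()`` call at the end is the "rollback() fully" that the error message asks for:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

conn = engine.connect()
trans = conn.begin()
conn.execute(text("select 1"))

# simulate a disconnect; the Transaction is now in an invalid state
conn.invalidate()

caught = None
try:
    # any attempt to use the connection now raises PendingRollbackError,
    # until the invalid transaction is explicitly rolled back
    conn.execute(text("select 1"))
except Exception as err:
    caught = type(err).__name__

trans.rollback()  # removes the invalid transaction from the Connection

# the connection transparently reconnects and may be used again
result = conn.execute(text("select 1")).scalar()
```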
-.. _error_8s2a:
-
-This connection is on an inactive transaction. Please rollback() fully before proceeding
-------------------------------------------------------------------------------------------
-
-This error condition was added to SQLAlchemy as of version 1.4. The error
-refers to the state where a :class:`_engine.Connection` is placed into a
-transaction using a method like :meth:`_engine.Connection.begin`, and then a
-further "marker" transaction is created within that scope; the "marker"
-transaction is then rolled back using :meth:`.Transaction.rollback` or closed
-using :meth:`.Transaction.close`, however the outer transaction is still
-present in an "inactive" state and must be rolled back.
-
-The pattern looks like::
-
-    engine = create_engine(...)
-
-    connection = engine.connect()
-    transaction1 = connection.begin()
-
-    # this is a "sub" or "marker" transaction, a logical nesting
-    # structure based on "real" transaction transaction1
-    transaction2 = connection.begin()
-    transaction2.rollback()
-
-    # transaction1 is still present and needs explicit rollback,
-    # so this will raise
-    connection.execute(text("select 1"))
-
-Above, ``transaction2`` is a "marker" transaction, which indicates a logical
-nesting of transactions within an outer one; while the inner transaction
-can roll back the whole transaction via its rollback() method, its commit()
-method has no effect except to close the scope of the "marker" transaction
-itself. The call to ``transaction2.rollback()`` has the effect of
-**deactivating** transaction1 which means it is essentially rolled back
-at the database level, however is still present in order to accommodate
-a consistent nesting pattern of transactions.
-
-The correct resolution is to ensure the outer transaction is also
-rolled back::
-
-    transaction1.rollback()
-
-This pattern is not commonly used in Core. Within the ORM, a similar issue can
-occur which is the product of the ORM's "logical" transaction structure; this
-is described in the FAQ entry at :ref:`faq_session_rollback`.
-
-The "subtransaction" pattern is to be removed in SQLAlchemy 2.0 so that this
-particular programming pattern will no longer be available and this
-error message will no longer occur in Core.
-
.. _error_dbapi:
DBAPI Errors
of classes; "joined", "single", and "concrete". The section
:ref:`inheritance_toplevel` describes inheritance mapping fully.
-    generative
-        A term that SQLAlchemy uses to refer what's normally known
-        as :term:`method chaining`; see that term for details.
-
    method chaining
-        An object-oriented technique whereby the state of an object
-        is constructed by calling methods on the object. The
-        object features any number of methods, each of which return
-        a new object (or in some cases the same object) with
-        additional state added to the object.
+    generative
+        "Method chaining", referred to within SQLAlchemy documentation as
+        "generative", is an object-oriented technique whereby the state of an
+        object is constructed by calling methods on the object. The object
+        features any number of methods, each of which return a new object (or
+        in some cases the same object) with additional state added to the
+        object.
        The two SQLAlchemy objects that make the most use of
        method chaining are the :class:`_expression.Select`
        :class:`_expression.Select` object with additional qualifiers
        added.
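As a generic illustration of the pattern (a standalone sketch, not SQLAlchemy's actual implementation), a generative object returns a new copy of itself on each call, so that earlier objects remain usable:

```python
class Query:
    """Minimal generative object: each method returns a new copy with
    additional state, leaving the original unchanged (illustrative only)."""

    def __init__(self, criteria=()):
        self._criteria = tuple(criteria)

    def filter(self, criterion):
        # generative step: build and return a new Query; never mutate self
        return Query(self._criteria + (criterion,))

    def __repr__(self):
        return "Query(%s)" % " AND ".join(self._criteria)


q1 = Query()
q2 = q1.filter("x > 5").filter("y = 'a'")
```

Because each call returns a new object, ``q1`` is still an empty query after ``q2`` is built, which mirrors how qualifiers are accumulated on SQLAlchemy's generative objects.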
-        .. seealso::
-
-            :term:`generative`
-
release
releases
released
:attr:`_orm.ORMExecuteState.execution_options`
.. attribute:: execution_options
+
The complete dictionary of current execution options.
This is a merge of the statement level options with the
locally passed execution options.
+        .. seealso::
+
+            :attr:`_orm.ORMExecuteState.local_execution_options`
+
+            :meth:`_sql.Executable.execution_options`
+
+            :ref:`orm_queryguide_execution_options`
+
.. autoclass:: Session
:members:
:inherited-members:
from ...types import VARBINARY
from ...util import topological
-AUTOCOMMIT_RE = re.compile(
-    r"\s*(?:UPDATE|INSERT|CREATE|DELETE|DROP|ALTER|LOAD +DATA|REPLACE)",
-    re.I | re.UNICODE,
-)
SET_RE = re.compile(
    r"\s*SET\s+(?:(?:GLOBAL|SESSION)\s+)?\w", re.I | re.UNICODE
)
class MySQLExecutionContext(default.DefaultExecutionContext):
-    def should_autocommit_text(self, statement):
-        return AUTOCOMMIT_RE.match(statement)
-
    def create_server_side_cursor(self):
        if self.dialect.supports_server_side_cursors:
            return self._dbapi_connection.cursor(self.dialect._sscursor)
IDX_USING = re.compile(r"^(?:btree|hash|gist|gin|[\w_]+)$", re.I)
-AUTOCOMMIT_REGEXP = re.compile(
-    r"\s*(?:UPDATE|INSERT|CREATE|DELETE|DROP|ALTER|GRANT|REVOKE|"
-    "IMPORT FOREIGN SCHEMA|REFRESH MATERIALIZED VIEW|TRUNCATE)",
-    re.I | re.UNICODE,
-)
-
RESERVED_WORDS = set(
    [
        "all",
        return super(PGExecutionContext, self).get_insert_default(column)
-    def should_autocommit_text(self, statement):
-        return AUTOCOMMIT_REGEXP.match(statement)
-
class PGReadOnlyConnectionCharacteristic(
    characteristics.ConnectionCharacteristic
@drop_all_schema_objects_pre_tables.for_db("postgresql")
def drop_all_schema_objects_pre_tables(cfg, eng):
    with eng.connect().execution_options(isolation_level="AUTOCOMMIT") as conn:
-        for xid in conn.execute("select gid from pg_prepared_xacts").scalars():
+        for xid in conn.exec_driver_sql(
+            "select gid from pg_prepared_xacts"
+        ).scalars():
            conn.execute("ROLLBACK PREPARED '%s'" % xid)
from .interfaces import Connectable
from .interfaces import ConnectionEventsTarget
from .interfaces import ExceptionContext
-from .util import _distill_params
from .util import _distill_params_20
+from .util import _distill_raw_params
from .util import TransactionalContext
from .. import exc
-from .. import inspection
from .. import log
from .. import util
from ..sql import compiler
"""
_EMPTY_EXECUTION_OPTS = util.immutabledict()
+NO_OPTIONS = util.immutabledict()
class Connection(Connectable):
    """Provides high-level functionality for a wrapped DB-API connection.
-    **This is the SQLAlchemy 1.x.x version** of the :class:`_engine.Connection`
-    class. For the :term:`2.0 style` version, which features some API
-    differences, see :class:`_future.Connection`.
-
    The :class:`_engine.Connection` object is procured by calling
    the :meth:`_engine.Engine.connect` method of the :class:`_engine.Engine`
    object, and provides services for execution of SQL statements as well
    """
-    _is_future = False
    _sqla_logger_namespace = "sqlalchemy.engine.Connection"
    # used by sqlalchemy.engine.util.TransactionalContext
        self,
        engine,
        connection=None,
-        _branch_from=None,
-        _execution_options=None,
-        _dispatch=None,
        _has_events=None,
        _allow_revalidate=True,
+        _allow_autobegin=True,
    ):
        """Construct a new Connection."""
        self.engine = engine
-        self.dialect = engine.dialect
-        self.__branch_from = _branch_from
+        self.dialect = dialect = engine.dialect
-        if _branch_from:
-            # branching is always "from" the root connection
-            assert _branch_from.__branch_from is None
-            self._dbapi_connection = connection
-            self._execution_options = _execution_options
-            self._echo = _branch_from._echo
-            self.dispatch = _dispatch
-            self._has_events = _branch_from._has_events
+        if connection is None:
+            try:
+                self._dbapi_connection = engine.raw_connection()
+            except dialect.dbapi.Error as err:
+                Connection._handle_dbapi_exception_noconnection(
+                    err, dialect, engine
+                )
+                raise
        else:
-            self._dbapi_connection = (
-                connection
-                if connection is not None
-                else engine.raw_connection()
-            )
-
-            self._transaction = self._nested_transaction = None
-            self.__savepoint_seq = 0
-            self.__in_begin = False
-
-            self.__can_reconnect = _allow_revalidate
-            self._echo = self.engine._should_log_info()
+            self._dbapi_connection = connection
-            if _has_events is None:
-                # if _has_events is sent explicitly as False,
-                # then don't join the dispatch of the engine; we don't
-                # want to handle any of the engine's events in that case.
-                self.dispatch = self.dispatch._join(engine.dispatch)
-            self._has_events = _has_events or (
-                _has_events is None and engine._has_events
-            )
+        self._transaction = self._nested_transaction = None
+        self.__savepoint_seq = 0
+        self.__in_begin = False
+
+        self.__can_reconnect = _allow_revalidate
+        self._allow_autobegin = _allow_autobegin
+        self._echo = self.engine._should_log_info()
+
+        if _has_events is None:
+            # if _has_events is sent explicitly as False,
+            # then don't join the dispatch of the engine; we don't
+            # want to handle any of the engine's events in that case.
+            self.dispatch = self.dispatch._join(engine.dispatch)
+        self._has_events = _has_events or (
+            _has_events is None and engine._has_events
+        )
-            assert not _execution_options
-            self._execution_options = engine._execution_options
+        self._execution_options = engine._execution_options
        if self._has_events or self.engine._has_events:
-            self.dispatch.engine_connect(self, _branch_from is not None)
+            self.dispatch.engine_connect(self)
@util.memoized_property
def _message_formatter(self):
else:
return name
-    def _branch(self):
-        """Return a new Connection which references this Connection's
-        engine and connection; whose close() method does nothing.
-
-        .. deprecated:: 1.4 the "branching" concept will be removed in
-           SQLAlchemy 2.0 as well as the "Connection.connect()" method which
-           is the only consumer for this.
-
-        The Core uses this very sparingly, only in the case of
-        custom SQL default functions that are to be INSERTed as the
-        primary key of a row where we need to get the value back, so we have
-        to invoke it distinctly - this is a very uncommon case.
-
-        Userland code accesses _branch() when the connect()
-        method is called. The branched connection
-        acts as much as possible like the parent, except that it stays
-        connected when a close() event occurs.
-
-        """
-        return self.engine._connection_cls(
-            self.engine,
-            self._dbapi_connection,
-            _branch_from=self.__branch_from if self.__branch_from else self,
-            _execution_options=self._execution_options,
-            _has_events=self._has_events,
-            _dispatch=self.dispatch,
-        )
-
-    def _generate_for_options(self):
-        """define connection method chaining behavior for execution_options"""
-
-        if self._is_future:
-            return self
-        else:
-            c = self.__class__.__new__(self.__class__)
-            c.__dict__ = self.__dict__.copy()
-            return c
-
    def __enter__(self):
        return self
        self.close()
    def execution_options(self, **opt):
-        r""" Set non-SQL options for the connection which take effect
+        r"""Set non-SQL options for the connection which take effect
        during execution.
-        For a "future" style connection, this method returns this same
-        :class:`_future.Connection` object with the new options added.
-
-        For a legacy connection, this method returns a copy of this
-        :class:`_engine.Connection` which references the same underlying DBAPI
-        connection, but also defines the given execution options which will
-        take effect for a call to
-        :meth:`execute`. As the new :class:`_engine.Connection` references the
-        same underlying resource, it's usually a good idea to ensure that
-        the copies will be discarded immediately, which is implicit if used
-        as in::
-
-            result = connection.execution_options(stream_results=True).\
-                execute(stmt)
-
-        Note that any key/value can be passed to
-        :meth:`_engine.Connection.execution_options`,
-        and it will be stored in the
-        ``_execution_options`` dictionary of the :class:`_engine.Connection`.
-        It
-        is suitable for usage by end-user schemes to communicate with
-        event listeners, for example.
+        This method modifies this :class:`_engine.Connection` **in-place**;
+        the return value is the same :class:`_engine.Connection` object
+        upon which the method is called. Note that this is in contrast
+        to the behavior of the ``execution_options`` methods on other
+        objects such as :meth:`_engine.Engine.execution_options` and
+        :meth:`_sql.Executable.execution_options`. The rationale is that many
+        such execution options necessarily modify the state of the base
+        DBAPI connection in any case so there is no feasible means of
+        keeping the effect of such an option localized to a "sub" connection.
+
+        .. versionchanged:: 2.0 The :meth:`_engine.Connection.execution_options`
+           method, in contrast to other objects with this method, modifies
+           the connection in-place without creating a copy of it.
+
+        As discussed elsewhere, the :meth:`_engine.Connection.execution_options`
+        method accepts any arbitrary parameters including user defined names.
+        All parameters given are consumable in a number of ways including
+        by using the :meth:`_engine.Connection.get_execution_options` method.
+        See the examples at :meth:`_sql.Executable.execution_options`
+        and :meth:`_engine.Engine.execution_options`.
        The keywords that are currently recognized by SQLAlchemy itself
        include all those listed under :meth:`.Executable.execution_options`,
        as well as others that are specific to :class:`_engine.Connection`.
- :param autocommit: Available on: Connection, statement.
- When True, a COMMIT will be invoked after execution
- when executed in 'autocommit' mode, i.e. when an explicit
- transaction is not begun on the connection. Note that this
- is **library level, not DBAPI level autocommit**. The DBAPI
- connection will remain in a real transaction unless the
- "AUTOCOMMIT" isolation level is used.
-
- .. deprecated:: 1.4 The "autocommit" execution option is deprecated
- and will be removed in SQLAlchemy 2.0. See
- :ref:`migration_20_autocommit` for discussion.
+ :param compiled_cache: Available on: :class:`_engine.Connection`,
+ :class:`_engine.Engine`.
- :param compiled_cache: Available on: Connection.
A dictionary where :class:`.Compiled` objects
will be cached when the :class:`_engine.Connection`
compiles a clause
specified here.
:param logging_token: Available on: :class:`_engine.Connection`,
- :class:`_engine.Engine`.
+ :class:`_engine.Engine`, :class:`_sql.Executable`.
Adds the specified string token surrounded by brackets in log
messages logged by the connection, i.e. the logging that's enabled
:paramref:`_sa.create_engine.logging_name` - adds a name to the
name used by the Python logger object itself.
- :param isolation_level: Available on: :class:`_engine.Connection`.
+ :param isolation_level: Available on: :class:`_engine.Connection`,
+ :class:`_engine.Engine`.
Set the transaction isolation level for the lifespan of this
:class:`_engine.Connection` object.
valid levels.
The isolation level option applies the isolation level by emitting
- statements on the DBAPI connection, and **necessarily affects the
- original Connection object overall**, not just the copy that is
- returned by the call to :meth:`_engine.Connection.execution_options`
- method. The isolation level will remain at the given setting until
- the DBAPI connection itself is returned to the connection pool, i.e.
- the :meth:`_engine.Connection.close` method on the original
- :class:`_engine.Connection` is called,
- where an event handler will emit
- additional statements on the DBAPI connection in order to revert the
- isolation level change.
-
- .. warning:: The ``isolation_level`` execution option should
- **not** be used when a transaction is already established, that
- is, the :meth:`_engine.Connection.begin`
- method or similar has been
- called. A database cannot change the isolation level on a
- transaction in progress, and different DBAPIs and/or
- SQLAlchemy dialects may implicitly roll back or commit
- the transaction, or not affect the connection at all.
+ statements on the DBAPI connection, and **necessarily affects the
+ original Connection object overall**. The isolation level will remain
+ at the given setting until explicitly changed, or when the DBAPI
+ connection itself is :term:`released` to the connection pool, i.e. the
+ :meth:`_engine.Connection.close` method is called, at which time an
+ event handler will emit additional statements on the DBAPI connection
+ in order to revert the isolation level change.
+
+ .. note:: The ``isolation_level`` execution option may only be
+ established before the :meth:`_engine.Connection.begin` method is
+ called, as well as before any SQL statements are emitted which
+ would otherwise trigger "autobegin", or directly after a call to
+ :meth:`_engine.Connection.commit` or
+ :meth:`_engine.Connection.rollback`. A database cannot change the
+ isolation level on a transaction in progress.
.. note:: The ``isolation_level`` execution option is implicitly
reset if the :class:`_engine.Connection` is invalidated, e.g. via
the :meth:`_engine.Connection.invalidate` method, or if a
- disconnection error occurs. The new connection produced after
- the invalidation will not have the isolation level re-applied
- to it automatically.
+ disconnection error occurs. The new connection produced after the
+ invalidation will **not** have the selected isolation level
+ re-applied to it automatically.
.. seealso::
- :paramref:`_sa.create_engine.isolation_level`
- - set per :class:`_engine.Engine` isolation level
+ :ref:`dbapi_autocommit`
:meth:`_engine.Connection.get_isolation_level`
- view current level
- :ref:`SQLite Transaction Isolation <sqlite_isolation_level>`
-
- :ref:`PostgreSQL Transaction Isolation <postgresql_isolation_level>`
-
- :ref:`MySQL Transaction Isolation <mysql_isolation_level>`
-
- :ref:`SQL Server Transaction Isolation <mssql_isolation_level>`
-
- :ref:`session_transaction_isolation` - for the ORM
+ :param no_parameters: Available on: :class:`_engine.Connection`,
+ :class:`_sql.Executable`.
- :param no_parameters: When ``True``, if the final parameter
+ When ``True``, if the final parameter
list or dictionary is totally empty, will invoke the
statement on the cursor as ``cursor.execute(statement)``,
not passing the parameter collection at all.
or piped into a script that's later invoked by
command line tools.
- :param stream_results: Available on: Connection, statement.
+ :param stream_results: Available on: :class:`_engine.Connection`,
+ :class:`_sql.Executable`.
+
Indicate to the dialect that results should be
"streamed" and not pre-buffered, if possible. This is a limitation
of many DBAPIs. The flag is currently understood within a subset
:ref:`engine_stream_results`
- :param schema_translate_map: Available on: Connection, Engine.
+ :param schema_translate_map: Available on: :class:`_engine.Connection`,
+ :class:`_engine.Engine`, :class:`_sql.Executable`.
+
A dictionary mapping schema names to schema names, that will be
applied to the :paramref:`_schema.Table.schema` element of each
:class:`_schema.Table`
:meth:`_engine.Connection.get_execution_options`
+ :ref:`orm_queryguide_execution_options` - documentation on all
+ ORM-specific execution options
        """  # noqa
-        c = self._generate_for_options()
-        c._execution_options = c._execution_options.union(opt)
+        self._execution_options = self._execution_options.union(opt)
        if self._has_events or self.engine._has_events:
-            self.dispatch.set_connection_execution_options(c, opt)
-        self.dialect.set_connection_execution_options(c, opt)
-        return c
+            self.dispatch.set_connection_execution_options(self, opt)
+        self.dialect.set_connection_execution_options(self, opt)
+        return self
    def get_execution_options(self):
        """Get the non-SQL options which will take effect during execution.
    def closed(self):
        """Return True if this connection is closed."""
-        # note this is independent for a "branched" connection vs.
-        # the base
-
        return self._dbapi_connection is None and not self.__can_reconnect
    @property
        # "closed" does not need to be "invalid". So the state is now
        # represented by the two facts alone.
-        if self.__branch_from:
-            return self.__branch_from.invalidated
-
        return self._dbapi_connection is None and not self.closed
    @property
return self.dialect.default_isolation_level
def _invalid_transaction(self):
- if self.invalidated:
- raise exc.PendingRollbackError(
- "Can't reconnect until invalid %stransaction is rolled "
- "back."
- % (
- "savepoint "
- if self._nested_transaction is not None
- else ""
- ),
- code="8s2b",
- )
- else:
- assert not self._is_future
- raise exc.PendingRollbackError(
- "This connection is on an inactive %stransaction. "
- "Please rollback() fully before proceeding."
- % (
- "savepoint "
- if self._nested_transaction is not None
- else ""
- ),
- code="8s2a",
- )
+ raise exc.PendingRollbackError(
+ "Can't reconnect until invalid %stransaction is rolled "
+ "back. Please rollback() fully before proceeding."
+ % ("savepoint " if self._nested_transaction is not None else ""),
+ code="8s2b",
+ )
def _revalidate_connection(self):
- if self.__branch_from:
- return self.__branch_from._revalidate_connection()
if self.__can_reconnect and self.invalidated:
if self._transaction is not None:
self._invalid_transaction()
- self._dbapi_connection = self.engine.raw_connection(
- _connection=self
- )
+ self._dbapi_connection = self.engine.raw_connection()
return self._dbapi_connection
raise exc.ResourceClosedError("This Connection is closed")
return self.connection.info
- @util.deprecated_20(":meth:`.Connection.connect`")
- def connect(
- self,
- ):
- """Returns a branched version of this :class:`_engine.Connection`.
-
- The :meth:`_engine.Connection.close` method on the returned
- :class:`_engine.Connection` can be called and this
- :class:`_engine.Connection` will remain open.
-
- This method provides usage symmetry with
- :meth:`_engine.Engine.connect`, including for usage
- with context managers.
-
- """
-
- return self._branch()
-
def invalidate(self, exception=None):
"""Invalidate the underlying DBAPI connection associated with
this :class:`_engine.Connection`.
"""
- if self.__branch_from:
- return self.__branch_from.invalidate(exception=exception)
-
if self.invalidated:
return
self._dbapi_connection.detach()
def _autobegin(self):
- self.begin()
+ if self._allow_autobegin:
+ self.begin()
def begin(self):
- """Begin a transaction and return a transaction handle.
+ """Begin a transaction prior to autobegin occurring.
- The returned object is an instance of :class:`.Transaction`.
- This object represents the "scope" of the transaction,
- which completes when either the :meth:`.Transaction.rollback`
- or :meth:`.Transaction.commit` method is called.
+ E.g.::
- .. tip::
+ with engine.connect() as conn:
+ with conn.begin() as trans:
+ conn.execute(table.insert(), {"username": "sandy"})
- The :meth:`_engine.Connection.begin` method is invoked when using
- the :meth:`_engine.Engine.begin` context manager method as well.
- All documentation that refers to behaviors specific to the
- :meth:`_engine.Connection.begin` method also apply to use of the
- :meth:`_engine.Engine.begin` method.
- Legacy use: nested calls to :meth:`.begin` on the same
- :class:`_engine.Connection` will return new :class:`.Transaction`
- objects that represent an emulated transaction within the scope of the
- enclosing transaction, that is::
+ The returned object is an instance of :class:`_engine.RootTransaction`.
+ This object represents the "scope" of the transaction,
+ which completes when either the :meth:`_engine.Transaction.rollback`
+ or :meth:`_engine.Transaction.commit` method is called; the object
+ also works as a context manager as illustrated above.
- trans = conn.begin() # outermost transaction
- trans2 = conn.begin() # "nested"
- trans2.commit() # does nothing
- trans.commit() # actually commits
+ The :meth:`_engine.Connection.begin` method begins a
+ transaction that normally will be begun in any case when the connection
+ is first used to execute a statement. The reason this method might be
+ used would be to invoke the :meth:`_events.ConnectionEvents.begin`
+ event at a specific time, or to organize code within the scope of a
+ connection checkout in terms of context managed blocks, such as::
- Calls to :meth:`.Transaction.commit` only have an effect
- when invoked via the outermost :class:`.Transaction` object, though the
- :meth:`.Transaction.rollback` method of any of the
- :class:`.Transaction` objects will roll back the
- transaction.
+ with engine.connect() as conn:
+ with conn.begin():
+ conn.execute(...)
+ conn.execute(...)
- .. tip::
+ with conn.begin():
+ conn.execute(...)
+ conn.execute(...)
- The above "nesting" behavior is a legacy behavior specific to
- :term:`1.x style` use and will be removed in SQLAlchemy 2.0. For
- notes on :term:`2.0 style` use, see
- :meth:`_future.Connection.begin`.
+ The above code is not fundamentally different in its behavior from
+ the following code, which does not use
+ :meth:`_engine.Connection.begin`; the style below is referred to
+ as "commit as you go" style::
+ with engine.connect() as conn:
+ conn.execute(...)
+ conn.execute(...)
+ conn.commit()
+
+ conn.execute(...)
+ conn.execute(...)
+ conn.commit()
+
+ From a database point of view, the :meth:`_engine.Connection.begin`
+ method does not emit any SQL or change the state of the underlying
+ DBAPI connection in any way; the Python DBAPI does not have any
+ concept of explicit transaction begin.
.. seealso::
+ :ref:`tutorial_working_with_transactions` - in the
+ :ref:`unified_tutorial`
+
:meth:`_engine.Connection.begin_nested` - use a SAVEPOINT
:meth:`_engine.Connection.begin_twophase` -
:class:`_engine.Engine`
"""
- if self._is_future:
- assert not self.__branch_from
- elif self.__branch_from:
- return self.__branch_from.begin()
-
if self.__in_begin:
# for dialects that emit SQL within the process of
# dialect.do_begin() or dialect.do_begin_twophase(), this
self._transaction = RootTransaction(self)
return self._transaction
else:
- if self._is_future:
- raise exc.InvalidRequestError(
- "This connection has already initialized a SQLAlchemy "
- "Transaction() object via begin() or autobegin; can't "
- "call begin() here unless rollback() or commit() "
- "is called first."
- )
- else:
- return MarkerTransaction(self)
+ raise exc.InvalidRequestError(
+ "This connection has already initialized a SQLAlchemy "
+ "Transaction() object via begin() or autobegin; can't "
+ "call begin() here unless rollback() or commit() "
+ "is called first."
+ )
def begin_nested(self):
- """Begin a nested transaction (i.e. SAVEPOINT) and return a
- transaction handle, assuming an outer transaction is already
- established.
-
- Nested transactions require SAVEPOINT support in the
- underlying database. Any transaction in the hierarchy may
- ``commit`` and ``rollback``, however the outermost transaction
- still controls the overall ``commit`` or ``rollback`` of the
- transaction of a whole.
-
- The legacy form of :meth:`_engine.Connection.begin_nested` method has
- alternate behaviors based on whether or not the
- :meth:`_engine.Connection.begin` method was called previously. If
- :meth:`_engine.Connection.begin` was not called, then this method will
- behave the same as the :meth:`_engine.Connection.begin` method and
- return a :class:`.RootTransaction` object that begins and commits a
- real transaction - **no savepoint is invoked**. If
- :meth:`_engine.Connection.begin` **has** been called, and a
- :class:`.RootTransaction` is already established, then this method
- returns an instance of :class:`.NestedTransaction` which will invoke
- and manage the scope of a SAVEPOINT.
-
- .. tip::
-
- The above mentioned behavior of
- :meth:`_engine.Connection.begin_nested` is a legacy behavior
- specific to :term:`1.x style` use. In :term:`2.0 style` use, the
- :meth:`_future.Connection.begin_nested` method instead autobegins
- the outer transaction that can be committed using
- "commit-as-you-go" style; see
- :meth:`_future.Connection.begin_nested` for migration details.
-
- .. versionchanged:: 1.4.13 The behavior of
- :meth:`_engine.Connection.begin_nested`
- as returning a :class:`.RootTransaction` if
- :meth:`_engine.Connection.begin` were not called has been restored
- as was the case in 1.3.x versions; in previous 1.4.x versions, an
- outer transaction would be "autobegun" but would not be committed.
+ """Begin a nested transaction (i.e. SAVEPOINT) and return a transaction
+ handle that controls the scope of the SAVEPOINT.
+
+ E.g.::
+
+ with engine.begin() as connection:
+ with connection.begin_nested():
+ connection.execute(table.insert(), {"username": "sandy"})
+
+ The returned object is an instance of
+ :class:`_engine.NestedTransaction`, which includes transactional
+ methods :meth:`_engine.NestedTransaction.commit` and
+ :meth:`_engine.NestedTransaction.rollback`; for a nested transaction,
+ these methods correspond to the operations "RELEASE SAVEPOINT <name>"
+ and "ROLLBACK TO SAVEPOINT <name>". The name of the savepoint is local
+ to the :class:`_engine.NestedTransaction` object and is generated
+ automatically. Like any other :class:`_engine.Transaction`, the
+ :class:`_engine.NestedTransaction` may be used as a context manager as
+ illustrated above, which will "release" or "rollback" the savepoint
+ depending on whether the operations within the block succeed or raise
+ an exception.
+
+ Nested transactions require SAVEPOINT support in the underlying
+ database, otherwise the behavior is undefined. SAVEPOINT is commonly
+ used to run operations within a transaction that may fail, while
+ continuing the outer transaction. E.g.::
+
+ from sqlalchemy import exc
+
+ with engine.begin() as connection:
+ trans = connection.begin_nested()
+ try:
+ connection.execute(table.insert(), {"username": "sandy"})
+ trans.commit()
+ except exc.IntegrityError: # catch for duplicate username
+ trans.rollback() # rollback to savepoint
+
+ # outer transaction continues
+ connection.execute( ... )
+
+ If :meth:`_engine.Connection.begin_nested` is called without first
+ calling :meth:`_engine.Connection.begin` or
+ :meth:`_engine.Engine.begin`, the :class:`_engine.Connection` object
+ will "autobegin" the outer transaction first. This outer transaction
+ may be committed using "commit-as-you-go" style, e.g.::
+
+ with engine.connect() as connection: # begin() wasn't called
+
+ with connection.begin_nested(): # will auto-"begin()" first
+ connection.execute( ... )
+ # savepoint is released
+
+ connection.execute( ... )
+
+ # explicitly commit outer transaction
+ connection.commit()
+
+ # can continue working with connection here
+
+ .. versionchanged:: 2.0
+
+ :meth:`_engine.Connection.begin_nested` will now participate
+ in the connection "autobegin" behavior that is new as of
+ 2.0 / "future" style connections in 1.4.
.. seealso::
:meth:`_engine.Connection.begin`
- :meth:`_engine.Connection.begin_twophase`
-
"""
- if self._is_future:
- assert not self.__branch_from
- elif self.__branch_from:
- return self.__branch_from.begin_nested()
-
if self._transaction is None:
- if not self._is_future:
- util.warn_deprecated_20(
- "Calling Connection.begin_nested() in 2.0 style use will "
- "return a NestedTransaction (SAVEPOINT) in all cases, "
- "that will not commit the outer transaction. For code "
- "that is cross-compatible between 1.x and 2.0 style use, "
- "ensure Connection.begin() is called before calling "
- "Connection.begin_nested()."
- )
- return self.begin()
- else:
- self._autobegin()
+ self._autobegin()
return NestedTransaction(self)
"""
- if self.__branch_from:
- return self.__branch_from.begin_twophase(xid=xid)
-
if self._transaction is not None:
raise exc.InvalidRequestError(
"Cannot start a two phase transaction when a transaction "
xid = self.engine.dialect.create_xid()
return TwoPhaseTransaction(self, xid)
+ def commit(self):
+ """Commit the transaction that is currently in progress.
+
+ This method commits the current transaction if one has been started.
+ If no transaction was started, the method has no effect, assuming
+ the connection is in a non-invalidated state.
+
+ A transaction is begun on a :class:`_engine.Connection` automatically
+ whenever a statement is first executed, or when the
+ :meth:`_engine.Connection.begin` method is called.
+
+ .. note:: The :meth:`_engine.Connection.commit` method only acts upon
+ the primary database transaction that is linked to the
+ :class:`_engine.Connection` object. It does not operate upon a
+ SAVEPOINT that would have been invoked from the
+ :meth:`_engine.Connection.begin_nested` method; for control of a
+ SAVEPOINT, call :meth:`_engine.NestedTransaction.commit` on the
+ :class:`_engine.NestedTransaction` that is returned by the
+ :meth:`_engine.Connection.begin_nested` method itself.
+
+
+ """
+ if self._transaction:
+ self._transaction.commit()
+
+ def rollback(self):
+ """Roll back the transaction that is currently in progress.
+
+ This method rolls back the current transaction if one has been started.
+ If no transaction was started, the method has no effect. If a
+ transaction was started and the connection is in an invalidated state,
+ the transaction is cleared using this method.
+
+ A transaction is begun on a :class:`_engine.Connection` automatically
+ whenever a statement is first executed, or when the
+ :meth:`_engine.Connection.begin` method is called.
+
+ .. note:: The :meth:`_engine.Connection.rollback` method only acts
+ upon the primary database transaction that is linked to the
+ :class:`_engine.Connection` object. It does not operate upon a
+ SAVEPOINT that would have been invoked from the
+ :meth:`_engine.Connection.begin_nested` method; for control of a
+ SAVEPOINT, call :meth:`_engine.NestedTransaction.rollback` on the
+ :class:`_engine.NestedTransaction` that is returned by the
+ :meth:`_engine.Connection.begin_nested` method itself.
+
+
+ """
+ if self._transaction:
+ self._transaction.rollback()
+
def recover_twophase(self):
return self.engine.dialect.do_recover_twophase(self)
def in_transaction(self):
"""Return True if a transaction is in progress."""
- if self.__branch_from is not None:
- return self.__branch_from.in_transaction()
-
return self._transaction is not None and self._transaction.is_active
def in_nested_transaction(self):
"""Return True if a transaction is in progress."""
- if self.__branch_from is not None:
- return self.__branch_from.in_nested_transaction()
-
return (
self._nested_transaction is not None
and self._nested_transaction.is_active
"""
- if self.__branch_from is not None:
- return self.__branch_from.get_transaction()
-
return self._transaction
def get_nested_transaction(self):
.. versionadded:: 1.4
"""
- if self.__branch_from is not None:
-
- return self.__branch_from.get_nested_transaction()
-
return self._nested_transaction
def _begin_impl(self, transaction):
- assert not self.__branch_from
-
if self._echo:
self._log_info("BEGIN (implicit)")
self.__in_begin = False
def _rollback_impl(self):
- assert not self.__branch_from
-
if self._has_events or self.engine._has_events:
self.dispatch.rollback(self)
except BaseException as e:
self._handle_dbapi_exception(e, None, None, None, None)
- def _commit_impl(self, autocommit=False):
- assert not self.__branch_from
-
- # AUTOCOMMIT isolation-level is a dialect-specific concept, however
- # if a connection has this set as the isolation level, we can skip
- # the "autocommit" warning as the operation will do "autocommit"
- # in any case
- if autocommit and not self._is_autocommit():
- util.warn_deprecated_20(
- "The current statement is being autocommitted using "
- "implicit autocommit, which will be removed in "
- "SQLAlchemy 2.0. "
- "Use the .begin() method of Engine or Connection in order to "
- "use an explicit transaction for DML and DDL statements."
- )
+ def _commit_impl(self):
if self._has_events or self.engine._has_events:
self.dispatch.commit(self)
self._handle_dbapi_exception(e, None, None, None, None)
def _savepoint_impl(self, name=None):
- assert not self.__branch_from
-
if self._has_events or self.engine._has_events:
self.dispatch.savepoint(self, name)
return name
def _rollback_to_savepoint_impl(self, name):
- assert not self.__branch_from
-
if self._has_events or self.engine._has_events:
self.dispatch.rollback_savepoint(self, name, None)
self.engine.dialect.do_rollback_to_savepoint(self, name)
def _release_savepoint_impl(self, name):
- assert not self.__branch_from
-
if self._has_events or self.engine._has_events:
self.dispatch.release_savepoint(self, name, None)
self.engine.dialect.do_release_savepoint(self, name)
def _begin_twophase_impl(self, transaction):
- assert not self.__branch_from
-
if self._echo:
self._log_info("BEGIN TWOPHASE (implicit)")
if self._has_events or self.engine._has_events:
self.__in_begin = False
def _prepare_twophase_impl(self, xid):
- assert not self.__branch_from
-
if self._has_events or self.engine._has_events:
self.dispatch.prepare_twophase(self, xid)
self._handle_dbapi_exception(e, None, None, None, None)
def _rollback_twophase_impl(self, xid, is_prepared):
- assert not self.__branch_from
-
if self._has_events or self.engine._has_events:
self.dispatch.rollback_twophase(self, xid, is_prepared)
self._handle_dbapi_exception(e, None, None, None, None)
def _commit_twophase_impl(self, xid, is_prepared):
- assert not self.__branch_from
-
if self._has_events or self.engine._has_events:
self.dispatch.commit_twophase(self, xid, is_prepared)
except BaseException as e:
self._handle_dbapi_exception(e, None, None, None, None)
- def _autorollback(self):
- if self.__branch_from:
- self.__branch_from._autorollback()
-
- if not self.in_transaction():
- self._rollback_impl()
-
- def _warn_for_legacy_exec_format(self):
- util.warn_deprecated_20(
- "The connection.execute() method in "
- "SQLAlchemy 2.0 will accept parameters as a single "
- "dictionary or a "
- "single sequence of dictionaries only. "
- "Parameters passed as keyword arguments, tuples or positionally "
- "oriented dictionaries and/or tuples "
- "will no longer be accepted."
- )
-
def close(self):
"""Close this :class:`_engine.Connection`.
of any :class:`.Transaction` object that may be
outstanding with regards to this :class:`_engine.Connection`.
+ This has the effect of also calling :meth:`_engine.Connection.rollback`
+ if any transaction is in place.
+
After :meth:`_engine.Connection.close` is called, the
:class:`_engine.Connection` is permanently in a closed state,
and will allow no further operations.
"""
- if self.__branch_from:
- assert not self._is_future
- util.warn_deprecated_20(
- "The .close() method on a so-called 'branched' connection is "
- "deprecated as of 1.4, as are 'branched' connections overall, "
- "and will be removed in a future release. If this is a "
- "default-handling function, don't close the connection."
- )
- self._dbapi_connection = None
- self.__can_reconnect = False
- return
-
if self._transaction:
self._transaction.close()
skip_reset = True
self._dbapi_connection = None
self.__can_reconnect = False
- def scalar(self, object_, *multiparams, **params):
- """Executes and returns the first column of the first row.
+ def scalar(self, statement, parameters=None, execution_options=None):
+ r"""Executes a SQL statement construct and returns a scalar object.
- The underlying result/cursor is closed after execution.
+ This method is shorthand for invoking the
+ :meth:`_engine.Result.scalar` method after invoking the
+ :meth:`_engine.Connection.execute` method. Parameters are equivalent.
- """
+ :return: a scalar Python value representing the first column of the
+ first row returned.
- return self.execute(object_, *multiparams, **params).scalar()
+ """
+ return self.execute(statement, parameters, execution_options).scalar()
- def scalars(self, object_, *multiparams, **params):
+ def scalars(self, statement, parameters=None, execution_options=None):
"""Executes and returns a scalar result set, which yields scalar values
from the first column of each row.
"""
- return self.execute(object_, *multiparams, **params).scalars()
+ return self.execute(statement, parameters, execution_options).scalars()
- def execute(self, statement, *multiparams, **params):
+ def execute(self, statement, parameters=None, execution_options=None):
r"""Executes a SQL statement construct and returns a
- :class:`_engine.CursorResult`.
-
- :param statement: The statement to be executed. May be
- one of:
-
- * a plain string (deprecated)
- * any :class:`_expression.ClauseElement` construct that is also
- a subclass of :class:`.Executable`, such as a
- :func:`_expression.select` construct
- * a :class:`.FunctionElement`, such as that generated
- by :data:`.func`, will be automatically wrapped in
- a SELECT statement, which is then executed.
- * a :class:`.DDLElement` object
- * a :class:`.DefaultGenerator` object
- * a :class:`.Compiled` object
-
- .. deprecated:: 2.0 passing a string to
- :meth:`_engine.Connection.execute` is
- deprecated and will be removed in version 2.0. Use the
- :func:`_expression.text` construct with
- :meth:`_engine.Connection.execute`, or the
- :meth:`_engine.Connection.exec_driver_sql`
- method to invoke a driver-level
- SQL string.
-
- :param \*multiparams/\**params: represent bound parameter
- values to be used in the execution. Typically,
- the format is either a collection of one or more
- dictionaries passed to \*multiparams::
-
- conn.execute(
- table.insert(),
- {"id":1, "value":"v1"},
- {"id":2, "value":"v2"}
- )
-
- ...or individual key/values interpreted by \**params::
-
- conn.execute(
- table.insert(), id=1, value="v1"
- )
-
- In the case that a plain SQL string is passed, and the underlying
- DBAPI accepts positional bind parameters, a collection of tuples
- or individual values in \*multiparams may be passed::
-
- conn.execute(
- "INSERT INTO table (id, value) VALUES (?, ?)",
- (1, "v1"), (2, "v2")
- )
-
- conn.execute(
- "INSERT INTO table (id, value) VALUES (?, ?)",
- 1, "v1"
- )
-
- Note above, the usage of a question mark "?" or other
- symbol is contingent upon the "paramstyle" accepted by the DBAPI
- in use, which may be any of "qmark", "named", "pyformat", "format",
- "numeric". See `pep-249
- <https://www.python.org/dev/peps/pep-0249/>`_ for details on
- paramstyle.
-
- To execute a textual SQL statement which uses bound parameters in a
- DBAPI-agnostic way, use the :func:`_expression.text` construct.
-
- .. deprecated:: 2.0 use of tuple or scalar positional parameters
- is deprecated. All params should be dicts or sequences of dicts.
- Use :meth:`.exec_driver_sql` to execute a plain string with
- tuple or scalar positional parameters.
+ :class:`_engine.Result`.
+
+ :param statement: The statement to be executed. This is always
+ an object that is in both the :class:`_expression.ClauseElement` and
+ :class:`_expression.Executable` hierarchies, including:
+
+ * :class:`_expression.Select`
+ * :class:`_expression.Insert`, :class:`_expression.Update`,
+ :class:`_expression.Delete`
+ * :class:`_expression.TextClause` and
+ :class:`_expression.TextualSelect`
+ * :class:`_schema.DDL` and objects which inherit from
+ :class:`_schema.DDLElement`
+
+ :param parameters: parameters which will be bound into the statement.
+ This may be either a dictionary of parameter names to values,
+ or a mutable sequence (e.g. a list) of dictionaries. When a
+ list of dictionaries is passed, the underlying statement execution
+ will make use of the DBAPI ``cursor.executemany()`` method.
+ When a single dictionary is passed, the DBAPI ``cursor.execute()``
+ method will be used.
+
+ :param execution_options: optional dictionary of execution options,
+ which will be associated with the statement execution. This
+ dictionary can provide a subset of the options that are accepted
+ by :meth:`_engine.Connection.execution_options`.
+
+ :return: a :class:`_engine.Result` object.
"""
-
- if isinstance(statement, util.string_types):
- util.warn_deprecated_20(
- "Passing a string to Connection.execute() is "
- "deprecated and will be removed in version 2.0. Use the "
- "text() construct, "
- "or the Connection.exec_driver_sql() method to invoke a "
- "driver-level SQL string."
- )
-
- return self._exec_driver_sql(
- statement,
- multiparams,
- params,
- _EMPTY_EXECUTION_OPTS,
- future=False,
- )
-
+ distilled_parameters = _distill_params_20(parameters)
try:
meth = statement._execute_on_connection
except AttributeError as err:
exc.ObjectNotExecutableError(statement), replace_context=err
)
else:
- return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
+ return meth(
+ self,
+ distilled_parameters,
+ execution_options or NO_OPTIONS,
+ )
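The single-dictionary vs. list-of-dictionaries contract for ``parameters`` described in the docstring can be sketched as follows, with an invented ``kv`` table:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

with engine.begin() as conn:
    conn.execute(text("CREATE TABLE kv (k TEXT, v TEXT)"))

    # a single dictionary -> DBAPI cursor.execute()
    conn.execute(
        text("INSERT INTO kv (k, v) VALUES (:k, :v)"),
        {"k": "a", "v": "1"},
    )

    # a list of dictionaries -> DBAPI cursor.executemany()
    conn.execute(
        text("INSERT INTO kv (k, v) VALUES (:k, :v)"),
        [{"k": "b", "v": "2"}, {"k": "c", "v": "3"}],
    )

with engine.connect() as conn:
    rows = [
        tuple(row)
        for row in conn.execute(text("SELECT k, v FROM kv ORDER BY k"))
    ]
```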
- def _execute_function(self, func, multiparams, params, execution_options):
+ def _execute_function(self, func, distilled_parameters, execution_options):
"""Execute a sql.FunctionElement object."""
return self._execute_clauseelement(
- func.select(), multiparams, params, execution_options
+ func.select(), distilled_parameters, execution_options
)
def _execute_default(
- self,
- default,
- multiparams,
- params,
- # migrate is calling this directly :(
- execution_options=_EMPTY_EXECUTION_OPTS,
+ self, default, distilled_parameters, execution_options
):
"""Execute a schema.ColumnDefault object."""
execution_options
)
- distilled_parameters = _distill_params(self, multiparams, params)
-
+ # note for event handlers, the "distilled parameters" which is always
+ # a list of dicts is broken out into separate "multiparams" and
+ # "params" collections, which allows the handler to distinguish
+ # between an executemany and execute style set of parameters.
if self._has_events or self.engine._has_events:
(
default,
- distilled_params,
+ distilled_parameters,
event_multiparams,
event_params,
) = self._invoke_before_exec_event(
return ret
- def _execute_ddl(self, ddl, multiparams, params, execution_options):
+ def _execute_ddl(self, ddl, distilled_parameters, execution_options):
"""Execute a schema.DDL object."""
execution_options = ddl._execution_options.merge_with(
self._execution_options, execution_options
)
- distilled_parameters = _distill_params(self, multiparams, params)
-
if self._has_events or self.engine._has_events:
(
ddl,
- distilled_params,
+ distilled_parameters,
event_multiparams,
event_params,
) = self._invoke_before_exec_event(
return elem, distilled_params, event_multiparams, event_params
def _execute_clauseelement(
- self, elem, multiparams, params, execution_options
+ self, elem, distilled_parameters, execution_options
):
"""Execute a sql.ClauseElement object."""
self._execution_options, execution_options
)
- distilled_params = _distill_params(self, multiparams, params)
-
has_events = self._has_events or self.engine._has_events
if has_events:
(
elem,
- distilled_params,
+ distilled_parameters,
event_multiparams,
event_params,
) = self._invoke_before_exec_event(
- elem, distilled_params, execution_options
+ elem, distilled_parameters, execution_options
)
- if distilled_params:
+ if distilled_parameters:
# ensure we don't retain a link to the view object for keys()
# which links to the values, which we don't want to cache
- keys = sorted(distilled_params[0])
- for_executemany = len(distilled_params) > 1
+ keys = sorted(distilled_parameters[0])
+ for_executemany = len(distilled_parameters) > 1
else:
keys = []
for_executemany = False
dialect,
dialect.execution_ctx_cls._init_compiled,
compiled_sql,
- distilled_params,
+ distilled_parameters,
execution_options,
compiled_sql,
- distilled_params,
+ distilled_parameters,
elem,
extracted_params,
cache_hit=cache_hit,
def _execute_compiled(
self,
compiled,
- multiparams,
- params,
+ distilled_parameters,
execution_options=_EMPTY_EXECUTION_OPTS,
):
"""Execute a sql.Compiled object.
execution_options = compiled.execution_options.merge_with(
self._execution_options, execution_options
)
- distilled_parameters = _distill_params(self, multiparams, params)
if self._has_events or self.engine._has_events:
(
compiled,
- distilled_params,
+ distilled_parameters,
event_multiparams,
event_params,
) = self._invoke_before_exec_event(
)
return ret
- def _exec_driver_sql(
- self, statement, multiparams, params, execution_options, future
- ):
-
- execution_options = self._execution_options.merge_with(
- execution_options
- )
-
- distilled_parameters = _distill_params(self, multiparams, params)
-
- if not future:
- if self._has_events or self.engine._has_events:
- (
- statement,
- distilled_params,
- event_multiparams,
- event_params,
- ) = self._invoke_before_exec_event(
- statement, distilled_parameters, execution_options
- )
-
- dialect = self.dialect
- ret = self._execute_context(
- dialect,
- dialect.execution_ctx_cls._init_statement,
- statement,
- distilled_parameters,
- execution_options,
- statement,
- distilled_parameters,
- )
-
- if not future:
- if self._has_events or self.engine._has_events:
- self.dispatch.after_execute(
- self,
- statement,
- event_multiparams,
- event_params,
- execution_options,
- ret,
- )
- return ret
-
- def _execute_20(
- self,
- statement,
- parameters=None,
- execution_options=_EMPTY_EXECUTION_OPTS,
- ):
- args_10style, kwargs_10style = _distill_params_20(parameters)
- try:
- meth = statement._execute_on_connection
- except AttributeError as err:
- util.raise_(
- exc.ObjectNotExecutableError(statement), replace_context=err
- )
- else:
- return meth(self, args_10style, kwargs_10style, execution_options)
-
def exec_driver_sql(
self, statement, parameters=None, execution_options=None
):
"""
- args_10style, kwargs_10style = _distill_params_20(parameters)
+ distilled_parameters = _distill_raw_params(parameters)
- return self._exec_driver_sql(
+ execution_options = self._execution_options.merge_with(
+ execution_options
+ )
+
+ dialect = self.dialect
+ ret = self._execute_context(
+ dialect,
+ dialect.execution_ctx_cls._init_statement,
statement,
- args_10style,
- kwargs_10style,
+ distilled_parameters,
execution_options,
- future=True,
+ statement,
+ distilled_parameters,
)
+ return ret
+
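Since ``exec_driver_sql()`` now passes the statement and parameters straight through to the driver, parameters use the DBAPI's own paramstyle; the sqlite3 driver, for instance, uses "qmark" positional parameters. A sketch with an invented ``nums`` table:

```python
from sqlalchemy import create_engine

engine = create_engine("sqlite://")

with engine.begin() as conn:
    conn.exec_driver_sql("CREATE TABLE nums (n INTEGER)")

    # a list of tuples is an executemany at the driver level
    conn.exec_driver_sql(
        "INSERT INTO nums (n) VALUES (?)", [(1,), (2,), (3,)]
    )

    total = conn.exec_driver_sql("SELECT sum(n) FROM nums").scalar()
```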
def _execute_context(
self,
dialect,
"""Create an :class:`.ExecutionContext` and execute, returning
a :class:`_engine.CursorResult`."""
- if self.__branch_from:
- # if this is a "branched" connection, do everything in terms
- # of the "root" connection, *except* for .close(), which is
- # the only feature that branching provides
- self = self.__branch_from
-
try:
conn = self._dbapi_connection
if conn is None:
elif self._trans_context_manager:
TransactionalContext._trans_ctx_check(self)
- if self._is_future and self._transaction is None:
+ if self._transaction is None:
self._autobegin()
context.pre_exec()
result = context._setup_result_proxy()
- if not self._is_future:
-
- if (
- # usually we're in a transaction so avoid relatively
- # expensive / legacy should_autocommit call
- self._transaction is None
- and context.should_autocommit
- ):
- self._commit_impl(autocommit=True)
-
except BaseException as e:
self._handle_dbapi_exception(
e, statement, parameters, cursor, context
if cursor:
self._safe_close_cursor(cursor)
with util.safe_reraise(warn_only=True):
- self._autorollback()
+ # "autorollback" was mostly relevant in 1.x series.
+ # It's very unlikely to reach here, as the connection
+ # does autobegin so when we are here, we are usually
+ # in an explicit / semi-explicit transaction.
+ # however we have a test which manufactures this
+ # scenario in any case using an event handler.
+ if not self.in_transaction():
+ self._rollback_impl()
if newraise:
util.raise_(newraise, with_traceback=exc_info[2], from_=e)
"""
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
- @util.deprecated(
- "1.4",
- "The :meth:`_engine.Connection.transaction` "
- "method is deprecated and will be "
- "removed in a future release. Use the :meth:`_engine.Engine.begin` "
- "context manager instead.",
- )
- def transaction(self, callable_, *args, **kwargs):
- r"""Execute the given function within a transaction boundary.
-
- The function is passed this :class:`_engine.Connection`
- as the first argument, followed by the given \*args and \**kwargs,
- e.g.::
-
- def do_something(conn, x, y):
- conn.execute(text("some statement"), {'x':x, 'y':y})
-
- conn.transaction(do_something, 5, 10)
-
- The operations inside the function are all invoked within the
- context of a single :class:`.Transaction`.
- Upon success, the transaction is committed. If an
- exception is raised, the transaction is rolled back
- before propagating the exception.
-
- .. note::
-
- The :meth:`.transaction` method is superseded by
- the usage of the Python ``with:`` statement, which can
- be used with :meth:`_engine.Connection.begin`::
-
- with conn.begin():
- conn.execute(text("some statement"), {'x':5, 'y':10})
-
- As well as with :meth:`_engine.Engine.begin`::
-
- with engine.begin() as conn:
- conn.execute(text("some statement"), {'x':5, 'y':10})
-
- .. seealso::
-
- :meth:`_engine.Engine.begin` - engine-level transactional
- context
-
- :meth:`_engine.Engine.transaction` - engine-level version of
- :meth:`_engine.Connection.transaction`
-
- """
-
- kwargs["_sa_skip_warning"] = True
- trans = self.begin()
- try:
- ret = self.run_callable(callable_, *args, **kwargs)
- trans.commit()
- return ret
- except:
- with util.safe_reraise():
- trans.rollback()
-
- @util.deprecated(
- "1.4",
- "The :meth:`_engine.Connection.run_callable` "
- "method is deprecated and will "
- "be removed in a future release. Invoke the callable function "
- "directly, passing the Connection.",
- )
- def run_callable(self, callable_, *args, **kwargs):
- r"""Given a callable object or function, execute it, passing
- a :class:`_engine.Connection` as the first argument.
-
- The given \*args and \**kwargs are passed subsequent
- to the :class:`_engine.Connection` argument.
-
- This function, along with :meth:`_engine.Engine.run_callable`,
- allows a function to be run with a :class:`_engine.Connection`
- or :class:`_engine.Engine` object without the need to know
- which one is being dealt with.
-
- """
- return callable_(self, *args, **kwargs)
-
class ExceptionContextImpl(ExceptionContext):
"""Implement the :class:`.ExceptionContext` interface."""
def __init__(self, connection):
raise NotImplementedError()
- def _do_deactivate(self):
- """do whatever steps are necessary to set this transaction as
- "deactive", however leave this transaction object in place as far
- as the connection's state.
-
- for a "real" transaction this should roll back the transaction
- and ensure this transaction is no longer a reset agent.
-
- this is used for nesting of marker transactions where the marker
- can set the "real" transaction as rolled back, however it stays
- in place.
-
- for 2.0 we hope to remove this nesting feature.
-
- """
- raise NotImplementedError()
-
@property
def _deactivated_from_connection(self):
"""True if this transaction is totally deactivated from the connection
return not self._deactivated_from_connection
-class MarkerTransaction(Transaction):
- """A 'marker' transaction that is used for nested begin() calls.
-
- .. deprecated:: 1.4 future connection for 2.0 won't support this pattern.
-
- """
-
- __slots__ = ("connection", "_is_active", "_transaction")
-
- def __init__(self, connection):
- assert connection._transaction is not None
- if not connection._transaction.is_active:
- raise exc.InvalidRequestError(
- "the current transaction on this connection is inactive. "
- "Please issue a rollback first."
- )
-
- assert not connection._is_future
- util.warn_deprecated_20(
- "Calling .begin() when a transaction is already begun, creating "
- "a 'sub' transaction, is deprecated "
- "and will be removed in 2.0. See the documentation section "
- "'Migrating from the nesting pattern' for background on how "
- "to migrate from this pattern."
- )
-
- self.connection = connection
-
- if connection._trans_context_manager:
- TransactionalContext._trans_ctx_check(connection)
-
- if connection._nested_transaction is not None:
- self._transaction = connection._nested_transaction
- else:
- self._transaction = connection._transaction
- self._is_active = True
-
- @property
- def _deactivated_from_connection(self):
- return not self.is_active
-
- @property
- def is_active(self):
- return self._is_active and self._transaction.is_active
-
- def _deactivate(self):
- self._is_active = False
-
- def _do_close(self):
- # does not actually roll back the root
- self._deactivate()
-
- def _do_rollback(self):
- # does roll back the root
- if self._is_active:
- try:
- self._transaction._do_deactivate()
- finally:
- self._deactivate()
-
- def _do_commit(self):
- self._deactivate()
-
-
class RootTransaction(Transaction):
"""Represent the "root" transaction on a :class:`_engine.Connection`.
def _deactivated_from_connection(self):
return self.connection._transaction is not self
- def _do_deactivate(self):
- # called from a MarkerTransaction to cancel this root transaction.
- # the transaction stays in place as connection._transaction, but
- # is no longer active and is no longer the reset agent for the
- # pooled connection. the connection won't support a new begin()
- # until this transaction is explicitly closed, rolled back,
- # or committed.
-
- assert self.connection._transaction is self
-
- if self.is_active:
- self._connection_rollback_impl()
-
- # handle case where a savepoint was created inside of a marker
- # transaction that refers to a root. nested has to be cancelled
- # also.
- if self.connection._nested_transaction:
- self.connection._nested_transaction._cancel()
-
- self._deactivate_from_connection()
-
def _connection_begin_impl(self):
self.connection._begin_impl(self)
if deactivate_from_connection:
assert self.connection._nested_transaction is not self
- def _do_deactivate(self):
- self._close_impl(False, False)
-
def _do_close(self):
self._close_impl(True, False)
* The logging configuration and logging_name is copied from the parent
:class:`_engine.Engine`.
+ .. TODO: the below autocommit link will have a more specific ref
+ for the example in an upcoming commit
+
The intent of the :meth:`_engine.Engine.execution_options` method is
- to implement "sharding" schemes where multiple :class:`_engine.Engine`
+ to implement schemes where multiple :class:`_engine.Engine`
objects refer to the same connection pool, but are differentiated
- by options that would be consumed by a custom event::
+ by options that affect some execution-level behavior for each
+ engine. One such example is breaking into separate "reader" and
+ "writer" :class:`_engine.Engine` instances, where one
+ :class:`_engine.Engine`
+ has a lower :term:`isolation level` setting configured or is even
+ transaction-disabled using "autocommit". An example of this
+ configuration is at :ref:`dbapi_autocommit`.
+
+ Another example is one that
+ uses a custom option ``shard_id`` which is consumed by an event
+ to change the current schema on a database connection::
+
+ from sqlalchemy import event
+ from sqlalchemy.engine import Engine
primary_engine = create_engine("mysql://")
shard1 = primary_engine.execution_options(shard_id="shard1")
shard2 = primary_engine.execution_options(shard_id="shard2")
- Above, the ``shard1`` engine serves as a factory for
- :class:`_engine.Connection`
- objects that will contain the execution option
- ``shard_id=shard1``, and ``shard2`` will produce
- :class:`_engine.Connection`
- objects that contain the execution option ``shard_id=shard2``.
-
- An event handler can consume the above execution option to perform
- a schema switch or other operation, given a connection. Below
- we emit a MySQL ``use`` statement to switch databases, at the same
- time keeping track of which database we've established using the
- :attr:`_engine.Connection.info` dictionary,
- which gives us a persistent
- storage space that follows the DBAPI connection::
-
- from sqlalchemy import event
- from sqlalchemy.engine import Engine
-
- shards = {"default": "base", shard_1: "db1", "shard_2": "db2"}
+ shards = {"default": "base", "shard_1": "db1", "shard_2": "db2"}
@event.listens_for(Engine, "before_cursor_execute")
def _switch_shard(conn, cursor, stmt,
params, context, executemany):
- shard_id = conn._execution_options.get('shard_id', "default")
+                    shard_id = conn.get_execution_options().get("shard_id", "default")
current_shard = conn.info.get("current_shard", None)
if current_shard != shard_id:
cursor.execute("use %s" % shards[shard_id])
conn.info["current_shard"] = shard_id
+ The above recipe illustrates two :class:`_engine.Engine` objects that
+ will each serve as factories for :class:`_engine.Connection` objects
+ that have pre-established "shard_id" execution options present. A
+ :meth:`_events.ConnectionEvents.before_cursor_execute` event handler
+ then interprets this execution option to emit a MySQL ``use`` statement
+ to switch databases before a statement execution, while at the same
+ time keeping track of which database we've established using the
+ :attr:`_engine.Connection.info` dictionary.
+
.. seealso::
:meth:`_engine.Connection.execution_options`
:meth:`_engine.Engine.get_execution_options`
- """
+        """  # noqa: E501
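The option-engine behavior described in the docstring above can be sketched with a toy stand-in: the derived engine shares the parent's pool while layering additional execution options. `MiniEngine` is hypothetical and only mirrors the shape of the real behavior.

```python
class MiniEngine:
    """Toy stand-in for an Engine supporting execution_options()."""

    def __init__(self, pool, options=None):
        self.pool = pool                     # shared connection pool
        self._options = dict(options or {})

    def execution_options(self, **opt):
        # return a sibling engine: same pool, merged options
        return MiniEngine(self.pool, {**self._options, **opt})

    def get_execution_options(self):
        return dict(self._options)


primary = MiniEngine(pool=object())
shard1 = primary.execution_options(shard_id="shard1")
shard2 = primary.execution_options(shard_id="shard2")
```

An event handler can then read `shard1.get_execution_options()` (or, on the real classes, the per-connection execution options) to decide which schema to switch to, as in the `use` statement recipe above.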
return self._option_cls(self, opt)
def get_execution_options(self):
self.pool = self.pool.recreate()
self.dispatch.engine_disposed(self)
- def _execute_default(
- self, default, multiparams=(), params=util.EMPTY_DICT
- ):
- with self.connect() as conn:
- return conn._execute_default(default, multiparams, params)
-
@contextlib.contextmanager
def _optional_conn_ctx_manager(self, connection=None):
if connection is None:
else:
yield connection
- class _trans_ctx(object):
- def __init__(self, conn, transaction):
- self.conn = conn
- self.transaction = transaction
-
- def __enter__(self):
- self.transaction.__enter__()
- return self.conn
-
- def __exit__(self, type_, value, traceback):
- try:
- self.transaction.__exit__(type_, value, traceback)
- finally:
- self.conn.close()
-
+    @contextlib.contextmanager
def begin(self):
"""Return a context manager delivering a :class:`_engine.Connection`
with a :class:`.Transaction` established.
for a particular :class:`_engine.Connection`.
"""
- conn = self.connect()
- try:
- trans = conn.begin()
- except:
- with util.safe_reraise():
- conn.close()
- return Engine._trans_ctx(conn, trans)
-
- @util.deprecated(
- "1.4",
- "The :meth:`_engine.Engine.transaction` "
- "method is deprecated and will be "
- "removed in a future release. Use the :meth:`_engine.Engine.begin` "
- "context "
- "manager instead.",
- )
- def transaction(self, callable_, *args, **kwargs):
- r"""Execute the given function within a transaction boundary.
-
- The function is passed a :class:`_engine.Connection` newly procured
- from :meth:`_engine.Engine.connect` as the first argument,
- followed by the given \*args and \**kwargs.
-
- e.g.::
-
- def do_something(conn, x, y):
- conn.execute(text("some statement"), {'x':x, 'y':y})
-
- engine.transaction(do_something, 5, 10)
-
- The operations inside the function are all invoked within the
- context of a single :class:`.Transaction`.
- Upon success, the transaction is committed. If an
- exception is raised, the transaction is rolled back
- before propagating the exception.
-
- .. note::
-
- The :meth:`.transaction` method is superseded by
- the usage of the Python ``with:`` statement, which can
- be used with :meth:`_engine.Engine.begin`::
-
- with engine.begin() as conn:
- conn.execute(text("some statement"), {'x':5, 'y':10})
-
- .. seealso::
-
- :meth:`_engine.Engine.begin` - engine-level transactional
- context
-
- :meth:`_engine.Connection.transaction`
- - connection-level version of
- :meth:`_engine.Engine.transaction`
-
- """
- kwargs["_sa_skip_warning"] = True
- with self.connect() as conn:
- return conn.transaction(callable_, *args, **kwargs)
-
- @util.deprecated(
- "1.4",
- "The :meth:`_engine.Engine.run_callable` "
- "method is deprecated and will be "
- "removed in a future release. Use the :meth:`_engine.Engine.begin` "
- "context manager instead.",
- )
- def run_callable(self, callable_, *args, **kwargs):
- r"""Given a callable object or function, execute it, passing
- a :class:`_engine.Connection` as the first argument.
-
- The given \*args and \**kwargs are passed subsequent
- to the :class:`_engine.Connection` argument.
-
- This function, along with :meth:`_engine.Connection.run_callable`,
- allows a function to be run with a :class:`_engine.Connection`
- or :class:`_engine.Engine` object without the need to know
- which one is being dealt with.
-
- """
- kwargs["_sa_skip_warning"] = True
with self.connect() as conn:
- return conn.run_callable(callable_, *args, **kwargs)
+ with conn.begin():
+ yield conn
def _run_ddl_visitor(self, visitorcallable, element, **kwargs):
with self.begin() as conn:
def connect(self):
"""Return a new :class:`_engine.Connection` object.
- The :class:`_engine.Connection` object is a facade that uses a DBAPI
- connection internally in order to communicate with the database. This
- connection is procured from the connection-holding :class:`_pool.Pool`
- referenced by this :class:`_engine.Engine`. When the
- :meth:`_engine.Connection.close` method of the
- :class:`_engine.Connection` object
- is called, the underlying DBAPI connection is then returned to the
- connection pool, where it may be used again in a subsequent call to
- :meth:`_engine.Engine.connect`.
+ The :class:`_engine.Connection` acts as a Python context manager, so
+ the typical use of this method looks like::
- """
-
- return self._connection_cls(self)
+ with engine.connect() as connection:
+ connection.execute(text("insert into table values ('foo')"))
+ connection.commit()
- @util.deprecated(
- "1.4",
- "The :meth:`_engine.Engine.table_names` "
- "method is deprecated and will be "
- "removed in a future release. Please refer to "
- ":meth:`_reflection.Inspector.get_table_names`.",
- )
- def table_names(self, schema=None, connection=None):
- """Return a list of all table names available in the database.
-
- :param schema: Optional, retrieve names from a non-default schema.
-
- :param connection: Optional, use a specified connection.
- """
- with self._optional_conn_ctx_manager(connection) as conn:
- insp = inspection.inspect(conn)
- return insp.get_table_names(schema)
-
- @util.deprecated(
- "1.4",
- "The :meth:`_engine.Engine.has_table` "
- "method is deprecated and will be "
- "removed in a future release. Please refer to "
- ":meth:`_reflection.Inspector.has_table`.",
- )
- def has_table(self, table_name, schema=None):
- """Return True if the given backend has a table of the given name.
+ Where above, after the block is completed, the connection is "closed"
+ and its underlying DBAPI resources are returned to the connection pool.
+ This also has the effect of rolling back any transaction that
+ was explicitly begun or was begun via autobegin, and will
+ emit the :meth:`_events.ConnectionEvents.rollback` event if one was
+ started and is still in progress.
.. seealso::
- :ref:`metadata_reflection_inspector` - detailed schema inspection
- using the :class:`_reflection.Inspector` interface.
-
- :class:`.quoted_name` - used to pass quoting information along
- with a schema identifier.
+ :meth:`_engine.Engine.begin`
"""
- with self._optional_conn_ctx_manager(None) as conn:
- insp = inspection.inspect(conn)
- return insp.has_table(table_name, schema=schema)
- def _wrap_pool_connect(self, fn, connection):
- dialect = self.dialect
- try:
- return fn()
- except dialect.dbapi.Error as e:
- if connection is None:
- Connection._handle_dbapi_exception_noconnection(
- e, dialect, self
- )
- else:
- util.raise_(
- sys.exc_info()[1], with_traceback=sys.exc_info()[2]
- )
+ return self._connection_cls(self)
- def raw_connection(self, _connection=None):
+ def raw_connection(self):
"""Return a "raw" DBAPI connection from the connection pool.
The returned object is a proxied version of the DBAPI
:ref:`dbapi_connections`
"""
- return self._wrap_pool_connect(self.pool.connect, _connection)
+ return self.pool.connect()
class OptionEngineMixin(object):
be applied to all connections. See
:meth:`~sqlalchemy.engine.Connection.execution_options`
- :param future: Use the 2.0 style :class:`_future.Engine` and
- :class:`_future.Connection` API.
+ :param future: Use the 2.0 style :class:`_engine.Engine` and
+ :class:`_engine.Connection` API.
+
+ As of SQLAlchemy 2.0, this parameter is present for backwards
+ compatibility only and must remain at its default value of ``True``.
+
+ The :paramref:`_sa.create_engine.future` parameter will be
+ deprecated in a subsequent 2.x release and eventually removed.
.. versionadded:: 1.4
+ .. versionchanged:: 2.0 All :class:`_engine.Engine` objects are
+ "future" style engines and there is no longer a ``future=False``
+ mode of operation.
+
.. seealso::
:ref:`migration_20_toplevel`
pool._dialect = dialect
# create engine.
- if pop_kwarg("future", False):
- from sqlalchemy import future
-
- default_engine_class = future.Engine
- else:
- default_engine_class = base.Engine
+ if not pop_kwarg("future", True):
+ raise exc.ArgumentError(
+ "The 'future' parameter passed to "
+ "create_engine() may only be set to True."
+ )
- engineclass = kwargs.pop("_future_engine_class", default_engine_class)
+ engineclass = base.Engine
engine_args = {}
for k in util.get_cls_kwargs(engineclass):
# internal flags used by the test suite for instrumenting / proxying
# engines with mocks etc.
_initialize = kwargs.pop("_initialize", True)
- _wrap_do_on_connect = kwargs.pop("_wrap_do_on_connect", None)
# all kwargs should be consumed
if kwargs:
if _initialize:
do_on_connect = dialect.on_connect_url(u)
if do_on_connect:
- if _wrap_do_on_connect:
- do_on_connect = _wrap_do_on_connect(do_on_connect)
def on_connect(dbapi_connection, connection_record):
do_on_connect(dbapi_connection)
# reconnecting will be a reentrant condition, so if the
# connection goes away, Connection is then closed
_allow_revalidate=False,
+        # don't trigger the autobegin sequence
+ # within the up front dialect checks
+ _allow_autobegin=False,
)
c._execution_options = util.EMPTY_DICT
from ..sql import expression
from ..sql.elements import quoted_name
-AUTOCOMMIT_REGEXP = re.compile(
- r"\s*(?:UPDATE|INSERT|CREATE|DELETE|DROP|ALTER)", re.I | re.UNICODE
-)
-
# When we're handed literal SQL, ensure it's a SELECT query
SERVER_SIDE_CURSOR_RE = re.compile(r"\s*SELECT", re.I | re.UNICODE)
# *not* the FLOAT type however.
supports_native_decimal = False
- if util.py3k:
- supports_unicode_statements = True
- supports_unicode_binds = True
- returns_unicode_strings = sqltypes.String.RETURNS_UNICODE
- description_encoding = None
- else:
- supports_unicode_statements = False
- supports_unicode_binds = False
- returns_unicode_strings = sqltypes.String.RETURNS_UNKNOWN
- description_encoding = "use_encoding"
+ supports_unicode_statements = True
+ supports_unicode_binds = True
+ returns_unicode_strings = sqltypes.String.RETURNS_UNICODE
+ description_encoding = None
name = "default"
except NotImplementedError:
self.default_isolation_level = None
- if self.returns_unicode_strings is sqltypes.String.RETURNS_UNKNOWN:
- if util.py3k:
- raise exc.InvalidRequestError(
- "RETURNS_UNKNOWN is unsupported in Python 3"
- )
- self.returns_unicode_strings = self._check_unicode_returns(
- connection
- )
-
if (
self.description_encoding is not None
and self._check_unicode_description(connection)
"""
return self.get_isolation_level(dbapi_conn)
- def _check_unicode_returns(self, connection, additional_tests=None):
- cast_to = util.text_type
-
- if self.positional:
- parameters = self.execute_sequence_format()
- else:
- parameters = {}
-
- def check_unicode(test):
- statement = cast_to(expression.select(test).compile(dialect=self))
- try:
- cursor = connection.connection.cursor()
- connection._cursor_execute(cursor, statement, parameters)
- row = cursor.fetchone()
- cursor.close()
- except exc.DBAPIError as de:
- # note that _cursor_execute() will have closed the cursor
- # if an exception is thrown.
- util.warn(
- "Exception attempting to "
- "detect unicode returns: %r" % de
- )
- return False
- else:
- return isinstance(row[0], util.text_type)
-
- tests = [
- # detect plain VARCHAR
- expression.cast(
- expression.literal_column("'test plain returns'"),
- sqltypes.VARCHAR(60),
- ),
- # detect if there's an NVARCHAR type with different behavior
- # available
- expression.cast(
- expression.literal_column("'test unicode returns'"),
- sqltypes.Unicode(60),
- ),
- ]
-
- if additional_tests:
- tests += additional_tests
-
- results = {check_unicode(test) for test in tests}
-
- if results.issuperset([True, False]):
- return sqltypes.String.RETURNS_CONDITIONAL
- else:
- return (
- sqltypes.String.RETURNS_UNICODE
- if results == {True}
- else sqltypes.String.RETURNS_BYTES
- )
-
def _check_unicode_description(self, connection):
cast_to = util.text_type
)
@event.listens_for(engine, "engine_connect")
- def set_connection_characteristics(connection, branch):
- if not branch:
- self._set_connection_characteristics(
- connection, characteristics
- )
+ def set_connection_characteristics(connection):
+ self._set_connection_characteristics(
+ connection, characteristics
+ )
def set_connection_execution_options(self, connection, opts):
supported_names = set(self.connection_characteristics).intersection(
if obj.transactional
]
if trans_objs:
- if connection._is_future:
- raise exc.InvalidRequestError(
- "This connection has already initialized a SQLAlchemy "
- "Transaction() object via begin() or autobegin; "
- "%s may not be altered unless rollback() or commit() "
- "is called first."
- % (", ".join(name for name, obj in trans_objs))
- )
- else:
- util.warn(
- "Connection is already established with a "
- "Transaction; "
- "setting %s may implicitly rollback or "
- "commit "
- "the existing transaction, or have no effect until "
- "next transaction"
- % (", ".join(name for name, obj in trans_objs))
- )
+ raise exc.InvalidRequestError(
+ "This connection has already initialized a SQLAlchemy "
+ "Transaction() object via begin() or autobegin; "
+ "%s may not be altered unless rollback() or commit() "
+ "is called first."
+ % (", ".join(name for name, obj in trans_objs))
+ )
dbapi_connection = connection.connection.dbapi_connection
for name, characteristic, value in characteristic_values:
def no_parameters(self):
return self.execution_options.get("no_parameters", False)
- @util.memoized_property
- def should_autocommit(self):
- autocommit = self.execution_options.get(
- "autocommit",
- not self.compiled
- and self.statement
- and expression.PARSE_AUTOCOMMIT
- or False,
- )
-
- if autocommit is expression.PARSE_AUTOCOMMIT:
- return self.should_autocommit_text(self.unicode_statement)
- else:
- return autocommit
-
def _execute_scalar(self, stmt, type_, parameters=None):
"""Execute a string statement on the current cursor, returning a
scalar result.
return proc(r)
return r
- @property
+ @util.memoized_property
def connection(self):
- conn = self.root_connection
- if conn._is_future:
- return conn
- else:
- return conn._branch()
-
- def should_autocommit_text(self, statement):
- return AUTOCOMMIT_REGEXP.match(statement)
+ return self.root_connection
def _use_server_side_cursor(self):
if not self.dialect.supports_server_side_cursors:
if self.isddl or self.is_text:
return
- inputsizes = self.compiled._get_set_input_sizes_lookup(
+ compiled = self.compiled
+
+ inputsizes = compiled._get_set_input_sizes_lookup(
include_types=self.include_set_input_sizes,
exclude_types=self.exclude_set_input_sizes,
)
if inputsizes is None:
return
- if self.dialect._has_events:
+ dialect = self.dialect
+
+ if dialect._has_events:
inputsizes = dict(inputsizes)
- self.dialect.dispatch.do_setinputsizes(
+ dialect.dispatch.do_setinputsizes(
inputsizes, self.cursor, self.statement, self.parameters, self
)
- has_escaped_names = bool(self.compiled.escaped_bind_names)
+ has_escaped_names = bool(compiled.escaped_bind_names)
if has_escaped_names:
- escaped_bind_names = self.compiled.escaped_bind_names
+ escaped_bind_names = compiled.escaped_bind_names
- if self.dialect.positional:
+ if dialect.positional:
items = [
- (key, self.compiled.binds[key])
- for key in self.compiled.positiontup
+ (key, compiled.binds[key]) for key in compiled.positiontup
]
else:
items = [
(key, bindparam)
- for bindparam, key in self.compiled.bind_names.items()
+ for bindparam, key in compiled.bind_names.items()
]
generic_inputsizes = []
for key, bindparam in items:
- if bindparam in self.compiled.literal_execute_params:
+ if bindparam in compiled.literal_execute_params:
continue
if key in self._expanded_parameters:
(escaped_name, dbtype, bindparam.type)
)
try:
- self.dialect.do_set_input_sizes(
- self.cursor, generic_inputsizes, self
- )
+ dialect.do_set_input_sizes(self.cursor, generic_inputsizes, self)
except BaseException as e:
self.root_connection._handle_dbapi_exception(
e, None, None, None, self
The hook is called while the cursor from the failed operation
(if any) is still open and accessible. Special cleanup operations
can be called on this cursor; SQLAlchemy will attempt to close
- this cursor subsequent to this hook being invoked. If the connection
- is in "autocommit" mode, the transaction also remains open within
- the scope of this hook; the rollback of the per-statement transaction
- also occurs after the hook is called.
+ this cursor subsequent to this hook being invoked.
.. note::
"""
- def engine_connect(self, conn, branch):
+ @event._legacy_signature(
+ "2.0", ["conn", "branch"], converter=lambda conn: (conn, False)
+ )
+ def engine_connect(self, conn):
"""Intercept the creation of a new :class:`_engine.Connection`.
This event is called typically as the direct result of calling
events within the lifespan
of a single :class:`_engine.Connection` object, if that
:class:`_engine.Connection`
- is invalidated and re-established. There can also be multiple
- :class:`_engine.Connection`
- objects generated for the same already-checked-out
- DBAPI connection, in the case that a "branch" of a
- :class:`_engine.Connection`
- is produced.
+ is invalidated and re-established.
:param conn: :class:`_engine.Connection` object.
- :param branch: if True, this is a "branch" of an existing
- :class:`_engine.Connection`. A branch is generated within the course
- of a statement execution to invoke supplemental statements, most
- typically to pre-execute a SELECT of a default value for the purposes
- of an INSERT statement.
.. seealso::
that transactions are implicit. This hook is provided for those
DBAPIs that might need additional help in this area.
- Note that :meth:`.Dialect.do_begin` is not called unless a
- :class:`.Transaction` object is in use. The
- :meth:`.Dialect.do_autocommit`
- hook is provided for DBAPIs that need some extra commands emitted
- after a commit in order to enter the next transaction, when the
- SQLAlchemy :class:`_engine.Connection`
- is used in its default "autocommit"
- mode.
-
:param dbapi_connection: a DBAPI connection, typically
proxied within a :class:`.ConnectionFairy`.
isupdate
True if the statement is an UPDATE.
- should_autocommit
- True if the statement is a "committable" statement.
-
prefetch_cols
a list of Column objects for which a client-side default
was fired off. Applies to inserts and updates.
raise NotImplementedError()
- def should_autocommit_text(self, statement):
- """Parse the given textual statement and return True if it refers to
- a "committable" statement"""
-
- raise NotImplementedError()
-
def lastrow_has_defaults(self):
"""Return True if the last INSERT or UPDATE row contained
inlined or database-side defaults.
_no_tuple = ()
-_no_kw = util.immutabledict()
-def _distill_params(connection, multiparams, params):
- r"""Given arguments from the calling form \*multiparams, \**params,
- return a list of bind parameter structures, usually a list of
- dictionaries.
-
- In the case of 'raw' execution which accepts positional parameters,
- it may be a list of tuples or lists.
-
- """
-
- if not multiparams:
- if params:
- connection._warn_for_legacy_exec_format()
- return [params]
- else:
- return []
- elif len(multiparams) == 1:
- zero = multiparams[0]
- if isinstance(zero, (list, tuple)):
- if (
- not zero
- or hasattr(zero[0], "__iter__")
- and not hasattr(zero[0], "strip")
- ):
- # execute(stmt, [{}, {}, {}, ...])
- # execute(stmt, [(), (), (), ...])
- return zero
- else:
- # this is used by exec_driver_sql only, so a deprecation
- # warning would already be coming from passing a plain
- # textual statement with positional parameters to
- # execute().
- # execute(stmt, ("value", "value"))
- return [zero]
- elif hasattr(zero, "keys"):
- # execute(stmt, {"key":"value"})
- return [zero]
- else:
- connection._warn_for_legacy_exec_format()
- # execute(stmt, "value")
- return [[zero]]
- else:
- connection._warn_for_legacy_exec_format()
- if hasattr(multiparams[0], "__iter__") and not hasattr(
- multiparams[0], "strip"
+def _distill_params_20(params):
+ if params is None:
+ return _no_tuple
+ elif isinstance(params, (list, tuple)):
+ # collections_abc.MutableSequence): # avoid abc.__instancecheck__
+ if params and not isinstance(
+ params[0], (collections_abc.Mapping, tuple)
):
- return multiparams
- else:
- return [multiparams]
-
-
-def _distill_cursor_params(connection, multiparams, params):
- """_distill_params without any warnings. more appropriate for
- "cursor" params that can include tuple arguments, lists of tuples,
- etc.
-
- """
+ raise exc.ArgumentError(
+ "List argument must consist only of tuples or dictionaries"
+ )
- if not multiparams:
- if params:
- return [params]
- else:
- return []
- elif len(multiparams) == 1:
- zero = multiparams[0]
- if isinstance(zero, (list, tuple)):
- if (
- not zero
- or hasattr(zero[0], "__iter__")
- and not hasattr(zero[0], "strip")
- ):
- # execute(stmt, [{}, {}, {}, ...])
- # execute(stmt, [(), (), (), ...])
- return zero
- else:
- # this is used by exec_driver_sql only, so a deprecation
- # warning would already be coming from passing a plain
- # textual statement with positional parameters to
- # execute().
- # execute(stmt, ("value", "value"))
-
- return [zero]
- elif hasattr(zero, "keys"):
- # execute(stmt, {"key":"value"})
- return [zero]
- else:
- # execute(stmt, "value")
- return [[zero]]
+ return params
+ elif isinstance(
+ params,
+ (dict, immutabledict),
+ # only do abc.__instancecheck__ for Mapping after we've checked
+ # for plain dictionaries and would otherwise raise
+ ) or isinstance(params, collections_abc.Mapping):
+ return [params]
else:
- if hasattr(multiparams[0], "__iter__") and not hasattr(
- multiparams[0], "strip"
- ):
- return multiparams
- else:
- return [multiparams]
+ raise exc.ArgumentError("mapping or sequence expected for parameters")
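The distilling rules implemented above can be illustrated with a simplified standalone version; the function name is illustrative and the real helper also special-cases `immutabledict` before falling back to the `Mapping` ABC check.

```python
from collections import abc


def distill_params(params):
    """Normalize parameters to a list of dicts/tuples, as above."""
    if params is None:
        return ()
    elif isinstance(params, (list, tuple)):
        # an executemany-style sequence: each element must itself
        # be a mapping or a tuple
        if params and not isinstance(params[0], (abc.Mapping, tuple)):
            raise ValueError(
                "List argument must consist only of tuples or dictionaries"
            )
        return params
    elif isinstance(params, abc.Mapping):
        # a single dictionary of parameters
        return [params]
    else:
        raise ValueError("mapping or sequence expected for parameters")


single = distill_params({"x": 5})            # -> [{"x": 5}]
many = distill_params([{"x": 5}, {"x": 6}])  # passed through unchanged
```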
-def _distill_params_20(params):
+def _distill_raw_params(params):
if params is None:
- return _no_tuple, _no_kw
- elif isinstance(params, list):
+ return _no_tuple
+ elif isinstance(params, (list,)):
# collections_abc.MutableSequence): # avoid abc.__instancecheck__
if params and not isinstance(
params[0], (collections_abc.Mapping, tuple)
"List argument must consist only of tuples or dictionaries"
)
- return (params,), _no_kw
+ return params
elif isinstance(
params,
(tuple, dict, immutabledict),
# only do abc.__instancecheck__ for Mapping after we've checked
# for plain dictionaries and would otherwise raise
) or isinstance(params, collections_abc.Mapping):
- return (params,), _no_kw
+ return [params]
else:
raise exc.ArgumentError("mapping or sequence expected for parameters")
def _legacy_signature(since, argnames, converter=None):
+    """Legacy signature decorator.
+
+ :param since: string version for deprecation warning
+ :param argnames: list of strings, which is *all* arguments that the legacy
+ version accepted, including arguments that are still there
+ :param converter: lambda that will accept tuple of this full arg signature
+ and return tuple of new arg signature.
+
+ """
+
def leg(fn):
if not hasattr(fn, "_legacy_signatures"):
fn._legacy_signatures = []
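The converter documented above maps the new argument signature to the legacy one, so that listeners registered against the old signature keep working; this is how the `engine_connect` event drops its removed `branch` argument (`lambda conn: (conn, False)`). A hedged sketch of the mechanism, with a hypothetical `wrap_legacy_listener` rather than the real dispatch machinery:

```python
def wrap_legacy_listener(legacy_fn, converter):
    """Invoke a legacy-signature listener from a modern-signature
    event dispatch, using a converter from new args to legacy args."""

    def modern(*args):
        return legacy_fn(*converter(*args))

    return modern


# old two-argument form of an engine_connect listener
def legacy_engine_connect(conn, branch):
    return ("connected", conn, branch)


# same converter shape as used for engine_connect above:
# new (conn,) -> legacy (conn, False)
modern_fn = wrap_legacy_listener(
    legacy_engine_connect, lambda conn: (conn, False)
)
result = modern_fn("conn-object")
```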
conn = self._sync_connection()
result = await greenlet_spawn(
- conn._execute_20,
+ conn.execute,
statement,
parameters,
util.EMPTY_DICT.merge_with(
conn = self._sync_connection()
result = await greenlet_spawn(
- conn._execute_20,
+ conn.execute,
statement,
parameters,
execution_options,
functionality is already available using the
:meth:`_expression.Insert.from_select` method.
-.. note::
-
- The above ``InsertFromSelect`` construct probably wants to have "autocommit"
- enabled. See :ref:`enabling_compiled_autocommit` for this step.
Cross Compiling between SQL and DDL compilers
---------------------------------------------
supported.
-.. _enabling_compiled_autocommit:
-
-Enabling Autocommit on a Construct
-==================================
-
-Recall from the section :ref:`autocommit` that the :class:`_engine.Engine`,
-when
-asked to execute a construct in the absence of a user-defined transaction,
-detects if the given construct represents DML or DDL, that is, a data
-modification or data definition statement, which requires (or may require,
-in the case of DDL) that the transaction generated by the DBAPI be committed
-(recall that DBAPI always has a transaction going on regardless of what
-SQLAlchemy does). Checking for this is actually accomplished by checking for
-the "autocommit" execution option on the construct. When building a
-construct like an INSERT derivation, a new DDL type, or perhaps a stored
-procedure that alters data, the "autocommit" option needs to be set in order
-for the statement to function with "connectionless" execution
-(as described in :ref:`dbengine_implicit`).
-
-Currently a quick way to do this is to subclass :class:`.Executable`, then
-add the "autocommit" flag to the ``_execution_options`` dictionary (note this
-is a "frozen" dictionary which supplies a generative ``union()`` method)::
-
- from sqlalchemy.sql.expression import Executable, ClauseElement
-
- class MyInsertThing(Executable, ClauseElement):
- _execution_options = \
- Executable._execution_options.union({'autocommit': True})
-
-More succinctly, if the construct is truly similar to an INSERT, UPDATE, or
-DELETE, :class:`.UpdateBase` can be used, which already is a subclass
-of :class:`.Executable`, :class:`_expression.ClauseElement` and includes the
-``autocommit`` flag::
-
- from sqlalchemy.sql.expression import UpdateBase
-
- class MyInsertThing(UpdateBase):
- def __init__(self, ...):
- ...
-
-
-
-
-DDL elements that subclass :class:`.DDLElement` already have the
-"autocommit" flag turned on.
-
-from .. import util
-from ..engine import Connection as _LegacyConnection
-from ..engine import create_engine as _create_engine
-from ..engine import Engine as _LegacyEngine
-from ..engine.base import OptionEngineMixin
-
-NO_OPTIONS = util.immutabledict()
-
-
-def create_engine(*arg, **kw):
- """Create a new :class:`_future.Engine` instance.
-
- Arguments passed to :func:`_future.create_engine` are mostly identical
- to those passed to the 1.x :func:`_sa.create_engine` function.
- The difference is that the object returned is the :class:`._future.Engine`
- which has the 2.0 version of the API.
-
- """
-
- kw["_future_engine_class"] = Engine
- return _create_engine(*arg, **kw)
-
-
-class Connection(_LegacyConnection):
- """Provides high-level functionality for a wrapped DB-API connection.
-
- The :class:`_future.Connection` object is procured by calling
- the :meth:`_future.Engine.connect` method of the :class:`_future.Engine`
- object, and provides services for execution of SQL statements as well
- as transaction control.
-
- **This is the SQLAlchemy 2.0 version** of the :class:`_engine.Connection`
- class. The API and behavior of this object is largely the same, with the
- following differences in behavior:
-
- * The result object returned for results is the
- :class:`_engine.CursorResult`
- object, which is a subclass of the :class:`_engine.Result`.
- This object has a slightly different API and behavior than the
- :class:`_engine.LegacyCursorResult` returned for 1.x style usage.
-
- * The object has :meth:`_future.Connection.commit` and
- :meth:`_future.Connection.rollback` methods which commit or roll back
- the current transaction in progress, if any.
-
- * The object features "autobegin" behavior, such that any call to
- :meth:`_future.Connection.execute` will
- unconditionally start a
- transaction which can be controlled using the above mentioned
- :meth:`_future.Connection.commit` and
- :meth:`_future.Connection.rollback` methods.
-
- * The object does not have any "autocommit" functionality. Any SQL
- statement or DDL statement will not be followed by any COMMIT until
- the transaction is explicitly committed, either via the
- :meth:`_future.Connection.commit` method, or if the connection is
- being used in a context manager that commits such as the one
- returned by :meth:`_future.Engine.begin`.
-
- * The SAVEPOINT method :meth:`_future.Connection.begin_nested` returns
- a :class:`_engine.NestedTransaction` as was always the case, and the
- savepoint can be controlled by invoking
- :meth:`_engine.NestedTransaction.commit` or
- :meth:`_engine.NestedTransaction.rollback` as was the case before.
- However, this savepoint "transaction" is not associated with the
- transaction that is controlled by the connection itself; the overall
- transaction can be committed or rolled back directly which will not emit
- any special instructions for the SAVEPOINT (this will typically have the
- effect that one desires).
-
- * The :class:`_future.Connection` object does not support "branching",
- which was a pattern by which a sub "connection" would be used that
- refers to this connection as a parent.
-
-
-
- """
-
- _is_future = True
-
- def _branch(self):
- raise NotImplementedError(
- "sqlalchemy.future.Connection does not support "
- "'branching' of new connections."
- )
-
- def begin(self):
- """Begin a transaction prior to autobegin occurring.
-
- The returned object is an instance of :class:`_engine.RootTransaction`.
- This object represents the "scope" of the transaction,
- which completes when either the :meth:`_engine.Transaction.rollback`
- or :meth:`_engine.Transaction.commit` method is called.
-
- The :meth:`_future.Connection.begin` method in SQLAlchemy 2.0 begins a
- transaction that normally will be begun in any case when the connection
- is first used to execute a statement. The reason this method might be
- used would be to invoke the :meth:`_events.ConnectionEvents.begin`
- event at a specific time, or to organize code within the scope of a
- connection checkout in terms of context managed blocks, such as::
-
- with engine.connect() as conn:
- with conn.begin():
- conn.execute(...)
- conn.execute(...)
-
- with conn.begin():
- conn.execute(...)
- conn.execute(...)
-
- The above code is not fundamentally any different in its behavior than
- the following code which does not use
- :meth:`_future.Connection.begin`; the below style is referred towards
- as "commit as you go" style::
-
- with engine.connect() as conn:
- conn.execute(...)
- conn.execute(...)
- conn.commit()
-
- conn.execute(...)
- conn.execute(...)
- conn.commit()
-
- From a database point of view, the :meth:`_future.Connection.begin`
- method does not emit any SQL or change the state of the underlying
- DBAPI connection in any way; the Python DBAPI does not have any
- concept of explicit transaction begin.
-
- .. seealso::
-
- :ref:`tutorial_working_with_transactions` - in the
- :ref:`unified_tutorial`
-
- :meth:`_future.Connection.begin_nested` - use a SAVEPOINT
-
- :meth:`_engine.Connection.begin_twophase` -
- use a two phase /XID transaction
-
- :meth:`_future.Engine.begin` - context manager available from
- :class:`_future.Engine`
-
- """
- return super(Connection, self).begin()
-
- def begin_nested(self):
- """Begin a nested transaction (i.e. SAVEPOINT) and return a transaction
- handle.
-
- The returned object is an instance of
- :class:`_engine.NestedTransaction`.
-
- Nested transactions require SAVEPOINT support in the
- underlying database. Any transaction in the hierarchy may
- ``commit`` and ``rollback``, however the outermost transaction
- still controls the overall ``commit`` or ``rollback`` of the
- transaction of a whole.
-
- If an outer :class:`.RootTransaction` is not present on this
- :class:`_future.Connection`, a new one is created using "autobegin".
- This outer transaction may be completed using "commit-as-you-go" style
- usage, by calling upon :meth:`_future.Connection.commit` or
- :meth:`_future.Connection.rollback`.
-
- .. tip::
-
- The "autobegin" behavior of :meth:`_future.Connection.begin_nested`
- is specific to :term:`2.0 style` use; for legacy behaviors, see
- :meth:`_engine.Connection.begin_nested`.
-
- The :class:`_engine.NestedTransaction` remains independent of the
- :class:`_future.Connection` object itself. Calling the
- :meth:`_future.Connection.commit` or
- :meth:`_future.Connection.rollback` will always affect the actual
- containing database transaction itself, and not the SAVEPOINT itself.
- When a database transaction is committed, any SAVEPOINTs that have been
- established are cleared and the data changes within their scope is also
- committed.
-
- .. seealso::
-
- :meth:`_future.Connection.begin`
-
-
- """
- return super(Connection, self).begin_nested()
-
- def commit(self):
- """Commit the transaction that is currently in progress.
-
- This method commits the current transaction if one has been started.
- If no transaction was started, the method has no effect, assuming
- the connection is in a non-invalidated state.
-
- A transaction is begun on a :class:`_future.Connection` automatically
- whenever a statement is first executed, or when the
- :meth:`_future.Connection.begin` method is called.
-
- .. note:: The :meth:`_future.Connection.commit` method only acts upon
- the primary database transaction that is linked to the
- :class:`_future.Connection` object. It does not operate upon a
- SAVEPOINT that would have been invoked from the
- :meth:`_future.Connection.begin_nested` method; for control of a
- SAVEPOINT, call :meth:`_engine.NestedTransaction.commit` on the
- :class:`_engine.NestedTransaction` that is returned by the
- :meth:`_future.Connection.begin_nested` method itself.
-
-
- """
- if self._transaction:
- self._transaction.commit()
-
- def rollback(self):
- """Roll back the transaction that is currently in progress.
-
- This method rolls back the current transaction if one has been started.
- If no transaction was started, the method has no effect. If a
- transaction was started and the connection is in an invalidated state,
- the transaction is cleared using this method.
-
- A transaction is begun on a :class:`_future.Connection` automatically
- whenever a statement is first executed, or when the
- :meth:`_future.Connection.begin` method is called.
-
- .. note:: The :meth:`_future.Connection.rollback` method only acts
- upon the primary database transaction that is linked to the
- :class:`_future.Connection` object. It does not operate upon a
- SAVEPOINT that would have been invoked from the
- :meth:`_future.Connection.begin_nested` method; for control of a
- SAVEPOINT, call :meth:`_engine.NestedTransaction.rollback` on the
- :class:`_engine.NestedTransaction` that is returned by the
- :meth:`_future.Connection.begin_nested` method itself.
-
-
- """
- if self._transaction:
- self._transaction.rollback()
-
- def close(self):
- """Close this :class:`_future.Connection`.
-
- This has the effect of also calling :meth:`_future.Connection.rollback`
- if any transaction is in place.
-
- """
- super(Connection, self).close()
-
- def execute(self, statement, parameters=None, execution_options=None):
- r"""Executes a SQL statement construct and returns a
- :class:`_engine.Result`.
-
- :param statement: The statement to be executed. This is always
- an object that is in both the :class:`_expression.ClauseElement` and
- :class:`_expression.Executable` hierarchies, including:
-
- * :class:`_expression.Select`
- * :class:`_expression.Insert`, :class:`_expression.Update`,
- :class:`_expression.Delete`
- * :class:`_expression.TextClause` and
- :class:`_expression.TextualSelect`
- * :class:`_schema.DDL` and objects which inherit from
- :class:`_schema.DDLElement`
-
- :param parameters: parameters which will be bound into the statement.
- This may be either a dictionary of parameter names to values,
- or a mutable sequence (e.g. a list) of dictionaries. When a
- list of dictionaries is passed, the underlying statement execution
- will make use of the DBAPI ``cursor.executemany()`` method.
- When a single dictionary is passed, the DBAPI ``cursor.execute()``
- method will be used.
-
- :param execution_options: optional dictionary of execution options,
- which will be associated with the statement execution. This
- dictionary can provide a subset of the options that are accepted
- by :meth:`_future.Connection.execution_options`.
-
- :return: a :class:`_engine.Result` object.
-
- """
- return self._execute_20(
- statement, parameters, execution_options or NO_OPTIONS
- )
-
- def scalar(self, statement, parameters=None, execution_options=None):
- r"""Executes a SQL statement construct and returns a scalar object.
-
- This method is shorthand for invoking the
- :meth:`_engine.Result.scalar` method after invoking the
- :meth:`_future.Connection.execute` method. Parameters are equivalent.
-
- :return: a scalar Python value representing the first column of the
- first row returned.
-
- """
- return self.execute(statement, parameters, execution_options).scalar()
-
-
-class Engine(_LegacyEngine):
- """Connects a :class:`_pool.Pool` and
- :class:`_engine.Dialect` together to provide a
- source of database connectivity and behavior.
-
- **This is the SQLAlchemy 2.0 version** of the :class:`~.engine.Engine`.
-
- An :class:`.future.Engine` object is instantiated publicly using the
- :func:`~sqlalchemy.future.create_engine` function.
-
- .. seealso::
-
- :doc:`/core/engines`
-
- :ref:`connections_toplevel`
-
- """
-
- _connection_cls = Connection
- _is_future = True
-
- def _not_implemented(self, *arg, **kw):
- raise NotImplementedError(
- "This method is not implemented for SQLAlchemy 2.0."
- )
-
- transaction = (
- run_callable
- ) = (
- execute
- ) = (
- scalar
- ) = (
- _execute_clauseelement
- ) = _execute_compiled = table_names = has_table = _not_implemented
-
- def _run_ddl_visitor(self, visitorcallable, element, **kwargs):
- # TODO: this is for create_all support etc. not clear if we
- # want to provide this in 2.0, that is, a way to execute SQL where
- # they aren't calling "engine.begin()" explicitly, however, DDL
- # may be a special case for which we want to continue doing it this
- # way. A big win here is that the full DDL sequence is inside of a
- # single transaction rather than COMMIT for each statement.
- with self.begin() as conn:
- conn._run_ddl_visitor(visitorcallable, element, **kwargs)
-
- @classmethod
- def _future_facade(self, legacy_engine):
- return Engine(
- legacy_engine.pool,
- legacy_engine.dialect,
- legacy_engine.url,
- logging_name=legacy_engine.logging_name,
- echo=legacy_engine.echo,
- hide_parameters=legacy_engine.hide_parameters,
- execution_options=legacy_engine._execution_options,
- )
-
- @util.contextmanager
- def begin(self):
- """Return a :class:`_future.Connection` object with a transaction
- begun.
-
- Use of this method is similar to that of
- :meth:`_future.Engine.connect`, typically as a context manager, which
- will automatically maintain the state of the transaction when the block
- ends, either by calling :meth:`_future.Connection.commit` when the
- block succeeds normally, or :meth:`_future.Connection.rollback` when an
- exception is raised, before propagating the exception outwards::
-
- with engine.begin() as connection:
- connection.execute(text("insert into table values ('foo')"))
-
-
- .. seealso::
-
- :meth:`_future.Engine.connect`
-
- :meth:`_future.Connection.begin`
-
- """
- with self.connect() as conn:
- with conn.begin():
- yield conn
-
- def connect(self):
- """Return a new :class:`_future.Connection` object.
-
- The :class:`_future.Connection` acts as a Python context manager, so
- the typical use of this method looks like::
-
- with engine.connect() as connection:
- connection.execute(text("insert into table values ('foo')"))
- connection.commit()
-
- Where above, after the block is completed, the connection is "closed"
- and its underlying DBAPI resources are returned to the connection pool.
- This also has the effect of rolling back any transaction that
- was explicitly begun or was begun via autobegin, and will
- emit the :meth:`_events.ConnectionEvents.rollback` event if one was
- started and is still in progress.
-
- .. seealso::
-
- :meth:`_future.Engine.begin`
-
-
- """
- return super(Engine, self).connect()
-
-
-class OptionEngine(OptionEngineMixin, Engine):
- pass
-
-
-Engine._option_cls = OptionEngine
+from ..engine import Connection # noqa
+from ..engine import create_engine # noqa
+from ..engine import Engine # noqa
has_all_defaults,
has_all_pks,
) in records:
- c = connection._execute_20(
+ c = connection.execute(
statement.values(value_params),
params,
execution_options=execution_options,
has_all_defaults,
has_all_pks,
) in records:
- c = connection._execute_20(
+ c = connection.execute(
statement, params, execution_options=execution_options
)
assert_singlerow and len(multiparams) == 1
)
- c = connection._execute_20(
+ c = connection.execute(
statement, multiparams, execution_options=execution_options
)
records = list(records)
multiparams = [rec[2] for rec in records]
- c = connection._execute_20(
+ c = connection.execute(
statement, multiparams, execution_options=execution_options
)
if do_executemany:
multiparams = [rec[2] for rec in records]
- c = connection._execute_20(
+ c = connection.execute(
statement, multiparams, execution_options=execution_options
)
has_all_defaults,
) in records:
if value_params:
- result = connection._execute_20(
+ result = connection.execute(
statement.values(value_params),
params,
execution_options=execution_options,
)
else:
- result = connection._execute_20(
+ result = connection.execute(
statement,
params,
execution_options=execution_options,
check_rowcount = assert_singlerow
for state, state_dict, mapper_rec, connection, params in records:
- c = connection._execute_20(
+ c = connection.execute(
statement, params, execution_options=execution_options
)
assert_singlerow and len(multiparams) == 1
)
- c = connection._execute_20(
+ c = connection.execute(
statement, multiparams, execution_options=execution_options
)
# rows can be verified
for params in del_objects:
- c = connection._execute_20(
+ c = connection.execute(
statement, params, execution_options=execution_options
)
rows_matched += c.rowcount
"- versioning cannot be verified."
% connection.dialect.dialect_description
)
- connection._execute_20(
+ connection.execute(
statement, del_objects, execution_options=execution_options
)
else:
- c = connection._execute_20(
+ c = connection.execute(
statement, del_objects, execution_options=execution_options
)
by :meth:`_engine.Connection.execution_options`, and may also
provide additional options understood only in an ORM context.
+ .. seealso::
+
+ :ref:`orm_queryguide_execution_options` - ORM-specific execution
+ options
+
:param bind_arguments: dictionary of additional arguments to determine
the bind. May include "mapper", "bind", or other custom arguments.
Contents of this dictionary are passed to the
bind = self.get_bind(**bind_arguments)
conn = self._connection_for_bind(bind)
- result = conn._execute_20(statement, params or {}, execution_options)
+ result = conn.execute(statement, params or {}, execution_options)
if compile_state_cls:
result = compile_state_cls.orm_setup_cursor_result(
any data changes present on the transaction
are committed unconditionally.
* ``None`` - don't do anything on the connection.
- This setting is only appropriate if the database / DBAPI
+ This setting may be appropriate if the database / DBAPI
works in pure "autocommit" mode at all times, or if the
application uses the :class:`_engine.Engine` with consistent
connectivity patterns. See the section
elements = None
type_api = None
-PARSE_AUTOCOMMIT = util.symbol("PARSE_AUTOCOMMIT")
NO_ARG = util.symbol("NO_ARG")
"""Set non-SQL options for the statement which take effect during
execution.
- Execution options can be set on a per-statement or
- per :class:`_engine.Connection` basis. Additionally, the
- :class:`_engine.Engine` and ORM :class:`~.orm.query.Query`
- objects provide
- access to execution options which they in turn configure upon
- connections.
-
- The :meth:`execution_options` method is generative. A new
- instance of this statement is returned that contains the options::
+ Execution options can be set at many scopes, including per-statement,
+ per-connection, or per execution, using methods such as
+ :meth:`_engine.Connection.execution_options` and parameters which
+ accept a dictionary of options such as
+ :paramref:`_engine.Connection.execute.execution_options` and
+ :paramref:`_orm.Session.execute.execution_options`.
+
+ The primary characteristic of an execution option, as opposed to
+ other kinds of options such as ORM loader options, is that
+ **execution options never affect the compiled SQL of a query, only
+ things that affect how the SQL statement itself is invoked or how
+ results are fetched**. That is, execution options are not part of
+ what's accommodated by SQL compilation nor are they considered part of
+ the cached state of a statement.
+
+ The :meth:`_sql.Executable.execution_options` method is
+ :term:`generative`, as is the case for the method as applied to the
+ :class:`_engine.Engine` and :class:`_orm.Query` objects; calling the
+ method returns a copy of the object with the given parameters applied,
+ leaving the original unchanged::
statement = select(table.c.x, table.c.y)
- statement = statement.execution_options(autocommit=True)
-
- Note that only a subset of possible execution options can be applied
- to a statement - these include "autocommit" and "stream_results",
- but not "isolation_level" or "compiled_cache".
- See :meth:`_engine.Connection.execution_options` for a full list of
- possible options.
+ new_statement = statement.execution_options(my_option=True)
+
+ An exception to this behavior is the :class:`_engine.Connection`
+ object, where the :meth:`_engine.Connection.execution_options` method
+ is explicitly **not** generative.
+
+ The options that may be passed to
+ :meth:`_sql.Executable.execution_options` and other related methods and
+ parameter dictionaries include parameters that are explicitly consumed
+ by SQLAlchemy Core or ORM, as well as arbitrary keyword arguments not
+ defined by SQLAlchemy. This means the methods and parameter
+ dictionaries may carry user-defined parameters that interact with
+ custom code, which may access the parameters using methods such as
+ :meth:`_sql.Executable.get_execution_options` and
+ :meth:`_engine.Connection.get_execution_options`, or within selected
+ event hooks using a dedicated ``execution_options`` event parameter
+ such as
+ :paramref:`_events.ConnectionEvents.before_execute.execution_options`
+ or :attr:`_orm.ORMExecuteState.execution_options`, e.g.::
+
+ from sqlalchemy import event
+
+ @event.listens_for(some_engine, "before_execute")
+ def _process_opt(conn, statement, multiparams, params, execution_options):
+ "run a SQL function before invoking a statement"
+
+ if execution_options.get("do_special_thing", False):
+ conn.exec_driver_sql("run_special_function()")
+
+ Within the scope of options that are explicitly recognized by
+ SQLAlchemy, most apply to specific classes of objects and not others.
+ The most common execution options include:
+
+ * :paramref:`_engine.Connection.execution_options.isolation_level` -
+ sets the isolation level for a connection or a class of connections
+ via an :class:`_engine.Engine`. This option is accepted only
+ by :class:`_engine.Connection` or :class:`_engine.Engine`.
+
+ * :paramref:`_engine.Connection.execution_options.stream_results` -
+ indicates results should be fetched using a server side cursor;
+ this option is accepted by :class:`_engine.Connection`, by the
+ :paramref:`_engine.Connection.execute.execution_options` parameter
+ on :meth:`_engine.Connection.execute`, and additionally by
+ :meth:`_sql.Executable.execution_options` on a SQL statement object,
+ as well as by ORM constructs like :meth:`_orm.Session.execute`.
+
+ * :paramref:`_engine.Connection.execution_options.compiled_cache` -
+ indicates a dictionary that will serve as the
+ :ref:`SQL compilation cache <sql_caching>`
+ for a :class:`_engine.Connection` or :class:`_engine.Engine`, as
+ well as for ORM methods like :meth:`_orm.Session.execute`.
+ Can be passed as ``None`` to disable caching for statements.
+ This option is not accepted by
+ :meth:`_sql.Executable.execution_options` as it is inadvisable to
+ carry along a compilation cache within a statement object.
+
+ * :paramref:`_engine.Connection.execution_options.schema_translate_map`
+ - a mapping of schema names used by the
+ :ref:`Schema Translate Map <schema_translating>` feature, accepted
+ by :class:`_engine.Connection`, :class:`_engine.Engine`,
+ :class:`_sql.Executable`, as well as by ORM constructs
+ like :meth:`_orm.Session.execute`.
.. seealso::
:meth:`_engine.Connection.execution_options`
- :meth:`_query.Query.execution_options`
+ :paramref:`_engine.Connection.execute.execution_options`
- :meth:`.Executable.get_execution_options`
+ :paramref:`_orm.Session.execute.execution_options`
- """
+ :ref:`orm_queryguide_execution_options` - documentation on all
+ ORM-specific execution options
+
+ """ # noqa: E501
if "isolation_level" in kw:
raise exc.ArgumentError(
"'isolation_level' execution option may only be specified "
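The generative behavior described in the revised docstring can be illustrated with a minimal stand-in class. This is not the real `Executable` implementation, only a sketch of the copy-and-merge pattern it documents:

```python
class FakeExecutable:
    """Minimal stand-in illustrating generative execution_options (sketch)."""

    def __init__(self, options=None):
        self._execution_options = dict(options or {})

    def execution_options(self, **opts):
        # generative: return a copy carrying the merged options,
        # leaving self unchanged
        new = FakeExecutable(self._execution_options)
        new._execution_options.update(opts)
        return new

    def get_execution_options(self):
        return dict(self._execution_options)


stmt = FakeExecutable()
new_stmt = stmt.execution_options(stream_results=True)
```

The original `stmt` still carries no options after the call; only `new_stmt` does, which is why cached or shared statements are safe to derive from.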
self._gen_time = util.perf_counter()
def _execute_on_connection(
- self, connection, multiparams, params, execution_options
+ self, connection, distilled_params, execution_options
):
if self.can_execute:
return connection._execute_compiled(
- self, multiparams, params, execution_options
+ self, distilled_params, execution_options
)
else:
raise exc.ObjectNotExecutableError(self.statement)
"""
from . import roles
-from .base import _bind_or_error
from .base import _generative
from .base import Executable
from .base import SchemaVisitor
"""
- _execution_options = Executable._execution_options.union(
- {"autocommit": True}
- )
-
target = None
on = None
dialect = None
callable_ = None
def _execute_on_connection(
- self, connection, multiparams, params, execution_options
+ self, connection, distilled_params, execution_options
):
return connection._execute_ddl(
- self, multiparams, params, execution_options
+ self, distilled_params, execution_options
)
- @util.deprecated_20(
- ":meth:`.DDLElement.execute`",
- alternative="All statement execution in SQLAlchemy 2.0 is performed "
- "by the :meth:`_engine.Connection.execute` method of "
- ":class:`_engine.Connection`, "
- "or in the ORM by the :meth:`.Session.execute` method of "
- ":class:`.Session`.",
- )
- def execute(self, bind=None, target=None):
- """Execute this DDL immediately.
-
- Executes the DDL statement in isolation using the supplied
- :class:`.Connectable` or
- :class:`.Connectable` assigned to the ``.bind``
- property, if not supplied. If the DDL has a conditional ``on``
- criteria, it will be invoked with None as the event.
-
- :param bind:
- Optional, an ``Engine`` or ``Connection``. If not supplied, a valid
- :class:`.Connectable` must be present in the
- ``.bind`` property.
-
- :param target:
- Optional, defaults to None. The target :class:`_schema.SchemaItem`
- for the execute call. This is equivalent to passing the
- :class:`_schema.SchemaItem` to the :meth:`.DDLElement.against`
- method and then invoking :meth:`_schema.DDLElement.execute`
- upon the resulting :class:`_schema.DDLElement` object. See
- :meth:`.DDLElement.against` for further detail.
-
- """
-
- if bind is None:
- bind = _bind_or_error(self)
-
- if self._should_execute(target, bind):
- return bind.execute(self.against(target))
- else:
- bind.engine.logger.info("DDL execution skipped, criteria not met.")
-
@_generative
def against(self, target):
"""Return a copy of this :class:`_schema.DDLElement` which will include
__visit_name__ = "update_base"
- _execution_options = Executable._execution_options.union(
- {"autocommit": True}
- )
_hints = util.immutabledict()
named_with_column = False
from .base import HasMemoized
from .base import Immutable
from .base import NO_ARG
-from .base import PARSE_AUTOCOMMIT
from .base import SingletonConstant
from .coercions import _document_text_coercion
from .traversals import HasCopyInternals
return d
def _execute_on_connection(
- self, connection, multiparams, params, execution_options, _force=False
+ self, connection, distilled_params, execution_options, _force=False
):
if _force or self.supports_execution:
return connection._execute_clauseelement(
- self, multiparams, params, execution_options
+ self, distilled_params, execution_options
)
else:
raise exc.ObjectNotExecutableError(self)
_is_textual = True
_bind_params_regex = re.compile(r"(?<![:\w\x5c]):(\w+)(?!:)", re.UNICODE)
- _execution_options = Executable._execution_options.union(
- {"autocommit": PARSE_AUTOCOMMIT}
- )
_is_implicitly_boolean = False
_render_label_in_columns_clause = False
:func:`_expression.text` is also used for the construction
of a full, standalone statement using plain text.
As such, SQLAlchemy refers
- to it as an :class:`.Executable` object, and it supports
- the :meth:`Executable.execution_options` method. For example,
- a :func:`_expression.text`
- construct that should be subject to "autocommit"
- can be set explicitly so using the
- :paramref:`.Connection.execution_options.autocommit` option::
-
- t = text("EXEC my_procedural_thing()").\
- execution_options(autocommit=True)
-
- .. deprecated:: 1.4 The "autocommit" execution option is deprecated
- and will be removed in SQLAlchemy 2.0. See
- :ref:`migration_20_autocommit` for discussion.
+ to it as an :class:`.Executable` object; it may be used
+ like any other statement passed to an ``.execute()`` method.
:param text:
the text of the SQL statement to be created. Use ``:<param>``
class _IdentifiedClause(Executable, ClauseElement):
__visit_name__ = "identified"
- _execution_options = Executable._execution_options.union(
- {"autocommit": False}
- )
def __init__(self, ident):
self.ident = ident
from .base import _select_iterables
from .base import ColumnCollection
from .base import Executable
-from .base import PARSE_AUTOCOMMIT
from .dml import Delete
from .dml import Insert
from .dml import Update
)
def _execute_on_connection(
- self, connection, multiparams, params, execution_options
+ self, connection, distilled_params, execution_options
):
return connection._execute_function(
- self, multiparams, params, execution_options
+ self, distilled_params, execution_options
)
def scalar_table_valued(self, name, type_=None):
s = s.execution_options(**self._execution_options)
return s
- @util.deprecated_20(
- ":meth:`.FunctionElement.scalar`",
- alternative="Scalar execution in SQLAlchemy 2.0 is performed "
- "by the :meth:`_engine.Connection.scalar` method of "
- ":class:`_engine.Connection`, "
- "or in the ORM by the :meth:`.Session.scalar` method of "
- ":class:`.Session`.",
- )
- def scalar(self):
- """Execute this :class:`.FunctionElement` against an embedded
- 'bind' and return a scalar value.
-
- This first calls :meth:`~.FunctionElement.select` to
- produce a SELECT construct.
-
- Note that :class:`.FunctionElement` can be passed to
- the :meth:`.Connectable.scalar` method of :class:`_engine.Connection`
- or :class:`_engine.Engine`.
-
- """
- return self.select().execute().scalar()
-
- @util.deprecated_20(
- ":meth:`.FunctionElement.execute`",
- alternative="All statement execution in SQLAlchemy 2.0 is performed "
- "by the :meth:`_engine.Connection.execute` method of "
- ":class:`_engine.Connection`, "
- "or in the ORM by the :meth:`.Session.execute` method of "
- ":class:`.Session`.",
- )
- def execute(self):
- """Execute this :class:`.FunctionElement` against an embedded
- 'bind'.
-
- This first calls :meth:`~.FunctionElement.select` to
- produce a SELECT construct.
-
- Note that :class:`.FunctionElement` can be passed to
- the :meth:`.Connectable.execute` method of :class:`_engine.Connection`
- or :class:`_engine.Engine`.
-
- """
- return self.select().execute()
-
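The removal above drops the deprecated `FunctionElement.execute()` and `FunctionElement.scalar()` methods; as their deprecation messages state, execution moves to the `Connection`. A minimal sketch of the 2.0-style replacement, using an in-memory SQLite engine as a stand-in:

```python
from sqlalchemy import create_engine, func, select

# 2.0 pattern replacing the removed FunctionElement.execute() /
# FunctionElement.scalar(): wrap the function in a select() and
# execute it via Connection.scalar()
engine = create_engine("sqlite://")
with engine.connect() as conn:
    value = conn.scalar(select(func.abs(-7)))
```

The same pattern applies in the ORM via `Session.scalar()`, per the alternatives named in the removed decorators.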
def _bind_param(self, operator, obj, type_=None, **kw):
return BindParameter(
None,
return LinkedLambdaElement(other, parent_lambda=self, opts=opts)
def _execute_on_connection(
- self, connection, multiparams, params, execution_options
+ self, connection, distilled_params, execution_options
):
if self._rec.expected_expr.supports_execution:
return connection._execute_clauseelement(
- self, multiparams, params, execution_options
+ self, distilled_params, execution_options
)
else:
raise exc.ObjectNotExecutableError(self)
return NullLambdaStatement(statement)
def _execute_on_connection(
- self, connection, multiparams, params, execution_options
+ self, connection, distilled_params, execution_options
):
if self._resolved.supports_execution:
return connection._execute_clauseelement(
- self, multiparams, params, execution_options
+ self, distilled_params, execution_options
)
else:
raise exc.ObjectNotExecutableError(self)
self.column.default = self
def _execute_on_connection(
- self, connection, multiparams, params, execution_options
+ self, connection, distilled_params, execution_options
):
return connection._execute_default(
- self, multiparams, params, execution_options
+ self, distilled_params, execution_options
)
@property
def _execute_on_connection(
self,
connection,
- multiparams,
- params,
+ distilled_params,
execution_options,
):
util.warn_deprecated(
"1.4",
)
return self.element._execute_on_connection(
- connection, multiparams, params, execution_options, _force=True
+ connection, distilled_params, execution_options, _force=True
)
from .. import util
from ..engine import url
from ..engine.default import DefaultDialect
-from ..engine.util import _distill_cursor_params
from ..schema import _DDLCompiles
def __init__(self, context, clauseelement, multiparams, params):
self.context = context
self.clauseelement = clauseelement
- self.parameters = _distill_cursor_params(
- context.connection, tuple(multiparams), params
- )
+
+ if multiparams:
+ self.parameters = multiparams
+ elif params:
+ self.parameters = [params]
+ else:
+ self.parameters = []
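The inlined replacement above takes over from the removed `_distill_cursor_params` helper. A standalone sketch of the same normalization rule, using a hypothetical `distill_params` name for illustration:

```python
def distill_params(multiparams, params):
    # mirrors the inlined logic: an executemany-style sequence wins,
    # a single params dict is wrapped in a one-element list, and no
    # parameters at all yields an empty list
    if multiparams:
        return multiparams
    elif params:
        return [params]
    else:
        return []
```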
self.statements = []
def __repr__(self):
def testing_engine(
url=None,
options=None,
- future=None,
asyncio=False,
transfer_staticpool=False,
):
if asyncio:
from sqlalchemy.ext.asyncio import create_async_engine as create_engine
- elif future or (
- config.db and config.db._is_future and future is not False
- ):
- from sqlalchemy.future import create_engine
else:
from sqlalchemy import create_engine
from sqlalchemy.engine.url import make_url
def __getattr__(self, key):
return getattr(self.conn, key)
-
-
-def proxying_engine(
- conn_cls=DBAPIProxyConnection, cursor_cls=DBAPIProxyCursor
-):
- """Produce an engine that provides proxy hooks for
- common methods.
-
- """
-
- def mock_conn():
- return conn_cls(config.db, cursor_cls)
-
- def _wrap_do_on_connect(do_on_connect):
- def go(dbapi_conn):
- return do_on_connect(dbapi_conn.conn)
-
- return go
-
- return testing_engine(
- options={
- "creator": mock_conn,
- "_wrap_do_on_connect": _wrap_do_on_connect,
- }
- )
# This module is part of SQLAlchemy and is released under
# the MIT License: https://www.opensource.org/licenses/mit-license.php
-import contextlib
import re
import sys
@config.fixture()
def future_engine(self):
- eng = getattr(self, "bind", None) or config.db
- with _push_future_engine(eng):
- yield
+ yield
@config.fixture()
def testing_engine(self):
return engines.testing_engine(
url=url,
options=options,
- future=future,
asyncio=asyncio,
transfer_staticpool=transfer_staticpool,
)
_connection_fixture_connection = None
-@contextlib.contextmanager
-def _push_future_engine(engine):
-
- from ..future.engine import Engine
- from sqlalchemy import testing
-
- facade = Engine._future_facade(engine)
- config._current.push_engine(facade, testing)
-
- yield facade
-
- config._current.pop(testing)
-
-
class FutureEngineMixin(object):
- @config.fixture(autouse=True, scope="class")
- def _push_future_engine(self):
- eng = getattr(self, "bind", None) or config.db
- with _push_future_engine(eng):
- yield
+    """retained because alembic's test suite still uses this"""
class TablesTest(TestBase):
conn.scalar(select(self.tables.some_table.c.id)),
1 if autocommit else None,
)
+ conn.rollback()
with conn.begin():
conn.execute(self.tables.some_table.delete())
).exec_driver_sql("select 1")
assert self._is_server_side(result.cursor)
+ # the connection has autobegun, which means at the end of the
+ # block, we will roll back, which on MySQL at least will fail
+ # with "Commands out of sync" if the result set
+ # is not closed, so we close it first.
+ #
+    # fun fact! why did we not have this result.close() in this test
+    # before 2.0? don't we roll back in the connection pool
+    # unconditionally? yes! and in fact if you run this test in 1.4
+    # with stdout shown, a warning is emitted: "Exception during
+    # reset or similar" with "Commands out of sync". 2.0's
+    # architecture finds and fixes what was previously an expensive
+    # silent error condition.
+ result.close()
+
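The added `result.close()` above illustrates a general 2.0 pattern: close an open result set before the connection's end-of-block rollback runs. A minimal sketch of that pattern (SQLite simply buffers `stream_results`, but the close-before-rollback ordering is the point):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    # stream_results requests a server-side cursor where the driver
    # supports one; closing the result before the block ends keeps
    # the rollback-on-return from colliding with an open cursor
    result = conn.execution_options(stream_results=True).execute(
        text("select 1")
    )
    row = result.fetchone()
    result.close()
```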
def test_stmt_enabled_conn_option_disabled(self):
engine = self._fixture(False)
r = sess.connection().execute(
compile_state.statement,
execution_options=exec_opts,
- bind_arguments=bind_arguments,
)
r.context.compiled.compile_state = compile_state
assert c2.dialect.has_table(
c2, "#myveryveryuniquetemptablename"
)
+ c2.rollback()
finally:
with c1.begin():
c1.exec_driver_sql(
from sqlalchemy.testing import mock
from sqlalchemy.testing.assertions import AssertsCompiledSQL
from .test_compiler import ReservedWordFixture
-from ...engine import test_deprecations
class BackendDialectTest(
isolation_level="AUTOCOMMIT"
)
assert c.exec_driver_sql("SELECT @@autocommit;").scalar()
+ c.rollback()
c = c.execution_options(isolation_level="READ COMMITTED")
assert not c.exec_driver_sql("SELECT @@autocommit;").scalar()
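The `c.rollback()` added above is needed because the connection autobegins under 2.0, and the isolation level cannot change while a transaction is open. A sketch of the simplest arrangement, setting the option before any statement autobegins (SQLite in-memory engine as a stand-in for the MySQL backend in the test):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    # apply the isolation level before anything executes, so no
    # autobegun transaction has to be rolled back first
    autocommit_conn = conn.execution_options(isolation_level="AUTOCOMMIT")
    value = autocommit_conn.execute(text("select 1")).scalar()
```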
def test_sysdate(self, connection):
d = connection.execute(func.sysdate()).scalar()
assert isinstance(d, datetime.datetime)
-
-
-class AutocommitTextTest(
- test_deprecations.AutocommitKeywordFixture, fixtures.TestBase
-):
- __only_on__ = "mysql", "mariadb"
-
- def test_load_data(self):
- self._test_keyword("LOAD DATA STUFF")
-
- def test_replace(self):
- self._test_keyword("REPLACE THING")
def run_test(self):
connection = testing.db.connect()
connection.exec_driver_sql("set innodb_lock_wait_timeout=1")
- main_trans = connection.begin()
try:
yield Session(bind=connection)
finally:
- main_trans.rollback()
+ connection.rollback()
connection.close()
def _assert_a_is_locked(self, should_be_locked):
from sqlalchemy.testing.assertions import ne_
from sqlalchemy.util import u
from sqlalchemy.util import ue
-from ...engine import test_deprecations
if True:
from sqlalchemy.dialects.postgresql.psycopg2 import (
engine = engines.testing_engine()
with engine.connect() as conn:
ne_(conn.connection.status, STATUS_IN_TRANSACTION)
-
-
-class AutocommitTextTest(test_deprecations.AutocommitTextTest):
- __only_on__ = "postgresql"
-
- def test_grant(self):
- self._test_keyword("GRANT USAGE ON SCHEMA fooschema TO foorole")
-
- def test_import_foreign_schema(self):
- self._test_keyword("IMPORT FOREIGN SCHEMA foob")
-
- def test_refresh_view(self):
- self._test_keyword("REFRESH MATERIALIZED VIEW fooview")
-
- def test_revoke(self):
- self._test_keyword("REVOKE USAGE ON SCHEMA fooschema FROM foorole")
-
- def test_truncate(self):
- self._test_keyword("TRUNCATE footable")
self.metadata = MetaData()
def teardown_test(self):
+ self.conn.rollback()
with self.conn.begin():
self.metadata.drop_all(self.conn)
self.conn.close()
connection.close()
-class FutureSavepointTest(fixtures.FutureEngineMixin, SavepointTest):
- pass
-
-
class TypeReflectionTest(fixtures.TestBase):
__only_on__ = "sqlite"
finally:
m.drop_all(testing.db)
- def _listening_engine_fixture(self, future=False):
- eng = engines.testing_engine(future=future)
+ @testing.fixture
+ def listening_engine_fixture(self):
+ eng = engines.testing_engine()
m1 = mock.Mock()
return eng, m1
- @testing.fixture
- def listening_engine_fixture(self):
- return self._listening_engine_fixture(future=False)
-
- @testing.fixture
- def future_listening_engine_fixture(self):
- return self._listening_engine_fixture(future=True)
-
- def test_ddl_legacy_engine(
- self, metadata_fixture, listening_engine_fixture
- ):
+ def test_ddl_engine(self, metadata_fixture, listening_engine_fixture):
eng, m1 = listening_engine_fixture
metadata_fixture.create_all(eng)
],
)
- def test_ddl_future_engine(
- self, metadata_fixture, future_listening_engine_fixture
- ):
- eng, m1 = future_listening_engine_fixture
-
- metadata_fixture.create_all(eng)
-
- eq_(
- m1.mock_calls,
- [
- mock.call.begin(mock.ANY),
- mock.call.cursor_execute("CREATE TABLE ..."),
- mock.call.cursor_execute("CREATE TABLE ..."),
- mock.call.commit(mock.ANY),
- ],
- )
-
- def test_ddl_legacy_connection_no_transaction(
- self, metadata_fixture, listening_engine_fixture
- ):
- eng, m1 = listening_engine_fixture
-
- with eng.connect() as conn:
- with testing.expect_deprecated(
- "The current statement is being autocommitted using "
- "implicit autocommit"
- ):
- metadata_fixture.create_all(conn)
-
- eq_(
- m1.mock_calls,
- [
- mock.call.cursor_execute("CREATE TABLE ..."),
- mock.call.commit(mock.ANY),
- mock.call.cursor_execute("CREATE TABLE ..."),
- mock.call.commit(mock.ANY),
- ],
- )
-
- def test_ddl_legacy_connection_transaction(
+ def test_ddl_connection_autobegin_transaction(
self, metadata_fixture, listening_engine_fixture
):
eng, m1 = listening_engine_fixture
- with eng.connect() as conn:
- with conn.begin():
- metadata_fixture.create_all(conn)
-
- eq_(
- m1.mock_calls,
- [
- mock.call.begin(mock.ANY),
- mock.call.cursor_execute("CREATE TABLE ..."),
- mock.call.cursor_execute("CREATE TABLE ..."),
- mock.call.commit(mock.ANY),
- ],
- )
-
- def test_ddl_future_connection_autobegin_transaction(
- self, metadata_fixture, future_listening_engine_fixture
- ):
- eng, m1 = future_listening_engine_fixture
-
with eng.connect() as conn:
metadata_fixture.create_all(conn)
],
)
- def test_ddl_future_connection_explicit_begin_transaction(
- self, metadata_fixture, future_listening_engine_fixture
+ def test_ddl_connection_explicit_begin_transaction(
+ self, metadata_fixture, listening_engine_fixture
):
- eng, m1 = future_listening_engine_fixture
+ eng, m1 = listening_engine_fixture
with eng.connect() as conn:
with conn.begin():
import sqlalchemy as tsa
import sqlalchemy as sa
-from sqlalchemy import bindparam
from sqlalchemy import create_engine
-from sqlalchemy import DDL
from sqlalchemy import engine
from sqlalchemy import event
from sqlalchemy import exc
from sqlalchemy import ForeignKey
-from sqlalchemy import func
from sqlalchemy import inspect
-from sqlalchemy import INT
from sqlalchemy import Integer
from sqlalchemy import MetaData
from sqlalchemy import pool
from sqlalchemy import testing
from sqlalchemy import text
from sqlalchemy import ThreadLocalMetaData
-from sqlalchemy import VARCHAR
from sqlalchemy.engine import reflection
from sqlalchemy.engine.base import Connection
from sqlalchemy.engine.base import Engine
from sqlalchemy.testing import is_instance_of
from sqlalchemy.testing import is_true
from sqlalchemy.testing import mock
+from sqlalchemy.testing.assertions import expect_deprecated
from sqlalchemy.testing.engines import testing_engine
from sqlalchemy.testing.mock import Mock
from sqlalchemy.testing.schema import Column
from sqlalchemy.testing.schema import Table
-from .test_transaction import ResetFixture
def _string_deprecation_expect():
is_(i1.bind, testing.db)
self.check_usage(i1)
- def test_bind_close_conn(self):
- e = testing.db
- conn = e.connect()
-
- with testing.expect_deprecated_20(
- r"The Connection.connect\(\) method is considered",
- r"The .close\(\) method on a so-called 'branched' connection is "
- r"deprecated as of 1.4, as are 'branched' connections overall, "
- r"and will be removed in a future release.",
- ):
- with conn.connect() as c2:
- assert not c2.closed
- assert not conn.closed
- assert c2.closed
-
class CreateEngineTest(fixtures.TestBase):
def test_strategy_keyword_mock(self):
)
-class TransactionTest(ResetFixture, fixtures.TablesTest):
- __backend__ = True
-
- @classmethod
- def define_tables(cls, metadata):
- Table(
- "users",
- metadata,
- Column("user_id", Integer, primary_key=True),
- Column("user_name", String(20)),
- test_needs_acid=True,
- )
- Table("inserttable", metadata, Column("data", String(20)))
-
- @testing.fixture
- def local_connection(self):
- with testing.db.connect() as conn:
- yield conn
-
- def test_transaction_container(self):
- users = self.tables.users
-
- def go(conn, table, data):
- for d in data:
- conn.execute(table.insert(), d)
-
- with testing.expect_deprecated(
- r"The Engine.transaction\(\) method is deprecated"
- ):
- testing.db.transaction(
- go, users, [dict(user_id=1, user_name="user1")]
- )
-
- with testing.db.connect() as conn:
- eq_(conn.execute(users.select()).fetchall(), [(1, "user1")])
- with testing.expect_deprecated(
- r"The Engine.transaction\(\) method is deprecated"
- ):
- assert_raises(
- tsa.exc.DBAPIError,
- testing.db.transaction,
- go,
- users,
- [
- {"user_id": 2, "user_name": "user2"},
- {"user_id": 1, "user_name": "user3"},
- ],
- )
- with testing.db.connect() as conn:
- eq_(conn.execute(users.select()).fetchall(), [(1, "user1")])
-
- def test_begin_begin_rollback_rollback(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
- with testing.expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already "
- "begun, creating a 'sub' transaction"
- ):
- trans2 = connection.begin()
- trans2.rollback()
- trans.rollback()
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.rollback(connection),
- mock.call.do_rollback(mock.ANY),
- mock.call.do_rollback(mock.ANY),
- ],
- )
-
- def test_begin_begin_commit_commit(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
- with testing.expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already "
- "begun, creating a 'sub' transaction"
- ):
- trans2 = connection.begin()
- trans2.commit()
- trans.commit()
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.commit(connection),
- mock.call.do_commit(mock.ANY),
- mock.call.do_rollback(mock.ANY),
- ],
- )
-
- def test_branch_nested_rollback(self, local_connection):
- connection = local_connection
- users = self.tables.users
- connection.begin()
- with testing.expect_deprecated_20(
- r"The Connection.connect\(\) method is considered legacy"
- ):
- branched = connection.connect()
- assert branched.in_transaction()
- branched.execute(users.insert(), dict(user_id=1, user_name="user1"))
- with testing.expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already "
- "begun, creating a 'sub' transaction"
- ):
- nested = branched.begin()
- branched.execute(users.insert(), dict(user_id=2, user_name="user2"))
- nested.rollback()
- assert not connection.in_transaction()
-
- assert_raises_message(
- exc.InvalidRequestError,
- "This connection is on an inactive transaction. Please",
- connection.exec_driver_sql,
- "select 1",
- )
-
- @testing.requires.savepoints
- def test_savepoint_cancelled_by_toplevel_marker(self, local_connection):
- conn = local_connection
- users = self.tables.users
- trans = conn.begin()
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
-
- with testing.expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already "
- "begun, creating a 'sub' transaction"
- ):
- mk1 = conn.begin()
-
- sp1 = conn.begin_nested()
- conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
-
- mk1.rollback()
-
- assert not sp1.is_active
- assert not trans.is_active
- assert conn._transaction is trans
- assert conn._nested_transaction is None
-
- with testing.db.connect() as conn:
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 0,
- )
-
- @testing.requires.savepoints
- def test_rollback_to_subtransaction(self, local_connection):
- connection = local_connection
- users = self.tables.users
- transaction = connection.begin()
- connection.execute(users.insert(), dict(user_id=1, user_name="user1"))
- trans2 = connection.begin_nested()
- connection.execute(users.insert(), dict(user_id=2, user_name="user2"))
-
- with testing.expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already "
- "begun, creating a 'sub' transaction"
- ):
- trans3 = connection.begin()
- connection.execute(users.insert(), dict(user_id=3, user_name="user3"))
- trans3.rollback()
-
- assert_raises_message(
- exc.InvalidRequestError,
- "This connection is on an inactive savepoint transaction.",
- connection.exec_driver_sql,
- "select 1",
- )
- trans2.rollback()
- assert connection._nested_transaction is None
-
- connection.execute(users.insert(), dict(user_id=4, user_name="user4"))
- transaction.commit()
- eq_(
- connection.execute(
- select(users.c.user_id).order_by(users.c.user_id)
- ).fetchall(),
- [(1,), (4,)],
- )
-
- # PG emergency shutdown:
- # select * from pg_prepared_xacts
- # ROLLBACK PREPARED '<xid>'
- # MySQL emergency shutdown:
- # for arg in `mysql -u root -e "xa recover" | cut -c 8-100 |
- # grep sa`; do mysql -u root -e "xa rollback '$arg'"; done
- @testing.requires.skip_mysql_on_windows
- @testing.requires.two_phase_transactions
- @testing.requires.savepoints
- def test_mixed_two_phase_transaction(self, local_connection):
- connection = local_connection
- users = self.tables.users
- transaction = connection.begin_twophase()
- connection.execute(users.insert(), dict(user_id=1, user_name="user1"))
- with testing.expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already "
- "begun, creating a 'sub' transaction"
- ):
- transaction2 = connection.begin()
- connection.execute(users.insert(), dict(user_id=2, user_name="user2"))
- transaction3 = connection.begin_nested()
- connection.execute(users.insert(), dict(user_id=3, user_name="user3"))
- with testing.expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already "
- "begun, creating a 'sub' transaction"
- ):
- transaction4 = connection.begin()
- connection.execute(users.insert(), dict(user_id=4, user_name="user4"))
- transaction4.commit()
- transaction3.rollback()
- connection.execute(users.insert(), dict(user_id=5, user_name="user5"))
- transaction2.commit()
- transaction.prepare()
- transaction.commit()
- eq_(
- connection.execute(
- select(users.c.user_id).order_by(users.c.user_id)
- ).fetchall(),
- [(1,), (2,), (5,)],
- )
-
- @testing.requires.savepoints
- def test_inactive_due_to_subtransaction_on_nested_no_commit(
- self, local_connection
- ):
- connection = local_connection
- trans = connection.begin()
-
- nested = connection.begin_nested()
-
- with testing.expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already "
- "begun, creating a 'sub' transaction"
- ):
- trans2 = connection.begin()
- trans2.rollback()
-
- assert_raises_message(
- exc.InvalidRequestError,
- "This connection is on an inactive savepoint transaction. "
- "Please rollback",
- nested.commit,
- )
- trans.commit()
-
- assert_raises_message(
- exc.InvalidRequestError,
- "This nested transaction is inactive",
- nested.commit,
- )
-
- def test_close(self, local_connection):
- connection = local_connection
- users = self.tables.users
- transaction = connection.begin()
- connection.execute(users.insert(), dict(user_id=1, user_name="user1"))
- connection.execute(users.insert(), dict(user_id=2, user_name="user2"))
- connection.execute(users.insert(), dict(user_id=3, user_name="user3"))
- with testing.expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already "
- "begun, creating a 'sub' transaction"
- ):
- trans2 = connection.begin()
- connection.execute(users.insert(), dict(user_id=4, user_name="user4"))
- connection.execute(users.insert(), dict(user_id=5, user_name="user5"))
- assert connection.in_transaction()
- trans2.close()
- assert connection.in_transaction()
- transaction.commit()
- assert not connection.in_transaction()
- self.assert_(
- connection.exec_driver_sql(
- "select count(*) from " "users"
- ).scalar()
- == 5
- )
- result = connection.exec_driver_sql("select * from users")
- assert len(result.fetchall()) == 5
-
- def test_close2(self, local_connection):
- connection = local_connection
- users = self.tables.users
- transaction = connection.begin()
- connection.execute(users.insert(), dict(user_id=1, user_name="user1"))
- connection.execute(users.insert(), dict(user_id=2, user_name="user2"))
- connection.execute(users.insert(), dict(user_id=3, user_name="user3"))
- with testing.expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already "
- "begun, creating a 'sub' transaction"
- ):
- trans2 = connection.begin()
- connection.execute(users.insert(), dict(user_id=4, user_name="user4"))
- connection.execute(users.insert(), dict(user_id=5, user_name="user5"))
- assert connection.in_transaction()
- trans2.close()
- assert connection.in_transaction()
- transaction.close()
- assert not connection.in_transaction()
- self.assert_(
- connection.exec_driver_sql(
- "select count(*) from " "users"
- ).scalar()
- == 0
- )
- result = connection.exec_driver_sql("select * from users")
- assert len(result.fetchall()) == 0
-
- def test_inactive_due_to_subtransaction_no_commit(self, local_connection):
- connection = local_connection
- trans = connection.begin()
- with testing.expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already "
- "begun, creating a 'sub' transaction"
- ):
- trans2 = connection.begin()
- trans2.rollback()
- assert_raises_message(
- exc.InvalidRequestError,
- "This connection is on an inactive transaction. Please rollback",
- trans.commit,
- )
-
- trans.rollback()
-
- assert_raises_message(
- exc.InvalidRequestError,
- "This transaction is inactive",
- trans.commit,
- )
-
- def test_nested_rollback(self, local_connection):
- connection = local_connection
- users = self.tables.users
- try:
- transaction = connection.begin()
- try:
- connection.execute(
- users.insert(), dict(user_id=1, user_name="user1")
- )
- connection.execute(
- users.insert(), dict(user_id=2, user_name="user2")
- )
- connection.execute(
- users.insert(), dict(user_id=3, user_name="user3")
- )
- with testing.expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already "
- "begun, creating a 'sub' transaction"
- ):
- trans2 = connection.begin()
- try:
- connection.execute(
- users.insert(), dict(user_id=4, user_name="user4")
- )
- connection.execute(
- users.insert(), dict(user_id=5, user_name="user5")
- )
- raise Exception("uh oh")
- trans2.commit()
- except Exception:
- trans2.rollback()
- raise
- transaction.rollback()
- except Exception:
- transaction.rollback()
- raise
- except Exception as e:
- # and not "This transaction is inactive"
- # comment moved here to fix pep8
- assert str(e) == "uh oh"
- else:
- assert False
-
- def test_nesting(self, local_connection):
- connection = local_connection
- users = self.tables.users
- transaction = connection.begin()
- connection.execute(users.insert(), dict(user_id=1, user_name="user1"))
- connection.execute(users.insert(), dict(user_id=2, user_name="user2"))
- connection.execute(users.insert(), dict(user_id=3, user_name="user3"))
- with testing.expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already "
- "begun, creating a 'sub' transaction"
- ):
- trans2 = connection.begin()
- connection.execute(users.insert(), dict(user_id=4, user_name="user4"))
- connection.execute(users.insert(), dict(user_id=5, user_name="user5"))
- trans2.commit()
- transaction.rollback()
- self.assert_(
- connection.exec_driver_sql(
- "select count(*) from " "users"
- ).scalar()
- == 0
- )
- result = connection.exec_driver_sql("select * from users")
- assert len(result.fetchall()) == 0
-
- def test_no_marker_on_inactive_trans(self, local_connection):
- conn = local_connection
- conn.begin()
-
- with testing.expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already "
- "begun, creating a 'sub' transaction"
- ):
- mk1 = conn.begin()
-
- mk1.rollback()
-
- assert_raises_message(
- exc.InvalidRequestError,
- "the current transaction on this connection is inactive.",
- conn.begin,
- )
-
- def test_implicit_autocommit_compiled(self):
- users = self.tables.users
-
- with testing.db.connect() as conn:
- with testing.expect_deprecated_20(
- "The current statement is being autocommitted "
- "using implicit autocommit."
- ):
- conn.execute(
- users.insert(), {"user_id": 1, "user_name": "user3"}
- )
-
- def test_implicit_autocommit_text(self):
- with testing.db.connect() as conn:
- with testing.expect_deprecated_20(
- "The current statement is being autocommitted "
- "using implicit autocommit."
- ):
- conn.execute(
- text("insert into inserttable (data) values ('thedata')")
- )
-
- def test_implicit_autocommit_driversql(self):
- with testing.db.connect() as conn:
- with testing.expect_deprecated_20(
- "The current statement is being autocommitted "
- "using implicit autocommit."
- ):
- conn.exec_driver_sql(
- "insert into inserttable (data) values ('thedata')"
- )
-
- def test_branch_autorollback(self, local_connection):
- connection = local_connection
- users = self.tables.users
- with testing.expect_deprecated_20(
- r"The Connection.connect\(\) method is considered legacy"
- ):
- branched = connection.connect()
- with testing.expect_deprecated_20(
- "The current statement is being autocommitted using "
- "implicit autocommit"
- ):
- branched.execute(
- users.insert(), dict(user_id=1, user_name="user1")
- )
- assert_raises(
- exc.DBAPIError,
- branched.execute,
- users.insert(),
- dict(user_id=1, user_name="user1"),
- )
- # can continue w/o issue
- with testing.expect_deprecated_20(
- "The current statement is being autocommitted using "
- "implicit autocommit"
- ):
- branched.execute(
- users.insert(), dict(user_id=2, user_name="user2")
- )
-
- def test_branch_orig_rollback(self, local_connection):
- connection = local_connection
- users = self.tables.users
- with testing.expect_deprecated_20(
- r"The Connection.connect\(\) method is considered legacy"
- ):
- branched = connection.connect()
- with testing.expect_deprecated_20(
- "The current statement is being autocommitted using "
- "implicit autocommit"
- ):
- branched.execute(
- users.insert(), dict(user_id=1, user_name="user1")
- )
- nested = branched.begin()
- assert branched.in_transaction()
- branched.execute(users.insert(), dict(user_id=2, user_name="user2"))
- nested.rollback()
- eq_(
- connection.exec_driver_sql("select count(*) from users").scalar(),
- 1,
- )
-
- @testing.requires.independent_connections
- def test_branch_autocommit(self, local_connection):
- users = self.tables.users
- with testing.db.connect() as connection:
- with testing.expect_deprecated_20(
- r"The Connection.connect\(\) method is considered legacy"
- ):
- branched = connection.connect()
- with testing.expect_deprecated_20(
- "The current statement is being autocommitted using "
- "implicit autocommit"
- ):
- branched.execute(
- users.insert(), dict(user_id=1, user_name="user1")
- )
-
- eq_(
- local_connection.execute(
- text("select count(*) from users")
- ).scalar(),
- 1,
- )
-
- @testing.requires.savepoints
- def test_branch_savepoint_rollback(self, local_connection):
- connection = local_connection
- users = self.tables.users
- trans = connection.begin()
- with testing.expect_deprecated_20(
- r"The Connection.connect\(\) method is considered legacy"
- ):
- branched = connection.connect()
- assert branched.in_transaction()
- branched.execute(users.insert(), dict(user_id=1, user_name="user1"))
- nested = branched.begin_nested()
- branched.execute(users.insert(), dict(user_id=2, user_name="user2"))
- nested.rollback()
- assert connection.in_transaction()
- trans.commit()
- eq_(
- connection.exec_driver_sql("select count(*) from users").scalar(),
- 1,
- )
-
- @testing.requires.two_phase_transactions
- def test_branch_twophase_rollback(self, local_connection):
- connection = local_connection
- users = self.tables.users
- with testing.expect_deprecated_20(
- r"The Connection.connect\(\) method is considered legacy"
- ):
- branched = connection.connect()
- assert not branched.in_transaction()
- with testing.expect_deprecated_20(
- r"The current statement is being autocommitted using "
- "implicit autocommit"
- ):
- branched.execute(
- users.insert(), dict(user_id=1, user_name="user1")
- )
- nested = branched.begin_twophase()
- branched.execute(users.insert(), dict(user_id=2, user_name="user2"))
- nested.rollback()
- assert not connection.in_transaction()
- eq_(
- connection.exec_driver_sql("select count(*) from users").scalar(),
- 1,
- )
-
-
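The large block removed above deletes the 1.x "sub" transaction tests, where a second `connection.begin()` created a counter-based nested transaction. Under 2.0 a function that needs a transaction instead assumes its caller established one. A minimal sketch of that calling pattern, with an in-memory SQLite engine and a throwaway table `t`:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

def method_b(conn):
    # runs inside whatever transaction the caller established; it
    # does not call conn.begin() itself (the removed "sub"
    # transaction pattern)
    conn.execute(text("insert into t (x) values (1)"))

with engine.connect() as conn:
    with conn.begin():  # the one real transaction
        conn.execute(text("create table t (x integer)"))
        method_b(conn)
    count = conn.execute(text("select count(*) from t")).scalar()
```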
class HandleInvalidatedOnConnectTest(fixtures.TestBase):
__requires__ = ("sqlite",)
return str(select(1).compile(dialect=db.dialect))
-class DeprecatedEngineFeatureTest(fixtures.TablesTest):
- __backend__ = True
-
- @classmethod
- def define_tables(cls, metadata):
- cls.table = Table(
- "exec_test",
- metadata,
- Column("a", Integer),
- Column("b", Integer),
- test_needs_acid=True,
- )
-
- def _trans_fn(self, is_transaction=False):
- def go(conn, x, value=None):
- if is_transaction:
- conn = conn.connection
- conn.execute(self.table.insert().values(a=x, b=value))
-
- return go
-
- def _trans_rollback_fn(self, is_transaction=False):
- def go(conn, x, value=None):
- if is_transaction:
- conn = conn.connection
- conn.execute(self.table.insert().values(a=x, b=value))
- raise SomeException("breakage")
-
- return go
-
- def _assert_no_data(self):
- with testing.db.connect() as conn:
- eq_(
- conn.scalar(select(func.count("*")).select_from(self.table)),
- 0,
- )
-
- def _assert_fn(self, x, value=None):
- with testing.db.connect() as conn:
- eq_(conn.execute(self.table.select()).fetchall(), [(x, value)])
-
- def test_transaction_engine_fn_commit(self):
- fn = self._trans_fn()
- with testing.expect_deprecated(r"The Engine.transaction\(\) method"):
- testing.db.transaction(fn, 5, value=8)
- self._assert_fn(5, value=8)
-
- def test_transaction_engine_fn_rollback(self):
- fn = self._trans_rollback_fn()
- with testing.expect_deprecated(
- r"The Engine.transaction\(\) method is deprecated"
- ):
- assert_raises_message(
- Exception, "breakage", testing.db.transaction, fn, 5, value=8
- )
- self._assert_no_data()
-
- def test_transaction_connection_fn_commit(self):
- fn = self._trans_fn()
- with testing.db.connect() as conn:
- with testing.expect_deprecated(
- r"The Connection.transaction\(\) method is deprecated"
- ):
- conn.transaction(fn, 5, value=8)
- self._assert_fn(5, value=8)
-
- def test_transaction_connection_fn_rollback(self):
- fn = self._trans_rollback_fn()
- with testing.db.connect() as conn:
- with testing.expect_deprecated(r""):
- assert_raises(Exception, conn.transaction, fn, 5, value=8)
- self._assert_no_data()
-
- def test_execute_plain_string(self):
- with testing.db.connect() as conn:
- with _string_deprecation_expect():
- conn.execute(select1(testing.db)).scalar()
-
- def test_execute_plain_string_events(self):
-
- m1 = Mock()
- select1_str = select1(testing.db)
- with _string_deprecation_expect():
- with testing.db.connect() as conn:
- event.listen(conn, "before_execute", m1.before_execute)
- event.listen(conn, "after_execute", m1.after_execute)
- result = conn.execute(select1_str)
- eq_(
- m1.mock_calls,
- [
- mock.call.before_execute(mock.ANY, select1_str, [], {}, {}),
- mock.call.after_execute(
- mock.ANY, select1_str, [], {}, {}, result
- ),
- ],
- )
-
- def test_scalar_plain_string(self):
- with testing.db.connect() as conn:
- with _string_deprecation_expect():
- conn.scalar(select1(testing.db))
-
- # Tests for the warning when non dict params are used
- # @testing.combinations(42, (42,))
- # def test_execute_positional_non_dicts(self, args):
- # with testing.expect_deprecated(
- # r"Usage of tuple or scalars as positional arguments of "
- # ):
- # testing.db.execute(text(select1(testing.db)), args).scalar()
-
- # @testing.combinations(42, (42,))
- # def test_scalar_positional_non_dicts(self, args):
- # with testing.expect_deprecated(
- # r"Usage of tuple or scalars as positional arguments of "
- # ):
- # testing.db.scalar(text(select1(testing.db)), args)
-
-
-class DeprecatedConnectionFeatureTest(fixtures.TablesTest):
- __backend__ = True
-
- def test_execute_plain_string(self):
- with _string_deprecation_expect():
- with testing.db.connect() as conn:
- conn.execute(select1(testing.db)).scalar()
-
- def test_scalar_plain_string(self):
- with _string_deprecation_expect():
- with testing.db.connect() as conn:
- conn.scalar(select1(testing.db))
-
- # Tests for the warning when non dict params are used
- # @testing.combinations(42, (42,))
- # def test_execute_positional_non_dicts(self, args):
- # with testing.expect_deprecated(
- # r"Usage of tuple or scalars as positional arguments of "
- # ):
- # with testing.db.connect() as conn:
- # conn.execute(text(select1(testing.db)), args).scalar()
-
- # @testing.combinations(42, (42,))
- # def test_scalar_positional_non_dicts(self, args):
- # with testing.expect_deprecated(
- # r"Usage of tuple or scalars as positional arguments of "
- # ):
- # with testing.db.connect() as conn:
- # conn.scalar(text(select1(testing.db)), args)
-
-
class DeprecatedReflectionTest(fixtures.TablesTest):
@classmethod
def define_tables(cls, metadata):
Column("email", String(50)),
)
- def test_exists(self):
- dont_exist = Table("dont_exist", MetaData())
- with testing.expect_deprecated(
- r"The Table.exists\(\) method is deprecated"
- ):
- is_false(dont_exist.exists(testing.db))
-
- user = self.tables.user
- with testing.expect_deprecated(
- r"The Table.exists\(\) method is deprecated"
- ):
- is_true(user.exists(testing.db))
-
- def test_create_drop_explicit(self):
- metadata = MetaData()
- table = Table("test_table", metadata, Column("foo", Integer))
- bind = testing.db
- for args in [([], {"bind": bind}), ([bind], {})]:
- metadata.create_all(*args[0], **args[1])
- with testing.expect_deprecated(
- r"The Table.exists\(\) method is deprecated"
- ):
- assert table.exists(*args[0], **args[1])
- metadata.drop_all(*args[0], **args[1])
- table.create(*args[0], **args[1])
- table.drop(*args[0], **args[1])
- with testing.expect_deprecated(
- r"The Table.exists\(\) method is deprecated"
- ):
- assert not table.exists(*args[0], **args[1])
-
- def test_create_drop_err_table(self):
- metadata = MetaData()
- table = Table("test_table", metadata, Column("foo", Integer))
-
- with testing.expect_deprecated(
- r"The Table.exists\(\) method is deprecated"
- ):
- assert_raises_message(
- tsa.exc.UnboundExecutionError,
- (
- "Table object 'test_table' is not bound to an Engine or "
- "Connection."
- ),
- table.exists,
- )
-
- def test_engine_has_table(self):
- with testing.expect_deprecated(
- r"The Engine.has_table\(\) method is deprecated"
- ):
- is_false(testing.db.has_table("dont_exist"))
-
- with testing.expect_deprecated(
- r"The Engine.has_table\(\) method is deprecated"
- ):
- is_true(testing.db.has_table("user"))
-
- def test_engine_table_names(self):
- metadata = self.tables_test_metadata
-
- with testing.expect_deprecated(
- r"The Engine.table_names\(\) method is deprecated"
- ):
- table_names = testing.db.table_names()
- is_true(set(table_names).issuperset(metadata.tables))
-
def test_reflecttable(self):
inspector = inspect(testing.db)
metadata = MetaData()
eq_(res, exp)
-class ExecutionOptionsTest(fixtures.TestBase):
- def test_branched_connection_execution_options(self):
- engine = engines.testing_engine("sqlite://")
-
- conn = engine.connect()
- c2 = conn.execution_options(foo="bar")
-
- with testing.expect_deprecated_20(
- r"The Connection.connect\(\) method is considered "
- ):
- c2_branch = c2.connect()
- eq_(c2_branch._execution_options, {"foo": "bar"})
-
-
-class RawExecuteTest(fixtures.TablesTest):
- __backend__ = True
-
- @classmethod
- def define_tables(cls, metadata):
- Table(
- "users",
- metadata,
- Column("user_id", INT, primary_key=True, autoincrement=False),
- Column("user_name", VARCHAR(20)),
- )
- Table(
- "users_autoinc",
- metadata,
- Column(
- "user_id", INT, primary_key=True, test_needs_autoincrement=True
- ),
- Column("user_name", VARCHAR(20)),
- )
-
- def test_no_params_option(self, connection):
- stmt = (
- "SELECT '%'"
- + testing.db.dialect.statement_compiler(
- testing.db.dialect, None
- ).default_from()
- )
-
- with _string_deprecation_expect():
- result = (
- connection.execution_options(no_parameters=True)
- .execute(stmt)
- .scalar()
- )
- eq_(result, "%")
-
- @testing.requires.qmark_paramstyle
- def test_raw_qmark(self, connection):
- conn = connection
-
- with _string_deprecation_expect():
- conn.execute(
- "insert into users (user_id, user_name) " "values (?, ?)",
- (1, "jack"),
- )
- with _string_deprecation_expect():
- conn.execute(
- "insert into users (user_id, user_name) " "values (?, ?)",
- [2, "fred"],
- )
-
- with _string_deprecation_expect():
- conn.execute(
- "insert into users (user_id, user_name) " "values (?, ?)",
- [3, "ed"],
- [4, "horse"],
- )
- with _string_deprecation_expect():
- conn.execute(
- "insert into users (user_id, user_name) " "values (?, ?)",
- (5, "barney"),
- (6, "donkey"),
- )
-
- with _string_deprecation_expect():
- conn.execute(
- "insert into users (user_id, user_name) " "values (?, ?)",
- 7,
- "sally",
- )
-
- with _string_deprecation_expect():
- res = conn.execute("select * from users order by user_id")
- assert res.fetchall() == [
- (1, "jack"),
- (2, "fred"),
- (3, "ed"),
- (4, "horse"),
- (5, "barney"),
- (6, "donkey"),
- (7, "sally"),
- ]
- for multiparam, param in [
- (("jack", "fred"), {}),
- ((["jack", "fred"],), {}),
- ]:
- with _string_deprecation_expect():
- res = conn.execute(
- "select * from users where user_name=? or "
- "user_name=? order by user_id",
- *multiparam,
- **param
- )
- assert res.fetchall() == [(1, "jack"), (2, "fred")]
-
- with _string_deprecation_expect():
- res = conn.execute("select * from users where user_name=?", "jack")
- assert res.fetchall() == [(1, "jack")]
-
- @testing.requires.format_paramstyle
- def test_raw_sprintf(self, connection):
- conn = connection
- with _string_deprecation_expect():
- conn.execute(
- "insert into users (user_id, user_name) " "values (%s, %s)",
- [1, "jack"],
- )
- with _string_deprecation_expect():
- conn.execute(
- "insert into users (user_id, user_name) " "values (%s, %s)",
- [2, "ed"],
- [3, "horse"],
- )
- with _string_deprecation_expect():
- conn.execute(
- "insert into users (user_id, user_name) " "values (%s, %s)",
- 4,
- "sally",
- )
- with _string_deprecation_expect():
- conn.execute("insert into users (user_id) values (%s)", 5)
- with _string_deprecation_expect():
- res = conn.execute("select * from users order by user_id")
- assert res.fetchall() == [
- (1, "jack"),
- (2, "ed"),
- (3, "horse"),
- (4, "sally"),
- (5, None),
- ]
- for multiparam, param in [
- (("jack", "ed"), {}),
- ((["jack", "ed"],), {}),
- ]:
- with _string_deprecation_expect():
- res = conn.execute(
- "select * from users where user_name=%s or "
- "user_name=%s order by user_id",
- *multiparam,
- **param
- )
- assert res.fetchall() == [(1, "jack"), (2, "ed")]
- with _string_deprecation_expect():
- res = conn.execute(
- "select * from users where user_name=%s", "jack"
- )
- assert res.fetchall() == [(1, "jack")]
-
- @testing.requires.pyformat_paramstyle
- def test_raw_python(self, connection):
- conn = connection
- with _string_deprecation_expect():
- conn.execute(
- "insert into users (user_id, user_name) "
- "values (%(id)s, %(name)s)",
- {"id": 1, "name": "jack"},
- )
- with _string_deprecation_expect():
- conn.execute(
- "insert into users (user_id, user_name) "
- "values (%(id)s, %(name)s)",
- {"id": 2, "name": "ed"},
- {"id": 3, "name": "horse"},
- )
- with _string_deprecation_expect():
- conn.execute(
- "insert into users (user_id, user_name) "
- "values (%(id)s, %(name)s)",
- id=4,
- name="sally",
- )
- with _string_deprecation_expect():
- res = conn.execute("select * from users order by user_id")
- assert res.fetchall() == [
- (1, "jack"),
- (2, "ed"),
- (3, "horse"),
- (4, "sally"),
- ]
-
- @testing.requires.named_paramstyle
- def test_raw_named(self, connection):
- conn = connection
- with _string_deprecation_expect():
- conn.execute(
- "insert into users (user_id, user_name) "
- "values (:id, :name)",
- {"id": 1, "name": "jack"},
- )
- with _string_deprecation_expect():
- conn.execute(
- "insert into users (user_id, user_name) "
- "values (:id, :name)",
- {"id": 2, "name": "ed"},
- {"id": 3, "name": "horse"},
- )
- with _string_deprecation_expect():
- conn.execute(
- "insert into users (user_id, user_name) "
- "values (:id, :name)",
- id=4,
- name="sally",
- )
- with _string_deprecation_expect():
- res = conn.execute("select * from users order by user_id")
- assert res.fetchall() == [
- (1, "jack"),
- (2, "ed"),
- (3, "horse"),
- (4, "sally"),
- ]
-
-
-class DeprecatedExecParamsTest(fixtures.TablesTest):
- __backend__ = True
-
- @classmethod
- def define_tables(cls, metadata):
- Table(
- "users",
- metadata,
- Column("user_id", INT, primary_key=True, autoincrement=False),
- Column("user_name", VARCHAR(20)),
- )
-
- Table(
- "users_autoinc",
- metadata,
- Column(
- "user_id", INT, primary_key=True, test_needs_autoincrement=True
- ),
- Column("user_name", VARCHAR(20)),
- )
-
- def test_kwargs(self, connection):
- users = self.tables.users
-
- with testing.expect_deprecated_20(
- r"The connection.execute\(\) method in "
- "SQLAlchemy 2.0 will accept parameters as a single "
- ):
- connection.execute(
- users.insert(), user_id=5, user_name="some name"
- )
-
- eq_(connection.execute(select(users)).all(), [(5, "some name")])
-
- def test_positional_dicts(self, connection):
- users = self.tables.users
-
- with testing.expect_deprecated_20(
- r"The connection.execute\(\) method in "
- "SQLAlchemy 2.0 will accept parameters as a single "
- ):
- connection.execute(
- users.insert(),
- {"user_id": 5, "user_name": "some name"},
- {"user_id": 6, "user_name": "some other name"},
- )
-
- eq_(
- connection.execute(select(users).order_by(users.c.user_id)).all(),
- [(5, "some name"), (6, "some other name")],
- )
-
- @testing.requires.empty_inserts
- def test_single_scalar(self, connection):
-
- users = self.tables.users_autoinc
-
- with testing.expect_deprecated_20(
- r"The connection.execute\(\) method in "
- "SQLAlchemy 2.0 will accept parameters as a single "
- ):
- # TODO: I'm not even sure what this exec format is or how
- # it worked if at all
- connection.execute(users.insert(), "some name")
-
- eq_(
- connection.execute(select(users).order_by(users.c.user_id)).all(),
- [(1, None)],
- )
-
-
class EngineEventsTest(fixtures.TestBase):
__requires__ = ("ad_hoc_engines",)
__backend__ = True
):
break
- @testing.combinations(
- ((), {"z": 10}, [], {"z": 10}, testing.requires.legacy_engine),
- )
- def test_modify_parameters_from_event_one(
- self, multiparams, params, expected_multiparams, expected_params
- ):
- # this is testing both the normalization added to parameters
- # as of I97cb4d06adfcc6b889f10d01cc7775925cffb116 as well as
- # that the return value from the event is taken as the new set
- # of parameters.
- def before_execute(
- conn, clauseelement, multiparams, params, execution_options
- ):
- eq_(multiparams, expected_multiparams)
- eq_(params, expected_params)
- return clauseelement, (), {"q": "15"}
+ def test_engine_connect(self, testing_engine):
+ e1 = testing_engine(config.db_url)
- def after_execute(
- conn, clauseelement, multiparams, params, result, execution_options
- ):
- eq_(multiparams, ())
- eq_(params, {"q": "15"})
+ canary = Mock()
- e1 = testing_engine(config.db_url)
- event.listen(e1, "before_execute", before_execute, retval=True)
- event.listen(e1, "after_execute", after_execute)
+ def thing(conn, branch):
+ canary(conn, branch)
- with e1.connect() as conn:
- with testing.expect_deprecated_20(
- r"The connection\.execute\(\) method"
- ):
- result = conn.execute(
- select(bindparam("q", type_=String)),
- *multiparams,
- **params
- )
- eq_(result.all(), [("15",)])
+ event.listen(e1, "engine_connect", thing)
- @testing.only_on("sqlite")
- def test_modify_statement_string(self, connection):
- @event.listens_for(connection, "before_execute", retval=True)
- def _modify(
- conn, clauseelement, multiparams, params, execution_options
- ):
- return clauseelement.replace("hi", "there"), multiparams, params
+ msg = (
+ r"The argument signature for the "
+ r'"ConnectionEvents.engine_connect" event listener has changed as '
+ r"of version 2.0, and conversion for the old argument signature "
+ r"will be removed in a future release. The new signature is "
+ r'"def engine_connect\(conn\)'
+ )
- with _string_deprecation_expect():
- eq_(connection.scalar("select 'hi'"), "there")
+ with expect_deprecated(msg):
+ c1 = e1.connect()
+ c1.close()
+
+ with expect_deprecated(msg):
+ c2 = e1.connect()
+ c2.close()
+
+ eq_(canary.mock_calls, [mock.call(c1, False), mock.call(c2, False)])
def test_retval_flag(self):
canary = []
with e1.connect() as conn:
result = conn.execute(select(1))
result.close()
-
-
-class DDLExecutionTest(fixtures.TestBase):
- def setup_test(self):
- self.engine = engines.mock_engine()
- self.metadata = MetaData()
- self.users = Table(
- "users",
- self.metadata,
- Column("user_id", Integer, primary_key=True),
- Column("user_name", String(40)),
- )
-
-
-class AutocommitKeywordFixture(object):
- def _test_keyword(self, keyword, expected=True):
- dbapi = Mock(
- connect=Mock(
- return_value=Mock(
- cursor=Mock(return_value=Mock(description=()))
- )
- )
- )
- engine = engines.testing_engine(
- options={"_initialize": False, "pool_reset_on_return": None}
- )
- engine.dialect.dbapi = dbapi
-
- with engine.connect() as conn:
- if expected:
- with testing.expect_deprecated_20(
- "The current statement is being autocommitted "
- "using implicit autocommit"
- ):
- conn.exec_driver_sql(
- "%s something table something" % keyword
- )
- else:
- conn.exec_driver_sql("%s something table something" % keyword)
-
- if expected:
- eq_(
- [n for (n, k, s) in dbapi.connect().mock_calls],
- ["cursor", "commit"],
- )
- else:
- eq_(
- [n for (n, k, s) in dbapi.connect().mock_calls], ["cursor"]
- )
-
-
-class AutocommitTextTest(AutocommitKeywordFixture, fixtures.TestBase):
- __backend__ = True
-
- def test_update(self):
- self._test_keyword("UPDATE")
-
- def test_insert(self):
- self._test_keyword("INSERT")
-
- def test_delete(self):
- self._test_keyword("DELETE")
-
- def test_alter(self):
- self._test_keyword("ALTER TABLE")
-
- def test_create(self):
- self._test_keyword("CREATE TABLE foobar")
-
- def test_drop(self):
- self._test_keyword("DROP TABLE foobar")
-
- def test_select(self):
- self._test_keyword("SELECT foo FROM table", False)
-
-
-class ExplicitAutoCommitTest(fixtures.TablesTest):
-
- """test the 'autocommit' flag on select() and text() objects.
-
- Requires PostgreSQL so that we may define a custom function which
- modifies the database."""
-
- __only_on__ = "postgresql"
-
- @classmethod
- def define_tables(cls, metadata):
- Table(
- "foo",
- metadata,
- Column("id", Integer, primary_key=True),
- Column("data", String(100)),
- )
-
- event.listen(
- metadata,
- "after_create",
- DDL(
- "create function insert_foo(varchar) "
- "returns integer as 'insert into foo(data) "
- "values ($1);select 1;' language sql"
- ),
- )
- event.listen(
- metadata, "before_drop", DDL("drop function insert_foo(varchar)")
- )
-
- def test_control(self):
-
- # test that not using autocommit does not commit
- foo = self.tables.foo
-
- conn1 = testing.db.connect()
- conn2 = testing.db.connect()
- conn1.execute(select(func.insert_foo("data1")))
- assert conn2.execute(select(foo.c.data)).fetchall() == []
- conn1.execute(text("select insert_foo('moredata')"))
- assert conn2.execute(select(foo.c.data)).fetchall() == []
- trans = conn1.begin()
- trans.commit()
- assert conn2.execute(select(foo.c.data)).fetchall() == [
- ("data1",),
- ("moredata",),
- ]
- conn1.close()
- conn2.close()
-
- def test_explicit_compiled(self):
- foo = self.tables.foo
-
- conn1 = testing.db.connect()
- conn2 = testing.db.connect()
-
- with testing.expect_deprecated_20(
- "The current statement is being autocommitted using "
- "implicit autocommit"
- ):
- conn1.execute(
- select(func.insert_foo("data1")).execution_options(
- autocommit=True
- )
- )
- assert conn2.execute(select(foo.c.data)).fetchall() == [("data1",)]
- conn1.close()
- conn2.close()
-
- def test_explicit_connection(self):
- foo = self.tables.foo
-
- conn1 = testing.db.connect()
- conn2 = testing.db.connect()
- with testing.expect_deprecated_20(
- "The current statement is being autocommitted using "
- "implicit autocommit"
- ):
- conn1.execution_options(autocommit=True).execute(
- select(func.insert_foo("data1"))
- )
- eq_(conn2.execute(select(foo.c.data)).fetchall(), [("data1",)])
-
- # connection supersedes statement
-
- conn1.execution_options(autocommit=False).execute(
- select(func.insert_foo("data2")).execution_options(autocommit=True)
- )
- eq_(conn2.execute(select(foo.c.data)).fetchall(), [("data1",)])
-
- # ditto
-
- with testing.expect_deprecated_20(
- "The current statement is being autocommitted using "
- "implicit autocommit"
- ):
- conn1.execution_options(autocommit=True).execute(
- select(func.insert_foo("data3")).execution_options(
- autocommit=False
- )
- )
- eq_(
- conn2.execute(select(foo.c.data)).fetchall(),
- [("data1",), ("data2",), ("data3",)],
- )
- conn1.close()
- conn2.close()
-
- def test_explicit_text(self):
- foo = self.tables.foo
-
- conn1 = testing.db.connect()
- conn2 = testing.db.connect()
- with testing.expect_deprecated_20(
- "The current statement is being autocommitted using "
- "implicit autocommit"
- ):
- conn1.execute(
- text("select insert_foo('moredata')").execution_options(
- autocommit=True
- )
- )
- assert conn2.execute(select(foo.c.data)).fetchall() == [("moredata",)]
- conn1.close()
- conn2.close()
-
- def test_implicit_text(self):
- foo = self.tables.foo
-
- conn1 = testing.db.connect()
- conn2 = testing.db.connect()
- with testing.expect_deprecated_20(
- "The current statement is being autocommitted using "
- "implicit autocommit"
- ):
- conn1.execute(
- text("insert into foo (data) values ('implicitdata')")
- )
- assert conn2.execute(select(foo.c.data)).fetchall() == [
- ("implicitdata",)
- ]
- conn1.close()
- conn2.close()
from sqlalchemy.testing import engines
from sqlalchemy.testing import eq_
from sqlalchemy.testing import expect_raises_message
-from sqlalchemy.testing import expect_warnings
from sqlalchemy.testing import fixtures
from sqlalchemy.testing import is_
from sqlalchemy.testing import is_false
)
eq_(result, "%")
+ def test_no_strings(self, connection):
+ with expect_raises_message(
+ tsa.exc.ObjectNotExecutableError,
+ "Not an executable object: 'select 1'",
+ ):
+ connection.execute("select 1")
+
def test_raw_positional_invalid(self, connection):
assert_raises_message(
tsa.exc.ArgumentError,
res = conn.scalars(select(users.c.user_name).order_by(users.c.user_id))
eq_(res.all(), ["sandy", "spongebob"])
+ @testing.combinations(
+ ({}, {}, {}),
+ ({"a": "b"}, {}, {"a": "b"}),
+ ({"a": "b", "d": "e"}, {"a": "c"}, {"a": "c", "d": "e"}),
+ argnames="conn_opts, exec_opts, expected",
+ )
+ def test_execution_opts_per_invoke(
+ self, connection, conn_opts, exec_opts, expected
+ ):
+ opts = []
-class UnicodeReturnsTest(fixtures.TestBase):
- def test_unicode_test_not_in(self):
- eng = engines.testing_engine()
- eng.dialect.returns_unicode_strings = String.RETURNS_UNKNOWN
+ @event.listens_for(connection, "before_cursor_execute")
+ def before_cursor_execute(
+ conn, cursor, statement, parameters, context, executemany
+ ):
+ opts.append(context.execution_options)
- assert_raises_message(
- tsa.exc.InvalidRequestError,
- "RETURNS_UNKNOWN is unsupported in Python 3",
- eng.connect,
- )
+ if conn_opts:
+ connection = connection.execution_options(**conn_opts)
+
+ if exec_opts:
+ connection.execute(select(1), execution_options=exec_opts)
+ else:
+ connection.execute(select(1))
+
+ eq_(opts, [expected])
+
+ @testing.combinations(
+ ({}, {}, {}, {}),
+ ({}, {"a": "b"}, {}, {"a": "b"}),
+ ({}, {"a": "b", "d": "e"}, {"a": "c"}, {"a": "c", "d": "e"}),
+ (
+ {"q": "z", "p": "r"},
+ {"a": "b", "p": "x", "d": "e"},
+ {"a": "c"},
+ {"q": "z", "p": "x", "a": "c", "d": "e"},
+ ),
+ argnames="stmt_opts, conn_opts, exec_opts, expected",
+ )
+ def test_execution_opts_per_invoke_execute_events(
+ self, connection, stmt_opts, conn_opts, exec_opts, expected
+ ):
+ opts = []
+
+ @event.listens_for(connection, "before_execute")
+ def before_execute(
+ conn, clauseelement, multiparams, params, execution_options
+ ):
+ opts.append(("before", execution_options))
+
+ @event.listens_for(connection, "after_execute")
+ def after_execute(
+ conn,
+ clauseelement,
+ multiparams,
+ params,
+ execution_options,
+ result,
+ ):
+ opts.append(("after", execution_options))
+
+ stmt = select(1)
+
+ if stmt_opts:
+ stmt = stmt.execution_options(**stmt_opts)
+
+ if conn_opts:
+ connection = connection.execution_options(**conn_opts)
+
+ if exec_opts:
+ connection.execute(stmt, execution_options=exec_opts)
+ else:
+ connection.execute(stmt)
+
+ eq_(opts, [("before", expected), ("after", expected)])
+
+ @testing.combinations(
+ ({"user_id": 1, "user_name": "name1"},),
+ ([{"user_id": 1, "user_name": "name1"}],),
+ (({"user_id": 1, "user_name": "name1"},),),
+ (
+ [
+ {"user_id": 1, "user_name": "name1"},
+ {"user_id": 2, "user_name": "name2"},
+ ],
+ ),
+ argnames="parameters",
+ )
+ def test_params_interpretation(self, connection, parameters):
+ users = self.tables.users
+
+ connection.execute(users.insert(), parameters)
class ConvenienceExecuteTest(fixtures.TablesTest):
return_value=Mock(begin=Mock(side_effect=Exception("boom")))
)
with mock.patch.object(engine, "_connection_cls", mock_connection):
- if testing.requires.legacy_engine.enabled:
- with expect_raises_message(Exception, "boom"):
- engine.begin()
- else:
- # context manager isn't entered, doesn't actually call
- # connect() or connection.begin()
- engine.begin()
+ # context manager isn't entered, doesn't actually call
+ # connect() or connection.begin()
+ engine.begin()
- if testing.requires.legacy_engine.enabled:
- eq_(mock_connection.return_value.close.mock_calls, [call()])
- else:
- eq_(mock_connection.return_value.close.mock_calls, [])
+ eq_(mock_connection.return_value.close.mock_calls, [])
def test_transaction_engine_ctx_begin_fails_include_enter(self):
- """test #7272"""
+ """test #7272
+
+        Note this behavior for 2.0 required that we add a new flag to
+        Connection, _allow_autobegin=False, so that the first-connect
+        initialization sequence in create.py does not actually run begin()
+        events. Previously, the initialize sequence used a future=False
+        connection unconditionally (and I didn't notice this).
+
+ """
engine = engines.testing_engine()
close_mock = Mock()
fn(conn, 5, value=8)
self._assert_fn(5, value=8)
- @testing.requires.legacy_engine
- def test_connect_as_ctx_noautocommit(self):
- fn = self._trans_fn()
- self._assert_no_data()
-
- with testing.db.connect() as conn:
- ctx = conn.execution_options(autocommit=False)
- testing.run_as_contextmanager(ctx, fn, 5, value=8)
- # autocommit is off
- self._assert_no_data()
-
-
-class FutureConvenienceExecuteTest(
- fixtures.FutureEngineMixin, ConvenienceExecuteTest
-):
- __backend__ = True
-
class CompiledCacheTest(fixtures.TestBase):
__backend__ = True
with self.sql_execution_asserter(connection) as asserter:
conn = connection
execution_options = {"schema_translate_map": map_}
- conn._execute_20(
+ conn.execute(
t1.insert(), {"x": 1}, execution_options=execution_options
)
- conn._execute_20(
+ conn.execute(
t2.insert(), {"x": 1}, execution_options=execution_options
)
- conn._execute_20(
+ conn.execute(
t3.insert(), {"x": 1}, execution_options=execution_options
)
- conn._execute_20(
+ conn.execute(
t1.update().values(x=1).where(t1.c.x == 1),
execution_options=execution_options,
)
- conn._execute_20(
+ conn.execute(
t2.update().values(x=2).where(t2.c.x == 1),
execution_options=execution_options,
)
- conn._execute_20(
+ conn.execute(
t3.update().values(x=3).where(t3.c.x == 1),
execution_options=execution_options,
)
eq_(
- conn._execute_20(
+ conn.execute(
select(t1.c.x), execution_options=execution_options
).scalar(),
1,
)
eq_(
- conn._execute_20(
+ conn.execute(
select(t2.c.x), execution_options=execution_options
).scalar(),
2,
)
eq_(
- conn._execute_20(
+ conn.execute(
select(t3.c.x), execution_options=execution_options
).scalar(),
3,
)
- conn._execute_20(t1.delete(), execution_options=execution_options)
- conn._execute_20(t2.delete(), execution_options=execution_options)
- conn._execute_20(t3.delete(), execution_options=execution_options)
+ conn.execute(t1.delete(), execution_options=execution_options)
+ conn.execute(t2.delete(), execution_options=execution_options)
+ conn.execute(t3.delete(), execution_options=execution_options)
asserter.assert_(
CompiledSQL("INSERT INTO [SCHEMA__none].t1 (x) VALUES (:x)"),
):
break
+ def test_engine_connect(self, testing_engine):
+ e1 = testing_engine(config.db_url)
+
+ canary = Mock()
+
+ # use a real def to trigger legacy signature decorator
+ # logic, if present
+ def thing(conn):
+ canary(conn)
+
+ event.listen(e1, "engine_connect", thing)
+
+ c1 = e1.connect()
+ c1.close()
+
+ c2 = e1.connect()
+ c2.close()
+
+ eq_(canary.mock_calls, [mock.call(c1), mock.call(c2)])
+
def test_per_engine_independence(self, testing_engine):
e1 = testing_engine(config.db_url)
e2 = testing_engine(config.db_url)
canary.got_result(result)
with e1.connect() as conn:
- assert not conn._is_future
+ conn.execute(select(1)).scalar()
+
+ assert conn.in_transaction()
- with conn.begin():
- conn.execute(select(1)).scalar()
- assert conn.in_transaction()
+ conn.commit()
assert not conn.in_transaction()
eq_(canary.be1.call_count, 1)
eq_(canary.be2.call_count, 1)
- if testing.requires.legacy_engine.enabled:
- conn._branch().execute(select(1))
- eq_(canary.be1.call_count, 2)
- eq_(canary.be2.call_count, 2)
-
@testing.combinations(
(True, False),
(True, True),
def init(connection):
initialize(connection)
+ connection.execute(select(1))
+            # the begin mock was added as part of the migration to the
+            # future-only engine, where nothing related to begin() should
+            # happen as part of create.  note we can't use an event to
+            # ensure begin() is not called, because create also blocks
+            # events from happening
with mock.patch.object(
e1.dialect, "initialize", side_effect=init
- ) as m1:
+ ) as m1, mock.patch.object(
+ e1._connection_cls, "begin"
+ ) as begin_mock:
@event.listens_for(e1, "connect", insert=True)
def go1(dbapi_conn, xyz):
c1.close()
c2.close()
+ eq_(begin_mock.mock_calls, [])
+
if add_our_own_onconnect:
calls = [
mock.call.foo("custom event first"),
eq_(canary.be1.call_count, 1)
- conn._branch().execute(select(1))
- eq_(canary.be1.call_count, 2)
-
def test_force_conn_events_false(self, testing_engine):
canary = Mock()
e1 = testing_engine(config.db_url, future=False)
eq_(canary.be1.call_count, 0)
- conn._branch().execute(select(1))
- eq_(canary.be1.call_count, 0)
-
def test_cursor_events_ctx_execute_scalar(self, testing_engine):
canary = Mock()
e1 = testing_engine(config.db_url)
# event is not called at all
eq_(m1.mock_calls, [])
- @testing.combinations((True,), (False,), argnames="future")
@testing.only_on("sqlite")
- def test_modify_statement_internal_driversql(self, connection, future):
+ def test_modify_statement_internal_driversql(self, connection):
m1 = mock.Mock()
@event.listens_for(connection, "before_execute", retval=True)
return clauseelement.replace("hi", "there"), multiparams, params
eq_(
- connection._exec_driver_sql(
- "select 'hi'", [], {}, {}, future=future
- ).scalar(),
- "hi" if future else "there",
+ connection.exec_driver_sql("select 'hi'").scalar(),
+ "hi",
)
- if future:
- eq_(m1.mock_calls, [])
- else:
- eq_(m1.mock_calls, [call.run_event()])
+ eq_(m1.mock_calls, [])
def test_modify_statement_clauseelement(self, connection):
@event.listens_for(connection, "before_execute", retval=True)
conn.execute(select(1).compile(dialect=e1.dialect))
conn._execute_compiled(
- select(1).compile(dialect=e1.dialect), (), {}, {}
+ select(1).compile(dialect=e1.dialect), (), {}
)
def test_execute_events(self):
conn.execute(select(1))
eq_(canary, ["execute", "cursor_execute"])
- @testing.requires.legacy_engine
- def test_engine_connect(self):
- engine = engines.testing_engine()
-
- tracker = Mock()
- event.listen(engine, "engine_connect", tracker)
-
- c1 = engine.connect()
- c2 = c1._branch()
- c1.close()
- eq_(tracker.mock_calls, [call(c1, False), call(c2, True)])
-
def test_execution_options(self):
engine = engines.testing_engine()
)
-class FutureEngineEventsTest(fixtures.FutureEngineMixin, EngineEventsTest):
- def test_future_fixture(self, testing_engine):
- e1 = testing_engine()
-
- assert e1._is_future
- with e1.connect() as conn:
- assert conn._is_future
-
- def test_emit_sql_in_autobegin(self, testing_engine):
- e1 = testing_engine(config.db_url)
-
- canary = Mock()
-
- @event.listens_for(e1, "begin")
- def begin(connection):
- result = connection.execute(select(1)).scalar()
- canary.got_result(result)
-
- with e1.connect() as conn:
- assert conn._is_future
- conn.execute(select(1)).scalar()
-
- assert conn.in_transaction()
-
- conn.commit()
-
- assert not conn.in_transaction()
-
- eq_(canary.mock_calls, [call.got_result(1)])
-
-
class HandleErrorTest(fixtures.TestBase):
__requires__ = ("ad_hoc_engines",)
__backend__ = True
)
eq_(patched.call_count, 1)
- def test_exception_autorollback_fails(self):
+ @testing.only_on("sqlite", "using specific DB message")
+ def test_exception_no_autorollback(self):
+        """With the 2.0 engine, a SQL statement will have run
+        "autobegin", so that we are in a transaction.  So if an error
+        occurs, we report the error but stay in the transaction.
+
+        Previously, we'd see the rollback failing due to autorollback
+        when the transaction wasn't started.
+        """
engine = engines.testing_engine()
conn = engine.connect()
def boom(connection):
raise engine.dialect.dbapi.OperationalError("rollback failed")
- with expect_warnings(
- r"An exception has occurred during handling of a previous "
- r"exception. The previous exception "
- r"is.*(?:i_dont_exist|does not exist)",
- py2konly=True,
- ):
- with patch.object(conn.dialect, "do_rollback", boom):
- assert_raises_message(
- tsa.exc.OperationalError,
- "rollback failed",
- conn.exec_driver_sql,
- "insert into i_dont_exist (x) values ('y')",
- )
+ with patch.object(conn.dialect, "do_rollback", boom):
+ assert_raises_message(
+ tsa.exc.OperationalError,
+ "no such table: i_dont_exist",
+ conn.exec_driver_sql,
+ "insert into i_dont_exist (x) values ('y')",
+ )
+
+ # we're still in a transaction
+ assert conn._transaction
+
+ # only fails when we actually call rollback
+ assert_raises_message(
+ tsa.exc.OperationalError,
+ "rollback failed",
+ conn.rollback,
+ )
+
+ def test_actual_autorollback(self):
+        """Manufacture an autorollback scenario that works in 2.x."""
+
+ engine = engines.testing_engine()
+ conn = engine.connect()
+
+ def boom(connection):
+ raise engine.dialect.dbapi.OperationalError("rollback failed")
+
+ @event.listens_for(conn, "begin")
+ def _do_begin(conn):
+ # run a breaking statement before begin actually happens
+ conn.exec_driver_sql("insert into i_dont_exist (x) values ('y')")
+
+ with patch.object(conn.dialect, "do_rollback", boom):
+ assert_raises_message(
+ tsa.exc.OperationalError,
+ "rollback failed",
+ conn.begin,
+ )
def test_exception_event_ad_hoc_context(self):
"""test that handle_error is called with a context in
dbapi.OperationalError("test"), None, None
)
+ def test_dont_create_transaction_on_initialize(self):
+ """test that engine init doesn't invoke autobegin.
+
+ this happened implicitly in 1.4 due to the use of a non-future
+ connection for initialize.
+
+ to fix this for 2.0, we added a new flag _allow_autobegin=False
+ for init purposes only.
+
+ """
+ e = create_engine("sqlite://")
+
+ init_connection = None
+
+ def mock_initialize(connection):
+ # definitely trigger what would normally be an autobegin
+ connection.execute(select(1))
+ nonlocal init_connection
+ init_connection = connection
+
+ with mock.patch.object(
+ e._connection_cls, "begin"
+ ) as mock_begin, mock.patch.object(
+ e.dialect, "initialize", Mock(side_effect=mock_initialize)
+ ) as mock_init:
+ conn = e.connect()
+
+ eq_(mock_begin.mock_calls, [])
+ is_not(init_connection, None)
+ is_not(conn, init_connection)
+ is_false(init_connection._allow_autobegin)
+ eq_(mock_init.mock_calls, [mock.call(init_connection)])
+
+ # assert the mock works too
+ conn.begin()
+ eq_(mock_begin.mock_calls, [mock.call()])
+
+ conn.close()
+
def test_invalidate_on_connect(self):
"""test that is_disconnect() is called during connect.
eq_(conn.info["boom"], "one")
-class FutureExecuteTest(fixtures.FutureEngineMixin, fixtures.TablesTest):
- __backend__ = True
-
- @classmethod
- def define_tables(cls, metadata):
- Table(
- "users",
- metadata,
- Column("user_id", INT, primary_key=True, autoincrement=False),
- Column("user_name", VARCHAR(20)),
- test_needs_acid=True,
- )
- Table(
- "users_autoinc",
- metadata,
- Column(
- "user_id", INT, primary_key=True, test_needs_autoincrement=True
- ),
- Column("user_name", VARCHAR(20)),
- test_needs_acid=True,
- )
-
- def test_non_dict_mapping(self, connection):
- """ensure arbitrary Mapping works for execute()"""
-
- class NotADict(collections_abc.Mapping):
- def __init__(self, _data):
- self._data = _data
-
- def __iter__(self):
- return iter(self._data)
-
- def __len__(self):
- return len(self._data)
-
- def __getitem__(self, key):
- return self._data[key]
-
- def keys(self):
- return self._data.keys()
-
- nd = NotADict({"a": 10, "b": 15})
- eq_(dict(nd), {"a": 10, "b": 15})
-
- result = connection.execute(
- select(
- bindparam("a", type_=Integer), bindparam("b", type_=Integer)
- ),
- nd,
- )
- eq_(result.first(), (10, 15))
-
- def test_row_works_as_mapping(self, connection):
- """ensure the RowMapping object works as a parameter dictionary for
- execute."""
-
- result = connection.execute(
- select(literal(10).label("a"), literal(15).label("b"))
- )
- row = result.first()
- eq_(row, (10, 15))
- eq_(row._mapping, {"a": 10, "b": 15})
-
- result = connection.execute(
- select(
- bindparam("a", type_=Integer).label("a"),
- bindparam("b", type_=Integer).label("b"),
- ),
- row._mapping,
- )
- row = result.first()
- eq_(row, (10, 15))
- eq_(row._mapping, {"a": 10, "b": 15})
-
- @testing.combinations(
- ({}, {}, {}),
- ({"a": "b"}, {}, {"a": "b"}),
- ({"a": "b", "d": "e"}, {"a": "c"}, {"a": "c", "d": "e"}),
- argnames="conn_opts, exec_opts, expected",
- )
- def test_execution_opts_per_invoke(
- self, connection, conn_opts, exec_opts, expected
- ):
- opts = []
-
- @event.listens_for(connection, "before_cursor_execute")
- def before_cursor_execute(
- conn, cursor, statement, parameters, context, executemany
- ):
- opts.append(context.execution_options)
-
- if conn_opts:
- connection = connection.execution_options(**conn_opts)
-
- if exec_opts:
- connection.execute(select(1), execution_options=exec_opts)
- else:
- connection.execute(select(1))
-
- eq_(opts, [expected])
-
- @testing.combinations(
- ({}, {}, {}, {}),
- ({}, {"a": "b"}, {}, {"a": "b"}),
- ({}, {"a": "b", "d": "e"}, {"a": "c"}, {"a": "c", "d": "e"}),
- (
- {"q": "z", "p": "r"},
- {"a": "b", "p": "x", "d": "e"},
- {"a": "c"},
- {"q": "z", "p": "x", "a": "c", "d": "e"},
- ),
- argnames="stmt_opts, conn_opts, exec_opts, expected",
- )
- def test_execution_opts_per_invoke_execute_events(
- self, connection, stmt_opts, conn_opts, exec_opts, expected
- ):
- opts = []
-
- @event.listens_for(connection, "before_execute")
- def before_execute(
- conn, clauseelement, multiparams, params, execution_options
- ):
- opts.append(("before", execution_options))
-
- @event.listens_for(connection, "after_execute")
- def after_execute(
- conn,
- clauseelement,
- multiparams,
- params,
- execution_options,
- result,
- ):
- opts.append(("after", execution_options))
-
- stmt = select(1)
-
- if stmt_opts:
- stmt = stmt.execution_options(**stmt_opts)
-
- if conn_opts:
- connection = connection.execution_options(**conn_opts)
-
- if exec_opts:
- connection.execute(stmt, execution_options=exec_opts)
- else:
- connection.execute(stmt)
-
- eq_(opts, [("before", expected), ("after", expected)])
-
- def test_no_branching(self, connection):
- with testing.expect_deprecated(
- r"The Connection.connect\(\) method is considered legacy"
- ):
- assert_raises_message(
- NotImplementedError,
- "sqlalchemy.future.Connection does not support "
- "'branching' of new connections.",
- connection.connect,
- )
-
-
class SetInputSizesTest(fixtures.TablesTest):
__backend__ = True
)
def test_log_positional_array(self):
- with self.eng.connect() as conn:
+ with self.eng.begin() as conn:
exc_info = assert_raises(
tsa.exc.DBAPIError,
conn.execute,
)
eq_regex(
- self.buf.buffer[1].message,
+ self.buf.buffer[2].message,
r"\[generated .*\] \(\[1, 2, 3\], 'hi'\)",
)
e1.echo = True
- with e1.connect() as conn:
+ with e1.begin() as conn:
conn.execute(select(1)).close()
- with e2.connect() as conn:
+ with e2.begin() as conn:
conn.execute(select(2)).close()
e1.echo = False
- with e1.connect() as conn:
+ with e1.begin() as conn:
conn.execute(select(3)).close()
- with e2.connect() as conn:
+ with e2.begin() as conn:
conn.execute(select(4)).close()
e2.echo = True
- with e1.connect() as conn:
+ with e1.begin() as conn:
conn.execute(select(5)).close()
- with e2.connect() as conn:
+ with e2.begin() as conn:
conn.execute(select(6)).close()
- assert self.buf.buffer[0].getMessage().startswith("SELECT 1")
- assert self.buf.buffer[2].getMessage().startswith("SELECT 6")
- assert len(self.buf.buffer) == 4
+ assert self.buf.buffer[1].getMessage().startswith("SELECT 1")
+
+ assert self.buf.buffer[5].getMessage().startswith("SELECT 6")
+ assert len(self.buf.buffer) == 8
from sqlalchemy.testing import is_true
from sqlalchemy.testing import mock
from sqlalchemy.testing.assertions import expect_deprecated
+from sqlalchemy.testing.assertions import expect_raises_message
from sqlalchemy.testing.mock import call
from sqlalchemy.testing.mock import MagicMock
from sqlalchemy.testing.mock import Mock
)
assert e.echo is True
- def test_engine_from_config_future(self):
+ def test_engine_from_config_future_parameter_ignored(self):
dbapi = mock_dbapi
config = {
"sqlalchemy.future": "true",
}
- e = engine_from_config(config, module=dbapi, _initialize=False)
- assert e._is_future
+ engine_from_config(config, module=dbapi, _initialize=False)
- def test_engine_from_config_not_future(self):
+ def test_engine_from_config_future_false_raises(self):
dbapi = mock_dbapi
config = {
"sqlalchemy.future": "false",
}
- e = engine_from_config(config, module=dbapi, _initialize=False)
- assert not e._is_future
+ with expect_raises_message(
+ exc.ArgumentError,
+ r"The 'future' parameter passed to create_engine\(\) "
+ r"may only be set to True.",
+ ):
+ engine_from_config(config, module=dbapi, _initialize=False)
def test_pool_reset_on_return_from_config(self):
dbapi = mock_dbapi
from sqlalchemy.testing import assert_raises_message
from sqlalchemy.testing import eq_
from sqlalchemy.testing import fixtures
-from sqlalchemy.testing import mock
class _BooleanProcessorTest(fixtures.TestBase):
from sqlalchemy import cprocessors
cls.module = cprocessors
-
-
-class _DistillArgsTest(fixtures.TestBase):
- def test_distill_none(self):
- eq_(self.module._distill_params(mock.Mock(), None, None), [])
-
- def test_distill_no_multi_no_param(self):
- eq_(self.module._distill_params(mock.Mock(), (), {}), [])
-
- def test_distill_dict_multi_none_param(self):
- eq_(
- self.module._distill_params(mock.Mock(), None, {"foo": "bar"}),
- [{"foo": "bar"}],
- )
-
- def test_distill_dict_multi_empty_param(self):
- eq_(
- self.module._distill_params(mock.Mock(), (), {"foo": "bar"}),
- [{"foo": "bar"}],
- )
-
- def test_distill_single_dict(self):
- eq_(
- self.module._distill_params(mock.Mock(), ({"foo": "bar"},), {}),
- [{"foo": "bar"}],
- )
-
- def test_distill_single_list_strings(self):
- eq_(
- self.module._distill_params(mock.Mock(), (["foo", "bar"],), {}),
- [["foo", "bar"]],
- )
-
- def test_distill_single_list_tuples(self):
- eq_(
- self.module._distill_params(
- mock.Mock(), ([("foo", "bar"), ("bat", "hoho")],), {}
- ),
- [("foo", "bar"), ("bat", "hoho")],
- )
-
- def test_distill_single_list_tuple(self):
- eq_(
- self.module._distill_params(mock.Mock(), ([("foo", "bar")],), {}),
- [("foo", "bar")],
- )
-
- def test_distill_multi_list_tuple(self):
- eq_(
- self.module._distill_params(
- mock.Mock(), ([("foo", "bar")], [("bar", "bat")]), {}
- ),
- ([("foo", "bar")], [("bar", "bat")]),
- )
-
- def test_distill_multi_strings(self):
- eq_(
- self.module._distill_params(mock.Mock(), ("foo", "bar"), {}),
- [("foo", "bar")],
- )
-
- def test_distill_single_list_dicts(self):
- eq_(
- self.module._distill_params(
- mock.Mock(), ([{"foo": "bar"}, {"foo": "hoho"}],), {}
- ),
- [{"foo": "bar"}, {"foo": "hoho"}],
- )
-
- def test_distill_single_string(self):
- eq_(self.module._distill_params(mock.Mock(), ("arg",), {}), [["arg"]])
-
- def test_distill_multi_string_tuple(self):
- eq_(
- self.module._distill_params(mock.Mock(), (("arg", "arg"),), {}),
- [("arg", "arg")],
- )
-
-
-class PyDistillArgsTest(_DistillArgsTest):
- @classmethod
- def setup_test_class(cls):
- from sqlalchemy.engine import util
-
- cls.module = util
from sqlalchemy.testing import engines
from sqlalchemy.testing import eq_
from sqlalchemy.testing import expect_raises
+from sqlalchemy.testing import expect_raises_message
from sqlalchemy.testing import fixtures
from sqlalchemy.testing import is_
from sqlalchemy.testing import is_false
from sqlalchemy.testing.engines import testing_engine
from sqlalchemy.testing.mock import call
from sqlalchemy.testing.mock import Mock
-from sqlalchemy.testing.mock import patch
from sqlalchemy.testing.schema import Column
from sqlalchemy.testing.schema import Table
from sqlalchemy.testing.util import gc_collect
# error stays consistent
assert_raises_message(
tsa.exc.PendingRollbackError,
- "This connection is on an inactive transaction. Please rollback",
+ r"Can't reconnect until invalid transaction is rolled back. "
+ r"Please rollback\(\) fully before proceeding",
conn.execute,
select(1),
)
assert_raises_message(
tsa.exc.PendingRollbackError,
- "This connection is on an inactive transaction. Please rollback",
+ r"Can't reconnect until invalid transaction is rolled back. "
+ r"Please rollback\(\) fully before proceeding",
trans.commit,
)
assert_raises_message(
tsa.exc.PendingRollbackError,
- "This connection is on an inactive transaction. Please rollback",
+ r"Can't reconnect until invalid transaction is rolled back. "
+ r"Please rollback\(\) fully before proceeding",
conn.execute,
select(1),
)
self.dbapi.shutdown()
- assert_raises(tsa.exc.DBAPIError, conn.execute, select(1))
+ with expect_raises(tsa.exc.DBAPIError):
+ conn.execute(select(1))
assert not conn.closed
assert conn.invalidated
eq_([c.close.mock_calls for c in self.dbapi.connections], [[call()]])
+ # the transaction was autobegun; the caller has to call rollback
+ with expect_raises(tsa.exc.PendingRollbackError):
+ conn.execute(select(1))
+
+ # ok
+ conn.rollback()
+
+ # now we are good
# test reconnects
conn.execute(select(1))
assert not conn.invalidated
conn.close()
def test_noreconnect_rollback(self):
+ # this test changes in 2.x due to autobegin.
+
conn = self.db.connect()
+ conn.execute(select(1))
+
self.dbapi.shutdown("rollback_no_disconnect")
- # raises error
- assert_raises_message(
+ # previously, a failing select() here would trigger autorollback,
+ # which would also fail; this is no longer the case, as
+ # autorollback does not normally occur
+ with expect_raises_message(
tsa.exc.DBAPIError,
- "something broke on rollback but we didn't " "lose the connection",
- conn.execute,
- select(1),
- )
+ r"something broke on rollback but we didn't lose the connection",
+ ):
+ conn.rollback()
assert not conn.closed
assert not conn.invalidated
assert_raises_message(
tsa.exc.DBAPIError,
"Lost the DB connection on rollback",
- conn.execute,
- select(1),
+ conn.rollback,
)
assert not conn.closed
assert conn.invalidated
assert conn.invalidated
+
+ with expect_raises(tsa.exc.PendingRollbackError):
+ conn.execute(select(1))
+
+ conn.rollback()
+
eq_(conn.execute(select(1)).scalar(), 1)
assert not conn.invalidated
_assert_invalidated(conn.execute, select(1))
assert conn.invalidated
+ conn.rollback()
+
eq_(conn.execute(select(1)).scalar(), 1)
assert not conn.invalidated
# pool isn't replaced
assert self.engine.pool is p2
- def test_branched_invalidate_branch_to_parent(self):
- with self.engine.connect() as c1:
-
- with patch.object(self.engine.pool, "logger") as logger:
- with testing.expect_deprecated_20(
- r"The Connection.connect\(\) method is considered legacy"
- ):
- c1_branch = c1.connect()
- eq_(c1_branch.execute(select(1)).scalar(), 1)
-
- self.engine.test_shutdown()
-
- _assert_invalidated(c1_branch.execute, select(1))
- assert c1.invalidated
- assert c1_branch.invalidated
-
- c1_branch._revalidate_connection()
- assert not c1.invalidated
- assert not c1_branch.invalidated
-
- assert "Invalidate connection" in logger.mock_calls[0][1][0]
-
- def test_branched_invalidate_parent_to_branch(self):
- with self.engine.connect() as c1:
- with testing.expect_deprecated_20(
- r"The Connection.connect\(\) method is considered legacy"
- ):
- c1_branch = c1.connect()
- eq_(c1_branch.execute(select(1)).scalar(), 1)
-
- self.engine.test_shutdown()
-
- _assert_invalidated(c1.execute, select(1))
- assert c1.invalidated
- assert c1_branch.invalidated
-
- c1._revalidate_connection()
- assert not c1.invalidated
- assert not c1_branch.invalidated
-
- def test_branch_invalidate_state(self):
- with self.engine.connect() as c1:
- with testing.expect_deprecated_20(
- r"The Connection.connect\(\) method is considered legacy"
- ):
- c1_branch = c1.connect()
-
- eq_(c1_branch.execute(select(1)).scalar(), 1)
-
- self.engine.test_shutdown()
-
- _assert_invalidated(c1_branch.execute, select(1))
- assert not c1_branch.closed
- assert not c1_branch._still_open_and_dbapi_connection_is_valid
-
def test_ensure_is_disconnect_gets_connection(self):
def is_disconnect(e, conn, cursor):
# connection is still present
self.engine.test_shutdown()
assert_raises(tsa.exc.DBAPIError, conn.execute, select(1))
+ # aiosqlite is not able to run close() here without an
+ # error.
+ conn.invalidate()
+
def test_rollback_on_invalid_plain(self):
with self.engine.connect() as conn:
trans = conn.begin()
_assert_invalidated(conn.execute, select(1))
assert not conn.closed
assert conn.invalidated
+ conn.rollback()
eq_(conn.execute(select(1)).scalar(), 1)
assert not conn.invalidated
# to get a real "cut the server off" kind of fixture we'd need to do
# something in provisioning that seeks out the TCP connection at the
# OS level and kills it.
- __only_on__ = ("mysql+mysqldb", "mysql+pymysql")
-
- future = False
+ __only_on__ = ("+mysqldb", "+pymysql")
def make_engine(self, engine):
num_retries = 3
)
connection.invalidate()
- if self.future:
- connection.rollback()
- else:
- trans = connection.get_transaction()
- if trans:
- trans.rollback()
+ connection.rollback()
time.sleep(retry_interval)
context.cursor = (
__backend__ = True
def setup_test(self):
- self.engine = engines.reconnecting_engine(
- options=dict(future=self.future)
- )
+ self.engine = engines.reconnecting_engine()
self.meta = MetaData()
self.table = Table(
"sometable",
{"id": 6, "name": "some name 6"},
],
)
- if self.future:
- conn.rollback()
- else:
- trans = conn.get_transaction()
- trans.rollback()
-
-
-class FutureReconnectRecipeTest(ReconnectRecipeTest):
- future = True
+ conn.rollback()
-import sys
-
from sqlalchemy import event
from sqlalchemy import exc
from sqlalchemy import func
from sqlalchemy.testing import fixtures
from sqlalchemy.testing import mock
from sqlalchemy.testing import ne_
-from sqlalchemy.testing.assertions import expect_deprecated_20
from sqlalchemy.testing.assertions import expect_raises_message
from sqlalchemy.testing.engines import testing_engine
from sqlalchemy.testing.schema import Column
with testing.db.connect() as conn:
yield conn
- def test_interrupt_ctxmanager_engine(self, trans_ctx_manager_fixture):
- fn = trans_ctx_manager_fixture
-
- # add commit/rollback to the legacy Connection object so that
- # we can test this less-likely case in use with the legacy
- # Engine.begin() context manager
- class ConnWCommitRollback(testing.db._connection_cls):
- def commit(self):
- self.get_transaction().commit()
-
- def rollback(self):
- self.get_transaction().rollback()
-
- with mock.patch.object(
- testing.db, "_connection_cls", ConnWCommitRollback
- ):
- fn(testing.db, trans_on_subject=False, execute_on_subject=False)
-
- def test_interrupt_ctxmanager_connection(self, trans_ctx_manager_fixture):
- fn = trans_ctx_manager_fixture
-
- with testing.db.connect() as conn:
- fn(conn, trans_on_subject=False, execute_on_subject=True)
-
def test_commits(self, local_connection):
users = self.tables.users
connection = local_connection
assert not local_connection.in_transaction()
- @testing.combinations((True,), (False,), argnames="roll_back_in_block")
- def test_ctxmanager_rolls_back(self, local_connection, roll_back_in_block):
- m1 = mock.Mock()
-
- event.listen(local_connection, "rollback", m1.rollback)
- event.listen(local_connection, "commit", m1.commit)
-
- with expect_raises_message(Exception, "test"):
- with local_connection.begin() as trans:
- if roll_back_in_block:
- trans.rollback()
-
- if 1 == 1:
- raise Exception("test")
-
- assert not trans.is_active
- assert not local_connection.in_transaction()
- assert trans._deactivated_from_connection
-
- eq_(m1.mock_calls, [mock.call.rollback(local_connection)])
-
- @testing.combinations((True,), (False,), argnames="roll_back_in_block")
- def test_ctxmanager_rolls_back_legacy_marker(
- self, local_connection, roll_back_in_block
- ):
- m1 = mock.Mock()
-
- event.listen(local_connection, "rollback", m1.rollback)
- event.listen(local_connection, "commit", m1.commit)
-
- with expect_deprecated_20(
- r"Calling .begin\(\) when a transaction is already begun"
- ):
- with local_connection.begin() as trans:
- with expect_raises_message(Exception, "test"):
- with local_connection.begin() as marker_trans:
- if roll_back_in_block:
- marker_trans.rollback()
- if 1 == 1:
- raise Exception("test")
-
- assert not marker_trans.is_active
- assert marker_trans._deactivated_from_connection
-
- assert not trans._deactivated_from_connection
- assert not trans.is_active
- assert not local_connection.in_transaction()
-
- eq_(m1.mock_calls, [mock.call.rollback(local_connection)])
-
@testing.combinations((True,), (False,), argnames="roll_back_in_block")
@testing.requires.savepoints
def test_ctxmanager_rolls_back_savepoint(
],
)
- def test_ctxmanager_commits_real_trans_from_nested(self, local_connection):
- m1 = mock.Mock()
-
- event.listen(
- local_connection, "rollback_savepoint", m1.rollback_savepoint
- )
- event.listen(
- local_connection, "release_savepoint", m1.release_savepoint
- )
- event.listen(local_connection, "rollback", m1.rollback)
- event.listen(local_connection, "commit", m1.commit)
- event.listen(local_connection, "begin", m1.begin)
- event.listen(local_connection, "savepoint", m1.savepoint)
-
- with testing.expect_deprecated_20(
- r"Calling Connection.begin_nested\(\) in 2.0 style use will return"
- ):
- with local_connection.begin_nested() as nested_trans:
- pass
-
- assert not nested_trans.is_active
- assert nested_trans._deactivated_from_connection
- # legacy mode, no savepoint at all
- eq_(
- m1.mock_calls,
- [
- mock.call.begin(local_connection),
- mock.call.commit(local_connection),
- ],
- )
-
def test_deactivated_warning_straight(self, local_connection):
with expect_warnings(
"transaction already deassociated from connection"
0,
)
- def test_with_interface(self, local_connection):
+ def test_ctxmanager_interface(self, local_connection):
+ # a legacy test, adapted for 2.x style; it was called
+ # "test_with_interface" and is likely an early test from when
+ # the "with" construct was first added.
+
connection = local_connection
users = self.tables.users
trans = connection.begin()
- trans.__enter__()
- connection.execute(users.insert(), dict(user_id=1, user_name="user1"))
- connection.execute(users.insert(), dict(user_id=2, user_name="user2"))
- try:
+
+ with trans:
+ connection.execute(
+ users.insert(), dict(user_id=1, user_name="user1")
+ )
connection.execute(
- users.insert(), dict(user_id=2, user_name="user2.5")
+ users.insert(), dict(user_id=2, user_name="user2")
)
- except Exception:
- trans.__exit__(*sys.exc_info())
- assert not trans.is_active
- self.assert_(
- connection.exec_driver_sql(
- "select count(*) from " "users"
- ).scalar()
- == 0
- )
+ assert trans.is_active
- trans = connection.begin()
- trans.__enter__()
- connection.execute(users.insert(), dict(user_id=1, user_name="user1"))
- trans.__exit__(None, None, None)
assert not trans.is_active
- self.assert_(
+
+ eq_(
connection.exec_driver_sql(
"select count(*) from " "users"
- ).scalar()
- == 1
+ ).scalar(),
+ 2,
)
+ connection.rollback()
def test_close(self, local_connection):
connection = local_connection
)
eq_(result.fetchall(), [])
+ def test_interrupt_ctxmanager_engine(self, trans_ctx_manager_fixture):
+ fn = trans_ctx_manager_fixture
-class AutoRollbackTest(fixtures.TestBase):
- __backend__ = True
-
- @classmethod
- def setup_test_class(cls):
- global metadata
- metadata = MetaData()
+ fn(testing.db, trans_on_subject=False, execute_on_subject=False)
- @classmethod
- def teardown_test_class(cls):
- metadata.drop_all(testing.db)
+ @testing.combinations((True,), (False,), argnames="trans_on_subject")
+ def test_interrupt_ctxmanager_connection(
+ self, trans_ctx_manager_fixture, trans_on_subject
+ ):
+ fn = trans_ctx_manager_fixture
- def test_rollback_deadlock(self):
- """test that returning connections to the pool clears any object
- locks."""
+ with testing.db.connect() as conn:
+ fn(
+ conn,
+ trans_on_subject=trans_on_subject,
+ execute_on_subject=True,
+ )
- conn1 = testing.db.connect()
- conn2 = testing.db.connect()
- users = Table(
- "deadlock_users",
- metadata,
- Column("user_id", INT, primary_key=True),
- Column("user_name", VARCHAR(20)),
- test_needs_acid=True,
- )
- with conn1.begin():
- users.create(conn1)
- conn1.exec_driver_sql("select * from deadlock_users")
- conn1.close()
+ def test_autobegin_rollback(self):
+ users = self.tables.users
+ with testing.db.connect() as conn:
+ conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
+ conn.rollback()
- # without auto-rollback in the connection pool's return() logic,
- # this deadlocks in PostgreSQL, because conn1 is returned to the
- # pool but still has a lock on "deadlock_users". comment out the
- # rollback in pool/ConnectionFairy._close() to see !
+ eq_(conn.scalar(select(func.count(1)).select_from(users)), 0)
- with conn2.begin():
- users.drop(conn2)
- conn2.close()
+ @testing.requires.autocommit
+ def test_autocommit_isolation_level(self):
+ users = self.tables.users
+ with testing.db.connect().execution_options(
+ isolation_level="AUTOCOMMIT"
+ ) as conn:
+ conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
+ conn.rollback()
-class IsolationLevelTest(fixtures.TestBase):
- __requires__ = (
- "isolation_level",
- "ad_hoc_engines",
- "legacy_isolation_level",
- )
- __backend__ = True
+ with testing.db.connect() as conn:
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 1,
+ )
- def _default_isolation_level(self):
- return testing.requires.get_isolation_levels(testing.config)["default"]
+ @testing.requires.autocommit
+ def test_no_autocommit_w_begin(self):
- def _non_default_isolation_level(self):
- levels = testing.requires.get_isolation_levels(testing.config)
+ with testing.db.begin() as conn:
+ assert_raises_message(
+ exc.InvalidRequestError,
+ r"This connection has already initialized a SQLAlchemy "
+ r"Transaction\(\) object via begin\(\) or autobegin; "
+ r"isolation_level may not be altered unless rollback\(\) or "
+ r"commit\(\) is called first.",
+ conn.execution_options,
+ isolation_level="AUTOCOMMIT",
+ )
- default = levels["default"]
- supported = levels["supported"]
+ @testing.requires.autocommit
+ def test_no_autocommit_w_autobegin(self):
- s = set(supported).difference(["AUTOCOMMIT", default])
- if s:
- return s.pop()
- else:
- assert False, "no non-default isolation level available"
+ with testing.db.connect() as conn:
+ conn.execute(select(1))
- def test_engine_param_stays(self):
+ assert_raises_message(
+ exc.InvalidRequestError,
+ r"This connection has already initialized a SQLAlchemy "
+ r"Transaction\(\) object via begin\(\) or autobegin; "
+ r"isolation_level may not be altered unless rollback\(\) or "
+ r"commit\(\) is called first.",
+ conn.execution_options,
+ isolation_level="AUTOCOMMIT",
+ )
- eng = testing_engine()
- isolation_level = eng.dialect.get_isolation_level(
- eng.connect().connection
- )
- level = self._non_default_isolation_level()
+ conn.rollback()
- ne_(isolation_level, level)
+ conn.execution_options(isolation_level="AUTOCOMMIT")
- eng = testing_engine(options=dict(isolation_level=level))
- eq_(eng.dialect.get_isolation_level(eng.connect().connection), level)
+ def test_autobegin_commit(self):
+ users = self.tables.users
- # check that it stays
- conn = eng.connect()
- eq_(eng.dialect.get_isolation_level(conn.connection), level)
- conn.close()
+ with testing.db.connect() as conn:
- conn = eng.connect()
- eq_(eng.dialect.get_isolation_level(conn.connection), level)
- conn.close()
+ assert not conn.in_transaction()
+ conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
- def test_default_level(self):
- eng = testing_engine(options=dict())
- isolation_level = eng.dialect.get_isolation_level(
- eng.connect().connection
- )
- eq_(isolation_level, self._default_isolation_level())
+ assert conn.in_transaction()
+ conn.commit()
- def test_reset_level(self):
- eng = testing_engine(options=dict())
- conn = eng.connect()
- eq_(
- eng.dialect.get_isolation_level(conn.connection),
- self._default_isolation_level(),
- )
+ assert not conn.in_transaction()
- eng.dialect.set_isolation_level(
- conn.connection, self._non_default_isolation_level()
- )
- eq_(
- eng.dialect.get_isolation_level(conn.connection),
- self._non_default_isolation_level(),
- )
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 1,
+ )
- eng.dialect.reset_isolation_level(conn.connection)
- eq_(
- eng.dialect.get_isolation_level(conn.connection),
- self._default_isolation_level(),
- )
+ conn.execute(users.insert(), {"user_id": 2, "user_name": "name 2"})
- conn.close()
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 2,
+ )
- def test_reset_level_with_setting(self):
- eng = testing_engine(
- options=dict(isolation_level=self._non_default_isolation_level())
- )
- conn = eng.connect()
- eq_(
- eng.dialect.get_isolation_level(conn.connection),
- self._non_default_isolation_level(),
- )
- eng.dialect.set_isolation_level(
- conn.connection, self._default_isolation_level()
- )
- eq_(
- eng.dialect.get_isolation_level(conn.connection),
- self._default_isolation_level(),
- )
- eng.dialect.reset_isolation_level(conn.connection)
- eq_(
- eng.dialect.get_isolation_level(conn.connection),
- self._non_default_isolation_level(),
- )
- conn.close()
+ assert conn.in_transaction()
+ conn.rollback()
+ assert not conn.in_transaction()
- def test_invalid_level(self):
- eng = testing_engine(options=dict(isolation_level="FOO"))
- assert_raises_message(
- exc.ArgumentError,
- "Invalid value '%s' for isolation_level. "
- "Valid isolation levels for %s are %s"
- % (
- "FOO",
- eng.dialect.name,
- ", ".join(eng.dialect._isolation_lookup),
- ),
- eng.connect,
- )
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 1,
+ )
- def test_connection_invalidated(self):
- eng = testing_engine()
- conn = eng.connect()
- c2 = conn.execution_options(
- isolation_level=self._non_default_isolation_level()
- )
- c2.invalidate()
- c2.connection
+ def test_rollback_on_close(self):
+ canary = mock.Mock()
+ with testing.db.connect() as conn:
+ event.listen(conn, "rollback", canary)
+ conn.execute(select(1))
+ assert conn.in_transaction()
- # TODO: do we want to rebuild the previous isolation?
- # for now, this is current behavior so we will leave it.
- eq_(c2.get_isolation_level(), self._default_isolation_level())
+ eq_(canary.mock_calls, [mock.call(conn)])
- def test_per_connection(self):
- from sqlalchemy.pool import QueuePool
+ def test_no_on_close_no_transaction(self):
+ canary = mock.Mock()
+ with testing.db.connect() as conn:
+ event.listen(conn, "rollback", canary)
+ conn.execute(select(1))
+ conn.rollback()
+ assert not conn.in_transaction()
- eng = testing_engine(
- options=dict(poolclass=QueuePool, pool_size=2, max_overflow=0)
- )
+ eq_(canary.mock_calls, [mock.call(conn)])
- c1 = eng.connect()
- c1 = c1.execution_options(
- isolation_level=self._non_default_isolation_level()
- )
- c2 = eng.connect()
- eq_(
- eng.dialect.get_isolation_level(c1.connection),
- self._non_default_isolation_level(),
- )
- eq_(
- eng.dialect.get_isolation_level(c2.connection),
- self._default_isolation_level(),
- )
- c1.close()
- c2.close()
- c3 = eng.connect()
- eq_(
- eng.dialect.get_isolation_level(c3.connection),
- self._default_isolation_level(),
- )
- c4 = eng.connect()
- eq_(
- eng.dialect.get_isolation_level(c4.connection),
- self._default_isolation_level(),
- )
+ def test_rollback_on_exception(self):
+ canary = mock.Mock()
+ try:
+ with testing.db.connect() as conn:
+ event.listen(conn, "rollback", canary)
+ conn.execute(select(1))
+ assert conn.in_transaction()
+ raise Exception("some error")
+ assert False
+ except Exception:
+ pass
- c3.close()
- c4.close()
+ eq_(canary.mock_calls, [mock.call(conn)])
- def test_warning_in_transaction(self):
- eng = testing_engine()
- c1 = eng.connect()
- with expect_warnings(
- "Connection is already established with a Transaction; "
- "setting isolation_level may implicitly rollback or commit "
- "the existing transaction, or have no effect until next "
- "transaction"
- ):
- with c1.begin():
- c1 = c1.execution_options(
- isolation_level=self._non_default_isolation_level()
- )
+ def test_rollback_on_exception_if_no_trans(self):
+ canary = mock.Mock()
+ try:
+ with testing.db.connect() as conn:
+ event.listen(conn, "rollback", canary)
+ assert not conn.in_transaction()
+ raise Exception("some error")
+ assert False
+ except Exception:
+ pass
- eq_(
- eng.dialect.get_isolation_level(c1.connection),
- self._non_default_isolation_level(),
- )
- # stays outside of transaction
- eq_(
- eng.dialect.get_isolation_level(c1.connection),
- self._non_default_isolation_level(),
- )
+ eq_(canary.mock_calls, [])
- def test_per_statement_bzzt(self):
- assert_raises_message(
- exc.ArgumentError,
- r"'isolation_level' execution option may only be specified "
- r"on Connection.execution_options\(\), or "
- r"per-engine using the isolation_level "
- r"argument to create_engine\(\).",
- select(1).execution_options,
- isolation_level=self._non_default_isolation_level(),
- )
+ def test_commit_no_begin(self):
+ with testing.db.connect() as conn:
+ assert not conn.in_transaction()
+ conn.commit()
- def test_per_engine(self):
- # new in 0.9
- eng = testing_engine(
- testing.db.url,
- options=dict(
- execution_options={
- "isolation_level": self._non_default_isolation_level()
- }
- ),
- )
- conn = eng.connect()
- eq_(
- eng.dialect.get_isolation_level(conn.connection),
- self._non_default_isolation_level(),
- )
+ @testing.requires.independent_connections
+ def test_commit_inactive(self):
+ with testing.db.connect() as conn:
+ conn.begin()
+ conn.invalidate()
- def test_per_option_engine(self):
- eng = testing_engine(testing.db.url).execution_options(
- isolation_level=self._non_default_isolation_level()
- )
+ assert_raises_message(
+ exc.InvalidRequestError, "Can't reconnect until", conn.commit
+ )
- conn = eng.connect()
- eq_(
- eng.dialect.get_isolation_level(conn.connection),
- self._non_default_isolation_level(),
- )
+ @testing.requires.independent_connections
+ def test_rollback_inactive(self):
+ users = self.tables.users
+ with testing.db.connect() as conn:
- def test_isolation_level_accessors_connection_default(self):
- eng = testing_engine(testing.db.url)
- with eng.connect() as conn:
- eq_(conn.default_isolation_level, self._default_isolation_level())
- with eng.connect() as conn:
- eq_(conn.get_isolation_level(), self._default_isolation_level())
+ conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
+ conn.commit()
- def test_isolation_level_accessors_connection_option_modified(self):
- eng = testing_engine(testing.db.url)
- with eng.connect() as conn:
- c2 = conn.execution_options(
- isolation_level=self._non_default_isolation_level()
+ conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
+
+ conn.invalidate()
+
+ assert_raises_message(
+ exc.PendingRollbackError,
+ "Can't reconnect",
+ conn.execute,
+ select(1),
)
- eq_(conn.default_isolation_level, self._default_isolation_level())
+
+ conn.rollback()
eq_(
- conn.get_isolation_level(), self._non_default_isolation_level()
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 1,
)
- eq_(c2.get_isolation_level(), self._non_default_isolation_level())
+ def test_rollback_no_begin(self):
+ with testing.db.connect() as conn:
+ assert not conn.in_transaction()
+ conn.rollback()
-class ConnectionCharacteristicTest(fixtures.TestBase):
- @testing.fixture
- def characteristic_fixture(self):
- class FooCharacteristic(characteristics.ConnectionCharacteristic):
- transactional = True
+ def test_rollback_end_ctx_manager(self):
+ with testing.db.begin() as conn:
+ assert conn.in_transaction()
+ conn.rollback()
+ assert not conn.in_transaction()
- def reset_characteristic(self, dialect, dbapi_conn):
+ def test_rollback_end_ctx_manager_autobegin(self, local_connection):
+ m1 = mock.Mock()
- dialect.reset_foo(dbapi_conn)
+ event.listen(local_connection, "rollback", m1.rollback)
+ event.listen(local_connection, "commit", m1.commit)
- def set_characteristic(self, dialect, dbapi_conn, value):
+ with local_connection.begin() as trans:
+ assert local_connection.in_transaction()
+ trans.rollback()
+ assert not local_connection.in_transaction()
- dialect.set_foo(dbapi_conn, value)
+ # previously, this would be subject to autocommit;
+ # now it raises
+ with expect_raises_message(
+ exc.InvalidRequestError,
+ "Can't operate on closed transaction inside context manager. "
+ "Please complete the context manager before emitting "
+ "further commands.",
+ ):
+ local_connection.execute(select(1))
- def get_characteristic(self, dialect, dbapi_conn):
- return dialect.get_foo(dbapi_conn)
+ assert not local_connection.in_transaction()
- class FooDialect(default.DefaultDialect):
- connection_characteristics = util.immutabledict(
- {"foo": FooCharacteristic()}
- )
+ @testing.combinations((True,), (False,), argnames="roll_back_in_block")
+ def test_ctxmanager_rolls_back(self, local_connection, roll_back_in_block):
+ m1 = mock.Mock()
- def reset_foo(self, dbapi_conn):
- dbapi_conn.foo = "original_value"
+ event.listen(local_connection, "rollback", m1.rollback)
+ event.listen(local_connection, "commit", m1.commit)
- def set_foo(self, dbapi_conn, value):
- dbapi_conn.foo = value
+ with expect_raises_message(Exception, "test"):
+ with local_connection.begin() as trans:
+ if roll_back_in_block:
+ trans.rollback()
- def get_foo(self, dbapi_conn):
- return dbapi_conn.foo
+ if 1 == 1:
+ raise Exception("test")
- connection = mock.Mock()
+ assert not trans.is_active
+ assert not local_connection.in_transaction()
+ assert trans._deactivated_from_connection
- def creator():
- connection.foo = "original_value"
- return connection
+ eq_(m1.mock_calls, [mock.call.rollback(local_connection)])
- pool = _pool.SingletonThreadPool(creator=creator)
- u = url.make_url("foo://")
- return base.Engine(pool, FooDialect(), u), connection
+ @testing.requires.savepoints
+ def test_ctxmanager_autobegins_real_trans_from_nested(
+ self, local_connection
+ ):
+ # the legacy version of this test in 1.4
+ # was test_ctxmanager_commits_real_trans_from_nested
+ m1 = mock.Mock()
- def test_engine_param_stays(self, characteristic_fixture):
+ event.listen(
+ local_connection, "rollback_savepoint", m1.rollback_savepoint
+ )
+ event.listen(
+ local_connection, "release_savepoint", m1.release_savepoint
+ )
+ event.listen(local_connection, "rollback", m1.rollback)
+ event.listen(local_connection, "commit", m1.commit)
+ event.listen(local_connection, "begin", m1.begin)
+ event.listen(local_connection, "savepoint", m1.savepoint)
- engine, connection = characteristic_fixture
+ with local_connection.begin_nested() as nested_trans:
+ pass
- foo_level = engine.dialect.get_foo(engine.connect().connection)
+ assert not nested_trans.is_active
+ assert nested_trans._deactivated_from_connection
+ eq_(
+ m1.mock_calls,
+ [
+ mock.call.begin(local_connection),
+ mock.call.savepoint(local_connection, mock.ANY),
+ mock.call.release_savepoint(
+ local_connection, mock.ANY, mock.ANY
+ ),
+ ],
+ )
- new_level = "new_level"
+ def test_explicit_begin(self):
+ users = self.tables.users
- ne_(foo_level, new_level)
+ with testing.db.connect() as conn:
+ assert not conn.in_transaction()
+ conn.begin()
+ assert conn.in_transaction()
+ conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
+ conn.commit()
- eng = engine.execution_options(foo=new_level)
- eq_(eng.dialect.get_foo(eng.connect().connection), new_level)
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 1,
+ )
- # check that it stays
- conn = eng.connect()
- eq_(eng.dialect.get_foo(conn.connection), new_level)
- conn.close()
+ def test_no_double_begin(self):
+ with testing.db.connect() as conn:
+ conn.begin()
- conn = eng.connect()
- eq_(eng.dialect.get_foo(conn.connection), new_level)
- conn.close()
+ assert_raises_message(
+ exc.InvalidRequestError,
+ r"This connection has already initialized a SQLAlchemy "
+ r"Transaction\(\) object via begin\(\) or autobegin; can't "
+ r"call begin\(\) here unless rollback\(\) or commit\(\) is "
+ r"called first.",
+ conn.begin,
+ )
- def test_default_level(self, characteristic_fixture):
- engine, connection = characteristic_fixture
+ def test_no_autocommit(self):
+ users = self.tables.users
- eq_(
- engine.dialect.get_foo(engine.connect().connection),
- "original_value",
- )
+ with testing.db.connect() as conn:
+ conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
- def test_connection_invalidated(self, characteristic_fixture):
- engine, connection = characteristic_fixture
+ with testing.db.connect() as conn:
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 0,
+ )
- conn = engine.connect()
- c2 = conn.execution_options(foo="new_value")
- eq_(connection.foo, "new_value")
- c2.invalidate()
- c2.connection
+ def test_begin_block(self):
+ users = self.tables.users
- eq_(connection.foo, "original_value")
+ with testing.db.begin() as conn:
+ conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
- def test_warning_in_transaction(self, characteristic_fixture):
- engine, connection = characteristic_fixture
+ with testing.db.connect() as conn:
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 1,
+ )
- c1 = engine.connect()
- with expect_warnings(
- "Connection is already established with a Transaction; "
- "setting foo may implicitly rollback or commit "
- "the existing transaction, or have no effect until next "
- "transaction"
- ):
- with c1.begin():
- c1 = c1.execution_options(foo="new_foo")
+ @testing.requires.savepoints
+ def test_savepoint_one(self):
+ users = self.tables.users
- eq_(
- engine.dialect.get_foo(c1.connection),
- "new_foo",
- )
- # stays outside of transaction
- eq_(engine.dialect.get_foo(c1.connection), "new_foo")
+ with testing.db.begin() as conn:
+ conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
- @testing.fails("no error is raised yet here.")
- def test_per_statement_bzzt(self, characteristic_fixture):
- engine, connection = characteristic_fixture
+ savepoint = conn.begin_nested()
+ conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
- # this would need some on-execute mechanism to look inside of
- # the characteristics list. unfortunately this would
- # add some latency.
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 2,
+ )
+ savepoint.rollback()
- assert_raises_message(
- exc.ArgumentError,
- r"'foo' execution option may only be specified "
- r"on Connection.execution_options\(\), or "
- r"per-engine using the isolation_level "
- r"argument to create_engine\(\).",
- connection.execute,
- select([1]).execution_options(foo="bar"),
- )
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 1,
+ )
- def test_per_engine(self, characteristic_fixture):
+ with testing.db.connect() as conn:
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 1,
+ )
- engine, connection = characteristic_fixture
+ @testing.requires.savepoints
+ def test_savepoint_two(self):
+ users = self.tables.users
- pool, dialect, url = engine.pool, engine.dialect, engine.url
+ with testing.db.begin() as conn:
+ conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
- eng = base.Engine(
- pool, dialect, url, execution_options={"foo": "new_value"}
- )
+ savepoint = conn.begin_nested()
+ conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
- conn = eng.connect()
- eq_(eng.dialect.get_foo(conn.connection), "new_value")
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 2,
+ )
+ savepoint.commit()
- def test_per_option_engine(self, characteristic_fixture):
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 2,
+ )
- engine, connection = characteristic_fixture
+ with testing.db.connect() as conn:
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 2,
+ )
- eng = engine.execution_options(foo="new_value")
+ @testing.requires.savepoints
+ def test_savepoint_three(self):
+ users = self.tables.users
- conn = eng.connect()
- eq_(
- eng.dialect.get_foo(conn.connection),
- "new_value",
- )
+ with testing.db.begin() as conn:
+ conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
+ conn.begin_nested()
+ conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
-class ResetFixture(object):
- @testing.fixture()
- def reset_agent(self, testing_engine):
- engine = testing_engine()
- engine.connect().close()
+ conn.rollback()
- harness = mock.Mock(
- do_rollback=mock.Mock(side_effect=testing.db.dialect.do_rollback),
- do_commit=mock.Mock(side_effect=testing.db.dialect.do_commit),
- engine=engine,
- )
- event.listen(engine, "rollback", harness.rollback)
- event.listen(engine, "commit", harness.commit)
- event.listen(engine, "rollback_savepoint", harness.rollback_savepoint)
- event.listen(engine, "rollback_twophase", harness.rollback_twophase)
- event.listen(engine, "commit_twophase", harness.commit_twophase)
+ assert not conn.in_transaction()
- with mock.patch.object(
- engine.dialect, "do_rollback", harness.do_rollback
- ), mock.patch.object(engine.dialect, "do_commit", harness.do_commit):
- yield harness
+ with testing.db.connect() as conn:
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 0,
+ )
- event.remove(engine, "rollback", harness.rollback)
- event.remove(engine, "commit", harness.commit)
- event.remove(engine, "rollback_savepoint", harness.rollback_savepoint)
- event.remove(engine, "rollback_twophase", harness.rollback_twophase)
- event.remove(engine, "commit_twophase", harness.commit_twophase)
+ @testing.requires.savepoints
+ def test_savepoint_four(self):
+ users = self.tables.users
+ with testing.db.begin() as conn:
+ conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
-class ResetAgentTest(ResetFixture, fixtures.TestBase):
- # many of these tests illustate rollback-on-return being redundant
- # vs. what the transaction just did, however this is to ensure
- # even if statements were invoked on the DBAPI connection directly,
- # the state is cleared. options to optimize this with clear
- # docs etc. should be added.
+ sp1 = conn.begin_nested()
+ conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
- __backend__ = True
+ sp2 = conn.begin_nested()
+ conn.execute(users.insert(), {"user_id": 3, "user_name": "name3"})
- def test_begin_close(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
- assert not trans.is_active
- eq_(
- reset_agent.mock_calls,
- [mock.call.rollback(connection), mock.call.do_rollback(mock.ANY)],
- )
+ sp2.rollback()
- def test_begin_rollback(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
- trans.rollback()
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.rollback(connection),
- mock.call.do_rollback(mock.ANY),
- mock.call.do_rollback(mock.ANY),
- ],
- )
+ assert not sp2.is_active
+ assert sp1.is_active
+ assert conn.in_transaction()
- def test_begin_commit(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
- trans.commit()
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.commit(connection),
- mock.call.do_commit(mock.ANY),
- mock.call.do_rollback(mock.ANY),
- ],
- )
+ assert not sp1.is_active
- def test_trans_close(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
- trans.close()
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.rollback(connection),
- mock.call.do_rollback(mock.ANY),
- mock.call.do_rollback(mock.ANY),
- ],
- )
+ with testing.db.connect() as conn:
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 2,
+ )
@testing.requires.savepoints
- def test_begin_nested_trans_close_one(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- t1 = connection.begin()
- t2 = connection.begin_nested()
- assert connection._nested_transaction is t2
- assert connection._transaction is t1
- t2.close()
- assert connection._nested_transaction is None
- assert connection._transaction is t1
- t1.close()
- assert not t1.is_active
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.rollback_savepoint(connection, mock.ANY, mock.ANY),
- mock.call.rollback(connection),
- mock.call.do_rollback(mock.ANY),
- mock.call.do_rollback(mock.ANY),
- ],
- )
+ def test_savepoint_five(self):
+ users = self.tables.users
- @testing.requires.savepoints
- def test_begin_nested_trans_close_two(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- t1 = connection.begin()
- t2 = connection.begin_nested()
- assert connection._nested_transaction is t2
- assert connection._transaction is t1
+ with testing.db.begin() as conn:
+ conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
- t1.close()
+ conn.begin_nested()
+ conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
- assert connection._nested_transaction is None
- assert connection._transaction is None
+ sp2 = conn.begin_nested()
+ conn.execute(users.insert(), {"user_id": 3, "user_name": "name3"})
- assert not t1.is_active
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.rollback(connection),
- mock.call.do_rollback(mock.ANY),
- mock.call.do_rollback(mock.ANY),
- ],
- )
+ sp2.commit()
- @testing.requires.savepoints
- def test_begin_nested_trans_rollback(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- t1 = connection.begin()
- t2 = connection.begin_nested()
- assert connection._nested_transaction is t2
- assert connection._transaction is t1
- t2.close()
- assert connection._nested_transaction is None
- assert connection._transaction is t1
- t1.rollback()
- assert connection._transaction is None
- assert not t2.is_active
- assert not t1.is_active
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.rollback_savepoint(connection, mock.ANY, mock.ANY),
- mock.call.rollback(connection),
- mock.call.do_rollback(mock.ANY),
- mock.call.do_rollback(mock.ANY),
- ],
- )
+ assert conn.in_transaction()
- @testing.requires.savepoints
- def test_begin_nested_close(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- with testing.expect_deprecated_20(
- r"Calling Connection.begin_nested\(\) in "
- r"2.0 style use will return"
- ):
- trans = connection.begin_nested()
- assert not trans.is_active
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.rollback(connection),
- mock.call.do_rollback(mock.ANY),
- ],
- )
+ with testing.db.connect() as conn:
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 3,
+ )
@testing.requires.savepoints
- def test_begin_begin_nested_close(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
- trans2 = connection.begin_nested()
- assert not trans2.is_active
- assert not trans.is_active
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.rollback(connection),
- mock.call.do_rollback(mock.ANY),
- ],
- )
+ def test_savepoint_six(self):
+ users = self.tables.users
- @testing.requires.savepoints
- def test_begin_begin_nested_rollback_commit(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
- trans2 = connection.begin_nested()
- trans2.rollback()
- trans.commit()
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.rollback_savepoint(connection, mock.ANY, mock.ANY),
- mock.call.commit(connection),
- mock.call.do_commit(mock.ANY),
- mock.call.do_rollback(mock.ANY),
- ],
- )
+ with testing.db.begin() as conn:
+ conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
- @testing.requires.savepoints
- def test_begin_begin_nested_rollback_rollback(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
- trans2 = connection.begin_nested()
- trans2.rollback()
- trans.rollback()
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.rollback_savepoint(connection, mock.ANY, mock.ANY),
- mock.call.rollback(connection),
- mock.call.do_rollback(mock.ANY),
- mock.call.do_rollback(mock.ANY),
- ],
- )
+ sp1 = conn.begin_nested()
+ conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
- @testing.requires.two_phase_transactions
- def test_reset_via_agent_begin_twophase(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin_twophase() # noqa
+ assert conn._nested_transaction is sp1
- # pg8000 rolls back via the rollback_twophase
- eq_(
- reset_agent.mock_calls[0],
- mock.call.rollback_twophase(connection, mock.ANY, mock.ANY),
- )
+ sp2 = conn.begin_nested()
+ conn.execute(users.insert(), {"user_id": 3, "user_name": "name3"})
- @testing.requires.two_phase_transactions
- def test_reset_via_agent_begin_twophase_commit(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin_twophase()
- trans.commit()
- eq_(
- reset_agent.mock_calls[0],
- mock.call.commit_twophase(connection, mock.ANY, mock.ANY),
- )
+ assert conn._nested_transaction is sp2
- eq_(reset_agent.mock_calls[-1], mock.call.do_rollback(mock.ANY))
+ sp2.commit()
- @testing.requires.two_phase_transactions
- def test_reset_via_agent_begin_twophase_rollback(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin_twophase()
- trans.rollback()
- eq_(
- reset_agent.mock_calls[0:2],
- [
- mock.call.rollback_twophase(connection, mock.ANY, mock.ANY),
- mock.call.do_rollback(mock.ANY),
- ],
- )
+ assert conn._nested_transaction is sp1
- eq_(reset_agent.mock_calls[-1], mock.call.do_rollback(mock.ANY))
+ sp1.rollback()
+ assert conn._nested_transaction is None
-class FutureResetAgentTest(
- ResetFixture, fixtures.FutureEngineMixin, fixtures.TestBase
-):
+ assert conn.in_transaction()
- __backend__ = True
+ with testing.db.connect() as conn:
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 1,
+ )
- def test_reset_agent_no_conn_transaction(self, reset_agent):
- with reset_agent.engine.connect():
- pass
+ @testing.requires.savepoints
+ def test_savepoint_seven(self):
+ users = self.tables.users
- eq_(reset_agent.mock_calls, [mock.call.do_rollback(mock.ANY)])
+ conn = testing.db.connect()
+ trans = conn.begin()
+ conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
- def test_begin_close(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
-
- assert not trans.is_active
- eq_(
- reset_agent.mock_calls,
- [mock.call.rollback(connection), mock.call.do_rollback(mock.ANY)],
- )
-
- def test_begin_rollback(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
- trans.rollback()
- assert not trans.is_active
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.rollback(connection),
- mock.call.do_rollback(mock.ANY),
- mock.call.do_rollback(mock.ANY),
- ],
- )
-
- def test_begin_commit(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
- trans.commit()
- assert not trans.is_active
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.commit(connection),
- mock.call.do_commit(mock.ANY),
- mock.call.do_rollback(mock.ANY),
- ],
- )
-
- @testing.requires.savepoints
- def test_begin_nested_close(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin_nested()
- # it's a savepoint, but root made sure it closed
- assert not trans.is_active
- eq_(
- reset_agent.mock_calls,
- [mock.call.rollback(connection), mock.call.do_rollback(mock.ANY)],
- )
+ sp1 = conn.begin_nested()
+ conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
- @testing.requires.savepoints
- def test_begin_begin_nested_close(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
- trans2 = connection.begin_nested()
- assert not trans2.is_active
- assert not trans.is_active
- eq_(
- reset_agent.mock_calls,
- [mock.call.rollback(connection), mock.call.do_rollback(mock.ANY)],
- )
+ sp2 = conn.begin_nested()
+ conn.execute(users.insert(), {"user_id": 3, "user_name": "name3"})
- @testing.requires.savepoints
- def test_begin_begin_nested_rollback_commit(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
- trans2 = connection.begin_nested()
- trans2.rollback() # this is not a connection level event
- trans.commit()
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.rollback_savepoint(connection, mock.ANY, None),
- mock.call.commit(connection),
- mock.call.do_commit(mock.ANY),
- mock.call.do_rollback(mock.ANY),
- ],
- )
+ assert conn.in_transaction()
- @testing.requires.savepoints
- def test_begin_begin_nested_rollback_rollback(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin()
- trans2 = connection.begin_nested()
- trans2.rollback()
- trans.rollback()
- eq_(
- reset_agent.mock_calls,
- [
- mock.call.rollback_savepoint(connection, mock.ANY, mock.ANY),
- mock.call.rollback(connection),
- mock.call.do_rollback(mock.ANY),
- mock.call.do_rollback(mock.ANY), # this is the reset on return
- ],
- )
+ trans.close()
- @testing.requires.two_phase_transactions
- def test_reset_via_agent_begin_twophase(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin_twophase()
+ assert not sp1.is_active
+ assert not sp2.is_active
assert not trans.is_active
- # pg8000 uses the rollback_twophase as the full rollback.
- eq_(
- reset_agent.mock_calls[0],
- mock.call.rollback_twophase(connection, mock.ANY, False),
- )
-
- @testing.requires.two_phase_transactions
- def test_reset_via_agent_begin_twophase_commit(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin_twophase()
- trans.commit()
+ assert conn._transaction is None
+ assert conn._nested_transaction is None
- # again pg8000 vs. other PG drivers have different API
- eq_(
- reset_agent.mock_calls[0],
- mock.call.commit_twophase(connection, mock.ANY, False),
- )
+ with testing.db.connect() as conn:
+ eq_(
+ conn.scalar(select(func.count(1)).select_from(users)),
+ 0,
+ )
- eq_(reset_agent.mock_calls[-1], mock.call.do_rollback(mock.ANY))
- @testing.requires.two_phase_transactions
- def test_reset_via_agent_begin_twophase_rollback(self, reset_agent):
- with reset_agent.engine.connect() as connection:
- trans = connection.begin_twophase()
- trans.rollback()
+class AutoRollbackTest(fixtures.TestBase):
+ __backend__ = True
- # pg8000 vs. the other postgresql drivers have different
- # twophase implementations. the base postgresql driver emits
- # "ROLLBACK PREPARED" explicitly then calls do_rollback().
- # pg8000 has a dedicated API method. so we get either one or
- # two do_rollback() at the end, just need at least one.
- eq_(
- reset_agent.mock_calls[0:2],
- [
- mock.call.rollback_twophase(connection, mock.ANY, False),
- mock.call.do_rollback(mock.ANY),
- # mock.call.do_rollback(mock.ANY),
- ],
- )
- eq_(reset_agent.mock_calls[-1], mock.call.do_rollback(mock.ANY))
+ @classmethod
+ def setup_test_class(cls):
+ global metadata
+ metadata = MetaData()
+ @classmethod
+ def teardown_test_class(cls):
+ metadata.drop_all(testing.db)
-class FutureTransactionTest(fixtures.FutureEngineMixin, fixtures.TablesTest):
- __backend__ = True
+ def test_rollback_deadlock(self):
+ """test that returning connections to the pool clears any object
+ locks."""
- @classmethod
- def define_tables(cls, metadata):
- Table(
- "users",
- metadata,
- Column("user_id", INT, primary_key=True, autoincrement=False),
- Column("user_name", VARCHAR(20)),
- test_needs_acid=True,
- )
- Table(
- "users_autoinc",
+ conn1 = testing.db.connect()
+ conn2 = testing.db.connect()
+ users = Table(
+ "deadlock_users",
metadata,
- Column(
- "user_id", INT, primary_key=True, test_needs_autoincrement=True
- ),
+ Column("user_id", INT, primary_key=True),
Column("user_name", VARCHAR(20)),
test_needs_acid=True,
)
+ with conn1.begin():
+ users.create(conn1)
+ conn1.exec_driver_sql("select * from deadlock_users")
+ conn1.close()
- @testing.fixture
- def local_connection(self):
- with testing.db.connect() as conn:
- yield conn
+ # without auto-rollback in the connection pool's return() logic,
+ # this deadlocks in PostgreSQL, because conn1 is returned to the
+ # pool but still has a lock on "deadlock_users". comment out the
+ # rollback in pool/ConnectionFairy._close() to see!
- def test_interrupt_ctxmanager_engine(self, trans_ctx_manager_fixture):
- fn = trans_ctx_manager_fixture
+ with conn2.begin():
+ users.drop(conn2)
+ conn2.close()
- fn(testing.db, trans_on_subject=False, execute_on_subject=False)
- @testing.combinations((True,), (False,), argnames="trans_on_subject")
- def test_interrupt_ctxmanager_connection(
- self, trans_ctx_manager_fixture, trans_on_subject
- ):
- fn = trans_ctx_manager_fixture
+class IsolationLevelTest(fixtures.TestBase):
+ __requires__ = (
+ "isolation_level",
+ "ad_hoc_engines",
+ )
+ __backend__ = True
- with testing.db.connect() as conn:
- fn(
- conn,
- trans_on_subject=trans_on_subject,
- execute_on_subject=True,
- )
+ def _default_isolation_level(self):
+ return testing.requires.get_isolation_levels(testing.config)["default"]
- def test_autobegin_rollback(self):
- users = self.tables.users
- with testing.db.connect() as conn:
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
- conn.rollback()
+ def _non_default_isolation_level(self):
+ levels = testing.requires.get_isolation_levels(testing.config)
- eq_(conn.scalar(select(func.count(1)).select_from(users)), 0)
+ default = levels["default"]
+ supported = levels["supported"]
- @testing.requires.autocommit
- def test_autocommit_isolation_level(self):
- users = self.tables.users
+ s = set(supported).difference(["AUTOCOMMIT", default])
+ if s:
+ return s.pop()
+ else:
+ assert False, "no non-default isolation level available"
- with testing.db.connect().execution_options(
- isolation_level="AUTOCOMMIT"
- ) as conn:
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
- conn.rollback()
+ @testing.requires.legacy_isolation_level
+ def test_engine_param_stays(self):
- with testing.db.connect() as conn:
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 1,
- )
+ eng = testing_engine()
+ isolation_level = eng.dialect.get_isolation_level(
+ eng.connect().connection
+ )
+ level = self._non_default_isolation_level()
- @testing.requires.autocommit
- def test_no_autocommit_w_begin(self):
+ ne_(isolation_level, level)
- with testing.db.begin() as conn:
- assert_raises_message(
- exc.InvalidRequestError,
- r"This connection has already initialized a SQLAlchemy "
- r"Transaction\(\) object via begin\(\) or autobegin; "
- r"isolation_level may not be altered unless rollback\(\) or "
- r"commit\(\) is called first.",
- conn.execution_options,
- isolation_level="AUTOCOMMIT",
- )
-
- @testing.requires.autocommit
- def test_no_autocommit_w_autobegin(self):
-
- with testing.db.connect() as conn:
- conn.execute(select(1))
-
- assert_raises_message(
- exc.InvalidRequestError,
- r"This connection has already initialized a SQLAlchemy "
- r"Transaction\(\) object via begin\(\) or autobegin; "
- r"isolation_level may not be altered unless rollback\(\) or "
- r"commit\(\) is called first.",
- conn.execution_options,
- isolation_level="AUTOCOMMIT",
- )
-
- conn.rollback()
-
- conn.execution_options(isolation_level="AUTOCOMMIT")
-
- def test_autobegin_commit(self):
- users = self.tables.users
-
- with testing.db.connect() as conn:
+ eng = testing_engine(options=dict(isolation_level=level))
+ eq_(eng.dialect.get_isolation_level(eng.connect().connection), level)
- assert not conn.in_transaction()
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
+ # check that it stays
+ conn = eng.connect()
+ eq_(eng.dialect.get_isolation_level(conn.connection), level)
+ conn.close()
- assert conn.in_transaction()
- conn.commit()
+ conn = eng.connect()
+ eq_(eng.dialect.get_isolation_level(conn.connection), level)
+ conn.close()
- assert not conn.in_transaction()
+ def test_default_level(self):
+ eng = testing_engine(options=dict())
+ isolation_level = eng.dialect.get_isolation_level(
+ eng.connect().connection
+ )
+ eq_(isolation_level, self._default_isolation_level())
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 1,
- )
+ def test_reset_level(self):
+ eng = testing_engine(options=dict())
+ conn = eng.connect()
+ eq_(
+ eng.dialect.get_isolation_level(conn.connection),
+ self._default_isolation_level(),
+ )
- conn.execute(users.insert(), {"user_id": 2, "user_name": "name 2"})
+ eng.dialect.set_isolation_level(
+ conn.connection, self._non_default_isolation_level()
+ )
+ eq_(
+ eng.dialect.get_isolation_level(conn.connection),
+ self._non_default_isolation_level(),
+ )
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 2,
- )
+ eng.dialect.reset_isolation_level(conn.connection)
+ eq_(
+ eng.dialect.get_isolation_level(conn.connection),
+ self._default_isolation_level(),
+ )
- assert conn.in_transaction()
- conn.rollback()
- assert not conn.in_transaction()
+ conn.close()
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 1,
- )
+ @testing.requires.legacy_isolation_level
+ def test_reset_level_with_setting(self):
+ eng = testing_engine(
+ options=dict(isolation_level=self._non_default_isolation_level())
+ )
+ conn = eng.connect()
+ eq_(
+ eng.dialect.get_isolation_level(conn.connection),
+ self._non_default_isolation_level(),
+ )
+ eng.dialect.set_isolation_level(
+ conn.connection, self._default_isolation_level()
+ )
+ eq_(
+ eng.dialect.get_isolation_level(conn.connection),
+ self._default_isolation_level(),
+ )
+ eng.dialect.reset_isolation_level(conn.connection)
+ eq_(
+ eng.dialect.get_isolation_level(conn.connection),
+ self._non_default_isolation_level(),
+ )
+ conn.close()
- def test_rollback_on_close(self):
- canary = mock.Mock()
- with testing.db.connect() as conn:
- event.listen(conn, "rollback", canary)
- conn.execute(select(1))
- assert conn.in_transaction()
+ @testing.requires.legacy_isolation_level
+ def test_invalid_level_engine_param(self):
+ eng = testing_engine(options=dict(isolation_level="FOO"))
+ assert_raises_message(
+ exc.ArgumentError,
+ "Invalid value '%s' for isolation_level. "
+ "Valid isolation levels for %s are %s"
+ % (
+ "FOO",
+ eng.dialect.name,
+ ", ".join(eng.dialect._isolation_lookup),
+ ),
+ eng.connect,
+ )
- eq_(canary.mock_calls, [mock.call(conn)])
+ # TODO: all the dialects seem to be manually raising ArgumentError
+ # individually within their set_isolation_level() methods, when this
+ # should be a default dialect feature so that
+ # error messaging etc. is consistent, including that it works for 3rd
+ # party dialects.
+ # TODO: barring that, at least implement this for the Oracle dialect
+ @testing.fails_on(
+ "oracle",
+ "cx_oracle dialect doesn't have argument error here, "
+ "raises it via the DB rejecting it",
+ )
+ def test_invalid_level_execution_option(self):
+ eng = testing_engine(
+ options=dict(execution_options=dict(isolation_level="FOO"))
+ )
+ assert_raises_message(
+ exc.ArgumentError,
+ "Invalid value '%s' for isolation_level. "
+ "Valid isolation levels for %s are %s"
+ % (
+ "FOO",
+ eng.dialect.name,
+ ", ".join(eng.dialect._isolation_lookup),
+ ),
+ eng.connect,
+ )
- def test_no_on_close_no_transaction(self):
- canary = mock.Mock()
- with testing.db.connect() as conn:
- event.listen(conn, "rollback", canary)
- conn.execute(select(1))
- conn.rollback()
- assert not conn.in_transaction()
+ def test_connection_invalidated(self):
+ eng = testing_engine()
+ conn = eng.connect()
+ c2 = conn.execution_options(
+ isolation_level=self._non_default_isolation_level()
+ )
+ c2.invalidate()
+ c2.connection
- eq_(canary.mock_calls, [mock.call(conn)])
+ # TODO: do we want to rebuild the previous isolation?
+ # for now, this is current behavior so we will leave it.
+ eq_(c2.get_isolation_level(), self._default_isolation_level())
- def test_rollback_on_exception(self):
- canary = mock.Mock()
- try:
- with testing.db.connect() as conn:
- event.listen(conn, "rollback", canary)
- conn.execute(select(1))
- assert conn.in_transaction()
- raise Exception("some error")
- assert False
- except:
- pass
+ def test_per_connection(self):
+ from sqlalchemy.pool import QueuePool
- eq_(canary.mock_calls, [mock.call(conn)])
+ eng = testing_engine(
+ options=dict(poolclass=QueuePool, pool_size=2, max_overflow=0)
+ )
- def test_rollback_on_exception_if_no_trans(self):
- canary = mock.Mock()
- try:
- with testing.db.connect() as conn:
- event.listen(conn, "rollback", canary)
- assert not conn.in_transaction()
- raise Exception("some error")
- assert False
- except:
- pass
+ c1 = eng.connect()
+ c1 = c1.execution_options(
+ isolation_level=self._non_default_isolation_level()
+ )
+ c2 = eng.connect()
+ eq_(
+ eng.dialect.get_isolation_level(c1.connection),
+ self._non_default_isolation_level(),
+ )
+ eq_(
+ eng.dialect.get_isolation_level(c2.connection),
+ self._default_isolation_level(),
+ )
+ c1.close()
+ c2.close()
+ c3 = eng.connect()
+ eq_(
+ eng.dialect.get_isolation_level(c3.connection),
+ self._default_isolation_level(),
+ )
+ c4 = eng.connect()
+ eq_(
+ eng.dialect.get_isolation_level(c4.connection),
+ self._default_isolation_level(),
+ )
- eq_(canary.mock_calls, [])
+ c3.close()
+ c4.close()
- def test_commit_no_begin(self):
- with testing.db.connect() as conn:
- assert not conn.in_transaction()
- conn.commit()
+ def test_exception_in_transaction(self):
+ eng = testing_engine()
+ c1 = eng.connect()
+ with expect_raises_message(
+ exc.InvalidRequestError,
+ r"This connection has already initialized a SQLAlchemy "
+ r"Transaction\(\) object via begin\(\) or autobegin; "
+ r"isolation_level may not be altered unless rollback\(\) or "
+ r"commit\(\) is called first.",
+ ):
+ with c1.begin():
+ c1 = c1.execution_options(
+ isolation_level=self._non_default_isolation_level()
+ )
- @testing.requires.independent_connections
- def test_commit_inactive(self):
- with testing.db.connect() as conn:
- conn.begin()
- conn.invalidate()
+ # was never set, so we are on original value
+ eq_(
+ eng.dialect.get_isolation_level(c1.connection),
+ self._default_isolation_level(),
+ )
- assert_raises_message(
- exc.InvalidRequestError, "Can't reconnect until", conn.commit
- )
+ def test_per_statement_bzzt(self):
+ assert_raises_message(
+ exc.ArgumentError,
+ r"'isolation_level' execution option may only be specified "
+ r"on Connection.execution_options\(\), or "
+ r"per-engine using the isolation_level "
+ r"argument to create_engine\(\).",
+ select(1).execution_options,
+ isolation_level=self._non_default_isolation_level(),
+ )
- @testing.requires.independent_connections
- def test_rollback_inactive(self):
- users = self.tables.users
- with testing.db.connect() as conn:
+ def test_per_engine(self):
+ # new in 0.9
+ eng = testing_engine(
+ testing.db.url,
+ options=dict(
+ execution_options={
+ "isolation_level": self._non_default_isolation_level()
+ }
+ ),
+ )
+ conn = eng.connect()
+ eq_(
+ eng.dialect.get_isolation_level(conn.connection),
+ self._non_default_isolation_level(),
+ )
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
- conn.commit()
+ def test_per_option_engine(self):
+ eng = testing_engine(testing.db.url).execution_options(
+ isolation_level=self._non_default_isolation_level()
+ )
- conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
+ conn = eng.connect()
+ eq_(
+ eng.dialect.get_isolation_level(conn.connection),
+ self._non_default_isolation_level(),
+ )
- conn.invalidate()
+ def test_isolation_level_accessors_connection_default(self):
+ eng = testing_engine(testing.db.url)
+ with eng.connect() as conn:
+ eq_(conn.default_isolation_level, self._default_isolation_level())
+ with eng.connect() as conn:
+ eq_(conn.get_isolation_level(), self._default_isolation_level())
- assert_raises_message(
- exc.PendingRollbackError,
- "Can't reconnect",
- conn.execute,
- select(1),
+ def test_isolation_level_accessors_connection_option_modified(self):
+ eng = testing_engine(testing.db.url)
+ with eng.connect() as conn:
+ c2 = conn.execution_options(
+ isolation_level=self._non_default_isolation_level()
)
-
- conn.rollback()
+ eq_(conn.default_isolation_level, self._default_isolation_level())
eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 1,
+ conn.get_isolation_level(), self._non_default_isolation_level()
)
+ eq_(c2.get_isolation_level(), self._non_default_isolation_level())
- def test_rollback_no_begin(self):
- with testing.db.connect() as conn:
- assert not conn.in_transaction()
- conn.rollback()
-
- def test_rollback_end_ctx_manager(self):
- with testing.db.begin() as conn:
- assert conn.in_transaction()
- conn.rollback()
- assert not conn.in_transaction()
-
- def test_rollback_end_ctx_manager_autobegin(self, local_connection):
- m1 = mock.Mock()
-
- event.listen(local_connection, "rollback", m1.rollback)
- event.listen(local_connection, "commit", m1.commit)
-
- with local_connection.begin() as trans:
- assert local_connection.in_transaction()
- trans.rollback()
- assert not local_connection.in_transaction()
-
- # previously, would be subject to autocommit.
- # now it raises
- with expect_raises_message(
- exc.InvalidRequestError,
- "Can't operate on closed transaction inside context manager. "
- "Please complete the context manager before emitting "
- "further commands.",
- ):
- local_connection.execute(select(1))
-
- assert not local_connection.in_transaction()
-
- @testing.combinations((True,), (False,), argnames="roll_back_in_block")
- def test_ctxmanager_rolls_back(self, local_connection, roll_back_in_block):
- m1 = mock.Mock()
-
- event.listen(local_connection, "rollback", m1.rollback)
- event.listen(local_connection, "commit", m1.commit)
-
- with expect_raises_message(Exception, "test"):
- with local_connection.begin() as trans:
- if roll_back_in_block:
- trans.rollback()
-
- if 1 == 1:
- raise Exception("test")
-
- assert not trans.is_active
- assert not local_connection.in_transaction()
- assert trans._deactivated_from_connection
-
- eq_(m1.mock_calls, [mock.call.rollback(local_connection)])
- @testing.requires.savepoints
- def test_ctxmanager_autobegins_real_trans_from_nested(
- self, local_connection
- ):
- m1 = mock.Mock()
-
- event.listen(
- local_connection, "rollback_savepoint", m1.rollback_savepoint
- )
- event.listen(
- local_connection, "release_savepoint", m1.release_savepoint
- )
- event.listen(local_connection, "rollback", m1.rollback)
- event.listen(local_connection, "commit", m1.commit)
- event.listen(local_connection, "begin", m1.begin)
- event.listen(local_connection, "savepoint", m1.savepoint)
+class ConnectionCharacteristicTest(fixtures.TestBase):
+ @testing.fixture
+ def characteristic_fixture(self):
+ class FooCharacteristic(characteristics.ConnectionCharacteristic):
+ transactional = True
- with local_connection.begin_nested() as nested_trans:
- pass
+ def reset_characteristic(self, dialect, dbapi_conn):
- assert not nested_trans.is_active
- assert nested_trans._deactivated_from_connection
- # legacy mode, no savepoint at all
- eq_(
- m1.mock_calls,
- [
- mock.call.begin(local_connection),
- mock.call.savepoint(local_connection, mock.ANY),
- mock.call.release_savepoint(
- local_connection, mock.ANY, mock.ANY
- ),
- ],
- )
+ dialect.reset_foo(dbapi_conn)
- def test_explicit_begin(self):
- users = self.tables.users
+ def set_characteristic(self, dialect, dbapi_conn, value):
- with testing.db.connect() as conn:
- assert not conn.in_transaction()
- conn.begin()
- assert conn.in_transaction()
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
- conn.commit()
+ dialect.set_foo(dbapi_conn, value)
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 1,
+ def get_characteristic(self, dialect, dbapi_conn):
+ return dialect.get_foo(dbapi_conn)
+
+ class FooDialect(default.DefaultDialect):
+ connection_characteristics = util.immutabledict(
+ {"foo": FooCharacteristic()}
)
- def test_no_double_begin(self):
- with testing.db.connect() as conn:
- conn.begin()
+ def reset_foo(self, dbapi_conn):
+ dbapi_conn.foo = "original_value"
- assert_raises_message(
- exc.InvalidRequestError,
- r"This connection has already initialized a SQLAlchemy "
- r"Transaction\(\) object via begin\(\) or autobegin; can't "
- r"call begin\(\) here unless rollback\(\) or commit\(\) is "
- r"called first.",
- conn.begin,
- )
+ def set_foo(self, dbapi_conn, value):
+ dbapi_conn.foo = value
- def test_no_autocommit(self):
- users = self.tables.users
+ def get_foo(self, dbapi_conn):
+ return dbapi_conn.foo
- with testing.db.connect() as conn:
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
+ connection = mock.Mock()
- with testing.db.connect() as conn:
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 0,
- )
+ def creator():
+ connection.foo = "original_value"
+ return connection
- def test_begin_block(self):
- users = self.tables.users
+ pool = _pool.SingletonThreadPool(creator=creator)
+ u = url.make_url("foo://")
+ return base.Engine(pool, FooDialect(), u), connection
- with testing.db.begin() as conn:
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
+ def test_engine_param_stays(self, characteristic_fixture):
- with testing.db.connect() as conn:
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 1,
- )
+ engine, connection = characteristic_fixture
- @testing.requires.savepoints
- def test_savepoint_one(self):
- users = self.tables.users
+ foo_level = engine.dialect.get_foo(engine.connect().connection)
- with testing.db.begin() as conn:
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
+ new_level = "new_level"
- savepoint = conn.begin_nested()
- conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
+ ne_(foo_level, new_level)
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 2,
- )
- savepoint.rollback()
+ eng = engine.execution_options(foo=new_level)
+ eq_(eng.dialect.get_foo(eng.connect().connection), new_level)
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 1,
- )
+ # check that it stays
+ conn = eng.connect()
+ eq_(eng.dialect.get_foo(conn.connection), new_level)
+ conn.close()
- with testing.db.connect() as conn:
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 1,
- )
+ conn = eng.connect()
+ eq_(eng.dialect.get_foo(conn.connection), new_level)
+ conn.close()
- @testing.requires.savepoints
- def test_savepoint_two(self):
- users = self.tables.users
+ def test_default_level(self, characteristic_fixture):
+ engine, connection = characteristic_fixture
- with testing.db.begin() as conn:
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
+ eq_(
+ engine.dialect.get_foo(engine.connect().connection),
+ "original_value",
+ )
- savepoint = conn.begin_nested()
- conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
+ def test_connection_invalidated(self, characteristic_fixture):
+ engine, connection = characteristic_fixture
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 2,
- )
- savepoint.commit()
+ conn = engine.connect()
+ c2 = conn.execution_options(foo="new_value")
+ eq_(connection.foo, "new_value")
+ c2.invalidate()
+ c2.connection
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 2,
- )
+ eq_(connection.foo, "original_value")
- with testing.db.connect() as conn:
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 2,
- )
+ def test_exception_in_transaction(self, characteristic_fixture):
+ # this was a warning in 1.x. it appears we did not test the
+ # 2.0 error case in 1.4
- @testing.requires.savepoints
- def test_savepoint_three(self):
- users = self.tables.users
+ engine, connection = characteristic_fixture
- with testing.db.begin() as conn:
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
+ c1 = engine.connect()
+ with expect_raises_message(
+ exc.InvalidRequestError,
+ r"This connection has already initialized a SQLAlchemy "
+ r"Transaction\(\) object via begin\(\) or autobegin; "
+ r"foo may not be altered unless rollback\(\) or "
+ r"commit\(\) is called first.",
+ ):
+ with c1.begin():
+ c1 = c1.execution_options(foo="new_foo")
- conn.begin_nested()
- conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
+ # was never set, so we are on original value
+ eq_(engine.dialect.get_foo(c1.connection), "original_value")
- conn.rollback()
+ @testing.fails("no error is raised yet here.")
+ def test_per_statement_bzzt(self, characteristic_fixture):
+ engine, connection = characteristic_fixture
- assert not conn.in_transaction()
+ # this would need some on-execute mechanism to look inside of
+ # the characteristics list. unfortunately this would
+ # add some latency.
- with testing.db.connect() as conn:
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 0,
- )
+ assert_raises_message(
+ exc.ArgumentError,
+ r"'foo' execution option may only be specified "
+ r"on Connection.execution_options\(\), or "
+ r"per-engine using the isolation_level "
+ r"argument to create_engine\(\).",
+ connection.execute,
+ select([1]).execution_options(foo="bar"),
+ )
- @testing.requires.savepoints
- def test_savepoint_four(self):
- users = self.tables.users
+ def test_per_engine(self, characteristic_fixture):
- with testing.db.begin() as conn:
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
+ engine, connection = characteristic_fixture
- sp1 = conn.begin_nested()
- conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
+ pool, dialect, url = engine.pool, engine.dialect, engine.url
- sp2 = conn.begin_nested()
- conn.execute(users.insert(), {"user_id": 3, "user_name": "name3"})
+ eng = base.Engine(
+ pool, dialect, url, execution_options={"foo": "new_value"}
+ )
- sp2.rollback()
+ conn = eng.connect()
+ eq_(eng.dialect.get_foo(conn.connection), "new_value")
- assert not sp2.is_active
- assert sp1.is_active
- assert conn.in_transaction()
+ def test_per_option_engine(self, characteristic_fixture):
- assert not sp1.is_active
+ engine, connection = characteristic_fixture
- with testing.db.connect() as conn:
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 2,
- )
+ eng = engine.execution_options(foo="new_value")
- @testing.requires.savepoints
- def test_savepoint_five(self):
- users = self.tables.users
+ conn = eng.connect()
+ eq_(
+ eng.dialect.get_foo(conn.connection),
+ "new_value",
+ )
- with testing.db.begin() as conn:
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
- conn.begin_nested()
- conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
+class ResetFixture(object):
+ @testing.fixture()
+ def reset_agent(self, testing_engine):
+ engine = testing_engine()
+ engine.connect().close()
- sp2 = conn.begin_nested()
- conn.execute(users.insert(), {"user_id": 3, "user_name": "name3"})
+ harness = mock.Mock(
+ do_rollback=mock.Mock(side_effect=testing.db.dialect.do_rollback),
+ do_commit=mock.Mock(side_effect=testing.db.dialect.do_commit),
+ engine=engine,
+ )
+ event.listen(engine, "rollback", harness.rollback)
+ event.listen(engine, "commit", harness.commit)
+ event.listen(engine, "rollback_savepoint", harness.rollback_savepoint)
+ event.listen(engine, "rollback_twophase", harness.rollback_twophase)
+ event.listen(engine, "commit_twophase", harness.commit_twophase)
+
+ with mock.patch.object(
+ engine.dialect, "do_rollback", harness.do_rollback
+ ), mock.patch.object(engine.dialect, "do_commit", harness.do_commit):
+ yield harness
+
+ event.remove(engine, "rollback", harness.rollback)
+ event.remove(engine, "commit", harness.commit)
+ event.remove(engine, "rollback_savepoint", harness.rollback_savepoint)
+ event.remove(engine, "rollback_twophase", harness.rollback_twophase)
+ event.remove(engine, "commit_twophase", harness.commit_twophase)
+
+
+class ResetAgentTest(ResetFixture, fixtures.TestBase):
+ # many of these tests illustrate rollback-on-return being redundant
+ # vs. what the transaction just did; however, this is to ensure
+ # that even if statements were invoked on the DBAPI connection directly,
+ # the state is cleared. Options to optimize this with clear
+ # docs etc. should be added.
- sp2.commit()
+ __backend__ = True
- assert conn.in_transaction()
+ def test_begin_close(self, reset_agent):
+ with reset_agent.engine.connect() as connection:
+ trans = connection.begin()
- with testing.db.connect() as conn:
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 3,
- )
+ assert not trans.is_active
+ eq_(
+ reset_agent.mock_calls,
+ [mock.call.rollback(connection), mock.call.do_rollback(mock.ANY)],
+ )
- @testing.requires.savepoints
- def test_savepoint_six(self):
- users = self.tables.users
+ def test_begin_rollback(self, reset_agent):
+ with reset_agent.engine.connect() as connection:
+ trans = connection.begin()
+ trans.rollback()
+ assert not trans.is_active
+ eq_(
+ reset_agent.mock_calls,
+ [
+ mock.call.rollback(connection),
+ mock.call.do_rollback(mock.ANY),
+ mock.call.do_rollback(mock.ANY),
+ ],
+ )
- with testing.db.begin() as conn:
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
+ def test_begin_commit(self, reset_agent):
+ with reset_agent.engine.connect() as connection:
+ trans = connection.begin()
+ trans.commit()
+ assert not trans.is_active
+ eq_(
+ reset_agent.mock_calls,
+ [
+ mock.call.commit(connection),
+ mock.call.do_commit(mock.ANY),
+ mock.call.do_rollback(mock.ANY),
+ ],
+ )
- sp1 = conn.begin_nested()
- conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
+ def test_trans_close(self, reset_agent):
+ with reset_agent.engine.connect() as connection:
+ trans = connection.begin()
+ trans.close()
+ eq_(
+ reset_agent.mock_calls,
+ [
+ mock.call.rollback(connection),
+ mock.call.do_rollback(mock.ANY),
+ mock.call.do_rollback(mock.ANY),
+ ],
+ )
- assert conn._nested_transaction is sp1
+ @testing.requires.savepoints
+ def test_begin_nested_trans_close_one(self, reset_agent):
+ with reset_agent.engine.connect() as connection:
+ t1 = connection.begin()
+ t2 = connection.begin_nested()
+ assert connection._nested_transaction is t2
+ assert connection._transaction is t1
+ t2.close()
+ assert connection._nested_transaction is None
+ assert connection._transaction is t1
+ t1.close()
+ assert not t1.is_active
+ eq_(
+ reset_agent.mock_calls,
+ [
+ mock.call.rollback_savepoint(connection, mock.ANY, mock.ANY),
+ mock.call.rollback(connection),
+ mock.call.do_rollback(mock.ANY),
+ mock.call.do_rollback(mock.ANY),
+ ],
+ )
- sp2 = conn.begin_nested()
- conn.execute(users.insert(), {"user_id": 3, "user_name": "name3"})
+ @testing.requires.savepoints
+ def test_begin_nested_trans_close_two(self, reset_agent):
+ with reset_agent.engine.connect() as connection:
+ t1 = connection.begin()
+ t2 = connection.begin_nested()
+ assert connection._nested_transaction is t2
+ assert connection._transaction is t1
- assert conn._nested_transaction is sp2
+ t1.close()
- sp2.commit()
+ assert connection._nested_transaction is None
+ assert connection._transaction is None
- assert conn._nested_transaction is sp1
+ assert not t1.is_active
+ eq_(
+ reset_agent.mock_calls,
+ [
+ mock.call.rollback(connection),
+ mock.call.do_rollback(mock.ANY),
+ mock.call.do_rollback(mock.ANY),
+ ],
+ )
- sp1.rollback()
+ @testing.requires.savepoints
+ def test_begin_nested_trans_rollback(self, reset_agent):
+ with reset_agent.engine.connect() as connection:
+ t1 = connection.begin()
+ t2 = connection.begin_nested()
+ assert connection._nested_transaction is t2
+ assert connection._transaction is t1
+ t2.close()
+ assert connection._nested_transaction is None
+ assert connection._transaction is t1
+ t1.rollback()
+ assert connection._transaction is None
+ assert not t2.is_active
+ assert not t1.is_active
+ eq_(
+ reset_agent.mock_calls,
+ [
+ mock.call.rollback_savepoint(connection, mock.ANY, mock.ANY),
+ mock.call.rollback(connection),
+ mock.call.do_rollback(mock.ANY),
+ mock.call.do_rollback(mock.ANY),
+ ],
+ )
- assert conn._nested_transaction is None
+ @testing.requires.savepoints
+ def test_begin_nested_close(self, reset_agent):
+ with reset_agent.engine.connect() as connection:
+ trans = connection.begin_nested()
+ # it's a savepoint, but root made sure it closed
+ assert not trans.is_active
+ eq_(
+ reset_agent.mock_calls,
+ [mock.call.rollback(connection), mock.call.do_rollback(mock.ANY)],
+ )
- assert conn.in_transaction()
+ @testing.requires.savepoints
+ def test_begin_begin_nested_close(self, reset_agent):
+ with reset_agent.engine.connect() as connection:
+ trans = connection.begin()
+ trans2 = connection.begin_nested()
+ assert not trans2.is_active
+ assert not trans.is_active
+ eq_(
+ reset_agent.mock_calls,
+ [mock.call.rollback(connection), mock.call.do_rollback(mock.ANY)],
+ )
- with testing.db.connect() as conn:
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 1,
- )
+ @testing.requires.savepoints
+ def test_begin_begin_nested_rollback_commit(self, reset_agent):
+ with reset_agent.engine.connect() as connection:
+ trans = connection.begin()
+ trans2 = connection.begin_nested()
+ trans2.rollback() # this is not a connection level event
+ trans.commit()
+ eq_(
+ reset_agent.mock_calls,
+ [
+ mock.call.rollback_savepoint(connection, mock.ANY, None),
+ mock.call.commit(connection),
+ mock.call.do_commit(mock.ANY),
+ mock.call.do_rollback(mock.ANY),
+ ],
+ )
@testing.requires.savepoints
- def test_savepoint_seven(self):
- users = self.tables.users
+ def test_begin_begin_nested_rollback_rollback(self, reset_agent):
+ with reset_agent.engine.connect() as connection:
+ trans = connection.begin()
+ trans2 = connection.begin_nested()
+ trans2.rollback()
+ trans.rollback()
+ eq_(
+ reset_agent.mock_calls,
+ [
+ mock.call.rollback_savepoint(connection, mock.ANY, mock.ANY),
+ mock.call.rollback(connection),
+ mock.call.do_rollback(mock.ANY),
+ mock.call.do_rollback(mock.ANY), # this is the reset on return
+ ],
+ )
- conn = testing.db.connect()
- trans = conn.begin()
- conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
+ @testing.requires.two_phase_transactions
+ def test_reset_via_agent_begin_twophase(self, reset_agent):
+ with reset_agent.engine.connect() as connection:
+ trans = connection.begin_twophase()
+ assert not trans.is_active
+ # pg8000 uses the rollback_twophase as the full rollback.
+ eq_(
+ reset_agent.mock_calls[0],
+ mock.call.rollback_twophase(connection, mock.ANY, False),
+ )
- sp1 = conn.begin_nested()
- conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
+ @testing.requires.two_phase_transactions
+ def test_reset_via_agent_begin_twophase_commit(self, reset_agent):
+ with reset_agent.engine.connect() as connection:
+ trans = connection.begin_twophase()
+ trans.commit()
- sp2 = conn.begin_nested()
- conn.execute(users.insert(), {"user_id": 3, "user_name": "name3"})
+ # again pg8000 vs. other PG drivers have different API
+ eq_(
+ reset_agent.mock_calls[0],
+ mock.call.commit_twophase(connection, mock.ANY, False),
+ )
- assert conn.in_transaction()
+ eq_(reset_agent.mock_calls[-1], mock.call.do_rollback(mock.ANY))
- trans.close()
+ @testing.requires.two_phase_transactions
+ def test_reset_via_agent_begin_twophase_rollback(self, reset_agent):
+ with reset_agent.engine.connect() as connection:
+ trans = connection.begin_twophase()
+ trans.rollback()
- assert not sp1.is_active
- assert not sp2.is_active
- assert not trans.is_active
- assert conn._transaction is None
- assert conn._nested_transaction is None
+ # pg8000 vs. the other postgresql drivers have different
+ # twophase implementations. the base postgresql driver emits
+ # "ROLLBACK PREPARED" explicitly then calls do_rollback().
+ # pg8000 has a dedicated API method. so we get either one or
+ # two do_rollback() at the end, just need at least one.
+ eq_(
+ reset_agent.mock_calls[0:2],
+ [
+ mock.call.rollback_twophase(connection, mock.ANY, False),
+ mock.call.do_rollback(mock.ANY),
+ # mock.call.do_rollback(mock.ANY),
+ ],
+ )
+ eq_(reset_agent.mock_calls[-1], mock.call.do_rollback(mock.ANY))
- with testing.db.connect() as conn:
- eq_(
- conn.scalar(select(func.count(1)).select_from(users)),
- 0,
- )
+ def test_reset_agent_no_conn_transaction(self, reset_agent):
+ with reset_agent.engine.connect():
+ pass
+
+ eq_(reset_agent.mock_calls, [mock.call.do_rollback(mock.ANY)])
with testing.db.connect() as c:
sess = Session(bind=c)
+
u = User(name="u1")
sess.add(u)
sess.flush()
+
+ # new in 2.0:
+ # autobegin occurred, so c is in a transaction.
+
+ assert c.in_transaction()
sess.close()
+
+ # .close() does a rollback, so that will end the
+ # transaction on the connection. This is how it worked
+ # before as well, even if a transaction was started.
+ # is this what we really want?
assert not c.in_transaction()
+
assert (
c.exec_driver_sql("select count(1) from users").scalar() == 0
)
sess.add(u)
sess.flush()
sess.commit()
+
+ # new in 2.0:
+ # commit, on the other hand, doesn't actually do a commit.
+ # so still in transaction due to autobegin
+ assert c.in_transaction()
+
+ sess = Session(bind=c)
+ u = User(name="u3")
+ sess.add(u)
+ sess.flush()
+ sess.rollback()
+
+ # like .close(), rollback() also ends the transaction
assert not c.in_transaction()
- assert (
- c.exec_driver_sql("select count(1) from users").scalar() == 1
- )
- with c.begin():
- c.exec_driver_sql("delete from users")
assert (
c.exec_driver_sql("select count(1) from users").scalar() == 0
)
from sqlalchemy import Table
from sqlalchemy import testing
from sqlalchemy import text
-from sqlalchemy.future import Engine
from sqlalchemy.orm import attributes
from sqlalchemy.orm import clear_mappers
from sqlalchemy.orm import exc as orm_exc
@testing.fixture
def future_conn(self):
- engine = Engine._future_facade(testing.db)
+ engine = testing.db
with engine.connect() as conn:
yield conn
class NewStyleJoinIntoAnExternalTransactionTest(
- JoinIntoAnExternalTransactionFixture
+ JoinIntoAnExternalTransactionFixture, fixtures.MappedTest
):
"""A new recipe for "join into an external transaction" that works
for both legacy and future engines/sessions
self._assert_count(1)
-class FutureJoinIntoAnExternalTransactionTest(
- NewStyleJoinIntoAnExternalTransactionTest,
- fixtures.FutureEngineMixin,
- fixtures.MappedTest,
-):
- pass
-
-
-class NonFutureJoinIntoAnExternalTransactionTest(
- NewStyleJoinIntoAnExternalTransactionTest,
- fixtures.MappedTest,
-):
- pass
-
-
class LegacyJoinIntoAnExternalTransactionTest(
JoinIntoAnExternalTransactionFixture,
fixtures.MappedTest,
# Session above (including calls to commit())
# is rolled back.
self.trans.rollback()
-
-
-class LegacyBranchedJoinIntoAnExternalTransactionTest(
- LegacyJoinIntoAnExternalTransactionTest
-):
- def setup_session(self):
- # begin a non-ORM transaction
- self.trans = self.connection.begin()
-
- class A(object):
- pass
-
- self.mapper_registry.map_imperatively(A, self.table)
- self.A = A
-
- # neutron is doing this inside of a migration
- # 1df244e556f5_add_unique_ha_router_agent_port_bindings.py
- with testing.expect_deprecated_20(
- r"The Connection.connect\(\) method is considered legacy"
- ):
- self.session = Session(bind=self.connection.connect())
-
- if testing.requires.savepoints.enabled:
- # start the session in a SAVEPOINT...
- self.session.begin_nested()
-
- # then each time that SAVEPOINT ends, reopen it
- @event.listens_for(self.session, "after_transaction_end")
- def restart_savepoint(session, transaction):
- if transaction.nested and not transaction._parent.nested:
-
- # ensure that state is expired the way
- # session.commit() at the top level normally does
- # (optional step)
- session.expire_all()
-
- session.begin_nested()
eq_(s.query(cast(JSONThing.data_null, String)).scalar(), None)
-class EnsureCacheTest(fixtures.FutureEngineMixin, UOWTest):
+class EnsureCacheTest(UOWTest):
def test_ensure_cache(self):
users, User = self.tables.users, self.classes.User
# TEST: test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_cextensions 54
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_nocextensions 54
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 54
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 54
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 54
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 54
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 54
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 54
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_nocextensions 54
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 54
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 54
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_cextensions 47
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_nocextensions 47
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 47
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 47
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 47
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_nocextensions 47
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 47
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 47
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 47
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_nocextensions 47
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 47
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_connection_execute x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 47
# TEST: test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_cextensions 93
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_nocextensions 93
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 93
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 93
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 93
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 93
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 93
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 93
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_nocextensions 93
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 93
-test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 93
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_cextensions 101
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_nocextensions 101
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 101
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 101
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 101
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_nocextensions 101
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 101
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 101
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 101
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_nocextensions 101
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 101
+test.aaa_profiling.test_resultset.ExecutionTest.test_minimal_engine_execute x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 101
# TEST: test.aaa_profiling.test_resultset.ResultSetTest.test_contains_doesnt_compile
test.aaa_profiling.test_resultset.ResultSetTest.test_contains_doesnt_compile x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 8
test.aaa_profiling.test_resultset.ResultSetTest.test_contains_doesnt_compile x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 9
test.aaa_profiling.test_resultset.ResultSetTest.test_contains_doesnt_compile x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 8
+test.aaa_profiling.test_resultset.ResultSetTest.test_contains_doesnt_compile x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_nocextensions 9
test.aaa_profiling.test_resultset.ResultSetTest.test_contains_doesnt_compile x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 8
test.aaa_profiling.test_resultset.ResultSetTest.test_contains_doesnt_compile x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 9
test.aaa_profiling.test_resultset.ResultSetTest.test_contains_doesnt_compile x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 8
test.aaa_profiling.test_resultset.ResultSetTest.test_contains_doesnt_compile x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 8
test.aaa_profiling.test_resultset.ResultSetTest.test_contains_doesnt_compile x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 9
-# TEST: test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy
-
-
# TEST: test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_cextensions 2570
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_nocextensions 15574
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 89310
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 102314
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 2563
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 2603
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 15607
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 2558
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_nocextensions 15562
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 2511
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 15515
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_cextensions 2604
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_nocextensions 15608
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 89344
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 102348
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 2597
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_nocextensions 15601
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 2637
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 15641
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 2592
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_nocextensions 15596
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 2547
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 15551
# TEST: test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-0]
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-0] x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 19
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-0] x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 19
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-0] x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 14
+test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-0] x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_nocextensions 14
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-0] x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 14
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-0] x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 14
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-0] x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 14
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-1] x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 19
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-1] x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 21
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-1] x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 14
+test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-1] x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_nocextensions 16
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-1] x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 14
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-1] x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 16
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-1] x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 14
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-2] x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 19
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-2] x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 21
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-2] x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 14
+test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-2] x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_nocextensions 16
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-2] x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 14
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-2] x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 16
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[False-2] x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 14
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[True-1] x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 24
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[True-1] x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 26
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[True-1] x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 17
+test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[True-1] x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_nocextensions 19
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[True-1] x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 17
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[True-1] x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 19
test.aaa_profiling.test_resultset.ResultSetTest.test_one_or_none[True-1] x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 17
# TEST: test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_cextensions 267
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_nocextensions 6267
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 87007
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 93007
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 235
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 327
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 6327
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 257
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_nocextensions 6257
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 225
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 6225
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_cextensions 301
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_nocextensions 6301
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 87041
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 93041
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 269
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_nocextensions 6269
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 361
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 6361
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 291
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_nocextensions 6291
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 260
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_string x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 6260
# TEST: test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_cextensions 267
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_nocextensions 6267
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 87007
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 93007
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 235
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 327
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 6327
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 257
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_nocextensions 6257
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 225
-test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 6225
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_cextensions 301
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_nocextensions 6301
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 87041
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 93041
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 269
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_nocextensions 6269
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 361
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 6361
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 291
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_nocextensions 6291
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 260
+test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 6260
# TEST: test.aaa_profiling.test_resultset.ResultSetTest.test_string
-test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_cextensions 563
-test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_nocextensions 6567
-test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 87303
-test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 93307
-test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 556
-test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 596
-test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 6600
-test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 551
-test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_nocextensions 6555
-test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 504
-test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 6508
+test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_cextensions 597
+test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_nocextensions 6601
+test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 87337
+test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 93341
+test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 590
+test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_nocextensions 6594
+test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 630
+test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 6634
+test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 585
+test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_nocextensions 6589
+test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 540
+test.aaa_profiling.test_resultset.ResultSetTest.test_string x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 6544
# TEST: test.aaa_profiling.test_resultset.ResultSetTest.test_unicode
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_cextensions 563
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_nocextensions 6567
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 87303
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 93307
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 556
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 596
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 6600
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 551
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_nocextensions 6555
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 504
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 6508
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_cextensions 597
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_mariadb_mysqldb_dbapiunicode_nocextensions 6601
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_cextensions 87337
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_mariadb_pymysql_dbapiunicode_nocextensions 93341
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_cextensions 590
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_mssql_pyodbc_dbapiunicode_nocextensions 6594
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_cextensions 630
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_oracle_cx_oracle_dbapiunicode_nocextensions 6634
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_cextensions 585
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_postgresql_psycopg2_dbapiunicode_nocextensions 6589
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_cextensions 540
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode x86_64_linux_cpython_3.10_sqlite_pysqlite_dbapiunicode_nocextensions 6544
@property
def legacy_isolation_level(self):
- # refers to the engine isolation_level setting
+        # refers to dialects where "isolation_level" can be passed to
+        # create_engine
return only_on(
("postgresql", "sqlite", "mysql", "mariadb", "mssql"),
"DBAPI has no isolation level support",
def mssql_freetds(self):
return only_on(["mssql+pymssql"])
- @property
- def legacy_engine(self):
- return exclusions.skip_if(lambda config: config.db._is_future)
-
@property
def ad_hoc_engines(self):
return skip_if(self._sqlite_file_db)
class ExecutionOptionsTest(fixtures.TestBase):
def test_non_dml(self):
- stmt = table1.select()
+ stmt = table1.select().execution_options(foo="bar")
compiled = stmt.compile()
- eq_(compiled.execution_options, {})
+ eq_(compiled.execution_options, {"foo": "bar"})
def test_dml(self):
- stmt = table1.insert()
+ stmt = table1.insert().execution_options(foo="bar")
compiled = stmt.compile()
- eq_(compiled.execution_options, {"autocommit": True})
+ eq_(compiled.execution_options, {"foo": "bar"})
def test_embedded_element_true_to_none(self):
- stmt = table1.insert()
- eq_(stmt._execution_options, {"autocommit": True})
+ stmt = table1.insert().execution_options(foo="bar")
+ eq_(stmt._execution_options, {"foo": "bar"})
s2 = select(table1).select_from(stmt.cte())
eq_(s2._execution_options, {})
compiled = s2.compile()
- eq_(compiled.execution_options, {"autocommit": True})
+ eq_(compiled.execution_options, {})
def test_embedded_element_true_to_false(self):
- stmt = table1.insert()
- eq_(stmt._execution_options, {"autocommit": True})
+ stmt = table1.insert().execution_options(foo="bar")
+ eq_(stmt._execution_options, {"foo": "bar"})
s2 = (
- select(table1)
- .select_from(stmt.cte())
- .execution_options(autocommit=False)
+ select(table1).select_from(stmt.cte()).execution_options(foo="bat")
)
- eq_(s2._execution_options, {"autocommit": False})
+ eq_(s2._execution_options, {"foo": "bat"})
compiled = s2.compile()
- eq_(compiled.execution_options, {"autocommit": False})
+ eq_(compiled.execution_options, {"foo": "bat"})
class DDLTest(fixtures.TestBase, AssertsCompiledSQL):
.cte("t")
)
stmt = t.select()
- assert "autocommit" not in stmt._execution_options
- eq_(stmt.compile().execution_options["autocommit"], True)
self.assert_compile(
stmt,
stmt = select(cte)
- assert "autocommit" not in stmt._execution_options
- eq_(stmt.compile().execution_options["autocommit"], True)
-
self.assert_compile(
stmt,
"WITH pd AS "
products = table("products", column("id"), column("price"))
cte = products.select().cte("pd")
- assert "autocommit" not in cte.select()._execution_options
stmt = products.update().where(products.c.price == cte.c.price)
- eq_(stmt.compile().execution_options["autocommit"], True)
self.assert_compile(
stmt,
products = table("products", column("id"), column("price"))
cte = products.select().cte("pd")
- assert "autocommit" not in cte.select()._execution_options
stmt = update(cte)
- eq_(stmt.compile().execution_options["autocommit"], True)
self.assert_compile(
stmt,
products = table("products", column("id"), column("price"))
cte = products.select().cte("pd")
- assert "autocommit" not in cte.select()._execution_options
stmt = delete(cte)
- eq_(stmt.compile().execution_options["autocommit"], True)
self.assert_compile(
stmt,
)
cte = q.cte("deldup")
stmt = delete(cte).where(text("RN > 1"))
- eq_(stmt.compile().execution_options["autocommit"], True)
self.assert_compile(
stmt,
stmt = select(cte)
- assert "autocommit" not in stmt._execution_options
-
- eq_(stmt.compile().execution_options["autocommit"], True)
-
self.assert_compile(
stmt,
"WITH insert_cte AS "
eq_(55, row._mapping["col3"])
-class FutureDefaultRoundTripTest(
- fixtures.FutureEngineMixin, DefaultRoundTripTest
-):
-
- __backend__ = True
-
-
class CTEDefaultTest(fixtures.TablesTest):
__requires__ = ("ctes", "returning", "ctes_on_dml")
__backend__ = True
from sqlalchemy.testing import eq_
from sqlalchemy.testing import fixtures
from sqlalchemy.testing import is_
-from sqlalchemy.testing import is_false
from sqlalchemy.testing import is_true
from sqlalchemy.testing import mock
from sqlalchemy.testing.schema import Column
],
)
- def test_autoincrement_autocommit(self):
- with testing.db.connect() as conn:
- with testing.expect_deprecated_20(
- "The current statement is being autocommitted using "
- "implicit autocommit, "
- ):
- self._test_autoincrement(conn)
-
-
-class DefaultTest(fixtures.TestBase):
- __backend__ = True
-
- @testing.provide_metadata
- def test_close_on_branched(self):
- metadata = self.metadata
-
- def mydefault_using_connection(ctx):
- conn = ctx.connection
- try:
- return conn.execute(select(text("12"))).scalar()
- finally:
- # ensure a "close()" on this connection does nothing,
- # since its a "branched" connection
- conn.close()
-
- table = Table(
- "foo",
- metadata,
- Column("x", Integer),
- Column("y", Integer, default=mydefault_using_connection),
- )
-
- metadata.create_all(testing.db)
- with testing.db.connect() as conn:
- with testing.expect_deprecated_20(
- r"The .close\(\) method on a so-called 'branched' "
- r"connection is deprecated as of 1.4, as are "
- r"'branched' connections overall"
- ):
- conn.execute(table.insert().values(x=5))
-
- eq_(conn.execute(select(table)).first(), (5, 12))
-
class DMLTest(_UpdateFromTestBase, fixtures.TablesTest, AssertsCompiledSQL):
__dialect__ = "default"
if inspect(conn).has_table("foo"):
conn.execute(schema.DropTable(table("foo")))
- def test_bind_ddl_deprecated(self, connection):
- with testing.expect_deprecated_20(
- "The DDL.bind argument is deprecated"
- ):
- ddl = schema.DDL("create table foo(id integer)", bind=connection)
-
- with testing.expect_deprecated_20(
- r"The DDLElement.execute\(\) method is considered legacy"
- ):
- ddl.execute()
-
- def test_bind_create_table_deprecated(self, connection):
- t1 = Table("foo", MetaData(), Column("id", Integer))
-
- with testing.expect_deprecated_20(
- "The CreateTable.bind argument is deprecated"
- ):
- ddl = schema.CreateTable(t1, bind=connection)
-
- with testing.expect_deprecated_20(
- r"The DDLElement.execute\(\) method is considered legacy"
- ):
- ddl.execute()
-
- is_true(inspect(connection).has_table("foo"))
-
- def test_bind_create_index_deprecated(self, connection):
- t1 = Table("foo", MetaData(), Column("id", Integer))
- t1.create(connection)
-
- idx = schema.Index("foo_idx", t1.c.id)
-
- with testing.expect_deprecated_20(
- "The CreateIndex.bind argument is deprecated"
- ):
- ddl = schema.CreateIndex(idx, bind=connection)
-
- with testing.expect_deprecated_20(
- r"The DDLElement.execute\(\) method is considered legacy"
- ):
- ddl.execute()
-
- is_true(
- "foo_idx"
- in [ix["name"] for ix in inspect(connection).get_indexes("foo")]
- )
-
- def test_bind_drop_table_deprecated(self, connection):
- t1 = Table("foo", MetaData(), Column("id", Integer))
-
- t1.create(connection)
-
- with testing.expect_deprecated_20(
- "The DropTable.bind argument is deprecated"
- ):
- ddl = schema.DropTable(t1, bind=connection)
-
- with testing.expect_deprecated_20(
- r"The DDLElement.execute\(\) method is considered legacy"
- ):
- ddl.execute()
-
- is_false(inspect(connection).has_table("foo"))
-
- def test_bind_drop_index_deprecated(self, connection):
- t1 = Table("foo", MetaData(), Column("id", Integer))
- idx = schema.Index("foo_idx", t1.c.id)
- t1.create(connection)
-
- is_true(
- "foo_idx"
- in [ix["name"] for ix in inspect(connection).get_indexes("foo")]
- )
-
- with testing.expect_deprecated_20(
- "The DropIndex.bind argument is deprecated"
- ):
- ddl = schema.DropIndex(idx, bind=connection)
-
- with testing.expect_deprecated_20(
- r"The DDLElement.execute\(\) method is considered legacy"
- ):
- ddl.execute()
-
- is_false(
- "foo_idx"
- in [ix["name"] for ix in inspect(connection).get_indexes("foo")]
- )
-
@testing.combinations(
(schema.AddConstraint,),
(schema.DropConstraint,),
stmt2 = stmt_fn(self)
cache = {}
- result = connection._execute_20(
+ result = connection.execute(
stmt1,
execution_options={"compiled_cache": cache},
)
result.close()
assert cache
- result = connection._execute_20(
+ result = connection.execute(
stmt2,
execution_options={"compiled_cache": cache},
)
self._assert_seq_result(r.inserted_primary_key[0])
-class FutureSequenceExecTest(fixtures.FutureEngineMixin, SequenceExecTest):
- __requires__ = ("sequences",)
- __backend__ = True
-
-
class SequenceTest(fixtures.TestBase, testing.AssertsCompiledSQL):
__requires__ = ("sequences",)
__backend__ = True
assert isinstance(seq.next_value().type, BigInteger)
-class FutureSequenceTest(fixtures.FutureEngineMixin, SequenceTest):
- __requires__ = ("sequences",)
- __backend__ = True
-
-
class TableBoundSequenceTest(fixtures.TablesTest):
__requires__ = ("sequences",)
__backend__ = True