does not want to use Declarative with ``__table__``, can instead use the above
pattern which basically does the same thing.
+.. _migration_20_unify_select:
ORM Query Unified with Core Select
==================================
SQLAlchemy 2.0 Future (Core)
============================
+This package includes a relatively small number of transitional elements
+to allow "2.0 mode" to take place within SQLAlchemy 1.4. The primary
+objects provided here are :class:`_future.Engine` and :class:`_future.Connection`,
+which are both subclasses of the existing :class:`_engine.Engine` and
+:class:`_engine.Connection` objects with essentially a smaller set of
+methods and the removal of "autocommit".
+
+Within the 1.4 series, the "2.0" style of engines and connections is enabled
+by passing the :paramref:`_sa.create_engine.future` flag to
+:func:`_sa.create_engine`::
+
+ from sqlalchemy import create_engine
+ engine = create_engine("postgresql://user:pass@host/dbname", future=True)
+
+Similarly, with the ORM, to enable "future" behavior in the ORM :class:`.Session`,
+pass the :paramref:`_orm.Session.future` parameter either to the
+:class:`.Session` constructor directly, or via the :class:`_orm.sessionmaker`
+class::
+
+ from sqlalchemy.orm import sessionmaker
+
+ Session = sessionmaker(engine, future=True)
+
.. seealso::
:ref:`migration_20_toplevel` - Introduction to the 2.0 series of SQLAlchemy
.. autoclass:: Lateral
:members:
+.. autoclass:: ReturnsRows
+ :members:
+ :inherited-members: ClauseElement
+
.. autoclass:: ScalarSelect
:members:
.. autoclass:: Select
:members:
:inherited-members: ClauseElement
- :exclude-members: memoized_attribute, memoized_instancemethod
+ :exclude-members: memoized_attribute, memoized_instancemethod, append_correlation, append_column, append_prefix, append_whereclause, append_having, append_from, append_order_by, append_group_by
+
.. autoclass:: Selectable
:members:
+ :inherited-members: ClauseElement
.. autoclass:: SelectBase
:members:
somecolumn.bool_op('-->')('some value')
-.. versionadded:: 1.2.0b3 Added the :meth:`.Operators.bool_op` method.
+
+Commonly Used Operators
+-------------------------
+
+
+Here's a rundown of some of the most common operators used in both the
+Core expression language as well as in the ORM. Here we see expressions
+that are most commonly present when using the :meth:`_sql.Select.where` method,
+but can be used in other scenarios as well.
+
+A listing of all the column-level operations common to all column-like
+objects is at :class:`.ColumnOperators`.
+
+
+* :meth:`equals <.ColumnOperators.__eq__>`::
+
+ statement.where(users.c.name == 'ed')
+
+* :meth:`not equals <.ColumnOperators.__ne__>`::
+
+ statement.where(users.c.name != 'ed')
+
+* :meth:`LIKE <.ColumnOperators.like>`::
+
+ statement.where(users.c.name.like('%ed%'))
+
+ .. note:: :meth:`.ColumnOperators.like` renders the LIKE operator, which
+ is case insensitive on some backends, and case sensitive
+ on others. For guaranteed case-insensitive comparisons, use
+ :meth:`.ColumnOperators.ilike`.
+
+* :meth:`ILIKE <.ColumnOperators.ilike>` (case-insensitive LIKE)::
+
+ statement.where(users.c.name.ilike('%ed%'))
+
+ .. note:: Most backends don't support ILIKE directly. For those,
+ the :meth:`.ColumnOperators.ilike` operator renders an expression
+ combining LIKE with the LOWER SQL function applied to each operand.
+
+* :meth:`IN <.ColumnOperators.in_>`::
+
+ statement.where(users.c.name.in_(['ed', 'wendy', 'jack']))
+
+ # works with Select objects too:
+ statement.where(users.c.name.in_(
+ select(users.c.name).where(users.c.name.like('%ed%'))
+ ))
+
+ # use tuple_() for composite (multi-column) queries
+ from sqlalchemy import tuple_
+ statement.where(
+ tuple_(users.c.name, users.c.nickname).\
+ in_([('ed', 'edsnickname'), ('wendy', 'windy')])
+ )
+
+* :meth:`NOT IN <.ColumnOperators.notin_>`::
+
+ statement.where(~users.c.name.in_(['ed', 'wendy', 'jack']))
+
+* :meth:`IS NULL <.ColumnOperators.is_>`::
+
+ statement.where(users.c.name == None)
+
+ # alternatively, if pep8/linters are a concern
+ statement.where(users.c.name.is_(None))
+
+* :meth:`IS NOT NULL <.ColumnOperators.isnot>`::
+
+ statement.where(users.c.name != None)
+
+ # alternatively, if pep8/linters are a concern
+ statement.where(users.c.name.isnot(None))
+
+* :func:`AND <.sql.expression.and_>`::
+
+ # use and_()
+ from sqlalchemy import and_
+ statement.where(and_(users.c.name == 'ed', users.c.fullname == 'Ed Jones'))
+
+ # or send multiple expressions to .where()
+ statement.where(users.c.name == 'ed', users.c.fullname == 'Ed Jones')
+
+ # or chain multiple where() calls
+ statement.where(users.c.name == 'ed').where(users.c.fullname == 'Ed Jones')
+
+ .. note:: Make sure you use :func:`.and_` and **not** the
+ Python ``and`` operator!
+
+* :func:`OR <.sql.expression.or_>`::
+
+ from sqlalchemy import or_
+ statement.where(or_(users.c.name == 'ed', users.c.name == 'wendy'))
+
+ .. note:: Make sure you use :func:`.or_` and **not** the
+ Python ``or`` operator!
+
+* :meth:`MATCH <.ColumnOperators.match>`::
+
+ statement.where(users.c.name.match('wendy'))
+
+ .. note::
+
+ :meth:`~.ColumnOperators.match` uses a database-specific ``MATCH``
+ or ``CONTAINS`` function; its behavior will vary by backend and is not
+ available on some backends such as SQLite.
+
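+The reason Python's ``==`` and ``!=`` can render SQL above (and why the
+Python ``and`` / ``or`` keywords cannot, requiring :func:`.and_` and
+:func:`.or_` instead) is that the comparison operators on column objects are
+overloaded to return expression constructs rather than booleans, while
+``and`` / ``or`` are not overloadable in Python. A rough plain-Python sketch
+of the idea follows; this is illustrative only and is not SQLAlchemy's
+actual implementation:

```python
# Illustrative sketch only (NOT SQLAlchemy's real implementation): the
# comparison operators are overloaded to build SQL fragments instead of
# returning booleans.

class FakeColumn:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        # "col == None" renders IS NULL, mirroring ColumnOperators.__eq__
        if other is None:
            return f"{self.name} IS NULL"
        return f"{self.name} = :param"

    def __ne__(self, other):
        if other is None:
            return f"{self.name} IS NOT NULL"
        return f"{self.name} != :param"

    def in_(self, values):
        # render one bound-parameter placeholder per value
        placeholders = ", ".join(":p%d" % i for i in range(len(values)))
        return f"{self.name} IN ({placeholders})"


name = FakeColumn("users.name")
print(name == 'ed')               # users.name = :param
print(name == None)               # users.name IS NULL
print(name.in_(['ed', 'wendy']))  # users.name IN (:p0, :p1)
```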
Operator Customization
----------------------
.. glossary::
:sorted:
+ 1.x style
+ 2.0 style
+ 1.x-style
+ 2.0-style
+ These terms are new in SQLAlchemy 1.4 and refer to the SQLAlchemy
+ 1.4 -> 2.0 transition plan, described at :ref:`migration_20_toplevel`. The
+ term "1.x style" refers to an API used in the way it's been documented
+ throughout the 1.x series of SQLAlchemy and earlier (e.g. 1.3, 1.2, etc.)
+ and the term "2.0 style" refers to the way an API will look in version
+ 2.0. Version 1.4 implements nearly all of 2.0's API in so-called
+ "transition mode".
+
+ .. seealso::
+
+ :ref:`migration_20_toplevel`
+
relational
relational algebra
in terms of one particular table alias or another, based on its position
within the join expression.
+ plugin
+ plugin-specific
+ "plugin-specific" generally indicates a function or method in
+ SQLAlchemy Core which will behave differently when used in an ORM
+ context.
+
+ SQLAlchemy allows Core constructs such as :class:`_sql.Select` objects
+ to participate in a "plugin" system, which can inject additional
+ behaviors and features into the object that are not present by default.
+
+ Specifically, the primary "plugin" is the "orm" plugin, which is
+ at the base of the system by which the SQLAlchemy ORM makes use of
+ Core constructs in order to compose and execute SQL queries that
+ return ORM results.
+
+ .. seealso::
+
+ :ref:`migration_20_unify_select`
+
crud
CRUD
An acronym meaning "Create, Read, Update, Delete". The term in SQL refers to the
:doc:`Core Event Interfaces <core/events>` |
:doc:`Creating Custom SQL Constructs <core/compiler>` |
-* **SQLAlchemy 2.0 Compatibility:** :doc:`SQLAlchemy 2.0 Future (Core) <core/future>`
+* **SQLAlchemy 2.0 Compatibility:** :ref:`migration_20_toplevel`
Dialect Documentation
======================
Dynamic Relationship Loaders
----------------------------
-A key feature to enable management of a large collection is the so-called "dynamic"
-relationship. This is an optional form of :func:`~sqlalchemy.orm.relationship` which
-returns a :class:`~sqlalchemy.orm.query.Query` object in place of a collection
-when accessed. :func:`~sqlalchemy.orm.query.Query.filter` criterion may be
-applied as well as limits and offsets, either explicitly or via array slices::
+A key feature to enable management of a large collection is the so-called
+"dynamic" relationship. This is an optional form of
+:func:`_orm.relationship` which returns a
+:class:`_orm.AppenderQuery` object in place of a collection
+when accessed. Filtering criterion may be applied as well as limits and
+offsets, either explicitly or via array slices::
class User(Base):
__tablename__ = 'user'
posts = jack.posts[5:20]
The dynamic relationship supports limited write operations, via the
-``append()`` and ``remove()`` methods::
+:meth:`_orm.AppenderQuery.append` and :meth:`_orm.AppenderQuery.remove` methods::
oldpost = jack.posts.filter(Post.headline=='old post').one()
jack.posts.remove(oldpost)
Note that eager/lazy loading options cannot be used in conjunction with dynamic relationships at this time.
+.. autoclass:: sqlalchemy.orm.AppenderQuery
+ :members:
+
.. note::
The :func:`_orm.dynamic_loader` function is essentially the same
Additional Persistence Techniques
=================================
+.. _session_deleting_from_collections:
+
+Notes on Delete - Deleting Objects Referenced from Collections and Scalar Relationships
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ORM in general never modifies the contents of a collection or scalar
+relationship during the flush process. This means, if your class has a
+:func:`_orm.relationship` that refers to a collection of objects, or a reference
+to a single object such as many-to-one, the contents of this attribute will
+not be modified when the flush process occurs. Instead, it is expected
+that the :class:`.Session` would eventually be expired, either through the expire-on-commit behavior of
+:meth:`.Session.commit` or through explicit use of :meth:`.Session.expire`.
+At that point, any referenced object or collection associated with that
+:class:`.Session` will be cleared and will re-load itself upon next access.
+
+A common confusion that arises regarding this behavior involves the use of the
+:meth:`~.Session.delete` method. When :meth:`.Session.delete` is invoked upon
+an object and the :class:`.Session` is flushed, the row is deleted from the
+database. Rows that refer to the target row via foreign key, assuming they
+are tracked using a :func:`_orm.relationship` between the two mapped object types,
+will also see their foreign key attributes UPDATED to null, or if delete
+cascade is set up, the related rows will be deleted as well. However, even
+though rows related to the deleted object might be themselves modified as well,
+**no changes occur to relationship-bound collections or object references on
+the objects** involved in the operation within the scope of the flush
+itself. This means if the object was a
+member of a related collection, it will still be present on the Python side
+until that collection is expired. Similarly, if the object were
+referenced via many-to-one or one-to-one from another object, that reference
+will remain present on that object until the object is expired as well.
+
+Below, we illustrate that after an ``Address`` object is marked
+for deletion, it's still present in the collection associated with the
+parent ``User``, even after a flush::
+
+ >>> address = user.addresses[1]
+ >>> session.delete(address)
+ >>> session.flush()
+ >>> address in user.addresses
+ True
+
+When the above session is committed, all attributes are expired. The next
+access of ``user.addresses`` will re-load the collection, revealing the
+desired state::
+
+ >>> session.commit()
+ >>> address in user.addresses
+ False
+
+There is a recipe for intercepting :meth:`.Session.delete` and invoking this
+expiration automatically; see `ExpireRelationshipOnFKChange <http://www.sqlalchemy.org/trac/wiki/UsageRecipes/ExpireRelationshipOnFKChange>`_ for this. However, the usual practice of
+deleting items within collections is to forego the usage of
+:meth:`~.Session.delete` directly, and instead use cascade behavior to
+automatically invoke the deletion as a result of removing the object from the
+parent collection. The ``delete-orphan`` cascade accomplishes this, as
+illustrated in the example below::
+
+ class User(Base):
+ __tablename__ = 'user'
+
+ # ...
+
+ addresses = relationship(
+ "Address", cascade="all, delete-orphan")
+
+ # ...
+
+ del user.addresses[1]
+ session.flush()
+
+Where above, upon removing the ``Address`` object from the ``User.addresses``
+collection, the ``delete-orphan`` cascade has the effect of marking the ``Address``
+object for deletion in the same way as passing it to :meth:`~.Session.delete`.
+
+The ``delete-orphan`` cascade can also be applied to a many-to-one
+or one-to-one relationship, so that when an object is de-associated from its
+parent, it is also automatically marked for deletion. Using ``delete-orphan``
+cascade on a many-to-one or one-to-one requires an additional flag
+:paramref:`_orm.relationship.single_parent` which invokes an assertion
+that this related object is not to be shared with any other parent simultaneously::
+
+ class User(Base):
+ # ...
+
+ preference = relationship(
+ "Preference", cascade="all, delete-orphan",
+ single_parent=True)
+
+
+Above, if a hypothetical ``Preference`` object is removed from a ``User``,
+it will be deleted on flush::
+
+ some_user.preference = None
+ session.flush() # will delete the Preference object
+
+.. seealso::
+
+ :ref:`unitofwork_cascades` for detail on cascades.
+
+
+
.. _flush_embedded_sql_expressions:
Embedding SQL Insert/Update Expressions into a Flush
.. module:: sqlalchemy.orm.session
-The :func:`_orm.mapper` function and :mod:`~sqlalchemy.ext.declarative` extensions
-are the primary configurational interface for the ORM. Once mappings are
-configured, the primary usage interface for persistence operations is the
+The declarative base and ORM mapping functions described at
+:ref:`mapper_config_toplevel` are the primary configurational interface for the
+ORM. Once mappings are configured, the primary usage interface for
+persistence operations is the
:class:`.Session`.
.. toctree::
What does the Session do ?
==========================
-In the most general sense, the :class:`~.Session` establishes all
-conversations with the database and represents a "holding zone" for all the
-objects which you've loaded or associated with it during its lifespan. It
-provides the entrypoint to acquire a :class:`_query.Query` object, which sends
-queries to the database using the :class:`~.Session` object's current database
-connection, populating result rows into objects that are then stored in the
-:class:`.Session`, inside a structure called the `Identity Map
-<http://martinfowler.com/eaaCatalog/identityMap.html>`_ - a data structure
-that maintains unique copies of each object, where "unique" means "only one
-object with a particular primary key".
-
-The :class:`.Session` begins in an essentially stateless form. Once queries
-are issued or other objects are persisted with it, it requests a connection
-resource from an :class:`_engine.Engine` that is associated either with the
-:class:`.Session` itself or with the mapped :class:`_schema.Table` objects being
-operated upon. This connection represents an ongoing transaction, which
-remains in effect until the :class:`.Session` is instructed to commit or roll
-back its pending state.
-
-All changes to objects maintained by a :class:`.Session` are tracked - before
-the database is queried again or before the current transaction is committed,
-it **flushes** all pending changes to the database. This is known as the `Unit
-of Work <http://martinfowler.com/eaaCatalog/unitOfWork.html>`_ pattern.
-
-When using a :class:`.Session`, it's important to note that the objects
-which are associated with it are **proxy objects** to the transaction being
-held by the :class:`.Session` - there are a variety of events that will cause
-objects to re-access the database in order to keep synchronized. It is
-possible to "detach" objects from a :class:`.Session`, and to continue using
-them, though this practice has its caveats. It's intended that
-usually, you'd re-associate detached objects with another :class:`.Session` when you
-want to work with them again, so that they can resume their normal task of
-representing database state.
+In the most general sense, the :class:`~.Session` establishes all conversations
+with the database and represents a "holding zone" for all the objects which
+you've loaded or associated with it during its lifespan. It provides the
+interface where SELECT and other queries are made that will return and modify
+ORM-mapped objects. The ORM objects themselves are maintained inside the
+:class:`.Session`, inside a structure called the :term:`identity map` - a data
+structure that maintains unique copies of each object, where "unique" means
+"only one object with a particular primary key".
+
+The :class:`.Session` begins in a mostly stateless form. Once queries are
+issued or other objects are persisted with it, it requests a connection
+resource from an :class:`_engine.Engine` that is associated with the
+:class:`.Session`, and then establishes a transaction on that connection. This
+transaction remains in effect until the :class:`.Session` is instructed to
+commit or roll back the transaction.
+
+The ORM objects maintained by a :class:`_orm.Session` are :term:`instrumented`
+such that whenever an attribute or a collection is modified in the Python
+program, a change event is generated which is recorded by the
+:class:`_orm.Session`. Whenever the database is about to be queried, or when
+the transaction is about to be committed, the :class:`_orm.Session` first
+**flushes** all pending changes stored in memory to the database. This is
+known as the :term:`unit of work` pattern.
+
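+The batching behavior described above can be sketched in plain Python; this
+is a conceptual toy, not SQLAlchemy's implementation, with hypothetical
+names (``ToySession``, ``mark_dirty``) chosen for illustration:

```python
# Illustrative sketch of the unit of work idea (NOT SQLAlchemy's code):
# changes are recorded as they happen, then emitted to the database in
# one batch when flush() is called.

class ToySession:
    def __init__(self):
        self.new = []        # pending INSERTs
        self.dirty = set()   # pending UPDATEs
        self.emitted = []    # stand-in for SQL sent to the database

    def add(self, obj):
        self.new.append(obj)

    def mark_dirty(self, obj):
        # in the real ORM, attribute instrumentation generates this
        # change event automatically when a mapped attribute is modified
        self.dirty.add(obj)

    def flush(self):
        # all pending changes go out together, just before a query
        # or commit would need the database to be up to date
        for obj in self.new:
            self.emitted.append(f"INSERT {obj!r}")
        for obj in self.dirty:
            self.emitted.append(f"UPDATE {obj!r}")
        self.new.clear()
        self.dirty.clear()


session = ToySession()
session.add("user1")
session.mark_dirty("user2")
session.flush()
print(session.emitted)  # ["INSERT 'user1'", "UPDATE 'user2'"]
```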
+When using a :class:`.Session`, it's useful to consider the ORM mapped objects
+that it maintains as **proxy objects** to database rows, which are local to the
+transaction being held by the :class:`.Session`. In order to maintain the
+state on the objects as matching what's actually in the database, there are a
+variety of events that will cause objects to re-access the database in order to
+keep synchronized. It is possible to "detach" objects from a
+:class:`.Session`, and to continue using them, though this practice has its
+caveats. It's intended that usually, you'd re-associate detached objects with
+another :class:`.Session` when you want to work with them again, so that they
+can resume their normal task of representing database state.
+
+
+
+Basics of Using a Session
+=========================
+
+The most basic :class:`.Session` use patterns are presented here.
.. _session_getting:
-Getting a Session
-=================
+Instantiating
+-------------
-:class:`.Session` is a regular Python class which can
-be directly instantiated. However, to standardize how sessions are configured
-and acquired, the :class:`.sessionmaker` class is normally
-used to create a top level :class:`.Session`
-configuration which can then be used throughout an application without the
-need to repeat the configurational arguments.
+The :class:`_orm.Session` may be constructed on its own or by using the
+:class:`_orm.sessionmaker` class. It typically is passed a single
+:class:`_engine.Engine` as a source of connectivity up front. A typical use
+may look like::
-The usage of :class:`.sessionmaker` is illustrated below:
+ from sqlalchemy import create_engine
+ from sqlalchemy.orm import Session
-.. sourcecode:: python+sql
+ # an Engine, which the Session will use for connection
+ # resources
+ engine = create_engine('postgresql://scott:tiger@localhost/')
+
+ # create session and add objects
+ with Session(engine) as session:
+ session.add(some_object)
+ session.add(some_other_object)
+ session.commit()
+
+Above, the :class:`_orm.Session` is instantiated given an :class:`_engine.Engine`
+associated with a particular database URL. It is then used in a Python
+context manager (i.e. ``with:`` statement) so that it is automatically
+closed at the end of the block; this is equivalent
+to calling the :meth:`_orm.Session.close` method.
+
+The call to :meth:`_orm.Session.commit` is optional, and is only needed if the
+work we've done with the :class:`_orm.Session` includes new data to be
+persisted to the database. If we were only issuing SELECT calls and did not
+need to write any changes, then the call to :meth:`_orm.Session.commit` would
+be unnecessary.
+
+When using the :class:`_orm.sessionmaker`, this object acts as a factory
+for :class:`_orm.Session` objects and provides an all-in-one transactional
+context manager analogous to :meth:`_engine.Engine.begin`; this context manager
+begins and commits a transaction at the end of the block, or in the case
+of an exception throw, will call upon :meth:`_orm.Session.rollback` to
+roll back the transaction::
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
# an Engine, which the Session will use for connection
# resources
- some_engine = create_engine('postgresql://scott:tiger@localhost/')
+ engine = create_engine('postgresql://scott:tiger@localhost/')
+
+ Session = sessionmaker(engine)
+
+ with Session.begin() as session:
+ session.add(some_object)
+ session.add(some_other_object)
+ # commits the transaction, closes the session
+
+Where above, the :class:`_orm.Session` will have its transaction committed
+and will also be closed when the above
+``with:`` block ends.
+
+When you write your application, the
+:class:`.sessionmaker` factory should be globally scoped, the same way as
+the :class:`_engine.Engine` object created by :func:`_sa.create_engine` is
+also typically globally scoped. As these objects are both factories, they
+can be used by any number of functions and threads simultaneously.
+
+.. seealso::
+
+ :class:`_orm.sessionmaker`
+
+ :class:`_orm.Session`
+
+Querying (1.x Style)
+--------------------
+
+The :meth:`~.Session.query` function takes one or more
+**entities** and returns a new :class:`~sqlalchemy.orm.query.Query` object which
+will issue mapper queries within the context of this Session. By
+"entity" we refer to a mapped class, an attribute of a mapped class, or
+other ORM constructs such as an :func:`_orm.aliased` construct::
+
+ # query from a class
+ results = session.query(User).filter_by(name='ed').all()
+
+ # query with multiple classes, returns tuples
+ results = session.query(User, Address).join('addresses').filter_by(name='ed').all()
+
+ # query using orm-columns, also returns tuples
+ results = session.query(User.name, User.fullname).all()
- # create a configured "Session" class
- Session = sessionmaker(bind=some_engine)
+When ORM objects are returned in results, they are also stored in the identity
+map. When an incoming database row has a primary key that matches an object
+which is already present, the same object is returned, and those attributes
+of the object which already have a value are not re-populated.
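+The identity map's "one object per primary key" behavior can be sketched
+with an ordinary dictionary; this is a conceptual toy with a hypothetical
+``IdentityMap`` class, not SQLAlchemy's actual implementation:

```python
# Illustrative sketch of the identity map idea (NOT SQLAlchemy's code):
# the session keeps at most one object per primary key, so a row loaded
# twice hands back the same Python object.

class IdentityMap:
    def __init__(self):
        self._objects = {}

    def get_or_create(self, key, row):
        if key in self._objects:
            # already present: return the existing object; attributes
            # that already have a value are not re-populated
            return self._objects[key]
        obj = dict(row)
        self._objects[key] = obj
        return obj


imap = IdentityMap()
u1 = imap.get_or_create(("user", 5), {"id": 5, "name": "ed"})
u2 = imap.get_or_create(("user", 5), {"id": 5, "name": "ed"})
print(u1 is u2)  # True
```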
- # create a Session
- session = Session()
+The :class:`_orm.Session` automatically expires all instances along transaction
+boundaries (i.e. when the current transaction is committed or rolled back) so
+that with a normally isolated transaction, data will refresh itself when a new
+transaction begins.
+
+The :class:`_query.Query` object is introduced in great detail in
+:ref:`ormtutorial_toplevel`, and further documented in
+:ref:`query_api_toplevel`.
+
+.. seealso::
+
+ :ref:`ormtutorial_toplevel`
+
+ :meth:`_orm.Session.query`
+
+ :ref:`query_api_toplevel`
+
+Querying (2.0 style)
+--------------------
+
+.. versionadded:: 1.4
+
+SQLAlchemy 2.0 will standardize the production of SELECT statements across both
+Core and ORM by making direct use of the :class:`_sql.Select` object within the
+ORM, removing the need for there to be a separate :class:`_orm.query.Query`
+object. This mode of operation is available in SQLAlchemy 1.4 right now to
+support applications that will be migrating to 2.0. The :class:`_orm.Session`
+must be instantiated with the
+:paramref:`_orm.Session.future` flag set to ``True``; from that point on the
+:meth:`_orm.Session.execute` method will return ORM results via the
+standard :class:`_engine.Result` object when invoking :func:`_sql.select`
+statements that use ORM entities::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import Session
+
+ session = Session(engine, future=True)
+
+ # query from a class
+ statement = select(User).filter_by(name="ed")
+
+ # list of first element of each row (i.e. User objects)
+ result = session.execute(statement).scalars().all()
+
+ # query with multiple classes
+ statement = select(User, Address).join('addresses').filter_by(name='ed')
+
+ # list of tuples
+ result = session.execute(statement).all()
+
+ # query with ORM columns
+ statement = select(User.name, User.fullname)
+
+ # list of tuples
+ result = session.execute(statement).all()
+
+It's important to note that while methods of :class:`_query.Query` such as
+:meth:`_query.Query.all` and :meth:`_query.Query.one` will return instances
+of ORM mapped objects directly in the case that only a single complete
+entity were requested, the :class:`_engine.Result` object returned
+by :meth:`_orm.Session.execute` will always deliver rows (named tuples)
+by default; this is so that results against single or multiple ORM objects,
+columns, tables, etc. may all be handled identically.
+
+If only one ORM entity was queried, the rows returned will have exactly one
+column, consisting of the ORM-mapped object instance for each row. To convert
+these rows into object instances without the tuples, the
+:meth:`_engine.Result.scalars` method is used to first apply a "scalars" filter
+to the result; then the :class:`_engine.Result` can be iterated or deliver rows
+via standard methods such as :meth:`_engine.Result.all`,
+:meth:`_engine.Result.first`, etc.
+
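+Conceptually, the "scalars" filter simply takes the first element of each
+row; a minimal sketch of the idea (the real :meth:`_engine.Result.scalars`
+is considerably richer):

```python
# Illustrative sketch of the "scalars" concept (NOT the real
# Result.scalars() implementation): rows arrive as tuples, and the filter
# yields one element per row so that single-entity queries produce plain
# objects rather than one-element tuples.

rows = [("user_ed",), ("user_wendy",), ("user_jack",)]

def scalars(rows, index=0):
    for row in rows:
        yield row[index]

print(list(scalars(rows)))  # ['user_ed', 'user_wendy', 'user_jack']
```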
+.. seealso::
- # work with sess
- myobject = MyObject('foo', 'bar')
- session.add(myobject)
+ :ref:`migration_20_toplevel`
+
+Adding New or Existing Items
+----------------------------
+
+:meth:`~.Session.add` is used to place instances in the
+session. For :term:`transient` (i.e. brand new) instances, this will have the effect
+of an INSERT taking place for those instances upon the next flush. For
+instances which are :term:`persistent` (i.e. were loaded by this session), they are
+already present and do not need to be added. Instances which are :term:`detached`
+(i.e. have been removed from a session) may be re-associated with a session
+using this method::
+
+ user1 = User(name='user1')
+ user2 = User(name='user2')
+ session.add(user1)
+ session.add(user2)
+
+ session.commit() # write changes to the database
+
+To add a list of items to the session at once, use
+:meth:`~.Session.add_all`::
+
+ session.add_all([item1, item2, item3])
+
+The :meth:`~.Session.add` operation **cascades** along
+the ``save-update`` cascade. For more details see the section
+:ref:`unitofwork_cascades`.
+
+
+Deleting
+--------
+
+The :meth:`~.Session.delete` method places an instance
+into the Session's list of objects to be marked as deleted::
+
+ # mark two objects to be deleted
+ session.delete(obj1)
+ session.delete(obj2)
+
+ # commit (or flush)
session.commit()
-Above, the :class:`.sessionmaker` call creates a factory for us,
-which we assign to the name ``Session``. This factory, when
-called, will create a new :class:`.Session` object using the configurational
-arguments we've given the factory. In this case, as is typical,
-we've configured the factory to specify a particular :class:`_engine.Engine` for
-connection resources.
-
-A typical setup will associate the :class:`.sessionmaker` with an :class:`_engine.Engine`,
-so that each :class:`.Session` generated will use this :class:`_engine.Engine`
-to acquire connection resources. This association can
-be set up as in the example above, using the ``bind`` argument.
-
-When you write your application, place the
-:class:`.sessionmaker` factory at the global level. This
-factory can then
-be used by the rest of the application as the source of new :class:`.Session`
-instances, keeping the configuration for how :class:`.Session` objects
-are constructed in one place.
-
-The :class:`.sessionmaker` factory can also be used in conjunction with
-other helpers, which are passed a user-defined :class:`.sessionmaker` that
-is then maintained by the helper. Some of these helpers are discussed in the
-section :ref:`session_faq_whentocreate`.
-
-Adding Additional Configuration to an Existing sessionmaker()
--------------------------------------------------------------
-
-A common scenario is where the :class:`.sessionmaker` is invoked
-at module import time, however the generation of one or more :class:`_engine.Engine`
-instances to be associated with the :class:`.sessionmaker` has not yet proceeded.
-For this use case, the :class:`.sessionmaker` construct offers the
-:meth:`.sessionmaker.configure` method, which will place additional configuration
-directives into an existing :class:`.sessionmaker` that will take place
-when the construct is invoked::
+:meth:`_orm.Session.delete` marks an object for deletion, which will
+result in a DELETE statement emitted for each primary key affected.
+Before the pending deletes are flushed, objects marked by "delete" are present
+in the :attr:`_orm.Session.deleted` collection. After the DELETE, they
+are expunged from the :class:`_orm.Session`, which becomes permanent after
+the transaction is committed.
+
+There are various important behaviors related to the
+:meth:`_orm.Session.delete` operation, particularly in how relationships to
+other objects and collections are handled. There's more information on how
+this works in the section :ref:`unitofwork_cascades`, but in general
+the rules are:
+
+* Rows that correspond to mapped objects that are related to a deleted
+ object via the :func:`_orm.relationship` directive are **not
+ deleted by default**. If those objects have a foreign key constraint back
+ to the row being deleted, those columns are set to NULL. This will
+ cause a constraint violation if the columns are non-nullable.
+
+* To change the "SET NULL" into a DELETE of a related object's row, use the
+ :ref:`cascade_delete` cascade on the :func:`_orm.relationship`.
+
+* Rows that are in tables linked as "many-to-many" tables, via the
+ :paramref:`_orm.relationship.secondary` parameter, **are** deleted in all
+ cases when the object they refer to is deleted.
+
+* When related objects include a foreign key constraint back to the object
+ being deleted, and the related collections to which they belong are not
+ currently loaded into memory, the unit of work will emit a SELECT to fetch
+ all related rows, so that their primary key values can be used to emit either
+ UPDATE or DELETE statements on those related rows. In this way, the ORM
+ without further instruction will perform the function of ON DELETE CASCADE,
+ even if this is configured on Core :class:`_schema.ForeignKeyConstraint`
+ objects.
+
+* The :paramref:`_orm.relationship.passive_deletes` parameter can be used
+ to tune this behavior and rely upon "ON DELETE CASCADE" more naturally;
+ when set to True, this SELECT operation will no longer take place, however
+ rows that are locally present will still be subject to explicit SET NULL
+ or DELETE. Setting :paramref:`_orm.relationship.passive_deletes` to
+ the string ``"all"`` will disable **all** related object update/delete.
+
+* When the DELETE occurs for an object marked for deletion, the object
+ is not automatically removed from collections or object references that
+ refer to it. When the :class:`_orm.Session` is expired, these collections
+ may be loaded again so that the object is no longer present. However,
+ it is preferable that instead of using :meth:`_orm.Session.delete` for
+ these objects, the object should instead be removed from its collection
+ and then :ref:`cascade_delete_orphan` should be used so that it is
+ deleted as a secondary effect of that collection removal. See the
+ section :ref:`session_deleting_from_collections` for an example of this.
+.. seealso::
- from sqlalchemy.orm import sessionmaker
- from sqlalchemy import create_engine
+ :ref:`cascade_delete` - describes "delete cascade", which marks related
+ objects for deletion when a lead object is deleted.
- # configure Session class with desired options
- Session = sessionmaker()
+ :ref:`cascade_delete_orphan` - describes "delete orphan cascade", which
+ marks related objects for deletion when they are de-associated from their
+ lead object.
- # later, we create the engine
- engine = create_engine('postgresql://...')
+ :ref:`session_deleting_from_collections` - important background on
+   :meth:`_orm.Session.delete` as it involves relationships being refreshed
+ in memory.
- # associate it with our custom Session class
- Session.configure(bind=engine)
+.. _session_flushing:
- # work with the session
- session = Session()
+Flushing
+--------
-Creating Ad-Hoc Session Objects with Alternate Arguments
---------------------------------------------------------
+When the :class:`~sqlalchemy.orm.session.Session` is used with its default
+configuration, the flush step is nearly always done transparently.
+Specifically, the flush occurs before any individual
+SQL statement is issued as a result of a :class:`_query.Query` or
+a :term:`2.0-style` :meth:`_orm.Session.execute` call, as well as within the
+:meth:`~.Session.commit` call before the transaction is
+committed. It also occurs before a SAVEPOINT is issued when
+:meth:`~.Session.begin_nested` is used.
+
+Regardless of the autoflush setting, a flush can always be forced by issuing
+:meth:`~.Session.flush`::
+
+ session.flush()
+
+The "flush-on-Query" aspect of the behavior can be disabled by constructing
+:class:`.sessionmaker` with the flag ``autoflush=False``::
+
+ Session = sessionmaker(autoflush=False)
+
+Additionally, autoflush can be temporarily disabled by setting the
+``autoflush`` flag at any time::
+
+ mysession = Session()
+ mysession.autoflush = False
+
+More conveniently, it can be turned off within a context managed block using :attr:`.Session.no_autoflush`::
+
+ with mysession.no_autoflush:
+ mysession.add(some_object)
+ mysession.flush()
+
+The flush process *always* occurs within a transaction, even if the
+:class:`~sqlalchemy.orm.session.Session` has been configured with
+``autocommit=True``, a setting that disables the session's persistent
+transactional state. If no transaction is present,
+:meth:`~.Session.flush` creates its own transaction and
+commits it. Any failures during flush will always result in a rollback of
+whatever transaction is present. If the Session is not in ``autocommit=True``
+mode, an explicit call to :meth:`~.Session.rollback` is
+required after a flush fails, even though the underlying transaction will have
+been rolled back already - this is so that the overall nesting pattern of
+so-called "subtransactions" is consistently maintained.
+
+.. _bulk_update_delete:
+
+Bulk UPDATE and DELETE
+----------------------
+
+The sections above on :meth:`_orm.Session.flush` and :meth:`_orm.Session.delete`
+detail how rows can be inserted, updated and deleted in the database,
+based on primary key identities that are referred towards by mapped Python
+objects in the application. The :class:`_orm.Session` can also emit UPDATE
+and DELETE statements with arbitrary WHERE clauses as well, and at the same
+time refresh locally present objects which match those rows.
+
+To emit a bulk UPDATE in :term:`1.x style`, the :meth:`_query.Query.update` method
+may be used::
+
+    session.query(User).filter(User.name == "squidward").\
+        update({"name": "spongebob"}, synchronize_session="fetch")
+
+Above, an UPDATE will be emitted against all rows that match the name
+"squidward" and be updated to the name "spongebob". The
+:paramref:`_query.Query.update.synchronize_session` parameter referring to
+"fetch" indicates the list of affected primary keys should be fetched either
+via a separate SELECT statement or via RETURNING if the backend database supports it;
+objects locally present in memory will be updated in memory based on these
+primary key identities.
+
+For bulk UPDATEs in :term:`2.0 style`, :meth:`_orm.Session.execute` is used with the
+Core :class:`_sql.Update` construct. The :meth:`_orm.Session` must
+be configured with :paramref:`_orm.Session.future` set to ``True``::
+
+ session = Session(future=True)
+
+ from sqlalchemy import update
+
+    stmt = update(User).where(User.name == "squidward").values(name="spongebob").\
+        execution_options(synchronize_session="fetch")
+
+ session.execute(stmt)
+
+Above, the :meth:`_dml.Update.execution_options` method may be used to
+establish execution-time options such as "synchronize_session".
+
+DELETEs work in the same way as UPDATE except there is no "values / set"
+clause established. When synchronize_session is used, matching objects
+within the :class:`_orm.Session` will be marked as deleted and expunged.
+
+Bulk delete, :term:`1.x style`::
-For the use case where an application needs to create a new :class:`.Session` with
-special arguments that deviate from what is normally used throughout the application,
-such as a :class:`.Session` that binds to an alternate
-source of connectivity, or a :class:`.Session` that should
-have other arguments such as ``expire_on_commit`` established differently from
-what most of the application wants, specific arguments can be passed to the
-:class:`.sessionmaker` factory's :meth:`.sessionmaker.__call__` method.
-These arguments will override whatever
-configurations have already been placed, such as below, where a new :class:`.Session`
-is constructed against a specific :class:`_engine.Connection`::
+    session.query(User).filter(User.name == "squidward").\
+        delete(synchronize_session="fetch")
- # at the module level, the global sessionmaker,
- # bound to a specific Engine
- Session = sessionmaker(bind=engine)
+Bulk delete, :term:`2.0 style`. The :meth:`_orm.Session` must
+be configured with :paramref:`_orm.Session.future` set to ``True``::
- # later, some unit of code wants to create a
- # Session that is bound to a specific Connection
- conn = engine.connect()
- session = Session(bind=conn)
+ session = Session(future=True)
+ from sqlalchemy import delete
-The typical rationale for the association of a :class:`.Session` with a specific
-:class:`_engine.Connection` is that of a test fixture that maintains an external
-transaction - see :ref:`session_external_transaction` for an example of this.
+    stmt = delete(User).where(User.name == "squidward").\
+        execution_options(synchronize_session="fetch")
+
+ session.execute(stmt)
+
+
+With both the 1.x and 2.0 form of bulk updates and deletes, the following
+values for ``synchronize_session`` are supported:
+
+* ``False`` - don't synchronize the session. This option is the most
+  efficient and is reliable once the session is expired, which
+  typically occurs after a :meth:`_orm.Session.commit` call, or when
+  :meth:`_orm.Session.expire_all` is used explicitly. Before the
+  expiration, objects that were updated or deleted in the database may
+  still remain in the session with stale values, which can lead to
+  confusing results.
+
+* ``'fetch'`` - Retrieves the primary key identity of affected rows by either
+ performing a SELECT before the UPDATE or DELETE, or by using RETURNING
+ if the database supports it, so that in-memory objects which are affected
+ by the operation can be refreshed with new values (updates) or expunged
+  from the :class:`_orm.Session` (deletes).
+
+* ``'evaluate'`` - Evaluate the WHERE criteria given in the UPDATE or DELETE
+ statement in Python, to locate matching objects within the
+ :class:`_orm.Session`. This approach does not add any round trips and in
+  the absence of RETURNING support is more efficient. For UPDATE or DELETE
+ statements with complex criteria, the ``'evaluate'`` strategy may not be
+ able to evaluate the expression in Python and will raise an error. If
+ this occurs, use the ``'fetch'`` strategy for the operation instead.
+
+ .. warning::
+
+ The ``"evaluate"`` strategy should be avoided if an UPDATE operation is
+ to run on a :class:`_orm.Session` that has many objects which have
+ been expired, because it will necessarily need to refresh those objects
+ as they are located which will emit a SELECT for each one. The
+ :class:`_orm.Session` may have expired objects if it is being used
+ across multiple :meth:`_orm.Session.commit` calls and the
+ :paramref:`_orm.Session.expire_on_commit` flag is at its default
+ value of ``True``.
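+
+As a hypothetical illustration (the ``User`` mapping below is for example
+purposes only), simple equality criteria can be evaluated against objects
+already present in the :class:`_orm.Session` without any additional SELECT:
+
```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(User(name="squidward"))
session.commit()

user = session.query(User).first()  # now present in the session

# "evaluate" applies the simple "==" criteria in Python, so the
# in-memory object is synchronized without a SELECT or RETURNING
session.query(User).filter(User.name == "squidward").update(
    {"name": "spongebob"}, synchronize_session="evaluate"
)
updated_name = user.name
session.rollback()
session.close()
```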
+
+
+.. warning:: **Additional Caveats for bulk updates and deletes**
+
+  The bulk UPDATE and DELETE features bypass ORM unit-of-work automation in
+  favor of being able to emit a single UPDATE or DELETE statement
+  without complexity.
+
+ * The operations do not offer in-Python cascading of
+ relationships - it is assumed that ON UPDATE CASCADE and/or
+ ON DELETE CASCADE is
+ configured for any foreign key references which require
+ it, otherwise the database may emit an integrity
+ violation if foreign key references are being enforced.
+
+  * After the UPDATE or DELETE, dependent objects in the
+    :class:`.Session` which were impacted by an ON UPDATE CASCADE or ON
+    DELETE CASCADE on related tables may not contain the current state;
+    this issue is resolved once the :class:`.Session` is expired, which
+    normally occurs upon :meth:`.Session.commit` or can be forced by
+    using :meth:`.Session.expire_all`.
+
+ * The ``'fetch'`` strategy, when run on a database that does not support
+ RETURNING such as MySQL or SQLite, results in an additional SELECT
+ statement emitted which may reduce performance. Use SQL echoing when
+ developing to evaluate the impact of SQL emitted.
+
+ * Bulk UPDATEs and DELETEs do not handle joined table inheritance
+ automatically. If the operation is against multiple tables, typically
+ individual UPDATE / DELETE statements against the individual tables
+ should be used. Some databases support multiple table UPDATEs.
+ Similar guidelines as those detailed at :ref:`multi_table_updates`
+ may be applied.
+
+  * The polymorphic identity WHERE criteria is **not** included for
+    single- or joined-table updates - this must be added **manually**,
+    even for single table inheritance.
+
+ * In order to intercept bulk UPDATE and DELETE operations with event
+ handlers, use the :meth:`_orm.SessionEvents.do_orm_execute` event.
+
+
+
+.. _session_committing:
+
+Committing
+----------
+
+:meth:`~.Session.commit` is used to commit the current
+transaction. It always issues :meth:`~.Session.flush`
+beforehand to flush any remaining state to the database; this is independent
+of the "autoflush" setting.
+
+If the :class:`_orm.Session` does not currently have a transaction present,
+the method will silently pass, unless the legacy "autocommit" mode is enabled,
+in which case it will raise an error.
+
+Another behavior of :meth:`~.Session.commit` is that by
+default it expires the state of all instances present after the commit is
+complete. This is so that when the instances are next accessed, either through
+attribute access or by them being present in the result of a SELECT,
+they receive the most recent state. This behavior may be controlled
+by the :paramref:`_orm.Session.expire_on_commit` flag, which may be set
+to ``False`` when this behavior is undesirable.
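+
+As a sketch (the in-memory engine here is a placeholder), the flag may be
+set on an individual :class:`_orm.Session` or configured once on a
+:class:`_orm.sessionmaker` factory:
+
```python
from sqlalchemy import create_engine
from sqlalchemy.orm import Session, sessionmaker

engine = create_engine("sqlite://")

# per-Session: objects keep their loaded state after commit
session = Session(engine, expire_on_commit=False)

# or configured on a factory, applying to all Sessions it creates
SessionFactory = sessionmaker(engine, expire_on_commit=False)
factory_session = SessionFactory()
```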
+
+.. _session_rollback:
+
+Rolling Back
+------------
+
+:meth:`~.Session.rollback` rolls back the current
+transaction. With a default configured session, the post-rollback state of the
+session is as follows:
+
+ * All transactions are rolled back and all connections returned to the
+ connection pool, unless the Session was bound directly to a Connection, in
+ which case the connection is still maintained (but still rolled back).
+ * Objects which were initially in the :term:`pending` state when they were added
+ to the :class:`~sqlalchemy.orm.session.Session` within the lifespan of the
+ transaction are expunged, corresponding to their INSERT statement being
+ rolled back. The state of their attributes remains unchanged.
+ * Objects which were marked as :term:`deleted` within the lifespan of the
+ transaction are promoted back to the :term:`persistent` state, corresponding to
+ their DELETE statement being rolled back. Note that if those objects were
+ first :term:`pending` within the transaction, that operation takes precedence
+ instead.
+ * All objects not expunged are fully expired.
+
+With that state understood, the :class:`~sqlalchemy.orm.session.Session` may
+safely continue usage after a rollback occurs.
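+
+To illustrate with a hypothetical mapping (names are for example purposes
+only), a :term:`pending` object is expunged by the rollback while its
+attribute state is left unchanged:
+
```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
user = User(name="patrick")
session.add(user)   # pending; the INSERT occurs at flush time
session.flush()
session.rollback()  # INSERT rolled back; "user" is expunged

was_expunged = user not in session
name_after_rollback = user.name  # attribute state is unchanged
session.close()
```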
+
+When a :meth:`~.Session.flush` fails, typically for
+reasons like primary key, foreign key, or "not nullable" constraint
+violations, a ROLLBACK is issued
+automatically (it's currently not possible for a flush to continue after a
+partial failure). However, the :class:`_orm.Session` goes into a state
+known as "inactive" at this point, and the calling application must
+always call the :meth:`_orm.Session.rollback` method explicitly so that
+the :class:`_orm.Session` can go back into a useable state (it can also
+be simply closed and discarded). See the FAQ entry at
+:ref:`faq_session_rollback` for further discussion.
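+
+As a sketch of this pattern (the mapping and the NOT NULL violation below
+are illustrative only), the explicit :meth:`_orm.Session.rollback` call
+restores the :class:`_orm.Session` to a usable state after a failed flush:
+
```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(User(id=1, name=None))  # violates the NOT NULL constraint

try:
    session.flush()
except IntegrityError:
    # the underlying transaction was already rolled back by the failed
    # flush, but Session.rollback() must still be called explicitly
    session.rollback()

# the Session is usable again
session.add(User(id=1, name="spongebob"))
session.commit()
recovered_count = session.query(User).count()
session.close()
```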
+
+
+
+Closing
+-------
+
+The :meth:`~.Session.close` method issues a
+:meth:`~.Session.expunge_all`, and :term:`releases` any
+transactional/connection resources. When connections are returned to the
+connection pool, transactional state is rolled back as well.
+
+It's recommended that the scope of a :class:`_orm.Session` be limited by
+a call to :meth:`_orm.Session.close` at the end, especially if the
+:meth:`_orm.Session.commit` or :meth:`_orm.Session.rollback` methods are not
+used. The :class:`_orm.Session` may be used as a context manager to ensure
+that :meth:`_orm.Session.close` is called::
+
+ with Session(engine) as session:
+ result = session.execute(select(User))
+
+ # closes session automatically
.. _session_faq:
The :class:`.Session`, whenever it is used to talk to the database,
begins a database transaction as soon as it starts communicating.
-Assuming the ``autocommit`` flag is left at its recommended default
-of ``False``, this transaction remains in progress until the :class:`.Session`
+This transaction remains in progress until the :class:`.Session`
is rolled back, committed, or closed. The :class:`.Session` will
begin a new transaction if it is used again, subsequent to the previous
transaction ending; from this it follows that the :class:`.Session`
one at a time. We refer to these two concepts as **transaction scope**
and **session scope**.
-The implication here is that the SQLAlchemy ORM is encouraging the
-developer to establish these two scopes in their application,
-including not only when the scopes begin and end, but also the
-expanse of those scopes, for example should a single
-:class:`.Session` instance be local to the execution flow within a
-function or method, should it be a global object used by the
-entire application, or somewhere in between these two.
-
-The burden placed on the developer to determine this scope is one
-area where the SQLAlchemy ORM necessarily has a strong opinion
-about how the database should be used. The :term:`unit of work` pattern
-is specifically one of accumulating changes over time and flushing
-them periodically, keeping in-memory state in sync with what's
-known to be present in a local transaction. This pattern is only
-effective when meaningful transaction scopes are in place.
-
It's usually not very hard to determine the best points at which
to begin and end the scope of a :class:`.Session`, though the wide
variety of application architectures possible can introduce
challenging situations.
-A common choice is to tear down the :class:`.Session` at the same
-time the transaction ends, meaning the transaction and session scopes
-are the same. This is a great choice to start out with as it
-removes the need to consider session scope as separate from transaction
-scope.
-
-While there's no one-size-fits-all recommendation for how transaction
-scope should be determined, there are common patterns. Especially
-if one is writing a web application, the choice is pretty much established.
-
-A web application is the easiest case because such an application is already
-constructed around a single, consistent scope - this is the **request**,
-which represents an incoming request from a browser, the processing
-of that request to formulate a response, and finally the delivery of that
-response back to the client. Integrating web applications with the
-:class:`.Session` is then the straightforward task of linking the
-scope of the :class:`.Session` to that of the request. The :class:`.Session`
-can be established as the request begins, or using a :term:`lazy initialization`
-pattern which establishes one as soon as it is needed. The request
-then proceeds, with some system in place where application logic can access
-the current :class:`.Session` in a manner associated with how the actual
-request object is accessed. As the request ends, the :class:`.Session`
-is torn down as well, usually through the usage of event hooks provided
-by the web framework. The transaction used by the :class:`.Session`
-may also be committed at this point, or alternatively the application may
-opt for an explicit commit pattern, only committing for those requests
-where one is warranted, but still always tearing down the :class:`.Session`
-unconditionally at the end.
-
-Some web frameworks include infrastructure to assist in the task
-of aligning the lifespan of a :class:`.Session` with that of a web request.
-This includes products such as `Flask-SQLAlchemy <http://flask-sqlalchemy.pocoo.org>`_,
-for usage in conjunction with the Flask web framework,
-and `Zope-SQLAlchemy <http://pypi.python.org/pypi/zope.sqlalchemy>`_,
-typically used with the Pyramid framework.
-SQLAlchemy recommends that these products be used as available.
-
-In those situations where the integration libraries are not
-provided or are insufficient, SQLAlchemy includes its own "helper" class known as
-:class:`.scoped_session`. A tutorial on the usage of this object
-is at :ref:`unitofwork_contextual`. It provides both a quick way
-to associate a :class:`.Session` with the current thread, as well as
-patterns to associate :class:`.Session` objects with other kinds of
-scopes.
-
-As mentioned before, for non-web applications there is no one clear
-pattern, as applications themselves don't have just one pattern
-of architecture. The best strategy is to attempt to demarcate
-"operations", points at which a particular thread begins to perform
-a series of operations for some period of time, which can be committed
-at the end. Some examples:
+Some sample scenarios include:
+
+* Web applications. In this case, it's best to make use of the SQLAlchemy
+ integrations provided by the web framework in use. Or otherwise, the
+  basic pattern is to create a :class:`_orm.Session` at the start of a web
+  request, call the :meth:`_orm.Session.commit` method at the end of
+  web requests that do POST, PUT, or DELETE, and then close the session
+  at the end of the web request. It's also usually a good idea to set
+ :paramref:`_orm.Session.expire_on_commit` to False so that subsequent
+ access to objects that came from a :class:`_orm.Session` within the
+ view layer do not need to emit new SQL queries to refresh the objects,
+ if the transaction has been committed already.
* A background daemon which spawns off child forks
would want to create a :class:`.Session` local to each child
ThingTwo().go()
Keep the lifecycle of the session (and usually the transaction)
-**separate and external**::
+**separate and external**. The example below illustrates how this might look,
+and additionally makes use of a Python context manager (i.e. the ``with:``
+keyword) in order to manage the scope of the :class:`_orm.Session` and its
+transaction automatically::
### this is a **better** (but not the only) way to do it ###
session.query(Widget).update({"q": 18})
def run_my_program():
- session = Session()
- try:
- ThingOne().go(session)
- ThingTwo().go(session)
-
- session.commit()
- except:
- session.rollback()
- raise
- finally:
- session.close()
-
-The most comprehensive approach, recommended for more substantial applications,
-will try to keep the details of session, transaction and exception management
-as far as possible from the details of the program doing its work. For
-example, we can further separate concerns using a `context manager
-<http://docs.python.org/3/library/co
-ntextlib.html#contextlib.contextmanager>`_::
-
- ### another way (but again *not the only way*) to do it ###
-
- from contextlib import contextmanager
-
- @contextmanager
- def session_scope():
- """Provide a transactional scope around a series of operations."""
- session = Session()
- try:
- yield session
- session.commit()
- except:
- session.rollback()
- raise
- finally:
- session.close()
-
+ with Session() as session:
+ with session.begin():
+ ThingOne().go(session)
+ ThingTwo().go(session)
- def run_my_program():
- with session_scope() as session:
- ThingOne().go(session)
- ThingTwo().go(session)
+.. versionchanged:: 1.4 The :class:`_orm.Session` may be used as a context
+ manager without the use of external helper functions.
Is the Session a cache?
-----------------------
but to instead *copy* objects from one :class:`.Session` to another, often
using the :meth:`.Session.merge` method to copy the state of an object into
a new object local to a different :class:`.Session`.
-
-Basics of Using a Session
-=========================
-
-The most basic :class:`.Session` use patterns are presented here.
-
-Querying
---------
-
-The :meth:`~.Session.query` function takes one or more
-*entities* and returns a new :class:`~sqlalchemy.orm.query.Query` object which
-will issue mapper queries within the context of this Session. An entity is
-defined as a mapped class, a :class:`~sqlalchemy.orm.mapper.Mapper` object, an
-orm-enabled *descriptor*, or an ``AliasedClass`` object::
-
- # query from a class
- session.query(User).filter_by(name='ed').all()
-
- # query with multiple classes, returns tuples
- session.query(User, Address).join('addresses').filter_by(name='ed').all()
-
- # query using orm-enabled descriptors
- session.query(User.name, User.fullname).all()
-
- # query from a mapper
- user_mapper = class_mapper(User)
- session.query(user_mapper)
-
-When :class:`~sqlalchemy.orm.query.Query` returns results, each object
-instantiated is stored within the identity map. When a row matches an object
-which is already present, the same object is returned. In the latter case,
-whether or not the row is populated onto an existing object depends upon
-whether the attributes of the instance have been *expired* or not. A
-default-configured :class:`~sqlalchemy.orm.session.Session` automatically
-expires all instances along transaction boundaries, so that with a normally
-isolated transaction, there shouldn't be any issue of instances representing
-data which is stale with regards to the current transaction.
-
-The :class:`_query.Query` object is introduced in great detail in
-:ref:`ormtutorial_toplevel`, and further documented in
-:ref:`query_api_toplevel`.
-
-Adding New or Existing Items
-----------------------------
-
-:meth:`~.Session.add` is used to place instances in the
-session. For :term:`transient` (i.e. brand new) instances, this will have the effect
-of an INSERT taking place for those instances upon the next flush. For
-instances which are :term:`persistent` (i.e. were loaded by this session), they are
-already present and do not need to be added. Instances which are :term:`detached`
-(i.e. have been removed from a session) may be re-associated with a session
-using this method::
-
- user1 = User(name='user1')
- user2 = User(name='user2')
- session.add(user1)
- session.add(user2)
-
- session.commit() # write changes to the database
-
-To add a list of items to the session at once, use
-:meth:`~.Session.add_all`::
-
- session.add_all([item1, item2, item3])
-
-The :meth:`~.Session.add` operation **cascades** along
-the ``save-update`` cascade. For more details see the section
-:ref:`unitofwork_cascades`.
-
-
-Deleting
---------
-
-The :meth:`~.Session.delete` method places an instance
-into the Session's list of objects to be marked as deleted::
-
- # mark two objects to be deleted
- session.delete(obj1)
- session.delete(obj2)
-
- # commit (or flush)
- session.commit()
-
-.. _session_deleting_from_collections:
-
-Deleting Objects Referenced from Collections and Scalar Relationships
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The ORM in general never modifies the contents of a collection or scalar
-relationship during the flush process. This means, if your class has a
-:func:`_orm.relationship` that refers to a collection of objects, or a reference
-to a single object such as many-to-one, the contents of this attribute will
-not be modified when the flush process occurs. Instead, if the :class:`.Session`
-is expired afterwards, either through the expire-on-commit behavior of
-:meth:`.Session.commit` or through explicit use of :meth:`.Session.expire`,
-the referenced object or collection upon a given object associated with that
-:class:`.Session` will be cleared and will re-load itself upon next access.
-
-This behavior is not to be confused with the flush process' impact on column-
-bound attributes that refer to foreign key and primary key columns; these
-attributes are modified liberally within the flush, since these are the
-attributes that the flush process intends to manage. Nor should it be confused
-with the behavior of backreferences, as described at
-:ref:`relationships_backref`; a backreference event will modify a collection
-or scalar attribute reference, however this behavior takes place during
-direct manipulation of related collections and object references, which is
-explicit within the calling application and is outside of the flush process.
-
-A common confusion that arises regarding this behavior involves the use of the
-:meth:`~.Session.delete` method. When :meth:`.Session.delete` is invoked upon
-an object and the :class:`.Session` is flushed, the row is deleted from the
-database. Rows that refer to the target row via foreign key, assuming they
-are tracked using a :func:`_orm.relationship` between the two mapped object types,
-will also see their foreign key attributes UPDATED to null, or if delete
-cascade is set up, the related rows will be deleted as well. However, even
-though rows related to the deleted object might be themselves modified as well,
-**no changes occur to relationship-bound collections or object references on
-the objects** involved in the operation within the scope of the flush
-itself. This means if the object was a
-member of a related collection, it will still be present on the Python side
-until that collection is expired. Similarly, if the object were
-referenced via many-to-one or one-to-one from another object, that reference
-will remain present on that object until the object is expired as well.
-
-Below, we illustrate that after an ``Address`` object is marked
-for deletion, it's still present in the collection associated with the
-parent ``User``, even after a flush::
-
- >>> address = user.addresses[1]
- >>> session.delete(address)
- >>> session.flush()
- >>> address in user.addresses
- True
-
-When the above session is committed, all attributes are expired. The next
-access of ``user.addresses`` will re-load the collection, revealing the
-desired state::
-
- >>> session.commit()
- >>> address in user.addresses
- False
-
-There is a recipe for intercepting :meth:`.Session.delete` and invoking this
-expiration automatically; see `ExpireRelationshipOnFKChange <http://www.sqlalchemy.org/trac/wiki/UsageRecipes/ExpireRelationshipOnFKChange>`_ for this. However, the usual practice of
-deleting items within collections is to forego the usage of
-:meth:`~.Session.delete` directly, and instead use cascade behavior to
-automatically invoke the deletion as a result of removing the object from the
-parent collection. The ``delete-orphan`` cascade accomplishes this, as
-illustrated in the example below::
-
- class User(Base):
- __tablename__ = 'user'
-
- # ...
-
- addresses = relationship(
- "Address", cascade="all, delete-orphan")
-
- # ...
-
- del user.addresses[1]
- session.flush()
-
-Where above, upon removing the ``Address`` object from the ``User.addresses``
-collection, the ``delete-orphan`` cascade has the effect of marking the ``Address``
-object for deletion in the same way as passing it to :meth:`~.Session.delete`.
-
-The ``delete-orphan`` cascade can also be applied to a many-to-one
-or one-to-one relationship, so that when an object is de-associated from its
-parent, it is also automatically marked for deletion. Using ``delete-orphan``
-cascade on a many-to-one or one-to-one requires an additional flag
-:paramref:`_orm.relationship.single_parent` which invokes an assertion
-that this related object is not to shared with any other parent simultaneously::
-
- class User(Base):
- # ...
-
- preference = relationship(
- "Preference", cascade="all, delete-orphan",
- single_parent=True)
-
-
-Above, if a hypothetical ``Preference`` object is removed from a ``User``,
-it will be deleted on flush::
-
- some_user.preference = None
- session.flush() # will delete the Preference object
-
-.. seealso::
-
- :ref:`unitofwork_cascades` for detail on cascades.
-
-
-Deleting based on Filter Criterion
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The caveat with ``Session.delete()`` is that you need to have an object handy
-already in order to delete. The Query includes a
-:func:`~sqlalchemy.orm.query.Query.delete` method which deletes based on
-filtering criteria::
-
- session.query(User).filter(User.id==7).delete()
-
-The ``Query.delete()`` method includes functionality to "expire" objects
-already in the session which match the criteria. However it does have some
-caveats, including that "delete" and "delete-orphan" cascades won't be fully
-expressed for collections which are already loaded. See the API docs for
-:meth:`~sqlalchemy.orm.query.Query.delete` for more details.
-
-.. _session_flushing:
-
-Flushing
---------
-
-When the :class:`~sqlalchemy.orm.session.Session` is used with its default
-configuration, the flush step is nearly always done transparently.
-Specifically, the flush occurs before any individual
-:class:`~sqlalchemy.orm.query.Query` is issued, as well as within the
-:meth:`~.Session.commit` call before the transaction is
-committed. It also occurs before a SAVEPOINT is issued when
-:meth:`~.Session.begin_nested` is used.
-
-Regardless of the autoflush setting, a flush can always be forced by issuing
-:meth:`~.Session.flush`::
-
- session.flush()
-
-The "flush-on-Query" aspect of the behavior can be disabled by constructing
-:class:`.sessionmaker` with the flag ``autoflush=False``::
-
- Session = sessionmaker(autoflush=False)
-
-Additionally, autoflush can be temporarily disabled by setting the
-``autoflush`` flag at any time::
-
- mysession = Session()
- mysession.autoflush = False
-
-More conveniently, it can be turned off within a context managed block using :attr:`.Session.no_autoflush`::
-
- with mysession.no_autoflush:
- mysession.add(some_object)
- mysession.flush()
-
-The flush process *always* occurs within a transaction, even if the
-:class:`~sqlalchemy.orm.session.Session` has been configured with
-``autocommit=True``, a setting that disables the session's persistent
-transactional state. If no transaction is present,
-:meth:`~.Session.flush` creates its own transaction and
-commits it. Any failures during flush will always result in a rollback of
-whatever transaction is present. If the Session is not in ``autocommit=True``
-mode, an explicit call to :meth:`~.Session.rollback` is
-required after a flush fails, even though the underlying transaction will have
-been rolled back already - this is so that the overall nesting pattern of
-so-called "subtransactions" is consistently maintained.
-
-.. _session_committing:
-
-Committing
-----------
-
-:meth:`~.Session.commit` is used to commit the current
-transaction. It always issues :meth:`~.Session.flush`
-beforehand to flush any remaining state to the database; this is independent
-of the "autoflush" setting. If no transaction is present, it raises an error.
-Note that the default behavior of the :class:`~sqlalchemy.orm.session.Session`
-is that a "transaction" is always present; this behavior can be disabled by
-setting ``autocommit=True``. In autocommit mode, a transaction can be
-initiated by calling the :meth:`~.Session.begin` method.
-
-.. note::
-
- The term "transaction" here refers to a transactional
- construct within the :class:`.Session` itself which may be
- maintaining zero or more actual database (DBAPI) transactions. An individual
- DBAPI connection begins participation in the "transaction" as it is first
- used to execute a SQL statement, then remains present until the session-level
- "transaction" is completed. See :ref:`unitofwork_transaction` for
- further detail.
-
-Another behavior of :meth:`~.Session.commit` is that by
-default it expires the state of all instances present after the commit is
-complete. This is so that when the instances are next accessed, either through
-attribute access or by them being present in a
-:class:`~sqlalchemy.orm.query.Query` result set, they receive the most recent
-state. To disable this behavior, configure
-:class:`.sessionmaker` with ``expire_on_commit=False``.
-
-Normally, instances loaded into the :class:`~sqlalchemy.orm.session.Session`
-are never changed by subsequent queries; the assumption is that the current
-transaction is isolated so the state most recently loaded is correct as long
-as the transaction continues. Setting ``autocommit=True`` works against this
-model to some degree since the :class:`~sqlalchemy.orm.session.Session`
-behaves in exactly the same way with regard to attribute state, except no
-transaction is present.
-
-.. _session_rollback:
-
-Rolling Back
-------------
-
-:meth:`~.Session.rollback` rolls back the current
-transaction. With a default configured session, the post-rollback state of the
-session is as follows:
-
- * All transactions are rolled back and all connections returned to the
- connection pool, unless the Session was bound directly to a Connection, in
- which case the connection is still maintained (but still rolled back).
- * Objects which were initially in the *pending* state when they were added
- to the :class:`~sqlalchemy.orm.session.Session` within the lifespan of the
- transaction are expunged, corresponding to their INSERT statement being
- rolled back. The state of their attributes remains unchanged.
- * Objects which were marked as *deleted* within the lifespan of the
- transaction are promoted back to the *persistent* state, corresponding to
- their DELETE statement being rolled back. Note that if those objects were
- first *pending* within the transaction, that operation takes precedence
- instead.
- * All objects not expunged are fully expired.
-
-With that state understood, the :class:`~sqlalchemy.orm.session.Session` may
-safely continue usage after a rollback occurs.
-
-When a :meth:`~.Session.flush` fails, typically for
-reasons like primary key, foreign key, or "not nullable" constraint
-violations, a :meth:`~.Session.rollback` is issued
-automatically (it's currently not possible for a flush to continue after a
-partial failure). However, the flush process always uses its own transactional
-demarcator called a *subtransaction*, which is described more fully in the
-docstrings for :class:`~sqlalchemy.orm.session.Session`. What it means here is
-that even though the database transaction has been rolled back, the end user
-must still issue :meth:`~.Session.rollback` to fully
-reset the state of the :class:`~sqlalchemy.orm.session.Session`.
-
-
-Closing
--------
-
-The :meth:`~.Session.close` method issues a
-:meth:`~.Session.expunge_all`, and :term:`releases` any
-transactional/connection resources. When connections are returned to the
-connection pool, transactional state is rolled back as well.
-
-
Managing Transactions
=====================
-A newly constructed :class:`.Session` may be said to be in the "begin" state.
-In this state, the :class:`.Session` has not established any connection or
-transactional state with any of the :class:`_engine.Engine` objects that may be associated
-with it.
-
-The :class:`.Session` then receives requests to operate upon a database connection.
-Typically, this means it is called upon to execute SQL statements using a particular
-:class:`_engine.Engine`, which may be via :meth:`.Session.query`, :meth:`.Session.execute`,
-or within a flush operation of pending data, which occurs when such state exists
-and :meth:`.Session.commit` or :meth:`.Session.flush` is called.
-
-As these requests are received, each new :class:`_engine.Engine` encountered is associated
-with an ongoing transactional state maintained by the :class:`.Session`.
-When the first :class:`_engine.Engine` is operated upon, the :class:`.Session` can be said
-to have left the "begin" state and entered "transactional" state. For each
-:class:`_engine.Engine` encountered, a :class:`_engine.Connection` is associated with it,
-which is acquired via the :meth:`_engine.Engine.connect` method. If a
-:class:`_engine.Connection` was directly associated with the :class:`.Session` (see :ref:`session_external_transaction`
-for an example of this), it is
-added to the transactional state directly.
-
-For each :class:`_engine.Connection`, the :class:`.Session` also maintains a :class:`.Transaction` object,
-which is acquired by calling :meth:`_engine.Connection.begin` on each :class:`_engine.Connection`,
-or if the :class:`.Session`
-object has been established using the flag ``twophase=True``, a :class:`.TwoPhaseTransaction`
-object acquired via :meth:`_engine.Connection.begin_twophase`. These transactions are all committed or
-rolled back corresponding to the invocation of the
-:meth:`.Session.commit` and :meth:`.Session.rollback` methods. A commit operation will
-also call the :meth:`.TwoPhaseTransaction.prepare` method on all transactions if applicable.
-
-When the transactional state is completed after a rollback or commit, the :class:`.Session`
-:term:`releases` all :class:`.Transaction` and :class:`_engine.Connection` resources,
-and goes back to the "begin" state, which
-will again invoke new :class:`_engine.Connection` and :class:`.Transaction` objects as new
-requests to emit SQL statements are received.
-
-The example below illustrates this lifecycle::
-
- engine = create_engine("...")
- Session = sessionmaker(bind=engine)
+.. versionchanged:: 1.4 Session transaction management has been revised
+ to be clearer and easier to use. In particular, it now features
+ "autobegin" operation, which means the point at which a transaction begins
+ may be controlled, without using the legacy "autocommit" mode.
+
+The :class:`_orm.Session` tracks the state of a single "virtual" transaction
+at a time, using an object called
+:class:`_orm.SessionTransaction`. This object then makes use of the underlying
+:class:`_engine.Engine` or engines to which the :class:`_orm.Session`
+object is bound in order to start real connection-level transactions using
+the :class:`_engine.Connection` object as needed.
+
+This "virtual" transaction is created automatically when needed, or can
+alternatively be started using the :meth:`_orm.Session.begin` method. To
+as great a degree as possible, Python context manager use is supported both
+at the level of creating :class:`_orm.Session` objects and at the level of
+maintaining the scope of the :class:`_orm.SessionTransaction`.
+
+Below, assume we start with a :class:`_orm.Session`::
- # new session. no connections are in use.
- session = Session()
- try:
- # first query. a Connection is acquired
- # from the Engine, and a Transaction
- # started.
- item1 = session.query(Item).get(1)
+ from sqlalchemy.orm import Session
+ session = Session(engine)
- # second query. the same Connection/Transaction
- # are used.
- item2 = session.query(Item).get(2)
+We can now run operations within a demarcated transaction using a context
+manager::
- # pending changes are created.
- item1.foo = 'bar'
- item2.bar = 'foo'
+ with session.begin():
+ session.add(some_object())
+ session.add(some_other_object())
+ # commits transaction at the end, or rolls back if there
+ # was an exception raised
+
+At the end of the above context, assuming no exceptions were raised, any
+pending objects will be flushed to the database and the database transaction
+will be committed. If an exception was raised within the above block, then the
+transaction would be rolled back. In both cases, the above
+:class:`_orm.Session` subsequent to exiting the block is ready to be used in
+subsequent transactions.
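The rollback-on-exception behavior of the :meth:`_orm.Session.begin` context manager can be observed directly; the sketch below is illustrative only and assumes an in-memory SQLite database as a stand-in for a real one:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")  # hypothetical in-memory database
session = Session(engine)

try:
    with session.begin():
        session.execute(text("SELECT 1"))
        raise RuntimeError("simulated failure")
except RuntimeError:
    pass

# the context manager rolled the transaction back; the Session is
# immediately ready to begin a new transaction
assert not session.in_transaction()

with session.begin():
    session.execute(text("SELECT 1"))
# this time the block exited cleanly and the transaction was committed
```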
+
+The :meth:`_orm.Session.begin` method is optional, and the
+:class:`_orm.Session` may also be used in a commit-as-you-go approach, where it
+will begin transactions automatically as needed; these need only be committed
+or rolled back::
+
+ session = Session(engine)
+
+ session.add(some_object())
+ session.add(some_other_object())
+
+ session.commit() # commits
+
+ # will automatically begin again
+ result = session.execute(<some SELECT statement>)
+ session.add_all([more_objects, ...])
+ session.commit() # commits
+
+ session.add(still_another_object)
+ session.flush() # flush still_another_object
+ session.rollback() # rolls back still_another_object
+
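The point at which autobegin occurs can be observed via :meth:`_orm.Session.in_transaction`. Below is a minimal sketch, assuming an in-memory SQLite database:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")  # hypothetical in-memory database
session = Session(engine)

# no transaction exists until the Session is first put to work
assert not session.in_transaction()

session.execute(text("SELECT 1"))  # autobegin takes place here
assert session.in_transaction()

session.commit()
# the transaction is complete; the next operation will autobegin again
assert not session.in_transaction()
```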
+The :class:`_orm.Session` itself features a :meth:`_orm.Session.close`
+method. If the :class:`_orm.Session` has an ongoing transaction that
+has not yet been committed or rolled back, this method will cancel
+(i.e. roll back) that transaction, and also expunge all objects contained
+within the :class:`_orm.Session` object's state. If the :class:`_orm.Session`
+is being used in such a way that a call to :meth:`_orm.Session.commit`
+or :meth:`_orm.Session.rollback` is not guaranteed (e.g. not within a context
+manager or similar), the :class:`_orm.Session.close` method may be used
+to ensure all resources are released::
+
+ # expunges all objects, releases all transactions unconditionally
+ # (with rollback), releases all database connections back to their
+ # engines
+ session.close()
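A sketch of the effect of :meth:`_orm.Session.close`, again assuming a hypothetical in-memory SQLite database:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")  # hypothetical in-memory database
session = Session(engine)

session.execute(text("SELECT 1"))
assert session.in_transaction()

# close() rolls back the open transaction and releases the connection;
# the Session may still be used afterwards and will autobegin anew
session.close()
assert not session.in_transaction()
```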
- # commit. The pending changes above
- # are flushed via flush(), the Transaction
- # is committed, the Connection object closed
- # and discarded, the underlying DBAPI connection
- # returned to the connection pool.
- session.commit()
- except:
- # on rollback, the same closure of state
- # as that of commit proceeds.
- session.rollback()
- raise
- finally:
- # close the Session. This will expunge any remaining
- # objects as well as reset any existing SessionTransaction
- # state. Neither of these steps are usually essential.
- # However, if the commit() or rollback() itself experienced
- # an unanticipated internal failure (such as due to a mis-behaved
- # user-defined event handler), .close() will ensure that
- # invalid state is removed.
- session.close()
+Finally, the session construction / close process can itself be run
+via context manager. This is the best way to ensure that the use of a
+:class:`_orm.Session` object is framed within a fixed block.
+Illustrated via the :class:`_orm.Session` constructor
+first::
+
+ with Session(engine) as session:
+ session.add(some_object())
+ session.add(some_other_object())
+
+ session.commit() # commits
+
+ session.add(still_another_object)
+ session.flush() # flush still_another_object
+
+ session.commit() # commits
+
+ result = session.execute(<some SELECT statement>)
+
+ # remaining transactional state from the .execute() call is
+ # discarded
+
+Similarly, the :class:`_orm.sessionmaker` can be used in the same way::
+
+ Session = sessionmaker(engine)
+
+ with Session() as session:
+ with session.begin():
+ session.add(some_object)
+ # commits
+
+ # closes the Session
+
+:class:`_orm.sessionmaker` itself includes a :meth:`_orm.sessionmaker.begin`
+method to allow both operations to take place at once::
+
+ with Session.begin() as session:
+ session.add(some_object)
delineated using the :meth:`~.Session.begin_nested`
method::
+
Session = sessionmaker()
- session = Session()
- session.add(u1)
- session.add(u2)
- session.begin_nested() # establish a savepoint
- session.add(u3)
- session.rollback() # rolls back u3, keeps u1 and u2
+ with Session.begin() as session:
+ session.add(u1)
+ session.add(u2)
- session.commit() # commits u1 and u2
+ nested = session.begin_nested() # establish a savepoint
+ session.add(u3)
+ nested.rollback() # rolls back u3, keeps u1 and u2
-:meth:`~.Session.begin_nested` may be called any number
-of times, which will issue a new SAVEPOINT with a unique identifier for each
-call. For each :meth:`~.Session.begin_nested` call, a
-corresponding :meth:`~.Session.rollback` or
-:meth:`~.Session.commit` must be issued. (But note that if the return value is
-used as a context manager, i.e. in a with-statement, then this rollback/commit
-is issued by the context manager upon exiting the context, and so should not be
-added explicitly.)
+ # commits u1 and u2
-When :meth:`~.Session.begin_nested` is called, a
-:meth:`~.Session.flush` is unconditionally issued
-(regardless of the ``autoflush`` setting). This is so that when a
-:meth:`~.Session.rollback` occurs, the full state of the
-session is expired, thus causing all subsequent attribute/instance access to
-reference the full state of the :class:`~sqlalchemy.orm.session.Session` right
-before :meth:`~.Session.begin_nested` was called.
+Each time :meth:`_orm.Session.begin_nested` is called, a new "SAVEPOINT"
+command is emitted to the database with a unique identifier. When
+:meth:`_orm.SessionTransaction.commit` is called, "RELEASE SAVEPOINT"
+is emitted on the database, and if instead
+:meth:`_orm.SessionTransaction.rollback` is called, "ROLLBACK TO SAVEPOINT"
+is emitted.
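The savepoint lifecycle can be sketched purely in terms of Session state. This is an illustrative example only; it uses a hypothetical in-memory SQLite database and deliberately avoids data round trips, since full SAVEPOINT support on the SQLite driver may require extra connection configuration:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")  # hypothetical in-memory database
session = Session(engine)

session.execute(text("SELECT 1"))   # autobegin the outer transaction

nested = session.begin_nested()     # SAVEPOINT
assert session.in_nested_transaction()

nested.rollback()                   # ROLLBACK TO SAVEPOINT
# the outer transaction remains in progress
assert session.in_transaction()
assert not session.in_nested_transaction()

session.rollback()                  # ends the outer transaction
```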
-:meth:`~.Session.begin_nested`, in the same manner as the less often
-used :meth:`~.Session.begin` method, returns a :class:`.SessionTransaction` object
-which works as a context manager.
-It can be succinctly used around individual record inserts in order to catch
-things like unique constraint exceptions::
+:meth:`_orm.Session.begin_nested` may also be used as a context manager in the
+same manner as that of the :meth:`_orm.Session.begin` method::
for record in records:
try:
print("Skipped record %s" % record)
session.commit()
+When :meth:`~.Session.begin_nested` is called, a
+:meth:`~.Session.flush` is unconditionally issued
+(regardless of the ``autoflush`` setting). This is so that when a
+rollback on this nested transaction occurs, the full state of the
+session is expired, thus causing all subsequent attribute/instance access to
+reference the full state of the :class:`~sqlalchemy.orm.session.Session` right
+before :meth:`~.Session.begin_nested` was called.
+
+
+Session-level vs. Engine-level transaction control
+--------------------------------------------------
+
+As of SQLAlchemy 1.4, the :class:`_orm.sessionmaker` and Core
+:class:`_engine.Engine` objects both support :term:`2.0 style` operation,
+by making use of the :paramref:`_orm.Session.future` flag as well as the
+:paramref:`_engine.create_engine.future` flag so that these two objects
+assume 2.0-style semantics.
+
+When using future mode, there should be equivalent semantics between
+the two packages, at the level of the :class:`_orm.sessionmaker` vs.
+the :class:`_future.Engine`, as well as the :class:`_orm.Session` vs.
+the :class:`_future.Connection`. The following sections detail
+these scenarios based on the following scheme::
+
+
+ ORM (using future Session) Core (using future engine)
+ ----------------------------------------- -----------------------------------
+ sessionmaker Engine
+ Session Connection
+ sessionmaker.begin() Engine.begin()
+ some_session.commit() some_connection.commit()
+ with some_sessionmaker() as session: with some_engine.connect() as conn:
+ with some_sessionmaker.begin() as session: with some_engine.begin() as conn:
+ with some_session.begin_nested() as sp: with some_connection.begin_nested() as sp:
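As a minimal end-to-end sketch of this parallel (using a hypothetical in-memory SQLite database; the ``future=True`` flag selects 2.0-style semantics on the Core side):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

# hypothetical in-memory database; future=True enables 2.0-style
# Connection.commit() on the Core side
engine = create_engine("sqlite://", future=True)
SessionMaker = sessionmaker(engine)

# Core: commit as you go on a Connection
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))
    conn.commit()

# ORM: the same shape, one level up, using a Session
with SessionMaker() as session:
    session.execute(text("SELECT 1"))
    session.commit()
```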
+
+Commit as you go
+~~~~~~~~~~~~~~~~
+
+Both :class:`_orm.Session` and :class:`_future.Connection` feature
+:meth:`_future.Connection.commit` and :meth:`_future.Connection.rollback`
+methods. Using SQLAlchemy 2.0-style operation, these methods affect the
+**outermost** transaction in all cases.
+
+
+Engine::
+
+ engine = create_engine("postgresql://user:pass@host/dbname", future=True)
+
+ with engine.connect() as conn:
+ conn.execute(
+ some_table.insert(),
+ [
+ {"data": "some data one"},
+ {"data": "some data two"},
+ {"data": "some data three"}
+ ]
+ )
+ conn.commit()
+
+Session::
+
+ Session = sessionmaker(engine, future=True)
+
+ with Session() as session:
+ session.add_all([
+ SomeClass(data="some data one"),
+ SomeClass(data="some data two"),
+ SomeClass(data="some data three")
+ ])
+ session.commit()
+
+Commit at once
+~~~~~~~~~~~~~~~~
+
+Both :class:`_orm.sessionmaker` and :class:`_future.Engine` feature a
+:meth:`_future.Engine.begin` method that will both procure a new object
+with which to execute SQL statements (the :class:`_orm.Session` and
+:class:`_future.Connection`, respectively) and then return a context manager
+that will maintain a begin/commit/rollback context for that object.
+
+Engine::
+
+ engine = create_engine("postgresql://user:pass@host/dbname", future=True)
+
+ with engine.begin() as conn:
+ conn.execute(
+ some_table.insert(),
+ [
+ {"data": "some data one"},
+ {"data": "some data two"},
+ {"data": "some data three"}
+ ]
+ )
+ # commits and closes automatically
+
+Session::
+
+ Session = sessionmaker(engine, future=True)
+
+ with Session.begin() as session:
+ session.add_all([
+ SomeClass(data="some data one"),
+ SomeClass(data="some data two"),
+ SomeClass(data="some data three")
+ ])
+ # commits and closes automatically
+
+
+Nested Transaction
+~~~~~~~~~~~~~~~~~~~~
+
+When using a SAVEPOINT via the :meth:`_orm.Session.begin_nested` or
+:meth:`_engine.Connection.begin_nested` methods, the transaction object
+returned must be used to commit or rollback the SAVEPOINT. Calling
+the :meth:`_orm.Session.commit` or :meth:`_future.Connection.commit` methods
+will always commit the **outermost** transaction; this is a SQLAlchemy 2.0
+specific behavior that is reversed from the 1.x series.
+
+Engine::
+
+ engine = create_engine("postgresql://user:pass@host/dbname", future=True)
+
+ with engine.begin() as conn:
+ savepoint = conn.begin_nested()
+ conn.execute(
+ some_table.insert(),
+ [
+ {"data": "some data one"},
+ {"data": "some data two"},
+ {"data": "some data three"}
+ ]
+ )
+ savepoint.commit() # or rollback
+
+ # commits automatically
+
+Session::
+
+ Session = sessionmaker(engine, future=True)
+
+ with Session.begin() as session:
+ savepoint = session.begin_nested()
+ session.add_all([
+ SomeClass(data="some data one"),
+ SomeClass(data="some data two"),
+ SomeClass(data="some data three")
+ ])
+ savepoint.commit() # or rollback
+ # commits automatically
+
+
+
+
.. _session_autocommit:
-Autocommit Mode
+.. _session_explicit_begin:
+
+Explicit Begin
---------------
-The examples of session lifecycle at :ref:`unitofwork_transaction` refer
-to a :class:`.Session` that runs in its default mode of ``autocommit=False``.
-In this mode, the :class:`.Session` begins new transactions automatically
-as soon as it needs to do work upon a database connection; the transaction
-then stays in progress until the :meth:`.Session.commit` or :meth:`.Session.rollback`
-methods are called.
-
-The :class:`.Session` also features an older legacy mode of use called
-**autocommit mode**, where a transaction is not started implicitly, and unless
-the :meth:`.Session.begin` method is invoked, the :class:`.Session` will
-perform each database operation on a new connection checked out from the
-connection pool, which is then released back to the pool immediately
-after the operation completes. This refers to
-methods like :meth:`.Session.execute` as well as when executing a query
-returned by :meth:`.Session.query`. For a flush operation, the :class:`.Session`
-starts a new transaction for the duration of the flush, and commits it when
-complete.
-
-.. warning::
-
- "autocommit" mode is a **legacy mode of use** and should not be
- considered for new projects. If autocommit mode is used, it is strongly
- advised that the application at least ensure that transaction scope
- is made present via the :meth:`.Session.begin` method, rather than
- using the session in pure autocommit mode. An upcoming release of
- SQLAlchemy will include a new mode of usage that provides this pattern
- as a first class feature.
-
- If the :meth:`.Session.begin` method is not used, and operations are allowed
- to proceed using ad-hoc connections with immediate autocommit, then the
- application probably should set ``autoflush=False, expire_on_commit=False``,
- since these features are intended to be used only within the context
- of a database transaction.
-
-Modern usage of "autocommit mode" tends to be for framework integrations that
-wish to control specifically when the "begin" state occurs. A session which is
-configured with ``autocommit=True`` may be placed into the "begin" state using
-the :meth:`.Session.begin` method. After the cycle completes upon
-:meth:`.Session.commit` or :meth:`.Session.rollback`, connection and
-transaction resources are :term:`released` and the :class:`.Session` goes back
-into "autocommit" mode, until :meth:`.Session.begin` is called again::
-
- Session = sessionmaker(bind=engine, autocommit=True)
+.. versionchanged:: 1.4
+ SQLAlchemy 1.4 deprecates "autocommit mode", which is historically enabled
+ by using the :paramref:`_orm.Session.autocommit` flag. This flag allows
+ the :class:`_orm.Session` to invoke SQL statements within individual,
+ ad-hoc transactions and has been recommended against for many years.
+ Instead, the :meth:`_orm.Session.begin` method may now be called when
+ a :class:`_orm.Session` is first constructed, or after the previous
+ transaction has ended and before it begins a new one.
+
+The :class:`_orm.Session` features "autobegin" behavior, meaning that as soon
+as operations begin to take place, it ensures a :class:`_orm.SessionTransaction`
+is present to track ongoing operations. This transaction is completed
+when :meth:`_orm.Session.commit` is called.
+
+It is often desirable, particularly in framework integrations, to control the
+point at which the "begin" operation occurs. To suit this, the
+:class:`_orm.Session` uses an "autobegin" strategy, such that the
+:meth:`_orm.Session.begin` method may be called directly for a
+:class:`_orm.Session` that has not already had a transaction begun::
+
+ Session = sessionmaker(bind=engine)
session = Session()
session.begin()
try:
session.rollback()
raise
-The :meth:`.Session.begin` method also returns a transactional token which is
-compatible with the ``with`` statement::
+The above pattern is more idiomatically invoked using a context manager::
- Session = sessionmaker(bind=engine, autocommit=True)
+ Session = sessionmaker(bind=engine)
session = Session()
with session.begin():
item1 = session.query(Item).get(1)
item1.foo = 'bar'
item2.bar = 'foo'
+The :meth:`_orm.Session.begin` method and the session's "autobegin" process
+use the same sequence of steps to begin the transaction. This includes
+that the :meth:`_orm.SessionEvents.after_transaction_create` event is invoked
+when it occurs; this hook is used by frameworks in order to integrate their
+own transactional processes with that of the ORM :class:`_orm.Session`.
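As a sketch of such an integration hook, a listener for :meth:`_orm.SessionEvents.after_transaction_create` fires both for autobegin and for explicit :meth:`_orm.Session.begin`; the in-memory SQLite database here is a hypothetical stand-in:

```python
from sqlalchemy import create_engine, event, text
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")  # hypothetical in-memory database
session = Session(engine)

created = []

@event.listens_for(session, "after_transaction_create")
def on_transaction_create(session, transaction):
    # invoked for autobegin as well as explicit Session.begin()
    created.append(transaction)

session.execute(text("SELECT 1"))  # autobegin fires the event
assert len(created) >= 1
session.rollback()
```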
+
+
.. _session_subtransactions:
-Using Subtransactions with Autocommit
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Migrating from the "subtransaction" pattern
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-A subtransaction indicates usage of the :meth:`.Session.begin` method in conjunction with
-the ``subtransactions=True`` flag. This produces a non-transactional, delimiting construct that
-allows nesting of calls to :meth:`~.Session.begin` and :meth:`~.Session.commit`.
-Its purpose is to allow the construction of code that can function within a transaction
-both independently of any external code that starts a transaction,
-as well as within a block that has already demarcated a transaction.
+.. deprecated:: 1.4 The :paramref:`_orm.Session.begin.subtransactions`
+ flag is deprecated. While the :class:`_orm.Session` still uses the
+ "subtransactions" pattern internally, it is not suitable for end-user
+ use as it leads to confusion, and additionally it may be removed from
+ the :class:`_orm.Session` itself in version 2.0 once "autocommit"
+ mode is removed.
-``subtransactions=True`` is generally only useful in conjunction with
-autocommit, and is equivalent to the pattern described at :ref:`connections_nested_transactions`,
-where any number of functions can call :meth:`_engine.Connection.begin` and :meth:`.Transaction.commit`
-as though they are the initiator of the transaction, but in fact may be participating
-in an already ongoing transaction::
+The "subtransaction" pattern that was often used with autocommit mode is
+also deprecated in 1.4. This pattern allowed the use of the
+:meth:`_orm.Session.begin` method when a transaction was already begun,
+resulting in a construct called a "subtransaction", which was essentially
+a block that would prevent the :meth:`_orm.Session.commit` method from actually
+committing.
+
+This pattern has been shown to be confusing in real world applications, and
+it is preferable for an application to ensure that the topmost level of database
+operations is performed with a single begin/commit pair.
+
+To provide backwards compatibility for applications that make use of this
+pattern, the following context manager or a similar implementation based on
+a decorator may be used::
+
+
+ import contextlib
+
+ @contextlib.contextmanager
+ def transaction(session):
+
+ if session.in_transaction():
+ outermost = False
+ else:
+ outermost = True
+ session.begin()
- # method_a starts a transaction and calls method_b
- def method_a(session):
- session.begin(subtransactions=True)
try:
- method_b(session)
- session.commit() # transaction is committed here
+ yield
except:
- session.rollback() # rolls back the transaction
+ if session.in_transaction():
+ session.rollback()
raise
+ else:
+ if outermost and session.in_transaction():
+ session.commit()
+
+
+The above context manager may be used in the same way the
+"subtransaction" flag works, such as in the following example::
+
+
+ # method_a starts a transaction and calls method_b
+ def method_a(session):
+ with transaction(session):
+ method_b(session)
# method_b also starts a transaction, but when
# called from method_a participates in the ongoing
# transaction.
def method_b(session):
- session.begin(subtransactions=True)
- try:
+ with transaction(session):
session.add(SomeObject('bat', 'lala'))
- session.commit() # transaction is not committed yet
- except:
- session.rollback() # rolls back the transaction, in this case
- # the one that was initiated in method_a().
- raise
# create a Session and call method_a
- session = Session(autocommit=True)
- method_a(session)
- session.close()
+ with Session() as session:
+ method_a(session)
+
+By comparison, in the preferred idiomatic pattern the begin block is
+at the outermost level. This removes the need for individual functions
+or methods to be concerned with the details of transaction demarcation::
+
+ def method_a(session):
+ method_b(session)
+
+ def method_b(session):
+ session.add(SomeObject('bat', 'lala'))
+
+ # create a Session and call method_a
+ with Session() as session:
+ with session.begin():
+ method_a(session)
-Subtransactions are used by the :meth:`.Session.flush` process to ensure that the
-flush operation takes place within a transaction, regardless of autocommit. When
-autocommit is disabled, it is still useful in that it forces the :class:`.Session`
-into a "pending rollback" state, as a failed flush cannot be resumed in mid-operation,
-where the end user still maintains the "scope" of the transaction overall.
.. _session_twophase:
PostgreSQL), the session can be instructed to use two-phase commit semantics.
This will coordinate the committing of transactions across databases so that
the transaction is either committed or rolled back in all databases. You can
-also :meth:`~.Session.prepare` the session for
+also :meth:`_orm.Session.prepare` the session for
interacting with transactions not managed by SQLAlchemy. To use two phase
transactions set the flag ``twophase=True`` on the session::
# and reverted to its previous isolation level.
sess.commit()
-The :paramref:`.Session.connection.execution_options` argument is only
-accepted on the **first** call to :meth:`.Session.connection` for a
-particular bind within a transaction. If a transaction is already begun
-on the target connection, a warning is emitted::
-
- >>> session = Session(eng)
- >>> session.execute("select 1")
- <sqlalchemy.engine.result.CursorResult object at 0x1017a6c50>
- >>> session.connection(execution_options={'isolation_level': 'SERIALIZABLE'})
- sqlalchemy/orm/session.py:310: SAWarning: Connection is already established
- for the given bind; execution_options ignored
-
-.. versionadded:: 0.9.9 Added the
- :paramref:`.Session.connection.execution_options`
- parameter to :meth:`.Session.connection`.
Tracking Transaction State with Events
--------------------------------------
:class:`.Session` to that :class:`_engine.Connection`. The usual rationale for this
is a test suite that allows ORM code to work freely with a :class:`.Session`,
including the ability to call :meth:`.Session.commit`, where afterwards the
-entire database interaction is rolled back::
+entire database interaction is rolled back.
+
+.. versionchanged:: 1.4 This section introduces a new version of the
+ "join into an external transaction" recipe that will work equally well
+ for both "future" and "non-future" engines and sessions. The recipe
+ here from previous versions such as 1.3 will also continue to work for
+ "non-future" engines and sessions.
+
+
+The recipe works by establishing a :class:`_engine.Connection` within a
+transaction and optionally a SAVEPOINT, then passing it to a :class:`_orm.Session` as the
+"bind". The :class:`_orm.Session` detects that the given :class:`_engine.Connection`
+is already in a transaction and will not run COMMIT on it if the transaction
+is in fact an outermost transaction. Then when the test tears down, the
+transaction is rolled back so that any data changes throughout the test
+are reverted::
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine
# begin a non-ORM transaction
self.trans = self.connection.begin()
+
# bind an individual Session to the connection
self.session = Session(bind=self.connection)
+
+ ### optional ###
+
+ # if the database supports SAVEPOINT (SQLite needs special
+ # config for this to work), starting a savepoint
+ # will allow each test to also make use of rollback
+
+ self.nested = self.connection.begin_nested()
+
+ @event.listens_for(self.session, "after_transaction_end")
+ def end_savepoint(session, transaction):
+ if not self.nested.is_active:
+ self.nested = self.connection.begin_nested()
+
+ ### ^^^ optional ^^^ ###
+
def test_something(self):
# use the session in tests.
self.session.add(Foo())
self.session.commit()
+ def test_something_with_rollbacks(self):
+ # if the SAVEPOINT steps are taken, then a test can also
+ # use session.rollback() and continue working with the database
+
+ self.session.add(Bar())
+ self.session.flush()
+ self.session.rollback()
+
+ self.session.add(Foo())
+ self.session.commit()
+
def tearDown(self):
self.session.close()
# return connection to the Engine
self.connection.close()
-Above, we issue :meth:`.Session.commit` as well as
-:meth:`.Transaction.rollback`. This is an example of where we take advantage
-of the :class:`_engine.Connection` object's ability to maintain *subtransactions*, or
-nested begin/commit-or-rollback pairs where only the outermost begin/commit
-pair actually commits the transaction, or if the outermost block rolls back,
-everything is rolled back.
-
-.. topic:: Supporting Tests with Rollbacks
-
- The above recipe works well for any kind of database enabled test, except
- for a test that needs to actually invoke :meth:`.Session.rollback` within
- the scope of the test itself. The above recipe can be expanded, such
- that the :class:`.Session` always runs all operations within the scope
- of a SAVEPOINT, which is established at the start of each transaction,
- so that tests can also rollback the "transaction" as well while still
- remaining in the scope of a larger "transaction" that's never committed,
- using two extra events::
-
- from sqlalchemy import event
-
-
- class SomeTest(TestCase):
-
- def setUp(self):
- # connect to the database
- self.connection = engine.connect()
-
- # begin a non-ORM transaction
- self.trans = connection.begin()
-
- # bind an individual Session to the connection
- self.session = Session(bind=self.connection)
-
- # start the session in a SAVEPOINT...
- self.session.begin_nested()
-
- # then each time that SAVEPOINT ends, reopen it
- @event.listens_for(self.session, "after_transaction_end")
- def restart_savepoint(session, transaction):
- if transaction.nested and not transaction._parent.nested:
-
- # ensure that state is expired the way
- # session.commit() at the top level normally does
- # (optional step)
- session.expire_all()
-
- session.begin_nested()
-
- # ... the tearDown() method stays the same
+The above recipe is part of SQLAlchemy's own CI to ensure that it remains
+working as expected.
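For reference, the complete pattern condenses into a self-contained sketch; the in-memory SQLite URL and the ``Foo`` / ``SomeTest`` names are illustrative only:

```python
import unittest

from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)


class SomeTest(unittest.TestCase):
    def setUp(self):
        # connect to the database and begin a non-ORM transaction
        self.connection = engine.connect()
        self.trans = self.connection.begin()

        # bind an individual Session to the connection; the Session
        # detects the enclosing transaction and will not COMMIT it
        self.session = Session(bind=self.connection)

    def test_something(self):
        self.session.add(Foo())
        self.session.commit()

    def tearDown(self):
        self.session.close()

        # roll back the enclosing transaction; everything done in the
        # test, including the commit(), is reverted
        self.trans.rollback()
        self.connection.close()
```

Because only the outermost transaction ever commits or rolls back, each test starts from a clean database state regardless of what it flushed or committed.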
return self._transaction is not None and self._transaction.is_active
+ def in_nested_transaction(self):
+ """Return True if a transaction is in progress."""
+ if self.__branch_from is not None:
+ return self.__branch_from.in_nested_transaction()
+
+ return (
+ self._nested_transaction is not None
+ and self._nested_transaction.is_active
+ )
+
+ def get_transaction(self):
+ """Return the current root transaction in progress, if any.
+
+ .. versionadded:: 1.4
+
+ """
+
+ if self.__branch_from is not None:
+ return self.__branch_from.get_transaction()
+
+ return self._transaction
+
+ def get_nested_transaction(self):
+ """Return the current nested transaction in progress, if any.
+
+ .. versionadded:: 1.4
+
+ """
+ if self.__branch_from is not None:
+ return self.__branch_from.get_nested_transaction()
+
+ return self._nested_transaction
+
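From the public API side, the three accessors added above can be exercised together; a minimal sketch, assuming a 1.4-style engine (the in-memory SQLite URL is only for illustration):

```python
from sqlalchemy import create_engine

engine = create_engine("sqlite://")

with engine.connect() as conn:
    # no transaction has begun yet
    assert conn.get_transaction() is None

    trans = conn.begin()
    assert conn.get_transaction() is trans
    assert not conn.in_nested_transaction()

    # begin_nested() emits SAVEPOINT and is tracked separately
    # from the root transaction
    nested = conn.begin_nested()
    assert conn.in_nested_transaction()
    assert conn.get_nested_transaction() is nested

    nested.rollback()
    assert not conn.in_nested_transaction()

    # the root transaction remains in progress
    assert conn.get_transaction() is trans
    trans.rollback()
```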
def _begin_impl(self, transaction):
assert not self.__branch_from
updated usage model and calling facade for SQLAlchemy Core and
SQLAlchemy ORM. In Core, it forms the basis of the
:class:`.CursorResult` object which replaces the previous
- :class:`.ResultProxy` interface.
+ :class:`.ResultProxy` interface. When using the ORM, a higher level
+ object called :class:`.ChunkedIteratorResult` is normally used.
"""
# used by the Connection, which in itself is more expensive to
# generate than what BakedQuery was able to provide in 1.3 and prior
- if statement.compile_options._bake_ok:
+ if statement._compile_options._bake_ok:
self._bakery[self._effective_key(session)] = (
query,
statement,
create_session().
"""
+
+ if kwargs.get("future", False):
+ kwargs.setdefault("autocommit", False)
+ else:
+ kwargs.setdefault("autocommit", True)
+
kwargs.setdefault("autoflush", False)
- kwargs.setdefault("autocommit", True)
kwargs.setdefault("expire_on_commit", False)
return Session(bind=bind, **kwargs)
def __go(lcls):
global __all__
+ global AppenderQuery
from .. import util as sa_util # noqa
from . import dynamic # noqa
from . import events # noqa
from . import loading # noqa
import inspect as _inspect
+ from .dynamic import AppenderQuery
+
__all__ = sorted(
name
for name, obj in lcls.items()
raise NotImplementedError()
@classmethod
- def get_column_descriptions(self, statement):
+ def get_column_descriptions(cls, statement):
return _column_descriptions(statement)
@classmethod
if is_reentrant_invoke:
return statement, execution_options
- load_options = execution_options.get(
- "_sa_orm_load_options", QueryContext.default_load_options
+ (
+ load_options,
+ execution_options,
+ ) = QueryContext.default_load_options.from_execution_options(
+ "_sa_orm_load_options",
+ {"populate_existing", "autoflush", "yield_per"},
+ execution_options,
+ statement._execution_options,
)
bind_arguments["clause"] = statement
load_options = execution_options.get(
"_sa_orm_load_options", QueryContext.default_load_options
)
+
querycontext = QueryContext(
compile_state,
statement,
self._primary_entity = None
self.use_legacy_query_style = (
- statement_container.compile_options._use_legacy_query_style
+ statement_container._compile_options._use_legacy_query_style
)
self.statement_container = self.select_statement = statement_container
self.requested_statement = statement = statement_container.element
_QueryEntity.to_compile_state(self, statement_container._raw_columns)
- self.compile_options = statement_container.compile_options
+ self.compile_options = statement_container._compile_options
- self.current_path = statement_container.compile_options._current_path
+ self.current_path = statement_container._compile_options._current_path
if toplevel and statement_container._with_options:
self.attributes = {"_unbound_load_dedupes": set()}
# if we are a select() that was never a legacy Query, we won't
# have ORM level compile options.
- statement.compile_options = cls.default_compile_options.safe_merge(
- statement.compile_options
+ statement._compile_options = cls.default_compile_options.safe_merge(
+ statement._compile_options
)
self = cls.__new__(cls)
# indicates this select() came from Query.statement
self.for_statement = (
for_statement
- ) = select_statement.compile_options._for_statement
+ ) = select_statement._compile_options._for_statement
if not for_statement and not toplevel:
# for subqueries, turn off eagerloads.
# if "for_statement" mode is set, Query.subquery()
# would have set this flag to False already if that's what's
# desired
- select_statement.compile_options += {
+ select_statement._compile_options += {
"_enable_eagerloads": False,
}
# generally if we are from Query or directly from a select()
self.use_legacy_query_style = (
- select_statement.compile_options._use_legacy_query_style
+ select_statement._compile_options._use_legacy_query_style
)
self._entities = []
self._no_yield_pers = set()
# legacy: only for query.with_polymorphic()
- if select_statement.compile_options._with_polymorphic_adapt_map:
+ if select_statement._compile_options._with_polymorphic_adapt_map:
self._with_polymorphic_adapt_map = dict(
- select_statement.compile_options._with_polymorphic_adapt_map
+ select_statement._compile_options._with_polymorphic_adapt_map
)
self._setup_with_polymorphics()
_QueryEntity.to_compile_state(self, select_statement._raw_columns)
- self.compile_options = select_statement.compile_options
+ self.compile_options = select_statement._compile_options
# determine label style. we can make different decisions here.
# at the moment, trying to see if we can always use DISAMBIGUATE_ONLY
else:
self.label_style = self.select_statement._label_style
- self.current_path = select_statement.compile_options._current_path
+ self.current_path = select_statement._compile_options._current_path
self.eager_order_by = ()
self._polymorphic_adapters = {}
compile_options = cls.default_compile_options.safe_merge(
- query.compile_options
+ query._compile_options
)
# legacy: only for query.with_polymorphic()
if compile_options._with_polymorphic_adapt_map:
for elem in _select_iterables([element]):
yield elem
+ @classmethod
+ @util.preload_module("sqlalchemy.orm.query")
+ def from_statement(cls, statement, from_statement):
+ query = util.preloaded.orm_query
+
+ from_statement = coercions.expect(
+ roles.SelectStatementRole,
+ from_statement,
+ apply_propagate_attrs=statement,
+ )
+
+ stmt = query.FromStatement(statement._raw_columns, from_statement)
+ stmt.__dict__.update(
+ _with_options=statement._with_options,
+ _with_context_options=statement._with_context_options,
+ _execution_options=statement._execution_options,
+ _propagate_attrs=statement._propagate_attrs,
+ )
+ return stmt
+
def _setup_with_polymorphics(self):
# legacy: only for query.with_polymorphic()
for ext_info, wp in self._with_polymorphic_adapt_map.items():
from .query import Query
from .. import exc
from .. import log
+from .. import sql
from .. import util
+from ..engine import result as _result
+from ..sql import selectable
+from ..sql.base import _generative
+from ..sql.base import Generative
@log.class_logger
dispatch,
target_mapper,
order_by,
- query_class=None,
**kw
):
super(DynamicAttributeImpl, self).__init__(
)
self.target_mapper = target_mapper
self.order_by = order_by
- if not query_class:
- self.query_class = AppenderQuery
- elif AppenderMixin in query_class.mro():
- self.query_class = query_class
- else:
- self.query_class = mixin_user_query(query_class)
+ self.query_class = AppenderQuery
def get(self, state, dict_, passive=attributes.PASSIVE_OFF):
if not passive & attributes.SQL_OK:
self.remove(state, dict_, value, initiator, passive=passive)
-class AppenderMixin(object):
- query_class = None
+class AppenderQuery(Generative):
+ """A dynamic query that supports basic collection storage operations."""
def __init__(self, attr, state):
- super(AppenderMixin, self).__init__(attr.target_mapper, None)
+
+ # this could be a Core select(), except for the aliased=True flag
+ # on join() and its corresponding behaviors on select().
+ self._is_core = False
+ self._statement = Query([attr.target_mapper], None)
+
+ # self._is_core = True
+ # self._statement = sql.select(attr.target_mapper)._set_label_style(
+ # selectable.LABEL_STYLE_TABLENAME_PLUS_COL
+ # )
+
+ self._autoflush = True
self.instance = instance = state.obj()
self.attr = attr
- mapper = object_mapper(instance)
+ self.mapper = mapper = object_mapper(instance)
prop = mapper._props[self.attr.key]
if prop.secondary is not None:
# is in the FROM. So we purposely put the mapper selectable
# in _from_obj[0] to ensure a user-defined join() later on
# doesn't fail, and secondary is then in _from_obj[1].
- self._from_obj = (prop.mapper.selectable, prop.secondary)
+ self._statement = self._statement.select_from(
+ prop.mapper.selectable, prop.secondary
+ )
- self._where_criteria += (
+ self._statement = self._statement.where(
prop._with_parent(instance, alias_secondary=False),
)
if self.attr.order_by:
- if (
- self._order_by_clauses is False
- or self._order_by_clauses is None
- ):
- self._order_by_clauses = tuple(self.attr.order_by)
- else:
- self._order_by_clauses = self._order_by_clauses + tuple(
- self.attr.order_by
- )
+ self._statement = self._statement.order_by(*self.attr.order_by)
+
+ @_generative
+ def autoflush(self, setting):
+ """Set autoflush to a specific setting.
+
+ Note that a Session with autoflush=False will
+ not autoflush, even if this flag is set to True at the
+ Query level. Therefore this flag is usually used only
+ to disable autoflush for a specific Query.
+
+ """
+ self._autoflush = setting
+
+ @property
+ def statement(self):
+ """Return the Core statement represented by this
+ :class:`.AppenderQuery`.
+
+ """
+ if self._is_core:
+ return self._statement._set_label_style(
+ selectable.LABEL_STYLE_DISAMBIGUATE_ONLY
+ )
+
+ else:
+ return self._statement.statement
+
+ def filter(self, *criteria):
+ """A synonym for the :meth:`_orm.AppenderQuery.where` method."""
+
+ return self.where(*criteria)
+
+ @_generative
+ def where(self, *criteria):
+ r"""Apply the given WHERE criterion, using SQL expressions.
+
+ Equivalent to :meth:`.Select.where`.
+
+ """
+ self._statement = self._statement.where(*criteria)
+
+ @_generative
+ def order_by(self, *criteria):
+ r"""Apply the given ORDER BY criterion, using SQL expressions.
+
+ Equivalent to :meth:`.Select.order_by`.
+
+ """
+ self._statement = self._statement.order_by(*criteria)
+
+ @_generative
+ def filter_by(self, **kwargs):
+ r"""Apply the given filtering criterion using keyword expressions.
+
+ Equivalent to :meth:`.Select.filter_by`.
+
+ """
+ self._statement = self._statement.filter_by(**kwargs)
+
+ @_generative
+ def join(self, target, *props, **kwargs):
+ r"""Create a SQL JOIN against this
+ object's criterion.
+
+ Equivalent to :meth:`.Select.join`.
+ """
+
+ self._statement = self._statement.join(target, *props, **kwargs)
+
+ @_generative
+ def outerjoin(self, target, *props, **kwargs):
+ r"""Create a SQL LEFT OUTER JOIN against this
+ object's criterion.
+
+ Equivalent to :meth:`.Select.outerjoin`.
+
+ """
+
+ self._statement = self._statement.outerjoin(target, *props, **kwargs)
+
+ def scalar(self):
+ """Return the first element of the first result or None
+ if no rows are present. If multiple rows are returned,
+ raises MultipleResultsFound.
+
+ Equivalent to :meth:`_query.Query.scalar`.
+
+ .. versionadded:: 1.1.6
+
+ """
+ return self._iter().scalar()
+
+ def first(self):
+ """Return the first row.
+
+ Equivalent to :meth:`_query.Query.first`.
+
+ """
+
+ # replicates limit(1) behavior
+ if self._statement is not None:
+ return self._iter().first()
+ else:
+ return self.limit(1)._iter().first()
+
+ def one(self):
+ """Return exactly one result or raise an exception.
+
+ Equivalent to :meth:`_query.Query.one`.
+
+ """
+ return self._iter().one()
+
+ def one_or_none(self):
+ """Return one or zero results, or raise an exception for multiple
+ rows.
+
+ Equivalent to :meth:`_query.Query.one_or_none`.
+
+ .. versionadded:: 1.0.9
+
+ """
+ return self._iter().one_or_none()
+
+ def all(self):
+ """Return all rows.
+
+ Equivalent to :meth:`_query.Query.all`.
+
+ """
+ return self._iter().all()
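In terms of the public API, these methods are reached through a ``lazy="dynamic"`` relationship, whose collection attribute is an :class:`.AppenderQuery`; a hedged sketch (the mapping names here are illustrative only):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    # a dynamic collection produces AppenderQuery instances
    addresses = relationship("Address", lazy="dynamic")


class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    user_id = Column(ForeignKey("user_account.id"))
    email = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
u = User()
session.add(u)
u.addresses.append(Address(email="a@example.com"))
u.addresses.append(Address(email="b@example.com"))
session.commit()

# additional criteria may be applied generatively before any
# SQL is emitted
q = u.addresses.filter(Address.email.like("a%"))
```

Methods such as ``count()``, ``first()`` and ``order_by()`` then behave like their :class:`_query.Query` counterparts against the filtered collection.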
def session(self):
sess = object_session(self.instance)
if (
sess is not None
- and self.autoflush
+ and self._autoflush
and sess.autoflush
and self.instance in sess
):
session = property(session, lambda s, x: None)
- def __iter__(self):
+ def _execute(self, sess=None):
+ # execute the statement against the given Session, returning a
+ # scalars-oriented Result; if no Session was passed, locate the
+ # Session from the parent instance, raising if it is detached
+ instance = self.instance
+ if sess is None:
+ sess = object_session(instance)
+ if sess is None:
+ raise orm_exc.DetachedInstanceError(
+ "Parent instance %s is not bound to a Session, and no "
+ "contextual session is established; lazy load operation "
+ "of attribute '%s' cannot proceed"
+ % (orm_util.instance_str(instance), self.attr.key)
+ )
+
+ result = sess.execute(self._statement, future=True)
+ result = result.scalars()
+
+ if result._attributes.get("filtered", False):
+ result = result.unique()
+
+ return result
+
+ def _iter(self):
sess = self.session
if sess is None:
- return iter(
- self.attr._get_collection_history(
- attributes.instance_state(self.instance),
- attributes.PASSIVE_NO_INITIALIZE,
- ).added_items
- )
+ instance = self.instance
+ state = attributes.instance_state(instance)
+
+ if state.detached:
+ raise orm_exc.DetachedInstanceError(
+ "Parent instance %s is not bound to a Session, and no "
+ "contextual session is established; lazy load operation "
+ "of attribute '%s' cannot proceed"
+ % (orm_util.instance_str(instance), self.attr.key)
+ )
+ else:
+ iterator = (
+ (item,)
+ for item in self.attr._get_collection_history(
+ state, attributes.PASSIVE_NO_INITIALIZE,
+ ).added_items
+ )
+
+ row_metadata = _result.SimpleResultMetaData(
+ (self.mapper.class_.__name__,), [], _unique_filters=[id],
+ )
+
+ return _result.IteratorResult(row_metadata, iterator).scalars()
else:
- return iter(self._generate(sess))
+ return self._execute(sess)
+
+ def __iter__(self):
+ return iter(self._iter())
def __getitem__(self, index):
sess = self.session
attributes.PASSIVE_NO_INITIALIZE,
).indexed(index)
else:
- return self._generate(sess).__getitem__(index)
+ return orm_util._getitem(self, index)
+
+ @_generative
+ def slice(self, start, stop):
+ """Compute the "slice" represented by
+ the given indices and apply it as LIMIT/OFFSET.
+
+ Equivalent to :meth:`_query.Query.slice`.
+
+ """
+ limit_clause, offset_clause = orm_util._make_slice(
+ self._statement._limit_clause,
+ self._statement._offset_clause,
+ start,
+ stop,
+ )
+ self._statement = self._statement.limit(limit_clause).offset(
+ offset_clause
+ )
def count(self):
+ """return the 'count'.
+
+ Equivalent to :meth:`_query.Query.count`.
+
+
+ """
+
sess = self.session
if sess is None:
return len(
).added_items
)
else:
- return self._generate(sess).count()
-
- def _generate(self, sess=None):
- # note we're returning an entirely new Query class instance
- # here without any assignment capabilities; the class of this
- # query is determined by the session.
- instance = self.instance
- if sess is None:
- sess = object_session(instance)
- if sess is None:
- raise orm_exc.DetachedInstanceError(
- "Parent instance %s is not bound to a Session, and no "
- "contextual session is established; lazy load operation "
- "of attribute '%s' cannot proceed"
- % (orm_util.instance_str(instance), self.attr.key)
- )
-
- if self.query_class:
- query = self.query_class(self.attr.target_mapper, session=sess)
- else:
- query = sess.query(self.attr.target_mapper)
-
- query._where_criteria = self._where_criteria
- query._from_obj = self._from_obj
- query._order_by_clauses = self._order_by_clauses
+ col = sql.func.count(sql.literal_column("*"))
- return query
+ stmt = sql.select(col).select_from(self._statement.subquery())
+ return self.session.execute(stmt).scalar()
def extend(self, iterator):
for item in iterator:
)
-class AppenderQuery(AppenderMixin, Query):
- """A dynamic query that supports basic collection storage operations."""
-
-
-def mixin_user_query(cls):
- """Return a new class with AppenderQuery functionality layered over."""
- name = "Appender" + cls.__name__
- return type(name, (AppenderMixin, cls), {"query_class": cls})
-
-
class CollectionHistory(object):
"""Overrides AttributeHistory to receive append/remove events directly."""
load_options = QueryContext.default_load_options
compile_options = ORMCompileState.default_compile_options.safe_merge(
- q.compile_options
+ q._compile_options
)
if primary_key_identity is not None:
# TODO: most of the compile_options that are not legacy only involve this
# function, so try to see if handling of them can mostly be local to here
- q.compile_options, load_options = _set_get_options(
+ q._compile_options, load_options = _set_get_options(
compile_options,
load_options,
populate_existing=bool(refresh_state),
if is_reentrant_invoke:
return statement, execution_options
- sync = execution_options.get("synchronize_session", None)
- if sync is None:
- sync = statement._execution_options.get(
- "synchronize_session", None
- )
-
- update_options = execution_options.get(
+ (
+ update_options,
+ execution_options,
+ ) = BulkUDCompileState.default_update_options.from_execution_options(
"_sa_orm_update_options",
- BulkUDCompileState.default_update_options,
+ {"synchronize_session"},
+ execution_options,
+ statement._execution_options,
)
+ sync = update_options._synchronize_session
if sync is not None:
if sync not in ("evaluate", "fetch", False):
raise sa_exc.ArgumentError(
"Valid strategies for session synchronization "
"are 'evaluate', 'fetch', False"
)
- update_options += {"_synchronize_session": sync}
bind_arguments["clause"] = statement
try:
import operator
import types
-from . import attributes
from . import exc as orm_exc
from . import interfaces
from . import loading
+from . import util as orm_util
from .base import _assertions
from .context import _column_descriptions
from .context import _legacy_determine_last_joined_entity
_legacy_setup_joins = ()
_label_style = LABEL_STYLE_NONE
- compile_options = ORMCompileState.default_compile_options
+ _compile_options = ORMCompileState.default_compile_options
load_options = QueryContext.default_load_options
for elem in obj
]
- self.compile_options += {"_set_base_alias": set_base_alias}
+ self._compile_options += {"_set_base_alias": set_base_alias}
self._from_obj = tuple(fa)
@_generative
self._from_obj = self._legacy_setup_joins = ()
if self._statement is not None:
- self.compile_options += {"_statement": None}
+ self._compile_options += {"_statement": None}
self._where_criteria = ()
self._distinct = False
if load_options:
self.load_options += load_options
if compile_options:
- self.compile_options += compile_options
+ self._compile_options += compile_options
return self
# passed into the execute process and wont generate its own cache
# key; this will all occur in terms of the ORM-enabled Select.
if (
- not self.compile_options._set_base_alias
- and not self.compile_options._with_polymorphic_adapt_map
+ not self._compile_options._set_base_alias
+ and not self._compile_options._with_polymorphic_adapt_map
):
# if we don't have legacy top level aliasing features in use
# then convert to a future select() directly
if new_query is not None and new_query is not self:
self = new_query
if not fn._bake_ok:
- self.compile_options += {"_bake_ok": False}
+ self._compile_options += {"_bake_ok": False}
- compile_options = self.compile_options
+ compile_options = self._compile_options
compile_options += {
"_for_statement": for_statement,
"_use_legacy_query_style": use_legacy_query_style,
stmt.__dict__.update(
_with_options=self._with_options,
_with_context_options=self._with_context_options,
- compile_options=compile_options,
+ _compile_options=compile_options,
_execution_options=self._execution_options,
+ _propagate_attrs=self._propagate_attrs,
)
- stmt._propagate_attrs = self._propagate_attrs
else:
# Query / select() internal attributes are 99% cross-compatible
stmt = Select.__new__(Select)
stmt.__dict__.update(self.__dict__)
stmt.__dict__.update(
_label_style=self._label_style,
- compile_options=compile_options,
+ _compile_options=compile_options,
+ _propagate_attrs=self._propagate_attrs,
)
stmt.__dict__.pop("session", None)
- stmt._propagate_attrs = self._propagate_attrs
return stmt
def subquery(
selectable, or when using :meth:`_query.Query.yield_per`.
"""
- self.compile_options += {"_enable_eagerloads": value}
+ self._compile_options += {"_enable_eagerloads": value}
@_generative
def with_labels(self):
query intended for the deferred load.
"""
- self.compile_options += {"_current_path": path}
+ self._compile_options += {"_current_path": path}
# TODO: removed in 2.0
@_generative
polymorphic_on=polymorphic_on,
)
- self.compile_options = self.compile_options.add_to_element(
+ self._compile_options = self._compile_options.add_to_element(
"_with_polymorphic_adapt_map", ((entity, inspect(wp)),)
)
{"stream_results": True, "max_row_buffer": count}
)
+ @util.deprecated_20(
+ ":meth:`_orm.Query.get`",
+ alternative="The method is now available as :meth:`_orm.Session.get`",
+ )
def get(self, ident):
"""Return an instance based on the given primary key identifier,
or ``None`` if not found.
however, and will be used if the object is not
yet locally present.
- A lazy-loading, many-to-one attribute configured
- by :func:`_orm.relationship`, using a simple
- foreign-key-to-primary-key criterion, will also use an
- operation equivalent to :meth:`_query.Query.get` in order to retrieve
- the target value from the local identity map
- before querying the database. See :doc:`/orm/loading_relationships`
- for further details on relationship loading.
-
:param ident: A scalar, tuple, or dictionary representing the
primary key. For a composite (e.g. multiple column) primary key,
a tuple or dictionary should be passed.
"""
self._no_criterion_assertion("get", order_by=False, distinct=False)
+
+ # we still implement _get_impl() so that baked query can override
+ # it
return self._get_impl(ident, loading.load_on_pk_identity)
def _get_impl(self, primary_key_identity, db_load_fn, identity_token=None):
-
- # convert composite types to individual args
- if hasattr(primary_key_identity, "__composite_values__"):
- primary_key_identity = primary_key_identity.__composite_values__()
-
mapper = self._only_full_mapper_zero("get")
-
- is_dict = isinstance(primary_key_identity, dict)
- if not is_dict:
- primary_key_identity = util.to_list(
- primary_key_identity, default=(None,)
- )
-
- if len(primary_key_identity) != len(mapper.primary_key):
- raise sa_exc.InvalidRequestError(
- "Incorrect number of values in identifier to formulate "
- "primary key for query.get(); primary key columns are %s"
- % ",".join("'%s'" % c for c in mapper.primary_key)
- )
-
- if is_dict:
- try:
- primary_key_identity = list(
- primary_key_identity[prop.key]
- for prop in mapper._identity_key_props
- )
-
- except KeyError as err:
- util.raise_(
- sa_exc.InvalidRequestError(
- "Incorrect names of values in identifier to formulate "
- "primary key for query.get(); primary key attribute "
- "names are %s"
- % ",".join(
- "'%s'" % prop.key
- for prop in mapper._identity_key_props
- )
- ),
- replace_context=err,
- )
-
- if (
- not self.load_options._populate_existing
- and not mapper.always_refresh
- and self._for_update_arg is None
- ):
-
- instance = self.session._identity_lookup(
- mapper, primary_key_identity, identity_token=identity_token
- )
-
- if instance is not None:
- self._get_existing_condition()
- # reject calls for id in identity map but class
- # mismatch.
- if not issubclass(instance.__class__, mapper.class_):
- return None
- return instance
- elif instance is attributes.PASSIVE_CLASS_MISMATCH:
- return None
-
- # apply_labels() not strictly necessary, however this will ensure that
- # tablename_colname style is used which at the moment is asserted
- # in a lot of unit tests :)
-
- statement = self._statement_20().apply_labels()
- return db_load_fn(
- self.session,
- statement,
+ return self.session._get_impl(
+ mapper,
primary_key_identity,
- load_options=self.load_options,
+ db_load_fn,
+ populate_existing=self.load_options._populate_existing,
+ with_for_update=self._for_update_arg,
+ options=self._with_options,
+ identity_token=identity_token,
+ execution_options=self._execution_options,
)
@property
@property
def _current_path(self):
- return self.compile_options._current_path
+ return self._compile_options._current_path
@_generative
def correlate(self, *fromclauses):
@_generative
def _set_enable_single_crit(self, val):
- self.compile_options += {"_enable_single_crit": val}
+ self._compile_options += {"_enable_single_crit": val}
@_generative
def _from_selectable(self, fromclause, set_entity_from=True):
):
self.__dict__.pop(attr, None)
self._set_select_from([fromclause], set_entity_from)
- self.compile_options += {
+ self._compile_options += {
"_enable_single_crit": False,
"_statement": None,
}
# legacy. see test/orm/test_froms.py for various
# "oldstyle" tests that rely on this and the correspoinding
# "newtyle" that do not.
- self.compile_options += {"_orm_only_from_obj_alias": False}
+ self._compile_options += {"_orm_only_from_obj_alias": False}
@util.deprecated(
"1.4",
"""
opts = tuple(util.flatten_iterator(args))
- if self.compile_options._current_path:
+ if self._compile_options._current_path:
for opt in opts:
if opt._is_legacy_option:
opt.process_query_conditionally(self)
params = self.load_options._params.union(kwargs)
self.load_options += {"_params": params}
+ def where(self, *criterion):
+ """A synonym for :meth:`.Query.filter`.
+
+ .. versionadded:: 1.4
+
+ """
+ return self.filter(*criterion)
+
@_generative
@_assertions(_no_statement_condition, _no_limit_offset)
def filter(self, *criterion):
SQLAlchemy versions was the primary ORM-level joining interface.
"""
+
aliased, from_joinpoint, isouter, full = (
kwargs.pop("aliased", False),
kwargs.pop("from_joinpoint", False),
"""
self._set_select_from([from_obj], True)
- self.compile_options += {"_enable_single_crit": False}
+ self._compile_options += {"_enable_single_crit": False}
def __getitem__(self, item):
- if isinstance(item, slice):
- start, stop, step = util.decode_slice(item)
-
- if (
- isinstance(stop, int)
- and isinstance(start, int)
- and stop - start <= 0
- ):
- return []
-
- # perhaps we should execute a count() here so that we
- # can still use LIMIT/OFFSET ?
- elif (isinstance(start, int) and start < 0) or (
- isinstance(stop, int) and stop < 0
- ):
- return list(self)[item]
-
- res = self.slice(start, stop)
- if step is not None:
- return list(res)[None : None : item.step]
- else:
- return list(res)
- else:
- if item == -1:
- return list(self)[-1]
- else:
- return list(self[item : item + 1])[0]
+ return orm_util._getitem(self, item)
@_generative
@_assertions(_no_statement_condition)
:meth:`_query.Query.offset`
"""
- # for calculated limit/offset, try to do the addition of
- # values to offset in Python, howver if a SQL clause is present
- # then the addition has to be on the SQL side.
- if start is not None and stop is not None:
- offset_clause = self._offset_or_limit_clause_asint_if_possible(
- self._offset_clause
- )
- if offset_clause is None:
- offset_clause = 0
- if start != 0:
- offset_clause = offset_clause + start
-
- if offset_clause == 0:
- self._offset_clause = None
- else:
- self._offset_clause = self._offset_or_limit_clause(
- offset_clause
- )
-
- self._limit_clause = self._offset_or_limit_clause(stop - start)
-
- elif start is None and stop is not None:
- self._limit_clause = self._offset_or_limit_clause(stop)
- elif start is not None and stop is None:
- offset_clause = self._offset_or_limit_clause_asint_if_possible(
- self._offset_clause
- )
- if offset_clause is None:
- offset_clause = 0
-
- if start != 0:
- offset_clause = offset_clause + start
-
- if offset_clause == 0:
- self._offset_clause = None
- else:
- self._offset_clause = self._offset_or_limit_clause(
- offset_clause
- )
+ self._limit_clause, self._offset_clause = orm_util._make_slice(
+ self._limit_clause, self._offset_clause, start, stop
+ )
@_generative
@_assertions(_no_statement_condition)
``Query``.
"""
- self._limit_clause = self._offset_or_limit_clause(limit)
+ self._limit_clause = orm_util._offset_or_limit_clause(limit)
@_generative
@_assertions(_no_statement_condition)
``Query``.
"""
- self._offset_clause = self._offset_or_limit_clause(offset)
-
- def _offset_or_limit_clause(self, element, name=None, type_=None):
- """Convert the given value to an "offset or limit" clause.
-
- This handles incoming integers and converts to an expression; if
- an expression is already given, it is passed through.
-
- """
- return coercions.expect(
- roles.LimitOffsetRole, element, name=name, type_=type_
- )
-
- def _offset_or_limit_clause_asint_if_possible(self, clause):
- """Return the offset or limit clause as a simple integer if possible,
- else return the clause.
-
- """
- if clause is None:
- return None
- if hasattr(clause, "_limit_offset_value"):
- value = clause._limit_offset_value
- return util.asint(value)
- else:
- return clause
+ self._offset_clause = orm_util._offset_or_limit_clause(offset)
@_generative
@_assertions(_no_statement_condition)
roles.SelectStatementRole, statement, apply_propagate_attrs=self
)
self._statement = statement
- self.compile_options += {"_statement": statement}
+ self._compile_options += {"_statement": statement}
def first(self):
"""Return the first result of this ``Query`` or
sess.query(User).filter(User.age == 25).\
delete(synchronize_session='evaluate')
- .. warning:: The :meth:`_query.Query.delete`
- method is a "bulk" operation,
- which bypasses ORM unit-of-work automation in favor of greater
- performance. **Please read all caveats and warnings below.**
-
- :param synchronize_session: chooses the strategy for the removal of
- matched objects from the session. Valid values are:
-
- ``False`` - don't synchronize the session. This option is the most
- efficient and is reliable once the session is expired, which
- typically occurs after a commit(), or explicitly using
- expire_all(). Before the expiration, objects may still remain in
- the session which were in fact deleted which can lead to confusing
- results if they are accessed via get() or already loaded
- collections.
-
- ``'fetch'`` - performs a select query before the delete to find
- objects that are matched by the delete query and need to be
- removed from the session. Matched objects are removed from the
- session.
+ .. warning::
- ``'evaluate'`` - Evaluate the query's criteria in Python straight
- on the objects in the session. If evaluation of the criteria isn't
- implemented, an error is raised.
+ See the section :ref:`bulk_update_delete` for important caveats
+ and warnings, including limitations when using bulk UPDATE
+ and DELETE with mapper inheritance configurations.
- The expression evaluator currently doesn't account for differing
- string collations between the database and Python.
+ :param synchronize_session: chooses the strategy to update the
+ attributes on objects in the session. See the section
+ :ref:`bulk_update_delete` for a discussion of these strategies.
:return: the count of rows matched as returned by the database's
"row count" feature.
- .. warning:: **Additional Caveats for bulk query deletes**
-
- * This method does **not work for joined
- inheritance mappings**, since the **multiple table
- deletes are not supported by SQL** as well as that the
- **join condition of an inheritance mapper is not
- automatically rendered**. Care must be taken in any
- multiple-table delete to first accommodate via some other means
- how the related table will be deleted, as well as to
- explicitly include the joining
- condition between those tables, even in mappings where
- this is normally automatic. E.g. if a class ``Engineer``
- subclasses ``Employee``, a DELETE against the ``Employee``
- table would look like::
-
- session.query(Engineer).\
- filter(Engineer.id == Employee.id).\
- filter(Employee.name == 'dilbert').\
- delete()
-
- However the above SQL will not delete from the Engineer table,
- unless an ON DELETE CASCADE rule is established in the database
- to handle it.
-
- Short story, **do not use this method for joined inheritance
- mappings unless you have taken the additional steps to make
- this feasible**.
-
- * The polymorphic identity WHERE criteria is **not** included
- for single- or
- joined- table updates - this must be added **manually** even
- for single table inheritance.
-
- * The method does **not** offer in-Python cascading of
- relationships - it is assumed that ON DELETE CASCADE/SET
- NULL/etc. is configured for any foreign key references
- which require it, otherwise the database may emit an
- integrity violation if foreign key references are being
- enforced.
-
- After the DELETE, dependent objects in the
- :class:`.Session` which were impacted by an ON DELETE
- may not contain the current state, or may have been
- deleted. This issue is resolved once the
- :class:`.Session` is expired, which normally occurs upon
- :meth:`.Session.commit` or can be forced by using
- :meth:`.Session.expire_all`. Accessing an expired
- object whose row has been deleted will invoke a SELECT
- to locate the row; when the row is not found, an
- :class:`~sqlalchemy.orm.exc.ObjectDeletedError` is
- raised.
-
- * The ``'fetch'`` strategy results in an additional
- SELECT statement emitted and will significantly reduce
- performance.
-
- * The ``'evaluate'`` strategy performs a scan of
- all matching objects within the :class:`.Session`; if the
- contents of the :class:`.Session` are expired, such as
- via a proceeding :meth:`.Session.commit` call, **this will
- result in SELECT queries emitted for every matching object**.
-
- * The :meth:`.MapperEvents.before_delete` and
- :meth:`.MapperEvents.after_delete`
- events **are not invoked** from this method. Instead, the
- :meth:`.SessionEvents.after_bulk_delete` method is provided to
- act upon a mass DELETE of entity rows.
-
.. seealso::
- :meth:`_query.Query.update`
-
- :ref:`inserts_and_updates` - Core SQL tutorial
+ :ref:`bulk_update_delete`
"""
sess.query(User).filter(User.age == 25).\
update({"age": User.age - 10}, synchronize_session='evaluate')
+ .. warning::
- .. warning:: The :meth:`_query.Query.update`
- method is a "bulk" operation,
- which bypasses ORM unit-of-work automation in favor of greater
- performance. **Please read all caveats and warnings below.**
-
+ See the section :ref:`bulk_update_delete` for important caveats
+ and warnings, including limitations when using bulk UPDATE
+ and DELETE with mapper inheritance configurations.
:param values: a dictionary with attributes names, or alternatively
mapped attributes or SQL expressions, as keys, and literal
flag is passed to the :paramref:`.Query.update.update_args` dictionary
as well.
- .. versionchanged:: 1.0.0 - string names in the values dictionary
- are now resolved against the mapped entity; previously, these
- strings were passed as literal column names with no mapper-level
- translation.
-
:param synchronize_session: chooses the strategy to update the
- attributes on objects in the session. Valid values are:
-
- ``False`` - don't synchronize the session. This option is the most
- efficient and is reliable once the session is expired, which
- typically occurs after a commit(), or explicitly using
- expire_all(). Before the expiration, updated objects may still
- remain in the session with stale values on their attributes, which
- can lead to confusing results.
-
- ``'fetch'`` - performs a select query before the update to find
- objects that are matched by the update query. The updated
- attributes are expired on matched objects.
-
- ``'evaluate'`` - Evaluate the Query's criteria in Python straight
- on the objects in the session. If evaluation of the criteria isn't
- implemented, an exception is raised.
-
- The expression evaluator currently doesn't account for differing
- string collations between the database and Python.
+ attributes on objects in the session. See the section
+ :ref:`bulk_update_delete` for a discussion of these strategies.
:param update_args: Optional dictionary, if present will be passed
to the underlying :func:`_expression.update`
as ``mysql_limit``, as well as other special arguments such as
:paramref:`~sqlalchemy.sql.expression.update.preserve_parameter_order`.
- .. versionadded:: 1.0.0
-
:return: the count of rows matched as returned by the database's
"row count" feature.
- .. warning:: **Additional Caveats for bulk query updates**
-
- * The method does **not** offer in-Python cascading of
- relationships - it is assumed that ON UPDATE CASCADE is
- configured for any foreign key references which require
- it, otherwise the database may emit an integrity
- violation if foreign key references are being enforced.
-
- After the UPDATE, dependent objects in the
- :class:`.Session` which were impacted by an ON UPDATE
- CASCADE may not contain the current state; this issue is
- resolved once the :class:`.Session` is expired, which
- normally occurs upon :meth:`.Session.commit` or can be
- forced by using :meth:`.Session.expire_all`.
-
- * The ``'fetch'`` strategy results in an additional
- SELECT statement emitted and will significantly reduce
- performance.
-
- * The ``'evaluate'`` strategy performs a scan of
- all matching objects within the :class:`.Session`; if the
- contents of the :class:`.Session` are expired, such as
- via a proceeding :meth:`.Session.commit` call, **this will
- result in SELECT queries emitted for every matching object**.
-
- * The method supports multiple table updates, as detailed
- in :ref:`multi_table_updates`, and this behavior does
- extend to support updates of joined-inheritance and
- other multiple table mappings. However, the **join
- condition of an inheritance mapper is not
- automatically rendered**. Care must be taken in any
- multiple-table update to explicitly include the joining
- condition between those tables, even in mappings where
- this is normally automatic. E.g. if a class ``Engineer``
- subclasses ``Employee``, an UPDATE of the ``Engineer``
- local table using criteria against the ``Employee``
- local table might look like::
-
- session.query(Engineer).\
- filter(Engineer.id == Employee.id).\
- filter(Employee.name == 'dilbert').\
- update({"engineer_type": "programmer"})
-
- * The polymorphic identity WHERE criteria is **not** included
- for single- or
- joined- table updates - this must be added **manually**, even
- for single table inheritance.
-
- * The :meth:`.MapperEvents.before_update` and
- :meth:`.MapperEvents.after_update`
- events **are not invoked from this method**. Instead, the
- :meth:`.SessionEvents.after_bulk_update` method is provided to
- act upon a mass UPDATE of entity rows.
.. seealso::
- :meth:`_query.Query.delete`
+ :ref:`bulk_update_delete`
- :ref:`inserts_and_updates` - Core SQL tutorial
"""
"""
stmt = self._statement_20(for_statement=for_statement, **kw)
- assert for_statement == stmt.compile_options._for_statement
+ assert for_statement == stmt._compile_options._for_statement
# this chooses between ORMFromStatementCompileState and
# ORMSelectCompileState. We could also base this on
__visit_name__ = "orm_from_statement"
- compile_options = ORMFromStatementCompileState.default_compile_options
+ _compile_options = ORMFromStatementCompileState.default_compile_options
_compile_state_factory = ORMFromStatementCompileState.create_for_statement
from .. import engine
from .. import exc as sa_exc
from .. import future
+from .. import sql
from .. import util
from ..inspection import inspect
from ..sql import coercions
return self.statement._execution_options.union(self._execution_options)
def _orm_compile_options(self):
- opts = self.statement.compile_options
+ opts = self.statement._compile_options
if isinstance(opts, context.ORMCompileState.default_compile_options):
return opts
else:
class SessionTransaction(object):
"""A :class:`.Session`-level transaction.
- :class:`.SessionTransaction` is a mostly behind-the-scenes object
- not normally referenced directly by application code. It coordinates
- among multiple :class:`_engine.Connection` objects, maintaining a database
- transaction for each one individually, committing or rolling them
- back all at once. It also provides optional two-phase commit behavior
- which can augment this coordination operation.
-
- The :attr:`.Session.transaction` attribute of :class:`.Session`
- refers to the current :class:`.SessionTransaction` object in use, if any.
- The :attr:`.SessionTransaction.parent` attribute refers to the parent
- :class:`.SessionTransaction` in the stack of :class:`.SessionTransaction`
- objects. If this attribute is ``None``, then this is the top of the stack.
- If non-``None``, then this :class:`.SessionTransaction` refers either
- to a so-called "subtransaction" or a "nested" transaction. A
- "subtransaction" is a scoping concept that demarcates an inner portion
- of the outermost "real" transaction. A nested transaction, which
- is indicated when the :attr:`.SessionTransaction.nested`
- attribute is also True, indicates that this :class:`.SessionTransaction`
- corresponds to a SAVEPOINT.
-
- **Life Cycle**
-
- A :class:`.SessionTransaction` is associated with a :class:`.Session` in
- its default mode of ``autocommit=False`` whenever the "autobegin" process
- takes place, associated with no database connections. As the
- :class:`.Session` is called upon to emit SQL on behalf of various
- :class:`_engine.Engine` or :class:`_engine.Connection` objects,
- a corresponding
- :class:`_engine.Connection` and associated :class:`.Transaction`
- is added to a
- collection within the :class:`.SessionTransaction` object, becoming one of
- the connection/transaction pairs maintained by the
- :class:`.SessionTransaction`. The start of a :class:`.SessionTransaction`
- can be tracked using the :meth:`.SessionEvents.after_transaction_create`
- event.
-
- The lifespan of the :class:`.SessionTransaction` ends when the
- :meth:`.Session.commit`, :meth:`.Session.rollback` or
- :meth:`.Session.close` methods are called. At this point, the
- :class:`.SessionTransaction` removes its association with its parent
- :class:`.Session`. A :class:`.Session` that is in ``autocommit=False``
- mode will create a new :class:`.SessionTransaction` to replace it when the
- next "autobegin" event occurs, whereas a :class:`.Session` that's in
- ``autocommit=True`` mode will remain without a :class:`.SessionTransaction`
- until the :meth:`.Session.begin` method is called. The end of a
- :class:`.SessionTransaction` can be tracked using the
- :meth:`.SessionEvents.after_transaction_end` event.
-
- .. versionchanged:: 1.4 the :class:`.SessionTransaction` is not created
- immediately within a :class:`.Session` when constructed or when the
- previous transaction is removed, it instead is created when the
- :class:`.Session` is next used.
-
- **Nesting and Subtransactions**
-
- Another detail of :class:`.SessionTransaction` behavior is that it is
- capable of "nesting". This means that the :meth:`.Session.begin` method
- can be called while an existing :class:`.SessionTransaction` is already
- present, producing a new :class:`.SessionTransaction` that temporarily
- replaces the parent :class:`.SessionTransaction`. When a
- :class:`.SessionTransaction` is produced as nested, it assigns itself to
- the :attr:`.Session.transaction` attribute, and it additionally will assign
- the previous :class:`.SessionTransaction` to its :attr:`.Session.parent`
- attribute. The behavior is effectively a
- stack, where :attr:`.Session.transaction` refers to the current head of
- the stack, and the :attr:`.SessionTransaction.parent` attribute allows
- traversal up the stack until :attr:`.SessionTransaction.parent` is
- ``None``, indicating the top of the stack.
-
- When the scope of :class:`.SessionTransaction` is ended via
- :meth:`.Session.commit` or :meth:`.Session.rollback`, it restores its
- parent :class:`.SessionTransaction` back onto the
- :attr:`.Session.transaction` attribute.
-
- The purpose of this stack is to allow nesting of
- :meth:`.Session.rollback` or :meth:`.Session.commit` calls in context
- with various flavors of :meth:`.Session.begin`. This nesting behavior
- applies to when :meth:`.Session.begin_nested` is used to emit a
- SAVEPOINT transaction, and is also used to produce a so-called
- "subtransaction" which allows a block of code to use a
- begin/rollback/commit sequence regardless of whether or not its enclosing
- code block has begun a transaction. The :meth:`.flush` method, whether
- called explicitly or via autoflush, is the primary consumer of the
- "subtransaction" feature, in that it wishes to guarantee that it works
- within in a transaction block regardless of whether or not the
- :class:`.Session` is in transactional mode when the method is called.
-
- Note that the flush process that occurs within the "autoflush" feature
- as well as when the :meth:`.Session.flush` method is used **always**
- creates a :class:`.SessionTransaction` object. This object is normally
- a subtransaction, unless the :class:`.Session` is in autocommit mode
- and no transaction exists at all, in which case it's the outermost
- transaction. Any event-handling logic or other inspection logic
- needs to take into account whether a :class:`.SessionTransaction`
- is the outermost transaction, a subtransaction, or a "nested" / SAVEPOINT
- transaction.
+ :class:`.SessionTransaction` is produced from the
+ :meth:`_orm.Session.begin`
+ and :meth:`_orm.Session.begin_nested` methods. It's largely an internal
+ object that in modern use provides a context manager for session
+ transactions.
- .. seealso::
+ Documentation on interacting with :class:`_orm.SessionTransaction` is
+ at: :ref:`unitofwork_transaction`.
- :meth:`.Session.rollback`
- :meth:`.Session.commit`
+ .. versionchanged:: 1.4 The scoping and API methods to work with the
+ :class:`_orm.SessionTransaction` object directly have been simplified.
+
+ .. seealso::
+
+ :ref:`unitofwork_transaction`
:meth:`.Session.begin`
:meth:`.Session.begin_nested`
- :attr:`.Session.is_active`
+ :meth:`.Session.rollback`
+
+ :meth:`.Session.commit`
- :meth:`.SessionEvents.after_transaction_create`
+ :meth:`.Session.in_transaction`
- :meth:`.SessionEvents.after_transaction_end`
+ :meth:`.Session.in_nested_transaction`
- :meth:`.SessionEvents.after_commit`
+ :meth:`.Session.get_transaction`
- :meth:`.SessionEvents.after_rollback`
+ :meth:`.Session.get_nested_transaction`
- :meth:`.SessionEvents.after_soft_rollback`
"""
_rollback_exception = None
- def __init__(self, session, parent=None, nested=False, autobegin=False):
+ def __init__(
+ self, session, parent=None, nested=False, autobegin=False,
+ ):
self.session = session
self._connections = {}
self._parent = parent
self.nested = nested
+ if nested:
+ self._previous_nested_transaction = session._nested_transaction
self._state = ACTIVE
if not parent and nested:
raise sa_exc.InvalidRequestError(
return self._connections[bind][0]
local_connect = False
+ should_commit = True
+
if self._parent:
conn = self._parent._connection_for_bind(bind, execution_options)
if not self.nested:
transaction = conn.begin_twophase()
elif self.nested:
transaction = conn.begin_nested()
- else:
- if conn._is_future and conn.in_transaction():
- transaction = conn._transaction
+ elif conn.in_transaction():
+ # if given a future connection already in a transaction, don't
+ # commit that transaction unless it is a savepoint
+ if conn.in_nested_transaction():
+ transaction = conn.get_nested_transaction()
else:
- transaction = conn.begin()
+ transaction = conn.get_transaction()
+ should_commit = False
+ else:
+ transaction = conn.begin()
except:
        # connection will not be associated with this Session;
# close it immediately so that it isn't closed under GC
self._connections[conn] = self._connections[conn.engine] = (
conn,
transaction,
- not bind_is_connection or not conn._is_future,
+ should_commit,
not bind_is_connection,
)
self.session.dispatch.after_begin(self.session, self, conn)
if self._parent is None or self.nested:
self.session.dispatch.before_commit(self.session)
- stx = self.session.transaction
+ stx = self.session._transaction
if stx is not self:
for subtransaction in stx._iterate_self_and_parents(upto=self):
subtransaction.commit()
self._state = PREPARED
- def commit(self):
+ def commit(self, _to_root=False):
self._assert_active(prepared_ok=True)
if self._state is not PREPARED:
self._prepare_impl()
self._remove_snapshot()
self.close()
+
+ if _to_root and self._parent:
+ return self._parent.commit(_to_root=True)
+
return self._parent
- def rollback(self, _capture_exception=False):
+ def rollback(self, _capture_exception=False, _to_root=False):
self._assert_active(prepared_ok=True, rollback_ok=True)
- stx = self.session.transaction
+ stx = self.session._transaction
if stx is not self:
for subtransaction in stx._iterate_self_and_parents(upto=self):
subtransaction.close()
sess.dispatch.after_soft_rollback(sess, self)
+ if _to_root and self._parent:
+ return self._parent.rollback(_to_root=True)
return self._parent
def close(self, invalidate=False):
+ if self.nested:
+ self.session._nested_transaction = (
+ self._previous_nested_transaction
+ )
+
self.session._transaction = self._parent
+
if self._parent is None:
for connection, transaction, should_commit, autoclose in set(
self._connections.values()
):
if invalidate:
connection.invalidate()
+ if should_commit and transaction.is_active:
+ transaction.close()
if autoclose:
connection.close()
- else:
- transaction.close()
self._state = CLOSED
self.session.dispatch.after_transaction_end(self.session, self)
"scalar",
)
+ @util.deprecated_params(
+ autocommit=(
+ "2.0",
+ "The :paramref:`.Session.autocommit` parameter is deprecated "
+ "and will be removed in SQLAlchemy version 2.0. Please use the "
+ ":paramref:`.Session.autobegin` parameter set to False to support "
+ "explicit use of the :meth:`.Session.begin` method.",
+ ),
+ )
def __init__(
self,
bind=None,
:class:`.Session` dictionary will be local to that
:class:`.Session`.
- .. versionadded:: 0.9.0
-
:param query_cls: Class which should be used to create new Query
objects, as returned by the :meth:`~.Session.query` method.
Defaults to :class:`_query.Query`.
self._flushing = False
self._warn_on_events = False
self._transaction = None
+ self._nested_transaction = None
self.future = future
self.hash_key = _new_sessionid()
self.autoflush = autoflush
- self.autocommit = autocommit
self.expire_on_commit = expire_on_commit
self.enable_baked_queries = enable_baked_queries
+ if autocommit:
+ if future:
+ raise sa_exc.ArgumentError(
+ "Cannot use autocommit mode with future=True. "
+ "use the autobegin flag."
+ )
+ self.autocommit = True
+ else:
+ self.autocommit = False
+
self.twophase = twophase
self._query_cls = query_cls if query_cls else query.Query
if info:
connection_callable = None
+ def __enter__(self):
+ return self
+
+ def __exit__(self, type_, value, traceback):
+ self.close()
+
@property
+ @util.deprecated_20(
+ "The :attr:`_orm.Session.transaction` accessor is deprecated and "
+ "will be removed in SQLAlchemy version 2.0. "
+ "For context manager use, use :meth:`_orm.Session.begin`. To access "
+ "the current root transaction, use "
+ ":meth:`_orm.Session.get_transaction()"
+ )
def transaction(self):
"""The current active or inactive :class:`.SessionTransaction`.
- If this session is in "autobegin" mode and the transaction was not
- begun, this accessor will implicitly begin the transaction.
+ May be None if no transaction has begun yet.
.. versionchanged:: 1.4 the :attr:`.Session.transaction` attribute
- is now a read-only descriptor that will automatically start a
- transaction in "autobegin" mode if one is not present.
+ is now a read-only descriptor that also may return None if no
+ transaction has begun yet.
+
"""
- self._autobegin()
+ if not self.future:
+ self._autobegin()
return self._transaction
+ def in_transaction(self):
+ """Return True if this :class:`_orm.Session` has begun a transaction.
+
+ .. versionadded:: 1.4
+
+ .. seealso::
+
+ :attr:`_orm.Session.is_active`
+
+
+ """
+ return self._transaction is not None
+
+ def in_nested_transaction(self):
+ """Return True if this :class:`_orm.Session` has begun a nested
+ transaction, e.g. SAVEPOINT.
+
+ .. versionadded:: 1.4
+
+ """
+ return self._nested_transaction is not None
+
+ def get_transaction(self):
+ """Return the current root transaction in progress, if any.
+
+ .. versionadded:: 1.4
+
+ """
+ trans = self._transaction
+ while trans is not None and trans._parent is not None:
+ trans = trans._parent
+ return trans
+
+ def get_nested_transaction(self):
+ """Return the current nested transaction in progress, if any.
+
+ .. versionadded:: 1.4
+
+ """
+
+ return self._nested_transaction
+
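The new introspection accessors added above can be exercised as in this sketch (in-memory SQLite assumed), showing that no transaction exists until the "autobegin" step occurs on first use:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")
session = Session(engine)

# no transaction until the session is first used ("autobegin")
before = session.in_transaction()

session.execute(text("SELECT 1"))
during = session.in_transaction()
root = session.get_transaction()
nested = session.in_nested_transaction()

session.commit()
after = session.in_transaction()
```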
@util.memoized_property
def info(self):
"""A user-modifiable dictionary.
here is always local to this :class:`.Session` and can be modified
independently of all other :class:`.Session` objects.
- .. versionadded:: 0.9.0
-
"""
return {}
return False
- def begin(self, subtransactions=False, nested=False):
+ @util.deprecated_params(
+ subtransactions=(
+ "2.0",
+ "The :paramref:`_orm.Session.begin.subtransactions` flag is "
+ "deprecated and "
+ "will be removed in SQLAlchemy version 2.0. The "
+ ":attr:`_orm.Session.transaction` flag may "
+ "be checked for None before invoking :meth:`_orm.Session.begin`.",
+ )
+ )
+ def begin(self, subtransactions=False, nested=False, _subtrans=False):
"""Begin a transaction on this :class:`.Session`.
.. warning::
"""
+ if subtransactions and self.future:
+ raise NotImplementedError(
+ "subtransactions are not implemented in future "
+ "Session objects."
+ )
if self._autobegin():
if not subtransactions and not nested:
- return
+ return self._transaction
if self._transaction is not None:
- if subtransactions or nested:
- self._transaction = self._transaction._begin(nested=nested)
+ if subtransactions or _subtrans or nested:
+ trans = self._transaction._begin(nested=nested)
+ self._transaction = trans
+ if nested:
+ self._nested_transaction = trans
else:
raise sa_exc.InvalidRequestError(
- "A transaction is already begun. Use "
- "subtransactions=True to allow subtransactions."
+ "A transaction is already begun on this Session."
)
else:
self._transaction = SessionTransaction(self, nested=nested)
if self._transaction is None:
pass
else:
- self._transaction.rollback()
+ self._transaction.rollback(_to_root=self.future)
def commit(self):
"""Flush pending changes and commit the current transaction.
if not self._autobegin():
raise sa_exc.InvalidRequestError("No transaction is begun.")
- self._transaction.commit()
+ self._transaction.commit(_to_root=self.future)
def prepare(self):
"""Prepare the current transaction in progress for two phase commit.
present within the :class:`.Session`, a warning is emitted and
the arguments are ignored.
- .. versionadded:: 0.9.9
-
.. seealso::
:ref:`session_transaction_isolation`
)
assert self._transaction is None
+ assert self.autocommit
conn = engine.connect(**kw)
if execution_options:
conn = conn.execution_options(**execution_options)
This is a variant of :meth:`.Session.close` that will additionally
ensure that the :meth:`_engine.Connection.invalidate`
- method will be called
- on all :class:`_engine.Connection` objects. This can be called when
- the database is known to be in a state where the connections are
- no longer safe to be used.
+ method will be called on each :class:`_engine.Connection` object
+ that is currently in use for a transaction (typically there is only
+ one connection unless the :class:`_orm.Session` is used with
+ multiple engines).
- E.g.::
+ This can be called when the database is known to be in a state where
+ the connections are no longer safe to be used.
+
+        The example below illustrates a scenario when using `gevent
+ <http://www.gevent.org/>`_, which can produce ``Timeout`` exceptions
+ that may mean the underlying connection should be discarded::
+
+ import gevent
try:
sess = Session()
sess.rollback()
raise
- This clears all items and ends any transaction in progress.
-
- If this session were created with ``autocommit=False``, a new
- transaction is immediately begun. Note that this new transaction does
- not use any connection resources until they are first needed.
-
- .. versionadded:: 0.9.9
+ The method additionally does everything that :meth:`_orm.Session.close`
+ does, including that all ORM objects are expunged.
"""
self._close_impl(invalidate=True)
"A blank dictionary is ambiguous."
)
- if with_for_update is not None:
- if with_for_update is True:
- with_for_update = query.ForUpdateArg()
- elif with_for_update:
- with_for_update = query.ForUpdateArg(**with_for_update)
- else:
- with_for_update = None
+ with_for_update = query.ForUpdateArg._from_argument(with_for_update)
stmt = future.select(object_mapper(instance))
if (
for o, m, st_, dct_ in cascade_states:
self._delete_impl(st_, o, False)
+ def get(
+ self,
+ entity,
+ ident,
+ options=None,
+ populate_existing=False,
+ with_for_update=None,
+ identity_token=None,
+ ):
+ """Return an instance based on the given primary key identifier,
+ or ``None`` if not found.
+
+ E.g.::
+
+ my_user = session.get(User, 5)
+
+ some_object = session.get(VersionedFoo, (5, 10))
+
+ some_object = session.get(
+ VersionedFoo,
+ {"id": 5, "version_id": 10}
+ )
+
+ .. versionadded:: 1.4 Added :meth:`_orm.Session.get`, which is moved
+ from the now deprecated :meth:`_orm.Query.get` method.
+
+ :meth:`_orm.Session.get` is special in that it provides direct
+ access to the identity map of the :class:`.Session`.
+ If the given primary key identifier is present
+ in the local identity map, the object is returned
+ directly from this collection and no SQL is emitted,
+ unless the object has been marked fully expired.
+ If not present,
+ a SELECT is performed in order to locate the object.
+
+ :meth:`_orm.Session.get` also will perform a check if
+ the object is present in the identity map and
+ marked as expired - a SELECT
+ is emitted to refresh the object as well as to
+ ensure that the row is still present.
+ If not, :class:`~sqlalchemy.orm.exc.ObjectDeletedError` is raised.
+
+ :param entity: a mapped class or :class:`.Mapper` indicating the
+ type of entity to be loaded.
+
+ :param ident: A scalar, tuple, or dictionary representing the
+ primary key. For a composite (e.g. multiple column) primary key,
+ a tuple or dictionary should be passed.
+
+ For a single-column primary key, the scalar calling form is typically
+ the most expedient. If the primary key of a row is the value "5",
+ the call looks like::
+
+ my_object = session.get(SomeClass, 5)
+
+ The tuple form contains primary key values typically in
+ the order in which they correspond to the mapped
+ :class:`_schema.Table`
+ object's primary key columns, or if the
+ :paramref:`_orm.Mapper.primary_key` configuration parameter were
+ used, in
+ the order used for that parameter. For example, if the primary key
+ of a row is represented by the integer
+ digits "5, 10" the call would look like::
+
+ my_object = session.get(SomeClass, (5, 10))
+
+ The dictionary form should include as keys the mapped attribute names
+ corresponding to each element of the primary key. If the mapped class
+ has the attributes ``id``, ``version_id`` as the attributes which
+ store the object's primary key value, the call would look like::
+
+ my_object = session.get(SomeClass, {"id": 5, "version_id": 10})
+
+ :param options: optional sequence of loader options which will be
+ applied to the query, if one is emitted.
+
+ :param populate_existing: causes the method to unconditionally emit
+ a SQL query and refresh the object with the newly loaded data,
+ regardless of whether or not the object is already present.
+
+ :param with_for_update: optional boolean ``True`` indicating FOR UPDATE
+ should be used, or may be a dictionary containing flags to
+ indicate a more specific set of FOR UPDATE flags for the SELECT;
+ flags should match the parameters of
+ :meth:`_query.Query.with_for_update`.
+ Supersedes the :paramref:`.Session.refresh.lockmode` parameter.
+
+ :return: The object instance, or ``None``.
+
+ """
+ return self._get_impl(
+ entity,
+ ident,
+ loading.load_on_pk_identity,
+ options,
+ populate_existing=populate_existing,
+ with_for_update=with_for_update,
+ identity_token=identity_token,
+ )
+
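The identity-map behavior described in the :meth:`_orm.Session.get` docstring can be sketched as follows; the ``User`` mapping and SQLite URL are illustrative assumptions:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(User(id=5, name="spongebob"))
session.commit()

# identity-map aware lookup: the second call returns the same
# in-memory object without emitting additional SQL
obj = session.get(User, 5)
assert obj is session.get(User, 5)

# a primary key with no matching row returns None
missing = session.get(User, 99)
```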
+ def _get_impl(
+ self,
+ entity,
+ primary_key_identity,
+ db_load_fn,
+ options=None,
+ populate_existing=False,
+ with_for_update=None,
+ identity_token=None,
+ execution_options=None,
+ ):
+
+ # convert composite types to individual args
+ if hasattr(primary_key_identity, "__composite_values__"):
+ primary_key_identity = primary_key_identity.__composite_values__()
+
+ mapper = inspect(entity)
+
+ is_dict = isinstance(primary_key_identity, dict)
+ if not is_dict:
+ primary_key_identity = util.to_list(
+ primary_key_identity, default=(None,)
+ )
+
+ if len(primary_key_identity) != len(mapper.primary_key):
+ raise sa_exc.InvalidRequestError(
+ "Incorrect number of values in identifier to formulate "
+ "primary key for query.get(); primary key columns are %s"
+ % ",".join("'%s'" % c for c in mapper.primary_key)
+ )
+
+ if is_dict:
+ try:
+ primary_key_identity = list(
+ primary_key_identity[prop.key]
+ for prop in mapper._identity_key_props
+ )
+
+ except KeyError as err:
+ util.raise_(
+ sa_exc.InvalidRequestError(
+ "Incorrect names of values in identifier to formulate "
+ "primary key for query.get(); primary key attribute "
+ "names are %s"
+ % ",".join(
+ "'%s'" % prop.key
+ for prop in mapper._identity_key_props
+ )
+ ),
+ replace_context=err,
+ )
+
+ if (
+ not populate_existing
+ and not mapper.always_refresh
+ and with_for_update is None
+ ):
+
+ instance = self._identity_lookup(
+ mapper, primary_key_identity, identity_token=identity_token
+ )
+
+ if instance is not None:
+ # reject calls for id in identity map but class
+ # mismatch.
+ if not issubclass(instance.__class__, mapper.class_):
+ return None
+ return instance
+ elif instance is attributes.PASSIVE_CLASS_MISMATCH:
+ return None
+
+ # apply_labels() not strictly necessary, however this will ensure that
+ # tablename_colname style is used which at the moment is asserted
+ # in a lot of unit tests :)
+
+ load_options = context.QueryContext.default_load_options
+
+ if populate_existing:
+ load_options += {"_populate_existing": populate_existing}
+ statement = sql.select(mapper).apply_labels()
+ if with_for_update is not None:
+ statement._for_update_arg = query.ForUpdateArg._from_argument(
+ with_for_update
+ )
+
+ if options:
+ statement = statement.options(*options)
+ if execution_options:
+ statement = statement.execution_options(**execution_options)
+ return db_load_fn(
+ self, statement, primary_key_identity, load_options=load_options,
+ )
+
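The `_get_impl` flow above checks the identity map before emitting a SELECT, unless `populate_existing` or a FOR UPDATE argument forces a database round trip. A minimal pure-Python sketch of that decision (the `IdentityMapSketch` class and its attributes are hypothetical illustrations, not SQLAlchemy API):

```python
class IdentityMapSketch:
    """Minimal sketch of the 'check identity map before SELECT' logic."""

    def __init__(self):
        # maps (class_, primary key tuple) -> instance
        self._identity_map = {}

    def get(self, class_, ident, populate_existing=False, db_load=None):
        key = (
            class_,
            tuple(ident) if isinstance(ident, (list, tuple)) else (ident,),
        )
        if not populate_existing and key in self._identity_map:
            instance = self._identity_map[key]
            # reject identity-map hits whose class does not match
            if not isinstance(instance, class_):
                return None
            return instance
        # otherwise fall through to a database load (stubbed out here)
        return db_load(key) if db_load else None
```

The real method additionally handles dict-form identifiers, composite values, and load options; the sketch only illustrates the lookup-before-load ordering.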
def merge(self, instance, load=True):
"""Copy the state of a given instance into a corresponding instance
within this :class:`.Session`.
new_instance = True
elif key_is_persistent:
- merged = self.query(mapper.class_).get(key[1])
+ merged = self.get(mapper.class_, key[1], identity_token=key[2])
if merged is None:
merged = mapper.class_manager.new_instance()
if not flush_context.has_work:
return
- flush_context.transaction = transaction = self.begin(
- subtransactions=True
- )
+ flush_context.transaction = transaction = self.begin(_subtrans=True)
try:
self._warn_on_events = True
try:
mapper = _class_to_mapper(mapper)
self._flushing = True
- transaction = self.begin(subtransactions=True)
+ transaction = self.begin(_subtrans=True)
try:
if isupdate:
persistence._bulk_update(
@property
def is_active(self):
- """True if this :class:`.Session` is in "transaction mode" and
- is not in "partial rollback" state.
-
- The :class:`.Session` in its default mode of ``autocommit=False``
- is essentially always in "transaction mode", in that a
- :class:`.SessionTransaction` is associated with it as soon as
- it is instantiated. This :class:`.SessionTransaction` is immediately
- replaced with a new one as soon as it is ended, due to a rollback,
- commit, or close operation.
-
- "Transaction mode" does *not* indicate whether
- or not actual database connection resources are in use; the
- :class:`.SessionTransaction` object coordinates among zero or more
- actual database transactions, and starts out with none, accumulating
- individual DBAPI connections as different data sources are used
- within its scope. The best way to track when a particular
- :class:`.Session` has actually begun to use DBAPI resources is to
- implement a listener using the :meth:`.SessionEvents.after_begin`
- method, which will deliver both the :class:`.Session` as well as the
- target :class:`_engine.Connection` to a user-defined event listener.
-
- The "partial rollback" state refers to when an "inner" transaction,
- typically used during a flush, encounters an error and emits a
- rollback of the DBAPI connection. At this point, the
- :class:`.Session` is in "partial rollback" and awaits for the user to
- call :meth:`.Session.rollback`, in order to close out the
- transaction stack. It is in this "partial rollback" period that the
- :attr:`.is_active` flag returns False. After the call to
- :meth:`.Session.rollback`, the :class:`.SessionTransaction` is
- replaced with a new one and :attr:`.is_active` returns ``True`` again.
-
- When a :class:`.Session` is used in ``autocommit=True`` mode, the
- :class:`.SessionTransaction` is only instantiated within the scope
- of a flush call, or when :meth:`.Session.begin` is called. So
- :attr:`.is_active` will always be ``False`` outside of a flush or
- :meth:`.Session.begin` block in this mode, and will be ``True``
- within the :meth:`.Session.begin` block as long as it doesn't enter
- "partial rollback" state.
-
- From all the above, it follows that the only purpose to this flag is
- for application frameworks that wish to detect if a "rollback" is
- necessary within a generic error handling routine, for
- :class:`.Session` objects that would otherwise be in
- "partial rollback" mode. In a typical integration case, this is also
- not necessary as it is standard practice to emit
- :meth:`.Session.rollback` unconditionally within the outermost
- exception catch.
-
- To track the transactional state of a :class:`.Session` fully,
- use event listeners, primarily the :meth:`.SessionEvents.after_begin`,
- :meth:`.SessionEvents.after_commit`,
- :meth:`.SessionEvents.after_rollback` and related events.
+ """True if this :class:`.Session` is not in "partial rollback" state.
+
+ .. versionchanged:: 1.4 The :class:`_orm.Session` no longer begins
+ a new transaction immediately, so this attribute will be False
+ when the :class:`_orm.Session` is first instantiated.
+
+ The "partial rollback" state typically indicates that the flush process
+ of the :class:`_orm.Session` has failed, and that the
+ :meth:`_orm.Session.rollback` method must be emitted in order to
+ fully roll back the transaction.
+
+ If this :class:`_orm.Session` is not in a transaction at all, the
+ :class:`_orm.Session` will autobegin when it is first used, so in this
+ case :attr:`_orm.Session.is_active` will return True.
+
+ Otherwise, if this :class:`_orm.Session` is within a transaction,
+ and that transaction has not been rolled back internally, the
+ :attr:`_orm.Session.is_active` will also return True.
+
+ .. seealso::
+
+ :ref:`faq_session_rollback`
+
+ :meth:`_orm.Session.in_transaction`
"""
- self._autobegin()
- return self._transaction and self._transaction.is_active
+ if self.autocommit:
+ return (
+ self._transaction is not None and self._transaction.is_active
+ )
+ else:
+ return self._transaction is None or self._transaction.is_active
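The two branches above reduce to a small truth table: in autocommit mode a live transaction is required, while in the default (autobegin) mode "no transaction yet" also counts as active. A sketch of just that predicate (the function name is hypothetical):

```python
def is_active_sketch(autocommit, transaction):
    """Sketch of the Session.is_active logic above.

    ``transaction`` is either None or an object with an
    ``is_active`` attribute.
    """
    if autocommit:
        # legacy autocommit: only active while inside a transaction
        return transaction is not None and transaction.is_active
    else:
        # autobegin mode: no transaction at all still counts as active,
        # since one will begin on first use
        return transaction is None or transaction.is_active
```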
identity_map = None
"""A mapping of object identities to objects themselves.
e.g.::
- # global scope
- Session = sessionmaker(autoflush=False)
+ from sqlalchemy import create_engine
+ from sqlalchemy.orm import sessionmaker
- # later, in a local scope, create and use a session:
- sess = Session()
+ # an Engine, which the Session will use for connection
+ # resources
+ engine = create_engine('postgresql://scott:tiger@localhost/')
- Any keyword arguments sent to the constructor itself will override the
- "configured" keywords::
+ Session = sessionmaker(engine)
- Session = sessionmaker()
+ with Session() as session:
+ session.add(some_object)
+ session.add(some_other_object)
+ session.commit()
+
+ Context manager use is optional; otherwise, the returned
+ :class:`_orm.Session` object may be closed explicitly via the
+ :meth:`_orm.Session.close` method. Using a
+ ``try:/finally:`` block is optional; however, it will ensure that the close
+ takes place even if there are database errors::
+
+ session = Session()
+ try:
+ session.add(some_object)
+ session.add(some_other_object)
+ session.commit()
+ finally:
+ session.close()
+
+ :class:`.sessionmaker` acts as a factory for :class:`_orm.Session`
+ objects in the same way as an :class:`_engine.Engine` acts as a factory
+ for :class:`_engine.Connection` objects. In this way it also includes
+ a :meth:`_orm.sessionmaker.begin` method, which provides a context
+ manager which both begins and commits a transaction, as well as closes
+ out the :class:`_orm.Session` when complete, rolling back the transaction
+ if any errors occur::
+
+ Session = sessionmaker(engine)
+
+ with Session.begin() as session:
+ session.add(some_object)
+ session.add(some_other_object)
+ # commits transaction, closes session
+
+ .. versionadded:: 1.4
+
+ When calling upon :class:`_orm.sessionmaker` to construct a
+ :class:`_orm.Session`, keyword arguments may also be passed to the
+ method; these arguments will override that of the globally configured
+ parameters. Below we use a :class:`_orm.sessionmaker` bound to a certain
+ :class:`_engine.Engine` to produce a :class:`_orm.Session` that is instead
+ bound to a specific :class:`_engine.Connection` procured from that engine::
+
+ Session = sessionmaker(engine)
# bind an individual session to a connection
- sess = Session(bind=connection)
- The class also includes a method :meth:`.configure`, which can
- be used to specify additional keyword arguments to the factory, which
- will take effect for subsequent :class:`.Session` objects generated.
- This is usually used to associate one or more :class:`_engine.Engine`
- objects
- with an existing :class:`.sessionmaker` factory before it is first
- used::
+ with engine.connect() as connection:
+ with Session(bind=connection) as session:
+ # work with session
+
+ The class also includes a method :meth:`_orm.sessionmaker.configure`, which
+ can be used to specify additional keyword arguments to the factory, which
+ will take effect for subsequent :class:`.Session` objects generated. This
+ is usually used to associate one or more :class:`_engine.Engine` objects
+ with an existing
+ :class:`.sessionmaker` factory before it is first used::
- # application starts
+ # application starts, sessionmaker does not have
+ # an engine bound yet
Session = sessionmaker()
- # ... later
+ # ... later, when an engine URL is read from a configuration
+ # file or other events allow the engine to be created
engine = create_engine('sqlite:///foo.db')
Session.configure(bind=engine)
sess = Session()
+ # work with session
.. seealso::
replaced, when the ``info`` parameter is specified to the specific
:class:`.Session` construction operation.
- .. versionadded:: 0.9.0
-
:param \**kw: all other keyword arguments are passed to the
constructor of newly created :class:`.Session` objects.
# events can be associated with it specifically.
self.class_ = type(class_.__name__, (class_,), {})
+ @util.contextmanager
+ def begin(self):
+ """Produce a context manager that provides a new
+ :class:`_orm.Session`, as well as a transaction that is committed
+ when the block completes.
+
+
+ e.g.::
+
+ Session = sessionmaker(some_engine)
+
+ with Session.begin() as session:
+ session.add(some_object)
+
+ # commits transaction, closes session
+
+ .. versionadded:: 1.4
+
+
+ """
+ with self() as session:
+ with session.begin():
+ yield session
+
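The `begin()` method above simply nests the session context manager inside the transaction context manager. The same pattern can be sketched with `contextlib` and a stand-in session class (`FakeSession` and `begin_factory` are hypothetical names used only for illustration):

```python
from contextlib import contextmanager


class FakeSession:
    """Stand-in session recording its lifecycle events."""

    def __init__(self):
        self.events = []

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.events.append("close")

    @contextmanager
    def begin(self):
        try:
            yield self
            self.events.append("commit")
        except Exception:
            self.events.append("rollback")
            raise


@contextmanager
def begin_factory(session_cls):
    """Sketch of sessionmaker.begin(): open a session, begin a
    transaction, commit on success, then close the session."""
    with session_cls() as session:
        with session.begin():
            yield session
```

On a clean exit from the block, the transaction commits before the session closes; on an exception, the inner context manager rolls back and the session still closes.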
def __call__(self, **local_kw):
"""Produce a new :class:`.Session` object using the configuration
established in this :class:`.sessionmaker`.
call to :meth:`.Session.merge` in that a given persistent state
can be manufactured without any SQL calls.
- .. versionadded:: 0.9.5
-
.. seealso::
:func:`.make_transient`
self._equated_columns[c] = self._equated_columns[col]
self.logger.info(
- "%s will use query.get() to " "optimize instance loads", self
+ "%s will use Session.get() to optimize instance loads", self
)
def init_class_attribute(self, mapper):
topological.set = (
unitofwork.set
) = session.set = mapper.set = dependency.set = RandomSet
+
+
+def _offset_or_limit_clause(element, name=None, type_=None):
+ """Convert the given value to an "offset or limit" clause.
+
+ This handles incoming integers and converts to an expression; if
+ an expression is already given, it is passed through.
+
+ """
+ return coercions.expect(
+ roles.LimitOffsetRole, element, name=name, type_=type_
+ )
+
+
+def _offset_or_limit_clause_asint_if_possible(clause):
+ """Return the offset or limit clause as a simple integer if possible,
+ else return the clause.
+
+ """
+ if clause is None:
+ return None
+ if hasattr(clause, "_limit_offset_value"):
+ value = clause._limit_offset_value
+ return util.asint(value)
+ else:
+ return clause
+
+
+def _make_slice(limit_clause, offset_clause, start, stop):
+ """Compute LIMIT/OFFSET in terms of slice start/stop values."""
+
+ # for calculated limit/offset, try to do the addition of
+ # values to offset in Python, however if a SQL clause is present
+ # then the addition has to be on the SQL side.
+ if start is not None and stop is not None:
+ offset_clause = _offset_or_limit_clause_asint_if_possible(
+ offset_clause
+ )
+ if offset_clause is None:
+ offset_clause = 0
+
+ if start != 0:
+ offset_clause = offset_clause + start
+
+ if offset_clause == 0:
+ offset_clause = None
+ else:
+ offset_clause = _offset_or_limit_clause(offset_clause)
+
+ limit_clause = _offset_or_limit_clause(stop - start)
+
+ elif start is None and stop is not None:
+ limit_clause = _offset_or_limit_clause(stop)
+ elif start is not None and stop is None:
+ offset_clause = _offset_or_limit_clause_asint_if_possible(
+ offset_clause
+ )
+ if offset_clause is None:
+ offset_clause = 0
+
+ if start != 0:
+ offset_clause = offset_clause + start
+
+ if offset_clause == 0:
+ offset_clause = None
+ else:
+ offset_clause = _offset_or_limit_clause(offset_clause)
+
+ return limit_clause, offset_clause
+
+
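For plain integer values, the slice-to-LIMIT/OFFSET arithmetic in `_make_slice` above can be sketched without the coercion layer (the function name below is hypothetical; the real code also accepts SQL expressions and performs the addition on the SQL side when needed):

```python
def make_slice_sketch(limit, offset, start, stop):
    """Sketch of the slice -> LIMIT/OFFSET computation, for
    plain-integer values only."""
    if start is not None and stop is not None:
        # both bounds: offset shifts by start, limit is the window size
        offset = (offset or 0) + start
        if offset == 0:
            offset = None
        limit = stop - start
    elif start is None and stop is not None:
        # only an upper bound: plain LIMIT
        limit = stop
    elif start is not None and stop is None:
        # only a lower bound: plain OFFSET
        offset = (offset or 0) + start
        if offset == 0:
            offset = None
    return limit, offset
```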
+def _getitem(iterable_query, item):
+ """Calculate __getitem__ in terms of an iterable query object
+ that also has a slice() method.
+
+ """
+
+ if isinstance(item, slice):
+ start, stop, step = util.decode_slice(item)
+
+ if (
+ isinstance(stop, int)
+ and isinstance(start, int)
+ and stop - start <= 0
+ ):
+ return []
+
+ # perhaps we should execute a count() here so that we
+ # can still use LIMIT/OFFSET ?
+ elif (isinstance(start, int) and start < 0) or (
+ isinstance(stop, int) and stop < 0
+ ):
+ return list(iterable_query)[item]
+
+ res = iterable_query.slice(start, stop)
+ if step is not None:
+ return list(res)[None : None : item.step]
+ else:
+ return list(res)
+ else:
+ if item == -1:
+ return list(iterable_query)[-1]
+ else:
+ return list(iterable_query[item : item + 1])[0]
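The `_getitem` helper above translates Python indexing into `slice()` calls, falling back to loading the full result for negative indexes. A simplified, list-backed sketch of the same dispatch (`ListQuery` and `getitem_sketch` are hypothetical names, and the empty-slice and dict-identity special cases are omitted):

```python
class ListQuery:
    """Minimal iterable-query stand-in with a slice() method."""

    def __init__(self, data):
        self.data = data

    def __iter__(self):
        return iter(self.data)

    def slice(self, start, stop):
        return ListQuery(self.data[start:stop])


def getitem_sketch(q, item):
    if isinstance(item, slice):
        start, stop, step = item.start or 0, item.stop, item.step
        # negative bounds require loading the full result
        if (start and start < 0) or (stop is not None and stop < 0):
            return list(q)[item]
        res = q.slice(start, stop)
        # apply any step in Python after the LIMIT/OFFSET slice
        return list(res)[None:None:step]
    if item == -1:
        return list(q)[-1]
    return list(q.slice(item, item + 1))[0]
```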
o1.__dict__.update(other)
return o1
+ def __eq__(self, other):
+ # TODO: very inefficient. This is used only in test suites
+ # right now.
+ for a, b in util.zip_longest(self._cache_attrs, other._cache_attrs):
+ if getattr(self, a) != getattr(other, b):
+ return False
+ return True
+
+ def __repr__(self):
+ # TODO: fairly inefficient, used only in debugging right now.
+
+ return "%s(%s)" % (
+ self.__class__.__name__,
+ ", ".join(
+ "%s=%r" % (k, self.__dict__[k])
+ for k in self._cache_attrs
+ if k in self.__dict__
+ ),
+ )
+
@hybridmethod
def add_to_element(self, name, value):
return self + {name: getattr(self, name) + value}
)
return cls + d
+ @classmethod
+ def from_execution_options(
+ cls, key, attrs, exec_options, statement_exec_options
+ ):
+ """Process the Options argument in terms of execution options.
+
+
+ e.g.::
+
+ (
+ load_options,
+ execution_options,
+ ) = QueryContext.default_load_options.from_execution_options(
+ "_sa_orm_load_options",
+ {
+ "populate_existing",
+ "autoflush",
+ "yield_per"
+ },
+ execution_options,
+ statement._execution_options,
+ )
+
+ This returns the new Options object, and also refreshes
+ "_sa_orm_load_options" in the execution options dictionary
+ with that Options object.
+
+ """
+
+ # common case is that no options we are looking for are
+ # in either dictionary, so cancel for that first
+ check_argnames = attrs.intersection(
+ set(exec_options).union(statement_exec_options)
+ )
+
+ existing_options = exec_options.get(key, cls)
+
+ if check_argnames:
+ result = {}
+ for argname in check_argnames:
+ local = "_" + argname
+ if argname in exec_options:
+ result[local] = exec_options[argname]
+ elif argname in statement_exec_options:
+ result[local] = statement_exec_options[argname]
+
+ new_options = existing_options + result
+ exec_options = util.immutabledict().merge_with(
+ exec_options, {key: new_options}
+ )
+ return new_options, exec_options
+
+ else:
+ return existing_options, exec_options
+
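The consolidation step above gathers recognized names from two execution-options dictionaries into a single options object stored under one key. A dict-based sketch of that merge (the function name is hypothetical, and plain dicts stand in for the immutable Options objects):

```python
def from_execution_options_sketch(key, attrs, exec_options, stmt_exec_options, defaults):
    """Sketch of the option-consolidation above: collect recognized
    names from two execution-options dicts into one options dict
    stored under ``key``."""
    # common case: none of the names we care about are present
    check = attrs.intersection(set(exec_options) | set(stmt_exec_options))
    existing = exec_options.get(key, defaults)
    if not check:
        return existing, exec_options

    # local exec options win over statement-level ones
    result = dict(existing)
    for name in check:
        if name in exec_options:
            result["_" + name] = exec_options[name]
        elif name in stmt_exec_options:
            result["_" + name] = stmt_exec_options[name]

    # produce a new exec-options dict rather than mutating in place
    new_exec = dict(exec_options)
    new_exec[key] = result
    return result, new_exec
```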
class CacheableOptions(Options, HasCacheKey):
@hybridmethod
)
+def _expression_collection_was_a_list(attrname, fnname, args):
+ if args and isinstance(args[0], (list, set)) and len(args) == 1:
+ util.warn_deprecated_20(
+ 'The "%s" argument to %s() is now passed as a series of '
+ "positional elements, rather than as a list." % (attrname, fnname)
+ )
+ return args[0]
+ else:
+ return args
+
+
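The helper above lets 1.4 accept both the legacy single-list calling form and the new positional form, warning on the former. A standalone sketch using `warnings` from the standard library (the function name is hypothetical, standing in for the real coercions helper):

```python
import warnings


def collection_was_a_list(attrname, fnname, args):
    """Sketch of the 1.x -> 2.0 transition helper above: accept a
    single list/set argument with a deprecation warning, otherwise
    pass the positional tuple through unchanged."""
    if args and isinstance(args[0], (list, set)) and len(args) == 1:
        warnings.warn(
            'The "%s" argument to %s() is now passed as a series of '
            "positional elements, rather than as a list." % (attrname, fnname),
            DeprecationWarning,
        )
        return args[0]
    return args
```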
def expect(role, element, apply_propagate_attrs=None, argname=None, **kw):
if (
role.allows_lambda
stmt = select([users_table]).\
where(
case(
- [
- (users_table.c.name == 'wendy', 'W'),
- (users_table.c.name == 'jack', 'J')
- ],
+ (users_table.c.name == 'wendy', 'W'),
+ (users_table.c.name == 'jack', 'J'),
else_='E'
)
)
("else_", InternalTraversal.dp_clauseelement),
]
- def __init__(self, whens, value=None, else_=None):
+ # TODO: for Py2k removal, this will be:
+ # def __init__(self, *whens, value=None, else_=None):
+
+ def __init__(self, *whens, **kw):
r"""Produce a ``CASE`` expression.
The ``CASE`` construct in SQL is a conditional object that
stmt = select([users_table]).\
where(
case(
- [
- (users_table.c.name == 'wendy', 'W'),
- (users_table.c.name == 'jack', 'J')
- ],
+ (users_table.c.name == 'wendy', 'W'),
+ (users_table.c.name == 'jack', 'J'),
else_='E'
)
)
from sqlalchemy import case, literal_column
case(
- [
- (
- orderline.c.qty > 100,
- literal_column("'greaterthan100'")
- ),
- (
- orderline.c.qty > 10,
- literal_column("'greaterthan10'")
- )
- ],
+ (
+ orderline.c.qty > 100,
+ literal_column("'greaterthan100'")
+ ),
+ (
+ orderline.c.qty > 10,
+ literal_column("'greaterthan10'")
+ ),
else_=literal_column("'lessthan10'")
)
ELSE 'lessthan10'
END
- :param whens: The criteria to be compared against,
+ :param \*whens: The criteria to be compared against,
:paramref:`.case.whens` accepts two different forms, based on
whether or not :paramref:`.case.value` is used.
+ .. versionchanged:: 1.4 the :func:`_sql.case`
+ function now accepts the series of WHEN conditions positionally;
+ passing the expressions within a list is deprecated.
+
In the first form, it accepts a list of 2-tuples; each 2-tuple
consists of ``(<sql expression>, <value>)``, where the SQL
expression is a boolean expression and "value" is a resulting value,
e.g.::
- case([
+ case(
(users_table.c.name == 'wendy', 'W'),
(users_table.c.name == 'jack', 'J')
- ])
+ )
In the second form, it accepts a Python dictionary of comparison
values mapped to a resulting value; this form requires
"""
+ if "whens" in kw:
+ util.warn_deprecated_20(
+ 'The "whens" argument to case() is now passed as a series of '
+ "positional elements, rather than as a list."
+ )
+ whens = kw.pop("whens")
+ else:
+ whens = coercions._expression_collection_was_a_list(
+ "whens", "case", whens
+ )
try:
whens = util.dictlike_iteritems(whens)
except TypeError:
pass
+ value = kw.pop("value", None)
if value is not None:
whenlist = [
(
self.type = type_
self.whens = whenlist
+
+ else_ = kw.pop("else_", None)
if else_ is not None:
self.else_ = coercions.expect(roles.ExpressionElementRole, else_)
else:
self.else_ = None
+ if kw:
+ raise TypeError("unknown arguments: %s" % (", ".join(sorted(kw))))
+
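Because the 1.4 codebase still supports Python 2, `case()` cannot declare keyword-only arguments directly; instead it pops the known keywords from `**kw` and rejects any leftovers, as shown above. That emulation pattern in isolation (the function name is hypothetical):

```python
def case_signature_sketch(*whens, **kw):
    """Sketch of emulating ``def case(*whens, value=None, else_=None)``
    under Python 2: pop known keywords and reject the rest."""
    value = kw.pop("value", None)
    else_ = kw.pop("else_", None)
    if kw:
        # any remaining keyword is unknown
        raise TypeError("unknown arguments: %s" % (", ".join(sorted(kw))))
    return whens, value, else_
```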
@property
def _from_objects(self):
return list(
:class:`.Boolean`, and those that do not will be of the same
type as the left-hand operand.
- .. versionadded:: 1.2.0b3 - added the
- :paramref:`.Operators.op.return_type` argument.
-
.. seealso::
:ref:`types_operators`
:paramref:`.Operators.op.is_comparison`
flag with True.
- .. versionadded:: 1.2.0b3
-
.. seealso::
:meth:`.Operators.op`
With the value of ``:param`` as ``"foo/%bar"``.
- .. versionadded:: 1.2
-
- .. versionchanged:: 1.2.0 The
- :paramref:`.ColumnOperators.startswith.autoescape` parameter is
- now a simple boolean rather than a character; the escape
- character itself is also escaped, and defaults to a forwards
- slash, which itself can be customized using the
- :paramref:`.ColumnOperators.startswith.escape` parameter.
-
:param escape: a character which when given will render with the
``ESCAPE`` keyword to establish that character as the escape
character. This character can then be placed preceding occurrences
With the value of ``:param`` as ``"foo/%bar"``.
- .. versionadded:: 1.2
-
- .. versionchanged:: 1.2.0 The
- :paramref:`.ColumnOperators.endswith.autoescape` parameter is
- now a simple boolean rather than a character; the escape
- character itself is also escaped, and defaults to a forwards
- slash, which itself can be customized using the
- :paramref:`.ColumnOperators.endswith.escape` parameter.
-
:param escape: a character which when given will render with the
``ESCAPE`` keyword to establish that character as the escape
character. This character can then be placed preceding occurrences
With the value of ``:param`` as ``"foo/%bar"``.
- .. versionadded:: 1.2
-
- .. versionchanged:: 1.2.0 The
- :paramref:`.ColumnOperators.contains.autoescape` parameter is
- now a simple boolean rather than a character; the escape
- character itself is also escaped, and defaults to a forwards
- slash, which itself can be customized using the
- :paramref:`.ColumnOperators.contains.escape` parameter.
-
:param escape: a character which when given will render with the
``ESCAPE`` keyword to establish that character as the escape
character. This character can then be placed preceding occurrences
("skip_locked", InternalTraversal.dp_boolean),
]
+ @classmethod
+ def _from_argument(cls, with_for_update):
+ if isinstance(with_for_update, ForUpdateArg):
+ return with_for_update
+ elif with_for_update in (None, False):
+ return None
+ elif with_for_update is True:
+ return ForUpdateArg()
+ else:
+ return ForUpdateArg(**with_for_update)
+
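The `_from_argument` classmethod above normalizes the several accepted forms of the `with_for_update` parameter (`True`, `None`/`False`, a dictionary, or an existing instance) into a single object form. A self-contained sketch of the same coercion (`ForUpdateSketch` is a hypothetical stand-in class):

```python
class ForUpdateSketch:
    """Sketch of the ForUpdateArg._from_argument coercion above."""

    def __init__(self, read=False, nowait=False, skip_locked=False):
        self.read = read
        self.nowait = nowait
        self.skip_locked = skip_locked

    @classmethod
    def from_argument(cls, arg):
        if isinstance(arg, cls):
            return arg          # already in object form
        elif arg in (None, False):
            return None         # no FOR UPDATE at all
        elif arg is True:
            return cls()        # plain FOR UPDATE with defaults
        else:
            return cls(**arg)   # dict of flags
```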
def __eq__(self, other):
return (
isinstance(other, ForUpdateArg)
class DeprecatedSelectBaseGenerations(object):
+ """A collection of methods available on :class:`_sql.Select` and
+ :class:`_sql.CompoundSelect`; these are all **deprecated** methods, as they
+ modify the object in-place.
+
+ """
+
@util.deprecated(
"1.4",
"The :meth:`_expression.GenerativeSelect.append_order_by` "
as it
provides standard :term:`method chaining`.
- .. seealso::
-
- :meth:`_expression.GenerativeSelect.group_by`
"""
self.group_by.non_generative(self, *clauses)
class DeprecatedSelectGenerations(object):
+ """A collection of methods available on :class:`_sql.Select`; these
+ are all **deprecated** methods, as they modify the :class:`_sql.Select`
+ object in-place.
+
+ """
+
@util.deprecated(
"1.4",
"The :meth:`_expression.Select.append_correlation` "
"1.4",
"The :meth:`_expression.Select.append_column` method is deprecated "
"and will be removed in a future release. Use the generative "
- "method :meth:`_expression.Select.column`.",
+ "method :meth:`_expression.Select.add_columns`.",
)
def append_column(self, column):
"""Append the given column expression to the columns clause of this
my_select.append_column(some_table.c.new_column)
This is an **in-place** mutation method; the
- :meth:`_expression.Select.column` method is preferred,
+ :meth:`_expression.Select.add_columns` method is preferred,
as it provides standard
:term:`method chaining`.
- See the documentation for :meth:`_expression.Select.with_only_columns`
- for guidelines on adding /replacing the columns of a
- :class:`_expression.Select` object.
-
"""
self.add_columns.non_generative(self, column)
self.columns_plus_names = statement._generate_columns_plus_names(True)
+ @classmethod
+ def _plugin_not_implemented(cls):
+ raise NotImplementedError(
+ "The default SELECT construct without plugins does not "
+ "implement this method."
+ )
+
+ @classmethod
+ def get_column_descriptions(cls, statement):
+ cls._plugin_not_implemented()
+
+ @classmethod
+ def from_statement(cls, statement, from_statement):
+ cls._plugin_not_implemented()
+
def _get_froms(self, statement):
seen = set()
froms = []
):
"""Represents a ``SELECT`` statement.
+ The :class:`_sql.Select` object is normally constructed using the
+ :func:`_sql.select` function. See that function for details.
+
+ .. seealso::
+
+ :func:`_sql.select`
+
+ :ref:`coretutorial_selecting` - in the Core tutorial
+
"""
__visit_name__ = "select"
_from_obj = ()
_auto_correlate = True
- compile_options = SelectState.default_select_compile_options
+ _compile_options = SelectState.default_select_compile_options
_traverse_internals = (
[
)
_cache_key_traversal = _traverse_internals + [
- ("compile_options", InternalTraversal.dp_has_cache_key)
+ ("_compile_options", InternalTraversal.dp_has_cache_key)
]
@classmethod
@property
def column_descriptions(self):
"""Return a 'column descriptions' structure which may be
- plugin-specific.
+ :term:`plugin-specific`.
"""
meth = SelectState.get_plugin_class(self).get_column_descriptions
return meth(self)
+ def from_statement(self, statement):
+ """Apply the columns which this :class:`.Select` would select
+ onto another statement.
+
+ This operation is :term:`plugin-specific` and will raise a not
+ supported exception if this :class:`_sql.Select` does not select from
+ plugin-enabled entities.
+
+
+ The statement is typically either a :func:`_expression.text` or
+ :func:`_expression.select` construct, and should return the set of
+ columns appropriate to the entities represented by this
+ :class:`.Select`.
+
+ .. seealso::
+
+ :ref:`orm_tutorial_literal_sql` - usage examples in the
+ ORM tutorial
+
+ """
+ meth = SelectState.get_plugin_class(self).from_statement
+ return meth(self, statement)
+
@_generative
def join(self, target, onclause=None, isouter=False, full=False):
r"""Create a SQL JOIN against this :class:`_expression.Select`
)
@_generative
- def with_only_columns(self, columns):
+ def with_only_columns(self, *columns):
r"""Return a new :func:`_expression.select` construct with its columns
clause replaced with the given columns.
:func:`_expression.select` had been called with the given columns
clause. I.e. a statement::
- s = select([table1.c.a, table1.c.b])
- s = s.with_only_columns([table1.c.b])
+ s = select(table1.c.a, table1.c.b)
+ s = s.with_only_columns(table1.c.b)
should be exactly equivalent to::
- s = select([table1.c.b])
-
- This means that FROM clauses which are only derived
- from the column list will be discarded if the new column
- list no longer contains that FROM::
-
- >>> table1 = table('t1', column('a'), column('b'))
- >>> table2 = table('t2', column('a'), column('b'))
- >>> s1 = select([table1.c.a, table2.c.b])
- >>> print(s1)
- SELECT t1.a, t2.b FROM t1, t2
- >>> s2 = s1.with_only_columns([table2.c.b])
- >>> print(s2)
- SELECT t2.b FROM t1
-
- The preferred way to maintain a specific FROM clause
- in the construct, assuming it won't be represented anywhere
- else (i.e. not in the WHERE clause, etc.) is to set it using
- :meth:`_expression.Select.select_from`::
-
- >>> s1 = select([table1.c.a, table2.c.b]).\
- ... select_from(table1.join(table2,
- ... table1.c.a==table2.c.a))
- >>> s2 = s1.with_only_columns([table2.c.b])
- >>> print(s2)
- SELECT t2.b FROM t1 JOIN t2 ON t1.a=t2.a
-
- Care should also be taken to use the correct set of column objects
- passed to :meth:`_expression.Select.with_only_columns`.
- Since the method is
- essentially equivalent to calling the :func:`_expression.select`
- construct in the first place with the given columns, the columns passed
- to :meth:`_expression.Select.with_only_columns`
- should usually be a subset of
- those which were passed to the :func:`_expression.select`
- construct, not those which are available from the ``.c`` collection of
- that :func:`_expression.select`. That is::
-
- s = select([table1.c.a, table1.c.b]).select_from(table1)
- s = s.with_only_columns([table1.c.b])
-
- and **not**::
-
- # usually incorrect
- s = s.with_only_columns([s.c.b])
-
- The latter would produce the SQL::
-
- SELECT b
- FROM (SELECT t1.a AS a, t1.b AS b
- FROM t1), t1
-
- Since the :func:`_expression.select` construct is essentially
- being asked to select both from ``table1`` as well as itself.
+ s = select(table1.c.b)
+
+ Note that this will also dynamically alter the FROM clause of the
+ statement if it is not explicitly stated. To maintain the FROM
+ clause, ensure the :meth:`_sql.Select.select_from` method is
+ used appropriately::
+
+ s = select(table1.c.a, table2.c.b)
+ s = s.select_from(table1, table2).with_only_columns(table1.c.a)
+
+ :param \*columns: column expressions to be used.
+
+ .. versionchanged:: 1.4 the :meth:`_sql.Select.with_only_columns`
+ method accepts the list of column expressions positionally;
+ passing the expressions as a list is deprecated.
"""
self._assert_no_memoizations()
rc = []
- for c in columns:
+ for c in coercions._expression_collection_was_a_list(
+ "columns", "Select.with_only_columns", columns
+ ):
c = coercions.expect(roles.ColumnsClauseRole, c,)
# TODO: why are we doing this here?
if isinstance(c, ScalarSelect):
from sqlalchemy import orm
+ if isinstance(clause, orm.dynamic.AppenderQuery):
+ clause = clause._statement
+
if isinstance(clause, orm.Query):
compile_state = clause._compile_state()
compile_state.statement._label_style = (
@profile_memory()
def go():
- sess = create_session()
- a1 = A(col2="a1")
- a2 = A(col2="a2")
- a3 = A(col2="a3")
- a1.bs.append(B(col2="b1"))
- a1.bs.append(B(col2="b2"))
- a3.bs.append(B(col2="b3"))
- for x in [a1, a2, a3]:
- sess.add(x)
- sess.flush()
- sess.expunge_all()
-
- alist = sess.query(A).order_by(A.col1).all()
- eq_(
- [
- A(col2="a1", bs=[B(col2="b1"), B(col2="b2")]),
- A(col2="a2", bs=[]),
- A(col2="a3", bs=[B(col2="b3")]),
- ],
- alist,
- )
+ with Session() as sess:
+ a1 = A(col2="a1")
+ a2 = A(col2="a2")
+ a3 = A(col2="a3")
+ a1.bs.append(B(col2="b1"))
+ a1.bs.append(B(col2="b2"))
+ a3.bs.append(B(col2="b3"))
+ for x in [a1, a2, a3]:
+ sess.add(x)
+ sess.commit()
+
+ alist = sess.query(A).order_by(A.col1).all()
+ eq_(
+ [
+ A(col2="a1", bs=[B(col2="b1"), B(col2="b2")]),
+ A(col2="a2", bs=[]),
+ A(col2="a3", bs=[B(col2="b3")]),
+ ],
+ alist,
+ )
- for a in alist:
- sess.delete(a)
- sess.flush()
+ for a in alist:
+ sess.delete(a)
+ sess.commit()
go()
"use_reaper": False,
}
)
- sess = create_session(bind=engine)
-
- a1 = A(col2="a1")
- a2 = A(col2="a2")
- a3 = A(col2="a3")
- a1.bs.append(B(col2="b1"))
- a1.bs.append(B(col2="b2"))
- a3.bs.append(B(col2="b3"))
- for x in [a1, a2, a3]:
- sess.add(x)
- sess.flush()
- sess.expunge_all()
+ with Session(engine) as sess:
+ a1 = A(col2="a1")
+ a2 = A(col2="a2")
+ a3 = A(col2="a3")
+ a1.bs.append(B(col2="b1"))
+ a1.bs.append(B(col2="b2"))
+ a3.bs.append(B(col2="b3"))
+ for x in [a1, a2, a3]:
+ sess.add(x)
+ sess.commit()
+
+ alist = sess.query(A).order_by(A.col1).all()
+ eq_(
+ [
+ A(col2="a1", bs=[B(col2="b1"), B(col2="b2")]),
+ A(col2="a2", bs=[]),
+ A(col2="a3", bs=[B(col2="b3")]),
+ ],
+ alist,
+ )
- alist = sess.query(A).order_by(A.col1).all()
- eq_(
- [
- A(col2="a1", bs=[B(col2="b1"), B(col2="b2")]),
- A(col2="a2", bs=[]),
- A(col2="a3", bs=[B(col2="b3")]),
- ],
- alist,
- )
+ for a in alist:
+ sess.delete(a)
+ sess.commit()
- for a in alist:
- sess.delete(a)
- sess.flush()
- sess.close()
engine.dispose()
go()
mapper(Wide, wide_table, _compiled_cache_size=10)
metadata.create_all()
- session = create_session()
- w1 = Wide()
- session.add(w1)
- session.flush()
- session.close()
+ with Session() as session:
+ w1 = Wide()
+ session.add(w1)
+ session.commit()
del session
counter = [1]
@profile_memory()
def go():
- session = create_session()
- w1 = session.query(Wide).first()
- x = counter[0]
- dec = 10
- while dec > 0:
- # trying to count in binary here,
- # works enough to trip the test case
- if pow(2, dec) < x:
- setattr(w1, "col%d" % dec, counter[0])
- x -= pow(2, dec)
- dec -= 1
- session.flush()
- session.close()
+ with Session() as session:
+ w1 = session.query(Wide).first()
+ x = counter[0]
+ dec = 10
+ while dec > 0:
+ # trying to count in binary here,
+ # works enough to trip the test case
+ if pow(2, dec) < x:
+ setattr(w1, "col%d" % dec, counter[0])
+ x -= pow(2, dec)
+ dec -= 1
+ session.commit()
counter[0] += 1
try:
)
mapper(Sub, subtable, inherits=Base, polymorphic_identity=2)
- sess = create_session()
+ sess = Session(autoflush=False)
b1 = Base(value="b1")
s1 = Sub(value="sub1", subdata="some subdata")
sess.add(b1)
sess.add(s1)
- sess.flush()
+ sess.commit()
- sess2 = create_session()
- s2 = sess2.query(Base).get(s1.id)
+ sess2 = Session(autoflush=False)
+ s2 = sess2.get(Base, s1.id)
s2.subdata = "sess2 subdata"
s1.subdata = "sess1 subdata"
- sess.flush()
+ sess.commit()
assert_raises(
orm_exc.StaleDataError,
- sess2.query(Base).with_for_update(read=True).get,
+ sess2.get,
+ Base,
s1.id,
+ with_for_update=dict(read=True),
)
- if not testing.db.dialect.supports_sane_rowcount:
- sess2.flush()
- else:
- assert_raises(orm_exc.StaleDataError, sess2.flush)
+ sess2.rollback()
sess2.refresh(s2)
if testing.db.dialect.supports_sane_rowcount:
)
mapper(Sub, subtable, inherits=Base, polymorphic_identity=2)
- sess = create_session()
+ sess = Session(autoflush=False, expire_on_commit=False)
b1 = Base(value="b1")
s1 = Sub(value="sub1", subdata="some subdata")
sess.add(s1)
sess.add(s2)
- sess.flush()
+ sess.commit()
- sess2 = create_session()
- s3 = sess2.query(Base).get(s1.id)
+ sess2 = Session(autoflush=False, expire_on_commit=False)
+ s3 = sess2.get(Base, s1.id)
sess2.delete(s3)
- sess2.flush()
+ sess2.commit()
s2.subdata = "some new subdata"
- sess.flush()
+ sess.commit()
s1.subdata = "some new subdata"
if testing.db.dialect.supports_sane_rowcount:
- assert_raises(orm_exc.StaleDataError, sess.flush)
+ assert_raises(orm_exc.StaleDataError, sess.commit)
else:
- sess.flush()
+ sess.commit()
class DistinctPKTest(fixtures.MappedTest):
from sqlalchemy import cast
from sqlalchemy import desc
from sqlalchemy import event
+from sqlalchemy import exc as sa_exc
from sqlalchemy import func
from sqlalchemy import Integer
from sqlalchemy import literal_column
from sqlalchemy.orm import with_polymorphic
from sqlalchemy.orm.collections import collection
from sqlalchemy.orm.util import polymorphic_union
+from sqlalchemy.sql import elements
+from sqlalchemy.testing import assert_raises
from sqlalchemy.testing import assert_raises_message
from sqlalchemy.testing import assertions
from sqlalchemy.testing import AssertsCompiledSQL
from .test_events import _RemoveListeners
from .test_options import PathTest as OptionsPathTest
from .test_query import QueryTest
+from .test_transaction import _LocalFixture
class DeprecatedQueryTest(_fixtures.FixtureTest, AssertsCompiledSQL):
)
+class SessionTest(fixtures.RemovesEvents, _LocalFixture):
+ def test_subtransactions_deprecated(self):
+ s1 = Session(testing.db)
+ s1.begin()
+
+ with testing.expect_deprecated_20(
+ "The Session.begin.subtransactions flag is deprecated "
+ "and will be removed in SQLAlchemy version 2.0."
+ ):
+ s1.begin(subtransactions=True)
+
+ s1.close()
+
+ def test_autocommit_deprecated(self):
+ with testing.expect_deprecated_20(
+ "The Session.autocommit parameter is deprecated "
+ "and will be removed in SQLAlchemy version 2.0."
+ ):
+ Session(autocommit=True)
+
+ @testing.requires.independent_connections
+ @testing.emits_warning(".*previous exception")
+ def test_failed_rollback_deactivates_transaction_ctx_integration(self):
+ # test #4050 in the same context as that of oslo.db
+
+ User = self.classes.User
+
+ with testing.expect_deprecated_20(
+ "The Session.autocommit parameter is deprecated"
+ ):
+ session = Session(bind=testing.db, autocommit=True)
+
+ evented_exceptions = []
+ caught_exceptions = []
+
+ def canary(context):
+ evented_exceptions.append(context.original_exception)
+
+ rollback_error = testing.db.dialect.dbapi.InterfaceError(
+ "Can't roll back to savepoint"
+ )
+
+ def prevent_savepoint_rollback(
+ cursor, statement, parameters, context=None
+ ):
+ if (
+ context is not None
+ and context.compiled
+ and isinstance(
+ context.compiled.statement,
+ elements.RollbackToSavepointClause,
+ )
+ ):
+ raise rollback_error
+
+ self.event_listen(testing.db, "handle_error", canary, retval=True)
+ self.event_listen(
+ testing.db.dialect, "do_execute", prevent_savepoint_rollback
+ )
+
+ with session.begin():
+ session.add(User(id=1, name="x"))
+
+ try:
+ with session.begin():
+ try:
+ with session.begin_nested():
+ # raises IntegrityError on flush
+ session.add(User(id=1, name="x"))
+
+ # outermost is the failed SAVEPOINT rollback
+ # from the "with session.begin_nested()"
+ except sa_exc.DBAPIError as dbe_inner:
+ caught_exceptions.append(dbe_inner.orig)
+ raise
+ except sa_exc.DBAPIError as dbe_outer:
+ caught_exceptions.append(dbe_outer.orig)
+
+ is_true(
+ isinstance(
+ evented_exceptions[0], testing.db.dialect.dbapi.IntegrityError
+ )
+ )
+ eq_(evented_exceptions[1], rollback_error)
+ eq_(len(evented_exceptions), 2)
+ eq_(caught_exceptions, [rollback_error, rollback_error])
+
+ def test_contextmanager_commit(self):
+ User = self.classes.User
+
+ with testing.expect_deprecated_20(
+ "The Session.autocommit parameter is deprecated"
+ ):
+ sess = Session(autocommit=True)
+ with sess.begin():
+ sess.add(User(name="u1"))
+
+ sess.rollback()
+ eq_(sess.query(User).count(), 1)
+
+ def test_contextmanager_rollback(self):
+ User = self.classes.User
+
+ with testing.expect_deprecated_20(
+ "The Session.autocommit parameter is deprecated"
+ ):
+ sess = Session(autocommit=True)
+
+ def go():
+ with sess.begin():
+ sess.add(User()) # name can't be null
+
+ assert_raises(sa_exc.DBAPIError, go)
+
+ eq_(sess.query(User).count(), 0)
+
+ with sess.begin():
+ sess.add(User(name="u1"))
+ eq_(sess.query(User).count(), 1)
+
+
class DeprecatedInhTest(_poly_fixtures._Polymorphic):
def test_with_polymorphic(self):
Person = _poly_fixtures.Person
from sqlalchemy.orm import create_session
from sqlalchemy.orm import exc as orm_exc
from sqlalchemy.orm import mapper
-from sqlalchemy.orm import Query
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session
-from sqlalchemy.orm.dynamic import AppenderMixin
from sqlalchemy.testing import assert_raises
from sqlalchemy.testing import assert_raises_message
from sqlalchemy.testing import AssertsCompiledSQL
mapper(Item, items)
return Order, Item
+ def _user_order_item_fixture(self):
+ (
+ users,
+ Keyword,
+ items,
+ order_items,
+ item_keywords,
+ Item,
+ User,
+ keywords,
+ Order,
+ orders,
+ ) = (
+ self.tables.users,
+ self.classes.Keyword,
+ self.tables.items,
+ self.tables.order_items,
+ self.tables.item_keywords,
+ self.classes.Item,
+ self.classes.User,
+ self.tables.keywords,
+ self.classes.Order,
+ self.tables.orders,
+ )
+
+ mapper(
+ User,
+ users,
+ properties={
+ "orders": relationship(
+ Order, order_by=orders.c.id, lazy="dynamic"
+ )
+ },
+ )
+ mapper(
+ Order,
+ orders,
+ properties={
+ "items": relationship(
+ Item, secondary=order_items, order_by=items.c.id
+ ),
+ },
+ )
+ mapper(
+ Item,
+ items,
+ properties={
+ "keywords": relationship(
+ Keyword, secondary=item_keywords
+ ) # m2m
+ },
+ )
+ mapper(Keyword, keywords)
+
+ return User, Order, Item, Keyword
+
class DynamicTest(_DynamicFixture, _fixtures.FixtureTest, AssertsCompiledSQL):
def test_basic(self):
sess = create_session()
u = sess.query(User).get(8)
sess.expunge(u)
- assert_raises(
- orm_exc.DetachedInstanceError,
- u.addresses.filter_by,
- email_address="e",
- )
+
+ q = u.addresses.filter_by(email_address="e")
+
+ assert_raises(orm_exc.DetachedInstanceError, q.first)
def test_no_uselist_false(self):
User, Address = self._user_address_fixture(
use_default_dialect=True,
)
+ @testing.combinations(
+ # lambda
+ )
+ def test_join_syntaxes(self, expr):
+ User, Order, Item, Keyword = self._user_order_item_fixture()
+
def test_transient_count(self):
User, Address = self._user_address_fixture()
u1 = User()
u1.addresses.append(Address())
eq_(u1.addresses[0], Address())
- def test_custom_query(self):
- class MyQuery(Query):
- pass
-
- User, Address = self._user_address_fixture(
- addresses_args={"query_class": MyQuery}
- )
-
- sess = create_session()
- u = User()
- sess.add(u)
-
- col = u.addresses
- assert isinstance(col, Query)
- assert isinstance(col, MyQuery)
- assert hasattr(col, "append")
- eq_(type(col).__name__, "AppenderMyQuery")
-
- q = col.limit(1)
- assert isinstance(q, Query)
- assert isinstance(q, MyQuery)
- assert not hasattr(q, "append")
- eq_(type(q).__name__, "MyQuery")
-
- def test_custom_query_with_custom_mixin(self):
- class MyAppenderMixin(AppenderMixin):
- def add(self, items):
- if isinstance(items, list):
- for item in items:
- self.append(item)
- else:
- self.append(items)
-
- class MyQuery(Query):
- pass
-
- class MyAppenderQuery(MyAppenderMixin, MyQuery):
- query_class = MyQuery
-
- User, Address = self._user_address_fixture(
- addresses_args={"query_class": MyAppenderQuery}
- )
-
- sess = create_session()
- u = User()
- sess.add(u)
-
- col = u.addresses
- assert isinstance(col, Query)
- assert isinstance(col, MyQuery)
- assert hasattr(col, "append")
- assert hasattr(col, "add")
- eq_(type(col).__name__, "MyAppenderQuery")
-
- q = col.limit(1)
- assert isinstance(q, Query)
- assert isinstance(q, MyQuery)
- assert not hasattr(q, "append")
- assert not hasattr(q, "add")
- eq_(type(q).__name__, "MyQuery")
-
class UOWTest(
_DynamicFixture, _fixtures.FixtureTest, testing.AssertsExecutionResults
self.assert_sql_count(testing.db, go, 1)
+ def test_contains_eager_four_future(self):
+ users, addresses, User = (
+ self.tables.users,
+ self.tables.addresses,
+ self.classes.User,
+ )
+
+ sess = create_session(future=True)
+
+ selectquery = users.outerjoin(addresses).select(
+ users.c.id < 10,
+ use_labels=True,
+ order_by=[users.c.id, addresses.c.id],
+ )
+
+ q = select(User)
+
+ def go():
+ result = (
+ sess.execute(
+ q.options(contains_eager("addresses")).from_statement(
+ selectquery
+ )
+ )
+ .scalars()
+ .unique()
+ .all()
+ )
+ assert self.static.user_address_result[0:3] == result
+
+ self.assert_sql_count(testing.db, go, 1)
+
def test_contains_eager_aliased(self):
User, Address = self.classes.User, self.classes.Address
(user10, None),
]
- sess = create_session()
+ sess = create_session(future=True)
selectquery = users.outerjoin(addresses).select(
use_labels=True, order_by=[users.c.id, addresses.c.id]
)
+
+ result = sess.execute(
+ select(User, Address).from_statement(selectquery)
+ )
eq_(
- list(sess.query(User, Address).from_statement(selectquery)),
- expected,
+ list(result), expected,
)
sess.expunge_all()
ent.entity_zero
for ent in q._compile_state()._mapper_entities
],
- q.compile_options._current_path,
+ q._compile_options._current_path,
attr,
False,
)
ent.entity_zero
for ent in query._compile_state()._mapper_entities
],
- query.compile_options._current_path,
+ query._compile_options._current_path,
attr,
False,
)
ent.entity_zero
for ent in query._compile_state()._mapper_entities
],
- query.compile_options._current_path,
+ query._compile_options._current_path,
attr,
False,
)
for tb in opt._to_bind:
tb._bind_loader(
[ent.entity_zero for ent in ctx._mapper_entities],
- query.compile_options._current_path,
+ query._compile_options._current_path,
attr,
False,
)
ent.entity_zero
for ent in q._compile_state()._mapper_entities
],
- q.compile_options._current_path,
+ q._compile_options._current_path,
attr_a,
False,
)
ent.entity_zero
for ent in q._compile_state()._mapper_entities
],
- q.compile_options._current_path,
+ q._compile_options._current_path,
attr_b,
False,
)
from sqlalchemy import column
from sqlalchemy import desc
from sqlalchemy import distinct
+from sqlalchemy import event
from sqlalchemy import exc as sa_exc
from sqlalchemy import exists
from sqlalchemy import ForeignKey
from sqlalchemy.orm import Session
from sqlalchemy.orm import subqueryload
from sqlalchemy.orm import synonym
+from sqlalchemy.orm.context import QueryContext
from sqlalchemy.orm.util import join
from sqlalchemy.orm.util import with_parent
from sqlalchemy.sql import expression
class GetTest(QueryTest):
+ def test_loader_options(self):
+ User = self.classes.User
+
+ s = Session()
+
+ u1 = s.query(User).options(joinedload(User.addresses)).get(8)
+ eq_(len(u1.__dict__["addresses"]), 3)
+
+ def test_loader_options_future(self):
+ User = self.classes.User
+
+ s = Session()
+
+ u1 = s.get(User, 8, options=[joinedload(User.addresses)])
+ eq_(len(u1.__dict__["addresses"]), 3)
+
def test_get_composite_pk_keyword_based_no_result(self):
CompositePk = self.classes.CompositePk
u2 = s.query(User).get(7)
assert u is not u2
+ def test_get_future(self):
+ User = self.classes.User
+
+ s = create_session()
+ assert s.get(User, 19) is None
+ u = s.get(User, 7)
+ u2 = s.get(User, 7)
+ assert u is u2
+ s.expunge_all()
+ u2 = s.get(User, 7)
+ assert u is not u2
+
def test_get_composite_pk_no_result(self):
CompositePk = self.classes.CompositePk
assert u.addresses[0].email_address == "jack@bean.com"
assert u.orders[1].items[2].description == "item 5"
+ def test_populate_existing_future(self):
+ User, Address = self.classes.User, self.classes.Address
+
+ s = Session(future=True, autoflush=False)
+
+ userlist = s.query(User).all()
+
+ u = userlist[0]
+ u.name = "foo"
+ a = Address(name="ed")
+ u.addresses.append(a)
+
+ self.assert_(a in u.addresses)
+
+ stmt = select(User).execution_options(populate_existing=True)
+
+ s.execute(stmt).scalars().all()
+
+ self.assert_(u not in s.dirty)
+
+ self.assert_(u.name == "jack")
+
+ self.assert_(a not in u.addresses)
+
+ u.addresses[0].email_address = "lala"
+ u.orders[1].items[2].description = "item 12"
+ # test that lazy load doesn't change child items
+ s.query(User).populate_existing().all()
+ assert u.addresses[0].email_address == "lala"
+ assert u.orders[1].items[2].description == "item 12"
+
+ # eager load does
+
+ stmt = (
+ select(User)
+ .options(
+ joinedload("addresses"),
+ joinedload("orders").joinedload("items"),
+ )
+ .execution_options(populate_existing=True)
+ )
+
+ s.execute(stmt).scalars().all()
+
+ assert u.addresses[0].email_address == "jack@bean.com"
+ assert u.orders[1].items[2].description == "item 5"
+
+ def test_option_transfer_future(self):
+ User = self.classes.User
+ stmt = select(User).execution_options(
+ populate_existing=True, autoflush=False, yield_per=10
+ )
+ s = Session(testing.db, future=True)
+
+ m1 = mock.Mock()
+
+ event.listen(s, "do_orm_execute", m1)
+
+ s.execute(stmt)
+
+ eq_(
+ m1.mock_calls[0][1][0].load_options,
+ QueryContext.default_load_options(
+ _autoflush=False, _populate_existing=True, _yield_per=10
+ ),
+ )
+
class InvalidGenerationsTest(QueryTest, AssertsCompiledSQL):
@testing.combinations(
None,
)
+ def test_select_star_future(self):
+ User = self.classes.User
+
+ sess = Session(future=True)
+ eq_(
+ sess.execute(
+ select(User).from_statement(
+ text("select * from users order by id")
+ )
+ )
+ .scalars()
+ .first(),
+ User(id=7),
+ )
+ eq_(
+ sess.execute(
+ select(User).from_statement(
+ text("select * from users where name='nonexistent'")
+ )
+ )
+ .scalars()
+ .first(),
+ None,
+ )
+
def test_columns_mismatched(self):
# test that columns using column._label match, as well as that
# ordering doesn't matter
],
)
+ def test_columns_mismatched_future(self):
+ # test that columns using column._label match, as well as that
+ # ordering doesn't matter
+ User = self.classes.User
+
+ s = create_session(future=True)
+ q = select(User).from_statement(
+ text(
+ "select name, 27 as foo, id as users_id from users order by id"
+ )
+ )
+ eq_(
+ s.execute(q).scalars().all(),
+ [
+ User(id=7, name="jack"),
+ User(id=8, name="ed"),
+ User(id=9, name="fred"),
+ User(id=10, name="chuck"),
+ ],
+ )
+
def test_columns_multi_table_uselabels(self):
# test that columns using column._label match, as well as that
# ordering doesn't matter.
],
)
+ def test_columns_multi_table_uselabels_future(self):
+ # test that columns using column._label match, as well as that
+ # ordering doesn't matter.
+ User = self.classes.User
+ Address = self.classes.Address
+
+ s = create_session(future=True)
+ q = select(User, Address).from_statement(
+ text(
+ "select users.name AS users_name, users.id AS users_id, "
+ "addresses.id AS addresses_id FROM users JOIN addresses "
+ "ON users.id = addresses.user_id WHERE users.id=8 "
+ "ORDER BY addresses.id"
+ )
+ )
+
+ eq_(
+ s.execute(q).all(),
+ [
+ (User(id=8), Address(id=2)),
+ (User(id=8), Address(id=3)),
+ (User(id=8), Address(id=4)),
+ ],
+ )
+
def test_columns_multi_table_uselabels_contains_eager(self):
# test that columns using column._label match, as well as that
# ordering doesn't matter.
self.assert_sql_count(testing.db, go, 1)
+ def test_columns_multi_table_uselabels_contains_eager_future(self):
+ # test that columns using column._label match, as well as that
+ # ordering doesn't matter.
+ User = self.classes.User
+ Address = self.classes.Address
+
+ s = create_session(future=True)
+ q = (
+ select(User)
+ .from_statement(
+ text(
+ "select users.name AS users_name, users.id AS users_id, "
+ "addresses.id AS addresses_id FROM users JOIN addresses "
+ "ON users.id = addresses.user_id WHERE users.id=8 "
+ "ORDER BY addresses.id"
+ )
+ )
+ .options(contains_eager(User.addresses))
+ )
+
+ def go():
+ r = s.execute(q).unique().scalars().all()
+ eq_(r[0].addresses, [Address(id=2), Address(id=3), Address(id=4)])
+
+ self.assert_sql_count(testing.db, go, 1)
+
def test_columns_multi_table_uselabels_cols_contains_eager(self):
# test that columns using column._label match, as well as that
# ordering doesn't matter.
self.assert_sql_count(testing.db, go, 1)
+ def test_columns_multi_table_uselabels_cols_contains_eager_future(self):
+ # test that columns using column._label match, as well as that
+ # ordering doesn't matter.
+ User = self.classes.User
+ Address = self.classes.Address
+
+ s = create_session(future=True)
+ q = (
+ select(User)
+ .from_statement(
+ text(
+ "select users.name AS users_name, users.id AS users_id, "
+ "addresses.id AS addresses_id FROM users JOIN addresses "
+ "ON users.id = addresses.user_id WHERE users.id=8 "
+ "ORDER BY addresses.id"
+ ).columns(User.name, User.id, Address.id)
+ )
+ .options(contains_eager(User.addresses))
+ )
+
+ def go():
+ r = s.execute(q).unique().scalars().all()
+ eq_(r[0].addresses, [Address(id=2), Address(id=3), Address(id=4)])
+
+ self.assert_sql_count(testing.db, go, 1)
+
def test_textual_select_orm_columns(self):
# test that columns using column._label match, as well as that
# ordering doesn't matter.
[User(id=9)],
)
+ def test_whereclause_future(self):
+ User = self.classes.User
+
+ s = create_session(future=True)
+ eq_(
+ s.execute(select(User).filter(text("id in (8, 9)")))
+ .scalars()
+ .all(),
+ [User(id=8), User(id=9)],
+ )
+
+ eq_(
+ s.execute(
+ select(User).filter(text("name='fred'")).filter(text("id=9"))
+ )
+ .scalars()
+ .all(),
+ [User(id=9)],
+ )
+ eq_(
+ s.execute(
+ select(User).filter(text("name='fred'")).filter(User.id == 9)
+ )
+ .scalars()
+ .all(),
+ [User(id=9)],
+ )
+
def test_binds_coerce(self):
User = self.classes.User
e1 = sess.query(Employee).filter_by(name="emp1").one()
e5 = sess.query(Employee).filter_by(name="emp5").one()
- test_e1 = sess.query(Employee).get([c1.company_id, e1.emp_id])
+ test_e1 = sess.get(Employee, [c1.company_id, e1.emp_id])
assert test_e1.name == "emp1", test_e1.name
- test_e5 = sess.query(Employee).get([c2.company_id, e5.emp_id])
+ test_e5 = sess.get(Employee, [c2.company_id, e5.emp_id])
assert test_e5.name == "emp5", test_e5.name
assert [x.name for x in test_e1.employees] == ["emp2", "emp3"]
- assert (
- sess.query(Employee).get([c1.company_id, 3]).reports_to.name
- == "emp1"
- )
- assert (
- sess.query(Employee).get([c2.company_id, 3]).reports_to.name
- == "emp5"
- )
+ assert sess.get(Employee, [c1.company_id, 3]).reports_to.name == "emp1"
+ assert sess.get(Employee, [c2.company_id, 3]).reports_to.name == "emp5"
def _test_join_aliasing(self, sess):
Employee = self.classes.Employee
assert_raises_message(
sa.exc.InvalidRequestError,
- "A transaction is already begun. Use "
- "subtransactions=True to allow subtransactions.",
+ "A transaction is already begun on this Session.",
s.begin,
)
session.flush()
session.commit()
- def test_active_flag(self):
+ def test_active_flag_autocommit(self):
sess = create_session(bind=config.db, autocommit=True)
assert not sess.is_active
sess.begin()
sess.rollback()
assert not sess.is_active
+ def test_active_flag_autobegin(self):
+ sess = create_session(bind=config.db, autocommit=False)
+ assert sess.is_active
+ assert not sess.in_transaction()
+ sess.begin()
+ assert sess.is_active
+ sess.rollback()
+ assert sess.is_active
+
+ def test_active_flag_autobegin_future(self):
+ sess = create_session(bind=config.db, future=True)
+ assert sess.is_active
+ assert not sess.in_transaction()
+ sess.begin()
+ assert sess.is_active
+ sess.rollback()
+ assert sess.is_active
+
+ def test_active_flag_partial_rollback(self):
+ sess = create_session(bind=config.db, autocommit=False)
+ assert sess.is_active
+ assert not sess.in_transaction()
+ sess.begin()
+ assert sess.is_active
+ sess.begin(_subtrans=True)
+ sess.rollback()
+ assert not sess.is_active
+
+ sess.rollback()
+ assert sess.is_active
+
@engines.close_open_connections
def test_add_delete(self):
User, Address, addresses, users = (
+import contextlib
+
from sqlalchemy import Column
from sqlalchemy import event
from sqlalchemy import exc as sa_exc
from sqlalchemy import func
from sqlalchemy import inspect
+from sqlalchemy import Integer
+from sqlalchemy import MetaData
from sqlalchemy import select
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy.testing import fixtures
from sqlalchemy.testing import is_
from sqlalchemy.testing import is_not_
-from sqlalchemy.testing import is_true
from sqlalchemy.testing import mock
from sqlalchemy.testing.util import gc_collect
from test.orm._fixtures import FixtureTest
sess.add(u1)
sess.flush()
- sess.begin_nested()
+ savepoint = sess.begin_nested()
u2 = User(name="u2")
sess.add(u2)
sess.flush()
- sess.rollback()
+ savepoint.rollback()
trans.commit()
assert len(sess.query(User).all()) == 1
users = self.tables.users
engine = Engine._future_facade(testing.db)
- session = create_session(engine)
+ session = create_session(engine, autocommit=False)
session.begin()
session.connection().execute(users.insert().values(name="user1"))
assert attributes.instance_state(u1).expired
eq_(u1.name, "u1")
+ @testing.requires.savepoints
+ def test_dirty_state_transferred_deep_nesting_future(self):
+ User, users = self.classes.User, self.tables.users
+
+ mapper(User, users)
+
+ s = Session(testing.db, future=True)
+ u1 = User(name="u1")
+ s.add(u1)
+ s.commit()
+
+ nt1 = s.begin_nested()
+ nt2 = s.begin_nested()
+ u1.name = "u2"
+ assert attributes.instance_state(u1) not in nt2._dirty
+ assert attributes.instance_state(u1) not in nt1._dirty
+ s.flush()
+ assert attributes.instance_state(u1) in nt2._dirty
+ assert attributes.instance_state(u1) not in nt1._dirty
+
+ nt2.commit()
+ assert attributes.instance_state(u1) in nt2._dirty
+ assert attributes.instance_state(u1) in nt1._dirty
+
+ nt1.rollback()
+ assert attributes.instance_state(u1).expired
+ eq_(u1.name, "u1")
+
@testing.requires.independent_connections
def test_transactions_isolated(self):
User, users = self.classes.User, self.tables.users
sess.add(u2)
sess.flush()
- sess.rollback()
+ sess.rollback() # rolls back nested only
sess.commit()
assert len(sess.query(User).all()) == 1
sess.close()
+ @testing.requires.savepoints
+ def test_nested_autotrans_future(self):
+ User, users = self.classes.User, self.tables.users
+
+ mapper(User, users)
+ sess = create_session(autocommit=False, future=True)
+ u = User(name="u1")
+ sess.add(u)
+ sess.flush()
+
+ sess.begin_nested() # nested transaction
+
+ u2 = User(name="u2")
+ sess.add(u2)
+ sess.flush()
+
+ sess.rollback() # rolls back the whole trans
+
+ sess.commit()
+ assert len(sess.query(User).all()) == 0
+ sess.close()
+
@testing.requires.savepoints
def test_nested_transaction_connection_add(self):
users, User = self.tables.users, self.classes.User
testing.db.dialect, "do_execute", prevent_savepoint_rollback
)
- with session.transaction:
+ with session.begin():
session.add(User(id=1, name="x"))
session.begin_nested()
)
# rollback succeeds, because the Session is deactivated
- eq_(session.transaction._state, _session.DEACTIVE)
+ eq_(session._transaction._state, _session.DEACTIVE)
+ eq_(session.is_active, False)
session.rollback()
# back to normal
- eq_(session.transaction._state, _session.ACTIVE)
+ eq_(session._transaction._state, _session.ACTIVE)
+ eq_(session.is_active, True)
- trans = session.transaction
+ trans = session._transaction
# leave the outermost trans
session.rollback()
eq_(trans._state, _session.CLOSED)
# outermost transaction is new
- is_not_(session.transaction, trans)
-
- # outermost is active
- eq_(session.transaction._state, _session.ACTIVE)
-
- @testing.requires.independent_connections
- @testing.emits_warning(".*previous exception")
- def test_failed_rollback_deactivates_transaction_ctx_integration(self):
- # test #4050 in the same context as that of oslo.db
-
- users, User = self.tables.users, self.classes.User
-
- mapper(User, users)
- session = Session(bind=testing.db, autocommit=True)
-
- evented_exceptions = []
- caught_exceptions = []
-
- def canary(context):
- evented_exceptions.append(context.original_exception)
-
- rollback_error = testing.db.dialect.dbapi.InterfaceError(
- "Can't roll back to savepoint"
- )
-
- def prevent_savepoint_rollback(
- cursor, statement, parameters, context=None
- ):
- if (
- context is not None
- and context.compiled
- and isinstance(
- context.compiled.statement,
- elements.RollbackToSavepointClause,
- )
- ):
- raise rollback_error
-
- self.event_listen(testing.db, "handle_error", canary, retval=True)
- self.event_listen(
- testing.db.dialect, "do_execute", prevent_savepoint_rollback
- )
-
- with session.begin():
- session.add(User(id=1, name="x"))
+ is_not_(session._transaction, trans)
- try:
- with session.begin():
- try:
- with session.begin_nested():
- # raises IntegrityError on flush
- session.add(User(id=1, name="x"))
-
- # outermost is the failed SAVEPOINT rollback
- # from the "with session.begin_nested()"
- except sa_exc.DBAPIError as dbe_inner:
- caught_exceptions.append(dbe_inner.orig)
- raise
- except sa_exc.DBAPIError as dbe_outer:
- caught_exceptions.append(dbe_outer.orig)
-
- is_true(
- isinstance(
- evented_exceptions[0], testing.db.dialect.dbapi.IntegrityError
- )
- )
- eq_(evented_exceptions[1], rollback_error)
- eq_(len(evented_exceptions), 2)
- eq_(caught_exceptions, [rollback_error, rollback_error])
+ is_(session._transaction, None)
+ eq_(session.is_active, True)
def test_no_prepare_wo_twophase(self):
sess = create_session(bind=testing.db, autocommit=False)
)
)
)
+
sess = Session(bind=bind)
c1 = sess.connection(execution_options={"isolation_level": "FOO"})
eq_(bind.mock_calls, [mock.call.connect()])
bind.connect().mock_calls,
[mock.call.execution_options(isolation_level="FOO")],
)
- eq_(bind.connect().execution_options().mock_calls, [mock.call.begin()])
+
eq_(c1, bind.connect().execution_options())
def test_execution_options_ignored_mid_transaction(self):
session = create_session(autocommit=False)
session.add(User(name="ed"))
session.transaction.commit()
- assert (
- session.transaction is not None
- ), "autocommit=False should start a new transaction"
+
+ is_not_(session.transaction, None)
+
+ def test_no_autocommit_with_explicit_commit_future(self):
+ User, users = self.classes.User, self.tables.users
+
+ mapper(User, users)
+ session = create_session(autocommit=False, future=True)
+ session.add(User(name="ed"))
+ session.transaction.commit()
+
+ # new in 1.4
+ is_(session.transaction, None)
@testing.requires.python2
@testing.requires.savepoints_w_release
mapper(Address, addresses)
+class SubtransactionRecipeTest(FixtureTest):
+ run_inserts = None
+ __backend__ = True
+
+ future = False
+
+ @testing.fixture
+ def subtransaction_recipe(self):
+ @contextlib.contextmanager
+ def transaction(session):
+
+ if session.in_transaction():
+ outermost = False
+ else:
+ outermost = True
+ session.begin()
+
+ try:
+ yield
+ except:
+ if session.in_transaction():
+ session.rollback()
+ raise
+ else:
+ if outermost and session.in_transaction():
+ session.commit()
+
+ return transaction
+
+ @testing.requires.savepoints
+ def test_recipe_heavy_nesting(self, subtransaction_recipe):
+ users = self.tables.users
+
+ session = Session(testing.db, future=self.future)
+
+ with subtransaction_recipe(session):
+ session.connection().execute(users.insert().values(name="user1"))
+ with subtransaction_recipe(session):
+ savepoint = session.begin_nested()
+ session.connection().execute(
+ users.insert().values(name="user2")
+ )
+ assert (
+ session.connection()
+ .exec_driver_sql("select count(1) from users")
+ .scalar()
+ == 2
+ )
+ savepoint.rollback()
+
+ with subtransaction_recipe(session):
+ assert (
+ session.connection()
+ .exec_driver_sql("select count(1) from users")
+ .scalar()
+ == 1
+ )
+ session.connection().execute(
+ users.insert().values(name="user3")
+ )
+ assert (
+ session.connection()
+ .exec_driver_sql("select count(1) from users")
+ .scalar()
+ == 2
+ )
+
+ @engines.close_open_connections
+ def test_recipe_subtransaction_on_external_subtrans(
+ self, subtransaction_recipe
+ ):
+ users, User = self.tables.users, self.classes.User
+
+ mapper(User, users)
+ conn = testing.db.connect()
+ trans = conn.begin()
+ sess = Session(conn, future=self.future)
+
+ with subtransaction_recipe(sess):
+ u = User(name="ed")
+ sess.add(u)
+ sess.flush()
+ # commit does nothing
+ trans.rollback() # rolls back
+ assert len(sess.query(User).all()) == 0
+ sess.close()
+
+ def test_recipe_commit_one(self, subtransaction_recipe):
+ User, users = self.classes.User, self.tables.users
+
+ mapper(User, users)
+ sess = Session(testing.db, future=self.future)
+
+ with subtransaction_recipe(sess):
+ u = User(name="u1")
+ sess.add(u)
+ sess.close()
+ assert len(sess.query(User).all()) == 1
+
+ def test_recipe_subtransaction_on_noautocommit(
+ self, subtransaction_recipe
+ ):
+ User, users = self.classes.User, self.tables.users
+
+ mapper(User, users)
+ sess = Session(testing.db, future=self.future)
+
+ sess.begin()
+ with subtransaction_recipe(sess):
+ u = User(name="u1")
+ sess.add(u)
+ sess.flush()
+ sess.rollback() # rolls back
+ assert len(sess.query(User).all()) == 0
+ sess.close()
+
+ @testing.requires.savepoints
+ def test_recipe_mixed_transaction_control(self, subtransaction_recipe):
+ users, User = self.tables.users, self.classes.User
+
+ mapper(User, users)
+
+ sess = Session(testing.db, future=self.future)
+
+ sess.begin()
+ sess.begin_nested()
+
+ with subtransaction_recipe(sess):
+
+ sess.add(User(name="u1"))
+
+ sess.commit()
+ sess.commit()
+
+ eq_(len(sess.query(User).all()), 1)
+ sess.close()
+
+ t1 = sess.begin()
+ t2 = sess.begin_nested()
+
+ sess.add(User(name="u2"))
+
+ t2.commit()
+ assert sess.transaction is t1
+
+ sess.close()
+
+ def test_recipe_error_on_using_inactive_session_commands(
+ self, subtransaction_recipe
+ ):
+ users, User = self.tables.users, self.classes.User
+
+ mapper(User, users)
+ sess = Session(testing.db, future=self.future)
+ sess.begin()
+
+ try:
+ with subtransaction_recipe(sess):
+ sess.add(User(name="u1"))
+ sess.flush()
+ raise Exception("force rollback")
+ except Exception:
+ pass
+
+ # that was a real rollback, so no transaction
+ is_(sess.get_transaction(), None)
+
+ sess.close()
+
+ def test_recipe_multi_nesting(self, subtransaction_recipe):
+ sess = Session(testing.db, future=self.future)
+
+ with subtransaction_recipe(sess):
+ assert sess.in_transaction()
+
+ try:
+ with subtransaction_recipe(sess):
+ assert sess.transaction
+ raise Exception("force rollback")
+ except Exception:
+ pass
+
+ assert not sess.in_transaction()
+
+ def test_recipe_deactive_status_check(self, subtransaction_recipe):
+ sess = Session(testing.db, future=self.future)
+ sess.begin()
+
+ with subtransaction_recipe(sess):
+ sess.rollback()
+
+ assert not sess.in_transaction()
+ sess.commit() # no error
+
+
+class FutureSubtransactionRecipeTest(SubtransactionRecipeTest):
+ future = True
+
+
class FixtureDataTest(_LocalFixture):
run_inserts = "each"
__backend__ = True
run_inserts = None
__backend__ = True
- def _run_test(self, update_fn):
+ def _run_test(self, update_fn, future=False):
User, users = self.classes.User, self.tables.users
mapper(User, users)
- s = Session(bind=testing.db)
+ s = Session(bind=testing.db, future=future)
u1 = User(name="u1")
u2 = User(name="u2")
s.add_all([u1, u2])
s.commit()
u1.name
u2.name
+ trans = s._transaction
+ assert trans is not None
s.begin_nested()
update_fn(s, u2)
eq_(u2.name, "u2modified")
s.rollback()
- eq_(u1.__dict__["name"], "u1")
+
+ if future:
+ assert s._transaction is None
+ assert "name" not in u1.__dict__
+ else:
+ assert s._transaction is trans
+ eq_(u1.__dict__["name"], "u1")
assert "name" not in u2.__dict__
eq_(u2.name, "u2")
self._run_test(update_fn)
-class ContextManagerTest(FixtureTest):
- run_inserts = None
- __backend__ = True
-
- @testing.requires.savepoints
- @engines.close_open_connections
- def test_contextmanager_nested_rollback(self):
- users, User = self.tables.users, self.classes.User
-
- mapper(User, users)
-
- sess = Session()
-
- def go():
- with sess.begin_nested():
- sess.add(User()) # name can't be null
- sess.flush()
-
- # and not InvalidRequestError
- assert_raises(sa_exc.DBAPIError, go)
-
- with sess.begin_nested():
- sess.add(User(name="u1"))
-
- eq_(sess.query(User).count(), 1)
-
- def test_contextmanager_commit(self):
- users, User = self.tables.users, self.classes.User
-
- mapper(User, users)
-
- sess = Session(autocommit=True)
- with sess.begin():
- sess.add(User(name="u1"))
-
- sess.rollback()
- eq_(sess.query(User).count(), 1)
-
- def test_contextmanager_rollback(self):
- users, User = self.tables.users, self.classes.User
-
- mapper(User, users)
-
- sess = Session(autocommit=True)
-
- def go():
- with sess.begin():
- sess.add(User()) # name can't be null
-
- assert_raises(sa_exc.DBAPIError, go)
-
- eq_(sess.query(User).count(), 0)
-
- with sess.begin():
- sess.add(User(name="u1"))
- eq_(sess.query(User).count(), 1)
-
-
class AutoExpireTest(_LocalFixture):
__backend__ = True
eq_(u1.id, 3)
-class NaturalPKRollbackTest(fixtures.MappedTest):
+class ContextManagerPlusFutureTest(FixtureTest):
+ run_inserts = None
__backend__ = True
- @classmethod
- def define_tables(cls, metadata):
- Table("users", metadata, Column("name", String(50), primary_key=True))
+ @testing.requires.savepoints
+ @engines.close_open_connections
+ def test_contextmanager_nested_rollback(self):
+ users, User = self.tables.users, self.classes.User
+
+ mapper(User, users)
+
+ sess = Session()
+
+ def go():
+ with sess.begin_nested():
+ sess.add(User()) # name can't be null
+ sess.flush()
+
+ # and not InvalidRequestError
+ assert_raises(sa_exc.DBAPIError, go)
+
+ with sess.begin_nested():
+ sess.add(User(name="u1"))
+
+ eq_(sess.query(User).count(), 1)
+
+ def test_contextmanager_commit(self):
+ users, User = self.tables.users, self.classes.User
+
+ mapper(User, users)
+
+ sess = Session()
+ with sess.begin():
+ sess.add(User(name="u1"))
+
+ sess.rollback()
+ eq_(sess.query(User).count(), 1)
+
+ def test_contextmanager_rollback(self):
+ users, User = self.tables.users, self.classes.User
+
+ mapper(User, users)
+
+ sess = Session()
+
+ def go():
+ with sess.begin():
+ sess.add(User()) # name can't be null
+
+ assert_raises(sa_exc.DBAPIError, go)
+
+ eq_(sess.query(User).count(), 0)
+ sess.close()
+
+ with sess.begin():
+ sess.add(User(name="u1"))
+ eq_(sess.query(User).count(), 1)
+
+ def test_explicit_begin(self):
+ s1 = Session(testing.db)
+ with s1.begin() as trans:
+ is_(trans, s1.transaction)
+ s1.connection()
+
+ is_(s1._transaction, None)
+
+ def test_no_double_begin_explicit(self):
+ s1 = Session(testing.db)
+ s1.begin()
+ assert_raises_message(
+ sa_exc.InvalidRequestError,
+ "A transaction is already begun on this Session.",
+ s1.begin,
+ )
+
+ @testing.requires.savepoints
+ def test_future_rollback_is_global(self):
+ users = self.tables.users
+
+ s1 = Session(testing.db, future=True)
+
+ s1.begin()
+
+ s1.connection().execute(users.insert(), [{"id": 1, "name": "n1"}])
+
+ s1.begin_nested()
+
+ s1.connection().execute(
+ users.insert(), [{"id": 2, "name": "n2"}, {"id": 3, "name": "n3"}]
+ )
+
+ eq_(s1.connection().scalar(select(func.count()).select_from(users)), 3)
+
+ # rolls back the whole transaction
+ s1.rollback()
+ is_(s1.transaction, None)
+
+ eq_(s1.connection().scalar(select(func.count()).select_from(users)), 0)
+
+ s1.commit()
+ is_(s1.transaction, None)
+
+ @testing.requires.savepoints
+ def test_old_rollback_is_local(self):
+ users = self.tables.users
+
+ s1 = Session(testing.db)
+
+ t1 = s1.begin()
+
+ s1.connection().execute(users.insert(), [{"id": 1, "name": "n1"}])
+
+ s1.begin_nested()
+
+ s1.connection().execute(
+ users.insert(), [{"id": 2, "name": "n2"}, {"id": 3, "name": "n3"}]
+ )
+
+ eq_(s1.connection().scalar(select(func.count()).select_from(users)), 3)
+
+ # rolls back only the savepoint
+ s1.rollback()
+
+ is_(s1.transaction, t1)
+
+ eq_(s1.connection().scalar(select(func.count()).select_from(users)), 1)
+
+ s1.commit()
+ eq_(s1.connection().scalar(select(func.count()).select_from(users)), 1)
+ is_not_(s1.transaction, None)
+
+ def test_session_as_ctx_manager_one(self):
+ users = self.tables.users
+
+ with Session(testing.db) as sess:
+ is_not_(sess.transaction, None)
+
+ sess.connection().execute(
+ users.insert().values(id=1, name="user1")
+ )
+
+ eq_(
+ sess.connection().execute(users.select()).all(), [(1, "user1")]
+ )
+
+ is_not_(sess.transaction, None)
+
+ is_not_(sess.transaction, None)
+
+ # did not commit
+ eq_(sess.connection().execute(users.select()).all(), [])
+
+ def test_session_as_ctx_manager_future_one(self):
+ users = self.tables.users
+
+ with Session(testing.db, future=True) as sess:
+ is_(sess.transaction, None)
+
+ sess.connection().execute(
+ users.insert().values(id=1, name="user1")
+ )
+
+ eq_(
+ sess.connection().execute(users.select()).all(), [(1, "user1")]
+ )
+
+ is_not_(sess.transaction, None)
+
+ is_(sess.transaction, None)
+
+ # did not commit
+ eq_(sess.connection().execute(users.select()).all(), [])
+
+ def test_session_as_ctx_manager_two(self):
+ users = self.tables.users
+
+ try:
+ with Session(testing.db) as sess:
+ is_not_(sess.transaction, None)
+
+ sess.connection().execute(
+ users.insert().values(id=1, name="user1")
+ )
+
+ raise Exception("force rollback")
+ except:
+ pass
+ is_not_(sess.transaction, None)
+
+ def test_session_as_ctx_manager_two_future(self):
+ users = self.tables.users
+
+ try:
+ with Session(testing.db, future=True) as sess:
+ is_(sess.transaction, None)
+
+ sess.connection().execute(
+ users.insert().values(id=1, name="user1")
+ )
+
+ raise Exception("force rollback")
+ except:
+ pass
+ is_(sess.transaction, None)
+
+ def test_begin_context_manager(self):
+ users = self.tables.users
+
+ with Session(testing.db) as sess:
+ with sess.begin():
+ sess.connection().execute(
+ users.insert().values(id=1, name="user1")
+ )
+
+ eq_(
+ sess.connection().execute(users.select()).all(),
+ [(1, "user1")],
+ )
+
+ # committed
+ eq_(sess.connection().execute(users.select()).all(), [(1, "user1")])
+
+ def test_sessionmaker_begin_context_manager(self):
+ users = self.tables.users
+
+ session = sessionmaker(testing.db)
+
+ with session.begin() as sess:
+ sess.connection().execute(
+ users.insert().values(id=1, name="user1")
+ )
+
+ eq_(
+ sess.connection().execute(users.select()).all(),
+ [(1, "user1")],
+ )
+
+ # committed
+ eq_(sess.connection().execute(users.select()).all(), [(1, "user1")])
+
+ def test_begin_context_manager_rollback_trans(self):
+ users = self.tables.users
+
+ try:
+ with Session(testing.db) as sess:
+ with sess.begin():
+ sess.connection().execute(
+ users.insert().values(id=1, name="user1")
+ )
+
+ eq_(
+ sess.connection().execute(users.select()).all(),
+ [(1, "user1")],
+ )
+
+ raise Exception("force rollback")
+ except:
+ pass
+
+ # rolled back
+ eq_(sess.connection().execute(users.select()).all(), [])
+
+ def test_begin_context_manager_rollback_outer(self):
+ users = self.tables.users
+
+ try:
+ with Session(testing.db) as sess:
+ with sess.begin():
+ sess.connection().execute(
+ users.insert().values(id=1, name="user1")
+ )
+
+ eq_(
+ sess.connection().execute(users.select()).all(),
+ [(1, "user1")],
+ )
+
+ raise Exception("force rollback")
+ except:
+ pass
+
+ # committed
+ eq_(sess.connection().execute(users.select()).all(), [(1, "user1")])
+
+ def test_sessionmaker_begin_context_manager_rollback_trans(self):
+ users = self.tables.users
+
+ session = sessionmaker(testing.db)
+
+ try:
+ with session.begin() as sess:
+ sess.connection().execute(
+ users.insert().values(id=1, name="user1")
+ )
+
+ eq_(
+ sess.connection().execute(users.select()).all(),
+ [(1, "user1")],
+ )
+
+ raise Exception("force rollback")
+ except:
+ pass
+
+ # rolled back
+ eq_(sess.connection().execute(users.select()).all(), [])
+
+ def test_sessionmaker_begin_context_manager_rollback_outer(self):
+ users = self.tables.users
+
+ session = sessionmaker(testing.db)
+
+ try:
+ with session.begin() as sess:
+ sess.connection().execute(
+ users.insert().values(id=1, name="user1")
+ )
+
+ eq_(
+ sess.connection().execute(users.select()).all(),
+ [(1, "user1")],
+ )
+
+ raise Exception("force rollback")
+ except:
+ pass
+
+ # committed
+ eq_(sess.connection().execute(users.select()).all(), [(1, "user1")])
+
+
+class TransactionFlagsTest(fixtures.TestBase):
+ def test_in_transaction(self):
+ s1 = Session(testing.db)
+
+ eq_(s1.in_transaction(), False)
+
+ trans = s1.begin()
+
+ eq_(s1.in_transaction(), True)
+ is_(s1.get_transaction(), trans)
+
+ n1 = s1.begin_nested()
+
+ eq_(s1.in_transaction(), True)
+ is_(s1.get_transaction(), trans)
+ is_(s1.get_nested_transaction(), n1)
+
+ n1.rollback()
+
+ is_(s1.get_nested_transaction(), None)
+ is_(s1.get_transaction(), trans)
+
+ eq_(s1.in_transaction(), True)
+
+ s1.commit()
+
+ eq_(s1.in_transaction(), False)
+ is_(s1.get_transaction(), None)
+
+ def test_in_transaction_subtransactions(self):
+ """we'd like to do away with subtransactions for future sessions
+ entirely. at the moment we are still using them internally.
+ it might be difficult to keep the internals working in exactly
+ the same way if remove this concept, so for now just test that
+ the external API works.
+
+ """
+ s1 = Session(testing.db)
+
+ eq_(s1.in_transaction(), False)
+
+ trans = s1.begin()
+
+ eq_(s1.in_transaction(), True)
+ is_(s1.get_transaction(), trans)
+
+ subtrans = s1.begin(_subtrans=True)
+ is_(s1.get_transaction(), trans)
+ eq_(s1.in_transaction(), True)
+
+ is_(s1._transaction, subtrans)
+
+ s1.rollback()
+
+ eq_(s1.in_transaction(), True)
+ is_(s1._transaction, trans)
+
+ s1.rollback()
+
+ eq_(s1.in_transaction(), False)
+ is_(s1._transaction, None)
+
+ def test_in_transaction_nesting(self):
+ s1 = Session(testing.db)
+
+ eq_(s1.in_transaction(), False)
+
+ trans = s1.begin()
+
+ eq_(s1.in_transaction(), True)
+ is_(s1.get_transaction(), trans)
+
+ sp1 = s1.begin_nested()
+
+ eq_(s1.in_transaction(), True)
+ is_(s1.get_transaction(), trans)
+ is_(s1.get_nested_transaction(), sp1)
+
+ sp2 = s1.begin_nested()
+
+ eq_(s1.in_transaction(), True)
+ eq_(s1.in_nested_transaction(), True)
+ is_(s1.get_transaction(), trans)
+ is_(s1.get_nested_transaction(), sp2)
+
+ sp2.rollback()
+
+ eq_(s1.in_transaction(), True)
+ eq_(s1.in_nested_transaction(), True)
+ is_(s1.get_transaction(), trans)
+ is_(s1.get_nested_transaction(), sp1)
+
+ sp1.rollback()
+
+ is_(s1.get_nested_transaction(), None)
+ eq_(s1.in_transaction(), True)
+ eq_(s1.in_nested_transaction(), False)
+ is_(s1.get_transaction(), trans)
+
+ s1.rollback()
+
+ eq_(s1.in_transaction(), False)
+ is_(s1.get_transaction(), None)
+
+
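The flag semantics exercised by ``TransactionFlagsTest`` above (one outer transaction plus a stack of nested savepoints, with ``get_nested_transaction()`` always reporting the innermost active one) can be sketched as a small stand-alone model. This is a toy illustration of the accessor behavior only, not SQLAlchemy's implementation; ``ToySession`` and ``rollback_nested`` are hypothetical names:

```python
class ToySession:
    """Minimal model of the Session transaction-flag API: one outer
    transaction plus a stack of nested savepoints."""

    def __init__(self):
        self._outer = None
        self._nested = []  # stack of active savepoints, innermost last

    def begin(self):
        assert self._outer is None, "A transaction is already begun"
        self._outer = object()
        return self._outer

    def begin_nested(self):
        assert self._outer is not None, "begin() the outer transaction first"
        sp = object()
        self._nested.append(sp)
        return sp

    def rollback_nested(self, sp):
        # rolling back a savepoint discards it and anything opened above it
        idx = self._nested.index(sp)
        del self._nested[idx:]

    def rollback(self):
        # ends the whole transaction, savepoints included
        self._outer = None
        self._nested.clear()

    commit = rollback  # for flag purposes, both end the transaction

    def in_transaction(self):
        return self._outer is not None

    def in_nested_transaction(self):
        return bool(self._nested)

    def get_transaction(self):
        return self._outer

    def get_nested_transaction(self):
        return self._nested[-1] if self._nested else None
```

Walking this model through the same sequence as ``test_in_transaction_nesting`` (begin, two savepoints, roll each back innermost-first, then roll back the outer transaction) reproduces the flag transitions the test asserts.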
+class NaturalPKRollbackTest(fixtures.MappedTest):
+ __backend__ = True
+
+ @classmethod
+ def define_tables(cls, metadata):
+ Table("users", metadata, Column("name", String(50), primary_key=True))
@classmethod
def setup_classes(cls):
assert u2 not in s
assert s.identity_map[identity_key(User, ("u1",))] is u1
+
+
+class JoinIntoAnExternalTransactionFixture(object):
+ """Test the "join into an external transaction" examples"""
+
+ def setup(self):
+ self.connection = testing.db.connect()
+
+ self.metadata = MetaData()
+ self.table = Table(
+ "t1", self.metadata, Column("id", Integer, primary_key=True)
+ )
+ with self.connection.begin():
+ self.table.create(self.connection, checkfirst=True)
+
+ self.setup_session()
+
+ def test_something(self):
+ A = self.A
+
+ a1 = A()
+ self.session.add(a1)
+ self.session.commit()
+
+ self._assert_count(1)
+
+ @testing.requires.savepoints
+ def test_something_with_rollback(self):
+ A = self.A
+
+ a1 = A()
+ self.session.add(a1)
+ self.session.flush()
+
+ self._assert_count(1)
+ self.session.rollback()
+ self._assert_count(0)
+
+ a1 = A()
+ self.session.add(a1)
+ self.session.commit()
+ self._assert_count(1)
+
+ a2 = A()
+
+ self.session.add(a2)
+ self.session.flush()
+ self._assert_count(2)
+
+ self.session.rollback()
+ self._assert_count(1)
+
+ def _assert_count(self, count):
+ result = self.connection.scalar(
+ select(func.count()).select_from(self.table)
+ )
+ eq_(result, count)
+
+ def teardown(self):
+ self.teardown_session()
+
+ with self.connection.begin():
+ self._assert_count(0)
+
+ with self.connection.begin():
+ self.table.drop(self.connection)
+
+ # return connection to the Engine
+ self.connection.close()
+
+
+class NewStyleJoinIntoAnExternalTransactionTest(
+ JoinIntoAnExternalTransactionFixture
+):
+ """A new recipe for "join into an external transaction" that works
+ for both legacy and future engines/sessions
+
+ """
+
+ def setup_session(self):
+ # begin a non-ORM transaction
+ self.trans = self.connection.begin()
+
+ class A(object):
+ pass
+
+ mapper(A, self.table)
+ self.A = A
+
+ # bind an individual Session to the connection
+ self.session = Session(bind=self.connection, future=True)
+
+ if testing.requires.savepoints.enabled:
+ self.nested = self.connection.begin_nested()
+
+ @event.listens_for(self.session, "after_transaction_end")
+ def end_savepoint(session, transaction):
+ if not self.nested.is_active:
+ self.nested = self.connection.begin_nested()
+
+ def teardown_session(self):
+ self.session.close()
+
+ # rollback - everything that happened with the
+ # Session above (including calls to commit())
+ # is rolled back.
+ self.trans.rollback()
+
+
+class FutureJoinIntoAnExternalTransactionTest(
+ NewStyleJoinIntoAnExternalTransactionTest,
+ fixtures.FutureEngineMixin,
+ fixtures.TestBase,
+):
+ pass
+
+
+class NonFutureJoinIntoAnExternalTransactionTest(
+ NewStyleJoinIntoAnExternalTransactionTest, fixtures.TestBase,
+):
+ pass
+
+
+class LegacyJoinIntoAnExternalTransactionTest(
+ JoinIntoAnExternalTransactionFixture, fixtures.TestBase,
+):
+ def setup_session(self):
+ # begin a non-ORM transaction
+ self.trans = self.connection.begin()
+
+ class A(object):
+ pass
+
+ mapper(A, self.table)
+ self.A = A
+
+ # bind an individual Session to the connection
+ self.session = Session(bind=self.connection)
+
+ if testing.requires.savepoints.enabled:
+ # start the session in a SAVEPOINT...
+ self.session.begin_nested()
+
+ # then each time that SAVEPOINT ends, reopen it
+ @event.listens_for(self.session, "after_transaction_end")
+ def restart_savepoint(session, transaction):
+ if transaction.nested and not transaction._parent.nested:
+
+ # ensure that state is expired the way
+ # session.commit() at the top level normally does
+ # (optional step)
+ session.expire_all()
+
+ session.begin_nested()
+
+ def teardown_session(self):
+ self.session.close()
+
+ # rollback - everything that happened with the
+ # Session above (including calls to commit())
+ # is rolled back.
+ self.trans.rollback()
+
+
+class LegacyBranchedJoinIntoAnExternalTransactionTest(
+ LegacyJoinIntoAnExternalTransactionTest
+):
+ def setup_session(self):
+ # begin a non-ORM transaction
+ self.trans = self.connection.begin()
+
+ class A(object):
+ pass
+
+ mapper(A, self.table)
+ self.A = A
+
+ # neutron is doing this inside of a migration
+ # 1df244e556f5_add_unique_ha_router_agent_port_bindings.py
+ self.session = Session(bind=self.connection.connect())
+
+ if testing.requires.savepoints.enabled:
+ # start the session in a SAVEPOINT...
+ self.session.begin_nested()
+
+ # then each time that SAVEPOINT ends, reopen it
+ @event.listens_for(self.session, "after_transaction_end")
+ def restart_savepoint(session, transaction):
+ if transaction.nested and not transaction._parent.nested:
+
+ # ensure that state is expired the way
+ # session.commit() at the top level normally does
+ # (optional step)
+ session.expire_all()
+
+ session.begin_nested()
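The core of the "join into an external transaction" recipes above — run the test inside an externally-held transaction, wrap the session's work in a SAVEPOINT, and roll everything back afterwards so the database is untouched — can be sketched at the DBAPI level with the stdlib ``sqlite3`` module. This is a toy illustration of the transaction mechanics only; it omits the ORM ``Session`` and the ``after_transaction_end`` event wiring:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # take manual control of transactions

conn.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY)")

conn.execute("BEGIN")              # the external, test-harness transaction
conn.execute("SAVEPOINT test_sp")  # the "session" works inside a savepoint

conn.execute("INSERT INTO t1 (id) VALUES (1)")
assert conn.execute("SELECT COUNT(*) FROM t1").fetchone()[0] == 1

# rolling back to the savepoint undoes everything the "session" did,
# while the external transaction stays open
conn.execute("ROLLBACK TO SAVEPOINT test_sp")
assert conn.execute("SELECT COUNT(*) FROM t1").fetchone()[0] == 0

conn.execute("ROLLBACK")  # end the external transaction; table is empty
```

The event listeners in the recipes exist to reopen the savepoint each time the session ends it, so that a ``commit()`` inside the test never escapes past the savepoint into the external transaction.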
)
mapper(MyClass, mytable)
- session = create_session()
+ session = Session()
mc = MyClass()
mco = MyOtherClass()
mco.myclass = mc
session.add(mco)
- session.flush()
+ session.commit()
eq_(session.scalar(select(func.count("*")).select_from(mytable)), 1)
eq_(
session.expire(mco, ["myclass"])
session.delete(mco)
- session.flush()
+ session.commit()
# mytable wasn't deleted, is the point.
eq_(session.scalar(select(func.count("*")).select_from(mytable)), 1)
inner = select(
[
case(
- [
- [info_table.c.pk < 3, "lessthan3"],
- [
- and_(info_table.c.pk >= 3, info_table.c.pk < 7),
- "gt3",
- ],
- ]
+ (info_table.c.pk < 3, "lessthan3"),
+ (and_(info_table.c.pk >= 3, info_table.c.pk < 7), "gt3"),
).label("x"),
info_table.c.pk,
info_table.c.info,
# gt3 4 pk_4_data
# gt3 5 pk_5_data
# gt3 6 pk_6_data
- assert inner_result == [
- ("lessthan3", 1, "pk_1_data"),
- ("lessthan3", 2, "pk_2_data"),
- ("gt3", 3, "pk_3_data"),
- ("gt3", 4, "pk_4_data"),
- ("gt3", 5, "pk_5_data"),
- ("gt3", 6, "pk_6_data"),
- ]
+ eq_(
+ inner_result,
+ [
+ ("lessthan3", 1, "pk_1_data"),
+ ("lessthan3", 2, "pk_2_data"),
+ ("gt3", 3, "pk_3_data"),
+ ("gt3", 4, "pk_4_data"),
+ ("gt3", 5, "pk_5_data"),
+ ("gt3", 6, "pk_6_data"),
+ ],
+ )
outer = select([inner.alias("q_inner")])
w_else = select(
[
case(
- [
- [info_table.c.pk < 3, cast(3, Integer)],
- [and_(info_table.c.pk >= 3, info_table.c.pk < 6), 6],
- ],
+ (info_table.c.pk < 3, cast(3, Integer)),
+ (and_(info_table.c.pk >= 3, info_table.c.pk < 6), 6),
else_=0,
).label("x"),
info_table.c.pk,
else_result = w_else.execute().fetchall()
- assert else_result == [
- (3, 1, "pk_1_data"),
- (3, 2, "pk_2_data"),
- (6, 3, "pk_3_data"),
- (6, 4, "pk_4_data"),
- (6, 5, "pk_5_data"),
- (0, 6, "pk_6_data"),
- ]
+ eq_(
+ else_result,
+ [
+ (3, 1, "pk_1_data"),
+ (3, 2, "pk_2_data"),
+ (6, 3, "pk_3_data"),
+ (6, 4, "pk_4_data"),
+ (6, 5, "pk_5_data"),
+ (0, 6, "pk_6_data"),
+ ],
+ )
def test_literal_interpretation_ambiguous(self):
assert_raises_message(
exc.ArgumentError,
r"Column expression expected, got 'x'",
case,
- [("x", "y")],
+ ("x", "y"),
)
def test_literal_interpretation_ambiguous_tuple(self):
exc.ArgumentError,
r"Column expression expected, got \('x', 'y'\)",
case,
- [(("x", "y"), "z")],
+ (("x", "y"), "z"),
)
def test_literal_interpretation(self):
t = table("test", column("col1"))
self.assert_compile(
- case([("x", "y")], value=t.c.col1),
+ case(("x", "y"), value=t.c.col1),
"CASE test.col1 WHEN :param_1 THEN :param_2 END",
)
self.assert_compile(
- case([(t.c.col1 == 7, "y")], else_="z"),
+ case((t.c.col1 == 7, "y"), else_="z"),
"CASE WHEN (test.col1 = :col1_1) THEN :param_1 ELSE :param_2 END",
)
select(
[
case(
- [(info_table.c.info == "pk_4_data", text("'yes'"))],
+ (info_table.c.info == "pk_4_data", text("'yes'")),
else_=text("'no'"),
)
]
select(
[
case(
- [
- (
- info_table.c.info == "pk_4_data",
- literal_column("'yes'"),
- )
- ],
+ (
+ info_table.c.info == "pk_4_data",
+ literal_column("'yes'"),
+ ),
else_=literal_column("'no'"),
)
]
).order_by(info_table.c.info),
]:
- if testing.against("firebird"):
- eq_(
- s.execute().fetchall(),
- [
- ("no ",),
- ("no ",),
- ("no ",),
- ("yes",),
- ("no ",),
- ("no ",),
- ],
- )
- else:
- eq_(
- s.execute().fetchall(),
- [("no",), ("no",), ("no",), ("yes",), ("no",), ("no",)],
- )
+ eq_(
+ s.execute().fetchall(),
+ [("no",), ("no",), ("no",), ("yes",), ("no",), ("no",)],
+ )
- @testing.fails_on("firebird", "FIXME: unknown")
def testcase_with_dict(self):
query = select(
[
],
from_obj=[info_table],
)
- assert query.execute().fetchall() == [
- ("lessthan3", 1, "pk_1_data"),
- ("lessthan3", 2, "pk_2_data"),
- ("gt3", 3, "pk_3_data"),
- ("gt3", 4, "pk_4_data"),
- ("gt3", 5, "pk_5_data"),
- ("gt3", 6, "pk_6_data"),
- ]
-
- simple_query = select(
+ eq_(
+ query.execute().fetchall(),
[
+ ("lessthan3", 1, "pk_1_data"),
+ ("lessthan3", 2, "pk_2_data"),
+ ("gt3", 3, "pk_3_data"),
+ ("gt3", 4, "pk_4_data"),
+ ("gt3", 5, "pk_5_data"),
+ ("gt3", 6, "pk_6_data"),
+ ],
+ )
+
+ simple_query = (
+ select(
case(
{1: "one", 2: "two"}, value=info_table.c.pk, else_="other"
),
info_table.c.pk,
- ],
- whereclause=info_table.c.pk < 4,
- from_obj=[info_table],
+ )
+ .where(info_table.c.pk < 4)
+ .select_from(info_table)
)
assert simple_query.execute().fetchall() == [
ClauseList(table_a.c.a == 5, table_a.c.b == table_a.c.a),
),
lambda: (
- case(whens=[(table_a.c.a == 5, 10), (table_a.c.a == 10, 20)]),
- case(whens=[(table_a.c.a == 18, 10), (table_a.c.a == 10, 20)]),
- case(whens=[(table_a.c.a == 5, 10), (table_a.c.b == 10, 20)]),
+ case((table_a.c.a == 5, 10), (table_a.c.a == 10, 20)),
+ case((table_a.c.a == 18, 10), (table_a.c.a == 10, 20)),
+ case((table_a.c.a == 5, 10), (table_a.c.b == 10, 20)),
case(
- whens=[
- (table_a.c.a == 5, 10),
- (table_a.c.b == 10, 20),
- (table_a.c.a == 9, 12),
- ]
- ),
- case(
- whens=[(table_a.c.a == 5, 10), (table_a.c.a == 10, 20)],
- else_=30,
+ (table_a.c.a == 5, 10),
+ (table_a.c.b == 10, 20),
+ (table_a.c.a == 9, 12),
),
+ case((table_a.c.a == 5, 10), (table_a.c.a == 10, 20), else_=30,),
case({"wendy": "W", "jack": "J"}, value=table_a.c.a, else_="E"),
case({"wendy": "W", "jack": "J"}, value=table_a.c.b, else_="E"),
case({"wendy_w": "W", "jack": "J"}, value=table_a.c.a, else_="E"),
self._do_test(s)
def test_case(self):
- c = case([(self.criterion, self.column)], else_=self.column)
+ c = case((self.criterion, self.column), else_=self.column)
self._do_test(c)
def test_cast(self):
from sqlalchemy import alias
from sqlalchemy import and_
from sqlalchemy import bindparam
+from sqlalchemy import case
from sqlalchemy import CHAR
from sqlalchemy import column
from sqlalchemy import create_engine
"SELECT anon_1.a FROM (SELECT 1 AS a ORDER BY 1) AS anon_1",
)
+ def test_case_list_legacy(self):
+ t1 = table("t", column("q"))
+
+ with testing.expect_deprecated(
+ r"The \"whens\" argument to case\(\) is now passed"
+ ):
+ stmt = select(t1).where(
+ case(
+ [(t1.c.q == 5, "foo"), (t1.c.q == 10, "bar")], else_="bat"
+ )
+ != "bat"
+ )
+
+ self.assert_compile(
+ stmt,
+ "SELECT t.q FROM t WHERE CASE WHEN (t.q = :q_1) "
+ "THEN :param_1 WHEN (t.q = :q_2) THEN :param_2 "
+ "ELSE :param_3 END != :param_4",
+ )
+
+ def test_case_whens_kw(self):
+ t1 = table("t", column("q"))
+
+ with testing.expect_deprecated(
+ r"The \"whens\" argument to case\(\) is now passed"
+ ):
+ stmt = select(t1).where(
+ case(
+ whens=[(t1.c.q == 5, "foo"), (t1.c.q == 10, "bar")],
+ else_="bat",
+ )
+ != "bat"
+ )
+
+ self.assert_compile(
+ stmt,
+ "SELECT t.q FROM t WHERE CASE WHEN (t.q = :q_1) "
+ "THEN :param_1 WHEN (t.q = :q_2) THEN :param_2 "
+ "ELSE :param_3 END != :param_4",
+ )
+
+ def test_case_whens_dict_kw(self):
+ t1 = table("t", column("q"))
+
+ with testing.expect_deprecated(
+ r"The \"whens\" argument to case\(\) is now passed"
+ ):
+ stmt = select(t1).where(
+ case(whens={t1.c.q == 5: "foo"}, else_="bat",) != "bat"
+ )
+
+ self.assert_compile(
+ stmt,
+ "SELECT t.q FROM t WHERE CASE WHEN (t.q = :q_1) THEN "
+ ":param_1 ELSE :param_2 END != :param_3",
+ )
+
+ def test_case_kw_arg_detection(self):
+ # because we support py2k, case() has to parse **kw for now
+
+ assert_raises_message(
+ TypeError,
+ "unknown arguments: bat, foo",
+ case,
+ (column("x") == 10, 5),
+ else_=15,
+ foo="bar",
+ bat="hoho",
+ )
+
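The calling-convention change these deprecation tests cover — ``case()`` taking its WHEN criteria as positional ``(condition, result)`` tuples rather than a single list — can be mirrored by a small Python evaluator. ``eval_case`` is a hypothetical helper for illustration; it models only the evaluation semantics of the plain tuple and ``value=`` forms, not SQL compilation:

```python
import warnings


def eval_case(*whens, value=None, else_=None):
    """Evaluate CASE semantics in Python, mirroring the 1.4 signature:
    positional (test, result) tuples, with optional value/else_."""
    if len(whens) == 1 and isinstance(whens[0], list):
        # legacy single-list form: still accepted, but deprecated
        warnings.warn(
            'The "whens" argument to case() is now passed as a series '
            "of positional elements",
            DeprecationWarning,
        )
        whens = tuple(whens[0])
    for test, result in whens:
        # with value= given, CASE compares the value to each test;
        # otherwise each test is a boolean condition
        matched = (value == test) if value is not None else bool(test)
        if matched:
            return result
    return else_
```

For example, ``eval_case((pk < 3, "lessthan3"), (3 <= pk < 7, "gt3"))`` walks the tuples in order and returns the first matching result, just as the compiled ``CASE WHEN ... THEN ...`` does row-by-row.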
+ def test_with_only_generative(self):
+ table1 = table(
+ "table1",
+ column("col1"),
+ column("col2"),
+ column("col3"),
+ column("colx"),
+ )
+ s1 = table1.select().scalar_subquery()
+
+ with testing.expect_deprecated(
+ r"The \"columns\" argument to "
+ r"Select.with_only_columns\(\) is now passed"
+ ):
+ stmt = s1.with_only_columns([s1])
+ self.assert_compile(
+ stmt,
+ "SELECT (SELECT table1.col1, table1.col2, "
+ "table1.col3, table1.colx FROM table1) AS anon_1",
+ )
+
+ def test_from_list_with_columns(self):
+ table1 = table("t1", column("a"))
+ table2 = table("t2", column("b"))
+ s1 = select(table1.c.a, table2.c.b)
+ self.assert_compile(s1, "SELECT t1.a, t2.b FROM t1, t2")
+
+ with testing.expect_deprecated(
+ r"The \"columns\" argument to "
+ r"Select.with_only_columns\(\) is now passed"
+ ):
+ s2 = s1.with_only_columns([table2.c.b])
+
+ self.assert_compile(s2, "SELECT t2.b FROM t2")
+
def test_column(self):
stmt = select(column("x"))
with testing.expect_deprecated(
def test_append_column(self):
t1 = table("t1", column("q"), column("p"))
stmt = select(t1.c.q)
- with self._expect_deprecated("Select", "column", "column"):
+ with self._expect_deprecated("Select", "column", "add_columns"):
stmt.append_column(t1.c.p)
self.assert_compile(stmt, "SELECT t1.q, t1.p FROM t1")
def test_with_only_generative(self):
s1 = table1.select().scalar_subquery()
self.assert_compile(
- s1.with_only_columns([s1]),
+ s1.with_only_columns(s1),
"SELECT (SELECT table1.col1, table1.col2, "
"table1.col3, table1.colx FROM table1) AS anon_1",
)
table2 = table("t2", column("b"))
s1 = select(table1.c.a, table2.c.b)
self.assert_compile(s1, "SELECT t1.a, t2.b FROM t1, t2")
- s2 = s1.with_only_columns([table2.c.b])
+ s2 = s1.with_only_columns(table2.c.b)
self.assert_compile(s2, "SELECT t2.b FROM t2")
s3 = sql_util.ClauseAdapter(table1).traverse(s1)
self.assert_compile(s3, "SELECT t1.a, t2.b FROM t1, t2")
- s4 = s3.with_only_columns([table2.c.b])
+ s4 = s3.with_only_columns(table2.c.b)
self.assert_compile(s4, "SELECT t2.b FROM t2")
def test_from_list_against_existing_one(self):