Fixed bug in :ref:`change_3948` which prevented "selectin" and
"inline" settings in a multi-level class hierarchy from interacting
- together as expected. A new example is added to the documentation.
-
- .. seealso::
-
- :ref:`polymorphic_selectin_and_withpoly`
+ together as expected.
.. change::
:tags: bug, oracle
.. seealso::
- :ref:`deferred_raiseload`
+ :ref:`orm_queryguide_deferred_raiseload`
:ticket:`4826`
:ref:`orm_mapping_classes_toplevel` - all new unified documentation for
Declarative, classical mapping, dataclasses, attrs, etc.
+.. _migration_20_query_usage:
+
2.0 Migration - ORM Usage
---------------------------------------------
* unit of work flushes for objects added to the session using
:meth:`_orm.Session.add` and :meth:`_orm.Session.add_all`.
+* The new :ref:`ORM Bulk Insert Statement <orm_queryguide_bulk_insert>` feature,
+ which improves upon the experimental version of this feature first introduced
+ in SQLAlchemy 1.4.
* the :class:`_orm.Session` "bulk" operations described at
- :ref:`bulk_operations`
-* An upcoming feature known as "ORM Enabled Insert Statements" that will be
- an improvement upon the existing :ref:`orm_dml_returning_objects` first
- introduced as an experimental feature in SQLAlchemy 1.4.
+  :ref:`bulk_operations`, which are superseded by the above-mentioned
+ ORM Bulk Insert feature.
To get a sense of the scale of the operation, below are performance
measurements using the ``test_flush_no_pk`` performance suite, which
:ref:`engine_insertmanyvalues` - Documentation and background on the
new feature as well as how to configure it
+ORM-enabled Insert, Upsert, Update and Delete Statements, with ORM RETURNING
+-----------------------------------------------------------------------------
+
+SQLAlchemy 1.4 ported the features of the legacy :class:`_orm.Query` object to
+:term:`2.0 style` execution, which meant that the :class:`.Select` construct
+could be passed to :meth:`_orm.Session.execute` to deliver ORM results. Support
+was also added for :class:`.Update` and :class:`.Delete` to be passed to
+:meth:`_orm.Session.execute`, to the degree that they could provide
+implementations of :meth:`_orm.Query.update` and :meth:`_orm.Query.delete`.
+
+The major missing element has been support for the :class:`_dml.Insert` construct.
+The 1.4 documentation addressed this with some recipes for "inserts" and "upserts"
+with use of :meth:`.Select.from_statement` to integrate RETURNING
+into an ORM context. 2.0 now fully closes the gap by integrating direct support for
+:class:`_dml.Insert` as an enhanced version of the :meth:`_orm.Session.bulk_insert_mappings`
+method, along with full ORM RETURNING support for all DML structures.
+
+Bulk Insert with RETURNING
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+:class:`_dml.Insert` can be passed to :meth:`_orm.Session.execute`, with
+or without :meth:`_dml.Insert.returning`, which when passed with a
+separate parameter list will invoke the same process as was previously
+implemented by
+:meth:`_orm.Session.bulk_insert_mappings`, with additional enhancements. This will optimize the
+batching of rows making use of the new :ref:`fast insertmany <change_6047>`
+feature, while also adding support for
+heterogeneous parameter sets and multiple-table mappings such as joined table
+inheritance::
+
+ >>> users = session.scalars(
+ ... insert(User).returning(User),
+ ... [
+ ... {"name": "spongebob", "fullname": "Spongebob Squarepants"},
+ ... {"name": "sandy", "fullname": "Sandy Cheeks"},
+ ... {"name": "patrick", "fullname": "Patrick Star"},
+ ... {"name": "squidward", "fullname": "Squidward Tentacles"},
+ ... {"name": "ehkrabs", "fullname": "Eugene H. Krabs"},
+ ... ]
+ ... )
+ >>> print(users.all())
+ [User(name='spongebob', fullname='Spongebob Squarepants'),
+ User(name='sandy', fullname='Sandy Cheeks'),
+ User(name='patrick', fullname='Patrick Star'),
+ User(name='squidward', fullname='Squidward Tentacles'),
+ User(name='ehkrabs', fullname='Eugene H. Krabs')]
+
+RETURNING is supported for all of these use cases, where the ORM will construct
+a full result set from multiple statement invocations.
+
+.. seealso::
+
+ :ref:`orm_queryguide_bulk_insert`
+
+Bulk UPDATE
+~~~~~~~~~~~
+
+In a similar manner as that of :class:`_dml.Insert`, passing the
+:class:`_dml.Update` construct along with a parameter list that includes
+primary key values to :meth:`_orm.Session.execute` will invoke the same process
+as previously supported by the :meth:`_orm.Session.bulk_update_mappings`
+method. This feature does not, however, support RETURNING, as it uses
+a SQL UPDATE statement that is invoked using DBAPI :term:`executemany`::
+
+ >>> from sqlalchemy import update
+ >>> session.execute(
+ ... update(User),
+ ... [
+ ... {"id": 1, "fullname": "Spongebob Squarepants"},
+ ... {"id": 3, "fullname": "Patrick Star"},
+ ... ]
+ ... )
+
+.. seealso::
+
+ :ref:`orm_queryguide_bulk_update`
+
+INSERT / upsert ... VALUES ... RETURNING
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When using :class:`_dml.Insert` with :meth:`_dml.Insert.values`, the set of
+parameters may include SQL expressions. Additionally, upsert variants
+such as those for SQLite, PostgreSQL and MariaDB are also supported.
+These statements may now include :meth:`_dml.Insert.returning` clauses
+with column expressions or full ORM entities::
+
+ >>> from sqlalchemy.dialects.sqlite import insert as sqlite_upsert
+ >>> stmt = sqlite_upsert(User).values(
+ ... [
+ ... {"name": "spongebob", "fullname": "Spongebob Squarepants"},
+ ... {"name": "sandy", "fullname": "Sandy Cheeks"},
+ ... {"name": "patrick", "fullname": "Patrick Star"},
+ ... {"name": "squidward", "fullname": "Squidward Tentacles"},
+ ... {"name": "ehkrabs", "fullname": "Eugene H. Krabs"},
+ ... ]
+ ... )
+ >>> stmt = stmt.on_conflict_do_update(
+ ... index_elements=[User.name],
+ ... set_=dict(fullname=stmt.excluded.fullname)
+ ... )
+ >>> result = session.scalars(stmt.returning(User))
+ >>> print(result.all())
+ [User(name='spongebob', fullname='Spongebob Squarepants'),
+ User(name='sandy', fullname='Sandy Cheeks'),
+ User(name='patrick', fullname='Patrick Star'),
+ User(name='squidward', fullname='Squidward Tentacles'),
+ User(name='ehkrabs', fullname='Eugene H. Krabs')]
+
+.. seealso::
+
+ :ref:`orm_queryguide_insert_values`
+
+ :ref:`orm_queryguide_upsert`
+
+ORM UPDATE / DELETE with WHERE ... RETURNING
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+SQLAlchemy 1.4 also had some modest support for the RETURNING feature to be
+used with the :func:`_dml.update` and :func:`_dml.delete` constructs, when
+used with :meth:`_orm.Session.execute`. This support has now been upgraded
+to be fully native, including that the ``fetch`` synchronization strategy
+may also proceed whether or not explicit use of RETURNING is present::
+
+ >>> from sqlalchemy import update
+ >>> stmt = (
+    ...     update(User)
+    ...     .where(User.name == "squidward")
+    ...     .values(name="spongebob")
+    ...     .returning(User)
+ ... )
+ >>> result = session.scalars(stmt, execution_options={"synchronize_session": "fetch"})
+ >>> print(result.all())
+
+
+.. seealso::
+
+ :ref:`orm_queryguide_update_delete_where`
+
+ :ref:`orm_queryguide_update_delete_where_returning`
+
+Improved ``synchronize_session`` behavior for ORM UPDATE / DELETE
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The default strategy for :ref:`synchronize_session <orm_queryguide_update_delete_sync>`
+is now a new value ``"auto"``. This strategy will attempt to use the
+``"evaluate"`` strategy and then automatically fall back to the ``"fetch"``
+strategy. For all backends other than MySQL / MariaDB, ``"fetch"`` uses
+RETURNING to fetch UPDATE/DELETEd primary key identifiers within the
+same statement, and is therefore generally more efficient than in previous
+versions (in 1.4, this use of RETURNING was only available on PostgreSQL and
+SQL Server).
+
+.. seealso::
+
+ :ref:`orm_queryguide_update_delete_sync`
+
+Summary of Changes
+~~~~~~~~~~~~~~~~~~
+
+Listed tickets for new ORM DML with RETURNING features:
+
+* convert ``insert()`` at ORM level to interpret ``values()`` in an ORM
+ context - :ticket:`7864`
+* evaluate feasibility of dml.returning(Entity) to deliver ORM expressions,
+ automatically apply select().from_statement equiv - :ticket:`7865`
+* given ORM insert, try to carry the bulk methods along, re: inheritance -
+ :ticket:`8360`
.. _change_7311:
longer needs to rely upon the single-row-only
`cursor.lastrowid <https://peps.python.org/pep-0249/#lastrowid>`_ attribute
provided by the :term:`DBAPI` for most backends; RETURNING may now be used for
-all included backends with the exception of MySQL. The remaining performance
+all :ref:`SQLAlchemy-included <included_dialects>` backends with the exception
+of MySQL. The remaining performance
limitation, that the
`cursor.executemany() <https://peps.python.org/pep-0249/#executemany>`_ DBAPI
method does not allow for rows to be fetched, is resolved for most backends by
only the relational database contains a particular series of functions that are necessary
to coerce incoming and outgoing data between an application and persistence format.
Examples include using database-defined encryption/decryption functions, as well
-as stored procedures that handle geographic data. The PostGIS extension to PostgreSQL
-includes an extensive array of SQL functions that are necessary for coercing
-data into particular formats.
+as stored procedures that handle geographic data.
Any :class:`.TypeEngine`, :class:`.UserDefinedType` or :class:`.TypeDecorator` subclass
can include implementations of
{'pgp_sym_decrypt_1': 'this is my passphrase', 'username_1': 'some user'}
-.. seealso::
-
- :ref:`examples_postgis`
.. _types_operators:
All dialects require that an appropriate DBAPI driver is installed.
+.. _included_dialects:
+
Included Dialects
-----------------
common constructs are that of the :class:`_schema.Table` and that of the
:class:`_expression.Select` statement.
+ ORM-annotated
annotations
- Annotations are a concept used internally by SQLAlchemy in order to store
- additional information along with :class:`_expression.ClauseElement` objects. A Python
- dictionary is associated with a copy of the object, which contains key/value
- pairs significant to various internal systems, mostly within the ORM::
-
- some_column = Column('some_column', Integer)
- some_column_annotated = some_column._annotate({"entity": User})
-
- The annotation system differs from the public dictionary :attr:`_schema.Column.info`
- in that the above annotation operation creates a *copy* of the new :class:`_schema.Column`,
- rather than considering all annotation values to be part of a single
- unit. The ORM creates copies of expression objects in order to
- apply annotations that are specific to their context, such as to differentiate
- columns that should render themselves as relative to a joined-inheritance
- entity versus those which should render relative to their immediate parent
- table alone, as well as to differentiate columns within the "join condition"
- of a relationship where the column in some cases needs to be expressed
- in terms of one particular table alias or another, based on its position
- within the join expression.
+
+ The phrase "ORM-annotated" refers to an internal aspect of SQLAlchemy,
+ where a Core object such as a :class:`_schema.Column` object can carry along
+ additional runtime information that marks it as belonging to a particular
+ ORM mapping. The term should not be confused with the common phrase
+ "type annotation", which refers to Python source code "type hints" used
+ for static typing as introduced at :pep:`484`.
+
+ Most of SQLAlchemy's documented code examples are formatted with a
+ small note regarding "Annotated Example" or "Non-annotated Example".
+ This refers to whether or not the example is :pep:`484` annotated,
+ and is not related to the SQLAlchemy concept of "ORM-annotated".
+
+ When the phrase "ORM-annotated" appears in documentation, it is
+ referring to Core SQL expression objects such as :class:`.Table`,
+ :class:`.Column`, and :class:`.Select` objects, which originate from,
+ or refer to sub-elements that originate from, one or more ORM mappings,
+ and therefore will have ORM-specific interpretations and/or behaviors
+ when passed to ORM methods such as :meth:`_orm.Session.execute`.
+ For example, when we construct a :class:`.Select` object from an ORM
+ mapping, such as the ``User`` class illustrated in the
+ :ref:`ORM Tutorial <tutorial_declaring_mapped_classes>`::
+
+ >>> stmt = select(User)
+
+ The internal state of the above :class:`.Select` refers to the
+ :class:`.Table` to which ``User`` is mapped. The ``User`` class
+ itself is not immediately referenced. This is how the :class:`.Select`
+ construct remains compatible with Core-level processes (note that
+ the ``._raw_columns`` member of :class:`.Select` is private and
+ should not be accessed by end-user code)::
+
+ >>> stmt._raw_columns
+ [Table('user_account', MetaData(), Column('id', Integer(), ...)]
+
+ However, when our :class:`.Select` is passed along to an ORM
+ :class:`.Session`, the ORM entities that are indirectly associated
+ with the object are used to interpret this :class:`.Select` in an
+ ORM context. The actual "ORM annotations" can be seen in another
+ private variable ``._annotations``::
+
+ >>> stmt._raw_columns[0]._annotations
+ immutabledict({
+ 'entity_namespace': <Mapper at 0x7f4dd8098c10; User>,
+ 'parententity': <Mapper at 0x7f4dd8098c10; User>,
+ 'parentmapper': <Mapper at 0x7f4dd8098c10; User>
+ })
+
+ Therefore we refer to ``stmt`` as an **ORM-annotated select()** object.
+ It's a :class:`.Select` statement that contains additional information
+ that will cause it to be interpreted in an ORM-specific way when passed
+ to methods like :meth:`_orm.Session.execute`.
+
plugin
plugin-enabled
behavioral differences in comparison to the ``cursor.execute()``
method which is used for single-statement invocation. The "executemany"
method executes the given SQL statement multiple times, once for
- each set of parameters passed. As such, DBAPIs generally cannot
- return result sets when ``cursor.executemany()`` is used. An additional
- limitation of ``cursor.executemany()`` is that database drivers which
- support the ``cursor.lastrowid`` attribute, returning the most recently
- inserted integer primary key value, also don't support this attribute
- when using ``cursor.executemany()``.
-
- SQLAlchemy makes use of ``cursor.executemany()`` when the
- :meth:`_engine.Connection.execute` method is used, passing a list of
- parameter dictionaries, instead of just a single parameter dictionary.
- When using this form, the returned :class:`_result.Result` object will
- not return any rows, even if the given SQL statement uses a form such
- as RETURNING.
-
- Since "executemany" makes it generally impossible to receive results
- back that indicate the newly generated values of server-generated
- identifiers, the SQLAlchemy ORM can use "executemany" style
- statement invocations only in certain circumstances when INSERTing
- rows; while "executemany" is generally
- associated with faster performance for running many INSERT statements
- at once, the SQLAlchemy ORM can only make use of it in those
- circumstances where it does not need to fetch newly generated primary
- key values or server side default values. Newer versions of SQLAlchemy
- make use of an alternate form of INSERT which is to pass a single
- VALUES clause with many parameter sets at once, which does support
- RETURNING. This form is available
- in SQLAlchemy Core using the :meth:`.Insert.values` method.
+ each set of parameters passed. The general rationale for using
+ executemany is that of improved performance, wherein the DBAPI may
+ use techniques such as preparing the statement just once beforehand,
+ or otherwise optimizing for invoking the same statement many times.
+
+ SQLAlchemy typically makes use of the ``cursor.executemany()`` method
+ automatically when the :meth:`_engine.Connection.execute` method is
+    used with a list of parameter dictionaries; this indicates
+ to SQLAlchemy Core that the SQL statement and processed parameter sets
+ should be passed to ``cursor.executemany()``, where the statement will
+ be invoked by the driver for each parameter dictionary individually.
+
+ A key limitation of the ``cursor.executemany()`` method as used with
+ all known DBAPIs is that the ``cursor`` is not configured to return
+ rows when this method is used. For **most** backends (a notable
+    exception being the cx_Oracle / OracleDB DBAPIs), this means that
+ statements like ``INSERT..RETURNING`` typically cannot be used with
+ ``cursor.executemany()`` directly, since DBAPIs typically do not
+    aggregate the single row from each INSERT execution into a result set.
+
+ To overcome this limitation, SQLAlchemy as of the 2.0 series implements
+    an alternative form of "executemany" which is referred to as
+ :ref:`engine_insertmanyvalues`. This feature makes use of
+ ``cursor.execute()`` to invoke an INSERT statement that will proceed
+ with multiple parameter sets in one round trip, thus producing the same
+ effect as using ``cursor.executemany()`` while still supporting
+ RETURNING.
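The "executemany" pattern described above can be sketched directly at the DBAPI level (using the stdlib ``sqlite3`` driver for illustration, not SQLAlchemy itself): one statement, many parameter sets, with no result rows available afterwards.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE user_account (id INTEGER PRIMARY KEY, name TEXT)")

# the INSERT statement is invoked once per parameter tuple
cur.executemany(
    "INSERT INTO user_account (name) VALUES (?)",
    [("spongebob",), ("sandy",), ("patrick",)],
)

# no rows are fetchable from the executemany() call itself; the data
# must be re-SELECTed in order to be seen
cur.execute("SELECT count(*) FROM user_account")
row_count = cur.fetchone()[0]
print(row_count)
```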
.. seealso::
:ref:`tutorial_multiple_parameters` - tutorial introduction to
"executemany"
+ :ref:`engine_insertmanyvalues` - SQLAlchemy feature which allows
+ RETURNING to be used with "executemany"
+
marshalling
data marshalling
The process of transforming the memory representation of an object to
discriminator
A result-set column which is used during :term:`polymorphic` loading
to determine what kind of mapped class should be applied to a particular
- incoming result row. In SQLAlchemy, the classes are always part
- of a hierarchy mapping using inheritance mapping.
+ incoming result row.
.. seealso::
the complexity and time spent within object fetches can
sometimes be reduced, in that
attributes for related tables don't need to be addressed
- immediately. Lazy loading is the opposite of :term:`eager loading`.
+ immediately.
+
+ Lazy loading is the opposite of :term:`eager loading`.
+
+ Within SQLAlchemy, lazy loading is a key feature of the ORM, and
+ applies to attributes which are :term:`mapped` on a user-defined class.
+ When attributes that refer to database columns or related objects
+ are accessed, for which no loaded value is present, the ORM makes
+    use of the :class:`_orm.Session` with which the current object is
+    associated in the :term:`persistent` state, and emits a SELECT
+ statement on the current transaction, starting a new transaction if
+ one was not in progress. If the object is in the :term:`detached`
+ state and not associated with any :class:`_orm.Session`, this is
+ considered to be an error state and an
+ :ref:`informative exception <error_bhk3>` is raised.
.. seealso::
:term:`N plus one problem`
- :doc:`orm/loading_relationships`
+ :ref:`loading_columns` - includes information on lazy loading of
+ ORM mapped columns
+
+ :doc:`orm/queryguide/relationships` - includes information on lazy
+ loading of ORM related objects
+
+ :ref:`asyncio_orm_avoid_lazyloads` - tips on avoiding lazy loading
+ when using the :ref:`asyncio_toplevel` extension
eager load
eager loads
eager loaded
eager loading
+ eagerly load
+
+ In object relational mapping, an "eager load" refers to an attribute
+ that is populated with its database-side value at the same time as when
+ the object itself is loaded from the database. In SQLAlchemy, the term
+ "eager loading" usually refers to related collections and instances of
+ objects that are linked between mappings using the
+ :func:`_orm.relationship` construct, but can also refer to additional
+ column attributes being loaded, often from other tables related to a
+ particular table being queried, such as when using
+ :ref:`inheritance <inheritance_toplevel>` mappings.
- In object relational mapping, an "eager load" refers to
- an attribute that is populated with its database-side value
- at the same time as when the object itself is loaded from the database.
- In SQLAlchemy, "eager loading" usually refers to related collections
- of objects that are mapped using the :func:`_orm.relationship` construct.
Eager loading is the opposite of :term:`lazy loading`.
.. seealso::
- :doc:`orm/loading_relationships`
+ :doc:`orm/queryguide/relationships`
mapping
:ref:`tutorial_orm_loader_strategies`
- :doc:`orm/loading_relationships`
+ :doc:`orm/queryguide/relationships`
polymorphic
polymorphically
* **ORM Usage:**
:doc:`Session Usage and Guidelines <orm/session>` |
- :doc:`Querying Data, Loading Objects <orm/loading_objects>` |
+ :doc:`Querying Guide <orm/queryguide/index>` |
:doc:`AsyncIO Support <orm/extensions/asyncio>`
* **Configuration Extensions:**
"deferred" basis as defined
by the :paramref:`_orm.mapped_column.deferred` keyword. More documentation
on these particular concepts may be found at :ref:`relationship_patterns`,
-:ref:`mapper_column_property_sql_expressions`, and :ref:`deferred`.
+:ref:`mapper_column_property_sql_expressions`, and :ref:`orm_queryguide_column_deferral`.
Properties may be specified with a declarative mapping as above using
"hybrid table" style as well; the :class:`_schema.Column` objects that
* **deferred column loading** - The :paramref:`_orm.mapped_column.deferred`
boolean establishes the :class:`_schema.Column` using
- :ref:`deferred column loading <deferred>` by default. In the example
+ :ref:`deferred column loading <orm_queryguide_column_deferral>` by default. In the example
below, the ``User.bio`` column will not be loaded by default, but only
when accessed::
.. seealso::
- :ref:`deferred` - full description of deferred column loading
+ :ref:`orm_queryguide_column_deferral` - full description of deferred column loading
* **active history** - The :paramref:`_orm.mapped_column.active_history`
ensures that upon change of value for the attribute, the previous value
for invoking :func:`_orm.column_property` with the
:paramref:`_orm.column_property.deferred` parameter set to ``True``;
this construct establishes the :class:`_schema.Column` using
- :ref:`deferred column loading <deferred>` by default. In the example
+ :ref:`deferred column loading <orm_queryguide_column_deferral>` by default. In the example
below, the ``User.bio`` column will not be loaded by default, but only
when accessed::
.. seealso::
- :ref:`deferred` - full description of deferred column loading
+ :ref:`orm_queryguide_column_deferral` - full description of deferred column loading
* **active history** - The :paramref:`_orm.column_property.active_history`
ensures that upon change of value for the attribute, the previous value
.. automodule:: examples.generic_associations
-Large Collections
------------------
-
-.. automodule:: examples.large_collection
Materialized Paths
------------------
.. automodule:: examples.performance
-.. _examples_relationships:
-
-Relationship Join Conditions
-----------------------------
-
-.. automodule:: examples.join_conditions
.. _examples_spaceinvaders:
.. automodule:: examples.space_invaders
-.. _examples_xmlpersistence:
-
-XML Persistence
----------------
-
-.. automodule:: examples.elementtree
.. _examples_versioning:
.. automodule:: examples.dogpile_caching
-.. _examples_postgis:
-
-PostGIS Integration
--------------------
-
-.. automodule:: examples.postgis
-
* Appropriate loader options should be employed for :func:`_orm.deferred`
columns, if used at all, in addition to that of :func:`_orm.relationship`
- constructs as noted above. See :ref:`deferred` for background on
- deferred column loading.
+ constructs as noted above. See :ref:`orm_queryguide_column_deferral` for
+ background on deferred column loading.
.. _dynamic_asyncio:
quickstart
mapper_config
relationships
- loading_objects
+ queryguide/index
session
extending
extensions/index
.. seealso::
+ :ref:`loading_joined_inheritance` - in the :ref:`queryguide_toplevel`
+
:ref:`examples_inheritance` - complete examples of joined, single and
concrete inheritance
In joined table inheritance, each class along a hierarchy of classes
is represented by a distinct table. Querying for a particular subclass
in the hierarchy will render as a SQL JOIN along all tables in its
-inheritance path. If the queried class is the base class, the **default behavior
-is to include only the base table** in a SELECT statement. In all cases, the
-ultimate class to instantiate for a given row is determined by a discriminator
-column or an expression that works against the base table. When a subclass
-is loaded **only** against a base table, resulting objects will have base attributes
-populated at first; attributes that are local to the subclass will :term:`lazy load`
-when they are accessed. Alternatively, there are options which can change
-the default behavior, allowing the query to include columns corresponding to
-multiple tables/subclasses up front.
+inheritance path. If the queried class is the base class, the base table
+is queried instead, with options to include other tables at the same time
+or to allow attributes specific to sub-tables to load later.
+
+In all cases, the ultimate class to instantiate for a given row is determined
+by a :term:`discriminator` column or SQL expression, defined on the base class,
+which will yield a scalar value that is associated with a particular subclass.
+
The base class in a joined inheritance hierarchy is configured with
-additional arguments that will refer to the polymorphic discriminator
-column as well as the identifier for the base class::
+additional arguments that will indicate the polymorphic discriminator
+column, and optionally a polymorphic identifier for the base class itself::
from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase
"polymorphic_on": "type",
}
-Above, an additional column ``type`` is established to act as the
-**discriminator**, configured as such using the
-:paramref:`_orm.Mapper.polymorphic_on` parameter, which accepts a column-oriented
-expression specified either as a string name of the mapped attribute to use, or
-as a column expression object such as :class:`_schema.Column` or
-:func:`_orm.mapped_column` construct.
+ def __repr__(self):
+ return f"{self.__class__.__name__}({self.name!r})"
-This column will store a value which indicates the type of object
+In the above example, the discriminator is the ``type`` column, which is
+configured using the :paramref:`_orm.Mapper.polymorphic_on` parameter. This
+parameter accepts a column-oriented expression, specified either as a string
+name of the mapped attribute to use or as a column expression object such as
+:class:`_schema.Column` or :func:`_orm.mapped_column` construct.
+
+The discriminator column will store a value which indicates the type of object
represented within the row. The column may be of any datatype, though string
and integer are the most common. The actual data value to be applied to this
column for a particular row in the database is specified using the
:paramref:`_orm.Mapper.polymorphic_identity` parameter, described below.
While a polymorphic discriminator expression is not strictly necessary, it is
-required if polymorphic loading is desired. Establishing a simple column on
+required if polymorphic loading is desired. Establishing a column on
the base table is the easiest way to achieve this, however very sophisticated
-inheritance mappings may even configure a SQL expression such as a CASE
-statement as the polymorphic discriminator.
+inheritance mappings may make use of SQL expressions, such as a CASE
+expression, as the polymorphic discriminator.
.. note::
__mapper_args__ = {
"polymorphic_identity": "employee",
- "polymorphic_on": type,
+ "polymorphic_on": "type",
}
-.. _inheritance_loading_toplevel:
+:orphan:
-.. currentmodule:: sqlalchemy.orm
+This document has moved to :doc:`queryguide/inheritance`
-Loading Inheritance Hierarchies
-===============================
-
-When classes are mapped in inheritance hierarchies using the "joined",
-"single", or "concrete" table inheritance styles as described at
-:ref:`inheritance_toplevel`, the usual behavior is that a query for a
-particular base class will also yield objects corresponding to subclasses
-as well. When a single query is capable of returning a result with
-a different class or subclasses per result row, we use the term
-"polymorphic loading".
-
-Within the realm of polymorphic loading, specifically with joined and single
-table inheritance, there is an additional problem of which subclass attributes
-are to be queried up front, and which are to be loaded later. When an attribute
-of a particular subclass is queried up front, we can use it in our query as
-something to filter on, and it also will be loaded when we get our objects
-back. If it's not queried up front, it gets loaded later when we first need
-to access it. Basic control of this behavior is provided using the
-:func:`_orm.with_polymorphic` function, as well as two variants, the mapper
-configuration :paramref:`.mapper.with_polymorphic` in conjunction with
-the :paramref:`.mapper.polymorphic_load` option, and the :class:`_query.Query`
--level :meth:`_query.Query.with_polymorphic` method. The "with_polymorphic" family
-each provide a means of specifying which specific subclasses of a particular
-base class should be included within a query, which implies what columns and
-tables will be available in the SELECT.
-
-.. _with_polymorphic:
-
-Using with_polymorphic
-----------------------
-
-For the following sections, assume the ``Employee`` / ``Engineer`` / ``Manager``
-examples introduced in :ref:`inheritance_toplevel`.
-
-Normally, when a :class:`_query.Query` specifies the base class of an
-inheritance hierarchy, only the columns that are local to that base
-class are queried::
-
- session.query(Employee).all()
-
-Above, for both single and joined table inheritance, only the columns
-local to ``Employee`` will be present in the SELECT. We may get back
-instances of ``Engineer`` or ``Manager``, however they will not have the
-additional attributes loaded until we first access them, at which point a
-lazy load is emitted.
-
-Similarly, if we wanted to refer to columns mapped
-to ``Engineer`` or ``Manager`` in our query that's against ``Employee``,
-these columns aren't available directly in either the single or joined table
-inheritance case, since the ``Employee`` entity does not refer to these columns
-(note that for single-table inheritance, this is common if Declarative is used,
-but not for a classical mapping).
-
-To solve both of these issues, the :func:`_orm.with_polymorphic` function
-provides a special :class:`.AliasedClass` that represents a range of
-columns across subclasses. This object can be used in a :class:`_query.Query`
-like any other alias. When queried, it represents all the columns present in
-the classes given::
-
- from sqlalchemy.orm import with_polymorphic
-
- eng_plus_manager = with_polymorphic(Employee, [Engineer, Manager])
-
- query = session.query(eng_plus_manager)
-
-If the above mapping were using joined table inheritance, the SELECT
-statement for the above would be:
-
-.. sourcecode:: python+sql
-
- query.all()
- {opensql}
- SELECT
- employee.id AS employee_id,
- engineer.id AS engineer_id,
- manager.id AS manager_id,
- employee.name AS employee_name,
- employee.type AS employee_type,
- engineer.engineer_info AS engineer_engineer_info,
- manager.manager_data AS manager_manager_data
- FROM
- employee
- LEFT OUTER JOIN engineer ON employee.id = engineer.id
- LEFT OUTER JOIN manager ON employee.id = manager.id
- []
-
-Where above, the additional tables / columns for "engineer" and "manager" are
-included. Similar behavior occurs in the case of single table inheritance.
-
-:func:`_orm.with_polymorphic` accepts a single class or
-mapper, a list of classes/mappers, or the string ``'*'`` to indicate all
-subclasses:
-
-.. sourcecode:: python+sql
-
- # include columns for Engineer
- entity = with_polymorphic(Employee, Engineer)
-
- # include columns for Engineer, Manager
- entity = with_polymorphic(Employee, [Engineer, Manager])
-
- # include columns for all mapped subclasses
- entity = with_polymorphic(Employee, '*')
-
-.. tip::
-
- It's important to note that :func:`_orm.with_polymorphic` only affects the
- **columns that are included in fetched rows**, and not the **types of
- objects returned**. A call to ``with_polymorphic(Employee, [Manager])``
- will refer to rows that contain all types of ``Employee`` objects,
- including not only ``Manager`` objects, but also ``Engineer`` objects as
- these are subclasses of ``Employee``, as well as ``Employee`` instances if
-    these are present in the database. The only effect of using
-    ``with_polymorphic(Employee, [Manager])`` is that additional columns
-    specific to ``Manager`` will be eagerly loaded in result rows and, as
-    described below in :ref:`with_polymorphic_subclass_attributes`, will
-    also be available for use within the WHERE clause of the SELECT
-    statement.
-
-Using aliasing with with_polymorphic
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The :func:`_orm.with_polymorphic` function also provides "aliasing" of the
-polymorphic selectable itself, meaning, two different :func:`_orm.with_polymorphic`
-entities, referring to the same class hierarchy, can be used together. This
-is available using the :paramref:`.orm.with_polymorphic.aliased` flag.
-For a polymorphic selectable that is across multiple tables, the default behavior
-is to wrap the selectable into a subquery. Below we emit a query that will
-select for "employee or manager" paired with "employee or engineer" on employees
-with the same name:
-
-.. sourcecode:: python+sql
-
- engineer_employee = with_polymorphic(
- Employee, [Engineer], aliased=True)
- manager_employee = with_polymorphic(
- Employee, [Manager], aliased=True)
-
-    q = session.query(engineer_employee, manager_employee).\
- join(
- manager_employee,
- and_(
- engineer_employee.id > manager_employee.id,
- engineer_employee.name == manager_employee.name
- )
- )
- q.all()
- {opensql}
- SELECT
- anon_1.employee_id AS anon_1_employee_id,
- anon_1.employee_name AS anon_1_employee_name,
- anon_1.employee_type AS anon_1_employee_type,
- anon_1.engineer_id AS anon_1_engineer_id,
- anon_1.engineer_engineer_name AS anon_1_engineer_engineer_name,
- anon_2.employee_id AS anon_2_employee_id,
- anon_2.employee_name AS anon_2_employee_name,
- anon_2.employee_type AS anon_2_employee_type,
- anon_2.manager_id AS anon_2_manager_id,
- anon_2.manager_manager_name AS anon_2_manager_manager_name
- FROM (
- SELECT
- employee.id AS employee_id,
- employee.name AS employee_name,
- employee.type AS employee_type,
- engineer.id AS engineer_id,
- engineer.engineer_name AS engineer_engineer_name
- FROM employee
- LEFT OUTER JOIN engineer ON employee.id = engineer.id
- ) AS anon_1
- JOIN (
- SELECT
- employee.id AS employee_id,
- employee.name AS employee_name,
- employee.type AS employee_type,
- manager.id AS manager_id,
- manager.manager_name AS manager_manager_name
- FROM employee
- LEFT OUTER JOIN manager ON employee.id = manager.id
- ) AS anon_2
- ON anon_1.employee_id > anon_2.employee_id
- AND anon_1.employee_name = anon_2.employee_name
-
-The creation of subqueries above is very verbose. While it creates the best
-encapsulation of the two distinct queries, it may be inefficient.
-:func:`_orm.with_polymorphic` includes an additional flag to help with this
-situation, :paramref:`.orm.with_polymorphic.flat`, which will "flatten" the
-subquery / join combination into straight joins, applying aliasing to the
-individual tables instead. Setting :paramref:`.orm.with_polymorphic.flat`
-implies :paramref:`.orm.with_polymorphic.aliased`, so only one flag
-is necessary:
-
-.. sourcecode:: python+sql
-
- engineer_employee = with_polymorphic(
- Employee, [Engineer], flat=True)
- manager_employee = with_polymorphic(
- Employee, [Manager], flat=True)
-
-    q = session.query(engineer_employee, manager_employee).\
- join(
- manager_employee,
- and_(
- engineer_employee.id > manager_employee.id,
- engineer_employee.name == manager_employee.name
- )
- )
- q.all()
- {opensql}
- SELECT
- employee_1.id AS employee_1_id,
- employee_1.name AS employee_1_name,
- employee_1.type AS employee_1_type,
- engineer_1.id AS engineer_1_id,
- engineer_1.engineer_name AS engineer_1_engineer_name,
- employee_2.id AS employee_2_id,
- employee_2.name AS employee_2_name,
- employee_2.type AS employee_2_type,
- manager_1.id AS manager_1_id,
- manager_1.manager_name AS manager_1_manager_name
- FROM employee AS employee_1
- LEFT OUTER JOIN engineer AS engineer_1
- ON employee_1.id = engineer_1.id
- JOIN (
- employee AS employee_2
- LEFT OUTER JOIN manager AS manager_1
- ON employee_2.id = manager_1.id
- )
- ON employee_1.id > employee_2.id
- AND employee_1.name = employee_2.name
-
-Note above that when :paramref:`.orm.with_polymorphic.flat` is used in
-conjunction with joined table inheritance, we often get a right-nested JOIN
-in our statement. Some older databases, in particular older versions of
-SQLite, may have a problem with this syntax, although virtually all modern
-database versions now support it.
-
-.. note::
-
-    The :paramref:`.orm.with_polymorphic.flat` flag only applies to the use
-    of :func:`_orm.with_polymorphic` with **joined table inheritance** and when
-    the :paramref:`.with_polymorphic.selectable` argument is **not** used.
-
-.. _with_polymorphic_subclass_attributes:
-
-Referring to Specific Subclass Attributes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The entity returned by :func:`_orm.with_polymorphic` is an :class:`.AliasedClass`
-object, which can be used in a :class:`_query.Query` like any other alias, including
-named attributes for those attributes on the ``Employee`` class. In our
-previous example, ``eng_plus_manager`` becomes the entity that we use to refer to the
-three-way outer join above. It also includes namespaces for each class named
-in the list of classes, so that attributes specific to those subclasses can be
-called upon as well. The following example illustrates calling upon attributes
-specific to ``Engineer`` as well as ``Manager`` in terms of ``eng_plus_manager``::
-
- eng_plus_manager = with_polymorphic(Employee, [Engineer, Manager])
- query = session.query(eng_plus_manager).filter(
- or_(
- eng_plus_manager.Engineer.engineer_info=='x',
- eng_plus_manager.Manager.manager_data=='y'
- )
- )
-
-A query as above would generate SQL resembling the following:
-
-.. sourcecode:: python+sql
-
- query.all()
- {opensql}
- SELECT
- employee.id AS employee_id,
- engineer.id AS engineer_id,
- manager.id AS manager_id,
- employee.name AS employee_name,
- employee.type AS employee_type,
- engineer.engineer_info AS engineer_engineer_info,
- manager.manager_data AS manager_manager_data
- FROM
- employee
- LEFT OUTER JOIN engineer ON employee.id = engineer.id
- LEFT OUTER JOIN manager ON employee.id = manager.id
- WHERE
- engineer.engineer_info=? OR
- manager.manager_data=?
- ['x', 'y']
-
-
-
-.. _with_polymorphic_mapper_config:
-
-Setting with_polymorphic at mapper configuration time
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The :func:`_orm.with_polymorphic` function serves the purpose of allowing
-"eager" loading of attributes from subclass tables, as well as the ability
-to refer to the attributes from subclass tables at query time. Historically,
-the "eager loading" of columns has been the more important part of the
-equation. So just as eager loading for relationships can be specified
-as a configurational option, the :paramref:`.mapper.with_polymorphic`
-configuration parameter allows an entity to use a polymorphic load by
-default. We can add the parameter to our ``Employee`` mapping
-first introduced at :ref:`joined_inheritance`::
-
- class Employee(Base):
- __tablename__ = 'employee'
- id = mapped_column(Integer, primary_key=True)
- name = mapped_column(String(50))
- type = mapped_column(String(50))
-
- __mapper_args__ = {
- 'polymorphic_identity':'employee',
- 'polymorphic_on':type,
- 'with_polymorphic': '*'
- }
-
-Above is a common setting for :paramref:`.mapper.with_polymorphic`,
-which is to indicate an asterisk to load all subclass columns. In the
-case of joined table inheritance, this option
-should be used sparingly, as it implies that the mapping will always emit
-a (often large) series of LEFT OUTER JOIN to many tables, which is not
-efficient from a SQL perspective. For single table inheritance, specifying the
-asterisk is often a good idea, as the load remains against a single table
-while an additional lazy load of subclass-mapped columns is prevented.
-
-Using :func:`_orm.with_polymorphic` or :meth:`_query.Query.with_polymorphic`
-will override the mapper-level :paramref:`.mapper.with_polymorphic` setting.
-
-The :paramref:`.mapper.with_polymorphic` option also accepts a list of
-classes just like :func:`_orm.with_polymorphic` to polymorphically load among
-a subset of classes. However, when using Declarative, providing classes
-to this list is not directly possible as the subclasses we'd like to add
-are not available yet. Instead, we can specify on each subclass
-that they should individually participate in polymorphic loading by
-default using the :paramref:`.mapper.polymorphic_load` parameter::
-
- class Engineer(Employee):
- __tablename__ = 'engineer'
- id = mapped_column(Integer, ForeignKey('employee.id'), primary_key=True)
- engineer_info = mapped_column(String(50))
- __mapper_args__ = {
- 'polymorphic_identity':'engineer',
- 'polymorphic_load': 'inline'
- }
-
- class Manager(Employee):
- __tablename__ = 'manager'
- id = mapped_column(Integer, ForeignKey('employee.id'), primary_key=True)
- manager_data = mapped_column(String(50))
- __mapper_args__ = {
- 'polymorphic_identity':'manager',
- 'polymorphic_load': 'inline'
- }
-
-Setting the :paramref:`.mapper.polymorphic_load` parameter to the value
-``"inline"`` means that the ``Engineer`` and ``Manager`` classes above
-are part of the "polymorphic load" of the base ``Employee`` class by default,
-exactly as though they had been appended to the
-:paramref:`.mapper.with_polymorphic` list of classes.
-
-Setting with_polymorphic against a query
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The :func:`_orm.with_polymorphic` function evolved from a query-level
-method :meth:`_query.Query.with_polymorphic`. This method has the same purpose
-as :func:`_orm.with_polymorphic`, except it is not as
-flexible in its usage patterns in that it only applies to the first entity
-of the :class:`_query.Query`. It then takes effect for all occurrences of
-that entity, so that the entity (and its subclasses) can be referred to
-directly, rather than using an alias object. For simple cases it might be
-considered to be more succinct::
-
- session.query(Employee).\
- with_polymorphic([Engineer, Manager]).\
- filter(
- or_(
- Engineer.engineer_info=='w',
- Manager.manager_data=='q'
- )
- )
-
-The :meth:`_query.Query.with_polymorphic` method has a more complicated job
-than the :func:`_orm.with_polymorphic` function, as it needs to correctly
-transform entities like ``Engineer`` and ``Manager`` appropriately, but
-not interfere with other entities. If its flexibility is lacking, switch
-to using :func:`_orm.with_polymorphic`.
-
-.. _polymorphic_selectin:
-
-Polymorphic Selectin Loading
-----------------------------
-
-An alternative to using the :func:`_orm.with_polymorphic` family of
-functions to "eagerly" load the additional subclasses on an inheritance
-mapping, primarily when using joined table inheritance, is to use polymorphic
-"selectin" loading. This is an eager loading
-feature which works similarly to the :ref:`selectin_eager_loading` feature
-of relationship loading. Given our example mapping, we can instruct
-a load of ``Employee`` to emit an extra SELECT per subclass by using
-the :func:`_orm.selectin_polymorphic` loader option::
-
- from sqlalchemy.orm import selectin_polymorphic
-
- query = session.query(Employee).options(
- selectin_polymorphic(Employee, [Manager, Engineer])
- )
-
-When the above query is run, two additional SELECT statements will
-be emitted:
-
-.. sourcecode:: python+sql
-
- {opensql}query.all()
- SELECT
- employee.id AS employee_id,
- employee.name AS employee_name,
- employee.type AS employee_type
- FROM employee
- ()
-
- SELECT
- engineer.id AS engineer_id,
- employee.id AS employee_id,
- employee.type AS employee_type,
- engineer.engineer_name AS engineer_engineer_name
- FROM employee JOIN engineer ON employee.id = engineer.id
- WHERE employee.id IN (?, ?) ORDER BY employee.id
- (1, 2)
-
- SELECT
- manager.id AS manager_id,
- employee.id AS employee_id,
- employee.type AS employee_type,
- manager.manager_name AS manager_manager_name
- FROM employee JOIN manager ON employee.id = manager.id
- WHERE employee.id IN (?) ORDER BY employee.id
- (3,)
-
-We can similarly establish the above style of loading to take place
-by default by specifying the :paramref:`.mapper.polymorphic_load` parameter,
-using the value ``"selectin"`` on a per-subclass basis::
-
- class Employee(Base):
- __tablename__ = 'employee'
- id = mapped_column(Integer, primary_key=True)
- name = mapped_column(String(50))
- type = mapped_column(String(50))
-
- __mapper_args__ = {
- 'polymorphic_identity': 'employee',
- 'polymorphic_on': type
- }
-
- class Engineer(Employee):
- __tablename__ = 'engineer'
- id = mapped_column(Integer, ForeignKey('employee.id'), primary_key=True)
- engineer_name = mapped_column(String(30))
-
- __mapper_args__ = {
- 'polymorphic_load': 'selectin',
- 'polymorphic_identity': 'engineer',
- }
-
- class Manager(Employee):
- __tablename__ = 'manager'
- id = mapped_column(Integer, ForeignKey('employee.id'), primary_key=True)
- manager_name = mapped_column(String(30))
-
- __mapper_args__ = {
- 'polymorphic_load': 'selectin',
- 'polymorphic_identity': 'manager',
- }
-
-
-Unlike when using :func:`_orm.with_polymorphic`, when using the
-:func:`_orm.selectin_polymorphic` style of loading, we do **not** have the
-ability to refer to the ``Engineer`` or ``Manager`` entities within our main
-query as filter, order by, or other criteria, as these entities are not present
-in the initial query that is used to locate results. However, we can apply
-loader options that apply towards ``Engineer`` or ``Manager``, which will take
-effect when the secondary SELECT is emitted. Below we assume ``Manager`` has
-an additional relationship ``Manager.paperwork``, that we'd like to eagerly
-load as well. We can use any type of eager loading, such as joined eager
-loading via the :func:`_orm.joinedload` function::
-
- from sqlalchemy.orm import joinedload
- from sqlalchemy.orm import selectin_polymorphic
-
- query = session.query(Employee).options(
- selectin_polymorphic(Employee, [Manager, Engineer]),
- joinedload(Manager.paperwork)
- )
-
-Using the query above, we get three SELECT statements emitted, however
-the one against ``Manager`` will be:
-
-.. sourcecode:: sql
-
- SELECT
- manager.id AS manager_id,
- employee.id AS employee_id,
- employee.type AS employee_type,
- manager.manager_name AS manager_manager_name,
- paperwork_1.id AS paperwork_1_id,
- paperwork_1.manager_id AS paperwork_1_manager_id,
- paperwork_1.data AS paperwork_1_data
- FROM employee JOIN manager ON employee.id = manager.id
- LEFT OUTER JOIN paperwork AS paperwork_1
- ON manager.id = paperwork_1.manager_id
- WHERE employee.id IN (?) ORDER BY employee.id
- (3,)
-
-Note that selectin polymorphic loading has similar caveats as that of
-selectin relationship loading; for entities that make use of a composite
-primary key, the database in use must support tuples with "IN", currently
-known to work with MySQL and PostgreSQL.
-
-.. versionadded:: 1.2
-
-.. warning:: The selectin polymorphic loading feature should be considered
-   **experimental** within early releases of the 1.2 series.
-
-.. _polymorphic_selectin_and_withpoly:
-
-Combining selectin and with_polymorphic
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. note:: This behavior works as of version 1.2.0b3.
-
-With careful planning, selectin loading can be applied against a hierarchy
-that itself uses "with_polymorphic". A particular use case is that of
-using selectin loading to load a joined-inheritance subtable, which then
-uses "with_polymorphic" to refer to further sub-classes, which may be
-joined- or single-table inheritance. If we added a class ``VicePresident`` that
-extends ``Manager`` using single-table inheritance, we could ensure that
-a load of ``Manager`` also fully loads ``VicePresident`` subtypes at the same time::
-
- # use "Employee" example from the enclosing section
-
- class Manager(Employee):
- __tablename__ = 'manager'
- id = mapped_column(Integer, ForeignKey('employee.id'), primary_key=True)
- manager_name = mapped_column(String(30))
-
- __mapper_args__ = {
- 'polymorphic_load': 'selectin',
- 'polymorphic_identity': 'manager',
- }
-
- class VicePresident(Manager):
- vp_info = mapped_column(String(30))
-
- __mapper_args__ = {
- "polymorphic_load": "inline",
- "polymorphic_identity": "vp"
- }
-
-
-Above, we add a ``vp_info`` column to the ``manager`` table, local to the
-``VicePresident`` subclass. This subclass is linked to the polymorphic
-identity ``"vp"`` which refers to rows which have this data. By setting the
-load style to "inline", it means that a load of ``Manager`` objects will also
-ensure that the ``vp_info`` column is queried for in the same SELECT statement.
-A query against ``Employee`` that encounters a ``Manager`` row would emit
-similarly to the following:
-
-.. sourcecode:: sql
-
- SELECT employee.id AS employee_id, employee.name AS employee_name,
- employee.type AS employee_type
- FROM employee
-    ()
-
- SELECT manager.id AS manager_id, employee.id AS employee_id,
- employee.type AS employee_type,
- manager.manager_name AS manager_manager_name,
- manager.vp_info AS manager_vp_info
- FROM employee JOIN manager ON employee.id = manager.id
- WHERE employee.id IN (?) ORDER BY employee.id
- (1,)
-
-Combining "selectin" polymorphic loading with query-time
-:func:`_orm.with_polymorphic` usage is also possible (though this is very
-outer-space stuff!); assuming the above mappings had no ``polymorphic_load``
-set up, we could get the same result as follows::
-
- from sqlalchemy.orm import with_polymorphic, selectin_polymorphic
-
- manager_poly = with_polymorphic(Manager, [VicePresident])
-
-    session.query(Employee).options(
- selectin_polymorphic(Employee, [manager_poly])).all()
-
-.. _inheritance_of_type:
-
-Referring to specific subtypes on relationships
------------------------------------------------
-
-Mapped attributes which correspond to a :func:`_orm.relationship` are used
-in querying in order to refer to the linkage between two mappings. Common
-uses for this are to refer to a :func:`_orm.relationship` in :meth:`_query.Query.join`
-as well as in loader options like :func:`_orm.joinedload`. When using
-:func:`_orm.relationship` where the target class is an inheritance hierarchy,
-the API allows that the join, eager load, or other linkage should target a specific
-subclass, alias, or :func:`_orm.with_polymorphic` alias, of that class hierarchy,
-rather than the class directly targeted by the :func:`_orm.relationship`.
-
-The :func:`~sqlalchemy.orm.interfaces.PropComparator.of_type` method allows the
-construction of joins along :func:`~sqlalchemy.orm.relationship` paths while
-narrowing the criterion to specific derived aliases or subclasses. Suppose the
-``employees`` table represents a collection of employees which are associated
-with a ``Company`` object. We'll add a ``company_id`` column to the
-``employees`` table and a new table ``companies``:
-
-.. sourcecode:: python
-
- class Company(Base):
- __tablename__ = 'company'
- id = mapped_column(Integer, primary_key=True)
- name = mapped_column(String(50))
- employees = relationship("Employee",
- backref='company')
-
- class Employee(Base):
- __tablename__ = 'employee'
- id = mapped_column(Integer, primary_key=True)
- type = mapped_column(String(20))
- company_id = mapped_column(Integer, ForeignKey('company.id'))
- __mapper_args__ = {
- 'polymorphic_on':type,
- 'polymorphic_identity':'employee',
- }
-
- class Engineer(Employee):
- __tablename__ = 'engineer'
- id = mapped_column(Integer, ForeignKey('employee.id'), primary_key=True)
- engineer_info = mapped_column(String(50))
- __mapper_args__ = {'polymorphic_identity':'engineer'}
-
- class Manager(Employee):
- __tablename__ = 'manager'
- id = mapped_column(Integer, ForeignKey('employee.id'), primary_key=True)
- manager_data = mapped_column(String(50))
- __mapper_args__ = {'polymorphic_identity':'manager'}
-
-When querying from ``Company`` onto the ``Employee`` relationship, the
-:meth:`_query.Query.join` method as well as operators like :meth:`.PropComparator.any`
-and :meth:`.PropComparator.has` will create
-a join from ``company`` to ``employee``, without including ``engineer`` or
-``manager`` in the mix. If we wish to have criterion which is specifically
-against the ``Engineer`` class, we can tell those methods to join or subquery
-against the set of columns representing the subclass using the
-:meth:`~.orm.interfaces.PropComparator.of_type` operator::
-
- session.query(Company).\
- join(Company.employees.of_type(Engineer)).\
- filter(Engineer.engineer_info=='someinfo')
-
-Similarly, to join from ``Company`` to the polymorphic entity that includes both
-``Engineer`` and ``Manager`` columns::
-
- manager_and_engineer = with_polymorphic(
- Employee, [Manager, Engineer])
-
- session.query(Company).\
- join(Company.employees.of_type(manager_and_engineer)).\
- filter(
- or_(
- manager_and_engineer.Engineer.engineer_info == 'someinfo',
- manager_and_engineer.Manager.manager_data == 'somedata'
- )
- )
-
-The :meth:`.PropComparator.any` and :meth:`.PropComparator.has` operators also
-can be used with :func:`~sqlalchemy.orm.interfaces.PropComparator.of_type`,
-such as when the embedded criterion is in terms of a subclass::
-
- session.query(Company).\
- filter(
- Company.employees.of_type(Engineer).
- any(Engineer.engineer_info=='someinfo')
- ).all()
-
-.. _eagerloading_polymorphic_subtypes:
-
-Eager Loading of Specific or Polymorphic Subtypes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The :func:`_orm.joinedload`, :func:`.subqueryload`, :func:`.contains_eager` and
-other eagerloader options support
-paths which make use of :func:`~.PropComparator.of_type`.
-Below, we load ``Company`` rows while eagerly loading related ``Engineer``
-objects, querying the ``employee`` and ``engineer`` tables simultaneously::
-
-    session.query(Company).\
-        options(
-            subqueryload(Company.employees.of_type(Engineer)).
-            subqueryload(Engineer.machines)
-        )
-
-As is the case with :meth:`_query.Query.join`, :meth:`~.PropComparator.of_type`
-can be used to combine eager loading and :func:`_orm.with_polymorphic`,
-so that all sub-attributes of all referenced subtypes
-can be loaded::
-
- manager_and_engineer = with_polymorphic(
- Employee, [Manager, Engineer],
- flat=True)
-
- session.query(Company).\
- options(
- joinedload(
- Company.employees.of_type(manager_and_engineer)
- )
- )
-
-.. note::
-
- When using :func:`.with_polymorphic` in conjunction with
- :func:`_orm.joinedload`, the :func:`.with_polymorphic` object must be against
- an "aliased" object, that is an instance of :class:`_expression.Alias`, so that the
- polymorphic selectable is aliased (an informative error message is raised
- otherwise).
-
-    The typical way to do this is to include the
-    :paramref:`.with_polymorphic.aliased` or
-    :paramref:`.with_polymorphic.flat` flag, which will
- apply this aliasing automatically. However, if the
- :paramref:`.with_polymorphic.selectable` argument is being used to pass an
- object that is already an :class:`_expression.Alias` object then this flag should
- **not** be set. The "flat" option implies the "aliased" option and is an
- alternate form of aliasing against join objects that produces fewer
- subqueries.
-
-Once :meth:`~.PropComparator.of_type` is the target of the eager load,
-that's the entity we would use for subsequent chaining, not the original class
-or derived class. If we wanted to further eager load a collection on the
-eager-loaded ``Engineer`` class, we access this class from the namespace of the
-:func:`_orm.with_polymorphic` object::
-
-    session.query(Company).\
-        options(
-            joinedload(Company.employees.of_type(manager_and_engineer)).
-            subqueryload(manager_and_engineer.Engineer.computers)
-        )
-
-.. _loading_joined_inheritance:
-
-Loading objects with joined table inheritance
----------------------------------------------
-
-When using joined table inheritance, if we query for a specific subclass
-that represents a JOIN of two tables such as our ``Engineer`` example
-from the inheritance section, the SQL emitted is a join::
-
- session.query(Engineer).all()
-
-The above query will emit SQL like:
-
-.. sourcecode:: python+sql
-
- {opensql}
- SELECT employee.id AS employee_id,
- employee.name AS employee_name, employee.type AS employee_type,
-    engineer.engineer_info AS engineer_engineer_info
- FROM employee JOIN engineer
- ON employee.id = engineer.id
-
-We will then get a collection of ``Engineer`` objects back, which will
-contain all columns from ``employee`` and ``engineer`` loaded.
-
-However, when emitting a :class:`_query.Query` against a base class, the behavior
-is to load only from the base table::
-
- session.query(Employee).all()
-
-Above, the default behavior would be to SELECT only from the ``employee``
-table and not from any "sub" tables (``engineer`` and ``manager``, in our
-previous examples):
-
-.. sourcecode:: python+sql
-
- {opensql}
- SELECT employee.id AS employee_id,
- employee.name AS employee_name, employee.type AS employee_type
- FROM employee
- []
-
-After a collection of ``Employee`` objects has been returned from the
-query, and as attributes are requested from those ``Employee`` objects which are
-represented in either the ``engineer`` or ``manager`` child tables, a second
-load is issued for the columns in that related row, if the data was not
-already loaded. So above, after accessing the objects you'd see further SQL
-issued along the lines of:
-
-.. sourcecode:: python+sql
-
- {opensql}
- SELECT manager.id AS manager_id,
- manager.manager_data AS manager_manager_data
- FROM manager
- WHERE ? = manager.id
- [5]
- SELECT engineer.id AS engineer_id,
- engineer.engineer_info AS engineer_engineer_info
- FROM engineer
- WHERE ? = engineer.id
- [2]
-
-The :func:`_orm.with_polymorphic`
-function and related configuration options allow us to instead emit a JOIN up
-front which will conditionally load against ``employee``, ``engineer``, or
-``manager``, very much like joined eager loading works for relationships,
-removing the necessity for a second per-entity load::
-
- from sqlalchemy.orm import with_polymorphic
-
- eng_plus_manager = with_polymorphic(Employee, [Engineer, Manager])
-
- query = session.query(eng_plus_manager)
-
-The above produces a query which joins the ``employee`` table to both the
-``engineer`` and ``manager`` tables like the following:
-
-.. sourcecode:: python+sql
-
- query.all()
- {opensql}
- SELECT employee.id AS employee_id,
- engineer.id AS engineer_id,
- manager.id AS manager_id,
- employee.name AS employee_name,
- employee.type AS employee_type,
- engineer.engineer_info AS engineer_engineer_info,
- manager.manager_data AS manager_manager_data
- FROM employee
- LEFT OUTER JOIN engineer
- ON employee.id = engineer.id
- LEFT OUTER JOIN manager
- ON employee.id = manager.id
- []
-
-The section :ref:`with_polymorphic` discusses the :func:`_orm.with_polymorphic`
-function and its configurational variants.
-
-.. seealso::
-
- :ref:`with_polymorphic`
-
-.. _loading_single_inheritance:
-
-Loading objects with single table inheritance
----------------------------------------------
-
-In modern Declarative, single inheritance mappings produce :class:`_schema.Column`
-objects that are mapped only to a subclass, and not available from the
-superclass, even though they are present on the same table.
-In our example from :ref:`single_inheritance`, the ``Manager`` mapping for example had a
-:class:`_schema.Column` specified::
-
- class Manager(Employee):
- manager_data = mapped_column(String(50))
-
- __mapper_args__ = {
- 'polymorphic_identity':'manager'
- }
-
-Above, there would be no ``Employee.manager_data``
-attribute, even though the ``employee`` table has a ``manager_data`` column.
-A query against ``Manager`` will include this column in the query, as well
-as an IN clause to limit rows only to ``Manager`` objects:
-
-.. sourcecode:: python+sql
-
- session.query(Manager).all()
- {opensql}
- SELECT
- employee.id AS employee_id,
- employee.name AS employee_name,
- employee.type AS employee_type,
- employee.manager_data AS employee_manager_data
- FROM employee
- WHERE employee.type IN (?)
-
- ('manager',)
-
-However, in a similar way to that of joined table inheritance, a query
-against ``Employee`` will only query for columns mapped to ``Employee``:
-
-.. sourcecode:: python+sql
-
- session.query(Employee).all()
- {opensql}
- SELECT employee.id AS employee_id,
- employee.name AS employee_name,
- employee.type AS employee_type
- FROM employee
-
-If we get back an instance of ``Manager`` from our result, accessing
-additional columns only mapped to ``Manager`` emits a lazy load
-for those columns, in a similar way to joined inheritance::
-
- SELECT employee.manager_data AS employee_manager_data
- FROM employee
- WHERE employee.id = ? AND employee.type IN (?)
-
-The :func:`_orm.with_polymorphic` function serves a similar role as joined
-inheritance in the case of single inheritance; it allows both for eager loading
-of subclass attributes as well as specification of subclasses in a query,
-just without the overhead of using OUTER JOIN::
-
- employee_poly = with_polymorphic(Employee, '*')
-
- q = session.query(employee_poly).filter(
- or_(
- employee_poly.name == 'a',
- employee_poly.Manager.manager_data == 'b'
- )
- )
-
-Above, our query remains against a single table however we can refer to the
-columns present in ``Manager`` or ``Engineer`` using the "polymorphic" namespace.
-Since we specified ``"*"`` for the entities, both ``Engineer`` and
-``Manager`` will be loaded at once. SQL emitted would be:
-
-.. sourcecode:: python+sql
-
- q.all()
- {opensql}
- SELECT
- employee.id AS employee_id, employee.name AS employee_name,
- employee.type AS employee_type,
- employee.manager_data AS employee_manager_data,
- employee.engineer_info AS employee_engineer_info
- FROM employee
- WHERE employee.name = :name_1
- OR employee.manager_data = :manager_data_1
-
-
-Inheritance Loading API
------------------------
-
-.. autofunction:: sqlalchemy.orm.with_polymorphic
-
-.. autofunction:: sqlalchemy.orm.selectin_polymorphic
-.. _loading_columns:
+:orphan:
-.. currentmodule:: sqlalchemy.orm
-
-===============
-Loading Columns
-===============
-
-This section presents additional options regarding the loading of columns.
-
-.. _deferred:
-
-Deferred Column Loading
-=======================
-
-Deferred column loading allows particular columns of a table to be loaded only
-upon direct access, instead of when the entity is queried using
-:class:`_sql.Select` or :class:`_orm.Query`. This feature is useful when one wants to avoid
-loading a large text or binary field into memory when it's not needed.
-
-Configuring Deferred Loading at Mapper Configuration Time
----------------------------------------------------------
-
-First introduced at :ref:`orm_declarative_column_options` and
-:ref:`orm_imperative_table_column_options`, the
-:paramref:`_orm.mapped_column.deferred` parameter of :func:`_orm.mapped_column`,
-as well as the :func:`_orm.deferred` ORM function may be used to indicate mapped
-columns as "deferred" at mapper configuration time. With this configuration,
-the target columns will not be loaded in SELECT statements by default, and
-will instead only be loaded "lazily" when their corresponding attribute is
-accessed on a mapped instance. Deferral can be configured for individual
-columns or groups of columns that will load together when any of them
-are accessed.
-
-In the example below, using :ref:`Declarative Table <orm_declarative_table>`
-configuration, we define a mapping that will load each of
-``.excerpt`` and ``.photo`` in separate, individual-row SELECT statements when each
-attribute is first referenced on the individual object instance::
-
- from sqlalchemy import Text
- from sqlalchemy.orm import DeclarativeBase
- from sqlalchemy.orm import Mapped
- from sqlalchemy.orm import mapped_column
-
- class Base(DeclarativeBase):
- pass
-
- class Book(Base):
- __tablename__ = 'book'
-
- book_id: Mapped[int] = mapped_column(primary_key=True)
- title: Mapped[str]
- summary: Mapped[str]
- excerpt: Mapped[str] = mapped_column(Text, deferred=True)
- photo: Mapped[bytes] = mapped_column(deferred=True)
-
-A :func:`_sql.select` construct for the above mapping will not include
-``excerpt`` and ``photo`` by default::
-
- >>> from sqlalchemy import select
- >>> print(select(Book))
- SELECT book.book_id, book.title, book.summary
- FROM book
-
-When an object of type ``Book`` is loaded by the ORM, accessing the
-``.excerpt`` or ``.photo`` attributes will instead :term:`lazy load` the
-data from each column using a new SQL statement.
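
The "load on first access, then cache" mechanism described above can be
sketched in plain Python, independent of SQLAlchemy; the ``DeferredAttr``
descriptor and ``fake_select`` callable below are hypothetical stand-ins for
the ORM's deferred-column machinery, not real APIs:

```python
# A toy sketch of deferred loading (not SQLAlchemy code): the value is
# fetched only on first attribute access, then cached on the instance.
statements = []


def fake_select(obj):
    # stands in for the lazy SELECT the ORM would emit
    statements.append("SELECT book.excerpt FROM book WHERE ...")
    return "lorem ipsum ..."


class DeferredAttr:
    def __init__(self, loader):
        self.loader = loader

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        # load lazily, then cache in the instance dict; as a non-data
        # descriptor, this object is bypassed on subsequent access
        obj.__dict__[self.name] = value = self.loader(obj)
        return value


class Book:
    excerpt = DeferredAttr(fake_select)


book = Book()
assert statements == []  # nothing loaded yet
book.excerpt             # first access triggers the "SELECT"
book.excerpt             # cached; no further statement
assert len(statements) == 1
```

The caching step is what distinguishes deferral from a plain property: the
cost of the extra statement is paid at most once per instance.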
-
-When using :ref:`Imperative Table <orm_imperative_table_configuration>`
-or fully :ref:`Imperative <orm_imperative_mapping>` configuration, the
-:func:`_orm.deferred` construct should be used instead, passing the
-target :class:`_schema.Column` object to be mapped as the argument::
-
- from sqlalchemy import Column, Integer, LargeBinary, String, Table, Text
- from sqlalchemy.orm import DeclarativeBase
- from sqlalchemy.orm import deferred
-
-
- class Base(DeclarativeBase):
- pass
-
-
- book = Table(
- "book",
- Base.metadata,
- Column("book_id", Integer, primary_key=True),
- Column("title", String),
- Column("summary", String),
- Column("excerpt", Text),
- Column("photo", LargeBinary),
- )
-
-
- class Book(Base):
- __table__ = book
-
- excerpt = deferred(book.c.excerpt)
- photo = deferred(book.c.photo)
-
-
-Deferred columns can be associated with a "group" name, so that they load
-together when any of them are first accessed. When using
-:func:`_orm.mapped_column`, this group name may be specified using the
-:paramref:`_orm.mapped_column.deferred_group` parameter, which implies
-:paramref:`_orm.mapped_column.deferred` if that parameter is not already
-set. When using :func:`_orm.deferred`, the :paramref:`_orm.deferred.group`
-parameter may be used.
-
-The example below defines a mapping with a ``photos`` deferred group. When
-any attribute within the group (``.photo1``, ``.photo2``, ``.photo3``) is
-accessed on an instance of ``Book``, all three columns will be loaded in one
-SELECT statement. The ``.excerpt`` column, however, will only be loaded when
-it is directly accessed::
-
- from sqlalchemy import Text
- from sqlalchemy.orm import DeclarativeBase
- from sqlalchemy.orm import Mapped
- from sqlalchemy.orm import mapped_column
-
- class Base(DeclarativeBase):
- pass
-
- class Book(Base):
- __tablename__ = 'book'
-
- book_id: Mapped[int] = mapped_column(primary_key=True)
- title: Mapped[str]
- summary: Mapped[str]
- excerpt: Mapped[str] = mapped_column(Text, deferred=True)
- photo1: Mapped[bytes] = mapped_column(deferred_group="photos")
- photo2: Mapped[bytes] = mapped_column(deferred_group="photos")
- photo3: Mapped[bytes] = mapped_column(deferred_group="photos")
-
-
-.. _deferred_options:
-
-Deferred Column Loader Query Options
-------------------------------------
-At query time, the :func:`_orm.defer`, :func:`_orm.undefer` and
-:func:`_orm.undefer_group` loader options may be used to further control the
-"deferral behavior" of mapped columns.
-
-Columns can be marked as "deferred" or reset to "undeferred" at query time
-using options which are passed to the :meth:`_sql.Select.options` method; the most
-basic query options are :func:`_orm.defer` and
-:func:`_orm.undefer`::
-
- from sqlalchemy.orm import defer
- from sqlalchemy.orm import undefer
- from sqlalchemy import select
-
- stmt = select(Book)
- stmt = stmt.options(defer(Book.summary), undefer(Book.excerpt))
- book_objs = session.scalars(stmt).all()
-
-
-Above, the "summary" column will not load until accessed, and the "excerpt"
-column will load immediately even if it was mapped as a "deferred" column.
-
-:func:`_orm.deferred` attributes which are marked with a "group" can be undeferred
-using :func:`_orm.undefer_group`, sending in the group name::
-
- from sqlalchemy.orm import undefer_group
- from sqlalchemy import select
-
- stmt = select(Book)
- stmt = stmt.options(undefer_group('photos'))
- book_objs = session.scalars(stmt).all()
-
-
-.. _deferred_loading_w_multiple:
-
-Deferred Loading across Multiple Entities
------------------------------------------
-
-Column deferral may also be used for a statement that loads multiple types of
-entities at once, by referring to the appropriate class bound attribute
-within the :func:`_orm.defer` function. Suppose ``Book`` has a
-relationship ``Book.author`` to a related class ``Author``, we could write
-a query as follows which will defer the ``Author.bio`` column::
-
- from sqlalchemy.orm import defer
- from sqlalchemy import select
-
- stmt = select(Book, Author).join(Book.author)
- stmt = stmt.options(defer(Author.bio))
-
- book_author_objs = session.execute(stmt).all()
-
-
-Column deferral options may also indicate that they take place along various
-relationship paths, which are themselves often :ref:`eagerly loaded
-<loading_toplevel>` with loader options. All relationship-bound loader options
-support chaining onto additional loader options, which include loading for
-further levels of relationships, as well as onto column-oriented attributes at
-that path. For example, to load ``Author`` instances, then joined-eager-load the
-``Author.books`` collection for each author, then apply deferral options to
-column-oriented attributes onto each ``Book`` entity from that relationship,
-the :func:`_orm.joinedload` loader option can be combined with the :func:`.load_only`
-option (described later in this section) to defer all ``Book`` columns except
-those explicitly specified::
-
- from sqlalchemy.orm import joinedload
- from sqlalchemy import select
-
- stmt = select(Author)
- stmt = stmt.options(
- joinedload(Author.books).load_only(Book.summary, Book.excerpt)
- )
-
- author_objs = session.scalars(stmt).all()
-
-Option structures as above can also be organized in more complex ways, such
-as hierarchically using the :meth:`_orm.Load.options`
-method, which allows multiple sub-options to be chained to a common parent
-option at once. The example below illustrates a more complex structure::
-
- from sqlalchemy.orm import defer
- from sqlalchemy.orm import joinedload
- from sqlalchemy.orm import load_only
- from sqlalchemy import select
-
- stmt = select(Author)
- stmt = stmt.options(
- joinedload(Author.book).options(
- load_only(Book.summary, Book.excerpt),
- joinedload(Book.citations).options(
- joinedload(Citation.author),
- defer(Citation.fulltext)
- )
- )
- )
- author_objs = session.scalars(stmt).all()
-
-
-Another way to apply options to a path is to use the :func:`_orm.defaultload`
-function. This function is used to indicate a particular path within a loader
-option structure without actually setting any options at that level, so that further
-sub-options may be applied. The :func:`_orm.defaultload` function can be used
-to create the same structure as we did above using :meth:`_orm.Load.options` as::
-
- from sqlalchemy import select
- from sqlalchemy.orm import defaultload
-
- stmt = select(Author)
- stmt = stmt.options(
- joinedload(Author.book).load_only(Book.summary, Book.excerpt),
- defaultload(Author.book).joinedload(Book.citations).joinedload(Citation.author),
- defaultload(Author.book).defaultload(Book.citations).defer(Citation.fulltext)
- )
-
- author_objs = session.scalars(stmt).all()
-
-.. seealso::
-
- :ref:`relationship_loader_options` - targeted towards relationship loading
-
-Load Only and Wildcard Options
-------------------------------
-
-The ORM loader option system supports the concept of "wildcard" loader options,
-in which a loader option can be passed an asterisk ``"*"`` to indicate that
-a particular option should apply to all applicable attributes of a mapped
-class. For example, if we wanted to load the ``Book`` class but only
-the "summary" and "excerpt" columns, we could say::
-
- from sqlalchemy.orm import defer
- from sqlalchemy.orm import undefer
- from sqlalchemy import select
-
- stmt = select(Book).options(
- defer('*'), undefer(Book.summary), undefer(Book.excerpt))
-
- book_objs = session.scalars(stmt).all()
-
-Above, the :func:`.defer` option is applied using a wildcard to all column
-attributes on the ``Book`` class. Then, the :func:`.undefer` option is used
-against the "summary" and "excerpt" fields so that they are the only columns
-loaded up front. A query for the above entity will include only the "summary"
-and "excerpt" fields in the SELECT, along with the primary key columns which
-are always used by the ORM.
-
-A similar function is available with less verbosity by using the
-:func:`_orm.load_only` option. This is a so-called **exclusionary** option
-which will apply deferred behavior to all column attributes except those
-that are named::
-
- from sqlalchemy.orm import load_only
- from sqlalchemy import select
-
- stmt = select(Book).options(load_only(Book.summary, Book.excerpt))
-
- book_objs = session.scalars(stmt).all()
-
-Wildcard and Exclusionary Options with Multiple-Entity Queries
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Wildcard options and exclusionary options such as :func:`.load_only` may
-only be applied to a single entity at a time within a statement.
-To suit the less common case where a statement returns multiple primary
-entities at once, a special calling style may be required in order to apply a
-wildcard or exclusionary option to a specific entity: the :class:`_orm.Load`
-object is used to indicate the starting entity for a deferral option.
-For example, if we were loading ``Book`` and ``Author`` at once, the ORM
-will raise an informative error if we try to apply :func:`.load_only` to
-both at once. Instead, we may use :class:`_orm.Load` to apply the option
-to either or both of ``Book`` and ``Author`` individually::
-
- from sqlalchemy.orm import Load
-
- stmt = select(Book, Author).join(Book.author)
- stmt = stmt.options(
- Load(Book).load_only(Book.summary, Book.excerpt)
- )
- book_author_objs = session.execute(stmt).all()
-
-Above, :class:`_orm.Load` is used in conjunction with the exclusionary option
-:func:`.load_only` so that the deferral of all other columns only takes
-place for the ``Book`` class and not the ``Author`` class. Again,
-the ORM should raise an informative error message when
-the above calling style is actually required that describes those cases
-where explicit use of :class:`_orm.Load` is needed.
-
-.. _deferred_raiseload:
-
-Raiseload for Deferred Columns
-------------------------------
-
-.. versionadded:: 1.4
-
-The :func:`.deferred` loader option and the corresponding loader strategy also
-support the concept of "raiseload", which is a loader strategy that will raise
-:class:`.InvalidRequestError` if the attribute is accessed such that it would
-need to emit a SQL query in order to be loaded. This behavior is the
-column-based equivalent of the :func:`_orm.raiseload` feature for relationship
-loading, discussed at :ref:`prevent_lazy_with_raiseload`. Using the
-:paramref:`_orm.defer.raiseload` parameter on the :func:`_orm.defer` option,
-an exception is raised if the attribute is accessed::
-
- book = session.scalar(
- select(Book).options(defer(Book.summary, raiseload=True)).limit(1)
- )
-
- # would raise an exception
- book.summary
-
-Deferred "raiseload" can be configured at the mapper level, using the
-:paramref:`_orm.mapped_column.deferred_raiseload` parameter of
-:func:`_orm.mapped_column` or the :paramref:`_orm.deferred.raiseload`
-parameter of :func:`_orm.deferred`, so that an explicit
-:func:`.undefer` is required in order for the attribute to be usable.
-Below is a :ref:`Declarative table <orm_declarative_table>` configuration example::
-
-
- from sqlalchemy import Text
- from sqlalchemy.orm import DeclarativeBase
- from sqlalchemy.orm import Mapped
- from sqlalchemy.orm import mapped_column
-
- class Base(DeclarativeBase):
- pass
-
- class Book(Base):
- __tablename__ = 'book'
-
- book_id: Mapped[int] = mapped_column(primary_key=True)
- title: Mapped[str]
-        summary: Mapped[str] = mapped_column(deferred=True, deferred_raiseload=True)
-        excerpt: Mapped[str] = mapped_column(Text, deferred=True, deferred_raiseload=True)
-
-Alternatively, the example below illustrates the same mapping using an
-:ref:`Imperative table <orm_imperative_table_configuration>` configuration::
-
- from sqlalchemy import Column, Integer, LargeBinary, String, Table, Text
- from sqlalchemy.orm import DeclarativeBase
- from sqlalchemy.orm import deferred
-
-
- class Base(DeclarativeBase):
- pass
-
-
- book = Table(
- "book",
- Base.metadata,
- Column("book_id", Integer, primary_key=True),
- Column("title", String),
- Column("summary", String),
- Column("excerpt", Text),
- )
-
-
- class Book(Base):
- __table__ = book
-
- summary = deferred(book.c.summary, raiseload=True)
- excerpt = deferred(book.c.excerpt, raiseload=True)
-
-With both mappings, if we wish to have either or both of ``.excerpt``
-or ``.summary`` available on an object when loaded, we make use of the
-:func:`_orm.undefer` loader option::
-
- book_w_excerpt = session.scalars(
-        select(Book).options(undefer(Book.excerpt)).where(Book.book_id == 12)
- ).first()
-
-The :func:`_orm.undefer` option will populate the ``.excerpt`` attribute
-above, even if the ``Book`` object were already loaded, assuming the
-``.excerpt`` field was not populated by some other means previously.
-
-
-Column Deferral API
--------------------
-
-.. autofunction:: defer
-
-.. autofunction:: deferred
-
-.. autofunction:: query_expression
-
-.. autofunction:: load_only
-
-.. autofunction:: undefer
-
-.. autofunction:: undefer_group
-
-.. autofunction:: with_expression
-
-.. _bundles:
-
-Column Bundles
-==============
-
-The :class:`_orm.Bundle` may be used to query for groups of columns under one
-namespace.
-
-The bundle allows columns to be grouped together::
-
- from sqlalchemy.orm import Bundle
- from sqlalchemy import select
-
- bn = Bundle('mybundle', MyClass.data1, MyClass.data2)
-    for row in session.execute(select(bn).where(bn.c.data1 == "d1")):
- print(row.mybundle.data1, row.mybundle.data2)
-
-The bundle can be subclassed to provide custom behaviors when results
-are fetched. The method :meth:`.Bundle.create_row_processor` is given
-the statement object and a set of "row processor" functions at query execution
-time; these processor functions, when given a result row, will return the
-individual attribute value, which can then be adapted into any kind of
-return data structure. Below illustrates replacing the usual :class:`.Row`
-return structure with a straight Python dictionary::
-
- from sqlalchemy.orm import Bundle
-
- class DictBundle(Bundle):
- def create_row_processor(self, query, procs, labels):
- """Override create_row_processor to return values as dictionaries"""
-            def proc(row):
-                # each callable in "procs" extracts one column's value from the row
-                return dict(
-                    zip(labels, (col_proc(row) for col_proc in procs))
-                )
- return proc
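
The composition performed inside ``create_row_processor`` can be illustrated
in plain Python, outside of any ORM execution; the ``procs`` and ``labels``
values below are hypothetical stand-ins for the arguments SQLAlchemy would
supply at query execution time:

```python
from operator import itemgetter

# Hypothetical stand-ins for the arguments passed to create_row_processor():
# one extractor callable per bundled column, plus the column labels.
procs = [itemgetter(0), itemgetter(1)]
labels = ["data1", "data2"]


def create_dict_processor(procs, labels):
    # the same composition as the DictBundle example: build one callable
    # that maps each label to its extracted column value for a given row
    def proc(row):
        return dict(zip(labels, (col_proc(row) for col_proc in procs)))

    return proc


proc = create_dict_processor(procs, labels)
assert proc(("d1", "d2")) == {"data1": "d1", "data2": "d2"}
```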
-
-.. note::
-
- The :class:`_orm.Bundle` construct only applies to column expressions.
- It does not apply to ORM attributes mapped using :func:`_orm.relationship`.
-
-.. versionchanged:: 1.0
-
- The ``proc()`` callable passed to the ``create_row_processor()``
- method of custom :class:`.Bundle` classes now accepts only a single
- "row" argument.
-
-A result from the above bundle will return dictionary values::
-
- bn = DictBundle('mybundle', MyClass.data1, MyClass.data2)
-    for row in session.execute(select(bn).where(bn.c.data1 == 'd1')):
- print(row.mybundle['data1'], row.mybundle['data2'])
-
-The :class:`.Bundle` construct is also integrated into the behavior
-of :func:`.composite`, where it is used to return composite attributes as objects
-when queried as individual attributes.
+This document has moved to :doc:`queryguide/columns`
-===============================
-Querying Data, Loading Objects
-===============================
+:orphan:
-The following sections refer to techniques for emitting SELECT statements within
-an ORM context. This involves primarily statements that return instances of
-ORM mapped objects, but also involves calling forms that deliver individual
-column or groups of columns as well.
+This document has moved to :doc:`queryguide/index`
-For an introduction to querying with the SQLAlchemy ORM, start with the
-:ref:`unified_tutorial`.
-
-.. toctree::
- :maxdepth: 3
-
- queryguide
- loading_columns
- loading_relationships
- inheritance_loading
- query
-.. _loading_toplevel:
+:orphan:
-.. currentmodule:: sqlalchemy.orm
+This document has moved to :doc:`queryguide/relationships`
-Relationship Loading Techniques
-===============================
-
-A big part of SQLAlchemy is providing a wide range of control over how related
-objects get loaded when querying. By "related objects" we refer to collections
-or scalar associations configured on a mapper using :func:`_orm.relationship`.
-This behavior can be configured at mapper construction time using the
-:paramref:`_orm.relationship.lazy` parameter to the :func:`_orm.relationship`
-function, as well as by using options with the :class:`_query.Query` object.
-
-The loading of relationships falls into three categories; **lazy** loading,
-**eager** loading, and **no** loading. Lazy loading refers to objects that are returned
-from a query without the related
-objects loaded at first. When the given collection or reference is
-first accessed on a particular object, an additional SELECT statement
-is emitted such that the requested collection is loaded.
-
-Eager loading refers to objects returned from a query with the related
-collection or scalar reference already loaded up front. The :class:`_query.Query`
-achieves this either by augmenting the SELECT statement it would normally
-emit with a JOIN to load in related rows simultaneously, or by emitting
-additional SELECT statements after the primary one to load collections
-or scalar references at once.
-
-"No" loading refers to the disabling of loading on a given relationship;
-either the attribute is left empty and never loaded, or it raises an error
-when it is accessed, in order to guard against unwanted lazy loads.
-
-The primary forms of relationship loading are:
-
-* **lazy loading** - available via ``lazy='select'`` or the :func:`.lazyload`
- option, this is the form of loading that emits a SELECT statement at
- attribute access time to lazily load a related reference on a single
- object at a time. Lazy loading is detailed at :ref:`lazy_loading`.
-
-* **joined loading** - available via ``lazy='joined'`` or the :func:`_orm.joinedload`
- option, this form of loading applies a JOIN to the given SELECT statement
- so that related rows are loaded in the same result set. Joined eager loading
- is detailed at :ref:`joined_eager_loading`.
-
-* **subquery loading** - available via ``lazy='subquery'`` or the :func:`.subqueryload`
- option, this form of loading emits a second SELECT statement which re-states the
- original query embedded inside of a subquery, then JOINs that subquery to the
- related table to be loaded to load all members of related collections / scalar
- references at once. Subquery eager loading is detailed at :ref:`subquery_eager_loading`.
-
-* **select IN loading** - available via ``lazy='selectin'`` or the :func:`.selectinload`
- option, this form of loading emits a second (or more) SELECT statement which
- assembles the primary key identifiers of the parent objects into an IN clause,
- so that all members of related collections / scalar references are loaded at once
- by primary key. Select IN loading is detailed at :ref:`selectin_eager_loading`.
-
-* **raise loading** - available via ``lazy='raise'``, ``lazy='raise_on_sql'``,
- or the :func:`.raiseload` option, this form of loading is triggered at the
- same time a lazy load would normally occur, except it raises an ORM exception
- in order to guard against the application making unwanted lazy loads.
- An introduction to raise loading is at :ref:`prevent_lazy_with_raiseload`.
-
-* **no loading** - available via ``lazy='noload'``, or the :func:`.noload`
- option; this loading style turns the attribute into an empty attribute
- (``None`` or ``[]``) that will never load or have any loading effect. This
- seldom-used strategy behaves somewhat like an eager loader when objects are
- loaded in that an empty attribute or collection is placed, but for expired
- objects relies upon the default value of the attribute being returned on
- access; the net effect is the same except for whether or not the attribute
- name appears in the :attr:`.InstanceState.unloaded` collection. ``noload``
- may be useful for implementing a "write-only" attribute but this usage is not
- currently tested or formally supported.
-
-
-.. _relationship_lazy_option:
-
-Configuring Loader Strategies at Mapping Time
----------------------------------------------
-
-The loader strategy for a particular relationship can be configured
-at mapping time to take place in all cases where an object of the mapped
-type is loaded, in the absence of any query-level options that modify it.
-This is configured using the :paramref:`_orm.relationship.lazy` parameter to
-:func:`_orm.relationship`; common values for this parameter
-include ``select``, ``joined``, ``subquery`` and ``selectin``.
-
-For example, to configure a relationship to use joined eager loading when
-the parent object is queried::
-
- class Parent(Base):
- __tablename__ = 'parent'
-
- id = mapped_column(Integer, primary_key=True)
- children = relationship("Child", lazy='joined')
-
-Above, whenever a collection of ``Parent`` objects is loaded, each
-``Parent`` will also have its ``children`` collection populated, using
-rows fetched by adding a JOIN to the query for ``Parent`` objects.
-See :ref:`joined_eager_loading` for background on this style of loading.
-
-The default value of the :paramref:`_orm.relationship.lazy` argument is
-``"select"``, which indicates lazy loading. See :ref:`lazy_loading` for
-further background.
-
-.. _relationship_loader_options:
-
-Relationship Loading with Loader Options
-----------------------------------------
-
-The other, and possibly more common, way to configure loading strategies
-is to set them up on a per-query basis against specific attributes using the
-:meth:`_query.Query.options` method. Very detailed
-control over relationship loading is available using loader options;
-the most common are
-:func:`~sqlalchemy.orm.joinedload`,
-:func:`~sqlalchemy.orm.subqueryload`, :func:`~sqlalchemy.orm.selectinload`
-and :func:`~sqlalchemy.orm.lazyload`. The option accepts either
-the string name of an attribute against a parent, or for greater specificity
-can accommodate a class-bound attribute directly::
-
- # set children to load lazily
- session.query(Parent).options(lazyload(Parent.children)).all()
-
- # set children to load eagerly with a join
- session.query(Parent).options(joinedload(Parent.children)).all()
-
-The loader options can also be "chained" using **method chaining**
-to specify how loading should occur further levels deep::
-
- session.query(Parent).options(
- joinedload(Parent.children).
- subqueryload(Child.subelements)).all()
-
-Chained loader options can be applied against a "lazy" loaded collection.
-This means that when a collection or association is lazily loaded upon
-access, the specified option will then take effect::
-
- session.query(Parent).options(
- lazyload(Parent.children).
- subqueryload(Child.subelements)).all()
-
-Above, the query will return ``Parent`` objects without the ``children``
-collections loaded. When the ``children`` collection on a particular
-``Parent`` object is first accessed, it will lazy load the related
-objects, but additionally apply eager loading to the ``subelements``
-collection on each member of ``children``.
-
-The above examples, using :class:`_orm.Query`, are now referred to as
-:term:`1.x style` queries. The options system is available as well for
-:term:`2.0 style` queries using the :meth:`_sql.Select.options` method::
-
- stmt = select(Parent).options(
- lazyload(Parent.children).
- subqueryload(Child.subelements))
-
- result = session.execute(stmt)
-
-Under the hood, :class:`_orm.Query` is ultimately using the above
-:func:`_sql.select` based mechanism.
-
-
-.. _loader_option_criteria:
-
-Adding Criteria to loader options
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The relationship attributes used to indicate loader options include the
-ability to add additional filtering criteria to the ON clause of the join
-that's created, or to the WHERE criteria involved, depending on the loader
-strategy. This can be achieved using the :meth:`.PropComparator.and_`
-method which will pass through an option such that loaded results are limited
-to the given filter criteria::
-
- session.query(A).options(lazyload(A.bs.and_(B.id > 5)))
-
-When using limiting criteria, if a particular collection is already loaded
-it won't be refreshed; to ensure the new criteria takes place, apply
-the :meth:`_query.Query.populate_existing` option::
-
- session.query(A).options(lazyload(A.bs.and_(B.id > 5))).populate_existing()
-
-In order to add filtering criteria to all occurrences of an entity throughout
-a query, regardless of loader strategy or where it occurs in the loading
-process, see the :func:`_orm.with_loader_criteria` function.
-
-.. versionadded:: 1.4
-
-Specifying Sub-Options with Load.options()
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Using method chaining, the loader style of each link in the path is explicitly
-stated. To navigate along a path without changing the existing loader style
-of a particular attribute, the :func:`.defaultload` method/function may be used::
-
- session.query(A).options(
- defaultload(A.atob).
- joinedload(B.btoc)).all()
-
-A similar approach can be used to specify multiple sub-options at once, using
-the :meth:`_orm.Load.options` method::
-
- session.query(A).options(
- defaultload(A.atob).options(
- joinedload(B.btoc),
- joinedload(B.btod)
- )).all()
-
-.. versionadded:: 1.3.6 added :meth:`_orm.Load.options`
-
-
-.. seealso::
-
- :ref:`deferred_loading_w_multiple` - illustrates examples of combining
- relationship and column-oriented loader options.
-
-
-.. note:: The loader options applied to an object's lazy-loaded collections
- are **"sticky"** to specific object instances, meaning they will persist
- upon collections loaded by that specific object for as long as it exists in
- memory. For example, given the previous example::
-
- session.query(Parent).options(
- lazyload(Parent.children).
- subqueryload(Child.subelements)).all()
-
- if the ``children`` collection on a particular ``Parent`` object loaded by
- the above query is expired (such as when a :class:`.Session` object's
- transaction is committed or rolled back, or :meth:`.Session.expire_all` is
- used), when the ``Parent.children`` collection is next accessed in order to
- re-load it, the ``Child.subelements`` collection will again be loaded using
-    subquery eager loading. This stays the case even if the above ``Parent``
-    object is accessed from a subsequent query that specifies a different set of
-    options. To change the options on an existing object without expunging it and
- re-loading, they must be set explicitly in conjunction with the
- :meth:`_query.Query.populate_existing` method::
-
- # change the options on Parent objects that were already loaded
- session.query(Parent).populate_existing().options(
- lazyload(Parent.children).
- lazyload(Child.subelements)).all()
-
- If the objects loaded above are fully cleared from the :class:`.Session`,
- such as due to garbage collection or that :meth:`.Session.expunge_all`
- were used, the "sticky" options will also be gone and the newly created
- objects will make use of new options if loaded again.
-
- A future SQLAlchemy release may add more alternatives to manipulating
- the loader options on already-loaded objects.
-
-
-.. _lazy_loading:
-
-Lazy Loading
-------------
-
-By default, all inter-object relationships are **lazy loading**. The scalar or
-collection attribute associated with a :func:`~sqlalchemy.orm.relationship`
-contains a trigger which fires the first time the attribute is accessed. This
-trigger typically issues a SQL call at the point of access
-in order to load the related object or objects:
-
-.. sourcecode:: python+sql
-
- >>> jack.addresses
- {opensql}SELECT
- addresses.id AS addresses_id,
- addresses.email_address AS addresses_email_address,
- addresses.user_id AS addresses_user_id
- FROM addresses
- WHERE ? = addresses.user_id
- [5]
- {stop}[<Address(u'jack@google.com')>, <Address(u'j25@yahoo.com')>]
-
-The one case where SQL is not emitted is for a simple many-to-one relationship, when
-the related object can be identified by its primary key alone and that object is already
-present in the current :class:`.Session`. For this reason, while lazy loading
-can be expensive for related collections, in the case that one is loading
-lots of objects with simple many-to-ones against a relatively small set of
-possible target objects, lazy loading may be able to refer to these objects locally
-without emitting as many SELECT statements as there are parent objects.
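
The identity-map shortcut for simple many-to-one loads can be sketched in
plain Python (no database, not SQLAlchemy code); ``identity_map`` and
``get_related`` below are hypothetical stand-ins for the :class:`.Session`
identity map and the many-to-one lazy loader:

```python
# A toy sketch of the identity-map shortcut: if the related primary key is
# already present in the "session", no SELECT is needed for that object.
statements = []
identity_map = {}


def get_related(pk):
    if pk in identity_map:
        # already loaded; no SQL emitted
        return identity_map[pk]
    statements.append(f"SELECT * FROM target WHERE id = {pk}")
    identity_map[pk] = obj = ("target", pk)
    return obj


# many parent rows referring to a small set of possible targets
for parent_fk in [1, 2, 1, 1, 2, 1]:
    get_related(parent_fk)

# only two SELECT statements for six parents, one per distinct target
assert len(statements) == 2
```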
-
-This default behavior of "load upon attribute access" is known as "lazy" or
-"select" loading - the name "select" because a "SELECT" statement is typically emitted
-when the attribute is first accessed.
-
-For an attribute that is normally configured with some other loader
-strategy, lazy loading can be enabled at query time using the
-:func:`.lazyload` loader option::
-
- from sqlalchemy.orm import lazyload
-
- # force lazy loading for an attribute that is set to
- # load some other way normally
- session.query(User).options(lazyload(User.addresses))
-
-.. _prevent_lazy_with_raiseload:
-
-Preventing unwanted lazy loads using raiseload
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The :func:`.lazyload` strategy produces an effect that is one of the most
-common issues referred to in object relational mapping; the
-:term:`N plus one problem`, which states that for any N objects loaded,
-accessing their lazy-loaded attributes means there will be N+1 SELECT
-statements emitted. In SQLAlchemy, the usual mitigation for the N+1 problem
-is to make use of its very capable eager load system. However, eager loading
-requires that the attributes which are to be loaded be specified with the
-:class:`_query.Query` up front. The problem of code that may access other attributes
-that were not eagerly loaded, where lazy loading is not desired, may be
-addressed using the :func:`.raiseload` strategy; this loader strategy
-replaces the behavior of lazy loading with an informative error being
-raised::
-
- from sqlalchemy.orm import raiseload
- session.query(User).options(raiseload(User.addresses))
-
-Above, a ``User`` object loaded from the above query will not have
-the ``.addresses`` collection loaded; if some code later on attempts to
-access this attribute, an ORM exception is raised.
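
The "N plus one" pattern described above can be sketched without a database;
the ``select_users`` and ``select_addresses`` functions below are hypothetical
stand-ins for the statements the ORM would emit, not SQLAlchemy APIs:

```python
# A toy sketch of the N plus one problem: one SELECT for the parent
# objects, then one more per parent when each lazy-loaded collection
# is first accessed.
statements = []


def select_users():
    statements.append("SELECT * FROM users")
    return ["u1", "u2", "u3"]


def select_addresses(user):
    # stands in for the lazy load fired on first attribute access
    statements.append(f"SELECT * FROM addresses WHERE user_id = {user!r}")
    return []


for user in select_users():
    select_addresses(user)

# N=3 parents produced N+1 = 4 statements
assert len(statements) == 4
```

Eager loading strategies collapse the per-parent statements into the initial
query (or a small fixed number of additional queries), while
:func:`.raiseload` turns each would-be lazy load into an error instead.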
-
-:func:`.raiseload` may be used with a so-called "wildcard" specifier to
-indicate that all relationships should use this strategy. For example,
-to set up only one attribute as eager loading, and all the rest as raise::
-
- session.query(Order).options(
- joinedload(Order.items), raiseload('*'))
-
-The above wildcard will apply to **all** relationships besides ``items`` -
-not just those on ``Order``, but those on the ``Item`` objects as well. To set up
-:func:`.raiseload` for only the ``Order`` objects, specify a full
-path with :class:`_orm.Load`::
-
- from sqlalchemy.orm import Load
-
- session.query(Order).options(
- joinedload(Order.items), Load(Order).raiseload('*'))
-
-Conversely, to set up the raise for just the ``Item`` objects::
-
- session.query(Order).options(
- joinedload(Order.items).raiseload('*'))
-
-
-The :func:`.raiseload` option applies only to relationship attributes. For
-column-oriented attributes, the :func:`.defer` option supports the
-:paramref:`.orm.defer.raiseload` option which works in the same way.
-
-.. versionchanged:: 1.4.0 The "raiseload" strategies **do not take place**
- within the unit of work flush process, as of SQLAlchemy 1.4.0. This means
- that if the unit of work needs to load a particular attribute in order to
- complete its work, it will perform the load. It's not always easy to prevent
- a particular relationship load from occurring within the UOW process
- particularly with less common kinds of relationships. The lazy="raise" case
- is more intended for explicit attribute access within the application space.
-
-.. seealso::
-
- :ref:`wildcard_loader_strategies`
-
- :ref:`deferred_raiseload`
-
-.. _joined_eager_loading:
-
-Joined Eager Loading
---------------------
-
-Joined eager loading is the most fundamental style of eager loading in the
-ORM. It works by connecting a JOIN (by default
-a LEFT OUTER join) to the SELECT statement emitted by a :class:`_query.Query`
-and populates the target scalar/collection from the
-same result set as that of the parent.
-
-At the mapping level, this looks like::
-
- class Address(Base):
- # ...
-
- user = relationship(User, lazy="joined")
-
-Joined eager loading is usually applied as an option to a query, rather than
-as a default loading option on the mapping, in particular when used for
-collections rather than many-to-one references. This is achieved
-using the :func:`_orm.joinedload` loader option:
-
-.. sourcecode:: python+sql
-
- >>> jack = session.query(User).\
- ... options(joinedload(User.addresses)).\
- ... filter_by(name='jack').all()
- {opensql}SELECT
- addresses_1.id AS addresses_1_id,
- addresses_1.email_address AS addresses_1_email_address,
- addresses_1.user_id AS addresses_1_user_id,
- users.id AS users_id, users.name AS users_name,
- users.fullname AS users_fullname,
- users.nickname AS users_nickname
- FROM users
- LEFT OUTER JOIN addresses AS addresses_1
- ON users.id = addresses_1.user_id
- WHERE users.name = ?
- ['jack']
-
-
-The JOIN emitted by default is a LEFT OUTER JOIN, to allow for a lead object
-that does not refer to a related row. For an attribute that is guaranteed
-to have an element, such as a many-to-one
-reference to a related object where the referencing foreign key is NOT NULL,
-the query can be made more efficient by using an inner join; this is available
-at the mapping level via the :paramref:`_orm.relationship.innerjoin` flag::
-
- class Address(Base):
- # ...
-
- user_id = mapped_column(ForeignKey('users.id'), nullable=False)
- user = relationship(User, lazy="joined", innerjoin=True)
-
-At the query option level, via the :paramref:`_orm.joinedload.innerjoin` flag::
-
- session.query(Address).options(
- joinedload(Address.user, innerjoin=True))
-
-The JOIN will right-nest itself when applied in a chain that includes
-an OUTER JOIN:
-
-.. sourcecode:: python+sql
-
- >>> session.query(User).options(
- ... joinedload(User.addresses).
- ... joinedload(Address.widgets, innerjoin=True)).all()
- {opensql}SELECT
- widgets_1.id AS widgets_1_id,
- widgets_1.name AS widgets_1_name,
- addresses_1.id AS addresses_1_id,
- addresses_1.email_address AS addresses_1_email_address,
- addresses_1.user_id AS addresses_1_user_id,
- users.id AS users_id, users.name AS users_name,
- users.fullname AS users_fullname,
- users.nickname AS users_nickname
- FROM users
- LEFT OUTER JOIN (
- addresses AS addresses_1 JOIN widgets AS widgets_1 ON
- addresses_1.widget_id = widgets_1.id
- ) ON users.id = addresses_1.user_id
-
-On older versions of SQLite, the above nested right JOIN may be re-rendered
-as a nested subquery. Older versions of SQLAlchemy would convert right-nested
-joins into subqueries in all cases.
-
-.. warning::
-
-   Using ``with_for_update`` in the context of eager loading
-   relationships is not officially supported or recommended by
-   SQLAlchemy and may not work with certain queries on various
-   database backends. When ``with_for_update`` is successfully used
-   with a query that involves :func:`_orm.joinedload`, SQLAlchemy will
-   attempt to emit SQL that locks all involved tables.
-
-
-Joined eager loading and result set batching
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-A central concept of joined eager loading when applied to collections is that
-the :class:`_query.Query` object must de-duplicate rows against the leading
-entity being queried. As in the example above,
-if the ``User`` object we loaded referred to three ``Address`` objects, the
-result of the SQL statement would have had three rows; yet the :class:`_query.Query`
-returns only one ``User`` object. As additional rows are received for a
-``User`` object just loaded in a previous row, the additional columns that
-refer to new ``Address`` objects are directed into additional results within
-the ``User.addresses`` collection of that particular object.
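
The de-duplication step can be sketched in plain Python; assuming each joined row arrives as a ``(user_id, user_name, address_id)`` tuple as in the SQL above, a hypothetical ``dedupe_joined_rows`` helper (not SQLAlchemy's actual implementation) would behave like this:

```python
def dedupe_joined_rows(rows):
    """Collapse joined rows into one dict per lead entity, routing the
    related column into a list, preserving row order."""
    users = {}
    for user_id, user_name, address_id in rows:
        user = users.setdefault(
            user_id, {"id": user_id, "name": user_name, "addresses": []}
        )
        # NULL from the LEFT OUTER JOIN means the parent has no related row
        if address_id is not None:
            user["addresses"].append(address_id)
    return list(users.values())


# three result rows collapse into one "jack" with three addresses
rows = [(5, "jack", 1), (5, "jack", 2), (5, "jack", 3)]
print(dedupe_joined_rows(rows))
```
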
-
-This process is very transparent, but it does imply that joined eager
-loading is incompatible with "batched" query results provided by the
-:meth:`_query.Query.yield_per` method, when used for collection loading. Joined
-eager loading used for scalar references is, however, compatible with
-:meth:`_query.Query.yield_per`. The :meth:`_query.Query.yield_per` method will raise
-an exception if a collection-based joined eager loader is
-in play.
-
-To "batch" queries with arbitrarily large sets of result data while maintaining
-compatibility with collection-based joined eager loading, emit multiple
-SELECT statements, each referring to a subset of rows using the WHERE
-clause, e.g. windowing. Alternatively, consider using "select IN" eager loading
-which is **potentially** compatible with :meth:`_query.Query.yield_per`, provided
-that the database driver in use supports multiple, simultaneous cursors
-(SQLite, PostgreSQL drivers, not MySQL drivers or SQL Server ODBC drivers).
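
The windowing approach can be sketched as follows: split a sorted list of primary key values into ranges, each of which would drive one SELECT with a WHERE clause against the primary key. The helper below is illustrative only; it produces the range boundaries, not the SQL itself:

```python
def window_ranges(ids, window_size):
    """Split a sorted list of primary key values into (low, high) ranges,
    each of which can drive one SELECT ... WHERE id BETWEEN low AND high."""
    for i in range(0, len(ids), window_size):
        chunk = ids[i:i + window_size]
        yield chunk[0], chunk[-1]


# e.g. primary keys 1..10 in windows of 4:
print(list(window_ranges(list(range(1, 11)), 4)))  # [(1, 4), (5, 8), (9, 10)]
```

Each emitted range corresponds to one SELECT, so collection-based joined eager loading stays intact within each window.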
-
-
-.. _zen_of_eager_loading:
-
-The Zen of Joined Eager Loading
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Since joined eager loading seems to have many resemblances to the use of
-:meth:`_query.Query.join`, it often produces confusion as to when and how it should
-be used. It is critical to understand the distinction that while
-:meth:`_query.Query.join` is used to alter the results of a query, :func:`_orm.joinedload`
-goes to great lengths to **not** alter the results of the query, and
-instead hides the effects of the rendered join so that only related objects
-are present.
-
-The philosophy behind loader strategies is that any set of loading schemes can
-be applied to a particular query, and *the results don't change* - only the
-number of SQL statements required to fully load related objects and collections
-changes. A particular query might start out using all lazy loads. After using
-it in context, it might be revealed that particular attributes or collections
-are always accessed, and that it would be more efficient to change the loader
-strategy for these. The strategy can be changed with no other modifications
-to the query, the results will remain identical, but fewer SQL statements would
-be emitted. In theory (and pretty much in practice), nothing you can do to the
-:class:`_query.Query` would make it load a different set of primary or related
-objects based on a change in loader strategy.
-
-The way :func:`joinedload` achieves this result of not impacting the
-entity rows returned is that it creates an anonymous alias of the
-joins it adds to your query, so that they can't be referenced by other parts of
-the query. For example, the query below uses :func:`_orm.joinedload` to create a
-LEFT OUTER JOIN from ``users`` to ``addresses``, however the ``ORDER BY`` added
-against ``Address.email_address`` is not valid - the ``Address`` entity is not
-named in the query:
-
-.. sourcecode:: python+sql
-
- >>> jack = session.query(User).\
- ... options(joinedload(User.addresses)).\
- ... filter(User.name=='jack').\
- ... order_by(Address.email_address).all()
- {opensql}SELECT
- addresses_1.id AS addresses_1_id,
- addresses_1.email_address AS addresses_1_email_address,
- addresses_1.user_id AS addresses_1_user_id,
- users.id AS users_id,
- users.name AS users_name,
- users.fullname AS users_fullname,
- users.nickname AS users_nickname
- FROM users
- LEFT OUTER JOIN addresses AS addresses_1
- ON users.id = addresses_1.user_id
- WHERE users.name = ?
- ORDER BY addresses.email_address <-- this part is wrong !
- ['jack']
-
-Above, ``ORDER BY addresses.email_address`` is not valid since ``addresses`` is not in the
-FROM list. The correct way to load the ``User`` records and order by email
-address is to use :meth:`_query.Query.join`:
-
-.. sourcecode:: python+sql
-
- >>> jack = session.query(User).\
- ... join(User.addresses).\
- ... filter(User.name=='jack').\
- ... order_by(Address.email_address).all()
- {opensql}
- SELECT
- users.id AS users_id,
- users.name AS users_name,
- users.fullname AS users_fullname,
- users.nickname AS users_nickname
- FROM users
- JOIN addresses ON users.id = addresses.user_id
- WHERE users.name = ?
- ORDER BY addresses.email_address
- ['jack']
-
-The statement above is of course not the same as the previous one, in that the
-columns from ``addresses`` are not included in the result at all. We can add
-:func:`_orm.joinedload` back in, so that there are two joins - one is that which we
-are ordering on, the other is used anonymously to load the contents of the
-``User.addresses`` collection:
-
-.. sourcecode:: python+sql
-
- >>> jack = session.query(User).\
- ... join(User.addresses).\
- ... options(joinedload(User.addresses)).\
- ... filter(User.name=='jack').\
- ... order_by(Address.email_address).all()
- {opensql}SELECT
- addresses_1.id AS addresses_1_id,
- addresses_1.email_address AS addresses_1_email_address,
- addresses_1.user_id AS addresses_1_user_id,
- users.id AS users_id, users.name AS users_name,
- users.fullname AS users_fullname,
- users.nickname AS users_nickname
- FROM users JOIN addresses
- ON users.id = addresses.user_id
- LEFT OUTER JOIN addresses AS addresses_1
- ON users.id = addresses_1.user_id
- WHERE users.name = ?
- ORDER BY addresses.email_address
- ['jack']
-
-What we see above is that our usage of :meth:`_query.Query.join` is to supply JOIN
-clauses we'd like to use in subsequent query criteria, whereas our usage of
-:func:`_orm.joinedload` only concerns itself with the loading of the
-``User.addresses`` collection, for each ``User`` in the result. In this case,
-the two joins most probably appear redundant - which they are. If we wanted to
-use just one JOIN for collection loading as well as ordering, we use the
-:func:`.contains_eager` option, described in :ref:`contains_eager` below. But
-to see why :func:`joinedload` does what it does, consider if we were
-**filtering** on a particular ``Address``:
-
-.. sourcecode:: python+sql
-
- >>> jack = session.query(User).\
- ... join(User.addresses).\
- ... options(joinedload(User.addresses)).\
- ... filter(User.name=='jack').\
- ... filter(Address.email_address=='someaddress@foo.com').\
- ... all()
- {opensql}SELECT
- addresses_1.id AS addresses_1_id,
- addresses_1.email_address AS addresses_1_email_address,
- addresses_1.user_id AS addresses_1_user_id,
- users.id AS users_id, users.name AS users_name,
- users.fullname AS users_fullname,
- users.nickname AS users_nickname
- FROM users JOIN addresses
- ON users.id = addresses.user_id
- LEFT OUTER JOIN addresses AS addresses_1
- ON users.id = addresses_1.user_id
- WHERE users.name = ? AND addresses.email_address = ?
- ['jack', 'someaddress@foo.com']
-
-Above, we can see that the two JOINs have very different roles. One will match
-exactly one row, that of the join of ``User`` and ``Address`` where
-``Address.email_address=='someaddress@foo.com'``. The other LEFT OUTER JOIN
-will match *all* ``Address`` rows related to ``User``, and is only used to
-populate the ``User.addresses`` collection, for those ``User`` objects that are
-returned.
-
-By changing the usage of :func:`_orm.joinedload` to another style of loading, we
-can change how the collection is loaded completely independently of SQL used to
-retrieve the actual ``User`` rows we want. Below we change :func:`_orm.joinedload`
-into :func:`.subqueryload`:
-
-.. sourcecode:: python+sql
-
- >>> jack = session.query(User).\
- ... join(User.addresses).\
- ... options(subqueryload(User.addresses)).\
- ... filter(User.name=='jack').\
- ... filter(Address.email_address=='someaddress@foo.com').\
- ... all()
- {opensql}SELECT
- users.id AS users_id,
- users.name AS users_name,
- users.fullname AS users_fullname,
- users.nickname AS users_nickname
- FROM users
- JOIN addresses ON users.id = addresses.user_id
- WHERE
- users.name = ?
- AND addresses.email_address = ?
- ['jack', 'someaddress@foo.com']
-
- # ... subqueryload() emits a SELECT in order
- # to load all address records ...
-
-When using joined eager loading, if the query contains a modifier that impacts
-the rows returned externally to the joins, such as when using DISTINCT, LIMIT,
-OFFSET or equivalent, the completed statement is first wrapped inside a
-subquery, and the joins used specifically for joined eager loading are applied
-to the subquery. SQLAlchemy's joined eager loading goes the extra mile, and
-then ten miles further, to absolutely ensure that it does not affect the end
-result of the query, only the way collections and related objects are loaded,
-no matter what the format of the query is.
-
-.. seealso::
-
- :ref:`contains_eager` - using :func:`.contains_eager`
-
-.. _subquery_eager_loading:
-
-Subquery Eager Loading
-----------------------
-
-Subqueryload eager loading is configured in the same manner as that of
-joined eager loading; for the :paramref:`_orm.relationship.lazy` parameter,
-we would specify ``"subquery"`` rather than ``"joined"``, and for
-the option we use the :func:`.subqueryload` option rather than the
-:func:`_orm.joinedload` option.
-
-The operation of subquery eager loading is to emit a second SELECT statement
-for each relationship to be loaded, across all result objects at once.
-This SELECT statement refers to the original SELECT statement, wrapped
-inside of a subquery, so that we retrieve the same list of primary keys
-for the primary object being returned, then link that to the sum of all
-the collection members to load them at once:
-
-.. sourcecode:: python+sql
-
- >>> jack = session.query(User).\
- ... options(subqueryload(User.addresses)).\
- ... filter_by(name='jack').all()
- {opensql}SELECT
- users.id AS users_id,
- users.name AS users_name,
- users.fullname AS users_fullname,
- users.nickname AS users_nickname
- FROM users
- WHERE users.name = ?
- ('jack',)
- SELECT
- addresses.id AS addresses_id,
- addresses.email_address AS addresses_email_address,
- addresses.user_id AS addresses_user_id,
- anon_1.users_id AS anon_1_users_id
- FROM (
- SELECT users.id AS users_id
- FROM users
- WHERE users.name = ?) AS anon_1
- JOIN addresses ON anon_1.users_id = addresses.user_id
- ORDER BY anon_1.users_id, addresses.id
- ('jack',)
-
-The subqueryload strategy has many advantages over joined eager loading
-in the area of loading collections. First, it allows the original query
-to proceed without changing it at all, in particular without introducing a
-LEFT OUTER JOIN that may make it less efficient. Secondly, it allows
-for many collections to be eagerly loaded without producing a single query
-that has many JOINs in it, which can be even less efficient; each relationship
-is loaded in a fully separate query. Finally, because the additional query
-only needs to load the collection items and not the lead object, it can
-use an inner JOIN in all cases for greater query efficiency.
-
-Disadvantages of subqueryload include that the complexity of the original
-query is transferred to the relationship queries, which when combined with the
-use of a subquery, can on some backends in some cases (notably MySQL) produce
-significantly slow queries. Additionally, the subqueryload strategy can only
-load the full contents of all collections at once, and is therefore incompatible
-with "batched" loading supplied by :meth:`_query.Query.yield_per`, both for collection
-and scalar relationships.
-
-The newer style of loading provided by :func:`.selectinload` solves these
-limitations of :func:`.subqueryload`.
-
-.. seealso::
-
- :ref:`selectin_eager_loading`
-
-
-.. _subqueryload_ordering:
-
-The Importance of Ordering
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-A query which makes use of :func:`.subqueryload` in conjunction with a
-limiting modifier such as :meth:`_query.Query.first`, :meth:`_query.Query.limit`,
-or :meth:`_query.Query.offset` should **always** include :meth:`_query.Query.order_by`
-against unique column(s) such as the primary key, so that the additional queries
-emitted by :func:`.subqueryload` include
-the same ordering as used by the parent query. Without it, there is a chance
-that the inner query could return the wrong rows::
-
- # incorrect, no ORDER BY
- session.query(User).options(
- subqueryload(User.addresses)).first()
-
- # incorrect if User.name is not unique
- session.query(User).options(
- subqueryload(User.addresses)
- ).order_by(User.name).first()
-
- # correct
- session.query(User).options(
- subqueryload(User.addresses)
- ).order_by(User.name, User.id).first()
-
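Why the unique tie-breaker matters can be shown without a database: ordering on a non-unique column leaves the relative order of ties up to the backend, so two executions of the same query may legitimately return different "first" rows. A plain-Python sketch:

```python
# two executions returning the same rows in different incidental order
rows_run_1 = [("jack", 2), ("jack", 1)]
rows_run_2 = [("jack", 1), ("jack", 2)]

# ordering by name alone leaves the tie between the two "jack" rows
# unresolved, so the "first" row depends on incidental ordering
first_a = sorted(rows_run_1, key=lambda r: r[0])[0]
first_b = sorted(rows_run_2, key=lambda r: r[0])[0]
print(first_a == first_b)  # False - same query, different "first" row

# ordering by (name, id) is fully deterministic
first_a = sorted(rows_run_1, key=lambda r: (r[0], r[1]))[0]
first_b = sorted(rows_run_2, key=lambda r: (r[0], r[1]))[0]
print(first_a == first_b)  # True
```

Because the parent query and the subqueryload query may each resolve the tie differently, only the deterministic ordering guarantees they agree on which rows are "first".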
-.. seealso::
-
- :ref:`faq_subqueryload_limit_sort` - detailed example
-
-.. _selectin_eager_loading:
-
-Select IN loading
------------------
-
-Select IN loading is similar in operation to subquery eager loading, however
-the SELECT statement which is emitted has a much simpler structure than that of
-subquery eager loading. In most cases, selectin loading is the simplest and most
-efficient way to eagerly load collections of objects. The only scenario in
-which selectin eager loading is not feasible is when the model is using
-composite primary keys, and the backend database does not support tuples with
-IN, which currently includes SQL Server.
-
-.. versionadded:: 1.2
-
-"Select IN" eager loading is provided using the ``"selectin"`` argument to
-:paramref:`_orm.relationship.lazy` or by using the :func:`.selectinload` loader
-option. This style of loading emits a SELECT that refers to the primary key
-values of the parent object, or in the case of a many-to-one
-relationship to those of the child objects, inside of an IN clause, in
-order to load related associations:
-
-.. sourcecode:: python+sql
-
- >>> jack = session.query(User).\
- ... options(selectinload(User.addresses)).\
- ... filter(or_(User.name == 'jack', User.name == 'ed')).all()
- {opensql}SELECT
- users.id AS users_id,
- users.name AS users_name,
- users.fullname AS users_fullname,
- users.nickname AS users_nickname
- FROM users
- WHERE users.name = ? OR users.name = ?
- ('jack', 'ed')
- SELECT
- addresses.id AS addresses_id,
- addresses.email_address AS addresses_email_address,
- addresses.user_id AS addresses_user_id
- FROM addresses
- WHERE addresses.user_id IN (?, ?)
- (5, 7)
-
-Above, the second SELECT refers to ``addresses.user_id IN (5, 7)``, where the
-"5" and "7" are the primary key values for the previous two ``User``
-objects loaded; after a batch of objects are completely loaded, their primary
-key values are injected into the ``IN`` clause for the second SELECT.
-Because the relationship between ``User`` and ``Address`` has a simple [1]_
-primary join condition, such that the
-primary key values for ``User`` can be derived from ``Address.user_id``, the
-statement has no joins or subqueries at all.
-
-.. versionchanged:: 1.3 selectin loading can omit the JOIN for a simple
- one-to-many collection.
-
-For simple [1]_ many-to-one loads, a JOIN is also not needed as the foreign key
-value from the parent object is used:
-
-.. sourcecode:: python+sql
-
- >>> session.query(Address).\
- ... options(selectinload(Address.user)).all()
- {opensql}SELECT
- addresses.id AS addresses_id,
- addresses.email_address AS addresses_email_address,
- addresses.user_id AS addresses_user_id
- FROM addresses
- SELECT
- users.id AS users_id,
- users.name AS users_name,
- users.fullname AS users_fullname,
- users.nickname AS users_nickname
- FROM users
- WHERE users.id IN (?, ?)
- (1, 2)
-
-.. versionchanged:: 1.3.6 selectin loading can also omit the JOIN for a simple
- many-to-one relationship.
-
-.. [1] by "simple" we mean that the :paramref:`_orm.relationship.primaryjoin`
- condition expresses an equality comparison between the primary key of the
- "one" side and a straight foreign key of the "many" side, without any
- additional criteria.
-
-Select IN loading also supports many-to-many relationships, where it currently
-will JOIN across all three tables to match rows from one side to the other.
-
-Things to know about this kind of loading include:
-
-* The SELECT statement emitted by the "selectin" loader strategy, unlike
- that of "subquery", does not
- require a subquery nor does it inherit any of the performance limitations
- of the original query; the lookup is a simple primary key lookup and should
- have high performance.
-
-* The special ordering requirements of subqueryload described at
- :ref:`subqueryload_ordering` also don't apply to selectin loading; selectin
- is always linking directly to a parent primary key and can't really
- return the wrong result.
-
-* "selectin" loading, unlike joined or subquery eager loading, always emits its
- SELECT in terms of the immediate parent objects just loaded, and not the
- original type of object at the top of the chain. So if eager loading many
- levels deep, "selectin" loading still will not require any JOINs for simple
- one-to-many or many-to-one relationships. In comparison, joined and
- subquery eager loading always refer to multiple JOINs up to the original
- parent.
-
-* The strategy emits a SELECT for up to 500 parent primary key values at a
- time, as the primary keys are rendered into a large IN expression in the
- SQL statement. Some databases like Oracle have a hard limit on how large
- an IN expression can be, and overall the size of the SQL string shouldn't
- be arbitrarily large.
-
-* As "selectin" loading relies upon IN, for a mapping with composite primary
- keys, it must use the "tuple" form of IN, which looks like ``WHERE
- (table.column_a, table.column_b) IN ((?, ?), (?, ?), (?, ?))``. This syntax
- is not currently supported on SQL Server and for SQLite requires at least
- version 3.15. There is no special logic in SQLAlchemy to check
- ahead of time which platforms support this syntax or not; if run against a
- non-supporting platform, the database will return an error immediately. An
- advantage to SQLAlchemy just running the SQL out for it to fail is that if a
- particular database does start supporting this syntax, it will work without
- any changes to SQLAlchemy (as was the case with SQLite).
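
The 500-keys-per-statement batching mentioned above can be sketched in plain Python; the helper below only illustrates how the parameter lists for successive IN clauses would be produced, and is not SQLAlchemy's actual implementation:

```python
def selectin_batches(primary_keys, batch_size=500):
    """Yield the parameter lists that would populate successive
    IN clauses, at most batch_size keys per SELECT."""
    for i in range(0, len(primary_keys), batch_size):
        yield primary_keys[i:i + batch_size]


# 1200 parent rows -> three SELECT statements of 500 + 500 + 200 keys
batches = list(selectin_batches(list(range(1200))))
print([len(b) for b in batches])  # [500, 500, 200]
```

Keeping each IN expression bounded avoids backend limits on expression size while still loading all collections in a fixed, small number of statements.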
-
-In general, "selectin" loading is probably superior to "subquery" eager loading
-in most ways, save for the syntax requirement with composite primary keys
-and possibly that it may emit many SELECT statements for larger result sets.
-As always, developers should spend time looking at the
-statements and results generated by their applications in development to
-check that things are working efficiently.
-
-.. _what_kind_of_loading:
-
-What Kind of Loading to Use ?
------------------------------
-
-Which type of loading to use typically comes down to optimizing the tradeoff
-between number of SQL executions, complexity of SQL emitted, and amount of
-data fetched. Let's take two examples, a :func:`~sqlalchemy.orm.relationship`
-which references a collection, and a :func:`~sqlalchemy.orm.relationship` that
-references a scalar many-to-one reference.
-
-* One to Many Collection
-
- * When using the default lazy loading, if you load 100 objects, and then access a collection on each of
- them, a total of 101 SQL statements will be emitted, although each statement will typically be a
- simple SELECT without any joins.
-
- * When using joined loading, the load of 100 objects and their collections will emit only one SQL
- statement. However, the
- total number of rows fetched will be equal to the sum of the size of all the collections, plus one
- extra row for each parent object that has an empty collection. Each row will also contain the full
- set of columns represented by the parents, repeated for each collection item - SQLAlchemy does not
- re-fetch these columns other than those of the primary key, however most DBAPIs (with some
- exceptions) will transmit the full data of each parent over the wire to the client connection in
- any case. Therefore joined eager loading only makes sense when the sizes of the collections are
- relatively small. The LEFT OUTER JOIN can also be performance intensive compared to an INNER JOIN.
-
- * When using subquery loading, the load of 100 objects will
- emit two SQL statements. The second statement will fetch a total number of
- rows equal to the sum of the size of all collections. An INNER JOIN is
- used, and a minimum of parent columns are requested, only the primary keys.
- So a subquery load makes sense when the collections are larger.
-
- * When multiple levels of depth are used with joined or subquery loading, loading
- collections-within-collections will multiply the total number of rows fetched in a Cartesian
- fashion. Both joined and subquery eager loading always join from the original parent class; if loading a collection
- four levels deep, there will be four JOINs out to the parent. selectin loading
- on the other hand will always have exactly one JOIN to the immediate
- parent table.
-
- * Using selectin loading, the load of 100 objects will also emit two SQL
- statements, the second of which refers to the 100 primary keys of the
- objects loaded. selectin loading will however render at most 500 primary
- key values into a single SELECT statement; so for a lead collection larger
- than 500, there will be a SELECT statement emitted for each batch of
- 500 objects selected.
-
- * Using multiple levels of depth with selectin loading does not incur the
- "cartesian" issue that joined and subquery eager loading have; the queries
- for selectin loading have the best performance characteristics and the
- fewest number of rows. The only caveat is that there might be more than
- one SELECT emitted depending on the size of the lead result.
-
- * selectin loading, unlike joined (when using collections) and subquery eager
- loading (all kinds of relationships), is potentially compatible with result
- set batching provided by :meth:`_query.Query.yield_per` assuming an appropriate
- database driver, so may be able to allow batching for large result sets.
-
-* Many to One Reference
-
- * When using the default lazy loading, a load of 100 objects will, as in the case of the collection,
- emit as many as 101 SQL statements. However - there is a significant exception to this, in that
- if the many-to-one reference is a simple foreign key reference to the target's primary key, each
- reference will be checked first in the current identity map using :meth:`_query.Query.get`. So here,
- if the collection of objects references a relatively small set of target objects, or the full set
- of possible target objects have already been loaded into the session and are strongly referenced,
- using the default of ``lazy='select'`` is by far the most efficient way to go.
-
- * When using joined loading, the load of 100 objects will emit only one SQL statement. The join
- will be a LEFT OUTER JOIN, and the total number of rows will be equal to 100 in all cases.
- If you know that each parent definitely has a child (i.e. the foreign
- key reference is NOT NULL), the joined load can be configured with
- :paramref:`_orm.relationship.innerjoin` set to ``True``, which is
- usually specified within the :func:`~sqlalchemy.orm.relationship`. For a load of objects where
- there are many possible target references which may have not been loaded already, joined loading
- with an INNER JOIN is extremely efficient.
-
- * Subquery loading will issue a second load for all the child objects, so for a load of 100 objects
- there would be two SQL statements emitted. There's probably not much advantage here over
- joined loading, however, except perhaps that subquery loading can use an INNER JOIN in all cases
- whereas joined loading requires that the foreign key is NOT NULL.
-
- * Selectin loading will also issue a second load for all the child objects (and as
- stated before, for larger results it will emit a SELECT per 500 rows), so for a load of 100 objects
- there would be two SQL statements emitted. The query itself still has to
- JOIN to the parent table, so again there's not too much advantage to
- selectin loading for many-to-one vs. joined eager loading save for the
- use of INNER JOIN in all cases.
-
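The statement counts discussed above can be summarized arithmetically. The sketch below assumes N parent objects, a single relationship that is fully loaded, and the selectin batch size of 500 noted earlier; it is a back-of-the-envelope model, not a measurement:

```python
import math


def statement_count(strategy, n_parents, selectin_batch=500):
    """Approximate number of SELECT statements to load N parents
    plus one relationship, per the tradeoffs discussed above."""
    if strategy == "lazy":
        return 1 + n_parents  # worst case: one extra SELECT per parent
    if strategy == "joined":
        return 1              # everything in one statement via JOIN
    if strategy == "subquery":
        return 2              # original query plus one relationship query
    if strategy == "selectin":
        return 1 + math.ceil(n_parents / selectin_batch)
    raise ValueError(strategy)


for s in ("lazy", "joined", "subquery", "selectin"):
    print(s, statement_count(s, 100))
```

For 100 parents this yields 101, 1, 2 and 2 statements respectively; the interesting differences are then in rows fetched and query complexity rather than statement count alone.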
-Polymorphic Eager Loading
--------------------------
-
-Specification of polymorphic options on a per-eager-load basis is supported.
-See the section :ref:`eagerloading_polymorphic_subtypes` for examples
-of the :meth:`.PropComparator.of_type` method in conjunction with the
-:func:`_orm.with_polymorphic` function.
-
-.. _wildcard_loader_strategies:
-
-Wildcard Loading Strategies
----------------------------
-
-Each of :func:`_orm.joinedload`, :func:`.subqueryload`, :func:`.lazyload`,
-:func:`.selectinload`,
-:func:`.noload`, and :func:`.raiseload` can be used to set the default
-style of :func:`_orm.relationship` loading
-for a particular query, affecting all :func:`_orm.relationship` -mapped
-attributes not otherwise
-specified in the :class:`_query.Query`. This feature is available by passing
-the string ``'*'`` as the argument to any of these options::
-
- session.query(MyClass).options(lazyload('*'))
-
-Above, the ``lazyload('*')`` option will supersede the ``lazy`` setting
-of all :func:`_orm.relationship` constructs in use for that query,
-except for those which use the ``'dynamic'`` style of loading.
-If some relationships specify
-``lazy='joined'`` or ``lazy='subquery'``, for example,
-using ``lazyload('*')`` will unilaterally
-cause all those relationships to use ``'select'`` loading, e.g. emit a
-SELECT statement when each attribute is accessed.
-
-The option does not supersede loader options stated in the
-query, such as :func:`_orm.joinedload`,
-:func:`.subqueryload`, etc. The query below will still use joined loading
-for the ``widget`` relationship::
-
- session.query(MyClass).options(
- lazyload('*'),
- joinedload(MyClass.widget)
- )
-
-If multiple ``'*'`` options are passed, the last one overrides
-those previously passed.
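
The "last one wins" behavior of multiple wildcard options, with explicitly named relationships still taking precedence, can be pictured as a series of dictionary updates; this is a schematic model only, not the ORM's internal representation:

```python
def resolve_strategies(defaults, options):
    """Apply loader options in order; a '*' entry rewrites the current
    strategy for every relationship, while a named entry pins one."""
    resolved = dict(defaults)
    for target, strategy in options:
        if target == "*":
            resolved = {name: strategy for name in resolved}
        else:
            resolved[target] = strategy
    return resolved


defaults = {"widget": "select", "items": "select"}
# lazyload('*') followed by joinedload(MyClass.widget):
# the explicit option for "widget" survives the wildcard
print(resolve_strategies(defaults, [("*", "lazy"), ("widget", "joined")]))
```
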
-
-Per-Entity Wildcard Loading Strategies
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-A variant of the wildcard loader strategy is the ability to set the strategy
-on a per-entity basis. For example, if querying for ``User`` and ``Address``,
-we can instruct all relationships on ``Address`` only to use lazy loading
-by first applying the :class:`_orm.Load` object, then specifying the ``*`` as a
-chained option::
-
- session.query(User, Address).options(
- Load(Address).lazyload('*'))
-
-Above, all relationships on ``Address`` will be set to a lazy load.
-
-.. _joinedload_and_join:
-
-.. _contains_eager:
-
-Routing Explicit Joins/Statements into Eagerly Loaded Collections
------------------------------------------------------------------
-
-The behavior of :func:`~sqlalchemy.orm.joinedload()` is such that joins are
-created automatically, using anonymous aliases as targets, the results of which
-are routed into collections and
-scalar references on loaded objects. It is often the case that a query already
-includes the necessary joins which represent a particular collection or scalar
-reference, and the joins added by the joinedload feature are redundant - yet
-you'd still like the collections/references to be populated.
-
-For this SQLAlchemy supplies the :func:`~sqlalchemy.orm.contains_eager()`
-option. This option is used in the same manner as the
-:func:`~sqlalchemy.orm.joinedload()` option except it is assumed that the
-:class:`~sqlalchemy.orm.query.Query` will specify the appropriate joins
-explicitly. Below, we specify a join between ``User`` and ``Address``
-and additionally establish this as the basis for eager loading of ``User.addresses``::
-
- class User(Base):
- __tablename__ = 'user'
- id = mapped_column(Integer, primary_key=True)
- addresses = relationship("Address")
-
- class Address(Base):
- __tablename__ = 'address'
-
- # ...
-
- q = session.query(User).join(User.addresses).\
- options(contains_eager(User.addresses))
-
-
-If the "eager" portion of the statement is "aliased", the path
-should be specified using :meth:`.PropComparator.of_type`, which allows
-the specific :func:`_orm.aliased` construct to be passed:
-
-.. sourcecode:: python+sql
-
- # use an alias of the Address entity
- adalias = aliased(Address)
-
- # construct a Query object which expects the "addresses" results
- query = session.query(User).\
- outerjoin(User.addresses.of_type(adalias)).\
- options(contains_eager(User.addresses.of_type(adalias)))
-
- # get results normally
- r = query.all()
- {opensql}SELECT
- users.user_id AS users_user_id,
- users.user_name AS users_user_name,
- adalias.address_id AS adalias_address_id,
- adalias.user_id AS adalias_user_id,
- adalias.email_address AS adalias_email_address,
- (...other columns...)
- FROM users
- LEFT OUTER JOIN email_addresses AS adalias
- ON users.user_id = adalias.user_id
-
-The path given as the argument to :func:`.contains_eager` needs
-to be a full path from the starting entity. For example if we were loading
-``User->orders->Order->items->Item``, the option would be used as::
-
- query(User).options(
- contains_eager(User.orders).
- contains_eager(Order.items))
-
-Using contains_eager() to load a custom-filtered collection result
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-When we use :func:`.contains_eager`, *we* are constructing ourselves the
-SQL that will be used to populate collections. From this, it naturally follows
-that we can opt to **modify** what values the collection is intended to store,
-by writing our SQL to load a subset of elements for collections or
-scalar attributes.
-
-As an example, we can load a ``User`` object and eagerly load only particular
-addresses into its ``.addresses`` collection by filtering the joined data,
-routing it using :func:`_orm.contains_eager`, also using
-:meth:`_query.Query.populate_existing` to ensure any already-loaded collections
-are overwritten::
-
- q = session.query(User).\
- join(User.addresses).\
- filter(Address.email_address.like('%@aol.com')).\
- options(contains_eager(User.addresses)).\
- populate_existing()
-
-The above query will load only ``User`` objects which contain at
-least one ``Address`` object whose ``email_address`` field contains the
-substring ``'aol.com'``; the ``User.addresses`` collection will contain **only**
-these ``Address`` entries, and *not* any other ``Address`` entries that are
-in fact associated with the collection.
-
-.. tip:: In all cases, the SQLAlchemy ORM does **not overwrite already loaded
- attributes and collections** unless told to do so. As there is an
- :term:`identity map` in use, it is often the case that an ORM query is
- returning objects that were in fact already present and loaded in memory.
- Therefore, when using :func:`_orm.contains_eager` to populate a collection
- in an alternate way, it is usually a good idea to use
- :meth:`_query.Query.populate_existing` as illustrated above so that an
- already-loaded collection is refreshed with the new data.
- :meth:`_query.Query.populate_existing` will reset **all** attributes that were
- already present, including pending changes, so make sure all data is flushed
- before using it. Using the :class:`_orm.Session` with its default behavior
- of :ref:`autoflush <session_flushing>` is sufficient.
-
-.. note:: The customized collection we load using :func:`_orm.contains_eager`
- is not "sticky"; that is, the next time this collection is loaded, it will
- be loaded with its usual default contents. The collection is subject
- to being reloaded if the object is expired, which occurs whenever the
- :meth:`.Session.commit`, :meth:`.Session.rollback` methods are used
- assuming default session settings, or the :meth:`.Session.expire_all`
- or :meth:`.Session.expire` methods are used.
-
-Creating Custom Load Rules
---------------------------
-
-.. deepalchemy:: This is an advanced technique! Great care and testing
- should be applied.
-
-The ORM has various edge cases where the value of an attribute is locally
-available, however the ORM itself doesn't have awareness of this. There
-are also cases when a user-defined system of loading attributes is desirable.
-To support the use case of user-defined loading systems, a key function
-:func:`.attributes.set_committed_value` is provided. This function is
-basically equivalent to Python's own ``setattr()`` function, except that
-when applied to a target object, SQLAlchemy's "attribute history" system
-which is used to determine flush-time changes is bypassed; the attribute
-is assigned in the same way as if the ORM loaded it that way from the database.
-
-The use of :func:`.attributes.set_committed_value` can be combined with another
-key event known as :meth:`.InstanceEvents.load` to produce attribute-population
-behaviors when an object is loaded. One such example is the bi-directional
-"one-to-one" case, where loading the "many-to-one" side of a one-to-one
-should also imply the value of the "one-to-many" side. The SQLAlchemy ORM
-does not consider backrefs when loading related objects, and it views a
-"one-to-one" as just another "one-to-many" that just happens to be one
-row.
-
-Given the following mapping::
-
- from sqlalchemy import Integer, ForeignKey
- from sqlalchemy.orm import relationship, backref, mapped_column
- from sqlalchemy.orm import DeclarativeBase
-
- class Base(DeclarativeBase):
- pass
-
-
- class A(Base):
- __tablename__ = 'a'
- id = mapped_column(Integer, primary_key=True)
- b_id = mapped_column(ForeignKey('b.id'))
- b = relationship(
- "B",
- backref=backref("a", uselist=False),
- lazy='joined')
-
-
- class B(Base):
- __tablename__ = 'b'
- id = mapped_column(Integer, primary_key=True)
-
-
-If we query for an ``A`` row, and then ask it for ``a.b.a``, we will get
-an extra SELECT::
-
- >>> a1.b.a
- SELECT a.id AS a_id, a.b_id AS a_b_id
- FROM a
- WHERE ? = a.b_id
-
-This SELECT is redundant because ``b.a`` is the same value as ``a1``. We
-can create an on-load rule to populate this for us::
-
- from sqlalchemy import event
- from sqlalchemy.orm import attributes
-
- @event.listens_for(A, "load")
- def load_b(target, context):
- if 'b' in target.__dict__:
- attributes.set_committed_value(target.b, 'a', target)
-
-Now when we query for ``A``, we will get ``A.b`` from the joined eager load,
-and ``A.b.a`` from our event:
-
-.. sourcecode:: pycon+sql
-
- a1 = s.query(A).first()
- {opensql}SELECT
- a.id AS a_id,
- a.b_id AS a_b_id,
- b_1.id AS b_1_id
- FROM a
- LEFT OUTER JOIN b AS b_1 ON b_1.id = a.b_id
- LIMIT ? OFFSET ?
- (1, 0)
- {stop}assert a1.b.a is a1
-
-
-Relationship Loader API
------------------------
-
-.. autofunction:: contains_eager
-
-.. autofunction:: defaultload
-
-.. autofunction:: immediateload
-
-.. autofunction:: joinedload
-
-.. autofunction:: lazyload
-
-.. autoclass:: sqlalchemy.orm.Load
- :members:
- :inherited-members: Generative
-
-.. autofunction:: noload
-
-.. autofunction:: raiseload
-
-.. autofunction:: selectinload
-
-.. autofunction:: subqueryload
stmt = select(File.path).where(File.filename == 'foo.txt')
-.. autofunction:: column_property
+Using Column Deferral with ``column_property()``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The column deferral feature introduced in the :ref:`queryguide_toplevel`
+at :ref:`orm_queryguide_column_deferral` may be applied at mapping time
+to a SQL expression mapped by :func:`_orm.column_property` by using the
+:func:`_orm.deferred` function in place of :func:`_orm.column_property`::
+
+ from sqlalchemy.orm import deferred
+
+ class User(Base):
+ __tablename__ = 'user'
+
+ id: Mapped[int] = mapped_column(primary_key=True)
+ firstname: Mapped[str] = mapped_column()
+ lastname: Mapped[str] = mapped_column()
+ fullname: Mapped[str] = deferred(firstname + " " + lastname)
+
+.. seealso::
+
+ :ref:`orm_queryguide_deferred_imperative`
+
Using a plain descriptor
Query-time SQL expressions as mapped attributes
-----------------------------------------------
-When using :meth:`.Session.query`, we have the option to specify not just
-mapped entities but ad-hoc SQL expressions as well. Suppose a class
-``A`` has integer attributes ``.x`` and ``.y``; we could query for ``A``
-objects, and additionally the sum of ``.x`` and ``.y``, as follows::
-
- q = session.query(A, A.x + A.y)
-
-The above query returns tuples of the form ``(A object, integer)``.
-
-An option exists which can apply the ad-hoc ``A.x + A.y`` expression to the
-returned ``A`` objects instead of as a separate tuple entry; this is the
-:func:`.with_expression` query option in conjunction with the
-:func:`.query_expression` attribute mapping. The class is mapped
-to include a placeholder attribute where any particular SQL expression
-may be applied::
-
- from sqlalchemy.orm import query_expression
-
- class A(Base):
- __tablename__ = 'a'
- id = mapped_column(Integer, primary_key=True)
- x = mapped_column(Integer)
- y = mapped_column(Integer)
-
- expr = query_expression()
-
-We can then query for objects of type ``A``, applying an arbitrary
-SQL expression to be populated into ``A.expr``::
-
- from sqlalchemy.orm import with_expression
- q = session.query(A).options(
- with_expression(A.expr, A.x + A.y))
-
-The :func:`.query_expression` mapping has these caveats:
-
-* On an object where :func:`.with_expression` was not used to populate
- the attribute, the attribute on an object instance will have the value
- ``None``, unless the :paramref:`_orm.query_expression.default_expr`
- parameter is set to an alternate SQL expression.
-
-* The query_expression value **does not populate on an object that is
- already loaded**. That is, this will **not work**::
-
- obj = session.query(A).first()
-
- obj = session.query(A).options(with_expression(A.expr, some_expr)).first()
-
- To ensure the attribute is re-loaded, use :meth:`_orm.Query.populate_existing`::
-
- obj = session.query(A).populate_existing().options(
- with_expression(A.expr, some_expr)).first()
-
-* The query_expression value **does not refresh when the object is
- expired**. Once the object is expired, either via :meth:`.Session.expire`
- or via the expire_on_commit behavior of :meth:`.Session.commit`, the value is
- removed from the attribute and will return ``None`` on subsequent access.
- Only by running a new :class:`_query.Query` that touches the object which includes
- a new :func:`.with_expression` directive will the attribute be set to a
- non-None value.
-
-* The mapped attribute currently **cannot** be applied to other parts of the
-  query, such as the WHERE clause or the ORDER BY clause, while making use of
-  the ad-hoc expression; that is, this won't work::
-
- # won't work
- q = session.query(A).options(
- with_expression(A.expr, A.x + A.y)
- ).filter(A.expr > 5).order_by(A.expr)
-
- The ``A.expr`` expression will resolve to NULL in the above WHERE clause
- and ORDER BY clause. To use the expression throughout the query, assign to a
- variable and use that::
-
- a_expr = A.x + A.y
- q = session.query(A).options(
- with_expression(A.expr, a_expr)
- ).filter(a_expr > 5).order_by(a_expr)
-
-.. versionadded:: 1.2
+In addition to being able to configure fixed SQL expressions on mapped classes,
+the SQLAlchemy ORM also includes a feature wherein objects may be loaded
+with the results of arbitrary SQL expressions which are set up at query time as part
+of their state. This behavior is available by configuring an ORM mapped
+attribute using :func:`_orm.query_expression` and then using the
+:func:`_orm.with_expression` loader option at query time. See the section
+:ref:`orm_queryguide_with_expression` for an example mapping and usage.
.. autofunction:: add_mapped_attribute
+.. autofunction:: column_property
+
.. autofunction:: declarative_base
.. autofunction:: declarative_mixin
Using INSERT, UPDATE and ON CONFLICT (i.e. upsert) to return ORM Objects
==========================================================================
-.. deepalchemy:: The feature of linking ORM objects to RETURNING is a new and
- experimental feature.
-
-.. versionadded:: 1.4.0
-
-The :term:`DML` constructs :func:`_dml.insert`, :func:`_dml.update`, and
-:func:`_dml.delete` feature a method :meth:`_dml.UpdateBase.returning` which on
-database backends that support RETURNING (PostgreSQL, SQL Server, some MariaDB
-versions) may be used to return database rows generated or matched by
-the statement as though they were SELECTed. The ORM-enabled UPDATE and DELETE
-statements may be combined with this feature, so that they return rows
-corresponding to all the rows which were matched by the criteria::
-
- from sqlalchemy import update
-
- stmt = update(User).where(User.name == "squidward").values(name="spongebob").\
- returning(User.id)
-
- for row in session.execute(stmt):
- print(f"id: {row.id}")
-
-The above example returns the ``User.id`` attribute for each row matched.
-Provided that each row contains at least a primary key value, we may opt to
-receive these rows as ORM objects, allowing ORM objects to be loaded from the
-database corresponding atomically to an UPDATE statement against those rows. To
-achieve this, we may combine the :class:`_dml.Update` construct which returns
-``User`` rows with a :func:`_sql.select` that's adapted to run this UPDATE
-statement in an ORM context using the :meth:`_sql.Select.from_statement`
-method::
-
- stmt = update(User).where(User.name == "squidward").values(name="spongebob").\
- returning(User)
-
- orm_stmt = select(User).from_statement(stmt).execution_options(populate_existing=True)
-
- for user in session.execute(orm_stmt).scalars():
- print("updated user: %s" % user)
-
-Above, we produce an :func:`_dml.update` construct that includes
-:meth:`_dml.Update.returning` given the full ``User`` entity, which will
-produce complete rows from the database table as it UPDATEs them; any arbitrary
-set of columns to load may be specified as long as the full primary key is
-included. Next, these rows are adapted to an ORM load by producing a
-:func:`_sql.select` for the desired entity, then adapting it to the UPDATE
-statement by passing the :class:`_dml.Update` construct to the
-:meth:`_sql.Select.from_statement` method; this special ORM method, introduced
-at :ref:`orm_queryguide_selecting_text`, produces an ORM-specific adapter that
-allows the given statement to act as though it were the SELECT of rows that is
-first described. No SELECT is actually emitted in the database, only the
-UPDATE..RETURNING we've constructed.
-
-Finally, we make use of :ref:`orm_queryguide_populate_existing` on the
-construct so that all the data returned by the UPDATE, including the columns
-we've updated, are populated into the returned objects, replacing any
-values which were there already. This has the same effect as if we had
-used the ``synchronize_session='fetch'`` strategy described previously
-at :ref:`orm_expression_update_delete_sync`.
+SQLAlchemy 2.0 includes enhanced capabilities for emitting several varieties
+of ORM-enabled INSERT, UPDATE, and upsert statements. See the
+document at :doc:`queryguide/dml` for documentation. For upsert, see
+:ref:`orm_queryguide_upsert`.
Using PostgreSQL ON CONFLICT with RETURNING to return upserted ORM objects
---------------------------------------------------------------------------
-The above approach can be used with INSERTs with RETURNING as well. As a more
-advanced example, below illustrates how to use the PostgreSQL
-:ref:`postgresql_insert_on_conflict` construct to INSERT or UPDATE rows in the
-database, while simultaneously producing those objects as ORM instances::
+This section has moved to :ref:`orm_queryguide_upsert`.
- from sqlalchemy.dialects.postgresql import insert
-
- stmt = insert(User).values(
- [
- dict(name="sandy", fullname="Sandy Cheeks"),
- dict(name="squidward", fullname="Squidward Tentacles"),
- dict(name="spongebob", fullname="Spongebob Squarepants"),
- ]
- )
-
- stmt = stmt.on_conflict_do_update(
- index_elements=[User.name], set_=dict(fullname=stmt.excluded.fullname)
- ).returning(User)
-
- orm_stmt = (
- select(User)
- .from_statement(stmt)
- .execution_options(populate_existing=True)
- )
- for user in session.execute(
- orm_stmt,
- ).scalars():
- print("inserted or updated: %s" % user)
-
-To start, we make sure we are using the PostgreSQL variant of the
-:func:`_postgresql.insert` construct. Next, we construct a multi-values
-INSERT statement, where a single INSERT statement will provide multiple rows
-to be inserted. On the PostgreSQL database, this syntax provides the most
-efficient means of sending many hundreds of rows at once to be INSERTed.
-
-From there, we could if we wanted add the ``RETURNING`` clause to produce
-a bulk INSERT. However, to make the example even more interesting, we will
-also add the PostgreSQL specific ``ON CONFLICT..DO UPDATE`` syntax so that
-rows which already exist based on a unique criteria will be UPDATEd instead.
-We assume there is an INDEX or UNIQUE constraint on the ``name`` column of the
-``user_account`` table above, and then specify an appropriate :meth:`_postgresql.Insert.on_conflict_do_update`
-criteria that will update the ``fullname`` column for rows that already exist.
-
-Finally, we add the :meth:`_dml.Insert.returning` clause as we did in the
-previous example, and select our ``User`` objects using the same
-:meth:`_sql.Select.from_statement` approach as we did earlier. Supposing the
-database has only the row ``(1, "squidward", NULL)`` present; this row will
-trigger the ON CONFLICT routine in our above statement, in other words perform
-the equivalent of an UPDATE statement. The other two rows,
-``(NULL, "sandy", "Sandy Cheeks")`` and
-``(NULL, "spongebob", "Spongebob Squarepants")`` do not yet exist in the
-database, and will be inserted using normal INSERT semantics; the primary key
-column ``id`` uses either ``SERIAL`` or ``IDENTITY`` to auto-generate new
-integer values.
-
-Using this above form, we see SQL emitted on the PostgreSQL database as:
-
-
-.. sourcecode:: pycon+sql
-
- {opensql}INSERT INTO user_account (name, fullname)
- VALUES (%(name_m0)s, %(fullname_m0)s), (%(name_m1)s, %(fullname_m1)s), (%(name_m2)s, %(fullname_m2)s)
- ON CONFLICT (name) DO UPDATE SET fullname = excluded.fullname
- RETURNING user_account.id, user_account.name, user_account.fullname
- {'name_m0': 'sandy', 'fullname_m0': 'Sandy Cheeks', 'name_m1': 'squidward', 'fullname_m1': 'Squidward Tentacles', 'name_m2': 'spongebob', 'fullname_m2': 'Spongebob Squarepants'}{stop}
-
- inserted or updated: User(id=2, name='sandy', fullname='Sandy Cheeks')
- inserted or updated: User(id=3, name='squidward', fullname='Squidward Tentacles')
- inserted or updated: User(id=1, name='spongebob', fullname='Spongebob Squarepants')
-
-Above we can also see that the INSERTed ``User`` objects have a
-newly generated primary key value as we would expect with any other ORM
-oriented INSERT statement.
-
-.. seealso::
-
- :ref:`orm_queryguide_selecting_text` - introduces the
- :meth:`_sql.Select.from_statement` method.
.. _session_partitioning:
Bulk Operations
===============
-.. deepalchemy:: Bulk operations are essentially lower-functionality versions
- of the Unit of Work's facilities for emitting INSERT and UPDATE statements
- on primary key targeted rows. These routines were added to suit some
- cases where many rows being inserted or updated could be run into the
- database without as much of the usual unit of work overhead, by disabling
- many unit of work features.
-
- There is **usually no need to use these routines, particularly in
- modern SQLAlchemy 2.0 which has greatly improved the performance
- of ORM unit-of-work INSERTs for most backends.** Ordinary ORM
- INSERT operations as well as the bulk methods documented here both take
- advantage of the same :ref:`engine_insertmanyvalues` feature introduced
- in SQLAlchemy 2.0. For backends that support RETURNING, the vast majority
- of performance overhead for bulk inserts has been resolved.
-
- As the bulk operations forego many unit of work features, please read all
- caveats at :ref:`bulk_operations_caveats`.
-
-.. note:: Bulk INSERT and UPDATE should not be confused with the
- feature known as :ref:`orm_expression_update_delete`, which
- allow a single UPDATE or DELETE statement with arbitrary WHERE
- criteria to be emitted.
-
-.. seealso::
-
- :ref:`orm_expression_update_delete` - using straight multi-row UPDATE and DELETE statements
- in an ORM context.
-
- :ref:`orm_dml_returning_objects` - use UPDATE, INSERT or upsert operations to
- return ORM objects
-
-Bulk INSERT/per-row UPDATE operations on the :class:`.Session` include
-:meth:`.Session.bulk_save_objects`, :meth:`.Session.bulk_insert_mappings`, and
-:meth:`.Session.bulk_update_mappings`. The purpose of these methods is to
-directly expose internal elements of the unit of work system, such that
-facilities for emitting INSERT and UPDATE statements given dictionaries or
-object states can be utilized alone, bypassing the normal unit of work
-mechanics of state, relationship and attribute management. The advantage of
-this approach is strictly that of reduced Python overhead:
-
-* The flush() process, including the survey of all objects, their state,
- their cascade status, the status of all objects associated with them
- via :func:`_orm.relationship`, and the topological sort of all operations to
- be performed are bypassed. This can in many cases reduce
- Python overhead.
-
-* The objects as given have no defined relationship to the target
- :class:`.Session`, even when the operation is complete, meaning there's no
- overhead in attaching them or managing their state in terms of the identity
- map or session.
-
-* The :meth:`.Session.bulk_insert_mappings` and :meth:`.Session.bulk_update_mappings`
- methods accept lists of plain Python dictionaries, not objects; this further
- reduces a large amount of overhead associated with instantiating mapped
- objects and assigning state to them, which normally is also subject to
- expensive tracking of history on a per-attribute basis.
-
-* The set of objects passed to all bulk methods are processed
- in the order they are received. In the case of
- :meth:`.Session.bulk_save_objects`, when objects of different types are passed,
- the INSERT and UPDATE statements are necessarily broken up into per-type
- groups. In order to reduce the number of batch INSERT or UPDATE statements
- passed to the DBAPI, ensure that the incoming list of objects
- are grouped by type.
-
-* In most cases, the bulk operations don't need to fetch newly generated
- primary key values after the INSERT proceeds. This is historically a
- major performance bottleneck in the ORM, however in modern ORM use most
- backends have full support for RETURNING with multi-row INSERT statements.
-
-* UPDATE statements can similarly be tailored such that all attributes
- are subject to the SET clause unconditionally, making it more
- likely that ``executemany()`` blocks can be used.
-
-The performance behavior of the bulk routines should be studied using the
-:ref:`examples_performance` example suite. This is a series of example
-scripts which illustrate Python call-counts across a variety of scenarios,
-including bulk insert and update scenarios.
-
-.. seealso::
-
- :ref:`examples_performance` - includes detailed examples of bulk operations
- contrasted against traditional Core and ORM methods, including performance
- metrics.
-
-Usage
------
-
-The methods each work in the context of the :class:`.Session` object's
-transaction, like any other::
-
- s = Session()
- objects = [
- User(name="u1"),
- User(name="u2"),
- User(name="u3")
- ]
- s.bulk_save_objects(objects)
-
-For :meth:`.Session.bulk_insert_mappings`, and :meth:`.Session.bulk_update_mappings`,
-dictionaries are passed::
-
- s.bulk_insert_mappings(User,
- [dict(name="u1"), dict(name="u2"), dict(name="u3")]
- )
-
-.. seealso::
-
- :meth:`.Session.bulk_save_objects`
-
- :meth:`.Session.bulk_insert_mappings`
-
- :meth:`.Session.bulk_update_mappings`
-
-
-Comparison to Core Insert / Update Constructs
----------------------------------------------
-
-The bulk methods offer performance that under particular circumstances
-can be close to that of using the core :class:`_expression.Insert` and
-:class:`_expression.Update` constructs in an "executemany" context (for a description
-of "executemany", see :ref:`tutorial_multiple_parameters` in the Core tutorial).
-In order to achieve this, the
-:paramref:`.Session.bulk_insert_mappings.return_defaults`
-flag should be disabled so that rows can be batched together. The example
-suite in :ref:`examples_performance` should be carefully studied in order
-to gain familiarity with how fast bulk performance can be achieved.
-
-.. _bulk_operations_caveats:
-
-ORM Compatibility / Caveats
-----------------------------
-
-.. warning:: Be sure to familiarize yourself with these limitations before
-   using the bulk routines.
-
-The bulk insert / update methods lose a significant amount of functionality
-versus traditional ORM use. The following is a listing of features that
-are **not available** when using these methods:
-
-* persistence along :func:`_orm.relationship` linkages
-
-* sorting of rows within order of dependency; rows are inserted or updated
- directly in the order in which they are passed to the methods
-
-* Session-management on the given objects, including attachment to the
-  session, and identity map management.
-
-* Functionality related to primary key mutation, ON UPDATE cascade -
- **mutation of primary key columns will not work** - as the original PK
- value of each row is not available, so the WHERE criteria cannot be
- generated.
-
-* SQL expression inserts / updates (e.g. :ref:`flush_embedded_sql_expressions`)
- are not supported in this mode as it prevents INSERT / UPDATE statements
- from being efficiently batched.
-
-* ORM events such as :meth:`.MapperEvents.before_insert`, etc. The bulk
- session methods have no event support.
-
-Features that **are available** include:
-
-* INSERTs and UPDATEs of mapped objects
-
-* Version identifier support
-
-* Multi-table mappings, such as joined-inheritance. Enable
- :paramref:`.Session.bulk_save_objects.return_defaults` for this to be used.
-
+.. legacy::
+ SQLAlchemy 2.0 has integrated the :class:`_orm.Session` "bulk insert" and
+ "bulk update" capabilities into 2.0 style :meth:`_orm.Session.execute`
+ method, making direct use of :class:`_dml.Insert` and :class:`_dml.Update`
+ constructs. See the document at :doc:`queryguide/dml` for documentation,
+ including :ref:`orm_queryguide_legacy_bulk` which illustrates migration
+ from the older methods to the new methods.
\ No newline at end of file
-.. currentmodule:: sqlalchemy.orm
+:orphan:
-.. _query_api_toplevel:
-
-================
-Legacy Query API
-================
-
-.. admonition:: About the Legacy Query API
-
-
- This page contains the Python generated documentation for the
- :class:`_query.Query` construct, which for many years was the sole SQL
- interface when working with the SQLAlchemy ORM. As of version 2.0, an all
- new way of working is now the standard approach, where the same
- :func:`_sql.select` construct that works for Core works just as well for the
- ORM, providing a consistent interface for building queries.
-
- For any application that is built on the SQLAlchemy ORM prior to the
- 2.0 API, the :class:`_query.Query` API will usually represent the vast
- majority of database access code within an application, and as such the
- majority of the :class:`_query.Query` API is
- **not being removed from SQLAlchemy**. The :class:`_query.Query` object
- behind the scenes now translates itself into a 2.0 style :func:`_sql.select`
- object when the :class:`_query.Query` object is executed, so it now is
- just a very thin adapter API.
-
- For an introduction to writing SQL for ORM objects in the 2.0 style,
- start with the :ref:`unified_tutorial`. Additional reference for 2.0 style
- querying is at :ref:`queryguide_toplevel`.
-
-The Query Object
-================
-
-:class:`_query.Query` is produced in terms of a given :class:`~.Session`, using the :meth:`~.Session.query` method::
-
- q = session.query(SomeMappedClass)
-
-Following is the full interface for the :class:`_query.Query` object.
-
-.. autoclass:: sqlalchemy.orm.Query
- :members:
- :inherited-members:
-
-ORM-Specific Query Constructs
-=============================
-
-This section has moved to :ref:`queryguide_additional`.
+This document has moved to :doc:`queryguide/query`
-.. highlight:: pycon+sql
+:orphan:
-.. _queryguide_toplevel:
-
-==================
-ORM Querying Guide
-==================
-
-This section provides an overview of emitting queries with the
-SQLAlchemy ORM using :term:`2.0 style` usage.
-
-Readers of this section should be familiar with the SQLAlchemy overview
-at :ref:`unified_tutorial`, and in particular most of the content here expands
-upon the content at :ref:`tutorial_selecting_data`.
-
-.. admonition:: Attention legacy users
-
- In the SQLAlchemy 2.x series, SQL SELECT statements for the ORM are
- constructed using the same :func:`_sql.select` construct as is used in
- Core, which is then invoked in terms of a :class:`_orm.Session` using the
- :meth:`_orm.Session.execute` method (as are the :func:`_sql.update` and
- :func:`_sql.delete` constructs now used for the
- :ref:`orm_expression_update_delete` feature). However, the legacy
- :class:`_query.Query` object, which performs these same steps as more of an
- "all-in-one" object, continues to remain available as a thin facade over
- this new system, to support applications that were built on the 1.x series
- without the need for wholesale replacement of all queries. For reference on
- this object, see the section :ref:`query_api_toplevel`.
-
-
-.. Setup code, not for display
-
- >>> from sqlalchemy import create_engine
- >>> engine = create_engine("sqlite+pysqlite:///:memory:", echo=True, future=True)
- >>> from sqlalchemy import MetaData, Table, Column, Integer, String
- >>> metadata_obj = MetaData()
- >>> user_table = Table(
- ... "user_account",
- ... metadata_obj,
- ... Column('id', Integer, primary_key=True),
- ... Column('name', String(30)),
- ... Column('fullname', String)
- ... )
- >>> from sqlalchemy import ForeignKey
- >>> address_table = Table(
- ... "address",
- ... metadata_obj,
- ... Column('id', Integer, primary_key=True),
- ... Column('user_id', None, ForeignKey('user_account.id')),
- ... Column('email_address', String, nullable=False)
- ... )
- >>> orders_table = Table(
- ... "user_order",
- ... metadata_obj,
- ... Column('id', Integer, primary_key=True),
- ... Column('user_id', None, ForeignKey('user_account.id')),
- ... Column('email_address', String, nullable=False)
- ... )
- >>> order_items_table = Table(
- ... "order_items",
- ... metadata_obj,
- ... Column("order_id", ForeignKey("user_order.id"), primary_key=True),
- ... Column("item_id", ForeignKey("item.id"), primary_key=True)
- ... )
- >>> items_table = Table(
- ... "item",
- ... metadata_obj,
- ... Column('id', Integer, primary_key=True),
- ... Column('name', String),
- ... Column('description', String)
- ... )
- >>> metadata_obj.create_all(engine)
- BEGIN (implicit)
- ...
- >>> from sqlalchemy.orm import DeclarativeBase
- >>> class Base(DeclarativeBase):
- ... pass
- >>> from sqlalchemy.orm import relationship
- >>> class User(Base):
- ... __table__ = user_table
- ...
- ... addresses = relationship("Address", back_populates="user")
- ... orders = relationship("Order")
- ...
- ... def __repr__(self):
- ... return f"User(id={self.id!r}, name={self.name!r}, fullname={self.fullname!r})"
-
- >>> class Address(Base):
- ... __table__ = address_table
- ...
- ... user = relationship("User", back_populates="addresses")
- ...
- ... def __repr__(self):
- ... return f"Address(id={self.id!r}, email_address={self.email_address!r})"
-
- >>> class Order(Base):
- ... __table__ = orders_table
- ... items = relationship("Item", secondary=order_items_table)
-
- >>> class Item(Base):
- ... __table__ = items_table
-
- >>> conn = engine.connect()
- >>> from sqlalchemy.orm import Session
- >>> session = Session(conn)
- >>> session.add_all([
- ... User(name="spongebob", fullname="Spongebob Squarepants", addresses=[
- ... Address(email_address="spongebob@sqlalchemy.org")
- ... ]),
- ... User(name="sandy", fullname="Sandy Cheeks", addresses=[
- ... Address(email_address="sandy@sqlalchemy.org"),
- ... Address(email_address="squirrel@squirrelpower.org")
- ... ]),
- ... User(name="patrick", fullname="Patrick Star", addresses=[
- ... Address(email_address="pat999@aol.com")
- ... ]),
- ... User(name="squidward", fullname="Squidward Tentacles", addresses=[
- ... Address(email_address="stentcl@sqlalchemy.org")
- ... ]),
- ... User(name="ehkrabs", fullname="Eugene H. Krabs"),
- ... ])
- >>> session.commit()
- BEGIN ...
- >>> conn.begin()
- BEGIN ...
-
-
-SELECT statements
-=================
-
-SELECT statements are produced by the :func:`_sql.select` function which
-returns a :class:`_sql.Select` object::
-
- >>> from sqlalchemy import select
- >>> stmt = select(User).where(User.name == 'spongebob')
-
-To invoke a :class:`_sql.Select` with the ORM, it is passed to
-:meth:`_orm.Session.execute`::
-
- {sql}>>> result = session.execute(stmt)
- SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- WHERE user_account.name = ?
- [...] ('spongebob',){stop}
- >>> for user_obj in result.scalars():
- ... print(f"{user_obj.name} {user_obj.fullname}")
- spongebob Spongebob Squarepants
-
-
-.. _orm_queryguide_select_columns:
-
-Selecting ORM Entities and Attributes
---------------------------------------
-
-The :func:`_sql.select` construct accepts ORM entities, including mapped
-classes as well as class-level attributes representing mapped columns, which
-are converted into ORM-annotated :class:`_sql.FromClause` and
-:class:`_sql.ColumnElement` elements at construction time.
-
-A :class:`_sql.Select` object that contains ORM-annotated entities is normally
-executed using a :class:`_orm.Session` object, and not a :class:`_future.Connection`
-object, so that ORM-related features may take effect, including that
-instances of ORM-mapped objects may be returned. When using the
-:class:`_future.Connection` directly, result rows will only contain
-column-level data.
-
-Below we select from the ``User`` entity, producing a :class:`_sql.Select`
-that selects from the mapped :class:`_schema.Table` to which ``User`` is mapped::
-
- {sql}>>> result = session.execute(select(User).order_by(User.id))
- SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account ORDER BY user_account.id
- [...] (){stop}
-
-When selecting from ORM entities, the entity itself is returned in the result
-as a row with a single element, as opposed to a series of individual columns;
-for example above, the :class:`_engine.Result` returns :class:`_engine.Row`
-objects that have just a single element per row, that element holding onto a
-``User`` object::
-
- >>> result.fetchone()
- (User(id=1, name='spongebob', fullname='Spongebob Squarepants'),)
-
-When selecting a list of single-element rows containing ORM entities, it is
-typical to skip the generation of :class:`_engine.Row` objects and instead
-receive ORM entities directly, which is achieved using the
-:meth:`_engine.Result.scalars` method::
-
- >>> result.scalars().all()
- [User(id=2, name='sandy', fullname='Sandy Cheeks'),
- User(id=3, name='patrick', fullname='Patrick Star'),
- User(id=4, name='squidward', fullname='Squidward Tentacles'),
- User(id=5, name='ehkrabs', fullname='Eugene H. Krabs')]
-
-ORM Entities are named in the result row based on their class name,
-such as below where we SELECT from both ``User`` and ``Address`` at the
-same time::
-
- >>> stmt = select(User, Address).join(User.addresses).order_by(User.id, Address.id)
-
- {sql}>>> for row in session.execute(stmt):
- ... print(f"{row.User.name} {row.Address.email_address}")
- SELECT user_account.id, user_account.name, user_account.fullname,
- address.id AS id_1, address.user_id, address.email_address
- FROM user_account JOIN address ON user_account.id = address.user_id
- ORDER BY user_account.id, address.id
- [...] (){stop}
- spongebob spongebob@sqlalchemy.org
- sandy sandy@sqlalchemy.org
- sandy squirrel@squirrelpower.org
- patrick pat999@aol.com
- squidward stentcl@sqlalchemy.org
-
-
-Selecting Individual Attributes
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The attributes on a mapped class, such as ``User.name`` and ``Address.email_address``,
-behave similarly to the entity class itself, such as ``User``,
-in that they are automatically converted into ORM-annotated Core objects
-when passed to :func:`_sql.select`. They may be used in the same way
-as table columns are used::
-
- {sql}>>> result = session.execute(
- ... select(User.name, Address.email_address).
- ... join(User.addresses).
- ... order_by(User.id, Address.id)
- ... )
- SELECT user_account.name, address.email_address
- FROM user_account JOIN address ON user_account.id = address.user_id
- ORDER BY user_account.id, address.id
- [...] (){stop}
-
-ORM attributes, themselves known as
-:class:`_orm.InstrumentedAttribute`
-objects, can be used in the same way as any :class:`_sql.ColumnElement`,
-and are delivered in result rows just the same way, such as below
-where we refer to their values by column name within each row::
-
- >>> for row in result:
- ... print(f"{row.name} {row.email_address}")
- spongebob spongebob@sqlalchemy.org
- sandy sandy@sqlalchemy.org
- sandy squirrel@squirrelpower.org
- patrick pat999@aol.com
- squidward stentcl@sqlalchemy.org
-
-Grouping Selected Attributes with Bundles
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The :class:`_orm.Bundle` construct is an extensible ORM-only construct that
-allows sets of column expressions to be grouped in result rows::
-
- >>> from sqlalchemy.orm import Bundle
- >>> stmt = select(
- ... Bundle("user", User.name, User.fullname),
- ... Bundle("email", Address.email_address)
- ... ).join_from(User, Address)
- {sql}>>> for row in session.execute(stmt):
- ... print(f"{row.user.name} {row.user.fullname} {row.email.email_address}")
- SELECT user_account.name, user_account.fullname, address.email_address
- FROM user_account JOIN address ON user_account.id = address.user_id
- [...] (){stop}
- spongebob Spongebob Squarepants spongebob@sqlalchemy.org
- sandy Sandy Cheeks sandy@sqlalchemy.org
- sandy Sandy Cheeks squirrel@squirrelpower.org
- patrick Patrick Star pat999@aol.com
- squidward Squidward Tentacles stentcl@sqlalchemy.org
-
-The :class:`_orm.Bundle` is potentially useful for creating lightweight
-views as well as custom column groupings such as mappings.
-
-.. seealso::
-
- :ref:`bundles` - in the ORM loading documentation.
-
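As a standalone sketch of the "custom column groupings" idea above, the :class:`_orm.Bundle` can be subclassed to override its ``create_row_processor()`` hook, here delivering each grouped element as a plain dictionary rather than a named tuple. The ``DictBundle`` name and the in-memory mapping are our own illustration, not part of the library:

```python
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Bundle, Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String(30))
    fullname = Column(String)


class DictBundle(Bundle):
    """Bundle subclass that delivers each row element as a dict."""

    def create_row_processor(self, query, procs, labels):
        # ``procs`` are the per-column getters; zip them with the labels
        # so each result row carries a plain dictionary.
        def proc(row):
            return dict(zip(labels, (getter(row) for getter in procs)))

        return proc


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="spongebob", fullname="Spongebob Squarepants"))
    session.commit()
    bn = DictBundle("user", User.name, User.fullname)
    row = session.execute(select(bn)).one()
    print(row.user)
```

With the subclass in place, ``row.user`` arrives as a dictionary keyed by the bundled column labels.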
-
-.. _orm_queryguide_orm_aliases:
-
-Selecting ORM Aliases
-^^^^^^^^^^^^^^^^^^^^^
-
-As discussed in the tutorial at :ref:`tutorial_using_aliases`, a SQL alias
-of an ORM entity is created using the :func:`_orm.aliased` construct
-against a mapped class::
-
- >>> from sqlalchemy.orm import aliased
- >>> u1 = aliased(User)
- >>> print(select(u1).order_by(u1.id))
- {opensql}SELECT user_account_1.id, user_account_1.name, user_account_1.fullname
- FROM user_account AS user_account_1 ORDER BY user_account_1.id
-
-As is the case when using :meth:`_schema.Table.alias`, the SQL alias
-is anonymously named. For the case of selecting the entity from a row
-with an explicit name, the :paramref:`_orm.aliased.name` parameter may be
-passed as well::
-
- >>> from sqlalchemy.orm import aliased
- >>> u1 = aliased(User, name="u1")
- >>> stmt = select(u1).order_by(u1.id)
- {sql}>>> row = session.execute(stmt).first()
- SELECT u1.id, u1.name, u1.fullname
- FROM user_account AS u1 ORDER BY u1.id
- [...] (){stop}
- >>> print(f"{row.u1.name}")
- spongebob
-
-The :class:`_orm.aliased` construct is also central to making use of subqueries
-with the ORM; the sections :ref:`orm_queryguide_subqueries` and
-:ref:`orm_queryguide_join_subqueries` discuss this further.
-
-
-.. _orm_queryguide_selecting_text:
-
-Getting ORM Results from Textual and Core Statements
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The ORM supports loading of entities from SELECT statements that come from other
-sources. The typical use case is that of a textual SELECT statement, which
-in SQLAlchemy is represented using the :func:`_sql.text` construct. The
-:func:`_sql.text` construct, once constructed, can be augmented with
-information
-about the ORM-mapped columns that the statement would load; this can then be
-associated with the ORM entity itself so that ORM objects can be loaded based
-on this statement.
-
-Given a textual SQL statement we'd like to load from::
-
- >>> from sqlalchemy import text
- >>> textual_sql = text("SELECT id, name, fullname FROM user_account ORDER BY id")
-
-We can add column information to the statement by using the
-:meth:`_sql.TextClause.columns` method; when this method is invoked, the
-:class:`_sql.TextClause` object is converted into a :class:`_sql.TextualSelect`
-object, which takes on a role that is comparable to the :class:`_sql.Select`
-construct. The :meth:`_sql.TextClause.columns` method
-is typically passed :class:`_schema.Column` objects or equivalent, and in this
-case we can make use of the ORM-mapped attributes on the ``User`` class
-directly::
-
- >>> textual_sql = textual_sql.columns(User.id, User.name, User.fullname)
-
-We now have an ORM-configured SQL construct that, as given, can load the "id",
-"name" and "fullname" columns separately. To use this SELECT statement as a
-source of complete ``User`` entities instead, we can link these columns to a
-regular ORM-enabled
-:class:`_sql.Select` construct using the :meth:`_sql.Select.from_statement`
-method::
-
- >>> # using from_statement()
- >>> orm_sql = select(User).from_statement(textual_sql)
- >>> for user_obj in session.execute(orm_sql).scalars():
- ... print(user_obj)
- {opensql}SELECT id, name, fullname FROM user_account ORDER BY id
- [...] (){stop}
- User(id=1, name='spongebob', fullname='Spongebob Squarepants')
- User(id=2, name='sandy', fullname='Sandy Cheeks')
- User(id=3, name='patrick', fullname='Patrick Star')
- User(id=4, name='squidward', fullname='Squidward Tentacles')
- User(id=5, name='ehkrabs', fullname='Eugene H. Krabs')
-
-The same :class:`_sql.TextualSelect` object can also be converted into
-a subquery using the :meth:`_sql.TextualSelect.subquery` method,
-and linked to the ``User`` entity using the :func:`_orm.aliased`
-construct, in a similar manner as discussed below in :ref:`orm_queryguide_subqueries`::
-
- >>> # using aliased() to select from a subquery
- >>> orm_subquery = aliased(User, textual_sql.subquery())
- >>> stmt = select(orm_subquery)
- >>> for user_obj in session.execute(stmt).scalars():
- ... print(user_obj)
- {opensql}SELECT anon_1.id, anon_1.name, anon_1.fullname
- FROM (SELECT id, name, fullname FROM user_account ORDER BY id) AS anon_1
- [...] (){stop}
- User(id=1, name='spongebob', fullname='Spongebob Squarepants')
- User(id=2, name='sandy', fullname='Sandy Cheeks')
- User(id=3, name='patrick', fullname='Patrick Star')
- User(id=4, name='squidward', fullname='Squidward Tentacles')
- User(id=5, name='ehkrabs', fullname='Eugene H. Krabs')
-
-The difference between using the :class:`_sql.TextualSelect` directly with
-:meth:`_sql.Select.from_statement` versus making use of :func:`_sql.aliased`
-is that in the former case, no subquery is produced in the resulting SQL.
-This can in some scenarios be advantageous from a performance or complexity
-perspective.
-
-.. seealso::
-
- :ref:`orm_dml_returning_objects` - The :meth:`_sql.Select.from_statement`
- method also works with :term:`DML` statements that support RETURNING.
-
-
-.. _orm_queryguide_subqueries:
-
-Selecting Entities from Subqueries
------------------------------------
-
-The :func:`_orm.aliased` construct discussed in the previous section
-can be used with any :class:`_sql.Subquery` construct that comes from a
-method such as :meth:`_sql.Select.subquery` to link ORM entities to the
-columns returned by that subquery; there must be a **column correspondence**
-relationship between the columns delivered by the subquery and the columns
-to which the entity is mapped, meaning, the subquery needs to be ultimately
-derived from those entities, such as in the example below::
-
- >>> inner_stmt = select(User).where(User.id < 7).order_by(User.id)
- >>> subq = inner_stmt.subquery()
- >>> aliased_user = aliased(User, subq)
- >>> stmt = select(aliased_user)
- >>> for user_obj in session.execute(stmt).scalars():
- ... print(user_obj)
- {opensql} SELECT anon_1.id, anon_1.name, anon_1.fullname
- FROM (SELECT user_account.id AS id, user_account.name AS name, user_account.fullname AS fullname
- FROM user_account
- WHERE user_account.id < ? ORDER BY user_account.id) AS anon_1
- [generated in ...] (7,)
- {stop}User(id=1, name='spongebob', fullname='Spongebob Squarepants')
- User(id=2, name='sandy', fullname='Sandy Cheeks')
- User(id=3, name='patrick', fullname='Patrick Star')
- User(id=4, name='squidward', fullname='Squidward Tentacles')
- User(id=5, name='ehkrabs', fullname='Eugene H. Krabs')
-
-.. seealso::
-
- :ref:`tutorial_subqueries_orm_aliased` - in the :ref:`unified_tutorial`
-
- :ref:`orm_queryguide_join_subqueries`
-
-.. _orm_queryguide_unions:
-
-Selecting Entities from UNIONs and other set operations
---------------------------------------------------------
-
-The :func:`_sql.union` and :func:`_sql.union_all` functions are the most
-common set operations, which along with other set operations such as
-:func:`_sql.except_`, :func:`_sql.intersect` and others deliver an object known as
-a :class:`_sql.CompoundSelect`, which is composed of multiple
-:class:`_sql.Select` constructs joined by a set-operation keyword. ORM entities may
-be selected from simple compound selects using the :meth:`_sql.Select.from_statement`
-method illustrated previously at :ref:`orm_queryguide_selecting_text`. In
-this method, the UNION statement is the complete statement that will be
-rendered; no additional criteria can be added after :meth:`_sql.Select.from_statement`
-is used::
-
- >>> from sqlalchemy import union_all
- >>> u = union_all(
- ... select(User).where(User.id < 2),
- ... select(User).where(User.id == 3)
- ... ).order_by(User.id)
- >>> stmt = select(User).from_statement(u)
- >>> for user_obj in session.execute(stmt).scalars():
- ... print(user_obj)
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- WHERE user_account.id < ? UNION ALL SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- WHERE user_account.id = ? ORDER BY id
- [generated in ...] (2, 3)
- {stop}User(id=1, name='spongebob', fullname='Spongebob Squarepants')
- User(id=3, name='patrick', fullname='Patrick Star')
-
-A :class:`_sql.CompoundSelect` construct can be more flexibly used within
-a query that can be further modified by organizing it into a subquery
-and linking it to an ORM entity using :func:`_orm.aliased`,
-as illustrated previously at :ref:`orm_queryguide_subqueries`. In the
-example below, we first use :meth:`_sql.CompoundSelect.subquery` to create
-a subquery of the UNION ALL statement; we then package that into the
-:func:`_orm.aliased` construct where it can be used like any other mapped
-entity in a :func:`_sql.select` construct, including that we can add filtering
-and order by criteria based on its exported columns::
-
- >>> subq = union_all(
- ... select(User).where(User.id < 2),
- ... select(User).where(User.id == 3)
- ... ).subquery()
- >>> user_alias = aliased(User, subq)
- >>> stmt = select(user_alias).order_by(user_alias.id)
- >>> for user_obj in session.execute(stmt).scalars():
- ... print(user_obj)
- {opensql}SELECT anon_1.id, anon_1.name, anon_1.fullname
- FROM (SELECT user_account.id AS id, user_account.name AS name, user_account.fullname AS fullname
- FROM user_account
- WHERE user_account.id < ? UNION ALL SELECT user_account.id AS id, user_account.name AS name, user_account.fullname AS fullname
- FROM user_account
- WHERE user_account.id = ?) AS anon_1 ORDER BY anon_1.id
- [generated in ...] (2, 3)
- {stop}User(id=1, name='spongebob', fullname='Spongebob Squarepants')
- User(id=3, name='patrick', fullname='Patrick Star')
-
-
-.. seealso::
-
- :ref:`tutorial_orm_union` - in the :ref:`unified_tutorial`
-
-.. _orm_queryguide_joins:
-
-Joins
------
-
-The :meth:`_sql.Select.join` and :meth:`_sql.Select.join_from` methods
-are used to construct SQL JOINs against a SELECT statement.
-
-This section will detail ORM use cases for these methods. For a general
-overview of their use from a Core perspective, see :ref:`tutorial_select_join`
-in the :ref:`unified_tutorial`.
-
-The usage of :meth:`_sql.Select.join` in an ORM context for :term:`2.0 style`
-queries is mostly equivalent, minus legacy use cases, to the usage of the
-:meth:`_orm.Query.join` method in :term:`1.x style` queries.
-
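That equivalence can be sketched side by side. Below, a standalone in-memory mapping of our own renders the same JOIN clause from both the legacy ``Query.join()`` and the 2.0 ``Select.join()`` forms:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String(30))
    addresses = relationship("Address")


class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    user_id = Column(ForeignKey("user_account.id"))
    email_address = Column(String)


session = Session(create_engine("sqlite://"))

# 2.0 style: build a Select, to be passed to Session.execute()
stmt = select(User).join(User.addresses)

# 1.x style: the legacy Query performs the same join
query = session.query(User).join(User.addresses)

# both render the identical JOIN clause
print(str(stmt))
print(str(query))
```

Only the surrounding column labeling differs between the two renderings; the JOIN itself is the same.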
-Simple Relationship Joins
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Consider a mapping between two classes ``User`` and ``Address``,
-with a relationship ``User.addresses`` representing a collection
-of ``Address`` objects associated with each ``User``. The most
-common usage of :meth:`_sql.Select.join`
-is to create a JOIN along this
-relationship, using the ``User.addresses`` attribute as an indicator
-for how this should occur::
-
- >>> stmt = select(User).join(User.addresses)
-
-Where above, the call to :meth:`_sql.Select.join` along
-``User.addresses`` will result in SQL approximately equivalent to::
-
- >>> print(stmt)
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account JOIN address ON user_account.id = address.user_id
-
-In the above example we refer to ``User.addresses`` as passed to
-:meth:`_sql.Select.join` as the "on clause", that is, it indicates
-how the "ON" portion of the JOIN should be constructed.
-
-Chaining Multiple Joins
-^^^^^^^^^^^^^^^^^^^^^^^^
-
-To construct a chain of joins, multiple :meth:`_sql.Select.join` calls may be
-used. The relationship-bound attribute implies both the left and right side of
-the join at once. Consider additional entities ``Order`` and ``Item``, where
-the ``User.orders`` relationship refers to the ``Order`` entity, and the
-``Order.items`` relationship refers to the ``Item`` entity, via an association
-table ``order_items``. Two :meth:`_sql.Select.join` calls will result in
-a JOIN first from ``User`` to ``Order``, and a second from ``Order`` to
-``Item``. However, since ``Order.items`` is a :ref:`many to many <relationships_many_to_many>`
-relationship, it results in two separate JOIN elements, for a total of three
-JOIN elements in the resulting SQL::
-
- >>> stmt = (
- ... select(User).
- ... join(User.orders).
- ... join(Order.items)
- ... )
- >>> print(stmt)
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- JOIN user_order ON user_account.id = user_order.user_id
- JOIN order_items AS order_items_1 ON user_order.id = order_items_1.order_id
- JOIN item ON item.id = order_items_1.item_id
-
-The order of each call to the :meth:`_sql.Select.join` method
-is significant only to the degree that the "left" side of what we would like
-to join from needs to be present in the list of FROMs before we indicate a
-new target. :meth:`_sql.Select.join` would not, for example, know how to
-join correctly if we were to specify
-``select(User).join(Order.items).join(User.orders)``, and would raise an
-error. In correct practice, the :meth:`_sql.Select.join` method is invoked
-in such a way that lines up with how we would want the JOIN clauses in SQL
-to be rendered, and each call should represent a clear link from what
-precedes it.
-
-All of the elements that we target in the FROM clause remain available
-as potential points to continue joining FROM. We can continue to add
-other elements to join FROM the ``User`` entity above, for example adding
-on the ``User.addresses`` relationship to our chain of joins::
-
- >>> stmt = (
- ... select(User).
- ... join(User.orders).
- ... join(Order.items).
- ... join(User.addresses)
- ... )
- >>> print(stmt)
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- JOIN user_order ON user_account.id = user_order.user_id
- JOIN order_items AS order_items_1 ON user_order.id = order_items_1.order_id
- JOIN item ON item.id = order_items_1.item_id
- JOIN address ON user_account.id = address.user_id
-
-
-Joins to a Target Entity or Selectable
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-A second form of :meth:`_sql.Select.join` allows any mapped entity or core
-selectable construct as a target. In this usage, :meth:`_sql.Select.join`
-will attempt to **infer** the ON clause for the JOIN, using the natural foreign
-key relationship between two entities::
-
- >>> stmt = select(User).join(Address)
- >>> print(stmt)
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account JOIN address ON user_account.id = address.user_id
-
-In the above calling form, :meth:`_sql.Select.join` is called upon to infer
-the "on clause" automatically. This calling form will ultimately raise
-an error if either there is no :class:`_schema.ForeignKeyConstraint` set up
-between the two mapped :class:`_schema.Table` constructs, or if there are multiple
-:class:`_schema.ForeignKeyConstraint` linkages between them such that the
-appropriate constraint to use is ambiguous.
-
-.. note:: When making use of :meth:`_sql.Select.join` or :meth:`_sql.Select.join_from`
- without indicating an ON clause, ORM
- configured :func:`_orm.relationship` constructs are **not taken into account**.
- Only the configured :class:`_schema.ForeignKeyConstraint` relationships between
- the entities at the level of the mapped :class:`_schema.Table` objects are consulted
- when an attempt is made to infer an ON clause for the JOIN.
-
-.. _queryguide_join_onclause:
-
-Joins to a Target with an ON Clause
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The third calling form allows both the target entity as well
-as the ON clause to be passed explicitly. An example that includes
-a SQL expression as the ON clause is as follows::
-
- >>> stmt = select(User).join(Address, User.id==Address.user_id)
- >>> print(stmt)
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account JOIN address ON user_account.id = address.user_id
-
-The expression-based ON clause may also be the relationship-bound
-attribute; this form in fact states the target of ``Address`` twice; however,
-this is accepted::
-
- >>> stmt = select(User).join(Address, User.addresses)
- >>> print(stmt)
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account JOIN address ON user_account.id = address.user_id
-
-The above syntax has more functionality if we use it in terms of aliased
-entities. The default target for ``User.addresses`` is the ``Address``
-class, however if we pass aliased forms using :func:`_orm.aliased`, the
-:func:`_orm.aliased` form will be used as the target, as in the example
-below::
-
- >>> a1 = aliased(Address)
- >>> a2 = aliased(Address)
- >>> stmt = (
- ... select(User).
- ... join(a1, User.addresses).
- ... join(a2, User.addresses).
- ... where(a1.email_address == 'ed@foo.com').
- ... where(a2.email_address == 'ed@bar.com')
- ... )
- >>> print(stmt)
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- JOIN address AS address_1 ON user_account.id = address_1.user_id
- JOIN address AS address_2 ON user_account.id = address_2.user_id
- WHERE address_1.email_address = :email_address_1
- AND address_2.email_address = :email_address_2
-
-When using relationship-bound attributes, the target entity can also be
-substituted with an aliased entity by using the
-:meth:`_orm.PropComparator.of_type` method. The same example using
-this method would be::
-
- >>> stmt = (
- ... select(User).
- ... join(User.addresses.of_type(a1)).
- ... join(User.addresses.of_type(a2)).
- ... where(a1.email_address == 'ed@foo.com').
- ... where(a2.email_address == 'ed@bar.com')
- ... )
- >>> print(stmt)
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- JOIN address AS address_1 ON user_account.id = address_1.user_id
- JOIN address AS address_2 ON user_account.id = address_2.user_id
- WHERE address_1.email_address = :email_address_1
- AND address_2.email_address = :email_address_2
-
-.. _orm_queryguide_join_on_augmented:
-
-Augmenting Built-in ON Clauses
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-As a substitute for providing a full custom ON condition for an
-existing relationship, the :meth:`_orm.PropComparator.and_` function
-may be applied to a relationship attribute to augment additional
-criteria into the ON clause; the additional criteria will be combined
-with the default criteria using AND. Below, the ON criteria between
-``user_account`` and ``address`` contains two separate elements joined
-by ``AND``, the first one being the natural join along the foreign key,
-and the second being a custom limiting criterion::
-
- >>> stmt = (
- ... select(User).
- ... join(User.addresses.and_(Address.email_address != 'foo@bar.com'))
- ... )
- >>> print(stmt)
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- JOIN address ON user_account.id = address.user_id
- AND address.email_address != :email_address_1
-
-.. seealso::
-
- The :meth:`_orm.PropComparator.and_` method also works with loader
- strategies. See the section :ref:`loader_option_criteria` for an example.
-
-.. _orm_queryguide_join_subqueries:
-
-Joining to Subqueries
-^^^^^^^^^^^^^^^^^^^^^^^
-
-The target of a join may be any "selectable" entity which usefully includes
-subqueries. When using the ORM, it is typical
-that these targets are stated in terms of an
-:func:`_orm.aliased` construct, but this is not strictly required,
-particularly if the joined entity is not being returned in the results. For example, to join from the
-``User`` entity to the ``Address`` entity, where the ``Address`` entity
-is represented as a row limited subquery, we first construct a :class:`_sql.Subquery`
-object using :meth:`_sql.Select.subquery`, which may then be used as the
-target of the :meth:`_sql.Select.join` method::
-
- >>> subq = (
- ... select(Address).
- ... where(Address.email_address == 'pat999@aol.com').
- ... subquery()
- ... )
- >>> stmt = select(User).join(subq, User.id == subq.c.user_id)
- >>> print(stmt)
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- JOIN (SELECT address.id AS id,
- address.user_id AS user_id, address.email_address AS email_address
- FROM address
- WHERE address.email_address = :email_address_1) AS anon_1
- ON user_account.id = anon_1.user_id{stop}
-
-The above SELECT statement when invoked via :meth:`_orm.Session.execute`
-will return rows that contain ``User`` entities, but not ``Address`` entities.
-In order to add ``Address`` entities to the set of entities that would be
-returned in result sets, we construct an :func:`_orm.aliased` object against
-the ``Address`` entity and the custom subquery. Note we also apply a name
-``"address"`` to the :func:`_orm.aliased` construct so that we may
-refer to it by name in the result row::
-
- >>> address_subq = aliased(Address, subq, name="address")
- >>> stmt = select(User, address_subq).join(address_subq)
- >>> for row in session.execute(stmt):
- ... print(f"{row.User} {row.address}")
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname,
- anon_1.id AS id_1, anon_1.user_id, anon_1.email_address
- FROM user_account
- JOIN (SELECT address.id AS id,
- address.user_id AS user_id, address.email_address AS email_address
- FROM address
- WHERE address.email_address = ?) AS anon_1 ON user_account.id = anon_1.user_id
- [...] ('pat999@aol.com',){stop}
- User(id=3, name='patrick', fullname='Patrick Star') Address(id=4, email_address='pat999@aol.com')
-
-The same subquery may be referred to by multiple entities as well,
-for a subquery that represents more than one entity. The subquery itself
-will remain unique within the statement, while the entities that are linked
-to it using :class:`_orm.aliased` refer to distinct sets of columns::
-
- >>> user_address_subq = (
- ... select(User.id, User.name, Address.id, Address.email_address).
- ... join_from(User, Address).
- ... where(Address.email_address.in_(['pat999@aol.com', 'squirrel@squirrelpower.org'])).
- ... subquery()
- ... )
- >>> user_alias = aliased(User, user_address_subq, name="user")
- >>> address_alias = aliased(Address, user_address_subq, name="address")
- >>> stmt = select(user_alias, address_alias).where(user_alias.name == 'sandy')
- >>> for row in session.execute(stmt):
- ... print(f"{row.user} {row.address}")
- {opensql}SELECT anon_1.id, anon_1.name, anon_1.id_1, anon_1.email_address
- FROM (SELECT user_account.id AS id, user_account.name AS name, address.id AS id_1, address.email_address AS email_address
- FROM user_account JOIN address ON user_account.id = address.user_id
- WHERE address.email_address IN (?, ?)) AS anon_1
- WHERE anon_1.name = ?
- [...] ('pat999@aol.com', 'squirrel@squirrelpower.org', 'sandy'){stop}
- User(id=2, name='sandy', fullname='Sandy Cheeks') Address(id=3, email_address='squirrel@squirrelpower.org')
-
-
-.. _orm_queryguide_select_from:
-
-Controlling what to Join From
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-In cases where the left side of the current state of
-:class:`_sql.Select` is not in line with what we want to join from,
-the :meth:`_sql.Select.join_from` method may be used::
-
- >>> stmt = select(Address).join_from(User, User.addresses).where(User.name == 'sandy')
- >>> print(stmt)
- SELECT address.id, address.user_id, address.email_address
- FROM user_account JOIN address ON user_account.id = address.user_id
- WHERE user_account.name = :name_1
-
-The :meth:`_sql.Select.join_from` method accepts two or three arguments, either
-in the form ``<join from>, <onclause>``, or ``<join from>, <join to>,
-[<onclause>]``::
-
- >>> stmt = select(Address).join_from(User, Address).where(User.name == 'sandy')
- >>> print(stmt)
- SELECT address.id, address.user_id, address.email_address
- FROM user_account JOIN address ON user_account.id = address.user_id
- WHERE user_account.name = :name_1
-
-To set up the initial FROM clause for a SELECT such that :meth:`_sql.Select.join`
-can be used subsequently, the :meth:`_sql.Select.select_from` method may also
-be used::
-
-
- >>> stmt = select(Address).select_from(User).join(Address).where(User.name == 'sandy')
- >>> print(stmt)
- SELECT address.id, address.user_id, address.email_address
- FROM user_account JOIN address ON user_account.id = address.user_id
- WHERE user_account.name = :name_1
-
-.. tip::
-
- The :meth:`_sql.Select.select_from` method does not actually have the
- final say on the order of tables in the FROM clause. If the statement
- also refers to a :class:`_sql.Join` construct that refers to existing
- tables in a different order, the :class:`_sql.Join` construct takes
- precedence. When we use methods like :meth:`_sql.Select.join`
- and :meth:`_sql.Select.join_from`, these methods are ultimately creating
- such a :class:`_sql.Join` object. Therefore we can see the contents
- of :meth:`_sql.Select.select_from` being overridden in a case like this::
-
- >>> stmt = select(Address).select_from(User).join(Address.user).where(User.name == 'sandy')
- >>> print(stmt)
- SELECT address.id, address.user_id, address.email_address
- FROM address JOIN user_account ON user_account.id = address.user_id
- WHERE user_account.name = :name_1
-
- Where above, we see that the FROM clause is ``address JOIN user_account``,
- even though we stated ``select_from(User)`` first. Because of the
- ``.join(Address.user)`` method call, the statement is ultimately equivalent
- to the following::
-
- >>> user_table = User.__table__
- >>> address_table = Address.__table__
- >>> from sqlalchemy.sql import join
- >>>
- >>> j = address_table.join(user_table, user_table.c.id == address_table.c.user_id)
- >>> stmt = (
- ... select(address_table).select_from(user_table).select_from(j).
- ... where(user_table.c.name == 'sandy')
- ... )
- >>> print(stmt)
- SELECT address.id, address.user_id, address.email_address
- FROM address JOIN user_account ON user_account.id = address.user_id
- WHERE user_account.name = :name_1
-
- The :class:`_sql.Join` construct above is added as another entry in the
- :meth:`_sql.Select.select_from` list which supersedes the previous entry.
-
-Special Relationship Operators
-------------------------------
-
-As detailed in the :ref:`unified_tutorial` at
-:ref:`tutorial_select_relationships`, ORM attributes mapped by
-:func:`_orm.relationship` may be used in a variety of ways as SQL construction
-helpers. In addition to the above documentation on
-:ref:`orm_queryguide_joins`, relationships may produce criteria to be used in
-the WHERE clause as well. See the linked sections below.
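-As a brief, hedged sketch of relationship-generated WHERE criteria (using a
-minimal self-contained mapping invented here, not this guide's fixture setup),
-the :meth:`.PropComparator.any` helper renders an EXISTS subquery:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, select
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String(30))
    addresses = relationship("Address", back_populates="user")


class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    user_id = Column(ForeignKey("user_account.id"))
    email_address = Column(String)
    user = relationship("User", back_populates="addresses")


# User.addresses.any(...) produces WHERE criteria in terms of the
# relationship, rendered as an EXISTS subquery against "address"
stmt = select(User).where(
    User.addresses.any(Address.email_address == "squirrel@squirrelpower.org")
)
print(stmt)
```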
-
-.. seealso::
-
- Sections in the :ref:`tutorial_orm_related_objects` section of the
- :ref:`unified_tutorial`:
-
- * :ref:`tutorial_relationship_exists` - helpers to generate EXISTS clauses
- using :func:`_orm.relationship`
-
-
- * :ref:`tutorial_relationship_operators` - helpers to create comparisons in
- terms of a :func:`_orm.relationship` in reference to a specific object
- instance
-
-
-ORM Loader Options
--------------------
-
-Loader options are objects that are passed to the :meth:`_sql.Select.options`
-method which affect the loading of both column and relationship-oriented
-attributes. The majority of loader options descend from the :class:`_orm.Load`
-hierarchy. For a complete overview of using loader options, see the linked
-sections below.
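-As a minimal self-contained sketch (models invented here for illustration,
-not this guide's fixtures), both column-oriented and relationship-oriented
-loader options may be combined on a single statement via
-:meth:`_sql.Select.options`:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, select
from sqlalchemy.orm import (
    Session,
    declarative_base,
    load_only,
    relationship,
    selectinload,
)

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String(30))
    fullname = Column(String)
    addresses = relationship("Address")


class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    user_id = Column(ForeignKey("user_account.id"))
    email_address = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(
        User(
            name="spongebob",
            fullname="Spongebob Squarepants",
            addresses=[Address(email_address="spongebob@sqlalchemy.org")],
        )
    )
    session.commit()

    stmt = select(User).options(
        load_only(User.name),  # column loader option: load only "name" up front
        selectinload(User.addresses),  # relationship loader option: eager SELECT IN
    )
    user = session.execute(stmt).scalars().one()
    loaded_name = user.name
    loaded_emails = [a.email_address for a in user.addresses]
```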
-
-.. seealso::
-
- * :ref:`loading_columns` - details mapper and loading options that affect
- how column and SQL-expression mapped attributes are loaded
-
- * :ref:`loading_toplevel` - details relationship and loading options that
- affect how :func:`_orm.relationship` mapped attributes are loaded
-
-.. _orm_queryguide_execution_options:
-
-ORM Execution Options
----------------------
-
-Execution options are keyword arguments that are passed to an
-"execution_options" method and take effect at the level of statement
-execution. The primary "execution option" method is in Core at
-:meth:`_engine.Connection.execution_options`. In the ORM, execution options may
-also be passed to :meth:`_orm.Session.execute` using the
-:paramref:`_orm.Session.execute.execution_options` parameter. Perhaps more
-succinctly, most execution options, including those specific to the ORM, can be
-assigned to a statement directly, using the
-:meth:`_sql.Executable.execution_options` method, so that the options may be
-associated directly with the statement instead of being configured separately.
-The examples below will use this form.
-
-.. _orm_queryguide_populate_existing:
-
-Populate Existing
-^^^^^^^^^^^^^^^^^^
-
-The ``populate_existing`` execution option ensures that for all rows
-loaded, the corresponding instances in the :class:`_orm.Session` will
-be fully refreshed, erasing any existing data within the objects
-(including pending changes) and replacing it with the data loaded from the
-result.
-
-Example use looks like::
-
- >>> stmt = select(User).execution_options(populate_existing=True)
- {sql}>>> result = session.execute(stmt)
- SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- ...
-
-Normally, ORM objects are only loaded once, and if they are matched up
-to the primary key in a subsequent result row, the row is not applied to the
-object. This is both to preserve pending, unflushed changes on the object
-as well as to avoid the overhead and complexity of refreshing data which
-is already there. The :class:`_orm.Session` assumes a default working
-model of a highly isolated transaction, and to the degree that data is
-expected to change within the transaction outside of the local changes being
-made, those use cases would be handled using explicit steps such as this method.
-
-Using ``populate_existing``, any set of objects that matches a query
-can be refreshed, and it also allows control over relationship loader options.
-E.g. to refresh an instance while also refreshing a related set of objects::
-
-    stmt = (
-        select(User).
-        where(User.name.in_(names)).
-        execution_options(populate_existing=True).
-        options(selectinload(User.addresses))
-    )
- # will refresh all matching User objects as well as the related
- # Address objects
- users = session.execute(stmt).scalars().all()
-
-Another use case for ``populate_existing`` is in support of various
-attribute loading features that can change how an attribute is loaded on
-a per-query basis. Options for which this apply include:
-
-* The :func:`_orm.with_expression` option
-
-* The :meth:`_orm.PropComparator.and_` method that can modify what a loader
- strategy loads
-
-* The :func:`_orm.contains_eager` option
-
-* The :func:`_orm.with_loader_criteria` option
-
-The ``populate_existing`` execution option is equivalent to the
-:meth:`_orm.Query.populate_existing` method in :term:`1.x style` ORM queries.
-
-.. seealso::
-
- :ref:`faq_session_identity` - in :doc:`/faq/index`
-
- :ref:`session_expire` - in the ORM :class:`_orm.Session`
- documentation
-
-.. _orm_queryguide_autoflush:
-
-Autoflush
-^^^^^^^^^
-
-This option when passed as ``False`` will cause the :class:`_orm.Session`
-to not invoke the "autoflush" step. It's equivalent to using the
-:attr:`_orm.Session.no_autoflush` context manager to disable autoflush::
-
- >>> stmt = select(User).execution_options(autoflush=False)
- {sql}>>> session.execute(stmt)
- SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- ...
-
-This option will also work on ORM-enabled :class:`_sql.Update` and
-:class:`_sql.Delete` queries.
-
-The ``autoflush`` execution option is equivalent to the
-:meth:`_orm.Query.autoflush` method in :term:`1.x style` ORM queries.
-
-.. seealso::
-
- :ref:`session_flushing`
-
-.. _orm_queryguide_yield_per:
-
-Fetching Large Result Sets with Yield Per
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The ``yield_per`` execution option is an integer value which will cause the
-:class:`_engine.Result` to buffer only a limited number of rows and/or ORM
-objects at a time, before making data available to the client.
-
-Normally, the ORM will construct ORM objects for **all** rows up front,
-assembling them into a single buffer, before passing this buffer to
-the :class:`_engine.Result` object as a source of rows to be returned.
-The rationale for this behavior is to allow correct behavior
-for features such as joined eager loading, uniquifying of results, and the
-general case of result handling logic that relies upon the identity map
-maintaining a consistent state for every object in a result set as it is
-fetched.
-
-The purpose of the ``yield_per`` option is to change this behavior so that the
-ORM result set is optimized for iteration through very large result sets (> 10K
-rows), where the user has determined that the above patterns don't apply. When
-``yield_per`` is used, the ORM will instead batch ORM results into
-sub-collections and yield rows from each sub-collection individually as the
-:class:`_engine.Result` object is iterated, so that the Python interpreter
-doesn't need to allocate very large areas of memory, which is both time consuming
-and leads to excessive memory use. The option affects both the way the database
-cursor is used as well as how the ORM constructs rows and objects to be
-passed to the :class:`_engine.Result`.
-
-.. tip::
-
- From the above, it follows that the :class:`_engine.Result` must be
- consumed in an iterable fashion, that is, using iteration such as
- ``for row in result`` or using partial row methods such as
- :meth:`_engine.Result.fetchmany` or :meth:`_engine.Result.partitions`.
- Calling :meth:`_engine.Result.all` will defeat the purpose of using
- ``yield_per``.
-
-Using ``yield_per`` is equivalent to making use
-of both the :paramref:`_engine.Connection.execution_options.stream_results`
-execution option, which selects for server side cursors to be used
-by the backend if supported, and the :meth:`_engine.Result.yield_per` method
-on the returned :class:`_engine.Result` object,
-which establishes a fixed size of rows to be fetched as well as a
-corresponding limit to how many ORM objects will be constructed at once.
-
-.. tip::
-
- ``yield_per`` is now available as a Core execution option as well,
- described in detail at :ref:`engine_stream_results`. This section details
- the use of ``yield_per`` as an execution option with an ORM
- :class:`_orm.Session`. The option behaves as similarly as possible
- in both contexts.
-
-``yield_per`` when used with the ORM is typically established either
-via the :meth:`.Executable.execution_options` method on the given statement
-or by passing it to the :paramref:`_orm.Session.execute.execution_options`
-parameter of :meth:`_orm.Session.execute` or other similar :class:`_orm.Session`
-method. In the example below it's invoked upon a statement::
-
- >>> stmt = select(User).execution_options(yield_per=10)
- {sql}>>> for row in session.execute(stmt):
- ... print(row)
- SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- [...] (){stop}
- (User(id=1, name='spongebob', fullname='Spongebob Squarepants'),)
- ...
-
-The above code is mostly equivalent to making use of the
-:paramref:`_engine.Connection.execution_options.stream_results` execution
-option, setting the :paramref:`_engine.Connection.execution_options.max_row_buffer`
-to the given integer size, and then using the :meth:`_engine.Result.yield_per`
-method on the :class:`_engine.Result` returned by the
-:class:`_orm.Session`, as in the following example::
-
- # equivalent code
- >>> stmt = select(User).execution_options(stream_results=True, max_row_buffer=10)
- {sql}>>> for row in session.execute(stmt).yield_per(10):
- ... print(row)
- SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- [...] (){stop}
- (User(id=1, name='spongebob', fullname='Spongebob Squarepants'),)
- ...
-
-``yield_per`` is also commonly used in combination with the
-:meth:`_engine.Result.partitions` method, which iterates rows in grouped
-partitions. The size of each partition defaults to the integer value passed to
-``yield_per``, as in the below example::
-
- >>> stmt = select(User).execution_options(yield_per=10)
- {sql}>>> for partition in session.execute(stmt).partitions():
- ... for row in partition:
- ... print(row)
- SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- [...] (){stop}
- (User(id=1, name='spongebob', fullname='Spongebob Squarepants'),)
- ...
-
-When ``yield_per`` is used, the
-:paramref:`_engine.Connection.execution_options.stream_results` option is also
-set for the Core execution, so that a streaming / server side cursor will be
-used if the backend supports it.
-
-The ``yield_per`` execution option **is not compatible** with
-:ref:`"subquery" eager loading <subquery_eager_loading>` or
-:ref:`"joined" eager loading <joined_eager_loading>` when using collections. It
-is potentially compatible with :ref:`"select in" eager loading
-<selectin_eager_loading>`, provided the database driver supports multiple,
-independent cursors.
-
-Additionally, the ``yield_per`` execution option is not compatible
-with the :meth:`_engine.Result.unique` method; as this method relies upon
-storing a complete set of identities for all rows, it would necessarily
-defeat the purpose of using ``yield_per`` which is to handle an arbitrarily
-large number of rows.
-
-.. versionchanged:: 1.4.6 An exception is raised when ORM rows are fetched
- from a :class:`_engine.Result` object that makes use of the
- :meth:`_engine.Result.unique` filter, at the same time as the ``yield_per``
- execution option is used.
-
-When using the legacy :class:`_orm.Query` object with
-:term:`1.x style` ORM use, the :meth:`_orm.Query.yield_per` method
-will have the same result as that of the ``yield_per`` execution option.
-
-
-.. seealso::
-
- :ref:`engine_stream_results`
-
-ORM Update / Delete with Arbitrary WHERE clause
-================================================
-
-The :meth:`_orm.Session.execute` method, in addition to handling ORM-enabled
-:class:`_sql.Select` objects, can also accommodate ORM-enabled
-:class:`_sql.Update` and :class:`_sql.Delete` objects, which UPDATE or DELETE
-any number of database rows while also being able to synchronize the state of
-matching objects locally present in the :class:`_orm.Session`. See the section
-:ref:`orm_expression_update_delete` for background on this feature.
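-As a hedged, self-contained sketch of this pattern (the mapping below is
-invented for illustration; see :ref:`orm_expression_update_delete` for the
-authoritative description), an ORM-enabled UPDATE with an arbitrary WHERE
-clause can also synchronize matching objects already present in the
-:class:`_orm.Session`:

```python
from sqlalchemy import Column, Integer, String, create_engine, select, update
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String(30))
    fullname = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([User(name="sandy"), User(name="patrick")])
    session.flush()

    # an object already present in the Session's identity map
    sandy = session.execute(select(User).filter_by(name="sandy")).scalar_one()

    # UPDATE any number of rows matching the WHERE clause; the "fetch"
    # synchronization strategy also refreshes matching in-session objects
    session.execute(
        update(User)
        .where(User.name == "sandy")
        .values(fullname="Sandy Cheeks")
        .execution_options(synchronize_session="fetch")
    )

    # the in-session object reflects the UPDATE without an explicit refresh
    updated_fullname = sandy.fullname
```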
-
-
-.. Setup code, not for display
-
- >>> conn.close()
- ROLLBACK
-
-.. _queryguide_inspection:
-
-Inspecting entities and columns from ORM-enabled SELECT and DML statements
-==========================================================================
-
-The :func:`_sql.select` construct, as well as the :func:`_sql.insert`, :func:`_sql.update`
-and :func:`_sql.delete` constructs (for the latter DML constructs, as of SQLAlchemy
-1.4.33), all support the ability to inspect the entities against which these
-statements are created, as well as the columns and datatypes that would
-be returned in a result set.
-
-For a :class:`.Select` object, this information is available from the
-:attr:`.Select.column_descriptions` attribute. This attribute operates in the
-same way as the legacy :attr:`.Query.column_descriptions` attribute. The format
-returned is a list of dictionaries::
-
- >>> from pprint import pprint
- >>> user_alias = aliased(User, name='user2')
- >>> stmt = select(User, User.id, user_alias)
- >>> pprint(stmt.column_descriptions)
- [{'aliased': False,
- 'entity': <class 'User'>,
- 'expr': <class 'User'>,
- 'name': 'User',
- 'type': <class 'User'>},
- {'aliased': False,
- 'entity': <class 'User'>,
- 'expr': <....InstrumentedAttribute object at ...>,
- 'name': 'id',
- 'type': Integer()},
- {'aliased': True,
- 'entity': <AliasedClass ...; User>,
- 'expr': <AliasedClass ...; User>,
- 'name': 'user2',
- 'type': <class 'User'>}]
-
-
-When :attr:`.Select.column_descriptions` is used with non-ORM objects
-such as plain :class:`.Table` or :class:`.Column` objects, the entries
-will contain basic information about individual columns returned in all
-cases::
-
- >>> stmt = select(user_table, address_table.c.id)
- >>> pprint(stmt.column_descriptions)
- [{'expr': Column('id', Integer(), table=<user_account>, primary_key=True, nullable=False),
- 'name': 'id',
- 'type': Integer()},
- {'expr': Column('name', String(length=30), table=<user_account>),
- 'name': 'name',
- 'type': String(length=30)},
- {'expr': Column('fullname', String(), table=<user_account>),
- 'name': 'fullname',
- 'type': String()},
- {'expr': Column('id', Integer(), table=<address>, primary_key=True, nullable=False),
- 'name': 'id_1',
- 'type': Integer()}]
-
-.. versionchanged:: 1.4.33 The :attr:`.Select.column_descriptions` attribute now returns
- a value when used against a :class:`.Select` that is not ORM-enabled. Previously,
- this would raise ``NotImplementedError``.
-
-
-For :func:`_sql.insert`, :func:`.update` and :func:`.delete` constructs, there are
-two separate attributes. One is :attr:`.UpdateBase.entity_description` which
-returns information about the primary ORM entity and database table which the
-DML construct would be affecting::
-
- >>> from sqlalchemy import update
- >>> stmt = update(User).values(name="somename").returning(User.id)
- >>> pprint(stmt.entity_description)
- {'entity': <class 'User'>,
- 'expr': <class 'User'>,
- 'name': 'User',
- 'table': Table('user_account', ...),
- 'type': <class 'User'>}
-
-.. tip:: The :attr:`.UpdateBase.entity_description` includes an entry
- ``"table"`` which is actually the **table to be inserted, updated or
- deleted** by the statement, which is **not** always the same as the SQL
- "selectable" to which the class may be mapped. For example, in a
- joined-table inheritance scenario, ``"table"`` will refer to the local table
- for the given entity.
-
-The other is :attr:`.UpdateBase.returning_column_descriptions` which
-delivers information about the columns present in the RETURNING collection
-in a manner roughly similar to that of :attr:`.Select.column_descriptions`::
-
- >>> pprint(stmt.returning_column_descriptions)
- [{'aliased': False,
- 'entity': <class 'User'>,
- 'expr': <sqlalchemy.orm.attributes.InstrumentedAttribute ...>,
- 'name': 'id',
- 'type': Integer()}]
-
-.. versionadded:: 1.4.33 Added the :attr:`.UpdateBase.entity_description`
- and :attr:`.UpdateBase.returning_column_descriptions` attributes.
-
-
-.. _queryguide_additional:
-
-Additional ORM API Constructs
-=============================
-
-
-.. autofunction:: sqlalchemy.orm.aliased
-
-.. autoclass:: sqlalchemy.orm.util.AliasedClass
-
-.. autoclass:: sqlalchemy.orm.util.AliasedInsp
-
-.. autoclass:: sqlalchemy.orm.Bundle
- :members:
-
-.. autofunction:: sqlalchemy.orm.with_loader_criteria
-
-.. autofunction:: sqlalchemy.orm.join
-
-.. autofunction:: sqlalchemy.orm.outerjoin
-
-.. autofunction:: sqlalchemy.orm.with_parent
+This document has moved to :doc:`queryguide/index`.
--- /dev/null
+:orphan:
+
+========================================
+Setup for ORM Queryguide: Column Loading
+========================================
+
+This page illustrates the mappings and fixture data used by the
+:doc:`columns` document of the :ref:`queryguide_toplevel`.
+
+.. sourcecode:: python
+
+ >>> from typing import List
+ >>> from typing import Optional
+ >>>
+ >>> from sqlalchemy import Column
+ >>> from sqlalchemy import create_engine
+ >>> from sqlalchemy import ForeignKey
+ >>> from sqlalchemy import LargeBinary
+ >>> from sqlalchemy import Table
+ >>> from sqlalchemy import Text
+ >>> from sqlalchemy.orm import DeclarativeBase
+ >>> from sqlalchemy.orm import Mapped
+ >>> from sqlalchemy.orm import mapped_column
+ >>> from sqlalchemy.orm import relationship
+ >>> from sqlalchemy.orm import Session
+ >>>
+ >>>
+ >>> class Base(DeclarativeBase):
+ ... pass
+ ...
+ >>> class User(Base):
+ ... __tablename__ = "user_account"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... name: Mapped[str]
+ ... fullname: Mapped[Optional[str]]
+ ... books: Mapped[List["Book"]] = relationship(back_populates="owner")
+ ... def __repr__(self) -> str:
+ ... return f"User(id={self.id!r}, name={self.name!r}, fullname={self.fullname!r})"
+ ...
+ >>> class Book(Base):
+ ... __tablename__ = "book"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... owner_id: Mapped[int] = mapped_column(ForeignKey("user_account.id"))
+ ... title: Mapped[str]
+ ... summary: Mapped[str] = mapped_column(Text)
+ ... cover_photo: Mapped[bytes] = mapped_column(LargeBinary)
+ ... owner: Mapped["User"] = relationship(back_populates="books")
+ ... def __repr__(self) -> str:
+ ... return f"Book(id={self.id!r}, title={self.title!r})"
+ ...
+ ...
+ >>> engine = create_engine("sqlite+pysqlite:///:memory:", echo=True)
+ >>> Base.metadata.create_all(engine)
+ BEGIN ...
+ >>> conn = engine.connect()
+ >>> session = Session(conn)
+ >>> session.add_all(
+ ... [
+ ... User(
+ ... name="spongebob",
+ ... fullname="Spongebob Squarepants",
+ ... books=[
+ ... Book(title="100 Years of Krabby Patties", summary="some long summary", cover_photo=b'binary_image_data'),
+ ... Book(title="Sea Catch 22", summary="another long summary", cover_photo=b'binary_image_data'),
+ ... Book(title="The Sea Grapes of Wrath", summary="yet another summary", cover_photo=b'binary_image_data'),
+ ... ],
+ ... ),
+ ... User(
+ ... name="sandy",
+ ... fullname="Sandy Cheeks",
+ ... books=[
+ ... Book(title="A Nut Like No Other", summary="some long summary", cover_photo=b'binary_image_data'),
+ ... Book(title="Geodesic Domes: A Retrospective", summary="another long summary", cover_photo=b'binary_image_data'),
+ ... Book(title="Rocketry for Squirrels", summary="yet another summary", cover_photo=b'binary_image_data'),
+ ... ],
+ ... ),
+ ... ]
+ ... )
+ >>> session.commit()
+ BEGIN ... COMMIT
+ >>> conn.begin()
+ BEGIN ...
--- /dev/null
+:orphan:
+
+======================================
+Setup for ORM Queryguide: DML
+======================================
+
+This page illustrates the mappings and fixture data used by the
+:doc:`dml` document of the :ref:`queryguide_toplevel`.
+
+.. sourcecode:: python
+
+ >>> from typing import List
+ >>> from typing import Optional
+ >>> import datetime
+ >>>
+ >>> from sqlalchemy import Column
+ >>> from sqlalchemy import create_engine
+ >>> from sqlalchemy import ForeignKey
+ >>> from sqlalchemy import Table
+ >>> from sqlalchemy.orm import DeclarativeBase
+ >>> from sqlalchemy.orm import Mapped
+ >>> from sqlalchemy.orm import mapped_column
+ >>> from sqlalchemy.orm import relationship
+ >>> from sqlalchemy.orm import Session
+ >>>
+ >>>
+ >>> class Base(DeclarativeBase):
+ ... pass
+ ...
+ >>> class User(Base):
+ ... __tablename__ = "user_account"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... name: Mapped[str] = mapped_column(unique=True)
+ ... fullname: Mapped[Optional[str]]
+ ... species: Mapped[Optional[str]]
+ ... addresses: Mapped[List["Address"]] = relationship(back_populates="user")
+ ... def __repr__(self) -> str:
+ ... return f"User(name={self.name!r}, fullname={self.fullname!r})"
+ ...
+ >>> class Address(Base):
+ ... __tablename__ = "address"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... user_id: Mapped[int] = mapped_column(ForeignKey("user_account.id"))
+ ... email_address: Mapped[str]
+ ... user: Mapped[User] = relationship(back_populates="addresses")
+ ... def __repr__(self) -> str:
+ ... return f"Address(email_address={self.email_address!r})"
+ ...
+ >>> class LogRecord(Base):
+ ... __tablename__ = "log_record"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... message: Mapped[str]
+ ... code: Mapped[str]
+ ... timestamp: Mapped[datetime.datetime]
+ ... def __repr__(self):
+ ... return f"LogRecord({self.message!r}, {self.code!r}, {self.timestamp!r})"
+
+ >>> class Employee(Base):
+ ... __tablename__ = "employee"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... name: Mapped[str]
+ ... type: Mapped[str]
+ ... def __repr__(self):
+ ... return f"{self.__class__.__name__}({self.name!r})"
+ ... __mapper_args__ = {
+ ... "polymorphic_identity": "employee",
+ ... "polymorphic_on": "type",
+ ... }
+ ...
+ >>> class Manager(Employee):
+ ... __tablename__ = "manager"
+ ... id: Mapped[int] = mapped_column(
+ ... ForeignKey("employee.id"), primary_key=True
+ ... )
+ ... manager_name: Mapped[str]
+ ... def __repr__(self):
+ ... return f"{self.__class__.__name__}({self.name!r}, manager_name={self.manager_name!r})"
+ ... __mapper_args__ = {
+ ... "polymorphic_identity": "manager",
+ ... }
+ ...
+ >>> class Engineer(Employee):
+ ... __tablename__ = "engineer"
+ ... id: Mapped[int] = mapped_column(
+ ... ForeignKey("employee.id"), primary_key=True
+ ... )
+ ... engineer_info: Mapped[str]
+ ... def __repr__(self):
+ ... return f"{self.__class__.__name__}({self.name!r}, engineer_info={self.engineer_info!r})"
+ ... __mapper_args__ = {
+ ... "polymorphic_identity": "engineer",
+ ... }
+ ...
+
+ >>> engine = create_engine("sqlite+pysqlite:///:memory:", echo=True)
+ >>> Base.metadata.create_all(engine)
+ BEGIN ...
+ >>> conn = engine.connect()
+ >>> session = Session(conn)
+ >>> conn.begin()
+ BEGIN ...
--- /dev/null
+:orphan:
+
+.. Setup code, not for display
+
+ >>> session.close()
+ >>> conn.close()
+ ...
--- /dev/null
+:orphan:
+
+============================================
+Setup for ORM Queryguide: Joined Inheritance
+============================================
+
+This page illustrates the mappings and fixture data used by the
+:ref:`joined_inheritance` examples in the :doc:`inheritance` document of
+the :ref:`queryguide_toplevel`.
+
+.. sourcecode:: python
+
+
+ >>> from sqlalchemy import create_engine
+ >>> from sqlalchemy import ForeignKey
+ >>> from sqlalchemy.orm import DeclarativeBase
+ >>> from sqlalchemy.orm import Mapped
+ >>> from sqlalchemy.orm import mapped_column
+ >>> from sqlalchemy.orm import relationship
+ >>> from sqlalchemy.orm import Session
+ >>>
+ >>>
+ >>> class Base(DeclarativeBase):
+ ... pass
+ ...
+ >>> class Company(Base):
+ ... __tablename__ = "company"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... name: Mapped[str]
+ ... employees: Mapped[list["Employee"]] = relationship(back_populates="company")
+ ...
+ >>>
+ >>> class Employee(Base):
+ ... __tablename__ = "employee"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... name: Mapped[str]
+ ... type: Mapped[str]
+ ... company_id: Mapped[int] = mapped_column(ForeignKey("company.id"))
+ ... company: Mapped[Company] = relationship(back_populates="employees")
+ ... def __repr__(self):
+ ... return f"{self.__class__.__name__}({self.name!r})"
+ ... __mapper_args__ = {
+ ... "polymorphic_identity": "employee",
+ ... "polymorphic_on": "type",
+ ... }
+ ...
+ >>>
+ >>> class Manager(Employee):
+ ... __tablename__ = "manager"
+ ... id: Mapped[int] = mapped_column(
+ ... ForeignKey("employee.id"), primary_key=True
+ ... )
+ ... manager_name: Mapped[str]
+ ... paperwork: Mapped[list["Paperwork"]] = relationship()
+ ... __mapper_args__ = {
+ ... "polymorphic_identity": "manager",
+ ... }
+ ...
+ >>> class Paperwork(Base):
+ ... __tablename__ = "paperwork"
+ ... id: Mapped[int] = mapped_column(
+ ... primary_key=True
+ ... )
+ ... manager_id: Mapped[int] = mapped_column(ForeignKey('manager.id'))
+ ... document_name: Mapped[str]
+ ... def __repr__(self):
+ ... return f"Paperwork({self.document_name!r})"
+ ...
+ >>>
+ >>> class Engineer(Employee):
+ ... __tablename__ = "engineer"
+ ... id: Mapped[int] = mapped_column(
+ ... ForeignKey("employee.id"), primary_key=True
+ ... )
+ ... engineer_info: Mapped[str]
+ ... __mapper_args__ = {
+ ... "polymorphic_identity": "engineer",
+ ... }
+ ...
+ >>>
+ >>> engine = create_engine("sqlite://", echo=True)
+ >>>
+ >>> Base.metadata.create_all(engine)
+ BEGIN ...
+
+ >>> conn = engine.connect()
+ >>> from sqlalchemy.orm import Session
+ >>> session = Session(conn)
+ >>> session.add(
+ ... Company(
+ ... name="Krusty Krab",
+ ... employees=[
+ ... Manager(
+ ... name="Mr. Krabs", manager_name="Eugene H. Krabs",
+ ... paperwork=[
+ ... Paperwork(document_name="Secret Recipes"),
+ ... Paperwork(document_name="Krabby Patty Orders"),
+ ... ]
+ ... ),
+ ... Engineer(
+ ... name="SpongeBob", engineer_info="Krabby Patty Master"
+ ... ),
+ ... Engineer(name="Squidward", engineer_info="Senior Customer Engagement Engineer"),
+ ... ],
+ ... )
+ ... )
+    >>> session.commit()
+    BEGIN ... COMMIT
+    >>> conn.begin()
+    BEGIN ...
+
--- /dev/null
+:orphan:
+
+======================================
+Setup for ORM Queryguide: SELECT
+======================================
+
+This page illustrates the mappings and fixture data used by the
+:doc:`select` document of the :ref:`queryguide_toplevel`.
+
+.. sourcecode:: python
+
+ >>> from typing import List
+ >>> from typing import Optional
+ >>>
+ >>> from sqlalchemy import Column
+ >>> from sqlalchemy import create_engine
+ >>> from sqlalchemy import ForeignKey
+ >>> from sqlalchemy import Table
+ >>> from sqlalchemy.orm import DeclarativeBase
+ >>> from sqlalchemy.orm import Mapped
+ >>> from sqlalchemy.orm import mapped_column
+ >>> from sqlalchemy.orm import relationship
+ >>> from sqlalchemy.orm import Session
+ >>>
+ >>>
+ >>> class Base(DeclarativeBase):
+ ... pass
+ ...
+ >>> class User(Base):
+ ... __tablename__ = "user_account"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... name: Mapped[str]
+ ... fullname: Mapped[Optional[str]]
+ ... addresses: Mapped[List["Address"]] = relationship(back_populates="user")
+ ... orders: Mapped[List["Order"]] = relationship()
+ ... def __repr__(self) -> str:
+ ... return f"User(id={self.id!r}, name={self.name!r}, fullname={self.fullname!r})"
+ ...
+ >>> class Address(Base):
+ ... __tablename__ = "address"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... user_id: Mapped[int] = mapped_column(ForeignKey("user_account.id"))
+ ... email_address: Mapped[str]
+ ... user: Mapped[User] = relationship(back_populates="addresses")
+ ... def __repr__(self) -> str:
+ ... return f"Address(id={self.id!r}, email_address={self.email_address!r})"
+ ...
+ >>> order_items_table = Table(
+ ... "order_items",
+ ... Base.metadata,
+ ... Column("order_id", ForeignKey("user_order.id"), primary_key=True),
+ ... Column("item_id", ForeignKey("item.id"), primary_key=True),
+ ... )
+ >>>
+ >>> class Order(Base):
+ ... __tablename__ = "user_order"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... user_id: Mapped[int] = mapped_column(ForeignKey("user_account.id"))
+ ... items: Mapped[List["Item"]] = relationship(secondary=order_items_table)
+ ...
+ >>> class Item(Base):
+ ... __tablename__ = "item"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... name: Mapped[str]
+ ... description: Mapped[str]
+ ...
+ >>> engine = create_engine("sqlite+pysqlite:///:memory:", echo=True)
+ >>> Base.metadata.create_all(engine)
+ BEGIN ...
+ >>> conn = engine.connect()
+ >>> session = Session(conn)
+ >>> session.add_all(
+ ... [
+ ... User(
+ ... name="spongebob",
+ ... fullname="Spongebob Squarepants",
+ ... addresses=[Address(email_address="spongebob@sqlalchemy.org")],
+ ... ),
+ ... User(
+ ... name="sandy",
+ ... fullname="Sandy Cheeks",
+ ... addresses=[
+ ... Address(email_address="sandy@sqlalchemy.org"),
+ ... Address(email_address="squirrel@squirrelpower.org"),
+ ... ],
+ ... ),
+ ... User(
+ ... name="patrick",
+ ... fullname="Patrick Star",
+ ... addresses=[Address(email_address="pat999@aol.com")],
+ ... ),
+ ... User(
+ ... name="squidward",
+ ... fullname="Squidward Tentacles",
+ ... addresses=[Address(email_address="stentcl@sqlalchemy.org")],
+ ... ),
+ ... User(name="ehkrabs", fullname="Eugene H. Krabs"),
+ ... ]
+ ... )
+ >>> session.commit()
+ BEGIN ... COMMIT
+ >>> conn.begin()
+ BEGIN ...
--- /dev/null
+:orphan:
+
+=============================================
+Setup for ORM Queryguide: Single Inheritance
+=============================================
+
+This page illustrates the mappings and fixture data used by the
+:ref:`single_inheritance` examples in the :doc:`inheritance` document of
+the :ref:`queryguide_toplevel`.
+
+.. sourcecode:: python
+
+
+ >>> from sqlalchemy import create_engine
+ >>> from sqlalchemy import ForeignKey
+ >>> from sqlalchemy.orm import DeclarativeBase
+ >>> from sqlalchemy.orm import Mapped
+ >>> from sqlalchemy.orm import mapped_column
+ >>> from sqlalchemy.orm import relationship
+ >>> from sqlalchemy.orm import Session
+ >>>
+ >>>
+ >>> class Base(DeclarativeBase):
+ ... pass
+ ...
+ >>> class Employee(Base):
+ ... __tablename__ = "employee"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... name: Mapped[str]
+ ... type: Mapped[str]
+ ... def __repr__(self):
+ ... return f"{self.__class__.__name__}({self.name!r})"
+ ... __mapper_args__ = {
+ ... "polymorphic_identity": "employee",
+ ... "polymorphic_on": "type",
+ ... }
+ ...
+ >>> class Manager(Employee):
+ ... manager_name: Mapped[str] = mapped_column(nullable=True)
+ ... __mapper_args__ = {
+ ... "polymorphic_identity": "manager",
+ ... }
+ ...
+ >>> class Engineer(Employee):
+ ... engineer_info: Mapped[str] = mapped_column(nullable=True)
+ ... __mapper_args__ = {
+ ... "polymorphic_identity": "engineer",
+ ... }
+ ...
+ >>>
+ >>> engine = create_engine("sqlite://", echo=True)
+ >>>
+ >>> Base.metadata.create_all(engine)
+ BEGIN ...
+
+ >>> conn = engine.connect()
+ >>> session = Session(conn)
+ >>> session.add_all(
+ ... [
+ ... Manager(
+ ... name="Mr. Krabs", manager_name="Eugene H. Krabs",
+ ... ),
+ ... Engineer(
+ ... name="SpongeBob", engineer_info="Krabby Patty Master"
+ ... ),
+ ... Engineer(name="Squidward", engineer_info="Senior Customer Engagement Engineer"),
+ ... ],
+ ... )
+ >>> session.commit()
+ BEGIN ... COMMIT
+
--- /dev/null
+.. highlight:: pycon+sql
+
+.. |prev| replace:: :doc:`relationships`
+.. |next| replace:: :doc:`query`
+
+.. include:: queryguide_nav_include.rst
+
+
+=============================
+ORM API Features for Querying
+=============================
+
+ORM Loader Options
+-------------------
+
+Loader options are objects that are passed to the :meth:`_sql.Select.options`
+method, which affect the loading of both column and relationship-oriented
+attributes. The majority of loader options descend from the :class:`_orm.Load`
+hierarchy. For a complete overview of using loader options, see the linked
+sections below.
+
+.. seealso::
+
+ * :ref:`loading_columns` - details mapper and loading options that affect
+ how column and SQL-expression mapped attributes are loaded
+
+ * :ref:`loading_toplevel` - details relationship and loading options that
+ affect how :func:`_orm.relationship` mapped attributes are loaded
+
+.. _orm_queryguide_execution_options:
+
+ORM Execution Options
+---------------------
+
+ORM-level execution options are keyword options that may be associated with a
+statement execution using either the
+:paramref:`_orm.Session.execute.execution_options` parameter, which is a
+dictionary argument accepted by :class:`_orm.Session` methods such as
+:meth:`_orm.Session.execute` and :meth:`_orm.Session.scalars`, or by
+associating them directly with the statement to be invoked itself using the
+:meth:`_sql.Executable.execution_options` method, which accepts them as
+arbitrary keyword arguments.
+
+ORM-level options are distinct from the Core level execution options
+documented at :meth:`_engine.Connection.execution_options`.
+It's important to note that the ORM options
+discussed below are **not** compatible with Core level methods
+:meth:`_engine.Connection.execution_options` or
+:meth:`_engine.Engine.execution_options`; the options are ignored at this
+level, even if the :class:`.Engine` or :class:`.Connection` is associated
+with the :class:`_orm.Session` in use.
+
+Within this section, the :meth:`_sql.Executable.execution_options` method
+style will be illustrated for examples.
+
+.. _orm_queryguide_populate_existing:
+
+Populate Existing
+^^^^^^^^^^^^^^^^^^
+
+The ``populate_existing`` execution option ensures that, for all rows
+loaded, the corresponding instances in the :class:`_orm.Session` will
+be fully refreshed – erasing any existing data within the objects
+(including pending changes) and replacing with the data loaded from the
+result.
+
+Example use looks like::
+
+ >>> stmt = select(User).execution_options(populate_existing=True)
+ >>> result = session.execute(stmt)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ ...
+
+Normally, ORM objects are only loaded once, and if they are matched up
+to the primary key in a subsequent result row, the row is not applied to the
+object. This is both to preserve pending, unflushed changes on the object
+as well as to avoid the overhead and complexity of refreshing data which
+is already there. The :class:`_orm.Session` assumes a default working
+model of a highly isolated transaction, and to the degree that data is
+expected to change within the transaction outside of the local changes being
+made, those use cases would be handled using explicit steps such as this method.
+
+Using ``populate_existing``, any set of objects that matches a query
+can be refreshed, and it also allows control over relationship loader options.
+E.g. to refresh an instance while also refreshing a related set of objects:
+
+.. sourcecode:: python
+
+ stmt = (
+ select(User).
+ where(User.name.in_(names)).
+ execution_options(populate_existing=True).
+        options(selectinload(User.addresses))
+ )
+ # will refresh all matching User objects as well as the related
+ # Address objects
+ users = session.execute(stmt).scalars().all()
+
+Another use case for ``populate_existing`` is in support of various
+attribute loading features that can change how an attribute is loaded on
+a per-query basis. Options for which this apply include:
+
+* The :func:`_orm.with_expression` option
+
+* The :meth:`_orm.PropComparator.and_` method that can modify what a loader
+ strategy loads
+
+* The :func:`_orm.contains_eager` option
+
+* The :func:`_orm.with_loader_criteria` option
+
+The ``populate_existing`` execution option is equivalent to the
+:meth:`_orm.Query.populate_existing` method in :term:`1.x style` ORM queries.
+
+.. seealso::
+
+ :ref:`faq_session_identity` - in :doc:`/faq/index`
+
+ :ref:`session_expire` - in the ORM :class:`_orm.Session`
+ documentation
+
+.. _orm_queryguide_autoflush:
+
+Autoflush
+^^^^^^^^^
+
+This option, when passed as ``False``, will cause the :class:`_orm.Session`
+to not invoke the "autoflush" step. It is equivalent to using the
+:attr:`_orm.Session.no_autoflush` context manager to disable autoflush::
+
+ >>> stmt = select(User).execution_options(autoflush=False)
+ >>> session.execute(stmt)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ ...
+
+This option will also work on ORM-enabled :class:`_sql.Update` and
+:class:`_sql.Delete` queries.
+
+The ``autoflush`` execution option is equivalent to the
+:meth:`_orm.Query.autoflush` method in :term:`1.x style` ORM queries.
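For comparison, below is a minimal runnable sketch of the :attr:`_orm.Session.no_autoflush` context-manager form mentioned above, using a throwaway in-memory SQLite engine; within the block the session's ``autoflush`` flag is suspended, and the previous setting is restored on exit:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")

with Session(engine) as session:
    with session.no_autoflush:
        # autoflush is suspended for the duration of the block
        inside = session.autoflush
    # the previous setting is restored on exit
    after = session.autoflush

print(inside, after)  # False True
```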
+
+.. seealso::
+
+ :ref:`session_flushing`
+
+.. _orm_queryguide_yield_per:
+
+Fetching Large Result Sets with Yield Per
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``yield_per`` execution option is an integer value which will cause the
+:class:`_engine.Result` to buffer only a limited number of rows and/or ORM
+objects at a time, before making data available to the client.
+
+Normally, the ORM will fetch **all** rows immediately, constructing ORM objects
+for each and assembling those objects into a single buffer, before passing this
+buffer to the :class:`_engine.Result` object as a source of rows to be
+returned. The rationale for this behavior is to allow correct behavior for
+features such as joined eager loading, uniquifying of results, and the general
+case of result handling logic that relies upon the identity map maintaining a
+consistent state for every object in a result set as it is fetched.
+
+The purpose of the ``yield_per`` option is to change this behavior so that the
+ORM result set is optimized for iteration through very large result sets (e.g.
+> 10K rows), where the user has determined that the above patterns don't apply.
+When ``yield_per`` is used, the ORM will instead batch ORM results into
+sub-collections and yield rows from each sub-collection individually as the
+:class:`_engine.Result` object is iterated, so that the Python interpreter
+doesn't need to allocate very large areas of memory, which is both time consuming
+and leads to excessive memory use. The option affects both the way the database
+cursor is used as well as how the ORM constructs rows and objects to be passed
+to the :class:`_engine.Result`.
+
+.. tip::
+
+ From the above, it follows that the :class:`_engine.Result` must be
+ consumed in an iterable fashion, that is, using iteration such as
+ ``for row in result`` or using partial row methods such as
+ :meth:`_engine.Result.fetchmany` or :meth:`_engine.Result.partitions`.
+ Calling :meth:`_engine.Result.all` will defeat the purpose of using
+ ``yield_per``.
+
+Using ``yield_per`` is equivalent to making use
+of both the :paramref:`_engine.Connection.execution_options.stream_results`
+execution option, which selects for server side cursors to be used
+by the backend if supported, and the :meth:`_engine.Result.yield_per` method
+on the returned :class:`_engine.Result` object,
+which establishes a fixed size of rows to be fetched as well as a
+corresponding limit to how many ORM objects will be constructed at once.
+
+.. tip::
+
+ ``yield_per`` is now available as a Core execution option as well,
+ described in detail at :ref:`engine_stream_results`. This section details
+ the use of ``yield_per`` as an execution option with an ORM
+ :class:`_orm.Session`. The option behaves as similarly as possible
+ in both contexts.
+
+When used with the ORM, ``yield_per`` must be established either
+via the :meth:`.Executable.execution_options` method on the given statement
+or by passing it to the :paramref:`_orm.Session.execute.execution_options`
+parameter of :meth:`_orm.Session.execute` or other similar :class:`_orm.Session`
+method such as :meth:`_orm.Session.scalars`. Typical use for fetching
+ORM objects is illustrated below::
+
+ >>> stmt = select(User).execution_options(yield_per=10)
+ >>> for user_obj in session.scalars(stmt):
+ ... print(user_obj)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ [...] ()
+ {stop}User(id=1, name='spongebob', fullname='Spongebob Squarepants')
+ User(id=2, name='sandy', fullname='Sandy Cheeks')
+ ...
+ >>> # ... rows continue ...
+
+The above code is equivalent to the example below, which uses
+:paramref:`_engine.Connection.execution_options.stream_results`
+and :paramref:`_engine.Connection.execution_options.max_row_buffer` Core-level
+execution options in conjunction with the :meth:`_engine.Result.yield_per`
+method of :class:`_engine.Result`::
+
+ # equivalent code
+ >>> stmt = select(User).execution_options(stream_results=True, max_row_buffer=10)
+ >>> for user_obj in session.scalars(stmt).yield_per(10):
+ ... print(user_obj)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ [...] ()
+ {stop}User(id=1, name='spongebob', fullname='Spongebob Squarepants')
+ User(id=2, name='sandy', fullname='Sandy Cheeks')
+ ...
+ >>> # ... rows continue ...
+
+``yield_per`` is also commonly used in combination with the
+:meth:`_engine.Result.partitions` method, which will iterate rows in grouped
+partitions. The size of each partition defaults to the integer value passed to
+``yield_per``, as in the below example::
+
+ >>> stmt = select(User).execution_options(yield_per=10)
+ >>> for partition in session.scalars(stmt).partitions():
+ ... for user_obj in partition:
+ ... print(user_obj)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ [...] ()
+ {stop}User(id=1, name='spongebob', fullname='Spongebob Squarepants')
+ User(id=2, name='sandy', fullname='Sandy Cheeks')
+ ...
+ >>> # ... rows continue ...
+
+
+The ``yield_per`` execution option **is not compatible** with
+:ref:`"subquery" eager loading <subquery_eager_loading>` or
+:ref:`"joined" eager loading <joined_eager_loading>` when using collections. It
+is potentially compatible with :ref:`"select in" eager loading
+<selectin_eager_loading>`, provided the database driver supports multiple,
+independent cursors.
+
+Additionally, the ``yield_per`` execution option is not compatible
+with the :meth:`_engine.Result.unique` method; as this method relies upon
+storing a complete set of identities for all rows, it would necessarily
+defeat the purpose of using ``yield_per`` which is to handle an arbitrarily
+large number of rows.
+
+.. versionchanged:: 1.4.6 An exception is raised when ORM rows are fetched
+ from a :class:`_engine.Result` object that makes use of the
+ :meth:`_engine.Result.unique` filter, at the same time as the ``yield_per``
+ execution option is used.
+
+When using the legacy :class:`_orm.Query` object with
+:term:`1.x style` ORM use, the :meth:`_orm.Query.yield_per` method
+will have the same result as that of the ``yield_per`` execution option.
+
+
+.. seealso::
+
+ :ref:`engine_stream_results`
+
+
+.. _queryguide_inspection:
+
+Inspecting entities and columns from ORM-enabled SELECT and DML statements
+==========================================================================
+
+The :func:`_sql.select` construct, as well as the :func:`_sql.insert`, :func:`_sql.update`
+and :func:`_sql.delete` constructs (for the latter DML constructs, as of SQLAlchemy
+1.4.33), all support the ability to inspect the entities against which these
+statements are created, as well as the columns and datatypes that would
+be returned in a result set.
+
+For a :class:`.Select` object, this information is available from the
+:attr:`.Select.column_descriptions` attribute. This attribute operates in the
+same way as the legacy :attr:`.Query.column_descriptions` attribute. The format
+returned is a list of dictionaries::
+
+ >>> from pprint import pprint
+    >>> user_alias = aliased(User, name="user2")
+ >>> stmt = select(User, User.id, user_alias)
+ >>> pprint(stmt.column_descriptions)
+ [{'aliased': False,
+ 'entity': <class 'User'>,
+ 'expr': <class 'User'>,
+ 'name': 'User',
+ 'type': <class 'User'>},
+ {'aliased': False,
+ 'entity': <class 'User'>,
+ 'expr': <....InstrumentedAttribute object at ...>,
+ 'name': 'id',
+ 'type': Integer()},
+ {'aliased': True,
+ 'entity': <AliasedClass ...; User>,
+ 'expr': <AliasedClass ...; User>,
+ 'name': 'user2',
+ 'type': <class 'User'>}]
+
+
+When :attr:`.Select.column_descriptions` is used with non-ORM objects
+such as plain :class:`.Table` or :class:`.Column` objects, the entries
+will contain basic information about individual columns returned in all
+cases::
+
+ >>> stmt = select(user_table, address_table.c.id)
+ >>> pprint(stmt.column_descriptions)
+ [{'expr': Column('id', Integer(), table=<user_account>, primary_key=True, nullable=False),
+ 'name': 'id',
+ 'type': Integer()},
+ {'expr': Column('name', String(), table=<user_account>, nullable=False),
+ 'name': 'name',
+ 'type': String()},
+ {'expr': Column('fullname', String(), table=<user_account>),
+ 'name': 'fullname',
+ 'type': String()},
+ {'expr': Column('id', Integer(), table=<address>, primary_key=True, nullable=False),
+ 'name': 'id_1',
+ 'type': Integer()}]
+
+.. versionchanged:: 1.4.33 The :attr:`.Select.column_descriptions` attribute now returns
+ a value when used against a :class:`.Select` that is not ORM-enabled. Previously,
+ this would raise ``NotImplementedError``.
+
+
+For :func:`_sql.insert`, :func:`.update` and :func:`.delete` constructs, there are
+two separate attributes. One is :attr:`.UpdateBase.entity_description` which
+returns information about the primary ORM entity and database table which the
+DML construct would be affecting::
+
+ >>> from sqlalchemy import update
+ >>> stmt = update(User).values(name="somename").returning(User.id)
+ >>> pprint(stmt.entity_description)
+ {'entity': <class 'User'>,
+ 'expr': <class 'User'>,
+ 'name': 'User',
+ 'table': Table('user_account', ...),
+ 'type': <class 'User'>}
+
+.. tip:: The :attr:`.UpdateBase.entity_description` includes an entry
+ ``"table"`` which is actually the **table to be inserted, updated or
+ deleted** by the statement, which is **not** always the same as the SQL
+ "selectable" to which the class may be mapped. For example, in a
+ joined-table inheritance scenario, ``"table"`` will refer to the local table
+ for the given entity.
+
+The other is :attr:`.UpdateBase.returning_column_descriptions` which
+delivers information about the columns present in the RETURNING collection
+in a manner roughly similar to that of :attr:`.Select.column_descriptions`::
+
+ >>> pprint(stmt.returning_column_descriptions)
+ [{'aliased': False,
+ 'entity': <class 'User'>,
+ 'expr': <sqlalchemy.orm.attributes.InstrumentedAttribute ...>,
+ 'name': 'id',
+ 'type': Integer()}]
+
+.. versionadded:: 1.4.33 Added the :attr:`.UpdateBase.entity_description`
+ and :attr:`.UpdateBase.returning_column_descriptions` attributes.
+
+
+.. _queryguide_additional:
+
+Additional ORM API Constructs
+=============================
+
+
+.. autofunction:: sqlalchemy.orm.aliased
+
+.. autoclass:: sqlalchemy.orm.util.AliasedClass
+
+.. autoclass:: sqlalchemy.orm.util.AliasedInsp
+
+.. autoclass:: sqlalchemy.orm.Bundle
+ :members:
+
+.. autofunction:: sqlalchemy.orm.with_loader_criteria
+
+.. autofunction:: sqlalchemy.orm.join
+
+.. autofunction:: sqlalchemy.orm.outerjoin
+
+.. autofunction:: sqlalchemy.orm.with_parent
+
+
+.. Setup code, not for display
+
+ >>> session.close()
+ >>> conn.close()
+ ROLLBACK
--- /dev/null
+.. highlight:: pycon+sql
+
+.. |prev| replace:: :doc:`dml`
+.. |next| replace:: :doc:`relationships`
+
+.. include:: queryguide_nav_include.rst
+
+
+.. doctest-include _deferred_setup.rst
+
+.. currentmodule:: sqlalchemy.orm
+
+.. _loading_columns:
+
+======================
+Column Loading Options
+======================
+
+.. admonition:: About this Document
+
+ This section presents additional options regarding the loading of
+ columns. The mappings used include columns that would store
+ large string values for which we may want to limit when they
+ are loaded.
+
+ :doc:`View the ORM setup for this page <_deferred_setup>`. Some
+ of the examples below will redefine the ``Book`` mapper to modify
+ some of the column definitions.
+
+.. _orm_queryguide_column_deferral:
+
+Limiting which Columns Load with Column Deferral
+------------------------------------------------
+
+**Column deferral** refers to ORM mapped columns that are omitted from a SELECT
+statement when objects of that type are queried. The general rationale here is
+performance, in cases where tables have seldom-used columns with potentially
+large data values, as fully loading these columns on every query may be
+time and/or memory intensive. SQLAlchemy ORM offers a variety of ways to
+control the loading of columns when entities are loaded.
+
+Most examples in this section illustrate **ORM loader options**. These
+are small constructs that are passed to the :meth:`_sql.Select.options` method
+of the :class:`_sql.Select` object, which are then consumed by the ORM
+when the statement is compiled into a SQL string.
+
+.. _orm_queryguide_load_only:
+
+Using ``load_only()`` to reduce loaded columns
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The :func:`_orm.load_only` loader option is the most expedient option to use
+when loading objects where it is known that only a small handful of columns will
+be accessed. This option accepts a variable number of class-bound attribute
+objects indicating those column-mapped attributes that should be loaded, where
+all other column-mapped attributes outside of the primary key will not be part
+of the columns fetched. In the example below, the ``Book`` class contains
+columns ``.title``, ``.summary`` and ``.cover_photo``. Using
+:func:`_orm.load_only` we can instruct the ORM to only load the
+``.title`` and ``.summary`` columns up front::
+
+ >>> from sqlalchemy import select
+ >>> from sqlalchemy.orm import load_only
+ >>> stmt = select(Book).options(load_only(Book.title, Book.summary))
+ >>> books = session.scalars(stmt).all()
+ {opensql}SELECT book.id, book.title, book.summary
+ FROM book
+ [...] ()
+ {stop}>>> for book in books:
+ ... print(f"{book.title} {book.summary}")
+ 100 Years of Krabby Patties some long summary
+ Sea Catch 22 another long summary
+ The Sea Grapes of Wrath yet another summary
+ A Nut Like No Other some long summary
+ Geodesic Domes: A Retrospective another long summary
+ Rocketry for Squirrels yet another summary
+
+Above, the SELECT statement has omitted the ``.cover_photo`` column and
+included only ``.title`` and ``.summary``, as well as the primary key column
+``.id``; the ORM will typically always fetch the primary key columns as these
+are required to establish the identity for the row.
+
+Once loaded, the object will normally have :term:`lazy loading` behavior
+applied to the remaining unloaded attributes, meaning that when any are first
+accessed, a SQL statement will be emitted within the current transaction in
+order to load the value. Below, accessing ``.cover_photo`` emits a SELECT
+statement to load its value::
+
+ >>> img_data = books[0].cover_photo
+ {opensql}SELECT book.cover_photo AS book_cover_photo
+ FROM book
+ WHERE book.id = ?
+ [...] (1,)
+
+Lazy loads are always emitted using the :class:`_orm.Session` with which the
+object is associated in the :term:`persistent` state. If the object is
+:term:`detached` from any :class:`_orm.Session`, the operation fails, raising
+an exception.
+
+As an alternative to lazy loading on access, deferred columns may also be
+configured to raise an informative exception when accessed, regardless of their
+attachment state. See the section :ref:`orm_queryguide_deferred_raiseload` for
+background.
+
+Using ``load_only()`` with multiple entities
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+:func:`_orm.load_only` limits itself to the single entity referred to in its
+list of attributes (passing a list of attributes that span more
+than a single entity is currently disallowed). In the example below, the given
+:func:`_orm.load_only` option applies only to the ``Book`` entity. The ``User``
+entity that's also selected is not affected; within the resulting SELECT
+statement, all columns for ``user_account`` are present, whereas only
+``book.id`` and ``book.title`` are present for the ``book`` table::
+
+ >>> stmt = select(User, Book).join_from(User, Book).options(load_only(Book.title))
+ >>> print(stmt)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname,
+ book.id AS id_1, book.title
+ FROM user_account JOIN book ON user_account.id = book.owner_id
+
+If we wanted to apply :func:`_orm.load_only` options to both ``User`` and
+``Book``, we would make use of two separate options::
+
+ >>> stmt = (
+ ... select(User, Book).
+ ... join_from(User, Book).
+ ... options(load_only(User.name), load_only(Book.title))
+ ... )
+ >>> print(stmt)
+ {opensql}SELECT user_account.id, user_account.name, book.id AS id_1, book.title
+ FROM user_account JOIN book ON user_account.id = book.owner_id
+
+.. _orm_queryguide_load_only_related:
+
+Using ``load_only()`` on related objects and collections
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When using :ref:`relationship loaders <loading_toplevel>` to control the
+loading of related objects, the
+:meth:`.Load.load_only` method of any relationship loader may be used
+to apply :func:`_orm.load_only` rules to columns on the sub-entity. In the example below,
+:func:`_orm.selectinload` is used to load the related ``books`` collection
+on each ``User`` object. By applying :meth:`.Load.load_only` to the resulting
+option object, when objects are loaded for the relationship, the
+SELECT emitted will only refer to the ``title`` column
+in addition to the primary key column::
+
+ >>> from sqlalchemy.orm import selectinload
+ >>> stmt = select(User).options(selectinload(User.books).load_only(Book.title))
+ >>> for user in session.scalars(stmt):
+ ... print(f"{user.fullname} {[b.title for b in user.books]}")
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ [...] ()
+ SELECT book.owner_id AS book_owner_id, book.id AS book_id, book.title AS book_title
+ FROM book
+ WHERE book.owner_id IN (?, ?)
+ [...] (1, 2)
+ {stop}Spongebob Squarepants ['100 Years of Krabby Patties', 'Sea Catch 22', 'The Sea Grapes of Wrath']
+ Sandy Cheeks ['A Nut Like No Other', 'Geodesic Domes: A Retrospective', 'Rocketry for Squirrels']
+
+
+.. comment
+
+ >>> session.expunge_all()
+
+:func:`_orm.load_only` may also be applied to sub-entities without needing
+to state the style of loading to use for the relationship itself. If we didn't
+want to change the default loading style of ``User.books`` but still apply
+load-only rules to ``Book``, we would link using the :func:`_orm.defaultload`
+option, which in this case will retain the default relationship loading
+style of ``"lazy"``, while applying our custom :func:`_orm.load_only` rule to
+the SELECT statement emitted for each ``User.books`` collection::
+
+ >>> from sqlalchemy.orm import defaultload
+ >>> stmt = select(User).options(defaultload(User.books).load_only(Book.title))
+ >>> for user in session.scalars(stmt):
+ ... print(f"{user.fullname} {[b.title for b in user.books]}")
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ [...] ()
+ SELECT book.id AS book_id, book.title AS book_title
+ FROM book
+ WHERE ? = book.owner_id
+ [...] (1,)
+ {stop}Spongebob Squarepants ['100 Years of Krabby Patties', 'Sea Catch 22', 'The Sea Grapes of Wrath']
+ {opensql}SELECT book.id AS book_id, book.title AS book_title
+ FROM book
+ WHERE ? = book.owner_id
+ [...] (2,)
+ {stop}Sandy Cheeks ['A Nut Like No Other', 'Geodesic Domes: A Retrospective', 'Rocketry for Squirrels']
+
+.. _orm_queryguide_defer:
+
+Using ``defer()`` to omit specific columns
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The :func:`_orm.defer` loader option is a more fine-grained alternative to
+:func:`_orm.load_only`, which allows a single specific column to be marked as
+"don't load". In the example below, :func:`_orm.defer` is applied directly to the
+``.cover_photo`` column, leaving the behavior of all other columns
+unchanged::
+
+ >>> from sqlalchemy.orm import defer
+ >>> stmt = select(Book).where(Book.owner_id == 2).options(defer(Book.cover_photo))
+ >>> books = session.scalars(stmt).all()
+ {opensql}SELECT book.id, book.owner_id, book.title, book.summary
+ FROM book
+ WHERE book.owner_id = ?
+ [...] (2,)
+ {stop}>>> for book in books:
+ ... print(f"{book.title}: {book.summary}")
+ A Nut Like No Other: some long summary
+ Geodesic Domes: A Retrospective: another long summary
+ Rocketry for Squirrels: yet another summary
+
+As is the case with :func:`_orm.load_only`, unloaded columns by default
+will load themselves when accessed using :term:`lazy loading`::
+
+ >>> img_data = books[0].cover_photo
+ {opensql}SELECT book.cover_photo AS book_cover_photo
+ FROM book
+ WHERE book.id = ?
+ [...] (4,)
+
+Multiple :func:`_orm.defer` options may be used in one statement in order to
+mark several columns as deferred.
+
+As is the case with :func:`_orm.load_only`, the :func:`_orm.defer` option
+also includes the ability to have a deferred attribute raise an exception on
+access rather than lazy loading. This is illustrated in the section
+:ref:`orm_queryguide_deferred_raiseload`.
+
+.. _orm_queryguide_deferred_raiseload:
+
+Using raiseload to prevent deferred column loads
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. comment
+
+ >>> session.expunge_all()
+
+When using :func:`_orm.load_only` or :func:`_orm.deferred`, attributes marked
+as deferred on an object have the default behavior that when first accessed, a
+SELECT statement will be emitted within the current transaction in order to
+load their value. It is often necessary to prevent this load from occurring,
+and instead raise an exception when the attribute is accessed, indicating that
+the need to query the database for this column was not expected. A typical
+scenario is an operation where objects are loaded with all the columns that are
+known to be required for the operation to proceed, which are then passed on to a
+view layer. Any further SQL operations emitted within the view layer should
+be caught, so that the up-front loading operation can be adjusted to
+accommodate that additional data up front, rather than incurring additional
+lazy loading.
+
+For this use case the :func:`_orm.defer` and :func:`_orm.load_only` options
+include a boolean parameter :paramref:`_orm.defer.raiseload`, which when set to
+``True`` will cause the affected attributes to raise on access. In the
+example below, the deferred column ``.cover_photo`` will disallow attribute
+access::
+
+ >>> book = session.scalar(
+ ... select(Book).
+ ... options(defer(Book.cover_photo, raiseload=True)).
+ ... where(Book.id == 4)
+ ... )
+ {opensql}SELECT book.id, book.owner_id, book.title, book.summary
+ FROM book
+ WHERE book.id = ?
+ [...] (4,)
+ {stop}>>> book.cover_photo
+ Traceback (most recent call last):
+ ...
+ sqlalchemy.exc.InvalidRequestError: 'Book.cover_photo' is not available due to raiseload=True
+
+When using :func:`_orm.load_only` to name a specific set of non-deferred
+columns, ``raiseload`` behavior may be applied to the remaining columns
+using the :paramref:`_orm.load_only.raiseload` parameter, which will be applied
+to all deferred attributes::
+
+ >>> session.expunge_all()
+ >>> book = session.scalar(
+ ... select(Book).
+ ... options(load_only(Book.title, raiseload=True)).
+ ... where(Book.id == 5)
+ ... )
+ {opensql}SELECT book.id, book.title
+ FROM book
+ WHERE book.id = ?
+ [...] (5,)
+ {stop}>>> book.summary
+ Traceback (most recent call last):
+ ...
+ sqlalchemy.exc.InvalidRequestError: 'Book.summary' is not available due to raiseload=True
+
+.. note::
+
+ It is not yet possible to mix :func:`_orm.load_only` and :func:`_orm.defer`
+ options which refer to the same entity together in one statement in order
+ to change the ``raiseload`` behavior of certain attributes; currently,
+ doing so will produce undefined loading behavior of attributes.
+
+.. seealso::
+
+ The :paramref:`_orm.defer.raiseload` feature is the column-level version
+ of the same "raiseload" feature that's available for relationships.
+ For "raiseload" with relationships, see
+ :ref:`prevent_lazy_with_raiseload` in the
+ :ref:`loading_toplevel` section of this guide.
+
+
+
+.. _orm_queryguide_deferred_declarative:
+
+Configuring Column Deferral on Mappings
+---------------------------------------
+
+.. comment
+
+ >>> class Base(DeclarativeBase):
+ ... pass
+ ...
+
+The functionality of :func:`_orm.defer` is available as a default behavior for
+mapped columns, as may be appropriate for columns that should not be loaded
+unconditionally on every query. To configure, use the
+:paramref:`_orm.mapped_column.deferred` parameter of
+:func:`_orm.mapped_column`. The example below illustrates a mapping for
+``Book`` which applies default column deferral to the ``summary`` and
+``cover_photo`` columns::
+
+ >>> class Book(Base):
+ ... __tablename__ = "book"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... owner_id: Mapped[int] = mapped_column(ForeignKey("user_account.id"))
+ ... title: Mapped[str]
+ ... summary: Mapped[str] = mapped_column(Text, deferred=True)
+ ... cover_photo: Mapped[bytes] = mapped_column(LargeBinary, deferred=True)
+ ... def __repr__(self) -> str:
+ ... return f"Book(id={self.id!r}, title={self.title!r})"
+
+Using the above mapping, queries against ``Book`` will automatically not
+include the ``summary`` and ``cover_photo`` columns::
+
+ >>> book = session.scalar(
+ ... select(Book).where(Book.id == 2)
+ ... )
+ {opensql}SELECT book.id, book.owner_id, book.title
+ FROM book
+ WHERE book.id = ?
+ [...] (2,)
+
+As is the case with all deferral, the default behavior when deferred attributes
+on the loaded object are first accessed is that they will :term:`lazy load`
+their value::
+
+ >>> img_data = book.cover_photo
+ {opensql}SELECT book.cover_photo AS book_cover_photo
+ FROM book
+ WHERE book.id = ?
+ [...] (2,)
+
+As is the case with the :func:`_orm.defer` and :func:`_orm.load_only`
+loader options, mapper level deferral also includes an option for ``raiseload``
+behavior to occur, rather than lazy loading, when no other options are
+present in a statement. This allows a mapping where certain columns
+will not load by default and will also never load lazily without explicit
+directives used in a statement. See the section
+:ref:`orm_queryguide_mapper_deferred_raiseload` for background on how to
+configure and use this behavior.
+
+.. _orm_queryguide_deferred_imperative:
+
+Using ``deferred()`` for imperative mappers, mapped SQL expressions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The :func:`_orm.deferred` function is the earlier, more general purpose
+"deferred column" mapping directive that precedes the introduction of the
+:func:`_orm.mapped_column` construct in SQLAlchemy.
+
+:func:`_orm.deferred` is used when configuring ORM mappers, and accepts
+arbitrary SQL expressions or
+:class:`_schema.Column` objects. As such it's suitable to be used with
+non-declarative :ref:`imperative mappings <orm_imperative_mapping>`, passing it
+to the :paramref:`_orm.registry.map_imperatively.properties` dictionary:
+
+.. sourcecode:: python
+
+ from sqlalchemy import Column
+ from sqlalchemy import Integer
+ from sqlalchemy import LargeBinary
+ from sqlalchemy import String
+ from sqlalchemy import Table
+ from sqlalchemy import Text
+ from sqlalchemy.orm import deferred
+ from sqlalchemy.orm import registry
+
+ mapper_registry = registry()
+
+ book_table = Table(
+ 'book',
+ mapper_registry.metadata,
+ Column('id', Integer, primary_key=True),
+ Column('title', String(50)),
+ Column('summary', Text),
+ Column('cover_image', LargeBinary)
+ )
+
+ class Book:
+ pass
+
+ mapper_registry.map_imperatively(
+ Book,
+ book_table,
+ properties={
+ "summary": deferred(book_table.c.summary),
+ "cover_image": deferred(book_table.c.cover_image),
+ }
+ )
+
+:func:`_orm.deferred` may also be used in place of :func:`_orm.column_property`
+when mapped SQL expressions should be loaded on a deferred basis:
+
+.. sourcecode:: python
+
+ from sqlalchemy.orm import deferred
+
+ class User(Base):
+ __tablename__ = 'user'
+
+ id: Mapped[int] = mapped_column(primary_key=True)
+ firstname: Mapped[str] = mapped_column()
+ lastname: Mapped[str] = mapped_column()
+ fullname: Mapped[str] = deferred(firstname + " " + lastname)
+
+.. seealso::
+
+ :ref:`mapper_column_property_sql_expressions` - in the section
+ :ref:`mapper_sql_expressions`
+
+ :ref:`orm_imperative_table_column_options` - in the section
+ :ref:`orm_declarative_table_config_toplevel`
+
+Using ``undefer()`` to "eagerly" load deferred columns
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+With columns configured on mappings to defer by default, the
+:func:`_orm.undefer` option will cause any column that is normally deferred
+to be undeferred, that is, to load up front with all the other columns
+of the mapping. For example we may apply :func:`_orm.undefer` to the
+``Book.summary`` column, which is indicated in the previous mapping
+as deferred::
+
+ >>> from sqlalchemy.orm import undefer
+ >>> book = session.scalar(
+ ... select(Book).where(Book.id == 2).options(undefer(Book.summary))
+ ... )
+ {opensql}SELECT book.summary, book.id, book.owner_id, book.title
+ FROM book
+ WHERE book.id = ?
+ [...] (2,)
+
+The ``Book.summary`` column is now eagerly loaded, and may be accessed without
+additional SQL being emitted::
+
+ >>> print(book.summary)
+ another long summary
+
+.. _orm_queryguide_deferred_group:
+
+Loading deferred columns in groups
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. comment
+
+ >>> class Base(DeclarativeBase):
+ ... pass
+ ...
+
+Normally, when a column is mapped with ``mapped_column(deferred=True)``,
+accessing the deferred attribute on an object will emit SQL to load only that
+specific column and no others, even if the mapping has other columns that are
+also marked as deferred. In the common case that the deferred attribute is
+part of a group of attributes that should all load at once, rather than
+emitting SQL for each attribute individually, the
+:paramref:`_orm.mapped_column.deferred_group` parameter may be used, which
+accepts an arbitrary string defining a common group of columns to be
+undeferred together::
+
+ >>> class Book(Base):
+ ... __tablename__ = "book"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... owner_id: Mapped[int] = mapped_column(ForeignKey("user_account.id"))
+ ... title: Mapped[str]
+ ... summary: Mapped[str] = mapped_column(Text, deferred=True, deferred_group="book_attrs")
+ ... cover_photo: Mapped[bytes] = mapped_column(LargeBinary, deferred=True, deferred_group="book_attrs")
+ ... def __repr__(self) -> str:
+ ... return f"Book(id={self.id!r}, title={self.title!r})"
+
+Using the above mapping, accessing either ``summary`` or ``cover_photo``
+will load both columns at once using just one SELECT statement::
+
+ >>> book = session.scalar(
+ ... select(Book).where(Book.id == 2)
+ ... )
+ {opensql}SELECT book.id, book.owner_id, book.title
+ FROM book
+ WHERE book.id = ?
+ [...] (2,)
+ {stop}>>> img_data, summary = book.cover_photo, book.summary
+ {opensql}SELECT book.summary AS book_summary, book.cover_photo AS book_cover_photo
+ FROM book
+ WHERE book.id = ?
+ [...] (2,)
+
+
+Undeferring by group with ``undefer_group()``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If deferred columns are configured with :paramref:`_orm.mapped_column.deferred_group`
+(introduced previously at :ref:`orm_queryguide_deferred_group`), the
+entire group may be indicated to load eagerly using the :func:`_orm.undefer_group`
+option, passing the string name of the group to be eagerly loaded::
+
+ >>> from sqlalchemy.orm import undefer_group
+ >>> book = session.scalar(
+ ... select(Book).where(Book.id == 2).options(undefer_group("book_attrs"))
+ ... )
+ {opensql}SELECT book.summary, book.cover_photo, book.id, book.owner_id, book.title
+ FROM book
+ WHERE book.id = ?
+ [...] (2,)
+
+Both ``summary`` and ``cover_photo`` are available without additional loads::
+
+ >>> img_data, summary = book.cover_photo, book.summary
+
+Undeferring on wildcards
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Most ORM loader options accept a wildcard expression, indicated by
+``"*"``, which indicates that the option should be applied to all relevant
+attributes. If a mapping has a series of deferred columns, all such
+columns can be undeferred at once, without using a group name, by indicating
+a wildcard::
+
+ >>> book = session.scalar(
+ ... select(Book).where(Book.id == 3).options(undefer("*"))
+ ... )
+ {opensql}SELECT book.summary, book.cover_photo, book.id, book.owner_id, book.title
+ FROM book
+ WHERE book.id = ?
+ [...] (3,)
+
+.. _orm_queryguide_mapper_deferred_raiseload:
+
+Configuring mapper-level "raiseload" behavior
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. comment
+
+ >>> class Base(DeclarativeBase):
+ ... pass
+ ...
+
+The "raiseload" behavior first introduced at :ref:`orm_queryguide_deferred_raiseload` may
+also be applied as a default mapper-level behavior, using the
+:paramref:`_orm.mapped_column.deferred_raiseload` parameter of
+:func:`_orm.mapped_column`. When using this parameter, the affected columns
+will raise on access in all cases unless explicitly "undeferred" using
+:func:`_orm.undefer` or :func:`_orm.load_only` at query time::
+
+ >>> class Book(Base):
+ ... __tablename__ = "book"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... owner_id: Mapped[int] = mapped_column(ForeignKey("user_account.id"))
+ ... title: Mapped[str]
+ ... summary: Mapped[str] = mapped_column(Text, deferred=True, deferred_raiseload=True)
+ ... cover_photo: Mapped[bytes] = mapped_column(LargeBinary, deferred=True, deferred_raiseload=True)
+ ... def __repr__(self) -> str:
+ ... return f"Book(id={self.id!r}, title={self.title!r})"
+
+Using the above mapping, the ``.summary`` and ``.cover_photo`` columns are
+by default not loadable::
+
+ >>> book = session.scalar(
+ ... select(Book).where(Book.id == 2)
+ ... )
+ {opensql}SELECT book.id, book.owner_id, book.title
+ FROM book
+ WHERE book.id = ?
+ [...] (2,)
+ {stop}>>> book.summary
+ Traceback (most recent call last):
+ ...
+ sqlalchemy.exc.InvalidRequestError: 'Book.summary' is not available due to raiseload=True
+
+Only by overriding their behavior at query time, typically using
+:func:`_orm.undefer` or :func:`_orm.undefer_group`, or less commonly
+:func:`_orm.defer`, may the attributes be loaded. The example below applies
+``undefer('*')`` to undefer all attributes, also making use of
+:ref:`orm_queryguide_populate_existing` to refresh the already-loaded object's loader options::
+
+ >>> book = session.scalar(
+ ... select(Book).
+ ... where(Book.id == 2).
+ ... options(undefer('*')).
+ ... execution_options(populate_existing=True)
+ ... )
+ {opensql}SELECT book.summary, book.cover_photo, book.id, book.owner_id, book.title
+ FROM book
+ WHERE book.id = ?
+ [...] (2,)
+ {stop}>>> book.summary
+ 'another long summary'
+
+
+
+.. _orm_queryguide_with_expression:
+
+Loading Arbitrary SQL Expressions onto Objects
+-----------------------------------------------
+
+.. comment
+
+ >>> class Base(DeclarativeBase):
+ ... pass
+ ...
+ >>> class User(Base):
+ ... __tablename__ = "user_account"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... name: Mapped[str]
+ ... fullname: Mapped[Optional[str]]
+ ... books: Mapped[List["Book"]] = relationship(back_populates="owner")
+ ... def __repr__(self) -> str:
+ ... return f"User(id={self.id!r}, name={self.name!r}, fullname={self.fullname!r})"
+ ...
+ >>> class Book(Base):
+ ... __tablename__ = "book"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... owner_id: Mapped[int] = mapped_column(ForeignKey("user_account.id"))
+ ... title: Mapped[str]
+ ... summary: Mapped[str] = mapped_column(Text)
+ ... cover_photo: Mapped[bytes] = mapped_column(LargeBinary)
+ ... owner: Mapped["User"] = relationship(back_populates="books")
+ ... def __repr__(self) -> str:
+ ... return f"Book(id={self.id!r}, title={self.title!r})"
+
+
+As discussed at :ref:`orm_queryguide_select_columns` and elsewhere,
+the :func:`.select` construct may be used to load arbitrary SQL expressions
+in a result set. For example, if we wanted to issue a query that loads
+``User`` objects, but also includes a count of how many books
+each ``User`` owns, we could use ``func.count(Book.id)`` to add a "count"
+column to a query that includes a JOIN to ``Book`` as well as a GROUP BY
+owner id. This will yield :class:`.Row` objects that each contain two
+entries, one for ``User`` and one for ``func.count(Book.id)``::
+
+ >>> from sqlalchemy import func
+ >>> stmt = (
+ ... select(User, func.count(Book.id)).
+ ... join_from(User, Book).
+ ... group_by(Book.owner_id)
+ ... )
+ >>> for user, book_count in session.execute(stmt):
+ ... print(f"Username: {user.name} Number of books: {book_count}")
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname,
+ count(book.id) AS count_1
+ FROM user_account JOIN book ON user_account.id = book.owner_id
+ GROUP BY book.owner_id
+ [...] ()
+ {stop}Username: spongebob Number of books: 3
+ Username: sandy Number of books: 3
+
+In the above example, the ``User`` entity and the "book count" SQL expression
+are returned separately. However, a popular use case is to produce a query that
+will yield ``User`` objects alone, which can be iterated for example using
+:meth:`_orm.Session.scalars`, where the result of the ``func.count(Book.id)``
+SQL expression is applied *dynamically* to each ``User`` entity. The end result
+would be similar to the case where an arbitrary SQL expression were mapped to
+the class using :func:`_orm.column_property`, except that the SQL expression
+can be modified at query time. For this use case SQLAlchemy provides the
+:func:`_orm.with_expression` loader option, which when combined with the mapper
+level :func:`_orm.query_expression` directive may produce this result.
+
+.. comment
+
+ >>> class Base(DeclarativeBase):
+ ... pass
+ ...
+ >>> class Book(Base):
+ ... __tablename__ = "book"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... owner_id: Mapped[int] = mapped_column(ForeignKey("user_account.id"))
+ ... title: Mapped[str]
+ ... summary: Mapped[str] = mapped_column(Text)
+ ... cover_photo: Mapped[bytes] = mapped_column(LargeBinary)
+ ... def __repr__(self) -> str:
+ ... return f"Book(id={self.id!r}, title={self.title!r})"
+
+
+To apply :func:`_orm.with_expression` to a query, the mapped class must have
+pre-configured an ORM mapped attribute using the :func:`_orm.query_expression`
+directive; this directive will produce an attribute on the mapped
+class that is suitable for receiving query-time SQL expressions. Below
+we add a new attribute ``User.book_count`` to ``User``. This ORM mapped attribute
+is read-only and has no default value; accessing it on a loaded instance will
+normally produce ``None``::
+
+ >>> from sqlalchemy.orm import query_expression
+ >>> class User(Base):
+ ... __tablename__ = "user_account"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... name: Mapped[str]
+ ... fullname: Mapped[Optional[str]]
+ ... book_count: Mapped[int] = query_expression()
+ ... def __repr__(self) -> str:
+ ... return f"User(id={self.id!r}, name={self.name!r}, fullname={self.fullname!r})"
+ ...
+
+With the ``User.book_count`` attribute configured in our mapping, we may populate
+it with data from a SQL expression using the
+:func:`_orm.with_expression` loader option to apply a custom SQL expression
+to each ``User`` object as it's loaded::
+
+
+ >>> from sqlalchemy.orm import with_expression
+ >>> stmt = (
+ ... select(User).
+ ... join_from(User, Book).
+ ... group_by(Book.owner_id).
+ ... options(with_expression(User.book_count, func.count(Book.id)))
+ ... )
+ >>> for user in session.scalars(stmt):
+ ... print(f"Username: {user.name} Number of books: {user.book_count}")
+ {opensql}SELECT count(book.id) AS count_1, user_account.id, user_account.name,
+ user_account.fullname
+ FROM user_account JOIN book ON user_account.id = book.owner_id
+ GROUP BY book.owner_id
+ [...] ()
+ {stop}Username: spongebob Number of books: 3
+ Username: sandy Number of books: 3
+
+Above, we moved our ``func.count(Book.id)`` expression out of the columns
+argument of the :func:`_sql.select` construct and into the :func:`_orm.with_expression`
+loader option. The ORM then considers this to be a special column load
+option that's applied dynamically to the statement.
+
+The :func:`.query_expression` mapping has these caveats:
+
+* On an object where :func:`_orm.with_expression` was not used to populate
+ the attribute, the attribute on an object instance will have the value
+ ``None``, unless on the mapping the :paramref:`_orm.query_expression.default_expr`
+ parameter is set to a default SQL expression.
+
+* The :func:`_orm.with_expression` value **does not populate on an object that is
+ already loaded**, unless :ref:`orm_queryguide_populate_existing` is used.
+ The example below will **not work**, as the ``A`` object
+ is already loaded:
+
+ .. sourcecode:: python
+
+ # load the first A
+ obj = session.scalars(select(A).order_by(A.id)).first()
+
+ # load the same A with an option; expression will **not** be applied
+ # to the already-loaded object
+ obj = session.scalars(
+ select(A).options(with_expression(A.expr, some_expr))
+ ).first()
+
+ To ensure the attribute is re-loaded on an existing object, use the
+ :ref:`orm_queryguide_populate_existing` execution option to ensure
+ all columns are re-populated:
+
+ .. sourcecode:: python
+
+ obj = session.scalars(
+ select(A).
+ options(with_expression(A.expr, some_expr)).
+ execution_options(populate_existing=True)
+ ).first()
+
+* The :func:`_orm.with_expression` SQL expression **is lost when the object is
+ expired**. Once the object is expired, either via :meth:`.Session.expire`
+ or via the expire_on_commit behavior of :meth:`.Session.commit`, the SQL
+ expression and its value are no longer associated with the attribute and it
+ will return ``None`` on subsequent access.
+
+* The mapped attribute currently **cannot** be applied to other parts of the
+ query, such as the WHERE clause or the ORDER BY clause, while also making
+ use of the ad-hoc expression; that is, this won't work:
+
+ .. sourcecode:: python
+
+ # can't refer to A.expr elsewhere in the query
+ stmt = select(A).options(
+ with_expression(A.expr, A.x + A.y)
+ ).filter(A.expr > 5).order_by(A.expr)
+
+ The ``A.expr`` expression will resolve to NULL in the above WHERE clause
+ and ORDER BY clause. To use the expression throughout the query, assign to a
+ variable and use that:
+
+ .. sourcecode:: python
+
+ # assign desired expression up front, then refer to that in
+ # the query
+ a_expr = A.x + A.y
+ stmt = select(A).options(
+ with_expression(A.expr, a_expr)
+ ).filter(a_expr > 5).order_by(a_expr)
+
+.. seealso::
+
+ The :func:`_orm.with_expression` option is a special option used to
+ apply SQL expressions to mapped classes dynamically at query time.
+ For ordinary fixed SQL expressions configured on mappers,
+ see the section :ref:`mapper_sql_expressions`.
+
+Column Loading API
+-------------------
+
+.. autofunction:: defer
+
+.. autofunction:: deferred
+
+.. autofunction:: query_expression
+
+.. autofunction:: load_only
+
+.. autofunction:: undefer
+
+.. autofunction:: undefer_group
+
+.. autofunction:: with_expression
+
+.. comment
+
+ >>> session.close()
+ >>> conn.close()
+ ROLLBACK...
\ No newline at end of file
--- /dev/null
+
+this is the old content for reference while the new section is written
+
+remove when complete
+
+
+Load Only and Wildcard Options
+------------------------------
+
+The ORM loader option system supports the concept of "wildcard" loader options,
+in which a loader option can be passed an asterisk ``"*"`` to indicate that
+a particular option should apply to all applicable attributes of a mapped
+class. Such as, if we wanted to load the ``Book`` class but only
+the "summary" and "excerpt" columns, we could say::
+
+ from sqlalchemy.orm import defer
+ from sqlalchemy.orm import undefer
+ from sqlalchemy import select
+
+ stmt = select(Book).options(
+ defer('*'), undefer(Book.summary), undefer(Book.excerpt))
+
+ book_objs = session.scalars(stmt).all()
+
+Above, the :func:`.defer` option is applied using a wildcard to all column
+attributes on the ``Book`` class. Then, the :func:`.undefer` option is used
+against the "summary" and "excerpt" fields so that they are the only columns
+loaded up front. A query for the above entity will include only the "summary"
+and "excerpt" fields in the SELECT, along with the primary key columns which
+are always used by the ORM.
+
+A similar function is available with less verbosity by using the
+:func:`_orm.load_only` option. This is a so-called **exclusionary** option
+which will apply deferred behavior to all column attributes except those
+that are named::
+
+ from sqlalchemy.orm import load_only
+ from sqlalchemy import select
+
+ stmt = select(Book).options(load_only(Book.summary, Book.excerpt))
+
+ book_objs = session.scalars(stmt).all()
+
+Wildcard and Exclusionary Options with Multiple-Entity Queries
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Wildcard options and exclusionary options such as :func:`.load_only` may
+only be applied to a single entity at a time within a statement.
+To suit the less common case where a statement is returning multiple
+primary entities at once, a special calling style may be required in order
+to apply a wildcard or exclusionary option to a specific entity, which is to use the
+:class:`_orm.Load` object to indicate the starting entity for a deferral option.
+Such as, if we were loading ``Book`` and ``Author`` at once, the ORM
+will raise an informative error if we try to apply :func:`.load_only` to
+both at once. Instead, we may use :class:`_orm.Load` to apply the option
+to either or both of ``Book`` and ``Author`` individually::
+
+ from sqlalchemy.orm import Load
+
+ stmt = select(Book, Author).join(Book.author)
+ stmt = stmt.options(
+ Load(Book).load_only(Book.summary, Book.excerpt)
+ )
+ book_author_objs = session.execute(stmt).all()
+
+Above, :class:`_orm.Load` is used in conjunction with the exclusionary option
+:func:`.load_only` so that the deferral of all other columns only takes
+place for the ``Book`` class and not the ``Author`` class. Again,
+the ORM should raise an informative error message when
+the above calling style is actually required that describes those cases
+where explicit use of :class:`_orm.Load` is needed.
+
+
+
+
+Deferred Column Loading
+=======================
+
+Deferred column loading allows particular columns of a table to be loaded only
+upon direct access, instead of when the entity is queried using
+:class:`_sql.Select` or :class:`_orm.Query`. This feature is useful when one wants to avoid
+loading a large text or binary field into memory when it's not needed.
+
+Configuring Deferred Loading at Mapper Configuration Time
+---------------------------------------------------------
+
+First introduced at :ref:`orm_declarative_column_options` and
+:ref:`orm_imperative_table_column_options`, the
+:paramref:`_orm.mapped_column.deferred` parameter of :func:`_orm.mapped_column`,
+as well as the :func:`_orm.deferred` ORM function may be used to indicate mapped
+columns as "deferred" at mapper configuration time. With this configuration,
+the target columns will not be loaded in SELECT statements by default, and
+will instead only be loaded "lazily" when their corresponding attribute is
+accessed on a mapped instance. Deferral can be configured for individual
+columns or groups of columns that will load together when any of them
+are accessed.
+
+In the example below, using :ref:`Declarative Table <orm_declarative_table>`
+configuration, we define a mapping that will load each of
+``.excerpt`` and ``.photo`` in separate, individual-row SELECT statements when each
+attribute is first referenced on the individual object instance::
+
+ from sqlalchemy import Text
+ from sqlalchemy.orm import DeclarativeBase
+ from sqlalchemy.orm import Mapped
+ from sqlalchemy.orm import mapped_column
+
+ class Base(DeclarativeBase):
+ pass
+
+ class Book(Base):
+ __tablename__ = 'book'
+
+ book_id: Mapped[int] = mapped_column(primary_key=True)
+ title: Mapped[str]
+ summary: Mapped[str]
+ excerpt: Mapped[str] = mapped_column(Text, deferred=True)
+ photo: Mapped[bytes] = mapped_column(deferred=True)
+
+A :func:`_sql.select` construct for the above mapping will not include
+``excerpt`` and ``photo`` by default::
+
+ >>> from sqlalchemy import select
+ >>> print(select(Book))
+ SELECT book.book_id, book.title, book.summary
+ FROM book
+
+When an object of type ``Book`` is loaded by the ORM, accessing the
+``.excerpt`` or ``.photo`` attributes will instead :term:`lazy load` the
+data from each column using a new SQL statement.
+
+When using :ref:`Imperative Table <orm_imperative_table_configuration>`
+or fully :ref:`Imperative <orm_imperative_mapping>` configuration, the
+:func:`_orm.deferred` construct should be used instead, passing the
+target :class:`_schema.Column` object to be mapped as the argument::
+
+ from sqlalchemy import Column, Integer, LargeBinary, String, Table, Text
+ from sqlalchemy.orm import DeclarativeBase
+ from sqlalchemy.orm import deferred
+
+
+ class Base(DeclarativeBase):
+ pass
+
+
+ book = Table(
+ "book",
+ Base.metadata,
+ Column("book_id", Integer, primary_key=True),
+ Column("title", String),
+ Column("summary", String),
+ Column("excerpt", Text),
+ Column("photo", LargeBinary),
+ )
+
+
+ class Book(Base):
+ __table__ = book
+
+ excerpt = deferred(book.c.excerpt)
+ photo = deferred(book.c.photo)
+
+
+Deferred columns can be associated with a "group" name, so that they load
+together when any of them are first accessed. When using
+:func:`_orm.mapped_column`, this group name may be specified using the
+:paramref:`_orm.mapped_column.deferred_group` parameter, which implies
+:paramref:`_orm.mapped_column.deferred` if that parameter is not already
+set. When using :func:`_orm.deferred`, the :paramref:`_orm.deferred.group`
+parameter may be used.
+
+The example below defines a mapping with a ``photos`` deferred group. When
+any attribute within the group, ``.photo1``, ``.photo2``, or ``.photo3``,
+is accessed on an instance of ``Book``, all three columns will be loaded in one SELECT
+statement. The ``.excerpt`` column however will only be loaded when it
+is directly accessed::
+
+ from sqlalchemy import Text
+ from sqlalchemy.orm import DeclarativeBase
+ from sqlalchemy.orm import Mapped
+ from sqlalchemy.orm import mapped_column
+
+ class Base(DeclarativeBase):
+ pass
+
+ class Book(Base):
+ __tablename__ = 'book'
+
+ book_id: Mapped[int] = mapped_column(primary_key=True)
+ title: Mapped[str]
+ summary: Mapped[str]
+ excerpt: Mapped[str] = mapped_column(Text, deferred=True)
+ photo1: Mapped[bytes] = mapped_column(deferred_group="photos")
+ photo2: Mapped[bytes] = mapped_column(deferred_group="photos")
+ photo3: Mapped[bytes] = mapped_column(deferred_group="photos")
+
+
+.. _deferred_options:
+
+Deferred Column Loader Query Options
+------------------------------------
+At query time, the :func:`_orm.defer`, :func:`_orm.undefer` and
+:func:`_orm.undefer_group` loader options may be used to further control the
+"deferral behavior" of mapped columns.
+
+Columns can be marked as "deferred" or reset to "undeferred" at query time
+using options which are passed to the :meth:`_sql.Select.options` method; the most
+basic query options are :func:`_orm.defer` and
+:func:`_orm.undefer`::
+
+ from sqlalchemy.orm import defer
+ from sqlalchemy.orm import undefer
+ from sqlalchemy import select
+
+ stmt = select(Book)
+ stmt = stmt.options(defer(Book.summary), undefer(Book.excerpt))
+ book_objs = session.scalars(stmt).all()
+
+
+Above, the "summary" column will not load until accessed, and the "excerpt"
+column will load immediately even if it was mapped as a "deferred" column.
+
+:func:`_orm.deferred` attributes which are marked with a "group" can be undeferred
+using :func:`_orm.undefer_group`, sending in the group name::
+
+ from sqlalchemy.orm import undefer_group
+ from sqlalchemy import select
+
+ stmt = select(Book)
+ stmt = stmt.options(undefer_group('photos'))
+ book_objs = session.scalars(stmt).all()
+
+
+.. _deferred_loading_w_multiple:
+
+Deferred Loading across Multiple Entities
+-----------------------------------------
+
+Column deferral may also be used for a statement that loads multiple types of
+entities at once, by referring to the appropriate class-bound attribute
+within the :func:`_orm.defer` function. Supposing ``Book`` has a
+relationship ``Book.author`` to a related class ``Author``, we could write
+a query as follows which will defer the ``Author.bio`` column::
+
+ from sqlalchemy.orm import defer
+ from sqlalchemy import select
+
+ stmt = select(Book, Author).join(Book.author)
+ stmt = stmt.options(defer(Author.bio))
+
+ book_author_objs = session.execute(stmt).all()
+
+
+Column deferral options may also indicate that they take place along various
+relationship paths, which are themselves often :ref:`eagerly loaded
+<loading_toplevel>` with loader options. All relationship-bound loader options
+support chaining onto additional loader options, which include loading for
+further levels of relationships, as well as onto column-oriented attributes at
+that path. Such as, to load ``Author`` instances, then joined-eager-load the
+``Author.books`` collection for each author, then apply deferral options to
+column-oriented attributes onto each ``Book`` entity from that relationship,
+the :func:`_orm.joinedload` loader option can be combined with the :func:`.load_only`
+option (described later in this section) to defer all ``Book`` columns except
+those explicitly specified::
+
+ from sqlalchemy.orm import joinedload
+ from sqlalchemy import select
+
+ stmt = select(Author)
+ stmt = stmt.options(
+ joinedload(Author.books).load_only(Book.summary, Book.excerpt)
+ )
+
+ author_objs = session.scalars(stmt).all()
+
+Option structures as above can also be organized in more complex ways, such
+as hierarchically using the :meth:`_orm.Load.options`
+method, which allows multiple sub-options to be chained to a common parent
+option at once. The example below illustrates a more complex structure::
+
+ from sqlalchemy.orm import defer
+ from sqlalchemy.orm import joinedload
+ from sqlalchemy.orm import load_only
+ from sqlalchemy import select
+
+ stmt = select(Author)
+ stmt = stmt.options(
+        joinedload(Author.books).options(
+ load_only(Book.summary, Book.excerpt),
+ joinedload(Book.citations).options(
+ joinedload(Citation.author),
+ defer(Citation.fulltext)
+ )
+ )
+ )
+ author_objs = session.scalars(stmt).all()
+
+
+Another way to apply options to a path is to use the :func:`_orm.defaultload`
+function. This function is used to indicate a particular path within a loader
+option structure without actually setting any options at that level, so that further
+sub-options may be applied. The :func:`_orm.defaultload` function can be used
+to create the same structure as we did above using :meth:`_orm.Load.options` as::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import defaultload
+
+ stmt = select(Author)
+ stmt = stmt.options(
+        joinedload(Author.books).load_only(Book.summary, Book.excerpt),
+        defaultload(Author.books).joinedload(Book.citations).joinedload(Citation.author),
+        defaultload(Author.books).defaultload(Book.citations).defer(Citation.fulltext)
+ )
+
+ author_objs = session.scalars(stmt).all()
+
+.. seealso::
+
+ :ref:`relationship_loader_options` - targeted towards relationship loading
+
+
+.. _deferred_raiseload:
+
+Raiseload for Deferred Columns
+------------------------------
+
+.. versionadded:: 1.4
+
+The :func:`_orm.defer` loader option and the corresponding loader strategy also
+support the concept of "raiseload", which is a loader strategy that will raise
+:class:`.InvalidRequestError` if the attribute is accessed such that it would
+need to emit a SQL query in order to be loaded. This behavior is the
+column-based equivalent of the :func:`_orm.raiseload` feature for relationship
+loading, discussed at :ref:`prevent_lazy_with_raiseload`. Using the
+:paramref:`_orm.defer.raiseload` parameter on the :func:`_orm.defer` option,
+an exception is raised if the attribute is accessed::
+
+ book = session.scalar(
+ select(Book).options(defer(Book.summary, raiseload=True)).limit(1)
+ )
+
+ # would raise an exception
+ book.summary
+
+Deferred "raiseload" can be configured at the mapper level via
+:paramref:`_orm.deferred.raiseload` on either :func:`_orm.mapped_column`
+or :func:`.deferred`, so that an explicit
+:func:`.undefer` is required in order for the attribute to be usable.
+Below is a :ref:`Declarative table <orm_declarative_table>` configuration example::
+
+
+ from sqlalchemy import Text
+ from sqlalchemy.orm import DeclarativeBase
+ from sqlalchemy.orm import Mapped
+ from sqlalchemy.orm import mapped_column
+
+ class Base(DeclarativeBase):
+ pass
+
+ class Book(Base):
+ __tablename__ = 'book'
+
+ book_id: Mapped[int] = mapped_column(primary_key=True)
+ title: Mapped[str]
+ summary: Mapped[str] = mapped_column(raiseload=True)
+ excerpt: Mapped[str] = mapped_column(Text, raiseload=True)
+
+Alternatively, the example below illustrates the same mapping using an
+:ref:`Imperative table <orm_imperative_table_configuration>` configuration::
+
+    from sqlalchemy import Column, Integer, String, Table, Text
+ from sqlalchemy.orm import DeclarativeBase
+ from sqlalchemy.orm import deferred
+
+
+ class Base(DeclarativeBase):
+ pass
+
+
+ book = Table(
+ "book",
+ Base.metadata,
+ Column("book_id", Integer, primary_key=True),
+ Column("title", String),
+ Column("summary", String),
+ Column("excerpt", Text),
+ )
+
+
+ class Book(Base):
+ __table__ = book
+
+ summary = deferred(book.c.summary, raiseload=True)
+ excerpt = deferred(book.c.excerpt, raiseload=True)
+
+With both mappings, if we wish to have either or both of ``.excerpt``
+or ``.summary`` available on an object when loaded, we make use of the
+:func:`_orm.undefer` loader option::
+
+ book_w_excerpt = session.scalars(
+        select(Book).options(undefer(Book.excerpt)).where(Book.book_id == 12)
+ ).first()
+
+The :func:`_orm.undefer` option will populate the ``.excerpt`` attribute
+above, even if the ``Book`` object were already loaded, assuming the
+``.excerpt`` field was not populated by some other means previously.
+
--- /dev/null
+.. highlight:: pycon+sql
+.. |prev| replace:: :doc:`inheritance`
+.. |next| replace:: :doc:`columns`
+
+.. include:: queryguide_nav_include.rst
+
+.. doctest-include _dml_setup.rst
+
+.. _orm_expression_update_delete:
+
+ORM-Enabled INSERT, UPDATE, and DELETE statements
+=================================================
+
+.. admonition:: About this Document
+
+ This section makes use of ORM mappings first illustrated in the
+ :ref:`unified_tutorial`, shown in the section
+ :ref:`tutorial_declaring_mapped_classes`, as well as inheritance
+ mappings shown in the section :ref:`inheritance_toplevel`.
+
+ :doc:`View the ORM setup for this page <_dml_setup>`.
+
+The :meth:`_orm.Session.execute` method, in addition to handling ORM-enabled
+:class:`_sql.Select` objects, can also accommodate ORM-enabled
+:class:`_sql.Insert`, :class:`_sql.Update` and :class:`_sql.Delete` objects,
+in various ways which are each used to INSERT, UPDATE, or DELETE
+many database rows at once. There is also dialect-specific support
+for ORM-enabled "upserts", which are INSERT statements that automatically
+make use of UPDATE for rows that already exist.
+
+The following table summarizes the calling forms that are discussed in this
+document:
+
+===================================================== ========================================== ======================================================================== ========================================================= ============================================================================
+ORM Use Case DML Construct Used Data is passed using ... Supports RETURNING? Supports Multi-Table Mappings?
+===================================================== ========================================== ======================================================================== ========================================================= ============================================================================
+:ref:`orm_queryguide_bulk_insert` :func:`_dml.insert` List of dictionaries to :paramref:`_orm.Session.execute.params` :ref:`yes <orm_queryguide_bulk_insert_returning>` :ref:`yes <orm_queryguide_insert_joined_table_inheritance>`
+:ref:`orm_queryguide_bulk_insert_w_sql` :func:`_dml.insert` :paramref:`_orm.Session.execute.params` with :meth:`_dml.Insert.values` :ref:`yes <orm_queryguide_bulk_insert_w_sql>` :ref:`yes <orm_queryguide_insert_joined_table_inheritance>`
+:ref:`orm_queryguide_insert_values` :func:`_dml.insert` List of dictionaries to :meth:`_dml.Insert.values` :ref:`yes <orm_queryguide_insert_values>` no
+:ref:`orm_queryguide_upsert` :func:`_dml.insert` List of dictionaries to :meth:`_dml.Insert.values` :ref:`yes <orm_queryguide_upsert_returning>` no
+:ref:`orm_queryguide_bulk_update` :func:`_dml.update` List of dictionaries to :paramref:`_orm.Session.execute.params` no :ref:`yes <orm_queryguide_bulk_update_joined_inh>`
+:ref:`orm_queryguide_update_delete_where` :func:`_dml.update`, :func:`_dml.delete` keywords to :meth:`_dml.Update.values` :ref:`yes <orm_queryguide_update_delete_where_returning>` :ref:`partial, with manual steps <orm_queryguide_update_delete_joined_inh>`
+===================================================== ========================================== ======================================================================== ========================================================= ============================================================================
+
+
+
+.. _orm_queryguide_bulk_insert:
+
+ORM Bulk INSERT Statements
+--------------------------
+
+A :func:`_dml.insert` construct can be constructed in terms of an ORM class
+and passed to the :meth:`_orm.Session.execute` method. A list of parameter
+dictionaries sent to the :paramref:`_orm.Session.execute.params` parameter, separate
+from the :class:`_dml.Insert` object itself, will invoke **bulk INSERT mode**
+for the statement, which essentially means the operation will optimize
+as much as possible for many rows::
+
+ >>> from sqlalchemy import insert
+ >>> session.execute(
+ ... insert(User),
+ ... [
+ ... {"name": "spongebob", "fullname": "Spongebob Squarepants"},
+ ... {"name": "sandy", "fullname": "Sandy Cheeks"},
+ ... {"name": "patrick", "fullname": "Patrick Star"},
+ ... {"name": "squidward", "fullname": "Squidward Tentacles"},
+ ... {"name": "ehkrabs", "fullname": "Eugene H. Krabs"},
+ ... ]
+ ... )
+ {opensql}INSERT INTO user_account (name, fullname) VALUES (?, ?)
+ [...] [('spongebob', 'Spongebob Squarepants'), ('sandy', 'Sandy Cheeks'), ('patrick', 'Patrick Star'),
+ ('squidward', 'Squidward Tentacles'), ('ehkrabs', 'Eugene H. Krabs')]
+ {stop}<...>
+
+The parameter dictionaries contain key/value pairs which may correspond to ORM
+mapped attributes that line up with mapped :class:`._schema.Column`
+or :func:`_orm.mapped_column` declarations, as well as with
+:ref:`composite <mapper_composite>` declarations. The keys should match
+the **ORM mapped attribute name** and **not** the actual database column name,
+if these two names happen to be different.
+
+.. versionchanged:: 2.0 Passing an :class:`_dml.Insert` construct to the
+ :meth:`_orm.Session.execute` method now invokes a "bulk insert", which
+ makes use of the same functionality as the legacy
+ :meth:`_orm.Session.bulk_insert_mappings` method. This is a behavior change
+ compared to the 1.x series where the :class:`_dml.Insert` would be interpreted
+ in a Core-centric way, using column names for value keys; ORM attribute
+ keys are now accepted. Core-style functionality is available by passing
+ the execution option ``{"dml_strategy": "raw"}`` to the
+ :paramref:`_orm.Session.execution_options` parameter of
+ :meth:`_orm.Session.execute`.
+
+.. _orm_queryguide_bulk_insert_returning:
+
+Getting new objects with RETURNING
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. Setup code, not for display
+
+ >>> session.rollback(); session.connection()
+ ROLLBACK...
+
+The bulk ORM insert feature supports INSERT..RETURNING for selected
+backends, which can return a :class:`.Result` object that may yield individual
+columns back as well as fully constructed ORM objects corresponding
+to the new rows. INSERT..RETURNING requires the use of a backend that
+supports SQL RETURNING syntax as well as support for :term:`executemany`
+with RETURNING; this feature is available with all
+:ref:`SQLAlchemy-included <included_dialects>` backends
+with the exception of MySQL (MariaDB is included).
+
+As an example, we can run the same statement as before, adding use of the
+:meth:`.UpdateBase.returning` method, passing the full ``User`` entity
+as what we'd like to return. In the example below, we also
+make use of the :meth:`_orm.Session.scalars` method in order to
+invoke the statement, which is an optional
+facade around the :meth:`_orm.Session.execute` method that will yield a
+:class:`.ScalarResult` instead of a
+:class:`.Result` object, which for convenience will yield ``User`` objects
+directly without packaging them into :class:`.Row` objects::
+
+ >>> users = session.scalars(
+ ... insert(User).returning(User),
+ ... [
+ ... {"name": "spongebob", "fullname": "Spongebob Squarepants"},
+ ... {"name": "sandy", "fullname": "Sandy Cheeks"},
+ ... {"name": "patrick", "fullname": "Patrick Star"},
+ ... {"name": "squidward", "fullname": "Squidward Tentacles"},
+ ... {"name": "ehkrabs", "fullname": "Eugene H. Krabs"},
+ ... ]
+ ... )
+ {opensql}INSERT INTO user_account (name, fullname)
+ VALUES (?, ?), (?, ?), (?, ?), (?, ?), (?, ?) RETURNING id, name, fullname, species
+ [... (insertmanyvalues)] ('spongebob', 'Spongebob Squarepants', 'sandy',
+ 'Sandy Cheeks', 'patrick', 'Patrick Star', 'squidward', 'Squidward Tentacles',
+ 'ehkrabs', 'Eugene H. Krabs')
+ {stop}>>> print(users.all())
+ [User(name='spongebob', fullname='Spongebob Squarepants'),
+ User(name='sandy', fullname='Sandy Cheeks'),
+ User(name='patrick', fullname='Patrick Star'),
+ User(name='squidward', fullname='Squidward Tentacles'),
+ User(name='ehkrabs', fullname='Eugene H. Krabs')]
+
+In the above example, the rendered SQL takes on the form used by the
+:ref:`insertmanyvalues <engine_insertmanyvalues>` feature as requested by the
+SQLite backend, where individual parameter dictionaries are inlined into a
+single INSERT statement so that RETURNING may be used.
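The inlining performed by "insertmanyvalues" can be sketched in plain Python. The following is a simplified illustration of the batching idea only, not SQLAlchemy's actual implementation; the ``batch_insert_sql`` helper and the hardcoded RETURNING clause are hypothetical:

```python
def batch_insert_sql(table, columns, params):
    """Render one INSERT with an inlined VALUES group per parameter
    dictionary, so that RETURNING may be used with many rows at once.
    Simplified sketch only - not SQLAlchemy's internal code."""
    # one "(?, ?, ...)" group per row
    placeholder_group = "(%s)" % ", ".join("?" for _ in columns)
    values_clause = ", ".join(placeholder_group for _ in params)
    sql = "INSERT INTO %s (%s) VALUES %s RETURNING id" % (
        table,
        ", ".join(columns),
        values_clause,
    )
    # flatten the per-row dictionaries into one positional parameter list
    flat_params = [row[col] for row in params for col in columns]
    return sql, flat_params


sql, flat = batch_insert_sql(
    "user_account",
    ["name", "fullname"],
    [
        {"name": "spongebob", "fullname": "Spongebob Squarepants"},
        {"name": "sandy", "fullname": "Sandy Cheeks"},
    ],
)
# sql: INSERT INTO user_account (name, fullname)
#      VALUES (?, ?), (?, ?) RETURNING id
```

The real feature additionally splits very large parameter lists into multiple batches to stay within database parameter limits.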
+
+.. versionchanged:: 2.0 The ORM :class:`.Session` now interprets RETURNING
+ clauses from :class:`_dml.Insert`, :class:`_dml.Update`, and
+ even :class:`_dml.Delete` constructs in an ORM context, meaning a mixture
+ of column expressions and ORM mapped entities may be passed to the
+ :meth:`_dml.Insert.returning` method which will then be delivered
+ in the way that ORM results are delivered from constructs such as
+ :class:`_sql.Select`, including that mapped entities will be delivered
+ in the result as ORM mapped objects. Limited support for ORM loader
+ options such as :func:`_orm.load_only` and :func:`_orm.selectinload`
+ is also present.
+
+.. _orm_queryguide_insert_heterogeneous_params:
+
+Using Heterogeneous Parameter Dictionaries
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. Setup code, not for display
+
+ >>> session.rollback(); session.connection()
+ ROLLBACK...
+
+The ORM bulk insert feature supports lists of parameter dictionaries that are
+"heterogeneous", meaning that individual dictionaries within the list may have
+different keys. When this condition is detected,
+the ORM will break up the parameter dictionaries into groups corresponding
+to each set of keys and batch accordingly into separate INSERT statements::
+
+ >>> users = session.scalars(
+ ... insert(User).returning(User),
+ ... [
+ ... {"name": "spongebob", "fullname": "Spongebob Squarepants", "species": "Sea Sponge"},
+ ... {"name": "sandy", "fullname": "Sandy Cheeks", "species": "Squirrel"},
+ ... {"name": "patrick", "species": "Starfish"},
+ ... {"name": "squidward", "fullname": "Squidward Tentacles", "species": "Squid"},
+ ... {"name": "ehkrabs", "fullname": "Eugene H. Krabs", "species": "Crab"},
+ ... ]
+ ... )
+ {opensql}INSERT INTO user_account (name, fullname, species) VALUES (?, ?, ?), (?, ?, ?) RETURNING id, name, fullname, species
+ [... (insertmanyvalues)] ('spongebob', 'Spongebob Squarepants', 'Sea Sponge', 'sandy', 'Sandy Cheeks', 'Squirrel')
+ INSERT INTO user_account (name, species) VALUES (?, ?) RETURNING id, name, fullname, species
+ [...] ('patrick', 'Starfish')
+ INSERT INTO user_account (name, fullname, species) VALUES (?, ?, ?), (?, ?, ?) RETURNING id, name, fullname, species
+ [... (insertmanyvalues)] ('squidward', 'Squidward Tentacles', 'Squid', 'ehkrabs', 'Eugene H. Krabs', 'Crab')
+
+In the above example, the five parameter dictionaries passed were translated
+into three INSERT statements, grouped along the specific sets of keys
+in each dictionary while still maintaining row order, i.e.
+``("name", "fullname", "species")``, ``("name", "species")``, ``("name", "fullname", "species")``.
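The grouping behavior can be sketched in plain Python: contiguous dictionaries that share the same key set are collected into one batch, so that row order is preserved. This is a simplified illustration of the idea, not the ORM's internal code:

```python
from itertools import groupby


def group_heterogeneous_params(params):
    """Group contiguous parameter dictionaries by their tuple of keys,
    preserving row order - a simplified sketch of how heterogeneous
    parameter lists are broken into per-statement batches."""
    return [
        (keys, list(group))
        for keys, group in groupby(params, key=lambda d: tuple(d))
    ]


batches = group_heterogeneous_params(
    [
        {"name": "spongebob", "fullname": "Spongebob Squarepants", "species": "Sea Sponge"},
        {"name": "sandy", "fullname": "Sandy Cheeks", "species": "Squirrel"},
        {"name": "patrick", "species": "Starfish"},
        {"name": "squidward", "fullname": "Squidward Tentacles", "species": "Squid"},
    ]
)
# three batches: ("name", "fullname", "species") x 2 rows,
# ("name", "species") x 1 row, ("name", "fullname", "species") x 1 row
```

Note that because row order is maintained, the same key set appearing non-contiguously produces separate batches, as in the doctest output above.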
+
+.. _orm_queryguide_insert_joined_table_inheritance:
+
+Bulk INSERT for Joined Table Inheritance
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. Setup code, not for display
+
+ >>> session.rollback(); session.connection()
+ ROLLBACK
+ BEGIN...
+
+ORM bulk insert builds upon the internal system that is used by the
+traditional :term:`unit of work` system in order to emit INSERT statements. This means
+that for an ORM entity that is mapped to multiple tables, typically one which
+is mapped using :ref:`joined table inheritance <joined_inheritance>`, the
+bulk INSERT operation will emit an INSERT statement for each table represented
+by the mapping, correctly transferring server-generated primary key values
+to the table rows that depend upon them. The RETURNING feature is also supported
+here, where the ORM will receive :class:`.Result` objects for each INSERT
+statement executed, and will then "horizontally splice" them together so that
+the returned rows include values for all columns inserted::
+
+ >>> managers = session.scalars(
+ ... insert(Manager).returning(Manager),
+ ... [
+ ... {"name": "sandy", "manager_name": "Sandy Cheeks"},
+ ... {"name": "ehkrabs", "manager_name": "Eugene H. Krabs"},
+ ... ]
+ ... )
+ {opensql}INSERT INTO employee (name, type) VALUES (?, ?), (?, ?) RETURNING id, name, type
+ [... (insertmanyvalues)] ('sandy', 'manager', 'ehkrabs', 'manager')
+ INSERT INTO manager (id, manager_name) VALUES (?, ?), (?, ?) RETURNING id, manager_name
+ [... (insertmanyvalues)] (1, 'Sandy Cheeks', 2, 'Eugene H. Krabs')
+ {stop}>>> print(managers.all())
+ [Manager('sandy', manager_name='Sandy Cheeks'), Manager('ehkrabs', manager_name='Eugene H. Krabs')]
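The "horizontal splice" of the per-table RETURNING results can be sketched in plain Python, assuming each INSERT returns its rows in the same order. This is a simplified illustration only; the ``horizontal_splice`` helper is hypothetical and not the ORM's internal code:

```python
def horizontal_splice(base_rows, sub_rows):
    """Combine rows returned by the base-table and sub-table INSERTs
    into single rows spanning all inserted columns - a simplified
    sketch of splicing per-table RETURNING results together."""
    spliced = []
    for base, sub in zip(base_rows, sub_rows):
        # both statements return the shared primary key; keep one copy
        assert base["id"] == sub["id"]
        combined = dict(base)
        combined.update(sub)
        spliced.append(combined)
    return spliced


rows = horizontal_splice(
    [
        {"id": 1, "name": "sandy", "type": "manager"},
        {"id": 2, "name": "ehkrabs", "type": "manager"},
    ],
    [
        {"id": 1, "manager_name": "Sandy Cheeks"},
        {"id": 2, "manager_name": "Eugene H. Krabs"},
    ],
)
# each row now spans the employee and manager tables
```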
+
+.. _orm_queryguide_bulk_insert_w_sql:
+
+ORM Bulk Insert with SQL Expressions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ORM bulk insert feature supports the addition of a fixed set of
+parameters which may include SQL expressions to be applied to every target row.
+To achieve this, combine the use of the :meth:`_dml.Insert.values` method,
+passing a dictionary of parameters that will be applied to all rows,
+with the usual bulk calling form by including a list of parameter dictionaries
+that contain individual row values when invoking :meth:`_orm.Session.execute`.
+
+As an example, given an ORM mapping that includes a "timestamp" column:
+
+.. sourcecode:: python
+
+ import datetime
+
+ class LogRecord(Base):
+ __tablename__ = "log_record"
+ id: Mapped[int] = mapped_column(primary_key=True)
+ message: Mapped[str]
+ code: Mapped[str]
+ timestamp: Mapped[datetime.datetime]
+
+If we wanted to INSERT a series of ``LogRecord`` elements, each with a unique
+``message`` field, while also applying the SQL function ``now()``
+to all rows, we can pass ``timestamp`` within :meth:`_dml.Insert.values`
+and then pass the additional records using "bulk" mode::
+
+ >>> from sqlalchemy import func
+ >>> log_record_result = session.scalars(
+ ... insert(LogRecord).values(code="SQLA", timestamp=func.now()).returning(LogRecord),
+ ... [
+ ... {"message": "log message #1"},
+ ... {"message": "log message #2"},
+ ... {"message": "log message #3"},
+ ... {"message": "log message #4"},
+ ... ]
+ ... )
+ {opensql}INSERT INTO log_record (message, code, timestamp)
+ VALUES (?, ?, CURRENT_TIMESTAMP), (?, ?, CURRENT_TIMESTAMP), (?, ?, CURRENT_TIMESTAMP),
+ (?, ?, CURRENT_TIMESTAMP)
+ RETURNING id, message, code, timestamp
+ [... (insertmanyvalues)] ('log message #1', 'SQLA', 'log message #2', 'SQLA',
+ 'log message #3', 'SQLA', 'log message #4', 'SQLA')
+
+ {stop}>>> print(log_record_result.all())
+ [LogRecord('log message #1', 'SQLA', datetime.datetime(...)),
+ LogRecord('log message #2', 'SQLA', datetime.datetime(...)),
+ LogRecord('log message #3', 'SQLA', datetime.datetime(...)),
+ LogRecord('log message #4', 'SQLA', datetime.datetime(...))]
+
+
+.. _orm_queryguide_insert_values:
+
+ORM Bulk Insert with Per Row SQL Expressions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+
+.. Setup code, not for display
+
+ >>> session.rollback()
+ ROLLBACK
+ >>> session.execute(
+ ... insert(User),
+ ... [
+ ... {"name": "spongebob", "fullname": "Spongebob Squarepants", "species": "Sea Sponge"},
+ ... {"name": "sandy", "fullname": "Sandy Cheeks", "species": "Squirrel"},
+ ... {"name": "patrick", "species": "Starfish"},
+ ... {"name": "squidward", "fullname": "Squidward Tentacles", "species": "Squid"},
+ ... {"name": "ehkrabs", "fullname": "Eugene H. Krabs", "species": "Crab"},
+ ... ]
+ ... )
+ BEGIN...
+
+The :meth:`_dml.Insert.values` method itself accommodates a list of parameter
+dictionaries directly. When using the :class:`_dml.Insert` construct in this
+way, without passing any list of parameter dictionaries to the
+:paramref:`_orm.Session.execute.params` parameter, bulk ORM insert mode is not
+used, and instead the INSERT statement is rendered exactly as given and invoked
+exactly once. This mode of operation is useful for passing
+SQL expressions on a per-row basis, and is also used when invoking "upsert"
+statements with the ORM, documented later in this chapter at
+:ref:`orm_queryguide_upsert`.
+
+A contrived example of an INSERT that embeds per-row SQL expressions,
+and also demonstrates :meth:`_dml.Insert.returning` in this form, is below::
+
+
+ >>> from sqlalchemy import select
+ >>> address_result = session.scalars(
+ ... insert(Address).values(
+ ... [
+ ... {
+ ... "user_id": select(User.id).where(User.name == 'sandy'),
+ ... "email_address": "sandy@company.com"
+ ... },
+ ... {
+ ... "user_id": select(User.id).where(User.name == 'spongebob'),
+ ... "email_address": "spongebob@company.com"
+ ... },
+ ... {
+ ... "user_id": select(User.id).where(User.name == 'patrick'),
+ ... "email_address": "patrick@company.com"
+ ... },
+ ... ]
+ ... ).returning(Address),
+ ... )
+ {opensql}INSERT INTO address (user_id, email_address) VALUES
+ ((SELECT user_account.id
+ FROM user_account
+ WHERE user_account.name = ?), ?), ((SELECT user_account.id
+ FROM user_account
+ WHERE user_account.name = ?), ?), ((SELECT user_account.id
+ FROM user_account
+ WHERE user_account.name = ?), ?) RETURNING id, user_id, email_address
+ [...] ('sandy', 'sandy@company.com', 'spongebob', 'spongebob@company.com',
+ 'patrick', 'patrick@company.com')
+ {stop}>>> print(address_result.all())
+ [Address(email_address='sandy@company.com'),
+ Address(email_address='spongebob@company.com'),
+ Address(email_address='patrick@company.com')]
+
+Because bulk ORM insert mode is not used above, the following features
+are not present:
+
+* :ref:`Joined table inheritance <orm_queryguide_insert_joined_table_inheritance>`
+ or other multi-table mappings are not supported, since that would require multiple
+ INSERT statements.
+
+* :ref:`Heterogeneous parameter sets <orm_queryguide_insert_heterogeneous_params>`
+ are not supported - each element in the VALUES set must have the same
+ columns.
+
+* Core-level scale optimizations such as the batching provided by
+ :ref:`insertmanyvalues <engine_insertmanyvalues>` are not available; statements
+ will need to ensure the total number of parameters does not exceed limits
+ imposed by the backing database.
+
+For the above reasons, it is generally not recommended to use multiple
+parameter sets with :meth:`_dml.Insert.values` with ORM INSERT statements
+unless there is a clear rationale, which is either that "upsert" is being used
+or there is a need to embed per-row SQL expressions in each parameter set.
+
+.. seealso::
+
+ :ref:`orm_queryguide_upsert`
+
+
+.. _orm_queryguide_legacy_bulk_insert:
+
+Legacy Session Bulk INSERT Methods
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The :class:`_orm.Session` includes legacy methods for performing
+"bulk" INSERT and UPDATE statements. These methods share implementations
+with the SQLAlchemy 2.0 versions of these features, described
+at :ref:`orm_queryguide_bulk_insert` and :ref:`orm_queryguide_bulk_update`,
+however they lack many features, namely RETURNING support as well as support
+for session-synchronization.
+
+Code which makes use of :meth:`.Session.bulk_insert_mappings`, for example,
+can be ported as follows, starting with this mappings example::
+
+ session.bulk_insert_mappings(
+ User,
+ [{"name": "u1"}, {"name": "u2"}, {"name": "u3"}]
+ )
+
+The above is expressed using the new API as::
+
+ from sqlalchemy import insert
+ session.execute(
+ insert(User),
+ [{"name": "u1"}, {"name": "u2"}, {"name": "u3"}]
+ )
+
+.. seealso::
+
+ :ref:`orm_queryguide_legacy_bulk_update`
+
+
+.. _orm_queryguide_upsert:
+
+ORM "upsert" Statements
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Selected backends with SQLAlchemy may include dialect-specific :class:`_dml.Insert`
+constructs which additionally have the ability to perform "upserts", or INSERTs
+where an existing row in the parameter set is turned into an approximation of
+an UPDATE statement instead. By "existing row", this may mean rows
+which share the same primary key value, or may refer to other indexed
+columns within the row that are considered to be unique; this is dependent
+on the capabilities of the backend in use.
+
+The dialects included with SQLAlchemy that include dialect-specific "upsert"
+API features are:
+
+* SQLite - using :class:`_sqlite.Insert` documented at :ref:`sqlite_on_conflict_insert`
+* PostgreSQL - using :class:`_postgresql.Insert` documented at :ref:`postgresql_insert_on_conflict`
+* MySQL/MariaDB - using :class:`_mysql.Insert` documented at :ref:`mysql_insert_on_duplicate_key_update`
+
+Users should review the above sections for background on proper construction
+of these objects; in particular, the "upsert" method typically needs to
+refer back to the original statement, so the statement is usually constructed
+in two separate steps.
+
+Third party backends such as those mentioned at :ref:`external_toplevel` may
+also feature similar constructs.
+
+While SQLAlchemy does not yet have a backend-agnostic upsert construct, the above
+:class:`_dml.Insert` variants are nonetheless ORM compatible in that they may be used
+in the same way as the :class:`_dml.Insert` construct itself as documented at
+:ref:`orm_queryguide_insert_values`, that is, by embedding the desired rows
+to INSERT within the :meth:`_dml.Insert.values` method. In the example
+below, the SQLite :func:`_sqlite.insert` function is used to generate
+an :class:`_sqlite.Insert` construct that includes "ON CONFLICT DO UPDATE"
+support. The statement is then passed to :meth:`_orm.Session.execute` where
+it proceeds normally, with the additional characteristic that the
+parameter dictionaries passed to :meth:`_dml.Insert.values` are interpreted
+as ORM mapped attribute keys, rather than column names:
+
+.. Setup code, not for display
+
+ >>> session.rollback();
+ ROLLBACK
+ >>> session.execute(insert(User).values(
+ ... [
+ ... dict(name="sandy"),
+ ... dict(name="spongebob", fullname="Spongebob Squarepants"),
+ ... ]
+ ... ))
+ BEGIN...
+
+::
+
+ >>> from sqlalchemy.dialects.sqlite import insert as sqlite_upsert
+ >>> stmt = sqlite_upsert(User).values(
+ ... [
+ ... {"name": "spongebob", "fullname": "Spongebob Squarepants"},
+ ... {"name": "sandy", "fullname": "Sandy Cheeks"},
+ ... {"name": "patrick", "fullname": "Patrick Star"},
+ ... {"name": "squidward", "fullname": "Squidward Tentacles"},
+ ... {"name": "ehkrabs", "fullname": "Eugene H. Krabs"},
+ ... ]
+ ... )
+ >>> stmt = stmt.on_conflict_do_update(
+ ... index_elements=[User.name],
+ ... set_=dict(fullname=stmt.excluded.fullname)
+ ... )
+ >>> session.execute(stmt)
+ {opensql}INSERT INTO user_account (name, fullname)
+ VALUES (?, ?), (?, ?), (?, ?), (?, ?), (?, ?)
+ ON CONFLICT (name) DO UPDATE SET fullname = excluded.fullname
+ [...] ('spongebob', 'Spongebob Squarepants', 'sandy', 'Sandy Cheeks',
+ 'patrick', 'Patrick Star', 'squidward', 'Squidward Tentacles',
+ 'ehkrabs', 'Eugene H. Krabs')
+ {stop}<...>
+
+.. _orm_queryguide_upsert_returning:
+
+Using RETURNING with upsert statements
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+From the SQLAlchemy ORM's point of view, upsert statements look like regular
+:class:`_dml.Insert` constructs, which includes that :meth:`_dml.Insert.returning`
+works with upsert statements in the same way as was demonstrated at
+:ref:`orm_queryguide_insert_values`, so that any column expression or
+relevant ORM entity class may be passed. Continuing from the
+example in the previous section::
+
+ >>> result = session.scalars(stmt.returning(User), execution_options={"populate_existing": True})
+ {opensql}INSERT INTO user_account (name, fullname)
+ VALUES (?, ?), (?, ?), (?, ?), (?, ?), (?, ?)
+ ON CONFLICT (name) DO UPDATE SET fullname = excluded.fullname
+ RETURNING id, name, fullname, species
+ [...] ('spongebob', 'Spongebob Squarepants', 'sandy', 'Sandy Cheeks',
+ 'patrick', 'Patrick Star', 'squidward', 'Squidward Tentacles',
+ 'ehkrabs', 'Eugene H. Krabs')
+ {stop}>>> print(result.all())
+ [User(name='spongebob', fullname='Spongebob Squarepants'),
+ User(name='sandy', fullname='Sandy Cheeks'),
+ User(name='patrick', fullname='Patrick Star'),
+ User(name='squidward', fullname='Squidward Tentacles'),
+ User(name='ehkrabs', fullname='Eugene H. Krabs')]
+
+The example above uses RETURNING to return ORM objects for each row inserted or
+upserted by the statement. The example also adds use of the
+:ref:`orm_queryguide_populate_existing` execution option. This option indicates
+that when a particular ``User`` object is being delivered by the statement,
+that the contents of an existing ``User`` object, if one were already present
+in the :class:`_orm.Session` for its particular identity key, should be
+**replaced** with that of the new row. For a pure :class:`_dml.Insert`
+statement, this option is not significant, because every row produced is a
+brand new primary key identity. However when the :class:`_dml.Insert` also
+includes "upsert" options, it may also be yielding results from rows that
+already exist and therefore may already have a primary key identity represented
+in the :class:`_orm.Session` object's :term:`identity map`.
+
+.. seealso::
+
+ :ref:`orm_queryguide_populate_existing`
+
+
+.. _orm_queryguide_bulk_update:
+
+ORM Bulk UPDATE by Primary Key
+------------------------------
+
+.. Setup code, not for display
+
+ >>> session.rollback();
+ ROLLBACK
+ >>> session.execute(
+ ... insert(User),
+ ... [
+ ... {"name": "spongebob", "fullname": "Spongebob Squarepants"},
+ ... {"name": "sandy", "fullname": "Sandy Cheeks"},
+ ... {"name": "patrick", "fullname": "Patrick Star"},
+ ... {"name": "squidward", "fullname": "Squidward Tentacles"},
+ ... {"name": "ehkrabs", "fullname": "Eugene H. Krabs"},
+ ... ]
+ ... )
+ BEGIN ...
+ >>> session.commit(); session.connection()
+ COMMIT...
+
+The :class:`_dml.Update` construct may be used with
+:meth:`_orm.Session.execute` in a similar way as the :class:`_dml.Insert`
+statement is used as described at :ref:`orm_queryguide_bulk_insert`, passing a
+list of many parameter dictionaries, each dictionary representing an individual
+row that corresponds to a single primary key value. This use should not be
+confused with a more common way to use :class:`_dml.Update` statements with the
+ORM, using an explicit WHERE clause, which is documented at
+:ref:`orm_queryguide_update_delete_where`.
+
+For the "bulk" version of UPDATE, a :func:`_dml.update` construct is made in
+terms of an ORM class and passed to the :meth:`_orm.Session.execute` method;
+the resulting :class:`_dml.Update` object should have **no WHERE criteria or
+values**, that is, the :meth:`_dml.Update.where` and :meth:`_dml.Update.values`
+methods are not used. Passing the :class:`_dml.Update` construct along with a
+list of parameter dictionaries which each include a full primary key value will
+invoke **bulk UPDATE by primary key mode** for the statement, generating the
+appropriate WHERE criteria to match each row by primary key, and using
+:term:`executemany` to run each parameter set against the UPDATE statement::
+
+ >>> from sqlalchemy import update
+ >>> session.execute(
+ ... update(User),
+ ... [
+ ... {"id": 1, "fullname": "Spongebob Squarepants"},
+ ... {"id": 3, "fullname": "Patrick Star"},
+ ... {"id": 5, "fullname": "Eugene H. Krabs"},
+ ... ]
+ ... )
+ {opensql}UPDATE user_account SET fullname=? WHERE user_account.id = ?
+ [...] [('Spongebob Squarepants', 1), ('Patrick Star', 3), ('Eugene H. Krabs', 5)]
+ {stop}<...>
+
+Like the bulk INSERT feature, heterogeneous parameter lists are supported here
+as well, where the parameters will be grouped into sub-batches of UPDATE
+runs.
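
The sub-batching can be pictured with a plain-Python sketch; the grouping
function below is purely illustrative (it is not the ORM's internal
implementation), showing how contiguous parameter dictionaries sharing the
same keys would form separate :term:`executemany` runs:

```python
from itertools import groupby


def batch_params(params):
    """Group contiguous parameter dictionaries that share the same set
    of keys into sub-batches, illustrating how a heterogeneous
    parameter list yields more than one UPDATE run."""
    return [
        list(group)
        for _, group in groupby(params, key=lambda d: frozenset(d))
    ]


params = [
    {"id": 1, "fullname": "Spongebob Squarepants"},
    {"id": 3, "fullname": "Patrick Star"},
    # this row carries an extra "name" key, so it forms its own sub-batch
    {"id": 5, "name": "ekrabs", "fullname": "Eugene H. Krabs"},
]

batches = batch_params(params)
print([len(b) for b in batches])  # [2, 1]
```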
+
+The RETURNING feature is not available when using the "bulk UPDATE by primary
+key" feature; the list of multiple parameter dictionaries necessarily makes use
+of DBAPI :term:`executemany`, which in its usual form does not support
+result rows.
+
+
+.. versionchanged:: 2.0 Passing an :class:`_dml.Update` construct to the
+ :meth:`_orm.Session.execute` method along with a list of parameter dictionaries
+ and no WHERE criteria now invokes a "bulk update", which
+ makes use of the same functionality as the legacy
+ :meth:`_orm.Session.bulk_update_mappings` method. This is a behavior change
+ compared to the 1.x series where the :class:`_dml.Update` would only be
+ supported with explicit WHERE criteria and inline VALUES.
+
+.. _orm_queryguide_bulk_update_joined_inh:
+
+Bulk UPDATE by Primary Key for Joined Table Inheritance
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. Setup code, not for display
+
+ >>> session.execute(
+ ... insert(Manager).returning(Manager),
+ ... [
+ ... {"name": "sandy", "manager_name": "Sandy Cheeks"},
+ ... {"name": "ehkrabs", "manager_name": "Eugene H. Krabs"},
+ ... ]
+ ... ); session.commit(); session.connection()
+ INSERT...
+
+ORM bulk update has similar behavior to ORM bulk insert when using mappings
+with joined table inheritance; as described at
+:ref:`orm_queryguide_insert_joined_table_inheritance`, the bulk UPDATE
+operation will emit an UPDATE statement for each table represented in the
+mapping, for which the given parameters include values to be updated
+(non-affected tables are skipped).
+
+Example::
+
+ >>> session.execute(
+ ... update(Manager),
+ ... [
+ ... {"id": 1, "name": "scheeks", "manager_name": "Sandy Cheeks, President"},
+ ... {"id": 2, "name": "eugene", "manager_name": "Eugene H. Krabs, VP Marketing"},
+ ... ]
+ ... )
+ {opensql}UPDATE employee SET name=? WHERE employee.id = ?
+ [...] [('scheeks', 1), ('eugene', 2)]
+ UPDATE manager SET manager_name=? WHERE manager.id = ?
+ [...] [('Sandy Cheeks, President', 1), ('Eugene H. Krabs, VP Marketing', 2)]
+ {stop}<...>
+
+.. _orm_queryguide_legacy_bulk_update:
+
+Legacy Session Bulk UPDATE Methods
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+As discussed at :ref:`orm_queryguide_legacy_bulk_insert`, the
+:meth:`_orm.Session.bulk_update_mappings` method of :class:`_orm.Session` is
+the legacy form of bulk update, which the ORM makes use of internally when
+interpreting an :func:`_sql.update` statement with primary key parameters given;
+however, when the legacy method is used directly, features such as session
+synchronization are not included.
+
+The example below::
+
+ session.bulk_update_mappings(
+ User,
+ [
+            {"id": 1, "name": "scheeks", "fullname": "Sandy Cheeks, President"},
+            {"id": 2, "name": "eugene", "fullname": "Eugene H. Krabs, VP Marketing"},
+ ]
+ )
+
+is expressed using the new API as::
+
+ from sqlalchemy import update
+ session.execute(
+ update(User),
+ [
+            {"id": 1, "name": "scheeks", "fullname": "Sandy Cheeks, President"},
+            {"id": 2, "name": "eugene", "fullname": "Eugene H. Krabs, VP Marketing"},
+ ]
+ )
+
+.. seealso::
+
+ :ref:`orm_queryguide_legacy_bulk_insert`
+
+
+
+.. _orm_queryguide_update_delete_where:
+
+ORM UPDATE and DELETE with Custom WHERE Criteria
+------------------------------------------------
+
+.. Setup code, not for display
+
+ >>> session.rollback(); session.connection()
+ ROLLBACK...
+
+The :class:`_dml.Update` and :class:`_dml.Delete` constructs, when constructed
+with custom WHERE criteria (that is, using the :meth:`_dml.Update.where` and
+:meth:`_dml.Delete.where` methods), may be invoked in an ORM context
+by passing them to :meth:`_orm.Session.execute`, without using
+the :paramref:`_orm.Session.execute.params` parameter. For :class:`_dml.Update`,
+the values to be updated should be passed using :meth:`_dml.Update.values`.
+
+This mode of use differs
+from the feature described previously at :ref:`orm_queryguide_bulk_update`
+in that the ORM uses the given WHERE clause as is, rather than fixing the
+WHERE clause to be by primary key. This means that the single UPDATE or
+DELETE statement can affect many rows at once.
+
+As an example, below an UPDATE is emitted that affects the "fullname" field
+of multiple rows::
+
+ >>> from sqlalchemy import update
+ >>> stmt = update(User).where(User.name.in_(["squidward", "sandy"])).values(fullname="Name starts with S")
+ >>> session.execute(stmt)
+ {opensql}UPDATE user_account SET fullname=? WHERE user_account.name IN (?, ?)
+ [...] ('Name starts with S', 'squidward', 'sandy')
+ {stop}<...>
+
+
+For a DELETE, an example of deleting rows based on criteria::
+
+ >>> from sqlalchemy import delete
+ >>> stmt = delete(User).where(User.name.in_(["squidward", "sandy"]))
+ >>> session.execute(stmt)
+ {opensql}DELETE FROM user_account WHERE user_account.name IN (?, ?)
+ [...] ('squidward', 'sandy')
+ {stop}<...>
+
+.. Setup code, not for display
+
+ >>> session.rollback(); session.connection()
+ ROLLBACK...
+
+.. _orm_queryguide_update_delete_sync:
+
+
+Selecting a Synchronization Strategy
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When making use of :func:`_dml.update` or :func:`_dml.delete` in conjunction
+with ORM-enabled execution using :meth:`_orm.Session.execute`, additional
+ORM-specific functionality is present which will **synchronize** the state
+being changed by the statement with that of the objects that are currently
+present within the :term:`identity map` of the :class:`_orm.Session`.
+By "synchronize" we mean that UPDATEd attributes will be refreshed with the
+new value, or at the very least :term:`expired` so that they will re-populate
+with their new value on next access, and DELETEd objects will be
+moved into the :term:`deleted` state.
+
+This synchronization is controllable via the "synchronization strategy",
+which is passed as a string ORM execution option, typically by using the
+:paramref:`_orm.Session.execute.execution_options` dictionary::
+
+ >>> from sqlalchemy import update
+ >>> stmt = (
+ ... update(User).
+ ... where(User.name == "squidward").
+ ... values(fullname="Squidward Tentacles")
+ ... )
+ >>> session.execute(stmt, execution_options={"synchronize_session": False})
+ {opensql}UPDATE user_account SET fullname=? WHERE user_account.name = ?
+ [...] ('Squidward Tentacles', 'squidward')
+ {stop}<...>
+
+The execution option may also be bundled with the statement itself using the
+:meth:`_sql.Executable.execution_options` method::
+
+ >>> from sqlalchemy import update
+ >>> stmt = (
+ ... update(User).
+ ... where(User.name == "squidward").
+ ... values(fullname="Squidward Tentacles").
+ ... execution_options(synchronize_session=False)
+ ... )
+ >>> session.execute(stmt)
+ {opensql}UPDATE user_account SET fullname=? WHERE user_account.name = ?
+ [...] ('Squidward Tentacles', 'squidward')
+ {stop}<...>
+
+The following values for ``synchronize_session`` are supported:
+
+* ``'auto'`` - this is the default. The ``'fetch'`` strategy will be used on
+ backends that support RETURNING, which includes all SQLAlchemy-native drivers
+ except for MySQL. If RETURNING is not supported, the ``'evaluate'``
+ strategy will be used instead.
+
+* ``'fetch'`` - Retrieves the primary key identity of affected rows by either
+ performing a SELECT before the UPDATE or DELETE, or by using RETURNING if the
+ database supports it, so that in-memory objects which are affected by the
+ operation can be refreshed with new values (updates) or expunged from the
+ :class:`_orm.Session` (deletes). This synchronization strategy may be used
+ even if the given :func:`_dml.update` or :func:`_dml.delete`
+ construct explicitly specifies entities or columns using
+ :meth:`_dml.UpdateBase.returning`.
+
+ .. versionchanged:: 2.0 Explicit :meth:`_dml.UpdateBase.returning` may be
+ combined with the ``'fetch'`` synchronization strategy when using
+ ORM-enabled UPDATE and DELETE with WHERE criteria. The actual statement
+ will contain the union of columns between that which the ``'fetch'``
+ strategy requires and those which were requested.
+
+* ``'evaluate'`` - This strategy evaluates the WHERE
+  criteria given in the UPDATE or DELETE statement in Python, to locate
+  matching objects within the :class:`_orm.Session`. This approach does not add
+ any SQL round trips to the operation, and in the absence of RETURNING
+ support, may be more efficient. For UPDATE or DELETE statements with complex
+ criteria, the ``'evaluate'`` strategy may not be able to evaluate the
+ expression in Python and will raise an error. If this occurs, use the
+ ``'fetch'`` strategy for the operation instead.
+
+ .. tip::
+
+ If a SQL expression makes use of custom operators using the
+ :meth:`_sql.Operators.op` or :class:`_sql.custom_op` feature, the
+ :paramref:`_sql.Operators.op.python_impl` parameter may be used to indicate
+ a Python function that will be used by the ``"evaluate"`` synchronization
+ strategy.
+
+ .. versionadded:: 2.0
+
+ .. warning::
+
+ The ``"evaluate"`` strategy should be avoided if an UPDATE operation is
+ to run on a :class:`_orm.Session` that has many objects which have
+ been expired, because it will necessarily need to refresh objects in order
+ to test them against the given WHERE criteria, which will emit a SELECT
+ for each one. In this case, and particularly if the backend supports
+ RETURNING, the ``"fetch"`` strategy should be preferred.
+
+* ``False`` - don't synchronize the session. This option may be useful
+  for backends that don't support RETURNING and where the ``"evaluate"``
+  strategy is also not feasible. In this case, the state of objects in the
+  :class:`_orm.Session` is unchanged and will not automatically correspond
+  to the UPDATE or DELETE statement that was emitted, even if objects that
+  would normally correspond to the matched rows are present.
+
+
+.. _orm_queryguide_update_delete_where_returning:
+
+Using RETURNING with UPDATE/DELETE and Custom WHERE Criteria
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The :meth:`.UpdateBase.returning` method is fully compatible with
+ORM-enabled UPDATE and DELETE with WHERE criteria. Full ORM objects
+and/or columns may be indicated for RETURNING::
+
+ >>> from sqlalchemy import update
+ >>> stmt = (
+ ... update(User).
+ ... where(User.name == "squidward").
+ ... values(fullname="Squidward Tentacles").
+ ... returning(User)
+ ... )
+ >>> result = session.scalars(stmt)
+ {opensql}UPDATE user_account SET fullname=? WHERE user_account.name = ?
+ RETURNING id, name, fullname, species
+ [...] ('Squidward Tentacles', 'squidward')
+ {stop}>>> print(result.all())
+ [User(name='squidward', fullname='Squidward Tentacles')]
+
+The support for RETURNING is also compatible with the ``fetch`` synchronization
+strategy, which also uses RETURNING. The ORM will organize the columns in
+RETURNING appropriately so that the synchronization proceeds, and so that the
+returned :class:`.Result` contains the requested entities and SQL columns in
+their requested order.
+
+.. versionadded:: 2.0 :meth:`.UpdateBase.returning` may be used for
+ ORM enabled UPDATE and DELETE while still retaining full compatibility
+ with the ``fetch`` synchronization strategy.
+
+.. _orm_queryguide_update_delete_joined_inh:
+
+UPDATE/DELETE with Custom WHERE Criteria for Joined Table Inheritance
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. Setup code, not for display
+
+ >>> session.rollback(); session.connection()
+ ROLLBACK...
+
+The UPDATE/DELETE with WHERE criteria feature, unlike the
+:ref:`orm_queryguide_bulk_update`, only emits a single UPDATE or DELETE
+statement per call to :meth:`_orm.Session.execute`. This means that when
+running an :func:`_dml.update` or :func:`_dml.delete` statement against a
+multi-table mapping, such as a subclass in a joined-table inheritance mapping,
+the statement must conform to the backend's current capabilities; the backend
+may not support an UPDATE or DELETE statement that refers to multiple tables,
+or may support it only in a limited way. This means that for mappings such as
+joined inheritance subclasses, the ORM version of the UPDATE/DELETE with WHERE
+criteria feature can only be used to a limited extent or not at all, depending
+on specifics.
+
+The most straightforward way to emit a multi-row UPDATE statement
+for a joined-table subclass is to refer to the sub-table alone.
+This means the :func:`_dml.update` construct should only refer to attributes
+that are local to the subclass table, as in the example below::
+
+
+ >>> stmt = (
+ ... update(Manager).
+ ... where(Manager.id == 1).
+ ... values(manager_name="Sandy Cheeks, President")
+ ... )
+ >>> session.execute(stmt)
+ UPDATE manager SET manager_name=? WHERE manager.id = ?
+ [...] ('Sandy Cheeks, President', 1)
+ <...>
+
+With the above form, a rudimentary way to refer to the base table in order
+to locate rows, which will work on any SQL backend, is to use a subquery::
+
+ >>> stmt = (
+ ... update(Manager).
+ ... where(
+ ... Manager.id ==
+ ... select(Employee.id).
+ ... where(Employee.name == "sandy").scalar_subquery()
+ ... ).
+ ... values(manager_name="Sandy Cheeks, President")
+ ... )
+ >>> session.execute(stmt)
+ {opensql}UPDATE manager SET manager_name=? WHERE manager.id = (SELECT employee.id
+ FROM employee
+ WHERE employee.name = ?) RETURNING id
+ [...] ('Sandy Cheeks, President', 'sandy')
+ {stop}<...>
+
+For backends that support UPDATE...FROM, the subquery may be stated instead
+as additional plain WHERE criteria; however, the criteria joining the two
+tables must be stated explicitly in some way::
+
+ >>> stmt = (
+ ... update(Manager).
+ ... where(
+ ... Manager.id == Employee.id,
+ ... Employee.name == "sandy"
+ ... ).
+ ... values(manager_name="Sandy Cheeks, President")
+ ... )
+ >>> session.execute(stmt)
+ {opensql}UPDATE manager SET manager_name=? FROM employee
+ WHERE manager.id = employee.id AND employee.name = ?
+ [...] ('Sandy Cheeks, President', 'sandy')
+ {stop}<...>
+
+
+For a DELETE, it's expected that rows in both the base table and the sub-table
+would be DELETEd at the same time. To DELETE many rows of joined inheritance
+objects **without** using cascading foreign keys, emit DELETE for each
+table individually::
+
+ >>> from sqlalchemy import delete
+ >>> session.execute(delete(Manager).where(Manager.id == 1))
+ {opensql}DELETE FROM manager WHERE manager.id = ?
+ [...] (1,)
+ {stop}<...>
+ >>> session.execute(delete(Employee).where(Employee.id == 1))
+ {opensql}DELETE FROM employee WHERE employee.id = ?
+ [...] (1,)
+ {stop}<...>
+
+Overall, normal :term:`unit of work` processes should be **preferred** for
+updating and deleting rows for joined inheritance and other multi-table
+mappings, unless there is a performance rationale for using custom WHERE
+criteria.
+
+
+Legacy Query Methods
+~~~~~~~~~~~~~~~~~~~~
+
+The ORM-enabled UPDATE/DELETE with WHERE feature was originally part of the
+now-legacy :class:`.Query` object, in the :meth:`_orm.Query.update`
+and :meth:`_orm.Query.delete` methods. These methods remain available
+and provide a subset of the same functionality as that described at
+:ref:`orm_queryguide_update_delete_where`. The primary difference is that
+the legacy methods don't provide for explicit RETURNING support.
+
+.. seealso::
+
+ :meth:`_orm.Query.update`
+
+ :meth:`_orm.Query.delete`
+
+.. Setup code, not for display
+
+ >>> session.close(); conn.close()
+ ROLLBACK
--- /dev/null
+.. highlight:: pycon+sql
+
+.. _queryguide_toplevel:
+
+==================
+ORM Querying Guide
+==================
+
+This section provides an overview of emitting queries with the
+SQLAlchemy ORM using :term:`2.0 style` usage.
+
+Readers of this section should be familiar with the SQLAlchemy overview
+at :ref:`unified_tutorial`, and in particular most of the content here expands
+upon the content at :ref:`tutorial_selecting_data`.
+
+.. admonition:: For users of SQLAlchemy 1.x
+
+ In the SQLAlchemy 2.x series, SQL SELECT statements for the ORM are
+ constructed using the same :func:`_sql.select` construct as is used in
+ Core, which is then invoked in terms of a :class:`_orm.Session` using the
+ :meth:`_orm.Session.execute` method (as are the :func:`_sql.update` and
+ :func:`_sql.delete` constructs now used for the
+ :ref:`orm_expression_update_delete` feature). However, the legacy
+ :class:`_query.Query` object, which performs these same steps as more of an
+ "all-in-one" object, continues to remain available as a thin facade over
+ this new system, to support applications that were built on the 1.x series
+ without the need for wholesale replacement of all queries. For reference on
+ this object, see the section :ref:`query_api_toplevel`.
+
+
+
+
+.. toctree::
+ :maxdepth: 3
+
+ select
+ inheritance
+ dml
+ columns
+ relationships
+ api
+ query
--- /dev/null
+.. highlight:: pycon+sql
+.. |prev| replace:: :doc:`select`
+.. |next| replace:: :doc:`dml`
+
+.. include:: queryguide_nav_include.rst
+
+.. doctest-include _inheritance_setup.rst
+
+.. _inheritance_loading_toplevel:
+
+
+.. currentmodule:: sqlalchemy.orm
+
+.. _loading_joined_inheritance:
+
+Writing SELECT statements for Inheritance Mappings
+==================================================
+
+.. admonition:: About this Document
+
+    This section makes use of ORM mappings configured using the
+    :ref:`ORM Inheritance <inheritance_toplevel>` feature. The emphasis
+    will be on :ref:`joined_inheritance`, as this is the most intricate
+    ORM querying case.
+
+ :doc:`View the ORM setup for this page <_inheritance_setup>`.
+
+SELECTing from the base class vs. specific sub-classes
+--------------------------------------------------------
+
+A SELECT statement constructed against a class in a joined inheritance
+hierarchy will query against the table to which the class is mapped, as well as
+any super-tables present, using JOIN to link them together. The query would
+then return objects that are of that requested type as well as any sub-types of
+the requested type, using the :term:`discriminator` value in each row
+to determine the correct type. The query below is established against the ``Manager``
+subclass of ``Employee``, which then returns a result that will contain only
+objects of type ``Manager``::
+
+ >>> from sqlalchemy import select
+ >>> stmt = select(Manager).order_by(Manager.id)
+ >>> managers = session.scalars(stmt).all()
+ {opensql}BEGIN (implicit)
+ SELECT manager.id, employee.id AS id_1, employee.name, employee.type, employee.company_id, manager.manager_name
+ FROM employee JOIN manager ON employee.id = manager.id ORDER BY manager.id
+ [...] ()
+ {stop}>>> print(managers)
+ [Manager('Mr. Krabs')]
+
+.. Setup code, not for display
+
+
+ >>> session.close()
+ ROLLBACK
+
+When the SELECT statement is against the base class in the hierarchy, the
+default behavior is that only that class' table will be included in the
+rendered SQL and JOIN will not be used. As in all cases, the
+:term:`discriminator` column is used to distinguish between different requested
+sub-types, which then results in objects of any possible sub-type being
+returned. The objects returned will have attributes corresponding to the base
+table populated, and attributes corresponding to sub-tables will start in an
+un-loaded state, loading automatically when accessed. The loading of
+sub-attributes is configurable to be more "eager" in a variety of ways,
+discussed later in this section.
+
+The example below creates a query against the ``Employee`` superclass.
+This indicates that objects of any type, including ``Manager``, ``Engineer``,
+and ``Employee``, may be within the result set::
+
+ >>> from sqlalchemy import select
+ >>> stmt = select(Employee).order_by(Employee.id)
+ >>> objects = session.scalars(stmt).all()
+ {opensql}BEGIN (implicit)
+ SELECT employee.id, employee.name, employee.type, employee.company_id
+ FROM employee ORDER BY employee.id
+ [...] ()
+ {stop}>>> print(objects)
+ [Manager('Mr. Krabs'), Engineer('SpongeBob'), Engineer('Squidward')]
+
+Above, the additional tables for ``Manager`` and ``Engineer`` were not included
+in the SELECT, which means that the returned objects will not yet contain
+data represented from those tables, in this example the ``.manager_name``
+attribute of the ``Manager`` class as well as the ``.engineer_info`` attribute
+of the ``Engineer`` class. These attributes start out in the
+:term:`expired` state, and will automatically populate themselves when first
+accessed using :term:`lazy loading`::
+
+ >>> mr_krabs = objects[0]
+ >>> print(mr_krabs.manager_name)
+ {opensql}SELECT manager.manager_name AS manager_manager_name
+ FROM manager
+ WHERE ? = manager.id
+ [...] (1,)
+ {stop}Eugene H. Krabs
+
+This lazy load behavior is not desirable when a large number of objects have
+been loaded and the consuming application needs to access subclass-specific
+attributes, as this is an example of the :term:`N plus one` problem, emitting
+additional SQL per row. This additional SQL can impact performance and can
+also be incompatible with approaches such as
+using :ref:`asyncio <asyncio_toplevel>`. Additionally, in our query for
+``Employee`` objects, since the query is against the base table only, we did
+not have a way to add SQL criteria involving subclass-specific attributes in
+terms of ``Manager`` or ``Engineer``. The next two sections detail two
+constructs that provide solutions to these two issues in different ways, the
+:func:`_orm.selectin_polymorphic` loader option and the
+:func:`_orm.with_polymorphic` entity construct.
+
+
+.. _polymorphic_selectin:
+
+Using selectin_polymorphic()
+----------------------------
+
+.. Setup code, not for display
+
+
+ >>> session.close()
+ ROLLBACK
+
+To address the issue of performance when accessing attributes on subclasses,
+the :func:`_orm.selectin_polymorphic` loader strategy may be used to
+:term:`eagerly load` these additional attributes up front across many
+objects at once. This loader option works in a similar fashion as the
+:func:`_orm.selectinload` relationship loader strategy to emit an additional
+SELECT statement against each sub-table for objects loaded in the hierarchy,
+using ``IN`` to query for additional rows based on primary key.
+
+:func:`_orm.selectin_polymorphic` accepts as its arguments the base entity
+that is being queried, followed by a sequence of subclasses of that entity
+for which their specific attributes should be loaded for incoming rows::
+
+ >>> from sqlalchemy.orm import selectin_polymorphic
+ >>> loader_opt = selectin_polymorphic(Employee, [Manager, Engineer])
+
+The :func:`_orm.selectin_polymorphic` construct is then used as a loader
+option, passing it to the :meth:`.Select.options` method of :class:`.Select`.
+The example illustrates the use of :func:`_orm.selectin_polymorphic` to eagerly
+load columns local to both the ``Manager`` and ``Engineer`` subclasses::
+
+ >>> from sqlalchemy.orm import selectin_polymorphic
+ >>> loader_opt = selectin_polymorphic(Employee, [Manager, Engineer])
+ >>> stmt = select(Employee).order_by(Employee.id).options(loader_opt)
+ >>> objects = session.scalars(stmt).all()
+ {opensql}BEGIN (implicit)
+ SELECT employee.id, employee.name, employee.type, employee.company_id
+ FROM employee ORDER BY employee.id
+ [...] ()
+ SELECT manager.id AS manager_id, employee.id AS employee_id,
+ employee.type AS employee_type, manager.manager_name AS manager_manager_name
+ FROM employee JOIN manager ON employee.id = manager.id
+ WHERE employee.id IN (?) ORDER BY employee.id
+ [...] (1,)
+ SELECT engineer.id AS engineer_id, employee.id AS employee_id,
+ employee.type AS employee_type, engineer.engineer_info AS engineer_engineer_info
+ FROM employee JOIN engineer ON employee.id = engineer.id
+ WHERE employee.id IN (?, ?) ORDER BY employee.id
+ [...] (2, 3)
+ {stop}>>> print(objects)
+ [Manager('Mr. Krabs'), Engineer('SpongeBob'), Engineer('Squidward')]
+
+The above example illustrates two additional SELECT statements being emitted
+in order to eagerly fetch additional attributes such as ``Engineer.engineer_info``
+as well as ``Manager.manager_name``. We can now access these sub-attributes on the
+objects that were loaded without any additional SQL statements being emitted::
+
+ >>> print(objects[0].manager_name)
+ Eugene H. Krabs
+
+.. tip:: The :func:`_orm.selectin_polymorphic` loader option does not yet
+ optimize for the fact that the base ``employee`` table does not need to be
+ included in the second two "eager load" queries; hence in the example above
+ we see a JOIN from ``employee`` to ``manager`` and ``engineer``, even though
+ columns from ``employee`` are already loaded. This is in contrast to
+ the :func:`_orm.selectinload` relationship strategy which is more
+ sophisticated in this regard and can factor out the JOIN when not needed.
+
+
+.. _polymorphic_selectin_w_loader_options:
+
+Combining additional loader options with selectin_polymorphic() subclass loads
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. Setup code, not for display
+
+
+ >>> session.close()
+ ROLLBACK
+
+The SELECT statements emitted by :func:`_orm.selectin_polymorphic` are themselves
+ORM statements, so we may also add other loader options (such as those
+documented at :ref:`orm_queryguide_relationship_loaders`) that refer to specific
+subclasses. For example, if we considered that the ``Manager`` mapper had
+a :ref:`one to many <relationship_patterns_o2m>` relationship to an entity
+called ``Paperwork``, we could combine the use of
+:func:`_orm.selectin_polymorphic` and :func:`_orm.selectinload` to eagerly load
+this collection on all ``Manager`` objects, where the sub-attributes of
+``Manager`` objects were also themselves eagerly loaded::
+
+ >>> from sqlalchemy.orm import selectinload
+ >>> from sqlalchemy.orm import selectin_polymorphic
+ >>> stmt = select(Employee).order_by(Employee.id).options(
+ ... selectin_polymorphic(Employee, [Manager, Engineer]),
+ ... selectinload(Manager.paperwork)
+ ... )
+ {opensql}>>> objects = session.scalars(stmt).all()
+ BEGIN (implicit)
+ SELECT employee.id, employee.name, employee.type, employee.company_id
+ FROM employee ORDER BY employee.id
+ [...] ()
+ SELECT manager.id AS manager_id, employee.id AS employee_id, employee.type AS employee_type, manager.manager_name AS manager_manager_name
+ FROM employee JOIN manager ON employee.id = manager.id
+ WHERE employee.id IN (?) ORDER BY employee.id
+ [...] (1,)
+ SELECT paperwork.manager_id AS paperwork_manager_id, paperwork.id AS paperwork_id, paperwork.document_name AS paperwork_document_name
+ FROM paperwork
+ WHERE paperwork.manager_id IN (?)
+ [...] (1,)
+ SELECT engineer.id AS engineer_id, employee.id AS employee_id, employee.type AS employee_type, engineer.engineer_info AS engineer_engineer_info
+ FROM employee JOIN engineer ON employee.id = engineer.id
+ WHERE employee.id IN (?, ?) ORDER BY employee.id
+ [...] (2, 3)
+ {stop}>>> print(objects[0])
+ Manager('Mr. Krabs')
+ >>> print(objects[0].paperwork)
+ [Paperwork('Secret Recipes'), Paperwork('Krabby Patty Orders')]
+
+.. _polymorphic_selectin_as_loader_option_target:
+
+Applying selectin_polymorphic() to an existing eager load
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In addition to being able to add loader options to the right side of a
+:func:`_orm.selectin_polymorphic` load, we may also indicate
+:func:`_orm.selectin_polymorphic` on the target of an existing load.
+As our :doc:`setup <_inheritance_setup>` mapping includes a parent
+``Company`` entity with a ``Company.employees`` :func:`_orm.relationship`
+referring to ``Employee`` entities, we may illustrate a SELECT against
+the ``Company`` entity that eagerly loads all ``Employee`` objects as well as
+all attributes on their subtypes as follows, by applying :meth:`.Load.selectin_polymorphic`
+as a chained loader option; in this form, the first argument is implicit from
+the previous loader option (in this case :func:`_orm.selectinload`), so
+we only indicate the additional target subclasses we wish to load::
+
+ >>> stmt = (
+ ... select(Company).
+ ... options(selectinload(Company.employees).selectin_polymorphic([Manager, Engineer]))
+ ... )
+ >>> for company in session.scalars(stmt):
+ ... print(f"company: {company.name}")
+ ... print(f"employees: {company.employees}")
+ {opensql}SELECT company.id, company.name
+ FROM company
+ [...] ()
+ SELECT employee.company_id AS employee_company_id, employee.id AS employee_id,
+ employee.name AS employee_name, employee.type AS employee_type
+ FROM employee
+ WHERE employee.company_id IN (?)
+ [...] (1,)
+ SELECT manager.id AS manager_id, employee.id AS employee_id, employee.name AS employee_name,
+ employee.type AS employee_type, employee.company_id AS employee_company_id,
+ manager.manager_name AS manager_manager_name
+ FROM employee JOIN manager ON employee.id = manager.id
+ WHERE employee.id IN (?) ORDER BY employee.id
+ [...] (1,)
+ SELECT engineer.id AS engineer_id, employee.id AS employee_id, employee.name AS employee_name,
+ employee.type AS employee_type, employee.company_id AS employee_company_id,
+ engineer.engineer_info AS engineer_engineer_info
+ FROM employee JOIN engineer ON employee.id = engineer.id
+ WHERE employee.id IN (?, ?) ORDER BY employee.id
+ [...] (2, 3)
+ {stop}company: Krusty Krab
+ employees: [Manager('Mr. Krabs'), Engineer('SpongeBob'), Engineer('Squidward')]
+
+.. seealso::
+
+ :ref:`eagerloading_polymorphic_subtypes` - illustrates the equivalent example
+ as above using :func:`_orm.with_polymorphic` instead
+
+
+Configuring selectin_polymorphic() on mappers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The behavior of :func:`_orm.selectin_polymorphic` may be configured on specific
+mappers so that it takes place by default, by using the
+:paramref:`_orm.Mapper.polymorphic_load` parameter, using the value ``"selectin"``
+on a per-subclass basis. The example below illustrates the use of this
+parameter within ``Engineer`` and ``Manager`` subclasses:
+
+.. sourcecode:: python
+
+ class Employee(Base):
+ __tablename__ = 'employee'
+ id = mapped_column(Integer, primary_key=True)
+ name = mapped_column(String(50))
+ type = mapped_column(String(50))
+
+ __mapper_args__ = {
+ 'polymorphic_identity': 'employee',
+ 'polymorphic_on': type
+ }
+
+ class Engineer(Employee):
+ __tablename__ = 'engineer'
+ id = mapped_column(Integer, ForeignKey('employee.id'), primary_key=True)
+ engineer_info = mapped_column(String(30))
+
+ __mapper_args__ = {
+ 'polymorphic_load': 'selectin',
+ 'polymorphic_identity': 'engineer',
+ }
+
+ class Manager(Employee):
+ __tablename__ = 'manager'
+ id = mapped_column(Integer, ForeignKey('employee.id'), primary_key=True)
+ manager_name = mapped_column(String(30))
+
+ __mapper_args__ = {
+ 'polymorphic_load': 'selectin',
+ 'polymorphic_identity': 'manager',
+ }
+
+With the above mapping, SELECT statements against the ``Employee`` class will
+automatically assume the use of
+``selectin_polymorphic(Employee, [Engineer, Manager])`` as a loader option when the statement is
+emitted.
+
+.. _with_polymorphic:
+
+Using with_polymorphic()
+------------------------
+
+.. Setup code, not for display
+
+
+ >>> session.close()
+ ROLLBACK
+
+In contrast to :func:`_orm.selectin_polymorphic` which affects only the loading
+of objects, the :func:`_orm.with_polymorphic` construct affects how the SQL
+query for a polymorphic structure is rendered, most commonly as a series of
+LEFT OUTER JOINs to each of the included sub-tables. This join structure is
+referred to as the **polymorphic selectable**. By providing for a view of
+several sub-tables at once, :func:`_orm.with_polymorphic` offers a means of
+writing a SELECT statement across several inherited classes at once with the
+ability to add filtering criteria based on individual sub-tables.
+
+:func:`_orm.with_polymorphic` is essentially a special form of the
+:func:`_orm.aliased` construct. It accepts as its arguments a similar form to
+that of :func:`_orm.selectin_polymorphic`, which is the base entity that is
+being queried, followed by a sequence of subclasses of that entity whose
+specific attributes should be loaded for incoming rows::
+
+ >>> from sqlalchemy.orm import with_polymorphic
+ >>> employee_poly = with_polymorphic(Employee, [Engineer, Manager])
+
+In order to indicate that all subclasses should be part of the entity,
+:func:`_orm.with_polymorphic` will also accept the string ``"*"``, which may be
+passed in place of the sequence of classes to indicate all classes (note this
+is not yet supported by :func:`_orm.selectin_polymorphic`)::
+
+ >>> employee_poly = with_polymorphic(Employee, "*")
+
+The example below illustrates the same operation as in the previous
+section, to load all columns for ``Manager`` and ``Engineer`` at once::
+
+ >>> stmt = select(employee_poly).order_by(employee_poly.id)
+ >>> objects = session.scalars(stmt).all()
+ {opensql}BEGIN (implicit)
+ SELECT employee.id, employee.name, employee.type, employee.company_id,
+ manager.id AS id_1, manager.manager_name, engineer.id AS id_2, engineer.engineer_info
+ FROM employee
+ LEFT OUTER JOIN manager ON employee.id = manager.id
+ LEFT OUTER JOIN engineer ON employee.id = engineer.id ORDER BY employee.id
+ [...] ()
+ {stop}>>> print(objects)
+ [Manager('Mr. Krabs'), Engineer('SpongeBob'), Engineer('Squidward')]
+
+As is the case with :func:`_orm.selectin_polymorphic`, attributes on subclasses
+are already loaded::
+
+ >>> print(objects[0].manager_name)
+ Eugene H. Krabs
+
+As the default selectable produced by :func:`_orm.with_polymorphic`
+uses LEFT OUTER JOIN, from a database point of view the query is not as well
+optimized as the approach that :func:`_orm.selectin_polymorphic` takes,
+with simple SELECT statements using only JOINs emitted on a per-table basis.
+
+
+.. _with_polymorphic_subclass_attributes:
+
+Filtering Subclass Attributes with with_polymorphic()
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The :func:`_orm.with_polymorphic` construct makes available the attributes
+on the included subclass mappers, by including namespaces that allow
+references to subclasses. The ``employee_poly`` construct created in the
+previous section includes attributes named ``.Engineer`` and ``.Manager``
+which provide the namespace for ``Engineer`` and ``Manager`` in terms of
+the polymorphic SELECT. In the example below, we can use the :func:`_sql.or_`
+construct to create criteria against both classes at once::
+
+ >>> from sqlalchemy import or_
+ >>> employee_poly = with_polymorphic(Employee, [Engineer, Manager])
+ >>> stmt = (
+ ... select(employee_poly).
+ ... where(
+ ... or_(
+ ... employee_poly.Manager.manager_name == "Eugene H. Krabs",
+ ... employee_poly.Engineer.engineer_info == "Senior Customer Engagement Engineer"
+ ... )
+ ... ).
+ ... order_by(employee_poly.id)
+ ... )
+ >>> objects = session.scalars(stmt).all()
+ {opensql}SELECT employee.id, employee.name, employee.type, employee.company_id, manager.id AS id_1,
+ manager.manager_name, engineer.id AS id_2, engineer.engineer_info
+ FROM employee
+ LEFT OUTER JOIN manager ON employee.id = manager.id
+ LEFT OUTER JOIN engineer ON employee.id = engineer.id
+ WHERE manager.manager_name = ? OR engineer.engineer_info = ?
+ ORDER BY employee.id
+ [...] ('Eugene H. Krabs', 'Senior Customer Engagement Engineer')
+ {stop}>>> print(objects)
+ [Manager('Mr. Krabs'), Engineer('Squidward')]
+
+.. _with_polymorphic_aliasing:
+
+Using aliasing with with_polymorphic
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The :func:`_orm.with_polymorphic` construct, as a special case of
+:func:`_orm.aliased`, also provides the basic feature that :func:`_orm.aliased`
+does, which is that of "aliasing" of the polymorphic selectable itself.
+Specifically this means two or more :func:`_orm.with_polymorphic` entities,
+referring to the same class hierarchy, can be used at once in a single
+statement.
+
+To use this feature with a joined inheritance mapping, we typically want to
+pass two parameters, :paramref:`_orm.with_polymorphic.aliased` as well as
+:paramref:`_orm.with_polymorphic.flat`. The :paramref:`_orm.with_polymorphic.aliased`
+parameter indicates that the polymorphic selectable should be referred to
+by an alias name that is unique to this construct. The
+:paramref:`_orm.with_polymorphic.flat` parameter is specific to the default
+LEFT OUTER JOIN polymorphic selectable and indicates that a more optimized
+form of aliasing should be used in the statement.
+
+To illustrate this feature, the example below emits a SELECT for two
+separate polymorphic entities, ``Employee`` joined with ``Engineer``,
+and ``Employee`` joined with ``Manager``. Since these two polymorphic entities
+will both include the base ``employee`` table in their polymorphic
+selectable, aliasing must be applied in order to differentiate this table in
+its two different contexts.
+The two polymorphic entities are treated like two individual tables,
+and as such typically need to be joined with each other in some way,
+as illustrated below where the entities are joined on the ``company_id``
+column along with some additional limiting criteria against the
+``Employee`` / ``Manager`` entity::
+
+ >>> manager_employee = with_polymorphic(Employee, [Manager], aliased=True, flat=True)
+ >>> engineer_employee = with_polymorphic(Employee, [Engineer], aliased=True, flat=True)
+ >>> stmt = (
+ ... select(manager_employee, engineer_employee).
+ ... join(
+ ... engineer_employee,
+ ... engineer_employee.company_id == manager_employee.company_id,
+ ... ).
+ ... where(
+ ... or_(
+ ... manager_employee.name == "Mr. Krabs",
+ ... manager_employee.Manager.manager_name == "Eugene H. Krabs"
+ ... )
+ ... ).
+ ... order_by(engineer_employee.name, manager_employee.name)
+ ... )
+ >>> for manager, engineer in session.execute(stmt):
+ ... print(f"{manager} {engineer}")
+ {opensql}SELECT
+ employee_1.id, employee_1.name, employee_1.type, employee_1.company_id,
+ manager_1.id AS id_1, manager_1.manager_name,
+ employee_2.id AS id_2, employee_2.name AS name_1, employee_2.type AS type_1,
+ employee_2.company_id AS company_id_1, engineer_1.id AS id_3, engineer_1.engineer_info
+ FROM employee AS employee_1
+ LEFT OUTER JOIN manager AS manager_1 ON employee_1.id = manager_1.id
+ JOIN
+ (employee AS employee_2 LEFT OUTER JOIN engineer AS engineer_1 ON employee_2.id = engineer_1.id)
+ ON employee_2.company_id = employee_1.company_id
+ WHERE employee_1.name = ? OR manager_1.manager_name = ?
+ ORDER BY employee_2.name, employee_1.name
+ [...] ('Mr. Krabs', 'Eugene H. Krabs')
+ {stop}Manager('Mr. Krabs') Manager('Mr. Krabs')
+ Manager('Mr. Krabs') Engineer('SpongeBob')
+ Manager('Mr. Krabs') Engineer('Squidward')
+
+In the above example, the behavior of :paramref:`_orm.with_polymorphic.flat`
+is that the polymorphic selectables remain as a LEFT OUTER JOIN of their
+individual tables, which themselves are given anonymous alias names. There
+is also a right-nested JOIN produced.
+
+When omitting the :paramref:`_orm.with_polymorphic.flat` parameter, the
+usual behavior is that each polymorphic selectable is enclosed within a
+subquery, producing a more verbose form::
+
+ >>> manager_employee = with_polymorphic(Employee, [Manager], aliased=True)
+ >>> engineer_employee = with_polymorphic(Employee, [Engineer], aliased=True)
+ >>> stmt = (
+ ... select(manager_employee, engineer_employee).
+ ... join(
+ ... engineer_employee,
+ ... engineer_employee.company_id == manager_employee.company_id,
+ ... ).
+ ... where(
+ ... or_(
+ ... manager_employee.name == "Mr. Krabs",
+ ... manager_employee.Manager.manager_name == "Eugene H. Krabs"
+ ... )
+ ... ).
+ ... order_by(engineer_employee.name, manager_employee.name)
+ ... )
+ >>> print(stmt)
+ {opensql}SELECT anon_1.employee_id, anon_1.employee_name, anon_1.employee_type,
+ anon_1.employee_company_id, anon_1.manager_id, anon_1.manager_manager_name, anon_2.employee_id AS employee_id_1,
+ anon_2.employee_name AS employee_name_1, anon_2.employee_type AS employee_type_1,
+ anon_2.employee_company_id AS employee_company_id_1, anon_2.engineer_id, anon_2.engineer_engineer_info
+ FROM
+ (SELECT employee.id AS employee_id, employee.name AS employee_name, employee.type AS employee_type,
+ employee.company_id AS employee_company_id,
+ manager.id AS manager_id, manager.manager_name AS manager_manager_name
+ FROM employee LEFT OUTER JOIN manager ON employee.id = manager.id) AS anon_1
+ JOIN
+ (SELECT employee.id AS employee_id, employee.name AS employee_name, employee.type AS employee_type,
+ employee.company_id AS employee_company_id, engineer.id AS engineer_id, engineer.engineer_info AS engineer_engineer_info
+ FROM employee LEFT OUTER JOIN engineer ON employee.id = engineer.id) AS anon_2
+ ON anon_2.employee_company_id = anon_1.employee_company_id
+ WHERE anon_1.employee_name = :employee_name_2 OR anon_1.manager_manager_name = :manager_manager_name_1
+ ORDER BY anon_2.employee_name, anon_1.employee_name
+
+The above form historically has been more portable to backends that didn't necessarily
+have support for right-nested JOINs, and it additionally may be appropriate when
+the "polymorphic selectable" used by :func:`_orm.with_polymorphic` is not
+a simple LEFT OUTER JOIN of tables, as is the case when using mappings such as
+:ref:`concrete table inheritance <concrete_inheritance>` mappings as well as when
+using alternative polymorphic selectables in general.
+
+
+.. _with_polymorphic_mapper_config:
+
+Configuring with_polymorphic() on mappers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+As is the case with :func:`_orm.selectin_polymorphic`, the
+:func:`_orm.with_polymorphic` construct also supports a mapper-configured
+version which may be configured in two different ways, either on the base class
+using the :paramref:`_orm.Mapper.with_polymorphic` parameter, or in a more modern
+form using the :paramref:`_orm.Mapper.polymorphic_load` parameter on a
+per-subclass basis, passing the value ``"inline"``.
+
+.. warning::
+
+ For joined inheritance mappings, prefer explicit use of
+ :func:`_orm.with_polymorphic` within queries, or for implicit eager subclass
+ loading use :paramref:`_orm.Mapper.polymorphic_load` with ``"selectin"``,
+instead of using the mapper-level :paramref:`_orm.Mapper.with_polymorphic`
+ parameter described in this section. This parameter invokes complex
+ heuristics intended to rewrite the FROM clauses within SELECT statements
+ that can interfere with construction of more complex statements,
+ particularly those with nested subqueries that refer to the same mapped
+ entity.
+
+For example, we may state our ``Employee`` mapping using
+:paramref:`_orm.Mapper.polymorphic_load` as ``"inline"`` as below:
+
+.. sourcecode:: python
+
+ class Employee(Base):
+ __tablename__ = 'employee'
+ id = mapped_column(Integer, primary_key=True)
+ name = mapped_column(String(50))
+ type = mapped_column(String(50))
+
+ __mapper_args__ = {
+ 'polymorphic_identity': 'employee',
+ 'polymorphic_on': type
+ }
+
+ class Engineer(Employee):
+ __tablename__ = 'engineer'
+ id = mapped_column(Integer, ForeignKey('employee.id'), primary_key=True)
+ engineer_info = mapped_column(String(30))
+
+ __mapper_args__ = {
+ 'polymorphic_load': 'inline',
+ 'polymorphic_identity': 'engineer',
+ }
+
+ class Manager(Employee):
+ __tablename__ = 'manager'
+ id = mapped_column(Integer, ForeignKey('employee.id'), primary_key=True)
+ manager_name = mapped_column(String(30))
+
+ __mapper_args__ = {
+ 'polymorphic_load': 'inline',
+ 'polymorphic_identity': 'manager',
+ }
+
+With the above mapping, SELECT statements against the ``Employee`` class will
+automatically assume the use of
+``with_polymorphic(Employee, [Engineer, Manager])`` as the primary entity
+when the statement is emitted::
+
+ print(select(Employee))
+ {opensql}SELECT employee.id, employee.name, employee.type, engineer.id AS id_1,
+ engineer.engineer_info, manager.id AS id_2, manager.manager_name
+ FROM employee
+ LEFT OUTER JOIN engineer ON employee.id = engineer.id
+ LEFT OUTER JOIN manager ON employee.id = manager.id
+
+When using mapper-level "with polymorphic", queries can also refer to the
+subclass entities directly, where they implicitly represent the joined tables
+in the polymorphic query. Above, we can freely refer to
+``Manager`` and ``Engineer`` directly against the default ``Employee``
+entity::
+
+ print(
+ select(Employee).where(
+ or_(Manager.manager_name == "x", Engineer.engineer_info == "y")
+ )
+ )
+ {opensql}SELECT employee.id, employee.name, employee.type, engineer.id AS id_1,
+ engineer.engineer_info, manager.id AS id_2, manager.manager_name
+ FROM employee
+ LEFT OUTER JOIN engineer ON employee.id = engineer.id
+ LEFT OUTER JOIN manager ON employee.id = manager.id
+ WHERE manager.manager_name = :manager_name_1
+ OR engineer.engineer_info = :engineer_info_1
+
+However, if we needed to refer to the ``Employee`` entity or its sub
+entities in separate, aliased contexts, we would again make direct use of
+:func:`_orm.with_polymorphic` to define these aliased entities as illustrated
+in :ref:`with_polymorphic_aliasing`.
+
+For more centralized control over the polymorphic selectable, the more legacy
+form of mapper-level polymorphic control may be used which is the
+:paramref:`_orm.Mapper.with_polymorphic` parameter, configured on the base
+class. This parameter accepts arguments that are comparable to the
+:func:`_orm.with_polymorphic` construct, however common use with a joined
+inheritance mapping is the plain asterisk, indicating all sub-tables should be
+LEFT OUTER JOINED, as in:
+
+.. sourcecode:: python
+
+ class Employee(Base):
+ __tablename__ = 'employee'
+ id = mapped_column(Integer, primary_key=True)
+ name = mapped_column(String(50))
+ type = mapped_column(String(50))
+
+ __mapper_args__ = {
+ 'polymorphic_identity': 'employee',
+ 'with_polymorphic': '*',
+ 'polymorphic_on': type
+ }
+
+ class Engineer(Employee):
+ __tablename__ = 'engineer'
+ id = mapped_column(Integer, ForeignKey('employee.id'), primary_key=True)
+ engineer_info = mapped_column(String(30))
+
+ __mapper_args__ = {
+ 'polymorphic_identity': 'engineer',
+ }
+
+ class Manager(Employee):
+ __tablename__ = 'manager'
+ id = mapped_column(Integer, ForeignKey('employee.id'), primary_key=True)
+ manager_name = mapped_column(String(30))
+
+ __mapper_args__ = {
+ 'polymorphic_identity': 'manager',
+ }
+
+Overall, the LEFT OUTER JOIN format used by :func:`_orm.with_polymorphic` and
+by options such as :paramref:`_orm.Mapper.with_polymorphic` may be cumbersome
+from a SQL and database optimizer point of view; for general loading of
+subclass attributes in joined inheritance mappings, the
+:func:`_orm.selectin_polymorphic` approach, or its mapper level equivalent of
+setting :paramref:`_orm.Mapper.polymorphic_load` to ``"selectin"`` should
+likely be preferred, making use of :func:`_orm.with_polymorphic` on a per-query
+basis only as needed.
+
+.. _inheritance_of_type:
+
+Joining to specific sub-types or with_polymorphic() entities
+------------------------------------------------------------
+
+As a :func:`_orm.with_polymorphic` entity is a special case of :func:`_orm.aliased`,
+in order to treat a polymorphic entity as the target of a join, specifically
+when using a :func:`_orm.relationship` construct as the ON clause,
+we use the same technique for regular aliases as detailed at
+:ref:`orm_queryguide_joining_relationships_aliased`, most succinctly
+using :meth:`_orm.PropComparator.of_type`. In the example below we illustrate
+a join from the parent ``Company`` entity along the one-to-many relationship
+``Company.employees``, which is configured in the
+:doc:`setup <_inheritance_setup>` to link to ``Employee`` objects,
+using a :func:`_orm.with_polymorphic` entity as the target::
+
+ >>> employee_plus_engineer = with_polymorphic(Employee, [Engineer])
+ >>> stmt = (
+ ... select(Company.name, employee_plus_engineer.name).
+ ... join(Company.employees.of_type(employee_plus_engineer)).
+ ... where(
+ ... or_(
+ ... employee_plus_engineer.name == "SpongeBob",
+ ... employee_plus_engineer.Engineer.engineer_info == "Senior Customer Engagement Engineer"
+ ... )
+ ... )
+ ... )
+ >>> for company_name, emp_name in session.execute(stmt):
+ ... print(f"{company_name} {emp_name}")
+ {opensql}SELECT company.name, employee.name AS name_1
+ FROM company JOIN (employee LEFT OUTER JOIN engineer ON employee.id = engineer.id) ON company.id = employee.company_id
+ WHERE employee.name = ? OR engineer.engineer_info = ?
+ [...] ('SpongeBob', 'Senior Customer Engagement Engineer')
+ {stop}Krusty Krab SpongeBob
+ Krusty Krab Squidward
+
+More directly, :meth:`_orm.PropComparator.of_type` is also used with inheritance
+mappings of any kind to limit a join along a :func:`_orm.relationship` to a
+particular sub-type of the :func:`_orm.relationship`'s target. The above
+query could be written strictly in terms of ``Engineer`` targets as follows::
+
+ >>> stmt = (
+ ... select(Company.name, Engineer.name).
+ ... join(Company.employees.of_type(Engineer)).
+ ... where(
+ ... or_(
+ ... Engineer.name == "SpongeBob",
+ ... Engineer.engineer_info == "Senior Customer Engagement Engineer"
+ ... )
+ ... )
+ ... )
+ >>> for company_name, emp_name in session.execute(stmt):
+ ... print(f"{company_name} {emp_name}")
+ {opensql}SELECT company.name, employee.name AS name_1
+ FROM company JOIN (employee JOIN engineer ON employee.id = engineer.id) ON company.id = employee.company_id
+ WHERE employee.name = ? OR engineer.engineer_info = ?
+ [...] ('SpongeBob', 'Senior Customer Engagement Engineer')
+ {stop}Krusty Krab SpongeBob
+ Krusty Krab Squidward
+
+It can be observed above that joining to the ``Engineer`` target directly,
+rather than the "polymorphic selectable" of ``with_polymorphic(Employee, [Engineer])``,
+has the useful characteristic of using an inner JOIN rather than a
+LEFT OUTER JOIN, which is generally more performant from a SQL optimizer
+point of view.
+
+.. _eagerloading_polymorphic_subtypes:
+
+Eager Loading of Polymorphic Subtypes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The use of :meth:`_orm.PropComparator.of_type` illustrated with the
+:meth:`.Select.join` method in the previous section may also be applied
+equivalently to :ref:`relationship loader options <orm_queryguide_relationship_loaders>`,
+such as :func:`_orm.selectinload` and :func:`_orm.joinedload`.
+
+As a basic example, if we wished to load ``Company`` objects, and additionally
+eagerly load all elements of ``Company.employees`` using the
+:func:`_orm.with_polymorphic` construct against the full hierarchy, we may write::
+
+ >>> all_employees = with_polymorphic(Employee, '*')
+ >>> stmt = (
+ ... select(Company).
+ ... options(selectinload(Company.employees.of_type(all_employees)))
+ ... )
+ >>> for company in session.scalars(stmt):
+ ... print(f"company: {company.name}")
+ ... print(f"employees: {company.employees}")
+ {opensql}SELECT company.id, company.name
+ FROM company
+ [...] ()
+ SELECT employee.company_id AS employee_company_id, employee.id AS employee_id,
+ employee.name AS employee_name, employee.type AS employee_type, manager.id AS manager_id,
+ manager.manager_name AS manager_manager_name, engineer.id AS engineer_id,
+ engineer.engineer_info AS engineer_engineer_info
+ FROM employee
+ LEFT OUTER JOIN manager ON employee.id = manager.id
+ LEFT OUTER JOIN engineer ON employee.id = engineer.id
+ WHERE employee.company_id IN (?)
+ [...] (1,)
+ company: Krusty Krab
+ employees: [Manager('Mr. Krabs'), Engineer('SpongeBob'), Engineer('Squidward')]
+
+The above query may be compared directly to the
+:func:`_orm.selectin_polymorphic` version illustrated in the previous
+section :ref:`polymorphic_selectin_as_loader_option_target`.
+
+.. seealso::
+
+ :ref:`polymorphic_selectin_as_loader_option_target` - illustrates the equivalent example
+ as above using :func:`_orm.selectin_polymorphic` instead
+
+
+.. _loading_single_inheritance:
+
+SELECT Statements for Single Inheritance Mappings
+-------------------------------------------------
+
+.. Setup code, not for display
+
+ >>> session.close()
+ ROLLBACK
+ >>> conn.close()
+
+.. doctest-include _single_inheritance.rst
+
+.. admonition:: Single Table Inheritance Setup
+
+ This section discusses single table inheritance,
+ described at :ref:`single_inheritance`, which uses a single table to
+ represent multiple classes in a hierarchy.
+
+ :doc:`View the ORM setup for this section <_single_inheritance>`.
+
+In contrast to joined inheritance mappings, the construction of SELECT
+statements for single inheritance mappings tends to be simpler since for
+an all-single-inheritance hierarchy, there's only one table.
+
+Regardless of whether or not the inheritance hierarchy is all single-inheritance
+or has a mixture of joined and single inheritance, SELECT statements for
+single inheritance differentiate queries against the base class vs. a subclass
+by limiting the SELECT statement with additional WHERE criteria.
+
+As an example, a query for the single-inheritance example mapping of
+``Employee`` will load objects of type ``Manager``, ``Engineer`` and
+``Employee`` using a simple SELECT of the table::
+
+ >>> stmt = select(Employee).order_by(Employee.id)
+ >>> for obj in session.scalars(stmt):
+ ... print(f"{obj}")
+ {opensql}BEGIN (implicit)
+ SELECT employee.id, employee.name, employee.type
+ FROM employee ORDER BY employee.id
+ [...] ()
+ {stop}Manager('Mr. Krabs')
+ Engineer('SpongeBob')
+ Engineer('Squidward')
+
+When a load is emitted for a specific subclass, additional criteria are
+added to the SELECT that limit the rows, such as below where a SELECT against
+the ``Engineer`` entity is performed::
+
+ >>> stmt = select(Engineer).order_by(Engineer.id)
+ >>> objects = session.scalars(stmt).all()
+ {opensql}SELECT employee.id, employee.name, employee.type, employee.engineer_info
+ FROM employee
+ WHERE employee.type IN (?) ORDER BY employee.id
+ [...] ('engineer',)
+ {stop}>>> for obj in objects:
+ ... print(f"{obj}")
+ Engineer('SpongeBob')
+ Engineer('Squidward')
+
+
+
+Optimizing Attribute Loads for Single Inheritance
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. Setup code, not for display
+
+ >>> session.close()
+ ROLLBACK
+
+The default behavior of single inheritance mappings regarding how attributes on
+subclasses are SELECTed is similar to that of joined inheritance, in that
+subclass-specific attributes still emit a second SELECT by default. In
+the example below, a single ``Employee`` of type ``Manager`` is loaded,
+however since the requested class is ``Employee``, the ``Manager.manager_name``
+attribute is not present by default, and an additional SELECT is emitted
+when it's accessed::
+
+ >>> mr_krabs = session.scalars(select(Employee).where(Employee.name == "Mr. Krabs")).one()
+ {opensql}BEGIN (implicit)
+ SELECT employee.id, employee.name, employee.type
+ FROM employee
+ WHERE employee.name = ?
+ [...] ('Mr. Krabs',)
+ {stop}>>> mr_krabs.manager_name
+ {opensql}SELECT employee.manager_name AS employee_manager_name
+ FROM employee
+ WHERE employee.id = ? AND employee.type IN (?)
+ [...] (1, 'manager')
+ {stop}'Eugene H. Krabs'
+
+.. Setup code, not for display
+
+ >>> session.close()
+ ROLLBACK
+
+To alter this behavior, the same general concepts used to eagerly load these
+additional attributes in joined inheritance loading apply to single
+inheritance as well, including use of the :func:`_orm.selectin_polymorphic`
+option as well as the :func:`_orm.with_polymorphic` option, the latter of which
+simply includes the additional columns and from a SQL perspective is more
+efficient for single-inheritance mappers::
+
+ >>> employees = with_polymorphic(Employee, '*')
+ >>> stmt = select(employees).order_by(employees.id)
+ >>> objects = session.scalars(stmt).all()
+ {opensql}BEGIN (implicit)
+ SELECT employee.id, employee.name, employee.type,
+ employee.manager_name, employee.engineer_info
+ FROM employee ORDER BY employee.id
+ [...] ()
+ {stop}>>> for obj in objects:
+ ... print(f"{obj}")
+ Manager('Mr. Krabs')
+ Engineer('SpongeBob')
+ Engineer('Squidward')
+ >>> objects[0].manager_name
+ 'Eugene H. Krabs'
+
+Since the overhead of loading single-inheritance subclass mappings is
+usually minimal, it's recommended that single inheritance mappings
+include the :paramref:`_orm.Mapper.polymorphic_load` parameter with a
+setting of ``"inline"`` for those subclasses where loading of their specific
+subclass attributes is expected to be common. An example illustrating the
+:doc:`setup <_single_inheritance>`, modified to include this option,
+is below::
+
+ >>> class Base(DeclarativeBase):
+ ... pass
+ ...
+ >>> class Employee(Base):
+ ... __tablename__ = "employee"
+ ... id: Mapped[int] = mapped_column(primary_key=True)
+ ... name: Mapped[str]
+ ... type: Mapped[str]
+ ... def __repr__(self):
+ ... return f"{self.__class__.__name__}({self.name!r})"
+ ... __mapper_args__ = {
+ ... "polymorphic_identity": "employee",
+ ... "polymorphic_on": "type",
+ ... }
+ ...
+ >>> class Manager(Employee):
+ ... manager_name: Mapped[str] = mapped_column(nullable=True)
+ ... __mapper_args__ = {
+ ... "polymorphic_identity": "manager",
+ ... "polymorphic_load": "inline"
+ ... }
+ ...
+ >>> class Engineer(Employee):
+ ... engineer_info: Mapped[str] = mapped_column(nullable=True)
+ ... __mapper_args__ = {
+ ... "polymorphic_identity": "engineer",
+ ... "polymorphic_load": "inline"
+ ... }
+
+
+With the above mapping, the ``Manager`` and ``Engineer`` classes will have
+their columns included in SELECT statements against the ``Employee``
+entity automatically::
+
+ >>> print(select(Employee))
+ {opensql}SELECT employee.id, employee.name, employee.type,
+ employee.manager_name, employee.engineer_info
+ FROM employee
+
+
+
+
+
+
+Inheritance Loading API
+-----------------------
+
+.. autofunction:: sqlalchemy.orm.with_polymorphic
+
+.. autofunction:: sqlalchemy.orm.selectin_polymorphic
+
+
+.. Setup code, not for display
+
+ >>> session.close()
+ ROLLBACK
+ >>> conn.close()
\ No newline at end of file
--- /dev/null
+.. highlight:: pycon+sql
+.. |prev| replace:: :doc:`api`
+
+.. |tutorial_title| replace:: ORM Querying Guide
+
+.. topic:: |tutorial_title|
+
+ This page is part of the :doc:`index`.
+
+ Previous: |prev|
+
+
+.. currentmodule:: sqlalchemy.orm
+
+.. _query_api_toplevel:
+
+================
+Legacy Query API
+================
+
+.. admonition:: About the Legacy Query API
+
+
+ This page contains the Python generated documentation for the
+ :class:`_query.Query` construct, which for many years was the sole SQL
+ interface when working with the SQLAlchemy ORM. As of version 2.0, an all
+ new way of working is now the standard approach, where the same
+ :func:`_sql.select` construct that works for Core works just as well for the
+ ORM, providing a consistent interface for building queries.
+
+ For any application that is built on the SQLAlchemy ORM prior to the
+   2.0 API, the :class:`_query.Query` API will usually represent the vast
+ majority of database access code within an application, and as such the
+ majority of the :class:`_query.Query` API is
+ **not being removed from SQLAlchemy**. The :class:`_query.Query` object
+ behind the scenes now translates itself into a 2.0 style :func:`_sql.select`
+   object when the :class:`_query.Query` object is executed, so it is now
+   just a very thin adapter API.
+
+ For a guide to migrating an application based on :class:`_query.Query`
+ to 2.0 style, see :ref:`migration_20_query_usage`.
+
+ For an introduction to writing SQL for ORM objects in the 2.0 style,
+ start with the :ref:`unified_tutorial`. Additional reference for 2.0 style
+ querying is at :ref:`queryguide_toplevel`.
+
+The Query Object
+================
+
+:class:`_query.Query` is produced in terms of a given :class:`~.Session`, using the :meth:`~.Session.query` method::
+
+ q = session.query(SomeMappedClass)
+
+Following is the full interface for the :class:`_query.Query` object.
+
+.. autoclass:: sqlalchemy.orm.Query
+ :members:
+ :inherited-members:
+
+ORM-Specific Query Constructs
+=============================
+
+This section has moved to :ref:`queryguide_additional`.
--- /dev/null
+.. note *_include.rst is a naming convention in conf.py
+
+.. |tutorial_title| replace:: ORM Querying Guide
+
+.. topic:: |tutorial_title|
+
+ This page is part of the :doc:`index`.
+
+ Previous: |prev| | Next: |next|
+
+.. footer_topic:: |tutorial_title|
+
+ Next Query Guide Section: |next|
+
--- /dev/null
+.. |prev| replace:: :doc:`columns`
+.. |next| replace:: :doc:`api`
+
+.. include:: queryguide_nav_include.rst
+
+.. _orm_queryguide_relationship_loaders:
+
+.. _loading_toplevel:
+
+.. currentmodule:: sqlalchemy.orm
+
+Relationship Loading Techniques
+===============================
+
+.. admonition:: About this Document
+
+ This section presents an in-depth view of how to load related
+   objects. Readers should be familiar with
+   :ref:`relationship_config_toplevel` and basic use of the ORM.
+
+A big part of SQLAlchemy is providing a wide range of control over how related
+objects get loaded when querying. By "related objects" we refer to collections
+or scalar associations configured on a mapper using :func:`_orm.relationship`.
+This behavior can be configured at mapper construction time using the
+:paramref:`_orm.relationship.lazy` parameter to the :func:`_orm.relationship`
+function, as well as by using **ORM loader options** with
+the :class:`_sql.Select` construct.
+
+The loading of relationships falls into three categories: **lazy** loading,
+**eager** loading, and **no** loading. Lazy loading refers to objects that are returned
+from a query without the related
+objects loaded at first. When the given collection or reference is
+first accessed on a particular object, an additional SELECT statement
+is emitted such that the requested collection is loaded.
+
+Eager loading refers to objects returned from a query with the related
+collection or scalar reference already loaded up front. The ORM
+achieves this either by augmenting the SELECT statement it would normally
+emit with a JOIN to load in related rows simultaneously, or by emitting
+additional SELECT statements after the primary one to load collections
+or scalar references at once.
+
+"No" loading refers to the disabling of loading on a given relationship;
+either the attribute is left empty and never loaded, or it raises an error
+when accessed, in order to guard against unwanted lazy loads.
+
+The primary forms of relationship loading are:
+
+* **lazy loading** - available via ``lazy='select'`` or the :func:`.lazyload`
+ option, this is the form of loading that emits a SELECT statement at
+ attribute access time to lazily load a related reference on a single
+ object at a time. Lazy loading is detailed at :ref:`lazy_loading`.
+
+* **select IN loading** - available via ``lazy='selectin'`` or the :func:`.selectinload`
+ option, this form of loading emits a second (or more) SELECT statement which
+ assembles the primary key identifiers of the parent objects into an IN clause,
+ so that all members of related collections / scalar references are loaded at once
+ by primary key. Select IN loading is detailed at :ref:`selectin_eager_loading`.
+
+* **joined loading** - available via ``lazy='joined'`` or the :func:`_orm.joinedload`
+ option, this form of loading applies a JOIN to the given SELECT statement
+ so that related rows are loaded in the same result set. Joined eager loading
+ is detailed at :ref:`joined_eager_loading`.
+
+* **subquery loading** - available via ``lazy='subquery'`` or the :func:`.subqueryload`
+ option, this form of loading emits a second SELECT statement which re-states the
+ original query embedded inside of a subquery, then JOINs that subquery to the
+ related table in order to load all members of related collections / scalar
+ references at once. Subquery eager loading is detailed at :ref:`subquery_eager_loading`.
+
+* **raise loading** - available via ``lazy='raise'``, ``lazy='raise_on_sql'``,
+ or the :func:`.raiseload` option, this form of loading is triggered at the
+ same time a lazy load would normally occur, except it raises an ORM exception
+ in order to guard against the application making unwanted lazy loads.
+ An introduction to raise loading is at :ref:`prevent_lazy_with_raiseload`.
+
+* **no loading** - available via ``lazy='noload'``, or the :func:`.noload`
+ option; this loading style turns the attribute into an empty attribute
+ (``None`` or ``[]``) that will never load or have any loading effect. When
+ objects are first loaded, this seldom-used strategy behaves somewhat like an
+ eager loader, in that an empty attribute or collection is placed; for expired
+ objects it instead relies upon the default value of the attribute being
+ returned on access. The net effect is the same except for whether or not the
+ attribute name appears in the :attr:`.InstanceState.unloaded` collection.
+ ``noload`` may be useful for implementing a "write-only" attribute, but this
+ usage is not currently tested or formally supported.
+
+
+.. _relationship_lazy_option:
+
+Configuring Loader Strategies at Mapping Time
+---------------------------------------------
+
+The loader strategy for a particular relationship can be configured
+at mapping time to take place in all cases where an object of the mapped
+type is loaded, in the absence of any query-level options that modify it.
+This is configured using the :paramref:`_orm.relationship.lazy` parameter to
+:func:`_orm.relationship`; common values for this parameter
+include ``select``, ``selectin`` and ``joined``.
+
+The example below illustrates the relationship example at
+:ref:`relationship_patterns_o2m`, configuring the ``Parent.children``
+relationship to use :ref:`selectin_eager_loading` when a SELECT
+statement for ``Parent`` objects is emitted::
+
+ from sqlalchemy import ForeignKey
+ from sqlalchemy.orm import DeclarativeBase
+ from sqlalchemy.orm import Mapped
+ from sqlalchemy.orm import mapped_column
+ from sqlalchemy.orm import relationship
+
+
+ class Base(DeclarativeBase):
+ pass
+
+ class Parent(Base):
+ __tablename__ = "parent"
+
+ id: Mapped[int] = mapped_column(primary_key=True)
+ children: Mapped[list["Child"]] = relationship(lazy="selectin")
+
+ class Child(Base):
+ __tablename__ = "child"
+
+ id: Mapped[int] = mapped_column(primary_key=True)
+ parent_id: Mapped[int] = mapped_column(ForeignKey("parent.id"))
+
+Above, whenever a collection of ``Parent`` objects are loaded, each
+``Parent`` will also have its ``children`` collection populated, using
+the ``"selectin"`` loader strategy that emits a second query.
+
+The default value of the :paramref:`_orm.relationship.lazy` argument is
+``"select"``, which indicates :ref:`lazy_loading`.
+
+.. _relationship_loader_options:
+
+Relationship Loading with Loader Options
+----------------------------------------
+
+The other, and possibly more common, way to configure loading strategies
+is to set them up on a per-query basis against specific attributes using the
+:meth:`_sql.Select.options` method. Very detailed
+control over relationship loading is available using loader options;
+the most common are
+:func:`_orm.joinedload`,
+:func:`_orm.subqueryload`, :func:`_orm.selectinload`
+and :func:`_orm.lazyload`. The option accepts either
+the string name of an attribute against a parent, or for greater specificity
+can accommodate a class-bound attribute directly::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import lazyload
+
+ # set children to load lazily
+ stmt = select(Parent).options(lazyload(Parent.children))
+
+ from sqlalchemy.orm import joinedload
+
+ # set children to load eagerly with a join
+ stmt = select(Parent).options(joinedload(Parent.children))
+
+The loader options can also be "chained" using **method chaining**
+to specify how loading should occur further levels deep::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import joinedload
+
+ stmt = select(Parent).options(
+ joinedload(Parent.children).
+ subqueryload(Child.subelements)
+ )
+
+Chained loader options can be applied against a "lazy" loaded collection.
+This means that when a collection or association is lazily loaded upon
+access, the specified option will then take effect::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import lazyload
+
+ stmt = select(Parent).options(
+ lazyload(Parent.children).
+ subqueryload(Child.subelements)
+ )
+
+Above, the query will return ``Parent`` objects without the ``children``
+collections loaded. When the ``children`` collection on a particular
+``Parent`` object is first accessed, it will lazy load the related
+objects, but additionally apply eager loading to the ``subelements``
+collection on each member of ``children``.
+
+
+.. _loader_option_criteria:
+
+Adding Criteria to loader options
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The relationship attributes used to indicate loader options include the
+ability to add additional filtering criteria to the ON clause of the join
+that's created, or to the WHERE criteria involved, depending on the loader
+strategy. This can be achieved using the :meth:`.PropComparator.and_`
+method which will pass through an option such that loaded results are limited
+to the given filter criteria::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import lazyload
+
+ stmt = select(A).options(lazyload(A.bs.and_(B.id > 5)))
+
+When using limiting criteria, if a particular collection is already loaded
+it won't be refreshed; to ensure the new criteria takes place, apply
+the :ref:`orm_queryguide_populate_existing` execution option::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import lazyload
+
+ stmt = (
+ select(A).
+ options(lazyload(A.bs.and_(B.id > 5))).
+ execution_options(populate_existing=True)
+ )
+
+In order to add filtering criteria to all occurrences of an entity throughout
+a query, regardless of loader strategy or where it occurs in the loading
+process, see the :func:`_orm.with_loader_criteria` function.
+
+.. versionadded:: 1.4
+
+.. _orm_queryguide_relationship_sub_options:
+
+Specifying Sub-Options with Load.options()
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Using method chaining, the loader style of each link in the path is explicitly
+stated. To navigate along a path without changing the existing loader style
+of a particular attribute, the :func:`.defaultload` loader option may be used::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import defaultload
+
+ stmt = select(A).options(
+ defaultload(A.atob).
+ joinedload(B.btoc)
+ )
+
+A similar approach can be used to specify multiple sub-options at once, using
+the :meth:`_orm.Load.options` method::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import defaultload
+ from sqlalchemy.orm import joinedload
+
+ stmt = select(A).options(
+ defaultload(A.atob).options(
+ joinedload(B.btoc),
+ joinedload(B.btod)
+ )
+ )
+
+
+.. seealso::
+
+ :ref:`orm_queryguide_load_only_related` - illustrates examples of combining
+ relationship and column-oriented loader options.
+
+
+.. note:: The loader options applied to an object's lazy-loaded collections
+ are **"sticky"** to specific object instances, meaning they will persist
+ upon collections loaded by that specific object for as long as it exists in
+ memory. For example, given the previous example::
+
+ stmt = select(Parent).options(
+ lazyload(Parent.children).
+ subqueryload(Child.subelements)
+ )
+
+ if the ``children`` collection on a particular ``Parent`` object loaded by
+ the above query is expired (such as when a :class:`.Session` object's
+ transaction is committed or rolled back, or :meth:`.Session.expire_all` is
+ used), when the ``Parent.children`` collection is next accessed in order to
+ re-load it, the ``Child.subelements`` collection will again be loaded using
+ subquery eager loading. This stays the case even if the above ``Parent``
+ object is accessed from a subsequent query that specifies a different set of
+ options. To change the options on an existing object without expunging it and
+ re-loading, they must be set explicitly in conjunction with the
+ :ref:`orm_queryguide_populate_existing` execution option::
+
+ # change the options on Parent objects that were already loaded
+ stmt = select(Parent).execution_options(populate_existing=True).options(
+ lazyload(Parent.children).
+ lazyload(Child.subelements))
+
+ If the objects loaded above are fully cleared from the :class:`.Session`,
+ such as due to garbage collection or that :meth:`.Session.expunge_all`
+ were used, the "sticky" options will also be gone and the newly created
+ objects will make use of new options if loaded again.
+
+ A future SQLAlchemy release may add more alternatives to manipulating
+ the loader options on already-loaded objects.
+
+
+.. _lazy_loading:
+
+Lazy Loading
+------------
+
+By default, all inter-object relationships are **lazy loading**. The scalar or
+collection attribute associated with a :func:`_orm.relationship`
+contains a trigger which fires the first time the attribute is accessed. This
+trigger typically issues a SQL call at the point of access
+in order to load the related object or objects:
+
+.. sourcecode:: python+sql
+
+ >>> spongebob.addresses
+ {opensql}SELECT
+ addresses.id AS addresses_id,
+ addresses.email_address AS addresses_email_address,
+ addresses.user_id AS addresses_user_id
+ FROM addresses
+ WHERE ? = addresses.user_id
+ [5]
+ {stop}[<Address(u'spongebob@google.com')>, <Address(u'j25@yahoo.com')>]
+
+The one case where SQL is not emitted is for a simple many-to-one relationship, when
+the related object can be identified by its primary key alone and that object is already
+present in the current :class:`.Session`. For this reason, while lazy loading
+can be expensive for related collections, in the case that one is loading
+lots of objects with simple many-to-ones against a relatively small set of
+possible target objects, lazy loading may be able to refer to these objects locally
+without emitting as many SELECT statements as there are parent objects.
+
+This default behavior of "load upon attribute access" is known as "lazy" or
+"select" loading - the name "select" because a "SELECT" statement is typically emitted
+when the attribute is first accessed.
+
+Lazy loading can be enabled for a given attribute that is normally
+configured in some other way using the :func:`.lazyload` loader option::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import lazyload
+
+ # force lazy loading for an attribute that is set to
+ # load some other way normally
+ stmt = select(User).options(lazyload(User.addresses))
+
+.. _prevent_lazy_with_raiseload:
+
+Preventing unwanted lazy loads using raiseload
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The :func:`.lazyload` strategy produces an effect that is one of the most
+common issues referred to in object relational mapping: the
+:term:`N plus one problem`, which states that for any N objects loaded,
+accessing their lazy-loaded attributes means there will be N+1 SELECT
+statements emitted. In SQLAlchemy, the usual mitigation for the N+1 problem
+is to make use of its very capable eager load system. However, eager loading
+requires that the attributes which are to be loaded be specified with the
+:class:`_sql.Select` up front. The problem of code that may access other attributes
+that were not eagerly loaded, where lazy loading is not desired, may be
+addressed using the :func:`.raiseload` strategy; this loader strategy
+replaces the behavior of lazy loading with an informative error being
+raised::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import raiseload
+
+ stmt = select(User).options(raiseload(User.addresses))
+
+Above, a ``User`` object loaded from the above query will not have
+the ``.addresses`` collection loaded; if some code later on attempts to
+access this attribute, an ORM exception is raised.
+
+:func:`.raiseload` may be used with a so-called "wildcard" specifier to
+indicate that all relationships should use this strategy. For example,
+to set up only one attribute as eager loading, and all the rest as raise::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import joinedload
+ from sqlalchemy.orm import raiseload
+
+ stmt = select(Order).options(
+ joinedload(Order.items), raiseload('*')
+ )
+
+The above wildcard will apply to **all** relationships: not only every
+relationship on ``Order`` besides ``items``, but also all relationships on
+the ``Item`` objects. To set up :func:`.raiseload` for only the
+``Order`` objects, specify a full path with :class:`_orm.Load`::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import joinedload
+ from sqlalchemy.orm import Load
+
+ stmt = select(Order).options(
+ joinedload(Order.items), Load(Order).raiseload('*')
+ )
+
+Conversely, to set up the raise for just the ``Item`` objects::
+
+ stmt = select(Order).options(
+ joinedload(Order.items).raiseload('*')
+ )
+
+
+The :func:`.raiseload` option applies only to relationship attributes. For
+column-oriented attributes, the :func:`.defer` option supports the
+:paramref:`_orm.defer.raiseload` parameter which works in the same way.
+
+.. tip:: The "raiseload" strategies **do not apply**
+ within the :term:`unit of work` flush process. That means if the
+ :meth:`_orm.Session.flush` process needs to load a collection in order
+ to finish its work, it will do so while bypassing any :func:`_orm.raiseload`
+ directives.
+
+.. seealso::
+
+ :ref:`wildcard_loader_strategies`
+
+ :ref:`orm_queryguide_deferred_raiseload`
+
+.. _joined_eager_loading:
+
+Joined Eager Loading
+--------------------
+
+Joined eager loading is the oldest style of eager loading included with
+the SQLAlchemy ORM. It works by connecting a JOIN (by default
+a LEFT OUTER join) to the SELECT statement emitted,
+and populates the target scalar/collection from the
+same result set as that of the parent.
+
+At the mapping level, this looks like::
+
+ class Address(Base):
+ # ...
+
+ user: Mapped[User] = relationship(lazy="joined")
+
+Joined eager loading is usually applied as an option to a query, rather than
+as a default loading option on the mapping, in particular when used for
+collections rather than many-to-one-references. This is achieved
+using the :func:`_orm.joinedload` loader option:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy import select
+ >>> from sqlalchemy.orm import joinedload
+ >>> stmt = (
+ ... select(User).
+ ... options(joinedload(User.addresses)).
+ ... filter_by(name='spongebob')
+ ... )
+ >>> spongebob = session.scalars(stmt).unique().all()
+ {opensql}SELECT
+ addresses_1.id AS addresses_1_id,
+ addresses_1.email_address AS addresses_1_email_address,
+ addresses_1.user_id AS addresses_1_user_id,
+ users.id AS users_id, users.name AS users_name,
+ users.fullname AS users_fullname,
+ users.nickname AS users_nickname
+ FROM users
+ LEFT OUTER JOIN addresses AS addresses_1
+ ON users.id = addresses_1.user_id
+ WHERE users.name = ?
+ ['spongebob']
+
+
+.. tip::
+
+ When including :func:`_orm.joinedload` in reference to a one-to-many or
+ many-to-many collection, the :meth:`_result.Result.unique` method must be
+ applied to the returned result, which will uniquify the incoming rows by
+ primary key that otherwise are multiplied out by the join. The ORM will
+ raise an error if this is not present.
+
+ This is not automatic in modern SQLAlchemy, as it changes the behavior
+ of the result set to return fewer ORM objects than the statement would
+ normally return in terms of number of rows. Therefore SQLAlchemy keeps
+ the use of :meth:`_result.Result.unique` explicit, so there's no ambiguity
+ that the returned objects are being uniquified on primary key.
+
+The JOIN emitted by default is a LEFT OUTER JOIN, to allow for a lead object
+that does not refer to a related row. For an attribute that is guaranteed
+to have an element, such as a many-to-one
+reference to a related object where the referencing foreign key is NOT NULL,
+the query can be made more efficient by using an inner join; this is available
+at the mapping level via the :paramref:`_orm.relationship.innerjoin` flag::
+
+ class Address(Base):
+ # ...
+
+ user_id: Mapped[int] = mapped_column(ForeignKey('users.id'))
+ user: Mapped[User] = relationship(lazy="joined", innerjoin=True)
+
+At the query option level, via the :paramref:`_orm.joinedload.innerjoin` flag::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import joinedload
+
+ stmt = select(Address).options(
+ joinedload(Address.user, innerjoin=True)
+ )
+
+The JOIN will right-nest itself when applied in a chain that includes
+an OUTER JOIN:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy import select
+ >>> from sqlalchemy.orm import joinedload
+ >>> stmt = select(User).options(
+ ... joinedload(User.addresses).
+ ... joinedload(Address.widgets, innerjoin=True)
+ ... )
+ >>> results = session.scalars(stmt).unique().all()
+ {opensql}SELECT
+ widgets_1.id AS widgets_1_id,
+ widgets_1.name AS widgets_1_name,
+ addresses_1.id AS addresses_1_id,
+ addresses_1.email_address AS addresses_1_email_address,
+ addresses_1.user_id AS addresses_1_user_id,
+ users.id AS users_id, users.name AS users_name,
+ users.fullname AS users_fullname,
+ users.nickname AS users_nickname
+ FROM users
+ LEFT OUTER JOIN (
+ addresses AS addresses_1 JOIN widgets AS widgets_1 ON
+ addresses_1.widget_id = widgets_1.id
+ ) ON users.id = addresses_1.user_id
+
+
+.. tip:: If using database row locking techniques when emitting the SELECT,
+ meaning the :meth:`_sql.Select.with_for_update` method is being used
+ to emit SELECT..FOR UPDATE, the joined table may be locked as well,
+ depending on the behavior of the backend in use. It's not recommended
+ to use joined eager loading at the same time as SELECT..FOR UPDATE
+ for this reason.
+
+
+
+
+.. _zen_of_eager_loading:
+
+The Zen of Joined Eager Loading
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Since joined eager loading seems to have many resemblances to the use of
+:meth:`_sql.Select.join`, it often produces confusion as to when and how it should
+be used. It is critical to understand the distinction that while
+:meth:`_sql.Select.join` is used to alter the results of a query, :func:`_orm.joinedload`
+goes to great lengths to **not** alter the results of the query, and
+instead hide the effects of the rendered join to only allow for related objects
+to be present.
+
+The philosophy behind loader strategies is that any set of loading schemes can
+be applied to a particular query, and *the results don't change* - only the
+number of SQL statements required to fully load related objects and collections
+changes. A particular query might start out using all lazy loads. After using
+it in context, it might be revealed that particular attributes or collections
+are always accessed, and that it would be more efficient to change the loader
+strategy for these. The strategy can be changed with no other modifications
+to the query, the results will remain identical, but fewer SQL statements would
+be emitted. In theory (and pretty much in practice), nothing you can do to the
+:class:`_sql.Select` would make it load a different set of primary or related
+objects based on a change in loader strategy.
+
+How :func:`joinedload` in particular achieves this result of not impacting
+entity rows returned in any way is that it creates an anonymous alias of the
+joins it adds to your query, so that they can't be referenced by other parts of
+the query. For example, the query below uses :func:`_orm.joinedload` to create a
+LEFT OUTER JOIN from ``users`` to ``addresses``, however the ``ORDER BY`` added
+against ``Address.email_address`` is not valid - the ``Address`` entity is not
+named in the query:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy import select
+ >>> from sqlalchemy.orm import joinedload
+ >>> stmt = (
+ ... select(User).
+ ... options(joinedload(User.addresses)).
+ ... filter(User.name == 'spongebob').
+ ... order_by(Address.email_address)
+ ... )
+ >>> result = session.scalars(stmt).unique().all()
+ {opensql}SELECT
+ addresses_1.id AS addresses_1_id,
+ addresses_1.email_address AS addresses_1_email_address,
+ addresses_1.user_id AS addresses_1_user_id,
+ users.id AS users_id,
+ users.name AS users_name,
+ users.fullname AS users_fullname,
+ users.nickname AS users_nickname
+ FROM users
+ LEFT OUTER JOIN addresses AS addresses_1
+ ON users.id = addresses_1.user_id
+ WHERE users.name = ?
+ ORDER BY addresses.email_address <-- this part is wrong !
+ ['spongebob']
+
+Above, ``ORDER BY addresses.email_address`` is not valid since ``addresses`` is not in the
+FROM list. The correct way to load the ``User`` records and order by email
+address is to use :meth:`_sql.Select.join`:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy import select
+ >>> stmt = (
+ ... select(User).
+ ... join(User.addresses).
+ ... filter(User.name == 'spongebob').
+ ... order_by(Address.email_address)
+ ... )
+ >>> result = session.scalars(stmt).unique().all()
+ {opensql}
+ SELECT
+ users.id AS users_id,
+ users.name AS users_name,
+ users.fullname AS users_fullname,
+ users.nickname AS users_nickname
+ FROM users
+ JOIN addresses ON users.id = addresses.user_id
+ WHERE users.name = ?
+ ORDER BY addresses.email_address
+ ['spongebob']
+
+The statement above is of course not the same as the previous one, in that the
+columns from ``addresses`` are not included in the result at all. We can add
+:func:`_orm.joinedload` back in, so that there are two joins - one is that which we
+are ordering on, the other is used anonymously to load the contents of the
+``User.addresses`` collection:
+
+.. sourcecode:: python+sql
+
+
+ >>> stmt = (
+ ... select(User).
+ ... join(User.addresses).
+ ... options(joinedload(User.addresses)).
+ ... filter(User.name == 'spongebob').
+ ... order_by(Address.email_address)
+ ... )
+ >>> result = session.scalars(stmt).unique().all()
+ {opensql}SELECT
+ addresses_1.id AS addresses_1_id,
+ addresses_1.email_address AS addresses_1_email_address,
+ addresses_1.user_id AS addresses_1_user_id,
+ users.id AS users_id, users.name AS users_name,
+ users.fullname AS users_fullname,
+ users.nickname AS users_nickname
+ FROM users JOIN addresses
+ ON users.id = addresses.user_id
+ LEFT OUTER JOIN addresses AS addresses_1
+ ON users.id = addresses_1.user_id
+ WHERE users.name = ?
+ ORDER BY addresses.email_address
+ ['spongebob']
+
+What we see above is that our usage of :meth:`_sql.Select.join` is to supply JOIN
+clauses we'd like to use in subsequent query criterion, whereas our usage of
+:func:`_orm.joinedload` only concerns itself with the loading of the
+``User.addresses`` collection, for each ``User`` in the result. In this case,
+the two joins most probably appear redundant - which they are. If we wanted to
+use just one JOIN for collection loading as well as ordering, we use the
+:func:`.contains_eager` option, described in :ref:`contains_eager` below. But
+to see why :func:`joinedload` does what it does, consider if we were
+**filtering** on a particular ``Address``:
+
+.. sourcecode:: python+sql
+
+ >>> stmt = (
+ ... select(User).
+ ... join(User.addresses).
+ ... options(joinedload(User.addresses)).
+ ... filter(User.name=='spongebob').
+ ... filter(Address.email_address=='someaddress@foo.com')
+ ... )
+ >>> result = session.scalars(stmt).unique().all()
+ {opensql}SELECT
+ addresses_1.id AS addresses_1_id,
+ addresses_1.email_address AS addresses_1_email_address,
+ addresses_1.user_id AS addresses_1_user_id,
+ users.id AS users_id, users.name AS users_name,
+ users.fullname AS users_fullname,
+ users.nickname AS users_nickname
+ FROM users JOIN addresses
+ ON users.id = addresses.user_id
+ LEFT OUTER JOIN addresses AS addresses_1
+ ON users.id = addresses_1.user_id
+ WHERE users.name = ? AND addresses.email_address = ?
+ ['spongebob', 'someaddress@foo.com']
+
+Above, we can see that the two JOINs have very different roles. One will match
+exactly one row, that of the join of ``User`` and ``Address`` where
+``Address.email_address=='someaddress@foo.com'``. The other LEFT OUTER JOIN
+will match *all* ``Address`` rows related to ``User``, and is only used to
+populate the ``User.addresses`` collection, for those ``User`` objects that are
+returned.
+
+By changing the usage of :func:`_orm.joinedload` to another style of loading, we
+can change how the collection is loaded completely independently of SQL used to
+retrieve the actual ``User`` rows we want. Below we change :func:`_orm.joinedload`
+into :func:`.subqueryload`:
+
+.. sourcecode:: python+sql
+
+ >>> stmt = (
+ ... select(User).
+ ... join(User.addresses).
+ ... options(subqueryload(User.addresses)).
+ ... filter(User.name=='spongebob').
+ ... filter(Address.email_address=='someaddress@foo.com')
+ ... )
+ >>> result = session.scalars(stmt).all()
+ {opensql}SELECT
+ users.id AS users_id,
+ users.name AS users_name,
+ users.fullname AS users_fullname,
+ users.nickname AS users_nickname
+ FROM users
+ JOIN addresses ON users.id = addresses.user_id
+ WHERE
+ users.name = ?
+ AND addresses.email_address = ?
+ ['spongebob', 'someaddress@foo.com']
+
+ # ... subqueryload() emits a SELECT in order
+ # to load all address records ...
+
+When using joined eager loading, if the query contains a modifier that impacts
+the rows returned externally to the joins, such as when using DISTINCT, LIMIT,
+OFFSET or equivalent, the completed statement is first wrapped inside a
+subquery, and the joins used specifically for joined eager loading are applied
+to the subquery. SQLAlchemy's joined eager loading goes the extra mile, and
+then ten miles further, to absolutely ensure that it does not affect the end
+result of the query, only the way collections and related objects are loaded,
+no matter what the format of the query is.
+
+.. seealso::
+
+ :ref:`contains_eager` - using :func:`.contains_eager`
+
+.. _selectin_eager_loading:
+
+Select IN loading
+-----------------
+
+Select IN loading is similar in operation to subquery eager loading; however,
+the SELECT statement which is emitted has a much simpler structure than that of
+subquery eager loading. In most cases, selectin loading is the simplest and
+most efficient way to eagerly load collections of objects. The only scenario in
+which selectin eager loading is not feasible is when the model is using
+composite primary keys, and the backend database does not support tuples with
+IN, which currently includes SQL Server.
+
+"Select IN" eager loading is provided using the ``"selectin"`` argument to
+:paramref:`_orm.relationship.lazy` or by using the :func:`.selectinload` loader
+option. This style of loading emits a SELECT that refers to the primary key
+values of the parent object, or in the case of a many-to-one
+relationship to those of the child objects, inside of an IN clause, in
+order to load related associations:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy import or_, select
+ >>> from sqlalchemy.orm import selectinload
+ >>> stmt = (
+ ... select(User).
+ ... options(selectinload(User.addresses)).
+ ... filter(or_(User.name == 'spongebob', User.name == 'ed'))
+ ... )
+ >>> result = session.scalars(stmt).all()
+ {opensql}SELECT
+ users.id AS users_id,
+ users.name AS users_name,
+ users.fullname AS users_fullname,
+ users.nickname AS users_nickname
+ FROM users
+ WHERE users.name = ? OR users.name = ?
+ ('spongebob', 'ed')
+ SELECT
+ addresses.id AS addresses_id,
+ addresses.email_address AS addresses_email_address,
+ addresses.user_id AS addresses_user_id
+ FROM addresses
+ WHERE addresses.user_id IN (?, ?)
+ (5, 7)
+
+Above, the second SELECT refers to ``addresses.user_id IN (5, 7)``, where the
+"5" and "7" are the primary key values for the previous two ``User``
+objects loaded; after a batch of objects are completely loaded, their primary
+key values are injected into the ``IN`` clause for the second SELECT.
+Because the relationship between ``User`` and ``Address`` has a simple [1]_
+primary join condition and provides that the
+primary key values for ``User`` can be derived from ``Address.user_id``, the
+statement has no joins or subqueries at all.
+
+.. versionchanged:: 1.3 selectin loading can omit the JOIN for a simple
+ one-to-many collection.
+
+For simple [1]_ many-to-one loads, a JOIN is also not needed as the foreign key
+value from the parent object is used:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy import select
+ >>> from sqlalchemy.orm import selectinload
+ >>> stmt = select(Address).options(selectinload(Address.user))
+ >>> result = session.scalars(stmt).all()
+ {opensql}SELECT
+ addresses.id AS addresses_id,
+ addresses.email_address AS addresses_email_address,
+ addresses.user_id AS addresses_user_id
+ FROM addresses
+ SELECT
+ users.id AS users_id,
+ users.name AS users_name,
+ users.fullname AS users_fullname,
+ users.nickname AS users_nickname
+ FROM users
+ WHERE users.id IN (?, ?)
+ (1, 2)
+
+.. versionchanged:: 1.3.6 selectin loading can also omit the JOIN for a simple
+ many-to-one relationship.
+
+.. [1] by "simple" we mean that the :paramref:`_orm.relationship.primaryjoin`
+ condition expresses an equality comparison between the primary key of the
+ "one" side and a straight foreign key of the "many" side, without any
+ additional criteria.
+
+Select IN loading also supports many-to-many relationships, where it currently
+will JOIN across all three tables to match rows from one side to the other.
+
+Things to know about this kind of loading include:
+
+* The SELECT statement emitted by the "selectin" loader strategy, unlike
+ that of "subquery", does not
+ require a subquery nor does it inherit any of the performance limitations
+ of the original query; the lookup is a simple primary key lookup and should
+ have high performance.
+
+* The special ordering requirements of subqueryload described at
+ :ref:`subqueryload_ordering` also don't apply to selectin loading; selectin
+ is always linking directly to a parent primary key and can't really
+ return the wrong result.
+
+* "selectin" loading, unlike joined or subquery eager loading, always emits its
+ SELECT in terms of the immediate parent objects just loaded, and not the
+ original type of object at the top of the chain. So if eager loading many
+ levels deep, "selectin" loading still will not require any JOINs for simple
+ one-to-many or many-to-one relationships. In comparison, joined and
+ subquery eager loading always refer to multiple JOINs up to the original
+ parent.
+
+* The strategy emits a SELECT for up to 500 parent primary key values at a
+ time, as the primary keys are rendered into a large IN expression in the
+ SQL statement. Some databases like Oracle have a hard limit on how large
+ an IN expression can be, and overall the size of the SQL string shouldn't
+ be arbitrarily large.
+
+* As "selectin" loading relies upon IN, for a mapping with composite primary
+ keys, it must use the "tuple" form of IN, which looks like ``WHERE
+ (table.column_a, table.column_b) IN ((?, ?), (?, ?), (?, ?))``. This syntax
+ is not currently supported on SQL Server and for SQLite requires at least
+ version 3.15. There is no special logic in SQLAlchemy to check
+ ahead of time which platforms support this syntax or not; if run against a
+ non-supporting platform, the database will return an error immediately. An
+ advantage to SQLAlchemy just running the SQL out for it to fail is that if a
+ particular database does start supporting this syntax, it will work without
+ any changes to SQLAlchemy (as was the case with SQLite).
+
+In general, "selectin" loading is probably superior to "subquery" eager loading
+in most ways, save for the syntax requirement with composite primary keys
+and possibly that it may emit many SELECT statements for larger result sets.
+As always, developers should spend time looking at the
+statements and results generated by their applications in development to
+check that things are working efficiently.
+
+.. _subquery_eager_loading:
+
+Subquery Eager Loading
+----------------------
+
+.. legacy:: The :func:`_orm.subqueryload` eager loader is mostly legacy
+ at this point, superseded by the :func:`_orm.selectinload` strategy
+ which is of much simpler design, more flexible with features such as
+ :ref:`Yield Per <orm_queryguide_yield_per>`, and emits more efficient SQL
+ statements in most cases. As :func:`_orm.subqueryload` relies upon
+ re-interpreting the original SELECT statement, it may fail to work
+ efficiently when given very complex source queries.
+
+    :func:`_orm.subqueryload` may continue to be useful for the specific
+    case of an eagerly loaded collection for objects that use composite
+    primary keys, on the Microsoft SQL Server backend, which continues to
+    lack support for the "tuple IN" syntax.
+
+Subqueryload eager loading is configured in the same manner as that of
+joined eager loading; for the :paramref:`_orm.relationship.lazy` parameter,
+we would specify ``"subquery"`` rather than ``"joined"``, and for
+the option we use the :func:`.subqueryload` option rather than the
+:func:`_orm.joinedload` option.
+
+The operation of subquery eager loading is to emit a second SELECT statement
+for each relationship to be loaded, across all result objects at once.
+This SELECT statement refers to the original SELECT statement, wrapped
+inside of a subquery, so that we retrieve the same list of primary keys
+for the primary object being returned, then link that to the sum of all
+the collection members to load them at once:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy import select
+ >>> from sqlalchemy.orm import subqueryload
+ >>> stmt = (
+ ... select(User)
+    ...     .options(subqueryload(User.addresses))
+    ...     .filter_by(name="spongebob")
+ ... )
+ >>> results = session.scalars(stmt).all()
+ {opensql}SELECT
+ users.id AS users_id,
+ users.name AS users_name,
+ users.fullname AS users_fullname,
+ users.nickname AS users_nickname
+ FROM users
+ WHERE users.name = ?
+ ('spongebob',)
+ SELECT
+ addresses.id AS addresses_id,
+ addresses.email_address AS addresses_email_address,
+ addresses.user_id AS addresses_user_id,
+ anon_1.users_id AS anon_1_users_id
+ FROM (
+ SELECT users.id AS users_id
+ FROM users
+ WHERE users.name = ?) AS anon_1
+ JOIN addresses ON anon_1.users_id = addresses.user_id
+ ORDER BY anon_1.users_id, addresses.id
+ ('spongebob',)
+
+The subqueryload strategy has many advantages over joined eager loading
+in the area of loading collections. First, it allows the original query
+to proceed entirely unchanged, in particular without introducing a
+LEFT OUTER JOIN that may make it less efficient. Secondly, it allows
+for many collections to be eagerly loaded without producing a single query
+that has many JOINs in it, which can be even less efficient; each relationship
+is loaded in a fully separate query. Finally, because the additional query
+only needs to load the collection items and not the lead object, it can
+use an inner JOIN in all cases for greater query efficiency.
+
+Disadvantages of subqueryload include that the complexity of the original
+query is transferred to the relationship queries, which when combined with the
+use of a subquery, can on some backends in some cases (notably MySQL) produce
+significantly slow queries. Additionally, the subqueryload strategy can only
+load the full contents of all collections at once, and is therefore
+incompatible with the "batched" loading supplied by
+:ref:`Yield Per <orm_queryguide_yield_per>`, both for collection
+and scalar relationships.
+
+The newer style of loading provided by :func:`.selectinload` solves these
+limitations of :func:`.subqueryload`.
+
+.. seealso::
+
+ :ref:`selectin_eager_loading`
+
+
+.. _subqueryload_ordering:
+
+The Importance of Ordering
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A query which makes use of :func:`.subqueryload` in conjunction with a
+limiting modifier such as :meth:`_sql.Select.limit`,
+or :meth:`_sql.Select.offset` should **always** include :meth:`_sql.Select.order_by`
+against unique column(s) such as the primary key, so that the additional queries
+emitted by :func:`.subqueryload` include
+the same ordering as used by the parent query. Without it, there is a chance
+that the inner query could return the wrong rows::
+
+ # incorrect, no ORDER BY
+ stmt = select(User).options(
+        subqueryload(User.addresses)
+    ).limit(1)
+
+ # incorrect if User.name is not unique
+ stmt = select(User).options(
+ subqueryload(User.addresses)
+ ).order_by(User.name).limit(1)
+
+ # correct
+ stmt = select(User).options(
+ subqueryload(User.addresses)
+ ).order_by(User.name, User.id).limit(1)
+
+.. seealso::
+
+ :ref:`faq_subqueryload_limit_sort` - detailed example
+
+
+.. _what_kind_of_loading:
+
+What Kind of Loading to Use?
+-----------------------------
+
+Which type of loading to use typically comes down to optimizing the tradeoff
+between number of SQL executions, complexity of SQL emitted, and amount of
+data fetched.
+
+
+**One to Many / Many to Many Collection** - The :func:`_orm.selectinload`
+strategy is generally the best one to use. It emits an additional SELECT
+that uses as few tables as possible, leaving the original statement unaffected,
+and is most flexible for any kind of
+originating query. Its only major limitation is when using a table with
+composite primary keys on a backend that does not support "tuple IN", which
+currently includes SQL Server and very old SQLite versions; all other included
+backends support it.
+
+**Many to One** - The :func:`_orm.joinedload` strategy is the most general
+purpose strategy. In special cases, the :func:`_orm.immediateload` strategy may
+also be useful, if there are a very small number of potential related values,
+as this strategy will fetch the object from the local :class:`_orm.Session`
+without emitting any SQL if the related object is already present.
+
+
+
+Polymorphic Eager Loading
+-------------------------
+
+Specification of polymorphic options on a per-eager-load basis is supported.
+See the section :ref:`eagerloading_polymorphic_subtypes` for examples
+of the :meth:`.PropComparator.of_type` method in conjunction with the
+:func:`_orm.with_polymorphic` function.
+
+.. _wildcard_loader_strategies:
+
+Wildcard Loading Strategies
+---------------------------
+
+Each of :func:`_orm.joinedload`, :func:`.subqueryload`, :func:`.lazyload`,
+:func:`.selectinload`,
+:func:`.noload`, and :func:`.raiseload` can be used to set the default
+style of :func:`_orm.relationship` loading
+for a particular query, affecting all :func:`_orm.relationship` -mapped
+attributes not otherwise
+specified in the statement. This feature is available by passing
+the string ``'*'`` as the argument to any of these options::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import lazyload
+
+ stmt = select(MyClass).options(lazyload('*'))
+
+Above, the ``lazyload('*')`` option will supersede the ``lazy`` setting
+of all :func:`_orm.relationship` constructs in use for that query,
+except for those which use the ``'dynamic'`` style of loading.
+If some relationships specify
+``lazy='joined'`` or ``lazy='subquery'``, for example,
+using ``lazyload('*')`` will unilaterally
+cause all those relationships to use ``'select'`` loading, e.g. emit a
+SELECT statement when each attribute is accessed.
+
+The option does not supersede loader options stated in the
+query, such as :func:`_orm.joinedload`,
+:func:`.subqueryload`, etc. The query below will still use joined loading
+for the ``widget`` relationship::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import lazyload
+ from sqlalchemy.orm import joinedload
+
+ stmt = select(MyClass).options(
+ lazyload('*'),
+ joinedload(MyClass.widget)
+ )
+
+If multiple ``'*'`` options are passed, the last one overrides
+those previously passed.
+
+.. _orm_queryguide_relationship_per_entity_wildcard:
+
+Per-Entity Wildcard Loading Strategies
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A variant of the wildcard loader strategy is the ability to set the strategy
+on a per-entity basis. For example, if querying for ``User`` and ``Address``,
+we can instruct all relationships on ``Address`` only to use lazy loading
+by first applying the :class:`_orm.Load` object, then specifying the ``*`` as a
+chained option::
+
+ from sqlalchemy import select
+ from sqlalchemy.orm import Load
+
+ stmt = select(User, Address).options(
+ Load(Address).lazyload('*')
+ )
+
+Above, all relationships on ``Address`` will be set to a lazy load.
+
+.. _joinedload_and_join:
+
+.. _contains_eager:
+
+Routing Explicit Joins/Statements into Eagerly Loaded Collections
+-----------------------------------------------------------------
+
+The behavior of :func:`_orm.joinedload()` is such that joins are
+created automatically, using anonymous aliases as targets, the results of which
+are routed into collections and
+scalar references on loaded objects. It is often the case that a query already
+includes the necessary joins which represent a particular collection or scalar
+reference, and the joins added by the joinedload feature are redundant - yet
+you'd still like the collections/references to be populated.
+
+For this SQLAlchemy supplies the :func:`_orm.contains_eager`
+option. This option is used in the same manner as the
+:func:`_orm.joinedload()` option except it is assumed that the
+:class:`_sql.Select` object will explicitly include the appropriate joins,
+typically using methods like :meth:`_sql.Select.join`.
+Below, we specify a join between ``User`` and ``Address``
+and additionally establish this as the basis for eager loading of ``User.addresses``::
+
+ class User(Base):
+ __tablename__ = 'user'
+ id = mapped_column(Integer, primary_key=True)
+ addresses = relationship("Address")
+
+ class Address(Base):
+ __tablename__ = 'address'
+
+ # ...
+
+ from sqlalchemy.orm import contains_eager
+
+ stmt = (
+ select(User).
+ join(User.addresses).
+ options(contains_eager(User.addresses))
+ )
+
+
+If the "eager" portion of the statement is "aliased", the path
+should be specified using :meth:`.PropComparator.of_type`, which allows
+the specific :func:`_orm.aliased` construct to be passed:
+
+.. sourcecode:: python+sql
+
+ # use an alias of the Address entity
+ adalias = aliased(Address)
+
+ # construct a statement which expects the "addresses" results
+
+ stmt = (
+ select(User).
+ outerjoin(User.addresses.of_type(adalias)).
+        options(contains_eager(User.addresses.of_type(adalias)))
+ )
+
+ # get results normally
+ r = session.scalars(stmt).unique().all()
+ {opensql}SELECT
+ users.user_id AS users_user_id,
+ users.user_name AS users_user_name,
+ adalias.address_id AS adalias_address_id,
+ adalias.user_id AS adalias_user_id,
+ adalias.email_address AS adalias_email_address,
+ (...other columns...)
+ FROM users
+    LEFT OUTER JOIN email_addresses AS adalias
+    ON users.user_id = adalias.user_id
+
+The path given as the argument to :func:`.contains_eager` needs
+to be a full path from the starting entity. For example, if we were loading
+``User->orders->Order->items->Item``, the option would be used as::
+
+ stmt = select(User).options(
+ contains_eager(User.orders).
+ contains_eager(Order.items)
+ )
+
+Using contains_eager() to load a custom-filtered collection result
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When we use :func:`.contains_eager`, *we* ourselves are constructing the
+SQL that will be used to populate collections. From this, it naturally follows
+that we can opt to **modify** what values the collection is intended to store,
+by writing our SQL to load a subset of elements for collections or
+scalar attributes.
+
+As an example, we can load a ``User`` object and eagerly load only particular
+addresses into its ``.addresses`` collection by filtering the joined data,
+routing it using :func:`_orm.contains_eager`, also using
+:ref:`orm_queryguide_populate_existing` to ensure any already-loaded collections
+are overwritten::
+
+ stmt = (
+ select(User).
+ join(User.addresses).
+ filter(Address.email_address.like('%@aol.com')).
+ options(contains_eager(User.addresses)).
+ execution_options(populate_existing=True)
+ )
+
+The above query will load only ``User`` objects which contain at
+least one ``Address`` object that contains the substring ``'aol.com'`` in its
+``email_address`` field; the ``User.addresses`` collection will contain **only**
+these ``Address`` entries, and *not* any other ``Address`` entries that are
+in fact associated with the collection.
+
+.. tip:: In all cases, the SQLAlchemy ORM does **not overwrite already loaded
+ attributes and collections** unless told to do so. As there is an
+ :term:`identity map` in use, it is often the case that an ORM query is
+ returning objects that were in fact already present and loaded in memory.
+ Therefore, when using :func:`_orm.contains_eager` to populate a collection
+ in an alternate way, it is usually a good idea to use
+ :ref:`orm_queryguide_populate_existing` as illustrated above so that an
+ already-loaded collection is refreshed with the new data.
+ The ``populate_existing`` option will reset **all** attributes that were
+ already present, including pending changes, so make sure all data is flushed
+ before using it. Using the :class:`_orm.Session` with its default behavior
+ of :ref:`autoflush <session_flushing>` is sufficient.
+
+.. note:: The customized collection we load using :func:`_orm.contains_eager`
+ is not "sticky"; that is, the next time this collection is loaded, it will
+ be loaded with its usual default contents. The collection is subject
+ to being reloaded if the object is expired, which occurs whenever the
+ :meth:`.Session.commit`, :meth:`.Session.rollback` methods are used
+ assuming default session settings, or the :meth:`.Session.expire_all`
+ or :meth:`.Session.expire` methods are used.
+
+
+Relationship Loader API
+-----------------------
+
+.. autofunction:: contains_eager
+
+.. autofunction:: defaultload
+
+.. autofunction:: immediateload
+
+.. autofunction:: joinedload
+
+.. autofunction:: lazyload
+
+.. autoclass:: sqlalchemy.orm.Load
+ :members:
+ :inherited-members: Generative
+
+.. autofunction:: noload
+
+.. autofunction:: raiseload
+
+.. autofunction:: selectinload
+
+.. autofunction:: subqueryload
--- /dev/null
+.. highlight:: pycon+sql
+.. |prev| replace:: :doc:`index`
+.. |next| replace:: :doc:`inheritance`
+
+.. include:: queryguide_nav_include.rst
+
+Writing SELECT statements for ORM Mapped Classes
+================================================
+
+.. admonition:: About this Document
+
+ This section makes use of ORM mappings first illustrated in the
+ :ref:`unified_tutorial`, shown in the section
+ :ref:`tutorial_declaring_mapped_classes`.
+
+ :doc:`View the ORM setup for this page <_plain_setup>`.
+
+
+SELECT statements are produced by the :func:`_sql.select` function which
+returns a :class:`_sql.Select` object. The entities and/or SQL expressions
+to return (i.e. the "columns" clause) are passed positionally to the
+function. From there, additional methods are used to generate the complete
+statement, such as the :meth:`_sql.Select.where` method illustrated below::
+
+ >>> from sqlalchemy import select
+ >>> stmt = select(User).where(User.name == 'spongebob')
+
+Given a completed :class:`_sql.Select` object, in order to execute it within
+the ORM to get rows back, the object is passed to
+:meth:`_orm.Session.execute`, where a :class:`.Result` object is then
+returned::
+
+ >>> result = session.execute(stmt)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ WHERE user_account.name = ?
+ [...] ('spongebob',){stop}
+ >>> for user_obj in result.scalars():
+ ... print(f"{user_obj.name} {user_obj.fullname}")
+ spongebob Spongebob Squarepants
+
+
+.. _orm_queryguide_select_columns:
+
+Selecting ORM Entities and Attributes
+--------------------------------------
+
+The :func:`_sql.select` construct accepts ORM entities, including mapped
+classes as well as class-level attributes representing mapped columns, which
+are converted into :term:`ORM-annotated` :class:`_sql.FromClause` and
+:class:`_sql.ColumnElement` elements at construction time.
+
+A :class:`_sql.Select` object that contains ORM-annotated entities is normally
+executed using a :class:`_orm.Session` object, and not a :class:`_engine.Connection`
+object, so that ORM-related features may take effect, including that
+instances of ORM-mapped objects may be returned. When using the
+:class:`_engine.Connection` directly, result rows will only contain
+column-level data.
+
+Selecting ORM Entities
+^^^^^^^^^^^^^^^^^^^^^^
+
+Below we select from the ``User`` entity, producing a :class:`_sql.Select`
+that selects from the mapped :class:`_schema.Table` to which ``User`` is mapped::
+
+ >>> result = session.execute(select(User).order_by(User.id))
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account ORDER BY user_account.id
+ [...] ()
+
+When selecting from ORM entities, the entity itself is returned in the result
+as a row with a single element, as opposed to a series of individual columns;
+for example above, the :class:`_engine.Result` returns :class:`_engine.Row`
+objects that have just a single element per row, that element holding onto a
+``User`` object::
+
+ >>> result.all()
+ [(User(id=1, name='spongebob', fullname='Spongebob Squarepants'),),
+ (User(id=2, name='sandy', fullname='Sandy Cheeks'),),
+ (User(id=3, name='patrick', fullname='Patrick Star'),),
+ (User(id=4, name='squidward', fullname='Squidward Tentacles'),),
+ (User(id=5, name='ehkrabs', fullname='Eugene H. Krabs'),)]
+
+
+When selecting a list of single-element rows containing ORM entities, it is
+typical to skip the generation of :class:`_engine.Row` objects and instead
+receive ORM entities directly. This is most easily achieved by using the
+:meth:`_orm.Session.scalars` method to execute, rather than the
+:meth:`_orm.Session.execute` method, so that a :class:`.ScalarResult` object
+which yields single elements rather than rows is returned::
+
+ >>> session.scalars(select(User).order_by(User.id)).all()
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account ORDER BY user_account.id
+ [...] ()
+ {stop}[User(id=1, name='spongebob', fullname='Spongebob Squarepants'),
+ User(id=2, name='sandy', fullname='Sandy Cheeks'),
+ User(id=3, name='patrick', fullname='Patrick Star'),
+ User(id=4, name='squidward', fullname='Squidward Tentacles'),
+ User(id=5, name='ehkrabs', fullname='Eugene H. Krabs')]
+
+Calling the :meth:`_orm.Session.scalars` method is equivalent to calling
+upon :meth:`_orm.Session.execute` to receive a :class:`_engine.Result` object,
+then calling upon :meth:`_engine.Result.scalars` to receive a
+:class:`_engine.ScalarResult` object.
+
+
+.. _orm_queryguide_select_multiple_entities:
+
+Selecting Multiple ORM Entities Simultaneously
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The :func:`_sql.select` function accepts any number of ORM classes and/or
+column expressions at once, including that multiple ORM classes may be
+requested. When SELECTing from multiple ORM classes, they are named
+in each result row based on their class name. In the example below,
+the result rows for a SELECT against ``User`` and ``Address`` will
+refer to them under the names ``User`` and ``Address``::
+
+ >>> stmt = (
+ ... select(User, Address).
+ ... join(User.addresses).
+ ... order_by(User.id, Address.id)
+ ... )
+ >>> for row in session.execute(stmt):
+ ... print(f"{row.User.name} {row.Address.email_address}")
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname,
+ address.id AS id_1, address.user_id, address.email_address
+ FROM user_account JOIN address ON user_account.id = address.user_id
+ ORDER BY user_account.id, address.id
+ [...] (){stop}
+ spongebob spongebob@sqlalchemy.org
+ sandy sandy@sqlalchemy.org
+ sandy squirrel@squirrelpower.org
+ patrick pat999@aol.com
+ squidward stentcl@sqlalchemy.org
+
+If we wanted to assign different names to these entities in the rows, we would
+use the :func:`_orm.aliased` construct using the :paramref:`_orm.aliased.name`
+parameter to alias them with an explicit name::
+
+ >>> from sqlalchemy.orm import aliased
+ >>> user_cls = aliased(User, name="user_cls")
+ >>> email_cls = aliased(Address, name="email")
+ >>> stmt = (
+ ... select(user_cls, email_cls).
+ ... join(user_cls.addresses.of_type(email_cls)).
+ ... order_by(user_cls.id, email_cls.id)
+ ... )
+ >>> row = session.execute(stmt).first()
+ {opensql}SELECT user_cls.id, user_cls.name, user_cls.fullname,
+ email.id AS id_1, email.user_id, email.email_address
+ FROM user_account AS user_cls JOIN address AS email
+ ON user_cls.id = email.user_id ORDER BY user_cls.id, email.id
+ [...] ()
+ {stop}>>> print(f"{row.user_cls.name} {row.email.email_address}")
+ spongebob spongebob@sqlalchemy.org
+
+The aliased form above is discussed further at
+:ref:`orm_queryguide_joining_relationships_aliased`.
+
+An existing :class:`_sql.Select` construct may also have ORM classes and/or
+column expressions added to its columns clause using the
+:meth:`_sql.Select.add_columns` method. We can produce the same statement as
+above using this form as well::
+
+ >>> stmt = (
+ ... select(User).
+ ... join(User.addresses).
+ ... add_columns(Address).
+ ... order_by(User.id, Address.id)
+ ... )
+ >>> print(stmt)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname,
+ address.id AS id_1, address.user_id, address.email_address
+ FROM user_account JOIN address ON user_account.id = address.user_id
+ ORDER BY user_account.id, address.id
+
+
+Selecting Individual Attributes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The attributes on a mapped class, such as ``User.name`` and ``Address.email_address``,
+have a similar behavior as that of the entity class itself such as ``User``
+in that they are automatically converted into ORM-annotated Core objects
+when passed to :func:`_sql.select`. They may be used in the same way
+as table columns are used::
+
+ >>> result = session.execute(
+ ... select(User.name, Address.email_address).
+ ... join(User.addresses).
+ ... order_by(User.id, Address.id)
+ ... )
+ {opensql}SELECT user_account.name, address.email_address
+ FROM user_account JOIN address ON user_account.id = address.user_id
+ ORDER BY user_account.id, address.id
+ [...] (){stop}
+
+ORM attributes, themselves known as
+:class:`_orm.InstrumentedAttribute`
+objects, can be used in the same way as any :class:`_sql.ColumnElement`,
+and are delivered in result rows just the same way, such as below
+where we refer to their values by column name within each row::
+
+ >>> for row in result:
+ ... print(f"{row.name} {row.email_address}")
+ spongebob spongebob@sqlalchemy.org
+ sandy sandy@sqlalchemy.org
+ sandy squirrel@squirrelpower.org
+ patrick pat999@aol.com
+ squidward stentcl@sqlalchemy.org
+
+.. _bundles:
+
+Grouping Selected Attributes with Bundles
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The :class:`_orm.Bundle` construct is an extensible ORM-only construct that
+allows sets of column expressions to be grouped in result rows::
+
+ >>> from sqlalchemy.orm import Bundle
+ >>> stmt = select(
+ ... Bundle("user", User.name, User.fullname),
+ ... Bundle("email", Address.email_address)
+ ... ).join_from(User, Address)
+ >>> for row in session.execute(stmt):
+ ... print(f"{row.user.name} {row.user.fullname} {row.email.email_address}")
+ {opensql}SELECT user_account.name, user_account.fullname, address.email_address
+ FROM user_account JOIN address ON user_account.id = address.user_id
+ [...] (){stop}
+ spongebob Spongebob Squarepants spongebob@sqlalchemy.org
+ sandy Sandy Cheeks sandy@sqlalchemy.org
+ sandy Sandy Cheeks squirrel@squirrelpower.org
+ patrick Patrick Star pat999@aol.com
+ squidward Squidward Tentacles stentcl@sqlalchemy.org
+
+The :class:`_orm.Bundle` is potentially useful for creating lightweight views
+and custom column groupings. :class:`_orm.Bundle` may also be subclassed in
+order to return alternate data structures; see
+:meth:`_orm.Bundle.create_row_processor` for an example.
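
As a sketch of such a subclass (the mapping setup below is a simplified, hypothetical one), a :class:`_orm.Bundle` can be made to place a plain dictionary in each result row, rather than a named tuple, by overriding :meth:`_orm.Bundle.create_row_processor`:

```python
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Bundle, Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    fullname = Column(String)


class DictBundle(Bundle):
    """A Bundle whose value in each result row is a plain dict."""

    def create_row_processor(self, query, procs, labels):
        # procs are the per-column extraction callables; labels are the
        # string names of the columns grouped within this bundle
        def proc(row):
            return dict(zip(labels, (p(row) for p in procs)))

        return proc


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="spongebob", fullname="Spongebob Squarepants"))
    session.commit()

    row = session.execute(
        select(DictBundle("user", User.name, User.fullname))
    ).one()

bundle_value = row.user
```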
+
+.. seealso::
+
+ :class:`_orm.Bundle`
+
+ :meth:`_orm.Bundle.create_row_processor`
+
+
+.. _orm_queryguide_orm_aliases:
+
+Selecting ORM Aliases
+^^^^^^^^^^^^^^^^^^^^^
+
+As discussed in the tutorial at :ref:`tutorial_using_aliases`, to create a
+SQL alias of an ORM entity is achieved using the :func:`_orm.aliased`
+construct against a mapped class::
+
+ >>> from sqlalchemy.orm import aliased
+ >>> u1 = aliased(User)
+ >>> print(select(u1).order_by(u1.id))
+ {opensql}SELECT user_account_1.id, user_account_1.name, user_account_1.fullname
+ FROM user_account AS user_account_1 ORDER BY user_account_1.id
+
+As is the case when using :meth:`_schema.Table.alias`, the SQL alias
+is anonymously named. For the case of selecting the entity from a row
+with an explicit name, the :paramref:`_orm.aliased.name` parameter may be
+passed as well::
+
+ >>> from sqlalchemy.orm import aliased
+ >>> u1 = aliased(User, name="u1")
+ >>> stmt = select(u1).order_by(u1.id)
+ >>> row = session.execute(stmt).first()
+ {opensql}SELECT u1.id, u1.name, u1.fullname
+ FROM user_account AS u1 ORDER BY u1.id
+ [...] (){stop}
+ >>> print(f"{row.u1.name}")
+ spongebob
+
+.. seealso::
+
+
+ The :class:`_orm.aliased` construct is central for several use cases,
+ including:
+
+ * making use of subqueries with the ORM; the sections
+ :ref:`orm_queryguide_subqueries` and
+ :ref:`orm_queryguide_join_subqueries` discuss this further.
+ * Controlling the name of an entity in a result set; see
+ :ref:`orm_queryguide_select_multiple_entities` for an example
+ * Joining to the same ORM entity multiple times; see
+ :ref:`orm_queryguide_joining_relationships_aliased` for an example.
+
+.. _orm_queryguide_selecting_text:
+
+Getting ORM Results from Textual Statements
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ORM supports loading of entities from SELECT statements that come from
+other sources. The typical use case is that of a textual SELECT statement,
+which in SQLAlchemy is represented using the :func:`_sql.text` construct. A
+:func:`_sql.text` construct can be augmented with information about the
+ORM-mapped columns that the statement would load; this can then be associated
+with the ORM entity itself so that ORM objects can be loaded based on this
+statement.
+
+Given a textual SQL statement we'd like to load from::
+
+ >>> from sqlalchemy import text
+ >>> textual_sql = text("SELECT id, name, fullname FROM user_account ORDER BY id")
+
+We can add column information to the statement by using the
+:meth:`_sql.TextClause.columns` method; when this method is invoked, the
+:class:`_sql.TextClause` object is converted into a :class:`_sql.TextualSelect`
+object, which takes on a role that is comparable to the :class:`_sql.Select`
+construct. The :meth:`_sql.TextClause.columns` method
+is typically passed :class:`_schema.Column` objects or equivalent, and in this
+case we can make use of the ORM-mapped attributes on the ``User`` class
+directly::
+
+ >>> textual_sql = textual_sql.columns(User.id, User.name, User.fullname)
+
+We now have an ORM-configured SQL construct that, as given, can load the "id",
+"name" and "fullname" columns separately. To use this SELECT statement as a
+source of complete ``User`` entities instead, we can link these columns to a
+regular ORM-enabled
+:class:`_sql.Select` construct using the :meth:`_sql.Select.from_statement`
+method::
+
+ >>> orm_sql = select(User).from_statement(textual_sql)
+ >>> for user_obj in session.execute(orm_sql).scalars():
+ ... print(user_obj)
+ {opensql}SELECT id, name, fullname FROM user_account ORDER BY id
+ [...] (){stop}
+ User(id=1, name='spongebob', fullname='Spongebob Squarepants')
+ User(id=2, name='sandy', fullname='Sandy Cheeks')
+ User(id=3, name='patrick', fullname='Patrick Star')
+ User(id=4, name='squidward', fullname='Squidward Tentacles')
+ User(id=5, name='ehkrabs', fullname='Eugene H. Krabs')
+
+The same :class:`_sql.TextualSelect` object can also be converted into
+a subquery using the :meth:`_sql.TextualSelect.subquery` method,
+and linked to the ``User`` entity using the :func:`_orm.aliased`
+construct, in a similar manner as discussed below in :ref:`orm_queryguide_subqueries`::
+
+ >>> orm_subquery = aliased(User, textual_sql.subquery())
+ >>> stmt = select(orm_subquery)
+ >>> for user_obj in session.execute(stmt).scalars():
+ ... print(user_obj)
+ {opensql}SELECT anon_1.id, anon_1.name, anon_1.fullname
+ FROM (SELECT id, name, fullname FROM user_account ORDER BY id) AS anon_1
+ [...] (){stop}
+ User(id=1, name='spongebob', fullname='Spongebob Squarepants')
+ User(id=2, name='sandy', fullname='Sandy Cheeks')
+ User(id=3, name='patrick', fullname='Patrick Star')
+ User(id=4, name='squidward', fullname='Squidward Tentacles')
+ User(id=5, name='ehkrabs', fullname='Eugene H. Krabs')
+
+The difference between using the :class:`_sql.TextualSelect` directly with
+:meth:`_sql.Select.from_statement` versus making use of :func:`_sql.aliased`
+is that in the former case, no subquery is produced in the resulting SQL.
+This can in some scenarios be advantageous from a performance or complexity
+perspective.
+
+.. _orm_queryguide_subqueries:
+
+Selecting Entities from Subqueries
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The :func:`_orm.aliased` construct discussed in the previous section
+can be used with any :class:`_sql.Subquery` construct that comes from a
+method such as :meth:`_sql.Select.subquery` to link ORM entities to the
+columns returned by that subquery; there must be a **column correspondence**
+relationship between the columns delivered by the subquery and the columns
+to which the entity is mapped, meaning, the subquery needs to be ultimately
+derived from those entities, such as in the example below::
+
+ >>> inner_stmt = select(User).where(User.id < 7).order_by(User.id)
+ >>> subq = inner_stmt.subquery()
+ >>> aliased_user = aliased(User, subq)
+ >>> stmt = select(aliased_user)
+ >>> for user_obj in session.execute(stmt).scalars():
+ ... print(user_obj)
+    {opensql}SELECT anon_1.id, anon_1.name, anon_1.fullname
+ FROM (SELECT user_account.id AS id, user_account.name AS name, user_account.fullname AS fullname
+ FROM user_account
+ WHERE user_account.id < ? ORDER BY user_account.id) AS anon_1
+ [generated in ...] (7,)
+ {stop}User(id=1, name='spongebob', fullname='Spongebob Squarepants')
+ User(id=2, name='sandy', fullname='Sandy Cheeks')
+ User(id=3, name='patrick', fullname='Patrick Star')
+ User(id=4, name='squidward', fullname='Squidward Tentacles')
+ User(id=5, name='ehkrabs', fullname='Eugene H. Krabs')
+
+.. seealso::
+
+ :ref:`tutorial_subqueries_orm_aliased` - in the :ref:`unified_tutorial`
+
+ :ref:`orm_queryguide_join_subqueries`
+
+.. _orm_queryguide_unions:
+
+Selecting Entities from UNIONs and other set operations
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The :func:`_sql.union` and :func:`_sql.union_all` functions are the most
+common set operations, which along with other set operations such as
+:func:`_sql.except_`, :func:`_sql.intersect` and others deliver an object known as
+a :class:`_sql.CompoundSelect`, which is composed of multiple
+:class:`_sql.Select` constructs joined by a set-operation keyword. ORM entities may
+be selected from simple compound selects using the :meth:`_sql.Select.from_statement`
+method illustrated previously at :ref:`orm_queryguide_selecting_text`. In
+this method, the UNION statement is the complete statement that will be
+rendered; no additional criteria can be added after :meth:`_sql.Select.from_statement`
+is used::
+
+ >>> from sqlalchemy import union_all
+ >>> u = union_all(
+ ... select(User).where(User.id < 2),
+ ... select(User).where(User.id == 3)
+ ... ).order_by(User.id)
+ >>> stmt = select(User).from_statement(u)
+ >>> for user_obj in session.execute(stmt).scalars():
+ ... print(user_obj)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ WHERE user_account.id < ? UNION ALL SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ WHERE user_account.id = ? ORDER BY id
+ [generated in ...] (2, 3)
+ {stop}User(id=1, name='spongebob', fullname='Spongebob Squarepants')
+ User(id=3, name='patrick', fullname='Patrick Star')
+
+A :class:`_sql.CompoundSelect` construct can be more flexibly used within
+a query that can be further modified by organizing it into a subquery
+and linking it to an ORM entity using :func:`_orm.aliased`,
+as illustrated previously at :ref:`orm_queryguide_subqueries`. In the
+example below, we first use :meth:`_sql.CompoundSelect.subquery` to create
+a subquery of the UNION ALL statement, and then package that into the
+:func:`_orm.aliased` construct where it can be used like any other mapped
+entity in a :func:`_sql.select` construct, including that we can add filtering
+and order by criteria based on its exported columns::
+
+ >>> subq = union_all(
+ ... select(User).where(User.id < 2),
+ ... select(User).where(User.id == 3)
+ ... ).subquery()
+ >>> user_alias = aliased(User, subq)
+ >>> stmt = select(user_alias).order_by(user_alias.id)
+ >>> for user_obj in session.execute(stmt).scalars():
+ ... print(user_obj)
+ {opensql}SELECT anon_1.id, anon_1.name, anon_1.fullname
+ FROM (SELECT user_account.id AS id, user_account.name AS name, user_account.fullname AS fullname
+ FROM user_account
+ WHERE user_account.id < ? UNION ALL SELECT user_account.id AS id, user_account.name AS name, user_account.fullname AS fullname
+ FROM user_account
+ WHERE user_account.id = ?) AS anon_1 ORDER BY anon_1.id
+ [generated in ...] (2, 3)
+ {stop}User(id=1, name='spongebob', fullname='Spongebob Squarepants')
+ User(id=3, name='patrick', fullname='Patrick Star')
+
+
+.. seealso::
+
+ :ref:`tutorial_orm_union` - in the :ref:`unified_tutorial`
+
+.. _orm_queryguide_joins:
+
+Joins
+-----
+
+The :meth:`_sql.Select.join` and :meth:`_sql.Select.join_from` methods
+are used to construct SQL JOINs against a SELECT statement.
+
+This section will detail ORM use cases for these methods. For a general
+overview of their use from a Core perspective, see :ref:`tutorial_select_join`
+in the :ref:`unified_tutorial`.
+
+The usage of :meth:`_sql.Select.join` in an ORM context for :term:`2.0 style`
+queries is mostly equivalent, minus legacy use cases, to the usage of the
+:meth:`_orm.Query.join` method in :term:`1.x style` queries.
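As a brief, runnable illustration of this equivalence (a sketch assuming a minimal ``User`` / ``Address`` mapping, rather than the full schema used elsewhere in this guide), both styles render the same JOIN:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, select
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship("Address")


class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    user_id = Column(ForeignKey("user_account.id"))


# 2.0 style: the join criteria is stated on the Select construct
stmt = select(User).join(User.addresses)

# legacy 1.x style: the same join expressed on Query
legacy = Session().query(User).join(User.addresses)

print(stmt)
print(legacy)
```

Both statements render ``FROM user_account JOIN address ON user_account.id = address.user_id``; the :class:`_sql.Select` form is the one used throughout this guide.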
+
+.. _orm_queryguide_simple_relationship_join:
+
+Simple Relationship Joins
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Consider a mapping between two classes ``User`` and ``Address``,
+with a relationship ``User.addresses`` representing a collection
+of ``Address`` objects associated with each ``User``. The most
+common usage of :meth:`_sql.Select.join`
+is to create a JOIN along this
+relationship, using the ``User.addresses`` attribute as an indicator
+for how this should occur::
+
+ >>> stmt = select(User).join(User.addresses)
+
+Where above, the call to :meth:`_sql.Select.join` along
+``User.addresses`` will result in SQL approximately equivalent to::
+
+ >>> print(stmt)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account JOIN address ON user_account.id = address.user_id
+
+In the above example we refer to ``User.addresses`` as passed to
+:meth:`_sql.Select.join` as the "on clause", that is, it indicates
+how the "ON" portion of the JOIN should be constructed.
+
+.. tip::
+
+ Note that using :meth:`_sql.Select.join` to JOIN from one entity to another
+ affects the FROM clause of the SELECT statement, but not the columns clause;
+ the SELECT statement in this example will continue to return rows from only
+ the ``User`` entity. To SELECT
+ columns / entities from both ``User`` and ``Address`` at the same time,
+ the ``Address`` entity must also be named in the :func:`_sql.select` function,
+ or added to the :class:`_sql.Select` construct afterwards using the
+ :meth:`_sql.Select.add_columns` method. See the section
+ :ref:`orm_queryguide_select_multiple_entities` for examples of both
+ of these forms.
+
+Chaining Multiple Joins
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+To construct a chain of joins, multiple :meth:`_sql.Select.join` calls may be
+used. The relationship-bound attribute implies both the left and right side of
+the join at once. Consider additional entities ``Order`` and ``Item``, where
+the ``User.orders`` relationship refers to the ``Order`` entity, and the
+``Order.items`` relationship refers to the ``Item`` entity, via an association
+table ``order_items``. Two :meth:`_sql.Select.join` calls will result in
+a JOIN first from ``User`` to ``Order``, and a second from ``Order`` to
+``Item``. However, since ``Order.items`` is a :ref:`many to many <relationships_many_to_many>`
+relationship, it results in two separate JOIN elements, for a total of three
+JOIN elements in the resulting SQL::
+
+ >>> stmt = (
+ ... select(User).
+ ... join(User.orders).
+ ... join(Order.items)
+ ... )
+ >>> print(stmt)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ JOIN user_order ON user_account.id = user_order.user_id
+ JOIN order_items AS order_items_1 ON user_order.id = order_items_1.order_id
+ JOIN item ON item.id = order_items_1.item_id
+
+The order in which each call to the :meth:`_sql.Select.join` method occurs
+is significant only to the degree that the "left" side of what we would like
+to join from needs to be present in the list of FROMs before we indicate a
+new target. :meth:`_sql.Select.join` would not, for example, know how to
+join correctly if we were to specify
+``select(User).join(Order.items).join(User.orders)``, and would raise an
+error. In correct practice, the :meth:`_sql.Select.join` method is invoked
+in such a way that lines up with how we would want the JOIN clauses in SQL
+to be rendered, and each call should represent a clear link from what
+precedes it.
+
+All of the elements that we target in the FROM clause remain available
+as potential points to continue joining FROM. We can continue to add
+other elements to join FROM the ``User`` entity above, for example adding
+on the ``User.addresses`` relationship to our chain of joins::
+
+ >>> stmt = (
+ ... select(User).
+ ... join(User.orders).
+ ... join(Order.items).
+ ... join(User.addresses)
+ ... )
+ >>> print(stmt)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ JOIN user_order ON user_account.id = user_order.user_id
+ JOIN order_items AS order_items_1 ON user_order.id = order_items_1.order_id
+ JOIN item ON item.id = order_items_1.item_id
+ JOIN address ON user_account.id = address.user_id
+
+
+Joins to a Target Entity
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+A second form of :meth:`_sql.Select.join` allows any mapped entity or core
+selectable construct as a target. In this usage, :meth:`_sql.Select.join`
+will attempt to **infer** the ON clause for the JOIN, using the natural foreign
+key relationship between two entities::
+
+ >>> stmt = select(User).join(Address)
+ >>> print(stmt)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account JOIN address ON user_account.id = address.user_id
+
+In the above calling form, :meth:`_sql.Select.join` is called upon to infer
+the "on clause" automatically. This calling form will ultimately raise
+an error if either there are no :class:`_schema.ForeignKeyConstraint` objects set up
+between the two mapped :class:`_schema.Table` constructs, or if there are multiple
+:class:`_schema.ForeignKeyConstraint` linkages between them such that the
+appropriate constraint to use is ambiguous.
+
+.. note:: When making use of :meth:`_sql.Select.join` or :meth:`_sql.Select.join_from`
+ without indicating an ON clause, ORM
+ configured :func:`_orm.relationship` constructs are **not taken into account**.
+ Only the configured :class:`_schema.ForeignKeyConstraint` relationships between
+ the entities at the level of the mapped :class:`_schema.Table` objects are consulted
+ when an attempt is made to infer an ON clause for the JOIN.
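As a sketch of these inference rules, the hypothetical ``Invoice`` mapping below (not part of this guide's schema) has two foreign keys to ``user_account``, so the ON clause cannot be inferred and must be spelled out:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, select
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String)


class Invoice(Base):
    __tablename__ = "invoice"
    id = Column(Integer, primary_key=True)
    # two foreign keys to user_account; neither can be chosen automatically
    billed_user_id = Column(ForeignKey("user_account.id"))
    approved_user_id = Column(ForeignKey("user_account.id"))


# without an ON clause, compiling the statement raises, as the
# appropriate ForeignKeyConstraint to use is ambiguous
try:
    str(select(User).join(Invoice))
    inference_failed = False
except SQLAlchemyError:
    inference_failed = True

# an explicit ON clause resolves the ambiguity
stmt = select(User).join(Invoice, User.id == Invoice.billed_user_id)
print(stmt)
```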
+
+.. _queryguide_join_onclause:
+
+Joins to a Target with an ON Clause
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The third calling form allows both the target entity as well
+as the ON clause to be passed explicitly. An example that includes
+a SQL expression as the ON clause is as follows::
+
+    >>> stmt = select(User).join(Address, User.id == Address.user_id)
+ >>> print(stmt)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account JOIN address ON user_account.id = address.user_id
+
+The expression-based ON clause may also be a :func:`_orm.relationship`-bound
+attribute, in the same way it's used in
+:ref:`orm_queryguide_simple_relationship_join`::
+
+ >>> stmt = select(User).join(Address, User.addresses)
+ >>> print(stmt)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account JOIN address ON user_account.id = address.user_id
+
+The above example seems redundant in that it indicates the target of ``Address``
+in two different ways; however, the utility of this form becomes apparent
+when joining to aliased entities; see the section
+:ref:`orm_queryguide_joining_relationships_aliased` for an example.
+
+.. _orm_queryguide_join_relationship_onclause_and:
+
+.. _orm_queryguide_join_on_augmented:
+
+Combining Relationship with Custom ON Criteria
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ON clause generated by the :func:`_orm.relationship` construct may
+be augmented with additional criteria. This is useful both for
+quick ways to limit the scope of a particular join over a relationship path,
+as well as for cases like configuring loader strategies such as
+:func:`_orm.joinedload` and :func:`_orm.selectinload`.
+The :meth:`_orm.PropComparator.and_`
+method accepts a series of SQL expressions positionally that will be joined
+to the ON clause of the JOIN via AND. For example if we wanted to
+JOIN from ``User`` to ``Address`` but also limit the ON criteria to only certain
+email addresses:
+
+.. sourcecode:: pycon+sql
+
+ >>> stmt = (
+ ... select(User.fullname).
+ ... join(User.addresses.and_(Address.email_address == 'squirrel@squirrelpower.org'))
+ ... )
+ >>> session.execute(stmt).all()
+ {opensql}SELECT user_account.fullname
+ FROM user_account
+ JOIN address ON user_account.id = address.user_id AND address.email_address = ?
+ [...] ('squirrel@squirrelpower.org',){stop}
+ [('Sandy Cheeks',)]
+
+.. seealso::
+
+ The :meth:`_orm.PropComparator.and_` method also works with loader
+ strategies such as :func:`_orm.joinedload` and :func:`_orm.selectinload`.
+ See the section :ref:`loader_option_criteria`.
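A minimal runnable sketch of that combination follows (assuming a pared-down mapping rather than this guide's full schema); the ``and_()`` criteria limits which rows the :func:`_orm.selectinload` places into the ``User.addresses`` collection, for this statement only:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base, relationship, selectinload

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship("Address")


class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    user_id = Column(ForeignKey("user_account.id"))
    email_address = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(
        User(
            name="sandy",
            addresses=[
                Address(email_address="sandy@sqlalchemy.org"),
                Address(email_address="squirrel@squirrelpower.org"),
            ],
        )
    )
    session.commit()

    # the and_() criteria narrows the selectin load to matching rows only;
    # the mapping itself is unchanged
    stmt = select(User).options(
        selectinload(
            User.addresses.and_(
                Address.email_address == "squirrel@squirrelpower.org"
            )
        )
    )
    sandy = session.execute(stmt).scalars().one()
    emails = [a.email_address for a in sandy.addresses]
    print(emails)
```

The loaded collection contains only the matching ``Address``; the restriction applies to this query's load and does not alter the relationship configuration.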
+
+.. _tutorial_joining_relationships_aliased:
+
+.. _orm_queryguide_joining_relationships_aliased:
+
+Using Relationship to join between aliased targets
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When constructing joins using :func:`_orm.relationship`-bound attributes to indicate
+the ON clause, the two-argument syntax illustrated in
+:ref:`queryguide_join_onclause` can be expanded to work with the
+:func:`_orm.aliased` construct, to indicate a SQL alias as the target of a join
+while still making use of the :func:`_orm.relationship`-bound attribute
+to indicate the ON clause, as in the example below, where the ``User``
+entity is joined twice to two different :func:`_orm.aliased` constructs
+against the ``Address`` entity::
+
+ >>> address_alias_1 = aliased(Address)
+ >>> address_alias_2 = aliased(Address)
+ >>> stmt = (
+ ... select(User).
+ ... join(address_alias_1, User.addresses).
+ ... where(address_alias_1.email_address == 'patrick@aol.com').
+ ... join(address_alias_2, User.addresses).
+ ... where(address_alias_2.email_address == 'patrick@gmail.com')
+ ... )
+ >>> print(stmt)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ JOIN address AS address_1 ON user_account.id = address_1.user_id
+ JOIN address AS address_2 ON user_account.id = address_2.user_id
+ WHERE address_1.email_address = :email_address_1
+ AND address_2.email_address = :email_address_2
+
+The same pattern may be expressed more succinctly using the
+modifier :meth:`_orm.PropComparator.of_type`, which may be applied to the
+:func:`_orm.relationship`-bound attribute, passing along the target entity
+in order to indicate the target
+in one step. The example below uses :meth:`_orm.PropComparator.of_type`
+to produce the same SQL statement as the one just illustrated::
+
+ >>> print(
+ ... select(User).
+ ... join(User.addresses.of_type(address_alias_1)).
+ ... where(address_alias_1.email_address == 'patrick@aol.com').
+ ... join(User.addresses.of_type(address_alias_2)).
+ ... where(address_alias_2.email_address == 'patrick@gmail.com')
+ ... )
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ JOIN address AS address_1 ON user_account.id = address_1.user_id
+ JOIN address AS address_2 ON user_account.id = address_2.user_id
+ WHERE address_1.email_address = :email_address_1
+ AND address_2.email_address = :email_address_2
+
+
+To make use of a :func:`_orm.relationship` to construct a join **from** an
+aliased entity, the attribute is available from the :func:`_orm.aliased`
+construct directly::
+
+ >>> user_alias_1 = aliased(User)
+ >>> print(
+ ... select(user_alias_1.name).
+ ... join(user_alias_1.addresses)
+ ... )
+ {opensql}SELECT user_account_1.name
+ FROM user_account AS user_account_1
+ JOIN address ON user_account_1.id = address.user_id
+
+
+
+.. _orm_queryguide_join_subqueries:
+
+Joining to Subqueries
+^^^^^^^^^^^^^^^^^^^^^
+
+The target of a join may be any "selectable" entity, which includes
+subqueries. When using the ORM, it is typical
+that these targets are stated in terms of an
+:func:`_orm.aliased` construct, but this is not strictly required, particularly
+if the joined entity is not being returned in the results. For example, to join from the
+``User`` entity to the ``Address`` entity, where the ``Address`` entity
+is represented as a row-limited subquery, we first construct a :class:`_sql.Subquery`
+object using :meth:`_sql.Select.subquery`, which may then be used as the
+target of the :meth:`_sql.Select.join` method::
+
+ >>> subq = (
+ ... select(Address).
+ ... where(Address.email_address == 'pat999@aol.com').
+ ... subquery()
+ ... )
+ >>> stmt = select(User).join(subq, User.id == subq.c.user_id)
+ >>> print(stmt)
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ JOIN (SELECT address.id AS id,
+ address.user_id AS user_id, address.email_address AS email_address
+ FROM address
+ WHERE address.email_address = :email_address_1) AS anon_1
+ ON user_account.id = anon_1.user_id{stop}
+
+The above SELECT statement when invoked via :meth:`_orm.Session.execute` will
+return rows that contain ``User`` entities, but not ``Address`` entities. In
+order to include ``Address`` entities in the set of entities that would be
+returned in result sets, we construct an :func:`_orm.aliased` object against
+the ``Address`` entity and :class:`.Subquery` object. We also may wish to apply
+a name to the :func:`_orm.aliased` construct, such as ``"address"`` used below,
+so that we can refer to it by name in the result row::
+
+ >>> address_subq = aliased(Address, subq, name="address")
+ >>> stmt = select(User, address_subq).join(address_subq)
+ >>> for row in session.execute(stmt):
+ ... print(f"{row.User} {row.address}")
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname,
+ anon_1.id AS id_1, anon_1.user_id, anon_1.email_address
+ FROM user_account
+ JOIN (SELECT address.id AS id,
+ address.user_id AS user_id, address.email_address AS email_address
+ FROM address
+ WHERE address.email_address = ?) AS anon_1 ON user_account.id = anon_1.user_id
+ [...] ('pat999@aol.com',){stop}
+ User(id=3, name='patrick', fullname='Patrick Star') Address(id=4, email_address='pat999@aol.com')
+
+Joining to Subqueries along Relationship paths
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The subquery form illustrated in the previous section
+may be expressed with more specificity using a
+:func:`_orm.relationship`-bound attribute using one of the forms indicated at
+:ref:`orm_queryguide_joining_relationships_aliased`. For example, to create the
+same join while ensuring the join is along that of a particular
+:func:`_orm.relationship`, we may use the
+:meth:`_orm.PropComparator.of_type` method, passing the :func:`_orm.aliased`
+construct containing the :class:`.Subquery` object that's the target
+of the join::
+
+ >>> address_subq = aliased(Address, subq, name="address")
+ >>> stmt = select(User, address_subq).join(User.addresses.of_type(address_subq))
+ >>> for row in session.execute(stmt):
+ ... print(f"{row.User} {row.address}")
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname,
+ anon_1.id AS id_1, anon_1.user_id, anon_1.email_address
+ FROM user_account
+ JOIN (SELECT address.id AS id,
+ address.user_id AS user_id, address.email_address AS email_address
+ FROM address
+ WHERE address.email_address = ?) AS anon_1 ON user_account.id = anon_1.user_id
+ [...] ('pat999@aol.com',){stop}
+ User(id=3, name='patrick', fullname='Patrick Star') Address(id=4, email_address='pat999@aol.com')
+
+Subqueries that Refer to Multiple Entities
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A subquery that contains columns spanning more than one ORM entity may be
+applied to more than one :func:`_orm.aliased` construct at once, and
+used in the same :class:`.Select` construct in terms of each entity separately.
+The rendered SQL will continue to treat all such :func:`_orm.aliased`
+constructs as the same subquery; however, from the ORM / Python perspective
+the different return values and object attributes can be referred to
+by using the appropriate :func:`_orm.aliased` construct.
+
+Given for example a subquery that refers to both ``User`` and ``Address``::
+
+ >>> user_address_subq = (
+ ... select(User.id, User.name, User.fullname, Address.id, Address.email_address).
+ ... join_from(User, Address).
+ ... where(Address.email_address.in_(['pat999@aol.com', 'squirrel@squirrelpower.org'])).
+ ... subquery()
+ ... )
+
+We can create :func:`_orm.aliased` constructs against both ``User`` and
+``Address`` that each refer to the same object::
+
+ >>> user_alias = aliased(User, user_address_subq, name="user")
+ >>> address_alias = aliased(Address, user_address_subq, name="address")
+
+A :class:`.Select` construct selecting from both entities will render the
+subquery once, but in a result-row context can return objects of both
+``User`` and ``Address`` classes at the same time::
+
+ >>> stmt = select(user_alias, address_alias).where(user_alias.name == 'sandy')
+ >>> for row in session.execute(stmt):
+ ... print(f"{row.user} {row.address}")
+ {opensql}SELECT anon_1.id, anon_1.name, anon_1.fullname, anon_1.id_1, anon_1.email_address
+ FROM (SELECT user_account.id AS id, user_account.name AS name,
+ user_account.fullname AS fullname, address.id AS id_1,
+ address.email_address AS email_address
+ FROM user_account JOIN address ON user_account.id = address.user_id
+ WHERE address.email_address IN (?, ?)) AS anon_1
+ WHERE anon_1.name = ?
+ [...] ('pat999@aol.com', 'squirrel@squirrelpower.org', 'sandy'){stop}
+ User(id=2, name='sandy', fullname='Sandy Cheeks') Address(id=3, email_address='squirrel@squirrelpower.org')
+
+
+.. _orm_queryguide_select_from:
+
+Setting the leftmost FROM clause in a join
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In cases where the left side of the current state of
+:class:`_sql.Select` is not in line with what we want to join from,
+the :meth:`_sql.Select.join_from` method may be used::
+
+ >>> stmt = select(Address).join_from(User, User.addresses).where(User.name == 'sandy')
+ >>> print(stmt)
+ SELECT address.id, address.user_id, address.email_address
+ FROM user_account JOIN address ON user_account.id = address.user_id
+ WHERE user_account.name = :name_1
+
+The :meth:`_sql.Select.join_from` method accepts two or three arguments, either
+in the form ``(<join from>, <onclause>)``, or ``(<join from>, <join to>,
+[<onclause>])``::
+
+ >>> stmt = select(Address).join_from(User, Address).where(User.name == 'sandy')
+ >>> print(stmt)
+ SELECT address.id, address.user_id, address.email_address
+ FROM user_account JOIN address ON user_account.id = address.user_id
+ WHERE user_account.name = :name_1
+
+To set up the initial FROM clause for a SELECT such that :meth:`_sql.Select.join`
+can be used subsequently, the :meth:`_sql.Select.select_from` method may also
+be used::
+
+ >>> stmt = select(Address).select_from(User).join(Address).where(User.name == 'sandy')
+ >>> print(stmt)
+ SELECT address.id, address.user_id, address.email_address
+ FROM user_account JOIN address ON user_account.id = address.user_id
+ WHERE user_account.name = :name_1
+
+.. tip::
+
+ The :meth:`_sql.Select.select_from` method does not actually have the
+ final say on the order of tables in the FROM clause. If the statement
+ also refers to a :class:`_sql.Join` construct that refers to existing
+ tables in a different order, the :class:`_sql.Join` construct takes
+ precedence. When we use methods like :meth:`_sql.Select.join`
+ and :meth:`_sql.Select.join_from`, these methods are ultimately creating
+ such a :class:`_sql.Join` object. Therefore we can see the contents
+ of :meth:`_sql.Select.select_from` being overridden in a case like this::
+
+ >>> stmt = select(Address).select_from(User).join(Address.user).where(User.name == 'sandy')
+ >>> print(stmt)
+ SELECT address.id, address.user_id, address.email_address
+ FROM address JOIN user_account ON user_account.id = address.user_id
+ WHERE user_account.name = :name_1
+
+ Where above, we see that the FROM clause is ``address JOIN user_account``,
+ even though we stated ``select_from(User)`` first. Because of the
+ ``.join(Address.user)`` method call, the statement is ultimately equivalent
+ to the following::
+
+ >>> from sqlalchemy.sql import join
+ >>>
+ >>> user_table = User.__table__
+ >>> address_table = Address.__table__
+ >>>
+ >>> j = address_table.join(user_table, user_table.c.id == address_table.c.user_id)
+ >>> stmt = (
+ ... select(address_table).select_from(user_table).select_from(j).
+ ... where(user_table.c.name == 'sandy')
+ ... )
+ >>> print(stmt)
+ SELECT address.id, address.user_id, address.email_address
+ FROM address JOIN user_account ON user_account.id = address.user_id
+ WHERE user_account.name = :name_1
+
+ The :class:`_sql.Join` construct above is added as another entry in the
+ :meth:`_sql.Select.select_from` list which supersedes the previous entry.
+
+
+.. _orm_queryguide_relationship_operators:
+
+
+Relationship WHERE Operators
+----------------------------
+
+
+Besides the use of :func:`_orm.relationship` constructs within the
+:meth:`.Select.join` and :meth:`.Select.join_from` methods,
+:func:`_orm.relationship` also plays a role in helping to construct
+SQL expressions that are typically used in the WHERE clause, applied via
+the :meth:`.Select.where` method.
+
+
+.. _orm_queryguide_relationship_exists:
+
+.. _tutorial_relationship_exists:
+
+EXISTS forms: has() / any()
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The :class:`_sql.Exists` construct was first introduced in the
+:ref:`unified_tutorial` in the section :ref:`tutorial_exists`. This object
+is used to render the SQL EXISTS keyword in conjunction with a
+scalar subquery. The :func:`_orm.relationship` construct provides
+helper methods that may be used to generate some common EXISTS-style
+queries in terms of the relationship.
+
+For a one-to-many relationship such as ``User.addresses``, an EXISTS against
+the ``address`` table that correlates back to the ``user_account`` table
+can be produced using :meth:`_orm.PropComparator.any`. This method accepts
+an optional WHERE criteria to limit the rows matched by the subquery:
+
+.. sourcecode:: pycon+sql
+
+ >>> stmt = (
+ ... select(User.fullname).
+ ... where(User.addresses.any(Address.email_address == 'squirrel@squirrelpower.org'))
+ ... )
+ >>> session.execute(stmt).all()
+ {opensql}SELECT user_account.fullname
+ FROM user_account
+ WHERE EXISTS (SELECT 1
+ FROM address
+ WHERE user_account.id = address.user_id AND address.email_address = ?)
+ [...] ('squirrel@squirrelpower.org',){stop}
+ [('Sandy Cheeks',)]
+
+As EXISTS tends to be more efficient for negative lookups, a common query
+is to locate entities where there are no related entities present. This
+is expressed succinctly using a phrase such as ``~User.addresses.any()``, to select
+for ``User`` entities that have no related ``Address`` rows:
+
+.. sourcecode:: pycon+sql
+
+ >>> stmt = (
+ ... select(User.fullname).
+ ... where(~User.addresses.any())
+ ... )
+ >>> session.execute(stmt).all()
+ {opensql}SELECT user_account.fullname
+ FROM user_account
+ WHERE NOT (EXISTS (SELECT 1
+ FROM address
+ WHERE user_account.id = address.user_id))
+ [...] (){stop}
+ [('Eugene H. Krabs',)]
+
+The :meth:`_orm.PropComparator.has` method works in mostly the same way as
+:meth:`_orm.PropComparator.any`, except that it's used for many-to-one
+relationships, such as if we wanted to locate all ``Address`` objects
+which belonged to "sandy":
+
+.. sourcecode:: pycon+sql
+
+ >>> stmt = (
+ ... select(Address.email_address).
+ ... where(Address.user.has(User.name=="sandy"))
+ ... )
+ >>> session.execute(stmt).all()
+ {opensql}SELECT address.email_address
+ FROM address
+ WHERE EXISTS (SELECT 1
+ FROM user_account
+ WHERE user_account.id = address.user_id AND user_account.name = ?)
+ [...] ('sandy',){stop}
+ [('sandy@sqlalchemy.org',), ('squirrel@squirrelpower.org',)]
+
+.. _orm_queryguide_relationship_common_operators:
+
+Relationship Instance Comparison Operators
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The :func:`_orm.relationship`-bound attribute also offers a few SQL construction
+implementations that are geared towards filtering a :func:`_orm.relationship`-bound
+attribute in terms of a specific instance of a related object, which can unpack
+the appropriate attribute values from a given :term:`persistent` (or less
+commonly a :term:`detached`) object instance and construct WHERE criteria
+in terms of the target :func:`_orm.relationship`.
+
+* **many to one equals comparison** - a specific object instance can be
+  compared to a many-to-one relationship, to select rows where the
+ foreign key of the target entity matches the primary key value of the
+ object given::
+
+ >>> user_obj = session.get(User, 1)
+ SELECT ...
+ >>> print(select(Address).where(Address.user == user_obj))
+ {opensql}SELECT address.id, address.user_id, address.email_address
+ FROM address
+ WHERE :param_1 = address.user_id
+
+ ..
+
+* **many to one not equals comparison** - the not equals operator may also
+ be used::
+
+ >>> print(select(Address).where(Address.user != user_obj))
+ {opensql}SELECT address.id, address.user_id, address.email_address
+ FROM address
+ WHERE address.user_id != :user_id_1 OR address.user_id IS NULL
+
+ ..
+
+* **object is contained in a one-to-many collection** - this is essentially
+  the one-to-many version of the "equals" comparison; it selects rows where the
+  primary key equals the value of the foreign key in a related object::
+
+ >>> address_obj = session.get(Address, 1)
+ SELECT ...
+ >>> print(select(User).where(User.addresses.contains(address_obj)))
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
+ FROM user_account
+ WHERE user_account.id = :param_1
+
+ ..
+
+* **An object has a particular parent from a one-to-many perspective** - the
+ :func:`_orm.with_parent` function produces a comparison that returns rows
+  which are referred to by a given parent; this is essentially the
+  same as using the ``==`` operator with the many-to-one side::
+
+ >>> from sqlalchemy.orm import with_parent
+ >>> print(select(Address).where(with_parent(user_obj, User.addresses)))
+ {opensql}SELECT address.id, address.user_id, address.email_address
+ FROM address
+ WHERE :param_1 = address.user_id
+
+
backref=backref('parent', remote_side=[id])
)
-There are several examples included with SQLAlchemy illustrating
-self-referential strategies; these include :ref:`examples_adjacencylist` and
-:ref:`examples_xmlpersistence`.
+.. seealso::
+
+ :ref:`examples_adjacencylist` - working example
Composite Adjacency Lists
~~~~~~~~~~~~~~~~~~~~~~~~~
AND node_1.data = ?
['subchild1', 'child2']
-For an example of using :func:`_orm.aliased` to join across an arbitrarily long
-chain of self-referential nodes, see :ref:`examples_xmlpersistence`.
.. _self_referential_eager_loading:
Session Basics
==============
+
What does the Session do ?
-==========================
+--------------------------
In the most general sense, the :class:`~.Session` establishes all conversations
with the database and represents a "holding zone" for all the objects which
.. _session_basics:
Basics of Using a Session
-=========================
+-------------------------
The most basic :class:`.Session` use patterns are presented here.
.. _session_getting:
Opening and Closing a Session
------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :class:`_orm.Session` may be constructed on its own or by using the
:class:`_orm.sessionmaker` class. It typically is passed a single
.. _session_begin_commit_rollback_block:
Framing out a begin / commit / rollback block
------------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We may also enclose the :meth:`_orm.Session.commit` call and the overall
"framing" of the transaction within a context manager for those cases where
# outer context calls session.close()
Using a sessionmaker
---------------------
+~~~~~~~~~~~~~~~~~~~~
The purpose of :class:`_orm.sessionmaker` is to provide a factory for
:class:`_orm.Session` objects with a fixed configuration. As it is typical
:class:`_orm.Session`
-.. _session_querying_1x:
-
-Querying (1.x Style)
---------------------
-
-The :meth:`~.Session.query` function takes one or more
-**entities** and returns a new :class:`~sqlalchemy.orm.query.Query` object which
-will issue mapper queries within the context of this Session. By
-"entity" we refer to a mapped class, an attribute of a mapped class, or
-other ORM constructs such as an :func:`_orm.aliased` construct::
-
- # query from a class
- results = session.query(User).filter_by(name='ed').all()
-
- # query with multiple classes, returns tuples
- results = session.query(User, Address).join('addresses').filter_by(name='ed').all()
-
- # query using orm-columns, also returns tuples
- results = session.query(User.name, User.fullname).all()
-
-When ORM objects are returned in results, they are also stored in the identity
-map. When an incoming database row has a primary key that matches an object
-which is already present, the same object is returned, and those attributes
-of the object which already have a value are not re-populated.
-
-The :class:`_orm.Session` automatically expires all instances along transaction
-boundaries (i.e. when the current transaction is committed or rolled back) so
-that with a normally isolated transaction, data will refresh itself when a new
-transaction begins.
-
-The :class:`_query.Query` object is introduced in great detail in
-:ref:`ormtutorial_toplevel`, and further documented in
-:ref:`query_api_toplevel`.
-
-.. seealso::
-
- :ref:`ormtutorial_toplevel`
-
- :meth:`_orm.Session.query`
-
- :ref:`query_api_toplevel`
.. _session_querying_20:
-Querying (2.0 style)
---------------------
+Querying
+~~~~~~~~
-.. versionadded:: 1.4
+The primary means of querying is to make use of the :func:`_sql.select`
+construct to create a :class:`_sql.Select` object, which is then executed to
+return a result using methods such as :meth:`_orm.Session.execute` and
+:meth:`_orm.Session.scalars`. Results are then returned in terms of
+:class:`_result.Result` objects, including sub-variants such as
+:class:`_result.ScalarResult`.
-SQLAlchemy 2.0 will standardize the production of SELECT statements across both
-Core and ORM by making direct use of the :class:`_sql.Select` object within the
-ORM, removing the need for there to be a separate :class:`_orm.Query`
-object. This mode of operation is available in SQLAlchemy 1.4 right now to
-support applications that will be migrating to 2.0. The :class:`_orm.Session`
-must be instantiated with the
-:paramref:`_orm.Session.future` flag set to ``True``; from that point on the
-:meth:`_orm.Session.execute` method will return ORM results via the
-standard :class:`_engine.Result` object when invoking :func:`_sql.select`
-statements that use ORM entities::
+A complete guide to SQLAlchemy ORM querying can be found at
+:ref:`queryguide_toplevel`. Some brief examples follow::
from sqlalchemy import select
from sqlalchemy.orm import Session
- session = Session(engine, future=True)
-
- # query from a class
- statement = select(User).filter_by(name="ed")
-
- # list of first element of each row (i.e. User objects)
- result = session.execute(statement).scalars().all()
+ with Session(engine) as session:
+ # query for ``User`` objects
+ statement = select(User).filter_by(name="ed")
- # query with multiple classes
- statement = select(User, Address).join('addresses').filter_by(name='ed')
+ # list of ``User`` objects
+ user_obj = session.scalars(statement).all()
- # list of tuples
- result = session.execute(statement).all()
- # query with ORM columns
- statement = select(User.name, User.fullname)
+ # query for individual columns
+ statement = select(User.name, User.fullname)
- # list of tuples
- result = session.execute(statement).all()
+ # list of Row objects
+ rows = session.execute(statement).all()
-It's important to note that while methods of :class:`_query.Query` such as
-:meth:`_query.Query.all` and :meth:`_query.Query.one` will return instances
-of ORM mapped objects directly in the case that only a single complete
-entity were requested, the :class:`_engine.Result` object returned
-by :meth:`_orm.Session.execute` will always deliver rows (named tuples)
-by default; this is so that results against single or multiple ORM objects,
-columns, tables, etc. may all be handled identically.
+.. versionchanged:: 2.0
-If only one ORM entity was queried, the rows returned will have exactly one
-column, consisting of the ORM-mapped object instance for each row. To convert
-these rows into object instances without the tuples, the
-:meth:`_engine.Result.scalars` method is used to first apply a "scalars" filter
-to the result; then the :class:`_engine.Result` can be iterated or deliver rows
-via standard methods such as :meth:`_engine.Result.all`,
-:meth:`_engine.Result.first`, etc.
+ "2.0" style querying is now standard. See
+ :ref:`migration_20_query_usage` for migration notes from the 1.x series.
.. seealso::
- :ref:`migration_20_toplevel`
+ :ref:`queryguide_toplevel`
Adding New or Existing Items
-----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:meth:`~.Session.add` is used to place instances in the
session. For :term:`transient` (i.e. brand new) instances, this will have the effect
Deleting
---------
+~~~~~~~~
The :meth:`~.Session.delete` method places an instance
into the Session's list of objects to be marked as deleted::
.. _session_flushing:
Flushing
---------
+~~~~~~~~
When the :class:`~sqlalchemy.orm.session.Session` is used with its default
configuration, the flush step is nearly always done transparently.
.. _session_get:
Get by Primary Key
-------------------
+~~~~~~~~~~~~~~~~~~
As the :class:`_orm.Session` makes use of an :term:`identity map` which refers
to current in-memory objects by primary key, the :meth:`_orm.Session.get`
.. _session_expiring:
Expiring / Refreshing
----------------------
+~~~~~~~~~~~~~~~~~~~~~
An important consideration that will often come up when using the
:class:`_orm.Session` is that of dealing with the state that is present on
-.. _orm_expression_update_delete:
-
UPDATE and DELETE with arbitrary WHERE clause
----------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The sections above on :meth:`_orm.Session.flush` and :meth:`_orm.Session.delete`
-detail how rows can be inserted, updated and deleted in the database,
-based on primary key identities that are referred towards by mapped Python
-objects in the application. The :class:`_orm.Session` can also emit UPDATE
-and DELETE statements with arbitrary WHERE clauses as well, and at the same
-time refresh locally present objects which match those rows.
+SQLAlchemy 2.0 includes enhanced capabilities for emitting several varieties
+of ORM-enabled INSERT, UPDATE and DELETE statements. See
+:doc:`queryguide/dml` for complete documentation.
-To emit an ORM-enabled UPDATE, :meth:`_orm.Session.execute` is used with the
-Core :class:`_sql.Update` construct::
-
- from sqlalchemy import update
-
- stmt = update(User).where(User.name == "squidward").values(name="spongebob").\
- execution_options(synchronize_session="fetch")
-
- result = session.execute(stmt)
-
-Above, an UPDATE will be emitted against all rows that match the name
-"squidward" and be updated to the name "spongebob". The
-special execution option ``synchronize_session`` referring to
-"fetch" indicates the list of affected primary keys should be fetched either
-via a separate SELECT statement or via RETURNING if the backend database supports it;
-objects locally present in memory will be updated in memory based on these
-primary key identities.
-
-The result object returned is an instance of :class:`_result.CursorResult`; to
-retrieve the number of rows matched by any UPDATE or DELETE statement, use
-:attr:`_result.CursorResult.rowcount`::
-
- num_rows_matched = result.rowcount
-
-DELETEs work in the same way as UPDATE except there is no "values / set"
-clause established. When synchronize_session is used, matching objects
-within the :class:`_orm.Session` will be marked as deleted and expunged.
-
-ORM-enabled delete::
-
- from sqlalchemy import delete
-
- stmt = delete(User).where(User.name == "squidward").execution_options(synchronize_session="fetch")
-
- session.execute(stmt)
-
-.. _orm_expression_update_delete_sync:
-
-Selecting a Synchronization Strategy
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-With both the 1.x and 2.0 form of ORM-enabled updates and deletes, the following
-values for ``synchronize_session`` are supported:
-
-* ``'auto'`` - this is the default. The ``'fetch'`` strategy will be used on
- backends that support RETURNING, which includes all SQLAlchemy-native drivers
- except for MySQL. If RETURNING is not supported, the ``'evaluate'``
- strategy will be used instead.
-
- .. versionchanged:: 2.0 Added the ``'auto'`` synchronization strategy. As
- most backends now support RETURNING, selecting ``'fetch'`` for these
- backends specifically is the more efficient and error-free default for
- these backends. The MySQL backend as well as third party backends without
- RETURNING support will continue to use ``'evaluate'`` by default.
-
-* ``False`` - don't synchronize the session. This option is the most
- efficient and is reliable once the session is expired, which
- typically occurs after a commit(), or explicitly using
- expire_all(). Before the expiration, objects that were updated or deleted
- in the database may still
- remain in the session with stale values, which
- can lead to confusing results.
-
-* ``'fetch'`` - Retrieves the primary key identity of affected rows by either
- performing a SELECT before the UPDATE or DELETE, or by using RETURNING if the
- database supports it, so that in-memory objects which are affected by the
- operation can be refreshed with new values (updates) or expunged from the
- :class:`_orm.Session` (deletes). Note that this synchronization strategy is
- not available if the given :func:`_dml.update` or :func:`_dml.delete`
- construct specifies columns for :meth:`_dml.UpdateBase.returning` explicitly.
-
-* ``'evaluate'`` - Evaluate the WHERE criteria given in the UPDATE or DELETE
- statement in Python, to locate matching objects within the
- :class:`_orm.Session`. This approach does not add any round trips and in
- the absence of RETURNING support is more efficient. For UPDATE or DELETE
- statements with complex criteria, the ``'evaluate'`` strategy may not be
- able to evaluate the expression in Python and will raise an error. If
- this occurs, use the ``'fetch'`` strategy for the operation instead.
-
- .. tip::
-
- If a SQL expression makes use of custom operators using the
- :meth:`_sql.Operators.op` or :class:`_sql.custom_op` feature, the
- :paramref:`_sql.Operators.op.python_impl` parameter may be used to indicate
- a Python function that will be used by the ``"evaluate"`` synchronization
- strategy.
-
- .. versionadded:: 2.0
-
- .. warning::
-
- The ``"evaluate"`` strategy should be avoided if an UPDATE operation is
- to run on a :class:`_orm.Session` that has many objects which have
- been expired, because it will necessarily need to refresh those objects
- as they are located which will emit a SELECT for each one. The
- :class:`_orm.Session` may have expired objects if it is being used
- across multiple :meth:`_orm.Session.commit` calls and the
- :paramref:`_orm.Session.expire_on_commit` flag is at its default
- value of ``True``.
-
-
-.. warning:: **Additional Caveats for ORM-enabled updates and deletes**
-
- The ORM-enabled UPDATE and DELETE features bypass ORM unit-of-work
- automation in favor being able to emit a single UPDATE or DELETE statement
- that matches multiple rows at once without complexity.
-
- * The operations do not offer in-Python cascading of
- relationships - it is assumed that ON UPDATE CASCADE and/or
- ON DELETE CASCADE is
- configured for any foreign key references which require
- it, otherwise the database may emit an integrity
- violation if foreign key references are being enforced.
-
- * After the UPDATE or DELETE, dependent objects in the
- :class:`.Session` which were impacted by an ON UPDATE CASCADE or ON
- DELETE CASCADE on related tables may not contain the current state;
- this issue is resolved once the :class:`.Session` is expired, which
- normally occurs upon :meth:`.Session.commit` or can be forced by
- using
- :meth:`.Session.expire_all`.
+.. seealso::
- * The ``'fetch'`` strategy, when run on a database that does not support
- RETURNING such as MySQL or SQLite, results in an additional SELECT
- statement emitted which may reduce performance. Use SQL echoing when
- developing to evaluate the impact of SQL emitted.
+ :doc:`queryguide/dml`
- * ORM-enabled UPDATEs and DELETEs do not handle joined table inheritance
- automatically. If the operation is against multiple tables, typically
- individual UPDATE / DELETE statements against the individual tables
- should be used. Some databases support multiple table UPDATEs.
- Similar guidelines as those detailed at :ref:`tutorial_update_from`
- may be applied.
-
- * The WHERE criteria needed in order to limit the polymorphic identity to
- specific subclasses for single-table-inheritance mappings **is included
- automatically** . This only applies to a subclass mapper that has no
- table of its own.
-
- .. versionchanged:: 1.4 ORM updates/deletes now automatically
- accommodate for the WHERE criteria added for single-inheritance
- mappings.
-
- * The :func:`_orm.with_loader_criteria` option **is supported** by ORM
- update and delete operations; criteria here will be added to that of the
- UPDATE or DELETE statement being emitted, as well as taken into account
- during the "synchronize" process.
-
- * In order to intercept ORM-enabled UPDATE and DELETE operations with event
- handlers, use the :meth:`_orm.SessionEvents.do_orm_execute` event.
-
-
-Selecting ORM Objects Inline with UPDATE.. RETURNING or INSERT..RETURNING
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-This section has moved. See :ref:`orm_dml_returning_objects`.
+ :ref:`orm_queryguide_update_delete_where`
.. _session_autobegin:
Auto Begin
-----------
+~~~~~~~~~~
.. versionadded:: 1.4
.. _session_committing:
Committing
-----------
+~~~~~~~~~~
:meth:`~.Session.commit` is used to commit the current
transaction. At its core this indicates that it emits ``COMMIT`` on
.. _session_rollback:
Rolling Back
-------------
+~~~~~~~~~~~~
:meth:`~.Session.rollback` rolls back the current transaction, if any.
When there is no transaction in place, the method passes silently.
.. _session_closing:
Closing
--------
+~~~~~~~
The :meth:`~.Session.close` method issues a :meth:`~.Session.expunge_all` which
removes all ORM-mapped objects from the session, and :term:`releases` any
.. _session_faq:
Session Frequently Asked Questions
-==================================
+----------------------------------
By this point, many users already have questions about sessions.
This section presents a mini-FAQ (note that we have also a :doc:`real FAQ </faq/index>`)
of the most basic issues one is presented with when using a :class:`.Session`.
When do I make a :class:`.sessionmaker`?
-----------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Just one time, somewhere in your application's global scope. It should be
looked upon as part of your application's configuration. If your
.. _session_faq_whentocreate:
When do I construct a :class:`.Session`, when do I commit it, and when do I close it?
--------------------------------------------------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. topic:: tl;dr;
manager without the use of external helper functions.
Is the Session a cache?
------------------------
+~~~~~~~~~~~~~~~~~~~~~~~
Yeee...no. It's somewhat used as a cache, in that it implements the
:term:`identity map` pattern, and stores objects keyed to their primary key.
via the :ref:`examples_caching` example.
How can I get the :class:`~sqlalchemy.orm.session.Session` for a certain object?
-------------------------------------------------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Use the :meth:`~.Session.object_session` classmethod
available on :class:`~sqlalchemy.orm.session.Session`::
.. _session_faq_threadsafe:
Is the session thread-safe?
----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :class:`.Session` is very much intended to be used in a
**non-concurrent** fashion, which usually means in only one thread at a
ORM-enabled UPDATE and DELETE:
- * :ref:`tutorial_orm_enabled_update`
+ :ref:`orm_expression_update_delete` - in the :ref:`queryguide_toplevel`
- * :ref:`tutorial_orm_enabled_delete`
.. _tutorial_inserting_orm:
-Inserting Rows with the ORM
----------------------------
+Inserting Rows using the ORM Unit of Work pattern
+-------------------------------------------------
When using the ORM, the :class:`_orm.Session` object is responsible for
-constructing :class:`_sql.Insert` constructs and emitting them for us in a
-transaction. The way we instruct the :class:`_orm.Session` to do so is by
-**adding** object entries to it; the :class:`_orm.Session` then makes sure
-these new entries will be emitted to the database when they are needed, using
-a process known as a **flush**.
+constructing :class:`_sql.Insert` constructs and emitting them as INSERT
+statements within the ongoing transaction. The way we instruct the
+:class:`_orm.Session` to do so is by **adding** object entries to it; the
+:class:`_orm.Session` then makes sure these new entries will be emitted to the
+database when they are needed, using a process known as a **flush**. The
+overall process used by the :class:`_orm.Session` to persist objects is known
+as the :term:`unit of work` pattern.
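The add-then-flush sequence can be sketched as follows (the ``User`` mapping and in-memory SQLite engine are illustrative assumptions, not part of the tutorial's running example):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"

    id = Column(Integer, primary_key=True)
    name = Column(String(30))


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    spongebob = User(name="spongebob")
    session.add(spongebob)

    # the object is "pending": no INSERT has been emitted yet
    assert spongebob.id is None

    # flush emits the INSERT; the Session also flushes automatically
    # before queries and as part of commit
    session.flush()
    new_id = spongebob.id

    session.commit()
```

After the flush, the database-generated primary key is available on the object.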
Instances of Classes represent Rows
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
More on this is at :ref:`tutorial_orm_closing`.
-
-
.. _tutorial_orm_updating:
-Updating ORM Objects
---------------------
+Updating ORM Objects using the Unit of Work pattern
+----------------------------------------------------
In the preceding section :ref:`tutorial_core_update_delete`, we introduced the
:class:`_sql.Update` construct that represents a SQL UPDATE statement. When
way is that it is emitted automatically as part of the :term:`unit of work`
process used by the :class:`_orm.Session`, where an UPDATE statement is emitted
on a per-primary key basis corresponding to individual objects that have
-changes on them. A second form of UPDATE is called an "ORM enabled
-UPDATE" and allows us to use the :class:`_sql.Update` construct with the
-:class:`_orm.Session` explicitly; this is described in the next section.
+changes on them.
Supposing we loaded the ``User`` object for the username ``sandy`` into
a transaction (also showing off the :meth:`_sql.Select.filter_by` method
:ref:`session_flushing` - details the flush process as well as information
about the :paramref:`_orm.Session.autoflush` setting.
-.. _tutorial_orm_enabled_update:
-
-ORM-enabled UPDATE statements
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-As previously mentioned, there's a second way to emit UPDATE statements in
-terms of the ORM, which is known as an **ORM enabled UPDATE statement**. This allows the use
-of a generic SQL UPDATE statement that can affect many rows at once. For example
-to emit an UPDATE that will change the ``User.fullname`` column based on
-a value in the ``User.name`` column:
-
-.. sourcecode:: pycon+sql
-
- >>> session.execute(
- ... update(User).
- ... where(User.name == "sandy").
- ... values(fullname="Sandy Squirrel Extraordinaire")
- ... )
- {opensql}UPDATE user_account SET fullname=? WHERE user_account.name = ?
- [...] ('Sandy Squirrel Extraordinaire', 'sandy'){stop}
- <sqlalchemy.engine.cursor.CursorResult object ...>
-
-When invoking the ORM-enabled UPDATE statement, special logic is used to locate
-objects in the current session that match the given criteria, so that they
-are refreshed with the new data. Above, the ``sandy`` object identity
-was located in memory and refreshed::
-
- >>> sandy.fullname
- 'Sandy Squirrel Extraordinaire'
-
-The refresh logic is known as the ``synchronize_session`` option, and is described
-in detail in the section :ref:`orm_expression_update_delete`.
-
-.. seealso::
-
- :ref:`orm_expression_update_delete` - describes ORM use of :func:`_sql.update`
- and :func:`_sql.delete` as well as ORM synchronization options.
.. _tutorial_orm_deleting:
-Deleting ORM Objects
----------------------
+Deleting ORM Objects using the Unit of Work pattern
+----------------------------------------------------
To round out the basic persistence operations, an individual ORM object
-may be marked for deletion by using the :meth:`_orm.Session.delete` method.
+may be marked for deletion within the :term:`unit of work` process
+by using the :meth:`_orm.Session.delete` method.
Let's load up ``patrick`` from the database:
.. sourcecode:: pycon+sql
permanent if we don't commit it. As rolling the transaction back is actually
more interesting at the moment, we will do that in the next section.
-.. _tutorial_orm_enabled_delete:
-ORM-enabled DELETE Statements
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Like UPDATE operations, there is also an ORM-enabled version of DELETE which we can
-illustrate by using the :func:`_sql.delete` construct with
-:meth:`_orm.Session.execute`. It also has a feature by which **non expired**
-objects (see :term:`expired`) that match the given deletion criteria will be
-automatically marked as ":term:`deleted`" in the :class:`_orm.Session`:
+Bulk / Multi Row INSERT, upsert, UPDATE and DELETE
+---------------------------------------------------
-.. sourcecode:: pycon+sql
+The :term:`unit of work` techniques discussed in this section
+are intended to integrate :term:`dml`, or INSERT/UPDATE/DELETE statements,
+with Python object mechanics, often involving complex graphs of
+inter-related objects. Once objects are added to a :class:`.Session` using
+:meth:`.Session.add`, the unit of work process transparently emits
+INSERT/UPDATE/DELETE on our behalf as attributes on our objects are created
+and modified.
- >>> # refresh the target object for demonstration purposes
- >>> # only, not needed for the DELETE
- {sql}>>> squidward = session.get(User, 4)
- SELECT user_account.id AS user_account_id, user_account.name AS user_account_name,
- user_account.fullname AS user_account_fullname
- FROM user_account
- WHERE user_account.id = ?
- [...] (4,){stop}
+However, the ORM :class:`.Session` also has the ability to process commands
+that allow it to emit INSERT, UPDATE and DELETE statements directly without
+being passed any ORM-persisted objects, instead being passed lists of values to
+be INSERTed, UPDATEd, or upserted, or WHERE criteria so that an UPDATE or
+DELETE statement that matches many rows at once can be invoked. This mode of
+use is of particular importance when large numbers of rows must be affected
+without the need to construct and manipulate mapped objects, which may be
+cumbersome and unnecessary for simple, performance-intensive tasks such as
+large bulk inserts.
- >>> session.execute(delete(User).where(User.name == "squidward"))
- {opensql}DELETE FROM user_account WHERE user_account.name = ?
- [...] ('squidward',){stop}
- <sqlalchemy.engine.cursor.CursorResult object at 0x...>
+The Bulk / Multi row features of the ORM :class:`_orm.Session` make use of the
+:func:`_dml.insert`, :func:`_dml.update` and :func:`_dml.delete` constructs
+directly, and their usage resembles how they are used with SQLAlchemy Core
+(first introduced in this tutorial at :ref:`tutorial_core_insert` and
+:ref:`tutorial_core_update_delete`). When using these constructs
+with the ORM :class:`_orm.Session` instead of a plain :class:`_engine.Connection`,
+their construction, execution and result handling are fully integrated with the ORM.
-The ``squidward`` identity, like that of ``patrick``, is now also in a
-deleted state. Note that we had to re-load ``squidward`` above in order
-to demonstrate this; if the object were expired, the DELETE operation
-would not take the time to refresh expired objects just to see that they
-had been deleted::
+For background and examples on using these features, see the section
+:ref:`orm_expression_update_delete` in the :ref:`queryguide_toplevel`.
- >>> squidward in session
- False
+.. seealso::
+ :ref:`orm_expression_update_delete` - in the :ref:`queryguide_toplevel`
Rolling Back
FROM user_account JOIN address ON user_account.id = address.user_id
The presence of an ORM :func:`_orm.relationship` on a mapping is not used
-by :meth:`_sql.Select.join` or :meth:`_sql.Select.join_from` if we don't
-specify it; it is **not used for ON clause
-inference**. This means, if we join from ``User`` to ``Address`` without an
+by :meth:`_sql.Select.join` or :meth:`_sql.Select.join_from`
+to infer the ON clause if we don't
+specify it. This means, if we join from ``User`` to ``Address`` without an
ON clause, it works because of the :class:`_schema.ForeignKeyConstraint`
between the two mapped :class:`_schema.Table` objects, not because of the
:func:`_orm.relationship` objects on the ``User`` and ``Address`` classes::
{opensql}SELECT address.email_address
FROM user_account JOIN address ON user_account.id = address.user_id
-.. _tutorial_joining_relationships_aliased:
+See the section :ref:`orm_queryguide_joins` in the :ref:`queryguide_toplevel`
+for many more examples of how to use :meth:`.Select.join` and :meth:`.Select.join_from`
+with :func:`_orm.relationship` constructs.
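The two join forms can be contrasted in a short sketch, assuming a minimal hypothetical ``User`` / ``Address`` mapping:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, select
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"

    id = Column(Integer, primary_key=True)
    name = Column(String(30))
    addresses = relationship("Address", back_populates="user")


class Address(Base):
    __tablename__ = "address"

    id = Column(Integer, primary_key=True)
    email_address = Column(String(100))
    user_id = Column(ForeignKey("user_account.id"))
    user = relationship("User", back_populates="addresses")


# joining along the relationship attribute uses it to form the ON clause
stmt_rel = select(User).join(User.addresses)

# joining entity-to-entity infers the ON clause from the
# ForeignKeyConstraint instead; relationship() is not consulted
stmt_fk = select(User).join(Address)
```

Both statements render the same ``JOIN address ON user_account.id = address.user_id``, but they arrive at the ON clause by different means.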
-Joining between Aliased targets
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-In the section :ref:`tutorial_orm_entity_aliases` we introduced the
-:func:`_orm.aliased` construct, which is used to apply a SQL alias to an
-ORM entity. When using a :func:`_orm.relationship` to help construct SQL JOIN, the
-use case where the target of the join is to be an :func:`_orm.aliased` is suited
-by making use of the :meth:`_orm.PropComparator.of_type` modifier. To
-demonstrate we will construct the same join illustrated at :ref:`tutorial_orm_entity_aliases`
-using the :func:`_orm.relationship` attributes to join instead::
-
- >>> print(
- ... select(User).
- ... join(User.addresses.of_type(address_alias_1)).
- ... where(address_alias_1.email_address == 'patrick@aol.com').
- ... join(User.addresses.of_type(address_alias_2)).
- ... where(address_alias_2.email_address == 'patrick@gmail.com')
- ... )
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- JOIN address AS address_1 ON user_account.id = address_1.user_id
- JOIN address AS address_2 ON user_account.id = address_2.user_id
- WHERE address_1.email_address = :email_address_1
- AND address_2.email_address = :email_address_2
-
-To make use of a :func:`_orm.relationship` to construct a join **from** an
-aliased entity, the attribute is available from the :func:`_orm.aliased`
-construct directly::
-
- >>> user_alias_1 = aliased(User)
- >>> print(
- ... select(user_alias_1.name).
- ... join(user_alias_1.addresses)
- ... )
- {opensql}SELECT user_account_1.name
- FROM user_account AS user_account_1
- JOIN address ON user_account_1.id = address.user_id
-
-.. _tutorial_joining_relationships_augmented:
-
-Augmenting the ON Criteria
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The ON clause generated by the :func:`_orm.relationship` construct may
-also be augmented with additional criteria. This is useful both for
-quick ways to limit the scope of a particular join over a relationship path,
-and also for use cases like configuring loader strategies, introduced below
-at :ref:`tutorial_orm_loader_strategies`. The :meth:`_orm.PropComparator.and_`
-method accepts a series of SQL expressions positionally that will be joined
-to the ON clause of the JOIN via AND. For example if we wanted to
-JOIN from ``User`` to ``Address`` but also limit the ON criteria to only certain
-email addresses:
-
-.. sourcecode:: pycon+sql
-
- >>> stmt = (
- ... select(User.fullname).
- ... join(User.addresses.and_(Address.email_address == 'pearl.krabs@gmail.com'))
- ... )
- >>> session.execute(stmt).all()
- {opensql}SELECT user_account.fullname
- FROM user_account
- JOIN address ON user_account.id = address.user_id AND address.email_address = ?
- [...] ('pearl.krabs@gmail.com',){stop}
- [('Pearl Krabs',)]
-
-
-.. _tutorial_relationship_exists:
-
-EXISTS forms: has() / any()
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-In the section :ref:`tutorial_exists`, we introduced the :class:`_sql.Exists`
-object that provides for the SQL EXISTS keyword in conjunction with a
-scalar subquery. The :func:`_orm.relationship` construct provides for some
-helper methods that may be used to generate some common EXISTS styles
-of queries in terms of the relationship.
-
-For a one-to-many relationship such as ``User.addresses``, an EXISTS against
-the ``address`` table that correlates back to the ``user_account`` table
-can be produced using :meth:`_orm.PropComparator.any`. This method accepts
-an optional WHERE criteria to limit the rows matched by the subquery:
-
-.. sourcecode:: pycon+sql
-
- >>> stmt = (
- ... select(User.fullname).
- ... where(User.addresses.any(Address.email_address == 'pearl.krabs@gmail.com'))
- ... )
- >>> session.execute(stmt).all()
- {opensql}SELECT user_account.fullname
- FROM user_account
- WHERE EXISTS (SELECT 1
- FROM address
- WHERE user_account.id = address.user_id AND address.email_address = ?)
- [...] ('pearl.krabs@gmail.com',){stop}
- [('Pearl Krabs',)]
-
-As EXISTS tends to be more efficient for negative lookups, a common query
-is to locate entities where there are no related entities present. This
-is succinct using a phrase such as ``~User.addresses.any()``, to select
-for ``User`` entities that have no related ``Address`` rows:
-
-.. sourcecode:: pycon+sql
-
- >>> stmt = (
- ... select(User.fullname).
- ... where(~User.addresses.any())
- ... )
- >>> session.execute(stmt).all()
- {opensql}SELECT user_account.fullname
- FROM user_account
- WHERE NOT (EXISTS (SELECT 1
- FROM address
- WHERE user_account.id = address.user_id))
- [...] (){stop}
- [('Patrick McStar',), ('Squidward Tentacles',), ('Eugene H. Krabs',)]
-
-The :meth:`_orm.PropComparator.has` method works in mostly the same way as
-:meth:`_orm.PropComparator.any`, except that it's used for many-to-one
-relationships, such as if we wanted to locate all ``Address`` objects
-which belonged to "pearl":
+.. seealso::
-.. sourcecode:: pycon+sql
-
- >>> stmt = (
- ... select(Address.email_address).
- ... where(Address.user.has(User.name=="pkrabs"))
- ... )
- >>> session.execute(stmt).all()
- {opensql}SELECT address.email_address
- FROM address
- WHERE EXISTS (SELECT 1
- FROM user_account
- WHERE user_account.id = address.user_id AND user_account.name = ?)
- [...] ('pkrabs',){stop}
- [('pearl.krabs@gmail.com',), ('pearl@aol.com',)]
+ :ref:`orm_queryguide_joins` in the :ref:`queryguide_toplevel`
.. _tutorial_relationship_operators:
-Common Relationship Operators
+Relationship WHERE Operators
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are some additional varieties of SQL generation helpers that come with
-:func:`_orm.relationship`, including:
-
-* **many to one equals comparison** - a specific object instance can be
- compared to many-to-one relationship, to select rows where the
- foreign key of the target entity matches the primary key value of the
- object given::
-
- >>> print(select(Address).where(Address.user == u1))
- {opensql}SELECT address.id, address.email_address, address.user_id
- FROM address
- WHERE :param_1 = address.user_id
-
- ..
-
-* **many to one not equals comparison** - the not equals operator may also
- be used::
+:func:`_orm.relationship` which are typically useful when building up the
+WHERE clause of a statement. See the section
+:ref:`orm_queryguide_relationship_operators` in the :ref:`queryguide_toplevel`.
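As a quick sketch of two such helpers, :meth:`_orm.PropComparator.any` and :meth:`_orm.PropComparator.has`, assuming a minimal hypothetical ``User`` / ``Address`` mapping:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, select
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"

    id = Column(Integer, primary_key=True)
    name = Column(String(30))
    addresses = relationship("Address", back_populates="user")


class Address(Base):
    __tablename__ = "address"

    id = Column(Integer, primary_key=True)
    email_address = Column(String(100))
    user_id = Column(ForeignKey("user_account.id"))
    user = relationship("User", back_populates="addresses")


# one-to-many: any() renders an EXISTS subquery against the related table
stmt_any = select(User).where(
    User.addresses.any(Address.email_address == "pearl.krabs@gmail.com")
)

# many-to-one: has() does the same from the other direction
stmt_has = select(Address).where(Address.user.has(User.name == "pkrabs"))
```

Both forms produce correlated EXISTS criteria rather than a JOIN, which keeps the enclosing statement's FROM list unchanged.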
- >>> print(select(Address).where(Address.user != u1))
- {opensql}SELECT address.id, address.email_address, address.user_id
- FROM address
- WHERE address.user_id != :user_id_1 OR address.user_id IS NULL
-
- ..
-
-* **object is contained in a one-to-many collection** - this is essentially
- the one-to-many version of the "equals" comparison, select rows where the
- primary key equals the value of the foreign key in a related object::
-
- >>> print(select(User).where(User.addresses.contains(a1)))
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account
- WHERE user_account.id = :param_1
-
- ..
+.. seealso::
-* **An object has a particular parent from a one-to-many perspective** - the
- :func:`_orm.with_parent` function produces a comparison that returns rows
- which are referred towards by a given parent, this is essentially the
- same as using the ``==`` operator with the many-to-one side::
+ :ref:`orm_queryguide_relationship_operators` in the :ref:`queryguide_toplevel`
- >>> from sqlalchemy.orm import with_parent
- >>> print(select(Address).where(with_parent(u1, User.addresses)))
- {opensql}SELECT address.id, address.email_address, address.user_id
- FROM address
- WHERE :param_1 = address.user_id
- ..
.. _tutorial_orm_loader_strategies:
in the query. This concept is discussed in more detail in the section
:ref:`zen_of_eager_loading`.
-The ON clause rendered by :func:`_orm.joinedload` may be affected directly by
-using the :meth:`_orm.PropComparator.and_` method described previously at
-:ref:`tutorial_joining_relationships_augmented`; examples of this technique
-with loader strategies are further below at :ref:`tutorial_loader_strategy_augmented`.
-However, more generally, "joined eager loading" may be applied to a
-:class:`_sql.Select` that uses :meth:`_sql.Select.join` using the approach
-described in the next section,
-:ref:`tutorial_orm_loader_strategies_contains_eager`.
-
.. tip::
* :ref:`contains_eager` - using :func:`.contains_eager`
-.. _tutorial_loader_strategy_augmented:
-
-Augmenting Loader Strategy Paths
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-In :ref:`tutorial_joining_relationships_augmented` we illustrated how to add
-arbitrary criteria to a JOIN rendered with :func:`_orm.relationship` to also
-include additional criteria in the ON clause. The :meth:`_orm.PropComparator.and_`
-method is in fact generally available for most loader options. For example,
-if we wanted to re-load the names of users and their email addresses, but omitting
-the email addresses with the ``sqlalchemy.org`` domain, we can apply
-:meth:`_orm.PropComparator.and_` to the argument passed to
-:func:`_orm.selectinload` to limit this criteria:
-
-
-.. sourcecode:: pycon+sql
-
- >>> from sqlalchemy.orm import selectinload
- >>> stmt = (
- ... select(User).
- ... options(
- ... selectinload(
- ... User.addresses.and_(
- ... ~Address.email_address.endswith("sqlalchemy.org")
- ... )
- ... )
- ... ).
- ... order_by(User.id).
- ... execution_options(populate_existing=True)
- ... )
- >>> for row in session.execute(stmt):
- ... print(f"{row.User.name} ({', '.join(a.email_address for a in row.User.addresses)})")
- {opensql}SELECT user_account.id, user_account.name, user_account.fullname
- FROM user_account ORDER BY user_account.id
- [...] ()
- SELECT address.user_id AS address_user_id, address.id AS address_id,
- address.email_address AS address_email_address
- FROM address
- WHERE address.user_id IN (?, ?, ?, ?, ?, ?)
- AND (address.email_address NOT LIKE '%' || ?)
- [...] (1, 2, 3, 4, 5, 6, 'sqlalchemy.org'){stop}
- spongebob ()
- sandy (sandy@squirrelpower.org)
- patrick ()
- squidward ()
- ehkrabs ()
- pkrabs (pearl.krabs@gmail.com, pearl@aol.com)
-
-
-A very important thing to note above is that a special option is added with
-``.execution_options(populate_existing=True)``. This option which takes
-effect when rows are being fetched indicates that the loader option we are
-using should **replace** the existing contents of collections on the objects,
-if they are already loaded. As we are working with a single
-:class:`_orm.Session` repeatedly, the objects we see being loaded above are the
-same Python instances as those that were first persisted at the start of the
-ORM section of this tutorial.
-
-
-.. seealso::
-
- :ref:`loader_option_criteria` - in :ref:`loading_toplevel`
-
- :ref:`orm_queryguide_populate_existing` - in :ref:`queryguide_toplevel`
-
Raiseload
^^^^^^^^^
+++ /dev/null
-"""
-Illustrates three strategies for persisting and querying XML
-documents as represented by ElementTree in a relational
-database. The techniques do not apply any mappings to the
-ElementTree objects directly, so are compatible with the
-native cElementTree as well as lxml, and can be adapted to
-suit any kind of DOM representation system. Querying along
-xpath-like strings is illustrated as well.
-
-E.g.::
-
- # parse an XML file and persist in the database
- doc = ElementTree.parse("test.xml")
- session.add(Document(file, doc))
- session.commit()
-
- # locate documents with a certain path/attribute structure
- for document in find_document('/somefile/header/field2[@attr=foo]'):
- # dump the XML
- print(document)
-
-.. autosource::
- :files: pickle_type.py, adjacency_list.py, optimized_al.py
-
-"""
+++ /dev/null
-"""
-Illustrates an explicit way to persist an XML document expressed using
-ElementTree.
-
-Each DOM node is stored in an individual
-table row, with attributes represented in a separate table. The
-nodes are associated in a hierarchy using an adjacency list
-structure. A query function is introduced which can search for nodes
-along any path with a given structure of attributes, basically a
-(very narrow) subset of xpath.
-
-This example explicitly marshals/unmarshals the ElementTree document into
-mapped entities which have their own tables. Compare to pickle_type.py which
-uses PickleType to accomplish the same task. Note that the usage of both
-styles of persistence are identical, as is the structure of the main Document
-class.
-
-"""
-
-# PART I - Imports/Configuration
-
-import os
-import re
-from xml.etree import ElementTree
-
-from sqlalchemy import and_
-from sqlalchemy import Column
-from sqlalchemy import create_engine
-from sqlalchemy import ForeignKey
-from sqlalchemy import Integer
-from sqlalchemy import String
-from sqlalchemy import Table
-from sqlalchemy import Unicode
-from sqlalchemy.orm import aliased
-from sqlalchemy.orm import lazyload
-from sqlalchemy.orm import mapper
-from sqlalchemy.orm import registry
-from sqlalchemy.orm import relationship
-from sqlalchemy.orm import Session
-
-
-e = create_engine("sqlite://")
-mapper_registry = registry()
-
-# PART II - Table Metadata
-
-# stores a top level record of an XML document.
-documents = Table(
- "documents",
- mapper_registry.metadata,
- Column("document_id", Integer, primary_key=True),
- Column("filename", String(30), unique=True),
- Column("element_id", Integer, ForeignKey("elements.element_id")),
-)
-
-# stores XML nodes in an adjacency list model. This corresponds to
-# Element and SubElement objects.
-elements = Table(
- "elements",
- mapper_registry.metadata,
- Column("element_id", Integer, primary_key=True),
- Column("parent_id", Integer, ForeignKey("elements.element_id")),
- Column("tag", Unicode(30), nullable=False),
- Column("text", Unicode),
- Column("tail", Unicode),
-)
-
-# stores attributes. This corresponds to the dictionary of attributes
-# stored by an Element or SubElement.
-attributes = Table(
- "attributes",
- mapper_registry.metadata,
- Column(
- "element_id",
- Integer,
- ForeignKey("elements.element_id"),
- primary_key=True,
- ),
- Column("name", Unicode(100), nullable=False, primary_key=True),
- Column("value", Unicode(255)),
-)
-
-mapper_registry.metadata.create_all(e)
-
-# PART III - Model
-
-# our document class. contains a string name,
-# and the ElementTree root element.
-
-
-class Document:
- def __init__(self, name, element):
- self.filename = name
- self.element = element
-
-
-# PART IV - Persistence Mapping
-
-# Node class. a non-public class which will represent the DB-persisted
-# Element/SubElement object. We cannot create mappers for ElementTree elements
-# directly because they are at the very least not new-style classes, and also
-# may be backed by native implementations. so here we construct an adapter.
-
-
-class _Node:
- pass
-
-
-# Attribute class. also internal, this will represent the key/value attributes
-# stored for a particular Node.
-
-
-class _Attribute:
- def __init__(self, name, value):
- self.name = name
- self.value = value
-
-
-# setup mappers. Document will eagerly load a list of _Node objects.
-mapper(
- Document,
- documents,
- properties={"_root": relationship(_Node, lazy="joined", cascade="all")},
-)
-
-mapper(
- _Node,
- elements,
- properties={
- "children": relationship(_Node, cascade="all"),
- # eagerly load attributes
- "attributes": relationship(
- _Attribute, lazy="joined", cascade="all, delete-orphan"
- ),
- },
-)
-
-mapper(_Attribute, attributes)
-
-# define marshalling functions that convert from _Node/_Attribute to/from
-# ElementTree objects. this will set the ElementTree element as
-# "document._element", and append the root _Node object to the "_root" mapped
-# collection.
-
-
-class ElementTreeMarshal:
- def __get__(self, document, owner):
- if document is None:
- return self
-
- if hasattr(document, "_element"):
- return document._element
-
- def traverse(node, parent=None):
- if parent is not None:
- elem = ElementTree.SubElement(parent, node.tag)
- else:
- elem = ElementTree.Element(node.tag)
- elem.text = node.text
- elem.tail = node.tail
- for attr in node.attributes:
- elem.attrib[attr.name] = attr.value
- for child in node.children:
- traverse(child, parent=elem)
- return elem
-
- document._element = ElementTree.ElementTree(traverse(document._root))
- return document._element
-
- def __set__(self, document, element):
- def traverse(node):
- n = _Node()
- n.tag = str(node.tag)
- n.text = str(node.text)
- n.tail = str(node.tail) if node.tail else None
- n.children = [traverse(n2) for n2 in node]
- n.attributes = [
- _Attribute(str(k), str(v)) for k, v in node.attrib.items()
- ]
- return n
-
- document._root = traverse(element.getroot())
- document._element = element
-
- def __delete__(self, document):
- del document._element
- document._root = []
-
-
-# override Document's "element" attribute with the marshaller.
-Document.element = ElementTreeMarshal()
-
-# PART V - Basic Persistence Example
-
-line = "\n--------------------------------------------------------"
-
-# save to DB
-session = Session(e)
-
-# get ElementTree documents
-for file in ("test.xml", "test2.xml", "test3.xml"):
- filename = os.path.join(os.path.dirname(__file__), file)
- doc = ElementTree.parse(filename)
- session.add(Document(file, doc))
-
-print("\nSaving three documents...", line)
-session.commit()
-print("Done.")
-
-print("\nFull text of document 'text.xml':", line)
-document = session.query(Document).filter_by(filename="test.xml").first()
-
-ElementTree.dump(document.element)
-
-# PART VI - Searching for Paths
-
-# manually search for a document which contains "/somefile/header/field1:hi"
-root = aliased(_Node)
-child_node = aliased(_Node)
-grandchild_node = aliased(_Node)
-
-d = (
- session.query(Document)
- .join(Document._root.of_type(root))
- .filter(root.tag == "somefile")
- .join(root.children.of_type(child_node))
- .filter(child_node.tag == "header")
- .join(child_node.children.of_type(grandchild_node))
- .filter(
- and_(grandchild_node.tag == "field1", grandchild_node.text == "hi")
- )
- .one()
-)
-ElementTree.dump(d.element)
-
-# generalize the above approach into an extremely impoverished xpath function:
-
-
-def find_document(path, compareto):
- query = session.query(Document)
- attribute = Document._root
- for i, match in enumerate(
- re.finditer(r"/([\w_]+)(?:\[@([\w_]+)(?:=(.*))?\])?", path)
- ):
- (token, attrname, attrvalue) = match.group(1, 2, 3)
- target_node = aliased(_Node)
-
- query = query.join(attribute.of_type(target_node)).filter(
- target_node.tag == token
- )
-
- attribute = target_node.children
-
- if attrname:
- attribute_entity = aliased(_Attribute)
-
- if attrvalue:
- query = query.join(
- target_node.attributes.of_type(attribute_entity)
- ).filter(
- and_(
- attribute_entity.name == attrname,
- attribute_entity.value == attrvalue,
- )
- )
- else:
- query = query.join(
- target_node.attributes.of_type(attribute_entity)
- ).filter(attribute_entity.name == attrname)
- return (
- query.options(lazyload(Document._root))
- .filter(target_node.text == compareto)
- .all()
- )
-
-
-for path, compareto in (
- ("/somefile/header/field1", "hi"),
- ("/somefile/field1", "hi"),
- ("/somefile/header/field2", "there"),
- ("/somefile/header/field2[@attr=foo]", "there"),
-):
- print("\nDocuments containing '%s=%s':" % (path, compareto), line)
- print([d.filename for d in find_document(path, compareto)])
+++ /dev/null
-"""Uses the same strategy as
- ``adjacency_list.py``, but associates each DOM row with its owning
- document row, so that a full document of DOM nodes can be loaded
- using O(1) queries - the construction of the "hierarchy" is performed
- after the load in a non-recursive fashion and is more
- efficient.
-
-"""
-
-# PART I - Imports/Configuration
-import os
-import re
-from xml.etree import ElementTree
-
-from sqlalchemy import and_
-from sqlalchemy import Column
-from sqlalchemy import create_engine
-from sqlalchemy import ForeignKey
-from sqlalchemy import Integer
-from sqlalchemy import String
-from sqlalchemy import Table
-from sqlalchemy import Unicode
-from sqlalchemy.orm import aliased
-from sqlalchemy.orm import lazyload
-from sqlalchemy.orm import mapper
-from sqlalchemy.orm import registry
-from sqlalchemy.orm import relationship
-from sqlalchemy.orm import Session
-
-
-e = create_engine("sqlite://")
-mapper_registry = registry()
-
-# PART II - Table Metadata
-
-# stores a top level record of an XML document.
-documents = Table(
- "documents",
- mapper_registry.metadata,
- Column("document_id", Integer, primary_key=True),
- Column("filename", String(30), unique=True),
-)
-
-# stores XML nodes in an adjacency list model. This corresponds to
-# Element and SubElement objects.
-elements = Table(
- "elements",
- mapper_registry.metadata,
- Column("element_id", Integer, primary_key=True),
- Column("parent_id", Integer, ForeignKey("elements.element_id")),
- Column("document_id", Integer, ForeignKey("documents.document_id")),
- Column("tag", Unicode(30), nullable=False),
- Column("text", Unicode),
- Column("tail", Unicode),
-)
-
-# stores attributes. This corresponds to the dictionary of attributes
-# stored by an Element or SubElement.
-attributes = Table(
- "attributes",
- mapper_registry.metadata,
- Column(
- "element_id",
- Integer,
- ForeignKey("elements.element_id"),
- primary_key=True,
- ),
- Column("name", Unicode(100), nullable=False, primary_key=True),
- Column("value", Unicode(255)),
-)
-
-mapper_registry.metadata.create_all(e)
-
-# PART III - Model
-
-# our document class. contains a string name,
-# and the ElementTree root element.
-
-
-class Document:
- def __init__(self, name, element):
- self.filename = name
- self.element = element
-
-
-# PART IV - Persistence Mapping
-
-# Node class. a non-public class which will represent the DB-persisted
-# Element/SubElement object. We cannot create mappers for ElementTree elements
-# directly because they are at the very least not new-style classes, and also
-# may be backed by native implementations. so here we construct an adapter.
-
-
-class _Node:
- pass
-
-
-# Attribute class. also internal, this will represent the key/value attributes
-# stored for a particular Node.
-
-
-class _Attribute:
- def __init__(self, name, value):
- self.name = name
- self.value = value
-
-
-# setup mappers. Document will eagerly load a list of _Node objects.
-# they will be ordered in primary key/insert order, so that we can reconstruct
-# an ElementTree structure from the list.
-mapper(
- Document,
- documents,
- properties={
- "_nodes": relationship(
- _Node, lazy="joined", cascade="all, delete-orphan"
- )
- },
-)
-
-# the _Node objects change the way they load so that a list of _Nodes will
-# organize themselves hierarchically using the ElementTreeMarshal. this
-# depends on the ordering of nodes being hierarchical as well; relationship()
-# always applies at least ROWID/primary key ordering to rows which will
-# suffice.
-mapper(
- _Node,
- elements,
- properties={
- "children": relationship(
- _Node, lazy=None
- ), # doesn't load; used only for the save relationship
- "attributes": relationship(
- _Attribute, lazy="joined", cascade="all, delete-orphan"
- ), # eagerly load attributes
- },
-)
-
-mapper(_Attribute, attributes)
-
-# define marshalling functions that convert from _Node/_Attribute to/from
-# ElementTree objects. this will set the ElementTree element as
-# "document._element", and append the root _Node object to the "_nodes" mapped
-# collection.
-
-
-class ElementTreeMarshal:
- def __get__(self, document, owner):
- if document is None:
- return self
-
- if hasattr(document, "_element"):
- return document._element
-
- nodes = {}
- root = None
- for node in document._nodes:
- if node.parent_id is not None:
- parent = nodes[node.parent_id]
- elem = ElementTree.SubElement(parent, node.tag)
- nodes[node.element_id] = elem
- else:
- parent = None
- elem = root = ElementTree.Element(node.tag)
- nodes[node.element_id] = root
- for attr in node.attributes:
- elem.attrib[attr.name] = attr.value
- elem.text = node.text
- elem.tail = node.tail
-
- document._element = ElementTree.ElementTree(root)
- return document._element
-
- def __set__(self, document, element):
- def traverse(node):
- n = _Node()
- n.tag = str(node.tag)
- n.text = str(node.text)
- n.tail = str(node.tail)
- document._nodes.append(n)
- n.children = [traverse(n2) for n2 in node]
- n.attributes = [
- _Attribute(str(k), str(v)) for k, v in node.attrib.items()
- ]
- return n
-
- traverse(element.getroot())
- document._element = element
-
- def __delete__(self, document):
- del document._element
- document._nodes = []
-
-
-# override Document's "element" attribute with the marshaller.
-Document.element = ElementTreeMarshal()
-
-# PART V - Basic Persistence Example
-
-line = "\n--------------------------------------------------------"
-
-# save to DB
-session = Session(e)
-
-# get ElementTree documents
-for file in ("test.xml", "test2.xml", "test3.xml"):
- filename = os.path.join(os.path.dirname(__file__), file)
- doc = ElementTree.parse(filename)
- session.add(Document(file, doc))
-
-print("\nSaving three documents...", line)
-session.commit()
-print("Done.")
-
-print("\nFull text of document 'text.xml':", line)
-document = session.query(Document).filter_by(filename="test.xml").first()
-
-ElementTree.dump(document.element)
-
-# PART VI - Searching for Paths
-
-# manually search for a document which contains "/somefile/header/field1:hi"
-print("\nManual search for /somefile/header/field1=='hi':", line)
-
-root = aliased(_Node)
-child_node = aliased(_Node)
-grandchild_node = aliased(_Node)
-
-d = (
- session.query(Document)
- .join(Document._nodes.of_type(root))
- .filter(and_(root.parent_id.is_(None), root.tag == "somefile"))
- .join(root.children.of_type(child_node))
- .filter(child_node.tag == "header")
- .join(child_node.children.of_type(grandchild_node))
- .filter(
- and_(grandchild_node.tag == "field1", grandchild_node.text == "hi")
- )
- .one()
-)
-ElementTree.dump(d.element)
-
-# generalize the above approach into an extremely impoverished xpath function:
-
-
-def find_document(path, compareto):
- query = session.query(Document)
-
- for i, match in enumerate(
- re.finditer(r"/([\w_]+)(?:\[@([\w_]+)(?:=(.*))?\])?", path)
- ):
- (token, attrname, attrvalue) = match.group(1, 2, 3)
-
- if not i:
- parent = Document
- target_node = aliased(_Node)
-
- query = query.join(parent._nodes.of_type(target_node)).filter(
- target_node.parent_id.is_(None)
- )
- else:
- parent = target_node
- target_node = aliased(_Node)
-
- query = query.join(parent.children.of_type(target_node))
-
- query = query.filter(target_node.tag == token)
- if attrname:
- attribute_entity = aliased(_Attribute)
- query = query.join(
- target_node.attributes.of_type(attribute_entity)
- )
- if attrvalue:
- query = query.filter(
- and_(
- attribute_entity.name == attrname,
- attribute_entity.value == attrvalue,
- )
- )
- else:
- query = query.filter(attribute_entity.name == attrname)
- return (
- query.options(lazyload(Document._nodes))
- .filter(target_node.text == compareto)
- .all()
- )
-
-
-for path, compareto in (
- ("/somefile/header/field1", "hi"),
- ("/somefile/field1", "hi"),
- ("/somefile/header/field2", "there"),
- ("/somefile/header/field2[@attr=foo]", "there"),
-):
- print("\nDocuments containing '%s=%s':" % (path, compareto), line)
- print([d.filename for d in find_document(path, compareto)])
+++ /dev/null
-"""
-illustrates a quick and dirty way to persist an XML document expressed using
-ElementTree and pickle.
-
-This is a trivial example using PickleType to marshal/unmarshal the ElementTree
-document into a binary column. Compare to explicit.py which stores the
-individual components of the ElementTree structure in distinct rows using two
-additional mapped entities. Note that the usage of both styles of persistence
-are identical, as is the structure of the main Document class.
-
-"""
-
-import os
-from xml.etree import ElementTree
-
-from sqlalchemy import Column
-from sqlalchemy import create_engine
-from sqlalchemy import Integer
-from sqlalchemy import PickleType
-from sqlalchemy import String
-from sqlalchemy import Table
-from sqlalchemy.orm import registry
-from sqlalchemy.orm import Session
-
-
-e = create_engine("sqlite://")
-mapper_registry = registry()
-
-
-# setup a comparator for the PickleType since it's a mutable
-# element.
-
-
-def are_elements_equal(x, y):
- return x == y
-
-
-# stores a top level record of an XML document.
-# the "element" column will store the ElementTree document as a BLOB.
-documents = Table(
- "documents",
- mapper_registry.metadata,
- Column("document_id", Integer, primary_key=True),
- Column("filename", String(30), unique=True),
- Column("element", PickleType(comparator=are_elements_equal)),
-)
-
-mapper_registry.metadata.create_all(e)
-
-# our document class. contains a string name,
-# and the ElementTree root element.
-
-
-class Document:
- def __init__(self, name, element):
- self.filename = name
- self.element = element
-
-
-# setup mapper.
-mapper_registry.map_imperatively(Document, documents)
-
-# time to test !
-
-# get ElementTree document
-filename = os.path.join(os.path.dirname(__file__), "test.xml")
-doc = ElementTree.parse(filename)
-
-# save to DB
-session = Session(e)
-session.add(Document("test.xml", doc))
-session.commit()
-
-# restore
-document = session.query(Document).filter_by(filename="test.xml").first()
-
-# print
-ElementTree.dump(document.element)
+++ /dev/null
-<somefile>
- This is somefile.
- <header name="foo" value="bar" hoho="lala">
- <field1>hi</field1>
- <field2>there</field2>
- Some additional text within the header.
- </header>
- Some more text within somefile.
-</somefile>
\ No newline at end of file
+++ /dev/null
-<somefile>
- <field1>hi</field1>
- <field2>there</field2>
-</somefile>
\ No newline at end of file
+++ /dev/null
-<somefile>
- test3
- <header name="aheader" value="bar" hoho="lala">
- <field1>one</field1>
- <field2 attr='foo'>there</field2>
- </header>
-</somefile>
\ No newline at end of file
"""Concrete-table (table-per-class) inheritance example."""
+from __future__ import annotations
+
+from typing import Annotated
-from sqlalchemy import Column
from sqlalchemy import create_engine
from sqlalchemy import ForeignKey
-from sqlalchemy import inspect
-from sqlalchemy import Integer
from sqlalchemy import or_
+from sqlalchemy import select
from sqlalchemy import String
from sqlalchemy.ext.declarative import ConcreteBase
-from sqlalchemy.ext.declarative import declarative_base
+from sqlalchemy.orm import DeclarativeBase
+from sqlalchemy.orm import Mapped
+from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session
from sqlalchemy.orm import with_polymorphic
-Base = declarative_base()
+intpk = Annotated[int, mapped_column(primary_key=True)]
+str50 = Annotated[str, mapped_column(String(50))]
+
+
+class Base(DeclarativeBase):
+ pass
class Company(Base):
__tablename__ = "company"
- id = Column(Integer, primary_key=True)
- name = Column(String(50))
+ id: Mapped[intpk]
+ name: Mapped[str50]
- employees = relationship(
- "Person", back_populates="company", cascade="all, delete-orphan"
+ employees: Mapped[list[Person]] = relationship(
+ back_populates="company", cascade="all, delete-orphan"
)
def __repr__(self):
- return "Company %s" % self.name
+ return f"Company {self.name}"
class Person(ConcreteBase, Base):
__tablename__ = "person"
- id = Column(Integer, primary_key=True)
- company_id = Column(ForeignKey("company.id"))
- name = Column(String(50))
+ id: Mapped[intpk]
+ company_id: Mapped[int] = mapped_column(ForeignKey("company.id"))
+ name: Mapped[str50]
- company = relationship("Company", back_populates="employees")
+ company: Mapped[Company] = relationship(back_populates="employees")
- __mapper_args__ = {"polymorphic_identity": "person"}
+ __mapper_args__ = {
+ "polymorphic_identity": "person",
+ }
def __repr__(self):
- return "Ordinary person %s" % self.name
+ return f"Ordinary person {self.name}"
class Engineer(Person):
__tablename__ = "engineer"
- id = Column(Integer, primary_key=True)
- name = Column(String(50))
- company_id = Column(ForeignKey("company.id"))
- status = Column(String(30))
- engineer_name = Column(String(30))
- primary_language = Column(String(30))
- company = relationship("Company", back_populates="employees")
+ id: Mapped[int] = mapped_column(primary_key=True)
+ company_id: Mapped[int] = mapped_column(ForeignKey("company.id"))
+ name: Mapped[str50]
+ status: Mapped[str50]
+ engineer_name: Mapped[str50]
+ primary_language: Mapped[str50]
- __mapper_args__ = {"polymorphic_identity": "engineer", "concrete": True}
+ company: Mapped[Company] = relationship(back_populates="employees")
- def __repr__(self):
- return (
- "Engineer %s, status %s, engineer_name %s, "
- "primary_language %s"
- % (
- self.name,
- self.status,
- self.engineer_name,
- self.primary_language,
- )
- )
+ __mapper_args__ = {"polymorphic_identity": "engineer", "concrete": True}
+
+    def __repr__(self):
+        return (
+            f"Engineer {self.name}, status {self.status}, "
+            f"engineer_name {self.engineer_name}, "
+            f"primary_language {self.primary_language}"
+        )
class Manager(Person):
__tablename__ = "manager"
- id = Column(Integer, primary_key=True)
- name = Column(String(50))
- company_id = Column(ForeignKey("company.id"))
- status = Column(String(30))
- manager_name = Column(String(30))
- company = relationship("Company", back_populates="employees")
+ id: Mapped[int] = mapped_column(primary_key=True)
+ company_id: Mapped[int] = mapped_column(ForeignKey("company.id"))
+ name: Mapped[str50]
+ status: Mapped[str50]
+ manager_name: Mapped[str50]
+
+ company: Mapped[Company] = relationship(back_populates="employees")
__mapper_args__ = {"polymorphic_identity": "manager", "concrete": True}
def __repr__(self):
- return "Manager %s, status %s, manager_name %s" % (
- self.name,
- self.status,
- self.manager_name,
+ return (
+ f"Manager {self.name}, status {self.status}, "
+ f"manager_name {self.manager_name}"
)
engine = create_engine("sqlite://", echo=True)
Base.metadata.create_all(engine)
-session = Session(engine)
-
-c = Company(
- name="company1",
- employees=[
- Manager(
- name="pointy haired boss", status="AAB", manager_name="manager1"
- ),
- Engineer(
- name="dilbert",
- status="BBA",
- engineer_name="engineer1",
- primary_language="java",
- ),
- Person(name="joesmith"),
- Engineer(
- name="wally",
- status="CGG",
- engineer_name="engineer2",
- primary_language="python",
- ),
- Manager(name="jsmith", status="ABA", manager_name="manager2"),
- ],
-)
-session.add(c)
-
-session.commit()
-
-c = session.query(Company).get(1)
-for e in c.employees:
- print(e, inspect(e).key, e.company)
-assert set([e.name for e in c.employees]) == set(
- ["pointy haired boss", "dilbert", "joesmith", "wally", "jsmith"]
-)
-print("\n")
-
-dilbert = session.query(Person).filter_by(name="dilbert").one()
-dilbert2 = session.query(Engineer).filter_by(name="dilbert").one()
-assert dilbert is dilbert2
-
-dilbert.engineer_name = "hes dilbert!"
-
-session.commit()
-
-c = session.query(Company).get(1)
-for e in c.employees:
- print(e)
-
-# query using with_polymorphic.
-eng_manager = with_polymorphic(Person, [Engineer, Manager])
-print(
- session.query(eng_manager)
- .filter(
- or_(
- eng_manager.Engineer.engineer_name == "engineer1",
- eng_manager.Manager.manager_name == "manager2",
- )
+with Session(engine) as session:
+
+ c = Company(
+ name="company1",
+ employees=[
+ Manager(
+ name="mr krabs",
+ status="AAB",
+ manager_name="manager1",
+ ),
+ Engineer(
+ name="spongebob",
+ status="BBA",
+ engineer_name="engineer1",
+ primary_language="java",
+ ),
+ Person(name="joesmith"),
+ Engineer(
+ name="patrick",
+ status="CGG",
+ engineer_name="engineer2",
+ primary_language="python",
+ ),
+ Manager(name="jsmith", status="ABA", manager_name="manager2"),
+ ],
)
- .all()
-)
-
-# illustrate join from Company
-eng_manager = with_polymorphic(Person, [Engineer, Manager])
-print(
- session.query(Company)
- .join(Company.employees.of_type(eng_manager))
- .filter(
- or_(
- eng_manager.Engineer.engineer_name == "engineer1",
- eng_manager.Manager.manager_name == "manager2",
- )
+ session.add(c)
+
+ session.commit()
+
+ for e in c.employees:
+ print(e)
+
+ spongebob = session.scalars(
+ select(Person).filter_by(name="spongebob")
+ ).one()
+ spongebob2 = session.scalars(
+ select(Engineer).filter_by(name="spongebob")
+ ).one()
+ assert spongebob is spongebob2
+
+ spongebob2.engineer_name = "hes spongebob!"
+
+ session.commit()
+
+    # query using with_polymorphic.  when using ConcreteBase, use "*" to
+    # indicate the default selectable; setting specific entities won't
+    # work right now.
+ eng_manager = with_polymorphic(Person, "*")
+ print(
+ session.scalars(
+ select(eng_manager).filter(
+ or_(
+ eng_manager.Engineer.engineer_name == "engineer1",
+ eng_manager.Manager.manager_name == "manager2",
+ )
+ )
+ ).all()
+ )
+
+ # illustrate join from Company.
+ print(
+ session.scalars(
+ select(Company)
+ .join(Company.employees.of_type(eng_manager))
+ .filter(
+ or_(
+ eng_manager.Engineer.engineer_name == "engineer1",
+ eng_manager.Manager.manager_name == "manager2",
+ )
+ )
+ ).all()
)
- .all()
-)
-session.commit()
+ session.commit()
"""Joined-table (table-per-subclass) inheritance example."""
+from __future__ import annotations
+
+from typing import Annotated
-from sqlalchemy import Column
from sqlalchemy import create_engine
from sqlalchemy import ForeignKey
-from sqlalchemy import inspect
-from sqlalchemy import Integer
from sqlalchemy import or_
+from sqlalchemy import select
from sqlalchemy import String
-from sqlalchemy.ext.declarative import declarative_base
+from sqlalchemy.orm import DeclarativeBase
+from sqlalchemy.orm import Mapped
+from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session
from sqlalchemy.orm import with_polymorphic
-Base = declarative_base()
+intpk = Annotated[int, mapped_column(primary_key=True)]
+str50 = Annotated[str, mapped_column(String(50))]
+
+
+class Base(DeclarativeBase):
+ pass
class Company(Base):
__tablename__ = "company"
- id = Column(Integer, primary_key=True)
- name = Column(String(50))
+ id: Mapped[intpk]
+ name: Mapped[str50]
- employees = relationship(
- "Person", back_populates="company", cascade="all, delete-orphan"
+ employees: Mapped[list[Person]] = relationship(
+ back_populates="company", cascade="all, delete-orphan"
)
def __repr__(self):
- return "Company %s" % self.name
+ return f"Company {self.name}"
class Person(Base):
__tablename__ = "person"
- id = Column(Integer, primary_key=True)
- company_id = Column(ForeignKey("company.id"))
- name = Column(String(50))
- type = Column(String(50))
+ id: Mapped[intpk]
+ company_id: Mapped[int] = mapped_column(ForeignKey("company.id"))
+ name: Mapped[str50]
+ type: Mapped[str50]
- company = relationship("Company", back_populates="employees")
+ company: Mapped[Company] = relationship(back_populates="employees")
__mapper_args__ = {
"polymorphic_identity": "person",
- "polymorphic_on": type,
+ "polymorphic_on": "type",
}
def __repr__(self):
- return "Ordinary person %s" % self.name
+ return f"Ordinary person {self.name}"
class Engineer(Person):
__tablename__ = "engineer"
- id = Column(ForeignKey("person.id"), primary_key=True)
- status = Column(String(30))
- engineer_name = Column(String(30))
- primary_language = Column(String(30))
+ id: Mapped[intpk] = mapped_column(ForeignKey("person.id"))
+ status: Mapped[str50]
+ engineer_name: Mapped[str50]
+ primary_language: Mapped[str50]
__mapper_args__ = {"polymorphic_identity": "engineer"}
def __repr__(self):
return (
- "Engineer %s, status %s, engineer_name %s, "
- "primary_language %s"
- % (
- self.name,
- self.status,
- self.engineer_name,
- self.primary_language,
- )
+ f"Engineer {self.name}, status {self.status}, "
+ f"engineer_name {self.engineer_name}, "
+ f"primary_language {self.primary_language}"
)
class Manager(Person):
__tablename__ = "manager"
- id = Column(ForeignKey("person.id"), primary_key=True)
- status = Column(String(30))
- manager_name = Column(String(30))
+ id: Mapped[intpk] = mapped_column(ForeignKey("person.id"))
+ status: Mapped[str50]
+ manager_name: Mapped[str50]
__mapper_args__ = {"polymorphic_identity": "manager"}
def __repr__(self):
- return "Manager %s, status %s, manager_name %s" % (
- self.name,
- self.status,
- self.manager_name,
+ return (
+ f"Manager {self.name}, status {self.status}, "
+ f"manager_name {self.manager_name}"
)
engine = create_engine("sqlite://", echo=True)
Base.metadata.create_all(engine)
-session = Session(engine)
-
-c = Company(
- name="company1",
- employees=[
- Manager(
- name="pointy haired boss", status="AAB", manager_name="manager1"
- ),
- Engineer(
- name="dilbert",
- status="BBA",
- engineer_name="engineer1",
- primary_language="java",
- ),
- Person(name="joesmith"),
- Engineer(
- name="wally",
- status="CGG",
- engineer_name="engineer2",
- primary_language="python",
- ),
- Manager(name="jsmith", status="ABA", manager_name="manager2"),
- ],
-)
-session.add(c)
-
-session.commit()
-
-c = session.query(Company).get(1)
-for e in c.employees:
- print(e, inspect(e).key, e.company)
-assert set([e.name for e in c.employees]) == set(
- ["pointy haired boss", "dilbert", "joesmith", "wally", "jsmith"]
-)
-print("\n")
-
-dilbert = session.query(Person).filter_by(name="dilbert").one()
-dilbert2 = session.query(Engineer).filter_by(name="dilbert").one()
-assert dilbert is dilbert2
-
-dilbert.engineer_name = "hes dilbert!"
-
-session.commit()
-
-c = session.query(Company).get(1)
-for e in c.employees:
- print(e)
-
-# query using with_polymorphic.
-eng_manager = with_polymorphic(Person, [Engineer, Manager])
-print(
- session.query(eng_manager)
- .filter(
- or_(
- eng_manager.Engineer.engineer_name == "engineer1",
- eng_manager.Manager.manager_name == "manager2",
- )
+with Session(engine) as session:
+
+ c = Company(
+ name="company1",
+ employees=[
+ Manager(
+ name="mr krabs",
+ status="AAB",
+ manager_name="manager1",
+ ),
+ Engineer(
+ name="spongebob",
+ status="BBA",
+ engineer_name="engineer1",
+ primary_language="java",
+ ),
+ Person(name="joesmith"),
+ Engineer(
+ name="patrick",
+ status="CGG",
+ engineer_name="engineer2",
+ primary_language="python",
+ ),
+ Manager(name="jsmith", status="ABA", manager_name="manager2"),
+ ],
)
- .all()
-)
-
-# illustrate join from Company.
-# flat=True means the tables inside the "polymorphic join" will be aliased.
-# not strictly necessary in this example but helpful for the more general
-# case of joins involving inheritance hierarchies as well as joined eager
-# loading.
-eng_manager = with_polymorphic(Person, [Engineer, Manager], flat=True)
-print(
- session.query(Company)
- .join(Company.employees.of_type(eng_manager))
- .filter(
- or_(
- eng_manager.Engineer.engineer_name == "engineer1",
- eng_manager.Manager.manager_name == "manager2",
- )
+ session.add(c)
+
+ session.commit()
+
+ for e in c.employees:
+ print(e)
+
+ spongebob = session.scalars(
+ select(Person).filter_by(name="spongebob")
+ ).one()
+ spongebob2 = session.scalars(
+ select(Engineer).filter_by(name="spongebob")
+ ).one()
+ assert spongebob is spongebob2
+
+ spongebob2.engineer_name = "hes spongebob!"
+
+ session.commit()
+
+ # query using with_polymorphic. flat=True is generally recommended
+ # for joined inheritance mappings as it will produce fewer levels
+ # of subqueries
+ eng_manager = with_polymorphic(Person, [Engineer, Manager], flat=True)
+ print(
+ session.scalars(
+ select(eng_manager).filter(
+ or_(
+ eng_manager.Engineer.engineer_name == "engineer1",
+ eng_manager.Manager.manager_name == "manager2",
+ )
+ )
+ ).all()
)
- .all()
-)
-session.commit()
+ # illustrate join from Company.
+ eng_manager = with_polymorphic(Person, [Engineer, Manager], flat=True)
+ print(
+ session.scalars(
+ select(Company)
+ .join(Company.employees.of_type(eng_manager))
+ .filter(
+ or_(
+ eng_manager.Engineer.engineer_name == "engineer1",
+ eng_manager.Manager.manager_name == "manager2",
+ )
+ )
+ ).all()
+ )
"""Single-table (table-per-hierarchy) inheritance example."""
+from __future__ import annotations
+
+from typing import Annotated
-from sqlalchemy import Column
from sqlalchemy import create_engine
from sqlalchemy import ForeignKey
-from sqlalchemy import inspect
-from sqlalchemy import Integer
+from sqlalchemy import FromClause
from sqlalchemy import or_
+from sqlalchemy import select
from sqlalchemy import String
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.ext.declarative import declared_attr
+from sqlalchemy.orm import DeclarativeBase
+from sqlalchemy.orm import declared_attr
+from sqlalchemy.orm import Mapped
+from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session
from sqlalchemy.orm import with_polymorphic
+intpk = Annotated[int, mapped_column(primary_key=True)]
+str50 = Annotated[str, mapped_column(String(50))]
+
+# Columns that are local to subclasses must be nullable, since rows
+# for sibling subclasses will not populate them.
+# We can still use a non-Optional Python type, however.
-Base = declarative_base()
+
+class Base(DeclarativeBase):
+ pass
class Company(Base):
__tablename__ = "company"
- id = Column(Integer, primary_key=True)
- name = Column(String(50))
+ id: Mapped[intpk]
+ name: Mapped[str50]
- employees = relationship(
- "Person", back_populates="company", cascade="all, delete-orphan"
+ employees: Mapped[list[Person]] = relationship(
+ back_populates="company", cascade="all, delete-orphan"
)
def __repr__(self):
- return "Company %s" % self.name
+ return f"Company {self.name}"
class Person(Base):
__tablename__ = "person"
- id = Column(Integer, primary_key=True)
- company_id = Column(ForeignKey("company.id"))
- name = Column(String(50))
- type = Column(String(50))
+ __table__: FromClause
+
+ id: Mapped[intpk]
+ company_id: Mapped[int] = mapped_column(ForeignKey("company.id"))
+ name: Mapped[str50]
+ type: Mapped[str50]
- company = relationship("Company", back_populates="employees")
+ company: Mapped[Company] = relationship(back_populates="employees")
__mapper_args__ = {
"polymorphic_identity": "person",
- "polymorphic_on": type,
+ "polymorphic_on": "type",
}
def __repr__(self):
- return "Ordinary person %s" % self.name
+ return f"Ordinary person {self.name}"
class Engineer(Person):
- engineer_name = Column(String(30))
- primary_language = Column(String(30))
-
- # illustrate a single-inh "conflicting" column declaration;
- # see https://docs.sqlalchemy.org/en/latest/orm/extensions/
- # declarative/inheritance.html#resolving-column-conflicts
+ # illustrate a single-inh "conflicting" mapped_column declaration,
+ # where both subclasses want to share the same column that is nonetheless
+ # not "local" to the base class
@declared_attr
- def status(cls):
- return Person.__table__.c.get("status", Column(String(30)))
+ def status(cls) -> Mapped[str50]:
+ return Person.__table__.c.get(
+ "status", mapped_column(String(30)) # type: ignore
+ )
+
+ engineer_name: Mapped[str50subclass]
+ primary_language: Mapped[str50subclass]
__mapper_args__ = {"polymorphic_identity": "engineer"}
def __repr__(self):
return (
- "Engineer %s, status %s, engineer_name %s, "
- "primary_language %s"
- % (
- self.name,
- self.status,
- self.engineer_name,
- self.primary_language,
- )
+ f"Engineer {self.name}, status {self.status}, "
+ f"engineer_name {self.engineer_name}, "
+ f"primary_language {self.primary_language}"
)
class Manager(Person):
- manager_name = Column(String(30))
+ manager_name: Mapped[str50subclass]
+ # illustrate a single-inh "conflicting" mapped_column declaration,
+ # where both subclasses want to share the same column that is nonetheless
+ # not "local" to the base class
@declared_attr
- def status(cls):
- return Person.__table__.c.get("status", Column(String(30)))
+ def status(cls) -> Mapped[str50]:
+ return Person.__table__.c.get(
+ "status", mapped_column(String(30)) # type: ignore
+ )
__mapper_args__ = {"polymorphic_identity": "manager"}
def __repr__(self):
- return "Manager %s, status %s, manager_name %s" % (
- self.name,
- self.status,
- self.manager_name,
+ return (
+ f"Manager {self.name}, status {self.status}, "
+ f"manager_name {self.manager_name}"
)
engine = create_engine("sqlite://", echo=True)
Base.metadata.create_all(engine)
-session = Session(engine)
-
-c = Company(
- name="company1",
- employees=[
- Manager(
- name="pointy haired boss", status="AAB", manager_name="manager1"
- ),
- Engineer(
- name="dilbert",
- status="BBA",
- engineer_name="engineer1",
- primary_language="java",
- ),
- Person(name="joesmith"),
- Engineer(
- name="wally",
- status="CGG",
- engineer_name="engineer2",
- primary_language="python",
- ),
- Manager(name="jsmith", status="ABA", manager_name="manager2"),
- ],
-)
-session.add(c)
-
-session.commit()
-
-c = session.query(Company).get(1)
-for e in c.employees:
- print(e, inspect(e).key, e.company)
-assert set([e.name for e in c.employees]) == set(
- ["pointy haired boss", "dilbert", "joesmith", "wally", "jsmith"]
-)
-print("\n")
-
-dilbert = session.query(Person).filter_by(name="dilbert").one()
-dilbert2 = session.query(Engineer).filter_by(name="dilbert").one()
-assert dilbert is dilbert2
-
-dilbert.engineer_name = "hes dilbert!"
-
-session.commit()
-
-c = session.query(Company).get(1)
-for e in c.employees:
- print(e)
-
-# query using with_polymorphic.
-eng_manager = with_polymorphic(Person, [Engineer, Manager])
-print(
- session.query(eng_manager)
- .filter(
- or_(
- eng_manager.Engineer.engineer_name == "engineer1",
- eng_manager.Manager.manager_name == "manager2",
- )
+with Session(engine) as session:
+
+ c = Company(
+ name="company1",
+ employees=[
+ Manager(
+ name="mr krabs",
+ status="AAB",
+ manager_name="manager1",
+ ),
+ Engineer(
+ name="spongebob",
+ status="BBA",
+ engineer_name="engineer1",
+ primary_language="java",
+ ),
+ Person(name="joesmith"),
+ Engineer(
+ name="patrick",
+ status="CGG",
+ engineer_name="engineer2",
+ primary_language="python",
+ ),
+ Manager(name="jsmith", status="ABA", manager_name="manager2"),
+ ],
)
- .all()
-)
-
-# illustrate join from Company,
-eng_manager = with_polymorphic(Person, [Engineer, Manager])
-print(
- session.query(Company)
- .join(Company.employees.of_type(eng_manager))
- .filter(
- or_(
- eng_manager.Engineer.engineer_name == "engineer1",
- eng_manager.Manager.manager_name == "manager2",
- )
+ session.add(c)
+
+ session.commit()
+
+ for e in c.employees:
+ print(e)
+
+ spongebob = session.scalars(
+ select(Person).filter_by(name="spongebob")
+ ).one()
+ spongebob2 = session.scalars(
+ select(Engineer).filter_by(name="spongebob")
+ ).one()
+ assert spongebob is spongebob2
+
+ spongebob2.engineer_name = "hes spongebob!"
+
+ session.commit()
+
+ # query using with_polymorphic.
+ eng_manager = with_polymorphic(Person, [Engineer, Manager])
+ print(
+ session.scalars(
+ select(eng_manager).filter(
+ or_(
+ eng_manager.Engineer.engineer_name == "engineer1",
+ eng_manager.Manager.manager_name == "manager2",
+ )
+ )
+ ).all()
)
- .all()
-)
-session.commit()
+ # illustrate join from Company.
+ print(
+ session.scalars(
+ select(Company)
+ .join(Company.employees.of_type(eng_manager))
+ .filter(
+ or_(
+ eng_manager.Engineer.engineer_name == "engineer1",
+ eng_manager.Manager.manager_name == "manager2",
+ )
+ )
+ ).all()
+ )
+++ /dev/null
-"""Examples of various :func:`.orm.relationship` configurations,
-which make use of the ``primaryjoin`` argument to compose special types
-of join conditions.
-
-.. autosource::
-
-"""
+++ /dev/null
-"""Illustrate a :func:`.relationship` that joins two columns where those
-columns are not of the same type, and a CAST must be used on the SQL
-side in order to match them.
-
-When complete, we'd like to see a load of the relationship to look like::
-
- -- load the primary row, a_id is a string
- SELECT a.id AS a_id_1, a.a_id AS a_a_id
- FROM a
- WHERE a.a_id = '2'
-
- -- then load the collection using CAST, b.a_id is an integer
- SELECT b.id AS b_id, b.a_id AS b_a_id
- FROM b
- WHERE CAST('2' AS INTEGER) = b.a_id
-
-The relationship is essentially configured as follows::
-
- class B(Base):
- # ...
-
- a = relationship(A,
- primaryjoin=cast(A.a_id, Integer) == foreign(B.a_id),
- backref="bs")
-
-Where above, we are making use of the :func:`.cast` function in order
-to produce CAST, as well as the :func:`.foreign` :term:`annotation` function
-in order to note to the ORM that ``B.a_id`` should be treated like the
-"foreign key" column.
-
-"""
-
-from sqlalchemy import Column
-from sqlalchemy import create_engine
-from sqlalchemy import Integer
-from sqlalchemy import String
-from sqlalchemy import TypeDecorator
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import relationship
-from sqlalchemy.orm import Session
-
-
-Base = declarative_base()
-
-
-class StringAsInt(TypeDecorator):
- """Coerce string->integer type.
-
- This is needed only if the relationship() from
- int to string is writable, as SQLAlchemy will copy
- the string parent values into the integer attribute
- on the child during a flush.
-
- """
-
- impl = Integer
-
- def process_bind_param(self, value, dialect):
- if value is not None:
- value = int(value)
- return value
-
-
-class A(Base):
- """Parent. The referenced column is a string type."""
-
- __tablename__ = "a"
-
- id = Column(Integer, primary_key=True)
- a_id = Column(String)
-
-
-class B(Base):
- """Child. The column we reference 'A' with is an integer."""
-
- __tablename__ = "b"
-
- id = Column(Integer, primary_key=True)
- a_id = Column(StringAsInt)
- a = relationship(
- "A",
- # specify primaryjoin. The string form is optional
- # here, but note that Declarative makes available all
- # of the built-in functions we might need, including
- # cast() and foreign().
- primaryjoin="cast(A.a_id, Integer) == foreign(B.a_id)",
- backref="bs",
- )
-
-
-# we demonstrate with SQLite, but the important part
-# is the CAST rendered in the SQL output.
-
-e = create_engine("sqlite://", echo=True)
-Base.metadata.create_all(e)
-
-s = Session(e)
-
-s.add_all([A(a_id="1"), A(a_id="2", bs=[B(), B()]), A(a_id="3", bs=[B()])])
-s.commit()
-
-b1 = s.query(B).filter_by(a_id="2").first()
-print(b1.a)
-
-a1 = s.query(A).filter_by(a_id="2").first()
-print(a1.bs)
+++ /dev/null
-"""Illustrate a "three way join" - where a primary table joins to a remote
-table via an association table, but then the primary table also needs
-to refer to some columns in the remote table directly.
-
-E.g.::
-
- first.first_id -> second.first_id
- second.other_id --> partitioned.other_id
- first.partition_key ---------------------> partitioned.partition_key
-
-For a relationship like this, "second" is a lot like a "secondary" table,
-but the mechanics aren't present within the "secondary" feature to allow
-for the join directly between first and partitioned. Instead, we
-will derive a selectable from partitioned and second combined together, then
-link first to that derived selectable.
-
-If we define the derived selectable as::
-
- second JOIN partitioned ON second.other_id = partitioned.other_id
-
-A JOIN from first to this derived selectable is then::
-
- first JOIN (second JOIN partitioned
- ON second.other_id = partitioned.other_id)
- ON first.first_id = second.first_id AND
- first.partition_key = partitioned.partition_key
-
-We will use the "non primary mapper" feature in order to produce this.
-A non primary mapper is essentially an "extra" :func:`.mapper` that we can
-use to associate a particular class with some selectable that is
-not its usual mapped table. It is used only when called upon within
-a Query (or a :func:`.relationship`).
-
-
-"""
-from sqlalchemy import and_
-from sqlalchemy import Column
-from sqlalchemy import create_engine
-from sqlalchemy import Integer
-from sqlalchemy import join
-from sqlalchemy import String
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import foreign
-from sqlalchemy.orm import mapper
-from sqlalchemy.orm import relationship
-from sqlalchemy.orm import Session
-
-
-Base = declarative_base()
-
-
-class First(Base):
- __tablename__ = "first"
-
- first_id = Column(Integer, primary_key=True)
- partition_key = Column(String)
-
- def __repr__(self):
- return "First(%s, %s)" % (self.first_id, self.partition_key)
-
-
-class Second(Base):
- __tablename__ = "second"
-
- first_id = Column(Integer, primary_key=True)
- other_id = Column(Integer, primary_key=True)
-
-
-class Partitioned(Base):
- __tablename__ = "partitioned"
-
- other_id = Column(Integer, primary_key=True)
- partition_key = Column(String, primary_key=True)
-
- def __repr__(self):
- return "Partitioned(%s, %s)" % (self.other_id, self.partition_key)
-
-
-j = join(Partitioned, Second, Partitioned.other_id == Second.other_id)
-
-partitioned_second = mapper(
- Partitioned,
- j,
- non_primary=True,
- properties={
- # note we need to disambiguate columns here - the join()
- # will provide them as j.c.<tablename>_<colname> for access,
- # but they retain their real names in the mapping
- "other_id": [j.c.partitioned_other_id, j.c.second_other_id]
- },
-)
-
-First.partitioned = relationship(
- partitioned_second,
- primaryjoin=and_(
- First.partition_key == partitioned_second.c.partition_key,
- First.first_id == foreign(partitioned_second.c.first_id),
- ),
- innerjoin=True,
-)
-
-# when using any database other than SQLite, we will get a nested
-# join, e.g. "first JOIN (partitioned JOIN second ON ..) ON ..".
-# On SQLite, SQLAlchemy needs to render a full subquery.
-e = create_engine("sqlite://", echo=True)
-
-Base.metadata.create_all(e)
-s = Session(e)
-s.add_all(
- [
- First(first_id=1, partition_key="p1"),
- First(first_id=2, partition_key="p1"),
- First(first_id=3, partition_key="p2"),
- Second(first_id=1, other_id=1),
- Second(first_id=2, other_id=1),
- Second(first_id=3, other_id=2),
- Partitioned(partition_key="p1", other_id=1),
- Partitioned(partition_key="p1", other_id=2),
- Partitioned(partition_key="p2", other_id=2),
- ]
-)
-s.commit()
-
-for row in s.query(First, Partitioned).join(First.partitioned):
- print(row)
-
-for f in s.query(First):
- for p in f.partitioned:
- print(f.partition_key, p.partition_key)
+++ /dev/null
-"""Large collection example.
-
-Illustrates the options to use with
-:func:`~sqlalchemy.orm.relationship()` when the list of related
-objects is very large, including:
-
-* "dynamic" relationships which query slices of data as accessed
-* how to use ON DELETE CASCADE in conjunction with
- ``passive_deletes=True`` to greatly improve the performance of
- related collection deletion.
-
-.. autosource::
-
-"""
+++ /dev/null
-from sqlalchemy import Column
-from sqlalchemy import create_engine
-from sqlalchemy import ForeignKey
-from sqlalchemy import Integer
-from sqlalchemy import MetaData
-from sqlalchemy import String
-from sqlalchemy import Table
-from sqlalchemy.orm import mapper
-from sqlalchemy.orm import relationship
-from sqlalchemy.orm import sessionmaker
-
-
-meta = MetaData()
-
-org_table = Table(
- "organizations",
- meta,
- Column("org_id", Integer, primary_key=True),
- Column("org_name", String(50), nullable=False, key="name"),
- mysql_engine="InnoDB",
-)
-
-member_table = Table(
- "members",
- meta,
- Column("member_id", Integer, primary_key=True),
- Column("member_name", String(50), nullable=False, key="name"),
- Column(
- "org_id",
- Integer,
- ForeignKey("organizations.org_id", ondelete="CASCADE"),
- ),
- mysql_engine="InnoDB",
-)
-
-
-class Organization:
- def __init__(self, name):
- self.name = name
-
-
-class Member:
- def __init__(self, name):
- self.name = name
-
-
-mapper(
- Organization,
- org_table,
- properties={
- "members": relationship(
- Member,
- # Organization.members will be a Query object - no loading
- # of the entire collection occurs unless requested
- lazy="dynamic",
- # Member objects "belong" to their parent, are deleted when
- # removed from the collection
- cascade="all, delete-orphan",
- # "delete, delete-orphan" cascade does not load in objects on
- # delete, allows ON DELETE CASCADE to handle it.
- # this only works with a database that supports ON DELETE CASCADE -
- # *not* sqlite or MySQL with MyISAM
- passive_deletes=True,
- )
- },
-)
-
-mapper(Member, member_table)
-
-if __name__ == "__main__":
- engine = create_engine(
- "postgresql+psycopg2://scott:tiger@localhost/test", echo=True
- )
- meta.create_all(engine)
-
- # expire_on_commit=False means the session contents
- # will not get invalidated after commit.
- sess = sessionmaker(engine, expire_on_commit=False)()
-
- # create org with some members
- org = Organization("org one")
- org.members.append(Member("member one"))
- org.members.append(Member("member two"))
- org.members.append(Member("member three"))
-
- sess.add(org)
-
- print("-------------------------\nflush one - save org + 3 members\n")
- sess.commit()
-
- # the 'members' collection is a Query. it issues
- # SQL as needed to load subsets of the collection.
- print("-------------------------\nload subset of members\n")
- members = org.members.filter(member_table.c.name.like("%member t%")).all()
- print(members)
-
- # new Members can be appended without any
- # SQL being emitted to load the full collection
- org.members.append(Member("member four"))
- org.members.append(Member("member five"))
- org.members.append(Member("member six"))
-
- print("-------------------------\nflush two - save 3 more members\n")
- sess.commit()
-
- # delete the object. Using ON DELETE CASCADE
- # SQL is only emitted for the head row - the Member rows
- # disappear automatically without the need for additional SQL.
- sess.delete(org)
- print(
- "-------------------------\nflush three - delete org, "
- "delete members in one statement\n"
- )
- sess.commit()
-
- print("-------------------------\nno Member rows should remain:\n")
- print(sess.query(Member).count())
- sess.close()
-
- print("------------------------\ndone. dropping tables.")
- meta.drop_all(engine)
from sqlalchemy import Identity
from sqlalchemy import insert
from sqlalchemy import Integer
-from sqlalchemy import select
from sqlalchemy import String
from sqlalchemy.orm import declarative_base
from sqlalchemy.orm import Session
session.commit()
-@Profiler.profile
-def test_bulk_save_return_pks(n):
- """INSERT statements in "bulk" (batched with RETURNING if available),
- fetching generated row id"""
- session = Session(bind=engine)
- session.bulk_save_objects(
- [
- Customer(
- name="customer name %d" % i,
- description="customer description %d" % i,
- )
- for i in range(n)
- ],
- return_defaults=True,
- )
- session.commit()
-
-
@Profiler.profile
def test_flush_pk_given(n):
"""Batched INSERT statements via the ORM, PKs already defined"""
@Profiler.profile
-def test_bulk_save(n):
- """Batched INSERT statements via the ORM in "bulk", discarding PKs."""
- session = Session(bind=engine)
- session.bulk_save_objects(
- [
- Customer(
- name="customer name %d" % i,
- description="customer description %d" % i,
- )
- for i in range(n)
- ]
- )
- session.commit()
-
-
-@Profiler.profile
-def test_orm_insert(n):
- """A single Core INSERT run through the Session"""
+def test_orm_bulk_insert(n):
+ """Batched INSERT statements via the ORM in "bulk", not returning rows"""
session = Session(bind=engine)
session.execute(
insert(Customer),
- params=[
- dict(
- name="customer name %d" % i,
- description="customer description %d" % i,
- )
+ [
+ {
+ "name": "customer name %d" % i,
+ "description": "customer description %d" % i,
+ }
for i in range(n)
],
)
@Profiler.profile
-def test_orm_insert_w_fetch(n):
- """A single Core INSERT w executemany run through the Session, fetching
- back new Customer objects into a list"""
+def test_orm_insert_returning(n):
+ """Batched INSERT statements via the ORM in "bulk", returning new Customer
+ objects"""
session = Session(bind=engine)
- result = session.execute(
- select(Customer).from_statement(insert(Customer).returning(Customer)),
- params=[
- dict(
- name="customer name %d" % i,
- description="customer description %d" % i,
- )
- for i in range(n)
- ],
- )
- customers = result.scalars().all() # noqa: F841
- session.commit()
-
-@Profiler.profile
-def test_bulk_insert_mappings(n):
- """Batched INSERT statements via the ORM "bulk", using dictionaries."""
- session = Session(bind=engine)
- session.bulk_insert_mappings(
- Customer,
+ customer_result = session.scalars(
+ insert(Customer).returning(Customer),
[
- dict(
- name="customer name %d" % i,
- description="customer description %d" % i,
- )
+ {
+ "name": "customer name %d" % i,
+ "description": "customer description %d" % i,
+ }
for i in range(n)
],
)
+
+ # this step is where the rows actually become objects
+ customers = customer_result.all() # noqa: F841
+
session.commit()
+++ /dev/null
-"""A naive example illustrating techniques to help
-embed PostGIS functionality.
-
-This example was originally developed in the hopes that it would be
-extrapolated into a comprehensive PostGIS integration layer. We are
-pleased to announce that this has come to fruition as `GeoAlchemy
-<https://geoalchemy-2.readthedocs.io>`_.
-
-The example illustrates:
-
-* a DDL extension which allows CREATE/DROP to work in
- conjunction with AddGeometryColumn/DropGeometryColumn
-
-* a Geometry type, as well as a few subtypes, which
- convert result row values to a GIS-aware object,
- and also integrates with the DDL extension.
-
-* a GIS-aware object which stores a raw geometry value
- and provides a factory for functions such as AsText().
-
-* an ORM comparator which can override standard column
- methods on mapped objects to produce GIS operators.
-
-* an attribute event listener that intercepts strings
- and converts to GeomFromText().
-
-* a standalone operator example.
-
-The implementation is limited to only public, well known
-and simple to use extension points.
-
-E.g.::
-
- print(session.query(Road).filter(
- Road.road_geom.intersects(r1.road_geom)).all())
-
-.. autosource::
-
-"""
+++ /dev/null
-import binascii
-
-from sqlalchemy import event
-from sqlalchemy import Table
-from sqlalchemy.sql import expression
-from sqlalchemy.sql import type_coerce
-from sqlalchemy.types import UserDefinedType
-
-
-# Python datatypes
-
-
-class GisElement:
- """Represents a geometry value."""
-
- def __str__(self):
- return self.desc
-
- def __repr__(self):
- return "<%s at 0x%x; %r>" % (
- self.__class__.__name__,
- id(self),
- self.desc,
- )
-
-
-class BinaryGisElement(GisElement, expression.Function):
- """Represents a Geometry value expressed as binary."""
-
- def __init__(self, data):
- self.data = data
- expression.Function.__init__(
- self, "ST_GeomFromEWKB", data, type_=Geometry(coerce_="binary")
- )
-
- @property
- def desc(self):
- return self.as_hex
-
- @property
- def as_hex(self):
- return binascii.hexlify(self.data)
-
-
-class TextualGisElement(GisElement, expression.Function):
- """Represents a Geometry value expressed as text."""
-
- def __init__(self, desc, srid=-1):
- self.desc = desc
- expression.Function.__init__(
- self, "ST_GeomFromText", desc, srid, type_=Geometry
- )
-
-
-# SQL datatypes.
-
-
-class Geometry(UserDefinedType):
- """Base PostGIS Geometry column type."""
-
- name = "GEOMETRY"
-
- def __init__(self, dimension=None, srid=-1, coerce_="text"):
- self.dimension = dimension
- self.srid = srid
- self.coerce = coerce_
-
- class comparator_factory(UserDefinedType.Comparator):
- """Define custom operations for geometry types."""
-
- # override the __eq__() operator
- def __eq__(self, other):
- return self.op("~=")(other)
-
- # add a custom operator
- def intersects(self, other):
- return self.op("&&")(other)
-
- # any number of GIS operators can be overridden/added here
- # using the techniques above.
-
- def _coerce_compared_value(self, op, value):
- return self
-
- def get_col_spec(self):
- return self.name
-
- def bind_expression(self, bindvalue):
- if self.coerce == "text":
- return TextualGisElement(bindvalue)
- elif self.coerce == "binary":
- return BinaryGisElement(bindvalue)
- else:
- assert False
-
- def column_expression(self, col):
- if self.coerce == "text":
- return func.ST_AsText(col, type_=self)
- elif self.coerce == "binary":
- return func.ST_AsBinary(col, type_=self)
- else:
- assert False
-
- def bind_processor(self, dialect):
- def process(value):
- if isinstance(value, GisElement):
- return value.desc
- else:
- return value
-
- return process
-
- def result_processor(self, dialect, coltype):
- if self.coerce == "text":
- fac = TextualGisElement
- elif self.coerce == "binary":
- fac = BinaryGisElement
- else:
- assert False
-
- def process(value):
- if value is not None:
- return fac(value)
- else:
- return value
-
- return process
-
- def adapt(self, impltype):
- return impltype(
- dimension=self.dimension, srid=self.srid, coerce_=self.coerce
- )
-
-
-# other datatypes can be added as needed.
-
-
-class Point(Geometry):
- name = "POINT"
-
-
-class Curve(Geometry):
- name = "CURVE"
-
-
-class LineString(Curve):
- name = "LINESTRING"
-
-
-# ... etc.
-
-
-# DDL integration
-# PostGIS historically has required AddGeometryColumn/DropGeometryColumn
-# and other management methods in order to create PostGIS columns. Newer
-# versions don't appear to require these special steps anymore. However,
-# here we illustrate how to set up these features in any case.
-
-
-def setup_ddl_events():
- @event.listens_for(Table, "before_create")
- def before_create(target, connection, **kw):
- dispatch("before-create", target, connection)
-
- @event.listens_for(Table, "after_create")
- def after_create(target, connection, **kw):
- dispatch("after-create", target, connection)
-
- @event.listens_for(Table, "before_drop")
- def before_drop(target, connection, **kw):
- dispatch("before-drop", target, connection)
-
- @event.listens_for(Table, "after_drop")
- def after_drop(target, connection, **kw):
- dispatch("after-drop", target, connection)
-
- def dispatch(event, table, bind):
- if event in ("before-create", "before-drop"):
- regular_cols = [
- c for c in table.c if not isinstance(c.type, Geometry)
- ]
- gis_cols = set(table.c).difference(regular_cols)
- table.info["_saved_columns"] = table.c
-
- # temporarily patch a set of columns not including the
- # Geometry columns
- table.columns = expression.ColumnCollection(*regular_cols)
-
- if event == "before-drop":
- for c in gis_cols:
- bind.execute(
- select(
- func.DropGeometryColumn(
- "public", table.name, c.name
- )
- ).execution_options(autocommit=True)
- )
-
- elif event == "after-create":
- table.columns = table.info.pop("_saved_columns")
- for c in table.c:
- if isinstance(c.type, Geometry):
- bind.execute(
- select(
- func.AddGeometryColumn(
- table.name,
- c.name,
- c.type.srid,
- c.type.name,
- c.type.dimension,
- )
- ).execution_options(autocommit=True)
- )
- elif event == "after-drop":
- table.columns = table.info.pop("_saved_columns")
-
-
-setup_ddl_events()
-
-
-# illustrate usage
-if __name__ == "__main__":
- from sqlalchemy import (
- create_engine,
- MetaData,
- Column,
- Integer,
- String,
- func,
- select,
- )
- from sqlalchemy.orm import sessionmaker
- from sqlalchemy.ext.declarative import declarative_base
-
- engine = create_engine(
- "postgresql+psycopg2://scott:tiger@localhost/test", echo=True
- )
- metadata = MetaData(engine)
- Base = declarative_base(metadata=metadata)
-
- class Road(Base):
- __tablename__ = "roads"
-
- road_id = Column(Integer, primary_key=True)
- road_name = Column(String)
- road_geom = Column(Geometry(2))
-
- metadata.drop_all()
- metadata.create_all()
-
- session = sessionmaker(bind=engine)()
-
- # Add objects. We can use strings...
- session.add_all(
- [
- Road(
- road_name="Jeff Rd",
- road_geom="LINESTRING(191232 243118,191108 243242)",
- ),
- Road(
- road_name="Geordie Rd",
- road_geom="LINESTRING(189141 244158,189265 244817)",
- ),
- Road(
- road_name="Paul St",
- road_geom="LINESTRING(192783 228138,192612 229814)",
- ),
- Road(
- road_name="Graeme Ave",
- road_geom="LINESTRING(189412 252431,189631 259122)",
- ),
- Road(
- road_name="Phil Tce",
- road_geom="LINESTRING(190131 224148,190871 228134)",
- ),
- ]
- )
-
- # or use an explicit TextualGisElement
- # (similar to saying func.GeomFromText())
- r = Road(
- road_name="Dave Cres",
- road_geom=TextualGisElement(
- "LINESTRING(198231 263418,198213 268322)", -1
- ),
- )
- session.add(r)
-
- # pre flush, the TextualGisElement represents the string we sent.
- assert str(r.road_geom) == "LINESTRING(198231 263418,198213 268322)"
-
- session.commit()
-
- # after flush and/or commit, all the TextualGisElements
- # become PersistentGisElements.
- assert str(r.road_geom) == "LINESTRING(198231 263418,198213 268322)"
-
- r1 = session.query(Road).filter(Road.road_name == "Graeme Ave").one()
-
- # illustrate the overridden __eq__() operator.
-
- # strings come in as TextualGisElements
- r2 = (
- session.query(Road)
- .filter(Road.road_geom == "LINESTRING(189412 252431,189631 259122)")
- .one()
- )
-
- r3 = session.query(Road).filter(Road.road_geom == r1.road_geom).one()
-
- assert r1 is r2 is r3
-
- # core usage just fine:
-
- road_table = Road.__table__
- stmt = select(road_table).where(
- road_table.c.road_geom.intersects(r1.road_geom)
- )
- print(session.execute(stmt).fetchall())
-
- # TODO: for some reason the auto-generated labels have the internal
- # replacement strings exposed, even though PG doesn't complain
-
- # look up the hex binary version, using SQLAlchemy casts
- as_binary = session.scalar(
- select(type_coerce(r.road_geom, Geometry(coerce_="binary")))
- )
- assert as_binary.as_hex == (
- "01020000000200000000000000b832084100000000"
- "e813104100000000283208410000000088601041"
- )
-
- # back again, same method !
- as_text = session.scalar(
- select(type_coerce(as_binary, Geometry(coerce_="text")))
- )
- assert as_text.desc == "LINESTRING(198231 263418,198213 268322)"
-
- session.rollback()
-
- metadata.drop_all()
database and render.
"""
- for gcoord in session.query(GlyphCoordinate).options(joinedload("glyph")):
+ for gcoord in session.query(GlyphCoordinate).options(
+ joinedload(GlyphCoordinate.glyph)
+ ):
gcoord.render(window, state)
window.addstr(1, WINDOW_WIDTH - 5, "Score: %.4d" % state["score"])
window.move(0, 0)
insert_null_pk_still_autoincrements = True
insert_returning = True
update_returning = True
+ update_returning_multifrom = True
delete_returning = True
update_returning_multifrom = True
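The capability flags above can be pictured with a small sketch. This is a hypothetical illustration only, not SQLAlchemy's actual internals: the names ``FakeDialect`` and ``can_use_update_returning`` are invented to show how a flag such as ``update_returning_multifrom`` might gate whether a multi-table UPDATE..RETURNING is emitted for a backend.

```python
# Hypothetical sketch: capability flags gating RETURNING support.
class FakeDialect:
    update_returning = True
    update_returning_multifrom = False  # backend lacks UPDATE..FROM..RETURNING


def can_use_update_returning(dialect, multi_from):
    # RETURNING is usable only if the dialect supports it at all, and,
    # for multi-table statements, only if the multifrom flag is also set
    if not dialect.update_returning:
        return False
    if multi_from and not dialect.update_returning_multifrom:
        return False
    return True


print(can_use_update_returning(FakeDialect, multi_from=False))  # True
print(can_use_update_returning(FakeDialect, multi_from=True))   # False
```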
.. seealso::
- :ref:`.CursorResult.splice_horizontally`
+ :meth:`.CursorResult.splice_horizontally`
"""
clone = self._generate()
.. seealso::
- :ref:`deferred`
+ :ref:`orm_queryguide_deferred_declarative`
:param deferred_group: Implies :paramref:`_orm.mapped_column.deferred`
to ``True``, and set the :paramref:`_orm.deferred.group` parameter.
+
+ .. seealso::
+
+ :ref:`orm_queryguide_deferred_group`
+
:param deferred_raiseload: Implies :paramref:`_orm.mapped_column.deferred`
to ``True``, and set the :paramref:`_orm.deferred.raiseload` parameter.
+
+ .. seealso::
+
+ :ref:`orm_queryguide_deferred_raiseload`
+
:param default: Passed directly to the
:paramref:`_schema.Column.default` parameter if the
:paramref:`_orm.mapped_column.insert_default` parameter is not present.
.. seealso::
- :ref:`deferred_raiseload`
+ :ref:`orm_queryguide_deferred_raiseload`
.. seealso::
.. seealso::
- :ref:`deferred`
-
- :ref:`deferred_raiseload`
+ :ref:`orm_queryguide_deferred_imperative`
"""
return ColumnProperty(
:param default_expr: Optional SQL expression object that will be used in
all cases if not assigned later with :func:`_orm.with_expression`.
- E.g.::
-
- from sqlalchemy.sql import literal
-
- class C(Base):
- #...
- my_expr = query_expression(literal(1))
-
- .. versionadded:: 1.3.18
-
.. versionadded:: 1.2
.. seealso::
- :ref:`mapper_querytime_expression`
+ :ref:`orm_queryguide_with_expression` - background and usage examples
"""
prop = ColumnProperty(
def with_polymorphic(
base: Union[_O, Mapper[_O]],
- classes: Iterable[Type[Any]],
+ classes: Union[Literal["*"], Iterable[Type[Any]]],
selectable: Union[Literal[False, None], FromClause] = False,
flat: bool = False,
polymorphic_on: Optional[ColumnElement[Any]] = None,
parententity=adapt_to_entity,
)
- def of_type(self, entity: _EntityType[_T]) -> QueryableAttribute[_T]:
+ def of_type(self, entity: _EntityType[Any]) -> QueryableAttribute[_T]:
return QueryableAttribute(
self.class_,
self.key,
),
)
def after_bulk_update(self, update_context):
- """Execute after an ORM UPDATE against a WHERE expression has been
- invoked.
+ """Event for after the legacy :meth:`_orm.Query.update` method
+ has been called.
- This is called as a result of the :meth:`_query.Query.update` method.
+ .. legacy:: The :meth:`_orm.SessionEvents.after_bulk_update` method
+ is a legacy event hook as of SQLAlchemy 2.0. The event
+ **does not participate** in :term:`2.0 style` invocations
+ using :func:`_dml.update` documented at
+ :ref:`orm_queryguide_update_delete_where`. For 2.0 style use,
+ the :meth:`_orm.SessionEvents.do_orm_execute` hook will intercept
+ these calls.
:param update_context: an "update context" object which contains
details about the update, including these attributes:
),
)
def after_bulk_delete(self, delete_context):
- """Execute after ORM DELETE against a WHERE expression has been
- invoked.
+ """Event for after the legacy :meth:`_orm.Query.delete` method
+ has been called.
- This is called as a result of the :meth:`_query.Query.delete` method.
+ .. legacy:: The :meth:`_orm.SessionEvents.after_bulk_delete` method
+ is a legacy event hook as of SQLAlchemy 2.0. The event
+ **does not participate** in :term:`2.0 style` invocations
+ using :func:`_dml.delete` documented at
+ :ref:`orm_queryguide_update_delete_where`. For 2.0 style use,
+ the :meth:`_orm.SessionEvents.do_orm_execute` hook will intercept
+ these calls.
:param delete_context: a "delete context" object which contains
details about the update, including these attributes:
"""Represent events within the construction of a :class:`_query.Query`
object.
+ .. legacy:: The :class:`_orm.QueryEvents` event methods are legacy
+ as of SQLAlchemy 2.0, and only apply to direct use of the
+ :class:`_orm.Query` object. They are not used for :term:`2.0 style`
+ statements. For events to intercept and modify 2.0 style ORM use,
+ use the :meth:`_orm.SessionEvents.do_orm_execute` hook.
+
The :class:`_orm.QueryEvents` hooks are now superseded by the
:meth:`_orm.SessionEvents.do_orm_execute` event hook.
) -> ColumnElement[Any]:
...
- def of_type(self, class_: _EntityType[_T]) -> PropComparator[_T]:
+ def of_type(self, class_: _EntityType[Any]) -> PropComparator[_T]:
r"""Redefine this object in terms of a polymorphic subclass,
:func:`_orm.with_polymorphic` construct, or :func:`_orm.aliased`
construct.
.. seealso::
- :ref:`queryguide_join_onclause` - in the :ref:`queryguide_toplevel`
+ :ref:`orm_queryguide_joining_relationships_aliased` - in the
+ :ref:`queryguide_toplevel`
:ref:`inheritance_of_type`
CASCADE for joined-table inheritance mappers
:param polymorphic_load: Specifies "polymorphic loading" behavior
- for a subclass in an inheritance hierarchy (joined and single
- table inheritance only). Valid values are:
+ for a subclass in an inheritance hierarchy (joined and single
+ table inheritance only). Valid values are:
- * "'inline'" - specifies this class should be part of the
- "with_polymorphic" mappers, e.g. its columns will be included
- in a SELECT query against the base.
+ * "'inline'" - specifies this class should be part of
+ the "with_polymorphic" mappers, e.g. its columns will be included
+ in a SELECT query against the base.
- * "'selectin'" - specifies that when instances of this class
- are loaded, an additional SELECT will be emitted to retrieve
- the columns specific to this subclass. The SELECT uses
- IN to fetch multiple subclasses at once.
+ * "'selectin'" - specifies that when instances of this class
+ are loaded, an additional SELECT will be emitted to retrieve
+ the columns specific to this subclass. The SELECT uses
+ IN to fetch multiple subclasses at once.
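The "selectin" bullet above can be sketched in plain Python. This is a conceptual illustration under assumed names (``manager`` table, ``extra`` column, not real mappings): after base rows load, the subclass-specific columns are fetched with one additional SELECT using IN against the collected primary keys.

```python
# Conceptual sketch of "selectin" polymorphic loading: collect the ids of
# rows belonging to one subclass, then emit a single SELECT ... IN for
# the subclass-specific columns.
base_rows = [(1, "manager"), (2, "engineer"), (3, "manager")]

manager_ids = [pk for pk, kind in base_rows if kind == "manager"]

stmt = "SELECT id, extra FROM manager WHERE id IN (%s)" % ", ".join(
    str(pk) for pk in manager_ids
)
print(stmt)  # SELECT id, extra FROM manager WHERE id IN (1, 3)
```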
.. versionadded:: 1.2
indicates a selectable that will be used to query for multiple
classes.
+ The :paramref:`_orm.Mapper.polymorphic_load` parameter may be
+ preferable over the use of :paramref:`_orm.Mapper.with_polymorphic`
+ in modern mappings, as it indicates the polymorphic loading style
+ on a per-subclass basis.
+
.. seealso::
- :ref:`with_polymorphic` - discussion of polymorphic querying
- techniques.
+ :ref:`with_polymorphic_mapper_config`
"""
self.class_ = util.assert_arg_type(class_, type, "class_")
"""ORM-level SQL construction object.
- :class:`_query.Query`
- is the source of all SELECT statements generated by the
- ORM, both those formulated by end-user query operations as well as by
- high level internal operations such as related collection loading. It
- features a generative interface whereby successive calls return a new
- :class:`_query.Query` object, a copy of the former with additional
- criteria and options associated with it.
+ .. legacy:: The ORM :class:`.Query` object is a legacy construct
+ as of SQLAlchemy 2.0. See the notes at the top of
+ :ref:`query_api_toplevel` for an overview, including links to migration
+ documentation.
:class:`_query.Query` objects are normally initially generated using the
:meth:`~.Session.query` method of :class:`.Session`, and in
.. seealso::
- :ref:`deferred_options`
+ :ref:`loading_columns`
:ref:`relationship_loader_options`
else:
return pj
- def of_type(self, class_: _EntityType[_PT]) -> PropComparator[_PT]:
+ def of_type(self, class_: _EntityType[Any]) -> PropComparator[_PT]:
r"""Redefine this object in terms of a polymorphic subclass.
See :meth:`.PropComparator.of_type` for an example.
Proxied for the :class:`_orm.Session` class on
behalf of the :class:`_orm.scoping.scoped_session` class.
- The bulk save feature allows mapped objects to be used as the
- source of simple INSERT and UPDATE operations which can be more easily
- grouped together into higher performing "executemany"
- operations; the extraction of data from the objects is also performed
- using a lower-latency process that ignores whether or not attributes
- have actually been modified in the case of UPDATEs, and also ignores
- SQL expressions.
-
- The objects as given are not added to the session and no additional
- state is established on them. If the
- :paramref:`_orm.Session.bulk_save_objects.return_defaults` flag is set,
- then server-generated primary key values will be assigned to the
- returned objects, but **not server side defaults**; this is a
- limitation in the implementation. If stateful objects are desired,
- please use the standard :meth:`_orm.Session.add_all` approach or
- as an alternative newer mass-insert features such as
- :ref:`orm_dml_returning_objects`.
-
- .. warning::
-
- The bulk save feature allows for a lower-latency INSERT/UPDATE
- of rows at the expense of most other unit-of-work features.
- Features such as object management, relationship handling,
- and SQL clause support are **silently omitted** in favor of raw
- INSERT/UPDATES of records.
-
- Please note that newer versions of SQLAlchemy are **greatly
- improving the efficiency** of the standard flush process. It is
- **strongly recommended** to not use the bulk methods as they
- represent a forking of SQLAlchemy's functionality and are slowly
- being moved into legacy status. New features such as
- :ref:`orm_dml_returning_objects` are both more efficient than
- the "bulk" methods and provide more predictable functionality.
-
- **Please read the list of caveats at**
- :ref:`bulk_operations_caveats` **before using this method, and
- fully test and confirm the functionality of all code developed
- using these systems.**
+ .. legacy::
+
+ This method is a legacy feature as of the 2.0 series of
+ SQLAlchemy. For modern bulk INSERT and UPDATE, see
+ the sections :ref:`orm_queryguide_bulk_insert` and
+ :ref:`orm_queryguide_bulk_update`.
+
+ For general INSERT and UPDATE of existing ORM mapped objects,
+ prefer standard :term:`unit of work` data management patterns,
+ introduced in the :ref:`unified_tutorial` at
+ :ref:`tutorial_orm_data_manipulation`. SQLAlchemy 2.0
+ now uses :ref:`engine_insertmanyvalues` with modern dialects
+ which solves previous issues of bulk INSERT slowness.
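The "executemany" grouping that both the legacy bulk methods and modern unit of work flushes rely on can be sketched as follows. This is a rough conceptual sketch, not SQLAlchemy code; ``batch_params`` is an invented helper showing how parameter dictionaries are chunked so the driver receives one statement with many rows rather than one round trip per row.

```python
# Rough sketch of executemany-style batching of parameter sets.
def batch_params(params, size):
    # group parameter dictionaries into fixed-size chunks, as a batching
    # execution path might do before handing them to the DBAPI driver
    return [params[i:i + size] for i in range(0, len(params), size)]


rows = [{"name": "user%d" % n} for n in range(5)]
batches = batch_params(rows, 2)
print([len(b) for b in batches])  # [2, 2, 1]
```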
:param objects: a sequence of mapped object instances. The mapped
objects are persisted as is, and are **not** associated with the
False, common types of objects are grouped into inserts
and updates, to allow for more batching opportunities.
- .. versionadded:: 1.3
-
.. seealso::
- :ref:`bulk_operations`
+ :doc:`queryguide/dml`
:meth:`.Session.bulk_insert_mappings`
Proxied for the :class:`_orm.Session` class on
behalf of the :class:`_orm.scoping.scoped_session` class.
- The bulk insert feature allows plain Python dictionaries to be used as
- the source of simple INSERT operations which can be more easily
- grouped together into higher performing "executemany"
- operations. Using dictionaries, there is no "history" or session
- state management features in use, reducing latency when inserting
- large numbers of simple rows.
-
- The values within the dictionaries as given are typically passed
- without modification into Core :meth:`_expression.Insert` constructs,
- after
- organizing the values within them across the tables to which
- the given mapper is mapped.
-
- .. versionadded:: 1.0.0
-
- .. warning::
-
- The bulk insert feature allows for a lower-latency INSERT
- of rows at the expense of most other unit-of-work features.
- Features such as object management, relationship handling,
- and SQL clause support are **silently omitted** in favor of raw
- INSERT of records.
-
- Please note that newer versions of SQLAlchemy are **greatly
- improving the efficiency** of the standard flush process. It is
- **strongly recommended** to not use the bulk methods as they
- represent a forking of SQLAlchemy's functionality and are slowly
- being moved into legacy status. New features such as
- :ref:`orm_dml_returning_objects` are both more efficient than
- the "bulk" methods and provide more predictable functionality.
-
- **Please read the list of caveats at**
- :ref:`bulk_operations_caveats` **before using this method, and
- fully test and confirm the functionality of all code developed
- using these systems.**
+ .. legacy::
+
+ This method is a legacy feature as of the 2.0 series of
+ SQLAlchemy. For modern bulk INSERT and UPDATE, see
+ the sections :ref:`orm_queryguide_bulk_insert` and
+ :ref:`orm_queryguide_bulk_update`. The 2.0 API shares
+ implementation details with this method and adds new features
+ as well.
:param mapper: a mapped class, or the actual :class:`_orm.Mapper`
object,
such as a joined-inheritance mapping, each dictionary must contain all
keys to be populated into all tables.
- :param return_defaults: when True, rows that are missing values which
- generate defaults, namely integer primary key defaults and sequences,
- will be inserted **one at a time**, so that the primary key value
- is available. In particular this will allow joined-inheritance
- and other multi-table mappings to insert correctly without the need
- to provide primary
- key values ahead of time; however,
- :paramref:`.Session.bulk_insert_mappings.return_defaults`
- **greatly reduces the performance gains** of the method overall.
- If the rows
- to be inserted only refer to a single table, then there is no
- reason this flag should be set as the returned default information
- is not used.
+ :param return_defaults: when True, the INSERT process will be altered
+ to ensure that newly generated primary key values will be fetched.
+ The rationale for this parameter is typically to enable
+ :ref:`Joined Table Inheritance <joined_inheritance>` mappings to
+ be bulk inserted.
+
+ .. note:: For backends that don't support RETURNING, the
+ :paramref:`_orm.Session.bulk_insert_mappings.return_defaults`
+ parameter can significantly decrease performance as INSERT
+ statements can no longer be batched. See
+ :ref:`engine_insertmanyvalues`
+ for background on which backends are affected.
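The performance note above can be illustrated with a toy round-trip count. This is a hypothetical sketch, not measured behavior; ``round_trips`` is an invented function modeling why fetching generated primary keys without RETURNING prevents batching.

```python
# Hypothetical model: without RETURNING, each INSERT must run alone so
# the newly generated primary key can be read back for that single row.
def round_trips(num_rows, supports_returning):
    if supports_returning:
        return 1  # one batched INSERT .. RETURNING round trip
    return num_rows  # one INSERT per row, reading back each new id


print(round_trips(10, supports_returning=True))   # 1
print(round_trips(10, supports_returning=False))  # 10
```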
:param render_nulls: When True, a value of ``None`` will result
in a NULL value being included in the INSERT statement, rather
to ensure that no server-side default functions need to be
invoked for the operation as a whole.
- .. versionadded:: 1.1
-
.. seealso::
- :ref:`bulk_operations`
+ :doc:`queryguide/dml`
:meth:`.Session.bulk_save_objects`
Proxied for the :class:`_orm.Session` class on
behalf of the :class:`_orm.scoping.scoped_session` class.
- The bulk update feature allows plain Python dictionaries to be used as
- the source of simple UPDATE operations which can be more easily
- grouped together into higher performing "executemany"
- operations. Using dictionaries, there is no "history" or session
- state management features in use, reducing latency when updating
- large numbers of simple rows.
+ .. legacy::
+
- .. versionadded:: 1.0.0
-
- .. warning::
-
- The bulk update feature allows for a lower-latency UPDATE
- of rows at the expense of most other unit-of-work features.
- Features such as object management, relationship handling,
- and SQL clause support are **silently omitted** in favor of raw
- UPDATES of records.
-
- Please note that newer versions of SQLAlchemy are **greatly
- improving the efficiency** of the standard flush process. It is
- **strongly recommended** to not use the bulk methods as they
- represent a forking of SQLAlchemy's functionality and are slowly
- being moved into legacy status. New features such as
- :ref:`orm_dml_returning_objects` are both more efficient than
- the "bulk" methods and provide more predictable functionality.
-
- **Please read the list of caveats at**
- :ref:`bulk_operations_caveats` **before using this method, and
- fully test and confirm the functionality of all code developed
- using these systems.**
+ This method is a legacy feature as of the 2.0 series of
+ SQLAlchemy. For modern bulk INSERT and UPDATE, see
+ the sections :ref:`orm_queryguide_bulk_insert` and
+ :ref:`orm_queryguide_bulk_update`. The 2.0 API shares
+ implementation details with this method and adds new features
+ as well.
:param mapper: a mapped class, or the actual :class:`_orm.Mapper`
object,
.. seealso::
- :ref:`bulk_operations`
+ :doc:`queryguide/dml`
:meth:`.Session.bulk_insert_mappings`
def scalars(
self,
statement: TypedReturnsRows[Tuple[_T]],
- params: Optional[_CoreSingleExecuteParams] = None,
+ params: Optional[_CoreAnyExecuteParams] = None,
*,
execution_options: _ExecuteOptionsParameter = util.EMPTY_DICT,
bind_arguments: Optional[_BindArguments] = None,
def scalars(
self,
statement: Executable,
- params: Optional[_CoreSingleExecuteParams] = None,
+ params: Optional[_CoreAnyExecuteParams] = None,
*,
execution_options: _ExecuteOptionsParameter = util.EMPTY_DICT,
bind_arguments: Optional[_BindArguments] = None,
def scalars(
self,
statement: Executable,
- params: Optional[_CoreSingleExecuteParams] = None,
+ params: Optional[_CoreAnyExecuteParams] = None,
*,
execution_options: _ExecuteOptionsParameter = util.EMPTY_DICT,
bind_arguments: Optional[_BindArguments] = None,
) -> None:
"""Perform a bulk save of the given list of objects.
- The bulk save feature allows mapped objects to be used as the
- source of simple INSERT and UPDATE operations which can be more easily
- grouped together into higher performing "executemany"
- operations; the extraction of data from the objects is also performed
- using a lower-latency process that ignores whether or not attributes
- have actually been modified in the case of UPDATEs, and also ignores
- SQL expressions.
-
- The objects as given are not added to the session and no additional
- state is established on them. If the
- :paramref:`_orm.Session.bulk_save_objects.return_defaults` flag is set,
- then server-generated primary key values will be assigned to the
- returned objects, but **not server side defaults**; this is a
- limitation in the implementation. If stateful objects are desired,
- please use the standard :meth:`_orm.Session.add_all` approach or
- as an alternative newer mass-insert features such as
- :ref:`orm_dml_returning_objects`.
+ .. legacy::
+
- .. warning::
-
- The bulk save feature allows for a lower-latency INSERT/UPDATE
- of rows at the expense of most other unit-of-work features.
- Features such as object management, relationship handling,
- and SQL clause support are **silently omitted** in favor of raw
- INSERT/UPDATES of records.
+ This method is a legacy feature as of the 2.0 series of
+ SQLAlchemy. For modern bulk INSERT and UPDATE, see
+ the sections :ref:`orm_queryguide_bulk_insert` and
+ :ref:`orm_queryguide_bulk_update`.
- **Please read the list of caveats at**
- :ref:`bulk_operations_caveats` **before using this method, and
- fully test and confirm the functionality of all code developed
- using these systems.**
+ For general INSERT and UPDATE of existing ORM mapped objects,
+ prefer standard :term:`unit of work` data management patterns,
+ introduced in the :ref:`unified_tutorial` at
+ :ref:`tutorial_orm_data_manipulation`. SQLAlchemy 2.0
+ now uses :ref:`engine_insertmanyvalues` with modern dialects
+ which solves previous issues of bulk INSERT slowness.
:param objects: a sequence of mapped object instances. The mapped
objects are persisted as is, and are **not** associated with the
False, common types of objects are grouped into inserts
and updates, to allow for more batching opportunities.
- .. versionadded:: 1.3
-
.. seealso::
- :ref:`bulk_operations`
+ :doc:`queryguide/dml`
:meth:`.Session.bulk_insert_mappings`
) -> None:
"""Perform a bulk insert of the given list of mapping dictionaries.
- The bulk insert feature allows plain Python dictionaries to be used as
- the source of simple INSERT operations which can be more easily
- grouped together into higher performing "executemany"
- operations. Using dictionaries, there is no "history" or session
- state management features in use, reducing latency when inserting
- large numbers of simple rows.
-
- The values within the dictionaries as given are typically passed
- without modification into Core :meth:`_expression.Insert` constructs,
- after
- organizing the values within them across the tables to which
- the given mapper is mapped.
-
- .. warning::
-
- The bulk insert feature allows for a lower-latency INSERT
- of rows at the expense of most other unit-of-work features.
- Features such as object management, relationship handling,
- and SQL clause support are **silently omitted** in favor of raw
- INSERT of records.
+ .. legacy::
+
- **Please read the list of caveats at**
- :ref:`bulk_operations_caveats` **before using this method, and
- fully test and confirm the functionality of all code developed
- using these systems.**
+ This method is a legacy feature as of the 2.0 series of
+ SQLAlchemy. For modern bulk INSERT and UPDATE, see
+ the sections :ref:`orm_queryguide_bulk_insert` and
+ :ref:`orm_queryguide_bulk_update`. The 2.0 API shares
+ implementation details with this method and adds new features
+ as well.
:param mapper: a mapped class, or the actual :class:`_orm.Mapper`
object,
.. seealso::
- :ref:`bulk_operations`
+ :doc:`queryguide/dml`
:meth:`.Session.bulk_save_objects`
) -> None:
"""Perform a bulk update of the given list of mapping dictionaries.
- The bulk update feature allows plain Python dictionaries to be used as
- the source of simple UPDATE operations which can be more easily
- grouped together into higher performing "executemany"
- operations. Using dictionaries, there is no "history" or session
- state management features in use, reducing latency when updating
- large numbers of simple rows.
-
- .. warning::
-
- The bulk update feature allows for a lower-latency UPDATE
- of rows at the expense of most other unit-of-work features.
- Features such as object management, relationship handling,
- and SQL clause support are **silently omitted** in favor of raw
- UPDATES of records.
+ .. legacy::
+
- **Please read the list of caveats at**
- :ref:`bulk_operations_caveats` **before using this method, and
- fully test and confirm the functionality of all code developed
- using these systems.**
+ This method is a legacy feature as of the 2.0 series of
+ SQLAlchemy. For modern bulk INSERT and UPDATE, see
+ the sections :ref:`orm_queryguide_bulk_insert` and
+ :ref:`orm_queryguide_bulk_update`. The 2.0 API shares
+ implementation details with this method and adds new features
+ as well.
:param mapper: a mapped class, or the actual :class:`_orm.Mapper`
object,
.. seealso::
- :ref:`bulk_operations`
+ :doc:`queryguide/dml`
:meth:`.Session.bulk_insert_mappings`
* are normally :term:`lazy loaded` but are not currently loaded
- * are "deferred" via :ref:`deferred` and are not yet loaded
+ * are "deferred" (see :ref:`orm_queryguide_column_deferral`) and are
+ not yet loaded
* were not present in the query which loaded this object, such as that
which is common in joined table inheritance and other scenarios.
Load(Address).load_only(Address.email_address)
)
- .. note:: This method will still load a :class:`_schema.Column` even
- if the column property is defined with ``deferred=True``
- for the :func:`.column_property` function.
+ :param \*attrs: Attributes to be loaded, all others will be deferred.
+
+ :param raiseload: raise :class:`.InvalidRequestError` rather than
+ lazy loading a value when a deferred attribute is accessed. Used
+ to prevent unwanted SQL from being emitted.
+
+ .. versionadded:: 2.0
+
+ .. seealso::
+
+ :ref:`orm_queryguide_column_deferral` - in the
+ :ref:`queryguide_toplevel`
:param \*attrs: Attributes to be loaded, all others will be deferred.
:ref:`prevent_lazy_with_raiseload`
- :ref:`deferred_raiseload`
+ :ref:`orm_queryguide_deferred_raiseload`
"""
.. seealso::
- :meth:`_orm.Load.options` - allows for complex hierarchical
- loader option structures with less verbosity than with individual
- :func:`.defaultload` directives.
-
- :ref:`relationship_loader_options`
+ :ref:`orm_queryguide_relationship_sub_options`
- :ref:`deferred_loading_w_multiple`
+ :meth:`_orm.Load.options`
"""
return self._set_relationship_strategy(attr, None)
.. seealso::
- :ref:`deferred_raiseload`
-
- .. seealso::
+ :ref:`orm_queryguide_column_deferral` - in the
+ :ref:`queryguide_toplevel`
- :ref:`deferred`
+ :func:`_orm.load_only`
:func:`_orm.undefer`
.. seealso::
- :ref:`deferred`
+ :ref:`orm_queryguide_column_deferral` - in the
+ :ref:`queryguide_toplevel`
:func:`_orm.defer`
.. seealso::
- :ref:`deferred`
+ :ref:`orm_queryguide_column_deferral` - in the
+ :ref:`queryguide_toplevel`
:func:`_orm.defer`
:param expr: SQL expression to be applied to the attribute.
- .. note:: the target attribute is populated only if the target object
- is **not currently loaded** in the current :class:`_orm.Session`
- unless the :ref:`orm_queryguide_populate_existing` execution option
- is used. Please refer to :ref:`mapper_querytime_expression` for
- complete usage details.
-
.. seealso::
- :ref:`mapper_querytime_expression`
+ :ref:`orm_queryguide_with_expression` - background and usage
+ examples
"""
The :class:`_orm.Load` object is in most cases used implicitly behind the
scenes when one makes use of a query option like :func:`_orm.joinedload`,
- :func:`.defer`, or similar. However, the :class:`_orm.Load` object
- can also be used directly, and in some cases can be useful.
-
- To use :class:`_orm.Load` directly, instantiate it with the target mapped
- class as the argument. This style of usage is
- useful when dealing with a statement
- that has multiple entities::
-
- myopt = Load(MyClass).joinedload("widgets")
-
- The above ``myopt`` can now be used with :meth:`_sql.Select.options` or
- :meth:`_query.Query.options` where it
- will only take effect for the ``MyClass`` entity::
-
- stmt = select(MyClass, MyOtherClass).options(myopt)
-
- One case where :class:`_orm.Load`
- is useful as public API is when specifying
- "wildcard" options that only take effect for a certain class::
-
- stmt = select(Order).options(Load(Order).lazyload('*'))
-
- Above, all relationships on ``Order`` will be lazy-loaded, but other
- attributes on those descendant objects will load using their normal
- loader strategy.
+ :func:`_orm.defer`, or similar. It typically is not instantiated
+ directly, except in some very specific cases.
.. seealso::
- :ref:`deferred_options`
-
- :ref:`deferred_loading_w_multiple`
-
- :ref:`relationship_loader_options`
+ :ref:`orm_queryguide_relationship_per_entity_wildcard` - illustrates an
+ example where direct use of :class:`_orm.Load` may be useful
"""
:func:`.defaultload`
- :ref:`relationship_loader_options`
-
- :ref:`deferred_loading_w_multiple`
+ :ref:`orm_queryguide_relationship_sub_options`
"""
for opt in opts:
def _with_polymorphic_factory(
cls,
base: Union[_O, Mapper[_O]],
- classes: Iterable[_EntityType[Any]],
+ classes: Union[Literal["*"], Iterable[_EntityType[Any]]],
selectable: Union[Literal[False, None], FromClause] = False,
flat: bool = False,
polymorphic_on: Optional[ColumnElement[Any]] = None,
allowing post-processing as well as custom return types, without
involving ORM identity-mapped classes.
- .. versionadded:: 0.9.0
-
.. seealso::
:ref:`bundles`
) -> Callable[[Row[Any]], Any]:
"""Produce the "row processing" function for this :class:`.Bundle`.
- May be overridden by subclasses.
+ May be overridden by subclasses to provide custom behaviors when
+ results are fetched. The method is passed the statement object and a
+ set of "row processor" functions at query execution time; each of
+ these processor functions, when given a result row, returns the
+ individual attribute value, which can then be adapted into any
+ kind of return data structure.
+
+ The example below illustrates replacing the usual :class:`.Row`
+ return structure with a straight Python dictionary::
+
+ from sqlalchemy.orm import Bundle
+
+ class DictBundle(Bundle):
+ def create_row_processor(self, query, procs, labels):
+ "Override create_row_processor to return values as dictionaries"
+
+ def proc(row):
+ return dict(
+ zip(labels, (proc(row) for proc in procs))
+ )
+ return proc
- .. seealso::
+ A result from the above :class:`_orm.Bundle` will return dictionary
+ values::
- :ref:`bundles` - includes an example of subclassing.
+ bn = DictBundle('mybundle', MyClass.data1, MyClass.data2)
+ for row in session.execute(select(bn).where(bn.c.data1 == 'd1')):
+ print(row.mybundle['data1'], row.mybundle['data2'])
"""
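The composition of per-column "row processor" functions described in the docstring can be demonstrated in plain Python, outside of any ORM machinery. This is a standalone sketch: ``labels``, ``procs``, and ``dict_proc`` are illustrative stand-ins for the values a :class:`.Bundle` subclass would receive.

```python
# Pure-Python sketch of row-processor composition: each proc extracts one
# attribute from a raw row; a wrapping function adapts the results into
# a dictionary keyed by the column labels.
labels = ["data1", "data2"]
procs = [lambda row: row[0], lambda row: row[1]]


def dict_proc(row):
    return dict(zip(labels, (p(row) for p in procs)))


print(dict_proc(("a", "b")))  # {'data1': 'a', 'data2': 'b'}
```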
keyed_tuple = result_tuple(labels, [() for l in labels])
backends that support "returning", this turns off the "implicit
returning" feature for the statement.
- If both :paramref:`_expression.Insert.values` and compile-time bind
+ If both :paramref:`_expression.insert.values` and compile-time bind
parameters are present, the compile-time bind parameters override the
- information specified within :paramref:`_expression.Insert.values` on a
+ information specified within :paramref:`_expression.insert.values` on a
per-key basis.
The keys within :paramref:`_expression.Insert.values` can be either
.. seealso::
- :ref:`deferred_options` - refers to options specific to the usage
+ :ref:`loading_columns` - refers to options specific to the usage
of ORM queries
:ref:`relationship_loader_options` - refers to options specific
@_generative
def add_columns(
- self, *columns: _ColumnsClauseArgument[Any]
+ self, *entities: _ColumnsClauseArgument[Any]
) -> Select[Any]:
- """Return a new :func:`_expression.select` construct with
- the given column expressions added to its columns clause.
+ r"""Return a new :func:`_expression.select` construct with
+ the given entities appended to its columns clause.
E.g.::
my_select = my_select.add_columns(table.c.new_column)
- See the documentation for
- :meth:`_expression.Select.with_only_columns`
- for guidelines on adding /replacing the columns of a
- :class:`_expression.Select` object.
+ The original expressions in the columns clause remain in place.
+ To replace the original expressions with new ones, see the method
+ :meth:`_expression.Select.with_only_columns`.
+
+ :param \*entities: column, table, or other entity expressions to be
+ added to the columns clause
+
+ .. seealso::
+
+ :meth:`_expression.Select.with_only_columns` - replaces existing
+ expressions rather than appending.
+
+ :ref:`orm_queryguide_select_multiple_entities` - ORM-centric
+ example
"""
self._reset_memoizations()
coercions.expect(
roles.ColumnsClauseRole, column, apply_propagate_attrs=self
)
- for column in columns
+ for column in entities
]
return self
@overload
def with_only_columns(
self,
- *columns: _ColumnsClauseArgument[Any],
+ *entities: _ColumnsClauseArgument[Any],
maintain_column_froms: bool = False,
**__kw: Any,
) -> Select[Any]:
@_generative
def with_only_columns(
self,
- *columns: _ColumnsClauseArgument[Any],
+ *entities: _ColumnsClauseArgument[Any],
maintain_column_froms: bool = False,
**__kw: Any,
) -> Select[Any]:
r"""Return a new :func:`_expression.select` construct with its columns
- clause replaced with the given columns.
+ clause replaced with the given entities.
By default, this method is exactly equivalent to as if the original
- :func:`_expression.select` had been called with the given columns
- clause. E.g. a statement::
+ :func:`_expression.select` had been called with the given entities.
+ E.g. a statement::
s = select(table1.c.a, table1.c.b)
s = s.with_only_columns(table1.c.b)
s = select(table1.c.a, table2.c.b)
s = s.select_from(*s.columns_clause_froms).with_only_columns(table1.c.a)
- :param \*columns: column expressions to be used.
-
- .. versionchanged:: 1.4 the :meth:`_sql.Select.with_only_columns`
- method accepts the list of column expressions positionally;
- passing the expressions as a list is deprecated.
+ :param \*entities: column expressions to be used.
:param maintain_column_froms: boolean parameter that will ensure the
FROM list implied from the current columns clause will be transferred
self._raw_columns = [
coercions.expect(roles.ColumnsClauseRole, c)
for c in coercions._expression_collection_was_a_list(
- "columns", "Select.with_only_columns", columns
+ "columns", "Select.with_only_columns", entities
)
]
return self
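The append-versus-replace distinction between the two methods revised above can be seen by compiling small statements. A sketch using lightweight Core `table()`/`column()` constructs, assuming SQLAlchemy is installed; the names `t`, `a`, `b` are illustrative.

```python
# Contrast add_columns (appends to the columns clause) with
# with_only_columns (replaces it entirely); names are illustrative.
from sqlalchemy import column, select, table

t = table("t", column("a"), column("b"))
s = select(t.c.a)

appended = s.add_columns(t.c.b)        # columns clause becomes t.a, t.b
replaced = s.with_only_columns(t.c.b)  # columns clause becomes just t.b

print(" ".join(str(appended).split()))  # SELECT t.a, t.b FROM t
print(" ".join(str(replaced).split()))  # SELECT t.b FROM t
```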
lambda: util.py39, "Python 3.9 or above required"
)
+ @property
+ def python310(self):
+ return exclusions.only_if(
+ lambda: util.py310, "Python 3.10 or above required"
+ )
+
@property
def cpython(self):
return exclusions.only_if(
+from __future__ import annotations
+
import doctest
import logging
import os
class DocTest(fixtures.TestBase):
- __requires__ = ("python39",)
+ __requires__ = ("python310",)
+ __only_on__ = "sqlite"
def _setup_logger(self):
rootlogger = logging.getLogger("sqlalchemy.engine.Engine")
path = os.path.join(sqla_base, "doc/build", fname)
if not os.path.exists(path):
config.skip_test("Can't find documentation file %r" % path)
+
+ buf = []
+ line_counter = 0
+ last_line_counter = 0
with open(path, encoding="utf-8") as file_:
- content = file_.read()
- content = re.sub(r"{(?:stop|sql|opensql)}", "", content)
- test = parser.get_doctest(content, globs, fname, fname, 0)
- runner.run(test, clear_globs=False)
+ def load_include(m):
+ fname = m.group(1)
+ sub_path = os.path.join(os.path.dirname(path), fname)
+ with open(sub_path, encoding="utf-8") as file_:
+ for line in file_:
+ buf.append(line)
+ return fname
+
+ def run_buf(fname, is_include):
+ if not buf:
+ return
+ nonlocal last_line_counter
+ test = parser.get_doctest(
+ "".join(buf),
+ globs,
+ fname,
+ fname,
+ last_line_counter if not is_include else 0,
+ )
+ buf[:] = []
+ runner.run(test, clear_globs=False)
+ globs.update(test.globs)
+
+ if not is_include:
+ last_line_counter = line_counter
+
+ for line in file_:
+ line = re.sub(r"{(?:stop|sql|opensql)}", "", line)
+
+ include = re.match(r"\.\. doctest-include (.+\.rst)", line)
+ if include:
+ run_buf(fname, False)
+ include_fname = load_include(include)
+ run_buf(include_fname, True)
+ else:
+ buf.append(line)
+ line_counter += 1
+
+ run_buf(fname, False)
+
runner.summarize()
- globs.update(test.globs)
assert not runner.failures
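The rewritten `_run_doctest` above leans on the stdlib doctest module's programmatic API: `get_doctest()` parses `>>>` examples out of arbitrary text, and a `DocTestRunner` executes them against a shared globals dict, which is how state carries across buffered chunks. A standalone sketch (the file name and content are illustrative):

```python
# Run ">>>" examples embedded in a text blob via doctest's programmatic API.
import doctest

content = """
Some narrative text, as in an .rst file.

    >>> x = 2 + 3
    >>> x
    5
"""

parser = doctest.DocTestParser()
runner = doctest.DocTestRunner(verbose=False)
globs = {}

test = parser.get_doctest(content, globs, "example", "example.rst", 0)
runner.run(test, clear_globs=False)

# with clear_globs=False, names bound in the chunk survive the run and can
# be propagated forward to the next chunk, as _run_doctest does above
globs.update(test.globs)
print(runner.failures, globs["x"])  # 0 5
```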
@requires.has_json_each
def test_core_operators(self):
self._run_doctest("core/operators.rst")
- def test_orm_queryguide(self):
- self._run_doctest("orm/queryguide.rst")
+ def test_orm_queryguide_select(self):
+ self._run_doctest(
+ "orm/queryguide/_plain_setup.rst",
+ "orm/queryguide/select.rst",
+ "orm/queryguide/api.rst",
+ "orm/queryguide/_end_doctest.rst",
+ )
+
+ def test_orm_queryguide_inheritance(self):
+ self._run_doctest(
+ "orm/queryguide/inheritance.rst",
+ )
+
+ @requires.update_from
+ def test_orm_queryguide_dml(self):
+ self._run_doctest(
+ "orm/queryguide/dml.rst",
+ )
+
+ def test_orm_queryguide_columns(self):
+ self._run_doctest(
+ "orm/queryguide/columns.rst",
+ )
def test_orm_quickstart(self):
self._run_doctest("orm/quickstart.rst")
def test_load_only_raise_option_raise_column_plain(self):
A = self.classes.A
s = fixture_session()
+ a1 = s.query(A).options(defer(A.x)).first()
+ a1.x
+
+ s.close()
a1 = s.query(A).options(load_only(A.y, A.z, raiseload=True)).first()
assert_raises_message(