===============
.. toctree::
-
+ :maxdepth: 1
+
tutorial
expression_api
engines
.. _sqlalchemy.dialects_toplevel:
-sqlalchemy.dialects
-====================
+Dialects
+========
+
+The *dialect* is the system SQLAlchemy uses to communicate with various types of DBAPIs and databases.
+A compatibility chart of supported backends can be found at :ref:`supported_dbapis`.
+
+This section contains all notes and documentation specific to the usage of various backends.
Supported Databases
-------------------
current versions of SQLAlchemy.
.. toctree::
+ :maxdepth: 1
:glob:
firebird
ported to current versions of SQLAlchemy.
.. toctree::
+ :maxdepth: 1
:glob:
access
=================
.. toctree::
-
+ :maxdepth: 2
+
intro
orm/index
core/index
--- /dev/null
+.. _collections_toplevel:
+
+Collection Configuration and Techniques
+=======================================
+
+The :func:`.relationship` function defines a linkage between two classes.
+When the linkage defines a one-to-many or many-to-many relationship, it's
+represented as a Python collection when objects are loaded and manipulated.
+This section presents additional information about collection configuration
+and techniques.
+
+.. _largecollections:
+.. currentmodule:: sqlalchemy.orm
+
+Working with Large Collections
+-------------------------------
+
+The default behavior of :func:`.relationship` is to fully load
+the collection of items in, according to the loading strategy of the
+relationship. Additionally, the Session by default only knows how to delete
+objects which are actually present within the session. When a parent instance
+is marked for deletion and flushed, the Session loads its full list of child
+items in so that they may either be deleted as well, or have their foreign key
+value set to null; this is to avoid constraint violations. For large
+collections of child items, there are several strategies to bypass full
+loading of child items both at load time as well as deletion time.
+
+Dynamic Relationship Loaders
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The most useful by far is the :func:`~sqlalchemy.orm.dynamic_loader`
+relationship. This is a variant of :func:`~sqlalchemy.orm.relationship` which
+returns a :class:`~sqlalchemy.orm.query.Query` object in place of a collection
+when accessed. :func:`~sqlalchemy.orm.query.Query.filter` criterion may be
+applied as well as limits and offsets, either explicitly or via array slices:
+
+.. sourcecode:: python+sql
+
+ mapper(User, users_table, properties={
+ 'posts': dynamic_loader(Post)
+ })
+
+ jack = session.query(User).get(id)
+
+ # filter Jack's blog posts
+ posts = jack.posts.filter(Post.headline=='this is a post')
+
+ # apply array slices
+ posts = jack.posts[5:20]
+
+The dynamic relationship supports limited write operations, via the
+``append()`` and ``remove()`` methods. Since the read side of the dynamic
+relationship always queries the database, changes to the underlying collection
+will not be visible until the data has been flushed:
+
+.. sourcecode:: python+sql
+
+ oldpost = jack.posts.filter(Post.headline=='old post').one()
+ jack.posts.remove(oldpost)
+
+ jack.posts.append(Post('new post'))
+
+To place a dynamic relationship on a backref, use ``lazy='dynamic'``:
+
+.. sourcecode:: python+sql
+
+ mapper(Post, posts_table, properties={
+ 'user': relationship(User, backref=backref('posts', lazy='dynamic'))
+ })
+
+Note that eager/lazy loading options cannot be used in conjunction with dynamic relationships at this time.
+
+Setting Noload
+~~~~~~~~~~~~~~~
+
+The opposite of the dynamic relationship is simply "noload", specified using ``lazy='noload'``:
+
+.. sourcecode:: python+sql
+
+ mapper(MyClass, table, properties={
+ 'children': relationship(MyOtherClass, lazy='noload')
+ })
+
+Above, the ``children`` collection is fully writeable, and changes to it will
+be persisted to the database as well as locally available for reading at the
+time they are added. However when instances of ``MyClass`` are freshly loaded
+from the database, the ``children`` collection stays empty.
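+
+A brief sketch of this behavior, assuming the mapping above and a configured
+``session``:
+
+.. sourcecode:: python+sql
+
+ obj = MyClass()
+ obj.children.append(MyOtherClass())
+ session.add(obj)
+ session.commit()       # the child row is INSERTed and its foreign key set as usual
+
+ session.expunge_all()
+ obj = session.query(MyClass).first()
+ print obj.children     # prints [] - 'noload' leaves the collection unloaded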
+
+Using Passive Deletes
+~~~~~~~~~~~~~~~~~~~~~~
+
+Use ``passive_deletes=True`` to disable child object loading on a DELETE
+operation, in conjunction with "ON DELETE (CASCADE|SET NULL)" on your database
+to automatically cascade deletes to child objects. Note that "ON DELETE" is
+not supported on SQLite, and requires ``InnoDB`` tables when using MySQL:
+
+.. sourcecode:: python+sql
+
+ mytable = Table('mytable', meta,
+ Column('id', Integer, primary_key=True),
+ )
+
+ myothertable = Table('myothertable', meta,
+ Column('id', Integer, primary_key=True),
+ Column('parent_id', Integer),
+ ForeignKeyConstraint(['parent_id'], ['mytable.id'], ondelete="CASCADE"),
+ )
+
+ mapper(MyOtherClass, myothertable)
+
+ mapper(MyClass, mytable, properties={
+ 'children': relationship(MyOtherClass, cascade="all, delete-orphan", passive_deletes=True)
+ })
+
+When ``passive_deletes`` is applied, the ``children`` relationship will not be
+loaded into memory when an instance of ``MyClass`` is marked for deletion. The
+``cascade="all, delete-orphan"`` *will* take effect for instances of
+``MyOtherClass`` which are currently present in the session; however for
+instances of ``MyOtherClass`` which are not loaded, SQLAlchemy assumes that
+"ON DELETE CASCADE" rules will ensure that those rows are deleted by the
+database and that no foreign key violation will occur.
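+
+A brief sketch of a delete under this configuration, assuming the mappings
+above and a configured ``session`` (``some_id`` is a placeholder value):
+
+.. sourcecode:: python+sql
+
+ parent = session.query(MyClass).get(some_id)
+
+ # no SELECT of the unloaded 'children' collection occurs here
+ session.delete(parent)
+
+ # the flush emits a DELETE for the parent row only; the database's
+ # ON DELETE CASCADE rule removes the remaining child rows
+ session.flush()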
+
+.. currentmodule:: sqlalchemy.orm.collections
+.. _custom_collections:
+
+Customizing Collection Access
+-----------------------------
+
+Mapping a one-to-many or many-to-many relationship results in a collection of
+values accessible through an attribute on the parent instance. By default,
+this collection is a ``list``::
+
+ mapper(Parent, parent_table, properties={
+     'children': relationship(Child)
+ })
+
+ parent = Parent()
+ parent.children.append(Child())
+ print parent.children[0]
+
+Collections are not limited to lists. Sets, mutable sequences and almost any
+other Python object that can act as a container can be used in place of the
+default list, by specifying the ``collection_class`` option on
+:func:`~sqlalchemy.orm.relationship`.
+
+.. sourcecode:: python+sql
+
+ # use a set
+ mapper(Parent, parent_table, properties={
+     'children': relationship(Child, collection_class=set)
+ })
+
+ parent = Parent()
+ child = Child()
+ parent.children.add(child)
+ assert child in parent.children
+
+
+Custom Collection Implementations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can use your own types for collections as well. For most cases, simply
+inherit from ``list`` or ``set`` and add the custom behavior.
+
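+For example, a brief sketch of a ``list`` subclass that adds one convenience
+method (``parent_table`` and the ``Parent``/``Child`` classes are as in the
+examples above; the ``name`` attribute on ``Child`` is assumed)::
+
+ class ChildList(list):
+     def names(self):
+         # hypothetical helper; assumes each Child has a 'name' attribute
+         return [child.name for child in self]
+
+ mapper(Parent, parent_table, properties={
+     'children': relationship(Child, collection_class=ChildList)
+ })
+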
+Collections in SQLAlchemy are transparently *instrumented*. Instrumentation
+means that normal operations on the collection are tracked and result in
+changes being written to the database at flush time. Additionally, collection
+operations can fire *events* which indicate some secondary operation must take
+place. Examples of a secondary operation include saving the child item in the
+parent's :class:`~sqlalchemy.orm.session.Session` (i.e. the ``save-update``
+cascade), as well as synchronizing the state of a bi-directional relationship
+(i.e. a ``backref``).
+
+The collections package understands the basic interface of lists, sets and
+dicts and will automatically apply instrumentation to those built-in types and
+their subclasses. Object-derived types that implement a basic collection
+interface are detected and instrumented via duck-typing:
+
+.. sourcecode:: python+sql
+
+ class ListLike(object):
+ def __init__(self):
+ self.data = []
+ def append(self, item):
+ self.data.append(item)
+ def remove(self, item):
+ self.data.remove(item)
+ def extend(self, items):
+ self.data.extend(items)
+ def __iter__(self):
+ return iter(self.data)
+ def foo(self):
+ return 'foo'
+
+``append``, ``remove``, and ``extend`` are known list-like methods, and will be instrumented automatically. ``__iter__`` is not a mutator method and won't be instrumented, and ``foo`` won't be either.
+
+Duck-typing (i.e. guesswork) isn't rock-solid, of course, so you can be
+explicit about the interface you are implementing by providing an
+``__emulates__`` class attribute::
+
+ class SetLike(object):
+ __emulates__ = set
+
+ def __init__(self):
+ self.data = set()
+ def append(self, item):
+ self.data.add(item)
+ def remove(self, item):
+ self.data.remove(item)
+ def __iter__(self):
+ return iter(self.data)
+
+This class looks list-like because of ``append``, but ``__emulates__`` forces
+it to be treated as set-like. ``remove`` is known to be part of the set interface and will
+be instrumented.
+
+But this class won't work quite yet: a little glue is needed to adapt it for
+use by SQLAlchemy. The ORM needs to know which methods to use to append,
+remove and iterate over members of the collection. When using a type like
+``list`` or ``set``, the appropriate methods are well-known and used
+automatically when present. This set-like class does not provide the expected
+``add`` method, so we must supply an explicit mapping for the ORM via a
+decorator.
+
+Annotating Custom Collections via Decorators
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Decorators can be used to tag the individual methods the ORM needs to manage
+collections. Use them when your class doesn't quite meet the regular interface
+for its container type, or you simply would like to use a different method to
+get the job done.
+
+.. sourcecode:: python+sql
+
+ from sqlalchemy.orm.collections import collection
+
+ class SetLike(object):
+ __emulates__ = set
+
+ def __init__(self):
+ self.data = set()
+
+ @collection.appender
+ def append(self, item):
+ self.data.add(item)
+
+ def remove(self, item):
+ self.data.remove(item)
+
+ def __iter__(self):
+ return iter(self.data)
+
+And that's all that's needed to complete the example. SQLAlchemy will add
+instances via the ``append`` method. ``remove`` and ``__iter__`` are the
+default methods for sets and will be used for removing and iteration. Default
+methods can be changed as well:
+
+.. sourcecode:: python+sql
+
+ from sqlalchemy.orm.collections import collection
+
+ class MyList(list):
+     @collection.remover
+     def zark(self, item):
+         # do something special...
+         list.remove(self, item)
+
+     @collection.iterator
+     def hey_use_this_instead_for_iteration(self):
+         # return a plain iterator over the list's contents
+         return list.__iter__(self)
+
+There is no requirement to be list-, or set-like at all. Collection classes
+can be any shape, so long as they have the append, remove and iterate
+interface marked for SQLAlchemy's use. Append and remove methods will be
+called with a mapped entity as the single argument, and iterator methods are
+called with no arguments and must return an iterator.
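+
+For example, a sketch of a collection with an entirely custom interface, with
+the three roles marked explicitly (the class and method names here are purely
+illustrative):
+
+.. sourcecode:: python+sql
+
+ from sqlalchemy.orm.collections import collection
+
+ class Bag(object):
+     """A collection whose methods are neither list- nor set-like."""
+
+     def __init__(self):
+         self._members = []
+
+     @collection.appender
+     def put(self, item):
+         self._members.append(item)
+
+     @collection.remover
+     def take(self, item):
+         self._members.remove(item)
+
+     @collection.iterator
+     def browse(self):
+         return iter(self._members)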
+
+Dictionary-Based Collections
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A ``dict`` can be used as a collection, but a keying strategy is needed to map
+entities loaded by the ORM to key, value pairs. The
+:mod:`sqlalchemy.orm.collections` package provides several built-in types for
+dictionary-based collections:
+
+.. sourcecode:: python+sql
+
+ from sqlalchemy.orm.collections import column_mapped_collection, attribute_mapped_collection, mapped_collection
+
+ mapper(Item, items_table, properties={
+ # key by column
+ 'notes': relationship(Note, collection_class=column_mapped_collection(notes_table.c.keyword)),
+ # or named attribute
+ 'notes2': relationship(Note, collection_class=attribute_mapped_collection('keyword')),
+ # or any callable
+ 'notes3': relationship(Note, collection_class=mapped_collection(lambda entity: entity.a + entity.b))
+ })
+
+ # ...
+ item = Item()
+ item.notes['color'] = Note('color', 'blue')
+ print item.notes['color']
+
+These functions each provide a ``dict`` subclass with decorated ``set`` and
+``remove`` methods and the keying strategy of your choice.
+
+The :class:`sqlalchemy.orm.collections.MappedCollection` class can be used as
+a base class for your custom types or as a mix-in to quickly add ``dict``
+collection support to other classes. It uses a keying function to delegate to
+``__setitem__`` and ``__delitem__``:
+
+.. sourcecode:: python+sql
+
+ from sqlalchemy.util import OrderedDict
+ from sqlalchemy.orm.collections import MappedCollection
+
+ class NodeMap(OrderedDict, MappedCollection):
+ """Holds 'Node' objects, keyed by the 'name' attribute with insert order maintained."""
+
+ def __init__(self, *args, **kw):
+ MappedCollection.__init__(self, keyfunc=lambda node: node.name)
+ OrderedDict.__init__(self, *args, **kw)
+
+The ORM understands the ``dict`` interface just like lists and sets, and will
+automatically instrument all dict-like methods if you choose to subclass
+``dict`` or provide dict-like collection behavior in a duck-typed class. You
+must decorate appender and remover methods, however, since there are no
+compatible methods in the basic dictionary interface for SQLAlchemy to use by
+default.
+Iteration will go through ``itervalues()`` unless otherwise decorated.
+
+Instrumentation and Custom Types
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Many custom types and existing library classes can be used as an entity
+collection type as-is without further ado. However, it is important to note
+that the instrumentation process *will* modify the type, adding decorators
+around methods automatically.
+
+The decorations are lightweight and no-op outside of relationships, but they
+do add unneeded overhead when triggered elsewhere. When using a library class
+as a collection, it can be good practice to use the "trivial subclass" trick
+to restrict the decorations to just your usage in relationships. For example:
+
+.. sourcecode:: python+sql
+
+ class MyAwesomeList(some.great.library.AwesomeList):
+ pass
+
+ # ... relationship(..., collection_class=MyAwesomeList)
+
+The ORM uses this approach for built-ins, quietly substituting a trivial
+subclass when a ``list``, ``set`` or ``dict`` is used directly.
+
+The collections package provides additional decorators and support for
+authoring custom types. See the :mod:`sqlalchemy.orm.collections` package for
+more information and discussion of advanced usage and Python 2.3-compatible
+decoration options.
+
+Collections API
+~~~~~~~~~~~~~~~
+
+.. autofunction:: attribute_mapped_collection
+
+.. autoclass:: collection
+
+.. autoclass:: sqlalchemy.orm.collections.MappedCollection
+ :members:
+
+.. autofunction:: collection_adapter
+
+.. autofunction:: column_mapped_collection
+
+.. autofunction:: mapped_collection
+
===============
.. toctree::
-
+ :maxdepth: 1
+
tutorial
mapper_config
relationships
+ collections
+ inheritance
session
query
loading
events
- collections
extensions
examples
deprecated
'employees': relationship(Employee, backref='company')
})
-SQLAlchemy has a lot of experience in this area; the optimized "outer join"
-approach can be used freely for parent and child relationships, eager loads
-are fully useable, :func:`~sqlalchemy.orm.aliased` objects and other
-techniques are fully supported as well.
+Relationships with Concrete Inheritance
++++++++++++++++++++++++++++++++++++++++
-In a concrete inheritance scenario, mapping relationships is more difficult
+In a concrete inheritance scenario, mapping relationships is more challenging
since the distinct classes do not share a table. In this case, you *can*
establish a relationship from parent to child if a join condition can be
constructed from parent to child, if each child table contains a foreign key
Column('company_id', Integer, ForeignKey('companies.id'))
)
- mapper(Employee, employees_table, with_polymorphic=('*', pjoin), polymorphic_on=pjoin.c.type, polymorphic_identity='employee')
- mapper(Manager, managers_table, inherits=employee_mapper, concrete=True, polymorphic_identity='manager')
- mapper(Engineer, engineers_table, inherits=employee_mapper, concrete=True, polymorphic_identity='engineer')
+ mapper(Employee, employees_table,
+ with_polymorphic=('*', pjoin),
+ polymorphic_on=pjoin.c.type,
+ polymorphic_identity='employee')
+
+ mapper(Manager, managers_table,
+ inherits=employee_mapper,
+ concrete=True,
+ polymorphic_identity='manager')
+
+ mapper(Engineer, engineers_table,
+ inherits=employee_mapper,
+ concrete=True,
+ polymorphic_identity='engineer')
+
mapper(Company, companies, properties={
'employees': relationship(Employee)
})
mapper(C, c_table, properties={
'many_a':relationship(A, collection_class=set, back_populates='some_c'),
})
+
+Using Inheritance with Declarative
+-----------------------------------
+
+Declarative makes inheritance configuration more intuitive. See the docs at :ref:`declarative_inheritance`.
--- /dev/null
+.. _loading_toplevel:
+
+.. currentmodule:: sqlalchemy.orm
+
+Relationship Loading Techniques
+===============================
+
+A big part of SQLAlchemy is providing a wide range of control over how related objects get loaded when querying. This behavior
+can be configured at mapper construction time using the ``lazy`` parameter to the :func:`.relationship` function,
+as well as by using options with the :class:`.Query` object.
+
+Using Loader Strategies: Lazy Loading, Eager Loading
+----------------------------------------------------
+
+In the :ref:`ormtutorial_toplevel`, we introduced the concept of **Eager
+Loading**. We used an ``option`` in conjunction with the
+:class:`~sqlalchemy.orm.query.Query` object in order to indicate that a
+relationship should be loaded at the same time as the parent, within a single
+SQL query:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> jack = session.query(User).options(joinedload('addresses')).filter_by(name='jack').all() #doctest: +NORMALIZE_WHITESPACE
+ SELECT addresses_1.id AS addresses_1_id, addresses_1.email_address AS addresses_1_email_address,
+ addresses_1.user_id AS addresses_1_user_id, users.id AS users_id, users.name AS users_name,
+ users.fullname AS users_fullname, users.password AS users_password
+ FROM users LEFT OUTER JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id
+ WHERE users.name = ?
+ ['jack']
+
+By default, all inter-object relationships use **lazy loading**. The scalar or
+collection attribute associated with a :func:`~sqlalchemy.orm.relationship`
+contains a trigger which fires the first time the attribute is accessed, which
+issues a SQL call at that point:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> jack.addresses
+ SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address,
+ addresses.user_id AS addresses_user_id
+ FROM addresses
+ WHERE ? = addresses.user_id
+ [5]
+ {stop}[<Address(u'jack@google.com')>, <Address(u'j25@yahoo.com')>]
+
+A second option for eager loading exists, called "subquery" loading. This kind
+of eager loading emits an additional SQL statement for each collection
+requested, aggregated across all parent objects:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> jack = session.query(User).options(subqueryload('addresses')).filter_by(name='jack').all()
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname,
+ users.password AS users_password
+ FROM users
+ WHERE users.name = ?
+ ('jack',)
+ SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address,
+ addresses.user_id AS addresses_user_id, anon_1.users_id AS anon_1_users_id
+ FROM (SELECT users.id AS users_id
+ FROM users
+ WHERE users.name = ?) AS anon_1 JOIN addresses ON anon_1.users_id = addresses.user_id
+ ORDER BY anon_1.users_id, addresses.id
+ ('jack',)
+
+The default **loader strategy** for any :func:`~sqlalchemy.orm.relationship`
+is configured by the ``lazy`` keyword argument, which defaults to ``select``.
+Below we set it to ``joined`` so that the ``children`` relationship is eager
+loaded using a join:
+
+.. sourcecode:: python+sql
+
+ # load the 'children' collection using LEFT OUTER JOIN
+ mapper(Parent, parent_table, properties={
+ 'children': relationship(Child, lazy='joined')
+ })
+
+We can also set it to eagerly load using a second query for all collections,
+using ``subquery``:
+
+.. sourcecode:: python+sql
+
+ # load the 'children' attribute using a join to a subquery
+ mapper(Parent, parent_table, properties={
+ 'children': relationship(Child, lazy='subquery')
+ })
+
+When querying, all three choices of loader strategy are available on a
+per-query basis, using the :func:`~sqlalchemy.orm.joinedload`,
+:func:`~sqlalchemy.orm.subqueryload` and :func:`~sqlalchemy.orm.lazyload`
+query options:
+
+.. sourcecode:: python+sql
+
+ # set children to load lazily
+ session.query(Parent).options(lazyload('children')).all()
+
+ # set children to load eagerly with a join
+ session.query(Parent).options(joinedload('children')).all()
+
+ # set children to load eagerly with a second statement
+ session.query(Parent).options(subqueryload('children')).all()
+
+To reference a relationship that is deeper than one level, separate the names by periods:
+
+.. sourcecode:: python+sql
+
+ session.query(Parent).options(joinedload('foo.bar.bat')).all()
+
+When using dot-separated names with :func:`~sqlalchemy.orm.joinedload` or
+:func:`~sqlalchemy.orm.subqueryload`, the option applies **only** to the actual
+attribute named, and **not** to its ancestors. For example, suppose a mapping
+from ``A`` to ``B`` to ``C``, where the relationships, named ``atob`` and
+``btoc``, are both lazy-loading. A statement like the following:
+
+.. sourcecode:: python+sql
+
+ session.query(A).options(joinedload('atob.btoc')).all()
+
+will load only ``A`` objects to start. When the ``atob`` attribute on each ``A`` is accessed, the returned ``B`` objects will *eagerly* load their ``C`` objects.
+
+Therefore, to modify the eager load to load both ``atob`` as well as ``btoc``, place joinedloads for both:
+
+.. sourcecode:: python+sql
+
+ session.query(A).options(joinedload('atob'), joinedload('atob.btoc')).all()
+
+or more simply just use :func:`~sqlalchemy.orm.joinedload_all` or :func:`~sqlalchemy.orm.subqueryload_all`:
+
+.. sourcecode:: python+sql
+
+ session.query(A).options(joinedload_all('atob.btoc')).all()
+
+There are two other loader strategies available, **dynamic loading** and **no loading**; these are described in :ref:`largecollections`.
+
+What Kind of Loading to Use?
+-----------------------------
+
+Which type of loading to use typically comes down to optimizing the tradeoff
+between the number of SQL executions, the complexity of the SQL emitted, and the
+amount of data fetched. Let's take two examples, a :func:`~sqlalchemy.orm.relationship`
+which references a collection, and a :func:`~sqlalchemy.orm.relationship` that
+references a scalar many-to-one reference; a brief configuration sketch
+reflecting these choices follows the list below.
+
+* One to Many Collection
+
+ * When using the default lazy loading, if you load 100 objects, and then access a collection on each of
+ them, a total of 101 SQL statements will be emitted, although each statement will typically be a
+ simple SELECT without any joins.
+
+ * When using joined loading, the load of 100 objects and their collections will emit only one SQL
+ statement. However, the
+ total number of rows fetched will be equal to the sum of the size of all the collections, plus one
+ extra row for each parent object that has an empty collection. Each row will also contain the full
+ set of columns represented by the parents, repeated for each collection item; SQLAlchemy does not
+ re-fetch these columns other than those of the primary key, but most DBAPIs (with some
+ exceptions) will transmit the full data of each parent over the wire to the client connection in
+ any case. Therefore joined eager loading only makes sense when the size of the collections is
+ relatively small. The LEFT OUTER JOIN can also be performance intensive compared to an INNER JOIN.
+
+ * When using subquery loading, the load of 100 objects will emit two SQL statements. The second
+ statement will fetch a total number of rows equal to the sum of the size of all collections. An
+ INNER JOIN is used, and a minimum of parent columns are requested, only the primary keys. So a
+ subquery load makes sense when the collections are larger.
+
+ * When multiple levels of depth are used with joined or subquery loading, loading
+ collections-within-collections will multiply the total number of rows fetched in a cartesian
+ fashion. Both forms of eager loading always join from the original parent class.
+
+* Many to One Reference
+
+ * When using the default lazy loading, a load of 100 objects will, as in the case of the collection,
+ emit as many as 101 SQL statements. However, there is a significant exception to this, in that
+ if the many-to-one reference is a simple foreign key reference to the target's primary key, each
+ reference will be checked first in the current identity map using ``query.get()``. So here,
+ if the collection of objects references a relatively small set of target objects, or the full set
+ of possible target objects has already been loaded into the session and is strongly referenced,
+ using the default of ``lazy='select'`` is by far the most efficient way to go.
+
+ * When using joined loading, the load of 100 objects will emit only one SQL statement. The join
+ will be a LEFT OUTER JOIN, and the total number of rows will be equal to 100 in all cases.
+ If you know that each parent definitely has a child (i.e. the foreign
+ key reference is NOT NULL), the joined load can be configured with ``innerjoin=True``, which is
+ usually specified within the :func:`~sqlalchemy.orm.relationship`. For a load of objects where
+ there are many possible target references which may not have been loaded already, joined loading
+ with an INNER JOIN is extremely efficient.
+
+ * Subquery loading will issue a second load for all the child objects, so for a load of 100 objects
+ there would be two SQL statements emitted. There's probably not much advantage here over
+ joined loading, however, except perhaps that subquery loading can use an INNER JOIN in all cases
+ whereas joined loading requires that the foreign key is NOT NULL.
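+
+As a brief configuration sketch reflecting the tradeoffs above (the ``Parent``,
+``Child`` and ``parent_table`` names are as in earlier examples; the
+``category`` relationship to a hypothetical ``Category`` class is illustrative)::
+
+ mapper(Parent, parent_table, properties={
+     # a potentially large collection: subquery loading keeps the parent
+     # query simple and fetches the children in one extra statement
+     'children': relationship(Child, lazy='subquery'),
+
+     # a many-to-one with a NOT NULL foreign key: joined loading with an
+     # INNER JOIN adds no extra statements and no extra rows
+     'category': relationship(Category, lazy='joined', innerjoin=True)
+ })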
+
+Routing Explicit Joins/Statements into Eagerly Loaded Collections
+------------------------------------------------------------------
+
+The behavior of :func:`~sqlalchemy.orm.joinedload()` is such that joins are
+created automatically, the results of which are routed into collections and
+scalar references on loaded objects. It is often the case that a query already
+includes the necessary joins which represent a particular collection or scalar
+reference, and the joins added by the joinedload feature are redundant - yet
+you'd still like the collections/references to be populated.
+
+For this SQLAlchemy supplies the :func:`~sqlalchemy.orm.contains_eager()`
+option. This option is used in the same manner as the
+:func:`~sqlalchemy.orm.joinedload()` option except it is assumed that the
+:class:`~sqlalchemy.orm.query.Query` will specify the appropriate joins
+explicitly. Below it's used with a ``from_statement`` load::
+
+ # mapping is the users->addresses mapping
+ mapper(User, users_table, properties={
+ 'addresses': relationship(Address, addresses_table)
+ })
+
+ # define a query on USERS with an outer join to ADDRESSES
+ statement = users_table.outerjoin(addresses_table).select().apply_labels()
+
+ # construct a Query object which expects the "addresses" results
+ query = session.query(User).options(contains_eager('addresses'))
+
+ # get results normally
+ r = query.from_statement(statement)
+
+It works just as well with an inline ``Query.join()`` or
+``Query.outerjoin()``::
+
+ session.query(User).outerjoin(User.addresses).options(contains_eager(User.addresses)).all()
+
+If the "eager" portion of the statement is "aliased", the ``alias`` keyword
+argument to :func:`~sqlalchemy.orm.contains_eager` may be used to indicate it.
+This is a string alias name or reference to an actual
+:class:`~sqlalchemy.sql.expression.Alias` (or other selectable) object:
+
+.. sourcecode:: python+sql
+
+ # use an alias of the Address entity
+ adalias = aliased(Address)
+
+ # construct a Query object which expects the "addresses" results
+ query = session.query(User).\
+ outerjoin((adalias, User.addresses)).\
+ options(contains_eager(User.addresses, alias=adalias))
+
+ # get results normally
+ {sql}r = query.all()
+ SELECT users.user_id AS users_user_id, users.user_name AS users_user_name, adalias.address_id AS adalias_address_id,
+ adalias.user_id AS adalias_user_id, adalias.email_address AS adalias_email_address, (...other columns...)
+ FROM users LEFT OUTER JOIN email_addresses AS email_addresses_1 ON users.user_id = email_addresses_1.user_id
+
+The ``alias`` argument is used only as a source of columns to match up to the
+result set. You can even use it to match up the result to arbitrary label
+names in a string SQL statement, by passing a selectable() which links those
+labels to the mapped :class:`~sqlalchemy.schema.Table`::
+
+ # label the columns of the addresses table
+ eager_columns = select([
+ addresses.c.address_id.label('a1'),
+ addresses.c.email_address.label('a2'),
+ addresses.c.user_id.label('a3')])
+
+ # select from a raw SQL statement which uses those label names for the
+ # addresses table. contains_eager() matches them up.
+ query = session.query(User).\
+ from_statement("select users.*, addresses.address_id as a1, "
+ "addresses.email_address as a2, addresses.user_id as a3 "
+ "from users left outer join addresses on users.user_id=addresses.user_id").\
+ options(contains_eager(User.addresses, alias=eager_columns))
+
+The path given as the argument to :func:`~sqlalchemy.orm.contains_eager` needs
+to be a full path from the starting entity. For example, if we were loading
+``User->orders->Order->items->Item``, the string version would look like::
+
+ query(User).options(contains_eager('orders', 'items'))
+
+Or using the class-bound descriptor::
+
+ query(User).options(contains_eager(User.orders, Order.items))
+
+A variant on :func:`~sqlalchemy.orm.contains_eager` is the
+``contains_alias()`` option, which is used in the rare case that the parent
+object is loaded from an alias within a user-defined SELECT statement::
+
+ # define an aliased UNION called 'ulist'
+ statement = users.select(users.c.user_id==7).union(users.select(users.c.user_id>7)).alias('ulist')
+
+ # add on an eager load of "addresses"
+ statement = statement.outerjoin(addresses).select().apply_labels()
+
+ # create query, indicating "ulist" is an alias for the main table, "addresses" property should
+ # be eager loaded
+ query = session.query(User).options(contains_alias('ulist'), contains_eager('addresses'))
+
+ # results
+ r = query.from_statement(statement)
+
+Relation Loader API
+--------------------
+
+.. autofunction:: contains_alias
+
+.. autofunction:: contains_eager
+
+.. autofunction:: eagerload
+
+.. autofunction:: eagerload_all
+
+.. autofunction:: joinedload
+
+.. autofunction:: joinedload_all
+
+.. autofunction:: lazyload
+
+.. autofunction:: subqueryload
+
+.. autofunction:: subqueryload_all
:class:`.ClauseElement` may be
used. Unlike older versions of SQLAlchemy, there is no :func:`~.sql.expression.label` requirement::
+ from sqlalchemy.orm import column_property
+
mapper(User, users_table, properties={
'fullname': column_property(
users_table.c.firstname + " " + users_table.c.lastname
)
})
-Correlated subqueries may be used as well:
-
-.. sourcecode:: python+sql
+Correlated subqueries may be used as well::
+ from sqlalchemy.orm import column_property
from sqlalchemy import select, func
mapper(User, users_table, properties={
The declarative form of the above is described in :ref:`declarative_sql_expressions`.
+.. autofunction:: column_property
+
Note that :func:`.column_property` is used to provide the effect of a SQL
expression that is actively rendered into the SELECT generated for a
particular mapped class. Alternatively, for the typical attribute that
as SQL expressions. The :mod:`.derived_attributes` example is slated to become a
built-in feature of SQLAlchemy in a future release.
-.. autofunction:: column_property
Changing Attribute Behavior
----------------------------
Each of :func:`.column_property`, :func:`~.composite`, :func:`.relationship`,
and :func:`.comparable_property` accept an argument called
-``comparator_factory``. A subclass of :class:`.PropComparator` can be provided
+``comparator_factory``. A subclass of :class:`.PropComparator` can be provided
for this argument, which can then reimplement basic Python comparison methods
-such as ``__eq__()``, ``__ne__()``, ``__lt__()``, and so on. See each of those
-functions for subclassing guidelines, as it's usually best to subclass the
-:class:`.PropComparator` subclass used by that type of property, so that all
-methods remain implemented. For example, to allow a column-mapped attribute to
+such as ``__eq__()``, ``__ne__()``, ``__lt__()``, and so on.
+
+It's best to subclass the :class:`.PropComparator` subclass provided by
+each type of property. For example, to allow a column-mapped attribute to
do case-insensitive comparison::
from sqlalchemy.orm.properties import ColumnProperty
comparator_factory=MyComparator)
})
-Above, comparisons on the ``email`` column are wrapped in the SQL lower() function to produce case-insensitive matching::
+Above, comparisons on the ``email`` column are wrapped in the SQL lower()
+function to produce case-insensitive matching::
>>> str(EmailAddress.email == 'SomeAddress@foo.com')
lower(addresses.email) = lower(:lower_1)
-In contrast, a similar effect is more easily accomplished, although
-with less control of it's behavior, using a column-mapped expression::
-
- from sqlachemy.orm import column_property
- from sqlalchemy.sql import func
-
- mapper(EmailAddress, addresses_table, properties={
- 'email':column_property(func.lower(addresses_table.c.email))
- })
+When building a :class:`.PropComparator`, the ``__clause_element__()`` method
+should be used in order to acquire the underlying mapped column. This will
+return a column that is appropriately wrapped in any kind of subquery
+or aliasing that has been applied in the context of the generated SQL statement.
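+
+For example, a sketch of how the case-insensitive comparator above might be
+written in terms of ``__clause_element__()``::
+
+ from sqlalchemy.orm.properties import ColumnProperty
+ from sqlalchemy import func
+
+ class MyComparator(ColumnProperty.Comparator):
+     def __eq__(self, other):
+         # __clause_element__() returns the mapped column, wrapped in
+         # whatever aliasing applies to the current statement
+         return func.lower(self.__clause_element__()) == func.lower(other)
+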
-In the above case, the "email" attribute will be rendered as ``lower(email)``
-in all queries, including in the columns clause of the SELECT statement.
-This means the value of "email" will be loaded as lower case, not just in
-comparisons. It's up to the user to decide if the finer-grained control
-but more upfront work of a custom :class:`.PropComparator` is necessary.
+.. autoclass:: sqlalchemy.orm.interfaces.PropComparator
+ :show-inheritance:
.. autofunction:: comparable_property
+
.. _mapper_composite:
Composite Column Types
The :func:`.composite` function is then used in the mapping::
- from sqlalchemy.orm import mapper, composite
+ from sqlalchemy.orm import composite
class Vertex(object):
pass
.. autofunction:: composite
-Controlling Ordering
----------------------
-
-The ORM does not generate ordering for any query unless explicitly configured.
-
-The "default" ordering for a collection, which applies to list-based
-collections, can be configured using the ``order_by`` keyword argument on
-:func:`~sqlalchemy.orm.relationship`::
-
- mapper(Address, addresses_table)
-
- # order address objects by address id
- mapper(User, users_table, properties={
- 'addresses': relationship(Address, order_by=addresses_table.c.address_id)
- })
-
-Note that when using joined eager loaders with relationships, the tables used
-by the eager load's join are anonymously aliased. You can only order by these
-columns if you specify it at the :func:`~sqlalchemy.orm.relationship` level.
-To control ordering at the query level based on a related table, you
-``join()`` to that relationship, then order by it::
-
- session.query(User).join('addresses').order_by(Address.street)
-
-Ordering for rows loaded through :class:`~sqlalchemy.orm.query.Query` is
-usually specified using the ``order_by()`` generative method. There is also an
-option to set a default ordering for Queries which are against a single mapped
-entity and where there was no explicit ``order_by()`` stated, which is the
-``order_by`` keyword argument to ``mapper()``::
-
- # order by a column
- mapper(User, users_table, order_by=users_table.c.user_id)
-
- # order by multiple items
- mapper(User, users_table, order_by=[users_table.c.user_id, users_table.c.user_name.desc()])
-
-Above, a :class:`~sqlalchemy.orm.query.Query` issued for the ``User`` class
-will use the value of the mapper's ``order_by`` setting if the
-:class:`~sqlalchemy.orm.query.Query` itself has no ordering specified.
-
-.. _datamapping_inheritance:
-
-
.. _maptojoin:
Mapping a Class against Multiple Tables
----------------------------------------
-Mappers can be constructed against arbitrary relational units (called ``Selectables``) as well as plain ``Tables``. For example, The ``join`` keyword from the SQL package creates a neat selectable unit comprised of multiple tables, complete with its own composite primary key, which can be passed in to a mapper as the table.
+Mappers can be constructed against arbitrary relational units (called
+``Selectables``) as well as plain ``Tables``. For example, the ``join``
+keyword from the SQL package creates a neat selectable unit comprised of
+multiple tables, complete with its own composite primary key, which can be
+passed in to a mapper as the table.
.. sourcecode:: python+sql
'keyword_id': [userkeywords.c.keyword_id, keywords.c.keyword_id]
})
-In both examples above, "composite" columns were added as properties to the mappers; these are aggregations of multiple columns into one mapper property, which instructs the mapper to keep both of those columns set at the same value.
+In both examples above, "composite" columns were added as properties to the
+mappers; these are aggregations of multiple columns into one mapper property,
+which instructs the mapper to keep both of those columns set at the same
+value.
Mapping a Class against Arbitrary Selects
------------------------------------------
.. autofunction:: reconstructor
-Mapper API
-----------
+The :func:`mapper` API
+----------------------
.. autofunction:: mapper
.. autofunction:: outerjoin
-Query Options
--------------
-
-Options which are passed to ``query.options()``, to affect the behavior of loading.
-
-.. autofunction:: contains_alias
-
-.. autofunction:: contains_eager
-
-
-.. autofunction:: eagerload
-
-.. autofunction:: eagerload_all
-
-.. autofunction:: extension
-
-.. autofunction:: joinedload
-
-.. autofunction:: joinedload_all
-
-.. autofunction:: lazyload
-
-.. autofunction:: subqueryload
-
-.. autofunction:: subqueryload_all
-
+.. module:: sqlalchemy.orm
+
+Relationship Configuration
+==========================
+
+This section describes the :func:`relationship` function and provides an
+in-depth discussion of its usage. The reference material here continues into the next section,
+:ref:`collections_toplevel`, which has additional detail on configuration
+of collections via :func:`relationship`.
+
+Basic Relational Patterns
+--------------------------
+
+A quick walkthrough of the basic relational patterns. In this section we
+illustrate the classical mapping using :func:`mapper` in conjunction with
+:func:`relationship`. Then (by popular demand), we illustrate the declarative
+form using the :mod:`~sqlalchemy.ext.declarative` module.
+
+Note that :func:`.relationship` is historically known as
+:func:`.relation` in older versions of SQLAlchemy.
+
+One To Many
+~~~~~~~~~~~~
+
+A one to many relationship places a foreign key in the child table referencing
+the parent. SQLAlchemy creates the relationship as a collection on the parent
+object containing instances of the child object.
+
+.. sourcecode:: python+sql
+
+ parent_table = Table('parent', metadata,
+ Column('id', Integer, primary_key=True))
+
+ child_table = Table('child', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('parent_id', Integer, ForeignKey('parent.id'))
+ )
+
+ class Parent(object):
+ pass
+
+ class Child(object):
+ pass
+
+ mapper(Parent, parent_table, properties={
+ 'children': relationship(Child)
+ })
+
+ mapper(Child, child_table)
+
+To establish a bi-directional relationship in one-to-many, where the "reverse" side is a many to one, specify the ``backref`` option:
+
+.. sourcecode:: python+sql
+
+ mapper(Parent, parent_table, properties={
+ 'children': relationship(Child, backref='parent')
+ })
+
+ mapper(Child, child_table)
+
+``Child`` will get a ``parent`` attribute with many-to-one semantics.
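+
+A brief usage sketch of the bi-directional behavior::
+
+ parent = Parent()
+ child = Child()
+
+ # appending to the collection populates the scalar side
+ parent.children.append(child)
+ assert child.parent is parent
+
+ # and assigning the scalar side populates the collection
+ child2 = Child()
+ child2.parent = parent
+ assert child2 in parent.children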
+
+Declarative::
+
+ from sqlalchemy.ext.declarative import declarative_base
+ Base = declarative_base()
+
+ class Parent(Base):
+ __tablename__ = 'parent'
+ id = Column(Integer, primary_key=True)
+ children = relationship("Child", backref="parent")
+
+ class Child(Base):
+ __tablename__ = 'child'
+ id = Column(Integer, primary_key=True)
+ parent_id = Column(Integer, ForeignKey('parent.id'))
+
+
+Many To One
+~~~~~~~~~~~~
+
+Many to one places a foreign key in the parent table referencing the child.
+The mapping setup is identical to one-to-many; however, SQLAlchemy creates the
+relationship as a scalar attribute on the parent object referencing a single
+instance of the child object.
+
+.. sourcecode:: python+sql
+
+ parent_table = Table('parent', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('child_id', Integer, ForeignKey('child.id')))
+
+ child_table = Table('child', metadata,
+ Column('id', Integer, primary_key=True),
+ )
+
+ class Parent(object):
+ pass
+
+ class Child(object):
+ pass
+
+ mapper(Parent, parent_table, properties={
+ 'child': relationship(Child)
+ })
+
+ mapper(Child, child_table)
+
+Backref behavior is available here as well, where ``backref="parents"`` will
+place a one-to-many collection on the ``Child`` class::
+
+ mapper(Parent, parent_table, properties={
+ 'child': relationship(Child, backref="parents")
+ })
+
+Declarative::
+
+ from sqlalchemy.ext.declarative import declarative_base
+ Base = declarative_base()
+
+ class Parent(Base):
+ __tablename__ = 'parent'
+ id = Column(Integer, primary_key=True)
+ child_id = Column(Integer, ForeignKey('child.id'))
+ child = relationship("Child", backref="parents")
+
+ class Child(Base):
+ __tablename__ = 'child'
+ id = Column(Integer, primary_key=True)
+
+One To One
+~~~~~~~~~~~
+
+One To One is essentially a bi-directional relationship with a scalar
+attribute on both sides. To achieve this, the ``uselist=False`` flag indicates
+the placement of a scalar attribute instead of a collection on the "many" side
+of the relationship. To convert one-to-many into one-to-one::
+
+ parent_table = Table('parent', metadata,
+ Column('id', Integer, primary_key=True)
+ )
+
+ child_table = Table('child', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('parent_id', Integer, ForeignKey('parent.id'))
+ )
+
+ mapper(Parent, parent_table, properties={
+ 'child': relationship(Child, uselist=False, backref='parent')
+ })
+
+ mapper(Child, child_table)
+
+Or to turn a one-to-many backref into one-to-one, use the :func:`.backref` function
+to provide arguments for the reverse side::
+
+ from sqlalchemy.orm import backref
+
+ parent_table = Table('parent', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('child_id', Integer, ForeignKey('child.id'))
+ )
+
+ child_table = Table('child', metadata,
+ Column('id', Integer, primary_key=True)
+ )
+
+ mapper(Parent, parent_table, properties={
+ 'child': relationship(Child, backref=backref('parent', uselist=False))
+ })
+
+ mapper(Child, child_table)
+
+The second example above as declarative::
+
+ from sqlalchemy.ext.declarative import declarative_base
+ Base = declarative_base()
+
+ class Parent(Base):
+ __tablename__ = 'parent'
+ id = Column(Integer, primary_key=True)
+ child_id = Column(Integer, ForeignKey('child.id'))
+ child = relationship("Child", backref=backref("parent", uselist=False))
+
+ class Child(Base):
+ __tablename__ = 'child'
+ id = Column(Integer, primary_key=True)
+
+Many To Many
+~~~~~~~~~~~~~
+
+Many to Many adds an association table between two classes. The association
+table is indicated by the ``secondary`` argument to
+:func:`.relationship`.
+
+.. sourcecode:: python+sql
+
+ left_table = Table('left', metadata,
+ Column('id', Integer, primary_key=True)
+ )
+
+ right_table = Table('right', metadata,
+ Column('id', Integer, primary_key=True)
+ )
+
+ association_table = Table('association', metadata,
+ Column('left_id', Integer, ForeignKey('left.id')),
+ Column('right_id', Integer, ForeignKey('right.id'))
+ )
+
+ mapper(Parent, left_table, properties={
+ 'children': relationship(Child, secondary=association_table)
+ })
+
+ mapper(Child, right_table)
+
+For a bi-directional relationship, both sides of the relationship contain a
+collection. The ``backref`` keyword will automatically use
+the same ``secondary`` argument for the reverse relationship:
+
+.. sourcecode:: python+sql
+
+ mapper(Parent, left_table, properties={
+ 'children': relationship(Child, secondary=association_table,
+ backref='parents')
+ })
+
+With declarative, we still use the :class:`.Table` for the ``secondary``
+argument. A class is not mapped to this table, so it remains in its
+plain schematic form::
+
+ from sqlalchemy.ext.declarative import declarative_base
+ Base = declarative_base()
+
+ association_table = Table('association', Base.metadata,
+ Column('left_id', Integer, ForeignKey('left.id')),
+ Column('right_id', Integer, ForeignKey('right.id'))
+ )
+
+ class Parent(Base):
+ __tablename__ = 'left'
+ id = Column(Integer, primary_key=True)
+ children = relationship("Child",
+ secondary=association_table,
+ backref="parents")
+
+ class Child(Base):
+ __tablename__ = 'right'
+ id = Column(Integer, primary_key=True)
+
+.. _association_pattern:
+
+Association Object
+~~~~~~~~~~~~~~~~~~
+
+The association object pattern is a variant on many-to-many: it specifically
+is used when your association table contains additional columns beyond those
+which are foreign keys to the left and right tables. Instead of using the
+``secondary`` argument, you map a new class directly to the association table.
+The left side of the relationship references the association object via
+one-to-many, and the association class references the right side via
+many-to-one.
+
+.. sourcecode:: python+sql
+
+ left_table = Table('left', metadata,
+ Column('id', Integer, primary_key=True)
+ )
+
+ right_table = Table('right', metadata,
+ Column('id', Integer, primary_key=True)
+ )
+
+ association_table = Table('association', metadata,
+ Column('left_id', Integer, ForeignKey('left.id'), primary_key=True),
+ Column('right_id', Integer, ForeignKey('right.id'), primary_key=True),
+ Column('data', String(50))
+ )
+
+ mapper(Parent, left_table, properties={
+ 'children':relationship(Association)
+ })
+
+ mapper(Association, association_table, properties={
+ 'child':relationship(Child)
+ })
+
+ mapper(Child, right_table)
+
+The bi-directional version adds backrefs to both relationships:
+
+.. sourcecode:: python+sql
+
+ mapper(Parent, left_table, properties={
+ 'children':relationship(Association, backref="parent")
+ })
+
+ mapper(Association, association_table, properties={
+ 'child':relationship(Child, backref="parent_assocs")
+ })
+
+ mapper(Child, right_table)
+
+Declarative::
+
+ from sqlalchemy.ext.declarative import declarative_base
+ Base = declarative_base()
+
+ class Association(Base):
+ __tablename__ = 'association'
+ left_id = Column(Integer, ForeignKey('left.id'), primary_key=True)
+ right_id = Column(Integer, ForeignKey('right.id'), primary_key=True)
+ child = relationship("Child", backref="parent_assocs")
+
+ class Parent(Base):
+ __tablename__ = 'left'
+ id = Column(Integer, primary_key=True)
+ children = relationship(Association, backref="parent")
+
+ class Child(Base):
+ __tablename__ = 'right'
+ id = Column(Integer, primary_key=True)
+
+Working with the association pattern in its direct form requires that child
+objects are associated with an association instance before being appended to
+the parent; similarly, access from parent to child goes through the
+association object:
+
+.. sourcecode:: python+sql
+
+ # create parent, append a child via association
+ p = Parent()
+ a = Association()
+ a.child = Child()
+ p.children.append(a)
+
+ # iterate through child objects via association, including association
+ # attributes
+ for assoc in p.children:
+ print assoc.data
+ print assoc.child
+
+To enhance the association object pattern such that direct
+access to the ``Association`` object is optional, SQLAlchemy
+provides the :ref:`associationproxy` extension. This
+extension allows the configuration of attributes which will
+access two "hops" with a single access, one "hop" to the
+associated object, and a second to a target attribute.
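+
+As a minimal sketch, the declarative ``Parent`` above could be extended with
+such a proxy attribute (the name ``child_objects`` is illustrative; see the
+association proxy documentation for creating new links through the proxy)::
+
+ from sqlalchemy.ext.associationproxy import association_proxy
+
+ class Parent(Base):
+     __tablename__ = 'left'
+     id = Column(Integer, primary_key=True)
+     children = relationship(Association, backref="parent")
+
+     # 'parent.child_objects' yields each association's .child directly
+     child_objects = association_proxy('children', 'child')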
+
+.. note:: When using the association object pattern, it is
+ advisable that the association-mapped table not be used
+ as the ``secondary`` argument on a :func:`.relationship`
+ elsewhere, unless that :func:`.relationship` contains
+ the option ``viewonly=True``. SQLAlchemy otherwise
+ may attempt to emit redundant INSERT and DELETE
+ statements on the same table, if similar state is detected
+ on the related attribute as well as the associated
+ object.
+
+Adjacency List Relationships
+-----------------------------
+
+The **adjacency list** pattern is a common relational pattern whereby a table
+contains a foreign key reference to itself. This is the most common and simple
+way to represent hierarchical data in flat tables. The other way is the
+"nested sets" model, sometimes called "modified preorder". Despite what many
+online articles say about modified preorder, the adjacency list model is
+probably the most appropriate pattern for the large majority of hierarchical
+storage needs, for reasons of concurrency and reduced complexity, and because
+modified preorder has little advantage over an application which can fully
+load subtrees into the application space.
+
+SQLAlchemy commonly refers to an adjacency list relationship as a
+**self-referential mapper**. In this example, we'll work with a single table
+called ``nodes`` to represent a tree structure::
+
+ nodes = Table('nodes', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('parent_id', Integer, ForeignKey('nodes.id')),
+ Column('data', String(50)),
+ )
+
+A graph such as the following::
+
+ root --+---> child1
+ +---> child2 --+--> subchild1
+ | +--> subchild2
+ +---> child3
+
+Would be represented with data such as::
+
+ id parent_id data
+ --- ------- ----
+ 1 NULL root
+ 2 1 child1
+ 3 1 child2
+ 4 3 subchild1
+ 5 3 subchild2
+ 6 1 child3
+
+SQLAlchemy's :func:`.mapper` configuration for a self-referential one-to-many
+relationship is exactly like a "normal" one-to-many relationship. When
+SQLAlchemy encounters the foreign key relationship from ``nodes`` to
+``nodes``, it assumes one-to-many unless told otherwise:
+
+.. sourcecode:: python+sql
+
+ # entity class
+ class Node(object):
+ pass
+
+ mapper(Node, nodes, properties={
+ 'children': relationship(Node)
+ })
+
+To create a many-to-one relationship from child to parent, an extra indicator
+of the "remote side" is added, which contains the
+:class:`~sqlalchemy.schema.Column` object or objects indicating the remote
+side of the relationship:
+
+.. sourcecode:: python+sql
+
+ mapper(Node, nodes, properties={
+ 'parent': relationship(Node, remote_side=[nodes.c.id])
+ })
+
+And the bi-directional version combines both:
+
+.. sourcecode:: python+sql
+
+ mapper(Node, nodes, properties={
+ 'children': relationship(Node,
+ backref=backref('parent', remote_side=[nodes.c.id])
+ )
+ })
+
+For comparison, the declarative version typically uses the inline ``id``
+:class:`.Column` attribute to declare ``remote_side`` (note the list form is optional
+when the collection is only one column)::
+
+ from sqlalchemy.ext.declarative import declarative_base
+ Base = declarative_base()
+
+ class Node(Base):
+ __tablename__ = 'nodes'
+ id = Column(Integer, primary_key=True)
+ parent_id = Column(Integer, ForeignKey('nodes.id'))
+ data = Column(String(50))
+ children = relationship("Node",
+ backref=backref('parent', remote_side=id)
+ )
+
+There are several examples included with SQLAlchemy illustrating
+self-referential strategies; these include :ref:`examples_adjacencylist` and
+:ref:`examples_xmlpersistence`.
+
+Self-Referential Query Strategies
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+Querying self-referential structures works the same way as any other query in
+SQLAlchemy. Below, we query for any node whose ``data`` attribute stores the
+value ``child2``:
+
+.. sourcecode:: python+sql
+
+ # get all nodes named 'child2'
+ session.query(Node).filter(Node.data=='child2')
+
+On the subject of joins, i.e. those described in :ref:`datamapping_joins`,
+self-referential structures require the usage of aliases so that the same
+table can be referenced multiple times within the FROM clause of the query.
+Aliasing can be done either manually using the ``nodes``
+:class:`~sqlalchemy.schema.Table` object as a source of aliases:
+
+.. sourcecode:: python+sql
+
+ # get all nodes named 'subchild1' with a parent named 'child2'
+ nodealias = nodes.alias()
+ {sql}session.query(Node).filter(Node.data=='subchild1').\
+ filter(and_(Node.parent_id==nodealias.c.id, nodealias.c.data=='child2')).all()
+ SELECT nodes.id AS nodes_id, nodes.parent_id AS nodes_parent_id, nodes.data AS nodes_data
+ FROM nodes, nodes AS nodes_1
+ WHERE nodes.data = ? AND nodes.parent_id = nodes_1.id AND nodes_1.data = ?
+ ['subchild1', 'child2']
+
+or automatically, using ``join()`` with ``aliased=True``:
+
+.. sourcecode:: python+sql
+
+ # get all nodes named 'subchild1' with a parent named 'child2'
+ {sql}session.query(Node).filter(Node.data=='subchild1').\
+ join('parent', aliased=True).filter(Node.data=='child2').all()
+ SELECT nodes.id AS nodes_id, nodes.parent_id AS nodes_parent_id, nodes.data AS nodes_data
+ FROM nodes JOIN nodes AS nodes_1 ON nodes_1.id = nodes.parent_id
+ WHERE nodes.data = ? AND nodes_1.data = ?
+ ['subchild1', 'child2']
+
+To add criterion to multiple points along a longer join, use ``from_joinpoint=True``:
+
+.. sourcecode:: python+sql
+
+ # get all nodes named 'subchild1' with a parent named 'child2' and a grandparent 'root'
+ {sql}session.query(Node).filter(Node.data=='subchild1').\
+ join('parent', aliased=True).filter(Node.data=='child2').\
+ join('parent', aliased=True, from_joinpoint=True).filter(Node.data=='root').all()
+ SELECT nodes.id AS nodes_id, nodes.parent_id AS nodes_parent_id, nodes.data AS nodes_data
+ FROM nodes JOIN nodes AS nodes_1 ON nodes_1.id = nodes.parent_id JOIN nodes AS nodes_2 ON nodes_2.id = nodes_1.parent_id
+ WHERE nodes.data = ? AND nodes_1.data = ? AND nodes_2.data = ?
+ ['subchild1', 'child2', 'root']
+
+Configuring Eager Loading
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Eager loading of relationships occurs using joins or outerjoins from parent to
+child table during a normal query operation, such that the parent and its
+child collection can be populated from a single SQL statement, or a second
+statement for all collections at once. SQLAlchemy's joined and subquery eager
+loading uses aliased tables in all cases when joining to related items, so it
+is compatible with self-referential joining. However, to use eager loading
+with a self-referential relationship, SQLAlchemy needs to be told how many
+levels deep it should join; otherwise the eager load will not take place. This
+depth setting is configured via ``join_depth``:
+
+.. sourcecode:: python+sql
+
+ mapper(Node, nodes, properties={
+ 'children': relationship(Node, lazy='joined', join_depth=2)
+ })
+
+ {sql}session.query(Node).all()
+ SELECT nodes_1.id AS nodes_1_id, nodes_1.parent_id AS nodes_1_parent_id, nodes_1.data AS nodes_1_data, nodes_2.id AS nodes_2_id, nodes_2.parent_id AS nodes_2_parent_id, nodes_2.data AS nodes_2_data, nodes.id AS nodes_id, nodes.parent_id AS nodes_parent_id, nodes.data AS nodes_data
+ FROM nodes LEFT OUTER JOIN nodes AS nodes_2 ON nodes.id = nodes_2.parent_id LEFT OUTER JOIN nodes AS nodes_1 ON nodes_2.id = nodes_1.parent_id
+ []
+
+Specifying Alternate Join Conditions to relationship()
+------------------------------------------------------
+
+The :func:`~sqlalchemy.orm.relationship` function uses the foreign key
+relationship between the parent and child tables to formulate the **primary
+join condition** between parent and child; in the case of a many-to-many
+relationship it also formulates the **secondary join condition**::
+
+ one to many/many to one:
+ ------------------------
+
+ parent_table --> parent_table.c.id == child_table.c.parent_id --> child_table
+ primaryjoin
+
+ many to many:
+ -------------
+
+ parent_table --> parent_table.c.id == secondary_table.c.parent_id -->
+ primaryjoin
+
+ secondary_table.c.child_id == child_table.c.id --> child_table
+ secondaryjoin
+
+If you are working with a :class:`~sqlalchemy.schema.Table` which has no
+:class:`~sqlalchemy.schema.ForeignKey` objects on it (which can be the case
+when using reflected tables with MySQL), or if the join condition cannot be
+expressed by a simple foreign key relationship, use the ``primaryjoin`` and
+possibly ``secondaryjoin`` conditions to create the appropriate relationship.
+
+In this example we create a relationship ``boston_addresses`` which will only
+load the user addresses with a city of "Boston":
+
+.. sourcecode:: python+sql
+
+ class User(object):
+ pass
+ class Address(object):
+ pass
+
+ mapper(Address, addresses_table)
+ mapper(User, users_table, properties={
+ 'boston_addresses': relationship(Address, primaryjoin=
+ and_(users_table.c.user_id==addresses_table.c.user_id,
+ addresses_table.c.city=='Boston'))
+ })
+
+Many-to-many relationships can be customized by one or both of ``primaryjoin``
+and ``secondaryjoin``, shown below with just the default many-to-many
+relationship explicitly set:
+
+.. sourcecode:: python+sql
+
+ class User(object):
+ pass
+ class Keyword(object):
+ pass
+ mapper(Keyword, keywords_table)
+ mapper(User, users_table, properties={
+ 'keywords': relationship(Keyword, secondary=userkeywords_table,
+ primaryjoin=users_table.c.user_id==userkeywords_table.c.user_id,
+ secondaryjoin=userkeywords_table.c.keyword_id==keywords_table.c.keyword_id
+ )
+ })
+
+Specifying Foreign Keys
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+When using ``primaryjoin`` and ``secondaryjoin``, SQLAlchemy also needs to be
+aware of which columns in the relationship reference the other. In most cases,
+a :class:`~sqlalchemy.schema.Table` construct will have
+:class:`~sqlalchemy.schema.ForeignKey` constructs which take care of this;
+however, in the case of reflected tables on a database that does not report
+FKs (like MySQL's MyISAM storage engine) or when using join conditions on columns that don't have
+foreign keys, the :func:`~sqlalchemy.orm.relationship` needs to be told
+specifically which columns are "foreign" using the ``foreign_keys``
+collection:
+
+.. sourcecode:: python+sql
+
+ mapper(Address, addresses_table)
+ mapper(User, users_table, properties={
+ 'addresses': relationship(Address, primaryjoin=
+ users_table.c.user_id==addresses_table.c.user_id,
+ foreign_keys=[addresses_table.c.user_id])
+ })
+
+Building Query-Enabled Properties
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Very ambitious custom join conditions may fail to be directly persistable, and
+in some cases may not even load correctly. To remove the persistence part of
+the equation, use the flag ``viewonly=True`` on the
+:func:`~sqlalchemy.orm.relationship`, which establishes it as a read-only
+attribute (data written to the collection will be ignored on ``flush()``).
+However, in extreme cases, consider using a regular Python property in
+conjunction with :class:`~sqlalchemy.orm.query.Query` as follows:
+
+.. sourcecode:: python+sql
+
+    from sqlalchemy.orm import object_session
+
+    class User(object):
+        def _get_addresses(self):
+            return object_session(self).query(Address).with_parent(self).filter(...).all()
+        addresses = property(_get_addresses)
+
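+As a sketch of the ``viewonly=True`` approach mentioned above, the "Boston
+addresses" join condition from the earlier example can be marked read-only
+(the table and class names here assume that same example):
+
+.. sourcecode:: python+sql
+
+    mapper(User, users_table, properties={
+        'boston_addresses': relationship(Address, viewonly=True,
+            primaryjoin=and_(users_table.c.user_id==addresses_table.c.user_id,
+                             addresses_table.c.city=='Boston'))
+    })
+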
+Multiple Relationships against the Same Parent/Child
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There's no restriction on how many times you can relate from parent to child.
+SQLAlchemy can usually figure out what you want, particularly if the join
+conditions are straightforward. Below we add a ``newyork_addresses`` attribute
+to complement the ``boston_addresses`` attribute:
+
+.. sourcecode:: python+sql
+
+ mapper(User, users_table, properties={
+ 'boston_addresses': relationship(Address, primaryjoin=
+ and_(users_table.c.user_id==addresses_table.c.user_id,
+ addresses_table.c.city=='Boston')),
+ 'newyork_addresses': relationship(Address, primaryjoin=
+ and_(users_table.c.user_id==addresses_table.c.user_id,
+ addresses_table.c.city=='New York')),
+ })
+
+
+Rows that point to themselves / Mutually Dependent Rows
+-------------------------------------------------------
+
+This is a very specific case where :func:`~sqlalchemy.orm.relationship` must
+perform an INSERT and a second UPDATE in order to properly populate a row
+(and conversely, an UPDATE and a DELETE in order to delete the row without
+violating foreign key constraints). The two use cases are:
+
+ * A table contains a foreign key to itself, and a single row will have a foreign key value pointing to its own primary key.
+ * Two tables each contain a foreign key referencing the other table, with a row in each table referencing the other.
+
+For example::
+
+ user
+ ---------------------------------
+ user_id name related_user_id
+ 1 'ed' 1
+
+Or::
+
+ widget entry
+ ------------------------------------------- ---------------------------------
+ widget_id name favorite_entry_id entry_id name widget_id
+ 1 'somewidget' 5 5 'someentry' 1
+
+In the first case, a row points to itself. Technically, a database that uses
+sequences, such as PostgreSQL or Oracle, can INSERT the row at once using a
+previously generated value, but databases which rely upon autoincrement-style
+primary key identifiers cannot. The :func:`~sqlalchemy.orm.relationship`
+always assumes a "parent/child" model of row population during flush, so
+unless you are populating the primary key/foreign key columns directly,
+:func:`~sqlalchemy.orm.relationship` needs to use two statements.
+
+In the second case, the "widget" row must be inserted before any referring
+"entry" rows, but then the "favorite_entry_id" column of that "widget" row
+cannot be set until the "entry" rows have been generated. In this case, it's
+typically impossible to insert the "widget" and "entry" rows using just two
+INSERT statements; an UPDATE must be performed in order to keep foreign key
+constraints fulfilled. The exception is if the foreign keys are configured as
+"deferred until commit" (a feature some databases support) and if the
+identifiers were populated manually (again essentially bypassing
+:func:`~sqlalchemy.orm.relationship`).
+
+To enable the UPDATE after INSERT / UPDATE before DELETE behavior on
+:func:`~sqlalchemy.orm.relationship`, use the ``post_update`` flag on *one* of
+the relationships, preferably the many-to-one side::
+
+ mapper(Widget, widget, properties={
+ 'entries':relationship(Entry, primaryjoin=widget.c.widget_id==entry.c.widget_id),
+ 'favorite_entry':relationship(Entry, primaryjoin=widget.c.favorite_entry_id==entry.c.entry_id, post_update=True)
+ })
+
+When a structure using the above mapping is flushed, the "widget" row will be
+INSERTed minus the "favorite_entry_id" value, then all the "entry" rows will
+be INSERTed referencing the parent "widget" row, and then an UPDATE statement
+will populate the "favorite_entry_id" column of the "widget" table (it's one
+row at a time for the time being).
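+
+The first case, in which a row references its own primary key, can be
+configured the same way.  As a minimal sketch, assuming a ``user``
+:class:`~sqlalchemy.schema.Table` laid out as illustrated earlier, with
+``related_user_id`` carrying a :class:`~sqlalchemy.schema.ForeignKey` to
+``user_id``, and a plain ``User`` class::
+
+    mapper(User, user, properties={
+        # remote_side points the relationship in the many-to-one direction;
+        # post_update emits the extra UPDATE after the INSERT
+        'related_user': relationship(User,
+                            remote_side=user.c.user_id,
+                            post_update=True)
+    })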
+
+
+Mutable Primary Keys / Update Cascades
+---------------------------------------
+
+When the primary key of an entity changes, related items which reference the
+primary key must be updated as well. For databases which enforce
+referential integrity, it's required to use the database's ON UPDATE CASCADE
+functionality in order to propagate primary key changes. For those which
+don't, the ``passive_updates`` flag can be set to ``False`` which instructs
+SQLAlchemy to issue UPDATE statements individually. The ``passive_updates``
+flag can also be ``False`` in conjunction with ON UPDATE CASCADE
+functionality, although in that case it issues UPDATE statements
+unnecessarily.
+
+A typical mutable primary key setup might look like:
+
+.. sourcecode:: python+sql
+
+ users = Table('users', metadata,
+ Column('username', String(50), primary_key=True),
+ Column('fullname', String(100)))
+
+ addresses = Table('addresses', metadata,
+ Column('email', String(50), primary_key=True),
+ Column('username', String(50), ForeignKey('users.username', onupdate="cascade")))
+
+ class User(object):
+ pass
+ class Address(object):
+ pass
+
+ mapper(User, users, properties={
+ 'addresses': relationship(Address, passive_updates=False)
+ })
+ mapper(Address, addresses)
+
+``passive_updates`` is set to ``True`` by default. Foreign key references to non-primary key columns are supported as well.
+
+The :func:`relationship` API
+----------------------------
+
+.. autofunction:: relationship
+
+.. autofunction:: backref
+
+.. autofunction:: dynamic_loader
+
+.. autofunction:: relation
+
+
'children': relationship(Children)
}
-.. autofunction:: backref
-
-
-
-
-
-.. autofunction:: dynamic_loader
-
-.. autofunction:: relation
-
-.. autofunction:: relationship
-
Decorators
----------
'version_id_generator': lambda v:datetime.now()
}
+.. _declarative_inheritance:
+
Inheritance Configuration
=========================
def relationship(argument, secondary=None, **kwargs):
"""Provide a relationship of a primary Mapper to a secondary Mapper.
- .. note:: This function is known as :func:`relation` in all versions
- of SQLAlchemy prior to version 0.6beta2, including the 0.5 and 0.4
- series. :func:`~sqlalchemy.orm.relationship()` is only available
- starting with SQLAlchemy 0.6beta2. The :func:`relation` name will
- remain available for the foreseeable future in order to enable
- cross-compatibility.
+ .. note:: :func:`relationship` is historically known as
+ :func:`relation` prior to version 0.6.
This corresponds to a parent-child or associative table relationship. The
constructed class is an instance of :class:`RelationshipProperty`.
:param collection_class:
a class or callable that returns a new list-holding object. will
be used in place of a plain list for storing elements.
+ Behavior of this attribute is described in detail at
+ :ref:`custom_collections`.
:param comparator_factory:
a class which extends :class:`RelationshipProperty.Comparator` which
* None - a synonym for 'noload'
+ Detailed discussion of loader strategies is at :ref:`loading_toplevel`.
+
:param order_by:
indicates the ordering that should be applied when loading these
items.
doc=doc)
def comparable_property(comparator_factory, descriptor=None):
- """Provide query semantics for an unmanaged attribute.
+ """Provides a method of applying a :class:`.PropComparator`
+ to any Python descriptor attribute.
Allows a regular Python @property (descriptor) to be used in Queries and
SQL constructs like a managed attribute. comparable_property wraps a
"""Return a ``MapperOption`` that will convert the property
of the given name into a subquery eager load.
- .. note:: This function is new as of SQLAlchemy version 0.6beta3.
-
Used with :meth:`~sqlalchemy.orm.query.Query.options`.
examples::
"""Return a ``MapperOption`` that will convert all properties along the
given dot-separated path into a subquery eager load.
- .. note:: This function is new as of SQLAlchemy version 0.6beta3.
-
Used with :meth:`~sqlalchemy.orm.query.Query.options`.
For example::
return operator(self.comparator, value)
class PropComparator(expression.ColumnOperators):
- """defines comparison operations for MapperProperty objects.
+ """Defines comparison operations for MapperProperty objects.
+
+ User-defined subclasses of :class:`.PropComparator` may be created. The
+ built-in Python comparison and math operator methods, such as
+ ``__eq__()``, ``__lt__()``, and ``__add__()``, can be overridden to provide
+ new operator behavior. The custom :class:`.PropComparator` is passed to
+ the mapper property via the ``comparator_factory`` argument. In each case,
+ the appropriate subclass of :class:`.PropComparator` should be used::
+
+ from sqlalchemy.orm.properties import \\
+ ColumnProperty,\\
+ CompositeProperty,\\
+ RelationshipProperty
- PropComparator instances should also define an accessor 'property'
- which returns the MapperProperty associated with this
- PropComparator.
+ class MyColumnComparator(ColumnProperty.Comparator):
+ pass
+
+ class MyCompositeComparator(CompositeProperty.Comparator):
+ pass
+
+ class MyRelationshipComparator(RelationshipProperty.Comparator):
+ pass
+
"""
def __init__(self, prop, mapper, adapter=None):