[alpha_api]: javascript:alphaApi()
[alpha_implementation]: javascript:alphaImplementation()
-Advanced Data Mapping {@name=advdatamapping}
+Mapper Configuration {@name=advdatamapping}
======================
This section details all the options available to Mappers, as well as advanced patterns.
-To start, heres the tables we will work with again:
+### API Reference
+
+Important points of reference, detailing the full ORM configuration API:
+
+[ORM Package Documentation](rel:docstrings_sqlalchemy.orm)
+
+[Options for mapper()](rel:docstrings_sqlalchemy.orm_modfunc_mapper)
+
+[Options for relation()](rel:docstrings_sqlalchemy.orm_modfunc_relation)
+
+### Customizing Column Properties {@name=columns}
+
+The default behavior of a `mapper` is to assemble all the columns in the mapped `Table` into mapped object attributes. This behavior can be modified in several ways, as well as enhanced by SQL expressions.
+
+To load only a part of the columns referenced by a table as attributes, use the `include_properties` and `exclude_properties` arguments:
{python}
- from sqlalchemy import *
+ mapper(User, users_table, include_properties=['user_id', 'user_name'])
+
+ mapper(Address, addresses_table, exclude_properties=['street', 'city', 'state', 'zip'])
+
+To change the name of the attribute mapped to a particular column, place the `Column` object in the `properties` dictionary with the desired key:
- metadata = MetaData()
+ {python}
+ mapper(User, users_table, properties={
+ 'id' : users_table.c.user_id,
+ 'name' : users_table.c.user_name,
+ })
- # a table to store users
- users_table = Table('users', metadata,
- Column('user_id', Integer, primary_key = True),
- Column('user_name', String(40)),
- Column('password', String(80))
- )
+To change the names of all attributes using a prefix, use the `column_prefix` option. This is useful for classes which wish to add their own `property` accessors:
- # a table that stores mailing addresses associated with a specific user
- addresses_table = Table('addresses', metadata,
- Column('address_id', Integer, primary_key = True),
- Column('user_id', Integer, ForeignKey("users.user_id")),
- Column('street', String(100)),
- Column('city', String(80)),
- Column('state', String(2)),
- Column('zip', String(10))
- )
+ {python}
+ mapper(User, users_table, column_prefix='_')
+
+The above will place attribute names such as `_user_id`, `_user_name`, `_password` etc. on the mapped `User` class.
+
+To place multiple columns which are known to be "synonymous" based on foreign key relationship or join condition into the same mapped attribute, put them together using a list, as below where we map to a `Join`:
- # a table that stores keywords
- keywords_table = Table('keywords', metadata,
- Column('keyword_id', Integer, primary_key = True),
- Column('name', VARCHAR(50))
- )
+ {python}
+ # join users and addresses
+ usersaddresses = sql.join(users_table, addresses_table, \
+ users_table.c.user_id == addresses_table.c.user_id)
+
+ mapper(User, usersaddresses,
+ properties = {
+ 'id':[users_table.c.user_id, addresses_table.c.user_id],
+ })
- # a table that associates keywords with users
- userkeywords_table = Table('userkeywords', metadata,
- Column('user_id', INT, ForeignKey("users")),
- Column('keyword_id', INT, ForeignKey("keywords"))
- )
+#### Deferred Column Loading {@name=deferred}
+
+This feature allows particular columns of a table to not be loaded by default, instead being loaded later on when first referenced. It is essentially "column-level lazy loading". This feature is useful when one wants to avoid loading a large text or binary field into memory when it is not needed. Individual columns can be lazy loaded by themselves or placed into groups that lazy-load together.
+ {python}
+ book_excerpts = Table('books', db,
+ Column('book_id', Integer, primary_key=True),
+ Column('title', String(200), nullable=False),
+ Column('summary', String(2000)),
+ Column('excerpt', String),
+ Column('photo', Binary)
+ )
-### More On Mapper Properties {@name=properties}
+ class Book(object):
+ pass
-#### Overriding Column Names {@name=colname}
+ # define a mapper that will load each of 'excerpt' and 'photo' in
+ # separate, individual-row SELECT statements when each attribute
+ # is first referenced on the individual object instance
+ mapper(Book, book_excerpts, properties = {
+ 'excerpt' : deferred(book_excerpts.c.excerpt),
+ 'photo' : deferred(book_excerpts.c.photo)
+ })
-When mappers are constructed, by default the column names in the Table metadata are used as the names of attributes on the mapped class. This can be customzed within the properties by stating the key/column combinations explicitly:
+Deferred columns can be placed into groups so that they load together:
{python}
- user_mapper = mapper(User, users_table, properties={
- 'id' : users_table.c.user_id,
- 'name' : users_table.c.user_name,
+ book_excerpts = Table('books', db,
+ Column('book_id', Integer, primary_key=True),
+ Column('title', String(200), nullable=False),
+ Column('summary', String(2000)),
+ Column('excerpt', String),
+ Column('photo1', Binary),
+ Column('photo2', Binary),
+ Column('photo3', Binary)
+ )
+
+ class Book(object):
+ pass
+
+ # define a mapper with a 'photos' deferred group. when one photo is referenced,
+ # all three photos will be loaded in one SELECT statement. The 'excerpt' will
+ # be loaded separately when it is first referenced.
+ mapper(Book, book_excerpts, properties = {
+ 'excerpt' : deferred(book_excerpts.c.excerpt),
+ 'photo1' : deferred(book_excerpts.c.photo1, group='photos'),
+ 'photo2' : deferred(book_excerpts.c.photo2, group='photos'),
+ 'photo3' : deferred(book_excerpts.c.photo3, group='photos')
})
-In the situation when column names overlap in a mapper against multiple tables, columns may be referenced together with a list:
+You can defer or undefer columns at the `Query` level with the `options` method:
{python}
- # join users and addresses
- usersaddresses = sql.join(users_table, addresses_table, users_table.c.user_id == addresses_table.c.user_id)
- m = mapper(User, usersaddresses,
- properties = {
- 'id' : [users_table.c.user_id, addresses_table.c.user_id],
- }
- )
+ query = session.query(Book)
+ query.options(defer('summary')).all()
+ query.options(undefer('excerpt')).all()
-#### Overriding Properties {@name=overriding}
+#### SQL Expressions as Mapped Attributes {@name=expressions}
-A common request is the ability to create custom class properties that override the behavior of setting/getting an attribute. Currently, the easiest way to do this in SQLAlchemy is how it would be done in any Python program; define your attribute with a different name, such as "_attribute", and use a property to get/set its value. The mapper just needs to be told of the special name:
+To add a SQL clause composed of local or external columns as a read-only, mapped column attribute, use the `column_property()` function. Any scalar-returning `ClauseElement` may be used, as long as it has a `name` attribute; usually, you'll want to call `label()` to give it a specific name:
{python}
- class MyClass(object):
- def _set_email(self, email):
- self._email = email
- def _get_email(self):
- return self._email
- email = property(_get_email, _set_email)
+ mapper(User, users, properties={
+ 'fullname' : column_property(
+ (users_table.c.firstname + " " + users_table.c.lastname).label('fullname')
+ )
+ })
+
+Correlated subqueries may be used as well:
- mapper(MyClass, mytable, properties = {
- # map the '_email' attribute to the "email" column
- # on the table
- '_email': mytable.c.email
+ {python}
+ mapper(User, users, properties={
+ 'address_count' : column_property(
+ select(
+ [func.count(addresses_table.c.address_id)],
+ addresses_table.c.user_id==users_table.c.user_id
+ ).label('address_count')
+ )
})
+
+### Overriding Attribute Behavior {@name=overriding}
-It is also possible to route the the `select_by` and `get_by` functions on `Query` using the new property name, by establishing a `synonym`:
+A common request is the ability to create custom class properties that override the behavior of setting/getting an attribute. You accomplish this using normal Python `property` constructs:
{python}
- mapper(MyClass, mytable, properties = {
- # map the '_email' attribute to the "email" column
- # on the table
- '_email': mytable.c.email,
+ class MyAddress(object):
+ def _set_email(self, email):
+ self._email = email
+ def _get_email(self):
+ return self._email
+ email = property(_get_email, _set_email)
+
+ mapper(MyAddress, addresses_table, properties = {
+ # map the '_email' attribute to the "email" column
+ # on the table
+ '_email': addresses_table.c.email
+ })
+
+To have your custom `email` property be recognized by keyword-based `Query` functions such as `filter_by()`, place a `synonym` on your mapper:
- # make a synonym 'email'
- 'email' : synonym('_email')
+ {python}
+ mapper(MyAddress, addresses_table, properties = {
+        '_email': addresses_table.c.email,
+        'email': synonym('_email')
})
- # now you can select_by(email)
- result = session.query(MyClass).select_by(email='john@smith.com')
+ # use the synonym in a query
+ result = session.query(MyAddress).filter_by(email='john@smith.com')
-Synonym can be established with the flag "proxy=True", to create a class-level proxy to the actual property. This has the effect of creating a fully functional synonym on class instances:
+Synonym strategies such as the above can be easily automated, such as this example which specifies all columns and synonyms explicitly:
{python}
- mapper(MyClass, mytable, properties = {
- '_email': mytable.c.email
- 'email' : synonym('_email', proxy=True)
- })
+ mapper(MyAddress, addresses_table, properties = dict(
+ [('_'+col.key, col) for col in addresses_table.c] +
+ [(col.key, synonym('_'+col.key)) for col in addresses_table.c]
+ ))
- x = MyClass()
- x.email = 'john@doe.com'
-
- >>> x._email
- 'john@doe.com'
+The `column_prefix` option can also help with the above scenario by setting up the columns automatically with a prefix:
-#### Entity Collections {@name=entitycollections}
+ {python}
+ mapper(MyAddress, addresses_table, column_prefix='_', properties = dict(
+ [(col.key, synonym('_'+col.key)) for col in addresses_table.c]
+ ))
+
+
+### Alternate Collection Implementations {@name=collections}
Mapping a one-to-many or many-to-many relationship results in a collection of values accessible through an attribute on the parent instance. By default, this collection is a `list`:
    {python}
    parent.children.append(child)
    assert child in parent.children
-##### Custom Entity Collections {@name=customcollections}
+#### Custom Collection Implementations {@name=custom}
You can use your own types for collections as well. For most cases, simply inherit from `list` or `set` and add the custom behavior.
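For instance, a duck-typed, set-like class might look like the following sketch (the class name and internal `data` attribute are illustrative):

```python
class SetLike(object):
    """A duck-typed, set-like collection that does not subclass set."""

    __emulates__ = set  # tells SQLAlchemy to treat this class like a set

    def __init__(self):
        self.data = set()

    def append(self, item):
        # custom behavior: an append() method rather than set's add()
        self.data.add(item)

    def remove(self, item):
        self.data.remove(item)

    def __iter__(self):
        return iter(self.data)
```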
But a custom class won't always work quite yet: a little glue is needed to adapt it for use by SQLAlchemy. The ORM needs to know which methods to use to append, remove and iterate over members of the collection. When using a type like `list` or `set`, the appropriate methods are well-known and used automatically when present. A set-like class which does not provide the expected `add` method, for example, requires an explicit mapping for the ORM via a decorator.
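A sketch of that glue, for a set-like class whose appender is named `append` rather than `add` (names are illustrative), using the `collection` decorators from `sqlalchemy.orm.collections`:

```python
from sqlalchemy.orm.collections import collection

class SetLike(object):
    __emulates__ = set

    def __init__(self):
        self.data = set()

    @collection.appender
    def append(self, item):
        # marked as the appender: the ORM will call this to add members
        self.data.add(item)

    def remove(self, item):
        self.data.remove(item)

    def __iter__(self):
        return iter(self.data)
```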
-##### Collection Decorators {@name=collectiondecorators}
+#### Annotating Custom Collections via Decorators {@name=decorators}
Decorators can be used to tag the individual methods the ORM needs to manage collections. Use them when your class doesn't quite meet the regular interface for its container type, or you simply would like to use a different method to get the job done.
There is no requirement to be list-, or set-like at all. Collection classes can be any shape, so long as they have the append, remove and iterate interface marked for SQLAlchemy's use. Append and remove methods will be called with a mapped entity as the single argument, and iterator methods are called with no arguments and must return an iterator.
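For example, a freeform class with arbitrary method names might be annotated as in this sketch (the class and method names are hypothetical):

```python
from sqlalchemy.orm.collections import collection

class MyBag(object):
    """A collection class of arbitrary shape: neither list- nor set-like."""

    def __init__(self):
        self.data = []

    @collection.appender
    def add_member(self, item):
        # called by the ORM with a single mapped entity as the argument
        self.data.append(item)

    @collection.remover
    def drop_member(self, item):
        self.data.remove(item)

    @collection.iterator
    def iterate(self):
        # called with no arguments; must return an iterator
        return iter(self.data)
```

The annotated class would then be handed to `relation()` via the `collection_class` argument.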
-##### Dictionary-Based Collections {@name=dictcollections}
+#### Dictionary-Based Collections {@name=dictcollections}
A `dict` can be used as a collection, but a keying strategy is needed to map entities loaded by the ORM to key, value pairs. The [collections](rel:docstrings_sqlalchemy.orm.collections) package provides several built-in types for dictionary-based collections.
The ORM understands the `dict` interface just like lists and sets, and will automatically instrument all dict-like methods if you choose to subclass `dict` or provide dict-like collection behavior in a duck-typed class. You must decorate appender and remover methods, however, since there are no compatible methods in the basic dictionary interface for SQLAlchemy to use by default. Iteration will go through `itervalues()` unless otherwise decorated.
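As a sketch, the `attribute_mapped_collection` factory from the collections package builds a dict-based collection class keyed on an attribute of each child entity (the attribute name `keyword` and the commented mapping names are hypothetical):

```python
from sqlalchemy.orm.collections import attribute_mapped_collection

# build a dict-like collection class keyed on each child's .keyword attribute
keyed_by_keyword = attribute_mapped_collection('keyword')

# it would be handed to relation() via collection_class, e.g.:
# mapper(Item, items_table, properties={
#     'notes': relation(Note, collection_class=keyed_by_keyword)
# })
#
# column_mapped_collection() and mapped_collection() offer keying by a
# column value or by an arbitrary function, respectively.
```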
-##### Instrumentation and Custom Types {@name=adv_collections}
+#### Instrumentation and Custom Types {@name=adv_collections}
Many custom types and existing library classes can be used as an entity collection type as-is without further ado. However, it is important to note that the instrumentation process _will_ modify the type, adding decorators around methods automatically.
The collections package provides additional decorators and support for authoring custom types. See the [package documentation](rel:docstrings_sqlalchemy.orm.collections) for more information and discussion of advanced usage and Python 2.3-compatible decoration options.
-#### Custom Join Conditions {@name=customjoin}
+### Specifying Alternate Join Conditions to relation() {@name=customjoin}
-When creating relations on a mapper, most examples so far have illustrated the mapper and relationship joining up based on the foreign keys of the tables they represent. in fact, this "automatic" inspection can be completely circumvented using the `primaryjoin` and `secondaryjoin` arguments to `relation`, as in this example which creates a User object which has a relationship to all of its Addresses which are in Boston:
+The `relation()` function uses the foreign key relationship between the parent and child tables to formulate the **primary join condition** between parent and child; in the case of a many-to-many relationship it also formulates the **secondary join condition**. If you are working with a `Table` which has no `ForeignKey` objects on it (which can be the case when using reflected tables with MySQL), or if the join condition cannot be expressed by a simple foreign key relationship, use the `primaryjoin` and possibly `secondaryjoin` conditions to create the appropriate relationship.
+
+In this example we create a relation `boston_addresses` which will only load the user addresses with a city of "Boston":
{python}
    class User(object):
        pass

    class Address(object):
        pass

    mapper(Address, addresses_table)

    mapper(User, users_table, properties={
        'boston_addresses': relation(Address, primaryjoin=
            and_(users_table.c.user_id==addresses_table.c.user_id,
                 addresses_table.c.city=='Boston'))
    })
-#### Lazy/Eager Joins Multiple Times to One Table {@name=multiplejoin}
+Very ambitious custom join conditions may fail to be directly persistable, and in some cases may not even load correctly. To remove the persistence part of the equation, use the flag `viewonly=True` on the `relation()`, which establishes it as a read-only attribute (data written to the collection will be ignored on flush()). However, in extreme cases, consider using a regular Python property in conjunction with `Query` as follows:
+
+    {python}
+    class User(object):
+        def _get_addresses(self):
+            return object_session(self).query(Address).with_parent(self).filter(...).all()
+        addresses = property(_get_addresses)
+
+#### Multiple Relations against the Same Parent/Child {@name=multiplejoin}
-The previous example leads in to the idea of joining against the same table multiple times. Below is a User object that has lists of its Boston and New York addresses:
+There's no restriction on how many times you can relate from parent to child. SQLAlchemy can usually figure out what you want, particularly if the join conditions are straightforward. Below we add a `newyork_addresses` attribute to complement the `boston_addresses` attribute:
{python}
    mapper(User, users_table, properties={
        'boston_addresses': relation(Address, primaryjoin=
            and_(users_table.c.user_id==addresses_table.c.user_id,
                 addresses_table.c.city=='Boston')),
        'newyork_addresses': relation(Address, primaryjoin=
            and_(users_table.c.user_id==addresses_table.c.user_id,
                 addresses_table.c.city=='New York')),
    })
-Both lazy and eager loading support multiple joins equally well.
-
-#### Deferred Column Loading {@name=deferred}
-
-This feature allows particular columns of a table to not be loaded by default, instead being loaded later on when first referenced. It is essentailly "column-level lazy loading". This feature is useful when one wants to avoid loading a large text or binary field into memory when its not needed. Individual columns can be lazy loaded by themselves or placed into groups that lazy-load together.
- {python}
- book_excerpts = Table('books', db,
- Column('book_id', Integer, primary_key=True),
- Column('title', String(200), nullable=False),
- Column('summary', String(2000)),
- Column('excerpt', String),
- Column('photo', Binary)
- )
+### Working with Large Collections {@name=largecollections}
- class Book(object):
- pass
+The default behavior of `relation()` is to fully load the collection of items, according to the loading strategy of the relation. Additionally, the Session by default only knows how to delete objects which are actually present within the session. When a parent instance is marked for deletion and flushed, the Session loads its full list of child items so that they may either be deleted as well, or have their foreign key value set to null; this is to avoid constraint violations. For large collections of child items, there are several strategies to bypass full loading both at load time and at deletion time.
- # define a mapper that will load each of 'excerpt' and 'photo' in
- # separate, individual-row SELECT statements when each attribute
- # is first referenced on the individual object instance
- mapper(Book, book_excerpts, properties = {
- 'excerpt' : deferred(book_excerpts.c.excerpt),
- 'photo' : deferred(book_excerpts.c.photo)
- })
+#### Dynamic Relation Loaders {@name=dynamic}
-Deferred columns can be placed into groups so that they load together:
+By far the most useful strategy is the `dynamic_loader()` relation. This is a variant of `relation()` which returns a `Query` object in place of a collection when accessed. `filter()` criteria may be applied, as well as limits and offsets, either explicitly or via array slices:
{python}
- book_excerpts = Table('books', db,
- Column('book_id', Integer, primary_key=True),
- Column('title', String(200), nullable=False),
- Column('summary', String(2000)),
- Column('excerpt', String),
- Column('photo1', Binary),
- Column('photo2', Binary),
- Column('photo3', Binary)
- )
-
- class Book(object):
- pass
-
- # define a mapper with a 'photos' deferred group. when one photo is referenced,
- # all three photos will be loaded in one SELECT statement. The 'excerpt' will
- # be loaded separately when it is first referenced.
- mapper(Book, book_excerpts, properties = {
- 'excerpt' : deferred(book_excerpts.c.excerpt),
- 'photo1' : deferred(book_excerpts.c.photo1, group='photos'),
- 'photo2' : deferred(book_excerpts.c.photo2, group='photos'),
- 'photo3' : deferred(book_excerpts.c.photo3, group='photos')
+ mapper(User, users_table, properties={
+ 'posts':dynamic_loader(Post)
})
-
-You can defer or undefer columns at the `Query` level with the `options` method:
+
+ jack = session.query(User).get(id)
+
+ # filter Jack's blog posts
+ posts = jack.posts.filter(Post.c.headline=='this is a post')
+
+ # apply array slices
+ posts = jack.posts[5:20]
+
+The dynamic relation supports limited write operations, via the `append()` and `remove()` methods. Since the read side of the dynamic relation always queries the database, changes to the underlying collection will not be visible until the data has been flushed:
{python}
- query = session.query(Book)
- query.options(defer('summary')).all()
- query.options(undefer('excerpt')).all()
-
-#### Working with Large Collections
+ oldpost = jack.posts.filter(Post.c.headline=='old post').one()
+ jack.posts.remove(oldpost)
+
+ jack.posts.append(Post('new post'))
+
+To place a dynamic relation on a backref, use `lazy='dynamic'`:
-SQLAlchemy relations are generally simplistic; the lazy loader loads in the full list of child objects when accessed, and the eager load builds a query that loads the full list of child objects. Additionally, when you are deleting a parent object, SQLAlchemy ensures that it has loaded the full list of child objects so that it can mark them as deleted as well (or to update their parent foreign key to NULL). It does not issue an en-masse "delete from table where parent_id=?" type of statement in such a scenario. This is because the child objects themselves may also have further dependencies, and additionally may also exist in the current session in which case SA needs to know their identity so that their state can be properly updated.
+ {python}
+ mapper(Post, posts_table, properties={
+ 'user':relation(User, backref=backref('posts', lazy='dynamic'))
+ })
+
+Note that eager/lazy loading options cannot be used in conjunction with dynamic relations at this time.
-So there are several techniques that can be used individually or combined together to address these issues, in the context of a large collection where you normally would not want to load the full list of relationships:
+#### Setting Noload {@name=noload}
-* Use `lazy=None` to disable child object loading (i.e. noload)
+The opposite of the dynamic relation is simply "noload", specified using `lazy=None`:
- {python}
- mapper(MyClass, table, properties=relation{
- 'children':relation(MyOtherClass, lazy=None)
- })
+    {python}
+    mapper(MyClass, table, properties={
+        'children': relation(MyOtherClass, lazy=None)
+    })
-* To load child objects, just use a query. Of particular convenience is that `Query` is a generative object, so you can return
-it as is, allowing additional criterion to be added as needed:
+Above, the `children` collection is fully writeable, and changes to it will be persisted to the database as well as locally available for reading at the time they are added. However when instances of `MyClass` are freshly loaded from the database, the `children` collection stays empty.
- {python}
- class Organization(object):
- def __init__(self, name):
- self.name = name
- member_query = property(lambda self: object_session(self).query(Member).with_parent(self))
+#### Using Passive Deletes {@name=passivedelete}
- myorg = sess.query(Organization).get(5)
-
- # get all members
- members = myorg.member_query.list()
-
- # query a subset of members using LIMIT/OFFSET
- members = myorg.member_query[5:10]
-
-* Use `passive_deletes=True` to disable child object loading on a DELETE operation, in conjunction with "ON DELETE (CASCADE|SET NULL)" on your database to automatically cascade deletes to child objects. Note that "ON DELETE" is not supported on SQLite, and requires `InnoDB` tables when using MySQL:
+Use `passive_deletes=True` to disable child object loading on a DELETE operation, in conjunction with "ON DELETE (CASCADE|SET NULL)" on your database to automatically cascade deletes to child objects. Note that "ON DELETE" is not supported on SQLite, and requires `InnoDB` tables when using MySQL:
{python}
    mytable = Table('mytable', meta,
        Column('id', Integer, primary_key=True),
        )

    myothertable = Table('myothertable', meta,
        Column('id', Integer, primary_key=True),
        Column('parent_id', Integer),
        ForeignKeyConstraint(['parent_id'], ['mytable.id'], ondelete="CASCADE"),
        )

    mapper(MyOtherClass, myothertable)
mapper(MyClass, mytable, properties={
- 'children':relation(MyOtherClass, passive_deletes=True)
- })
-
-* As an alternative to using "ON DELETE CASCADE", for very simple scenarios you can create a simple `MapperExtension` that will issue a DELETE for child objects before the parent object is deleted:
-
- {python}
- class DeleteMemberExt(MapperExtension):
- def before_delete(self, mapper, connection, instance):
- connection.execute(member_table.delete(member_table.c.org_id==instance.org_id))
-
- mapper(Organization, org_table, extension=DeleteMemberExt(), properties = {
- 'members' : relation(Member, lazy=None, passive_deletes=True, cascade="all, delete-orphan")
+ 'children':relation(MyOtherClass, cascade="all, delete-orphan", passive_deletes=True)
})
-Note that this approach is not nearly as efficient or general-purpose as "ON DELETE CASCADE", since the database itself can cascade the operation along any number of tables.
-
-The latest distribution includes an example `examples/collection/large_collection.py` which illustrates most of these techniques.
-
-#### Relation Options {@name=relationoptions}
-
-Options which can be sent to the `relation()` function. For arguments to `mapper()`, see [advdatamapping_mapperoptions](rel:advdatamapping_mapperoptions).
-
-* **association** - Deprecated; as of version 0.3.0 the association keyword is synonomous with applying the "all, delete-orphan" cascade to a "one-to-many" relationship. SA can now automatically reconcile a "delete" and "insert" operation of two objects with the same "identity" in a flush() operation into a single "update" statement, which is the pattern that "association" used to indicate. See the updated example of association mappings in [datamapping_association](rel:datamapping_association).
-* **backref** - indicates the name of a property to be placed on the related mapper's class that will handle this relationship in the other direction, including synchronizing the object attributes on both sides of the relation. Can also point to a `backref()` construct for more configurability. See [datamapping_relations_backreferences](rel:datamapping_relations_backreferences).
-* **cascade** - a string list of cascade rules which determines how persistence operations should be "cascaded" from parent to child. For a description of cascade rules, see [datamapping_relations_lifecycle](rel:datamapping_relations_lifecycle) and [unitofwork_cascade](rel:unitofwork_cascade).
-* **collection_class** - a class or function that returns a new list-holding object. will be used in place of a plain list for storing elements. See [advdatamapping_properties_customlist](rel:advdatamapping_properties_customlist).
-* **foreign_keys** - a list of columns which are to be used as "foreign key" columns. this parameter should be used in conjunction with explicit
-`primaryjoin` and `secondaryjoin` (if needed) arguments, and the columns within the `foreign_keys` list should be present within those join conditions. Normally, `relation()` will inspect the columns within the join conditions to determine which columns are the "foreign key" columns, based on information in the `Table` metadata. Use this argument when no ForeignKey's are present in the join condition, or to override the table-defined foreign keys.
-* **foreignkey** - deprecated. use the `foreign_keys` argument for foreign key specification, or `remote_side` for "directional" logic.
-* **lazy=True** - specifies how the related items should be loaded. a value of True indicates they should be loaded lazily when the property is first accessed. A value of False indicates they should be loaded by joining against the parent object query, so parent and child are loaded in one round trip (i.e. eagerly). A value of None indicates the related items are not loaded by the mapper in any case; the application will manually insert items into the list in some other way. In all cases, items added or removed to the parent object's collection (or scalar attribute) will cause the appropriate updates and deletes upon flush(), i.e. this option only affects load operations, not save operations.
-* **order_by** - indicates the ordering that should be applied when loading these items. See the section [advdatamapping_orderby](rel:advdatamapping_orderby) for details.
-* **passive_deletes=False** - Indicates if lazy-loaders should not be executed during the `flush()` process, which normally occurs in order to locate all existing child items when a parent item is to be deleted. Setting this flag to True is appropriate when `ON DELETE CASCADE` rules have been set up on the actual tables so that the database may handle cascading deletes automatically. This strategy is useful particularly for handling the deletion of objects that have very large (and/or deep) child-object collections. See the example in [advdatamapping_properties_working](rel:advdatamapping_properties_working).
-* **post_update** - this indicates that the relationship should be handled by a second UPDATE statement after an INSERT or before a DELETE. Currently, it also will issue an UPDATE after the instance was UPDATEd as well, although this technically should be improved. This flag is used to handle saving bi-directional dependencies between two individual rows (i.e. each row references the other), where it would otherwise be impossible to INSERT or DELETE both rows fully since one row exists before the other. Use this flag when a particular mapping arrangement will incur two rows that are dependent on each other, such as a table that has a one-to-many relationship to a set of child rows, and also has a column that references a single child row within that list (i.e. both tables contain a foreign key to each other). If a `flush()` operation returns an error that a "cyclical dependency" was detected, this is a cue that you might want to use `post_update` to "break" the cycle.
-* **primaryjoin** - a ClauseElement that will be used as the primary join of this child object against the parent object, or in a many-to-many relationship the join of the primary object to the association table. By default, this value is computed based on the foreign key relationships of the parent and child tables (or association table).
-* **private=False** - deprecated. setting `private=True` is the equivalent of setting `cascade="all, delete-orphan"`, and indicates the lifecycle of child objects should be contained within that of the parent. See the example in [datamapping_relations_lifecycle](rel:datamapping_relations_lifecycle).
-* **remote_side** - used for self-referential relationships, indicates the column or list of columns that form the "remote side" of the relationship. See the examples in [advdatamapping_selfreferential](rel:advdatamapping_selfreferential).
-* **secondary** - for a many-to-many relationship, specifies the intermediary table. The `secondary` keyword argument should generally only be used for a table that is not otherwise expressed in any class mapping. In particular, using the [Association Object Pattern](rel:datamapping_association) is generally mutually exclusive against using the `secondary` keyword argument.
-* **secondaryjoin** - a ClauseElement that will be used as the join of an association table to the child object. By default, this value is computed based on the foreign key relationships of the association and child tables.
-* **uselist=(True|False)** - a boolean that indicates if this property should be loaded as a list or a scalar. In most cases, this value is determined automatically by `relation()`, based on the type and direction of the relationship - one to many forms a list, many to one forms a scalar, many to many is a list. If a scalar is desired where normally a list would be present, such as a bi-directional one-to-one relationship, set uselist to False.
-* **viewonly=False** - when set to True, the relation is used only for loading objects within the relationship, and has no effect on the unit-of-work flush process. Relations with viewonly can specify any kind of join conditions to provide additional views of related objects onto a parent object. Note that the functionality of a viewonly relationship has its limits - complicated join conditions may not compile into eager or lazy loaders properly. If this is the case, use an alternative method, such as those described in [advdatamapping_properties_working](rel:advdatamapping_properties_working), [advdatamapping_resultset](rel:advdatamapping_resultset), or [advdatamapping_selects](rel:advdatamapping_selects).
-
+When `passive_deletes` is applied, the `children` relation will not be loaded into memory when an instance of `MyClass` is marked for deletion. The `cascade="all, delete-orphan"` *will* take effect for instances of `MyOtherClass` which are currently present in the session; however for instances of `MyOtherClass` which are not loaded, SQLAlchemy assumes that "ON DELETE CASCADE" rules will ensure that those rows are deleted by the database and that no foreign key violation will occur.
+
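The database-side rule that `passive_deletes` counts on can be sketched with the stdlib `sqlite3` driver (a hypothetical parent/child schema, not the mappings above):

```python
import sqlite3

# Sketch: ON DELETE CASCADE handled entirely by the database, which is
# the behavior passive_deletes relies on.  Table and column names here
# are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE child (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES parent(id) ON DELETE CASCADE)""")
conn.execute("INSERT INTO parent (id) VALUES (1)")
conn.executemany("INSERT INTO child (id, parent_id) VALUES (?, 1)", [(1,), (2,)])

# Deleting the parent removes the children with no extra SELECT issued
# from the application -- the work passive_deletes avoids doing in Python.
conn.execute("DELETE FROM parent WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM child").fetchone()[0]
print(remaining)  # 0
```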
### Controlling Ordering {@name=orderby}
By default, mappers will attempt to ORDER BY the "oid" column of a table, or the primary key column, when selecting rows. This can be modified in several ways.
The "root" property on a TreeNode is a many-to-one relationship. By default, a self-referential mapper declares relationships as one-to-many, so the extra parameter `remote_side`, pointing to a column or list of columns on the remote side of a relationship, is needed to indicate a "many-to-one" self-referring relationship (note the previous keyword argument `foreignkey` is deprecated).
Both TreeNode examples above are available in functional form in the `examples/adjacencytree` directory of the distribution.
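The adjacency-list layout behind these TreeNode examples can be sketched in plain SQL, outside the ORM (hypothetical table and data, using the stdlib `sqlite3` driver):

```python
import sqlite3

# Illustrative sketch of the adjacency-list pattern: each row's
# parent_id points back at the same table.  Names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE treenodes (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES treenodes(id),
    name TEXT)""")
conn.executemany(
    "INSERT INTO treenodes (id, parent_id, name) VALUES (?, ?, ?)",
    [(1, None, "root"), (2, 1, "child1"), (3, 1, "child2"), (4, 3, "grandchild")])

# Walking down from the root with a recursive query:
rows = conn.execute("""
    WITH RECURSIVE subtree(id, name, depth) AS (
        SELECT id, name, 0 FROM treenodes WHERE parent_id IS NULL
        UNION ALL
        SELECT t.id, t.name, s.depth + 1
        FROM treenodes t JOIN subtree s ON t.parent_id = s.id)
    SELECT name, depth FROM subtree ORDER BY id""").fetchall()
print(rows)  # [('root', 0), ('child1', 1), ('child2', 1), ('grandchild', 2)]
```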
-### Statement and Result-Set ORM Queries {@name=resultset}
-
-Take any textual statement, constructed statement or result set and feed it into a Query to produce objects. Below, we define two class/mapper combinations, issue a SELECT statement, and send the result object to the method `instances()` method on `Query`:
-
- {python}
- class User(object):
- pass
-
- class Address(object):
- pass
-
- mapper(User, users_table)
-
- mapper(Address, addresses_table)
-
- # select users and addresses in one query
- # use_labels is so that the user_id column in both tables are distinguished
- s = select([users_table, addresses_table], users_table.c.user_id==addresses_table.c.user_id, use_labels=True)
-
- # execute it, and process the results, asking for both User and Address objects
- r = session.query(User, Address).instances(s.execute())
-
- # result rows come back as tuples
- for entry in r:
- user = r[0]
- address = r[1]
-
-Alternatively, the `from_statement()` method may be used with either a textual string or SQL construct:
-
- {python}
- s = select([users_table, addresses_table], users_table.c.user_id==addresses_table.c.user_id, use_labels=True)
-
- r = session.query(User, Address).from_statement(s).all()
+#### Self-Referential Query Strategies {@name=query}
- for entry in r:
- user = r[0]
- address = r[1]
+todo
-#### Combining Eager Loads with Statement/Result Set Queries
+### Combining Eager Loads with Statement/Result Set Queries
When full statement/result loads are used with `Query`, SQLAlchemy does not affect the SQL query itself, and therefore has no way of tacking on its own `LEFT [OUTER] JOIN` conditions that are normally used to eager load relationships. If the query being constructed is created in such a way that it returns rows not just from a parent table (or tables) but also returns rows from child tables, the result-set mapping can be notified as to which additional properties are contained within the result set. This is done using the `contains_eager()` query option, which specifies the name of the relationship to be eagerly loaded.
    {python}
    query = session.query(User).options(contains_eager('addresses'))

    # get results normally
+ r = query.from_statement(statement)
If the "eager" portion of the statement is "aliased", the `alias` keyword argument to `contains_eager()` may be used to indicate it. This may be a string alias name or a reference to an actual `Alias` object:
    {python}
    query = create_session().query(User).options(contains_alias('ulist'), contains_eager('addresses'))

    # results
- r = query.instances(statement.execute())
-
-
-### Mapper Keyword Arguments {@name=mapperoptions}
-
-Keyword arguments which can be used with the `mapper()` function. For arguments to `relation()`, see [advdatamapping_properties_relationoptions](rel:advdatamapping_properties_relationoptions).
-
-* **allow_column_override** - if True, allows the usage of a `relation()` which has the same name as a column in the mapped table. The table column will no longer be mapped.
-* **allow_null_pks=False** - indicates that composite primary keys where one or more (but not all) columns contain NULL is a valid primary key. Primary keys which contain NULL values usually indicate that a result row does not contain an entity and should be skipped.
-* **always_refresh=False** - if True, all query operations for this mapped class will overwrite all data within object instances that already exist within the session, erasing any in-memory changes with whatever information was loaded from the database. Note that this option bypasses the usage patterns for which the Session is designed - negative side effects should be expected, and usage issues involving this flag are not supported. For a better way to refresh data, use `query.load()`, `session.refresh()`, `session.expunge()`, or `session.clear()`.
-* **batch=True** - when False, indicates that when a mapper is persisting a list of instances, each instance will be fully saved to the database before moving onto the next instance. Normally, inserts and updates are batched together per-table, such as for an inheriting mapping that spans multiple tables. This flag is for rare circumstances where custom `MapperExtension` objects are used to attach logic to `before_insert()`, `before_update()`, etc., and the user-defined logic requires that the full persistence of each instance must be completed before moving onto the next (such as logic which queries the tables for the most recent ID). Note that this flag has a significant impact on the efficiency of a large save operation.
-* **column_prefix** - a string which will be prepended to the "key" name of all Columns when creating column-based properties from the given Table. Does not affect explicitly specified column-based properties. Setting `column_prefix='_'` is equivalent to defining all column-based properties as `_columnname=table.c.columnname`. See [advdatamapping_properties_colname](rel:advdatamapping_properties_colname) for information on overriding column-based attribute names.
-* **concrete** - if True, indicates this mapper should use concrete table inheritance with its parent mapper. Requires `inherits` to be set.
-* **entity_name** - defines this mapping as local to a particular class of entities within a single class. Allows alternate persistence mappings for a single class. See [advdatamapping_multiple](rel:advdatamapping_multiple).
-* **extension** - a MapperExtension instance or list of MapperExtension instances which will be applied to all operations by this Mapper. See [advdatamapping_extending](rel:advdatamapping_extending).
-* **inherits** - another Mapper or class for which this Mapper will have an inheritance relationship with. See the examples in [advdatamapping_inheritance](rel:advdatamapping_inheritance).
-* **inherit_condition** - for joined table inheritance, a SQL expression (constructed ClauseElement) which will define how the two tables are joined;
-defaults to a natural join between the two tables.
-* **non_primary=False** - if True, construct a Mapper that will define only the selection of instances, not their persistence. It essentially creates a mapper that can be used for querying but does not define how instances of the class are stored. A non_primary mapper is always created after a regular primary mapper has already been created for the class. To use one, send it in place of the class argument when creating a query, such as `session.query(somemapper)`. Note that it is usually invalid to define additional relationships on a non_primary mapper as they will conflict with those of the primary. See [advdatamapping_multiple](rel:advdatamapping_multiple).
-* **order_by** - a single Column or list of Columns for which selection operations should use as the default ordering for entities. Defaults to the OID/ROWID of the table if any, or the first primary key column of the table. See [advdatamapping_orderby](rel:advdatamapping_orderby).
-* **polymorphic_on** - used with mappers in an inheritance relationship, a Column which will identify the class/mapper combination to be used with a particular row. requires the `polymorphic_identity` value to be set for all mappers in the inheritance hierarchy.
-* **polymorphic_identity** - a value which will be stored in the Column denoted by `polymorphic_on`, corresponding to the "class identity" of this mapper. See [advdatamapping_inheritance](rel:advdatamapping_inheritance).
-* **primary_key** - a list of Column objects which define the "primary key" to be used against this mapper's selectable unit. The mapper normally determines these automatically from the given `local_table` of the mapper combined against any inherited tables. When this argument is specified, the primary keys of the mapped table if any are disregarded in place of the columns given. This can be used to provide primary key identity to a table that has no PKs defined at the schema level, or to modify what defines "identity" for a particular table.
-* **properties** - a dictionary mapping the string names of object attributes to MapperProperty instances, which define the persistence behavior of that attribute. Note that the columns in the mapped table are automatically converted into ColumnProperty instances based on the "key" property of each Column (although they can be overridden using this dictionary).
-* **select_table** - used with polymorphic mappers, this is a `Selectable` which will take the place of the `Mapper`'s main table argument when
-performing queries.
-* **version_id_col** - a Column which must have an integer type that will be used to keep a running "version id" of mapped entities in the database. This is used during save operations to ensure that no other thread or process has updated the instance during the lifetime of the entity, else a ConcurrentModificationError exception is thrown.
+ r = query.from_statement(statement)
### Extending Mapper {@name=extending}
Mappers can have their functionality augmented or replaced at many points in their execution via the MapperExtension class. This class is a series of "hooks" where various functionality takes place. An application can make its own MapperExtension objects, overriding only the methods it needs. Methods that are not overridden return the special value `sqlalchemy.orm.EXT_CONTINUE` to allow processing to continue to the next MapperExtension, or to simply proceed normally if there are no more extensions.
- {python}
- class MapperExtension(object):
- """Base implementation for customizing Mapper behavior.
-
- For each method in MapperExtension, returning a result of
- EXT_CONTINUE will allow processing to continue to the next
- MapperExtension in line or use the default functionality if there
- are no other extensions.
-
- Returning EXT_STOP will halt processing of further extensions
- handling that method. Some methods such as ``load`` have other
- return requirements, see the individual documentation for details.
- Other than these exception cases, any return value other than
- EXT_CONTINUE or EXT_STOP will be interpreted as equivalent to
- EXT_STOP.
-
- EXT_PASS is a synonym for EXT_CONTINUE and is provided for
- backward compatibility.
- """
-
- def get_session(self):
- """called to retrieve a contextual Session instance with which to
- register a new object. Note: this is not called if a session is
- provided with the __init__ params (i.e. _sa_session)"""
- return EXT_CONTINUE
- def select_by(self, query, *args, **kwargs):
- """overrides the select_by method of the Query object"""
- return EXT_CONTINUE
- def select(self, query, *args, **kwargs):
- """overrides the select method of the Query object"""
- return EXT_CONTINUE
- def create_instance(self, mapper, selectcontext, row, class_):
- """called when a new object instance is about to be created from a row.
- the method can choose to create the instance itself, or it can return
- None to indicate normal object creation should take place.
-
- mapper - the mapper doing the operation
-
- selectcontext - SelectionContext corresponding to the instances() call
-
- row - the result row from the database
-
- class_ - the class we are mapping.
- """
- return EXT_CONTINUE
- def append_result(self, mapper, selectcontext, row, instance, identitykey, result, isnew):
- """called when an object instance is being appended to a result list.
-
- If this method returns EXT_CONTINUE, it is assumed that the mapper should do the appending, else
- if this method returns any other value or None, it is assumed that the append was handled by this method.
-
- mapper - the mapper doing the operation
-
- selectcontext - SelectionContext corresponding to the instances() call
-
- row - the result row from the database
-
- instance - the object instance to be appended to the result
-
- identitykey - the identity key of the instance
-
- result - list to which results are being appended
-
- isnew - indicates if this is the first time we have seen this object instance in the current result
- set. if you are selecting from a join, such as an eager load, you might see the same object instance
- many times in the same result set.
- """
- return EXT_CONTINUE
- def populate_instance(self, mapper, selectcontext, row, instance, identitykey, isnew):
- """called right before the mapper, after creating an instance from a row, passes the row
- to its MapperProperty objects which are responsible for populating the object's attributes.
- If this method returns EXT_CONTINUE, it is assumed that the mapper should do the appending, else
- if this method returns any other value or None, it is assumed that the append was handled by this method.
-
- Essentially, this method is used to have a different mapper populate the object:
-
- def populate_instance(self, mapper, selectcontext, instance, row, identitykey, isnew):
- othermapper.populate_instance(selectcontext, instance, row, identitykey, isnew, frommapper=mapper)
- return True
- """
- return EXT_CONTINUE
- def before_insert(self, mapper, connection, instance):
- """called before an object instance is INSERTed into its table.
-
- this is a good place to set up primary key values and such that arent handled otherwise."""
- return EXT_CONTINUE
- def before_update(self, mapper, connection, instance):
- """called before an object instance is UPDATED"""
- return EXT_CONTINUE
- def after_update(self, mapper, connection, instance):
- """called after an object instance is UPDATED"""
- return EXT_CONTINUE
- def after_insert(self, mapper, connection, instance):
- """called after an object instance has been INSERTed"""
- return EXT_CONTINUE
- def before_delete(self, mapper, connection, instance):
- """called before an object instance is DELETEed"""
- return EXT_CONTINUE
- def after_delete(self, mapper, connection, instance):
- """called after an object instance is DELETEed"""
- return EXT_CONTINUE
+API documentation for MapperExtension: [docstrings_sqlalchemy.orm_MapperExtension](rel:docstrings_sqlalchemy.orm_MapperExtension)
+
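As a rough illustration of the hook-chain idea (the names and signatures below are simplified stand-ins, not the actual SQLAlchemy internals):

```python
# Sketch of the "chain of hooks" dispatch that MapperExtension uses:
# each extension may handle a hook, or return a sentinel to pass
# control along to the next extension in line.
EXT_CONTINUE = object()

class LoggingExtension:
    def before_insert(self, instance):
        print("about to insert", instance)
        return EXT_CONTINUE  # let the next extension (or default) run

class DefaultingExtension:
    def before_insert(self, instance):
        instance.setdefault("created", "now")
        return EXT_CONTINUE

def fire_hook(extensions, name, *args):
    # walk the extension chain until one stops processing
    for ext in extensions:
        ret = getattr(ext, name)(*args)
        if ret is not EXT_CONTINUE:
            return ret
    return None

row = {"name": "ed"}
fire_hook([LoggingExtension(), DefaultingExtension()], "before_insert", row)
print(row["created"])  # now
```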
To use MapperExtension, make your own subclass of it and just send it off to a mapper:
{python}
Bind parameters can be specified with string-based SQL, using a colon. To specify the values, use the `params()` method:
{python}
- {sql}>>> session.query(User).filter("id<:value and name=:name").params(value=224, name='fred').one() # doctest: +NORMALIZE_WHITESPACE
+ {sql}>>> session.query(User).filter("id<:value and name=:name").\
+ ... params(value=224, name='fred').one() # doctest: +NORMALIZE_WHITESPACE
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE id<? and name=? ORDER BY users.oid
>>> from sqlalchemy import select, func
{sql}>>> session.query(User).from_statement(
- ... select([users_table], select([func.max(users_table.c.name)]).label('maxuser')==users_table.c.name)
+ ... select(
+ ... [users_table],
+ ... select([func.max(users_table.c.name)]).label('maxuser')==users_table.c.name)
... ).all()
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
There's also a way to combine scalar results with objects, using `add_column()`. This is often used for functions and aggregates. When `add_column()` (or its cousin `add_entity()`, described later) is used, tuples are returned:
{python}
- {sql}>>> for r in session.query(User).add_column(select([func.max(users_table.c.name)]).label('maxuser')):
+ {sql}>>> for r in session.query(User).\
+ ... add_column(select([func.max(users_table.c.name)]).label('maxuser')):
... print r
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password, (SELECT max(users.name)
FROM users) AS maxuser
['jack@google.com']
{stop}[<User(u'jack',u'Jack Bean', u'gjffdd')>]
+By "A to B", we mean a single relation name or a path of relations. In our case we only have `User->addresses->Address` configured, but if we had a setup like `A->bars->B->bats->C->widgets->D`, a join along all four entities would look like:
+
+ {python}
+        session.query(A).join(['bars', 'bats', 'widgets']).filter(...)
+
Each time `join()` is called on `Query`, the **joinpoint** of the query is moved to the endpoint of the join. As above, when we joined from `users_table` to `addresses_table`, all subsequent criteria used by `filter_by()` are against the `addresses` table. When you `join()` again, the joinpoint starts back from the root. We can also backtrack to the beginning explicitly using `reset_joinpoint()`. This instruction places the joinpoint back at the root `users` table, where subsequent `filter_by()` criteria are again against `users`:
{python}
['jack@google.com', 'j25@yahoo.com']
{stop}2
-Uh oh, they're still there ! Anaylzing the flush SQL, we can see that the `user_id` column of each addresss was set to NULL, but the rows weren't deleted. SQLAlchemy doesn't assume that deletes cascade, you have to tell it so.
+Uh oh, they're still there! Analyzing the flush SQL, we can see that the `user_id` column of each address was set to NULL, but the rows weren't deleted. SQLAlchemy doesn't assume that deletes cascade; you have to tell it so.
-So lets rollback our work, and start fresh with new mappers that express the relationship the way we want:
+So let's roll back our work and start fresh with new mappers that express the relationship the way we want:
{python}
{sql}>>> session.rollback() # roll back the transaction
{python}
>>> session.delete(jack)
+
{sql}>>> session.commit()
DELETE FROM addresses WHERE addresses.id = ?
[1]
DELETE FROM users WHERE users.id = ?
[5]
COMMIT
-
+ {stop}
+
{sql}>>> session.query(User).filter_by(name='jack').count()
BEGIN
SELECT count(users.id)
WHERE addresses.email_address IN (?, ?)
['jack@google.com', 'j25@yahoo.com']
{stop}0
-
\ No newline at end of file
+
+## Building a Many To Many Relation
+
+We're moving into the bonus round here, but let's show off a many-to-many relationship. We'll sneak in some other features too, just to take a tour. We'll make our application a blog application, where users can write `BlogPost`s, which have `Keyword`s associated with them.
+
+First some new tables:
+
+ {python}
+ >>> post_table = Table('posts', metadata,
+ ... Column('id', Integer, primary_key=True),
+ ... Column('user_id', Integer, ForeignKey('users.id')),
+ ... Column('headline', String(255), nullable=False),
+ ... Column('body', String)
+ ... )
+
+ >>> post_keywords = Table('post_keywords', metadata,
+ ... Column('post_id', Integer, ForeignKey('posts.id')),
+ ... Column('keyword_id', Integer, ForeignKey('keywords.id')))
+
+ >>> keywords_table = Table('keywords', metadata,
+ ... Column('id', Integer, primary_key=True),
+ ... Column('keyword', String(50), nullable=False, unique=True))
+
+ {sql}>>> metadata.create_all(engine) # doctest: +NORMALIZE_WHITESPACE
+ PRAGMA table_info("users")
+ {}
+ PRAGMA table_info("posts")
+ {}
+ PRAGMA table_info("keywords")
+ {}
+ PRAGMA table_info("post_keywords")
+ {}
+ PRAGMA table_info("addresses")
+ {}
+ CREATE TABLE posts (
+ id INTEGER NOT NULL,
+ user_id INTEGER,
+ headline VARCHAR(255) NOT NULL,
+ body TEXT,
+ PRIMARY KEY (id),
+ FOREIGN KEY(user_id) REFERENCES users (id)
+ )
+ None
+ COMMIT
+ <BLANKLINE>
+ CREATE TABLE keywords (
+ id INTEGER NOT NULL,
+ keyword VARCHAR(50) NOT NULL,
+ PRIMARY KEY (id),
+ UNIQUE (keyword)
+ )
+ None
+ COMMIT
+ <BLANKLINE>
+ CREATE TABLE post_keywords (
+ post_id INTEGER,
+ keyword_id INTEGER,
+ FOREIGN KEY(post_id) REFERENCES posts (id),
+ FOREIGN KEY(keyword_id) REFERENCES keywords (id)
+ )
+ None
+ COMMIT
+
+Then some classes:
+
+ {python}
+ >>> class BlogPost(object):
+ ... def __init__(self, headline, body, author):
+ ... self.author = author
+ ... self.headline = headline
+ ... self.body = body
+ ... def __repr__(self):
+ ... return "BlogPost(%r, %r, %r)" % (self.headline, self.body, self.author)
+
+ >>> class Keyword(object):
+ ... def __init__(self, keyword):
+ ... self.keyword = keyword
+
+And the mappers. `BlogPost` will reference `User` via its `author` attribute:
+
+ {python}
+ >>> from sqlalchemy.orm import backref
+
+ >>> mapper(Keyword, keywords_table) # doctest: +ELLIPSIS
+ <sqlalchemy.orm.mapper.Mapper object at 0x...>
+
+ >>> mapper(BlogPost, post_table, properties={ # doctest: +ELLIPSIS
+ ... 'author':relation(User, backref=backref('posts', lazy='dynamic')),
+ ... 'keywords':relation(Keyword, secondary=post_keywords)
+ ... })
+ <sqlalchemy.orm.mapper.Mapper object at 0x...>
+
+There are three new things in the above mapper:
+
+ * the `User` relation has a backref, like we've used before, except this time it references a function called `backref()`. This function is used when you'd like to specify keyword options for the backwards relationship.
+ * the keyword option we specified to `backref()` is `lazy="dynamic"`. This sets a default **loader strategy** on the attribute, in this case a special strategy that allows partial loading of results.
+    * the `keywords` relation uses a keyword argument `secondary` to indicate the **association table** for the many-to-many relationship from `BlogPost` to `Keyword`.
+
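To see what the `secondary` table is doing at the SQL level, here is a sketch of the double join it provides, written against the stdlib `sqlite3` driver with hypothetical data:

```python
import sqlite3

# Sketch (plain SQL, hypothetical data): how an association table
# resolves a many-to-many join between posts and keywords -- the same
# join the ORM emits for a relation configured with `secondary`.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, headline TEXT);
    CREATE TABLE keywords (id INTEGER PRIMARY KEY, keyword TEXT UNIQUE);
    CREATE TABLE post_keywords (post_id INTEGER REFERENCES posts(id),
                                keyword_id INTEGER REFERENCES keywords(id));
    INSERT INTO posts VALUES (1, 'first post');
    INSERT INTO keywords VALUES (1, 'wendy'), (2, 'firstpost');
    INSERT INTO post_keywords VALUES (1, 1), (1, 2);
""")

# Two joins through the association table give a post's keywords:
rows = conn.execute("""
    SELECT k.keyword FROM posts p
    JOIN post_keywords pk ON pk.post_id = p.id
    JOIN keywords k ON k.id = pk.keyword_id
    WHERE p.id = 1 ORDER BY k.id""").fetchall()
print([r[0] for r in rows])  # ['wendy', 'firstpost']
```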
+Usage is not too different from what we've been doing. Let's give Wendy some blog posts:
+
+ {python}
+ {sql}>>> wendy = session.query(User).filter_by(name='wendy').one()
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE users.name = ? ORDER BY users.oid
+ LIMIT 2 OFFSET 0
+ ['wendy']
+
+ >>> post = BlogPost("Wendy's Blog Post", "This is a test", wendy)
+ >>> session.save(post)
+
+We're storing keywords uniquely in the database, but we know that we don't have any yet, so we can just create them:
+
+ {python}
+ >>> post.keywords.append(Keyword('wendy'))
+ >>> post.keywords.append(Keyword('firstpost'))
+
+We can now look up all blog posts with the keyword 'firstpost'. We'll use a special collection operator, `any`, to locate "blog posts where any of the post's keywords has the keyword string 'firstpost'":
+
+ {python}
+ {sql}>>> session.query(BlogPost).filter(BlogPost.keywords.any(keyword='firstpost')).all()
+ INSERT INTO keywords (keyword) VALUES (?)
+ ['wendy']
+ INSERT INTO keywords (keyword) VALUES (?)
+ ['firstpost']
+ INSERT INTO posts (user_id, headline, body) VALUES (?, ?, ?)
+ [2, "Wendy's Blog Post", 'This is a test']
+ INSERT INTO post_keywords (post_id, keyword_id) VALUES (?, ?)
+ [[1, 1], [1, 2]]
+ SELECT posts.id AS posts_id, posts.user_id AS posts_user_id, posts.headline AS posts_headline, posts.body AS posts_body
+ FROM posts
+ WHERE EXISTS (SELECT 1
+ FROM post_keywords, keywords
+ WHERE posts.id = post_keywords.post_id AND keywords.id = post_keywords.keyword_id AND keywords.keyword = ?) ORDER BY posts.oid
+ ['firstpost']
+ {stop}[BlogPost("Wendy's Blog Post", 'This is a test', <User(u'wendy',u'Wendy Williams', u'foobar')>)]
+
+If we want to look up just Wendy's posts, we can tell the query to narrow down to her as a parent:
+
+ {python}
+ {sql}>>> session.query(BlogPost).with_parent(wendy).\
+ ... filter(BlogPost.keywords.any(keyword='firstpost')).all()
+ SELECT posts.id AS posts_id, posts.user_id AS posts_user_id, posts.headline AS posts_headline, posts.body AS posts_body
+ FROM posts
+ WHERE ? = posts.user_id AND (EXISTS (SELECT 1
+ FROM post_keywords, keywords
+ WHERE posts.id = post_keywords.post_id AND keywords.id = post_keywords.keyword_id AND keywords.keyword = ?)) ORDER BY posts.oid
+ [2, 'firstpost']
+ {stop}[BlogPost("Wendy's Blog Post", 'This is a test', <User(u'wendy',u'Wendy Williams', u'foobar')>)]
+
+Or we can use Wendy's own `posts` relation, which is a "dynamic" relation, to query straight from there:
+
+ {python}
+ {sql}>>> wendy.posts.filter(BlogPost.keywords.any(keyword='firstpost')).all()
+ SELECT posts.id AS posts_id, posts.user_id AS posts_user_id, posts.headline AS posts_headline, posts.body AS posts_body
+ FROM posts
+ WHERE ? = posts.user_id AND (EXISTS (SELECT 1
+ FROM post_keywords, keywords
+ WHERE posts.id = post_keywords.post_id AND keywords.id = post_keywords.keyword_id AND keywords.keyword = ?)) ORDER BY posts.oid
+ [2, 'firstpost']
+ {stop}[BlogPost("Wendy's Blog Post", 'This is a test', <User(u'wendy',u'Wendy Williams', u'foobar')>)]
+
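The idea behind a "dynamic" relation can be sketched in plain Python: the attribute hands back a filterable, query-like object instead of a fully loaded list. This is a simplified analogy, not the actual implementation:

```python
# Hypothetical sketch of the lazy="dynamic" loader strategy: the
# collection stands in for a pending database query, and narrowing it
# produces another query rather than loading everything up front.
class DynamicCollection:
    def __init__(self, rows):
        self._rows = rows          # stands in for an unexecuted query
    def filter(self, pred):
        # narrowing returns another query-like object
        return DynamicCollection([r for r in self._rows if pred(r)])
    def all(self):
        # only here are results actually materialized
        return list(self._rows)

posts = DynamicCollection([{"headline": "a"}, {"headline": "b"}])
print(posts.filter(lambda p: p["headline"] == "a").all())  # [{'headline': 'a'}]
```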
+Further information on mapping setups is in [advdatamapping](rel:advdatamapping).
\ No newline at end of file