0.5.5
=======
- general
- - unit tests have been migrated from unittest to nose.
- See README.unittests for information on how to run
- the tests. [ticket:970]
-
+ - unit tests have been migrated from unittest to nose. See
+ README.unittests for information on how to run the tests.
+ [ticket:970]
+
- orm
- - Session.mapper is now *deprecated*.
- Call session.add() if you'd like a free-standing object to be
+ - The "foreign_keys" argument of relation() will now propagate
+ automatically to the backref in the same way that primaryjoin
+ and secondaryjoin do. For the extremely rare use case where
+ the backref of a relation() has intentionally different
+ "foreign_keys" configured, both sides now need to be
+ configured explicitly (if they do in fact require this setting,
+ see the next note...).
+
+ - ...the only known (and really, really rare) use case where a
+ different foreign_keys setting was used on the
+ forwards/backwards side, a composite foreign key that
+ partially points to its own columns, has been enhanced such
+ that the fk->itself aspect of the relation won't be used to
+ determine relation direction.
+
+ - Session.mapper is now *deprecated*.
+
+ Call session.add() if you'd like a free-standing object to be
part of your session. Otherwise, a DIY version of
- Session.mapper is now documented at
+ Session.mapper is now documented at
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/SessionAwareMapper
The method will remain deprecated throughout 0.6.
-
- - Fixed bug introduced in 0.5.4 whereby Composite types
- fail when default-holding columns are flushed.
-
- - Fixed another 0.5.4 bug whereby mutable attributes (i.e. PickleType)
- wouldn't be deserialized correctly when the whole object
- was serialized. [ticket:1426]
-
- - Fixed Query being able to join() from individual columns of
- a joined-table subclass entity, i.e.
- query(SubClass.foo, SubcClass.bar).join(<anything>).
- In most cases, an error "Could not find a FROM clause to join
- from" would be raised. In a few others, the result would be
- returned in terms of the base class rather than the subclass -
- so applications which relied on this erroneous result need to be
+
+ - Fixed Query being able to join() from individual columns of a
+ joined-table subclass entity, i.e. query(SubClass.foo,
+ SubClass.bar).join(<anything>). In most cases, an error
+ "Could not find a FROM clause to join from" would be
+ raised. In a few others, the result would be returned in terms
+ of the base class rather than the subclass - so applications
+ which relied on this erroneous result need to be
adjusted. [ticket:1431]
- - Fixed bug whereby list-based attributes, like pickletype
- and PGArray, failed to be merged() properly.
-
- - The "foreign_keys" argument of relation() will now propagate
- automatically to the backref in the same way that
- primaryjoin and secondaryjoin do. For the extremely
- rare use case where the backref of a relation() has
- intentionally different "foreign_keys" configured, both sides
- now need to be configured explicity (if they do in fact require
- this setting, see the next note...).
-
- - ...the only known (and really, really rare) use case where a
- different foreign_keys setting was used on the forwards/backwards
- side, a composite foreign key that partially points to its own
- columns, has been enhanced such that the fk->itself aspect of the
- relation won't be used to determine relation direction.
-
- - repaired non-working attributes.set_committed_value function.
+ - Fixed a bug involving contains_eager(), which would apply
+ itself to a secondary (i.e. lazy) load in a particular rare
+ case, producing Cartesian products. Improved the targeting of
+ query.options() on secondary loads overall. [ticket:1461]
+
+ - Fixed bug introduced in 0.5.4 whereby Composite types fail
+ when default-holding columns are flushed.
+
+ - Fixed another 0.5.4 bug whereby mutable attributes
+ (i.e. PickleType) wouldn't be deserialized correctly when the
+ whole object was serialized. [ticket:1426]
+
+ - Fixed bug whereby session.is_modified() would raise an
+ exception if any synonyms were in use.
+
+ - Fixed potential memory leak whereby previously pickled objects
+ placed back in a session would not be fully garbage collected
+ unless the Session were explicitly closed out.
+
+ - Fixed bug whereby list-based attributes, like pickletype and
+ PGArray, failed to be merged() properly.
+
+ - Repaired non-working attributes.set_committed_value function.
+
+ - Trimmed the pickle format for InstanceState which should
+ further reduce the memory footprint of pickled instances. The
+ format should be backwards compatible with that of 0.5.4 and
+ previous.
+
+ - sqlalchemy.orm.join and sqlalchemy.orm.outerjoin are now
+ added to __all__ in sqlalchemy.orm.*. [ticket:1463]
+
+ - Fixed bug whereby Query's exception message would itself fail
+ to render when a too-short composite primary key value was
+ passed to get(). [ticket:1458]
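The pickle-format trimming mentioned in the InstanceState note above relies on the standard `__getstate__` / `__setstate__` protocol. A generic sketch of the technique follows (a hypothetical class, not SQLAlchemy's actual InstanceState code): attributes holding default values are omitted from the pickled dict and reconstituted on load.

```python
import pickle

class InstanceStateSketch:
    """Hypothetical illustration of pickle trimming via
    __getstate__/__setstate__ (not SQLAlchemy's real class)."""

    def __init__(self, key):
        self.key = key
        self.expired = False   # default value most of the time
        self.callables = {}    # usually empty

    def __getstate__(self):
        # store only non-default values; the defaults are
        # restored in __setstate__, shrinking the pickle.
        d = {'key': self.key}
        if self.expired:
            d['expired'] = True
        if self.callables:
            d['callables'] = self.callables
        return d

    def __setstate__(self, state):
        self.key = state['key']
        self.expired = state.get('expired', False)
        self.callables = state.get('callables', {})

state = InstanceStateSketch(('User', (5,)))
copy = pickle.loads(pickle.dumps(state))
```

Because `__setstate__` falls back to defaults with `dict.get()`, pickles written by an older, fuller format still load cleanly, which is the same shape of backwards compatibility the note above claims for 0.5.4 pickles.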
- - Trimmed the pickle format for InstanceState which should further
- reduce the memory footprint of pickled instances. The format
- should be backwards compatible with that of 0.5.4 and previous.
-
- sql
- Removed an obscure feature of execute() (including connection,
- engine, Session) whereby a bindparam() construct can be sent as
- a key to the params dictionary. This usage is undocumented
+ engine, Session) whereby a bindparam() construct can be sent
+ as a key to the params dictionary. This usage is undocumented
and is at the core of an issue whereby the bindparam() object
- created implicitly by a text() construct may have the same
- hash value as a string placed in the params dictionary and
- may result in an inappropriate match when computing the final
- bind parameters. Internal checks for this condition would
- add significant latency to the critical task of parameter
+ created implicitly by a text() construct may have the same
+ hash value as a string placed in the params dictionary and may
+ result in an inappropriate match when computing the final bind
+ parameters. Internal checks for this condition would add
+ significant latency to the critical task of parameter
rendering, so the behavior is removed. This is a backwards
incompatible change for any application that may have been
using this feature, however the feature has never been
documented.
-
+
+- engine/pool
+ - Implemented recreate() for StaticPool.
+
+
0.5.4p2
=======
Relation Configuration
=======================
+
Basic Relational Patterns
--------------------------
When a structure using the above mapping is flushed, the "widget" row will be INSERTed minus the "favorite_entry_id" value, then all the "entry" rows will be INSERTed referencing the parent "widget" row, and then an UPDATE statement will populate the "favorite_entry_id" column of the "widget" table (it's one row at a time for the time being).
+.. _advdatamapping_entitycollections:
+
Alternate Collection Implementations
-------------------------------------
parent.children.append(Child())
print parent.children[0]
-Collections are not limited to lists. Sets, mutable sequences and almost any other Python object that can act as a container can be used in place of the default list.
+Collections are not limited to lists. Sets, mutable sequences and almost any other Python object that can act as a container can be used in place of the default list, by specifying the ``collection_class`` option on ``relation()``.
.. sourcecode:: python+sql
parent.children.add(child)
assert child in parent.children
-.. _advdatamapping_entitycollections:
Custom Collection Implementations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Configuring Loader Strategies: Lazy Loading, Eager Loading
-----------------------------------------------------------
-
-In the `datamapping`, we introduced the concept of **Eager Loading**. We used an ``option`` in conjunction with the ``Query`` object in order to indicate that a relation should be loaded at the same time as the parent, within a single SQL query:
+In the :ref:`ormtutorial_toplevel`, we introduced the concept of **Eager Loading**. We used an ``option`` in conjunction with the ``Query`` object in order to indicate that a relation should be loaded at the same time as the parent, within a single SQL query:
.. sourcecode:: python+sql
WHERE users.name = ?
['jack']
-By default, all relations are **lazy loading**. The scalar or collection attribute associated with a ``relation()`` contains a trigger which fires the first time the attribute is accessed, which issues a SQL call at that point:
+By default, all inter-object relationships are **lazy loading**. The scalar or collection attribute associated with a ``relation()`` contains a trigger which fires the first time the attribute is accessed, which issues a SQL call at that point:
.. sourcecode:: python+sql
``cursor()`` method will return a proxied cursor object. Both the
connection proxy and the cursor proxy will also return the underlying
connection to the pool after they have both been garbage collected,
-which is detected via the ``__del__()`` method.
+which is detected via weakref callbacks (``__del__`` is not used).
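The weakref-callback approach can be illustrated with a minimal, generic sketch (a toy model, not the pool's actual code): the callback fires when the last strong reference disappears, with no `__del__` method involved.

```python
import gc
import weakref

class Connection:
    """Stands in for a DBAPI connection."""

returned = []

class PoolSketch:
    """Toy illustration of weakref-based checkin."""
    def checkout(self):
        conn = Connection()
        # the callback runs when the last strong reference to the
        # connection is garbage collected; because no __del__ is
        # defined, reference cycles remain collectable.
        self._ref = weakref.ref(conn, lambda ref: returned.append('checkin'))
        return conn

pool = PoolSketch()
conn = pool.checkout()
del conn        # drop the last strong reference
gc.collect()    # CPython frees immediately; collect() for portability
```

After the `del`, the callback has appended `'checkin'`, modeling the connection being returned to the pool.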
Additionally, when connections are returned to the pool, a
``rollback()`` is issued on the connection unconditionally. This is
Flushing
--------
-When the ``Session`` is used with its default configuration, the flush step is nearly always done transparently. Specifically, the flush occurs before any individual ``Query`` is issued, as well as within the ``commit()`` call before the transaction is committed. It also occurs before a SAVEPOINT is issued when ``begin_nested()`` is used. The "flush-on-Query" aspect of the behavior can be disabled by constructing ``sessionmaker()`` with the flag ``autoflush=False``.
+When the ``Session`` is used with its default configuration, the flush step is nearly always done transparently. Specifically, the flush occurs before any individual ``Query`` is issued, as well as within the ``commit()`` call before the transaction is committed. It also occurs before a SAVEPOINT is issued when ``begin_nested()`` is used.
Regardless of the autoflush setting, a flush can always be forced by issuing ``flush()``::
session.flush()
-``flush()`` also supports the ability to flush a subset of objects which are present in the session, by passing a list of objects::
+The "flush-on-Query" aspect of the behavior can be disabled by constructing ``sessionmaker()`` with the flag ``autoflush=False``::
- # saves only user1 and address2. all other modified
- # objects remain present in the session.
- session.flush([user1, address2])
+ Session = sessionmaker(autoflush=False)
-This second form of flush should be used carefully as it currently does not cascade, meaning that it will not necessarily affect other objects directly associated with the objects given.
+Additionally, autoflush can be temporarily disabled by setting the ``autoflush`` flag at any time::
-The flush process *always* occurs within a transaction, even if the ``Session`` has been configured with ``autocommit=True``, a setting that disables the session's persistent transactional state. If no transaction is present, ``flush()`` creates its own transaction and commits it. Any failures during flush will always result in a rollback of whatever transaction is present.
+ mysession = Session()
+ mysession.autoflush = False
+
+Some autoflush-disable recipes are available at `DisableAutoFlush <http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DisableAutoflush>`_.
+
+The flush process *always* occurs within a transaction, even if the ``Session`` has been configured with ``autocommit=True``, a setting that disables the session's persistent transactional state. If no transaction is present, ``flush()`` creates its own transaction and commits it. Any failures during flush will always result in a rollback of whatever transaction is present. If the Session is not in ``autocommit=True`` mode, an explicit call to ``rollback()`` is required after a flush fails, even though the underlying transaction will have been rolled back already - this is so that the overall nesting pattern of so-called "subtransactions" is consistently maintained.
Committing
----------
>>> print func.concat('x', 'y')
concat(:param_1, :param_2)
-Certain functions are marked as "ANSI" functions, which mean they don't get the parenthesis added after them, such as CURRENT_TIMESTAMP:
+By "generates", we mean that **any** SQL function is created based on the word you choose::
+
+ >>> print func.xyz_my_goofy_function()
+ xyz_my_goofy_function()
+
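This generative behavior is easy to picture with a toy `__getattr__`-based stand-in (a sketch only, not SQLAlchemy's actual ``func`` implementation):

```python
class FunctionGenerator:
    """Toy stand-in: any attribute name becomes a SQL 'function'."""
    def __getattr__(self, name):
        def render(*args):
            # render "name(arg, arg, ...)" from whatever was asked for
            return '%s(%s)' % (name, ', '.join(map(str, args)))
        return render

func = FunctionGenerator()
print(func.concat(':x', ':y'))        # concat(:x, :y)
print(func.xyz_my_goofy_function())   # xyz_my_goofy_function()
```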
+Certain function names are known to SQLAlchemy, allowing special behavioral rules to be applied. Some, for example, are "ANSI" functions, which means they don't get parentheses added after them, such as CURRENT_TIMESTAMP:
.. sourcecode:: pycon+sql
'eagerload',
'eagerload_all',
'extension',
+ 'join',
'lazyload',
'mapper',
'noload',
'object_mapper',
'object_session',
+ 'outerjoin',
'polymorphic_union',
'reconstructor',
'relation',
if kwargs:
raise exceptions.ArgumentError("Invalid kwargs for contains_eager: %r" % kwargs.keys())
- return (strategies.EagerLazyOption(keys, lazy=False), strategies.LoadEagerFromAliasOption(keys, alias=alias))
+ return (strategies.EagerLazyOption(keys, lazy=False, _only_on_lead=True), strategies.LoadEagerFromAliasOption(keys, alias=alias))
@sa_util.accepts_a_list_as_starargs(list_deprecation='pending')
def defer(*keys):
searchfor = mapper
else:
searchfor = _class_to_mapper(mapper).base_mapper
-
+
for ent in query._mapper_entities:
if ent.path_entity is searchfor:
return ent
else:
if raiseerr:
- raise sa_exc.ArgumentError("Can't find entity %s in Query. Current list: %r" % (searchfor, [str(m.path_entity) for m in query._entities]))
+ raise sa_exc.ArgumentError("Can't find entity %s in Query. Current list: %r"
+ % (searchfor, [str(m.path_entity) for m in query._entities]))
else:
return None
entity = None
l = []
+ # _current_path implies we're in a secondary load
+ # with an existing path
current_path = list(query._current_path)
-
+
if self.mapper:
entity = self.__find_entity(query, self.mapper, raiseerr)
mapper = entity.mapper
if current_path and key == current_path[1]:
current_path = current_path[2:]
continue
-
+
if prop is None:
return []
path_element = mapper = getattr(prop, 'mapper', None)
if path_element:
path_element = path_element.base_mapper
-
+
+ # if current_path tokens remain, then
+ # we didn't have an exact path match.
+ if current_path:
+ return []
+
return l
class AttributeExtension(object):
# TODO:
# this provides one kind of "backwards join"
# tested in test/orm/query.py.
- # remove this in 0.6
+ # removal of this has been considered, but is deferred
+ # for now; see [ticket:1445]
if not clause:
if isinstance(onclause, interfaces.PropComparator):
clause = onclause.__clause_element__()
params[_get_params[primary_key].key] = ident[i]
except IndexError:
raise sa_exc.InvalidRequestError("Could not find enough values to formulate primary key for "
- "query.get(); primary key columns are %s" % ', '.join("'%s'" % c for c in q.mapper.primary_key))
+ "query.get(); primary key columns are %s" % ', '.join("'%s'" % c for c in mapper.primary_key))
q._params = params
if lockmode is not None:
state = attributes.instance_state(instance)
except exc.NO_STATE:
raise exc.UnmappedInstanceError(instance)
+ dict_ = state.dict
for attr in state.manager.attributes:
- if not include_collections and hasattr(attr.impl, 'get_collection'):
+ if (not include_collections and
+         hasattr(attr.impl, 'get_collection')) or \
+         not hasattr(attr.impl, 'get_history'):
continue
- (added, unchanged, deleted) = attr.get_history(instance, passive=passive)
+
+ (added, unchanged, deleted) = \
+ attr.impl.get_history(state, dict_, passive=passive)
+
if added or deleted:
return True
return False
load_path = ()
insert_order = None
mutable_dict = None
+ _strong_obj = None
def __init__(self, obj, manager):
self.class_ = obj.__class__
return d
def __setstate__(self, state):
- self.obj = weakref.ref(state['instance'])
+ self.obj = weakref.ref(state['instance'], self._cleanup)
self.class_ = state['instance'].__class__
self.manager = manager_of_class(self.class_)
self.expired = state.get('expired', False)
self.callables = state.get('callables', {})
+ if self.modified:
+ self._strong_obj = state['instance']
+
self.__dict__.update(
(k, state[k]) for k in (
'key', 'load_options', 'expired_attributes', 'mutable_dict'
instance_dict._modified.add(self)
self.modified = True
- self._strong_obj = self.obj()
+ if not self._strong_obj:
+ self._strong_obj = self.obj()
def commit(self, dict_, keys):
"""Commit attributes.
log.class_logger(EagerLoader)
class EagerLazyOption(StrategizedOption):
- def __init__(self, key, lazy=True, chained=False, mapper=None):
+ def __init__(self, key, lazy=True, chained=False, mapper=None, _only_on_lead=False):
super(EagerLazyOption, self).__init__(key, mapper)
self.lazy = lazy
self.chained = chained
+ self._only_on_lead = _only_on_lead
+ def process_query_conditionally(self, query):
+ if not self._only_on_lead:
+ StrategizedOption.process_query_conditionally(self, query)
+
def is_chained(self):
return not self.lazy and self.chained
m, alias, is_aliased_class = mapperutil._entity_info(alias)
self.alias = alias
+ def process_query_conditionally(self, query):
+ # don't run this option on a secondary load
+ pass
+
def process_query_property(self, query, paths):
if self.alias:
if isinstance(self.alias, basestring):
def dispose(self):
pass
-
+
+
class StaticPool(Pool):
"""A Pool of exactly one connection, used for all requests.
-
+
Reconnect-related functions such as ``recycle`` and connection
invalidation (which is also used to support auto-reconnect) are not
currently supported by this Pool implementation but may be implemented
in a future release.
-
+
"""
def __init__(self, creator, **params):
self._conn.close()
self._conn = None
+ def recreate(self):
+ self.log("Pool recreating")
+ return self.__class__(creator=self._creator,
+ recycle=self._recycle,
+ use_threadlocal=self._use_threadlocal,
+ reset_on_return=self._reset_on_return,
+ echo=self.echo,
+ listeners=self.listeners)
+
def create_connection(self):
return self._conn
c1 = p.connect()
assert c1.connection.id != c_id
-
-
-
+
+
+class StaticPoolTest(PoolTestBase):
+ def test_recreate(self):
+ dbapi = MockDBAPI()
+ creator = lambda: dbapi.connect('foo.db')
+ p = pool.StaticPool(creator)
+ p2 = p.recreate()
+ assert p._creator is p2._creator
('id', 'parent_id', 'data')
)
+composite_pk_table = fixture_table(
+ Table('composite_pk_table', fixture_metadata,
+ Column('i', Integer, primary_key=True),
+ Column('j', Integer, primary_key=True),
+ Column('k', Integer, nullable=False),
+ ),
+ ('i', 'j', 'k'),
+ (1, 2, 3),
+ (2, 1, 4),
+ (1, 1, 5),
+ (2, 2, 6))
+
def _load_fixtures():
for table in fixture_metadata.sorted_tables:
class Node(Base):
pass
+
+class CompositePk(Base):
+ pass
class FixtureTest(_base.MappedTest):
"""A MappedTest pre-configured for fixtures.
from sqlalchemy.orm import defer, deferred, synonym, attributes, column_property, composite, relation, dynamic_loader, comparable_property
from sqlalchemy.test.testing import eq_, AssertsCompiledSQL
from test.orm import _base, _fixtures
+from sqlalchemy.test.assertsql import AllOf, CompiledSQL
class MapperTest(_fixtures.FixtureTest):
self.sql_count_(0, go)
eq_(item.description, 'item 4')
+
+class SecondaryOptionsTest(_base.MappedTest):
+ """test that the contains_eager() option doesn't bleed into a secondary load."""
+
+ run_inserts = 'once'
+
+ run_deletes = None
+
+ @classmethod
+ def define_tables(cls, metadata):
+ Table("base", metadata,
+ Column('id', Integer, primary_key=True),
+ Column('type', String(50), nullable=False)
+ )
+ Table("child1", metadata,
+ Column('id', Integer, ForeignKey('base.id'), primary_key=True),
+ Column('child2id', Integer, ForeignKey('child2.id'), nullable=False)
+ )
+ Table("child2", metadata,
+ Column('id', Integer, ForeignKey('base.id'), primary_key=True),
+ )
+ Table('related', metadata,
+ Column('id', Integer, ForeignKey('base.id'), primary_key=True),
+ )
+
+ @classmethod
+ @testing.resolve_artifact_names
+ def setup_mappers(cls):
+ class Base(_base.ComparableEntity):
+ pass
+ class Child1(Base):
+ pass
+ class Child2(Base):
+ pass
+ class Related(_base.ComparableEntity):
+ pass
+ mapper(Base, base, polymorphic_on=base.c.type, properties={
+ 'related':relation(Related, uselist=False)
+ })
+ mapper(Child1, child1, inherits=Base, polymorphic_identity='child1', properties={
+ 'child2':relation(Child2, primaryjoin=child1.c.child2id==base.c.id, foreign_keys=child1.c.child2id)
+ })
+ mapper(Child2, child2, inherits=Base, polymorphic_identity='child2')
+ mapper(Related, related)
+
+ @classmethod
+ @testing.resolve_artifact_names
+ def insert_data(cls):
+ base.insert().execute([
+ {'id':1, 'type':'child1'},
+ {'id':2, 'type':'child1'},
+ {'id':3, 'type':'child1'},
+ {'id':4, 'type':'child2'},
+ {'id':5, 'type':'child2'},
+ {'id':6, 'type':'child2'},
+ ])
+ child2.insert().execute([
+ {'id':4},
+ {'id':5},
+ {'id':6},
+ ])
+ child1.insert().execute([
+ {'id':1, 'child2id':4},
+ {'id':2, 'child2id':5},
+ {'id':3, 'child2id':6},
+ ])
+ related.insert().execute([
+ {'id':1},
+ {'id':2},
+ {'id':3},
+ {'id':4},
+ {'id':5},
+ {'id':6},
+ ])
+
+ @testing.resolve_artifact_names
+ def test_contains_eager(self):
+ sess = create_session()
+
+
+ child1s = sess.query(Child1).join(Child1.related).options(sa.orm.contains_eager(Child1.related)).order_by(Child1.id)
+
+ def go():
+ eq_(
+ child1s.all(),
+ [Child1(id=1, related=Related(id=1)), Child1(id=2, related=Related(id=2)), Child1(id=3, related=Related(id=3))]
+ )
+ self.assert_sql_count(testing.db, go, 1)
+
+ c1 = child1s[0]
+
+ self.assert_sql_execution(
+ testing.db,
+ lambda: c1.child2,
+ CompiledSQL(
+ "SELECT base.id AS base_id, child2.id AS child2_id, base.type AS base_type "
+ "FROM base JOIN child2 ON base.id = child2.id "
+ "WHERE base.id = :param_1",
+ {'param_1':4}
+ )
+ )
+
+ @testing.resolve_artifact_names
+ def test_eagerload_on_other(self):
+ sess = create_session()
+
+ child1s = sess.query(Child1).join(Child1.related).options(sa.orm.eagerload(Child1.related)).order_by(Child1.id)
+
+ def go():
+ eq_(
+ child1s.all(),
+ [Child1(id=1, related=Related(id=1)), Child1(id=2, related=Related(id=2)), Child1(id=3, related=Related(id=3))]
+ )
+ self.assert_sql_count(testing.db, go, 1)
+
+ c1 = child1s[0]
+
+ self.assert_sql_execution(
+ testing.db,
+ lambda: c1.child2,
+ CompiledSQL(
+ "SELECT base.id AS base_id, child2.id AS child2_id, base.type AS base_type "
+ "FROM base JOIN child2 ON base.id = child2.id WHERE base.id = :param_1",
+
+# eagerload- this shouldn't happen
+# "SELECT base.id AS base_id, child2.id AS child2_id, base.type AS base_type, "
+# "related_1.id AS related_1_id FROM base JOIN child2 ON base.id = child2.id "
+# "LEFT OUTER JOIN related AS related_1 ON base.id = related_1.id WHERE base.id = :param_1",
+ {'param_1':4}
+ )
+ )
+
+ @testing.resolve_artifact_names
+ def test_eagerload_on_same(self):
+ sess = create_session()
+
+ child1s = sess.query(Child1).join(Child1.related).options(sa.orm.eagerload(Child1.child2, Child2.related)).order_by(Child1.id)
+
+ def go():
+ eq_(
+ child1s.all(),
+ [Child1(id=1, related=Related(id=1)), Child1(id=2, related=Related(id=2)), Child1(id=3, related=Related(id=3))]
+ )
+ self.assert_sql_count(testing.db, go, 4)
+
+ c1 = child1s[0]
+
+ # this *does* eagerload
+ self.assert_sql_execution(
+ testing.db,
+ lambda: c1.child2,
+ CompiledSQL(
+ "SELECT base.id AS base_id, child2.id AS child2_id, base.type AS base_type, "
+ "related_1.id AS related_1_id FROM base JOIN child2 ON base.id = child2.id "
+ "LEFT OUTER JOIN related AS related_1 ON base.id = related_1.id WHERE base.id = :param_1",
+ {'param_1':4}
+ )
+ )
+
+
class DeferredPopulationTest(_base.MappedTest):
@classmethod
def define_tables(cls, metadata):
from test.orm._fixtures import keywords, addresses, Base, Keyword, FixtureTest, \
Dingaling, item_keywords, dingalings, User, items,\
orders, Address, users, nodes, \
- order_items, Item, Order, Node
+ order_items, Item, Order, Node, \
+ composite_pk_table, CompositePk
from test.orm import _base
)
})
+ mapper(CompositePk, composite_pk_table)
+
compile_mappers()
class RowTupleTest(QueryTest):
u2 = s.query(User).get(7)
assert u is not u2
+ def test_get_composite_pk(self):
+ s = create_session()
+ assert s.query(CompositePk).get((100,100)) is None
+ one_two = s.query(CompositePk).get((1,2))
+ assert one_two.i == 1
+ assert one_two.j == 2
+ assert one_two.k == 3
+ q = s.query(CompositePk)
+ assert_raises(sa_exc.InvalidRequestError, q.get, 7)
+
+
def test_no_criterion(self):
"""test that get()/load() does not use preexisting filter/etc. criterion"""
assert s.is_modified(user)
assert not s.is_modified(user, include_collections=False)
+ @testing.resolve_artifact_names
+ def test_is_modified_syn(self):
+ s = sessionmaker()()
+
+ mapper(User, users, properties={'uname':sa.orm.synonym('name')})
+ u = User(uname='fred')
+ assert s.is_modified(u)
+ s.add(u)
+ s.commit()
+ assert not s.is_modified(u)
@testing.resolve_artifact_names
def test_weak_ref(self):
user = s.query(User).one()
assert user.name == 'fred'
assert s.identity_map
+
+ @testing.resolve_artifact_names
+ def test_weak_ref_pickled(self):
+ s = create_session()
+ mapper(User, users)
+
+ s.add(User(name='ed'))
+ s.flush()
+ assert not s.dirty
+
+ user = s.query(User).one()
+ user.name = 'fred'
+ s.expunge(user)
+
+ u2 = pickle.loads(pickle.dumps(user))
+
+ del user
+ s.add(u2)
+
+ del u2
+ gc_collect()
+
+ assert len(s.identity_map) == 1
+ assert len(s.dirty) == 1
+ assert None not in s.dirty
+ s.flush()
+ gc_collect()
+ assert not s.dirty
+
+ assert not s.identity_map
@testing.resolve_artifact_names
def test_weakref_with_cycles_o2m(self):