"load=False".
- changed two internal "no_"/"dont_" arguments to positive names
There is no implicit fallback onto "fetch". Whether evaluation succeeds is
determined by the structure of the criteria, so success/failure is
deterministic for a given code structure.
-
+ - the "dont_load=True" flag on Session.merge() is deprecated and is now
+ "load=False".
+
- sql
- returning() support is native to insert(), update(), delete(). Implementations
of varying levels of functionality exist for Postgresql, Firebird, MSSQL and
* An application which reads an object structure from a file and wishes to save it to the database might parse the file, build up the structure, and then use ``merge()`` to save it to the database, ensuring that the data within the file is used to formulate the primary key of each element of the structure. Later, when the file has changed, the same process can be re-run, producing a slightly different object structure, which can then be merged in again via ``merge()``, and the ``Session`` will automatically update the database to reflect those changes.
* A web application stores mapped entities within an HTTP session object. When each request starts up, the serialized data can be merged into the session, so that the original entity may be safely shared among requests and threads.
-``merge()`` is frequently used by applications which implement their own second level caches. This refers to an application which uses an in memory dictionary, or an tool like Memcached to store objects over long running spans of time. When such an object needs to exist within a ``Session``, ``merge()`` is a good choice since it leaves the original cached object untouched. For this use case, merge provides a keyword option called ``dont_load=True``. When this boolean flag is set to ``True``, ``merge()`` will not issue any SQL to reconcile the given object against the current state of the database, thereby reducing query overhead. The limitation is that the given object and all of its children may not contain any pending changes, and it's also of course possible that newer information in the database will not be present on the merged object, since no load is issued.
+``merge()`` is frequently used by applications which implement their own second-level caches. This refers to an application that uses an in-memory dictionary, or a tool like Memcached, to store objects over long-running spans of time. When such an object needs to exist within a ``Session``, ``merge()`` is a good choice since it leaves the original cached object untouched. For this use case, merge provides a keyword option called ``load=False``. When this boolean flag is set to ``False``, ``merge()`` will not issue any SQL to reconcile the given object against the current state of the database, thereby reducing query overhead. The limitation is that the given object and all of its children may not contain any pending changes, and it's also of course possible that newer information in the database will not be present on the merged object, since no load is issued.
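As a rough illustration of the cache flow described above (not SQLAlchemy itself: ``CacheSession``, ``User`` and ``get_user`` are hypothetical stand-ins, and only the ``merge(obj, load=False)`` call shape mirrors the real API):

```python
import copy

class CacheSession:
    """Hypothetical stand-in for a Session; only the merge(obj, load=False)
    contract described above is modeled."""

    def __init__(self):
        self.identity_map = {}

    def merge(self, obj, load=True):
        key = (type(obj), obj.id)
        if key in self.identity_map:
            return self.identity_map[key]
        if load:
            raise NotImplementedError(
                "a real Session would reconcile against the database here")
        # load=False: trust the given state as-is; no SQL is issued.
        merged = copy.copy(obj)
        self.identity_map[key] = merged
        return merged

class User:
    def __init__(self, id, name):
        self.id, self.name = id, name

cache = {}  # long-lived second-level cache, e.g. backed by Memcached

def get_user(session, user_id):
    if user_id not in cache:
        cache[user_id] = User(user_id, "ed")  # stand-in for a real query
    # Merge a copy into the session; the cached original stays untouched.
    return session.merge(cache[user_id], load=False)
```

The key property is the last line: the object handed back to the caller belongs to the session, while the cached original is never mutated and can be safely shared across threads and requests.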
Deleting
--------
self.session.expunge(x)
_cache[self.cachekey] = ret
- return iter(self.session.merge(x, dont_load=True) for x in ret)
+ return iter(self.session.merge(x, load=False) for x in ret)
else:
return Query.__iter__(self)
class _ProxyImpl(object):
accepts_scalar_loader = False
- dont_expire_missing = False
+ expire_missing = True
def __init__(self, key):
self.key = key
def __init__(self, class_, key,
callable_, trackparent=False, extension=None,
compare_function=None, active_history=False, parent_token=None,
- dont_expire_missing=False,
+ expire_missing=True,
**kwargs):
"""Construct an AttributeImpl.
Allows multiple AttributeImpls to all match a single
owner attribute.
- dont_expire_missing
- if True, don't add an "expiry" callable to this attribute
+ expire_missing
+ if False, don't add an "expiry" callable to this attribute
during state.expire_attributes(None), if no value is present
for this key.
self.is_equal = compare_function
self.extensions = util.to_list(extension or [])
self.active_history = active_history
- self.dont_expire_missing = dont_expire_missing
+ self.expire_missing = expire_missing
def hasparent(self, state, optimistic=False):
"""Return the boolean value of a `hasparent` flag attached to the given item.
return not self.parent.non_primary
- def merge(self, session, source, dest, dont_load, _recursive):
+ def merge(self, session, source, dest, load, _recursive):
"""Merge the attribute represented by this ``MapperProperty``
from source to destination object"""
if self.polymorphic_on and self.polymorphic_on not in self._columntoproperty:
col = self.mapped_table.corresponding_column(self.polymorphic_on)
if not col:
- dont_instrument = True
+ instrument = False
col = self.polymorphic_on
else:
- dont_instrument = False
+ instrument = True
if self._should_exclude(col.key, col.key, local=False):
raise sa_exc.InvalidRequestError("Cannot exclude or override the discriminator column %r" % col.key)
- self._configure_property(col.key, ColumnProperty(col, _no_instrument=dont_instrument), init=False, setparent=True)
+ self._configure_property(col.key, ColumnProperty(col, _instrument=instrument), init=False, setparent=True)
def _adapt_inherited_property(self, key, prop, init):
if not self.concrete:
self.columns = [expression._labeled(c) for c in columns]
self.group = kwargs.pop('group', None)
self.deferred = kwargs.pop('deferred', False)
- self.no_instrument = kwargs.pop('_no_instrument', False)
+ self.instrument = kwargs.pop('_instrument', True)
self.comparator_factory = kwargs.pop('comparator_factory', self.__class__.Comparator)
self.descriptor = kwargs.pop('descriptor', None)
self.extension = kwargs.pop('extension', None)
self.__class__.__name__, ', '.join(sorted(kwargs.keys()))))
util.set_creation_order(self)
- if self.no_instrument:
+ if not self.instrument:
self.strategy_class = strategies.UninstrumentedColumnLoader
elif self.deferred:
self.strategy_class = strategies.DeferredColumnLoader
self.strategy_class = strategies.ColumnLoader
def instrument_class(self, mapper):
- if self.no_instrument:
+ if not self.instrument:
return
attributes.register_descriptor(
def setattr(self, state, value, column):
state.get_impl(self.key).set(state, state.dict, value, None)
- def merge(self, session, source, dest, dont_load, _recursive):
+ def merge(self, session, source, dest, load, _recursive):
value = attributes.instance_state(source).value_as_iterable(
self.key, passive=True)
if value:
proxy_property=self.descriptor
)
- def merge(self, session, source, dest, dont_load, _recursive):
+ def merge(self, session, source, dest, load, _recursive):
pass
log.class_logger(SynonymProperty)
def create_row_processor(self, selectcontext, path, mapper, row, adapter):
return (None, None)
- def merge(self, session, source, dest, dont_load, _recursive):
+ def merge(self, session, source, dest, load, _recursive):
pass
def __str__(self):
return str(self.parent.class_.__name__) + "." + self.key
- def merge(self, session, source, dest, dont_load, _recursive):
- if not dont_load:
+ def merge(self, session, source, dest, load, _recursive):
+ if load:
# TODO: no test coverage for recursive check
for r in self._reverse_property:
if (source, r) in _recursive:
dest_list = []
for current in instances:
_recursive[(current, self)] = True
- obj = session._merge(current, dont_load=dont_load, _recursive=_recursive)
+ obj = session._merge(current, load=load, _recursive=_recursive)
if obj is not None:
dest_list.append(obj)
- if dont_load:
+ if not load:
coll = attributes.init_collection(dest_state, self.key)
for c in dest_list:
coll.append_without_event(c)
current = instances[0]
if current is not None:
_recursive[(current, self)] = True
- obj = session._merge(current, dont_load=dont_load, _recursive=_recursive)
+ obj = session._merge(current, load=load, _recursive=_recursive)
if obj is not None:
- if dont_load:
+ if not load:
dest_state.dict[self.key] = obj
else:
setattr(dest, self.key, obj)
for state, m, o in cascade_states:
self._delete_impl(state)
- def merge(self, instance, dont_load=False):
+ def merge(self, instance, load=True, **kw):
"""Copy the state of an instance onto the persistent instance with the same identifier.
If there is no persistent instance currently associated with the
mapped with ``cascade="merge"``.
"""
+ if 'dont_load' in kw:
+ load = not kw['dont_load']
+ util.warn_deprecated("dont_load=True has been renamed to load=False.")
+
# TODO: this should be an IdentityDict for instances, but will
# need a separate dict for PropertyLoader tuples
_recursive = {}
autoflush = self.autoflush
try:
self.autoflush = False
- return self._merge(instance, dont_load=dont_load, _recursive=_recursive)
+ return self._merge(instance, load=load, _recursive=_recursive)
finally:
self.autoflush = autoflush
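The ``dont_load`` shim above follows a general pattern for renaming a boolean keyword to its positive form while keeping the old spelling working. A minimal standalone sketch, assuming nothing beyond the standard library (``merge`` here is a hypothetical free function, not the real method):

```python
import warnings

def merge(instance, load=True, **kw):
    # Accept the legacy spelling, translate it to the positive-named
    # flag, and warn so callers can migrate to load=False.
    if 'dont_load' in kw:
        load = not kw.pop('dont_load')
        warnings.warn("dont_load=True has been renamed to load=False.",
                      DeprecationWarning, stacklevel=2)
    return instance, load
```

Catching the flag via ``**kw`` rather than a named ``dont_load=None`` parameter keeps the deprecated name out of the public signature and out of generated API docs.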
- def _merge(self, instance, dont_load=False, _recursive=None):
+ def _merge(self, instance, load=True, _recursive=None):
mapper = _object_mapper(instance)
if instance in _recursive:
return _recursive[instance]
key = state.key
if key is None:
- if dont_load:
+ if not load:
raise sa_exc.InvalidRequestError(
- "merge() with dont_load=True option does not support "
+ "merge() with load=False option does not support "
"transient (i.e. unpersisted) objects. flush() "
"all changes on mapped instances before merging with "
- "dont_load=True.")
+ "load=False.")
key = mapper._identity_key_from_state(state)
merged = None
if key:
if key in self.identity_map:
merged = self.identity_map[key]
- elif dont_load:
+ elif not load:
if state.modified:
raise sa_exc.InvalidRequestError(
- "merge() with dont_load=True option does not support "
+ "merge() with load=False option does not support "
"objects marked as 'dirty'. flush() all changes on "
- "mapped instances before merging with dont_load=True.")
+ "mapped instances before merging with load=False.")
merged = mapper.class_manager.new_instance()
merged_state = attributes.instance_state(merged)
merged_state.key = key
_recursive[instance] = merged
for prop in mapper.iterate_properties:
- prop.merge(self, instance, merged, dont_load, _recursive)
+ prop.merge(self, instance, merged, load, _recursive)
- if dont_load:
+ if not load:
attributes.instance_state(merged).commit_all(attributes.instance_dict(merged), self.identity_map) # remove any history
if new_instance:
for key in attribute_names:
impl = self.manager[key].impl
if not filter_deferred or \
- not impl.dont_expire_missing or \
+ impl.expire_missing or \
key in dict_:
self.expired_attributes.add(key)
if impl.accepts_scalar_loader:
copy_function=self.columns[0].type.copy_value,
mutable_scalars=self.columns[0].type.is_mutable(),
callable_=self._class_level_loader,
- dont_expire_missing=True
+ expire_missing=False
)
def setup_query(self, context, entity, path, adapter, only_load_props=None, **kwargs):
-        # test with "dontload" merge
+        # test with load=False merge
sess5 = create_session()
- u = sess5.merge(u, dont_load=True)
+ u = sess5.merge(u, load=False)
assert len(u.addresses)
for a in u.addresses:
assert a.user is u
def go():
sess5.flush()
# no changes; therefore flush should do nothing
- # but also, dont_load wipes out any difference in committed state,
+ # but also, load=False wipes out any difference in committed state,
# so no flush at all
self.assert_sql_count(testing.db, go, 0)
eq_(on_load.called, 15)
sess4 = create_session()
- u = sess4.merge(u, dont_load=True)
+ u = sess4.merge(u, load=False)
# post merge change
u.addresses[1].email_address='afafds'
def go():
assert u3 is u
@testing.resolve_artifact_names
- def test_transient_dontload(self):
+ def test_transient_no_load(self):
mapper(User, users)
sess = create_session()
u = User()
- assert_raises_message(sa.exc.InvalidRequestError, "dont_load=True option does not support", sess.merge, u, dont_load=True)
+ assert_raises_message(sa.exc.InvalidRequestError, "load=False option does not support", sess.merge, u, load=False)
+ @testing.resolve_artifact_names
+ def test_dont_load_deprecated(self):
+ mapper(User, users)
+
+ sess = create_session()
+ u = User(name='ed')
+ sess.add(u)
+ sess.flush()
+ u = sess.query(User).first()
+ sess.expunge(u)
+ sess.execute(users.update().values(name='jack'))
+ @testing.uses_deprecated("dont_load=True has been renamed")
+ def go():
+ u1 = sess.merge(u, dont_load=True)
+ assert u1 in sess
+ assert u1.name=='ed'
+ assert u1 not in sess.dirty
+ go()
@testing.resolve_artifact_names
- def test_dontload_with_backrefs(self):
- """dontload populates relations in both directions without requiring a load"""
+ def test_no_load_with_backrefs(self):
+ """load=False populates relations in both directions without requiring a load"""
mapper(User, users, properties={
'addresses':relation(mapper(Address, addresses), backref='user')
})
assert 'user' in u.addresses[1].__dict__
sess = create_session()
- u2 = sess.merge(u, dont_load=True)
+ u2 = sess.merge(u, load=False)
assert 'user' in u2.addresses[1].__dict__
eq_(u2.addresses[1].user, User(id=7, name='fred'))
sess.close()
sess = create_session()
- u = sess.merge(u2, dont_load=True)
+ u = sess.merge(u2, load=False)
assert 'user' not in u.addresses[1].__dict__
eq_(u.addresses[1].user, User(id=7, name='fred'))
-    def test_dontload_with_eager(self):
+    def test_no_load_with_eager(self):
"""
- This test illustrates that with dont_load=True, we can't just copy the
+ This test illustrates that with load=False, we can't just copy the
committed_state of the merged instance over; since it references
collection objects which themselves are to be merged. This
committed_state would instead need to be piecemeal 'converted' to
represent the correct objects. However, at the moment I'd rather not
- support this use case; if you are merging with dont_load=True, you're
+ support this use case; if you are merging with load=False, you're
typically dealing with caching and the merged objects shouldn't be
'dirty'.
u2 = sess2.query(User).options(sa.orm.eagerload('addresses')).get(7)
sess3 = create_session()
- u3 = sess3.merge(u2, dont_load=True)
+ u3 = sess3.merge(u2, load=False)
def go():
sess3.flush()
self.assert_sql_count(testing.db, go, 0)
@testing.resolve_artifact_names
- def test_dont_load_disallows_dirty(self):
- """dont_load doesnt support 'dirty' objects right now
+ def test_no_load_disallows_dirty(self):
+        """load=False doesn't support 'dirty' objects right now
- (see test_dont_load_with_eager()). Therefore lets assert it.
+        (see test_no_load_with_eager()). Therefore let's assert it.
"""
mapper(User, users)
u.name = 'ed'
sess2 = create_session()
try:
- sess2.merge(u, dont_load=True)
+ sess2.merge(u, load=False)
assert False
except sa.exc.InvalidRequestError, e:
- assert ("merge() with dont_load=True option does not support "
+ assert ("merge() with load=False option does not support "
"objects marked as 'dirty'. flush() all changes on mapped "
- "instances before merging with dont_load=True.") in str(e)
+ "instances before merging with load=False.") in str(e)
u2 = sess2.query(User).get(7)
sess3 = create_session()
- u3 = sess3.merge(u2, dont_load=True)
+ u3 = sess3.merge(u2, load=False)
assert not sess3.dirty
def go():
sess3.flush()
@testing.resolve_artifact_names
- def test_dont_load_sets_backrefs(self):
+ def test_no_load_sets_backrefs(self):
mapper(User, users, properties={
'addresses':relation(mapper(Address, addresses),backref='user')})
assert u.addresses[0].user is u
sess2 = create_session()
- u2 = sess2.merge(u, dont_load=True)
+ u2 = sess2.merge(u, load=False)
assert not sess2.dirty
def go():
assert u2.addresses[0].user is u2
self.assert_sql_count(testing.db, go, 0)
@testing.resolve_artifact_names
- def test_dont_load_preserves_parents(self):
- """Merge with dont_load does not trigger a 'delete-orphan' operation.
+ def test_no_load_preserves_parents(self):
+ """Merge with load=False does not trigger a 'delete-orphan' operation.
- merge with dont_load sets attributes without using events. this means
+ merge with load=False sets attributes without using events. this means
the 'hasparent' flag is not propagated to the newly merged instance.
in fact this works out OK, because the '_state.parents' collection on
the newly merged instance is empty; since the mapper doesn't see an
assert u.addresses[0].user is u
sess2 = create_session()
- u2 = sess2.merge(u, dont_load=True)
+ u2 = sess2.merge(u, load=False)
assert not sess2.dirty
a2 = u2.addresses[0]
a2.email_address='somenewaddress'
# this use case is not supported; this is with a pending Address on
# the pre-merged object, and we currently don't support 'dirty' objects
- # being merged with dont_load=True. in this case, the empty
+ # being merged with load=False. in this case, the empty
# '_state.parents' collection would be an issue, since the optimistic
# flag is False in _is_orphan() for pending instances. so if we start
- # supporting 'dirty' with dont_load=True, this test will need to pass
+ # supporting 'dirty' with load=False, this test will need to pass
sess = create_session()
u = sess.query(User).get(7)
u.addresses.append(Address())
sess2 = create_session()
try:
- u2 = sess2.merge(u, dont_load=True)
+ u2 = sess2.merge(u, load=False)
assert False
- # if dont_load is changed to support dirty objects, this code
+ # if load=False is changed to support dirty objects, this code
# needs to pass
a2 = u2.addresses[0]
a2.email_address='somenewaddress'
eq_(sess2.query(User).get(u2.id).addresses[0].email_address,
'somenewaddress')
except sa.exc.InvalidRequestError, e:
- assert "dont_load=True option does not support" in str(e)
+ assert "load=False option does not support" in str(e)
@testing.resolve_artifact_names
def test_synonym_comparable(self):
u2 = pickle.loads(pickle.dumps(u1))
sess2 = create_session()
- u2 = sess2.merge(u2, dont_load=True)
+ u2 = sess2.merge(u2, load=False)
eq_(u2.name, 'ed')
eq_(u2, User(name='ed', addresses=[Address(email_address='ed@bar.com')]))
u2 = pickle.loads(pickle.dumps(u1))
sess2 = create_session()
- u2 = sess2.merge(u2, dont_load=True)
+ u2 = sess2.merge(u2, load=False)
eq_(u2.name, 'ed')
assert 'addresses' not in u2.__dict__
ad = u2.addresses[0]