limit/offset without the extra overhead of a subquery,
since a many-to-one join does not add rows to the result.
+ - Enhancements / Changes on Session.merge():
+ - the "dont_load=True" flag on Session.merge() is deprecated
+ and is now "load=False".
+
+ - Session.merge() is performance optimized, using half the
+ call counts for "load=False" mode compared to 0.5 and
+ significantly fewer SQL queries in the case of collections
+ for "load=True" mode.
+
+ - merge() will not issue a needless merge of attributes if the
+ given instance is the same instance which is already present.
+
+ - merge() now also merges the "options" associated with a given
+ state, i.e. those passed through query.options() which follow
+ along with an instance, such as options to eagerly- or
+      lazily- load various attributes. This is essential for
+ the construction of highly integrated caching schemes. This
+ is a subtle behavioral change vs. 0.5.
+
+ - A bug was fixed regarding the serialization of the "loader
+ path" present on an instance's state, which is also necessary
+ when combining the usage of merge() with serialized state
+ and associated options that should be preserved.
+
+    - The all-new merge() is showcased in a new comprehensive
+      example of how to integrate Beaker with SQLAlchemy. See
+      the notes in the "examples" section below.
+
- Using a "dynamic" loader with a "secondary" table now produces
a query where the "secondary" table is *not* aliased. This
allows the secondary Table object to be used in the "order_by"
things like FROM expressions being placed there directly.
[ticket:1622]
- - the "dont_load=True" flag on Session.merge() is deprecated
- and is now "load=False".
-
- - Session.merge() is performance optimized, using half the
- call counts for "load=False" mode compared to 0.5 and
- significantly fewer SQL queries in the case of collections
- for "load=True" mode.
-
- `expression.null()` is fully understood the same way
None is when comparing an object/collection-referencing
attribute within query.filter(), filter_by(), etc.
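  E.g. these two filters now behave the same (a sketch;
  "Address.user" stands in for any object-referencing attribute):

      from sqlalchemy import null
      session.query(Address).filter(Address.user == null())
      # equivalent to
      session.query(Address).filter(Address.user == None)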
that is the parent AssociationProxy argument. Allows
serializability and subclassing of the built-in collections.
[ticket:1259]
-
+
+- examples
+ - The "query_cache" examples have been removed, and are replaced
+ with a fully comprehensive approach that combines the usage of
+ Beaker with SQLAlchemy. New query options are used to indicate
+ the caching characteristics of a particular Query, which
+ can also be invoked deep within an object graph when lazily
+ loading related objects. See /examples/beaker_caching/README.
+
0.5.8
=====
- sql
--- /dev/null
+Illustrates how to embed Beaker cache functionality within
+the Query object, allowing full cache control as well as the
+ability to pull "lazy loaded" attributes from long-term cache.
+
+In this demo, the following techniques are illustrated:
+
+ * Using custom subclasses of Query
+ * Basic technique of circumventing Query to pull from a
+ custom cache source instead of the database.
+ * Rudimentary caching with Beaker, using "regions" which allow
+ global control over a fixed set of configurations.
+ * Using custom MapperOption objects to configure options on
+ a Query, including the ability to invoke the options
+ deep within an object graph when lazy loads occur.
+
+To run, both SQLAlchemy and Beaker (1.4 or greater) must be
+installed or on the current PYTHONPATH. The demo will create a local
+directory for datafiles, insert initial data, and run. Running the
+demo a second time will utilize the cache files already present, and
+exactly one SQL statement against two tables will be emitted - the
+displayed result, however, will utilize dozens of lazyloads that all
+pull from cache.
+
+Two endpoint scripts, "demo.py" and "ad_hoc.py", are run as follows:
+
+ python examples/beaker_caching/demo.py
+
+ python examples/beaker_caching/ad_hoc.py
+
+
+Listing of files:
+
+__init__.py - Establish data / cache file paths and configurations;
+bootstrap fixture data if necessary.
+
+meta.py - Represent persistence structures which allow the usage of
+Beaker caching with SQLAlchemy. Introduces a query option called
+FromCache.
+
+model.py - The datamodel, which represents a Person having multiple
+Address objects, each with a PostalCode, City and Country.
+
+fixture_data.py - creates demo PostalCode, Address, Person objects
+in the database.
+
+demo.py - The first script to run - illustrates loading a list of
+Person / Address objects. When run a second time, most data is
+cached and only one SQL statement is emitted.
+
+ad_hoc.py - Further examples of how to use FromCache. Illustrates
+front-end usage, cache invalidation, loading related collections
+from cache vs. eager loading of collections.
+
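+A condensed example of the pattern, adapted from ad_hoc.py
+(Session, FromCache and Person are defined in meta.py / model.py):
+
+    from meta import Session, FromCache
+    from model import Person
+
+    # the first run emits SQL and stores the result in the
+    # "default" Beaker region; later runs with the same
+    # criterion pull from the cache.
+    q = Session.query(Person).\
+        filter(Person.name == "person 10").\
+        options(FromCache("default", "by_name"))
+    people = q.all()
+
+    # remove this query's result from the cache
+    q.invalidate()
+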
--- /dev/null
+"""__init__.py
+
+Establish data / cache file paths and configurations;
+bootstrap fixture data if necessary.
+
+"""
+import meta, model, fixture_data
+from sqlalchemy import create_engine
+import os
+
+root = "./beaker_data/"
+
+if not os.path.exists(root):
+ raw_input("Will create datafiles in %r.\n"
+ "To reset the cache + database, delete this directory.\n"
+ "Press enter to continue.\n" % root
+ )
+ os.makedirs(root)
+
+dbfile = os.path.join(root, "beaker_demo.db")
+engine = create_engine('sqlite:///%s' % dbfile, echo=True)
+meta.Session.configure(bind=engine)
+
+# configure the "default" cache region.
+meta.cache_manager.regions['default'] = {
+
+ # using type 'file' to illustrate
+ # serialized persistence. In reality,
+ # use memcached. Other backends
+ # are much, much slower.
+ 'type':'file',
+ 'data_dir':root,
+ 'expire':3600,
+
+ # set start_time to current time
+ # to re-cache everything
+ # upon application startup
+ #'start_time':time.time()
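+    # (note: enabling start_time also requires "import time"
+    # at the top of this file)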
+ }
+
+installed = False
+if not os.path.exists(dbfile):
+ fixture_data.install()
+ installed = True
\ No newline at end of file
--- /dev/null
+"""ac_hoc.py
+
+Illustrate usage of Query combined with the FromCache option,
+including front-end loading, cache invalidation, namespace techniques
+and collection caching.
+
+"""
+
+import __init__ # if running as a script
+from model import Person, Address, cache_address_bits
+from meta import Session, FromCache
+from sqlalchemy.orm import eagerload
+
+def load_name_range(start, end, invalidate=False):
+ """Load Person objects on a range of names.
+
+ start/end are integers, range is then
+ "person <start>" - "person <end>".
+
+ The cache option we set up is called "name_range", indicating
+ a range of names for the Person class.
+
+    The `Person.addresses` collections are also cached. It's basically
+ another level of tuning here, as that particular cache option
+ can be transparently replaced with eagerload(Person.addresses).
+ The effect is that each Person and his/her Address collection
+ is cached either together or separately, affecting the kind of
+    SQL emitted for unloaded Person objects as well as the distribution
+ of data within the cache.
+ """
+ q = Session.query(Person).\
+ filter(Person.name.between("person %.2d" % start, "person %.2d" % end)).\
+ options(*cache_address_bits).\
+ options(FromCache("default", "name_range"))
+
+ # have the "addresses" collection cached separately
+ # each lazyload of Person.addresses loads from cache.
+ q = q.options(FromCache("default", "by_person", Person.addresses))
+
+ # alternatively, eagerly load the "addresses" collection, so that they'd
+ # be cached together. This issues a bigger SQL statement and caches
+ # a single, larger value in the cache per person rather than two
+ # separate ones.
+ #q = q.options(eagerload(Person.addresses))
+
+ # if requested, invalidate the cache on current criterion.
+ if invalidate:
+ q.invalidate()
+
+ return q.all()
+
+print "two through twelve, possibly from cache:\n"
+print ", ".join([p.name for p in load_name_range(2, 12)])
+
+print "\ntwenty five through forty, possibly from cache:\n"
+print ", ".join([p.name for p in load_name_range(25, 40)])
+
+# loading them again, no SQL is emitted
+print "\ntwo through twelve, from the cache:\n"
+print ", ".join([p.name for p in load_name_range(2, 12)])
+
+# but with invalidate, they are
+print "\ntwenty five through forty, invalidate first:\n"
+print ", ".join([p.name for p in load_name_range(25, 40, True)])
+
+# illustrate the address collections loading either from
+# cache or from what's already present on each Person
+print "\n\nPeople plus addresses, two through twelve, addresses possibly from cache"
+for p in load_name_range(2, 12):
+ print p.format_full()
+
+# load the same range again - the addresses now come
+# from cache
+print "\n\nPeople plus addresses, two through twelve, addresses from cache"
+for p in load_name_range(2, 12):
+ print p.format_full()
+
+print "\n\nIf this was the first run of ad_hoc.py, try "\
+ "a second run. Only one SQL statement will be emitted."
--- /dev/null
+"""demo.py
+
+Load a set of Person and Address objects, specifying that
+PostalCode, City, Country objects should be pulled from long
+term cache.
+
+"""
+import __init__ # if running as a script
+from model import Person, Address, cache_address_bits
+from meta import Session
+from sqlalchemy.orm import eagerload
+import os
+
+for p in Session.query(Person).options(eagerload(Person.addresses), *cache_address_bits):
+ print p.format_full()
+
+
+print "\n\nIf this was the first run of demo.py, SQL was likely emitted to "\
+ "load postal codes, cities, countries.\n"\
+ "If run a second time, only a single SQL statement will run - all "\
+ "related data is pulled from cache.\n"\
+ "To clear the cache, delete the directory %r. \n"\
+ "This will cause a re-load of cities, postal codes and countries on "\
+ "the next run.\n"\
+ % os.path.join(__init__.root, 'container_file')
--- /dev/null
+"""fixture_data.py
+
+Installs some sample data. Here we have a handful of postal codes for a few US/
+Canadian cities. Then, 50 Person records are installed, each with a
+randomly selected postal code.
+
+"""
+from meta import Session, Base
+from model import City, Country, PostalCode, Person, Address
+import random
+
+def install():
+ Base.metadata.create_all(Session().bind)
+
+ data = [
+ ('Chicago', 'United States', ('60601', '60602', '60603', '60604')),
+ ('Montreal', 'Canada', ('H2S 3K9', 'H2B 1V4', 'H7G 2T8')),
+ ('Edmonton', 'Canada', ('T5J 1R9', 'T5J 1Z4', 'T5H 1P6')),
+ ('New York', 'United States', ('10001', '10002', '10003', '10004', '10005', '10006')),
+ ('San Francisco', 'United States', ('94102', '94103', '94104', '94105', '94107', '94108'))
+ ]
+
+ countries = {}
+ all_post_codes = []
+ for city, country, postcodes in data:
+ try:
+ country = countries[country]
+ except KeyError:
+ countries[country] = country = Country(country)
+
+ city = City(city, country)
+ pc = [PostalCode(code, city) for code in postcodes]
+ Session.add_all(pc)
+ all_post_codes.extend(pc)
+
+ for i in xrange(1, 51):
+ person = Person(
+ "person %.2d" % i,
+ Address(
+ street="street %.2d" % i,
+ postal_code=all_post_codes[random.randint(0, len(all_post_codes) - 1)]
+ )
+ )
+ Session.add(person)
+
+ Session.commit()
+
+ # start the demo fresh
+ Session.remove()
\ No newline at end of file
--- /dev/null
+"""meta.py
+
+Represent persistence structures which allow the usage of
+Beaker caching with SQLAlchemy.
+
+The three new concepts introduced here are:
+
+ * CachingQuery - a Query subclass that caches and
+ retrieves results in/from Beaker.
+ * FromCache - a query option that establishes caching
+ parameters on a Query
+ * _params_from_query - extracts value parameters from
+ a Query.
+
+The rest of what's here are standard SQLAlchemy and
+Beaker constructs.
+
+"""
+from sqlalchemy.orm import scoped_session, sessionmaker
+from sqlalchemy.orm.interfaces import MapperOption
+from sqlalchemy.orm.query import Query
+from sqlalchemy.sql import visitors
+from sqlalchemy.ext.declarative import declarative_base
+from beaker import cache
+
+class CachingQuery(Query):
+ """A Query subclass which optionally loads full results from a Beaker
+ cache region.
+
+ The CachingQuery is instructed to load from cache based on two optional
+ attributes configured on the instance, called 'cache_region' and 'cache_namespace'.
+
+ When these attributes are present, any iteration of the Query will configure
+ a Beaker cache against this region and a generated namespace, which takes
+ into account the 'cache_namespace' name as well as the entities this query
+ is created against (i.e. the columns and classes sent to the constructor).
+ The 'cache_namespace' is a string name that represents a particular structure
+ of query. E.g. a query that filters on a name might use the name "by_name",
+ a query that filters on a date range to a joined table might use the name
+ "related_date_range".
+
+    The Query then attempts to retrieve a cached value using a key, which
+ is generated from all the parameterized values present in the Query. In
+ this way, the combination of "cache_namespace" and embedded parameter values
+    corresponds exactly to the lexical structure of a SQL statement combined
+ with its bind parameters. If no such key exists then the ultimate SQL
+ is emitted and the objects loaded.
+
+ The returned objects, if loaded from cache, are merged into the Query's
+    session using Session.merge(load=False), which is a fast-performing
+ method to ensure state is present.
+
+ The FromCache mapper option below represents the "public" method of
+ configuring the "cache_region" and "cache_namespace" attributes,
+ and includes the ability to be invoked upon lazy loaders embedded
+ in an object graph.
+
+ """
+
+ def _get_cache_plus_key(self):
+ """For a query with cache_region and cache_namespace configured,
+        return the corresponding Cache instance and cache key, based
+ on this query's current criterion and parameter values.
+
+ """
+ if not hasattr(self, 'cache_region'):
+ raise ValueError("This Query does not have caching parameters configured.")
+
+ # cache namespace - the token handed in by the
+ # option + class we're querying against
+ namespace = " ".join([self.cache_namespace] + [str(x) for x in self._entities])
+
+ # cache key - the value arguments from this query's parameters.
+ args = _params_from_query(self)
+ cache_key = " ".join([str(x) for x in args])
+
+ # get cache
+ cache = cache_manager.get_cache_region(namespace, self.cache_region)
+
+ # optional - hash the cache_key too for consistent length
+ # import uuid
+ # cache_key= str(uuid.uuid5(uuid.NAMESPACE_DNS, cache_key))
+
+ return cache, cache_key
+
+ def __iter__(self):
+ """override __iter__ to pull results from Beaker
+ if particular attributes have been configured.
+ """
+ if hasattr(self, 'cache_region'):
+ cache, cache_key = self._get_cache_plus_key()
+
+ ret = cache.get_value(cache_key, createfunc=lambda: list(Query.__iter__(self)))
+ return iter(self.session.merge(x, load=False) for x in ret)
+ else:
+ return Query.__iter__(self)
+
+ def invalidate(self):
+ """Invalidate the cache represented in this Query."""
+
+ cache, cache_key = self._get_cache_plus_key()
+ cache.remove(cache_key)
+
+
+class FromCache(MapperOption):
+ """A MapperOption which configures a Query to use a particular
+ cache namespace and region.
+
+ Can optionally be configured to be invoked for a specific
+ lazy loader.
+
+ """
+ def __init__(self, region, namespace, key=None):
+ """Construct a new FromCache.
+
+ :param region: the cache region. Should be a
+ region configured in the Beaker CacheManager.
+
+ :param namespace: the cache namespace. Should
+ be a name uniquely describing the target Query's
+ lexical structure.
+
+ :param key: optional. A Class.attrname which
+ indicates a particular class relation() whose
+ lazy loader should be pulled from the cache.
+
+ """
+ self.region = region
+ self.namespace = namespace
+ if key:
+ self.cls_ = key.property.parent.class_
+ self.propname = key.property.key
+ self.propagate_to_loaders = True
+ else:
+ self.cls_ = self.propname = None
+ self.propagate_to_loaders = False
+
+ def _set_query_cache(self, query):
+ """Configure this FromCache's region and namespace on a query."""
+
+ if hasattr(query, 'cache_region'):
+ raise ValueError("This query is already configured "
+ "for region %r namespace %r" %
+ (query.cache_region, query.cache_namespace)
+ )
+ query.cache_region = self.region
+ query.cache_namespace = self.namespace
+
+ def process_query_conditionally(self, query):
+ """Process a Query that is used within a lazy loader.
+
+ (the process_query_conditionally() method is a SQLAlchemy
+ hook invoked only within lazyload.)
+
+ """
+ if self.cls_ is not None and query._current_path:
+ mapper, key = query._current_path[-2:]
+ if mapper.class_ is self.cls_ and key == self.propname:
+ self._set_query_cache(query)
+
+ def process_query(self, query):
+ """Process a Query during normal loading operation."""
+
+ if self.cls_ is None:
+ self._set_query_cache(query)
+
+def _params_from_query(query):
+ """Pull the bind parameter values from a query.
+
+ This takes into account any scalar attribute bindparam set up.
+
+    E.g. _params_from_query(query.filter(Cls.foo==5).filter(Cls.bar==7))
+ would return [5, 7].
+
+ """
+ v = []
+ def visit_bindparam(bind):
+ value = query._params.get(bind.key, bind.value)
+
+ # lazyloader may dig a callable in here, intended
+ # to late-evaluate params after autoflush is called.
+ # convert to a scalar value.
+ if callable(value):
+ value = value()
+
+ v.append(value)
+ visitors.traverse(query._criterion, {}, {'bindparam':visit_bindparam})
+ return v
+
+# Beaker CacheManager. A home base for cache configurations.
+# Configured at startup in __init__.py
+cache_manager = cache.CacheManager()
+
+# global application session.
+# configured at startup in __init__.py
+Session = scoped_session(sessionmaker(query_cls=CachingQuery))
+
+# global declarative base class.
+Base = declarative_base()
+
--- /dev/null
+"""Model. We are modeling Person objects with a collection
+of Address objects. Each Address has a PostalCode, which
+in turn references a City and then a Country:
+
+Person --(1..n)--> Address
+Address --(has a)--> PostalCode
+PostalCode --(has a)--> City
+City --(has a)--> Country
+
+"""
+from sqlalchemy import Column, Integer, String, ForeignKey
+from sqlalchemy.orm import relation
+from meta import Base, FromCache, Session
+
+class Country(Base):
+ __tablename__ = 'country'
+
+ id = Column(Integer, primary_key=True)
+ name = Column(String(100), nullable=False)
+
+ def __init__(self, name):
+ self.name = name
+
+class City(Base):
+ __tablename__ = 'city'
+
+ id = Column(Integer, primary_key=True)
+ name = Column(String(100), nullable=False)
+ country_id = Column(Integer, ForeignKey('country.id'), nullable=False)
+ country = relation(Country)
+
+ def __init__(self, name, country):
+ self.name = name
+ self.country = country
+
+class PostalCode(Base):
+ __tablename__ = 'postal_code'
+
+ id = Column(Integer, primary_key=True)
+ code = Column(String(10), nullable=False)
+ city_id = Column(Integer, ForeignKey('city.id'), nullable=False)
+ city = relation(City)
+
+ @property
+ def country(self):
+ return self.city.country
+
+ def __init__(self, code, city):
+ self.code = code
+ self.city = city
+
+class Address(Base):
+ __tablename__ = 'address'
+
+ id = Column(Integer, primary_key=True)
+ person_id = Column(Integer, ForeignKey('person.id'), nullable=False)
+ street = Column(String(200), nullable=False)
+ postal_code_id = Column(Integer, ForeignKey('postal_code.id'))
+ postal_code = relation(PostalCode)
+
+ @property
+ def city(self):
+ return self.postal_code.city
+
+ @property
+ def country(self):
+ return self.postal_code.country
+
+ def __str__(self):
+ return "%s\t"\
+ "%s, %s\t"\
+ "%s" % (self.street, self.city.name,
+ self.postal_code.code, self.country.name)
+
+class Person(Base):
+ __tablename__ = 'person'
+
+ id = Column(Integer, primary_key=True)
+ name = Column(String(100), nullable=False)
+ addresses = relation(Address, collection_class=set)
+
+ def __init__(self, name, *addresses):
+ self.name = name
+ self.addresses = set(addresses)
+
+ def __str__(self):
+ return self.name
+
+ def format_full(self):
+ return "\t".join([str(x) for x in [self] + list(self.addresses)])
+
+# Caching options. A set of three FromCache options
+# which can be applied to Query(), causing the "lazy load"
+# of these attributes to be pulled from cache.
+cache_address_bits = [
+ FromCache("default", "byid", PostalCode.city),
+ FromCache("default", "byid", City.country),
+ FromCache("default", "byid", Address.postal_code),
+ ]
+
+++ /dev/null
-"""Example of caching objects in a per-session cache,
-including implicit usage of the statement and params as a key.
-
-"""
-from sqlalchemy.orm.query import Query
-from sqlalchemy.orm.session import Session
-
-class CachingQuery(Query):
-
- def __iter__(self):
- try:
- cache = self.session._cache
- except AttributeError:
- self.session._cache = cache = {}
-
- stmt = self.statement.compile()
- params = stmt.params
- params.update(self._params)
- cachekey = str(stmt) + str(params)
-
- try:
- ret = cache[cachekey]
- except KeyError:
- ret = list(Query.__iter__(self))
- cache[cachekey] = ret
-
- return iter(ret)
-
-
-# example usage
-if __name__ == '__main__':
- from sqlalchemy import Column, create_engine, Integer, String
- from sqlalchemy.orm import sessionmaker
- from sqlalchemy.ext.declarative import declarative_base
-
- Session = sessionmaker(query_cls=CachingQuery)
-
- Base = declarative_base(engine=create_engine('sqlite://', echo=True))
-
- class User(Base):
- __tablename__ = 'users'
- id = Column(Integer, primary_key=True)
- name = Column(String(100))
-
- def __repr__(self):
- return "User(name=%r)" % self.name
-
- Base.metadata.create_all()
-
- sess = Session()
-
- sess.add_all(
- [User(name='u1'), User(name='u2'), User(name='u3')]
- )
- sess.commit()
-
- # issue a query
- print sess.query(User).filter(User.name.in_(['u2', 'u3'])).all()
-
- # issue another
- print sess.query(User).filter(User.name == 'u1').all()
-
- # pull straight from cache
- print sess.query(User).filter(User.name.in_(['u2', 'u3'])).all()
-
- print sess.query(User).filter(User.name == 'u1').all()
-
-
+++ /dev/null
-"""Example of caching objects in a per-session cache.
-
-
-This approach is faster in that objects don't need to be detached/remerged
-between sessions, but is slower in that the cache is empty at the start
-of each session's lifespan.
-
-"""
-
-from sqlalchemy.orm.query import Query, _generative
-from sqlalchemy.orm.session import Session
-
-class CachingQuery(Query):
-
- # generative method to set a "cache" key. The method of "keying" the cache
- # here can be made more sophisticated, such as caching based on the query._criterion.
- @_generative()
- def with_cache_key(self, cachekey):
- self.cachekey = cachekey
-
- def __iter__(self):
- if hasattr(self, 'cachekey'):
- try:
- cache = self.session._cache
- except AttributeError:
- self.session._cache = cache = {}
-
- try:
- ret = cache[self.cachekey]
- except KeyError:
- ret = list(Query.__iter__(self))
- cache[self.cachekey] = ret
-
- return iter(ret)
-
- else:
- return Query.__iter__(self)
-
-# example usage
-if __name__ == '__main__':
- from sqlalchemy import Column, create_engine, Integer, String
- from sqlalchemy.orm import sessionmaker
- from sqlalchemy.ext.declarative import declarative_base
-
- Session = sessionmaker(query_cls=CachingQuery)
-
- Base = declarative_base(engine=create_engine('sqlite://', echo=True))
-
- class User(Base):
- __tablename__ = 'users'
- id = Column(Integer, primary_key=True)
- name = Column(String(100))
-
- def __repr__(self):
- return "User(name=%r)" % self.name
-
- Base.metadata.create_all()
-
- sess = Session()
-
- sess.add_all(
- [User(name='u1'), User(name='u2'), User(name='u3')]
- )
- sess.commit()
-
- # cache two user objects
- sess.query(User).with_cache_key('u2andu3').filter(User.name.in_(['u2', 'u3'])).all()
-
- # pull straight from cache
- print sess.query(User).with_cache_key('u2andu3').all()
-
+++ /dev/null
-"""Example of caching objects in a global cache."""
-
-from sqlalchemy.orm.query import Query, _generative
-from sqlalchemy.orm.session import Session
-
-# the cache. This would be replaced with the caching mechanism of
-# choice, i.e. LRU cache, memcached, etc.
-_cache = {}
-
-class CachingQuery(Query):
-
- # generative method to set a "cache" key. The method of "keying" the cache
- # here can be made more sophisticated, such as caching based on the query._criterion.
- @_generative()
- def with_cache_key(self, cachekey):
- self.cachekey = cachekey
-
- # single point of object loading is __iter__(). objects in the cache are not associated
- # with a session and are never returned directly; only merged copies.
- def __iter__(self):
- if hasattr(self, 'cachekey'):
- try:
- ret = _cache[self.cachekey]
- except KeyError:
- ret = list(Query.__iter__(self))
- for x in ret:
- self.session.expunge(x)
- _cache[self.cachekey] = ret
-
- return iter(self.session.merge(x, load=False) for x in ret)
-
- else:
- return Query.__iter__(self)
-
-# example usage
-if __name__ == '__main__':
- from sqlalchemy import Column, create_engine, Integer, String
- from sqlalchemy.orm import sessionmaker
- from sqlalchemy.ext.declarative import declarative_base
-
- Session = sessionmaker(query_cls=CachingQuery)
-
- Base = declarative_base(engine=create_engine('sqlite://', echo=True))
-
- class User(Base):
- __tablename__ = 'users'
- id = Column(Integer, primary_key=True)
- name = Column(String(100))
-
- def __repr__(self):
- return "User(name=%r)" % self.name
-
- Base.metadata.create_all()
-
- sess = Session()
-
- sess.add_all(
- [User(name='u1'), User(name='u2'), User(name='u3')]
- )
- sess.commit()
-
- # cache two user objects
- sess.query(User).with_cache_key('u2andu3').filter(User.name.in_(['u2', 'u3'])).all()
-
- sess.close()
-
- sess = Session()
-
- # pull straight from cache
- print sess.query(User).with_cache_key('u2andu3').all()
-
def serialize_path(path):
if path is None:
return None
-
- return [
- (mapper.class_, key)
- for mapper, key in [(path[i], path[i+1]) for i in range(0, len(path)-1, 2)]
- ]
+
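+    # even-index elements of a path are mappers, odd-index elements
+    # are attribute keys; the keys list is padded with a trailing
+    # None so a path that ends on a mapper survives the zip().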
+ return zip(
+ [m.class_ for m in [path[i] for i in range(0, len(path), 2)]],
+ [path[i] for i in range(1, len(path), 2)] + [None]
+ )
def deserialize_path(path):
if path is None:
global class_mapper
if class_mapper is None:
from sqlalchemy.orm import class_mapper
-
- return tuple(
- chain(*[(class_mapper(cls), key) for cls, key in path])
- )
+
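+    # rebuild the flat (mapper, key, mapper, key, ...) tuple,
+    # then strip the trailing None placeholder that
+    # serialize_path() appends for mapper-terminated paths.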
+ p = tuple(chain(*[(class_mapper(cls), key) for cls, key in path]))
+ if p[-1] is None:
+ p = p[0:-1]
+ return p
class MapperOption(object):
"""Describe a modification to a Query."""
_recursive[state] = merged
- for prop in mapper.iterate_properties:
- prop.merge(self, state, state_dict, merged_state, merged_dict, load, _recursive)
+ # check that we didn't just pull the exact same
+ # state out.
+ if state is not merged_state:
+ merged_state.load_path = state.load_path
+ merged_state.load_options = state.load_options
+
+ for prop in mapper.iterate_properties:
+ prop.merge(self, state, state_dict, merged_state, merged_dict, load, _recursive)
if not load:
# remove any history
# down from 185 on this
# this is a small slice of a usually bigger
# operation so using a small variance
- @profiling.function_call_count(106, variance=0.001)
+ @profiling.function_call_count(94, variance=0.001)
def go():
- p2 = sess2.merge(p1, load=False)
+ return sess2.merge(p1, load=False)
- go()
+ p2 = go()
+
+ # third call, merge object already present.
+ # almost no calls.
+ @profiling.function_call_count(15, variance=0.001)
+ def go():
+ return sess2.merge(p2, load=False)
+
+ p3 = go()
+
@testing.resolve_artifact_names
def test_merge_load(self):
from sqlalchemy.util import OrderedSet
from sqlalchemy.orm import mapper, relation, create_session, PropComparator, \
synonym, comparable_property, sessionmaker, attributes
+from sqlalchemy.orm.interfaces import MapperOption
from sqlalchemy.test.testing import eq_, ne_
from test.orm import _base, _fixtures
from sqlalchemy.test.schema import Table, Column
assert sess.autoflush
sess.commit()
+ @testing.resolve_artifact_names
+ def test_option_state(self):
+ """test that the merged takes on the MapperOption characteristics
+ of that which is merged.
+
+ """
+ class Option(MapperOption):
+ propagate_to_loaders = True
+
+ opt1, opt2 = Option(), Option()
+
+ sess = sessionmaker()()
+
+ umapper = mapper(User, users)
+
+ sess.add_all([
+ User(id=1, name='u1'),
+ User(id=2, name='u2'),
+ ])
+ sess.commit()
+
+ sess2 = sessionmaker()()
+ s2_users = sess2.query(User).options(opt2).all()
+
+ # test 1. no options are replaced by merge options
+ sess = sessionmaker()()
+ s1_users = sess.query(User).all()
+
+ for u in s1_users:
+ ustate = attributes.instance_state(u)
+ eq_(ustate.load_path, ())
+ eq_(ustate.load_options, set())
+
+ for u in s2_users:
+ sess.merge(u)
+
+ for u in s1_users:
+ ustate = attributes.instance_state(u)
+ eq_(ustate.load_path, (umapper, ))
+ eq_(ustate.load_options, set([opt2]))
+
+ # test 2. present options are replaced by merge options
+ sess = sessionmaker()()
+ s1_users = sess.query(User).options(opt1).all()
+ for u in s1_users:
+ ustate = attributes.instance_state(u)
+ eq_(ustate.load_path, (umapper, ))
+ eq_(ustate.load_options, set([opt1]))
+
+ for u in s2_users:
+ sess.merge(u)
+
+ for u in s1_users:
+ ustate = attributes.instance_state(u)
+ eq_(ustate.load_path, (umapper, ))
+ eq_(ustate.load_options, set([opt2]))
+
class MutableMergeTest(_base.MappedTest):
@classmethod
from sqlalchemy.test import testing
from sqlalchemy import Integer, String, ForeignKey
from sqlalchemy.test.schema import Table, Column
-from sqlalchemy.orm import mapper, relation, create_session, attributes
+from sqlalchemy.orm import mapper, relation, create_session, attributes, interfaces
from test.orm import _base, _fixtures
eq_(u1, sess.query(User).get(u2.id))
+ @testing.resolve_artifact_names
+ def test_serialize_path(self):
+ umapper = mapper(User, users, properties={
+ 'addresses':relation(Address, backref="user")
+ })
+ amapper = mapper(Address, addresses)
+
+ # this is a "relation" path with mapper, key, mapper, key
+ p1 = (umapper, 'addresses', amapper, 'email_address')
+ eq_(
+ interfaces.deserialize_path(interfaces.serialize_path(p1)),
+ p1
+ )
+
+ # this is a "mapper" path with mapper, key, mapper, no key
+ # at the end.
+ p2 = (umapper, 'addresses', amapper, )
+ eq_(
+ interfaces.deserialize_path(interfaces.serialize_path(p2)),
+ p2
+ )
+
@testing.resolve_artifact_names
def test_class_deferred_cols(self):
mapper(User, users, properties={
sess.flush()
sess.expunge_all()
- u1 = sess.query(User).options(sa.orm.defer('name'), sa.orm.defer('addresses.email_address')).get(u1.id)
+ u1 = sess.query(User).\
+ options(sa.orm.defer('name'),
+ sa.orm.defer('addresses.email_address')).\
+ get(u1.id)
assert 'name' not in u1.__dict__
assert 'addresses' not in u1.__dict__
eq_(u2.name, 'ed')
assert 'addresses' not in u2.__dict__
ad = u2.addresses[0]
- assert 'email_address' in ad.__dict__ # mapper options dont transmit over merge() right now
+
+ # mapper options now transmit over merge(),
+ # new as of 0.6, so email_address is deferred.
+ assert 'email_address' not in ad.__dict__
+
eq_(ad.email_address, 'ed@bar.com')
eq_(u2, User(name='ed', addresses=[Address(email_address='ed@bar.com')]))