import re
from sphinx.util.compat import Directive
from docutils.statemachine import StringList
-from docutils import nodes
+from docutils import nodes, utils
import textwrap
import itertools
import collections
[line.strip() for line in textwrap.dedent(text).split("\n")]
)
+
+def make_ticket_link(name, rawtext, text, lineno, inliner,
+ options={}, content=[]):
+ env = inliner.document.settings.env
+ render_ticket = env.config.changelog_render_ticket or "%s"
+ prefix = "#%s"
+ if render_ticket:
+ ref = render_ticket % text
+ node = nodes.reference(rawtext, prefix % text, refuri=ref, **options)
+ else:
+ node = nodes.Text(prefix % text, prefix % text)
+ return [node], []
+
def setup(app):
app.add_directive('changelog', ChangeLogDirective)
app.add_directive('change', ChangeDirective)
None,
'env'
)
+ app.add_role('ticket', make_ticket_link)
Migration Guides
----------------
-SQLAlchemy migration guides are currently available on the wiki.
+SQLAlchemy migration guides are now available within the main documentation.
-* `Version 0.8 <http://www.sqlalchemy.org/trac/wiki/08Migration>`_
-
-* `Version 0.7 <http://www.sqlalchemy.org/trac/wiki/07Migration>`_
-
-* `Version 0.6 <http://www.sqlalchemy.org/trac/wiki/06Migration>`_
+.. toctree::
+ :maxdepth: 1
-* `Version 0.5 <http://www.sqlalchemy.org/trac/wiki/05Migration>`_
+ migration_08
+ migration_07
+ migration_06
+ migration_05
+ migration_04
Change logs
-----------
--- /dev/null
+=============================
+What's new in SQLAlchemy 0.4?
+=============================
+
+.. admonition:: About this Document
+
+ This document describes changes between SQLAlchemy version 0.3,
+ last released October 14, 2007, and SQLAlchemy version 0.4,
+ last released October 12, 2008.
+
+ Document date: March 21, 2008
+
+First Things First
+==================
+
+If you're using any ORM features, make sure you import from
+``sqlalchemy.orm``:
+
+::
+
+ from sqlalchemy import *
+ from sqlalchemy.orm import *
+
+Secondly, anywhere you used to say ``engine=``,
+``connectable=``, ``bind_to=``, ``something.engine``,
+``metadata.connect()``, use ``bind``:
+
+::
+
+ myengine = create_engine('sqlite://')
+
+ meta = MetaData(myengine)
+
+ meta2 = MetaData()
+ meta2.bind = myengine
+
+ session = create_session(bind=myengine)
+
+ statement = select([table], bind=myengine)
+
+Got those? Good! You're now (95%) 0.4 compatible. If
+you're using 0.3.10, you can make these changes immediately;
+they'll work there too.
+
+Module Imports
+==============
+
+In 0.3, "``from sqlalchemy import *``" would import all of
+sqlalchemy's sub-modules into your namespace. Version 0.4 no
+longer imports sub-modules into the namespace. This may mean
+you need to add extra imports into your code.
+
+In 0.3, this code worked:
+
+::
+
+ from sqlalchemy import *
+
+ class UTCDateTime(types.TypeDecorator):
+ pass
+
+In 0.4, one must do:
+
+::
+
+ from sqlalchemy import *
+ from sqlalchemy import types
+
+ class UTCDateTime(types.TypeDecorator):
+ pass
+
+Object Relational Mapping
+=========================
+
+Querying
+--------
+
+New Query API
+^^^^^^^^^^^^^
+
+Query is standardized on the generative interface (old
+interface is still there, just deprecated). While most of
+the generative interface is available in 0.3, the 0.4 Query
+has the inner guts to match the generative outside, and has
+a lot more tricks. All result narrowing is via ``filter()``
+and ``filter_by()``, limiting/offset is either through array
+slices or ``limit()``/``offset()``, joining is via
+``join()`` and ``outerjoin()`` (or more manually, through
+``select_from()`` as well as manually-formed criteria).
+
+To avoid deprecation warnings, you must make some changes to
+your 0.3 code.
+
+``User.query.get_by(**kwargs)`` becomes:
+
+::
+
+    User.query.filter_by(**kwargs).first()
+
+``User.query.select_by(**kwargs)`` becomes:
+
+::
+
+    User.query.filter_by(**kwargs).all()
+
+``User.query.select()`` becomes:
+
+::
+
+    User.query.filter(xxx).all()
+
+New Property-Based Expression Constructs
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+By far the most palpable difference within the ORM is that
+you can now construct your query criterion using class-based
+attributes directly. The ".c." prefix is no longer needed
+when working with mapped classes:
+
+::
+
+ session.query(User).filter(and_(User.name == 'fred', User.id > 17))
+
+While simple column-based comparisons are no big deal, the
+class attributes have some new "higher level" constructs
+available, including what was previously only available in
+``filter_by()``:
+
+::
+
+ # comparison of scalar relations to an instance
+ filter(Address.user == user)
+
+ # return all users who contain a particular address
+ filter(User.addresses.contains(address))
+
+    # return all users who *don't* contain the address
+    filter(~User.addresses.contains(address))
+
+ # return all users who contain a particular address with
+ # the email_address like '%foo%'
+ filter(User.addresses.any(Address.email_address.like('%foo%')))
+
+ # same, email address equals 'foo@bar.com'. can fall back to keyword
+ # args for simple comparisons
+ filter(User.addresses.any(email_address = 'foo@bar.com'))
+
+ # return all Addresses whose user attribute has the username 'ed'
+ filter(Address.user.has(name='ed'))
+
+ # return all Addresses whose user attribute has the username 'ed'
+ # and an id > 5 (mixing clauses with kwargs)
+ filter(Address.user.has(User.id > 5, name='ed'))
+
+The ``Column`` collection remains available on mapped
+classes in the ``.c`` attribute. Note that property-based
+expressions are only available with mapped properties of
+mapped classes. ``.c`` is still used to access columns in
+regular tables and selectable objects produced from SQL
+Expressions.
+
+Automatic Join Aliasing
+^^^^^^^^^^^^^^^^^^^^^^^
+
+We've had join() and outerjoin() for a while now:
+
+::
+
+ session.query(Order).join('items')...
+
+Now you can alias them:
+
+::
+
+    session.query(Order).join('items', aliased=True).
+        filter(Item.name == 'item 1').join('items', aliased=True).
+        filter(Item.name == 'item 3')
+
+The above will create two joins from orders->items using
+aliases. The ``filter()`` call subsequent to each will
+adjust its table criterion to that of the alias. To get at
+the ``Item`` objects, use ``add_entity()`` and target each
+join with an ``id``:
+
+::
+
+ session.query(Order).join('items', id='j1', aliased=True).
+ filter(Item.name == 'item 1').join('items', aliased=True, id='j2').
+ filter(Item.name == 'item 3').add_entity(Item, id='j1').add_entity(Item, id='j2')
+
+Returns tuples in the form: ``(Order, Item, Item)``.
+
+Self-referential Queries
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+So query.join() can make aliases now. What does that give
+us? Self-referential queries! Joins can be done without
+any ``Alias`` objects:
+
+::
+
+ # standard self-referential TreeNode mapper with backref
+ mapper(TreeNode, tree_nodes, properties={
+ 'children':relation(TreeNode, backref=backref('parent', remote_side=tree_nodes.id))
+ })
+
+ # query for node with child containing "bar" two levels deep
+ session.query(TreeNode).join(["children", "children"], aliased=True).filter_by(name='bar')
+
+To add criterion for each table along the way in an aliased
+join, you can use ``from_joinpoint`` to keep joining against
+the same line of aliases:
+
+::
+
+ # search for the treenode along the path "n1/n12/n122"
+
+ # first find a Node with name="n122"
+ q = sess.query(Node).filter_by(name='n122')
+
+ # then join to parent with "n12"
+ q = q.join('parent', aliased=True).filter_by(name='n12')
+
+ # join again to the next parent with 'n1'. use 'from_joinpoint'
+ # so we join from the previous point, instead of joining off the
+ # root table
+ q = q.join('parent', aliased=True, from_joinpoint=True).filter_by(name='n1')
+
+ node = q.first()
+
+``query.populate_existing()``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The eager version of ``query.load()`` (or
+``session.refresh()``). Every instance loaded from the
+query, including all eagerly loaded items, gets refreshed
+immediately if already present in the session:
+
+::
+
+ session.query(Blah).populate_existing().all()
+
+Relations
+---------
+
+SQL Clauses Embedded in Updates/Inserts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For inline execution of SQL clauses, embedded right in the
+UPDATE or INSERT, during a ``flush()``:
+
+::
+
+
+ myobject.foo = mytable.c.value + 1
+
+ user.pwhash = func.md5(password)
+
+ order.hash = text("select hash from hashing_table")
+
+The column-attribute is set up with a deferred loader after
+the operation, so that it issues the SQL to load the new
+value when you next access it.
+
+Self-referential and Cyclical Eager Loading
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Since our alias-fu has improved, ``relation()`` can join
+along the same table *any number of times*; you tell it how
+deep you want to go. Let's show the self-referential
+``TreeNode`` more clearly:
+
+::
+
+ nodes = Table('nodes', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('parent_id', Integer, ForeignKey('nodes.id')),
+ Column('name', String(30)))
+
+ class TreeNode(object):
+ pass
+
+ mapper(TreeNode, nodes, properties={
+ 'children':relation(TreeNode, lazy=False, join_depth=3)
+ })
+
+So what happens when we say:
+
+::
+
+ create_session().query(TreeNode).all()
+
+? A join along aliases, three levels deep off the parent:
+
+::
+
+ SELECT
+ nodes_3.id AS nodes_3_id, nodes_3.parent_id AS nodes_3_parent_id, nodes_3.name AS nodes_3_name,
+ nodes_2.id AS nodes_2_id, nodes_2.parent_id AS nodes_2_parent_id, nodes_2.name AS nodes_2_name,
+ nodes_1.id AS nodes_1_id, nodes_1.parent_id AS nodes_1_parent_id, nodes_1.name AS nodes_1_name,
+ nodes.id AS nodes_id, nodes.parent_id AS nodes_parent_id, nodes.name AS nodes_name
+ FROM nodes LEFT OUTER JOIN nodes AS nodes_1 ON nodes.id = nodes_1.parent_id
+ LEFT OUTER JOIN nodes AS nodes_2 ON nodes_1.id = nodes_2.parent_id
+ LEFT OUTER JOIN nodes AS nodes_3 ON nodes_2.id = nodes_3.parent_id
+ ORDER BY nodes.oid, nodes_1.oid, nodes_2.oid, nodes_3.oid
+
+Notice the nice clean alias names too. The joining doesn't
+care if it's against the same immediate table or some other
+object which then cycles back to the beginning. Any kind
+of chain of eager loads can cycle back onto itself when
+``join_depth`` is specified. When not present, eager
+loading automatically stops when it hits a cycle.
+
+Composite Types
+^^^^^^^^^^^^^^^
+
+This is one from the Hibernate camp. Composite Types let
+you define a custom datatype that is composed of more than
+one column (or one column, if you wanted). Let's define a
+new type, ``Point``, which stores an x/y coordinate:
+
+::
+
+ class Point(object):
+ def __init__(self, x, y):
+ self.x = x
+ self.y = y
+ def __composite_values__(self):
+ return self.x, self.y
+ def __eq__(self, other):
+ return other.x == self.x and other.y == self.y
+ def __ne__(self, other):
+ return not self.__eq__(other)
+
+The way the ``Point`` object is defined is specific to a
+custom type; its constructor takes a list of arguments, and the
+``__composite_values__()`` method produces a sequence of
+those arguments. The order will match up to our mapper, as
+we'll see in a moment.
+
+Let's create a table of vertices storing two points per row:
+
+::
+
+ vertices = Table('vertices', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('x1', Integer),
+ Column('y1', Integer),
+ Column('x2', Integer),
+ Column('y2', Integer),
+ )
+
+Then, map it! We'll create a ``Vertex`` object which
+stores two ``Point`` objects:
+
+::
+
+ class Vertex(object):
+ def __init__(self, start, end):
+ self.start = start
+ self.end = end
+
+ mapper(Vertex, vertices, properties={
+ 'start':composite(Point, vertices.c.x1, vertices.c.y1),
+ 'end':composite(Point, vertices.c.x2, vertices.c.y2)
+ })
+
+Once you've set up your composite type, it's usable just
+like any other type:
+
+::
+
+
+ v = Vertex(Point(3, 4), Point(26,15))
+ session.save(v)
+ session.flush()
+
+ # works in queries too
+ q = session.query(Vertex).filter(Vertex.start == Point(3, 4))
+
+If you'd like to define the way the mapped attributes
+generate SQL clauses when used in expressions, create your
+own ``sqlalchemy.orm.PropComparator`` subclass, defining any
+of the common operators (like ``__eq__()``, ``__le__()``,
+etc.), and send it in to ``composite()``. Composite types
+work as primary keys too, and are usable in ``query.get()``:
+
+::
+
+ # a Document class which uses a composite Version
+ # object as primary key
+ document = query.get(Version(1, 'a'))
+
+``dynamic_loader()`` relations
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A ``relation()`` that returns a live ``Query`` object for
+all read operations. Write operations are limited to just
+``append()`` and ``remove()``; changes to the collection are
+not visible until the session is flushed. This feature is
+particularly handy with an "autoflushing" session which will
+flush before each query.
+
+::
+
+ mapper(Foo, foo_table, properties={
+ 'bars':dynamic_loader(Bar, backref='foo', <other relation() opts>)
+ })
+
+ session = create_session(autoflush=True)
+ foo = session.query(Foo).first()
+
+ foo.bars.append(Bar(name='lala'))
+
+ for bar in foo.bars.filter(Bar.name=='lala'):
+ print bar
+
+ session.commit()
+
+New Options: ``undefer_group()``, ``eagerload_all()``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A couple of query options which are handy.
+``undefer_group()`` marks a whole group of "deferred"
+columns as undeferred:
+
+::
+
+    mapper(Class, table, properties={
+        'foo' : deferred(table.c.foo, group='group1'),
+        'bar' : deferred(table.c.bar, group='group1'),
+        'bat' : deferred(table.c.bat, group='group1'),
+    })
+
+ session.query(Class).options(undefer_group('group1')).filter(...).all()
+
+and ``eagerload_all()`` sets a chain of attributes to be
+eager in one pass:
+
+::
+
+ mapper(Foo, foo_table, properties={
+ 'bar':relation(Bar)
+ })
+ mapper(Bar, bar_table, properties={
+ 'bat':relation(Bat)
+ })
+ mapper(Bat, bat_table)
+
+ # eager load bar and bat
+ session.query(Foo).options(eagerload_all('bar.bat')).filter(...).all()
+
+New Collection API
+^^^^^^^^^^^^^^^^^^
+
+Collections are no longer proxied by an
+``InstrumentedList`` proxy, and access to members, methods
+and attributes is direct. Decorators now intercept objects
+entering and leaving the collection, and it is now possible
+to easily write a custom collection class that manages its
+own membership. Flexible decorators also replace the named
+method interface of custom collections in 0.3, allowing any
+class to be easily adapted to use as a collection container.
+
+Dictionary-based collections are now much easier to use and
+fully ``dict``-like. Changing ``__iter__`` is no longer
+needed for ``dict``s, and new built-in ``dict`` types cover
+many needs:
+
+::
+
+ # use a dictionary relation keyed by a column
+ relation(Item, collection_class=column_mapped_collection(items.c.keyword))
+ # or named attribute
+ relation(Item, collection_class=attribute_mapped_collection('keyword'))
+ # or any function you like
+ relation(Item, collection_class=mapped_collection(lambda entity: entity.a + entity.b))
+
+Existing 0.3 ``dict``-like and freeform object derived
+collection classes will need to be updated for the new API.
+In most cases this is simply a matter of adding a couple
+decorators to the class definition.
+
+Mapped Relations from External Tables/Subqueries
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This feature quietly appeared in 0.3 but has been improved
+in 0.4 thanks to better ability to convert subqueries
+against a table into subqueries against an alias of that
+table; this is key for eager loading, aliased joins in
+queries, etc. It reduces the need to create mappers against
+select statements when you just need to add some extra
+columns or subqueries:
+
+::
+
+ mapper(User, users, properties={
+ 'fullname': column_property((users.c.firstname + users.c.lastname).label('fullname')),
+ 'numposts': column_property(
+ select([func.count(1)], users.c.id==posts.c.user_id).correlate(users).label('posts')
+ )
+ })
+
+a typical query looks like:
+
+::
+
+ SELECT (SELECT count(1) FROM posts WHERE users.id = posts.user_id) AS count,
+ users.firstname || users.lastname AS fullname,
+ users.id AS users_id, users.firstname AS users_firstname, users.lastname AS users_lastname
+ FROM users ORDER BY users.oid
+
+Horizontal Scaling (Sharding) API
+---------------------------------
+
+See the example at ``examples/sharding/attribute_shard.py``
+in the source distribution.
+
+Sessions
+--------
+
+New Session Create Paradigm; SessionContext, assignmapper Deprecated
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+That's right, the whole shebang is being replaced with two
+configurational functions. Using both will produce the most
+0.1-ish feel we've had since 0.1 (i.e., the least amount of
+typing).
+
+Configure your own ``Session`` class right where you define
+your ``engine`` (or anywhere):
+
+::
+
+ from sqlalchemy import create_engine
+ from sqlalchemy.orm import sessionmaker
+
+ engine = create_engine('myengine://')
+ Session = sessionmaker(bind=engine, autoflush=True, transactional=True)
+
+ # use the new Session() freely
+ sess = Session()
+ sess.save(someobject)
+ sess.flush()
+
+
+If you need to post-configure your Session, say with an
+engine, add it later with ``configure()``:
+
+::
+
+ Session.configure(bind=create_engine(...))
+
+All the behaviors of ``SessionContext`` and the ``query``
+and ``__init__`` methods of ``assignmapper`` are moved into
+the new ``scoped_session()`` function, which is compatible
+with both ``sessionmaker`` as well as ``create_session()``:
+
+::
+
+ from sqlalchemy.orm import scoped_session, sessionmaker
+
+ Session = scoped_session(sessionmaker(autoflush=True, transactional=True))
+ Session.configure(bind=engine)
+
+ u = User(name='wendy')
+
+ sess = Session()
+ sess.save(u)
+ sess.commit()
+
+ # Session constructor is thread-locally scoped. Everyone gets the same
+ # Session in the thread when scope="thread".
+ sess2 = Session()
+ assert sess is sess2
+
+
+When using a thread-local ``Session``, the returned class
+has all of ``Session``'s interface implemented as
+classmethods, and "assignmapper"'s functionality is
+available using the ``mapper`` classmethod. Just like the
+old ``objectstore`` days....
+
+::
+
+
+ # "assignmapper"-like functionality available via ScopedSession.mapper
+ Session.mapper(User, users_table)
+
+ u = User(name='wendy')
+
+ Session.commit()
+
+
+Sessions are again Weak Referencing By Default
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The weak_identity_map flag is now set to ``True`` by default
+on Session. Instances which are externally dereferenced and
+fall out of scope are removed from the session
+automatically. However, items which have "dirty" changes
+present will remain strongly referenced until those changes
+are flushed, at which point the object reverts to being weakly
+referenced (this works for 'mutable' types, like picklable
+attributes, as well). Setting weak_identity_map to
+``False`` restores the old strong-referencing behavior for
+those of you using the session like a cache.
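The mechanism can be sketched with the standard library alone; a
``weakref.WeakValueDictionary`` behaves like the weak-referencing identity
map, dropping entries once an instance is externally dereferenced (an
illustrative sketch, not the actual Session internals):

```python
import gc
import weakref

class User(object):
    def __init__(self, ident):
        self.id = ident

# the identity map holds only weak references to its values
identity_map = weakref.WeakValueDictionary()

u = User(1)
identity_map[(User, (1,))] = u
assert (User, (1,)) in identity_map

del u            # instance is externally dereferenced...
gc.collect()     # (CPython frees immediately; collect() for good measure)

# ...and has fallen out of the map automatically
assert (User, (1,)) not in identity_map
```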
+
+Auto-Transactional Sessions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+As you might have noticed above, we are calling ``commit()``
+on ``Session``. The flag ``transactional=True`` means the
+``Session`` is always in a transaction; ``commit()``
+persists changes permanently.
+
+Auto-Flushing Sessions
+^^^^^^^^^^^^^^^^^^^^^^
+
+Also, ``autoflush=True`` means the ``Session`` will
+``flush()`` before each ``query`` as well as when you call
+``flush()`` or ``commit()``. So now this will work:
+
+::
+
+ Session = sessionmaker(bind=engine, autoflush=True, transactional=True)
+
+ u = User(name='wendy')
+
+ sess = Session()
+ sess.save(u)
+
+ # wendy is flushed, comes right back from a query
+ wendy = sess.query(User).filter_by(name='wendy').one()
+
+Transactional methods moved onto sessions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+``commit()`` and ``rollback()``, as well as ``begin()`` are
+now directly on ``Session``. No more need to use
+``SessionTransaction`` for anything (it remains in the
+background).
+
+::
+
+ Session = sessionmaker(autoflush=True, transactional=False)
+
+ sess = Session()
+ sess.begin()
+
+ # use the session
+
+ sess.commit() # commit transaction
+
+Sharing a ``Session`` with an enclosing engine-level (i.e.
+non-ORM) transaction is easy:
+
+::
+
+ Session = sessionmaker(autoflush=True, transactional=False)
+
+ conn = engine.connect()
+ trans = conn.begin()
+ sess = Session(bind=conn)
+
+ # ... session is transactional
+
+ # commit the outermost transaction
+ trans.commit()
+
+Nested Session Transactions with SAVEPOINT
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Available at the Engine and ORM level. ORM docs so far:
+
+http://www.sqlalchemy.org/docs/04/session.html#unitofwork_managing
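As a rough illustration of what a SAVEPOINT-based nested transaction does
at the database level (shown here with the standard-library sqlite3 driver
issuing the SQL by hand, not the SQLAlchemy API):

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode so we can
# issue transaction statements ourselves
conn = sqlite3.connect(':memory:', isolation_level=None)
conn.execute("CREATE TABLE users (name TEXT)")

conn.execute("BEGIN")
conn.execute("INSERT INTO users VALUES ('ed')")

conn.execute("SAVEPOINT sp1")                # open the nested transaction
conn.execute("INSERT INTO users VALUES ('wendy')")
conn.execute("ROLLBACK TO SAVEPOINT sp1")    # undo only the nested part

conn.execute("COMMIT")                       # 'ed' survives, 'wendy' does not

names = [row[0] for row in conn.execute("SELECT name FROM users")]
assert names == ['ed']
```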
+
+Two-Phase Commit Sessions
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Available at the Engine and ORM level. ORM docs so far:
+
+http://www.sqlalchemy.org/docs/04/session.html#unitofwork_managing
+
+Inheritance
+-----------
+
+Polymorphic Inheritance with No Joins or Unions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+New docs for inheritance:
+http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_inheritance_joined
+
+Better Polymorphic Behavior with ``get()``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+All classes within a joined-table inheritance hierarchy get
+an ``_instance_key`` using the base class, i.e.
+``(BaseClass, (1, ), None)``. That way when you call
+``get()`` on a ``Query`` against the base class, it can locate
+subclass instances in the current identity map without
+querying the database.
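The idea can be sketched in plain Python (illustrative only, not the actual
internals): since every instance in the hierarchy is stored under a key
built from the base class, a single dictionary lookup finds subclass
instances:

```python
class BaseClass(object):
    def __init__(self, ident):
        self.id = ident

class SubClass(BaseClass):
    pass

identity_map = {}

obj = SubClass(1)
# the key is always built from the base of the hierarchy
identity_map[(BaseClass, (1,), None)] = obj

def get(ident):
    # get() against the base class: one dictionary lookup,
    # no database round trip, even though a SubClass is stored
    return identity_map.get((BaseClass, (ident,), None))

assert get(1) is obj
```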
+
+Types
+-----
+
+Custom Subclasses of ``sqlalchemy.types.TypeDecorator``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There is a `New API
+<http://www.sqlalchemy.org/docs/04/types.html#types_custom>`_
+for subclassing a TypeDecorator.
+Using the 0.3 API causes compilation errors in some cases.
+
+SQL Expressions
+===============
+
+All New, Deterministic Label/Alias Generation
+---------------------------------------------
+
+All the "anonymous" labels and aliases use a simple
+<name>_<number> format now. SQL is much easier to read and
+is compatible with plan optimizer caches. Just check out
+some of the examples in the tutorials:
+http://www.sqlalchemy.org/docs/04/ormtutorial.html
+http://www.sqlalchemy.org/docs/04/sqlexpression.html
+
+Generative select() Constructs
+------------------------------
+
+This is definitely the way to go with ``select()``. See
+http://www.sqlalchemy.org/docs/04/sqlexpression.html#sql_transform.
+
+New Operator System
+-------------------
+
+SQL operators and more or less every SQL keyword there is
+are now abstracted into the compiler layer. They now act
+intelligently and are type/backend aware; see:
+http://www.sqlalchemy.org/docs/04/sqlexpression.html#sql_operators
+
+All ``type`` Keyword Arguments Renamed to ``type_``
+---------------------------------------------------
+
+Just like it says:
+
+::
+
+ b = bindparam('foo', type_=String)
+
+in_ Function Changed to Accept Sequence or Selectable
+-----------------------------------------------------
+
+The ``in_`` function now takes a sequence of values or a
+selectable as its sole argument. The previous API of passing
+in values as positional arguments still works, but is now
+deprecated. This means that
+
+::
+
+    my_table.select(my_table.c.id.in_(1, 2, 3))
+    my_table.select(my_table.c.id.in_(*listOfIds))
+
+should be changed to
+
+::
+
+    my_table.select(my_table.c.id.in_([1, 2, 3]))
+    my_table.select(my_table.c.id.in_(listOfIds))
+
+Schema and Reflection
+=====================
+
+``MetaData``, ``BoundMetaData``, ``DynamicMetaData``...
+-------------------------------------------------------
+
+In the 0.3.x series, ``BoundMetaData`` and
+``DynamicMetaData`` were deprecated in favor of ``MetaData``
+and ``ThreadLocalMetaData``. The older names have been
+removed in 0.4. Updating is simple:
+
+::
+
+ +-------------------------------------+-------------------------+
+ |If You Had | Now Use |
+ +=====================================+=========================+
+ | ``MetaData`` | ``MetaData`` |
+ +-------------------------------------+-------------------------+
+ | ``BoundMetaData`` | ``MetaData`` |
+ +-------------------------------------+-------------------------+
+ | ``DynamicMetaData`` (with one | ``MetaData`` |
+ | engine or threadlocal=False) | |
+ +-------------------------------------+-------------------------+
+ | ``DynamicMetaData`` | ``ThreadLocalMetaData`` |
+ | (with different engines per thread) | |
+ +-------------------------------------+-------------------------+
+
+The seldom-used ``name`` parameter to ``MetaData`` types has
+been removed. The ``ThreadLocalMetaData`` constructor now
+takes no arguments. Both types can now be bound to an
+``Engine`` or a single ``Connection``.
+
+One Step Multi-Table Reflection
+-------------------------------
+
+You can now load table definitions and automatically create
+``Table`` objects from an entire database or schema in one
+pass:
+
+::
+
+ >>> metadata = MetaData(myengine, reflect=True)
+ >>> metadata.tables.keys()
+ ['table_a', 'table_b', 'table_c', '...']
+
+``MetaData`` also gains a ``.reflect()`` method enabling
+finer control over the loading process, including
+specification of a subset of available tables to load.
+
+SQL Execution
+=============
+
+``engine``, ``connectable``, and ``bind_to`` are all now ``bind``
+-----------------------------------------------------------------
+
+``Transactions``, ``NestedTransactions`` and ``TwoPhaseTransactions``
+---------------------------------------------------------------------
+
+Connection Pool Events
+----------------------
+
+The connection pool now fires events when new DB-API
+connections are created, checked out and checked back into
+the pool. You can use these to execute session-scoped SQL
+setup statements on fresh connections, for example.
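The mechanism can be sketched in plain Python (the class and listener names
below are illustrative, not the actual pool listener API): listeners are
invoked as connections are created, checked out, and checked back in:

```python
import sqlite3

class EventfulPool(object):
    """Minimal pool whose listeners hear connection lifecycle events."""

    def __init__(self, creator, listeners):
        self._creator = creator
        self._listeners = listeners
        self._free = []

    def connect(self):
        if self._free:
            conn = self._free.pop()
        else:
            conn = self._creator()
            # a brand new DB-API connection was created
            for listener in self._listeners:
                listener('connect', conn)
        # the connection is checked out to the application
        for listener in self._listeners:
            listener('checkout', conn)
        return conn

    def release(self, conn):
        # the connection is checked back into the pool
        for listener in self._listeners:
            listener('checkin', conn)
        self._free.append(conn)

log = []
pool = EventfulPool(lambda: sqlite3.connect(':memory:'),
                    [lambda event, conn: log.append(event)])

conn = pool.connect()   # a good spot to run session-scoped setup SQL
pool.release(conn)
assert log == ['connect', 'checkout', 'checkin']
```

Note that a second checkout reuses the pooled connection, so only the
checkout event fires again.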
+
+Oracle Engine Fixed
+-------------------
+
+In 0.3.11, there were bugs in the Oracle Engine on how
+Primary Keys are handled. These bugs could cause programs
+that worked fine with other engines, such as sqlite, to fail
+when using the Oracle Engine. In 0.4, the Oracle Engine has
+been reworked, fixing these Primary Key problems.
+
+Out Parameters for Oracle
+-------------------------
+
+::
+
+    result = engine.execute(
+        text("begin foo(:x, :y, :z); end;",
+             bindparams=[bindparam('x', Numeric),
+                         outparam('y', Numeric),
+                         outparam('z', Numeric)]),
+        x=5)
+    assert result.out_parameters == {'y': 10, 'z': 75}
+
+Connection-bound ``MetaData``, ``Sessions``
+-------------------------------------------
+
+``MetaData`` and ``Session`` can be explicitly bound to a
+connection:
+
+::
+
+ conn = engine.connect()
+ sess = create_session(bind=conn)
+
+Faster, More Foolproof ``ResultProxy`` Objects
+----------------------------------------------
+
--- /dev/null
+=============================
+What's new in SQLAlchemy 0.5?
+=============================
+
+.. admonition:: About this Document
+
+ This document describes changes between SQLAlchemy version 0.4,
+ last released October 12, 2008, and SQLAlchemy version 0.5,
+ last released January 16, 2010.
+
+ Document date: August 4, 2009
+
+
+This guide documents API changes which affect users
+migrating their applications from the 0.4 series of
+SQLAlchemy to 0.5. It's also recommended for those working
+from `Essential SQLAlchemy
+<http://oreilly.com/catalog/9780596516147/>`_, which only
+covers 0.4 and seems to even have some old 0.3isms in it.
+Note that SQLAlchemy 0.5 removes many behaviors which were
+deprecated throughout the span of the 0.4 series, and also
+deprecates more behaviors specific to 0.4.
+
+Major Documentation Changes
+===========================
+
+Some sections of the documentation have been completely
+rewritten and can serve as an introduction to new ORM
+features. The ``Query`` and ``Session`` objects in
+particular have some distinct differences in API and
+behavior which fundamentally change many of the basic ways
+things are done, particularly with regards to constructing
+highly customized ORM queries and dealing with stale session
+state, commits and rollbacks.
+
+* `ORM Tutorial
+ <http://www.sqlalchemy.org/docs/05/ormtutorial.html>`_
+
+* `Session Documentation
+ <http://www.sqlalchemy.org/docs/05/session.html>`_
+
+Deprecations Source
+===================
+
+Another source of information is documented within a series
+of unit tests illustrating up to date usages of some common
+``Query`` patterns; this file can be viewed at
+``test/orm/test_deprecations.py`` in the source distribution.
+
+Requirements Changes
+====================
+
+* Python 2.4 or higher is required. The SQLAlchemy 0.4 line
+ is the last version with Python 2.3 support.
+
+Object Relational Mapping
+=========================
+
+* **Column level expressions within Query.** - as detailed
+ in the `tutorial
+ <http://www.sqlalchemy.org/docs/05/ormtutorial.html>`_,
+ ``Query`` has the capability to create specific SELECT
+ statements, not just those against full rows:
+
+ ::
+
+ session.query(User.name, func.count(Address.id).label("numaddresses")).join(Address).group_by(User.name)
+
+  The tuples returned by any multi-column/entity query are
+  *named* tuples:
+
+ ::
+
+ for row in session.query(User.name, func.count(Address.id).label('numaddresses')).join(Address).group_by(User.name):
+ print "name", row.name, "number", row.numaddresses
+
+ ``Query`` has a ``statement`` accessor, as well as a
+ ``subquery()`` method which allow ``Query`` to be used to
+ create more complex combinations:
+
+ ::
+
+ subq = session.query(Keyword.id.label('keyword_id')).filter(Keyword.name.in_(['beans', 'carrots'])).subquery()
+ recipes = session.query(Recipe).filter(exists().
+ where(Recipe.id==recipe_keywords.c.recipe_id).
+ where(recipe_keywords.c.keyword_id==subq.c.keyword_id)
+ )
+
+* **Explicit ORM aliases are recommended for aliased joins**
+ - The ``aliased()`` function produces an "alias" of a
+ class, which allows fine-grained control of aliases in
+ conjunction with ORM queries. While a table-level alias
+ (i.e. ``table.alias()``) is still usable, an ORM level
+ alias retains the semantics of the ORM mapped object which
+ is significant for inheritance mappings, options, and
+ other scenarios. E.g.:
+
+ ::
+
+ Friend = aliased(Person)
+ session.query(Person, Friend).join((Friend, Person.friends)).all()
+
+* **query.join() greatly enhanced.** - You can now specify
+ the target and ON clause for a join in multiple ways. A
+ target class alone can be provided where SQLA will attempt
+ to form a join to it via foreign key in the same way as
+ ``table.join(someothertable)``. A target and an explicit
+ ON condition can be provided, where the ON condition can
+ be a ``relation()`` name, an actual class descriptor, or a
+ SQL expression. Or the old way of just a ``relation()``
+ name or class descriptor works too. See the ORM tutorial
+ which has several examples.
+
+* **Declarative is recommended for applications which don't
+ require (and don't prefer) abstraction between tables and
+  mappers** - The `Declarative
+  <http://www.sqlalchemy.org/docs/05/reference/ext/declarative.html>`_
+  module, which is used to combine the
+ expression of ``Table``, ``mapper()``, and user defined
+ class objects together, is highly recommended as it
+ simplifies application configuration, ensures the "one
+ mapper per class" pattern, and allows the full range of
+ configuration available to distinct ``mapper()`` calls.
+ Separate ``mapper()`` and ``Table`` usage is now referred
+ to as "classical SQLAlchemy usage" and of course is freely
+ mixable with declarative.
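  The flavor of the pattern, sketched with a bare metaclass (illustrative
  only; the real module is ``sqlalchemy.ext.declarative`` and is far more
  capable):

```python
class Column(object):
    def __init__(self, type_, primary_key=False):
        self.type_ = type_
        self.primary_key = primary_key

class DeclarativeMeta(type):
    def __new__(mcs, name, bases, attrs):
        # harvest the Column definitions out of the class body,
        # so the class and its table are declared in one place
        columns = dict((k, v) for k, v in attrs.items()
                       if isinstance(v, Column))
        cls = type.__new__(mcs, name, bases, attrs)
        cls.__columns__ = columns
        return cls

# the base class every mapped class derives from
Base = DeclarativeMeta('Base', (object,), {})

class User(Base):
    __tablename__ = 'users'
    id = Column('Integer', primary_key=True)
    name = Column('String')

assert User.__tablename__ == 'users'
assert sorted(User.__columns__) == ['id', 'name']
```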
+
+* **The .c. attribute has been removed** from classes (i.e.
+  ``MyClass.c.somecolumn``). As is the case in 0.4,
+  class-level properties are usable as query elements, i.e.
+ ``Class.c.propname`` is now superseded by
+ ``Class.propname``, and the ``c`` attribute continues to
+ remain on ``Table`` objects where they indicate the
+ namespace of ``Column`` objects present on the table.
+
+ To get at the Table for a mapped class (if you didn't keep
+ it around already):
+
+ ::
+
+ table = class_mapper(someclass).mapped_table
+
+ Iterate through columns:
+
+ ::
+
+ for col in table.c:
+ print col
+
+ Work with a specific column:
+
+ ::
+
+ table.c.somecolumn
+
+ The class-bound descriptors support the full set of Column
+ operators as well as the documented relation-oriented
+ operators like ``has()``, ``any()``, ``contains()``, etc.
+
+ The reason for the hard removal of ``.c.`` is that in 0.5,
+ class-bound descriptors carry potentially different
+ meaning, as well as information regarding class mappings,
+ versus plain ``Column`` objects - and there are use cases
+ where you'd specifically want to use one or the other.
+ Generally, using class-bound descriptors invokes a set of
+ mapping/polymorphic aware translations, and using table-
+ bound columns does not. In 0.4, these translations were
+ applied across the board to all expressions, but 0.5
+ differentiates completely between columns and mapped
+ descriptors, only applying translations to the latter. So
+ in many cases, particularly when dealing with joined table
+ inheritance configurations as well as when using
+ ``query(<columns>)``, ``Class.propname`` and
+ ``table.c.colname`` are not interchangeable.
+
+ For example, ``session.query(users.c.id, users.c.name)``
+ is different versus ``session.query(User.id, User.name)``;
+ in the latter case, the ``Query`` is aware of the mapper
+ in use and further mapper-specific operations like
+ ``query.join(<propname>)``, ``query.with_parent()`` etc.
+ may be used, but in the former case cannot. Additionally,
+ in polymorphic inheritance scenarios, the class-bound
+ descriptors refer to the columns present in the
+ polymorphic selectable in use, not necessarily the table
+ column which directly corresponds to the descriptor. For
+ example, a set of classes related by joined-table
+ inheritance to the ``person`` table along the
+ ``person_id`` column of each table will all have their
+ ``Class.person_id`` attribute mapped to the ``person_id``
+ column in ``person``, and not their subclass table.
+ Version 0.4 would map this behavior onto table-bound
+ ``Column`` objects automatically. In 0.5, this automatic
+ conversion has been removed, so that you in fact *can* use
+ table-bound columns as a means to override the
+ translations which occur with polymorphic querying; this
+ allows ``Query`` to be able to create optimized selects
+ among joined-table or concrete-table inheritance setups,
+ as well as portable subqueries, etc.
+
+* **Session Now Synchronizes Automatically with
+ Transactions.** Session now synchronizes against the
+ transaction automatically by default, including autoflush
+ and autoexpire. A transaction is present at all times
+ unless disabled using the ``autocommit`` option. When all
+ three flags are set to their default, the Session recovers
+ gracefully after rollbacks and it's very difficult to get
+ stale data into the session. See the new Session
+ documentation for details.
+
+* **Implicit Order By Is Removed**. This will impact ORM
+ users who rely upon SA's "implicit ordering" behavior,
+ which states that all Query objects which don't have an
+ ``order_by()`` will ORDER BY the "id" or "oid" column of
+ the primary mapped table, and all lazy/eagerly loaded
+ collections apply a similar ordering. In 0.5, automatic
+ ordering must be explicitly configured on ``mapper()`` and
+ ``relation()`` objects (if desired), or otherwise when
+ using ``Query``.
+
+ To convert an 0.4 mapping to 0.5, such that its ordering
+ behavior will be extremely similar to 0.4 or previous, use
+ the ``order_by`` setting on ``mapper()`` and
+ ``relation()``:
+
+ ::
+
+ mapper(User, users, properties={
+ 'addresses':relation(Address, order_by=addresses.c.id)
+ }, order_by=users.c.id)
+
+ To set ordering on a backref, use the ``backref()``
+ function:
+
+ ::
+
+ 'keywords':relation(Keyword, secondary=item_keywords,
+ order_by=keywords.c.name, backref=backref('items', order_by=items.c.id))
+
+ Using declarative?  To help with the new ``order_by``
+ requirement, ``order_by`` and friends can now be set using
+ strings which are evaluated in Python later on (this works
+ **only** with declarative, not plain mappers):
+
+ ::
+
+ class MyClass(MyDeclarativeBase):
+ ...
+ addresses = relation("Address", order_by="Address.id")
+
+ It's generally a good idea to set ``order_by`` on
+ ``relation()s`` which load list-based collections of
+ items, since that ordering cannot otherwise be affected.
+ Other than that, the best practice is to use
+ ``Query.order_by()`` to control ordering of the primary
+ entities being loaded.
+
+* **Session is now
+ autoflush=True/autoexpire=True/autocommit=False.** - To
+ set it up, just call ``sessionmaker()`` with no arguments.
+ The name ``transactional=True`` is now
+ ``autocommit=False``. Flushes occur upon each query
+ issued (disable with ``autoflush=False``), within each
+ ``commit()`` (as always), and before each
+ ``begin_nested()`` (so rolling back to the SAVEPOINT is
+ meaningful). All objects are expired after each
+ ``commit()`` and after each ``rollback()``. After
+ rollback, pending objects are expunged, deleted objects
+ move back to persistent. These defaults work together
+ very nicely and there's really no more need for old
+ techniques like ``clear()`` (which is renamed to
+ ``expunge_all()`` as well).
+
+ P.S.: sessions are now reusable after a ``rollback()``.
+ Scalar and collection attribute changes, adds and deletes
+ are all rolled back.
+
+* **session.add() replaces session.save(), session.update(),
+ session.save_or_update().** - the
+ ``session.add(someitem)`` and ``session.add_all([list of
+ items])`` methods replace ``save()``, ``update()``, and
+ ``save_or_update()``. Those methods will remain
+ deprecated throughout 0.5.
+
+* **backref configuration made less verbose.** - The
+ ``backref()`` function now uses the ``primaryjoin`` and
+ ``secondaryjoin`` arguments of the forwards-facing
+ ``relation()`` when they are not explicitly stated. It's
+ no longer necessary to specify
+ ``primaryjoin``/``secondaryjoin`` in both directions
+ separately.
+
+* **Simplified polymorphic options.** - The ORM's
+ "polymorphic load" behavior has been simplified. In 0.4,
+ mapper() had an argument called ``polymorphic_fetch``
+ which could be configured as ``select`` or ``deferred``.
+ This option is removed; the mapper will now just defer any
+ columns which were not present in the SELECT statement.
+ The actual SELECT statement used is controlled by the
+ ``with_polymorphic`` mapper argument (which is also in 0.4
+ and replaces ``select_table``), as well as the
+ ``with_polymorphic()`` method on ``Query`` (also in 0.4).
+
+ An improvement to the deferred loading of inheriting
+ classes is that the mapper now produces the "optimized"
+ version of the SELECT statement in all cases; that is, if
+ class B inherits from A, and several attributes only
+ present on class B have been expired, the refresh
+ operation will only include B's table in the SELECT
+ statement and will not JOIN to A.
+
+* The ``execute()`` method on ``Session`` converts plain
+ strings into ``text()`` constructs, so that bind
+ parameters may all be specified as ":bindname" without
+ needing to call ``text()`` explicitly. If "raw" SQL is
+ desired here, use ``session.connection().execute("raw
+ text")``.
+
+* ``session.Query().iterate_instances()`` has been renamed
+ to just ``instances()``. The old ``instances()`` method
+ returning a list instead of an iterator no longer exists.
+ If you were relying on that behavior, you should use
+ ``list(your_query.instances())``.
+
+Extending the ORM
+=================
+
+In 0.5 we're moving forward with more ways to modify and
+extend the ORM.  Here's a summary:
+
+* **MapperExtension.** - This is the classic extension
+ class, which remains. Methods which should rarely be
+ needed are ``create_instance()`` and
+ ``populate_instance()``. To control the initialization of
+ an object when it's loaded from the database, use the
+ ``reconstruct_instance()`` method, or more easily the
+ ``@reconstructor`` decorator described in the
+ documentation.
+
+* **SessionExtension.** - This is an easy to use extension
+ class for session events. In particular, it provides
+ ``before_flush()``, ``after_flush()`` and
+ ``after_flush_postexec()`` methods.  Its usage is
+ recommended over ``MapperExtension.before_XXX`` in many
+ cases since within ``before_flush()`` you can modify the
+ flush plan of the session freely, something which cannot
+ be done from within ``MapperExtension``.
+
+* **AttributeExtension.** - This class is now part of the
+ public API, and allows the interception of userland events
+ on attributes, including attribute set and delete
+ operations, and collection appends and removes. It also
+ allows the value to be set or appended to be modified.
+ The ``@validates`` decorator, described in the
+ documentation, provides a quick way to mark any mapped
+ attributes as being "validated" by a particular class
+ method.
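A minimal sketch of the ``@validates`` decorator in action (the class and validation rule are hypothetical; the ``try``/``except`` import accommodates the declarative module's location across releases):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import validates

try:
    from sqlalchemy.ext.declarative import declarative_base  # 0.5-era location
except ImportError:
    from sqlalchemy.orm import declarative_base  # later releases

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    email = Column(String(100))

    @validates('email')
    def validate_email(self, key, value):
        # reject obviously malformed addresses, normalize case
        assert '@' in value
        return value.lower()

u = User()
u.email = 'Ed@Example.com'   # validator fires on set; stored lowercased
```

The returned value is what actually gets set on the attribute, so validators can normalize as well as reject.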
+
+* **Attribute Instrumentation Customization.** - An API is
+ provided for ambitious efforts to entirely replace
+ SQLAlchemy's attribute instrumentation, or just to augment
+ it in some cases. This API was produced for the purposes
+ of the Trellis toolkit, but is available as a public API.
+ Some examples are provided in the distribution in the
+ ``/examples/custom_attributes`` directory.
+
+Schema/Types
+============
+
+* **String with no length no longer generates TEXT, it
+ generates VARCHAR** - The ``String`` type no longer
+ magically converts into a ``Text`` type when specified
+ with no length. This only has an effect when CREATE TABLE
+ is issued, as it will issue ``VARCHAR`` with no length
+ parameter, which is not valid on many (but not all)
+ databases. To create a TEXT (or CLOB, i.e. unbounded
+ string) column, use the ``Text`` type.
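The difference shows up directly in generated DDL (table and column names here are hypothetical; compiled with the default dialect):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, Text
from sqlalchemy.schema import CreateTable

metadata = MetaData()
docs = Table('docs', metadata,
    Column('id', Integer, primary_key=True),
    Column('title', String(200)),   # VARCHAR(200)
    Column('body', Text),           # TEXT (or CLOB), unbounded
    Column('note', String),         # now plain VARCHAR with no length
)

ddl = str(CreateTable(docs))
```

On backends that reject ``VARCHAR`` with no length, the CREATE TABLE for ``note`` above would fail; give the ``String`` a length or use ``Text``.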
+
+* **PickleType() with mutable=True requires an __eq__()
+ method** - The ``PickleType`` type needs to compare values
+ when mutable=True. The method of comparing
+ ``pickle.dumps()`` is inefficient and unreliable. If an
+ incoming object does not implement ``__eq__()`` and is
+ also not ``None``, the ``dumps()`` comparison is used but
+ a warning is raised.  For types which implement
+ ``__eq__()``, which includes all dictionaries, lists,
+ etc., comparison will use ``==`` and is now reliable
+ by default.
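The unreliability of the ``pickle.dumps()`` comparison is easy to demonstrate in plain Python: on recent versions, where dictionaries preserve insertion order, two dictionaries that compare equal need not pickle to identical byte strings:

```python
import pickle

d1 = {1: 'a', 2: 'b'}
d2 = {2: 'b', 1: 'a'}   # same contents, different insertion order

equal_by_value = (d1 == d2)                          # True
equal_by_pickle = (pickle.dumps(d1) == pickle.dumps(d2))  # False
```

Comparison via ``==`` sees through such representational differences, which is why types implementing ``__eq__()`` now get reliable change detection.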
+
+* **convert_bind_param() and convert_result_value() methods
+ of TypeEngine/TypeDecorator are removed.** - The O'Reilly
+ book unfortunately documented these methods even though
+ they were deprecated post 0.3. For a user-defined type
+ which subclasses ``TypeEngine``, the ``bind_processor()``
+ and ``result_processor()`` methods should be used for
+ bind/result processing. Any user defined type, whether
+ extending ``TypeEngine`` or ``TypeDecorator``, which uses
+ the old 0.3 style can be easily adapted to the new style
+ using the following adapter:
+
+ ::
+
+ class AdaptOldConvertMethods(object):
+ """A mixin which adapts 0.3-style convert_bind_param and
+ convert_result_value methods
+
+ """
+ def bind_processor(self, dialect):
+ def convert(value):
+ return self.convert_bind_param(value, dialect)
+ return convert
+
+ def result_processor(self, dialect):
+ def convert(value):
+ return self.convert_result_value(value, dialect)
+ return convert
+
+ def convert_result_value(self, value, dialect):
+ return value
+
+ def convert_bind_param(self, value, dialect):
+ return value
+
+ To use the above mixin:
+
+ ::
+
+ class MyType(AdaptOldConvertMethods, TypeEngine):
+ # ...
+
+* The ``quote`` flag on ``Column`` and ``Table`` as well as
+ the ``quote_schema`` flag on ``Table`` now control quoting
+ both positively and negatively. The default is ``None``,
+ meaning let regular quoting rules take effect. When
+ ``True``, quoting is forced on. When ``False``, quoting
+ is forced off.
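For example, rendering DDL with the default dialect (column names hypothetical; ``user`` is an ANSI reserved word and would normally be quoted):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.schema import CreateTable

metadata = MetaData()
t = Table('example', metadata,
    Column('plain', Integer, quote=True),     # quoting forced on
    Column('user', String(30), quote=False),  # quoting forced off, even though reserved
)

ddl = str(CreateTable(t))
```

With the default of ``None``, ``plain`` would be left unquoted and ``user`` would be quoted automatically.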
+
+* Column ``DEFAULT`` value DDL can now be more conveniently
+ specified with ``Column(..., server_default='val')``,
+ deprecating ``Column(..., PassiveDefault('val'))``.
+ ``default=`` is now exclusively for Python-initiated
+ default values, and can coexist with server_default. A
+ new ``server_default=FetchedValue()`` replaces the
+ ``PassiveDefault('')`` idiom for marking columns as
+ subject to influence from external triggers and has no DDL
+ side effects.
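A short sketch of the two forms (table and column names hypothetical), showing that a string ``server_default`` renders into the DDL while ``FetchedValue()`` does not:

```python
from sqlalchemy import Column, FetchedValue, Integer, MetaData, String, Table
from sqlalchemy.schema import CreateTable

metadata = MetaData()
t = Table('accounts', metadata,
    Column('id', Integer, primary_key=True),
    # DDL-level default, rendered into CREATE TABLE
    Column('status', String(20), server_default='new'),
    # populated externally (e.g. by a trigger); no DDL emitted
    Column('balance', Integer, server_default=FetchedValue()),
)

ddl = str(CreateTable(t))
```

In both cases the ORM knows to re-fetch the column value after an INSERT; only the string form affects the emitted DDL.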
+
+* SQLite's ``DateTime``, ``Time`` and ``Date`` types now
+ **only accept datetime objects, not strings** as bind
+ parameter input. If you'd like to create your own
+ "hybrid" type which accepts strings and returns results as
+ date objects (from whatever format you'd like), create a
+ ``TypeDecorator`` that builds on ``String``. If you only
+ want string-based dates, just use ``String``.
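A minimal sketch of such a hybrid type (the ``StringDate`` name and the ISO date format are assumptions for illustration):

```python
import datetime
from sqlalchemy.types import TypeDecorator, String

class StringDate(TypeDecorator):
    """Accepts strings or date objects bound in; returns date objects.

    A hypothetical hybrid type built on String, as suggested above.
    """
    impl = String

    def process_bind_param(self, value, dialect):
        # tolerate strings on the way in, store ISO format text
        if isinstance(value, str):
            value = datetime.datetime.strptime(value, '%Y-%m-%d').date()
        return value.isoformat() if value is not None else None

    def process_result_value(self, value, dialect):
        # hand back real date objects on the way out
        if value is not None:
            return datetime.datetime.strptime(value, '%Y-%m-%d').date()
        return value
```

Use it like any other column type, e.g. ``Column('created', StringDate(10))``.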
+
+* Additionally, the ``DateTime`` and ``Time`` types, when
+ used with SQLite, now represent the "microseconds" field
+ of the Python ``datetime.datetime`` object in the same
+ manner as ``str(datetime)`` - as fractional seconds, not a
+ count of microseconds. That is:
+
+ ::
+
+ dt = datetime.datetime(2008, 6, 27, 12, 0, 0, 125) # 125 usec
+
+ # old way
+ '2008-06-27 12:00:00.125'
+
+ # new way
+ '2008-06-27 12:00:00.000125'
+
+ So if an existing SQLite file-based database intends to be
+ used across 0.4 and 0.5, you either have to upgrade the
+ datetime columns to store the new format (NOTE: please
+ test this, I'm pretty sure it's correct):
+
+ ::
+
+ UPDATE mytable SET somedatecol =
+ substr(somedatecol, 0, 19) || '.' || substr((substr(somedatecol, 21, -1) / 1000000), 3, -1);
+
+ or, enable "legacy" mode as follows:
+
+ ::
+
+ from sqlalchemy.databases.sqlite import DateTimeMixin
+ DateTimeMixin.__legacy_microseconds__ = True
+
+Connection Pool no longer threadlocal by default
+================================================
+
+0.4 has an unfortunate default setting of
+"pool_threadlocal=True", leading to surprise behavior when,
+for example, using multiple Sessions within a single thread.
+This flag is now off in 0.5. To re-enable 0.4's behavior,
+specify ``pool_threadlocal=True`` to ``create_engine()``, or
+alternatively use the "threadlocal" strategy via
+``strategy="threadlocal"``.
+
+\*args Accepted, \*args No Longer Accepted
+==========================================
+
+The policy with ``method(\*args)`` vs. ``method([args])``
+is, if the method accepts a variable-length set of items
+which represent a fixed structure, it takes ``\*args``. If
+the method accepts a variable-length set of items that are
+data-driven, it takes ``[args]``.
+
+* The various Query.options() functions ``eagerload()``,
+ ``eagerload_all()``, ``lazyload()``, ``contains_eager()``,
+ ``defer()``, ``undefer()`` all accept variable-length
+ ``\*keys`` as their argument now, which allows a path to
+ be formulated using descriptors, i.e.:
+
+ ::
+
+ query.options(eagerload_all(User.orders, Order.items, Item.keywords))
+
+ A single array argument is still accepted for backwards
+ compatibility.
+
+* Similarly, the ``Query.join()`` and ``Query.outerjoin()``
+ methods accept a variable length \*args, with a single
+ array accepted for backwards compatibility:
+
+ ::
+
+ query.join('orders', 'items')
+ query.join(User.orders, Order.items)
+
+* the ``in_()`` method on columns and similar only accepts a
+ list argument now. It no longer accepts ``\*args``.
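For instance, compiled with the default dialect (column name hypothetical):

```python
from sqlalchemy.sql import column

# a single list argument; in_(1, 2, 3) is no longer accepted
expr = column('x').in_([1, 2, 3])
compiled = str(expr)
```

This follows the policy above: the members of an IN list are data-driven, so they are passed as one list.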
+
+Removed
+=======
+
+* **entity_name** - This feature was always problematic and
+ rarely used. 0.5's more deeply fleshed out use cases
+ revealed further issues with ``entity_name`` which led to
+ its removal. If different mappings are required for a
+ single class, break the class into separate subclasses and
+ map them separately. An example of this is at
+ [wiki:UsageRecipes/EntityName]. More information
+ regarding rationale is described at
+ http://groups.google.com/group/sqlalchemy/browse_thread/thread/9e23a0641a88b96d?hl=en
+
+* **get()/load() cleanup**
+
+
+ The ``load()`` method has been removed.  Its
+ functionality was kind of arbitrary and basically copied
+ from Hibernate, where it's also not a particularly
+ meaningful method.
+
+ To get equivalent functionality:
+
+ ::
+
+ x = session.query(SomeClass).populate_existing().get(7)
+
+ ``Session.get(cls, id)`` and ``Session.load(cls, id)``
+ have been removed. ``Session.get()`` is redundant vs.
+ ``session.query(cls).get(id)``.
+
+ ``MapperExtension.get()`` is also removed (as is
+ ``MapperExtension.load()``). To override the
+ functionality of ``Query.get()``, use a subclass:
+
+ ::
+
+ class MyQuery(Query):
+ def get(self, ident):
+ # ...
+
+ session = sessionmaker(query_cls=MyQuery)()
+
+ ad1 = session.query(Address).get(1)
+
+* ``sqlalchemy.orm.relation()``
+
+
+ The following deprecated keyword arguments have been
+ removed:
+
+ foreignkey, association, private, attributeext, is_backref
+
+ In particular, ``attributeext`` is replaced with
+ ``extension`` - the ``AttributeExtension`` class is now in
+ the public API.
+
+* ``session.Query()``
+
+
+ The following deprecated functions have been removed:
+
+ list, scalar, count_by, select_whereclause, get_by,
+ select_by, join_by, selectfirst, selectone, select,
+ execute, select_statement, select_text, join_to, join_via,
+ selectfirst_by, selectone_by, apply_max, apply_min,
+ apply_avg, apply_sum
+
+ Additionally, the ``id`` keyword argument to ``join()``,
+ ``outerjoin()``, ``add_entity()`` and ``add_column()`` has
+ been removed. To target table aliases in ``Query`` to
+ result columns, use the ``aliased`` construct:
+
+ ::
+
+ from sqlalchemy.orm import aliased
+ address_alias = aliased(Address)
+ print session.query(User, address_alias).join((address_alias, User.addresses)).all()
+
+* ``sqlalchemy.orm.Mapper``
+
+
+ * instances()
+
+
+ * get_session() - this method was not very noticeable, but
+ had the effect of associating lazy loads with a
+ particular session even if the parent object was
+ entirely detached, when an extension such as
+ ``scoped_session()`` or the old ``SessionContextExt``
+ was used. It's possible that some applications which
+ relied upon this behavior will no longer work as
+ expected; but the better programming practice here is
+ to always ensure objects are present within sessions if
+ database access from their attributes is required.
+
+* ``mapper(MyClass, mytable)``
+
+
+ Mapped classes are no longer instrumented with a "c"
+ class attribute, e.g. ``MyClass.c``.
+
+* ``sqlalchemy.orm.collections``
+
+
+ The ``_prepare_instrumentation`` alias for
+ ``prepare_instrumentation`` has been removed.
+
+* ``sqlalchemy.orm``
+
+
+ Removed the ``EXT_PASS`` alias of ``EXT_CONTINUE``.
+
+* ``sqlalchemy.engine``
+
+
+ The alias from ``DefaultDialect.preexecute_sequences`` to
+ ``.preexecute_pk_sequences`` has been removed.
+
+ The deprecated ``engine_descriptors()`` function has been
+ removed.
+
+* ``sqlalchemy.ext.activemapper``
+
+
+ Module removed.
+
+* ``sqlalchemy.ext.assignmapper``
+
+
+ Module removed.
+
+* ``sqlalchemy.ext.associationproxy``
+
+
+ Pass-through of keyword args on the proxy's
+ ``.append(item, \**kw)`` has been removed and is now
+ simply ``.append(item)``
+
+* ``sqlalchemy.ext.selectresults``,
+ ``sqlalchemy.mods.selectresults``
+
+ Modules removed.
+
+* ``sqlalchemy.ext.declarative``
+
+
+ ``declared_synonym()`` removed.
+
+* ``sqlalchemy.ext.sessioncontext``
+
+
+ Module removed.
+
+* ``sqlalchemy.log``
+
+
+ The ``SADeprecationWarning`` alias to
+ ``sqlalchemy.exc.SADeprecationWarning`` has been removed.
+
+* ``sqlalchemy.exc``
+
+
+ ``exc.AssertionError`` has been removed and usage replaced
+ by the Python built-in of the same name.
+
+* ``sqlalchemy.databases.mysql``
+
+
+ The deprecated ``get_version_info`` dialect method has
+ been removed.
+
+Renamed or Moved
+================
+
+* ``sqlalchemy.exceptions`` is now ``sqlalchemy.exc``
+
+
+ The module may still be imported under the old name until
+ 0.6.
+
+* ``FlushError``, ``ConcurrentModificationError``,
+ ``UnmappedColumnError`` -> sqlalchemy.orm.exc
+
+ These exceptions moved to the orm package. Importing
+ 'sqlalchemy.orm' will install aliases in sqlalchemy.exc
+ for compatibility until 0.6.
+
+* ``sqlalchemy.logging`` -> ``sqlalchemy.log``
+
+
+ This internal module was renamed. No longer needs to be
+ special cased when packaging SA with py2app and similar
+ tools that scan imports.
+
+* ``session.Query().iterate_instances()`` ->
+ ``session.Query().instances()``.
+
+Deprecated
+==========
+
+* ``Session.save()``, ``Session.update()``,
+ ``Session.save_or_update()``
+
+ All three replaced by ``Session.add()``
+
+* ``sqlalchemy.PassiveDefault``
+
+
+ Use ``Column(server_default=...)`` instead; it translates
+ to ``sqlalchemy.DefaultClause()`` under the hood.
+
+* ``session.Query().iterate_instances()``. It has been
+ renamed to ``instances()``.
+
--- /dev/null
+==============================
+What's New in SQLAlchemy 0.6?
+==============================
+
+.. admonition:: About this Document
+
+ This document describes changes between SQLAlchemy version 0.5,
+ last released January 16, 2010, and SQLAlchemy version 0.6,
+ last released May 5, 2012.
+
+ Document date: June 6, 2010
+
+This guide documents API changes which affect users
+migrating their applications from the 0.5 series of
+SQLAlchemy to 0.6. Note that SQLAlchemy 0.6 removes some
+behaviors which were deprecated throughout the span of the
+0.5 series, and also deprecates more behaviors specific to
+0.5.
+
+Platform Support
+================
+
+* cPython versions 2.4 and upwards throughout the 2.xx
+ series
+
+* Jython 2.5.1 - using the zxJDBC DBAPI included with
+ Jython.
+
+* cPython 3.x - see [source:sqlalchemy/trunk/README.py3k]
+ for information on how to build for python3.
+
+New Dialect System
+==================
+
+Dialect modules are now broken up into distinct
+subcomponents, within the scope of a single database
+backend. Dialect implementations are now in the
+``sqlalchemy.dialects`` package. The
+``sqlalchemy.databases`` package still exists as a
+placeholder to provide some level of backwards compatibility
+for simple imports.
+
+For each supported database, a sub-package exists within
+``sqlalchemy.dialects`` where several files are contained.
+Each package contains a module called ``base.py`` which
+defines the specific SQL dialect used by that database. It
+also contains one or more "driver" modules, each one
+corresponding to a specific DBAPI - these files are named
+corresponding to the DBAPI itself, such as ``pysqlite``,
+``cx_oracle``, or ``pyodbc``. The classes used by
+SQLAlchemy dialects are first declared in the ``base.py``
+module, defining all behavioral characteristics defined by
+the database. These include capability mappings, such as
+"supports sequences", "supports returning", etc., type
+definitions, and SQL compilation rules. Each "driver"
+module in turn provides subclasses of those classes as
+needed which override the default behavior to accommodate
+the additional features, behaviors, and quirks of that
+DBAPI. For DBAPIs that support multiple backends (pyodbc,
+zxJDBC, mxODBC), the dialect module will use mixins from the
+``sqlalchemy.connectors`` package, which provide
+functionality common to that DBAPI across all backends, most
+typically dealing with connect arguments. This means that
+connecting using pyodbc, zxJDBC or mxODBC (when implemented)
+is extremely consistent across supported backends.
+
+The URL format used by ``create_engine()`` has been enhanced
+to handle any number of DBAPIs for a particular backend,
+using a scheme that is inspired by that of JDBC. The
+previous format still works, and will select a "default"
+DBAPI implementation, such as the Postgresql URL below that
+will use psycopg2:
+
+::
+
+ create_engine('postgresql://scott:tiger@localhost/test')
+
+However to specify a specific DBAPI backend such as pg8000,
+add it to the "protocol" section of the URL using a plus
+sign "+":
+
+::
+
+ create_engine('postgresql+pg8000://scott:tiger@localhost/test')
+
+Important Dialect Links:
+
+* Documentation on connect arguments:
+ http://www.sqlalchemy.org/docs/06/dbengine.html#create-engine-url-arguments
+
+* Reference documentation for individual dialects:
+ http://www.sqlalchemy.org/docs/06/reference/dialects/index.html
+
+* The tips and tricks at DatabaseNotes.
+
+
+Other notes regarding dialects:
+
+* the type system has been changed dramatically in
+ SQLAlchemy 0.6. This has an impact on all dialects
+ regarding naming conventions, behaviors, and
+ implementations. See the section on "Types" below.
+
+* the ``ResultProxy`` object now offers a 2x speed
+ improvement in some cases thanks to some refactorings.
+
+* the ``RowProxy``, i.e. individual result row object, is
+ now directly pickleable.
+
+* the setuptools entrypoint used to locate external dialects
+ is now called ``sqlalchemy.dialects``. An external
+ dialect written against 0.4 or 0.5 will need to be
+ modified to work with 0.6 in any case so this change does
+ not add any additional difficulties.
+
+* dialects now receive an initialize() event on initial
+ connection to determine connection properties.
+
+* Functions and operators generated by the compiler now use
+ (almost) regular dispatch functions of the form
+ "visit_<opname>" and "visit_<funcname>_fn" to provide
+ customized processing.  This replaces the need to copy the
+ "functions" and "operators" dictionaries in compiler
+ subclasses with straightforward visitor methods, and also
+ allows compiler subclasses complete control over
+ rendering, as the full _Function or _BinaryExpression
+ object is passed in.
+
+Dialect Imports
+---------------
+
+The import structure of dialects has changed. Each dialect
+now exports its base "dialect" class as well as the full set
+of SQL types supported on that dialect via
+``sqlalchemy.dialects.<name>``. For example, to import a
+set of PG types:
+
+::
+
+ from sqlalchemy.dialects.postgresql import INTEGER, BIGINT, SMALLINT,\
+ VARCHAR, MACADDR, DATE, BYTEA
+
+Above, ``INTEGER`` is actually the plain ``INTEGER`` type
+from ``sqlalchemy.types``, but the PG dialect makes it
+available in the same way as those types which are specific
+to PG, such as ``BYTEA`` and ``MACADDR``.
+
+Expression Language Changes
+===========================
+
+An Important Expression Language Gotcha
+---------------------------------------
+
+There's one quite significant behavioral change to the
+expression language which may affect some applications.
+The boolean value of Python boolean expressions, i.e.
+``==``, ``!=``, and similar, now evaluates accurately with
+regard to the two clause objects being compared.
+
+As we know, comparing a ``ClauseElement`` to any other
+object returns another ``ClauseElement``:
+
+::
+
+ >>> from sqlalchemy.sql import column
+ >>> column('foo') == 5
+ <sqlalchemy.sql.expression._BinaryExpression object at 0x1252490>
+
+This is so that Python expressions produce SQL expressions when
+converted to strings:
+
+::
+
+ >>> str(column('foo') == 5)
+ 'foo = :foo_1'
+
+But what happens if we say this?
+
+::
+
+ >>> if column('foo') == 5:
+ ... print "yes"
+ ...
+
+In previous versions of SQLAlchemy, the returned
+``_BinaryExpression`` was a plain Python object which
+evaluated to ``True``. Now it evaluates to whether or not
+the actual ``ClauseElement`` should have the same hash value
+as the one being compared.  Meaning:
+
+::
+
+ >>> bool(column('foo') == 5)
+ False
+ >>> bool(column('foo') == column('foo'))
+ False
+ >>> c = column('foo')
+ >>> bool(c == c)
+ True
+ >>>
+
+That means code such as the following:
+
+::
+
+ if expression:
+ print "the expression is:", expression
+
+Would not evaluate if ``expression`` was a binary clause.
+Since the above pattern should never be used, the base
+``ClauseElement`` now raises an exception if called in a
+boolean context:
+
+::
+
+ >>> bool(c)
+ Traceback (most recent call last):
+ File "<stdin>", line 1, in <module>
+ ...
+ raise TypeError("Boolean value of this clause is not defined")
+ TypeError: Boolean value of this clause is not defined
+
+Code that wants to check for the presence of a
+``ClauseElement`` expression should instead say:
+
+::
+
+ if expression is not None:
+ print "the expression is:", expression
+
+Keep in mind, **this applies to Table and Column objects
+too**.
+
+The rationale for the change is twofold:
+
+* Comparisons of the form ``if c1 == c2: <do something>``
+ can actually be written now
+
+* Support for correct hashing of ``ClauseElement`` objects
+ now works on alternate platforms, namely Jython. Up until
+ this point SQLAlchemy relied heavily on the specific
+ behavior of cPython in this regard (and still had
+ occasional problems with it).
+
+Stricter "executemany" Behavior
+-------------------------------
+
+An "executemany" in SQLAlchemy corresponds to a call to
+``execute()``, passing along a collection of bind parameter
+sets:
+
+::
+
+ connection.execute(table.insert(), {'data':'row1'}, {'data':'row2'}, {'data':'row3'})
+
+When the ``Connection`` object sends off the given
+``insert()`` construct for compilation, it passes to the
+compiler the keynames present in the first set of binds
+passed along to determine the construction of the
+statement's VALUES clause. Users familiar with this
+construct will know that additional keys present in the
+remaining dictionaries don't have any impact. What's
+different now is that all subsequent dictionaries need to
+include at least *every* key that is present in the first
+dictionary. This means that a call like this no longer
+works:
+
+::
+
+ connection.execute(table.insert(),
+ {'timestamp':today, 'data':'row1'},
+ {'timestamp':today, 'data':'row2'},
+ {'data':'row3'})
+
+Because the third row does not specify the 'timestamp'
+column. Previous versions of SQLAlchemy would simply insert
+NULL for these missing columns. However, if the
+``timestamp`` column in the above example contained a
+Python-side default value or function, it would *not* be
+used.  This is because the "executemany" operation is optimized
+for maximum performance across huge numbers of parameter
+sets, and does not attempt to evaluate Python-side defaults
+for those missing keys. Because defaults are often
+implemented either as SQL expressions which are embedded
+inline with the INSERT statement, or are server side
+expressions which again are triggered based on the structure
+of the INSERT string, which by definition cannot fire off
+conditionally based on each parameter set, it would be
+inconsistent for Python side defaults to behave differently
+vs. SQL/server side defaults. (SQL expression based
+defaults are embedded inline as of the 0.5 series, again to
+minimize the impact of huge numbers of parameter sets).
+
+SQLAlchemy 0.6 therefore establishes predictable consistency
+by forbidding any subsequent parameter sets from leaving any
+fields blank. That way, there's no more silent failure of
+Python side default values and functions, which additionally
+are allowed to remain consistent in their behavior versus
+SQL and server side defaults.
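This also mirrors what many DBAPIs enforce themselves for named parameters; the standard library ``sqlite3`` module, for example, rejects a parameter set that omits a key (illustrative sketch):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE t (timestamp TEXT, data TEXT)")

rows = [
    {'timestamp': 'today', 'data': 'row1'},
    {'data': 'row3'},   # missing 'timestamp'
]
try:
    conn.executemany(
        "INSERT INTO t (timestamp, data) VALUES (:timestamp, :data)", rows)
    missing_key_ok = True
except sqlite3.ProgrammingError:
    missing_key_ok = False   # the incomplete parameter set is rejected
```

Supplying every key in every dictionary, even when the value is ``None``, keeps the behavior explicit.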
+
+UNION and other "compound" constructs parenthesize consistently
+---------------------------------------------------------------
+
+A rule that was designed to help SQLite has been removed:
+the first compound element within another compound (such
+as a ``union()`` inside of an ``except_()``) would not be
+parenthesized.  This is inconsistent and produces the
+wrong results on Postgresql, which has precedence rules
+regarding INTERSECT, and it's generally a surprise.  When
+using complex composites with SQLite, you now need to turn
+the first element into a subquery (which is also compatible
+on PG). A new example is in the SQL expression tutorial at
+the end of
+http://www.sqlalchemy.org/docs/06/sqlexpression.html#unions-and-other-set-operations.
+See :ticket:`1665` and r6690 for more background.
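
The SQLite workaround can be sketched as follows; this is a minimal example with a hypothetical table, using the modern ``select()`` calling style, where ``.alias()`` turns the first compound into a subquery:

```python
from sqlalchemy import (MetaData, Table, Column, Integer,
                        except_, select, union)

metadata = MetaData()
nums = Table('nums', metadata, Column('id', Integer))

s1 = select(nums.c.id).where(nums.c.id < 10)
s2 = select(nums.c.id).where(nums.c.id > 5)
s3 = select(nums.c.id).where(nums.c.id == 7)

# 0.6 now parenthesizes the inner compound consistently:
# (SELECT ... UNION SELECT ...) EXCEPT SELECT ...
stmt = except_(union(s1, s2), s3)

# SQLite rejects the parenthesized form, so wrap the first
# compound as a subquery instead (also compatible with PG)
u = union(s1, s2).alias()
sqlite_stmt = except_(select(u.c.id), s3)
```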
+
+C Extensions for Result Fetching
+================================
+
+The ``ResultProxy`` and related elements, including most
+common "row processing" functions such as unicode
+conversion, numerical/boolean conversions and date parsing,
+have been re-implemented as optional C extensions for the
+purposes of performance. This represents the beginning of
+SQLAlchemy's path to the "dark side" where we hope to
+continue improving performance by reimplementing critical
+sections in C. The extensions can be built by specifying
+``--with-cextensions``, i.e.
+``python setup.py --with-cextensions install``.
+
+The extensions have the most dramatic impact on result
+fetching using direct ``ResultProxy`` access, i.e. that
+which is returned by ``engine.execute()``,
+``connection.execute()``, or ``session.execute()``. Within
+results returned by an ORM ``Query`` object, result fetching
+is not as high a percentage of overhead, so ORM performance
+improves more modestly, and mostly in the realm of fetching
+large result sets. The performance improvements highly
+depend on the dbapi in use and on the syntax used to access
+the columns of each row (eg ``row['name']`` is much faster
+than ``row.name``). The current extensions have no impact
+on the speed of inserts/updates/deletes, nor do they improve
+the latency of SQL execution, that is, an application that
+spends most of its time executing many statements with very
+small result sets will not see much improvement.
+
+Performance has been improved in 0.6 versus 0.5 regardless
+of the extensions. A quick overview of what connecting and
+fetching 50,000 rows looks like with SQLite, using mostly
+direct SQLite access, a ``ResultProxy``, and a simple mapped
+ORM object:
+
+::
+
+ sqlite select/native: 0.260s
+
+ 0.6 / C extension
+
+ sqlalchemy.sql select: 0.360s
+ sqlalchemy.orm fetch: 2.500s
+
+ 0.6 / Pure Python
+
+ sqlalchemy.sql select: 0.600s
+ sqlalchemy.orm fetch: 3.000s
+
+ 0.5 / Pure Python
+
+ sqlalchemy.sql select: 0.790s
+ sqlalchemy.orm fetch: 4.030s
+
+Above, the ORM fetches the rows 33% faster than 0.5 due to
+in-python performance enhancements. With the C extensions
+we get another 20%. However, ``ResultProxy`` fetches
+improve by 67% with the C extension versus without. Other
+tests report as much as a 200% speed improvement for some
+scenarios, such as those where lots of string conversions
+are occurring.
+
+New Schema Capabilities
+=======================
+
+The ``sqlalchemy.schema`` package has received some
+long-needed attention. The most visible change is the newly
+expanded DDL system. In SQLAlchemy, it was possible since
+version 0.5 to create custom DDL strings and associate them
+with tables or metadata objects:
+
+::
+
+ from sqlalchemy.schema import DDL
+
+ DDL('CREATE TRIGGER users_trigger ...').execute_at('after-create', metadata)
+
+Now the full suite of DDL constructs are available under the
+same system, including those for CREATE TABLE, ADD
+CONSTRAINT, etc.:
+
+::
+
+    from sqlalchemy.schema import CheckConstraint, AddConstraint
+
+    AddConstraint(CheckConstraint("value > 5")).execute_at('after-create', mytable)
+
+Additionally, all the DDL objects are now regular
+``ClauseElement`` objects just like any other SQLAlchemy
+expression object:
+
+::
+
+ from sqlalchemy.schema import CreateTable
+
+ create = CreateTable(mytable)
+
+ # dumps the CREATE TABLE as a string
+ print create
+
+ # executes the CREATE TABLE statement
+ engine.execute(create)
+
+and using the ``sqlalchemy.ext.compiler`` extension you can
+make your own:
+
+::
+
+ from sqlalchemy.schema import DDLElement
+ from sqlalchemy.ext.compiler import compiles
+
+ class AlterColumn(DDLElement):
+
+ def __init__(self, column, cmd):
+ self.column = column
+ self.cmd = cmd
+
+ @compiles(AlterColumn)
+ def visit_alter_column(element, compiler, **kw):
+ return "ALTER TABLE %s ALTER COLUMN %s %s ..." % (
+ element.column.table.name,
+ element.column.name,
+ element.cmd
+ )
+
+ engine.execute(AlterColumn(table.c.mycolumn, "SET DEFAULT 'test'"))
+
+Deprecated/Removed Schema Elements
+----------------------------------
+
+The schema package has also been greatly streamlined. Many
+options and methods which were deprecated throughout 0.5
+have been removed. Other little known accessors and methods
+have also been removed.
+
+* the "owner" keyword argument is removed from ``Table``.
+ Use "schema" to represent any namespaces to be prepended
+ to the table name.
+
+* deprecated ``MetaData.connect()`` and
+  ``ThreadLocalMetaData.connect()`` have been removed -
+  assign to the "bind" attribute to bind a metadata.
+
+* the deprecated ``MetaData.table_iterator()`` method is
+  removed (use ``sorted_tables``)
+
+* the "metadata" argument is removed from
+ ``DefaultGenerator`` and subclasses, but remains locally
+ present on ``Sequence``, which is a standalone construct
+ in DDL.
+
+* deprecated ``PassiveDefault`` - use ``DefaultClause``.
+
+
+* Removed public mutability from ``Index`` and
+ ``Constraint`` objects:
+
+ * ``ForeignKeyConstraint.append_element()``
+
+
+ * ``Index.append_column()``
+
+
+ * ``UniqueConstraint.append_column()``
+
+
+ * ``PrimaryKeyConstraint.add()``
+
+
+ * ``PrimaryKeyConstraint.remove()``
+
+
+  These should be constructed declaratively (i.e. in one
+  construction).
+
+* Other removed things:
+
+
+ * ``Table.key`` (no idea what this was for)
+
+
+ * ``Column.bind`` (get via column.table.bind)
+
+
+ * ``Column.metadata`` (get via column.table.metadata)
+
+
+ * ``Column.sequence`` (use column.default)
+
+
+Other Behavioral Changes
+------------------------
+
+* ``UniqueConstraint``, ``Index``, ``PrimaryKeyConstraint``
+ all accept lists of column names or column objects as
+ arguments.
+
+* The ``use_alter`` flag on ``ForeignKey`` is now a shortcut
+ option for operations that can be hand-constructed using
+ the ``DDL()`` event system. A side effect of this refactor
+ is that ``ForeignKeyConstraint`` objects with
+ ``use_alter=True`` will *not* be emitted on SQLite, which
+ does not support ALTER for foreign keys. This has no
+ effect on SQLite's behavior since SQLite does not actually
+ honor FOREIGN KEY constraints.
+
+* ``Table.primary_key`` is not assignable - use
+ ``table.append_constraint(PrimaryKeyConstraint(...))``
+
+* A ``Column`` definition with a ``ForeignKey`` and no type,
+ e.g. ``Column(name, ForeignKey(sometable.c.somecol))``
+ used to get the type of the referenced column. Now support
+ for that automatic type inference is partial and may not
+ work in all cases.
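
The first point above can be sketched like this (table, column, and constraint names are hypothetical):

```python
from sqlalchemy import (MetaData, Table, Column, Integer,
                        String, UniqueConstraint, Index)

metadata = MetaData()
employee = Table('employee', metadata,
                 Column('id', Integer, primary_key=True),
                 Column('first', String(50)),
                 Column('last', String(50)),
                 # plain column names work here, as do
                 # Column objects
                 UniqueConstraint('first', 'last',
                                  name='uq_employee_name'))

# Index likewise accepts names or Column objects
ix = Index('ix_employee_last', employee.c.last)
```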
+
+Logging opened up
+=================
+
+At the expense of a few extra method calls here and there,
+you can set log levels for INFO and DEBUG after an engine,
+pool, or mapper has been created, and logging will commence.
+The ``isEnabledFor(INFO)`` method is now called
+per-``Connection`` and ``isEnabledFor(DEBUG)``
+per-``ResultProxy`` if already enabled on the parent
+connection. Pool logging sends to ``log.info()`` and
+``log.debug()`` with no check - note that pool
+checkout/checkin is typically once per transaction.
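
Since SQLAlchemy's loggers are ordinary Python ``logging`` loggers, turning logging on after the fact is just standard library calls; a sketch using the ``sqlalchemy.engine`` and ``sqlalchemy.pool`` logger names:

```python
import logging

# route log records somewhere visible
logging.basicConfig()

# raise levels on loggers for engines/pools that already
# exist; the per-Connection isEnabledFor(INFO) check picks
# this up on the next checkout, no recreation required
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
logging.getLogger('sqlalchemy.pool').setLevel(logging.DEBUG)
```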
+
+Reflection/Inspector API
+========================
+
+The reflection system, which allows reflection of table
+columns via ``Table('sometable', metadata, autoload=True)``
+has been opened up into its own fine-grained API, which
+allows direct inspection of database elements such as
+tables, columns, constraints, indexes, and more. This API
+expresses return values as simple lists of strings,
+dictionaries, and ``TypeEngine`` objects. The internals of
+``autoload=True`` now build upon this system such that the
+translation of raw database information into
+``sqlalchemy.schema`` constructs is centralized and the
+contract of individual dialects greatly simplified, vastly
+reducing bugs and inconsistencies across different backends.
+
+To use an inspector:
+
+::
+
+ from sqlalchemy.engine.reflection import Inspector
+ insp = Inspector.from_engine(my_engine)
+
+ print insp.get_schema_names()
+
+The ``from_engine()`` method will in some cases provide a
+backend-specific inspector with additional capabilities,
+such as that of Postgresql which provides a
+``get_table_oid()`` method:
+
+::
+
+
+ my_engine = create_engine('postgresql://...')
+ pg_insp = Inspector.from_engine(my_engine)
+
+ print pg_insp.get_table_oid('my_table')
+
+RETURNING Support
+=================
+
+The ``insert()``, ``update()`` and ``delete()`` constructs
+now support a ``returning()`` method, which corresponds to
+the SQL RETURNING clause as supported by Postgresql, Oracle,
+MS-SQL, and Firebird. It is not supported for any other
+backend at this time.
+
+Given a list of column expressions in the same manner as
+that of a ``select()`` construct, the values of these
+columns will be returned as a regular result set:
+
+::
+
+
+ result = connection.execute(
+ table.insert().values(data='some data').returning(table.c.id, table.c.timestamp)
+ )
+ row = result.first()
+ print "ID:", row['id'], "Timestamp:", row['timestamp']
+
+The implementation of RETURNING across the four supported
+backends varies wildly, in the case of Oracle requiring an
+intricate usage of OUT parameters which are re-routed into a
+"mock" result set, and in the case of MS-SQL using an
+awkward SQL syntax. The usage of RETURNING is subject to
+limitations:
+
+* it does not work for any "executemany()" style of
+ execution. This is a limitation of all supported DBAPIs.
+
+* Some backends, such as Oracle, only support RETURNING that
+ returns a single row - this includes UPDATE and DELETE
+ statements, meaning the update() or delete() construct
+ must match only a single row, or an error is raised (by
+ Oracle, not SQLAlchemy).
+
+RETURNING is also used automatically by SQLAlchemy, when
+available and when not otherwise specified by an explicit
+``returning()`` call, to fetch the value of newly generated
+primary key values for single-row INSERT statements. This
+means there's no more "SELECT nextval(sequence)"
+pre-execution for insert statements where the primary key value
+is required. Truth be told, the implicit RETURNING feature
+does incur more method overhead than the old "select
+nextval()" system, which used a quick and dirty
+cursor.execute() to get at the sequence value, and in the
+case of Oracle requires additional binding of out
+parameters. So if method/protocol overhead is proving to be
+more expensive than additional database round trips, the
+feature can be disabled by specifying
+``implicit_returning=False`` to ``create_engine()``.
+
+Type System Changes
+===================
+
+New Architecture
+----------------
+
+The type system has been completely reworked behind the
+scenes to achieve two goals:
+
+* Separate the handling of bind parameters and result row
+ values, typically a DBAPI requirement, from the SQL
+ specification of the type itself, which is a database
+ requirement. This is consistent with the overall dialect
+ refactor that separates database SQL behavior from DBAPI.
+
+* Establish a clear and consistent contract for generating
+ DDL from a ``TypeEngine`` object and for constructing
+ ``TypeEngine`` objects based on column reflection.
+
+Highlights of these changes include:
+
+* The construction of types within dialects has been totally
+  overhauled. Dialects now define publicly available types
+ as UPPERCASE names exclusively, and internal
+ implementation types using underscore identifiers (i.e.
+ are private). The system by which types are expressed in
+ SQL and DDL has been moved to the compiler system. This
+ has the effect that there are much fewer type objects
+  within most dialects. A detailed document on this
+  architecture for dialect authors is in
+  ``lib/sqlalchemy/dialects/type_migration_guidelines.txt``.
+
+* Reflection of types now returns the exact UPPERCASE type
+ within types.py, or the UPPERCASE type within the dialect
+ itself if the type is not a standard SQL type. This means
+ reflection now returns more accurate information about
+ reflected types.
+
+* User defined types that subclass ``TypeEngine`` and wish
+ to provide ``get_col_spec()`` should now subclass
+ ``UserDefinedType``.
+
+* The ``result_processor()`` method on all type classes now
+ accepts an additional argument ``coltype``. This is the
+ DBAPI type object attached to cursor.description, and
+ should be used when applicable to make better decisions on
+ what kind of result-processing callable should be
+ returned. Ideally result processor functions would never
+ need to use ``isinstance()``, which is an expensive call
+ at this level.
+
+Native Unicode Mode
+-------------------
+
+As more DBAPIs support returning Python unicode objects
+directly, the base dialect now performs a check upon the
+first connection which establishes whether or not the DBAPI
+returns a Python unicode object for a basic select of a
+VARCHAR value. If so, the ``String`` type and all
+subclasses (i.e. ``Text``, ``Unicode``, etc.) will skip the
+"unicode" check/conversion step when result rows are
+received. This offers a dramatic performance increase for
+large result sets. The "unicode mode" currently is known to
+work with:
+
+* sqlite3 / pysqlite
+
+
+* psycopg2 - SQLA 0.6 now uses the "UNICODE" type extension
+ by default on each psycopg2 connection object
+
+* pg8000
+
+
+* cx_oracle (we use an output processor - nice feature!)
+
+
+Other types may choose to disable unicode processing as
+needed, such as the ``NVARCHAR`` type when used with MS-SQL.
+
+In particular, if porting an application based on a DBAPI
+that formerly returned non-unicode strings, the "native
+unicode" mode has a plainly different default behavior -
+columns that are declared as ``String`` or ``VARCHAR`` now
+return unicode by default whereas they would return strings
+before. This can break code which expects non-unicode
+strings. The psycopg2 "native unicode" mode can be
+disabled by passing ``use_native_unicode=False`` to
+``create_engine()``.
+
+A more general solution for string columns that explicitly
+do not want a unicode object is to use a ``TypeDecorator``
+that converts unicode back to utf-8, or whatever is desired:
+
+::
+
+    import sqlalchemy as sa
+    from sqlalchemy.types import TypeDecorator
+
+    class UTF8Encoded(TypeDecorator):
+ """Unicode type which coerces to utf-8."""
+
+ impl = sa.VARCHAR
+
+ def process_result_value(self, value, dialect):
+ if isinstance(value, unicode):
+ value = value.encode('utf-8')
+ return value
+
+Note that the ``assert_unicode`` flag is now deprecated.
+SQLAlchemy allows the DBAPI and backend database in use to
+handle Unicode parameters when available, and does not add
+operational overhead by checking the incoming type; modern
+systems like SQLite and Postgresql will raise an encoding
+error on their end if invalid data is passed. In those
+cases where SQLAlchemy does need to coerce a bind parameter
+from Python Unicode to an encoded string, or when the
+Unicode type is used explicitly, a warning is raised if the
+object is a bytestring. This warning can be suppressed or
+converted to an exception using the Python warnings filter
+documented at: http://docs.python.org/library/warnings.html
+
+Generic Enum Type
+-----------------
+
+We now have an ``Enum`` in the ``types`` module. This is a
+string type that is given a collection of "labels" which
+constrain the possible values given to those labels. By
+default, this type generates a ``VARCHAR`` using the size of
+the largest label, and applies a CHECK constraint to the
+table within the CREATE TABLE statement. When using MySQL,
+the type by default uses MySQL's ENUM type, and when using
+Postgresql the type will generate a user defined type using
+``CREATE TYPE <mytype> AS ENUM``. In order to create the
+type using Postgresql, the ``name`` parameter must be
+specified to the constructor. The type also accepts a
+``native_enum=False`` option which will issue the
+VARCHAR/CHECK strategy for all databases. Note that
+Postgresql ENUM types currently don't work with pg8000 or
+zxjdbc.
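
A minimal sketch (the table and the ``name`` value are hypothetical):

```python
from sqlalchemy import MetaData, Table, Column, Integer, Enum

metadata = MetaData()
post = Table('post', metadata,
             Column('id', Integer, primary_key=True),
             # name= is what Postgresql uses for
             # CREATE TYPE post_status AS ENUM (...);
             # pass native_enum=False to force the
             # VARCHAR + CHECK strategy on all backends
             Column('status', Enum('draft', 'published',
                                   name='post_status')))
```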
+
+Reflection Returns Dialect-Specific Types
+-----------------------------------------
+
+Reflection now returns the most specific type possible from
+the database. That is, if you create a table using
+``String``, then reflect it back, the reflected column will
+likely be ``VARCHAR``. For dialects that support a more
+specific form of the type, that's what you'll get. So a
+``Text`` type would come back as ``oracle.CLOB`` on Oracle,
+a ``LargeBinary`` might be a ``mysql.MEDIUMBLOB``, etc. The
+obvious advantage here is that reflection preserves as much
+information as possible from what the database had to say.
+
+Some applications that deal heavily in table metadata may
+wish to compare types across reflected tables and/or non-
+reflected tables. There's a semi-private accessor available
+on ``TypeEngine`` called ``_type_affinity`` and an
+associated comparison helper ``_compare_type_affinity``.
+This accessor returns the "generic" ``types`` class which
+the type corresponds to:
+
+::
+
+ >>> String(50)._compare_type_affinity(postgresql.VARCHAR(50))
+ True
+ >>> Integer()._compare_type_affinity(mysql.REAL)
+ False
+
+Miscellaneous API Changes
+-------------------------
+
+The usual "generic" types are still the general system in
+use, i.e. ``String``, ``Float``, ``DateTime``. There are a
+few changes:
+
+* Types no longer make any guesses as to default parameters.
+ In particular, ``Numeric``, ``Float``, as well as
+ subclasses NUMERIC, FLOAT, DECIMAL don't generate any
+ length or scale unless specified. This also continues to
+ include the controversial ``String`` and ``VARCHAR`` types
+ (although MySQL dialect will pre-emptively raise when
+ asked to render VARCHAR with no length). No defaults are
+ assumed, and if they are used in a CREATE TABLE statement,
+ an error will be raised if the underlying database does
+ not allow non-lengthed versions of these types.
+
+* the ``Binary`` type has been renamed to ``LargeBinary``,
+ for BLOB/BYTEA/similar types. For ``BINARY`` and
+ ``VARBINARY``, those are present directly as
+ ``types.BINARY``, ``types.VARBINARY``, as well as in the
+ MySQL and MS-SQL dialects.
+
+* ``PickleType`` now uses == for comparison of values when
+ mutable=True, unless the "comparator" argument with a
+ comparison function is specified to the type. If you are
+ pickling a custom object you should implement an
+ ``__eq__()`` method so that value-based comparisons are
+ accurate.
+
+* The default "precision" and "scale" arguments of Numeric
+ and Float have been removed and now default to None.
+ NUMERIC and FLOAT will be rendered with no numeric
+ arguments by default unless these values are provided.
+
+* DATE, TIME and DATETIME types on SQLite can now take
+  optional "storage_format" and "regexp" arguments.
+  "storage_format" can be used to store those types using a
+  custom string format. "regexp" allows a custom regular
+  expression to be used to match string values from the
+  database.
+
+* ``__legacy_microseconds__`` on SQLite ``Time`` and
+  ``DateTime`` types is no longer supported. Use the new
+  "storage_format" argument instead.
+
+* ``DateTime`` types on SQLite now use by default a
+  stricter regular expression to match strings from the
+  database. Use the new "regexp" argument if you are using
+  data stored in a legacy format.
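
The ``PickleType`` point above can be sketched with a plain value object; the class is hypothetical and needs no SQLAlchemy at all:

```python
class Point(object):
    """Example value object destined for a PickleType column."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    # value-based equality lets the mutable=True change
    # detection compare the loaded copy against the current
    # value accurately
    def __eq__(self, other):
        return isinstance(other, Point) and \
            (self.x, self.y) == (other.x, other.y)

    def __ne__(self, other):  # Python 2 also needs the inverse
        return not self.__eq__(other)
```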
+
+ORM Changes
+===========
+
+Upgrading an ORM application from 0.5 to 0.6 should require
+little to no changes, as the ORM's behavior remains almost
+identical. There are some default argument and name
+changes, and some loading behaviors have been improved.
+
+New Unit of Work
+----------------
+
+The internals for the unit of work, primarily
+``topological.py`` and ``unitofwork.py``, have been
+completely rewritten and are vastly simplified. This
+should have no impact on usage, as all existing behavior
+during flush has been maintained exactly (or at least, as
+far as it is exercised by our testsuite and the handful of
+production environments which have tested it heavily). The
+performance of flush() now uses 20-30% fewer method calls
+and should also use less memory. The intent and flow of the
+source code should now be reasonably easy to follow, and the
+architecture of the flush is fairly open-ended at this
+point, creating room for potential new areas of
+sophistication. The flush process no longer has any
+reliance on recursion so flush plans of arbitrary size and
+complexity can be flushed. Additionally, the mapper's
+"save" process, which issues INSERT and UPDATE statements,
+now caches the "compiled" form of the two statements so that
+callcounts are further dramatically reduced with very large
+flushes.
+
+Any changes in behavior observed with flush versus earlier
+versions of 0.6 or 0.5 should be reported to us ASAP - we'll
+make sure no functionality is lost.
+
+Changes to ``query.update()`` and ``query.delete()``
+----------------------------------------------------
+
+* the 'expire' option on query.update() has been renamed to
+ 'fetch', thus matching that of query.delete()
+
+* ``query.update()`` and ``query.delete()`` both default to
+ 'evaluate' for the synchronize strategy.
+
+* the 'synchronize' strategy for update() and delete()
+ raises an error on failure. There is no implicit fallback
+ onto "fetch". Failure of evaluation is based on the
+ structure of criteria, so success/failure is deterministic
+ based on code structure.
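
A runnable sketch of the new defaults (modern ``declarative_base``/``Session`` import locations and a hypothetical ``User`` class are used for brevity):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = Session(engine)
session.add_all([User(name='ed'), User(name='wendy')])
session.commit()

# 'evaluate' (now the default) applies the criterion to
# in-session objects in Python with no extra SELECT; a
# criterion it cannot evaluate raises instead of silently
# falling back to 'fetch'
session.query(User).filter(User.name == 'ed').update(
    {'name': 'edward'}, synchronize_session='evaluate')
session.commit()
```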
+
+``relation()`` is officially named ``relationship()``
+-----------------------------------------------------
+
+This is to solve the long-running issue that "relation"
+means a "table or derived table" in relational algebra
+terms. The ``relation()`` name, which is less typing, will
+hang around for the foreseeable future so this change should
+be entirely painless.
+
+Subquery eager loading
+----------------------
+
+A new kind of eager loading is added called "subquery"
+loading. This is a load that emits a second SQL query
+immediately after the first which loads full collections for
+all the parents in the first query, joining upwards to the
+parent using INNER JOIN. Subquery loading is used similarly
+to the current joined-eager loading, using the
+``subqueryload()`` and ``subqueryload_all()`` options as
+well as the ``lazy='subquery'`` setting on
+``relationship()``. The subquery load is usually much
+more efficient for loading many larger collections as it
+uses INNER JOIN unconditionally and also doesn't re-load
+parent rows.
+
+``eagerload()``, ``eagerload_all()`` is now ``joinedload()``, ``joinedload_all()``
+------------------------------------------------------------------------------------------------
+
+To make room for the new subquery load feature, the existing
+``eagerload()``/``eagerload_all()`` options are now
+superseded by ``joinedload()`` and
+``joinedload_all()``. The old names will hang around
+for the foreseeable future just like ``relation()``.
+
+``lazy=False|None|True|'dynamic'`` now accepts ``lazy='noload'|'joined'|'subquery'|'select'|'dynamic'``
+-------------------------------------------------------------------------------------------------------------
+
+Continuing on the theme of loader strategies opened up, the
+standard keywords for the ``lazy`` option on
+``relationship()`` are now ``select`` for lazy
+loading (via a SELECT issued on attribute access),
+``joined`` for joined-eager loading, ``subquery``
+for subquery-eager loading, ``noload`` to indicate no
+loading should occur, and ``dynamic`` for a "dynamic"
+relationship. The old ``True``, ``False``,
+``None`` arguments are still accepted with the identical
+behavior as before.
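
Both spellings can be sketched together (hypothetical mapped classes; modern declarative import locations assumed):

```python
from sqlalchemy import (Column, ForeignKey, Integer,
                        create_engine)
from sqlalchemy.orm import (Session, declarative_base,
                            relationship, subqueryload)

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    # make subquery loading the default strategy
    children = relationship('Child', lazy='subquery')

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = Session(engine)
session.add(Parent(children=[Child(), Child()]))
session.commit()
session.expire_all()

# or opt in per query: collections for all matched Parents
# load in one extra SELECT, not one SELECT per parent
parents = session.query(Parent).options(
    subqueryload(Parent.children)).all()
```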
+
+innerjoin=True on relation, joinedload
+--------------------------------------
+
+Joined-eagerly loaded scalars and collections can now be
+instructed to use INNER JOIN instead of OUTER JOIN. On
+Postgresql this is observed to provide a 300-600% speedup on
+some queries. Set this flag for any many-to-one which is
+on a NOT NULLable foreign key, and similarly for any
+collection where related items are guaranteed to exist.
+
+At mapper level:
+
+::
+
+ mapper(Child, child)
+ mapper(Parent, parent, properties={
+ 'child':relationship(Child, lazy='joined', innerjoin=True)
+ })
+
+At query time level:
+
+::
+
+ session.query(Parent).options(joinedload(Parent.child, innerjoin=True)).all()
+
+The ``innerjoin=True`` flag at the ``relationship()`` level
+will also take effect for any ``joinedload()`` option which
+does not override the value.
+
+Many-to-one Enhancements
+------------------------
+
+* many-to-one relations now fire off a lazyload in fewer
+  cases, and in most cases will not fetch the "old"
+  value when a new one is replaced.
+
+* many-to-one relation to a joined-table subclass now uses
+ get() for a simple load (known as the "use_get"
+ condition), i.e. ``Related``->``Sub(Base)``, without the
+ need to redefine the primaryjoin condition in terms of the
+  base table. :ticket:`1186`
+
+* specifying a foreign key with a declarative column, i.e.
+ ``ForeignKey(MyRelatedClass.id)`` doesn't break the
+  "use_get" condition from taking place. :ticket:`1492`
+
+* relationship(), joinedload(), and joinedload_all() now
+ feature an option called "innerjoin". Specify ``True`` or
+ ``False`` to control whether an eager join is constructed
+ as an INNER or OUTER join. Default is ``False`` as always.
+ The mapper options will override whichever setting is
+ specified on relationship(). Should generally be set for
+ many-to-one, not nullable foreign key relations to allow
+  improved join performance. :ticket:`1544`
+
+* the behavior of joined eager loading such that the main
+ query is wrapped in a subquery when LIMIT/OFFSET are
+ present now makes an exception for the case when all eager
+ loads are many-to-one joins. In those cases, the eager
+ joins are against the parent table directly along with the
+ limit/offset without the extra overhead of a subquery,
+ since a many-to-one join does not add rows to the result.
+
+ For example, in 0.5 this query:
+
+ ::
+
+ session.query(Address).options(eagerload(Address.user)).limit(10)
+
+ would produce SQL like:
+
+ ::
+
+ SELECT * FROM
+ (SELECT * FROM addresses LIMIT 10) AS anon_1
+ LEFT OUTER JOIN users AS users_1 ON users_1.id = anon_1.addresses_user_id
+
+  This is because the presence of any eager loaders suggests
+ that some or all of them may relate to multi-row
+ collections, which would necessitate wrapping any kind of
+ rowcount-sensitive modifiers like LIMIT inside of a
+ subquery.
+
+ In 0.6, that logic is more sensitive and can detect if all
+ eager loaders represent many-to-ones, in which case the
+ eager joins don't affect the rowcount:
+
+ ::
+
+ SELECT * FROM addresses LEFT OUTER JOIN users AS users_1 ON users_1.id = addresses.user_id LIMIT 10
+
+Mutable Primary Keys with Joined Table Inheritance
+--------------------------------------------------
+
+A joined table inheritance config where the child table has
+a PK that foreign keys to the parent PK can now be updated
+on a CASCADE-capable database like Postgresql.
+``mapper()`` now has an option ``passive_updates=True``
+which indicates this foreign key is updated automatically.
+If on a non-cascading database like SQLite or MySQL/MyISAM,
+set this flag to ``False``. A future feature enhancement
+will try to get this flag to be auto-configuring based on
+dialect/table style in use.
+
+Beaker Caching
+--------------
+
+A promising new example of Beaker integration is in
+``examples/beaker_caching``. This is a straightforward
+recipe which applies a Beaker cache within the
+result-generation engine of ``Query``. Cache parameters are
+provided via ``query.options()``, and allows full control
+over the contents of the cache. SQLAlchemy 0.6 includes
+improvements to the ``Session.merge()`` method to support
+this and similar recipes, as well as to provide
+significantly improved performance in most scenarios.
+
+Other Changes
+-------------
+
+* the "row tuple" object returned by ``Query`` when multiple
+ column/entities are selected is now picklable as well as
+ higher performing.
+
+* ``query.join()`` has been reworked to provide more
+ consistent behavior and more flexibility (includes
+  :ticket:`1537`)
+
+* ``query.select_from()`` accepts multiple clauses to
+ produce multiple comma separated entries within the FROM
+ clause. Useful when selecting from multiple-homed join()
+ clauses.
+
+* the "dont_load=True" flag on ``Session.merge()`` is
+ deprecated and is now "load=False".
+
+* added "make_transient()" helper function which transforms
+  a persistent/detached instance into a transient one (i.e.
+  deletes the instance_key and removes from any session.)
+  :ticket:`1052`
+
+* the allow_null_pks flag on mapper() is deprecated and has
+ been renamed to allow_partial_pks. It is turned "on" by
+ default. This means that a row which has a non-null value
+ for any of its primary key columns will be considered an
+ identity. The need for this scenario typically only occurs
+ when mapping to an outer join. When set to False, a PK
+ that has NULLs in it will not be considered a primary key
+ - in particular this means a result row will come back as
+ None (or not be filled into a collection), and new in 0.6
+ also indicates that session.merge() won't issue a round
+  trip to the database for such a PK value. :ticket:`1680`
+
+* the mechanics of "backref" have been fully merged into the
+ finer grained "back_populates" system, and take place
+ entirely within the ``_generate_backref()`` method of
+ ``RelationProperty``. This makes the initialization
+ procedure of ``RelationProperty`` simpler and allows
+ easier propagation of settings (such as from subclasses of
+ ``RelationProperty``) into the reverse reference. The
+ internal ``BackRef()`` is gone and ``backref()`` returns a
+ plain tuple that is understood by ``RelationProperty``.
+
+* the ``keys`` attribute of ``ResultProxy`` is now a method, so
+ references to it (``result.keys``) must be changed to
+ method invocations (``result.keys()``)
+
+* ``ResultProxy.last_inserted_ids`` is now deprecated, use
+ ``ResultProxy.inserted_primary_key`` instead.
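
The ``make_transient()`` helper above can be sketched as (hypothetical mapped class; modern declarative import locations assumed):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, make_transient

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = Session(engine)
user = User(name='ed')
session.add(user)
session.commit()

session.expunge(user)   # detach from the session
make_transient(user)    # erase the identity key: transient again
```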
+
+Deprecated/Removed ORM Elements
+-------------------------------
+
+Most elements that were deprecated throughout 0.5 and raised
+deprecation warnings have been removed (with a few
+exceptions). All elements that were marked "pending
+deprecation" are now deprecated and will raise a warning
+upon use.
+
+* 'transactional' flag on sessionmaker() and others is
+ removed. Use 'autocommit=True' to indicate
+ 'transactional=False'.
+
+* 'polymorphic_fetch' argument on mapper() is removed.
+ Loading can be controlled using the 'with_polymorphic'
+ option.
+
+* 'select_table' argument on mapper() is removed. Use
+ 'with_polymorphic=("*", <some selectable>)' for this
+ functionality.
+
+* 'proxy' argument on synonym() is removed. This flag did
+ nothing throughout 0.5, as the "proxy generation"
+ behavior is now automatic.
+
+* Passing a single list of elements to joinedload(),
+ joinedload_all(), contains_eager(), lazyload(), defer(),
+ and undefer() instead of multiple positional \*args is
+ deprecated.
+
+* Passing a single list of elements to query.order_by(),
+ query.group_by(), query.join(), or query.outerjoin()
+ instead of multiple positional \*args is deprecated.
+
+* ``query.iterate_instances()`` is removed. Use
+ ``query.instances()``.
+
+* ``Query.query_from_parent()`` is removed. Use the
+ sqlalchemy.orm.with_parent() function to produce a
+ "parent" clause, or alternatively ``query.with_parent()``.
+
+* ``query._from_self()`` is removed, use
+ ``query.from_self()`` instead.
+
+* the "comparator" argument to composite() is removed. Use
+ "comparator_factory".
+
+* ``RelationProperty._get_join()`` is removed.
+
+
+* the 'echo_uow' flag on Session is removed. Use logging
+ on the "sqlalchemy.orm.unitofwork" name.
+
+* ``session.clear()`` is removed. use
+ ``session.expunge_all()``.
+
+* ``session.save()``, ``session.update()``,
+ ``session.save_or_update()`` are removed. Use
+ ``session.add()`` and ``session.add_all()``.
+
+* the "objects" flag on session.flush() remains deprecated.
+
+
+* the "dont_load=True" flag on session.merge() is deprecated
+ in favor of "load=False".
+
+* ``ScopedSession.mapper`` remains deprecated. See the
+  usage recipe at
+  http://www.sqlalchemy.org/trac/wiki/UsageRecipes/SessionAwareMapper
+
+* passing an ``InstanceState`` (internal SQLAlchemy state
+ object) to ``attributes.init_collection()`` or
+ ``attributes.get_history()`` is deprecated. These
+ functions are public API and normally expect a regular
+ mapped object instance.
+
+* the 'engine' parameter to ``declarative_base()`` is
+ removed. Use the 'bind' keyword argument.
+
+Extensions
+==========
+
+SQLSoup
+-------
+
+SQLSoup has been modernized and updated to reflect common
+0.5/0.6 capabilities, including well-defined session
+integration. Please read the new docs at
+http://www.sqlalchemy.org/docs/06/reference/ext/sqlsoup.html.
+
+Declarative
+-----------
+
+The ``DeclarativeMeta`` (default metaclass for
+``declarative_base``) previously allowed subclasses to
+modify ``dict_`` to add class attributes (e.g. columns).
+This no longer works; the ``DeclarativeMeta`` constructor
+now ignores ``dict_``. Instead, the class attributes should
+be assigned directly, e.g. ``cls.id = Column(...)``, or the
+`mixin class <http://www.sqlalchemy.org/docs/reference/ext/declarative.html#mix-in-classes>`_
+approach should be used instead of the metaclass approach.
+
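+The Python mechanics behind this change can be illustrated without
+SQLAlchemy at all. Below is a hedged, plain-Python sketch (hypothetical
+class names, not SQLAlchemy code): once a class has been created,
+mutating the ``dict_`` handed to the metaclass has no effect, while
+direct attribute assignment on the class works as usual.
+
```python
# Illustrative only: mutating dict_ after the type has been created
# does nothing, while assigning attributes directly on the class works.
class IgnoresDictMeta(type):
    def __init__(cls, classname, bases, dict_):
        super().__init__(classname, bases, dict_)
        dict_["added_via_dict"] = "ignored"  # too late - not part of the class
        cls.added_directly = "visible"       # this is the supported route

class MyMapped(metaclass=IgnoresDictMeta):
    pass
```
+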
--- /dev/null
+==============================
+What's New in SQLAlchemy 0.7?
+==============================
+
+.. admonition:: About this Document
+
+ This document describes changes between SQLAlchemy version 0.6,
+ last released May 5, 2012, and SQLAlchemy version 0.7,
+ undergoing maintenance releases as of October, 2012.
+
+ Document date: July 27, 2011
+
+Introduction
+============
+
+This guide introduces what's new in SQLAlchemy version 0.7,
+and also documents changes which affect users migrating
+their applications from the 0.6 series of SQLAlchemy to 0.7.
+
+To as great a degree as possible, changes are made in such a
+way as to not break compatibility with applications built
+for 0.6. The changes that are necessarily not backwards
+compatible are very few, and all but one, the change to
+mutable attribute defaults, should affect an exceedingly
+small portion of applications - many of the changes regard
+non-public APIs and undocumented hacks some users may have
+been attempting to use.
+
+A second, even smaller class of non-backwards-compatible
+changes is also documented. This class of change regards
+those features and behaviors that have been deprecated at
+least since version 0.5 and have been raising warnings since
+their deprecation. These changes would only affect
+applications that are still using 0.4- or early 0.5-style
+APIs. As the project matures, we have fewer and fewer of
+these kinds of changes with 0.x level releases, which is a
+product of our API having ever fewer features that are less
+than ideal for the use cases they were meant to solve.
+
+An array of existing functionalities has been superseded in
+SQLAlchemy 0.7. There's not much difference between the
+terms "superseded" and "deprecated", except that the former
+carries a much weaker suggestion that the old feature will
+ever be removed. In 0.7, features like ``synonym`` and
+``comparable_property``, as well as all the ``Extension``
+and other event classes, have been superseded. But these
+"superseded" features have been re-implemented such that
+their implementations live mostly outside of core ORM code,
+so their continued "hanging around" doesn't impact
+SQLAlchemy's ability to further streamline and refine its
+internals, and we expect them to remain within the API for
+the foreseeable future.
+
+New Features
+============
+
+New Event System
+----------------
+
+SQLAlchemy started early with the ``MapperExtension`` class,
+which provided hooks into the persistence cycle of mappers.
+As SQLAlchemy quickly became more componentized, pushing
+mappers into a more focused configurational role, many more
+"extension", "listener", and "proxy" classes popped up to
+solve various activity-interception use cases in an ad-hoc
+fashion. Part of this was driven by the divergence of
+activities; ``ConnectionProxy`` objects wanted to provide a
+system of rewriting statements and parameters;
+``AttributeExtension`` provided a system of replacing
+incoming values, and ``DDL`` objects had events that could
+be switched off of dialect-sensitive callables.
+
+0.7 re-implements virtually all of these plugin points with
+a new, unified approach, which retains all the
+functionalities of the different systems, provides more
+flexibility and less boilerplate, performs better, and
+eliminates the need to learn radically different APIs for
+each event subsystem. The pre-existing classes
+``MapperExtension``, ``SessionExtension``,
+``AttributeExtension``, ``ConnectionProxy``,
+``PoolListener`` as well as the ``DDLElement.execute_at``
+method are deprecated and now implemented in terms of the
+new system - these APIs remain fully functional and are
+expected to remain in place for the foreseeable future.
+
+The new approach uses named events and user-defined
+callables to associate activities with events. The API's
+look and feel was driven by such diverse sources as jQuery,
+Blinker, and Hibernate, and was also modified further on
+several occasions during conferences with dozens of users on
+Twitter, which appears to have a much higher response rate
+than the mailing list for such questions.
+
+It also features an open-ended system of target
+specification that allows events to be associated with API
+classes, such as for all ``Session`` or ``Engine`` objects,
+with specific instances of API classes, such as for a
+specific ``Pool`` or ``Mapper``, as well as for related
+objects like a user-defined class that's mapped, or
+something as specific as a certain attribute on instances of
+a particular subclass of a mapped parent class. Individual
+listener subsystems can apply wrappers to incoming
+user-defined listener functions which modify how they are
+called - a mapper event can receive either the instance of the
+object being operated upon, or its underlying
+``InstanceState`` object. An attribute event can choose
+whether or not to take responsibility for returning a new value.
+
+Several systems now build upon the new event API, including
+the new "mutable attributes" API as well as composite
+attributes. The greater emphasis on events has also led to
+the introduction of a handful of new events, including
+attribute expiration and refresh operations, pickle
+loads/dumps operations, and completed mapper construction
+operations.
+
+The event system is introduced at `Events
+<http://www.sqlalchemy.org/docs/07/core/event.html>`_.
+
+[ticket:1902]
+
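+The shape of the new approach - named events plus user-defined
+callables - can be sketched in a few lines of plain Python. This is
+illustrative only, not SQLAlchemy's implementation; the names
+``listen``, ``dispatch`` and the toy ``Engine`` below are stand-ins:
+
```python
# Minimal sketch of a named-event registry: listeners are keyed by
# (target, event name) and invoked in registration order.
import collections

_listeners = collections.defaultdict(list)

def listen(target, name, fn):
    """Associate a user-defined callable with a named event on a target."""
    _listeners[(target, name)].append(fn)

def dispatch(target, name, *args):
    """Invoke every listener registered for this target and event name."""
    for fn in _listeners[(target, name)]:
        fn(*args)

class Engine:
    def connect(self):
        dispatch(self, "connect")   # fire the named event
        return "connection"

engine = Engine()
calls = []
listen(engine, "connect", lambda: calls.append("connected"))
engine.connect()
```
+
+In SQLAlchemy itself, the equivalent gesture is
+``event.listen(target, "event_name", fn)``.
+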
+Hybrid Attributes, implements/supersedes synonym(), comparable_property()
+-------------------------------------------------------------------------
+
+The "derived attributes" example has now been turned into an
+official extension. The typical use case for ``synonym()``
+is to provide descriptor access to a mapped column; the use
+case for ``comparable_property()`` is to be able to return a
+``PropComparator`` from any descriptor. In practice, the
+approach of "derived" is easier to use, more extensible, is
+implemented in a few dozen lines of pure Python with almost
+no imports, and doesn't require the ORM core to even be
+aware of it. The feature is now known as the `Hybrid
+Attributes <http://www.sqlalchemy.org/docs/07/orm/extensions
+/hybrid.html>`_ extension.
+
+``synonym()`` and ``comparable_property()`` are still part
+of the ORM, though their implementations have been moved
+outwards, building on an approach that is similar to that of
+the hybrid extension, so that the core ORM
+mapper/query/property modules aren't really aware of them
+otherwise.
+
+`Hybrid Attributes <http://www.sqlalchemy.org/docs/07/orm/ex
+tensions/hybrid.html>`_
+
+[ticket:1903]
+
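+The core descriptor trick behind "hybrid" attributes is small enough
+to show inline. The following is an ORM-free sketch of the idea; in
+the real extension, the class-level branch returns a SQL expression
+rather than a plain value, and ``Interval`` here is a hypothetical
+example class:
+
```python
# Sketch of the "hybrid" idea: one descriptor answers both at the
# instance level (plain Python evaluation) and at the class level
# (where the real extension would produce a SQL expression).
class hybrid_property:
    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner):
        if instance is None:
            # Class-level access, e.g. Interval.length in a filter().
            return self.func(owner)
        return self.func(instance)

class Interval:
    start = 5   # stand-ins for mapped Column attributes
    end = 15

    @hybrid_property
    def length(self):
        return self.end - self.start
```
+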
+Speed Enhancements
+------------------
+
+As is customary with all major SQLA releases, a wide pass
+through the internals to reduce overhead and callcounts has
+been made which further reduces the work needed in common
+scenarios. Highlights of this release include:
+
+* The flush process will now bundle INSERT statements into
+ batches fed to ``cursor.executemany()``, for rows where
+ the primary key is already present. In particular this
+ usually applies to the "child" table on a joined table
+ inheritance configuration, meaning the number of calls to
+ ``cursor.execute`` for a large bulk insert of
+ joined-table objects can be cut in half, allowing native DBAPI
+ optimizations to take place for those statements passed
+ to ``cursor.executemany()`` (such as re-using a prepared
+ statement).
+
+* The codepath invoked when accessing a many-to-one
+ reference to a related object that's already loaded has
+ been greatly simplified. The identity map is checked
+ directly without the need to generate a new ``Query``
+ object first, which is expensive in the context of
+ thousands of in-memory many-to-ones being accessed.
+ Constructed-per-call "loader" objects are also no longer
+ used for the majority of lazy attribute loads.
+
+* The rewrite of composites allows a shorter codepath when
+ mapper internals access mapped attributes within a
+ flush.
+
+* New inlined attribute access functions replace the
+ previous usage of "history" when the "save-update" and
+ other cascade operations need to cascade among the full
+ scope of datamembers associated with an attribute. This
+ reduces the overhead of generating a new ``History``
+ object for this speed-critical operation.
+
+* The internals of the ``ExecutionContext``, the object
+ corresponding to a statement execution, have been
+ inlined and simplified.
+
+* The ``bind_processor()`` and ``result_processor()``
+ callables generated by types for each statement
+ execution are now cached (carefully, so as to avoid memory
+ leaks for ad-hoc types and dialects) for the lifespan of
+ that type, further reducing per-statement call overhead.
+
+* The collection of "bind processors" for a particular
+ ``Compiled`` instance of a statement is also cached on
+ the ``Compiled`` object, taking further advantage of the
+ "compiled cache" used by the flush process to re-use the
+ same compiled form of INSERT, UPDATE, DELETE statements.
+
+A demonstration of callcount reduction including a sample
+benchmark script is at
+http://techspot.zzzeek.org/2010/12/12/a-tale-of-three-profiles/
+
+Composites Rewritten
+--------------------
+
+The "composite" feature has been rewritten, like
+``synonym()`` and ``comparable_property()``, to use a
+lighter weight implementation based on descriptors and
+events, rather than building into the ORM internals. This
+allowed the removal of some latency from the mapper/unit of
+work internals, and simplifies the workings of composite.
+The composite attribute now no longer conceals the
+underlying columns it builds upon, which now remain as
+regular attributes. Composites can also act as a proxy for
+``relationship()`` as well as ``Column()`` attributes.
+
+The major backwards-incompatible change of composites is
+that they no longer use the ``mutable=True`` system to
+detect in-place mutations. Please use the `Mutation
+Tracking <http://www.sqlalchemy.org/docs/07/orm/extensions/m
+utable.html>`_ extension to establish in-place change events
+to existing composite usage.
+
+`Composite Column Types
+<http://www.sqlalchemy.org/docs/07/orm/mapper_config.html
+#composite-column-types>`_
+
+`Mutation Tracking <http://www.sqlalchemy.org/docs/07/orm/ex
+tensions/mutable.html>`_
+
+[ticket:2008] [ticket:2024]
+
+More succinct form of query.join(target, onclause)
+--------------------------------------------------
+
+The default method of issuing ``query.join()`` to a target
+with an explicit onclause is now:
+
+::
+
+ query.join(SomeClass, SomeClass.id==ParentClass.some_id)
+
+In 0.6, this usage was considered to be an error, because
+``join()`` accepts multiple arguments corresponding to
+multiple JOIN clauses - the two-argument form needed to be
+in a tuple to disambiguate between single-argument and
+two-argument join targets. In the middle of 0.6 we added
+detection and an error message for this specific calling
+style, since it was so common. In 0.7, since we are
+detecting the exact pattern anyway, and since having to type
+out a tuple for no reason is extremely annoying, the
+non-tuple method now becomes the "normal" way to do it. The
+"multiple JOIN" use case is exceedingly rare compared to the
+single join case, and multiple joins these days are more
+clearly represented by multiple calls to ``join()``.
+
+The tuple form will remain for backwards compatibility.
+
+Note that all the other forms of ``query.join()`` remain
+unchanged:
+
+::
+
+ query.join(MyClass.somerelation)
+ query.join("somerelation")
+ query.join(MyTarget)
+ # ... etc
+
+`Querying with Joins
+<http://www.sqlalchemy.org/docs/07/orm/tutorial.html
+#querying-with-joins>`_
+
+[ticket:1923]
+
+Mutation event extension, supersedes "mutable=True"
+---------------------------------------------------
+
+A new extension, `Mutation Tracking <http://www.sqlalchemy.o
+rg/docs/07/orm/extensions/mutable.html>`_, provides a
+mechanism by which user-defined datatypes can provide change
+events back to the owning parent or parents. The extension
+includes an approach for scalar database values, such as
+those managed by ``PickleType``, ``postgresql.ARRAY``, or
+other custom ``MutableType`` classes, as well as an approach
+for ORM "composites", those configured using `composite()
+<http://www.sqlalchemy.org/docs/07/orm/mapper_config.html
+#composite-column-types>`_.
+
+`Mutation Tracking Extension <http://www.sqlalchemy.org/docs
+/07/orm/extensions/mutable.html>`_
+
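+The mechanism can be sketched roughly as follows. The classes below
+are hypothetical and do not reflect the extension's actual API; they
+only illustrate the idea of a mutable value that holds weak references
+to its parents and flags them on in-place change:
+
```python
# Rough sketch of change-event propagation: a mutable dict keeps weak
# references to owning parents and marks them dirty when mutated.
import weakref

class NotifyingDict(dict):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._parents = weakref.WeakSet()

    def changed(self):
        # Tell every owning parent that an in-place change occurred.
        for parent in self._parents:
            parent.dirty = True

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.changed()

class Parent:
    dirty = False

p = Parent()
data = NotifyingDict()
data._parents.add(p)
data["x"] = 1   # in-place mutation flags the parent
```
+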
+NULLS FIRST / NULLS LAST operators
+----------------------------------
+
+These are implemented as an extension to the ``asc()`` and
+``desc()`` operators, called ``nullsfirst()`` and
+``nullslast()``.
+
+`nullsfirst() <http://www.sqlalchemy.org/docs/07/core/expres
+sion_api.html#sqlalchemy.sql.expression._CompareMixin.nullsf
+irst>`_
+
+`nullslast() <http://www.sqlalchemy.org/docs/07/core/express
+ion_api.html#sqlalchemy.sql.expression._CompareMixin.nullsla
+st>`_
+
+[ticket:723]
+
+select.distinct(), query.distinct() accepts \*args for Postgresql DISTINCT ON
+-----------------------------------------------------------------------------
+
+This was already available by passing a list of expressions
+to the ``distinct`` keyword argument of ``select()``. The
+``distinct()`` method of ``select()`` and ``Query`` now
+accepts positional arguments which are rendered as DISTINCT
+ON when a Postgresql backend is used.
+
+`distinct() <http://www.sqlalchemy.org/docs/07/core/expressi
+on_api.html#sqlalchemy.sql.expression.Select.distinct>`_
+
+`Query.distinct() <http://www.sqlalchemy.org/docs/07/orm/que
+ry.html#sqlalchemy.orm.query.Query.distinct>`_
+
+[ticket:1069]
+
+``Index()`` can be placed inline inside of ``Table``, ``__table_args__``
+------------------------------------------------------------------------
+
+The Index() construct can be created inline with a Table
+definition, using strings as column names, as an alternative
+to the creation of the index outside of the Table. That is:
+
+::
+
+    Table('mytable', metadata,
+        Column('id', Integer, primary_key=True),
+        Column('name', String(50), nullable=False),
+        Index('idx_name', 'name')
+    )
+
+The primary rationale here is for the benefit of declarative
+``__table_args__``, particularly when used with mixins:
+
+::
+
+    class HasNameMixin(object):
+        name = Column('name', String(50), nullable=False)
+
+        @declared_attr
+        def __table_args__(cls):
+            return (Index('ix_name', 'name'), {})
+
+    class User(HasNameMixin, Base):
+        __tablename__ = 'user'
+        id = Column('id', Integer, primary_key=True)
+
+`Indexes <http://www.sqlalchemy.org/docs/07/core/schema.html
+#indexes>`_
+
+Window Function SQL Construct
+-----------------------------
+
+A "window function" provides a statement with information
+about the result set as it's produced. This allows criteria
+against things like "row number", "rank", and so
+forth. Window functions are known to be supported at least
+by Postgresql, SQL Server and Oracle, possibly others.
+
+The best introduction to window functions is on Postgresql's
+site, where window functions have been supported since
+version 8.4:
+
+http://www.postgresql.org/docs/9.0/static/tutorial-window.html
+
+SQLAlchemy provides a simple construct typically invoked via
+an existing function clause, using the ``over()`` method,
+which accepts ``order_by`` and ``partition_by`` keyword
+arguments. Below we replicate the first example in PG's
+tutorial:
+
+::
+
+ from sqlalchemy.sql import table, column, select, func
+
+ empsalary = table('empsalary',
+ column('depname'),
+ column('empno'),
+ column('salary'))
+
+ s = select([
+ empsalary,
+ func.avg(empsalary.c.salary).
+ over(partition_by=empsalary.c.depname).
+ label('avg')
+ ])
+
+ print s
+
+SQL:
+
+::
+
+ SELECT empsalary.depname, empsalary.empno, empsalary.salary,
+ avg(empsalary.salary) OVER (PARTITION BY empsalary.depname) AS avg
+ FROM empsalary
+
+`sqlalchemy.sql.expression.over <http://www.sqlalchemy.org/d
+ocs/07/core/expression_api.html#sqlalchemy.sql.expression.ov
+er>`_
+
+[ticket:1844]
+
+execution_options() on Connection accepts "isolation_level" argument
+--------------------------------------------------------------------
+
+This sets the transaction isolation level for a single
+``Connection``, until that ``Connection`` is closed and its
+underlying DBAPI resource returned to the connection pool,
+upon which the isolation level is reset back to the default.
+The default isolation level is set using the
+``isolation_level`` argument to ``create_engine()``.
+
+Transaction isolation is currently only supported by the
+Postgresql and SQLite backends.
+
+`execution_options() <http://www.sqlalchemy.org/docs/07/core
+/connections.html#sqlalchemy.engine.base.Connection.executio
+n_options>`_
+
+[ticket:2001]
+
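+What per-connection isolation boils down to on SQLite can be shown
+with the stdlib ``sqlite3`` driver directly. Note this is an
+illustration of the underlying pragma, not the dialect's actual code
+(the exact statements SQLAlchemy emits are an assumption here):
+
```python
# "READ UNCOMMITTED" on SQLite corresponds to a pragma set on the
# underlying DBAPI connection; shown with stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA read_uncommitted = 1")
# Read the setting back from the same connection.
level = conn.execute("PRAGMA read_uncommitted").fetchone()[0]
conn.close()
```
+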
+``TypeDecorator`` works with integer primary key columns
+--------------------------------------------------------
+
+A ``TypeDecorator`` which extends the behavior of
+``Integer`` can be used with a primary key column. The
+"autoincrement" feature of ``Column`` will now recognize
+that the underlying database column is still an integer so
+that lastrowid mechanisms continue to function. The
+``TypeDecorator`` itself will have its result value
+processor applied to newly generated primary keys, including
+those received by the DBAPI ``cursor.lastrowid`` accessor.
+
+[ticket:2005] [ticket:2006]
+
+``TypeDecorator`` is present in the "sqlalchemy" import space
+-------------------------------------------------------------
+
+There is no longer a need to import this from
+``sqlalchemy.types``; it's now mirrored in ``sqlalchemy``.
+
+New Dialects
+------------
+
+Dialects have been added:
+
+* a MySQLdb driver for the Drizzle database:
+
+
+ `Drizzle <http://www.sqlalchemy.org/docs/07/dialects/drizz
+ le.html>`_
+
+* support for the pymysql DBAPI:
+
+
+  `pymysql Notes
+  <http://www.sqlalchemy.org/docs/07/dialects/mysql.html
+  #module-sqlalchemy.dialects.mysql.pymysql>`_
+
+* psycopg2 now works with Python 3
+
+
+Behavioral Changes (Backwards Compatible)
+=========================================
+
+C Extensions Build by Default
+-----------------------------
+
+This is as of 0.7b4. The extensions will build if CPython 2.xx
+is detected. If the build fails, such as on a Windows
+install, that condition is caught and the non-C install
+proceeds. The C extensions won't build if Python 3 or PyPy is
+used.
+
+Query.count() simplified, should work virtually always
+------------------------------------------------------
+
+The very old guesswork which occurred within
+``Query.count()`` has been modernized to use
+``.from_self()``. That is, ``query.count()`` is now
+equivalent to:
+
+::
+
+ query.from_self(func.count(literal_column('1'))).scalar()
+
+Previously, internal logic attempted to rewrite the columns
+clause of the query itself, and upon detection of a
+"subquery" condition, such as a column-based query that
+might have aggregates in it, or a query with DISTINCT, would
+go through a convoluted process of rewriting the columns
+clause. This logic failed in complex conditions,
+particularly those involving joined table inheritance, and
+was long obsolete by the more comprehensive ``.from_self()``
+call.
+
+The SQL emitted by ``query.count()`` is now always of the
+form:
+
+::
+
+    SELECT count(1) AS count_1 FROM (
+        SELECT user.id AS user_id, user.name AS user_name FROM user
+    ) AS anon_1
+
+that is, the original query is preserved entirely inside of
+a subquery, with no more guessing as to how count should be
+applied.
+
+[ticket:2093]
+
+To emit a non-subquery form of count()
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+MySQL users have already reported that the MyISAM engine,
+not surprisingly, falls over completely with this simple
+change. Note that for a simple ``count()`` that optimizes
+for DBs that can't handle simple subqueries, ``func.count()``
+should be used:
+
+::
+
+ from sqlalchemy import func
+ session.query(func.count(MyClass.id)).scalar()
+
+or for ``count(*)``:
+
+::
+
+ from sqlalchemy import func, literal_column
+ session.query(func.count(literal_column('*'))).select_from(MyClass).scalar()
+
+LIMIT/OFFSET clauses now use bind parameters
+--------------------------------------------
+
+The LIMIT and OFFSET clauses, or their backend equivalents
+(i.e. TOP, ROW NUMBER OVER, etc.), use bind parameters for
+the actual values, for all backends which support it (most
+except for Sybase). This allows better query optimizer
+performance as the textual string for multiple statements
+with differing LIMIT/OFFSET are now identical.
+
+[ticket:805]
+
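+The effect is easy to see with the stdlib ``sqlite3`` driver (table
+and column names below are hypothetical): the statement text is
+identical for every page, with only the bound parameters changing,
+which is what lets a driver or optimizer reuse the prepared statement:
+
```python
# LIMIT/OFFSET as bind parameters: one statement string, many pages.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t (x) VALUES (?)", [(i,) for i in range(10)])

stmt = "SELECT x FROM t ORDER BY x LIMIT ? OFFSET ?"   # identical text each call
first_page = [row[0] for row in conn.execute(stmt, (3, 0))]
second_page = [row[0] for row in conn.execute(stmt, (3, 3))]
conn.close()
```
+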
+Logging enhancements
+--------------------
+
+Vinay Sajip has provided a patch to our logging system such
+that the "hex string" embedded in logging statements for
+engines and pools is no longer needed to allow the ``echo``
+flag to work correctly. A new system that uses filtered
+logging objects allows us to maintain our current behavior
+of ``echo`` being local to individual engines without the
+need for additional identifying strings local to those
+engines.
+
+[ticket:1926]
+
+Simplified polymorphic_on assignment
+------------------------------------
+
+The population of the ``polymorphic_on`` column-mapped
+attribute, when used in an inheritance scenario, now occurs
+when the object is constructed, i.e. its ``__init__`` method
+is called, using the init event. The attribute then behaves
+the same as any other column-mapped attribute. Previously,
+special logic would fire off during flush to populate this
+column, which prevented any user code from modifying its
+behavior. The new approach improves upon this in three
+ways: 1. the polymorphic identity is now present on the
+object as soon as it's constructed; 2. the polymorphic
+identity can be changed by user code without any difference
+in behavior from any other column-mapped attribute; 3. the
+internals of the mapper during flush are simplified and no
+longer need to make special checks for this column.
+
+[ticket:1895]
+
+contains_eager() chains across multiple paths (i.e. "all()")
+------------------------------------------------------------
+
+The ``contains_eager()`` modifier will now chain itself
+for a longer path without the need to emit individual
+``contains_eager()`` calls. Instead of:
+
+::
+
+ session.query(A).options(contains_eager(A.b), contains_eager(A.b, B.c))
+
+you can say:
+
+::
+
+ session.query(A).options(contains_eager(A.b, B.c))
+
+[ticket:2032]
+
+Flushing of orphans that have no parent is allowed
+--------------------------------------------------
+
+We've had a long-standing behavior that checks for a
+so-called "orphan" during flush, that is, an object which is
+associated with a ``relationship()`` that specifies
+"delete-orphan" cascade, has been newly added to the session
+for an INSERT, and no parent relationship has been established.
+This check was added years ago to accommodate some test
+cases which tested the orphan behavior for consistency. In
+modern SQLA, this check is no longer needed on the Python
+side. The equivalent behavior of the "orphan check" is
+accomplished by making the foreign key reference to the
+object's parent row NOT NULL, where the database does its
+job of establishing data consistency in the same way SQLA
+allows most other operations to do. If the object's parent
+foreign key is nullable, then the row can be inserted. The
+"orphan" behavior runs when the object was persisted with a
+particular parent, and is then disassociated with that
+parent, leading to a DELETE statement emitted for it.
+
+[ticket:1912]
+
+Warnings generated when collection members, scalar referents not part of the flush
+----------------------------------------------------------------------------------
+
+Warnings are now emitted when related objects referenced via
+a loaded ``relationship()`` on a parent object marked as
+"dirty" are not present in the current ``Session``.
+
+The ``save-update`` cascade takes effect when objects are
+added to the ``Session``, or when objects are first
+associated with a parent, so that an object and everything
+related to it are usually all present in the same
+``Session``. However, if ``save-update`` cascade is
+disabled for a particular ``relationship()``, then this
+behavior does not occur, and the flush process does not try
+to correct for it, instead staying consistent to the
+configured cascade behavior. Previously, when such objects
+were detected during the flush, they were silently skipped.
+The new behavior is that a warning is emitted, for the
+purposes of alerting to a situation that more often than not
+is the source of unexpected behavior.
+
+[ticket:1973]
+
+Setup no longer installs a Nose plugin
+--------------------------------------
+
+Since we moved to nose we've used a plugin that installs via
+setuptools, so that the ``nosetests`` script would
+automatically run SQLA's plugin code, necessary for our
+tests to have a full environment. In the middle of 0.6, we
+realized that the import pattern here meant that Nose's
+"coverage" plugin would break, since "coverage" requires
+that it be started before any modules to be covered are
+imported; so in the middle of 0.6 we made the situation
+worse by adding a separate ``sqlalchemy-nose`` package to
+the build to overcome this.
+
+In 0.7 we've done away with trying to get ``nosetests`` to
+work automatically, since the SQLAlchemy module would
+produce a large number of nose configuration options for all
+usages of ``nosetests``, not just the SQLAlchemy unit tests
+themselves, and the additional ``sqlalchemy-nose`` install
+was an even worse idea, producing an extra package in Python
+environments. The ``sqla_nose.py`` script in 0.7 is now
+the only way to run the tests with nose.
+
+[ticket:1949]
+
+Non-``Table``-derived constructs can be mapped
+----------------------------------------------
+
+A construct that isn't against any ``Table`` at all, like a
+function, can be mapped.
+
+::
+
+ from sqlalchemy import select, func
+ from sqlalchemy.orm import mapper
+
+ class Subset(object):
+ pass
+ selectable = select(["x", "y", "z"]).select_from(func.some_db_function()).alias()
+ mapper(Subset, selectable, primary_key=[selectable.c.x])
+
+[ticket:1876]
+
+aliased() accepts ``FromClause`` elements
+-----------------------------------------
+
+This is a convenience helper such that when a plain
+``FromClause``, such as a ``select``, ``Table`` or ``join``,
+is passed to the ``orm.aliased()`` construct, it passes
+through to the ``.alias()`` method of that from construct
+rather than constructing an ORM-level ``AliasedClass``.
+
+[ticket:2018]
+
+Session.connection(), Session.execute() accept 'bind'
+-----------------------------------------------------
+
+This is to allow execute/connection operations to
+participate in the open transaction of an engine explicitly.
+It also allows custom subclasses of ``Session`` that
+implement their own ``get_bind()`` method and arguments to
+use those custom arguments with both the ``execute()`` and
+``connection()`` methods equally.
+
+`Session.connection <http://www.sqlalchemy.org/docs/07/orm/s
+ession.html#sqlalchemy.orm.session.Session.connection>`_
+`Session.execute <http://www.sqlalchemy.org/docs/07/orm/sess
+ion.html#sqlalchemy.orm.session.Session.execute>`_
+
+[ticket:1996]
+
+Standalone bind parameters in columns clause auto-labeled
+----------------------------------------------------------
+
+Bind parameters present in the "columns clause" of a select
+are now auto-labeled like other "anonymous" clauses, which
+among other things allows their "type" to be meaningful when
+the row is fetched, as in result row processors.
+
+SQLite - relative file paths are normalized through os.path.abspath()
+---------------------------------------------------------------------
+
+This is so that a script which changes the current directory
+will continue to target the same location as subsequent
+SQLite connections are established.
+
+[ticket:2036]
+
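+The normalization itself is simply ``os.path.abspath()`` applied to
+the database path at connect time (the filename below is
+hypothetical):
+
```python
# A relative SQLite path resolved once via os.path.abspath() keeps
# pointing at the same file even if os.chdir() is called later.
import os

rel = "app.db"                   # hypothetical relative database path
absolute = os.path.abspath(rel)  # what the dialect now stores
```
+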
+MS-SQL - ``String``/``Unicode``/``VARCHAR``/``NVARCHAR``/``VARBINARY`` emit "max" for no length
+-----------------------------------------------------------------------------------------------
+
+On the MS-SQL backend, the String/Unicode types, and their
+counterparts VARCHAR/ NVARCHAR, as well as VARBINARY
+(:ticket:`1833`) emit "max" as the length when no length is
+specified. This makes it more compatible with Postgresql's
+VARCHAR type which is similarly unbounded when no length
+specified. SQL Server defaults the length on these types
+to '1' when no length is specified.
+
+Behavioral Changes (Backwards Incompatible)
+===========================================
+
+Note again, aside from the default mutability change, most
+of these changes are *extremely minor* and will not affect
+most users.
+
+``PickleType`` and ARRAY mutability turned off by default
+---------------------------------------------------------
+
+This change refers to the default behavior of the ORM when
+mapping columns that have either the ``PickleType`` or
+``postgresql.ARRAY`` datatypes. The ``mutable`` flag is now
+set to ``False`` by default. If an existing application uses
+these types and depends upon detection of in-place
+mutations, the type object must be constructed with
+``mutable=True`` to restore the 0.6 behavior:
+
+::
+
+ Table('mytable', metadata,
+ # ....
+
+ Column('pickled_data', PickleType(mutable=True))
+ )
+
+The ``mutable=True`` flag is being phased out, in favor of
+the new `Mutation Tracking <http://www.sqlalchemy.org/docs/0
+7/orm/extensions/mutable.html>`_ extension. This extension
+provides a mechanism by which user-defined datatypes can
+provide change events back to the owning parent or parents.
+
+The previous approach of using ``mutable=True`` does not
+provide for change events - instead, the ORM must scan
+through all mutable values present in a session and compare
+them against their original value for changes every time
+``flush()`` is called, which is a very time-consuming operation.
+This is a holdover from the very early days of SQLAlchemy
+when ``flush()`` was not automatic and the history tracking
+system was not nearly as sophisticated as it is now.
+
+Existing applications which use ``PickleType``,
+``postgresql.ARRAY`` or other ``MutableType`` subclasses,
+and require in-place mutation detection, should migrate to
+the new mutation tracking system, as ``mutable=True`` is
+likely to be deprecated in the future.
+
+:ticket:`1980`
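+
+As a plain-Python sketch of the change-event idea (an illustration of
+the mechanism only, not the actual ``sqlalchemy.ext.mutable`` API), a
+mutable value can report each in-place mutation to its listeners the
+moment it happens, so that no flush-time scan-and-compare is needed:

```python
class TrackedList(list):
    """A list that reports in-place mutations to registered listeners,
    rather than relying on a flush-time scan of all values."""

    def __init__(self, iterable=()):
        super(TrackedList, self).__init__(iterable)
        self._listeners = []

    def on_change(self, fn):
        self._listeners.append(fn)

    def changed(self):
        # fire a change event to every interested parent
        for fn in self._listeners:
            fn(self)

    def append(self, item):
        super(TrackedList, self).append(item)
        self.changed()

    def __setitem__(self, index, value):
        super(TrackedList, self).__setitem__(index, value)
        self.changed()


events = []
data = TrackedList([1, 2])
data.on_change(lambda obj: events.append(list(obj)))
data.append(3)    # the mutation is reported immediately
data[0] = 99
print(events)     # [[1, 2, 3], [99, 2, 3]]
```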
+
+Mutability detection of ``composite()`` requires the Mutation Tracking Extension
+--------------------------------------------------------------------------------
+
+So-called "composite" mapped attributes, those configured
+using the technique described at `Composite Column Types
+<http://www.sqlalchemy.org/docs/07/orm/mapper_config.html#composite-column-types>`_,
+have been re-implemented such
+that the ORM internals are no longer aware of them (leading
+to shorter and more efficient codepaths in critical
+sections). While composite types are generally intended to
+be treated as immutable value objects, this was never
+enforced. For applications that use composites with
+mutability, the `Mutation Tracking
+<http://www.sqlalchemy.org/docs/07/orm/extensions/mutable.html>`_ extension offers a
+base class which establishes a mechanism for user-defined
+composite types to send change event messages back to the
+owning parent or parents of each object.
+
+Applications which use composite types and rely upon in-
+place mutation detection of these objects should either
+migrate to the "mutation tracking" extension, or change the
+usage of the composite types such that in-place changes are
+no longer needed (i.e., treat them as immutable value
+objects).
+
+SQLite - the SQLite dialect now uses ``NullPool`` for file-based databases
+--------------------------------------------------------------------------
+
+This change is **99.999% backwards compatible**, unless you
+are using temporary tables across connection pool
+connections.
+
+A file-based SQLite connection is blazingly fast, and using
+``NullPool`` means that each call to ``Engine.connect``
+creates a new pysqlite connection.
+
+Previously, the ``SingletonThreadPool`` was used, which
+meant that all connections to a certain engine in a thread
+would be the same connection. It's intended that the new
+approach is more intuitive, particularly when multiple
+connections are used.
+
+``SingletonThreadPool`` is still the default engine when a
+``:memory:`` database is used.
+
+Note that this change **breaks temporary tables used across
+Session commits**, due to the way SQLite handles temp
+tables. See the note at
+http://www.sqlalchemy.org/docs/dialects/sqlite.html#using-temporary-tables-with-sqlite
+if temporary tables beyond the
+scope of one pool connection are desired.
+
+:ticket:`1921`
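+
+The underlying SQLite behavior can be seen with the stdlib ``sqlite3``
+module directly (a sketch of the principle, not SQLAlchemy pool code):
+a TEMP table is scoped to a single connection, so a fresh connection,
+such as one handed out by ``NullPool``, cannot see it.

```python
import os
import sqlite3
import tempfile

# a file-based database, as with the new NullPool default
path = os.path.join(tempfile.mkdtemp(), "demo.db")

# first connection: create and use a TEMP table
c1 = sqlite3.connect(path)
c1.execute("CREATE TEMP TABLE scratch (x INTEGER)")
c1.execute("INSERT INTO scratch VALUES (1)")
print(c1.execute("SELECT x FROM scratch").fetchall())   # [(1,)]

# second connection: stands in for NullPool handing out a brand
# new pysqlite connection - the TEMP table from the first
# connection does not exist here
c2 = sqlite3.connect(path)
try:
    c2.execute("SELECT x FROM scratch")
    visible = True
except sqlite3.OperationalError:
    visible = False
print(visible)   # False
```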
+
+``Session.merge()`` checks version ids for versioned mappers
+------------------------------------------------------------
+
+``Session.merge()`` will check the version id of the incoming
+state against that of the database, assuming the mapping
+uses version ids and the incoming state has a version id
+assigned, and raise ``StaleDataError`` if they don't match.
+This is the correct behavior, in that if the incoming state
+contains a stale version id, it should be assumed the state
+is stale.
+
+If merging data into a versioned state, the version id
+attribute can be left undefined, and no version check will
+take place.
+
+This check was confirmed by examining what Hibernate does -
+both the ``merge()`` and the versioning features were
+originally adapted from Hibernate.
+
+:ticket:`2027`
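+
+The decision logic can be sketched in plain Python (a hypothetical
+stand-in for illustration, not SQLAlchemy's implementation; the real
+exception lives in ``sqlalchemy.orm.exc``):

```python
class StaleDataError(Exception):
    """Stands in for sqlalchemy.orm.exc.StaleDataError."""


def merge_version_check(incoming_version, db_version):
    # an incoming version id, when present, must match the
    # database's current version id
    if incoming_version is None:
        return "skipped"      # version id left undefined: no check
    if incoming_version != db_version:
        raise StaleDataError(
            "version id %r does not match database version %r"
            % (incoming_version, db_version))
    return "ok"


print(merge_version_check(None, 5))   # "skipped"
print(merge_version_check(5, 5))      # "ok"
try:
    merge_version_check(4, 5)
except StaleDataError as err:
    print("stale: %s" % err)
```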
+
+Tuple label names in Query Improved
+-----------------------------------
+
+This improvement is potentially slightly backwards
+incompatible for an application that relied upon the old
+behavior.
+
+Given two mapped classes ``Foo`` and ``Bar`` each with a
+column ``spam``:
+
+::
+
+
+ qa = session.query(Foo.spam)
+ qb = session.query(Bar.spam)
+
+ qu = qa.union(qb)
+
+The name given to the single column yielded by ``qu`` will
+be ``spam``. Previously it would be something like
+``foo_spam`` due to the way the ``union`` would combine
+things, which is inconsistent with the name ``spam`` in the
+case of a non-unioned query.
+
+:ticket:`1942`
+
+Mapped column attributes reference the most specific column first
+-----------------------------------------------------------------
+
+This is a change to the behavior involved when a mapped
+column attribute references multiple columns, specifically
+when dealing with an attribute on a joined-table subclass
+that has the same name as that of an attribute on the
+superclass.
+
+Using declarative, the scenario is this:
+
+::
+
+ class Parent(Base):
+ __tablename__ = 'parent'
+ id = Column(Integer, primary_key=True)
+
+ class Child(Parent):
+ __tablename__ = 'child'
+ id = Column(Integer, ForeignKey('parent.id'), primary_key=True)
+
+Above, the attribute ``Child.id`` refers to both the
+``child.id`` column as well as ``parent.id`` - this due to
+the name of the attribute. If it were named differently on
+the class, such as ``Child.child_id``, it then maps
+distinctly to ``child.id``, with ``Child.id`` being the same
+attribute as ``Parent.id``.
+
+When the ``id`` attribute is made to reference both
+``parent.id`` and ``child.id``, it stores them in an ordered
+list. An expression such as ``Child.id`` then refers to
+just *one* of those columns when rendered. Up until 0.6,
+this column would be ``parent.id``. In 0.7, it is the less
+surprising ``child.id``.
+
+The legacy of this behavior deals with behaviors and
+restrictions of the ORM that don't really apply anymore; all
+that was needed was to reverse the order.
+
+A primary advantage of this approach is that it's now easier
+to construct ``primaryjoin`` expressions that refer to the
+local column:
+
+::
+
+ class Child(Parent):
+ __tablename__ = 'child'
+ id = Column(Integer, ForeignKey('parent.id'), primary_key=True)
+ some_related = relationship("SomeRelated",
+ primaryjoin="Child.id==SomeRelated.child_id")
+
+ class SomeRelated(Base):
+ __tablename__ = 'some_related'
+ id = Column(Integer, primary_key=True)
+ child_id = Column(Integer, ForeignKey('child.id'))
+
+Prior to 0.7 the ``Child.id`` expression would reference
+``Parent.id``, and it would be necessary to map ``child.id``
+to a distinct attribute.
+
+It also means that a query like this one changes its
+behavior:
+
+::
+
+ session.query(Parent).filter(Child.id > 7)
+
+In 0.6, this would render:
+
+::
+
+ SELECT parent.id AS parent_id
+ FROM parent
+ WHERE parent.id > :id_1
+
+In 0.7, you get:
+
+::
+
+ SELECT parent.id AS parent_id
+ FROM parent, child
+ WHERE child.id > :id_1
+
+which you'll note is a Cartesian product - this behavior is
+now equivalent to that of any other attribute that is local
+to ``Child``. The ``with_polymorphic()`` method, or a
+similar strategy of explicitly joining the underlying
+``Table`` objects, is used to render a query against all
+``Parent`` objects with criteria against ``Child``, in the
+same manner as that of 0.5 and 0.6:
+
+::
+
+ print s.query(Parent).with_polymorphic([Child]).filter(Child.id > 7)
+
+Which on both 0.6 and 0.7 renders:
+
+::
+
+ SELECT parent.id AS parent_id, child.id AS child_id
+ FROM parent LEFT OUTER JOIN child ON parent.id = child.id
+ WHERE child.id > :id_1
+
+Another effect of this change is that a joined-inheritance
+load across two tables will populate from the child table's
+value, not that of the parent table. An unusual case is a
+query against ``Parent`` using ``with_polymorphic="*"``, which
+issues a query against "parent" with a LEFT OUTER JOIN to
+"child". A row comes back whose polymorphic identity
+corresponds to ``Child``, but suppose the actual row in
+"child" has been *deleted*. Due to this corruption, the row
+arrives with all the columns corresponding to "child" set to
+NULL - this is now the value that gets populated, not the one
+in the parent table.
+
+:ticket:`1892`
+
+Mapping to joins with two or more same-named columns requires explicit declaration
+----------------------------------------------------------------------------------
+
+This is somewhat related to the previous change in
+:ticket:`1892`. When mapping to a join, same-named columns
+must be explicitly linked to mapped attributes, i.e. as
+described in `Mapping a Class Against Multiple Tables
+<http://www.sqlalchemy.org/docs/07/orm/mapper_config.html#mapping-a-class-against-multiple-tables>`_.
+
+Given two tables ``foo`` and ``bar``, each with a primary
+key column ``id``, the following now produces an error:
+
+::
+
+
+ foobar = foo.join(bar, foo.c.id==bar.c.foo_id)
+ mapper(FooBar, foobar)
+
+This is because ``mapper()`` refuses to guess which column
+is the primary representation of ``FooBar.id`` - is it
+``foo.c.id`` or is it ``bar.c.id``? The attribute must be
+explicit:
+
+::
+
+
+ foobar = foo.join(bar, foo.c.id==bar.c.foo_id)
+ mapper(FooBar, foobar, properties={
+ 'id':[foo.c.id, bar.c.id]
+ })
+
+:ticket:`1896`
+
+Mapper requires that polymorphic_on column be present in the mapped selectable
+------------------------------------------------------------------------------
+
+This was a warning in 0.6 and is now an error in 0.7. The
+column given for ``polymorphic_on`` must be in the mapped
+selectable. This is to prevent some occasional user errors
+such as:
+
+::
+
+ mapper(SomeClass, sometable, polymorphic_on=some_lookup_table.c.id)
+
+where above, the ``polymorphic_on`` column needs to be on a
+``sometable`` column, in this case perhaps
+``sometable.c.some_lookup_id``. There are also some
+"polymorphic union" scenarios where similar mistakes
+sometimes occur.
+
+Such a configuration error has always been "wrong", and the
+above mapping doesn't work as specified - the column would
+be ignored. It is however potentially backwards
+incompatible in the rare case that an application has been
+unknowingly relying upon this behavior.
+
+:ticket:`1875`
+
+``DDL()`` constructs now escape percent signs
+---------------------------------------------
+
+Previously, percent signs in ``DDL()`` strings had to be
+escaped manually (i.e. doubled to ``%%``) for those DBAPIs
+that accept ``pyformat`` or ``format`` binds (e.g. psycopg2,
+mysql-python), which was inconsistent with ``text()``
+constructs, which did this automatically. The same escaping
+now occurs for ``DDL()`` as for ``text()``.
+
+:ticket:`1897`
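+
+The reason the escaping is needed at all can be shown with plain
+string formatting (a sketch of the principle, not SQLAlchemy's
+implementation): a pyformat DBAPI applies ``%``-interpolation to the
+statement, so a literal percent must be doubled on the way in.

```python
# a literal "%" in a statement sent to a pyformat/format DBAPI must
# be doubled, since the DBAPI treats "%" as a format directive;
# text() always did this, and DDL() now does too
raw = "CREATE TABLE t (pct VARCHAR(10) DEFAULT '100%')"
escaped = raw.replace("%", "%%")

# simulate the DBAPI applying pyformat parameters to the string:
# the doubled "%%" collapses back to a single literal "%"
sent_to_database = escaped % {}
print(sent_to_database == raw)   # True
```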
+
+``Table.c`` / ``MetaData.tables`` refined a bit, don't allow direct mutation
+----------------------------------------------------------------------------
+
+This is another area where some users were tinkering around
+in a way that doesn't actually work as expected, yet left an
+exceedingly small chance that some application was relying
+upon the behavior: the constructs returned by the
+``.c`` attribute on ``Table`` and the ``.tables`` attribute
+on ``MetaData`` are now explicitly non-mutable. The "mutable"
+version of the construct is now private. Adding columns to
+``.c`` involves using the ``append_column()`` method of
+``Table``, which ensures things are associated with the
+parent ``Table`` in the appropriate way; similarly,
+``MetaData.tables`` has a contract with the ``Table``
+objects stored in this dictionary, as well as a little bit
+of new bookkeeping in that a ``set()`` of all schema names
+is tracked, which is satisfied only by using the public
+``Table`` constructor as well as ``Table.tometadata()``.
+
+It is of course possible that the ``ColumnCollection`` and
+``dict`` collections consulted by these attributes could
+someday implement events on all of their mutational methods
+such that the appropriate bookkeeping occurred upon direct
+mutation of the collections, but until someone has the
+motivation to implement all that along with dozens of new
+unit tests, narrowing the paths to mutation of these
+collections will ensure no application is attempting to rely
+upon usages that are currently not supported.
+
+:ticket:`1893` :ticket:`1917`
+
+server_default consistently returns None for all inserted_primary_key values
+----------------------------------------------------------------------------
+
+Established consistency when ``server_default`` is present on
+an Integer PK column. SQLAlchemy doesn't pre-fetch these, nor
+do they come back in ``cursor.lastrowid`` (DBAPI). Ensured
+all backends consistently return None in
+``result.inserted_primary_key`` for these - some backends may
+have returned a value previously. Using a ``server_default``
+on a primary key column is extremely unusual. If a special
+function or SQL expression is used to generate primary key
+defaults, this should be established as a Python-side
+"default" instead of ``server_default``.
+
+Regarding reflection for this case, reflection of an int PK
+col with a server_default sets the "autoincrement" flag to
+False, except in the case of a PG SERIAL col where we
+detected a sequence default.
+
+:ticket:`2020` :ticket:`2021`
+
+The ``sqlalchemy.exceptions`` alias in sys.modules is removed
+-------------------------------------------------------------
+
+For a few years we've added the string
+``sqlalchemy.exceptions`` to ``sys.modules``, so that a
+statement like "``import sqlalchemy.exceptions``" would
+work. The name of the core exceptions module has been
+``exc`` for a long time now, so the recommended import for
+this module is:
+
+::
+
+ from sqlalchemy import exc
+
+The ``exceptions`` name is still present in "``sqlalchemy``"
+for applications which might have said ``from sqlalchemy
+import exceptions``, but they should also start using the
+``exc`` name.
+
+Query Timing Recipe Changes
+---------------------------
+
+While not part of SQLAlchemy itself, it's worth mentioning
+that the rework of the ``ConnectionProxy`` into the new
+event system means it is no longer appropriate for the
+"Timing all Queries" recipe. Please adjust query-timers to
+use the ``before_cursor_execute()`` and
+``after_cursor_execute()`` events, demonstrated in the
+updated recipe UsageRecipes/Profiling.
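+
+The shape of such a timer can be sketched with a toy event dispatcher
+(hypothetical stand-in for illustration; the real hooks are registered
+with SQLAlchemy's ``event.listen()``): a "before" listener stashes a
+start time on the execution context, and an "after" listener records
+the elapsed time.

```python
import time

# toy event dispatch modeling before_cursor_execute /
# after_cursor_execute
listeners = {"before_cursor_execute": [], "after_cursor_execute": []}

def listen(event_name, fn):
    listeners[event_name].append(fn)

def execute(statement):
    context = {}
    for fn in listeners["before_cursor_execute"]:
        fn(context, statement)
    result = statement.upper()        # stand-in for the DBAPI call
    for fn in listeners["after_cursor_execute"]:
        fn(context, statement)
    return result

timings = []

def start_timer(context, statement):
    context["start"] = time.time()

def stop_timer(context, statement):
    timings.append((statement, time.time() - context["start"]))

listen("before_cursor_execute", start_timer)
listen("after_cursor_execute", stop_timer)

execute("select 1")
execute("select 2")
print(len(timings))   # 2
```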
+
+Deprecated API
+==============
+
+Default constructor on types will not accept arguments
+------------------------------------------------------
+
+Simple types like ``Integer``, ``Date`` etc. in the core
+types module don't accept arguments. The default
+constructor that accepts/ignores a catchall ``*args,
+**kwargs`` is restored as of 0.7b4/0.7.0, but emits a
+deprecation warning.
+
+If arguments are being used with a core type like
+``Integer``, it may be that you intended to use a dialect
+specific type, such as ``sqlalchemy.dialects.mysql.INTEGER``
+which does accept a "display_width" argument for example.
+
+compile_mappers() renamed configure_mappers(), simplified configuration internals
+---------------------------------------------------------------------------------
+
+This system slowly morphed from something small, implemented
+local to an individual mapper, into something that's more of
+a global "registry"-level function, and it was poorly named
+in both cases, so we've fixed both by moving the
+implementation out of ``Mapper`` altogether and renaming it
+to ``configure_mappers()``. It is of course normally not
+needed for an application to call ``configure_mappers()`` as
+this process occurs on an as-needed basis, as soon as the
+mappings are needed via attribute or query access.
+
+:ticket:`1966`
+
+Core listener/proxy superseded by event listeners
+-------------------------------------------------
+
+``PoolListener``, ``ConnectionProxy``,
+``DDLElement.execute_at`` are superseded by
+``event.listen()``, using the ``PoolEvents``,
+``EngineEvents``, ``DDLEvents`` dispatch targets,
+respectively.
+
+ORM extensions superseded by event listeners
+--------------------------------------------
+
+``MapperExtension``, ``AttributeExtension``,
+``SessionExtension`` are superseded by ``event.listen()``,
+using the ``MapperEvents``/``InstanceEvents``,
+``AttributeEvents``, ``SessionEvents``, dispatch targets,
+respectively.
+
+Sending a string to 'distinct' in select() for MySQL should be done via prefixes
+--------------------------------------------------------------------------------
+
+This obscure feature allowed this pattern with the MySQL
+backend:
+
+::
+
+ select([mytable], distinct='ALL', prefixes=['HIGH_PRIORITY'])
+
+The ``prefixes`` keyword or ``prefix_with()`` method should
+be used for non-standard or unusual prefixes:
+
+::
+
+ select([mytable]).prefix_with('HIGH_PRIORITY', 'ALL')
+
+``useexisting`` superseded by ``extend_existing`` and ``keep_existing``
+-----------------------------------------------------------------------
+
+The ``useexisting`` flag on Table has been superseded by a
+new pair of flags ``keep_existing`` and ``extend_existing``.
+``extend_existing`` is equivalent to ``useexisting`` - the
+existing Table is returned, and additional constructor
+elements are added. With ``keep_existing``, the existing
+Table is returned, but additional constructor elements are
+not added - these elements are only applied when the Table
+is newly created.
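+
+The difference between the two flags can be sketched with a toy
+registry of "tables" modeled as plain lists of column names (a
+hypothetical illustration, not SQLAlchemy's ``Table`` implementation,
+which among other things raises an error when neither flag is given):

```python
registry = {}

def table(name, *columns, **flags):
    if name in registry:
        existing = registry[name]
        if flags.get("extend_existing"):
            # extend_existing: additional constructor elements
            # are applied to the existing table
            existing.extend(c for c in columns if c not in existing)
        # keep_existing: the existing table is returned and the
        # new constructor elements are ignored
        return existing
    registry[name] = list(columns)
    return registry[name]

t1 = table("mytable", "id", "name")
t2 = table("mytable", "id", "email", extend_existing=True)
print(t2)   # ['id', 'name', 'email']
t3 = table("mytable", "id", "ignored", keep_existing=True)
print(t3)   # ['id', 'name', 'email']
```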
+
+Backwards Incompatible API Changes
+==================================
+
+Callables passed to ``bindparam()`` don't get evaluated - affects the Beaker example
+------------------------------------------------------------------------------------
+
+Note this affects the Beaker caching example, where the
+workings of the ``_params_from_query()`` function needed a
+slight adjustment. If you're using code from the Beaker
+example, this change should be applied.
+
+:ticket:`1950`
+
+types.type_map is now private, types._type_map
+----------------------------------------------
+
+We noticed some users tapping into this dictionary inside of
+``sqlalchemy.types`` as a shortcut to associating Python
+types with SQL types. We can't guarantee the contents or
+format of this dictionary, and additionally the business of
+associating Python types in a one-to-one fashion has some
+grey areas that are best decided by individual
+applications, so we've underscored this attribute.
+
+:ticket:`1870`
+
+Renamed the ``alias`` keyword arg of standalone ``alias()`` function to ``name``
+--------------------------------------------------------------------------------
+
+This so that the keyword argument ``name`` matches that of
+the ``alias()`` methods on all ``FromClause`` objects as
+well as the ``name`` argument on ``Query.subquery()``.
+
+Only code that uses the standalone ``alias()`` function, and
+not the method bound functions, and passes the alias name
+using the explicit keyword name ``alias``, and not
+positionally, would need modification here.
+
+Non-public ``Pool`` methods underscored
+---------------------------------------
+
+All methods of ``Pool`` and subclasses which are not
+intended for public use have been renamed with underscores.
+That they were not named this way previously was a bug.
+
+Pooling methods now underscored or removed:
+
+* ``Pool.create_connection()`` -> ``Pool._create_connection()``
+
+* ``Pool.do_get()`` -> ``Pool._do_get()``
+
+* ``Pool.do_return_conn()`` -> ``Pool._do_return_conn()``
+
+* ``Pool.do_return_invalid()`` -> removed, was not used
+
+* ``Pool.return_conn()`` -> ``Pool._return_conn()``
+
+* ``Pool.get()`` -> ``Pool._get()``; public API is
+  ``Pool.connect()``
+
+* ``SingletonThreadPool.cleanup()`` -> ``_cleanup()``
+
+* ``SingletonThreadPool.dispose_local()`` -> removed, use
+  ``conn.invalidate()``
+
+:ticket:`1982`
+
+Previously Deprecated, Now Removed
+==================================
+
+Query.join(), Query.outerjoin(), eagerload(), eagerload_all(), others no longer allow lists of attributes as arguments
+----------------------------------------------------------------------------------------------------------------------
+
+Passing a list of attributes or attribute names to
+``Query.join``, ``eagerload()``, and similar has been
+deprecated since 0.5:
+
+::
+
+ # old way, deprecated since 0.5
+ session.query(Houses).join([Houses.rooms, Room.closets])
+ session.query(Houses).options(eagerload_all([Houses.rooms, Room.closets]))
+
+These methods all accept \*args as of the 0.5 series:
+
+::
+
+ # current way, in place since 0.5
+ session.query(Houses).join(Houses.rooms, Room.closets)
+ session.query(Houses).options(eagerload_all(Houses.rooms, Room.closets))
+
+``ScopedSession.mapper`` is removed
+-----------------------------------
+
+This feature provided a mapper extension which linked class-
+based functionality with a particular ``ScopedSession``, in
+particular providing the behavior such that new object
+instances would be automatically associated with that
+session. The feature was overused by tutorials and
+frameworks which led to great user confusion due to its
+implicit behavior, and was deprecated in 0.5.5. Techniques
+for replicating its functionality are described in the wiki
+recipe UsageRecipes/SessionAwareMapper.
+
--- /dev/null
+==============================
+What's New in SQLAlchemy 0.8?
+==============================
+
+.. admonition:: About this Document
+
+ This document describes changes between SQLAlchemy version 0.7,
+ undergoing maintenance releases as of October, 2012,
+ and SQLAlchemy version 0.8, which is expected for release
+ in late 2012.
+
+ Document date: October 25, 2012
+
+Introduction
+============
+
+This guide introduces what's new in SQLAlchemy version 0.8,
+and also documents changes which affect users migrating
+their applications from the 0.7 series of SQLAlchemy to 0.8.
+
+SQLAlchemy releases are closing in on 1.0, and each new
+version since 0.5 features fewer major usage changes. Most
+applications that are settled into modern 0.7 patterns
+should be movable to 0.8 with no changes. Applications that
+use 0.6 and even 0.5 patterns should be directly migratable
+to 0.8 as well, though larger applications may want to test
+with each interim version.
+
+Platform Support
+================
+
+Targeting Python 2.5 and Up Now
+-------------------------------
+
+Status: ongoing
+
+SQLAlchemy 0.8 will target Python 2.5 and forward;
+compatibility for Python 2.4 is being dropped.
+
+The internals will be able to make use of Python ternaries
+(that is, ``x if y else z``), which will improve things
+versus the usage of ``y and x or z``, which has naturally
+been the source of some bugs, as well as context managers
+(that is, ``with:``) and perhaps in some cases
+``try:/except:/else:`` blocks, which will help with code
+readability.
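+
+The bug alluded to above is easy to reproduce: the pre-2.5 idiom
+silently picks the wrong branch whenever the "true" value is falsy,
+which the real ternary does not.

```python
# the old "y and x or z" idiom vs. the Python 2.5 ternary
x, y, z = 0, True, "fallback"

old_idiom = y and x or z    # x is falsy, so the "or" kicks in
ternary = x if y else z

print(old_idiom)   # 'fallback' - wrong: we wanted 0
print(ternary)     # 0 - the ternary gets it right
```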
+
+SQLAlchemy will eventually drop 2.5 support as well - when
+2.6 is reached as the baseline, SQLAlchemy will move to use
+2.6/3.3 in-place compatibility, removing the usage of the
+``2to3`` tool and maintaining a source base that works with
+Python 2 and 3 at the same time.
+
+New Features
+============
+
+Rewritten ``relationship()`` mechanics
+--------------------------------------
+
+Status: completed, needs docs
+
+0.8 features a much improved and capable system regarding
+how ``relationship()`` determines how to join between two
+entities. The new system includes these features:
+
+* The ``primaryjoin`` argument is **no longer needed** when
+ constructing a ``relationship()`` against a class that
+ has multiple foreign key paths to the target. Only the
+ ``foreign_keys`` argument is needed to specify those
+ columns which should be included:
+
+ ::
+
+
+ class Parent(Base):
+ __tablename__ = 'parent'
+ id = Column(Integer, primary_key=True)
+ child_id_one = Column(Integer, ForeignKey('child.id'))
+ child_id_two = Column(Integer, ForeignKey('child.id'))
+
+ child_one = relationship("Child", foreign_keys=child_id_one)
+ child_two = relationship("Child", foreign_keys=child_id_two)
+
+ class Child(Base):
+ __tablename__ = 'child'
+ id = Column(Integer, primary_key=True)
+
+* Relationships against self-referential, composite foreign
+ keys where **a column points to itself** are now
+ supported. The canonical case is as follows:
+
+ ::
+
+ class Folder(Base):
+ __tablename__ = 'folder'
+ __table_args__ = (
+ ForeignKeyConstraint(
+ ['account_id', 'parent_id'],
+ ['folder.account_id', 'folder.folder_id']),
+ )
+
+ account_id = Column(Integer, primary_key=True)
+ folder_id = Column(Integer, primary_key=True)
+ parent_id = Column(Integer)
+ name = Column(String)
+
+ parent_folder = relationship("Folder",
+ backref="child_folders",
+ remote_side=[account_id, folder_id]
+ )
+
+ Above, the ``Folder`` refers to its parent ``Folder``
+ joining from ``account_id`` to itself, and ``parent_id``
+ to ``folder_id``. When SQLAlchemy constructs an auto-
+ join, no longer can it assume all columns on the "remote"
+ side are aliased, and all columns on the "local" side are
+ not - the ``account_id`` column is **on both sides**. So
+ the internal relationship mechanics were totally rewritten
+ to support an entirely different system whereby two copies
+ of ``account_id`` are generated, each containing different
+  *annotations* to determine their role within the
+ statement. Note the join condition within a basic eager
+ load:
+
+ ::
+
+ SELECT
+ folder.account_id AS folder_account_id,
+ folder.folder_id AS folder_folder_id,
+ folder.parent_id AS folder_parent_id,
+ folder.name AS folder_name,
+ folder_1.account_id AS folder_1_account_id,
+ folder_1.folder_id AS folder_1_folder_id,
+ folder_1.parent_id AS folder_1_parent_id,
+ folder_1.name AS folder_1_name
+ FROM folder
+ LEFT OUTER JOIN folder AS folder_1
+ ON
+ folder_1.account_id = folder.account_id
+ AND folder.folder_id = folder_1.parent_id
+
+ WHERE folder.folder_id = ? AND folder.account_id = ?
+
+* Thanks to the new relationship mechanics, new
+ **annotation** functions are provided which can be used
+ to create ``primaryjoin`` conditions involving any kind of
+ SQL function, CAST, or other construct that wraps the
+ target column. Previously, a semi-public argument
+ ``_local_remote_pairs`` would be used to tell
+ ``relationship()`` unambiguously what columns should be
+ considered as corresponding to the mapping - the
+ annotations make the point more directly, such as below
+ where ``Parent`` joins to ``Child`` by matching the
+ ``Parent.name`` column converted to lower case to that of
+ the ``Child.name_upper`` column:
+
+ ::
+
+
+ class Parent(Base):
+ __tablename__ = 'parent'
+ id = Column(Integer, primary_key=True)
+ name = Column(String)
+ children = relationship("Child",
+ primaryjoin="Parent.name==foreign(func.lower(Child.name_upper))"
+ )
+
+ class Child(Base):
+ __tablename__ = 'child'
+ id = Column(Integer, primary_key=True)
+ name_upper = Column(String)
+
+:ticket:`1401` :ticket:`610`
+
+New Class Inspection System
+---------------------------
+
+Status: completed, needs docs
+
+Lots of SQLAlchemy users are writing systems that require
+the ability to inspect the attributes of a mapped class,
+including being able to get at the primary key columns,
+object relationships, plain attributes, and so forth,
+typically for the purpose of building data-marshalling
+systems, like JSON/XML conversion schemes and of course form
+libraries galore.
+
+Originally, the ``Table`` and ``Column`` model served as the
+inspection points, backed by a well-documented
+system. While SQLAlchemy ORM models are also fully
+introspectable, this has never been a fully stable and
+supported feature, and users tended to not have a clear idea
+how to get at this information.
+
+0.8 has a plan to produce a consistent, stable and fully
+documented API for this purpose, which would provide an
+inspection system that works on classes, instances, and
+possibly other things as well. While many elements of this
+system are already available, the plan is to lock down the
+API including various accessors available from such objects
+as ``Mapper``, ``InstanceState``, and ``MapperProperty``:
+
+::
+
+ class User(Base):
+ __tablename__ = 'user'
+
+ id = Column(Integer, primary_key=True)
+ name = Column(String)
+ name_syn = synonym(name)
+ addresses = relationship(Address)
+
+ # universal entry point is inspect()
+ >>> b = inspect(User)
+
+ # column collection
+ >>> b.columns
+ [<id column>, <name column>]
+
+ # its a ColumnCollection
+ >>> b.columns.id
+ <id column>
+
+ # i.e. from mapper
+ >>> b.primary_key
+ (<id column>, )
+
+ # ColumnProperty
+ >>> b.attr.id.columns
+ [<id column>]
+
+ # get only column attributes
+ >>> b.column_attrs
+ [<id prop>, <name prop>]
+
+ # its a namespace
+ >>> b.column_attrs.id
+ <id prop>
+
+ # get only relationships
+ >>> b.relationships
+ [<addresses prop>]
+
+ # its a namespace
+ >>> b.relationships.addresses
+ <addresses prop>
+
+ # point inspect() at a class level attribute,
+ # basically returns ".property"
+ >>> b = inspect(User.addresses)
+ >>> b
+ <addresses prop>
+
+ # mapper
+ >>> b.mapper
+ <Address mapper>
+
+ # None columns collection, just like columnprop has empty mapper
+ >>> b.columns
+ None
+
+ # the parent
+ >>> b.parent
+ <User mapper>
+
+ # __clause_element__()
+ >>> b.expression
+ User.id==Address.user_id
+
+ >>> inspect(User.id).expression
+ <id column with ORM annotations>
+
+ # inspect works on instances !
+ >>> u1 = User(id=3, name='x')
+ >>> b = inspect(u1)
+
+ # what's b here ? probably InstanceState
+ >>> b
+ <InstanceState>
+
+ >>> b.attr.keys()
+ ['id', 'name', 'name_syn', 'addresses']
+
+ # attribute interface
+ >>> b.attr.id
+ <magic attribute inspect thing>
+
+ # value
+ >>> b.attr.id.value
+ 3
+
+ # history
+ >>> b.attr.id.history
+ <history object>
+
+ >>> b.attr.id.history.unchanged
+ 3
+
+ >>> b.attr.id.history.deleted
+ None
+
+ # lets assume the object is persistent
+ >>> s = Session()
+ >>> s.add(u1)
+ >>> s.commit()
+
+ # big one - the primary key identity ! always
+ # works in query.get()
+ >>> b.identity
+ [3]
+
+ # the mapper level key
+ >>> b.identity_key
+ (User, [3])
+
+ >>> b.persistent
+ True
+
+ >>> b.transient
+ False
+
+ >>> b.deleted
+ False
+
+ >>> b.detached
+ False
+
+ >>> b.session
+ <session>
+
+
+:ticket:`2208`
+
+Fully extensible, type-level operator support in Core
+-----------------------------------------------------
+
+Status: completed, needs more docs
+
+The Core has to date never had any system of adding support
+for new SQL operators to Column and other expression
+constructs, other than the ``op(<somestring>)`` function
+which is "just enough" to make things work. There has also
+never been any system in place for Core which allows the
+behavior of existing operators to be overridden. Up until
+now, the only way operators could be flexibly redefined was
+in the ORM layer, using ``column_property()`` given a
+``comparator_factory`` argument. Third party libraries
+like GeoAlchemy were therefore forced to be ORM-centric and
+rely upon an array of hacks to apply new operations as well
+as to get them to propagate correctly.
+
+The new operator system in Core adds the one hook that's
+been missing all along, which is to associate new and
+overridden operators with *types*. Since after all, it's
+not really a column, CAST operator, or SQL function that
+really drives what kinds of operations are present, it's the
+*type* of the expression. The implementation details are
+minimal - only a few extra methods are added to the core
+``ColumnElement`` type so that it consults its
+``TypeEngine`` object for an optional set of operators.
+New or revised operations can be associated with any type,
+either via subclassing of an existing type, by using
+``TypeDecorator``, or "globally across-the-board" by
+attaching a new ``Comparator`` object to an existing type
+class.
+
+For example, to add logarithm support to ``Numeric`` types:
+
+::
+
+
+ from sqlalchemy.types import Numeric
+ from sqlalchemy.sql import func
+
+ class CustomNumeric(Numeric):
+ class comparator_factory(Numeric.Comparator):
+ def log(self, other):
+ return func.log(self.expr, other)
+
+The new type is usable like any other type:
+
+::
+
+
+ data = Table('data', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('x', CustomNumeric(10, 5)),
+ Column('y', CustomNumeric(10, 5))
+ )
+
+ stmt = select([data.c.x.log(data.c.y)]).where(data.c.x.log(2) < value)
+ print conn.execute(stmt).fetchall()
+
+
+New features which should come from this immediately
+include support for Postgresql's HSTORE type, ready to go
+in a separate library which may be merged, as well as all
+the special operations associated with Postgresql's ARRAY
+type. It also paves the way for existing types to acquire
+many more operators specific to those types, such as more
+string, integer and date operators.
+
+:ticket:`2547`
+
+New with_polymorphic() feature can be used anywhere
+----------------------------------------------------
+
+Status: completed
+
+The ``Query.with_polymorphic()`` method allows the user to
+specify which tables should be present when querying against
+a joined-table entity. Unfortunately the method is awkward,
+applies only to the first entity in the list, and otherwise
+has inconvenient behaviors both in usage and within the
+internals. A new enhancement to the
+``aliased()`` construct has been added called
+``with_polymorphic()`` which allows any entity to be
+"aliased" into a "polymorphic" version of itself, freely
+usable anywhere:
+
+::
+
+ from sqlalchemy.orm import with_polymorphic
+ palias = with_polymorphic(Person, [Engineer, Manager])
+ session.query(Company).\
+ join(palias, Company.employees).\
+ filter(or_(Engineer.language=='java', Manager.hair=='pointy'))
+
+:ticket:`2333`
+
+of_type() works with alias(), with_polymorphic(), any(), has(), joinedload(), subqueryload(), contains_eager()
+--------------------------------------------------------------------------------------------------------------
+
+Status: completed
+
+``of_type()`` can now be used with aliased and polymorphic
+constructs; it also works with most relationship-oriented
+functions, such as ``joinedload()``, ``subqueryload()``,
+``contains_eager()``, ``any()``, and ``has()``:
+
+::
+
+
+ # use eager loading in conjunction with with_polymorphic targets
+ Job_P = with_polymorphic(Job, SubJob, aliased=True)
+ q = s.query(DataContainer).\
+ join(DataContainer.jobs.of_type(Job_P)).\
+ options(contains_eager(DataContainer.jobs.of_type(Job_P)))
+
+ # pass subclasses to eager loads (implicitly applies with_polymorphic)
+ q = s.query(ParentThing).\
+ options(
+ joinedload_all(
+ ParentThing.container,
+ DataContainer.jobs.of_type(SubJob)
+ ))
+
+ # control self-referential aliasing with any()/has()
+ Job_A = aliased(Job)
+ q = s.query(Job).join(DataContainer.jobs).\
+ filter(
+ DataContainer.jobs.of_type(Job_A).\
+                        any(and_(Job_A.id < Job.id, Job_A.type=='fred'))
+                )
+
+
+:ticket:`2438` :ticket:`1106`
+
+New DeferredReflection Feature in Declarative
+---------------------------------------------
+
+The "deferred reflection" example has been moved to a
+supported feature within Declarative. This feature allows
+the construction of declarative mapped classes with only
+placeholder ``Table`` metadata, until a ``prepare()`` step
+is called, given an ``Engine`` with which to reflect fully
+all tables and establish actual mappings. The system
+supports overriding of columns, single and joined
+inheritance, as well as distinct bases-per-engine. A full
+declarative configuration can now be created against an
+existing table, assembled in one step once an engine is
+available:
+
+::
+
+ class ReflectedOne(DeferredReflection, Base):
+ __abstract__ = True
+
+ class ReflectedTwo(DeferredReflection, Base):
+ __abstract__ = True
+
+ class MyClass(ReflectedOne):
+ __tablename__ = 'mytable'
+
+ class MyOtherClass(ReflectedOne):
+ __tablename__ = 'myothertable'
+
+ class YetAnotherClass(ReflectedTwo):
+ __tablename__ = 'yetanothertable'
+
+ ReflectedOne.prepare(engine_one)
+ ReflectedTwo.prepare(engine_two)
+
+:ticket:`2485`
+
+New, configurable DATE, TIME types for SQLite
+---------------------------------------------
+
+Status: completed
+
+SQLite has no built-in DATE, TIME, or DATETIME types, and
+instead provides some support for storage of date and time
+values either as strings or integers. The date and time
+types for SQLite are enhanced in 0.8 to be much more
+configurable as to the specific format, including that the
+"microseconds" portion is optional, as well as pretty much
+everything else.
+
+::
+
+ Column('sometimestamp', sqlite.DATETIME(truncate_microseconds=True))
+ Column('sometimestamp', sqlite.DATETIME(
+ storage_format=(
+ "%(year)04d%(month)02d%(day)02d"
+ "%(hour)02d%(minute)02d%(second)02d%(microsecond)06d"
+ ),
+                regexp=r"(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})(\d{6})"
+ )
+ )
+ Column('somedate', sqlite.DATE(
+ storage_format="%(month)02d/%(day)02d/%(year)04d",
+            regexp=r"(?P<month>\d+)/(?P<day>\d+)/(?P<year>\d+)",
+ )
+ )
+
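+The ``storage_format`` / ``regexp`` pair behaves like an
+ordinary Python format string paired with a parsing regular
+expression. The round trip can be illustrated with just the
+standard library; this is a sketch of the idea, not the
+dialect's actual code:
+
+::
+
+    import datetime
+    import re
+
+    STORAGE_FORMAT = (
+        "%(year)04d%(month)02d%(day)02d"
+        "%(hour)02d%(minute)02d%(second)02d%(microsecond)06d"
+    )
+    REGEXP = re.compile(r"(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})(\d{6})")
+
+    def to_storage(dt):
+        # render a datetime into the string stored in SQLite
+        return STORAGE_FORMAT % {
+            "year": dt.year, "month": dt.month, "day": dt.day,
+            "hour": dt.hour, "minute": dt.minute,
+            "second": dt.second, "microsecond": dt.microsecond,
+        }
+
+    def from_storage(text):
+        # parse the stored string back into a datetime
+        return datetime.datetime(*map(int, REGEXP.match(text).groups()))
+
+    dt = datetime.datetime(2012, 10, 15, 12, 30, 45, 123456)
+    assert from_storage(to_storage(dt)) == dt
+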
+
+Huge thanks to Nate Dub for sprinting on this at PyCon
+2012.
+
+:ticket:`2363`
+
+Query.update() will support UPDATE..FROM
+----------------------------------------
+
+Status: not implemented
+
+It's not 100% certain whether this will make it in, but
+the new UPDATE..FROM mechanics should work in
+``query.update()``:
+
+::
+
+ query(SomeEntity).\
+ filter(SomeEntity.id==SomeOtherEntity.id).\
+ filter(SomeOtherEntity.foo=='bar').\
+ update({"data":"x"})
+
+It should also work when used against a joined-inheritance
+entity, provided the target of the UPDATE is local to the
+table being filtered on, or if the parent and child tables
+are mixed, they are joined explicitly in the query. Below,
+given ``Engineer`` as a joined subclass of ``Person``:
+
+::
+
+ query(Engineer).\
+ filter(Person.id==Engineer.id).\
+ filter(Person.name=='dilbert').\
+ update({"engineer_data":"java"})
+
+would produce:
+
+::
+
+ UPDATE engineer SET engineer_data='java' FROM person
+ WHERE person.id=engineer.id AND person.name='dilbert'
+
+:ticket:`2365`
+
+Enhanced Postgresql ARRAY type
+------------------------------
+
+Status: completed
+
+The ``postgresql.ARRAY`` type will accept an optional
+``dimensions`` argument, pinning it to a fixed number of
+dimensions and greatly improving efficiency when retrieving
+results:
+
+::
+
+ # old way, still works since PG supports N-dimensions per row:
+ Column("my_array", postgresql.ARRAY(Integer))
+
+ # new way, will render ARRAY with correct number of [] in DDL,
+ # will process binds and results more efficiently as we don't need
+ # to guess how many levels deep to go
+ Column("my_array", postgresql.ARRAY(Integer, dimensions=2))
+
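+Why a fixed number of dimensions helps can be sketched in
+plain Python (illustrative only, not the dialect's actual
+code): with a known depth, result processing recurses
+exactly that many levels, instead of inspecting each value
+to guess whether it is another nested list:
+
+::
+
+    def process_array(value, item_processor, dimensions):
+        # apply item_processor to each element of a nested list
+        # whose depth is known up front - no per-value guessing
+        if dimensions == 1:
+            return [item_processor(elem) for elem in value]
+        return [
+            process_array(elem, item_processor, dimensions - 1)
+            for elem in value
+        ]
+
+    # a two-dimensional array, as in ARRAY(Integer, dimensions=2)
+    assert process_array([["1", "2"], ["3", "4"]], int, 2) == \
+        [[1, 2], [3, 4]]
+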
+:ticket:`2441`
+
+rollback() will only roll back "dirty" objects from a begin_nested()
+--------------------------------------------------------------------
+
+Status: completed
+
+A behavioral change that should improve efficiency for those
+users using SAVEPOINT via ``Session.begin_nested()`` - upon
+``rollback()``, only those objects that were made dirty
+since the last flush will be expired, the rest of the
+``Session`` remains intact. This is because a ROLLBACK to a
+SAVEPOINT does not terminate the containing transaction's
+isolation, so no expiry is needed except for those changes
+that were not flushed in the current transaction.
+
+:ticket:`2452`
+
+Behavioral Changes
+==================
+
+The after_attach event fires after the item is associated with the Session instead of before; before_attach added
+-----------------------------------------------------------------------------------------------------------------
+
+Event handlers which use after_attach can now assume the
+given instance is associated with the given session:
+
+::
+
+ @event.listens_for(Session, "after_attach")
+ def after_attach(session, instance):
+ assert instance in session
+
+Some use cases require that it work this way. However,
+other use cases require that the item is *not* yet part of
+the session, such as when a query, intended to load some
+state required for an instance, emits autoflush first and
+would otherwise prematurely flush the target object. Those
+use cases should use the new "before_attach" event:
+
+::
+
+ @event.listens_for(Session, "before_attach")
+ def before_attach(session, instance):
+ instance.some_necessary_attribute = session.query(Widget).\
+                            filter_by(name=instance.widget_name).\
+ first()
+
+:ticket:`2464`
+
+Query now auto-correlates like a select() does
+----------------------------------------------
+
+Status: completed
+
+Previously it was necessary to call ``Query.correlate()``
+in order to have a column- or WHERE-clause subquery
+correlate to the parent:
+
+::
+
+ subq = session.query(Entity.value).\
+ filter(Entity.id==Parent.entity_id).\
+ correlate(Parent).\
+ as_scalar()
+ session.query(Parent).filter(subq=="some value")
+
+This was the opposite behavior of a plain ``select()``
+construct which would assume auto-correlation by default.
+The above statement in 0.8 will correlate automatically:
+
+::
+
+ subq = session.query(Entity.value).\
+ filter(Entity.id==Parent.entity_id).\
+ as_scalar()
+ session.query(Parent).filter(subq=="some value")
+
+As with ``select()``, correlation can be disabled by calling
+``query.correlate(None)``, or set manually by passing an
+entity via ``query.correlate(someentity)``.
+
+:ticket:`2179`
+
+No more magic coercion of "=" to IN when comparing to subquery in MS-SQL
+------------------------------------------------------------------------
+
+Status: completed
+
+We found a very old behavior in the MSSQL dialect which
+would attempt to rescue users from themselves when
+doing something like this:
+
+::
+
+ scalar_subq = select([someothertable.c.id]).where(someothertable.c.data=='foo')
+ select([sometable]).where(sometable.c.id==scalar_subq)
+
+SQL Server doesn't allow an equality comparison to a scalar
+SELECT, that is, "x = (SELECT something)". The MSSQL dialect
+would convert this to an IN. The same thing would happen,
+however, for a comparison like "(SELECT something) = x", and
+overall this level of guessing is outside of SQLAlchemy's
+usual scope, so the behavior is removed. Code that relies
+upon IN semantics should spell them out using ``in_()``.
+
+:ticket:`2277`
+
+Fixed the behavior of Session.is_modified()
+-------------------------------------------
+
+Status: completed
+
+The ``Session.is_modified()`` method accepts an argument
+``passive`` which basically should not be necessary; the
+argument in all cases should be the value ``True``. When
+left at its default of ``False`` it would have the effect of
+hitting the database, often triggering autoflush, which
+would itself change the results. In 0.8 the ``passive``
+argument will have no effect, and unloaded attributes will
+never be checked for history since by definition there can
+be no pending state change on an unloaded attribute.
+
+:ticket:`2320`
+
+``column.key`` is honored in the ``.c.`` attribute of ``select()`` with ``apply_labels()``
+------------------------------------------------------------------------------------------
+
+Status: completed
+
+Users of the expression system know that ``apply_labels()``
+prepends the table name to each column name, affecting the
+names that are available from ``.c.``:
+
+::
+
+ s = select([table1]).apply_labels()
+ s.c.table1_col1
+ s.c.table1_col2
+
+Before 0.8, if the ``Column`` had a different ``key``, this
+key would be ignored, inconsistently with the case where
+``apply_labels()`` was not used:
+
+::
+
+ # before 0.8
+ table1 = Table('t1', metadata,
+ Column('col1', Integer, key='column_one')
+ )
+ s = select([table1])
+ s.c.column_one # would be accessible like this
+ s.c.col1 # would raise AttributeError
+
+ s = select([table1]).apply_labels()
+ s.c.table1_column_one # would raise AttributeError
+ s.c.table1_col1 # would be accessible like this
+
+In 0.8, ``key`` is honored in both cases:
+
+::
+
+ # with 0.8
+ table1 = Table('t1', metadata,
+ Column('col1', Integer, key='column_one')
+ )
+ s = select([table1])
+ s.c.column_one # works
+ s.c.col1 # AttributeError
+
+ s = select([table1]).apply_labels()
+ s.c.table1_column_one # works
+ s.c.table1_col1 # AttributeError
+
+All other behavior regarding "name" and "key" is the same,
+including that the rendered SQL will still use the form
+``<tablename>_<colname>`` - the emphasis here was on
+preventing the ``key`` contents from being rendered into the
+``SELECT`` statement so that there are no issues with
+special/non-ASCII characters used in the ``key``.
+
+:ticket:`2397`
+
+single_parent warning is now an error
+-------------------------------------
+
+Status: completed
+
+A ``relationship()`` that is many-to-one or many-to-many and
+specifies ``cascade='all, delete-orphan'``, which is an
+awkward but nonetheless supported use case (with
+restrictions), will now raise an error if the relationship
+does not specify the ``single_parent=True`` option.
+Previously it would only emit a warning, but a failure would
+follow almost immediately within the attribute system in any
+case.
+
+:ticket:`2405`
+
+Adding the ``inspector`` argument to the ``column_reflect`` event
+-----------------------------------------------------------------
+
+Status: completed
+
+0.7 added a new event called ``column_reflect``, provided so
+that the reflection of columns could be augmented as each
+one is reflected. We got this event slightly wrong in
+that it gave no way to get at the current
+``Inspector`` and ``Connection`` being used for the
+reflection, in case additional information from the
+database is needed. As this is a new event not widely used
+yet, we'll be adding the ``inspector`` argument into it
+directly:
+
+::
+
+ @event.listens_for(Table, "column_reflect")
+ def listen_for_col(inspector, table, column_info):
+ # ...
+
+:ticket:`2418`
+
+Disabling auto-detect of collations, casing for MySQL
+-----------------------------------------------------
+
+Status: completed
+
+The MySQL dialect does two calls, one very expensive, to
+load all possible collations from the database as well as
+information on casing, the first time an ``Engine``
+connects. Neither of these collections is used for any
+SQLAlchemy functions, so these calls will be changed to no
+longer be emitted automatically. Applications that might
+have relied on these collections being present on
+``engine.dialect`` will need to call upon
+``_detect_collations()`` and ``_detect_casing()`` directly.
+
+:ticket:`2404`
+
+"Unconsumed column names" warning becomes an exception
+------------------------------------------------------
+
+Status: completed
+
+Referring to a non-existent column in an ``insert()`` or
+``update()`` construct will raise an error instead of a
+warning:
+
+::
+
+ t1 = table('t1', column('x'))
+ t1.insert().values(x=5, z=5) # raises "Unconsumed column names: z"
+
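+The check itself is simple to model in plain Python; this
+is an illustrative sketch, not the actual implementation:
+
+::
+
+    def check_unconsumed(known_columns, values):
+        # raise if values refers to columns not present on the
+        # table, mirroring the "Unconsumed column names" error
+        unconsumed = sorted(set(values) - set(known_columns))
+        if unconsumed:
+            raise ValueError(
+                "Unconsumed column names: %s" % ", ".join(unconsumed))
+
+    check_unconsumed(["x"], {"x": 5})      # fine
+    # check_unconsumed(["x"], {"x": 5, "z": 5})  # would raise ValueError
+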
+:ticket:`2415`
+
+Inspector.get_primary_keys() is deprecated, use Inspector.get_pk_constraint
+---------------------------------------------------------------------------
+
+Status: completed
+
+These two methods on ``Inspector`` were redundant, where
+``get_primary_keys()`` would return the same information as
+``get_pk_constraint()`` minus the name of the constraint:
+
+::
+
+ >>> insp.get_primary_keys()
+ ["a", "b"]
+
+ >>> insp.get_pk_constraint()
+ {"name":"pk_constraint", "constrained_columns":["a", "b"]}
+
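+Code that consumed the old return value can migrate by
+reading the ``constrained_columns`` key of the new one;
+using the dictionary shown above:
+
+::
+
+    # the new return value, as shown above
+    pk_constraint = {
+        "name": "pk_constraint",
+        "constrained_columns": ["a", "b"]
+    }
+
+    # equivalent of the deprecated get_primary_keys() result
+    primary_keys = pk_constraint["constrained_columns"]
+    assert primary_keys == ["a", "b"]
+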
+:ticket:`2422`
+
+Case-insensitive result row names will be disabled in most cases
+----------------------------------------------------------------
+
+Status: completed
+
+A very old behavior, the column names in ``RowProxy`` were
+always compared case-insensitively:
+
+::
+
+ >>> row = result.fetchone()
+ >>> row['foo'] == row['FOO'] == row['Foo']
+ True
+
+This was for the benefit of a few dialects which in the
+early days needed this, like Oracle and Firebird, but in
+modern usage we have more accurate ways of dealing with the
+case-insensitive behavior of these two platforms.
+
+Going forward, this behavior will be available only
+optionally, by passing the flag ``case_sensitive=False``
+to ``create_engine()``; otherwise column names
+requested from the row must match exactly as far as casing.
+
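+The effect of the flag can be modeled with a minimal
+mapping - a plain-Python sketch, not the real ``RowProxy``:
+
+::
+
+    class Row(object):
+        # keyed result-row access with an optional
+        # case-insensitive mode, mimicking the engine-wide flag
+        def __init__(self, data, case_sensitive=True):
+            self._case_sensitive = case_sensitive
+            if case_sensitive:
+                self._data = dict(data)
+            else:
+                self._data = dict(
+                    (k.lower(), v) for k, v in data.items())
+
+        def __getitem__(self, key):
+            if not self._case_sensitive:
+                key = key.lower()
+            return self._data[key]
+
+    row = Row({"foo": 1})
+    assert row["foo"] == 1          # exact casing works
+    # row["FOO"] raises KeyError under the 0.8 default
+
+    legacy = Row({"foo": 1}, case_sensitive=False)
+    assert legacy["FOO"] == 1       # old behavior, opt-in
+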
+:ticket:`2423`
+
+``InstrumentationManager`` and alternate class instrumentation is now an extension
+----------------------------------------------------------------------------------
+
+The ``sqlalchemy.orm.interfaces.InstrumentationManager``
+class is moved to
+``sqlalchemy.ext.instrumentation.InstrumentationManager``.
+The "alternate instrumentation" system was built for the
+benefit of a very small number of installations that needed
+to work with existing or unusual class instrumentation
+systems, and generally is very seldom used. The complexity
+of this system has been exported to an ``ext.`` module. It
+remains unused until imported, typically when a third
+party library imports ``InstrumentationManager``, at which
+point it is injected back into ``sqlalchemy.orm`` by
+replacing the default ``InstrumentationFactory`` with
+``ExtendedInstrumentationRegistry``.
+
+Removed
+=======
+
+SQLSoup
+-------
+
+Status: completed
+
+SQLSoup is a handy package that presents an alternative
+interface on top of the SQLAlchemy ORM. SQLSoup is now
+moved into its own project and documented/released
+separately; see https://bitbucket.org/zzzeek/sqlsoup.
+
+SQLSoup is a very simple tool that could also benefit from
+contributors who are interested in its style of usage.
+
+:ticket:`2262`
+
+MutableType
+-----------
+
+Status: completed
+
+The older "mutable" system within the SQLAlchemy ORM has
+been removed. This refers to the ``MutableType`` interface
+which was applied to types such as ``PickleType`` and
+conditionally to ``TypeDecorator``, and since very early
+SQLAlchemy versions has provided a way for the ORM to detect
+changes in so-called "mutable" data structures such as JSON
+structures and pickled objects. However, the
+implementation was never reasonable and forced a very
+inefficient mode of usage on the unit-of-work which caused
+an expensive scan of all objects to take place during flush.
+In 0.7, the `sqlalchemy.ext.mutable <http://docs.sqlalchemy.
+org/en/latest/orm/extensions/mutable.html>`_ extension was
+introduced so that user-defined datatypes can appropriately
+send events to the unit of work as changes occur.
+
+Today, usage of ``MutableType`` is expected to be low, as
+warnings have been in place for some years now regarding its
+inefficiency.
+
+:ticket:`2442`
+
+sqlalchemy.exceptions (has been sqlalchemy.exc for years)
+---------------------------------------------------------
+
+Status: completed
+
+We had left in an alias ``sqlalchemy.exceptions`` to attempt
+to make it slightly easier for some very old libraries that
+hadn't yet been upgraded to use ``sqlalchemy.exc``. Some
+users are still being confused by it, however, so in 0.8
+we're taking it out entirely to eliminate that confusion.
+
+:ticket:`2433`
+
text-align:right;
}
-div.note, div.warning, p.deprecated, div.topic {
+div.note, div.warning, p.deprecated, div.topic, div.admonition {
background-color:#EEFFEF;
}
border:1px solid #CCCCCC;
padding:5px 10px;
font-size:.9em;
+ margin-top:5px;
box-shadow: 2px 2px 3px #DFDFDF;
}