========
- new features
- orm
+ - Documentation has been converted to Sphinx.
+ In particular, the generated API documentation
+ has been constructed into a full-blown
+ "API Reference" section which organizes
+ editorial documentation combined with
+ generated docstrings. Cross-linking between
+ sections and API docs is vastly improved,
+ a JavaScript-powered search feature is
+ provided, along with a full index of all
+ classes, functions, and members.
+
- Query.with_polymorphic() now accepts a third
argument "discriminator" which will replace
the value of mapper.polymorphic_on for that
+++ /dev/null
-<html>
-<head>
- <link href="style.css" rel="stylesheet" type="text/css"></link>
- <link href="docs.css" rel="stylesheet" type="text/css"></link>
- <script src="scripts.js"></script>
- <title>SQLAlchemy Documentation</title>
-</head>
-<body>
- <h3>What is an Alpha API Feature?</h3>
-<p><b>Alpha API</b> indicates that the best way for a particular feature to be presented hasn't yet been firmly settled, and the current way is being introduced on a trial basis. Its spirit is not so much a warning that "this API might change"; it's more an invitation to the users saying, "here's a new idea I had. I'm not sure if this is the best way to do it. Do you like it? Should we do this differently? Or is it good the way it is?". Alpha API features are always small in scope and are presented in releases so that the greatest number of users get some hands-on experience with them; large-scoped API or architectural changes will always be discussed on the mailing list/Wiki first.</p>
-
-<p>Reasons why a feature might want to change include:
- <ul>
- <li>The API for the feature is too difficult to use for the typical task, and needs to be more "convenient"</li>
- <li>The feature only implements a subsection of what it really should be doing</li>
- <li>The feature's interface is inconsistent with that of other features which operate at a similar level</li>
- <li>The feature is confusing and is often misunderstood, and would be better replaced by a more manual feature that makes the task clearer</li>
- <li>The feature overlaps with another feature and effectively provides too many ways to do the same thing</li>
- <li>The feature made some assumptions about the total field of use cases which is not really true, and it breaks in other scenarios</li>
- </ul>
-
-</p>
-<p>A good example of what was essentially an "alpha feature" is the <code>private=True</code> flag. This flag on a <code>relation()</code> indicates that child objects should be deleted along with the parent. After this flag experienced some usage by the SA userbase, some users remarked that Hibernate's <code>cascade="all, delete-orphan"</code> was a more generic and configurable approach, and also that the term <code>cascade</code> was clearer in purpose than the more ambiguous <code>private</code> keyword, which could be construed as a "private variable".</p>
-
-<center><input type="button" value="close window" onclick="window.close()"></center>
-</body>
-</html>
\ No newline at end of file
+++ /dev/null
-<html>
-<head>
- <link href="style.css" rel="stylesheet" type="text/css"></link>
- <link href="docs.css" rel="stylesheet" type="text/css"></link>
- <script src="scripts.js"></script>
- <title>SQLAlchemy Documentation</title>
-</head>
-<body>
- <h3>What is an Alpha Implementation Feature?</h3>
-<p><b>Alpha Implementation</b> indicates a feature where developer confidence in its functionality has not yet been firmly established. This typically includes brand new features for which adequate unit tests have not been completed, and/or features whose scope is broad enough that it's not clear what additional unit tests might be needed.</p>
-
-<p>Alpha implementation is not meant to discourage the usage of a feature, it is only meant to indicate that some difficulties in getting full functionality from the feature may occur, and to encourage the reporting of these difficulties either via the mailing list or through <a href="http://www.sqlalchemy.org/trac/newticket" target="_blank">submitting a ticket</a>.</p>
-
-<center><input type="button" value="close window" onclick="window.close()"></center>
-</body>
-</html>
\ No newline at end of file
--- /dev/null
+# Makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line.
+SPHINXOPTS =
+SPHINXBUILD = sphinx-build
+PAPER =
+
+# Internal variables.
+PAPEROPT_a4 = -D latex_paper_size=a4
+PAPEROPT_letter = -D latex_paper_size=letter
+ALLSPHINXOPTS = -d output/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
+
+.PHONY: help clean html dist-html latex site-mako
+
+help:
+ @echo "Please use \`make <target>' where <target> is one of"
+ @echo " html to make standalone HTML files"
+ @echo " dist-html same as html, but places files in /doc"
+	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
+	@echo "  site-mako  same as html, but uses the Mako site layout"
+
+clean:
+ -rm -rf output/*
+
+html:
+ mkdir -p output/html output/doctrees
+ $(SPHINXBUILD) -b html -A mako_layout=html $(ALLSPHINXOPTS) output/html
+ @echo
+ @echo "Build finished. The HTML pages are in output/html."
+
+dist-html:
+ $(SPHINXBUILD) -b html -A mako_layout=html $(ALLSPHINXOPTS) ..
+ @echo
+ @echo "Build finished. The HTML pages are in ../."
+
+site-mako:
+ mkdir -p output/site output/doctrees
+ $(SPHINXBUILD) -b html -A mako_layout=site $(ALLSPHINXOPTS) output/site
+ @echo
+ @echo "Build finished. The Mako pages are in output/site."
+
+latex:
+ mkdir -p output/latex output/doctrees
+ $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) output/latex
+ @echo
+ @echo "Build finished; the LaTeX files are in output/latex."
+ @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
+ "run these through (pdf)latex."
+++ /dev/null
-Documentation exists in its original format as Markdown files in the ./content directory.
-
-To generate documentation:
-
- python genhtml.py
-
-This generates the Markdown files into Myghty templates as an interim step and then into HTML. It also
-creates two pickled datafiles corresponding to the table of contents and all the generated docstrings
-for the SQLAlchemy sourcecode.
-
--- /dev/null
+from sphinx.application import TemplateBridge
+from sphinx.builder import StandaloneHTMLBuilder
+from sphinx.highlighting import PygmentsBridge
+from pygments import highlight
+from pygments.lexer import RegexLexer, bygroups, using
+from pygments.token import *
+from pygments.filter import Filter, apply_filters
+from pygments.lexers import PythonLexer, PythonConsoleLexer
+from pygments.formatters import HtmlFormatter, LatexFormatter
+import re
+from mako.lookup import TemplateLookup
+
+class MakoBridge(TemplateBridge):
+ def init(self, builder):
+ self.layout = builder.config.html_context.get('mako_layout', 'html')
+
+ self.lookup = TemplateLookup(directories=builder.config.templates_path,
+ format_exceptions=True,
+ imports=[
+ "from builder import util"
+ ]
+ )
+
+ def render(self, template, context):
+ template = template.replace(".html", ".mako")
+ context['prevtopic'] = context.pop('prev', None)
+ context['nexttopic'] = context.pop('next', None)
+ context['mako_layout'] = self.layout == 'html' and 'static_base.mako' or 'site_base.mako'
+ return self.lookup.get_template(template).render_unicode(**context)
+
+
+class StripDocTestFilter(Filter):
+    """Strip '# doctest:' directive comments from highlighted code."""
+ def filter(self, lexer, stream):
+ for ttype, value in stream:
+ if ttype is Token.Comment and re.match(r'#\s*doctest:', value):
+ continue
+ yield ttype, value
+
+class PyConWithSQLLexer(RegexLexer):
+ name = 'PyCon+SQL'
+ aliases = ['pycon+sql']
+
+ flags = re.IGNORECASE | re.DOTALL
+
+ tokens = {
+ 'root': [
+ (r'{sql}', Token.Sql.Link, 'sqlpopup'),
+ (r'{opensql}', Token.Sql.Open, 'opensqlpopup'),
+ (r'.*?\n', using(PythonConsoleLexer))
+ ],
+ 'sqlpopup':[
+ (
+            r'(.*?\n)((?:PRAGMA|BEGIN|SELECT|INSERT|DELETE|ROLLBACK|COMMIT|UPDATE|CREATE|DROP|DESCRIBE).*?(?:{stop}\n*|$))',
+ bygroups(using(PythonConsoleLexer), Token.Sql.Popup),
+ "#pop"
+ )
+ ],
+ 'opensqlpopup':[
+ (
+ r'.*?(?:{stop}\n*|$)',
+ Token.Sql,
+ "#pop"
+ )
+ ]
+ }
+
+
+class PythonWithSQLLexer(RegexLexer):
+ name = 'Python+SQL'
+    aliases = ['python+sql']
+
+ flags = re.IGNORECASE | re.DOTALL
+
+ tokens = {
+ 'root': [
+ (r'{sql}', Token.Sql.Link, 'sqlpopup'),
+ (r'{opensql}', Token.Sql.Open, 'opensqlpopup'),
+ (r'.*?\n', using(PythonLexer))
+ ],
+ 'sqlpopup':[
+ (
+            r'(.*?\n)((?:PRAGMA|BEGIN|SELECT|INSERT|DELETE|ROLLBACK|COMMIT|UPDATE|CREATE|DROP|DESCRIBE).*?(?:{stop}\n*|$))',
+ bygroups(using(PythonLexer), Token.Sql.Popup),
+ "#pop"
+ )
+ ],
+ 'opensqlpopup':[
+ (
+ r'.*?(?:{stop}\n*|$)',
+ Token.Sql,
+ "#pop"
+ )
+ ]
+ }
+
+
+def _strip_trailing_whitespace(iter_):
+ buf = list(iter_)
+ if buf:
+ buf[-1] = (buf[-1][0], buf[-1][1].rstrip())
+ for t, v in buf:
+ yield t, v
+
+class PopupSQLFormatter(HtmlFormatter):
+ def _format_lines(self, tokensource):
+ buf = []
+ for ttype, value in apply_filters(tokensource, [StripDocTestFilter()]):
+ if ttype in Token.Sql:
+ for t, v in HtmlFormatter._format_lines(self, iter(buf)):
+ yield t, v
+ buf = []
+
+ if ttype is Token.Sql:
+ yield 1, "<div class='show_sql'>%s</div>" % re.sub(r'(?:[{stop}|\n]*)$', '', value)
+ elif ttype is Token.Sql.Link:
+ yield 1, "<a href='#' class='sql_link'>sql</a>"
+ elif ttype is Token.Sql.Popup:
+ yield 1, "<div class='popup_sql'>%s</div>" % re.sub(r'(?:[{stop}|\n]*)$', '', value)
+ else:
+ buf.append((ttype, value))
+
+ for t, v in _strip_trailing_whitespace(HtmlFormatter._format_lines(self, iter(buf))):
+ yield t, v
+
+def setup(app):
+ app.add_lexer('pycon+sql', PyConWithSQLLexer())
+ app.add_lexer('python+sql', PythonWithSQLLexer())
+ PygmentsBridge.html_formatter = PopupSQLFormatter
+ #PygmentsBridge.latex_formatter = LatexFormatter
+
+
\ No newline at end of file
--- /dev/null
+import re
+
+def striptags(text):
+    """Remove all HTML/XML tags from the given text."""
+    return re.compile(r'<[^>]*>').sub('', text)
+
+def strip_toplevel_anchors(text):
+    """Rewrite links to '<page>.html#<name>-toplevel' anchors as plain '<page>.html' links."""
+    return re.compile(r'\.html#.*-toplevel').sub('.html', text)
+
--- /dev/null
+# -*- coding: utf-8 -*-
+#
+# SQLAlchemy documentation build configuration file, created by
+# sphinx-quickstart on Wed Nov 26 19:50:10 2008.
+#
+# This file is execfile()d with the current directory set to its containing dir.
+#
+# The contents of this file are pickled, so don't put values in the namespace
+# that aren't pickleable (module imports are okay, they're removed automatically).
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+import sys, os
+
+# If your extensions are in another directory, add it here. If the directory
+# is relative to the documentation root, use os.path.abspath to make it
+# absolute, like shown here.
+sys.path.append(os.path.abspath('.'))
+sys.path.append(os.path.abspath('../../lib'))
+
+import sqlalchemy
+
+# General configuration
+# ---------------------
+
+# Add any Sphinx extension module names here, as strings. They can be extensions
+# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
+extensions = ['sphinx.ext.autodoc', 'builder.builders']
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['templates']
+
+# The suffix of source filenames.
+source_suffix = '.rst'
+
+template_bridge = "builder.builders.MakoBridge"
+
+# The encoding of source files.
+#source_encoding = 'utf-8'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = u'SQLAlchemy'
+copyright = u'2008, the SQLAlchemy authors and contributors'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+version = sqlalchemy.__version__
+# The full version, including alpha/beta/rc tags.
+release = sqlalchemy.__version__
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#language = None
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#today = ''
+# Else, today_fmt is used as the format for a strftime call.
+#today_fmt = '%B %d, %Y'
+
+# List of documents that shouldn't be included in the build.
+#unused_docs = []
+
+# List of directories, relative to source directory, that shouldn't be searched
+# for source files.
+exclude_trees = ['build']
+
+# The reST default role (used for this markup: `text`) to use for all documents.
+#default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+
+# Options for HTML output
+# -----------------------
+
+# The style sheet to use for HTML and HTML Help pages. A file of that name
+# must exist either in Sphinx' static/ path, or in one of the custom paths
+# given in html_static_path.
+html_style = 'default.css'
+
+# The name for this set of Sphinx documents. If None, it defaults to
+# "<project> v<release> documentation".
+html_title = "%s %s Documentation" % (project, release)
+
+# A shorter title for the navigation bar. Default is the same as html_title.
+#html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+#html_logo = None
+
+# The name of an image file (within the static path) to use as favicon of the
+# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+#html_favicon = None
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['static']
+
+# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
+# using the given strftime format.
+html_last_updated_fmt = '%m/%d/%Y %H:%M:%S'
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+#html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+#html_sidebars = {}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#html_additional_pages = {}
+
+# If false, no module index is generated.
+html_use_modindex = False
+
+# If false, no index is generated.
+#html_use_index = True
+
+# If true, the index is split into individual pages for each letter.
+#html_split_index = False
+
+# If true, the reST sources are included in the HTML build as _sources/<name>.
+#html_copy_source = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it. The value of this option must be the
+# base URL from which the finished HTML is served.
+#html_use_opensearch = ''
+
+# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
+#html_file_suffix = ''
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'SQLAlchemydoc'
+
+
+# Options for LaTeX output
+# ------------------------
+
+# The paper size ('letter' or 'a4').
+#latex_paper_size = 'letter'
+
+# The font size ('10pt', '11pt' or '12pt').
+#latex_font_size = '10pt'
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title, author, document class [howto/manual]).
+latex_documents = [
+ ('index', 'sqlalchemy.tex', ur'SQLAlchemy Documentation',
+ ur'Mike Bayer', 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+#latex_logo = None
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#latex_use_parts = False
+
+# Additional stuff for the LaTeX preamble.
+#latex_preamble = ''
+
+# Documents to append as an appendix to all manuals.
+#latex_appendices = []
+
+# If false, no module index is generated.
+#latex_use_modindex = True
--- /dev/null
+====================
+Appendix: Copyright
+====================
+
+This is the MIT license: `<http://www.opensource.org/licenses/mit-license.php>`_
+
+Copyright (c) 2005, 2006, 2007, 2008 Michael Bayer and contributors. SQLAlchemy is a trademark of Michael
+Bayer.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of this
+software and associated documentation files (the "Software"), to deal in the Software
+without restriction, including without limitation the rights to use, copy, modify, merge,
+publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons
+to whom the Software is furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all copies or
+substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
+INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
+PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
+FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+DEALINGS IN THE SOFTWARE.
+
--- /dev/null
+.. _engines_toplevel:
+
+================
+Database Engines
+================
+The **Engine** is the starting point for any SQLAlchemy application. It's "home base" for the actual database and its DBAPI, delivered to the SQLAlchemy application through a connection pool and a **Dialect**, which describes how to talk to a specific kind of database/DBAPI combination.
+
+The general structure is this::
+
+ +-----------+ __________
+ /---| Pool |---\ (__________)
+ +-------------+ / +-----------+ \ +--------+ | |
+ connect() <--| Engine |---x x----| DBAPI |---| database |
+ +-------------+ \ +-----------+ / +--------+ | |
+ \---| Dialect |---/ |__________|
+ +-----------+ (__________)
+
+Where above, an :class:`~sqlalchemy.engine.Engine` references both a :class:`~sqlalchemy.engine.Dialect` and a :class:`~sqlalchemy.pool.Pool`, which together interpret the DBAPI's module functions as well as the behavior of the database.
+
+Creating an engine is just a matter of issuing a single call, :func:`create_engine()`::
+
+ engine = create_engine('postgres://scott:tiger@localhost:5432/mydatabase')
+
+The above engine invokes the ``postgres`` dialect and a connection pool which references ``localhost:5432``.
+
+The engine can be used directly to issue SQL to the database. The most generic way is to use connections, which you get via the ``connect()`` method::
+
+ connection = engine.connect()
+ result = connection.execute("select username from users")
+ for row in result:
+ print "username:", row['username']
+ connection.close()
+
+The connection is an instance of :class:`~sqlalchemy.engine.Connection`, which is a **proxy** object for an actual DBAPI connection. The returned result is an instance of :class:`~sqlalchemy.engine.ResultProxy`, which acts very much like a DBAPI cursor.
+
+When you say ``engine.connect()``, a new ``Connection`` object is created, and a DBAPI connection is retrieved from the connection pool. Later, when you call ``connection.close()``, the DBAPI connection is returned to the pool; nothing is actually "closed" from the perspective of the database.
+
+To execute some SQL more quickly, you can skip the ``Connection`` part and just say::
+
+ result = engine.execute("select username from users")
+ for row in result:
+ print "username:", row['username']
+ result.close()
+
+Where above, the ``execute()`` method on the ``Engine`` does the ``connect()`` part for you, and returns the ``ResultProxy`` directly. The actual ``Connection`` is *inside* the ``ResultProxy``, waiting for you to finish reading the result. In this case, when you ``close()`` the ``ResultProxy``, the underlying ``Connection`` is closed, which returns the DBAPI connection to the pool.
+
+To summarize the above two examples, when you use a ``Connection`` object, it's known as **explicit execution**. When you don't see the ``Connection`` object, but you still use the ``execute()`` method on the ``Engine``, it's called **explicit, connectionless execution**. A third variant of execution also exists called **implicit execution**; this will be described later.
+
+The ``Engine`` and ``Connection`` can do a lot more than what we illustrated above; SQL strings are only its most rudimentary function. Later chapters will describe how "constructed SQL" expressions can be used with engines; in many cases, you don't have to deal with the ``Engine`` at all after it's created. The Object Relational Mapper (ORM), an optional feature of SQLAlchemy, also uses the ``Engine`` in order to get at connections; that's also a case where you can often create the engine once, and then forget about it.
+
+.. _supported_dbapis:
+
+Supported Databases
+====================
+Recall that the ``Dialect`` is used to describe how to talk to a specific kind of database. Dialects are included with SQLAlchemy for SQLite, Postgres, MySQL, MS-SQL, Firebird, Informix, and Oracle; each can be seen as a Python module present in the :mod:`~sqlalchemy.databases` package. Each dialect requires the appropriate DBAPI drivers to be installed separately.
+
+Downloads for each DBAPI at the time of this writing are as follows:
+
+* Postgres: `psycopg2 <http://www.initd.org/tracker/psycopg/wiki/PsycopgTwo>`_
+* SQLite: `sqlite3 <http://www.python.org/doc/2.5.2/lib/module-sqlite3.html>`_ (included in Python 2.5 or greater), or `pysqlite <http://initd.org/tracker/pysqlite>`_
+* MySQL: `MySQLDB <http://sourceforge.net/projects/mysql-python>`_
+* Oracle: `cx_Oracle <http://cx-oracle.sourceforge.net/>`_
+* MS-SQL, MSAccess: `pyodbc <http://pyodbc.sourceforge.net/>`_ (recommended), `adodbapi <http://adodbapi.sourceforge.net/>`_, or `pymssql <http://pymssql.sourceforge.net/>`_
+* Firebird: `kinterbasdb <http://kinterbasdb.sourceforge.net/>`_
+* Informix: `informixdb <http://informixdb.sourceforge.net/>`_
+* DB2/Informix IDS: `ibm-db <http://code.google.com/p/ibm-db/>`_
+* Sybase: TODO
+* MAXDB: TODO
+
+The SQLAlchemy wiki contains a page of `Database Notes <http://www.sqlalchemy.org/trac/wiki/DatabaseNotes>`_, describing whatever quirks and behaviors have been observed; it's a good place to check for issues with specific databases.
+
+create_engine() URL Arguments
+==============================
+
+SQLAlchemy indicates the source of an Engine strictly via `RFC-1738 <http://rfc.net/rfc1738.html>`_ style URLs, combined with optional keyword arguments to specify options for the Engine. The form of the URL is::
+
+    driver://username:password@host:port/database
+
+Available driver names are ``sqlite``, ``mysql``, ``postgres``, ``oracle``, ``mssql``, and ``firebird``. For sqlite, the database name is the filename to connect to, or the special name ":memory:" which indicates an in-memory database. The URL is typically sent as a string to the ``create_engine()`` function:
+
+.. sourcecode:: python+sql
+
+ # postgres
+ pg_db = create_engine('postgres://scott:tiger@localhost:5432/mydatabase')
+
+    # sqlite (note the four slashes for an absolute path)
+    sqlite_db = create_engine('sqlite:////absolute/path/to/database.txt')
+    # sqlite (three slashes for a relative path)
+    sqlite_db = create_engine('sqlite:///relative/path/to/database.txt')
+    sqlite_db = create_engine('sqlite://')           # in-memory database
+    sqlite_db = create_engine('sqlite:///:memory:')  # the same
+
+ # mysql
+ mysql_db = create_engine('mysql://localhost/foo')
+
+ # oracle via TNS name
+ oracle_db = create_engine('oracle://scott:tiger@dsn')
+
+ # oracle will feed host/port/SID into cx_oracle.makedsn
+ oracle_db = create_engine('oracle://scott:tiger@127.0.0.1:1521/sidname')
+
+ # mssql
+ mssql_db = create_engine('mssql://username:password@localhost/database')
+
+ # mssql via a DSN connection
+ mssql_db = create_engine('mssql://username:password@/?dsn=mydsn')
+
+The :class:`~sqlalchemy.engine.base.Engine` will ask the connection pool for a connection when the ``connect()`` or ``execute()`` methods are called. The default connection pool, :class:`~sqlalchemy.pool.QueuePool`, as well as the default connection pool used with SQLite, :class:`~sqlalchemy.pool.SingletonThreadPool`, will open connections to the database on an as-needed basis. As concurrent statements are executed, :class:`~sqlalchemy.pool.QueuePool` will grow its pool of connections to a default size of five, and will allow a default "overflow" of ten. Since the ``Engine`` is essentially "home base" for the connection pool, it follows that you should keep a single :class:`~sqlalchemy.engine.base.Engine` per database established within an application, rather than creating a new one for each connection.
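+
+The pool's size and overflow can be tuned via the ``pool_size`` and ``max_overflow`` keyword arguments to ``create_engine()`` (see :ref:`create_engine_args` below); a minimal sketch, which merely restates the defaults described above:
+
+.. sourcecode:: python+sql
+
+    # five pooled connections, plus up to ten "overflow" connections;
+    # these are the default values
+    db = create_engine('postgres://scott:tiger@localhost/test',
+                       pool_size=5, max_overflow=10)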
+
+Custom DBAPI connect() arguments
+--------------------------------
+
+
+Custom arguments used when issuing the ``connect()`` call to the underlying DBAPI may be issued in three distinct ways. String-based arguments can be passed directly from the URL string as query arguments:
+
+.. sourcecode:: python+sql
+
+ db = create_engine('postgres://scott:tiger@localhost/test?argument1=foo&argument2=bar')
+
+If SQLAlchemy's database connector is aware of a particular query argument, it may convert the value from a string to its proper type.
+
+``create_engine`` also takes an argument ``connect_args`` which is an additional dictionary that will be passed to ``connect()``. This can be used when arguments of a type other than string are required, and SQLAlchemy's database connector has no type conversion logic present for that parameter:
+
+.. sourcecode:: python+sql
+
+    db = create_engine('postgres://scott:tiger@localhost/test', connect_args={'argument1': 17, 'argument2': 'bar'})
+
+The most customizable connection method of all is to pass a ``creator`` argument, which specifies a callable that returns a DBAPI connection:
+
+.. sourcecode:: python+sql
+
+    import psycopg2
+
+    def connect():
+        return psycopg2.connect(user='scott', host='localhost')
+
+ db = create_engine('postgres://', creator=connect)
+
+.. _create_engine_args:
+
+Database Engine Options
+========================
+
+Keyword options can also be specified to ``create_engine()``, following the string URL:
+
+.. sourcecode:: python+sql
+
+ db = create_engine('postgres://...', encoding='latin1', echo=True)
+
+Options common to all database dialects are described at :func:`~sqlalchemy.create_engine`.
+
+More On Connections
+====================
+
+Recall from the beginning of this section that the Engine provides a ``connect()`` method which returns a ``Connection`` object. ``Connection`` is a *proxy* object which maintains a reference to a DBAPI connection instance. The ``close()`` method on ``Connection`` does not actually close the DBAPI connection, but instead returns it to the connection pool referenced by the ``Engine``. ``Connection`` will also automatically return its resources to the connection pool when the object is garbage collected, i.e. when its ``__del__()`` method is called. When using the standard C implementation of Python, this method is called as soon as the object is dereferenced. With other Python implementations such as Jython, this is not guaranteed.
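+
+As a quick sketch of the checkout/checkin cycle just described:
+
+.. sourcecode:: python+sql
+
+    connection = engine.connect()   # a DBAPI connection is checked out from the pool
+    result = connection.execute("select username from users")
+    rows = result.fetchall()
+    connection.close()              # the DBAPI connection is checked back in, not closed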
+
+The ``execute()`` methods on both ``Engine`` and ``Connection`` can receive SQL clause constructs as well::
+
+ connection = engine.connect()
+ result = connection.execute(select([table1], table1.c.col1==5))
+ for row in result:
+ print row['col1'], row['col2']
+ connection.close()
+
+The above SQL construct is known as a ``select()``. The full range of SQL constructs available is described in :ref:`sqlexpression_toplevel`.
+
+Both ``Connection`` and ``Engine`` fulfill an interface known as ``Connectable`` which specifies common functionality between the two objects, namely being able to call ``connect()`` to return a ``Connection`` object (``Connection`` just returns itself), and being able to call ``execute()`` to get a result set. Following this, most SQLAlchemy functions and objects which accept an ``Engine`` as a parameter or attribute with which to execute SQL will also accept a ``Connection``. As of SQLAlchemy 0.3.9, this argument is named ``bind``::
+
+ engine = create_engine('sqlite:///:memory:')
+
+ # specify some Table metadata
+ metadata = MetaData()
+ table = Table('sometable', metadata, Column('col1', Integer))
+
+ # create the table with the Engine
+ table.create(bind=engine)
+
+ # drop the table with a Connection off the Engine
+ connection = engine.connect()
+ table.drop(bind=connection)
+
+Connection facts:
+
+* The Connection object is **not threadsafe**. While a Connection can be shared among threads using properly synchronized access, this is not recommended, as many DBAPIs have issues with, if not outright disallow, sharing of connection state between threads.
+* The Connection object represents a single DBAPI connection checked out from the connection pool. In this state, the connection pool has no effect upon the connection, including its expiration or timeout state. For the connection pool to properly manage connections, **connections should be returned to the connection pool** (i.e. ``connection.close()``) **whenever the connection is not in use**. If your application needs to manage multiple connections or is otherwise long running (this includes all web applications, threaded or not), don't hold a single connection open at the module level.
+
+Using Transactions with Connection
+===================================
+
+The ``Connection`` object provides a ``begin()`` method which returns a ``Transaction`` object. This object is usually used within a try/except clause so that it is guaranteed to ``rollback()`` or ``commit()``::
+
+ trans = connection.begin()
+ try:
+ r1 = connection.execute(table1.select())
+ connection.execute(table1.insert(), col1=7, col2='this is some data')
+ trans.commit()
+ except:
+ trans.rollback()
+ raise
+
+The ``Transaction`` object also handles "nested" behavior by keeping track of the outermost begin/commit pair. In this example, two functions both issue a transaction on a Connection, but only the outermost Transaction object actually takes effect when it is committed.
+
+.. sourcecode:: python+sql
+
+ # method_a starts a transaction and calls method_b
+ def method_a(connection):
+ trans = connection.begin() # open a transaction
+ try:
+ method_b(connection)
+ trans.commit() # transaction is committed here
+ except:
+ trans.rollback() # this rolls back the transaction unconditionally
+ raise
+
+ # method_b also starts a transaction
+ def method_b(connection):
+ trans = connection.begin() # open a transaction - this runs in the context of method_a's transaction
+ try:
+ connection.execute("insert into mytable values ('bat', 'lala')")
+ connection.execute(mytable.insert(), col1='bat', col2='lala')
+ trans.commit() # transaction is not committed yet
+ except:
+ trans.rollback() # this rolls back the transaction unconditionally
+ raise
+
+ # open a Connection and call method_a
+ conn = engine.connect()
+ method_a(conn)
+ conn.close()
+
+Above, ``method_a`` is called first, which calls ``connection.begin()``. Then it calls ``method_b``. When ``method_b`` calls ``connection.begin()``, it just increments a counter that is decremented when it calls ``commit()``. If either ``method_a`` or ``method_b`` calls ``rollback()``, the whole transaction is rolled back. The transaction is not committed until ``method_a`` calls the ``commit()`` method. This "nesting" behavior allows the creation of functions which "guarantee" that a transaction will be used if one was not already available, but will automatically participate in an enclosing transaction if one exists.
+
+Note that SQLAlchemy's Object Relational Mapper also provides a way to control transaction scope at a higher level; this is described in :ref:`unitofwork_transaction`.
+
+Transaction facts:
+
+* the Transaction object, just like its parent Connection, is **not threadsafe**.
+* SQLAlchemy 0.4 and above feature transactions with two-phase commit capability as well as SAVEPOINT capability.
+
+Understanding Autocommit
+------------------------
+
+
+The above transaction example illustrates how to use ``Transaction`` so that several executions can take part in the same transaction. What happens when we issue an INSERT, UPDATE or DELETE call without using ``Transaction``? The answer is **autocommit**. While many DBAPIs implement a flag called ``autocommit``, the current SQLAlchemy behavior is such that it implements its own autocommit. This is achieved by detecting statements which represent data-changing operations, i.e. INSERT, UPDATE, DELETE, etc., and then issuing a COMMIT automatically if no transaction is in progress. The detection is based on compiled statement attributes, or, in the case of a text-only statement, via regular expressions:
+
+.. sourcecode:: python+sql
+
+ conn = engine.connect()
+ conn.execute("INSERT INTO users VALUES (1, 'john')") # autocommits
+
+Connectionless Execution, Implicit Execution
+=============================================
+
+Recall from the first section we mentioned executing with and without a ``Connection``. "Connectionless" execution refers to calling the ``execute()`` method on an object which is not a ``Connection``; this could be the ``Engine`` itself, or a constructed SQL object. When we say "implicit", we mean that we are calling the ``execute()`` method on an object which is neither a ``Connection`` nor an ``Engine`` object; this can only be used with constructed SQL objects which have their own ``execute()`` method, and can be "bound" to an ``Engine``. A description of "constructed SQL objects" may be found in :ref:`sqlexpression_toplevel`.
+
+A summary of all three methods follows below. First, assume the usage of the following ``MetaData`` and ``Table`` objects; while we haven't yet introduced these concepts, for now you only need to know that we are representing a database table, and are creating an "executable" SQL construct which issues a statement to the database. These objects are described in :ref:`metadata_toplevel`.
+
+.. sourcecode:: python+sql
+
+ meta = MetaData()
+ users_table = Table('users', meta,
+ Column('id', Integer, primary_key=True),
+ Column('name', String(50))
+ )
+
+Explicit execution delivers the SQL text or constructed SQL expression to the ``execute()`` method of ``Connection``:
+
+.. sourcecode:: python+sql
+
+ engine = create_engine('sqlite:///file.db')
+ connection = engine.connect()
+ result = connection.execute(users_table.select())
+ for row in result:
+ # ....
+ connection.close()
+
+Explicit, connectionless execution delivers the expression to the ``execute()`` method of ``Engine``:
+
+.. sourcecode:: python+sql
+
+ engine = create_engine('sqlite:///file.db')
+ result = engine.execute(users_table.select())
+ for row in result:
+ # ....
+ result.close()
+
+Implicit execution is also connectionless, and calls the ``execute()`` method on the expression itself, utilizing the fact that either an ``Engine`` or ``Connection`` has been *bound* to the expression object (binding is discussed further in the next section, :ref:`metadata_toplevel`):
+
+.. sourcecode:: python+sql
+
+ engine = create_engine('sqlite:///file.db')
+ meta.bind = engine
+ result = users_table.select().execute()
+ for row in result:
+ # ....
+ result.close()
+
+In both "connectionless" examples, the ``Connection`` is created behind the scenes; the ``ResultProxy`` returned by the ``execute()`` call references the ``Connection`` used to issue the SQL statement. When we issue ``close()`` on the ``ResultProxy``, or if the result set object falls out of scope and is garbage collected, the underlying ``Connection`` is closed for us, resulting in the DBAPI connection being returned to the pool.
+
+.. _threadlocal_strategy:
+
+Using the Threadlocal Execution Strategy
+-----------------------------------------
+
+The "threadlocal" engine strategy is used by non-ORM applications which wish to bind a transaction to the current thread, such that all parts of the application can participate in that transaction implicitly without the need to explicitly reference a ``Connection``. "threadlocal" is designed for a very specific pattern of use, and is not appropriate unless this very specfic pattern, described below, is what's desired. It has **no impact** on the "thread safety" of SQLAlchemy components or one's application. It also should not be used when using an ORM ``Session`` object, as the ``Session`` itself represents an ongoing transaction and itself handles the job of maintaining connection and transactional resources.
+
+Enabling ``threadlocal`` is achieved as follows:
+
+.. sourcecode:: python+sql
+
+ db = create_engine('mysql://localhost/test', strategy='threadlocal')
+
+When the engine above is used in a "connectionless" style, meaning ``engine.execute()`` is called, a DBAPI connection is retrieved from the connection pool and then associated with the current thread. Subsequent operations on the ``Engine`` while the DBAPI connection remains checked out will make use of the *same* DBAPI connection object. The connection stays allocated until all returned ``ResultProxy`` objects are closed, which occurs for a particular ``ResultProxy`` after all pending results are fetched, or immediately for an operation which returns no rows (such as an INSERT).
+
+.. sourcecode:: python+sql
+
+ # execute one statement and receive results. r1 now references a DBAPI connection resource.
+ r1 = db.execute("select * from table1")
+
+ # execute a second statement and receive results. r2 now references the *same* resource as r1
+ r2 = db.execute("select * from table2")
+
+ # fetch a row on r1 (assume more results are pending)
+ row1 = r1.fetchone()
+
+ # fetch a row on r2 (same)
+ row2 = r2.fetchone()
+
+ # close r1. the connection is still held by r2.
+ r1.close()
+
+ # close r2. with no more references to the underlying connection resources, they
+ # are returned to the pool.
+ r2.close()
+
+The above example does not illustrate any pattern that is particularly useful, as it is not a frequent occurrence that two execute/result fetching operations "leapfrog" one another. There is a slight savings of connection pool checkout overhead between the two operations, and an implicit sharing of the same transactional context, but since there is no explicitly declared transaction, this association is short lived.
+
+The real usage of "threadlocal" comes when we want several operations to occur within the scope of a shared transaction. The ``Engine`` now has ``begin()``, ``commit()`` and ``rollback()`` methods which will retrieve a connection resource from the pool and establish a new transaction, maintaining the connection against the current thread until the transaction is committed or rolled back:
+
+.. sourcecode:: python+sql
+
+ db.begin()
+ try:
+ call_operation1()
+ call_operation2()
+ db.commit()
+ except:
+ db.rollback()
+
+``call_operation1()`` and ``call_operation2()`` can make use of the ``Engine`` as a global variable, using the "connectionless" execution style, and their operations will participate in the same transaction:
+
+.. sourcecode:: python+sql
+
+ def call_operation1():
+ engine.execute("insert into users values (?, ?)", 1, "john")
+
+ def call_operation2():
+ users.update(users.c.user_id==5).execute(name='ed')
+
+When using threadlocal, operations which do call the ``connect()`` method will receive a ``Connection`` that is **outside** the scope of the transaction. This can be used for operations such as logging the status of an operation regardless of transaction success:
+
+.. sourcecode:: python+sql
+
+ db.begin()
+ conn = db.connect()
+ try:
+ conn.execute(log_table.insert(), message="Operation started")
+ call_operation1()
+ call_operation2()
+ db.commit()
+ conn.execute(log_table.insert(), message="Operation succeeded")
+ except:
+ db.rollback()
+ conn.execute(log_table.insert(), message="Operation failed")
+ finally:
+ conn.close()
+
+Functions which are written to use an explicit ``Connection`` object, but wish to participate in the threadlocal transaction, can receive their ``Connection`` object from the ``contextual_connect()`` method, which returns a ``Connection`` that is **inside** the scope of the transaction:
+
+.. sourcecode:: python+sql
+
+ conn = db.contextual_connect()
+ call_operation3(conn)
+ conn.close()
+
+Calling ``close()`` on the "contextual" connection does not release the connection resources to the pool if other resources are making use of it. A resource-counting mechanism is employed so that the connection is released back to the pool only when all users of that connection, including the transaction established by ``engine.begin()``, have completed.
+
+So remember: if you're not sure if you need to use ``strategy="threadlocal"`` or not, the answer is **no**! It's driven by a specific programming pattern that is generally not the norm.
+
+Configuring Logging
+====================
+
+Python's standard `logging <http://www.python.org/doc/lib/module-logging.html>`_ module is used to implement informational and debug log output with SQLAlchemy. This allows SQLAlchemy's logging to integrate in a standard way with other applications and libraries. The ``echo`` and ``echo_pool`` flags that are present on ``create_engine()``, as well as the ``echo_uow`` flag used on ``Session``, all interact with regular loggers.
+
+This section assumes familiarity with the above-linked logging module. All logging performed by SQLAlchemy exists underneath the ``sqlalchemy`` namespace, as used by ``logging.getLogger('sqlalchemy')``. When logging has been configured (e.g. via ``logging.basicConfig()``), the general namespace of SA loggers that can be turned on is as follows:
+
+* ``sqlalchemy.engine`` - controls SQL echoing. Set to ``logging.INFO`` for SQL query output, ``logging.DEBUG`` for query + result set output.
+* ``sqlalchemy.pool`` - controls connection pool logging. Set to ``logging.INFO`` or lower to log connection pool checkouts/checkins.
+* ``sqlalchemy.orm`` - controls logging of various ORM functions. Set to ``logging.INFO`` for configurational logging as well as unit of work dumps, ``logging.DEBUG`` for extensive logging during query and flush() operations. Subcategories of ``sqlalchemy.orm`` include:
+
+    * ``sqlalchemy.orm.attributes`` - logs certain instrumented attribute operations, such as triggered callables
+    * ``sqlalchemy.orm.mapper`` - logs Mapper configuration and operations
+    * ``sqlalchemy.orm.unitofwork`` - logs flush() operations, including dependency sort graphs and other operations
+    * ``sqlalchemy.orm.strategies`` - logs relation loader operations (i.e. lazy and eager loads)
+    * ``sqlalchemy.orm.sync`` - logs synchronization of attributes from parent to child instances during a flush()
+
+For example, to log SQL queries as well as unit of work debugging:
+
+.. sourcecode:: python+sql
+
+ import logging
+
+ logging.basicConfig()
+ logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
+ logging.getLogger('sqlalchemy.orm.unitofwork').setLevel(logging.DEBUG)
+
+By default, the log level is set to ``logging.ERROR`` within the entire ``sqlalchemy`` namespace so that no log operations occur, even within an application that has logging enabled otherwise.
+
+The ``echo`` flags present as keyword arguments to ``create_engine()`` and others, as well as the ``echo`` property on ``Engine``, when set to ``True``, will first attempt to ensure that logging is enabled. Unfortunately, the ``logging`` module provides no way of determining whether output has already been configured (note we are referring to whether a logging configuration has been set up, not just whether the logging level is set). For this reason, any ``echo=True`` flag will result in a call to ``logging.basicConfig()`` using ``sys.stdout`` as the destination. It also sets up a default format using the level name, timestamp, and logger name. Note that this configuration is applied **in addition** to any existing logger configurations. Therefore, **when using Python logging, ensure all echo flags are set to False at all times** to avoid getting duplicate log lines.
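+
+Combining the points above, a minimal sketch of the recommended setup is to configure the ``logging`` module directly and leave every ``echo`` flag at its default of ``False``:
+
+.. sourcecode:: python+sql
+
+    import logging
+
+    # configure logging once, at application startup
+    logging.basicConfig()
+    logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
+
+    # echo defaults to False, so no duplicate handler is added
+    db = create_engine('postgres://scott:tiger@localhost/test')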
+++ /dev/null
-from toc import TOCElement
-import docstring
-import re
-
-from sqlalchemy import schema, types, engine, sql, pool, orm, exceptions, databases, interfaces, util
-from sqlalchemy.sql import compiler, expression, visitors
-from sqlalchemy.engine import default, strategies, threadlocal, url
-from sqlalchemy.orm import shard
-from sqlalchemy.ext import orderinglist, associationproxy, sqlsoup, declarative, serializer
-
-def make_doc(obj, classes=None, functions=None, **kwargs):
- """generate a docstring.ObjectDoc structure for an individual module, list of classes, and list of functions."""
- obj = docstring.ObjectDoc(obj, classes=classes, functions=functions, **kwargs)
- return (obj.name, obj)
-
-def make_all_docs():
- """generate a docstring.AbstractDoc structure."""
- print "generating docstrings"
- objects = [
- make_doc(obj=engine),
- make_doc(obj=default),
- make_doc(obj=strategies),
- make_doc(obj=threadlocal),
- make_doc(obj=url),
- make_doc(obj=exceptions),
- make_doc(obj=interfaces),
- make_doc(obj=pool),
- make_doc(obj=schema),
- make_doc(obj=compiler),
- make_doc(obj=expression,
- classes=[getattr(expression, key) for key in expression.__all__ if isinstance(getattr(expression, key), type)] +
- [expression._CompareMixin, expression.Operators, expression.ColumnOperators,
- expression._SelectBaseMixin, expression._Immutable, expression._ValuesBase, expression._UpdateBase]
- ),
- make_doc(obj=visitors),
- make_doc(obj=types),
- make_doc(obj=util),
- make_doc(obj=orm),
- make_doc(obj=orm.attributes),
- make_doc(obj=orm.collections, classes=[orm.collections.collection,
- orm.collections.MappedCollection,
- orm.collections.CollectionAdapter]),
- make_doc(obj=orm.interfaces),
- make_doc(obj=orm.mapperlib, classes=[orm.mapperlib.Mapper]),
- make_doc(obj=orm.properties),
- make_doc(obj=orm.query, classes=[orm.query.Query]),
- make_doc(obj=orm.session, classes=[orm.session.Session, orm.session.SessionExtension]),
- make_doc(obj=orm.shard),
- make_doc(obj=declarative),
- make_doc(obj=associationproxy, classes=[associationproxy.AssociationProxy]),
- make_doc(obj=orderinglist, classes=[orderinglist.OrderingList]),
- make_doc(obj=serializer),
- make_doc(obj=sqlsoup),
- ] + [make_doc(getattr(__import__('sqlalchemy.databases.%s' % m).databases, m)) for m in databases.__all__]
- return objects
-
-def create_docstring_toc(data, root):
- """given a docstring.AbstractDoc structure, create new TOCElement nodes corresponding
- to the elements and cross-reference them back to the doc structure."""
- root = TOCElement("docstrings", name="docstrings", description="API Documentation", parent=root, requires_paged=True)
- files = []
- def create_obj_toc(obj, toc):
- if obj.isclass:
- s = []
- for elem in obj.inherits:
- if isinstance(elem, docstring.ObjectDoc):
- s.append(elem.name)
- else:
- s.append(str(elem))
- description = "class " + obj.classname + "(%s)" % (','.join(s))
- filename = toc.filename
- else:
- description = obj.description
- filename = re.sub(r'\W', '_', obj.name)
-
- toc = TOCElement(filename, obj.name, description, parent=toc, requires_paged=True)
- obj.toc_path = toc.path
- if not obj.isclass:
- create_module_file(obj, toc)
- files.append(filename)
-
- if not obj.isclass and obj.functions:
- functoc = TOCElement(toc.filename, name="modfunc", description="Module Functions", parent=toc)
- obj.mod_path = functoc.path
- for func in obj.functions:
- t = TOCElement(toc.filename, name=func.name, description=func.name + "()", parent=functoc)
- func.toc_path = t.path
- #elif obj.functions:
- # for func in obj.functions:
- # t = TOCElement(toc.filename, name=func.name, description=func.name, parent=toc)
- # func.toc_path = t.path
-
- if obj.classes:
- for class_ in obj.classes:
- create_obj_toc(class_, toc)
-
- for key, obj in data:
- create_obj_toc(obj, root)
- return files
-
-def create_module_file(obj, toc):
- outname = 'output/%s.html' % toc.filename
- print "->", outname
- header = """# -*- coding: utf-8 -*-
- <%%inherit file="module.html"/>
- <%%def name="title()">%s - %s</%%def>
- ## This file is generated. Edit the .txt files instead of this one.
- <%%!
- filename = '%s'
- docstring = '%s'
- %%>
- """ % (toc.root.doctitle, obj.description, toc.filename, obj.name)
- file(outname, 'w').write(header)
- return outname
+++ /dev/null
-#!/usr/bin/env python
-import sys,re,os,shutil
-from os import path
-import cPickle as pickle
-
-sys.path = ['../../lib', './lib'] + sys.path
-
-import sqlalchemy
-import gen_docstrings, read_markdown, toc
-from mako.lookup import TemplateLookup
-from mako import exceptions, runtime
-import time
-import optparse
-
-files = [
- 'index',
- 'documentation',
- 'intro',
- 'ormtutorial',
- 'sqlexpression',
- 'mappers',
- 'session',
- 'dbengine',
- 'metadata',
- 'types',
- 'pooling',
- 'plugins',
- 'docstrings',
- ]
-
-post_files = [
- 'copyright'
-]
-
-v = open(path.join(path.dirname(__file__), '..', '..', 'VERSION'))
-VERSION = v.readline().strip()
-v.close()
-
-parser = optparse.OptionParser(usage = "usage: %prog [options] [tests...]")
-parser.add_option("--file", action="store", dest="file", help="only generate file <file>")
-parser.add_option("--docstrings", action="store_true", dest="docstrings", help="only generate docstrings")
-parser.add_option("--version", action="store", dest="version", default=VERSION, help="version string")
-
-(options, args) = parser.parse_args()
-if options.file:
- to_gen = [options.file]
-else:
- to_gen = files + post_files
-
-title='SQLAlchemy 0.5 Documentation'
-version = options.version
-
-
-root = toc.TOCElement('', 'root', '', version=version, doctitle=title)
-
-shutil.copy('./content/index.html', './output/index.html')
-shutil.copy('./content/docstrings.html', './output/docstrings.html')
-shutil.copy('./content/documentation.html', './output/documentation.html')
-
-if not options.docstrings:
- read_markdown.parse_markdown_files(root, [f for f in files if f in to_gen])
-
-if not options.file or options.docstrings:
- docstrings = gen_docstrings.make_all_docs()
- doc_files = gen_docstrings.create_docstring_toc(docstrings, root)
-
- pickle.dump(docstrings, file('./output/compiled_docstrings.pickle', 'w'))
-
-if not options.docstrings:
- read_markdown.parse_markdown_files(root, [f for f in post_files if f in to_gen])
-
-if not options.file or options.docstrings:
- pickle.dump(root, file('./output/table_of_contents.pickle', 'w'))
-
-template_dirs = ['./templates', './output']
-output = os.path.dirname(os.getcwd())
-
-lookup = TemplateLookup(template_dirs, output_encoding='utf-8', module_directory='./modules')
-
-def genfile(name, outname):
- infile = name + ".html"
- outfile = file(outname, 'w')
- print infile, '->', outname
- t = lookup.get_template(infile)
- outfile.write(t.render(attributes={}))
-
-if not options.docstrings:
- for filename in to_gen:
- try:
- genfile(filename, os.path.join(os.getcwd(), '../', filename + ".html"))
- except:
- print exceptions.text_error_template().render()
-
-if not options.file or options.docstrings:
- for filename in doc_files:
- try:
- genfile(filename, os.path.join(os.getcwd(), '../', os.path.basename(filename) + ".html"))
- except:
- print exceptions.text_error_template().render()
-
-
-
-
-
-
-
--- /dev/null
+Table of Contents
+=================
+
+Main Documentation
+------------------
+
+.. toctree::
+ :glob:
+
+ intro
+ ormtutorial
+ sqlexpression
+ mappers
+ session
+ dbengine
+ metadata
+ reference/index
+
+Indices and tables
+------------------
+
+* :ref:`genindex`
+* :ref:`search`
+
--- /dev/null
+.. _overview_toplevel:
+
+=======================
+Overview / Installation
+=======================
+
+Overview
+========
+
+
+The SQLAlchemy SQL Toolkit and Object Relational Mapper is a comprehensive set of tools for working with databases and Python. It has several distinct areas of functionality which can be used individually or combined together. Its major API components, all public-facing, are illustrated below::
+
+ +-----------------------------------------------------------+
+ | Object Relational Mapper (ORM) |
+ +-----------------------------------------------------------+
+ +---------+ +------------------------------------+ +--------+
+ | | | SQL Expression Language | | |
+ | | +------------------------------------+ | |
+ | +-----------------------+ +--------------+ |
+ | Dialect/Execution | | Schema Management |
+ +---------------------------------+ +-----------------------+
+ +----------------------+ +----------------------------------+
+ | Connection Pooling | | Types |
+ +----------------------+ +----------------------------------+
+
+Above, the two most significant front-facing portions of SQLAlchemy are the **Object Relational Mapper** and the **SQL Expression Language**. These are two separate toolkits, one building off the other. SQL Expressions can be used independently of the ORM. When using the ORM, the SQL Expression language is used to establish object-relational configurations as well as in querying.
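+
+As a small, self-contained sketch of the SQL Expression Language used on its own, without the ORM (using an in-memory SQLite database; these constructs are introduced fully in :ref:`sqlexpression_toplevel`):
+
+.. sourcecode:: python+sql
+
+    from sqlalchemy import MetaData, Table, Column, Integer, String, create_engine, select
+
+    engine = create_engine('sqlite://')
+    meta = MetaData()
+    users = Table('users', meta,
+        Column('id', Integer, primary_key=True),
+        Column('name', String(50))
+    )
+    meta.create_all(bind=engine)   # issue CREATE TABLE
+
+    engine.execute(users.insert(), name='ed')
+    for row in engine.execute(select([users])):
+        print row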
+
+Tutorials
+=========
+
+* :ref:`ormtutorial_toplevel` - This describes the richest feature of SQLAlchemy, its object relational mapper. If you want to work with higher-level SQL which is constructed automatically for you, as well as management of Python objects, proceed to this tutorial.
+* :ref:`sqlexpression_toplevel` - The core of SQLAlchemy is its SQL expression language. The SQL Expression Language is a toolkit all its own, independent of the ORM package, which can be used to construct SQL expressions that can be programmatically built, modified, and executed, returning cursor-like result sets. It's a lot more lightweight than the ORM and is appropriate for higher-scaling SQL operations. It's also heavily present within the ORM's public-facing API, so advanced ORM users will want to master this language as well.
+
+Main Documentation
+==================
+
+* :ref:`datamapping_toplevel` - A comprehensive walkthrough of major ORM patterns and techniques.
+* :ref:`session_toplevel` - A detailed description of SQLAlchemy's Session object
+* :ref:`engines_toplevel` - Describes SQLAlchemy's database-connection facilities, including connection documentation and working with connections and transactions.
+* :ref:`metadata_toplevel` - All about schema management using ``MetaData`` and ``Table`` objects; reading database schemas into your application, creating and dropping tables, constraints, defaults, sequences, indexes.
+* :ref:`pooling_toplevel` - Further detail about SQLAlchemy's connection pool library.
+* :ref:`types` - Datatypes included with SQLAlchemy, their functions, as well as how to create your own types.
+* :ref:`plugins` - Included addons for SQLAlchemy
+
+API Reference
+=============
+
+An organized section of all SQLAlchemy APIs is at :ref:`api_reference_toplevel`.
+
+Installing SQLAlchemy
+======================
+
+Installing SQLAlchemy from scratch is most easily achieved with `setuptools <http://pypi.python.org/pypi/setuptools/>`_. Assuming it's installed, just run this from the command-line:
+
+.. sourcecode:: none
+
+ # easy_install SQLAlchemy
+
+This command will download the latest version of SQLAlchemy from the `Python Cheese Shop <http://pypi.python.org/pypi/SQLAlchemy>`_ and install it to your system.
+
+* `setuptools <http://peak.telecommunity.com/DevCenter/setuptools>`_
+* `install setuptools <http://peak.telecommunity.com/DevCenter/EasyInstall#installation-instructions>`_
+* `pypi <http://pypi.python.org/pypi/SQLAlchemy>`_
+
+Otherwise, you can install from the distribution using the ``setup.py`` script:
+
+.. sourcecode:: none
+
+ # python setup.py install
+
+Installing a Database API
+==========================
+
+SQLAlchemy is designed to operate with a `DB-API <http://www.python.org/doc/peps/pep-0249/>`_ implementation built for a particular database, and includes support for the most popular databases. The current list is at :ref:`supported_dbapis`.
+
+Checking the Installed SQLAlchemy Version
+=========================================
+
+This documentation covers SQLAlchemy version 0.5. If you're working on a system that already has SQLAlchemy installed, check the version from your Python prompt like this:
+
+.. sourcecode:: python+sql
+
+ >>> import sqlalchemy
+ >>> sqlalchemy.__version__ # doctest: +SKIP
+ 0.5.0
+
+0.4 to 0.5 Migration
+=====================
+
+Notes on what's changed from 0.4 to 0.5 are available on the SQLAlchemy wiki at `05Migration <http://www.sqlalchemy.org/trac/wiki/05Migration>`_.
+++ /dev/null
-"""
-defines a pickleable, recursive "generated python documentation" datastructure.
-"""
-
-import operator, re, types, string, inspect
-
-allobjects = {}
-
-class AbstractDoc(object):
- def __init__(self, obj):
- allobjects[id(obj)] = self
- self.id = id(obj)
- self.allobjects = allobjects
- self.toc_path = None
-
-class ObjectDoc(AbstractDoc):
- def __init__(self, obj, functions=None, classes=None, include_all_classes=False):
- super(ObjectDoc, self).__init__(obj)
- self.isclass = isinstance(obj, types.ClassType) or isinstance(obj, types.TypeType)
- self.name= obj.__name__
- self.include_all_classes = include_all_classes
- functions = functions
- classes= classes
-
- if not self.isclass:
- if not include_all_classes and hasattr(obj, '__all__'):
- objects = obj.__all__
- sort = True
- else:
- objects = obj.__dict__.keys()
- sort = True
- if functions is None:
- functions = [
- (x, getattr(obj, x, None))
- for x in objects
- if getattr(obj,x,None) is not None and
- (isinstance(getattr(obj,x), types.FunctionType))
- and not self._is_private_name(getattr(obj,x).__name__)]
- if sort:
- functions.sort(key=operator.itemgetter(0))
- if classes is None:
- classes = [getattr(obj, x, None) for x in objects
- if getattr(obj,x,None) is not None and
- (isinstance(getattr(obj,x), types.TypeType)
- or isinstance(getattr(obj,x), types.ClassType))
- and (self.include_all_classes or not self._is_private_name(getattr(obj,x).__name__))
- ]
- classes = list(set(classes))
- if sort:
- classes.sort(lambda a, b: cmp(a.__name__.replace('_', ''), b.__name__.replace('_', '')))
- else:
- if functions is None:
- methods = [
- (x, getattr(obj, x).im_func)
- for x in obj.__dict__.keys()
- if (isinstance(getattr(obj,x), types.MethodType) and
- (getattr(obj, x).__name__ == '__init__' or
- not self._is_private_name(x)))]
- props = [
- (x, getattr(obj, x))
- for x in obj.__dict__.keys()
- if (_is_property(getattr(obj,x)) and
- not self._is_private_name(x))]
-
- functions = methods + props
- functions.sort(_method_sort)
- if classes is None:
- classes = []
-
- if self.isclass:
- self.description = "class " + self.name
- self.classname = self.name
- if hasattr(obj, '__mro__'):
- l = []
- mro = list(obj.__mro__[1:])
- mro.reverse()
- for x in mro:
- for y in x.__mro__[1:]:
- if y in l:
- del l[l.index(y)]
- l.insert(0, x)
- self.description += "(" + string.join([x.__name__ for x in l], ',') + ")"
- self._inherits = [(id(x), x.__name__) for x in l]
- else:
- self._inherits = []
- else:
- self.description = "module " + self.name
-
- self.doc = obj.__doc__
-
- self.functions = []
-
- for name, func in functions:
- if isinstance(func, types.FunctionType):
- if self.isclass:
- self.functions.append(MethodDoc(name, func, self))
- else:
- self.functions.append(FunctionDoc(name, func))
- else:
- self.functions.append(PropertyDoc(name, func))
-
- self.classes = []
- for class_ in classes:
- self.classes.append(ObjectDoc(class_))
-
- def _is_private_name(self, name):
- if name in ('__weakref__', '__repr__','__str__', '__unicode__',
- '__getstate__', '__setstate__', '__reduce__',
- '__reduce_ex__', '__hash__'):
- return True
- elif re.match(r'^__.*__$', name):
- return False
- elif name.startswith('_'):
- return True
- else:
- return False
-
- def _get_inherits(self):
- for item in self._inherits:
- if item[0] in self.allobjects:
- yield self.allobjects[item[0]]
- else:
- yield item[1]
- inherits = property(_get_inherits)
- def accept_visitor(self, visitor):
- visitor.visit_object(self)
-
-def _is_property(elem):
- return isinstance(elem, property) or (hasattr(elem, '__get__') and hasattr(elem, '__set__'))
-
-class FunctionDoc(AbstractDoc):
- def __init__(self, name, func):
- super(FunctionDoc, self).__init__(func)
- argspec = inspect.getargspec(func)
- argnames = argspec[0]
- varargs = argspec[1]
- varkw = argspec[2]
- defaults = argspec[3] or ()
- argstrings = []
- for i in range(0, len(argnames)):
- if i >= len(argnames) - len(defaults):
- argstrings.append("%s=%s" % (argnames[i], repr(defaults[i - (len(argnames) - len(defaults))])))
- else:
- argstrings.append(argnames[i])
- if varargs is not None:
- argstrings.append("*%s" % varargs)
- if varkw is not None:
- argstrings.append("**%s" % varkw)
- self.argstrings = self.arglist = argstrings
- self.name = name
- self.link = func.__name__
- self.doc = func.__doc__
- def accept_visitor(self, visitor):
- visitor.visit_function(self)
-
-class MethodDoc(FunctionDoc):
- def __init__(self, name, func, owner):
- super(MethodDoc, self).__init__(name, func)
- if name == '__init__' and not self.doc:
- self.doc = "Construct a new ``%s``." % owner.name
-
-class PropertyDoc(AbstractDoc):
- def __init__(self, name, prop):
- super(PropertyDoc, self).__init__(prop)
- self.doc = prop.__doc__
- self.name = name
- self.link = name
- def accept_visitor(self, visitor):
- visitor.visit_property(self)
-
-def _method_sort(speca, specb):
- a = getattr(speca[1], '__name__', speca[0])
- b = getattr(specb[1], '__name__', specb[0])
-
- if a == '__init__': return -1
- if b == '__init__': return 1
-
- a_u = a.startswith('__') and a.endswith('__')
- b_u = b.startswith('__') and b.endswith('__')
-
- if a_u and not b_u: return 1
- if b_u and not a_u: return -1
-
- return cmp(a, b)
+++ /dev/null
-# $Id$
-# highlight.py - syntax highlighting functions for Myghty
-# Copyright (C) 2004 Michael Bayer mike_mp@zzzcomputing.com
-#
-# This module is part of SQLAlchemy and is released under
-# the MIT License: http://www.opensource.org/licenses/mit-license.php
-
-
-
-import re, StringIO, sys, string, os
-import token, tokenize, keyword
-
-# Highlighter - highlights Myghty and Python source code
-
-__all__ = ['highlight', 'PythonHighlighter', 'MyghtyHighlighter']
-
-pystyles = {
- token.ENDMARKER : 'python_operator' ,
- token.NAME : 'python_name' ,
- token.NUMBER : 'python_number' ,
- token.STRING : 'python_literal' ,
- token.NEWLINE : 'python_operator' ,
- token.INDENT : 'python_operator' ,
- token.DEDENT : 'python_operator' ,
- token.LPAR : 'python_enclosure' ,
- token.RPAR : 'python_enclosure' ,
- token.LSQB : 'python_enclosure' ,
- token.RSQB : 'python_enclosure' ,
- token.COLON : 'python_operator' ,
- token.COMMA : 'python_operator' ,
- token.SEMI : 'python_operator' ,
- token.PLUS : 'python_operator' ,
- token.MINUS : 'python_operator' ,
- token.STAR : 'python_operator' ,
- token.SLASH : 'python_operator' ,
- token.VBAR : 'python_operator' ,
- token.AMPER : 'python_operator' ,
- token.LESS : 'python_operator' ,
- token.GREATER : 'python_operator' ,
- token.EQUAL : 'python_operator' ,
- token.DOT : 'python_operator' ,
- token.PERCENT : 'python_operator' ,
- token.BACKQUOTE : 'python_operator' ,
- token.LBRACE : 'python_enclosure',
- token.RBRACE : 'python_enclosure' ,
- token.EQEQUAL : 'python_operator' ,
- token.NOTEQUAL : 'python_operator' ,
- token.LESSEQUAL : 'python_operator' ,
- token.GREATEREQUAL : 'python_operator' ,
- token.TILDE : 'python_operator' ,
- token.CIRCUMFLEX : 'python_operator' ,
- token.LEFTSHIFT : 'python_operator' ,
- token.RIGHTSHIFT : 'python_operator' ,
- token.DOUBLESTAR : 'python_operator' ,
- token.PLUSEQUAL : 'python_operator' ,
- token.MINEQUAL : 'python_operator' ,
- token.STAREQUAL : 'python_operator' ,
- token.SLASHEQUAL : 'python_operator' ,
- token.PERCENTEQUAL : 'python_operator' ,
- token.AMPEREQUAL : 'python_operator' ,
- token.VBAREQUAL : 'python_operator' ,
- token.CIRCUMFLEXEQUAL : 'python_operator' ,
- token.LEFTSHIFTEQUAL : 'python_operator' ,
- token.RIGHTSHIFTEQUAL : 'python_operator' ,
- token.DOUBLESTAREQUAL : 'python_operator' ,
- token.DOUBLESLASH : 'python_operator' ,
- token.DOUBLESLASHEQUAL : 'python_operator' ,
- token.OP : 'python_operator' ,
- token.ERRORTOKEN : 'python_operator' ,
- token.N_TOKENS : 'python_operator' ,
- token.NT_OFFSET : 'python_operator' ,
- tokenize.COMMENT: 'python_comment',
- }
-
-html_escapes = {
- '&' : '&amp;',
- '>' : '&gt;',
- '<' : '&lt;',
- '"' : '&quot;'
-}
-
-def do_html_escape(string):
- #return "@" + re.sub(r"([&<>])", lambda m: html_escapes[m.group()], string) + "+"
- return re.sub(r"([&<>])", lambda m: html_escapes[m.group()], string)
-
-def highlight(source, filename = None, syntaxtype = None, html_escape = True):
- if syntaxtype is not None:
- highlighter = highlighters.get(syntaxtype, None)
- elif filename is not None:
- (root, filename) = os.path.split(filename)
- highlighter = highlighters.get(filename, None)
- if highlighter is None:
- (root, ext) = os.path.splitext(filename)
- highlighter = highlighters.get(ext, None)
- else:
- highlighter = None
-
- if highlighter is None:
- if html_escape:
- return do_html_escape(source)
- else:
- return source
- else:
- return highlighter(source, html_escape = html_escape).highlight()
-
-class Highlighter:
- def __init__(self, source, output = None, html_escape = True):
- self.source = source
- self.pos = 0
- self.html_escape = html_escape
- if output is None:
- self.output = StringIO.StringIO()
- else:
- self.output = output
-
- def content(self):
- return self.output.getvalue()
-
- def highlight(self):raise NotImplementedError()
-
-
- def colorize(self, tokens):
- for pair in tokens:
- if pair[1] is None:
- if self.html_escape:
- self.output.write(do_html_escape(pair[0]))
- else:
- self.output.write(pair[0])
- else:
- if self.html_escape:
- self.output.write('<span class="%s">%s</span>' % (pair[1], do_html_escape(pair[0])))
- else:
- self.output.write('<span class="%s">%s</span>' % (pair[1], pair[0]))
-
-
-class PythonHighlighter(Highlighter):
-
- def _line_grid(self, str, start, end):
- lines = re.findall(re.compile(r'[^\n]*\n?', re.S), str)
- r = 0
- for l in lines[0 : end[0] - start[0]]:
- r += len(l)
- r += end[1]
- return (start, (start[0], r))
-
- def highlight(self):
- buf = StringIO.StringIO(self.source)
-
- # tokenize module not too good at getting the
- # whitespace at the end of a python block
- trailingspace = re.search(r"\n([ \t]+$)", self.source, re.S)
- if trailingspace:
- trailingspace = trailingspace.group(1)
-
- curl = -1
- tokens = []
- curstyle = None
- line = None
-
- for t in tokenize.generate_tokens(lambda: buf.readline()):
- if t[2][0] != curl:
- curl = t[2][0]
- curc = 0
-
- line = t[4]
-
- # pick up whitespace and output
- if t[2][1] > curc:
- tokens.append(line[curc : t[2][1]])
- curc = t[2][1]
-
- if self.get_style(t[0], t[1]) != curstyle:
- if tokens:
- self.colorize([(string.join(tokens, ''), curstyle)])
- tokens = []
- curstyle = self.get_style(t[0], t[1])
-
- (start, end) = self._line_grid(line, t[2], t[3])
- text = line[start[1]:end[1]]
-
- # special hardcoded rule to allow "interactive" demos without
- # >>> getting sucked in as >> , > operators
- if text == '">>>"':
- text = '>>>'
- tokens.append(text)
- curc = t[3][1]
- curl = t[3][0]
-
- # any remaining content to output, output it
- if tokens:
- self.colorize([(string.join(tokens, ''), curstyle)])
-
- if trailingspace:
- self.output.write(trailingspace)
-
- return self.content()
-
- def get_style(self, tokenid, str):
- if tokenid == token.NAME:
- if keyword.iskeyword(str):
- return "python_keyword"
- else:
- return "python_name"
- elif tokenid == token.OP:
- if "()[]{}".find(str) != -1:
- return "python_enclosure"
- else:
- return "python_operator"
- else:
- return pystyles.get(tokenid, None)
-
-class MyghtyHighlighter(Highlighter):
-
- def _match(self, regexp):
-
- match = regexp.match(self.source, self.pos)
- if match:
- (start, end) = match.span()
- self.output.write(self.source[self.pos:start])
-
- if start == end:
- self.pos = end + 1
- else:
- self.pos = end
-
- return match
- else:
- return None
-
-
- def highlight(self):
-
- while (self.pos < len(self.source)):
- if self.match_named_block():
- continue
-
- if self.match_block():
- continue
-
- if self.match_comp_call():
- continue
-
- if self.match_comp_content_call():
- continue
-
- if self.match_substitution():
- continue
-
- if self.match_line():
- continue
-
- if self.match_text():
- continue;
-
- break
-
- return self.content()
-
-
- def pythonize(self, text):
- py = PythonHighlighter(text, output = self.output)
- py.highlight()
-
- def match_text(self):
- textmatch = re.compile(r"""
- (.*?) # anything, followed by:
- (
- (?<=\n)(?=[%#]) # an eval or comment line
- |
- (?=</?[%&]) # a substitution or block or call start or end
- # - don't consume
- |
- (\\\n) # an escaped newline
- |
- \Z # end of string
- )""", re.X | re.S)
-
- match = self._match(textmatch)
- if match:
- self.colorize([(match.group(1), 'text')])
- if match.group(3):
- self.colorize([(match.group(3), 'python_operator')])
- return True
- else:
- return False
-
- def match_named_block(self):
- namedmatch = re.compile(r"(<%(def|method))(.*?)(>)(.*?)(</%\2>)", re.M | re.S)
-
- match = self._match(namedmatch)
- if match:
- self.colorize([(match.group(1), 'deftag')])
- self.colorize([(match.group(3), 'compname')])
- self.colorize([(match.group(4), 'deftag')])
- MyghtyHighlighter(match.group(5), self.output).highlight()
- self.colorize([(match.group(6), 'deftag')])
- return True
- else:
- return False
-
- def match_block(self):
- blockmatch = re.compile(r"(<%(\w+).*?>)(.*?)(</%\2\s*>)", re.M | re.S)
- match = self._match(blockmatch)
-
-
- if match:
- style = {
- 'doc': 'doctag',
- 'args': 'argstag',
- }.setdefault(match.group(2), "blocktag")
-
- self.colorize([(match.group(1), style)])
- if style == 'doctag':
- self.colorize([(match.group(3), 'doctag_text')])
-
- else:
- self.pythonize(match.group(3))
- self.colorize([(match.group(4), style)])
-
- return True
- else:
- return False
-
- def match_comp_call(self):
- compmatch = re.compile(r"(<&[^|])(.*?)(,.*?)?(&>)", re.M)
- match = self._match(compmatch)
- if match:
- self.colorize([(match.group(1), 'compcall')])
- self.colorize([(match.group(2), 'compname')])
- if match.group(3) is not None:
- self.pythonize(match.group(3))
- self.colorize([(match.group(4), 'compcall')])
- return True
- else:
- return False
-
-
- def match_substitution(self):
- submatch = re.compile(r"(<%)(.*?)(%>)", re.M)
- match = self._match(submatch)
- if match:
- self.colorize([(match.group(1), 'substitution')])
- self.pythonize(match.group(2))
- self.colorize([(match.group(3), 'substitution')])
- return True
- else:
- return False
-
- def match_comp_content_call(self):
- compcontmatch = re.compile(r"(<&\|)(.*?)(,.*?)?(&>)|(</&>)", re.M | re.S)
- match = self._match(compcontmatch)
- if match:
- if match.group(5) is not None:
- self.colorize([(match.group(5), 'compcall')])
- else:
- self.colorize([(match.group(1), 'compcall')])
- self.colorize([(match.group(2), 'compname')])
- if match.group(3) is not None:
- self.pythonize(match.group(3))
- self.colorize([(match.group(4), 'compcall')])
- return True
- else:
- return False
-
- def match_line(self):
- linematch = re.compile(r"(?<=^)([%#])([^\n]*)(\n|\Z)", re.M)
- match = self._match(linematch)
- if match:
- if match.group(1) == '#':
- self.colorize([(match.group(0), 'doctag')])
- else:
- #self.colorize([(match.group(0), 'doctag')])
- self.colorize([(match.group(1), 'controlline')])
- self.pythonize(match.group(2))
- self.output.write(match.group(3))
- return True
- else:
- return False
-
-
-highlighters = {
- '.myt': MyghtyHighlighter,
- '.myc': MyghtyHighlighter,
- 'autohandler' : MyghtyHighlighter,
- 'dhandler': MyghtyHighlighter,
- '.py': PythonHighlighter,
- 'myghty': MyghtyHighlighter,
- 'python' : PythonHighlighter
-}
+++ /dev/null
-#!/usr/bin/env python
-
-# The following constant specifies the name used in the usage
-# statement displayed for python versions lower than 2.3. (With
-# python2.3 and higher the usage statement is generated by optparse
-# and uses the actual name of the executable called.)
-
-EXECUTABLE_NAME_FOR_USAGE = "python markdown.py"
-
-SPEED_TEST = 0
-
-"""
-====================================================================
-IF YOU ARE LOOKING TO EXTEND MARKDOWN, SEE THE "FOOTNOTES" SECTION
-====================================================================
-
-Python-Markdown
-===============
-
-Converts Markdown to HTML. Basic usage as a module:
-
- import markdown
- html = markdown.markdown(your_text_string)
-
-Started by [Manfred Stienstra](http://www.dwerg.net/). Continued and
-maintained by [Yuri Takhteyev](http://www.freewisdom.org).
-
-Project website: http://www.freewisdom.org/projects/python-markdown
-Contact: yuri [at] freewisdom.org
-
-License: GPL 2 (http://www.gnu.org/copyleft/gpl.html) or BSD
-
-Version: 1.5a (July 9, 2006)
-
-For changelog, see end of file
-"""
-
-import re, sys, os, random, codecs
-
-# set debug level: 3 none, 2 critical, 1 informative, 0 all
-(VERBOSE, INFO, CRITICAL, NONE) = range(4)
-
-MESSAGE_THRESHOLD = CRITICAL
-
-def message(level, text) :
- if level >= MESSAGE_THRESHOLD :
- print text
-
-
-# --------------- CONSTANTS YOU MIGHT WANT TO MODIFY -----------------
-
-# all tabs will be expanded to up to this many spaces
-TAB_LENGTH = 4
-ENABLE_ATTRIBUTES = 1
-SMART_EMPHASIS = 1
-
-# --------------- CONSTANTS YOU _SHOULD NOT_ HAVE TO CHANGE ----------
-
-# a template for html placeholders
-HTML_PLACEHOLDER_PREFIX = "qaodmasdkwaspemas"
-HTML_PLACEHOLDER = HTML_PLACEHOLDER_PREFIX + "%dajkqlsmdqpakldnzsdfls"
-
-BLOCK_LEVEL_ELEMENTS = ['p', 'div', 'blockquote', 'pre', 'table',
- 'dl', 'ol', 'ul', 'script', 'noscript',
- 'form', 'fieldset', 'iframe', 'math', 'ins',
- 'del', 'hr', 'hr/', 'style']
-
-def is_block_level (tag) :
- return ( (tag in BLOCK_LEVEL_ELEMENTS) or
- (tag[0] == 'h' and tag[1] in "0123456789") )
-
-"""
-======================================================================
-========================== NANODOM ===================================
-======================================================================
-
-The three classes below implement some of the most basic DOM
-methods. I use this instead of minidom because I need a simpler
-functionality and do not want to require additional libraries.
-
-Importantly, NanoDom does not do normalization, which is what we
-want. It also adds extra white space when converting DOM to string
-"""
-
-
-class Document :
-
- def appendChild(self, child) :
- self.documentElement = child
- child.parent = self
- self.entities = {}
-
- def createElement(self, tag, textNode=None) :
- el = Element(tag)
- el.doc = self
- if textNode :
- el.appendChild(self.createTextNode(textNode))
- return el
-
- def createTextNode(self, text) :
- node = TextNode(text)
- node.doc = self
- return node
-
- def createEntityReference(self, entity):
- if entity not in self.entities:
- self.entities[entity] = EntityReference(entity)
- return self.entities[entity]
-
- def toxml (self) :
- return self.documentElement.toxml()
-
- def normalizeEntities(self, text) :
-
- pairs = [ ("&", "&"),
- ("<", "<"),
- (">", ">"),
- ("\"", """)]
-
-
- for old, new in pairs :
- text = text.replace(old, new)
- return text
-
- def find(self, test) :
- return self.documentElement.find(test)
-
- def unlink(self) :
- self.documentElement.unlink()
- self.documentElement = None
-
-
-class Element :
-
- type = "element"
-
- def __init__ (self, tag) :
-
- self.nodeName = tag
- self.attributes = []
- self.attribute_values = {}
- self.childNodes = []
-
- def unlink(self) :
- for child in self.childNodes :
- if child.type == "element" :
- child.unlink()
- self.childNodes = None
-
- def setAttribute(self, attr, value) :
- if not attr in self.attributes :
- self.attributes.append(attr)
-
- self.attribute_values[attr] = value
-
- def insertChild(self, position, child) :
- self.childNodes.insert(position, child)
- child.parent = self
-
- def removeChild(self, child) :
- self.childNodes.remove(child)
-
- def replaceChild(self, oldChild, newChild) :
- position = self.childNodes.index(oldChild)
- self.removeChild(oldChild)
- self.insertChild(position, newChild)
-
- def appendChild(self, child) :
- self.childNodes.append(child)
- child.parent = self
-
- def handleAttributes(self) :
- pass
-
- def find(self, test, depth=0) :
- """ Returns a list of descendants that pass the test function """
- matched_nodes = []
- for child in self.childNodes :
- if test(child) :
- matched_nodes.append(child)
- if child.type == "element" :
- matched_nodes += child.find(test, depth+1)
- return matched_nodes
-
- def toxml(self):
- if ENABLE_ATTRIBUTES :
- for child in self.childNodes:
- child.handleAttributes()
- buffer = ""
- if self.nodeName in ['h1', 'h2', 'h3', 'h4'] :
- buffer += "\n"
- elif self.nodeName in ['li'] :
- buffer += "\n "
- buffer += "<" + self.nodeName
- for attr in self.attributes :
- value = self.attribute_values[attr]
- value = self.doc.normalizeEntities(value)
- buffer += ' %s="%s"' % (attr, value)
- if self.childNodes or self.nodeName in ['blockquote']:
- buffer += ">"
- for child in self.childNodes :
- buffer += child.toxml()
- if self.nodeName == 'p' :
- buffer += "\n"
- elif self.nodeName == 'li' :
- buffer += "\n "
- buffer += "</%s>" % self.nodeName
- else :
- buffer += "/>"
- if self.nodeName in ['p', 'li', 'ul', 'ol',
- 'h1', 'h2', 'h3', 'h4'] :
- buffer += "\n"
-
- return buffer
-
-
-class TextNode :
-
- type = "text"
- attrRegExp = re.compile(r'\{@([^\}]*)=([^\}]*)}') # {@id=123}
-
- def __init__ (self, text) :
- self.value = text
-
- def attributeCallback(self, match) :
- self.parent.setAttribute(match.group(1), match.group(2))
-
- def handleAttributes(self) :
- self.value = self.attrRegExp.sub(self.attributeCallback, self.value)
-
- def toxml(self) :
- text = self.value
- if not text.startswith(HTML_PLACEHOLDER_PREFIX):
- if self.parent.nodeName == "p" :
- text = text.replace("\n", "\n ")
- elif (self.parent.nodeName == "li"
- and self.parent.childNodes[0]==self):
- text = "\n " + text.replace("\n", "\n ")
- text = self.doc.normalizeEntities(text)
- return text
-
-
-class EntityReference:
-
- type = "entity_ref"
-
- def __init__(self, entity):
- self.entity = entity
-
- def handleAttributes(self):
- pass
-
- def toxml(self):
- return "&" + self.entity + ";"
-
-
-"""
-======================================================================
-========================== PRE-PROCESSORS ============================
-======================================================================
-
-Preprocessors munge source text before we start doing anything too
-complicated.
-
-Each preprocessor implements a "run" method that takes a pointer to a list of lines of the document,
-modifies it as necessary and returns either the same pointer or a
-pointer to a new list. Preprocessors must extend
-markdown.Preprocessor.
-
-"""
-
-
-class Preprocessor :
- pass
-
-
-class HeaderPreprocessor (Preprocessor):
-
- """
- Replaces underlined headers with hashed headers to avoid
- the need for lookahead later.
- """
-
- def run (self, lines) :
-
- i = -1
- while i+1 < len(lines) :
- i = i+1
- if not lines[i].strip() :
- continue
-
- if lines[i].startswith("#") :
- lines.insert(i+1, "\n")
-
- if (i+1 <= len(lines)
- and lines[i+1]
- and lines[i+1][0] in ['-', '=']) :
-
- underline = lines[i+1].strip()
-
- if underline == "="*len(underline) :
- lines[i] = "# " + lines[i].strip()
- lines[i+1] = ""
- elif underline == "-"*len(underline) :
- lines[i] = "## " + lines[i].strip()
- lines[i+1] = ""
-
- #for l in lines :
- # print l.encode('utf8')
- #sys.exit(0)
-
- return lines
-
-HEADER_PREPROCESSOR = HeaderPreprocessor()
-
-class LinePreprocessor (Preprocessor):
- """Deals with HR lines (needs to be done before processing lists)"""
-
- def run (self, lines) :
- for i in range(len(lines)) :
- if self._isLine(lines[i]) :
- lines[i] = "<hr />"
- return lines
-
- def _isLine(self, block) :
- """Determines if a block should be replaced with an <HR>"""
- if block.startswith(" ") : return 0 # a code block
- text = "".join([x for x in block if not x.isspace()])
- if len(text) <= 2 :
- return 0
- for pattern in ['isline1', 'isline2', 'isline3'] :
- m = RE.regExp[pattern].match(text)
- if (m and m.group(1)) :
- return 1
- else:
- return 0
-
-LINE_PREPROCESSOR = LinePreprocessor()
-
-
-class LineBreaksPreprocessor (Preprocessor):
- """Replaces double spaces at the end of the lines with <br/ >."""
-
- def run (self, lines) :
- for i in range(len(lines)) :
- if (lines[i].endswith(" ")
- and not RE.regExp['tabbed'].match(lines[i]) ):
- lines[i] += "<br />"
- return lines
-
-LINE_BREAKS_PREPROCESSOR = LineBreaksPreprocessor()
-
-
-class HtmlBlockPreprocessor (Preprocessor):
- """Removes html blocks from self.lines"""
-
- def _get_left_tag(self, block):
- return block[1:].replace(">", " ", 1).split()[0].lower()
-
-
- def _get_right_tag(self, left_tag, block):
- return block.rstrip()[-len(left_tag)-2:-1].lower()
-
- def _equal_tags(self, left_tag, right_tag):
- if left_tag in ['?', '?php', 'div'] : # handle PHP, etc.
- return True
- if ("/" + left_tag) == right_tag:
- return True
- elif left_tag == right_tag[1:] \
- and right_tag[0] != "<":
- return True
- else:
- return False
-
- def _is_oneliner(self, tag):
- return (tag in ['hr', 'hr/'])
-
-
- def run (self, lines) :
- new_blocks = []
- text = "\n".join(lines)
- text = text.split("\n\n")
-
- items = []
- left_tag = ''
- right_tag = ''
- in_tag = False # flag
-
- for block in text:
- if block.startswith("\n") :
- block = block[1:]
-
- if not in_tag:
-
- if block.startswith("<"):
-
- left_tag = self._get_left_tag(block)
- right_tag = self._get_right_tag(left_tag, block)
-
- if not (is_block_level(left_tag) \
- or block[1] in ["!", "?", "@", "%"]):
- new_blocks.append(block)
- continue
-
- if self._is_oneliner(left_tag):
- new_blocks.append(block.strip())
- continue
-
- if block[1] == "!":
- # is a comment block
- left_tag = "--"
- right_tag = self._get_right_tag(left_tag, block)
- # keep checking conditions below and maybe just append
-
- if block.rstrip().endswith(">") \
- and self._equal_tags(left_tag, right_tag):
- new_blocks.append(
- self.stash.store(block.strip()))
- continue
- elif not block[1] == "!":
- # if is block level tag and is not complete
- items.append(block.strip())
- in_tag = True
- continue
-
- new_blocks.append(block)
-
- else:
- items.append(block.strip())
-
- right_tag = self._get_right_tag(left_tag, block)
- if self._equal_tags(left_tag, right_tag):
- # if find closing tag
- in_tag = False
- new_blocks.append(
- self.stash.store('\n\n'.join(items)))
- items = []
-
- return "\n\n".join(new_blocks).split("\n")
-
-HTML_BLOCK_PREPROCESSOR = HtmlBlockPreprocessor()
-
-
-class ReferencePreprocessor (Preprocessor):
-
- def run (self, lines) :
-
- new_text = [];
- for line in lines:
- m = RE.regExp['reference-def'].match(line)
- if m:
- id = m.group(2).strip().lower()
- t = m.group(4).strip() # potential title
- if not t :
- self.references[id] = (m.group(3), t)
- elif (len(t) >= 2
- and (t[0] == t[-1] == "\""
- or t[0] == t[-1] == "\'"
- or (t[0] == "(" and t[-1] == ")") ) ) :
- self.references[id] = (m.group(3), t[1:-1])
- else :
- new_text.append(line)
- else:
- new_text.append(line)
-
- return new_text #+ "\n"
-
-REFERENCE_PREPROCESSOR = ReferencePreprocessor()
-
-"""
-======================================================================
-========================== INLINE PATTERNS ===========================
-======================================================================
-
-Inline patterns such as *emphasis* are handled by means of auxiliary
-objects, one per pattern. Pattern objects must be instances of classes
-that extend markdown.Pattern. Each pattern object uses a single regular
-expression and needs support the following methods:
-
- pattern.getCompiledRegExp() - returns a regular expression
-
- pattern.handleMatch(m, doc) - takes a match object and returns
- a NanoDom node (as a part of the provided
- doc) or None
-
-All of python markdown's built-in patterns subclass from Pattern,
-but you can add additional patterns that don't.
-
-Also note that all the regular expressions used by inline must
-capture the whole block. For this reason, they all start with
-'^(.*)' and end with '(.*)!'. In case with built-in expression
-Pattern takes care of adding the "^(.*)" and "(.*)!".
-
-Finally, the order in which regular expressions are applied is very
-important - e.g. if we first replace http://.../ links with <a> tags
-and _then_ try to replace inline html, we would end up with a mess.
-So, we apply the expressions in the following order:
-
- * escape and backticks have to go before everything else, so
- that we can preempt any markdown patterns by escaping them.
-
- * then we handle auto-links (must be done before inline html)
-
- * then we handle inline HTML. At this point we will simply
- replace all inline HTML strings with a placeholder and add
- the actual HTML to a hash.
-
- * then inline images (must be done before links)
-
- * then bracketed links, first regular then reference-style
-
- * finally we apply strong and emphasis
-"""
-
-NOBRACKET = r'[^\]\[]*'
-BRK = ( r'\[('
- + (NOBRACKET + r'(\['+NOBRACKET)*6
- + (NOBRACKET+ r'\])*'+NOBRACKET)*6
- + NOBRACKET + r')\]' )
-
-BACKTICK_RE = r'\`([^\`]*)\`' # `e= m*c^2`
-DOUBLE_BACKTICK_RE = r'\`\`(.*)\`\`' # ``e=f("`")``
-ESCAPE_RE = r'\\(.)' # \<
-EMPHASIS_RE = r'\*([^\*]*)\*' # *emphasis*
-STRONG_RE = r'\*\*(.*)\*\*' # **strong**
-STRONG_EM_RE = r'\*\*\*([^_]*)\*\*\*' # ***strong***
-
-if SMART_EMPHASIS:
- EMPHASIS_2_RE = r'(?<!\S)_(\S[^_]*)_' # _emphasis_
-else :
- EMPHASIS_2_RE = r'_([^_]*)_' # _emphasis_
-
-STRONG_2_RE = r'__([^_]*)__' # __strong__
-STRONG_EM_2_RE = r'___([^_]*)___' # ___strong___
-
-LINK_RE = BRK + r'\s*\(([^\)]*)\)' # [text](url)
-LINK_ANGLED_RE = BRK + r'\s*\(<([^\)]*)>\)' # [text](<url>)
-IMAGE_LINK_RE = r'\!' + BRK + r'\s*\(([^\)]*)\)' # 
-REFERENCE_RE = BRK+ r'\s*\[([^\]]*)\]' # [Google][3]
-IMAGE_REFERENCE_RE = r'\!' + BRK + '\s*\[([^\]]*)\]' # ![alt text][2]
-NOT_STRONG_RE = r'( \* )' # stand-alone * or _
-AUTOLINK_RE = r'<(http://[^>]*)>' # <http://www.123.com>
-AUTOMAIL_RE = r'<([^> \!]*@[^> ]*)>' # <me@example.com>
-#HTML_RE = r'(\<[^\>]*\>)' # <...>
-HTML_RE = r'(\<[a-zA-Z/][^\>]*\>)' # <...>
-ENTITY_RE = r'(&[\#a-zA-Z0-9]*;)' # &amp;
-
-class Pattern:
-
- def __init__ (self, pattern) :
- self.pattern = pattern
- self.compiled_re = re.compile("^(.*)%s(.*)$" % pattern, re.DOTALL)
-
- def getCompiledRegExp (self) :
- return self.compiled_re
-
-BasePattern = Pattern # for backward compatibility
-
-class SimpleTextPattern (Pattern) :
-
- def handleMatch(self, m, doc) :
- return doc.createTextNode(m.group(2))
-
-class SimpleTagPattern (Pattern):
-
- def __init__ (self, pattern, tag) :
- Pattern.__init__(self, pattern)
- self.tag = tag
-
- def handleMatch(self, m, doc) :
- el = doc.createElement(self.tag)
- el.appendChild(doc.createTextNode(m.group(2)))
- return el
-
-class BacktickPattern (Pattern):
-
- def __init__ (self, pattern):
- Pattern.__init__(self, pattern)
- self.tag = "code"
-
- def handleMatch(self, m, doc) :
- el = doc.createElement(self.tag)
- text = m.group(2).strip()
- #text = text.replace("&", "&amp;")
- el.appendChild(doc.createTextNode(text))
- return el
-
-
-class DoubleTagPattern (SimpleTagPattern) :
-
- def handleMatch(self, m, doc) :
- tag1, tag2 = self.tag.split(",")
- el1 = doc.createElement(tag1)
- el2 = doc.createElement(tag2)
- el1.appendChild(el2)
- el2.appendChild(doc.createTextNode(m.group(2)))
- return el1
-
-
-class HtmlPattern (Pattern):
-
- def handleMatch (self, m, doc) :
- place_holder = self.stash.store(m.group(2))
- return doc.createTextNode(place_holder)
-
-
-class LinkPattern (Pattern):
-
- def handleMatch(self, m, doc) :
- el = doc.createElement('a')
- el.appendChild(doc.createTextNode(m.group(2)))
- parts = m.group(9).split()
- # We should now have [], [href], or [href, title]
- if parts :
- el.setAttribute('href', parts[0])
- else :
- el.setAttribute('href', "")
- if len(parts) > 1 :
- # we also got a title
- title = " ".join(parts[1:]).strip()
- title = dequote(title) #.replace('"', "&quot;")
- el.setAttribute('title', title)
- return el
-
-
-class ImagePattern (Pattern):
-
- def handleMatch(self, m, doc):
- el = doc.createElement('img')
- src_parts = m.group(9).split()
- el.setAttribute('src', src_parts[0])
- if len(src_parts) > 1 :
- el.setAttribute('title', dequote(" ".join(src_parts[1:])))
- if ENABLE_ATTRIBUTES :
- text = doc.createTextNode(m.group(2))
- el.appendChild(text)
- text.handleAttributes()
- truealt = text.value
- el.childNodes.remove(text)
- else:
- truealt = m.group(2)
- el.setAttribute('alt', truealt)
- return el
-
-class ReferencePattern (Pattern):
-
- def handleMatch(self, m, doc):
- if m.group(9) :
- id = m.group(9).lower()
- else :
- # if we got something like "[Google][]"
- # we'll use "google" as the id
- id = m.group(2).lower()
- if not self.references.has_key(id) : # ignore undefined refs
- return None
- href, title = self.references[id]
- text = m.group(2)
- return self.makeTag(href, title, text, doc)
-
- def makeTag(self, href, title, text, doc):
- el = doc.createElement('a')
- el.setAttribute('href', href)
- if title :
- el.setAttribute('title', title)
- el.appendChild(doc.createTextNode(text))
- return el
-
-
-class ImageReferencePattern (ReferencePattern):
-
- def makeTag(self, href, title, text, doc):
- el = doc.createElement('img')
- el.setAttribute('src', href)
- if title :
- el.setAttribute('title', title)
- el.setAttribute('alt', text)
- return el
-
-
-class AutolinkPattern (Pattern):
-
- def handleMatch(self, m, doc):
- el = doc.createElement('a')
- el.setAttribute('href', m.group(2))
- el.appendChild(doc.createTextNode(m.group(2)))
- return el
-
-class AutomailPattern (Pattern):
-
- def handleMatch(self, m, doc) :
- el = doc.createElement('a')
- email = m.group(2)
- if email.startswith("mailto:"):
- email = email[len("mailto:"):]
- for letter in email:
- entity = doc.createEntityReference("#%d" % ord(letter))
- el.appendChild(entity)
- mailto = "mailto:" + email
- mailto = "".join(['&#%d;' % ord(letter) for letter in mailto])
- el.setAttribute('href', mailto)
- return el
-
-ESCAPE_PATTERN = SimpleTextPattern(ESCAPE_RE)
-NOT_STRONG_PATTERN = SimpleTextPattern(NOT_STRONG_RE)
-
-BACKTICK_PATTERN = BacktickPattern(BACKTICK_RE)
-DOUBLE_BACKTICK_PATTERN = BacktickPattern(DOUBLE_BACKTICK_RE)
-STRONG_PATTERN = SimpleTagPattern(STRONG_RE, 'strong')
-STRONG_PATTERN_2 = SimpleTagPattern(STRONG_2_RE, 'strong')
-EMPHASIS_PATTERN = SimpleTagPattern(EMPHASIS_RE, 'em')
-EMPHASIS_PATTERN_2 = SimpleTagPattern(EMPHASIS_2_RE, 'em')
-
-STRONG_EM_PATTERN = DoubleTagPattern(STRONG_EM_RE, 'strong,em')
-STRONG_EM_PATTERN_2 = DoubleTagPattern(STRONG_EM_2_RE, 'strong,em')
-
-LINK_PATTERN = LinkPattern(LINK_RE)
-LINK_ANGLED_PATTERN = LinkPattern(LINK_ANGLED_RE)
-IMAGE_LINK_PATTERN = ImagePattern(IMAGE_LINK_RE)
-IMAGE_REFERENCE_PATTERN = ImageReferencePattern(IMAGE_REFERENCE_RE)
-REFERENCE_PATTERN = ReferencePattern(REFERENCE_RE)
-
-HTML_PATTERN = HtmlPattern(HTML_RE)
-ENTITY_PATTERN = HtmlPattern(ENTITY_RE)
-
-AUTOLINK_PATTERN = AutolinkPattern(AUTOLINK_RE)
-AUTOMAIL_PATTERN = AutomailPattern(AUTOMAIL_RE)
-
-
-"""
-======================================================================
-========================== POST-PROCESSORS ===========================
-======================================================================
-
-Markdown also allows post-processors, which are similar to
-preprocessors in that they need to implement a "run" method. Unlike
-pre-processors, they take a NanoDom document as a parameter and work
-with that.
-
-Post-Processor should extend markdown.Postprocessor.
-
-There are currently no standard post-processors, but the footnote
-extension below uses one.
-"""
-
-class Postprocessor :
- pass
-
-
-"""
-======================================================================
-========================== MISC AUXILIARY CLASSES ====================
-======================================================================
-"""
-
-class HtmlStash :
- """This class is used for stashing HTML objects that we extract
- in the beginning and replace with place-holders."""
-
- def __init__ (self) :
- self.html_counter = 0 # for counting inline html segments
- self.rawHtmlBlocks=[]
-
- def store(self, html) :
- """Saves an HTML segment for later reinsertion. Returns a
- placeholder string that needs to be inserted into the
- document.
-
- @param html: an html segment
- @returns : a placeholder string """
- self.rawHtmlBlocks.append(html)
- placeholder = HTML_PLACEHOLDER % self.html_counter
- self.html_counter += 1
- return placeholder
-
-
-class BlockGuru :
-
- def _findHead(self, lines, fn, allowBlank=0) :
-
- """Functional magic to help determine boundaries of indented
- blocks.
-
- @param lines: an array of strings
- @param fn: a function that returns a substring of a string
- if the string matches the necessary criteria
- @param allowBlank: specifies whether it's ok to have blank
- lines between matching functions
- @returns: a list of post processes items and the unused
- remainder of the original list"""
-
- items = []
- item = -1
-
- i = 0 # to keep track of where we are
-
- for line in lines :
-
- if not line.strip() and not allowBlank:
- return items, lines[i:]
-
- if not line.strip() and allowBlank:
- # If we see a blank line, this _might_ be the end
- i += 1
-
- # Find the next non-blank line
- for j in range(i, len(lines)) :
- if lines[j].strip() :
- next = lines[j]
- break
- else :
- # There is no more text => this is the end
- break
-
- # Check if the next non-blank line is still a part of the list
-
- part = fn(next)
-
- if part :
- items.append("")
- continue
- else :
- break # found end of the list
-
- part = fn(line)
-
- if part :
- items.append(part)
- i += 1
- continue
- else :
- return items, lines[i:]
- else :
- i += 1
-
- return items, lines[i:]
-
-
- def detabbed_fn(self, line) :
- """ An auxiliary method to be passed to _findHead """
- m = RE.regExp['tabbed'].match(line)
- if m:
- return m.group(4)
- else :
- return None
-
-
- def detectTabbed(self, lines) :
-
- return self._findHead(lines, self.detabbed_fn,
- allowBlank = 1)
-
-
-def print_error(string):
- """Print an error string to stderr"""
- sys.stderr.write(string +'\n')
-
-
-def dequote(string) :
- """ Removes quotes from around a string """
- if ( ( string.startswith('"') and string.endswith('"'))
- or (string.startswith("'") and string.endswith("'")) ) :
- return string[1:-1]
- else :
- return string
-
-"""
-======================================================================
-========================== CORE MARKDOWN =============================
-======================================================================
-
-This stuff is ugly, so if you are thinking of extending the syntax,
-see first if you can do it via pre-processors, post-processors,
-inline patterns or a combination of the three.
-"""
-
-class CorePatterns :
- """This class is scheduled for removal as part of a refactoring
- effort."""
-
- patterns = {
- 'header': r'(#*)([^#]*)(#*)', # # A title
- 'reference-def' : r'(\ ?\ ?\ ?)\[([^\]]*)\]:\s*([^ ]*)(.*)',
- # [Google]: http://www.google.com/
- 'containsline': r'([-]*)$|^([=]*)', # -----, =====, etc.
- 'ol': r'[ ]{0,3}[\d]*\.\s+(.*)', # 1. text
- 'ul': r'[ ]{0,3}[*+-]\s+(.*)', # "* text"
- 'isline1': r'(\**)', # ***
- 'isline2': r'(\-*)', # ---
- 'isline3': r'(\_*)', # ___
- 'tabbed': r'((\t)|( ))(.*)', # an indented line
- 'quoted' : r'> ?(.*)', # a quoted block ("> ...")
- }
-
- def __init__ (self) :
-
- self.regExp = {}
- for key in self.patterns.keys() :
- self.regExp[key] = re.compile("^%s$" % self.patterns[key],
- re.DOTALL)
-
- self.regExp['containsline'] = re.compile(r'^([-]*)$|^([=]*)$', re.M)
-
-RE = CorePatterns()
-
-
-class Markdown:
- """ Markdown formatter class for creating an html document from
- Markdown text """
-
-
- def __init__(self, source=None,
- extensions=[],
- extension_configs=None,
- encoding=None,
- safe_mode = True):
- """Creates a new Markdown instance.
-
- @param source: The text in Markdown format.
- @param encoding: The character encoding of <text>. """
-
- self.safeMode = safe_mode
- self.encoding = encoding
- self.source = source
- self.blockGuru = BlockGuru()
- self.registeredExtensions = []
- self.stripTopLevelTags = 1
- self.docType = ""
-
- self.preprocessors = [ HEADER_PREPROCESSOR,
- LINE_PREPROCESSOR,
- HTML_BLOCK_PREPROCESSOR,
- LINE_BREAKS_PREPROCESSOR,
- # A footnote preprocessor will
- # get inserted here
- REFERENCE_PREPROCESSOR ]
-
-
- self.postprocessors = [] # a footnote postprocessor will get
- # inserted later
-
- self.textPostprocessors = [] # a footnote postprocessor will get
- # inserted later
-
- self.prePatterns = []
-
-
- self.inlinePatterns = [ DOUBLE_BACKTICK_PATTERN,
- BACKTICK_PATTERN,
- ESCAPE_PATTERN,
- IMAGE_LINK_PATTERN,
- IMAGE_REFERENCE_PATTERN,
- REFERENCE_PATTERN,
- LINK_ANGLED_PATTERN,
- LINK_PATTERN,
- AUTOLINK_PATTERN,
- AUTOMAIL_PATTERN,
- HTML_PATTERN,
- ENTITY_PATTERN,
- NOT_STRONG_PATTERN,
- STRONG_EM_PATTERN,
- STRONG_EM_PATTERN_2,
- STRONG_PATTERN,
- STRONG_PATTERN_2,
- EMPHASIS_PATTERN,
- EMPHASIS_PATTERN_2
- # The order of the handlers matters!!!
- ]
-
- self.registerExtensions(extensions = extensions,
- configs = extension_configs)
-
- self.reset()
-
-
- def registerExtensions(self, extensions, configs) :
-
- if not configs :
- configs = {}
-
- for ext in extensions :
-
- extension_module_name = "mdx_" + ext
-
- try :
- module = __import__(extension_module_name)
-
- except :
- message(CRITICAL,
- "couldn't load extension %s (looking for %s module)"
- % (ext, extension_module_name) )
- else :
-
- if configs.has_key(ext) :
- configs_for_ext = configs[ext]
- else :
- configs_for_ext = []
- extension = module.makeExtension(configs_for_ext)
- extension.extendMarkdown(self, globals())
-
-
-
-
- def registerExtension(self, extension) :
- """ This gets called by the extension """
- self.registeredExtensions.append(extension)
-
- def reset(self) :
- """Resets all state variables so that we can start
- with a new text."""
- self.references={}
- self.htmlStash = HtmlStash()
-
- HTML_BLOCK_PREPROCESSOR.stash = self.htmlStash
- REFERENCE_PREPROCESSOR.references = self.references
- HTML_PATTERN.stash = self.htmlStash
- ENTITY_PATTERN.stash = self.htmlStash
- REFERENCE_PATTERN.references = self.references
- IMAGE_REFERENCE_PATTERN.references = self.references
-
- for extension in self.registeredExtensions :
- extension.reset()
-
-
- def _transform(self):
- """Transforms the Markdown text into a XHTML body document
-
- @returns: A NanoDom Document """
-
- # Setup the document
-
- self.doc = Document()
- self.top_element = self.doc.createElement("span")
- self.top_element.appendChild(self.doc.createTextNode('\n'))
- self.top_element.setAttribute('class', 'markdown')
- self.doc.appendChild(self.top_element)
-
- # Fixup the source text
- text = self.source.strip()
- text = text.replace("\r\n", "\n").replace("\r", "\n")
- text += "\n\n"
- text = text.expandtabs(TAB_LENGTH)
-
- # Split into lines and run the preprocessors that will work with
- # self.lines
-
- self.lines = text.split("\n")
-
- # Run the pre-processors on the lines
- for prep in self.preprocessors :
- self.lines = prep.run(self.lines)
-
- # Create a NanoDom tree from the lines and attach it to Document
-
-
- buffer = []
- for line in self.lines :
- if line.startswith("#") :
- self._processSection(self.top_element, buffer)
- buffer = [line]
- else :
- buffer.append(line)
- self._processSection(self.top_element, buffer)
-
- #self._processSection(self.top_element, self.lines)
-
- # Not sure why I put this in but let's leave it for now.
- self.top_element.appendChild(self.doc.createTextNode('\n'))
-
- # Run the post-processors
- for postprocessor in self.postprocessors :
- postprocessor.run(self.doc)
-
- return self.doc
-
-
- def _processSection(self, parent_elem, lines,
- inList = 0, looseList = 0) :
-
- """Process a section of a source document, looking for high
- level structural elements like lists, block quotes, code
- segments, html blocks, etc. Some those then get stripped
- of their high level markup (e.g. get unindented) and the
- lower-level markup is processed recursively.
-
- @param parent_elem: A NanoDom element to which the content
- will be added
- @param lines: a list of lines
- @param inList: a level
- @returns: None"""
-
- if not lines :
- return
-
- # Check if this section starts with a list, a blockquote or
- # a code block
-
- processFn = { 'ul' : self._processUList,
- 'ol' : self._processOList,
- 'quoted' : self._processQuote,
- 'tabbed' : self._processCodeBlock }
-
- for regexp in ['ul', 'ol', 'quoted', 'tabbed'] :
- m = RE.regExp[regexp].match(lines[0])
- if m :
- processFn[regexp](parent_elem, lines, inList)
- return
-
- # We are NOT looking at one of the high-level structures like
- # lists or blockquotes. So, it's just a regular paragraph
- # (though perhaps nested inside a list or something else). If
- # we are NOT inside a list, we just need to look for a blank
- # line to find the end of the block. If we ARE inside a
- # list, however, we need to consider that a sublist does not
- # need to be separated by a blank line. Rather, the following
- # markup is legal:
- #
- # * The top level list item
- #
- # Another paragraph of the list. This is where we are now.
- # * Underneath we might have a sublist.
- #
-
- if inList :
-
- start, theRest = self._linesUntil(lines, (lambda line:
- RE.regExp['ul'].match(line)
- or RE.regExp['ol'].match(line)
- or not line.strip()))
-
- self._processSection(parent_elem, start,
- inList - 1, looseList = looseList)
- self._processSection(parent_elem, theRest,
- inList - 1, looseList = looseList)
-
-
- else : # Ok, so it's just a simple block
-
- paragraph, theRest = self._linesUntil(lines, lambda line:
- not line.strip())
-
- if len(paragraph) and paragraph[0].startswith('#') :
- m = RE.regExp['header'].match(paragraph[0])
- if m :
- level = len(m.group(1))
- h = self.doc.createElement("h%d" % level)
- parent_elem.appendChild(h)
- for item in self._handleInlineWrapper2(m.group(2).strip()) :
- h.appendChild(item)
- else :
- message(CRITICAL, "We've got a problem header!")
-
- elif paragraph :
-
- list = self._handleInlineWrapper2("\n".join(paragraph))
-
- if ( parent_elem.nodeName == 'li'
- and not (looseList or parent_elem.childNodes)):
-
- #and not parent_elem.childNodes) :
- # If this is the first paragraph inside "li", don't
- # put <p> around it - append the paragraph bits directly
- # onto parent_elem
- el = parent_elem
- else :
- # Otherwise make a "p" element
- el = self.doc.createElement("p")
- parent_elem.appendChild(el)
-
- for item in list :
- el.appendChild(item)
-
- if theRest :
- theRest = theRest[1:] # skip the first (blank) line
-
- self._processSection(parent_elem, theRest, inList)
-
-
-
- def _processUList(self, parent_elem, lines, inList) :
- self._processList(parent_elem, lines, inList,
- listexpr='ul', tag = 'ul')
-
- def _processOList(self, parent_elem, lines, inList) :
- self._processList(parent_elem, lines, inList,
- listexpr='ol', tag = 'ol')
-
-
- def _processList(self, parent_elem, lines, inList, listexpr, tag) :
- """Given a list of document lines starting with a list item,
- finds the end of the list, breaks it up, and recursively
- processes each list item and the remainder of the text file.
-
- @param parent_elem: A dom element to which the content will be added
- @param lines: a list of lines
- @param inList: a level
- @returns: None"""
-
- ul = self.doc.createElement(tag) # ul might actually be '<ol>'
- parent_elem.appendChild(ul)
-
- looseList = 0
-
- # Make a list of list items
- items = []
- item = -1
-
- i = 0 # a counter to keep track of where we are
-
- for line in lines :
-
- loose = 0
- if not line.strip() :
- # If we see a blank line, this _might_ be the end of the list
- i += 1
- loose = 1
-
- # Find the next non-blank line
- for j in range(i, len(lines)) :
- if lines[j].strip() :
- next = lines[j]
- break
- else :
- # There is no more text => end of the list
- break
-
- # Check if the next non-blank line is still a part of the list
- if ( RE.regExp['ul'].match(next) or
- RE.regExp['ol'].match(next) or
- RE.regExp['tabbed'].match(next) ):
- # get rid of any white space in the line
- items[item].append(line.strip())
- looseList = loose or looseList
- continue
- else :
- break # found end of the list
-
- # Now we need to detect list items (at the current level)
- # while also detabing child elements if necessary
-
- for expr in ['ul', 'ol', 'tabbed']:
-
- m = RE.regExp[expr].match(line)
- if m :
- if expr in ['ul', 'ol'] : # We are looking at a new item
- if m.group(1) :
- items.append([m.group(1)])
- item += 1
- elif expr == 'tabbed' : # This line needs to be detabbed
- items[item].append(m.group(4)) #after the 'tab'
-
- i += 1
- break
- else :
- items[item].append(line) # Just regular continuation
- i += 1 # added on 2006.02.25
- else :
- i += 1
-
- # Add the dom elements
- for item in items :
- li = self.doc.createElement("li")
- ul.appendChild(li)
-
- self._processSection(li, item, inList + 1, looseList = looseList)
-
- # Process the remaining part of the section
-
- self._processSection(parent_elem, lines[i:], inList)
-
-
- def _linesUntil(self, lines, condition) :
- """ A utility function to break a list of lines upon the
- first line that satisfied a condition. The condition
- argument should be a predicate function.
- """
-
- i = -1
- for line in lines :
- i += 1
- if condition(line) : break
- else :
- i += 1
- return lines[:i], lines[i:]
-
- def _processQuote(self, parent_elem, lines, inList) :
- """Given a list of document lines starting with a quote finds
- the end of the quote, unindents it and recursively
- processes the body of the quote and the remainder of the
- text file.
-
- @param parent_elem: DOM element to which the content will be added
- @param lines: a list of lines
- @param inList: a level
- @returns: None """
-
- dequoted = []
- i = 0
- for line in lines :
- m = RE.regExp['quoted'].match(line)
- if m :
- dequoted.append(m.group(1))
- i += 1
- else :
- break
- else :
- i += 1
-
- blockquote = self.doc.createElement('blockquote')
- parent_elem.appendChild(blockquote)
-
- self._processSection(blockquote, dequoted, inList)
- self._processSection(parent_elem, lines[i:], inList)
-
-
-
-
- def _processCodeBlock(self, parent_elem, lines, inList) :
- """Given a list of document lines starting with a code block
- finds the end of the block, puts it into the dom verbatim
- wrapped in ("<pre><code>") and recursively processes the
- the remainder of the text file.
-
- @param parent_elem: DOM element to which the content will be added
- @param lines: a list of lines
- @param inList: a level
- @returns: None"""
-
- detabbed, theRest = self.blockGuru.detectTabbed(lines)
-
- pre = self.doc.createElement('pre')
- code = self.doc.createElement('code')
- parent_elem.appendChild(pre)
- pre.appendChild(code)
- text = "\n".join(detabbed).rstrip()+"\n"
- #text = text.replace("&", "&amp;")
- code.appendChild(self.doc.createTextNode(text))
- self._processSection(parent_elem, theRest, inList)
-
-
- def _handleInlineWrapper2 (self, line) :
-
-
- parts = [line]
-
- #if not(line):
- # return [self.doc.createTextNode(' ')]
-
- for pattern in self.inlinePatterns :
-
- #print
- #print self.inlinePatterns.index(pattern)
-
- i = 0
-
- #print parts
- while i < len(parts) :
-
- x = parts[i]
- #print i
- if isinstance(x, (str, unicode)) :
- result = self._applyPattern(x, pattern)
- #print result
- #print result
- #print parts, i
- if result :
- i -= 1
- parts.remove(x)
- for y in result :
- parts.insert(i+1,y)
-
- i += 1
-
- for i in range(len(parts)) :
- x = parts[i]
- if isinstance(x, (str, unicode)) :
- parts[i] = self.doc.createTextNode(x)
-
- return parts
-
-
-
- def _handleInlineWrapper (self, line) :
-
- # A wrapper around _handleInline to avoid recursion
-
- parts = [line]
-
- i = 0
-
- while i < len(parts) :
- x = parts[i]
- if isinstance(x, (str, unicode)) :
- parts.remove(x)
- result = self._handleInline(x)
- for y in result :
- parts.insert(i,y)
- else :
- i += 1
-
- return parts
-
- def _handleInline(self, line):
- """Transform a Markdown line with inline elements to an XHTML
- fragment.
-
- This function uses auxiliary objects called inline patterns.
- See notes on inline patterns above.
-
- @param item: A block of Markdown text
- @return: A list of NanoDom nodes """
-
- if not(line):
- return [self.doc.createTextNode(' ')]
-
- for pattern in self.inlinePatterns :
- list = self._applyPattern( line, pattern)
- if list: return list
-
- return [self.doc.createTextNode(line)]
-
- def _applyPattern(self, line, pattern) :
- """ Given a pattern name, this function checks if the line
- fits the pattern, creates the necessary elements, and returns
- back a list consisting of NanoDom elements and/or strings.
-
- @param line: the text to be processed
- @param pattern: the pattern to be checked
-
- @returns: the appropriate newly created NanoDom element if the
- pattern matches, None otherwise.
- """
-
- # match the line to pattern's pre-compiled reg exp.
- # if no match, move on.
-
- m = pattern.getCompiledRegExp().match(line)
- if not m :
- return None
-
- # if we got a match let the pattern make us a NanoDom node
- # if it doesn't, move on
- node = pattern.handleMatch(m, self.doc)
-
- if node :
- # Those are in the reverse order!
- return ( m.groups()[-1], # the string to the left
- node, # the new node
- m.group(1)) # the string to the right of the match
-
- else :
- return None
-
- def __str__(self, source = None):
- """Return the document in XHTML format.
-
- @returns: A serialized XHTML body."""
- #try :
-
- if source :
- self.source = source
-
- doc = self._transform()
- xml = doc.toxml()
-
- #finally:
- # doc.unlink()
-
- # Let's stick in all the raw html pieces
-
- for i in range(self.htmlStash.html_counter) :
- html = self.htmlStash.rawHtmlBlocks[i]
- if self.safeMode :
- html = "[HTML_REMOVED]"
-
- xml = xml.replace("<p>%s\n</p>" % (HTML_PLACEHOLDER % i),
- html + "\n")
- xml = xml.replace(HTML_PLACEHOLDER % i,
- html)
-
- # And return everything but the top level tag
-
- if self.stripTopLevelTags :
- xml = xml.strip()[23:-7] + "\n"
-
- for pp in self.textPostprocessors :
- xml = pp.run(xml)
-
- return self.docType + xml
-
-
- toString = __str__
-
-
- def __unicode__(self):
- """Return the document in XHTML format as a Unicode object.
- """
- return str(self)#.decode(self.encoding)
-
-
- toUnicode = __unicode__
-
-
-
-
-# ====================================================================
-
-def markdownFromFile(input = None,
- output = None,
- extensions = [],
- encoding = None,
- message_threshold = CRITICAL,
- safe = False) :
-
- global MESSAGE_THRESHOLD
- MESSAGE_THRESHOLD = message_threshold
-
- message(VERBOSE, "input file: %s" % input)
-
-
- if not encoding :
- encoding = "utf-8"
-
- input_file = codecs.open(input, mode="r", encoding="utf-8")
- text = input_file.read()
- input_file.close()
-
- new_text = markdown(text, extensions, encoding, safe_mode = safe)
-
- if output :
- output_file = codecs.open(output, "w", encoding=encoding)
- output_file.write(new_text)
- output_file.close()
-
- else :
- sys.stdout.write(new_text.encode(encoding))
-
-def markdown(text,
- extensions = [],
- encoding = None,
- safe_mode = False) :
-
- message(VERBOSE, "in markdown.markdown(), received text:\n%s" % text)
-
- extension_names = []
- extension_configs = {}
-
- for ext in extensions :
- pos = ext.find("(")
- if pos == -1 :
- extension_names.append(ext)
- else :
- name = ext[:pos]
- extension_names.append(name)
- pairs = [x.split("=") for x in ext[pos+1:-1].split(",")]
- configs = [(x.strip(), y.strip()) for (x, y) in pairs]
- extension_configs[name] = configs
- #print configs
-
- md = Markdown(text, extensions=extension_names,
- extension_configs=extension_configs,
- safe_mode = safe_mode)
-
- return md.toString()
-
-
-class Extension :
-
- def __init__(self, configs = {}) :
- self.config = configs
-
- def getConfig(self, key) :
- if self.config.has_key(key) :
- #print self.config[key][0]
- return self.config[key][0]
- else :
- return ""
-
- def getConfigInfo(self) :
- return [(key, self.config[key][1]) for key in self.config.keys()]
-
- def setConfig(self, key, value) :
- self.config[key][0] = value
-
-
-OPTPARSE_WARNING = """
-Python 2.3 or higher required for advanced command line options.
-For lower versions of Python use:
-
- %s INPUT_FILE > OUTPUT_FILE
-
-""" % EXECUTABLE_NAME_FOR_USAGE
-
-def parse_options() :
-
- try :
- optparse = __import__("optparse")
- except :
- if len(sys.argv) == 2 :
- return {'input' : sys.argv[1],
- 'output' : None,
- 'message_threshold' : CRITICAL,
- 'safe' : False,
- 'extensions' : [],
- 'encoding' : None }
-
- else :
- print OPTPARSE_WARNING
- return None
-
- parser = optparse.OptionParser(usage="%prog INPUTFILE [options]")
-
- parser.add_option("-f", "--file", dest="filename",
- help="write output to OUTPUT_FILE",
- metavar="OUTPUT_FILE")
- parser.add_option("-e", "--encoding", dest="encoding",
- help="encoding for input and output files",)
- parser.add_option("-q", "--quiet", default = CRITICAL,
- action="store_const", const=NONE, dest="verbose",
- help="suppress all messages")
- parser.add_option("-v", "--verbose",
- action="store_const", const=INFO, dest="verbose",
- help="print info messages")
- parser.add_option("-s", "--safe",
- action="store_const", const=True, dest="safe",
- help="same mode (strip user's HTML tag)")
-
- parser.add_option("--noisy",
- action="store_const", const=VERBOSE, dest="verbose",
- help="print debug messages")
- parser.add_option("-x", "--extension", action="append", dest="extensions",
- help = "load extension EXTENSION", metavar="EXTENSION")
-
- (options, args) = parser.parse_args()
-
- if not len(args) == 1 :
- parser.print_help()
- return None
- else :
- input_file = args[0]
-
- if not options.extensions :
- options.extensions = []
-
- return {'input' : input_file,
- 'output' : options.filename,
- 'message_threshold' : options.verbose,
- 'safe' : options.safe,
- 'extensions' : options.extensions,
- 'encoding' : options.encoding }
-
-if __name__ == '__main__':
- """ Run Markdown from the command line. """
-
- options = parse_options()
-
- #if os.access(inFile, os.R_OK):
-
- if not options :
- sys.exit(0)
-
- markdownFromFile(**options)
-
-
-
-
-
-
-
-
-
-
+++ /dev/null
-"""
-defines a pickleable, recursive "table of contents" datastructure.
-
-TOCElements define a name, a description, and also a uniquely-identifying "path" which is
-used to generate hyperlinks between document sections.
-"""
-import time, re
-
-toc_by_file = {}
-toc_by_path = {}
-filenames = []
-
-class TOCElement(object):
- def __init__(self, filename, name, description, parent=None, version=None, last_updated=None, doctitle=None, requires_paged=False, **kwargs):
- self.filename = filename
- self.name = re.sub(r'[<>&;%]', '', name)
- self.description = description
- self.parent = parent
- self.content = None
- self.filenames = filenames
- self.toc_by_path = toc_by_path
- self.toc_by_file = toc_by_file
- self.last_updated = time.time()
- self.version = version
- self.doctitle = doctitle
- self.requires_paged = requires_paged
- (self.path, self.depth) = self._create_path()
- #print "NEW TOC:", self.path
- for key, value in kwargs.iteritems():
- setattr(self, key, value)
-
- toc_by_path[self.path] = self
-
- self.is_top = (self.parent is not None and self.parent.filename != self.filename) or self.parent is None
- if self.is_top:
- toc_by_file[self.filename] = self
- if self.filename:
- filenames.append(self.filename)
-
- self.root = self.parent and self.parent.root or self
-
- self.content = None
- self.previous = None
- self.next = None
- self.children = []
- if parent:
- if parent.children:
- self.previous = parent.children[-1]
- parent.children[-1].next = self
- parent.children.append(self)
- if parent is not parent.root:
- self.up = parent
- else:
- self.up = None
-
- def get_page_root(self):
- return self.toc_by_file[self.filename]
-
- def get_by_path(self, path):
- return self.toc_by_path.get(path)
-
- def get_by_file(self, filename):
- return self.toc_by_file[filename]
-
- def get_link(self, extension='html', anchor=True, usefilename=True):
- if usefilename or self.requires_paged:
- if anchor:
- return "%s.%s#%s" % (self.filename, extension, self.path)
- else:
- return "%s.%s" % (self.filename, extension)
- else:
- return "#%s" % (self.path)
-
-
- def _create_path(self):
- elem = self
- tokens = []
- depth = 0
- while elem.parent is not None:
- tokens.insert(0, elem.name)
- elem = elem.parent
- depth +=1
- return ('_'.join(tokens), depth)
--- /dev/null
+.. _datamapping_toplevel:
+
+====================
+Mapper Configuration
+====================
+
+This section describes the major configuration patterns involving the :func:`~sqlalchemy.orm.mapper` and :func:`~sqlalchemy.orm.relation` functions. It assumes you've worked through :ref:`ormtutorial_toplevel` and know how to construct and use rudimentary mappers and relations.
+
+Mapper Configuration
+====================
+
+Customizing Column Properties
+------------------------------
+
+The default behavior of a ``mapper`` is to assemble all the columns in the mapped ``Table`` into mapped object attributes. This behavior can be modified in several ways, as well as enhanced by SQL expressions.
+
+To load only a part of the columns referenced by a table as attributes, use the ``include_properties`` and ``exclude_properties`` arguments::
+
+ mapper(User, users_table, include_properties=['user_id', 'user_name'])
+
+ mapper(Address, addresses_table, exclude_properties=['street', 'city', 'state', 'zip'])
+
+To change the name of the attribute mapped to a particular column, place the ``Column`` object in the ``properties`` dictionary with the desired key::
+
+ mapper(User, users_table, properties={
+ 'id': users_table.c.user_id,
+ 'name': users_table.c.user_name,
+ })
+
+To change the names of all attributes using a prefix, use the ``column_prefix`` option. This is useful for classes which wish to add their own ``property`` accessors::
+
+ mapper(User, users_table, column_prefix='_')
+
+The above will place attribute names such as ``_user_id``, ``_user_name``, ``_password`` etc. on the mapped ``User`` class.
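+
+For example, the class can then layer its own accessors over the prefixed attributes (a brief sketch)::
+
+    class User(object):
+        @property
+        def user_name(self):
+            # read-only public access over the mapped "_user_name" attribute
+            return self._user_name
+
+    mapper(User, users_table, column_prefix='_')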
+
+To map multiple columns which are known to be "synonymous" based on a foreign key relationship or join condition to the same attribute, pass them together as a list, as below where we map to a ``Join``::
+
+ # join users and addresses
+ usersaddresses = sql.join(users_table, addresses_table, \
+ users_table.c.user_id == addresses_table.c.user_id)
+
+ mapper(User, usersaddresses, properties={
+ 'id':[users_table.c.user_id, addresses_table.c.user_id],
+ })
+
+Deferred Column Loading
+------------------------
+
+This feature allows particular columns of a table to not be loaded by default, instead being loaded later on when first referenced. It is essentially "column-level lazy loading". This feature is useful when one wants to avoid loading a large text or binary field into memory when it's not needed. Individual columns can be lazy loaded by themselves or placed into groups that lazy-load together::
+
+ book_excerpts = Table('books', db,
+ Column('book_id', Integer, primary_key=True),
+ Column('title', String(200), nullable=False),
+ Column('summary', String(2000)),
+ Column('excerpt', String),
+ Column('photo', Binary)
+ )
+
+ class Book(object):
+ pass
+
+ # define a mapper that will load each of 'excerpt' and 'photo' in
+ # separate, individual-row SELECT statements when each attribute
+ # is first referenced on the individual object instance
+ mapper(Book, book_excerpts, properties={
+ 'excerpt': deferred(book_excerpts.c.excerpt),
+ 'photo': deferred(book_excerpts.c.photo)
+ })
+
+Deferred columns can be placed into groups so that they load together::
+
+ book_excerpts = Table('books', db,
+ Column('book_id', Integer, primary_key=True),
+ Column('title', String(200), nullable=False),
+ Column('summary', String(2000)),
+ Column('excerpt', String),
+ Column('photo1', Binary),
+ Column('photo2', Binary),
+ Column('photo3', Binary)
+ )
+
+ class Book(object):
+ pass
+
+ # define a mapper with a 'photos' deferred group. when one photo is referenced,
+ # all three photos will be loaded in one SELECT statement. The 'excerpt' will
+ # be loaded separately when it is first referenced.
+ mapper(Book, book_excerpts, properties = {
+ 'excerpt': deferred(book_excerpts.c.excerpt),
+ 'photo1': deferred(book_excerpts.c.photo1, group='photos'),
+ 'photo2': deferred(book_excerpts.c.photo2, group='photos'),
+ 'photo3': deferred(book_excerpts.c.photo3, group='photos')
+ })
+
+You can defer or undefer columns at the ``Query`` level using the ``defer`` and ``undefer`` options::
+
+ query = session.query(Book)
+ query.options(defer('summary')).all()
+ query.options(undefer('excerpt')).all()
+
+An entire "deferred group", i.e. the set of columns configured with the ``group`` keyword argument to :func:`deferred()`, can be undeferred using :func:`undefer_group()`, sending in the group name::
+
+ query = session.query(Book)
+ query.options(undefer_group('photos')).all()
+
+SQL Expressions as Mapped Attributes
+-------------------------------------
+
+To add a SQL clause composed of local or external columns as a read-only, mapped column attribute, use the :func:`column_property()` function. Any scalar-returning ``ClauseElement`` may be used, as long as it has a ``name`` attribute; usually, you'll want to call ``label()`` to give it a specific name::
+
+ mapper(User, users_table, properties={
+ 'fullname': column_property(
+ (users_table.c.firstname + " " + users_table.c.lastname).label('fullname')
+ )
+ })
+
+Correlated subqueries may be used as well:
+
+.. sourcecode:: python+sql
+
+ mapper(User, users_table, properties={
+ 'address_count': column_property(
+ select(
+ [func.count(addresses_table.c.address_id)],
+ addresses_table.c.user_id==users_table.c.user_id
+ ).label('address_count')
+ )
+ })
+
+Changing Attribute Behavior
+----------------------------
+
+
+Simple Validators
+~~~~~~~~~~~~~~~~~~
+
+
+A quick way to add a "validation" routine to an attribute is to use the :func:`~sqlalchemy.orm.validates` decorator. This is a shortcut for using the :class:`sqlalchemy.orm.util.Validator` attribute extension with individual column or relation based attributes. An attribute validator can raise an exception, halting the process of mutating the attribute's value, or can change the given value into something different. Validators, like all attribute extensions, are only called by normal userland code; they are not issued when the ORM is populating the object.
+
+.. sourcecode:: python+sql
+
+ addresses_table = Table('addresses', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('email', String)
+ )
+
+ class EmailAddress(object):
+ @validates('email')
+ def validate_email(self, key, address):
+ assert '@' in address
+ return address
+
+ mapper(EmailAddress, addresses_table)
+
+Validators also receive collection events, such as when items are added to a collection:
+
+.. sourcecode:: python+sql
+
+ class User(object):
+ @validates('addresses')
+ def validate_address(self, key, address):
+ assert '@' in address.email
+ return address
+
+Using Descriptors
+~~~~~~~~~~~~~~~~~~
+
+A more comprehensive way to produce modified behavior for an attribute is to use descriptors, which are commonly created in Python using the ``property()`` function. The standard SQLAlchemy technique for descriptors is to create a plain descriptor, and to have it read/write from a mapped attribute with a different name. To have the descriptor named the same as a column, map the column under a different name, i.e.:
+
+.. sourcecode:: python+sql
+
+ class EmailAddress(object):
+ def _set_email(self, email):
+ self._email = email
+ def _get_email(self):
+ return self._email
+ email = property(_get_email, _set_email)
+
+    mapper(EmailAddress, addresses_table, properties={
+ '_email': addresses_table.c.email
+ })
+
+However, the approach above is not complete. While our ``EmailAddress`` object will shuttle the value through the ``email`` descriptor and into the ``_email`` mapped attribute, the class level ``EmailAddress.email`` attribute does not have the usual expression semantics usable with ``Query``. To provide these, we instead use the ``synonym()`` function as follows:
+
+.. sourcecode:: python+sql
+
+ mapper(EmailAddress, addresses_table, properties={
+ 'email': synonym('_email', map_column=True)
+ })
+
+The ``email`` attribute is now usable in the same way as any other mapped attribute, including filter expressions, get/set operations, etc.:
+
+.. sourcecode:: python+sql
+
+ address = session.query(EmailAddress).filter(EmailAddress.email == 'some address').one()
+
+ address.email = 'some other address'
+ session.flush()
+
+ q = session.query(EmailAddress).filter_by(email='some other address')
+
+If the mapped class does not provide a property, the ``synonym()`` construct will create a default getter/setter object automatically.
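+
+For example, with no descriptor defined on the class at all, the mapping alone yields a working ``email`` attribute (a minimal sketch)::
+
+    class EmailAddress(object):
+        pass
+
+    mapper(EmailAddress, addresses_table, properties={
+        # synonym() generates a default getter/setter for "email",
+        # proxying the "_email" mapped attribute
+        'email': synonym('_email', map_column=True)
+    })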
+
+.. _custom_comparators:
+
+Custom Comparators
+~~~~~~~~~~~~~~~~~~~
+
+The expressions returned by comparison operations, such as ``User.name=='ed'``, can be customized. SQLAlchemy attributes generate these expressions using :class:`~sqlalchemy.orm.interfaces.PropComparator` objects, which provide common Python expression overrides including ``__eq__()``, ``__ne__()``, ``__lt__()``, and so on. Any mapped attribute can be passed a user-defined class via the ``comparator_factory`` keyword argument; this class should subclass the appropriate ``PropComparator`` in use, and can override any or all of these methods:
+
+.. sourcecode:: python+sql
+
+ from sqlalchemy.orm.properties import ColumnProperty
+ class MyComparator(ColumnProperty.Comparator):
+ def __eq__(self, other):
+ return func.lower(self.__clause_element__()) == func.lower(other)
+
+ mapper(EmailAddress, addresses_table, properties={
+ 'email':column_property(addresses_table.c.email, comparator_factory=MyComparator)
+ })
+
+Above, comparisons on the ``email`` column are wrapped in the SQL ``lower()`` function to produce case-insensitive matching:
+
+.. sourcecode:: python+sql
+
+ >>> str(EmailAddress.email == 'SomeAddress@foo.com')
+ lower(addresses.email) = lower(:lower_1)
+
+The ``__clause_element__()`` method is provided by the base ``Comparator`` class in use, and represents the SQL element which best matches what this attribute represents. For a column-based attribute, it's the mapped column. For a composite attribute, it's a :class:`~sqlalchemy.sql.expression.ClauseList` consisting of each column represented. For a relation, it's the table mapped by the local mapper (not the remote mapper). ``__clause_element__()`` should be honored by the custom comparator class in most cases, since any translations which are in effect, such as the correctly aliased member when using an ``aliased()`` construct or certain ``with_polymorphic()`` scenarios, will be applied to the resulting element.
+
+There are four kinds of ``Comparator`` classes which may be subclassed, according to the type of mapper property configured:
+
+ * ``column_property()`` attribute - ``sqlalchemy.orm.properties.ColumnProperty.Comparator``
+ * ``composite()`` attribute - ``sqlalchemy.orm.properties.CompositeProperty.Comparator``
+ * ``relation()`` attribute - ``sqlalchemy.orm.properties.RelationProperty.Comparator``
+ * ``comparable_property()`` attribute - ``sqlalchemy.orm.interfaces.PropComparator``
+
+When using ``comparable_property()``, which is a mapper property that isn't tied to any column or mapped table, the ``__clause_element__()`` method of ``PropComparator`` should also be implemented.
+
+The ``comparator_factory`` argument is accepted by all ``MapperProperty``-producing functions: ``column_property()``, ``composite()``, ``comparable_property()``, ``synonym()``, ``relation()``, ``backref()``, ``deferred()``, and ``dynamic_loader()``.
+
+Composite Column Types
+-----------------------
+
+Sets of columns can be associated with a single datatype. The ORM treats the group of columns like a single column which accepts and returns objects using the custom datatype you provide. In this example, we'll create a table ``vertices`` which stores two x/y coordinate pairs per row, and a custom datatype ``Point`` which is a composite type of an x and y column:
+
+.. sourcecode:: python+sql
+
+ vertices = Table('vertices', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('x1', Integer),
+ Column('y1', Integer),
+ Column('x2', Integer),
+ Column('y2', Integer),
+ )
+
+The requirements for the custom datatype class are that it have a constructor which accepts positional arguments corresponding to its column format, and also provides a method ``__composite_values__()`` which returns the state of the object as a list or tuple, in order of its column-based attributes. It also should supply adequate ``__eq__()`` and ``__ne__()`` methods which test the equality of two instances, and may optionally provide a ``__set_composite_values__`` method which is used to set internal state in some cases (typically when default values have been generated during a flush)::
+
+ class Point(object):
+ def __init__(self, x, y):
+ self.x = x
+ self.y = y
+ def __composite_values__(self):
+ return [self.x, self.y]
+ def __set_composite_values__(self, x, y):
+ self.x = x
+ self.y = y
+ def __eq__(self, other):
+ return other.x == self.x and other.y == self.y
+ def __ne__(self, other):
+ return not self.__eq__(other)
+
+If ``__set_composite_values__()`` is not provided, the names of the mapped columns are taken as the names of attributes on the object, and ``setattr()`` is used to set data.
+
+Setting up the mapping uses the :func:`~sqlalchemy.orm.composite()` function::
+
+    class Vertex(object):
+        def __init__(self, start, end):
+            # accept the two composite Point values up front, so that
+            # Vertex(Point(3, 4), Point(5, 6)) works as shown below
+            self.start = start
+            self.end = end
+
+ mapper(Vertex, vertices, properties={
+ 'start': composite(Point, vertices.c.x1, vertices.c.y1),
+ 'end': composite(Point, vertices.c.x2, vertices.c.y2)
+ })
+
+We can now create and query ``Vertex`` instances as though the ``start`` and ``end`` attributes were regular scalar attributes::
+
+ session = Session()
+ v = Vertex(Point(3, 4), Point(5, 6))
+ session.save(v)
+
+ v2 = session.query(Vertex).filter(Vertex.start == Point(3, 4))
+
+The "equals" comparison operation by default produces an AND of all corresponding columns equated to one another. This can be changed using the ``comparator_factory``, described in :ref:`custom_comparators`::
+
+ from sqlalchemy.orm.properties import CompositeProperty
+ from sqlalchemy import sql
+
+ class PointComparator(CompositeProperty.Comparator):
+ def __gt__(self, other):
+ """define the 'greater than' operation"""
+
+ return sql.and_(*[a>b for a, b in
+ zip(self.__clause_element__().clauses,
+ other.__composite_values__())])
+
+    mapper(Vertex, vertices, properties={
+ 'start': composite(Point, vertices.c.x1, vertices.c.y1, comparator_factory=PointComparator),
+ 'end': composite(Point, vertices.c.x2, vertices.c.y2, comparator_factory=PointComparator)
+ })
+
+Controlling Ordering
+---------------------
+
+As of version 0.5, the ORM does not generate ordering for any query unless explicitly configured.
+
+The "default" ordering for a collection, which applies to list-based collections, can be configured using the ``order_by`` keyword argument on ``relation()``::
+
+ mapper(Address, addresses_table)
+
+ # order address objects by address id
+ mapper(User, users_table, properties={
+ 'addresses': relation(Address, order_by=addresses_table.c.address_id)
+ })
+
+Note that when using eager loaders with relations, the tables used by the eager load's join are anonymously aliased. You can only order by these columns if you specify it at the ``relation()`` level. To control ordering at the query level based on a related table, you ``join()`` to that relation, then order by it::
+
+ session.query(User).join('addresses').order_by(Address.street)
+
+Ordering for rows loaded through ``Query`` is usually specified using the ``order_by()`` generative method. There is also an option to set a default ordering for Queries which are against a single mapped entity and which have no explicit ``order_by()`` stated: the ``order_by`` keyword argument to ``mapper()``::
+
+ # order by a column
+ mapper(User, users_table, order_by=users_table.c.user_id)
+
+ # order by multiple items
+ mapper(User, users_table, order_by=[users_table.c.user_id, users_table.c.user_name.desc()])
+
+Above, a ``Query`` issued for the ``User`` class will use the value of the mapper's ``order_by`` setting if the ``Query`` itself has no ordering specified.
+
+Mapping Class Inheritance Hierarchies
+--------------------------------------
+
+SQLAlchemy supports three forms of inheritance: *single table inheritance*, where several types of classes are stored in one table, *concrete table inheritance*, where each type of class is stored in its own table, and *joined table inheritance*, where the parent/child classes are stored in their own tables that are joined together in a select. Whereas support for single and joined table inheritance is strong, concrete table inheritance is a less common scenario with some particular problems, and is therefore not quite as flexible.
+
+When mappers are configured in an inheritance relationship, SQLAlchemy has the ability to load elements "polymorphically", meaning that a single query can return objects of multiple types.
+
+For the following sections, assume this class relationship:
+
+.. sourcecode:: python+sql
+
+ class Employee(object):
+ def __init__(self, name):
+ self.name = name
+ def __repr__(self):
+ return self.__class__.__name__ + " " + self.name
+
+ class Manager(Employee):
+ def __init__(self, name, manager_data):
+ self.name = name
+ self.manager_data = manager_data
+ def __repr__(self):
+ return self.__class__.__name__ + " " + self.name + " " + self.manager_data
+
+ class Engineer(Employee):
+ def __init__(self, name, engineer_info):
+ self.name = name
+ self.engineer_info = engineer_info
+ def __repr__(self):
+ return self.__class__.__name__ + " " + self.name + " " + self.engineer_info
+
+Joined Table Inheritance
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In joined table inheritance, each class along a particular class's list of parents is represented by a unique table. The total set of attributes for a particular instance is represented as a join along all tables in its inheritance path. Here, we first define a table to represent the ``Employee`` class. This table will contain a primary key column (or columns), and a column for each attribute that's represented by ``Employee``. In this case it's just ``name``::
+
+ employees = Table('employees', metadata,
+ Column('employee_id', Integer, primary_key=True),
+ Column('name', String(50)),
+ Column('type', String(30), nullable=False)
+ )
+
+The table also has a column called ``type``. It is strongly advised in both single- and joined-table inheritance scenarios that the root table contain a column whose sole purpose is that of the **discriminator**; it stores a value which indicates the type of object represented within the row. The column may be of any desired datatype. While there are some "tricks" to work around the requirement that there be a discriminator column, they are more complicated to configure when one wishes to load polymorphically.
+
+Next we define individual tables for each of ``Engineer`` and ``Manager``, which contain columns that represent the attributes unique to the subclass they represent. Each table also must contain a primary key column (or columns), and in most cases a foreign key reference to the parent table. It is standard practice that the same column serves both of these roles, and that the column is also named the same as that of the parent table. However, this is optional in SQLAlchemy; separate columns may be used for primary key and parent-relation, the column may be named differently than that of the parent, and even a custom join condition can be specified between parent and child tables instead of using a foreign key::
+
+ engineers = Table('engineers', metadata,
+ Column('employee_id', Integer, ForeignKey('employees.employee_id'), primary_key=True),
+ Column('engineer_info', String(50)),
+ )
+
+ managers = Table('managers', metadata,
+ Column('employee_id', Integer, ForeignKey('employees.employee_id'), primary_key=True),
+ Column('manager_data', String(50)),
+ )
+
+One natural effect of the joined table inheritance configuration is that the identity of any mapped object can be determined entirely from the base table. This has obvious advantages, so SQLAlchemy always considers the primary key columns of a joined inheritance class to be those of the base table only, unless otherwise manually configured. In other words, the ``employee_id`` column of both the ``engineers`` and ``managers`` table is not used to locate the ``Engineer`` or ``Manager`` object itself - only the value in ``employees.employee_id`` is considered, and the primary key in this case is non-composite. ``engineers.employee_id`` and ``managers.employee_id`` are still of course critical to the proper operation of the pattern overall as they are used to locate the joined row, once the parent row has been determined, either through a distinct SELECT statement or all at once within a JOIN.
+
+We then configure mappers as usual, except we use some additional arguments to indicate the inheritance relationship, the polymorphic discriminator column, and the **polymorphic identity** of each class; this is the value that will be stored in the polymorphic discriminator column.
+
+.. sourcecode:: python+sql
+
+ mapper(Employee, employees, polymorphic_on=employees.c.type, polymorphic_identity='employee')
+ mapper(Engineer, engineers, inherits=Employee, polymorphic_identity='engineer')
+ mapper(Manager, managers, inherits=Employee, polymorphic_identity='manager')
+
+And that's it. Querying against ``Employee`` will return a combination of ``Employee``, ``Engineer`` and ``Manager`` objects. Newly saved ``Engineer``, ``Manager``, and ``Employee`` objects will automatically populate the ``employees.type`` column with ``engineer``, ``manager``, or ``employee``, as appropriate.
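+
+A brief usage sketch (assuming a working ``session``; the constructors are those of the classes defined earlier)::
+
+    session.save(Engineer('john', 'knows how to code'))
+    session.save(Manager('mary', 'oversees things'))
+    session.flush()
+
+    # returns a mix of Engineer and Manager instances,
+    # discriminated by the employees.type column
+    employees = session.query(Employee).all()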
+
+Controlling Which Tables are Queried
++++++++++++++++++++++++++++++++++++++
+
+The ``with_polymorphic()`` method of ``Query`` affects the specific subclass tables which the Query selects from. Normally, a query such as this:
+
+.. sourcecode:: python+sql
+
+ session.query(Employee).all()
+
+...selects only from the ``employees`` table. When loading fresh from the database, our joined-table setup will query from the parent table only, using SQL such as this:
+
+.. sourcecode:: python+sql
+
+ {opensql}
+ SELECT employees.employee_id AS employees_employee_id, employees.name AS employees_name, employees.type AS employees_type
+ FROM employees
+ []
+
+As attributes are requested from those ``Employee`` objects which are represented in either the ``engineers`` or ``managers`` child tables, a second load is issued for the columns in that related row, if the data was not already loaded. So above, after accessing the objects you'd see further SQL issued along the lines of:
+
+.. sourcecode:: python+sql
+
+ {opensql}
+ SELECT managers.employee_id AS managers_employee_id, managers.manager_data AS managers_manager_data
+ FROM managers
+ WHERE ? = managers.employee_id
+ [5]
+ SELECT engineers.employee_id AS engineers_employee_id, engineers.engineer_info AS engineers_engineer_info
+ FROM engineers
+ WHERE ? = engineers.employee_id
+ [2]
+
+This behavior works well when issuing searches for small numbers of items, such as when using ``get()``, since the full range of joined tables is not pulled into the SQL statement unnecessarily. But when querying a larger span of rows which are known to be of many types, you may want to actively join to some or all of the joined tables. The ``with_polymorphic`` feature of ``Query`` and ``mapper`` provides this.
+
+Telling our query to polymorphically load ``Engineer`` and ``Manager`` objects:
+
+.. sourcecode:: python+sql
+
+ query = session.query(Employee).with_polymorphic([Engineer, Manager])
+
+produces a query which joins the ``employees`` table to both the ``engineers`` and ``managers`` tables like the following:
+
+.. sourcecode:: python+sql
+
+ query.all()
+ {opensql}
+ SELECT employees.employee_id AS employees_employee_id, engineers.employee_id AS engineers_employee_id, managers.employee_id AS managers_employee_id, employees.name AS employees_name, employees.type AS employees_type, engineers.engineer_info AS engineers_engineer_info, managers.manager_data AS managers_manager_data
+ FROM employees LEFT OUTER JOIN engineers ON employees.employee_id = engineers.employee_id LEFT OUTER JOIN managers ON employees.employee_id = managers.employee_id
+ []
+
+``with_polymorphic()`` accepts a single class or mapper, a list of classes/mappers, or the string ``'*'`` to indicate all subclasses:
+
+.. sourcecode:: python+sql
+
+ # join to the engineers table
+ query.with_polymorphic(Engineer)
+
+ # join to the engineers and managers tables
+ query.with_polymorphic([Engineer, Manager])
+
+ # join to all subclass tables
+ query.with_polymorphic('*')
+
+It also accepts a second argument ``selectable`` which replaces the automatic join creation and instead selects directly from the selectable given. This feature is normally used with "concrete" inheritance, described later, but can be used with any kind of inheritance setup in the case that specialized SQL should be used to load polymorphically:
+
+.. sourcecode:: python+sql
+
+ # custom selectable
+ query.with_polymorphic([Engineer, Manager], employees.outerjoin(managers).outerjoin(engineers))
+
+``with_polymorphic()`` is also needed when you wish to add filter criterion that is specific to one or more subclasses, so that those columns are available to the WHERE clause:
+
+.. sourcecode:: python+sql
+
+ session.query(Employee).with_polymorphic([Engineer, Manager]).\
+ filter(or_(Engineer.engineer_info=='w', Manager.manager_data=='q'))
+
+Note that if you only need to load a single subtype, such as just the ``Engineer`` objects, ``with_polymorphic()`` is not needed since you would query against the ``Engineer`` class directly.
+
+The mapper also accepts ``with_polymorphic`` as a configurational argument so that the joined-style load will be issued automatically. This argument may be the string ``'*'``, a list of classes, or a tuple consisting of either, followed by a selectable.
+
+.. sourcecode:: python+sql
+
+ mapper(Employee, employees, polymorphic_on=employees.c.type, \
+ polymorphic_identity='employee', with_polymorphic='*')
+ mapper(Engineer, engineers, inherits=Employee, polymorphic_identity='engineer')
+ mapper(Manager, managers, inherits=Employee, polymorphic_identity='manager')
+
+The above mapping will produce a query similar to that of ``with_polymorphic('*')`` for every query of ``Employee`` objects.
+
+Using ``with_polymorphic()`` with ``Query`` will override the mapper-level ``with_polymorphic`` setting.
+
+Creating Joins to Specific Subtypes
+++++++++++++++++++++++++++++++++++++
+
+The ``of_type()`` method is a helper which allows the construction of joins along ``relation`` paths while narrowing the criterion to specific subclasses. Suppose the ``employees`` table represents a collection of employees which are associated with a ``Company`` object. We'll add a ``company_id`` column to the ``employees`` table and a new table ``companies``:
+
+.. sourcecode:: python+sql
+
+ companies = Table('companies', metadata,
+ Column('company_id', Integer, primary_key=True),
+ Column('name', String(50))
+ )
+
+ employees = Table('employees', metadata,
+ Column('employee_id', Integer, primary_key=True),
+ Column('name', String(50)),
+ Column('type', String(30), nullable=False),
+ Column('company_id', Integer, ForeignKey('companies.company_id'))
+ )
+
+ class Company(object):
+ pass
+
+ mapper(Company, companies, properties={
+ 'employees': relation(Employee)
+ })
+
+When querying from ``Company`` onto the ``Employee`` relation, the ``join()`` method as well as the ``any()`` and ``has()`` operators will create a join from ``companies`` to ``employees``, without including ``engineers`` or ``managers`` in the mix. If we wish to have criterion which is specifically against the ``Engineer`` class, we can tell those methods to join or subquery against the joined table representing the subclass using the ``of_type()`` operator:
+
+.. sourcecode:: python+sql
+
+ session.query(Company).join(Company.employees.of_type(Engineer)).filter(Engineer.engineer_info=='someinfo')
+
+A longhand version of this would involve spelling out the full target selectable within a 2-tuple:
+
+.. sourcecode:: python+sql
+
+ session.query(Company).join((employees.join(engineers), Company.employees)).filter(Engineer.engineer_info=='someinfo')
+
+Currently, ``of_type()`` accepts a single class argument. It may be expanded later on to accept multiple classes. For now, to join to any group of subclasses, the longhand notation allows this flexibility:
+
+.. sourcecode:: python+sql
+
+ session.query(Company).join((employees.outerjoin(engineers).outerjoin(managers), Company.employees)).\
+ filter(or_(Engineer.engineer_info=='someinfo', Manager.manager_data=='somedata'))
+
+The ``any()`` and ``has()`` operators also can be used with ``of_type()`` when the embedded criterion is in terms of a subclass:
+
+.. sourcecode:: python+sql
+
+ session.query(Company).filter(Company.employees.of_type(Engineer).any(Engineer.engineer_info=='someinfo')).all()
+
+Note that ``any()`` and ``has()`` are both shorthand for a correlated EXISTS query. Building one by hand looks like:
+
+.. sourcecode:: python+sql
+
+ session.query(Company).filter(
+ exists([1],
+ and_(Engineer.engineer_info=='someinfo', employees.c.company_id==companies.c.company_id),
+ from_obj=employees.join(engineers)
+ )
+ ).all()
+
+The EXISTS subquery above selects from the join of ``employees`` to ``engineers``, and also specifies criterion which correlates the EXISTS subselect back to the parent ``companies`` table.
+
+Single Table Inheritance
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Single table inheritance is where the attributes of the base class as well as all subclasses are represented within a single table. A column is present in the table for every attribute mapped to the base class and all subclasses; the columns which correspond to a single subclass are nullable. This configuration looks much like joined-table inheritance except there's only one table. In this case, a ``type`` column is required, as there would be no other way to discriminate between classes. The table is specified in the base mapper only; for the inheriting classes, leave their ``table`` parameter blank:
+
+.. sourcecode:: python+sql
+
+ employees_table = Table('employees', metadata,
+ Column('employee_id', Integer, primary_key=True),
+ Column('name', String(50)),
+ Column('manager_data', String(50)),
+ Column('engineer_info', String(50)),
+ Column('type', String(20), nullable=False)
+ )
+
+ employee_mapper = mapper(Employee, employees_table, \
+ polymorphic_on=employees_table.c.type, polymorphic_identity='employee')
+ manager_mapper = mapper(Manager, inherits=employee_mapper, polymorphic_identity='manager')
+ engineer_mapper = mapper(Engineer, inherits=employee_mapper, polymorphic_identity='engineer')
+
+Note that the mappers for the derived classes ``Manager`` and ``Engineer`` omit the specification of their associated table, as it is inherited from ``employee_mapper``. Omitting the table specification for derived mappers in single-table inheritance is required.
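+
+Queries against a subclass then discriminate on the ``type`` column automatically (a brief sketch, assuming a working ``session``)::
+
+    # the ORM adds criterion such as "employees.type IN ('manager')"
+    managers = session.query(Manager).all()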
+
+Concrete Table Inheritance
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This form of inheritance maps each class to a distinct table, as below:
+
+.. sourcecode:: python+sql
+
+ employees_table = Table('employees', metadata,
+ Column('employee_id', Integer, primary_key=True),
+ Column('name', String(50)),
+ )
+
+ managers_table = Table('managers', metadata,
+ Column('employee_id', Integer, primary_key=True),
+ Column('name', String(50)),
+ Column('manager_data', String(50)),
+ )
+
+ engineers_table = Table('engineers', metadata,
+ Column('employee_id', Integer, primary_key=True),
+ Column('name', String(50)),
+ Column('engineer_info', String(50)),
+ )
+
+Notice in this case there is no ``type`` column. If polymorphic loading is not required, there's no advantage to using ``inherits`` here; you just define a separate mapper for each class:
+
+.. sourcecode:: python+sql
+
+ mapper(Employee, employees_table)
+ mapper(Manager, managers_table)
+ mapper(Engineer, engineers_table)
+
+To load polymorphically, the ``with_polymorphic`` argument is required, along with a selectable indicating how rows should be loaded. In this case we must construct a UNION of all three tables. SQLAlchemy includes a helper function called ``polymorphic_union`` to create these; it maps all the different columns into a structure of selects with the same numbers and names of columns, and also generates a virtual ``type`` column for each subselect:
+
+.. sourcecode:: python+sql
+
+ pjoin = polymorphic_union({
+ 'employee': employees_table,
+ 'manager': managers_table,
+ 'engineer': engineers_table
+ }, 'type', 'pjoin')
+
+ employee_mapper = mapper(Employee, employees_table, with_polymorphic=('*', pjoin), \
+ polymorphic_on=pjoin.c.type, polymorphic_identity='employee')
+ manager_mapper = mapper(Manager, managers_table, inherits=employee_mapper, \
+ concrete=True, polymorphic_identity='manager')
+ engineer_mapper = mapper(Engineer, engineers_table, inherits=employee_mapper, \
+ concrete=True, polymorphic_identity='engineer')
+
+Upon select, the polymorphic union produces a query like this:
+
+.. sourcecode:: python+sql
+
+ session.query(Employee).all()
+ {opensql}
+ SELECT pjoin.type AS pjoin_type, pjoin.manager_data AS pjoin_manager_data, pjoin.employee_id AS pjoin_employee_id,
+ pjoin.name AS pjoin_name, pjoin.engineer_info AS pjoin_engineer_info
+ FROM (
+ SELECT employees.employee_id AS employee_id, CAST(NULL AS VARCHAR(50)) AS manager_data, employees.name AS name,
+ CAST(NULL AS VARCHAR(50)) AS engineer_info, 'employee' AS type
+ FROM employees
+ UNION ALL
+ SELECT managers.employee_id AS employee_id, managers.manager_data AS manager_data, managers.name AS name,
+ CAST(NULL AS VARCHAR(50)) AS engineer_info, 'manager' AS type
+ FROM managers
+ UNION ALL
+ SELECT engineers.employee_id AS employee_id, CAST(NULL AS VARCHAR(50)) AS manager_data, engineers.name AS name,
+ engineers.engineer_info AS engineer_info, 'engineer' AS type
+ FROM engineers
+ ) AS pjoin
+ []
+
+Using Relations with Inheritance
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Both joined-table and single table inheritance scenarios produce mappings which are usable in ``relation()`` functions; that is, it's possible to map a parent object to a child object which is polymorphic. Similarly, inheriting mappers can have ``relation()``\ s of their own at any level, which are inherited by each child class. The only requirement for relations is that there is a table relationship between parent and child. An example is the following modification to the joined table inheritance example, which sets a bi-directional relationship between ``Employee`` and ``Company``:
+
+.. sourcecode:: python+sql
+
+ employees_table = Table('employees', metadata,
+ Column('employee_id', Integer, primary_key=True),
+ Column('name', String(50)),
+ Column('company_id', Integer, ForeignKey('companies.company_id'))
+ )
+
+ companies = Table('companies', metadata,
+ Column('company_id', Integer, primary_key=True),
+ Column('name', String(50)))
+
+ class Company(object):
+ pass
+
+ mapper(Company, companies, properties={
+ 'employees': relation(Employee, backref='company')
+ })
+
+SQLAlchemy has a lot of experience in this area; the optimized "outer join" approach can be used freely for parent and child relationships, eager loads are fully usable, and query aliasing and other tricks are fully supported as well.
+
+In a concrete inheritance scenario, mapping relations is more difficult since the distinct classes do not share a table. In this case, you *can* establish a relationship from parent to child as long as a join condition can be constructed from parent to child, i.e. each child table contains a foreign key to the parent:
+
+.. sourcecode:: python+sql
+
+ companies = Table('companies', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('name', String(50)))
+
+ employees_table = Table('employees', metadata,
+ Column('employee_id', Integer, primary_key=True),
+ Column('name', String(50)),
+ Column('company_id', Integer, ForeignKey('companies.id'))
+ )
+
+ managers_table = Table('managers', metadata,
+ Column('employee_id', Integer, primary_key=True),
+ Column('name', String(50)),
+ Column('manager_data', String(50)),
+ Column('company_id', Integer, ForeignKey('companies.id'))
+ )
+
+ engineers_table = Table('engineers', metadata,
+ Column('employee_id', Integer, primary_key=True),
+ Column('name', String(50)),
+ Column('engineer_info', String(50)),
+ Column('company_id', Integer, ForeignKey('companies.id'))
+ )
+
+    # "pjoin" is the polymorphic_union of the previous section, re-created
+    # against the tables above (which now include the company_id column)
+    employee_mapper = mapper(Employee, employees_table, with_polymorphic=('*', pjoin), \
+        polymorphic_on=pjoin.c.type, polymorphic_identity='employee')
+ mapper(Manager, managers_table, inherits=employee_mapper, concrete=True, polymorphic_identity='manager')
+ mapper(Engineer, engineers_table, inherits=employee_mapper, concrete=True, polymorphic_identity='engineer')
+ mapper(Company, companies, properties={
+ 'employees': relation(Employee)
+ })
+
+Let's crank it up and try loading with an eager load:
+
+.. sourcecode:: python+sql
+
+ session.query(Company).options(eagerload('employees')).all()
+ {opensql}
+ SELECT anon_1.type AS anon_1_type, anon_1.manager_data AS anon_1_manager_data, anon_1.engineer_info AS anon_1_engineer_info,
+ anon_1.employee_id AS anon_1_employee_id, anon_1.name AS anon_1_name, anon_1.company_id AS anon_1_company_id,
+ companies.id AS companies_id, companies.name AS companies_name
+ FROM companies LEFT OUTER JOIN (SELECT CAST(NULL AS VARCHAR(50)) AS engineer_info, employees.employee_id AS employee_id,
+ CAST(NULL AS VARCHAR(50)) AS manager_data, employees.name AS name, employees.company_id AS company_id, 'employee' AS type
+ FROM employees UNION ALL SELECT CAST(NULL AS VARCHAR(50)) AS engineer_info, managers.employee_id AS employee_id,
+ managers.manager_data AS manager_data, managers.name AS name, managers.company_id AS company_id, 'manager' AS type
+ FROM managers UNION ALL SELECT engineers.engineer_info AS engineer_info, engineers.employee_id AS employee_id,
+ CAST(NULL AS VARCHAR(50)) AS manager_data, engineers.name AS name, engineers.company_id AS company_id, 'engineer' AS type
+ FROM engineers) AS anon_1 ON companies.id = anon_1.company_id
+ []
+
+The big limitation with concrete table inheritance is that ``relation()``\ s placed on each concrete mapper do **not** propagate to child mappers. If you want to have the same ``relation()``\ s set up on all concrete mappers, they must be configured manually on each, as in the sketch below.
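+
+A sketch of that manual configuration, giving each concrete subclass its own ``company`` many-to-one (``Company`` and the tables are those of the example above)::
+
+    mapper(Manager, managers_table, inherits=employee_mapper, concrete=True,
+           polymorphic_identity='manager', properties={
+               'company': relation(Company)
+           })
+    mapper(Engineer, engineers_table, inherits=employee_mapper, concrete=True,
+           polymorphic_identity='engineer', properties={
+               'company': relation(Company)
+           })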
+
+Mapping a Class against Multiple Tables
+----------------------------------------
+
+
+Mappers can be constructed against arbitrary relational units (called ``Selectables``) as well as plain ``Tables``. For example, the ``join`` function from the SQL package creates a neat selectable unit comprised of multiple tables, complete with its own composite primary key, which can be passed to a mapper as the table.
+
+.. sourcecode:: python+sql
+
+ # a class
+ class AddressUser(object):
+ pass
+
+ # define a Join
+ j = join(users_table, addresses_table)
+
+ # map to it - the identity of an AddressUser object will be
+ # based on (user_id, address_id) since those are the primary keys involved
+ mapper(AddressUser, j, properties={
+ 'user_id': [users_table.c.user_id, addresses_table.c.user_id]
+ })
+
+A second example:
+
+.. sourcecode:: python+sql
+
+ # many-to-many join on an association table
+ j = join(users_table, userkeywords,
+ users_table.c.user_id==userkeywords.c.user_id).join(keywords,
+ userkeywords.c.keyword_id==keywords.c.keyword_id)
+
+ # a class
+ class KeywordUser(object):
+ pass
+
+ # map to it - the identity of a KeywordUser object will be
+ # (user_id, keyword_id) since those are the primary keys involved
+ mapper(KeywordUser, j, properties={
+ 'user_id': [users_table.c.user_id, userkeywords.c.user_id],
+ 'keyword_id': [userkeywords.c.keyword_id, keywords.c.keyword_id]
+ })
+
+In both examples above, "composite" columns were added as properties to the mappers; these are aggregations of multiple columns into one mapper property, which instructs the mapper to keep both of those columns set at the same value.
+
+Mapping a Class against Arbitrary Selects
+------------------------------------------
+
+
+Similar to mapping against a join, a plain select() object can be used with a mapper as well. Below, an example select which contains two aggregate functions and a group_by is mapped to a class:
+
+.. sourcecode:: python+sql
+
+    s = select([customers,
+                func.count(orders.c.customer_id).label('order_count'),
+                func.max(orders.c.price).label('highest_order')],
+                customers.c.customer_id==orders.c.customer_id,
+                group_by=[c for c in customers.c]
+                ).alias('somealias')
+
+    class Customer(object):
+ pass
+
+ mapper(Customer, s)
+
+Above, the "customers" table is joined against the "orders" table to produce a full row for each customer row, the total count of related rows in the "orders" table, and the highest price in the "orders" table, grouped against the full set of columns in the "customers" table. That query is then mapped against the Customer class. New instances of Customer will contain attributes for each column in the "customers" table as well as an "order_count" and "highest_order" attribute. Updates to the Customer object will only be reflected in the "customers" table and not the "orders" table. This is because the primary key columns of the "orders" table are not represented in this mapper and therefore the table is not affected by save or delete operations.
+
+Multiple Mappers for One Class
+-------------------------------
+
+
+The first mapper created for a certain class is known as that class's "primary mapper." Other mappers can be created as well on the "load side" - these are called **secondary mappers**. A secondary mapper must be constructed with the keyword argument ``non_primary=True``, and represents a load-only mapper. Objects that are loaded with a secondary mapper will have their save operation processed by the primary mapper. It is also invalid to add new ``relation()``\ s to a non-primary mapper. To use this mapper with the Session, specify it to the ``query`` method:
+
+.. sourcecode:: python+sql
+
+ # primary mapper
+ mapper(User, users_table)
+
+ # make a secondary mapper to load User against a join
+ othermapper = mapper(User, users_table.join(someothertable), non_primary=True)
+
+ # select
+ result = session.query(othermapper).select()
+
+The "non primary mapper" is a rarely needed feature of SQLAlchemy; in most cases, the ``Query`` object can produce any kind of query that's desired. It's recommended that a straight ``Query`` be used in place of a non-primary mapper unless the mapper approach is absolutely needed. Current use cases for the "non primary mapper" are when you want to map the class to a particular select statement or view to which additional query criterion can be added, and for when the particular mapped select statement or view is to be placed in a ``relation()`` of a parent mapper.
+
+Versions of SQLAlchemy previous to 0.5 included another mapper flag called "entity_name"; as of version 0.5.0 this feature has been removed (it never worked very well).
+
+Constructors and Object Initialization
+---------------------------------------
+
+Mapping imposes no restrictions or requirements on the constructor (``__init__``) method for the class. You are free to require any arguments for the function
+that you wish, assign attributes to the instance that are unknown to the ORM, and generally do anything else you would normally do when writing a constructor
+for a Python class.
+
+The SQLAlchemy ORM does not call ``__init__`` when recreating objects from database rows. The ORM's process is somewhat akin to the Python standard library's
+``pickle`` module, invoking the low level ``__new__`` method and then quietly restoring attributes directly on the instance rather than calling ``__init__``.
+
+If you need to do some setup on database-loaded instances before they're ready to use, you can use the ``@reconstructor`` decorator to tag a method as the ORM
+counterpart to ``__init__``. SQLAlchemy will call this method with no arguments every time it loads or reconstructs one of your instances. This is useful for
+recreating transient properties that are normally assigned in your ``__init__``::
+
+ from sqlalchemy import orm
+
+ class MyMappedClass(object):
+ def __init__(self, data):
+ self.data = data
+ # we need stuff on all instances, but not in the database.
+ self.stuff = []
+
+ @orm.reconstructor
+ def init_on_load(self):
+ self.stuff = []
+
+When ``obj = MyMappedClass()`` is executed, Python calls the ``__init__`` method as normal and the ``data`` argument is required. When instances are loaded
+during a ``Query`` operation as in ``query(MyMappedClass).one()``, ``init_on_load`` is called instead.
+
+Any method may be tagged as the ``reconstructor``, even the ``__init__`` method. SQLAlchemy will call the reconstructor method with no arguments. Scalar
+(non-collection) database-mapped attributes of the instance will be available for use within the function. Eagerly-loaded collections are generally not yet
+available and will usually only contain the first element. ORM state changes made to objects at this stage will not be recorded for the next flush()
+operation, so the activity within a reconstructor should be conservative.
+
+While the ORM does not call your ``__init__`` method, it will modify the class's ``__init__`` slightly. The method is lightly wrapped to act as a trigger for
+the ORM, allowing mappers to be compiled automatically, and will fire an ``init_instance`` event that ``MapperExtension`` objects may listen for.
+``MapperExtension`` objects can also listen for a ``reconstruct_instance`` event, analogous to the ``reconstructor`` decorator above.
+
+.. _extending_mapper:
+
+Extending Mapper
+-----------------
+
+Mappers can have functionality augmented or replaced at many points in their execution via the use of the ``MapperExtension`` class. This class is just a series of "hooks" where various functionality takes place. An application can make its own ``MapperExtension`` objects, overriding only the methods it needs. Methods that are not overridden return the special value ``sqlalchemy.orm.EXT_CONTINUE``, allowing processing to continue to the next ``MapperExtension`` or to simply proceed normally if there are no more extensions.
+
+API documentation for MapperExtension: :class:`sqlalchemy.orm.interfaces.MapperExtension`
+
+To use MapperExtension, make your own subclass of it and just send it off to a mapper::
+
+ m = mapper(User, users_table, extension=MyExtension())
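+
+where ``MyExtension`` overrides only the hooks of interest, e.g. (a minimal sketch; the ``created`` attribute is a hypothetical mapped column)::
+
+    import datetime
+
+    from sqlalchemy.orm import EXT_CONTINUE
+    from sqlalchemy.orm.interfaces import MapperExtension
+
+    class MyExtension(MapperExtension):
+        def before_insert(self, mapper, connection, instance):
+            # stamp the hypothetical "created" attribute before INSERT;
+            # hooks which are not overridden return EXT_CONTINUE
+            instance.created = datetime.datetime.now()
+            return EXT_CONTINUE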
+
+Multiple extensions will be chained together and processed in order; they are specified as a list::
+
+ m = mapper(User, users_table, extension=[ext1, ext2, ext3])
+
+Relation Configuration
+=======================
+
+Basic Relational Patterns
+--------------------------
+
+A quick walkthrough of the basic relational patterns.
+
+One To Many
+~~~~~~~~~~~~
+
+A one to many relationship places a foreign key in the child table referencing the parent. SQLAlchemy creates the relationship as a collection on the parent object containing instances of the child object.
+
+.. sourcecode:: python+sql
+
+ parent_table = Table('parent', metadata,
+ Column('id', Integer, primary_key=True))
+
+ child_table = Table('child', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('parent_id', Integer, ForeignKey('parent.id')))
+
+ class Parent(object):
+ pass
+
+ class Child(object):
+ pass
+
+ mapper(Parent, parent_table, properties={
+ 'children': relation(Child)
+ })
+
+ mapper(Child, child_table)
+
+To establish a bi-directional relationship in one-to-many, where the "reverse" side is a many to one, specify the ``backref`` option:
+
+.. sourcecode:: python+sql
+
+ mapper(Parent, parent_table, properties={
+ 'children': relation(Child, backref='parent')
+ })
+
+ mapper(Child, child_table)
+
+``Child`` will get a ``parent`` attribute with many-to-one semantics.
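+
+A quick sketch of the bi-directional behavior::
+
+    p = Parent()
+    c = Child()
+    p.children.append(c)
+
+    # the backref keeps the reverse attribute in sync
+    assert c.parent is p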
+
+Many To One
+~~~~~~~~~~~~
+
+
+Many to one places a foreign key in the parent table referencing the child. The mapping setup is identical to one-to-many; however, SQLAlchemy creates the relationship as a scalar attribute on the parent object referencing a single instance of the child object.
+
+.. sourcecode:: python+sql
+
+ parent_table = Table('parent', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('child_id', Integer, ForeignKey('child.id')))
+
+ child_table = Table('child', metadata,
+ Column('id', Integer, primary_key=True),
+ )
+
+ class Parent(object):
+ pass
+
+ class Child(object):
+ pass
+
+ mapper(Parent, parent_table, properties={
+ 'child': relation(Child)
+ })
+
+ mapper(Child, child_table)
+
+Backref behavior is available here as well, where ``backref="parents"`` will place a one-to-many collection on the ``Child`` class.
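+
+For example (a sketch)::
+
+    mapper(Parent, parent_table, properties={
+        # each Child receives a "parents" collection of Parent objects
+        'child': relation(Child, backref='parents')
+    })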
+
+One To One
+~~~~~~~~~~~
+
+
+One To One is essentially a bi-directional relationship with a scalar attribute on both sides. To achieve this, the ``uselist=False`` flag indicates the placement of a scalar attribute instead of a collection on the "many" side of the relationship. To convert one-to-many into one-to-one:
+
+.. sourcecode:: python+sql
+
+ mapper(Parent, parent_table, properties={
+ 'child': relation(Child, uselist=False, backref='parent')
+ })
+
+Or to turn many-to-one into one-to-one:
+
+.. sourcecode:: python+sql
+
+ mapper(Parent, parent_table, properties={
+ 'child': relation(Child, backref=backref('parent', uselist=False))
+ })
+
+Many To Many
+~~~~~~~~~~~~~
+
+
+Many to Many adds an association table between two classes. The association table is indicated by the ``secondary`` argument to ``relation()``.
+
+.. sourcecode:: python+sql
+
+ left_table = Table('left', metadata,
+ Column('id', Integer, primary_key=True))
+
+ right_table = Table('right', metadata,
+ Column('id', Integer, primary_key=True))
+
+ association_table = Table('association', metadata,
+ Column('left_id', Integer, ForeignKey('left.id')),
+ Column('right_id', Integer, ForeignKey('right.id')),
+ )
+
+ mapper(Parent, left_table, properties={
+ 'children': relation(Child, secondary=association_table)
+ })
+
+ mapper(Child, right_table)
+
+For a bi-directional relationship, both sides of the relation contain a collection by default, which can be modified on either side via the ``uselist`` flag to be scalar. The ``backref`` keyword will automatically use the same ``secondary`` argument for the reverse relation:
+
+.. sourcecode:: python+sql
+
+ mapper(Parent, left_table, properties={
+ 'children': relation(Child, secondary=association_table, backref='parents')
+ })
+
+.. _association_pattern:
+
+Association Object
+~~~~~~~~~~~~~~~~~~
+
+The association object pattern is a variant on many-to-many: it is used specifically when your association table contains additional columns beyond those which are foreign keys to the left and right tables. Instead of using the ``secondary`` argument, you map a new class directly to the association table. The left side of the relation references the association object via one-to-many, and the association class references the right side via many-to-one.
+
+.. sourcecode:: python+sql
+
+ left_table = Table('left', metadata,
+ Column('id', Integer, primary_key=True))
+
+ right_table = Table('right', metadata,
+ Column('id', Integer, primary_key=True))
+
+ association_table = Table('association', metadata,
+ Column('left_id', Integer, ForeignKey('left.id'), primary_key=True),
+ Column('right_id', Integer, ForeignKey('right.id'), primary_key=True),
+ Column('data', String(50))
+ )
+
+ mapper(Parent, left_table, properties={
+ 'children':relation(Association)
+ })
+
+ mapper(Association, association_table, properties={
+ 'child':relation(Child)
+ })
+
+ mapper(Child, right_table)
+
+The bi-directional version adds backrefs to both relations:
+
+.. sourcecode:: python+sql
+
+ mapper(Parent, left_table, properties={
+ 'children':relation(Association, backref="parent")
+ })
+
+ mapper(Association, association_table, properties={
+ 'child':relation(Child, backref="parent_assocs")
+ })
+
+ mapper(Child, right_table)
+
+Working with the association pattern in its direct form requires that child objects be associated with an association instance before being appended to the parent; similarly, access from parent to child goes through the association object:
+
+.. sourcecode:: python+sql
+
+ # create parent, append a child via association
+ p = Parent()
+ a = Association()
+ a.child = Child()
+ p.children.append(a)
+
+ # iterate through child objects via association, including association
+ # attributes
+ for assoc in p.children:
+ print assoc.data
+ print assoc.child
+
+To enhance the association object pattern such that direct access to the ``Association`` object is optional, SQLAlchemy provides the :ref:`associationproxy`.
+
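+A minimal sketch of that extension (the ``child_objects`` attribute name is invented for illustration)::
+
+    from sqlalchemy.ext.associationproxy import association_proxy
+
+    # each element of Parent.child_objects proxies through an Association
+    # to its 'child' attribute, skipping explicit Association access
+    Parent.child_objects = association_proxy('children', 'child')
+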
+**Important Note**: it is strongly advised that the ``secondary`` table argument not be combined with the Association Object pattern, unless the ``relation()`` which contains the ``secondary`` argument is marked ``viewonly=True``. Otherwise, SQLAlchemy may persist conflicting data to the underlying association table since it is represented by two conflicting mappings. The Association Proxy pattern should be favored in the case where access to the underlying association data is only sometimes needed.
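+
+Where such a shortcut collection is still desired alongside the association mapping, a sketch of the safe, read-only combination described above::
+
+    mapper(Parent, left_table, properties={
+        'children': relation(Association, backref="parent"),
+        # viewonly: writes to this collection are ignored at flush time,
+        # preventing conflicting rows in the 'association' table
+        'all_children': relation(Child, secondary=association_table,
+                                 viewonly=True)
+    })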
+
+Adjacency List Relationships
+-----------------------------
+
+
+The **adjacency list** pattern is a common relational pattern whereby a table contains a foreign key reference to itself. This is the most common and simple way to represent hierarchical data in flat tables. The other way is the "nested sets" model, sometimes called "modified preorder". Despite what many online articles say about modified preorder, the adjacency list model is probably the most appropriate pattern for the large majority of hierarchical storage needs, for reasons of concurrency and reduced complexity, and because modified preorder has little advantage over an application which can fully load subtrees into application space.
+
+SQLAlchemy commonly refers to an adjacency list relation as a **self-referential mapper**. In this example, we'll work with a single table called ``treenodes`` to represent a tree structure::
+
+ nodes = Table('treenodes', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('parent_id', Integer, ForeignKey('treenodes.id')),
+ Column('data', String(50)),
+ )
+
+A graph such as the following::
+
+ root --+---> child1
+ +---> child2 --+--> subchild1
+ | +--> subchild2
+ +---> child3
+
+Would be represented with data such as::
+
+    id       parent_id   data
+    ---      ---------   ---------
+    1        NULL        root
+    2        1           child1
+    3        1           child2
+    4        3           subchild1
+    5        3           subchild2
+    6        1           child3
+
+SQLAlchemy's ``mapper()`` configuration for a self-referential one-to-many relationship is exactly like a "normal" one-to-many relationship. When SQLAlchemy encounters the foreign key relation from ``treenodes`` to ``treenodes``, it assumes one-to-many unless told otherwise:
+
+.. sourcecode:: python+sql
+
+ # entity class
+ class Node(object):
+ pass
+
+ mapper(Node, nodes, properties={
+ 'children': relation(Node)
+ })
+
+To create a many-to-one relationship from child to parent, an extra indicator of the "remote side" is added, which contains the ``Column`` object or objects indicating the remote side of the relation:
+
+.. sourcecode:: python+sql
+
+ mapper(Node, nodes, properties={
+ 'parent': relation(Node, remote_side=[nodes.c.id])
+ })
+
+And the bi-directional version combines both:
+
+.. sourcecode:: python+sql
+
+ mapper(Node, nodes, properties={
+ 'children': relation(Node, backref=backref('parent', remote_side=[nodes.c.id]))
+ })
+
+There are several examples included with SQLAlchemy illustrating self-referential strategies; these include `basic_tree.py <http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/examples/adjacencytree/basic_tree.py>`_ and `optimized_al.py <http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/examples/elementtree/optimized_al.py>`_, the latter of which illustrates how to persist and search XML documents in conjunction with `ElementTree <http://effbot.org/zone/element-index.htm>`_.
+
+Self-Referential Query Strategies
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+Querying self-referential structures is done in the same way as any other query in SQLAlchemy. Below, we query for any node whose ``data`` attribute stores the value ``child2``:
+
+.. sourcecode:: python+sql
+
+ # get all nodes named 'child2'
+ session.query(Node).filter(Node.data=='child2')
+
+On the subject of joins, i.e. those described in `datamapping_joins`, self-referential structures require the use of aliases so that the same table can be referenced multiple times within the FROM clause of the query. Aliasing can be done either manually, using the ``nodes`` ``Table`` object as a source of aliases:
+
+.. sourcecode:: python+sql
+
+ # get all nodes named 'subchild1' with a parent named 'child2'
+ nodealias = nodes.alias()
+ {sql}session.query(Node).filter(Node.data=='subchild1').\
+ filter(and_(Node.parent_id==nodealias.c.id, nodealias.c.data=='child2')).all()
+ SELECT treenodes.id AS treenodes_id, treenodes.parent_id AS treenodes_parent_id, treenodes.data AS treenodes_data
+ FROM treenodes, treenodes AS treenodes_1
+ WHERE treenodes.data = ? AND treenodes.parent_id = treenodes_1.id AND treenodes_1.data = ?
+ ['subchild1', 'child2']
+
+or automatically, using ``join()`` with ``aliased=True``:
+
+.. sourcecode:: python+sql
+
+ # get all nodes named 'subchild1' with a parent named 'child2'
+ {sql}session.query(Node).filter(Node.data=='subchild1').\
+ join('parent', aliased=True).filter(Node.data=='child2').all()
+ SELECT treenodes.id AS treenodes_id, treenodes.parent_id AS treenodes_parent_id, treenodes.data AS treenodes_data
+ FROM treenodes JOIN treenodes AS treenodes_1 ON treenodes_1.id = treenodes.parent_id
+ WHERE treenodes.data = ? AND treenodes_1.data = ?
+ ['subchild1', 'child2']
+
+To add criterion to multiple points along a longer join, use ``from_joinpoint=True``:
+
+.. sourcecode:: python+sql
+
+ # get all nodes named 'subchild1' with a parent named 'child2' and a grandparent 'root'
+ {sql}session.query(Node).filter(Node.data=='subchild1').\
+ join('parent', aliased=True).filter(Node.data=='child2').\
+ join('parent', aliased=True, from_joinpoint=True).filter(Node.data=='root').all()
+ SELECT treenodes.id AS treenodes_id, treenodes.parent_id AS treenodes_parent_id, treenodes.data AS treenodes_data
+ FROM treenodes JOIN treenodes AS treenodes_1 ON treenodes_1.id = treenodes.parent_id JOIN treenodes AS treenodes_2 ON treenodes_2.id = treenodes_1.parent_id
+ WHERE treenodes.data = ? AND treenodes_1.data = ? AND treenodes_2.data = ?
+ ['subchild1', 'child2', 'root']
+
+Configuring Eager Loading
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+Eager loading of relations occurs using joins or outerjoins from parent to child table during a normal query operation, such that the parent and its child collection can be populated from a single SQL statement. SQLAlchemy's eager loading uses aliased tables in all cases when joining to related items, so it is compatible with self-referential joining. However, to use eager loading with a self-referential relation, SQLAlchemy needs to be told how many levels deep it should join; otherwise the eager load will not take place. This depth setting is configured via ``join_depth``:
+
+.. sourcecode:: python+sql
+
+ mapper(Node, nodes, properties={
+ 'children': relation(Node, lazy=False, join_depth=2)
+ })
+
+ {sql}session.query(Node).all()
+ SELECT treenodes_1.id AS treenodes_1_id, treenodes_1.parent_id AS treenodes_1_parent_id, treenodes_1.data AS treenodes_1_data, treenodes_2.id AS treenodes_2_id, treenodes_2.parent_id AS treenodes_2_parent_id, treenodes_2.data AS treenodes_2_data, treenodes.id AS treenodes_id, treenodes.parent_id AS treenodes_parent_id, treenodes.data AS treenodes_data
+ FROM treenodes LEFT OUTER JOIN treenodes AS treenodes_2 ON treenodes.id = treenodes_2.parent_id LEFT OUTER JOIN treenodes AS treenodes_1 ON treenodes_2.id = treenodes_1.parent_id
+ []
+
+Specifying Alternate Join Conditions to relation()
+---------------------------------------------------
+
+
+The ``relation()`` function uses the foreign key relationship between the parent and child tables to formulate the **primary join condition** between parent and child; in the case of a many-to-many relationship it also formulates the **secondary join condition**. If you are working with a ``Table`` which has no ``ForeignKey`` objects on it (which can be the case when using reflected tables with MySQL), or if the join condition cannot be expressed by a simple foreign key relationship, use the ``primaryjoin`` and possibly ``secondaryjoin`` conditions to create the appropriate relationship.
+
+In this example we create a relation ``boston_addresses`` which will only load the user addresses with a city of "Boston":
+
+.. sourcecode:: python+sql
+
+ class User(object):
+ pass
+ class Address(object):
+ pass
+
+ mapper(Address, addresses_table)
+ mapper(User, users_table, properties={
+ 'boston_addresses': relation(Address, primaryjoin=
+ and_(users_table.c.user_id==addresses_table.c.user_id,
+ addresses_table.c.city=='Boston'))
+ })
+
+Many to many relationships can be customized by one or both of ``primaryjoin`` and ``secondaryjoin``, shown below with just the default many-to-many relationship explicitly set:
+
+.. sourcecode:: python+sql
+
+ class User(object):
+ pass
+ class Keyword(object):
+ pass
+ mapper(Keyword, keywords_table)
+ mapper(User, users_table, properties={
+ 'keywords': relation(Keyword, secondary=userkeywords_table,
+ primaryjoin=users_table.c.user_id==userkeywords_table.c.user_id,
+ secondaryjoin=userkeywords_table.c.keyword_id==keywords_table.c.keyword_id
+ )
+ })
+
+Specifying Foreign Keys
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+When using ``primaryjoin`` and ``secondaryjoin``, SQLAlchemy also needs to be aware of which columns in the relation reference the other. In most cases, a ``Table`` construct will have ``ForeignKey`` constructs which take care of this; however, in the case of reflected tables on a database that does not report FKs (like MySQL ISAM) or when using join conditions on columns that don't have foreign keys, the ``relation()`` needs to be told specifically which columns are "foreign" using the ``foreign_keys`` collection:
+
+.. sourcecode:: python+sql
+
+ mapper(Address, addresses_table)
+ mapper(User, users_table, properties={
+ 'addresses': relation(Address, primaryjoin=
+ users_table.c.user_id==addresses_table.c.user_id,
+ foreign_keys=[addresses_table.c.user_id])
+ })
+
+Building Query-Enabled Properties
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+Very ambitious custom join conditions may fail to be directly persistable, and in some cases may not even load correctly. To remove the persistence part of the equation, use the flag ``viewonly=True`` on the ``relation()``, which establishes it as a read-only attribute (data written to the collection will be ignored on flush()). However, in extreme cases, consider using a regular Python property in conjunction with ``Query`` as follows:
+
+.. sourcecode:: python+sql
+
+ class User(object):
+ def _get_addresses(self):
+ return object_session(self).query(Address).with_parent(self).filter(...).all()
+ addresses = property(_get_addresses)
+
+Multiple Relations against the Same Parent/Child
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+There's no restriction on how many times you can relate from parent to child. SQLAlchemy can usually figure out what you want, particularly if the join conditions are straightforward. Below we add a ``newyork_addresses`` attribute to complement the ``boston_addresses`` attribute:
+
+.. sourcecode:: python+sql
+
+ mapper(User, users_table, properties={
+ 'boston_addresses': relation(Address, primaryjoin=
+ and_(users_table.c.user_id==addresses_table.c.user_id,
+ addresses_table.c.city=='Boston')),
+ 'newyork_addresses': relation(Address, primaryjoin=
+ and_(users_table.c.user_id==addresses_table.c.user_id,
+ addresses_table.c.city=='New York')),
+ })
+
+Alternate Collection Implementations
+-------------------------------------
+
+
+Mapping a one-to-many or many-to-many relationship results in a collection of values accessible through an attribute on the parent instance. By default, this collection is a ``list``:
+
+.. sourcecode:: python+sql
+
+    mapper(Parent, parent_table, properties={
+        'children': relation(Child)
+    })
+
+ parent = Parent()
+ parent.children.append(Child())
+ print parent.children[0]
+
+Collections are not limited to lists. Sets, mutable sequences and almost any other Python object that can act as a container can be used in place of the default list:
+
+.. sourcecode:: python+sql
+
+ # use a set
+    mapper(Parent, parent_table, properties={
+        'children': relation(Child, collection_class=set)
+    })
+
+ parent = Parent()
+ child = Child()
+ parent.children.add(child)
+ assert child in parent.children
+
+.. _advdatamapping_entitycollections:
+
+Custom Collection Implementations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can use your own types for collections as well. For most cases, simply inherit from ``list`` or ``set`` and add the custom behavior.
+
+Collections in SQLAlchemy are transparently *instrumented*. Instrumentation means that normal operations on the collection are tracked and result in changes being written to the database at flush time. Additionally, collection operations can fire *events* which indicate some secondary operation must take place. Examples of a secondary operation include saving the child item in the parent's ``Session`` (i.e. the ``save-update`` cascade), as well as synchronizing the state of a bi-directional relationship (i.e. a ``backref``).
+
+The collections package understands the basic interface of lists, sets and dicts and will automatically apply instrumentation to those built-in types and their subclasses. Object-derived types that implement a basic collection interface are detected and instrumented via duck-typing:
+
+.. sourcecode:: python+sql
+
+ class ListLike(object):
+ def __init__(self):
+ self.data = []
+ def append(self, item):
+ self.data.append(item)
+ def remove(self, item):
+ self.data.remove(item)
+ def extend(self, items):
+ self.data.extend(items)
+ def __iter__(self):
+ return iter(self.data)
+ def foo(self):
+ return 'foo'
+
+``append``, ``remove``, and ``extend`` are known list-like methods, and will be instrumented automatically. ``__iter__`` is not a mutator method and won't be instrumented, and ``foo`` won't be either.
+
+Duck-typing (i.e. guesswork) isn't rock-solid, of course, so you can be explicit about the interface you are implementing by providing an ``__emulates__`` class attribute:
+
+.. sourcecode:: python+sql
+
+ class SetLike(object):
+ __emulates__ = set
+
+ def __init__(self):
+ self.data = set()
+ def append(self, item):
+ self.data.add(item)
+ def remove(self, item):
+ self.data.remove(item)
+ def __iter__(self):
+ return iter(self.data)
+
+This class looks list-like because of ``append``, but ``__emulates__`` forces it to be treated as set-like. ``remove`` is known to be part of the set interface and will be instrumented.
+
+But this class won't work quite yet: a little glue is needed to adapt it for use by SQLAlchemy. The ORM needs to know which methods to use to append, remove and iterate over members of the collection. When using a type like ``list`` or ``set``, the appropriate methods are well-known and used automatically when present. This set-like class does not provide the expected ``add`` method, so we must supply an explicit mapping for the ORM via a decorator.
+
+Annotating Custom Collections via Decorators
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+Decorators can be used to tag the individual methods the ORM needs to manage collections. Use them when your class doesn't quite meet the regular interface for its container type, or you simply would like to use a different method to get the job done.
+
+.. sourcecode:: python+sql
+
+ from sqlalchemy.orm.collections import collection
+
+ class SetLike(object):
+ __emulates__ = set
+
+ def __init__(self):
+ self.data = set()
+
+ @collection.appender
+ def append(self, item):
+ self.data.add(item)
+
+ def remove(self, item):
+ self.data.remove(item)
+
+ def __iter__(self):
+ return iter(self.data)
+
+And that's all that's needed to complete the example. SQLAlchemy will add instances via the ``append`` method. ``remove`` and ``__iter__`` are the default methods for sets and will be used for removing and iteration. Default methods can be changed as well:
+
+.. sourcecode:: python+sql
+
+ from sqlalchemy.orm.collections import collection
+
+    class MyList(list):
+        @collection.remover
+        def zark(self, item):
+            # do something special, then remove the item
+            list.remove(self, item)
+
+        @collection.iterator
+        def hey_use_this_instead_for_iteration(self):
+            # return an iterator over the list's contents
+            return iter(self)
+
+There is no requirement to be list-, or set-like at all. Collection classes can be any shape, so long as they have the append, remove and iterate interface marked for SQLAlchemy's use. Append and remove methods will be called with a mapped entity as the single argument, and iterator methods are called with no arguments and must return an iterator.
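+
+As an illustration, here is a sketch of a container with no list or set heritage at all; every name in it is invented:
+
+.. sourcecode:: python+sql
+
+    from sqlalchemy.orm.collections import collection
+
+    class Bag(object):
+        """A shapeless container; the three decorated roles are all
+        SQLAlchemy needs."""
+        def __init__(self):
+            self._members = {}
+
+        @collection.appender
+        def put(self, item):
+            self._members[id(item)] = item
+
+        @collection.remover
+        def take(self, item):
+            del self._members[id(item)]
+
+        @collection.iterator
+        def contents(self):
+            return self._members.itervalues()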
+
+Dictionary-Based Collections
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+A ``dict`` can be used as a collection, but a keying strategy is needed to map entities loaded by the ORM to key/value pairs. The `sqlalchemy.orm.collections` package provides several built-in types for dictionary-based collections:
+
+.. sourcecode:: python+sql
+
+ from sqlalchemy.orm.collections import column_mapped_collection, attribute_mapped_collection, mapped_collection
+
+ mapper(Item, items_table, properties={
+ # key by column
+ 'notes': relation(Note, collection_class=column_mapped_collection(notes_table.c.keyword)),
+ # or named attribute
+ 'notes2': relation(Note, collection_class=attribute_mapped_collection('keyword')),
+ # or any callable
+ 'notes3': relation(Note, collection_class=mapped_collection(lambda entity: entity.a + entity.b))
+ })
+
+ # ...
+ item = Item()
+ item.notes['color'] = Note('color', 'blue')
+ print item.notes['color']
+
+These functions each provide a ``dict`` subclass with decorated ``set`` and ``remove`` methods and the keying strategy of your choice.
+
+The `sqlalchemy.orm.collections.MappedCollection` class can be used as a base class for your custom types or as a mix-in to quickly add ``dict`` collection support to other classes. It uses a keying function to delegate to ``__setitem__`` and ``__delitem__``:
+
+.. sourcecode:: python+sql
+
+ from sqlalchemy.util import OrderedDict
+ from sqlalchemy.orm.collections import MappedCollection
+
+ class NodeMap(OrderedDict, MappedCollection):
+ """Holds 'Node' objects, keyed by the 'name' attribute with insert order maintained."""
+
+ def __init__(self, *args, **kw):
+ MappedCollection.__init__(self, keyfunc=lambda node: node.name)
+ OrderedDict.__init__(self, *args, **kw)
+
+The ORM understands the ``dict`` interface just like lists and sets, and will automatically instrument all dict-like methods if you choose to subclass ``dict`` or provide dict-like collection behavior in a duck-typed class. You must decorate appender and remover methods, however; there are no compatible methods in the basic dictionary interface for SQLAlchemy to use by default. Iteration will go through ``itervalues()`` unless otherwise decorated.
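+
+A sketch of such a decorated ``dict`` subclass, assuming the child objects carry a ``name`` attribute to key on:
+
+.. sourcecode:: python+sql
+
+    from sqlalchemy.orm.collections import collection
+
+    class NodeDict(dict):
+        """Keys children by their 'name' attribute."""
+
+        @collection.appender
+        def _append(self, node):
+            self[node.name] = node
+
+        @collection.remover
+        def _remove(self, node):
+            del self[node.name]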
+
+Instrumentation and Custom Types
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+Many custom types and existing library classes can be used as an entity collection type as-is without further ado. However, it is important to note that the instrumentation process *will* modify the type, adding decorators around methods automatically.
+
+The decorations are lightweight and no-op outside of relations, but they do add unneeded overhead when triggered elsewhere. When using a library class as a collection, it can be good practice to use the "trivial subclass" trick to restrict the decorations to just your usage in relations. For example:
+
+.. sourcecode:: python+sql
+
+ class MyAwesomeList(some.great.library.AwesomeList):
+ pass
+
+ # ... relation(..., collection_class=MyAwesomeList)
+
+The ORM uses this approach for built-ins, quietly substituting a trivial subclass when a ``list``, ``set`` or ``dict`` is used directly.
+
+The collections package provides additional decorators and support for authoring custom types. See the `sqlalchemy.orm.collections` package for more information and discussion of advanced usage and Python 2.3-compatible decoration options.
+
+Configuring Loader Strategies: Lazy Loading, Eager Loading
+-----------------------------------------------------------
+
+
+In the `datamapping` chapter, we introduced the concept of **Eager Loading**. We used an ``option`` in conjunction with the ``Query`` object in order to indicate that a relation should be loaded at the same time as the parent, within a single SQL query:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> jack = session.query(User).options(eagerload('addresses')).filter_by(name='jack').all() #doctest: +NORMALIZE_WHITESPACE
+ SELECT addresses_1.id AS addresses_1_id, addresses_1.email_address AS addresses_1_email_address,
+ addresses_1.user_id AS addresses_1_user_id, users.id AS users_id, users.name AS users_name,
+ users.fullname AS users_fullname, users.password AS users_password
+ FROM users LEFT OUTER JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id
+ WHERE users.name = ?
+ ['jack']
+
+By default, all relations are **lazy loading**. The scalar or collection attribute associated with a ``relation()`` contains a trigger which fires the first time the attribute is accessed, which issues a SQL call at that point:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> jack.addresses
+ SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address, addresses.user_id AS addresses_user_id
+ FROM addresses
+ WHERE ? = addresses.user_id
+ [5]
+ {stop}[<Address(u'jack@google.com')>, <Address(u'j25@yahoo.com')>]
+
+The default **loader strategy** for any ``relation()`` is configured by the ``lazy`` keyword argument, which defaults to ``True``. Below we set it to ``False`` so that the ``children`` relation is eager loading:
+
+.. sourcecode:: python+sql
+
+ # eager load 'children' attribute
+ mapper(Parent, parent_table, properties={
+ 'children': relation(Child, lazy=False)
+ })
+
+The loader strategy can be changed from lazy to eager as well as eager to lazy using the ``eagerload()`` and ``lazyload()`` query options:
+
+.. sourcecode:: python+sql
+
+ # set children to load lazily
+ session.query(Parent).options(lazyload('children')).all()
+
+ # set children to load eagerly
+ session.query(Parent).options(eagerload('children')).all()
+
+To reference a relation that is deeper than one level, separate the names by periods:
+
+.. sourcecode:: python+sql
+
+ session.query(Parent).options(eagerload('foo.bar.bat')).all()
+
+When using dot-separated names with ``eagerload()``, the option applies **only** to the actual attribute named, and **not** its ancestors. For example, suppose a mapping from ``A`` to ``B`` to ``C``, where the relations, named ``atob`` and ``btoc``, are both lazy-loading. A statement like the following:
+
+.. sourcecode:: python+sql
+
+ session.query(A).options(eagerload('atob.btoc')).all()
+
+will load only ``A`` objects to start. When the ``atob`` attribute on each ``A`` is accessed, the returned ``B`` objects will *eagerly* load their ``C`` objects.
+
+Therefore, to modify the eager load to load both ``atob`` as well as ``btoc``, place eagerloads for both:
+
+.. sourcecode:: python+sql
+
+ session.query(A).options(eagerload('atob'), eagerload('atob.btoc')).all()
+
+or more simply just use ``eagerload_all()``:
+
+.. sourcecode:: python+sql
+
+ session.query(A).options(eagerload_all('atob.btoc')).all()
+
+There are two other loader strategies available, **dynamic loading** and **no loading**; these are described in :ref:`largecollections`.
+
+Routing Explicit Joins/Statements into Eagerly Loaded Collections
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The behavior of :func:`eagerload()` is such that joins are created automatically, the results of which are routed into collections and scalar references on loaded objects. It is often the case that a query already includes the necessary joins which represent a particular collection or scalar reference, and the joins added by the eagerload feature are redundant - yet you'd still like the collections/references to be populated.
+
+For this SQLAlchemy supplies the :func:`contains_eager()` option. This option is used in the same manner as the :func:`eagerload()` option except it is assumed that the ``Query`` will specify the appropriate joins explicitly. Below it's used with a ``from_statement`` load::
+
+ # mapping is the users->addresses mapping
+ mapper(User, users_table, properties={
+        'addresses': relation(Address)
+ })
+
+ # define a query on USERS with an outer join to ADDRESSES
+ statement = users_table.outerjoin(addresses_table).select().apply_labels()
+
+ # construct a Query object which expects the "addresses" results
+ query = session.query(User).options(contains_eager('addresses'))
+
+ # get results normally
+ r = query.from_statement(statement)
+
+It works just as well with an inline ``Query.join()`` or ``Query.outerjoin()``::
+
+ session.query(User).outerjoin(User.addresses).options(contains_eager(User.addresses)).all()
+
+If the "eager" portion of the statement is "aliased", the ``alias`` keyword argument to ``contains_eager()`` may be used to indicate it. This is a string alias name or reference to an actual ``Alias`` object:
+
+.. sourcecode:: python+sql
+
+ # use an alias of the Address entity
+ adalias = aliased(Address)
+
+ # construct a Query object which expects the "addresses" results
+ query = session.query(User).outerjoin((adalias, User.addresses)).options(contains_eager(User.addresses, alias=adalias))
+
+ # get results normally
+ {sql}r = query.all()
+ SELECT users.user_id AS users_user_id, users.user_name AS users_user_name, adalias.address_id AS adalias_address_id,
+ adalias.user_id AS adalias_user_id, adalias.email_address AS adalias_email_address, (...other columns...)
+    FROM users LEFT OUTER JOIN email_addresses AS adalias ON users.user_id = adalias.user_id
+
+The path given as the argument to ``contains_eager()`` needs to be a full path from the starting entity. For example, if we were loading ``User->orders->Order->items->Item``, the string version would look like::
+
+ query(User).options(contains_eager('orders', 'items'))
+
+Or using the class-bound descriptor::
+
+ query(User).options(contains_eager(User.orders, Order.items))
+
+A variant on ``contains_eager()`` is the ``contains_alias()`` option, which is used in the rare case that the parent object is loaded from an alias within a user-defined SELECT statement::
+
+ # define an aliased UNION called 'ulist'
+ statement = users.select(users.c.user_id==7).union(users.select(users.c.user_id>7)).alias('ulist')
+
+ # add on an eager load of "addresses"
+ statement = statement.outerjoin(addresses).select().apply_labels()
+
+ # create query, indicating "ulist" is an alias for the main table, "addresses" property should
+ # be eager loaded
+ query = session.query(User).options(contains_alias('ulist'), contains_eager('addresses'))
+
+ # results
+ r = query.from_statement(statement)
+
+.. _largecollections:
+
+Working with Large Collections
+-------------------------------
+
+The default behavior of ``relation()`` is to fully load the collection of items, according to the loading strategy of the relation. Additionally, the Session by default only knows how to delete objects which are actually present within the session. When a parent instance is marked for deletion and flushed, the Session loads its full list of child items so that they may either be deleted as well, or have their foreign key value set to null; this is to avoid constraint violations. For large collections of child items, there are several strategies to bypass full loading of child items, both at load time as well as deletion time.
+
+Dynamic Relation Loaders
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+The most useful by far is the ``dynamic_loader()`` relation. This is a variant of ``relation()`` which returns a ``Query`` object in place of a collection when accessed. ``filter()`` criterion may be applied as well as limits and offsets, either explicitly or via array slices:
+
+.. sourcecode:: python+sql
+
+ mapper(User, users_table, properties={
+ 'posts': dynamic_loader(Post)
+ })
+
+ jack = session.query(User).get(id)
+
+ # filter Jack's blog posts
+ posts = jack.posts.filter(Post.headline=='this is a post')
+
+ # apply array slices
+ posts = jack.posts[5:20]
+
+The dynamic relation supports limited write operations, via the ``append()`` and ``remove()`` methods. Since the read side of the dynamic relation always queries the database, changes to the underlying collection will not be visible until the data has been flushed:
+
+.. sourcecode:: python+sql
+
+ oldpost = jack.posts.filter(Post.headline=='old post').one()
+ jack.posts.remove(oldpost)
+
+ jack.posts.append(Post('new post'))
+
+To place a dynamic relation on a backref, use ``lazy='dynamic'``:
+
+.. sourcecode:: python+sql
+
+ mapper(Post, posts_table, properties={
+ 'user': relation(User, backref=backref('posts', lazy='dynamic'))
+ })
+
+Note that eager/lazy loading options cannot be used in conjunction with dynamic relations at this time.
+
+Setting Noload
+~~~~~~~~~~~~~~~
+
+
+The opposite of the dynamic relation is simply "noload", specified using ``lazy=None``:
+
+.. sourcecode:: python+sql
+
+ mapper(MyClass, table, properties={
+ 'children': relation(MyOtherClass, lazy=None)
+ })
+
+Above, the ``children`` collection is fully writeable, and changes to it will be persisted to the database as well as locally available for reading at the time they are added. However when instances of ``MyClass`` are freshly loaded from the database, the ``children`` collection stays empty.
+
+Using Passive Deletes
+~~~~~~~~~~~~~~~~~~~~~~
+
+
+Use ``passive_deletes=True`` to disable child object loading on a DELETE operation, in conjunction with "ON DELETE (CASCADE|SET NULL)" on your database to automatically cascade deletes to child objects. Note that "ON DELETE" is not supported on SQLite, and requires ``InnoDB`` tables when using MySQL:
+
+.. sourcecode:: python+sql
+
+ mytable = Table('mytable', meta,
+ Column('id', Integer, primary_key=True),
+ )
+
+ myothertable = Table('myothertable', meta,
+ Column('id', Integer, primary_key=True),
+ Column('parent_id', Integer),
+ ForeignKeyConstraint(['parent_id'], ['mytable.id'], ondelete="CASCADE"),
+ )
+
+ mapper(MyOtherClass, myothertable)
+
+ mapper(MyClass, mytable, properties={
+ 'children': relation(MyOtherClass, cascade="all, delete-orphan", passive_deletes=True)
+ })
+
+When ``passive_deletes`` is applied, the ``children`` relation will not be loaded into memory when an instance of ``MyClass`` is marked for deletion. The ``cascade="all, delete-orphan"`` *will* take effect for instances of ``MyOtherClass`` which are currently present in the session; however for instances of ``MyOtherClass`` which are not loaded, SQLAlchemy assumes that "ON DELETE CASCADE" rules will ensure that those rows are deleted by the database and that no foreign key violation will occur.
+
+Mutable Primary Keys / Update Cascades
+---------------------------------------
+
+
+As of SQLAlchemy 0.4.2, the primary key attributes of an instance can be changed freely, and will be persisted upon flush. When the primary key of an entity changes, related items which reference the primary key must be updated as well. For databases which enforce referential integrity, it's required to use the database's ON UPDATE CASCADE functionality in order to propagate primary key changes. For those which don't, the ``passive_updates`` flag can be set to ``False``, which instructs SQLAlchemy to issue UPDATE statements individually. The ``passive_updates`` flag can also be ``False`` in conjunction with ON UPDATE CASCADE functionality, although in that case it issues UPDATE statements unnecessarily.
+
+A typical mutable primary key setup might look like:
+
+.. sourcecode:: python+sql
+
+ users = Table('users', metadata,
+ Column('username', String(50), primary_key=True),
+ Column('fullname', String(100)))
+
+ addresses = Table('addresses', metadata,
+ Column('email', String(50), primary_key=True),
+ Column('username', String(50), ForeignKey('users.username', onupdate="cascade")))
+
+ class User(object):
+ pass
+ class Address(object):
+ pass
+
+ mapper(User, users, properties={
+ 'addresses': relation(Address, passive_updates=False)
+ })
+ mapper(Address, addresses)
+
+``passive_updates`` is set to ``True`` by default. Foreign key references to non-primary key columns are supported as well.
+
--- /dev/null
+.. _metadata_toplevel:
+
+==================
+Database Meta Data
+==================
+
+Describing Databases with MetaData
+==================================
+
+The core of SQLAlchemy's query and object mapping operations is supported by **database metadata**, which is comprised of Python objects that describe tables and other schema-level objects. These objects can be created by explicitly naming the various components and their properties, using the Table, Column, ForeignKey, Index, and Sequence objects imported from ``sqlalchemy.schema``. There is also support for **reflection** of some entities, which means you only specify the *name* of the entities and they are recreated from the database automatically.
+
+A collection of metadata entities is stored in an object aptly named ``MetaData``::
+
+ from sqlalchemy import *
+
+ metadata = MetaData()
+
+To represent a Table, use the ``Table`` class::
+
+ users = Table('users', metadata,
+ Column('user_id', Integer, primary_key = True),
+ Column('user_name', String(16), nullable = False),
+ Column('email_address', String(60), key='email'),
+ Column('password', String(20), nullable = False)
+ )
+
+ user_prefs = Table('user_prefs', metadata,
+ Column('pref_id', Integer, primary_key=True),
+ Column('user_id', Integer, ForeignKey("users.user_id"), nullable=False),
+ Column('pref_name', String(40), nullable=False),
+ Column('pref_value', String(100))
+ )
+
+The specific datatypes for each Column, such as Integer, String, etc. are described in `types`, and exist within the module ``sqlalchemy.types`` as well as the global ``sqlalchemy`` namespace.
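+
+For example, both import styles refer to the same types::
+
+    from sqlalchemy import Integer, String
+    from sqlalchemy.types import Integer, String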
+
+Foreign keys are most easily specified by the ``ForeignKey`` object within a ``Column`` object. For a composite foreign key, i.e. a foreign key that contains multiple columns referencing the columns of a composite primary key, an explicit syntax is provided which allows the correct table CREATE statements to be generated::
+
+ # a table with a composite primary key
+ invoices = Table('invoices', metadata,
+ Column('invoice_id', Integer, primary_key=True),
+ Column('ref_num', Integer, primary_key=True),
+ Column('description', String(60), nullable=False)
+ )
+
+ # a table with a composite foreign key referencing the parent table
+ invoice_items = Table('invoice_items', metadata,
+ Column('item_id', Integer, primary_key=True),
+ Column('item_name', String(60), nullable=False),
+ Column('invoice_id', Integer, nullable=False),
+ Column('ref_num', Integer, nullable=False),
+ ForeignKeyConstraint(['invoice_id', 'ref_num'], ['invoices.invoice_id', 'invoices.ref_num'])
+ )
+
+Above, the ``invoice_items`` table will have ``ForeignKey`` objects automatically added to the ``invoice_id`` and ``ref_num`` ``Column`` objects as a result of the additional ``ForeignKeyConstraint`` object.
+
+The ``MetaData`` object supports some handy methods, such as getting a list of Tables in the order (or reverse) of their dependency::
+
+ >>> for t in metadata.table_iterator(reverse=False):
+ ... print t.name
+ users
+ user_prefs
+
+And ``Table`` provides an interface to the table's properties as well as that of its columns::
+
+ employees = Table('employees', metadata,
+ Column('employee_id', Integer, primary_key=True),
+ Column('employee_name', String(60), nullable=False, key='name'),
+ Column('employee_dept', Integer, ForeignKey("departments.department_id"))
+ )
+
+    # access the column "employee_id":
+ employees.columns.employee_id
+
+ # or just
+ employees.c.employee_id
+
+ # via string
+ employees.c['employee_id']
+
+ # iterate through all columns
+ for c in employees.c:
+ print c
+
+ # get the table's primary key columns
+ for primary_key in employees.primary_key:
+ print primary_key
+
+ # get the table's foreign key objects:
+ for fkey in employees.foreign_keys:
+ print fkey
+
+ # access the table's MetaData:
+ employees.metadata
+
+ # access the table's bound Engine or Connection, if its MetaData is bound:
+ employees.bind
+
+ # access a column's name, type, nullable, primary key, foreign key
+ employees.c.employee_id.name
+ employees.c.employee_id.type
+ employees.c.employee_id.nullable
+ employees.c.employee_id.primary_key
+ employees.c.employee_dept.foreign_key
+
+ # get the "key" of a column, which defaults to its name, but can
+ # be any user-defined string:
+ employees.c.name.key
+
+ # access a column's table:
+ employees.c.employee_id.table is employees
+
+ # get the table related by a foreign key
+    ftable = employees.c.employee_dept.foreign_key.column.table
+
+.. _metadata_binding:
+
+Binding MetaData to an Engine or Connection
+--------------------------------------------
+
+A ``MetaData`` object can be associated with an ``Engine`` or an individual ``Connection``; this process is called **binding**. An engine or a connection is often referred to as a **connectable**. Binding allows the ``MetaData`` and the elements which it contains to perform operations against the database directly, using the connection resources to which it's bound. Common operations which are made more convenient through binding include being able to generate SQL constructs which know how to execute themselves, creating ``Table`` objects which query the database for their column and constraint information, and issuing CREATE or DROP statements.
+
+To bind ``MetaData`` to an ``Engine``, use the ``bind`` attribute::
+
+ engine = create_engine('sqlite://', **kwargs)
+
+ # create MetaData
+ meta = MetaData()
+
+ # bind to an engine
+ meta.bind = engine
+
+Once this is done, the ``MetaData`` and its contained ``Table`` objects can access the database directly::
+
+ meta.create_all() # issue CREATE statements for all tables
+
+ # describe a table called 'users', query the database for its columns
+ users_table = Table('users', meta, autoload=True)
+
+ # generate a SELECT statement and execute
+ result = users_table.select().execute()
+
+Note that the feature of binding engines is **completely optional**. All of the operations which take advantage of "bound" ``MetaData`` can also be given an ``Engine`` or ``Connection`` explicitly with which to perform the operation. The equivalent "non-bound" version of the above would be::
+
+ meta.create_all(engine) # issue CREATE statements for all tables
+
+ # describe a table called 'users', query the database for its columns
+ users_table = Table('users', meta, autoload=True, autoload_with=engine)
+
+ # generate a SELECT statement and execute
+ result = engine.execute(users_table.select())
+
+Reflecting Tables
+-----------------
+
+
+A ``Table`` object can be created without specifying any of its contained attributes, using the argument ``autoload=True`` in conjunction with the table's name and possibly its schema (if not the database's "default" schema). (You can also specify a list or set of column names to autoload as the keyword argument ``include_columns``, if you only want to load a subset of the columns in the actual database; a sketch follows the example below.) This will issue the appropriate queries to the database in order to locate all properties of the table required for SQLAlchemy to use it effectively, including its column names and datatypes, foreign and primary key constraints, and in some cases its default-value generating attributes. To use ``autoload=True``, the table's ``MetaData`` object needs to be bound to an ``Engine`` or ``Connection``; alternatively, the ``autoload_with=<some connectable>`` argument can be passed. Below we illustrate autoloading a table and then iterating through the names of its columns::
+
+ >>> messages = Table('messages', meta, autoload=True)
+ >>> [c.name for c in messages.columns]
+ ['message_id', 'message_name', 'date']
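+
+To load only a subset of the table's columns, the ``include_columns`` keyword mentioned above can be used; a sketch, reflecting just two columns of the same table into a fresh ``MetaData``::
+
+    meta2 = MetaData()
+    messages = Table('messages', meta2, autoload=True,
+                     autoload_with=someengine,
+                     include_columns=['message_id', 'date'])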
+
+Note that if a reflected table has a foreign key referencing another table, the related ``Table`` object will be automatically created within the ``MetaData`` object if it does not exist already. Below, suppose table ``shopping_cart_items`` references a table ``shopping_carts``. After reflecting, the ``shopping_carts`` table is present:
+
+.. sourcecode:: pycon+sql
+
+ >>> shopping_cart_items = Table('shopping_cart_items', meta, autoload=True)
+    >>> 'shopping_carts' in meta.tables
+ True
+
+To get direct access to 'shopping_carts', simply instantiate it via the ``Table`` constructor. ``Table`` uses a special constructor that will return the already created ``Table`` instance if it's already present:
+
+.. sourcecode:: python+sql
+
+ shopping_carts = Table('shopping_carts', meta)
+
+Of course, it's a good idea to use ``autoload=True`` with the above table regardless, so that the table is reflected if it hadn't been loaded already. The autoload operation only occurs for the table if it hasn't already been loaded; once loaded, new calls to ``Table`` will not re-issue any reflection queries.
+
+Overriding Reflected Columns
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+Individual columns can be overridden with explicit values when reflecting tables; this is handy for specifying custom datatypes, constraints such as primary keys that may not be configured within the database, etc.::
+
+ >>> mytable = Table('mytable', meta,
+ ... Column('id', Integer, primary_key=True), # override reflected 'id' to have primary key
+ ... Column('mydata', Unicode(50)), # override reflected 'mydata' to be Unicode
+ ... autoload=True)
+
+Reflecting All Tables at Once
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+The ``MetaData`` object can also get a listing of tables and reflect the full set. This is achieved by using the ``reflect()`` method. After calling it, all located tables are present within the ``MetaData`` object's dictionary of tables::
+
+ meta = MetaData()
+ meta.reflect(bind=someengine)
+ users_table = meta.tables['users']
+ addresses_table = meta.tables['addresses']
+
+``metadata.reflect()`` is also a handy way to clear or drop all tables in a database::
+
+ meta = MetaData()
+ meta.reflect(bind=someengine)
+ for table in reversed(meta.sorted_tables):
+ someengine.execute(table.delete())
+
+Specifying the Schema Name
+---------------------------
+
+
+Some databases support the concept of multiple schemas. A ``Table`` can reference this by specifying the ``schema`` keyword argument::
+
+ financial_info = Table('financial_info', meta,
+ Column('id', Integer, primary_key=True),
+ Column('value', String(100), nullable=False),
+ schema='remote_banks'
+ )
+
+Within the ``MetaData`` collection, this table will be identified by the combination of ``financial_info`` and ``remote_banks``. If another table called ``financial_info`` is referenced without the ``remote_banks`` schema, it will refer to a different ``Table``. ``ForeignKey`` objects can reference columns in this table using the form ``remote_banks.financial_info.id``.
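+
+A sketch of such a foreign key reference; the referencing table here is invented for illustration::
+
+    customer = Table('customer', meta,
+        Column('id', Integer, primary_key=True),
+        Column('financial_info_id', Integer,
+               ForeignKey('remote_banks.financial_info.id'))
+    )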
+
+ON UPDATE and ON DELETE
+------------------------
+
+
+``ON UPDATE`` and ``ON DELETE`` clauses for a table's CREATE statement are specified within the ``ForeignKeyConstraint`` object, using the ``onupdate`` and ``ondelete`` keyword arguments::
+
+ foobar = Table('foobar', meta,
+ Column('id', Integer, primary_key=True),
+ Column('lala', String(40)),
+ ForeignKeyConstraint(['lala'],['hoho.lala'], onupdate="CASCADE", ondelete="CASCADE"))
+
+Note that these clauses are not supported on SQLite, and require ``InnoDB`` tables when used with MySQL. They may also not be supported on other databases.
+
+Other Options
+--------------
+
+``Table`` objects may support database-specific options, such as MySQL's ``engine`` option that can specify "MyISAM", "InnoDB", and other backends for the table::
+
+ addresses = Table('engine_email_addresses', meta,
+ Column('address_id', Integer, primary_key = True),
+ Column('remote_user_id', Integer, ForeignKey(users.c.user_id)),
+ Column('email_address', String(20)),
+ mysql_engine='InnoDB'
+ )
+
+Creating and Dropping Database Tables
+======================================
+
+Creating and dropping individual tables can be done via the ``create()`` and ``drop()`` methods of ``Table``; these methods take an optional ``bind`` parameter which references an ``Engine`` or a ``Connection``. If not supplied, the ``Engine`` bound to the ``MetaData`` will be used; if the ``MetaData`` is not bound either, an error is raised:
+
+.. sourcecode:: python+sql
+
+    engine = create_engine('sqlite:///:memory:')
+
+    meta = MetaData()
+    meta.bind = engine
+
+ employees = Table('employees', meta,
+ Column('employee_id', Integer, primary_key=True),
+ Column('employee_name', String(60), nullable=False, key='name'),
+ Column('employee_dept', Integer, ForeignKey("departments.department_id"))
+ )
+ {sql}employees.create()
+ CREATE TABLE employees(
+        employee_id INTEGER NOT NULL PRIMARY KEY,
+ employee_name VARCHAR(60) NOT NULL,
+ employee_dept INTEGER REFERENCES departments(department_id)
+ )
+ {}
+
+The ``drop()`` method works the same way:
+
+.. sourcecode:: python+sql
+
+    {sql}employees.drop(bind=engine)
+ DROP TABLE employees
+ {}
+
+The ``create()`` and ``drop()`` methods also support an optional keyword argument ``checkfirst`` which will issue the database's appropriate pragma statements to check if the table exists before creating or dropping::
+
+    employees.create(bind=engine, checkfirst=True)
+    employees.drop(checkfirst=False)
+
+Entire groups of Tables can be created and dropped directly from the ``MetaData`` object with ``create_all()`` and ``drop_all()``. These methods always check for the existence of each table before creating or dropping. Each method takes an optional ``bind`` keyword argument which can reference an ``Engine`` or a ``Connection``. If no engine is specified, the underlying bound ``Engine``, if any, is used:
+
+.. sourcecode:: python+sql
+
+ engine = create_engine('sqlite:///:memory:')
+
+ metadata = MetaData()
+
+ users = Table('users', metadata,
+ Column('user_id', Integer, primary_key = True),
+ Column('user_name', String(16), nullable = False),
+ Column('email_address', String(60), key='email'),
+ Column('password', String(20), nullable = False)
+ )
+
+ user_prefs = Table('user_prefs', metadata,
+ Column('pref_id', Integer, primary_key=True),
+ Column('user_id', Integer, ForeignKey("users.user_id"), nullable=False),
+ Column('pref_name', String(40), nullable=False),
+ Column('pref_value', String(100))
+ )
+
+ {sql}metadata.create_all(bind=engine)
+ PRAGMA table_info(users){}
+ CREATE TABLE users(
+ user_id INTEGER NOT NULL PRIMARY KEY,
+ user_name VARCHAR(16) NOT NULL,
+ email_address VARCHAR(60),
+ password VARCHAR(20) NOT NULL
+ )
+ PRAGMA table_info(user_prefs){}
+ CREATE TABLE user_prefs(
+ pref_id INTEGER NOT NULL PRIMARY KEY,
+ user_id INTEGER NOT NULL REFERENCES users(user_id),
+ pref_name VARCHAR(40) NOT NULL,
+ pref_value VARCHAR(100)
+ )
+
+Column Insert/Update Defaults
+==============================
+
+
+SQLAlchemy includes several constructs which provide default values during INSERT and UPDATE statements. The defaults may be Python constants, Python functions, or SQL expressions; the SQL expressions themselves may be "pre-executed", executed inline within the insert/update statement itself, or created as a SQL-level "default" placed on the table definition itself. A "default" value by definition is only invoked if no explicit value is passed into the INSERT or UPDATE statement.
+
+Pre-Executed Python Functions
+------------------------------
+
+
+The "default" keyword argument on Column can reference a Python value or callable which is invoked at the time of an insert::
+
+ # a function which counts upwards
+ i = 0
+ def mydefault():
+ global i
+ i += 1
+ return i
+
+ t = Table("mytable", meta,
+ # function-based default
+ Column('id', Integer, primary_key=True, default=mydefault),
+
+ # a scalar default
+ Column('key', String(10), default="default")
+ )
+
+Similarly, the "onupdate" keyword does the same thing for update statements:
+
+.. sourcecode:: python+sql
+
+ import datetime
+
+ t = Table("mytable", meta,
+ Column('id', Integer, primary_key=True),
+
+ # define 'last_updated' to be populated with datetime.now()
+ Column('last_updated', DateTime, onupdate=datetime.datetime.now),
+ )
+
+Pre-executed and Inline SQL Expressions
+----------------------------------------
+
+
+The "default" and "onupdate" keywords may also be passed SQL expressions, including select statements or direct function calls:
+
+.. sourcecode:: python+sql
+
+ t = Table("mytable", meta,
+ Column('id', Integer, primary_key=True),
+
+ # define 'create_date' to default to now()
+ Column('create_date', DateTime, default=func.now()),
+
+ # define 'key' to pull its default from the 'keyvalues' table
+        Column('key', String(20), default=keyvalues.select(keyvalues.c.type=='type1', limit=1)),
+
+ # define 'last_modified' to use the current_timestamp SQL function on update
+ Column('last_modified', DateTime, onupdate=func.current_timestamp())
+ )
+
+The above SQL functions are usually executed "inline" with the INSERT or UPDATE statement being executed. In some cases, the function is "pre-executed" and its result pre-fetched explicitly. This happens under the following circumstances:
+
+* the column is a primary key column
+
+* the database dialect does not support a usable ``cursor.lastrowid`` accessor (or equivalent); this currently includes Postgres, Oracle, and Firebird.
+
+* the statement is a single execution, i.e. only supplies one set of parameters and doesn't use "executemany" behavior
+
+* the ``inline=True`` flag is not set on the ``Insert()`` or ``Update()`` construct.
+
+For a statement execution which is not an executemany, the returned ``ResultProxy`` will contain a collection accessible via ``result.postfetch_cols()`` which contains a list of all ``Column`` objects which had an inline-executed default. Similarly, all parameters which were bound to the statement, including all Python and SQL expressions which were pre-executed, are present in the ``last_inserted_params()`` or ``last_updated_params()`` collections on ``ResultProxy``. The ``last_inserted_ids()`` collection contains a list of primary key values for the row inserted.
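+
+A sketch of examining these collections after a single-row insert, assuming the ``mytable`` definition above with a bound ``meta``:
+
+.. sourcecode:: python+sql
+
+    result = t.insert().execute(key='some key')
+
+    # primary key values for the row just inserted
+    print result.last_inserted_ids()
+
+    # parameters bound to the statement, including pre-executed defaults
+    print result.last_inserted_params()
+
+    # Column objects whose defaults fired inline on the database side
+    print result.postfetch_cols()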
+
+DDL-Level Defaults
+-------------------
+
+
+A variant on a SQL expression default is the ``server_default``, which gets placed in the CREATE TABLE statement during a ``create()`` operation:
+
+.. sourcecode:: python+sql
+
+ t = Table('test', meta,
+ Column('abc', String(20), server_default='abc'),
+ Column('created_at', DateTime, server_default=text("sysdate"))
+ )
+
+A create call for the above table will produce::
+
+ CREATE TABLE test (
+ abc varchar(20) default 'abc',
+ created_at datetime default sysdate
+ )
+
+The behavior of ``server_default`` is similar to that of a regular SQL default; if it's placed on a primary key column for a database which doesn't have a way to "postfetch" the ID, and the statement is not "inlined", the SQL expression is pre-executed; otherwise, SQLAlchemy lets the default fire off on the database side normally.
+
+Triggered Columns
+------------------
+
+Columns with values set by a database trigger or other external process may be called out with a marker::
+
+ t = Table('test', meta,
+        Column('abc', String(20), server_default=FetchedValue()),
+        Column('def', String(20), server_onupdate=FetchedValue())
+ )
+
+These markers do not emit a ``DEFAULT`` clause when the table is created; however, they do set the same internal flags as a static ``server_default`` clause, providing hints to higher-level tools that a "post-fetch" of these rows should be performed after an insert or update.
+
+Defining Sequences
+-------------------
+
+
+A table with a sequence looks like:
+
+.. sourcecode:: python+sql
+
+ table = Table("cartitems", meta,
+ Column("cart_id", Integer, Sequence('cart_id_seq'), primary_key=True),
+ Column("description", String(40)),
+ Column("createdate", DateTime())
+ )
+
+The ``Sequence`` object works a lot like the ``default`` keyword on ``Column``, except that it only takes effect on a database which supports sequences. When used with a database that does not support sequences, the ``Sequence`` object has no effect; therefore it's safe to place on a table which is used against multiple database backends. The same rules for pre- and inline execution apply.
+
+When the ``Sequence`` is associated with a table, CREATE and DROP statements issued for that table will also issue CREATE/DROP for the sequence object as well, thus "bundling" the sequence object with its parent table.
+
+The flag ``optional=True`` on ``Sequence`` will produce a sequence that is only used on databases which have no "autoincrementing" capability. For example, Postgres supports primary key generation using the SERIAL keyword, whereas Oracle has no such capability. Therefore, a ``Sequence`` placed on a primary key column with ``optional=True`` will only be used with an Oracle backend, not with Postgres.
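+
+A sketch of the optional form, reusing the table above::
+
+    table = Table("cartitems", meta,
+        Column("cart_id", Integer,
+               Sequence('cart_id_seq', optional=True), primary_key=True),
+        Column("description", String(40)),
+        Column("createdate", DateTime())
+    )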
+
+A sequence can also be executed standalone, using an ``Engine`` or ``Connection``, returning its next value in a database-independent fashion:
+
+.. sourcecode:: python+sql
+
+ seq = Sequence('some_sequence')
+ nextid = connection.execute(seq)
+
+Defining Constraints and Indexes
+=================================
+
+
+UNIQUE Constraint
+-----------------
+
+
+Unique constraints can be created anonymously on a single column using the ``unique`` keyword on ``Column``. Explicitly named unique constraints and/or those with multiple columns are created via the ``UniqueConstraint`` table-level construct.
+
+.. sourcecode:: python+sql
+
+ meta = MetaData()
+ mytable = Table('mytable', meta,
+
+ # per-column anonymous unique constraint
+ Column('col1', Integer, unique=True),
+
+ Column('col2', Integer),
+ Column('col3', Integer),
+
+ # explicit/composite unique constraint. 'name' is optional.
+ UniqueConstraint('col2', 'col3', name='uix_1')
+ )
+
+CHECK Constraint
+----------------
+
+
+Check constraints can be named or unnamed and can be created at the Column or Table level, using the ``CheckConstraint`` construct. The text of the check constraint is passed directly through to the database, so there is limited "database independent" behavior. Column level check constraints generally should only refer to the column on which they are placed, while table level constraints can refer to any columns in the table.
+
+Note that some databases, such as MySQL and SQLite, do not actively support check constraints.
+
+.. sourcecode:: python+sql
+
+ meta = MetaData()
+ mytable = Table('mytable', meta,
+
+ # per-column CHECK constraint
+ Column('col1', Integer, CheckConstraint('col1>5')),
+
+ Column('col2', Integer),
+ Column('col3', Integer),
+
+ # table level CHECK constraint. 'name' is optional.
+ CheckConstraint('col2 > col3 + 5', name='check1')
+ )
+
+Indexes
+-------
+
+
+Indexes can be created anonymously (using an auto-generated name "ix_<column label>") for a single column using the inline ``index`` keyword on ``Column``, which also modifies the usage of ``unique`` to apply the uniqueness to the index itself, instead of adding a separate UNIQUE constraint. For indexes with specific names or which encompass more than one column, use the ``Index`` construct, which requires a name.
+
+Note that the ``Index`` construct is created **externally** to the table to which it corresponds, using ``Column`` objects and not strings.
+
+.. sourcecode:: python+sql
+
+ meta = MetaData()
+ mytable = Table('mytable', meta,
+ # an indexed column, with index "ix_mytable_col1"
+ Column('col1', Integer, index=True),
+
+ # a uniquely indexed column with index "ix_mytable_col2"
+ Column('col2', Integer, index=True, unique=True),
+
+ Column('col3', Integer),
+ Column('col4', Integer),
+
+ Column('col5', Integer),
+ Column('col6', Integer),
+ )
+
+ # place an index on col3, col4
+ Index('idx_col34', mytable.c.col3, mytable.c.col4)
+
+ # place a unique index on col5, col6
+ Index('myindex', mytable.c.col5, mytable.c.col6, unique=True)
+
+The ``Index`` objects will be created along with the CREATE statements for the table itself. An index can also be created on its own independently of the table:
+
+.. sourcecode:: python+sql
+
+ # create a table
+ sometable.create()
+
+ # define an index
+ i = Index('someindex', sometable.c.col5)
+
+    # create the index; this uses the table's bound connectable if the ``bind`` keyword argument is not specified
+ i.create()
+
+Adapting Tables to Alternate Metadata
+======================================
+
+
+A ``Table`` object created against a specific ``MetaData`` object can be re-created against a new MetaData using the ``tometadata`` method:
+
+.. sourcecode:: python+sql
+
+ # create two metadata
+ meta1 = MetaData('sqlite:///querytest.db')
+ meta2 = MetaData()
+
+ # load 'users' from the sqlite engine
+ users_table = Table('users', meta1, autoload=True)
+
+ # create the same Table object for the plain metadata
+ users_table_2 = users_table.tometadata(meta2)
+
+
-Connection Pooling {@name=pooling}
-======================
+.. _pooling:
+
+==================
+Connection Pooling
+==================
This section describes the connection pool module of SQLAlchemy. The `Pool` object it provides is normally embedded within an `Engine` instance. For most cases, explicit access to the pool module is not required. However, the `Pool` object can be used on its own, without the rest of SA, to manage DBAPI connections; this section describes that usage. Also, this section will describe in more detail how to customize the pooling strategy used by an `Engine`.
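+
+As a minimal sketch of that standalone usage (``QueuePool`` and its "creator" function argument are the pool module's basic interface; the SQLite DBAPI shown is just an example)::
+
+    import sqlalchemy.pool as pool
+    import sqlite3
+
+    def getconn():
+        return sqlite3.connect('somefile.db')
+
+    # a QueuePool which invokes getconn() whenever a new connection is needed
+    p = pool.QueuePool(getconn, pool_size=5, max_overflow=10)
+
+    conn = p.connect()   # check a connection out from the pool
+    conn.close()         # return it to the pool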
--- /dev/null
+.. _ormtutorial_toplevel:
+
+==========================
+Object Relational Tutorial
+==========================
+In this tutorial we will cover a basic SQLAlchemy object-relational mapping scenario, where we store and retrieve Python objects from a database representation. The tutorial is in doctest format, meaning each ``>>>`` line represents something you can type at a Python command prompt, and the following text represents the expected return value.
+
+Version Check
+=============
+
+A quick check to verify that we are on at least **version 0.5** of SQLAlchemy::
+
+ >>> import sqlalchemy
+ >>> sqlalchemy.__version__ # doctest:+SKIP
+ 0.5.0
+
+Connecting
+==========
+
+For this tutorial we will use an in-memory-only SQLite database. To connect we use ``create_engine()``::
+
+ >>> from sqlalchemy import create_engine
+ >>> engine = create_engine('sqlite:///:memory:', echo=True)
+
+The ``echo`` flag is a shortcut to setting up SQLAlchemy logging, which is accomplished via Python's standard ``logging`` module. With it enabled, we'll see all the generated SQL produced. If you are working through this tutorial and want less output generated, set it to ``False``. This tutorial will format the SQL behind a popup window so it doesn't get in our way; just click the "SQL" links to see what's being generated.
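+
+For finer-grained control than the ``echo`` shortcut, the same output can be enabled through Python's ``logging`` package directly; a minimal sketch (``sqlalchemy.engine`` is the logger name the engine logs under)::
+
+    import logging
+
+    # set up a default handler, then turn on engine logging;
+    # INFO here has the same effect as echo=True
+    logging.basicConfig()
+    logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)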
+
+Define and Create a Table
+==========================
+Next we want to tell SQLAlchemy about our tables. We will start with just a single table called ``users``, which will store records for the end-users using our application (let's assume it's a website). We define our tables within a catalog called ``MetaData``, using the ``Table`` construct, which is used in a manner similar to SQL's CREATE TABLE syntax::
+
+ >>> from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
+ >>> metadata = MetaData()
+ >>> users_table = Table('users', metadata,
+ ... Column('id', Integer, primary_key=True),
+ ... Column('name', String),
+ ... Column('fullname', String),
+ ... Column('password', String)
+ ... )
+
+All about how to define ``Table`` objects, as well as how to load their definition from an existing database (known as **reflection**), is described in :ref:`metadata_toplevel`.
+
+Next, we can issue CREATE TABLE statements derived from our table metadata, by calling ``create_all()`` and passing it the ``engine`` instance which points to our database. This will check for the presence of a table first before creating, so it's safe to call multiple times:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> metadata.create_all(engine) # doctest:+ELLIPSIS,+NORMALIZE_WHITESPACE
+ PRAGMA table_info("users")
+ {}
+ CREATE TABLE users (
+ id INTEGER NOT NULL,
+ name VARCHAR,
+ fullname VARCHAR,
+ password VARCHAR,
+ PRIMARY KEY (id)
+ )
+ {}
+ COMMIT
+
+Users familiar with the syntax of CREATE TABLE may notice that the VARCHAR columns were generated without a length; on SQLite, this is a valid datatype, but on most databases it's not allowed. So if running this tutorial on a database such as Postgres or MySQL, and you wish to use SQLAlchemy to generate the tables, a "length" may be provided to the ``String`` type as below::
+
+ Column('name', String(50))
+
+The length field on ``String``, like the similar precision/scale fields available on ``Integer``, ``Numeric``, etc., is not referenced by SQLAlchemy other than when creating tables.
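+
+For example, a hypothetical column using precision/scale on the ``Numeric`` type (the column name is made up for illustration)::
+
+    Column('price', Numeric(10, 2))   # renders NUMERIC(10, 2) where supported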
+
+Define a Python Class to be Mapped
+===================================
+While the ``Table`` object defines information about our database, it does not say anything about the definition or behavior of the business objects used by our application; SQLAlchemy views this as a separate concern. To correspond to our ``users`` table, let's create a rudimentary ``User`` class. It need only subclass Python's built-in ``object`` class (i.e. it's a new style class)::
+
+ >>> class User(object):
+ ... def __init__(self, name, fullname, password):
+ ... self.name = name
+ ... self.fullname = fullname
+ ... self.password = password
+ ...
+ ... def __repr__(self):
+ ... return "<User('%s','%s', '%s')>" % (self.name, self.fullname, self.password)
+
+The class has an ``__init__()`` and a ``__repr__()`` method for convenience. These methods are both entirely optional, and can be of any form. SQLAlchemy never calls ``__init__()`` directly.
+
+Setting up the Mapping
+======================
+With our ``users_table`` and ``User`` class, we now want to map the two together. That's where the SQLAlchemy ORM package comes in. We'll use the ``mapper`` function to create a **mapping** between ``users_table`` and ``User``::
+
+ >>> from sqlalchemy.orm import mapper
+ >>> mapper(User, users_table) # doctest:+ELLIPSIS,+NORMALIZE_WHITESPACE
+ <Mapper at 0x...; User>
+
+The ``mapper()`` function creates a new ``Mapper`` object and stores it away for future reference, associated with our class. Let's now create and inspect a ``User`` object::
+
+ >>> ed_user = User('ed', 'Ed Jones', 'edspassword')
+ >>> ed_user.name
+ 'ed'
+ >>> ed_user.password
+ 'edspassword'
+ >>> str(ed_user.id)
+ 'None'
+
+The ``id`` attribute, while not defined by our ``__init__()`` method, exists due to the ``id`` column present within the ``users_table`` object. By default, the ``mapper`` creates class attributes for all columns present within the ``Table``. These class attributes exist as Python descriptors, and define **instrumentation** for the mapped class. The functionality of this instrumentation is very rich and includes the ability to track modifications and automatically load new data from the database when needed.
+
+Since we have not yet told SQLAlchemy to persist ``Ed Jones`` within the database, its id is ``None``. When we persist the object later, this attribute will be populated with a newly generated value.
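+
+As a quick illustration of this instrumentation (a sketch, outside the doctest flow), the class-level attribute is a descriptor which produces SQL expressions rather than plain values when compared::
+
+    expr = (User.name == 'ed')
+    print expr      # renders something like: users.name = :name_1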
+
+Creating Table, Class and Mapper All at Once Declaratively
+===========================================================
+The preceding approach to configuration, involving a ``Table``, a user-defined class, and a ``mapper()`` call, illustrates classical SQLAlchemy usage, which values the highest separation of concerns possible. A large number of applications don't require this degree of separation, and for those SQLAlchemy offers an alternate "shorthand" configurational style called **declarative**. For many applications, this is the only style of configuration needed. Our above example using this style is as follows::
+
+ >>> from sqlalchemy.ext.declarative import declarative_base
+
+ >>> Base = declarative_base()
+ >>> class User(Base):
+ ... __tablename__ = 'users'
+ ...
+ ... id = Column(Integer, primary_key=True)
+ ... name = Column(String)
+ ... fullname = Column(String)
+ ... password = Column(String)
+ ...
+ ... def __init__(self, name, fullname, password):
+ ... self.name = name
+ ... self.fullname = fullname
+ ... self.password = password
+ ...
+ ... def __repr__(self):
+ ... return "<User('%s','%s', '%s')>" % (self.name, self.fullname, self.password)
+
+Above, the ``declarative_base()`` function defines a new class which we name ``Base``, from which all of our ORM-enabled classes will derive. Note that we define ``Column`` objects with no "name" field, since it's inferred from the given attribute name.
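+
+If an attribute should be mapped to a column whose name differs, the name can still be passed explicitly as the first positional argument; a brief sketch (the ``username`` attribute here is hypothetical)::
+
+    class User(Base):
+        __tablename__ = 'users'
+
+        id = Column(Integer, primary_key=True)
+        # the Python attribute "username" maps to the column named "name"
+        username = Column('name', String)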
+
+The underlying ``Table`` object created by our ``declarative_base()`` version of ``User`` is accessible via the ``__table__`` attribute::
+
+ >>> users_table = User.__table__
+
+and the owning ``MetaData`` object is available as well::
+
+ >>> metadata = Base.metadata
+
+Yet another "declarative" approach is available for SQLAlchemy via a third party library called `Elixir <http://elixir.ematia.de/>`_. This is a full-featured configurational product which also includes many higher level mapping configurations built in. Like declarative, once classes and mappings are defined, ORM usage is the same as with a classical SQLAlchemy configuration.
+
+Creating a Session
+==================
+
+We're now ready to start talking to the database. The ORM's "handle" to the database is the ``Session``. When we first set up the application, at the same level as our ``create_engine()`` statement, we define a ``Session`` class which will serve as a factory for new ``Session`` objects:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy.orm import sessionmaker
+ >>> Session = sessionmaker(bind=engine)
+
+In the case where your application does not yet have an ``Engine`` when you define your module-level objects, just set it up like this:
+
+.. sourcecode:: python+sql
+
+ >>> Session = sessionmaker()
+
+Later, when you create your engine with ``create_engine()``, connect it to the ``Session`` using ``configure()``:
+
+.. sourcecode:: python+sql
+
+ >>> Session.configure(bind=engine) # once engine is available
+
+This custom-made ``Session`` class will create new ``Session`` objects which are bound to our database. Other transactional characteristics may be defined when calling ``sessionmaker()`` as well; these are described in a later chapter. Then, whenever you need to have a conversation with the database, you instantiate a ``Session``::
+
+ >>> session = Session()
+
+The above ``Session`` is associated with our SQLite ``engine``, but it hasn't opened any connections yet. When it's first used, it retrieves a connection from a pool of connections maintained by the ``engine``, and holds onto it until we commit all changes and/or close the session object.
+
+Adding new Objects
+==================
+
+To persist our ``User`` object, we ``add()`` it to our ``Session``::
+
+ >>> ed_user = User('ed', 'Ed Jones', 'edspassword')
+ >>> session.add(ed_user)
+
+At this point, the instance is **pending**; no SQL has yet been issued. The ``Session`` will issue the SQL to persist ``Ed Jones`` as soon as is needed, using a process known as a **flush**. If we query the database for ``Ed Jones``, all pending information will first be flushed, and the query is issued afterwards.
+
+For example, below we create a new ``Query`` object which loads instances of ``User``. We "filter by" the ``name`` attribute of ``ed``, and indicate that we'd like only the first result in the full list of rows. A ``User`` instance is returned which is equivalent to that which we've added:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> our_user = session.query(User).filter_by(name='ed').first() # doctest:+ELLIPSIS,+NORMALIZE_WHITESPACE
+ BEGIN
+ INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
+ ['ed', 'Ed Jones', 'edspassword']
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE users.name = ?
+ LIMIT 1 OFFSET 0
+ ['ed']
+ {stop}>>> our_user
+ <User('ed','Ed Jones', 'edspassword')>
+
+In fact, the ``Session`` has identified that the row returned is the **same** row as one already represented within its internal map of objects, so we actually got back the identical instance as that which we just added::
+
+ >>> ed_user is our_user
+ True
+
+The ORM concept at work here is known as an **identity map** and ensures that all operations upon a particular row within a ``Session`` operate upon the same set of data. Once an object with a particular primary key is present in the ``Session``, all SQL queries on that ``Session`` will always return the same Python object for that particular primary key; it also will raise an error if an attempt is made to place a second, already-persisted object with the same primary key within the session.
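+
+As a quick sketch of the identity map at work (outside the doctest flow)::
+
+    u1 = session.query(User).filter_by(name='ed').first()
+    u2 = session.query(User).filter_by(name='ed').first()
+
+    # both queries return the very same object from the identity map
+    assert u1 is u2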
+
+We can add more ``User`` objects at once using ``add_all()``:
+
+.. sourcecode:: python+sql
+
+ >>> session.add_all([
+ ... User('wendy', 'Wendy Williams', 'foobar'),
+ ... User('mary', 'Mary Contrary', 'xxg527'),
+ ... User('fred', 'Fred Flinstone', 'blah')])
+
+Also, Ed has already decided his password isn't too secure, so let's change it:
+
+.. sourcecode:: python+sql
+
+ >>> ed_user.password = 'f8s7ccs'
+
+The ``Session`` is paying attention. It knows, for example, that ``Ed Jones`` has been modified:
+
+.. sourcecode:: python+sql
+
+ >>> session.dirty
+ IdentitySet([<User('ed','Ed Jones', 'f8s7ccs')>])
+
+and that three new ``User`` objects are pending:
+
+.. sourcecode:: python+sql
+
+ >>> session.new # doctest: +NORMALIZE_WHITESPACE
+ IdentitySet([<User('wendy','Wendy Williams', 'foobar')>,
+ <User('mary','Mary Contrary', 'xxg527')>,
+ <User('fred','Fred Flinstone', 'blah')>])
+
+We tell the ``Session`` that we'd like to issue all remaining changes to the database and commit the transaction, which has been in progress throughout. We do this via ``commit()``:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> session.commit()
+ UPDATE users SET password=? WHERE users.id = ?
+ ['f8s7ccs', 1]
+ INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
+ ['wendy', 'Wendy Williams', 'foobar']
+ INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
+ ['mary', 'Mary Contrary', 'xxg527']
+ INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
+ ['fred', 'Fred Flinstone', 'blah']
+ COMMIT
+
+``commit()`` flushes the remaining changes to the database, and commits the transaction. The connection resources referenced by the session are now returned to the connection pool. Subsequent operations with this session will occur in a **new** transaction, which will again re-acquire connection resources when first needed.
+
+If we look at Ed's ``id`` attribute, which earlier was ``None``, it now has a value:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> ed_user.id # doctest: +NORMALIZE_WHITESPACE
+ BEGIN
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE users.id = ?
+ [1]
+ {stop}1
+
+After the ``Session`` inserts new rows in the database, all newly generated identifiers and database-generated defaults become available on the instance, either immediately or via load-on-first-access. In this case, the entire row was re-loaded on access because a new transaction was begun after we issued ``commit()``. SQLAlchemy by default refreshes data from a previous transaction the first time it's accessed within a new transaction, so that the most recent state is available. The level of reloading is configurable as is described in the chapter on Sessions.
+
+Rolling Back
+============
+Since the ``Session`` works within a transaction, we can roll back changes that have been made as well. Let's make two changes that we'll revert; ``ed_user``'s user name gets set to ``Edwardo``:
+
+.. sourcecode:: python+sql
+
+ >>> ed_user.name = 'Edwardo'
+
+and we'll add another erroneous user, ``fake_user``:
+
+.. sourcecode:: python+sql
+
+ >>> fake_user = User('fakeuser', 'Invalid', '12345')
+ >>> session.add(fake_user)
+
+Querying the session, we can see that they're flushed into the current transaction:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> session.query(User).filter(User.name.in_(['Edwardo', 'fakeuser'])).all()
+ UPDATE users SET name=? WHERE users.id = ?
+ ['Edwardo', 1]
+ INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
+ ['fakeuser', 'Invalid', '12345']
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE users.name IN (?, ?)
+ ['Edwardo', 'fakeuser']
+ {stop}[<User('Edwardo','Ed Jones', 'f8s7ccs')>, <User('fakeuser','Invalid', '12345')>]
+
+Rolling back, we can see that ``ed_user``'s name is back to ``ed``, and ``fake_user`` has been kicked out of the session:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> session.rollback()
+ ROLLBACK
+ {stop}
+
+ {sql}>>> ed_user.name
+ BEGIN
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE users.id = ?
+ [1]
+ {stop}u'ed'
+ >>> fake_user in session
+ False
+
+Issuing a SELECT illustrates the changes made to the database:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> session.query(User).filter(User.name.in_(['ed', 'fakeuser'])).all()
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE users.name IN (?, ?)
+ ['ed', 'fakeuser']
+ {stop}[<User('ed','Ed Jones', 'f8s7ccs')>]
+
+Querying
+========
+
+A ``Query`` is created using the ``query()`` function on ``Session``. This function takes a variable number of arguments, which can be any combination of classes and class-instrumented descriptors. Below, we indicate a ``Query`` which loads ``User`` instances. When evaluated in an iterative context, the list of ``User`` objects present is returned:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> for instance in session.query(User).order_by(User.id): # doctest: +NORMALIZE_WHITESPACE
+ ... print instance.name, instance.fullname
+ SELECT users.id AS users_id, users.name AS users_name,
+ users.fullname AS users_fullname, users.password AS users_password
+ FROM users ORDER BY users.id
+ []
+ {stop}ed Ed Jones
+ wendy Wendy Williams
+ mary Mary Contrary
+ fred Fred Flinstone
+
+The ``Query`` also accepts ORM-instrumented descriptors as arguments. Any time multiple class entities or column-based entities are expressed as arguments to the ``query()`` function, the return result is expressed as tuples:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> for name, fullname in session.query(User.name, User.fullname): # doctest: +NORMALIZE_WHITESPACE
+ ... print name, fullname
+ SELECT users.name AS users_name, users.fullname AS users_fullname
+ FROM users
+ []
+ {stop}ed Ed Jones
+ wendy Wendy Williams
+ mary Mary Contrary
+ fred Fred Flinstone
+
+The tuples returned by ``Query`` are *named* tuples, and can be treated much like an ordinary Python object. The names are the same as the attribute's name for an attribute, and the class name for a class:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> for row in session.query(User, User.name).all():
+ ... print row.User, row.name
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ []
+ {stop}<User('ed','Ed Jones', 'f8s7ccs')> ed
+ <User('wendy','Wendy Williams', 'foobar')> wendy
+ <User('mary','Mary Contrary', 'xxg527')> mary
+ <User('fred','Fred Flinstone', 'blah')> fred
+
+You can control the names using the ``label()`` construct for scalar attributes and ``aliased()`` for class constructs:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy.orm import aliased
+ >>> user_alias = aliased(User, name='user_alias')
+ {sql}>>> for row in session.query(user_alias, user_alias.name.label('name_label')).all():
+ ... print row.user_alias, row.name_label
+ SELECT users_1.id AS users_1_id, users_1.name AS users_1_name, users_1.fullname AS users_1_fullname, users_1.password AS users_1_password, users_1.name AS name_label
+ FROM users AS users_1
+ []
+ <User('ed','Ed Jones', 'f8s7ccs')> ed
+ <User('wendy','Wendy Williams', 'foobar')> wendy
+ <User('mary','Mary Contrary', 'xxg527')> mary
+ <User('fred','Fred Flinstone', 'blah')> fred
+
+Basic operations with ``Query`` include issuing LIMIT and OFFSET, most conveniently using Python array slices and typically in conjunction with ORDER BY:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> for u in session.query(User).order_by(User.id)[1:3]: #doctest: +NORMALIZE_WHITESPACE
+ ... print u
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users ORDER BY users.id
+ LIMIT 2 OFFSET 1
+ []
+ {stop}<User('wendy','Wendy Williams', 'foobar')>
+ <User('mary','Mary Contrary', 'xxg527')>
+
+and filtering results, which is accomplished either with ``filter_by()``, which uses keyword arguments:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> for name, in session.query(User.name).filter_by(fullname='Ed Jones'): # doctest: +NORMALIZE_WHITESPACE
+ ... print name
+ SELECT users.name AS users_name FROM users
+ WHERE users.fullname = ?
+ ['Ed Jones']
+ {stop}ed
+
+...or ``filter()``, which uses more flexible SQL expression language constructs. These allow you to use regular Python operators with the class-level attributes on your mapped class:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> for name, in session.query(User.name).filter(User.fullname=='Ed Jones'): # doctest: +NORMALIZE_WHITESPACE
+ ... print name
+ SELECT users.name AS users_name FROM users
+ WHERE users.fullname = ?
+ ['Ed Jones']
+ {stop}ed
+
+The ``Query`` object is fully *generative*, meaning that most method calls return a new ``Query`` object upon which further criteria may be added. For example, to query for users named "ed" with a full name of "Ed Jones", you can call ``filter()`` twice, which joins criteria using ``AND``:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> for user in session.query(User).filter(User.name=='ed').filter(User.fullname=='Ed Jones'): # doctest: +NORMALIZE_WHITESPACE
+ ... print user
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE users.name = ? AND users.fullname = ?
+ ['ed', 'Ed Jones']
+ {stop}<User('ed','Ed Jones', 'f8s7ccs')>
+
+
+Common Filter Operators
+-----------------------
+
+Here's a rundown of some of the most common operators used in ``filter()``:
+
+* equals::
+
+ query.filter(User.name == 'ed')
+
+* not equals::
+
+ query.filter(User.name != 'ed')
+
+* LIKE::
+
+ query.filter(User.name.like('%ed%'))
+
+* IN::
+
+ query.filter(User.name.in_(['ed', 'wendy', 'jack']))
+
+* IS NULL::
+
+    query.filter(User.name == None)
+
+* AND::
+
+    from sqlalchemy import and_
+    query.filter(and_(User.name == 'ed', User.fullname == 'Ed Jones'))
+
+    # or call filter()/filter_by() multiple times
+    query.filter(User.name == 'ed').filter(User.fullname == 'Ed Jones')
+
+* OR::
+
+    from sqlalchemy import or_
+    query.filter(or_(User.name == 'ed', User.name == 'wendy'))
+
+* match::
+
+ query.filter(User.name.match('wendy'))
+
+ The contents of the match parameter are database backend specific.
+
+Returning Lists and Scalars
+---------------------------
+
+The ``all()``, ``one()``, and ``first()`` methods of ``Query`` immediately issue SQL and return a non-iterator value. ``all()`` returns a list:
+
+.. sourcecode:: python+sql
+
+ >>> query = session.query(User).filter(User.name.like('%ed')).order_by(User.id)
+ {sql}>>> query.all()
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE users.name LIKE ? ORDER BY users.id
+ ['%ed']
+ {stop}[<User('ed','Ed Jones', 'f8s7ccs')>, <User('fred','Fred Flinstone', 'blah')>]
+
+``first()`` applies a limit of one and returns the first result as a scalar:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> query.first()
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE users.name LIKE ? ORDER BY users.id
+ LIMIT 1 OFFSET 0
+ ['%ed']
+ {stop}<User('ed','Ed Jones', 'f8s7ccs')>
+
+``one()`` applies a limit of *two*, and raises an error if anything other than exactly one row is returned:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> try:
+ ... user = query.one()
+ ... except Exception, e:
+ ... print e
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE users.name LIKE ? ORDER BY users.id
+ LIMIT 2 OFFSET 0
+ ['%ed']
+ {stop}Multiple rows were found for one()
+
+.. sourcecode:: python+sql
+
+ {sql}>>> try:
+ ... user = query.filter(User.id == 99).one()
+ ... except Exception, e:
+ ... print e
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE users.name LIKE ? AND users.id = ? ORDER BY users.id
+ LIMIT 2 OFFSET 0
+ ['%ed', 99]
+ {stop}No row was found for one()
+
+Using Literal SQL
+-----------------
+
+Literal strings can be used flexibly with ``Query``. Most methods accept strings in addition to SQLAlchemy clause constructs. For example, ``filter()`` and ``order_by()``:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> for user in session.query(User).filter("id<224").order_by("id").all():
+ ... print user.name
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE id<224 ORDER BY id
+ []
+ {stop}ed
+ wendy
+ mary
+ fred
+
+Bind parameters can be specified with string-based SQL, using a colon. To specify the values, use the ``params()`` method:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> session.query(User).filter("id<:value and name=:name").\
+ ... params(value=224, name='fred').order_by(User.id).one() # doctest: +NORMALIZE_WHITESPACE
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE id<? and name=? ORDER BY users.id
+ LIMIT 2 OFFSET 0
+ [224, 'fred']
+ {stop}<User('fred','Fred Flinstone', 'blah')>
+
+To use an entirely string-based statement, use ``from_statement()``; just ensure that the columns clause of the statement contains the column names normally used by the mapper (below illustrated using an asterisk):
+
+.. sourcecode:: python+sql
+
+ {sql}>>> session.query(User).from_statement("SELECT * FROM users where name=:name").params(name='ed').all()
+ SELECT * FROM users where name=?
+ ['ed']
+ {stop}[<User('ed','Ed Jones', 'f8s7ccs')>]
+
+Building a Relation
+====================
+
+Now let's consider a second table to be dealt with. Users in our system can also store any number of email addresses associated with their username. This implies a basic one-to-many association from the ``users_table`` to a new table which stores email addresses, which we will call ``addresses``. Using declarative, we define this table along with its mapped class, ``Address``:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy import ForeignKey
+ >>> from sqlalchemy.orm import relation, backref
+ >>> class Address(Base):
+ ... __tablename__ = 'addresses'
+ ... id = Column(Integer, primary_key=True)
+ ... email_address = Column(String, nullable=False)
+ ... user_id = Column(Integer, ForeignKey('users.id'))
+ ...
+ ... user = relation(User, backref=backref('addresses', order_by=id))
+ ...
+ ... def __init__(self, email_address):
+ ... self.email_address = email_address
+ ...
+ ... def __repr__(self):
+ ... return "<Address('%s')>" % self.email_address
+
+The above class introduces a **foreign key** constraint which references the ``users`` table. This defines for SQLAlchemy the relationship between the two tables at the database level. The relationship between the ``User`` and ``Address`` classes is defined separately using the ``relation()`` function, which defines an attribute ``user`` to be placed on the ``Address`` class, as well as an ``addresses`` collection to be placed on the ``User`` class. Such a relation is known as a **bidirectional** relationship. Because of the placement of the foreign key, from ``Address`` to ``User`` it is **many to one**, and from ``User`` to ``Address`` it is **one to many**. SQLAlchemy is automatically aware of many-to-one/one-to-many based on foreign keys.
+
+The ``relation()`` function is extremely flexible, and could just as easily have been defined on the ``User`` class:
+
+.. sourcecode:: python+sql
+
+ class User(Base):
+ # ....
+ addresses = relation(Address, order_by=Address.id, backref="user")
+
+We are also free to not define a backref, and to define the ``relation()`` only on one class and not the other. It is also possible to define two separate ``relation()`` constructs for either direction, which is generally safe for many-to-one and one-to-many relations, but not for many-to-many relations.
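+
+A hypothetical sketch of that two-``relation()`` form for these classes (no ``backref``; each direction is configured independently)::
+
+    class Address(Base):
+        # ...
+        user = relation(User)                                  # many-to-one
+
+    # separately, on the other class:
+    User.addresses = relation(Address, order_by=Address.id)   # one-to-many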
+
+When using the ``declarative`` extension, ``relation()`` gives us the option to use strings for most arguments that concern the target class, in the case that the target class has not yet been defined. This **only** works in conjunction with ``declarative``:
+
+.. sourcecode:: python+sql
+
+ class User(Base):
+        # ....
+ addresses = relation("Address", order_by="Address.id", backref="user")
+
+When ``declarative`` is not in use, you typically define your ``mapper()`` well after the target classes and ``Table`` objects have been defined, so string expressions are not needed.
+
+We'll need to create the ``addresses`` table in the database, so we will issue another CREATE from our metadata, which will skip over tables which have already been created:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> metadata.create_all(engine) # doctest: +NORMALIZE_WHITESPACE
+ PRAGMA table_info("users")
+ {}
+ PRAGMA table_info("addresses")
+ {}
+ CREATE TABLE addresses (
+ id INTEGER NOT NULL,
+ email_address VARCHAR NOT NULL,
+ user_id INTEGER,
+ PRIMARY KEY (id),
+ FOREIGN KEY(user_id) REFERENCES users (id)
+ )
+ {}
+ COMMIT
+
+Working with Related Objects
+=============================
+
+Now when we create a ``User``, a blank ``addresses`` collection will be present. By default, the collection is a Python list. Other collection types, such as sets and dictionaries, are available as well:
+
+.. sourcecode:: python+sql
+
+ >>> jack = User('jack', 'Jack Bean', 'gjffdd')
+ >>> jack.addresses
+ []
+
+We are free to add ``Address`` objects to our ``User`` object. In this case we just assign a full list directly:
+
+.. sourcecode:: python+sql
+
+ >>> jack.addresses = [Address(email_address='jack@google.com'), Address(email_address='j25@yahoo.com')]
+
+When using a bidirectional relationship, elements added in one direction automatically become visible in the other direction. This is the basic behavior of the **backref** keyword, which maintains the relationship purely in memory, without using any SQL:
+
+.. sourcecode:: python+sql
+
+ >>> jack.addresses[1]
+ <Address('j25@yahoo.com')>
+
+ >>> jack.addresses[1].user
+ <User('jack','Jack Bean', 'gjffdd')>
+
+Let's add and commit ``Jack Bean`` to the database. ``jack``, as well as the two ``Address`` members in his ``addresses`` collection, is added to the session at once, using a process known as **cascading**:
+
+.. sourcecode:: python+sql
+
+ >>> session.add(jack)
+ {sql}>>> session.commit()
+ INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
+ ['jack', 'Jack Bean', 'gjffdd']
+ INSERT INTO addresses (email_address, user_id) VALUES (?, ?)
+ ['jack@google.com', 5]
+ INSERT INTO addresses (email_address, user_id) VALUES (?, ?)
+ ['j25@yahoo.com', 5]
+ COMMIT
+
+Querying for Jack, we get just Jack back. No SQL is yet issued for Jack's addresses:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> jack = session.query(User).filter_by(name='jack').one()
+ BEGIN
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE users.name = ?
+ LIMIT 2 OFFSET 0
+ ['jack']
+
+ >>> jack
+ <User('jack','Jack Bean', 'gjffdd')>
+
+Let's look at the ``addresses`` collection. Watch the SQL:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> jack.addresses
+ SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address, addresses.user_id AS addresses_user_id
+ FROM addresses
+ WHERE ? = addresses.user_id ORDER BY addresses.id
+ [5]
+ {stop}[<Address('jack@google.com')>, <Address('j25@yahoo.com')>]
+
+When we accessed the ``addresses`` collection, SQL was suddenly issued. This is an example of a **lazy loading relation**. The ``addresses`` collection is now loaded and behaves just like an ordinary list.
+
+If we want to reduce the number of queries (dramatically, in many cases), we can apply an **eager load** to the query operation, using an **option** indicating that we'd like ``addresses`` to load "eagerly". SQLAlchemy then constructs an outer join between the ``users`` and ``addresses`` tables, and loads them at once, populating the ``addresses`` collection on each ``User`` object if it's not already populated:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy.orm import eagerload
+
+ {sql}>>> jack = session.query(User).options(eagerload('addresses')).filter_by(name='jack').one() #doctest: +NORMALIZE_WHITESPACE
+ SELECT anon_1.users_id AS anon_1_users_id, anon_1.users_name AS anon_1_users_name,
+ anon_1.users_fullname AS anon_1_users_fullname, anon_1.users_password AS anon_1_users_password,
+ addresses_1.id AS addresses_1_id, addresses_1.email_address AS addresses_1_email_address,
+ addresses_1.user_id AS addresses_1_user_id
+ FROM (SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname,
+ users.password AS users_password
+ FROM users WHERE users.name = ?
+ LIMIT 2 OFFSET 0) AS anon_1 LEFT OUTER JOIN addresses AS addresses_1
+ ON anon_1.users_id = addresses_1.user_id ORDER BY addresses_1.id
+ ['jack']
+
+ >>> jack
+ <User('jack','Jack Bean', 'gjffdd')>
+
+ >>> jack.addresses
+ [<Address('jack@google.com')>, <Address('j25@yahoo.com')>]
+
+SQLAlchemy has the ability to control exactly which attributes and how many levels deep should be joined together in a single SQL query. More information on this feature is available in :ref:`advdatamapping_relation`.
+
+Querying with Joins
+====================
+
+While the eager load created a JOIN specifically to populate a collection, we can also work explicitly with joins in many ways. For example, to construct a simple inner join between ``User`` and ``Address``, we can just ``filter()`` their related columns together. Below we load the ``User`` and ``Address`` entities at once using this method:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> for u, a in session.query(User, Address).filter(User.id==Address.user_id).\
+ ... filter(Address.email_address=='jack@google.com').all(): # doctest: +NORMALIZE_WHITESPACE
+ ... print u, a
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname,
+ users.password AS users_password, addresses.id AS addresses_id,
+ addresses.email_address AS addresses_email_address, addresses.user_id AS addresses_user_id
+ FROM users, addresses
+ WHERE users.id = addresses.user_id AND addresses.email_address = ?
+ ['jack@google.com']
+ {stop}<User('jack','Jack Bean', 'gjffdd')> <Address('jack@google.com')>
+
+Or we can make a real JOIN construct; one way to do so is to use the ORM ``join()`` function, and tell ``Query`` to "select from" this join:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy.orm import join
+ {sql}>>> session.query(User).select_from(join(User, Address)).\
+ ... filter(Address.email_address=='jack@google.com').all()
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users JOIN addresses ON users.id = addresses.user_id
+ WHERE addresses.email_address = ?
+ ['jack@google.com']
+ {stop}[<User('jack','Jack Bean', 'gjffdd')>]
+
+``join()`` knows how to join between ``User`` and ``Address`` because there's only one foreign key between them. If there were no foreign keys, or several, ``join()`` would require a third argument indicating the ON clause of the join, in one of the following forms:
+
+.. sourcecode:: python+sql
+
+ join(User, Address, User.id==Address.user_id) # explicit condition
+ join(User, Address, User.addresses) # specify relation from left to right
+ join(User, Address, 'addresses') # same, using a string
+
+The functionality of ``join()`` is also available generatively from ``Query`` itself using ``Query.join``. This is most easily used with just the "ON" clause portion of the join, such as:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> session.query(User).join(User.addresses).\
+ ... filter(Address.email_address=='jack@google.com').all()
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users JOIN addresses ON users.id = addresses.user_id
+ WHERE addresses.email_address = ?
+ ['jack@google.com']
+ {stop}[<User('jack','Jack Bean', 'gjffdd')>]
+
+To explicitly specify the target of the join, use tuples to form an argument list similar to the standalone join. This becomes more important when using aliases and similar constructs:
+
+.. sourcecode:: python+sql
+
+ session.query(User).join((Address, User.addresses))
+
+Multiple joins can be created by passing a list of arguments:
+
+.. sourcecode:: python+sql
+
+ session.query(Foo).join(Foo.bars, Bar.bats, (Bat, 'widgets'))
+
+The above would produce SQL something like ``foo JOIN bars ON <onclause> JOIN bats ON <onclause> JOIN widgets ON <onclause>``.
+
+Using Aliases
+-------------
+
+When querying across multiple tables, if the same table needs to be referenced more than once, SQL typically requires that the table be *aliased* with another name, so that it can be distinguished from other occurrences of that table. The ``Query`` supports this most explicitly using the ``aliased`` construct. Below we join to the ``Address`` entity twice, to locate a user who has two distinct email addresses at the same time:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy.orm import aliased
+ >>> adalias1 = aliased(Address)
+ >>> adalias2 = aliased(Address)
+ {sql}>>> for username, email1, email2 in \
+ ... session.query(User.name, adalias1.email_address, adalias2.email_address).\
+ ... join((adalias1, User.addresses), (adalias2, User.addresses)).\
+ ... filter(adalias1.email_address=='jack@google.com').\
+ ... filter(adalias2.email_address=='j25@yahoo.com'):
+ ... print username, email1, email2 # doctest: +NORMALIZE_WHITESPACE
+ SELECT users.name AS users_name, addresses_1.email_address AS addresses_1_email_address,
+ addresses_2.email_address AS addresses_2_email_address
+ FROM users JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id
+ JOIN addresses AS addresses_2 ON users.id = addresses_2.user_id
+ WHERE addresses_1.email_address = ? AND addresses_2.email_address = ?
+ ['jack@google.com', 'j25@yahoo.com']
+ {stop}jack jack@google.com j25@yahoo.com
+
+Using Subqueries
+----------------
+
+The ``Query`` is suitable for generating statements which can be used as subqueries. Suppose we wanted to load ``User`` objects along with a count of how many ``Address`` records each user has. The best way to generate SQL like this is to get the count of addresses grouped by user ids, and JOIN to the parent. In this case we use a LEFT OUTER JOIN so that we get rows back for those users who don't have any addresses, e.g.::
+
+ SELECT users.*, adr_count.address_count FROM users LEFT OUTER JOIN
+ (SELECT user_id, count(*) AS address_count FROM addresses GROUP BY user_id) AS adr_count
+ ON users.id=adr_count.user_id
+
+Using the ``Query``, we build a statement like this from the inside out. The ``statement`` accessor returns a SQL expression representing the statement generated by a particular ``Query`` - this is an instance of a ``select()`` construct, which is described in :ref:`sql`::
+
+ >>> from sqlalchemy.sql import func
+ >>> stmt = session.query(Address.user_id, func.count('*').label('address_count')).group_by(Address.user_id).subquery()
+
+The ``func`` keyword generates SQL functions, and the ``subquery()`` method on ``Query`` produces a SQL expression construct representing a SELECT statement embedded within an alias (it's actually shorthand for ``query.statement.alias()``).
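+
+``func`` generates any SQL function via attribute access; a few illustrative expressions::
+
+    from sqlalchemy.sql import func
+
+    func.count(Address.id)    # renders count(addresses.id)
+    func.lower(User.name)     # renders lower(users.name)
+    func.now()                # renders now()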
+
+Once we have our statement, it behaves like a ``Table`` construct, such as the one we created for ``users`` at the start of this tutorial. The columns on the statement are accessible through an attribute called ``c``:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> for u, count in session.query(User, stmt.c.address_count).\
+ ... outerjoin((stmt, User.id==stmt.c.user_id)).order_by(User.id): # doctest: +NORMALIZE_WHITESPACE
+ ... print u, count
+ SELECT users.id AS users_id, users.name AS users_name,
+ users.fullname AS users_fullname, users.password AS users_password,
+ anon_1.address_count AS anon_1_address_count
+ FROM users LEFT OUTER JOIN (SELECT addresses.user_id AS user_id, count(?) AS address_count
+ FROM addresses GROUP BY addresses.user_id) AS anon_1 ON users.id = anon_1.user_id
+ ORDER BY users.id
+ ['*']
+ {stop}<User('ed','Ed Jones', 'f8s7ccs')> None
+ <User('wendy','Wendy Williams', 'foobar')> None
+ <User('mary','Mary Contrary', 'xxg527')> None
+ <User('fred','Fred Flinstone', 'blah')> None
+ <User('jack','Jack Bean', 'gjffdd')> 2
+
+Using EXISTS
+------------
+
+The EXISTS keyword in SQL is a boolean operator which returns True if the given expression contains any rows. It may be used in many scenarios in place of joins, and is also useful for locating rows which do not have a corresponding row in a related table.
+
+There is an explicit EXISTS construct, which looks like this:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy.sql import exists
+ >>> stmt = exists().where(Address.user_id==User.id)
+ {sql}>>> for name, in session.query(User.name).filter(stmt): # doctest: +NORMALIZE_WHITESPACE
+ ... print name
+ SELECT users.name AS users_name
+ FROM users
+ WHERE EXISTS (SELECT *
+ FROM addresses
+ WHERE addresses.user_id = users.id)
+ []
+ {stop}jack
+
+The ``Query`` features several operators which make use of EXISTS automatically. Above, the statement can be expressed along the ``User.addresses`` relation using ``any()``:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> for name, in session.query(User.name).filter(User.addresses.any()): # doctest: +NORMALIZE_WHITESPACE
+ ... print name
+ SELECT users.name AS users_name
+ FROM users
+ WHERE EXISTS (SELECT 1
+ FROM addresses
+ WHERE users.id = addresses.user_id)
+ []
+ {stop}jack
+
+``any()`` takes criterion as well, to limit the rows matched:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> for name, in session.query(User.name).\
+ ... filter(User.addresses.any(Address.email_address.like('%google%'))): # doctest: +NORMALIZE_WHITESPACE
+ ... print name
+ SELECT users.name AS users_name
+ FROM users
+ WHERE EXISTS (SELECT 1
+ FROM addresses
+ WHERE users.id = addresses.user_id AND addresses.email_address LIKE ?)
+ ['%google%']
+ {stop}jack
+
+``has()`` is the equivalent of ``any()`` for many-to-one relations (note the ``~`` operator here too, which means "NOT"):
+
+.. sourcecode:: python+sql
+
+ {sql}>>> session.query(Address).filter(~Address.user.has(User.name=='jack')).all() # doctest: +NORMALIZE_WHITESPACE
+ SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address,
+ addresses.user_id AS addresses_user_id
+ FROM addresses
+ WHERE NOT (EXISTS (SELECT 1
+ FROM users
+ WHERE users.id = addresses.user_id AND users.name = ?))
+ ['jack']
+ {stop}[]
+
+Common Relation Operators
+-------------------------
+
+Here are all the operators which build on relations:
+
+* equals (used for many-to-one)::
+
+ query.filter(Address.user == someuser)
+
+* not equals (used for many-to-one)::
+
+ query.filter(Address.user != someuser)
+
+* IS NULL (used for many-to-one)::
+
+ query.filter(Address.user == None)
+
+* contains (used for one-to-many and many-to-many collections)::
+
+ query.filter(User.addresses.contains(someaddress))
+
+* any (used for one-to-many and many-to-many collections)::
+
+ query.filter(User.addresses.any(Address.email_address == 'bar'))
+
+ # also takes keyword arguments:
+ query.filter(User.addresses.any(email_address='bar'))
+
+* has (used for many-to-one)::
+
+ query.filter(Address.user.has(name='ed'))
+
+* with_parent (used for any relation)::
+
+ session.query(Address).with_parent(someuser, 'addresses')
+
+Deleting
+========
+
+Let's try to delete ``jack`` and see how that goes. We'll mark him as deleted in the session, then we'll issue a ``count`` query to see that no rows remain:
+
+.. sourcecode:: python+sql
+
+ >>> session.delete(jack)
+ {sql}>>> session.query(User).filter_by(name='jack').count() # doctest: +NORMALIZE_WHITESPACE
+ UPDATE addresses SET user_id=? WHERE addresses.id = ?
+ [None, 1]
+ UPDATE addresses SET user_id=? WHERE addresses.id = ?
+ [None, 2]
+ DELETE FROM users WHERE users.id = ?
+ [5]
+ SELECT count(1) AS count_1
+ FROM users
+ WHERE users.name = ?
+ ['jack']
+ {stop}0
+
+So far, so good. How about Jack's ``Address`` objects?
+
+.. sourcecode:: python+sql
+
+ {sql}>>> session.query(Address).filter(
+ ... Address.email_address.in_(['jack@google.com', 'j25@yahoo.com'])
+ ... ).count() # doctest: +NORMALIZE_WHITESPACE
+ SELECT count(1) AS count_1
+ FROM addresses
+ WHERE addresses.email_address IN (?, ?)
+ ['jack@google.com', 'j25@yahoo.com']
+ {stop}2
+
+Uh oh, they're still there! Analyzing the flush SQL, we can see that the ``user_id`` column of each address was set to NULL, but the rows weren't deleted. SQLAlchemy doesn't assume that deletes cascade; you have to tell it to do so.
+
+Configuring delete/delete-orphan Cascade
+----------------------------------------
+
+We will configure **cascade** options on the ``User.addresses`` relation to change the behavior. While SQLAlchemy allows you to add new attributes and relations to mappings at any point in time, in this case the existing relation needs to be removed, so we need to tear down the mappings completely and start again. This is not a typical operation and is here just for illustrative purposes.
+
+Removing all ORM state is as follows:
+
+.. sourcecode:: python+sql
+
+ >>> session.close() # roll back and close the transaction
+ >>> from sqlalchemy.orm import clear_mappers
+ >>> clear_mappers() # clear mappers
+
+Below, we use ``mapper()`` to reconfigure an ORM mapping for ``User`` and ``Address``, on our existing but currently un-mapped classes. The ``User.addresses`` relation now has ``delete, delete-orphan`` cascade on it, which indicates that DELETE operations will cascade to attached ``Address`` objects as well as ``Address`` objects which are removed from their parent:
+
+.. sourcecode:: python+sql
+
+ >>> mapper(User, users_table, properties={ # doctest: +ELLIPSIS
+ ... 'addresses':relation(Address, backref='user', cascade="all, delete, delete-orphan")
+ ... })
+ <Mapper at 0x...; User>
+
+ >>> addresses_table = Address.__table__
+ >>> mapper(Address, addresses_table) # doctest: +ELLIPSIS
+ <Mapper at 0x...; Address>
+
+Now when we load Jack (below using ``get()``, which loads by primary key), removing an address from his ``addresses`` collection will result in that ``Address`` being deleted:
+
+.. sourcecode:: python+sql
+
+ # load Jack by primary key
+ {sql}>>> jack = session.query(User).get(5) #doctest: +NORMALIZE_WHITESPACE
+ BEGIN
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE users.id = ?
+ [5]
+ {stop}
+
+ # remove one Address (lazy load fires off)
+ {sql}>>> del jack.addresses[1]
+ SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address, addresses.user_id AS addresses_user_id
+ FROM addresses
+ WHERE ? = addresses.user_id
+ [5]
+ {stop}
+
+ # only one address remains
+ {sql}>>> session.query(Address).filter(
+ ... Address.email_address.in_(['jack@google.com', 'j25@yahoo.com'])
+ ... ).count() # doctest: +NORMALIZE_WHITESPACE
+ DELETE FROM addresses WHERE addresses.id = ?
+ [2]
+ SELECT count(1) AS count_1
+ FROM addresses
+ WHERE addresses.email_address IN (?, ?)
+ ['jack@google.com', 'j25@yahoo.com']
+ {stop}1
+
+Deleting Jack will delete both Jack and his remaining ``Address``:
+
+.. sourcecode:: python+sql
+
+ >>> session.delete(jack)
+
+ {sql}>>> session.query(User).filter_by(name='jack').count() # doctest: +NORMALIZE_WHITESPACE
+ DELETE FROM addresses WHERE addresses.id = ?
+ [1]
+ DELETE FROM users WHERE users.id = ?
+ [5]
+ SELECT count(1) AS count_1
+ FROM users
+ WHERE users.name = ?
+ ['jack']
+ {stop}0
+
+ {sql}>>> session.query(Address).filter(
+ ... Address.email_address.in_(['jack@google.com', 'j25@yahoo.com'])
+ ... ).count() # doctest: +NORMALIZE_WHITESPACE
+ SELECT count(1) AS count_1
+ FROM addresses
+ WHERE addresses.email_address IN (?, ?)
+ ['jack@google.com', 'j25@yahoo.com']
+ {stop}0
+
+Building a Many To Many Relation
+=================================
+
+We're moving into the bonus round here, but let's show off a many-to-many relationship. We'll sneak in some other features too, just to take a tour. We'll make our application a blog application, where users can write ``BlogPost`` items, which have ``Keyword`` items associated with them.
+
+The declarative setup is as follows:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy import Text
+
+ >>> # association table
+ >>> post_keywords = Table('post_keywords', metadata,
+ ... Column('post_id', Integer, ForeignKey('posts.id')),
+ ... Column('keyword_id', Integer, ForeignKey('keywords.id'))
+ ... )
+
+ >>> class BlogPost(Base):
+ ... __tablename__ = 'posts'
+ ...
+ ... id = Column(Integer, primary_key=True)
+ ... user_id = Column(Integer, ForeignKey('users.id'))
+ ... headline = Column(String(255), nullable=False)
+ ... body = Column(Text)
+ ...
+ ... # many to many BlogPost<->Keyword
+ ... keywords = relation('Keyword', secondary=post_keywords, backref='posts')
+ ...
+ ... def __init__(self, headline, body, author):
+ ... self.author = author
+ ... self.headline = headline
+ ... self.body = body
+ ...
+ ... def __repr__(self):
+ ... return "BlogPost(%r, %r, %r)" % (self.headline, self.body, self.author)
+
+ >>> class Keyword(Base):
+ ... __tablename__ = 'keywords'
+ ...
+ ... id = Column(Integer, primary_key=True)
+ ... keyword = Column(String(50), nullable=False, unique=True)
+ ...
+ ... def __init__(self, keyword):
+ ... self.keyword = keyword
+
+Above, the many-to-many relation is ``BlogPost.keywords``. The defining feature of a many-to-many relation is the ``secondary`` keyword argument which references a ``Table`` object representing the association table. This table only contains columns which reference the two sides of the relation; if it has *any* other columns, such as its own primary key, or foreign keys to other tables, SQLAlchemy requires a different usage pattern called the "association object", described at :ref:`association_pattern`.
+
+The many-to-many relation is also bi-directional using the ``backref`` keyword. This is the one case where usage of ``backref`` is generally required, since if a separate ``posts`` relation were added to the ``Keyword`` entity, both relations would independently add and remove rows from the ``post_keywords`` table and produce conflicts.
+
+We would also like our ``BlogPost`` class to have an ``author`` field. We will add this as another bidirectional relationship. One issue is that a single user might have lots of blog posts, so when we access ``User.posts``, we'd like to be able to filter results further so as not to load the entire collection. For this we use a setting accepted by ``relation()`` called ``lazy='dynamic'``, which configures an alternate **loader strategy** on the attribute. To use it on the "reverse" side of a ``relation()``, we use the ``backref()`` function:
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy.orm import backref
+ >>> # "dynamic" loading relation to User
+ >>> BlogPost.author = relation(User, backref=backref('posts', lazy='dynamic'))
+
+Create new tables:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> metadata.create_all(engine) # doctest: +NORMALIZE_WHITESPACE
+ PRAGMA table_info("users")
+ {}
+ PRAGMA table_info("addresses")
+ {}
+ PRAGMA table_info("posts")
+ {}
+ PRAGMA table_info("keywords")
+ {}
+ PRAGMA table_info("post_keywords")
+ {}
+ CREATE TABLE posts (
+ id INTEGER NOT NULL,
+ user_id INTEGER,
+ headline VARCHAR(255) NOT NULL,
+ body TEXT,
+ PRIMARY KEY (id),
+ FOREIGN KEY(user_id) REFERENCES users (id)
+ )
+ {}
+ COMMIT
+ CREATE TABLE keywords (
+ id INTEGER NOT NULL,
+ keyword VARCHAR(50) NOT NULL,
+ PRIMARY KEY (id),
+ UNIQUE (keyword)
+ )
+ {}
+ COMMIT
+ CREATE TABLE post_keywords (
+ post_id INTEGER,
+ keyword_id INTEGER,
+ FOREIGN KEY(post_id) REFERENCES posts (id),
+ FOREIGN KEY(keyword_id) REFERENCES keywords (id)
+ )
+ {}
+ COMMIT
+
+Usage is not too different from what we've been doing. Let's give Wendy some blog posts:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> wendy = session.query(User).filter_by(name='wendy').one()
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
+ FROM users
+ WHERE users.name = ?
+ LIMIT 2 OFFSET 0
+ ['wendy']
+
+ >>> post = BlogPost("Wendy's Blog Post", "This is a test", wendy)
+ >>> session.add(post)
+
+We're storing keywords uniquely in the database, but we know that we don't have any yet, so we can just create them:
+
+.. sourcecode:: python+sql
+
+ >>> post.keywords.append(Keyword('wendy'))
+ >>> post.keywords.append(Keyword('firstpost'))
+
+We can now look up all blog posts with the keyword 'firstpost'. We'll use the ``any`` operator to locate "blog posts where any of its keywords has the keyword string 'firstpost'":
+
+.. sourcecode:: python+sql
+
+ {sql}>>> session.query(BlogPost).filter(BlogPost.keywords.any(keyword='firstpost')).all()
+ INSERT INTO posts (user_id, headline, body) VALUES (?, ?, ?)
+ [2, "Wendy's Blog Post", 'This is a test']
+ INSERT INTO keywords (keyword) VALUES (?)
+ ['wendy']
+ INSERT INTO keywords (keyword) VALUES (?)
+ ['firstpost']
+ INSERT INTO post_keywords (post_id, keyword_id) VALUES (?, ?)
+ [[1, 1], [1, 2]]
+ SELECT posts.id AS posts_id, posts.user_id AS posts_user_id, posts.headline AS posts_headline, posts.body AS posts_body
+ FROM posts
+ WHERE EXISTS (SELECT 1
+ FROM post_keywords, keywords
+ WHERE posts.id = post_keywords.post_id AND keywords.id = post_keywords.keyword_id AND keywords.keyword = ?)
+ ['firstpost']
+ {stop}[BlogPost("Wendy's Blog Post", 'This is a test', <User('wendy','Wendy Williams', 'foobar')>)]
+
+If we want to look up just Wendy's posts, we can tell the query to narrow down to her as a parent:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> session.query(BlogPost).filter(BlogPost.author==wendy).\
+ ... filter(BlogPost.keywords.any(keyword='firstpost')).all()
+ SELECT posts.id AS posts_id, posts.user_id AS posts_user_id, posts.headline AS posts_headline, posts.body AS posts_body
+ FROM posts
+ WHERE ? = posts.user_id AND (EXISTS (SELECT 1
+ FROM post_keywords, keywords
+ WHERE posts.id = post_keywords.post_id AND keywords.id = post_keywords.keyword_id AND keywords.keyword = ?))
+ [2, 'firstpost']
+ {stop}[BlogPost("Wendy's Blog Post", 'This is a test', <User('wendy','Wendy Williams', 'foobar')>)]
+
+Or we can use Wendy's own ``posts`` relation, which is a "dynamic" relation, to query straight from there:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> wendy.posts.filter(BlogPost.keywords.any(keyword='firstpost')).all()
+ SELECT posts.id AS posts_id, posts.user_id AS posts_user_id, posts.headline AS posts_headline, posts.body AS posts_body
+ FROM posts
+ WHERE ? = posts.user_id AND (EXISTS (SELECT 1
+ FROM post_keywords, keywords
+ WHERE posts.id = post_keywords.post_id AND keywords.id = post_keywords.keyword_id AND keywords.keyword = ?))
+ [2, 'firstpost']
+ {stop}[BlogPost("Wendy's Blog Post", 'This is a test', <User('wendy','Wendy Williams', 'foobar')>)]
+
+Further Reference
+==================
+
+Query Reference: :ref:`query_api_toplevel`
+
+Further information on mapping setups is in :ref:`datamapping_toplevel`.
+
+Further information on working with Sessions: :ref:`session_toplevel`.
+++ /dev/null
-"""loads Markdown files, converts each one to HTML and parses the HTML into an ElementTree structure.
-The collection of ElementTrees are further parsed to generate a table of contents structure, and are
- manipulated to replace various markdown-generated HTML with specific Mako tags before being written
- to Mako templates, which then re-access the table of contents structure at runtime.
-
-Much thanks to Alexey Shamrin, who came up with the original idea and did all the heavy Markdown/Elementtree
-lifting for this module.
-"""
-
-import sys, re, os
-from toc import TOCElement
-
-try:
- import xml.etree.ElementTree as et
-except ImportError:
- try:
- import elementtree.ElementTree as et
- except:
- raise "This module requires ElementTree to run (http://effbot.org/zone/element-index.htm)"
-
-import markdown
-
-def dump_tree(elem, stream):
- if elem.tag.startswith('MAKO:'):
- dump_mako_tag(elem, stream)
- else:
- if elem.tag != 'html':
- if elem.attrib:
- stream.write("<%s %s>" % (elem.tag, " ".join(["%s=%s" % (key, repr(val)) for key, val in elem.attrib.iteritems()])))
- else:
- stream.write("<%s>" % elem.tag)
- if elem.text:
- stream.write(elem.text)
- for child in elem:
- dump_tree(child, stream)
- if child.tail:
- stream.write(child.tail)
- if elem.tag != 'html':
- stream.write("</%s>" % elem.tag)
-
-def dump_mako_tag(elem, stream):
- tag = elem.tag[5:]
- params = ','.join(['%s=%s' % i for i in elem.items()])
- stream.write('<%%call expr="%s(%s)">' % (tag, params))
- if elem.text:
- stream.write(elem.text)
- for n in elem:
- dump_tree(n, stream)
- if n.tail:
- stream.write(n.tail)
- stream.write("</%call>")
-
-def create_toc(filename, tree, tocroot):
- title = [None]
- current = [tocroot]
- level = [0]
- def process(tree):
- while True:
- i = find_header_index(tree)
- if i is None:
- return
- node = tree[i]
- taglevel = int(node.tag[1])
- start, end = i, end_of_header(tree, taglevel, i+1)
- content = tree[start+1:end]
- description = node.text.strip()
- if title[0] is None:
- title[0] = description
- name = node.get('name')
- if name is None:
- name = description.split()[0].lower()
-
- taglevel = node.tag[1]
- if taglevel > level[0]:
- current[0] = TOCElement(filename, name, description, current[0])
- elif taglevel == level[0]:
- current[0] = TOCElement(filename, name, description, current[0].parent)
- else:
- current[0] = TOCElement(filename, name, description, current[0].parent.parent)
-
- level[0] = taglevel
-
- tag = et.Element("MAKO:formatting.section", path=repr(current[0].path), paged='paged', extension='extension', toc='toc')
- tag.text = (node.tail or "") + '\n'
- tag.tail = '\n'
- tag[:] = content
- tree[start:end] = [tag]
-
- process(tag)
-
- process(tree)
- return (title[0], tocroot.get_by_file(filename))
-
-def literal(s):
- return '"%s"' % s
-
-def index(parent, item):
- for n, i in enumerate(parent):
- if i is item:
- return n
-
-def find_header_index(tree):
- for i, node in enumerate(tree):
- if is_header(node):
- return i
-
-def is_header(node):
- t = node.tag
- return (isinstance(t, str) and len(t) == 2 and t[0] == 'h'
- and t[1] in '123456789')
-
-def end_of_header(tree, level, start):
- for i, node in enumerate(tree[start:]):
- if is_header(node) and int(node.tag[1]) <= level:
- return start + i
- return len(tree)
-
-def process_rel_href(tree):
- parent = get_parent_map(tree)
- for a in tree.findall('.//a'):
- m = re.match(r'(bold)?rel\:(.+)', a.get('href'))
- if m:
- (bold, path) = m.group(1,2)
- text = a.text
- if text == path:
- tag = et.Element("MAKO:nav.toclink", path=repr(path), extension='extension', paged='paged', toc='toc')
- else:
- tag = et.Element("MAKO:nav.toclink", path=repr(path), description=repr(text), extension='extension', paged='paged', toc='toc')
- a_parent = parent[a]
- if bold:
- bold = et.Element('strong')
- bold.tail = a.tail
- bold.append(tag)
- a_parent[index(a_parent, a)] = bold
- else:
- tag.tail = a.tail
- a_parent[index(a_parent, a)] = tag
-
-def replace_pre_with_mako(tree):
- def splice_code_tag(pre, text, code=None, title=None):
- doctest_directives = re.compile(r'#\s*doctest:\s*[+-]\w+(,[+-]\w+)*\s*$', re.M)
- text = re.sub(doctest_directives, '', text)
- # process '>>>' to have quotes around it, to work with the pygments
- # syntax highlighter which uses the tokenize module
- text = re.sub(r'>>> ', r'">>>" ', text)
-
- sqlre = re.compile(r'{sql}(.*?)\n((?:PRAGMA|BEGIN|SELECT|INSERT|DELETE|ROLLBACK|COMMIT|UPDATE|CREATE|DROP|PRAGMA|DESCRIBE).*?)\n\s*((?:{stop})|\n|$)', re.S)
- if sqlre.search(text) is not None:
- use_sliders = False
- else:
- use_sliders = True
-
- text = sqlre.sub(r"""${formatting.poplink()}\1<%call expr="formatting.codepopper()">\2</%call>""", text)
-
- #sqlre2 = re.compile(r'{opensql}(.*?\n)((?:PRAGMA|BEGIN|SELECT|INSERT|DELETE|UPDATE|ROLLBACK|COMMIT|CREATE|DROP).*?)\n\s*((?:{stop})|\n|$)', re.S)
- sqlre2 = re.compile(r'{opensql}(.*?)\n?((?:PRAGMA|BEGIN|SELECT|INSERT|DELETE|ROLLBACK|COMMIT|UPDATE|CREATE|DROP|PRAGMA|DESCRIBE).*?)\n\s*((?:{stop})|\n|$)', re.S)
- text = sqlre2.sub(r"\1<%call expr='formatting.poppedcode()' >\2</%call>\n\n", text)
-
- tag = et.Element("MAKO:formatting.code", extension='extension', paged='paged', toc='toc')
- if code:
- tag.attrib["syntaxtype"] = repr(code)
- if title:
- tag.attrib["title"] = repr(title)
- if use_sliders:
- tag.attrib['use_sliders'] = True
- tag.text = text
-
- pre_parent = parents[pre]
- tag.tail = pre.tail
- pre_parent[reverse_parent(pre_parent, pre)] = tag
-
- parents = get_parent_map(tree)
-
- for precode in tree.findall('.//pre/code'):
- reg = re.compile(r'\{(python|code|diagram)(?: title="(.*?)"){0,1}\}(.*)', re.S)
- m = reg.match(precode[0].text.lstrip())
- if m:
- code = m.group(1)
- title = m.group(2)
- text = m.group(3)
- text = re.sub(r'{(python|code|diagram).*?}(\n\s*)?', '', text)
- text = re.sub(r'\\\n', r'${r"\\\\" + "\\n\\n"}', text)
- splice_code_tag(parents[precode], text, code=code, title=title)
- elif precode.text.lstrip().startswith('>>> '):
- splice_code_tag(parents[precode], precode.text)
-
-def safety_code(tree):
- parents = get_parent_map(tree)
- for code in tree.findall('.//code'):
- tag = et.Element('%text')
- if parents[code].tag != 'pre':
- tag.attrib["filter"] = "h"
- tag.text = code.text
- code.append(tag)
- code.text = ""
-
-def reverse_parent(parent, item):
- for n, i in enumerate(parent):
- if i is item:
- return n
-
-def get_parent_map(tree):
- return dict([(c, p) for p in tree.getiterator() for c in p])
-
-def header(toc, title, filename):
- return \
-"""# -*- coding: utf-8 -*-
-<%%inherit file="content_layout.html"/>
-<%%page args="toc, extension, paged"/>
-<%%namespace name="formatting" file="formatting.html"/>
-<%%namespace name="nav" file="nav.html"/>
-<%%def name="title()">%s - %s</%%def>
-<%%!
- filename = '%s'
-%%>
-## This file is generated. Edit the .txt files instead of this one.
-""" % (toc.root.doctitle, title, filename)
-
-class utf8stream(object):
- def __init__(self, stream):
- self.stream = stream
- def write(self, str):
- self.stream.write(str.encode('utf8'))
-
-def parse_markdown_files(toc, files):
- for inname in files:
- infile = 'content/%s.txt' % inname
- if not os.access(infile, os.F_OK):
- continue
- html = markdown.markdown(file(infile).read())
- #foo = file('foo', 'w')
- #foo.write(html)
- tree = et.fromstring("<html>" + html + "</html>")
- (title, toc_element) = create_toc(inname, tree, toc)
- safety_code(tree)
- replace_pre_with_mako(tree)
- process_rel_href(tree)
- outname = 'output/%s.html' % inname
- print infile, '->', outname
- outfile = utf8stream(file(outname, 'w'))
- outfile.write(header(toc, title, inname))
- dump_tree(tree, outfile)
-
-
--- /dev/null
+Access
+======
+
+.. automodule:: sqlalchemy.databases.access
--- /dev/null
+Firebird
+========
+
+.. automodule:: sqlalchemy.databases.firebird
--- /dev/null
+.. _sqlalchemy.databases:
+
+sqlalchemy.databases
+====================
+
+.. toctree::
+ :glob:
+
+ access
+ firebird
+ informix
+ maxdb
+ mssql
+ mysql
+ oracle
+ postgres
+ sqlite
+ sybase
+
--- /dev/null
+Informix
+========
+
+.. automodule:: sqlalchemy.databases.informix
--- /dev/null
+MaxDB
+=====
+
+.. automodule:: sqlalchemy.databases.maxdb
--- /dev/null
+SQL Server
+==========
+
+.. automodule:: sqlalchemy.databases.mssql
--- /dev/null
+MySQL
+=====
+
+.. automodule:: sqlalchemy.databases.mysql
+
+MySQL Column Types
+------------------
+
+.. autoclass:: MSNumeric
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSDecimal
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSDouble
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSReal
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSFloat
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSInteger
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSBigInteger
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSMediumInteger
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSTinyInteger
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSSmallInteger
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSBit
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSDateTime
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSDate
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSTime
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSTimeStamp
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSYear
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSText
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSTinyText
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSMediumText
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSLongText
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSString
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSChar
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSNVarChar
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSNChar
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSVarBinary
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSBinary
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSBlob
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSTinyBlob
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSMediumBlob
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSLongBlob
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSEnum
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSSet
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: MSBoolean
+ :members: __init__
+ :show-inheritance:
+
--- /dev/null
+Oracle
+======
+
+.. automodule:: sqlalchemy.databases.oracle
--- /dev/null
+PostgreSQL
+==========
+
+.. automodule:: sqlalchemy.databases.postgres
--- /dev/null
+SQLite
+======
+
+.. automodule:: sqlalchemy.databases.sqlite
--- /dev/null
+Sybase
+======
+
+.. automodule:: sqlalchemy.databases.sybase
--- /dev/null
+.. _associationproxy:
+
+associationproxy
+================
+
+.. module:: sqlalchemy.ext.associationproxy
+
+:author: Mike Bayer and Jason Kirtland
+:version: 0.3.1 or greater
+
+``associationproxy`` is used to create a simplified, read/write view of a
+relationship. It can be used to cherry-pick fields from a collection of
+related objects or to greatly simplify access to associated objects in an
+association relationship.
+
+Simplifying Relations
+---------------------
+
+Consider this many-to-many mapping::
+
+ users_table = Table('users', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('name', String(64)),
+ )
+
+ keywords_table = Table('keywords', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('keyword', String(64))
+ )
+
+ userkeywords_table = Table('userkeywords', metadata,
+ Column('user_id', Integer, ForeignKey("users.id"),
+ primary_key=True),
+ Column('keyword_id', Integer, ForeignKey("keywords.id"),
+ primary_key=True)
+ )
+
+ class User(object):
+ def __init__(self, name):
+ self.name = name
+
+ class Keyword(object):
+ def __init__(self, keyword):
+ self.keyword = keyword
+
+ mapper(User, users_table, properties={
+ 'kw': relation(Keyword, secondary=userkeywords_table)
+ })
+ mapper(Keyword, keywords_table)
+
+Above are three simple tables, modeling users, keywords and a many-to-many
+relationship between the two. These ``Keyword`` objects are little more
+than containers for a name, and accessing them via the relation is
+awkward::
+
+ user = User('jek')
+ user.kw.append(Keyword('cheese inspector'))
+ print user.kw
+ # [<__main__.Keyword object at 0xb791ea0c>]
+ print user.kw[0].keyword
+ # 'cheese inspector'
+ print [keyword.keyword for keyword in user.kw]
+ # ['cheese inspector']
+
+With ``association_proxy`` you have a "view" of the relation that contains
+just the ``.keyword`` of the related objects. The proxy is a Python
+property, and unlike the mapper relation, is defined in your class::
+
+ from sqlalchemy.ext.associationproxy import association_proxy
+
+ class User(object):
+ def __init__(self, name):
+ self.name = name
+
+ # proxy the 'keyword' attribute from the 'kw' relation
+ keywords = association_proxy('kw', 'keyword')
+
+ # ...
+ >>> user.kw
+ [<__main__.Keyword object at 0xb791ea0c>]
+ >>> user.keywords
+ ['cheese inspector']
+ >>> user.keywords.append('snack ninja')
+ >>> user.keywords
+ ['cheese inspector', 'snack ninja']
+ >>> user.kw
+ [<__main__.Keyword object at 0x9272a4c>, <__main__.Keyword object at 0xb7b396ec>]
+
+The proxy is read/write. New associated objects are created on demand when
+values are added to the proxy, and modifying or removing an entry through
+the proxy also affects the underlying collection.
+
+ - The association proxy property is backed by a mapper-defined relation,
+ either a collection or scalar.
+
+ - You can access and modify both the proxy and the backing
+ relation. Changes in one are immediate in the other.
+
+ - The proxy acts like the type of the underlying collection. A list gets a
+ list-like proxy, a dict a dict-like proxy, and so on.
+
+ - Multiple proxies for the same relation are fine.
+
+ - Proxies are lazy, and won't trigger a load of the backing relation until
+ they are accessed.
+
+ - The relation is inspected to determine the type of the related objects.
+
+ - To construct new instances, the type is called with the value being
+ assigned, or key and value for dicts.
+
+ - A ``creator`` function can be used to create instances instead.
+
+Above, the ``Keyword.__init__`` takes a single argument ``keyword``, which
+maps conveniently to the value being set through the proxy. A ``creator``
+function could have been used instead if more flexibility was required.
+
+Because the proxies are backed by a regular relation collection, all of the
+usual hooks and patterns for using collections are still in effect. The
+most convenient behavior is the automatic setting of "parent"-type
+relationships on assignment. In the example above, nothing special had to
+be done to associate the Keyword to the User. Simply adding it to the
+collection is sufficient.
+
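+Removal works in reverse as well: deleting an entry through the proxy
+removes the corresponding object from the backing collection. A minimal
+sketch, continuing the hypothetical session above::
+
+    >>> user.keywords.remove('snack ninja')
+    >>> user.kw
+    [<__main__.Keyword object at 0x9272a4c>]
+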
+Simplifying Association Object Relations
+----------------------------------------
+
+Association proxies are also useful for keeping ``association objects`` out
+of the way during regular use. For example, the ``userkeywords`` table
+might have a bunch of auditing columns that need to get updated when changes
+are made: columns that are updated but seldom, if ever, accessed in your
+application. A proxy can provide a very natural access pattern for the
+relation.
+
+.. sourcecode:: python
+
+    from datetime import datetime
+
+    from sqlalchemy.ext.associationproxy import association_proxy
+
+ # users_table and keywords_table tables as above, then:
+
+ def get_current_uid():
+ """Return the uid of the current user."""
+ return 1 # hardcoded for this example
+
+ userkeywords_table = Table('userkeywords', metadata,
+ Column('user_id', Integer, ForeignKey("users.id"), primary_key=True),
+ Column('keyword_id', Integer, ForeignKey("keywords.id"), primary_key=True),
+ # add some auditing columns
+ Column('updated_at', DateTime, default=datetime.now),
+ Column('updated_by', Integer, default=get_current_uid, onupdate=get_current_uid),
+ )
+
+ def _create_uk_by_keyword(keyword):
+ """A creator function."""
+ return UserKeyword(keyword=keyword)
+
+ class User(object):
+ def __init__(self, name):
+ self.name = name
+ keywords = association_proxy('user_keywords', 'keyword', creator=_create_uk_by_keyword)
+
+ class Keyword(object):
+ def __init__(self, keyword):
+ self.keyword = keyword
+ def __repr__(self):
+ return 'Keyword(%s)' % repr(self.keyword)
+
+ class UserKeyword(object):
+ def __init__(self, user=None, keyword=None):
+ self.user = user
+ self.keyword = keyword
+
+ mapper(User, users_table)
+ mapper(Keyword, keywords_table)
+ mapper(UserKeyword, userkeywords_table, properties={
+ 'user': relation(User, backref='user_keywords'),
+ 'keyword': relation(Keyword),
+ })
+
+ user = User('log')
+ kw1 = Keyword('new_from_blammo')
+
+ # Adding a Keyword requires creating a UserKeyword association object
+ user.user_keywords.append(UserKeyword(user, kw1))
+
+ # And accessing Keywords requires traversing UserKeywords
+ print user.user_keywords[0]
+ # <__main__.UserKeyword object at 0xb79bbbec>
+
+ print user.user_keywords[0].keyword
+ # Keyword('new_from_blammo')
+
+ # Lots of work.
+
+ # It's much easier to go through the association proxy!
+ for kw in (Keyword('its_big'), Keyword('its_heavy'), Keyword('its_wood')):
+ user.keywords.append(kw)
+
+ print user.keywords
+ # [Keyword('new_from_blammo'), Keyword('its_big'), Keyword('its_heavy'), Keyword('its_wood')]
+
+
+Building Complex Views
+----------------------
+
+.. sourcecode:: python
+
+    stocks_table = Table("stocks", meta,
+ Column('symbol', String(10), primary_key=True),
+ Column('description', String(100), nullable=False),
+ Column('last_price', Numeric)
+ )
+
+    brokers_table = Table("brokers", meta,
+        Column('id', Integer, primary_key=True),
+ Column('name', String(100), nullable=False)
+ )
+
+    holdings_table = Table("holdings", meta,
+ Column('broker_id', Integer, ForeignKey('brokers.id'), primary_key=True),
+ Column('symbol', String(10), ForeignKey('stocks.symbol'), primary_key=True),
+ Column('shares', Integer)
+ )
+
+Above are three tables, modeling stocks, their brokers and the number of
+shares of a stock held by each broker. This situation is quite different
+from the association example above. ``shares`` is a *property of the
+relation*, an important one that we need to use all the time.
+
+For this example, it would be very convenient if ``Broker`` objects had a
+dictionary collection that mapped ``Stock`` instances to the shares held for
+each. That's easy::
+
+ from sqlalchemy.ext.associationproxy import association_proxy
+ from sqlalchemy.orm.collections import attribute_mapped_collection
+
+ def _create_holding(stock, shares):
+ """A creator function, constructs Holdings from Stock and share quantity."""
+ return Holding(stock=stock, shares=shares)
+
+ class Broker(object):
+ def __init__(self, name):
+ self.name = name
+
+ holdings = association_proxy('by_stock', 'shares', creator=_create_holding)
+
+ class Stock(object):
+ def __init__(self, symbol, description=None):
+ self.symbol = symbol
+ self.description = description
+ self.last_price = 0
+
+ class Holding(object):
+ def __init__(self, broker=None, stock=None, shares=0):
+ self.broker = broker
+ self.stock = stock
+ self.shares = shares
+
+ mapper(Stock, stocks_table)
+ mapper(Broker, brokers_table, properties={
+ 'by_stock': relation(Holding,
+ collection_class=attribute_mapped_collection('stock'))
+ })
+ mapper(Holding, holdings_table, properties={
+ 'stock': relation(Stock),
+ 'broker': relation(Broker)
+ })
+
+Above, we've set up the ``by_stock`` relation collection to act as a
+dictionary, using the ``.stock`` property of each Holding as a key.
+
+Populating and accessing that dictionary manually is slightly inconvenient
+because of the complexity of the ``Holding`` association object::
+
+ stock = Stock('ZZK')
+ broker = Broker('paj')
+
+    broker.by_stock[stock] = Holding(broker, stock, 10)
+    print broker.by_stock[stock].shares
+ # 10
+
+The ``holdings`` proxy we've added to the ``Broker`` class hides the details
+of the ``Holding`` while also giving access to ``.shares``::
+
+ for stock in (Stock('JEK'), Stock('STPZ')):
+ broker.holdings[stock] = 123
+
+ for stock, shares in broker.holdings.items():
+ print stock, shares
+
+    # let's take a peek at that holdings_table after committing changes to the db
+    print list(holdings_table.select().execute())
+    # [(1, 'ZZK', 10), (1, 'JEK', 123), (1, 'STPZ', 123)]
+
+Further examples can be found in the ``examples/`` directory in the
+SQLAlchemy distribution.
+
+The ``association_proxy`` convenience function is not present in SQLAlchemy
+versions 0.3.1 through 0.3.7; in those versions, instantiate the class directly::
+
+ from sqlalchemy.ext.associationproxy import AssociationProxy
+
+ class Article(object):
+ keywords = AssociationProxy('keyword_associations', 'keyword')
+
+API
+---
+
+.. autofunction:: association_proxy
+
+.. autoclass:: AssociationProxy
+ :members:
+ :undoc-members:
\ No newline at end of file
--- /dev/null
+declarative
+===========
+
+:author: Mike Bayer
+:version: 0.4.4 or greater
+
+``declarative`` intends to be a fully featured replacement for the very old ``activemapper`` extension. Its goal is to redefine the organization of class, ``Table``, and ``mapper()`` constructs such that they can all be defined "at once" underneath a class declaration. Unlike ``activemapper``, it does not redefine normal SQLAlchemy configurational semantics - regular ``Column``, ``relation()`` and other schema or ORM constructs are used in almost all cases.
+
+``declarative`` is a so-called "micro declarative layer"; it does not generate table or column names and requires a configuration nearly as verbose as that of plain tables and mappers. As an alternative, the `Elixir <http://elixir.ematia.de/>`_ project is a full community-supported declarative layer for SQLAlchemy, and is recommended for its active-record-like semantics, its convention-based configuration, and its plugin capabilities.
+
+SQLAlchemy object-relational configuration involves the usage of Table, mapper(), and class objects to define the three areas of configuration.
+declarative moves these three types of configuration underneath the individual mapped class. Regular SQLAlchemy schema and ORM constructs are used
+in most cases:
+
+.. sourcecode:: python+sql
+
+ from sqlalchemy.ext.declarative import declarative_base
+
+ Base = declarative_base()
+
+ class SomeClass(Base):
+ __tablename__ = 'some_table'
+ id = Column('id', Integer, primary_key=True)
+ name = Column('name', String(50))
+
+Above, the ``declarative_base`` callable produces a new base class from which all mapped classes inherit. When the class definition is
+completed, a new ``Table`` and ``mapper()`` have been generated, accessible via the ``__table__`` and ``__mapper__`` attributes on the
+``SomeClass`` class.
+
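+For instance, a quick sketch of what was just generated, using the names
+from the example above:
+
+.. sourcecode:: python+sql
+
+    >>> SomeClass.__table__.name
+    'some_table'
+    >>> SomeClass.__mapper__.class_ is SomeClass
+    True
+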
+You may omit the names from the Column definitions. Declarative will fill
+them in for you:
+
+.. sourcecode:: python+sql
+
+ class SomeClass(Base):
+ __tablename__ = 'some_table'
+ id = Column(Integer, primary_key=True)
+ name = Column(String(50))
+
+Attributes may be added to the class after its construction, and they will be added to the underlying ``Table`` and ``mapper()`` definitions as
+appropriate:
+
+.. sourcecode:: python+sql
+
+ SomeClass.data = Column('data', Unicode)
+ SomeClass.related = relation(RelatedInfo)
+
+Classes which are mapped explicitly using ``mapper()`` can interact freely with declarative classes.
+
+The ``declarative_base`` base class contains a ``MetaData`` object where newly defined ``Table`` objects are collected. This is accessed via the ``metadata`` class-level accessor, so to create tables we can say:
+
+.. sourcecode:: python+sql
+
+ engine = create_engine('sqlite://')
+ Base.metadata.create_all(engine)
+
+The ``Engine`` created above may also be directly associated with the declarative base class using the ``bind`` keyword argument, where it will be associated with the underlying ``MetaData`` object and allow SQL operations involving that metadata and its tables to make use of that engine automatically:
+
+.. sourcecode:: python+sql
+
+ Base = declarative_base(bind=create_engine('sqlite://'))
+
+Or, as ``MetaData`` allows, at any time using the ``bind`` attribute:
+
+.. sourcecode:: python+sql
+
+ Base.metadata.bind = create_engine('sqlite://')
+
+The ``declarative_base`` can also receive a pre-created ``MetaData`` object, which allows a declarative setup to be associated with an already existing traditional collection of ``Table`` objects:
+
+.. sourcecode:: python+sql
+
+ mymetadata = MetaData()
+ Base = declarative_base(metadata=mymetadata)
+
+Relations to other classes are done in the usual way, with the added feature that the class specified to ``relation()`` may be a string name. The
+"class registry" associated with ``Base`` is used at mapper compilation time to resolve the name into the actual class object, which is expected to
+have been defined once the mapper configuration is used:
+
+.. sourcecode:: python+sql
+
+ class User(Base):
+ __tablename__ = 'users'
+
+ id = Column('id', Integer, primary_key=True)
+ name = Column('name', String(50))
+ addresses = relation("Address", backref="user")
+
+ class Address(Base):
+ __tablename__ = 'addresses'
+
+ id = Column('id', Integer, primary_key=True)
+ email = Column('email', String(50))
+ user_id = Column('user_id', Integer, ForeignKey('users.id'))
+
+Column constructs, since they are just that, are immediately usable, as below where we define a primary join condition on the ``Address`` class
+using them:
+
+.. sourcecode:: python+sql
+
+    class Address(Base):
+ __tablename__ = 'addresses'
+
+ id = Column('id', Integer, primary_key=True)
+ email = Column('email', String(50))
+ user_id = Column('user_id', Integer, ForeignKey('users.id'))
+ user = relation(User, primaryjoin=user_id==User.id)
+
+In addition to the main argument for ``relation``, other arguments
+which depend upon the columns present on an as-yet undefined class
+may also be specified as strings. These strings are evaluated as
+Python expressions. The full namespace available within this
+evaluation includes all classes mapped for this declarative base,
+as well as the contents of the ``sqlalchemy`` package, including
+expression functions like ``desc`` and ``func``:
+
+.. sourcecode:: python+sql
+
+ class User(Base):
+ # ....
+ addresses = relation("Address", order_by="desc(Address.email)",
+ primaryjoin="Address.user_id==User.id")
+
+As an alternative to string-based attributes, attributes may also be
+defined after all classes have been created. Just add them to the target
+class after the fact:
+
+.. sourcecode:: python+sql
+
+ User.addresses = relation(Address, primaryjoin=Address.user_id==User.id)
+
+Synonyms are one area where ``declarative`` needs to slightly change the usual SQLAlchemy configurational syntax. To define a
+getter/setter which proxies to an underlying attribute, use ``synonym`` with the ``descriptor`` argument:
+
+.. sourcecode:: python+sql
+
+ class MyClass(Base):
+ __tablename__ = 'sometable'
+
+ _attr = Column('attr', String)
+
+ def _get_attr(self):
+            return self._attr
+        def _set_attr(self, attr):
+            self._attr = attr
+ attr = synonym('_attr', descriptor=property(_get_attr, _set_attr))
+
+The above synonym is then usable as an instance attribute as well as a class-level expression construct:
+
+.. sourcecode:: python+sql
+
+ x = MyClass()
+ x.attr = "some value"
+ session.query(MyClass).filter(MyClass.attr == 'some other value').all()
+
+The ``synonym_for`` decorator can accomplish the same task:
+
+.. sourcecode:: python+sql
+
+ class MyClass(Base):
+ __tablename__ = 'sometable'
+
+ _attr = Column('attr', String)
+
+ @synonym_for('_attr')
+ @property
+ def attr(self):
+            return self._attr
+
+Similarly, ``comparable_using`` is a front end for the ``comparable_property`` ORM function:
+
+.. sourcecode:: python+sql
+
+ class MyClass(Base):
+ __tablename__ = 'sometable'
+
+ name = Column('name', String)
+
+ @comparable_using(MyUpperCaseComparator)
+ @property
+ def uc_name(self):
+ return self.name.upper()
+
+As an alternative to ``__tablename__``, a direct ``Table`` construct may be used. The ``Column`` objects, which in this case require their names, will be added to the mapping just like a regular mapping to a table:
+
+.. sourcecode:: python+sql
+
+ class MyClass(Base):
+ __table__ = Table('my_table', Base.metadata,
+ Column('id', Integer, primary_key=True),
+ Column('name', String(50))
+ )
+
+Other table-based attributes include ``__table_args__``, which is
+either a dictionary as in:
+
+.. sourcecode:: python+sql
+
+    class MyClass(Base):
+ __tablename__ = 'sometable'
+ __table_args__ = {'mysql_engine':'InnoDB'}
+
+or a dictionary-containing tuple in the form
+``(arg1, arg2, ..., {kwarg1:value, ...})``, as in:
+
+.. sourcecode:: python+sql
+
+    class MyClass(Base):
+ __tablename__ = 'sometable'
+ __table_args__ = (ForeignKeyConstraint(['id'], ['remote_table.id']), {'autoload':True})
+
+Mapper arguments are specified using the ``__mapper_args__`` class variable. Note that the column objects declared on the class are immediately
+usable, as in this joined-table inheritance example:
+
+.. sourcecode:: python+sql
+
+ class Person(Base):
+ __tablename__ = 'people'
+ id = Column('id', Integer, primary_key=True)
+ discriminator = Column('type', String(50))
+ __mapper_args__ = {'polymorphic_on':discriminator}
+
+ class Engineer(Person):
+ __tablename__ = 'engineers'
+ __mapper_args__ = {'polymorphic_identity':'engineer'}
+ id = Column('id', Integer, ForeignKey('people.id'), primary_key=True)
+ primary_language = Column('primary_language', String(50))
+
+For single-table inheritance, the ``__tablename__`` and ``__table__`` class variables are optional on a class when the class inherits from another
+mapped class.
+
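+For example, a hypothetical single-table variant of the above, where
+``Manager`` rows live in the ``people`` table and no ``__tablename__``
+is given:
+
+.. sourcecode:: python+sql
+
+    class Manager(Person):
+        __mapper_args__ = {'polymorphic_identity': 'manager'}
+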
+As a convenience feature, the ``declarative_base()`` sets a default constructor on classes which takes keyword arguments, and assigns them to the
+named attributes:
+
+.. sourcecode:: python+sql
+
+ e = Engineer(primary_language='python')
+
+Note that ``declarative`` has no integration built in with sessions, and is only intended as an optional syntax for the regular usage of mappers
+and Table objects. A typical application setup using ``scoped_session`` might look like:
+
+.. sourcecode:: python+sql
+
+ engine = create_engine('postgres://scott:tiger@localhost/test')
+ Session = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=engine))
+ Base = declarative_base()
+
+Mapped instances then make use of ``Session`` in the usual way.
+
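+For example, a minimal sketch using the configuration above together with
+the earlier ``SomeClass`` mapping:
+
+.. sourcecode:: python+sql
+
+    session = Session()
+    session.add(SomeClass(name='sa'))   # uses the default constructor
+    session.commit()
+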
+.. automodule:: sqlalchemy.ext.declarative
+ :members:
+ :undoc-members:
--- /dev/null
+.. _plugins:
+.. _sqlalchemy.ext:
+
+sqlalchemy.ext
+==============
+
+SQLAlchemy has a variety of extensions available which provide extra
+functionality to SA, either via explicit usage or by augmenting the
+core behavior.
+
+.. toctree::
+ :glob:
+
+ declarative
+ associationproxy
+ orderinglist
+ serializer
+ sqlsoup
+
--- /dev/null
+orderinglist
+============
+
+.. module:: sqlalchemy.ext.orderinglist
+
+:author: Jason Kirtland
+
+``orderinglist`` is a helper for mutable ordered relations. It will intercept
+list operations performed on a relation collection and automatically
+synchronize changes in list position with an attribute on the related objects.
+(See :ref:`advdatamapping_entitycollections` for more information on the general pattern.)
+
+Example: Two tables that store slides in a presentation. Each slide
+has a number of bullet points, displayed in order by the 'position'
+column on the bullets table. These bullets can be inserted and re-ordered
+by your end users, and you need to update the 'position' column of all
+affected rows when changes are made.
+
+.. sourcecode:: python+sql
+
+ slides_table = Table('Slides', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('name', String))
+
+ bullets_table = Table('Bullets', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('slide_id', Integer, ForeignKey('Slides.id')),
+ Column('position', Integer),
+ Column('text', String))
+
+ class Slide(object):
+ pass
+ class Bullet(object):
+ pass
+
+ mapper(Slide, slides_table, properties={
+ 'bullets': relation(Bullet, order_by=[bullets_table.c.position])
+ })
+ mapper(Bullet, bullets_table)
+
+The standard relation mapping will produce a list-like attribute on each Slide
+containing all related Bullets, but coping with changes in ordering is totally
+your responsibility. If you insert a Bullet into that list, there is no
+magic: it won't have a position attribute unless you assign it one, and
+you'll need to manually renumber all the subsequent Bullets in the list to
+accommodate the insert.
+
+An ``orderinglist`` can automate this and manage the 'position' attribute on all
+related bullets for you.
+
+.. sourcecode:: python+sql
+
+ mapper(Slide, slides_table, properties={
+ 'bullets': relation(Bullet,
+ collection_class=ordering_list('position'),
+ order_by=[bullets_table.c.position])
+ })
+ mapper(Bullet, bullets_table)
+
+    >>> s = Slide()
+    >>> s.bullets.append(Bullet())
+    >>> s.bullets.append(Bullet())
+    >>> s.bullets[1].position
+    1
+    >>> s.bullets.insert(1, Bullet())
+    >>> s.bullets[2].position
+    2
+
+Use the ``ordering_list`` function to set up the ``collection_class`` on relations
+(as in the mapper example above). This implementation depends on the list
+starting in the proper order, so be SURE to put an order_by on your relation.
+
+``ordering_list`` takes the name of the related object's ordering attribute as
+an argument. By default, the zero-based integer index of the object's
+position in the ``ordering_list`` is synchronized with the ordering attribute:
+index 0 will get position 0, index 1 position 1, etc. To start numbering at 1
+or some other integer, provide ``count_from=1``.
+
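+For example, to number bullet positions starting at 1, a sketch of the
+mapper configuration above becomes:
+
+.. sourcecode:: python+sql
+
+    mapper(Slide, slides_table, properties={
+        'bullets': relation(Bullet,
+                      collection_class=ordering_list('position', count_from=1),
+                      order_by=[bullets_table.c.position])
+    })
+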
+Ordering values are not limited to incrementing integers. Almost any scheme
+can be implemented by supplying a custom ``ordering_func`` that maps a Python
+list index to any value you require. See the API documentation below for more
+information, and also check out the unit tests for examples of stepped
+numbering, alphabetical and Fibonacci numbering.
+
+.. automodule:: sqlalchemy.ext.orderinglist
+ :members:
+ :undoc-members:
--- /dev/null
+serializer
+==========
+
+:author: Mike Bayer
+
+Serializer/Deserializer objects for usage with SQLAlchemy structures.
+
+Any SQLAlchemy structure, including Tables, Columns, expressions, mappers,
+Query objects etc. can be serialized in a minimally-sized format,
+and deserialized when given a ``MetaData`` and optional ``ScopedSession`` object
+to use as context on the way out.
+
+Usage is nearly the same as that of the standard Python pickle module:
+
+.. sourcecode:: python+sql
+
+ from sqlalchemy.ext.serializer import loads, dumps
+ metadata = MetaData(bind=some_engine)
+ Session = scoped_session(sessionmaker())
+
+ # ... define mappers
+
+ query = Session.query(MyClass).filter(MyClass.somedata=='foo').order_by(MyClass.sortkey)
+
+ # pickle the query
+ serialized = dumps(query)
+
+ # unpickle. Pass in metadata + scoped_session
+ query2 = loads(serialized, metadata, Session)
+
+ print query2.all()
+
+The same restrictions that apply when using raw pickle apply here as well:
+mapped classes must themselves be pickleable, meaning they are importable
+from a module-level namespace.
+
+Note that instances of user-defined classes do not require this extension
+in order to be pickled; these contain no references to engines, sessions
+or expression constructs in the typical case and can be serialized directly.
+This module is specifically for ORM and expression constructs.
+
+.. automodule:: sqlalchemy.ext.serializer
+ :members:
+ :undoc-members:
--- /dev/null
+SqlSoup
+=======
+
+:author: Jonathan Ellis
+
+SqlSoup creates mapped classes on the fly from tables, which are automatically reflected from the database based on name. It is essentially a nicer version of the "row data gateway" pattern.
+
+.. sourcecode:: python+sql
+
+ >>> from sqlalchemy.ext.sqlsoup import SqlSoup
+    >>> db = SqlSoup('sqlite:///')
+
+ >>> db.users.select(order_by=[db.users.c.name])
+ [MappedUsers(name='Bhargan Basepair',email='basepair@example.edu',password='basepair',classname=None,admin=1),
+ MappedUsers(name='Joe Student',email='student@example.edu',password='student',classname=None,admin=0)]
+
+Full SqlSoup documentation is on the `SQLAlchemy Wiki <http://www.sqlalchemy.org/trac/wiki/SqlSoup>`_.
+
+.. automodule:: sqlalchemy.ext.sqlsoup
+ :members:
+ :undoc-members:
--- /dev/null
+.. _api_reference_toplevel:
+
+API Reference
+=============
+
+.. toctree::
+ :maxdepth: 3
+
+ sqlalchemy/index
+ orm/index
+ dialects/index
+ ext/index
+
--- /dev/null
+.. _sqlalchemy_orm_toplevel:
+
+sqlalchemy.orm
+==============
+
+.. toctree::
+ :glob:
+
+ mapping
+ query
+ sessions
+ interfaces
+ utilities
+
+
--- /dev/null
+Interfaces
+==========
+
+.. automodule:: sqlalchemy.orm.interfaces
+ :members: AttributeExtension, InstrumentationManager, MapperExtension, PropComparator, SessionExtension
+ :undoc-members:
+
\ No newline at end of file
--- /dev/null
+Class Mapping
+=============
+
+.. module:: sqlalchemy.orm
+
+Defining Mappings
+-----------------
+
+Python classes are mapped to the database using the :func:`mapper` function.
+
+.. autofunction:: mapper
+
+Mapper Properties
+-----------------
+
+A basic mapping of a class will simply make the columns of the
+database table or selectable available as attributes on the class.
+**Mapper properties** allow you to customize and add additional
+properties to your classes, for example making the results one-to-many
+join available as a Python list of :func:`related <relation>` objects.
+
+Mapper properties are most commonly included in the :func:`mapper`
+call::
+
+ mapper(Parent, properties={
+ 'children': relation(Children)
+    })
+
+.. autofunction:: backref
+
+.. autofunction:: column_property
+
+.. autofunction:: comparable_property
+
+.. autofunction:: composite
+
+.. autofunction:: deferred
+
+.. autofunction:: dynamic_loader
+
+.. autofunction:: relation
+
+.. autofunction:: synonym
+
+Decorators
+----------
+
+.. autofunction:: reconstructor
+
+.. autofunction:: validates
+
+Utilities
+---------
+
+.. autofunction:: object_mapper
+
+.. autofunction:: class_mapper
+
+.. autofunction:: compile_mappers
+
+.. autofunction:: clear_mappers
+
+Internals
+---------
+
+.. autoclass:: sqlalchemy.orm.mapper.Mapper
+ :members:
--- /dev/null
+.. _query_api_toplevel:
+
+Querying
+========
+
+.. module:: sqlalchemy.orm
+
+The Query Object
+----------------
+
+:class:`~sqlalchemy.orm.query.Query` is produced in terms of a given :class:`~sqlalchemy.orm.session.Session`, using the :func:`~sqlalchemy.orm.session.Session.query` function::
+
+ q = session.query(SomeMappedClass)
+
+Following is the full interface for the :class:`Query` object.
+
+.. autoclass:: sqlalchemy.orm.query.Query
+ :members:
+ :undoc-members:
+
+ORM-Specific Query Constructs
+-----------------------------
+
+.. autoclass:: aliased
+
+.. autofunction:: join
+
+.. autofunction:: outerjoin
+
+Query Options
+-------------
+
+These options are passed to ``query.options()`` to affect the behavior of loading.
+
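+For example, a sketch assuming a mapped ``User`` class with an
+``addresses`` relation::
+
+    session.query(User).options(eagerload('addresses')).all()
+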
+.. autofunction:: contains_eager
+
+.. autofunction:: defer
+
+.. autofunction:: eagerload
+
+.. autofunction:: eagerload_all
+
+.. autofunction:: extension
+
+.. autofunction:: lazyload
+
+.. autofunction:: undefer
+
--- /dev/null
+Sessions
+========
+
+.. module:: sqlalchemy.orm
+
+
+.. autofunction:: create_session
+
+.. autofunction:: scoped_session
+
+.. autofunction:: sessionmaker
+
+.. autoclass:: sqlalchemy.orm.session.Session
+ :members:
+
+.. autoclass:: sqlalchemy.orm.scoping.ScopedSession
+ :members:
--- /dev/null
+Utilities
+=========
+
+.. automodule:: sqlalchemy.orm.util
+ :members: identity_key, Validator, with_parent
+ :undoc-members:
--- /dev/null
+Connections
+===========
+
+Creating Engines
+----------------
+
+.. autofunction:: sqlalchemy.create_engine
+
+.. autofunction:: sqlalchemy.engine_from_config
+
+.. autoclass:: sqlalchemy.engine.url.URL
+ :members:
+
+Connectables
+------------
+
+.. autoclass:: sqlalchemy.engine.base.Engine
+ :members:
+
+.. autoclass:: sqlalchemy.engine.base.Connection
+ :members:
+
+.. autoclass:: sqlalchemy.engine.base.Connectable
+ :members:
+
+Result Objects
+--------------
+
+.. autoclass:: sqlalchemy.engine.base.ResultProxy
+ :members:
+
+.. autoclass:: sqlalchemy.engine.base.RowProxy
+ :members:
+
+Transactions
+------------
+
+.. autoclass:: sqlalchemy.engine.base.Transaction
+ :members:
+ :undoc-members:
+
+Internals
+---------
+
+.. autofunction:: sqlalchemy.engine.base.connection_memoize
+
+.. autoclass:: sqlalchemy.engine.base.Dialect
+ :members:
+
+.. autoclass:: sqlalchemy.engine.default.DefaultDialect
+ :members:
+ :show-inheritance:
+
+.. autoclass:: sqlalchemy.engine.default.DefaultExecutionContext
+ :members:
+ :show-inheritance:
+
+.. autoclass:: sqlalchemy.engine.base.DefaultRunner
+ :members:
+ :show-inheritance:
+
+.. autoclass:: sqlalchemy.engine.base.ExecutionContext
+ :members:
+
+.. autoclass:: sqlalchemy.engine.base.SchemaIterator
+ :members:
+ :show-inheritance:
+
\ No newline at end of file
--- /dev/null
+SQL Statements and Expressions
+==============================
+
+.. module:: sqlalchemy.sql.expression
+
+Functions
+---------
+
+The expression package uses functions to construct SQL expressions. The return value of each function is an object instance which is a subclass of :class:`~sqlalchemy.sql.expression.ClauseElement`.
+
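+For example, a small sketch (output approximate), where ``users`` is assumed
+to be a ``Table`` with ``id`` and ``name`` columns::
+
+    >>> print select([users.c.name], and_(users.c.id > 5, users.c.name != None))
+    SELECT users.name
+    FROM users
+    WHERE users.id > :id_1 AND users.name IS NOT NULL
+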
+.. autofunction:: alias
+
+.. autofunction:: and_
+
+.. autofunction:: asc
+
+.. autofunction:: between
+
+.. autofunction:: bindparam
+
+.. autofunction:: case
+
+.. autofunction:: cast
+
+.. autofunction:: column
+
+.. autofunction:: collate
+
+.. autofunction:: delete
+
+.. autofunction:: desc
+
+.. autofunction:: distinct
+
+.. autofunction:: except_
+
+.. autofunction:: except_all
+
+.. autofunction:: exists
+
+.. autofunction:: extract
+
+.. attribute:: func
+
+ Generate SQL function expressions.
+
+ ``func`` is a special object instance which generates SQL functions based on name-based attributes, e.g.::
+
+ >>> print func.count(1)
+ count(:param_1)
+
+    Any name can be given to ``func``. If the function name is unknown to SQLAlchemy, it will be rendered exactly as is. For common SQL functions which SQLAlchemy is aware of, the name may be interpreted as a *generic function* which will be compiled appropriately to the target database::
+
+ >>> print func.current_timestamp()
+ CURRENT_TIMESTAMP
+
+ To call functions which are present in dot-separated packages, specify them in the same manner::
+
+ >>> print func.stats.yield_curve(5, 10)
+ stats.yield_curve(:yield_curve_1, :yield_curve_2)
+
+    SQLAlchemy can be made aware of the return type of functions to enable type-specific lexical and result-based behavior. For example, to ensure that a string-based function returns a Unicode value and is similarly treated as a string in expressions, specify :class:`~sqlalchemy.types.Unicode` as the type::
+
+ >>> print func.my_string(u'hi', type_=Unicode) + ' ' + \
+ ... func.my_string(u'there', type_=Unicode)
+ my_string(:my_string_1) || :my_string_2 || my_string(:my_string_3)
+
+ Functions which are interpreted as "generic" functions know how to calculate their return type automatically. For a listing of known generic functions, see :ref:`generic_functions`.
+
+.. autofunction:: insert
+
+.. autofunction:: intersect
+
+.. autofunction:: intersect_all
+
+.. autofunction:: join
+
+.. autofunction:: label
+
+.. autofunction:: literal
+
+.. autofunction:: literal_column
+
+.. autofunction:: not_
+
+.. autofunction:: null
+
+.. autofunction:: or_
+
+.. autofunction:: outparam
+
+.. autofunction:: outerjoin
+
+.. autofunction:: select
+
+.. autofunction:: subquery
+
+.. autofunction:: table
+
+.. autofunction:: text
+
+.. autofunction:: union
+
+.. autofunction:: union_all
+
+.. autofunction:: update
+
+Classes
+-------
+
+.. autoclass:: Alias
+ :members:
+ :show-inheritance:
+
+.. autoclass:: ClauseElement
+ :members:
+ :show-inheritance:
+
+.. autoclass:: ColumnClause
+ :members:
+ :show-inheritance:
+
+.. autoclass:: ColumnCollection
+ :members:
+ :show-inheritance:
+
+.. autoclass:: ColumnElement
+ :members:
+ :show-inheritance:
+
+.. autoclass:: _CompareMixin
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. autoclass:: ColumnOperators
+ :members:
+ :undoc-members:
+ :inherited-members:
+
+.. autoclass:: CompoundSelect
+ :members:
+ :show-inheritance:
+
+.. autoclass:: Delete
+ :members:
+ :show-inheritance:
+
+.. autoclass:: FromClause
+ :members:
+ :show-inheritance:
+
+.. autoclass:: Insert
+ :members:
+ :show-inheritance:
+
+.. autoclass:: Join
+ :members:
+ :show-inheritance:
+
+.. autoclass:: Select
+ :members:
+ :show-inheritance:
+
+.. autoclass:: Selectable
+ :members:
+ :show-inheritance:
+
+.. autoclass:: TableClause
+ :members:
+ :show-inheritance:
+
+.. autoclass:: Update
+ :members:
+ :show-inheritance:
+
+.. _generic_functions:
+
+Generic Functions
+-----------------
+
+These are SQL functions known to SQLAlchemy with regard to database-specific rendering, return types and argument behavior. Generic functions are invoked like all SQL functions, using the :attr:`func` attribute::
+
+ select([func.count()]).select_from(sometable)
+
+.. automodule:: sqlalchemy.sql.functions
+ :members:
+ :undoc-members:
+ :show-inheritance:
\ No newline at end of file
--- /dev/null
+sqlalchemy
+==========
+
+.. toctree::
+ :glob:
+
+ connections
+ pooling
+ expressions
+ schema
+ types
+ interfaces
+
+
--- /dev/null
+Interfaces
+----------
+
+.. automodule:: sqlalchemy.interfaces
+ :members:
+
--- /dev/null
+.. _pooling_toplevel:
+
+Connection Pooling
+==================
+
+.. module:: sqlalchemy.pool
+
+SQLAlchemy ships with a connection pooling framework that integrates
+with the Engine system and can also be used on its own to manage plain
+DB-API connections.
+
+At the base of any database helper library is a system for efficiently
+acquiring connections to the database. Since the establishment of a
+database connection is typically a somewhat expensive operation, an
+application needs a way to get at database connections repeatedly
+without incurring the full overhead each time. Particularly for
+server-side web applications, a connection pool is the standard way to
+maintain a group or "pool" of active database connections which are
+reused from request to request in a single server process.
+
+Connection Pool Configuration
+-----------------------------
+
+The :class:`~sqlalchemy.engine.Engine` returned by the
+:func:`~sqlalchemy.create_engine` function has a :class:`QueuePool`
+integrated, pre-configured with reasonable pooling defaults. If
+you're reading this section simply to enable pooling, congratulations!
+You're already done.
+
+The most common :class:`QueuePool` tuning parameters can be passed
+directly to :func:`~sqlalchemy.create_engine` as keyword arguments:
+``pool_size``, ``max_overflow``, ``pool_recycle`` and
+``pool_timeout``. For example::
+
+ engine = create_engine('postgres://me@localhost/mydb',
+ pool_size=20, max_overflow=0)
+
+
+Custom Pool Construction
+------------------------
+
+:class:`Pool` instances may be created directly for your own use or to
+supply to :func:`sqlalchemy.create_engine` via the ``pool=``
+keyword argument.
+
+Constructing your own pool requires supplying a callable function the
+Pool can use to create new connections. The function will be called
+with no arguments.
+
+Through this method, custom connection schemes can be made, such as
+using connections from another library's pool, or making a new
+connection that automatically executes some initialization commands::
+
+ import sqlalchemy.pool as pool
+ import psycopg2
+
+ def getconn():
+        c = psycopg2.connect(user='ed', host='127.0.0.1', dbname='test')
+        # execute an initialization function on the connection before returning
+        c.cursor().execute("setup_encodings()")
+ return c
+
+ p = pool.QueuePool(getconn, max_overflow=10, pool_size=5)
+
+Or with SingletonThreadPool::
+
+ import sqlalchemy.pool as pool
+ import sqlite
+
+ p = pool.SingletonThreadPool(lambda: sqlite.connect(filename='myfile.db'))
+
+
+Builtin Pool Implementations
+----------------------------
+
+.. autoclass:: AssertionPool
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: NullPool
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: sqlalchemy.pool.Pool
+ :members:
+ :show-inheritance:
+ :undoc-members:
+ :inherited-members:
+
+ .. automethod:: __init__
+
+.. autoclass:: sqlalchemy.pool.QueuePool
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: SingletonThreadPool
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: StaticPool
+ :members: __init__
+ :show-inheritance:
+
+
+Pooling Plain DB-API Connections
+--------------------------------
+
+Any :pep:`249` DB-API module can be "proxied" through the connection
+pool transparently. Usage of the DB-API is exactly as before, except
+the ``connect()`` method will consult the pool. Below we illustrate
+this with ``psycopg2``::
+
+ import sqlalchemy.pool as pool
+ import psycopg2 as psycopg
+
+ psycopg = pool.manage(psycopg)
+
+ # then connect normally
+    connection = psycopg.connect(database='test', user='scott',
+                                 password='tiger')
+
+This produces a :class:`_DBProxy` object which supports the same
+``connect()`` function as the original DB-API module. Upon
+connection, a connection proxy object is returned, which delegates its
+calls to a real DB-API connection object. This connection object is
+stored persistently within a connection pool (an instance of
+:class:`Pool`) that corresponds to the exact connection arguments sent
+to the ``connect()`` function.
+
+The connection proxy supports all of the methods on the original
+connection object, most of which are proxied via ``__getattr__()``.
+The ``close()`` method will return the connection to the pool, and the
+``cursor()`` method will return a proxied cursor object. Both the
+connection proxy and the cursor proxy will also return the underlying
+connection to the pool after they have both been garbage collected,
+which is detected via the ``__del__()`` method.
+
+Additionally, when connections are returned to the pool, a
+``rollback()`` is issued on the connection unconditionally. This is
+to release any locks still held by the connection that may have
+resulted from normal activity.
+
+By default, the ``connect()`` method will return the same connection
+that is already checked out in the current thread. This allows a
+particular connection to be used in a given thread without needing to
+pass it around between functions. To disable this behavior, specify
+``use_threadlocal=False`` to the ``manage()`` function.
+
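+For example, a sketch following the ``psycopg2`` example above::
+
+    import sqlalchemy.pool as pool
+    import psycopg2 as psycopg
+
+    # each connect() call now returns a pooled connection that is not
+    # shared with other connect() calls in the same thread
+    psycopg = pool.manage(psycopg, use_threadlocal=False)
+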
+.. autofunction:: sqlalchemy.pool.manage
+
+.. autofunction:: sqlalchemy.pool.clear_managers
+
--- /dev/null
+Database Schema
+===============
+
+.. automodule:: sqlalchemy.schema
+ :members:
+ :undoc-members:
+ :inherited-members:
+ :show-inheritance:
\ No newline at end of file
--- /dev/null
+.. _types:
+
+Column and Data Types
+=====================
+
+.. module:: sqlalchemy
+
+SQLAlchemy provides abstractions for most common database data types,
+and a mechanism for specifying your own custom data types.
+
+The methods and attributes of type objects are rarely used directly.
+Type objects are supplied to :class:`~sqlalchemy.Table` definitions
+and can be supplied as type hints to `functions` for occasions where
+the database driver returns an incorrect type.
+
+.. code-block:: pycon
+
+ >>> users = Table('users', metadata,
+    ...     Column('id', Integer, primary_key=True),
+    ...     Column('login', String(32))
+ ... )
+
+
+SQLAlchemy will use the ``Integer`` and ``String(32)`` type
+information when issuing a ``CREATE TABLE`` statement and will use it
+again when reading back rows ``SELECTed`` from the database.
+Functions that accept a type (such as :func:`~sqlalchemy.Column`) will
+typically accept a type class or instance; ``Integer`` is equivalent
+to ``Integer()`` with no construction arguments in this case.
+
+Generic Types
+-------------
+
+Generic types specify a column that can read, write and store a
+particular type of Python data. SQLAlchemy will choose the best
+database column type available on the target database when issuing a
+``CREATE TABLE`` statement. For complete control over which column
+type is emitted in ``CREATE TABLE``, such as ``VARCHAR``, see `SQL
+Standard Types`_ and the other sections of this chapter.
+
+.. autoclass:: String
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: Unicode
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: Text
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: UnicodeText
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: Integer
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: SmallInteger
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: Numeric
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: Float
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: DateTime
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: Date
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: Time
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: Interval
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: Boolean
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: Binary
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: PickleType
+ :members: __init__
+ :show-inheritance:
+
+
+SQL Standard Types
+------------------
+
+The SQL standard types always create database column types of the same
+name when ``CREATE TABLE`` is issued. Some types may not be supported
+on all databases.
+
+.. autoclass:: INT
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: sqlalchemy.types.INTEGER
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: CHAR
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: VARCHAR
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: NCHAR
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: TEXT
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: FLOAT
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: NUMERIC
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: DECIMAL
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: TIMESTAMP
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: DATETIME
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: CLOB
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: BLOB
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: BOOLEAN
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: SMALLINT
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: DATE
+ :members: __init__
+ :show-inheritance:
+
+.. autoclass:: TIME
+ :members: __init__
+ :show-inheritance:
+
+
+Vendor-Specific Types
+---------------------
+
+Database-specific types are also available for import from each
+database's dialect module. See the :ref:`sqlalchemy.databases`
+reference for the database you're interested in.
+
+For example, MySQL has a ``BIGINT`` type and PostgreSQL has an
+``INET`` type. To use these, import them from the module explicitly::
+
+ from sqlalchemy.databases.mysql import MSBigInteger, MSEnum
+
+ table = Table('foo', meta,
+ Column('id', MSBigInteger),
+ Column('enumerates', MSEnum('a', 'b', 'c'))
+ )
+
+Or some PostgreSQL types::
+
+ from sqlalchemy.databases.postgres import PGInet, PGArray
+
+ table = Table('foo', meta,
+ Column('ipaddress', PGInet),
+ Column('elements', PGArray(str))
+ )
+
+
+.. module:: sqlalchemy.types
+
+Custom Types
+------------
+
+User-defined types may be created to match special capabilities of a
+particular database or simply for implementing custom processing logic
+in Python.
+
+The simplest method is implementing a :class:`TypeDecorator`, a helper
+class that makes it easy to augment the bind parameter and result
+processing capabilities of one of the built in types.
+
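+A minimal sketch of a :class:`TypeDecorator`, a hypothetical type which
+stores Python ``datetime`` values as integer epoch seconds::
+
+    import time
+    from datetime import datetime
+
+    from sqlalchemy import types
+
+    class EpochType(types.TypeDecorator):
+        """Stores datetimes as integer seconds since the epoch."""
+
+        impl = types.Integer
+
+        def process_bind_param(self, value, dialect):
+            # convert the Python datetime on the way in
+            if value is None:
+                return None
+            return int(time.mktime(value.timetuple()))
+
+        def process_result_value(self, value, dialect):
+            # convert the database integer on the way out
+            if value is None:
+                return None
+            return datetime.fromtimestamp(value)
+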
+To build a type object from scratch, subclass :class:`TypeEngine`.
+
+.. autoclass:: TypeDecorator
+ :members:
+ :undoc-members:
+ :inherited-members:
+ :show-inheritance:
+
+.. autoclass:: TypeEngine
+ :members:
+ :undoc-members:
+ :inherited-members:
+ :show-inheritance:
+
+.. autoclass:: AbstractType
+ :members:
+ :undoc-members:
+ :inherited-members:
+ :show-inheritance:
+
+.. autoclass:: MutableType
+ :members:
+ :undoc-members:
+ :inherited-members:
+ :show-inheritance:
+
+.. autoclass:: Concatenable
+ :members:
+ :undoc-members:
+ :inherited-members:
+ :show-inheritance:
+
+.. autoclass:: NullType
+ :show-inheritance:
+
--- /dev/null
+.. _session_toplevel:
+
+=================
+Using the Session
+=================
+
+The `Mapper` is the entrypoint to the configurational API of the SQLAlchemy object relational mapper. But the primary object one works with when using the ORM is the :class:`~sqlalchemy.orm.session.Session`.
+
+What does the Session do?
+==========================
+
+In the most general sense, the ``Session`` establishes all conversations with the database and represents a "holding zone" for all the mapped instances which you've loaded or created during its lifespan. It implements the `Unit of Work <http://martinfowler.com/eaaCatalog/unitOfWork.html>`_ pattern, which means it keeps track of all changes which occur, and is capable of **flushing** those changes to the database as appropriate. Another important facet of the ``Session`` is that it also maintains **unique** copies of each instance, where "unique" means "only one object with a particular primary key" - this pattern is called the `Identity Map <http://martinfowler.com/eaaCatalog/identityMap.html>`_.
+
+Beyond that, the ``Session`` implements an interface which lets you move objects in or out of the session in a variety of ways, it provides the entryway to a ``Query`` object which is used to query the database for data, and it also provides a transactional context for SQL operations which rides on top of the transactional capabilities of ``Engine`` and ``Connection`` objects.
+
+Getting a Session
+=================
+
+``Session`` is a regular Python class which can be directly instantiated. However, to standardize how sessions are configured and acquired, the ``sessionmaker()`` function is normally used to create a top level ``Session`` configuration which can then be used throughout an application without the need to repeat the configurational arguments.
+
+Using a sessionmaker() Configuration
+------------------------------------
+
+The usage of ``sessionmaker()`` is illustrated below:
+
+.. sourcecode:: python+sql
+
+ from sqlalchemy.orm import sessionmaker
+
+ # create a configured "Session" class
+ Session = sessionmaker(bind=some_engine)
+
+ # create a Session
+ session = Session()
+
+    # work with session
+ myobject = MyObject('foo', 'bar')
+ session.add(myobject)
+ session.commit()
+
+ # close when finished
+ session.close()
+
+Above, the ``sessionmaker`` call creates a class for us, which we assign to the name ``Session``. This class is a subclass of the actual ``sqlalchemy.orm.session.Session`` class, which will instantiate with a particular bound engine.
+
+When you write your application, place the call to ``sessionmaker()`` somewhere global, and then make your new ``Session`` class available to the rest of your application.
+
+Binding Session to an Engine
+----------------------------
+
+In our previous example regarding ``sessionmaker()``, we specified a ``bind`` for a particular ``Engine``. If we'd like to construct a ``sessionmaker()`` without an engine available and bind it later on, or to specify other options to an existing ``sessionmaker()``, we may use the ``configure()`` method::
+
+ # configure Session class with desired options
+ Session = sessionmaker()
+
+ # later, we create the engine
+ engine = create_engine('postgres://...')
+
+ # associate it with our custom Session class
+ Session.configure(bind=engine)
+
+ # work with the session
+ session = Session()
+
+It's actually entirely optional to bind a Session to an engine. If the underlying mapped ``Table`` objects use "bound" metadata, the ``Session`` will make use of the bound engine instead (or will even use multiple engines if multiple binds are present within the mapped tables). "Bound" metadata is described at :ref:`metadata_binding`.
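+
+For example, a minimal sketch of the bound-metadata scenario, in which the ``Session`` itself receives no ``bind`` at all::
+
+    # "bound" metadata - the engine travels with the Table objects
+    metadata = MetaData(bind=create_engine('postgres://...'))
+
+    # no bind on the sessionmaker; the Session locates engines
+    # through the mapped tables' metadata
+    Session = sessionmaker()
+    session = Session()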
+
+The ``Session`` also has the ability to be bound to multiple engines explicitly. Descriptions of these scenarios are described in :ref:`session_partitioning`.
+
+Binding Session to a Connection
+-------------------------------
+
+The ``Session`` can also be explicitly bound to an individual database ``Connection``. Reasons for doing this include joining a ``Session`` with an ongoing transaction local to a specific ``Connection`` object, or bypassing connection pooling by having connections persistently checked out and associated with distinct, long running sessions::
+
+ # global application scope. create Session class, engine
+ Session = sessionmaker()
+
+ engine = create_engine('postgres://...')
+
+ ...
+
+ # local scope, such as within a controller function
+
+ # connect to the database
+ connection = engine.connect()
+
+ # bind an individual Session to the connection
+ session = Session(bind=connection)
+
+Using create_session()
+----------------------
+
+As an alternative to ``sessionmaker()``, ``create_session()`` is a function which calls the normal ``Session`` constructor directly. All arguments are passed through and the new ``Session`` object is returned::
+
+ session = create_session(bind=myengine, autocommit=True, autoflush=False)
+
+Note that ``create_session()`` disables all optional "automation" by default. Called with no arguments, the session produced is not autoflushing, does not auto-expire, and does not maintain a transaction (i.e. it begins and commits a new transaction for each ``flush()``). SQLAlchemy uses ``create_session()`` extensively within its own unit tests.
+
+Configurational Arguments
+-------------------------
+
+Configurational arguments accepted by ``sessionmaker()`` and ``create_session()`` are the same as those of the ``Session`` class itself, and are described at :func:`sqlalchemy.orm.sessionmaker`.
+
+Note that the defaults of ``create_session()`` are the opposite of those of ``sessionmaker()``: autoflush and expire_on_commit are False, autocommit is True. It is recommended to use the ``sessionmaker()`` function instead of ``create_session()``. ``create_session()`` is used to get a session with no automation turned on, and is useful for testing.
+
+Using the Session
+==================
+
+Quickie Intro to Object States
+------------------------------
+
+It's helpful to know the states which an instance can have within a session:
+
+* *Transient* - an instance that's not in a session, and is not saved to the database; i.e. it has no database identity. The only relationship such an object has to the ORM is that its class has a ``mapper()`` associated with it.
+
+* *Pending* - when you ``add()`` a transient instance, it becomes pending. It hasn't actually been flushed to the database yet, but it will be when the next flush occurs.
+
+* *Persistent* - An instance which is present in the session and has a record in the database. You get persistent instances by either flushing so that the pending instances become persistent, or by querying the database for existing instances (or moving persistent instances from other sessions into your local session).
+
+* *Detached* - an instance which has a record in the database, but is not in any session. There's nothing wrong with this, and you can use objects normally when they're detached, **except** they will not be able to issue any SQL in order to load collections or attributes which are not yet loaded, or were marked as "expired".
+
+Knowing these states is important, since the ``Session`` tries to be strict about ambiguous operations (such as trying to save the same object to two different sessions at the same time).
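+
+As a quick sketch of these states in code, using a mapped ``User`` class like the one in the surrounding examples::
+
+    user = User(name='ed')   # transient - no session, no database identity
+    session.add(user)        # pending - INSERT will occur at next flush
+    session.flush()          # persistent - present in session and database
+    session.expunge(user)    # detached - has an identity, but no session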
+
+Frequently Asked Questions
+--------------------------
+
+* When do I make a ``sessionmaker``?
+
+  Just one time, somewhere in your application's global scope. It should be looked upon as part of your application's configuration. If your application has three .py files in a package, you could, for example, place the ``sessionmaker`` line in your ``__init__.py`` file; from that point on, your other modules can say ``from mypackage import Session``. That way, everyone else just uses ``Session()``, and the configuration of that session is controlled by that central point.
+
+ If your application starts up, does imports, but does not know what database it's going to be connecting to, you can bind the ``Session`` at the "class" level to the engine later on, using ``configure()``.
+
+  In the examples in this section, we will frequently show the ``sessionmaker`` being created right above the line where we actually invoke ``Session()``. But that's just for example's sake! In reality, the ``sessionmaker`` would be somewhere at the module level, and your individual ``Session()`` calls would be sprinkled all throughout your app, such as in a web application within each controller method.
+
+* When do I make a ``Session``?
+
+  You typically invoke ``Session()`` when you first need to talk to your database, and want to save some objects or load some existing ones. Then, you work with it, save your changes, and then dispose of it, or at the very least ``close()`` it. It's not a "global" kind of object, and should be handled more like a "local variable", as it's generally **not** safe to use with concurrent threads. Sessions are very inexpensive to make, and don't use any resources whatsoever until they are first used... so create some!
+
+  There is also a pattern whereby you're using a **contextual session**; this is described later in :ref:`unitofwork_contextual`. In this pattern, a helper object is maintaining a ``Session`` for you, most commonly one that is local to the current thread (and sometimes also local to an application instance). SQLAlchemy has worked this pattern out such that it still *looks* like you're creating a new session as you need one... so in that case, it's still a guaranteed win to just say ``Session()`` whenever you want a session.
+
+* Is the Session a cache?
+
+ Yeee...no. It's somewhat used as a cache, in that it implements the identity map pattern, and stores objects keyed to their primary key. However, it doesn't do any kind of query caching. This means, if you say ``session.query(Foo).filter_by(name='bar')``, even if ``Foo(name='bar')`` is right there, in the identity map, the session has no idea about that. It has to issue SQL to the database, get the rows back, and then when it sees the primary key in the row, *then* it can look in the local identity map and see that the object is already there. It's only when you say ``query.get({some primary key})`` that the ``Session`` doesn't have to issue a query.
+
+ Additionally, the Session stores object instances using a weak reference by default. This also defeats the purpose of using the Session as a cache, unless the ``weak_identity_map`` flag is set to ``False``.
+
+  The ``Session`` is not designed to be a global object which everyone consults as a "registry" of objects. That is the job of a **second level cache**. A good library for implementing second level caching is `Memcached <http://www.danga.com/memcached/>`_. It *is* possible to "sort of" use the ``Session`` in this manner, if you set it to be non-transactional and it never flushes any SQL, but it's not a terrific solution, since if concurrent threads load the same objects at the same time, you may have multiple copies of the same objects present in collections.
+
+* How can I get the ``Session`` for a certain object?
+
+ Use the ``object_session()`` classmethod available on ``Session``::
+
+ session = Session.object_session(someobject)
+
+* Is the session threadsafe?
+
+  Nope. It has no thread synchronization of any kind built in, and particularly when you do a flush operation, it definitely is not open to concurrent threads accessing it, because it holds onto a single database connection at that point. If you use a session which is non-transactional for read operations only, it's still not thread-"safe", but you also won't get any catastrophic failures either, since it opens and closes connections on an as-needed basis; it's just that different threads might load the same objects independently of each other, but only one will wind up in the identity map (however, the other one might still live in a collection somewhere).
+
+ But the bigger point here is, you should not *want* to use the session with multiple concurrent threads. That would be like having everyone at a restaurant all eat from the same plate. The session is a local "workspace" that you use for a specific set of tasks; you don't want to, or need to, share that session with other threads who are doing some other task. If, on the other hand, there are other threads participating in the same task you are, such as in a desktop graphical application, then you would be sharing the session with those threads, but you also will have implemented a proper locking scheme (or your graphical framework does) so that those threads do not collide.
+
+Querying
+--------
+
+The ``query()`` function takes one or more *entities* and returns a new ``Query`` object which will issue mapper queries within the context of this Session. An entity is defined as a mapped class, a ``Mapper`` object, an orm-enabled *descriptor*, or an ``AliasedClass`` object::
+
+ # query from a class
+ session.query(User).filter_by(name='ed').all()
+
+ # query with multiple classes, returns tuples
+ session.query(User, Address).join('addresses').filter_by(name='ed').all()
+
+ # query using orm-enabled descriptors
+ session.query(User.name, User.fullname).all()
+
+ # query from a mapper
+ user_mapper = class_mapper(User)
+ session.query(user_mapper)
+
+When ``Query`` returns results, each object instantiated is stored within the identity map. When a row matches an object which is already present, the same object is returned. In the latter case, whether or not the row is populated onto an existing object depends upon whether the attributes of the instance have been *expired* or not. As of 0.5, a default-configured ``Session`` automatically expires all instances along transaction boundaries, so that with a normally isolated transaction, there shouldn't be any issue of instances representing data which is stale with regards to the current transaction.
+
+Adding New or Existing Items
+----------------------------
+
+``add()`` is used to place instances in the session. For *transient* (i.e. brand new) instances, this will have the effect of an INSERT taking place for those instances upon the next flush. For instances which are *persistent* (i.e. were loaded by this session), they are already present and do not need to be added. Instances which are *detached* (i.e. have been removed from a session) may be re-associated with a session using this method::
+
+ user1 = User(name='user1')
+ user2 = User(name='user2')
+ session.add(user1)
+ session.add(user2)
+
+ session.commit() # write changes to the database
+
+To add a list of items to the session at once, use ``add_all()``::
+
+ session.add_all([item1, item2, item3])
+
+The ``add()`` operation **cascades** along the ``save-update`` cascade. For more details see the section :ref:`unitofwork_cascades`.
+
+Merging
+-------
+
+``merge()`` reconciles the current state of an instance and its associated children with existing data in the database, and returns a copy of the instance associated with the session. Usage is as follows::
+
+ merged_object = session.merge(existing_object)
+
+When given an instance, it follows these steps:
+
+ * It examines the primary key of the instance. If it's present, it attempts to load an instance with that primary key (or pulls from the local identity map).
+ * If there's no primary key on the given instance, or the given primary key does not exist in the database, a new instance is created.
+ * The state of the given instance is then copied onto the located/newly created instance.
+ * The operation is cascaded to associated child items along the ``merge`` cascade. Note that all changes present on the given instance, including changes to collections, are merged.
+ * The new instance is returned.
+
+With ``merge()``, the given instance is not placed within the session, and can be associated with a different session or detached. ``merge()`` is very useful for taking the state of any kind of object structure without regard for its origins or current session associations and placing that state within a session. Here are two examples:
+
+ * An application which reads an object structure from a file and wishes to save it to the database might parse the file, build up the structure, and then use ``merge()`` to save it to the database, ensuring that the data within the file is used to formulate the primary key of each element of the structure. Later, when the file has changed, the same process can be re-run, producing a slightly different object structure, which can then be ``merged()`` in again, and the ``Session`` will automatically update the database to reflect those changes.
+ * A web application stores mapped entities within an HTTP session object. When each request starts up, the serialized data can be merged into the session, so that the original entity may be safely shared among requests and threads.
+
+``merge()`` is frequently used by applications which implement their own second level caches. This refers to an application which uses an in-memory dictionary, or a tool like Memcached, to store objects over long running spans of time. When such an object needs to exist within a ``Session``, ``merge()`` is a good choice since it leaves the original cached object untouched. For this use case, merge provides a keyword option called ``dont_load=True``. When this boolean flag is set to ``True``, ``merge()`` will not issue any SQL to reconcile the given object against the current state of the database, thereby reducing query overhead. The limitation is that the given object and all of its children may not contain any pending changes, and it's also of course possible that newer information in the database will not be present on the merged object, since no load is issued.
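+
+A sketch of that pattern, assuming a hypothetical ``cache`` dictionary holding long-lived instances::
+
+    # retrieve a long-lived object from the cache
+    obj = cache['some key']
+
+    # copy its state into the session without emitting any SQL;
+    # the cached original remains untouched
+    local = session.merge(obj, dont_load=True)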
+
+Deleting
+--------
+
+The ``delete`` method places an instance into the Session's list of objects to be marked as deleted::
+
+ # mark two objects to be deleted
+ session.delete(obj1)
+ session.delete(obj2)
+
+ # commit (or flush)
+ session.commit()
+
+The big gotcha with ``delete()`` is that **nothing is removed from collections**. For example, if a ``User`` has a collection of three ``Addresses``, deleting an ``Address`` will not remove it from ``user.addresses``::
+
+ >>> address = user.addresses[1]
+ >>> session.delete(address)
+ >>> session.flush()
+ >>> address in user.addresses
+ True
+
+The solution is to use proper cascading::
+
+ mapper(User, users_table, properties={
+ 'addresses':relation(Address, cascade="all, delete, delete-orphan")
+ })
+ del user.addresses[1]
+ session.flush()
+
+Flushing
+--------
+
+When the ``Session`` is used with its default configuration, the flush step is nearly always done transparently. Specifically, the flush occurs before any individual ``Query`` is issued, as well as within the ``commit()`` call before the transaction is committed. It also occurs before a SAVEPOINT is issued when ``begin_nested()`` is used. The "flush-on-Query" aspect of the behavior can be disabled by constructing ``sessionmaker()`` with the flag ``autoflush=False``.
+
+Regardless of the autoflush setting, a flush can always be forced by issuing ``flush()``::
+
+ session.flush()
+
+``flush()`` also supports the ability to flush a subset of objects which are present in the session, by passing a list of objects::
+
+ # saves only user1 and address2. all other modified
+ # objects remain present in the session.
+ session.flush([user1, address2])
+
+This second form of flush should be used carefully as it currently does not cascade, meaning that it will not necessarily affect other objects directly associated with the objects given.
+
+The flush process *always* occurs within a transaction, even if the ``Session`` has been configured with ``autocommit=True``, a setting that disables the session's persistent transactional state. If no transaction is present, ``flush()`` creates its own transaction and commits it. Any failures during flush will always result in a rollback of whatever transaction is present.
+
+Committing
+----------
+
+``commit()`` is used to commit the current transaction. It always issues ``flush()`` beforehand to flush any remaining state to the database; this is independent of the "autoflush" setting. If no transaction is present, it raises an error. Note that the default behavior of the ``Session`` is that a transaction is always present; this behavior can be disabled by setting ``autocommit=True``. In autocommit mode, a transaction can be initiated by calling the ``begin()`` method.
+
+Another behavior of ``commit()`` is that by default it expires the state of all instances present after the commit is complete. This is so that when the instances are next accessed, either through attribute access or by them being present in a ``Query`` result set, they receive the most recent state. To disable this behavior, configure ``sessionmaker()`` with ``expire_on_commit=False``.
+
+Normally, instances loaded into the ``Session`` are never changed by subsequent queries; the assumption is that the current transaction is isolated so the state most recently loaded is correct as long as the transaction continues. Setting ``autocommit=True`` works against this model to some degree since the ``Session`` behaves in exactly the same way with regard to attribute state, except no transaction is present.
+
+Rolling Back
+------------
+
+``rollback()`` rolls back the current transaction. With a default configured session, the post-rollback state of the session is as follows:
+
+ * All connections are rolled back and returned to the connection pool, unless the Session was bound directly to a Connection, in which case the connection is still maintained (but still rolled back).
+ * Objects which were initially in the *pending* state when they were added to the ``Session`` within the lifespan of the transaction are expunged, corresponding to their INSERT statement being rolled back. The state of their attributes remains unchanged.
+ * Objects which were marked as *deleted* within the lifespan of the transaction are promoted back to the *persistent* state, corresponding to their DELETE statement being rolled back. Note that if those objects were first *pending* within the transaction, that operation takes precedence instead.
+ * All objects not expunged are fully expired.
+
+With that state understood, the ``Session`` may safely continue usage after a rollback occurs (note that this is a new feature as of version 0.5).
+
+When a ``flush()`` fails, typically for reasons like primary key, foreign key, or "not nullable" constraint violations, a ``rollback()`` is issued automatically (it's currently not possible for a flush to continue after a partial failure). However, the flush process always uses its own transactional demarcator called a *subtransaction*, which is described more fully in the docstrings for ``Session``. What it means here is that even though the database transaction has been rolled back, the end user must still issue ``rollback()`` to fully reset the state of the ``Session``.
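+
+A sketch of that pattern, assuming a flush which violates a constraint::
+
+    from sqlalchemy import exc
+
+    try:
+        session.flush()
+    except exc.DBAPIError:
+        # the flush's subtransaction has already been rolled back,
+        # but the Session itself must still be reset explicitly
+        session.rollback()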
+
+Expunging
+---------
+
+Expunge removes an object from the Session, sending persistent instances to the detached state, and pending instances to the transient state:
+
+.. sourcecode:: python+sql
+
+ session.expunge(obj1)
+
+To remove all items, call ``session.expunge_all()`` (this method was formerly known as ``clear()``).
+
+Closing
+-------
+
+The ``close()`` method issues an ``expunge_all()``, and releases any transactional/connection resources. When connections are returned to the connection pool, transactional state is rolled back as well.
+
+Refreshing / Expiring
+---------------------
+
+To assist with the Session's "sticky" behavior of instances which are present, individual objects can have all of their attributes immediately re-loaded from the database, or marked as "expired", which will cause a re-load to occur upon the next access of any of the object's mapped attributes. This includes all relationships, so lazy-loaders will be re-initialized and eager relationships will be repopulated. Any changes marked on the object are discarded::
+
+ # immediately re-load attributes on obj1, obj2
+ session.refresh(obj1)
+ session.refresh(obj2)
+
+ # expire objects obj1, obj2, attributes will be reloaded
+ # on the next access:
+ session.expire(obj1)
+ session.expire(obj2)
+
+``refresh()`` and ``expire()`` also support being passed a list of individual attribute names to be refreshed. These names can reference any attribute, column-based or relation-based::
+
+ # immediately re-load the attributes 'hello', 'world' on obj1, obj2
+ session.refresh(obj1, ['hello', 'world'])
+ session.refresh(obj2, ['hello', 'world'])
+
+    # expire the attributes 'hello', 'world' on obj1, obj2;
+    # they will be reloaded on the next access:
+ session.expire(obj1, ['hello', 'world'])
+ session.expire(obj2, ['hello', 'world'])
+
+The full contents of the session may be expired at once using ``expire_all()``::
+
+ session.expire_all()
+
+``refresh()`` and ``expire()`` are usually not needed when working with a default-configured ``Session``. The usual need is when an UPDATE or DELETE has been issued manually within the transaction using ``Session.execute()``.
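+
+For instance, after a manual UPDATE against the (hypothetical) ``users_table``, the affected instance can be expired so that it reloads::
+
+    # issue an UPDATE outside of the unit of work
+    session.execute(users_table.update(users_table.c.id == someuser.id,
+                    values={'name': 'ed'}))
+
+    # in-memory state is now stale; expire so it reloads on next access
+    session.expire(someuser)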
+
+Session Attributes
+------------------
+
+The ``Session`` itself acts somewhat like a set-like collection. All items present may be accessed using the iterator interface::
+
+ for obj in session:
+ print obj
+
+And presence may be tested for using regular "contains" semantics::
+
+ if obj in session:
+ print "Object is present"
+
+The session is also keeping track of all newly created (i.e. pending) objects, all objects which have had changes since they were last loaded or saved (i.e. "dirty"), and everything that's been marked as deleted::
+
+ # pending objects recently added to the Session
+ session.new
+
+ # persistent objects which currently have changes detected
+ # (this collection is now created on the fly each time the property is called)
+ session.dirty
+
+ # persistent objects that have been marked as deleted via session.delete(obj)
+ session.deleted
+
+Note that objects within the session are by default *weakly referenced*. This means that when they are dereferenced in the outside application, they fall out of scope from within the ``Session`` as well and are subject to garbage collection by the Python interpreter. The exceptions to this include objects which are pending, objects which are marked as deleted, or persistent objects which have pending changes on them. After a full flush, these collections are all empty, and all objects are again weakly referenced. To disable the weak referencing behavior and force all objects within the session to remain until explicitly expunged, configure ``sessionmaker()`` with the ``weak_identity_map=False`` setting.
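+
+For example::
+
+    # instances remain in the session until explicitly expunged,
+    # regardless of outside references
+    Session = sessionmaker(weak_identity_map=False)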
+
+.. _unitofwork_cascades:
+
+Cascades
+========
+
+Mappers support the concept of configurable *cascade* behavior on :func:`~sqlalchemy.orm.relation()` constructs. This behavior controls how the Session should treat the instances that have a parent-child relationship with another instance that is operated upon by the Session. Cascade is indicated as a comma-separated list of string keywords, with the possible values ``all``, ``delete``, ``save-update``, ``refresh-expire``, ``merge``, ``expunge``, and ``delete-orphan``.
+
+Cascading is configured by setting the ``cascade`` keyword argument on a ``relation()``::
+
+ mapper(Order, order_table, properties={
+        'items' : relation(Item, cascade="all, delete-orphan"),
+        'customer' : relation(User, secondary=user_orders_table, cascade="save-update"),
+ })
+
+The above mapper specifies two relations, ``items`` and ``customer``. The ``items`` relationship specifies "all, delete-orphan" as its ``cascade`` value, indicating that all ``add``, ``merge``, ``expunge``, ``refresh``, ``delete`` and ``expire`` operations performed on a parent ``Order`` instance should also be performed on the child ``Item`` instances attached to it. The ``delete-orphan`` cascade value additionally indicates that if an ``Item`` instance is no longer associated with an ``Order``, it should also be deleted. The "all, delete-orphan" cascade argument allows a so-called *lifecycle* relationship between an ``Order`` and an ``Item`` object.
+
+The ``customer`` relationship specifies only the "save-update" cascade value, indicating most operations will not be cascaded from a parent ``Order`` instance to a child ``User`` instance except for the ``add()`` operation. "save-update" cascade indicates that an ``add()`` on the parent will cascade to all child items, and also that items added to a parent which is already present in the session will also be added.
+
+The default value for ``cascade`` on :func:`~sqlalchemy.orm.relation()` is ``save-update, merge``.
+
+Managing Transactions
+=====================
+
+The ``Session`` manages transactions across all engines associated with it. As the ``Session`` receives requests to execute SQL statements using a particular ``Engine`` or ``Connection``, it adds each individual ``Engine`` encountered to its transactional state and maintains an open connection for each one (note that a simple application normally has just one ``Engine``). At commit time, all unflushed data is flushed, and each individual transaction is committed. If the underlying databases support two-phase semantics, this may be used by the Session as well if two-phase transactions are enabled.
+
+Normal operation ends the transactional state using the ``rollback()`` or ``commit()`` methods. After either is called, the ``Session`` starts a new transaction::
+
+ Session = sessionmaker()
+ session = Session()
+ try:
+ item1 = session.query(Item).get(1)
+ item2 = session.query(Item).get(2)
+ item1.foo = 'bar'
+ item2.bar = 'foo'
+
+ # commit- will immediately go into a new transaction afterwards
+ session.commit()
+ except:
+ # rollback - will immediately go into a new transaction afterwards.
+ session.rollback()
+
+A session which is configured with ``autocommit=True`` may be placed into a transaction using ``begin()``. With an ``autocommit=True`` session that's been placed into a transaction using ``begin()``, the session releases all connection resources after a ``commit()`` or ``rollback()`` and remains transaction-less (with the exception of flushes) until the next ``begin()`` call::
+
+ Session = sessionmaker(autocommit=True)
+ session = Session()
+ session.begin()
+ try:
+ item1 = session.query(Item).get(1)
+ item2 = session.query(Item).get(2)
+ item1.foo = 'bar'
+ item2.bar = 'foo'
+ session.commit()
+ except:
+ session.rollback()
+ raise
+
+The ``begin()`` method also returns a transactional token which is compatible with the Python 2.6 ``with`` statement::
+
+ Session = sessionmaker(autocommit=True)
+ session = Session()
+ with session.begin():
+ item1 = session.query(Item).get(1)
+ item2 = session.query(Item).get(2)
+ item1.foo = 'bar'
+ item2.bar = 'foo'
+
+Using SAVEPOINT
+---------------
+
+SAVEPOINT transactions, if supported by the underlying engine, may be delineated using the ``begin_nested()`` method::
+
+ Session = sessionmaker()
+ session = Session()
+ session.add(u1)
+ session.add(u2)
+
+ session.begin_nested() # establish a savepoint
+ session.add(u3)
+ session.rollback() # rolls back u3, keeps u1 and u2
+
+ session.commit() # commits u1 and u2
+
+``begin_nested()`` may be called any number of times, which will issue a new SAVEPOINT with a unique identifier for each call. For each ``begin_nested()`` call, a corresponding ``rollback()`` or ``commit()`` must be issued.
+
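+For example, a sketch of two nested savepoints::
+
+    session.begin_nested()   # establishes SAVEPOINT 1
+    session.begin_nested()   # establishes SAVEPOINT 2, nested within
+
+    session.rollback()       # rolls back SAVEPOINT 2
+    session.commit()         # releases SAVEPOINT 1
+
+    session.commit()         # commits the enclosing transaction
+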
+When ``begin_nested()`` is called, a ``flush()`` is unconditionally issued (regardless of the ``autoflush`` setting). This is so that when a ``rollback()`` occurs, the full state of the session is expired, thus causing all subsequent attribute/instance access to reference the full state of the ``Session`` right before ``begin_nested()`` was called.
+
+Enabling Two-Phase Commit
+-------------------------
+
+Finally, for MySQL, PostgreSQL, and soon Oracle as well, the session can be instructed to use two-phase commit semantics. This will coordinate the committing of transactions across databases so that the transaction is either committed or rolled back in all databases. You can also ``prepare()`` the session for interacting with transactions not managed by SQLAlchemy. To use two-phase transactions, set the flag ``twophase=True`` on the session::
+
+ engine1 = create_engine('postgres://db1')
+ engine2 = create_engine('postgres://db2')
+
+ Session = sessionmaker(twophase=True)
+
+ # bind User operations to engine 1, Account operations to engine 2
+ Session.configure(binds={User:engine1, Account:engine2})
+
+ session = Session()
+
+ # .... work with accounts and users
+
+ # commit. session will issue a flush to all DBs, and a prepare step to all DBs,
+ # before committing both transactions
+ session.commit()
+
+Embedding SQL Insert/Update Expressions into a Flush
+=====================================================
+
+This feature allows the value of a database column to be set to a SQL expression instead of a literal value. It's especially useful for atomic updates, calling stored procedures, etc. All you do is assign an expression to an attribute::
+
+ class SomeClass(object):
+ pass
+ mapper(SomeClass, some_table)
+
+ someobject = session.query(SomeClass).get(5)
+
+ # set 'value' attribute to a SQL expression adding one
+ someobject.value = some_table.c.value + 1
+
+ # issues "UPDATE some_table SET value=value+1"
+ session.commit()
+
+This technique works both for INSERT and UPDATE statements. After the flush/commit operation, the ``value`` attribute on ``someobject`` above is expired, so that when next accessed the newly generated value will be loaded from the database.
+
+Using SQL Expressions with Sessions
+====================================
+
+SQL expressions and strings can be executed via the ``Session`` within its transactional context. This is most easily accomplished using the ``execute()`` method, which returns a ``ResultProxy`` in the same manner as an ``Engine`` or ``Connection``::
+
+ Session = sessionmaker(bind=engine)
+ session = Session()
+
+ # execute a string statement
+ result = session.execute("select * from table where id=:id", {'id':7})
+
+ # execute a SQL expression construct
+ result = session.execute(select([mytable]).where(mytable.c.id==7))
+
+The current ``Connection`` held by the ``Session`` is accessible using the ``connection()`` method::
+
+ connection = session.connection()
+
+The examples above deal with a ``Session`` that's bound to a single ``Engine`` or ``Connection``. To execute statements using a ``Session`` which is bound either to multiple engines, or none at all (i.e. relies upon bound metadata), both ``execute()`` and ``connection()`` accept a ``mapper`` keyword argument, to which a mapped class or ``Mapper`` instance is passed; this is used to locate the proper context for the desired engine::
+
+ Session = sessionmaker()
+ session = Session()
+
+ # need to specify mapper or class when executing
+ result = session.execute("select * from table where id=:id", {'id':7}, mapper=MyMappedClass)
+
+ result = session.execute(select([mytable], mytable.c.id==7), mapper=MyMappedClass)
+
+ connection = session.connection(MyMappedClass)
+
+Joining a Session into an External Transaction
+===============================================
+
+If a ``Connection`` is being used which is already in a transactional state (i.e. has a ``Transaction``), a ``Session`` can be made to participate within that transaction by just binding the ``Session`` to that ``Connection``::
+
+ Session = sessionmaker()
+
+ # non-ORM connection + transaction
+ conn = engine.connect()
+ trans = conn.begin()
+
+ # create a Session, bind to the connection
+ session = Session(bind=conn)
+
+ # ... work with session
+
+ session.commit() # commit the session
+ session.close() # close it out, prohibit further actions
+
+ trans.commit() # commit the actual transaction
+
+Note that above, we issue a ``commit()`` both on the ``Session`` as well as the ``Transaction``. This is an example of where we take advantage of ``Connection``'s ability to maintain *subtransactions*, or nested begin/commit pairs. The ``Session`` is used exactly as though it were managing the transaction on its own; its ``commit()`` method issues its ``flush()``, and commits the subtransaction. The subsequent transaction the ``Session`` starts after commit will not begin until it's next used. Above we issue a ``close()`` to prevent this from occurring. Finally, the actual transaction is committed using ``Transaction.commit()``.
+
+When using the ``threadlocal`` engine context, the process above is simplified; the ``Session`` uses the same connection/transaction as everyone else in the current thread, whether or not you explicitly bind it::
+
+ engine = create_engine('postgres://mydb', strategy="threadlocal")
+ engine.begin()
+
+ session = Session() # session takes place in the transaction like everyone else
+
+ # ... go nuts
+
+ engine.commit() # commit the transaction
+
+.. _unitofwork_contextual:
+
+Contextual/Thread-local Sessions
+=================================
+
+A common need in applications, particularly those built around web frameworks, is the ability to "share" a ``Session`` object among disparate parts of an application, without needing to pass the object explicitly to all method and function calls. What you're really looking for is some kind of "global" session object, or at least "global" to all the parts of an application which are tasked with servicing the current request. For this pattern, SQLAlchemy provides the ability to enhance the ``Session`` class generated by ``sessionmaker()`` to provide auto-contextualizing support. This means that whenever you create a ``Session`` instance with its constructor, you get an *existing* ``Session`` object which is bound to some "context". By default, this context is the current thread. This feature is what previously was accomplished using the ``sessioncontext`` SQLAlchemy extension.
+
+Creating a Thread-local Context
+-------------------------------
+
+The ``scoped_session()`` function wraps around the ``sessionmaker()`` function, and produces an object which behaves the same as the ``Session`` subclass returned by ``sessionmaker()``::
+
+ from sqlalchemy.orm import scoped_session, sessionmaker
+ Session = scoped_session(sessionmaker())
+
+However, when you instantiate this ``Session`` "class", in reality the object is pulled from a threadlocal variable, or if it doesn't exist yet, it's created using the underlying class generated by ``sessionmaker()``::
+
+ >>> # call Session() the first time. the new Session instance is created.
+ >>> session = Session()
+
+ >>> # later, in the same application thread, someone else calls Session()
+ >>> session2 = Session()
+
+ >>> # the two Session objects are *the same* object
+ >>> session is session2
+ True
+
+Since the ``Session()`` constructor now returns the same ``Session`` object every time within the current thread, the object returned by ``scoped_session()`` also implements most of the ``Session`` methods and properties at the "class" level, such that you don't even need to instantiate ``Session()``::
+
+ # create some objects
+ u1 = User()
+ u2 = User()
+
+ # save to the contextual session, without instantiating
+ Session.add(u1)
+ Session.add(u2)
+
+ # view the "new" attribute
+ assert u1 in Session.new
+
+ # commit changes
+ Session.commit()
+
+The contextual session may be disposed of by calling ``Session.remove()``::
+
+ # remove current contextual session
+ Session.remove()
+
+After ``remove()`` is called, the next operation with the contextual session will start a new ``Session`` for the current thread.
+
+Lifespan of a Contextual Session
+--------------------------------
+
+A (really, really) common question is: when is the contextual session created, and when is it disposed of? We'll consider a typical lifespan as used in a web application::
+
+ Web Server Web Framework User-defined Controller Call
+ -------------- -------------- ------------------------------
+ web request ->
+ call controller -> # call Session(). this establishes a new,
+ # contextual Session.
+ session = Session()
+
+ # load some objects, save some changes
+ objects = session.query(MyClass).all()
+
+ # some other code calls Session, it's the
+ # same contextual session as "sess"
+ session2 = Session()
+ session2.add(foo)
+ session2.commit()
+
+ # generate content to be returned
+ return generate_content()
+ Session.remove() <-
+ web response <-
+
+The above example illustrates an explicit call to ``Session.remove()``. This has the effect that each web request starts fresh with a brand new session. When integrating with a web framework, there are actually many options on how to proceed for this step, particularly as of version 0.5:
+
+* Session.remove() - this is the most cut and dry approach; the ``Session`` is thrown away, all of its transactional/connection resources are closed out, everything within it is explicitly gone. A new ``Session`` will be used on the next request.
+* Session.close() - Similar to calling ``remove()``, in that all objects are explicitly expunged and all transactional/connection resources closed, except the actual ``Session`` object hangs around. It doesn't make too much difference here unless the start of the web request would like to pass specific options to the initial construction of ``Session()``, such as a specific ``Engine`` to bind to.
+* Session.commit() - In this case, the behavior is that any remaining changes pending are flushed, and the transaction is committed. The full state of the session is expired, so that when the next web request is started, all data will be reloaded. In reality, the contents of the ``Session`` are weakly referenced anyway so it's likely that it will be empty on the next request in any case.
+* Session.rollback() - Similar to calling commit, except we assume that the user would have called commit explicitly if that was desired; the ``rollback()`` ensures that no transactional state remains and expires all data, in the case that the request was aborted and did not roll back itself.
+* do nothing - this is a valid option as well. The controller code is responsible for doing one of the above steps at the end of the request.
+
+Scoped Session API docs: :func:`sqlalchemy.orm.scoped_session`
+
+.. _session_partitioning:
+
+Partitioning Strategies
+=======================
+
+Vertical Partitioning
+---------------------
+
+Vertical partitioning places different kinds of objects, or different tables, across multiple databases::
+
+ engine1 = create_engine('postgres://db1')
+ engine2 = create_engine('postgres://db2')
+
+ Session = sessionmaker(twophase=True)
+
+ # bind User operations to engine 1, Account operations to engine 2
+ Session.configure(binds={User:engine1, Account:engine2})
+
+ session = Session()
+
+Horizontal Partitioning
+-----------------------
+
+Horizontal partitioning partitions the rows of a single table (or a set of tables) across multiple databases.
+
+See the "sharding" example in `attribute_shard.py <http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/examples/sharding/attribute_shard.py>`_
+
+Extending Session
+=================
+
+Extending the session can be achieved through subclassing, as well as through a simple extension class called :class:`~sqlalchemy.orm.interfaces.SessionExtension`, which resembles the style of :ref:`extending_mapper`. See the docstrings for more information on this class's methods.
+
+Basic usage is similar to :class:`~sqlalchemy.orm.interfaces.MapperExtension`::
+
+ class MySessionExtension(SessionExtension):
+ def before_commit(self, session):
+ print "before commit!"
+
+ Session = sessionmaker(extension=MySessionExtension())
+
+or with :func:`~sqlalchemy.orm.create_session()`::
+
+ session = create_session(extension=MySessionExtension())
+
+The same ``SessionExtension`` instance can be used with any number of sessions.
--- /dev/null
+.. _sqlexpression_toplevel:
+
+================================
+SQL Expression Language Tutorial
+================================
+
+This tutorial will cover SQLAlchemy SQL Expressions, which are Python constructs that represent SQL statements. The tutorial is in doctest format, meaning each ``>>>`` line represents something you can type at a Python command prompt, and the following text represents the expected return value. The tutorial has no prerequisites.
+
+Version Check
+=============
+
+
+A quick check to verify that we are on at least **version 0.5** of SQLAlchemy:
+
+.. sourcecode:: pycon+sql
+
+ >>> import sqlalchemy
+ >>> sqlalchemy.__version__ # doctest:+SKIP
+ 0.5.0
+
+Connecting
+==========
+
+
+For this tutorial we will use an in-memory-only SQLite database. This is an easy way to test things without needing to have an actual database defined anywhere. To connect we use ``create_engine()``:
+
+.. sourcecode:: pycon+sql
+
+ >>> from sqlalchemy import create_engine
+ >>> engine = create_engine('sqlite:///:memory:', echo=True)
+
+The ``echo`` flag is a shortcut to setting up SQLAlchemy logging, which is accomplished via Python's standard ``logging`` module. With it enabled, we'll see all the generated SQL produced. If you are working through this tutorial and want less output generated, set it to ``False``. This tutorial will format the SQL behind a popup window so it doesn't get in our way; just click the "SQL" links to see what's being generated.
+
+Define and Create Tables
+=========================
+
+
+The SQL Expression Language constructs its expressions in most cases against table columns. In SQLAlchemy, a column is most often represented by an object called ``Column``, and in all cases a ``Column`` is associated with a ``Table``. A collection of ``Table`` objects and their associated child objects is referred to as **database metadata**. In this tutorial we will explicitly lay out several ``Table`` objects, but note that SA can also "import" whole sets of ``Table`` objects automatically from an existing database (this process is called **table reflection**).
+
+We define our tables all within a catalog called ``MetaData``, using the ``Table`` construct, which resembles regular SQL CREATE TABLE statements. We'll make two tables, one of which represents "users" in an application, and another which represents zero or more "email addresses" for each row in the "users" table:
+
+.. sourcecode:: pycon+sql
+
+ >>> from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
+ >>> metadata = MetaData()
+ >>> users = Table('users', metadata,
+ ... Column('id', Integer, primary_key=True),
+ ... Column('name', String),
+ ... Column('fullname', String),
+ ... )
+
+ >>> addresses = Table('addresses', metadata,
+ ... Column('id', Integer, primary_key=True),
+ ... Column('user_id', None, ForeignKey('users.id')),
+ ... Column('email_address', String, nullable=False)
+ ... )
+
+All about how to define ``Table`` objects, as well as how to create them from an existing database automatically, is described in :ref:`metadata_toplevel`.
+
+Next, to tell the ``MetaData`` we'd actually like to create our selection of tables for real inside the SQLite database, we use ``create_all()``, passing it the ``engine`` instance which points to our database. This will check for the presence of each table first before creating, so it's safe to call multiple times:
+
+.. sourcecode:: pycon+sql
+
+ {sql}>>> metadata.create_all(engine) #doctest: +NORMALIZE_WHITESPACE
+ PRAGMA table_info("users")
+ {}
+ PRAGMA table_info("addresses")
+ {}
+ CREATE TABLE users (
+ id INTEGER NOT NULL,
+ name VARCHAR,
+ fullname VARCHAR,
+ PRIMARY KEY (id)
+ )
+ {}
+ COMMIT
+ CREATE TABLE addresses (
+ id INTEGER NOT NULL,
+ user_id INTEGER,
+ email_address VARCHAR NOT NULL,
+ PRIMARY KEY (id),
+ FOREIGN KEY(user_id) REFERENCES users (id)
+ )
+ {}
+ COMMIT
+
+Users familiar with the syntax of CREATE TABLE may notice that the VARCHAR columns were generated without a length; on SQLite, this is a valid datatype, but on most databases it's not allowed. So if running this tutorial on a database such as PostgreSQL or MySQL, and you wish to use SQLAlchemy to generate the tables, a "length" may be provided to the ``String`` type as below::
+
+ Column('name', String(50))
+
+The length field on ``String``, as well as similar fields available on ``Integer``, ``Numeric``, etc. are not referenced by SQLAlchemy other than when creating tables.
+
+Insert Expressions
+==================
+
+The first SQL expression we'll create is the ``Insert`` construct, which represents an INSERT statement. This is typically created relative to its target table::
+
+ >>> ins = users.insert()
+
+To see a sample of the SQL this construct produces, use the ``str()`` function::
+
+ >>> str(ins)
+ 'INSERT INTO users (id, name, fullname) VALUES (:id, :name, :fullname)'
+
+Notice above that the INSERT statement names every column in the ``users`` table. This can be limited by using the ``values`` keyword, which establishes the VALUES clause of the INSERT explicitly::
+
+ >>> ins = users.insert(values={'name':'jack', 'fullname':'Jack Jones'})
+ >>> str(ins)
+ 'INSERT INTO users (name, fullname) VALUES (:name, :fullname)'
+
+Above, while the ``values`` keyword limited the VALUES clause to just two columns, the actual data we placed in ``values`` didn't get rendered into the string; instead we got named bind parameters. As it turns out, our data *is* stored within our ``Insert`` construct, but it typically only comes out when the statement is actually executed; since the data consists of literal values, SQLAlchemy automatically generates bind parameters for them. We can peek at this data for now by looking at the compiled form of the statement::
+
+ >>> ins.compile().params #doctest: +NORMALIZE_WHITESPACE
+ {'fullname': 'Jack Jones', 'name': 'jack'}
+
+Executing
+==========
+
+The interesting part of an ``Insert`` is executing it. In this tutorial, we will generally focus on the most explicit method of executing a SQL construct, and later touch upon some "shortcut" ways to do it. The ``engine`` object we created is a repository for database connections capable of issuing SQL to the database. To acquire a connection, we use the ``connect()`` method::
+
+ >>> conn = engine.connect()
+ >>> conn #doctest: +ELLIPSIS
+ <sqlalchemy.engine.base.Connection object at 0x...>
+
+The ``Connection`` object represents an actively checked out DBAPI connection resource. Let's feed it our ``Insert`` object and see what happens:
+
+.. sourcecode:: pycon+sql
+
+ >>> result = conn.execute(ins)
+ {opensql}INSERT INTO users (name, fullname) VALUES (?, ?)
+ ['jack', 'Jack Jones']
+ COMMIT
+
+So the INSERT statement was now issued to the database, although we got positional "qmark" bind parameters instead of "named" bind parameters in the output. How come? Because when executed, the ``Connection`` used the SQLite **dialect** to help generate the statement; when we use the ``str()`` function, the statement isn't aware of this dialect, and falls back onto a default which uses named parameters. We can view this manually as follows:
+
+.. sourcecode:: pycon+sql
+
+ >>> ins.bind = engine
+ >>> str(ins)
+ 'INSERT INTO users (name, fullname) VALUES (?, ?)'
+
+What about the ``result`` variable we got when we called ``execute()``? As the SQLAlchemy ``Connection`` object references a DBAPI connection, the result, known as a ``ResultProxy`` object, is analogous to the DBAPI cursor object. In the case of an INSERT, we can get important information from it, such as the primary key values which were generated from our statement:
+
+.. sourcecode:: pycon+sql
+
+ >>> result.last_inserted_ids()
+ [1]
+
+The value of ``1`` was automatically generated by SQLite, but only because we did not specify the ``id`` column in our ``Insert`` statement; otherwise, our explicit value would have been used. In either case, SQLAlchemy always knows how to get at a newly generated primary key value, even though the method of generating them is different across different databases; each database's ``Dialect`` knows the specific steps needed to determine the correct value (or values; note that ``last_inserted_ids()`` returns a list so that it supports composite primary keys).
+
+Executing Multiple Statements
+==============================
+
+
+Our insert example above was intentionally a little drawn out to show some various behaviors of expression language constructs. In the usual case, an ``Insert`` statement is compiled against the parameters sent to the ``execute()`` method on ``Connection``, so that there's no need to use the ``values`` keyword with ``Insert``. Let's create a generic ``Insert`` statement again and use it in the "normal" way:
+
+.. sourcecode:: pycon+sql
+
+ >>> ins = users.insert()
+ >>> conn.execute(ins, id=2, name='wendy', fullname='Wendy Williams') # doctest: +ELLIPSIS
+ {opensql}INSERT INTO users (id, name, fullname) VALUES (?, ?, ?)
+ [2, 'wendy', 'Wendy Williams']
+ COMMIT
+ {stop}<sqlalchemy.engine.base.ResultProxy object at 0x...>
+
+Above, because we specified all three columns in the ``execute()`` method, the compiled ``Insert`` included all three columns. The ``Insert`` statement is compiled at execution time based on the parameters we specified; if we specified fewer parameters, the ``Insert`` would have fewer entries in its VALUES clause.
+
+To issue many inserts using DBAPI's ``executemany()`` method, we can send in a list of dictionaries each containing a distinct set of parameters to be inserted, as we do here to add some email addresses:
+
+.. sourcecode:: pycon+sql
+
+ >>> conn.execute(addresses.insert(), [ # doctest: +ELLIPSIS
+ ... {'user_id': 1, 'email_address' : 'jack@yahoo.com'},
+ ... {'user_id': 1, 'email_address' : 'jack@msn.com'},
+ ... {'user_id': 2, 'email_address' : 'www@www.org'},
+ ... {'user_id': 2, 'email_address' : 'wendy@aol.com'},
+ ... ])
+ {opensql}INSERT INTO addresses (user_id, email_address) VALUES (?, ?)
+ [[1, 'jack@yahoo.com'], [1, 'jack@msn.com'], [2, 'www@www.org'], [2, 'wendy@aol.com']]
+ COMMIT
+ {stop}<sqlalchemy.engine.base.ResultProxy object at 0x...>
+
+Above, we again relied upon SQLite's automatic generation of primary key identifiers for each ``addresses`` row.
+
+When executing multiple sets of parameters, each dictionary must have the **same** set of keys; i.e. you can't have fewer keys in some dictionaries than others. This is because the ``Insert`` statement is compiled against the **first** dictionary in the list, and it's assumed that all subsequent argument dictionaries are compatible with that statement.
+
+Connectionless / Implicit Execution
+====================================
+
+
+We're executing our ``Insert`` using a ``Connection``. There are two options that allow you to not have to deal with the connection part. You can execute in the **connectionless** style, using the engine, which opens and closes a connection for you:
+
+.. sourcecode:: pycon+sql
+
+ {sql}>>> result = engine.execute(users.insert(), name='fred', fullname="Fred Flintstone")
+ INSERT INTO users (name, fullname) VALUES (?, ?)
+ ['fred', 'Fred Flintstone']
+ COMMIT
+
+and you can save even more steps than that, if you connect the ``Engine`` to the ``MetaData`` object we created earlier. When this is done, all SQL expressions which involve tables within the ``MetaData`` object will be automatically **bound** to the ``Engine``. In this case, we call it **implicit execution**:
+
+.. sourcecode:: pycon+sql
+
+ >>> metadata.bind = engine
+ {sql}>>> result = users.insert().execute(name="mary", fullname="Mary Contrary")
+ INSERT INTO users (name, fullname) VALUES (?, ?)
+ ['mary', 'Mary Contrary']
+ COMMIT
+
+When the ``MetaData`` is bound, statements will also compile against the engine's dialect. Since a lot of the examples here assume the default dialect, we'll detach the engine from the metadata which we just attached:
+
+.. sourcecode:: pycon+sql
+
+ >>> metadata.bind = None
+
+Detailed examples of connectionless and implicit execution are available in the "Engines" chapter: :ref:`dbengine_implicit`.
+
+Selecting
+==========
+
+
+We began with inserts just so that our test database had some data in it. The more interesting part of the data is selecting it! We'll cover UPDATE and DELETE statements later. The primary construct used to generate SELECT statements is the ``select()`` function:
+
+.. sourcecode:: pycon+sql
+
+ >>> from sqlalchemy.sql import select
+ >>> s = select([users])
+ >>> result = conn.execute(s)
+ {opensql}SELECT users.id, users.name, users.fullname
+ FROM users
+ []
+
+Above, we issued a basic ``select()`` call, placing the ``users`` table within the COLUMNS clause of the select, and then executing. SQLAlchemy expanded the ``users`` table into the set of each of its columns, and also generated a FROM clause for us. The result returned is again a ``ResultProxy`` object, which acts much like a DBAPI cursor, including methods such as ``fetchone()`` and ``fetchall()``. The easiest way to get rows from it is to just iterate:
+
+.. sourcecode:: pycon+sql
+
+ >>> for row in result:
+ ... print row
+ (1, u'jack', u'Jack Jones')
+ (2, u'wendy', u'Wendy Williams')
+ (3, u'fred', u'Fred Flintstone')
+ (4, u'mary', u'Mary Contrary')
+
+Above, we see that printing each row produces a simple tuple-like result. We have more options for accessing the data in each row. One very common way is through dictionary access, using the string names of columns:
+
+.. sourcecode:: pycon+sql
+
+ {sql}>>> result = conn.execute(s)
+ SELECT users.id, users.name, users.fullname
+ FROM users
+ []
+
+ >>> row = result.fetchone()
+ >>> print "name:", row['name'], "; fullname:", row['fullname']
+ name: jack ; fullname: Jack Jones
+
+Integer indexes work as well:
+
+.. sourcecode:: pycon+sql
+
+ >>> row = result.fetchone()
+ >>> print "name:", row[1], "; fullname:", row[2]
+ name: wendy ; fullname: Wendy Williams
+
+But another way, whose usefulness will become apparent later on, is to use the ``Column`` objects directly as keys:
+
+.. sourcecode:: pycon+sql
+
+ {sql}>>> for row in conn.execute(s):
+ ... print "name:", row[users.c.name], "; fullname:", row[users.c.fullname]
+ SELECT users.id, users.name, users.fullname
+ FROM users
+ []
+ {stop}name: jack ; fullname: Jack Jones
+ name: wendy ; fullname: Wendy Williams
+ name: fred ; fullname: Fred Flintstone
+ name: mary ; fullname: Mary Contrary
+
+Result sets which have pending rows remaining should be explicitly closed before discarding. While the resources referenced by the ``ResultProxy`` will be closed when the object is garbage collected, it's better to make it explicit as some database APIs are very picky about such things:
+
+.. sourcecode:: pycon+sql
+
+ >>> result.close()
+
+If we'd like to more carefully control the columns which are placed in the COLUMNS clause of the select, we reference individual ``Column`` objects from our ``Table``. These are available as named attributes off the ``c`` attribute of the ``Table`` object:
+
+.. sourcecode:: pycon+sql
+
+ >>> s = select([users.c.name, users.c.fullname])
+ {sql}>>> result = conn.execute(s)
+ SELECT users.name, users.fullname
+ FROM users
+ []
+ {stop}>>> for row in result: #doctest: +NORMALIZE_WHITESPACE
+ ... print row
+ (u'jack', u'Jack Jones')
+ (u'wendy', u'Wendy Williams')
+ (u'fred', u'Fred Flintstone')
+ (u'mary', u'Mary Contrary')
+
+Let's observe something interesting about the FROM clause. Whereas the generated statement contains two distinct sections, a "SELECT columns" part and a "FROM table" part, our ``select()`` construct only has a list containing columns. How does this work? Let's try putting *two* tables into our ``select()`` statement:
+
+.. sourcecode:: pycon+sql
+
+ {sql}>>> for row in conn.execute(select([users, addresses])):
+ ... print row
+ SELECT users.id, users.name, users.fullname, addresses.id, addresses.user_id, addresses.email_address
+ FROM users, addresses
+ []
+ {stop}(1, u'jack', u'Jack Jones', 1, 1, u'jack@yahoo.com')
+ (1, u'jack', u'Jack Jones', 2, 1, u'jack@msn.com')
+ (1, u'jack', u'Jack Jones', 3, 2, u'www@www.org')
+ (1, u'jack', u'Jack Jones', 4, 2, u'wendy@aol.com')
+ (2, u'wendy', u'Wendy Williams', 1, 1, u'jack@yahoo.com')
+ (2, u'wendy', u'Wendy Williams', 2, 1, u'jack@msn.com')
+ (2, u'wendy', u'Wendy Williams', 3, 2, u'www@www.org')
+ (2, u'wendy', u'Wendy Williams', 4, 2, u'wendy@aol.com')
+ (3, u'fred', u'Fred Flintstone', 1, 1, u'jack@yahoo.com')
+ (3, u'fred', u'Fred Flintstone', 2, 1, u'jack@msn.com')
+ (3, u'fred', u'Fred Flintstone', 3, 2, u'www@www.org')
+ (3, u'fred', u'Fred Flintstone', 4, 2, u'wendy@aol.com')
+ (4, u'mary', u'Mary Contrary', 1, 1, u'jack@yahoo.com')
+ (4, u'mary', u'Mary Contrary', 2, 1, u'jack@msn.com')
+ (4, u'mary', u'Mary Contrary', 3, 2, u'www@www.org')
+ (4, u'mary', u'Mary Contrary', 4, 2, u'wendy@aol.com')
+
+It placed **both** tables into the FROM clause. But also, it made a real mess. Those who are familiar with SQL joins know that this is a **Cartesian product**; each row from the ``users`` table is produced against each row from the ``addresses`` table. So to put some sanity into this statement, we need a WHERE clause. Which brings us to the second argument of ``select()``:
+
+.. sourcecode:: pycon+sql
+
+ >>> s = select([users, addresses], users.c.id==addresses.c.user_id)
+ {sql}>>> for row in conn.execute(s):
+ ... print row
+ SELECT users.id, users.name, users.fullname, addresses.id, addresses.user_id, addresses.email_address
+ FROM users, addresses
+ WHERE users.id = addresses.user_id
+ []
+ {stop}(1, u'jack', u'Jack Jones', 1, 1, u'jack@yahoo.com')
+ (1, u'jack', u'Jack Jones', 2, 1, u'jack@msn.com')
+ (2, u'wendy', u'Wendy Williams', 3, 2, u'www@www.org')
+ (2, u'wendy', u'Wendy Williams', 4, 2, u'wendy@aol.com')
+
+That looks a lot better: we added an expression to our ``select()`` which had the effect of adding ``WHERE users.id = addresses.user_id`` to our statement, and our results were narrowed down so that the join of ``users`` and ``addresses`` rows made sense. But let's look at that expression. It's using just a Python equality operator between two different ``Column`` objects. It should be clear that something is up. Saying ``1==1`` produces ``True``, and ``1==2`` produces ``False``, not a WHERE clause. So let's see exactly what that expression is doing:
+
+.. sourcecode:: pycon+sql
+
+ >>> users.c.id==addresses.c.user_id #doctest: +ELLIPSIS
+ <sqlalchemy.sql.expression._BinaryExpression object at 0x...>
+
+Wow, surprise! This is neither ``True`` nor ``False``. Well, what is it?
+
+.. sourcecode:: pycon+sql
+
+ >>> str(users.c.id==addresses.c.user_id)
+ 'users.id = addresses.user_id'
+
+As you can see, the ``==`` operator is producing an object that is very much like the ``Insert`` and ``select()`` objects we've made so far, thanks to Python's ``__eq__()`` operator overloading hook; call ``str()`` on it and it produces SQL. By now, one can see that everything we are working with is ultimately the same type of object. SQLAlchemy terms the base class of all of these expressions ``sqlalchemy.sql.ClauseElement``.
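+
+As a quick check, both the expression and the constructs we've built are instances of it (a sketch, assuming the ``sqlalchemy.sql.expression.ClauseElement`` import path):
+
+.. sourcecode:: pycon+sql
+
+    >>> from sqlalchemy.sql.expression import ClauseElement
+    >>> isinstance(users.c.id==addresses.c.user_id, ClauseElement)
+    True
+    >>> isinstance(select([users]), ClauseElement)
+    True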
+
+Operators
+==========
+
+
+Since we've stumbled upon SQLAlchemy's operator paradigm, let's go through some of its capabilities. We've seen how to equate two columns to each other:
+
+.. sourcecode:: pycon+sql
+
+ >>> print users.c.id==addresses.c.user_id
+ users.id = addresses.user_id
+
+If we use a literal value (by "literal" we mean a plain Python value, not a SQLAlchemy clause object), we get a bind parameter:
+
+.. sourcecode:: pycon+sql
+
+ >>> print users.c.id==7
+ users.id = :id_1
+
+The ``7`` literal is embedded in the resulting ``ClauseElement``; we can use the same trick we did with the ``Insert`` object to see it:
+
+.. sourcecode:: pycon+sql
+
+ >>> (users.c.id==7).compile().params
+ {'id_1': 7}
+
+Most Python operators, as it turns out, produce a SQL expression here: equals, not equals, etc.:
+
+.. sourcecode:: pycon+sql
+
+ >>> print users.c.id != 7
+ users.id != :id_1
+
+ >>> # None converts to IS NULL
+ >>> print users.c.name == None
+ users.name IS NULL
+
+ >>> # reverse works too
+ >>> print 'fred' > users.c.name
+ users.name < :name_1
+
+If we add two integer columns together, we get an addition expression:
+
+.. sourcecode:: pycon+sql
+
+ >>> print users.c.id + addresses.c.id
+ users.id + addresses.id
+
+Interestingly, the type of the ``Column`` is important! If we use ``+`` with two string based columns (recall we put types like ``Integer`` and ``String`` on our ``Column`` objects at the beginning), we get something different:
+
+.. sourcecode:: pycon+sql
+
+ >>> print users.c.name + users.c.fullname
+ users.name || users.fullname
+
+Here, ``||`` is the string concatenation operator used on most databases, but not all of them. MySQL users, fear not:
+
+.. sourcecode:: pycon+sql
+
+ >>> print (users.c.name + users.c.fullname).compile(bind=create_engine('mysql://'))
+ concat(users.name, users.fullname)
+
+The above illustrates the SQL that's generated for an ``Engine`` that's connected to a MySQL database; the ``||`` operator now compiles as MySQL's ``concat()`` function.
+
+If you have come across an operator which really isn't available, you can always use the ``op()`` method; this generates whatever operator you need:
+
+.. sourcecode:: pycon+sql
+
+ >>> print users.c.name.op('tiddlywinks')('foo')
+ users.name tiddlywinks :name_1
+
+Conjunctions
+=============
+
+
+We'd like to show off some of our operators inside of ``select()`` constructs. But we need to lump them together a little more, so let's first introduce some conjunctions. Conjunctions are those little words like AND and OR that put things together. We'll also hit upon NOT. AND, OR and NOT are available as corresponding functions that SQLAlchemy provides (notice we also throw in a LIKE):
+
+.. sourcecode:: pycon+sql
+
+ >>> from sqlalchemy.sql import and_, or_, not_
+ >>> print and_(users.c.name.like('j%'), users.c.id==addresses.c.user_id, #doctest: +NORMALIZE_WHITESPACE
+ ... or_(addresses.c.email_address=='wendy@aol.com', addresses.c.email_address=='jack@yahoo.com'),
+ ... not_(users.c.id>5))
+ users.name LIKE :name_1 AND users.id = addresses.user_id AND
+ (addresses.email_address = :email_address_1 OR addresses.email_address = :email_address_2)
+ AND users.id <= :id_1
+
+And you can also use the re-jiggered bitwise AND, OR and NOT operators, although because of Python operator precedence you have to watch your parentheses:
+
+.. sourcecode:: pycon+sql
+
+ >>> print users.c.name.like('j%') & (users.c.id==addresses.c.user_id) & \
+ ... ((addresses.c.email_address=='wendy@aol.com') | (addresses.c.email_address=='jack@yahoo.com')) \
+ ... & ~(users.c.id>5) # doctest: +NORMALIZE_WHITESPACE
+ users.name LIKE :name_1 AND users.id = addresses.user_id AND
+ (addresses.email_address = :email_address_1 OR addresses.email_address = :email_address_2)
+ AND users.id <= :id_1
+
+So with all of this vocabulary, let's select all users who have an email address at AOL or MSN, whose name starts with a letter between "m" and "z", and we'll also generate a column containing their full name combined with their email address. We will add two new constructs to this statement, ``between()`` and ``label()``. ``between()`` produces a BETWEEN clause, and ``label()`` is used in a column expression to produce labels using the ``AS`` keyword; it's recommended when selecting from expressions that otherwise would not have a name:
+
+.. sourcecode:: pycon+sql
+
+ >>> s = select([(users.c.fullname + ", " + addresses.c.email_address).label('title')],
+ ... and_(
+ ... users.c.id==addresses.c.user_id,
+ ... users.c.name.between('m', 'z'),
+ ... or_(
+ ... addresses.c.email_address.like('%@aol.com'),
+ ... addresses.c.email_address.like('%@msn.com')
+ ... )
+ ... )
+ ... )
+ >>> print conn.execute(s).fetchall() #doctest: +NORMALIZE_WHITESPACE
+ SELECT users.fullname || ? || addresses.email_address AS title
+ FROM users, addresses
+ WHERE users.id = addresses.user_id AND users.name BETWEEN ? AND ? AND
+ (addresses.email_address LIKE ? OR addresses.email_address LIKE ?)
+ [', ', 'm', 'z', '%@aol.com', '%@msn.com']
+ [(u'Wendy Williams, wendy@aol.com',)]
+
+Once again, SQLAlchemy figured out the FROM clause for our statement. In fact it will determine the FROM clause based on all of its other bits: the columns clause, the where clause, and also some other elements we haven't covered yet, including ORDER BY, GROUP BY, and HAVING.
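+
+As a small sketch of that (marked ``+SKIP``; the FROM ordering shown is illustrative), a table mentioned only in the WHERE clause still finds its way into the FROM clause:
+
+.. sourcecode:: pycon+sql
+
+    >>> print select([users.c.name], addresses.c.user_id==users.c.id) # doctest: +SKIP
+    SELECT users.name
+    FROM users, addresses
+    WHERE addresses.user_id = users.id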
+
+Using Text
+===========
+
+
+Our last example really became a handful to type. Translating a SQL expression one already understands as text into a Python construct which groups its components programmatically can be hard. That's why SQLAlchemy lets you just use strings too. The ``text()`` construct represents any textual statement. To use bind parameters with ``text()``, always use the named colon format. Below, we create a ``text()`` and execute it, feeding in the bind parameters to the ``execute()`` method:
+
+.. sourcecode:: pycon+sql
+
+ >>> from sqlalchemy.sql import text
+ >>> s = text("""SELECT users.fullname || ', ' || addresses.email_address AS title
+ ... FROM users, addresses
+ ... WHERE users.id = addresses.user_id AND users.name BETWEEN :x AND :y AND
+ ... (addresses.email_address LIKE :e1 OR addresses.email_address LIKE :e2)
+ ... """)
+ {sql}>>> print conn.execute(s, x='m', y='z', e1='%@aol.com', e2='%@msn.com').fetchall() # doctest:+NORMALIZE_WHITESPACE
+ SELECT users.fullname || ', ' || addresses.email_address AS title
+ FROM users, addresses
+ WHERE users.id = addresses.user_id AND users.name BETWEEN ? AND ? AND
+ (addresses.email_address LIKE ? OR addresses.email_address LIKE ?)
+ ['m', 'z', '%@aol.com', '%@msn.com']
+ {stop}[(u'Wendy Williams, wendy@aol.com',)]
+
+To gain a "hybrid" approach, any of SA's SQL constructs can have text freely intermingled wherever you like: the ``text()`` construct can be placed within any other ``ClauseElement`` construct, and when used in a non-operator context, a plain string may be given, which converts to ``text()`` automatically. Below we combine the usage of ``text()`` and strings with our constructed ``select()`` object, using the ``select()`` object to structure the statement and the ``text()``/strings to provide all the content within the structure. For this example, SQLAlchemy is not given any ``Column`` or ``Table`` objects in any of its expressions, so it cannot generate a FROM clause. So we also give it the ``from_obj`` keyword argument, which is a list of ``ClauseElements`` (or strings) to be placed within the FROM clause:
+
+.. sourcecode:: pycon+sql
+
+ >>> s = select([text("users.fullname || ', ' || addresses.email_address AS title")],
+ ... and_(
+ ... "users.id = addresses.user_id",
+ ... "users.name BETWEEN 'm' AND 'z'",
+ ... "(addresses.email_address LIKE :x OR addresses.email_address LIKE :y)"
+ ... ),
+ ... from_obj=['users', 'addresses']
+ ... )
+ {sql}>>> print conn.execute(s, x='%@aol.com', y='%@msn.com').fetchall() #doctest: +NORMALIZE_WHITESPACE
+ SELECT users.fullname || ', ' || addresses.email_address AS title
+ FROM users, addresses
+ WHERE users.id = addresses.user_id AND users.name BETWEEN 'm' AND 'z' AND (addresses.email_address LIKE ? OR addresses.email_address LIKE ?)
+ ['%@aol.com', '%@msn.com']
+ {stop}[(u'Wendy Williams, wendy@aol.com',)]
+
+Going from constructed SQL to text, we lose some capabilities. We lose the capability for SQLAlchemy to compile our expression to a specific target database; above, our expression won't work with MySQL since it has no ``||`` construct. It also becomes more tedious for SQLAlchemy to be made aware of the datatypes in use; for example, if our bind parameters required UTF-8 encoding before going in, or conversion from a Python ``datetime`` into a string (as is required with SQLite), we would have to add extra information to our ``text()`` construct. Similar issues arise on the result set side, where SQLAlchemy also performs type-specific data conversion in some cases; still more information can be added to ``text()`` to work around this. But what we really lose from our statement is the ability to manipulate it, transform it, and analyze it. These features are critical when using the ORM, which makes heavy usage of relational transformations.
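+
+As a sketch of that "extra information" (assuming the ``bindparams`` and ``typemap`` keyword arguments to ``text()``; marked ``+SKIP``):
+
+.. sourcecode:: pycon+sql
+
+    >>> from sqlalchemy.sql import bindparam
+    >>> t = text("SELECT users.fullname FROM users WHERE users.name = :name", # doctest: +SKIP
+    ...     bindparams=[bindparam('name', type_=String)],
+    ...     typemap={'fullname': String})
+
+To show off what we mean, we'll first introduce the ALIAS construct and the JOIN construct, just so we have some juicier bits to play with.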
+
+Using Aliases
+==============
+
+
+The alias corresponds to a "renamed" version of a table or arbitrary relation, which occurs anytime you say "SELECT .. FROM sometable AS someothername". The ``AS`` creates a new name for the table. Aliases are super important in SQL as they allow you to reference the same table more than once. Scenarios where you need to do this include when you self-join a table to itself, or more commonly when you need to join from a parent table to a child table multiple times. For example, we know that our user ``jack`` has two email addresses. How can we locate jack based on the combination of those two addresses? We need to join to it twice. Let's construct two distinct aliases for the ``addresses`` table and join:
+
+.. sourcecode:: pycon+sql
+
+ >>> a1 = addresses.alias('a1')
+ >>> a2 = addresses.alias('a2')
+ >>> s = select([users], and_(
+ ... users.c.id==a1.c.user_id,
+ ... users.c.id==a2.c.user_id,
+ ... a1.c.email_address=='jack@msn.com',
+ ... a2.c.email_address=='jack@yahoo.com'
+ ... ))
+ {sql}>>> print conn.execute(s).fetchall()
+ SELECT users.id, users.name, users.fullname
+ FROM users, addresses AS a1, addresses AS a2
+ WHERE users.id = a1.user_id AND users.id = a2.user_id AND a1.email_address = ? AND a2.email_address = ?
+ ['jack@msn.com', 'jack@yahoo.com']
+ {stop}[(1, u'jack', u'Jack Jones')]
+
+Easy enough. One thing that we're going for with the SQL Expression Language is the melding of programmatic behavior with SQL generation. Coming up with names like ``a1`` and ``a2`` is messy; we really didn't need to use those names anywhere, as it's just the database that needed them. Plus, we might write some code that uses alias objects that came from several different places, and it's difficult to ensure that they all have unique names. So instead, we just let SQLAlchemy make the names for us, using "anonymous" aliases:
+
+.. sourcecode:: pycon+sql
+
+ >>> a1 = addresses.alias()
+ >>> a2 = addresses.alias()
+ >>> s = select([users], and_(
+ ... users.c.id==a1.c.user_id,
+ ... users.c.id==a2.c.user_id,
+ ... a1.c.email_address=='jack@msn.com',
+ ... a2.c.email_address=='jack@yahoo.com'
+ ... ))
+ {sql}>>> print conn.execute(s).fetchall()
+ SELECT users.id, users.name, users.fullname
+ FROM users, addresses AS addresses_1, addresses AS addresses_2
+ WHERE users.id = addresses_1.user_id AND users.id = addresses_2.user_id AND addresses_1.email_address = ? AND addresses_2.email_address = ?
+ ['jack@msn.com', 'jack@yahoo.com']
+ {stop}[(1, u'jack', u'Jack Jones')]
+
+One super-huge advantage of anonymous aliases is that not only did we not have to come up with a random name, but we can also be guaranteed that the above SQL string is **deterministically** generated to be the same every time. This is important for databases such as Oracle which cache compiled "query plans" for their statements, and need to see the same SQL string in order to make use of it.
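+
+A quick way to see that determinism (each call to ``str()`` compiles the construct anew):
+
+.. sourcecode:: pycon+sql
+
+    >>> str(s) == str(s)
+    True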
+
+Aliases can of course be used for anything which you can SELECT from, including SELECT statements themselves. We can self-join the ``users`` table back to the ``select()`` we've created by making an alias of the entire statement. The ``correlate(None)`` directive prevents SQLAlchemy from attempting to "correlate" the inner ``users`` table with the outer one:
+
+.. sourcecode:: pycon+sql
+
+ >>> a1 = s.correlate(None).alias()
+ >>> s = select([users.c.name], users.c.id==a1.c.id)
+ {sql}>>> print conn.execute(s).fetchall()
+ SELECT users.name
+ FROM users, (SELECT users.id AS id, users.name AS name, users.fullname AS fullname
+ FROM users, addresses AS addresses_1, addresses AS addresses_2
+ WHERE users.id = addresses_1.user_id AND users.id = addresses_2.user_id AND addresses_1.email_address = ? AND addresses_2.email_address = ?) AS anon_1
+ WHERE users.id = anon_1.id
+ ['jack@msn.com', 'jack@yahoo.com']
+ {stop}[(u'jack',)]
+
+Using Joins
+============
+
+
+We're halfway along to being able to construct any SELECT expression. The next cornerstone of the SELECT is the JOIN expression. We've already been doing joins in our examples, by just placing two tables in either the columns clause or the where clause of the ``select()`` construct. But if we want to make a real "JOIN" or "OUTERJOIN" construct, we use the ``join()`` and ``outerjoin()`` methods, most commonly accessed from the left table in the join:
+
+.. sourcecode:: pycon+sql
+
+ >>> print users.join(addresses)
+ users JOIN addresses ON users.id = addresses.user_id
+
+The alert reader will see more surprises; SQLAlchemy figured out how to JOIN the two tables! The ON condition of the join, as it's called, was automatically generated based on the ``ForeignKey`` object which we placed on the ``addresses`` table way at the beginning of this tutorial. Already the ``join()`` construct is looking like a much better way to join tables.
+
+Of course you can join on whatever expression you want, such as joining on all users whose email address begins with their username:
+
+.. sourcecode:: pycon+sql
+
+ >>> print users.join(addresses, addresses.c.email_address.like(users.c.name + '%'))
+ users JOIN addresses ON addresses.email_address LIKE users.name || :name_1
+
+When we create a ``select()`` construct, SQLAlchemy looks around at the tables we've mentioned and then places them in the FROM clause of the statement. When we use JOINs however, we know what FROM clause we want, so here we make use of the ``from_obj`` keyword argument:
+
+.. sourcecode:: pycon+sql
+
+ >>> s = select([users.c.fullname], from_obj=[
+ ... users.join(addresses, addresses.c.email_address.like(users.c.name + '%'))
+ ... ])
+ {sql}>>> print conn.execute(s).fetchall()
+ SELECT users.fullname
+ FROM users JOIN addresses ON addresses.email_address LIKE users.name || ?
+ ['%']
+ {stop}[(u'Jack Jones',), (u'Jack Jones',), (u'Wendy Williams',)]
+
+The ``outerjoin()`` function just creates ``LEFT OUTER JOIN`` constructs. It's used just like ``join()``:
+
+.. sourcecode:: pycon+sql
+
+ >>> s = select([users.c.fullname], from_obj=[users.outerjoin(addresses)])
+ >>> print s
+ SELECT users.fullname
+ FROM users LEFT OUTER JOIN addresses ON users.id = addresses.user_id
+
+That's the output ``outerjoin()`` produces, unless, of course, you're stuck in a gig using Oracle prior to version 9, and you've set up your engine (which would be using ``OracleDialect``) to use Oracle-specific SQL:
+
+.. sourcecode:: pycon+sql
+
+ >>> from sqlalchemy.databases.oracle import OracleDialect
+ >>> print s.compile(dialect=OracleDialect(use_ansi=False))
+ SELECT users.fullname
+ FROM users, addresses
+ WHERE users.id = addresses.user_id(+)
+
+If you don't know what that SQL means, don't worry! The secret tribe of Oracle DBAs don't want their black magic being found out ;).
+
+Intro to Generative Selects and Transformations
+================================================
+
+
+We've now gained the ability to construct very sophisticated statements. We can use all kinds of operators, table constructs, text, joins, and aliases. The point of all of this, as mentioned earlier, is not that it's an "easier" or "better" way to write SQL than just writing a SQL statement yourself; the point is that it's better for writing *programmatically generated* SQL which can be morphed and adapted as needed in automated scenarios.
+
+To support this, the ``select()`` construct we've been working with supports piecemeal construction, in addition to the "all at once" method we've been using. Suppose you're writing a search function which receives criteria and must construct a select from them. To accomplish this, for each criterion encountered you "generatively" apply it to an existing ``select()`` construct, adding new elements one at a time. We start with a basic ``select()`` constructed with the shortcut method available on the ``users`` table:
+
+.. sourcecode:: pycon+sql
+
+ >>> query = users.select()
+ >>> print query
+ SELECT users.id, users.name, users.fullname
+ FROM users
+
+We encounter a search criterion of "name='jack'", so we apply a WHERE criterion stating as much:
+
+.. sourcecode:: pycon+sql
+
+ >>> query = query.where(users.c.name=='jack')
+
+Next, we learn that they'd like the results in descending order by full name. We apply ORDER BY, using the extra ``desc()`` modifier:
+
+.. sourcecode:: pycon+sql
+
+ >>> query = query.order_by(users.c.fullname.desc())
+
+We also learn that they'd like only users who have an address at MSN. A quick way to tack this on is by using an EXISTS clause, which we correlate to the ``users`` table in the enclosing SELECT:
+
+.. sourcecode:: pycon+sql
+
+ >>> from sqlalchemy.sql import exists
+ >>> query = query.where(
+ ... exists([addresses.c.id],
+ ... and_(addresses.c.user_id==users.c.id, addresses.c.email_address.like('%@msn.com'))
+ ... ).correlate(users))
+
+And finally, the application also wants to see the listing of email addresses at once; so to save queries, we outerjoin the ``addresses`` table (using an outer join so that users with no addresses come back as well; since we're programmatic, we might not have kept track that we used an EXISTS clause against the ``addresses`` table too...). Additionally, since the ``users`` and ``addresses`` table both have a column named ``id``, let's isolate their names from each other in the COLUMNS clause by using labels:
+
+.. sourcecode:: pycon+sql
+
+ >>> query = query.column(addresses).select_from(users.outerjoin(addresses)).apply_labels()
+
+Let's bake for .0001 seconds and see what rises:
+
+.. sourcecode:: pycon+sql
+
+ >>> conn.execute(query).fetchall()
+ {opensql}SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, addresses.id AS addresses_id, addresses.user_id AS addresses_user_id, addresses.email_address AS addresses_email_address
+ FROM users LEFT OUTER JOIN addresses ON users.id = addresses.user_id
+ WHERE users.name = ? AND (EXISTS (SELECT addresses.id
+ FROM addresses
+ WHERE addresses.user_id = users.id AND addresses.email_address LIKE ?)) ORDER BY users.fullname DESC
+ ['jack', '%@msn.com']
+ {stop}[(1, u'jack', u'Jack Jones', 1, 1, u'jack@yahoo.com'), (1, u'jack', u'Jack Jones', 2, 1, u'jack@msn.com')]
+
+So we started small, added one little thing at a time, and at the end we have a huge statement... which actually works. Now let's do one more thing: the search function wants to add another ``email_address`` criterion, but it doesn't want to construct an alias of the ``addresses`` table itself; suppose many parts of the application are written to deal specifically with the ``addresses`` table, and changing all those functions to receive an arbitrary alias of it would be cumbersome. We can actually *convert* the ``addresses`` table within the *existing* statement to be an alias of itself, using ``replace_selectable()``:
+
+.. sourcecode:: pycon+sql
+
+ >>> a1 = addresses.alias()
+ >>> query = query.replace_selectable(addresses, a1)
+ >>> print query
+ {opensql}SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, addresses_1.id AS addresses_1_id, addresses_1.user_id AS addresses_1_user_id, addresses_1.email_address AS addresses_1_email_address
+ FROM users LEFT OUTER JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id
+ WHERE users.name = :name_1 AND (EXISTS (SELECT addresses_1.id
+ FROM addresses AS addresses_1
+ WHERE addresses_1.user_id = users.id AND addresses_1.email_address LIKE :email_address_1)) ORDER BY users.fullname DESC
+
+One more thing, though: with automatic labeling applied as well as anonymous aliasing, how do we retrieve the columns from the rows of this thing? The label for the ``email_address`` column is now the generated name ``addresses_1_email_address``, and in another statement it might be something different! This is where accessing result columns by ``Column`` object becomes very useful:
+
+.. sourcecode:: pycon+sql
+
+ {sql}>>> for row in conn.execute(query):
+ ... print "Name:", row[users.c.name], "; Email Address", row[a1.c.email_address]
+ SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, addresses_1.id AS addresses_1_id, addresses_1.user_id AS addresses_1_user_id, addresses_1.email_address AS addresses_1_email_address
+ FROM users LEFT OUTER JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id
+ WHERE users.name = ? AND (EXISTS (SELECT addresses_1.id
+ FROM addresses AS addresses_1
+ WHERE addresses_1.user_id = users.id AND addresses_1.email_address LIKE ?)) ORDER BY users.fullname DESC
+ ['jack', '%@msn.com']
+ {stop}Name: jack ; Email Address jack@yahoo.com
+ Name: jack ; Email Address jack@msn.com
+
+The above example, by its end, got significantly more intense than typical end-user constructed SQL will usually be. However, when writing higher-level tools such as ORMs, these techniques become much more significant. SQLAlchemy's ORM relies very heavily on techniques like this.
+
+Everything Else
+================
+
+The concepts of creating SQL expressions have been introduced. What's left are more variants of the same themes. So now we'll catalog the rest of the important things we'll need to know.
+
+Bind Parameter Objects
+----------------------
+
+
+Throughout all these examples, SQLAlchemy is busy creating bind parameters wherever literal expressions occur. You can also specify your own bind parameters with your own names, and use the same statement repeatedly. The database dialect converts to the appropriate named or positional style, as here where it converts to positional for SQLite:
+
+.. sourcecode:: pycon+sql
+
+ >>> from sqlalchemy.sql import bindparam
+ >>> s = users.select(users.c.name==bindparam('username'))
+ {sql}>>> conn.execute(s, username='wendy').fetchall()
+ SELECT users.id, users.name, users.fullname
+ FROM users
+ WHERE users.name = ?
+ ['wendy']
+ {stop}[(2, u'wendy', u'Wendy Williams')]
+
+Another important aspect of bind parameters is that they may be assigned a type. The type of the bind parameter will determine its behavior within expressions and also how the data bound to it is processed before being sent off to the database:
+
+.. sourcecode:: pycon+sql
+
+ >>> s = users.select(users.c.name.like(bindparam('username', type_=String) + text("'%'")))
+ {sql}>>> conn.execute(s, username='wendy').fetchall()
+ SELECT users.id, users.name, users.fullname
+ FROM users
+ WHERE users.name LIKE ? || '%'
+ ['wendy']
+ {stop}[(2, u'wendy', u'Wendy Williams')]
+
+
+Bind parameters of the same name can also be used multiple times, where only a single named value is needed in the execute parameters:
+
+.. sourcecode:: pycon+sql
+
+ >>> s = select([users, addresses],
+ ... users.c.name.like(bindparam('name', type_=String) + text("'%'")) |
+ ... addresses.c.email_address.like(bindparam('name', type_=String) + text("'@%'")),
+ ... from_obj=[users.outerjoin(addresses)])
+ {sql}>>> conn.execute(s, name='jack').fetchall()
+ SELECT users.id, users.name, users.fullname, addresses.id, addresses.user_id, addresses.email_address
+ FROM users LEFT OUTER JOIN addresses ON users.id = addresses.user_id
+ WHERE users.name LIKE ? || '%' OR addresses.email_address LIKE ? || '@%'
+ ['jack', 'jack']
+ {stop}[(1, u'jack', u'Jack Jones', 1, 1, u'jack@yahoo.com'), (1, u'jack', u'Jack Jones', 2, 1, u'jack@msn.com')]
+
+Functions
+---------
+
+
+SQL functions are created using the ``func`` object, which generates functions via attribute access:
+
+.. sourcecode:: pycon+sql
+
+ >>> from sqlalchemy.sql import func
+ >>> print func.now()
+ now()
+
+ >>> print func.concat('x', 'y')
+ concat(:param_1, :param_2)
+
+Certain functions are marked as "ANSI" functions, which means they don't get parentheses added after them, such as CURRENT_TIMESTAMP:
+
+.. sourcecode:: pycon+sql
+
+ >>> print func.current_timestamp()
+ CURRENT_TIMESTAMP
+
+Functions are most typically used in the columns clause of a select statement, and can be labeled as well as given a type. Labeling a function is recommended so that the result can be targeted in a result row based on a string name, and assigning it a type is required when you need result-set processing to occur, such as Unicode conversion or date conversions. Below, we use the result function ``scalar()`` to just read the first column of the first row and then close the result; the label, even though present, is not important in this case:
+
+.. sourcecode:: pycon+sql
+
+ >>> print conn.execute(
+ ... select([func.max(addresses.c.email_address, type_=String).label('maxemail')])
+ ... ).scalar()
+ {opensql}SELECT max(addresses.email_address) AS maxemail
+ FROM addresses
+ []
+ {stop}www@www.org
+
+On databases such as PostgreSQL and Oracle which support functions that return whole result sets, those functions can be assembled into selectable units to be used in statements. For example, given a database function ``calculate()`` which takes the parameters ``x`` and ``y`` and returns three columns, which we'd like to name ``q``, ``z`` and ``r``, we can construct it using "lexical" column objects as well as bind parameters:
+
+.. sourcecode:: pycon+sql
+
+ >>> from sqlalchemy.sql import column
+ >>> calculate = select([column('q'), column('z'), column('r')],
+ ... from_obj=[func.calculate(bindparam('x'), bindparam('y'))])
+
+ >>> print select([users], users.c.id > calculate.c.z)
+ SELECT users.id, users.name, users.fullname
+ FROM users, (SELECT q, z, r
+ FROM calculate(:x, :y))
+ WHERE users.id > z
+
+If we wanted to use our ``calculate`` statement twice with different bind parameters, the ``unique_params()`` function will create copies for us, and mark the bind parameters as "unique" so that conflicting names are isolated. Note we also make two separate aliases of our selectable:
+
+.. sourcecode:: pycon+sql
+
+ >>> s = select([users], users.c.id.between(
+ ... calculate.alias('c1').unique_params(x=17, y=45).c.z,
+ ... calculate.alias('c2').unique_params(x=5, y=12).c.z))
+
+ >>> print s
+ SELECT users.id, users.name, users.fullname
+ FROM users, (SELECT q, z, r
+ FROM calculate(:x_1, :y_1)) AS c1, (SELECT q, z, r
+ FROM calculate(:x_2, :y_2)) AS c2
+ WHERE users.id BETWEEN c1.z AND c2.z
+
+ >>> s.compile().params
+ {'x_2': 5, 'y_2': 12, 'y_1': 45, 'x_1': 17}
+
+See also :attr:`sqlalchemy.sql.expression.func`.
+
+Unions and Other Set Operations
+-------------------------------
+
+
+Unions come in two flavors, UNION and UNION ALL, which are available via module level functions:
+
+.. sourcecode:: pycon+sql
+
+ >>> from sqlalchemy.sql import union
+ >>> u = union(
+ ... addresses.select(addresses.c.email_address=='foo@bar.com'),
+ ... addresses.select(addresses.c.email_address.like('%@yahoo.com')),
+ ... ).order_by(addresses.c.email_address)
+
+ {sql}>>> print conn.execute(u).fetchall()
+ SELECT addresses.id, addresses.user_id, addresses.email_address
+ FROM addresses
+ WHERE addresses.email_address = ? UNION SELECT addresses.id, addresses.user_id, addresses.email_address
+ FROM addresses
+ WHERE addresses.email_address LIKE ? ORDER BY addresses.email_address
+ ['foo@bar.com', '%@yahoo.com']
+ {stop}[(1, 1, u'jack@yahoo.com')]
+
+Also available, though not supported on all databases, are ``intersect()``, ``intersect_all()``, ``except_()``, and ``except_all()``:
+
+.. sourcecode:: pycon+sql
+
+ >>> from sqlalchemy.sql import except_
+ >>> u = except_(
+ ... addresses.select(addresses.c.email_address.like('%@%.com')),
+ ... addresses.select(addresses.c.email_address.like('%@msn.com'))
+ ... )
+
+ {sql}>>> print conn.execute(u).fetchall()
+ SELECT addresses.id, addresses.user_id, addresses.email_address
+ FROM addresses
+ WHERE addresses.email_address LIKE ? EXCEPT SELECT addresses.id, addresses.user_id, addresses.email_address
+ FROM addresses
+ WHERE addresses.email_address LIKE ?
+ ['%@%.com', '%@msn.com']
+ {stop}[(1, 1, u'jack@yahoo.com'), (4, 2, u'wendy@aol.com')]
+
+Scalar Selects
+--------------
+
+
+To embed a SELECT in a column expression, use ``as_scalar()``:
+
+.. sourcecode:: pycon+sql
+
+ {sql}>>> print conn.execute(select([ # doctest: +NORMALIZE_WHITESPACE
+ ... users.c.name,
+ ... select([func.count(addresses.c.id)], users.c.id==addresses.c.user_id).as_scalar()
+ ... ])).fetchall()
+ SELECT users.name, (SELECT count(addresses.id) AS count_1
+ FROM addresses
+ WHERE users.id = addresses.user_id) AS anon_1
+ FROM users
+ []
+ {stop}[(u'jack', 2), (u'wendy', 2), (u'fred', 0), (u'mary', 0)]
+
+Alternatively, applying a ``label()`` to a select evaluates it as a scalar as well:
+
+.. sourcecode:: pycon+sql
+
+ {sql}>>> print conn.execute(select([ # doctest: +NORMALIZE_WHITESPACE
+ ... users.c.name,
+ ... select([func.count(addresses.c.id)], users.c.id==addresses.c.user_id).label('address_count')
+ ... ])).fetchall()
+ SELECT users.name, (SELECT count(addresses.id) AS count_1
+ FROM addresses
+ WHERE users.id = addresses.user_id) AS address_count
+ FROM users
+ []
+ {stop}[(u'jack', 2), (u'wendy', 2), (u'fred', 0), (u'mary', 0)]
+
+Correlated Subqueries
+---------------------
+
+Notice that in the examples on "scalar selects", the FROM clause of each embedded select did not contain the ``users`` table. This is because SQLAlchemy automatically attempts to correlate embedded FROM objects to those of an enclosing query. To disable this, or to specify explicit FROM clauses to be correlated, use ``correlate()``::
+
+ >>> s = select([users.c.name], users.c.id==select([users.c.id]).correlate(None))
+ >>> print s
+ SELECT users.name
+ FROM users
+ WHERE users.id = (SELECT users.id
+ FROM users)
+
+ >>> s = select([users.c.name, addresses.c.email_address], users.c.id==
+ ... select([users.c.id], users.c.id==addresses.c.user_id).correlate(addresses)
+ ... )
+ >>> print s
+ SELECT users.name, addresses.email_address
+ FROM users, addresses
+ WHERE users.id = (SELECT users.id
+ FROM users
+ WHERE users.id = addresses.user_id)
+
+Ordering, Grouping, Limiting, Offset...ing...
+---------------------------------------------
+
+
+The ``select()`` function can take keyword arguments ``order_by``, ``group_by`` (as well as ``having``), ``limit``, and ``offset``. There's also ``distinct=True``. These are all also available as generative functions. ``order_by()`` expressions can use the modifiers ``asc()`` or ``desc()`` to indicate ascending or descending.
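+
+For instance, the keyword-argument form might look like this (a sketch, marked ``+SKIP``; the runnable examples below use the generative form):
+
+.. sourcecode:: pycon+sql
+
+    >>> print select([users], order_by=[users.c.id.desc()], limit=2, offset=1) # doctest: +SKIP
+    SELECT users.id, users.name, users.fullname
+    FROM users ORDER BY users.id DESC
+    LIMIT 2 OFFSET 1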
+
+.. sourcecode:: pycon+sql
+
+ >>> s = select([addresses.c.user_id, func.count(addresses.c.id)]).\
+ ... group_by(addresses.c.user_id).having(func.count(addresses.c.id)>1)
+ {sql}>>> print conn.execute(s).fetchall()
+ SELECT addresses.user_id, count(addresses.id) AS count_1
+ FROM addresses GROUP BY addresses.user_id
+ HAVING count(addresses.id) > ?
+ [1]
+ {stop}[(1, 2), (2, 2)]
+
+ >>> s = select([addresses.c.email_address, addresses.c.id]).distinct().\
+ ... order_by(addresses.c.email_address.desc(), addresses.c.id)
+ {sql}>>> conn.execute(s).fetchall()
+ SELECT DISTINCT addresses.email_address, addresses.id
+ FROM addresses ORDER BY addresses.email_address DESC, addresses.id
+ []
+ {stop}[(u'www@www.org', 3), (u'wendy@aol.com', 4), (u'jack@yahoo.com', 1), (u'jack@msn.com', 2)]
+
+ >>> s = select([addresses]).offset(1).limit(1)
+ {sql}>>> print conn.execute(s).fetchall() # doctest: +NORMALIZE_WHITESPACE
+ SELECT addresses.id, addresses.user_id, addresses.email_address
+ FROM addresses
+ LIMIT 1 OFFSET 1
+ []
+ {stop}[(2, 1, u'jack@msn.com')]
+
+Updates
+========
+
+
+Finally, we're back to UPDATE. Updates work a lot like INSERTs, except there is an additional WHERE clause that can be specified.
+
+.. sourcecode:: pycon+sql
+
+ >>> # change 'jack' to 'ed'
+ {sql}>>> conn.execute(users.update(users.c.name=='jack', values={'name':'ed'})) #doctest: +ELLIPSIS
+ UPDATE users SET name=? WHERE users.name = ?
+ ['ed', 'jack']
+ COMMIT
+ {stop}<sqlalchemy.engine.base.ResultProxy object at 0x...>
+
+ >>> # use bind parameters
+ >>> u = users.update(users.c.name==bindparam('oldname'), values={'name':bindparam('newname')})
+ {sql}>>> conn.execute(u, oldname='jack', newname='ed') #doctest: +ELLIPSIS
+ UPDATE users SET name=? WHERE users.name = ?
+ ['ed', 'jack']
+ COMMIT
+ {stop}<sqlalchemy.engine.base.ResultProxy object at 0x...>
+
+ >>> # update a column to an expression
+ {sql}>>> conn.execute(users.update(values={users.c.fullname:"Fullname: " + users.c.name})) #doctest: +ELLIPSIS
+ UPDATE users SET fullname=(? || users.name)
+ ['Fullname: ']
+ COMMIT
+ {stop}<sqlalchemy.engine.base.ResultProxy object at 0x...>
+
+Correlated Updates
+------------------
+
+
+A correlated update lets you update a table using selection from another table, or the same table:
+
+.. sourcecode:: pycon+sql
+
+ >>> s = select([addresses.c.email_address], addresses.c.user_id==users.c.id).limit(1)
+ {sql}>>> conn.execute(users.update(values={users.c.fullname:s})) #doctest: +ELLIPSIS,+NORMALIZE_WHITESPACE
+ UPDATE users SET fullname=(SELECT addresses.email_address
+ FROM addresses
+ WHERE addresses.user_id = users.id
+ LIMIT 1 OFFSET 0)
+ []
+ COMMIT
+ {stop}<sqlalchemy.engine.base.ResultProxy object at 0x...>
+
+Deletes
+========
+
+
+Finally, a delete. Easy enough:
+
+.. sourcecode:: pycon+sql
+
+ {sql}>>> conn.execute(addresses.delete()) #doctest: +ELLIPSIS
+ DELETE FROM addresses
+ []
+ COMMIT
+ {stop}<sqlalchemy.engine.base.ResultProxy object at 0x...>
+
+ {sql}>>> conn.execute(users.delete(users.c.name > 'm')) #doctest: +ELLIPSIS
+ DELETE FROM users WHERE users.name > ?
+ ['m']
+ COMMIT
+ {stop}<sqlalchemy.engine.base.ResultProxy object at 0x...>
+
+Further Reference
+==================
+
+API docs: :mod:`sqlalchemy.sql.expression`
+
+Table Metadata Reference: :ref:`metadata_toplevel`
+
+Engine/Connection/Execution Reference: :ref:`engines_toplevel`
+
+SQL Types: :ref:`types`
+
+
--- /dev/null
+/* documentation section styles */
+
+body, td {
+ font-family: verdana, sans-serif;
+ font-size:.95em;
+}
+
+body {
+ background-color: #FDFBFC;
+ margin:20px 20px 20px 20px;
+}
+
+form {
+ display:inline;
+}
+
+p {
+ margin-top:10px;
+ margin-bottom:10px;
+}
+
+a {font-weight:normal; text-decoration:underline;}
+a:link {color:#0000FF;}
+a:visited {color:#0000FF;}
+a:active {color:#0000FF;}
+a:hover {color:#700000;}
+
+
+strong a {
+ font-weight: bold;
+}
+
+#search {
+ float:right;
+}
+
+#pagecontrol {
+ float:right;
+}
+
+.topnav
+{
+ background-color: #fbfbee;
+ border: solid 1px #ccc;
+ padding:10px;
+ margin:10px 0px 10px 0px;
+}
+
+.document {
+ border: solid 1px #ccc;
+}
+
+.topnav .prevnext {
+ padding: 5px 0px 0px 0px;
+ font-size: 0.8em
+}
+
+h1, h2, h3, h4, h5 {
+ font-family:arial,helvetica,sans-serif;
+ font-weight:bold;
+}
+
+.document h1, .document h2, .document h3, .document h4, .document h5 {
+ font-size: 1.4em;
+}
+
+.document h1 {
+ display:none;
+}
+
+h1 {
+ font: normal 20px/22px arial,helvetica,sans-serif;
+ color: #222;
+ padding:0px;
+ margin:0px;
+}
+
+.topnav h2 {
+ margin:26px 4px 0px 5px;
+ font-family:arial,helvetica,sans-serif;
+ font-size:1.6em;
+ font-weight:normal;
+ line-height:1.6em;
+}
+
+.topnav h3 {
+ font-weight: bold;
+ font-size: 1.4em;
+ margin:0px;
+ display:inline;
+ font-family:verdana,sans-serif;
+}
+
+.topnav li,
+li.toctree-l1,
+li.toctree-l1 li
+{
+ list-style-type:disc;
+ margin:0px;
+ padding:1px 8px;
+}
+
+
+.topnav li ul,
+li.toctree-l1 ul
+{
+ padding:0px 0px 0px 20px;
+}
+
+.topnav li ul li li,
+li.toctree-l1 ul li li
+{
+ /*font-size:.90em;*/
+}
+
+.sourcelink {
+ font-size:.8em;
+ text-align:right;
+ padding-top:10px;
+}
+
+.section {
+ line-height: 1.5em;
+ padding:8px 10px 20px 10px;
+ margin:10px 0px 0px;
+}
+
+.section .section {
+ margin:0px 0px 0px 0px;
+ padding: 0px;
+}
+
+.section .section .section {
+ margin:0px 0px 0px 20px;
+}
+
+.section .section .section .section {
+ margin:0px 0px 0px 20px;
+}
+
+
+.bottomnav {
+ background-color:#FBFBEE;
+ border:1px solid #CCCCCC;
+ float:right;
+ margin: 1em 0 1em 5px;
+ padding:10px;
+}
+
+.totoc {
+
+}
+
+.doc_copyright {
+ font-size:.85em;
+ padding:10px 0px 10px 0px;
+}
+
+pre {
+ background-color: #f0f0f0;
+ border: solid 1px #ccc;
+ padding:10px;
+ margin: 5px 5px 5px 5px;
+ overflow:auto;
+ line-height:1.3em;
+}
+
+.popup_sql, .show_sql
+{
+ background-color: #fbfbee;
+ padding:0px 10px;
+ margin:0px -10px;
+}
+
+.sql_link
+{
+ font-weight:normal;
+ font-family: arial, sans-serif;
+ text-transform: uppercase;
+ font-size: 0.9em;
+ color:#666;
+ border:1px solid;
+ padding:1px 2px 1px 2px;
+ margin:0px 10px 0px 15px;
+ float:right;
+ line-height:1.2em;
+}
+
+#docs a.sql_link, .sql_link
+{
+ text-decoration: none;
+ padding:1px 2px;
+}
+
+#docs a.sql_link:hover {
+ text-decoration: none;
+ color:#fff;
+ border:1px solid #900;
+ background-color: #900;
+}
+
+.versionheader {
+ margin-top: 0.5em;
+}
+.versionnum {
+ font-weight: bold;
+}
+
+.prerelease {
+ border: solid #c25757 2px;
+ border-radius: 4px;
+ -moz-border-radius: 4px;
+ -webkit-border-radius: 4px;
+ background-color: #c21a1a;
+ color: white;
+ padding: 0.05em 0.2em;
+}
+
+dl.function > dt,
+dl.class > dt
+{
+ background-color:#F0F0F0;
+ margin:0px -10px;
+ padding: 0px 10px;
+}
+
+dt:target {
+ background-color:#FBE54E;
+}
+
+a.headerlink {
+ font-size: 0.8em;
+ padding: 0 4px 0 4px;
+ text-decoration: none;
+ visibility: hidden;
+}
+
+h1:hover > a.headerlink,
+h2:hover > a.headerlink,
+h3:hover > a.headerlink,
+h4:hover > a.headerlink,
+h5:hover > a.headerlink,
+h6:hover > a.headerlink,
+dt:hover > a.headerlink {
+ visibility: visible;
+}
+
+a.headerlink:hover {
+ background-color: #00f;
+ color: white;
+}
+
+.clearboth {
+ clear:both;
+}
+
+tt.descname {
+ background-color:transparent;
+ font-size:1.2em;
+ font-weight:bold;
+}
+
+tt.descclassname {
+ background-color:transparent;
+}
+
+tt {
+ background-color:#ECF0F3;
+ padding:0 1px;
+}
+
+@media print {
+ #nav { display: none; }
+ #pagecontrol { display: none; }
+ .topnav .prevnext { display: none; }
+ .bottomnav { display: none; }
+ .totoc { display: none; }
+ .topnav ul li a { text-decoration: none; color: #000; }
+}
+
+/* syntax highlighting overrides */
+.k, .kn {color:#0908CE;}
+.o {color:#BF0005;}
+.go {color:#804049;}
--- /dev/null
+$(document).ready(function(){
+ $('div.popup_sql').hide();
+ $('a.sql_link').click(function() {
+ $(this).nextAll('div.popup_sql:first').toggle();
+ return false;
+ })
+});
+++ /dev/null
-<html>
-<head>
- <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
- <title>${self.title()}</title>
- ${self.style()}
-<%def name="style()">
-</%def>
-
-</head>
-<body>
-${next.body()}
-
-</body>
-</html>
-
-<%def name="style()">
- <link rel="stylesheet" href="style.css"></link>
- <link rel="stylesheet" href="docs.css"></link>
- <link href="syntaxhighlight.css" rel="stylesheet" type="text/css"></link>
- <script src="scripts.js"></script>
- % if parent:
- ${parent.style()}
- % endif
-</%def>
-
-<%def name="title()">
-Documentation
-</%def>
-
-
+++ /dev/null
-<%!
- from mako.ext.autohandler import autohandler
-%>
-<%inherit file="${autohandler(template, context)}"/>
-<%page cached="True" cache_key="${self.filename}"/>
-
-<%doc>
- base.html - common to all documentation pages. intentionally separate
- from autohandler, which can be swapped out for a different one
-</%doc>
-
-<%
- # bootstrap TOC structure from request args, or pickled file if not present.
- import cPickle as pickle
- import os, time
- #print "%s generating from table of contents for file %s" % (local.filename, self.filename)
- filename = os.path.join(os.path.dirname(self.filename), 'table_of_contents.pickle')
- toc = pickle.load(file(filename))
- version = toc.version
- last_updated = toc.last_updated
-
- kwargs = context.kwargs
- kwargs.setdefault('extension', 'html')
- extension = kwargs['extension']
- kwargs.setdefault('paged', True)
- kwargs.setdefault('toc', toc)
-
- version_cls = 'versionnum'
- if 'beta' in version:
- version_cls += ' prerelease'
-%>
-
-<div id="topanchor"><a name="top"> </a></div>
-
-
-<h1>${toc.root.doctitle}</h1>
-
-<div id="pagecontrol"><a href="index.${extension}">Multiple Pages</a> | <a href="documentation.${extension}">One Page</a></div>
-
-<div class="versionheader">
- Version: <span class="${version_cls}">${version}</span>
- Last Updated: ${time.strftime('%x %X', time.localtime(last_updated))}
-</div>
-
-${next.body(**kwargs)}
-
-
-
+++ /dev/null
-## defines the default layout for normal documentation pages (not including the index)
-<%inherit file="base.html"/>
-<%page args="toc, extension, paged"/>
-<%namespace file="nav.html" import="topnav, pagenav, bottomnav"/>
-
-<%
- current = toc.get_by_file(self.template.module.filename)
-%>
-
-<A name="<% current.path %>"></a>
-
-${topnav(item=current, toc=toc, extension=extension, paged=paged)}
-
-${next.body(toc=toc, extension=extension, paged=paged)}
-
-${bottomnav(item=current, extension=extension, paged=paged)}
\ No newline at end of file
+++ /dev/null
-## formatting.myt - Provides section formatting elements, syntax-highlighted code blocks, and other special filters.
-<%!
- import string, re, cgi
- from mako import filters
- import highlight
-
- def plainfilter(f):
- f = re.sub(r'\n[\s\t]*\n[\s\t]*', '</p>\n<p>', f)
- f = "<p>" + f + "</p>"
- return f
-
-%>
-
-<%namespace name="nav" file="nav.html"/>
-
-<%def name="section(toc, path, paged, extension, description=None)">
- ## Main section formatting element.
- <%
- content = capture(caller.body)
- re2 = re.compile(r"'''PYESC(.+?)PYESC'''", re.S)
- content = re2.sub(lambda m: filters.url_unescape(m.group(1)), content)
-
- item = toc.get_by_path(path)
- subsection = item.depth > 1
- level = min(item.depth, 4)
- %>
- <A name="${item.path}"></a>
-
- <div class="${'sectionL%d' % level}">
-
- % if (subsection):
- <h3>${description or item.description}</h3>
- % endif
-
- ${content}
-
- % if len(item.children) == 0:
- % if paged:
- <a href="#top" class="totoc">back to section top</a>
- % else:
- <a href="#${item.get_page_root().path}" class="totoc">back to section top</a>
- % endif
- % endif
- </div>
-
-</%def>
-
-
-<%def name="formatplain()" filter="plainfilter">
- ${ caller.body() | h}
-</%def>
-
-
-<%def name="codeline()" filter="trim,h">
- <span class="codeline">${ caller.body() }</span>
-</%def>
-
-<%def name="code(toc, paged, extension, title=None, syntaxtype='mako', html_escape=True, use_sliders=False)">
- <%
- def fix_indent(f):
- f =string.expandtabs(f, 4)
- g = ''
- lines = string.split(f, "\n")
- whitespace = None
- for line in lines:
- if whitespace is None:
- match = re.match(r"^([ ]*).+", line)
- if match is not None:
- whitespace = match.group(1)
-
- if whitespace is not None:
- line = re.sub(r"^%s" % whitespace, "", line)
-
- if whitespace is not None or re.search(r"\w", line) is not None:
- g += (line + "\n")
- else:
- g += "\n"
-
- return g[:-1] #.rstrip()
-
- p = re.compile(r'<pre>(.*?)</pre>', re.S)
-
- def hlight(match):
- try:
- return "<pre>" + highlight.highlight(fix_indent(match.group(1)), html_escape = html_escape, syntaxtype = syntaxtype) + "</pre>"
- except:
- print "TEXT IS", fix_indent(match.group(1))
-
- def link(match):
- return capture(nav.toclink, toc, match.group(2), extension, paged, description=match.group(1))
-
- content = re.sub(r'\[(.+?)\]\(rel:(.+?)\)', link, capture(caller.body))
- if syntaxtype != 'diagram':
- content = p.sub(hlight, "<pre>" + content + "</pre>")
- else:
- content = "<pre>" + content + "</pre>"
- %>
-
- <div class="${ use_sliders and "sliding_code" or "code" }">
- % if title is not None:
- <div class="codetitle">${title}</div>
- % endif
- ${ content }
- </div>
-</%def>
-
-
-<%def name="popboxlink(name=None, show='show', hide='hide')" filter="trim">
- <%
- if name is None:
- name = attributes.setdefault('popbox_name', 0)
- name += 1
- attributes['popbox_name'] = name
- name = "popbox_" + repr(name)
- %>
-javascript:togglePopbox('${name}', '${show}', '${hide}')
-</%def>
-
-<%def name="popbox(name=None, class_=None)" filter="trim">
-<%
- if name is None:
- name = 'popbox_' + repr(attributes['popbox_name'])
-%>
-<div id="${name}_div" class="${class_}" style="display:none;">${capture(caller.body) | trim}</div>
-</%def>
-
-<%def name="poplink(link='sql')" filter="trim">
- <%
- href = capture(popboxlink)
- %>
- '''PYESC${capture(nav.link, href=href, text=link, class_="codepoplink") | u}PYESC'''
-</%def>
-
-<%def name="codepopper()" filter="trim">
- <%
- c = capture(caller.body)
- c = re.sub(r'\n', '<br/>\n', filters.html_escape(c.strip()))
- %>
- </pre><%call expr="popbox(class_='codepop')">${c}</%call><pre>
-</%def>
-
-<%def name="poppedcode()" filter="trim">
- <%
- c = capture(caller.body)
- c = re.sub(r'\n', '<br/>\n', filters.html_escape(c.strip()))
- %>
- </pre><div class="codepop">${c}</div><pre>
-</%def>
-
-
-
-
--- /dev/null
+<%inherit file="layout.mako"/>
+
+<%def name="show_title()">${_('Index')}</%def>
+
+ <h1 id="index">${_('Index')}</h1>
+
+ % for i, (key, dummy) in enumerate(genindexentries):
+ ${i != 0 and '| ' or ''}<a href="#${key}"><strong>${key}</strong></a>
+ % endfor
+
+ <hr />
+
+ % for i, (key, entries) in enumerate(genindexentries):
+<h2 id="${key}">${key}</h2>
+<table width="100%" class="indextable"><tr><td width="33%" valign="top">
+<dl>
+ <%
+ breakat = genindexcounts[i] // 2
+ numcols = 1
+ numitems = 0
+ %>
+% for entryname, (links, subitems) in entries:
+
+<dt>
+ % if links:
+ <a href="${links[0]}">${entryname|h}</a>
+    % for j, link in enumerate(links[1:]):
+    , <a href="${link}">[${j}]</a>
+    % endfor
+ % else:
+ ${entryname|h}
+ % endif
+
+ % if subitems:
+ <dd><dl>
+ % for subentryname, subentrylinks in subitems:
+ <dt><a href="${subentrylinks[0]}">${subentryname|h}</a>
+ % for j, link in enumerate(subentrylinks[1:]):
+ <a href="${link}">[${j}]</a>
+ % endfor
+ </dt>
+ % endfor
+ </dl></dd>
+ % endif
+ <%
+ numitems = numitems + 1 + len(subitems)
+ %>
+ % if numcols <2 and numitems > breakat:
+ <%
+ numcols = numcols + 1
+ %>
+ </dl></td><td width="33%" valign="top"><dl>
+% endif
+
+% endfor
+</dl></td></tr></table>
+% endfor
+
+<%def name="sidebarrel()">
+% if split_index:
+ <h4>${_('Index')}</h4>
+ <p>
+ % for i, (key, dummy) in enumerate(genindexentries):
+ ${i > 0 and '| ' or ''}
+ <a href="${pathto('genindex-' + key)}"><strong>${key}</strong></a>
+ % endfor
+ </p>
+
+ <p><a href="${pathto('genindex-all')}"><strong>${_('Full index on one page')}</strong></a></p>
+% endif
+ ${parent.sidebarrel()}
+</%def>
--- /dev/null
+## coding: utf-8
+<%inherit file="${context['mako_layout']}"/>
+
+<%def name="headers()">
+ <link rel="stylesheet" href="${pathto('_static/pygments.css', 1)}" type="text/css" />
+ <link rel="stylesheet" href="${pathto('_static/docs.css', 1)}" type="text/css" />
+
+ <script type="text/javascript">
+ var DOCUMENTATION_OPTIONS = {
+ URL_ROOT: '${pathto("", 1)}',
+ VERSION: '${release|h}',
+ COLLAPSE_MODINDEX: false,
+ FILE_SUFFIX: '${file_suffix}'
+ };
+ </script>
+ % for scriptfile in script_files + self.attr.local_script_files:
+ <script type="text/javascript" src="${pathto(scriptfile, 1)}"></script>
+ % endfor
+ <script type="text/javascript" src="${pathto('_static/init.js', 1)}"></script>
+ % if hasdoc('about'):
+ <link rel="author" title="${_('About these documents')}" href="${pathto('about')}" />
+ % endif
+ <link rel="index" title="${_('Index')}" href="${pathto('genindex')}" />
+ <link rel="search" title="${_('Search')}" href="${pathto('search')}" />
+ % if hasdoc('copyright'):
+ <link rel="copyright" title="${_('Copyright')}" href="${pathto('copyright')}" />
+ % endif
+ <link rel="top" title="${docstitle|h}" href="${pathto('index')}" />
+ % if parents:
+ <link rel="up" title="${parents[-1]['title']|util.striptags}" href="${parents[-1]['link']|h}" />
+ % endif
+ % if nexttopic:
+ <link rel="next" title="${nexttopic['title']|util.striptags}" href="${nexttopic['link']|h}" />
+ % endif
+ % if prevtopic:
+ <link rel="prev" title="${prevtopic['title']|util.striptags}" href="${prevtopic['link']|h}" />
+ % endif
+ ${self.extrahead()}
+</%def>
+<%def name="extrahead()"></%def>
+
+ <h1>${docstitle|h}</h1>
+
+ <div id="search">
+ Search:
+ <form class="search" action="${pathto('search')}" method="get">
+ <input type="text" name="q" size="18" /> <input type="submit" value="${_('Go')}" />
+ <input type="hidden" name="check_keywords" value="yes" />
+ <input type="hidden" name="area" value="default" />
+ </form>
+ </div>
+
+ <div class="versionheader">
+ Version: <span class="versionnum">${release}</span> Last Updated: ${last_updated}
+ </div>
+ <div class="clearboth"></div>
+
+ <div class="topnav">
+ <div id="pagecontrol">
+ <a href="${pathto('reference/index')}">API Reference</a>
+ |
+ <a href="${pathto('genindex')}">Index</a>
+
+ % if sourcename:
+      <div class="sourcelink">(<a href="${pathto('_sources/' + sourcename, True)|h}">${_('view source')}</a>)</div>
+ % endif
+ </div>
+
+ <div class="navbanner">
+ <a class="totoc" href="${pathto(master_doc)}">Table of Contents</a>
+ % if parents:
+ % for parent in parents:
+ » <a href="${parent['link']|h}" title="${parent['title']}">${parent['title']}</a>
+ % endfor
+ % endif
+ % if current_page_name != master_doc:
+ » ${self.show_title()}
+ % endif
+
+ ${prevnext()}
+ <h2>
+ ${self.show_title()}
+ </h2>
+ </div>
+ % if display_toc and not current_page_name.startswith('index'):
+ ${toc}
+ % endif
+ <div class="clearboth"></div>
+ </div>
+
+ <div class="document">
+ ${next.body()}
+ </div>
+
+ <%def name="footer()">
+ <div class="bottomnav">
+ ${prevnext()}
+ <div class="doc_copyright">
+ % if hasdoc('copyright'):
+ © <a href="${pathto('copyright')}">Copyright</a> ${copyright|h}.
+ % else:
+ © Copyright ${copyright|h}.
+ % endif
+ % if show_sphinx:
+ Created using <a href="http://sphinx.pocoo.org/">Sphinx</a> ${sphinx_version|h}.
+ % endif
+ </div>
+ </div>
+ </%def>
+ ${self.footer()}
+
+<%def name="prevnext()">
+<div class="prevnext">
+ % if prevtopic:
+ Previous:
+ <a href="${prevtopic['link']|h}" title="${_('previous chapter')}">${prevtopic['title']}</a>
+ % endif
+ % if nexttopic:
+ Next:
+ <a href="${nexttopic['link']|h}" title="${_('next chapter')}">${nexttopic['title']}</a>
+ % endif
+</div>
+</%def>
+
+<%def name="show_title()">
+% if title:
+ ${title}
+% endif
+</%def>
+
+++ /dev/null
-<%inherit file="base.html"/>
-<%page args="toc, extension, paged"/>
-<%namespace name="formatting" file="formatting.html"/>
-<%namespace name="nav" file="nav.html"/>
-<%namespace name="pydoc" file="pydoc.html"/>
-<%!
- import cPickle as pickle
- import os
-%>
-<%
- current = toc.get_by_file(self.template.module.filename)
- docfile = os.path.join(os.path.dirname(self.filename), 'compiled_docstrings.pickle')
- data = dict(pickle.load(file(docfile)))
- data = data[self.template.module.docstring]
-%>
-
-<%def name="style()">
- ${parent.style()}
- <link rel="stylesheet" href="docutil.css"></link>
-</%def>
-
-${nav.topnav(item=current, toc=toc, extension=extension, paged=True)}
-
-${pydoc.obj_doc(obj=data, toc=toc, extension=extension, paged=True)}
-
-${nav.bottomnav(item=current, extension=extension, paged=True)}
-
+++ /dev/null
-## nav.myt - Provides page navigation elements that are derived from toc.TOCElement structures, including
-## individual hyperlinks as well as navigational toolbars and table-of-content listings.
-<%namespace name="tocns" file="toc.html"/>
-
-<%def name="itemlink(item, paged, extension, anchor=True)" filter="trim">
- <a href="${ item.get_link(anchor=anchor, usefilename=paged, extension=extension) }">${ item.description }</a>
-</%def>
-
-<%def name="toclink(toc, path, extension, paged, description=None)" filter="trim">
- <%
- item = toc.get_by_path(path)
- if description is None:
- if item:
- description = item.description
- else:
- description = path
- if item:
- anchor = not paged or item.depth > 1
- else:
- anchor = False
- %>
- % if item:
- <a href="${ item.get_link(extension=extension, anchor=anchor, usefilename=paged) }">${ description }</a>
- % else:
- <%
- #raise Exception("Can't find TOC link for '%s'" % path)
- %>
- <b>${ description }</b>
- % endif
-</%def>
-
-
-<%def name="link(href, text, class_)" filter="trim">
- <a href="${ href }" ${ class_ and (('class=\"%s\"' % class_) or '')}>${ text }</a>
-</%def>
-
-<%def name="topnav(item, toc, extension, paged)">
- <div class="topnav">
-
- ${pagenav(item, extension=extension, paged=paged)}
-
- ${tocns.printtoc(root=item, current=None, anchor_toplevel=True, paged=paged, extension=extension)}
- </div>
-</%def>
-
-<%def name="pagenav(item, paged, extension)">
- <div class="navbanner">
- <a href="${paged and 'index' or 'documentation'}.${ extension }" class="totoc">Table of Contents</a>
- ${prevnext(item, paged, extension)}
- <h2>${item.description}</h2>
- </div>
-</%def>
-
-<%def name="bottomnav(item, paged, extension)">
- <div class="bottomnav">
- ${prevnext(item, paged, extension)}
- </div>
-</%def>
-
-<%def name="prevnext(item, paged, extension)">
- <div class="prevnext">
- % if item.up:
- Up: ${itemlink(item=item.up, paged=paged, anchor=not paged, extension=extension)}
- % endif
-
- % if item.previous is not None:
- ${item.up is not None and " | " or ""}
- Previous: ${itemlink(item=item.previous, paged=paged, anchor=not paged, extension=extension)}
- % endif
-
- % if item.next is not None:
- ${item.previous is not None and " | " or ""}
- Next: ${itemlink(item=item.next, paged=paged, anchor=not paged, extension=extension)}
- % endif
- </div>
-</%def>
--- /dev/null
+<%inherit file="layout.mako"/>
+${body| util.strip_toplevel_anchors}
\ No newline at end of file
+++ /dev/null
-<%doc>pydoc.myt - provides formatting functions for printing docstring.AbstractDoc generated python documentation objects.</%doc>
-<%!
-import docstring
-from docutils.core import publish_parts
-import re, sys
-
-def whitespace(content):
- """trim left whitespace."""
- if not content:
- return ''
- # Convert tabs to spaces (following the normal Python rules)
- # and split into a list of lines:
- lines = content.expandtabs().splitlines()
- # Determine minimum indentation (first line doesn't count):
- indent = sys.maxint
- for line in lines[1:]:
- stripped = line.lstrip()
- if stripped:
- indent = min(indent, len(line) - len(stripped))
- # Remove indentation (first line is special):
- trimmed = [lines[0].strip()]
- if indent < sys.maxint:
- for line in lines[1:]:
- trimmed.append(line[indent:].rstrip())
- # Strip off trailing and leading blank lines:
- while trimmed and not trimmed[-1]:
- trimmed.pop()
- while trimmed and not trimmed[0]:
- trimmed.pop(0)
- # Return a single string:
- return '\n'.join(trimmed)
-
-def formatdocstring(content):
- return publish_parts(whitespace(content), writer_name='html')['body']
-%>
-
-<%def name="inline_links(toc, extension, paged)"><%
- def link(match):
- (module, desc) = match.group(1,2)
- if not desc:
- path = "docstrings_" + module
- elif desc.endswith('()'):
- path = "docstrings_" + module + "_modfunc_" + desc[:-2]
- else:
- path = "docstrings_" + module + "_" + desc
- return capture(nav.toclink, toc=toc, path=path, description=desc or None, extension=extension, paged=paged)
- return lambda content: re.sub('\[(.+?)#(.*?)\]', link, content)
-%></%def>
-
-<%namespace name="formatting" file="formatting.html"/>
-<%namespace name="nav" file="nav.html"/>
-
-<%def name="obj_doc(obj, toc, extension, paged)">
- <%
- if obj.isclass:
- links = []
- for elem in obj.inherits:
- if isinstance(elem, docstring.ObjectDoc):
- links.append(capture(nav.toclink, toc=toc, path=elem.toc_path, extension=extension, description=elem.name, paged=paged))
- else:
- links.append(str(elem))
- htmldescription = "class " + obj.classname + "(%s)" % (','.join(links))
- else:
- htmldescription = obj.description
-
- %>
-
- <%call expr="formatting.section(toc=toc, path=obj.toc_path, description=htmldescription, paged=paged, extension=extension)">
- % if obj.doc:
- <div class="darkcell">${obj.doc or '' | formatdocstring, inline_links(toc, extension, paged)}</div>
- % endif
-
- % if not obj.isclass and obj.functions:
-
- <%call expr="formatting.section(toc=toc, path=obj.mod_path, paged=paged, extension=extension)">
- % for func in obj.functions:
- ${function_doc(func=func,toc=toc, extension=extension, paged=paged)}
- % endfor
- </%call>
-
- % else:
-
- % if obj.functions:
- % for func in obj.functions:
- % if isinstance(func, docstring.FunctionDoc):
- ${function_doc(func=func, toc=toc, extension=extension, paged=paged)}
- % elif isinstance(func, docstring.PropertyDoc):
- ${property_doc(prop=func, toc=toc, extension=extension, paged=paged)}
- % endif
- % endfor
- % endif
- % endif
-
- % if obj.classes:
- % for class_ in obj.classes:
- ${obj_doc(obj=class_, toc=toc, extension=extension, paged=paged)}
- % endfor
- % endif
- </%call>
-</%def>
-
-<%def name="function_doc(func, toc, extension, paged)">
- <div class="darkcell">
- <%
- if hasattr(func, 'toc_path'):
- item = toc.get_by_path(func.toc_path)
- else:
- item = None
- %>
- <A name="${item and item.path or ''}"></a>
- <b>def ${func.name}(${", ".join(map(lambda k: "<i>%s</i>" % k, func.arglist))})</b>
- <div class="docstring">
- ${func.doc or '' | formatdocstring, inline_links(toc, extension, paged)}
- </div>
- </div>
-</%def>
-
-<%def name="property_doc(prop, toc, extension, paged)">
- <div class="darkcell">
- <A name=""></a>
- <b>${prop.name} = property()</b>
- <div class="docstring">
- ${prop.doc or '' | formatdocstring, inline_links(toc, extension, paged)}
- </div>
- </div>
-</%def>
-
-
--- /dev/null
+<%inherit file="layout.mako"/>
+
+<%!
+ local_script_files = ['_static/searchtools.js']
+%>
+<%def name="show_title()">${_('Search')}</%def>
+
+<div id="search-results"></div>
+
+<%def name="footer()">
+ ${parent.footer()}
+ <script type="text/javascript" src="searchindex.js"></script>
+</%def>
--- /dev/null
+<%text>#coding:utf-8
+<%inherit file="/base.html"/>
+<%page cache_type="file" cached="True"/>
+<%!
+ in_docs=True
+%>
+</%text>
+
+<div style="text-align:right">
+<b>Quick Select:</b> <a href="/docs/05/">0.5</a> | <a href="/docs/04/">0.4</a> | <a href="/docs/03/">0.3</a>
+</div>
+
+${'<%text>'}
+${next.body()}
+${'</%text>'}
+
+<%text><%def name="style()"></%text>
+ ${self.headers()}
+ <%text>${parent.style()}</%text>
+ <link href="/css/site_docs.css" rel="stylesheet" type="text/css"></link>
+<%text></%def></%text>
+
+<%text><%def name="title()"></%text>${capture(self.show_title)|util.striptags} — ${docstitle|h}<%text></%def></%text>
+
+<%!
+ local_script_files = []
+%>
--- /dev/null
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
+ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+
+<html>
+ <head>
+ <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+ ${metatags and metatags or ''}
+ <title>${capture(self.show_title)|util.striptags} — ${docstitle|h}</title>
+ ${self.headers()}
+ </head>
+ <body>
+ ${next.body()}
+ </body>
+</html>
+
+
+<%!
+ local_script_files = []
+%>
+++ /dev/null
-## toc.myt - prints table of contents listings given toc.TOCElement strucures
-
-<%def name="toc(toc, paged, extension)">
- <div class="topnav">
-
- <a name="table_of_contents"></a>
- <h3>Table of Contents</h3>
-
- <a href="#full_index" class="totoc">(view full table)</a>
- ${printtoc(root=toc,paged=paged, extension=extension, current=None,children=False,anchor_toplevel=False)}
-
- <a name="full_index"></a>
- <h3>Table of Contents: Full</h3>
-
- <a href="#table_of_contents" class="totoc">(view brief table)</a>
-
- ${printtoc(root=toc,paged=paged, extension=extension, current=None,children=True,anchor_toplevel=False)}
-
- </div>
-</%def>
-
-
-<%def name="printtoc(root, paged, extension, current=None, children=True, anchor_toplevel=False)">
- % if root.children:
- <ul>
- % for item in root.children:
- <%
- anchor = anchor_toplevel
- if paged and item.filename != root.filename:
- anchor = False
- %>
- <li><a style="${item is current and "font-weight:bold;" or "" }" href="${item.get_link(extension=extension,anchor=anchor, usefilename=paged) }">${item.description}</a></li>
-
- % if children and item.children:
- <li>
- ${printtoc(item, current=current, children=True,anchor_toplevel=True, paged=paged, extension=extension)}
- </li>
- % endif
- % endfor
- </ul>
- % endif
-</%def>
-
return s
for filename in ('ormtutorial', 'sqlexpression'):
- filename = 'content/%s.txt' % filename
+ filename = '%s.rst' % filename
s = open(filename).read()
#s = replace_file(s, ':memory:')
s = re.sub(r'{(?:stop|sql|opensql)}', '', s)
+++ /dev/null
-/* documentation section styles */
-
-#topanchor {position:absolute;left:0px;top:0px;width:0px;height:0px;}
-#pagecontrol {float:right;}
-
-.topnav {
- background-color: #fbfbee;
- border: solid 1px #ccc;
- padding:10px 10px 0px 10px;
- margin:10px 0px 10px 0px;
-}
-
-pre {
- margin:0px;
- padding:0px;
-}
-
-.prevnext {
- padding: 5px 0px 0px 0px;
- font-size: 0.8em
-}
-
-.codetitle {
- font-family: verdana, sans-serif;
- font-weight: bold;
- text-decoration:underline;
- padding:5px;
-}
-
-.codeline {
- font-family: courier, "courier new", serif;
- font-family: "Deja Vu Sans Mono", "Vera Sans Mono", Courier, "Courier New", fixed;
- font-size: 1em;
- color: #960;
-}
-
-h1, h2, h3 {
- font-family:arial,helvetica,sans-serif;
-}
-
-h1 {
- font: normal 20px/22px arial,helvetica,sans-serif;
- color: #222;
- padding:0px;
- margin:0px;
-}
-
-h2 {
- font-family:arial,helvetica,sans-serif;
- font-size: 1.6em;
- font-weight:normal;
- line-height: 1.6em;
- margin:0px;
-}
-
-h3 {
- font-family: arial, sans-serif;
- font-size: 1.4em;
- font-weight:bold;
-}
-
-.topnav h3 {
- font-weight: bold;
- font-size: 1.4em;
- margin:0px;
- display:inline;
- font-family:verdana,sans-serif;
-}
-
-.topnav h2 {
- margin:26px 4px 0px 5px;
-}
-
-.sectionL1 {
- line-height: 1.5em;
- padding:8px 10px 20px 10px;
- margin:10px 0px 0px;
-}
-
-.sectionL2 {
- margin:0px 0px 0px 0px;
- line-height: 1.5em;
-}
-
-.sectionL3 {
- margin:0px 0px 0px 20px;
- line-height: 1.5em;
-}
-
-.sectionL4 {
- margin:0px 0px 0px 20px;
- line-height: 1.5em;
-}
-
-
-.topnav li {
- font-size: 1em;
- list-style-type:none;
- padding:0px 0px 3px 8px;
- margin:0px;
-}
-
-.topnav ul ul {
- padding:0px 0px 0px 8px;
-}
-
-.topnav ul ul li {
- font-size: 0.9em;
-}
-
-.topnav ul ul li li li {
- font-size: 1em;
-}
-
-.bottomnav {
- background-color:#FBFBEE;
- border:1px solid #CCCCCC;
- float:right;
- margin: 1em 0 1em 5px;
- padding:10px;
-}
-
-.toclink {
- font-weight: bold;
- font-size: 1em;
- padding:0px 0px 3px 8px;
- /*border:1px solid;*/
-}
-
-.totoc {
- font-size: smaller;
-}
-
-.smalltoclink {
- font-size: 0.9em;
- padding:0px 0px 3px 0px;
-}
-
-.docstring {
- margin-left:15px;
- margin-bottom:5px;
- margin-top:5px;
-}
-
-.darkcell {
- margin:0px 0px 10px 0px;
- padding:4px 4px 4px 4px;
- background-color: #f0f0f0;
- border: solid 1px #ccc;
-}
-
-.sliding_code {
- font-family: "Deja Vu Sans Mono", "Vera Sans Mono", "Monaco", Courier, "Courier New", fixed;
- background-color: #f0f0f0;
- border: solid 1px #ccc;
- padding:10px;
- margin: 5px 5px 5px 5px;
- overflow:auto;
-}
-
-code {
- font-family: "Deja Vu Sans Mono", "Vera Sans Mono", Courier, "Courier New", fixed;
- font-size: 0.95em;
- color: #222;
-}
-
-.code {
- font-family: "Deja Vu Sans Mono", "Vera Sans Mono", Courier, "Courier New", fixed;
- background-color: #f0f0f0;
- border: solid 1px #ccc;
- padding:10px; /*2px 2px 2px 10px;*/
- margin: 5px 5px 5px 5px;
- line-height:1.2em;
-}
-
-.codepop
-{
- font-family: "Deja Vu Sans Mono", "Vera Sans Mono", Courier, "Courier New", fixed;
- font-size: 0.75em;
- color:#000;
- background-color: #fbfbee;
- border: 1px solid #d9d9d9;
- border-right: 1px solid #999;
- border-bottom: 1px solid #999;
- padding:10px;
- width:95%;
- /*margin:5px 10px 5px 0px;*/
- /*clear:right;*/
-}
-
-.codepoplink,
-#docs a.codepoplink
-{
- font-weight:normal;
- font-family: arial, sans-serif;
- text-transform: uppercase;
- font-size: 0.9em;
- color:#666;
- border:1px solid;
- padding:1px 2px 1px 2px;
- margin:0px 10px 0px 15px;
- float:right;
-}
-#docs a.codepoplink {
- text-decoration: none;
-}
-#docs a.codepoplink:hover {
- text-decoration: none;
- color:#fff;
- border:1px solid #900;
- background-color: #900;
-}
-
-.versionheader {
- margin-top: 0.5em;
-}
-.versionnum {
- font-weight: bold;
-}
-.prerelease {
- border: solid #c25757 2px;
- border-radius: 4px;
- -moz-border-radius: 4px;
- -webkit-border-radius: 4px;
- background-color: #c21a1a;
- color: white;
- padding: 0.05em 0.2em;
-}
-
-@media print {
- #nav { display: none; }
- #pagecontrol { display: none; }
- .topnav .prevnext { display: none; }
- .bottomnav { display: none; }
- .totoc { display: none; }
- .topnav ul li a { text-decoration: none; color: #000; }
-}
+++ /dev/null
-/*
-:Author: David Goodger <goodger@python.org>
-:Id: $Id: html4css1.css 4993 2007-03-04 21:21:49Z fwiemann $
-:Copyright: This stylesheet has been placed in the public domain.
-
-Default cascading style sheet for the HTML output of Docutils.
-
-See http://docutils.sf.net/docs/howto/html-stylesheets.html for how to
-customize this style sheet.
-*/
-
-/* used to remove borders from tables and images */
-.borderless, table.borderless td, table.borderless th {
- border: 0 }
-
-table.borderless td, table.borderless th {
- /* Override padding for "table.docutils td" with "! important".
- The right padding separates the table cells. */
- padding: 0 0.5em 0 0 ! important }
-
-.first {
- /* Override more specific margin styles with "! important". */
- margin-top: 0 ! important }
-
-.last, .with-subtitle {
- margin-bottom: 0 ! important }
-
-.hidden {
- display: none }
-
-a.toc-backref {
- text-decoration: none ;
- color: black }
-
-blockquote.epigraph {
- margin: 2em 5em ; }
-
-dl.docutils dd {
- margin-bottom: 0.5em }
-
-/* Uncomment (and remove this text!) to get bold-faced definition list terms
-dl.docutils dt {
- font-weight: bold }
-*/
-
-div.abstract {
- margin: 2em 5em }
-
-div.abstract p.topic-title {
- font-weight: bold ;
- text-align: center }
-
-div.admonition, div.attention, div.caution, div.danger, div.error,
-div.hint, div.important, div.note, div.tip, div.warning {
- margin: 2em ;
- border: medium outset ;
- padding: 1em }
-
-div.admonition p.admonition-title, div.hint p.admonition-title,
-div.important p.admonition-title, div.note p.admonition-title,
-div.tip p.admonition-title {
- font-weight: bold ;
- font-family: sans-serif }
-
-div.attention p.admonition-title, div.caution p.admonition-title,
-div.danger p.admonition-title, div.error p.admonition-title,
-div.warning p.admonition-title {
- color: red ;
- font-weight: bold ;
- font-family: sans-serif }
-
-/* Uncomment (and remove this text!) to get reduced vertical space in
- compound paragraphs.
-div.compound .compound-first, div.compound .compound-middle {
- margin-bottom: 0.5em }
-
-div.compound .compound-last, div.compound .compound-middle {
- margin-top: 0.5em }
-*/
-
-div.dedication {
- margin: 2em 5em ;
- text-align: center ;
- font-style: italic }
-
-div.dedication p.topic-title {
- font-weight: bold ;
- font-style: normal }
-
-div.figure {
- margin-left: 2em ;
- margin-right: 2em }
-
-div.footer, div.header {
- clear: both;
- font-size: smaller }
-
-div.line-block {
- display: block ;
- margin-top: 1em ;
- margin-bottom: 1em }
-
-div.line-block div.line-block {
- margin-top: 0 ;
- margin-bottom: 0 ;
- margin-left: 1.5em }
-
-div.sidebar {
- margin-left: 1em ;
- border: medium outset ;
- padding: 1em ;
- background-color: #ffffee ;
- width: 40% ;
- float: right ;
- clear: right }
-
-div.sidebar p.rubric {
- font-family: sans-serif ;
- font-size: medium }
-
-div.system-messages {
- margin: 5em }
-
-div.system-messages h1 {
- color: red }
-
-div.system-message {
- border: medium outset ;
- padding: 1em }
-
-div.system-message p.system-message-title {
- color: red ;
- font-weight: bold }
-
-div.topic {
- margin: 2em }
-
-h1.section-subtitle, h2.section-subtitle, h3.section-subtitle,
-h4.section-subtitle, h5.section-subtitle, h6.section-subtitle {
- margin-top: 0.4em }
-
-h1.title {
- text-align: center }
-
-h2.subtitle {
- text-align: center }
-
-hr.docutils {
- width: 75% }
-
-img.align-left {
- clear: left }
-
-img.align-right {
- clear: right }
-
-ol.simple, ul.simple {
- margin-bottom: 1em }
-
-ol.arabic {
- list-style: decimal }
-
-ol.loweralpha {
- list-style: lower-alpha }
-
-ol.upperalpha {
- list-style: upper-alpha }
-
-ol.lowerroman {
- list-style: lower-roman }
-
-ol.upperroman {
- list-style: upper-roman }
-
-p.attribution {
- text-align: right ;
- margin-left: 50% }
-
-p.caption {
- font-style: italic }
-
-p.credits {
- font-style: italic ;
- font-size: smaller }
-
-p.label {
- white-space: nowrap }
-
-p.rubric {
- font-weight: bold ;
- font-size: larger ;
- color: maroon ;
- text-align: center }
-
-p.sidebar-title {
- font-family: sans-serif ;
- font-weight: bold ;
- font-size: larger }
-
-p.sidebar-subtitle {
- font-family: sans-serif ;
- font-weight: bold }
-
-p.topic-title {
- font-weight: bold }
-
-pre.address {
- margin-bottom: 0 ;
- margin-top: 0 ;
- font-family: serif ;
- font-size: 100% }
-
-pre.literal-block, pre.doctest-block {
- margin-left: 2em ;
- margin-right: 2em }
-
-span.classifier {
- font-family: sans-serif ;
- font-style: oblique }
-
-span.classifier-delimiter {
- font-family: sans-serif ;
- font-weight: bold }
-
-span.interpreted {
- font-family: sans-serif }
-
-span.option {
- white-space: nowrap }
-
-span.pre {
- white-space: pre }
-
-span.problematic {
- color: red }
-
-span.section-subtitle {
- /* font-size relative to parent (h1..h6 element) */
- font-size: 80% }
-
-table.citation {
- border-left: solid 1px gray;
- margin-left: 1px }
-
-table.docinfo {
- margin: 2em 4em }
-
-table.docutils {
- margin-top: 0.5em ;
- margin-bottom: 0.5em }
-
-table.footnote {
- border-left: solid 1px black;
- margin-left: 1px }
-
-table.docutils td, table.docutils th,
-table.docinfo td, table.docinfo th {
- padding-left: 0.5em ;
- padding-right: 0.5em ;
- vertical-align: top }
-
-table.docutils th.field-name, table.docinfo th.docinfo-name {
- font-weight: bold ;
- text-align: left ;
- white-space: nowrap ;
- padding-left: 0 }
-
-h1 tt.docutils, h2 tt.docutils, h3 tt.docutils,
-h4 tt.docutils, h5 tt.docutils, h6 tt.docutils {
- font-size: 100% }
-
-ul.auto-toc {
- list-style-type: none }
-
+++ /dev/null
-
-function togglePopbox(id, show, hide) {
- var link = document.getElementById(id + "_link");
- var div = document.getElementById(id + "_div");
- if (div.style.display == 'block') {
- div.style.display = 'none';
- if (link) {
- link.firstChild.nodeValue = show;
- }
- }
- else if (div.style.display == 'none') {
- div.style.display = 'block';
- if (link) {
- link.firstChild.nodeValue = hide;
- }
- }
-}
-
-function alphaApi() {
- window.open("alphaapi.html", "_blank", "width=600,height=400, scrollbars=yes,resizable=yes,toolbar=no");
-}
-
-function alphaImplementation() {
- window.open("alphaimplementation.html", "_blank", "width=600,height=400, scrollbars=yes,resizable=yes,toolbar=no");
-}
\ No newline at end of file
+++ /dev/null
-body, td, .normaltype {
- font-family: verdana, sans-serif;
-}
-
-body {
- background-color: #FDFBFC;
- margin:20px 20px 20px 20px;
-}
-
-
-p {
- margin-top:10px;
- margin-bottom:10px;
-}
-
-a {font-weight:normal; text-decoration:underline;}
-a:link {color:#0000FF;}
-a:visited {color:#0000FF;}
-a:active {color:#0000FF;}
-a:hover {color:#700000;}
-
-strong a {
- font-weight: bold;
-}
-
-.toc {
- background-color: #EEEEFB;
- border: 1px solid;
- /*padding:10px 8px 10px 15px;*/
- padding: 10px 10px 10px 10px;
- margin: 5px;
-}
-
-.trailbold {
- font-weight:bold;
-}
-
-.light {
- background-color: #EFEFEF;
-}
-
-.dark {
- background-color: #D2D2D2;
-}
-
-.smalllogo {
- float:left;
-}
-
-.headerbar {
- padding-bottom: 60px;
-}
-
-.header {
- font-weight: bold;
- font-size: 1.6em;
-}
-
-.smallheader {
- font-weight: bold;
- font-size: 1.4em;
-}
-
-
-.toolbar {
- text-align:right;
- margin: 0px 10px 0px 10px
-}
-
-.copyright {
- padding-top: 30px;
- text-align:center;
- font-size: 0.8em;
- color: #5F5F5F;
-}
-
-.small {
- font-size: 0.8em;
-}
-
-.sforgelogo {
- text-align:right;
- height: 40px;
-}
-
-.source {
- border: 1px solid;
- padding: 10px;
- width: auto;
-}
-
+++ /dev/null
-
-.substitution, .compcall {
- color: #DF2020;
-}
-
-
-.controlline {
- color: #10109E;
-}
-
-.doctag_text, .python_comment, .doctag {
- color: #109010;
-}
-
-.argstag_text {
- color: #10109E;
-}
-
-.blocktag, .python_keyword, .deftag, .argstag {
- #color: #1010FF;
- color: #0908CE;
-}
-
-.blocktag_text {
- color: #10109E;
-}
-
-.python_literal, .python_number {
- color: #804049;
-}
-
-.text {
- color: #807079;
-}
-
-.python_operator {
- color: #BF0005;
-}
-
-.python_enclosure {
- color: #0000FF;
-}
-
-.compname {
- color: #272767;
-}
-
-.python_name, name {
- color: #070707;
-}
-
-
-
-
+# -*- fill-column: 78 -*-
# mysql.py
# Copyright (C) 2005, 2006, 2007, 2008 Michael Bayer mike_mp@zzzcomputing.com
#
"""Support for the MySQL database.
+Overview
+--------
+
+For normal SQLAlchemy usage, importing this module is unnecessary. It will be
+loaded on-demand when a MySQL connection is needed. The generic column types
+like :class:`~sqlalchemy.String` and :class:`~sqlalchemy.Integer` will
+automatically be adapted to the optimal matching MySQL column type.
+
+But if you would like to use one of the MySQL-specific or enhanced column
+types when creating tables with your :class:`~sqlalchemy.Table` definitions,
+then you will need to import them from this module::
+
+ from sqlalchemy.databases import mysql
+
+ Table('mytable', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('ittybittyblob', mysql.MSTinyBlob),
+ Column('biggy', mysql.MSBigInteger(unsigned=True)))
+
+All standard MySQL column types are supported. The OpenGIS types are
+available for use via table reflection but have no special support or mapping
+to Python classes. If you're using these types and have opinions about how
+OpenGIS can be smartly integrated into SQLAlchemy please join the mailing
+list!
+
+Supported Versions and Features
+-------------------------------
+
SQLAlchemy supports 6 major MySQL versions: 3.23, 4.0, 4.1, 5.0, 5.1 and 6.0,
-with capablities increasing with more modern servers.
+with capabilities increasing with more modern servers.
Versions 4.1 and higher support the basic SQL functionality that SQLAlchemy
-uses in the ORM and SQL expressions. These versions pass the applicable
-tests in the suite 100%. No heroic measures are taken to work around major
-missing SQL features- if your server version does not support sub-selects, for
+uses in the ORM and SQL expressions. These versions pass the applicable tests
+in the suite 100%. No heroic measures are taken to work around major missing
+SQL features- if your server version does not support sub-selects, for
example, they won't work in SQLAlchemy either.
Currently, the only DB-API driver supported is `MySQL-Python` (also referred to
See the official MySQL documentation for detailed information about features
supported in any given server release.
+Character Sets
+--------------
+
Many MySQL server installations default to a ``latin1`` encoding for client
-connections. All data sent through the connection will be converted
-into ``latin1``, even if you have ``utf8`` or another character set on your
-tables and columns. With versions 4.1 and higher, you can change the
-connection character set either through server configuration or by passing
-the ``charset`` parameter to ``create_engine``. The ``charset`` option is
-passed through to MySQL-Python and has the side-effect of also enabling
-``use_unicode`` in the driver by default. For regular encoded strings, also
-pass ``use_unicode=0`` in the connection arguments.
-
-Most MySQL server installations have a default table type of `MyISAM`, a
-non-transactional table type. During a transaction, non-transactional
-storage engines do not participate and continue to store table changes in
-autocommit mode. For fully atomic transactions, all participating tables
-must use a transactional engine such as `InnoDB`, `Falcon`, `SolidDB`,
-`PBXT`, etc. Storage engines can be elected when creating tables in
-SQLAlchemy by supplying a ``mysql_engine='whatever'`` to the ``Table``
-constructor. Any MySQL table creation option can be specified in this syntax.
-
-Not all MySQL storage engines support foreign keys. For `MyISAM` and similar
-engines, the information loaded by table reflection will not include foreign
-keys. For these tables, you may supply ``ForeignKeyConstraints`` at reflection
-time::
-
- Table('mytable', metadata, autoload=True,
- ForeignKeyConstraint(['other_id'], ['othertable.other_id']))
-
-When creating tables, SQLAlchemy will automatically set AUTO_INCREMENT on an
-integer primary key column::
+connections. All data sent through the connection will be converted into
+``latin1``, even if you have ``utf8`` or another character set on your tables
+and columns. With versions 4.1 and higher, you can change the connection
+character set either through server configuration or by including the
+``charset`` parameter in the URL used for ``create_engine``. The ``charset``
+option is passed through to MySQL-Python and has the side-effect of also
+enabling ``use_unicode`` in the driver by default. For regular encoded
+strings, also pass ``use_unicode=0`` in the connection arguments::
+
+ # set client encoding to utf8; all strings come back as unicode
+ create_engine('mysql:///mydb?charset=utf8')
+
+ # set client encoding to utf8; all strings come back as utf8 str
+ create_engine('mysql:///mydb?charset=utf8&use_unicode=0')
+
+Storage Engines
+---------------
+
+Most MySQL server installations have a default table type of ``MyISAM``, a
+non-transactional table type. During a transaction, non-transactional storage
+engines do not participate and continue to store table changes in autocommit
+mode. For fully atomic transactions, all participating tables must use a
+transactional engine such as ``InnoDB``, ``Falcon``, ``SolidDB``, ``PBXT``, etc.
+
+Storage engines can be elected when creating tables in SQLAlchemy by supplying
+a ``mysql_engine='whatever'`` to the ``Table`` constructor. Any MySQL table
+creation option can be specified in this syntax::
+
+ Table('mytable', metadata,
+ Column('data', String(32)),
+ mysql_engine='InnoDB',
+ mysql_charset='utf8'
+ )
+
+Keys
+----
+
+Not all MySQL storage engines support foreign keys. For ``MyISAM`` and
+similar engines, the information loaded by table reflection will not include
+foreign keys. For these tables, you may supply a
+:class:`~sqlalchemy.ForeignKeyConstraint` at reflection time::
+
+ Table('mytable', metadata,
+ ForeignKeyConstraint(['other_id'], ['othertable.other_id']),
+ autoload=True
+ )
+
+When creating tables, SQLAlchemy will automatically set ``AUTO_INCREMENT`` on
+an integer primary key column::
>>> t = Table('mytable', metadata,
- ... Column('mytable_id', Integer, primary_key=True))
+ ... Column('mytable_id', Integer, primary_key=True)
+ ... )
>>> t.create()
CREATE TABLE mytable (
          mytable_id INTEGER NOT NULL AUTO_INCREMENT,
          PRIMARY KEY (mytable_id)
)
-You can disable this behavior by supplying ``autoincrement=False`` in addition.
-This can also be used to enable auto-increment on a secondary column in a
-multi-column key for some storage engines::
+You can disable this behavior by supplying ``autoincrement=False`` to the
+:class:`~sqlalchemy.Column`. This flag can also be used to enable
+auto-increment on a secondary column in a multi-column key for some storage
+engines::
Table('mytable', metadata,
Column('gid', Integer, primary_key=True, autoincrement=False),
- Column('id', Integer, primary_key=True))
+ Column('id', Integer, primary_key=True)
+ )
+
+SQL Mode
+--------
MySQL SQL modes are supported. Modes that enable ``ANSI_QUOTES`` (such as
``ANSI``) require an engine option to modify SQLAlchemy's quoting style.
create_engine('mysql://localhost/test', use_ansiquotes=True)
This is an engine-wide option and is not toggleable on a per-connection basis.
-SQLAlchemy does not presume to ``SET sql_mode`` for you with this option.
-For the best performance, set the quoting style server-wide in ``my.cnf`` or
-by supplying ``--sql-mode`` to ``mysqld``. You can also use a ``Pool`` hook
-to issue a ``SET SESSION sql_mode='...'`` on connect to configure each
-connection.
+SQLAlchemy does not presume to ``SET sql_mode`` for you with this option. For
+the best performance, set the quoting style server-wide in ``my.cnf`` or by
+supplying ``--sql-mode`` to ``mysqld``. You can also use a
+:class:`sqlalchemy.pool.Pool` listener hook to issue a ``SET SESSION
+sql_mode='...'`` on connect to configure each connection.
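+
+As a sketch (assuming the ``PoolListener`` interface from
+``sqlalchemy.interfaces`` and the ``listeners`` argument to
+``create_engine``; the listener class name here is illustrative), such a
+per-connection hook might look like::
+
+  from sqlalchemy import create_engine
+  from sqlalchemy.interfaces import PoolListener
+
+  class SetANSIMode(PoolListener):
+      # illustrative sketch: invoked once for each new DBAPI connection
+      # created by the pool
+      def connect(self, dbapi_con, con_record):
+          cursor = dbapi_con.cursor()
+          cursor.execute("SET SESSION sql_mode='ANSI'")
+          cursor.close()
+
+  engine = create_engine('mysql://localhost/test',
+                         use_ansiquotes=True,
+                         listeners=[SetANSIMode()])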
-If you do not specify 'use_ansiquotes', the regular MySQL quoting style is
-used by default. Table reflection operations will query the server
+If you do not specify ``use_ansiquotes``, the regular MySQL quoting style is
+used by default.
-If you do issue a 'SET sql_mode' through SQLAlchemy, the dialect must be
+If you do issue a ``SET sql_mode`` through SQLAlchemy, the dialect must be
updated if the quoting style is changed. Again, this change will affect all
connections::
connection.execute('SET sql_mode="ansi"')
connection.dialect.use_ansiquotes = True
-For normal SQLAlchemy usage, loading this module is unnescesary. It will be
-loaded on-demand when a MySQL connection is needed. The generic column types
-like ``String`` and ``Integer`` will automatically be adapted to the optimal
-matching MySQL column type.
-
-But if you would like to use one of the MySQL-specific or enhanced column
-types when creating tables with your ``Table`` definitions, then you will
-need to import them from this module::
-
- from sqlalchemy.databases import mysql
-
- Table('mytable', metadata,
- Column('id', Integer, primary_key=True),
- Column('ittybittyblob', mysql.MSTinyBlob),
- Column('biggy', mysql.MSBigInteger(unsigned=True)))
-
-All standard MySQL column types are supported. The OpenGIS types are
-available for use via table reflection but have no special support or
-mapping to Python classes. If you're using these types and have opinions
-about how OpenGIS can be smartly integrated into SQLAlchemy please join
-the mailing list!
+MySQL SQL Extensions
+--------------------
Many of the MySQL SQL extensions are handled through SQLAlchemy's generic
function and operator support::
table.select(table.c.password==func.md5('plaintext'))
table.select(table.c.username.op('regexp')('^[a-d]'))
-And of course any valid statement can be executed as a string rather than
-through the SQL expression language.
+And of course any valid MySQL statement can be executed as a string as well.
-Some limited support for MySQL extensions to SQL expressions is currently
+Some limited direct support for MySQL extensions to SQL is currently
available.
* SELECT pragma::
update(..., mysql_limit=10)
+Troubleshooting
+---------------
+
If you have problems that seem server related, first check that you are
using the most recent stable MySQL-Python package available. The Database
Notes page on the wiki at http://www.sqlalchemy.org is a good resource for
timely information affecting MySQL in SQLAlchemy.
+
"""
import datetime, decimal, inspect, re, sys
def __init__(self, precision=10, scale=2, asdecimal=True, **kw):
"""Construct a NUMERIC.
- precision
- Total digits in this number. If scale and precision are both
- None, values are stored to limits allowed by the server.
+ :param precision: Total digits in this number. If scale and precision
+ are both None, values are stored to limits allowed by the server.
- scale
- The number of digits after the decimal point.
+ :param scale: The number of digits after the decimal point.
- unsigned
- Optional.
+ :param unsigned: a boolean, optional.
- zerofill
- Optional. If true, values will be stored as strings left-padded with
- zeros. Note that this does not effect the values returned by the
- underlying database API, which continue to be numeric.
- """
+ :param zerofill: Optional. If true, values will be stored as strings
+          left-padded with zeros. Note that this does not affect the values
+ returned by the underlying database API, which continue to be
+ numeric.
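+
+        For instance, a zerofilled column might be declared as follows (an
+        illustrative sketch; ``mysql`` refers to this module)::
+
+            Column('amount', mysql.MSNumeric(10, 2, zerofill=True))
+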
+ """
_NumericType.__init__(self, kw)
sqltypes.Numeric.__init__(self, precision, scale, asdecimal=asdecimal, **kw)
def __init__(self, precision=10, scale=2, asdecimal=True, **kw):
"""Construct a DECIMAL.
- precision
- Total digits in this number. If scale and precision are both None,
- values are stored to limits allowed by the server.
+ :param precision: Total digits in this number. If scale and precision
+ are both None, values are stored to limits allowed by the server.
- scale
- The number of digits after the decimal point.
+ :param scale: The number of digits after the decimal point.
- unsigned
- Optional.
+ :param unsigned: a boolean, optional.
- zerofill
- Optional. If true, values will be stored as strings left-padded with
- zeros. Note that this does not effect the values returned by the
- underlying database API, which continue to be numeric.
- """
+ :param zerofill: Optional. If true, values will be stored as strings
+          left-padded with zeros. Note that this does not affect the values
+ returned by the underlying database API, which continue to be
+ numeric.
+ """
super(MSDecimal, self).__init__(precision, scale, asdecimal=asdecimal, **kw)
def get_col_spec(self):
def __init__(self, precision=None, scale=None, asdecimal=True, **kw):
"""Construct a DOUBLE.
- precision
- Total digits in this number. If scale and precision are both None,
- values are stored to limits allowed by the server.
+ :param precision: Total digits in this number. If scale and precision
+ are both None, values are stored to limits allowed by the server.
- scale
- The number of digits after the decimal point.
+ :param scale: The number of digits after the decimal point.
- unsigned
- Optional.
+ :param unsigned: a boolean, optional.
- zerofill
- Optional. If true, values will be stored as strings left-padded with
- zeros. Note that this does not effect the values returned by the
- underlying database API, which continue to be numeric.
- """
+ :param zerofill: Optional. If true, values will be stored as strings
+          left-padded with zeros. Note that this does not affect the values
+ returned by the underlying database API, which continue to be
+ numeric.
+ """
if ((precision is None and scale is not None) or
(precision is not None and scale is None)):
raise exc.ArgumentError(
def __init__(self, precision=None, scale=None, asdecimal=True, **kw):
"""Construct a REAL.
- precision
- Total digits in this number. If scale and precision are both None,
- values are stored to limits allowed by the server.
+ :param precision: Total digits in this number. If scale and precision
+ are both None, values are stored to limits allowed by the server.
- scale
- The number of digits after the decimal point.
+ :param scale: The number of digits after the decimal point.
- unsigned
- Optional.
+ :param unsigned: a boolean, optional.
+
+ :param zerofill: Optional. If true, values will be stored as strings
+          left-padded with zeros. Note that this does not affect the values
+ returned by the underlying database API, which continue to be
+ numeric.
- zerofill
- Optional. If true, values will be stored as strings left-padded with
- zeros. Note that this does not effect the values returned by the
- underlying database API, which continue to be numeric.
"""
MSDouble.__init__(self, precision, scale, asdecimal, **kw)
def __init__(self, precision=None, scale=None, asdecimal=False, **kw):
"""Construct a FLOAT.
- precision
- Total digits in this number. If scale and precision are both None,
- values are stored to limits allowed by the server.
+ :param precision: Total digits in this number. If scale and precision
+ are both None, values are stored to limits allowed by the server.
- scale
- The number of digits after the decimal point.
+ :param scale: The number of digits after the decimal point.
- unsigned
- Optional.
+ :param unsigned: a boolean, optional.
- zerofill
- Optional. If true, values will be stored as strings left-padded with
- zeros. Note that this does not effect the values returned by the
- underlying database API, which continue to be numeric.
- """
+ :param zerofill: Optional. If true, values will be stored as strings
+          left-padded with zeros. Note that this does not affect the values
+ returned by the underlying database API, which continue to be
+ numeric.
+ """
_NumericType.__init__(self, kw)
sqltypes.Float.__init__(self, asdecimal=asdecimal, **kw)
self.scale = scale
def __init__(self, display_width=None, **kw):
"""Construct an INTEGER.
- display_width
- Optional, maximum display width for this number.
+ :param display_width: Optional, maximum display width for this number.
- unsigned
- Optional.
+ :param unsigned: a boolean, optional.
- zerofill
- Optional. If true, values will be stored as strings left-padded with
- zeros. Note that this does not effect the values returned by the
- underlying database API, which continue to be numeric.
- """
+ :param zerofill: Optional. If true, values will be stored as strings
+          left-padded with zeros. Note that this does not affect the values
+ returned by the underlying database API, which continue to be
+ numeric.
+ """
if 'length' in kw:
util.warn_deprecated("'length' is deprecated for MSInteger and subclasses. Use 'display_width'.")
self.display_width = kw.pop('length')
def __init__(self, display_width=None, **kw):
"""Construct a BIGINTEGER.
- display_width
- Optional, maximum display width for this number.
+ :param display_width: Optional, maximum display width for this number.
- unsigned
- Optional.
+ :param unsigned: a boolean, optional.
- zerofill
- Optional. If true, values will be stored as strings left-padded with
- zeros. Note that this does not effect the values returned by the
- underlying database API, which continue to be numeric.
- """
+ :param zerofill: Optional. If true, values will be stored as strings
+          left-padded with zeros. Note that this does not affect the values
+ returned by the underlying database API, which continue to be
+ numeric.
+ """
super(MSBigInteger, self).__init__(display_width, **kw)
def get_col_spec(self):
else:
return self._extend("BIGINT")
+
class MSMediumInteger(MSInteger):
"""MySQL MEDIUMINTEGER type."""
def __init__(self, display_width=None, **kw):
"""Construct a MEDIUMINTEGER
- display_width
- Optional, maximum display width for this number.
+ :param display_width: Optional, maximum display width for this number.
- unsigned
- Optional.
+ :param unsigned: a boolean, optional.
- zerofill
- Optional. If true, values will be stored as strings left-padded with
- zeros. Note that this does not effect the values returned by the
- underlying database API, which continue to be numeric.
- """
+ :param zerofill: Optional. If true, values will be stored as strings
+          left-padded with zeros. Note that this does not affect the values
+ returned by the underlying database API, which continue to be
+ numeric.
+ """
super(MSMediumInteger, self).__init__(display_width, **kw)
def get_col_spec(self):
reflected during Table(..., autoload=True) are treated as
Boolean columns.
- display_width
- Optional, maximum display width for this number.
+ :param display_width: Optional, maximum display width for this number.
- unsigned
- Optional.
+ :param unsigned: a boolean, optional.
- zerofill
- Optional. If true, values will be stored as strings left-padded with
- zeros. Note that this does not effect the values returned by the
- underlying database API, which continue to be numeric.
- """
+ :param zerofill: Optional. If true, values will be stored as strings
+          left-padded with zeros. Note that this does not affect the values
+ returned by the underlying database API, which continue to be
+ numeric.
+ """
super(MSTinyInteger, self).__init__(display_width, **kw)
def get_col_spec(self):
def __init__(self, display_width=None, **kw):
"""Construct a SMALLINTEGER.
- display_width
- Optional, maximum display width for this number.
+ :param display_width: Optional, maximum display width for this number.
- unsigned
- Optional.
+ :param unsigned: a boolean, optional.
- zerofill
- Optional. If true, values will be stored as strings left-padded with
- zeros. Note that this does not effect the values returned by the
- underlying database API, which continue to be numeric.
- """
+ :param zerofill: Optional. If true, values will be stored as strings
+          left-padded with zeros. Note that this does not affect the values
+ returned by the underlying database API, which continue to be
+ numeric.
+ """
self.display_width = display_width
_NumericType.__init__(self, kw)
sqltypes.SmallInteger.__init__(self, **kw)
"""MySQL BIT type.
This type is for MySQL 5.0.3 or greater for MyISAM, and 5.0.5 or greater for
- MyISAM, MEMORY, InnoDB and BDB. For older versions, use a MSTinyInteger(1)
+    MEMORY, InnoDB and BDB. For older versions, use a MSTinyInteger()
type.
+
"""
def __init__(self, length=None):
+ """Construct a BIT.
+
+ :param length: Optional, number of bits.
+
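+        For example, a column using this type might be declared as follows
+        (an illustrative sketch; ``mysql`` refers to this module)::
+
+            Column('flags', mysql.MSBit(2))
+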
+ """
self.length = length
def result_processor(self, dialect):
class MSTimeStamp(sqltypes.TIMESTAMP):
"""MySQL TIMESTAMP type.
- To signal the orm to automatically re-select modified rows to retrieve
- the updated timestamp, add a DefaultClause to your column specification::
+ To signal the orm to automatically re-select modified rows to retrieve the
+ updated timestamp, add a ``server_default`` to your
+ :class:`~sqlalchemy.Column` specification::
from sqlalchemy.databases import mysql
Column('updated', mysql.MSTimeStamp,
- server_default=sql.text('CURRENT_TIMESTAMP'))
+ server_default=sql.text('CURRENT_TIMESTAMP')
+ )
The full range of MySQL 4.1+ TIMESTAMP defaults can be specified in
- the the default:
+      the default::
        server_default=sql.text('CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP')
"""
+
def get_col_spec(self):
return "TIMESTAMP"
def __init__(self, length=None, **kwargs):
"""Construct a TEXT.
- length
- Optional, if provided the server may optimize storage by
- subsitituting the smallest TEXT type sufficient to store
+ :param length: Optional, if provided the server may optimize storage
+ by substituting the smallest TEXT type sufficient to store
``length`` characters.
- charset
- Optional, a column-level character set for this string
- value. Takes precendence to 'ascii' or 'unicode' short-hand.
-
- collation
- Optional, a column-level collation for this string value.
- Takes precedence to 'binary' short-hand.
-
- ascii
- Defaults to False: short-hand for the ``latin1`` character set,
- generates ASCII in schema.
-
- unicode
- Defaults to False: short-hand for the ``ucs2`` character set,
- generates UNICODE in schema.
-
- national
- Optional. If true, use the server's configured national
- character set.
-
- binary
- Defaults to False: short-hand, pick the binary collation type
- that matches the column's character set. Generates BINARY in
- schema. This does not affect the type of data stored, only the
- collation of character data.
- """
+ :param charset: Optional, a column-level character set for this string
+          value. Takes precedence over 'ascii' or 'unicode' short-hand.
+
+        :param collation: Optional, a column-level collation for this string
+          value. Takes precedence over 'binary' short-hand.
+
+ :param ascii: Defaults to False: short-hand for the ``latin1``
+ character set, generates ASCII in schema.
+
+ :param unicode: Defaults to False: short-hand for the ``ucs2``
+ character set, generates UNICODE in schema.
+
+ :param national: Optional. If true, use the server's configured
+ national character set.
+
+ :param binary: Defaults to False: short-hand, pick the binary
+ collation type that matches the column's character set. Generates
+ BINARY in schema. This does not affect the type of data stored,
+ only the collation of character data.
+ """
_StringType.__init__(self, **kwargs)
sqltypes.Text.__init__(self, length,
kwargs.get('convert_unicode', False), kwargs.get('assert_unicode', None))
def __init__(self, **kwargs):
"""Construct a TINYTEXT.
- charset
- Optional, a column-level character set for this string
- value. Takes precendence to 'ascii' or 'unicode' short-hand.
-
- collation
- Optional, a column-level collation for this string value.
- Takes precedence to 'binary' short-hand.
-
- ascii
- Defaults to False: short-hand for the ``latin1`` character set,
- generates ASCII in schema.
-
- unicode
- Defaults to False: short-hand for the ``ucs2`` character set,
- generates UNICODE in schema.
-
- national
- Optional. If true, use the server's configured national
- character set.
-
- binary
- Defaults to False: short-hand, pick the binary collation type
- that matches the column's character set. Generates BINARY in
- schema. This does not affect the type of data stored, only the
- collation of character data.
+ :param charset: Optional, a column-level character set for this string
+          value. Takes precedence over 'ascii' or 'unicode' short-hand.
+
+        :param collation: Optional, a column-level collation for this string
+          value. Takes precedence over 'binary' short-hand.
+
+ :param ascii: Defaults to False: short-hand for the ``latin1``
+ character set, generates ASCII in schema.
+
+ :param unicode: Defaults to False: short-hand for the ``ucs2``
+ character set, generates UNICODE in schema.
+
+ :param national: Optional. If true, use the server's configured
+ national character set.
+
+ :param binary: Defaults to False: short-hand, pick the binary
+ collation type that matches the column's character set. Generates
+ BINARY in schema. This does not affect the type of data stored,
+ only the collation of character data.
+
"""
super(MSTinyText, self).__init__(**kwargs)
def __init__(self, **kwargs):
"""Construct a MEDIUMTEXT.
- charset
- Optional, a column-level character set for this string
- value. Takes precendence to 'ascii' or 'unicode' short-hand.
-
- collation
- Optional, a column-level collation for this string value.
- Takes precedence to 'binary' short-hand.
-
- ascii
- Defaults to False: short-hand for the ``latin1`` character set,
- generates ASCII in schema.
-
- unicode
- Defaults to False: short-hand for the ``ucs2`` character set,
- generates UNICODE in schema.
-
- national
- Optional. If true, use the server's configured national
- character set.
-
- binary
- Defaults to False: short-hand, pick the binary collation type
- that matches the column's character set. Generates BINARY in
- schema. This does not affect the type of data stored, only the
- collation of character data.
- """
+ :param charset: Optional, a column-level character set for this string
+          value. Takes precedence over 'ascii' or 'unicode' short-hand.
+
+        :param collation: Optional, a column-level collation for this string
+          value. Takes precedence over 'binary' short-hand.
+
+ :param ascii: Defaults to False: short-hand for the ``latin1``
+ character set, generates ASCII in schema.
+
+ :param unicode: Defaults to False: short-hand for the ``ucs2``
+ character set, generates UNICODE in schema.
+
+ :param national: Optional. If true, use the server's configured
+          national character set.
+
+        :param binary: Defaults to False: short-hand, pick the binary
+ collation type that matches the column's character set. Generates
+ BINARY in schema. This does not affect the type of data stored,
+ only the collation of character data.
+
+ """
super(MSMediumText, self).__init__(**kwargs)
def get_col_spec(self):
def __init__(self, **kwargs):
"""Construct a LONGTEXT.
- charset
- Optional, a column-level character set for this string
- value. Takes precendence to 'ascii' or 'unicode' short-hand.
-
- collation
- Optional, a column-level collation for this string value.
- Takes precedence to 'binary' short-hand.
-
- ascii
- Defaults to False: short-hand for the ``latin1`` character set,
- generates ASCII in schema.
-
- unicode
- Defaults to False: short-hand for the ``ucs2`` character set,
- generates UNICODE in schema.
-
- national
- Optional. If true, use the server's configured national
- character set.
-
- binary
- Defaults to False: short-hand, pick the binary collation type
- that matches the column's character set. Generates BINARY in
- schema. This does not affect the type of data stored, only the
- collation of character data.
- """
+ :param charset: Optional, a column-level character set for this string
+          value. Takes precedence over 'ascii' or 'unicode' short-hand.
+
+        :param collation: Optional, a column-level collation for this string
+          value. Takes precedence over 'binary' short-hand.
+
+ :param ascii: Defaults to False: short-hand for the ``latin1``
+ character set, generates ASCII in schema.
+
+ :param unicode: Defaults to False: short-hand for the ``ucs2``
+ character set, generates UNICODE in schema.
+
+ :param national: Optional. If true, use the server's configured
+ national character set.
+
+ :param binary: Defaults to False: short-hand, pick the binary
+ collation type that matches the column's character set. Generates
+ BINARY in schema. This does not affect the type of data stored,
+ only the collation of character data.
+
+ """
super(MSLongText, self).__init__(**kwargs)
def get_col_spec(self):
def __init__(self, length=None, **kwargs):
"""Construct a VARCHAR.
- length
- Maximum data length, in characters.
+        :param length: Maximum data length, in characters.
+
+        :param charset: Optional, a column-level character set for this string
+          value. Takes precedence over 'ascii' or 'unicode' short-hand.
- charset
- Optional, a column-level character set for this string
- value. Takes precendence to 'ascii' or 'unicode' short-hand.
-
- collation
- Optional, a column-level collation for this string value.
- Takes precedence to 'binary' short-hand.
-
- ascii
- Defaults to False: short-hand for the ``latin1`` character set,
- generates ASCII in schema.
-
- unicode
- Defaults to False: short-hand for the ``ucs2`` character set,
- generates UNICODE in schema.
-
- national
- Optional. If true, use the server's configured national
- character set.
-
- binary
- Defaults to False: short-hand, pick the binary collation type
- that matches the column's character set. Generates BINARY in
- schema. This does not affect the type of data stored, only the
- collation of character data.
- """
+ :param collation: Optional, a column-level collation for this string
+          value. Takes precedence over 'binary' short-hand.
+
+ :param ascii: Defaults to False: short-hand for the ``latin1``
+ character set, generates ASCII in schema.
+
+ :param unicode: Defaults to False: short-hand for the ``ucs2``
+ character set, generates UNICODE in schema.
+
+ :param national: Optional. If true, use the server's configured
+ national character set.
+
+ :param binary: Defaults to False: short-hand, pick the binary
+ collation type that matches the column's character set. Generates
+ BINARY in schema. This does not affect the type of data stored,
+ only the collation of character data.
+ """
_StringType.__init__(self, **kwargs)
sqltypes.String.__init__(self, length,
kwargs.get('convert_unicode', False), kwargs.get('assert_unicode', None))
def __init__(self, length, **kwargs):
"""Construct an NCHAR.
- length
- Maximum data length, in characters.
+ :param length: Maximum data length, in characters.
+
+ :param binary: Optional, use the default binary collation for the
+ national character set. This does not affect the type of data
+ stored, use a BINARY type for binary data.
- binary
- Optional, use the default binary collation for the national character
- set. This does not affect the type of data stored, use a BINARY
- type for binary data.
+ :param collation: Optional, request a particular collation. Must be
+ compatible with the national character set.
- collation
- Optional, request a particular collation. Must be compatibile
- with the national character set.
"""
_StringType.__init__(self, **kwargs)
sqltypes.CHAR.__init__(self, length,
def __init__(self, length=None, **kwargs):
"""Construct an NVARCHAR.
- length
- Maximum data length, in characters.
+ :param length: Maximum data length, in characters.
- binary
- Optional, use the default binary collation for the national character
- set. This does not affect the type of data stored, use a VARBINARY
- type for binary data.
+ :param binary: Optional, use the default binary collation for the
+ national character set. This does not affect the type of data
+          stored, use a VARBINARY type for binary data.
- collation
- Optional, request a particular collation. Must be compatibile
- with the national character set.
- """
+ :param collation: Optional, request a particular collation. Must be
+ compatible with the national character set.
+ """
kwargs['national'] = True
_StringType.__init__(self, **kwargs)
sqltypes.String.__init__(self, length,
def __init__(self, length=None, **kwargs):
"""Construct an NCHAR. Arguments are:
- length
- Maximum data length, in characters.
+ :param length: Maximum data length, in characters.
- binary
- Optional, request the default binary collation for the
- national character set.
+ :param binary: Optional, use the default binary collation for the
+ national character set. This does not affect the type of data
+ stored, use a BINARY type for binary data.
- collation
- Optional, request a particular collation. Must be compatibile
- with the national character set.
- """
+ :param collation: Optional, request a particular collation. Must be
+ compatible with the national character set.
+ """
kwargs['national'] = True
_StringType.__init__(self, **kwargs)
sqltypes.CHAR.__init__(self, length,
def __init__(self, length=None, **kw):
"""Construct a VARBINARY. Arguments are:
- length
- Maximum data length, in bytes.
+        :param length: Maximum data length, in bytes.
+
"""
super(MSVarBinary, self).__init__(length, **kw)
"""MySQL BINARY type, for fixed length binary data"""
def __init__(self, length=None, **kw):
- """Construct a BINARY. This is a fixed length type, and short
- values will be right-padded with a server-version-specific
- pad value.
+ """Construct a BINARY.
- length
- Maximum data length, in bytes. If length is not specified, this
- will generate a BLOB. This usage is deprecated.
- """
+ This is a fixed length type, and short values will be right-padded
+ with a server-version-specific pad value.
+
+ :param length: Maximum data length, in bytes. If length is not
+ specified, this will generate a BLOB. This usage is deprecated.
+ """
super(MSBinary, self).__init__(length, **kw)
def get_col_spec(self):
def __init__(self, length=None, **kw):
"""Construct a BLOB. Arguments are:
- length
- Optional, if provided the server may optimize storage by
- subsitituting the smallest TEXT type sufficient to store
+ :param length: Optional, if provided the server may optimize storage
+ by substituting the smallest TEXT type sufficient to store
``length`` characters.
- """
+ """
super(MSBlob, self).__init__(length, **kw)
def get_col_spec(self):
Arguments are:
- enums
- The range of valid values for this ENUM. Values will be quoted
- when generating the schema according to the quoting flag (see
+ :param enums: The range of valid values for this ENUM. Values will be
+ quoted when generating the schema according to the quoting flag (see
below).
- strict
- Defaults to False: ensure that a given value is in this ENUM's
- range of permissible values when inserting or updating rows.
- Note that MySQL will not raise a fatal error if you attempt to
- store an out of range value- an alternate value will be stored
- instead. (See MySQL ENUM documentation.)
+ :param strict: Defaults to False: ensure that a given value is in this
+ ENUM's range of permissible values when inserting or updating rows.
+ Note that MySQL will not raise a fatal error if you attempt to store
+      an out-of-range value; an alternate value will be stored instead.
+ (See MySQL ENUM documentation.)
- charset
- Optional, a column-level character set for this string
- value. Takes precendence to 'ascii' or 'unicode' short-hand.
+ :param charset: Optional, a column-level character set for this string
+      value. Takes precedence over 'ascii' or 'unicode' short-hand.
- collation
- Optional, a column-level collation for this string value.
- Takes precedence to 'binary' short-hand.
+ :param collation: Optional, a column-level collation for this string
+      value. Takes precedence over 'binary' short-hand.
- ascii
- Defaults to False: short-hand for the ``latin1`` character set,
- generates ASCII in schema.
+ :param ascii: Defaults to False: short-hand for the ``latin1``
+ character set, generates ASCII in schema.
- unicode
- Defaults to False: short-hand for the ``ucs2`` character set,
- generates UNICODE in schema.
+ :param unicode: Defaults to False: short-hand for the ``ucs2``
+ character set, generates UNICODE in schema.
- binary
- Defaults to False: short-hand, pick the binary collation type
- that matches the column's character set. Generates BINARY in
- schema. This does not affect the type of data stored, only the
- collation of character data.
+ :param binary: Defaults to False: short-hand, pick the binary
+ collation type that matches the column's character set. Generates
+ BINARY in schema. This does not affect the type of data stored,
+ only the collation of character data.
- quoting
- Defaults to 'auto': automatically determine enum value quoting. If
- all enum values are surrounded by the same quoting character, then
- use 'quoted' mode. Otherwise, use 'unquoted' mode.
+ :param quoting: Defaults to 'auto': automatically determine enum value
+ quoting. If all enum values are surrounded by the same quoting
+ character, then use 'quoted' mode. Otherwise, use 'unquoted' mode.
'quoted': values in enums are already quoted, they will be used
directly when generating the schema.
Arguments are:
- values
- The range of valid values for this SET. Values will be used
- exactly as they appear when generating schemas. Strings must
- be quoted, as in the example above. Single-quotes are suggested
- for ANSI compatability and are required for portability to servers
- with ANSI_QUOTES enabled.
+ :param values: The range of valid values for this SET. Values will be
+ used exactly as they appear when generating schemas. Strings must
+ be quoted, as in the example above. Single-quotes are suggested for
+ ANSI compatibility and are required for portability to servers with
+ ANSI_QUOTES enabled.
- charset
- Optional, a column-level character set for this string
- value. Takes precendence to 'ascii' or 'unicode' short-hand.
-
- collation
- Optional, a column-level collation for this string value.
- Takes precedence to 'binary' short-hand.
-
- ascii
- Defaults to False: short-hand for the ``latin1`` character set,
- generates ASCII in schema.
-
- unicode
- Defaults to False: short-hand for the ``ucs2`` character set,
- generates UNICODE in schema.
-
- binary
- Defaults to False: short-hand, pick the binary collation type
- that matches the column's character set. Generates BINARY in
- schema. This does not affect the type of data stored, only the
- collation of character data.
- """
+ :param charset: Optional, a column-level character set for this string
+      value. Takes precedence over 'ascii' or 'unicode' short-hand.
+
+ :param collation: Optional, a column-level collation for this string
+      value. Takes precedence over 'binary' short-hand.
+
+    :param ascii: Defaults to False: short-hand for the ``latin1``
+      character set, generates ASCII in schema.
+
+    :param unicode: Defaults to False: short-hand for the ``ucs2``
+      character set, generates UNICODE in schema.
+
+ :param binary: Defaults to False: short-hand, pick the binary
+ collation type that matches the column's character set. Generates
+ BINARY in schema. This does not affect the type of data stored,
+ only the collation of character data.
+
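+    A brief usage sketch (assuming the dialect's ``MSSet`` class; values
+    are passed pre-quoted, as noted above)::
+
+        from sqlalchemy import Column
+        from sqlalchemy.databases import mysql
+
+        # a SET of three possible flag values
+        Column('flags', mysql.MSSet("'read'", "'write'", "'admin'"))
+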
+ """
self.__ddl_values = values
strip_values = []
``dialect://user:password@host/dbname[?key=value..]``, where
``dialect`` is a name such as ``mysql``, ``oracle``, ``postgres``,
etc. Alternatively, the URL can be an instance of
- ``sqlalchemy.engine.url.URL``.
-
- `**kwargs` represents options to be sent to the Engine itself as
- well as the components of the Engine, including the Dialect, the
- ConnectionProvider, and the Pool. A list of common options is as
- follows:
-
- poolclass
- a subclass of ``sqlalchemy.pool.Pool`` which will be used to
- instantiate a connection pool.
-
- pool
- an instance of ``sqlalchemy.pool.DBProxy`` or
- ``sqlalchemy.pool.Pool`` to be used as the underlying source for
- connections (DBProxy/Pool is described in the previous section).
- This argument supercedes "poolclass".
-
- echo
- defaults to False: if True, the Engine will log all statements
- as well as a repr() of their parameter lists to the engines
- logger, which defaults to ``sys.stdout``. A Engine instances'
- `echo` data member can be modified at any time to turn logging
- on and off. If set to the string 'debug', result rows will be
- printed to the standard output as well.
-
- logger
- defaults to None: a file-like object where logging output can be
- sent, if `echo` is set to True. This defaults to
- ``sys.stdout``.
-
- encoding
- defaults to 'utf-8': the encoding to be used when
- encoding/decoding Unicode strings.
-
- convert_unicode
- defaults to False: true if unicode conversion should be applied
- to all str types.
-
- module
- defaults to None: this is a reference to a DB-API 2.0 module to
- be used instead of the dialect's default module.
-
- strategy
- allows alternate Engine implementations to take effect. Current
- implementations include ``plain`` and ``threadlocal``. The
- default used by this function is ``plain``.
-
- ``plain`` provides support for a Connection object which can be
- used to execute SQL queries with a specific underlying DB-API
- connection.
-
- ``threadlocal`` is similar to ``plain`` except that it adds
- support for a thread-local connection and transaction context,
- which allows a group of engine operations to participate using
- the same underlying connection and transaction without the need
- for explicitly passing a single Connection.
+ :class:`~sqlalchemy.engine.url.URL`.
+
+ `**kwargs` takes a wide variety of options which are routed
+ towards their appropriate components. Arguments may be
+ specific to the Engine, the underlying Dialect, as well as the
+ Pool. Specific dialects also accept keyword arguments that
+ are unique to that dialect. Here, we describe the parameters
+ that are common to most ``create_engine()`` usage.
+
+    :param assert_unicode=False: When set to ``True`` alongside
+      ``convert_unicode=True``, asserts that incoming string bind
+      parameters are instances of ``unicode``, otherwise raises an
+      error. Only takes effect when ``convert_unicode=True``. This
+ flag is also available on the ``String`` type and its
+ descendants. New in 0.4.2.
+
+ :param connect_args: a dictionary of options which will be
+ passed directly to the DBAPI's ``connect()`` method as
+ additional keyword arguments.
+
+ :param convert_unicode=False: if set to True, all
+ String/character based types will convert Unicode values to raw
+ byte values going into the database, and all raw byte values to
+ Python Unicode coming out in result sets. This is an
+ engine-wide method to provide unicode conversion across the
+ board. For unicode conversion on a column-by-column level, use
+ the ``Unicode`` column type instead, described in `types`.
+
+ :param creator: a callable which returns a DBAPI connection.
+ This creation function will be passed to the underlying
+ connection pool and will be used to create all new database
+ connections. Usage of this function causes connection
+ parameters specified in the URL argument to be bypassed.
+
+ :param echo=False: if True, the Engine will log all statements
+      as well as a repr() of their parameter lists to the engine's
+ logger, which defaults to sys.stdout. The ``echo`` attribute of
+ ``Engine`` can be modified at any time to turn logging on and
+ off. If set to the string ``"debug"``, result rows will be
+ printed to the standard output as well. This flag ultimately
+ controls a Python logger; see `dbengine_logging` at the end of
+ this chapter for information on how to configure logging
+ directly.
+
+ :param echo_pool=False: if True, the connection pool will log
+ all checkouts/checkins to the logging stream, which defaults to
+ sys.stdout. This flag ultimately controls a Python logger; see
+ `dbengine_logging` for information on how to configure logging
+ directly.
+
+ :param encoding='utf-8': the encoding to use for all Unicode
+ translations, both by engine-wide unicode conversion as well as
+ the ``Unicode`` type object.
+
+ :param label_length=None: optional integer value which limits
+ the size of dynamically generated column labels to that many
+ characters. If less than 6, labels are generated as
+ "_(counter)". If ``None``, the value of
+ ``dialect.max_identifier_length`` is used instead.
+
+ :param module=None: used by database implementations which
+ support multiple DBAPI modules, this is a reference to a DBAPI2
+ module to be used instead of the engine's default module. For
+ Postgres, the default is psycopg2. For Oracle, it's cx_Oracle.
+
+ :param pool=None: an already-constructed instance of
+ :class:`~sqlalchemy.pool.Pool`, such as a
+ :class:`~sqlalchemy.pool.QueuePool` instance. If non-None, this
+ pool will be used directly as the underlying connection pool
+ for the engine, bypassing whatever connection parameters are
+ present in the URL argument. For information on constructing
+ connection pools manually, see `pooling`.
+
+ :param poolclass=None: a :class:`~sqlalchemy.pool.Pool`
+ subclass, which will be used to create a connection pool
+ instance using the connection parameters given in the URL. Note
+ this differs from ``pool`` in that you don't actually
+ instantiate the pool in this case, you just indicate what type
+      of pool to use.
+
+ :param max_overflow=10: the number of connections to allow in
+ connection pool "overflow", that is connections that can be
+ opened above and beyond the pool_size setting, which defaults
+      to five. This is only used with :class:`~sqlalchemy.pool.QueuePool`.
+
+ :param pool_size=5: the number of connections to keep open
+      inside the connection pool. This is used with
+      :class:`~sqlalchemy.pool.QueuePool` as well as
+      :class:`~sqlalchemy.pool.SingletonThreadPool`.
+
+ :param pool_recycle=-1: this setting causes the pool to recycle
+ connections after the given number of seconds has passed. It
+ defaults to -1, or no timeout. For example, setting to 3600
+ means connections will be recycled after one hour. Note that
+      MySQL in particular will disconnect automatically if no
+      activity is detected on a connection for eight hours (although
+      this is configurable with the MySQLdb connection itself and the
+ server configuration as well).
+
+ :param pool_timeout=30: number of seconds to wait before giving
+ up on getting a connection from the pool. This is only used
+ with :class:`~sqlalchemy.pool.QueuePool`.
+
+    :param strategy='plain': used to invoke alternate :class:`~sqlalchemy.engine.base.Engine`
+ implementations. Currently available is the ``threadlocal``
+ strategy, which is described in :ref:`threadlocal_strategy`.
+
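+    For example, a short sketch (the URL and option values are
+    illustrative; the MySQL URL assumes the MySQLdb driver is
+    installed)::
+
+        from sqlalchemy import create_engine
+
+        # echo SQL statements and recycle pooled connections hourly
+        engine = create_engine('mysql://scott:tiger@localhost/test',
+                               echo=True, pool_recycle=3600)
+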
"""
strategy = kwargs.pop('strategy', default_strategy)
'utf-8'.
schemagenerator
- a [sqlalchemy.schema#SchemaVisitor] class which generates
+ a :class:`~sqlalchemy.schema.SchemaVisitor` class which generates
schemas.
schemadropper
- a [sqlalchemy.schema#SchemaVisitor] class which drops schemas.
+ a :class:`~sqlalchemy.schema.SchemaVisitor` class which drops schemas.
defaultrunner
- a [sqlalchemy.schema#SchemaVisitor] class which executes
+ a :class:`~sqlalchemy.schema.SchemaVisitor` class which executes
defaults.
statement_compiler
- a [sqlalchemy.engine.base#Compiled] class used to compile SQL
+ a :class:`~sqlalchemy.engine.base.Compiled` class used to compile SQL
statements
preparer
- a [sqlalchemy.sql.compiler#IdentifierPreparer] class used to
+ a :class:`~sqlalchemy.sql.compiler.IdentifierPreparer` class used to
quote identifiers.
supports_alter
def create_connect_args(self, url):
"""Build DB-API compatible connection arguments.
- Given a [sqlalchemy.engine.url#URL] object, returns a tuple
+ Given a :class:`~sqlalchemy.engine.url.URL` object, returns a tuple
consisting of a `*args`/`**kwargs` suitable to send directly
to the dbapi's connect function.
"""
def type_descriptor(self, typeobj):
"""Transform a generic type to a database-specific type.
- Transforms the given [sqlalchemy.types#TypeEngine] instance
+ Transforms the given :class:`~sqlalchemy.types.TypeEngine` instance
from generic to database-specific.
Subclasses will usually use the
- [sqlalchemy.types#adapt_type()] method in the types module to
+ :func:`~sqlalchemy.types.adapt_type` method in the types module to
make this job easy.
"""
def reflecttable(self, connection, table, include_columns=None):
"""Load table description from the database.
- Given a [sqlalchemy.engine#Connection] and a
- [sqlalchemy.schema#Table] object, reflect its columns and
+ Given a :class:`~sqlalchemy.engine.Connection` and a
+ :class:`~sqlalchemy.schema.Table` object, reflect its columns and
properties from the database. If include_columns (a list or
set) is specified, limit the autoload to the given column
names.
def has_table(self, connection, table_name, schema=None):
"""Check the existence of a particular table in the database.
- Given a [sqlalchemy.engine#Connection] object and a string
+ Given a :class:`~sqlalchemy.engine.Connection` object and a string
`table_name`, return True if the given table (possibly within
the specified `schema`) exists in the database, False
otherwise.
def has_sequence(self, connection, sequence_name, schema=None):
"""Check the existence of a particular sequence in the database.
- Given a [sqlalchemy.engine#Connection] object and a string
+ Given a :class:`~sqlalchemy.engine.Connection` object and a string
`sequence_name`, return True if the given sequence exists in
the database, False otherwise.
"""
raise NotImplementedError()
def get_default_schema_name(self, connection):
- """Return the string name of the currently selected schema given a [sqlalchemy.engine#Connection]."""
+ """Return the string name of the currently selected schema given a :class:`~sqlalchemy.engine.Connection`."""
raise NotImplementedError()
def create_execution_context(self, connection, compiled=None, compiled_parameters=None, statement=None, parameters=None):
- """Return a new [sqlalchemy.engine#ExecutionContext] object."""
+ """Return a new :class:`~sqlalchemy.engine.ExecutionContext` object."""
raise NotImplementedError()
"""Construct a new Connection.
Connection objects are typically constructed by an
- [sqlalchemy.engine#Engine], see the ``connect()`` and
+ :class:`~sqlalchemy.engine.Engine`, see the ``connect()`` and
``contextual_connect()`` methods of Engine.
"""
This method can be used to insulate the rest of an application
from a modified state on a connection (such as a transaction
isolation level or similar). Also see
- [sqlalchemy.interfaces#PoolListener] for a mechanism to modify
+ :class:`~sqlalchemy.interfaces.PoolListener` for a mechanism to modify
connection state when connections leave and return to their
connection pool.
class Engine(Connectable):
"""
- Connects a Pool, a Dialect and a CompilerFactory together to
- provide a default implementation of SchemaEngine.
+ Connects a :class:`~sqlalchemy.pool.Pool` and :class:`~sqlalchemy.engine.base.Dialect`
+ together to provide a source of database connectivity and behavior.
+
"""
def __init__(self, pool, dialect, url, echo=None, proxy=None):
@property
def name(self):
- "String name of the [sqlalchemy.engine#Dialect] in use by this ``Engine``."
+ "String name of the :class:`~sqlalchemy.engine.Dialect` in use by this ``Engine``."
return self.dialect.name
These are semi-private implementation classes which provide the
underlying behavior for the "strategy" keyword argument available on
-[sqlalchemy.engine#create_engine()]. Current available options are
+:func:`~sqlalchemy.engine.create_engine`. Currently available options are
``plain``, ``threadlocal``, and ``mock``.
New strategies can be added via new ``EngineStrategy`` classes.
"""Provides a thread-local transactional wrapper around the root Engine class.
The ``threadlocal`` module is invoked when using the ``strategy="threadlocal"`` flag
-with [sqlalchemy.engine#create_engine()]. This module is semi-private and is
+with :func:`~sqlalchemy.engine.create_engine`. This module is semi-private and is
invoked automatically when the threadlocal engine strategy is used.
"""
-"""Provides the [sqlalchemy.engine.url#URL] class which encapsulates
+"""Provides the :class:`~sqlalchemy.engine.url.URL` class which encapsulates
information about a database connection specification.
-The URL object is created automatically when [sqlalchemy.engine#create_engine()] is called
+The URL object is created automatically when :func:`~sqlalchemy.engine.create_engine` is called
with a string argument; alternatively, the URL is a public-facing construct which can
be used directly and is also accepted directly by ``create_engine()``.
"""
string by the ``module-level make_url()`` function. the string
format of the URL is an RFC-1738-style string.
- Attributes on URL include:
-
- drivername
- the name of the database backend. This name will correspond to
- a module in sqlalchemy/databases or a third party plug-in.
+ All initialization parameters are available as public attributes.
+
+ :param drivername: the name of the database backend.
+ This name will correspond to a module in sqlalchemy/databases
+ or a third party plug-in.
- username
- The user name for the connection.
+ :param username: The user name.
- password
- database password.
+ :param password: database password.
- host
- The name of the host.
+ :param host: The name of the host.
- port
- The port number.
+ :param port: The port number.
- database
- The database.
+ :param database: The database name.
- query
- A dictionary containing key/value pairs representing the URL's
- query string.
+ :param query: A dictionary of options to be passed to the
+ dialect and/or the DBAPI upon connect.
+
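+    A brief construction sketch (the values are illustrative)::
+
+        from sqlalchemy.engine.url import URL
+
+        url = URL(drivername='postgres', username='scott',
+                  password='tiger', host='localhost', database='test')
+
+    The resulting object may be passed directly to ``create_engine()``.
+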
"""
def __init__(self, drivername, username=None, password=None, host=None, port=None, database=None, query=None):
used as the keys by default. Unset or false attributes are omitted
from the final dictionary.
- \**kw
- Optional, alternate key names for url attributes::
+ :param \**kw: Optional, alternate key names for url
+ attributes::
- # return 'username' as 'user'
- username='user'
+ # return 'username' as 'user'
+ username='user'
- # omit 'database'
- database=None
-
- names
- Deprecated. A list of key names. Equivalent to the keyword
- usage, must be provided in the order above.
+ # omit 'database'
+ database=None
+
+    :param names: Deprecated. Same purpose as the keyword-based alternate
+      names, but correlates each name to a URL attribute positionally.
+
"""
translated = {}
The base exception class is SQLAlchemyError. Exceptions which are raised as a
result of DBAPI exceptions are all subclasses of
-[sqlalchemy.exc#DBAPIError].
+:class:`~sqlalchemy.exc.DBAPIError`.
"""
def synonym_for(name, map_column=False):
"""Decorator, make a Python @property a query synonym for a column.
- A decorator version of [sqlalchemy.orm#synonym()]. The function being
+ A decorator version of :func:`~sqlalchemy.orm.synonym`. The function being
decorated is the 'descriptor', otherwise passes its arguments through
to synonym()::
def comparable_using(comparator_factory):
"""Decorator, allow a Python @property to be used in query criteria.
- A decorator front end to [sqlalchemy.orm#comparable_property()], passes
+ A decorator front end to :func:`~sqlalchemy.orm.comparable_property`, passes
through the comparator_factory and the function being decorated::
@comparable_using(MyComparatorType)
in this module and the main SQLAlchemy documentation for more information and
examples.
-The [sqlalchemy.ext.orderinglist#ordering_list] factory function is the
+The :func:`~sqlalchemy.ext.orderinglist.ordering_list` factory function is the
ORM-compatible constructor for `OrderingList` instances.
"""
Usage::
+        class MyListener(PoolListener):
+            def connect(self, dbapi_con, con_record):
+                '''perform connect operations'''
+                # etc.
+
# create a new pool with a listener
p = QueuePool(..., listeners=[MyListener()])
p.add_listener(MyListener())
# usage with create_engine()
- e = create_engine("url://", ...)
- e.pool.add_listener(MyListener())
+ e = create_engine("url://", listeners=[MyListener()])
- All of the standard connection [sqlalchemy.pool#Pool] types can
+ All of the standard connection :class:`~sqlalchemy.pool.Pool` types can
accept event listeners for key connection lifecycle events:
creation, pool check-out and check-in. There are no events fired
when a connection closes.
def scoped_session(session_factory, scopefunc=None):
"""Provides thread-local management of Sessions.
- This is a front-end function to the [sqlalchemy.orm.scoping#ScopedSession]
- class.
+ This is a front-end function to
+ :class:`~sqlalchemy.orm.scoping.ScopedSession`.
+
+ :param session_factory: a callable function that produces
+ :class:`Session` instances, such as :func:`sessionmaker` or
+ :func:`create_session`.
+
+ :param scopefunc: optional, TODO
+
+    :returns: a :class:`~sqlalchemy.orm.scoping.ScopedSession` instance
Usage::
return ScopedSession(session_factory, scopefunc=scopefunc)
def create_session(bind=None, **kwargs):
- """create a new [sqlalchemy.orm.session#Session].
-
- The defaults of create_session() are the opposite of
- that of sessionmaker(); autoflush and expire_on_commit
- are false, autocommit is True.
- In this sense the session acts more like the "classic"
- SQLAlchemy 0.3 session with these defaults.
-
- It is recommended to use the [sqlalchemy.orm#sessionmaker()] function
- instead of create_session().
- """
+ """Create a new :class:`~sqlalchemy.orm.session.Session`.
+
+ :param bind: optional, a single Connectable to use for all
+ database access in the created
+ :class:`~sqlalchemy.orm.session.Session`.
+
+ :param \*\*kwargs: optional, passed through to the
+ :class:`Session` constructor.
+
+    :returns: a :class:`~sqlalchemy.orm.session.Session` instance
+
+ The defaults of create_session() are the opposite of that of
+ :func:`sessionmaker`; ``autoflush`` and ``expire_on_commit`` are
+ False, ``autocommit`` is True. In this sense the session acts
+    more like the "classic" SQLAlchemy 0.3 session with these defaults.
+
+ Usage::
+
+ >>> from sqlalchemy.orm import create_session
+ >>> session = create_session()
+
+ It is recommended to use :func:`sessionmaker` instead of
+ create_session().
+ """
if 'transactional' in kwargs:
sa_util.warn_deprecated(
"The 'transactional' argument to sessionmaker() is deprecated; "
"""Provide a relationship of a primary Mapper to a secondary Mapper.
This corresponds to a parent-child or associative table relationship. The
- constructed class is an instance of
- [sqlalchemy.orm.properties#RelationProperty].
-
- argument
- a class or Mapper instance, representing the target of the relation.
-
- secondary
- for a many-to-many relationship, specifies the intermediary table. The
- ``secondary`` keyword argument should generally only be used for a
- table that is not otherwise expressed in any class mapping. In
- particular, using the Association Object Pattern is generally mutually
- exclusive against using the ``secondary`` keyword argument.
-
- \**kwargs follow:
-
- backref
- indicates the name of a property to be placed on the related mapper's
- class that will handle this relationship in the other direction,
- including synchronizing the object attributes on both sides of the
- relation. Can also point to a ``backref()`` construct for more
- configurability.
-
- cascade
- a comma-separated list of cascade rules which determines how Session
- operations should be "cascaded" from parent to child. This defaults
- to "False", which means the default cascade should be used.
- The default value is "save-update, merge".
- Available cascades are:
-
- save-update - cascade the "add()" operation
- (formerly known as save() and update())
-
- merge - cascade the "merge()" operation
-
- expunge - cascade the "expunge()" operation
-
- delete - cascade the "delete()" operation
-
- delete-orphan - if an item of the child's type with no parent is detected,
- mark it for deletion. Note that this option prevents a pending item
- of the child's class from being persisted without a parent
- present.
-
- refresh-expire - cascade the expire() and refresh() operations
-
- all - shorthand for "save-update,merge, refresh-expire, expunge, delete"
-
- collection_class
- a class or function that returns a new list-holding object. will be
- used in place of a plain list for storing elements.
-
- comparator_factory
- a class which extends ``sqlalchemy.orm.properties.RelationProperty.Comparator``
- which provides custom SQL clause generation for comparison operations.
-
- extension
- an [sqlalchemy.orm.interfaces#AttributeExtension] instance,
- or list of extensions, which will be prepended to the list of
- attribute listeners for the resulting descriptor placed on the class.
- These listeners will receive append and set events before the
- operation proceeds, and may be used to halt (via exception throw)
- or change the value used in the operation.
-
- foreign_keys
- a list of columns which are to be used as "foreign key" columns.
- this parameter should be used in conjunction with explicit
- ``primaryjoin`` and ``secondaryjoin`` (if needed) arguments, and the
- columns within the ``foreign_keys`` list should be present within
- those join conditions. Normally, ``relation()`` will inspect the
- columns within the join conditions to determine which columns are
- the "foreign key" columns, based on information in the ``Table``
- metadata. Use this argument when no ForeignKey's are present in the
- join condition, or to override the table-defined foreign keys.
-
- join_depth=None
- when non-``None``, an integer value indicating how many levels deep
- eagerload joins should be constructed on a self-referring or
- cyclical relationship. The number counts how many times the same
- Mapper shall be present in the loading condition along a particular
- join branch. When left at its default of ``None``, eager loads will
- automatically stop chaining joins when they encounter a mapper which
- is already higher up in the chain.
-
- lazy=(True|False|None|'dynamic')
- specifies how the related items should be loaded. Values include:
-
- True - items should be loaded lazily when the property is first
- accessed.
-
- False - items should be loaded "eagerly" in the same query as that
- of the parent, using a JOIN or LEFT OUTER JOIN.
-
- None - no loading should occur at any time. This is to support
- "write-only" attributes, or attributes which are populated
- in some manner specific to the application.
-
- 'dynamic' - a ``DynaLoader`` will be attached, which returns a
- ``Query`` object for all read operations. The
- dynamic- collection supports only ``append()`` and
- ``remove()`` for write operations; changes to the
- dynamic property will not be visible until the data is
- flushed to the database.
-
- order_by
- indicates the ordering that should be applied when loading these
- items.
-
- passive_deletes=False
- Indicates loading behavior during delete operations.
-
- A value of True indicates that unloaded child items should not be
- loaded during a delete operation on the parent. Normally, when a
- parent item is deleted, all child items are loaded so that they can
- either be marked as deleted, or have their foreign key to the parent
- set to NULL. Marking this flag as True usually implies an ON DELETE
- <CASCADE|SET NULL> rule is in place which will handle
- updating/deleting child rows on the database side.
-
- Additionally, setting the flag to the string value 'all' will
- disable the "nulling out" of the child foreign keys, when there is
- no delete or delete-orphan cascade enabled. This is typically used
- when a triggering or error raise scenario is in place on the
- database side. Note that the foreign key attributes on in-session
- child objects will not be changed after a flush occurs so this is a
- very special use-case setting.
-
- passive_updates=True
- Indicates loading and INSERT/UPDATE/DELETE behavior when the source
- of a foreign key value changes (i.e. an "on update" cascade), which
- are typically the primary key columns of the source row.
-
- When True, it is assumed that ON UPDATE CASCADE is configured on the
- foreign key in the database, and that the database will handle
- propagation of an UPDATE from a source column to dependent rows.
- Note that with databases which enforce referential integrity
- (i.e. Postgres, MySQL with InnoDB tables), ON UPDATE CASCADE is
- required for this operation. The relation() will update the value
- of the attribute on related items which are locally present in the
- session during a flush.
-
- When False, it is assumed that the database does not enforce
- referential integrity and will not be issuing its own CASCADE
- operation for an update. The relation() will issue the appropriate
- UPDATE statements to the database in response to the change of a
- referenced key, and items locally present in the session during a
- flush will also be refreshed.
-
- This flag should probably be set to False if primary key changes are
- expected and the database in use doesn't support CASCADE
- (i.e. SQLite, MySQL MyISAM tables).
-
- post_update
- this indicates that the relationship should be handled by a second
- UPDATE statement after an INSERT or before a DELETE. Currently, it
- also will issue an UPDATE after the instance was UPDATEd as well,
- although this technically should be improved. This flag is used to
- handle saving bi-directional dependencies between two individual
- rows (i.e. each row references the other), where it would otherwise
- be impossible to INSERT or DELETE both rows fully since one row
- exists before the other. Use this flag when a particular mapping
- arrangement will incur two rows that are dependent on each other,
- such as a table that has a one-to-many relationship to a set of
- child rows, and also has a column that references a single child row
- within that list (i.e. both tables contain a foreign key to each
- other). If a ``flush()`` operation returns an error that a "cyclical
- dependency" was detected, this is a cue that you might want to use
- ``post_update`` to "break" the cycle.
-
- primaryjoin
- a ClauseElement that will be used as the primary join of this child
- object against the parent object, or in a many-to-many relationship
- the join of the primary object to the association table. By default,
- this value is computed based on the foreign key relationships of the
- parent and child tables (or association table).
-
- remote_side
- used for self-referential relationships, indicates the column or
- list of columns that form the "remote side" of the relationship.
-
- secondaryjoin
- a ClauseElement that will be used as the join of an association
- table to the child object. By default, this value is computed based
- on the foreign key relationships of the association and child
- tables.
-
- uselist=(True|False)
- a boolean that indicates if this property should be loaded as a list
- or a scalar. In most cases, this value is determined automatically
- by ``relation()``, based on the type and direction of the
- relationship - one to many forms a list, many to one forms a scalar,
- many to many is a list. If a scalar is desired where normally a list
- would be present, such as a bi-directional one-to-one relationship,
- set uselist to False.
-
- viewonly=False
- when set to True, the relation is used only for loading objects
- within the relationship, and has no effect on the unit-of-work flush
- process. Relations with viewonly can specify any kind of join
- conditions to provide additional views of related objects onto a
- parent object. Note that the functionality of a viewonly
- relationship has its limits - complicated join conditions may not
- compile into eager or lazy loaders properly. If this is the case,
- use an alternative method.
+ constructed class is an instance of :class:`RelationProperty`.
+
+ A typical :func:`relation`::
+
+ mapper(Parent, properties={
+ 'children': relation(Children)
+ })
+
+ :param argument:
+ a class or :class:`Mapper` instance, representing the target of
+ the relation.
+
+ :param secondary:
+ for a many-to-many relationship, specifies the intermediary
+ table. The *secondary* keyword argument should generally only
+ be used for a table that is not otherwise expressed in any class
+ mapping. In particular, using the Association Object Pattern is
+ generally mutually exclusive with the use of the *secondary*
+ keyword argument.
+
+ :param backref:
+ indicates the name of a property to be placed on the related
+ mapper's class that will handle this relationship in the other
+ direction, including synchronizing the object attributes on both
+ sides of the relation. Can also point to a :func:`backref` for
+ more configurability.
+
+ :param cascade:
+ a comma-separated list of cascade rules which determines how
+ Session operations should be "cascaded" from parent to child.
+ This defaults to ``False``, which means the default cascade
+ should be used. The default value is ``"save-update, merge"``.
+
+ Available cascades are:
+
+ ``save-update`` - cascade the "add()" operation (formerly
+ known as save() and update())
+
+ ``merge`` - cascade the "merge()" operation
+
+ ``expunge`` - cascade the "expunge()" operation
+
+ ``delete`` - cascade the "delete()" operation
+
+ ``delete-orphan`` - if an item of the child's type with no
+ parent is detected, mark it for deletion. Note that this
+ option prevents a pending item of the child's class from being
+ persisted without a parent present.
+
+ ``refresh-expire`` - cascade the expire() and refresh()
+ operations
+
+      ``all`` - shorthand for "save-update, merge, refresh-expire,
+ expunge, delete"
+
+ :param collection_class:
+      a class or callable that returns a new list-holding object. It will
+      be used in place of a plain list for storing elements.
+
+ :param comparator_factory:
+ a class which extends :class:`RelationProperty.Comparator` which
+ provides custom SQL clause generation for comparison operations.
+
+ :param extension:
+ an :class:`AttributeExtension` instance, or list of extensions,
+ which will be prepended to the list of attribute listeners for
+ the resulting descriptor placed on the class. These listeners
+ will receive append and set events before the operation
+ proceeds, and may be used to halt (via exception throw) or
+ change the value used in the operation.
+
+    :param foreign_keys:
+      a list of columns which are to be used as "foreign key" columns.
+      This parameter should be used in conjunction with explicit
+      ``primaryjoin`` and ``secondaryjoin`` (if needed) arguments, and
+      the columns within the ``foreign_keys`` list should be present
+      within those join conditions. Normally, ``relation()`` will
+      inspect the columns within the join conditions to determine
+      which columns are the "foreign key" columns, based on
+      information in the ``Table`` metadata. Use this argument when no
+      ``ForeignKey`` objects are present in the join condition, or to
+      override the table-defined foreign keys.
+
+ :param join_depth:
+ when non-``None``, an integer value indicating how many levels
+ deep eagerload joins should be constructed on a self-referring
+ or cyclical relationship. The number counts how many times the
+ same Mapper shall be present in the loading condition along a
+ particular join branch. When left at its default of ``None``,
+ eager loads will automatically stop chaining joins when they
+ encounter a mapper which is already higher up in the chain.
+
+ :param lazy=(True|False|None|'dynamic'):
+ specifies how the related items should be loaded. Values include:
+
+ True - items should be loaded lazily when the property is first
+ accessed.
+
+ False - items should be loaded "eagerly" in the same query as
+ that of the parent, using a JOIN or LEFT OUTER JOIN.
+
+ None - no loading should occur at any time. This is to support
+ "write-only" attributes, or attributes which are
+ populated in some manner specific to the application.
+
+      'dynamic' - a ``DynaLoader`` will be attached, which returns a
+                  ``Query`` object for all read operations. The
+                  dynamic collection supports only ``append()`` and
+                  ``remove()`` for write operations; changes to the
+                  dynamic property will not be visible until the data
+                  is flushed to the database.
+
+ :param order_by:
+ indicates the ordering that should be applied when loading these
+ items.
+
+ :param passive_deletes=False:
+ Indicates loading behavior during delete operations.
+
+ A value of True indicates that unloaded child items should not
+ be loaded during a delete operation on the parent. Normally,
+ when a parent item is deleted, all child items are loaded so
+ that they can either be marked as deleted, or have their
+ foreign key to the parent set to NULL. Marking this flag as
+ True usually implies an ON DELETE <CASCADE|SET NULL> rule is in
+ place which will handle updating/deleting child rows on the
+ database side.
+
+ Additionally, setting the flag to the string value 'all' will
+ disable the "nulling out" of the child foreign keys, when there
+ is no delete or delete-orphan cascade enabled. This is
+ typically used when a triggering or error raise scenario is in
+ place on the database side. Note that the foreign key
+ attributes on in-session child objects will not be changed
+ after a flush occurs so this is a very special use-case
+ setting.
+
+ :param passive_updates=True:
+ Indicates loading and INSERT/UPDATE/DELETE behavior when the
+ source of a foreign key value changes (i.e. an "on update"
+ cascade), which are typically the primary key columns of the
+ source row.
+
+ When True, it is assumed that ON UPDATE CASCADE is configured on
+ the foreign key in the database, and that the database will
+ handle propagation of an UPDATE from a source column to
+ dependent rows. Note that with databases which enforce
+ referential integrity (i.e. Postgres, MySQL with InnoDB tables),
+ ON UPDATE CASCADE is required for this operation. The
+ relation() will update the value of the attribute on related
+ items which are locally present in the session during a flush.
+
+ When False, it is assumed that the database does not enforce
+ referential integrity and will not be issuing its own CASCADE
+ operation for an update. The relation() will issue the
+ appropriate UPDATE statements to the database in response to the
+ change of a referenced key, and items locally present in the
+ session during a flush will also be refreshed.
+
+ This flag should probably be set to False if primary key changes
+ are expected and the database in use doesn't support CASCADE
+ (i.e. SQLite, MySQL MyISAM tables).
+
+ :param post_update:
+ this indicates that the relationship should be handled by a
+ second UPDATE statement after an INSERT or before a
+ DELETE. Currently, it also will issue an UPDATE after the
+ instance was UPDATEd as well, although this technically should
+ be improved. This flag is used to handle saving bi-directional
+ dependencies between two individual rows (i.e. each row
+ references the other), where it would otherwise be impossible to
+ INSERT or DELETE both rows fully since one row exists before the
+ other. Use this flag when a particular mapping arrangement will
+ incur two rows that are dependent on each other, such as a table
+ that has a one-to-many relationship to a set of child rows, and
+ also has a column that references a single child row within that
+ list (i.e. both tables contain a foreign key to each other). If
+ a ``flush()`` operation returns an error that a "cyclical
+ dependency" was detected, this is a cue that you might want to
+ use ``post_update`` to "break" the cycle.
+
+ :param primaryjoin:
+ a ClauseElement that will be used as the primary join of this
+ child object against the parent object, or in a many-to-many
+ relationship the join of the primary object to the association
+ table. By default, this value is computed based on the foreign
+ key relationships of the parent and child tables (or association
+ table).
+
+ :param remote_side:
+ used for self-referential relationships, indicates the column or
+ list of columns that form the "remote side" of the relationship.
+
+ :param secondaryjoin:
+ a ClauseElement that will be used as the join of an association
+ table to the child object. By default, this value is computed
+ based on the foreign key relationships of the association and
+ child tables.
+
+ :param uselist=(True|False):
+ a boolean that indicates if this property should be loaded as a
+ list or a scalar. In most cases, this value is determined
+ automatically by ``relation()``, based on the type and direction
+ of the relationship - one to many forms a list, many to one
+ forms a scalar, many to many is a list. If a scalar is desired
+ where normally a list would be present, such as a bi-directional
+ one-to-one relationship, set uselist to False.
+
+ :param viewonly=False:
+ when set to True, the relation is used only for loading objects
+ within the relationship, and has no effect on the unit-of-work
+ flush process. Relations with viewonly can specify any kind of
+ join conditions to provide additional views of related objects
+ onto a parent object. Note that the functionality of a viewonly
+ relationship has its limits - complicated join conditions may
+ not compile into eager or lazy loaders properly. If this is the
+ case, use an alternative method.
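+
+    A many-to-many sketch using ``secondary`` and ``backref`` (the table
+    and class names are illustrative)::
+
+        mapper(Parent, parent_table, properties={
+            'children': relation(Child, secondary=association_table,
+                                 backref='parents')
+        })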
"""
return RelationProperty(argument, secondary=secondary, **kwargs)
passive_deletes=False, order_by=None, comparator_factory=None):
"""Construct a dynamically-loading mapper property.
- This property is similar to relation(), except read operations return an
- active Query object, which reads from the database in all cases. Items
- may be appended to the attribute via append(), or removed via remove();
- changes will be persisted to the database during a flush(). However, no
- other list mutation operations are available.
+ This property is similar to :func:`relation`, except read
+ operations return an active :class:`Query` object which reads from
+ the database when accessed. Items may be appended to the
+ attribute via ``append()``, or removed via ``remove()``; changes
+    will be persisted to the database during a :meth:`Session.flush`.
+ However, no other Python list or collection mutation operations
+ are available.
+
+    A subset of the arguments available to :func:`relation` is available
+    here.
+
+ :param argument:
+ a class or :class:`Mapper` instance, representing the target of
+ the relation.
+
+ :param secondary:
+ for a many-to-many relationship, specifies the intermediary
+ table. The *secondary* keyword argument should generally only
+ be used for a table that is not otherwise expressed in any class
+ mapping. In particular, using the Association Object Pattern is
+ generally mutually exclusive with the use of the *secondary*
+ keyword argument.
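+
+    A brief sketch, following the same conventions as :func:`relation`
+    (the class and table names are illustrative)::
+
+        mapper(User, users_table, properties={
+            'addresses': dynamic_loader(Address)
+        })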
- A subset of arguments available to relation() are available here.
"""
from sqlalchemy.orm.dynamic import DynaLoader
when True, the column property is "deferred", meaning that
it does not load immediately, and is instead loaded when the
attribute is first accessed on an instance. See also
- [sqlalchemy.orm#deferred()].
+ :func:`~sqlalchemy.orm.deferred`.
extension
- an [sqlalchemy.orm.interfaces#AttributeExtension] instance,
+ an :class:`~sqlalchemy.orm.interfaces.AttributeExtension` instance,
or list of extensions, which will be prepended to the list of
attribute listeners for the resulting descriptor placed on the class.
These listeners will receive append and set events before the
return self.x, self.y
def __eq__(self, other):
return other is not None and self.x == other.x and self.y == other.y
-
+
# and then in the mapping:
... composite(Point, mytable.c.x, mytable.c.y) ...
The composite object may have its attributes populated based on the names
of the mapped columns. To override the way internal state is set,
- additionally implement ``__set_composite_values__``:
-
+ additionally implement ``__set_composite_values__``::
+
class Point(object):
def __init__(self, x, y):
self.some_x = x
deferred
When True, the column property is "deferred", meaning that it does not
load immediately, and is instead loaded when the attribute is first
- accessed on an instance. See also [sqlalchemy.orm#deferred()].
+ accessed on an instance. See also :func:`~sqlalchemy.orm.deferred`.
comparator_factory
a class which extends ``sqlalchemy.orm.properties.CompositeProperty.Comparator``
which provides custom SQL clause generation for comparison operations.
extension
- an [sqlalchemy.orm.interfaces#AttributeExtension] instance,
+ an :class:`~sqlalchemy.orm.interfaces.AttributeExtension` instance,
or list of extensions, which will be prepended to the list of
attribute listeners for the resulting descriptor placed on the class.
These listeners will receive append and set events before the
return ColumnProperty(deferred=True, *columns, **kwargs)
def mapper(class_, local_table=None, *args, **params):
- """Return a new [sqlalchemy.orm#Mapper] object.
+ """Return a new :class:`~sqlalchemy.orm.Mapper` object.
class\_
The class to be mapped.
erasing any in-memory changes with whatever information was loaded
from the database. Usage of this flag is highly discouraged; as an
alternative, see the method `populate_existing()` on
- [sqlalchemy.orm.query#Query].
+ :class:`~sqlalchemy.orm.query.Query`.
allow_column_override
If True, allows the usage of a ``relation()`` which has the
with its parent mapper.
extension
- A [sqlalchemy.orm#MapperExtension] instance or list of
+ A :class:`~sqlalchemy.orm.MapperExtension` instance or list of
``MapperExtension`` instances which will be applied to all
operations by this ``Mapper``.
def synonym(name, map_column=False, descriptor=None, comparator_factory=None, proxy=False):
"""Set up `name` as a synonym to another mapped property.
- Used with the ``properties`` dictionary sent to [sqlalchemy.orm#mapper()].
+ Used with the ``properties`` dictionary sent to :func:`~sqlalchemy.orm.mapper`.
Any existing attributes on the class which map the key name sent
to the ``properties`` dictionary will be used by the synonym to provide
mapper(MyClass, mytable, properties=dict(
'myprop': comparable_property(MyComparator)))
- Used with the ``properties`` dictionary sent to [sqlalchemy.orm#mapper()].
+ Used with the ``properties`` dictionary sent to :func:`~sqlalchemy.orm.mapper`.
comparator_factory
A PropComparator subclass or factory that defines operator behavior
"""
-Semi-private implementation objects which form the basis of ORM-mapped
-attributes, query options and mapper extension.
+Semi-private module containing various base classes used throughout the ORM.
-Defines the [sqlalchemy.orm.interfaces#MapperExtension] class, which can be
-end-user subclassed to add event-based functionality to mappers. The
-remainder of this module is generally private to the ORM.
+Defines the extension classes :class:`MapperExtension`,
+:class:`SessionExtension`, and :class:`AttributeExtension` as
+well as other user-subclassable extension objects.
"""
\**flags
extra information about the row, same as criterion in
- ``create_row_processor()`` method of [sqlalchemy.orm.interfaces#MapperProperty]
+ ``create_row_processor()`` method of :class:`~sqlalchemy.orm.interfaces.MapperProperty`
"""
return EXT_CONTINUE
"""Logic to map Python classes to and from selectables.
-Defines the [sqlalchemy.orm.mapper#Mapper] class, the central configurational
+Defines the :class:`~sqlalchemy.orm.mapper.Mapper` class, the central configurational
unit which associates a class with a database table.
This is a semi-private module; the main configurational API of the ORM is
-available in [sqlalchemy.orm#].
+available in :mod:`~sqlalchemy.orm`.
"""
columns.
Instances of this class should be constructed via the
- [sqlalchemy.orm#mapper()] function.
+ :func:`~sqlalchemy.orm.mapper` function.
"""
def __init__(self,
eager_defaults=False):
"""Construct a new mapper.
- Mappers are normally constructed via the [sqlalchemy.orm#mapper()]
+ Mappers are normally constructed via the :func:`~sqlalchemy.orm.mapper`
function. See for details.
"""
"""Iterate each element and its mapper in an object graph,
for all relations that meet the given cascade rule.
- type\_
+    ``type_``:
The name of the cascade rule (i.e. save-update, delete,
etc.)
- state
+ ``state``:
The lead InstanceState. child items will be processed per
the relations defined for this object's mapper.
"""The Query class and support.
-Defines the [sqlalchemy.orm.query#Query] class, the central construct used by
+Defines the :class:`~sqlalchemy.orm.query.Query` class, the central construct used by
the ORM to construct database queries.
The ``Query`` class should not be confused with the
-[sqlalchemy.sql.expression#Select] class, which defines database SELECT
+:class:`~sqlalchemy.sql.expression.Select` class, which defines database SELECT
operations at the SQL (non-ORM) level. ``Query`` differs from ``Select`` in
that it returns ORM-mapped objects and interacts with an ORM session, whereas
the ``Select`` construct interacts directly with the database to return
instances will also have those columns already loaded so that
no "post fetch" of those columns will be required.
- :param cls_or_mappers: - a single class or mapper, or list of class/mappers,
- which inherit from this Query's mapper. Alternatively, it
- may also be the string ``'*'``, in which case all descending
- mappers will be added to the FROM clause.
-
- :param selectable: - a table or select() statement that will
- be used in place of the generated FROM clause. This argument
- is required if any of the desired mappers use concrete table
- inheritance, since SQLAlchemy currently cannot generate UNIONs
- among tables automatically. If used, the ``selectable``
- argument must represent the full set of tables and columns mapped
- by every desired mapper. Otherwise, the unaccounted mapped columns
- will result in their table being appended directly to the FROM
- clause which will usually lead to incorrect results.
-
- :param discriminator: - a column to be used as the "discriminator"
- column for the given selectable. If not given, the polymorphic_on
- attribute of the mapper will be used, if any. This is useful
- for mappers that don't have polymorphic loading behavior by default,
- such as concrete table mappers.
+ :param cls_or_mappers: a single class or mapper, or list of class/mappers,
+ which inherit from this Query's mapper. Alternatively, it
+ may also be the string ``'*'``, in which case all descending
+ mappers will be added to the FROM clause.
+
+ :param selectable: a table or select() statement that will
+ be used in place of the generated FROM clause. This argument
+ is required if any of the desired mappers use concrete table
+ inheritance, since SQLAlchemy currently cannot generate UNIONs
+ among tables automatically. If used, the ``selectable``
+ argument must represent the full set of tables and columns mapped
+ by every desired mapper. Otherwise, the unaccounted mapped columns
+ will result in their table being appended directly to the FROM
+ clause which will usually lead to incorrect results.
+
+ :param discriminator: a column to be used as the "discriminator"
+ column for the given selectable. If not given, the polymorphic_on
+ attribute of the mapper will be used, if any. This is useful
+ for mappers that don't have polymorphic loading behavior by default,
+ such as concrete table mappers.
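+
+        For example, a brief sketch (the mapped classes are illustrative
+        and assume a joined-table inheritance configuration)::
+
+            # load Person objects, fetching Engineer and Manager
+            # columns in the same statement
+            query = session.query(Person).with_polymorphic([Engineer, Manager])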
"""
entity = self._generate_mapper_zero()
def sessionmaker(bind=None, class_=None, autoflush=True, autocommit=False,
expire_on_commit=True, **kwargs):
- """Generate a custom-configured [sqlalchemy.orm.session#Session] class.
+ """Generate a custom-configured :class:`~sqlalchemy.orm.session.Session` class.
The returned object is a subclass of ``Session``, which, when instantiated
with no arguments, uses the keyword arguments configured here as its
transaction will load from the most recent database state.
extension
- An optional [sqlalchemy.orm.session#SessionExtension] instance, or
+ An optional :class:`~sqlalchemy.orm.session.SessionExtension` instance, or
a list of such instances, which
will receive pre- and post- commit and flush events, as well as a
post-rollback event. User- defined code may be placed within these
query_cls
Class which should be used to create new Query objects, as returned
- by the ``query()`` method. Defaults to [sqlalchemy.orm.query#Query].
+ by the ``query()`` method. Defaults to :class:`~sqlalchemy.orm.query.Query`.
twophase
When ``True``, all transactions will be started using
- [sqlalchemy.engine_TwoPhaseTransaction]. During a ``commit()``, after
+      :class:`~sqlalchemy.engine.TwoPhaseTransaction`. During a ``commit()``, after
``flush()`` has been issued for all attached databases, the
``prepare()`` method on each database's ``TwoPhaseTransaction`` will be
called. This allows each database to roll back the entire transaction,
class SessionTransaction(object):
"""A Session-level transaction.
- This corresponds to one or more [sqlalchemy.engine#Transaction]
+ This corresponds to one or more :class:`~sqlalchemy.engine.Transaction`
instances behind the scenes, with one ``Transaction`` per ``Engine`` in
use.
is either to use mutexes to limit concurrent access to one thread at a
time, or more commonly to establish a unique session for every thread,
using a threadlocal variable. SQLAlchemy provides a thread-managed
- Session adapter, provided by the [sqlalchemy.orm#scoped_session()]
+ Session adapter, provided by the :func:`~sqlalchemy.orm.scoped_session`
function.
"""
"""Construct a new Session.
Arguments to ``Session`` are described using the
- [sqlalchemy.orm#sessionmaker()] function.
+ :func:`~sqlalchemy.orm.sessionmaker` function.
"""
class Validator(AttributeExtension):
- """Runs a validation method on an attribute value to be set or appended."""
+ """Runs a validation method on an attribute value to be set or appended.
+
+ The Validator class is used by the :func:`~sqlalchemy.orm.validates`
+ decorator, and direct access is usually not needed.
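+
+    A sketch of the decorator-based form it supports (the class and
+    attribute names are illustrative)::
+
+        from sqlalchemy.orm import validates
+
+        class EmailAddress(object):
+            @validates('email')
+            def validate_email(self, key, value):
+                assert '@' in value
+                return value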
+
+ """
def __init__(self, key, validator):
"""Construct a new Validator.
class AliasedClass(object):
"""Represents an 'alias'ed form of a mapped class for usage with Query.
- The ORM equivalent of a sqlalchemy.sql.expression.Alias
+ The ORM equivalent of a :class:`~sqlalchemy.sql.expression.Alias`
object, this object mimics the mapped class using a
__getattr__ scheme and maintains a reference to a
real Alias object. It indicates to Query that the
"""Produce an inner join between left and right clauses.
In addition to the interface provided by
- sqlalchemy.sql.join(), left and right may be mapped
+ :func:`~sqlalchemy.sql.expression.join()`, left and right may be mapped
classes or AliasedClass instances. The onclause may be a
string name of a relation(), or a class-bound descriptor
representing a relation.
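+
+    A usage sketch (the ``User`` and ``Address`` entities and the
+    ``'addresses'`` relation are illustrative)::
+
+        session.query(User).select_from(join(User, Address, 'addresses'))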
"""Produce a left outer join between left and right clauses.
In addition to the interface provided by
- sqlalchemy.sql.outerjoin(), left and right may be mapped
+ :func:`~sqlalchemy.sql.expression.outerjoin()`, left and right may be mapped
classes or AliasedClass instances. The onclause may be a
string name of a relation(), or a class-bound descriptor
representing a relation.
creating new connection pools for each distinct set of connection
arguments sent to the decorated module's connect() function.
- Arguments:
+ :param module: a DB-API 2.0 database module
- module
- A DB-API 2.0 database module.
+ :param poolclass: the class used by the pool module to provide
+ pooling. Defaults to :class:`QueuePool`.
- poolclass
- The class used by the pool module to provide pooling. Defaults
- to ``QueuePool``.
+ :param \*\*params: will be passed through to *poolclass*
- See the ``Pool`` class for options.
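+
+    A usage sketch, assuming the stdlib ``sqlite3`` module (this
+    function is exposed as ``sqlalchemy.pool.manage()``)::
+
+        import sqlite3
+        from sqlalchemy import pool
+
+        sqlite3 = pool.manage(sqlite3)
+        conn = sqlite3.connect('file.db')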
"""
try:
return proxies[module]
proxies.clear()
class Pool(object):
- """Base class for connection pools.
-
- This is an abstract class, implemented by various subclasses
- including:
-
- QueuePool
- Pools multiple connections using ``Queue.Queue``.
-
- SingletonThreadPool
- Stores a single connection per execution thread.
-
- NullPool
- Doesn't do any pooling; opens and closes connections.
-
- AssertionPool
- Stores only one connection, and asserts that only one connection
- is checked out at a time.
-
- The main argument, `creator`, is a callable function that returns
- a newly connected DB-API connection object.
-
- Options that are understood by Pool are:
-
- echo
- If set to True, connections being pulled and retrieved from/to
- the pool will be logged to the standard output, as well as pool
- sizing information. Echoing can also be achieved by enabling
- logging for the "sqlalchemy.pool" namespace. Defaults to False.
-
- use_threadlocal
- If set to True, repeated calls to ``connect()`` within the same
- application thread will be guaranteed to return the same
- connection object, if one has already been retrieved from the
- pool and has not been returned yet. This allows code to retrieve
- a connection from the pool, and then while still holding on to
- that connection, to call other functions which also ask the pool
- for a connection of the same arguments; those functions will act
- upon the same connection that the calling method is using.
- Defaults to False.
-
- recycle
- If set to non -1, a number of seconds between connection
- recycling, which means upon checkout, if this timeout is
- surpassed the connection will be closed and replaced with a
- newly opened connection. Defaults to -1.
-
- listeners
- A list of ``PoolListener``-like objects or dictionaries of callables
- that receive events when DB-API connections are created, checked out and
- checked in to the pool.
-
- reset_on_return
- Defaults to True. Reset the database state of connections returned to
- the pool. This is typically a ROLLBACK to release locks and transaction
- resources. Disable at your own peril.
+ """Abstract base class for connection pools."""
- """
def __init__(self, creator, recycle=-1, echo=None, use_threadlocal=False,
reset_on_return=True, listeners=None):
+ """Construct a Pool.
+
+ :param creator: a callable function that returns a DB-API
+      connection object. The function will be called with no
+      arguments.
+
+ :param recycle: If set to non -1, number of seconds between
+ connection recycling, which means upon checkout, if this
+ timeout is surpassed the connection will be closed and
+ replaced with a newly opened connection. Defaults to -1.
+
+ :param echo: If True, connections being pulled and retrieved
+ from the pool will be logged to the standard output, as well
+ as pool sizing information. Echoing can also be achieved by
+ enabling logging for the "sqlalchemy.pool"
+ namespace. Defaults to False.
+
+ :param use_threadlocal: If set to True, repeated calls to
+ :meth:`connect` within the same application thread will be
+ guaranteed to return the same connection object, if one has
+ already been retrieved from the pool and has not been
+ returned yet. Offers a slight performance advantage at the
+ cost of individual transactions by default. The
+ :meth:`unique_connection` method is provided to bypass the
+ threadlocal behavior installed into :meth:`connect`.
+
+ :param reset_on_return: If true, reset the database state of
+ connections returned to the pool. This is typically a
+ ROLLBACK to release locks and transaction resources.
+ Disable at your own peril. Defaults to True.
+
+ :param listeners: A list of
+ :class:`~sqlalchemy.interfaces.PoolListener`-like objects or
+ dictionaries of callables that receive events when DB-API
+ connections are created, checked out and checked in to the
+ pool.
+
+ """
self.logger = log.instance_logger(self, echoflag=echo)
self._threadconns = threading.local()
self._creator = creator
return c
class QueuePool(Pool):
- """A Pool that imposes a limit on the number of open connections.
-
- Arguments include all those used by the base Pool class, as well
- as:
-
- pool_size
- The size of the pool to be maintained. This is the largest
- number of connections that will be kept persistently in the
- pool. Note that the pool begins with no connections; once this
- number of connections is requested, that number of connections
- will remain. Defaults to 5.
-
- max_overflow
- The maximum overflow size of the pool. When the number of
- checked-out connections reaches the size set in pool_size,
- additional connections will be returned up to this limit. When
- those additional connections are returned to the pool, they are
- disconnected and discarded. It follows then that the total
- number of simultaneous connections the pool will allow is
- pool_size + `max_overflow`, and the total number of "sleeping"
- connections the pool will allow is pool_size. `max_overflow` can
- be set to -1 to indicate no overflow limit; no limit will be
- placed on the total number of concurrent connections. Defaults
- to 10.
-
- timeout
- The number of seconds to wait before giving up on returning a
- connection. Defaults to 30.
- """
+ """A Pool that imposes a limit on the number of open connections."""
+
+ def __init__(self, creator, pool_size=5, max_overflow=10, timeout=30,
+ **params):
+ """Construct a QueuePool.
+
+ :param creator: a callable function that returns a DB-API
+      connection object. The function will be called with no
+      arguments.
+
+ :param pool_size: The size of the pool to be maintained. This
+ is the largest number of connections that will be kept
+ persistently in the pool. Note that the pool begins with no
+ connections; once this number of connections is requested,
+ that number of connections will remain. Defaults to 5.
+
+ :param max_overflow: The maximum overflow size of the
+ pool. When the number of checked-out connections reaches the
+ size set in pool_size, additional connections will be
+ returned up to this limit. When those additional connections
+ are returned to the pool, they are disconnected and
+ discarded. It follows then that the total number of
+ simultaneous connections the pool will allow is pool_size +
+ `max_overflow`, and the total number of "sleeping"
+ connections the pool will allow is pool_size. `max_overflow`
+ can be set to -1 to indicate no overflow limit; no limit
+ will be placed on the total number of concurrent
+ connections. Defaults to 10.
+
+ :param timeout: The number of seconds to wait before giving up
+ on returning a connection. Defaults to 30.
+
+ :param recycle: If set to non -1, number of seconds between
+ connection recycling, which means upon checkout, if this
+ timeout is surpassed the connection will be closed and
+ replaced with a newly opened connection. Defaults to -1.
+
+ :param echo: If True, connections being pulled and retrieved
+ from the pool will be logged to the standard output, as well
+ as pool sizing information. Echoing can also be achieved by
+ enabling logging for the "sqlalchemy.pool"
+ namespace. Defaults to False.
+
+ :param use_threadlocal: If set to True, repeated calls to
+ :meth:`connect` within the same application thread will be
+ guaranteed to return the same connection object, if one has
+ already been retrieved from the pool and has not been
+ returned yet. Offers a slight performance advantage at the
+ cost of individual transactions by default. The
+ :meth:`unique_connection` method is provided to bypass the
+ threadlocal behavior installed into :meth:`connect`.
+
+ :param reset_on_return: If true, reset the database state of
+ connections returned to the pool. This is typically a
+ ROLLBACK to release locks and transaction resources.
+ Disable at your own peril. Defaults to True.
+
+ :param listeners: A list of
+ :class:`~sqlalchemy.interfaces.PoolListener`-like objects or
+ dictionaries of callables that receive events when DB-API
+ connections are created, checked out and checked in to the
+ pool.
- def __init__(self, creator, pool_size = 5, max_overflow = 10, timeout=30, **params):
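+
+    A construction sketch (assumes ``QueuePool`` is imported from
+    ``sqlalchemy.pool``; the ``sqlite3``-based creator is illustrative)::
+
+        import sqlite3
+
+        def getconn():
+            return sqlite3.connect('file.db')
+
+        p = QueuePool(getconn, pool_size=5, max_overflow=10, timeout=30)
+        conn = p.connect()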
+ """
Pool.__init__(self, creator, **params)
self._pool = Queue.Queue(pool_size)
self._overflow = 0 - pool_size
Instead it literally opens and closes the underlying DB-API connection
    for each connection open/close.
+
"""
def status(self):
"""A Pool of exactly one connection, used for all requests."""
def __init__(self, creator, **params):
+ """Construct a StaticPool.
+
+ :param creator: a callable function that returns a DB-API
+      connection object. The function will be called with no
+      arguments.
+
+ :param recycle: If set to non -1, number of seconds between
+ connection recycling, which means upon checkout, if this
+ timeout is surpassed the connection will be closed and
+ replaced with a newly opened connection. Defaults to -1.
+
+ :param echo: If True, connections being pulled and retrieved
+ from the pool will be logged to the standard output, as well
+ as pool sizing information. Echoing can also be achieved by
+ enabling logging for the "sqlalchemy.pool"
+ namespace. Defaults to False.
+
+ :param use_threadlocal: If set to True, repeated calls to
+ :meth:`connect` within the same application thread will be
+ guaranteed to return the same connection object, if one has
+ already been retrieved from the pool and has not been
+ returned yet. Offers a slight performance advantage at the
+ cost of individual transactions by default. The
+ :meth:`unique_connection` method is provided to bypass the
+ threadlocal behavior installed into :meth:`connect`.
+
+ :param reset_on_return: If true, reset the database state of
+ connections returned to the pool. This is typically a
+ ROLLBACK to release locks and transaction resources.
+ Disable at your own peril. Defaults to True.
+
+ :param listeners: A list of
+ :class:`~sqlalchemy.interfaces.PoolListener`-like objects or
+ dictionaries of callables that receive events when DB-API
+ connections are created, checked out and checked in to the
+ pool.
+
+ """
Pool.__init__(self, creator, **params)
self._conn = creator()
self.connection = _ConnectionRecord(self)
def dispose(self):
self._conn.close()
self._conn = None
-
+
def create_connection(self):
return self._conn
This will raise an exception if more than one connection is checked out
at a time. Useful for debugging code that is using more connections
than desired.
+
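+    A behavior sketch (``creator`` as in the other pool classes)::
+
+        p = AssertionPool(creator)
+        c1 = p.connect()
+        c2 = p.connect()   # raises; only one connection may be out
+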
"""
## TODO: modify this to handle an arbitrary connection count.
def __init__(self, creator, **params):
+ """Construct an AssertionPool.
+
+ :param creator: a callable function that returns a DB-API
+      connection object. The function will be called with no
+      arguments.
+
+ :param recycle: If set to non -1, number of seconds between
+ connection recycling, which means upon checkout, if this
+ timeout is surpassed the connection will be closed and
+ replaced with a newly opened connection. Defaults to -1.
+
+ :param echo: If True, connections being pulled and retrieved
+ from the pool will be logged to the standard output, as well
+ as pool sizing information. Echoing can also be achieved by
+ enabling logging for the "sqlalchemy.pool"
+ namespace. Defaults to False.
+
+ :param use_threadlocal: If set to True, repeated calls to
+ :meth:`connect` within the same application thread will be
+ guaranteed to return the same connection object, if one has
+ already been retrieved from the pool and has not been
+ returned yet. Offers a slight performance advantage at the
+ cost of individual transactions by default. The
+ :meth:`unique_connection` method is provided to bypass the
+ threadlocal behavior installed into :meth:`connect`.
+
+ :param reset_on_return: If true, reset the database state of
+ connections returned to the pool. This is typically a
+ ROLLBACK to release locks and transaction resources.
+ Disable at your own peril. Defaults to True.
+
+ :param listeners: A list of
+ :class:`~sqlalchemy.interfaces.PoolListener`-like objects or
+ dictionaries of callables that receive events when DB-API
+ connections are created, checked out and checked in to the
+ pool.
+
+ """
Pool.__init__(self, creator, **params)
self.connection = _ConnectionRecord(self)
self._conn = self.connection
created and dropped, or is otherwise part of such an entity. Examples include
tables, columns, sequences, and indexes.
-All entities are subclasses of [sqlalchemy.schema#SchemaItem], and as defined
+All entities are subclasses of :class:`~sqlalchemy.schema.SchemaItem`, and as defined
in this module they are intended to be agnostic of any vendor-specific
constructs.
A collection of entities are grouped into a unit called
-[sqlalchemy.schema#MetaData]. MetaData serves as a logical grouping of schema
+:class:`~sqlalchemy.schema.MetaData`. MetaData serves as a logical grouping of schema
elements, and can also be associated with an actual database connection such
that operations involving the contained elements can contact the database as
needed.
Two of the elements here also build upon their "syntactic" counterparts, which
-are defined in [sqlalchemy.sql.expression#], specifically
-[sqlalchemy.schema#Table] and [sqlalchemy.schema#Column]. Since these objects
+are defined in :mod:`~sqlalchemy.sql.expression`, specifically
+:class:`~sqlalchemy.schema.Table` and :class:`~sqlalchemy.schema.Column`. Since these objects
are part of the SQL expression language, they are usable as components in SQL
expressions.
'UniqueConstraint', 'DefaultGenerator', 'Constraint', 'MetaData',
'ThreadLocalMetaData', 'SchemaVisitor', 'PassiveDefault',
       'DefaultClause', 'FetchedValue', 'ColumnDefault', 'DDL']
-
+__all__.sort()
class SchemaItem(visitors.Visitable):
"""Base class for items that define a database schema."""
args.append(c.copy(schema=schema))
return Table(self.name, metadata, schema=schema, *args)
-class Column(SchemaItem, expression._ColumnClause):
+class Column(SchemaItem, expression.ColumnClause):
"""Represent a column in a database table.
This is a subclass of ``expression.ColumnClause`` and represents an actual
return [x for x in (self.default, self.onupdate) if x is not None] + \
list(self.foreign_keys) + list(self.constraints)
else:
- return expression._ColumnClause.get_children(self, **kwargs)
+ return expression.ColumnClause.get_children(self, **kwargs)
class ForeignKey(SchemaItem):
"""A collection of Tables and their associated schema constructs.
Holds a collection of Tables and an optional binding to an ``Engine`` or
- ``Connection``. If bound, the [sqlalchemy.schema#Table] objects in the
+ ``Connection``. If bound, the :class:`~sqlalchemy.schema.Table` objects in the
collection and their columns may participate in implicit SQL execution.
The `Table` objects themselves are stored in the `metadata.tables`
``connect()``. You can also re-bind dynamically multiple times per
thread, just like a regular ``MetaData``.
- Use this type of MetaData when your tables are present in more than one
- database and you need to address them simultanesouly.
"""
__visit_name__ = 'metadata'
"""Base SQL and DDL compiler implementations.
-Provides the [sqlalchemy.sql.compiler#DefaultCompiler] class, which is
+Provides the :class:`~sqlalchemy.sql.compiler.DefaultCompiler` class, which is
responsible for generating all SQL query strings, as well as
-[sqlalchemy.sql.compiler#SchemaGenerator] and [sqlalchemy.sql.compiler#SchemaDropper]
+:class:`~sqlalchemy.sql.compiler.SchemaGenerator` and :class:`~sqlalchemy.sql.compiler.SchemaDropper`
which issue CREATE and DROP DDL for tables, sequences, and indexes.
The elements in this module are used by public-facing constructs like
-[sqlalchemy.sql.expression#ClauseElement] and [sqlalchemy.engine#Engine].
+:class:`~sqlalchemy.sql.expression.ClauseElement` and :class:`~sqlalchemy.engine.Engine`.
While dialect authors will want to be familiar with this module for the purpose of
creating database-specific compilers and schema generators, the module
is otherwise internal to SQLAlchemy.
if \
asfrom and \
- isinstance(column, sql._ColumnClause) and \
+ isinstance(column, sql.ColumnClause) and \
not column.is_literal and \
column.table is not None and \
not isinstance(column.table, sql.Select):
"""Defines the base components of SQL expression trees.
All components are derived from a common base class
-[sqlalchemy.sql.expression#ClauseElement]. Common behaviors are organized
+:class:`~sqlalchemy.sql.expression.ClauseElement`. Common behaviors are organized
based on class hierarchies, in some cases via mixins.
All object construction from this package occurs via functions which
def outerjoin(left, right, onclause=None):
"""Return an ``OUTER JOIN`` clause element.
- The returned object is an instance of [sqlalchemy.sql.expression#Join].
+ The returned object is an instance of :class:`~sqlalchemy.sql.expression.Join`.
Similar functionality is also available via the ``outerjoin()``
- method on any [sqlalchemy.sql.expression#FromClause].
+ method on any :class:`~sqlalchemy.sql.expression.FromClause`.
left
The left side of the join.
def join(left, right, onclause=None, isouter=False):
"""Return a ``JOIN`` clause element (regular inner join).
- The returned object is an instance of [sqlalchemy.sql.expression#Join].
+ The returned object is an instance of :class:`~sqlalchemy.sql.expression.Join`.
Similar functionality is also available via the ``join()`` method
- on any [sqlalchemy.sql.expression#FromClause].
+ on any :class:`~sqlalchemy.sql.expression.FromClause`.
left
The left side of the join.
"""Returns a ``SELECT`` clause element.
Similar functionality is also available via the ``select()``
- method on any [sqlalchemy.sql.expression#FromClause].
+ method on any :class:`~sqlalchemy.sql.expression.FromClause`.
- The returned object is an instance of [sqlalchemy.sql.expression#Select].
+ The returned object is an instance of :class:`~sqlalchemy.sql.expression.Select`.
All arguments which accept ``ClauseElement`` arguments also accept
string arguments, which will be converted as appropriate into
return s
def subquery(alias, *args, **kwargs):
- """Return an [sqlalchemy.sql.expression#Alias] object derived from a [sqlalchemy.sql.expression#Select].
+ """Return an :class:`~sqlalchemy.sql.expression.Alias` object derived from a :class:`~sqlalchemy.sql.expression.Select`.
name
alias name
\*args, \**kwargs
- all other arguments are delivered to the [sqlalchemy.sql.expression#select()]
+ all other arguments are delivered to the :func:`~sqlalchemy.sql.expression.select`
function.
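+
+    A usage sketch (``users`` is an illustrative table)::
+
+        s = subquery('sub', [users.c.id], users.c.id > 5)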
"""
return Select(*args, **kwargs).alias(alias)
def insert(table, values=None, inline=False, **kwargs):
- """Return an [sqlalchemy.sql.expression#Insert] clause element.
+ """Return an :class:`~sqlalchemy.sql.expression.Insert` clause element.
Similar functionality is available via the ``insert()`` method on
- [sqlalchemy.schema#Table].
+ :class:`~sqlalchemy.schema.Table`.
table
The table to be inserted into.
return Insert(table, values, inline=inline, **kwargs)
def update(table, whereclause=None, values=None, inline=False, **kwargs):
- """Return an [sqlalchemy.sql.expression#Update] clause element.
+ """Return an :class:`~sqlalchemy.sql.expression.Update` clause element.
Similar functionality is available via the ``update()`` method on
- [sqlalchemy.schema#Table].
+ :class:`~sqlalchemy.schema.Table`.
table
The table to be updated.
return Update(table, whereclause=whereclause, values=values, inline=inline, **kwargs)
def delete(table, whereclause = None, **kwargs):
- """Return a [sqlalchemy.sql.expression#Delete] clause element.
+ """Return a :class:`~sqlalchemy.sql.expression.Delete` clause element.
Similar functionality is available via the ``delete()`` method on
- [sqlalchemy.schema#Table].
+ :class:`~sqlalchemy.schema.Table`.
table
The table to be updated.
"""Join a list of clauses together using the ``AND`` operator.
The ``&`` operator is also overloaded on all
- [sqlalchemy.sql.expression#_CompareMixin] subclasses to produce the same
+ :class:`~sqlalchemy.sql.expression._CompareMixin` subclasses to produce the same
result.
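+
+    e.g. (``users`` is an illustrative table)::
+
+        and_(users.c.id == 5, users.c.name == 'ed')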
"""
"""Join a list of clauses together using the ``OR`` operator.
The ``|`` operator is also overloaded on all
- [sqlalchemy.sql.expression#_CompareMixin] subclasses to produce the same
+ :class:`~sqlalchemy.sql.expression._CompareMixin` subclasses to produce the same
result.
"""
"""Return a negation of the given clause, i.e. ``NOT(clause)``.
The ``~`` operator is also overloaded on all
- [sqlalchemy.sql.expression#_CompareMixin] subclasses to produce the same
+ :class:`~sqlalchemy.sql.expression._CompareMixin` subclasses to produce the same
result.
"""
Equivalent of SQL ``clausetest BETWEEN clauseleft AND clauseright``.
- The ``between()`` method on all [sqlalchemy.sql.expression#_CompareMixin] subclasses
+ The ``between()`` method on all :class:`~sqlalchemy.sql.expression._CompareMixin` subclasses
provides similar functionality.
"""
Equivalent of SQL ``CAST(clause AS totype)``.
- Use with a [sqlalchemy.types#TypeEngine] subclass, i.e::
+    Use with a :class:`~sqlalchemy.types.TypeEngine` subclass, i.e.::
cast(table.c.unit_price * table.c.qty, Numeric(10,4))
operator=operators.collate, group=False)
def exists(*args, **kwargs):
- """Return an ``EXISTS`` clause as applied to a [sqlalchemy.sql.expression#Select] object.
+ """Return an ``EXISTS`` clause as applied to a :class:`~sqlalchemy.sql.expression.Select` object.
Calling styles are of the following forms::
def union(*selects, **kwargs):
"""Return a ``UNION`` of multiple selectables.
- The returned object is an instance of [sqlalchemy.sql.expression#CompoundSelect].
+ The returned object is an instance of :class:`~sqlalchemy.sql.expression.CompoundSelect`.
A similar ``union()`` method is available on all
- [sqlalchemy.sql.expression#FromClause] subclasses.
+ :class:`~sqlalchemy.sql.expression.FromClause` subclasses.
\*selects
- a list of [sqlalchemy.sql.expression#Select] instances.
+ a list of :class:`~sqlalchemy.sql.expression.Select` instances.
\**kwargs
available keyword arguments are the same as those of
- [sqlalchemy.sql.expression#select()].
+ :func:`~sqlalchemy.sql.expression.select`.
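+
+    e.g. (``s1`` and ``s2`` are illustrative ``select()`` constructs)::
+
+        union(s1, s2)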
"""
return _compound_select('UNION', *selects, **kwargs)
def union_all(*selects, **kwargs):
"""Return a ``UNION ALL`` of multiple selectables.
- The returned object is an instance of [sqlalchemy.sql.expression#CompoundSelect].
+ The returned object is an instance of :class:`~sqlalchemy.sql.expression.CompoundSelect`.
A similar ``union_all()`` method is available on all
- [sqlalchemy.sql.expression#FromClause] subclasses.
+ :class:`~sqlalchemy.sql.expression.FromClause` subclasses.
\*selects
- a list of [sqlalchemy.sql.expression#Select] instances.
+ a list of :class:`~sqlalchemy.sql.expression.Select` instances.
\**kwargs
available keyword arguments are the same as those of
- [sqlalchemy.sql.expression#select()].
+ :func:`~sqlalchemy.sql.expression.select`.
"""
return _compound_select('UNION ALL', *selects, **kwargs)
def except_(*selects, **kwargs):
"""Return an ``EXCEPT`` of multiple selectables.
- The returned object is an instance of [sqlalchemy.sql.expression#CompoundSelect].
+ The returned object is an instance of :class:`~sqlalchemy.sql.expression.CompoundSelect`.
\*selects
- a list of [sqlalchemy.sql.expression#Select] instances.
+ a list of :class:`~sqlalchemy.sql.expression.Select` instances.
\**kwargs
available keyword arguments are the same as those of
- [sqlalchemy.sql.expression#select()].
+ :func:`~sqlalchemy.sql.expression.select`.
"""
return _compound_select('EXCEPT', *selects, **kwargs)
def except_all(*selects, **kwargs):
"""Return an ``EXCEPT ALL`` of multiple selectables.
- The returned object is an instance of [sqlalchemy.sql.expression#CompoundSelect].
+ The returned object is an instance of :class:`~sqlalchemy.sql.expression.CompoundSelect`.
\*selects
- a list of [sqlalchemy.sql.expression#Select] instances.
+ a list of :class:`~sqlalchemy.sql.expression.Select` instances.
\**kwargs
available keyword arguments are the same as those of
- [sqlalchemy.sql.expression#select()].
+ :func:`~sqlalchemy.sql.expression.select`.
"""
return _compound_select('EXCEPT ALL', *selects, **kwargs)
def intersect(*selects, **kwargs):
"""Return an ``INTERSECT`` of multiple selectables.
- The returned object is an instance of [sqlalchemy.sql.expression#CompoundSelect].
+ The returned object is an instance of :class:`~sqlalchemy.sql.expression.CompoundSelect`.
\*selects
- a list of [sqlalchemy.sql.expression#Select] instances.
+ a list of :class:`~sqlalchemy.sql.expression.Select` instances.
\**kwargs
available keyword arguments are the same as those of
- [sqlalchemy.sql.expression#select()].
+ :func:`~sqlalchemy.sql.expression.select`.
"""
return _compound_select('INTERSECT', *selects, **kwargs)
def intersect_all(*selects, **kwargs):
"""Return an ``INTERSECT ALL`` of multiple selectables.
- The returned object is an instance of [sqlalchemy.sql.expression#CompoundSelect].
+ The returned object is an instance of :class:`~sqlalchemy.sql.expression.CompoundSelect`.
\*selects
- a list of [sqlalchemy.sql.expression#Select] instances.
+ a list of :class:`~sqlalchemy.sql.expression.Select` instances.
\**kwargs
available keyword arguments are the same as those of
- [sqlalchemy.sql.expression#select()].
+ :func:`~sqlalchemy.sql.expression.select`.
"""
return _compound_select('INTERSECT ALL', *selects, **kwargs)
def alias(selectable, alias=None):
- """Return an [sqlalchemy.sql.expression#Alias] object.
+ """Return an :class:`~sqlalchemy.sql.expression.Alias` object.
- An ``Alias`` represents any [sqlalchemy.sql.expression#FromClause] with
+ An ``Alias`` represents any :class:`~sqlalchemy.sql.expression.FromClause` with
an alternate name assigned within SQL, typically using the ``AS``
clause when generated, e.g. ``SELECT * FROM table AS aliasname``.
Literal clauses are created automatically when non-
``ClauseElement`` objects (such as strings, ints, dates, etc.) are
used in a comparison operation with a
- [sqlalchemy.sql.expression#_CompareMixin] subclass, such as a ``Column``
+ :class:`~sqlalchemy.sql.expression._CompareMixin` subclass, such as a ``Column``
object. Use this function to force the generation of a literal
clause, which will be created as a
- [sqlalchemy.sql.expression#_BindParamClause] with a bound value.
+ :class:`~sqlalchemy.sql.expression._BindParamClause` with a bound value.
value
the value to be bound. Can be any Python object supported by
argument.
type\_
- an optional [sqlalchemy.types#TypeEngine] which will provide
+ an optional :class:`~sqlalchemy.types.TypeEngine` which will provide
bind-parameter translation for this literal.
"""
return _BindParamClause(None, value, type_=type_, unique=True)
def label(name, obj):
- """Return a [sqlalchemy.sql.expression#_Label] object for the given [sqlalchemy.sql.expression#ColumnElement].
+ """Return a :class:`~sqlalchemy.sql.expression._Label` object for the given :class:`~sqlalchemy.sql.expression.ColumnElement`.
A label changes the name of an element in the columns clause of a
``SELECT`` statement, typically via the ``AS`` SQL keyword.
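+
+    e.g. (``users`` is an illustrative table)::
+
+        select([label('fullname', users.c.name)])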
def column(text, type_=None):
"""Return a textual column clause, as would be in the columns clause of a ``SELECT`` statement.
- The object returned is an instance of [sqlalchemy.sql.expression#_ColumnClause],
+ The object returned is an instance of :class:`~sqlalchemy.sql.expression.ColumnClause`,
which represents the "syntactical" portion of the schema-level
- [sqlalchemy.schema#Column] object.
+ :class:`~sqlalchemy.schema.Column` object.
text
the name of the column. Quoting rules will be applied to the
clause like any other column name. For textual column
constructs that are not to be quoted, use the
- [sqlalchemy.sql.expression#literal_column()] function.
+ :func:`~sqlalchemy.sql.expression.literal_column` function.
type\_
- an optional [sqlalchemy.types#TypeEngine] object which will
+ an optional :class:`~sqlalchemy.types.TypeEngine` object which will
provide result-set translation for this column.
"""
- return _ColumnClause(text, type_=type_)
+ return ColumnClause(text, type_=type_)
def literal_column(text, type_=None):
"""Return a textual column expression, as would be in the columns
the text of the expression; can be any SQL expression. Quoting rules
will not be applied. To specify a column-name expression which should
be subject to quoting rules, use the
- [sqlalchemy.sql.expression#column()] function.
+ :func:`~sqlalchemy.sql.expression.column` function.
type\_
- an optional [sqlalchemy.types#TypeEngine] object which will provide
+ an optional :class:`~sqlalchemy.types.TypeEngine` object which will provide
result-set translation and additional expression semantics for this
column. If left as None the type will be NullType.
"""
- return _ColumnClause(text, type_=type_, is_literal=True)
+ return ColumnClause(text, type_=type_, is_literal=True)
def table(name, *columns):
- """Return a [sqlalchemy.sql.expression#Table] object.
+ """Return a :class:`~sqlalchemy.sql.expression.Table` object.
- This is a primitive version of the [sqlalchemy.schema#Table] object,
+ This is a primitive version of the :class:`~sqlalchemy.schema.Table` object,
which is a subclass of this object.
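+
+    e.g.::
+
+        from sqlalchemy.sql import table, column
+
+        users = table('users', column('id'), column('name'))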
"""
mostly useful with value-based bind params.
"""
- if isinstance(key, _ColumnClause):
+ if isinstance(key, ColumnClause):
return _BindParamClause(key.name, value, type_=key.type, unique=unique, shortname=shortname)
else:
return _BindParamClause(key, value, type_=type_, unique=unique, shortname=shortname)
The ``outparam`` can be used like a regular function parameter.
The "output" value will be available from the
- [sqlalchemy.engine#ResultProxy] object via its ``out_parameters``
+ :class:`~sqlalchemy.engine.ResultProxy` object via its ``out_parameters``
attribute, which returns a dictionary containing the values.
"""
def compile(self, bind=None, column_keys=None, compiler=None, dialect=None, inline=False):
"""Compile this SQL expression.
- The return value is a [sqlalchemy.engine#Compiled] object.
+ The return value is a :class:`~sqlalchemy.engine.Compiled` object.
Calling `str()` or `unicode()` on the returned value will yield
a string representation of the result. The ``Compiled``
object also can return a dictionary of bind parameter names and
"""
if name:
- co = _ColumnClause(name, selectable, type_=getattr(self, 'type', None))
+ co = ColumnClause(name, selectable, type_=getattr(self, 'type', None))
else:
name = str(self)
- co = _ColumnClause(self.anon_label, selectable, type_=getattr(self, 'type', None))
+ co = ColumnClause(self.anon_label, selectable, type_=getattr(self, 'type', None))
co.proxies = [self]
selectable.columns[name] = co
e.proxies.append(self)
return e
-class _ColumnClause(_Immutable, ColumnElement):
+class ColumnClause(_Immutable, ColumnElement):
"""Represents a generic column expression from any textual string.
This includes columns associated with tables, aliases and select
statements, but also any arbitrary text. May or may not be bound
- to an underlying ``Selectable``. ``_ColumnClause`` is usually
+ to an underlying ``Selectable``. ``ColumnClause`` is usually
created publically via the ``column()`` function or the
``literal_column()`` function.
parent selectable.
type
- ``TypeEngine`` object which can associate this ``_ColumnClause``
+ ``TypeEngine`` object which can associate this ``ColumnClause``
with a type.
is_literal
- if True, the ``_ColumnClause`` is assumed to be an exact
+ if True, the ``ColumnClause`` is assumed to be an exact
expression that will be delivered to the output with no quoting
rules applied regardless of case sensitive settings. the
``literal_column()`` function is usually used to create such a
- ``_ColumnClause``.
+ ``ColumnClause``.
"""
def __init__(self, text, selectable=None, type_=None, is_literal=False):
if name is None:
return self
else:
- return super(_ColumnClause, self).label(name)
+ return super(ColumnClause, self).label(name)
@property
def _from_objects(self):
# propagate the "is_literal" flag only if we are keeping our name,
        # otherwise it's considered to be a label
is_literal = self.is_literal and (name is None or name == self.name)
- c = _ColumnClause(name or self.name, selectable=selectable, type_=self.type, is_literal=is_literal)
+ c = ColumnClause(name or self.name, selectable=selectable, type_=self.type, is_literal=is_literal)
c.proxies = [self]
if attach:
selectable.columns[c.name] = c
Typically, a select statement which has only one column in its columns clause
is eligible to be used as a scalar expression.
- The returned object is an instance of [sqlalchemy.sql.expression#_ScalarSelect].
+ The returned object is an instance of :class:`~sqlalchemy.sql.expression._ScalarSelect`.
"""
return _ScalarSelect(self)
return list(self.inner_columns)[0]._make_proxy(selectable, name)
class CompoundSelect(_SelectBaseMixin, FromClause):
+ """Forms the basis of ``UNION``, ``UNION ALL``, and other SELECT-based set operations."""
+
def __init__(self, keyword, *selects, **kwargs):
self._should_correlate = kwargs.pop('correlate', False)
self.keyword = keyword
"""Construct a Select object.
The public constructor for Select is the
- [sqlalchemy.sql.expression#select()] function; see that function for
+ :func:`~sqlalchemy.sql.expression.select` function; see that function for
argument descriptions.
Additional generative and mutator methods are available on the
- [sqlalchemy.sql.expression#_SelectBaseMixin] superclass.
+ :class:`~sqlalchemy.sql.expression._SelectBaseMixin` superclass.
"""
self._should_correlate = correlate
GenericFunction.__init__(self, args=args, **kwargs)
class count(GenericFunction):
- """The ANSI COUNT aggregate function. With no arguments, emits COUNT *."""
+ """The ANSI COUNT aggregate function. With no arguments, emits COUNT \*."""
__return_type__ = sqltypes.Integer
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""defines genericized SQL types, each represented by a subclass of
-[sqlalchemy.types#AbstractType]. Dialects define further subclasses of these
+:class:`~sqlalchemy.types.AbstractType`. Dialects define further subclasses of these
types.
For more information see the SQLAlchemy documentation on types.
return None
def compare_values(self, x, y):
- """compare two values for equality."""
+ """Compare two values for equality."""
return x == y
def is_mutable(self):
- """return True if the target Python type is 'mutable'.
+ """Return True if the target Python type is 'mutable'.
- This allows systems like the ORM to know if an object
- can be considered 'not changed' by identity alone.
- """
+ This allows systems like the ORM to know if a column value can
+ be considered 'not changed' by comparing the identity of
+ objects alone.
+
+ Use the :class:`MutableType` mixin or override this method to
+ return True in custom types that hold mutable values such as
+ ``dict``, ``list`` and custom objects.
+ """
return False
def get_dbapi_type(self, dbapi):
This can be useful for calling ``setinputsizes()``, for example.
"""
-
return None
def adapt_operator(self, op):
- """given an operator from the sqlalchemy.sql.operators package,
+ """Given an operator from the sqlalchemy.sql.operators package,
translate it to a new operator based on the semantics of this type.
- By default, returns the operator unchanged."""
+ By default, returns the operator unchanged.
+ """
return op
def __repr__(self):
for k in inspect.getargspec(self.__init__)[0][1:]))
class TypeEngine(AbstractType):
+ """Base for built-in types.
+
+ May be sub-classed to create entirely new types. Example::
+
+ import sqlalchemy.types as types
+
+ class MyType(types.TypeEngine):
+        def __init__(self, precision=8):
+ self.precision = precision
+
+ def get_col_spec(self):
+ return "MYTYPE(%s)" % self.precision
+
+ def bind_processor(self, dialect):
+ def process(value):
+ return value
+ return process
+
+ def result_processor(self, dialect):
+ def process(value):
+ return value
+ return process
+
+ Once the type is made, it's immediately usable::
+
+ table = Table('foo', meta,
+ Column('id', Integer, primary_key=True),
+ Column('data', MyType(16))
+ )
+
+ """
+
def dialect_impl(self, dialect, **kwargs):
try:
return self._impl_dict[dialect]
return d
def get_col_spec(self):
+ """Return the DDL representation for this type."""
raise NotImplementedError()
def bind_processor(self, dialect):
+ """Return a conversion function for processing bind values.
+
+ Returns a callable which will receive a bind parameter value
+ as the sole positional argument and will return a value to
+ send to the DB-API.
+
+ If processing is not necessary, the method should return ``None``.
+
+ """
return None
def result_processor(self, dialect):
+ """Return a conversion function for processing result row values.
+
+ Returns a callable which will receive a result row column
+ value as the sole positional argument and will return a value
+ to return to the user.
+
+ If processing is not necessary, the method should return ``None``.
+
+ """
return None
def adapt(self, cls):
class TypeDecorator(AbstractType):
"""Allows the creation of types which add additional functionality
- to an existing type. Typical usage::
-
- class MyCustomType(TypeDecorator):
- impl = String
-
+ to an existing type.
+
+ Typical usage::
+
+ import sqlalchemy.types as types
+
+ class MyType(types.TypeDecorator):
+ # Prefixes Unicode values with "PREFIX:" on the way in and
+ # strips it off on the way out.
+
+ impl = types.Unicode
+
def process_bind_param(self, value, dialect):
- return value + "incoming string"
-
+ return "PREFIX:" + value
+
def process_result_value(self, value, dialect):
- return value[0:-16]
-
+ return value[7:]
+
+ def copy(self):
+ return MyType(self.impl.length)
+
The class-level "impl" variable is required, and can reference any
- TypeEngine class. Alternatively, the load_dialect_impl() method can
- be used to provide different type classes based on the dialect given;
- in this case, the "impl" variable can reference ``TypeEngine`` as a
- placeholder.
-
+ TypeEngine class. Alternatively, the load_dialect_impl() method
+ can be used to provide different type classes based on the dialect
+ given; in this case, the "impl" variable can reference
+ ``TypeEngine`` as a placeholder.
+
+ The reason that type behavior is modified using class decoration
+ instead of subclassing is due to the way dialect specific types
+ are used. Such as with the example above, when using the mysql
+ dialect, the actual type in use will be a
+ ``sqlalchemy.databases.mysql.MSString`` instance.
+ ``TypeDecorator`` handles the mechanics of passing the values
+ between user-defined ``process_`` methods and the current
+ dialect-specific type in use.
+
"""
-
+
def __init__(self, *args, **kwargs):
if not hasattr(self.__class__, 'impl'):
raise AssertionError("TypeDecorator implementations require a class-level variable 'impl' which refers to the class of type being decorated")
return tt
def load_dialect_impl(self, dialect):
- """loads the dialect-specific implementation of this type.
+ """Loads the dialect-specific implementation of this type.
by default calls dialect.type_descriptor(self.impl), but
can be overridden to provide different behavior.
return self.impl.is_mutable()
class MutableType(object):
- """A mixin that marks a Type as holding a mutable object."""
+ """A mixin that marks a Type as holding a mutable object.
+
+ :meth:`copy_value` and :meth:`compare_values` should be customized
+ as needed to match the needs of the object.
+
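+    An abbreviated sketch of a hypothetical mutable type
+    (``JSONEncodedDict`` is illustrative, not part of SQLAlchemy)::
+
+        class JSONEncodedDict(MutableType, TypeDecorator):
+            impl = String
+
+            def copy_value(self, value):
+                # return a distinct copy so changes can be detected
+                return dict(value)
+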
+ """
def is_mutable(self):
+ """Return True, mutable."""
return True
def copy_value(self, value):
+ """Unimplemented."""
raise NotImplementedError()
def compare_values(self, x, y):
+ """Compare *x* == *y*."""
return x == y
def to_instance(typeobj):
return typeobj.adapt(impltype)
class NullType(TypeEngine):
+ """An unknown type.
+
+ NullTypes will stand in if :class:`~sqlalchemy.Table` reflection
+ encounters a column data type unknown to SQLAlchemy. The
+ resulting columns are nearly fully usable: the DB-API adapter will
+ handle all translation to and from the database data type.
+
+    NullType does not have sufficient information to participate in a
+ ``CREATE TABLE`` statement and will raise an exception if
+ encountered during a :meth:`~sqlalchemy.Table.create` operation.
+
+ """
+
def get_col_spec(self):
raise NotImplementedError()
NullTypeEngine = NullType
class Concatenable(object):
- """marks a type as supporting 'concatenation'"""
+ """A mixin that marks a type as supporting 'concatenation', typically strings."""
+
def adapt_operator(self, op):
+ """Converts an add operator to concat."""
from sqlalchemy.sql import operators
if op == operators.add:
return operators.concat_op
return op
class String(Concatenable, TypeEngine):
- """A sized string type.
+ """The base for all string and character types.
In SQL, corresponds to VARCHAR. Can also take Python unicode objects
and encode to the database's encoding in bind params (and the reverse for
result sets.)
- The `length` field is usually required when the `String` type is used within a
- CREATE TABLE statement, since VARCHAR requires a length on most databases.
- Currently SQLite is an exception to this.
-
+ The `length` field is usually required when the `String` type is
+ used within a CREATE TABLE statement, as VARCHAR requires a length
+ on most databases.
+
"""
+
def __init__(self, length=None, convert_unicode=False, assert_unicode=None):
+ """Create a string-holding type.
+
+ :param length: optional, a length for the column for use in
+ DDL statements. May be safely omitted if no ``CREATE
+ TABLE`` will be issued. Certain databases may require a
+ *length* for use in DDL, and will raise an exception when
+ the ``CREATE TABLE`` DDL is issued. Whether the value is
+ interpreted as bytes or characters is database specific.
+
+ :param convert_unicode: defaults to False. If True, convert
+ ``unicode`` data sent to the database to a ``str``
+ bytestring, and convert bytestrings coming back from the
+ database into ``unicode``.
+
+ Bytestrings are encoded using the dialect's
+ :attr:`~sqlalchemy.engine.base.Dialect.encoding`, which
+ defaults to `utf-8`.
+
+ If False, may be overridden by
+ :attr:`sqlalchemy.engine.base.Dialect.convert_unicode`.
+
+ :param assert_unicode:
+
+ If None (the default), no assertion will take place unless
+ overridden by :attr:`sqlalchemy.engine.base.Dialect.assert_unicode`.
+
+ If 'warn', will issue a runtime warning if a ``str``
+ instance is used as a bind value.
+
+ If true, will raise an :exc:`sqlalchemy.exc.InvalidRequestError`.
+
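+    e.g.::
+
+        Column('name', String(50))
+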
+ """
self.length = length
self.convert_unicode = convert_unicode
self.assert_unicode = assert_unicode
return dbapi.STRING
class Text(String):
+ """A variably sized string type.
+
+ In SQL, usually corresponds to CLOB or TEXT. Can also take Python
+ unicode objects and encode to the database's encoding in bind
+ params (and the reverse for result sets.)
+
+ """
def dialect_impl(self, dialect, **kwargs):
return TypeEngine.dialect_impl(self, dialect, **kwargs)
class Unicode(String):
- """A synonym for String(length, convert_unicode=True, assert_unicode='warn')."""
+ """A variable length Unicode string.
+
+ The ``Unicode`` type is a :class:`String` which converts Python
+ ``unicode`` objects (i.e., strings that are defined as
+ ``u'somevalue'``) into encoded bytestrings when passing the value
+ to the database driver, and similarly decodes values from the
+ database back into Python ``unicode`` objects.
+
+ When using the ``Unicode`` type, it is only appropriate to pass
+ Python ``unicode`` objects, and not plain ``str``. If a
+ bytestring (``str``) is passed, a runtime warning is issued. If
+ you notice your application raising these warnings but you're not
+ sure where, the Python ``warnings`` filter can be used to turn
+ these warnings into exceptions which will illustrate a stack
+ trace::
+
+ import warnings
+ warnings.simplefilter('error')
+
+ Bytestrings sent to and received from the database are encoded
+ using the dialect's
+ :attr:`~sqlalchemy.engine.base.Dialect.encoding`, which defaults
+ to `utf-8`.
+
+ A synonym for String(length, convert_unicode=True, assert_unicode='warn').
+
+ """
def __init__(self, length=None, **kwargs):
+ """Create a Unicode-converting String type.
+
+ :param length: optional, a length for the column for use in
+ DDL statements. May be safely omitted if no ``CREATE
+ TABLE`` will be issued. Certain databases may require a
+ *length* for use in DDL, and will raise an exception when
+ the ``CREATE TABLE`` DDL is issued. Whether the value is
+ interpreted as bytes or characters is database specific.
+
+ """
kwargs.setdefault('convert_unicode', True)
kwargs.setdefault('assert_unicode', 'warn')
super(Unicode, self).__init__(length=length, **kwargs)
"""A synonym for Text(convert_unicode=True, assert_unicode='warn')."""
def __init__(self, length=None, **kwargs):
+ """Create a Unicode-converting Text type.
+
+ :param length: optional, a length for the column for use in
+ DDL statements. May be safely omitted if no ``CREATE
+ TABLE`` will be issued. Certain databases may require a
+ *length* for use in DDL, and will raise an exception when
+ the ``CREATE TABLE`` DDL is issued. Whether the value is
+ interpreted as bytes or characters is database specific.
+
+ """
kwargs.setdefault('convert_unicode', True)
kwargs.setdefault('assert_unicode', 'warn')
super(UnicodeText, self).__init__(length=length, **kwargs)
+
class Integer(TypeEngine):
- """Integer datatype."""
+ """A type for ``int`` integers."""
def get_dbapi_type(self, dbapi):
return dbapi.NUMBER
+
class SmallInteger(Integer):
- """Smallint datatype."""
+ """A type for smaller ``int`` integers.
+
+ Typically generates a ``SMALLINT`` in DDL, and otherwise acts like
+ a normal :class:`Integer` on the Python side.
+
+ """
Smallinteger = SmallInteger
class Numeric(TypeEngine):
- """Numeric datatype, usually resolves to DECIMAL or NUMERIC."""
+ """A type for fixed precision numbers.
+
+ Typically generates DECIMAL or NUMERIC. Returns
+ ``decimal.Decimal`` objects by default.
+
+ """
def __init__(self, precision=10, scale=2, asdecimal=True, length=None):
+ """Construct a Numeric.
+
+ :param precision: the numeric precision for use in DDL ``CREATE TABLE``.
+
+ :param scale: the numeric scale for use in DDL ``CREATE TABLE``.
+
+ :param asdecimal: default True. If False, values will be
+ returned as-is from the DB-API, and may be either
+ ``Decimal`` or ``float`` types depending on the DB-API in
+ use.
+
+ """
if length:
util.warn_deprecated("'length' is deprecated for Numeric. Use 'scale'.")
scale = length
else:
return None
+
class Float(Numeric):
- def __init__(self, precision = 10, asdecimal=False, **kwargs):
+ """A type for ``float`` numbers."""
+
+ def __init__(self, precision=10, asdecimal=False, **kwargs):
+ """Construct a Float.
+
+ :param precision: the numeric precision for use in DDL ``CREATE TABLE``.
+
+ """
self.precision = precision
self.asdecimal = asdecimal
def adapt(self, impltype):
return impltype(precision=self.precision, asdecimal=self.asdecimal)
+
class DateTime(TypeEngine):
- """Implement a type for ``datetime.datetime()`` objects."""
+ """A type for ``datetime.datetime()`` objects.
+
+ Date and time types return objects from the Python ``datetime``
+    module. Most DBAPIs have built-in support for the datetime
+ module, with the noted exception of SQLite. In the case of
+ SQLite, date and time types are stored as strings which are then
+ converted back to datetime objects when rows are returned.
+
+ """
def __init__(self, timezone=False):
self.timezone = timezone
def get_dbapi_type(self, dbapi):
return dbapi.DATETIME
+
class Date(TypeEngine):
- """Implement a type for ``datetime.date()`` objects."""
+ """A type for ``datetime.date()`` objects."""
def get_dbapi_type(self, dbapi):
return dbapi.DATETIME
+
class Time(TypeEngine):
- """Implement a type for ``datetime.time()`` objects."""
+ """A type for ``datetime.time()`` objects."""
def __init__(self, timezone=False):
self.timezone = timezone
def get_dbapi_type(self, dbapi):
return dbapi.DATETIME
+
class Binary(TypeEngine):
+ """A type for binary byte data.
+
+ The Binary type generates BLOB or BYTEA when tables are created,
+ and also converts incoming values using the ``Binary`` callable
+ provided by each DB-API.
+
+ """
+
def __init__(self, length=None):
+ """Construct a Binary type.
+
+ :param length: optional, a length for the column for use in
+ DDL statements. May be safely omitted if no ``CREATE
+ TABLE`` will be issued. Certain databases may require a
+ *length* for use in DDL, and will raise an exception when
+ the ``CREATE TABLE`` DDL is issued.
+
+ """
self.length = length
def bind_processor(self, dialect):
def get_dbapi_type(self, dbapi):
return dbapi.BINARY
+
class PickleType(MutableType, TypeDecorator):
+ """Holds Python objects.
+
+ PickleType builds upon the Binary type to apply Python's
+ ``pickle.dumps()`` to incoming objects, and ``pickle.loads()`` on
+ the way out, allowing any pickleable Python object to be stored as
+ a serialized binary field.
+
+ """
+
impl = Binary
def __init__(self, protocol=pickle.HIGHEST_PROTOCOL, pickler=None, mutable=True, comparator=None):
+ """Construct a PickleType.
+
+ :param protocol: defaults to ``pickle.HIGHEST_PROTOCOL``.
+
+    :param pickler: defaults to the ``cPickle`` module, or ``pickle``
+      if ``cPickle`` is not available. May be any object with
+      pickle-compatible ``dumps`` and ``loads`` methods.
+
+ :param mutable: defaults to True; implements
+ :meth:`AbstractType.is_mutable`.
+
+ :param comparator: optional. a 2-arg callable predicate used
+ to compare values of this type. Defaults to equality if
+ *mutable* is False or ``pickler.dumps()`` equality if
+ *mutable* is True.
+
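+    e.g.::
+
+        Column('data', PickleType(mutable=False))
+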
+ """
self.protocol = protocol
self.pickler = pickler or pickle
self.mutable = mutable
def is_mutable(self):
return self.mutable
+
class Boolean(TypeEngine):
- pass
+ """A bool datatype.
+
+ Boolean typically uses BOOLEAN or SMALLINT on the DDL side, and on
+ the Python side deals in ``True`` or ``False``.
+
+ """
+
class Interval(TypeDecorator):
- """Type to be used in Column statements to store python timedeltas.
+ """A type for ``datetime.timedelta()`` objects.
- If it's possible it uses native engine features to store timedeltas
- (now it's only PostgreSQL Interval type), if there is no such it
- fallbacks to DateTime storage with converting from/to timedelta on the fly
+ The Interval type deals with ``datetime.timedelta`` objects. In
+ PostgreSQL, the native ``INTERVAL`` type is used; for others, the
+ value is stored as a date which is relative to the "epoch"
+ (Jan. 1, 1970).
- Converting is very simple - just use epoch(zero timestamp, 01.01.1970) as
- base, so if we need to store timedelta = 1 day (24 hours) in database it
- will be stored as DateTime = '2nd Jan 1970 00:00', see bind_processor
- and result_processor to actual conversion code
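+
+    A usage sketch::
+
+        from datetime import timedelta
+
+        Column('duration', Interval())
+        # timedelta(hours=24) is stored as INTERVAL on PostgreSQL,
+        # and as the datetime '1970-01-02 00:00:00' elsewhere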
"""
impl = TypeEngine