--- /dev/null
+.. _sqlalchemy.ext.compiler_toplevel:
+
+Custom SQL Constructs and Compilation Extension
+===============================================
+
+.. automodule:: sqlalchemy.ext.compiler
+ :members:
\ No newline at end of file
--- /dev/null
+=====================================
+Working with Engines and Connections
+=====================================
+
+.. module:: sqlalchemy.engine.base
+
+Recall from the beginning of :ref:`engines_toplevel` that the :class:`.Engine` provides a
+``connect()`` method which returns a
+:class:`~sqlalchemy.engine.base.Connection` object.
+:class:`~sqlalchemy.engine.base.Connection` is a *proxy* object which
+maintains a reference to a DBAPI connection instance. The ``close()`` method
+on :class:`~sqlalchemy.engine.base.Connection` does not actually close the
+DBAPI connection, but instead returns it to the connection pool referenced by
+the :class:`~sqlalchemy.engine.base.Engine`.
+:class:`~sqlalchemy.engine.base.Connection` will also automatically return its
+resources to the connection pool when the object is garbage collected, i.e.
+its ``__del__()`` method is called. When using the standard C implementation
+of Python, this method is usually called as soon as the object is
+dereferenced. With other Python implementations such as Jython, this is
+not guaranteed.
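
The proxy-plus-pool relationship described above can be sketched in a few lines of plain Python. This is an illustration of the idea only, not SQLAlchemy's actual implementation; ``MiniPool`` and ``ProxyConnection`` are hypothetical names invented for the sketch:

```python
import sqlite3

class MiniPool:
    """Illustrative pool: hands out raw DBAPI connections, keeps returned ones."""
    def __init__(self, creator):
        self._creator = creator
        self._available = []

    def checkout(self):
        # reuse a pooled DBAPI connection if one is available
        raw = self._available.pop() if self._available else self._creator()
        return ProxyConnection(raw, self)

    def checkin(self, raw):
        self._available.append(raw)

class ProxyConnection:
    """Illustrative proxy: close() checks the raw connection back in."""
    def __init__(self, raw, pool):
        self._raw = raw
        self._pool = pool

    def execute(self, statement):
        return self._raw.execute(statement)

    def close(self):
        # the DBAPI connection is returned to the pool, not actually closed
        self._pool.checkin(self._raw)
        self._raw = None

pool = MiniPool(lambda: sqlite3.connect(':memory:'))
conn = pool.checkout()
raw = conn._raw
conn.close()               # returns the DBAPI connection to the pool
conn2 = pool.checkout()    # ...where the next checkout reuses it
```

The point of the sketch is the last two lines: "closing" the proxy makes the underlying DBAPI connection available for reuse rather than discarding it.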
+
+The ``execute()`` methods on both :class:`~sqlalchemy.engine.base.Engine` and
+:class:`~sqlalchemy.engine.base.Connection` can also receive SQL clause
+constructs as well::
+
+ connection = engine.connect()
+ result = connection.execute(select([table1], table1.c.col1==5))
+ for row in result:
+ print row['col1'], row['col2']
+ connection.close()
+
+The above SQL construct is known as a ``select()``. The full range of SQL
+constructs available is described in :ref:`sqlexpression_toplevel`.
+
+Both :class:`~sqlalchemy.engine.base.Connection` and
+:class:`~sqlalchemy.engine.base.Engine` fulfill an interface known as
+:class:`~sqlalchemy.engine.base.Connectable` which specifies common
+functionality between the two objects, namely being able to call ``connect()``
+to return a :class:`~sqlalchemy.engine.base.Connection` object
+(:class:`~sqlalchemy.engine.base.Connection` just returns itself), and being
+able to call ``execute()`` to get a result set. Following this, most
+SQLAlchemy functions and objects which accept an
+:class:`~sqlalchemy.engine.base.Engine` as a parameter or attribute with which
+to execute SQL will also accept a :class:`~sqlalchemy.engine.base.Connection`.
+This argument is named ``bind``::
+
+ engine = create_engine('sqlite:///:memory:')
+
+ # specify some Table metadata
+ metadata = MetaData()
+ table = Table('sometable', metadata, Column('col1', Integer))
+
+ # create the table with the Engine
+ table.create(bind=engine)
+
+ # drop the table with a Connection off the Engine
+ connection = engine.connect()
+ table.drop(bind=connection)
+
+.. index::
+ single: thread safety; connections
+
+Connection facts:
+
+* The Connection object is **not thread-safe**. While a Connection can be
+  shared among threads using properly synchronized access, this is not
+  recommended, as many DBAPIs have issues with, if not outright disallow,
+  sharing of connection state between threads.
+* The Connection object represents a single DBAPI connection checked out from
+  the connection pool. In this state, the connection pool has no effect upon
+ the connection, including its expiration or timeout state. For the
+ connection pool to properly manage connections, **connections should be
+ returned to the connection pool (i.e. ``connection.close()``) whenever the
+ connection is not in use**. If your application has a need for management of
+ multiple connections or is otherwise long running (this includes all web
+ applications, threaded or not), don't hold a single connection open at the
+ module level.
+
+Connection API
+===============
+
+.. autoclass:: Connection
+ :members:
+ :undoc-members:
+
+.. autoclass:: Connectable
+ :members:
+
+Engine API
+===========
+
+.. autoclass:: Engine
+ :members:
+
+Result Object API
+=================
+
+.. autoclass:: sqlalchemy.engine.base.ResultProxy
+ :members:
+
+.. autoclass:: sqlalchemy.engine.base.RowProxy
+ :members:
+
+Using Connection-level Transactions
+===================================
+
+The :class:`~sqlalchemy.engine.base.Connection` object provides a ``begin()``
+method which returns a :class:`~sqlalchemy.engine.base.Transaction` object.
+This object is usually used within a try/except clause so that it is
+guaranteed to ``rollback()`` or ``commit()``::
+
+ trans = connection.begin()
+ try:
+ r1 = connection.execute(table1.select())
+ connection.execute(table1.insert(), col1=7, col2='this is some data')
+ trans.commit()
+ except:
+ trans.rollback()
+ raise
+
+The :class:`~sqlalchemy.engine.base.Transaction` object also handles "nested"
+behavior by keeping track of the outermost begin/commit pair. In this example,
+two functions both issue a transaction on a Connection, but only the outermost
+Transaction object actually takes effect when it is committed.
+
+.. sourcecode:: python+sql
+
+ # method_a starts a transaction and calls method_b
+ def method_a(connection):
+ trans = connection.begin() # open a transaction
+ try:
+ method_b(connection)
+ trans.commit() # transaction is committed here
+ except:
+ trans.rollback() # this rolls back the transaction unconditionally
+ raise
+
+ # method_b also starts a transaction
+ def method_b(connection):
+ trans = connection.begin() # open a transaction - this runs in the context of method_a's transaction
+ try:
+ connection.execute("insert into mytable values ('bat', 'lala')")
+ connection.execute(mytable.insert(), col1='bat', col2='lala')
+ trans.commit() # transaction is not committed yet
+ except:
+ trans.rollback() # this rolls back the transaction unconditionally
+ raise
+
+ # open a Connection and call method_a
+ conn = engine.connect()
+ method_a(conn)
+ conn.close()
+
+Above, ``method_a`` is called first, which calls ``connection.begin()``. Then
+it calls ``method_b``. When ``method_b`` calls ``connection.begin()``, it just
+increments a counter that is decremented when it calls ``commit()``. If either
+``method_a`` or ``method_b`` calls ``rollback()``, the whole transaction is
+rolled back. The transaction is not committed until ``method_a`` calls the
+``commit()`` method. This "nesting" behavior allows the creation of functions
+which "guarantee" that a transaction will be used if one was not already
+available, but will automatically participate in an enclosing transaction if
+one exists.
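
The counting mechanism described above can be modeled in a few lines of Python. This is a sketch of the nesting behavior only; ``NestableTransaction`` is a hypothetical name, not the real :class:`~sqlalchemy.engine.base.Transaction` implementation:

```python
class NestableTransaction:
    """Sketch of begin/commit nesting: only the outermost commit() counts."""
    def __init__(self):
        self._depth = 0
        self.committed = False

    def begin(self):
        self._depth += 1

    def commit(self):
        self._depth -= 1
        if self._depth == 0:
            self.committed = True   # outermost begin/commit pair: really commit

trans = NestableTransaction()
trans.begin()     # method_a's begin()
trans.begin()     # method_b's begin() - just increments the counter
trans.commit()    # method_b's commit() - decrements; no effect yet
trans.commit()    # method_a's commit() - counter hits zero, really commits
```

A ``rollback()`` in the real system short-circuits this counting: it rolls back the whole transaction regardless of depth.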
+
+Note that SQLAlchemy's Object Relational Mapper also provides a way to control
+transaction scope at a higher level; this is described in
+:ref:`unitofwork_transaction`.
+
+.. index::
+ single: thread safety; transactions
+
+Transaction Facts:
+
+* The Transaction object, just like its parent Connection, is **not thread-safe**.
+
+.. autoclass:: Transaction
+ :members:
+
+
+Understanding Autocommit
+========================
+
+The previous transaction example illustrates how to use
+:class:`~sqlalchemy.engine.base.Transaction` so that several executions can
+take part in the same transaction. What happens when we issue an INSERT,
+UPDATE or DELETE call without using
+:class:`~sqlalchemy.engine.base.Transaction`? The answer is **autocommit**.
+While many DBAPIs implement a flag called ``autocommit``, the current
+SQLAlchemy behavior is such that it implements its own autocommit. This is
+achieved by detecting statements which represent data-changing operations,
+i.e. INSERT, UPDATE, DELETE, etc., and then issuing a COMMIT automatically if
+no transaction is in progress. The detection is based on compiled statement
+attributes, or, in the case of a text-only statement, via regular expressions.
+
+.. sourcecode:: python+sql
+
+ conn = engine.connect()
+ conn.execute("INSERT INTO users VALUES (1, 'john')") # autocommits
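
For text-only statements, the detection can be imagined as a simple pattern match. The helper below is an illustrative sketch only; the exact expression SQLAlchemy uses internally is a private detail and may differ:

```python
import re

# Hypothetical version of the data-changing-statement check.
AUTOCOMMIT_RE = re.compile(
    r'\s*(INSERT|UPDATE|DELETE|CREATE|DROP|ALTER)\b',
    re.IGNORECASE)

def should_autocommit_text(statement):
    """Return True if the raw SQL text looks like a data-changing statement."""
    return AUTOCOMMIT_RE.match(statement) is not None
```

Under this sketch, ``INSERT INTO users VALUES (1, 'john')`` would trigger a COMMIT when no transaction is in progress, while a plain SELECT would not.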
+
+.. _dbengine_implicit:
+
+Connectionless Execution, Implicit Execution
+=============================================
+
+Recall from the first section that we mentioned executing with and without a
+:class:`~sqlalchemy.engine.base.Connection`. "Connectionless" execution
+refers to calling the ``execute()`` method on an object which is not a
+:class:`~sqlalchemy.engine.base.Connection`, which could be on the
+:class:`~sqlalchemy.engine.base.Engine` itself, or could be a constructed SQL
+object. When we say "implicit", we mean that we are calling the ``execute()``
+method on an object which is neither a
+:class:`~sqlalchemy.engine.base.Connection` nor an
+:class:`~sqlalchemy.engine.base.Engine` object; this can only be used with
+constructed SQL objects which have their own ``execute()`` method, and can be
+"bound" to an :class:`~sqlalchemy.engine.base.Engine`. A description of
+"constructed SQL objects" may be found in :ref:`sqlexpression_toplevel`.
+
+A summary of all three methods follows below. First, assume the usage of the
+following :class:`~sqlalchemy.schema.MetaData` and
+:class:`~sqlalchemy.schema.Table` objects; while we haven't yet introduced
+these concepts, for now you only need to know that we are representing a
+database table, and are creating an "executable" SQL construct which issues a
+statement to the database. These objects are described in
+:ref:`metadata_toplevel`.
+
+.. sourcecode:: python+sql
+
+ meta = MetaData()
+ users_table = Table('users', meta,
+ Column('id', Integer, primary_key=True),
+ Column('name', String(50))
+ )
+
+Explicit execution delivers the SQL text or constructed SQL expression to the
+``execute()`` method of :class:`~sqlalchemy.engine.base.Connection`:
+
+.. sourcecode:: python+sql
+
+ engine = create_engine('sqlite:///file.db')
+ connection = engine.connect()
+ result = connection.execute(users_table.select())
+ for row in result:
+ # ....
+ connection.close()
+
+Explicit, connectionless execution delivers the expression to the
+``execute()`` method of :class:`~sqlalchemy.engine.base.Engine`:
+
+.. sourcecode:: python+sql
+
+ engine = create_engine('sqlite:///file.db')
+ result = engine.execute(users_table.select())
+ for row in result:
+ # ....
+ result.close()
+
+Implicit execution is also connectionless, and calls the ``execute()`` method
+on the expression itself, utilizing the fact that either an
+:class:`~sqlalchemy.engine.base.Engine` or
+:class:`~sqlalchemy.engine.base.Connection` has been *bound* to the expression
+object (binding is discussed further in the next section,
+:ref:`metadata_toplevel`):
+
+.. sourcecode:: python+sql
+
+ engine = create_engine('sqlite:///file.db')
+ meta.bind = engine
+ result = users_table.select().execute()
+ for row in result:
+ # ....
+ result.close()
+
+In both "connectionless" examples, the
+:class:`~sqlalchemy.engine.base.Connection` is created behind the scenes; the
+:class:`~sqlalchemy.engine.base.ResultProxy` returned by the ``execute()``
+call references the :class:`~sqlalchemy.engine.base.Connection` used to issue
+the SQL statement. When we issue ``close()`` on the
+:class:`~sqlalchemy.engine.base.ResultProxy`, or if the result set object
+falls out of scope and is garbage collected, the underlying
+:class:`~sqlalchemy.engine.base.Connection` is closed for us, resulting in the
+DBAPI connection being returned to the pool.
+
+.. _threadlocal_strategy:
+
+Using the Threadlocal Execution Strategy
+-----------------------------------------
+
+The "threadlocal" engine strategy is used by non-ORM applications which wish
+to bind a transaction to the current thread, such that all parts of the
+application can participate in that transaction implicitly without the need to
+explicitly reference a :class:`~sqlalchemy.engine.base.Connection`.
+"threadlocal" is designed for a very specific pattern of use, and is not
+appropriate unless this very specific pattern, described below, is what's
+desired. It has **no impact** on the "thread safety" of SQLAlchemy components
+or one's application. It also should not be used when using an ORM
+:class:`~sqlalchemy.orm.session.Session` object, as the
+:class:`~sqlalchemy.orm.session.Session` itself represents an ongoing
+transaction and itself handles the job of maintaining connection and
+transactional resources.
+
+Enabling ``threadlocal`` is achieved as follows:
+
+.. sourcecode:: python+sql
+
+ db = create_engine('mysql://localhost/test', strategy='threadlocal')
+
+When the engine above is used in a "connectionless" style, meaning
+``engine.execute()`` is called, a DBAPI connection is retrieved from the
+connection pool and then associated with the current thread. Subsequent
+operations on the :class:`~sqlalchemy.engine.base.Engine` while the DBAPI
+connection remains checked out will make use of the *same* DBAPI connection
+object. The connection stays allocated until all returned
+:class:`~sqlalchemy.engine.base.ResultProxy` objects are closed, which occurs
+for a particular :class:`~sqlalchemy.engine.base.ResultProxy` after all
+pending results are fetched, or immediately for an operation which returns no
+rows (such as an INSERT).
+
+.. sourcecode:: python+sql
+
+ # execute one statement and receive results. r1 now references a DBAPI connection resource.
+ r1 = db.execute("select * from table1")
+
+ # execute a second statement and receive results. r2 now references the *same* resource as r1
+ r2 = db.execute("select * from table2")
+
+ # fetch a row on r1 (assume more results are pending)
+ row1 = r1.fetchone()
+
+ # fetch a row on r2 (same)
+ row2 = r2.fetchone()
+
+ # close r1. the connection is still held by r2.
+ r1.close()
+
+ # close r2. with no more references to the underlying connection resources, they
+ # are returned to the pool.
+ r2.close()
+
+The above example does not illustrate any pattern that is particularly useful,
+as it is not a frequent occurrence that two execute/result-fetching operations
+"leapfrog" one another. There is a slight savings of connection pool checkout
+overhead between the two operations, and an implicit sharing of the same
+transactional context, but since there is no explicitly declared transaction,
+this association is short lived.
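
The per-thread association itself can be sketched with the standard library's ``threading.local``. ``ThreadLocalEngine`` below is a hypothetical illustration of the idea, not the real strategy implementation:

```python
import sqlite3
import threading

class ThreadLocalEngine:
    """Sketch: each thread transparently reuses its own DBAPI connection."""
    def __init__(self, creator):
        self._creator = creator
        self._local = threading.local()   # per-thread storage

    def contextual_connect(self):
        # return this thread's connection, creating it on first use
        conn = getattr(self._local, 'connection', None)
        if conn is None:
            conn = self._local.connection = self._creator()
        return conn

engine = ThreadLocalEngine(lambda: sqlite3.connect(':memory:'))
c1 = engine.contextual_connect()
c2 = engine.contextual_connect()   # same thread: the very same connection
```

A second thread calling ``contextual_connect()`` would receive a different connection, which is what lets implicit operations within one thread share a transaction without interfering with other threads.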
+
+The real usage of "threadlocal" comes when we want several operations to occur
+within the scope of a shared transaction. The
+:class:`~sqlalchemy.engine.base.Engine` now has ``begin()``, ``commit()`` and
+``rollback()`` methods which will retrieve a connection resource from the pool
+and establish a new transaction, maintaining the connection against the
+current thread until the transaction is committed or rolled back:
+
+.. sourcecode:: python+sql
+
+ db.begin()
+ try:
+ call_operation1()
+ call_operation2()
+ db.commit()
+ except:
+ db.rollback()
+
+``call_operation1()`` and ``call_operation2()`` can make use of the
+:class:`~sqlalchemy.engine.base.Engine` as a global variable, using the
+"connectionless" execution style, and their operations will participate in the
+same transaction:
+
+.. sourcecode:: python+sql
+
+ def call_operation1():
+        db.execute("insert into users values (?, ?)", 1, "john")
+
+ def call_operation2():
+ users.update(users.c.user_id==5).execute(name='ed')
+
+When using threadlocal, operations that do call upon the ``engine.connect()``
+method will receive a :class:`~sqlalchemy.engine.base.Connection` that is
+**outside** the scope of the transaction. This can be used for operations such
+as logging the status of an operation regardless of transaction success:
+
+.. sourcecode:: python+sql
+
+ db.begin()
+ conn = db.connect()
+ try:
+ conn.execute(log_table.insert(), message="Operation started")
+ call_operation1()
+ call_operation2()
+ db.commit()
+ conn.execute(log_table.insert(), message="Operation succeeded")
+ except:
+ db.rollback()
+ conn.execute(log_table.insert(), message="Operation failed")
+ finally:
+ conn.close()
+
+Functions which are written to use an explicit
+:class:`~sqlalchemy.engine.base.Connection` object, but wish to participate in
+the threadlocal transaction, can receive their
+:class:`~sqlalchemy.engine.base.Connection` object from the
+``contextual_connect()`` method, which returns a
+:class:`~sqlalchemy.engine.base.Connection` that is **inside** the scope of
+the transaction:
+
+.. sourcecode:: python+sql
+
+ conn = db.contextual_connect()
+ call_operation3(conn)
+ conn.close()
+
+Calling ``close()`` on the "contextual" connection does not release the
+connection resources to the pool if other resources are making use of it. A
+resource-counting mechanism is employed so that the connection is released
+back to the pool only when all users of that connection, including the
+transaction established by ``engine.begin()``, have completed.
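
The resource-counting idea can be sketched as follows. ``CountedConnection`` is a hypothetical name; the real mechanism is internal to the engine and pool:

```python
class CountedConnection:
    """Sketch: the raw connection is checked in only when the count hits zero."""
    def __init__(self, pool, raw):
        self._pool = pool
        self._raw = raw
        self.refcount = 1          # e.g. the transaction started by begin()

    def checkout(self):
        self.refcount += 1         # e.g. a contextual connect by another user
        return self

    def close(self):
        self.refcount -= 1
        if self.refcount == 0:
            self._pool.append(self._raw)   # now really returned to the pool

pool = []
conn = CountedConnection(pool, 'raw-dbapi-connection')
conn.checkout()  # a second user of the same connection
conn.close()     # one user done; the connection is still held
conn.close()     # last user done; returned to the pool
```

Each ``close()`` merely decrements the count; only the final one actually releases the DBAPI connection.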
+
+So remember: if you're not sure whether you need ``strategy="threadlocal"``
+or not, the answer is **no**! It's driven by a specific programming pattern
+that is generally not the norm.
+
--- /dev/null
+.. _engines_toplevel:
+
+================
+Database Engines
+================
+
+The **Engine** is the starting point for any SQLAlchemy application. It's
+"home base" for the actual database and its DBAPI, delivered to the SQLAlchemy
+application through a connection pool and a **Dialect**, which describes how
+to talk to a specific kind of database/DBAPI combination.
+
+The general structure is this::
+
+ +-----------+ __________
+ /---| Pool |---\ (__________)
+ +-------------+ / +-----------+ \ +--------+ | |
+ connect() <--| Engine |---x x----| DBAPI |---| database |
+ +-------------+ \ +-----------+ / +--------+ | |
+ \---| Dialect |---/ |__________|
+ +-----------+ (__________)
+
+Where above, a :class:`~sqlalchemy.engine.base.Engine` references both a
+:class:`~sqlalchemy.engine.base.Dialect` and :class:`~sqlalchemy.pool.Pool`,
+which together interpret the DBAPI's module functions as well as the behavior
+of the database.
+
+Creating an engine is just a matter of issuing a single call,
+:func:`create_engine()`::
+
+ engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')
+
+The above engine invokes the ``postgresql`` dialect and a connection pool
+which references ``localhost:5432``.
+
+Note that the appropriate usage of :func:`create_engine()` is once per
+particular configuration, held globally for the lifetime of a single
+application process (not including child processes via ``fork()`` - these
+would require a new engine). A single :class:`~sqlalchemy.engine.base.Engine`
+manages connections on behalf of the process and is intended to be called upon
+in a concurrent fashion. Creating engines for each particular operation is not
+the intended usage.
+
+The engine can be used directly to issue SQL to the database. The most generic
+way is to use connections, which you get via the ``connect()`` method::
+
+ connection = engine.connect()
+ result = connection.execute("select username from users")
+ for row in result:
+ print "username:", row['username']
+ connection.close()
+
+The connection is an instance of :class:`~sqlalchemy.engine.base.Connection`,
+which is a **proxy** object for an actual DBAPI connection. The returned
+result is an instance of :class:`~sqlalchemy.engine.base.ResultProxy`, which acts
+very much like a DBAPI cursor.
+
+When you say ``engine.connect()``, a new
+:class:`~sqlalchemy.engine.base.Connection` object is created, and a DBAPI
+connection is retrieved from the connection pool. Later, when you call
+``connection.close()``, the DBAPI connection is returned to the pool; nothing
+is actually "closed" from the perspective of the database.
+
+To execute some SQL more quickly, you can skip the
+:class:`~sqlalchemy.engine.base.Connection` part and just say::
+
+ result = engine.execute("select username from users")
+ for row in result:
+ print "username:", row['username']
+ result.close()
+
+Where above, the ``execute()`` method on the
+:class:`~sqlalchemy.engine.base.Engine` does the ``connect()`` part for you,
+and returns the :class:`~sqlalchemy.engine.base.ResultProxy` directly. The
+actual :class:`~sqlalchemy.engine.base.Connection` is *inside* the
+:class:`~sqlalchemy.engine.base.ResultProxy`, waiting for you to finish
+reading the result. In this case, when you ``close()`` the
+:class:`~sqlalchemy.engine.base.ResultProxy`, the underlying
+:class:`~sqlalchemy.engine.base.Connection` is closed, which returns the DBAPI
+connection to the pool.
+
+To summarize the above two examples, when you use a
+:class:`~sqlalchemy.engine.base.Connection` object, it's known as **explicit
+execution**. When you don't see the
+:class:`~sqlalchemy.engine.base.Connection` object, but you still use the
+``execute()`` method on the :class:`~sqlalchemy.engine.base.Engine`, it's
+called **explicit, connectionless execution**. A third variant of execution
+also exists called **implicit execution**; this will be described later.
+
+The :class:`~sqlalchemy.engine.base.Engine` and
+:class:`~sqlalchemy.engine.base.Connection` can do a lot more than what we
+illustrated above; SQL strings are only their most rudimentary function. Later
+chapters will describe how "constructed SQL" expressions can be used with
+engines; in many cases, you don't have to deal with the
+:class:`~sqlalchemy.engine.base.Engine` at all after it's created. The Object
+Relational Mapper (ORM), an optional feature of SQLAlchemy, also uses the
+:class:`~sqlalchemy.engine.base.Engine` in order to get at connections; that's
+also a case where you can often create the engine once, and then forget about
+it.
+
+.. _supported_dbapis:
+
+Supported Databases
+====================
+
+SQLAlchemy includes many :class:`~sqlalchemy.engine.base.Dialect` implementations for various
+backends; each is described as its own package in the :ref:`sqlalchemy.dialects_toplevel` package. A
+SQLAlchemy dialect always requires that an appropriate DBAPI driver is installed.
+
+The table below summarizes the state of DBAPI support in SQLAlchemy 0.6. The values
+translate as:
+
+* yes / Python platform - The SQLAlchemy dialect is mostly or fully operational on the target platform.
+* yes / OS platform - The DBAPI supports that platform.
+* no / Python platform - The DBAPI does not support that platform, or there is no SQLAlchemy dialect support.
+* no / OS platform - The DBAPI does not support that platform.
+* partial - the DBAPI is partially usable on the target platform but has major unresolved issues.
+* development - a development version of the dialect exists, but is not yet usable.
+* thirdparty - the dialect itself is maintained by a third party, who should be consulted for
+ information on current support.
+* \* - indicates the given DBAPI is the "default" for SQLAlchemy, i.e. when just the database name is specified
+
+========================= =========================== =========== =========== =========== ================= ============
+Driver Connect string Py2K Py3K Jython Unix Windows
+========================= =========================== =========== =========== =========== ================= ============
+**DB2/Informix IDS**
+ibm-db_ thirdparty thirdparty thirdparty thirdparty thirdparty thirdparty
+**Firebird**
+kinterbasdb_ ``firebird+kinterbasdb``\* yes development no yes yes
+**Informix**
+informixdb_ ``informix+informixdb``\* development development no unknown unknown
+**MaxDB**
+sapdb_ ``maxdb+sapdb``\* development development no yes unknown
+**Microsoft Access**
+pyodbc_ ``access+pyodbc``\* development development no unknown yes
+**Microsoft SQL Server**
+adodbapi_ ``mssql+adodbapi`` development development no no yes
+`jTDS JDBC Driver`_ ``mssql+zxjdbc`` no no development yes yes
+mxodbc_ ``mssql+mxodbc`` yes development no yes with FreeTDS_ yes
+pyodbc_ ``mssql+pyodbc``\* yes development no yes with FreeTDS_ yes
+pymssql_ ``mssql+pymssql`` yes development no yes yes
+**MySQL**
+`MySQL Connector/J`_ ``mysql+zxjdbc`` no no yes yes yes
+`MySQL Connector/Python`_ ``mysql+mysqlconnector`` yes partial no yes yes
+mysql-python_ ``mysql+mysqldb``\* yes development no yes yes
+OurSQL_ ``mysql+oursql`` yes partial no yes yes
+**Oracle**
+cx_oracle_ ``oracle+cx_oracle``\* yes development no yes yes
+`Oracle JDBC Driver`_ ``oracle+zxjdbc`` no no yes yes yes
+**Postgresql**
+pg8000_ ``postgresql+pg8000`` yes yes no yes yes
+`PostgreSQL JDBC Driver`_ ``postgresql+zxjdbc`` no no yes yes yes
+psycopg2_ ``postgresql+psycopg2``\* yes development no yes yes
+pypostgresql_ ``postgresql+pypostgresql`` no yes no yes yes
+**SQLite**
+pysqlite_ ``sqlite+pysqlite``\* yes yes no yes yes
+sqlite3_ ``sqlite+pysqlite``\* yes yes no yes yes
+**Sybase ASE**
+mxodbc_ ``sybase+mxodbc`` development development no yes yes
+pyodbc_ ``sybase+pyodbc``\* partial development no unknown unknown
+python-sybase_ ``sybase+pysybase`` partial development no yes yes
+========================= =========================== =========== =========== =========== ================= ============
+
+.. _psycopg2: http://www.initd.org/
+.. _pg8000: http://pybrary.net/pg8000/
+.. _pypostgresql: http://python.projects.postgresql.org/
+.. _mysql-python: http://sourceforge.net/projects/mysql-python
+.. _MySQL Connector/Python: https://launchpad.net/myconnpy
+.. _OurSQL: http://packages.python.org/oursql/
+.. _PostgreSQL JDBC Driver: http://jdbc.postgresql.org/
+.. _sqlite3: http://docs.python.org/library/sqlite3.html
+.. _pysqlite: http://pypi.python.org/pypi/pysqlite/
+.. _MySQL Connector/J: http://dev.mysql.com/downloads/connector/j/
+.. _cx_Oracle: http://cx-oracle.sourceforge.net/
+.. _Oracle JDBC Driver: http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/index.html
+.. _kinterbasdb: http://firebirdsql.org/index.php?op=devel&sub=python
+.. _pyodbc: http://code.google.com/p/pyodbc/
+.. _mxodbc: http://www.egenix.com/products/python/mxODBC/
+.. _FreeTDS: http://www.freetds.org/
+.. _adodbapi: http://adodbapi.sourceforge.net/
+.. _pymssql: http://code.google.com/p/pymssql/
+.. _jTDS JDBC Driver: http://jtds.sourceforge.net/
+.. _ibm-db: http://code.google.com/p/ibm-db/
+.. _informixdb: http://informixdb.sourceforge.net/
+.. _sapdb: http://www.sapdb.org/sapdbapi.html
+.. _python-sybase: http://python-sybase.sourceforge.net/
+
+Further detail on dialects is available at :ref:`sqlalchemy.dialects_toplevel`,
+as well as additional notes on the wiki at `Database Notes
+<http://www.sqlalchemy.org/trac/wiki/DatabaseNotes>`_.
+
+
+.. _create_engine_args:
+
+Database Engine Options
+========================
+
+Keyword options can also be specified to :func:`~sqlalchemy.create_engine`,
+following the string URL:
+
+.. sourcecode:: python+sql
+
+ db = create_engine('postgresql://...', encoding='latin1', echo=True)
+
+.. autofunction:: sqlalchemy.create_engine
+
+.. autofunction:: sqlalchemy.engine_from_config
+
+Database URLs
+=============
+
+SQLAlchemy indicates the source of an Engine strictly via `RFC-1738
+<http://rfc.net/rfc1738.html>`_ style URLs, combined with optional keyword
+arguments to specify options for the Engine. The form of the URL is::
+
+ dialect+driver://username:password@host:port/database
+
+Dialect names include the identifying name of the SQLAlchemy dialect which
+include ``sqlite``, ``mysql``, ``postgresql``, ``oracle``, ``mssql``, and
+``firebird``. The drivername is the all-lowercase name of the DBAPI to be
+used to connect to the database. If not specified, a "default" DBAPI
+will be imported if available - this default is typically the most widely
+known driver available for that backend (i.e. cx_oracle, pysqlite/sqlite3,
+psycopg2, mysqldb). For Jython connections, specify the `zxjdbc` driver, which
+is the JDBC-DBAPI bridge included with Jython.
+
+.. sourcecode:: python+sql
+
+ # postgresql - psycopg2 is the default driver.
+ pg_db = create_engine('postgresql://scott:tiger@localhost/mydatabase')
+ pg_db = create_engine('postgresql+psycopg2://scott:tiger@localhost/mydatabase')
+ pg_db = create_engine('postgresql+pg8000://scott:tiger@localhost/mydatabase')
+ pg_db = create_engine('postgresql+pypostgresql://scott:tiger@localhost/mydatabase')
+
+ # postgresql on Jython
+ pg_db = create_engine('postgresql+zxjdbc://scott:tiger@localhost/mydatabase')
+
+ # mysql - MySQLdb (mysql-python) is the default driver
+ mysql_db = create_engine('mysql://scott:tiger@localhost/foo')
+ mysql_db = create_engine('mysql+mysqldb://scott:tiger@localhost/foo')
+
+ # mysql on Jython
+ mysql_db = create_engine('mysql+zxjdbc://localhost/foo')
+
+ # mysql with pyodbc (buggy)
+ mysql_db = create_engine('mysql+pyodbc://scott:tiger@some_dsn')
+
+ # oracle - cx_oracle is the default driver
+ oracle_db = create_engine('oracle://scott:tiger@127.0.0.1:1521/sidname')
+
+ # oracle via TNS name
+ oracle_db = create_engine('oracle+cx_oracle://scott:tiger@tnsname')
+
+ # mssql using ODBC datasource names. PyODBC is the default driver.
+ mssql_db = create_engine('mssql://mydsn')
+ mssql_db = create_engine('mssql+pyodbc://mydsn')
+ mssql_db = create_engine('mssql+adodbapi://mydsn')
+ mssql_db = create_engine('mssql+pyodbc://username:password@mydsn')
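
As an illustration only, the components of such a URL can be pulled apart with a regular expression. ``parse_db_url`` below is a hypothetical helper written for this sketch; SQLAlchemy's own URL parsing lives in ``sqlalchemy.engine.url``:

```python
import re

# dialect+driver://username:password@host:port/database
URL_RE = re.compile(
    r'(?P<dialect>\w+)(?:\+(?P<driver>\w+))?://'
    r'(?:(?P<username>[^:@]+)(?::(?P<password>[^@]*))?@)?'
    r'(?P<host>[^:/]*)(?::(?P<port>\d+))?'
    r'(?:/(?P<database>.*))?$')

def parse_db_url(url):
    """Split a dialect+driver URL into its named parts."""
    match = URL_RE.match(url)
    if match is None:
        raise ValueError("not a recognized database URL: %r" % url)
    return match.groupdict()

parts = parse_db_url('postgresql+psycopg2://scott:tiger@localhost:5432/mydatabase')
```

Note how every component other than the dialect is optional, which is why forms as different as ``sqlite:///foo.db`` and ``mssql+pyodbc://mydsn`` are all valid.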
+
+SQLite connects to file based databases. The same URL format is used, omitting
+the hostname, and using the "file" portion as the filename of the database.
+This has the effect of four slashes being present for an absolute file path::
+
+ # sqlite://<nohostname>/<path>
+ # where <path> is relative:
+ sqlite_db = create_engine('sqlite:///foo.db')
+
+ # or absolute, starting with a slash:
+ sqlite_db = create_engine('sqlite:////absolute/path/to/foo.db')
+
+To use a SQLite ``:memory:`` database, specify an empty URL::
+
+ sqlite_memory_db = create_engine('sqlite://')
+
+The :class:`~sqlalchemy.engine.base.Engine` will ask the connection pool for a
+connection when the ``connect()`` or ``execute()`` methods are called. The
+default connection pool, :class:`~sqlalchemy.pool.QueuePool`, as well as the
+default connection pool used with SQLite,
+:class:`~sqlalchemy.pool.SingletonThreadPool`, will open connections to the
+database on an as-needed basis. As concurrent statements are executed,
+:class:`~sqlalchemy.pool.QueuePool` will grow its pool of connections to a
+default size of five, and will allow a default "overflow" of ten. Since the
+:class:`~sqlalchemy.engine.base.Engine` is essentially "home base" for the
+connection pool, it follows that you should keep a single
+:class:`~sqlalchemy.engine.base.Engine` per database established within an
+application, rather than creating a new one for each connection.
+
+.. autoclass:: sqlalchemy.engine.url.URL
+ :members:
+
+Custom DBAPI connect() arguments
+=================================
+
+Custom arguments used when issuing the ``connect()`` call to the underlying
+DBAPI may be passed in three distinct ways. String-based arguments can be
+passed directly from the URL string as query arguments:
+
+.. sourcecode:: python+sql
+
+ db = create_engine('postgresql://scott:tiger@localhost/test?argument1=foo&argument2=bar')
+
+If SQLAlchemy's database connector is aware of a particular query argument, it
+may convert the value from a string to its proper type.
+
+:func:`~sqlalchemy.create_engine` also takes an argument ``connect_args``,
+an additional dictionary that will be passed to ``connect()``. This can be
+used when arguments of a type other than string are required, and
+SQLAlchemy's database connector has no type conversion logic present for
+that parameter:
+
+.. sourcecode:: python+sql
+
+    db = create_engine('postgresql://scott:tiger@localhost/test', connect_args={'argument1': 17, 'argument2': 'bar'})
+
+The most customizable connection method of all is to pass a ``creator``
+argument, which specifies a callable that returns a DBAPI connection:
+
+.. sourcecode:: python+sql
+
+ def connect():
+        return psycopg2.connect(user='scott', host='localhost')
+
+ db = create_engine('postgresql://', creator=connect)
+
+
+
+.. _dbengine_logging:
+
+Configuring Logging
+====================
+
+Python's standard `logging
+<http://www.python.org/doc/lib/module-logging.html>`_ module is used to
+implement informational and debug log output with SQLAlchemy. This allows
+SQLAlchemy's logging to integrate in a standard way with other applications
+and libraries. The ``echo`` and ``echo_pool`` flags that are present on
+:func:`~sqlalchemy.create_engine`, as well as the ``echo_uow`` flag used on
+:class:`~sqlalchemy.orm.session.Session`, all interact with regular loggers.
+
+This section assumes familiarity with the above linked logging module. All
+logging performed by SQLAlchemy exists underneath the ``sqlalchemy``
+namespace, as used by ``logging.getLogger('sqlalchemy')``. When logging has
+been configured (i.e. such as via ``logging.basicConfig()``), the general
+namespace of SA loggers that can be turned on is as follows:
+
+* ``sqlalchemy.engine`` - controls SQL echoing. Set to ``logging.INFO`` for SQL query output, ``logging.DEBUG`` for query + result set output.
+* ``sqlalchemy.dialects`` - controls custom logging for SQL dialects. See the documentation of individual dialects for details.
+* ``sqlalchemy.pool`` - controls connection pool logging. Set to ``logging.INFO`` or lower to log connection pool checkouts/checkins.
+* ``sqlalchemy.orm`` - controls logging of various ORM functions. Set to ``logging.INFO`` for configurational logging as well as unit of work dumps, ``logging.DEBUG`` for extensive logging during query and flush() operations. Subcategories of ``sqlalchemy.orm`` include:
+
+  * ``sqlalchemy.orm.attributes`` - logs certain instrumented attribute operations, such as triggered callables
+  * ``sqlalchemy.orm.mapper`` - logs Mapper configuration and operations
+  * ``sqlalchemy.orm.unitofwork`` - logs flush() operations, including dependency sort graphs and other operations
+  * ``sqlalchemy.orm.strategies`` - logs relationship loader operations (i.e. lazy and eager loads)
+  * ``sqlalchemy.orm.sync`` - logs synchronization of attributes from parent to child instances during a flush()
+
+For example, to log SQL queries as well as unit of work debugging:
+
+.. sourcecode:: python+sql
+
+ import logging
+
+ logging.basicConfig()
+ logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
+ logging.getLogger('sqlalchemy.orm.unitofwork').setLevel(logging.DEBUG)
+
+By default, the log level is set to ``logging.ERROR`` within the entire
+``sqlalchemy`` namespace so that no log operations occur, even within an
+application that has logging enabled otherwise.
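
The hierarchy behind this can be illustrated with the standard library alone: a level set on the parent ``sqlalchemy`` logger is inherited by any child logger that has no level of its own, and turning a child on overrides the inherited level:

```python
import logging

# A level set on the parent logger applies to all children beneath it.
logging.getLogger("sqlalchemy").setLevel(logging.ERROR)

engine_logger = logging.getLogger("sqlalchemy.engine")
# The child has no level of its own, so it inherits ERROR from its parent.
assert engine_logger.getEffectiveLevel() == logging.ERROR

# Setting the child's own level overrides the inherited one.
engine_logger.setLevel(logging.INFO)
assert engine_logger.getEffectiveLevel() == logging.INFO
```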
+
+The ``echo`` flags present as keyword arguments to
+:func:`~sqlalchemy.create_engine` and others as well as the ``echo`` property
+on :class:`~sqlalchemy.engine.base.Engine`, when set to ``True``, will first
+attempt to ensure that logging is enabled. Unfortunately, the ``logging``
+module provides no way of determining if output has already been configured
+(note we are referring to if a logging configuration has been set up, not just
+that the logging level is set). For this reason, any ``echo=True`` flags will
+result in a call to ``logging.basicConfig()`` using sys.stdout as the
+destination. It also sets up a default format using the level name, timestamp,
+and logger name. Note that this configuration has the effect of being
+configured **in addition** to any existing logger configurations. Therefore,
+**when using Python logging, ensure all echo flags are set to False at all
+times**, to avoid getting duplicate log lines.
+
+The logger name of an instance such as an :class:`~sqlalchemy.engine.base.Engine`
+or :class:`~sqlalchemy.pool.Pool` defaults to using a truncated hex identifier
+string. To set this to a specific name, use the ``logging_name`` and
+``pool_logging_name`` keyword arguments with :func:`sqlalchemy.create_engine`.
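
A brief sketch of ``logging_name``, assuming SQLAlchemy is installed; the engine name ``myengine`` is illustrative:

```python
from sqlalchemy import create_engine

# logging_name is appended to the engine's logger name in place of the
# default truncated hex identifier.
engine = create_engine("sqlite://", logging_name="myengine")
assert "myengine" in engine.logger.name
```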
+
+.. _core_toplevel:
+
SQLAlchemy Core
===============
tutorial
expression_api
engines
- pooling
connections
+ pooling
schema
types
+ interfaces
compiler
\ No newline at end of file
--- /dev/null
+.. currentmodule:: sqlalchemy.interfaces
+
+.. _interfaces_core_toplevel:
+
+
+Core Event Interfaces
+=====================
+
+This section describes the various categories of events which can be intercepted
+in SQLAlchemy core, including execution and connection pool events.
+
+For ORM event documentation, see :ref:`interfaces_orm_toplevel`.
+
+A new version of this API with a significantly more flexible and consistent
+interface will be available in version 0.7.
+
+Execution, Connection and Cursor Events
+---------------------------------------
+
+.. autoclass:: ConnectionProxy
+ :members:
+ :undoc-members:
+
+Connection Pool Events
+----------------------
+
+.. autoclass:: PoolListener
+ :members:
+ :undoc-members:
+
+
.. _metadata_toplevel:
-==================
-Database Meta Data
-==================
+==========================
+Schema Definition Language
+==========================
+
+.. module:: sqlalchemy.schema
Describing Databases with MetaData
==================================
-The core of SQLAlchemy's query and object mapping operations are supported by *database metadata*, which is comprised of Python objects that describe tables and other schema-level objects. These objects are at the core of three major types of operations - issuing CREATE and DROP statements (known as *DDL*), constructing SQL queries, and expressing information about structures that already exist within the database.
-
-Database metadata can be expressed by explicitly naming the various components and their properties, using constructs such as :class:`~sqlalchemy.schema.Table`, :class:`~sqlalchemy.schema.Column`, :class:`~sqlalchemy.schema.ForeignKey` and :class:`~sqlalchemy.schema.Sequence`, all of which are imported from the ``sqlalchemy.schema`` package. It can also be generated by SQLAlchemy using a process called *reflection*, which means you start with a single object such as :class:`~sqlalchemy.schema.Table`, assign it a name, and then instruct SQLAlchemy to load all the additional information related to that name from a particular engine source.
-
-A key feature of SQLAlchemy's database metadata constructs is that they are designed to be used in a *declarative* style which closely resembles that of real DDL. They are therefore most intuitive to those who have some background in creating real schema generation scripts.
-
-A collection of metadata entities is stored in an object aptly named :class:`~sqlalchemy.schema.MetaData`::
+The core of SQLAlchemy's query and object mapping operations are supported by
+*database metadata*, which is comprised of Python objects that describe tables
+and other schema-level objects. These objects are at the core of three major
+types of operations - issuing CREATE and DROP statements (known as *DDL*),
+constructing SQL queries, and expressing information about structures that
+already exist within the database.
+
+Database metadata can be expressed by explicitly naming the various components
+and their properties, using constructs such as
+:class:`~sqlalchemy.schema.Table`, :class:`~sqlalchemy.schema.Column`,
+:class:`~sqlalchemy.schema.ForeignKey` and
+:class:`~sqlalchemy.schema.Sequence`, all of which are imported from the
+``sqlalchemy.schema`` package. It can also be generated by SQLAlchemy using a
+process called *reflection*, which means you start with a single object such
+as :class:`~sqlalchemy.schema.Table`, assign it a name, and then instruct
+SQLAlchemy to load all the additional information related to that name from a
+particular engine source.
+
+A key feature of SQLAlchemy's database metadata constructs is that they are
+designed to be used in a *declarative* style which closely resembles that of
+real DDL. They are therefore most intuitive to those who have some background
+in creating real schema generation scripts.
+
+A collection of metadata entities is stored in an object aptly named
+:class:`~sqlalchemy.schema.MetaData`::
from sqlalchemy import *
metadata = MetaData()
-:class:`~sqlalchemy.schema.MetaData` is a container object that keeps together many different features of a database (or multiple databases) being described.
+:class:`~sqlalchemy.schema.MetaData` is a container object that keeps together
+many different features of a database (or multiple databases) being described.
-To represent a table, use the :class:`~sqlalchemy.schema.Table` class. Its two primary arguments are the table name, then the :class:`~sqlalchemy.schema.MetaData` object which it will be associated with. The remaining positional arguments are mostly :class:`~sqlalchemy.schema.Column` objects describing each column::
+To represent a table, use the :class:`~sqlalchemy.schema.Table` class. Its two
+primary arguments are the table name, then the
+:class:`~sqlalchemy.schema.MetaData` object which it will be associated with.
+The remaining positional arguments are mostly
+:class:`~sqlalchemy.schema.Column` objects describing each column::
user = Table('user', metadata,
Column('user_id', Integer, primary_key = True),
Column('password', String(20), nullable = False)
)
-Above, a table called ``user`` is described, which contains four columns. The primary key of the table consists of the ``user_id`` column. Multiple columns may be assigned the ``primary_key=True`` flag which denotes a multi-column primary key, known as a *composite* primary key.
+Above, a table called ``user`` is described, which contains four columns. The
+primary key of the table consists of the ``user_id`` column. Multiple columns
+may be assigned the ``primary_key=True`` flag which denotes a multi-column
+primary key, known as a *composite* primary key.
-Note also that each column describes its datatype using objects corresponding to genericized types, such as :class:`~sqlalchemy.types.Integer` and :class:`~sqlalchemy.types.String`. SQLAlchemy features dozens of types of varying levels of specificity as well as the ability to create custom types. Documentation on the type system can be found at :ref:`types`.
+Note also that each column describes its datatype using objects corresponding
+to genericized types, such as :class:`~sqlalchemy.types.Integer` and
+:class:`~sqlalchemy.types.String`. SQLAlchemy features dozens of types of
+varying levels of specificity as well as the ability to create custom types.
+Documentation on the type system can be found at :ref:`types`.
Accessing Tables and Columns
----------------------------
-The :class:`~sqlalchemy.schema.MetaData` object contains all of the schema constructs we've associated with it. It supports a few methods of accessing these table objects, such as the ``sorted_tables`` accessor which returns a list of each :class:`~sqlalchemy.schema.Table` object in order of foreign key dependency (that is, each table is preceded by all tables which it references)::
+The :class:`~sqlalchemy.schema.MetaData` object contains all of the schema
+constructs we've associated with it. It supports a few methods of accessing
+these table objects, such as the ``sorted_tables`` accessor which returns a
+list of each :class:`~sqlalchemy.schema.Table` object in order of foreign key
+dependency (that is, each table is preceded by all tables which it
+references)::
>>> for t in metadata.sorted_tables:
... print t.name
invoice
invoice_item
-In most cases, individual :class:`~sqlalchemy.schema.Table` objects have been explicitly declared, and these objects are typically accessed directly as module-level variables in an application.
-Once a :class:`~sqlalchemy.schema.Table` has been defined, it has a full set of accessors which allow inspection of its properties. Given the following :class:`~sqlalchemy.schema.Table` definition::
+In most cases, individual :class:`~sqlalchemy.schema.Table` objects have been
+explicitly declared, and these objects are typically accessed directly as
+module-level variables in an application. Once a
+:class:`~sqlalchemy.schema.Table` has been defined, it has a full set of
+accessors which allow inspection of its properties. Given the following
+:class:`~sqlalchemy.schema.Table` definition::
employees = Table('employees', metadata,
Column('employee_id', Integer, primary_key=True),
Column('employee_dept', Integer, ForeignKey("departments.department_id"))
)
-Note the :class:`~sqlalchemy.schema.ForeignKey` object used in this table - this construct defines a reference to a remote table, and is fully described in :ref:`metadata_foreignkeys`. Methods of accessing information about this table include::
+Note the :class:`~sqlalchemy.schema.ForeignKey` object used in this table -
+this construct defines a reference to a remote table, and is fully described
+in :ref:`metadata_foreignkeys`. Methods of accessing information about this
+table include::
# access the column "employee_id":
employees.columns.employee_id
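
These accessors can be sketched in a self-contained form, assuming SQLAlchemy is installed; the table names mirror the example above:

```python
from sqlalchemy import MetaData, Table, Column, Integer, ForeignKey

metadata = MetaData()
departments = Table('departments', metadata,
    Column('department_id', Integer, primary_key=True))
employees = Table('employees', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('employee_dept', Integer, ForeignKey("departments.department_id")))

# .c is a shorthand synonym for .columns
assert employees.c.employee_id is employees.columns.employee_id
# the primary key constraint is an iterable of Column objects
assert [col.name for col in employees.primary_key] == ['employee_id']
```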
Creating and Dropping Database Tables
-------------------------------------
-Once you've defined some :class:`~sqlalchemy.schema.Table` objects, assuming you're working with a brand new database one thing you might want to do is issue CREATE statements for those tables and their related constructs (as an aside, it's also quite possible that you *don't* want to do this, if you already have some preferred methodology such as tools included with your database or an existing scripting system - if that's the case, feel free to skip this section - SQLAlchemy has no requirement that it be used to create your tables).
-
-The usual way to issue CREATE is to use :func:`~sqlalchemy.schema.MetaData.create_all` on the :class:`~sqlalchemy.schema.MetaData` object. This method will issue queries that first check for the existence of each individual table, and if not found will issue the CREATE statements:
+Once you've defined some :class:`~sqlalchemy.schema.Table` objects, and
+assuming you're working with a brand new database, one thing you might want to
+do is issue CREATE statements for those tables and their related constructs
+(as an aside, it's also quite possible that you *don't* want to do this, if
+you already have some preferred methodology such as tools included with your
+database or an existing scripting system - if that's the case, feel free to
+skip this section - SQLAlchemy has no requirement that it be used to create
+your tables).
+
+The usual way to issue CREATE is to use
+:func:`~sqlalchemy.schema.MetaData.create_all` on the
+:class:`~sqlalchemy.schema.MetaData` object. This method will issue queries
+that first check for the existence of each individual table, and if not found
+will issue the CREATE statements:
.. sourcecode:: python+sql
pref_value VARCHAR(100)
)
-:func:`~sqlalchemy.schema.MetaData.create_all` creates foreign key constraints between tables usually inline with the table definition itself, and for this reason it also generates the tables in order of their dependency. There are options to change this behavior such that ``ALTER TABLE`` is used instead.
+:func:`~sqlalchemy.schema.MetaData.create_all` creates foreign key constraints
+between tables usually inline with the table definition itself, and for this
+reason it also generates the tables in order of their dependency. There are
+options to change this behavior such that ``ALTER TABLE`` is used instead.
-Dropping all tables is similarly achieved using the :func:`~sqlalchemy.schema.MetaData.drop_all` method. This method does the exact opposite of :func:`~sqlalchemy.schema.MetaData.create_all` - the presence of each table is checked first, and tables are dropped in reverse order of dependency.
+Dropping all tables is similarly achieved using the
+:func:`~sqlalchemy.schema.MetaData.drop_all` method. This method does the
+exact opposite of :func:`~sqlalchemy.schema.MetaData.create_all` - the
+presence of each table is checked first, and tables are dropped in reverse
+order of dependency.
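
The round trip can be sketched against an in-memory SQLite database, assuming SQLAlchemy is installed; the table name is illustrative:

```python
from sqlalchemy import MetaData, Table, Column, Integer, create_engine, inspect

engine = create_engine('sqlite://')
meta = MetaData()
invoice = Table('invoice', meta, Column('id', Integer, primary_key=True))

# create_all checks for each table first, then emits CREATE statements
meta.create_all(engine)
assert 'invoice' in inspect(engine).get_table_names()

# drop_all drops tables in reverse order of dependency
meta.drop_all(engine)
assert 'invoice' not in inspect(engine).get_table_names()
```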
-Creating and dropping individual tables can be done via the ``create()`` and ``drop()`` methods of :class:`~sqlalchemy.schema.Table`. These methods by default issue the CREATE or DROP regardless of the table being present:
+Creating and dropping individual tables can be done via the ``create()`` and
+``drop()`` methods of :class:`~sqlalchemy.schema.Table`. These methods by
+default issue the CREATE or DROP regardless of the table being present:
.. sourcecode:: python+sql
DROP TABLE employees
{}
-To enable the "check first for the table existing" logic, add the ``checkfirst=True`` argument to ``create()`` or ``drop()``::
+To enable the "check first for the table existing" logic, add the
+``checkfirst=True`` argument to ``create()`` or ``drop()``::
employees.create(engine, checkfirst=True)
employees.drop(engine, checkfirst=False)
Binding MetaData to an Engine or Connection
--------------------------------------------
-Notice in the previous section the creator/dropper methods accept an argument for the database engine in use. When a schema construct is combined with an :class:`~sqlalchemy.engine.base.Engine` object, or an individual :class:`~sqlalchemy.engine.base.Connection` object, we call this the *bind*. In the above examples the bind is associated with the schema construct only for the duration of the operation. However, the option exists to persistently associate a bind with a set of schema constructs via the :class:`~sqlalchemy.schema.MetaData` object's ``bind`` attribute::
+Notice in the previous section the creator/dropper methods accept an argument
+for the database engine in use. When a schema construct is combined with an
+:class:`~sqlalchemy.engine.base.Engine` object, or an individual
+:class:`~sqlalchemy.engine.base.Connection` object, we call this the *bind*.
+In the above examples the bind is associated with the schema construct only
+for the duration of the operation. However, the option exists to persistently
+associate a bind with a set of schema constructs via the
+:class:`~sqlalchemy.schema.MetaData` object's ``bind`` attribute::
engine = create_engine('sqlite://')
# bind to an engine
meta.bind = engine
-We can now call methods like :func:`~sqlalchemy.schema.MetaData.create_all` without needing to pass the :class:`~sqlalchemy.engine.base.Engine`::
+We can now call methods like :func:`~sqlalchemy.schema.MetaData.create_all`
+without needing to pass the :class:`~sqlalchemy.engine.base.Engine`::
meta.create_all()
-The MetaData's bind is used for anything that requires an active connection, such as loading the definition of a table from the database automatically (called *reflection*)::
+The MetaData's bind is used for anything that requires an active connection,
+such as loading the definition of a table from the database automatically
+(called *reflection*)::
# describe a table called 'users', query the database for its columns
users_table = Table('users', meta, autoload=True)
# generate a SELECT statement and execute
result = users_table.select().execute()
-Binding the MetaData to the Engine is a **completely optional** feature. The above operations can be achieved without the persistent bind using parameters::
+Binding the MetaData to the Engine is a **completely optional** feature. The
+above operations can be achieved without the persistent bind using
+parameters::
# describe a table called 'users', query the database for its columns
users_table = Table('users', meta, autoload=True, autoload_with=engine)
# generate a SELECT statement and execute
result = engine.execute(users_table.select())
-Should you use bind ? It's probably best to start without it, and wait for a specific need to arise. Bind is useful if:
+Should you use bind? It's probably best to start without it, and wait for a
+specific need to arise. Bind is useful if:
+
+* You aren't using the ORM, are usually using "connectionless" execution, and
+ find yourself constantly needing to specify the same
+ :class:`~sqlalchemy.engine.base.Engine` object throughout the entire
+ application. Bind can be used here to provide "implicit" execution.
+* Your application has multiple schemas that correspond to different engines.
+ Using one :class:`~sqlalchemy.schema.MetaData` for each schema, bound to
+ each engine, provides a decent place to delineate between the schemas. The
+ ORM will also integrate with this approach, where the :class:`Session` will
+ naturally use the engine that is bound to each table via its metadata
+ (provided the :class:`Session` itself has no ``bind`` configured).
+
+Alternatively, the ``bind`` attribute of :class:`~sqlalchemy.schema.MetaData`
+is *confusing* if:
+
+* Your application talks to multiple database engines at different times,
+ which use the *same* set of :class:`Table` objects. It's usually confusing
+ and unnecessary to begin to create "copies" of :class:`Table` objects just
+ so that different engines can be used for different operations. An example
+ is an application that writes data to a "master" database while performing
+ read-only operations from a "read slave". A global
+ :class:`~sqlalchemy.schema.MetaData` object is *not* appropriate for
+ per-request switching like this, although a
+ :class:`~sqlalchemy.schema.ThreadLocalMetaData` object is.
+* You are using the ORM :class:`Session` to handle which class/table is bound
+ to which engine, or you are using the :class:`Session` to manage switching
+ between engines. It's a good idea to keep the "binding of tables to engines"
+ in one place - either using :class:`~sqlalchemy.schema.MetaData` only (the
+ :class:`Session` can of course be present, it just has no ``bind``
+ configured), or using :class:`Session` only (the ``bind`` attribute of
+ :class:`~sqlalchemy.schema.MetaData` is left empty).
+
+Specifying the Schema Name
+---------------------------
+
+Some databases support the concept of multiple schemas. A
+:class:`~sqlalchemy.schema.Table` can reference this by specifying the
+``schema`` keyword argument::
+
+ financial_info = Table('financial_info', meta,
+ Column('id', Integer, primary_key=True),
+ Column('value', String(100), nullable=False),
+ schema='remote_banks'
+ )
+
+Within the :class:`~sqlalchemy.schema.MetaData` collection, this table will be
+identified by the combination of ``financial_info`` and ``remote_banks``. If
+another table called ``financial_info`` is referenced without the
+``remote_banks`` schema, it will refer to a different
+:class:`~sqlalchemy.schema.Table`. :class:`~sqlalchemy.schema.ForeignKey`
+objects can specify references to columns in this table using the form
+``remote_banks.financial_info.id``.
+
+The ``schema`` argument should be used for any name qualifiers required,
+including Oracle's "owner" attribute and similar. It can also accommodate a
+dotted name for longer schemes::
+
+ schema="dbo.scott"
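
The combined identification can be sketched as follows, assuming SQLAlchemy is installed; the table's key within the :class:`~sqlalchemy.schema.MetaData` collection combines the schema and table name:

```python
from sqlalchemy import MetaData, Table, Column, Integer, String

meta = MetaData()
financial_info = Table('financial_info', meta,
    Column('id', Integer, primary_key=True),
    Column('value', String(100), nullable=False),
    schema='remote_banks'
)
# the table is keyed under "schema.name" within the MetaData collection
assert financial_info.key == 'remote_banks.financial_info'
assert 'remote_banks.financial_info' in meta.tables
```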
+
+Backend-Specific Options
+------------------------
+
+:class:`~sqlalchemy.schema.Table` supports database-specific options. For
+example, MySQL has different table backend types, including "MyISAM" and
+"InnoDB". This can be expressed with :class:`~sqlalchemy.schema.Table` using
+``mysql_engine``::
-* You aren't using the ORM, are usually using "connectionless" execution, and find yourself constantly needing to specify the same :class:`~sqlalchemy.engine.base.Engine` object throughout the entire application. Bind can be used here to provide "implicit" execution.
-* Your application has multiple schemas that correspond to different engines. Using one :class:`~sqlalchemy.schema.MetaData` for each schema, bound to each engine, provides a decent place to delineate between the schemas. The ORM will also integrate with this approach, where the :class:`Session` will naturally use the engine that is bound to each table via its metadata (provided the :class:`Session` itself has no ``bind`` configured.).
+ addresses = Table('engine_email_addresses', meta,
+ Column('address_id', Integer, primary_key = True),
+ Column('remote_user_id', Integer, ForeignKey(users.c.user_id)),
+ Column('email_address', String(20)),
+ mysql_engine='InnoDB'
+ )
-Alternatively, the ``bind`` attribute of :class:`~sqlalchemy.schema.MetaData` is *confusing* if:
+Other backends may support table-level options as well.
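
As a sketch of another dialect's table-level option (assuming SQLAlchemy is installed, and using ``sqlite_autoincrement`` analogously to ``mysql_engine`` above), dialect options are retained on the :class:`~sqlalchemy.schema.Table` in its ``kwargs`` collection:

```python
from sqlalchemy import MetaData, Table, Column, Integer

meta = MetaData()
notes = Table('notes', meta,
    Column('id', Integer, primary_key=True),
    sqlite_autoincrement=True
)
# dialect-specific keyword arguments are stored on the Table
assert notes.kwargs['sqlite_autoincrement'] is True
```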
-* Your application talks to multiple database engines at different times, which use the *same* set of :class:`Table` objects. It's usually confusing and unnecessary to begin to create "copies" of :class:`Table` objects just so that different engines can be used for different operations. An example is an application that writes data to a "master" database while performing read-only operations from a "read slave". A global :class:`~sqlalchemy.schema.MetaData` object is *not* appropriate for per-request switching like this, although a :class:`~sqlalchemy.schema.ThreadLocalMetaData` object is.
-* You are using the ORM :class:`Session` to handle which class/table is bound to which engine, or you are using the :class:`Session` to manage switching between engines. Its a good idea to keep the "binding of tables to engines" in one place - either using :class:`~sqlalchemy.schema.MetaData` only (the :class:`Session` can of course be present, it just has no ``bind`` configured), or using :class:`Session` only (the ``bind`` attribute of :class:`~sqlalchemy.schema.MetaData` is left empty).
+API Constructs
+--------------
+
+.. autoclass:: Column
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. autoclass:: MetaData
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. autoclass:: Table
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. autoclass:: ThreadLocalMetaData
+ :members:
+ :undoc-members:
+ :show-inheritance:
.. _metadata_reflection:
-Reflecting Tables
------------------
+Reflecting Database Objects
+===========================
-A :class:`~sqlalchemy.schema.Table` object can be instructed to load information about itself from the corresponding database schema object already existing within the database. This process is called *reflection*. Most simply you need only specify the table name, a :class:`~sqlalchemy.schema.MetaData` object, and the ``autoload=True`` flag. If the :class:`~sqlalchemy.schema.MetaData` is not persistently bound, also add the ``autoload_with`` argument::
+A :class:`~sqlalchemy.schema.Table` object can be instructed to load
+information about itself from the corresponding database schema object already
+existing within the database. This process is called *reflection*. Most simply
+you need only specify the table name, a :class:`~sqlalchemy.schema.MetaData`
+object, and the ``autoload=True`` flag. If the
+:class:`~sqlalchemy.schema.MetaData` is not persistently bound, also add the
+``autoload_with`` argument::
>>> messages = Table('messages', meta, autoload=True, autoload_with=engine)
>>> [c.name for c in messages.columns]
['message_id', 'message_name', 'date']
-The above operation will use the given engine to query the database for information about the ``messages`` table, and will then generate :class:`~sqlalchemy.schema.Column`, :class:`~sqlalchemy.schema.ForeignKey`, and other objects corresponding to this information as though the :class:`~sqlalchemy.schema.Table` object were hand-constructed in Python.
+The above operation will use the given engine to query the database for
+information about the ``messages`` table, and will then generate
+:class:`~sqlalchemy.schema.Column`, :class:`~sqlalchemy.schema.ForeignKey`,
+and other objects corresponding to this information as though the
+:class:`~sqlalchemy.schema.Table` object were hand-constructed in Python.
-When tables are reflected, if a given table references another one via foreign key, a second :class:`~sqlalchemy.schema.Table` object is created within the :class:`~sqlalchemy.schema.MetaData` object representing the connection. Below, assume the table ``shopping_cart_items`` references a table named ``shopping_carts``. Reflecting the ``shopping_cart_items`` table has the effect such that the ``shopping_carts`` table will also be loaded::
+When tables are reflected, if a given table references another one via foreign
+key, a second :class:`~sqlalchemy.schema.Table` object is created within the
+:class:`~sqlalchemy.schema.MetaData` object representing the connection.
+Below, assume the table ``shopping_cart_items`` references a table named
+``shopping_carts``. Reflecting the ``shopping_cart_items`` table has the
+effect that the ``shopping_carts`` table will also be loaded::
>>> shopping_cart_items = Table('shopping_cart_items', meta, autoload=True, autoload_with=engine)
>>> 'shopping_carts' in meta.tables
True
-The :class:`~sqlalchemy.schema.MetaData` has an interesting "singleton-like" behavior such that if you requested both tables individually, :class:`~sqlalchemy.schema.MetaData` will ensure that exactly one :class:`~sqlalchemy.schema.Table` object is created for each distinct table name. The :class:`~sqlalchemy.schema.Table` constructor actually returns to you the already-existing :class:`~sqlalchemy.schema.Table` object if one already exists with the given name. Such as below, we can access the already generated ``shopping_carts`` table just by naming it::
+The :class:`~sqlalchemy.schema.MetaData` has an interesting "singleton-like"
+behavior such that if you requested both tables individually,
+:class:`~sqlalchemy.schema.MetaData` will ensure that exactly one
+:class:`~sqlalchemy.schema.Table` object is created for each distinct table
+name. The :class:`~sqlalchemy.schema.Table` constructor actually returns to
+you the already-existing :class:`~sqlalchemy.schema.Table` object if one
+already exists with the given name. For example, below we can access the
+already generated ``shopping_carts`` table just by naming it::
shopping_carts = Table('shopping_carts', meta)
-Of course, it's a good idea to use ``autoload=True`` with the above table regardless. This is so that the table's attributes will be loaded if they have not been already. The autoload operation only occurs for the table if it hasn't already been loaded; once loaded, new calls to :class:`~sqlalchemy.schema.Table` with the same name will not re-issue any reflection queries.
+Of course, it's a good idea to use ``autoload=True`` with the above table
+regardless. This is so that the table's attributes will be loaded if they have
+not been already. The autoload operation only occurs for the table if it
+hasn't already been loaded; once loaded, new calls to
+:class:`~sqlalchemy.schema.Table` with the same name will not re-issue any
+reflection queries.
Overriding Reflected Columns
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------
-Individual columns can be overridden with explicit values when reflecting tables; this is handy for specifying custom datatypes, constraints such as primary keys that may not be configured within the database, etc.::
+Individual columns can be overridden with explicit values when reflecting
+tables; this is handy for specifying custom datatypes, constraints such as
+primary keys that may not be configured within the database, etc.::
>>> mytable = Table('mytable', meta,
... Column('id', Integer, primary_key=True), # override reflected 'id' to have primary key
... autoload=True)
Reflecting Views
-~~~~~~~~~~~~~~~~
+-----------------
-The reflection system can also reflect views. Basic usage is the same as that of a table::
+The reflection system can also reflect views. Basic usage is the same as that
+of a table::
my_view = Table("some_view", metadata, autoload=True)
-Above, ``my_view`` is a :class:`~sqlalchemy.schema.Table` object with :class:`~sqlalchemy.schema.Column` objects representing the names and types
-of each column within the view "some_view".
+Above, ``my_view`` is a :class:`~sqlalchemy.schema.Table` object with
+:class:`~sqlalchemy.schema.Column` objects representing the names and types of
+each column within the view "some_view".
-Usually, it's desired to have at least a primary key constraint when reflecting a view, if not
-foreign keys as well. View reflection doesn't extrapolate these constraints.
+Usually, it's desired to have at least a primary key constraint when
+reflecting a view, if not foreign keys as well. View reflection doesn't
+extrapolate these constraints.
Use the "override" technique for this, specifying explicitly those columns
which are part of the primary key or have foreign key constraints::
)
Reflecting All Tables at Once
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------
-The :class:`~sqlalchemy.schema.MetaData` object can also get a listing of tables and reflect the full set. This is achieved by using the :func:`~sqlalchemy.schema.MetaData.reflect` method. After calling it, all located tables are present within the :class:`~sqlalchemy.schema.MetaData` object's dictionary of tables::
+The :class:`~sqlalchemy.schema.MetaData` object can also get a listing of
+tables and reflect the full set. This is achieved by using the
+:func:`~sqlalchemy.schema.MetaData.reflect` method. After calling it, all
+located tables are present within the :class:`~sqlalchemy.schema.MetaData`
+object's dictionary of tables::
meta = MetaData()
meta.reflect(bind=someengine)
someengine.execute(table.delete())
Fine Grained Reflection with Inspector
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------------------
-A low level interface which provides a backend-agnostic system of loading lists of schema, table, column, and constraint descriptions from a given database is also available. This is known as the "Inspector" and is described in the API documentation at :ref:`inspector_api_toplevel`.
+A low level interface which provides a backend-agnostic system of loading
+lists of schema, table, column, and constraint descriptions from a given
+database is also available. This is known as the "Inspector"::
-Specifying the Schema Name
----------------------------
-
-Some databases support the concept of multiple schemas. A :class:`~sqlalchemy.schema.Table` can reference this by specifying the ``schema`` keyword argument::
+ from sqlalchemy import create_engine
+ from sqlalchemy.engine import reflection
+ engine = create_engine('...')
+ insp = reflection.Inspector.from_engine(engine)
+ print insp.get_table_names()
- financial_info = Table('financial_info', meta,
- Column('id', Integer, primary_key=True),
- Column('value', String(100), nullable=False),
- schema='remote_banks'
- )
-
-Within the :class:`~sqlalchemy.schema.MetaData` collection, this table will be identified by the combination of ``financial_info`` and ``remote_banks``. If another table called ``financial_info`` is referenced without the ``remote_banks`` schema, it will refer to a different :class:`~sqlalchemy.schema.Table`. :class:`~sqlalchemy.schema.ForeignKey` objects can specify references to columns in this table using the form ``remote_banks.financial_info.id``.
-
-The ``schema`` argument should be used for any name qualifiers required, including Oracle's "owner" attribute and similar. It also can accommodate a dotted name for longer schemes::
-
- schema="dbo.scott"
-
-Backend-Specific Options
-------------------------
-
-:class:`~sqlalchemy.schema.Table` supports database-specific options. For example, MySQL has different table backend types, including "MyISAM" and "InnoDB". This can be expressed with :class:`~sqlalchemy.schema.Table` using ``mysql_engine``::
-
- addresses = Table('engine_email_addresses', meta,
- Column('address_id', Integer, primary_key = True),
- Column('remote_user_id', Integer, ForeignKey(users.c.user_id)),
- Column('email_address', String(20)),
- mysql_engine='InnoDB'
- )
-
-Other backends may support table-level options as well. See the API documentation for each backend for further details.
+.. autoclass:: sqlalchemy.engine.reflection.Inspector
+ :members:
+ :undoc-members:
+ :show-inheritance:
Column Insert/Update Defaults
==============================
-SQLAlchemy provides a very rich featureset regarding column level events which take place during INSERT and UPDATE statements. Options include:
+SQLAlchemy provides a very rich feature set regarding column level events which
+take place during INSERT and UPDATE statements. Options include:
* Scalar values used as defaults during INSERT and UPDATE operations
* Python functions which execute upon INSERT and UPDATE operations
* Server side default values used during INSERT
* Markers for server-side triggers used during UPDATE
-The general rule for all insert/update defaults is that they only take effect if no value for a particular column is passed as an ``execute()`` parameter; otherwise, the given value is used.
+The general rule for all insert/update defaults is that they only take effect
+if no value for a particular column is passed as an ``execute()`` parameter;
+otherwise, the given value is used.
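The rule can be sketched in plain Python (an illustrative model, not SQLAlchemy's implementation): a default applies only when the parameters omit the column entirely.

```python
# Sketch of the insert/update default rule: a default fires only when
# no value for that column appears in the execute() parameters.
def apply_defaults(params, defaults):
    row = dict(params)
    for col, default in defaults.items():
        if col not in row:  # no value passed -> default takes effect
            row[col] = default() if callable(default) else default
    return row

row = apply_defaults({"name": "ed"}, {"counter": 12})
assert row == {"name": "ed", "counter": 12}

row = apply_defaults({"name": "ed", "counter": 5}, {"counter": 12})
assert row["counter"] == 5  # an explicitly given value always wins
```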
Scalar Defaults
---------------
Column("somecolumn", Integer, default=12)
)
-Above, the value "12" will be bound as the column value during an INSERT if no other value is supplied.
+Above, the value "12" will be bound as the column value during an INSERT if no
+other value is supplied.
-A scalar value may also be associated with an UPDATE statement, though this is not very common (as UPDATE statements are usually looking for dynamic defaults)::
+A scalar value may also be associated with an UPDATE statement, though this is
+not very common (as UPDATE statements are usually looking for dynamic
+defaults)::
Table("mytable", meta,
Column("somecolumn", Integer, onupdate=25)
Python-Executed Functions
-------------------------
-The ``default`` and ``onupdate`` keyword arguments also accept Python functions. These functions are invoked at the time of insert or update if no other value for that column is supplied, and the value returned is used for the column's value. Below illustrates a crude "sequence" that assigns an incrementing counter to a primary key column::
+The ``default`` and ``onupdate`` keyword arguments also accept Python
+functions. These functions are invoked at the time of insert or update if no
+other value for that column is supplied, and the value returned is used for
+the column's value. Below illustrates a crude "sequence" that assigns an
+incrementing counter to a primary key column::
# a function which counts upwards
i = 0
Column('id', Integer, primary_key=True, default=mydefault),
)
-It should be noted that for real "incrementing sequence" behavior, the built-in capabilities of the database should normally be used, which may include sequence objects or other autoincrementing capabilities. For primary key columns, SQLAlchemy will in most cases use these capabilities automatically. See the API documentation for :class:`~sqlalchemy.schema.Column` including the ``autoincrement`` flag, as well as the section on :class:`~sqlalchemy.schema.Sequence` later in this chapter for background on standard primary key generation techniques.
+It should be noted that for real "incrementing sequence" behavior, the
+built-in capabilities of the database should normally be used, which may
+include sequence objects or other autoincrementing capabilities. For primary
+key columns, SQLAlchemy will in most cases use these capabilities
+automatically. See the API documentation for
+:class:`~sqlalchemy.schema.Column` including the ``autoincrement`` flag, as
+well as the section on :class:`~sqlalchemy.schema.Sequence` later in this
+chapter for background on standard primary key generation techniques.
-To illustrate onupdate, we assign the Python ``datetime`` function ``now`` to the ``onupdate`` attribute::
+To illustrate onupdate, we assign the Python ``datetime`` function ``now`` to
+the ``onupdate`` attribute::
import datetime
Column('last_updated', DateTime, onupdate=datetime.datetime.now),
)
-When an update statement executes and no value is passed for ``last_updated``, the ``datetime.datetime.now()`` Python function is executed and its return value used as the value for ``last_updated``. Notice that we provide ``now`` as the function itself without calling it (i.e. there are no parenthesis following) - SQLAlchemy will execute the function at the time the statement executes.
+When an update statement executes and no value is passed for ``last_updated``,
+the ``datetime.datetime.now()`` Python function is executed and its return
+value used as the value for ``last_updated``. Notice that we provide ``now``
+as the function itself without calling it (i.e. there are no parentheses
+following) - SQLAlchemy will execute the function at the time the statement
+executes.
Context-Sensitive Default Functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The Python functions used by ``default`` and ``onupdate`` may also make use of the current statement's context in order to determine a value. The `context` of a statement is an internal SQLAlchemy object which contains all information about the statement being executed, including its source expression, the parameters associated with it and the cursor. The typical use case for this context with regards to default generation is to have access to the other values being inserted or updated on the row. To access the context, provide a function that accepts a single ``context`` argument::
+The Python functions used by ``default`` and ``onupdate`` may also make use of
+the current statement's context in order to determine a value. The `context`
+of a statement is an internal SQLAlchemy object which contains all information
+about the statement being executed, including its source expression, the
+parameters associated with it and the cursor. The typical use case for this
+context with regards to default generation is to have access to the other
+values being inserted or updated on the row. To access the context, provide a
+function that accepts a single ``context`` argument::
def mydefault(context):
return context.current_parameters['counter'] + 12
Column('counter_plus_twelve', Integer, default=mydefault, onupdate=mydefault)
)
-Above we illustrate a default function which will execute for all INSERT and UPDATE statements where a value for ``counter_plus_twelve`` was otherwise not provided, and the value will be that of whatever value is present in the execution for the ``counter`` column, plus the number 12.
+Above we illustrate a default function which will execute for all INSERT and
+UPDATE statements where a value for ``counter_plus_twelve`` was otherwise not
+provided, and the value will be that of whatever value is present in the
+execution for the ``counter`` column, plus the number 12.
-While the context object passed to the default function has many attributes, the ``current_parameters`` member is a special member provided only during the execution of a default function for the purposes of deriving defaults from its existing values. For a single statement that is executing many sets of bind parameters, the user-defined function is called for each set of parameters, and ``current_parameters`` will be provided with each individual parameter set for each execution.
+While the context object passed to the default function has many attributes,
+the ``current_parameters`` member is a special member provided only during the
+execution of a default function for the purposes of deriving defaults from its
+existing values. For a single statement that is executing many sets of bind
+parameters, the user-defined function is called for each set of parameters,
+and ``current_parameters`` will be provided with each individual parameter set
+for each execution.
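The per-parameter-set behavior can be sketched in plain Python, reusing the ``mydefault`` function from above with a stand-in context object (``Context`` here is illustrative, not SQLAlchemy's actual context class):

```python
# Sketch: for an executemany-style call, the default function runs once
# per parameter set, each time seeing that set in current_parameters.
class Context(object):
    def __init__(self, params):
        self.current_parameters = params

def mydefault(context):
    return context.current_parameters['counter'] + 12

param_sets = [{"counter": 1}, {"counter": 10}]
results = [mydefault(Context(p)) for p in param_sets]
assert results == [13, 22]
```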
SQL Expressions
---------------
Column('last_modified', DateTime, onupdate=func.utc_timestamp())
)
-Above, the ``create_date`` column will be populated with the result of the ``now()`` SQL function (which, depending on backend, compiles into ``NOW()`` or ``CURRENT_TIMESTAMP`` in most cases) during an INSERT statement, and the ``key`` column with the result of a SELECT subquery from another table. The ``last_modified`` column will be populated with the value of ``UTC_TIMESTAMP()``, a function specific to MySQL, when an UPDATE statement is emitted for this table.
-
-Note that when using ``func`` functions, unlike when using Python `datetime` functions we *do* call the function, i.e. with parenthesis "()" - this is because what we want in this case is the return value of the function, which is the SQL expression construct that will be rendered into the INSERT or UPDATE statement.
-
-The above SQL functions are usually executed "inline" with the INSERT or UPDATE statement being executed, meaning, a single statement is executed which embeds the given expressions or subqueries within the VALUES or SET clause of the statement. Although in some cases, the function is "pre-executed" in a SELECT statement of its own beforehand. This happens when all of the following is true:
+Above, the ``create_date`` column will be populated with the result of the
+``now()`` SQL function (which, depending on backend, compiles into ``NOW()``
+or ``CURRENT_TIMESTAMP`` in most cases) during an INSERT statement, and the
+``key`` column with the result of a SELECT subquery from another table. The
+``last_modified`` column will be populated with the value of
+``UTC_TIMESTAMP()``, a function specific to MySQL, when an UPDATE statement is
+emitted for this table.
+
+Note that when using ``func`` functions, unlike when using Python `datetime`
+functions, we *do* call the function, i.e. with parentheses "()" - this is
+because what we want in this case is the return value of the function, which
+is the SQL expression construct that will be rendered into the INSERT or
+UPDATE statement.
+
+The above SQL functions are usually executed "inline" with the INSERT or
+UPDATE statement being executed, meaning a single statement is executed which
+embeds the given expressions or subqueries within the VALUES or SET clause of
+the statement. In some cases, however, the function is "pre-executed" in a
+SELECT statement of its own beforehand. This happens when all of the following
+are true:
* the column is a primary key column
-* the database dialect does not support a usable ``cursor.lastrowid`` accessor (or equivalent); this currently includes PostgreSQL, Oracle, and Firebird, as well as some MySQL dialects.
-
-* the dialect does not support the "RETURNING" clause or similar, or the ``implicit_returning`` flag is set to ``False`` for the dialect. Dialects which support RETURNING currently include Postgresql, Oracle, Firebird, and MS-SQL.
-
-* the statement is a single execution, i.e. only supplies one set of parameters and doesn't use "executemany" behavior
-
-* the ``inline=True`` flag is not set on the :class:`~sqlalchemy.sql.expression.Insert()` or :class:`~sqlalchemy.sql.expression.Update()` construct, and the statement has not defined an explicit `returning()` clause.
-
-Whether or not the default generation clause "pre-executes" is not something that normally needs to be considered, unless it is being addressed for performance reasons.
-
-When the statement is executed with a single set of parameters (that is, it is not an "executemany" style execution), the returned :class:`~sqlalchemy.engine.base.ResultProxy` will contain a collection accessible via ``result.postfetch_cols()`` which contains a list of all :class:`~sqlalchemy.schema.Column` objects which had an inline-executed default. Similarly, all parameters which were bound to the statement, including all Python and SQL expressions which were pre-executed, are present in the ``last_inserted_params()`` or ``last_updated_params()`` collections on :class:`~sqlalchemy.engine.base.ResultProxy`. The ``inserted_primary_key`` collection contains a list of primary key values for the row inserted (a list so that single-column and composite-column primary keys are represented in the same format).
+* the database dialect does not support a usable ``cursor.lastrowid`` accessor
+  (or equivalent); this currently includes PostgreSQL, Oracle, and Firebird, as
+  well as some MySQL dialects.
+
+* the dialect does not support the "RETURNING" clause or similar, or the
+  ``implicit_returning`` flag is set to ``False`` for the dialect. Dialects
+  which support RETURNING currently include Postgresql, Oracle, Firebird, and
+  MS-SQL.
+
+* the statement is a single execution, i.e. only supplies one set of
+  parameters and doesn't use "executemany" behavior
+
+* the ``inline=True`` flag is not set on the
+  :class:`~sqlalchemy.sql.expression.Insert()` or
+  :class:`~sqlalchemy.sql.expression.Update()` construct, and the statement has
+  not defined an explicit `returning()` clause.
+
+Whether or not the default generation clause "pre-executes" is not something
+that normally needs to be considered, unless it is being addressed for
+performance reasons.
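The four conditions above combine as a simple conjunction, which can be sketched in plain Python (the function and argument names here are illustrative, not SQLAlchemy's):

```python
# Sketch of the pre-execution decision: a default is pre-executed in
# its own SELECT only when all four conditions listed above hold.
def preexecutes(is_primary_key, has_lastrowid, supports_returning,
                is_executemany, inline):
    return (is_primary_key
            and not has_lastrowid
            and not supports_returning
            and not is_executemany
            and not inline)

# e.g. a PK default on a dialect with no lastrowid and no RETURNING,
# single-row insert, inline flag unset -> pre-execute:
assert preexecutes(True, False, False, False, False)
# with RETURNING available, the default is rendered inline instead:
assert not preexecutes(True, False, True, False, False)
```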
+
+When the statement is executed with a single set of parameters (that is, it is
+not an "executemany" style execution), the returned
+:class:`~sqlalchemy.engine.base.ResultProxy` will contain a collection
+accessible via ``result.postfetch_cols()`` which contains a list of all
+:class:`~sqlalchemy.schema.Column` objects which had an inline-executed
+default. Similarly, all parameters which were bound to the statement,
+including all Python and SQL expressions which were pre-executed, are present
+in the ``last_inserted_params()`` or ``last_updated_params()`` collections on
+:class:`~sqlalchemy.engine.base.ResultProxy`. The ``inserted_primary_key``
+collection contains a list of primary key values for the row inserted (a list
+so that single-column and composite-column primary keys are represented in the
+same format).
Server Side Defaults
--------------------
-A variant on the SQL expression default is the ``server_default``, which gets placed in the CREATE TABLE statement during a ``create()`` operation:
+A variant on the SQL expression default is the ``server_default``, which gets
+placed in the CREATE TABLE statement during a ``create()`` operation:
.. sourcecode:: python+sql
created_at datetime default sysdate
)
-The behavior of ``server_default`` is similar to that of a regular SQL default; if it's placed on a primary key column for a database which doesn't have a way to "postfetch" the ID, and the statement is not "inlined", the SQL expression is pre-executed; otherwise, SQLAlchemy lets the default fire off on the database side normally.
+The behavior of ``server_default`` is similar to that of a regular SQL
+default; if it's placed on a primary key column for a database which doesn't
+have a way to "postfetch" the ID, and the statement is not "inlined", the SQL
+expression is pre-executed; otherwise, SQLAlchemy lets the default fire off on
+the database side normally.
Triggered Columns
------------------
-Columns with values set by a database trigger or other external process may be called out with a marker::
+Columns with values set by a database trigger or other external process may be
+called out with a marker::
t = Table('test', meta,
Column('abc', String(20), server_default=FetchedValue()),
Column('def', String(20), server_onupdate=FetchedValue())
)
-These markers do not emit a "default" clause when the table is created, however they do set the same internal flags as a static ``server_default`` clause, providing hints to higher-level tools that a "post-fetch" of these rows should be performed after an insert or update.
+These markers do not emit a "default" clause when the table is created,
+however they do set the same internal flags as a static ``server_default``
+clause, providing hints to higher-level tools that a "post-fetch" of these
+rows should be performed after an insert or update.
Defining Sequences
-------------------
-SQLAlchemy represents database sequences using the :class:`~sqlalchemy.schema.Sequence` object, which is considered to be a special case of "column default". It only has an effect on databases which have explicit support for sequences, which currently includes Postgresql, Oracle, and Firebird. The :class:`~sqlalchemy.schema.Sequence` object is otherwise ignored.
+SQLAlchemy represents database sequences using the
+:class:`~sqlalchemy.schema.Sequence` object, which is considered to be a
+special case of "column default". It only has an effect on databases which
+have explicit support for sequences, which currently includes Postgresql,
+Oracle, and Firebird. The :class:`~sqlalchemy.schema.Sequence` object is
+otherwise ignored.
-The :class:`~sqlalchemy.schema.Sequence` may be placed on any column as a "default" generator to be used during INSERT operations, and can also be configured to fire off during UPDATE operations if desired. It is most commonly used in conjunction with a single integer primary key column::
+The :class:`~sqlalchemy.schema.Sequence` may be placed on any column as a
+"default" generator to be used during INSERT operations, and can also be
+configured to fire off during UPDATE operations if desired. It is most
+commonly used in conjunction with a single integer primary key column::
table = Table("cartitems", meta,
Column("cart_id", Integer, Sequence('cart_id_seq'), primary_key=True),
Column("createdate", DateTime())
)
-Where above, the table "cartitems" is associated with a sequence named "cart_id_seq". When INSERT statements take place for "cartitems", and no value is passed for the "cart_id" column, the "cart_id_seq" sequence will be used to generate a value.
+Where above, the table "cartitems" is associated with a sequence named
+"cart_id_seq". When INSERT statements take place for "cartitems", and no value
+is passed for the "cart_id" column, the "cart_id_seq" sequence will be used to
+generate a value.
+
+When the :class:`~sqlalchemy.schema.Sequence` is associated with a table,
+CREATE and DROP statements issued for that table will also issue CREATE/DROP
+for the sequence object as well, thus "bundling" the sequence object with its
+parent table.
+
+The :class:`~sqlalchemy.schema.Sequence` object also implements special
+functionality to accommodate Postgresql's SERIAL datatype. The SERIAL type in
+PG automatically generates a sequence that is used implicitly during inserts.
+This means that if a :class:`~sqlalchemy.schema.Table` object defines a
+:class:`~sqlalchemy.schema.Sequence` on its primary key column so that it
+works with Oracle and Firebird, the :class:`~sqlalchemy.schema.Sequence` would
+get in the way of the "implicit" sequence that PG would normally use. For this
+use case, add the flag ``optional=True`` to the
+:class:`~sqlalchemy.schema.Sequence` object - this indicates that the
+:class:`~sqlalchemy.schema.Sequence` should only be used if the database
+provides no other option for generating primary key identifiers.
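The effect of ``optional=True`` can be sketched as a small decision function in plain Python (illustrative logic only, not SQLAlchemy's implementation):

```python
# Sketch of the optional-sequence rule: an optional sequence fires only
# when the backend cannot generate the primary key on its own.
def use_sequence(seq_optional, backend_generates_pk):
    return not (seq_optional and backend_generates_pk)

assert not use_sequence(True, True)    # PG SERIAL: sequence is skipped
assert use_sequence(True, False)       # Oracle/Firebird: sequence fires
assert use_sequence(False, True)       # non-optional: always used
```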
+
+The :class:`~sqlalchemy.schema.Sequence` object also has the ability to be
+executed standalone like a SQL expression, which has the effect of calling its
+"next value" function::
-When the :class:`~sqlalchemy.schema.Sequence` is associated with a table, CREATE and DROP statements issued for that table will also issue CREATE/DROP for the sequence object as well, thus "bundling" the sequence object with its parent table.
+ seq = Sequence('some_sequence')
+ nextid = connection.execute(seq)
-The :class:`~sqlalchemy.schema.Sequence` object also implements special functionality to accommodate Postgresql's SERIAL datatype. The SERIAL type in PG automatically generates a sequence that is used implicitly during inserts. This means that if a :class:`~sqlalchemy.schema.Table` object defines a :class:`~sqlalchemy.schema.Sequence` on its primary key column so that it works with Oracle and Firebird, the :class:`~sqlalchemy.schema.Sequence` would get in the way of the "implicit" sequence that PG would normally use. For this use case, add the flag ``optional=True`` to the :class:`~sqlalchemy.schema.Sequence` object - this indicates that the :class:`~sqlalchemy.schema.Sequence` should only be used if the database provides no other option for generating primary key identifiers.
+API Constructs
+--------------
-The :class:`~sqlalchemy.schema.Sequence` object also has the ability to be executed standalone like a SQL expression, which has the effect of calling its "next value" function::
+.. autoclass:: ColumnDefault
+ :show-inheritance:
- seq = Sequence('some_sequence')
- nextid = connection.execute(seq)
+.. autoclass:: DefaultClause
+ :show-inheritance:
+
+.. autoclass:: DefaultGenerator
+ :show-inheritance:
+
+.. autoclass:: FetchedValue
+ :show-inheritance:
+
+.. autoclass:: PassiveDefault
+ :show-inheritance:
+
+.. autoclass:: Sequence
+ :show-inheritance:
Defining Constraints and Indexes
=================================
Defining Foreign Keys
---------------------
-A *foreign key* in SQL is a table-level construct that constrains one or more columns in that table to only allow values that are present in a different set of columns, typically but not always located on a different table. We call the columns which are constrained the *foreign key* columns and the columns which they are constrained towards the *referenced* columns. The referenced columns almost always define the primary key for their owning table, though there are exceptions to this. The foreign key is the "joint" that connects together pairs of rows which have a relationship with each other, and SQLAlchemy assigns very deep importance to this concept in virtually every area of its operation.
-
-In SQLAlchemy as well as in DDL, foreign key constraints can be defined as additional attributes within the table clause, or for single-column foreign keys they may optionally be specified within the definition of a single column. The single column foreign key is more common, and at the column level is specified by constructing a :class:`~sqlalchemy.schema.ForeignKey` object as an argument to a :class:`~sqlalchemy.schema.Column` object::
+A *foreign key* in SQL is a table-level construct that constrains one or more
+columns in that table to only allow values that are present in a different set
+of columns, typically but not always located on a different table. We call the
+columns which are constrained the *foreign key* columns and the columns which
+they are constrained towards the *referenced* columns. The referenced columns
+almost always define the primary key for their owning table, though there are
+exceptions to this. The foreign key is the "joint" that connects together
+pairs of rows which have a relationship with each other, and SQLAlchemy
+assigns very deep importance to this concept in virtually every area of its
+operation.
+
+In SQLAlchemy as well as in DDL, foreign key constraints can be defined as
+additional attributes within the table clause, or for single-column foreign
+keys they may optionally be specified within the definition of a single
+column. The single column foreign key is more common, and at the column level
+is specified by constructing a :class:`~sqlalchemy.schema.ForeignKey` object
+as an argument to a :class:`~sqlalchemy.schema.Column` object::
user_preference = Table('user_preference', metadata,
Column('pref_id', Integer, primary_key=True),
Column('pref_value', String(100))
)
-Above, we define a new table ``user_preference`` for which each row must contain a value in the ``user_id`` column that also exists in the ``user`` table's ``user_id`` column.
+Above, we define a new table ``user_preference`` for which each row must
+contain a value in the ``user_id`` column that also exists in the ``user``
+table's ``user_id`` column.
-The argument to :class:`~sqlalchemy.schema.ForeignKey` is most commonly a string of the form *<tablename>.<columnname>*, or for a table in a remote schema or "owner" of the form *<schemaname>.<tablename>.<columnname>*. It may also be an actual :class:`~sqlalchemy.schema.Column` object, which as we'll see later is accessed from an existing :class:`~sqlalchemy.schema.Table` object via its ``c`` collection::
+The argument to :class:`~sqlalchemy.schema.ForeignKey` is most commonly a
+string of the form *<tablename>.<columnname>*, or for a table in a remote
+schema or "owner" of the form *<schemaname>.<tablename>.<columnname>*. It may
+also be an actual :class:`~sqlalchemy.schema.Column` object, which as we'll
+see later is accessed from an existing :class:`~sqlalchemy.schema.Table`
+object via its ``c`` collection::
ForeignKey(user.c.user_id)
-The advantage to using a string is that the in-python linkage between ``user`` and ``user_preference`` is resolved only when first needed, so that table objects can be easily spread across multiple modules and defined in any order.
+The advantage to using a string is that the in-python linkage between ``user``
+and ``user_preference`` is resolved only when first needed, so that table
+objects can be easily spread across multiple modules and defined in any order.
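The lazy-resolution idea behind string-based targets can be sketched in plain Python (an illustrative model, not SQLAlchemy's internals): the name is stored immediately, but only looked up in a shared registry when first needed.

```python
# Sketch: a string reference is resolved at use time, so the target
# may be registered after the reference is created, in any order.
class LazyRef(object):
    def __init__(self, registry, name):
        self.registry = registry
        self.name = name

    def resolve(self):
        return self.registry[self.name]  # looked up only when needed

registry = {}
ref = LazyRef(registry, "user.user_id")       # 'user' not defined yet
registry["user.user_id"] = "user_id_column"   # defined later
assert ref.resolve() == "user_id_column"
```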
-Foreign keys may also be defined at the table level, using the :class:`~sqlalchemy.schema.ForeignKeyConstraint` object. This object can describe a single- or multi-column foreign key. A multi-column foreign key is known as a *composite* foreign key, and almost always references a table that has a composite primary key. Below we define a table ``invoice`` which has a composite primary key::
+Foreign keys may also be defined at the table level, using the
+:class:`~sqlalchemy.schema.ForeignKeyConstraint` object. This object can
+describe a single- or multi-column foreign key. A multi-column foreign key is
+known as a *composite* foreign key, and almost always references a table that
+has a composite primary key. Below we define a table ``invoice`` which has a
+composite primary key::
invoice = Table('invoice', metadata,
Column('invoice_id', Integer, primary_key=True),
Column('description', String(60), nullable=False)
)
-And then a table ``invoice_item`` with a composite foreign key referencing ``invoice``::
+And then a table ``invoice_item`` with a composite foreign key referencing
+``invoice``::
invoice_item = Table('invoice_item', metadata,
Column('item_id', Integer, primary_key=True),
ForeignKeyConstraint(['invoice_id', 'ref_num'], ['invoice.invoice_id', 'invoice.ref_num'])
)
-It's important to note that the :class:`~sqlalchemy.schema.ForeignKeyConstraint` is the only way to define a composite foreign key. While we could also have placed individual :class:`~sqlalchemy.schema.ForeignKey` objects on both the ``invoice_item.invoice_id`` and ``invoice_item.ref_num`` columns, SQLAlchemy would not be aware that these two values should be paired together - it would be two individual foreign key constraints instead of a single composite foreign key referencing two columns.
+It's important to note that the
+:class:`~sqlalchemy.schema.ForeignKeyConstraint` is the only way to define a
+composite foreign key. While we could also have placed individual
+:class:`~sqlalchemy.schema.ForeignKey` objects on both the
+``invoice_item.invoice_id`` and ``invoice_item.ref_num`` columns, SQLAlchemy
+would not be aware that these two values should be paired together - it would
+be two individual foreign key constraints instead of a single composite
+foreign key referencing two columns.
Creating/Dropping Foreign Key Constraints via ALTER
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In all the above examples, the :class:`~sqlalchemy.schema.ForeignKey` object causes the "REFERENCES" keyword to be added inline to a column definition within a "CREATE TABLE" statement when :func:`~sqlalchemy.schema.MetaData.create_all` is issued, and :class:`~sqlalchemy.schema.ForeignKeyConstraint` invokes the "CONSTRAINT" keyword inline with "CREATE TABLE". There are some cases where this is undesireable, particularly when two tables reference each other mutually, each with a foreign key referencing the other. In such a situation at least one of the foreign key constraints must be generated after both tables have been built. To support such a scheme, :class:`~sqlalchemy.schema.ForeignKey` and :class:`~sqlalchemy.schema.ForeignKeyConstraint` offer the flag ``use_alter=True``. When using this flag, the constraint will be generated using a definition similar to "ALTER TABLE <tablename> ADD CONSTRAINT <name> ...". Since a name is required, the ``name`` attribute must also be specified. For example::
+In all the above examples, the :class:`~sqlalchemy.schema.ForeignKey` object
+causes the "REFERENCES" keyword to be added inline to a column definition
+within a "CREATE TABLE" statement when
+:func:`~sqlalchemy.schema.MetaData.create_all` is issued, and
+:class:`~sqlalchemy.schema.ForeignKeyConstraint` invokes the "CONSTRAINT"
+keyword inline with "CREATE TABLE". There are some cases where this is
+undesirable, particularly when two tables reference each other mutually, each
+with a foreign key referencing the other. In such a situation, at least one of
+the foreign key constraints must be generated after both tables have been
+built. To support such a scheme, :class:`~sqlalchemy.schema.ForeignKey` and
+:class:`~sqlalchemy.schema.ForeignKeyConstraint` offer the flag
+``use_alter=True``. When using this flag, the constraint will be generated
+using a definition similar to "ALTER TABLE <tablename> ADD CONSTRAINT <name>
+...". Since a name is required, the ``name`` attribute must also be specified.
+For example::
node = Table('node', meta,
Column('node_id', Integer, primary_key=True),
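The snippet above is abbreviated; a self-contained sketch of the same pattern (table and constraint names assumed) defers one of two mutually-dependent foreign keys via ``use_alter=True``:

```python
from sqlalchemy import MetaData, Table, Column, Integer
from sqlalchemy.schema import ForeignKey, ForeignKeyConstraint

meta = MetaData()

node = Table('node', meta,
    Column('node_id', Integer, primary_key=True),
    Column('primary_element', Integer, ForeignKey('element.element_id')))

element = Table('element', meta,
    Column('element_id', Integer, primary_key=True),
    Column('parent_node_id', Integer),
    # deferred to "ALTER TABLE element ADD CONSTRAINT ..." after both
    # tables exist; a name is required when use_alter=True is given
    ForeignKeyConstraint(
        ['parent_node_id'], ['node.node_id'],
        use_alter=True, name='fk_element_parent_node_id'))
```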
ON UPDATE and ON DELETE
~~~~~~~~~~~~~~~~~~~~~~~
-Most databases support *cascading* of foreign key values, that is the when a parent row is updated the new value is placed in child rows, or when the parent row is deleted all corresponding child rows are set to null or deleted. In data definition language these are specified using phrases like "ON UPDATE CASCADE", "ON DELETE CASCADE", and "ON DELETE SET NULL", corresponding to foreign key constraints. The phrase after "ON UPDATE" or "ON DELETE" may also other allow other phrases that are specific to the database in use. The :class:`~sqlalchemy.schema.ForeignKey` and :class:`~sqlalchemy.schema.ForeignKeyConstraint` objects support the generation of this clause via the ``onupdate`` and ``ondelete`` keyword arguments. The value is any string which will be output after the appropriate "ON UPDATE" or "ON DELETE" phrase::
+Most databases support *cascading* of foreign key values, that is, when a
+parent row is updated the new value is placed in child rows, or when the
+parent row is deleted all corresponding child rows are set to null or deleted.
+In data definition language these are specified using phrases like "ON UPDATE
+CASCADE", "ON DELETE CASCADE", and "ON DELETE SET NULL", corresponding to
+foreign key constraints. The phrase after "ON UPDATE" or "ON DELETE" may
+also allow other phrases that are specific to the database in use. The
+:class:`~sqlalchemy.schema.ForeignKey` and
+:class:`~sqlalchemy.schema.ForeignKeyConstraint` objects support the
+generation of this clause via the ``onupdate`` and ``ondelete`` keyword
+arguments. The value is any string which will be output after the appropriate
+"ON UPDATE" or "ON DELETE" phrase::
child = Table('child', meta,
Column('id', Integer,
)
)
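Filling out the truncated snippet above, a minimal sketch of cascading foreign keys might look like:

```python
from sqlalchemy import MetaData, Table, Column, Integer
from sqlalchemy.schema import ForeignKey, CreateTable

meta = MetaData()

parent = Table('parent', meta,
    Column('id', Integer, primary_key=True))

child = Table('child', meta,
    Column('id', Integer, primary_key=True),
    # updates to parent.id propagate; deleting a parent deletes its children
    Column('parent_id', Integer,
           ForeignKey('parent.id', onupdate="CASCADE", ondelete="CASCADE")))

# the rendered DDL for "child" includes both ON clauses
print(CreateTable(child))
```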
-Note that these clauses are not supported on SQLite, and require ``InnoDB`` tables when used with MySQL. They may also not be supported on other databases.
+Note that these clauses are not supported on SQLite, and require ``InnoDB``
+tables when used with MySQL. They may also not be supported on other
+databases.
+
+API Constructs
+~~~~~~~~~~~~~~~
+
+.. autoclass:: ForeignKey
+ :members:
+ :show-inheritance:
+
+.. autoclass:: ForeignKeyConstraint
+ :members:
+ :show-inheritance:
+
UNIQUE Constraint
-----------------
-Unique constraints can be created anonymously on a single column using the ``unique`` keyword on :class:`~sqlalchemy.schema.Column`. Explicitly named unique constraints and/or those with multiple columns are created via the :class:`~sqlalchemy.schema.UniqueConstraint` table-level construct.
+Unique constraints can be created anonymously on a single column using the
+``unique`` keyword on :class:`~sqlalchemy.schema.Column`. Explicitly named
+unique constraints and/or those with multiple columns are created via the
+:class:`~sqlalchemy.schema.UniqueConstraint` table-level construct.
.. sourcecode:: python+sql
UniqueConstraint('col2', 'col3', name='uix_1')
)
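Both forms can be sketched together (the table and constraint names are assumed for illustration):

```python
from sqlalchemy import MetaData, Table, Column, Integer
from sqlalchemy.schema import UniqueConstraint

meta = MetaData()

mytable = Table('mytable', meta,
    # a single-column, anonymously named unique constraint
    Column('col1', Integer, unique=True),
    Column('col2', Integer),
    Column('col3', Integer),
    # an explicitly named, multi-column unique constraint
    UniqueConstraint('col2', 'col3', name='uix_1'))
```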
+.. autoclass:: UniqueConstraint
+ :show-inheritance:
+
CHECK Constraint
----------------
-Check constraints can be named or unnamed and can be created at the Column or Table level, using the :class:`~sqlalchemy.schema.CheckConstraint` construct. The text of the check constraint is passed directly through to the database, so there is limited "database independent" behavior. Column level check constraints generally should only refer to the column to which they are placed, while table level constraints can refer to any columns in the table.
+Check constraints can be named or unnamed and can be created at the Column or
+Table level, using the :class:`~sqlalchemy.schema.CheckConstraint` construct.
+The text of the check constraint is passed directly through to the database,
+so there is limited "database independent" behavior. Column level check
+constraints generally should only refer to the column on which they are
+placed, while table level constraints can refer to any columns in the table.
-Note that some databases do not actively support check constraints such as MySQL and SQLite.
+Note that some databases, such as MySQL and SQLite, do not actively support
+check constraints.
.. sourcecode:: python+sql
CONSTRAINT check1 CHECK (col2 > col3 + 5)
){stop}
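A sketch showing both the column-level and table-level forms (producing the ``check1`` constraint shown above; the table name is assumed):

```python
from sqlalchemy import MetaData, Table, Column, Integer
from sqlalchemy.schema import CheckConstraint

meta = MetaData()

mytable = Table('mytable', meta,
    # column-level CHECK: refers only to its own column
    Column('col1', Integer, CheckConstraint('col1 > 5')),
    Column('col2', Integer),
    Column('col3', Integer),
    # table-level CHECK: may refer to any columns of the table
    CheckConstraint('col2 > col3 + 5', name='check1'))
```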
+.. autoclass:: CheckConstraint
+ :show-inheritance:
+
+Other Constraint Classes
+------------------------
+
+.. autoclass:: Constraint
+ :show-inheritance:
+
+.. autoclass:: PrimaryKeyConstraint
+ :show-inheritance:
+
+
Indexes
-------
-Indexes can be created anonymously (using an auto-generated name ``ix_<column label>``) for a single column using the inline ``index`` keyword on :class:`~sqlalchemy.schema.Column`, which also modifies the usage of ``unique`` to apply the uniqueness to the index itself, instead of adding a separate UNIQUE constraint. For indexes with specific names or which encompass more than one column, use the :class:`~sqlalchemy.schema.Index` construct, which requires a name.
+Indexes can be created anonymously (using an auto-generated name ``ix_<column
+label>``) for a single column using the inline ``index`` keyword on
+:class:`~sqlalchemy.schema.Column`, which also modifies the usage of
+``unique`` to apply the uniqueness to the index itself, instead of adding a
+separate UNIQUE constraint. For indexes with specific names or which encompass
+more than one column, use the :class:`~sqlalchemy.schema.Index` construct,
+which requires a name.
-Note that the :class:`~sqlalchemy.schema.Index` construct is created **externally** to the table which it corresponds, using :class:`~sqlalchemy.schema.Column` objects and not strings.
+Note that the :class:`~sqlalchemy.schema.Index` construct is created
+**externally** to the table to which it corresponds, using
+:class:`~sqlalchemy.schema.Column` objects and not strings.
-Below we illustrate a :class:`~sqlalchemy.schema.Table` with several :class:`~sqlalchemy.schema.Index` objects associated. The DDL for "CREATE INDEX" is issued right after the create statements for the table:
+Below we illustrate a :class:`~sqlalchemy.schema.Table` with several
+:class:`~sqlalchemy.schema.Index` objects associated. The DDL for "CREATE
+INDEX" is issued right after the create statements for the table:
.. sourcecode:: python+sql
{sql}i.create(engine)
CREATE INDEX someindex ON mytable (col5){stop}
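A minimal sketch of the ``someindex`` example, emphasizing that the :class:`~sqlalchemy.schema.Index` is built from :class:`~sqlalchemy.schema.Column` objects:

```python
from sqlalchemy import MetaData, Table, Column, Integer
from sqlalchemy.schema import Index

meta = MetaData()

mytable = Table('mytable', meta,
    Column('col1', Integer),
    Column('col5', Integer))

# constructed externally to the Table, from Column objects (not strings)
i = Index('someindex', mytable.c.col5)
```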
+.. autoclass:: Index
+ :show-inheritance:
+
Customizing DDL
===============
-In the preceding sections we've discussed a variety of schema constructs including :class:`~sqlalchemy.schema.Table`, :class:`~sqlalchemy.schema.ForeignKeyConstraint`, :class:`~sqlalchemy.schema.CheckConstraint`, and :class:`~sqlalchemy.schema.Sequence`. Throughout, we've relied upon the ``create()`` and :func:`~sqlalchemy.schema.MetaData.create_all` methods of :class:`~sqlalchemy.schema.Table` and :class:`~sqlalchemy.schema.MetaData` in order to issue data definition language (DDL) for all constructs. When issued, a pre-determined order of operations is invoked, and DDL to create each table is created unconditionally including all constraints and other objects associated with it. For more complex scenarios where database-specific DDL is required, SQLAlchemy offers two techniques which can be used to add any DDL based on any condition, either accompanying the standard generation of tables or by itself.
+In the preceding sections we've discussed a variety of schema constructs
+including :class:`~sqlalchemy.schema.Table`,
+:class:`~sqlalchemy.schema.ForeignKeyConstraint`,
+:class:`~sqlalchemy.schema.CheckConstraint`, and
+:class:`~sqlalchemy.schema.Sequence`. Throughout, we've relied upon the
+``create()`` and :func:`~sqlalchemy.schema.MetaData.create_all` methods of
+:class:`~sqlalchemy.schema.Table` and :class:`~sqlalchemy.schema.MetaData` in
+order to issue data definition language (DDL) for all constructs. When issued,
+a pre-determined order of operations is invoked, and DDL to create each table
+is created unconditionally including all constraints and other objects
+associated with it. For more complex scenarios where database-specific DDL is
+required, SQLAlchemy offers two techniques which can be used to add any DDL
+based on any condition, either accompanying the standard generation of tables
+or by itself.
Controlling DDL Sequences
-------------------------
Custom DDL
----------
-Custom DDL phrases are most easily achieved using the :class:`~sqlalchemy.schema.DDL` construct. This construct works like all the other DDL elements except it accepts a string which is the
-text to be emitted:
+Custom DDL phrases are most easily achieved using the
+:class:`~sqlalchemy.schema.DDL` construct. This construct works like all the
+other DDL elements, except that it accepts a string which is the text to be
+emitted:
.. sourcecode:: python+sql
"cst_user_name_length "
" CHECK (length(user_name) >= 8)").execute_at("after-create", metadata)
-A more comprehensive method of creating libraries of DDL constructs is to use the :ref:`sqlalchemy.ext.compiler_toplevel` extension. See that chapter for full details.
-
-Adapting Tables to Alternate Metadata
-======================================
-
+A more comprehensive method of creating libraries of DDL constructs is to use
+custom compilation - see :ref:`sqlalchemy.ext.compiler_toplevel` for
+details.
-A :class:`~sqlalchemy.schema.Table` object created against a specific :class:`~sqlalchemy.schema.MetaData` object can be re-created against a new MetaData using the :func:`~sqlalchemy.schema.Table.tometadata` method:
-
-.. sourcecode:: python+sql
-
- # create two metadata
- meta1 = MetaData('sqlite:///querytest.db')
- meta2 = MetaData()
-
- # load 'users' from the sqlite engine
- users_table = Table('users', meta1, autoload=True)
-
- # create the same Table object for the plain metadata
- users_table_2 = users_table.tometadata(meta2)
+DDL API
+-------
+.. autoclass:: DDLElement
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. autoclass:: DDL
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. autoclass:: CreateTable
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. autoclass:: DropTable
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. autoclass:: CreateSequence
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. autoclass:: DropSequence
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. autoclass:: CreateIndex
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. autoclass:: DropIndex
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. autoclass:: AddConstraint
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. autoclass:: DropConstraint
+ :members:
+ :undoc-members:
+ :show-inheritance:
-serializer
-==========
-
-:author: Mike Bayer
+Expression Serializer Extension
+===============================
.. automodule:: sqlalchemy.ext.serializer
:members:
Custom Types
------------
-User-defined types may be created to match special capabilities of a
-particular database or simply for implementing custom processing logic
-in Python.
+A variety of methods exist to redefine the behavior of existing types
+as well as to provide new ones.
-The simplest method is implementing a :class:`TypeDecorator`, a helper
-class that makes it easy to augment the bind parameter and result
-processing capabilities of one of the built in types.
+Overriding Type Compilation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
-To build a type object from scratch, subclass `:class:UserDefinedType`.
+The string produced by any type object, when rendered in a CREATE TABLE
+statement or other SQL function like CAST, can be changed. See the
+section :ref:`type_compilation_extension`, a subsection of
+:ref:`sqlalchemy.ext.compiler_toplevel`, for a short example.
+
+Augmenting Existing Types
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The :class:`TypeDecorator` allows the creation of custom types which
+add bind-parameter and result-processing behavior to an existing
+type object. It is used when additional in-Python marshalling of data
+to and from the database is required.
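As a minimal sketch of this pattern (the prefixing scheme is invented purely for illustration), a :class:`TypeDecorator` subclass overrides the two processing hooks:

```python
from sqlalchemy.types import TypeDecorator, String

class PrefixedString(TypeDecorator):
    """Stores strings with a fixed prefix, removing it when loading."""

    impl = String

    def process_bind_param(self, value, dialect):
        # applied to values on their way into the database
        if value is not None:
            value = "PREFIX:" + value
        return value

    def process_result_value(self, value, dialect):
        # applied to values on their way back out
        if value is not None and value.startswith("PREFIX:"):
            value = value[len("PREFIX:"):]
        return value
```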
.. autoclass:: TypeDecorator
:members:
:inherited-members:
:show-inheritance:
+Creating New Types
+~~~~~~~~~~~~~~~~~~
+
+The :class:`UserDefinedType` class is provided as a simple base class
+for defining entirely new database types:
+
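A skeletal sketch (the type name is hypothetical) need only provide a column specification, with optional bind and result processing:

```python
from sqlalchemy.types import UserDefinedType

class MyEpochType(UserDefinedType):
    """A hypothetical type emitting a custom column specification."""

    def get_col_spec(self):
        # the string rendered for this type in CREATE TABLE
        return "MYEPOCH"

    def bind_processor(self, dialect):
        # no bind-parameter conversion in this sketch
        return None

    def result_processor(self, dialect, coltype):
        # no result-row conversion in this sketch
        return None
```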
.. autoclass:: UserDefinedType
:members:
:undoc-members:
:inherited-members:
:show-inheritance:
-.. autoclass:: TypeEngine
+Base Type API
+--------------
+
+.. autoclass:: AbstractType
:members:
:undoc-members:
:inherited-members:
:show-inheritance:
-.. autoclass:: AbstractType
+.. autoclass:: TypeEngine
:members:
:undoc-members:
:inherited-members:
+++ /dev/null
-.. _engines_toplevel:
-
-================
-Database Engines
-================
-The **Engine** is the starting point for any SQLAlchemy application. It's "home base" for the actual database and its DBAPI, delivered to the SQLAlchemy application through a connection pool and a **Dialect**, which describes how to talk to a specific kind of database/DBAPI combination.
-
-The general structure is this::
-
- +-----------+ __________
- /---| Pool |---\ (__________)
- +-------------+ / +-----------+ \ +--------+ | |
- connect() <--| Engine |---x x----| DBAPI |---| database |
- +-------------+ \ +-----------+ / +--------+ | |
- \---| Dialect |---/ |__________|
- +-----------+ (__________)
-
-Where above, a :class:`~sqlalchemy.engine.base.Engine` references both a :class:`~sqlalchemy.engine.base.Dialect` and :class:`~sqlalchemy.pool.Pool`, which together interpret the DBAPI's module functions as well as the behavior of the database.
-
-Creating an engine is just a matter of issuing a single call, :func:`create_engine()`::
-
- engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')
-
-The above engine invokes the ``postgresql`` dialect and a connection pool which references ``localhost:5432``.
-
-Note that the appropriate usage of :func:`create_engine()` is once per particular configuration, held globally for the lifetime of a single application process (not including child processes via ``fork()`` - these would require a new engine). A single :class:`~sqlalchemy.engine.base.Engine` manages connections on behalf of the process and is intended to be called upon in a concurrent fashion. Creating engines for each particular operation is not the intended usage.
-
-The engine can be used directly to issue SQL to the database. The most generic way is to use connections, which you get via the ``connect()`` method::
-
- connection = engine.connect()
- result = connection.execute("select username from users")
- for row in result:
- print "username:", row['username']
- connection.close()
-
-The connection is an instance of :class:`~sqlalchemy.engine.base.Connection`, which is a **proxy** object for an actual DBAPI connection. The returned result is an instance of :class:`~sqlalchemy.engine.ResultProxy`, which acts very much like a DBAPI cursor.
-
-When you say ``engine.connect()``, a new :class:`~sqlalchemy.engine.base.Connection` object is created, and a DBAPI connection is retrieved from the connection pool. Later, when you call ``connection.close()``, the DBAPI connection is returned to the pool; nothing is actually "closed" from the perspective of the database.
-
-To execute some SQL more quickly, you can skip the :class:`~sqlalchemy.engine.base.Connection` part and just say::
-
- result = engine.execute("select username from users")
- for row in result:
- print "username:", row['username']
- result.close()
-
-Where above, the ``execute()`` method on the :class:`~sqlalchemy.engine.base.Engine` does the ``connect()`` part for you, and returns the :class:`~sqlalchemy.engine.base.ResultProxy` directly. The actual :class:`~sqlalchemy.engine.base.Connection` is *inside* the :class:`~sqlalchemy.engine.base.ResultProxy`, waiting for you to finish reading the result. In this case, when you ``close()`` the :class:`~sqlalchemy.engine.base.ResultProxy`, the underlying :class:`~sqlalchemy.engine.base.Connection` is closed, which returns the DBAPI connection to the pool.
-
-To summarize the above two examples, when you use a :class:`~sqlalchemy.engine.base.Connection` object, it's known as **explicit execution**. When you don't see the :class:`~sqlalchemy.engine.base.Connection` object, but you still use the ``execute()`` method on the :class:`~sqlalchemy.engine.base.Engine`, it's called **explicit, connectionless execution**. A third variant of execution also exists called **implicit execution**; this will be described later.
-
-The :class:`~sqlalchemy.engine.base.Engine` and :class:`~sqlalchemy.engine.base.Connection` can do a lot more than what we illustrated above; SQL strings are only its most rudimentary function. Later chapters will describe how "constructed SQL" expressions can be used with engines; in many cases, you don't have to deal with the :class:`~sqlalchemy.engine.base.Engine` at all after it's created. The Object Relational Mapper (ORM), an optional feature of SQLAlchemy, also uses the :class:`~sqlalchemy.engine.base.Engine` in order to get at connections; that's also a case where you can often create the engine once, and then forget about it.
-
-.. _supported_dbapis:
-
-Supported Databases
-====================
-
-SQLAlchemy includes many :class:`~sqlalchemy.engine.base.Dialect` implementations for various
-backends; each is described as its own package in the :ref:`sqlalchemy.dialects_toplevel` package. A
-SQLAlchemy dialect always requires that an appropriate DBAPI driver is installed.
-
-The table below summarizes the state of DBAPI support in SQLAlchemy 0.6. The values
-translate as:
-
-* yes / Python platform - The SQLAlchemy dialect is mostly or fully operational on the target platform.
-* yes / OS platform - The DBAPI supports that platform.
-* no / Python platform - The DBAPI does not support that platform, or there is no SQLAlchemy dialect support.
-* no / OS platform - The DBAPI does not support that platform.
-* partial - the DBAPI is partially usable on the target platform but has major unresolved issues.
-* development - a development version of the dialect exists, but is not yet usable.
-* thirdparty - the dialect itself is maintained by a third party, who should be consulted for
- information on current support.
-* \* - indicates the given DBAPI is the "default" for SQLAlchemy, i.e. when just the database name is specified
-
-========================= =========================== =========== =========== =========== ================= ============
-Driver Connect string Py2K Py3K Jython Unix Windows
-========================= =========================== =========== =========== =========== ================= ============
-**DB2/Informix IDS**
-ibm-db_ thirdparty thirdparty thirdparty thirdparty thirdparty thirdparty
-**Firebird**
-kinterbasdb_ ``firebird+kinterbasdb``\* yes development no yes yes
-**Informix**
-informixdb_ ``informix+informixdb``\* development development no unknown unknown
-**MaxDB**
-sapdb_ ``maxdb+sapdb``\* development development no yes unknown
-**Microsoft Access**
-pyodbc_ ``access+pyodbc``\* development development no unknown yes
-**Microsoft SQL Server**
-adodbapi_ ``mssql+adodbapi`` development development no no yes
-`jTDS JDBC Driver`_ ``mssql+zxjdbc`` no no development yes yes
-mxodbc_ ``mssql+mxodbc`` yes development no yes with FreeTDS_ yes
-pyodbc_ ``mssql+pyodbc``\* yes development no yes with FreeTDS_ yes
-pymssql_ ``mssql+pymssql`` yes development no yes yes
-**MySQL**
-`MySQL Connector/J`_ ``mysql+zxjdbc`` no no yes yes yes
-`MySQL Connector/Python`_ ``mysql+mysqlconnector`` yes partial no yes yes
-mysql-python_ ``mysql+mysqldb``\* yes development no yes yes
-OurSQL_ ``mysql+oursql`` yes partial no yes yes
-**Oracle**
-cx_oracle_ ``oracle+cx_oracle``\* yes development no yes yes
-`Oracle JDBC Driver`_ ``oracle+zxjdbc`` no no yes yes yes
-**Postgresql**
-pg8000_ ``postgresql+pg8000`` yes yes no yes yes
-`PostgreSQL JDBC Driver`_ ``postgresql+zxjdbc`` no no yes yes yes
-psycopg2_ ``postgresql+psycopg2``\* yes development no yes yes
-pypostgresql_ ``postgresql+pypostgresql`` no yes no yes yes
-**SQLite**
-pysqlite_ ``sqlite+pysqlite``\* yes yes no yes yes
-sqlite3_ ``sqlite+pysqlite``\* yes yes no yes yes
-**Sybase ASE**
-mxodbc_ ``sybase+mxodbc`` development development no yes yes
-pyodbc_ ``sybase+pyodbc``\* partial development no unknown unknown
-python-sybase_ ``sybase+pysybase`` partial development no yes yes
-========================= =========================== =========== =========== =========== ================= ============
-
-.. _psycopg2: http://www.initd.org/
-.. _pg8000: http://pybrary.net/pg8000/
-.. _pypostgresql: http://python.projects.postgresql.org/
-.. _mysql-python: http://sourceforge.net/projects/mysql-python
-.. _MySQL Connector/Python: https://launchpad.net/myconnpy
-.. _OurSQL: http://packages.python.org/oursql/
-.. _PostgreSQL JDBC Driver: http://jdbc.postgresql.org/
-.. _sqlite3: http://docs.python.org/library/sqlite3.html
-.. _pysqlite: http://pypi.python.org/pypi/pysqlite/
-.. _MySQL Connector/J: http://dev.mysql.com/downloads/connector/j/
-.. _cx_Oracle: http://cx-oracle.sourceforge.net/
-.. _Oracle JDBC Driver: http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/index.html
-.. _kinterbasdb: http://firebirdsql.org/index.php?op=devel&sub=python
-.. _pyodbc: http://code.google.com/p/pyodbc/
-.. _mxodbc: http://www.egenix.com/products/python/mxODBC/
-.. _FreeTDS: http://www.freetds.org/
-.. _adodbapi: http://adodbapi.sourceforge.net/
-.. _pymssql: http://code.google.com/p/pymssql/
-.. _jTDS JDBC Driver: http://jtds.sourceforge.net/
-.. _ibm-db: http://code.google.com/p/ibm-db/
-.. _informixdb: http://informixdb.sourceforge.net/
-.. _sapdb: http://www.sapdb.org/sapdbapi.html
-.. _python-sybase: http://python-sybase.sourceforge.net/
-
-Further detail on dialects is available at :ref:`sqlalchemy.dialects_toplevel` as well as additional notes on the wiki at `Database Notes <http://www.sqlalchemy.org/trac/wiki/DatabaseNotes>`_
-
-create_engine() URL Arguments
-==============================
-
-SQLAlchemy indicates the source of an Engine strictly via `RFC-1738 <http://rfc.net/rfc1738.html>`_ style URLs, combined with optional keyword arguments to specify options for the Engine. The form of the URL is:
-
- dialect+driver://username:password@host:port/database
-
-Dialect names include the identifying name of the SQLAlchemy dialect which include ``sqlite``, ``mysql``, ``postgresql``, ``oracle``, ``mssql``, and ``firebird``. The drivername is the name of the DBAPI to be used to connect to the database using all lowercase letters. If not specified, a "default" DBAPI will be imported if available - this default is typically the most widely known driver available for that backend (i.e. cx_oracle, pysqlite/sqlite3, psycopg2, mysqldb). For Jython connections, specify the `zxjdbc` driver, which is the JDBC-DBAPI bridge included with Jython.
-
-.. sourcecode:: python+sql
-
- # postgresql - psycopg2 is the default driver.
- pg_db = create_engine('postgresql://scott:tiger@localhost/mydatabase')
- pg_db = create_engine('postgresql+psycopg2://scott:tiger@localhost/mydatabase')
- pg_db = create_engine('postgresql+pg8000://scott:tiger@localhost/mydatabase')
- pg_db = create_engine('postgresql+pypostgresql://scott:tiger@localhost/mydatabase')
-
- # postgresql on Jython
- pg_db = create_engine('postgresql+zxjdbc://scott:tiger@localhost/mydatabase')
-
- # mysql - MySQLdb (mysql-python) is the default driver
- mysql_db = create_engine('mysql://scott:tiger@localhost/foo')
- mysql_db = create_engine('mysql+mysqldb://scott:tiger@localhost/foo')
-
- # mysql on Jython
- mysql_db = create_engine('mysql+zxjdbc://localhost/foo')
-
- # mysql with pyodbc (buggy)
- mysql_db = create_engine('mysql+pyodbc://scott:tiger@some_dsn')
-
- # oracle - cx_oracle is the default driver
- oracle_db = create_engine('oracle://scott:tiger@127.0.0.1:1521/sidname')
-
- # oracle via TNS name
- oracle_db = create_engine('oracle+cx_oracle://scott:tiger@tnsname')
-
- # mssql using ODBC datasource names. PyODBC is the default driver.
- mssql_db = create_engine('mssql://mydsn')
- mssql_db = create_engine('mssql+pyodbc://mydsn')
- mssql_db = create_engine('mssql+adodbapi://mydsn')
- mssql_db = create_engine('mssql+pyodbc://username:password@mydsn')
-
-SQLite connects to file based databases. The same URL format is used, omitting the hostname, and using the "file" portion as the filename of the database. This has the effect of four slashes being present for an absolute file path::
-
- # sqlite://<nohostname>/<path>
- # where <path> is relative:
- sqlite_db = create_engine('sqlite:///foo.db')
-
- # or absolute, starting with a slash:
- sqlite_db = create_engine('sqlite:////absolute/path/to/foo.db')
-
-To use a SQLite ``:memory:`` database, specify an empty URL::
-
- sqlite_memory_db = create_engine('sqlite://')
-
-The :class:`~sqlalchemy.engine.base.Engine` will ask the connection pool for a connection when the ``connect()`` or ``execute()`` methods are called. The default connection pool, :class:`~sqlalchemy.pool.QueuePool`, as well as the default connection pool used with SQLite, :class:`~sqlalchemy.pool.SingletonThreadPool`, will open connections to the database on an as-needed basis. As concurrent statements are executed, :class:`~sqlalchemy.pool.QueuePool` will grow its pool of connections to a default size of five, and will allow a default "overflow" of ten. Since the :class:`~sqlalchemy.engine.base.Engine` is essentially "home base" for the connection pool, it follows that you should keep a single :class:`~sqlalchemy.engine.base.Engine` per database established within an application, rather than creating a new one for each connection.
-
-Custom DBAPI connect() arguments
---------------------------------
-
-Custom arguments used when issuing the ``connect()`` call to the underlying DBAPI may be issued in three distinct ways. String-based arguments can be passed directly from the URL string as query arguments:
-
-.. sourcecode:: python+sql
-
- db = create_engine('postgresql://scott:tiger@localhost/test?argument1=foo&argument2=bar')
-
-If SQLAlchemy's database connector is aware of a particular query argument, it may convert its type from string to its proper type.
-
-:func:`~sqlalchemy.create_engine` also takes an argument ``connect_args`` which is an additional dictionary that will be passed to ``connect()``. This can be used when arguments of a type other than string are required, and SQLAlchemy's database connector has no type conversion logic present for that parameter:
-
-.. sourcecode:: python+sql
-
- db = create_engine('postgresql://scott:tiger@localhost/test', connect_args = {'argument1':17, 'argument2':'bar'})
-
-The most customizable connection method of all is to pass a ``creator`` argument, which specifies a callable that returns a DBAPI connection:
-
-.. sourcecode:: python+sql
-
- def connect():
- return psycopg.connect(user='scott', host='localhost')
-
- db = create_engine('postgresql://', creator=connect)
-
-.. _create_engine_args:
-
-Database Engine Options
-========================
-
-Keyword options can also be specified to :func:`~sqlalchemy.create_engine`, following the string URL as follows:
-
-.. sourcecode:: python+sql
-
- db = create_engine('postgresql://...', encoding='latin1', echo=True)
-
-Options common to all database dialects are described at :func:`~sqlalchemy.create_engine`.
-
-More On Connections
-====================
-
-Recall from the beginning of this section that the Engine provides a ``connect()`` method which returns a :class:`~sqlalchemy.engine.base.Connection` object. :class:`~sqlalchemy.engine.base.Connection` is a *proxy* object which maintains a reference to a DBAPI connection instance. The ``close()`` method on :class:`~sqlalchemy.engine.base.Connection` does not actually close the DBAPI connection, but instead returns it to the connection pool referenced by the :class:`~sqlalchemy.engine.base.Engine`. :class:`~sqlalchemy.engine.base.Connection` will also automatically return its resources to the connection pool when the object is garbage collected, i.e. its ``__del__()`` method is called. When using the standard C implementation of Python, this method is usually called immediately as soon as the object is dereferenced. With other Python implementations such as Jython, this is not so guaranteed.
-
-The ``execute()`` methods on both :class:`~sqlalchemy.engine.base.Engine` and :class:`~sqlalchemy.engine.base.Connection` can also receive SQL clause constructs as well::
-
- connection = engine.connect()
- result = connection.execute(select([table1], table1.c.col1==5))
- for row in result:
- print row['col1'], row['col2']
- connection.close()
-
-The above SQL construct is known as a ``select()``. The full range of SQL constructs available are described in :ref:`sqlexpression_toplevel`.
-
-Both :class:`~sqlalchemy.engine.base.Connection` and :class:`~sqlalchemy.engine.base.Engine` fulfill an interface known as :class:`~sqlalchemy.engine.base.Connectable` which specifies common functionality between the two objects, namely being able to call ``connect()`` to return a :class:`~sqlalchemy.engine.base.Connection` object (:class:`~sqlalchemy.engine.base.Connection` just returns itself), and being able to call ``execute()`` to get a result set. Following this, most SQLAlchemy functions and objects which accept an :class:`~sqlalchemy.engine.base.Engine` as a parameter or attribute with which to execute SQL will also accept a :class:`~sqlalchemy.engine.base.Connection`. This argument is named ``bind``::
-
- engine = create_engine('sqlite:///:memory:')
-
- # specify some Table metadata
- metadata = MetaData()
- table = Table('sometable', metadata, Column('col1', Integer))
-
- # create the table with the Engine
- table.create(bind=engine)
-
- # drop the table with a Connection off the Engine
- connection = engine.connect()
- table.drop(bind=connection)
-
-.. index::
- single: thread safety; connections
-
-Connection facts:
-
-* the Connection object is **not thread-safe**. While a Connection can be shared among threads using properly synchronized access, this is also not recommended as many DBAPIs have issues with, if not outright disallow, sharing of connection state between threads.
-* The Connection object represents a single dbapi connection checked out from the connection pool. In this state, the connection pool has no affect upon the connection, including its expiration or timeout state. For the connection pool to properly manage connections, **connections should be returned to the connection pool (i.e. ``connection.close()``) whenever the connection is not in use**. If your application has a need for management of multiple connections or is otherwise long running (this includes all web applications, threaded or not), don't hold a single connection open at the module level.
-
-Using Transactions with Connection
-===================================
-
-The :class:`~sqlalchemy.engine.base.Connection` object provides a ``begin()`` method which returns a :class:`~sqlalchemy.engine.base.Transaction` object. This object is usually used within a try/except clause so that it is guaranteed to ``rollback()`` or ``commit()``::
-
- trans = connection.begin()
- try:
- r1 = connection.execute(table1.select())
- connection.execute(table1.insert(), col1=7, col2='this is some data')
- trans.commit()
- except:
- trans.rollback()
- raise
-
-The :class:`~sqlalchemy.engine.base.Transaction` object also handles "nested" behavior by keeping track of the outermost begin/commit pair. In this example, two functions both issue a transaction on a Connection, but only the outermost Transaction object actually takes effect when it is committed.
-
-.. sourcecode:: python+sql
-
- # method_a starts a transaction and calls method_b
- def method_a(connection):
- trans = connection.begin() # open a transaction
- try:
- method_b(connection)
- trans.commit() # transaction is committed here
- except:
- trans.rollback() # this rolls back the transaction unconditionally
- raise
-
- # method_b also starts a transaction
- def method_b(connection):
- trans = connection.begin() # open a transaction - this runs in the context of method_a's transaction
- try:
- connection.execute("insert into mytable values ('bat', 'lala')")
- connection.execute(mytable.insert(), col1='bat', col2='lala')
- trans.commit() # transaction is not committed yet
- except:
- trans.rollback() # this rolls back the transaction unconditionally
- raise
-
- # open a Connection and call method_a
- conn = engine.connect()
- method_a(conn)
- conn.close()
-
-Above, ``method_a`` is called first, which calls ``connection.begin()``. Then it calls ``method_b``. When ``method_b`` calls ``connection.begin()``, it just increments a counter that is decremented when it calls ``commit()``. If either ``method_a`` or ``method_b`` calls ``rollback()``, the whole transaction is rolled back. The transaction is not committed until ``method_a`` calls the ``commit()`` method. This "nesting" behavior allows the creation of functions which "guarantee" that a transaction will be used if one was not already available, but will automatically participate in an enclosing transaction if one exists.
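The counter-based nesting described above can be modeled with a small sketch. This is a toy, not SQLAlchemy's actual :class:`~sqlalchemy.engine.base.Transaction` implementation:

```python
class ToyConnection:
    """Toy model of begin/commit nesting: begin() increments a depth
    counter; only the outermost commit() actually takes effect."""
    def __init__(self):
        self.depth = 0
        self.committed = False
        self.rolled_back = False

    def begin(self):
        self.depth += 1
        return ToyTransaction(self)

class ToyTransaction:
    def __init__(self, conn):
        self.conn = conn

    def commit(self):
        self.conn.depth -= 1
        if self.conn.depth == 0 and not self.conn.rolled_back:
            self.conn.committed = True   # outermost commit really commits

    def rollback(self):
        # any rollback dooms the whole transaction
        self.conn.depth = 0
        self.conn.rolled_back = True

conn = ToyConnection()
outer = conn.begin()
inner = conn.begin()
inner.commit()               # inner commit only decrements the counter
assert not conn.committed
outer.commit()               # outermost commit takes effect
assert conn.committed
```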
-
-Note that SQLAlchemy's Object Relational Mapper also provides a way to control transaction scope at a higher level; this is described in :ref:`unitofwork_transaction`.
-
-.. index::
- single: thread safety; transactions
-
-Transaction facts:
-
-* the Transaction object, just like its parent Connection, is **not thread-safe**.
-
-Understanding Autocommit
-------------------------
-
-
-The above transaction example illustrates how to use :class:`~sqlalchemy.engine.base.Transaction` so that several executions can take part in the same transaction. What happens when we issue an INSERT, UPDATE or DELETE call without using :class:`~sqlalchemy.engine.base.Transaction`? The answer is **autocommit**. While many DBAPIs implement a flag called ``autocommit``, SQLAlchemy currently implements its own autocommit: it detects statements which represent data-changing operations, i.e. INSERT, UPDATE, DELETE, etc., and issues a COMMIT automatically if no transaction is in progress. The detection is based on compiled statement attributes, or in the case of a text-only statement, via regular expressions.
-
-.. sourcecode:: python+sql
-
- conn = engine.connect()
- conn.execute("INSERT INTO users VALUES (1, 'john')") # autocommits
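The regex-based detection for textual statements can be sketched roughly as follows; the actual pattern SQLAlchemy uses differs, this only illustrates the idea:

```python
import re

# Hypothetical sketch of data-changing statement detection; SQLAlchemy's
# real pattern is different, this only shows the mechanism.
AUTOCOMMIT_RE = re.compile(
    r'\s*(?:INSERT|UPDATE|DELETE|CREATE|DROP|ALTER)\b', re.IGNORECASE)

def should_autocommit(statement):
    """Return True if the textual statement looks data-changing."""
    return AUTOCOMMIT_RE.match(statement) is not None

assert should_autocommit("INSERT INTO users VALUES (1, 'john')")
assert not should_autocommit("SELECT * FROM users")
```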
-
-.. _dbengine_implicit:
-
-Connectionless Execution, Implicit Execution
-=============================================
-
-Recall from the first section that we mentioned executing with and without a :class:`~sqlalchemy.engine.base.Connection`. "Connectionless" execution refers to calling the ``execute()`` method on an object which is not a :class:`~sqlalchemy.engine.base.Connection`; this could be the :class:`~sqlalchemy.engine.base.Engine` itself, or a constructed SQL object. When we say "implicit", we mean that we are calling the ``execute()`` method on an object which is neither a :class:`~sqlalchemy.engine.base.Connection` nor an :class:`~sqlalchemy.engine.base.Engine` object; this can only be used with constructed SQL objects which have their own ``execute()`` method, and can be "bound" to an :class:`~sqlalchemy.engine.base.Engine`. A description of "constructed SQL objects" may be found in :ref:`sqlexpression_toplevel`.
-
-A summary of all three methods follows below. First, assume the usage of the following :class:`~sqlalchemy.schema.MetaData` and :class:`~sqlalchemy.schema.Table` objects; while we haven't yet introduced these concepts, for now you only need to know that we are representing a database table, and are creating an "executable" SQL construct which issues a statement to the database. These objects are described in :ref:`metadata_toplevel`.
-
-.. sourcecode:: python+sql
-
- meta = MetaData()
- users_table = Table('users', meta,
- Column('id', Integer, primary_key=True),
- Column('name', String(50))
- )
-
-Explicit execution delivers the SQL text or constructed SQL expression to the ``execute()`` method of :class:`~sqlalchemy.engine.base.Connection`:
-
-.. sourcecode:: python+sql
-
- engine = create_engine('sqlite:///file.db')
- connection = engine.connect()
- result = connection.execute(users_table.select())
- for row in result:
- # ....
- connection.close()
-
-Explicit, connectionless execution delivers the expression to the ``execute()`` method of :class:`~sqlalchemy.engine.base.Engine`:
-
-.. sourcecode:: python+sql
-
- engine = create_engine('sqlite:///file.db')
- result = engine.execute(users_table.select())
- for row in result:
- # ....
- result.close()
-
-Implicit execution is also connectionless, and calls the ``execute()`` method on the expression itself, utilizing the fact that either an :class:`~sqlalchemy.engine.base.Engine` or :class:`~sqlalchemy.engine.base.Connection` has been *bound* to the expression object (binding is discussed further in the next section, :ref:`metadata_toplevel`):
-
-.. sourcecode:: python+sql
-
- engine = create_engine('sqlite:///file.db')
- meta.bind = engine
- result = users_table.select().execute()
- for row in result:
- # ....
- result.close()
-
-In both "connectionless" examples, the :class:`~sqlalchemy.engine.base.Connection` is created behind the scenes; the :class:`~sqlalchemy.engine.base.ResultProxy` returned by the ``execute()`` call references the :class:`~sqlalchemy.engine.base.Connection` used to issue the SQL statement. When we issue ``close()`` on the :class:`~sqlalchemy.engine.base.ResultProxy`, or if the result set object falls out of scope and is garbage collected, the underlying :class:`~sqlalchemy.engine.base.Connection` is closed for us, resulting in the DBAPI connection being returned to the pool.
-
-.. _threadlocal_strategy:
-
-Using the Threadlocal Execution Strategy
------------------------------------------
-
-The "threadlocal" engine strategy is used by non-ORM applications which wish to bind a transaction to the current thread, such that all parts of the application can participate in that transaction implicitly without the need to explicitly reference a :class:`~sqlalchemy.engine.base.Connection`. "threadlocal" is designed for a very specific pattern of use, and is not appropriate unless this very specific pattern, described below, is what's desired. It has **no impact** on the "thread safety" of SQLAlchemy components or one's application. It also should not be used when using an ORM :class:`~sqlalchemy.orm.session.Session` object, as the :class:`~sqlalchemy.orm.session.Session` itself represents an ongoing transaction and itself handles the job of maintaining connection and transactional resources.
-
-Enabling ``threadlocal`` is achieved as follows:
-
-.. sourcecode:: python+sql
-
- db = create_engine('mysql://localhost/test', strategy='threadlocal')
-
-When the engine above is used in a "connectionless" style, meaning ``engine.execute()`` is called, a DBAPI connection is retrieved from the connection pool and then associated with the current thread. Subsequent operations on the :class:`~sqlalchemy.engine.base.Engine` while the DBAPI connection remains checked out will make use of the *same* DBAPI connection object. The connection stays allocated until all returned :class:`~sqlalchemy.engine.base.ResultProxy` objects are closed, which occurs for a particular :class:`~sqlalchemy.engine.base.ResultProxy` after all pending results are fetched, or immediately for an operation which returns no rows (such as an INSERT).
-
-.. sourcecode:: python+sql
-
- # execute one statement and receive results. r1 now references a DBAPI connection resource.
- r1 = db.execute("select * from table1")
-
- # execute a second statement and receive results. r2 now references the *same* resource as r1
- r2 = db.execute("select * from table2")
-
- # fetch a row on r1 (assume more results are pending)
- row1 = r1.fetchone()
-
- # fetch a row on r2 (same)
- row2 = r2.fetchone()
-
- # close r1. the connection is still held by r2.
- r1.close()
-
- # close r2. with no more references to the underlying connection resources, they
- # are returned to the pool.
- r2.close()
-
-The above example does not illustrate any pattern that is particularly useful, as it is not a frequent occurrence that two execute/result-fetching operations "leapfrog" one another. There is a slight savings of connection pool checkout overhead between the two operations, and an implicit sharing of the same transactional context, but since there is no explicitly declared transaction, this association is short lived.
-
-The real usage of "threadlocal" comes when we want several operations to occur within the scope of a shared transaction. The :class:`~sqlalchemy.engine.base.Engine` now has ``begin()``, ``commit()`` and ``rollback()`` methods which will retrieve a connection resource from the pool and establish a new transaction, maintaining the connection against the current thread until the transaction is committed or rolled back:
-
-.. sourcecode:: python+sql
-
- db.begin()
- try:
- call_operation1()
- call_operation2()
- db.commit()
- except:
- db.rollback()
-
-``call_operation1()`` and ``call_operation2()`` can make use of the :class:`~sqlalchemy.engine.base.Engine` as a global variable, using the "connectionless" execution style, and their operations will participate in the same transaction:
-
-.. sourcecode:: python+sql
-
- def call_operation1():
- engine.execute("insert into users values (?, ?)", 1, "john")
-
- def call_operation2():
- users.update(users.c.user_id==5).execute(name='ed')
-
-When using threadlocal, operations that do call upon the ``engine.connect()`` method will receive a :class:`~sqlalchemy.engine.base.Connection` that is **outside** the scope of the transaction. This can be used for operations such as logging the status of an operation regardless of transaction success:
-
-.. sourcecode:: python+sql
-
- db.begin()
- conn = db.connect()
- try:
- conn.execute(log_table.insert(), message="Operation started")
- call_operation1()
- call_operation2()
- db.commit()
- conn.execute(log_table.insert(), message="Operation succeeded")
- except:
- db.rollback()
- conn.execute(log_table.insert(), message="Operation failed")
- finally:
- conn.close()
-
-Functions which are written to use an explicit :class:`~sqlalchemy.engine.base.Connection` object, but wish to participate in the threadlocal transaction, can receive their :class:`~sqlalchemy.engine.base.Connection` object from the ``contextual_connect()`` method, which returns a :class:`~sqlalchemy.engine.base.Connection` that is **inside** the scope of the transaction:
-
-.. sourcecode:: python+sql
-
- conn = db.contextual_connect()
- call_operation3(conn)
- conn.close()
-
-Calling ``close()`` on the "contextual" connection does not release the connection resources to the pool if other resources are making use of it. A reference-counting mechanism is employed so that the connection is released back to the pool only when all users of that connection, including the transaction established by ``engine.begin()``, have completed.
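The resource-counting mechanism can be modeled with a small illustrative sketch (a toy; SQLAlchemy's actual bookkeeping differs):

```python
class SharedResource:
    """Toy model of the reference counting described above: the pooled
    resource is released only when every user has called close()."""
    def __init__(self):
        self.refcount = 0
        self.released = False

    def checkout(self):
        self.refcount += 1
        return self

    def close(self):
        self.refcount -= 1
        if self.refcount == 0:
            self.released = True   # only now does the pool get it back

res = SharedResource()
txn = res.checkout()      # the threadlocal transaction holds one reference
conn = res.checkout()     # a "contextual" connection holds another
conn.close()
assert not res.released   # still held by the transaction
txn.close()
assert res.released       # all users done; returned to the pool
```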
-
-So remember: if you're not sure whether you need ``strategy="threadlocal"``, the answer is **no**! It's driven by a specific programming pattern that is generally not the norm.
-
-.. _dbengine_logging:
-
-Configuring Logging
-====================
-
-Python's standard `logging <http://www.python.org/doc/lib/module-logging.html>`_ module is used to implement informational and debug log output with SQLAlchemy. This allows SQLAlchemy's logging to integrate in a standard way with other applications and libraries. The ``echo`` and ``echo_pool`` flags that are present on :func:`~sqlalchemy.create_engine`, as well as the ``echo_uow`` flag used on :class:`~sqlalchemy.orm.session.Session`, all interact with regular loggers.
-
-This section assumes familiarity with the above linked logging module. All logging performed by SQLAlchemy exists underneath the ``sqlalchemy`` namespace, as used by ``logging.getLogger('sqlalchemy')``. When logging has been configured (i.e. such as via ``logging.basicConfig()``), the general namespace of SA loggers that can be turned on is as follows:
-
-* ``sqlalchemy.engine`` - controls SQL echoing. Set to ``logging.INFO`` for SQL query output, ``logging.DEBUG`` for query + result set output.
-* ``sqlalchemy.dialects`` - controls custom logging for SQL dialects. See the documentation of individual dialects for details.
-* ``sqlalchemy.pool`` - controls connection pool logging. Set to ``logging.INFO`` or lower to log connection pool checkouts/checkins.
-* ``sqlalchemy.orm`` - controls logging of various ORM functions. Set to ``logging.INFO`` for configurational logging as well as unit of work dumps, ``logging.DEBUG`` for extensive logging during query and flush() operations. Subcategories of ``sqlalchemy.orm`` include:
-
-  * ``sqlalchemy.orm.attributes`` - logs certain instrumented attribute operations, such as triggered callables
-  * ``sqlalchemy.orm.mapper`` - logs Mapper configuration and operations
-  * ``sqlalchemy.orm.unitofwork`` - logs flush() operations, including dependency sort graphs and other operations
-  * ``sqlalchemy.orm.strategies`` - logs relationship loader operations (i.e. lazy and eager loads)
-  * ``sqlalchemy.orm.sync`` - logs synchronization of attributes from parent to child instances during a flush()
-
-For example, to log SQL queries as well as unit of work debugging:
-
-.. sourcecode:: python+sql
-
- import logging
-
- logging.basicConfig()
- logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
- logging.getLogger('sqlalchemy.orm.unitofwork').setLevel(logging.DEBUG)
-
-By default, the log level is set to ``logging.ERROR`` within the entire ``sqlalchemy`` namespace so that no log operations occur, even within an application that has logging enabled otherwise.
-
-The ``echo`` flags present as keyword arguments to :func:`~sqlalchemy.create_engine` and others, as well as the ``echo`` property on :class:`~sqlalchemy.engine.base.Engine`, when set to ``True``, will first attempt to ensure that logging is enabled. Unfortunately, the ``logging`` module provides no way of determining if output has already been configured (note we are referring to whether a logging configuration has been set up, not just whether the logging level is set). For this reason, any ``echo=True`` flag will result in a call to ``logging.basicConfig()`` using ``sys.stdout`` as the destination. It also sets up a default format using the level name, timestamp, and logger name. Note that this configuration has the effect of being applied **in addition** to any existing logger configurations. Therefore, **when using Python logging, ensure all echo flags are set to False at all times** to avoid getting duplicate log lines.
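A sketch of the recommended approach: configure logging once yourself, leave all echo flags at ``False``, and tune the individual ``sqlalchemy.*`` loggers. Child loggers such as those of individual Engine instances inherit their effective level from these namespaces:

```python
import logging

# Configure one handler/format for the whole application.
logging.basicConfig(
    format="%(asctime)s %(levelname)s %(name)s %(message)s")

# Keep the root "sqlalchemy" namespace quiet, then opt in per category.
logging.getLogger('sqlalchemy').setLevel(logging.ERROR)
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)   # SQL echo
logging.getLogger('sqlalchemy.pool').setLevel(logging.DEBUG)    # pool activity

# Loggers beneath a configured namespace inherit its effective level.
assert (logging.getLogger('sqlalchemy.engine.base.Engine')
        .getEffectiveLevel() == logging.INFO)
```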
-
-The logger name of an instance such as an :class:`~sqlalchemy.engine.base.Engine` or :class:`~sqlalchemy.pool.Pool` defaults to a truncated hex identifier string. To set this to a specific name, use the ``logging_name`` and ``pool_logging_name`` keyword arguments with :func:`sqlalchemy.create_engine`.
-.. _sqlalchemy.dialects_toplevel:
+.. _dialect_toplevel:
Dialects
========
.. automodule:: association
+
+.. _examples_instrumentation:
+
Attribute Instrumentation
-------------------------
Above, the two most significant front-facing portions of SQLAlchemy are the **Object Relational Mapper** and the **SQL Expression Language**. SQL Expressions can be used independently of the ORM. When using the ORM, the SQL Expression language remains part of the public facing API as it is used within object-relational configurations and queries.
-Tutorials
-=========
+Documentation Overview
+======================
-* :ref:`ormtutorial_toplevel` - This describes the richest feature of SQLAlchemy, its object relational mapper. If you want to work with higher-level SQL which is constructed automatically for you, as well as management of Python objects, proceed to this tutorial.
-* :ref:`sqlexpression_toplevel` - The core of SQLAlchemy is its SQL expression language. The SQL Expression Language is a toolkit all its own, independent of the ORM package, which can be used to construct manipulable SQL expressions which can be programmatically constructed, modified, and executed, returning cursor-like result sets. It's a lot more lightweight than the ORM and is appropriate for higher scaling SQL operations. It's also heavily present within the ORM's public facing API, so advanced ORM users will want to master this language as well.
+The documentation is separated into three sections: :ref:`orm_toplevel`, :ref:`core_toplevel`, and :ref:`dialect_toplevel`.
-Main Documentation
-==================
+In :ref:`orm_toplevel`, the Object Relational Mapper is introduced and fully
+described. New users should begin with the :ref:`ormtutorial_toplevel`. If you
+want to work with higher-level SQL which is constructed automatically for you,
+as well as management of Python objects, proceed to this tutorial.
-* :ref:`datamapping_toplevel` - A comprehensive walkthrough of major ORM patterns and techniques.
-* :ref:`session_toplevel` - A detailed description of SQLAlchemy's Session object
-* :ref:`engines_toplevel` - Describes SQLAlchemy's database-connection facilities, including connection documentation and working with connections and transactions.
-* :ref:`metadata_toplevel` - All about schema management using :class:`~sqlalchemy.schema.MetaData` and :class:`~sqlalchemy.schema.Table` objects; reading database schemas into your application, creating and dropping tables, constraints, defaults, sequences, indexes.
-* :ref:`pooling_toplevel` - Further detail about SQLAlchemy's connection pool library.
-* :ref:`types` - Datatypes included with SQLAlchemy, their functions, as well as how to create your own types.
-* :ref:`plugins` - Included addons for SQLAlchemy
+In :ref:`core_toplevel`, the breadth of SQLAlchemy's SQL and database
+integration and description services are documented, the core of which is the
+SQL Expression language. The SQL Expression Language is a toolkit all its own,
+independent of the ORM package, which can be used to construct manipulable SQL
+expressions which can be programmatically constructed, modified, and executed,
+returning cursor-like result sets. In contrast to the ORM's domain-centric
+mode of usage, the expression language provides a schema-centric usage
+paradigm. New users should begin here with :ref:`sqlexpression_toplevel`.
+SQLAlchemy engine, connection, and pooling services are also described in
+:ref:`core_toplevel`.
-Code Examples
-=============
+In :ref:`dialect_toplevel`, reference documentation for all provided
+database and DBAPI backends is provided.
-Working code examples are included in the SQLAlchemy distribution, and there are also usage recipes on the SQLAlchemy wiki. A description of all the included example applications is at :ref:`examples_toplevel`.
-
-API Reference
+Code Examples
=============
-An organized section of all SQLAlchemy APIs is at :ref:`api_reference_toplevel`.
+Working code examples, mostly regarding the ORM, are included in the
+SQLAlchemy distribution, and there are also usage recipes on the SQLAlchemy
+wiki. A description of all the included example applications is at
+:ref:`examples_toplevel`.
Installing SQLAlchemy
======================
-Installing SQLAlchemy from scratch is most easily achieved with `setuptools <http://pypi.python.org/pypi/setuptools/>`_. Assuming it's installed, just run this from the command-line:
+Installing SQLAlchemy from scratch is most easily achieved with `setuptools
+<http://pypi.python.org/pypi/setuptools/>`_. Assuming it's installed, just run
+this from the command-line:
.. sourcecode:: none
# easy_install SQLAlchemy
-This command will download the latest version of SQLAlchemy from the `Python Cheese Shop <http://pypi.python.org/pypi/SQLAlchemy>`_ and install it to your system.
+This command will download the latest version of SQLAlchemy from the `Python
+Cheese Shop <http://pypi.python.org/pypi/SQLAlchemy>`_ and install it to your
+system.
* setuptools_
* `install setuptools <http://peak.telecommunity.com/DevCenter/EasyInstall#installation-instructions>`_
.. _collections_toplevel:
+.. currentmodule:: sqlalchemy.orm
+
Collection Configuration and Techniques
=======================================
Note that eager/lazy loading options cannot be used in conjunction with dynamic relationships at this time.
+.. autofunction:: dynamic_loader
+
Setting Noload
~~~~~~~~~~~~~~~
.. autoclass:: collection
-.. autoclass:: sqlalchemy.orm.collections.MappedCollection
- :members:
-
.. autofunction:: collection_adapter
.. autofunction:: column_mapped_collection
.. autofunction:: mapped_collection
+.. autoclass:: sqlalchemy.orm.collections.MappedCollection
+ :members:
+
+
-.. module:: sqlalchemy.orm.interfaces
-
-Deprecated ORM Interfaces
-=========================
-
-The event system described in :ref:`events` now supersedes the previous
-"Extension" classes found in all versions of SQLAlchemy through version 0.7.
-The classes here are deprecated.
-
-MapperExtension
------------------
-
-MapperExtension is now superseded by the events described in :class:`.MapperEvents`.
-
-To use MapperExtension, make your own subclass of it and just send it off to a mapper::
-
- m = mapper(User, users_table, extension=MyExtension())
-
-Multiple extensions will be chained together and processed in order; they are specified as a list::
-
- m = mapper(User, users_table, extension=[ext1, ext2, ext3])
-
-.. autoclass:: MapperExtension
-
-SessionExtension
------------------
-
-SessionExtension is now superseded by the events described in :class:`.SessionEvents`.
-
-The :class:`~sqlalchemy.orm.interfaces.SessionExtension` applies plugin points for :class:`.Session` objects::
-
- class MySessionExtension(SessionExtension):
- def before_commit(self, session):
- print "before commit!"
-
- Session = sessionmaker(extension=MySessionExtension())
-
-The same :class:`~sqlalchemy.orm.interfaces.SessionExtension` instance can be
-used with any number of sessions.
-
-.. autoclass:: SessionExtension
-
-AttributeExtension
---------------------
-
-AttributeExtension is now superseded by the events described in :class:`.AttributeEvents`.
-
-.. autoclass:: AttributeExtension
-
+## In 0.7, move "interfaces" to here.
\ No newline at end of file
.. _associationproxy:
-associationproxy
-================
+Association Proxy
+=================
.. module:: sqlalchemy.ext.associationproxy
-declarative
+Declarative
===========
.. automodule:: sqlalchemy.ext.declarative
-Horizontal Shard
-=================
+Horizontal Sharding
+===================
.. automodule:: sqlalchemy.ext.horizontal_shard
--- /dev/null
+.. _plugins:
+.. _sqlalchemy.ext:
+
+ORM Extensions
+==============
+
+SQLAlchemy has a variety of ORM extensions available, which add additional
+functionality to the core behavior.
+
+.. toctree::
+ :maxdepth: 1
+
+ associationproxy
+ declarative
+ orderinglist
+ horizontal_shard
+ sqlsoup
+
-orderinglist
-============
+Ordering List
+=============
.. automodule:: sqlalchemy.ext.orderinglist
+.. _orm_toplevel:
+
SQLAlchemy ORM
===============
session
query
loading
- events
- extensions
+ interfaces
+ extensions/index
examples
- deprecated
+
\ No newline at end of file
--- /dev/null
+.. currentmodule:: sqlalchemy.orm.interfaces
+
+.. _interfaces_orm_toplevel:
+
+
+ORM Event Interfaces
+====================
+
+This section describes the various categories of events which can be intercepted
+within the SQLAlchemy ORM.
+
+For non-ORM event documentation, see :ref:`interfaces_core_toplevel`.
+
+A new version of this API with a significantly more flexible and consistent
+interface will be available in version 0.7.
+
+Mapper Events
+-----------------
+
+To use :class:`.MapperExtension`, make your own subclass of it and just send it off to a mapper::
+
+ from sqlalchemy.orm.interfaces import MapperExtension
+
+ class MyExtension(MapperExtension):
+ def before_insert(self, mapper, connection, instance):
+ print "instance %s before insert !" % instance
+
+ m = mapper(User, users_table, extension=MyExtension())
+
+Multiple extensions will be chained together and processed in order; they are specified as a list::
+
+ m = mapper(User, users_table, extension=[ext1, ext2, ext3])
+
+.. autoclass:: MapperExtension
+ :members:
+
+Session Events
+-----------------
+
+The :class:`.SessionExtension` applies plugin points for :class:`.Session` objects::
+
+ from sqlalchemy.orm.interfaces import SessionExtension
+
+ class MySessionExtension(SessionExtension):
+ def before_commit(self, session):
+ print "before commit!"
+
+ Session = sessionmaker(extension=MySessionExtension())
+
+The same :class:`~sqlalchemy.orm.interfaces.SessionExtension` instance can be
+used with any number of sessions.
+
+.. autoclass:: SessionExtension
+ :members:
+
+Attribute Events
+--------------------
+
+:class:`.AttributeExtension` is used to listen for set, remove, and append
+events on individual mapped attributes. It is established on an individual
+mapped attribute using the ``extension`` argument, available on
+:func:`.column_property`, :func:`.relationship`, and others::
+
+ from sqlalchemy.orm.interfaces import AttributeExtension
+ from sqlalchemy.orm import mapper, relationship, column_property
+
+ class MyAttrExt(AttributeExtension):
+ def append(self, state, value, initiator):
+ print "append event !"
+ return value
+
+ def set(self, state, value, oldvalue, initiator):
+ print "set event !"
+ return value
+
+ mapper(SomeClass, sometable, properties={
+ 'foo':column_property(sometable.c.foo, extension=MyAttrExt()),
+ 'bar':relationship(Bar, extension=MyAttrExt())
+ })
+
+Note that the :class:`AttributeExtension` methods
+:meth:`~.AttributeExtension.append` and :meth:`~.AttributeExtension.set` need
+to return the ``value`` parameter. The returned value is used as the effective
+value, and allows the extension to change what is ultimately persisted.
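The value pipeline, in which the extension's return value becomes the effective value, can be modeled with a small illustrative sketch (a toy, not the ORM's actual attribute machinery):

```python
class ToyAttribute:
    """Toy model of the set-event pipeline described above: whatever
    each extension's set() returns is the value actually stored."""
    def __init__(self, extensions):
        self.extensions = extensions
        self.value = None

    def set(self, value):
        for ext in self.extensions:
            # the returned value replaces the incoming value
            value = ext.set(self, value, self.value, None)
        self.value = value

class UppercaseExt:
    """Hypothetical extension that changes what is persisted."""
    def set(self, state, value, oldvalue, initiator):
        return value.upper()

attr = ToyAttribute([UppercaseExt()])
attr.set("ed")
assert attr.value == "ED"   # the extension's return value was stored
```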
+
+.. autoclass:: AttributeExtension
+ :members:
+
+Instrumentation Events and Re-implementation
+---------------------------------------------
+
+:class:`.InstrumentationManager` can be subclassed in order to receive class
+instrumentation events as well as to change how class instrumentation
+proceeds. This class exists for the purposes of integration with other object
+management frameworks which would like to entirely modify the instrumentation
+methodology of the ORM, and is not intended for regular usage. One possible
+exception is the :meth:`.InstrumentationManager.post_configure_attribute`
+method, which can be useful for adding extensions to all mapped attributes,
+though a much better way to do this will be available in a future release of
+SQLAlchemy.
+
+For an example of :class:`.InstrumentationManager`, see the example
+:ref:`examples_instrumentation`.
+
+.. autoclass:: InstrumentationManager
+ :members:
+ :undoc-members:
\ No newline at end of file
Using Loader Strategies: Lazy Loading, Eager Loading
----------------------------------------------------
+By default, all inter-object relationships are **lazy loading**. The scalar or
+collection attribute associated with a :func:`~sqlalchemy.orm.relationship`
+contains a trigger which fires the first time the attribute is accessed. This
+trigger, in all but one case, issues a SQL call at the point of access
+in order to load the related object or objects:
+
+.. sourcecode:: python+sql
+
+ {sql}>>> jack.addresses
+ SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address,
+ addresses.user_id AS addresses_user_id
+ FROM addresses
+ WHERE ? = addresses.user_id
+ [5]
+ {stop}[<Address(u'jack@google.com')>, <Address(u'j25@yahoo.com')>]
+
+The one case where SQL is not emitted is for a simple many-to-one relationship, when
+the related object can be identified by its primary key alone and that object is already
+present in the current :class:`.Session`.
+
+This default behavior of "load upon attribute access" is known as "lazy" or
+"select" loading - the name "select" because a "SELECT" statement is typically emitted
+when the attribute is first accessed.
+
In the :ref:`ormtutorial_toplevel`, we introduced the concept of **Eager
Loading**. We used an ``option`` in conjunction with the
:class:`~sqlalchemy.orm.query.Query` object in order to indicate that a
relationship should be loaded at the same time as the parent, within a single
-SQL query:
+SQL query. This option is known as :func:`.joinedload`:
.. sourcecode:: python+sql
WHERE users.name = ?
['jack']
-By default, all inter-object relationships are **lazy loading**. The scalar or
-collection attribute associated with a :func:`~sqlalchemy.orm.relationship`
-contains a trigger which fires the first time the attribute is accessed, which
-issues a SQL call at that point:
-
-.. sourcecode:: python+sql
-
- {sql}>>> jack.addresses
- SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address,
- addresses.user_id AS addresses_user_id
- FROM addresses
- WHERE ? = addresses.user_id
- [5]
- {stop}[<Address(u'jack@google.com')>, <Address(u'j25@yahoo.com')>]
-A second option for eager loading exists, called "subquery" loading. This kind
-of eager loading emits an additional SQL statement for each collection
-requested, aggregated across all parent objects:
+In addition to "joined eager loading", a second option for eager loading
+exists, called "subquery eager loading". This kind of eager loading emits an
+additional SQL statement for each collection requested, aggregated across all
+parent objects:
.. sourcecode:: python+sql
('jack',)
The default **loader strategy** for any :func:`~sqlalchemy.orm.relationship`
-is configured by the ``lazy`` keyword argument, which defaults to ``select``.
+is configured by the ``lazy`` keyword argument, which defaults to ``select``; this
+indicates "select" loading, i.e. lazy loading via a SELECT statement emitted upon first access.
Below we set it as ``joined`` so that the ``children`` relationship uses eager
loading via a join:
There are two other loader strategies available, **dynamic loading** and **no loading**; these are described in :ref:`largecollections`.
+The Zen of Eager Loading
+-------------------------
+
+The philosophy behind loader strategies is that any set of loading schemes can be
+applied to a particular query, and *the results don't change* - only the number
+of SQL statements required to fully load related objects and collections changes. A particular
+query might start out using all lazy loads. After using it in context, it might be revealed
+that particular attributes or collections are always accessed, and that it would be more
+efficient to change the loader strategy for these. The strategy can be changed with no other
+modifications to the query; the results will remain identical, but fewer SQL statements will be emitted.
+In theory (and pretty much in practice), nothing you can do to the :class:`.Query` would make it load
+a different set of primary or related objects based on a change in loader strategy.
+
+The way eager loading does this, and in particular how :func:`joinedload`
+works, is that it creates an anonymous alias of all the joins it adds to your
+query, so that they can't be referenced by other parts of the query. If the
+query contains a DISTINCT, or a limit or offset, the statement is first
+wrapped inside a subquery, and joins are applied to that. As the user, you
+don't have access to these aliases or subqueries, and you cannot affect what
+data they will load at query time - a typical beginner misunderstanding is
+that adding a :meth:`.Query.order_by`, naming the joined relationship, would
+change the order of the collection, or that the entries in the collection as
+it is loaded could be affected by :meth:`.Query.filter`. Not the case! If
+you'd like to join from one table to another, filtering or ordering on the
+joined result, you'd use :meth:`.Query.join`. If you then wanted that joined
+result to populate itself into a related collection, this is also available,
+via the :func:`.contains_eager` option - see :ref:`contains_eager`.
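The "results don't change" principle can be demonstrated in miniature with the stdlib ``sqlite3`` module: the lazy style emits 1 + N statements, the joined style emits one, and both collate to the same data. The tables and queries below are invented for illustration and are not actual SQLAlchemy output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE addresses (id INTEGER PRIMARY KEY,
                            user_id INTEGER, email TEXT);
    INSERT INTO users VALUES (1, 'jack'), (2, 'ed');
    INSERT INTO addresses VALUES (1, 1, 'jack@google.com'),
                                 (2, 1, 'j25@yahoo.com');
""")

# "lazy" style: one SELECT per parent row (1 + N statements)
lazy = {}
for uid, name in conn.execute("SELECT id, name FROM users ORDER BY id"):
    lazy[name] = [e for (e,) in conn.execute(
        "SELECT email FROM addresses WHERE user_id = ? ORDER BY id",
        (uid,))]

# "joined eager" style: one statement, rows collated in memory
eager = {}
for name, email in conn.execute(
        "SELECT u.name, a.email FROM users u "
        "LEFT OUTER JOIN addresses a ON a.user_id = u.id "
        "ORDER BY u.id, a.id"):
    eager.setdefault(name, [])
    if email is not None:
        eager[name].append(email)

# identical results; only the number of statements differed
assert lazy == eager
```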
+
What Kind of Loading to Use ?
-----------------------------
joined loading, however, except perhaps that subquery loading can use an INNER JOIN in all cases
whereas joined loading requires that the foreign key is NOT NULL.
+.. _contains_eager:
+
Routing Explicit Joins/Statements into Eagerly Loaded Collections
------------------------------------------------------------------
For further examples on this particular use case, see :ref:`maptojoin`.
+.. _deferred:
+
Deferred Column Loading
------------------------
.. autofunction:: clear_mappers
+.. autofunction:: sqlalchemy.orm.util.identity_key
+
+.. autofunction:: sqlalchemy.orm.util.polymorphic_union
+
.. autoclass:: sqlalchemy.orm.mapper.Mapper
:members:
Querying
========
+This section provides API documentation for the :class:`.Query` object and related constructs.
+
+For an in-depth introduction to querying with the SQLAlchemy ORM, please see the :ref:`ormtutorial_toplevel`.
+
+
.. module:: sqlalchemy.orm
The Query Object
.. autofunction:: backref
-.. autofunction:: dynamic_loader
-
.. autofunction:: relation
Using the Session
=================
+.. module:: sqlalchemy.orm.session
+
The :func:`.orm.mapper` function and :mod:`~sqlalchemy.ext.declarative` extensions
are the primary configurational interface for the ORM. Once mappings are
configured, the primary usage interface for persistence operations is the
engine.commit() # commit the transaction
+The :class:`.Session` object and :func:`.sessionmaker` function
+================================================================
+
+.. autofunction:: sessionmaker
+
+.. autoclass:: sqlalchemy.orm.session.Session
+ :members:
+
.. _unitofwork_contextual:
Contextual/Thread-local Sessions
Session.remove() <-
web response <-
-The above example illustrates an explicit call to ``Session.remove()``. This
+The above example illustrates an explicit call to :meth:`.ScopedSession.remove`. This
has the effect such that each web request starts fresh with a brand new
-session. When integrating with a web framework, there's actually many options
-on how to proceed for this step:
-
-* Session.remove() - this is the most cut and dry approach; the
- :class:`~sqlalchemy.orm.session.Session` is thrown away, all of its
- transactional resources rolled back and connections checked back to the
- connection pool. A new :class:`~sqlalchemy.orm.session.Session` will be used
- on the next request.
-* Session.close() - Similar to calling ``remove()``, in that all objects are
- explicitly expunged, transactional resources are rolled back, connection
- resources checked back into the connection pool, except the actual
- :class:`~sqlalchemy.orm.session.Session` object hangs around. It doesn't
- make too much difference here unless the start of the web request would like
- to pass specific options to the initial construction of
- :class:`~sqlalchemy.orm.session.Session()`, such as a specific
- :class:`~sqlalchemy.engine.base.Engine` to bind to.
-* Session.commit() - In this case, the behavior is that any remaining changes
- pending are flushed, and the transaction is committed; connection resources
- are returned to the connection pool. The full state of the session is
- expired, so that when the next web request is started, all data will be
- reloaded. In reality, the contents of the
- :class:`~sqlalchemy.orm.session.Session` are weakly referenced anyway so its
- likely that it will be empty on the next request in any case.
-* Session.rollback() - Similar to calling commit, except we assume that the
- user would have called commit explicitly if that was desired; the
- :func:`~sqlalchemy.orm.session.Session.rollback` ensures that no
- transactional state remains, returns connections to the connection pool, and
- expires all data, in the case that the request was aborted and did not roll
- back itself.
-* do nothing - this is a valid option as well. The controller code is
- responsible for doing one of the above steps at the end of the request.
-
-Scoped Session API docs: :func:`sqlalchemy.orm.scoped_session`
+session, and is the most definitive approach to closing out a request.
+
+It's not strictly necessary to remove the session at the end of the request -
+other options include calling :meth:`.Session.close`, :meth:`.Session.rollback`, or
+:meth:`.Session.commit` at the end so that the existing session returns
+its connections to the pool and removes any existing transactional context.
+Doing nothing is an option too, if individual controller methods take responsibility
+for ensuring that no transactions remain open after a request ends.
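The thread-local mechanics underlying :func:`.scoped_session` - one ``Session`` per thread, discarded by ``remove()`` so the next request starts fresh - can be sketched with the stdlib. This is a simplified stand-in, not SQLAlchemy's actual implementation; the ``ScopedRegistry`` and ``Session`` classes here are invented for the example:

```python
import threading

class Session:
    """Stand-in for an ORM session; only close() is modeled."""
    def close(self):
        pass

class ScopedRegistry:
    """One Session per thread; remove() discards the current one."""
    def __init__(self, factory):
        self.factory = factory
        self._local = threading.local()

    def __call__(self):
        # return this thread's session, creating it on first use
        sess = getattr(self._local, "session", None)
        if sess is None:
            sess = self._local.session = self.factory()
        return sess

    def remove(self):
        # close and discard, so the next call starts fresh
        sess = getattr(self._local, "session", None)
        if sess is not None:
            sess.close()
            del self._local.session

registry = ScopedRegistry(Session)
s1 = registry()
assert registry() is s1      # same thread -> same session
registry.remove()            # end of the "request"
assert registry() is not s1  # next request gets a fresh session
```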
+
+Contextual Session API
+-----------------------
+
+.. autofunction:: sqlalchemy.orm.scoped_session
+
+.. autoclass:: sqlalchemy.orm.scoping.ScopedSession
+ :members:
.. _session_partitioning:
Horizontal partitioning partitions the rows of a single table (or a set of
tables) across multiple databases.
-See the "sharding" example in `attribute_shard.py
-<http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/examples/sharding/attribute_shard.py>`_
+See the "sharding" example: :ref:`examples_sharding`.
+
+Session Utilities
+=================
+
+.. autofunction:: make_transient
+
+.. autofunction:: object_session
+
+Attribute and State Management Utilities
+========================================
+
+.. autofunction:: sqlalchemy.orm.attributes.del_attribute
+
+.. autofunction:: sqlalchemy.orm.attributes.get_attribute
+
+.. autofunction:: sqlalchemy.orm.attributes.get_history
+
+.. autofunction:: sqlalchemy.orm.attributes.init_collection
+
+.. function:: sqlalchemy.orm.attributes.instance_state
+
+ Return the :class:`InstanceState` for a given object.
+
+.. autofunction:: sqlalchemy.orm.attributes.is_instrumented
+
+.. function:: sqlalchemy.orm.attributes.manager_of_class
+
+ Return the :class:`ClassManager` for a given class.
+
+.. autofunction:: sqlalchemy.orm.attributes.set_attribute
+
+.. autofunction:: sqlalchemy.orm.attributes.set_committed_value
+
+++ /dev/null
-.. _sqlalchemy.ext.compiler_toplevel:
-
-compiler
-========
-
-.. automodule:: sqlalchemy.ext.compiler
- :members:
\ No newline at end of file
+++ /dev/null
-.. _plugins:
-.. _sqlalchemy.ext:
-
-sqlalchemy.ext
-==============
-
-SQLAlchemy has a variety of extensions available which provide extra
-functionality to SA, either via explicit usage or by augmenting the
-core behavior.
-
-.. toctree::
- :glob:
-
- declarative
- associationproxy
- orderinglist
- serializer
- sqlsoup
- compiler
- horizontal_shard
-
+++ /dev/null
-.. _api_reference_toplevel:
-
-API Reference
-=============
-
-.. toctree::
- :maxdepth: 3
-
- sqlalchemy/index
- orm/index
- dialects/index
- ext/index
-
-Advanced Collection Mapping
-===========================
-
-This is an in-depth discussion of collection mechanics. For simple examples, see :ref:`alternate_collection_implementations`.
-
-.. automodule:: sqlalchemy.orm.collections
-
-.. autofunction:: attribute_mapped_collection
-
-.. autoclass:: collection
-
-.. autoclass:: sqlalchemy.orm.collections.MappedCollection
- :members:
-
-.. autofunction:: collection_adapter
-
-.. autofunction:: column_mapped_collection
-
-.. autofunction:: mapped_collection
-
-.. _sqlalchemy_orm_toplevel:
-
-sqlalchemy.orm
-==============
-
-.. toctree::
- :glob:
-
- mapping
- collections
- query
- sessions
- interfaces
- utilities
-
-
-Interfaces
-==========
-
-
\ No newline at end of file
.. autofunction:: sqlalchemy.orm.attributes.set_committed_value
-Internals
----------
-
-.. autoclass:: sqlalchemy.orm.interfaces.MapperProperty
- :members:
Sessions
========
-.. module:: sqlalchemy.orm.session
-
-.. autofunction:: sqlalchemy.orm.create_session
-
-.. autofunction:: make_transient
-
-.. autofunction:: object_session
-
-.. autofunction:: sqlalchemy.orm.scoped_session
-
-.. autofunction:: sessionmaker
-
-.. autoclass:: sqlalchemy.orm.session.Session
- :members:
-
-.. autoclass:: sqlalchemy.orm.scoping.ScopedSession
- :members:
=========
.. automodule:: sqlalchemy.orm.util
- :members: identity_key, Validator, with_parent, polymorphic_union
+ :members: polymorphic_union
:undoc-members:
+++ /dev/null
-Connections
-===========
-
-Creating Engines
-----------------
-
-.. autofunction:: sqlalchemy.create_engine
-
-.. autofunction:: sqlalchemy.engine_from_config
-
-.. autoclass:: sqlalchemy.engine.url.URL
- :members:
-
-Connectables
-------------
-
-.. currentmodule:: sqlalchemy.engine.base
-
-.. autoclass:: Engine
- :members:
- :undoc-members:
-
-.. autoclass:: Connection
- :members:
- :undoc-members:
-
-.. autoclass:: Connectable
- :members:
- :undoc-members:
-
-Result Objects
---------------
-
-.. autoclass:: sqlalchemy.engine.base.ResultProxy
- :members:
-
-.. autoclass:: sqlalchemy.engine.base.RowProxy
- :members:
-
-Transactions
-------------
-
-.. autoclass:: Transaction
- :members:
- :undoc-members:
-
-Internals
----------
-
-.. autofunction:: connection_memoize
-
-.. autoclass:: Dialect
- :members:
-
-.. autoclass:: sqlalchemy.engine.default.DefaultDialect
- :members:
- :show-inheritance:
-
-.. autoclass:: sqlalchemy.engine.default.DefaultExecutionContext
- :members:
- :show-inheritance:
-
-.. autoclass:: ExecutionContext
- :members:
-
+++ /dev/null
-.. _inspector_api_toplevel:
-
-Schema Introspection
-====================
-
-.. module:: sqlalchemy.engine.reflection
-
-SQLAlchemy provides rich schema introspection capabilities. The most common methods for this include the "autoload" argument of :class:`~sqlalchemy.schema.Table`::
-
- from sqlalchemy import create_engine, MetaData, Table
- engine = create_engine('...')
- meta = MetaData()
- user_table = Table('user', meta, autoload=True, autoload_with=engine)
-
-As well as the :meth:`~sqlalchemy.schema.MetaData.reflect` method of :class:`~sqlalchemy.schema.MetaData`::
-
- from sqlalchemy import create_engine, MetaData, Table
- engine = create_engine('...')
- meta = MetaData()
- meta.reflect(engine)
- user_table = meta.tables['user']
-
-Further examples of reflection using :class:`~sqlalchemy.schema.Table` and :class:`~sqlalchemy.schema.MetaData` can be found at :ref:`metadata_reflection`.
-
-There is also a low-level inspection interface available for more specific operations, known as the :class:`Inspector`::
-
- from sqlalchemy import create_engine
- from sqlalchemy.engine import reflection
- engine = create_engine('...')
- insp = reflection.Inspector.from_engine(engine)
- print insp.get_table_names()
-
-.. autoclass:: Inspector
- :members:
- :undoc-members:
- :show-inheritance:
+++ /dev/null
-Interfaces
-----------
-
-.. automodule:: sqlalchemy.interfaces
- :members:
- :undoc-members:
-
-
Tables and Columns
------------------
-.. autoclass:: Column
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: MetaData
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: Table
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: ThreadLocalMetaData
- :members:
- :undoc-members:
- :show-inheritance:
Constraints
-----------
-.. autoclass:: CheckConstraint
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: Constraint
- :members:
- :undoc-members:
- :show-inheritance:
-.. autoclass:: ForeignKey
- :members:
- :undoc-members:
- :show-inheritance:
-.. autoclass:: ForeignKeyConstraint
- :members:
- :undoc-members:
- :show-inheritance:
-.. autoclass:: Index
- :members:
- :undoc-members:
- :show-inheritance:
-.. autoclass:: PrimaryKeyConstraint
- :members:
- :undoc-members:
- :show-inheritance:
-.. autoclass:: UniqueConstraint
- :members:
- :undoc-members:
- :show-inheritance:
Default Generators and Markers
------------------------------
-.. autoclass:: ColumnDefault
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: DefaultClause
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: DefaultGenerator
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: FetchedValue
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: PassiveDefault
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: Sequence
- :members:
- :undoc-members:
- :show-inheritance:
.. _schema_api_ddl:
-DDL Generation
---------------
-
-.. autoclass:: DDLElement
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: DDL
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: CreateTable
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: DropTable
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: CreateSequence
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: DropSequence
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: CreateIndex
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: DropIndex
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: AddConstraint
- :members:
- :undoc-members:
- :show-inheritance:
-
-.. autoclass:: DropConstraint
- :members:
- :undoc-members:
- :show-inheritance:
Internals
---------
The above compiler will prefix all INSERT statements with "some prefix" when compiled.
+.. _type_compilation_extension:
+
Changing Compilation of Types
=============================
============
SqlSoup provides a convenient way to access existing database tables without
-having to declare table or mapper classes ahead of time. It is built on top of the SQLAlchemy ORM and provides a super-minimalistic interface to an existing database.
+having to declare table or mapper classes ahead of time. It is built on top of
+the SQLAlchemy ORM and provides a super-minimalistic interface to an existing
+database.
+
+SqlSoup effectively provides a coarse-grained, alternative interface to
+working with the SQLAlchemy ORM, providing a "self-configuring" interface
+for extremely rudimentary operations. It's somewhat akin to a
+"super novice mode" version of the ORM. While SqlSoup can be very handy,
+users are strongly encouraged to use the full ORM for non-trivial applications.
Suppose we have a database with users, books, and loans tables
(corresponding to the PyWebOff dataset, if you're curious).
See also :func:`eagerload` for the "automatic" version of this
functionality.
+ For additional examples of :func:`contains_eager` see
+ :ref:`contains_eager`.
+
"""
alias = kwargs.pop('alias', None)
if kwargs:
class MapperExtension(object):
"""Base implementation for customizing ``Mapper`` behavior.
-
+
New extension classes subclass ``MapperExtension`` and are specified
using the ``extension`` mapper() argument, which is a single
``MapperExtension`` or a list of such. A single mapper
when this symbol is returned. Like EXT_CONTINUE, it also
has additional significance in some cases that a default
mapper activity will not be performed.
-
+
"""
+
def instrument_class(self, mapper, class_):
"""Receive a class when the mapper is first constructed, and has
applied instrumentation to the mapped class.
\**flags
extra information about the row, same as criterion in
``create_row_processor()`` method of
- :class:`~sqlalchemy.orm.interfaces.MapperProperty`
+ :class:`~sqlalchemy.orm.interfaces.MapperProperty`
"""
return EXT_CONTINUE
def after_delete(self, mapper, connection, instance):
"""Receive an object instance after that instance is deleted.
-
- The return value is only significant within the ``MapperExtension``
+
+ The return value is only significant within the ``MapperExtension``
chain; the parent mapper's behavior isn't modified by this method.
-
+
"""
return EXT_CONTINUE
def options(self, *args):
"""Return a new Query object, applying the given list of
- MapperOptions.
-
+ mapper options.
+
+ Most supplied options regard changing how column- and
+ relationship-mapped attributes are loaded. See the sections
+ :ref:`deferred` and :ref:`loading_toplevel` for reference
+ documentation.
+
"""
return self._options(False, *args)
class Session(object):
"""Manages persistence operations for ORM-mapped objects.
- The Session is the front end to SQLAlchemy's **Unit of Work**
- implementation. The concept behind Unit of Work is to track modifications
- to a field of objects, and then be able to flush those changes to the
- database in a single operation.
-
- SQLAlchemy's unit of work includes these functions:
-
- * The ability to track in-memory changes on scalar- and collection-based
- object attributes, such that database persistence operations can be
- assembled based on those changes.
-
- * The ability to organize individual SQL queries and population of newly
- generated primary and foreign key-holding attributes during a persist
- operation such that referential integrity is maintained at all times.
-
- * The ability to maintain insert ordering against the order in which new
- instances were added to the session.
-
- * An Identity Map, which is a dictionary keying instances to their unique
- primary key identity. This ensures that only one copy of a particular
- entity is ever present within the session, even if repeated load
- operations for the same entity occur. This allows many parts of an
- application to get a handle to a particular object without any chance of
- modifications going to two different places.
-
- When dealing with instances of mapped classes, an instance may be
- *attached* to a particular Session, else it is *unattached* . An instance
- also may or may not correspond to an actual row in the database. These
- conditions break up into four distinct states:
-
- * *Transient* - an instance that's not in a session, and is not saved to
- the database; i.e. it has no database identity. The only relationship
- such an object has to the ORM is that its class has a ``mapper()``
- associated with it.
-
- * *Pending* - when you ``add()`` a transient instance, it becomes
- pending. It still wasn't actually flushed to the database yet, but it
- will be when the next flush occurs.
-
- * *Persistent* - An instance which is present in the session and has a
- record in the database. You get persistent instances by either flushing
- so that the pending instances become persistent, or by querying the
- database for existing instances (or moving persistent instances from
- other sessions into your local session).
-
- * *Detached* - an instance which has a record in the database, but is not
- in any session. Theres nothing wrong with this, and you can use objects
- normally when they're detached, **except** they will not be able to
- issue any SQL in order to load collections or attributes which are not
- yet loaded, or were marked as "expired".
-
- The session methods which control instance state include :meth:`.Session.add`,
- :meth:`.Session.delete`, :meth:`.Session.merge`, and :meth:`.Session.expunge`.
-
- The Session object is generally **not** threadsafe. A session which is
- set to ``autocommit`` and is only read from may be used by concurrent
- threads if it's acceptable that some object instances may be loaded twice.
-
- The typical pattern to managing Sessions in a multi-threaded environment
- is either to use mutexes to limit concurrent access to one thread at a
- time, or more commonly to establish a unique session for every thread,
- using a threadlocal variable. SQLAlchemy provides a thread-managed
- Session adapter, provided by the :func:`~sqlalchemy.orm.scoped_session`
- function.
-
+ The Session's usage paradigm is described at :ref:`session_toplevel`.
+
"""
public_methods = (
query_cls=query.Query):
"""Construct a new Session.
- Arguments to ``Session`` are described using the
- :func:`~sqlalchemy.orm.sessionmaker` function.
+ Arguments to :class:`.Session` are described using the
+ :func:`.sessionmaker` function, which is the
+ typical point of entry.
"""
def tometadata(self, metadata, schema=RETAIN_SCHEMA):
- """Return a copy of this ``Table`` associated with a different
- ``MetaData``."""
+ """Return a copy of this :class:`Table` associated with a different
+ :class:`MetaData`.
+
+ E.g.::
+
+ # create two metadata
+ meta1 = MetaData('sqlite:///querytest.db')
+ meta2 = MetaData()
+
+ # load 'users' from the sqlite engine
+ users_table = Table('users', meta1, autoload=True)
+
+ # create the same Table object for the plain metadata
+ users_table_2 = users_table.tometadata(meta2)
+
+ """
try:
if schema is RETAIN_SCHEMA:
class ColumnDefault(DefaultGenerator):
"""A plain default value on a column.
- This could correspond to a constant, a callable function, or a SQL clause.
+ This could correspond to a constant, a callable function,
+ or a SQL clause.
+
+ :class:`.ColumnDefault` is generated automatically
+ whenever the ``default``, ``onupdate`` arguments of
+ :class:`.Column` are used. A :class:`.ColumnDefault`
+ can be passed positionally as well.
+
+ For example, the following::
+
+ Column('foo', Integer, default=50)
+
+ Is equivalent to::
+
+ Column('foo', Integer, ColumnDefault(50))
+
+
"""
def __init__(self, arg, **kwargs):
class FetchedValue(object):
- """A default that takes effect on the database side."""
+ """A marker for a transparent database-side default.
+
+ Use :class:`.FetchedValue` when the database is configured
+ to provide some automatic default for a column.
+
+ E.g.::
+
+ Column('foo', Integer, FetchedValue())
+
+ Would indicate that some trigger or default generator
+ will create a new value for the ``foo`` column during an
+ INSERT.
+
+ """
def __init__(self, for_update=False):
self.for_update = for_update
class DefaultClause(FetchedValue):
- """A DDL-specified DEFAULT column value."""
+ """A DDL-specified DEFAULT column value.
+
+ :class:`.DefaultClause` is a :class:`.FetchedValue`
+ that also generates a "DEFAULT" clause when
+ "CREATE TABLE" is emitted.
+
+ :class:`.DefaultClause` is generated automatically
+ whenever the ``server_default``, ``server_onupdate`` arguments of
+ :class:`.Column` are used. A :class:`.DefaultClause`
+ can be passed positionally as well.
+
+ For example, the following::
+
+ Column('foo', Integer, server_default="50")
+
+ Is equivalent to::
+
+ Column('foo', Integer, DefaultClause("50"))
+
+ """
def __init__(self, arg, for_update=False):
util.assert_arg_type(arg, (basestring,
return self.impl.compare_values(x, y)
def is_mutable(self):
+ """Return True if the target Python type is 'mutable'.
+
+ When this method is overridden, :meth:`copy_value` should
+ also be supplied. The :class:`.MutableType` mixin
+ is recommended as a helper.
+
+ """
+
return self.impl.is_mutable()
def _adapt_expression(self, op, othertype):
"""
def is_mutable(self):
- """Return True, mutable."""
+ """Return True if the target Python type is 'mutable'.
+
+ """
return True
def copy_value(self, value):