connection is retrieved from the connection pool at the point at which
:class:`.Connection` is created.
The returned result is an instance of :class:`.ResultProxy`, which
references a DBAPI cursor and provides a largely compatible interface
with that of the DBAPI cursor. The DBAPI cursor will be closed
by the :class:`.ResultProxy` when all of its result rows (if any) are
exhausted. A :class:`.ResultProxy` that returns no rows, such as that of
an UPDATE statement (without any returned rows),
releases cursor resources immediately upon construction.
When the :meth:`~.Connection.close` method is called, the referenced DBAPI
of weakref callbacks - *never* the ``__del__`` method) - however it's never a
good idea to rely upon Python garbage collection to manage resources.
Our example above illustrated the execution of a textual SQL string.
The :meth:`~.Connection.execute` method can of course accommodate more than
that, including the variety of SQL expression constructs described
in :ref:`sqlexpression_toplevel`.
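For instance, a short sketch of both styles of execution against a
:class:`.Connection` (the table and column names here are only illustrative)::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://")
    conn = engine.connect()

    # textual SQL; the return value is a ResultProxy
    result = conn.execute(text("select name from users"))
    for row in result:
        print row['name']

    # a SQL expression construct is executed the same way,
    # assuming a Table object named users_table has been defined
    result = conn.execute(users_table.select())

    conn.close()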
Using Transactions
==================
.. note::
This section describes how to use transactions when working directly
with :class:`.Engine` and :class:`.Connection` objects. When using the
SQLAlchemy ORM, the public API for transaction control is via the
:class:`.Session` object, which makes usage of the :class:`.Transaction`
transaction is in progress. The detection is based on the presence of the
``autocommit=True`` execution option on the statement. If the statement
is a text-only statement and the flag is not set, a regular expression is used
to detect INSERT, UPDATE, DELETE, as well as a variety of other commands
for a particular backend::
conn = engine.connect()
conn.execute("INSERT INTO users VALUES (1, 'john')") # autocommits
The "autocommit" feature is only in effect when no :class:`.Transaction` has
otherwise been declared. This means the feature is not generally used with
the ORM, as the :class:`.Session` object by default always maintains an
ongoing :class:`.Transaction`.
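For comparison, a minimal sketch of an explicit transaction, assuming an
``engine`` as in the earlier examples::

    conn = engine.connect()
    trans = conn.begin()
    try:
        conn.execute("INSERT INTO users VALUES (2, 'judy')")
        trans.commit()
    except:
        trans.rollback()
        raise
    finally:
        conn.close()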
Full control of the "autocommit" behavior is available using the generative
:class:`.Connection`. This was illustrated using the :meth:`~.Engine.execute` method
of :class:`.Engine`.
In addition to "connectionless" execution, it is also possible
to use the :meth:`~.Executable.execute` method of
any :class:`.Executable` construct, which is a marker for SQL expression objects
that support execution. The SQL expression object itself references an
:class:`.Engine` or :class:`.Connection` known as the **bind**, which it uses
on the expression itself, utilizing the fact that either an
:class:`~sqlalchemy.engine.base.Engine` or
:class:`~sqlalchemy.engine.base.Connection` has been *bound* to the expression
object (binding is discussed further in
:ref:`metadata_toplevel`):
.. sourcecode:: python+sql
call_operation3(conn)
conn.close()
Calling :meth:`~.Connection.close` on the "contextual" connection does not release
its resources until all other usages of that resource are closed as well, including
that any ongoing transactions are rolled back or committed.
"""
If the dialect is providing support for a particular DBAPI on top of
an existing SQLAlchemy-supported database, the name can be given
including a database-qualification. For example, if ``FooDialect``
were in fact a MySQL dialect, the entry point could be established like this::
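    # in the third-party dialect's setup.py (names here follow the
    # hypothetical ``FooDialect`` example above)
    entry_points="""
    [sqlalchemy.dialects]
    mysql.foodialect = foodialect.dialect:FooDialect
    """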
Supported Databases
====================
SQLAlchemy includes many :class:`~sqlalchemy.engine.base.Dialect` implementations for various
backends; each is described as its own package in the :ref:`sqlalchemy.dialects_toplevel` package. A
SQLAlchemy dialect always requires that an appropriate DBAPI driver is installed.
The table below summarizes the state of DBAPI support in SQLAlchemy 0.7. The values
translate as:
* yes / Python platform - The SQLAlchemy dialect is mostly or fully operational on the target platform.
:class:`.Engine` per database established within an
application, rather than creating a new one for each connection.
.. note::
:class:`.QueuePool` is not used by default for SQLite engines. See
:ref:`sqlite_toplevel` for details on SQLite connection pool usage.
namespace of SA loggers that can be turned on is as follows:
* ``sqlalchemy.engine`` - controls SQL echoing. set to ``logging.INFO`` for SQL query output, ``logging.DEBUG`` for query + result set output.
* ``sqlalchemy.dialects`` - controls custom logging for SQL dialects. See the documentation of individual dialects for details.
* ``sqlalchemy.pool`` - controls connection pool logging. set to ``logging.INFO`` or lower to log connection pool checkouts/checkins.
* ``sqlalchemy.orm`` - controls logging of various ORM functions. set to ``logging.INFO`` for information on mapper configurations.
The SQLAlchemy :class:`.Engine` conserves Python function call overhead
by only emitting log statements when the current logging level is detected
as ``logging.INFO`` or ``logging.DEBUG``. It only checks this level when
a new connection is procured from the connection pool. Therefore when
changing the logging configuration for an already-running application, any
:class:`.Connection` that's currently active, or more commonly a
:class:`~.orm.session.Session` object that's active in a transaction, won't log any
SQL according to the new configuration until a new :class:`.Connection`
is procured (in the case of :class:`~.orm.session.Session`, this is
after the current transaction ends and a new one begins).
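As a sketch, enabling SQL echoing and pool logging through the standard
``logging`` module might look like this (handler and level choices are up to
the application)::

    import logging

    logging.basicConfig()
    logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
    logging.getLogger('sqlalchemy.pool').setLevel(logging.INFO)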
Core Internals
==============
Some key internal constructs are listed here.
.. currentmodule:: sqlalchemy
* **Plain Python Distutils** - SQLAlchemy can be installed with a clean
Python install using the services provided via `Python Distutils <http://docs.python.org/distutils/>`_,
using the ``setup.py`` script. The C extensions as well as Python 3 builds are supported.
* **Standard Setuptools** - When using `setuptools <http://pypi.python.org/pypi/setuptools/>`_,
SQLAlchemy can be installed via ``setup.py`` or ``easy_install``, and the C
extensions are supported. setuptools is not supported on Python 3 at the time
of this writing.
* **Distribute** - With `distribute <http://pypi.python.org/pypi/distribute/>`_,
SQLAlchemy can be installed via ``setup.py`` or ``easy_install``, and the C
extensions as well as Python 3 builds are supported.
* **pip** - `pip <http://pypi.python.org/pypi/pip/>`_ is an installer that
rides on top of ``setuptools`` or ``distribute``, replacing the usage
of ``easy_install``. It is often preferred for its simpler mode of usage.
.. note::
It is strongly recommended that either ``setuptools`` or ``distribute`` be installed.
Python's built-in ``distutils`` lacks many widely used installation features.
Install via easy_install or pip
-------------------------------
When ``easy_install`` or ``pip`` is available, the distribution can be
downloaded from PyPI and installed in one step::
easy_install SQLAlchemy
python setup.py --without-cextensions install
.. note::
The ``--without-cextensions`` flag is available **only** if ``setuptools``
or ``distribute`` is installed. It is not available on a plain Python ``distutils``
jack.posts.append(Post('new post'))
Since the read side of the dynamic relationship always queries the
database, changes to the underlying collection will not be visible
until the data has been flushed. However, as long as "autoflush" is
enabled on the :class:`.Session` in use, this will occur
automatically each time the collection is about to emit a
query.
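As a sketch, reading from such a collection issues a SELECT on demand (the
``User.name`` and ``Post.headline`` attributes below are illustrative)::

    # the collection acts like a Query; no rows load until iteration
    jack = session.query(User).filter_by(name='jack').one()
    recent = jack.posts.filter(Post.headline.like('%post%')).all()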
To place a dynamic relationship on a backref, use the :func:`~.orm.backref`
class Post(Base):
__table__ = posts_table
user = relationship(User,
backref=backref('posts', lazy='dynamic')
)
Note that eager/lazy loading options cannot be used in conjunction with dynamic relationships at this time.
.. note::
The :func:`~.orm.dynamic_loader` function is essentially the same
as :func:`~.orm.relationship` with the ``lazy='dynamic'`` argument specified.
Setting Noload
---------------
-A "noload" relationship never loads from the database, even when
+A "noload" relationship never loads from the database, even when
accessed. It is configured using ``lazy='noload'``::
class MyClass(Base):
class MyClass(Base):
__tablename__ = 'mytable'
id = Column(Integer, primary_key=True)
- children = relationship("MyOtherClass",
- cascade="all, delete-orphan",
+ children = relationship("MyOtherClass",
+ cascade="all, delete-orphan",
passive_deletes=True)
class MyOtherClass(Base):
__tablename__ = 'myothertable'
id = Column(Integer, primary_key=True)
parent_id = Column(Integer,
ForeignKey('mytable.id', ondelete='CASCADE')
)
Dictionary Collections
-----------------------
A little extra detail is needed when using a dictionary as a collection.
This is because objects are always loaded from the database as lists, and a key-generation
strategy must be available to populate the dictionary correctly. The
:func:`.attribute_mapped_collection` function is by far the most common way
class Item(Base):
__tablename__ = 'item'
id = Column(Integer, primary_key=True)
- notes = relationship("Note",
- collection_class=attribute_mapped_collection('keyword'),
+ notes = relationship("Note",
+ collection_class=attribute_mapped_collection('keyword'),
cascade="all, delete-orphan")
class Note(Base):
>>> item.notes.items()
{'a': <__main__.Note object at 0x2eaaf0>}
:func:`.attribute_mapped_collection` will ensure that
the ``.keyword`` attribute of each ``Note`` complies with the key in the
dictionary. For example, when assigning to ``Item.notes``, the dictionary
key we supply must match that of the actual ``Note`` object::
item = Item()
item.notes = {
'a': Note('a', 'atext'),
'b': Note('b', 'btext')
}
The attribute which :func:`.attribute_mapped_collection` uses as a key
does not need to be mapped at all! Using a regular Python ``@property`` allows virtually
any detail or combination of details about the object to be used as the key, as
below when we establish it as a tuple of ``Note.keyword`` and the first ten letters
of the ``Note.text`` field::
class Item(Base):
__tablename__ = 'item'
id = Column(Integer, primary_key=True)
- notes = relationship("Note",
- collection_class=attribute_mapped_collection('note_key'),
+ notes = relationship("Note",
+ collection_class=attribute_mapped_collection('note_key'),
backref="item",
cascade="all, delete-orphan")
class Item(Base):
__tablename__ = 'item'
id = Column(Integer, primary_key=True)
- notes = relationship("Note",
- collection_class=column_mapped_collection(Note.__table__.c.keyword),
+ notes = relationship("Note",
+ collection_class=column_mapped_collection(Note.__table__.c.keyword),
cascade="all, delete-orphan")
as well as :func:`.mapped_collection` which is passed any callable function.
class Item(Base):
__tablename__ = 'item'
id = Column(Integer, primary_key=True)
- notes = relationship("Note",
- collection_class=mapped_collection(lambda note: note.text[0:10]),
+ notes = relationship("Note",
+ collection_class=mapped_collection(lambda note: note.text[0:10]),
cascade="all, delete-orphan")
Dictionary mappings are often combined with the "Association Proxy" extension to produce
streamlined dictionary views. See :ref:`proxying_dictionaries` and :ref:`composite_association_proxy`
for examples.
.. autofunction:: attribute_mapped_collection
For the first use case, the :func:`.orm.validates` decorator is by far
the simplest way to intercept incoming values in all cases for the purposes
of validation and simple marshaling. See :ref:`simple_validators`
for an example of this.
For the second use case, the :ref:`associationproxy_toplevel` extension is a
unaffected and avoids the need to carefully tailor collection behavior on a
method-by-method basis.
Customized collections are useful when the collection needs to
have special behaviors upon access or mutation operations that can't
otherwise be modeled externally to the collection. They can of course
be combined with the above two approaches.
MappedCollection.__init__(self, keyfunc=lambda node: node.name)
OrderedDict.__init__(self, *args, **kw)
When subclassing :class:`.MappedCollection`, user-defined versions
of ``__setitem__()`` or ``__delitem__()`` should be decorated
with :meth:`.collection.internally_instrumented`, **if** they call down
to those same methods on :class:`.MappedCollection`. This is because the methods
collection
class MyMappedCollection(MappedCollection):
- """Use @internally_instrumented when your methods
+ """Use @internally_instrumented when your methods
call down to already-instrumented methods.
"""
.. note::
Due to a bug in MappedCollection prior to version 0.7.6, this
workaround usually needs to be called before a custom subclass
of :class:`.MappedCollection` which uses :meth:`.collection.internally_instrumented`
can be used::
In joined table inheritance, each class along a particular class's list of
parents is represented by a unique table. The total set of attributes for a
particular instance is represented as a join along all tables in its
inheritance path. Here, we first define the ``Employee`` class.
This table will contain a primary key column (or columns), and a column
for each attribute that's represented by ``Employee``. In this case it's just
``name``::
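    # a minimal sketch in declarative form; column sizes are illustrative
    class Employee(Base):
        __tablename__ = 'employee'
        id = Column(Integer, primary_key=True)
        name = Column(String(50))
        type = Column(String(50))

        __mapper_args__ = {
            'polymorphic_identity': 'employee',
            'polymorphic_on': type
        }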
The mapped table also has a column called ``type``. The purpose of
this column is to act as the **discriminator**, and stores a value
which indicates the type of object represented within the row. The column may
be of any datatype, though string and integer are the most common.
The discriminator column is only needed if polymorphic loading is
desired, as is usually the case. It is not strictly necessary that
it be present directly on the base mapped table, and can instead be defined on a
derived select statement that's used when the class is queried;
however, this is a much more sophisticated configuration scenario.
The mapping receives additional arguments via the ``__mapper_args__``
dictionary. Here the ``type`` column is explicitly stated as the
discriminator column, and the **polymorphic identity** of ``employee``
is also given; this is the value that will be
stored in the polymorphic discriminator column for instances of this
}
It is standard practice that the same column is used for both the role
of primary key as well as foreign key to the parent table,
and that the column is also named the same as that of the parent table.
However, both of these practices are optional. Separate columns may be used for
primary key and parent-relationship, the column may be named differently than
One natural effect of the joined table inheritance configuration is that the
identity of any mapped object can be determined entirely from the base table.
This has obvious advantages, so SQLAlchemy always considers the primary key
columns of a joined inheritance class to be those of the base table only.
In other words, the ``id``
columns of both the ``engineer`` and ``manager`` tables are not used to locate
``Engineer`` or ``Manager`` objects - only the value in
.. sourcecode:: python+sql
{opensql}
SELECT employee.id AS employee_id,
employee.name AS employee_name, employee.type AS employee_type
FROM employee
[]
.. sourcecode:: python+sql
{opensql}
SELECT manager.id AS manager_id,
manager.manager_data AS manager_manager_data
FROM manager
WHERE ? = manager.id
[5]
SELECT engineer.id AS engineer_id,
engineer.engineer_info AS engineer_engineer_info
FROM engineer
WHERE ? = engineer.id
query = session.query(eng_plus_manager)
The above produces a query which joins the ``employee`` table to both the
``engineer`` and ``manager`` tables like the following:
.. sourcecode:: python+sql
query.all()
{opensql}
SELECT employee.id AS employee_id,
engineer.id AS engineer_id,
manager.id AS manager_id,
employee.name AS employee_name,
employee.type AS employee_type,
engineer.engineer_info AS engineer_engineer_info,
manager.manager_data AS manager_manager_data
FROM employee
LEFT OUTER JOIN engineer
ON employee.id = engineer.id
LEFT OUTER JOIN manager
ON employee.id = manager.id
[]
The entity returned by :func:`.orm.with_polymorphic` is an :class:`.AliasedClass`
object, which can be used in a :class:`.Query` like any other alias, including
named attributes for those attributes on the ``Employee`` class. In our
example, ``eng_plus_manager`` becomes the entity that we use to refer to the
three-way outer join above. It also includes namespaces for each class named
in the list of classes, so that attributes specific to those subclasses can be
called upon as well. The following example illustrates calling upon attributes
specific to ``Engineer`` as well as ``Manager`` in terms of ``eng_plus_manager``::
eng_plus_manager = with_polymorphic(Employee, [Engineer, Manager])
query = session.query(eng_plus_manager).filter(
or_(
eng_plus_manager.Engineer.engineer_info=='x',
eng_plus_manager.Manager.manager_data=='y'
)
)
engineer = Engineer.__table__
entity = with_polymorphic(
Employee,
[Engineer, Manager],
employee.outerjoin(manager).outerjoin(engineer)
)
+++++++++++++++++++++++++++++++++++++++++++++
The ``with_polymorphic`` functions work fine for
simplistic scenarios. However, sometimes direct control of table rendering
+simplistic scenarios. However, direct control of table rendering
is called for, such as the case when one wants to
render to only the subclass table and not the parent table.
-This use case can be achieved by using the mapped :class:`.Table`
-objects directly. For example, to
+This use case can be achieved by using the mapped :class:`.Table`
+objects directly. For example, to
query the name of employees with particular criterion::
engineer = Engineer.__table__
id = Column(Integer, primary_key=True)
name = Column(String(50))
- employees = relationship("Employee",
+ employees = relationship("Employee",
backref='company',
cascade='all, delete-orphan')
function to create a polymorphic selectable::
manager_and_engineer = with_polymorphic(
Employee, [Manager, Engineer],
aliased=True)
session.query(Company).\
join(manager_and_engineer, Company.employees).\
filter(
or_(manager_and_engineer.Engineer.engineer_info=='someinfo',
manager_and_engineer.Manager.manager_data=='somedata')
)
with the polymorphic construct::
manager_and_engineer = with_polymorphic(
Employee, [Manager, Engineer],
aliased=True)
session.query(Company).\
join(Company.employees.of_type(manager_and_engineer)).\
filter(
or_(manager_and_engineer.Engineer.engineer_info=='someinfo',
manager_and_engineer.Manager.manager_data=='somedata')
)
session.query(Company).filter(
exists([1],
and_(Engineer.engineer_info=='someinfo',
employees.c.company_id==companies.c.company_id),
from_obj=employees.join(engineers)
)
The :func:`.joinedload` and :func:`.subqueryload` options also support
paths which make use of :func:`~sqlalchemy.orm.interfaces.PropComparator.of_type`.
Below we load ``Company`` rows while eagerly loading related ``Engineer``
objects, querying the ``employee`` and ``engineer`` tables simultaneously::
session.query(Company).\
options(subqueryload_all(Company.employees.of_type(Engineer),
Engineer.machines))
.. versionadded:: 0.8
:func:`.joinedload` and :func:`.subqueryload` support
paths that are qualified with
:func:`~sqlalchemy.orm.interfaces.PropComparator.of_type`.
Single Table Inheritance
}
Note that the mappers for the derived classes Manager and Engineer omit the
``__tablename__``, indicating they do not have a mapped table of
their own.
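A minimal sketch of such a single-table setup (column names are illustrative)::

    class Employee(Base):
        __tablename__ = 'employee'
        id = Column(Integer, primary_key=True)
        name = Column(String(50))
        type = Column(String(20))

        __mapper_args__ = {
            'polymorphic_on': type,
            'polymorphic_identity': 'employee'
        }

    class Manager(Employee):
        __mapper_args__ = {'polymorphic_identity': 'manager'}

    class Engineer(Employee):
        __mapper_args__ = {'polymorphic_identity': 'engineer'}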
.. _concrete_inheritance:
.. note::
this section is currently using classical mappings. The
Declarative system fully supports concrete inheritance
however. See the links below for more information on using
declarative with concrete table inheritance.
'engineer': engineers_table
}, 'type', 'pjoin')
employee_mapper = mapper(Employee, employees_table,
with_polymorphic=('*', pjoin),
polymorphic_on=pjoin.c.type,
polymorphic_identity='employee')
manager_mapper = mapper(Manager, managers_table,
inherits=employee_mapper,
concrete=True,
polymorphic_identity='manager')
engineer_mapper = mapper(Engineer, engineers_table,
inherits=employee_mapper,
concrete=True,
polymorphic_identity='engineer')
Upon select, the polymorphic union produces a query like this:
session.query(Employee).all()
{opensql}
SELECT pjoin.type AS pjoin_type,
pjoin.manager_data AS pjoin_manager_data,
pjoin.employee_id AS pjoin_employee_id,
pjoin.name AS pjoin_name, pjoin.engineer_info AS pjoin_engineer_info
FROM (
SELECT employees.employee_id AS employee_id,
CAST(NULL AS VARCHAR(50)) AS manager_data, employees.name AS name,
CAST(NULL AS VARCHAR(50)) AS engineer_info, 'employee' AS type
FROM employees
UNION ALL
SELECT managers.employee_id AS employee_id,
managers.manager_data AS manager_data, managers.name AS name,
CAST(NULL AS VARCHAR(50)) AS engineer_info, 'manager' AS type
FROM managers
UNION ALL
SELECT engineers.employee_id AS employee_id,
CAST(NULL AS VARCHAR(50)) AS manager_data, engineers.name AS name,
engineers.engineer_info AS engineer_info, 'engineer' AS type
FROM engineers
Column('company_id', Integer, ForeignKey('companies.id'))
)
mapper(Employee, employees_table,
with_polymorphic=('*', pjoin),
polymorphic_on=pjoin.c.type,
polymorphic_identity='employee')
mapper(Manager, managers_table,
inherits=employee_mapper,
concrete=True,
polymorphic_identity='manager')
mapper(Engineer, engineers_table,
inherits=employee_mapper,
concrete=True,
polymorphic_identity='engineer')
mapper(Company, companies, properties={
'some_c':relationship(C, back_populates='many_a')
})
mapper(C, c_table, properties={
'many_a':relationship(A, collection_class=set,
back_populates='some_c'),
})
Basic Relational Patterns
--------------------------
A quick walkthrough of the basic relational patterns.
The imports used for each of the following sections is as follows::
class Parent(Base):
__tablename__ = 'left'
id = Column(Integer, primary_key=True)
- children = relationship("Child",
+ children = relationship("Child",
secondary=association_table)
class Child(Base):
class Parent(Base):
__tablename__ = 'left'
id = Column(Integer, primary_key=True)
- children = relationship("Child",
- secondary=association_table,
+ children = relationship("Child",
+ secondary=association_table,
backref="parents")
class Child(Base):
id = Column(Integer, primary_key=True)
The ``secondary`` argument of :func:`.relationship` also accepts a callable
that returns the ultimate argument, which is evaluated only when mappers are
first used. Using this, we can define the ``association_table`` at a later
point, as long as it's available to the callable after all module initialization
is complete::
class Parent(Base):
__tablename__ = 'left'
id = Column(Integer, primary_key=True)
- children = relationship("Child",
- secondary=lambda: association_table,
+ children = relationship("Child",
+ secondary=lambda: association_table,
backref="parents")
With the declarative extension in use, the traditional "string name of the table"
class Parent(Base):
__tablename__ = 'left'
id = Column(Integer, primary_key=True)
- children = relationship("Child",
- secondary="association",
+ children = relationship("Child",
+ secondary="association",
backref="parents")
Deleting Rows from the Many to Many Table
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A behavior which is unique to the ``secondary`` argument to :func:`.relationship`
is that the :class:`.Table` which is specified here is automatically subject
to INSERT and DELETE statements, as objects are added or removed from the collection.
There is **no need to delete from this table manually**. The act of removing a
record from the collection will have the effect of the row being deleted on flush::
# row will be deleted from the "secondary" table
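    # (hypothetical Parent/Child objects mapped with a "secondary" table)
    myparent.children.remove(somechild)
    session.commit()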
There are several possibilities here:
* If there is a :func:`.relationship` from ``Parent`` to ``Child``, but there is
**not** a reverse-relationship that links a particular ``Child`` to each ``Parent``,
SQLAlchemy will not have any awareness that when deleting this particular
``Child`` object, it needs to maintain the "secondary" table that links it to
the ``Parent``. No delete of the "secondary" table will occur.
* If there is a relationship that links a particular ``Child`` to each ``Parent``,
suppose it's called ``Child.parents``, SQLAlchemy by default will load in
the ``Child.parents`` collection to locate all ``Parent`` objects, and remove
each row from the "secondary" table which establishes this link. Note that
this relationship does not need to be bidirectional; SQLAlchemy is strictly
looking at every :func:`.relationship` associated with the ``Child`` object
being deleted.
* A higher performing option here is to use ON DELETE CASCADE directives
with the foreign keys used by the database. Assuming the database supports
this feature, the database itself can be made to automatically delete rows in the
"secondary" table as referencing rows in "child" are deleted. SQLAlchemy
can be instructed to forego actively loading in the ``Child.parents``
collection in this case using the ``passive_deletes=True`` directive
on :func:`.relationship`, as sketched below; see :ref:`passive_deletes` for more details
on this.
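A rough sketch of that arrangement, assuming an ``association`` table whose
``child_id`` foreign key is declared with ``ondelete="CASCADE"``::

    class Child(Base):
        __tablename__ = 'child'
        id = Column(Integer, primary_key=True)

        # don't load Child.parents during delete; the database removes
        # the association rows itself via ON DELETE CASCADE
        parents = relationship("Parent",
                        secondary="association",
                        passive_deletes=True,
                        backref="children")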
Association Object
~~~~~~~~~~~~~~~~~~
The association object pattern is a variant on many-to-many: it's
used when your association table contains additional columns beyond those
which are foreign keys to the left and right tables. Instead of using the
``secondary`` argument, you map a new class directly to the association table.
The left side of the relationship references the association object via
one-to-many, and the association class references the right side via
many-to-one. Below we illustrate an association table mapped to the
``Association`` class which includes a column called ``extra_data``,
which is a string value that is stored along with each association
between ``Parent`` and ``Child``::
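    # a minimal sketch; table and column names follow the examples above
    class Association(Base):
        __tablename__ = 'association'
        left_id = Column(Integer, ForeignKey('left.id'), primary_key=True)
        right_id = Column(Integer, ForeignKey('right.id'), primary_key=True)
        extra_data = Column(String(50))
        child = relationship("Child")

    class Parent(Base):
        __tablename__ = 'left'
        id = Column(Integer, primary_key=True)
        children = relationship("Association")

    class Child(Base):
        __tablename__ = 'right'
        id = Column(Integer, primary_key=True)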
advisable that the association-mapped table not be used
as the ``secondary`` argument on a :func:`.relationship`
elsewhere, unless that :func:`.relationship` contains
the option ``viewonly=True``. SQLAlchemy otherwise
may attempt to emit redundant INSERT and DELETE
statements on the same table, if similar state is detected
on the related attribute as well as the associated
object.
-----------------------------
The **adjacency list** pattern is a common relational pattern whereby a table
contains a foreign key reference to itself. This is the most common
way to represent hierarchical data in flat tables. Other methods
include **nested sets**, sometimes called "modified preorder",
as well as **materialized path**. Despite the appeal that modified preorder
6 1 child3
The :func:`.relationship` configuration here works in the
-same way as a "normal" one-to-many relationship, with the
+same way as a "normal" one-to-many relationship, with the
exception that the "direction", i.e. whether the relationship
is one-to-many or many-to-one, is assumed by default to
be one-to-many. To establish the relationship as many-to-one,
id = Column(Integer, primary_key=True)
parent_id = Column(Integer, ForeignKey('node.id'))
data = Column(String(50))
- children = relationship("Node",
+ children = relationship("Node",
backref=backref('parent', remote_side=[id])
)
# get all nodes named 'child2'
session.query(Node).filter(Node.data=='child2')
However, extra care is needed when attempting to join along
the foreign key from one level of the tree to the next. In SQL,
a join from a table to itself requires that at least one side of the
expression be "aliased" so that it can be unambiguously referred to.
Recall from :ref:`ormtutorial_aliases` in the ORM tutorial that the
:class:`.orm.aliased` construct is normally used to provide an "alias" of
an ORM entity. Joining from ``Node`` to itself using this technique
looks like:
join(nodealias, Node.parent).\
filter(nodealias.data=="child2").\
all()
SELECT node.id AS node_id,
node.parent_id AS node_parent_id,
node.data AS node_data
FROM node JOIN node AS node_1
ON node.parent_id = node_1.id
WHERE node.data = ?
AND node_1.data = ?
['subchild1', 'child2']
:meth:`.Query.join` also includes a feature known as ``aliased=True`` that
can shorten the verbosity of self-referential joins, at the expense
of query flexibility. This feature
performs a similar "aliasing" step to that above, without the need for an
explicit entity. Calls to :meth:`.Query.filter` and similar subsequent to
the aliased join will **adapt** the ``Node`` entity to be that of the alias:
.. sourcecode:: python+sql
join(Node.parent, aliased=True).\
filter(Node.data=='child2').\
all()
SELECT node.id AS node_id,
node.parent_id AS node_parent_id,
node.data AS node_data
FROM node
JOIN node AS node_1 ON node_1.id = node.parent_id
WHERE node.data = ? AND node_1.data = ?
['subchild1', 'child2']
.. sourcecode:: python+sql
# get all nodes named 'subchild1' with a
# parent named 'child2' and a grandparent 'root'
{sql}session.query(Node).\
filter(Node.data=='subchild1').\
join(Node.parent, aliased=True, from_joinpoint=True).\
filter(Node.data=='root').\
all()
SELECT node.id AS node_id,
node.parent_id AS node_parent_id,
node.data AS node_data
FROM node
JOIN node AS node_1 ON node_1.id = node.parent_id
JOIN node AS node_2 ON node_2.id = node_1.parent_id
WHERE node.data = ?
AND node_1.data = ?
AND node_2.data = ?
['subchild1', 'child2', 'root']
:meth:`.Query.reset_joinpoint` will also remove the "aliasing" from filtering
calls::
session.query(Node).\
join_depth=2)
{sql}session.query(Node).all()
SELECT node_1.id AS node_1_id,
node_1.parent_id AS node_1_parent_id,
node_1.data AS node_1_data,
node_2.id AS node_2_id,
node_2.parent_id AS node_2_parent_id,
node_2.data AS node_2_data,
node.id AS node_id,
node.parent_id AS node_parent_id,
node.data AS node_data
FROM node
LEFT OUTER JOIN node AS node_2
ON node.id = node_2.parent_id
LEFT OUTER JOIN node AS node_1
ON node_2.id = node_1.parent_id
[]
user = relationship("User", back_populates="addresses")
Above, we add a ``.user`` relationship to ``Address`` explicitly. On
both relationships, the ``back_populates`` directive tells each relationship
about the other one, indicating that they should establish "bidirectional"
behavior between each other. The primary effect of this configuration
is that the relationship adds event handlers to both attributes
which have the behavior of "when an append or set event occurs here, set ourselves
onto the incoming attribute using this particular attribute name".
The behavior is illustrated as follows, starting with a ``User`` and an ``Address`` instance::
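    # a rough sketch; the attribute names follow the mapping above
    >>> u1 = User()
    >>> a1 = Address()
    >>> u1.addresses.append(a1)

    >>> a1.user is u1
    True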
This behavior of course works in reverse for removal operations, as well as for
equivalent operations on both sides. For example,
when ``.user`` is set again to ``None``, the ``Address`` object is removed
from the reverse collection::
>>> a1.user = None
>>> u1.addresses
[]
The manipulation of the ``.addresses`` collection and the ``.user`` attribute
occurs entirely in Python without any interaction with the SQL database.
Without this behavior, the proper state would be apparent on both sides once the
data has been flushed to the database, and later reloaded after a commit or
expiration operation occurs. The ``backref``/``back_populates`` behavior has the advantage
~~~~~~~~~~~~~~~~~~
We've established that the ``backref`` keyword is merely a shortcut for building
two individual :func:`.relationship` constructs that refer to each other. Part of
the behavior of this shortcut is that certain configurational arguments applied to
the :func:`.relationship`
will also be applied to the other direction - namely those arguments that describe
the relationship at a schema level, and are unlikely to be different in the reverse
direction. The usual case
here is a many-to-many :func:`.relationship` that has a ``secondary`` argument,
or a one-to-many or many-to-one which has a ``primaryjoin`` argument (the
``primaryjoin`` argument is discussed in :ref:`relationship_primaryjoin`). For example,
suppose we limited the list of ``Address`` objects to those which start with "tony"::
id = Column(Integer, primary_key=True)
name = Column(String)
- addresses = relationship("Address",
+ addresses = relationship("Address",
primaryjoin="and_(User.id==Address.user_id, "
"Address.email.startswith('tony'))",
backref="user")
>>> print User.addresses.property.primaryjoin
"user".id = address.user_id AND address.email LIKE :email_1 || '%%'
>>>
>>> print Address.user.property.primaryjoin
"user".id = address.user_id AND address.email LIKE :email_1 || '%%'
>>>
This reuse of arguments should pretty much do the "right thing" - it uses
only arguments that are applicable, and in the case of a many-to-many
relationship, will reverse the usage of ``primaryjoin`` and ``secondaryjoin``
to correspond to the other direction (see the example in :ref:`self_referential_many_to_many`
for this).
It's very often the case however that we'd like to specify arguments that
are specific to just the side where we happened to place the "backref".
This includes :func:`.relationship` arguments like ``lazy``, ``remote_side``,
``cascade`` and ``cascade_backrefs``. For this case we use the :func:`.backref`
function in place of a string::
id = Column(Integer, primary_key=True)
name = Column(String)
- addresses = relationship("Address",
+ addresses = relationship("Address",
backref=backref("user", lazy="joined"))
Where above, we placed a ``lazy="joined"`` directive only on the ``Address.user``
An unusual case is that of the "one way backref". This is where the "back-populating"
behavior of the backref is only desirable in one direction. An example of this
is a collection which contains a filtering ``primaryjoin`` condition. We'd like to append
items to this collection as needed, and have them populate the "parent" object on the
incoming object. However, we'd also like to have items that are not part of the collection,
but still have the same "parent" association - these items should never be in the
collection.
Taking our previous example, where we established a ``primaryjoin`` that limited the
collection only to ``Address`` objects whose email address started with the word ``tony``,
the transaction committed and their attributes expired for a re-load, the ``addresses``
collection will hit the database on next access and no longer have this ``Address`` object
present, due to the filtering condition. But we can do away with this unwanted side
-of the "backref" behavior on the Python side by using two separate :func:`.relationship` constructs,
+of the "backref" behavior on the Python side by using two separate :func:`.relationship` constructs,
placing ``back_populates`` only on one side::
from sqlalchemy import Integer, ForeignKey, String, Column
__tablename__ = 'user'
id = Column(Integer, primary_key=True)
name = Column(String)
- addresses = relationship("Address",
+ addresses = relationship("Address",
primaryjoin="and_(User.id==Address.user_id, "
"Address.email.startswith('tony'))",
back_populates="user")
Setting the primaryjoin and secondaryjoin
-----------------------------------------
A common scenario arises when we attempt to relate two
classes together, where there exist multiple ways to join the
two tables.
to load in an associated ``Address``, there is the choice of retrieving
the ``Address`` referred to by the ``billing_address_id`` column or the one
referred to by the ``shipping_address_id`` column. The :func:`.relationship`,
as it is, cannot determine its full configuration. The examples at
:ref:`relationship_patterns` didn't have this issue, because in each of those examples
there was only **one** way to refer to the related table.
To resolve this issue, :func:`.relationship` accepts an argument named
``primaryjoin`` which accepts a Python-based SQL expression, using the system described
at :ref:`sqlexpression_toplevel`, that describes how the two tables should be joined
together. When using the declarative system, we often will specify this Python
billing_address_id = Column(Integer, ForeignKey("address.id"))
shipping_address_id = Column(Integer, ForeignKey("address.id"))
- billing_address = relationship("Address",
+ billing_address = relationship("Address",
primaryjoin="Address.id==Customer.billing_address_id")
- shipping_address = relationship("Address",
+ shipping_address = relationship("Address",
primaryjoin="Address.id==Customer.shipping_address_id")
Above, loading the ``Customer.billing_address`` relationship from a ``Customer``
object will use the value present in ``billing_address_id`` in order to
identify the row in ``Address`` to be loaded; similarly, ``shipping_address_id``
is used for the ``shipping_address`` relationship. The linkage of the two
columns also plays a role during persistence; the newly generated primary key
of a just-inserted ``Address`` object will be copied into the appropriate
foreign key column of an associated ``Customer`` object during a flush.
Specifying Alternate Join Conditions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The open-ended nature of ``primaryjoin`` also allows us to customize how
related items are loaded. In the example below, using the ``User`` class
as well as an ``Address`` class which stores a street address, we
create a relationship ``boston_addresses`` which will only
load those ``Address`` objects which specify a city of "Boston"::
__tablename__ = 'user'
id = Column(Integer, primary_key=True)
name = Column(String)
- addresses = relationship("Address",
+ addresses = relationship("Address",
primaryjoin="and_(User.id==Address.user_id, "
"Address.city=='Boston')")
``Address.user_id`` columns to each other, as well as limiting rows in ``Address``
to just ``city='Boston'``. When using Declarative, rudimentary SQL functions like
:func:`.and_` are automatically available in the evaluated namespace of a string
:func:`.relationship` argument.
When using classical mappings, we have the advantage of the :class:`.Table` objects
already being present when the mapping is defined, so that the SQL expression
Note that the custom criteria we use in a ``primaryjoin`` is generally only significant
when SQLAlchemy is rendering SQL in order to load or represent this relationship.
That is, it's used
in the SQL statement that's emitted in order to perform a per-attribute lazy load, or when a join is
constructed at query time, such as via :meth:`.Query.join`, or via the eager "joined" or "subquery"
styles of loading. When in-memory objects are being manipulated, we can place any ``Address`` object
we'd like into the ``boston_addresses`` collection, regardless of what the value of the ``.city``
attribute is. The objects will remain present in the collection until the attribute is expired
and re-loaded from the database where the criterion is applied. When
a flush occurs, the objects inside of ``boston_addresses`` will be flushed unconditionally, assigning
value of the primary key ``user.id`` column onto the foreign-key-holding ``address.user_id`` column
for each row. The ``city`` criteria has no effect here, as the flush process only cares about synchronizing primary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Many to many relationships can be customized by one or both of ``primaryjoin``
and ``secondaryjoin`` - the latter is significant for a relationship that
specifies a many-to-many reference using the ``secondary`` argument.
A common situation which involves the usage of ``primaryjoin`` and ``secondaryjoin``
is when establishing a many-to-many relationship from a class to itself, as shown below::
)})
Note that in both examples, the ``backref`` keyword specifies a ``left_nodes``
backref - when :func:`.relationship` creates the second relationship in the reverse
direction, it's smart enough to reverse the ``primaryjoin`` and ``secondaryjoin`` arguments.
Specifying Foreign Keys
class User(Base):
__table__ = users_table
addresses = relationship(Address,
primaryjoin=
users_table.c.user_id==addresses_table.c.user_id,
foreign_keys=[addresses_table.c.user_id])
and DELETE in order to delete without violating foreign key constraints). The
two use cases are:
* A table contains a foreign key to itself, and a single row will
have a foreign key value pointing to its own primary key.
* Two tables each contain a foreign key referencing the other
table, with a row in each table referencing the other.
For example::
identifiers were populated manually (again essentially bypassing
:func:`~sqlalchemy.orm.relationship`).
To enable the usage of a supplementary UPDATE statement,
we use the ``post_update`` option
of :func:`.relationship`. This specifies that the linkage between the
two rows should be created using an UPDATE statement after both rows
have been INSERTED; it also causes the rows to be de-associated with
each other via UPDATE before a DELETE is emitted. The flag should
be placed on just *one* of the relationships, preferably the
many-to-one side. Below we illustrate
a complete example, including two :class:`.ForeignKey` constructs, one which
specifies ``use_alter=True`` to help with emitting CREATE TABLE statements::
__tablename__ = 'widget'
widget_id = Column(Integer, primary_key=True)
favorite_entry_id = Column(Integer,
ForeignKey('entry.entry_id',
use_alter=True,
name="fk_favorite_entry"))
name = Column(String(50))
__table_args__ = (
ForeignKeyConstraint(
- ["widget_id", "favorite_entry_id"],
+ ["widget_id", "favorite_entry_id"],
["entry.widget_id", "entry.entry_id"],
name="fk_favorite_entry", use_alter=True
),
well. For databases which enforce referential integrity,
it's required to use the database's ON UPDATE CASCADE
functionality in order to propagate primary key changes
to referenced foreign keys - the values cannot be out
of sync for any moment.
For databases that don't support this, such as SQLite and
MySQL without their referential integrity options turned
on, the ``passive_updates`` flag can
be set to ``False``, most preferably on a one-to-many or
many-to-many :func:`.relationship`, which instructs
__tablename__ = 'address'
email = Column(String(50), primary_key=True)
username = Column(String(50),
ForeignKey('user.username', onupdate="cascade")
)
parent_id = Column(Integer, ForeignKey(id))
name = Column(String(50), nullable=False)
- children = relationship("TreeNode",
+ children = relationship("TreeNode",
# cascade deletions
cascade="all",
# many to one + adjacency list - remote_side
# is required to reference the 'remote'
# column in the join condition.
backref=backref("parent", remote_side=id),
return " " * _indent + repr(self) + \
"\n" + \
"".join([
c.dump(_indent + 1)
for c in self.children.values()]
)
"selecting tree on root, using eager loading to join four levels deep.")
session.expunge_all()
node = session.query(TreeNode).\
- options(joinedload_all("children", "children",
+ options(joinedload_all("children", "children",
"children", "children")).\
filter(TreeNode.name=="rootnode").\
first()
meta = MetaData()
org_table = Table('organizations', meta,
Column('org_id', Integer, primary_key=True),
Column('org_name', String(50), nullable=False, key='name'),
mysql_engine='InnoDB')
self.name = name
mapper(Organization, org_table, properties = {
'members' : relationship(Member,
# Organization.members will be a Query object - no loading
# of the entire collection occurs unless requested
- lazy="dynamic",
+ lazy="dynamic",
- # Member objects "belong" to their parent, are deleted when
+ # Member objects "belong" to their parent, are deleted when
# removed from the collection
cascade="all, delete-orphan",
# "delete, delete-orphan" cascade does not load in objects on delete,
# allows ON DELETE CASCADE to handle it.
# this only works with a database that supports ON DELETE CASCADE -
# *not* sqlite or MySQL with MyISAM
passive_deletes=True,
)
})
print "-------------------------\nflush one - save org + 3 members\n"
sess.commit()
# the 'members' collection is a Query. it issues
# SQL as needed to load subsets of the collection.
print "-------------------------\nload subset of members\n"
members = org.members.filter(member_table.c.name.like('%member t%')).all()
print "-------------------------\nflush two - save 3 more members\n"
sess.commit()
# delete the object. Using ON DELETE CASCADE
# SQL is only emitted for the head row - the Member rows
# disappear automatically without the need for additional SQL.
sess.delete(org)
print "-------------------------\nflush three - delete org, delete members in one statement\n"
-"""A naive example illustrating techniques to help
+"""A naive example illustrating techniques to help
embed PostGIS functionality.
This example was originally developed in the hopes that it would be extrapolated into a comprehensive PostGIS integration layer. We are pleased to announce that this has come to fruition as `GeoAlchemy <http://www.geoalchemy.org/>`_.
The example illustrates:
* a DDL extension which allows CREATE/DROP to work in
conjunction with AddGeometryColumn/DropGeometryColumn
* a Geometry type, as well as a few subtypes, which
* a standalone operator example.
The implementation is limited to only public, well known
and simple to use extension points.
E.g.::
import datetime
# step 2. databases.
# db1 is used for id generation. The "pool_threadlocal"
# causes the id_generator() to use the same connection as that
# of an ongoing transaction within db1.
echo = True
# we need a way to create identifiers which are unique across all
# databases. one easy way would be to just use a composite primary key, where one
# value is the shard id. but here, we'll show something more "generic", an
# id generation function. we'll use a simplistic "id table" stored in database
# #1. Any other method will do just as well; UUID, hilo, application-specific, etc.
# table setup. we'll store a lead table of continents/cities,
# and a secondary table storing locations.
# a particular row will be placed in the database whose shard id corresponds to the
-# 'continent'. in this setup, secondary rows in 'weather_reports' will
+# 'continent'. in this setup, secondary rows in 'weather_reports' will
# be placed in the same DB as that of the parent, but this can be changed
# if you're willing to write more complex sharding functions.
# step 5. define sharding functions.
-# we'll use a straight mapping of a particular set of "country"
+# we'll use a straight mapping of a particular set of "country"
# attributes to shard id.
shard_lookup = {
'North America':'north_america',
"""shard chooser.
looks at the given instance and returns a shard id
- note that we need to define conditions for
+ note that we need to define conditions for
the WeatherLocation class, as well as our secondary Report class which will
point back to its WeatherLocation via its 'location' attribute.
given a primary key, returns a list of shards
to search. here, we don't have any particular information from a
- pk so we just return all shard ids. often, youd want to do some
- kind of round-robin strategy here so that requests are evenly
+    pk so we just return all shard ids. often, you'd want to do some
+ kind of round-robin strategy here so that requests are evenly
distributed among DBs.
"""
# "shares_lineage()" returns True if both columns refer to the same
# statement column, adjusting for any annotations present.
# (an annotation is an internal clone of a Column object
- # and occur when using ORM-mapped attributes like
- # "WeatherLocation.continent"). A simpler comparison, though less accurate,
+ # and occur when using ORM-mapped attributes like
+ # "WeatherLocation.continent"). A simpler comparison, though less accurate,
# would be "column.key == 'continent'".
if column.shares_lineage(weather_locations.c.continent):
if operator == operators.eq:
"""Search an orm.Query object for binary expressions.
Returns expressions which match a Column against one or more
- literal values as a list of tuples of the form
+ literal values as a list of tuples of the form
(column, operator, values). "values" is a single value
or tuple of values depending on the operator.
comparisons = []
def visit_bindparam(bind):
- # visit a bind parameter.
+ # visit a bind parameter.
# check in _params for it first
if bind.key in query._params:
value = query._params[bind.key]
elif bind.callable:
- # some ORM functions (lazy loading)
- # place the bind's value as a
- # callable for deferred evaulation.
+ # some ORM functions (lazy loading)
+ # place the bind's value as a
+            # callable for deferred evaluation.
value = bind.callable()
else:
# just use .value
binary.operator == operators.in_op and \
hasattr(binary.right, 'clauses'):
comparisons.append(
- (binary.left, binary.operator,
+ (binary.left, binary.operator,
tuple(binds[bind] for bind in binary.right.clauses)
)
)
# further configure create_session to use these functions
create_session.configure(
- shard_chooser=shard_chooser,
- id_chooser=id_chooser,
+ shard_chooser=shard_chooser,
+ id_chooser=id_chooser,
query_chooser=query_chooser
)
be run via nose::
cd examples/versioning
- nosetests -v
+ nosetests -v
A fragment of example usage, using declarative::
__visit_name__ = 'VARCHAR'
def __init__(self, length = None, **kwargs):
- super(VARCHAR, self).__init__(length=length, **kwargs)
+ super(VARCHAR, self).__init__(length=length, **kwargs)
class CHAR(_StringType, sqltypes.CHAR):
"""Firebird CHAR type"""
}
-# TODO: date conversion types (should be implemented as _FBDateTime,
+# TODO: date conversion types (should be implemented as _FBDateTime,
# _FBDate, etc. as bind/result functionality is required)
class FBTypeCompiler(compiler.GenericTypeCompiler):
"""Get the next value from the sequence using ``gen_id()``."""
return self._execute_scalar(
- "SELECT gen_id(%s, 1) FROM rdb$database" %
+ "SELECT gen_id(%s, 1) FROM rdb$database" %
self.dialect.identifier_preparer.format_sequence(seq),
type_
)
return name
def has_table(self, connection, table_name, schema=None):
- """Return ``True`` if the given table exists, ignoring
+ """Return ``True`` if the given table exists, ignoring
the `schema`."""
tblqry = """
return {'constrained_columns':pkfields, 'name':None}
@reflection.cache
- def get_column_sequence(self, connection,
- table_name, column_name,
+ def get_column_sequence(self, connection,
+ table_name, column_name,
schema=None, **kw):
tablename = self.denormalize_name(table_name)
colname = self.denormalize_name(column_name)
COALESCE(cs.rdb$bytes_per_character,1) AS flen,
f.rdb$field_precision AS fprec,
f.rdb$field_scale AS fscale,
- COALESCE(r.rdb$default_source,
+ COALESCE(r.rdb$default_source,
f.rdb$default_source) AS fdefault
FROM rdb$relation_fields r
JOIN rdb$fields f ON r.rdb$field_source=f.rdb$field_name
coltype = sqltypes.NULLTYPE
elif colspec == 'INT64':
coltype = coltype(
- precision=row['fprec'],
+ precision=row['fprec'],
scale=row['fscale'] * -1)
elif colspec in ('VARYING', 'CSTRING'):
coltype = coltype(row['flen'])
if row['fdefault'] is not None:
# the value comes down as "DEFAULT 'value'": there may be
# more than one whitespace around the "DEFAULT" keyword
- # and it may also be lower case
+ # and it may also be lower case
# (see also http://tracker.firebirdsql.org/browse/CORE-356)
defexpr = row['fdefault'].lstrip()
assert defexpr[:8].rstrip().upper() == \
SQLAlchemy uses 200 with Unicode, datetime and decimal support (see
details__).
-* concurrency_level - set the backend policy with regards to threading
+* concurrency_level - set the backend policy with regards to threading
issues: by default SQLAlchemy uses policy 1 (see details__).
-* enable_rowcount - True by default, setting this to False disables
- the usage of "cursor.rowcount" with the
+* enable_rowcount - True by default, setting this to False disables
+ the usage of "cursor.rowcount" with the
Kinterbasdb dialect, which SQLAlchemy ordinarily calls upon automatically
- after any UPDATE or DELETE statement. When disabled, SQLAlchemy's
- ResultProxy will return -1 for result.rowcount. The rationale here is
- that Kinterbasdb requires a second round trip to the database when
- .rowcount is called - since SQLA's resultproxy automatically closes
- the cursor after a non-result-returning statement, rowcount must be
+ after any UPDATE or DELETE statement. When disabled, SQLAlchemy's
+ ResultProxy will return -1 for result.rowcount. The rationale here is
+ that Kinterbasdb requires a second round trip to the database when
+ .rowcount is called - since SQLA's resultproxy automatically closes
+ the cursor after a non-result-returning statement, rowcount must be
called, if at all, before the result object is returned. Additionally,
cursor.rowcount may not return correct results with older versions
- of Firebird, and setting this flag to False will also cause the
+ of Firebird, and setting this flag to False will also cause the
SQLAlchemy ORM to ignore its usage. The behavior can also be controlled on a
per-execution basis using the `enable_rowcount` option with
:meth:`execution_options()`::
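    # a hedged sketch of the per-execution usage; "engine" and "stmt" are
    # assumed to exist already
    conn = engine.connect()
    r = conn.execution_options(enable_rowcount=True).execute(stmt)
    print r.rowcount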
class FBExecutionContext_kinterbasdb(FBExecutionContext):
@property
def rowcount(self):
- if self.execution_options.get('enable_rowcount',
+ if self.execution_options.get('enable_rowcount',
self.dialect.enable_rowcount):
return self.cursor.rowcount
else:
# that for backward compatibility reasons returns a string like
# LI-V6.3.3.12981 Firebird 2.0
# where the first version is a fake one resembling the old
- # Interbase signature.
+ # Interbase signature.
fbconn = connection.connection
version = fbconn.server_version
msg = str(e)
return ('Unable to complete network request to host' in msg or
'Invalid connection state' in msg or
- 'Invalid cursor state' in msg or
+ 'Invalid cursor state' in msg or
'connection shutdown' in msg)
else:
return False
SELECT TOP n
If using SQL Server 2005 or above, LIMIT with OFFSET
-support is available through the ``ROW_NUMBER OVER`` construct.
+support is available through the ``ROW_NUMBER OVER`` construct.
For versions below 2005, LIMIT with OFFSET usage will fail.
Nullability
SQLAlchemy by default uses OUTPUT INSERTED to get at newly
generated primary key values via IDENTITY columns or other
-server side defaults. MS-SQL does not
+server side defaults. MS-SQL does not
allow the usage of OUTPUT INSERTED on tables that have triggers.
To disable the usage of OUTPUT INSERTED on a per-table basis,
specify ``implicit_returning=False`` for each :class:`.Table`
which has triggers::
- Table('mytable', metadata,
- Column('id', Integer, primary_key=True),
+ Table('mytable', metadata,
+ Column('id', Integer, primary_key=True),
# ...,
implicit_returning=False
)
Enabling Snapshot Isolation
---------------------------
-Not necessarily specific to SQLAlchemy, SQL Server has a default transaction
+Not necessarily specific to SQLAlchemy, SQL Server has a default transaction
isolation mode that locks entire tables, and causes even mildly concurrent
applications to have long held locks and frequent deadlocks.
-Enabling snapshot isolation for the database as a whole is recommended
-for modern levels of concurrency support. This is accomplished via the
+Enabling snapshot isolation for the database as a whole is recommended
+for modern levels of concurrency support. This is accomplished via the
following ALTER DATABASE commands executed at the SQL prompt::
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
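    -- (assumption: the companion command commonly issued together with the above)
    ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON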
return value.date()
elif isinstance(value, basestring):
return datetime.date(*[
- int(x or 0)
+ int(x or 0)
for x in self._reg.match(value).groups()
])
else:
return value.time()
elif isinstance(value, basestring):
return datetime.time(*[
- int(x or 0)
+ int(x or 0)
for x in self._reg.match(value).groups()])
else:
return value
return self._extend("TEXT", type_)
def visit_VARCHAR(self, type_):
- return self._extend("VARCHAR", type_,
+ return self._extend("VARCHAR", type_,
length = type_.length or 'max')
def visit_CHAR(self, type_):
return self._extend("NCHAR", type_)
def visit_NVARCHAR(self, type_):
- return self._extend("NVARCHAR", type_,
+ return self._extend("NVARCHAR", type_,
length = type_.length or 'max')
def visit_date(self, type_):
def visit_VARBINARY(self, type_):
return self._extend(
- "VARBINARY",
- type_,
+ "VARBINARY",
+ type_,
length=type_.length or 'max')
def visit_boolean(self, type_):
not self.executemany
if self._enable_identity_insert:
- self.root_connection._cursor_execute(self.cursor,
- "SET IDENTITY_INSERT %s ON" %
+ self.root_connection._cursor_execute(self.cursor,
+ "SET IDENTITY_INSERT %s ON" %
self.dialect.identifier_preparer.format_table(tbl),
())
conn = self.root_connection
if self._select_lastrowid:
if self.dialect.use_scope_identity:
- conn._cursor_execute(self.cursor,
+ conn._cursor_execute(self.cursor,
"SELECT scope_identity() AS lastrowid", ())
else:
- conn._cursor_execute(self.cursor,
+ conn._cursor_execute(self.cursor,
"SELECT @@identity AS lastrowid", ())
# fetchall() ensures the cursor is consumed without closing it
row = self.cursor.fetchall()[0]
self._result_proxy = base.FullyBufferedResultProxy(self)
if self._enable_identity_insert:
- conn._cursor_execute(self.cursor,
+ conn._cursor_execute(self.cursor,
"SET IDENTITY_INSERT %s OFF" %
self.dialect.identifier_preparer.
format_table(self.compiled.statement.table),
if self._enable_identity_insert:
try:
self.cursor.execute(
- "SET IDENTITY_INSERT %s OFF" %
+ "SET IDENTITY_INSERT %s OFF" %
self.dialect.identifier_preparer.\
format_table(self.compiled.statement.table)
)
def visit_concat_op(self, binary, **kw):
return "%s + %s" % \
- (self.process(binary.left, **kw),
+ (self.process(binary.left, **kw),
self.process(binary.right, **kw))
def visit_match_op(self, binary, **kw):
return "CONTAINS (%s, %s)" % (
- self.process(binary.left, **kw),
+ self.process(binary.left, **kw),
self.process(binary.right, **kw))
def get_select_precolumns(self, select):
return "SAVE TRANSACTION %s" % self.preparer.format_savepoint(savepoint_stmt)
def visit_rollback_to_savepoint(self, savepoint_stmt):
- return ("ROLLBACK TRANSACTION %s"
+ return ("ROLLBACK TRANSACTION %s"
% self.preparer.format_savepoint(savepoint_stmt))
def visit_column(self, column, result_map=None, **kwargs):
t, column)
if result_map is not None:
- result_map[column.name
- if self.dialect.case_sensitive
+ result_map[column.name
+ if self.dialect.case_sensitive
else column.name.lower()] = \
- (column.name, (column, ),
+ (column.name, (column, ),
column.type)
return super(MSSQLCompiler, self).\
- visit_column(converted,
+ visit_column(converted,
result_map=None, **kwargs)
- return super(MSSQLCompiler, self).visit_column(column,
- result_map=result_map,
+ return super(MSSQLCompiler, self).visit_column(column,
+ result_map=result_map,
**kwargs)
def visit_binary(self, binary, **kwargs):
"""
if (
- isinstance(binary.left, expression.BindParameter)
+ isinstance(binary.left, expression.BindParameter)
and binary.operator == operator.eq
and not isinstance(binary.right, expression.BindParameter)
):
return self.process(
- expression.BinaryExpression(binary.right,
- binary.left,
- binary.operator),
+ expression.BinaryExpression(binary.right,
+ binary.left,
+ binary.operator),
**kwargs)
return super(MSSQLCompiler, self).visit_binary(binary, **kwargs)
columns = [
self.process(
- col_label(c),
- within_columns_clause=True,
+ col_label(c),
+ within_columns_clause=True,
result_map=self.result_map
- )
+ )
for c in expression._select_iterables(returning_cols)
]
return 'OUTPUT ' + ', '.join(columns)
label_select_column(select, column, asfrom)
def for_update_clause(self, select):
- # "FOR UPDATE" is only allowed on "DECLARE CURSOR" which
+ # "FOR UPDATE" is only allowed on "DECLARE CURSOR" which
# SQLAlchemy doesn't use
return ''
from_hints,
**kw):
"""Render the UPDATE..FROM clause specific to MSSQL.
-
+
In MSSQL, if the UPDATE statement involves an alias of the table to
be updated, then the table itself must be added to the FROM list as
well. Otherwise, it is optional. Here, we add it regardless.
-
+
"""
return "FROM " + ', '.join(
t._compiler_dispatch(self, asfrom=True,
def visit_in_op(self, binary, **kw):
kw['literal_binds'] = True
return "%s IN %s" % (
- self.process(binary.left, **kw),
+ self.process(binary.left, **kw),
self.process(binary.right, **kw)
)
def visit_notin_op(self, binary, **kw):
kw['literal_binds'] = True
return "%s NOT IN %s" % (
- self.process(binary.left, **kw),
+ self.process(binary.left, **kw),
self.process(binary.right, **kw)
)
class MSDDLCompiler(compiler.DDLCompiler):
def get_column_specification(self, column, **kwargs):
- colspec = (self.preparer.format_column(column) + " "
+ colspec = (self.preparer.format_column(column) + " "
+ self.dialect.type_compiler.process(column.type))
if column.nullable is not None:
if column.table is None:
raise exc.CompileError(
- "mssql requires Table-bound columns "
+ "mssql requires Table-bound columns "
"in order to generate DDL")
seq_col = column.table._autoincrement_column
reserved_words = RESERVED_WORDS
def __init__(self, dialect):
- super(MSIdentifierPreparer, self).__init__(dialect, initial_quote='[',
+ super(MSIdentifierPreparer, self).__init__(dialect, initial_quote='[',
final_quote=']')
def _escape_identifier(self, value):
super(MSDialect, self).initialize(connection)
if self.server_version_info[0] not in range(8, 17):
# FreeTDS with version 4.2 seems to report here
- # a number like "95.10.255". Don't know what
+ # a number like "95.10.255". Don't know what
# that is. So emit warning.
util.warn(
"Unrecognized server version info '%s'. Version specific "
"join sys.schemas as sch on sch.schema_id=tab.schema_id "
"where tab.name = :tabname "
"and sch.name=:schname "
- "and ind.is_primary_key=0",
+ "and ind.is_primary_key=0",
bindparams=[
- sql.bindparam('tabname', tablename,
+ sql.bindparam('tabname', tablename,
sqltypes.String(convert_unicode=True)),
- sql.bindparam('schname', current_schema,
+ sql.bindparam('schname', current_schema,
sqltypes.String(convert_unicode=True))
],
typemap = {
"where tab.name=:tabname "
"and sch.name=:schname",
bindparams=[
- sql.bindparam('tabname', tablename,
+ sql.bindparam('tabname', tablename,
sqltypes.String(convert_unicode=True)),
- sql.bindparam('schname', current_schema,
+ sql.bindparam('schname', current_schema,
sqltypes.String(convert_unicode=True))
],
typemap = {
"views.schema_id=sch.schema_id and "
"views.name=:viewname and sch.name=:schname",
bindparams=[
- sql.bindparam('viewname', viewname,
+ sql.bindparam('viewname', viewname,
sqltypes.String(convert_unicode=True)),
- sql.bindparam('schname', current_schema,
+ sql.bindparam('schname', current_schema,
sqltypes.String(convert_unicode=True))
]
)
row = c.fetchone()
if row is None:
break
- (name, type, nullable, charlen,
+ (name, type, nullable, charlen,
numericprec, numericscale, default, collation) = (
row[columns.c.column_name],
row[columns.c.data_type],
coltype = self.ischema_names.get(type, None)
kwargs = {}
- if coltype in (MSString, MSChar, MSNVarchar, MSNChar, MSText,
+ if coltype in (MSString, MSChar, MSNVarchar, MSNChar, MSText,
MSNText, MSBinary, MSVarBinary,
sqltypes.LargeBinary):
kwargs['length'] = charlen
if coltype is None:
util.warn(
- "Did not recognize type '%s' of column '%s'" %
+ "Did not recognize type '%s' of column '%s'" %
(type, name))
coltype = sqltypes.NULLTYPE
else:
colmap[col['name']] = col
# We also run an sp_columns to check for identity columns:
cursor = connection.execute("sp_columns @table_name = '%s', "
- "@table_owner = '%s'"
+ "@table_owner = '%s'"
% (tablename, current_schema))
ic = None
while True:
if ic is not None and self.server_version_info >= MS_2005_VERSION:
table_fullname = "%s.%s" % (current_schema, tablename)
cursor = connection.execute(
- "select ident_seed('%s'), ident_incr('%s')"
+ "select ident_seed('%s'), ident_incr('%s')"
% (table_fullname, table_fullname)
)
RR = ischema.ref_constraints
# information_schema.table_constraints
TC = ischema.constraints
- # information_schema.constraint_column_usage:
+ # information_schema.constraint_column_usage:
# the constrained column
- C = ischema.key_constraints.alias('C')
- # information_schema.constraint_column_usage:
+ C = ischema.key_constraints.alias('C')
+ # information_schema.constraint_column_usage:
# the referenced column
- R = ischema.key_constraints.alias('R')
+ R = ischema.key_constraints.alias('R')
# Primary key constraints
s = sql.select([C.c.column_name, TC.c.constraint_type],
RR = ischema.ref_constraints
# information_schema.table_constraints
TC = ischema.constraints
- # information_schema.constraint_column_usage:
+ # information_schema.constraint_column_usage:
# the constrained column
- C = ischema.key_constraints.alias('C')
- # information_schema.constraint_column_usage:
+ C = ischema.key_constraints.alias('C')
+ # information_schema.constraint_column_usage:
# the referenced column
- R = ischema.key_constraints.alias('R')
+ R = ischema.key_constraints.alias('R')
# Foreign key constraints
s = sql.select([C.c.column_name,
For this reason, the mxODBC dialect uses the "native" mode by default only for
INSERT, UPDATE, and DELETE statements, and uses the escaped string mode for
-all other statements.
+all other statements.
This behavior can be controlled via
:meth:`~sqlalchemy.sql.expression.Executable.execution_options` using the
from sqlalchemy import types as sqltypes
from sqlalchemy.connectors.mxodbc import MxODBCConnector
from sqlalchemy.dialects.mssql.pyodbc import MSExecutionContext_pyodbc
-from sqlalchemy.dialects.mssql.base import (MSDialect,
+from sqlalchemy.dialects.mssql.base import (MSDialect,
MSSQLStrictCompiler,
_MSDateTime, _MSDate, TIME)
Google App Engine connections appear to be randomly recycled,
so the dialect does not pool connections. The :class:`.NullPool`
-implementation is installed within the :class:`.Engine` by
+implementation is installed within the :class:`.Engine` by
default.
"""
import re
-class MySQLDialect_gaerdbms(MySQLDialect_mysqldb):
+class MySQLDialect_gaerdbms(MySQLDialect_mysqldb):
- @classmethod
- def dbapi(cls):
+ @classmethod
+ def dbapi(cls):
from google.appengine.api import rdbms
return rdbms
MySQL-Python Compatibility
--------------------------
-The pymysql DBAPI is a pure Python port of the MySQL-python (MySQLdb) driver,
-and targets 100% compatibility. Most behavioral notes for MySQL-python apply to
+The pymysql DBAPI is a pure Python port of the MySQL-python (MySQLdb) driver,
+and targets 100% compatibility. Most behavioral notes for MySQL-python apply to
the pymysql driver as well.
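As a hedged sketch, a connect string uses the ``mysql+pymysql`` scheme; the
credentials and database name below are illustrative::

    engine = create_engine("mysql+pymysql://scott:tiger@localhost/test")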
"""
-from sqlalchemy.dialects.mysql.mysqldb import MySQLDialect_mysqldb
+from sqlalchemy.dialects.mysql.mysqldb import MySQLDialect_mysqldb
-class MySQLDialect_pymysql(MySQLDialect_mysqldb):
+class MySQLDialect_pymysql(MySQLDialect_mysqldb):
driver = 'pymysql'
description_encoding = None
- @classmethod
- def dbapi(cls):
- return __import__('pymysql')
+ @classmethod
+ def dbapi(cls):
+ return __import__('pymysql')
-dialect = MySQLDialect_pymysql
\ No newline at end of file
+dialect = MySQLDialect_pymysql
\ No newline at end of file
Connect Arguments
-----------------
-The dialect supports several :func:`~sqlalchemy.create_engine()` arguments which
+The dialect supports several :func:`~sqlalchemy.create_engine()` arguments which
affect the behavior of the dialect regardless of driver in use.
* *use_ansi* - Use ANSI JOIN constructs (see the section on Oracle 8). Defaults
SQLAlchemy Table objects which include integer primary keys are usually assumed to have
"autoincrementing" behavior, meaning they can generate their own primary key values upon
-INSERT. Since Oracle has no "autoincrement" feature, SQLAlchemy relies upon sequences
+INSERT. Since Oracle has no "autoincrement" feature, SQLAlchemy relies upon sequences
to produce these values. With the Oracle dialect, *a sequence must always be explicitly
-specified to enable autoincrement*. This is divergent with the majority of documentation
+specified to enable autoincrement*. This is divergent with the majority of documentation
examples which assume the usage of an autoincrement-capable database. To specify sequences,
use the sqlalchemy.schema.Sequence object which is passed to a Column construct::
- t = Table('mytable', metadata,
+ t = Table('mytable', metadata,
Column('id', Integer, Sequence('id_seq'), primary_key=True),
Column(...), ...
)
This step is also required when using table reflection, i.e. autoload=True::
- t = Table('mytable', metadata,
+ t = Table('mytable', metadata,
Column('id', Integer, Sequence('id_seq'), primary_key=True),
autoload=True
- )
+ )
Identifier Casing
-----------------
-In Oracle, the data dictionary represents all case insensitive identifier names
+In Oracle, the data dictionary represents all case insensitive identifier names
using UPPERCASE text. SQLAlchemy on the other hand considers an all-lower case identifier
name to be case insensitive. The Oracle dialect converts all case insensitive identifiers
to and from those two formats during schema level communication, such as reflection of
-tables and indexes. Using an UPPERCASE name on the SQLAlchemy side indicates a
+tables and indexes. Using an UPPERCASE name on the SQLAlchemy side indicates a
case sensitive identifier, and SQLAlchemy will quote the name - this will cause mismatches
against data dictionary data received from Oracle, so unless identifier names have been
truly created as case sensitive (i.e. using quoted names), all lowercase names should be
Also note that Oracle supports unicode data through the NVARCHAR and NCLOB data types.
When using the SQLAlchemy Unicode and UnicodeText types, these DDL types will be used
-within CREATE TABLE statements. Usage of VARCHAR2 and CLOB with unicode text still
+within CREATE TABLE statements. Usage of VARCHAR2 and CLOB with unicode text still
requires NLS_LANG to be set.
LIMIT/OFFSET Support
--------------------
-Oracle has no support for the LIMIT or OFFSET keywords. SQLAlchemy uses
-a wrapped subquery approach in conjunction with ROWNUM. The exact methodology
+Oracle has no support for the LIMIT or OFFSET keywords. SQLAlchemy uses
+a wrapped subquery approach in conjunction with ROWNUM. The exact methodology
is taken from
-http://www.oracle.com/technology/oramag/oracle/06-sep/o56asktom.html .
+http://www.oracle.com/technology/oramag/oracle/06-sep/o56asktom.html .
There are two options which affect its behavior:
optimization directive, specify ``optimize_limits=True`` to :func:`.create_engine`.
* the values passed for the limit/offset are sent as bound parameters. Some users have observed
that Oracle produces a poor query plan when the values are sent as binds and not
- rendered literally. To render the limit/offset values literally within the SQL
+ rendered literally. To render the limit/offset values literally within the SQL
statement, specify ``use_binds_for_limits=False`` to :func:`.create_engine`.
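As a hedged sketch, both flags are passed to :func:`.create_engine` like any
other dialect-level argument; the connect string below is illustrative::

    engine = create_engine("oracle://scott:tiger@tnsname",
                           optimize_limits=True,
                           use_binds_for_limits=False)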
-Some users have reported better performance when the entirely different approach of a
-window query is used, i.e. ROW_NUMBER() OVER (ORDER BY), to provide LIMIT/OFFSET (note
-that the majority of users don't observe this). To suit this case the
-method used for LIMIT/OFFSET can be replaced entirely. See the recipe at
+Some users have reported better performance when the entirely different approach of a
+window query is used, i.e. ROW_NUMBER() OVER (ORDER BY), to provide LIMIT/OFFSET (note
+that the majority of users don't observe this). To suit this case the
+method used for LIMIT/OFFSET can be replaced entirely. See the recipe at
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowFunctionsByDefault
which installs a select compiler that overrides the generation of limit/offset with
a window function.
ON UPDATE CASCADE
-----------------
-Oracle doesn't have native ON UPDATE CASCADE functionality. A trigger based solution
+Oracle doesn't have native ON UPDATE CASCADE functionality. A trigger based solution
is available at http://asktom.oracle.com/tkyte/update_cascade/index.html .
When using the SQLAlchemy ORM, the ORM has limited ability to manually issue
-cascading updates - specify ForeignKey objects using the
+cascading updates - specify ForeignKey objects using the
"deferrable=True, initially='deferred'" keyword arguments,
and specify "passive_updates=False" on each relationship().
JOIN phrases into the WHERE clause, and in the case of LEFT OUTER JOIN
makes use of Oracle's (+) operator.
-* the NVARCHAR2 and NCLOB datatypes are no longer generated as DDL when
- the :class:`~sqlalchemy.types.Unicode` is used - VARCHAR2 and CLOB are issued
+* the NVARCHAR2 and NCLOB datatypes are no longer generated as DDL when
+ the :class:`~sqlalchemy.types.Unicode` is used - VARCHAR2 and CLOB are issued
  instead. This is because these types don't seem to work correctly on Oracle 8
- even though they are available. The :class:`~sqlalchemy.types.NVARCHAR`
+ even though they are available. The :class:`~sqlalchemy.types.NVARCHAR`
and :class:`~sqlalchemy.dialects.oracle.NCLOB` types will always generate NVARCHAR2 and NCLOB.
-* the "native unicode" mode is disabled when using cx_oracle, i.e. SQLAlchemy
+* the "native unicode" mode is disabled when using cx_oracle, i.e. SQLAlchemy
encodes all Python unicode objects to "string" before passing in as bind parameters.
Synonym/DBLINK Reflection
-------------------------
When using reflection with Table objects, the dialect can optionally search for tables
-indicated by synonyms that reference DBLINK-ed tables by passing the flag
-oracle_resolve_synonyms=True as a keyword argument to the Table construct. If DBLINK
+indicated by synonyms that reference DBLINK-ed tables by passing the flag
+oracle_resolve_synonyms=True as a keyword argument to the Table construct. If DBLINK
is not in use this flag should be left off.
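A minimal sketch of passing the flag during reflection; the table, metadata and
engine names here are illustrative::

    some_table = Table('some_table', metadata,
                       autoload=True, autoload_with=engine,
                       oracle_resolve_synonyms=True)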
"""
class INTERVAL(sqltypes.TypeEngine):
__visit_name__ = 'INTERVAL'
- def __init__(self,
- day_precision=None,
+ def __init__(self,
+ day_precision=None,
second_precision=None):
"""Construct an INTERVAL.
def visit_INTERVAL(self, type_):
return "INTERVAL DAY%s TO SECOND%s" % (
- type_.day_precision is not None and
+ type_.day_precision is not None and
"(%d)" % type_.day_precision or
"",
- type_.second_precision is not None and
+ type_.second_precision is not None and
"(%d)" % type_.second_precision or
"",
)
else:
return "%(name)s(%(precision)s, %(scale)s)" % {'name':name,'precision': precision, 'scale' : scale}
- def visit_string(self, type_):
+ def visit_string(self, type_):
return self.visit_VARCHAR2(type_)
def visit_VARCHAR2(self, type_):
def _visit_varchar(self, type_, n, num):
if not n and self.dialect._supports_char_length:
return "VARCHAR%(two)s(%(length)s CHAR)" % {
- 'length' : type_.length,
+ 'length' : type_.length,
'two':num}
else:
- return "%(n)sVARCHAR%(two)s(%(length)s)" % {'length' : type_.length,
+ return "%(n)sVARCHAR%(two)s(%(length)s)" % {'length' : type_.length,
'two':num, 'n':n}
def visit_text(self, type_):
return ""
def default_from(self):
- """Called when a ``SELECT`` statement has no froms,
+ """Called when a ``SELECT`` statement has no froms,
and no ``FROM`` clause is to be appended.
The Oracle compiler tacks a "FROM DUAL" to the statement.
if constraint.ondelete is not None:
text += " ON DELETE %s" % constraint.ondelete
- # oracle has no ON UPDATE CASCADE -
+ # oracle has no ON UPDATE CASCADE -
        # it's only available via triggers http://asktom.oracle.com/tkyte/update_cascade/index.html
if constraint.onupdate is not None:
util.warn(
class OracleExecutionContext(default.DefaultExecutionContext):
def fire_sequence(self, seq, type_):
- return self._execute_scalar("SELECT " +
- self.dialect.identifier_preparer.format_sequence(seq) +
+ return self._execute_scalar("SELECT " +
+ self.dialect.identifier_preparer.format_sequence(seq) +
".nextval FROM DUAL", type_)
class OracleDialect(default.DefaultDialect):
reflection_options = ('oracle_resolve_synonyms', )
- def __init__(self,
- use_ansi=True,
- optimize_limits=False,
+ def __init__(self,
+ use_ansi=True,
+ optimize_limits=False,
use_binds_for_limits=True,
**kwargs):
default.DefaultDialect.__init__(self, **kwargs)
if resolve_synonyms:
actual_name, owner, dblink, synonym = self._resolve_synonym(
- connection,
- desired_owner=self.denormalize_name(schema),
+ connection,
+ desired_owner=self.denormalize_name(schema),
desired_synonym=self.denormalize_name(table_name)
)
else:
char_length_col = 'char_length'
else:
char_length_col = 'data_length'
-
+
c = connection.execute(sql.text(
"SELECT column_name, data_type, %(char_length_col)s, data_precision, data_scale, "
"nullable, data_default FROM ALL_TAB_COLUMNS%(dblink)s "
- "WHERE table_name = :table_name AND owner = :owner "
+ "WHERE table_name = :table_name AND owner = :owner "
"ORDER BY column_id" % {'dblink': dblink, 'char_length_col':char_length_col}),
table_name=table_name, owner=schema)
coltype = NUMBER(precision, scale)
elif coltype in ('VARCHAR2', 'NVARCHAR2', 'CHAR'):
coltype = self.ischema_names.get(coltype)(length)
- elif 'WITH TIME ZONE' in coltype:
+ elif 'WITH TIME ZONE' in coltype:
coltype = TIMESTAMP(timezone=True)
else:
coltype = re.sub(r'\(\d+\)', '', coltype)
indexes = []
q = sql.text("""
SELECT a.index_name, a.column_name, b.uniqueness
- FROM ALL_IND_COLUMNS%(dblink)s a,
- ALL_INDEXES%(dblink)s b
+ FROM ALL_IND_COLUMNS%(dblink)s a,
+ ALL_INDEXES%(dblink)s b
WHERE
a.index_name = b.index_name
AND a.table_owner = b.table_owner
if resolve_synonyms:
ref_remote_name, ref_remote_owner, ref_dblink, ref_synonym = \
self._resolve_synonym(
- connection,
- desired_owner=self.denormalize_name(remote_owner),
+ connection,
+ desired_owner=self.denormalize_name(remote_owner),
desired_table=self.denormalize_name(remote_table)
)
if ref_synonym:
------
The psycopg2 driver is available at http://pypi.python.org/pypi/psycopg2/ .
-The dialect has several behaviors which are specifically tailored towards compatibility
+The dialect has several behaviors which are specifically tailored towards compatibility
with this module.
Note that psycopg1 is **not** supported.
create_engine("postgresql+psycopg2://user:password@/dbname")
By default, psycopg2 connects to a Unix-domain socket
-in ``/tmp``, or whatever socket directory was specified when PostgreSQL
+in ``/tmp``, or whatever socket directory was specified when PostgreSQL
was built. This value can be overridden by passing a pathname to psycopg2,
using ``host`` as an additional keyword argument::
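    # the socket directory below is illustrative; substitute the one your
    # PostgreSQL installation was built with
    create_engine("postgresql+psycopg2://user:password@/dbname?host=/var/pgsql_socket")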
Per-Statement/Connection Execution Options
-------------------------------------------
-The following DBAPI-specific options are respected when used with
+The following DBAPI-specific options are respected when used with
:meth:`.Connection.execution_options`, :meth:`.Executable.execution_options`,
:meth:`.Query.execution_options`, in addition to those not specific to DBAPIs:
-* isolation_level - Set the transaction isolation level for the lifespan of a
+* isolation_level - Set the transaction isolation level for the lifespan of a
:class:`.Connection` (can only be set on a connection, not a statement or query).
This includes the options ``SERIALIZABLE``, ``READ COMMITTED``,
``READ UNCOMMITTED`` and ``REPEATABLE READ``.
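As a hedged sketch, the option is typically applied to a :class:`.Connection`
right after connecting (the engine is assumed to already exist)::

    conn = engine.connect()
    conn = conn.execution_options(isolation_level="READ COMMITTED")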
extension, such that the DBAPI receives and returns all strings as Python
Unicode objects directly - SQLAlchemy passes these values through without
change. Psycopg2 here will encode/decode string values based on the
-current "client encoding" setting; by default this is the value in
-the ``postgresql.conf`` file, which often defaults to ``SQL_ASCII``.
+current "client encoding" setting; by default this is the value in
+the ``postgresql.conf`` file, which often defaults to ``SQL_ASCII``.
Typically, this can be changed to ``utf-8``, as a more useful default::
#client_encoding = sql_ascii # actually, defaults to database
A second way to affect the client encoding is to set it within Psycopg2
locally. SQLAlchemy will call psycopg2's ``set_client_encoding()``
method (see: http://initd.org/psycopg/docs/connection.html#connection.set_client_encoding)
-on all new connections based on the value passed to
+on all new connections based on the value passed to
:func:`.create_engine` using the ``client_encoding`` parameter::
engine = create_engine("postgresql://user:pass@host/dbname", client_encoding='utf8')
SQLAlchemy can also be instructed to skip the usage of the psycopg2
``UNICODE`` extension and to instead utilize its own unicode encode/decode
-services, which are normally reserved only for those DBAPIs that don't
-fully support unicode directly. Passing ``use_native_unicode=False``
+services, which are normally reserved only for those DBAPIs that don't
+fully support unicode directly. Passing ``use_native_unicode=False``
to :func:`.create_engine` will disable usage of ``psycopg2.extensions.UNICODE``.
-SQLAlchemy will instead encode data itself into Python bytestrings on the way
+SQLAlchemy will instead encode data itself into Python bytestrings on the way
in and coerce from bytes on the way back,
-using the value of the :func:`.create_engine` ``encoding`` parameter, which
+using the value of the :func:`.create_engine` ``encoding`` parameter, which
defaults to ``utf-8``.
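A sketch of disabling the extension; the connect string below is illustrative::

    engine = create_engine("postgresql+psycopg2://user:pass@host/dbname",
                           use_native_unicode=False)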
SQLAlchemy's own unicode encode/decode functionality is steadily becoming
-obsolete as more DBAPIs support unicode fully along with the approach of
+obsolete as more DBAPIs support unicode fully along with the approach of
Python 3; in modern usage psycopg2 should be relied upon to handle unicode.
Transactions
NOTICE logging
---------------
-The psycopg2 dialect will log Postgresql NOTICE messages via the
+The psycopg2 dialect will log Postgresql NOTICE messages via the
``sqlalchemy.dialects.postgresql`` logger::
import logging
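    # raising this logger to INFO will emit the NOTICE messages as they arrive
    logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)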
(self.compiled and isinstance(self.compiled.statement, expression.Selectable) \
or \
(
- (not self.compiled or
- isinstance(self.compiled.statement, expression.TextClause))
+ (not self.compiled or
+ isinstance(self.compiled.statement, expression.TextClause))
and self.statement and SERVER_SIDE_CURSOR_RE.match(self.statement))
)
)
def _log_notices(self, cursor):
for notice in cursor.connection.notices:
- # NOTICE messages have a
+ # NOTICE messages have a
# newline character at the end
logger.info(notice.rstrip())
}
)
- def __init__(self, server_side_cursors=False, use_native_unicode=True,
+ def __init__(self, server_side_cursors=False, use_native_unicode=True,
client_encoding=None, **kwargs):
PGDialect.__init__(self, **kwargs)
self.server_side_cursors = server_side_cursors
self.supports_unicode_binds = use_native_unicode
self.client_encoding = client_encoding
if self.dbapi and hasattr(self.dbapi, '__version__'):
- m = re.match(r'(\d+)\.(\d+)(?:\.(\d+))?',
+ m = re.match(r'(\d+)\.(\d+)(?:\.(\d+))?',
self.dbapi.__version__)
if m:
self.psycopg2_version = tuple(
- int(x)
- for x in m.group(1, 2, 3)
+ int(x)
+ for x in m.group(1, 2, 3)
if x is not None)
@classmethod
def _isolation_lookup(self):
extensions = __import__('psycopg2.extensions').extensions
return {
- 'READ COMMITTED':extensions.ISOLATION_LEVEL_READ_COMMITTED,
- 'READ UNCOMMITTED':extensions.ISOLATION_LEVEL_READ_UNCOMMITTED,
+ 'READ COMMITTED':extensions.ISOLATION_LEVEL_READ_COMMITTED,
+ 'READ UNCOMMITTED':extensions.ISOLATION_LEVEL_READ_UNCOMMITTED,
'REPEATABLE READ':extensions.ISOLATION_LEVEL_REPEATABLE_READ,
'SERIALIZABLE':extensions.ISOLATION_LEVEL_SERIALIZABLE
}
except KeyError:
raise exc.ArgumentError(
"Invalid value '%s' for isolation_level. "
- "Valid isolation levels for %s are %s" %
+ "Valid isolation levels for %s are %s" %
(level, self.name, ", ".join(self._isolation_lookup))
- )
+ )
connection.set_isolation_level(level)
def is_disconnect(self, e, connection, cursor):
if isinstance(e, self.dbapi.OperationalError):
# these error messages from libpq: interfaces/libpq/fe-misc.c.
- # TODO: these are sent through gettext in libpq and we can't
- # check within other locales - consider using connection.closed
+ # TODO: these are sent through gettext in libpq and we can't
+ # check within other locales - consider using connection.closed
return 'closed the connection' in str(e) or \
'connection not open' in str(e) or \
'could not receive data from server' in str(e)
return 'connection already closed' in str(e) or \
'cursor already closed' in str(e)
elif isinstance(e, self.dbapi.ProgrammingError):
- # not sure where this path is originally from, it may
+ # not sure where this path is originally from, it may
# be obsolete. It really says "losed", not "closed".
return "losed the connection unexpectedly" in str(e)
else:
# sybase/base.py
# Copyright (C) 2010-2011 the SQLAlchemy authors and contributors <see AUTHORS file>
# get_select_precolumns(), limit_clause() implementation
-# copyright (C) 2007 Fisch Asset Management
-# AG http://www.fam.ch, with coding by Alexander Houben
+# copyright (C) 2007 Fisch Asset Management
+# AG http://www.fam.ch, with coding by Alexander Houben
# alexander.houben@thor-solutions.ch
#
# This module is part of SQLAlchemy and is released under
.. note::
The Sybase dialect functions on current SQLAlchemy versions
- but is not regularly tested, and may have many issues and
+ but is not regularly tested, and may have many issues and
caveats not currently handled. In particular, the table
and database reflection features are not implemented.
class IMAGE(sqltypes.LargeBinary):
__visit_name__ = 'IMAGE'
-
+
class SybaseTypeCompiler(compiler.GenericTypeCompiler):
def visit_large_binary(self, type_):
self._enable_identity_insert = False
if self._enable_identity_insert:
- self.cursor.execute("SET IDENTITY_INSERT %s ON" %
+ self.cursor.execute("SET IDENTITY_INSERT %s ON" %
self.dialect.identifier_preparer.format_table(tbl))
if self.isddl:
# TODO: to enhance this, we can detect "ddl in tran" on the
- # database settings. this error message should be improved to
+ # database settings. this error message should be improved to
# include a note about that.
if not self.should_autocommit:
raise exc.InvalidRequestError(
"AUTOCOMMIT (Assuming no Sybase 'ddl in tran')")
self.set_ddl_autocommit(
- self.root_connection.connection.connection,
+ self.root_connection.connection.connection,
True)
field, self.process(extract.expr, **kw))
def for_update_clause(self, select):
- # "FOR UPDATE" is only allowed on "DECLARE CURSOR"
+ # "FOR UPDATE" is only allowed on "DECLARE CURSOR"
# which SQLAlchemy doesn't use
return ''
class PoolListener(object):
"""Hooks into the lifecycle of connections in a :class:`.Pool`.
- .. note::
-
+ .. note::
+
:class:`.PoolListener` is deprecated. Please
refer to :class:`.PoolEvents`.
class MyListener(PoolListener):
def connect(self, dbapi_con, con_record):
'''perform connect operations'''
- # etc.
+ # etc.
# create a new pool with a listener
p = QueuePool(..., listeners=[MyListener()])
class ConnectionProxy(object):
"""Allows interception of statement execution by Connections.
- .. note::
-
+ .. note::
+
:class:`.ConnectionProxy` is deprecated. Please
refer to :class:`.ConnectionEvents`.
event.listen(self, 'before_execute', adapt_execute)
- def adapt_cursor_execute(conn, cursor, statement,
+ def adapt_cursor_execute(conn, cursor, statement,
parameters,context, executemany, ):
def execute_wrapper(
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
-"""Heuristics related to join conditions as used in
+"""Heuristics related to join conditions as used in
:func:`.relationship`.
Provides the :class:`.JoinCondition` object, which encapsulates
from .. import sql, util, exc as sa_exc, schema
from ..sql.util import (
- ClauseAdapter,
+ ClauseAdapter,
join_condition, _shallow_annotate, visit_binary_product,
_deep_deannotate, find_tables
)
from .interfaces import MANYTOMANY, MANYTOONE, ONETOMANY
def remote(expr):
- """Annotate a portion of a primaryjoin expression
+ """Annotate a portion of a primaryjoin expression
with a 'remote' annotation.
-
+
:func:`.remote`, :func:`.foreign`, and :func:`.remote_foreign`
- are intended to be used with
- :func:`.relationship` in conjunction with a
+ are intended to be used with
+ :func:`.relationship` in conjunction with a
``primaryjoin`` expression which contains
indirect equality conditions, meaning the comparison
of mapped columns involves extraneous SQL functions
- such as :func:`.cast`. They can also be used in
+ such as :func:`.cast`. They can also be used in
lieu of the ``foreign_keys`` and ``remote_side``
- parameters to :func:`.relationship`, if a
+ parameters to :func:`.relationship`, if a
primaryjoin expression is also being sent explicitly.
-
+
Below, a mapped class ``DNSRecord`` relates to the
``DHCPHost`` class using a primaryjoin that casts
the ``content`` column to a string. The :func:`.foreign`
- and :func:`.remote` annotation functions are used
+ and :func:`.remote` annotation functions are used
to mark with full accuracy those mapped columns that
are significant to the :func:`.relationship`, in terms
of how they are joined::
from sqlalchemy import cast, String
from sqlalchemy.orm import remote, foreign
from sqlalchemy.dialects.postgresql import INET
-
+
class DNSRecord(Base):
__tablename__ = 'dns'
-
+
id = Column(Integer, primary_key=True)
content = Column(INET)
dhcphost = relationship(DHCPHost,
- primaryjoin=cast(foreign(content), String) ==
+ primaryjoin=cast(foreign(content), String) ==
remote(DHCPHost.ip_address)
)
.. versionadded:: 0.8
See also:
-
+
* :func:`.foreign`
-
+
* :func:`.remote_foreign`
-
+
"""
return _annotate_columns(expression._clause_element_as_expr(expr), {"remote":True})
def foreign(expr):
- """Annotate a portion of a primaryjoin expression
+ """Annotate a portion of a primaryjoin expression
with a 'foreign' annotation.
See the example at :func:`.remote`.
return _annotate_columns(expression._clause_element_as_expr(expr), {"foreign":True})
def remote_foreign(expr):
- """Annotate a portion of a primaryjoin expression
+ """Annotate a portion of a primaryjoin expression
with a 'remote' and 'foreign' annotation.
-
+
See the example at :func:`.remote`.
.. versionadded:: 0.8
"""
- return _annotate_columns(expr, {"foreign":True,
+ return _annotate_columns(expr, {"foreign":True,
"remote":True})
def _annotate_columns(element, annotations):
return element
class JoinCondition(object):
- def __init__(self,
- parent_selectable,
+ def __init__(self,
+ parent_selectable,
child_selectable,
parent_local_selectable,
child_local_selectable,
if self.secondaryjoin is None:
self.secondaryjoin = \
join_condition(
- self.child_selectable,
+ self.child_selectable,
self.secondary,
a_subset=self.child_local_selectable,
consider_as_foreign_keys=\
if self.primaryjoin is None:
self.primaryjoin = \
join_condition(
- self.parent_selectable,
- self.secondary,
+ self.parent_selectable,
+ self.secondary,
a_subset=self.parent_local_selectable,
consider_as_foreign_keys=\
self.consider_as_foreign_keys or None
if self.primaryjoin is None:
self.primaryjoin = \
join_condition(
- self.parent_selectable,
- self.child_selectable,
+ self.parent_selectable,
+ self.child_selectable,
a_subset=self.parent_local_selectable,
consider_as_foreign_keys=\
self.consider_as_foreign_keys or None
@util.memoized_property
def primaryjoin_reverse_remote(self):
- """Return the primaryjoin condition suitable for the
- "reverse" direction.
-
+ """Return the primaryjoin condition suitable for the
+ "reverse" direction.
+
If the primaryjoin was delivered here with pre-existing
"remote" annotations, the local/remote annotations
are reversed. Otherwise, the local/remote annotations
are removed.
-
+
"""
if self._has_remote_annotations:
def replace(element):
else:
if self._has_foreign_annotations:
# TODO: coverage
- return _deep_deannotate(self.primaryjoin,
+ return _deep_deannotate(self.primaryjoin,
values=("local", "remote"))
else:
return _deep_deannotate(self.primaryjoin)
"""Annotate the primaryjoin and secondaryjoin
structures with 'foreign' annotations marking columns
considered as foreign.
-
+
"""
if self._has_foreign_annotations:
return
def _refers_to_parent_table(self):
"""Return True if the join condition contains column
comparisons where both columns are in both tables.
-
+
"""
pt = self.parent_selectable
mt = self.child_selectable
"""Annotate the primaryjoin and secondaryjoin
structures with 'remote' annotations marking columns
considered as part of the 'remote' side.
-
+
"""
if self._has_remote_annotations:
return
def _annotate_remote_secondary(self):
"""annotate 'remote' in primaryjoin, secondaryjoin
when 'secondary' is present.
-
+
"""
def repl(element):
if self.secondary.c.contains_column(element):
def _annotate_selfref(self, fn):
"""annotate 'remote' in primaryjoin, secondaryjoin
when the relationship is detected as self-referential.
-
+
"""
def visit_binary(binary):
equated = binary.left.compare(binary.right)
self._warn_non_column_elements()
self.primaryjoin = visitors.cloned_traverse(
- self.primaryjoin, {},
+ self.primaryjoin, {},
{"binary":visit_binary})
def _annotate_remote_from_args(self):
"""annotate 'remote' in primaryjoin, secondaryjoin
when the 'remote_side' or '_local_remote_pairs'
arguments are used.
-
+
"""
if self._local_remote_pairs:
if self._remote_side:
def _annotate_remote_with_overlap(self):
"""annotate 'remote' in primaryjoin, secondaryjoin
- when the parent/child tables have some set of
+ when the parent/child tables have some set of
tables in common, though is not a fully self-referential
relationship.
-
+
"""
def visit_binary(binary):
- binary.left, binary.right = proc_left_right(binary.left,
+ binary.left, binary.right = proc_left_right(binary.left,
binary.right)
- binary.right, binary.left = proc_left_right(binary.right,
+ binary.right, binary.left = proc_left_right(binary.right,
binary.left)
def proc_left_right(left, right):
if isinstance(left, expression.ColumnClause) and \
return left, right
self.primaryjoin = visitors.cloned_traverse(
- self.primaryjoin, {},
+ self.primaryjoin, {},
{"binary":visit_binary})
def _annotate_remote_distinct_selectables(self):
"""annotate 'remote' in primaryjoin, secondaryjoin
- when the parent/child tables are entirely
+ when the parent/child tables are entirely
separate.
-
+
"""
def repl(element):
if self.child_selectable.c.contains_column(element) and \
)
def _annotate_local(self):
- """Annotate the primaryjoin and secondaryjoin
+ """Annotate the primaryjoin and secondaryjoin
structures with 'local' annotations.
-
- This annotates all column elements found
- simultaneously in the parent table
- and the join condition that don't have a
- 'remote' annotation set up from
+
+ This annotates all column elements found
+ simultaneously in the parent table
+ and the join condition that don't have a
+ 'remote' annotation set up from
_annotate_remote() or user-defined.
-
+
"""
if self._has_annotation(self.primaryjoin, "local"):
return
if self._local_remote_pairs:
- local_side = util.column_set([l for (l, r)
+ local_side = util.column_set([l for (l, r)
in self._local_remote_pairs])
else:
local_side = util.column_set(self.parent_selectable.c)
% (self.prop, ))
def _check_foreign_cols(self, join_condition, primary):
- """Check the foreign key columns collected and emit error
+ """Check the foreign key columns collected and emit error
messages."""
can_sync = False
return
# from here below is just determining the best error message
- # to report. Check for a join condition using any operator
+ # to report. Check for a join condition using any operator
# (not just ==), perhaps they need to turn on "viewonly=True".
if self.support_sync and has_foreign and not can_sync:
err = "Could not locate any simple equality expressions "\
"involving locally mapped foreign key columns for "\
"%s join condition "\
"'%s' on relationship %s." % (
- primary and 'primary' or 'secondary',
- join_condition,
+ primary and 'primary' or 'secondary',
+ join_condition,
self.prop
)
err += \
else:
err = "Could not locate any relevant foreign key columns "\
"for %s join condition '%s' on relationship %s." % (
- primary and 'primary' or 'secondary',
- join_condition,
+ primary and 'primary' or 'secondary',
+ join_condition,
self.prop
)
err += \
raise sa_exc.ArgumentError(err)
def _determine_direction(self):
- """Determine if this relationship is one to many, many to one,
+ """Determine if this relationship is one to many, many to one,
many to many.
"""
"nor the child's mapped tables" % self.prop)
def _deannotate_pairs(self, collection):
- """provide deannotation for the various lists of
+ """provide deannotation for the various lists of
pairs, so that using them in hashes doesn't incur
high-overhead __eq__() comparisons against
original columns mapped.
-
+
"""
- return [(x._deannotate(), y._deannotate())
+ return [(x._deannotate(), y._deannotate())
for x, y in collection]
def _setup_pairs(self):
])
- def join_targets(self, source_selectable,
+ def join_targets(self, source_selectable,
dest_selectable,
aliased,
single_crit=None):
# place a barrier on the destination such that
# replacement traversals won't ever dig into it.
- # its internal structure remains fixed
+ # its internal structure remains fixed
# regardless of context.
dest_selectable = _shallow_annotate(
- dest_selectable,
+ dest_selectable,
{'no_replacement_traverse':True})
primaryjoin, secondaryjoin, secondary = self.primaryjoin, \
# adjust the join condition for single table inheritance,
# in the case that the join is to a subclass
- # this is analogous to the
+ # this is analogous to the
# "_adjust_for_single_table_inheritance()" method in Query.
if single_crit is not None:
if self.deannotated_secondaryjoin is None or not reverse_direction:
lazywhere = visitors.replacement_traverse(
- lazywhere, {}, col_to_bind)
+ lazywhere, {}, col_to_bind)
if self.deannotated_secondaryjoin is not None:
secondaryjoin = self.deannotated_secondaryjoin
"""
__all__ = [ 'TypeEngine', 'TypeDecorator', 'AbstractType', 'UserDefinedType',
'INT', 'CHAR', 'VARCHAR', 'NCHAR', 'NVARCHAR','TEXT', 'Text',
- 'FLOAT', 'NUMERIC', 'REAL', 'DECIMAL', 'TIMESTAMP', 'DATETIME',
+ 'FLOAT', 'NUMERIC', 'REAL', 'DECIMAL', 'TIMESTAMP', 'DATETIME',
'CLOB', 'BLOB', 'BINARY', 'VARBINARY', 'BOOLEAN', 'BIGINT', 'SMALLINT',
'INTEGER', 'DATE', 'TIME', 'String', 'Integer', 'SmallInteger',
'BigInteger', 'Numeric', 'Float', 'DateTime', 'Date', 'Time',
import array
class AbstractType(Visitable):
- """Base for all types - not needed except for backwards
+ """Base for all types - not needed except for backwards
compatibility."""
class TypeEngine(AbstractType):
@property
def python_type(self):
"""Return the Python type object expected to be returned
- by instances of this type, if known.
-
+ by instances of this type, if known.
+
Basically, for those types which enforce a return type,
- or are known across the board to do such for all common
+ or are known across the board to do such for all common
        DBAPIs (like ``int`` for example), this method will return that type.
-
+
If a return type is not defined, raises
``NotImplementedError``.
-
+
Note that any type also accommodates NULL in SQL which
means you can also get back ``None`` from any type
in practice.
raise NotImplementedError()
def with_variant(self, type_, dialect_name):
- """Produce a new type object that will utilize the given
+ """Produce a new type object that will utilize the given
type when applied to the dialect of the given name.
e.g.::
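            # an illustrative sketch: a generic String that renders as a MySQL
            # VARCHAR with a collation only when the mysql dialect is in use
            from sqlalchemy.types import String
            from sqlalchemy.dialects import mysql

            string_type = String()
            string_type = string_type.with_variant(
                mysql.VARCHAR(collation='foo'), 'mysql')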
The construction of :meth:`.TypeEngine.with_variant` is always
from the "fallback" type to that which is dialect specific.
The returned type is an instance of :class:`.Variant`, which
- itself provides a :meth:`~sqlalchemy.types.Variant.with_variant` that can
+ itself provides a :meth:`~sqlalchemy.types.Variant.with_variant` that can
be called repeatedly.
:param type_: a :class:`.TypeEngine` that will be selected
as a variant from the originating type, when a dialect
of the given name is in use.
- :param dialect_name: base name of the dialect which uses
+ :param dialect_name: base name of the dialect which uses
this type. (i.e. ``'postgresql'``, ``'mysql'``, etc.)
.. versionadded:: 0.7.2
return rp
def _dialect_info(self, dialect):
- """Return a dialect-specific registry which
+ """Return a dialect-specific registry which
caches a dialect-specific implementation, bind processing
function, and one or more result processing functions."""
return dialect.type_descriptor(self)
def adapt(self, cls, **kw):
- """Produce an "adapted" form of this type, given an "impl" class
- to work with.
+ """Produce an "adapted" form of this type, given an "impl" class
+ to work with.
- This method is used internally to associate generic
+ This method is used internally to associate generic
types with "implementation" types that are specific to a particular
dialect.
"""
to return a type which the value should be coerced into.
The default behavior here is conservative; if the right-hand
- side is already coerced into a SQL type based on its
+ side is already coerced into a SQL type based on its
Python type, it is usually left alone.
End-user functionality extension here should generally be via
def adapt_operator(self, op):
"""A hook which allows the given operator to be adapted
- to something new.
+ to something new.
See also UserDefinedType._adapt_expression(), an as-yet-
semi-public method with greater capability in this regard.
to an existing type.
This method is preferred to direct subclassing of SQLAlchemy's
- built-in types as it ensures that all required functionality of
+ built-in types as it ensures that all required functionality of
the underlying type is kept in place.
Typical usage::
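        # a sketch of a typical subclass; the "PREFIX:" behavior is illustrative
        import sqlalchemy.types as types

        class MyType(types.TypeDecorator):
            '''Prefixes Unicode values with "PREFIX:" on the way in and
            strips it off on the way out.'''

            impl = types.Unicode

            def process_bind_param(self, value, dialect):
                return "PREFIX:" + value

            def process_result_value(self, value, dialect):
                return value[7:]

            def copy(self):
                return MyType(self.impl.length)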
mytable.c.somecol + datetime.date(2009, 5, 15)
- Above, if "somecol" is an ``Integer`` variant, it makes sense that
+ Above, if "somecol" is an ``Integer`` variant, it makes sense that
we're doing date arithmetic, where above is usually interpreted
- by databases as adding a number of days to the given date.
+ by databases as adding a number of days to the given date.
The expression system does the right thing by not attempting to
coerce the "date()" value into an integer-oriented bind parameter.
def __init__(self, *args, **kwargs):
"""Construct a :class:`.TypeDecorator`.
- Arguments sent here are passed to the constructor
+ Arguments sent here are passed to the constructor
of the class assigned to the ``impl`` class level attribute,
assuming the ``impl`` is a callable, and the resulting
object is assigned to the ``self.impl`` instance attribute
(thus overriding the class attribute of the same name).
-
+
If the class level ``impl`` is not a callable (the unusual case),
- it will be assigned to the same instance attribute 'as-is',
+ it will be assigned to the same instance attribute 'as-is',
ignoring those arguments passed to the constructor.
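A minimal sketch of the pass-through behavior described above;
``LimitedString`` is a hypothetical subclass, not part of the library::

    import sqlalchemy.types as types

    class LimitedString(types.TypeDecorator):
        """Hypothetical decorator whose constructor arguments are passed
        through to the ``impl`` class, here :class:`.String`."""

        impl = types.String

    # String(30) is constructed internally and assigned to self.impl
    col_type = LimitedString(30)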
Subclasses can override this to customize the generation
This is an end-user override hook that can be used to provide
differing types depending on the given dialect. It is used
- by the :class:`.TypeDecorator` implementation of :meth:`type_engine`
+ by the :class:`.TypeDecorator` implementation of :meth:`type_engine`
to help determine what type should ultimately be returned
for a given :class:`.TypeDecorator`.
Subclasses override this method to return the
value that should be passed along to the underlying
- :class:`.TypeEngine` object, and from there to the
+ :class:`.TypeEngine` object, and from there to the
DBAPI ``execute()`` method.
The operation could be anything desired to perform custom
- behavior, such as transforming or serializing data.
+ behavior, such as transforming or serializing data.
This could also be used as a hook for validating logic.
This operation should be designed with the reverse operation
from the DBAPI cursor method ``fetchone()`` or similar.
The operation could be anything desired to perform custom
- behavior, such as transforming or serializing data.
+ behavior, such as transforming or serializing data.
This could also be used as a hook for validating logic.
:param value: Data to operate upon, of any type expected by
raise NotImplementedError()
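To show the two hooks working together, a hedged sketch of a round-tripping
decorator; the class name and the JSON payload are illustrative only::

    import json

    import sqlalchemy.types as types

    class JSONEncodedDict(types.TypeDecorator):
        """Hypothetical type that stores Python dicts as JSON strings."""

        impl = types.VARCHAR

        def process_bind_param(self, value, dialect):
            # outgoing: Python dict -> JSON string handed to the DBAPI
            if value is not None:
                value = json.dumps(value)
            return value

        def process_result_value(self, value, dialect):
            # incoming: JSON string from the cursor -> Python dict
            if value is not None:
                value = json.loads(value)
            return value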
def bind_processor(self, dialect):
- """Provide a bound value processing function for the
+ """Provide a bound value processing function for the
given :class:`.Dialect`.
- This is the method that fulfills the :class:`.TypeEngine`
+ This is the method that fulfills the :class:`.TypeEngine`
contract for bound value conversion. :class:`.TypeDecorator`
- will wrap a user-defined implementation of
+ will wrap a user-defined implementation of
:meth:`process_bind_param` here.
User-defined code can override this method directly,
def result_processor(self, dialect, coltype):
"""Provide a result value processing function for the given :class:`.Dialect`.
- This is the method that fulfills the :class:`.TypeEngine`
+ This is the method that fulfills the :class:`.TypeEngine`
contract for result value conversion. :class:`.TypeDecorator`
- will wrap a user-defined implementation of
+ will wrap a user-defined implementation of
:meth:`process_result_value` here.
User-defined code can override this method directly,
"""Suggest a type for a 'coerced' Python value in an expression.
By default, returns self. This method is called by
- the expression system when an object using this type is
+ the expression system when an object using this type is
on the left or right side of an expression against a plain Python
object which does not yet have a SQLAlchemy type assigned::
def copy(self):
"""Produce a copy of this :class:`.TypeDecorator` instance.
- This is a shallow copy and is provided to fulfill part of
+ This is a shallow copy and is provided to fulfill part of
the :class:`.TypeEngine` contract. It usually does not
need to be overridden unless the user-defined :class:`.TypeDecorator`
has local state that should be deep-copied.
def get_dbapi_type(self, dbapi):
"""Return the DBAPI type object represented by this :class:`.TypeDecorator`.
- By default this calls upon :meth:`.TypeEngine.get_dbapi_type` of the
+ By default this calls upon :meth:`.TypeEngine.get_dbapi_type` of the
underlying "impl".
"""
return self.impl.get_dbapi_type(dbapi)
def compare_values(self, x, y):
"""Given two values, compare them for equality.
- By default this calls upon :meth:`.TypeEngine.compare_values`
+ By default this calls upon :meth:`.TypeEngine.compare_values`
of the underlying "impl", which in turn usually
uses the Python equals operator ``==``.
class Variant(TypeDecorator):
"""A wrapping type that selects among a variety of
implementations based on dialect in use.
-
+
The :class:`.Variant` type is typically constructed
using the :meth:`.TypeEngine.with_variant` method.
-
+
.. versionadded:: 0.7.2
-
+
"""
def __init__(self, base, mapping):
"""Construct a new :class:`.Variant`.
-
+
:param base: the base 'fallback' type
- :param mapping: dictionary of string dialect names to :class:`.TypeEngine`
+ :param mapping: dictionary of string dialect names to :class:`.TypeEngine`
instances.
-
+
"""
self.impl = base
self.mapping = mapping
def with_variant(self, type_, dialect_name):
"""Return a new :class:`.Variant` which adds the given
- type + dialect name to the mapping, in addition to the
+ type + dialect name to the mapping, in addition to the
mapping present in this :class:`.Variant`.
-
+
:param type_: a :class:`.TypeEngine` that will be selected
as a variant from the originating type, when a dialect
of the given name is in use.
- :param dialect_name: base name of the dialect which uses
+ :param dialect_name: base name of the dialect which uses
this type. (i.e. ``'postgresql'``, ``'mysql'``, etc.)
"""
__visit_name__ = 'string'
- def __init__(self, length=None, convert_unicode=False,
+ def __init__(self, length=None, convert_unicode=False,
assert_unicode=None, unicode_error=None,
_warn_on_bytestring=False
):
with no length is included. Whether the value is
interpreted as bytes or characters is database specific.
- :param convert_unicode: When set to ``True``, the
+ :param convert_unicode: When set to ``True``, the
:class:`.String` type will assume that
input is to be passed as Python ``unicode`` objects,
and results returned as Python ``unicode`` objects.
If the DBAPI in use does not support Python unicode
(which is fewer and fewer these days), SQLAlchemy
- will encode/decode the value, using the
- value of the ``encoding`` parameter passed to
+ will encode/decode the value, using the
+ value of the ``encoding`` parameter passed to
:func:`.create_engine` as the encoding.
-
+
When using a DBAPI that natively supports Python
- unicode objects, this flag generally does not
+ unicode objects, this flag generally does not
need to be set. For columns that are explicitly
intended to store non-ASCII data, the :class:`.Unicode`
- or :class:`UnicodeText`
+ or :class:`.UnicodeText`
types should be used regardless, which feature
- the same behavior of ``convert_unicode`` but
+ the same behavior of ``convert_unicode`` but
also indicate an underlying column type that
directly supports unicode, such as ``NVARCHAR``.
cause SQLAlchemy's encode/decode services to be
used unconditionally.
- :param assert_unicode: Deprecated. A warning is emitted
- when a non-``unicode`` object is passed to the
- :class:`.Unicode` subtype of :class:`.String`,
- or the :class:`.UnicodeText` subtype of :class:`.Text`.
- See :class:`.Unicode` for information on how to
+ :param assert_unicode: Deprecated. A warning is emitted
+ when a non-``unicode`` object is passed to the
+ :class:`.Unicode` subtype of :class:`.String`,
+ or the :class:`.UnicodeText` subtype of :class:`.Text`.
+ See :class:`.Unicode` for information on how to
control this warning.
:param unicode_error: Optional, a method to use to handle Unicode
def result_processor(self, dialect, coltype):
wants_unicode = self.convert_unicode or dialect.convert_unicode
needs_convert = wants_unicode and \
- (dialect.returns_unicode_strings is not True or
+ (dialect.returns_unicode_strings is not True or
self.convert_unicode == 'force')
if needs_convert:
that assumes input and output as Python ``unicode`` data,
and in that regard is equivalent to the usage of the
``convert_unicode`` flag with the :class:`.String` type.
- However, unlike plain :class:`.String`, it also implies an
+ However, unlike plain :class:`.String`, it also implies an
underlying column type that explicitly supports non-ASCII
data, such as ``NVARCHAR`` on Oracle and SQL Server.
- This can impact the output of ``CREATE TABLE`` statements
- and ``CAST`` functions at the dialect level, and can
+ This can impact the output of ``CREATE TABLE`` statements
+ and ``CAST`` functions at the dialect level, and can
also affect the handling of bound parameters in some
specific DBAPI scenarios.
-
+
The encoding used by the :class:`.Unicode` type is usually
- determined by the DBAPI itself; most modern DBAPIs
+ determined by the DBAPI itself; most modern DBAPIs
feature support for Python ``unicode`` objects as bound
values and result set values, and the encoding should
be configured as detailed in the notes for the target
DBAPI in the :ref:`dialect_toplevel` section.
-
+
For those DBAPIs which do not support, or are not configured
to accommodate Python ``unicode`` objects
directly, SQLAlchemy does the encoding and decoding
- outside of the DBAPI. The encoding in this scenario
- is determined by the ``encoding`` flag passed to
+ outside of the DBAPI. The encoding in this scenario
+ is determined by the ``encoding`` flag passed to
:func:`.create_engine`.
- When using the :class:`.Unicode` type, it is only appropriate
+ When using the :class:`.Unicode` type, it is only appropriate
to pass Python ``unicode`` objects, and not plain ``str``.
If a plain ``str`` is passed under Python 2, a warning
- is emitted. If you notice your application emitting these warnings but
- you're not sure of the source of them, the Python
- ``warnings`` filter, documented at
- http://docs.python.org/library/warnings.html,
- can be used to turn these warnings into exceptions
+ is emitted. If you notice your application emitting these warnings but
+ you're not sure of the source of them, the Python
+ ``warnings`` filter, documented at
+ http://docs.python.org/library/warnings.html,
+ can be used to turn these warnings into exceptions
which will illustrate a stack trace::
import warnings
warnings.simplefilter('error')
For an application that wishes to pass plain bytestrings
and Python ``unicode`` objects to the ``Unicode`` type
- equally, the bytestrings must first be decoded into
+ equally, the bytestrings must first be decoded into
unicode. The recipe at :ref:`coerce_to_unicode` illustrates
how this is done.
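A short, hedged sketch of declaring and populating a :class:`.Unicode`
column; the ``users`` table is an assumption::

    from sqlalchemy import Column, Integer, MetaData, Table, Unicode

    metadata = MetaData()
    users = Table('users', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', Unicode(100))
    )

    # values bound to this column should be Python unicode objects
    ins = users.insert().values(name=u'Mich\u00e8le')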
def __init__(self, length=None, **kwargs):
"""
Create a :class:`.Unicode` object.
-
+
Parameters are the same as that of :class:`.String`,
with the exception that ``convert_unicode``
defaults to ``True``.
See :class:`.Unicode` for details on the unicode
behavior of this object.
- Like :class:`.Unicode`, usage the :class:`.UnicodeText` type implies a
- unicode-capable type being used on the backend, such as
+ Like :class:`.Unicode`, usage of the :class:`.UnicodeText` type implies a
+ unicode-capable type being used on the backend, such as
``NCLOB``, ``NTEXT``.
"""
``decimal.Decimal`` objects by default, applying
conversion as needed.
- .. note::
-
+ .. note::
+
The `cdecimal <http://pypi.python.org/pypi/cdecimal/>`_ library
is a high performing alternative to Python's built-in
``decimal.Decimal`` type, which performs very poorly in high volume
import cdecimal
sys.modules["decimal"] = cdecimal
- While the global patch is a little ugly, it's particularly
- important to use just one decimal library at a time since
- Python Decimal and cdecimal Decimal objects
+ While the global patch is a little ugly, it's particularly
+ important to use just one decimal library at a time since
+ Python Decimal and cdecimal Decimal objects
are not currently compatible *with each other*::
>>> import cdecimal
>>> decimal.Decimal("10") == cdecimal.Decimal("10")
False
- SQLAlchemy will provide more natural support of
+ SQLAlchemy will provide more natural support of
cdecimal if and when it becomes a standard part of Python
installations and is supported by all DBAPIs.
that the asdecimal setting is appropriate for the DBAPI in use -
when Numeric applies a conversion from Decimal->float or float->
Decimal, this conversion incurs an additional performance overhead
- for all result columns received.
+ for all result columns received.
- DBAPIs that return Decimal natively (e.g. psycopg2) will have
+ DBAPIs that return Decimal natively (e.g. psycopg2) will have
better accuracy and higher performance with a setting of ``True``,
as the native translation to Decimal reduces the amount of floating-
point issues at play, and the Numeric type itself doesn't need
- to apply any further conversions. However, another DBAPI which
- returns floats natively *will* incur an additional conversion
- overhead, and is still subject to floating point data loss - in
+ to apply any further conversions. However, another DBAPI which
+ returns floats natively *will* incur an additional conversion
+ overhead, and is still subject to floating point data loss - in
which case ``asdecimal=False`` will at least remove the extra
conversion overhead.
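For illustration, a hypothetical table mixing the two settings; the table
and column names are assumptions::

    from sqlalchemy import Column, MetaData, Numeric, Table

    metadata = MetaData()
    measurements = Table('measurements', metadata,
        # returned as decimal.Decimal by default
        Column('price', Numeric(10, 2)),
        # asdecimal=False skips the Decimal conversion and returns floats
        Column('reading', Numeric(10, 2, asdecimal=False))
    )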
results in floating point conversion.
:param \**kwargs: deprecated. Additional arguments here are ignored
- by the default :class:`.Float` type. For database specific
- floats that support additional arguments, see that dialect's
+ by the default :class:`.Float` type. For database specific
+ floats that support additional arguments, see that dialect's
documentation for details, such as :class:`sqlalchemy.dialects.mysql.FLOAT`.
-
+
"""
self.precision = precision
self.asdecimal = asdecimal
def __init__(self, timezone=False):
"""Construct a new :class:`.DateTime`.
-
+
:param timezone: boolean. If True, and supported by the
backend, will produce 'TIMESTAMP WITH TIMEZONE'. For backends
that don't support timezone aware timestamps, has no
effect.
-
+
"""
self.timezone = timezone
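A brief sketch, assuming a hypothetical ``events`` table::

    from sqlalchemy import Column, DateTime, Integer, MetaData, Table

    metadata = MetaData()
    events = Table('events', metadata,
        Column('id', Integer, primary_key=True),
        # renders TIMESTAMP WITH TIME ZONE on supporting backends,
        # and has no effect elsewhere
        Column('created_at', DateTime(timezone=True))
    )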
Interval:DateTime,
# date - datetime = interval,
- # this one is not in the PG docs
+ # this one is not in the PG docs
# but works
DateTime:Interval,
},
return None
return process
- # Python 3 has native bytes() type
+ # Python 3 has native bytes() type
# both sqlite3 and pg8000 seem to return it
# (i.e. and not 'memoryview')
# Py2K
as well as types that are complemented by table or schema level
constraints, triggers, and other rules.
- :class:`.SchemaType` classes can also be targets for the
+ :class:`.SchemaType` classes can also be targets for the
:meth:`.DDLEvents.before_parent_attach` and :meth:`.DDLEvents.after_parent_attach`
events, where the events fire off surrounding the association of
the type object with a parent :class:`.Column`.
class Enum(String, SchemaType):
"""Generic Enum Type.
- The Enum type provides a set of possible string values which the
+ The Enum type provides a set of possible string values which the
column is constrained towards.
- By default, uses the backend's native ENUM type if available,
+ By default, uses the backend's native ENUM type if available,
else uses VARCHAR + a CHECK constraint.
-
+
See also:
-
+
:class:`~.postgresql.ENUM` - PostgreSQL-specific type,
which has additional functionality.
-
+
"""
__visit_name__ = 'enum'
length = max(len(x) for x in self.enums)
else:
length = 0
- String.__init__(self,
+ String.__init__(self,
length=length,
- convert_unicode=convert_unicode,
+ convert_unicode=convert_unicode,
)
SchemaType.__init__(self, **kw)
def adapt(self, impltype, **kw):
if issubclass(impltype, Enum):
- return impltype(name=self.name,
- quote=self.quote,
- schema=self.schema,
+ return impltype(name=self.name,
+ quote=self.quote,
+ schema=self.schema,
metadata=self.metadata,
convert_unicode=self.convert_unicode,
native_enum=self.native_enum,
impl = LargeBinary
- def __init__(self, protocol=pickle.HIGHEST_PROTOCOL,
+ def __init__(self, protocol=pickle.HIGHEST_PROTOCOL,
pickler=None, comparator=None):
"""
Construct a PickleType.
pickle-compatible ``dumps`` and ``loads`` methods.
:param comparator: a 2-arg callable predicate used
- to compare values of this type. If left as ``None``,
+ to compare values of this type. If left as ``None``,
the Python "equals" operator is used to compare values.
"""
super(PickleType, self).__init__()
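A minimal sketch, assuming a hypothetical ``jobs`` table::

    from sqlalchemy import Column, Integer, MetaData, PickleType, Table

    metadata = MetaData()
    jobs = Table('jobs', metadata,
        Column('id', Integer, primary_key=True),
        # arbitrary picklable Python objects, stored via LargeBinary
        Column('payload', PickleType)
    )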
def __reduce__(self):
- return PickleType, (self.protocol,
- None,
+ return PickleType, (self.protocol,
+ None,
self.comparator)
def bind_processor(self, dialect):
def __init__(self, create_constraint=True, name=None):
"""Construct a Boolean.
- :param create_constraint: defaults to True. If the boolean
+ :param create_constraint: defaults to True. If the boolean
is generated as an int/smallint, also create a CHECK constraint
on the table that ensures 1 or 0 as a value.
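For illustration, a hypothetical table relying on the default setting::

    from sqlalchemy import Boolean, Column, Integer, MetaData, Table

    metadata = MetaData()
    flags = Table('flags', metadata,
        Column('id', Integer, primary_key=True),
        # on backends without a native BOOLEAN this renders a small
        # integer column plus a CHECK constraint limiting it to 0/1
        Column('active', Boolean(create_constraint=True))
    )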
impl = DateTime
epoch = dt.datetime.utcfromtimestamp(0)
- def __init__(self, native=True,
- second_precision=None,
+ def __init__(self, native=True,
+ second_precision=None,
day_precision=None):
"""Construct an Interval object.
:param native: when True, use the actual
INTERVAL type provided by the database, if
supported (currently Postgresql, Oracle).
- Otherwise, represent the interval data as
+ Otherwise, represent the interval data as
an epoch value regardless.
:param second_precision: For native interval types
which support a "fractional seconds precision" parameter,
i.e. Oracle and Postgresql
- :param day_precision: for native interval types which
+ :param day_precision: for native interval types which
support a "day precision" parameter, i.e. Oracle.
"""
return cls._adapt_from_generic_interval(self, **kw)
else:
return self.__class__(
- native=self.native,
- second_precision=self.second_precision,
+ native=self.native,
+ second_precision=self.second_precision,
day_precision=self.day_precision,
**kw)
sources=['lib/sqlalchemy/cextension/resultproxy.c'])
]
-ext_errors = (CCompilerError, DistutilsExecError, DistutilsPlatformError)
+ext_errors = (CCompilerError, DistutilsExecError, DistutilsPlatformError)
if sys.platform == 'win32' and sys.version_info > (2, 6):
# 2.6's distutils.msvc9compiler can raise an IOError when failing to
# find the compiler
- ext_errors += (IOError,)
+ ext_errors += (IOError,)
class BuildFailed(Exception):
packages.append(fragment.replace(os.sep, '.'))
return packages
-v_file = open(os.path.join(os.path.dirname(__file__),
+v_file = open(os.path.join(os.path.dirname(__file__),
'lib', 'sqlalchemy', '__init__.py'))
VERSION = re.compile(r".*__version__ = '(.*?)'",
re.S).match(v_file.read()).group(1)
from sqlalchemy import pool as pool_module
class QueuePoolTest(fixtures.TestBase, AssertsExecutionResults):
- __requires__ = 'cpython',
+ __requires__ = 'cpython',
class Connection(object):
def rollback(self):
def teardown(self):
# the tests leave some fake connections
- # around which dont necessarily
+ # around which don't necessarily
# get gc'ed as quickly as we'd like,
# on backends like pypy, python3.2
pool_module._refs.clear()
# the callcount on this test seems to vary
- # based on tests that ran before (particularly py3k),
+ # based on tests that ran before (particularly py3k),
# probably
# due to the event mechanics being established
# or not already...
- @profiling.function_call_count(72, {'2.4': 68, '2.7':75,
+ @profiling.function_call_count(72, {'2.4': 68, '2.7':75,
'2.7+cextension':75,
'3':62},
variance=.15)
self.assert_compile(
t.update().where(t.c.somecolumn=="q").
values(somecolumn="x").
- with_hint("WITH (PAGLOCK)",
- selectable=targ,
+ with_hint("WITH (PAGLOCK)",
+ selectable=targ,
dialect_name=darg),
"UPDATE sometable WITH (PAGLOCK) "
"SET somecolumn=:somecolumn "
for darg in ("*", "mssql"):
self.assert_compile(
t.delete().where(t.c.somecolumn=="q").
- with_hint("WITH (PAGLOCK)",
- selectable=targ,
+ with_hint("WITH (PAGLOCK)",
+ selectable=targ,
dialect_name=darg),
"DELETE FROM sometable WITH (PAGLOCK) "
"WHERE sometable.somecolumn = :somecolumn_1"
self.assert_compile(
t.update().where(t.c.somecolumn==t2.c.somecolumn).
values(somecolumn="x").
- with_hint("WITH (PAGLOCK)",
- selectable=t2,
+ with_hint("WITH (PAGLOCK)",
+ selectable=t2,
dialect_name=darg),
"UPDATE sometable SET somecolumn=:somecolumn "
"FROM sometable, othertable WITH (PAGLOCK) "
# for darg in ("*", "mssql"):
# self.assert_compile(
# t.delete().where(t.c.somecolumn==t2.c.somecolumn).
- # with_hint("WITH (PAGLOCK)",
- # selectable=t2,
+ # with_hint("WITH (PAGLOCK)",
+ # selectable=t2,
# dialect_name=darg),
# ""
# )
for expr, compile in [
(
- select([literal("x"), literal("y")]),
+ select([literal("x"), literal("y")]),
"SELECT 'x' AS anon_1, 'y' AS anon_2",
),
(
def test_indexes_cols_with_commas(self):
metadata = self.metadata
- t1 = Table('t', metadata,
- Column('x, col', Integer, key='x'),
+ t1 = Table('t', metadata,
+ Column('x, col', Integer, key='x'),
Column('y', Integer)
)
Index('foo', t1.c.x, t1.c.y)
def test_indexes_cols_with_spaces(self):
metadata = self.metadata
- t1 = Table('t', metadata, Column('x col', Integer, key='x'),
+ t1 = Table('t', metadata, Column('x col', Integer, key='x'),
Column('y', Integer))
Index('foo', t1.c.x, t1.c.y)
metadata.create_all()
case of a table having an identity (autoincrement)
primary key column, and which also has a trigger configured
to fire upon each insert and subsequently perform an
- insert into a different table.
+ insert into a different table.
SQLAlchemy's MSSQL dialect by default will attempt to
use an OUTPUT_INSERTED clause, which in this case will
raise the following error:
- ProgrammingError: (ProgrammingError) ('42000', 334,
- "[Microsoft][SQL Server Native Client 10.0][SQL Server]The
+ ProgrammingError: (ProgrammingError) ('42000', 334,
+ "[Microsoft][SQL Server Native Client 10.0][SQL Server]The
target table 't1' of the DML statement cannot have any enabled
triggers if the statement contains an OUTPUT clause without
INTO clause.", 7748) 'INSERT INTO t1 (descr) OUTPUT inserted.id
# though the ExecutionContext will still have a
# _select_lastrowid, so the SELECT SCOPE_IDENTITY() will
# hopefully be called instead.
- implicit_returning = False
+ implicit_returning = False
)
t2 = Table('t2', meta,
Column('id', Integer, Sequence('fred', 200, 1),
testing.db,
lambda: engine.execute(t1.insert()),
ExactSQL("INSERT INTO t1 DEFAULT VALUES"),
- # we dont have an event for
+ # we don't have an event for
# "SELECT @@IDENTITY" part here.
# this will be in 0.8 with #2459
)
meta = MetaData(testing.db)
con = testing.db.connect()
con.execute('create schema paj')
- tbl = Table('test', meta,
+ tbl = Table('test', meta,
Column('id', Integer, primary_key=True), schema='paj')
tbl.create()
try:
Column('category_id', Integer, ForeignKey('cattable.id')),
PrimaryKeyConstraint('id', name='PK_matchtable'),
)
- DDL("""CREATE FULLTEXT INDEX
- ON cattable (description)
+ DDL("""CREATE FULLTEXT INDEX
+ ON cattable (description)
KEY INDEX PK_cattable""").execute_at('after-create'
, matchtable)
- DDL("""CREATE FULLTEXT INDEX
- ON matchtable (title)
+ DDL("""CREATE FULLTEXT INDEX
+ ON matchtable (title)
KEY INDEX PK_matchtable""").execute_at('after-create'
, matchtable)
metadata.create_all()
url.make_url('mssql+pymssql://scott:tiger@somehost/test')
connection = dialect.create_connect_args(u)
eq_(
- [[], {'host': 'somehost', 'password': 'tiger',
+ [[], {'host': 'somehost', 'password': 'tiger',
'user': 'scott', 'database': 'test'}], connection
)
url.make_url('mssql+pymssql://scott:tiger@somehost:5000/test')
connection = dialect.create_connect_args(u)
eq_(
- [[], {'host': 'somehost:5000', 'password': 'tiger',
+ [[], {'host': 'somehost:5000', 'password': 'tiger',
'user': 'scott', 'database': 'test'}], connection
)
)
self.view_str = view_str = \
"CREATE VIEW huge_named_view AS SELECT %s FROM base_table" % (
- ",".join("long_named_column_number_%d" % i
+ ",".join("long_named_column_number_%d" % i
for i in xrange(self.col_num))
)
assert len(view_str) > 4000
self.assert_compile(Unicode(50),"VARCHAR2(50)",dialect=dialect)
self.assert_compile(UnicodeText(),"CLOB",dialect=dialect)
- dialect = oracle.dialect(implicit_returning=True,
+ dialect = oracle.dialect(implicit_returning=True,
dbapi=testing.db.dialect.dbapi)
dialect._get_server_version_info = server_version_info
dialect.initialize(testing.db.connect())
for stmt in """
create table test_schema.parent(
- id integer primary key,
+ id integer primary key,
data varchar2(50)
);
create table test_schema.child(
id integer primary key,
- data varchar2(50),
+ data varchar2(50),
parent_id integer references test_schema.parent(id)
);
create synonym test_schema.ptable for test_schema.parent;
create synonym test_schema.ctable for test_schema.child;
--- can't make a ref from local schema to the
--- remote schema's table without this,
+-- can't make a ref from local schema to the
+-- remote schema's table without this,
-- *and* can't give yourself a grant!
--- so we give it to public. ideas welcome.
+-- so we give it to public. ideas welcome.
grant references on test_schema.parent to public;
grant references on test_schema.child to public;
""".split(";"):
def test_create_same_names_explicit_schema(self):
schema = testing.db.dialect.default_schema_name
meta = MetaData(testing.db)
- parent = Table('parent', meta,
+ parent = Table('parent', meta,
Column('pid', Integer, primary_key=True),
schema=schema
)
- child = Table('child', meta,
+ child = Table('child', meta,
Column('cid', Integer, primary_key=True),
Column('pid', Integer, ForeignKey('%s.parent.pid' % schema)),
schema=schema
def test_create_same_names_implicit_schema(self):
meta = MetaData(testing.db)
- parent = Table('parent', meta,
+ parent = Table('parent', meta,
Column('pid', Integer, primary_key=True),
)
- child = Table('child', meta,
+ child = Table('child', meta,
Column('cid', Integer, primary_key=True),
Column('pid', Integer, ForeignKey('parent.pid')),
)
parent = Table('parent', meta, autoload=True, schema='test_schema')
child = Table('child', meta, autoload=True, schema='test_schema')
- self.assert_compile(parent.join(child),
+ self.assert_compile(parent.join(child),
"test_schema.parent JOIN test_schema.child ON "
"test_schema.parent.id = test_schema.child.parent_id")
select([parent, child]).\
__dialect__ = oracle.OracleDialect()
def test_no_clobs_for_string_params(self):
- """test that simple string params get a DBAPI type of
- VARCHAR, not CLOB. This is to prevent setinputsizes
+ """test that simple string params get a DBAPI type of
+ VARCHAR, not CLOB. This is to prevent setinputsizes
from setting up cx_oracle.CLOBs on
string-based bind params [ticket:793]."""
@testing.fails_on('+zxjdbc', 'zxjdbc lacks the FIXED_CHAR dbapi type')
def test_fixed_char(self):
m = MetaData(testing.db)
- t = Table('t1', m,
+ t = Table('t1', m,
Column('id', Integer, primary_key=True),
Column('data', CHAR(30), nullable=False)
)
dict(id=3, data="value 3")
)
- eq_(t.select().where(t.c.data=='value 2').execute().fetchall(),
+ eq_(t.select().where(t.c.data=='value 2').execute().fetchall(),
[(2, 'value 2 ')]
)
m2 = MetaData(testing.db)
t2 = Table('t1', m2, autoload=True)
assert type(t2.c.data.type) is CHAR
- eq_(t2.select().where(t2.c.data=='value 2').execute().fetchall(),
+ eq_(t2.select().where(t2.c.data=='value 2').execute().fetchall(),
[(2, 'value 2 ')]
)
def test_numerics(self):
m = MetaData(testing.db)
- t1 = Table('t1', m,
+ t1 = Table('t1', m,
Column('intcol', Integer),
Column('numericcol', Numeric(precision=9, scale=2)),
Column('floatcol1', Float()),
t1.create()
try:
t1.insert().execute(
- intcol=1,
- numericcol=5.2,
- floatcol1=6.5,
+ intcol=1,
+ numericcol=5.2,
+ floatcol1=6.5,
floatcol2 = 8.5,
- doubleprec = 9.5,
+ doubleprec = 9.5,
numbercol1=12,
numbercol2=14.85,
numbercol3=15.76
for row in (
t1.select().execute().first(),
- t2.select().execute().first()
+ t2.select().execute().first()
):
for i, (val, type_) in enumerate((
(1, int),
foo.create()
foo.insert().execute(
- {'idata':5, 'ndata':decimal.Decimal("45.6"),
- 'ndata2':decimal.Decimal("45.0"),
+ {'idata':5, 'ndata':decimal.Decimal("45.6"),
+ 'ndata2':decimal.Decimal("45.0"),
'nidata':decimal.Decimal('53'), 'fdata':45.68392},
)
stmt = """
- SELECT
+ SELECT
idata,
ndata,
ndata2,
row = testing.db.execute(stmt).fetchall()[0]
eq_([type(x) for x in row], [int, decimal.Decimal, decimal.Decimal, int, float])
eq_(
- row,
+ row,
(5, decimal.Decimal('45.6'), decimal.Decimal('45'), 53, 45.683920000000001)
)
- # with a nested subquery,
+ # with a nested subquery,
# both Numeric values that don't have decimal places, regardless
# of their originating type, come back as ints with no useful
# typing information beyond "numeric". So native handler
# totally sucks.
stmt = """
- SELECT
+ SELECT
(SELECT (SELECT idata FROM foo) FROM DUAL) AS idata,
(SELECT CAST((SELECT ndata FROM foo) AS NUMERIC(20, 2)) FROM DUAL)
AS ndata,
row = testing.db.execute(stmt).fetchall()[0]
eq_([type(x) for x in row], [int, decimal.Decimal, int, int, decimal.Decimal])
eq_(
- row,
+ row,
(5, decimal.Decimal('45.6'), 45, 53, decimal.Decimal('45.68392'))
)
- row = testing.db.execute(text(stmt,
+ row = testing.db.execute(text(stmt,
typemap={
- 'idata':Integer(),
- 'ndata':Numeric(20, 2),
- 'ndata2':Numeric(20, 2),
+ 'idata':Integer(),
+ 'ndata':Numeric(20, 2),
+ 'ndata2':Numeric(20, 2),
'nidata':Numeric(5, 0),
'fdata':Float()
})).fetchall()[0]
eq_([type(x) for x in row], [int, decimal.Decimal, decimal.Decimal, decimal.Decimal, float])
- eq_(row,
+ eq_(row,
(5, decimal.Decimal('45.6'), decimal.Decimal('45'), decimal.Decimal('53'), 45.683920000000001)
)
stmt = """
- SELECT
+ SELECT
anon_1.idata AS anon_1_idata,
anon_1.ndata AS anon_1_ndata,
anon_1.ndata2 AS anon_1_ndata2,
anon_1.fdata AS anon_1_fdata
FROM (SELECT idata, ndata, ndata2, nidata, fdata
FROM (
- SELECT
+ SELECT
(SELECT (SELECT idata FROM foo) FROM DUAL) AS idata,
- (SELECT CAST((SELECT ndata FROM foo) AS NUMERIC(20, 2))
+ (SELECT CAST((SELECT ndata FROM foo) AS NUMERIC(20, 2))
FROM DUAL) AS ndata,
- (SELECT CAST((SELECT ndata2 FROM foo) AS NUMERIC(20, 2))
+ (SELECT CAST((SELECT ndata2 FROM foo) AS NUMERIC(20, 2))
FROM DUAL) AS ndata2,
- (SELECT CAST((SELECT nidata FROM foo) AS NUMERIC(5, 0))
+ (SELECT CAST((SELECT nidata FROM foo) AS NUMERIC(5, 0))
FROM DUAL) AS nidata,
- (SELECT CAST((SELECT fdata FROM foo) AS FLOAT) FROM DUAL)
+ (SELECT CAST((SELECT fdata FROM foo) AS FLOAT) FROM DUAL)
AS fdata
FROM dual
)
eq_([type(x) for x in row], [int, decimal.Decimal, int, int, decimal.Decimal])
eq_(row, (5, decimal.Decimal('45.6'), 45, 53, decimal.Decimal('45.68392')))
- row = testing.db.execute(text(stmt,
+ row = testing.db.execute(text(stmt,
typemap={
- 'anon_1_idata':Integer(),
- 'anon_1_ndata':Numeric(20, 2),
- 'anon_1_ndata2':Numeric(20, 2),
- 'anon_1_nidata':Numeric(5, 0),
+ 'anon_1_idata':Integer(),
+ 'anon_1_ndata':Numeric(20, 2),
+ 'anon_1_ndata2':Numeric(20, 2),
+ 'anon_1_nidata':Numeric(5, 0),
'anon_1_fdata':Float()
})).fetchall()[0]
eq_([type(x) for x in row], [int, decimal.Decimal, decimal.Decimal, decimal.Decimal, float])
- eq_(row,
+ eq_(row,
(5, decimal.Decimal('45.6'), decimal.Decimal('45'), decimal.Decimal('53'), 45.683920000000001)
)
- row = testing.db.execute(text(stmt,
+ row = testing.db.execute(text(stmt,
typemap={
- 'anon_1_idata':Integer(),
- 'anon_1_ndata':Numeric(20, 2, asdecimal=False),
- 'anon_1_ndata2':Numeric(20, 2, asdecimal=False),
- 'anon_1_nidata':Numeric(5, 0, asdecimal=False),
+ 'anon_1_idata':Integer(),
+ 'anon_1_ndata':Numeric(20, 2, asdecimal=False),
+ 'anon_1_ndata2':Numeric(20, 2, asdecimal=False),
+ 'anon_1_nidata':Numeric(5, 0, asdecimal=False),
'anon_1_fdata':Float(asdecimal=True)
})).fetchall()[0]
eq_([type(x) for x in row], [int, float, float, float, decimal.Decimal])
- eq_(row,
+ eq_(row,
(5, 45.6, 45, 53, decimal.Decimal('45.68392'))
)
# nvarchar returns unicode natively. cx_oracle
# _OracleNVarChar type should be at play here.
assert isinstance(
- t2.c.data.type.dialect_impl(testing.db.dialect),
+ t2.c.data.type.dialect_impl(testing.db.dialect),
cx_oracle._OracleNVarChar)
data = u'm’a réveillé.'
def test_lobs_without_convert(self):
engine = testing_engine(options=dict(auto_convert_lobs=False))
metadata = MetaData()
- t = Table("z_test", metadata, Column('id', Integer, primary_key=True),
+ t = Table("z_test", metadata, Column('id', Integer, primary_key=True),
Column('data', Text), Column('bindata', LargeBinary))
t.create(engine)
try:
- engine.execute(t.insert(), id=1,
- data='this is text',
+ engine.execute(t.insert(), id=1,
+ data='this is text',
bindata='this is binary')
row = engine.execute(t.select()).first()
eq_(row['data'].read(), 'this is text')
"""test that index overflow tables aren't included in
table_names."""
- __only_on__ = 'oracle'
+ __only_on__ = 'oracle'
def setup(self):
testing.db.execute("""
CREATE TABLE admin_docindex(
- token char(20),
+ token char(20),
doc_id NUMBER,
token_frequency NUMBER,
token_offsets VARCHAR2(2000),
CONSTRAINT pk_admin_docindex PRIMARY KEY (token, doc_id))
- ORGANIZATION INDEX
+ ORGANIZATION INDEX
TABLESPACE users
PCTTHRESHOLD 20
OVERFLOW TABLESPACE users
def setup_class(cls):
global binary_table, stream, meta
meta = MetaData(testing.db)
- binary_table = Table('binary_table', meta,
+ binary_table = Table('binary_table', meta,
Column('id', Integer, primary_key=True),
Column('data', LargeBinary)
)
meta.create_all()
stream = os.path.join(
- os.path.dirname(__file__), "..",
+ os.path.dirname(__file__), "..",
'binary_data_one.dat')
stream = file(stream).read(12000)
def setup(self):
global metadata
metadata = MetaData(testing.db)
- t1 = Table('test_index_reflect', metadata,
+ t1 = Table('test_index_reflect', metadata,
Column('data', String(20), primary_key=True)
)
metadata.create_all()
)
# "group" is a keyword, so lower case
- normalind = Index('tableind', table.c.id_b, table.c.group)
+ normalind = Index('tableind', table.c.id_b, table.c.group)
# create
metadata.create_all()
@classmethod
def define_tables(cls, metadata):
Table('graphs', metadata,
- Column('id', Integer, primary_key=True,
+ Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('name', String(30)))
Table('edges', metadata,
- Column('id', Integer, primary_key=True,
+ Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
- Column('graph_id', Integer,
+ Column('graph_id', Integer,
ForeignKey('graphs.id')),
Column('x1', Integer),
Column('y1', Integer),
self.classes.Point)
# current contract. the composite is None
- # when hasn't been populated etc. on a
+ # when hasn't been populated etc. on a
# pending/transient object.
e1 = Edge()
assert e1.end is None
# created unconditionally in all cases.
# but as we are just trying to fix [ticket:2308] and
# [ticket:2309] without changing behavior we maintain
- # that only "persistent" gets the composite with the
+ # that only "persistent" gets the composite with the
# Nones
sess.flush()
g.edges[1]
eq_(
- sess.query(Edge).filter(Edge.start==None).all(),
+ sess.query(Edge).filter(Edge.start==None).all(),
[]
)
sess = self._fixture()
eq_(
- sess.query(Edge.start, Edge.end).all(),
+ sess.query(Edge.start, Edge.end).all(),
[(3, 4, 5, 6), (14, 5, 2, 7)]
)
del e.end
sess.flush()
eq_(
- sess.query(Edge.start, Edge.end).all(),
+ sess.query(Edge.start, Edge.end).all(),
[(3, 4, 5, 6), (14, 5, None, None)]
)
@classmethod
def define_tables(cls, metadata):
Table('graphs', metadata,
- Column('id', Integer, primary_key=True,
+ Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
- Column('version_id', Integer, primary_key=True,
+ Column('version_id', Integer, primary_key=True,
nullable=True),
Column('name', String(30)))
@classmethod
def define_tables(cls, metadata):
Table('foobars', metadata,
- Column('id', Integer, primary_key=True,
+ Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('x1', Integer, default=2),
Column('x2', Integer),
self.goofy_x1, self.x2, self.x3, self.x4
)
mapper(Foobar, foobars, properties=dict(
- foob=sa.orm.composite(FBComposite,
- foobars.c.x1,
- foobars.c.x2,
- foobars.c.x3,
+ foob=sa.orm.composite(FBComposite,
+ foobars.c.x1,
+ foobars.c.x2,
+ foobars.c.x3,
foobars.c.x4)
))
@classmethod
def define_tables(cls, metadata):
Table('descriptions', metadata,
- Column('id', Integer, primary_key=True,
+ Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('d1', String(20)),
Column('d2', String(20)),
)
Table('values', metadata,
- Column('id', Integer, primary_key=True,
+ Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
- Column('description_id', Integer,
+ Column('description_id', Integer,
ForeignKey('descriptions.id'),
nullable=False),
Column('v1', String(20)),
desc_values = select(
[values, descriptions.c.d1, descriptions.c.d2],
descriptions.c.id == values.c.description_id
- ).alias('descriptions_values')
+ ).alias('descriptions_values')
mapper(Descriptions, descriptions, properties={
'values': relationship(Values, lazy='dynamic'),
})
mapper(Values, desc_values, properties={
- 'custom_values': composite(CustomValues,
+ 'custom_values': composite(CustomValues,
desc_values.c.v1,
desc_values.c.v2),
class ManyToOneTest(fixtures.MappedTest):
@classmethod
def define_tables(cls, metadata):
- Table('a',
+ Table('a',
metadata,
- Column('id', Integer, primary_key=True,
+ Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('b1', String(20)),
Column('b2_id', Integer, ForeignKey('b.id'))
)
Table('b', metadata,
- Column('id', Integer, primary_key=True,
+ Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('data', String(20))
)
@classmethod
def define_tables(cls, metadata):
Table('edge', metadata,
- Column('id', Integer, primary_key=True,
+ Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('x1', Integer),
Column('y1', Integer),
self.classes.Edge,
self.classes.Point)
mapper(Edge, edge, properties={
- 'start':sa.orm.composite(Point, edge.c.x1, edge.c.y1,
+ 'start':sa.orm.composite(Point, edge.c.x1, edge.c.y1,
deferred=True, group='s'),
- 'end': sa.orm.composite(Point, edge.c.x2, edge.c.y2,
+ 'end': sa.orm.composite(Point, edge.c.x2, edge.c.y2,
deferred=True)
})
self._test_roundtrip()
@classmethod
def define_tables(cls, metadata):
Table('edge', metadata,
- Column('id', Integer, primary_key=True,
+ Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('x1', Integer),
Column('y1', Integer),
return diff_x * diff_x + diff_y * diff_y <= d * d
mapper(Edge, edge, properties={
- 'start': sa.orm.composite(Point, edge.c.x1, edge.c.y1,
+ 'start': sa.orm.composite(Point, edge.c.x1, edge.c.y1,
comparator_factory=CustomComparator),
'end': sa.orm.composite(Point, edge.c.x2, edge.c.y2)
})
e2
eq_(
- sess.query(Edge).filter(Edge.start==None).all(),
+ sess.query(Edge).filter(Edge.start==None).all(),
[]
)
self.classes.User)
mapper(User, users, properties = {
- 'addresses':relationship(mapper(Address, addresses),
+ 'addresses':relationship(mapper(Address, addresses),
lazy='joined', order_by=addresses.c.email_address),
})
q = create_session().query(User)
self.classes.User)
mapper(User, users, properties = {
- 'addresses':relationship(mapper(Address, addresses),
- lazy='joined',
+ 'addresses':relationship(mapper(Address, addresses),
+ lazy='joined',
order_by=[addresses.c.email_address, addresses.c.id]),
})
q = create_session().query(User)
], q.order_by(User.id).all())
def test_orderby_related(self):
- """A regular mapper select on a single table can
+ """A regular mapper select on a single table can
order by a relationship to a second table"""
Address, addresses, users, User = (self.classes.Address,
'orders':relationship(Order, order_by=orders.c.id), # o2m, m2o
})
mapper(Order, orders, properties={
- 'items':relationship(Item,
+ 'items':relationship(Item,
secondary=order_items, order_by=items.c.id), #m2m
})
mapper(Item, items, properties={
- 'keywords':relationship(Keyword,
+ 'keywords':relationship(Keyword,
secondary=item_keywords,
order_by=keywords.c.id) #m2m
})
for opt, count in [
((
- joinedload(User.orders, Order.items),
+ joinedload(User.orders, Order.items),
), 10),
((joinedload("orders.items"), ), 10),
((
- joinedload(User.orders, ),
- joinedload(User.orders, Order.items),
- joinedload(User.orders, Order.items, Item.keywords),
+ joinedload(User.orders, ),
+ joinedload(User.orders, Order.items),
+ joinedload(User.orders, Order.items, Item.keywords),
), 1),
((
- joinedload(User.orders, Order.items, Item.keywords),
+ joinedload(User.orders, Order.items, Item.keywords),
), 10),
((
- joinedload(User.orders, Order.items),
- joinedload(User.orders, Order.items, Item.keywords),
+ joinedload(User.orders, Order.items),
+ joinedload(User.orders, Order.items, Item.keywords),
), 5),
]:
sess = create_session()
eq_(self.static.user_address_result, sess.query(User).order_by(User.id).all())
def test_double(self):
- """Eager loading with two relationships simultaneously,
+ """Eager loading with two relationships simultaneously,
from the same table, using aliases."""
users, orders, User, Address, Order, addresses = (self.tables.users,
self.assert_sql_count(testing.db, go, 1)
def test_double_same_mappers(self):
- """Eager loading with two relationships simulatneously,
+ """Eager loading with two relationships simulatneously,
from the same table, using aliases."""
addresses, items, order_items, orders, Item, User, Address, Order, users = (self.tables.addresses,
self.assert_sql_count(testing.db, go, 1)
def test_no_false_hits(self):
- """Eager loaders don't interpret main table columns as
+ """Eager loaders don't interpret main table columns as
part of their eager load."""
addresses, orders, User, Address, Order, users = (self.tables.addresses,
sess = create_session()
q = sess.query(Item)
- l = q.filter((Item.description=='item 2') |
- (Item.description=='item 5') |
+ l = q.filter((Item.description=='item 2') |
+ (Item.description=='item 5') |
(Item.description=='item 3')).\
order_by(Item.id).limit(2).all()
@testing.fails_on('maxdb', 'FIXME: unknown')
def test_limit_3(self):
- """test that the ORDER BY is propagated from the inner
+ """test that the ORDER BY is propagated from the inner
select to the outer select, when using the
- 'wrapped' select statement resulting from the combination of
+ 'wrapped' select statement resulting from the combination of
eager loading and limit/offset clauses."""
addresses, items, order_items, orders, Item, User, Address, Order, users = (self.tables.addresses,
self.tables.users,
self.tables.orders)
- # tests the LIMIT/OFFSET aliasing on a mapper
+ # tests the LIMIT/OFFSET aliasing on a mapper
# against a select. original issue from ticket #904
sel = sa.select([users, addresses.c.email_address],
users.c.id==addresses.c.user_id).alias('useralias')
mapper(User, sel, properties={
- 'orders':relationship(Order, primaryjoin=sel.c.id==orders.c.user_id,
+ 'orders':relationship(Order, primaryjoin=sel.c.id==orders.c.user_id,
lazy='joined', order_by=orders.c.id)
})
mapper(Order, orders)
u1 = sess.query(User).filter(User.id==8).one()
def go():
eq_(u1.addresses[0].user, u1)
- self.assert_sql_execution(testing.db, go,
+ self.assert_sql_execution(testing.db, go,
CompiledSQL(
"SELECT addresses.id AS addresses_id, addresses.user_id AS "
"addresses_user_id, addresses.email_address AS "
def test_manytoone_limit(self):
- """test that the subquery wrapping only occurs with
+ """test that the subquery wrapping only occurs with
limit/offset and m2m or o2m joins present."""
users, items, order_items, Order, Item, User, Address, orders, addresses = (self.tables.users,
)
self.assert_compile(
- sess.query(User).options(joinedload("orders", innerjoin=True),
+ sess.query(User).options(joinedload("orders", innerjoin=True),
joinedload("orders.address", innerjoin=True)).limit(10),
"SELECT anon_1.users_id AS anon_1_users_id, anon_1.users_name AS anon_1_users_name, "
"addresses_1.id AS addresses_1_id, addresses_1.user_id AS addresses_1_user_id, "
self.classes.User)
mapper(User, users, properties = dict(
- address = relationship(mapper(Address, addresses),
+ address = relationship(mapper(Address, addresses),
lazy='joined', uselist=False)
))
q = create_session().query(User)
self.tables.orders)
- # use a primaryjoin intended to defeat SA's usage of
+ # use a primaryjoin intended to defeat SA's usage of
# query.get() for a many-to-one lazyload
mapper(Order, orders, properties = dict(
- address = relationship(mapper(Address, addresses),
+ address = relationship(mapper(Address, addresses),
primaryjoin=and_(
addresses.c.id==orders.c.address_id,
addresses.c.email_address != None
'orders':relationship(Order, backref='user', lazy='joined',
order_by=orders.c.id),
'max_order':relationship(
- mapper(Order, max_orders, non_primary=True),
+ mapper(Order, max_orders, non_primary=True),
lazy='joined', uselist=False)
})
self.assert_sql_count(testing.db, go, 1)
def test_uselist_false_warning(self):
- """test that multiple rows received by a
+ """test that multiple rows received by a
uselist=False raises a warning."""
User, users, orders, Order = (self.classes.User,
], q.all())
def test_aliasing(self):
- """test that eager loading uses aliases to insulate the eager
+ """test that eager loading uses aliases to insulate the eager
load from regular criterion against those tables."""
Address, addresses, users, User = (self.classes.Address,
mapper(User, users, properties = dict(
- addresses = relationship(mapper(Address, addresses),
+ addresses = relationship(mapper(Address, addresses),
lazy='joined', order_by=addresses.c.id)
))
q = create_session().query(User)
self.classes.User)
mapper(User, users, properties = dict(
- addresses = relationship(mapper(Address, addresses), lazy='joined',
+ addresses = relationship(mapper(Address, addresses), lazy='joined',
innerjoin=True, order_by=addresses.c.id)
))
sess = create_session()
eq_(
[User(id=7, addresses=[ Address(id=1) ]),
- User(id=8,
- addresses=[ Address(id=2, email_address='ed@wood.com'),
- Address(id=3, email_address='ed@bettyboop.com'),
+ User(id=8,
+ addresses=[ Address(id=2, email_address='ed@wood.com'),
+ Address(id=3, email_address='ed@bettyboop.com'),
Address(id=4, email_address='ed@lala.com'), ]),
User(id=9, addresses=[ Address(id=5) ])]
,sess.query(User).all()
)
- self.assert_compile(sess.query(User),
+ self.assert_compile(sess.query(User),
"SELECT users.id AS users_id, users.name AS users_name, "
"addresses_1.id AS addresses_1_id, addresses_1.user_id AS addresses_1_user_id, "
"addresses_1.email_address AS addresses_1_email_address FROM users JOIN "
self.tables.orders)
mapper(User, users, properties = dict(
- orders =relationship(Order, innerjoin=True,
+ orders =relationship(Order, innerjoin=True,
lazy=False)
))
mapper(Order, orders, properties=dict(
- items=relationship(Item, secondary=order_items, lazy=False,
+ items=relationship(Item, secondary=order_items, lazy=False,
innerjoin=True)
))
mapper(Item, items)
orders =relationship(Order, lazy=False)
))
mapper(Order, orders, properties=dict(
- items=relationship(Item, secondary=order_items, lazy=False,
+ items=relationship(Item, secondary=order_items, lazy=False,
innerjoin=True)
))
mapper(Item, items)
))
mapper(Item, items)
sess = create_session()
- self.assert_compile(sess.query(User).options(joinedload(User.orders, innerjoin=True)),
+ self.assert_compile(sess.query(User).options(joinedload(User.orders, innerjoin=True)),
"SELECT users.id AS users_id, users.name AS users_name, orders_1.id AS orders_1_id, "
"orders_1.user_id AS orders_1_user_id, orders_1.address_id AS orders_1_address_id, "
"orders_1.description AS orders_1_description, orders_1.isopen AS orders_1_isopen "
"FROM users JOIN orders AS orders_1 ON users.id = orders_1.user_id ORDER BY orders_1.id"
, use_default_dialect=True)
- self.assert_compile(sess.query(User).options(joinedload_all(User.orders, Order.items, innerjoin=True)),
+ self.assert_compile(sess.query(User).options(joinedload_all(User.orders, Order.items, innerjoin=True)),
"SELECT users.id AS users_id, users.name AS users_name, items_1.id AS items_1_id, "
"items_1.description AS items_1_description, orders_1.id AS orders_1_id, "
"orders_1.user_id AS orders_1_user_id, orders_1.address_id AS orders_1_address_id, "
def go():
eq_(
sess.query(User).options(
- joinedload(User.orders, innerjoin=True),
+ joinedload(User.orders, innerjoin=True),
joinedload(User.orders, Order.items, innerjoin=True)).
order_by(User.id).all(),
- [User(id=7,
- orders=[
- Order(id=1, items=[ Item(id=1), Item(id=2), Item(id=3)]),
- Order(id=3, items=[ Item(id=3), Item(id=4), Item(id=5)]),
+ [User(id=7,
+ orders=[
+ Order(id=1, items=[ Item(id=1), Item(id=2), Item(id=3)]),
+ Order(id=3, items=[ Item(id=3), Item(id=4), Item(id=5)]),
Order(id=5, items=[Item(id=5)])]),
User(id=9, orders=[
- Order(id=2, items=[ Item(id=1), Item(id=2), Item(id=3)]),
+ Order(id=2, items=[ Item(id=1), Item(id=2), Item(id=3)]),
Order(id=4, items=[ Item(id=1), Item(id=5)])])
]
)
User, Order, Item = self.classes.User, \
self.classes.Order, self.classes.Item
mapper(User, self.tables.users, properties={
- 'orders':relationship(Order),
+ 'orders':relationship(Order),
})
mapper(Order, self.tables.orders, properties={
'items':relationship(Item, secondary=self.tables.order_items),
self.children.append(node)
mapper(Node, nodes, properties={
- 'children':relationship(Node,
- lazy='joined',
+ 'children':relationship(Node,
+ lazy='joined',
join_depth=3, order_by=nodes.c.id)
})
sess = create_session()
sess.expunge_all()
def go():
- eq_(
+ eq_(
Node(data='n1', children=[Node(data='n11'), Node(data='n12')]),
sess.query(Node).order_by(Node.id).first(),
)
options(joinedload('children.children')).first()
# test that the query isn't wrapping the initial query for eager loading.
- self.assert_sql_execution(testing.db, go,
+ self.assert_sql_execution(testing.db, go,
CompiledSQL(
"SELECT nodes.id AS nodes_id, nodes.parent_id AS "
"nodes_parent_id, nodes.data AS nodes_data FROM nodes "
eq_(
[
(
- User(addresses=[Address(email_address=u'fred@fred.com')], name=u'fred'),
+ User(addresses=[Address(email_address=u'fred@fred.com')], name=u'fred'),
Order(description=u'order 2', isopen=0, items=[Item(description=u'item 1'), Item(description=u'item 2'), Item(description=u'item 3')]),
- User(addresses=[Address(email_address=u'jack@bean.com')], name=u'jack'),
+ User(addresses=[Address(email_address=u'jack@bean.com')], name=u'jack'),
Order(description=u'order 3', isopen=1, items=[Item(description=u'item 3'), Item(description=u'item 4'), Item(description=u'item 5')])
- ),
+ ),
(
- User(addresses=[Address(email_address=u'fred@fred.com')], name=u'fred'),
+ User(addresses=[Address(email_address=u'fred@fred.com')], name=u'fred'),
Order(description=u'order 2', isopen=0, items=[Item(description=u'item 1'), Item(description=u'item 2'), Item(description=u'item 3')]),
- User(addresses=[Address(email_address=u'jack@bean.com')], name=u'jack'),
+ User(addresses=[Address(email_address=u'jack@bean.com')], name=u'jack'),
Order(address_id=None, description=u'order 5', isopen=0, items=[Item(description=u'item 5')])
- ),
+ ),
(
- User(addresses=[Address(email_address=u'fred@fred.com')], name=u'fred'),
+ User(addresses=[Address(email_address=u'fred@fred.com')], name=u'fred'),
Order(description=u'order 4', isopen=1, items=[Item(description=u'item 1'), Item(description=u'item 5')]),
- User(addresses=[Address(email_address=u'jack@bean.com')], name=u'jack'),
+ User(addresses=[Address(email_address=u'jack@bean.com')], name=u'jack'),
Order(address_id=None, description=u'order 5', isopen=0, items=[Item(description=u'item 5')])
- ),
+ ),
],
sess.query(User, Order, u1, o1).\
join(Order, User.orders).options(joinedload(User.addresses), joinedload(Order.items)).filter(User.id==9).\
(User(id=9, addresses=[Address(id=5)]), Order(id=4, items=[Item(id=1), Item(id=5)])),
],
sess.query(User, oalias).join(oalias, User.orders).
- options(joinedload(User.addresses),
+ options(joinedload(User.addresses),
joinedload(oalias.items)).
filter(User.id==9).
order_by(User.id, oalias.id).all(),
})
session = create_session()
- session.add(User(name='joe', tags=[Tag(score1=5.0, score2=3.0),
+ session.add(User(name='joe', tags=[Tag(score1=5.0, score2=3.0),
Tag(score1=55.0, score2=1.0)]))
- session.add(User(name='bar', tags=[Tag(score1=5.0, score2=4.0),
- Tag(score1=50.0, score2=1.0),
+ session.add(User(name='bar', tags=[Tag(score1=5.0, score2=4.0),
+ Tag(score1=50.0, score2=1.0),
Tag(score1=15.0, score2=2.0)]))
session.flush()
session.expunge_all()
if aliasstuff:
salias = stuff.alias()
else:
- # if we don't alias the 'stuff' table within the correlated subquery,
+ # if we don't alias the 'stuff' table within the correlated subquery,
# it gets aliased in the eager load along with the "stuff" table to "stuff_1".
# but it's a scalar subquery, and this doesn't actually matter
salias = stuff
# starts as False. This is because all of Firebird,
# Postgresql, Oracle, SQL Server started supporting RETURNING
# as of a certain version, and the flag is not set until
- # version detection occurs. If some DB comes along that has
+ # version detection occurs. If some DB comes along that has
# RETURNING in all cases, this test can be adjusted.
- assert e.dialect.implicit_returning is False
+ assert e.dialect.implicit_returning is False
# version detection on connect sets it
c = e.connect()
s = select([table1.c.col1.label('c2'), table1.c.col1,
table1.c.col1.label('c1')])
- # this tests the same thing as
- # test_direct_correspondence_on_labels below -
+ # this tests the same thing as
+ # test_direct_correspondence_on_labels below -
# that the presence of label() affects the 'distance'
assert s.corresponding_column(table1.c.col1) is s.c.col1
is j2.c.table1_col1
def test_clone_append_column(self):
- sel = select([literal_column('1').label('a')])
+ sel = select([literal_column('1').label('a')])
cloned = visitors.ReplacingCloningVisitor().traverse(sel)
- cloned.append_column(literal_column('2').label('b'))
- cloned.append_column(func.foo())
- eq_(cloned.c.keys(), ['a', 'b', 'foo()'])
+ cloned.append_column(literal_column('2').label('b'))
+ cloned.append_column(func.foo())
+ eq_(cloned.c.keys(), ['a', 'b', 'foo()'])
def test_append_column_after_replace_selectable(self):
basesel = select([literal_column('1').label('a')])
"JOIN (SELECT 1 AS a, 2 AS b) AS joinfrom "
"ON basefrom.a = joinfrom.a"
)
- replaced.append_column(joinfrom.c.b)
+ replaced.append_column(joinfrom.c.b)
self.assert_compile(
replaced,
"SELECT basefrom.a, joinfrom.b FROM (SELECT 1 AS a) AS basefrom "
assert u.corresponding_column(s2.c.table2_col2) is u.c.col2
def test_union_precedence(self):
- # conflicting column correspondence should be resolved based on
+ # conflicting column correspondence should be resolved based on
# the order of the select()s in the union
s1 = select([table1.c.col1, table1.c.col2])
eq_(c1._from_objects, [t])
eq_(c2._from_objects, [t])
- self.assert_compile(select([c1]),
+ self.assert_compile(select([c1]),
"SELECT t.c1 FROM t")
- self.assert_compile(select([c2]),
+ self.assert_compile(select([c2]),
"SELECT t.c2 FROM t")
def test_from_list_deferred_whereclause(self):
eq_(c1._from_objects, [t])
eq_(c2._from_objects, [t])
- self.assert_compile(select([c1]),
+ self.assert_compile(select([c1]),
"SELECT t.c1 FROM t")
- self.assert_compile(select([c2]),
+ self.assert_compile(select([c2]),
"SELECT t.c2 FROM t")
def test_from_list_deferred_fromlist(self):
eq_(c1._from_objects, [t2])
- self.assert_compile(select([c1]),
+ self.assert_compile(select([c1]),
"SELECT t2.c1 FROM t2")
def test_from_list_deferred_cloning(self):
table1 = table('t1', column('a'))
table2 = table('t2', column('b'))
s1 = select([table1.c.a, table2.c.b])
- self.assert_compile(s1,
+ self.assert_compile(s1,
"SELECT t1.a, t2.b FROM t1, t2"
)
s2 = s1.with_only_columns([table2.c.b])
- self.assert_compile(s2,
+ self.assert_compile(s2,
"SELECT t2.b FROM t2"
)
s3 = sql_util.ClauseAdapter(table1).traverse(s1)
- self.assert_compile(s3,
+ self.assert_compile(s3,
"SELECT t1.a, t2.b FROM t1, t2"
)
s4 = s3.with_only_columns([table2.c.b])
- self.assert_compile(s4,
+ self.assert_compile(s4,
"SELECT t2.b FROM t2"
)
def test_join_cond_no_such_unrelated_table(self):
m = MetaData()
- # bounding the "good" column with two "bad" ones is so to
+ # bounding the "good" column with two "bad" ones is so to
# try to get coverage to get the "continue" statements
# in the loop...
- t1 = Table('t1', m,
+ t1 = Table('t1', m,
Column('y', Integer, ForeignKey('t22.id')),
- Column('x', Integer, ForeignKey('t2.id')),
- Column('q', Integer, ForeignKey('t22.id')),
+ Column('x', Integer, ForeignKey('t2.id')),
+ Column('q', Integer, ForeignKey('t22.id')),
)
t2 = Table('t2', m, Column('id', Integer))
assert sql_util.join_condition(t1, t2).compare(t1.c.x==t2.c.id)
def test_join_cond_no_such_unrelated_column(self):
m = MetaData()
- t1 = Table('t1', m, Column('x', Integer, ForeignKey('t2.id')),
+ t1 = Table('t1', m, Column('x', Integer, ForeignKey('t2.id')),
Column('y', Integer, ForeignKey('t3.q')))
t2 = Table('t2', m, Column('id', Integer))
t3 = Table('t3', m, Column('id', Integer))
def test_init_doesnt_blowitaway(self):
meta = MetaData()
- a = Table('a', meta,
- Column('id', Integer, primary_key=True),
+ a = Table('a', meta,
+ Column('id', Integer, primary_key=True),
Column('x', Integer))
- b = Table('b', meta,
- Column('id', Integer, ForeignKey('a.id'), primary_key=True),
+ b = Table('b', meta,
+ Column('id', Integer, ForeignKey('a.id'), primary_key=True),
Column('x', Integer))
j = a.join(b)
def test_non_column_clause(self):
meta = MetaData()
- a = Table('a', meta,
- Column('id', Integer, primary_key=True),
+ a = Table('a', meta,
+ Column('id', Integer, primary_key=True),
Column('x', Integer))
- b = Table('b', meta,
- Column('id', Integer, ForeignKey('a.id'), primary_key=True),
+ b = Table('b', meta,
+ Column('id', Integer, ForeignKey('a.id'), primary_key=True),
Column('x', Integer, primary_key=True))
j = a.join(b, and_(a.c.id==b.c.id, b.c.x==5))
Column('id', Integer, primary_key= True),
)
- engineer = Table('Engineer', metadata,
+ engineer = Table('Engineer', metadata,
Column('id', Integer,
ForeignKey('Employee.id'), primary_key=True))
'BaseItem':
base_item_table.select(
base_item_table.c.child_name
- == 'BaseItem'),
- 'Item': base_item_table.join(item_table)},
+ == 'BaseItem'),
+ 'Item': base_item_table.join(item_table)},
None, 'item_join')
eq_(util.column_set(sql_util.reduce_columns([item_join.c.id,
item_join.c.dummy, item_join.c.child_name])),
select([
page_table.c.id,
- magazine_page_table.c.page_id,
+ magazine_page_table.c.page_id,
cast(null(), Integer).label('magazine_page_id')
]).
select_from(page_table.join(magazine_page_table))
pjoin = union(select([
page_table.c.id,
- magazine_page_table.c.page_id,
+ magazine_page_table.c.page_id,
cast(null(), Integer).label('magazine_page_id')
]).
select_from(page_table.join(magazine_page_table)),
assert t1.c is t2.c
assert t1.c.col1 is t2.c.col1
- inner = select([s1])
+ inner = select([s1])
assert inner.corresponding_column(t2.c.col1,
require_embedded=False) \
b4._annotations, b4.left._annotations:
assert elem == {}
- assert b2.left is not bin.left
+ assert b2.left is not bin.left
assert b3.left is not b2.left is not bin.left
assert b4.left is bin.left # since column is immutable
# deannotate copies the element
#2453 - however note this was modified by
#1401, and it's likely that re49563072578
is helping us with the str() comparison
- case now, as deannotate is making
+ case now, as deannotate is making
clones again in some cases.
"""
table1 = table('table1', column('x'))
def test_annotate_varied_annot_same_col(self):
"""test two instances of the same column with different annotations
preserving them when deep_annotate is run on them.
-
+
"""
t1 = table('table1', column("col1"), column("col2"))
s = select([t1.c.col1._annotate({"foo":"bar"})])
)
def test_deannotate_3(self):
- table1 = table('table1', column("col1"), column("col2"),
+ table1 = table('table1', column("col1"), column("col2"),
column("col3"), column("col4"))
j = and_(
table1.c.col1._annotate({"remote":True})==
- table1.c.col2._annotate({"local":True}),
+ table1.c.col2._annotate({"local":True}),
table1.c.col3._annotate({"remote":True})==
table1.c.col4._annotate({"local":True})
)