or collections taken into account when deleting
objects, despite passive_deletes remaining at
its default of False. [ticket:2002]
-
+
- A warning is emitted when version_id_col is specified
on an inheriting mapper when the inherited mapper
already has one, if those column expressions are not
the same. [ticket:1987]
-
+
- "innerjoin" flag doesn't take effect along the chain
of joinedload() joins if a previous join in that chain
is an outer join, thus allowing primary rows without
- The mapper argument "primary_key" can be passed as a
single column as well as a list or tuple. [ticket:1971]
The documentation examples that illustrated it as a
- scalar value have been changed to lists.
+ scalar value have been changed to lists.
- Added active_history flag to relationship()
and column_property(), forces attribute events to
always load the "old" value, so that it's available to
attributes.get_history(). [ticket:1961]
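  For illustration, a minimal sketch of the flag on a declarative
  mapping (the User/Address names and Base setup here are assumed,
  not part of the change):

      from sqlalchemy import Column, Integer
      from sqlalchemy.orm import relationship, attributes
      from sqlalchemy.ext.declarative import declarative_base

      Base = declarative_base()

      class User(Base):
          __tablename__ = 'users'
          id = Column(Integer, primary_key=True)
          # the "old" value is loaded before being replaced, so that
          # attributes.get_history() can report it
          addresses = relationship("Address", active_history=True)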
-
+
- Query.get() will raise if the number of params
in a composite key is too large, as well as too
small. [ticket:1977]
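  For illustration, assuming a mapped class with a two-column
  composite primary key:

      session.query(Thing).get((1, 2))      # correct number of values
      session.query(Thing).get((1, 2, 3))   # now raises: too many values
      session.query(Thing).get((1,))        # now raises: too few values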
and foreign_keys wasn't used - adds "foreign_keys" to
the suggestion. Also add "foreign_keys" to the
suggestion for the generic "direction" error.
-
+
- sql
- Fixed operator precedence rules for multiple
chains of a single non-associative operator.
and not "x - y - z". Also works with labels,
i.e. "x - (y - z).label('foo')"
[ticket:1984]
-
+
- The 'info' attribute of Column is copied during
Column.copy(), i.e. as occurs when using columns
in declarative mixins. [ticket:1967]
- Added a bind processor for booleans which coerces
to int, for DBAPIs such as pymssql that naively call
str() on values.
-
+
- engine
- The "unicode warning" against non-unicode bind data
is now raised only when the
Unicode type is used explicitly; not when
convert_unicode=True is used on the engine
or String type.
-
+
- Fixed memory leak in C version of Decimal result
processor. [ticket:1978]
-
+
- Implemented sequence check capability for the C
version of RowProxy, as well as 2.7 style
"collections.Sequence" registration for RowProxy.
- Threadlocal engine methods rollback(), commit(),
prepare() won't raise if no transaction is in progress;
this was a regression introduced in 0.6. [ticket:1998]
-
+
- postgresql
- Single element tuple expressions inside an IN clause
parenthesize correctly, also from [ticket:1984]
-
+
- Ensured every numeric, float, int code, scalar + array,
are recognized by psycopg2 and pg8000's "numeric"
base type. [ticket:1955]
- Fixed bug whereby KeyError would occur with non-ENUM
supported PG versions after a pool dispose+recreate
would occur, [ticket:1989]
-
+
- mysql
- Fixed error handling for Jython + zxjdbc, such that
has_table() property works again. Regression from
this character. cx_oracle 5.0.3 or greater is also required
when using a non-period-decimal-point NLS_LANG setting.
[ticket:1953].
-
+
- declarative
- An error is raised if __table_args__ is not in tuple
or dict format, and is not None. [ticket:1972]
- examples
- The versioning example now supports detection of changes
in an associated relationship().
-
+
0.6.5
=====
- orm
- - Added a new "lazyload" option "immediateload".
+ - Added a new "lazyload" option "immediateload".
Issues the usual "lazy" load operation automatically
as the object is populated. The use case
here is when loading objects to be placed in
the session isn't available, and straight 'select'
loading, not 'joined' or 'subquery', is desired.
[ticket:1914]
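  A sketch of the option in use, assuming it is available as
  sqlalchemy.orm.immediateload() and a User/Address mapping plus a
  Session already exist:

      from sqlalchemy.orm import immediateload

      # each User.addresses collection is loaded by its own SELECT as
      # the User rows are populated, rather than on first access
      users = session.query(User).options(
          immediateload(User.addresses)).all()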
-
+
- New Query methods: query.label(name), query.as_scalar(),
return the query's statement as a scalar subquery
with /without label [ticket:1920];
Roughly equivalent to a generative form of query.values()
which accepts mapped entities as well as column
expressions.
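  An illustrative sketch of a correlated scalar subquery built this
  way (User/Address mapping and Session assumed):

      from sqlalchemy import func

      address_count = (
          session.query(func.count(Address.id))
          .filter(Address.user_id == User.id)
          .label("address_count")
      )
      # the labeled scalar subquery renders inline in the outer SELECT
      session.query(User.name, address_count).all()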
-
+
- Fixed recursion bug which could occur when moving
an object from one reference to another, with
backrefs involved, where the initiating parent
- Fixed a regression in 0.6.4 which occurred if you
passed an empty list to "include_properties" on
mapper() [ticket:1918]
-
+
- Fixed labeling bug in Query whereby the NamedTuple
would mis-apply labels if any of the column
expressions were un-labeled.
-
+
- Patched a case where query.join() would adapt the
right side to the right side of the left's join
inappropriately [ticket:1925]
a mapped entity and not a plain selectable,
as the default "left" side, not the first entity
in the Query object's list of entities.
-
+
- The exception raised by Session when it is used
subsequent to a subtransaction rollback (which is what
happens when a flush fails in autocommit=False mode) has
expiration would fail if the column expression key was
a class attribute with a different keyname as the
actual column name. [ticket:1935]
-
+
- Added an assertion during flush which ensures
that no NULL-holding identity keys were generated
on "newly persistent" objects.
is not triggered on these loads when the attributes are
determined and the "committed" state may not be
available. [ticket:1910]
-
+
- A new flag on relationship(), load_on_pending, allows
the lazy loader to fire off on pending objects without a
flush taking place, as well as a transient object that's
object is loaded, so backrefs aren't available until
after a flush. The flag is only intended for very
specific use cases.
-
+
- Another new flag on relationship(), cascade_backrefs,
disables the "save-update" cascade when the event was
initiated on the "reverse" side of a bidirectional
it getting sucked into the child object's session,
while still allowing the forward collection to
cascade. We *might* default this to False in 0.7.
-
+
- Slight improvement to the behavior of
"passive_updates=False" when placed only on the
many-to-one side of a relationship; documentation has
- Placing passive_deletes=True on a many-to-one emits
a warning, since you probably intended to put it on
the one-to-many side.
-
+
- Fixed bug that would prevent "subqueryload" from
working correctly with single table inheritance
for a relationship from a subclass - the "where
type in (x, y, z)" only gets placed on the inside,
instead of repeatedly.
-
+
- When using from_self() with single table inheritance,
the "where type in (x, y, z)" is placed on the outside
of the query only, instead of repeatedly. May make
- reworked the internals of mapper.cascade_iterator() to
cut down method calls by about 9% in some circumstances.
[ticket:1932]
-
+
- sql
- Fixed bug in TypeDecorator whereby the dialect-specific
type was getting pulled in to generate the DDL for a
given type, which didn't always return the correct result.
-
+
- TypeDecorator can now have a fully constructed type
specified as its "impl", in addition to a type class.
- TypeDecorator.load_dialect_impl() returns "self.impl" by
default, i.e. not the dialect implementation type of
- "self.impl". This to support compilation correctly.
+ "self.impl". This to support compilation correctly.
Behavior can be user-overridden in exactly the same way
as before to the same effect.
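  For example (sketch), a constructed instance can now be assigned
  directly:

      from sqlalchemy.types import TypeDecorator, String

      class ShortString(TypeDecorator):
          # a fully constructed type, not just the String class
          impl = String(50)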
- - Added type_coerce(expr, type_) expression element.
+ - Added type_coerce(expr, type_) expression element.
Treats the given expression as the given type when evaluating
expressions and processing result rows, but does not
affect the generation of SQL, other than an anonymous
label.
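  An illustrative sketch, assuming the construct is imported from
  sqlalchemy.sql.expression and "mytable" has a plain string "data"
  column:

      from sqlalchemy import select, Unicode
      from sqlalchemy.sql.expression import type_coerce

      # result values are processed as Unicode; the rendered SQL for
      # the column is unchanged aside from an anonymous label
      stmt = select([type_coerce(mytable.c.data, Unicode)])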
-
+
- Table.tometadata() now copies Index objects associated
with the Table as well.
- Fixed recursion overflow which could occur when operating
with two expressions both of type "NullType", but
not the singleton NULLTYPE instance. [ticket:1907]
-
+
- declarative
- @classproperty (soon/now @declared_attr) takes effect for
__mapper_args__, __table_args__, __tablename__ on
- A mixin can now specify a column that overrides
a column of the same name associated with a superclass.
Thanks to Oystein Haaland.
-
+
- engine
-
+
- Fixed a regression in 0.6.4 whereby the change that
allowed cursor errors to be raised consistently broke
the result.lastrowid accessor. Test coverage has
been added for result.lastrowid. Note that lastrowid
is only supported by Pysqlite and some MySQL drivers,
so isn't super-useful in the general case.
-
+
- the logging message emitted by the engine when
a connection is first used is now "BEGIN (implicit)"
to emphasize that DBAPI has no explicit begin().
-
+
- added "views=True" option to metadata.reflect(),
will add the list of available views to those
being reflected. [ticket:1936]
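  Sketch of the new option, assuming an existing engine:

      from sqlalchemy import MetaData

      meta = MetaData()
      # reflect tables plus any views visible to the connection
      meta.reflect(bind=engine, views=True)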
- engine_from_config() now accepts 'debug' for
'echo', 'echo_pool', 'force' for 'convert_unicode',
- boolean values for 'use_native_unicode'.
+ boolean values for 'use_native_unicode'.
[ticket:1899]
- postgresql
Oracle. Previously, the flag would be forced
to False if server version info was < 10.
[ticket:1878]
-
+
- mssql
- Fixed reflection bug which did not properly handle
reflection of unknown types. [ticket:1946]
-
+
- Fixed bug where aliasing of tables with "schema" would
fail to compile properly. [ticket:1943]
(spaces, embedded commas, etc.) can be reflected.
Note that reflection of indexes requires SQL
Server 2005 or greater. [ticket:1770]
-
+
- mssql+pymssql dialect now honors the "port" portion
of the URL instead of discarding it. [ticket:1952]
-
+
- informix
- *Major* cleanup / modernization of the Informix
dialect for 0.6, courtesy Florian Apolloner.
--with-coverage option to turn on coverage before
SQLAlchemy modules are imported, allowing coverage
to work correctly.
-
+
- misc
- CircularDependencyError now has .cycles and .edges
members, which are the set of elements involved in
- one or more cycles, and the set of edges as 2-tuples.
+ one or more cycles, and the set of edges as 2-tuples.
[ticket:1890]
-
+
0.6.4
=====
- orm
iterable. This is because asynchronous gc
can remove items via the gc thread at any time.
[ticket:1891]
-
+
- The Session class is now present in sqlalchemy.orm.*.
We're moving away from the usage of create_session(),
which has non-standard defaults, for those situations
where a one-step Session constructor is desired. Most
users should stick with sessionmaker() for general use,
however.
-
+
- query.with_parent() now accepts transient objects
and will use the non-persistent values of their pk/fk
- attributes in order to formulate the criterion.
+ attributes in order to formulate the criterion.
Docs are also clarified as to the purpose of with_parent().
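  For illustration, assuming a User/Address mapping where "addresses"
  is the relationship of interest:

      # u1 is transient; only its primary key attribute has been assigned
      u1 = User(id=5)
      session.query(Address).with_parent(u1, "addresses").all()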
-
+
- The include_properties and exclude_properties arguments
to mapper() now accept Column objects as members in
addition to strings. This so that same-named Column
In 0.7 this warning will be an exception. Note that
this warning is not emitted when the combination occurs
as a result of inheritance, so that attributes
- still allow being overridden naturally.
+ still allow being overridden naturally.
[ticket:1896]. In 0.7 this will be improved further.
-
+
- The primary_key argument to mapper() can now specify
a series of columns that are only a subset of
the calculated "primary key" columns of the mapped
in the selectable that are actually marked as
"primary_key", such as a join against two
tables on their primary key columns [ticket:1896].
-
+
- An object that's been deleted now gets a flag
'deleted', which prohibits the object from
being re-add()ed to the session, as previously
- make_transient() can be safely called on an
already transient instance.
-
+
- a warning is emitted in mapper() if the polymorphic_on
column is not present either in direct or derived
form in the mapped selectable or in the
the foreign keys to be elsewhere in any case.
A warning is now emitted instead of an error,
and the mapping succeeds. [ticket:1877]
-
+
- Moving an o2m object from one collection to
another, or vice versa changing the referenced
object by an m2o, where the foreign key is also a
at the "old", assuming passive_updates=True,
unless we know it was a PK switch that
triggered the change. [ticket:1856]
-
+
- The value of version_id_col can be changed
manually, and this will result in an UPDATE
of the row. Versioned UPDATEs and DELETEs
expressions are enforced - lists of strings
are explicitly disallowed since this is a
very common error
-
+
- Dynamic attributes don't support collection
population - added an assertion for when
set_committed_value() is called, as well as
- the versioning example works correctly now
if versioning on a col that was formerly
NULL.
-
+
- sql
- Calling execute() on an alias() construct is pending
deprecation for 0.7, as it is not itself an
"executable" construct. It currently "proxies" its
inner element and is conditionally "executable" but
this is not the kind of ambiguity we like these days.
-
+
- The execute() and scalar() methods of ClauseElement
are now moved appropriately to the Executable
subclass. ClauseElement.execute()/ scalar() are still
these would always raise an error anyway if you were
not an Executable (unless you were an alias(), see
previous note).
-
+
- Added basic math expression coercion for
Numeric->Integer,
so that resulting type is Numeric regardless
of the direction of the expression.
-
+
- Changed the scheme used to generate truncated
"auto" index names when using the "index=True"
flag on Column. The truncation only takes
upon the base "SET SESSION ISOLATION" command,
as psycopg2 resets the isolation level on each new
transaction otherwise.
-
+
- mssql
- Fixed "default schema" query to work with
pymssql backend.
- firebird
- Fixed bug whereby a column default would fail to
reflect if the "default" keyword were lower case.
-
+
- oracle
- Added ROWID type to the Oracle dialect, for those
cases where an explicit CAST might be needed.
"SQLAlchemy ORM" sections, mapper/relationship docs
have been broken out. Lots of sections rewritten
and/or reorganized.
-
+
- examples
- The beaker_caching example has been reorganized
such that the Session, cache manager,
when copying columns, so that the versioning
table handles multiple rows with repeating values.
[ticket:1887]
-
+
0.6.3
=====
- orm
themselves + a selectable (i.e. from_self(),
union(), etc.), so that join() and such have the
correct state to work from. [ticket:1853]
-
+
- Fixed bug where Query.join() would fail if
querying a non-ORM column then joining without
an on clause when a FROM clause is already
but the subclass is not. Any attempts to access
cls._sa_class_manager.mapper now raise
UnmappedClassError(). [ticket:1142]
-
+
- Added "column_descriptions" accessor to Query,
returns a list of dictionaries containing
naming/typing information about the entities
the Query will return. Can be helpful for
building GUIs on top of ORM queries.
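  Sketch of the accessor, assuming a User mapping:

      q = session.query(User, User.name)
      for desc in q.column_descriptions:
          # each dict describes one entity/column in the result tuple
          print(desc["name"], desc["type"])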
-
+
- mysql
- The _extract_error_code() method now works
come back as ints without SQLA type
objects being involved and without needless
conversion to Decimal first.
-
+
Unfortunately, some exotic subquery cases
can even see different types between
individual result rows, so the Numeric
form query.join(target, clause_expression),
i.e. missing the tuple, and raise an informative
error message that this is the wrong calling form.
-
+
- Fixed bug regarding flushes on self-referential
bi-directional many-to-many relationships, where
two objects made to mutually reference each other
in one flush would fail to insert a row for both
sides. Regression from 0.5. [ticket:1824]
-
+
- the post_update feature of relationship() has been
reworked architecturally to integrate more closely
with the new 0.6 unit of work. The motivation
statement per column per row. Multiple row
updates are also batched into executemany()s as
possible, while maintaining consistent row ordering.
-
+
- Query.statement, Query.subquery(), etc. now transfer
the values of bind parameters, i.e. those specified
by query.params(), into the resulting SQL expression.
Previously the values would not be transferred
and bind parameters would come out as None.
-
+
- Subquery-eager-loading now works with Query objects
which include params(), as well as get() Queries.
- The make_transient() function is now in the generated
documentation.
-
+
- make_transient() removes all "loader" callables from
the state being made transient, removing any
"expired" state - all unloaded attributes reset back
to undefined, None/empty on access.
-
+
- sql
- The warning emitted by the Unicode and String types
with convert_unicode=True no longer embeds the actual
- Fixed bug that would prevent overridden clause
compilation from working for "annotated" expression
elements, which are often generated by the ORM.
-
+
- The argument to "ESCAPE" of a LIKE operator or similar
is passed through render_literal_value(), which may
implement escaping of backslashes. [ticket:1400]
-
+
- Fixed bug in Enum type which blew away native_enum
flag when used with TypeDecorators or other adaption
scenarios.
- Inspector hits bind.connect() when invoked to ensure
initialize has been called. the internal name ".conn"
is changed to ".bind", since that's what it is.
-
+
- Modified the internals of "column annotation" such that
a custom Column subclass can safely override
_constructor to return Column, for the purposes of
- postgresql
- render_literal_value() is overridden which escapes
backslashes, currently applies to the ESCAPE clause
- of LIKE and similar expressions.
+ of LIKE and similar expressions.
Ultimately this will have to detect the value of
- "standard_conforming_strings" for full behavior.
+ "standard_conforming_strings" for full behavior.
[ticket:1400]
- Won't generate "CREATE TYPE" / "DROP TYPE" if
using types.Enum on a PG version prior to 8.3 -
the supports_native_enum flag is fully
honored. [ticket:1836]
-
+
- mysql
- MySQL dialect doesn't emit CAST() for MySQL version
detected < 4.0.2. This allows the unicode
check on connect to proceed. [ticket:1826]
- MySQL dialect now detects NO_BACKSLASH_ESCAPES sql
- mode, in addition to ANSI_QUOTES.
-
+ mode, in addition to ANSI_QUOTES.
+
- render_literal_value() is overridden which escapes
backslashes, currently applies to the ESCAPE clause
of LIKE and similar expressions. This behavior
is derived from detecting the value of
NO_BACKSLASH_ESCAPES. [ticket:1400]
-
+
- oracle:
- Fixed ora-8 compatibility flags such that they
don't cache a stale value from before the first
which suggests checking that the FreeTDS version
configuration is using 7.0 or 8.0, not 4.2.
[ticket:1825]
-
+
- firebird
- Fixed incorrect signature in do_execute(), error
introduced in 0.6.1. [ticket:1823]
- Firebird dialect adds CHAR, VARCHAR types which
accept a "charset" flag, to support Firebird
"CHARACTER SET" clause. [ticket:1813]
-
+
- declarative
- Added support for @classproperty to provide
any kind of schema/mapping construct from a
An error is raised if any MapperProperty subclass
is specified on a mixin without using @classproperty.
[ticket:1751] [ticket:1796] [ticket:1805]
-
+
- a mixin class can now define a column that matches
one which is present on a __table__ defined on a
subclass. It cannot, however, define one that is
user-defined compiler is specific to certain
backends and compilation for a different backend
is invoked. [ticket:1838]
-
+
- documentation
- Added documentation for the Inspector. [ticket:1820]
decorators so that Sphinx documentation picks up
these attributes and methods, such as
ResultProxy.inserted_primary_key. [ticket:1830]
-
-
+
+
0.6.1
=====
- orm
- Fixed regression introduced in 0.6.0 involving improper
history accounting on mutable attributes. [ticket:1782]
-
+
- Fixed regression introduced in 0.6.0 unit of work refactor
that broke updates for bi-directional relationship()
with post_update=True. [ticket:1807]
-
+
- session.merge() will not expire attributes on the returned
instance if that instance is "pending". [ticket:1789]
the related Engine. The cache is an LRUCache for the
rare case that a mapper receives an extremely
high number of different column patterns as UPDATEs.
-
+
- sql
- expr.in_() now accepts a text() construct as the argument.
Grouping parentheses are added automatically, i.e. usage
will coerce a "basestring" on the right side into a
_Binary as well so that required DBAPI processing
takes place.
-
+
- Added table.add_is_dependent_on(othertable), allows manual
placement of dependency rules between two Table objects
for use within create_all(), drop_all(), sorted_tables.
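  For example (sketch), to force an ordering between two tables that
  share no foreign key:

      # table_b will be created after, and dropped before, table_a
      table_b.add_is_dependent_on(table_a)
      metadata.create_all(engine)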
- Fixed bug that prevented implicit RETURNING from functioning
properly with composite primary key that contained zeroes.
[ticket:1778]
-
+
- Fixed errant space character when generating ADD CONSTRAINT
for a named UNIQUE constraint.
- Pool classes will reuse the same "pool_logging_name" setting
after a dispose() occurs.
-
+
- Engine gains an "execution_options" argument and
update_execution_options() method, which will apply to
all connections generated by this engine.
-
+
- mysql
- func.sysdate() emits "SYSDATE()", i.e. with the ending
parenthesis, on MySQL. [ticket:1794]
- Fixed concatenation of constraints when "PRIMARY KEY"
constraint gets moved to column level due to SQLite
AUTOINCREMENT keyword being rendered. [ticket:1812]
-
+
- oracle
- Added a check for cx_oracle versions lower than version 5,
in which case the incompatible "output type handler" won't
"native unicode" check doesn't fail, cx_oracle
"native unicode" mode is disabled, VARCHAR() is emitted
with bytes count instead of char count. [ticket:1808]
-
+
- oracle_xe 5 doesn't accept a Python unicode object in
its connect string in normal Python 2.x mode - so we coerce
to str() directly. non-ascii characters aren't supported
or with subqueries, so its still not very usable, but at
least SQLA gets the SQL past the Oracle parser.
[ticket:1815]
-
+
- firebird
- Added a label to the query used within has_table() and
has_sequence() to work with older versions of Firebird
that don't provide labels for result columns. [ticket:1521]
-
+
- Added integer coercion to the "type_conv" attribute when
passed via query string, so that it is properly interpreted
by Kinterbasdb. [ticket:1779]
would cause a version check to occur. Since the instance
is first expired, refresh() always upgrades the object
to the most recent version.
-
+
- The 'refresh-expire' cascade, when reaching a pending object,
will expunge the object if the cascade also includes
"delete-orphan", or will simply detach it otherwise.
- The ORM will set the docstring of all generated descriptors
to None by default. This can be overridden using 'doc'
(or if using Sphinx, attribute docstrings work too).
-
+
- Added kw argument 'doc' to all mapper property callables
as well as Column(). Will assemble the string 'doc' as
- the '__doc__' attribute on the descriptor.
+ the '__doc__' attribute on the descriptor.
- Usage of version_id_col on a backend that supports
cursor.rowcount for execute() but not executemany() now works
objects of all the same class, thereby avoiding redundant
compilation per individual INSERT/UPDATE within an
individual flush() call.
-
+
- internal getattr(), setattr(), getcommitted() methods
on ColumnProperty, CompositeProperty, RelationshipProperty
have been underscored (i.e. are private), signature has
changed.
-
+
- engines
- The C extension now also works with DBAPIs which use custom
sequences as row (and not only tuples). [ticket:1757]
- somejoin.select(fold_equivalents=True) is no longer
deprecated, and will eventually be rolled into a more
comprehensive version of the feature for [ticket:1729].
-
+
- the Numeric type raises an *enormous* warning when expected
to convert floats to Decimal from a DBAPI that returns floats.
This includes SQLite, Sybase, MS-SQL. [ticket:1759]
- Fixed an error in expression typing which caused an endless
loop for expressions with two NULL types.
-
+
- Fixed bug in execution_options() feature whereby the existing
Transaction and other state information from the parent
connection would not be propagated to the sub-connection.
corresponding to the dialect, clause element, the column
names within the VALUES or SET clause of an INSERT or UPDATE,
as well as the "batch" mode for an INSERT or UPDATE statement.
-
+
- Added get_pk_constraint() to reflection.Inspector, similar
to get_primary_keys() except returns a dict that includes the
name of the constraint, for supported backends (PG so far).
- Table.create() and Table.drop() no longer apply metadata-
level create/drop events. [ticket:1771]
-
+
- ext
- the compiler extension now allows @compiles decorators
on base classes that extend to child classes, @compiles
decorators on child classes that aren't broken by a
@compiles decorator on the base class.
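  A minimal sketch of a @compiles decorator on a base class carrying
  over to a subclass (the element names here are invented for
  illustration):

      from sqlalchemy.ext.compiler import compiles
      from sqlalchemy.sql.expression import ColumnClause

      class base_marker(ColumnClause):
          pass

      class sub_marker(base_marker):
          pass

      @compiles(base_marker)
      def _compile_marker(element, compiler, **kw):
          return "MARKER(%s)" % element.name

      # sub_marker() instances now compile through _compile_marker too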
-
+
- Declarative will raise an informative error message
if a non-mapped class attribute is referenced in the
string-based relationship() arguments.
SERIAL columns correctly, after the name of the sequence
has been changed. Thanks to Kumar McMillan for the patch.
[ticket:1071]
-
+
- Repaired missing import in psycopg2._PGNumeric type when
unknown numeric is received.
- Postgresql reflects the name of primary key constraints,
if one exists. [ticket:1769]
-
+
- oracle
- Now using cx_oracle output converters so that the
DBAPI returns natively the kinds of values we prefer:
call is slightly expensive however so it can be disabled.
To re-enable on a per-execution basis, the
'enable_rowcount=True' execution option may be used.
-
+
- examples
- Updated attribute_shard.py example to use a more robust
method of searching a Query for binary expressions which
compare columns against literal values.
-
+
0.6beta3
========
loading available, the new names for eagerload() and
eagerload_all() are joinedload() and joinedload_all(). The
old names will remain as synonyms for the foreseeable future.
-
+
- The "lazy" flag on the relationship() function now accepts
a string argument for all kinds of loading: "select", "joined",
"subquery", "noload" and "dynamic", where the default is now
directly down to select().with_hint() and also accepts
entities as well as tables and aliases. See with_hint() in the
SQL section below. [ticket:921]
-
+
- Fixed bug in Query whereby calling q.join(prop).from_self(...).
join(prop) would fail to render the second join outside the
subquery, when joining on the same criterion as was on the
would fail if the underlying table (but not the actual alias)
were referenced inside the subquery generated by
q.from_self() or q.select_from().
-
+
- Fixed bug which affected all eagerload() and similar options
such that "remote" eager loads, i.e. eagerloads off of a lazy
load such as query(A).options(eagerload(A.b, B.c))
carefully that "Cls" is compatible with the current joinpoint,
and act the same way as Query.join("propname", from_joinpoint=True)
in that regard.
-
+
- sql
- Added with_hint() method to select() construct. Specify
a table/alias, hint text, and optional dialect name, and
"hints" will be rendered in the appropriate place in the
statement. Works for Oracle, Sybase, MySQL. [ticket:921]
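  A brief sketch, assuming an existing Table object and an
  Oracle-style index hint:

      from sqlalchemy import select

      stmt = select([mytable]).with_hint(
          mytable, "index(%(name)s ix_mytable)", "oracle"
      )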
-
+
- Fixed bug introduced in 0.6beta2 where column labels would
render inside of column expressions already assigned a label.
[ticket:1747]
when reflecting - TINYINT(1) is returned. Use Boolean/
BOOLEAN in table definition to get boolean conversion
behavior. [ticket:1752]
-
+
- oracle
- The Oracle dialect will issue VARCHAR type definitions
using character counts, i.e. VARCHAR2(50 CHAR), so that
__tablename__, __table_args__, etc. now works if
the method references attributes on the ultimate
subclass. [ticket:1749]
-
+
- relationships and columns with foreign keys aren't
allowed on declarative mixins, sorry. [ticket:1751]
- The sqlalchemy.orm.shard module now becomes an extension,
sqlalchemy.ext.horizontal_shard. The old import
works with a deprecation warning.
-
+
0.6beta2
========
now that Distribute runs on Py3k. distribute_setup.py
is now included. See README.py3k for Python 3 installation/
testing instructions.
-
+
- orm
- The official name for the relation() function is now
relationship(), to eliminate confusion over the relational
callable that, given the current value of the "version_id_col",
returns the next version number. Can be used for alternate
versioning schemes such as uuid, timestamps. [ticket:1692]
-
+
- added "lockmode" kw argument to Session.refresh(), will
- pass through the string value to Query the same as
+ pass through the string value to Query the same as
in with_lockmode(), will also do version check for a
version_id_col-enabled mapping.
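  Sketch of the new argument on a persistent instance:

      # re-SELECTs the row FOR UPDATE, and performs the version check
      # if the mapping defines a version_id_col
      session.refresh(some_object, lockmode="update")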
- Fixed bug in session.merge() which prevented dict-like
collections from merging.
-
+
- session.merge() works with relations that specifically
don't include "merge" in their cascade options - the target
is ignored completely.
-
+
- session.merge() will not expire existing scalar attributes
on an existing target if the target has a value for that
attribute, even if the incoming merged doesn't have
it also is implemented within merge() such that a SELECT
won't be issued for an incoming instance with partially
NULL primary key if the flag is False. [ticket:1680]
-
+
- Fixed bug in 0.6-reworked "many-to-one" optimizations
such that a many-to-one that is against a non-primary key
column on the remote table (i.e. foreign key against a
we will need it for proper history/backref accounting,
and we can't pull from the local identity map on a
non-primary key column. [ticket:1737]
-
+
- fixed internal error which would occur if calling has()
or similar complex expression on a single-table inheritance
relation(). [ticket:1731]
-
+
- query.one() no longer applies LIMIT to the query, this to
ensure that it fully counts all object identities present
in the result, even in the case where joins may conceal
- query.get() now returns None if queried for an identifier
that is present in the identity map with a different class
- than the one requested, i.e. when using polymorphic loading.
+ than the one requested, i.e. when using polymorphic loading.
[ticket:1727]
-
+
- A major fix in query.join(), when the "on" clause is an
attribute of an aliased() construct, but there is already
an existing join made out to a compatible target, query properly
joins to the right aliased() construct instead of sticking
onto the right side of the existing join. [ticket:1706]
-
+
- Slight improvement to the fix for [ticket:1362] to not issue
needless updates of the primary key column during a so-called
"row switch" operation, i.e. add + delete of two objects
attribute load or refresh action fails due to object
being detached from any Session. UnboundExecutionError
is specific to engines bound to sessions and statements.
-
+
- Query called in the context of an expression will render
disambiguating labels in all cases. Note that this does
not apply to the existing .statement and .subquery()
accessor/method, which still honors the .with_labels()
- setting that defaults to False.
-
+ setting that defaults to False.
+
- Query.union() retains disambiguating labels within the
returned statement, thus avoiding various SQL composition
errors which can result from column name conflicts.
query.select_from(), query.with_polymorphic(), or
query.from_statement() raises an exception now instead of
silently dropping those criterion. [ticket:1736]
-
+
- query.scalar() now raises an exception if more than one
row is returned. All other behavior remains the same.
[ticket:1735]
- Fixed bug which caused "row switch" logic, that is an
INSERT and DELETE replaced by an UPDATE, to fail when
version_id_col was in use. [ticket:1692]
-
+
- sql
- join() will now simulate a NATURAL JOIN by default. Meaning,
if the left side is a join, it will attempt to join the right
any exceptions about ambiguous join conditions if successful
even if there are further join targets across the rest of
the left. [ticket:1714]
-
+
- The most common result-processor conversion functions were
moved to the new "processors" module. Dialect authors are
encouraged to use those functions whenever they correspond
Dialects can also expand upon the areas where binds are not
accepted, such as within argument lists of functions
(which don't work on MS-SQL when native SQL binding is used).
-
+
- Added "unicode_errors" parameter to String, Unicode, etc.
Behaves like the 'errors' keyword argument to
the standard library's string.decode() functions. This flag
in the first place (i.e. MySQL. *not* PG, Sqlite, etc.)
- Added math negation operator support, -x.
-
+
- FunctionElement subclasses are now directly executable the
same way any func.foo() construct is, with automatic
SELECT being applied when passed to execute().
-
+
- The "type" and "bind" keyword arguments of a func.foo()
construct are now local to "func." constructs and are
not part of the FunctionElement base class, allowing
a "type" to be handled in a custom constructor or
class-level variable.
-
+
- Restored the keys() method to ResultProxy.
-
+
- The type/expression system now does a more complete job
of determining the return type from an expression
as well as the adaptation of the Python operator into
- Column() requires a type if it has no foreign keys (this is
not new). An error is now raised if a Column() has no type
and no foreign keys. [ticket:1705]
-
+
- the "scale" argument of the Numeric() type is honored when
coercing a returned floating point value into a string
on its way to Decimal - this allows accuracy to function
- the copy() method of Column now copies over uninitialized
"on table attach" events. Helps with the new declarative
"mixin" capability.
-
+
- engines
- Added an optional C extension to speed up the sql layer by
reimplementing RowProxy and the most common result processors.
info from the cursor before commit() is called on the
DBAPI connection in an "autocommit" scenario. This helps
mxodbc with rowcount and is probably a good idea overall.
-
+
- Opened up logging a bit such that isEnabledFor() is called
more often, so that changes to the log level for engine/pool
will be reflected on next connect. This adds a small
life a lot easier for all those situations when logging
just happens to be configured after create_engine() is called.
[ticket:1719]
-
+
- The assert_unicode flag is deprecated. SQLAlchemy will raise
a warning in all cases where it is asked to encode a non-unicode
Python string, as well as when a Unicode or UnicodeType type
filters down to that of Pool. Issues the given string name
within the "name" field of logging messages instead of the default
hex identifier string. [ticket:1555]
-
+
- The visit_pool() method of Dialect is removed, and replaced with
on_connect(). This method returns a callable which receives
the raw DBAPI connection after each one is created. The callable
is assembled into a first_connect/connect pool listener by the
connection strategy if non-None. Provides a simpler interface
for dialects.
-
+
- StaticPool now initializes, disposes and recreates without
opening a new connection - the connection is only opened when
first requested. dispose() also works on AssertionPool now.
[ticket:1728]
-
+
- metadata
- Added the ability to strip schema information when using
"tometadata" by passing "schema=None" as an argument. If schema
- declarative now accepts mixin classes directly, as a means
to provide common functional and column-based elements on
all subclasses, as well as a means to propagate a fixed
- set of __table_args__ or __mapper_args__ to subclasses.
+ set of __table_args__ or __mapper_args__ to subclasses.
For custom combinations of __table_args__/__mapper_args__ from
an inherited mixin to local, descriptors can now be used.
New details are all up in the Declarative documentation.
Thanks to Chris Withers for putting up with my strife
on this. [ticket:1707]
-
+
- the __mapper_args__ dict is copied when propagating to a subclass,
and is taken straight off the class __dict__ to avoid any
propagation from the parent. mapper inheritance already
- An exception is raised when a single-table subclass specifies
a column that is already present on the base class.
[ticket:1732]
-
+
- mysql
- Fixed reflection bug whereby when COLLATE was present,
nullable flag and server defaults would not be reflected.
integer flags like UNSIGNED.
- Further fixes for the mysql-connector dialect. [ticket:1668]
-
+
- Composite PK table on InnoDB where the "autoincrement" column
isn't first will emit an explicit "KEY" phrase within
CREATE TABLE thereby avoiding errors, [ticket:1496]
- Added reflection/create table support for a wide range
of MySQL keywords. [ticket:1634]
-
+
- Fixed import error which could occur reflecting tables on
a Windows host [ticket:1580]
-
+
- mssql
- Re-established support for the pymssql dialect.
- Various fixes for implicit returning, reflection,
etc. - the MS-SQL dialects aren't quite complete
in 0.6 yet (but are close)
-
+
- Added basic support for mxODBC [ticket:1710].
-
+
- Removed the text_as_varchar option.
- oracle
is emitted asking that the user seriously consider
the usage of this difficult mode of operation.
[ticket:1670]
-
+
- The except_() method now renders as MINUS on Oracle,
which is more or less equivalent on that platform.
[ticket:1712]
-
+
- Added support for rendering and reflecting
TIMESTAMP WITH TIME ZONE, i.e. TIMESTAMP(timezone=True).
[ticket:651]
-
+
- Oracle INTERVAL type can now be reflected.
-
+
- sqlite
- Added "native_datetime=True" flag to create_engine().
This will cause the DATE and TIMESTAMP types to skip
creates/drops and basic round trip functionality.
Does not yet include reflection or comprehensive
support of unicode/special expressions/etc.
-
+
- examples
- Changed the beaker cache example a bit to have a separate
RelationCache option for lazyload caching. This object
- Platforms targeted now include Python 2.4/2.5/2.6, Python
3.1, and Jython 2.5.
-
+
- orm
- Changes to query.update() and query.delete():
- the 'expire' option on query.update() has been renamed to
- 'fetch', thus matching that of query.delete().
+ 'fetch', thus matching that of query.delete().
'expire' is deprecated and issues a warning.
- query.update() and query.delete() both default to
- Enhancements / Changes on Session.merge():
- the "dont_load=True" flag on Session.merge() is deprecated
and is now "load=False".
-
+
- Session.merge() is performance optimized, using half the
call counts for "load=False" mode compared to 0.5 and
significantly fewer SQL queries in the case of collections
- merge() will not issue a needless merge of attributes if the
given instance is the same instance which is already present.
-
+
- merge() now also merges the "options" associated with a given
state, i.e. those passed through query.options() which follow
along with an instance, such as options to eagerly- or
lazyily- load various attributes. This is essential for
the construction of highly integrated caching schemes. This
is a subtle behavioral change vs. 0.5.
-
+
- A bug was fixed regarding the serialization of the "loader
path" present on an instance's state, which is also necessary
when combining the usage of merge() with serialized state
- and associated options that should be preserved.
-
+ and associated options that should be preserved.
+
- The all new merge() is showcased in a new comprehensive
example of how to integrate Beaker with SQLAlchemy. See
the notes in the "examples" note below.
-
+
- Primary key values can now be changed on a joined-table inheritance
object, and ON UPDATE CASCADE will be taken into account when
the flush happens. Set the new "passive_updates" flag to False
on mapper() when using SQLite or MySQL/MyISAM. [ticket:1362]
-
+
- flush() now detects when a primary key column was updated by
an ON UPDATE CASCADE operation from another primary key, and
can then locate the row for a subsequent UPDATE on the new PK
value. This occurs when a relation() is there to establish
the relationship as well as passive_updates=True. [ticket:1671]
-
+
- the "save-update" cascade will now cascade the pending *removed*
values from a scalar or collection attribute into the new session
during an add() operation. This so that the flush() operation
will also delete or modify rows of those disconnected items.
-
+
- Using a "dynamic" loader with a "secondary" table now produces
a query where the "secondary" table is *not* aliased. This
allows the secondary Table object to be used in the "order_by"
the row. This may be due to primaryjoin/secondaryjoin
conditions which aren't appropriate for an eager LEFT OUTER
JOIN or for other conditions. [ticket:1643]
-
+
- an explicit check occurs when a synonym() is used with
map_column=True, when a ColumnProperty (deferred or otherwise)
exists separately in the properties dictionary sent to mapper
with the same keyname. Instead of silently replacing
the existing property (and possible options on that property),
an error is raised. [ticket:1633]
-
+
- a "dynamic" loader sets up its query criterion at construction
time so that the actual query is returned from non-cloning
accessors like "statement".
-
+
- the "named tuple" objects returned when iterating a
Query() are now pickleable.
- mapping to a select() construct now requires that you
make an alias() out of it distinctly. This to eliminate
confusion over such issues as [ticket:1542]
-
+
- query.join() has been reworked to provide more consistent
behavior and more flexibility (includes [ticket:1537])
- query.get() can be used with a mapping to an outer join
where one or more of the primary key values are None.
[ticket:1135]
-
+
- query.from_self(), query.union(), others which do a
"SELECT * from (SELECT...)" type of nesting will do
a better job translating column expressions within the subquery
may break queries with literal expressions that do not have labels
applied (i.e. literal('foo'), etc.)
[ticket:1568]
-
+
- relation primaryjoin and secondaryjoin now check that they
are column-expressions, not just clause elements. this prohibits
things like FROM expressions being placed there directly.
[ticket:1622]
-
+
- `expression.null()` is fully understood the same way
None is when comparing an object/collection-referencing
attribute within query.filter(), filter_by(), etc.
subclasses of RelationProperty) into the reverse reference.
The internal BackRef() is gone and backref() returns a plain
tuple that is understood by RelationProperty.
-
+
- The version_id_col feature on mapper() will raise a warning when
used with dialects that don't support "rowcount" adequately.
[ticket:1569]
Select-statements have these options, and the only option
used is "stream_results", and the only dialect which knows
"stream_results" is psycopg2.
-
+
- Query.yield_per() will set the "stream_results" statement
option automatically.
-
+
- Deprecated or removed:
* 'allow_null_pks' flag on mapper() is deprecated. It does
nothing now and the setting is "on" in all cases.
expect a regular mapped object instance.
* the 'engine' parameter to declarative_base() is removed.
Use the 'bind' keyword argument.
-
+
- sql
-
+
- the "autocommit" flag on select() and text() as well
as select().autocommit() are deprecated - now call
.execution_options(autocommit=True) on either of those
- the autoincrement flag on column now indicates the column
which should be linked to cursor.lastrowid, if that method
is used. See the API docs for details.
-
+
- an executemany() now requires that all bound parameter
sets contain all of the keys that are
present in the first bound parameter set. The structure
is not impacted. For this reason defaults would otherwise
silently "fail" for missing parameters, so this is now guarded
against. [ticket:1566]
-
+
- returning() support is native to insert(), update(),
delete(). Implementations of varying levels of
functionality exist for Postgresql, Firebird, MSSQL and
version in use supports it (a version number check is
performed). This occurs if no end-user returning() was
specified.
-
+
- union(), intersect(), except() and other "compound" types
of statements have more consistent behavior w.r.t.
parenthesizing. Each compound element embedded within
when nesting compound elements, the first one usually needs
".alias().select()" called on it to wrap it inside
of a subquery. [ticket:1665]
-
+
- insert() and update() constructs can now embed bindparam()
objects using names that match the keys of columns. These
bind parameters will circumvent the usual route to those
keys showing up in the VALUES or SET clause of the generated
SQL. [ticket:1579]
-
+
- the Binary type now returns data as a Python string
(or a "bytes" type in Python 3), instead of the built-
in "buffer" type. This allows symmetric round trips
of binary data. [ticket:1524]
-
+
- Added a tuple_() construct, allows sets of expressions
to be compared to another set, typically with IN against
composite primary keys or similar. Also accepts an
have only one column" error message is removed - will
rely upon the database to report problems with
col mismatch.
-
+
- User-defined "default" and "onupdate" callables which
accept a context should now call upon
"context.current_parameters" to get at the dictionary
with underscores for dots, i.e. "dbo_master_table_column".
This is a "friendly" label that behaves better
in result sets. [ticket:1428]
-
+
- removed needless "counter" behavior with select()
labelnames that match a column name in the table,
i.e. generates "tablename_id" for "id", instead of
named "tablename_id" - this is because
the labeling logic is always applied to all columns
so a naming conflict will never occur.
-
+
- calling expr.in_([]), i.e. with an empty list, emits a warning
before issuing the usual "expr != expr" clause. The
"expr != expr" can be very expensive, and it's preferred
- Added "execution_options()" to select()/text(), which set the
default options for the Connection. See the note in "engines".
-
+
- Deprecated or removed:
* "scalar" flag on select() is removed, use
select.as_scalar().
the new returning() method.
* fold_equivalents flag on join is deprecated (will remain
until [ticket:1131] is implemented)
-
+
- engines
- transaction isolation level may be specified with
create_engine(... isolation_level="..."); available on
postgresql and sqlite. [ticket:443]
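  Sketch of the new argument:

      from sqlalchemy import create_engine

      engine = create_engine(
          "postgresql://scott:tiger@localhost/test",
          isolation_level="SERIALIZABLE"
      )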
-
+
- Connection has execution_options(), generative method
which accepts keywords that affect how the statement
is executed w.r.t. the DBAPI. Currently supports
option from select() and text(). select() and
text() also have .execution_options() as well as
ORM Query().
-
+
- fixed the import for entrypoint-driven dialects to
not rely upon silly tb_info trick to determine import
error status. [ticket:1630]
-
+
- added first() method to ResultProxy, returns first row and
closes result set immediately.
- RowProxy no longer has a close() method, as the row no longer
maintains a reference to the parent. Call close() on
the parent ResultProxy instead, or use autoclose.
-
+
- ResultProxy internals have been overhauled to greatly reduce
method call counts when fetching columns. Can provide a large
speed improvement (up to more than 100%) when fetching large
- the last_inserted_ids() method has been renamed to the
descriptor "inserted_primary_key".
-
+
- setting echo=False on create_engine() now sets the loglevel
to WARN instead of NOTSET. This so that logging can be
disabled for a particular engine even if logging
for "sqlalchemy.engine" is enabled overall. Note that the
default setting of "echo" is `None`. [ticket:1554]
-
+
- ConnectionProxy now has wrapper methods for all transaction
lifecycle events, including begin(), rollback(), commit()
begin_nested(), begin_prepared(), prepare(), release_savepoint(),
etc.
-
+
- Connection pool logging now uses both INFO and DEBUG
log levels for logging. INFO is for major events such
as invalidated connections, DEBUG for all the acquire/return
logging. `echo_pool` can be False, None, True or "debug"
the same way as `echo` works.
-
+
- All pyodbc-dialects now support extra pyodbc-specific
kw arguments 'ansi', 'unicode_results', 'autocommit'.
[ticket:1621]
- the "threadlocal" engine has been rewritten and simplified
and now supports SAVEPOINT operations.
-
+
- deprecated or removed
* result.last_inserted_ids() is deprecated. Use
result.inserted_primary_key
now has those methods. All four methods accept
*args and **kwargs which are passed to the given callable,
as well as the operating connection.
-
+
- schema
- the `__contains__()` method of `MetaData` now accepts
strings or `Table` objects as arguments. If given
a `Table`, the argument is converted to `table.key` first,
i.e. "[schemaname.]<tablename>" [ticket:1541]
-
+
- deprecated MetaData.connect() and
ThreadLocalMetaData.connect() have been removed - send
the "bind" attribute to bind a metadata.
- deprecated metadata.table_iterator() method removed (use
sorted_tables)
-
+
- deprecated PassiveDefault - use DefaultClause.
-
+
- the "metadata" argument is removed from DefaultGenerator
and subclasses, but remains locally present on Sequence,
which is a standalone construct in DDL.
- PrimaryKeyConstraint.remove()
These should be constructed declaratively (i.e. in one
construction).
-
+
- The "start" and "increment" attributes on Sequence now
generate "START WITH" and "INCREMENT BY" by default,
on Oracle and Postgresql. Firebird doesn't support
these keywords right now. [ticket:1545]
-
+
- UniqueConstraint, Index, PrimaryKeyConstraint all accept
lists of column names or column objects as arguments.
- Column.metadata (get via column.table.metadata)
- Column.sequence (use column.default)
- ForeignKey(constraint=some_parent) (is now private _constraint)
-
+
- The use_alter flag on ForeignKey is now a shortcut option
for operations that can be hand-constructed using the
DDL() event system. A side effect of this refactor is
that ForeignKeyConstraint objects with use_alter=True
will *not* be emitted on SQLite, which does not support
ALTER for foreign keys.
-
+
- ForeignKey and ForeignKeyConstraint objects now correctly
copy() all their public keyword arguments. [ticket:1605]
-
+
- Reflection/Inspection
- Table reflection has been expanded and generalized into
a new API called "sqlalchemy.engine.reflection.Inspector".
The Inspector object provides fine-grained information about
a wide variety of schema information, with room for expansion,
including table names, column names, view definitions, sequences,
- indexes, etc.
-
+ indexes, etc.
+
- Views are now reflectable as ordinary Table objects. The same
Table constructor is used, with the caveat that "effective"
primary and foreign key constraints aren't part of the reflection
results; these have to be specified explicitly if desired.
-
+
- The existing autoload=True system now uses Inspector underneath
so that each dialect need only return "raw" data about tables
and other objects - Inspector is the single place that information
is compiled into Table objects so that consistency is at a maximum.
-
+
- DDL
- the DDL system has been greatly expanded. the DDL() class
now extends the more generic DDLElement(), which forms the basis
of many new constructs:
-
+
- CreateTable()
- DropTable()
- AddConstraint()
- DropIndex()
- CreateSequence()
- DropSequence()
-
+
These support "on" and "execute-at()" just like plain DDL()
does. User-defined DDLElement subclasses can be created and
linked to a compiler using the sqlalchemy.ext.compiler extension.
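  As an illustrative sketch, the new constructs can be executed
  directly against an engine (mytable and my_unique_constraint are
  assumed to exist):

      from sqlalchemy.schema import CreateTable, AddConstraint

      engine.execute(CreateTable(mytable))
      engine.execute(AddConstraint(my_unique_constraint))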
- The signature of the "on" callable passed to DDL() and
DDLElement() is revised as follows:
-
+
"ddl" - the DDLElement object itself.
"event" - the string event name.
"target" - previously "schema_item", the Table or
- the setuptools entrypoint for external dialects is now
called "sqlalchemy.dialects".
-
+
- the "owner" keyword argument is removed from Table. Use
"schema" to represent any namespaces to be prepended to
the table name.
- cached TypeEngine classes are cached per-dialect class
instead of per-dialect.
-
+
- new UserDefinedType should be used as a base class for
new types, which preserves the 0.5 behavior of
get_col_spec().
-
+
- The result_processor() method of all type classes now
accepts a second argument "coltype", which is the DBAPI
type argument from cursor.description. This argument
can help some types decide on the most efficient processing
of result values.
-
+
- Deprecated Dialect.get_params() removed.
- Dialect.get_rowcount() has been renamed to a descriptor
ExecutionContext. Dialects which support sequences should
add a `fire_sequence()` method to their execution context
implementation. [ticket:1566]
-
+
- Functions and operators generated by the compiler now use
(almost) regular dispatch functions of the form
"visit_<opname>" and "visit_<funcname>_fn" to provide
- postgresql
- New dialects: pg8000, zxjdbc, and pypostgresql
on py3k.
-
+
- The "postgres" dialect is now named "postgresql" !
Connection strings look like:
the older "postgres_returning" and
"postgres_where" names still work with a
deprecation warning.
-
+
- "postgresql_where" now accepts SQL expressions which
can also include literals, which will be quoted as needed.
-
+
- The psycopg2 dialect now uses psycopg2's "unicode extension"
on all new connections, which allows all String/Text/etc.
types to skip the need to post-process bytestrings into
unicode (an expensive step due to its volume). Other
dialects which return unicode natively (pg8000, zxjdbc)
also skip unicode post-processing.
-
+
- Added new ENUM type, which exists as a schema-level
construct and extends the generic Enum type. Automatically
associates itself with tables and their parent metadata
to issue the appropriate CREATE TYPE/DROP TYPE
commands as needed, supports unicode labels, supports
reflection. [ticket:1511]
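  A small sketch of the schema-level type in use, assuming an
  existing engine:

      from sqlalchemy import Table, Column, MetaData
      from sqlalchemy.dialects.postgresql import ENUM

      metadata = MetaData()
      status_type = ENUM("pending", "complete", name="job_status")
      jobs = Table("jobs", metadata, Column("status", status_type))

      # emits CREATE TYPE job_status before CREATE TABLE jobs;
      # drop_all() emits the corresponding DROP TYPE afterwards
      metadata.create_all(engine)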
-
+
- INTERVAL supports an optional "precision" argument
corresponding to the argument that PG accepts.
%(foobar)s however and SQLA doesn't want to add overhead
just to treat that one non-existent use case.
[ticket:1279]
-
+
- Inserting NULL into a primary key + foreign key column
will allow the "not null constraint" error to raise,
not an attempt to execute a nonexistent "col_id_seq"
sequence. [ticket:1516]
-
+
- autoincrement SELECT statements, i.e. those which
select from a procedure that modifies rows, now work
with server-side cursor mode (the named cursor isn't
used for such statements.)
-
+
- postgresql dialect can properly detect pg "devel" version
strings, i.e. "8.5devel" [ticket:1636]
used for the statement. If false, they will not be used, even
if "server_side_cursors" is true on the
connection. [ticket:1619]
-
+
- mysql
- New dialects: oursql, a new native dialect,
MySQL Connector/Python, a native Python port of MySQLdb,
and of course zxjdbc on Jython.
-
+
- VARCHAR/NVARCHAR will not render without a length, raises
an error before passing to MySQL. Doesn't impact
CAST since VARCHAR is not allowed in MySQL CAST anyway,
the dialect renders CHAR/NCHAR in those cases.
-
+
- all the _detect_XXX() functions now run once underneath
dialect.initialize()
MySQLdb can't handle % signs in SQL when executemany() is used,
and SQLA doesn't want to add overhead just to treat that one
non-existent use case. [ticket:1279]
-
+
- the BINARY and MSBinary types now generate "BINARY" in all
cases. Omitting the "length" parameter will generate
"BINARY" with no length. Use BLOB to generate an unlengthed
binary column.
-
+
- the "quoting='quoted'" argument to MSEnum/ENUM is deprecated.
It's best to rely upon the automatic quoting.
-
+
- ENUM now subclasses the new generic Enum type, and also handles
unicode values implicitly, if the given labelnames are unicode
objects.
-
+
- a column of type TIMESTAMP now defaults to NULL if
"nullable=False" is not passed to Column(), and no default
is present. This is now consistent with all other types,
and in the case of TIMESTAMP explicitly renders "NULL"
due to MySQL's "switching" of default nullability
for TIMESTAMP columns. [ticket:1539]
-
+
- oracle
- unit tests pass 100% with cx_oracle !
later of cx_oracle.
- an NCLOB type is added to the base types.
-
+
- use_ansi=False won't leak into the FROM/WHERE clause of
a statement that's selecting from a subquery that also
uses JOIN/OUTERJOIN.
-
+
- added native INTERVAL type to the dialect. This supports
only the DAY TO SECOND interval type so far due to lack
of support in cx_oracle for YEAR TO MONTH. [ticket:1467]
-
+
- usage of the CHAR type results in cx_oracle's
FIXED_CHAR dbapi type being bound to statements.
-
+
- the Oracle dialect now features NUMBER which intends
to act just like Oracle's NUMBER type. It is the primary
numeric type returned by table reflection and attempts
to return Decimal()/float/int based on the precision/scale
parameters. [ticket:885]
-
+
- func.char_length is a generic function for LENGTH
- ForeignKey() which includes onupdate=<value> will emit a
- using new dialect.initialize() feature to set up
version-dependent behavior.
-
+
- using types.BigInteger with Oracle will generate
NUMBER(19) [ticket:1125]
-
+
- "case sensitivity" feature will detect an all-lowercase
case-sensitive column name during reflect and add
"quote=True" to the generated Column, so that proper
quoting is maintained.
-
+
- firebird
- the keys() method of RowProxy() now returns the result
column names *normalized* to be SQLAlchemy case
applies the SQLite keyword "AUTOINCREMENT" to the single integer
primary key column when generating DDL. Will prevent generation of
a separate PRIMARY KEY constraint. [ticket:1016]
-
+
- new dialects
- postgresql+pg8000
- postgresql+pypostgresql (partial)
parameters. In particular, Numeric, Float, NUMERIC,
FLOAT, DECIMAL don't generate any length or scale unless
specified.
-
+
- types.Binary is renamed to types.LargeBinary, it only
- produces BLOB, BYTEA, or a similar "long binary" type.
+ produces BLOB, BYTEA, or a similar "long binary" type.
New base BINARY and VARBINARY
types have been added to access these MySQL/MS-SQL specific
types in an agnostic way [ticket:1664].
-
+
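  A short sketch of the distinction (table and column names hypothetical):

      from sqlalchemy import Table, Column, MetaData, LargeBinary
      from sqlalchemy.types import BINARY, VARBINARY

      t = Table('attachments', MetaData(),
          Column('payload', LargeBinary),     # BLOB / BYTEA / etc.
          Column('digest', BINARY(20)),       # fixed-width binary
          Column('preview', VARBINARY(200)),  # variable-width binary
      )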
- String/Text/Unicode types now skip the unicode() check
on each result column value if the dialect has
detected the DBAPI as returning Python unicode objects
Time, Date and DateTime on Sqlite, ARRAY on Postgresql,
Time on MySQL, Numeric(as_decimal=False) on MySQL, oursql and
pypostgresql, DateTime on cx_oracle and LOB-based types on cx_oracle.
-
+
- Reflection of types now returns the exact UPPERCASE
type within types.py, or the UPPERCASE type within
the dialect itself if the type is not a standard SQL
type. This means reflection now returns more accurate
- information about reflected types.
-
+ information about reflected types.
+
- Added a new Enum generic type. Enum is a schema-aware object
to support databases which require specific DDL in order to
use enum or equivalent; in the case of PG it handles the
native enum support will generate VARCHAR + an inline CHECK
constraint to enforce the enum.
[ticket:1109] [ticket:1511]
-
+
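  A brief usage sketch (names hypothetical); on PG this emits CREATE TYPE,
  elsewhere a VARCHAR plus CHECK constraint:

      from sqlalchemy import Table, Column, MetaData, Enum

      tickets = Table('tickets', MetaData(),
          Column('status', Enum('new', 'open', 'closed', name='ticket_status'))
      )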
- The Interval type includes a "native" flag which controls
if native INTERVAL types (postgresql + oracle) are selected
if available, or not. "day_precision" and "second_precision"
arguments are also added which propagate as appropriately
to these native types. Related to [ticket:1467].
-
+
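  For example (a sketch; the column name is hypothetical):

      from sqlalchemy import Column, Interval

      # native=True selects the native INTERVAL types on PG/Oracle when
      # available, falling back to the epoch-based emulation elsewhere
      duration = Column('duration', Interval(native=True, second_precision=6))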
- The Boolean type, when used on a backend that doesn't
have native boolean support, will generate a CHECK
constraint "col IN (0, 1)" along with the int/smallint-
Note that MySQL has no native boolean *or* CHECK constraint
support so this feature isn't available on that platform.
[ticket:1589]
-
+
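  Illustration (table name hypothetical):

      from sqlalchemy import Table, Column, Boolean, MetaData

      flags = Table('flags', MetaData(), Column('is_active', Boolean))
      # CREATE TABLE on a non-native-boolean backend emits a small integer
      # column plus: CHECK (is_active IN (0, 1))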
- PickleType now uses == for comparison of values when
mutable=True, unless the "comparator" argument with a
comparison function is specified to the type. Objects
being pickled will be compared based on identity (which
defeats the purpose of mutable=True) if __eq__() is not
overridden or a comparison function is not provided.
-
+
- The default "precision" and "scale" arguments of Numeric
and Float have been removed and now default to None.
NUMERIC and FLOAT will be rendered with no numeric
- AbstractType.get_search_list() is removed - the games
it was used for are no longer necessary.
-
+
- Added a generic BigInteger type, compiles to
BIGINT or NUMBER(19). [ticket:1125]
- sqlsoup db.<sometable>.update() and delete() now call
query(cls).update() and delete(), respectively.
-
+
- sqlsoup now has execute() and connection(), which call upon
the Session methods of those names, ensuring that the bind is
in terms of the SqlSoup object's bind.
-
+
- sqlsoup objects no longer have the 'query' attribute - it's
not needed for sqlsoup's usage paradigm and it gets in the
way of a column that is actually named 'query'.
-
+
- The signature of the proxy_factory callable passed to
association_proxy is now (lazy_collection, creator,
value_attr, association_proxy), adding a fourth argument
- association_proxy now has basic comparator methods .any(),
.has(), .contains(), ==, !=, thanks to Scott Torborg.
[ticket:1372]
-
+
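  A small sketch of the comparator usage (the "User.keywords" proxy is
  hypothetical):

      # assuming User.keywords is an association_proxy over a collection
      # of scalar keyword strings
      session.query(User).filter(User.keywords.contains('sqlalchemy'))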
- examples
- The "query_cache" examples have been removed, and are replaced
with a fully comprehensive approach that combines the usage of
the caching characteristics of a particular Query, which
can also be invoked deep within an object graph when lazily
loading related objects. See /examples/beaker_caching/README.
-
+
0.5.9
=====
- sql
- Fixed erroneous self_group() call in expression package.
[ticket:1661]
-
+
0.5.8
=====
- sql
unnamed Column objects. This allows easy creation of
declarative helpers which place common columns on multiple
subclasses.
-
+
- Default generators like Sequence() translate correctly
across a copy() operation.
-
+
- Sequence() and other DefaultGenerator objects are accepted
as the value for the "default" and "onupdate" keyword
arguments of Column, in addition to being accepted
- positionally.
-
+ positionally.
+
- Fixed a column arithmetic bug that affected column
correspondence for cloned selectables which contain
free-standing column expressions. This bug is
ORM behavior only available in 0.6 via [ticket:1568],
but is more correct at the SQL expression level
as well. [ticket:1617]
-
+
- postgresql
- The extract() function, which was slightly improved in
0.5.7, needed a lot more work to generate the correct
- contains_eager() now works with the automatically
generated subquery that results when you say
"query(Parent).join(Parent.somejoinedsubclass)", i.e.
- when Parent joins to a joined-table-inheritance subclass.
+ when Parent joins to a joined-table-inheritance subclass.
Previously contains_eager() would erroneously add the
subclass table to the query separately producing a
cartesian product. An example is in the ticket
description. [ticket:1543]
-
+
- query.options() now only propagate to loaded objects
for potential further sub-loads only for options where
such behavior is relevant, keeping
various unserializable options like those generated
by contains_eager() out of individual instance states.
[ticket:1553]
-
+
- Session.execute() now locates table- and
mapper-specific binds based on a passed
in expression which is an insert()/update()/delete()
construct. [ticket:1054]
-
+
- Session.merge() now properly overwrites a many-to-one or
uselist=False attribute to None if the attribute
is also None in the given object to be merged.
-
+
- Fixed a needless select which would occur when merging
transient objects that contained a null primary key
identifier. [ticket:1618]
duplicate extensions, such as backref populators,
from being inserted into the list.
[ticket:1585]
-
+
- Fixed the call to get_committed_value() on CompositeProperty.
[ticket:1504]
- Fixed bug where Query would crash if a join() with no clear
"left" side were called when a non-mapped column entity
appeared in the columns list. [ticket:1602]
-
+
- Fixed bug whereby composite columns wouldn't load properly
when configured on a joined-table subclass, introduced in
version 0.5.6 as a result of the fix for [ticket:1480].
combinations of reflected and non-reflected types to work
with 0.5 style type reflection, such as PGText/Text (note 0.6
reflects types as their generic versions). [ticket:1556]
-
+
- Fixed bug in query.update() when passing Cls.attribute
as keys in the value dict and using synchronize_session='expire'
('fetch' in 0.6). [ticket:1436]
-
+
- sql
- Fixed bug in two-phase transaction whereby commit() method
didn't set the full state which allows subsequent close()
call to succeed. [ticket:1603]
-
+
- Fixed the "numeric" paramstyle, which apparently is the
default paramstyle used by Informixdb.
-
+
- Repeat expressions in the columns clause of a select
are deduped based on the identity of each clause element,
not the actual string. This allows positional
elements to render correctly even if they all render
identically, such as "qmark" style bind parameters.
[ticket:1574]
-
+
- The cursor associated with connection pool connections
(i.e. _CursorFairy) now proxies `__iter__()` to the
underlying cursor correctly. [ticket:1632]
- Fixed bug preventing alias() of an alias() from being
cloned or adapted (occurs frequently in ORM operations).
[ticket:1641]
-
+
- sqlite
- sqlite dialect properly generates CREATE INDEX for a table
that is in an alternate schema. [ticket:1439]
-
+
- postgresql
- Added support for reflecting the DOUBLE PRECISION type,
- via a new postgres.PGDoublePrecision object.
+ via a new postgres.PGDoublePrecision object.
This is postgresql.DOUBLE_PRECISION in 0.6.
[ticket:1085]
- Fixed the behavior of extract() to apply operator
precedence rules to the "::" operator when applying
- the "timestamp" cast - ensures proper parenthesization.
+ the "timestamp" cast - ensures proper parenthesization.
[ticket:1611]
- mssql
table generated by Oracle when "index only tables"
with overflow are used. These tables aren't accessible
via SQL and can't be reflected. [ticket:1637]
-
+
- ext
- A column can be added to a joined-table declarative
superclass after the class has been constructed
Comparing equivalence of columns in the ORM is best
accomplished using col1.shares_lineage(col2).
[ticket:1491]
-
+
- Removed unused `load()` method from ShardedQuery.
[ticket:1606]
-
+
0.5.6
=====
- orm
- Fixed bug which disallowed one side of a many-to-many
bidirectional reference to declare itself as "viewonly"
[ticket:1507]
-
+
- Added an assertion that prevents a @validates function
or other AttributeExtension from loading an unloaded
collection such that internal state may be corrupted.
[ticket:1526]
-
+
- Fixed bug which prevented two entities from mutually
replacing each other's primary key values within a single
flush() for some orderings of operations. [ticket:1519]
-
+
- Fixed an obscure issue whereby a joined-table subclass
with a self-referential eager load on the base class
would populate the related object's "subclass" table with
data from the "subclass" table of the parent.
[ticket:1485]
-
+
- relations() now have greater ability to be "overridden",
meaning a subclass that explicitly specifies a relation()
overriding that of the parent class will be honored
during a flush. This is currently to support
many-to-many relations from concrete inheritance setups.
Outside of that use case, YMMV. [ticket:1477]
-
+
- Squeezed a few more unnecessary "lazy loads" out of
relation(). When a collection is mutated, many-to-one
backrefs on the other side will not fire off to load
- the "old" value, unless "single_parent=True" is set.
+ the "old" value, unless "single_parent=True" is set.
A direct assignment of a many-to-one still loads
the "old" value in order to update backref collections
on that value, which may be present in the session
already, thus maintaining the 0.5 behavioral contract.
[ticket:1483]
-
+
- Fixed bug whereby a load/refresh of joined table
inheritance attributes which were based on
column_property() or similar would fail to evaluate.
[ticket:1480]
-
+
- Improved support for MapperProperty objects overriding
that of an inherited mapper for non-concrete
inheritance setups - attribute extensions won't randomly
collide with each other. [ticket:1488]
-
+
- UPDATE and DELETE do not support ORDER BY, LIMIT, OFFSET,
etc. in standard SQL. Query.update() and Query.delete()
now raise an exception if any of limit(), offset(),
order_by(), group_by(), or distinct() have been
called. [ticket:1487]
-
+
- Added AttributeExtension to sqlalchemy.orm.__all__
-
+
- Improved error message when query() is called with
a non-SQL/entity expression. [ticket:1476]
-
+
- Using False or 0 as a polymorphic discriminator now
works on the base class as well as a subclass.
[ticket:1440]
in query.join() which would fail to issue correctly
if the query was against a pure SQL construct.
[ticket:1522]
-
+
- Fixed a somewhat hypothetical issue which would result
in the wrong primary key being calculated for a mapper
using the old polymorphic_union function - but this
is old stuff. [ticket:1486]
-
+
- sql
- Fixed column.copy() to copy defaults and onupdates.
[ticket:1373]
the string "field" argument was getting treated as a
ClauseElement, causing various errors within more
complex SQL transformations.
-
+
- Unary expressions such as DISTINCT propagate their
type handling to result sets, allowing conversions like
unicode and such to take place. [ticket:1420]
-
+
- Fixed bug in Table and Column whereby passing empty
dict for "info" argument would raise an exception.
[ticket:1482]
- oracle
- Backported 0.6 fix for Oracle alias names not getting
truncated. [ticket:1309]
-
+
- ext
- The collection proxies produced by associationproxy are now
pickleable. A user-defined proxy_factory however
in string expressions sent to primaryjoin/secondaryjoin/
secondary - the name is pulled from the MetaData of the
declarative base. [ticket:1527]
-
+
- A column can be added to a joined-table subclass after
the class has been constructed (i.e. via class-level
attribute assignment). The column is added to the underlying
"join" to include the new column, instead of raising
an error about "no such column, use column_property()
instead". [ticket:1523]
-
+
- test
- Added examples into the test suite so they get exercised
regularly and cleaned up a couple deprecation warnings.
-
+
0.5.5
=======
- general
- sql
- Repaired the printing of SQL exceptions which are not
based on parameters or are not executemany() style.
-
+
- postgresql
- Deprecated the hardcoded TIMESTAMP function, which when
used as func.TIMESTAMP(value) would render "TIMESTAMP value".
uppercase is also inappropriate and there's lots of other
PG casts that we'd need to support. So instead, use
text constructs i.e. select(["timestamp '12/05/09'"]).
-
-
+
+
0.5.4p1
=======
- orm
- Fixed an attribute error introduced in 0.5.4 which would
occur when merge() was used with an incomplete object.
-
+
0.5.4
=====
- Significant performance enhancements regarding Sessions/flush()
in conjunction with large mapper graphs, large numbers of
objects:
-
+
- Removed all* O(N) scanning behavior from the flush() process,
i.e. operations that were scanning the full session,
including an extremely expensive one that was erroneously
assuming primary key values were changing when this
was not the case.
-
+
* one edge case remains which may invoke a full scan,
if an existing primary key attribute is modified
to a new value.
-
+
- The Session's "weak referencing" behavior is now *full* -
no strong references whatsoever are made to a mapped object
or related items/collections in its __dict__. Backrefs and
other cycles in objects no longer affect the Session's ability
to lose all references to unmodified objects. Objects with
- pending changes still are maintained strongly until flush.
+ pending changes still are maintained strongly until flush.
[ticket:1398]
-
+
The implementation also improves performance by moving
the "resurrection" process of garbage collected items
to only be relevant for mappings that map "mutable"
attributes (i.e. PickleType, composite attrs). This removes
overhead from the gc process and simplifies internal
behavior.
-
+
If a "mutable" attribute change is the sole change on an object
which is then dereferenced, the mapper will not have access to
other attribute state when the UPDATE is issued. This may present
itself differently to some MapperExtensions.
-
+
The change also affects the internal attribute API, but not
the AttributeExtension interface nor any of the publicly
documented attribute functions.
-
+
- The unit of work no longer generates a graph of "dependency"
processors for the full graph of mappers during flush(), instead
creating such processors only for those mappers which represent
objects with pending changes. This saves a tremendous number
of method calls in the context of a large interconnected
graph of mappers.
-
+
- Cached a wasteful "table sort" operation that previously
occurred multiple times per flush, also removing significant
method call count from flush().
-
+
- Other redundant behaviors have been simplified in
mapper._save_obj().
-
+
- Modified query_cls on DynamicAttributeImpl to accept a full
mixin version of the AppenderQuery, which allows subclassing
the AppenderMixin.
- The "polymorphic discriminator" column may be part of a
primary key, and it will be populated with the correct
discriminator value. [ticket:1300]
-
+
- Fixed the evaluator not being able to evaluate IS NULL clauses.
- Fixed the "set collection" function on "dynamic" relations to
be assigned to a pending parent instance, otherwise modified
events would not be fired correctly. Set collection is now
compatible with merge(), fixes [ticket:1352].
-
+
- Allowed pickling of PropertyOption objects constructed with
instrumented descriptors; previously, pickle errors would occur
when pickling an object which was loaded with a descriptor-based
option, such as query.options(eagerload(MyClass.foo)).
-
+
- Lazy loader will not use get() if the "lazy load" SQL clause
matches the clause used by get(), but contains some parameters
hardcoded. Previously the lazy strategy would fail with the
the need in most cases for per-instance/attribute loader
objects, improving load speed and memory overhead for
individual instances. [ticket:1391]
-
+
- Fixed another location where autoflush was interfering
with session.merge(). autoflush is disabled completely
for the duration of merge() now. [ticket:1360]
-
+
- Fixed bug which prevented "mutable primary key" dependency
logic from functioning properly on a one-to-one
relation(). [ticket:1406]
-
+
- Fixed bug in relation(), introduced in 0.5.3,
whereby a self referential relation
from a base class to a joined-table subclass would
- Fixed obscure mapper compilation issue when inheriting
mappers are used which would result in un-initialized
attributes.
-
+
- Fixed documentation for session weak_identity_map -
the default value is True, indicating a weak
referencing map in use.
-
+
- Fixed a unit of work issue whereby the foreign
key attribute on an item contained within a collection
owned by an object being deleted would not be set to
condition in the foreign_keys or remote_side collection. Whereas
previously it was just nonsensical, but would succeed in a
non-deterministic way.
-
+
- schema
- Added a quote_schema() method to the IdentifierPreparer class
so that dialects can override how schemas get handled. This
handy as an alternative to text() when you'd like to
build a construct that has database-specific compilations.
See the extension docs for details.
-
+
- Exception messages are truncated when the list of bound
parameters is larger than 10, preventing enormous
multi-page exceptions from filling up screens and logfiles
for large executemany() statements. [ticket:1413]
-
+
- ``sqlalchemy.extract()`` is now dialect sensitive and can
extract components of timestamps idiomatically across the
supported databases, including SQLite.
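  For example (hypothetical "events" table):

      from sqlalchemy import extract, select

      stmt = select([extract('year', events.c.created_at)])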
- Reflecting a FOREIGN KEY construct will take into account
a dotted schema.tablename combination, if the foreign key
references a table in a remote schema. [ticket:1405]
-
+
- mssql
- Modified how savepoint logic works to prevent it from
stepping on non-savepoint oriented routines. Savepoint
one side of the link and not the other, so supporting
this operation leads to misleading results.
[ticket:1315]
-
+
- Query now implements __clause_element__() which produces
its selectable, which means a Query instance can be accepted
in many SQL expressions, including col.in_(query),
- Query.join() can now construct multiple FROM clauses, if
needed. Such as, query(A, B).join(A.x).join(B.y)
- might say SELECT A.*, B.* FROM A JOIN X, B JOIN Y.
+ might say SELECT A.*, B.* FROM A JOIN X, B JOIN Y.
Eager loading can also tack its joins onto those
multiple FROM clauses. [ticket:1337]
- Fixed bug in dynamic_loader() where append/remove events
after construction time were not being propagated to the
UOW to pick up on flush(). [ticket:1347]
-
+
- Fixed bug where column_prefix wasn't being checked before
not mapping an attribute that already had class-level
name present.
in the database. Presents some degree of a workaround for
[ticket:1315], although we are considering removing the
flush([objects]) feature altogether.
-
+
- Session.scalar() now converts raw SQL strings to text()
the same way Session.execute() does and accepts same
alternative **kw args.
-
+
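  e.g. a sketch:

      count = session.scalar("SELECT count(*) FROM users WHERE name = :n",
                             {'n': 'jack'})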
- improvements to the "determine direction" logic of
relation() such that the direction of tricky situations
like mapper(A.join(B)) -> relation-> mapper(B) can be
determined.
-
+
- When flushing partial sets of objects using session.flush([somelist]),
pending objects which remain pending after the operation won't
inadvertently be added as persistent. [ticket:1306]
-
+
- Added "post_configure_attribute" method to InstrumentationManager,
so that the "listen_for_events.py" example works again.
[ticket:1314]
-
+
- a forward and complementing backwards reference which are both
of the same direction, i.e. ONETOMANY or MANYTOONE,
- is now detected, and an error message is raised.
+ is now detected, and an error message is raised.
Saves crazy CircularDependencyErrors later on.
-
+
- Fixed bugs in Query regarding simultaneous selection of
multiple joined-table inheritance entities with common base
classes:
-
+
- previously the adaption applied to "B" on
"A JOIN B" would be erroneously partially applied
to "A".
-
+
- comparisons on relations (i.e. A.related==someb)
were not getting adapted when they should.
-
+
- Other filterings, like
query(A).join(A.bs).filter(B.foo=='bar'), were erroneously
adapting "B.foo" as though it were an "A".
-
+
- Fixed adaptation of EXISTS clauses via any(), has(), etc.
in conjunction with an aliased object on the left and
of_type() on the right. [ticket:1325]
-
+
- Added an attribute helper method ``set_committed_value`` in
sqlalchemy.orm.attributes. Given an object, attribute name,
and value, will set the value on the object as part of its
- Query won't fail with weakref error when a non-mapper/class
instrumented descriptor is passed, raises
"Invalid column expession".
-
+
- Query.group_by() properly takes into account aliasing applied
to the FROM clause, such as with select_from(), using
with_polymorphic(), or using from_self().
-
+
- sql
- An alias() of a select() will convert to a "scalar subquery"
when used in an unambiguously scalar context, i.e. it's used
in a comparison operation. This applies to
the ORM when using query.subquery() as well.
-
+
- Fixed missing _label attribute on Function object, others
when used in a select() with use_labels (such as when used
in an ORM column_property()). [ticket:1302]
- anonymous alias names now truncate down to the max length
allowed by the dialect. More significant on DBs like
Oracle with very small character limits. [ticket:1309]
-
+
- the __selectable__() interface has been replaced entirely
by __clause_element__().
close, will be detected so that no results doesn't
fail on recent versions of pysqlite which raise
an error when fetchone() called with no rows present.
-
+
- postgresql
- Index reflection won't fail when an index with
multiple expressions is encountered.
-
+
- Added PGUuid and PGBit types to
sqlalchemy.databases.postgres. [ticket:1327]
-
+
- Reflection of unknown PG types won't crash when those
types are specified within a domain. [ticket:1327]
- Declarative locates the "inherits" class using a search
through __bases__, to skip over mixins that are local
- to subclasses.
-
+ to subclasses.
+
- Declarative figures out joined-table inheritance primary join
condition even if "inherits" mapper argument is given
explicitly.
- Declarative will properly interpret the "foreign_keys" argument
on a backref() if it's a string.
-
+
- Declarative will accept a table-bound column as a property
when used in conjunction with __table__, if the column is already
present in __table__. The column will be remapped to the given
key the same way as when added to the mapper() properties dict.
-
+
0.5.2
======
fully establish instrumentation for subclasses where the mapper
was created after the superclass had already been fully
instrumented. [ticket:1292]
-
+
- Fixed bug in delete-orphan cascade whereby two one-to-one
relations from two different parent classes to the same target
class would prematurely expunge the instance.
loading would prevent other eager loads, self referential or not,
from joining to the parent JOIN properly. Thanks to Alex K
for creating a great test case.
-
+
- session.expire() and related methods will not expire() unloaded
deferred attributes. This prevents them from being needlessly
loaded when the instance is refreshed.
construct to the existing left side, even if query.from_self()
or query.select_from(someselectable) has been called.
[ticket:1293]
-
+
- sql
- Further fixes to the "percent signs and spaces in column/table
names" functionality. [ticket:1284]
Session methods have been deprecated, replaced by
"expunge_all()" and "add()". "expunge_all()" has also
been added to ScopedSession.
-
+
- Modernized the "no mapped table" exception and added a more
explicit __table__/__tablename__ exception to declarative.
- Test coverage added for `relation()` objects specified on
concrete mappers. [ticket:1237]
-
+
- Query.from_self() as well as query.subquery() both disable
the rendering of eager joins inside the subquery produced.
The "disable all eager joins" feature is available publically
via a new query.enable_eagerloads() generative. [ticket:1276]
-
+
- Added a rudimental series of set operations to Query that
receive Query objects as arguments, including union(),
union_all(), intersect(), except_(), intersect_all(),
- Fixed bug that prevented Query.join() and eagerloads from
attaching to a query that selected from a union or aliased union.
-
+
- A short documentation example added for bidirectional
relations specified on concrete mappers. [ticket:1237]
behavior with an m2m table, use an explicit association class
so that the individual association row is treated as a parent.
[ticket:1281]
-
+
- delete-orphan cascade always requires delete cascade. Specifying
delete-orphan without delete now raises a deprecation warning.
[ticket:1281]
-
+
- sql
- Improved the methodology to handling percent signs in column
names from [ticket:1256]. Added more tests. MySQL and
PostgreSQL dialects still do not issue correct CREATE TABLE
statements for identifiers with percent signs in them.
-
+
- schema
- Index now accepts column-oriented InstrumentedAttributes
(i.e. column-based mapped class attributes) as column
- Column with no name (as in declarative) won't raise a
NoneType error when its string output is requested
(such as in a stack trace).
-
+
- Fixed bug when overriding a Column with a ForeignKey
on a reflected table, where derived columns (i.e. the
"virtual" columns of a select, etc.) would inadvertently
call upon schema-level cleanup logic intended only
for the original column. [ticket:1278]
-
+
- declarative
- Can now specify Column objects on subclasses which have no
- table of their own (i.e. use single table inheritance).
+ table of their own (i.e. use single table inheritance).
The columns will be appended to the base table, but only
mapped by the subclass.
- It's an error to add new Column objects to a declarative class
that specified an existing table using __table__.
-
+
- mysql
- Added the missing keywords from MySQL 4.1 so they get escaped
properly.
- Added an example illustrating Celko's "nested sets" as a
SQLA mapping.
-
+
- contains_eager() with an alias argument works even when
the alias is embedded in a SELECT, as when sent to the
Query via query.select_from().
-
+
- contains_eager() usage is now compatible with a Query that
also contains a regular eager load and limit/offset, in that
the columns are added to the Query-generated subquery.
[ticket:1180]
-
+
- session.execute() will execute a Sequence object passed to
it (regression from 0.4).
-
+
- Removed the "raiseerror" keyword argument from object_mapper()
and class_mapper(). These functions raise in all cases
if the given class/instance is not mapped.
- Fixed session.transaction.commit() on an autocommit=False
session not starting a new transaction.
-
+
- Some adjustments to Session.identity_map's weak referencing
behavior to reduce asynchronous GC side effects.
-
+
- Adjustment to Session's post-flush accounting of newly
"clean" objects to better protect against operating on
objects as they're asynchronously gc'ed. [ticket:1182]
-
+
- sql
- column.in_(someselect) can now be used as a columns-clause
expression without the subquery bleeding into the FROM clause
[ticket:1074]
-
+
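  For instance (hypothetical tables):

      stmt = select([
          users.c.name,
          users.c.id.in_(select([addresses.c.user_id])).label('has_address')
      ])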
- sqlite
- Overhauled SQLite date/time bind/result processing to use
regular expressions and format strings, rather than
2.5.0's new requirement that only Python unicode objects are
accepted;
http://itsystementwicklung.de/pipermail/list-pysqlite/2008-March/000018.html
-
+
- mysql
- Temporary tables are now reflectable.
roughly equivalent to first()[0], value()
takes a single column expression and is roughly equivalent to
values(expr).next()[0].
-
+
- Improved the determination of the FROM clause when placing SQL
expressions in the query() list of entities. In particular
scalar subqueries should not "leak" their inner FROM objects
- query.order_by().get() silently drops the "ORDER BY" from
the query issued by get() but does not raise an exception.
-
+
- Added a Validator AttributeExtension, as well as a
@validates decorator which is used in a similar fashion
as @reconstructor, and marks a method as validating
one or more mapped attributes.
-
+
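  A short sketch (class and attribute names hypothetical):

      from sqlalchemy.orm import validates

      class EmailAddress(object):
          @validates('email')
          def validate_email(self, key, value):
              assert '@' in value
              return value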
- class.someprop.in_() raises NotImplementedError pending the
implementation of "in_" for relation [ticket:1140]
- Fixed bug whereby deferred() columns with a group in conjunction
with an otherwise unrelated synonym() would produce
an AttributeError during deferred load.
-
+
- The before_flush() hook on SessionExtension takes place before
the list of new/dirty/deleted is calculated for the final
time, allowing routines within before_flush() to further
optionally be a list, supporting events sent to multiple
SessionExtension instances. Session places SessionExtensions
in Session.extensions.
-
+
- Reentrant calls to flush() raise an error. This also serves
as a rudimentary, but not foolproof, check against concurrent
calls to Session.flush().
- The 3-tuple of iterables returned by attributes.get_history()
may now be a mix of lists and tuples. (Previously members
were always lists.)
-
+
- Fixed bug whereby changing a primary key attribute on an
entity where the attribute's previous value had been expired
would produce an error upon flush(). [ticket:1151]
-
+
- Fixed custom instrumentation bug whereby get_instance_dict()
was not called for newly constructed instances not loaded
by the ORM.
- Session.delete() adds the given object to the session if
not already present. This was a regression bug from 0.4.
[ticket:1150]
-
+
- The `echo_uow` flag on `Session` is deprecated, and unit-of-work
logging is now application-level only, not per-session level.
- Removed conflicting `contains()` operator from
`InstrumentedAttribute` which didn't accept `escape` kwarg
[ticket:1153].
-
+
- declarative
- Fixed bug whereby mapper couldn't initialize if a composite
primary key referenced another table that was not defined
yet. [ticket:1161]
-
+
- Fixed exception throw which would occur when string-based
primaryjoin condition was used in conjunction with backref.
-
+
- schema
- Added "sorted_tables" accessor to MetaData, which returns
Table objects sorted in order of dependency as a list.
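  e.g.:

      for table in metadata.sorted_tables:
          print table.name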
- The exists() construct won't "export" its contained list
of elements as FROM clauses, allowing them to be used more
effectively in the columns clause of a SELECT.
-
+
- and_() and or_() now generate a ColumnElement, allowing
boolean expressions as result columns, i.e.
select([and_(1, 0)]). [ticket:798]
-
+
- Bind params now subclass ColumnElement which allows them to be
selectable by orm.query (they already had most ColumnElement
semantics).
-
+
- Added select_from() method to exists() construct, which becomes
more and more compatible with a regular select().
-
+
- Added func.min(), func.max(), func.sum() as "generic functions",
which basically allows for their return type to be determined
automatically. Helps with dates on SQLite, decimal types,
others. [ticket:1160]
-
+
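  e.g. (hypothetical "orders" table):

      from sqlalchemy import func, select

      # return type is derived from orders.c.amount, so Numeric/Decimal
      # results round-trip correctly, even on SQLite
      stmt = select([func.sum(orders.c.amount)])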
- added decimal.Decimal as an "auto-detect" type; bind parameters
and generic functions will set their type to Numeric when a
Decimal is used.
-
+
- mysql
- The 'length' argument to MSInteger, MSBigInteger, MSTinyInteger,
MSSmallInteger and MSYear has been renamed to 'display_width'.
-
+
- Added MSMediumInteger type [ticket:1146].
-
+
- the function func.utc_timestamp() compiles to UTC_TIMESTAMP, without
the parentheses, which seem to get in the way when using in
conjunction with executemany().
install Distribute:
python3 distribute_setup.py
-
+
Installing SQLAlchemy in Python 3
---------------------------------
-Once Distribute is installed, SQLAlchemy can be installed directly.
+Once Distribute is installed, SQLAlchemy can be installed directly.
The 2to3 process will kick in which takes several minutes:
python3 setup.py install
A plain vanilla run of all tests using sqlite can be run via setup.py:
$ python setup.py test
-
+
(NOTE: this command is broken for Python 2.7 with nose 0.11.3, see
Nose issue 340. You will need to use 'nosetests' directly, see below.)
-
+
Setuptools will take care of the rest ! To run nose directly and have
its full set of options available, read on...
--dburi=postgresql://user:password@localhost/test
-Use an empty database and a database user with general DBA privileges.
+Use an empty database and a database user with general DBA privileges.
The test suite will be creating and dropping many tables and other DDL, and
preexisting tables will interfere with the tests.
Additional steps specific to individual databases are as follows:
ORACLE: a user named "test_schema" is created.
-
+
The primary database user needs to be able to create and drop tables,
synonyms, and constraints within the "test_schema" user. For this
to work fully, including that the user has the "REFERENCES" role
in a remote schema for tables not yet defined (REFERENCES is per-table),
it is required that the test user be present in the "DBA" role:
-
+
grant dba to scott;
-
+
SYBASE: Similar to Oracle, "test_schema" is created as a user, and the
primary test user needs to have the "sa_role".
the transaction log is full otherwise.
A full series of setup assuming sa/master:
-
+
disk init name="translog", physname="/opt/sybase/data/translog.dat", size="10M"
create database sqlalchemy on default log on translog="10M"
sp_dboption sqlalchemy, "trunc log on chkpt", true
will occur with record locking isolation. This feature is only available
with MSSQL 2005 and greater. You must enable snapshot isolation at the
database level and set the default cursor isolation with two SQL commands:
-
+
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
-
+
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON
MSSQL+zxJDBC: Trying to run the unit tests on Windows against SQL Server
class MakoBridge(TemplateBridge):
def init(self, builder, *args, **kw):
self.layout = builder.config.html_context.get('mako_layout', 'html')
-
+
self.lookup = TemplateLookup(directories=builder.config.templates_path,
format_exceptions=True,
imports=[
"from builder import util"
]
)
-
+
def render(self, template, context):
template = template.replace(".html", ".mako")
context['prevtopic'] = context.pop('prev', None)
# sphinx 1.0b2 doesn't seem to be providing _ for some reason...
context.setdefault('_', lambda x:x)
return self.lookup.get_template(template).render_unicode(**context)
-
-
+
+
def render_string(self, template, context):
context['prevtopic'] = context.pop('prev', None)
context['nexttopic'] = context.pop('next', None)
"from builder import util"
]
).render_unicode(**context)
-
+
class StripDocTestFilter(Filter):
def filter(self, lexer, stream):
for ttype, value in stream:
buf[-1] = (buf[-1][0], buf[-1][1].rstrip())
for t, v in buf:
yield t, v
-
+
class PopupSQLFormatter(HtmlFormatter):
def _format_lines(self, tokensource):
buf = []
yield 1, "<div class='popup_sql'>%s</div>" % re.sub(r'(?:[{stop}|\n]*)$', '', value)
else:
buf.append((ttype, value))
-
+
for t, v in _strip_trailing_whitespace(HtmlFormatter._format_lines(self, iter(buf))):
yield t, v
continue
else:
yield ttype, value
-
+
def format(self, tokensource, outfile):
LatexFormatter.format(self, self._filter_tokens(tokensource), outfile)
app.connect('autodoc-skip-member', autodoc_skip_member)
PygmentsBridge.html_formatter = PopupSQLFormatter
PygmentsBridge.latex_formatter = PopupLatexFormatter
-
-
\ No newline at end of file
+
def go(m):
# .html with no anchor if present, otherwise "#" for top of page
return m.group(1) or '#'
-
+
def strip_toplevel_anchors(text):
return re.compile(r'(\.html)?#[-\w]+-toplevel').sub(go, text)
-
+
the :func:`.create_engine` call::
engine = create_engine('mysql://scott:tiger@localhost/test')
-
+
The typical usage of :func:`.create_engine()` is once per particular database
URL, held globally for the lifetime of a single application process. A single
:class:`.Engine` manages many individual DBAPI connections on behalf of the
instructed to close out its resources explicitly::
result.close()
-
+
If the :class:`.ResultProxy` has pending rows remaining and is dereferenced by
the application without being closed, Python garbage collection will
ultimately close out the cursor as well as trigger a return of the pooled
.. autoclass:: sqlalchemy.engine.base.ResultProxy
:members:
-
+
.. autoclass:: sqlalchemy.engine.base.RowProxy
:members:
any :class:`.Executable` construct, which is a marker for SQL expression objects
that support execution. The SQL expression object itself references an
:class:`.Engine` or :class:`.Connection` known as the **bind**, which it uses
-in order to provide so-called "implicit" execution services.
+in order to provide so-called "implicit" execution services.
Given a table as below::
The table below summarizes the state of DBAPI support in SQLAlchemy 0.6. The values
translate as:
-* yes / Python platform - The SQLAlchemy dialect is mostly or fully operational on the target platform.
+* yes / Python platform - The SQLAlchemy dialect is mostly or fully operational on the target platform.
* yes / OS platform - The DBAPI supports that platform.
-* no / Python platform - The DBAPI does not support that platform, or there is no SQLAlchemy dialect support.
+* no / Python platform - The DBAPI does not support that platform, or there is no SQLAlchemy dialect support.
* no / OS platform - The DBAPI does not support that platform.
* partial - the DBAPI is partially usable on the target platform but has major unresolved issues.
* development - a development version of the dialect exists, but is not yet usable.
* ``sqlalchemy.engine`` - controls SQL echoing. set to ``logging.INFO`` for SQL query output, ``logging.DEBUG`` for query + result set output.
* ``sqlalchemy.dialects`` - controls custom logging for SQL dialects. See the documentation of individual dialects for details.
* ``sqlalchemy.pool`` - controls connection pool logging. set to ``logging.INFO`` or lower to log connection pool checkouts/checkins.
-* ``sqlalchemy.orm`` - controls logging of various ORM functions. set to ``logging.INFO`` for information on mapper configurations.
+* ``sqlalchemy.orm`` - controls logging of various ORM functions. set to ``logging.INFO`` for information on mapper configurations.
For example, to log SQL queries using Python logging instead of the ``echo=True`` flag::
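    # minimal sketch: route the "sqlalchemy.engine" logger through the
    # standard logging module rather than using echo=True
    import logging

    logging.basicConfig()
    logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)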
.. autofunction:: extract
.. attribute:: func
-
+
Generate SQL function expressions.
-
+
``func`` is a special object instance which generates SQL functions based on name-based attributes, e.g.::
-
+
>>> print func.count(1)
count(:param_1)
SQLAlchemy, it will be rendered exactly as is. For common SQL functions
which SQLAlchemy is aware of, the name may be interpreted as a *generic
function* which will be compiled appropriately to the target database::
-
+
>>> print func.current_timestamp()
CURRENT_TIMESTAMP
-
+
To call functions which are present in dot-separated packages, specify them in the same manner::
-
+
>>> print func.stats.yield_curve(5, 10)
stats.yield_curve(:yield_curve_1, :yield_curve_2)
-
+
SQLAlchemy can be made aware of the return type of functions to enable
type-specific lexical and result-based behavior. For example, to ensure
that a string-based function returns a Unicode value and is similarly
treated as a string in expressions, specify
:class:`~sqlalchemy.types.Unicode` as the type:
-
+
>>> print func.my_string(u'hi', type_=Unicode) + ' ' + \
... func.my_string(u'there', type_=Unicode)
my_string(:my_string_1) || :my_string_2 || my_string(:my_string_3)
-
+
Functions which are interpreted as "generic" functions know how to
calculate their return type automatically. For a listing of known generic
functions, see :ref:`generic_functions`.
-
+
.. autofunction:: insert
.. autofunction:: intersect
.. autoclass:: Function
:members:
:show-inheritance:
-
+
.. autoclass:: FromClause
:members:
:show-inheritance:
SQL functions which are known to SQLAlchemy with regards to database-specific
rendering, return types and argument behavior. Generic functions are invoked
like all SQL functions, using the :attr:`func` attribute::
-
+
select([func.count()]).select_from(sometable)
-
+
Note that any name not known to :attr:`func` generates the function name as is
- there is no restriction on what SQL functions can be called, known or
unknown to SQLAlchemy, built-in or user defined. The section here only
describes those functions where SQLAlchemy already knows what argument and
return types are in use.
-
+
.. automodule:: sqlalchemy.sql.functions
:members:
:undoc-members:
:show-inheritance:
-
-
+
+
.. toctree::
:maxdepth: 2
-
+
tutorial
expression_api
engines
exceptions
compiler
serializer
-
-
\ No newline at end of file
+
# insert from a function
users.insert().values(id=12, name=func.upper('jack'))
-
+
# insert from a concatenation expression
addresses.insert().values(email_address = name + '@' + host)
users.insert().values(name=func.upper('jack')),
fullname='Jack Jones'
)
-
+
:func:`~sqlalchemy.sql.expression.bindparam` constructs can be passed, however the names of the table's columns are reserved for the "automatic" generation of bind names::
users.insert().values(id=bindparam('_id'), name=bindparam('_name'))
{'_id':3, '_name':'name3'},
]
)
-
+
Updates work a lot like INSERTS, except there is an additional WHERE clause that can be specified:
.. sourcecode:: pycon+sql
from sqlalchemy.types import TypeDecorator, Numeric
from decimal import Decimal
-
+
class SafeNumeric(TypeDecorator):
"""Adds quantization to Numeric."""
-
+
impl = Numeric
-
+
def __init__(self, *arg, **kw):
TypeDecorator.__init__(self, *arg, **kw)
self.quantize_int = -(self.impl.precision - self.impl.scale)
self.quantize = Decimal(10) ** self.quantize_int
-
+
def process_bind_param(self, value, dialect):
if isinstance(value, Decimal) and \
value.as_tuple()[2] < self.quantize_int:
class GUID(TypeDecorator):
"""Platform-independent GUID type.
-
+
Uses Postgresql's UUID type, otherwise uses
CHAR(32), storing as stringified hex values.
-
+
"""
impl = CHAR
Note that the base type is not "mutable", meaning in-place changes to
the value will not be detected by the ORM - you instead would need to
-replace the existing value with a new one to detect changes.
+replace the existing value with a new one to detect changes.
The subtype ``MutableJSONEncodedDict``
adds "mutability" to allow this, but note that "mutable" types add
a significant performance penalty to the ORM's flush process::
class JSONEncodedDict(TypeDecorator):
"""Represents an immutable structure as a json-encoded string.
-
+
Usage::
-
+
JSONEncodedDict(255)
-
+
"""
impl = VARCHAR
if value is not None:
value = simplejson.loads(value, use_decimal=True)
return value
-
+
class MutableJSONEncodedDict(MutableType, JSONEncodedDict):
"""Adds mutability to JSONEncodedDict."""
-
+
def copy_value(self, value):
return simplejson.loads(
simplejson.dumps(value, use_decimal=True),
.. toctree::
:maxdepth: 2
-
+
intro
orm/index
core/index
dialects/index
-
+
Indices and tables
------------------
Documentation Overview
======================
-The documentation is separated into three sections: :ref:`orm_toplevel`, :ref:`core_toplevel`, and :ref:`dialect_toplevel`.
+The documentation is separated into three sections: :ref:`orm_toplevel`, :ref:`core_toplevel`, and :ref:`dialect_toplevel`.
In :ref:`orm_toplevel`, the Object Relational Mapper is introduced and fully
described. New users should begin with the :ref:`ormtutorial_toplevel`. If you
.. sourcecode:: none
# easy_install SQLAlchemy
-
+
Or with pip:
.. sourcecode:: none
Collection Configuration and Techniques
=======================================
-The :func:`.relationship` function defines a linkage between two classes.
+The :func:`.relationship` function defines a linkage between two classes.
When the linkage defines a one-to-many or many-to-many relationship, it's
represented as a Python collection when objects are loaded and manipulated.
This section presents additional information about collection configuration
from within an already instrumented call can cause events to be fired off
repeatedly, or inappropriately, leading to internal state corruption in
rare cases::
-
+
from sqlalchemy.orm.collections import MappedCollection,\
collection
class MyMappedCollection(MappedCollection):
"""Use @internally_instrumented when your methods
call down to already-instrumented methods.
-
+
"""
-
+
@collection.internally_instrumented
def __setitem__(self, key, value, _sa_initiator=None):
# do something with key, value
super(MyMappedCollection, self).__setitem__(key, value, _sa_initiator)
-
+
@collection.internally_instrumented
def __delitem__(self, key, _sa_initiator=None):
# do something with key
session.add(broker)
session.commit()
-
+
# lets take a peek at that holdings_table after committing changes to the db
print list(holdings_table.select().execute())
# [(1, 'ZZK', 10), (1, 'JEK', 123), (1, 'STEPZ', 123)]
.. autoclass:: ShardedSession
:members:
-
+
.. autoclass:: ShardedQuery
:members:
.. toctree::
:maxdepth: 2
-
+
tutorial
mapper_config
relationships
exceptions
extensions/index
examples
-
-
\ No newline at end of file
+
:meth:`.Query.select_from` methods::
session.query(Manager.manager_data).select_from(manager)
-
+
session.query(engineer.c.id).filter(engineer.c.engineer_info==manager.c.manager_data)
Creating Joins to Specific Subtypes
with_polymorphic=('*', pjoin),
polymorphic_on=pjoin.c.type,
polymorphic_identity='employee')
-
+
mapper(Manager, managers_table,
inherits=employee_mapper,
concrete=True,
polymorphic_identity='manager')
-
+
mapper(Engineer, engineers_table,
inherits=employee_mapper,
concrete=True,
polymorphic_identity='engineer')
-
+
mapper(Company, companies, properties={
'employees': relationship(Employee)
})
-----------------
To use :class:`.MapperExtension`, make your own subclass of it and just send it off to a mapper::
-
+
from sqlalchemy.orm.interfaces import MapperExtension
-
+
class MyExtension(MapperExtension):
def before_insert(self, mapper, connection, instance):
print "instance %s before insert !" % instance
-
+
m = mapper(User, users_table, extension=MyExtension())
Multiple extensions will be chained together and processed in order; they are specified as a list::
The :class:`.SessionExtension` applies plugin points for :class:`.Session` objects::
from sqlalchemy.orm.interfaces import SessionExtension
-
+
class MySessionExtension(SessionExtension):
def before_commit(self, session):
print "before commit!"
from sqlalchemy.orm.interfaces import AttributeExtension
from sqlalchemy.orm import mapper, relationship, column_property
-
+
class MyAttrExt(AttributeExtension):
def append(self, state, value, initiator):
print "append event !"
return value
-
+
def set(self, state, value, oldvalue, initiator):
print "set event !"
return value
-
+
mapper(SomeClass, sometable, properties={
'foo':column_property(sometable.c.foo, extension=MyAttrExt()),
'bar':relationship(Bar, extension=MyAttrExt())
A big part of SQLAlchemy is providing a wide range of control over how related objects get loaded when querying. This behavior
can be configured at mapper construction time using the ``lazy`` parameter to the :func:`.relationship` function,
-as well as by using options with the :class:`.Query` object.
+as well as by using options with the :class:`.Query` object.
Using Loader Strategies: Lazy Loading, Eager Loading
----------------------------------------------------
parent objects:
.. sourcecode:: python+sql
-
+
{sql}>>>jack = session.query(User).options(subqueryload('addresses')).filter_by(name='jack').all()
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname,
users.password AS users_password
* When using the default lazy loading, if you load 100 objects, and then access a collection on each of
them, a total of 101 SQL statements will be emitted, although each statement will typically be a
simple SELECT without any joins.
-
+
* When using joined loading, the load of 100 objects and their collections will emit only one SQL
statement. However, the
total number of rows fetched will be equal to the sum of the size of all the collections, plus one
exceptions) will transmit the full data of each parent over the wire to the client connection in
any case. Therefore joined eager loading only makes sense when the size of the collections are
relatively small. The LEFT OUTER JOIN can also be performance intensive compared to an INNER join.
-
+
* When using subquery loading, the load of 100 objects will emit two SQL statements. The second
statement will fetch a total number of rows equal to the sum of the size of all collections. An
INNER JOIN is used, and a minimum of parent columns are requested, only the primary keys. So a
subquery load makes sense when the collections are larger.
-
+
* When multiple levels of depth are used with joined or subquery loading, loading collections-within-
collections will multiply the total number of rows fetched in a cartesian fashion. Both forms
of eager loading always join from the original parent class.
-
+
* Many to One Reference
* When using the default lazy loading, a load of 100 objects will like in the case of the collection
if the collection of objects references a relatively small set of target objects, or the full set
of possible target objects have already been loaded into the session and are strongly referenced,
using the default of `lazy='select'` is by far the most efficient way to go.
-
+
* When using joined loading, the load of 100 objects will emit only one SQL statement. The join
- will be a LEFT OUTER JOIN, and the total number of rows will be equal to 100 in all cases.
+ will be a LEFT OUTER JOIN, and the total number of rows will be equal to 100 in all cases.
If you know that each parent definitely has a child (i.e. the foreign
key reference is NOT NULL), the joined load can be configured with ``innerjoin=True``, which is
usually specified within the :func:`~sqlalchemy.orm.relationship`. For a load of objects where
there are many possible target references which may have not been loaded already, joined loading
with an INNER JOIN is extremely efficient.
-
+
* Subquery loading will issue a second load for all the child objects, so for a load of 100 objects
there would be two SQL statements emitted. There's probably not much advantage here over
joined loading, however, except perhaps that subquery loading can use an INNER JOIN in all cases
takes a form such as::
mapper(User, users_table, primary_key=[users_table.c.id])
-
+
Would translate into declarative as::
class User(Base):
class User(Base):
__tablename__ = 'users'
-
+
id = Column(Integer)
-
+
__mapper_args__ = {
'primary_key':[id]
}
example::
mapper(User, users_table, include_properties=['user_id', 'user_name'])
-
+
...will map the ``User`` class to the ``users_table`` table, only including
the "user_id" and "user_name" columns - the rest are not refererenced.
Similarly::
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
-
+
class User(Base):
__tablename__ = 'user'
id = Column('user_id', Integer, primary_key=True)
together using a list, as below where we map to a :func:`~.expression.join`::
from sqlalchemy.sql import join
-
+
# join users and addresses
usersaddresses = join(users_table, addresses_table, \
users_table.c.user_id == addresses_table.c.user_id)
looks like::
from sqlalchemy.orm import mapper, column_property
-
+
mapper(User, users, properties={
'name':column_property(users.c.name, active_history=True)
})
or with declarative::
-
+
class User(Base):
__tablename__ = 'users'
-
+
id = Column(Integer, primary_key=True)
name = column_property(Column(String(50)), active_history=True)
Further examples of :func:`.column_property` are at :ref:`mapper_sql_expressions`.
-
+
.. autofunction:: column_property
.. _deferred:
class Book(Base):
__tablename__ = 'books'
-
+
book_id = Column(Integer, primary_key=True)
title = Column(String(200), nullable=False)
summary = Column(String(2000))
used. Unlike older versions of SQLAlchemy, there is no :func:`~.sql.expression.label` requirement::
from sqlalchemy.orm import column_property
-
+
mapper(User, users_table, properties={
'fullname': column_property(
users_table.c.firstname + " " + users_table.c.lastname
from sqlalchemy.orm import column_property
from sqlalchemy import select, func
-
+
mapper(User, users_table, properties={
'address_count': column_property(
select([func.count(addresses_table.c.address_id)]).\
@property
def fullname(self):
return self.firstname + " " + self.lastname
-
+
To invoke a SQL statement from an instance that's already been loaded, the
session associated with the instance can be acquired using
:func:`~.session.object_session` which will provide the appropriate
from sqlalchemy.orm import object_session
from sqlalchemy import select, func
-
+
class User(object):
@property
def address_count(self):
issued when the ORM is populating the object.
.. sourcecode:: python+sql
-
+
from sqlalchemy.orm import validates
-
+
addresses_table = Table('addresses', metadata,
Column('id', Integer, primary_key=True),
Column('email', String)
different name. Below we illustrate this using Python 2.6-style properties::
class EmailAddress(object):
-
+
@property
def email(self):
return self._email
-
+
@email.setter
def email(self, email):
self._email = email
from sqlalchemy.orm.properties import ColumnProperty
from sqlalchemy.sql import func
-
+
class MyComparator(ColumnProperty.Comparator):
def __eq__(self, other):
return func.lower(self.__clause_element__()) == func.lower(other)
Sets of columns can be associated with a single user-defined datatype. The ORM provides a single attribute which represents the group of columns
using the class you provide.
-A simple example represents pairs of columns as a "Point" object.
+A simple example represents pairs of columns as a "Point" object.
Starting with a table that represents two points as x1/y1 and x2/y2::
from sqlalchemy import Table, Column
-
+
vertices = Table('vertices', metadata,
Column('id', Integer, primary_key=True),
Column('x1', Integer),
from sqlalchemy.orm import mapper
from sqlalchemy.sql import join
-
+
class AddressUser(object):
pass
with multiple columns::
from sqlalchemy.orm import mapper, column_property
-
+
mapper(AddressUser, j, properties={
'user_id': column_property(users_table.c.user_id, addresses_table.c.user_id)
})
list of columns::
from sqlalchemy.ext.declarative import declarative_base
-
+
Base = declarative_base()
-
+
class AddressUser(Base):
__table__ = j
-
+
user_id = column_property(users_table.c.user_id, addresses_table.c.user_id)
A second example::
``Child`` will get a ``parent`` attribute with many-to-one semantics.
Declarative::
-
+
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
-
+
class Parent(Base):
__tablename__ = 'parent'
id = Column(Integer, primary_key=True)
children = relationship("Child", backref="parent")
-
+
class Child(Base):
__tablename__ = 'child'
id = Column(Integer, primary_key=True)
parent_id = Column(Integer, ForeignKey('parent.id'))
-
+
Many To One
~~~~~~~~~~~~
id = Column(Integer, primary_key=True)
child_id = Column(Integer, ForeignKey('child.id'))
child = relationship("Child", backref="parents")
-
+
class Child(Base):
__tablename__ = 'child'
id = Column(Integer, primary_key=True)
mapper(Parent, parent_table, properties={
'child': relationship(Child, uselist=False, backref='parent')
})
-
+
mapper(Child, child_table)
Or to turn a one-to-many backref into one-to-one, use the :func:`.backref` function
to provide arguments for the reverse side::
-
+
from sqlalchemy.orm import backref
-
+
parent_table = Table('parent', metadata,
Column('id', Integer, primary_key=True),
Column('child_id', Integer, ForeignKey('child.id'))
id = Column(Integer, primary_key=True)
child_id = Column(Integer, ForeignKey('child.id'))
child = relationship("Child", backref=backref("parent", uselist=False))
-
+
class Child(Base):
__tablename__ = 'child'
id = Column(Integer, primary_key=True)
-
+
Many To Many
~~~~~~~~~~~~~
Column('left_id', Integer, ForeignKey('left.id')),
Column('right_id', Integer, ForeignKey('right.id'))
)
-
+
class Parent(Base):
__tablename__ = 'left'
id = Column(Integer, primary_key=True)
children = relationship("Child",
secondary=association_table,
backref="parents")
-
+
class Child(Base):
__tablename__ = 'right'
id = Column(Integer, primary_key=True)
-
+
.. _association_pattern:
Association Object
left_id = Column(Integer, ForeignKey('left.id'), primary_key=True)
right_id = Column(Integer, ForeignKey('right.id'), primary_key=True)
child = relationship("Child", backref="parent_assocs")
-
+
class Parent(Base):
__tablename__ = 'left'
id = Column(Integer, primary_key=True)
children = relationship(Association, backref="parent")
-
+
class Child(Base):
__tablename__ = 'right'
id = Column(Integer, primary_key=True)
-
+
Working with the association pattern in its direct form requires that child
objects are associated with an association instance before being appended to
the parent; similarly, access from parent to child goes through the
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
-
+
class Node(Base):
__tablename__ = 'nodes'
id = Column(Integer, primary_key=True)
children = relationship("Node",
backref=backref('parent', remote_side=id)
)
-
+
There are several examples included with SQLAlchemy illustrating
self-referential strategies; these include :ref:`examples_adjacencylist` and
:ref:`examples_xmlpersistence`.
# passive_updates=False *only* needed if the database
# does not implement ON UPDATE CASCADE
-
+
mapper(User, users, properties={
'addresses': relationship(Address, passive_updates=False)
})
be set up as in the example above, using the ``bind`` argument. You
can also associate a :class:`.Engine` with an existing :func:`.sessionmaker`
using the :meth:`.sessionmaker.configure` method::
-
+
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine
-
+
# configure Session class with desired options
Session = sessionmaker()
on each invocation::
session = Session(bind=engine)
-
+
...or directly with a :class:`.Connection`::
conn = engine.connect()
that point on your other modules say "from mypackage import Session". That
way, everyone else just uses :class:`.Session()`,
and the configuration of that session is controlled by that central point.
-
+
If your application starts up, does imports, but does not know what
database it's going to be connecting to, you can bind the
:class:`.Session` at the "class" level to the
engine later on, using ``configure()``.
-
+
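As a rough sketch of that deferred pattern (the module layout and database
URL below are placeholders, not part of the original example)::

# mypackage/session.py - created at import time, with no engine yet
from sqlalchemy.orm import sessionmaker

Session = sessionmaker()

# later, once the database URL is known (e.g. at application startup)
from sqlalchemy import create_engine

engine = create_engine('sqlite:///app.db')
Session.configure(bind=engine)

# from this point on, Session() produces sessions bound to that engine
session = Session()
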
In the examples in this section, we will frequently show the
:func:`.sessionmaker` being created right above the line where we actually
invoke :class:`~sqlalchemy.orm.session.Session()`. But that's just for
then remains in use for the lifespan of a particular database
conversation, which includes not just the initial loading of objects but
throughout the whole usage of those instances.
-
+
Objects become detached if their owning session is discarded. They are
still functional in the detached state if the user has ensured that their
state has not been expired before detachment, but they will not be able to
to consider persisted objects as an extension of the state of a particular
:class:`.Session`, and to keep that session around until all referenced
objects have been discarded.
-
+
An exception to this is when objects are placed in caches or otherwise
shared among threads or processes, in which case their detached state can
be stored, transmitted, or shared. However, the state of detached objects
should still be transferred back into a new :class:`.Session` using
:meth:`.Session.add` or :meth:`.Session.merge` before working with the
object (or in the case of merge, its state) again.
-
+
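A minimal sketch of that re-association step, assuming a detached ``user``
instance and a configured ``Session`` factory::

# "user" was loaded by a previous, now-discarded Session and is detached
session = Session()

# merge() copies the detached state onto an instance owned by this
# Session and returns that persistent instance
user = session.merge(user)

# alternatively, add() re-attaches the same instance directly, assuming
# the Session doesn't already hold another copy of it
# session.add(user)
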
It is also very common that a :class:`.Session` as well as its associated
objects are only referenced by a single thread. Sharing objects between
threads is most safely accomplished by sharing their state among multiple
:class:`.Session` per thread, :meth:`.Session.merge` to transfer state
between threads. This pattern is not a strict requirement by any means,
but it has the least chance of introducing concurrency issues.
-
+
To help with the recommended :class:`.Session` -per-thread,
:class:`.Session` -per-set-of-objects patterns, the
:func:`.scoped_session` function is provided which produces a
map and see that the object is already there. It's only when you say
``query.get({some primary key})`` that the
:class:`~sqlalchemy.orm.session.Session` doesn't have to issue a query.
-
+
Additionally, the Session stores object instances using a weak reference
by default. This also defeats the purpose of using the Session as a cache.
-
+
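A rough illustration of that weak-referencing behavior, assuming a mapped
``User`` class (exact collection timing depends on the Python runtime)::

import gc

user = session.query(User).first()
assert user in session       # present while strongly referenced

del user
gc.collect()

# with the default weak identity map, the clean instance may now have been
# discarded; a subsequent query for it would hit the database again
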
The :class:`.Session` is not designed to be a
global object from which everyone consults as a "registry" of objects.
That's more the job of a **second level cache**. SQLAlchemy provides
sharing the session with those threads, but you also will have implemented
a proper locking scheme (or your graphical framework does) so that those
threads do not collide.
-
+
A multithreaded application is usually going to want to make use of
:func:`.scoped_session` to transparently manage sessions per thread.
More on this at :ref:`unitofwork_contextual`.
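
For reference, a minimal sketch of that pattern (the engine URL is only a
placeholder)::

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('sqlite:///app.db')
Session = scoped_session(sessionmaker(bind=engine))

# each thread calling Session() gets its own thread-local session
session = Session()

# when a thread's work is complete, remove() disposes of its session
Session.remove()
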
:meth:`~.Session.merge` is an extremely useful method for many purposes. However,
it deals with the intricate border between objects that are transient/detached and
-those that are persistent, as well as the automated transference of state.
+those that are persistent, as well as the automated transference of state.
The wide variety of scenarios that can present themselves here often require a
more careful approach to the state of objects. Common problems with merge usually involve
-some unexpected state regarding the object being passed to :meth:`~.Session.merge`.
+some unexpected state regarding the object being passed to :meth:`~.Session.merge`.
Let's use the canonical example of the User and Address objects::
class User(Base):
__tablename__ = 'user'
-
+
id = Column(Integer, primary_key=True)
name = Column(String(50), nullable=False)
addresses = relationship("Address", backref="user")
-
+
class Address(Base):
__tablename__ = 'address'
id = Column(Integer, primary_key=True)
email_address = Column(String(50), nullable=False)
user_id = Column(Integer, ForeignKey('user.id'), nullable=False)
-
+
Assume a ``User`` object with one ``Address``, already persistent::
>>> u1 = User(name='ed', addresses=[Address(email_address='ed@ed.com')])
>>> session.add(u1)
>>> session.commit()
-
+
We now create ``a1``, an object outside the session, which we'd like
to merge on top of the existing ``Address``::
>>> a1 = Address(id=existing_a1.id)
A surprise would occur if we said this::
-
+
>>> a1.user = u1
>>> a1 = session.merge(a1)
>>> session.commit()
>>> existing_a1.id = existing_a1.id
>>> existing_a1.user_id = u1.id
>>> existing_a1.user = None
-
+
Where above, both ``user_id`` and ``user`` are assigned to, and change events
are emitted for both. The ``user`` association
takes precedence, and None is applied to ``user_id``, causing a failure.
is the object prematurely in the session?
.. sourcecode:: python+sql
-
+
>>> a1 = Address(id=existing_a1.id, user_id=user.id)
>>> assert a1 not in session
>>> a1 = session.merge(a1)
-
+
Or is there state on the object that we don't want? Examining ``__dict__``
is a quick way to check::
>>> a1 = session.merge(a1)
>>> # success
>>> session.commit()
-
+
Deleting
--------
>>> session.add(o1)
>>> o1 in session
True
-
+
>>> i1 = Item()
>>> i1.order = o1
>>> i1 in o1.orders
True
>>> i1 in session
True
-
+
This behavior can be disabled as of 0.6.5 using the ``cascade_backrefs`` flag::
mapper(Order, order_table, properties={
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine
from unittest import TestCase
-
+
# global application scope. create Session class, engine
Session = sessionmaker()
# begin a non-ORM transaction
self.trans = connection.begin()
-
+
# bind an individual Session to the connection
self.session = Session(bind=self.connection)
-
+
def test_something(self):
- # use the session in tests.
-
+ # use the session in tests.
+
self.session.add(Foo())
self.session.commit()
-
+
def tearDown(self):
# rollback - everything that happened with the
# Session above (including calls to commit())
# is rolled back.
self.trans.rollback()
self.session.close()
-
+
Above, we issue :meth:`.Session.commit` as well as
:meth:`.Transaction.rollback`. This is an example of where we take advantage
of the :class:`.Connection` object's ability to maintain *subtransactions*, or
.. autoclass:: sqlalchemy.util.ScopedRegistry
:members:
-
+
.. autoclass:: sqlalchemy.util.ThreadLocalRegistry
.. _session_partitioning:
.. autoclass:: History
:members:
-
+
.. attribute:: sqlalchemy.orm.attributes.PASSIVE_NO_INITIALIZE
Symbol indicating that loader callables should
The preceding approach to configuration involved a
:class:`~sqlalchemy.schema.Table`, a user-defined class, and
a call to :func:`~.orm.mapper`. This illustrates classical SQLAlchemy usage, which values
-the highest separation of concerns possible.
+the highest separation of concerns possible.
A large number of applications don't require this degree of
separation, and for those SQLAlchemy offers an alternate "shorthand"
-configurational style called :mod:`~.sqlalchemy.ext.declarative`.
+configurational style called :mod:`~.sqlalchemy.ext.declarative`.
For many applications, this is the only style of configuration needed.
Our above example using this style is as follows::
>>> session.close() # roll back and close the transaction
>>> from sqlalchemy.orm import clear_mappers
>>> clear_mappers() # remove all class mappings
-
+
Below, we use :class:`~.orm.mapper` to reconfigure an ORM mapping for ``User`` and
``Address``, on our existing but currently un-mapped classes. The
font: normal 20px/22px arial,helvetica,sans-serif;
color: #222;
padding:0px;
- margin:0px;
+ margin:0px;
}
.topnav h2 {
font-family:arial,helvetica,sans-serif;
font-size:1.6em;
font-weight:normal;
- line-height:1.6em;
+ line-height:1.6em;
}
.topnav h3 {
}
.totoc {
-
+
}
.doc_copyright {
% else:
${entryname|h}
% endif
-
+
% if subitems:
<dd><dl>
% for subentryname, subentrylinks in subitems:
<div class="topnav">
<div id="pagecontrol">
<a href="${pathto('genindex')}">Index</a>
-
+
% if sourcename:
<div class="sourcelink">(<a href="${pathto('_sources/' + sourcename, True)|h}">${_('view source')})</div>
% endif
</div>
-
+
<div class="navbanner">
<a class="totoc" href="${pathto(master_doc)}">Table of Contents</a>
% if parents:
% if current_page_name != master_doc:
» ${self.show_title()}
% endif
-
+
${prevnext()}
<h2>
${self.show_title()}
% endif
<div class="clearboth"></div>
</div>
-
+
<div class="document">
<div class="body">
${next.body()}
from sqlalchemy import MetaData, Table, Column, Sequence, ForeignKey,\
Integer, String, create_engine
-
+
from sqlalchemy.orm import sessionmaker, mapper, relationship, backref,\
joinedload_all
-
+
from sqlalchemy.orm.collections import attribute_mapped_collection
metadata = MetaData()
def __init__(self, name, parent=None):
self.name = name
self.parent = parent
-
+
def append(self, nodename):
self.children[nodename] = TreeNode(nodename, parent=self)
-
+
def __repr__(self):
return "TreeNode(name=%r, id=%r, parent_id=%r)" % (
self.name,
self.id,
self.parent_id
)
-
+
def dump_tree(node, indent=0):
-
+
return " " * indent + repr(node) + \
"\n" + \
"".join([
dump_tree(c, indent +1)
for c in node.children.values()]
)
-
+
mapper(TreeNode, tree_table, properties={
'children': relationship(TreeNode,
# cascade deletions
cascade="all",
-
+
# many to one + adjacency list - remote_side
# is required to reference the 'remote'
# column in the join condition.
backref=backref("parent", remote_side=tree_table.c.id),
-
+
# children will be represented as a dictionary
# on the "name" attribute.
collection_class=attribute_mapped_collection('name'),
def __init__(self, item, price=None):
self.item = item
self.price = price or item.price
-
+
mapper(Order, orders, properties={
'order_items': relationship(OrderItem, cascade="all, delete-orphan",
backref='order')
# function to return items from the DB
def item(name):
return session.query(Item).filter_by(description=name).one()
-
+
# create an order
order = Order('john smith')
deep within an object graph when lazy loads occur.
E.g.::
-
+
# query for Person objects, specifying cache
q = Session.query(Person).options(FromCache("default", "all_people"))
-
+
# specify that each Person's "addresses" collection comes from
# cache too
q = q.options(RelationshipCache("default", "by_person", Person.addresses))
-
+
# query
print q.all()
-
+
To run, both SQLAlchemy and Beaker (1.4 or greater) must be
installed or on the current PYTHONPATH. The demo will create a local
directory for datafiles, insert initial data, and run. Running the
bootstrap fixture data if necessary.
caching_query.py - Represent functions and classes
- which allow the usage of Beaker caching with SQLAlchemy.
+ which allow the usage of Beaker caching with SQLAlchemy.
Introduces a query option called FromCache.
model.py - The datamodel, which represents Person that has multiple
def load_name_range(start, end, invalidate=False):
"""Load Person objects on a range of names.
-
+
start/end are integers, range is then
"person <start>" - "person <end>".
-
+
The cache option we set up is called "name_range", indicating
a range of names for the Person class.
-
+
    The `Person.addresses` collections are also cached. It's basically
another level of tuning here, as that particular cache option
can be transparently replaced with joinedload(Person.addresses).
# have the "addresses" collection cached separately
# each lazyload of Person.addresses loads from cache.
q = q.options(RelationshipCache("default", "by_person", Person.addresses))
-
+
# alternatively, eagerly load the "addresses" collection, so that they'd
# be cached together. This issues a bigger SQL statement and caches
# a single, larger value in the cache per person rather than two
# separate ones.
#q = q.options(joinedload(Person.addresses))
-
+
# if requested, invalidate the cache on current criterion.
if invalidate:
q.invalidate()
-
+
return q.all()
print "two through twelve, possibly from cache:\n"
The rest of what's here are standard SQLAlchemy and
Beaker constructs.
-
+
"""
from sqlalchemy.orm.interfaces import MapperOption
from sqlalchemy.orm.query import Query
class CachingQuery(Query):
"""A Query subclass which optionally loads full results from a Beaker
cache region.
-
+
The CachingQuery stores additional state that allows it to consult
a Beaker cache before accessing the database:
-
+
* A "region", which is a cache region argument passed to a
Beaker CacheManager, specifies a particular cache configuration
(including backend implementation, expiration times, etc.)
group of keys within the cache. A query that filters on a name
might use the name "by_name", a query that filters on a date range
to a joined table might use the name "related_date_range".
-
+
When the above state is present, a Beaker cache is retrieved.
-
+
The "namespace" name is first concatenated with
a string composed of the individual entities and columns the Query
requests, i.e. such as ``Query(User.id, User.name)``.
-
+
The Beaker cache is then loaded from the cache manager based
on the region and composed namespace. The key within the cache
itself is then constructed against the bind parameters specified
The FromCache and RelationshipCache mapper options below represent
the "public" method of configuring this state upon the CachingQuery.
-
+
"""
-
+
def __init__(self, manager, *args, **kw):
self.cache_manager = manager
Query.__init__(self, *args, **kw)
-
+
def __iter__(self):
"""override __iter__ to pull results from Beaker
if particular attributes have been configured.
-
+
Note that this approach does *not* detach the loaded objects from
the current session. If the cache backend is an in-process cache
(like "memory") and lives beyond the scope of the current session's
modified to first expunge() each loaded item from the current
session before returning the list of items, so that the items
in the cache are not the same ones in the current Session.
-
+
"""
if hasattr(self, '_cache_parameters'):
return self.get_value(createfunc=lambda: list(Query.__iter__(self)))
"""Set the value in the cache for this query."""
cache, cache_key = _get_cache_parameters(self)
- cache.put(cache_key, value)
+ cache.put(cache_key, value)
def query_callable(manager):
def query(*arg, **kw):
return CachingQuery(manager, *arg, **kw)
return query
-
+
def _get_cache_parameters(query):
"""For a query with cache_region and cache_namespace configured,
    return the corresponding Cache instance and cache key, based
raise ValueError("This Query does not have caching parameters configured.")
region, namespace, cache_key = query._cache_parameters
-
+
namespace = _namespace_from_query(namespace, query)
if cache_key is None:
return namespace
def _set_cache_parameters(query, region, namespace, cache_key):
-
+
if hasattr(query, '_cache_parameters'):
region, namespace, cache_key = query._cache_parameters
raise ValueError("This query is already configured "
(region, namespace)
)
query._cache_parameters = region, namespace, cache_key
-
+
class FromCache(MapperOption):
"""Specifies that a Query should load results from a cache."""
def __init__(self, region, namespace, cache_key=None):
"""Construct a new FromCache.
-
+
:param region: the cache region. Should be a
region configured in the Beaker CacheManager.
-
+
:param namespace: the cache namespace. Should
be a name uniquely describing the target Query's
lexical structure.
-
+
:param cache_key: optional. A string cache key
that will serve as the key to the query. Use this
if your query has a huge amount of parameters (such
self.region = region
self.namespace = namespace
self.cache_key = cache_key
-
+
def process_query(self, query):
"""Process a Query during normal loading operation."""
-
+
_set_cache_parameters(query, self.region, self.namespace, self.cache_key)
class RelationshipCache(MapperOption):
def __init__(self, region, namespace, attribute):
"""Construct a new RelationshipCache.
-
+
:param region: the cache region. Should be a
region configured in the Beaker CacheManager.
-
+
:param namespace: the cache namespace. Should
be a name uniquely describing the target Query's
lexical structure.
-
+
:param attribute: A Class.attribute which
indicates a particular class relationship() whose
lazy loader should be pulled from the cache.
-
+
"""
self.region = region
self.namespace = namespace
def and_(self, option):
"""Chain another RelationshipCache option to this one.
-
+
While many RelationshipCache objects can be specified on a single
Query separately, chaining them together allows for a more efficient
lookup during load.
-
+
"""
self._relationship_options.update(option._relationship_options)
return self
def _params_from_query(query):
"""Pull the bind parameter values from a query.
-
+
This takes into account any scalar attribute bindparam set up.
-
+
    E.g. params_from_query(query.filter(Cls.foo==5).filter(Cls.bar==7))
would return [5, 7].
-
+
"""
v = []
def visit_bindparam(bind):
value = query._params.get(bind.key, bind.value)
-
+
# lazyloader may dig a callable in here, intended
# to late-evaluate params after autoflush is called.
# convert to a scalar value.
if callable(value):
value = value()
-
+
v.append(value)
if query._criterion is not None:
visitors.traverse(query._criterion, {}, {'bindparam':visit_bindparam})
"Press enter to continue.\n" % root
)
os.makedirs(root)
-
+
dbfile = os.path.join(root, "beaker_demo.db")
engine = create_engine('sqlite:///%s' % dbfile, echo=True)
Session.configure(bind=engine)
'type':'file',
'data_dir':root,
'expire':3600,
-
+
# set start_time to current time
# to re-cache everything
# upon application startup
('New York', 'United States', ('10001', '10002', '10003', '10004', '10005', '10006')),
('San Francisco', 'United States', ('94102', '94103', '94104', '94105', '94107', '94108'))
]
-
+
countries = {}
all_post_codes = []
for city, country, postcodes in data:
country = countries[country]
except KeyError:
countries[country] = country = Country(country)
-
+
city = City(city, country)
pc = [PostalCode(code, city) for code in postcodes]
Session.add_all(pc)
all_post_codes.extend(pc)
-
+
for i in xrange(1, 51):
person = Person(
"person %.2d" % i,
Session.add(person)
Session.commit()
-
+
# start the demo fresh
Session.remove()
\ No newline at end of file
When used with the query_cache system, the effect is that the objects
in the cache are the same as those within the session - the merge()
- is a formality that doesn't actually create a second instance.
+ is a formality that doesn't actually create a second instance.
This makes it safe to use for updates of data from an identity
perspective (still not ideal for deletes though).
is automatically disposed upon session.remove().
"""
-
+
def __init__(self, namespace, scoped_session, **kwargs):
"""__init__ is called by Beaker itself."""
-
+
container.NamespaceManager.__init__(self, namespace)
self.scoped_session = scoped_session
-
+
@classmethod
def create_session_container(cls, beaker_name, scoped_session):
"""Create a new session container for a given scoped_session."""
-
+
def create_namespace(namespace, **kw):
return cls(namespace, scoped_session, **kw)
cache.clsmap[beaker_name] = create_namespace
-
+
@property
def dictionary(self):
"""Return the cache dictionary used by this MemoryNamespaceManager."""
-
+
sess = self.scoped_session()
try:
nscache = sess._beaker_cache
if __name__ == '__main__':
from environment import Session, cache_manager
from caching_query import FromCache
-
+
# create a Beaker container type called "ext:local_session".
# it will reference the ScopedSession in meta.
ScopedSessionNamespace.create_session_container("ext:local_session", Session)
-
+
# set up a region based on this new container type.
cache_manager.regions['local_session'] ={'type':'ext:local_session'}
-
+
from model import Person
-
+
# query to load Person by name, with criterion
# of "person 10"
q = Session.query(Person).\
options(FromCache("local_session", "by_name")).\
filter(Person.name=="person 10")
-
+
# load from DB
person10 = q.one()
-
+
# next call, the query is cached.
person10 = q.one()
# clear out the Session. The "_beaker_cache" dictionary
# disappears with it.
Session.remove()
-
+
# query calls from DB again
person10 = q.one()
-
+
# identity is preserved - person10 is the *same* object that's
# ultimately inside the cache. So it is safe to manipulate
# the not-queried-for attributes of objects when using such a
id = Column(Integer, primary_key=True)
name = Column(String(100), nullable=False)
-
+
def __init__(self, name):
self.name = name
code = Column(String(10), nullable=False)
city_id = Column(Integer, ForeignKey('city.id'), nullable=False)
city = relationship(City)
-
+
@property
def country(self):
return self.city.country
-
+
def __init__(self, code, city):
self.code = code
self.city = city
-
+
class Address(Base):
__tablename__ = 'address'
street = Column(String(200), nullable=False)
postal_code_id = Column(Integer, ForeignKey('postal_code.id'))
postal_code = relationship(PostalCode)
-
+
@property
def city(self):
return self.postal_code.city
-
+
@property
def country(self):
return self.postal_code.country
-
+
def __str__(self):
return "%s\t"\
"%s, %s\t"\
"%s" % (self.street, self.city.name,
self.postal_code.code, self.country.name)
-
+
class Person(Base):
__tablename__ = 'person'
def __str__(self):
return self.name
-
+
def __repr__(self):
return "Person(name=%r)" % self.name
-
+
def format_full(self):
return "\t".join([str(x) for x in [self] + list(self.addresses)])
-
+
# Caching options. A set of three RelationshipCache options
# which can be applied to Query(), causing the "lazy load"
# of these attributes to be loaded from cache.
class MyCollectionAdapter(object):
"""An wholly alternative instrumentation implementation."""
-
+
def __init__(self, key, state, collection):
self.key = key
self.state = state
class InstallListeners(InstrumentationManager):
def post_configure_attribute(self, class_, key, inst):
"""Add an event listener to an InstrumentedAttribute."""
-
+
inst.impl.extensions.insert(0, AttributeListener(key))
-
+
class AttributeListener(AttributeExtension):
- """Generic event listener.
-
+ """Generic event listener.
+
Propagates attribute change events to a
"receive_change_event()" method on the target
instance.
-
+
"""
def __init__(self, key):
self.key = key
-
+
def append(self, state, value, initiator):
self._report(state, value, None, "appended")
return value
def set(self, state, value, oldvalue, initiator):
self._report(state, value, oldvalue, "set")
return value
-
+
def _report(self, state, value, oldvalue, verb):
state.obj().receive_change_event(verb, self.key, value, oldvalue)
class Base(object):
__sa_instrumentation_manager__ = InstallListeners
-
+
def receive_change_event(self, verb, key, value, oldvalue):
s = "Value '%s' %s on attribute '%s', " % (value, verb, key)
if oldvalue:
s += "which replaced the value '%s', " % oldvalue
s += "on object %s" % self
print s
-
+
Base = declarative_base(cls=Base)
class MyMappedClass(Base):
__tablename__ = "mytable"
-
+
id = Column(Integer, primary_key=True)
data = Column(String(50))
related_id = Column(Integer, ForeignKey("related.id"))
def __str__(self):
return "MyMappedClass(data=%r)" % self.data
-
+
class Related(Base):
__tablename__ = "related"
def __str__(self):
return "Related(data=%r)" % self.data
-
+
# classes are instrumented. Demonstrate the events !
-
+
m1 = MyMappedClass(data='m1', related=Related(data='r1'))
m1.data = 'm1mod'
m1.related.mapped.append(MyMappedClass(data='m2'))
del m1.data
-
-
+
+
def __init__(self, func, expr=None):
self.func = func
self.expr = expr or func
-
+
def __get__(self, instance, owner):
if instance is None:
return new.instancemethod(self.expr, owner, owner.__class__)
return self.expr(owner)
else:
return self.fget(instance)
-
+
def __set__(self, instance, value):
self.fset(instance, value)
-
+
def __delete__(self, instance):
self.fdel(instance)
-
+
def setter(self, fset):
self.fset = fset
return self
def deleter(self, fdel):
self.fdel = fdel
return self
-
+
def expression(self, expr):
self.expr = expr
return self
@method
def contains(self,point):
"""Return true if the interval contains the given interval."""
-
+
return (self.start <= point) & (point < self.end)
-
+
@method
def intersects(self, other):
"""Return true if the interval intersects the given interval."""
-
+
return (self.start < other.end) & (self.end > other.start)
-
+
@method
def _max(self, x, y):
"""Return the max of two values."""
-
+
return max(x, y)
-
+
@_max.expression
def _max(cls, x, y):
"""Return the SQL max of two values."""
-
+
return func.max(x, y)
-
+
@method
def max_length(self, other):
"""Return the longer length of this interval and another."""
-
+
return self._max(self.length, other.length)
-
+
def __repr__(self):
return "%s(%s..%s)" % (self.__class__.__name__, self.start, self.end)
-
+
class Interval1(BaseInterval, Base):
"""Interval stored as endpoints"""
-
+
__table__ = Table('interval1', Base.metadata,
Column('id', Integer, primary_key=True),
Column('start', Integer, nullable=False),
class Interval2(BaseInterval, Base):
"""Interval stored as start and length"""
-
+
__table__ = Table('interval2', Base.metadata,
Column('id', Integer, primary_key=True),
Column('start', Integer, nullable=False),
def __init__(self, start, length):
self.start = start
self.length = length
-
+
@property_
def end(self):
return self.start + self.length
-
+
engine = create_engine('sqlite://', echo=True)
for Interval in (Interval1, Interval2):
print "Querying using interval class %s" % Interval.__name__
-
+
print
print '-- length less than 10'
print [(i, i.length) for i in
session.query(Interval).filter(Interval.length < 10).all()]
-
+
print
print '-- contains 12'
print session.query(Interval).filter(Interval.contains(12)).all()
-
+
print
print '-- intersects 2..10'
other = Interval1(2,10)
filter(Interval.intersects(other)).\
order_by(Interval.length).all()
print [(interval, interval.intersects(other)) for interval in result]
-
+
print
print '-- longer length'
interval_alias = aliased(Interval)
self.collection_name = collection_name\r
self.childclass = childclass\r
self.keyname = keyname\r
- \r
+\r
@property\r
def collection(self):\r
return getattr(self.parent, self.collection_name)\r
- \r
+\r
def keys(self):\r
descriptor = getattr(self.childclass, self.keyname)\r
return [x[0] for x in self.collection.values(descriptor)]\r
- \r
+\r
def __getitem__(self, key):\r
x = self.collection.filter_by(**{self.keyname:key}).first()\r
if x:\r
id = Column(Integer, primary_key=True)\r
name = Column(String(50))\r
_collection = relationship("Child", lazy="dynamic", cascade="all, delete-orphan")\r
- \r
+\r
@property\r
def child_map(self):\r
return ProxyDict(self, '_collection', Child, 'key')\r
- \r
+\r
class Child(Base):\r
__tablename__ = 'child'\r
id = Column(Integer, primary_key=True)\r
\r
def __repr__(self):\r
return "Child(key=%r)" % self.key\r
- \r
+\r
Base.metadata.create_all()\r
\r
sess = sessionmaker()()\r
the load in a non-recursive fashion and is much more efficient.
E.g.::
-
+
# parse an XML file and persist in the database
doc = ElementTree.parse("test.xml")
session.add(Document(file, doc))
session.commit()
-
+
# locate documents with a certain path/attribute structure
for document in find_document('/somefile/header/field2[@attr=foo]'):
# dump the XML
print document
-
+
"""
\ No newline at end of file
meta = MetaData()
####################### PART II - Table Metadata #############################
-
-# stores a top level record of an XML document.
+
+# stores a top level record of an XML document.
documents = Table('documents', meta,
Column('document_id', Integer, primary_key=True),
Column('filename', String(30), unique=True),
########################### PART III - Model #################################
# our document class. contains a string name,
-# and the ElementTree root element.
+# and the ElementTree root element.
class Document(object):
def __init__(self, name, element):
self.filename = name
self.element = element
-
+
def __str__(self):
buf = StringIO.StringIO()
self.element.write(buf)
def __get__(self, document, owner):
if document is None:
return self
-
+
if hasattr(document, '_element'):
return document._element
-
+
nodes = {}
root = None
for node in document._nodes:
elem.attrib[attr.name] = attr.value
elem.text = node.text
elem.tail = node.tail
-
+
document._element = ElementTree.ElementTree(root)
return document._element
-
+
def __set__(self, document, element):
def traverse(node):
n = _Node()
traverse(element.getroot())
document._element = element
-
+
def __delete__(self, document):
del document._element
document._nodes = []
def are_elements_equal(x, y):
return x == y
-# stores a top level record of an XML document.
+# stores a top level record of an XML document.
# the "element" column will store the ElementTree document as a BLOB.
documents = Table('documents', meta,
Column('document_id', Integer, primary_key=True),
meta.create_all(e)
# our document class. contains a string name,
-# and the ElementTree root element.
+# and the ElementTree root element.
class Document(object):
def __init__(self, name, element):
self.filename = name
# get ElementTree document
filename = os.path.join(os.path.dirname(__file__), "test.xml")
doc = ElementTree.parse(filename)
-
+
# save to DB
session = Session(e)
session.add(Document("test.xml", doc))
class Node(Base):
__tablename__ = 'node'
-
+
node_id = Column(Integer, primary_key=True)
def __init__(self, id):
self.node_id = id
-
+
def add_neighbors(self, *nodes):
for node in nodes:
Edge(self, node)
return self
-
+
def higher_neighbors(self):
return [x.higher_node for x in self.lower_edges]
-
+
def lower_neighbors(self):
return [x.lower_node for x in self.higher_edges]
lower_id = Column(Integer,
ForeignKey('node.node_id'),
primary_key=True)
-
+
higher_id = Column(Integer,
ForeignKey('node.node_id'),
primary_key=True)
higher_node = relationship(Node,
primaryjoin=higher_id==Node.node_id,
backref='higher_edges')
-
+
# here we have lower.node_id <= higher.node_id
def __init__(self, n1, n2):
if n1.node_id < n2.node_id:
# n1 -> n2 -> n5
# -> n7
# -> n3 -> n6
-
+
n1 = Node(1)
n2 = Node(2)
n3 = Node(3)
self.name = name
def __repr__(self):
return self.__class__.__name__ + " " + self.name
-
+
class Manager(Employee):
def __init__(self, name, manager_data):
self.name = name
def __repr__(self):
return self.__class__.__name__ + " " + \
self.name + " " + self.manager_data
-
+
class Engineer(Employee):
def __init__(self, name, engineer_info):
self.name = name
Column('company_id', Integer, ForeignKey('companies.company_id')),
Column('name', String(50)),
Column('type', String(30)))
-
+
engineers = Table('engineers', metadata,
Column('person_id', Integer, ForeignKey('people.person_id'),
primary_key=True),
Column('engineer_name', String(50)),
Column('primary_language', String(50)),
)
-
+
managers = Table('managers', metadata,
Column('person_id', Integer, ForeignKey('people.person_id'),
primary_key=True),
Column('status', String(30)),
Column('manager_name', String(50))
)
-
+
# create our classes. The Engineer and Manager classes extend from Person.
class Person(object):
def __init__(self, **kwargs):
Column('org_id', Integer, primary_key=True),
Column('org_name', String(50), nullable=False, key='name'),
mysql_engine='InnoDB')
-
+
member_table = Table('members', meta,
Column('member_id', Integer, primary_key=True),
Column('member_name', String(50), nullable=False, key='name'),
Column('org_id', Integer, ForeignKey('organizations.org_id', ondelete="CASCADE")),
mysql_engine='InnoDB')
-
-
+
+
class Organization(object):
def __init__(self, name):
self.name = name
-
+
class Member(object):
def __init__(self, name):
self.name = name
# Organization.members will be a Query object - no loading
# of the entire collection occurs unless requested
lazy="dynamic",
-
+
# Member objects "belong" to their parent, are deleted when
# removed from the collection
cascade="all, delete-orphan",
-
+
# "delete, delete-orphan" cascade does not load in objects on delete,
# allows ON DELETE CASCADE to handle it.
# this only works with a database that supports ON DELETE CASCADE -
if __name__ == '__main__':
engine = create_engine("mysql://scott:tiger@localhost/test", echo=True)
meta.create_all(engine)
-
+
# expire_on_commit=False means the session contents
# will not get invalidated after commit.
sess = sessionmaker(engine, expire_on_commit=False)()
sess.delete(org)
print "-------------------------\nflush three - delete org, delete members in one statement\n"
sess.commit()
-
+
print "-------------------------\nno Member rows should remain:\n"
print sess.query(Member).count()
-
+
print "------------------------\ndone. dropping tables."
meta.drop_all(engine)
\ No newline at end of file
right_most_sibling = connection.scalar(
select([personnel.c.rgt]).where(personnel.c.emp==instance.parent.emp)
)
-
+
connection.execute(
personnel.update(personnel.c.rgt>=right_most_sibling).values(
lft = case(
# before_update() would be needed to support moving of nodes
# after_delete() would be needed to support removal of nodes.
# [ticket:1172] needs to be implemented for deletion to work as well.
-
+
class Employee(Base):
__tablename__ = 'personnel'
__mapper_args__ = {
'extension':NestedSetExtension(),
'batch':False # allows extension to fire for each instance before going to the next.
}
-
+
parent = None
-
+
emp = Column(String, primary_key=True)
-
+
left = Column("lft", Integer, nullable=False)
right = Column("rgt", Integer, nullable=False)
-
+
def __repr__(self):
return "Employee(%s, %d, %d)" % (self.emp, self.left, self.right)
group_by(ealias.emp).\
order_by(ealias.left):
print " " * indentation + str(employee)
-
+
setattr(self, attr_name, GenericAssoc(table.name))
getattr(self, attr_name).targets = [value]
setattr(cls, name, property(get, set))
-
+
@property
def member(self):
return getattr(self.association,
'_backref_%s' % self.association.type)
-
+
setattr(cls, 'member', member)
mapper(GenericAssoc, association_table, properties={
* a DDL extension which allows CREATE/DROP to work in
conjunction with AddGeometryColumn/DropGeometryColumn
-
+
* a Geometry type, as well as a few subtypes, which
convert result row values to a GIS-aware object,
and also integrates with the DDL extension.
* a GIS-aware object which stores a raw geometry value
and provides a factory for functions such as AsText().
-
+
* an ORM comparator which can override standard column
methods on mapped objects to produce GIS operators.
-
+
* an attribute event listener that intercepts strings
and converts to GeomFromText().
-
+
* a standalone operator example.
The implementation is limited to only public, well known
class PersistentGisElement(GisElement):
"""Represents a Geometry value as loaded from the database."""
-
+
def __init__(self, desc):
self.desc = desc
class TextualGisElement(GisElement, expression.Function):
"""Represents a Geometry value as expressed within application code; i.e. in wkt format.
-
+
Extends expression.Function so that the value is interpreted as
GeomFromText(value) in a SQL expression context.
-
+
"""
-
+
def __init__(self, desc, srid=-1):
assert isinstance(desc, basestring)
self.desc = desc
class Geometry(TypeEngine):
"""Base PostGIS Geometry column type.
-
+
Converts bind/result values to/from a PersistentGisElement.
-
+
"""
-
+
name = 'GEOMETRY'
-
+
def __init__(self, dimension=None, srid=-1):
self.dimension = dimension
self.srid = srid
-
+
def bind_processor(self, dialect):
def process(value):
if value is not None:
else:
return value
return process
-
+
def result_processor(self, dialect, coltype):
def process(value):
if value is not None:
class Point(Geometry):
name = 'POINT'
-
+
class Curve(Geometry):
name = 'CURVE'
-
+
class LineString(Curve):
name = 'LINESTRING'
class GISDDL(object):
"""A DDL extension which integrates SQLAlchemy table create/drop
methods with PostGis' AddGeometryColumn/DropGeometryColumn functions.
-
+
Usage::
-
+
sometable = Table('sometable', metadata, ...)
-
+
GISDDL(sometable)
sometable.create()
-
+
"""
-
+
def __init__(self, table):
for event in ('before-create', 'after-create', 'before-drop', 'after-drop'):
table.ddl_listeners[event].append(self)
self._stack = []
-
+
def __call__(self, event, table, bind):
if event in ('before-create', 'before-drop'):
regular_cols = [c for c in table.c if not isinstance(c.type, Geometry)]
gis_cols = set(table.c).difference(regular_cols)
self._stack.append(table.c)
table._columns = expression.ColumnCollection(*regular_cols)
-
+
if event == 'before-drop':
for c in gis_cols:
bind.execute(select([func.DropGeometryColumn('public', table.name, c.name)], autocommit=True))
-
+
elif event == 'after-create':
table._columns = self._stack.pop()
-
+
for c in table.c:
if isinstance(c.type, Geometry):
bind.execute(select([func.AddGeometryColumn(table.name, c.name, c.type.srid, c.type.name, c.type.dimension)], autocommit=True))
def _to_postgis(value):
"""Interpret a value as a GIS-compatible construct."""
-
+
if hasattr(value, '__clause_element__'):
return value.__clause_element__()
elif isinstance(value, (expression.ClauseElement, GisElement)):
class GisAttribute(AttributeExtension):
"""Intercepts 'set' events on a mapped instance attribute and
converts the incoming value to a GIS expression.
-
+
"""
-
+
def set(self, state, value, oldvalue, initiator):
return _to_postgis(value)
-
+
class GisComparator(ColumnProperty.ColumnComparator):
"""Intercepts standard Column operators on mapped class attributes
and overrides their behavior.
-
+
"""
-
+
# override the __eq__() operator
def __eq__(self, other):
return self.__clause_element__().op('~=')(_to_postgis(other))
# add a custom operator
def intersects(self, other):
return self.__clause_element__().op('&&')(_to_postgis(other))
-
+
# any number of GIS operators can be overridden/added here
# using the techniques above.
-
+
def GISColumn(*args, **kw):
"""Define a declarative column property with GIS behavior.
-
+
This just produces orm.column_property() with the appropriate
extension and comparator_factory arguments. The given arguments
are passed through to Column. The declarative module extracts
the Column for inclusion in the mapped table.
-
+
"""
return column_property(
Column(*args, **kw),
class Road(Base):
__tablename__ = 'roads'
-
+
road_id = Column(Integer, primary_key=True)
road_name = Column(String)
road_geom = GISColumn(Geometry(2))
-
+
# enable the DDL extension, which allows CREATE/DROP operations
# to work correctly. This is not needed if working with externally
- # defined tables.
+ # defined tables.
GISDDL(Road.__table__)
metadata.drop_all()
metadata.create_all()
session = sessionmaker(bind=engine)()
-
+
# Add objects. We can use strings...
session.add_all([
Road(road_name='Jeff Rd', road_geom='LINESTRING(191232 243118,191108 243242)'),
Road(road_name='Graeme Ave', road_geom='LINESTRING(189412 252431,189631 259122)'),
Road(road_name='Phil Tce', road_geom='LINESTRING(190131 224148,190871 228134)'),
])
-
+
# or use an explicit TextualGisElement (similar to saying func.GeomFromText())
r = Road(road_name='Dave Cres', road_geom=TextualGisElement('LINESTRING(198231 263418,198213 268322)', -1))
session.add(r)
-
+
# pre flush, the TextualGisElement represents the string we sent.
assert str(r.road_geom) == 'LINESTRING(198231 263418,198213 268322)'
assert session.scalar(r.road_geom.wkt) == 'LINESTRING(198231 263418,198213 268322)'
-
+
session.commit()
# after flush and/or commit, all the TextualGisElements become PersistentGisElements.
assert str(r.road_geom) == "01020000000200000000000000B832084100000000E813104100000000283208410000000088601041"
-
+
r1 = session.query(Road).filter(Road.road_name=='Graeme Ave').one()
-
+
# illustrate the overridden __eq__() operator.
-
+
# strings come in as TextualGisElements
r2 = session.query(Road).filter(Road.road_geom == 'LINESTRING(189412 252431,189631 259122)').one()
-
+
# PersistentGisElements work directly
r3 = session.query(Road).filter(Road.road_geom == r1.road_geom).one()
-
+
assert r1 is r2 is r3
# illustrate the "intersects" operator
# illustrate usage of the "wkt" accessor. this requires a DB
# execution to call the AsText() function so we keep this explicit.
assert session.scalar(r1.road_geom.wkt) == 'LINESTRING(189412 252431,189631 259122)'
-
+
session.rollback()
-
+
metadata.drop_all()
for db in (db1, db2, db3, db4):
meta.drop_all(db)
meta.create_all(db)
-
+
# establish initial "id" in db1
db1.execute(ids.insert(), nextid=1)
-# step 5. define sharding functions.
+# step 5. define sharding functions.
# we'll use a straight mapping of a particular set of "country"
# attributes to shard id.
def shard_chooser(mapper, instance, clause=None):
"""shard chooser.
-
+
looks at the given instance and returns a shard id
note that we need to define conditions for
the WeatherLocation class, as well as our secondary Report class which will
point back to its WeatherLocation via its 'location' attribute.
-
+
"""
if isinstance(instance, WeatherLocation):
return shard_lookup[instance.continent]
return shard_chooser(mapper, instance.location)
def id_chooser(query, ident):
- """id chooser.
-
+ """id chooser.
+
given a primary key, returns a list of shards
to search. here, we don't have any particular information from a
    pk so we just return all shard ids. often, you'd want to do some
kind of round-robin strategy here so that requests are evenly
distributed among DBs.
-
+
"""
return ['north_america', 'asia', 'europe', 'south_america']
def query_chooser(query):
"""query chooser.
-
+
this also returns a list of shard ids, which can
just be all of them. but here we'll search into the Query in order
to try to narrow down the list of shards to query.
-
+
"""
ids = []
ids.append(shard_lookup[value])
elif operator == operators.in_op:
ids.extend(shard_lookup[v] for v in value)
-
+
if len(ids) == 0:
return ['north_america', 'asia', 'europe', 'south_america']
else:
def _get_query_comparisons(query):
"""Search an orm.Query object for binary expressions.
-
+
Returns expressions which match a Column against one or more
literal values as a list of tuples of the form
(column, operator, values). "values" is a single value
or tuple of values depending on the operator.
-
+
"""
binds = {}
clauses = set()
query_chooser=query_chooser
)
-# step 6. mapped classes.
+# step 6. mapped classes.
class WeatherLocation(object):
def __init__(self, continent, city):
self.continent = continent
'reports':relationship(Report, backref='location')
})
-mapper(Report, weather_reports)
+mapper(Report, weather_reports)
# save and load objects!
class SomeClass(Base):
__tablename__ = 'sometable'
-
+
id = Column(Integer, primary_key=True)
name = Column(String(50))
-
+
def __eq__(self, other):
assert type(other) is SomeClass and other.id == self.id
-
+
sess = Session()
sc = SomeClass(name='sc1')
sess.add(sc)
# of the info is always loaded (currently sets it on all attributes)
for prop in local_mapper.iterate_properties:
getattr(local_mapper.class_, prop.key).impl.active_history = True
-
+
super_mapper = local_mapper.inherits
super_history_mapper = getattr(cls, '__history_mapper__', None)
-
+
polymorphic_on = None
super_fks = []
if not super_mapper or local_mapper.local_table is not super_mapper.local_table:
for column in local_mapper.local_table.c:
if column.name == 'version':
continue
-
+
col = column.copy()
col.unique = False
super_fks.append((col.key, list(super_history_mapper.base_mapper.local_table.primary_key)[0]))
cols.append(col)
-
+
if column is local_mapper.polymorphic_on:
polymorphic_on = col
-
+
if super_mapper:
super_fks.append(('version', super_history_mapper.base_mapper.local_table.c.version))
cols.append(Column('version', Integer, primary_key=True))
else:
cols.append(Column('version', Integer, primary_key=True))
-
+
if super_fks:
cols.append(ForeignKeyConstraint(*zip(*super_fks)))
col = column.copy()
super_history_mapper.local_table.append_column(col)
table = None
-
+
if super_history_mapper:
bases = (super_history_mapper.class_,)
else:
bases = local_mapper.base_mapper.class_.__bases__
versioned_cls = type.__new__(type, "%sHistory" % cls.__name__, bases, {})
-
+
m = mapper(
versioned_cls,
table,
polymorphic_identity=local_mapper.polymorphic_identity
)
cls.__history_mapper__ = m
-
+
if not super_history_mapper:
cls.version = Column('version', Integer, default=1, nullable=False)
-
-
+
+
class VersionedMeta(DeclarativeMeta):
def __init__(cls, classname, bases, dict_):
DeclarativeMeta.__init__(cls, classname, bases, dict_)
obj_mapper = object_mapper(obj)
history_mapper = obj.__history_mapper__
history_cls = history_mapper.class_
-
+
obj_state = attributes.instance_state(obj)
-
+
attr = {}
obj_changed = False
-
+
for om, hm in zip(obj_mapper.iterate_to_root(), history_mapper.iterate_to_root()):
if hm.single:
continue
-
+
for hist_col in hm.local_table.c:
if hist_col.key == 'version':
continue
-
+
obj_col = om.local_table.c[hist_col.key]
# get the value of the
# the "unmapped" status of the subclass column on the
# base class is a feature of the declarative module as of sqla 0.5.2.
continue
-
+
# expired object attributes and also deferred cols might not be in the
# dict. force it to load no matter what by using getattr().
if prop.key not in obj_state.dict:
# if the attribute had no value.
attr[hist_col.key] = a[0]
obj_changed = True
-
+
if not obj_changed:
# not changed, but we have relationships. OK
# check those too
attributes.get_history(obj, prop.key).has_changes():
obj_changed = True
break
-
- if not obj_changed and not deleted:
+
+ if not obj_changed and not deleted:
return
attr['version'] = obj.version
setattr(hist, key, value)
session.add(hist)
obj.version += 1
-
+
class VersionedListener(SessionExtension):
def before_flush(self, session, flush_context, instances):
for obj in versioned_objects(session.dirty):
def setup():
global engine
engine = create_engine('sqlite://', echo=True)
-
+
class TestVersioning(TestBase):
def setup(self):
global Base, Session, Versioned
__metaclass__ = VersionedMeta
_decl_class_registry = Base._decl_class_registry
Session = sessionmaker(extension=VersionedListener())
-
+
def teardown(self):
clear_mappers()
Base.metadata.drop_all()
-
+
def create_tables(self):
Base.metadata.create_all()
-
+
def test_plain(self):
class SomeClass(Versioned, Base, ComparableEntity):
__tablename__ = 'sometable'
-
+
id = Column(Integer, primary_key=True)
name = Column(String(50))
-
+
self.create_tables()
sess = Session()
sc = SomeClass(name='sc1')
sess.add(sc)
sess.commit()
-
+
sc.name = 'sc1modified'
sess.commit()
-
+
assert sc.version == 2
-
+
SomeClassHistory = SomeClass.__history_mapper__.class_
-
+
eq_(
sess.query(SomeClassHistory).filter(SomeClassHistory.version == 1).all(),
[SomeClassHistory(version=1, name='sc1')]
)
assert sc.version == 3
-
+
sess.commit()
sc.name = 'temp'
SomeClassHistory(version=2, name='sc1modified')
]
)
-
+
sess.delete(sc)
sess.commit()
def test_from_null(self):
class SomeClass(Versioned, Base, ComparableEntity):
__tablename__ = 'sometable'
-
+
id = Column(Integer, primary_key=True)
name = Column(String(50))
-
+
self.create_tables()
sess = Session()
sc = SomeClass()
sess.add(sc)
sess.commit()
-
+
sc.name = 'sc1'
sess.commit()
-
+
assert sc.version == 2
def test_deferred(self):
"""test versioning of unloaded, deferred columns."""
-
+
class SomeClass(Versioned, Base, ComparableEntity):
__tablename__ = 'sometable'
id = Column(Integer, primary_key=True)
name = Column(String(50))
data = deferred(Column(String(25)))
-
+
self.create_tables()
sess = Session()
sc = SomeClass(name='sc1', data='somedata')
sess.add(sc)
sess.commit()
sess.close()
-
+
sc = sess.query(SomeClass).first()
assert 'data' not in sc.__dict__
-
+
sc.name = 'sc1modified'
sess.commit()
sess.query(SomeClassHistory).filter(SomeClassHistory.version == 1).all(),
[SomeClassHistory(version=1, name='sc1', data='somedata')]
)
-
-
+
+
def test_joined_inheritance(self):
class BaseClass(Versioned, Base, ComparableEntity):
__tablename__ = 'basetable'
id = Column(Integer, primary_key=True)
name = Column(String(50))
type = Column(String(20))
-
+
__mapper_args__ = {'polymorphic_on':type, 'polymorphic_identity':'base'}
-
+
class SubClassSeparatePk(BaseClass):
__tablename__ = 'subtable1'
same1 = SubClassSamePk(name='same1', subdata2='same1subdata')
sess.add_all([sep1, base1, same1])
sess.commit()
-
+
base1.name = 'base1mod'
same1.subdata2 = 'same1subdatamod'
sep1.name ='sep1mod'
[
SubClassSeparatePkHistory(id=1, name=u'sep1', type=u'sep', version=1),
BaseClassHistory(id=2, name=u'base1', type=u'base', version=1),
- SubClassSamePkHistory(id=3, name=u'same1', type=u'same', version=1)
+ SubClassSamePkHistory(id=3, name=u'same1', type=u'same', version=1)
]
)
-
+
same1.subdata2 = 'same1subdatamod2'
eq_(
SubClassSeparatePkHistory(id=1, name=u'sep1', type=u'sep', version=1),
BaseClassHistory(id=2, name=u'base1', type=u'base', version=1),
SubClassSamePkHistory(id=3, name=u'same1', type=u'same', version=1),
- SubClassSamePkHistory(id=3, name=u'same1', type=u'same', version=2)
+ SubClassSamePkHistory(id=3, name=u'same1', type=u'same', version=2)
]
)
BaseClassHistory(id=2, name=u'base1', type=u'base', version=1),
BaseClassHistory(id=2, name=u'base1mod', type=u'base', version=2),
SubClassSamePkHistory(id=3, name=u'same1', type=u'same', version=1),
- SubClassSamePkHistory(id=3, name=u'same1', type=u'same', version=2)
+ SubClassSamePkHistory(id=3, name=u'same1', type=u'same', version=2)
]
)
name = Column(String(50))
type = Column(String(50))
__mapper_args__ = {'polymorphic_on':type, 'polymorphic_identity':'base'}
-
+
class SubClass(BaseClass):
subname = Column(String(50))
b1 = BaseClass(name='b1')
sc = SubClass(name='s1', subname='sc1')
-
+
sess.add_all([b1, sc])
-
+
sess.commit()
-
+
b1.name='b1modified'
BaseClassHistory = BaseClass.__history_mapper__.class_
SubClassHistory = SubClass.__history_mapper__.class_
-
+
eq_(
sess.query(BaseClassHistory).order_by(BaseClassHistory.id, BaseClassHistory.version).all(),
[BaseClassHistory(id=1, name=u'b1', type=u'base', version=1)]
)
-
+
sc.name ='s1modified'
b1.name='b1modified2'
SubClassHistory(id=2, name=u's1', type=u'sub', version=1)
]
)
-
+
def test_unique(self):
class SomeClass(Versioned, Base, ComparableEntity):
__tablename__ = 'sometable'
-
+
id = Column(Integer, primary_key=True)
name = Column(String(50), unique=True)
data = Column(String(50))
-
+
self.create_tables()
sess = Session()
sc = SomeClass(name='sc1', data='sc1')
sess.add(sc)
sess.commit()
-
+
sc.data = 'sc1modified'
sess.commit()
-
+
assert sc.version == 2
-
+
sc.data = 'sc1modified2'
sess.commit()
-
+
assert sc.version == 3
def test_relationship(self):
class SomeRelated(Base, ComparableEntity):
__tablename__ = 'somerelated'
-
+
id = Column(Integer, primary_key=True)
class SomeClass(Versioned, Base, ComparableEntity):
__tablename__ = 'sometable'
-
+
id = Column(Integer, primary_key=True)
name = Column(String(50))
related_id = Column(Integer, ForeignKey('somerelated.id'))
related = relationship("SomeRelated")
-
+
SomeClassHistory = SomeClass.__history_mapper__.class_
-
+
self.create_tables()
sess = Session()
sc = SomeClass(name='sc1')
sess.commit()
assert sc.version == 1
-
+
sr1 = SomeRelated()
sc.related = sr1
sess.commit()
-
+
assert sc.version == 2
-
+
eq_(
sess.query(SomeClassHistory).filter(SomeClassHistory.version == 1).all(),
[SomeClassHistory(version=1, name='sc1', related_id=None)]
)
assert sc.version == 3
-
+
"""
Illustrates "vertical table" mappings.
-A "vertical table" refers to a technique where individual attributes of an object are stored as distinct rows in a table.
+A "vertical table" refers to a technique where individual attributes of an object are stored as distinct rows in a table.
The "vertical table" technique is used to persist objects which can have a varied set of attributes, at the expense of simple query control and brevity. It is commonly found in content/document management systems in order to represent user-created structures flexibly.
Two variants on the approach are given. In the second, each row references a "datatype" which contains information about the type of information stored in the attribute, such as integer, string, or date.
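
E.g., a minimal sketch of the first variant's schema (the table and column
names here are illustrative, not the example's actual ones)::

from sqlalchemy import MetaData, Table, Column, Integer, String, ForeignKey

meta = MetaData()

# one row per entity
entities = Table('entity', meta,
    Column('id', Integer, primary_key=True))

# one row per (entity, attribute-name) pair; each attribute's value is
# stored in the generic "value" column rather than a dedicated column
entity_values = Table('entity_value', meta,
    Column('entity_id', Integer, ForeignKey('entity.id'), primary_key=True),
    Column('key', String(64), primary_key=True),
    Column('value', String(255)))
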
try:
import pkg_resources
except ImportError:
- return do_download()
+ return do_download()
try:
pkg_resources.require("setuptools>="+version); return
except pkg_resources.VersionConflict, e:
__all__ = sorted(name for name, obj in locals().items()
if not (name.startswith('_') or inspect.ismodule(obj)))
-
+
__version__ = '0.6.6'
del inspect, sys
class Connector(object):
pass
-
-
\ No newline at end of file
+
class MxODBCConnector(Connector):
driver='mxodbc'
-
+
supports_sane_multi_rowcount = False
supports_unicode_statements = False
supports_unicode_binds = False
-
+
supports_native_decimal = True
-
+
@classmethod
def dbapi(cls):
# this classmethod will normally be replaced by an instance
conn.decimalformat = self.dbapi.DECIMAL_DECIMALFORMAT
conn.errorhandler = self._error_handler()
return connect
-
+
def _error_handler(self):
""" Return a handler that adjusts mxODBC's raised Warnings to
emit Python standard warnings.
The arg 'errorhandler' is not used by SQLAlchemy and will
not be populated.
-
+
"""
opts = url.translate_connect_args(username='user')
opts.update(url.query)
supports_unicode_statements = supports_unicode
supports_native_decimal = True
default_paramstyle = 'named'
-
+
# for non-DSN connections, this should
# hold the desired driver name
pyodbc_driver_name = None
-
+
# will be set to True after initialize()
# if the freetds.so is detected
freetds = False
-
+
@classmethod
def dbapi(cls):
return __import__('pyodbc')
def create_connect_args(self, url):
opts = url.translate_connect_args(username='user')
opts.update(url.query)
-
+
keys = opts
query = url.query
connectors.extend(['%s=%s' % (k,v) for k,v in keys.iteritems()])
return [[";".join (connectors)], connect_args]
-
+
def is_disconnect(self, e):
if isinstance(e, self.dbapi.ProgrammingError):
return "The cursor's connection has been closed." in str(e) or \
def initialize(self, connection):
# determine FreeTDS first. can't issue SQL easily
# without getting unicode_statements/binds set up.
-
+
pyodbc = self.dbapi
dbapi_con = connection.connection
self.supports_unicode_statements = not self.freetds
self.supports_unicode_binds = not self.freetds
# end Py2K
-
+
# run other initialization which asks for user name, etc.
super(PyODBCConnector, self).initialize(connection)
class ZxJDBCConnector(Connector):
driver = 'zxjdbc'
-
+
supports_sane_rowcount = False
supports_sane_multi_rowcount = False
-
+
supports_unicode_binds = True
supports_unicode_statements = sys.version > '2.5.0+'
description_encoding = None
default_paramstyle = 'qmark'
-
+
jdbc_db_name = None
jdbc_driver_name = None
-
+
@classmethod
def dbapi(cls):
from com.ziclix.python.sql import zxJDBC
def _driver_kwargs(self):
"""Return kw arg dict to be sent to connect()."""
return {}
-
+
def _create_jdbc_url(self, url):
"""Create a JDBC url from a :class:`~sqlalchemy.engine.url.URL`"""
return 'jdbc:%s://%s%s/%s' % (self.jdbc_db_name, url.host,
url.port is not None
and ':%s' % url.port or '',
url.database)
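# Illustrative example (values are hypothetical): for a dialect whose
# jdbc_db_name is 'mysql', a URL such as
#     mysql+zxjdbc://user:pw@localhost:3306/test
# would yield the JDBC url 'jdbc:mysql://localhost:3306/test'.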
-
+
def create_connect_args(self, url):
opts = self._driver_kwargs()
opts.update(url.query)
supports_sane_multi_rowcount = False
ported_sqla_06 = False
-
+
def type_descriptor(self, typeobj):
newobj = types.adapt_type(typeobj, self.colspecs)
return newobj
'dow': 'w',
'week': 'ww'
})
-
+
def visit_select_precolumns(self, select):
"""Access puts TOP, it's version of LIMIT here """
s = select.distinct and "DISTINCT " or ""
SMALLINT, BIGINT, FLOAT, FLOAT, DATE, TIME, \
TEXT, NUMERIC, FLOAT, TIMESTAMP, VARCHAR, CHAR, BLOB,\
dialect
-
+
__all__ = (
'SMALLINT', 'BIGINT', 'FLOAT', 'FLOAT', 'DATE', 'TIME',
'TEXT', 'NUMERIC', 'FLOAT', 'TIMESTAMP', 'VARCHAR', 'CHAR', 'BLOB',
'dialect'
)
-
-
+
+
def visit_VARCHAR(self, type_):
basic = super(FBTypeCompiler, self).visit_VARCHAR(type_)
return self._extend_string(type_, basic)
-
+
class FBCompiler(sql.compiler.SQLCompiler):
kinterbasdb backend-specific keyword arguments are:
-* type_conv - select the kind of mapping done on the types: by default
+* type_conv - select the kind of mapping done on the types: by default
SQLAlchemy uses 200 with Unicode, datetime and decimal support (see
details__).
SQLAlchemy ORM to ignore its usage. The behavior can also be controlled on a
per-execution basis using the `enable_rowcount` option with
:meth:`execution_options()`::
-
+
conn = engine.connect().execution_options(enable_rowcount=True)
r = conn.execute(stmt)
print r.rowcount
-
+
__ http://sourceforge.net/projects/kinterbasdb
__ http://firebirdsql.org/index.php?op=devel&sub=python
__ http://kinterbasdb.sourceforge.net/dist_docs/usage.html#adv_param_conv_dynamic_type_translation
return self.cursor.rowcount
else:
return -1
-
+
class FBDialect_kinterbasdb(FBDialect):
driver = 'kinterbasdb'
supports_sane_rowcount = False
supports_sane_multi_rowcount = False
execution_ctx_cls = FBExecutionContext_kinterbasdb
-
+
supports_native_decimal = True
-
+
colspecs = util.update_copy(
FBDialect.colspecs,
{
sqltypes.Numeric:_FBNumeric_kinterbasdb
}
-
+
)
-
+
def __init__(self, type_conv=200, concurrency_level=1,
enable_rowcount=True, **kwargs):
super(FBDialect_kinterbasdb, self).__init__(**kwargs)
self.concurrency_level = concurrency_level
if enable_rowcount:
self.supports_sane_rowcount = True
-
+
@classmethod
def dbapi(cls):
k = __import__('kinterbasdb')
opts['host'] = "%s/%s" % (opts['host'], opts['port'])
del opts['port']
opts.update(url.query)
-
+
util.coerce_kw_type(opts, 'type_conv', int)
-
+
type_conv = opts.pop('type_conv', self.type_conv)
concurrency_level = opts.pop('concurrency_level',
self.concurrency_level)
-
+
if self.dbapi is not None:
initialized = getattr(self.dbapi, 'initialized', None)
if initialized is None:
name = 'informix'
max_identifier_length = 128 # adjusts at runtime based on server version
-
+
type_compiler = InfoTypeCompiler
statement_compiler = InfoSQLCompiler
ddl_compiler = InfoDDLCompiler
def initialize(self, connection):
super(InformixDialect, self).initialize(connection)
-
+
# http://www.querix.com/support/knowledge-base/error_number_message/error_200
if self.server_version_info < (9, 2):
self.max_identifier_length = 18
else:
self.max_identifier_length = 128
-
+
def do_begin(self, connection):
cu = connection.cursor()
cu.execute('SET LOCK MODE TO WAIT')
util.warn("Did not recognize type '%s' of column '%s'" %
(coltype, name))
coltype = sqltypes.NULLTYPE
-
+
column_info = dict(name=name, type=coltype, nullable=not not_nullable,
default=default, autoincrement=autoincrement,
primary_key=primary_key)
informixdb is available at:
http://informixdb.sourceforge.net/
-
+
Connecting
^^^^^^^^^^
def visit_large_binary(self, type_):
return "LONG BYTE"
-
+
def visit_numeric(self, type_):
if type_.scale and type_.precision:
return 'FIXED(%s, %s)' % (type_.precision, type_.scale)
return 'FIXED(%s)' % type_.precision
else:
return 'INTEGER'
-
+
def visit_BOOLEAN(self, type_):
return "BOOLEAN"
-
+
colspecs = {
sqltypes.Numeric: MaxNumeric,
sqltypes.DateTime: MaxTimestamp,
def visit_mod(self, binary, **kw):
return "mod(%s, %s)" % \
(self.process(binary.left), self.process(binary.right))
-
+
def default_from(self):
return ' FROM DUAL'
Defaults to False. If true, sets NOCACHE.
"""
sequence = create.element
-
+
if (not sequence.optional and
(not self.checkfirst or
not self.dialect.has_sequence(self.connection, sequence.name))):
colspecs = colspecs
ischema_names = ischema_names
-
+
# MaxDB-specific
datetimeformat = 'internal'
class MaxDBDialect_sapdb(MaxDBDialect):
driver = 'sapdb'
-
+
@classmethod
def dbapi(cls):
from sapdb import dbapi as _dbapi
supports_unicode = sys.maxunicode == 65535
supports_unicode_statements = True
driver = 'adodbapi'
-
+
@classmethod
def import_dbapi(cls):
import adodbapi as module
``schema.Sequence()`` objects. In other words::
from sqlalchemy import Table, Integer, Sequence, Column
-
+
Table('test', metadata,
Column('id', Integer,
Sequence('blah',100,10), primary_key=True),
class DATETIME2(_DateTimeBase, sqltypes.DateTime):
__visit_name__ = 'DATETIME2'
-
+
def __init__(self, precision=None, **kwargs):
self.precision = precision
# TODO: is this not an Interval ?
class DATETIMEOFFSET(sqltypes.TypeEngine):
__visit_name__ = 'DATETIMEOFFSET'
-
+
def __init__(self, precision=None, **kwargs):
self.precision = precision
characters."""
__visit_name__ = 'NTEXT'
-
+
def __init__(self, *args, **kwargs):
"""Construct a NTEXT.
class BIT(sqltypes.TypeEngine):
__visit_name__ = 'BIT'
-
+
class MONEY(sqltypes.TypeEngine):
__visit_name__ = 'MONEY'
if type_.length:
spec = spec + "(%d)" % type_.length
-
+
return ' '.join([c for c in (spec, collation)
if c is not None])
def visit_unicode(self, type_):
return self.visit_NVARCHAR(type_)
-
+
def visit_unicode_text(self, type_):
return self.visit_NTEXT(type_)
-
+
def visit_NTEXT(self, type_):
return self._extend("NTEXT", type_)
return self.visit_DATETIME(type_)
else:
return self.visit_TIME(type_)
-
+
def visit_large_binary(self, type_):
return self.visit_IMAGE(type_)
_select_lastrowid = False
_result_proxy = None
_lastrowid = None
-
+
def pre_exec(self):
"""Activate IDENTITY_INSERT if needed."""
tbl = self.compiled.statement.table
seq_column = tbl._autoincrement_column
insert_has_sequence = seq_column is not None
-
+
if insert_has_sequence:
self._enable_identity_insert = \
seq_column.key in self.compiled_parameters[0]
else:
self._enable_identity_insert = False
-
+
self._select_lastrowid = insert_has_sequence and \
not self.compiled.returning and \
not self._enable_identity_insert and \
not self.executemany
-
+
if self._enable_identity_insert:
self.cursor.execute("SET IDENTITY_INSERT %s ON" %
self.dialect.identifier_preparer.format_table(tbl))
def post_exec(self):
"""Disable IDENTITY_INSERT if enabled."""
-
+
if self._select_lastrowid:
if self.dialect.use_scope_identity:
self.cursor.execute(
if (self.isinsert or self.isupdate or self.isdelete) and \
self.compiled.returning:
self._result_proxy = base.FullyBufferedResultProxy(self)
-
+
if self._enable_identity_insert:
self.cursor.execute(
- "SET IDENTITY_INSERT %s OFF" %
+ "SET IDENTITY_INSERT %s OFF" %
self.dialect.identifier_preparer.
format_table(self.compiled.statement.table)
)
-
+
def get_lastrowid(self):
return self._lastrowid
-
+
def handle_dbapi_exception(self, e):
if self._enable_identity_insert:
try:
class MSSQLCompiler(compiler.SQLCompiler):
returning_precedes_values = True
-
+
extract_map = util.update_copy(
compiler.SQLCompiler.extract_map,
{
def visit_now_func(self, fn, **kw):
return "CURRENT_TIMESTAMP"
-
+
def visit_current_date_func(self, fn, **kw):
return "GETDATE()"
-
+
def visit_length_func(self, fn, **kw):
return "LEN%s" % self.function_argspec(fn, **kw)
-
+
def visit_char_length_func(self, fn, **kw):
return "LEN%s" % self.function_argspec(fn, **kw)
-
+
def visit_concat_op(self, binary, **kw):
return "%s + %s" % \
(self.process(binary.left, **kw),
self.process(binary.right, **kw))
-
+
def visit_match_op(self, binary, **kw):
return "CONTAINS (%s, %s)" % (
self.process(binary.left, **kw),
self.process(binary.right, **kw))
-
+
def get_select_precolumns(self, select):
""" MS-SQL puts TOP, it's version of LIMIT here """
if select._distinct or select._limit:
s = select._distinct and "DISTINCT " or ""
-
+
if select._limit:
if not select._offset:
s += "TOP %s " % (select._limit,)
target = stmt.table.alias("inserted")
else:
target = stmt.table.alias("deleted")
-
+
adapter = sql_util.ClauseAdapter(target)
def col_label(col):
adapted = adapter.traverse(col)
return adapted.label(c.key)
else:
return self.label_select_column(None, adapted, asfrom=False)
-
+
columns = [
self.process(
col_label(c),
class MSSQLStrictCompiler(MSSQLCompiler):
"""A subclass of MSSQLCompiler which disables the usage of bind
parameters where not allowed natively by MS-SQL.
-
+
A dialect may use this compiler on a platform where native
binds are used.
-
+
"""
ansi_bind_rules = True
format acceptable to MSSQL. That seems to be the
so-called ODBC canonical date format which looks
like this:
-
+
yyyy-mm-dd hh:mi:ss.mmm(24h)
-
+
For other data types, call the base class implementation.
"""
# datetime and date are both subclasses of datetime.date
colspec += " NOT NULL"
else:
colspec += " NULL"
-
+
if column.table is None:
raise exc.InvalidRequestError(
"mssql requires Table-bound columns "
"in order to generate DDL")
-
+
seq_col = column.table._autoincrement_column
# install a IDENTITY Sequence if we have an implicit IDENTITY column
}
ischema_names = ischema_names
-
+
supports_native_boolean = False
supports_unicode_binds = True
postfetch_lastrowid = True
-
+
server_version_info = ()
-
+
statement_compiler = MSSQLCompiler
ddl_compiler = MSDDLCompiler
type_compiler = MSTypeCompiler
self.max_identifier_length = int(max_identifier_length or 0) or \
self.max_identifier_length
super(MSDialect, self).__init__(**opts)
-
+
def do_savepoint(self, connection, name):
util.warn("Savepoint support in mssql is experimental and "
"may lead to data loss.")
def do_release_savepoint(self, connection, name):
pass
-
+
def initialize(self, connection):
super(MSDialect, self).initialize(connection)
if self.server_version_info[0] not in range(8, 17):
if self.server_version_info >= MS_2005_VERSION and \
'implicit_returning' not in self.__dict__:
self.implicit_returning = True
-
+
def _get_default_schema_name(self, connection):
user_name = connection.scalar("SELECT user_name() as user_name;")
if user_name is not None:
# below MS 2005
if self.server_version_info < MS_2005_VERSION:
return []
-
+
current_schema = schema or self.default_schema_name
full_tname = "%s.%s" % (current_schema, tablename)
for row in rp:
if row['index_id'] in indexes:
indexes[row['index_id']]['column_names'].append(row['name'])
-
+
return indexes.values()
@reflection.cache
# the constrained column
C = ischema.key_constraints.alias('C')
# information_schema.constraint_column_usage:
- # the referenced column
+ # the referenced column
R = ischema.key_constraints.alias('R')
# Primary key constraints
#information_schema.referential_constraints
RR = ischema.ref_constraints
# information_schema.table_constraints
- TC = ischema.constraints
+ TC = ischema.constraints
# information_schema.constraint_column_usage:
# the constrained column
C = ischema.key_constraints.alias('C')
order_by = [
RR.c.constraint_name,
R.c.ordinal_position])
-
+
# group rows by constraint ID, to handle multi-column FKs
fkeys = []
fknm, scols, rcols = (None, [], [])
-
+
def fkey_rec():
return {
'name' : None,
}
fkeys = util.defaultdict(fkey_rec)
-
+
for r in connection.execute(s).fetchall():
scol, rschema, rtbl, rcol, rfknm, fkmatch, fkuprule, fkdelrule = r
if schema is not None or current_schema != rschema:
rec['referred_schema'] = rschema
-
+
local_cols, remote_cols = \
rec['constrained_columns'],\
rec['referred_columns']
-
+
local_cols.append(scol)
remote_cols.append(rcol)
class CoerceUnicode(TypeDecorator):
impl = Unicode
-
+
def process_bind_param(self, value, dialect):
if isinstance(value, str):
value = value.decode(dialect.encoding)
return value
-
+
schemata = Table("SCHEMATA", ischema,
Column("CATALOG_NAME", CoerceUnicode, key="catalog_name"),
Column("SCHEMA_NAME", CoerceUnicode, key="schema_name"),
Column("CONSTRAINT_NAME", CoerceUnicode, key="constraint_name"),
# TODO: is CATLOG misspelled ?
Column("UNIQUE_CONSTRAINT_CATLOG", CoerceUnicode,
- key="unique_constraint_catalog"),
-
+ key="unique_constraint_catalog"),
+
Column("UNIQUE_CONSTRAINT_SCHEMA", CoerceUnicode,
key="unique_constraint_schema"),
Column("UNIQUE_CONSTRAINT_NAME", CoerceUnicode,
Connection is via DSN::
mssql+mxodbc://<username>:<password>@<dsnname>
-
+
Execution Modes
~~~~~~~~~~~~~~~
# won't work.
class MSDialect_mxodbc(MxODBCConnector, MSDialect):
-
+
# TODO: may want to use this only if FreeTDS is not in use,
# since FreeTDS doesn't seem to use native binds.
statement_compiler = MSSQLStrictCompiler
pymssql is available at:
http://pymssql.sourceforge.net/
-
+
Connecting
^^^^^^^^^^
-
+
Sample connect string::
mssql+pymssql://<username>:<password>@<freetds_name>
supports_sane_rowcount = False
max_identifier_length = 30
driver = 'pymssql'
-
+
colspecs = util.update_copy(
MSDialect.colspecs,
{
# pymssql doesn't have a Binary method. we use string
# TODO: monkeypatching here is less than ideal
module.Binary = str
-
+
client_ver = tuple(int(x) for x in module.__version__.split("."))
if client_ver < (1, ):
util.warn("The pymssql dialect expects at least "
class _MSNumeric_pyodbc(sqltypes.Numeric):
"""Turns Decimals with adjusted() < 0 or > 7 into strings.
-
+
This is the only method that is proven to work with Pyodbc+MSSQL
without crashing (floats can be used but seem to cause sporadic
crashes).
-
+
"""
def bind_processor(self, dialect):
def process(value):
if self.asdecimal and \
isinstance(value, decimal.Decimal):
-
+
adjusted = value.adjusted()
if adjusted < 0:
return self._small_dec_to_string(value)
else:
return value
return process
-
+
def _small_dec_to_string(self, value):
return "%s0.%s%s" % (
(value < 0 and '-' or ''),
"".join(
[str(s) for s in value._int][0:value.adjusted() + 1]))
return result
-
-
+
+
class MSExecutionContext_pyodbc(MSExecutionContext):
_embedded_scope_identity = False
-
+
def pre_exec(self):
"""where appropriate, issue "select scope_identity()" in the same
statement.
-
+
Background on why "scope_identity()" is preferable to "@@identity":
http://msdn.microsoft.com/en-us/library/ms190315.aspx
-
+
Background on why we attempt to embed "scope_identity()" into the same
statement as the INSERT:
http://code.google.com/p/pyodbc/wiki/FAQs#How_do_I_retrieve_autogenerated/identity_values?
-
+
"""
-
+
super(MSExecutionContext_pyodbc, self).pre_exec()
# don't embed the scope_identity select into an
self.dialect.use_scope_identity and \
len(self.parameters[0]):
self._embedded_scope_identity = True
-
+
self.statement += "; select scope_identity()"
def post_exec(self):
try:
# fetchall() ensures the cursor is consumed
# without closing it (FreeTDS particularly)
- row = self.cursor.fetchall()[0]
+ row = self.cursor.fetchall()[0]
break
except self.dialect.dbapi.Error, e:
# no way around this - nextset() consumes the previous set
# so we need to just keep flipping
self.cursor.nextset()
-
+
self._lastrowid = int(row[0])
else:
super(MSExecutionContext_pyodbc, self).post_exec()
execution_ctx_cls = MSExecutionContext_pyodbc
pyodbc_driver_name = 'SQL Server'
-
+
colspecs = util.update_copy(
MSDialect.colspecs,
{
sqltypes.Numeric:_MSNumeric_pyodbc
}
)
-
+
def __init__(self, description_encoding='latin-1', **params):
super(MSDialect_pyodbc, self).__init__(**params)
self.description_encoding = description_encoding
self.use_scope_identity = self.dbapi and \
hasattr(self.dbapi.Cursor, 'nextset')
-
+
dialect = MSDialect_pyodbc
NVARCHAR, NUMERIC, SET, SMALLINT, REAL, TEXT, TIME, TIMESTAMP, \
TINYBLOB, TINYINT, TINYTEXT,\
VARBINARY, VARCHAR, YEAR, dialect
-
+
__all__ = (
'BIGINT', 'BINARY', 'BIT', 'BLOB', 'BOOLEAN', 'CHAR', 'DATE', 'DATETIME', 'DECIMAL', 'DOUBLE',
'ENUM', 'DECIMAL', 'FLOAT', 'INTEGER', 'INTEGER', 'LONGBLOB', 'LONGTEXT', 'MEDIUMBLOB', 'MEDIUMINT',
----------
See the API documentation on individual drivers for details on connecting.
-
+
Connection Timeouts
-------------------
self.unsigned = kw.pop('unsigned', False)
self.zerofill = kw.pop('zerofill', False)
super(_NumericType, self).__init__(**kw)
-
+
class _FloatType(_NumericType, sqltypes.Float):
def __init__(self, precision=None, scale=None, asdecimal=True, **kw):
if isinstance(self, (REAL, DOUBLE)) and \
self.binary = binary
self.national = national
super(_StringType, self).__init__(**kw)
-
+
def __repr__(self):
attributes = inspect.getargspec(self.__init__)[0][1:]
attributes.extend(inspect.getargspec(_StringType.__init__)[0][1:])
class NUMERIC(_NumericType, sqltypes.NUMERIC):
"""MySQL NUMERIC type."""
-
+
__visit_name__ = 'NUMERIC'
-
+
def __init__(self, precision=None, scale=None, asdecimal=True, **kw):
"""Construct a NUMERIC.
class DECIMAL(_NumericType, sqltypes.DECIMAL):
"""MySQL DECIMAL type."""
-
+
__visit_name__ = 'DECIMAL'
-
+
def __init__(self, precision=None, scale=None, asdecimal=True, **kw):
"""Construct a DECIMAL.
super(DECIMAL, self).__init__(precision=precision, scale=scale,
asdecimal=asdecimal, **kw)
-
+
class DOUBLE(_FloatType):
"""MySQL DOUBLE type."""
def result_processor(self, dialect, coltype):
"""Convert a MySQL's 64 bit, variable length binary string to a long.
-
+
TODO: this is MySQL-db, pyodbc specific. OurSQL and mysqlconnector
already do this, so this logic should be moved to those dialects.
-
+
"""
-
+
def process(value):
if value is not None:
v = 0L
"""
super(LONGTEXT, self).__init__(**kwargs)
-
+
class VARCHAR(_StringType, sqltypes.VARCHAR):
"""MySQL VARCHAR type, for variable-length character data."""
class TINYBLOB(sqltypes._Binary):
"""MySQL TINYBLOB type, for binary data up to 2^8 bytes."""
-
+
__visit_name__ = 'TINYBLOB'
class MEDIUMBLOB(sqltypes._Binary):
"""
self.quoting = kw.pop('quoting', 'auto')
-
+
if self.quoting == 'auto' and len(enums):
# What quoting character are we using?
q = None
kw.pop('native_enum', None)
_StringType.__init__(self, length=length, **kw)
sqltypes.Enum.__init__(self, *enums)
-
+
@classmethod
def _strip_enums(cls, enums):
strip_enums = []
a = a[1:-1].replace(a[0] * 2, a[0])
strip_enums.append(a)
return strip_enums
-
+
def bind_processor(self, dialect):
super_convert = super(ENUM, self).bind_processor(dialect)
def process(value):
extract_map.update ({
'milliseconds': 'millisecond',
})
-
+
def visit_random_func(self, fn, **kw):
return "rand%s" % self.function_argspec(fn)
-
+
def visit_utc_timestamp_func(self, fn, **kw):
return "UTC_TIMESTAMP"
-
+
def visit_sysdate_func(self, fn, **kw):
return "SYSDATE()"
-
+
def visit_concat_op(self, binary, **kw):
return "concat(%s, %s)" % (self.process(binary.left), self.process(binary.right))
-
+
def visit_match_op(self, binary, **kw):
return "MATCH (%s) AGAINST (%s IN BOOLEAN MODE)" % (self.process(binary.left), self.process(binary.right))
# No cast until 4, no decimals until 5.
if not self.dialect._supports_cast:
return self.process(cast.clause)
-
+
type_ = self.process(cast.typeclause)
if type_ is None:
return self.process(cast.clause)
if self.dialect._backslash_escapes:
value = value.replace('\\', '\\\\')
return value
-
+
def get_select_precolumns(self, select):
if isinstance(select._distinct, basestring):
return select._distinct.upper() + " "
def create_table_constraints(self, table):
"""Get table constraints."""
constraint_string = super(MySQLDDLCompiler, self).create_table_constraints(table)
-
+
is_innodb = table.kwargs.has_key('mysql_engine') and \
table.kwargs['mysql_engine'].lower() == 'innodb'
constraint_string += ", \n\t"
constraint_string += "KEY `idx_autoinc_%s`(`%s`)" % (auto_inc_column.name, \
self.preparer.format_column(auto_inc_column))
-
+
return constraint_string
default = self.get_column_default_string(column)
if default is not None:
colspec.append('DEFAULT ' + default)
-
+
is_timestamp = isinstance(column.type, sqltypes.TIMESTAMP)
if not column.nullable and not is_timestamp:
colspec.append('NOT NULL')
def visit_drop_index(self, drop):
index = drop.element
-
+
return "\nDROP INDEX %s ON %s" % \
(self.preparer.quote(self._index_identifier(index.name), index.quote),
self.preparer.format_table(index.table))
COLLATE annotations and MySQL specific extensions.
"""
-
+
def attr(name):
return getattr(type_, name, defaults.get(name))
-
+
if attr('charset'):
charset = 'CHARACTER SET %s' % attr('charset')
elif attr('ascii'):
if c is not None])
return ' '.join([c for c in (spec, charset, collation)
if c is not None])
-
+
def _mysql_type(self, type_):
return isinstance(type_, (_StringType, _NumericType))
-
+
def visit_NUMERIC(self, type_):
if type_.precision is None:
return self._extend_numeric(type_, "NUMERIC")
'scale' : type_.scale})
else:
return self._extend_numeric(type_, 'REAL')
-
+
def visit_FLOAT(self, type_):
if self._mysql_type(type_) and type_.scale is not None and type_.precision is not None:
return self._extend_numeric(type_, "FLOAT(%s, %s)" % (type_.precision, type_.scale))
return self._extend_numeric(type_, "FLOAT(%s)" % (type_.precision,))
else:
return self._extend_numeric(type_, "FLOAT")
-
+
def visit_INTEGER(self, type_):
if self._mysql_type(type_) and type_.display_width is not None:
return self._extend_numeric(type_, "INTEGER(%(display_width)s)" % {'display_width': type_.display_width})
else:
return self._extend_numeric(type_, "INTEGER")
-
+
def visit_BIGINT(self, type_):
if self._mysql_type(type_) and type_.display_width is not None:
return self._extend_numeric(type_, "BIGINT(%(display_width)s)" % {'display_width': type_.display_width})
else:
return self._extend_numeric(type_, "BIGINT")
-
+
def visit_MEDIUMINT(self, type_):
if self._mysql_type(type_) and type_.display_width is not None:
return self._extend_numeric(type_, "MEDIUMINT(%(display_width)s)" % {'display_width': type_.display_width})
return "BIT(%s)" % type_.length
else:
return "BIT"
-
+
def visit_DATETIME(self, type_):
return "DATETIME"
return "YEAR"
else:
return "YEAR(%s)" % type_.display_width
-
+
def visit_TEXT(self, type_):
if type_.length:
return self._extend_string(type_, {}, "TEXT(%d)" % type_.length)
else:
return self._extend_string(type_, {}, "TEXT")
-
+
def visit_TINYTEXT(self, type_):
return self._extend_string(type_, {}, "TINYTEXT")
def visit_MEDIUMTEXT(self, type_):
return self._extend_string(type_, {}, "MEDIUMTEXT")
-
+
def visit_LONGTEXT(self, type_):
return self._extend_string(type_, {}, "LONGTEXT")
-
+
def visit_VARCHAR(self, type_):
if type_.length:
return self._extend_string(type_, {}, "VARCHAR(%d)" % type_.length)
else:
raise exc.InvalidRequestError("VARCHAR requires a length when rendered on MySQL")
-
+
def visit_CHAR(self, type_):
if type_.length:
return self._extend_string(type_, {}, "CHAR(%(length)s)" % {'length' : type_.length})
else:
return self._extend_string(type_, {}, "CHAR")
-
+
def visit_NVARCHAR(self, type_):
# We'll actually generate the equiv. "NATIONAL VARCHAR" instead
# of "NVARCHAR".
return self._extend_string(type_, {'national':True}, "VARCHAR(%(length)s)" % {'length': type_.length})
else:
raise exc.InvalidRequestError("NVARCHAR requires a length when rendered on MySQL")
-
+
def visit_NCHAR(self, type_):
# We'll actually generate the equiv. "NATIONAL CHAR" instead of "NCHAR".
if type_.length:
return self._extend_string(type_, {'national':True}, "CHAR(%(length)s)" % {'length': type_.length})
else:
return self._extend_string(type_, {'national':True}, "CHAR")
-
+
def visit_VARBINARY(self, type_):
return "VARBINARY(%d)" % type_.length
-
+
def visit_large_binary(self, type_):
return self.visit_BLOB(type_)
-
+
def visit_enum(self, type_):
if not type_.native_enum:
return super(MySQLTypeCompiler, self).visit_enum(type_)
else:
return self.visit_ENUM(type_)
-
+
def visit_BLOB(self, type_):
if type_.length:
return "BLOB(%d)" % type_.length
else:
return "BLOB"
-
+
def visit_TINYBLOB(self, type_):
return "TINYBLOB"
for e in type_.enums:
quoted_enums.append("'%s'" % e.replace("'", "''"))
return self._extend_string(type_, {}, "ENUM(%s)" % ",".join(quoted_enums))
-
+
def visit_SET(self, type_):
return self._extend_string(type_, {}, "SET(%s)" % ",".join(type_._ddl_values))
def visit_BOOLEAN(self, type):
return "BOOL"
-
+
class MySQLIdentifierPreparer(compiler.IdentifierPreparer):
if not server_ansiquotes:
quote = "`"
else:
- quote = '"'
+ quote = '"'
super(MySQLIdentifierPreparer, self).__init__(
dialect,
class MySQLDialect(default.DefaultDialect):
"""Details of the MySQL dialect. Not used directly in application code."""
-
+
name = 'mysql'
supports_alter = True
-
+
# identifiers are 64, however aliases can be 255...
max_identifier_length = 255
max_index_name_length = 64
-
+
supports_native_enum = True
-
+
supports_sane_rowcount = True
supports_sane_multi_rowcount = False
-
+
default_paramstyle = 'format'
colspecs = colspecs
-
+
statement_compiler = MySQLCompiler
ddl_compiler = MySQLDDLCompiler
type_compiler = MySQLTypeCompiler
ischema_names = ischema_names
preparer = MySQLIdentifierPreparer
-
+
# default SQL compilation settings -
# these are modified upon initialize(),
# i.e. first connect
_backslash_escapes = True
_server_ansiquotes = False
-
+
def __init__(self, use_ansiquotes=None, **kwargs):
default.DefaultDialect.__init__(self, **kwargs)
if isinstance(e, self.dbapi.OperationalError):
return self._extract_error_code(e) in \
(2006, 2013, 2014, 2045, 2055)
- elif isinstance(e, self.dbapi.InterfaceError):
+ elif isinstance(e, self.dbapi.InterfaceError):
# if underlying connection is closed,
# this is the error you get
return "(0, '')" in str(e)
def _extract_error_code(self, exception):
raise NotImplementedError()
-
+
def _get_default_schema_name(self, connection):
return connection.execute('SELECT DATABASE()').scalar()
finally:
if rs:
rs.close()
-
+
def initialize(self, connection):
default.DefaultDialect.initialize(self, connection)
self._connection_charset = self._detect_charset(connection)
def _supports_cast(self):
return self.server_version_info is None or \
self.server_version_info >= (4, 0, 2)
-
+
@reflection.cache
def get_schema_names(self, connection, **kw):
rp = connection.execute("SHOW schemas")
return [row[0] for row in self._compat_fetchall(rp, charset=charset)\
if row[1] == 'BASE TABLE']
-
+
@reflection.cache
def get_view_names(self, connection, schema=None, **kw):
charset = self._connection_charset
parsed_state = self._parsed_state_or_create(connection, table_name, schema, **kw)
default_schema = None
-
+
fkeys = []
for spec in parsed_state.constraints:
def get_indexes(self, connection, table_name, schema=None, **kw):
parsed_state = self._parsed_state_or_create(connection, table_name, schema, **kw)
-
+
indexes = []
for spec in parsed_state.keys:
unique = False
schema,
info_cache=kw.get('info_cache', None)
)
-
+
@util.memoized_property
def _tabledef_parser(self):
"""return the MySQLTableDefinitionParser, generate if needed.
-
+
The deferred creation ensures that the dialect has
retrieved server version information first.
-
+
"""
if (self.server_version_info < (4, 1) and self._server_ansiquotes):
# ANSI_QUOTES doesn't affect SHOW CREATE TABLE on < 4.1
else:
preparer = self.identifier_preparer
return MySQLTableDefinitionParser(self, preparer)
-
+
@reflection.cache
def _setup_parser(self, connection, table_name, schema=None, **kw):
charset = self._connection_charset
full_name=full_name)
sql = parser._describe_to_create(table_name, columns)
return parser.parse(sql, charset)
-
+
def _adjust_casing(self, table, charset=None):
"""Adjust Table name to the server case sensitivity, if needed."""
mode = (mode_no | 4 == mode_no) and 'ANSI_QUOTES' or ''
self._server_ansiquotes = 'ANSI_QUOTES' in mode
-
+
# as of MySQL 5.0.1
self._backslash_escapes = 'NO_BACKSLASH_ESCAPES' not in mode
-
+
def _show_create_table(self, connection, table, charset=None,
full_name=None):
"""Run SHOW CREATE TABLE for a ``Table``."""
class ReflectedState(object):
"""Stores raw information about a SHOW CREATE TABLE statement."""
-
+
def __init__(self):
self.columns = []
self.table_options = {}
self.table_name = None
self.keys = []
self.constraints = []
-
+
class MySQLTableDefinitionParser(object):
"""Parses the results of a SHOW CREATE TABLE statement."""
-
+
def __init__(self, dialect, preparer):
self.dialect = dialect
self.preparer = preparer
state.constraints.append(spec)
else:
pass
-
+
return state
-
+
def _parse_constraints(self, line):
"""Parse a KEY or CONSTRAINT line.
if default == 'NULL':
# eliminates the need to deal with this later.
default = None
-
+
col_d = dict(name=name, type=type_instance, default=default)
col_d.update(col_kw)
state.columns.append(col_d)
MySQL-Python is available at:
http://sourceforge.net/projects/mysql-python
-
+
At least version 1.2.1 or 1.2.2 should be used.
Connecting
Connect string format::
mysql+mysqldb://<user>:<password>@<host>[:<port>]/<dbname>
-
+
Character Sets
--------------
-------------
MySQL-python at least as of version 1.2.2 has a serious memory leak related
-to unicode conversion, a feature which is disabled via ``use_unicode=0``.
+to unicode conversion, a feature which is disabled via ``use_unicode=0``.
The recommended connection form with SQLAlchemy is::
engine = create_engine('mysql://scott:tiger@localhost/test?charset=utf8&use_unicode=0', pool_recycle=3600)
from sqlalchemy import processors
class MySQLExecutionContext_mysqldb(MySQLExecutionContext):
-
+
@property
def rowcount(self):
if hasattr(self, '_rowcount'):
return self._rowcount
else:
return self.cursor.rowcount
-
-
+
+
class MySQLCompiler_mysqldb(MySQLCompiler):
def visit_mod(self, binary, **kw):
return self.process(binary.left) + " %% " + self.process(binary.right)
-
+
def post_process_text(self, text):
return text.replace('%', '%%')
class MySQLIdentifierPreparer_mysqldb(MySQLIdentifierPreparer):
-
+
def _escape_identifier(self, value):
value = value.replace(self.escape_quote, self.escape_to_quote)
return value.replace("%", "%%")
execution_ctx_cls = MySQLExecutionContext_mysqldb
statement_compiler = MySQLCompiler_mysqldb
preparer = MySQLIdentifierPreparer_mysqldb
-
+
colspecs = util.update_copy(
MySQLDialect.colspecs,
{
}
)
-
+
@classmethod
def dbapi(cls):
return __import__('MySQLdb')
pass
opts['client_flag'] = client_flag
return [[], opts]
-
+
def _get_server_version_info(self, connection):
dbapi_con = connection.connection
version = []
OurSQL is available at:
http://packages.python.org/oursql/
-
+
Connecting
-----------
@property
def plain_query(self):
return self.execution_options.get('_oursql_plain_query', False)
-
+
class MySQLDialect_oursql(MySQLDialect):
driver = 'oursql'
# Py3K
supports_unicode_binds = True
supports_unicode_statements = True
# end Py2K
-
+
supports_native_decimal = True
-
+
supports_sane_rowcount = True
supports_sane_multi_rowcount = True
execution_ctx_cls = MySQLExecutionContext_oursql
if not is_prepared:
self.do_prepare_twophase(connection, xid)
self._xa_query(connection, 'XA COMMIT "%s"', xid)
-
+
# Q: why didn't we need all these "plain_query" overrides earlier ?
# am i on a newer/older version of OurSQL ?
def has_table(self, connection, table_name, schema=None):
connection.connect().\
execution_options(_oursql_plain_query=True),
table_name, schema)
-
+
def get_table_options(self, connection, table_name, schema=None, **kw):
return MySQLDialect.get_table_options(self,
connection.connect().\
schema=schema,
**kw
)
-
+
def get_view_names(self, connection, schema=None, **kw):
return MySQLDialect.get_view_names(self,
connection.connect().\
schema=schema,
**kw
)
-
+
def get_table_names(self, connection, schema=None, **kw):
return MySQLDialect.get_table_names(self,
connection.connect().\
execution_options(_oursql_plain_query=True),
schema
)
-
+
def get_schema_names(self, connection, **kw):
return MySQLDialect.get_schema_names(self,
connection.connect().\
execution_options(_oursql_plain_query=True),
**kw
)
-
+
def initialize(self, connection):
return MySQLDialect.initialize(
self,
connection.execution_options(_oursql_plain_query=True)
)
-
+
def _show_create_table(self, connection, table, charset=None,
full_name=None):
return MySQLDialect._show_create_table(self,
table, charset, full_name)
def is_disconnect(self, e):
- if isinstance(e, self.dbapi.ProgrammingError):
+ if isinstance(e, self.dbapi.ProgrammingError):
return e.errno is None and 'cursor' not in e.args[1] and e.args[1].endswith('closed')
else:
return e.errno in (2006, 2013, 2014, 2045, 2055)
def _detect_charset(self, connection):
"""Sniff out the character set in use for connection results."""
-
+
return connection.connection.charset
def _compat_fetchall(self, rp, charset=None):
execution_ctx_cls = MySQLExecutionContext_pyodbc
pyodbc_driver_name = "MySQL"
-
+
def __init__(self, **kw):
# deal with http://code.google.com/p/pyodbc/issues/detail?id=25
kw.setdefault('convert_unicode', True)
util.warn("Could not detect the connection character set. Assuming latin1.")
return 'latin1'
-
+
def _extract_error_code(self, exception):
m = re.compile(r"\((\d+)\)").search(str(exception.args))
c = m.group(1)
from sqlalchemy import types as sqltypes
from sqlalchemy.types import VARCHAR, NVARCHAR, CHAR, DATE, DATETIME, \
BLOB, CLOB, TIMESTAMP, FLOAT
-
+
RESERVED_WORDS = set('SHARE RAW DROP BETWEEN FROM DESC OPTION PRIOR LONG THEN '
'DEFAULT ALTER IS INTO MINUS INTEGER NUMBER GRANT IDENTIFIED '
'ALL TO ORDER ON FLOAT DATE HAVING CLUSTER NOWAIT RESOURCE ANY '
class NUMBER(sqltypes.Numeric, sqltypes.Integer):
__visit_name__ = 'NUMBER'
-
+
def __init__(self, precision=None, scale=None, asdecimal=None):
if asdecimal is None:
asdecimal = bool(scale and scale > 0)
-
+
super(NUMBER, self).__init__(precision=precision, scale=scale, asdecimal=asdecimal)
-
+
def adapt(self, impltype):
ret = super(NUMBER, self).adapt(impltype)
# leave a hint for the DBAPI handler
ret._is_oracle_number = True
return ret
-
+
@property
def _type_affinity(self):
if bool(self.scale and self.scale > 0):
return sqltypes.Numeric
else:
return sqltypes.Integer
-
-
+
+
class DOUBLE_PRECISION(sqltypes.Numeric):
__visit_name__ = 'DOUBLE_PRECISION'
def __init__(self, precision=None, scale=None, asdecimal=None):
if asdecimal is None:
asdecimal = False
-
+
super(DOUBLE_PRECISION, self).__init__(precision=precision, scale=scale, asdecimal=asdecimal)
class BFILE(sqltypes.LargeBinary):
class INTERVAL(sqltypes.TypeEngine):
__visit_name__ = 'INTERVAL'
-
+
def __init__(self,
day_precision=None,
second_precision=None):
"""Construct an INTERVAL.
-
+
Note that only DAY TO SECOND intervals are currently supported.
This is due to a lack of support for YEAR TO MONTH intervals
within available DBAPIs (cx_oracle and zxjdbc).
-
+
:param day_precision: the day precision value. this is the number of digits
to store for the day field. Defaults to "2"
:param second_precision: the second precision value. this is the number of digits
to store for the fractional seconds field. Defaults to "6".
-
+
"""
self.day_precision = day_precision
self.second_precision = second_precision
-
+
@classmethod
def _adapt_from_generic_interval(cls, interval):
return INTERVAL(day_precision=interval.day_precision,
second_precision=interval.second_precision)
-
+
def adapt(self, impltype):
return impltype(day_precision=self.day_precision,
second_precision=self.second_precision)
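# A minimal usage sketch (table/column names are illustrative only, not part
# of this module); INTERVAL renders DAY TO SECOND intervals as described in
# its docstring:
#
#     from sqlalchemy import Table, Column, Integer, MetaData
#     from sqlalchemy.dialects.oracle import INTERVAL
#
#     metadata = MetaData()
#     events = Table('event', metadata,
#         Column('id', Integer, primary_key=True),
#         Column('duration', INTERVAL(day_precision=2, second_precision=6)))
#
# which would emit "duration INTERVAL DAY(2) TO SECOND(6)" in the CREATE TABLE
# statement.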
class ROWID(sqltypes.TypeEngine):
"""Oracle ROWID type.
-
+
When used in a cast() or similar, generates ROWID.
-
+
"""
__visit_name__ = 'ROWID'
-
-
-
+
+
+
class _OracleBoolean(sqltypes.Boolean):
def get_dbapi_type(self, dbapi):
return dbapi.NUMBER
# Oracle DATE == DATETIME
# Oracle does not allow milliseconds in DATE
# Oracle does not support TIME columns
-
+
def visit_datetime(self, type_):
return self.visit_DATE(type_)
-
+
def visit_float(self, type_):
return self.visit_FLOAT(type_)
-
+
def visit_unicode(self, type_):
if self.dialect._supports_nchar:
return self.visit_NVARCHAR(type_)
else:
return self.visit_VARCHAR(type_)
-
+
def visit_INTERVAL(self, type_):
return "INTERVAL DAY%s TO SECOND%s" % (
type_.day_precision is not None and
def visit_DOUBLE_PRECISION(self, type_):
return self._generate_numeric(type_, "DOUBLE PRECISION")
-
+
def visit_NUMBER(self, type_, **kw):
return self._generate_numeric(type_, "NUMBER", **kw)
-
+
def _generate_numeric(self, type_, name, precision=None, scale=None):
if precision is None:
precision = type_.precision
-
+
if scale is None:
scale = getattr(type_, 'scale', None)
-
+
if precision is None:
return name
elif scale is None:
return "%(name)s(%(precision)s)" % {'name':name,'precision': precision}
else:
return "%(name)s(%(precision)s, %(scale)s)" % {'name':name,'precision': precision, 'scale' : scale}
-
+
def visit_VARCHAR(self, type_):
if self.dialect._supports_char_length:
return "VARCHAR(%(length)s CHAR)" % {'length' : type_.length}
def visit_NVARCHAR(self, type_):
return "NVARCHAR2(%(length)s)" % {'length' : type_.length}
-
+
def visit_text(self, type_):
return self.visit_CLOB(type_)
def visit_big_integer(self, type_):
return self.visit_NUMBER(type_, precision=19)
-
+
def visit_boolean(self, type_):
return self.visit_SMALLINT(type_)
-
+
def visit_RAW(self, type_):
return "RAW(%(length)s)" % {'length' : type_.length}
def visit_ROWID(self, type_):
return "ROWID"
-
+
class OracleCompiler(compiler.SQLCompiler):
"""Oracle compiler modifies the lexical structure of Select
statements to work under non-ANSI configured Oracle databases, if
the use_ansi flag is False.
"""
-
+
compound_keywords = util.update_copy(
compiler.SQLCompiler.compound_keywords,
- {
+ {
expression.CompoundSelect.EXCEPT : 'MINUS'
}
)
-
+
def __init__(self, *args, **kwargs):
super(OracleCompiler, self).__init__(*args, **kwargs)
self.__wheres = {}
def visit_mod(self, binary, **kw):
return "mod(%s, %s)" % (self.process(binary.left), self.process(binary.right))
-
+
def visit_now_func(self, fn, **kw):
return "CURRENT_TIMESTAMP"
-
+
def visit_char_length_func(self, fn, **kw):
return "LENGTH" + self.function_argspec(fn, **kw)
-
+
def visit_match_op(self, binary, **kw):
return "CONTAINS (%s, %s)" % (self.process(binary.left), self.process(binary.right))
-
+
def get_select_hint_text(self, byfroms):
return " ".join(
"/*+ %s */" % text for table, text in byfroms.items()
)
-
+
def function_argspec(self, fn, **kw):
if len(fn.clauses) > 0:
return compiler.SQLCompiler.function_argspec(self, fn, **kw)
else:
return ""
-
+
def default_from(self):
"""Called when a ``SELECT`` statement has no froms, and no ``FROM`` clause is to be appended.
{'binary':visit_binary}))
else:
clauses.append(join.onclause)
-
+
for j in join.left, join.right:
if isinstance(j, expression.Join):
visit_join(j)
-
+
for f in froms:
if isinstance(f, expression.Join):
visit_join(f)
-
+
if not clauses:
return None
else:
def visit_alias(self, alias, asfrom=False, ashint=False, **kwargs):
"""Oracle doesn't like ``FROM table AS alias``. Is the AS standard SQL??"""
-
+
if asfrom or ashint:
alias_name = isinstance(alias.name, expression._generated_label) and \
self._truncated_identifier("alias", alias.name) or alias.name
-
+
if ashint:
return alias_name
elif asfrom:
return self.process(alias.original, **kwargs)
def returning_clause(self, stmt, returning_cols):
-
+
def create_out_param(col, i):
bindparam = sql.outparam("ret_%d" % i, type_=col.type)
self.binds[bindparam.key] = bindparam
return self.bindparam_string(self._truncate_bindparam(bindparam))
-
+
columnlist = list(expression._select_iterables(returning_cols))
-
+
# within_columns_clause =False so that labels (foo AS bar) don't render
columns = [self.process(c, within_columns_clause=False, result_map=self.result_map) for c in columnlist]
-
+
binds = [create_out_param(c, i) for i, c in enumerate(columnlist)]
-
+
return 'RETURNING ' + ', '.join(columns) + " INTO " + ", ".join(binds)
def _TODO_visit_compound_select(self, select):
existingfroms = self.stack[-1]['from']
else:
existingfroms = None
-
+
froms = select._get_display_froms(existingfroms)
whereclause = self._get_nonansi_join_whereclause(froms)
if whereclause is not None:
limitselect._oracle_visit = True
limitselect._is_wrapper = True
-
+
# If needed, add the limiting clause
if select._limit is not None:
max_row = select._limit
text = ""
if constraint.ondelete is not None:
text += " ON DELETE %s" % constraint.ondelete
-
+
# oracle has no ON UPDATE CASCADE -
# it's only available via triggers http://asktom.oracle.com/tkyte/update_cascade/index.html
if constraint.onupdate is not None:
"Oracle does not contain native UPDATE CASCADE "
"functionality - onupdates will not be rendered for foreign keys. "
"Consider using deferrable=True, initially='deferred' or triggers.")
-
+
return text
class OracleIdentifierPreparer(compiler.IdentifierPreparer):
-
+
reserved_words = set([x.lower() for x in RESERVED_WORDS])
illegal_initial_characters = set(xrange(0, 10)).union(["_", "$"])
or value[0] in self.illegal_initial_characters
or not self.legal_characters.match(unicode(value))
)
-
+
def format_savepoint(self, savepoint):
name = re.sub(r'^_+', '', savepoint.ident)
return super(OracleIdentifierPreparer, self).format_savepoint(savepoint, name)
-
-
+
+
class OracleExecutionContext(default.DefaultExecutionContext):
def fire_sequence(self, seq):
return int(self._execute_scalar("SELECT " +
self.dialect.identifier_preparer.format_sequence(seq) +
".nextval FROM DUAL"))
-
+
class OracleDialect(default.DefaultDialect):
name = 'oracle'
supports_alter = True
supports_sequences = True
sequences_optional = False
postfetch_lastrowid = False
-
+
default_paramstyle = 'named'
colspecs = colspecs
ischema_names = ischema_names
requires_name_normalize = True
-
+
supports_default_values = False
supports_empty_insert = False
-
+
statement_compiler = OracleCompiler
ddl_compiler = OracleDDLCompiler
type_compiler = OracleTypeCompiler
preparer = OracleIdentifierPreparer
execution_ctx_cls = OracleExecutionContext
-
+
reflection_options = ('oracle_resolve_synonyms', )
def __init__(self,
'implicit_returning',
self.server_version_info > (10, )
)
-
+
if self._is_oracle_8:
self.colspecs = self.colspecs.copy()
self.colspecs.pop(sqltypes.Interval)
def _is_oracle_8(self):
return self.server_version_info and \
self.server_version_info < (9, )
-
+
@property
def _supports_char_length(self):
return not self._is_oracle_8
@property
def _supports_nchar(self):
return not self._is_oracle_8
-
+
def do_release_savepoint(self, connection, name):
# Oracle does not support RELEASE SAVEPOINT
pass
def get_indexes(self, connection, table_name, schema=None,
resolve_synonyms=False, dblink='', **kw):
-
+
info_cache = kw.get('info_cache')
(table_name, schema, dblink, synonym) = \
self._prepare_reflection_args(connection, table_name, schema,
a.index_name = b.index_name
AND a.table_owner = b.table_owner
AND a.table_name = b.table_name
-
+
AND a.table_name = :table_name
AND a.table_owner = :schema
ORDER BY a.index_name, a.column_position""" % {'dblink': dblink})
dblink=dblink,
info_cache=kw.get('info_cache'))
uniqueness = dict(NONUNIQUE=False, UNIQUE=True)
-
+
oracle_sys_col = re.compile(r'SYS_NC\d+\$', re.IGNORECASE)
def upper_name_set(names):
constraint_data = self._get_constraint_data(connection, table_name,
schema, dblink,
info_cache=kw.get('info_cache'))
-
+
for row in constraint_data:
#print "ROW:" , row
(cons_name, cons_type, local_column, remote_table, remote_column, remote_owner) = \
}
fkeys = util.defaultdict(fkey_rec)
-
+
for row in constraint_data:
(cons_name, cons_type, local_column, remote_table, remote_column, remote_owner) = \
row[0:2] + tuple([self.normalize_name(x) for x in row[2:6]])
if ref_synonym:
remote_table = self.normalize_name(ref_synonym)
remote_owner = self.normalize_name(ref_remote_owner)
-
+
rec['referred_table'] = remote_table
-
+
if requested_schema is not None or self.denormalize_name(remote_owner) != schema:
rec['referred_schema'] = remote_owner
-
+
local_cols.append(local_column)
remote_cols.append(remote_column)
class _OuterJoinColumn(sql.ClauseElement):
__visit_name__ = 'outer_join_column'
-
+
def __init__(self, column):
self.column = column
* *arraysize* - set the cx_oracle.arraysize value on cursors, in SQLAlchemy
it defaults to 50. See the section on "LOB Objects" below.
-
+
* *auto_convert_lobs* - defaults to True, see the section on LOB objects.
* *auto_setinputsizes* - the cx_oracle.setinputsizes() call is issued for all bind parameters.
other backends, and so that the linkage to a live cursor is not needed in scenarios
like result.fetchmany() and result.fetchall(). This means that by default, LOB
objects are fully fetched unconditionally by SQLAlchemy, and the linkage to a live
-cursor is broken.
+cursor is broken.
To disable this processing, pass ``auto_convert_lobs=False`` to :func:`create_engine()`.
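For example (a sketch only; the URL shown is illustrative)::

    engine = create_engine('oracle+cx_oracle://scott:tiger@xe',
                            auto_convert_lobs=False)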
# regardless of the scale given for the originating type.
# So we still need an old school isinstance() handler
# here for decimals.
-
+
if dialect.supports_native_decimal:
if self.asdecimal:
if self.scale is None:
if not dialect.auto_convert_lobs:
# return the cx_oracle.LOB directly.
return None
-
+
def process(value):
if value is not None:
return value.read()
else:
return super(_NativeUnicodeMixin, self).bind_processor(dialect)
# end Py2K
-
+
# we apply a connection output handler that returns
# unicode in all cases, so the "native_unicode" flag
# will be set for the default String.result_processor.
-
+
class _OracleChar(_NativeUnicodeMixin, sqltypes.CHAR):
def get_dbapi_type(self, dbapi):
return dbapi.FIXED_CHAR
class _OracleNVarChar(_NativeUnicodeMixin, sqltypes.NVARCHAR):
def get_dbapi_type(self, dbapi):
return getattr(dbapi, 'UNICODE', dbapi.STRING)
-
+
class _OracleText(_LOBMixin, sqltypes.Text):
def get_dbapi_type(self, dbapi):
return dbapi.CLOB
val = int(val)
return val
return to_int
-
+
class _OracleBinary(_LOBMixin, sqltypes.LargeBinary):
def get_dbapi_type(self, dbapi):
return dbapi.BLOB
class _OracleInterval(oracle.INTERVAL):
def get_dbapi_type(self, dbapi):
return dbapi.INTERVAL
-
+
class _OracleRaw(oracle.RAW):
pass
class _OracleRowid(oracle.ROWID):
def get_dbapi_type(self, dbapi):
return dbapi.ROWID
-
+
class OracleCompiler_cx_oracle(OracleCompiler):
def bindparam_string(self, name):
if self.preparer._bindparam_requires_quotes(name):
else:
return OracleCompiler.bindparam_string(self, name)
-
+
class OracleExecutionContext_cx_oracle(OracleExecutionContext):
-
+
def pre_exec(self):
quoted_bind_names = \
getattr(self.compiled, '_quoted_bind_names', None)
self.out_parameters[name] = self.cursor.var(dbtype)
self.parameters[0][quoted_bind_names.get(name, name)] = \
self.out_parameters[name]
-
+
def create_cursor(self):
c = self._connection.connection.cursor()
if self.dialect.arraysize:
type_code = column[1]
if type_code in self.dialect._cx_oracle_binary_types:
result = base.BufferedColumnResultProxy(self)
-
+
if result is None:
result = base.ResultProxy(self)
-
+
if hasattr(self, 'out_parameters'):
if self.compiled_parameters is not None and \
len(self.compiled_parameters) == 1:
result.out_parameters = out_parameters = {}
-
+
for bind, name in self.compiled.bind_names.items():
if name in self.out_parameters:
type = bind.type
class OracleExecutionContext_cx_oracle_with_unicode(OracleExecutionContext_cx_oracle):
"""Support WITH_UNICODE in Python 2.xx.
-
+
WITH_UNICODE allows cx_Oracle's Python 3 unicode handling
behavior under Python 2.x. This mode in some cases disallows
and in other cases silently passes corrupted data when
non-Python-unicode strings (a.k.a. plain old Python strings)
are passed as arguments to connect(), the statement sent to execute(),
- or any of the bind parameter keys or values sent to execute().
+ or any of the bind parameter keys or values sent to execute().
This optional context therefore ensures that all statements are
passed as Python unicode objects.
-
+
"""
def __init__(self, *arg, **kw):
OracleExecutionContext_cx_oracle.__init__(self, *arg, **kw)
def _execute_scalar(self, stmt):
return super(OracleExecutionContext_cx_oracle_with_unicode, self).\
_execute_scalar(unicode(stmt))
-
+
class ReturningResultProxy(base.FullyBufferedResultProxy):
"""Result proxy which stuffs the _returning clause + outparams into the fetch."""
-
+
def __init__(self, context, returning_params):
self._returning_params = returning_params
super(ReturningResultProxy, self).__init__(context)
-
+
def _cursor_description(self):
returning = self.context.compiled.returning
-
+
ret = []
for c in returning:
if hasattr(c, 'name'):
else:
ret.append((c.anon_label, c.type))
return ret
-
+
def _buffer_rows(self):
return [tuple(self._returning_params["ret_%d" % i]
for i, c in enumerate(self._returning_params))]
statement_compiler = OracleCompiler_cx_oracle
driver = "cx_oracle"
-
+
colspecs = colspecs = {
sqltypes.Numeric: _OracleNumeric,
sqltypes.Date : _OracleDate, # generic type, assume datetime.date is desired
oracle.ROWID: _OracleRowid,
}
-
+
execute_sequence_format = list
-
+
def __init__(self,
auto_setinputsizes=True,
auto_convert_lobs=True,
self.supports_timestamp = self.dbapi is None or hasattr(self.dbapi, 'TIMESTAMP' )
self.auto_setinputsizes = auto_setinputsizes
self.auto_convert_lobs = auto_convert_lobs
-
+
if hasattr(self.dbapi, 'version'):
self.cx_oracle_ver = tuple([int(x) for x in self.dbapi.version.split('.')])
- else:
+ else:
self.cx_oracle_ver = (0, 0, 0)
-
+
def types(*names):
return set([
getattr(self.dbapi, name, None) for name in names
if self._is_oracle_8:
self.supports_unicode_binds = False
self._detect_decimal_char(connection)
-
+
def _detect_decimal_char(self, connection):
"""detect if the decimal separator character is not '.', as
is the case with European locale settings for NLS_LANG.
-
+
cx_oracle itself uses similar logic when it formats Python
Decimal objects to strings on the bind side (as of 5.0.3),
as Oracle sends/receives string numerics only in the
current locale.
-
+
"""
if self.cx_oracle_ver < (5,):
# no output type handlers before version 5
return
-
+
cx_Oracle = self.dbapi
conn = connection.connection
-
+
# override the output_type_handler that's
# on the cx_oracle connection with a plain
# one on the cursor
-
+
def output_type_handler(cursor, name, defaultType,
size, precision, scale):
return cursor.var(
lambda value: _detect_decimal(value.replace(char, '.'))
self._to_decimal = \
lambda value: Decimal(value.replace(char, '.'))
-
+
def _detect_decimal(self, value):
if "." in value:
return Decimal(value)
else:
return int(value)
-
+
_to_decimal = Decimal
-
+
def on_connect(self):
if self.cx_oracle_ver < (5,):
# no output type handlers before version 5
return
-
+
cx_Oracle = self.dbapi
def output_type_handler(cursor, name, defaultType,
size, precision, scale):
# allow all strings to come back natively as Unicode
elif defaultType in (cx_Oracle.STRING, cx_Oracle.FIXED_CHAR):
return cursor.var(unicode, size, cursor.arraysize)
-
+
def on_connect(conn):
conn.outputtypehandler = output_type_handler
-
+
return on_connect
-
+
def create_connect_args(self, url):
dialect_opts = dict(url.query)
for opt in ('use_ansi', 'auto_setinputsizes', 'auto_convert_lobs',
"The SQLAlchemy PostgreSQL dialect has been renamed from 'postgres' to 'postgresql'. "
"The new URL format is postgresql[+driver]://<user>:<pass>@<host>/<dbname>"
)
-
+
from sqlalchemy.dialects.postgresql import *
from sqlalchemy.dialects.postgresql import base
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
-"""Support for the PostgreSQL database.
+"""Support for the PostgreSQL database.
For information on connecting using specific drivers, see the documentation
section regarding that driver.
result = table.insert().returning(table.c.col1, table.c.col2).\\
values(name='foo')
print result.fetchall()
-
+
# UPDATE..RETURNING
result = table.update().returning(table.c.col1, table.c.col2).\\
where(table.c.name=='foo').values(name='bar')
class DOUBLE_PRECISION(sqltypes.Float):
__visit_name__ = 'DOUBLE_PRECISION'
-
+
class INET(sqltypes.TypeEngine):
__visit_name__ = "INET"
PGInet = INET
def __init__(self, timezone=False, precision=None):
super(TIMESTAMP, self).__init__(timezone=timezone)
self.precision = precision
-
+
class TIME(sqltypes.TIME):
def __init__(self, timezone=False, precision=None):
super(TIME, self).__init__(timezone=timezone)
self.precision = precision
-
+
class INTERVAL(sqltypes.TypeEngine):
"""Postgresql INTERVAL type.
-
+
The INTERVAL type may not be supported on all DBAPIs.
It is known to work on psycopg2 and not pg8000 or zxjdbc.
-
+
"""
__visit_name__ = 'INTERVAL'
def __init__(self, precision=None):
self.precision = precision
-
+
def adapt(self, impltype):
return impltype(self.precision)
@property
def _type_affinity(self):
return sqltypes.Interval
-
+
PGInterval = INTERVAL
class BIT(sqltypes.TypeEngine):
class UUID(sqltypes.TypeEngine):
"""Postgresql UUID type.
-
+
Represents the UUID column type, interpreting
data either as natively returned by the DBAPI
or as Python uuid objects.
The UUID type may not be supported on all DBAPIs.
It is known to work on psycopg2 and not pg8000.
-
+
"""
__visit_name__ = 'UUID'
-
+
def __init__(self, as_uuid=False):
"""Construct a UUID type.
-
-
+
+
:param as_uuid=False: if True, values will be interpreted
as Python uuid objects, converting to/from string via the
DBAPI.
-
+
"""
if as_uuid and _python_UUID is None:
raise NotImplementedError(
"This version of Python does not support the native UUID type."
)
self.as_uuid = as_uuid
-
+
def bind_processor(self, dialect):
if self.as_uuid:
def process(value):
return process
else:
return None
-
+
def result_processor(self, dialect, coltype):
if self.as_uuid:
def process(value):
return process
else:
return None
-
+
PGUuid = UUID
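# A minimal usage sketch (table/column names are illustrative only, not part
# of this module); with as_uuid=True, values round-trip as Python uuid.UUID
# objects:
#
#     from sqlalchemy import Table, Column, MetaData
#     from sqlalchemy.dialects.postgresql import UUID
#
#     metadata = MetaData()
#     accounts = Table('account', metadata,
#         Column('id', UUID(as_uuid=True), primary_key=True))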
class ARRAY(sqltypes.MutableType, sqltypes.Concatenable, sqltypes.TypeEngine):
"""Postgresql ARRAY type.
-
+
Represents values as Python lists.
The ARRAY type may not be supported on all DBAPIs.
It is known to work on psycopg2 and not pg8000.
-
+
**Note:** be sure to read the notes for
:class:`.MutableType` regarding ORM
performance implications. The :class:`.ARRAY` type's
mutability can be disabled using the "mutable" flag.
-
+
"""
__visit_name__ = 'ARRAY'
-
+
def __init__(self, item_type, mutable=True, as_tuple=False):
"""Construct an ARRAY.
:param mutable=True: Specify whether lists passed to this
class should be considered mutable. If so, generic copy operations
(typically used by the ORM) will shallow-copy values.
-
+
:param as_tuple=False: Specify whether return results should be converted
to tuples from lists. DBAPIs such as psycopg2 return lists by default.
When tuples are returned, the results are hashable. This flag can only
be set to ``True`` when ``mutable`` is set to ``False``. (new in 0.6.5)
-
+
"""
if isinstance(item_type, ARRAY):
raise ValueError("Do not nest ARRAY types; ARRAY(basetype) "
"mutable must be set to False if as_tuple is True."
)
self.as_tuple = as_tuple
-
+
def copy_value(self, value):
if value is None:
return None
impl.__dict__.update(self.__dict__)
impl.item_type = self.item_type.dialect_impl(dialect)
return impl
-
+
def adapt(self, impltype):
return impltype(
self.item_type,
mutable=self.mutable,
as_tuple=self.as_tuple
)
-
+
def bind_processor(self, dialect):
item_proc = self.item_type.bind_processor(dialect)
if item_proc:
def create(self, bind=None, checkfirst=True):
if not bind.dialect.supports_native_enum:
return
-
+
if not checkfirst or \
not bind.dialect.has_type(bind, self.name, schema=self.schema):
bind.execute(CreateEnumType(self))
if not checkfirst or \
bind.dialect.has_type(bind, self.name, schema=self.schema):
bind.execute(DropEnumType(self))
-
+
def _on_table_create(self, event, target, bind, **kw):
self.create(bind=bind, checkfirst=True)
class PGCompiler(compiler.SQLCompiler):
-
+
def visit_match_op(self, binary, **kw):
return "%s @@ to_tsquery(%s)" % (
self.process(binary.left),
return super(PGCompiler, self).for_update_clause(select)
def returning_clause(self, stmt, returning_cols):
-
+
columns = [
self.process(
self.label_select_column(None, c, asfrom=False),
result_map=self.result_map)
for c in expression._select_iterables(returning_cols)
]
-
+
return 'RETURNING ' + ', '.join(columns)
def visit_extract(self, extract, **kwargs):
affinity = extract.expr.type._type_affinity
else:
affinity = None
-
+
casts = {
sqltypes.Date:'date',
sqltypes.DateTime:'timestamp',
def visit_create_enum_type(self, create):
type_ = create.element
-
+
return "CREATE TYPE %s AS ENUM (%s)" % (
self.preparer.format_type(type_),
",".join("'%s'" % e for e in type_.enums)
return "DROP TYPE %s" % (
self.preparer.format_type(type_)
)
-
+
def visit_create_index(self, create):
preparer = self.preparer
index = create.element
preparer.format_table(index.table),
', '.join([preparer.format_column(c)
for c in index.columns]))
-
+
if "postgres_where" in index.kwargs:
whereclause = index.kwargs['postgres_where']
util.warn_deprecated(
whereclause = index.kwargs['postgresql_where']
else:
whereclause = None
-
+
if whereclause is not None:
whereclause = sql_util.expression_as_ddl(whereclause)
where_compiled = self.sql_compiler.process(whereclause)
return "FLOAT"
else:
return "FLOAT(%(precision)s)" % {'precision': type_.precision}
-
+
def visit_DOUBLE_PRECISION(self, type_):
return "DOUBLE PRECISION"
-
+
def visit_BIGINT(self, type_):
return "BIGINT"
def visit_datetime(self, type_):
return self.visit_TIMESTAMP(type_)
-
+
def visit_enum(self, type_):
if not type_.native_enum or not self.dialect.supports_native_enum:
return super(PGTypeCompiler, self).visit_enum(type_)
else:
return self.visit_ENUM(type_)
-
+
def visit_ENUM(self, type_):
return self.dialect.identifier_preparer.format_type(type_)
-
+
def visit_TIMESTAMP(self, type_):
return "TIMESTAMP%s %s" % (
getattr(type_, 'precision', None) and "(%d)" %
def visit_large_binary(self, type_):
return self.visit_BYTEA(type_)
-
+
def visit_BYTEA(self, type_):
return "BYTEA"
def format_type(self, type_, use_schema=True):
if not type_.name:
raise exc.ArgumentError("Postgresql ENUM type requires a name.")
-
+
name = self.quote(type_.name, type_.quote)
if not self.omit_schema and use_schema and type_.schema is not None:
name = self.quote_schema(type_.schema, type_.quote) + "." + name
return name
-
+
class PGInspector(reflection.Inspector):
def __init__(self, conn):
return self._execute_scalar(exc)
return super(PGExecutionContext, self).get_insert_default(column)
-
+
class PGDialect(default.DefaultDialect):
name = 'postgresql'
supports_alter = True
max_identifier_length = 63
supports_sane_rowcount = True
-
+
supports_native_enum = True
supports_native_boolean = True
-
+
supports_sequences = True
sequences_optional = True
preexecute_autoincrement_sequences = True
postfetch_lastrowid = False
-
+
supports_default_values = True
supports_empty_insert = False
default_paramstyle = 'pyformat'
ischema_names = ischema_names
colspecs = colspecs
-
+
statement_compiler = PGCompiler
ddl_compiler = PGDDLCompiler
type_compiler = PGTypeCompiler
return connect
else:
return None
-
+
def do_begin_twophase(self, connection, xid):
self.do_begin(connection.connection)
rows = c.fetchall()
domains = self._load_domains(connection)
enums = self._load_enums(connection)
-
+
# format columns
columns = []
for name, format_type, default, notnull, attnum, table_oid in rows:
## strip (5) from character varying(5), timestamp(5)
# with time zone, etc
attype = re.sub(r'\([\d,]+\)', '', format_type)
-
+
# strip '[]' from integer[], etc.
attype = re.sub(r'\[\]', '', attype)
-
+
nullable = not notnull
is_array = format_type.endswith('[]')
charlen = re.search('\(([\d,]+)\)', format_type)
if charlen:
charlen = charlen.group(1)
kwargs = {}
-
+
if attype == 'numeric':
if charlen:
prec, scale = charlen.split(',')
args = (int(charlen),)
else:
args = ()
-
+
while True:
if attype in self.ischema_names:
coltype = self.ischema_names[attype]
else:
coltype = None
break
-
+
if coltype:
coltype = coltype(*args, **kwargs)
if is_array:
def get_pk_constraint(self, connection, table_name, schema=None, **kw):
cols = self.get_primary_keys(connection, table_name,
schema=schema, **kw)
-
+
table_oid = self.get_table_oid(connection, table_name, schema,
info_cache=kw.get('info_cache'))
value = value.replace(self.escape_quote, self.escape_to_quote)
return value.replace('%', '%%')
-
+
class PGDialect_pg8000(PGDialect):
driver = 'pg8000'
supports_unicode_statements = True
-
+
supports_unicode_binds = True
-
+
default_paramstyle = 'format'
supports_sane_multi_rowcount = False
execution_ctx_cls = PGExecutionContext_pg8000
statement_compiler = PGCompiler_pg8000
preparer = PGIdentifierPreparer_pg8000
-
+
colspecs = util.update_copy(
PGDialect.colspecs,
{
sqltypes.Numeric : _PGNumeric,
}
)
-
+
@classmethod
def dbapi(cls):
return __import__('pg8000').dbapi
class PGExecutionContext_psycopg2(PGExecutionContext):
def create_cursor(self):
# TODO: coverage for server side cursors + select.for_update()
-
+
if self.dialect.server_side_cursors:
is_server_side = \
self.execution_options.get('stream_results', True) and (
def get_result_proxy(self):
if logger.isEnabledFor(logging.INFO):
self._log_notices(self.cursor)
-
+
if self.__is_server_side:
return base.BufferedRowResultProxy(self)
else:
class PGCompiler_psycopg2(PGCompiler):
def visit_mod(self, binary, **kw):
return self.process(binary.left) + " %% " + self.process(binary.right)
-
+
def post_process_text(self, text):
return text.replace('%', '%%')
self.server_side_cursors = server_side_cursors
self.use_native_unicode = use_native_unicode
self.supports_unicode_binds = use_native_unicode
-
+
@classmethod
def dbapi(cls):
psycopg = __import__('psycopg2')
return psycopg
-
+
def on_connect(self):
if self.isolation_level is not None:
extensions = __import__('psycopg2.extensions').extensions
'READ_UNCOMMITTED':extensions.ISOLATION_LEVEL_READ_UNCOMMITTED,
'REPEATABLE_READ':extensions.ISOLATION_LEVEL_REPEATABLE_READ,
'SERIALIZABLE':extensions.ISOLATION_LEVEL_SERIALIZABLE
-
+
}
def base_on_connect(conn):
try:
self.isolation_level)
else:
base_on_connect = None
-
+
if self.dbapi and self.use_native_unicode:
extensions = __import__('psycopg2.extensions').extensions
def connect(conn):
return False
dialect = PGDialect_psycopg2
-
+
from sqlalchemy.types import BLOB, BOOLEAN, CHAR, DATE, DATETIME, DECIMAL,\
FLOAT, INTEGER, NUMERIC, SMALLINT, TEXT, TIME,\
TIMESTAMP, VARCHAR
-
+
class _DateTimeMixin(object):
_reg = None
_storage_format = None
-
+
def __init__(self, storage_format=None, regexp=None, **kwargs):
if regexp is not None:
self._reg = re.compile(regexp)
if storage_format is not None:
self._storage_format = storage_format
-
+
class DATETIME(_DateTimeMixin, sqltypes.DateTime):
_storage_format = "%04d-%02d-%02d %02d:%02d:%02d.%06d"
-
+
def bind_processor(self, dialect):
datetime_datetime = datetime.datetime
datetime_date = datetime.date
raise TypeError("SQLite Date type only accepts Python "
"date objects as input.")
return process
-
+
def result_processor(self, dialect, coltype):
if self._reg:
return processors.str_to_datetime_processor_factory(
raise TypeError("SQLite Time type only accepts Python "
"time objects as input.")
return process
-
+
def result_processor(self, dialect, coltype):
if self._reg:
return processors.str_to_datetime_processor_factory(
def visit_now_func(self, fn, **kw):
return "CURRENT_TIMESTAMP"
-
+
def visit_char_length_func(self, fn, **kw):
return "length%s" % self.function_argspec(fn)
-
+
def visit_cast(self, cast, **kwargs):
if self.dialect.supports_cast:
return super(SQLiteCompiler, self).visit_cast(cast)
isinstance(column.type, sqltypes.Integer) and \
not column.foreign_keys:
colspec += " PRIMARY KEY AUTOINCREMENT"
-
+
return colspec
def visit_primary_key_constraint(self, constraint):
return super(SQLiteDDLCompiler, self).\
visit_primary_key_constraint(constraint)
-
+
def visit_foreign_key_constraint(self, constraint):
-
+
local_table = constraint._elements.values()[0].parent.table
remote_table = list(constraint._elements.values())[0].column.table
-
+
if local_table.schema != remote_table.schema:
return None
else:
def define_constraint_remote_table(self, constraint, table, preparer):
"""Format the remote table clause of a CREATE CONSTRAINT clause."""
-
+
return preparer.format_table(table, use_schema=False)
def visit_create_index(self, create):
supports_default_values = True
supports_empty_insert = False
supports_cast = True
-
+
default_paramstyle = 'qmark'
statement_compiler = SQLiteCompiler
ddl_compiler = SQLiteDDLCompiler
"Valid isolation levels for sqlite are 'SERIALIZABLE' and "
"'READ UNCOMMITTED'.")
self.isolation_level = isolation_level
-
+
# this flag used by pysqlite dialect, and perhaps others in the
# future, to indicate the driver is handling date/timestamp
# conversions (and perhaps datetime/time as well on some
self.supports_cast = \
self.dbapi.sqlite_version_info >= (3, 2, 3)
-
+
def on_connect(self):
if self.isolation_level is not None:
if self.isolation_level == 'READ UNCOMMITTED':
isolation_level = 1
else:
isolation_level = 0
-
+
def connect(conn):
cursor = conn.cursor()
cursor.execute("PRAGMA read_uncommitted = %d" % isolation_level)
def _pragma_cursor(cursor):
"""work around SQLite issue whereby cursor.description is blank when PRAGMA returns no rows."""
-
+
if cursor.closed:
cursor.fetchone = lambda: None
return cursor
the URL. Note that the format of a url is::
driver://user:pass@host/database
-
+
This means that the actual filename to be used starts with the characters to the
**right** of the third slash. So connecting to a relative filepath looks like::
# relative path
e = create_engine('sqlite:///path/to/database.db')
-
+
An absolute path, which is denoted by starting with a slash, means you need **four**
slashes::
# absolute path
e = create_engine('sqlite:////path/to/database.db')
-To use a Windows path, regular drive specifications and backslashes can be used.
+To use a Windows path, regular drive specifications and backslashes can be used.
Double backslashes are probably needed::
# absolute path on Windows
somewhat reasonably, the SQLite dialect will specify that the :class:`~sqlalchemy.pool.SingletonThreadPool`
be used by default. This pool maintains a single SQLite connection per thread
that is held open up to a count of five concurrent threads. When more than five threads
-are used, a cleanup mechanism will dispose of excess unused connections.
+are used, a cleanup mechanism will dispose of excess unused connections.
Two optional pool implementations that may be appropriate for particular SQLite usage scenarios:
application using an in-memory database, assuming the threading issues inherent in
pysqlite are somehow accommodated for. This pool holds persistently onto a single connection
which is never closed, and is returned for all requests.
-
+
* the :class:`sqlalchemy.pool.NullPool` might be appropriate for an application that
makes use of a file-based sqlite database. This pool disables any actual "pooling"
behavior, and simply opens and closes real connections corresponding to the :func:`connect()`
return None
else:
return DATETIME.bind_processor(self, dialect)
-
+
def result_processor(self, dialect, coltype):
if dialect.native_datetime:
return None
return None
else:
return DATE.bind_processor(self, dialect)
-
+
def result_processor(self, dialect, coltype):
if dialect.native_datetime:
return None
sqltypes.TIMESTAMP:_SQLite_pysqliteTimeStamp,
}
)
-
+
# Py3K
#description_encoding = None
-
+
driver = 'pysqlite'
-
+
def __init__(self, **kwargs):
SQLiteDialect.__init__(self, **kwargs)
"within", "work", "writetext",
])
-
+
class _SybaseUnitypeMixin(object):
"""these types appear to return a buffer object."""
-
+
def result_processor(self, dialect, coltype):
def process(value):
if value is not None:
else:
return None
return process
-
+
class UNICHAR(_SybaseUnitypeMixin, sqltypes.Unicode):
__visit_name__ = 'UNICHAR'
class BIT(sqltypes.TypeEngine):
__visit_name__ = 'BIT'
-
+
class MONEY(sqltypes.TypeEngine):
__visit_name__ = "MONEY"
class UNIQUEIDENTIFIER(sqltypes.TypeEngine):
__visit_name__ = "UNIQUEIDENTIFIER"
-
+
class IMAGE(sqltypes.LargeBinary):
__visit_name__ = 'IMAGE'
class SybaseTypeCompiler(compiler.GenericTypeCompiler):
def visit_large_binary(self, type_):
return self.visit_IMAGE(type_)
-
+
def visit_boolean(self, type_):
return self.visit_BIT(type_)
def visit_TINYINT(self, type_):
return "TINYINT"
-
+
def visit_IMAGE(self, type_):
return "IMAGE"
def visit_MONEY(self, type_):
return "MONEY"
-
+
def visit_SMALLMONEY(self, type_):
return "SMALLMONEY"
-
+
def visit_UNIQUEIDENTIFIER(self, type_):
return "UNIQUEIDENTIFIER"
-
+
ischema_names = {
'integer' : INTEGER,
'unsigned int' : INTEGER, # TODO: unsigned flags
class SybaseExecutionContext(default.DefaultExecutionContext):
_enable_identity_insert = False
-
+
def set_ddl_autocommit(self, connection, value):
"""Must be implemented by subclasses to accommodate DDL executions.
-
+
"connection" is the raw unwrapped DBAPI connection. "value"
is True or False. When True, the connection should be configured
such that a DDL can take place subsequently. When False,
a DDL has taken place and the connection should be resumed
into non-autocommit mode.
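A subclass sketch, assuming a pyodbc-style DBAPI connection that
exposes a settable ``autocommit`` attribute (illustrative only)::

    def set_ddl_autocommit(self, connection, value):
        # 'connection' is the raw DBAPI connection
        connection.autocommit = value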
-
+
"""
raise NotImplementedError()
-
+
def pre_exec(self):
if self.isinsert:
tbl = self.compiled.statement.table
seq_column = tbl._autoincrement_column
insert_has_sequence = seq_column is not None
-
+
if insert_has_sequence:
self._enable_identity_insert = \
seq_column.key in self.compiled_parameters[0]
else:
self._enable_identity_insert = False
-
+
if self._enable_identity_insert:
self.cursor.execute("SET IDENTITY_INSERT %s ON" %
self.dialect.identifier_preparer.format_table(tbl))
self.set_ddl_autocommit(
self.root_connection.connection.connection,
True)
-
+
def post_exec(self):
if self.isddl:
self.set_ddl_autocommit(self.root_connection, False)
-
+
if self._enable_identity_insert:
self.cursor.execute(
- "SET IDENTITY_INSERT %s OFF" %
+ "SET IDENTITY_INSERT %s OFF" %
self.dialect.identifier_preparer.
format_table(self.compiled.statement.table)
)
self.max_identifier_length = 30
else:
self.max_identifier_length = 255
-
+
@reflection.cache
def get_table_names(self, connection, schema=None, **kw):
if schema is None:
UNICHAR
UNITEXT
UNIVARCHAR
-
+
"""
from sqlalchemy.dialects.sybase.base import SybaseDialect,\
class _SybNumeric_pyodbc(sqltypes.Numeric):
"""Turns Decimals with adjusted() < -6 into floats.
-
+
It's not yet known how to get decimals with many
significant digits or very large adjusted() into Sybase
via pyodbc.
-
+
"""
def bind_processor(self, dialect):
class SybaseSQLCompiler_pysybase(SybaseSQLCompiler):
def bindparam_string(self, name):
return "@" + name
-
+
class SybaseDialect_pysybase(SybaseDialect):
driver = 'pysybase'
execution_ctx_cls = SybaseExecutionContext_pysybase
a. Specifying behavior which needs to occur for bind parameters
or result row columns.
-
+
b. Specifying types that are entirely specific to the database
in use and have no analogue in the sqlalchemy.types package.
-
+
c. Specifying types where there is an analogue in sqlalchemy.types,
but the database in use takes vendor-specific flags for those
types.
d. If a TypeEngine class doesn't provide any of this, it should be
*removed* from the dialect.
-
+
2. the TypeEngine classes are *no longer* used for generating DDL. Dialects
now have a TypeCompiler subclass which uses the same visit_XXX model as
-other compilers.
+other compilers.
3. the "ischema_names" and "colspecs" dictionaries are now required members on
the Dialect class.
end users would never need to use _PGNumeric directly. However, if a dialect-specific
type is specifying a type *or* arguments that are not present generically, it should
match the real name of the type on that backend, in uppercase. E.g. postgresql.INET,
-mysql.ENUM, postgresql.ARRAY.
+mysql.ENUM, postgresql.ARRAY.
Or follow this handy flowchart:
|
v
the type should
- subclass the
- UPPERCASE
+ subclass the
+ UPPERCASE
type in types.py
(i.e. class BLOB(types.BLOB))
it ultimately deals with strings.
Example 5. Postgresql has a DATETIME type. The DBAPIs handle dates correctly,
-and no special arguments are used in PG's DDL beyond what types.py provides.
+and no special arguments are used in PG's DDL beyond what types.py provides.
Postgresql dialect therefore imports types.DATETIME into its base.py.
Ideally one should be able to specify a schema using names imported completely from a
dialect, all matching the real name on that backend:
from sqlalchemy.dialects.postgresql import base as pg
-
+
t = Table('mytable', metadata,
Column('id', pg.INTEGER, primary_key=True),
Column('name', pg.VARCHAR(300)),
module and from this dictionary.
6. "ischema_names" indicates string descriptions of types as returned from the database
-linked to TypeEngine classes.
+linked to TypeEngine classes.
a. The string name should be matched to the most specific type possible within
sqlalchemy.types, unless there is no matching type within sqlalchemy.types in which
own subclass of that type with special bind/result behavior - reflect to the types.py
UPPERCASE type as much as possible. With very few exceptions, all types
should reflect to an UPPERCASE type.
-
+
b. If the dialect contains a matching dialect-specific type that takes extra arguments
which the generic one does not, then point to the dialect-specific type. E.g.
mssql.VARCHAR takes a "collation" parameter which should be preserved.
-
+
5. DDL, or what was formerly issued by "get_col_spec()", is now handled exclusively by
a subclass of compiler.GenericTypeCompiler.
a. your TypeCompiler class will receive generic and uppercase types from
sqlalchemy.types. Do not assume the presence of dialect-specific attributes on
these types.
-
+
b. the visit_UPPERCASE methods on GenericTypeCompiler should *not* be overridden with
methods that produce a different DDL name. Uppercase types don't do any kind of
"guessing" - if visit_TIMESTAMP is called, the DDL should render as TIMESTAMP in
all cases, regardless of whether or not that type is legal on the backend database.
-
+
c. the visit_UPPERCASE methods *should* be overridden with methods that add additional
- arguments and flags to those types.
-
+ arguments and flags to those types.
+
d. the visit_lowercase methods are overridden to provide an interpretation of a generic
type. E.g. visit_large_binary() might be overridden to say "return self.visit_BIT(type_)".
-
+
e. visit_lowercase methods should *never* render strings directly - it should always
be via calling a visit_UPPERCASE() method.
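As a sketch of points d. and e., modeled on what the Postgresql dialect
does for its BYTEA type (not taken verbatim from any dialect):

    class MyTypeCompiler(compiler.GenericTypeCompiler):
        def visit_BYTEA(self, type_):
            # uppercase method: always renders the literal DDL name
            return "BYTEA"

        def visit_large_binary(self, type_):
            # lowercase method: interprets the generic type by
            # delegating to a visit_UPPERCASE method, never by
            # rendering a string directly
            return self.visit_BYTEA(type_)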
"""
# not sure what this was used for
-#import sqlalchemy.databases
+#import sqlalchemy.databases
from sqlalchemy.engine.base import (
BufferedColumnResultProxy,
:param execution_options: Dictionary execution options which will
be applied to all connections. See
:meth:`~sqlalchemy.engine.base.Connection.execution_options`
-
+
:param label_length=None: optional integer value which limits
the size of dynamically generated column labels to that many
characters. If less than 6, labels are generated as
"_(counter)". If ``None``, the value of
``dialect.max_identifier_length`` is used instead.
-
+
:param listeners: A list of one or more
:class:`~sqlalchemy.interfaces.PoolListener` objects which will
receive connection pool events.
-
+
:param logging_name: String identifier which will be used within
the "name" field of logging records generated within the
"sqlalchemy.engine" logger. Defaults to a hexstring of the
:param strategy='plain': selects alternate engine implementations.
Currently available is the ``threadlocal``
strategy, which is described in :ref:`threadlocal_strategy`.
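A combined sketch using several of the arguments above (the URL and
values are illustrative)::

    engine = create_engine('postgresql://scott:tiger@localhost/test',
                           label_length=30,
                           logging_name='myengine',
                           strategy='threadlocal')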
-
+
"""
strategy = kwargs.pop('strategy', default_strategy)
a tuple containing a version number for the DB backend in use.
This value is only available for supporting dialects, and is
typically populated during the initial connection to the database.
-
+
default_schema_name
the name of the default schema. This value is only available for
supporting dialects, and is typically populated during the
initial connection to the database.
-
+
execution_ctx_cls
a :class:`ExecutionContext` class used to handle statement execution
execute_sequence_format
either the 'tuple' or 'list' type, depending on what cursor.execute()
accepts for the second argument (they vary).
-
+
preparer
a :class:`~sqlalchemy.sql.compiler.IdentifierPreparer` class used to
quote identifiers.
True if 'implicit' primary key functions must be executed separately
in order to get their value. This is currently oriented towards
Postgresql.
-
+
implicit_returning
use RETURNING or equivalent during INSERT execution in order to load
newly generated primary keys and other column defaults in one execution,
If an insert statement has returning() specified explicitly,
the "implicit" functionality is not used and inserted_primary_key
will not be available.
-
+
dbapi_type_map
A mapping of DB-API type objects present in this Dialect's
DB-API implementation mapped to TypeEngine implementations used
supports_default_values
Indicates if the construct ``INSERT INTO tablename DEFAULT
VALUES`` is supported
-
+
supports_sequences
Indicates if the dialect supports CREATE SEQUENCE or similar.
-
+
sequences_optional
If True, indicates if the "optional" flag on the Sequence() construct
should signal to not generate a CREATE SEQUENCE. Applies only to
dialects that support sequences. Currently used only to allow Postgresql
SERIAL to be used on a column that specifies Sequence() for usage on
other backends.
-
+
supports_native_enum
Indicates if the dialect supports a native ENUM construct.
This will prevent types.Enum from generating a CHECK
Indicates if the dialect supports a native boolean construct.
This will prevent types.Boolean from generating a CHECK
constraint when that type is used.
-
+
"""
def create_connect_args(self, url):
Given a :class:`~sqlalchemy.engine.url.URL` object, returns a tuple
consisting of a `*args`/`**kwargs` suitable to send directly
to the dbapi's connect function.
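A dialect might implement this along the lines of the following
sketch (the keyword names depend entirely on the DBAPI in use)::

    def create_connect_args(self, url):
        opts = url.translate_connect_args(username='user')
        opts.update(url.query)
        return [], opts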
-
+
"""
raise NotImplementedError()
The returned result is cached *per dialect class* so can
contain no dialect-instance state.
-
+
"""
raise NotImplementedError()
Allows dialects to configure options based on server version info or
other properties.
-
+
The connection passed here is a SQLAlchemy Connection object,
with full capabilities.
-
+
The initialize() method of the base dialect should be called via
super().
-
+
"""
pass
properties from the database. If include_columns (a list or
set) is specified, limit the autoload to the given column
names.
-
+
The default implementation uses the
:class:`~sqlalchemy.engine.reflection.Inspector` interface to
provide the output, building upon the granular table/column/
constraint etc. methods of :class:`Dialect`.
-
+
"""
raise NotImplementedError()
def normalize_name(self, name):
"""convert the given name to lowercase if it is detected as
case insensitive.
-
+
this method is only used if the dialect defines
requires_name_normalize=True.
def denormalize_name(self, name):
"""convert the given name to a case insensitive identifier
for the backend if it is an all-lowercase name.
-
+
this method is only used if the dialect defines
requires_name_normalize=True.
"""
raise NotImplementedError()
-
+
def has_table(self, connection, table_name, schema=None):
"""Check the existence of a particular table in the database.
def _get_server_version_info(self, connection):
"""Retrieve the server version info from the given connection.
-
+
This is used by the default implementation to populate the
"server_version_info" attribute and is called exactly
once upon first connect.
-
+
"""
raise NotImplementedError()
-
+
def _get_default_schema_name(self, connection):
"""Return the string name of the currently selected schema from
the given connection.
This is used by the default implementation to populate the
"default_schema_name" attribute and is called exactly
once upon first connect.
-
+
"""
raise NotImplementedError()
The callable accepts a single argument "conn" which is the
DBAPI connection itself. It has no return value.
-
+
This is used to set dialect-wide per-connection options such as
isolation modes, unicode modes, etc.
in some dialects; this is indicated by the
``supports_sane_rowcount`` and ``supports_sane_multi_rowcount``
dialect attributes.
-
+
"""
raise NotImplementedError()
@property
def sql_compiler(self):
"""Return a Compiled that is capable of processing SQL expressions.
-
+
If this compiler is one, it would likely just return 'self'.
-
+
"""
-
+
raise NotImplementedError()
-
+
def process(self, obj, **kwargs):
return obj._compiler_dispatch(self, **kwargs)
shared among threads using properly synchronized access, it is still
possible that the underlying DBAPI connection may not support shared
access between threads. Check the DBAPI documentation for details.
-
+
The Connection object represents a single dbapi connection checked out
from the connection pool. In this state, the connection pool has no effect
upon the connection, including its expiration or timeout state. For the
.. index::
single: thread safety; Connection
-
+
"""
-
+
def __init__(self, engine, connection=None, close_with_result=False,
_branch=False, _execution_options=None):
"""Construct a new Connection.
The constructor here is not public and is called only by an
:class:`.Engine`. See :meth:`.Engine.connect` and
:meth:`.Engine.contextual_connect` methods.
-
+
"""
self.engine = engine
self.__connection = connection or engine.raw_connection()
c = self.__class__.__new__(self.__class__)
c.__dict__ = self.__dict__.copy()
return c
-
+
def execution_options(self, **opt):
""" Set non-SQL options for the connection which take effect
during execution.
-
+
The method returns a copy of this :class:`Connection` which references
the same underlying DBAPI connection, but also defines the given
execution options which will take effect for a call to
:meth:`execute`. As the new :class:`Connection` references the same
underlying resource, it is probably best to ensure that the copies
would be discarded immediately, which is implicit if used as in::
-
+
result = connection.execution_options(stream_results=True).\
execute(stmt)
-
+
The options are the same as those accepted by
:meth:`sqlalchemy.sql.expression.Executable.execution_options`.
c = self._clone()
c._execution_options = c._execution_options.union(opt)
return c
-
+
@property
def dialect(self):
"Dialect used by this Connection."
# use getattr() for is_valid to support exceptions raised in
# dialect initializer, where the connection is not wrapped in
# _ConnectionFairy
-
+
return getattr(self.__connection, 'is_valid', False)
@property
"""
if self.invalidated:
return
-
+
if self.closed:
raise exc.ResourceClosedError("This Connection is closed")
self.__connection.invalidate(exception)
del self.__connection
self.__invalid = True
-
-
+
+
def detach(self):
"""Detach the underlying DB-API connection from its connection pool.
self.__invalid = False
del self.__connection
self.__transaction = None
-
+
def scalar(self, object, *multiparams, **params):
"""Executes and returns the first column of the first row.
def execute(self, object, *multiparams, **params):
"""Executes the given construct and returns a :class:`.ResultProxy`.
-
+
The construct can be one of:
-
+
* a textual SQL string
* any :class:`.ClauseElement` construct that is also
a subclass of :class:`.Executable`, such as a
* a :class:`.DDLElement` object
* a :class:`.DefaultGenerator` object
* a :class:`.Compiled` object
-
+
"""
for c in type(object).__mro__:
In the case of 'raw' execution which accepts positional parameters,
it may be a list of tuples or lists.
-
+
"""
if not multiparams:
def __execute_context(self, context):
if context.compiled:
context.pre_exec()
-
+
if context.executemany:
self._cursor_executemany(
context.cursor,
context.cursor,
context.statement,
context.parameters[0], context=context)
-
+
if context.compiled:
context.post_exec()
-
+
if context.isinsert and not context.executemany:
context.post_insert()
-
+
# create a resultproxy, get rowcount/implicit RETURNING
# rows, close cursor if no further results pending
r = context.get_result_proxy()._autoclose()
if self.__transaction is None and context.should_autocommit:
self._commit_impl()
-
+
if r.closed and self.should_close_with_result:
self.close()
-
+
return r
-
+
def _handle_dbapi_exception(self,
e,
statement,
connection_invalidated=is_disconnect), \
None, sys.exc_info()[2]
# end Py2K
-
+
finally:
del self._reentrant_error
This is a shortcut for explicitly calling `begin()` and `commit()`
and optionally `rollback()` when exceptions are raised. The
given `*args` and `**kwargs` will be passed to the function.
-
+
See also transaction() on engine.
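E.g., a sketch, where ``do_something`` is an illustrative callable
receiving this connection as its first argument::

    def do_something(conn, x, y):
        conn.execute("some statement", {'x': x, 'y': y})

    conn.transaction(do_something, 5, 10)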
-
+
"""
trans = self.begin()
also implements a context manager interface so that
the Python ``with`` statement can be used with the
:meth:`.Connection.begin` method.
-
+
The Transaction object is **not** threadsafe.
.. index::
"""The constructor for :class:`.Transaction` is private
and is called from within the :class:`.Connection.begin`
implementation.
-
+
"""
self.connection = connection
self._parent = parent or self
This is used to cancel a Transaction without affecting the scope of
an enclosing transaction.
-
+
"""
if not self._parent.is_active:
return
def rollback(self):
"""Roll back this :class:`.Transaction`.
-
+
"""
if not self._parent.is_active:
return
def commit(self):
"""Commit this :class:`.Transaction`."""
-
+
if not self._parent.is_active:
raise exc.InvalidRequestError("This transaction is inactive")
self._do_commit()
Connects a :class:`~sqlalchemy.pool.Pool` and
:class:`~sqlalchemy.engine.base.Dialect` together to provide a source
of database connectivity and behavior.
-
+
An :class:`Engine` object is instantiated publically using the
:func:`~sqlalchemy.create_engine` function.
self.Connection = Connection
if execution_options:
self.update_execution_options(**execution_options)
-
+
def update_execution_options(self, **opt):
"""update the execution_options dictionary of this :class:`Engine`.
-
+
For details on execution_options, see
:meth:`Connection.execution_options` as well as
:meth:`sqlalchemy.sql.expression.Executable.execution_options`.
-
-
+
+
"""
self._execution_options = \
self._execution_options.union(opt)
A new connection pool is created immediately after the old one has
been disposed. This new pool, like all SQLAlchemy connection pools,
does not make any actual connections to the database until one is
- first requested.
-
+ first requested.
+
This method has two general use cases:
-
+
* When a dropped connection is detected, it is assumed that all
connections held by the pool are potentially dropped, and
the entire pool is replaced.
-
+
* An application may want to use :meth:`dispose` within a test
suite that is creating multiple engines.
-
+
It is critical to note that :meth:`dispose` does **not** guarantee
that the application will release all open database connections - only
- those connections that are checked into the pool are closed.
+ those connections that are checked into the pool are closed.
Connections which remain checked out or have been detached from
the engine are not affected.
-
+
"""
self.pool.dispose()
self.pool = self.pool.recreate()
def text(self, text, *args, **kwargs):
"""Return a :func:`~sqlalchemy.sql.expression.text` construct,
bound to this engine.
-
+
This is equivalent to::
-
+
text("SELECT * FROM table", bind=engine)
-
+
"""
return expression.text(text, bind=self, *args, **kwargs)
This is a shortcut for explicitly calling `begin()` and `commit()`
and optionally `rollback()` when exceptions are raised. The
given `*args` and `**kwargs` will be passed to the function.
-
+
The connection used is that of contextual_connect().
-
+
See also the similar method on Connection itself.
-
+
"""
-
+
conn = self.contextual_connect()
try:
return conn.transaction(callable_, *args, **kwargs)
def execute(self, statement, *multiparams, **params):
"""Executes the given construct and returns a :class:`.ResultProxy`.
-
+
The arguments are the same as those used by
:meth:`.Connection.execute`.
-
+
Here, a :class:`.Connection` is acquired using the
:meth:`~.Engine.contextual_connect` method, and the statement executed
with that connection. The returned :class:`.ResultProxy` is flagged
underlying cursor is closed, the :class:`.Connection` created here
will also be closed, which allows its associated DBAPI connection
resource to be returned to the connection pool.
-
+
"""
connection = self.contextual_connect(close_with_result=True)
def connect(self, **kwargs):
"""Return a new :class:`.Connection` object.
-
+
The :class:`.Connection`, upon construction, will procure a DBAPI connection
from the :class:`.Pool` referenced by this :class:`.Engine`,
returning it back to the :class:`.Pool` after the :meth:`.Connection.close`
method is called.
-
+
"""
return self.Connection(self, **kwargs)
def contextual_connect(self, close_with_result=False, **kwargs):
"""Return a :class:`.Connection` object which may be part of some ongoing context.
-
+
By default, this method does the same thing as :meth:`.Engine.connect`.
Subclasses of :class:`.Engine` may override this method
to provide contextual behavior.
:param close_with_result: When True, the first :class:`.ResultProxy` created
by the :class:`.Connection` will call the :meth:`.Connection.close` method
- of that connection as soon as any pending result rows are exhausted.
+ of that connection as soon as any pending result rows are exhausted.
This is used to supply the "connectionless execution" behavior provided
by the :meth:`.Engine.execute` method.
-
+
"""
return self.Connection(self,
def _begin_impl(self):
return proxy.begin(self, super(ProxyConnection, self)._begin_impl)
-
+
def _rollback_impl(self):
return proxy.rollback(self,
super(ProxyConnection, self)._rollback_impl)
super(ProxyConnection,
self)._rollback_to_savepoint_impl,
name, context)
-
+
def _release_savepoint_impl(self, name, context):
return proxy.release_savepoint(self,
super(ProxyConnection, self)._release_savepoint_impl,
Sequence.register(RowProxy)
except ImportError:
pass
-
+
class ResultMetaData(object):
"""Handle cursor.description, applying additional info from an execution
context."""
-
+
def __init__(self, parent, metadata):
self._processors = processors = []
processor = type_.dialect_impl(dialect).\
result_processor(dialect, coltype)
-
+
processors.append(processor)
rec = (processor, i)
# indexes as keys. This is only needed for the Python version of
# RowProxy (the C version uses a faster path for integer indexes).
keymap[i] = rec
-
+
# Column names as keys
if keymap.setdefault(name.lower(), rec) is not rec:
# We do not raise an exception directly because several
if origname and \
keymap.setdefault(origname.lower(), rec) is not rec:
keymap[origname.lower()] = (processor, None)
-
+
if dialect.requires_name_normalize:
colname = dialect.normalize_name(colname)
-
+
self.keys.append(colname)
if obj:
for o in obj:
),
'keys': self.keys
}
-
+
def __setstate__(self, state):
# the row has been processed at pickling time so we don't need any
# processor anymore
self.keys = state['keys']
self._echo = False
-
+
class ResultProxy(object):
"""Wraps a DB-API cursor object to provide easier access to row columns.
_process_row = RowProxy
out_parameters = None
_can_close_connection = False
-
+
def __init__(self, context):
self.context = context
self.dialect = context.dialect
self._metadata = None
else:
self._metadata = ResultMetaData(self, metadata)
-
+
def keys(self):
"""Return the current set of string keys for rows."""
if self._metadata:
return self._metadata.keys
else:
return []
-
+
@util.memoized_property
def rowcount(self):
"""Return the 'rowcount' for this result.
-
+
The 'rowcount' reports the number of rows affected
by an UPDATE or DELETE statement. It has *no* other
uses and is not intended to provide the number of rows
present from a SELECT.
-
+
Note that this row count may not be properly implemented in some
dialects; this is indicated by
:meth:`~sqlalchemy.engine.base.ResultProxy.supports_sane_rowcount()` and
:meth:`~sqlalchemy.engine.base.ResultProxy.supports_sane_multi_rowcount()`.
``rowcount()`` also may not work at this time for a statement that
uses ``returning()``.
-
+
"""
return self.context.rowcount
@property
def lastrowid(self):
"""return the 'lastrowid' accessor on the DBAPI cursor.
-
+
This is a DBAPI specific method and is only functional
for those backends which support it, for statements
where it is appropriate. Its behavior is not
consistent across backends.
-
+
Usage of this method is normally unnecessary; the
:attr:`~ResultProxy.inserted_primary_key` attribute provides a
tuple of primary key values for a newly inserted row,
regardless of database backend.
-
+
"""
return self._saved_cursor.lastrowid
-
+
def _cursor_description(self):
"""May be overridden by subclasses."""
-
+
return self._saved_cursor.description
-
+
def _autoclose(self):
"""called by the Connection to autoclose cursors that have no pending
results beyond those used by an INSERT/UPDATE/DELETE with no explicit
RETURNING clause.
-
+
"""
if self.context.isinsert:
if self.context._is_implicit_returning:
# such as kinterbasdb, mxodbc),
self.rowcount
self.close(_autoclose_connection=False)
-
+
return self
-
+
def close(self, _autoclose_connection=True):
"""Close this ResultProxy.
Closes the underlying DBAPI cursor corresponding to the execution.
-
+
Note that any data cached within this ResultProxy is still available.
For some types of results, this may include buffered rows.
* all result rows are exhausted using the fetchXXX() methods.
* cursor.description is None.
-
+
"""
if not self.closed:
self.connection.close()
# allow consistent errors
self.cursor = None
-
+
def __iter__(self):
while True:
row = self.fetchone()
raise StopIteration
else:
yield row
-
+
@util.memoized_property
def inserted_primary_key(self):
"""Return the primary key for the row just inserted.
-
+
This only applies to single row insert() constructs which
did not explicitly specify returning().
raise exc.InvalidRequestError(
"Can't call inserted_primary_key when returning() "
"is used.")
-
+
return self.context._inserted_primary_key
@util.deprecated("0.6", "Use :attr:`.ResultProxy.inserted_primary_key`")
def last_inserted_ids(self):
"""Return the primary key for the row just inserted."""
-
+
return self.inserted_primary_key
-
+
def last_updated_params(self):
"""Return ``last_updated_params()`` from the underlying
ExecutionContext.
return self.cursor.fetchall()
except AttributeError:
self._non_result()
-
+
def _non_result(self):
if self._metadata is None:
raise exc.ResourceClosedError(
)
else:
raise exc.ResourceClosedError("This result object is closed.")
-
+
def process_rows(self, rows):
process_row = self._process_row
metadata = self._metadata
def fetchmany(self, size=None):
"""Fetch many rows, just like DB-API
``cursor.fetchmany(size=cursor.arraysize)``.
-
+
If rows are present, the cursor remains open after this is called.
Else the cursor is automatically closed and an empty list is returned.
-
+
"""
try:
def fetchone(self):
"""Fetch one row, just like DB-API ``cursor.fetchone()``.
-
+
If a row is present, the cursor remains open after this is called.
Else the cursor is automatically closed and None is returned.
-
+
"""
try:
row = self._fetchone_impl()
def first(self):
"""Fetch the first row and then close the result set unconditionally.
-
+
Returns None if no row is present.
-
+
"""
if self._metadata is None:
self._non_result()
return None
finally:
self.close()
-
+
def scalar(self):
"""Fetch the first column of the first row, and close the result set.
-
+
Returns None if no row is present.
-
+
"""
row = self.first()
if row is not None:
class FullyBufferedResultProxy(ResultProxy):
"""A result proxy that buffers rows fully upon creation.
-
+
Used for operations where a result is to be delivered
after the database conversation can not be continued,
such as MSSQL INSERT...OUTPUT after an autocommit.
-
+
"""
def _init_metadata(self):
super(FullyBufferedResultProxy, self)._init_metadata()
def _buffer_rows(self):
return self.cursor.fetchall()
-
+
def _fetchone_impl(self):
if self.__rowbuffer:
return self.__rowbuffer.pop(0)
row = tuple(row)
super(BufferedColumnRow, self).__init__(parent, row,
processors, keymap)
-
+
class BufferedColumnResultProxy(ResultProxy):
"""A ResultProxy with column buffering behavior.
databases where result rows contain "live" results that fall out
of scope unless explicitly fetched. Currently this includes
cx_Oracle LOB objects.
-
+
"""
_process_row = BufferedColumnRow
else:
tables = metadata.tables.values()
collection = [t for t in sql_util.sort_tables(tables) if self._can_create(t)]
-
+
for listener in metadata.ddl_listeners['before-create']:
listener('before-create', metadata, self.connection, tables=collection)
-
+
for table in collection:
self.traverse_single(table, create_ok=True)
def visit_table(self, table, create_ok=False):
if not create_ok and not self._can_create(table):
return
-
+
for listener in table.ddl_listeners['before-create']:
listener('before-create', table, self.connection)
else:
tables = metadata.tables.values()
collection = [t for t in reversed(sql_util.sort_tables(tables)) if self._can_drop(t)]
-
+
for listener in metadata.ddl_listeners['before-drop']:
listener('before-drop', metadata, self.connection, tables=collection)
-
+
for table in collection:
self.traverse_single(table, drop_ok=True)
def visit_table(self, table, drop_ok=False):
if not drop_ok and not self._can_drop(table):
return
-
+
for listener in table.ddl_listeners['before-drop']:
listener('before-drop', table, self.connection)
supports_alter = True
# most DBAPIs happy with this for execute().
- # not cx_oracle.
+ # not cx_oracle.
execute_sequence_format = tuple
-
+
supports_sequences = False
sequences_optional = False
preexecute_autoincrement_sequences = False
postfetch_lastrowid = True
implicit_returning = False
-
+
supports_native_enum = False
supports_native_boolean = False
-
+
# if the NUMERIC type
# returns decimal.Decimal.
# *not* the FLOAT type however.
supports_native_decimal = False
-
+
# Py3K
#supports_unicode_statements = True
#supports_unicode_binds = True
# end Py2K
name = 'default'
-
+
# length at which to truncate
# any identifier.
max_identifier_length = 9999
-
+
# length at which to truncate
# the name of an index.
# Usually None to indicate
# 'use max_identifier_length'.
# thanks to MySQL, sigh
max_index_name_length = None
-
+
supports_sane_rowcount = True
supports_sane_multi_rowcount = True
dbapi_type_map = {}
default_paramstyle = 'named'
supports_default_values = False
supports_empty_insert = True
-
+
server_version_info = None
-
+
# indicates symbol names are
# UPPERCASEd if they are case insensitive
# within the database.
# if this is True, the methods normalize_name()
# and denormalize_name() must be provided.
requires_name_normalize = False
-
+
reflection_options = ()
def __init__(self, convert_unicode=False, assert_unicode=False,
encoding='utf-8', paramstyle=None, dbapi=None,
implicit_returning=None,
label_length=None, **kwargs):
-
+
if not getattr(self, 'ported_sqla_06', True):
util.warn(
"The %s dialect is not yet ported to SQLAlchemy 0.6" %
self.name)
-
+
self.convert_unicode = convert_unicode
if assert_unicode:
util.warn_deprecated(
"received. "
"This does *not* apply to DBAPIs that coerce Unicode "
"natively.")
-
+
self.encoding = encoding
self.positional = False
self._ischema = None
self,
'description_encoding',
encoding)
-
+
@property
def dialect_description(self):
return self.name + "+" + self.driver
-
+
def initialize(self, connection):
try:
self.server_version_info = \
self.default_schema_name = None
self.returns_unicode_strings = self._check_unicode_returns(connection)
-
+
self.do_rollback(connection.connection)
def on_connect(self):
"""return a callable which sets up a newly created DBAPI connection.
-
+
This is used to set dialect-wide per-connection options such as
isolation modes, unicode modes, etc.
-
+
If a callable is returned, it will be assembled into a pool listener
that receives the direct DBAPI connection, with all wrappers removed.
-
+
If None is returned, no listener will be generated.
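A dialect sketch, following the same cursor/PRAGMA pattern the
pysqlite dialect uses for its isolation level (the PRAGMA shown is
illustrative)::

    def on_connect(self):
        def connect(dbapi_conn):
            cursor = dbapi_conn.cursor()
            cursor.execute("PRAGMA foreign_keys=ON")
            cursor.close()
        return connect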
-
+
"""
return None
-
+
def _check_unicode_returns(self, connection):
# Py2K
if self.supports_unicode_statements:
)
)
row = cursor.fetchone()
-
+
return isinstance(row[0], unicode)
finally:
cursor.close()
-
+
# detect plain VARCHAR
unicode_for_varchar = check_unicode(sqltypes.VARCHAR(60))
-
+
# detect if there's an NVARCHAR type with different behavior available
unicode_for_unicode = check_unicode(sqltypes.Unicode(60))
-
+
if unicode_for_unicode and not unicode_for_varchar:
return "conditional"
else:
return unicode_for_varchar
-
+
def type_descriptor(self, typeobj):
"""Provide a database-specific ``TypeEngine`` object, given
the generic object which comes from the types module.
def get_pk_constraint(self, conn, table_name, schema=None, **kw):
"""Compatiblity method, adapts the result of get_primary_keys()
for those dialects which don't implement get_pk_constraint().
-
+
"""
return {
'constrained_columns':
self.get_primary_keys(conn, table_name,
schema=schema, **kw)
}
-
+
def validate_identifier(self, ident):
if len(ident) > self.max_identifier_length:
raise exc.IdentifierError(
result_map = None
compiled = None
statement = None
-
+
def __init__(self,
dialect,
connection,
compiled_ddl=None,
statement=None,
parameters=None):
-
+
self.dialect = dialect
self._connection = self.root_connection = connection
self.engine = connection.engine
-
+
if compiled_ddl is not None:
self.compiled = compiled = compiled_ddl
self.isddl = True
self.statement = self.unicode_statement.encode(self.dialect.encoding)
else:
self.statement = self.unicode_statement = unicode(compiled)
-
+
self.cursor = self.create_cursor()
self.compiled_parameters = []
self.parameters = [self._default_params]
-
+
elif compiled_sql is not None:
self.compiled = compiled = compiled_sql
else:
self.compiled_parameters = [compiled.construct_params(m, _group_number=grp) for
grp,m in enumerate(parameters)]
-
+
self.executemany = len(parameters) > 1
self.cursor = self.create_cursor()
if self.isinsert or self.isupdate:
self.__process_defaults()
self.parameters = self.__convert_compiled_params(self.compiled_parameters)
-
+
elif statement is not None:
# plain text statement
if connection._execution_options:
self.execution_options = self.execution_options.union(connection._execution_options)
self.parameters = self.__encode_param_keys(parameters)
self.executemany = len(parameters) > 1
-
+
if isinstance(statement, unicode) and not dialect.supports_unicode_statements:
self.unicode_statement = statement
self.statement = statement.encode(self.dialect.encoding)
else:
self.statement = self.unicode_statement = statement
-
+
self.cursor = self.create_cursor()
else:
# no statement. used for standalone ColumnDefault execution.
if connection._execution_options:
self.execution_options = self.execution_options.union(connection._execution_options)
self.cursor = self.create_cursor()
-
+
@util.memoized_property
def is_crud(self):
return self.isinsert or self.isupdate or self.isdelete
-
+
@util.memoized_property
def should_autocommit(self):
autocommit = self.execution_options.get('autocommit',
self.statement and
expression.PARSE_AUTOCOMMIT
or False)
-
+
if autocommit is expression.PARSE_AUTOCOMMIT:
return self.should_autocommit_text(self.unicode_statement)
else:
return autocommit
-
+
@util.memoized_property
def _is_explicit_returning(self):
return self.compiled and \
getattr(self.compiled.statement, '_returning', False)
-
+
@util.memoized_property
def _is_implicit_returning(self):
return self.compiled and \
bool(self.compiled.returning) and \
not self.compiled.statement._returning
-
+
@util.memoized_property
def _default_params(self):
if self.dialect.positional:
return self.dialect.execute_sequence_format()
else:
return {}
-
+
def _execute_scalar(self, stmt):
"""Execute a string statement on the current cursor, returning a
scalar result.
-
+
Used to fire off sequences, default phrases, and "select lastrowid"
types of statements individually or in the context of a parent INSERT
or UPDATE statement.
-
+
"""
conn = self._connection
stmt = stmt.encode(self.dialect.encoding)
conn._cursor_execute(self.cursor, stmt, self._default_params)
return self.cursor.fetchone()[0]
-
+
@property
def connection(self):
return self._connection._branch()
"""Apply string encoding to the keys of dictionary-based bind parameters.
This is only used when executing textual, non-compiled SQL expressions.
-
+
"""
-
+
if not params:
return [self._default_params]
elif isinstance(params[0], self.dialect.execute_sequence_format):
return [proc(d) for d in params] or [{}]
else:
return [self.dialect.execute_sequence_format(p) for p in params]
-
+
def __convert_compiled_params(self, compiled_parameters):
"""Convert the dictionary of bind parameter values into a dict or list
def post_exec(self):
pass
-
+
def get_lastrowid(self):
"""return self.cursor.lastrowid, or equivalent, after an INSERT.
-
+
This may involve calling special cursor functions,
issuing a new SELECT on the cursor (or a new one),
or returning a stored value that was
calculated within post_exec().
-
+
This function will only be called for dialects
which support "implicit" primary key generation,
keep preexecute_autoincrement_sequences set to False,
and when no explicit id value was bound to the
statement.
-
+
The function is called once, directly after
post_exec() and before the transaction is committed
or ResultProxy is generated. If the post_exec()
method assigns a value to `self._lastrowid`, the
value is used in place of calling get_lastrowid().
-
+
Note that this method is *not* equivalent to the
``lastrowid`` method on ``ResultProxy``, which is a
direct proxy to the DBAPI ``lastrowid`` accessor
in all cases.
-
+
"""
return self.cursor.lastrowid
def get_result_proxy(self):
return base.ResultProxy(self)
-
+
@property
def rowcount(self):
return self.cursor.rowcount
def supports_sane_multi_rowcount(self):
return self.dialect.supports_sane_multi_rowcount
-
+
def post_insert(self):
if self.dialect.postfetch_lastrowid and \
(not len(self._inserted_primary_key) or \
None in self._inserted_primary_key):
-
+
table = self.compiled.statement.table
lastrowid = self.get_lastrowid()
self._inserted_primary_key = [c is table._autoincrement_column and lastrowid or v
for c, v in zip(table.primary_key, self._inserted_primary_key)
]
-
+
def _fetch_implicit_returning(self, resultproxy):
table = self.compiled.statement.table
row = resultproxy.fetchone()
ipk.append(v)
else:
ipk.append(row[c])
-
+
self._inserted_primary_key = ipk
def last_inserted_params(self):
elif default.is_clause_element:
# TODO: expensive branching here should be
# pulled into _exec_scalar()
- conn = self.connection
+ conn = self.connection
c = expression.select([default.arg]).compile(bind=conn)
return conn._execute_compiled(c, (), {}).scalar()
else:
return default.arg
-
+
def get_insert_default(self, column):
if column.default is None:
return None
if self.executemany:
if len(self.compiled.prefetch):
scalar_defaults = {}
-
+
# pre-determine scalar Python-side defaults
# to avoid many calls of get_insert_default()/get_update_default()
for c in self.compiled.prefetch:
scalar_defaults[c] = c.default.arg
elif self.isupdate and c.onupdate and c.onupdate.is_scalar:
scalar_defaults[c] = c.onupdate.arg
-
+
for param in self.compiled_parameters:
self.current_parameters = param
for c in self.compiled.prefetch:
self.postfetch_cols = self.compiled.postfetch
self.prefetch_cols = self.compiled.prefetch
-
+
DefaultDialect.execution_ctx_cls = DefaultExecutionContext
:class:`~sqlalchemy.engine.base.Dialect`, providing a
consistent interface as well as caching support for previously
fetched metadata.
-
+
The preferred method to construct an :class:`.Inspector` is via the
:meth:`Inspector.from_engine` method. I.e.::
-
+
engine = create_engine('...')
insp = Inspector.from_engine(engine)
-
+
Where above, the :class:`~sqlalchemy.engine.base.Dialect` may opt
to return an :class:`.Inspector` subclass that provides additional
methods specific to the dialect's target database.
-
+
"""
def __init__(self, bind):
which is typically an instance of
:class:`~sqlalchemy.engine.base.Engine` or
:class:`~sqlalchemy.engine.base.Connection`.
-
+
For a dialect-specific instance of :class:`.Inspector`, see
:meth:`Inspector.from_engine`
# ensure initialized
bind.connect()
-
+
# this might not be a connection, it could be an engine.
self.bind = bind
-
+
# set the engine
if hasattr(bind, 'engine'):
self.engine = bind.engine
which is typically an instance of
:class:`~sqlalchemy.engine.base.Engine` or
:class:`~sqlalchemy.engine.base.Connection`.
-
+
This method differs from a direct constructor call of :class:`.Inspector`
in that the :class:`~sqlalchemy.engine.base.Dialect` is given a chance to provide
a dialect-specific :class:`.Inspector` instance, which may provide additional
methods.
-
+
See the example at :class:`.Inspector`.
-
+
"""
if hasattr(bind.dialect, 'inspector'):
return bind.dialect.inspector(bind)
def default_schema_name(self):
"""Return the default schema name presented by the dialect
for the current engine's database user.
-
+
E.g. this is typically ``public`` for Postgresql and ``dbo``
for SQL Server.
-
+
"""
return self.dialect.default_schema_name
def get_table_options(self, table_name, schema=None, **kw):
"""Return a dictionary of options specified when the table of the given name was created.
-
+
This currently includes some options that apply to MySQL tables.
-
+
"""
if hasattr(self.dialect, 'get_table_options'):
return self.dialect.get_table_options(self.bind, table_name, schema,
Given a string `table_name`, and an optional string `schema`, return
primary key information as a dictionary with these keys:
-
+
constrained_columns
a list of column names that make up the primary key
-
+
name
optional name of the primary key constraint.
**kw)
return pkeys
-
+
def get_foreign_keys(self, table_name, schema=None, **kw):
"""Return information about foreign_keys in `table_name`.
name
optional name of the foreign key constraint.
-
+
\**kw
other options passed to the dialect's get_foreign_keys() method.
unique
boolean
-
+
\**kw
other options passed to the dialect's get_indexes() method.
"""
def reflecttable(self, table, include_columns):
"""Given a Table object, load its internal constructs based on introspection.
-
+
This is the underlying method used by most dialects to produce
table reflection. Direct usage is like::
-
+
from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.engine import reflection
-
+
engine = create_engine('...')
meta = MetaData()
user_table = Table('user', meta)
insp = Inspector.from_engine(engine)
insp.reflecttable(user_table, None)
-
+
:param table: a :class:`~sqlalchemy.schema.Table` instance.
:param include_columns: a list of string column names to include
in the reflection process. If ``None``, all columns are reflected.
-
+
"""
dialect = self.bind.dialect
col_kw['autoincrement'] = col_d['autoincrement']
if 'quote' in col_d:
col_kw['quote'] = col_d['quote']
-
+
colargs = []
if col_d.get('default') is not None:
# the "default" value is assumed to be a literal SQL expression,
# so is wrapped in text() so that no quoting occurs on re-issuance.
colargs.append(sa_schema.DefaultClause(sql.text(col_d['default'])))
-
+
if 'sequence' in col_d:
# TODO: mssql, maxdb and sybase are using this.
seq = col_d['sequence']
if 'increment' in seq:
sequence.increment = seq['increment']
colargs.append(sequence)
-
+
col = sa_schema.Column(name, coltype, *colargs, **col_kw)
table.append_column(col)
Provides a ``create`` method that receives input arguments and
produces an instance of base.Engine or a subclass.
-
+
"""
def __init__(self):
"""Base class for built-in stratgies."""
pool_threadlocal = False
-
+
def create(self, name_or_url, **kwargs):
# create url.URL object
u = url.make_url(name_or_url)
import sys
raise exc.DBAPIError.instance(None, None, e), None, sys.exc_info()[2]
# end Py2K
-
+
creator = kwargs.pop('creator', connect)
poolclass = (kwargs.pop('poolclass', None) or
engine_args[k] = kwargs.pop(k)
_initialize = kwargs.pop('_initialize', True)
-
+
# all kwargs should be consumed
if kwargs:
raise TypeError(
dialect.__class__.__name__,
pool.__class__.__name__,
engineclass.__name__))
-
+
engine = engineclass(pool, dialect, u, **engine_args)
if _initialize:
if conn is None:
return
do_on_connect(conn)
-
+
pool.add_listener({'first_connect': on_connect, 'connect':on_connect})
-
+
def first_connect(conn, rec):
c = base.Connection(engine, connection=conn)
dialect.initialize(c)
name = 'plain'
engine_cls = base.Engine
-
+
PlainEngineStrategy()
class ThreadLocalEngineStrategy(DefaultEngineStrategy):
"""Strategy for configuring an Engine with thredlocal behavior."""
-
+
name = 'threadlocal'
pool_threadlocal = True
engine_cls = threadlocal.TLEngine
Produces a single mock Connectable object which dispatches
statement execution to a passed-in function.
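For example, a sketch of generating DDL without a live database
(``dump`` and ``metadata`` are illustrative)::

    def dump(sql, *multiparams, **params):
        print sql.compile(dialect=engine.dialect)

    engine = create_engine('postgresql://', strategy='mock', executor=dump)
    metadata.create_all(engine, checkfirst=False)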
-
+
"""
name = 'mock'
-
+
def create(self, name_or_url, executor, **kwargs):
# create url.URL object
u = url.make_url(name_or_url)
def create(self, entity, **kwargs):
kwargs['checkfirst'] = False
from sqlalchemy.engine import ddl
-
+
ddl.SchemaGenerator(self.dialect, self, **kwargs).traverse(entity)
def drop(self, entity, **kwargs):
def __init__(self, *arg, **kw):
super(TLConnection, self).__init__(*arg, **kw)
self.__opencount = 0
-
+
def _increment_connect(self):
self.__opencount += 1
return self
-
+
def close(self):
if self.__opencount == 1:
base.Connection.close(self)
self.__opencount = 0
base.Connection.close(self)
-
+
class TLEngine(base.Engine):
"""An Engine that includes support for thread-local managed transactions."""
connection = None
else:
connection = self._connections.conn()
-
+
if connection is None or connection.closed:
# guards against pool-level reapers, if desired.
# or not connection.connection.is_valid:
connection = self.TLConnection(self, self.pool.connect(), **kw)
self._connections.conn = conn = weakref.ref(connection)
-
+
return connection._increment_connect()
-
+
def begin_twophase(self, xid=None):
if not hasattr(self._connections, 'trans'):
self._connections.trans = []
if not hasattr(self._connections, 'trans'):
self._connections.trans = []
self._connections.trans.append(self.contextual_connect().begin_nested())
-
+
def begin(self):
if not hasattr(self._connections, 'trans'):
self._connections.trans = []
self._connections.trans.append(self.contextual_connect().begin())
-
+
def prepare(self):
if not hasattr(self._connections, 'trans') or \
not self._connections.trans:
return
self._connections.trans[-1].prepare()
-
+
def commit(self):
if not hasattr(self._connections, 'trans') or \
not self._connections.trans:
return
trans = self._connections.trans.pop(-1)
trans.commit()
-
+
def rollback(self):
if not hasattr(self._connections, 'trans') or \
not self._connections.trans:
return
trans = self._connections.trans.pop(-1)
trans.rollback()
-
+
def dispose(self):
self._connections = util.threading.local()
super(TLEngine, self).dispose()
-
+
@property
def closed(self):
return not hasattr(self._connections, 'conn') or \
self._connections.conn() is None or \
self._connections.conn().closed
-
+
def close(self):
if not self.closed:
self.contextual_connect().close()
connection._force_close()
del self._connections.conn
self._connections.trans = []
-
+
def __repr__(self):
return 'TLEngine(%s)' % str(self.url)
return module
else:
raise
-
+
def _load_entry_point(self):
"""attempt to load this url's dialect from entry points, or return None
if pkg_resources is not installed or there is no matching entry point.
-
+
Raise ImportError if the actual load fails.
-
+
"""
try:
import pkg_resources
except ImportError:
return None
-
+
for res in pkg_resources.iter_entry_points('sqlalchemy.dialects'):
if res.name == self.drivername:
return res.load()
else:
return None
-
+
def translate_connect_args(self, names=[], **kw):
"""Translate url attributes into a dictionary of connection arguments.
class ResourceClosedError(InvalidRequestError):
"""An operation was requested from a connection, cursor, or other
object that's in a closed state."""
-
+
class NoSuchColumnError(KeyError, InvalidRequestError):
"""A nonexistent column is requested from a ``RowProxy``."""
class NoReferenceError(InvalidRequestError):
"""Raised by ``ForeignKey`` to indicate a reference cannot be resolved."""
-
+
class NoReferencedTableError(NoReferenceError):
"""Raised by ``ForeignKey`` when the referred ``Table`` cannot be located."""
proxy = self._new(_lazy_collection(obj, self.target_collection))
setattr(obj, self.key, (id(obj), proxy))
return proxy
-
+
def __set__(self, obj, values):
if self.owning_class is None:
self.owning_class = type(obj)
getter, setter = self.getset_factory(self.collection_class, self)
else:
getter, setter = self._default_getset(self.collection_class)
-
+
if self.collection_class is list:
return _AssociationList(lazy_collection, creator, getter, setter, self)
elif self.collection_class is dict:
getter, setter = self.getset_factory(self.collection_class, self)
else:
getter, setter = self._default_getset(self.collection_class)
-
+
proxy.creator = creator
proxy.getter = getter
proxy.setter = setter
def any(self, criterion=None, **kwargs):
return self._comparator.any(getattr(self.target_class, self.value_attr).has(criterion, **kwargs))
-
+
def has(self, criterion=None, **kwargs):
return self._comparator.has(getattr(self.target_class, self.value_attr).has(criterion, **kwargs))
def __getstate__(self):
return {'obj':self.ref(), 'target':self.target}
-
+
def __setstate__(self, state):
self.ref = weakref.ref(state['obj'])
self.target = state['target']
class _AssociationCollection(object):
def __init__(self, lazy_collection, creator, getter, setter, parent):
- """Constructs an _AssociationCollection.
-
+ """Constructs an _AssociationCollection.
+
This will always be a subclass of either _AssociationList,
_AssociationSet, or _AssociationDict.
self.parent = state['parent']
self.lazy_collection = state['lazy_collection']
self.parent._inflate(self)
-
+
class _AssociationList(_AssociationCollection):
"""Generic, converting, list-to-list proxy."""
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ColumnClause
-
+
class MyColumn(ColumnClause):
pass
-
+
@compiles(MyColumn)
def compile_mycolumn(element, compiler, **kw):
return "[%s]" % element.name
-
+
Above, ``MyColumn`` extends :class:`~sqlalchemy.sql.expression.ColumnClause`,
the base expression element for named column objects. The ``compiles``
decorator registers itself with the ``MyColumn`` class so that it is invoked
when the object is compiled to a string::
from sqlalchemy import select
-
+
s = select([MyColumn('x'), MyColumn('y')])
print str(s)
-
+
Produces::
SELECT [x], [y]
method which can be used for compilation of embedded attributes::
from sqlalchemy.sql.expression import Executable, ClauseElement
-
+
class InsertFromSelect(Executable, ClauseElement):
def __init__(self, table, select):
self.table = table
insert = InsertFromSelect(t1, select([t1]).where(t1.c.x>5))
print insert
-
+
Produces::
"INSERT INTO mytable (SELECT mytable.x, mytable.y, mytable.z FROM mytable WHERE mytable.x > :x_1)"
return "VARCHAR('max')"
else:
return compiler.visit_VARCHAR(element, **kw)
-
+
foo = Table('foo', metadata,
Column('data', VARCHAR('max'))
)
"column-like" elements. Anything that you'd place in the "columns" clause of
a SELECT statement (as well as order by and group by) can derive from this -
the object will automatically have Python "comparison" behavior.
-
+
:class:`~sqlalchemy.sql.expression.ColumnElement` classes want to have a
``type`` member which is expression's return type. This can be established
at the instance level in the constructor, or at the class level if it's
generally constant::
-
+
class timestamp(ColumnElement):
type = TIMESTAMP()
statements along the line of "SELECT FROM <some function>"
``FunctionElement`` adds in the ability to be used in the FROM clause of a
``select()`` construct::
-
+
from sqlalchemy.sql.expression import FunctionElement
class coalesce(FunctionElement):
existing_dispatch = class_.__dict__.get('_compiler_dispatch')
if not existing:
existing = _dispatcher()
-
+
if existing_dispatch:
existing.specs['default'] = existing_dispatch
-
+
# TODO: why is the lambda needed ?
setattr(class_, '_compiler_dispatch', lambda *arg, **kw: existing(*arg, **kw))
setattr(class_, '_compiler_dispatcher', existing)
-
+
if specs:
for s in specs:
existing.specs[s] = fn
existing.specs['default'] = fn
return fn
return decorate
-
+
class _dispatcher(object):
def __init__(self):
self.specs = {}
-
+
def __call__(self, element, compiler, **kw):
# TODO: yes, this could also switch off of DBAPI in use.
fn = self.specs.get(compiler.dialect.name, None)
if not fn:
fn = self.specs['default']
return fn(element, compiler, **kw)
-
+
# access the mapped Table
SomeClass.__table__
-
+
# access the Mapper
SomeClass.__mapper__
class SomeClass(Base):
__tablename__ = 'some_table'
id = Column("some_table_id", Integer, primary_key=True)
-
+
Attributes may be added to the class after its construction, and they will be
added to the underlying :class:`.Table` and
:func:`.mapper()` definitions as appropriate::
SomeClass.related = relationship(RelatedInfo)
Classes which are constructed using declarative can interact freely
-with classes that are mapped explicitly with :func:`mapper`.
+with classes that are mapped explicitly with :func:`mapper`.
It is recommended, though not required, that all tables
share the same underlying :class:`~sqlalchemy.schema.MetaData` object,
Column('author_id', Integer, ForeignKey('authors.id')),
Column('keyword_id', Integer, ForeignKey('keywords.id'))
)
-
+
class Author(Base):
__tablename__ = 'authors'
id = Column(Integer, primary_key=True)
@property
def attr(self):
return self._attr
-
+
@attr.setter
def attr(self, attr):
self._attr = attr
-
+
attr = synonym('_attr', descriptor=attr)
The above synonym is then usable as an instance attribute as well as a
class MyClass(Base):
__tablename__ = 'sometable'
-
+
id = Column(Integer, primary_key=True)
_attr = Column('attr', String)
classes::
from sqlalchemy.sql import func
-
+
class Address(Base):
__tablename__ = 'address'
id = Column('id', Integer, primary_key=True)
user_id = Column(Integer, ForeignKey('user.id'))
-
+
class User(Base):
__tablename__ = 'user'
id = Column(Integer, primary_key=True)
name = Column(String)
-
+
address_count = column_property(
select([func.count(Address.id)]).\\
where(Address.user_id==id)
table metadata, while still getting most of the benefits of using declarative.
An application that uses reflection might want to load table metadata elsewhere
and simply pass it to declarative classes::
-
+
from sqlalchemy.ext.declarative import declarative_base
-
+
Base = declarative_base()
Base.metadata.reflect(some_engine)
-
+
class User(Base):
__table__ = metadata.tables['user']
-
+
class Address(Base):
__table__ = metadata.tables['address']
class declaration::
from datetime import datetime
-
+
class Widget(Base):
__tablename__ = 'widgets'
-
+
id = Column(Integer, primary_key=True)
timestamp = Column(DateTime, nullable=False)
-
+
__mapper_args__ = {
'version_id_col': timestamp,
'version_id_generator': lambda v:datetime.now()
__tablename__ = 'people'
id = Column(Integer, primary_key=True)
name = Column(String(50))
-
+
class Engineer(Person):
__tablename__ = 'engineers'
__mapper_args__ = {'concrete':True}
Column('name', String(50)),
Column('golf_swing', String(50))
)
-
+
punion = polymorphic_union({
'engineer':engineers,
'manager':managers
}, 'type', 'punion')
-
+
class Person(Base):
__table__ = punion
__mapper_args__ = {'polymorphic_on':punion.c.type}
-
+
class Engineer(Person):
__table__ = engineers
__mapper_args__ = {'polymorphic_identity':'engineer', 'concrete':True}
class Manager(Person):
__table__ = managers
__mapper_args__ = {'polymorphic_identity':'manager', 'concrete':True}
-
+
Mixin Classes
==============
table and doesn't subclass the declarative :class:`Base`. For example::
class MyMixin(object):
-
+
__table_args__ = {'mysql_engine': 'InnoDB'}
__mapper_args__= {'always_refresh': True}
-
+
id = Column(Integer, primary_key=True)
patterns common to many classes can be defined as callables::
from sqlalchemy.ext.declarative import declared_attr
-
+
class ReferenceAddressMixin(object):
@declared_attr
def address_id(cls):
return Column(Integer, ForeignKey('address.id'))
-
+
class User(Base, ReferenceAddressMixin):
__tablename__ = 'user'
id = Column(Integer, primary_key=True)
-
+
Where above, the ``address_id`` class-level callable is executed at the
point at which the ``User`` class is constructed, and the declarative
extension can use the resulting :class:`Column` object as returned by
class MyModel(Base,MyMixin):
__tablename__='test'
id = Column(Integer, primary_key=True)
-
+
Mixing in Relationships
~~~~~~~~~~~~~~~~~~~~~~~
@declared_attr
def target_id(cls):
return Column('target_id', ForeignKey('target.id'))
-
+
@declared_attr
def target(cls):
return relationship("Target")
-
+
class Foo(Base, RefTargetMixin):
__tablename__ = 'foo'
id = Column(Integer, primary_key=True)
-
+
class Bar(Base, RefTargetMixin):
__tablename__ = 'bar'
id = Column(Integer, primary_key=True)
-
+
class Target(Base):
__tablename__ = 'target'
id = Column(Integer, primary_key=True)
:func:`~sqlalchemy.orm.relationship` definitions which require explicit
primaryjoin, order_by etc. expressions should use the string forms
-for these arguments, so that they are evaluated as late as possible.
+for these arguments, so that they are evaluated as late as possible.
To reference the mixin class in these expressions, use the given ``cls``
to get its name::
@declared_attr
def target_id(cls):
return Column('target_id', ForeignKey('target.id'))
-
+
@declared_attr
def target(cls):
return relationship("Target",
from sqlalchemy.ext.declarative import declared_attr
class MySQLSettings:
- __table_args__ = {'mysql_engine':'InnoDB'}
+ __table_args__ = {'mysql_engine':'InnoDB'}
class MyOtherMixin:
__table_args__ = {'info':'foo'}
# This is needed to successfully combine
# two mixins which both have metaclasses
pass
-
+
class MyModel(Base,MyMixin1,MyMixin2):
__tablename__ = 'awooooga'
__metaclass__ = CombinedMeta
For this reason, if a mixin requires a custom metaclass, this should
be mentioned in any documentation of that mixin to avoid confusion
later down the line.
-
+
Class Constructor
=================
Note that ``declarative`` does nothing special with sessions, and is
only intended as an easier way to configure mappers and
:class:`~sqlalchemy.schema.Table` objects. A typical application
-setup using :func:`~sqlalchemy.orm.scoped_session` might look like::
+setup using :func:`~sqlalchemy.orm.scoped_session` might look like::
engine = create_engine('postgresql://scott:tiger@localhost/test')
Session = scoped_session(sessionmaker(autocommit=False,
"""Given a class, configure the class declaratively,
using the given registry, which can be any dictionary, and
MetaData object.
-
+
"""
if '_decl_class_registry' in cls.__dict__:
raise exceptions.InvalidRequestError(
column_copies = {}
potential_columns = {}
-
+
mapper_args = {}
table_args = inherited_table_args = None
tablename = None
parent_columns = ()
-
+
declarative_props = (declared_attr, util.classproperty)
-
+
for base in cls.__mro__:
class_mapped = _is_mapped_class(base)
if class_mapped:
parent_columns = base.__table__.c.keys()
-
+
for name,obj in vars(base).items():
if name == '__mapper_args__':
if not mapper_args and (
continue
elif base is not cls:
# we're a mixin.
-
+
if isinstance(obj, Column):
if obj.foreign_keys:
raise exceptions.InvalidRequestError(
for k, v in potential_columns.items():
if tablename or (v.name or k) not in parent_columns:
dict_[k] = v
-
+
if inherited_table_args and not tablename:
table_args = None
# than the original columns from any mixins
for k, v in mapper_args.iteritems():
mapper_args[k] = column_copies.get(v,v)
-
+
if classname in cls._decl_class_registry:
util.warn("The classname %r is already in the registry of this"
value = dict_[k]
if isinstance(value, declarative_props):
value = getattr(cls, k)
-
+
if (isinstance(value, tuple) and len(value) == 1 and
isinstance(value[0], (Column, MapperProperty))):
util.warn("Ignoring declarative-like tuple value of attribute "
table = None
if '__table__' not in dict_:
if tablename is not None:
-
+
if isinstance(table_args, dict):
args, table_kw = (), table_args
elif isinstance(table_args, tuple):
"Can't add additional column %r when "
"specifying __table__" % c.key
)
-
+
if 'inherits' not in mapper_args:
for c in cls.__bases__:
if _is_mapped_class(c):
"Can't place __table_args__ on an inherited class "
"with no table."
)
-
+
# add any columns declared here to the inherited table.
for c in cols:
if c.primary_key:
(c, cls, inherited_table.c[c.name])
)
inherited_table.append_column(c)
-
+
# single or joined inheritance
# exclude any cols on the inherited table which are not mapped on the
# parent class, to avoid
inherited_mapper = class_mapper(mapper_args['inherits'],
compile=False)
inherited_table = inherited_mapper.local_table
-
+
if 'exclude_properties' not in mapper_args:
mapper_args['exclude_properties'] = exclude_properties = \
set([c.key for c in inherited_table.c
if c not in inherited_mapper._columntoproperty])
exclude_properties.difference_update([c.key for c in cols])
-
+
# look through columns in the current mapper that
# are keyed to a propname different than the colname
# (if names were the same, we'd have popped it out above,
# in which case the mapper makes this combination).
- # See if the superclass has a similar column property.
- # If so, join them together.
+ # See if the superclass has a similar column property.
+ # If so, join them together.
for k, col in our_stuff.items():
if not isinstance(col, expression.ColumnElement):
continue
# append() in mapper._configure_property().
# change this ordering when we do [ticket:1892]
our_stuff[k] = p.columns + [col]
-
+
cls.__mapper__ = mapper_cls(cls,
table,
properties=our_stuff,
def __init__(self, cls):
self.cls = cls
def __getattr__(self, key):
-
+
mapper = class_mapper(self.cls, compile=False)
if mapper:
prop = mapper.get_property(key, raiseerr=False)
def _deferred_relationship(cls, prop):
def resolve_arg(arg):
import sqlalchemy
-
+
def access_cls(key):
if key in cls._decl_class_registry:
return _GetColumns(cls._decl_class_registry[key])
def return_cls():
try:
x = eval(arg, globals(), d)
-
+
if isinstance(x, _GetColumns):
return x.cls
else:
.. note:: @declared_attr is available as
``sqlalchemy.util.classproperty`` for SQLAlchemy versions
0.6.2, 0.6.3, 0.6.4.
-
+
@declared_attr turns the attribute into a scalar-like
property that can be invoked from the uninstantiated class.
Declarative treats attributes specifically marked with
to mapping or declarative table configuration. The name
of the attribute is that of what the non-dynamic version
of the attribute would be.
-
+
@declared_attr is more often than not applicable to mixins,
to define relationships that are to be applied to different
implementors of the class::
-
+
class ProvidesUser(object):
"A mixin that adds a 'user' relationship to classes."
-
+
@declared_attr
def user(self):
return relationship("User")
-
+
It also can be applied to mapped classes, such as to provide
a "polymorphic" scheme for inheritance::
-
+
class Employee(Base):
id = Column(Integer, primary_key=True)
type = Column(String(50), nullable=False)
-
+
@declared_attr
def __tablename__(cls):
return cls.__name__.lower()
-
+
@declared_attr
def __mapper_args__(cls):
if cls.__name__ == 'Employee':
}
else:
return {"polymorphic_identity":cls.__name__}
-
+
"""
-
+
def __init__(self, fget, *arg, **kw):
super(declared_attr, self).__init__(fget, *arg, **kw)
self.__doc__ = fget.__doc__
-
+
def __get__(desc, self, cls):
return desc.fget(cls)
:param query_chooser: For a given Query, returns the list of shard_ids where the query
should be issued. Results from all shards returned will be combined
together into a single listing.
-
+
:param shards: A dictionary of string shard names to :class:`~sqlalchemy.engine.base.Engine`
- objects.
-
+ objects.
+
"""
super(ShardedSession, self).__init__(**kwargs)
self.shard_chooser = shard_chooser
if shards is not None:
for k in shards:
self.bind_shard(k, shards[k])
-
+
def connection(self, mapper=None, instance=None, shard_id=None, **kwargs):
if shard_id is None:
shard_id = self.shard_chooser(mapper, instance)
return self.get_bind(mapper,
shard_id=shard_id,
instance=instance).contextual_connect(**kwargs)
-
+
def get_bind(self, mapper, shard_id=None, instance=None, clause=None, **kw):
if shard_id is None:
shard_id = self.shard_chooser(mapper, instance, clause=clause)
self.id_chooser = self.session.id_chooser
self.query_chooser = self.session.query_chooser
self._shard_id = None
-
+
def set_shard(self, shard_id):
"""return a new query, limited to a single shard ID.
-
+
all subsequent operations with the returned query will
be against the single shard regardless of other state.
"""
-
+
q = self._clone()
q._shard_id = shard_id
return q
-
+
def _execute_and_instances(self, context):
if self._shard_id is not None:
result = self.session.connection(
mapper=self._mapper_zero(),
shard_id=shard_id).execute(context.statement, self._params)
partial = partial + list(self.instances(result, context))
-
+
# if some kind of in memory 'sorting'
# were done, this is where it would happen
return iter(partial)
return o
else:
return None
-
+
related bullets for you.
.. sourcecode:: python+sql
-
+
mapper(Slide, slides_table, properties={
'bullets': relationship(Bullet,
collection_class=ordering_list('position'),
Use the ``ordering_list`` function to set up the ``collection_class`` on relationships
(as in the mapper example above). This implementation depends on the list
-starting in the proper order, so be SURE to put an order_by on your relationship.
+starting in the proper order, so be SURE to put an order_by on your relationship.
.. warning:: ``ordering_list`` only provides limited functionality when a primary
key column or unique column is the target of the sort. Since changing the order of
Ordering values are not limited to incrementing integers. Almost any scheme
can be implemented by supplying a custom ``ordering_func`` that maps a Python list
-index to any value you require.
+index to any value you require.
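For example, a hypothetical ``ordering_func`` that spaces position values
ten apart might look like::

    def stepped_numbering(index, collection):
        # map list index 0, 1, 2, ... to position 0, 10, 20, ...
        return index * 10

    bullets = relationship("Bullet",
                  collection_class=ordering_list('position',
                                        ordering_func=stepped_numbering))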
stop = index.stop or len(self)
if stop < 0:
stop += len(self)
-
+
for i in xrange(start, stop, step):
self.__setitem__(i, entity[i])
else:
super(OrderingList, self).__delslice__(start, end)
self._reorder()
# end Py2K
-
+
for func_name, func in locals().items():
if (util.callable(func) and func.func_name == func_name and
not func.__doc__ and hasattr(list, func_name)):
from sqlalchemy.ext.serializer import loads, dumps
metadata = MetaData(bind=some_engine)
Session = scoped_session(sessionmaker())
-
+
# ... define mappers
-
+
query = Session.query(MyClass).filter(MyClass.somedata=='foo').order_by(MyClass.sortkey)
-
+
# pickle the query
serialized = dumps(query)
-
+
# unpickle. Pass in metadata + scoped_session
query2 = loads(serialized, metadata, Session)
-
+
print query2.all()
Similar restrictions as when using raw pickle apply; mapped classes must be
def Serializer(*args, **kw):
pickler = pickle.Pickler(*args, **kw)
-
+
def persistent_id(obj):
#print "serializing:", repr(obj)
if isinstance(obj, QueryableAttribute):
else:
return None
return id
-
+
pickler.persistent_id = persistent_id
return pickler
-
+
our_ids = re.compile(r'(mapper|table|column|session|attribute|engine):(.*)')
def Deserializer(file, metadata=None, scoped_session=None, engine=None):
unpickler = pickle.Unpickler(file)
-
+
def get_engine():
if engine:
return engine
return metadata.bind
else:
return None
-
+
def persistent_load(id):
m = our_ids.match(id)
if not m:
pickler = Serializer(buf, protocol)
pickler.dump(obj)
return buf.getvalue()
-
+
def loads(data, metadata=None, scoped_session=None, engine=None):
buf = byte_buffer(data)
unpickler = Deserializer(buf, metadata, scoped_session, engine)
return unpickler.load()
-
-
+
+
via::
>>> from sqlalchemy.ext.sqlsoup import Session
-
+
The configuration of this session is ``autoflush=True``,
``autocommit=False``. This means when you work with the SqlSoup
object, you need to call ``db.commit()`` in order to have
engine_encoding = engine.dialect.encoding
mapname = mapname.encode(engine_encoding)
# end Py2K
-
+
if isinstance(selectable, Table):
klass = TableClassType(mapname, (base_cls,), {})
else:
except AttributeError:
raise TypeError('unable to compare with %s' % o.__class__)
return t1, t2
-
+
# python2/python3 compatible system of
# __cmp__ - __lt__ + __eq__
-
+
def __lt__(self, o):
t1, t2 = _compare(self, o)
return t1 < t2
def __eq__(self, o):
t1, t2 = _compare(self, o)
return t1 == t2
-
+
def __repr__(self):
L = ["%s=%r" % (key, getattr(self, key, ''))
for key in self.__class__.c.keys()]
return '%s(%s)' % (self.__class__.__name__, ','.join(L))
-
+
for m in ['__eq__', '__repr__', '__lt__']:
setattr(klass, m, eval(m))
klass._table = selectable
selectable,
extension=AutoAdd(session),
**mapper_kwargs)
-
+
for k in mappr.iterate_properties:
klass.c[k.key] = k.columns[0]
-
+
klass._query = session.query_property()
return klass
class SqlSoup(object):
"""Represent an ORM-wrapped database resource."""
-
+
def __init__(self, engine_or_metadata, base=object, session=None):
"""Initialize a new :class:`.SqlSoup`.
module is used.
"""
-
+
self.session = session or Session
self.base=base
-
+
if isinstance(engine_or_metadata, MetaData):
self._metadata = engine_or_metadata
elif isinstance(engine_or_metadata, (basestring, Engine)):
else:
raise ArgumentError("invalid engine or metadata argument %r" %
engine_or_metadata)
-
+
self._cache = {}
self.schema = None
-
+
@property
def bind(self):
"""The :class:`.Engine` associated with this :class:`.SqlSoup`."""
"""Mark an instance as deleted."""
self.session.delete(instance)
-
+
def execute(self, stmt, **params):
"""Execute a SQL statement.
-
+
The statement may be a plain SQL string,
an :func:`.expression.select` construct, or an :func:`.expression.text`
construct.
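e.g., a sketch (the table and parameter names are placeholders)::

    db.execute("UPDATE loans SET book_id=:book WHERE user_name=:user",
               book=1, user='some_user')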
-
+
"""
return self.session.execute(sql.text(stmt, bind=self.bind), **params)
-
+
@property
def _underlying_session(self):
if isinstance(self.session, session.Session):
return self.session
else:
return self.session()
-
+
def connection(self):
"""Return the current :class:`.Connection` in use by the current transaction."""
-
+
return self._underlying_session._connection_for_bind(self.bind)
-
+
def flush(self):
"""Flush pending changes to the database.
-
+
See :meth:`.Session.flush`.
-
+
"""
self.session.flush()
-
+
def rollback(self):
"""Rollback the current transction.
-
+
See :meth:`.Session.rollback`.
-
+
"""
self.session.rollback()
-
+
def commit(self):
"""Commit the current transaction.
-
+
See :meth:`.Session.commit`.
-
+
"""
self.session.commit()
-
+
def clear(self):
"""Synonym for :meth:`.SqlSoup.expunge_all`."""
-
+
self.session.expunge_all()
-
+
def expunge(self, instance):
"""Remove an instance from the :class:`.Session`.
-
+
See :meth:`.Session.expunge`.
-
+
"""
self.session.expunge(instance)
-
+
def expunge_all(self):
"""Clear all objects from the current :class:`.Session`.
-
+
See :meth:`.Session.expunge_all`.
-
+
"""
self.session.expunge_all()
def map_to(self, attrname, tablename=None, selectable=None,
schema=None, base=None, mapper_args=util.frozendict()):
"""Configure a mapping to the given attrname.
-
+
This is the "master" method that can be used to create any
configuration.
-
+
(new in 0.6.6)
-
+
:param attrname: String attribute name which will be
established as an attribute on this :class:`.SqlSoup`
instance.
argument.
:param schema: String schema name to use if the
``tablename`` argument is present.
-
-
+
+
"""
if attrname in self._cache:
raise InvalidRequestError(
attrname,
class_mapper(self._cache[attrname]).mapped_table
))
-
+
if tablename is not None:
if not isinstance(tablename, basestring):
raise ArgumentError("'tablename' argument must be a string."
raise PKNotFoundError(
"selectable '%s' does not have a primary "
"key defined" % selectable)
-
+
mapped_cls = _class_for_table(
self.session,
self.engine,
)
self._cache[attrname] = mapped_cls
return mapped_cls
-
+
def map(self, selectable, base=None, **mapper_args):
"""Map a selectable directly.
-
+
The class and its mapping are not cached and will
be discarded once dereferenced (as of 0.6.6).
-
+
:param selectable: an :func:`.expression.select` construct.
:param base: a Python class which will be used as the
base for the mapped class. If ``None``, the "base"
``object``.
:param mapper_args: Dictionary of arguments which will
be passed directly to :func:`.orm.mapper`.
-
+
"""
return _class_for_table(
The class and its mapping are not cached and will
be discarded once dereferenced (as of 0.6.6).
-
+
:param selectable: an :func:`.expression.select` construct.
:param base: a Python class which will be used as the
base for the mapped class. If ``None``, the "base"
``object``.
:param mapper_args: Dictionary of arguments which will
be passed directly to :func:`.orm.mapper`.
-
+
"""
-
+
# TODO give meaningful aliases
return self.map(
expression._clause_element_as_expr(selectable).
The class and its mapping are not cached and will
be discarded once dereferenced (as of 0.6.6).
-
+
:param left: a mapped class or table object.
:param right: a mapped class or table object.
:param onclause: optional "ON" clause construct.
``object``.
:param mapper_args: Dictionary of arguments which will
be passed directly to :func:`.orm.mapper`.
-
+
"""
-
+
j = join(left, right, onclause=onclause, isouter=isouter)
return self.map(j, base=base, **mapper_args)
def entity(self, attr, schema=None):
"""Return the named entity from this :class:`.SqlSoup`, or
create if not present.
-
+
For more generalized mapping, see :meth:`.map_to`.
-
+
"""
try:
return self._cache[attr]
except KeyError, ke:
return self.map_to(attr, tablename=attr, schema=schema)
-
+
def __getattr__(self, attr):
return self.entity(attr)
"""Hooks into the lifecycle of connections in a ``Pool``.
Usage::
-
+
class MyListener(PoolListener):
def connect(self, dbapi_con, con_record):
'''perform connect operations'''
# etc.
-
+
# create a new pool with a listener
p = QueuePool(..., listeners=[MyListener()])
-
+
# add a listener after the fact
p.add_listener(MyListener())
-
+
# usage with create_engine()
e = create_engine("url://", listeners=[MyListener()])
-
+
All of the standard connection :class:`~sqlalchemy.pool.Pool` types can
accept event listeners for key connection lifecycle events:
creation, pool check-out and check-in. There are no events fired
internal event queues based on its capabilities. In terms of
efficiency and function call overhead, you're much better off only
providing implementations for the hooks you'll be using.
-
+
"""
def connect(self, dbapi_con, con_record):
class ConnectionProxy(object):
"""Allows interception of statement execution by Connections.
-
+
Either or both of the ``execute()`` and ``cursor_execute()``
may be implemented to intercept compiled statement and
cursor level executions, e.g.::
-
+
class MyProxy(ConnectionProxy):
def execute(self, conn, execute, clauseelement, *multiparams, **params):
print "compiled statement:", clauseelement
return execute(clauseelement, *multiparams, **params)
-
+
def cursor_execute(self, execute, cursor, statement, parameters, context, executemany):
print "raw statement:", statement
return execute(cursor, statement, parameters, context)
The ``execute`` argument is a function that will fulfill the default
execution behavior for the operation. The signature illustrated
in the example should be used.
-
+
The proxy is installed into an :class:`~sqlalchemy.engine.Engine` via
the ``proxy`` argument::
-
+
e = create_engine('someurl://', proxy=MyProxy())
-
+
"""
def execute(self, conn, execute, clauseelement, *multiparams, **params):
"""Intercept high level execute() events."""
-
+
return execute(clauseelement, *multiparams, **params)
def cursor_execute(self, execute, cursor, statement, parameters, context, executemany):
"""Intercept low-level cursor execute() events."""
-
+
return execute(cursor, statement, parameters, context)
-
+
def begin(self, conn, begin):
"""Intercept begin() events."""
-
+
return begin()
-
+
def rollback(self, conn, rollback):
"""Intercept rollback() events."""
-
+
return rollback()
-
+
def commit(self, conn, commit):
"""Intercept commit() events."""
-
+
return commit()
-
+
def savepoint(self, conn, savepoint, name=None):
"""Intercept savepoint() events."""
-
+
return savepoint(name=name)
-
+
def rollback_savepoint(self, conn, rollback_savepoint, name, context):
"""Intercept rollback_savepoint() events."""
-
+
return rollback_savepoint(name, context)
-
+
def release_savepoint(self, conn, release_savepoint, name, context):
"""Intercept release_savepoint() events."""
-
+
return release_savepoint(name, context)
-
+
def begin_twophase(self, conn, begin_twophase, xid):
"""Intercept begin_twophase() events."""
-
+
return begin_twophase(xid)
-
+
def prepare_twophase(self, conn, prepare_twophase, xid):
"""Intercept prepare_twophase() events."""
-
+
return prepare_twophase(xid)
-
+
def rollback_twophase(self, conn, rollback_twophase, xid, is_prepared):
"""Intercept rollback_twophase() events."""
-
+
return rollback_twophase(xid, is_prepared)
-
+
def commit_twophase(self, conn, commit_twophase, xid, is_prepared):
"""Intercept commit_twophase() events."""
-
+
return commit_twophase(xid, is_prepared)
-
+
import logging
logger = logging.getLogger('sqlalchemy.engine.Engine.%s' % hex(id(engine)))
logger.setLevel(logging.DEBUG)
-
+
"""
import logging
cls._should_log_info = lambda self: logger.isEnabledFor(logging.INFO)
cls.logger = logger
_logged_classes.add(cls)
-
+
class Identified(object):
@util.memoized_property
# cause the app to run out of memory.
return "0x...%s" % hex(id(self))[-4:]
-
+
def instance_logger(instance, echoflag=None):
"""create a logger for an instance that implements :class:`Identified`.
-
+
Warning: this is an expensive call which also results in a permanent
increase in memory overhead for each call. Use only for
low-volume, long-time-spanning objects.
-
+
"""
name = "%s.%s.%s" % (instance.__class__.__module__,
instance.__class__.__name__, instance.logging_name)
-
+
if echoflag is not None:
l = logging.getLogger(name)
if echoflag == 'debug':
def create_session(bind=None, **kwargs):
"""Create a new :class:`.Session`
with no automation enabled by default.
-
+
This function is used primarily for testing. The usual
route to :class:`.Session` creation is via its constructor
or the :func:`.sessionmaker` function.
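e.g. (``some_engine`` is a placeholder for an :class:`.Engine`)::

    sess = create_session(bind=some_engine)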
def relationship(argument, secondary=None, **kwargs):
"""Provide a relationship of a primary Mapper to a secondary Mapper.
-
+
.. note:: :func:`relationship` is historically known as
:func:`relation` prior to version 0.6.
-
+
This corresponds to a parent-child or associative table relationship. The
constructed class is an instance of :class:`RelationshipProperty`.
for applications that make use of
:func:`.attributes.get_history` which also need to know
the "previous" value of the attribute. (New in 0.6.6)
-
+
:param backref:
indicates the string name of a property to be placed on the related
mapper's class that will handle this relationship in the other
when the mappers are configured. Can also be passed as a
:func:`backref` object to control the configuration of the
new relationship.
-
+
:param back_populates:
Takes a string name and has the same meaning as ``backref``,
except the complementing property is **not** created automatically,
* ``all`` - shorthand for "save-update,merge, refresh-expire,
expunge, delete"
-
+
:param cascade_backrefs=True:
a boolean value indicating if the ``save-update`` cascade should
operate along a backref event. When set to ``False`` on a
set to ``False`` on a many-to-one relationship that has a one-to-many
backref, appending a persistent object to the one-to-many collection
on a transient object will not add the transient to the session.
-
+
``cascade_backrefs`` is new in 0.6.5.
-
+
:param collection_class:
a class or callable that returns a new list-holding object. will
be used in place of a plain list for storing elements.
:param doc:
docstring which will be applied to the resulting descriptor.
-
+
:param extension:
an :class:`AttributeExtension` instance, or list of extensions,
which will be prepended to the list of attribute listeners for
"foreign" in the table metadata, allowing the specification
of a list of :class:`.Column` objects that should be considered
part of the foreign key.
-
+
There are only two use cases for ``foreign_keys`` - one, when it is not
convenient for :class:`.Table` metadata to contain its own foreign key
metadata (which should be almost never, unless reflecting a large amount of
via many-to-one using local foreign keys that are not nullable,
or when the reference is one-to-one or a collection that is
guaranteed to have at least one entry.
-
+
:param join_depth:
when non-``None``, an integer value indicating how many levels
deep "eager" loaders should join on a self-referring or cyclical
* ``select`` - items should be loaded lazily when the property is first
accessed, using a separate SELECT statement, or identity map
fetch for simple many-to-one references.
-
+
* ``immediate`` - items should be loaded as the parents are loaded,
using a separate SELECT statement, or identity map fetch for
simple many-to-one references. (new as of 0.6.5)
that of the parent, using a JOIN or LEFT OUTER JOIN. Whether
the join is "outer" or not is determined by the ``innerjoin``
parameter.
-
+
* ``subquery`` - items should be loaded "eagerly" within the same
query as that of the parent, using a second SQL statement
which issues a JOIN to a subquery of the original
allowing ``append()`` and ``remove()``. Changes to the
collection will not be visible until flushed
to the database, where it is then refetched upon iteration.
-
+
* True - a synonym for 'select'
-
+
* False - a synonym for 'joined'
-
+
* None - a synonym for 'noload'
-
+
Detailed discussion of loader strategies is at :ref:`loading_toplevel`.
-
+
:param load_on_pending=False:
Indicates loading behavior for transient or pending parent objects.
-
+
When set to ``True``, causes the lazy-loader to
issue a query for a parent object that is not persistent, meaning it has
never been flushed. This may take effect for a pending object when
"attached" to a :class:`.Session` but is not part of its pending
collection. Attachment of transient objects to the session without
moving to the "pending" state is not a supported behavior at this time.
-
+
Note that the load of related objects on a pending or transient object
also does not trigger any attribute change events - no user-defined
events will be emitted for these attributes, and if and when the
object is ultimately flushed, only the user-specific foreign key
attributes will be part of the modified state.
-
+
The load_on_pending flag does not improve behavior
when the ORM is used normally - object references should be constructed
at the object level, not at the foreign key level, so that they
are present in an ordinary way before flush() proceeds. This flag
is not intended for general use.
-
+
New in 0.6.5.
-
+
:param order_by:
indicates the ordering that should be applied when loading these
items.
(i.e. SQLite, MySQL MyISAM tables).
Also see the passive_updates flag on ``mapper()``.
-
+
A future SQLAlchemy release will provide a "detect" feature for
this flag.
should be treated either as one-to-one or one-to-many. Its
usage is optional unless delete-orphan cascade is also
set on this relationship(), in which case it's required (new in 0.5.2).
-
+
:param uselist=(True|False):
a boolean that indicates if this property should be loaded as a
list or a scalar. In most cases, this value is determined
def relation(*arg, **kw):
"""A synonym for :func:`relationship`."""
-
+
return relationship(*arg, **kw)
-
+
def dynamic_loader(argument, secondary=None, primaryjoin=None,
secondaryjoin=None, foreign_keys=None, backref=None,
post_update=False, cascade=False, remote_side=None,
it does not load immediately, and is instead loaded when the
attribute is first accessed on an instance. See also
:func:`~sqlalchemy.orm.deferred`.
-
+
:param doc:
optional string that will be applied as the doc on the
class-bound descriptor.
-
+
:param extension:
an :class:`~sqlalchemy.orm.interfaces.AttributeExtension` instance,
or list of extensions, which will be prepended to the list of
def composite(class_, *cols, **kwargs):
"""Return a composite column-based property for use with a Mapper.
-
+
See the mapping documention section :ref:`mapper_composite` for a full
usage example.
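A brief sketch (the ``Point`` class, table and column names are
hypothetical)::

    mapper(Vertex, vertices, properties={
        'start': composite(Point, vertices.c.x1, vertices.c.y1),
        'end': composite(Point, vertices.c.x2, vertices.c.y2)
    })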
-
+
:param class\_:
The "composite type" class.
:param passive_updates: Indicates UPDATE behavior of foreign keys
when a primary key changes on a joined-table inheritance or other
joined table mapping.
-
+
When True, it is assumed that ON UPDATE CASCADE is configured on
the foreign key in the database, and that the database will handle
propagation of an UPDATE from a source column to dependent rows.
required for this operation. The relationship() will update the
value of the attribute on related items which are locally present
in the session during a flush.
-
+
When False, it is assumed that the database does not enforce
referential integrity and will not be issuing its own CASCADE
operation for an update. The relationship() will issue the
appropriate UPDATE statements to the database in response to the
change of a referenced key, and items locally present in the
session during a flush will also be refreshed.
-
+
This flag should probably be set to False if primary key changes
are expected and the database in use doesn't support CASCADE (i.e.
SQLite, MySQL MyISAM tables).
-
+
Also see the passive_updates flag on :func:`relationship()`.
-
+
A future SQLAlchemy release will provide a "detect" feature for
this flag.
from sqlalchemy.orm import mapper, comparable_property
from sqlalchemy.orm.interfaces import PropComparator
from sqlalchemy.sql import func
-
+
class MyClass(object):
@property
def myprop(self):
Used with the ``properties`` dictionary sent to
:func:`~sqlalchemy.orm.mapper`.
-
+
Note that :func:`comparable_property` is usually not needed for basic
needs. The recipe at :mod:`.derived_attributes` offers a simpler
pure-Python method of achieving a similar result using class-bound
attributes with SQLAlchemy expression constructs.
-
+
:param comparator_factory:
A PropComparator subclass or factory that defines operator behavior
for this property.
def clear_mappers():
"""Remove all mappers from all classes.
-
+
This function removes all instrumentation from classes and disposes
of their associated mappers. Once called, the classes are unmapped
and can be later re-mapped with new mappers.
-
+
:func:`.clear_mappers` is *not* for normal use, as there is literally no
valid usage for it outside of very specific testing scenarios. Normally,
mappers are permanent structural components of user-defined classes, and
and possibly the test suites of other ORM extension libraries which
intend to test various combinations of mapper construction upon a fixed
set of classes.
-
+
"""
mapperlib._COMPILE_MUTEX.acquire()
try:
Used with :meth:`~sqlalchemy.orm.query.Query.options`.
examples::
-
+
# joined-load the "orders" collection on "User"
query(User).options(joinedload(User.orders))
-
+
# joined-load the "keywords" collection on each "Item",
# but not the "items" collection on "Order" - those
# remain lazily loaded.
:func:`joinedload` also accepts a keyword argument `innerjoin=True` which
indicates using an inner join instead of an outer::
-
+
query(Order).options(joinedload(Order.user, innerjoin=True))
-
+
Note that the join created by :func:`joinedload` is aliased such that no
other aspects of the query will affect what it loads. To use joined eager
loading with a join that is constructed manually using
:meth:`~sqlalchemy.orm.query.Query.join` or :func:`~sqlalchemy.orm.join`,
see :func:`contains_eager`.
-
+
See also: :func:`subqueryload`, :func:`lazyload`
-
+
"""
innerjoin = kw.pop('innerjoin', None)
if innerjoin is not None:
load in one joined eager load.
Individual descriptors are accepted as arguments as well::
-
+
query.options(joinedload_all(User.orders, Order.items, Item.keywords))
The keyword arguments accept a flag `innerjoin=True|False` which will
def eagerload(*args, **kwargs):
"""A synonym for :func:`joinedload()`."""
return joinedload(*args, **kwargs)
-
+
def eagerload_all(*args, **kwargs):
"""A synonym for :func:`joinedload_all()`"""
return joinedload_all(*args, **kwargs)
-
+
def subqueryload(*keys):
"""Return a ``MapperOption`` that will convert the property
of the given name into an subquery eager load.
Used with :meth:`~sqlalchemy.orm.query.Query.options`.
examples::
-
+
# subquery-load the "orders" collection on "User"
query(User).options(subqueryload(User.orders))
-
+
# subquery-load the "keywords" collection on each "Item",
# but not the "items" collection on "Order" - those
# remain lazily loaded.
query(Order).options(subqueryload_all(Order.items, Item.keywords))
See also: :func:`joinedload`, :func:`lazyload`
-
+
"""
return strategies.EagerLazyOption(keys, lazy="subquery")
load in one subquery eager load.
Individual descriptors are accepted as arguments as well::
-
+
query.options(subqueryload_all(User.orders, Order.items,
Item.keywords))
"""
return strategies.EagerLazyOption(keys, lazy="subquery", chained=True)
-
+
@sa_util.accepts_a_list_as_starargs(list_deprecation='deprecated')
def lazyload(*keys):
"""Return a ``MapperOption`` that will convert the property of the given
def immediateload(*keys):
"""Return a ``MapperOption`` that will convert the property of the given
name into an immediate load.
-
+
Used with :meth:`~sqlalchemy.orm.query.Query.options`.
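e.g., mirroring the :func:`joinedload` examples above::

    query(User).options(immediateload(User.orders))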
See also: :func:`lazyload`, :func:`eagerload`, :func:`subqueryload`
-
+
New as of version 0.6.5.
-
+
"""
return strategies.EagerLazyOption(keys, lazy='immediate')
-
+
def contains_alias(alias):
"""Return a ``MapperOption`` that will indicate to the query that
the main table has been aliased.
The option is used in conjunction with an explicit join that loads
the desired rows, i.e.::
-
+
sess.query(Order).\\
join(Order.user).\\
options(contains_eager(Order.user))
-
+
The above query would join from the ``Order`` entity to its related
``User`` entity, and the returned ``Order`` objects would have the
``Order.user`` attribute pre-populated.
string name of an alias, an :func:`~sqlalchemy.sql.expression.alias`
construct, or an :func:`~sqlalchemy.orm.aliased` construct. Use this when
the eagerly-loaded rows are to come from an aliased table::
-
+
user_alias = aliased(User)
sess.query(Order).\\
join((user_alias, Order.user)).\\
def hasparent(self, state, optimistic=False):
return self.impl.hasparent(state, optimistic=optimistic)
-
+
def __getattr__(self, key):
try:
return getattr(self.comparator, key)
type(self.comparator).__name__,
key)
)
-
+
def __str__(self):
return repr(self.parententity) + "." + self.property.key
class _ProxyImpl(object):
accepts_scalar_loader = False
expire_missing = True
-
+
def __init__(self, key):
self.key = key
def __getattr__(self, attribute):
"""Delegate __getattr__ to the original descriptor and/or
comparator."""
-
+
try:
return getattr(descriptor, attribute)
except AttributeError:
\class_
associated class
-
+
key
string name of the attribute
the hasparent() function to identify an "owning" attribute.
Allows multiple AttributeImpls to all match a single
owner attribute.
-
+
expire_missing
if False, don't add an "expiry" callable to this attribute
during state.expire_attributes(None), if no value is present
for this key.
-
+
"""
self.class_ = class_
self.key = key
break
self.active_history = active_history
self.expire_missing = expire_missing
-
-
+
+
def hasparent(self, state, optimistic=False):
"""Return the boolean value of a `hasparent` flag attached to
the given state.
if state.committed_state.get(self.key, NEVER_SET) is NEVER_SET:
if passive is PASSIVE_NO_INITIALIZE:
return PASSIVE_NO_RESULT
-
+
callable_ = self._get_callable(state)
if callable_ is not None:
#if passive is not PASSIVE_OFF:
# Return a new, empty value
return self.initialize(state, dict_)
-
+
def append(self, state, dict_, value, initiator, passive=PASSIVE_OFF):
self.set(state, dict_, value, initiator, passive=passive)
v = state.committed_state.get(self.key, NO_VALUE)
else:
v = dict_.get(self.key, NO_VALUE)
-
+
return History.from_attribute(
self, state, v)
where the target object is also instrumented.
Adds events to delete/set operations.
-
+
"""
accepts_scalar_loader = False
old = self.get(state, dict_, passive=PASSIVE_ONLY_PERSISTENT)
else:
old = self.get(state, dict_, passive=PASSIVE_NO_FETCH)
-
+
value = self.fire_replace_event(state, dict_, value, old, initiator)
dict_[self.key] = value
previous is not None and
previous is not PASSIVE_NO_RESULT):
self.sethasparent(instance_state(previous), False)
-
+
for ext in self.extensions:
value = ext.set(state, value, previous, initiator or self)
return
state.modified_event(dict_, self, True, old)
-
+
old_collection = self.get_collection(state, dict_, old)
dict_[self.key] = user_data
state.commit(dict_, [self.key])
if self.key in state.pending:
-
+
# pending items exist. issue a modified event,
# add/remove new items.
state.modified_event(dict_, self, True, user_data)
are two objects which contain scalar references to each other.
"""
-
+
active_history = False
-
+
def __init__(self, key):
self.key = key
initiator, passive=PASSIVE_NO_FETCH)
except (ValueError, KeyError, IndexError):
pass
-
+
if child is not None:
child_state, child_dict = instance_state(child),\
instance_dict(child)
event_registry_factory = Events
deferred_scalar_loader = None
-
+
def __init__(self, class_):
self.class_ = class_
self.factory = None # where we came from, for inheritance bookkeeping
self.events = self.event_registry_factory()
self.manage()
self._instrument_init()
-
+
@property
def is_mapped(self):
return 'mapper' in self.__dict__
-
+
@util.memoized_property
def mapper(self):
raise exc.UnmappedClassError(self.class_)
-
+
def _attr_has_impl(self, key):
"""Return True if the given attribute is fully initialized.
-
+
i.e. has an impl.
"""
-
+
return key in self and self[key].impl is not None
-
+
def _configure_create_arguments(self,
_source=None,
deferred_scalar_loader=None):
"""Accept extra **kw arguments passed to create_manager_for_cls.
-
+
The current contract of ClassManager and other managers is that they
take a single "cls" argument in their constructor (as per
test/orm/instrumentation.py InstrumentationCollisionTest). This
ClassManager-like instances. So create_manager_for_cls sends
in ClassManager-specific arguments via this method once the
non-proxied ClassManager is available.
-
+
"""
if _source:
deferred_scalar_loader = _source.deferred_scalar_loader
if deferred_scalar_loader:
self.deferred_scalar_loader = deferred_scalar_loader
-
+
def _subclass_manager(self, cls):
"""Create a new ClassManager for a subclass of this ClassManager's
class.
-
+
This is called automatically when attributes are instrumented so that
the attributes can be propagated to subclasses against their own
class-local manager, without the need for mappers etc. to have already
pre-configured managers for the full class hierarchy. Mappers
can post-configure the auto-generated ClassManager when needed.
-
+
"""
manager = manager_of_class(cls)
if manager is None:
manager = _create_manager_for_cls(cls, _source=self)
return manager
-
+
def _instrument_init(self):
# TODO: self.class_.__init__ is often the already-instrumented
# __init__ from an instrumented superclass. We still need to make
self.events.original_init = self.class_.__init__
self.new_init = _generate_init(self.class_, self)
self.install_member('__init__', self.new_init)
-
+
def _uninstrument_init(self):
if self.new_init:
self.uninstall_member('__init__')
self.new_init = None
-
+
def _create_instance_state(self, instance):
if self.mutable_attributes:
return state.MutableAttrInstanceState(instance, self)
else:
return state.InstanceState(instance, self)
-
+
def manage(self):
"""Mark this instance as the manager for its class."""
-
+
setattr(self.class_, self.MANAGER_ATTR, self)
def dispose(self):
"""Dissasociate this manager from its class."""
-
+
delattr(self.class_, self.MANAGER_ATTR)
def manager_getter(self):
self.local_attrs[key] = inst
self.install_descriptor(key, inst)
self[key] = inst
-
+
for cls in self.class_.__subclasses__():
manager = self._subclass_manager(cls)
manager.instrument_attribute(key, inst, True)
def post_configure_attribute(self, key):
pass
-
+
def uninstrument_attribute(self, key, propagated=False):
if key not in self:
return
def unregister(self):
"""remove all instrumentation established by this ClassManager."""
-
+
self._uninstrument_init()
self.mapper = self.events = None
self.info.clear()
-
+
for key in list(self):
if key in self.local_attrs:
self.uninstrument_attribute(key)
def setup_instance(self, instance, state=None):
setattr(instance, self.STATE_ATTR,
state or self._create_instance_state(instance))
-
+
def teardown_instance(self, instance):
delattr(instance, self.STATE_ATTR)
-
+
def _new_state_if_none(self, instance):
"""Install a default InstanceState if none is present.
A private convenience method used by the __init__ decorator.
-
+
"""
if hasattr(instance, self.STATE_ATTR):
return False
state = self._create_instance_state(instance)
setattr(instance, self.STATE_ATTR, state)
return state
-
+
def state_getter(self):
"""Return a (instance) -> InstanceState callable.
"""
return attrgetter(self.STATE_ATTR)
-
+
def dict_getter(self):
return attrgetter('__dict__')
-
+
def has_state(self, instance):
return hasattr(instance, self.STATE_ATTR)
-
+
def has_parent(self, state, key, optimistic=False):
"""TODO"""
return self.get_impl(key).hasparent(state, optimistic=optimistic)
self._adapted = override
self._get_state = self._adapted.state_getter(class_)
self._get_dict = self._adapted.dict_getter(class_)
-
+
ClassManager.__init__(self, class_, **kw)
def manage(self):
def setup_instance(self, instance, state=None):
self._adapted.initialize_instance_dict(self.class_, instance)
-
+
if state is None:
state = self._create_instance_state(instance)
-
+
# the given instance is assumed to have no state
self._adapted.install_state(self.class_, instance, state)
return state
return False
else:
return True
-
+
def state_getter(self):
return self._get_state
"""A 3-tuple of added, unchanged and deleted values,
representing the changes which have occurred on an instrumented
attribute.
-
+
Each tuple member is an iterable sequence.
"""
added = property(itemgetter(0))
"""Return the collection of items added to the attribute (the first tuple
element)."""
-
+
unchanged = property(itemgetter(1))
"""Return the collection of items that have not changed on the attribute
(the second tuple element)."""
-
-
+
+
deleted = property(itemgetter(2))
"""Return the collection of items that have been removed from the
attribute (the third tuple element)."""
-
+
def __new__(cls, added, unchanged, deleted):
return tuple.__new__(cls, (added, unchanged, deleted))
-
+
def __nonzero__(self):
return self != HISTORY_BLANK
-
+
def empty(self):
"""Return True if this :class:`History` has no changes
and no existing, unchanged state.
-
+
"""
-
+
return not bool(
(self.added or self.deleted)
or self.unchanged and self.unchanged != [None]
)
-
+
def sum(self):
"""Return a collection of added + unchanged + deleted."""
-
+
return (self.added or []) +\
(self.unchanged or []) +\
(self.deleted or [])
-
+
def non_deleted(self):
"""Return a collection of added + unchanged."""
-
+
return (self.added or []) +\
(self.unchanged or [])
-
+
def non_added(self):
"""Return a collection of unchanged + deleted."""
-
+
return (self.unchanged or []) +\
(self.deleted or [])
-
+
def has_changes(self):
"""Return True if this :class:`History` has changes."""
-
+
return bool(self.added or self.deleted)
-
+
def as_state(self):
return History(
[(c is not None and c is not PASSIVE_NO_RESULT)
and instance_state(c) or None
for c in self.deleted],
)
-
+
@classmethod
def from_attribute(cls, attribute, state, current):
original = state.committed_state.get(attribute.key, NEVER_SET)
def get_history(obj, key, **kwargs):
"""Return a :class:`.History` record for the given object
and attribute key.
-
+
:param obj: an object whose class is instrumented by the
- attributes package.
-
+ attributes package.
+
:param key: string attribute name.
-
+
:param kwargs: Optional keyword arguments currently
include the ``passive`` flag, which indicates if the attribute should be
loaded from the database if not already present (:attr:`PASSIVE_NO_FETCH`), and
if the attribute should not be initialized to a blank value otherwise
(:attr:`PASSIVE_NO_INITIALIZE`). Default is :attr:`PASSIVE_OFF`.
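A brief sketch (the object and attribute names are placeholders)::

    from sqlalchemy.orm.attributes import get_history

    hist = get_history(someobject, 'some_attribute')
    added, unchanged, deleted = hist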
-
+
"""
return get_state_history(instance_state(obj), key, **kwargs)
def register_class(class_, **kw):
"""Register class instrumentation.
-
+
Returns the existing or newly created class manager.
"""
if manager is None:
manager = _create_manager_for_cls(class_, **kw)
return manager
-
+
def unregister_class(class_):
"""Unregister class instrumentation."""
-
+
instrumentation_registry.unregister(class_)
def register_attribute(class_, key, **kw):
proxy_property = kw.pop('proxy_property', None)
-
+
comparator = kw.pop('comparator', None)
parententity = kw.pop('parententity', None)
doc = kw.pop('doc', None)
comparator, parententity, doc=doc)
if not proxy_property:
register_attribute_impl(class_, key, **kw)
-
-def register_attribute_impl(class_, key,
+
+def register_attribute_impl(class_, key,
uselist=False, callable_=None,
useobject=False, mutable_scalars=False,
impl_class=None, **kw):
-
+
manager = manager_of_class(class_)
if uselist:
factory = kw.pop('typecallable', None)
impl = ScalarAttributeImpl(class_, key, callable_, **kw)
manager[key].impl = impl
-
+
manager.post_configure_attribute(key)
-
+
def register_descriptor(class_, key, proxy_property=None, comparator=None,
parententity=None, property_=None, doc=None):
manager = manager_of_class(class_)
else:
descriptor = InstrumentedAttribute(key, comparator=comparator,
parententity=parententity)
-
+
descriptor.__doc__ = doc
-
+
manager.instrument_attribute(key, descriptor)
def unregister_attribute(class_, key):
def init_collection(obj, key):
"""Initialize a collection attribute and return the collection adapter.
-
+
This function is used to provide direct access to collection internals
for a previously unloaded attribute. e.g.::
-
+
collection_adapter = init_collection(someobject, 'elements')
for elem in values:
collection_adapter.append_without_event(elem)
-
+
For an easier way to do the above, see
:func:`~sqlalchemy.orm.attributes.set_committed_value`.
-
+
obj is an instrumented object instance. An InstanceState
is accepted directly for backwards compatibility but
this usage is deprecated.
-
+
"""
state = instance_state(obj)
dict_ = state.dict
return init_state_collection(state, dict_, key)
-
+
def init_state_collection(state, dict_, key):
"""Initialize a collection attribute and return the collection adapter."""
-
+
attr = state.get_impl(key)
user_data = attr.initialize(state, dict_)
return attr.get_collection(state, dict_, user_data)
def set_committed_value(instance, key, value):
"""Set the value of an attribute with no history events.
-
+
Cancels any previous history present. The value should be
a scalar value for scalar-holding attributes, or
an iterable for any collection-holding attribute.
which has loaded additional attributes or collections through
separate queries, which can then be attached to an instance
as though it were part of its original loaded state.
-
+
"""
state, dict_ = instance_state(instance), instance_dict(instance)
state.get_impl(key).set_committed_value(state, dict_, value)
-
+
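# [editor's illustration] A hedged sketch of set_committed_value() above;
# 'session', 'user' and 'Address' are hypothetical fixtures (a persistent
# User instance with an as-yet-unloaded 'addresses' collection).
from sqlalchemy.orm.attributes import set_committed_value

addresses = session.query(Address).filter_by(user_id=user.id).all()
set_committed_value(user, 'addresses', addresses)
# the collection now appears as loaded state; no pending history is
# recorded, so the next flush will not treat it as a change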
def set_attribute(instance, key, value):
"""Set the value of an attribute, firing history events.
-
+
This function may be used regardless of instrumentation
applied directly to the class, i.e. no descriptors are required.
Custom attribute management schemes will need to use
this method to establish attribute state as understood
by SQLAlchemy.
-
+
"""
state, dict_ = instance_state(instance), instance_dict(instance)
state.get_impl(key).set(state, dict_, value, None)
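# [editor's illustration] The module also provides get_attribute() and
# del_attribute() alongside set_attribute(); a hedged sketch, where 'user'
# is a hypothetical instance of an instrumented class.
from sqlalchemy.orm.attributes import set_attribute, get_attribute, \
    del_attribute

set_attribute(user, 'name', 'jack')       # fires the usual attribute events
assert get_attribute(user, 'name') == 'jack'
del_attribute(user, 'name')               # fires a remove/history event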
Custom attribute management schemes will need to use
this method to access attribute state as understood
by SQLAlchemy.
-
+
"""
state, dict_ = instance_state(instance), instance_dict(instance)
return state.get_impl(key).get(state, dict_)
Custom attribute management schemes will need to use
this method to establish attribute state as understood
by SQLAlchemy.
-
+
"""
state, dict_ = instance_state(instance), instance_dict(instance)
state.get_impl(key).delete(state, dict_)
def is_instrumented(instance, key):
"""Return True if the given attribute on the given instance is
instrumented by the attributes package.
-
+
This function may be used regardless of instrumentation
applied directly to the class, i.e. no descriptors are required.
-
+
"""
return manager_of_class(instance.__class__).\
is_instrumented(key, search=True)
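# [editor's illustration] is_instrumented() answers whether a given
# attribute participates in SQLAlchemy instrumentation; 'User' is a
# hypothetical mapped class with a 'name' column attribute.
from sqlalchemy.orm.attributes import is_instrumented

assert is_instrumented(User(), 'name')
# a plain, unmapped Python attribute is reported as not instrumented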
class InstrumentationRegistry(object):
"""Private instrumentation registration singleton.
-
+
All classes are routed through this registry
when first instrumented, however the InstrumentationRegistry
is not actually needed unless custom ClassManagers are in use.
-
+
"""
_manager_finders = weakref.WeakKeyDictionary()
manager = factory(class_)
if not isinstance(manager, ClassManager):
manager = _ClassInstrumentationAdapter(class_, manager)
-
+
if factory != ClassManager and not self._extended:
# somebody invoked a custom ClassManager.
# reinstall global "getter" functions with the more
# expensive ones.
self._extended = True
_install_lookup_strategy(self)
-
+
manager._configure_create_arguments(**kw)
manager.factory = factory
except KeyError:
raise AttributeError("%r is not instrumented" %
instance.__class__)
-
+
def unregister(self, class_):
if class_ in self._manager_finders:
manager = self.manager_of_class(class_)
del self._dict_finders[class_]
if ClassManager.MANAGER_ATTR in class_.__dict__:
delattr(class_, ClassManager.MANAGER_ATTR)
-
+
instrumentation_registry = InstrumentationRegistry()
def _install_lookup_strategy(implementation):
with either faster or more comprehensive implementations,
based on whether or not extended class instrumentation
has been detected.
-
+
This function is called only by InstrumentationRegistry()
and unit tests specific to this behavior.
-
+
"""
global instance_state, instance_dict, manager_of_class
if implementation is util.symbol('native'):
instance_state = instrumentation_registry.state_of
instance_dict = instrumentation_registry.dict_of
manager_of_class = instrumentation_registry.manager_of_class
-
+
_create_manager_for_cls = instrumentation_registry.create_manager_for_cls
# Install default "lookup" strategies. These are basically
# implementations
def collection_adapter(collection):
"""Fetch the :class:`.CollectionAdapter` for a collection."""
-
+
return getattr(collection, '_sa_adapter', None)
def collection_iter(collection):
self._data = weakref.ref(data)
self.owner_state = owner_state
self.link_to_self(data)
-
+
@property
def data(self):
"The entity collection being adapted."
@util.memoized_property
def attr(self):
return self.owner_state.manager[self._key].impl
-
+
def link_to_self(self, data):
"""Link a collection to this adapter, and fire a link event."""
setattr(data, '_sa_adapter', self)
def append_with_event(self, item, initiator=None):
"""Add an entity to the collection, firing mutation events."""
-
+
getattr(self._data(), '_sa_appender')(item, _sa_initiator=initiator)
def append_without_event(self, item):
def __iter__(self):
"""Iterate over entities in the collection."""
-
+
# Py3K requires iter() here
return iter(getattr(self._data(), '_sa_iterator')())
if executor:
item = getattr(executor, 'fire_append_event')(item, _sa_initiator)
return item
-
+
def __del(collection, item, _sa_initiator=None):
"""Run del events, may eventually be inlined into decorators."""
if _sa_initiator is not False and item is not None:
stop = index.stop or len(self)
if stop < 0:
stop += len(self)
-
+
if step == 1:
for i in xrange(start, stop, step):
if len(self) > start:
del self[start]
-
+
for i, item in enumerate(value):
self.insert(i + start, item)
else:
_tidy(__delslice__)
return __delslice__
# end Py2K
-
+
def extend(fn):
def extend(self, iterable):
for value in iterable:
__instrumentation__ = {
'iterator': 'itervalues', }
# end Py2K
-
+
__canned_instrumentation = {
list: InstrumentedList,
set: InstrumentedSet,
"No target attributes to populate between parent and "
"child are present" %
self.prop)
-
+
@classmethod
def from_relationship(cls, prop):
return _direction_to_processor[prop.direction](prop)
-
+
def hasparent(self, state):
"""return True if the given object instance has a parent,
according to the ``InstrumentedAttribute`` handled by this
``DependencyProcessor``.
-
+
"""
return self.parent.class_manager.get_impl(self.key).hasparent(state)
def per_property_preprocessors(self, uow):
"""establish actions and dependencies related to a flush.
-
+
These actions will operate on all relevant states in
the aggregate.
-
+
"""
uow.register_preprocessor(self, True)
-
-
+
+
def per_property_flush_actions(self, uow):
after_save = unitofwork.ProcessAll(uow, self, False, True)
before_delete = unitofwork.ProcessAll(uow, self, True, True)
uow,
self.mapper.primary_base_mapper
)
-
+
self.per_property_dependencies(uow,
parent_saves,
child_saves,
after_save,
before_delete
)
-
+
def per_state_flush_actions(self, uow, states, isdelete):
"""establish actions and dependencies related to a flush.
-
+
These actions will operate on all relevant states
individually. This occurs only if there are cycles
in the 'aggregated' version of events.
-
+
"""
parent_base_mapper = self.parent.primary_base_mapper
# locate and disable the aggregate processors
# for this dependency
-
+
if isdelete:
before_delete = unitofwork.ProcessAll(uow, self, True, True)
before_delete.disabled = True
after_save.disabled = True
# check if the "child" side is part of the cycle
-
+
if child_saves not in uow.cycles:
# based on the current dependencies we use, the saves/
# deletes should always be in the 'cycles' collection
# together. if this changes, we will have to break up
# this method a bit more.
assert child_deletes not in uow.cycles
-
+
# child side is not part of the cycle, so we will link per-state
# actions to the aggregate "saves", "deletes" actions
child_actions = [
child_in_cycles = False
else:
child_in_cycles = True
-
+
# check if the "parent" side is part of the cycle
if not isdelete:
parent_saves = unitofwork.SaveUpdateAll(
parent_saves = after_save = None
if parent_deletes in uow.cycles:
parent_in_cycles = True
-
+
# now create actions /dependencies for each state.
for state in states:
# detect if there's anything changed or loaded
uow,
state,
parent_base_mapper)
-
+
if child_in_cycles:
child_actions = []
for child_state in sum_:
child_base_mapper),
False)
child_actions.append(child_action)
-
+
# establish dependencies between our possibly per-state
# parent action and our possibly per-state child action.
for child_action, childisdelete in child_actions:
child_action,
after_save, before_delete,
isdelete, childisdelete)
-
-
+
+
def presort_deletes(self, uowcommit, states):
return False
-
+
def presort_saves(self, uowcommit, states):
return False
-
+
def process_deletes(self, uowcommit, states):
pass
-
+
def process_saves(self, uowcommit, states):
pass
def prop_has_changes(self, uowcommit, states, isdelete):
passive = not isdelete or self.passive_deletes
-
+
for s in states:
# TODO: add a high speed method
# to InstanceState which returns: attribute
return True
else:
return False
-
+
def _verify_canload(self, state):
if state is not None and \
not self.mapper._canload(state,
"Attempting to flush an item of type %s on collection '%s', "
"whose mapper does not inherit from that of %s." %
(state.class_, self.prop, self.mapper.class_))
-
+
def _synchronize(self, state, child, associationrow,
clearkeys, uowcommit):
raise NotImplementedError()
[r for l, r in self.prop.synchronize_pairs]
)
break
-
+
def _pks_changed(self, uowcommit, state):
raise NotImplementedError()
return "%s(%s)" % (self.__class__.__name__, self.prop)
class OneToManyDP(DependencyProcessor):
-
+
def per_property_dependencies(self, uow, parent_saves,
child_saves,
parent_deletes,
uow,
self.mapper.primary_base_mapper,
True)
-
+
uow.dependencies.update([
(child_saves, after_save),
(parent_saves, after_save),
(after_save, child_post_updates),
-
+
(before_delete, child_pre_updates),
(child_pre_updates, parent_deletes),
(child_pre_updates, child_deletes),
-
+
])
else:
uow.dependencies.update([
(parent_saves, after_save),
(after_save, child_saves),
(after_save, child_deletes),
-
+
(child_saves, parent_deletes),
(child_deletes, parent_deletes),
(before_delete, child_saves),
(before_delete, child_deletes),
])
-
+
def per_state_dependencies(self, uow,
save_parent,
delete_parent,
child_action,
after_save, before_delete,
isdelete, childisdelete):
-
+
if self.post_update:
child_post_updates = unitofwork.IssuePostUpdate(
uow,
self.mapper.primary_base_mapper,
True)
-
+
# TODO: this whole block is not covered
# by any tests
if not isdelete:
(before_delete, child_action),
(child_action, delete_parent)
])
-
+
def presort_deletes(self, uowcommit, states):
# head object is being deleted, and we manage its list of
# child objects; the child objects have to have their
uowcommit.register_object(child, isdelete=True)
else:
uowcommit.register_object(child)
-
+
if should_null_fks:
for child in history.unchanged:
if child is not None:
uowcommit.register_object(child)
-
-
+
+
def presort_saves(self, uowcommit, states):
children_added = uowcommit.memo(('children_added', self), set)
-
+
for state in states:
pks_changed = self._pks_changed(uowcommit, state)
-
+
history = uowcommit.get_attribute_history(
state,
self.key,
child,
False,
self.passive_updates)
-
+
def process_deletes(self, uowcommit, states):
# head object is being deleted, and we manage its list of
# child objects; the child objects have to have their foreign
# key to the parent set to NULL this phase can be called
# safely for any cascade but is unnecessary if delete cascade
# is on.
-
+
if self.post_update or not self.passive_deletes == 'all':
children_added = uowcommit.memo(('children_added', self), set)
uowcommit, False)
if self.post_update and child:
self._post_update(child, uowcommit, [state])
-
+
if self.post_update or not self.cascade.delete:
for child in set(history.unchanged).\
difference(children_added):
self._post_update(child,
uowcommit,
[state])
-
+
# technically, we can even remove each child from the
# collection here too. but this would be a somewhat
# inconsistent behavior since it wouldn't happen
# if the old parent wasn't deleted but the child was moved.
-
+
def process_saves(self, uowcommit, states):
for state in states:
history = uowcommit.get_attribute_history(state,
for child in history.unchanged:
self._synchronize(state, child, None,
False, uowcommit, True)
-
+
def _synchronize(self, state, child,
associationrow, clearkeys, uowcommit,
pks_changed):
isdelete, childisdelete):
if self.post_update:
-
+
if not isdelete:
parent_post_updates = unitofwork.IssuePostUpdate(
uow,
uow.dependencies.update([
(save_parent, after_save),
(child_action, after_save),
-
+
(after_save, parent_post_updates)
])
else:
(parent_pre_updates, delete_parent),
(parent_pre_updates, child_action)
])
-
+
elif not isdelete:
if not childisdelete:
uow.dependencies.update([
uow.dependencies.update([
(after_save, save_parent),
])
-
+
else:
if childisdelete:
uow.dependencies.update([
'delete', child):
uowcommit.register_object(
attributes.instance_state(c), isdelete=True)
-
+
def presort_saves(self, uowcommit, states):
for state in states:
uowcommit.register_object(state)
if self.post_update and \
not self.cascade.delete_orphan and \
not self.passive_deletes == 'all':
-
+
# post_update means we have to update our
# row to not reference the child object
# before we can DELETE the row
if history:
for child in history.added:
self._synchronize(state, child, None, False, uowcommit)
-
+
if self.post_update:
self._post_update(state, uowcommit, history.sum())
"""For many-to-one relationships with no one-to-many backref,
searches for parents through the unit of work when a primary
key has changed and updates them.
-
+
Theoretically, this approach could be expanded to support transparent
deletion of objects referenced via many-to-one as well, although
the current attribute system doesn't do enough bookkeeping for this
to be efficient.
-
+
"""
def per_property_preprocessors(self, uow):
if False in (prop.passive_updates for \
prop in self.prop._reverse_property):
return
-
+
uow.register_preprocessor(self, False)
def per_property_flush_actions(self, uow):
uow.dependencies.update([
(parent_saves, after_save)
])
-
+
def per_state_flush_actions(self, uow, states, isdelete):
pass
-
+
def presort_deletes(self, uowcommit, states):
pass
if not isdelete and self.passive_updates:
d = self._key_switchers(uow, states)
return bool(d)
-
+
return False
-
+
def process_deletes(self, uowcommit, states):
assert False
# statements being emitted
assert self.passive_updates
self._process_key_switches(states, uowcommit)
-
+
def _key_switchers(self, uow, states):
switched, notswitched = uow.memo(
('pk_switchers', self),
lambda: (set(), set())
)
-
+
allstates = switched.union(notswitched)
for s in states:
if s not in allstates:
else:
notswitched.add(s)
return switched
-
+
def _process_key_switches(self, deplist, uowcommit):
switchers = self._key_switchers(uowcommit, deplist)
if switchers:
class ManyToManyDP(DependencyProcessor):
-
+
def per_property_dependencies(self, uow, parent_saves,
child_saves,
parent_deletes,
(parent_saves, after_save),
(child_saves, after_save),
(after_save, child_deletes),
-
+
# a rowswitch on the parent from deleted to saved
# can make this one occur, as the "save" may remove
# an element from the
# "deleted" list before we have a chance to
# process its child rows
(before_delete, parent_saves),
-
+
(before_delete, parent_deletes),
(before_delete, child_deletes),
(before_delete, child_saves),
(before_delete, child_action),
(before_delete, delete_parent)
])
-
+
def presort_deletes(self, uowcommit, states):
if not self.passive_deletes:
# if no passive deletes, load history on
state,
self.key,
passive=self.passive_deletes)
-
+
def presort_saves(self, uowcommit, states):
if not self.passive_updates:
# if no passive updates, load history on
if not self.cascade.delete_orphan:
return
-
+
# check for child items removed from the collection
# if delete_orphan check is turned on.
for state in states:
child):
uowcommit.register_object(
attributes.instance_state(c), isdelete=True)
-
+
def process_deletes(self, uowcommit, states):
secondary_delete = []
secondary_insert = []
secondary_update = []
-
+
processed = self._get_reversed_processed_set(uowcommit)
tmp = set()
for state in states:
associationrow,
False, uowcommit)
secondary_delete.append(associationrow)
-
+
tmp.update((c, state) for c in history.non_added())
if processed is not None:
processed.update(tmp)
-
+
self._run_crud(uowcommit, secondary_insert,
secondary_update, secondary_delete)
associationrow,
False, uowcommit)
secondary_delete.append(associationrow)
-
+
tmp.update((c, state)
for c in history.added + history.deleted)
-
+
if need_cascade_pks:
-
+
for child in history.unchanged:
associationrow = {}
sync.update(state,
self.prop.secondary_synchronize_pairs)
secondary_update.append(associationrow)
-
+
if processed is not None:
processed.update(tmp)
-
+
self._run_crud(uowcommit, secondary_insert,
secondary_update, secondary_delete)
-
+
def _run_crud(self, uowcommit, secondary_insert,
secondary_update, secondary_delete):
connection = uowcommit.transaction.connection(self.mapper)
-
+
if secondary_delete:
associationrow = secondary_delete[0]
statement = self.secondary.delete(sql.and_(*[
if c.key in associationrow
]))
result = connection.execute(statement, secondary_delete)
-
+
if result.supports_sane_multi_rowcount() and \
result.rowcount != len(secondary_delete):
raise exc.StaleDataError(
if secondary_insert:
statement = self.secondary.insert()
connection.execute(statement, secondary_insert)
-
+
def _synchronize(self, state, child, associationrow,
clearkeys, uowcommit):
if associationrow is None:
return
self._verify_canload(child)
-
+
sync.populate_dict(state, self.parent, associationrow,
self.prop.synchronize_pairs)
sync.populate_dict(child, self.mapper, associationrow,
uses_objects = True
accepts_scalar_loader = False
supports_population = False
-
+
def __init__(self, class_, key, typecallable,
target_mapper, order_by, query_class=None, **kwargs):
super(DynamicAttributeImpl, self).\
def set_committed_value(self, state, dict_, value):
raise NotImplementedError("Dynamic attributes don't support "
"collection population.")
-
+
def get_history(self, state, dict_, passive=False):
c = self._get_collection_history(state, passive)
return attributes.History(c.added_items, c.unchanged_items,
query = self.query_class(self.attr.target_mapper, session=sess)
else:
query = sess.query(self.attr.target_mapper)
-
+
query._criterion = self._criterion
query._order_by = self._order_by
-
+
return query
def append(self, item):
class StaleDataError(sa.exc.SQLAlchemyError):
"""An operation encountered database state that is unaccounted for.
-
+
Two conditions cause this to happen:
-
+
* A flush may have attempted to update or delete rows
and an unexpected number of rows were matched during
the UPDATE or DELETE statement. Note that when
version_id_col is used, rows in UPDATE or DELETE statements
are also matched against the current known version
identifier.
-
+
* A mapped object with version_id_col was refreshed,
and the version number coming back from the database does
not match that of the object itself.
-
+
"""
-
+
ConcurrentModificationError = StaleDataError
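# [editor's illustration] A hedged sketch of handling the exception above
# during a flush of versioned rows; 'session' is a hypothetical Session.
from sqlalchemy.orm.exc import StaleDataError

try:
    session.flush()
except StaleDataError:
    # a concurrent transaction updated or deleted rows we expected to
    # match, or a version_id_col no longer agrees; give up on this flush
    session.rollback()
    raise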
class DetachedInstanceError(sa.exc.SQLAlchemyError):
"""An attempt to access unloaded attributes on a
mapped instance that is detached."""
-
+
class UnmappedInstanceError(UnmappedError):
"""An mapping operation was requested for an unknown instance."""
self._mutable_attrs = set()
self._modified = set()
self._wr = weakref.ref(self)
-
+
def replace(self, state):
raise NotImplementedError()
-
+
def add(self, state):
raise NotImplementedError()
-
+
def remove(self, state):
raise NotImplementedError()
-
+
def update(self, dict):
raise NotImplementedError("IdentityMap uses add() to insert data")
-
+
def clear(self):
raise NotImplementedError("IdentityMap uses remove() to remove data")
-
+
def _manage_incoming_state(self, state):
state._instance_dict = self._wr
-
+
if state.modified:
- self._modified.add(state)
+ self._modified.add(state)
if state.manager.mutable_attributes:
self._mutable_attrs.add(state)
-
+
def _manage_removed_state(self, state):
del state._instance_dict
self._mutable_attrs.discard(state)
def check_modified(self):
"""return True if any InstanceStates present have been marked as 'modified'."""
-
+
if self._modified:
return True
else:
if state.modified:
return True
return False
-
+
def has_key(self, key):
return key in self
-
+
def popitem(self):
raise NotImplementedError("IdentityMap uses remove() to remove data")
def __delitem__(self, key):
raise NotImplementedError("IdentityMap uses remove() to remove data")
-
+
class WeakInstanceDict(IdentityMap):
def __init__(self):
IdentityMap.__init__(self)
return False
else:
return o is not None
-
+
def contains_state(self, state):
return dict.get(self, state.key) is state
-
+
def replace(self, state):
if dict.__contains__(self, state.key):
existing = dict.__getitem__(self, state.key)
self._manage_removed_state(existing)
else:
return
-
+
dict.__setitem__(self, state.key, state)
self._manage_incoming_state(state)
-
+
def add(self, state):
if state.key in self:
if dict.__getitem__(self, state.key) is not state:
else:
dict.__setitem__(self, state.key, state)
self._manage_incoming_state(state)
-
+
def remove_key(self, key):
state = dict.__getitem__(self, key)
self.remove(state)
-
+
def remove(self, state):
self._remove_mutex.acquire()
try:
raise AssertionError("State %s is not present in this identity map" % state)
finally:
self._remove_mutex.release()
-
+
self._manage_removed_state(state)
-
+
def discard(self, state):
if self.contains_state(state):
dict.__delitem__(self, state.key)
self._manage_removed_state(state)
-
+
def get(self, key, default=None):
state = dict.get(self, key, default)
if state is default:
def items(self):
# Py2K
return list(self.iteritems())
-
+
def iteritems(self):
# end Py2K
self._remove_mutex.acquire()
return iter(result)
finally:
self._remove_mutex.release()
-
+
def values(self):
# Py2K
return list(self.itervalues())
return iter(result)
finally:
self._remove_mutex.release()
-
+
def all_states(self):
self._remove_mutex.acquire()
try:
# Py3K
# return list(dict.values(self))
-
+
# Py2K
return dict.values(self)
# end Py2K
finally:
self._remove_mutex.release()
-
+
def prune(self):
return 0
-
+
class StrongInstanceDict(IdentityMap):
def all_states(self):
return [attributes.instance_state(o) for o in self.itervalues()]
-
+
def contains_state(self, state):
return state.key in self and attributes.instance_state(self[state.key]) is state
-
+
def replace(self, state):
if dict.__contains__(self, state.key):
existing = dict.__getitem__(self, state.key)
else:
dict.__setitem__(self, state.key, state.obj())
self._manage_incoming_state(state)
-
+
def remove(self, state):
if attributes.instance_state(dict.pop(self, state.key)) is not state:
raise AssertionError("State %s is not present in this identity map" % state)
self._manage_removed_state(state)
-
+
def discard(self, state):
if self.contains_state(state):
dict.__delitem__(self, state.key)
self._manage_removed_state(state)
-
+
def remove_key(self, key):
state = attributes.instance_state(dict.__getitem__(self, key))
self.remove(state)
def prune(self):
"""prune unreferenced, non-dirty states."""
-
+
ref_count = len(self)
dirty = [s.obj() for s in self.all_states() if s.modified]
dict.update(self, keepers)
self.modified = bool(dirty)
return ref_count - len(self)
-
+
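# [editor's illustration] StrongInstanceDict above backs the session's
# identity map when weak referencing is turned off; a hedged sketch of the
# corresponding 0.6-era session options ('engine' and 'User' hypothetical).
from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine, weak_identity_map=False)
session = Session()
session.query(User).all()
pruned = session.prune()   # drop clean instances no longer referenced elsewhere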
class MapperExtension(object):
"""Base implementation for customizing ``Mapper`` behavior.
-
+
New extension classes subclass ``MapperExtension`` and are specified
using the ``extension`` mapper() argument, which is a single
``MapperExtension`` or a list of such. A single mapper
particular mapping event occurs, the corresponding method
on each ``MapperExtension`` is invoked serially, and each method
has the ability to halt the chain from proceeding further.
-
+
Each ``MapperExtension`` method returns the symbol
EXT_CONTINUE by default. This symbol generally means "move
to the next ``MapperExtension`` for processing". For methods
should be ignored. In some cases it's required for a
default mapper activity to be performed, such as adding a
new instance to a result list.
-
+
The symbol EXT_STOP indicates to a chain
of ``MapperExtension`` objects that the chain will be stopped
when this symbol is returned. Like EXT_CONTINUE, it also
has additional significance in some cases that a default
mapper activity will not be performed.
-
+
"""
-
+
def instrument_class(self, mapper, class_):
"""Receive a class when the mapper is first constructed, and has
applied instrumentation to the mapped class.
-
+
The return value is only significant within the ``MapperExtension``
chain; the parent mapper's behavior isn't modified by this method.
-
+
"""
return EXT_CONTINUE
def init_instance(self, mapper, class_, oldinit, instance, args, kwargs):
"""Receive an instance when it's constructor is called.
-
+
This method is only called during a userland construction of
an object. It is not called when an object is loaded from the
database.
-
+
The return value is only significant within the ``MapperExtension``
chain; the parent mapper's behavior isn't modified by this method.
def init_failed(self, mapper, class_, oldinit, instance, args, kwargs):
"""Receive an instance when it's constructor has been called,
and raised an exception.
-
+
This method is only called during a userland construction of
an object. It is not called when an object is loaded from the
database.
-
+
The return value is only significant within the ``MapperExtension``
chain; the parent mapper's behavior isn't modified by this method.
object which contains mapped columns as keys. The
returned object should also be a dictionary-like object
which recognizes mapped columns as keys.
-
+
If the ultimate return value is EXT_CONTINUE, the row
is not translated.
-
+
"""
return EXT_CONTINUE
The return value is only significant within the ``MapperExtension``
chain; the parent mapper's behavior isn't modified by this method.
-
+
"""
return EXT_CONTINUE
This means that an instance being sent to before_update is *not* a
guarantee that an UPDATE statement will be issued (although you can
affect the outcome here).
-
+
To detect if the column-based attributes on the object have net
changes, and will therefore generate an UPDATE statement, use
``object_session(instance).is_modified(instance,
The return value is only significant within the ``MapperExtension``
chain; the parent mapper's behavior isn't modified by this method.
-
+
"""
return EXT_CONTINUE
def before_commit(self, session):
"""Execute right before commit is called.
-
+
Note that this may not be per-flush if a longer running
transaction is ongoing."""
def after_commit(self, session):
"""Execute after a commit has occured.
-
+
Note that this may not be per-flush if a longer running
transaction is ongoing."""
def after_rollback(self, session):
"""Execute after a rollback has occured.
-
+
Note that this may not be per-flush if a longer running
transaction is ongoing."""
def before_flush( self, session, flush_context, instances):
"""Execute before flush process has started.
-
+
`instances` is an optional list of objects which were passed to
the ``flush()`` method. """
def after_flush(self, session, flush_context):
"""Execute after flush has completed, but before commit has been
called.
-
+
Note that the session's state is still in pre-flush, i.e. 'new',
'dirty', and 'deleted' lists still show pre-flush state as well
as the history settings on instance attributes."""
def after_flush_postexec(self, session, flush_context):
"""Execute after flush has completed, and after the post-exec
state occurs.
-
+
This will be when the 'new', 'dirty', and 'deleted' lists are in
their final state. An actual commit() may or may not have
occurred, depending on whether or not the flush started its own
def after_begin( self, session, transaction, connection):
"""Execute after a transaction is begun on a connection
-
+
`transaction` is the SessionTransaction. This method is called
after an engine level transaction is begun on a connection. """
def after_attach(self, session, instance):
"""Execute after an instance is attached to a session.
-
+
This is called after an add, delete or merge. """
def after_bulk_update( self, session, query, query_context, result):
"""Execute after a bulk update operation to the session.
-
+
This is called after a session.query(...).update()
-
+
`query` is the query object that this update operation was
called on. `query_context` was the query context object.
`result` is the result object returned from the bulk operation.
def after_bulk_delete( self, session, query, query_context, result):
"""Execute after a bulk delete operation to the session.
-
+
This is called after a session.query(...).delete()
-
+
`query` is the query object that this delete operation was
called on. `query_context` was the query context object.
`result` is the result object returned from the bulk operation.
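# [editor's illustration] The session-level hooks documented above are
# assumed to belong to SessionExtension; a hedged sketch of wiring a
# subclass into a sessionmaker ('engine' is hypothetical).
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm.interfaces import SessionExtension

class AuditExtension(SessionExtension):
    def before_flush(self, session, flush_context, instances):
        print "flushing:", session.new, session.dirty, session.deleted

    def after_commit(self, session):
        print "committed"

Session = sessionmaker(bind=engine, extension=AuditExtension())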
cascade = ()
"""The set of 'cascade' attribute names.
-
+
This collection is checked before the 'cascade_iterator' method is called.
-
+
"""
def setup(self, context, entity, path, adapter, **kwargs):
def create_row_processor(self, selectcontext, path, mapper, row, adapter):
"""Return a 3-tuple consisting of three row processing functions.
-
+
"""
raise NotImplementedError()
halt_on=None):
"""Iterate through instances related to the given instance for
a particular 'cascade', starting with this MapperProperty.
-
+
Return an iterator of 3-tuples (instance, mapper, state).
-
+
Note that the 'cascade' collection on this MapperProperty is
checked first for the given type before cascade_iterator is called.
_compile_started = False
_compile_finished = False
-
+
def init(self):
"""Called after all mappers are created to assemble
relationships between mappers and perform other post-mapper-creation
def do_init(self):
"""Perform subclass-specific initialization post-mapper-creation
steps.
-
+
This is a template method called by the ``MapperProperty``
object's init() method.
-
+
"""
pass
new operator behavior. The custom :class:`.PropComparator` is passed to
the mapper property via the ``comparator_factory`` argument. In each case,
the appropriate subclass of :class:`.PropComparator` should be used::
-
+
from sqlalchemy.orm.properties import \\
ColumnProperty,\\
CompositeProperty,\\
class MyColumnComparator(ColumnProperty.Comparator):
pass
-
+
class MyCompositeComparator(CompositeProperty.Comparator):
pass
-
+
class MyRelationshipComparator(RelationshipProperty.Comparator):
pass
-
+
"""
def __init__(self, prop, mapper, adapter=None):
def adapted(self, adapter):
"""Return a copy of this PropComparator which will use the given
adaption function on the local side of generated expressions.
-
+
"""
return self.__class__(self.prop, self.mapper, adapter)
There is a single strategy selected by default. Alternate
strategies can be selected at Query time through the usage of
``StrategizedOption`` objects via the Query.options() method.
-
+
"""
-
+
def _get_context_strategy(self, context, path):
cls = context.attributes.get(('loaderstrategy',
_reduce_path(path)), None)
if self.is_primary() and \
not mapper.class_manager._attr_has_impl(self.key):
self.strategy.init_class_attribute(mapper)
-
+
def build_path(entity, key, prev=None):
if prev:
return prev + (entity, key)
def serialize_path(path):
if path is None:
return None
-
+
return zip(
[m.class_ for m in [path[i] for i in range(0, len(path), 2)]],
[path[i] for i in range(1, len(path), 2)] + [None]
global class_mapper
if class_mapper is None:
from sqlalchemy.orm import class_mapper
-
+
p = tuple(chain(*[(class_mapper(cls), key) for cls, key in path]))
if p and p[-1] is None:
p = p[0:-1]
"""if True, indicate this option should be carried along
Query object generated by scalar or object lazy loaders.
"""
-
+
def process_query(self, query):
pass
def process_query_conditionally(self, query):
"""same as process_query(), except that this option may not
apply to the given query.
-
+
Used when secondary loaders resend existing options to a new
Query."""
class AttributeExtension(object):
"""An event handler for individual attribute change events.
-
+
AttributeExtension is assembled within the descriptors associated
with a mapped class.
-
+
"""
active_history = True
"""indicates that the set() method would like to receive the 'old' value,
even if it means firing lazy callables.
-
+
Note that ``active_history`` can also be set directly via
:func:`.column_property` and :func:`.relationship`.
-
+
"""
-
+
def append(self, state, value, initiator):
"""Receive a collection append event.
def _reduce_path(path):
"""Convert a (mapper, path) path to use base mappers.
-
+
This is used to allow more open-ended selection of loader strategies, i.e.
Mapper -> prop1 -> Subclass -> prop2, where Subclass is a sub-mapper
of the mapper referenced by Mapper.prop1.
-
+
"""
return tuple([i % 2 != 0 and
path[i] or
row, adapter):
"""Return row processing functions which fulfill the contract
specified by MapperProperty.create_row_processor.
-
+
StrategizedProperty delegates its create_row_processor method
directly to this method. """
class InstrumentationManager(object):
"""User-defined class instrumentation extension.
-
+
The API for this class should be considered as semi-stable,
and may change slightly with new releases.
-
+
"""
# r4361 added a mandatory (cls) constructor to this interface.
def dict_getter(self, class_):
return lambda inst: self.get_instance_dict(class_, inst)
-
\ No newline at end of file
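# [editor's illustration] A hedged sketch of plugging in the user-defined
# instrumentation extension described above; the
# '__sa_instrumentation_manager__' hook and the overridden method reflect
# the 0.6-era API as understood here.
from sqlalchemy.orm.interfaces import InstrumentationManager

class MyClassState(InstrumentationManager):
    def post_configure_attribute(self, class_, key, inst):
        # react to each attribute as it is instrumented
        pass

class MyClass(object):
    __sa_instrumentation_manager__ = MyClassState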
self.order_by = util.to_list(order_by)
else:
self.order_by = order_by
-
+
self.always_refresh = always_refresh
self.version_id_col = version_id_col
self.version_id_generator = version_id_generator or \
self._inherits_equated_pairs = None
self._memoized_values = {}
self._compiled_cache_size = _compiled_cache_size
-
+
if allow_null_pks:
util.warn_deprecated(
"the allow_null_pks option to Mapper() is "
"deprecated. It is now allow_partial_pks=False|True, "
"defaults to True.")
allow_partial_pks = allow_null_pks
-
+
self.allow_partial_pks = allow_partial_pks
-
+
if with_polymorphic == '*':
self.with_polymorphic = ('*', None)
elif isinstance(with_polymorphic, (tuple, list)):
self.exclude_properties = None
self.compiled = False
-
+
# prevent this mapper from being constructed
# while a compile() is occurring (and defer a compile()
# until construction succeeds)
self._expire_memoizations()
finally:
_COMPILE_MUTEX.release()
-
+
def _configure_inheritance(self):
"""Configure settings related to inherting and/or inherited mappers
being present."""
"""Go through the global_extensions list as well as the list
of ``MapperExtensions`` specified for this ``Mapper`` and
create a linked list of those extensions.
-
+
"""
extlist = util.OrderedSet()
"""
manager = attributes.manager_of_class(self.class_)
-
+
if self.non_primary:
if not manager or not manager.is_mapped:
raise sa_exc.InvalidRequestError(
# a ClassManager may already exist as
# ClassManager.instrument_attribute() creates
# new managers for each subclass if they don't yet exist.
-
+
_mapper_registry[self] = True
self.extension.instrument_class(self, self.class_)
event_registry.add_listener('on_init', _event_on_init)
event_registry.add_listener('on_init_failure', _event_on_init_failure)
event_registry.add_listener('on_resurrect', _event_on_resurrect)
-
+
for key, method in util.iterate_attributes(self.class_):
if isinstance(method, types.FunctionType):
if hasattr(method, '__sa_reconstructor__'):
def dispose(self):
# Disable any attribute-based compilation.
self.compiled = True
-
+
if hasattr(self, '_compile_failed'):
del self._compile_failed
-
+
if not self.non_primary and \
self.class_manager.is_mapped and \
self.class_manager.mapper is self:
all_cols = util.column_set(chain(*[
col.proxy_set for col in
self._columntoproperty]))
-
+
pk_cols = util.column_set(c for c in all_cols if c.primary_key)
# identify primary key columns which are also mapped by this mapper.
for col in self._columntoproperty
if not hasattr(col, 'table') or
col.table not in self._cols_by_table)
-
+
# if explicit PK argument sent, add those columns to the
# primary key mappings
if self.primary_key_argument:
if k.table not in self._pks_by_table:
self._pks_by_table[k.table] = util.OrderedSet()
self._pks_by_table[k.table].add(k)
-
+
# otherwise, see that we got a full PK for the mapped table
elif self.mapped_table not in self._pks_by_table or \
len(self._pks_by_table[self.mapped_table]) == 0:
self._log("Identified primary key columns: %s", primary_key)
def _configure_properties(self):
-
+
# Column and other ClauseElement objects which are mapped
self.columns = self.c = util.OrderedProperties()
raise sa_exc.InvalidRequestError(
"Cannot exclude or override the discriminator column %r" %
col.key)
-
+
self._configure_property(
col.key,
properties.ColumnProperty(col, _instrument=instrument),
key,
properties.ConcreteInheritedProperty(),
init=init, setparent=True)
-
+
def _configure_property(self, key, prop, init=True, setparent=True):
self._log("_configure_property(%s, %s)", key, prop.__class__.__name__)
"or more attributes for these same-named columns "
"explicitly."
% (prop.columns[-1], column, key))
-
+
# this hypothetically changes to
# prop.columns.insert(0, column) when we do [ticket:1892]
prop.columns.append(column)
self._log("appending to existing properties.ColumnProperty %s" % (key))
-
+
elif prop is None or isinstance(prop, properties.ConcreteInheritedProperty):
mapped_column = []
for c in columns:
if isinstance(prop, properties.ColumnProperty):
col = self.mapped_table.corresponding_column(prop.columns[0])
-
+
# if the column is not present in the mapped table,
# test if a column has been added after the fact to the
# parent table (or their parent, etc.) [ticket:1570]
prop.columns[0])
break
path.append(m)
-
+
# otherwise, col might not be present! the selectable given
# to the mapper need not include "deferred"
# columns (included in zblog tests)
col.table in self._cols_by_table and \
col not in self._cols_by_table[col.table]:
self._cols_by_table[col.table].add(col)
-
+
# if this properties.ColumnProperty represents the "polymorphic
# discriminator" column, mark it. We'll need this when rendering
# columns in SELECT statements.
prop._is_polymorphic_discriminator = \
(col is self.polymorphic_on or
prop.columns[0] is self.polymorphic_on)
-
+
self.columns[key] = col
for col in prop.columns:
for col in col.proxy_set:
"a ColumnProperty already exists keyed to the name "
"%r for column %r" % (syn, key, key, syn)
)
-
+
self._props[key] = prop
if not self.non_primary:
This is a deferred configuration step which is intended
to execute once all mappers have been constructed.
-
+
"""
self._log("_post_configure_properties() started")
l = [(key, prop) for key, prop in self._props.iteritems()]
for key, prop in l:
self._log("initialize prop %s", key)
-
+
if prop.parent is self and not prop._compile_started:
prop.init()
-
+
if prop._compile_finished:
prop.post_instrument_class(self)
-
+
self._log("_post_configure_properties() complete")
self.compiled = True
-
+
def add_properties(self, dict_of_properties):
"""Add the given dictionary of properties to this mapper,
using `add_property`.
def get_property(self, key,
resolve_synonyms=False,
raiseerr=True, _compile_mappers=True):
-
+
"""return a :class:`.MapperProperty` associated with the given key.
-
+
resolve_synonyms=False and raiseerr=False are deprecated.
-
+
"""
if _compile_mappers and not self.compiled:
self.compile()
-
+
if not resolve_synonyms:
prop = self._props.get(key, None)
if prop is None and raiseerr:
"Mapper '%s' has no property '%s'" % (self, key))
else:
return None
-
+
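# [editor's illustration] A hedged sketch of the property accessors on
# Mapper; 'User' is a hypothetical mapped class with an 'addresses'
# relationship().
from sqlalchemy.orm import class_mapper

m = class_mapper(User)
rel = m.get_property('addresses')                     # RelationshipProperty
col = m.get_property_by_column(User.__table__.c.id)   # ColumnProperty for 'id'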
@util.deprecated('0.6.4',
'Call to deprecated function mapper._get_col_to_pr'
'op(). Use mapper.get_property_by_column()')
def _get_col_to_prop(self, col):
return self._columntoproperty[col]
-
+
def get_property_by_column(self, column):
"""Given a :class:`.Column` object, return the
:class:`.MapperProperty` which maps this column."""
return self._columntoproperty[column]
-
+
@property
def iterate_properties(self):
"""return an iterator of all MapperProperty objects."""
mapped tables.
"""
-
+
from_obj = self.mapped_table
for m in mappers:
if m is self:
def _iterate_polymorphic_properties(self, mappers=None):
"""Return an iterator of MapperProperty objects which will render into
a SELECT."""
-
+
if mappers is None:
mappers = self._with_polymorphic_mappers
c.columns[0] is not self.polymorphic_on):
continue
yield c
-
+
@property
def properties(self):
raise NotImplementedError(
def primary_mapper(self):
"""Return the primary mapper corresponding to this mapper's class key
(class)."""
-
+
return self.class_manager.mapper
@property
def primary_base_mapper(self):
return self.class_manager.mapper.base_mapper
-
+
def identity_key_from_row(self, row, adapter=None):
"""Return an identity-map key for use in storing/retrieving an
item from the identity map.
def _optimized_get_statement(self, state, attribute_names):
"""assemble a WHERE clause which retrieves a given state by primary
key, using a minimized set of tables.
-
+
Applies to a joined-table inheritance mapper where the
requested attribute names are only present on joined tables,
not the base table. The WHERE clause attempts to include
only those tables to minimize joins.
-
+
"""
props = self._props
-
+
tables = set(chain(
*[sqlutil.find_tables(c, check_columns=True)
for key in attribute_names
for c in props[key].columns]
))
-
+
if self.base_mapper.local_table in tables:
return None
if not iterator:
visitables.pop()
continue
-
+
if item_type is prp:
prop = iterator.popleft()
if type_ not in prop.cascade:
for mapper in self.base_mapper.self_and_descendants:
for t in mapper.tables:
table_to_mapper[t] = mapper
-
+
sorted_ = sqlutil.sort_tables(table_to_mapper.iterkeys())
ret = util.OrderedDict()
for t in sorted_:
saves = unitofwork.SaveUpdateAll(uow, self.base_mapper)
deletes = unitofwork.DeleteAll(uow, self.base_mapper)
uow.dependencies.add((saves, deletes))
-
+
for dep in self._dependency_processors:
dep.per_property_preprocessors(uow)
-
+
for prop in self._props.values():
prop.per_property_preprocessors(uow)
-
+
def _per_state_flush_actions(self, uow, states, isdelete):
-
+
base_mapper = self.base_mapper
save_all = unitofwork.SaveUpdateAll(uow, base_mapper)
delete_all = unitofwork.DeleteAll(uow, base_mapper)
else:
action = unitofwork.SaveUpdateState(uow, state, base_mapper)
uow.dependencies.add((action, delete_all))
-
+
yield action
-
+
def _memo(self, key, callable_):
if key in self._memoized_values:
return self._memoized_values[key]
else:
self._memoized_values[key] = value = callable_()
return value
-
+
def _post_update(self, states, uowtransaction, post_update_cols):
"""Issue UPDATE statements on behalf of a relationship() which
specifies post_update.
-
+
"""
cached_connections = util.PopulateDict(
lambda conn:conn.execution_options(
conn = connection
mapper = _state_mapper(state)
-
+
tups.append((state, state.dict, mapper, conn))
table_to_mapper = self._sorted_tables
for state, state_dict, mapper, connection in tups:
if table not in mapper._pks_by_table:
continue
-
+
pks = mapper._pks_by_table[table]
params = {}
hasdata = False
if hasdata:
update.append((state, state_dict, params, mapper,
connection))
-
+
if update:
mapper = table_to_mapper[table]
params, mapper, conn in grouper]
cached_connections[connection].\
execute(statement, multiparams)
-
+
def _save_obj(self, states, uowtransaction, single=False):
"""Issue ``INSERT`` and/or ``UPDATE`` statements for a list
of objects.
ordering among a polymorphic chain of instances. Therefore
_save_obj is typically called only on a *base mapper*, or a
mapper which does not inherit from any other mapper.
-
+
"""
-
+
# if batch=false, call _save_obj separately for each object
if not single and not self.batch:
for state in _sort_states(states):
connection_callable = None
tups = []
-
+
for state in _sort_states(states):
if connection_callable:
conn = connection_callable(self, state.obj())
else:
conn = connection
-
+
has_identity = state.has_identity
mapper = _state_mapper(state)
instance_key = state.key or mapper._identity_key_from_state(state)
instance_key, row_switch in tups:
if table not in mapper._pks_by_table:
continue
-
+
pks = mapper._pks_by_table[table]
-
+
isinsert = not has_identity and \
not row_switch
-
+
params = {}
value_params = {}
hasdata = False
def update_stmt():
clause = sql.and_()
-
+
for col in mapper._pks_by_table[table]:
clause.clauses.append(col == sql.bindparam(col._label,
type_=col.type))
type_=col.type))
return table.update(clause)
-
+
statement = self._memo(('update', table), update_stmt)
-
+
rows = 0
for state, state_dict, params, mapper, \
connection, value_params in update:
-
+
if value_params:
c = connection.execute(
statement.values(value_params),
else:
c = cached_connections[connection].\
execute(statement, params)
-
+
mapper._postfetch(uowtransaction, table,
state, state_dict, c,
c.last_updated_params(), value_params)
"- versioning cannot be verified." %
c.dialect.dialect_description,
stacklevel=12)
-
+
if insert:
statement = self._memo(('insert', table), table.insert)
else:
c = cached_connections[connection].\
execute(statement, params)
-
+
primary_key = c.inserted_primary_key
if primary_key is not None:
mapper._set_state_attr_by_column(
state, state_dict, col,
primary_key[i])
-
+
mapper._postfetch(uowtransaction, table,
state, state_dict, c,
c.last_inserted_params(),
readonly = state.unmodified.intersection(
p.key for p in mapper._readonly_props
)
-
+
if readonly:
sessionlib._expire_state(state, state.dict, readonly)
equated_pairs,
uowtransaction,
self.passive_updates)
-
+
@util.memoized_property
def _table_to_equated(self):
"""memoized map of tables to collections of columns to be
synchronized upwards to the base mapper."""
-
+
result = util.defaultdict(list)
-
+
for table in self._sorted_tables:
cols = set(table.c)
for m in self.iterate_to_root():
cols.intersection(
[l for l, r in m._inherits_equated_pairs]):
result[table].append((m, m._inherits_equated_pairs))
-
+
return result
-
+
def _delete_obj(self, states, uowtransaction):
"""Issue ``DELETE`` statements for a list of objects.
else:
connection = uowtransaction.transaction.connection(self)
connection_callable = None
-
+
tups = []
cached_connections = util.PopulateDict(
lambda conn:conn.execution_options(
compiled_cache=self._compiled_cache
))
-
+
for state in _sort_states(states):
mapper = _state_mapper(state)
conn = connection_callable(self, state.obj())
else:
conn = connection
-
+
if 'before_delete' in mapper.extension:
mapper.extension.before_delete(mapper, conn, state.obj())
-
+
tups.append((state,
state.dict,
_state_mapper(state),
conn))
table_to_mapper = self._sorted_tables
-
+
for table in reversed(table_to_mapper.keys()):
delete = util.defaultdict(list)
for state, state_dict, mapper, has_identity, connection in tups:
polymorphic_from=None, extension=None,
only_load_props=None, refresh_state=None,
polymorphic_discriminator=None):
-
+
"""Produce a mapper level row processor callable
which processes rows into mapped instances."""
-
+
pk_cols = self.primary_key
if polymorphic_from or refresh_state:
new_populators = []
existing_populators = []
load_path = context.query._current_path + path
-
+
def populate_state(state, dict_, row, isnew, only_load_props):
if isnew:
if context.propagate_options:
new_populators,
existing_populators
)
-
+
if isnew:
populators = new_populators
else:
is_not_primary_key = _none_set.issuperset
else:
is_not_primary_key = _none_set.issubset
-
+
def _instance(row, result):
if translate_row:
ret = translate_row(self, context, row)
dict_,
self.version_id_col) != \
row[version_id_col]:
-
+
raise orm_exc.StaleDataError(
"Instance '%s' has version id '%s' which "
"does not match database-loaded version id '%s'."
isnew = True
attrs = state.unloaded
# allow query.instances to commit the subset of attrs
- context.partials[state] = (dict_, attrs)
+ context.partials[state] = (dict_, attrs)
if not populate_instance or \
populate_instance(self, context, row, instance,
def _populators(self, context, path, row, adapter,
new_populators, existing_populators):
"""Produce a collection of attribute level row processor callables."""
-
+
delayed_populators = []
for prop in self._props.itervalues():
newpop, existingpop, delayedpop = prop.create_row_processor(
delayed_populators.append((prop.key, delayedpop))
if delayed_populators:
new_populators.extend(delayed_populators)
-
+
def _configure_subclass_mapper(self, context, path, adapter):
"""Produce a mapper level row processor callable factory for mappers
inheriting this one."""
-
+
def configure_subclass_mapper(discriminator):
try:
mapper = self.polymorphic_map[discriminator]
discriminator)
if mapper is self:
return None
-
+
# replace the tip of the path info with the subclass mapper
# being used. that way accurate "load_path" info is available
# for options invoked during deferred loads.
# we lose AliasedClass path elements this way, but currently,
# those are not needed at this stage.
-
+
# this asserts to true
#assert mapper.isa(_class_to_mapper(path[-1]))
-
+
return mapper._instance_processor(context, path[0:-1] + (mapper,),
adapter,
polymorphic_from=self)
can then raise validation exceptions to halt the process from continuing,
or can modify or replace the value before proceeding. The function
should otherwise return the given value.
-
+
Note that a validator for a collection **cannot** issue a load of that
collection within the validation routine - this usage raises
an assertion to avoid recursion overflows. This is a reentrant
for col, val in zip(instrumenting_mapper.primary_key, state.key[1]):
instrumenting_mapper._set_state_attr_by_column(
state, state.dict, col, val)
-
-
+
+
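# [editor's illustration] The validator contract described above (return
# the value, optionally raise to halt) is typically used via the
# orm.validates() decorator; a self-contained sketch:
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import validates

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    email = Column(String(100))

    @validates('email')
    def validate_email(self, key, value):
        # may raise to halt the operation, or modify/replace the value;
        # must return the (possibly modified) value
        if '@' not in value:
            raise ValueError("invalid email address: %r" % value)
        return value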
def _sort_states(states):
return sorted(states, key=operator.attrgetter('sort_key'))
def _load_scalar_attributes(state, attribute_names):
"""initiate a column-based attribute refresh operation."""
-
+
mapper = _state_mapper(state)
session = sessionlib._state_session(state)
if not session:
(state_str(state)))
has_key = state.has_identity
-
+
result = False
if mapper.inherits and not mapper.concrete:
" persistent and does not "
"contain a full primary key." % state_str(state))
identity_key = mapper._identity_key_from_state(state)
-
+
if (_none_set.issubset(identity_key) and \
not mapper.allow_partial_pks) or \
_none_set.issuperset(identity_key):
"(and shouldn't be expired, either)."
% state_str(state))
return
-
+
result = session.query(mapper)._get(
identity_key,
refresh_state=state,
self.descriptor = kwargs.pop('descriptor', None)
self.extension = kwargs.pop('extension', None)
self.active_history = kwargs.pop('active_history', False)
-
+
if 'doc' in kwargs:
self.doc = kwargs.pop('doc')
else:
break
else:
self.doc = None
-
+
if kwargs:
raise TypeError(
"%s received unexpected keyword argument(s): %s" % (
self.strategy_class = strategies.DeferredColumnLoader
else:
self.strategy_class = strategies.ColumnLoader
-
+
def instrument_class(self, mapper):
if not self.instrument:
return
-
+
attributes.register_descriptor(
mapper.class_,
self.key,
property_=self,
doc=self.doc
)
-
+
def do_init(self):
super(ColumnProperty, self).do_init()
if len(self.columns) > 1 and \
dest_dict, load, _recursive):
if self.key in source_dict:
value = source_dict[self.key]
-
+
if not load:
dest_dict[self.key] = value
else:
else:
if dest_state.has_identity and self.key not in dest_dict:
dest_state.expire_attributes(dest_dict, [self.key])
-
+
def get_col_value(self, column, value):
return value
return self.prop.columns[0]._annotate({
"parententity": self.mapper,
"parentmapper":self.mapper})
-
+
def operate(self, op, *other, **kwargs):
return op(self.__clause_element__(), *other, **kwargs)
def reverse_operate(self, op, other, **kwargs):
col = self.__clause_element__()
return op(col._bind_param(op, other), col, **kwargs)
-
+
# TODO: legacy..do we need this ? (0.5)
ColumnComparator = Comparator
-
+
def __str__(self):
return str(self.parent.class_.__name__) + "." + self.key
class CompositeProperty(ColumnProperty):
"""subclasses ColumnProperty to provide composite type support."""
-
+
def __init__(self, class_, *columns, **kwargs):
super(CompositeProperty, self).__init__(*columns, **kwargs)
self._col_position_map = util.column_dict(
obj.__set_composite_values__(*values)
else:
setattr(obj, column.key, value)
-
+
def get_col_value(self, column, value):
if value is None:
return None
*[self.adapter(x) for x in self.prop.columns])
else:
return expression.ClauseList(*self.prop.columns)
-
+
__hash__ = None
-
+
def __eq__(self, other):
if other is None:
values = [None] * len(self.prop.columns)
values = other.__composite_values__()
return sql.and_(
*[a==b for a, b in zip(self.prop.columns, values)])
-
+
def __ne__(self, other):
return sql.not_(self.__eq__(other))
"""A 'do nothing' :class:`MapperProperty` that disables
an attribute on a concrete subclass that is only present
on the inherited mapper, not the concrete classes' mapper.
-
+
Cases where this occurs include:
-
+
* When the superclass mapper is mapped against a
"polymorphic union", which includes all attributes from
all subclasses.
but not on the subclass mapper. Concrete mappers require
that relationship() is configured explicitly on each
subclass.
-
+
"""
-
+
def instrument_class(self, mapper):
def warn():
raise AttributeError("Concrete %s does not implement "
doc=self.doc
)
-
+
class ComparableProperty(DescriptorProperty):
"""Instruments a Python property for use in query expressions."""
def instrument_class(self, mapper):
"""Set up a proxy to the unmanaged descriptor."""
-
+
if self.descriptor is None:
desc = getattr(mapper.class_, self.key, None)
if mapper._is_userland_descriptor(desc):
RelationshipProperty.Comparator
self.comparator = self.comparator_factory(self, None)
util.set_creation_order(self)
-
+
if strategy_class:
self.strategy_class = strategy_class
elif self.lazy== 'dynamic':
self.strategy_class = dynamic.DynaLoader
else:
self.strategy_class = strategies.factory(self.lazy)
-
+
self._reverse_property = set()
if cascade is not False:
"""Return a copy of this PropComparator which will use the
given adaption function on the local side of generated
expressions.
-
+
"""
return self.__class__(self.property, self.mapper,
getattr(self, '_of_type', None),
adapter)
-
+
@property
def parententity(self):
return self.property.parent
raise NotImplementedError('in_() not yet supported for '
'relationships. For a simple many-to-one, use '
'in_() against the set of foreign key values.')
-
+
__hash__ = None
-
+
def __eq__(self, other):
if isinstance(other, (NoneType, expression._Null)):
if self.property.direction in [ONETOMANY, MANYTOMANY]:
source_selectable = self.__clause_element__()
else:
source_selectable = None
-
+
pj, sj, source, dest, secondary, target_adapter = \
self.property._create_joins(dest_polymorphic=True,
dest_selectable=to_selectable,
criterion = crit
else:
criterion = criterion & crit
-
+
# annotate the *local* side of the join condition, in the case
# of pj + sj this is the full primaryjoin, in the case of just
# pj its the local side of the primaryjoin.
j = _orm_annotate(pj) & sj
else:
j = _orm_annotate(pj, exclude=self.property.remote_side)
-
+
if criterion is not None and target_adapter:
# limit this adapter to annotated only?
criterion = target_adapter.traverse(criterion)
# to anything in the enclosing query.
if criterion is not None:
criterion = criterion._annotate({'_halt_adapt': True})
-
+
crit = j & criterion
-
+
return sql.exists([1], crit, from_obj=dest).correlate(source)
def any(self, criterion=None, **kwargs):
def __negated_contains_or_equals(self, other):
if self.property.direction == MANYTOONE:
state = attributes.instance_state(other)
-
+
def state_bindparam(state, col):
o = state.obj() # strong ref
return lambda : \
self.property.mapper._get_committed_attr_by_column(o,
col)
-
+
def adapt(col):
if self.adapter:
return self.adapter(col)
else:
return col
-
+
if self.property._use_get:
return sql.and_(*[
sql.or_(
adapt(x) != state_bindparam(state, y),
adapt(x) == None)
for (x, y) in self.property.local_remote_pairs])
-
+
criterion = sql.and_(*[x==y for (x, y) in
zip(
self.property.mapper.primary_key,
if load:
# for a full merge, pre-load the destination collection,
# so that individual _merge of each item pulls from identity
- # map for those already present.
+ # map for those already present.
# also assumes CollectionAttributeImpl behavior of loading
# "old" list in any case
dest_state.get_impl(self.key).get(dest_state, dest_dict)
-
+
dest_list = []
for current in instances:
current_state = attributes.instance_state(current)
load=load, _recursive=_recursive)
if obj is not None:
dest_list.append(obj)
-
+
if not load:
coll = attributes.init_state_collection(dest_state,
dest_dict, self.key)
passive=passive)
skip_pending = type_ == 'refresh-expire' and 'delete-orphan' \
not in self.cascade
-
+
if instances:
for c in instances:
if c is not None and \
c is not attributes.PASSIVE_NO_RESULT and \
c not in visited_instances and \
(halt_on is None or not halt_on(c)):
-
+
if not isinstance(c, self.mapper.class_):
raise AssertionError("Attribute '%s' on class '%s' "
"doesn't handle objects "
str(c.__class__)
))
instance_state = attributes.instance_state(c)
-
+
if skip_pending and not instance_state.key:
continue
-
+
visited_instances.add(c)
# cascade using the mapper local to this
# object, so that its individual properties are located
instance_mapper = instance_state.manager.mapper
yield c, instance_mapper, instance_state
-
+
def _add_reverse_property(self, key):
other = self.mapper.get_property(key, _compile_mappers=False)
self._reverse_property.add(other)
other._reverse_property.add(self)
-
+
if not other._get_target().common_parent(self.parent):
raise sa_exc.ArgumentError('reverse_property %r on '
'relationship %s references relationship %s, which '
'both of the same direction %r. Did you mean to '
'set remote_side on the many-to-one side ?'
% (other, self, self.direction))
-
+
def do_init(self):
self._get_target()
self._assert_is_primary()
% (self.key, type(self.argument)))
assert isinstance(self.mapper, mapper.Mapper), self.mapper
return self.mapper
-
+
def _process_dependent_arguments(self):
# accept callables for other attributes which may require
"""Given a join condition, figure out what columns are foreign
and are part of a binary "equated" condition to their referenced
columns, and convert into a list of tuples of (primary col->foreign col).
-
+
Make several attempts to determine if cols are compared using
"=" or other comparators (in which case suggest viewonly),
columns are present but not part of the expected mappings, columns
don't have any :class:`ForeignKey` information on them, or
the ``foreign_keys`` attribute is being used incorrectly.
-
+
"""
eq_pairs = criterion_as_pairs(join_condition,
consider_as_foreign_keys=self._user_defined_foreign_keys,
any_operator=self.viewonly)
-
+
eq_pairs = [(l, r) for (l, r) in eq_pairs
if self._col_is_part_of_mappings(l)
and self._col_is_part_of_mappings(r)
or self.viewonly and r in self._user_defined_foreign_keys]
-
+
if not eq_pairs and \
self.secondary is not None and \
not self._user_defined_foreign_keys:
join_condition,
self
))
-
+
if not eq_pairs:
if not self.viewonly and criterion_as_pairs(join_condition,
consider_as_foreign_keys=self._user_defined_foreign_keys,
any_operator=True):
-
+
err = "Could not locate any "\
"foreign-key-equated, locally mapped column "\
"pairs for %s "\
join_condition,
self
)
-
+
if not self._user_defined_foreign_keys:
err += " Ensure that the "\
"referencing Column objects have a "\
"of a ForeignKeyConstraint on their parent "\
"Table, or specify the foreign_keys parameter "\
"to this relationship."
-
+
err += " For more "\
"relaxed rules on join conditions, the "\
"relationship may be marked as viewonly=True."
util.warn("On %s, 'passive_deletes' is normally configured "
"on one-to-many, one-to-one, many-to-many relationships only."
% self)
-
+
def _determine_local_remote_pairs(self):
if not self.local_remote_pairs:
if self.remote_side:
"created for class '%s' " % (self.key,
self.parent.class_.__name__,
self.parent.class_.__name__))
-
+
def _generate_backref(self):
if not self.is_primary():
return
self.extension.append(
attributes.GenericBackrefExtension(self.back_populates))
self._add_reverse_property(self.back_populates)
-
+
def _post_init(self):
self.logger.info('%s setup primary join %s', self,
self.primaryjoin)
if not self.viewonly:
self._dependency_processor = \
dependency.DependencyProcessor.from_relationship(self)
-
+
@util.memoized_property
def _use_get(self):
"""memoize the 'use_get' attribute of this RelationshipLoader's
strategy = self._get_strategy(strategies.LazyLoader)
return strategy.use_get
-
+
def _refers_to_parent_table(self):
for c, f in self.synchronize_pairs:
if c.table is f.table:
primaryjoin, secondaryjoin, secondary = self.primaryjoin, \
self.secondaryjoin, self.secondary
-
+
# adjust the join condition for single table inheritance,
# in the case that the join is to a subclass
# this is analogous to the "_adjust_for_single_table_inheritance()"
# method in Query.
dest_mapper = of_type or self.mapper
-
+
single_crit = dest_mapper._single_table_criterion
if single_crit is not None:
if secondaryjoin is not None:
secondaryjoin = secondaryjoin & single_crit
else:
primaryjoin = primaryjoin & single_crit
-
+
if aliased:
if secondary is not None:
secondary = secondary.alias()
class Query(object):
"""ORM-level SQL construction object.
-
+
:class:`.Query` is the source of all SELECT statements generated by the
ORM, both those formulated by end-user query operations as well as by
high level internal operations such as related collection loading. It
features a generative interface whereby successive calls return a new
:class:`.Query` object, a copy of the former with additional
criteria and options associated with it.
-
+
:class:`.Query` objects are normally initially generated using the
:meth:`~.Session.query` method of :class:`.Session`. For a full walkthrough
of :class:`.Query` usage, see the :ref:`ormtutorial_toplevel`.
-
+
"""
-
+
_enable_eagerloads = True
_enable_assertions = True
_with_labels = False
_with_options = ()
_with_hints = ()
_enable_single_crit = True
-
+
def __init__(self, entities, session=None):
self.session = session
self._polymorphic_adapters = {}
equivs = self.__all_equivs()
self._from_obj_alias = sql_util.ColumnAdapter(
self._from_obj[0], equivs)
-
+
def _get_polymorphic_adapter(self, entity, selectable):
self.__mapper_loads_polymorphically_with(entity.mapper,
sql_util.ColumnAdapter(selectable,
@_generative()
def _adapt_all_clauses(self):
self._disable_orm_filtering = True
-
+
def _adapt_col_list(self, cols):
return [
self._adapt_clause(
True, True)
for o in cols
]
-
+
def _adapt_clause(self, clause, as_filter, orm_only):
adapters = []
if as_filter and self._filter_aliases:
def _get_condition(self):
self._order_by = self._distinct = False
return self._no_criterion_condition("get")
-
+
def _no_criterion_condition(self, meth):
if not self._enable_assertions:
return
@property
def statement(self):
"""The full SELECT statement represented by this Query.
-
+
The statement by default will not have disambiguating labels
applied to the construct unless with_labels(True) is called
first.
-
+
"""
stmt = self._compile_context(labels=self._with_labels).\
"""
return self.enable_eagerloads(False).statement.alias()
-
+
def label(self, name):
"""Return the full SELECT statement represented by this :class:`.Query`, converted
to a scalar subquery with a label of the given name.
-
+
Analogous to :meth:`sqlalchemy.sql._SelectBaseMixin.label`.
-
+
New in 0.6.5.
"""
-
+
return self.enable_eagerloads(False).statement.label(name)
def as_scalar(self):
"""Return the full SELECT statement represented by this :class:`.Query`, converted
to a scalar subquery.
-
+
Analogous to :meth:`sqlalchemy.sql._SelectBaseMixin.as_scalar`.
New in 0.6.5.
-
+
"""
-
+
return self.enable_eagerloads(False).statement.as_scalar()
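A short usage sketch for the label() and as_scalar() conversions documented above, assuming hypothetical User/Address mappings, e.g.::

    from sqlalchemy import func

    # correlated scalar subquery, labeled as a result column
    count_sq = session.query(func.count(Address.id)).\
        filter(Address.user_id == User.id).\
        label('address_count')

    for name, count in session.query(User.name, count_sq):
        print name, count

    # as_scalar() produces the same subquery without the label
    count_sq = session.query(func.count(Address.id)).\
        filter(Address.user_id == User.id).\
        as_scalar()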
-
-
+
+
def __clause_element__(self):
return self.enable_eagerloads(False).with_labels().statement
"""
self._with_labels = True
-
+
@_generative()
def enable_assertions(self, value):
"""Control whether assertions are generated.
-
+
When set to False, the returned Query will
not assert its state before certain operations,
including that LIMIT/OFFSET has not been applied
is called. This more permissive mode is used by
custom Query subclasses to specify criterion or
other modifiers outside of the usual usage patterns.
-
+
Care should be taken to ensure that the usage
pattern is even possible. A statement applied
by from_statement() will override any criterion
set by filter() or order_by(), for example.
-
+
"""
self._enable_assertions = value
-
+
@property
def whereclause(self):
"""A readonly attribute which returns the current WHERE criterion for this Query.
-
+
This returned value is a SQL expression construct, or ``None`` if no
criterion has been established.
-
+
"""
return self._criterion
set the ``stream_results`` execution
option to ``True``, which currently is only understood by psycopg2
and causes server side cursors to be used.
-
+
"""
self._yield_per = count
self._execution_options = self._execution_options.copy()
self._execution_options['stream_results'] = True
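A minimal sketch of yield_per() together with the stream_results option set above; the User mapping and batch size are assumptions, e.g.::

    # fetch and process rows in batches of 100 rather than
    # buffering the full result; with psycopg2 this uses a
    # server-side cursor via the stream_results option
    for user in session.query(User).yield_per(100):
        handle(user)   # hypothetical per-row handler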
-
+
def get(self, ident):
"""Return an instance of the object based on the
given identifier, or None if not found.
"""Return a :class:`.Query` construct which will correlate the given
FROM clauses to that of an enclosing :class:`.Query` or
:func:`~.expression.select`.
-
+
The method here accepts mapped classes, :func:`.aliased` constructs,
and :func:`.mapper` constructs as arguments, which are resolved into
expression constructs, in addition to appropriate expression
constructs.
-
+
The correlation arguments are ultimately passed to
:meth:`.Select.correlate` after coercion to expression constructs.
-
+
The correlation arguments take effect in such cases
as when :meth:`.Query.from_self` is used, or when
a subquery as returned by :meth:`.Query.subquery` is
embedded in another :func:`~.expression.select` construct.
-
+
"""
-
+
self._correlate = self._correlate.union(
_orm_selectable(s)
for s in args)
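A hedged sketch of explicit correlation via this method, assuming hypothetical User/Address mappings, e.g.::

    from sqlalchemy import func

    # scalar subquery correlated to the enclosing SELECT against User
    address_count = session.query(func.count(Address.id)).\
        filter(Address.user_id == User.id).\
        correlate(User).\
        as_scalar()

    session.query(User.name, address_count)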
def populate_existing(self):
"""Return a :class:`Query` that will expire and refresh all instances
as they are loaded, or reused from the current :class:`.Session`.
-
+
:meth:`.populate_existing` does not improve behavior when
the ORM is used normally - the :class:`.Session` object's usual
behavior of maintaining a transaction and expiring all attributes
to a child object or collection, using its attribute state
as well as an established :func:`.relationship()`
configuration.
-
+
The method uses the :func:`.with_parent` function to generate
the clause, the result of which is passed to :meth:`.Query.filter`.
-
+
Parameters are the same as :func:`.with_parent`, with the exception
that the given property can be None, in which case a search is
performed against this :class:`.Query` object's target mapper.
-
+
"""
-
+
if property is None:
from sqlalchemy.orm import properties
mapper = object_mapper(instance)
@_generative()
def _enable_single_crit(self, val):
self._enable_single_crit = val
-
+
@_generative()
def _from_selectable(self, fromclause):
for attr in ('_statement', '_criterion', '_order_by', '_group_by',
# end Py2K
except StopIteration:
return None
-
+
@_generative()
def with_entities(self, *entities):
"""Return a new :class:`.Query` replacing the SELECT list with the given
entities.
-
+
e.g.::
# Users, filtered on some arbitrary criterion
limit(1)
New in 0.6.5.
-
+
"""
self._set_entities(entities)
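A brief sketch of with_entities(), replacing the SELECT list while retaining the established criteria; the User mapping is an assumption, e.g.::

    from sqlalchemy import func

    q = session.query(User).\
        filter(User.name.like('%ed%')).\
        order_by(User.id)

    # reuse the same criteria, selecting only a count
    user_count = q.with_entities(func.count(User.id)).\
        order_by(None).scalar()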
-
-
+
+
@_generative()
def add_columns(self, *column):
"""Add one or more column expressions to the list
False)
def add_column(self, column):
"""Add a column expression to the list of result columns to be returned.
-
+
Pending deprecation: :meth:`.add_column` will be superseded by
:meth:`.add_columns`.
-
+
"""
-
+
return self.add_columns(column)
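A short sketch of add_columns(), appending column expressions to an entity query; the mappings are assumptions, e.g.::

    q = session.query(User).\
        join(User.addresses).\
        add_columns(Address.email_address)

    for user, email in q:
        print user.name, email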
def options(self, *args):
"""Return a new Query object, applying the given list of
mapper options.
-
+
Most supplied options regard changing how column- and
relationship-mapped attributes are loaded. See the sections
:ref:`deferred` and :ref:`loading_toplevel` for reference
documentation.
-
+
"""
return self._options(False, *args)
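A small sketch of options() with loader options; joinedload() and defer() are the usual option generators, and the attribute names here are assumptions, e.g.::

    from sqlalchemy.orm import joinedload, defer

    session.query(User).options(
        joinedload('addresses'),   # eagerly load the collection
        defer('bio')               # hypothetical deferred column
    ).all()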
def with_hint(self, selectable, text, dialect_name='*'):
"""Add an indexing hint for the given entity or selectable to
this :class:`Query`.
-
+
Functionality is passed straight through to
:meth:`~sqlalchemy.sql.expression.Select.with_hint`,
with the addition that ``selectable`` can be a
/etc.
"""
mapper, selectable, is_aliased_class = _entity_info(selectable)
-
+
self._with_hints += ((selectable, text, dialect_name),)
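A short sketch of with_hint() against a mapped entity; the index name and dialect are assumptions, e.g.::

    session.query(User).\
        with_hint(User, 'USE INDEX (ix_user_name)', 'mysql').\
        filter(User.name == 'ed').all()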
-
+
@_generative()
def execution_options(self, **kwargs):
""" Set non-SQL options which take effect during execution.
-
+
The options are the same as those accepted by
:meth:`sqlalchemy.sql.expression.Executable.execution_options`.
-
+
Note that the ``stream_results`` execution option is enabled
automatically if the :meth:`~sqlalchemy.orm.query.Query.yield_per()`
method is used.
def order_by(self, *criterion):
"""apply one or more ORDER BY criterion to the query and return
the newly resulting ``Query``
-
+
All existing ORDER BY settings can be suppressed by
passing ``None`` - this will suppress any ORDER BY configured
on mappers as well.
-
+
Alternatively, an existing ORDER BY setting on the Query
object can be entirely cancelled by passing ``False``
as the value - use this before calling methods where
an ORDER BY is invalid.
-
+
"""
if len(criterion) == 1:
if criterion[0] is None:
self._order_by = None
return
-
+
criterion = self._adapt_col_list(criterion)
if self._order_by is False or self._order_by is None:
SELECT * FROM Z)
"""
-
-
+
+
return self._from_selectable(
expression.union(*([self]+ list(q))))
def _join(self, keys, outerjoin, create_aliases, from_joinpoint):
"""consumes arguments from join() or outerjoin(), places them into a
consistent format with which to form the actual JOIN constructs.
-
+
"""
self._polymorphic_adapters = self._polymorphic_adapters.copy()
if not from_joinpoint:
self._reset_joinpoint()
-
+
if len(keys) >= 2 and \
isinstance(keys[1], expression.ClauseElement) and \
not isinstance(keys[1], expression.FromClause):
"You appear to be passing a clause expression as the second "
"argument to query.join(). Did you mean to use the form "
"query.join((target, onclause))? Note the tuple.")
-
+
for arg1 in util.to_list(keys):
if isinstance(arg1, tuple):
arg1, arg2 = arg1
else:
arg2 = None
-
+
# determine onclause/right_entity. there
# is a little bit of legacy behavior still at work here
# which means they might be in either order. may possibly
right_entity, onclause = arg1, arg2
left_entity = prop = None
-
+
if isinstance(onclause, basestring):
left_entity = self._joinpoint_zero()
descriptor = _entity_descriptor(left_entity, onclause)
onclause = descriptor
-
+
# check for q.join(Class.propname, from_joinpoint=True)
# and Class is that of the current joinpoint
elif from_joinpoint and \
isinstance(onclause, interfaces.PropComparator):
left_entity = onclause.parententity
-
+
left_mapper, left_selectable, left_is_aliased = \
_entity_info(self._joinpoint_zero())
if left_mapper is left_entity:
right_entity = of_type
else:
right_entity = onclause.property.mapper
-
+
left_entity = onclause.parententity
-
+
prop = onclause.property
if not isinstance(onclause, attributes.QueryableAttribute):
onclause = prop
elif onclause is not None and right_entity is None:
# TODO: no coverage here
raise NotImplementedError("query.join(a==b) not supported.")
-
+
self._join_left_to_right(
left_entity,
right_entity, onclause,
def _join_left_to_right(self, left, right,
onclause, outerjoin, create_aliases, prop):
"""append a JOIN to the query's from clause."""
-
+
if left is None:
left = self._joinpoint_zero()
"Can't construct a join from %s to %s, they "
"are the same entity" %
(left, right))
-
+
left_mapper, left_selectable, left_is_aliased = _entity_info(left)
right_mapper, right_selectable, right_is_aliased = _entity_info(right)
self._joinpoint = {
'_joinpoint_entity':right
}
-
+
# if an alias() of the right side was generated here,
# apply an adapter to all subsequent filter() calls
# until reset_joinpoint() is called.
# adapters that are in place right now
if isinstance(onclause, expression.ClauseElement):
onclause = self._adapt_clause(onclause, True, True)
-
+
# if an alias() on the right side was generated,
# which is intended to wrap the right side in a subquery,
# ensure that columns retrieved from this target in the result
equivalents=right_mapper._equivalent_columns
)
)
-
+
# this is an overly broad assumption here, but there's a
# very wide variety of situations where we rely upon orm.join's
# adaption to glue clauses together, with joined-table inheritance's
# adaption should be enabled (or perhaps that we're even doing the
# whole thing the way we are here).
join_to_left = not right_is_aliased and not left_is_aliased
-
+
if self._from_obj and left_selectable is not None:
replace_clause_index, clause = sql_util.find_join_source(
self._from_obj,
# ensure it adapts to the left side.
if self._from_obj_alias and clause is self._from_obj[0]:
join_to_left = True
-
+
# An exception case where adaption to the left edge is not
# desirable. See above note on join_to_left.
if join_to_left and isinstance(clause, expression.Join) and \
sql_util.clause_is_present(left_selectable, clause):
join_to_left = False
-
+
clause = orm_join(clause,
right,
onclause, isouter=outerjoin,
clause = orm_join(clause, right, onclause,
isouter=outerjoin, join_to_left=join_to_left)
-
+
self._from_obj = self._from_obj + (clause,)
def _reset_joinpoint(self):
@_generative(_no_clauseelement_condition)
def select_from(self, *from_obj):
"""Set the FROM clause of this :class:`.Query` explicitly.
-
+
Sending a mapped class or entity here effectively replaces the
"left edge" of any calls to :meth:`.Query.join`, when no
joinpoint is otherwise established - usually, the default "join
point" is the leftmost entity in the :class:`.Query` object's
list of entities to be selected.
-
+
Mapped entities or plain :class:`.Table` or other selectables
can be sent here which will form the default FROM clause.
-
+
"""
obj = []
for fo in from_obj:
raise sa_exc.ArgumentError(
"select_from() accepts FromClause objects only.")
else:
- obj.append(fo)
-
+ obj.append(fo)
+
self._set_select_from(*obj)
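A hedged sketch of select_from() establishing the "left edge" for subsequent join() calls; the mappings are assumptions, e.g.::

    # select Address columns, but join from User as the left edge
    session.query(Address.email_address).\
        select_from(User).\
        join(User.addresses).\
        filter(User.name == 'ed')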
-
+
def __getitem__(self, item):
if isinstance(item, slice):
start, stop, step = util.decode_slice(item)
def slice(self, start, stop):
"""apply LIMIT/OFFSET to the ``Query`` based on a "
"range and return the newly resulting ``Query``."""
-
+
if start is not None and stop is not None:
self._offset = (self._offset or 0) + start
self._limit = stop - start
def first(self):
"""Return the first result of this ``Query`` or
None if the result doesn't contain any row.
-
+
first() applies a limit of one within the generated SQL, so that
only one primary entity row is generated on the server side
(note this may consist of multiple result rows if join-loaded
if multiple object identities are returned, or if multiple
rows are returned for a query that does not return object
identities.
-
+
Note that an entity query, that is, one which selects one or
more mapped classes as opposed to individual column attributes,
may ultimately represent many rows but only one row of
"""
ret = list(self)
-
+
l = len(ret)
if l == 1:
return ret[0]
querycontext.statement, params=self._params,
mapper=self._mapper_zero_or_none())
return self.instances(result, querycontext)
-
+
@property
def column_descriptions(self):
"""Return metadata about the columns which would be
returned by this :class:`Query`.
-
+
Format is a list of dictionaries::
-
+
user_alias = aliased(User, name='user2')
q = sess.query(User, User.id, user_alias)
-
+
# this expression:
q.columns
-
+
# would return:
[
{
'expr':user_alias
}
]
-
+
"""
return [
{
}
for ent in self._entities
]
-
+
def instances(self, cursor, __context=None):
"""Given a ResultProxy cursor as returned by connection.execute(),
return an ORM result as an iterator.
query_entity.row_processor(self, context, custom_rows)
for query_entity in self._entities
])
-
-
+
+
while True:
context.progress = {}
context.partials = {}
def merge_result(self, iterator, load=True):
"""Merge a result into this Query's Session.
-
+
Given an iterator returned by a Query of the same structure as this
one, return an identical iterator of results, with all mapped
instances merged into the session using Session.merge(). This is an
structure of the result rows and unmapped columns with less method
overhead than that of calling Session.merge() explicitly for each
value.
-
+
The structure of the results is determined based on the column list of
this Query - if these do not correspond, unchecked errors will occur.
-
+
The 'load' argument is the same as that of Session.merge().
-
+
"""
-
+
session = self.session
if load:
# flush current contents if we expect to load data
session._autoflush()
-
+
autoflush = session.autoflush
try:
session.autoflush = False
attributes.instance_state(newrow[i]),
attributes.instance_dict(newrow[i]),
load=load, _recursive={})
- result.append(util.NamedTuple(newrow, row._labels))
-
+ result.append(util.NamedTuple(newrow, row._labels))
+
return iter(result)
finally:
session.autoflush = autoflush
-
-
+
+
def _get(self, key=None, ident=None, refresh_state=None, lockmode=None,
only_load_props=None, passive=None):
lockmode = lockmode or self._lockmode
-
+
mapper = self._mapper_zero()
if not self._populate_existing and \
not refresh_state and \
# item present in identity map with a different class
if not issubclass(instance.__class__, mapper.class_):
return None
-
+
state = attributes.instance_state(instance)
-
+
# expired - ensure it still exists
if state.expired:
if passive is attributes.PASSIVE_NO_FETCH:
','.join("'%s'" % c for c in mapper.primary_key))
(_get_clause, _get_params) = mapper._get_clause
-
+
# None present in ident - turn those comparisons
# into "IS NULL"
if None in ident:
])
_get_clause = sql_util.adapt_criterion_to_null(
_get_clause, nones)
-
+
_get_clause = q._adapt_clause(_get_clause, True, False)
q._criterion = _get_clause
def count(self):
"""Return a count of rows this Query would return.
-
+
For simple entity queries, count() issues
a SELECT COUNT, and will specifically count the primary
key column of the first entity only. If the query uses
generated by this Query in a subquery, from which a SELECT COUNT
is issued, so that the contract of "how many rows
would be returned?" is honored.
-
+
For queries that request specific columns or expressions,
count() again makes no assumptions about those expressions
and will wrap everything in a subquery. Therefore,
- ``Query.count()`` is usually not what you want in this case.
+ ``Query.count()`` is usually not what you want in this case.
To count specific columns, often in conjunction with
GROUP BY, use ``func.count()`` as an individual column expression
instead of ``Query.count()``. See the ORM tutorial
:param synchronize_session: chooses the strategy for the removal of
matched objects from the session. Valid values are:
-
+
False - don't synchronize the session. This option is the most
efficient and is reliable once the session is expired, which
typically occurs after a commit(), or explicitly using
the objects in the session. If evaluation of the criteria isn't
implemented, an error is raised. In that case you probably
want to use the 'fetch' strategy as a fallback.
-
+
The expression evaluator currently doesn't account for differing
string collations between the database and Python.
else:
def eval_condition(obj):
return True
-
+
except evaluator.UnevaluatableError:
raise sa_exc.InvalidRequestError(
"Could not evaluate current criteria in Python. "
expire_all(). Before the expiration, updated objects may still
remain in the session with stale values on their attributes, which
can lead to confusing results.
-
+
'fetch' - performs a select query before the update to find
objects that are matched by the update query. The updated
attributes are expired on matched objects.
"the synchronize_session argument of "
"query.update() is now called 'fetch'")
synchronize_session = 'fetch'
-
+
if synchronize_session not in [False, 'evaluate', 'fetch']:
raise sa_exc.ArgumentError(
"Valid strategies for session synchronization "
for entity in self._entities:
entity.setup_context(self, context)
-
+
for rec in context.create_eager_joins:
strategy = rec[0]
strategy(*rec[1:])
-
+
eager_joins = context.eager_joins.values()
if context.from_clause:
# "load from explicit FROMs" mode,
# i.e. when select_from() or join() is used
- froms = list(context.from_clause)
+ froms = list(context.from_clause)
else:
# "load from discrete FROMs" mode,
# i.e. when each _MappedEntity has its own FROM
- froms = context.froms
+ froms = context.froms
if self._enable_single_crit:
self._adjust_for_single_inheritance(context)
order_by=context.order_by,
**self._select_args
)
-
+
for hint in self._with_hints:
inner = inner.with_hint(*hint)
-
+
if self._correlate:
inner = inner.correlate(*self._correlate)
[inner] + context.secondary_columns,
for_update=for_update,
use_labels=labels)
-
+
if self._execution_options:
statement = statement.execution_options(
**self._execution_options)
for hint in self._with_hints:
statement = statement.with_hint(*hint)
-
+
if self._execution_options:
statement = statement.execution_options(
**self._execution_options)
selected from the total results.
"""
-
+
for entity, (mapper, adapter, s, i, w) in \
self._mapper_adapter_map.iteritems():
single_crit = mapper._single_table_criterion
self.entities = [entity]
self.entity_zero = self.expr = entity
-
+
def setup_entity(self, entity, mapper, adapter,
from_obj, is_aliased_class, with_polymorphic):
self.mapper = mapper
self.path_entity = mapper
self.entity = self.entity_zero = mapper
self._label_name = self.mapper.class_.__name__
-
+
def set_with_polymorphic(self, query, cls_or_mappers,
selectable, discriminator):
if cls_or_mappers is None:
query._entities.append(self)
def _get_entity_clauses(self, query, context):
-
+
adapter = None
if not self.is_aliased_class and query._polymorphic_adapters:
adapter = query._polymorphic_adapters.get(self.mapper, None)
if not adapter and self.adapter:
adapter = self.adapter
-
+
if adapter:
if query._from_obj_alias:
ret = adapter.wrap(query._from_obj_alias)
self._polymorphic_discriminator)
return _instance, self._label_name
-
+
def setup_context(self, query, context):
adapter = self._get_entity_clauses(query, context)
def __init__(self, query, column):
self.expr = column
-
+
if isinstance(column, basestring):
column = sql.literal_column(column)
self._label_name = column.name
self.entity_zero = list(self.entities)[0]
else:
self.entity_zero = None
-
+
@property
def type(self):
return self.column.type
-
+
def adapt_to_selectable(self, query, sel):
c = _ColumnEntity(query, sel.corresponding_column(self.column))
c.entity_zero = self.entity_zero
c.entities = self.entities
-
+
def setup_entity(self, entity, mapper, adapter, from_obj,
is_aliased_class, with_polymorphic):
self.selectable = from_obj
multi_row_eager_loaders = False
adapter = None
froms = ()
-
+
def __init__(self, query):
if query._statement is not None:
Session = scoped_session(sessionmaker())
... use Session normally.
-
+
The internal registry is accessible as well,
and by default is an instance of :class:`.ThreadLocalRegistry`.
-
+
"""
def remove(self):
"""Dispose of the current contextual session."""
-
+
if self.registry.has():
self.registry().close()
self.registry.clear()
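A minimal usage sketch of the contextual session and remove(); the engine and User mapping are assumptions, e.g.::

    from sqlalchemy.orm import scoped_session, sessionmaker

    Session = scoped_session(sessionmaker(bind=engine))

    Session.add(User(name='ed'))   # proxies to the thread-local Session
    Session.commit()

    Session.remove()               # close and discard the current session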
def configure(self, **kwargs):
"""reconfigure the sessionmaker used by this ScopedSession."""
-
+
if self.registry.has():
warn('At least one scoped session is already present. '
' configure() can not affect sessions that have '
class when called.
e.g.::
-
+
Session = scoped_session(sessionmaker())
class MyClass(object):
single: thread safety; SessionTransaction
"""
-
+
_rollback_exception = None
-
+
def __init__(self, session, parent=None, nested=False):
self.session = session
self._connections = {}
for s in set(self._new).union(self.session._new):
self.session._expunge_state(s)
-
+
for s in set(self._deleted).union(self.session._deleted):
if s.deleted:
# assert s in self._deleted
"""Manages persistence operations for ORM-mapped objects.
The Session's usage paradigm is described at :ref:`session_toplevel`.
-
+
"""
public_methods = (
'is_modified',
'merge', 'query', 'refresh', 'rollback',
'scalar')
-
-
+
+
def __init__(self, bind=None, autoflush=True, expire_on_commit=True,
_enable_transaction_accounting=True,
autocommit=False, twophase=False,
typical point of entry.
"""
-
+
if weak_identity_map:
self._identity_cls = identity.WeakInstanceDict
else:
``subtransactions=True`` or ``nested=True`` is specified.
The ``subtransactions=True`` flag indicates that this :meth:`~.Session.begin`
- can create a subtransaction if a transaction is already in progress.
+ can create a subtransaction if a transaction is already in progress.
For documentation on subtransactions, please see :ref:`session_subtransactions`.
-
+
The ``nested`` flag begins a SAVEPOINT transaction and is equivalent
to calling :meth:`~.Session.begin_nested`. For documentation on SAVEPOINT
transactions, please see :ref:`session_begin_nested`.
def commit(self):
"""Flush pending changes and commit the current transaction.
-
+
If no transaction is in progress, this method raises an
InvalidRequestError.
-
+
By default, the :class:`.Session` also expires all database
- loaded state on all ORM-managed attributes after transaction commit.
+ loaded state on all ORM-managed attributes after transaction commit.
This is so that subsequent operations load the most recent
data from the database. This behavior can be disabled using
the ``expire_on_commit=False`` option to :func:`.sessionmaker` or
will be created for the life of the result (i.e., a connection is
checked out from the connection pool, which is returned when the
result object is closed).
-
+
If the :class:`Session` is not bound to an
:class:`~sqlalchemy.engine.base.Engine` or
:class:`~sqlalchemy.engine.base.Connection`, the given clause will be
(since the :class:`Session` keys multiple bind sources to a series of
:func:`mapper` objects). See :meth:`get_bind` for further details on
bind resolution.
-
+
:param clause:
A ClauseElement (i.e. select(), text(), etc.) or
string SQL statement to be executed
:param \**kw:
Additional keyword arguments are sent to :meth:`get_bind()`
which locates a connectable to use for the execution.
-
+
"""
clause = expression._literal_as_text(clause)
def scalar(self, clause, params=None, mapper=None, **kw):
"""Like execute() but return a scalar result."""
-
+
return self.execute(clause, params=params, mapper=mapper, **kw).scalar()
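A short sketch of execute() and scalar() with a textual statement; the table name and bind configuration are assumptions, e.g.::

    from sqlalchemy import text

    result = session.execute(
        text("SELECT id, name FROM users WHERE name = :name"),
        {'name': 'ed'})
    rows = result.fetchall()

    # scalar() returns the first column of the first row
    user_count = session.scalar(text("SELECT count(*) FROM users"))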
def close(self):
"a binding.")
c_mapper = mapper is not None and _class_to_mapper(mapper) or None
-
+
# manually bound?
if self.__binds:
if c_mapper:
context.append('mapper %s' % c_mapper)
if clause is not None:
context.append('SQL expression')
-
+
raise sa_exc.UnboundExecutionError(
"Could not locate a bind configured on %s or this Session" % (
', '.join(context)))
:meth:`~Session.refresh` usually only makes sense if non-ORM SQL
statements were emitted in the ongoing transaction, or if autocommit
mode is turned on.
-
+
:param attribute_names: optional. An iterable collection of
string attribute names indicating a subset of attributes to
be refreshed.
-
+
:param lockmode: Passed to the :class:`~sqlalchemy.orm.query.Query`
as used by :meth:`~sqlalchemy.orm.query.Query.with_lockmode`.
-
+
"""
try:
state = attributes.instance_state(instance)
def expire_all(self):
"""Expires all persistent instances within this Session.
-
+
When any attribute on a persistent instance is next accessed,
a query will be issued using the
:class:`.Session` object's current transactional context in order to
To expire individual objects and individual attributes
on those objects, use :meth:`Session.expire`.
-
+
The :class:`Session` object's default behavior is to
expire all state whenever the :meth:`Session.rollback`
or :meth:`Session.commit` methods are called, so that new
a highly isolated transaction will return the same values as were
previously read in that same transaction, regardless of changes
in database state outside of that transaction.
-
+
To expire all objects in the :class:`.Session` simultaneously,
use :meth:`Session.expire_all`.
-
+
The :class:`Session` object's default behavior is to
expire all state whenever the :meth:`Session.rollback`
or :meth:`Session.commit` methods are called, so that new
except exc.NO_STATE:
raise exc.UnmappedInstanceError(instance)
self._expire_state(state, attribute_names)
-
+
def _expire_state(self, state, attribute_names):
self._validate_persistent(state)
if attribute_names:
self._conditional_expire(state)
for (state, m, o) in cascaded:
self._conditional_expire(state)
-
+
def _conditional_expire(self, state):
"""Expire a state if persistent, else expunge if pending"""
-
+
if state.key:
_expire_state(state, state.dict, None, instance_dict=self.identity_map)
elif state in self._new:
self._new.pop(state)
state.detach()
-
+
def prune(self):
"""Remove unreferenced instances cached in the identity map.
if obj is not None:
instance_key = mapper._identity_key_from_state(state)
-
+
if _none_set.issubset(instance_key[1]) and \
not mapper.allow_partial_pks or \
_none_set.issuperset(instance_key[1]):
# map (see test/orm/test_naturalpks.py ReversePKsTest)
self.identity_map.discard(state)
state.key = instance_key
-
+
self.identity_map.replace(state)
state.commit_all(state.dict, self.identity_map)
-
+
# remove from new last, might be the last strong ref
if state in self._new:
if self._enable_transaction_accounting and self.transaction:
if state in self._deleted:
return
-
+
# ensure object is attached to allow the
# cascade operation to load deferred attributes
# and collections
mapped with ``cascade="merge"``.
See :ref:`unitofwork_merging` for a detailed discussion of merging.
-
+
"""
if 'dont_load' in kw:
load = not kw['dont_load']
util.warn_deprecated("dont_load=True has been renamed to load=False.")
-
+
_recursive = {}
-
+
if load:
# flush current contents if we expect to load data
self._autoflush()
-
+
_object_mapper(instance) # verify mapped
autoflush = self.autoflush
try:
load=load, _recursive=_recursive)
finally:
self.autoflush = autoflush
-
+
def _merge(self, state, state_dict, load=True, _recursive=None):
mapper = _state_mapper(state)
if state in _recursive:
new_instance = False
key = state.key
-
+
if key is None:
if not load:
raise sa_exc.InvalidRequestError(
if key in self.identity_map:
merged = self.identity_map[key]
-
+
elif not load:
if state.modified:
raise sa_exc.InvalidRequestError(
merged_state.key = key
self._update_impl(merged_state)
new_instance = True
-
+
elif not _none_set.issubset(key[1]) or \
(mapper.allow_partial_pks and
not _none_set.issuperset(key[1])):
merged = self.query(mapper.class_).get(key[1])
else:
merged = None
-
+
if merged is None:
merged = mapper.class_manager.new_instance()
merged_state = attributes.instance_state(merged)
else:
merged_state = attributes.instance_state(merged)
merged_dict = attributes.instance_dict(merged)
-
+
_recursive[state] = merged
# check that we didn't just pull the exact same
- # state out.
+ # state out.
if state is not merged_state:
merged_state.load_path = state.load_path
merged_state.load_options = state.load_options
-
+
for prop in mapper.iterate_properties:
prop.merge(self, state, state_dict, merged_state, merged_dict, load, _recursive)
if not load:
# remove any history
- merged_state.commit_all(merged_dict, self.identity_map)
+ merged_state.commit_all(merged_dict, self.identity_map)
if new_instance:
merged_state._run_on_load(merged)
raise sa_exc.InvalidRequestError(
"Object '%s' already has an identity - it can't be registered "
"as pending" % mapperutil.state_str(state))
-
+
self._attach(state)
if state not in self._new:
self._new[state] = state.obj()
if (self.identity_map.contains_state(state) and
state not in self._deleted):
return
-
+
if state.key is None:
raise sa_exc.InvalidRequestError(
"Instance '%s' is not persisted" %
mapperutil.state_str(state))
-
+
if state.deleted:
raise sa_exc.InvalidRequestError(
"Instance '%s' has been deleted. Use the make_transient() "
if state.key is None:
return
-
+
self._attach(state)
self._deleted[state] = state.obj()
self.identity_map.add(state)
-
+
def _attach(self, state):
if state.key and \
state.key in self.identity_map and \
"Can't attach instance %s; another instance with key %s is already present in this session." %
(mapperutil.state_str(state), state.key)
)
-
+
if state.session_id and state.session_id is not self.hash_key:
raise sa_exc.InvalidRequestError(
"Object '%s' is already attached to session '%s' "
"(this is '%s')" % (mapperutil.state_str(state),
state.session_id, self.hash_key))
-
+
if state.session_id != self.hash_key:
state.session_id = self.hash_key
for ext in self.extensions:
"The 'objects' argument to session.flush() is deprecated; "
"Please do not add objects to the session which should not "
"yet be persisted.")
-
+
if self._flushing:
raise sa_exc.InvalidRequestError("Session is already flushing")
-
+
try:
self._flushing = True
self._flush(objects)
finally:
self._flushing = False
-
+
def _flush(self, objects=None):
if (not self.identity_map.check_modified() and
not self._deleted and not self._new):
for ext in self.extensions:
ext.before_flush(self, flush_context, objects)
dirty = self._dirty_states
-
+
deleted = set(self._deleted)
new = set(self._new)
proc = new.union(dirty).intersection(objset).difference(deleted)
else:
proc = new.union(dirty).difference(deleted)
-
+
for state in proc:
is_orphan = _state_mapper(state)._is_orphan(state)
if is_orphan and not state.has_identity:
except:
transaction.rollback(_capture_exception=True)
raise
-
+
flush_context.finalize_flush_changes()
# useful assertions:
#else:
# assert self.identity_map._modified == self.identity_map._modified.difference(objects)
#self.identity_map._modified.clear()
-
+
for ext in self.extensions:
ext.after_flush_postexec(self, flush_context)
This method retrieves a history instance for each instrumented
attribute on the instance and performs a comparison of the current
- value to its previously committed value.
+ value to its previously committed value.
``include_collections`` indicates if multivalued collections should be
included in the operation. Setting this to False is a way to detect
The ``passive`` flag indicates if unloaded attributes and collections
should not be loaded in the course of performing this test.
-
+
A few caveats to this method apply:
-
+
* Instances present in the 'dirty' collection may result in a value
of ``False`` when tested with this method. This is because while
the object may have received attribute set events, there may be
based on the assumption that an UPDATE of the scalar value is
usually needed, and in those few cases where it isn't, is less
expensive on average than issuing a defensive SELECT.
-
+
The "old" value is fetched unconditionally only if the attribute
container has the "active_history" flag set to ``True``. This flag
is set typically for primary key attributes and scalar references
hasattr(attr.impl, 'get_collection')
) or not hasattr(attr.impl, 'get_history'):
continue
-
+
(added, unchanged, deleted) = \
attr.impl.get_history(state, dict_, passive=passive)
-
+
if added or deleted:
return True
return False
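A brief sketch of is_modified(); the User mapping is an assumption, e.g.::

    user = session.query(User).get(5)
    session.is_modified(user)    # False - no attribute events yet

    user.name = 'edward'
    session.is_modified(user)    # True

    # consider scalar attributes only, ignoring collections
    session.is_modified(user, include_collections=False)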
return util.IdentitySet(self._new.values())
_expire_state = state.InstanceState.expire_attributes
-
+
UOWEventHandler = unitofwork.UOWEventHandler
_sessions = weakref.WeakValueDictionary()
def make_transient(instance):
"""Make the given instance 'transient'.
-
+
This will remove its association with any
session and additionally will remove its "identity key",
such that it's as though the object were newly constructed,
except retaining its values. It also resets the
"deleted" flag on the state if this object
had been explicitly deleted by its session.
-
+
Attributes which were "expired" or deferred at the
instance level are reverted to undefined, and
will not trigger any loads.
-
+
"""
state = attributes.instance_state(instance)
s = _state_session(state)
del state.key
if state.deleted:
del state.deleted
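A minimal sketch of make_transient(); the User mapping and session are assumptions, e.g.::

    from sqlalchemy.orm.session import make_transient

    user = session.query(User).get(5)
    make_transient(user)
    # the object now has no session association and no identity key;
    # it behaves as though newly constructed, retaining attribute values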
-
+
def object_session(instance):
"""Return the ``Session`` to which instance belongs.
-
+
If the instance is not a mapped instance, an error is raised.
"""
-
+
try:
return _state_session(attributes.instance_state(instance))
except exc.NO_STATE:
raise exc.UnmappedInstanceError(instance)
-
+
def _state_session(state):
if state.session_id:
modified = False
expired = False
deleted = False
-
+
def __init__(self, obj, manager):
self.class_ = obj.__class__
self.manager = manager
@util.memoized_property
def committed_state(self):
return {}
-
+
@util.memoized_property
def parents(self):
return {}
@property
def has_identity(self):
return bool(self.key)
-
+
def detach(self):
if self.session_id:
try:
def dispose(self):
self.detach()
del self.obj
-
+
def _cleanup(self, ref):
instance_dict = self._instance_dict()
if instance_dict:
# remove possible cycles
self.__dict__.pop('callables', None)
self.dispose()
-
+
def obj(self):
return None
-
+
@property
def dict(self):
o = self.obj()
return attributes.instance_dict(o)
else:
return {}
-
+
@property
def sort_key(self):
return self.key and self.key[1] or (self.insert_order, )
for fn in manager.events.on_init:
fn(self, instance, args, kwargs)
-
+
# LESSTHANIDEAL:
# adjust for the case where the InstanceState was created before
# mapper compilation, and this actually needs to be a MutableAttrInstanceState
self.__class__ = MutableAttrInstanceState
self.obj = weakref.ref(self.obj(), self._cleanup)
self.mutable_dict = {}
-
+
try:
return manager.events.original_init(*mixed[1:], **kwargs)
except:
def _run_on_load(self, instance):
self.manager.events.run('on_load', instance)
-
+
def __getstate__(self):
d = {'instance':self.obj()}
if self.load_path:
d['load_path'] = interfaces.serialize_path(self.load_path)
return d
-
+
def __setstate__(self, state):
self.obj = weakref.ref(state['instance'], self._cleanup)
self.class_ = state['instance'].__class__
self.class_)
elif manager.is_mapped and not manager.mapper.compiled:
manager.mapper.compile()
-
+
self.committed_state = state.get('committed_state', {})
self.pending = state.get('pending', {})
self.parents = state.get('parents', {})
self.modified = state.get('modified', False)
self.expired = state.get('expired', False)
self.callables = state.get('callables', {})
-
+
if self.modified:
self._strong_obj = state['instance']
-
+
self.__dict__.update([
(k, state[k]) for k in (
'key', 'load_options', 'mutable_dict'
def initialize(self, key):
"""Set this attribute to an empty value or collection,
based on the AttributeImpl in use."""
-
+
self.manager.get_impl(key).initialize(self, self.dict)
def reset(self, dict_, key):
def set_callable(self, dict_, key, callable_):
"""Remove the given attribute and set the given callable
as a loader."""
-
+
dict_.pop(key, None)
self.callables[key] = callable_
-
+
def expire_attributes(self, dict_, attribute_names, instance_dict=None):
"""Expire all or a group of attributes.
-
+
If all attributes are expired, the "expired" flag is set to True.
-
+
"""
# we would like to assert that 'self.key is not None' here,
# but there are many cases where the mapper will expire
# occurs fully, within the flush(), before this key is assigned.
# the key is assigned late within the flush() to assist in
# "key switch" bookkeeping scenarios.
-
+
if attribute_names is None:
attribute_names = self.manager.keys()
self.expired = True
self.__dict__.get('committed_state', None),
self.mutable_dict
)
-
+
for key in attribute_names:
impl = self.manager[key].impl
if impl.accepts_scalar_loader and \
(not filter_deferred or impl.expire_missing or key in dict_):
self.callables[key] = self
dict_.pop(key, None)
-
+
for d in to_clear:
if d is not None:
d.pop(key, None)
if kw.get('passive') is attributes.PASSIVE_NO_FETCH:
return attributes.PASSIVE_NO_RESULT
-
+
toload = self.expired_attributes.\
intersection(self.unmodified)
-
+
self.manager.deferred_scalar_loader(self, toload)
# if the loader failed, or this
# dict. ensure they are removed.
for k in toload.intersection(self.callables):
del self.callables[k]
-
+
return ATTR_WAS_SET
@property
def unmodified(self):
"""Return the set of keys which have no uncommitted changes"""
-
+
return set(self.manager).difference(self.committed_state)
@property
def expired_attributes(self):
"""Return the set of keys which are 'expired' to be loaded by
the manager's deferred scalar loader, assuming no pending
- changes.
-
+ changes.
+
see also the ``unmodified`` collection which is intersected
against this set when a refresh operation occurs.
-
+
"""
return set([k for k, v in self.callables.items() if v is self])
def _is_really_none(self):
return self.obj()
-
+
def modified_event(self, dict_, attr, should_copy, previous, passive=PASSIVE_OFF):
if attr.key not in self.committed_state:
if previous is NEVER_SET:
previous = dict_[attr.key]
else:
previous = attr.get(self, dict_)
-
+
if should_copy and previous not in (None, NO_VALUE, NEVER_SET):
previous = attr.copy(previous)
self.committed_state[attr.key] = previous
-
-
+
+
# the "or not self.modified" is defensive at
# this point. The assertion below is expected
# to be True:
# assert self._strong_obj is None or self.modified
-
+
if self._strong_obj is None or not self.modified:
instance_dict = self._instance_dict()
if instance_dict:
self.committed_state[key] = self.manager[key].impl.copy(dict_[key])
else:
self.committed_state.pop(key, None)
-
+
self.expired = False
-
+
for key in set(self.callables).\
intersection(keys).\
intersection(dict_):
del self.callables[key]
-
+
def commit_all(self, dict_, instance_dict=None):
"""commit all attributes unconditionally.
if a value was not populated in state.dict.
"""
-
+
self.__dict__.pop('committed_state', None)
self.__dict__.pop('pending', None)
for key in self.manager.mutable_attributes:
if key in dict_:
self.committed_state[key] = self.manager[key].impl.copy(dict_[key])
-
+
if instance_dict and self.modified:
instance_dict._modified.discard(self)
-
+
self.modified = self.expired = False
self._strong_obj = None
class MutableAttrInstanceState(InstanceState):
"""InstanceState implementation for objects that reference 'mutable'
attributes.
-
+
Has a more involved "cleanup" handler that checks mutable attributes
for changes upon dereference, resurrecting if needed.
-
+
"""
-
+
@util.memoized_property
def mutable_dict(self):
return {}
-
+
def _get_modified(self, dict_=None):
if self.__dict__.get('modified', False):
return True
return True
else:
return False
-
+
def _set_modified(self, value):
self.__dict__['modified'] = value
-
+
modified = property(_get_modified, _set_modified)
-
+
@property
def unmodified(self):
"""a set of keys which have no uncommitted changes"""
dict_ = self.dict
-
+
return set([
key for key in self.manager
if (key not in self.committed_state or
def _is_really_none(self):
"""do a check modified/resurrect.
-
+
This would be called in the extremely rare
race condition that the weakref returned None but
the cleanup handler had not yet established the
__resurrect callable as its replacement.
-
+
"""
if self.modified:
self.obj = self.__resurrect
def reset(self, dict_, key):
self.mutable_dict.pop(key, None)
InstanceState.reset(self, dict_, key)
-
+
def _cleanup(self, ref):
"""weakref callback.
-
+
This method may be called by an asynchronous
gc.
-
+
If the state shows pending changes, the weakref
is replaced by the __resurrect callable which will
re-establish an object reference on next access,
else removes this InstanceState from the owning
identity map, if any.
-
+
"""
if self._get_modified(self.mutable_dict):
self.obj = self.__resurrect
except AssertionError:
pass
self.dispose()
-
+
def __resurrect(self):
"""A substitute for the obj() weakref function which resurrects."""
-
+
# store strong ref'ed version of the object; will revert
# to weakref when changes are persisted
-
+
obj = self.manager.new_instance(state=self)
self.obj = weakref.ref(obj, self._cleanup)
self._strong_obj = obj
# re-establishes identity attributes from the key
self.manager.events.run('on_resurrect', self, obj)
-
+
# TODO: don't really think we should run this here.
# resurrect is only meant to preserve the minimal state needed to
# do an UPDATE, not to produce a fully usable object
self._run_on_load(obj)
-
+
return obj
class PendingCollection(object):
proxy_property=None,
active_history=False,
impl_class=None,
- **kw
+ **kw
):
prop = strategy.parent_property
attribute_ext = list(util.to_list(prop.extension, default=[]))
-
+
if useobject and prop.single_parent:
attribute_ext.insert(0, _SingleParentValidator(prop))
attribute_ext.insert(0,
mapperutil.Validator(prop.key, prop.parent._validators[prop.key])
)
-
+
if useobject:
attribute_ext.append(sessionlib.UOWEventHandler(prop.key))
-
+
for m in mapper.self_and_descendants:
if prop is m._props.get(prop.key):
-
+
attributes.register_attribute_impl(
m.class_,
prop.key,
class UninstrumentedColumnLoader(LoaderStrategy):
"""Represent the a non-instrumented MapperProperty.
-
+
The polymorphic_on argument of mapper() often results in this,
if the argument is against the with_polymorphic selectable.
-
+
"""
def init(self):
self.columns = self.parent_property.columns
class ColumnLoader(LoaderStrategy):
"""Strategize the loading of a plain column-based MapperProperty."""
-
+
def init(self):
self.columns = self.parent_property.columns
self.is_composite = hasattr(self.parent_property, 'composite_class')
-
+
def setup_query(self, context, entity, path, adapter,
column_collection=None, **kwargs):
for c in self.columns:
if adapter:
c = adapter.columns[c]
column_collection.append(c)
-
+
def init_class_attribute(self, mapper):
self.is_class_level = True
coltype = self.columns[0].type
# TODO: check all columns ? check for foreign key as well?
active_history = self.parent_property.active_history or \
- self.columns[0].primary_key
+ self.columns[0].primary_key
_register_attribute(self, mapper, useobject=False,
compare_function=coltype.compare_values,
mutable_scalars=self.columns[0].type.is_mutable(),
active_history = active_history
)
-
+
def create_row_processor(self, selectcontext, path, mapper, row, adapter):
key = self.key
# look through list of columns represented here
return None
return self.parent_property.\
composite_class(*obj.__composite_values__())
-
+
def compare(a, b):
if a is None or b is None:
return a is b
-
+
for col, aprop, bprop in zip(self.columns,
a.__composite_values__(),
b.__composite_values__()):
composite_class = self.parent_property.composite_class
if adapter:
columns = [adapter.columns[c] for c in columns]
-
+
for c in columns:
if c not in row:
def new_execute(state, dict_, row):
return new_execute, None, None
log.class_logger(CompositeColumnLoader)
-
+
class DeferredColumnLoader(LoaderStrategy):
"""Strategize the loading of a deferred column-based MapperProperty."""
def init_class_attribute(self, mapper):
self.is_class_level = True
-
+
_register_attribute(self, mapper, useobject=False,
compare_function=self.columns[0].type.compare_values,
copy_function=self.columns[0].type.copy_value,
self.parent_property._get_strategy(ColumnLoader).\
setup_query(context, entity,
path, adapter, **kwargs)
-
+
def _class_level_loader(self, state):
if not state.has_identity:
return None
-
+
return LoadDeferredColumns(state, self.key)
-
-
+
+
log.class_logger(DeferredColumnLoader)
class LoadDeferredColumns(object):
"""serializable loader object used by DeferredColumnLoader"""
-
+
def __init__(self, state, key):
self.state, self.key = state, key
return attributes.PASSIVE_NO_RESULT
state = self.state
-
+
localparent = mapper._state_mapper(state)
-
+
prop = localparent.get_property(self.key)
strategy = prop._get_strategy(DeferredColumnLoader)
class DeferredOption(StrategizedOption):
propagate_to_loaders = True
-
+
def __init__(self, key, defer=False):
super(DeferredOption, self).__init__(key)
self.defer = defer
def __init__(self, group):
self.group = group
-
+
def process_query(self, query):
query._attributes[('undefer', self.group)] = True
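A short sketch of the corresponding public option, undefer_group(); the group name and Book mapping are assumptions, e.g.::

    from sqlalchemy.orm import undefer_group

    # un-defer every column that was deferred with group='photos'
    session.query(Book).options(undefer_group('photos')).all()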
return new_execute, None, None
log.class_logger(NoLoader)
-
+
class LazyLoader(AbstractRelationshipLoader):
"""Strategize a relationship() that loads when first accessed."""
self.__lazywhere, \
self.__bind_to_col, \
self._equated_columns = self._create_lazy_clause(self.parent_property)
-
+
self.logger.info("%s lazy loading clause %s", self, self.__lazywhere)
# determine if our "lazywhere" clause is the same as the mapper's
use_proxies=True,
equivalents=self.mapper._equivalent_columns
)
-
+
if self.use_get:
for col in self._equated_columns.keys():
if col in self.mapper._equivalent_columns:
for c in self.mapper._equivalent_columns[col]:
self._equated_columns[c] = self._equated_columns[col]
-
+
self.logger.info("%s will use query.get() to "
"optimize instance loads" % self)
def init_class_attribute(self, mapper):
self.is_class_level = True
-
+
# MANYTOONE currently only needs the
# "old" value for delete-orphan
# cascades. the required _SingleParentValidator
return self._lazy_none_clause(
reverse_direction,
adapt_source=adapt_source)
-
+
if not reverse_direction:
criterion, bind_to_col, rev = \
self.__lazywhere, \
o = state.obj() # strong ref
dict_ = attributes.instance_dict(o)
-
+
# use the "committed state" only if we're in a flush
# for this state.
-
+
sess = sessionlib._state_session(state)
if sess is not None and sess._flushing:
def visit_bindparam(bindparam):
if bindparam.key in bind_to_col:
bindparam.value = lambda: mapper._get_state_attr_by_column(
state, dict_, bind_to_col[bindparam.key])
-
-
+
+
if self.parent_property.secondary is not None and alias_secondary:
criterion = sql_util.ClauseAdapter(
self.parent_property.secondary.alias()).\
if adapt_source:
criterion = adapt_source(criterion)
return criterion
-
+
def _lazy_none_clause(self, reverse_direction=False, adapt_source=None):
if not reverse_direction:
criterion, bind_to_col, rev = \
if adapt_source:
criterion = adapt_source(criterion)
return criterion
-
+
def _class_level_loader(self, state):
if not state.has_identity and \
(not self.parent_property.load_on_pending or not state.session_id):
# this class - reset its
# per-instance attribute state, so that the class-level
# lazy loader is
- # executed when next referenced on this instance.
+ # executed when next referenced on this instance.
# this is needed in
# populate_existing() types of scenarios to reset
# any existing state.
state.reset(dict_, key)
return new_execute, None, None
-
+
@classmethod
def _create_lazy_clause(cls, prop, reverse_direction=False):
binds = util.column_dict()
_list = lookup.setdefault(l, [])
_list.append((l, r))
equated_columns[r] = l
-
+
def col_to_bind(col):
if col in lookup:
for tobind, equated in lookup[col]:
binds[col] = sql.bindparam(None, None, type_=col.type)
return binds[col]
return None
-
+
lazywhere = prop.primaryjoin
if prop.secondaryjoin is None or not reverse_direction:
lazywhere = visitors.replacement_traverse(
lazywhere, {}, col_to_bind)
-
+
if prop.secondaryjoin is not None:
secondaryjoin = prop.secondaryjoin
if reverse_direction:
secondaryjoin = visitors.replacement_traverse(
secondaryjoin, {}, col_to_bind)
lazywhere = sql.and_(lazywhere, secondaryjoin)
-
+
bind_to_col = dict((binds[col].key, col) for col in binds)
-
+
return lazywhere, bind_to_col, equated_columns
-
+
log.class_logger(LazyLoader)
class LoadLazyAttribute(object):
def __init__(self, state, key):
self.state, self.key = state, key
-
+
def __getstate__(self):
return (self.state, self.key)
def __setstate__(self, state):
self.state, self.key = state
-
+
def __call__(self, passive=False):
state = self.state
instance_mapper = mapper._state_mapper(state)
prop = instance_mapper.get_property(self.key)
strategy = prop._get_strategy(LazyLoader)
pending = not state.key
-
+
if (
passive is attributes.PASSIVE_NO_FETCH and
not strategy.use_get
pending
):
return attributes.PASSIVE_NO_RESULT
-
+
if strategy._should_log_debug():
strategy.logger.debug("loading %s",
mapperutil.state_attribute_str(
state, self.key))
-
+
session = sessionlib._state_session(state)
if session is None:
raise orm_exc.DetachedInstanceError(
"lazy load operation of attribute '%s' cannot proceed" %
(mapperutil.state_str(state), self.key)
)
-
+
q = session.query(prop.mapper)._adapt_all_clauses()
-
+
# don't autoflush on pending
# this would be something that's prominent in the
# docs and such
if pending:
q = q.autoflush(False)
-
+
if state.load_path:
q = q._with_current_path(state.load_path + (self.key,))
return val
allnulls = allnulls and val is None
ident.append(val)
-
+
if allnulls:
return None
-
+
if state.load_options:
q = q._conditional_options(*state.load_options)
key = prop.mapper.identity_key_from_primary_key(ident)
return q._get(key, ident, passive=passive)
-
+
if prop.order_by:
q = q.order_by(*util.to_list(prop.order_by))
if state.load_options:
q = q._conditional_options(*state.load_options)
-
+
lazy_clause = strategy.lazy_clause(state)
-
+
if pending:
bind_values = sql_util.bind_values(lazy_clause)
if None in bind_values:
return None
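# a pending (never-flushed) parent may not have values for all of the
# columns the lazy clause needs; if any bound value is None, nothing
# could match, so skip emitting a query entirely.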
-
+
q = q.filter(lazy_clause)
result = q.all()
"Multiple rows returned with "
"uselist=False for lazily-loaded attribute '%s' "
% prop)
-
+
return result[0]
else:
return None
self.parent_property.\
_get_strategy(LazyLoader).\
init_class_attribute(mapper)
-
+
def setup_query(self, context, entity,
path, adapter, column_collection=None,
parentmapper=None, **kwargs):
def create_row_processor(self, context, path, mapper, row, adapter):
def execute(state, dict_, row):
state.get_impl(self.key).get(state, dict_)
-
+
return None, None, execute
-
+
class SubqueryLoader(AbstractRelationshipLoader):
def init(self):
super(SubqueryLoader, self).init()
self.join_depth = self.parent_property.join_depth
-
+
def init_class_attribute(self, mapper):
self.parent_property.\
_get_strategy(LazyLoader).\
init_class_attribute(mapper)
-
+
def setup_query(self, context, entity,
path, adapter, column_collection=None,
parentmapper=None, **kwargs):
if not context.query._enable_eagerloads:
return
-
+
path = path + (self.key, )
# build up a path indicating the path from the leftmost
subq_path = subq_path + path
reduced_path = interfaces._reduce_path(path)
-
+
# join-depth / recursion check
if ("loaderstrategy", reduced_path) not in context.attributes:
if self.join_depth:
else:
if self.mapper.base_mapper in interfaces._reduce_path(subq_path):
return
-
+
orig_query = context.attributes.get(
("orig_query", SubqueryLoader),
context.query)
subq_mapper = mapperutil._class_to_mapper(subq_path[0])
-
+
# determine attributes of the leftmost mapper
if self.parent.isa(subq_mapper) and self.key==subq_path[1]:
leftmost_mapper, leftmost_prop = \
subq_mapper, \
subq_mapper.get_property(subq_path[1])
leftmost_cols, remote_cols = self._local_remote_columns(leftmost_prop)
-
+
leftmost_attr = [
leftmost_mapper._columntoproperty[c].class_attribute
for c in leftmost_cols
# which we'll join onto.
embed_q = q.with_labels().subquery()
left_alias = mapperutil.AliasedClass(leftmost_mapper, embed_q)
-
+
# q becomes a new query. basically doing a longhand
# "from_self()". (from_self() itself not quite industrial
# strength enough for all contingencies...but very close)
-
+
q = q.session.query(self.mapper)
q._attributes = {
("orig_query", SubqueryLoader): orig_query,
]
q = q.order_by(*local_attr)
q = q.add_columns(*local_attr)
-
+
for i, (mapper, key) in enumerate(to_join):
-
+
# we need to use query.join() as opposed to
# orm.join() here because of the
# rich behavior it brings when dealing with
# "with_polymorphic" mappers. "aliased"
# and "from_joinpoint" take care of most of
# the chaining and aliasing for us.
-
+
first = i == 0
middle = i < len(to_join) - 1
second_to_last = i == len(to_join) - 2
-
+
if first:
attr = getattr(left_alias, key)
else:
attr = key
-
+
if second_to_last:
q = q.join((parent_alias, attr), from_joinpoint=True)
else:
)
)
q = q.order_by(*eager_order_by)
-
+
# add new query to attributes to be picked up
# by create_row_processor
context.attributes[('subquery', reduced_path)] = q
-
+
def _local_remote_columns(self, prop):
if prop.secondary is None:
return zip(*prop.local_remote_pairs)
p[0] for p in prop.
secondary_synchronize_pairs
]
-
+
def create_row_processor(self, context, path, mapper, row, adapter):
if not self.parent.class_manager[self.key].impl.supports_population:
raise sa_exc.InvalidRequestError(
"'%s' does not support object "
"population - eager loading cannot be applied." %
self)
-
+
path = path + (self.key,)
path = interfaces._reduce_path(path)
-
+
if ('subquery', path) not in context.attributes:
return None, None, None
-
+
local_cols, remote_cols = self._local_remote_columns(self.parent_property)
remote_attr = [
self.mapper._columntoproperty[c].key
for c in remote_cols]
-
+
q = context.attributes[('subquery', path)]
-
+
collections = dict(
(k, [v[0] for v in v])
for k, v in itertools.groupby(
q,
lambda x:x[1:]
))
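# "collections" now maps each distinct tuple of parent-key values
# (the extra columns appended to the subquery) to the list of child
# objects loaded for that parent; each incoming parent row looks up
# its collection below using its own local column values.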
-
+
if adapter:
local_cols = [adapter.columns[c] for c in local_cols]
-
+
if self.uselist:
def execute(state, dict_, row):
collection = collections.get(
"Multiple rows returned with "
"uselist=False for eagerly-loaded attribute '%s' "
% self)
-
+
scalar = collection[0]
state.get_impl(self.key).\
set_committed_value(state, dict_, scalar)
-
+
return execute, None, None
log.class_logger(SubqueryLoader)
class EagerLoader(AbstractRelationshipLoader):
"""Strategize a relationship() that loads within the process
of the parent object being selected."""
-
+
def init(self):
super(EagerLoader, self).init()
self.join_depth = self.parent_property.join_depth
def init_class_attribute(self, mapper):
self.parent_property.\
_get_strategy(LazyLoader).init_class_attribute(mapper)
-
+
def setup_query(self, context, entity, path, adapter, \
column_collection=None, parentmapper=None,
allow_innerjoin=True,
**kwargs):
"""Add a left outer join to the statement thats being constructed."""
-
+
if not context.query._enable_eagerloads:
return
-
+
path = path + (self.key,)
-
+
reduced_path = interfaces._reduce_path(path)
-
+
# check for user-defined eager alias
if ("user_defined_eager_row_processor", reduced_path) in\
context.attributes:
clauses = context.attributes[
("user_defined_eager_row_processor",
reduced_path)]
-
+
adapter = entity._get_entity_clauses(context.query, context)
if adapter and clauses:
context.attributes[
context.attributes[
("user_defined_eager_row_processor",
reduced_path)] = clauses = adapter
-
+
add_to_collection = context.primary_columns
-
+
else:
# check for join_depth or basic recursion,
# if the current path was not explicitly stated as
# if this is an outer join, all eager joins from
# here must also be outer joins
allow_innerjoin = False
-
+
context.create_eager_joins.append(
(self._create_eager_join, context,
entity, path, adapter,
parentmapper=self.mapper,
column_collection=add_to_collection,
allow_innerjoin=allow_innerjoin)
-
+
def _create_eager_join(self, context, entity,
path, adapter, parentmapper,
clauses, innerjoin):
-
+
if parentmapper is None:
localparent = entity.mapper
else:
localparent = parentmapper
-
+
# whether or not the Query will wrap the selectable in a subquery,
# and then attach eager load joins to that (i.e., in the case of
# LIMIT/OFFSET etc.)
should_nest_selectable = context.multi_row_eager_loaders and \
context.query._should_nest_selectable
-
+
entity_key = None
if entity not in context.eager_joins and \
not should_nest_selectable and \
),
self.key, self.parent_property
)
-
+
if onclause is self.parent_property:
# TODO: this is a temporary hack to
# account for polymorphic eager loads where
# ensure all the parent cols in the primaryjoin are actually
# in the
# columns clause (i.e. are not deferred), so that aliasing applied
- # by the Query propagates those columns outward.
+ # by the Query propagates those columns outward.
# This has the effect
# of "undefering" those columns.
for col in sql_util.find_columns(
if adapter:
col = adapter.columns[col]
context.primary_columns.append(col)
-
+
if self.parent_property.order_by:
context.eager_order_by += \
eagerjoin._target_adapter.\
)
)
-
+
def _create_eager_adapter(self, context, row, adapter, path):
reduced_path = interfaces._reduce_path(path)
if ("user_defined_eager_row_processor", reduced_path) in \
path = path + (self.key,)
-
+
eager_adapter = self._create_eager_adapter(
context,
row,
adapter, path)
-
+
if eager_adapter is not False:
key = self.key
_instance = self.mapper._instance_processor(
context,
path + (self.mapper,),
eager_adapter)
-
+
if not self.uselist:
def new_execute(state, dict_, row):
# set a scalar object instance directly on the parent
self.chained = chained
self.propagate_to_loaders = propagate_to_loaders
self.strategy_cls = factory(lazy)
-
+
@property
def is_eager(self):
return self.lazy in (False, 'joined', 'subquery')
-
+
@property
def is_chained(self):
return self.is_eager and self.chained
return ImmediateLoader
else:
return LazyLoader
-
-
-
+
+
+
class EagerJoinOption(PropertyOption):
-
+
def __init__(self, key, innerjoin, chained=False):
super(EagerJoinOption, self).__init__(key)
self.innerjoin = innerjoin
self.chained = chained
-
+
def is_chained(self):
return self.chained
query._attributes[("eager_join_type", path)] = self.innerjoin
else:
query._attributes[("eager_join_type", paths[-1])] = self.innerjoin
-
+
class LoadEagerFromAliasOption(PropertyOption):
-
+
def __init__(self, key, alias=None):
super(LoadEagerFromAliasOption, self).__init__(key)
if alias is not None:
(mapperutil.instance_str(value), state.class_, self.prop)
)
return value
-
+
def append(self, state, value, initiator):
return self._do_check(state, value, None, initiator)
dest_mapper._set_state_attr_by_column(dest, dest.dict, r, value)
except exc.UnmappedColumnError:
_raise_col_to_prop(True, source_mapper, l, dest_mapper, r)
-
+
# technically the "r.primary_key" check isn't
# needed here, but we check for this condition to limit
# how often this logic is invoked for memory/performance
def source_modified(uowcommit, source, source_mapper, synchronize_pairs):
"""return true if the source object has changes from an old to a
new value on the given synchronize pairs
-
+
"""
for l, r in synchronize_pairs:
try:
"""An event handler added to all relationship attributes which handles
session cascade operations.
"""
-
+
active_history = False
-
+
def __init__(self, key):
self.key = key
item not in sess:
sess.add(item)
return item
-
+
def remove(self, state, item, initiator):
sess = session._state_session(state)
if sess:
# dictionary used by external actors to
# store arbitrary state information.
self.attributes = {}
-
+
# dictionary of mappers to sets of
# DependencyProcessors, which are also
# set to be part of the sorted flush actions,
# which have that mapper as a parent.
self.deps = util.defaultdict(set)
-
+
# dictionary of mappers to sets of InstanceState
# items pending for flush which have that mapper
# as a parent.
self.mappers = util.defaultdict(set)
-
+
# a dictionary of Preprocess objects, which gather
# additional states impacted by the flush
# and determine if a flush action is needed
self.presort_actions = {}
-
+
# dictionary of PostSortRec objects, each
# one issues work during the flush within
# a certain ordering.
self.postsort_actions = {}
-
+
# a set of 2-tuples, each containing two
# PostSortRec objects where the second
# is dependent on the first being executed
# first
self.dependencies = set()
-
+
# dictionary of InstanceState-> (isdelete, listonly)
# tuples, indicating if this state is to be deleted
# or insert/updated, or just refreshed
self.states = {}
-
+
# tracks InstanceStates which will be receiving
# a "post update" call. Keys are mappers,
# values are a set of states and a set of the
# columns which should be included in the update.
self.post_update_states = util.defaultdict(lambda: (set(), set()))
-
+
@property
def has_work(self):
return bool(self.states)
def is_deleted(self, state):
"""return true if the given state is marked as deleted
within this uowtransaction."""
-
+
return state in self.states and self.states[state][0]
-
+
def memo(self, key, callable_):
if key in self.attributes:
return self.attributes[key]
else:
self.attributes[key] = ret = callable_()
return ret
-
+
def remove_state_actions(self, state):
"""remove pending actions for a state from the uowtransaction."""
-
+
isdelete = self.states[state][0]
-
+
self.states[state] = (isdelete, True)
-
+
def get_attribute_history(self, state, key, passive=True):
"""facade to attributes.get_state_history(), including caching of results."""
-
+
hashkey = ("history", state, key)
# cache the objects, not the states; the strong reference here
return history
else:
return history.as_state()
-
+
def has_dep(self, processor):
return (processor, True) in self.presort_actions
-
+
def register_preprocessor(self, processor, fromparent):
key = (processor, fromparent)
if key not in self.presort_actions:
self.presort_actions[key] = Preprocess(processor, fromparent)
-
+
def register_object(self, state, isdelete=False,
listonly=False, cancel_delete=False):
if not self.session._contains_state(state):
if state not in self.states:
mapper = _state_mapper(state)
-
+
if mapper not in self.mappers:
mapper._per_mapper_flush_actions(self)
-
+
self.mappers[mapper].add(state)
self.states[state] = (isdelete, listonly)
else:
if not listonly and (isdelete or cancel_delete):
self.states[state] = (isdelete, False)
-
+
def issue_post_update(self, state, post_update_cols):
mapper = state.manager.mapper.base_mapper
states, cols = self.post_update_states[mapper]
states.add(state)
cols.update(post_update_cols)
-
+
@util.memoized_property
def _mapper_for_dep(self):
"""return a dynamic mapping of (Mapper, DependencyProcessor) to
True or False, indicating if the DependencyProcessor operates
on objects of that Mapper.
-
+
The result is stored in the dictionary persistently once
calculated.
-
+
"""
return util.PopulateDict(
lambda tup:tup[0]._props.get(tup[1].key) is tup[1].prop
)
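# the lambda returns True only when the mapper's property for the
# processor's key is the processor's own property, i.e. the
# DependencyProcessor actually applies to instances of that mapper.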
-
+
def filter_states_for_dep(self, dep, states):
"""Filter the given list of InstanceStates to those relevant to the
given DependencyProcessor.
-
+
"""
mapper_for_dep = self._mapper_for_dep
return [s for s in states if mapper_for_dep[(s.manager.mapper, dep)]]
-
+
def states_for_mapper_hierarchy(self, mapper, isdelete, listonly):
checktup = (isdelete, listonly)
for mapper in mapper.base_mapper.self_and_descendants:
for state in self.mappers[mapper]:
if self.states[state] == checktup:
yield state
-
+
def _generate_actions(self):
"""Generate the full, unsorted collection of PostSortRecs as
well as dependency pairs for this UOWTransaction.
-
+
"""
# execute presort_actions, until all states
# have been processed. a presort_action might
self.cycles = cycles = topological.find_cycles(
self.dependencies,
self.postsort_actions.values())
-
+
if cycles:
# if yes, break the per-mapper actions into
# per-state actions
self.dependencies.remove(edge)
for dep in convert[edge[1]]:
self.dependencies.add((edge[0], dep))
-
+
return set([a for a in self.postsort_actions.values()
if not a.disabled
]
def execute(self):
postsort_actions = self._generate_actions()
-
+
#sort = topological.sort(self.dependencies, postsort_actions)
#print "--------------"
#print self.dependencies
#print list(sort)
#print "COUNT OF POSTSORT ACTIONS", len(postsort_actions)
-
+
# execute
if self.cycles:
for set_ in topological.sort_as_subsets(
self.dependencies,
postsort_actions):
rec.execute(self)
-
+
def finalize_flush_changes(self):
"""mark processed objects as clean / deleted after a successful flush().
This method is called within the flush() method after the
execute() method has succeeded and the transaction has been committed.
-
+
"""
for state, (isdelete, listonly) in self.states.iteritems():
if isdelete:
)
else:
return self.dependency_processor.mapper.self_and_descendants
-
+
class Preprocess(IterateMappersMixin):
def __init__(self, dependency_processor, fromparent):
self.dependency_processor = dependency_processor
self.fromparent = fromparent
self.processed = set()
self.setup_flush_actions = False
-
+
def execute(self, uow):
delete_states = set()
save_states = set()
-
+
for mapper in self._mappers(uow):
for state in uow.mappers[mapper].difference(self.processed):
(isdelete, listonly) = uow.states[state]
if save_states:
self.dependency_processor.presort_saves(uow, save_states)
self.processed.update(save_states)
-
+
if (delete_states or save_states):
if not self.setup_flush_actions and (
self.dependency_processor.\
class PostSortRec(object):
disabled = False
-
+
def __new__(cls, uow, *args):
key = (cls, ) + args
if key in uow.postsort_actions:
ret = \
object.__new__(cls)
return ret
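# PostSortRec acts as a per-flush flyweight: records are keyed on
# (class,) + args in uow.postsort_actions, so registering the same
# action twice within one flush yields the same record object.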
-
+
def execute_aggregate(self, uow, recs):
self.execute(uow)
-
+
def __repr__(self):
return "%s(%s)" % (
self.__class__.__name__,
self.delete = delete
self.fromparent = fromparent
uow.deps[dependency_processor.parent.base_mapper].add(dependency_processor)
-
+
def execute(self, uow):
states = self._elements(uow)
if self.delete:
def execute(self, uow):
states, cols = uow.post_update_states[self.mapper]
states = [s for s in states if uow.states[s][0] == self.isdelete]
-
+
self.mapper._post_update(states, uow, cols)
class SaveUpdateAll(PostSortRec):
def __init__(self, uow, mapper):
self.mapper = mapper
assert mapper is mapper.base_mapper
-
+
def execute(self, uow):
self.mapper._save_obj(
uow.states_for_mapper_hierarchy(self.mapper, False, False),
uow
)
-
+
def per_state_flush_actions(self, uow):
states = list(uow.states_for_mapper_hierarchy(self.mapper, False, False))
for rec in self.mapper._per_state_flush_actions(
states,
False):
yield rec
-
+
for dep in uow.deps[self.mapper]:
states_for_prop = uow.filter_states_for_dep(dep, states)
dep.per_state_flush_actions(uow, states_for_prop, False)
-
+
class DeleteAll(PostSortRec):
def __init__(self, uow, mapper):
self.mapper = mapper
states,
True):
yield rec
-
+
for dep in uow.deps[self.mapper]:
states_for_prop = uow.filter_states_for_dep(dep, states)
dep.per_state_flush_actions(uow, states_for_prop, True)
mapperutil.state_str(self.state),
self.delete
)
-
+
class SaveUpdateState(PostSortRec):
def __init__(self, uow, state, mapper):
self.state = state
self.mapper = mapper
-
+
def execute_aggregate(self, uow, recs):
cls_ = self.__class__
mapper = self.mapper
def __init__(self, uow, state, mapper):
self.state = state
self.mapper = mapper
-
+
def execute_aggregate(self, uow, recs):
cls_ = self.__class__
mapper = self.mapper
The ORM equivalent of a :func:`sqlalchemy.sql.expression.alias`
construct, this object mimics the mapped class using a
__getattr__ scheme and maintains a reference to a
- real :class:`~sqlalchemy.sql.expression.Alias` object.
-
+ real :class:`~sqlalchemy.sql.expression.Alias` object.
+
Usage is via the :class:`~sqlalchemy.orm.aliased()` synonym::
# find all pairs of users with the same name
"""Create filtering criterion that relates this query's primary entity
to the given related instance, using established :func:`.relationship()`
configuration.
-
+
The SQL rendered is the same as that rendered when a lazy loader
would fire off from the given parent on that attribute, meaning
that the appropriate state is taken from the parent object in
Python without the need to render joins to the parent table
in the rendered statement.
-
+
As of 0.6.4, this method accepts parent instances in all
persistence states, including transient, persistent, and detached.
Only the requisite primary key/foreign key attributes need to
be populated. Previous versions didn't work with transient
instances.
-
+
:param instance:
An instance which has some :func:`.relationship`.
String property name, or class-bound attribute, which indicates
what relationship from the instance should be used to reconcile the
parent/child relationship.
-
+
"""
if isinstance(prop, basestring):
mapper = object_mapper(instance)
if isinstance(entity, mapperlib.Mapper):
mapper = entity
-
+
elif isinstance(entity, type):
class_manager = attributes.manager_of_class(entity)
-
+
if class_manager is None:
return None, entity, False
-
+
mapper = class_manager.mapper
else:
return None, entity, False
-
+
if compile:
mapper = mapper.compile()
return mapper, mapper._with_polymorphic_selectable, False
def _entity_descriptor(entity, key):
"""Return a class attribute given an entity and string name.
-
+
May return :class:`.InstrumentedAttribute` or user-defined
attribute.
"""
if not isinstance(entity, (AliasedClass, type)):
entity = entity.class_
-
+
try:
return getattr(entity, key)
except AttributeError:
Raises UnmappedClassError if no mapping is configured.
"""
-
+
try:
class_manager = attributes.manager_of_class(class_)
mapper = class_manager.mapper
mapper = class_or_mapper
else:
raise exc.UnmappedClassError(class_or_mapper)
-
+
if compile:
return mapper.compile()
else:
self.logging_name = self._orig_logging_name = logging_name
else:
self._orig_logging_name = None
-
+
self.logger = log.instance_logger(self, echoflag=echo)
self._threadconns = threading.local()
self._creator = creator
def unique_connection(self):
"""Produce a DBAPI connection that is not referenced by any
thread-local context.
-
+
This method is different from :meth:`.Pool.connect` only if the
``use_threadlocal`` flag has been set to ``True``.
-
+
"""
-
+
return _ConnectionFairy(self).checkout()
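# e.g., for a pool constructed with use_threadlocal=True, repeated
# pool.connect() calls within one thread re-use the same underlying
# connection, while pool.unique_connection() always checks out a
# distinct one.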
def create_connection(self):
"""Called by subclasses to create a new ConnectionRecord."""
-
+
return _ConnectionRecord(self)
def recreate(self):
"""Return a new :class:`.Pool`, of the same class as this one
and configured with identical creation arguments.
-
+
This method is used in conjunction with :meth:`dispose`
to close out an entire :class:`.Pool` and create a new one in
its place.
-
+
"""
raise NotImplementedError()
remaining open. It is advised not to reuse the pool once dispose()
is called, and to instead use a new pool constructed by the
recreate() method.
-
+
"""
raise NotImplementedError()
def connect(self):
"""Return a DBAPI connection from the pool.
-
+
The connection is instrumented such that when its
``close()`` method is called, the connection will be returned to
the pool.
-
+
"""
if not self._use_threadlocal:
return _ConnectionFairy(self).checkout()
def return_conn(self, record):
"""Given a _ConnectionRecord, return it to the :class:`.Pool`.
-
+
This method is called when an instrumented DBAPI connection
has its ``close()`` method called.
-
+
"""
if self._use_threadlocal and hasattr(self._threadconns, "current"):
del self._threadconns.current
def get(self):
"""Return a non-instrumented DBAPI connection from this :class:`.Pool`.
-
+
This is called by ConnectionRecord in order to get its DBAPI
resource.
-
+
"""
return self.do_get()
def do_get(self):
"""Implementation for :meth:`get`, supplied by subclasses."""
-
+
raise NotImplementedError()
def do_return_conn(self, conn):
"""Implementation for :meth:`return_conn`, supplied by subclasses."""
-
+
raise NotImplementedError()
def status(self):
def _finalize_fairy(connection, connection_record, pool, ref=None):
_refs.discard(connection_record)
-
+
if ref is not None and \
(connection_record.fairy is not ref or
isinstance(pool, AssertionPool)):
connection_record.invalidate(e=e)
if isinstance(e, (SystemExit, KeyboardInterrupt)):
raise
-
+
if connection_record is not None:
connection_record.fairy = None
pool.logger.debug("Connection %r being returned to pool", connection)
__slots__ = '_pool', '__counter', 'connection', \
'_connection_record', '__weakref__', '_detached_info'
-
+
def __init__(self, pool):
self._pool = pool
self.__counter = 0
self._parent = parent
self.cursor = cursor
self.execute = cursor.execute
-
+
def invalidate(self, e=None):
self._parent.invalidate(e=e)
-
+
def __iter__(self):
return iter(self.cursor)
-
+
def close(self):
try:
self.cursor.close()
if isinstance(e, (SystemExit, KeyboardInterrupt)):
raise
-
+
def __setattr__(self, key, value):
if key in self.__slots__:
object.__setattr__(self, key, value)
else:
setattr(self.cursor, key, value)
-
+
def __getattr__(self, key):
return getattr(self.cursor, key)
:param pool_size: The number of threads in which to maintain connections
at once. Defaults to five.
-
+
"""
def __init__(self, creator, pool_size=5, **kw):
# pysqlite won't even let you close a conn from a thread
# that didn't create it
pass
-
+
self._all_conns.clear()
-
+
def dispose_local(self):
if hasattr(self._conn, 'current'):
conn = self._conn.current()
@memoized_property
def connection(self):
return _ConnectionRecord(self)
-
+
def status(self):
return "StaticPool"
self._conn = None
self._checked_out = False
Pool.__init__(self, *args, **kw)
-
+
def status(self):
return "AssertionPool"
def do_return_invalid(self, conn):
self._conn = None
self._checked_out = False
-
+
def dispose(self):
self._checked_out = False
if self._conn:
return AssertionPool(self._creator, echo=self.echo,
logging_name=self._orig_logging_name,
listeners=self.listeners)
-
+
def do_get(self):
if self._checked_out:
raise AssertionError("connection is already checked out")
-
+
if not self._conn:
self._conn = self.create_connection()
-
+
self._checked_out = True
return self._conn
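# AssertionPool permits at most one connection to be checked out at a
# time and raises AssertionError on a second checkout, which makes it
# useful in test suites for detecting connection leaks.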
a Pool class, defaulting to QueuePool
Other parameters are sent to the Pool object's constructor.
-
+
"""
self.module = module
self.poolclass = poolclass
self.pools = {}
self._create_pool_mutex = threading.Lock()
-
+
def close(self):
for key in self.pools.keys():
del self.pools[key]
return self.pools[key]
finally:
self._create_pool_mutex.release()
-
+
def connect(self, *args, **kw):
"""Activate a connection to the database.
If the pool has no available connections and allows new connections
to be created, a new database connection will be made.
-
+
"""
return self.get_pool(*args, **kw).connect()
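# illustrative use, assuming this is the proxy returned by
# sqlalchemy.pool.manage() (DBAPI and connect arguments are examples):
#
#     import sqlalchemy.pool as pool
#     import psycopg2
#     psycopg2 = pool.manage(psycopg2)
#     conn = psycopg2.connect(database="test", user="scott")
#
# connections requested with identical connect() arguments are drawn
# from the same underlying Pool instance.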
return UnicodeResultProcessor(encoding, errors).process
else:
return UnicodeResultProcessor(encoding).process
-
+
def to_decimal_processor_factory(target_class, scale=10):
# Note that the scale argument is not taken into account for integer
# values in the C implementation while it is in the Python one.
class Table(SchemaItem, expression.TableClause):
"""Represent a table in a database.
-
+
e.g.::
-
+
mytable = Table("mytable", metadata,
Column('mytable_id', Integer, primary_key=True),
Column('value', String(50))
The Table object constructs a unique instance of itself based on its
name within the given MetaData object. Constructor
arguments are as follows:
-
+
:param name: The name of this table as represented in the database.
This property, along with the *schema*, indicates the *singleton
table. Similar to the style of a CREATE TABLE statement, other
:class:`.SchemaItem` constructs may be added here, including
:class:`PrimaryKeyConstraint`, and :class:`ForeignKeyConstraint`.
-
+
:param autoload: Defaults to False: the Columns for this table should
be reflected from the database. Usually there will be no Column
objects in the constructor if this property is set.
:class:`Table` are overwritten.
"""
-
+
__visit_name__ = 'table'
ddl_events = ('before-create', 'after-create',
if not args:
# python3k pickle seems to call this
return object.__new__(cls)
-
+
try:
name, metadata, args = args[0], args[1], args[2:]
except IndexError:
raise TypeError("Table() takes at least two arguments")
-
+
schema = kw.get('schema', None)
useexisting = kw.pop('useexisting', False)
mustexist = kw.pop('mustexist', False)
except:
metadata.tables.pop(key)
raise
-
+
def __init__(self, *args, **kw):
# __init__ is overridden to prevent __new__ from
# calling the superclass constructor.
pass
-
+
def _init(self, name, metadata, *args, **kwargs):
super(Table, self).__init__(name)
self.metadata = metadata
def add_is_dependent_on(self, table):
"""Add a 'dependency' for this Table.
-
+
This is another Table object which must be created
first before this one can, or dropped after this one.
-
+
Usually, dependencies between tables are determined via
ForeignKey objects. However, for other situations that
create dependencies outside of foreign keys (rules, inheriting),
this method can manually establish such a link.
-
+
"""
self._extra_dependencies.add(table)
-
+
def append_column(self, column):
"""Append a ``Column`` to this ``Table``."""
created or dropped, either directly before or after the DDL is issued
to the database. The listener may modify the Table, but may not abort
the event itself.
-
+
:param event:
One of ``Table.ddl_events``; e.g. 'before-create', 'after-create',
'before-drop' or 'after-drop'.
:event:
The event currently being handled
-
+
:target:
The ``Table`` object being created or dropped
-
+
:bind:
The ``Connection`` being used for DDL execution.
if bind is None:
bind = _bind_or_error(self)
bind.drop(self, checkfirst=checkfirst)
-
+
def tometadata(self, metadata, schema=RETAIN_SCHEMA):
"""Return a copy of this :class:`Table` associated with a different
:class:`MetaData`.
-
+
E.g.::
-
+
# create two metadata
meta1 = MetaData('sqlite:///querytest.db')
meta2 = MetaData()
# create the same Table object for the plain metadata
users_table_2 = users_table.tometadata(meta2)
-
+
"""
if schema is RETAIN_SCHEMA:
"""Represents a column in a database table."""
__visit_name__ = 'column'
-
+
def __init__(self, *args, **kwargs):
"""
Construct a new ``Column`` object.
-
+
:param name: The name of this column as represented in the database.
This argument may be the first positional argument, or specified
via keyword.
-
+
Names which contain no upper case characters
will be treated as case insensitive names, and will not be quoted
unless they are a reserved word. Names with any number of upper
case characters will be quoted and sent exactly. Note that this
behavior applies even for databases which standardize upper
case names as case insensitive such as Oracle.
-
+
The name field may be omitted at construction time and applied
later, at any time before the Column is associated with a
:class:`Table`. This is to support convenient
usage within the :mod:`~sqlalchemy.ext.declarative` extension.
-
+
:param type\_: The column's type, indicated using an instance which
subclasses :class:`~sqlalchemy.types.AbstractType`. If no arguments
are required for the type, the class of the type can be sent
as well, e.g.::
-
+
# use a type with arguments
Column('data', String(50))
-
+
# use no arguments
Column('level', Integer)
-
+
The ``type`` argument may be the second positional argument
or specified by keyword.
has a composite primary key consisting of more than one
integer column, set this flag to True only on the
column that should be considered "autoincrement".
-
+
The setting *only* has an effect for columns which are:
-
+
* Integer derived (i.e. INT, SMALLINT, BIGINT)
-
+
* Part of the primary key
-
+
* Are not referenced by any foreign keys
-
+
* have no server side or client side defaults (with the exception
of Postgresql SERIAL).
-
+
The setting has these two effects on columns that meet the
above criteria:
-
+
* DDL issued for the column will include database-specific
keywords intended to signify this column as an
"autoincrement" column, such as AUTO INCREMENT on MySQL,
special SQLite flag that is not required for autoincrementing
behavior. See the SQLite dialect documentation for
information on SQLite's AUTOINCREMENT.
-
+
* The column will be considered to be available as
cursor.lastrowid or equivalent, for those dialects which
"post fetch" newly inserted identifiers after a row has
if this column is otherwise not specified in the VALUES clause of
the insert. This is a shortcut to using :class:`ColumnDefault` as
a positional argument.
-
+
Contrast this argument to ``server_default`` which creates a
default generator on the database side.
-
+
:param doc: optional String that can be used by the ORM or similar
to document attributes. This attribute does not render SQL
comments (a future attribute 'comment' will achieve that).
-
+
:param key: An optional string identifier which will identify this
``Column`` object on the :class:`Table`. When a key is provided,
this is the only identifier referencing the ``Column`` within the
present in the SET clause of the update. This is a shortcut to
using :class:`ColumnDefault` as a positional argument with
``for_update=True``.
-
+
:param primary_key: If ``True``, marks this column as a primary key
column. Multiple columns can have this flag set to specify
composite primary keys. As an alternative, the primary key of a
Strings and text() will be converted into a :class:`DefaultClause`
object upon initialization.
-
+
Use :class:`FetchedValue` to indicate that an already-existing
column will generate a default value on the database side which
will be available to SQLAlchemy for post-fetch after inserts. This
name = args.pop(0)
if args:
coltype = args[0]
-
+
if (isinstance(coltype, types.AbstractType) or
(isinstance(coltype, type) and
issubclass(coltype, types.AbstractType))):
raise exc.ArgumentError(
"May not pass type_ positionally and as a keyword.")
type_ = args.pop(0)
-
+
no_type = type_ is None
-
+
super(Column, self).__init__(name, None, type_)
self.key = kwargs.pop('key', name)
self.primary_key = kwargs.pop('primary_key', False)
# otherwise, add DDL-related events
elif isinstance(self.type, types.SchemaType):
self.type._set_parent(self)
-
+
if self.default is not None:
if isinstance(self.default, (ColumnDefault, Sequence)):
args.append(self.default)
args.append(self.server_default)
else:
args.append(DefaultClause(self.server_default))
-
+
if self.onupdate is not None:
if isinstance(self.onupdate, (ColumnDefault, Sequence)):
args.append(self.onupdate)
else:
args.append(ColumnDefault(self.onupdate, for_update=True))
-
+
if self.server_onupdate is not None:
if isinstance(self.server_onupdate, FetchedValue):
args.append(self.server_default)
if 'info' in kwargs:
self.info = kwargs.pop('info')
-
+
if kwargs:
raise exc.ArgumentError(
"Unknown arguments passed to Column: " + repr(kwargs.keys()))
# already, if it's a composite constraint
# and more than one col being replaced
table.constraints.remove(fk.constraint)
-
+
table._columns.replace(self)
if self.primary_key:
for fn in self._table_events:
fn(table, self)
del self._table_events
-
+
def _on_table_attach(self, fn):
if self.table is not None:
fn(self.table, self)
else:
self._table_events.add(fn)
-
+
def copy(self, **kw):
"""Create a copy of this ``Column``, unitialized.
This is used in ``Table.tometadata``.
"""
-
+
# Constraint objects plus non-constraint-bound ForeignKey objects
args = \
[c.copy(**kw) for c in self.constraints] + \
[c.copy(**kw) for c in self.foreign_keys if not c.constraint]
-
+
c = Column(
name=self.name,
type_=self.type,
if hasattr(self, '_table_events'):
c._table_events = list(self._table_events)
return c
-
+
def _make_proxy(self, selectable, name=None):
"""Create a *proxy* for this column.
(such as an alias or select statement). The column should
be used only in select scenarios, as its full DDL/default
information is not transferred.
-
+
"""
fk = [ForeignKey(f.column) for f in self.foreign_keys]
if name is None and self.name is None:
``ForeignKey`` is specified as an argument to a :class:`Column` object,
e.g.::
-
+
t = Table("remote_table", metadata,
Column("remote_id", ForeignKey("main_table.id"))
)
-
+
Note that ``ForeignKey`` is only a marker object that defines
a dependency between two columns. The actual constraint
is in all cases represented by the :class:`ForeignKeyConstraint`
``ForeignKey`` markers are automatically generated to be
present on each associated :class:`Column`, which are also
associated with the constraint object.
-
+
Note that you cannot define a "composite" foreign key constraint,
that is a constraint between a grouping of multiple parent/child
columns, using ``ForeignKey`` objects. To define this grouping,
the :class:`ForeignKeyConstraint` object must be used, and applied
to the :class:`Table`. The associated ``ForeignKey`` objects
are created automatically.
-
+
The ``ForeignKey`` objects associated with an individual
:class:`Column` object are available in the `foreign_keys` collection
of that column.
-
+
Further examples of foreign key configuration are in
:ref:`metadata_foreignkeys`.
onupdate=None, ondelete=None, deferrable=None,
initially=None, link_to_name=False):
"""
- Construct a column-level FOREIGN KEY.
-
+ Construct a column-level FOREIGN KEY.
+
The :class:`ForeignKey` object when constructed generates a
:class:`ForeignKeyConstraint` which is associated with the parent
:class:`Table` object's collection of constraints.
:param initially: Optional string. If set, emit INITIALLY <value> when
issuing DDL for this constraint.
-
+
:param link_to_name: if True, the string name given in ``column`` is
the rendered name of the referenced column, not its locally
assigned ``key``.
-
+
:param use_alter: passed to the underlying
:class:`ForeignKeyConstraint` to indicate the constraint should be
generated/dropped externally from the CREATE TABLE/ DROP TABLE
statement. See that class's constructor for details.
-
+
"""
self._colspec = column
-
+
# the linked ForeignKeyConstraint.
# ForeignKey will create this when parent Column
# is attached to a Table, *or* ForeignKeyConstraint
# object passes itself in when creating ForeignKey
# markers.
self.constraint = _constraint
-
-
+
+
self.use_alter = use_alter
self.name = name
self.onupdate = onupdate
def copy(self, schema=None):
"""Produce a copy of this :class:`ForeignKey` object.
-
+
The new :class:`ForeignKey` will not be bound
to any :class:`Column`.
-
+
This method is usually used by the internal
copy procedures of :class:`Column`, :class:`Table`,
and :class:`MetaData`.
-
+
:param schema: The returned :class:`ForeignKey` will
reference the original table and column name, qualified
by the given string schema name.
-
+
"""
-
+
return ForeignKey(
self._get_colspec(schema=schema),
use_alter=self.use_alter,
def _get_colspec(self, schema=None):
"""Return a string based 'column specification' for this :class:`ForeignKey`.
-
+
This is usually the equivalent of the string-based "tablename.colname"
argument first passed to the object's constructor.
-
+
"""
if schema:
return schema + "." + self.column.table.name + \
_column = self._colspec.__clause_element__()
else:
_column = self._colspec
-
+
return "%s.%s" % (_column.table.fullname, _column.key)
target_fullname = property(_get_colspec)
def references(self, table):
"""Return True if the given :class:`Table` is referenced by this :class:`ForeignKey`."""
-
+
return table.corresponding_column(self.column) is not None
def get_referent(self, table):
@util.memoized_property
def column(self):
"""Return the target :class:`.Column` referenced by this :class:`.ForeignKey`.
-
+
If this :class:`ForeignKey` was created using a
string-based target column specification, this
attribute will on first access initiate a resolution
to the parent :class:`.Column`, :class:`.Table`, and
:class:`.MetaData` to proceed - if any of these aren't
yet present, an error is raised.
-
+
"""
# ForeignKey inits its remote column as late as possible, so tables
# can be defined without dependencies
"foreign key to target column '%s'" % (self.parent, tname, colname))
table = Table(tname, parenttable.metadata,
mustexist=True, schema=schema)
-
+
_column = None
if colname is None:
# colname is None in the case that ForeignKey argument
self.parent = column
self.parent.foreign_keys.add(self)
self.parent._on_table_attach(self._set_table)
-
+
def _set_table(self, table, column):
# standalone ForeignKey - create ForeignKeyConstraint
# on the hosting Table when attached to the Table.
self.constraint._elements[self.parent] = self
self.constraint._set_parent(table)
table.foreign_keys.add(self)
-
+
class DefaultGenerator(SchemaItem):
"""Base class for column *default* values."""
__visit_name__ = 'default_generator'
is_sequence = False
-
+
def __init__(self, for_update=False):
self.for_update = for_update
This could correspond to a constant, a callable function,
or a SQL clause.
-
+
:class:`.ColumnDefault` is generated automatically
whenever the ``default``, ``onupdate`` arguments of
:class:`.Column` are used. A :class:`.ColumnDefault`
can be passed positionally as well.
-
+
For example, the following::
-
+
Column('foo', Integer, default=50)
-
+
Is equivalent to::
-
+
Column('foo', Integer, ColumnDefault(50))
-
+
"""
def __init__(self, arg, **kwargs):
if util.callable(arg):
arg = self._maybe_wrap_callable(arg)
self.arg = arg
-
+
@util.memoized_property
def is_callable(self):
return util.callable(self.arg)
-
+
@util.memoized_property
def is_clause_element(self):
return isinstance(self.arg, expression.ClauseElement)
-
+
@util.memoized_property
def is_scalar(self):
return not self.is_callable and \
not self.is_clause_element and \
not self.is_sequence
-
+
def _maybe_wrap_callable(self, fn):
"""Backward compat: Wrap callables that don't accept a context."""
return lambda ctx: fn()
positionals = len(argspec[0])
-
+
# Py3K compat - no unbound methods
if inspect.ismethod(inspectable) or inspect.isclass(fn):
positionals -= 1
__visit_name__ = 'sequence'
is_sequence = True
-
+
def __init__(self, name, start=None, increment=None, schema=None,
optional=False, quote=None, metadata=None, for_update=False):
super(Sequence, self).__init__(for_update=for_update)
def _set_parent(self, column):
super(Sequence, self)._set_parent(column)
column._on_table_attach(self._set_table)
-
+
def _set_table(self, table, column):
self.metadata = table.metadata
-
+
@property
def bind(self):
if self.metadata:
return self.metadata.bind
else:
return None
-
+
def create(self, bind=None, checkfirst=True):
"""Creates this sequence in the database."""
class FetchedValue(object):
"""A marker for a transparent database-side default.
-
+
Use :class:`.FetchedValue` when the database is configured
to provide some automatic default for a column.
-
+
E.g.::
-
+
Column('foo', Integer, FetchedValue())
-
+
Would indicate that some trigger or default generator
will create a new value for the ``foo`` column during an
INSERT.
-
+
"""
def __init__(self, for_update=False):
class DefaultClause(FetchedValue):
"""A DDL-specified DEFAULT column value.
-
+
:class:`.DefaultClause` is a :class:`.FetchedValue`
that also generates a "DEFAULT" clause when
"CREATE TABLE" is emitted.
-
+
:class:`.DefaultClause` is generated automatically
whenever the ``server_default``, ``server_onupdate`` arguments of
:class:`.Column` are used. A :class:`.DefaultClause`
can be passed positionally as well.
-
+
For example, the following::
-
+
Column('foo', Integer, server_default="50")
-
+
Is equivalent to::
-
+
Column('foo', Integer, DefaultClause("50"))
-
+
"""
def __init__(self, arg, for_update=False):
class PassiveDefault(DefaultClause):
"""A DDL-specified DEFAULT column value.
-
+
.. deprecated:: 0.6 :class:`.PassiveDefault` is deprecated.
Use :class:`.DefaultClause`.
"""
:param initially:
Optional string. If set, emit INITIALLY <value> when issuing DDL
for this constraint.
-
+
:param _create_rule:
a callable which is passed the DDLCompiler object during
compilation. Returns True or False to signal inline generation of
_create_rule is used by some types to create constraints.
Currently, its call signature is subject to change at any time.
-
+
"""
self.name = name
class ColumnCollectionConstraint(Constraint):
"""A constraint that proxies a ColumnCollection."""
-
+
def __init__(self, *columns, **kw):
"""
:param \*columns:
:param initially:
Optional string. If set, emit INITIALLY <value> when issuing DDL
for this constraint.
-
+
"""
super(ColumnCollectionConstraint, self).__init__(**kw)
self.columns = expression.ColumnCollection()
isinstance(self._pending_colargs[0], Column) and \
self._pending_colargs[0].table is not None:
self._set_parent(self._pending_colargs[0].table)
-
+
def _set_parent(self, table):
super(ColumnCollectionConstraint, self)._set_parent(table)
for col in self._pending_colargs:
:param sqltext:
A string containing the constraint definition, which will be used
verbatim, or a SQL expression construct.
-
+
:param name:
Optional, the in-database name of the constraint.
:param initially:
Optional string. If set, emit INITIALLY <value> when issuing DDL
for this constraint.
-
+
"""
super(CheckConstraint, self).\
self.sqltext = expression._literal_as_text(sqltext)
if table is not None:
self._set_parent(table)
-
+
def __visit_name__(self):
if isinstance(self.parent, Table):
return "check_constraint"
constraint. For a no-frills, single column foreign key, adding a
:class:`ForeignKey` to the definition of a :class:`Column` is a shorthand
equivalent for an unnamed, single column :class:`ForeignKeyConstraint`.
-
+
Examples of foreign key configuration are in :ref:`metadata_foreignkeys`.
-
+
"""
__visit_name__ = 'foreign_key_constraint'
as "after-create" and "before-drop" events on the MetaData object.
This is normally used to generate/drop constraints on objects that
are mutually dependent on each other.
-
+
"""
super(ForeignKeyConstraint, self).\
__init__(name, deferrable, initially)
self.use_alter = use_alter
self._elements = util.OrderedDict()
-
+
# standalone ForeignKeyConstraint - create
# associated ForeignKey objects which will be applied to hosted
# Column objects (in col.foreign_keys), either now or when attached
if table is not None:
self._set_parent(table)
-
+
@property
def columns(self):
return self._elements.keys()
-
+
@property
def elements(self):
return self._elements.values()
-
+
def _set_parent(self, table):
super(ForeignKeyConstraint, self)._set_parent(table)
for col, fk in self._elements.iteritems():
if isinstance(col, basestring):
col = table.c[col]
fk._set_parent(col)
-
+
if self.use_alter:
def supports_alter(ddl, event, schema_item, bind, **kw):
return table in set(kw['tables']) and \
bind.dialect.supports_alter
-
+
AddConstraint(self, on=supports_alter).\
execute_at('after-create', table.metadata)
DropConstraint(self, on=supports_alter).\
execute_at('before-drop', table.metadata)
-
+
def copy(self, **kw):
return ForeignKeyConstraint(
[x.parent.name for x in self._elements.values()],
:param \**kw:
Other keyword arguments may be interpreted by specific dialects.
-
+
"""
self.name = name
@property
def bind(self):
"""Return the connectable associated with this Index."""
-
+
return self.table.bind
def create(self, bind=None):
This property may be assigned an ``Engine`` or ``Connection``, or
assigned a string or URL to automatically create a basic ``Engine``
for this bind with ``create_engine()``.
-
+
"""
return self._bind
def remove(self, table):
"""Remove the given Table object from this MetaData."""
-
+
# TODO: scan all other tables and remove FK _column
del self.tables[table.key]
dependency.
"""
return sqlutil.sort_tables(self.tables.itervalues())
-
+
def reflect(self, bind=None, schema=None, views=False, only=None):
"""Load all available table definitions from the database.
:param schema:
Optional, query and reflect tables from an alternate schema.
-
+
:param views:
If True, also reflect views.
-
+
:param only:
Optional. Load only a sub-set of available named tables. May be
specified as a sequence of names or a callable.
available.update(
bind.dialect.get_view_names(conn or bind, schema)
)
-
+
current = set(self.tables.iterkeys())
if only is None:
:event:
The event currently being handled
-
+
:target:
The ``MetaData`` object being operated upon
-
+
:bind:
The ``Connection`` being used for DDL execution.
:param checkfirst:
Defaults to True, don't issue CREATEs for tables already present
in the target database.
-
+
"""
if bind is None:
bind = _bind_or_error(self)
class DDLElement(expression.Executable, expression.ClauseElement):
"""Base class for DDL expression constructs."""
-
+
_execution_options = expression.Executable.\
_execution_options.union({'autocommit':True})
target = None
on = None
-
+
def execute(self, bind=None, target=None):
"""Execute this DDL immediately.
statement will be executed using the same Connection and transactional
context as the Table create/drop itself. The ``.bind`` property of
this statement is ignored.
-
+
:param event:
One of the events defined in the schema item's ``.ddl_events``;
e.g. 'before-create', 'after-create', 'before-drop' or 'after-drop'
s = self.__class__.__new__(self.__class__)
s.__dict__ = self.__dict__.copy()
return s
-
+
def _compiler(self, dialect, **kw):
"""Return a compiler appropriate for this ClauseElement, given a
Dialect."""
-
+
return dialect.ddl_compiler(dialect, self, **kw)
class DDL(DDLElement):
"""
__visit_name__ = "ddl"
-
+
def __init__(self, statement, on=None, context=None, bind=None):
"""Create a DDL statement.
If a callable, it will be invoked with four positional arguments
as well as optional keyword arguments:
-
+
:ddl:
This DDL element.
-
+
:event:
The name of the event that has triggered this DDL, such as
'after-create' Will be None if the DDL is executed explicitly.
:connection:
The ``Connection`` being used for DDL execution
- :tables:
+ :tables:
Optional keyword argument - a list of Table objects which are to
be created/ dropped within a MetaData.create_all() or drop_all()
method call.
-
+
If the callable returns a true value, the DDL statement will be
executed.
:param bind:
Optional. A :class:`~sqlalchemy.engine.base.Connectable`, used by
default when ``execute()`` is invoked without a bind argument.
-
+
"""
if not isinstance(statement, basestring):
The common theme of _CreateDropBase is a single
``element`` attribute which refers to the element
to be created or dropped.
-
+
"""
-
+
def __init__(self, element, on=None, bind=None):
self.element = element
self._check_ddl_on(on)
def _create_rule_disable(self, compiler):
"""Allow disable of _create_rule using a callable.
-
+
Pass to _create_rule using
util.portable_instancemethod(self._create_rule_disable)
to retain serializability.
-
+
"""
return False
class CreateTable(_CreateDropBase):
"""Represent a CREATE TABLE statement."""
-
+
__visit_name__ = "create_table"
-
+
class DropTable(_CreateDropBase):
"""Represent a DROP TABLE statement."""
class CreateSequence(_CreateDropBase):
"""Represent a CREATE SEQUENCE statement."""
-
+
__visit_name__ = "create_sequence"
class DropSequence(_CreateDropBase):
"""Represent a DROP SEQUENCE statement."""
__visit_name__ = "drop_sequence"
-
+
class CreateIndex(_CreateDropBase):
"""Represent a CREATE INDEX statement."""
-
+
__visit_name__ = "create_index"
class DropIndex(_CreateDropBase):
class AddConstraint(_CreateDropBase):
"""Represent an ALTER TABLE ADD CONSTRAINT statement."""
-
+
__visit_name__ = "add_constraint"
def __init__(self, element, *args, **kw):
super(AddConstraint, self).__init__(element, *args, **kw)
element._create_rule = util.portable_instancemethod(
self._create_rule_disable)
-
+
class DropConstraint(_CreateDropBase):
"""Represent an ALTER TABLE DROP CONSTRAINT statement."""
__visit_name__ = "drop_constraint"
-
+
def __init__(self, element, cascade=False, **kw):
self.cascade = cascade
super(DropConstraint, self).__init__(element, **kw)
bindable = "the %s's .bind" % name
else:
bindable = "this %s's .metadata.bind" % name
-
+
if msg is None:
msg = "The %s is not bound to an Engine or Connection. "\
"Execution can not proceed without a database to execute "\
__visit_name__ = 'label'
__slots__ = 'element', 'name'
-
+
def __init__(self, col, name):
self.element = col
self.name = name
-
+
@property
def type(self):
return self.element.type
-
+
@property
def quote(self):
return self.element.quote
extract_map = EXTRACT_MAP
compound_keywords = COMPOUND_KEYWORDS
-
+
# class-level defaults which can be set at the instance
# level to define if this Compiled instance represents
# INSERT/UPDATE/DELETE
isdelete = isinsert = isupdate = False
-
+
# holds the "returning" collection of columns if
# the statement is CRUD and defines returning columns
# either implicitly or explicitly
returning = None
-
+
# set to True classwide to generate RETURNING
# clauses before the VALUES or WHERE clause (i.e. MSSQL)
returning_precedes_values = False
-
+
# SQL 92 doesn't allow bind parameters to be used
# in the columns clause of a SELECT, nor does it allow
# ambiguous expressions like "? = ?". A compiler
# subclass can set this flag to False if the target
# driver/DB enforces this
ansi_bind_rules = False
-
+
def __init__(self, dialect, statement, column_keys=None, inline=False, **kwargs):
"""Construct a new ``DefaultCompiler`` object.
self.preparer = self.dialect.identifier_preparer
self.label_length = self.dialect.label_length or self.dialect.max_identifier_length
-
+
# a map which tracks "anonymous" identifiers that are
# created on the fly here
self.anon_map = util.PopulateDict(self._process_anon)
@property
def sql_compiler(self):
return self
-
+
def construct_params(self, params=None, _group_number=None):
"""return a dictionary of bind parameter keys and values"""
return self.process(label.element,
within_columns_clause=False,
**kw)
-
+
def visit_column(self, column, result_map=None, **kwargs):
name = column.name
if name is None:
raise exc.CompileError("Cannot compile Column object until "
"it's 'name' is assigned.")
-
+
if not column.is_literal and isinstance(name, sql._generated_label):
name = self._truncated_identifier("colident", name)
if result_map is not None:
result_map[name.lower()] = (name, (column, ), column.type)
-
+
if column.is_literal:
name = self.escape_literal_column(name)
else:
tablename = column.table.name
tablename = isinstance(tablename, sql._generated_label) and \
self._truncated_identifier("alias", tablename) or tablename
-
+
return schema_prefix + \
self.preparer.quote(tablename, column.table.quote) + "." + name
def post_process_text(self, text):
return text
-
+
def visit_textclause(self, textclause, **kwargs):
if textclause.typemap is not None:
for colname, type_ in textclause.typemap.iteritems():
self.stack.append({'from':entry.get('from', None), 'iswrapper':True})
keyword = self.compound_keywords.get(cs.keyword)
-
+
text = (" " + keyword + " ").join(
(self.process(c, asfrom=asfrom, parens=False,
compound_index=i, **kwargs)
for i, c in enumerate(cs.selects))
)
-
+
group_by = self.process(cs._group_by_clause, asfrom=asfrom, **kwargs)
if group_by:
text += " GROUP BY " + group_by
isinstance(binary.left, sql._BindParamClause) and \
isinstance(binary.right, sql._BindParamClause):
kw['literal_binds'] = True
-
+
return self._operator_dispatch(binary.operator,
binary,
lambda opstr: self.process(binary.left, **kw) +
+ (escape and
(' ESCAPE ' + self.render_literal_value(escape, None))
or '')
-
+
def visit_ilike_op(self, binary, **kw):
escape = binary.modifiers.get("escape", None)
return 'lower(%s) LIKE lower(%s)' % (
+ (escape and
(' ESCAPE ' + self.render_literal_value(escape, None))
or '')
-
+
def visit_notilike_op(self, binary, **kw):
escape = binary.modifiers.get("escape", None)
return 'lower(%s) NOT LIKE lower(%s)' % (
+ (escape and
(' ESCAPE ' + self.render_literal_value(escape, None))
or '')
-
+
def _operator_dispatch(self, operator, element, fn, **kw):
if util.callable(operator):
disp = getattr(self, "visit_%s" % operator.__name__, None)
return fn(OPERATORS[operator])
else:
return fn(" " + operator + " ")
-
+
def visit_bindparam(self, bindparam, within_columns_clause=False,
literal_binds=False, **kwargs):
if literal_binds or \
raise exc.CompileError("Bind parameter without a "
"renderable value not allowed here.")
return self.render_literal_bindparam(bindparam, within_columns_clause=True, **kwargs)
-
+
name = self._truncate_bindparam(bindparam)
if name in self.binds:
existing = self.binds[name]
"with insert() or update() (for example, 'b_%s')."
% (bindparam.key, bindparam.key)
)
-
+
self.binds[bindparam.key] = self.binds[name] = bindparam
return self.bindparam_string(name)
-
+
def render_literal_bindparam(self, bindparam, **kw):
value = bindparam.value
processor = bindparam.bind_processor(self.dialect)
if processor:
value = processor(value)
return self.render_literal_value(value, bindparam.type)
-
+
def render_literal_value(self, value, type_):
"""Render the value of a bind parameter as a quoted literal.
-
+
This is used for statement sections that do not accept bind parameters
on the target driver/database.
-
+
This should be implemented by subclasses using the quoting services
of the DBAPI.
-
+
"""
if isinstance(value, basestring):
value = value.replace("'", "''")
return str(value)
else:
raise NotImplementedError("Don't know how to literal-quote value %r" % value)
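# a dialect's compiler would typically extend this using its DBAPI's
# own quoting facilities; hypothetical sketch (not an actual dialect
# in this module):
#
#     class MyCompiler(SQLCompiler):
#         def render_literal_value(self, value, type_):
#             if isinstance(value, bool):
#                 return value and 'TRUE' or 'FALSE'
#             return super(MyCompiler, self).render_literal_value(
#                                                 value, type_)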
-
+
def _truncate_bindparam(self, bindparam):
if bindparam in self.bind_names:
return self.bind_names[bindparam]
truncname = anonname
self.truncated_names[(ident_class, name)] = truncname
return truncname
-
+
def _anonymize(self, name):
return name % self.anon_map
-
+
def _process_anon(self, key):
(ident, derived) = key.split(' ', 1)
anonymous_counter = self.anon_map.get(derived, 1)
elif asfrom:
ret = self.process(alias.original, asfrom=True, **kwargs) + " AS " + \
self.preparer.format_alias(alias, alias_name)
-
+
if fromhints and alias in fromhints:
hinttext = self.get_from_hint_text(alias, fromhints[alias])
if hinttext:
ret += " " + hinttext
-
+
return ret
else:
return self.process(alias.original, **kwargs)
def get_select_hint_text(self, byfroms):
return None
-
+
def get_from_hint_text(self, table, text):
return None
-
+
def visit_select(self, select, asfrom=False, parens=True,
iswrapper=False, fromhints=None,
compound_index=1, **kwargs):
entry = self.stack and self.stack[-1] or {}
-
+
existingfroms = entry.get('from', None)
froms = select._get_display_froms(existingfroms)
]
if c is not None
]
-
+
text = "SELECT " # we're off to a good start !
if select._hints:
hint_text = self.get_select_hint_text(byfrom)
if hint_text:
text += hint_text + " "
-
+
if select._prefixes:
text += " ".join(self.process(x, **kwargs) for x in select._prefixes) + " "
text += self.get_select_precolumns(select)
if froms:
text += " \nFROM "
-
+
if select._hints:
text += ', '.join([self.process(f,
asfrom=True, fromhints=byfrom,
def get_select_precolumns(self, select):
"""Called when building a ``SELECT`` statement, position is just before
column list.
-
+
"""
return select._distinct and "DISTINCT " or ""
preparer = self.preparer
supports_default_values = self.dialect.supports_default_values
-
+
text = "INSERT"
-
+
prefixes = [self.process(x) for x in insert_stmt._prefixes]
if prefixes:
text += " " + " ".join(prefixes)
-
+
text += " INTO " + preparer.format_table(insert_stmt.table)
-
+
if colparams or not supports_default_values:
text += " (%s)" % ', '.join([preparer.format_column(c[0])
for c in colparams])
if self.returning or insert_stmt._returning:
self.returning = self.returning or insert_stmt._returning
returning_clause = self.returning_clause(insert_stmt, self.returning)
-
+
if self.returning_precedes_values:
text += " " + returning_clause
else:
text += " VALUES (%s)" % \
', '.join([c[1] for c in colparams])
-
+
if self.returning and not self.returning_precedes_values:
text += " " + returning_clause
-
+
return text
-
+
def visit_update(self, update_stmt):
self.stack.append({'from': set([update_stmt.table])})
colparams = self._get_colparams(update_stmt)
text = "UPDATE " + self.preparer.format_table(update_stmt.table)
-
+
text += ' SET ' + \
', '.join(
self.preparer.quote(c[0].name, c[0].quote) + '=' + c[1]
self.returning = update_stmt._returning
if self.returning_precedes_values:
text += " " + self.returning_clause(update_stmt, update_stmt._returning)
-
+
if update_stmt._whereclause is not None:
text += " WHERE " + self.process(update_stmt._whereclause)
if self.returning and not self.returning_precedes_values:
text += " " + self.returning_clause(update_stmt, update_stmt._returning)
-
+
self.stack.pop(-1)
return text
"with insert() or update() (for example, 'b_%s')."
% (col.key, col.key)
)
-
+
self.binds[col.key] = bindparam
return self.bindparam_string(self._truncate_bindparam(bindparam))
-
+
def _get_colparams(self, stmt):
"""create a set of tuples representing column/string pairs for use
in an INSERT or UPDATE statement.
]
required = object()
-
+
# if we have statement parameters - set defaults in the
# compiled params
if self.column_keys is None:
# create a list of column assignment clauses as tuples
values = []
-
+
need_pks = self.isinsert and \
not self.inline and \
not stmt._returning
-
+
implicit_returning = need_pks and \
self.dialect.implicit_returning and \
stmt.table.implicit_returning
-
+
postfetch_lastrowid = need_pks and self.dialect.postfetch_lastrowid
-
+
# iterating through columns at the top to maintain ordering.
# otherwise we might iterate through individual sets of
# "defaults", "primary key cols", etc.
self.postfetch.append(c)
value = self.process(value.self_group())
values.append((c, value))
-
+
elif self.isinsert:
if c.primary_key and \
need_pks and \
not postfetch_lastrowid or
c is not stmt.table._autoincrement_column
):
-
+
if implicit_returning:
if c.default is not None:
if c.default.is_sequence:
values.append((c, self._create_crud_bind_param(c, None)))
self.prefetch.append(c)
-
+
elif c.default is not None:
if c.default.is_sequence:
proc = self.process(c.default)
self.postfetch.append(c)
elif c.default.is_clause_element:
values.append((c, self.process(c.default.arg.self_group())))
-
+
if not c.primary_key:
# dont add primary key column to postfetch
self.postfetch.append(c)
elif c.server_default is not None:
if not c.primary_key:
self.postfetch.append(c)
-
+
elif self.isupdate:
if c.onupdate is not None and not c.onupdate.is_sequence:
if c.onupdate.is_clause_element:
self.returning = delete_stmt._returning
if self.returning_precedes_values:
text += " " + self.returning_clause(delete_stmt, delete_stmt._returning)
-
+
if delete_stmt._whereclause is not None:
text += " WHERE " + self.process(delete_stmt._whereclause)
if self.returning and not self.returning_precedes_values:
text += " " + self.returning_clause(delete_stmt, delete_stmt._returning)
-
+
self.stack.pop(-1)
return text
class DDLCompiler(engine.Compiled):
-
+
@util.memoized_property
def sql_compiler(self):
return self.dialect.statement_compiler(self.dialect, self.statement)
-
+
@property
def preparer(self):
return self.dialect.identifier_preparer
def construct_params(self, params=None):
return None
-
+
def visit_ddl(self, ddl, **kwargs):
# table events can substitute table and schema name
context = ddl.context
context.setdefault('table', table)
context.setdefault('schema', sch)
context.setdefault('fullname', preparer.format_table(ddl.target))
-
+
return ddl.statement % context
def visit_create_table(self, create):
return text
def create_table_constraints(self, table):
-
+
# On some DB order is significant: visit PK first, then the
# other constraints (engine.ReflectionTest.testbasic failed on FB2)
constraints = []
if table.primary_key:
constraints.append(table.primary_key)
-
+
constraints.extend([c for c in table.constraints if c is not table.primary_key])
-
+
return ", \n\t".join(p for p in
(self.process(constraint) for constraint in constraints
if (
not getattr(constraint, 'use_alter', False)
)) if p is not None
)
-
+
def visit_drop_table(self, drop):
return "\nDROP TABLE " + self.preparer.format_table(drop.element)
preparer = self.preparer
text = "CREATE "
if index.unique:
- text += "UNIQUE "
+ text += "UNIQUE "
text += "INDEX %s ON %s (%s)" \
% (preparer.quote(self._index_identifier(index.name),
index.quote),
if create.element.start is not None:
text += " START WITH %d" % create.element.start
return text
-
+
def visit_drop_sequence(self, drop):
return "DROP SEQUENCE %s" % self.preparer.format_sequence(drop.element)
self.preparer.format_constraint(drop.element),
drop.cascade and " CASCADE" or ""
)
-
+
def get_column_specification(self, column, **kwargs):
colspec = self.preparer.format_column(column) + " " + \
self.dialect.type_compiler.process(column.type)
def define_constraint_remote_table(self, constraint, table, preparer):
"""Format the remote table clause of a CREATE CONSTRAINT clause."""
-
+
return preparer.format_table(table)
def visit_unique_constraint(self, constraint):
if constraint.onupdate is not None:
text += " ON UPDATE %s" % constraint.onupdate
return text
-
+
def define_constraint_deferrability(self, constraint):
text = ""
if constraint.deferrable is not None:
if constraint.initially is not None:
text += " INITIALLY %s" % constraint.initially
return text
-
-
+
+
class GenericTypeCompiler(engine.TypeCompiler):
def visit_CHAR(self, type_):
return "CHAR" + (type_.length and "(%d)" % type_.length or "")
def visit_NCHAR(self, type_):
return "NCHAR" + (type_.length and "(%d)" % type_.length or "")
-
+
def visit_FLOAT(self, type_):
return "FLOAT"
def visit_DECIMAL(self, type_):
return "DECIMAL"
-
+
def visit_INTEGER(self, type_):
return "INTEGER"
def visit_VARBINARY(self, type_):
return "VARBINARY" + (type_.length and "(%d)" % type_.length or "")
-
+
def visit_BOOLEAN(self, type_):
return "BOOLEAN"
-
+
def visit_TEXT(self, type_):
return "TEXT"
-
+
def visit_large_binary(self, type_):
return self.visit_BLOB(type_)
-
+
def visit_boolean(self, type_):
return self.visit_BOOLEAN(type_)
-
+
def visit_time(self, type_):
return self.visit_TIME(type_)
-
+
def visit_datetime(self, type_):
return self.visit_DATETIME(type_)
-
+
def visit_date(self, type_):
return self.visit_DATE(type_)
def visit_big_integer(self, type_):
return self.visit_BIGINT(type_)
-
+
def visit_small_integer(self, type_):
return self.visit_SMALLINT(type_)
-
+
def visit_integer(self, type_):
return self.visit_INTEGER(type_)
-
+
def visit_float(self, type_):
return self.visit_FLOAT(type_)
-
+
def visit_numeric(self, type_):
return self.visit_NUMERIC(type_)
-
+
def visit_string(self, type_):
return self.visit_VARCHAR(type_)
-
+
def visit_unicode(self, type_):
return self.visit_VARCHAR(type_)
def visit_unicode_text(self, type_):
return self.visit_TEXT(type_)
-
+
def visit_enum(self, type_):
return self.visit_VARCHAR(type_)
-
+
def visit_null(self, type_):
raise NotImplementedError("Can't generate DDL for the null type")
-
+
def visit_type_decorator(self, type_):
return self.process(type_.type_engine(self.dialect))
-
+
def visit_user_defined(self, type_):
return type_.get_col_spec()
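# --- illustrative sketch, not part of the original source ---
# visit_user_defined() above delegates entirely to the type's get_col_spec(),
# so a user-defined type only needs to supply that method; the type below is
# hypothetical.
from sqlalchemy import types

class IntervalType(types.UserDefinedType):
    def get_col_spec(self):
        # DDL emitted for columns of this type
        return "INTERVAL"
# --- end sketch ---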
-
+
class IdentifierPreparer(object):
"""Handle quoting and case-folding of identifiers based on options."""
self.escape_to_quote = self.escape_quote * 2
self.omit_schema = omit_schema
self._strings = {}
-
+
def _escape_identifier(self, value):
"""Escape an identifier.
def format_constraint(self, constraint):
return self.quote(constraint.name, constraint.quote)
-
+
def format_table(self, table, use_schema=True, name=None):
"""Prepare a quoted table and schema name."""
'final': final,
'escaped': escaped_final })
return r
-
+
def unformat_identifiers(self, identifiers):
"""Unpack 'schema.table.column'-like strings into components."""
do not support bind parameters in the ``then`` clause. The type
can be specified which determines the type of the :func:`case()` construct
overall::
-
+
case([(orderline.c.qty > 100,
literal_column("'greaterthan100'", String)),
(orderline.c.qty > 10, literal_column("'greaterthan10'",
        ], else_=literal_column("'lessthan10'", String))
"""
-
+
return _Case(whens, value=value, else_=else_)
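# --- illustrative sketch, not part of the original source ---
# minimal list-of-tuples usage, assuming a hypothetical "users" table; the
# whens are evaluated in order and else_ supplies the fallback value
expr = case([
    (users.c.name == 'wendy', 'W'),
    (users.c.name == 'jack', 'J'),
], else_='E')
# --- end sketch ---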
def cast(clause, totype, **kwargs):
return _BindParamClause(None, value, type_=type_, unique=True)
def tuple_(*expr):
- """Return a SQL tuple.
-
+ """Return a SQL tuple.
+
Main usage is to produce a composite IN construct::
-
+
tuple_(table.c.col1, table.c.col2).in_(
[(1, 2), (5, 12), (10, 19)]
)
-
+
"""
return _Tuple(*expr)
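# --- illustrative sketch, not part of the original source ---
# with a hypothetical table "t" having columns col1/col2:
criterion = tuple_(t.c.col1, t.c.col2).in_([(1, 2), (5, 12)])
# renders roughly: (col1, col2) IN ((:param_1, :param_2), (:param_3, :param_4));
# the backend in use must support composite IN expressions for this to execute
# --- end sketch ---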
def type_coerce(expr, type_):
"""Coerce the given expression into the given type, on the Python side only.
-
+
    :func:`.type_coerce` is roughly similar to :func:`.cast`, except no
"CAST" expression is rendered - the given type is only applied towards
expression typing and against received result values.
-
+
e.g.::
-
+
from sqlalchemy.types import TypeDecorator
import uuid
-
+
class AsGuid(TypeDecorator):
impl = String
return str(value)
else:
return None
-
+
def process_result_value(self, value, dialect):
if value is not None:
return uuid.UUID(value)
else:
return None
-
+
conn.execute(
select([type_coerce(mytable.c.ident, AsGuid)]).\\
where(
type_coerce(mytable.c.ident, AsGuid) ==
uuid.uuid3(uuid.NAMESPACE_URL, 'bar')
)
- )
-
+ )
+
"""
if hasattr(expr, '__clause_expr__'):
return type_coerce(expr.__clause_expr__())
-
+
elif not isinstance(expr, Visitable):
if expr is None:
return null()
return literal(expr, type_=type_)
else:
return _Label(None, expr, type_=type_)
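# --- illustrative sketch, not part of the original source ---
# contrast with cast(): type_coerce() affects Python-side typing only, while
# cast() additionally emits a SQL CAST; "mytable" and "AsGuid" are the
# hypothetical names from the docstring above
from sqlalchemy import cast, String

cast(mytable.c.ident, String)          # renders CAST(mytable.ident AS VARCHAR)
type_coerce(mytable.c.ident, AsGuid)   # renders mytable.ident unchanged
# --- end sketch ---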
-
-
+
+
def label(name, obj):
"""Return a :class:`_Label` object for the
given :class:`ColumnElement`.
required
A value is required at execution time.
-
+
"""
if isinstance(key, ColumnClause):
return _BindParamClause(key.name, value, type_=key.type,
def text(text, bind=None, *args, **kwargs):
"""Create a SQL construct that is represented by a literal string.
-
+
E.g.::
-
+
t = text("SELECT * FROM users")
result = connection.execute(t)
-
+
The advantages :func:`text` provides over a plain string are
backend-neutral support for bind parameters, per-statement
execution options, as well as
bind parameter and result-column typing behavior, allowing
SQLAlchemy type constructs to play a role when executing
a statement that is specified literally.
-
+
Bind parameters are specified by name, using the format ``:name``.
E.g.::
-
+
t = text("SELECT * FROM users WHERE id=:user_id")
result = connection.execute(t, user_id=12)
-
+
To invoke SQLAlchemy typing logic for bind parameters, the
``bindparams`` list allows specification of :func:`bindparam`
constructs which specify the type for a given name::
-
+
t = text("SELECT id FROM users WHERE updated_at>:updated",
bindparams=[bindparam('updated', DateTime())]
)
-
- Typing during result row processing is also an important concern.
+
+ Typing during result row processing is also an important concern.
Result column types
are specified using the ``typemap`` dictionary, where the keys
match the names of columns. These names are taken from what
the DBAPI returns as ``cursor.description``::
-
+
t = text("SELECT id, name FROM users",
typemap={
'id':Integer,
'name':Unicode
}
)
-
+
The :func:`text` construct is used internally for most cases when
a literal string is specified for part of a larger query, such as
within :func:`select()`, :func:`update()`,
:func:`insert()` or :func:`delete()`. In those cases, the same
bind parameter syntax is applied::
-
+
s = select([users.c.id, users.c.name]).where("id=:user_id")
result = connection.execute(s, user_id=12)
-
+
Using :func:`text` explicitly usually implies the construction
of a full, standalone statement. As such, SQLAlchemy refers
to it as an :class:`Executable` object, and it supports
the :meth:`Executable.execution_options` method. For example,
a :func:`text` construct that should be subject to "autocommit"
can be set explicitly so using the ``autocommit`` option::
-
+
t = text("EXEC my_procedural_thing()").\\
execution_options(autocommit=True)
-
+
Note that SQLAlchemy's usual "autocommit" behavior applies to
:func:`text` constructs - that is, statements which begin
with a phrase such as ``INSERT``, ``UPDATE``, ``DELETE``,
def null():
"""Return a :class:`_Null` object, which compiles to ``NULL`` in a sql
statement.
-
+
"""
return _Null()
return x
else:
return x.replace('%', '%%')
-
+
def _clone(element):
return element._clone()
def _expand_cloned(elements):
"""expand the given set of ClauseElements to be the set of all 'cloned'
predecessors.
-
+
"""
return itertools.chain(*[x._cloned_set for x in elements])
def _select_iterables(elements):
"""expand tables into individual columns in the
given list of column expressions.
-
+
"""
return itertools.chain(*[c._select_iterable for c in elements])
-
+
def _cloned_intersection(a, b):
"""return the intersection of sets a and b, counting
any overlap between 'cloned' predecessors.
if hasattr(element, '__clause_element__'):
element = element.__clause_element__()
return element.key
-
+
def _literal_as_text(element):
if hasattr(element, '__clause_element__'):
return element.__clause_element__()
return element.__clause_element__()
else:
return element
-
+
def _literal_as_column(element):
if hasattr(element, '__clause_element__'):
return element.__clause_element__()
raise exc.ArgumentError("Column-based expression object expected for argument '%s'; "
"got: '%s', type %s" % (name, element, type(element)))
return element
-
+
def _corresponding_column_or_error(fromclause, column,
require_embedded=False):
c = fromclause.corresponding_column(column,
def is_column(col):
"""True if ``col`` is an instance of :class:`ColumnElement`."""
-
+
return isinstance(col, ColumnElement)
class ClauseElement(Visitable):
"""Base class for elements of a programmatically constructed SQL
expression.
-
+
"""
__visit_name__ = 'clause'
supports_execution = False
_from_objects = []
_bind = None
-
+
def _clone(self):
"""Create a shallow copy of this ClauseElement.
@property
def _constructor(self):
"""return the 'constructor' for this ClauseElement.
-
+
This is for the purposes for creating a new object of
- this type. Usually, its just the element's __class__.
+            this type. Usually, it's just the element's __class__.
However, the "Annotated" version of the object overrides
to return the class of its proxied element.
d = self.__dict__.copy()
d.pop('_is_clone_of', None)
return d
-
+
if util.jython:
def __hash__(self):
"""Return a distinct hash code.
unique values on platforms with moving GCs.
"""
return id(self)
-
+
def _annotate(self, values):
"""return a copy of this ClauseElement with the given annotations
dictionary.
-
+
"""
return sqlutil.Annotated(self, values)
def _deannotate(self):
"""return a copy of this ClauseElement with an empty annotations
dictionary.
-
+
"""
return self._clone()
Subclasses should override the default behavior, which is a
straight identity comparison.
-
+
\**kw are arguments consumed by subclass compare() methods and
may be used to modify the criteria for comparison.
(see :class:`ColumnElement`)
def self_group(self, against=None):
"""Apply a 'grouping' to this :class:`.ClauseElement`.
-
+
This method is overridden by subclasses to return a
"grouping" construct, i.e. parenthesis. In particular
it's used by "binary" expressions to provide a grouping
subqueries should be normally created using the
:func:`.Select.alias` method, as many platforms require
nested SELECT statements to be named).
-
+
As expressions are composed together, the application of
:meth:`self_group` is automatic - end-user code should never
need to use this method directly. Note that SQLAlchemy's
so parenthesis might not be needed, for example, in
an expression like ``x OR (y AND z)`` - AND takes precedence
over OR.
-
+
The base :meth:`self_group` method of :class:`.ClauseElement`
just returns self.
"""
def bind(self):
"""Returns the Engine or Connection to which this ClauseElement is
bound, or None if none found.
-
+
"""
if self._bind is not None:
return self._bind
return engine
else:
return None
-
+
@util.pending_deprecation('0.7',
'Only SQL expressions which subclass '
':class:`.Executable` may provide the '
':func:`.execute` method.')
def execute(self, *multiparams, **params):
"""Compile and execute this :class:`ClauseElement`.
-
+
"""
e = self.bind
if e is None:
def scalar(self, *multiparams, **params):
"""Compile and execute this :class:`ClauseElement`, returning
the result's scalar representation.
-
+
"""
return self.execute(*multiparams, **params).scalar()
associated with a primary key `Column`.
"""
-
+
if not dialect:
if bind:
dialect = bind.dialect
compiler = self._compiler(dialect, bind=bind, **kw)
compiler.compile()
return compiler
-
+
def _compiler(self, dialect, **kw):
"""Return a compiler appropriate for this ClauseElement, given a
Dialect."""
-
+
return dialect.statement_compiler(dialect, self, **kw)
-
+
def __str__(self):
# Py3K
#return unicode(self.compile())
return self.operate(operators.le, other)
__hash__ = Operators.__hash__
-
+
def __eq__(self, other):
return self.operate(operators.eq, other)
def __operate(self, op, obj, reverse=False):
obj = self._check_literal(op, obj)
-
+
if reverse:
left, right = obj, self
else:
left, right = self, obj
-
+
if left.type is None:
op, result_type = sqltypes.NULLTYPE._adapt_expression(op,
right.type)
op, result_type = left.type._adapt_expression(op,
right.type)
return _BinaryExpression(left, right, op, type_=result_type)
-
+
# a mapping of operators with the method they use, along with their negated
# operator for comparison operators
def in_(self, other):
"""Compare this element to the given element or collection using IN."""
-
+
return self._in_impl(operators.in_op, operators.notin_op, other)
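# --- illustrative sketch, not part of the original source ---
# both forms dispatch through _in_impl() below; "users" and "addresses" are
# hypothetical tables
from sqlalchemy import select

users.c.id.in_([1, 2, 3])                      # expands to bound parameters
users.c.id.in_(select([addresses.c.user_id]))  # renders IN (SELECT ...)
# --- end sketch ---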
def _in_impl(self, op, negate_op, seq_or_selectable):
seq_or_selectable = _clause_element_as_expr(seq_or_selectable)
-
+
if isinstance(seq_or_selectable, _ScalarSelect):
return self.__compare(op, seq_or_selectable,
negate=negate_op)
elif isinstance(seq_or_selectable, (Selectable, _TextClause)):
return self.__compare(op, seq_or_selectable,
negate=negate_op)
-
-
+
+
# Handle non selectable arguments as sequences
args = []
def __neg__(self):
return _UnaryExpression(self, operator=operators.neg)
-
+
def startswith(self, other, escape=None):
"""Produce the clause ``LIKE '<other>%'``"""
def label(self, name):
"""Produce a column label, i.e. ``<columnname> AS <name>``.
-
+
This is a shortcut to the :func:`~.expression.label` function.
        If 'name' is None, an anonymous label name will be generated.
somecolumn.op('&')(0xff)
is a bitwise AND of the value in somecolumn.
-
+
"""
return lambda other: self.__operate(operator, other)
foreign_keys = []
quote = None
_label = None
-
+
@property
def _select_iterable(self):
return (self, )
key = str(self)
else:
key = name
-
+
co = ColumnClause(name, selectable, type_=getattr(self,
'type', None))
co.proxies = [self]
def compare(self, other, use_proxies=False, equivalents=None, **kw):
"""Compare this ColumnElement to another.
-
+
Special arguments understood:
-
+
:param use_proxies: when True, consider two columns that
share a common base column as equivalent (i.e. shares_lineage())
-
+
:param equivalents: a dictionary of columns as keys mapped to sets
of columns. If the given "other" column is present in this
        dictionary, if any of the columns in the corresponding set() pass the
self.add(c)
__hash__ = None
-
+
def __eq__(self, other):
l = []
for c in other:
class FromClause(Selectable):
"""Represent an element that can be used within the ``FROM``
clause of a ``SELECT`` statement.
-
+
"""
__visit_name__ = 'fromclause'
named_with_column = False
def alias(self, name=None):
"""return an alias of this :class:`FromClause`.
-
+
For table objects, this has the effect of the table being rendered
- as ``tablename AS aliasname`` in a SELECT statement.
+ as ``tablename AS aliasname`` in a SELECT statement.
For select objects, the effect is that of creating a named
subquery, i.e. ``(select ...) AS aliasname``.
The :func:`alias()` method is the general way to create
a "subquery" out of an existing SELECT.
-
+
The ``name`` parameter is optional, and if left blank an
"anonymous" name will be generated at compile time, guaranteed
to be unique against other anonymous constructs used in the
same statement.
-
+
"""
return Alias(self, name)
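# --- illustrative sketch, not part of the original source ---
# minimal usage, assuming a hypothetical "users" Table
u1 = users.alias('u1')           # renders as "users AS u1" in the FROM clause
subq = select([users]).alias()   # anonymous subquery, e.g. "(SELECT ...) AS anon_1"
# --- end sketch ---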
def replace_selectable(self, old, alias):
"""replace all occurences of FromClause 'old' with the given Alias
object, returning a copy of this :class:`FromClause`.
-
+
"""
return sqlutil.ClauseAdapter(alias).traverse(self)
def correspond_on_equivalents(self, column, equivalents):
"""Return corresponding_column for the given column, or if None
search for a match in the given dictionary.
-
+
"""
col = self.corresponding_column(column, require_embedded=True)
if col is None and col in equivalents:
which corresponds to that original
        :class:`~sqlalchemy.schema.Column` via a common ancestor
column.
-
+
:param column: the target :class:`ColumnElement` to be matched
-
+
:param require_embedded: only return corresponding columns for
the given :class:`ColumnElement`, if the given
:class:`ColumnElement` is actually present within a sub-element
of this :class:`FromClause`. Normally the column will match if
        it merely shares a common ancestor with one of the exported
columns of this :class:`FromClause`.
-
+
"""
# dont dig around if the column is locally present
modified if another :class:`_BindParamClause` of the same name
already has been located within the containing
:class:`ClauseElement`.
-
+
:param required:
a value is required at execution time.
-
+
:param isoutparam:
if True, the parameter should be treated like a stored procedure
"OUT" parameter.
self.type = type_()
else:
self.type = type_
-
+
def _clone(self):
c = ClauseElement._clone(self)
if self.unique:
class _Generative(object):
"""Allow a ClauseElement to generate itself via the
@_generative decorator.
-
+
"""
-
+
def _generate(self):
s = self.__class__.__new__(self.__class__)
s.__dict__ = self.__dict__.copy()
class Executable(_Generative):
"""Mark a ClauseElement as supporting execution.
-
+
:class:`Executable` is a superclass for all "statement" types
of objects, including :func:`select`, :func:`delete`, :func:`update`,
:func:`insert`, :func:`text`.
-
+
"""
supports_execution = True
def execution_options(self, **kw):
""" Set non-SQL options for the statement which take effect during
execution.
-
+
Current options include:
-
+
* autocommit - when True, a COMMIT will be invoked after execution
when executed in 'autocommit' mode, i.e. when an explicit
transaction is not begun on the connection. Note that DBAPI
constructs do not. Use this option when invoking a SELECT or other
specific SQL construct where COMMIT is desired (typically when
calling stored procedures and such).
-
+
* stream_results - indicate to the dialect that results should be
"streamed" and not pre-buffered, if possible. This is a limitation
of many DBAPIs. The flag is currently understood only by the
as well as the "batch" mode for an INSERT or UPDATE statement.
The format of this dictionary is not guaranteed to stay the
same in future releases.
-
+
This option is usually more appropriate
to use via the
:meth:`sqlalchemy.engine.base.Connection.execution_options()`
method of :class:`Connection`, rather than upon individual
statement objects, though the effect is the same.
-
+
See also:
-
+
:meth:`sqlalchemy.engine.base.Connection.execution_options()`
:meth:`sqlalchemy.orm.query.Query.execution_options()`
-
+
"""
self._execution_options = self._execution_options.union(kw)
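# --- illustrative sketch, not part of the original source ---
# per-statement options as described above; "users" and "conn" are assumed to
# be an existing Table and Connection
stmt = select([users]).execution_options(stream_results=True)
result = conn.execute(stmt)   # dialects that support it will avoid pre-buffering
# --- end sketch ---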
def scalar(self, *multiparams, **params):
"""Compile and execute this :class:`.Executable`, returning the
result's scalar representation.
-
+
"""
return self.execute(*multiparams, **params).scalar()
# legacy, some outside users may be calling this
_Executable = Executable
-
+
class _TextClause(Executable, ClauseElement):
"""Represent a literal SQL text fragment.
_execution_options = \
Executable._execution_options.union({'autocommit'
: PARSE_AUTOCOMMIT})
-
+
@property
def _select_iterable(self):
return (self,)
if bindparams is not None:
for b in bindparams:
self.bindparams[b.key] = b
-
+
@property
def type(self):
if self.typemap is not None and len(self.typemap) == 1:
self.clauses = [
_literal_as_text(clause)
for clause in clauses if clause is not None]
-
+
@util.memoized_property
def type(self):
if self.clauses:
return self.clauses[0].type
else:
return sqltypes.NULLTYPE
-
+
def __iter__(self):
return iter(self.clauses)
return (self, )
class _Tuple(ClauseList, ColumnElement):
-
+
def __init__(self, *clauses, **kw):
clauses = [_literal_as_binds(c) for c in clauses]
super(_Tuple, self).__init__(*clauses, **kw)
_compared_to_type=self.type, unique=True)
for o in obj
]).self_group()
-
+
class _Case(ColumnElement):
__visit_name__ = 'case'
class FunctionElement(Executable, ColumnElement, FromClause):
"""Base for SQL function-oriented constructs."""
-
+
def __init__(self, *clauses, **kwargs):
args = [_literal_as_binds(c, self.name) for c in clauses]
self.clause_expr = ClauseList(
return _BindParamClause(None, obj, _compared_to_operator=operator,
_compared_to_type=self.type, unique=True)
-
+
class Function(FunctionElement):
"""Describe a named SQL function."""
self.name = name
self._bind = kw.get('bind', None)
self.type = sqltypes.to_instance(kw.get('type_', None))
-
+
FunctionElement.__init__(self, *clauses, **kw)
def _bind_param(self, operator, obj):
self.modifiers = {}
else:
self.modifiers = modifiers
-
+
def __nonzero__(self):
try:
return self.operator(hash(self.left), hash(self.right))
except:
raise TypeError("Boolean value of this clause is not defined")
-
+
@property
def _from_objects(self):
return self.left._from_objects + self.right._from_objects
def select_from(self, clause):
"""return a new exists() construct with the given expression set as
its FROM clause.
-
+
"""
e = self._clone()
e.element = self.element.select_from(clause).self_group()
def where(self, clause):
"""return a new exists() construct with the given expression added to
its WHERE clause, joined to the existing clause via AND, if any.
-
+
"""
e = self._clone()
e.element = self.element.where(clause).self_group()
except AttributeError:
raise AttributeError("Element %s does not support "
"'as_scalar()'" % self.element)
-
+
def is_derived_from(self, fromclause):
if fromclause in self._cloned_set:
return True
self._type = type_
self.quote = element.quote
self.proxies = [element]
-
+
@util.memoized_property
def type(self):
return sqltypes.to_instance(
@util.memoized_property
def element(self):
return self._element.self_group(against=operators.as_)
-
+
def self_group(self, against=None):
sub_element = self._element.self_group(against=against)
if sub_element is not self._element:
type_=self._type)
else:
return self._element
-
+
@property
def primary_key(self):
return self.element.primary_key
e = self.element._make_proxy(selectable, name=self.name)
else:
e = column(self.name)._make_proxy(selectable=selectable)
-
+
e.proxies.append(self)
return e
__visit_name__ = 'table'
named_with_column = True
-
+
def __init__(self, name, *columns):
super(TableClause, self).__init__()
self.name = self.fullname = name
def count(self, whereclause=None, **params):
"""return a SELECT COUNT generated against this
:class:`TableClause`."""
-
+
if self.primary_key:
col = list(self.primary_key)[0]
else:
def order_by(self, *clauses):
"""return a new selectable with the given list of ORDER BY
criterion applied.
-
+
The criterion will be appended to any pre-existing ORDER BY
criterion.
-
+
"""
self.append_order_by(*clauses)
def group_by(self, *clauses):
"""return a new selectable with the given list of GROUP BY
criterion applied.
-
+
The criterion will be appended to any pre-existing GROUP BY
criterion.
-
+
"""
self.append_group_by(*clauses)
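# --- illustrative sketch, not part of the original source ---
# generative use of the methods above; "orders" is a hypothetical Table
from sqlalchemy import select, func

stmt = select([orders.c.customer_id, func.sum(orders.c.amount)]).\
        group_by(orders.c.customer_id).\
        order_by(orders.c.customer_id)
# --- end sketch ---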
EXCEPT_ALL = util.symbol('EXCEPT ALL')
INTERSECT = util.symbol('INTERSECT')
INTERSECT_ALL = util.symbol('INTERSECT ALL')
-
+
def __init__(self, keyword, *selects, **kwargs):
self._should_correlate = kwargs.pop('correlate', False)
self.keyword = keyword
# some DBs do not like ORDER BY in the inner queries of a UNION, etc.
for n, s in enumerate(selects):
s = _clause_element_as_expr(s)
-
+
if not numcols:
numcols = len(s.c)
elif len(s.c) != numcols:
self.selects.append(s.self_group(self))
_SelectBaseMixin.__init__(self, **kwargs)
-
+
def _scalar_type(self):
return self.selects[0]._scalar_type()
-
+
def self_group(self, against=None):
return _FromGrouping(self)
proxy.proxies = [c._annotate({'weight': i + 1}) for (i,
c) in enumerate(cols)]
-
+
def _copy_internals(self, clone=_clone):
self._reset_exported()
self.selects = [clone(s) for s in self.selects]
"""
__visit_name__ = 'select'
-
+
_prefixes = ()
_hints = util.frozendict()
-
+
def __init__(self,
columns,
whereclause=None,
self._correlate = set()
self._froms = util.OrderedSet()
-
+
try:
cols_present = bool(columns)
except TypeError:
raise exc.ArgumentError("columns argument to select() must "
"be a Python list or other iterable")
-
+
if cols_present:
self._raw_columns = []
for c in columns:
"""Return the displayed list of FromClause elements."""
return self._get_display_froms()
-
+
@_generative
def with_hint(self, selectable, text, dialect_name='*'):
"""Add an indexing hint for the given selectable to this
:class:`Select`.
-
+
The text of the hint is rendered in the appropriate
location for the database backend in use, relative
to the given :class:`.Table` or :class:`.Alias` passed as the
with the token ``%(name)s`` to render the name of
the table or alias. E.g. when using Oracle, the
following::
-
+
select([mytable]).\\
with_hint(mytable, "+ index(%(name)s ix_mytable)")
-
+
Would render SQL as::
-
+
select /*+ index(mytable ix_mytable) */ ... from mytable
-
+
The ``dialect_name`` option will limit the rendering of a particular
        hint to a particular backend. For example, to add hints for both Oracle
and Sybase simultaneously::
-
+
select([mytable]).\\
with_hint(mytable, "+ index(%(name)s ix_mytable)", 'oracle').\\
with_hint(mytable, "WITH INDEX ix_mytable", 'sybase')
-
+
"""
self._hints = self._hints.union({(selectable, dialect_name):text})
-
+
@property
def type(self):
raise exc.InvalidRequestError("Select objects don't have a type. "
@util.memoized_instancemethod
def locate_all_froms(self):
"""return a Set of all FromClause elements referenced by this Select.
-
+
This set is a superset of that returned by the ``froms`` property,
which is specifically for those FromClause elements that would
actually be rendered.
def column(self, column):
"""return a new select() construct with the given column expression
added to its columns clause.
-
+
"""
column = _literal_as_column(column)
def with_only_columns(self, columns):
"""return a new select() construct with its columns clause replaced
with the given columns.
-
+
"""
self._raw_columns = [
def where(self, whereclause):
"""return a new select() construct with the given expression added to
its WHERE clause, joined to the existing clause via AND, if any.
-
+
"""
self.append_whereclause(whereclause)
def having(self, having):
"""return a new select() construct with the given expression added to
its HAVING clause, joined to the existing clause via AND, if any.
-
+
"""
self.append_having(having)
def distinct(self):
"""return a new select() construct which will apply DISTINCT to its
columns clause.
-
+
"""
self._distinct = True
def correlate(self, *fromclauses):
"""return a new select() construct which will correlate the given FROM
clauses to that of an enclosing select(), if a match is found.
-
+
By "match", the given fromclause must be present in this select's
list of FROM objects and also present in an enclosing select's list of
FROM objects.
-
+
Calling this method turns off the select's default behavior of
"auto-correlation". Normally, select() auto-correlates all of its FROM
clauses to those of an embedded select when compiled.
-
+
If the fromclause is None, correlation is disabled for the returned
select().
def append_column(self, column):
"""append the given column expression to the columns clause of this
select() construct.
-
+
"""
column = _literal_as_column(column)
def append_prefix(self, clause):
"""append the given columns clause prefix expression to this select()
construct.
-
+
"""
clause = _literal_as_text(clause)
self._prefixes = self._prefixes + (clause,)
def self_group(self, against=None):
"""return a 'grouping' construct as per the ClauseElement
specification.
-
+
This produces an element that can be embedded in an expression. Note
that this method is called automatically as needed when constructing
expressions.
def union_all(self, other, **kwargs):
"""return a SQL UNION ALL of this select() construct against the given
selectable.
-
+
"""
return union_all(self, other, **kwargs)
def except_all(self, other, **kwargs):
"""return a SQL EXCEPT ALL of this select() construct against the
given selectable.
-
+
"""
return except_all(self, other, **kwargs)
def intersect(self, other, **kwargs):
"""return a SQL INTERSECT of this select() construct against the given
selectable.
-
+
"""
return intersect(self, other, **kwargs)
def intersect_all(self, other, **kwargs):
"""return a SQL INTERSECT ALL of this select() construct against the
given selectable.
-
+
"""
return intersect_all(self, other, **kwargs)
_execution_options = \
Executable._execution_options.union({'autocommit': True})
kwargs = util.frozendict()
-
+
def _process_colparams(self, parameters):
if isinstance(parameters, (list, tuple)):
pp = {}
"use statement.returning(col1, col2, ...)" % k
)
return kwargs
-
+
@_generative
def returning(self, *cols):
"""Add a RETURNING or equivalent clause to this statement.
-
+
The given list of columns represent columns within the table that is
the target of the INSERT, UPDATE, or DELETE. Each element can be any
column expression. :class:`~sqlalchemy.schema.Table` objects will be
expanded into their individual columns.
-
+
Upon compilation, a RETURNING clause, or database equivalent,
will be rendered within the statement. For INSERT and UPDATE,
the values are the newly inserted/updated values. For DELETE,
the values are those of the rows which were deleted.
-
+
Upon execution, the values of the columns to be returned
are made available via the result set and can be iterated
using ``fetchone()`` and similar. For DBAPIs which do not
SQLAlchemy will approximate this behavior at the result level
so that a reasonable amount of behavioral neutrality is
provided.
-
+
Note that not all databases/DBAPIs
support RETURNING. For those backends with no support,
an exception is raised upon compilation and/or execution.
and other statements which return multiple rows. Please
read the documentation notes for the database in use in
order to determine the availability of RETURNING.
-
+
"""
self._returning = cols
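# --- illustrative sketch, not part of the original source ---
# assuming a hypothetical "users" Table and a backend with native RETURNING
result = conn.execute(
    users.insert().values(name='some name').returning(users.c.id)
)
new_id = result.fetchone()['id']
# --- end sketch ---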
-
+
class _ValuesBase(_UpdateBase):
__visit_name__ = 'values_base'
"""
__visit_name__ = 'insert'
-
+
_prefixes = ()
-
+
def __init__(self,
table,
values=None,
self._returning = returning
if prefixes:
self._prefixes = tuple([_literal_as_text(p) for p in prefixes])
-
+
if kwargs:
self.kwargs = self._process_deprecated_kw(kwargs)
def where(self, whereclause):
"""return a new update() construct with the given expression added to
its WHERE clause, joined to the existing clause via AND, if any.
-
+
"""
if self._whereclause is not None:
self._whereclause = and_(self._whereclause,
self._bind = bind
self.table = table
self._returning = returning
-
+
if whereclause is not None:
self._whereclause = _literal_as_text(whereclause)
else:
class ReturnTypeFromArgs(GenericFunction):
"""Define a function whose return type is the same as its arguments."""
-
+
def __init__(self, *args, **kwargs):
kwargs.setdefault('type_', _type_from_args(args))
GenericFunction.__init__(self, args=args, **kwargs)
from operator import (
and_, or_, inv, add, mul, sub, mod, truediv, lt, le, ne, gt, ge, eq, neg
)
-
+
# Py2K
from operator import (div,)
# end Py2K
return op in _commutative
_associative = _commutative.union([concat_op, and_, or_])
-
+
_smallest = symbol('_smallest')
_largest = symbol('_largest')
def sort_tables(tables):
"""sort a collection of Table objects in order of their foreign-key dependency."""
-
+
tables = list(tables)
tuples = []
def visit_foreign_key(fkey):
tuples.extend(
[parent, table] for parent in table._extra_dependencies
)
-
+
return list(topological.sort(tuples, tables))
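# --- illustrative sketch, not part of the original source ---
# typical use: process tables in dependency-safe order; reversing the result
# yields dependent (child) tables first, e.g. for DELETE or DROP
for table in reversed(sort_tables(metadata.tables.values())):
    print table.name
# --- end sketch ---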
def find_join_source(clauses, join_to):
return the first index and element from the list of
clauses which can be joined against the selectable. returns
None, None if no match is found.
-
+
e.g.::
-
+
clause1 = table1.join(table2)
clause2 = table4.join(table5)
-
+
join_to = table2.join(table3)
-
+
find_join_source([clause1, clause2], join_to) == clause1
-
+
"""
-
+
selectables = list(expression._from_objects(join_to))
for i, f in enumerate(clauses):
for s in selectables:
include_aliases=False, include_joins=False,
include_selects=False, include_crud=False):
"""locate Table objects within the given expression."""
-
+
tables = []
_visitors = {}
-
+
if include_selects:
_visitors['select'] = _visitors['compound_select'] = tables.append
-
+
if include_joins:
_visitors['join'] = tables.append
-
+
if include_aliases:
_visitors['alias'] = tables.append
-
+
if include_crud:
_visitors['insert'] = _visitors['update'] = \
_visitors['delete'] = lambda ent: tables.append(ent.table)
-
+
if check_columns:
def visit_column(column):
tables.append(column.table)
def find_columns(clause):
"""locate Column objects within the given expression."""
-
+
cols = util.column_set()
visitors.traverse(clause, {}, {'column':cols.add})
return cols
"""Given a target clause and a second to search within, return True
if the target is plainly present in the search without any
subqueries or aliases involved.
-
+
Basically descends through Joins.
-
+
"""
stack = [search]
elif isinstance(elem, expression.Join):
stack.extend((elem.left, elem.right))
return False
-
-
+
+
def bind_values(clause):
"""Return an ordered list of "bound" values in the given clause.
E.g.::
-
+
>>> expr = and_(
... table.c.foo==5, table.c.foo==7
... )
>>> bind_values(expr)
[5, 7]
"""
-
+
v = []
def visit_bindparam(bind):
value = bind.value
-
+
# evaluate callables
if callable(value):
value = value()
-
+
v.append(value)
-
+
visitors.traverse(clause, {}, {'bindparam':visit_bindparam})
return v
return "'%s'" % element
else:
return repr(element)
-
+
def expression_as_ddl(clause):
"""Given a SQL expression, convert for usage in DDL, such as
CREATE INDEX and CHECK CONSTRAINT.
-
+
Converts bind params into quoted literals, column identifiers
into detached column constructs so that the parent table
identifier is not included.
-
+
"""
def repl(element):
if isinstance(element, expression._BindParamClause):
return expression.column(element.name)
else:
return None
-
+
return visitors.replacement_traverse(clause, {}, repl)
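# --- illustrative sketch, not part of the original source ---
# "accounts" is a hypothetical Table; the bound 0 becomes the literal 0 and
# the column loses its table prefix, suitable for a CHECK constraint body
ddl_expr = expression_as_ddl(accounts.c.balance > 0)
# --- end sketch ---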
-
+
def adapt_criterion_to_null(crit, nulls):
"""given criterion containing bind params, convert selected elements to IS NULL."""
binary.negate = operators.isnot
return visitors.cloned_traverse(crit, {}, {'binary':visit_binary})
-
-
+
+
def join_condition(a, b, ignore_nonexistent_tables=False, a_subset=None):
"""create a join condition between two tables or selectables.
-
+
e.g.::
-
+
join_condition(tablea, tableb)
-
+
would produce an expression along the lines of::
-
+
tablea.c.id==tableb.c.tablea_id
-
+
The join is determined based on the foreign key relationships
between the two selectables. If there are multiple ways
to join, or no way to join, an error is raised.
-
+
:param ignore_nonexistent_tables: This flag will cause the
function to silently skip over foreign key resolution errors
due to nonexistent tables - the assumption is that these
will be successful even if there are other ways to join to ``a``.
This allows the "right side" of a join to be passed thereby
providing a "natural join".
-
+
"""
crit = []
constraints = set()
-
+
for left in (a_subset, a):
if left is None:
continue
continue
else:
raise
-
+
if col is not None:
crit.append(col == fk.parent)
constraints.add(fk.constraint)
constraints.add(fk.constraint)
if crit:
break
-
+
if len(crit) == 0:
if isinstance(b, expression._FromGrouping):
hint = " Perhaps you meant to convert the right side to a subquery using alias()?"
class Annotated(object):
"""clones a ClauseElement and applies an 'annotations' dictionary.
-
+
Unlike regular clones, this clone also mimics __hash__() and
__cmp__() of the original element so that it takes its place
in hashed collections.
-
+
A reference to the original element is maintained, for the important
reason of keeping its hash value current. When GC'ed, the
hash value may be reused, causing conflicts.
"""
-
+
def __new__(cls, *args):
if not args:
# clone constructor
# collections into __dict__
if isinstance(element, expression.FromClause):
element.c
-
+
self.__dict__ = element.__dict__.copy()
self.__element = element
self._annotations = values
-
+
def _annotate(self, values):
_values = self._annotations.copy()
_values.update(values)
clone.__dict__ = self.__dict__.copy()
clone._annotations = _values
return clone
-
+
def _deannotate(self):
return self.__element
-
+
def _compiler_dispatch(self, visitor, **kw):
return self.__element.__class__._compiler_dispatch(self, visitor, **kw)
-
+
@property
def _constructor(self):
return self.__element._constructor
-
+
def _clone(self):
clone = self.__element._clone()
if clone is self.__element:
# to this object's __dict__.
clone.__dict__.update(self.__dict__)
return Annotated(clone, self._annotations)
-
+
def __hash__(self):
return hash(self.__element)
def splice_joins(left, right, stop_on=None):
if left is None:
return right
-
+
stack = [(right, None)]
adapter = ClauseAdapter(left)
ret = right
return ret
-
+
def reduce_columns(columns, *clauses, **kw):
"""given a list of columns, return a 'reduced' set based on natural equivalents.
\**kw may specify 'ignore_nonexistent_tables' to ignore foreign keys
whose tables are not yet configured.
-
+
This function is primarily used to determine the most minimal "primary key"
from a selectable, by reducing the set of primary key columns present
    in the selectable to just those that are not repeated.
"""
ignore_nonexistent_tables = kw.pop('ignore_nonexistent_tables', False)
-
+
columns = util.ordered_column_set(columns)
omit = util.column_set()
def criterion_as_pairs(expression, consider_as_foreign_keys=None,
consider_as_referenced_keys=None, any_operator=False):
"""traverse an expression and locate binary criterion pairs."""
-
+
if consider_as_foreign_keys and consider_as_referenced_keys:
raise exc.ArgumentError("Can only specify one of "
"'consider_as_foreign_keys' or "
"'consider_as_referenced_keys'")
-
+
def visit_binary(binary):
if not any_operator and binary.operator is not operators.eq:
return
def folded_equivalents(join, equivs=None):
"""Return a list of uniquely named columns.
-
+
The column list of the given Join will be narrowed
down to a list of all equivalently-named,
equated columns folded into one column, where 'equated' means they are
equated to each other in the ON clause of this join.
This function is used by Join.select(fold_equivalents=True).
-
+
Deprecated. This function is used for a certain kind of
"polymorphic_union" which is designed to achieve joined
table inheritance where the base table has no "discriminator"
class AliasedRow(object):
"""Wrap a RowProxy with a translation map.
-
+
This object allows a set of keys to be translated
to those present in a RowProxy.
-
+
"""
def __init__(self, row, map):
# AliasedRow objects don't nest, so un-nest
else:
self.row = row
self.map = map
-
+
def __contains__(self, key):
return self.map[key] in self.row
class ClauseAdapter(visitors.ReplacingCloningVisitor):
"""Clones and modifies clauses based on column correspondence.
-
+
E.g.::
table1 = Table('sometable', metadata,
self.include = include
self.exclude = exclude
self.equivalents = util.column_dict(equivalents or {})
-
+
def _corresponding_column(self, col, require_embedded, _seen=util.EMPTY_SET):
newcol = self.selectable.corresponding_column(col, require_embedded=require_embedded)
return None
elif self.exclude and col in self.exclude:
return None
-
+
return self._corresponding_column(col, True)
class ColumnAdapter(ClauseAdapter):
"""Extends ClauseAdapter with extra utility functions.
-
+
Provides the ability to "wrap" this ClauseAdapter
around another, a columns dictionary which returns
adapted elements given an original, and an
adapted_row() factory.
-
+
"""
def __init__(self, selectable, equivalents=None,
chain_to=None, include=None,
c = self._corresponding_column(col, True)
if c is None:
c = self.adapt_clause(col)
-
+
# anonymize labels in case they have a hardcoded name
if isinstance(c, expression._Label):
c = c.label(None)
-
+
# adapt_required indicates that if we got the same column
# back which we put in (i.e. it passed through),
# it's not correct. this is used by eagerloading which
# the wrong column.
if self.adapt_required and c is col:
return None
-
- return c
+
+ return c
def adapted_row(self, row):
return AliasedRow(row, self.columns)
-
+
def __getstate__(self):
d = self.__dict__.copy()
del d['columns']
return d
-
+
def __setstate__(self, state):
self.__dict__.update(state)
self.columns = util.PopulateDict(self._locate_col)
'CloningVisitor', 'ReplacingCloningVisitor', 'iterate',
'iterate_depthfirst', 'traverse_using', 'traverse',
'cloned_traverse', 'replacement_traverse']
-
+
class VisitableType(type):
"""Metaclass which checks for a `__visit_name__` attribute and
    applies a `_compiler_dispatch` method to classes.
-
+
"""
-
+
def __init__(cls, clsname, bases, clsdict):
if cls.__name__ == 'Visitable' or not hasattr(cls, '__visit_name__'):
super(VisitableType, cls).__init__(clsname, bases, clsdict)
return
-
+
# set up an optimized visit dispatch function
# for use by the compiler
if '__visit_name__' in cls.__dict__:
return getattr(visitor, 'visit_%s' % self.__visit_name__)(self, **kw)
cls._compiler_dispatch = _compiler_dispatch
-
+
super(VisitableType, cls).__init__(clsname, bases, clsdict)
class Visitable(object):
"""Base class for visitable objects, applies the
``VisitableType`` metaclass.
-
+
"""
__metaclass__ = VisitableType
class ClauseVisitor(object):
"""Base class for visitor objects which can traverse using
the traverse() function.
-
+
"""
-
+
__traverse_options__ = {}
-
+
def traverse_single(self, obj, **kw):
for v in self._visitor_iterator:
meth = getattr(v, "visit_%s" % obj.__visit_name__, None)
if meth:
return meth(obj, **kw)
-
+
def iterate(self, obj):
"""traverse the given expression structure, returning an iterator of all elements."""
return iterate(obj, self.__traverse_options__)
-
+
def traverse(self, obj):
"""traverse and visit the given expression structure."""
return traverse(obj, self.__traverse_options__, self._visitor_dict)
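# --- illustrative sketch, not part of the original source ---
# a minimal visitor built on the machinery above: collect every Column
# referenced in an expression; the class name is hypothetical
class ColumnCollector(ClauseVisitor):
    def __init__(self):
        self.columns = []

    def visit_column(self, column):
        # dispatched for elements whose __visit_name__ is 'column'
        self.columns.append(column)

# usage:  collector = ColumnCollector(); collector.traverse(some_expression)
# --- end sketch ---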
-
+
@util.memoized_property
def _visitor_dict(self):
visitors = {}
if name.startswith('visit_'):
visitors[name[6:]] = getattr(self, name)
return visitors
-
+
@property
def _visitor_iterator(self):
"""iterate through this visitor and each 'chained' visitor."""
-
+
v = self
while v:
yield v
def chain(self, visitor):
"""'chain' an additional ClauseVisitor onto this ClauseVisitor.
-
+
the chained visitor will receive all visit events after this one.
-
+
"""
tail = list(self._visitor_iterator)[-1]
tail._next = visitor
class CloningVisitor(ClauseVisitor):
"""Base class for visitor objects which can traverse using
the cloned_traverse() function.
-
+
"""
def copy_and_process(self, list_):
class ReplacingCloningVisitor(CloningVisitor):
"""Base class for visitor objects which can traverse using
the replacement_traverse() function.
-
+
"""
def replace(self, elem):
"""receive pre-copied elements during a cloning traversal.
-
+
If the method returns a new element, the element is used
instead of creating a simple copy of the element. Traversal
will halt on the newly returned element if it is re-encountered.
def iterate(obj, opts):
"""traverse the given expression structure, returning an iterator.
-
+
traversal is configured to be breadth-first.
-
+
"""
stack = deque([obj])
while stack:
def iterate_depthfirst(obj, opts):
"""traverse the given expression structure, returning an iterator.
-
+
traversal is configured to be depth-first.
-
+
"""
stack = deque([obj])
traversal = deque()
if meth:
meth(target)
return obj
-
+
def traverse(obj, opts, visitors):
"""traverse and visit the given expression structure using the default iterator."""
def cloned_traverse(obj, opts, visitors):
"""clone the given expression structure, allowing modifications by visitors."""
-
+
cloned = util.column_dict()
def clone(element):
def replacement_traverse(obj, opts, replace):
"""clone the given expression structure, allowing element replacement by a given replacement function."""
-
+
cloned = util.column_dict()
stop_on = util.column_set(opts.get('stop_on', []))
def process_cursor_execute(self, statement, parameters, context, executemany):
pass
-
+
def is_consumed(self):
"""Return True if this rule has been consumed, False if not.
-
+
Should raise an AssertionError if this rule's condition has definitely failed.
-
+
"""
raise NotImplementedError()
-
+
def rule_passed(self):
"""Return True if the last test of this rule passed, False if failed, None if no test was applied."""
-
+
raise NotImplementedError()
-
+
def consume_final(self):
"""Return True if this rule has been consumed.
-
+
Should raise an AssertionError if this rule's condition has not been consumed or has failed.
-
+
"""
-
+
if self._result is None:
assert False, "Rule has not been consumed"
-
+
return self.is_consumed()
class SQLMatchRule(AssertRule):
def __init__(self):
self._result = None
self._errmsg = ""
-
+
def rule_passed(self):
return self._result
-
+
def is_consumed(self):
if self._result is None:
return False
-
+
assert self._result, self._errmsg
-
+
return True
-
+
class ExactSQL(SQLMatchRule):
def __init__(self, sql, params=None):
SQLMatchRule.__init__(self)
self.sql = sql
self.params = params
-
+
def process_cursor_execute(self, statement, parameters, context, executemany):
if not context:
return
-
+
_received_statement = _process_engine_statement(context.unicode_statement, context)
_received_parameters = context.compiled_parameters
-
+
# TODO: remove this step once all unit tests
# are migrated, as ExactSQL should really be *exact* SQL
sql = _process_assertion_statement(self.sql, context)
-
+
equivalent = _received_statement == sql
if self.params:
if util.callable(self.params):
equivalent = equivalent and params == context.compiled_parameters
else:
params = {}
-
-
+
+
self._result = equivalent
if not self._result:
self._errmsg = "Testing for exact statement %r exact params %r, " \
"received %r with params %r" % (sql, params, _received_statement, _received_parameters)
-
+
class RegexSQL(SQLMatchRule):
def __init__(self, regex, params=None):
if not isinstance(params, list):
params = [params]
-
+
# do a positive compare only
for param, received in zip(params, _received_parameters):
for k, v in param.iteritems():
return
_received_parameters = list(context.compiled_parameters)
-
+
# recompile from the context, using the default dialect
compiled = context.compiled.statement.\
compile(dialect=DefaultDialect(), column_keys=context.compiled.column_keys)
-
+
_received_statement = re.sub(r'\n', '', str(compiled))
-
+
equivalent = self.statement == _received_statement
if self.params:
if util.callable(self.params):
if not isinstance(params, list):
params = [params]
-
+
all_params = list(params)
all_received = list(_received_parameters)
while params:
param = dict(params.pop(0))
for k, v in context.compiled.params.iteritems():
param.setdefault(k, v)
-
+
if param not in _received_parameters:
equivalent = False
break
"received %r with params %r" % \
(self.statement, all_params, _received_statement, all_received)
#print self._errmsg
-
-
+
+
class CountStatements(AssertRule):
def __init__(self, count):
self.count = count
self._statement_count = 0
-
+
def process_execute(self, clauseelement, *multiparams, **params):
self._statement_count += 1
def is_consumed(self):
return False
-
+
def consume_final(self):
assert self.count == self._statement_count, "desired statement count %d does not match %d" % (self.count, self._statement_count)
return True
-
+
class AllOf(AssertRule):
def __init__(self, *rules):
self.rules = set(rules)
-
+
def process_execute(self, clauseelement, *multiparams, **params):
for rule in self.rules:
rule.process_execute(clauseelement, *multiparams, **params)
def is_consumed(self):
if not self.rules:
return True
-
+
for rule in list(self.rules):
if rule.rule_passed(): # a rule passed, move on
self.rules.remove(rule)
return len(self.rules) == 0
assert False, "No assertion rules were satisfied for statement"
-
+
def consume_final(self):
return len(self.rules) == 0
-
+
def _process_engine_statement(query, context):
if util.jython:
# oracle+zxjdbc passes a PyStatement when returning into
query = unicode(query)
if context.engine.name == 'mssql' and query.endswith('; select scope_identity()'):
query = query[:-25]
-
+
query = re.sub(r'\n', '', query)
-
+
return query
-
+
def _process_assertion_statement(query, context):
paramstyle = context.dialect.paramstyle
if paramstyle == 'named':
class SQLAssert(ConnectionProxy):
rules = None
-
+
def add_rules(self, rules):
self.rules = list(rules)
-
+
def statement_complete(self):
for rule in self.rules:
if not rule.consume_final():
def clear_rules(self):
del self.rules
-
+
def execute(self, conn, execute, clauseelement, *multiparams, **params):
result = execute(clauseelement, *multiparams, **params)
rule.process_execute(clauseelement, *multiparams, **params)
if rule.is_consumed():
self.rules.pop(0)
-
+
return result
-
+
def cursor_execute(self, execute, cursor, statement, parameters, context, executemany):
result = execute(cursor, statement, parameters, context)
-
+
if self.rules:
rule = self.rules[0]
rule.process_cursor_execute(statement, parameters, context, executemany)
return result
asserter = SQLAssert()
-
+
def drop_all_tables(metadata):
testing_reaper.close_all()
metadata.drop_all()
-
+
def assert_conns_closed(fn):
def decorated(*args, **kw):
try:
testing_reaper.close_all()
fn(*args, **kw)
return function_named(decorated, fn.__name__)
-
-
+
+
def close_open_connections(fn):
"""Decorator that closes all connections after fn execution."""
if not mod:
mod = getattr(__import__('sqlalchemy.databases.%s' % name).databases, name)
yield mod.dialect()
-
+
class ReconnectFixture(object):
def __init__(self, dbapi):
self.dbapi = dbapi
options = options or config.db_opts
options.setdefault('proxy', asserter)
-
+
listeners = options.setdefault('listeners', [])
listeners.append(testing_reaper)
engine = create_engine(url, **options)
-
+
# may want to call this, results
# in first-connect initializers
#engine.connect()
-
+
return engine
def utf8_engine(url=None, options=None):
def mock_engine(dialect_name=None):
"""Provides a mocking engine based on the current testing.db.
-
+
This is normally used to test DDL generation flow as emitted
by an Engine.
-
+
It should not be used in other cases, as assert_compile() and
assert_sql_execution() are much better choices with fewer
moving parts.
-
+
"""
-
+
from sqlalchemy import create_engine
-
+
if not dialect_name:
dialect_name = config.db.name
def assert_sql(stmts):
recv = [re.sub(r'[\n\t]', '', str(s)) for s in buffer]
assert recv == stmts, recv
-
+
engine = create_engine(dialect_name + '://',
strategy='mock', executor=executor)
assert not hasattr(engine, 'mock')
#for t in ('FunctionType', 'BuiltinFunctionType',
# 'MethodType', 'BuiltinMethodType',
# 'LambdaType', )])
-
+
# Py2K
for t in ('FunctionType', 'BuiltinFunctionType',
'MethodType', 'BuiltinMethodType',
else:
buffer.append(result)
return result
-
+
@property
def _sqla_unwrap(self):
return self._subject
-
+
def __getattribute__(self, key):
try:
return object.__getattribute__(self, key)
return self
else:
return result
-
+
@property
def _sqla_unwrap(self):
return None
-
+
def __getattribute__(self, key):
try:
return object.__getattribute__(self, key)
self_key = sa.orm.attributes.instance_state(self).key
except sa.orm.exc.NO_STATE:
self_key = None
-
+
if other is None:
a = self
b = other
def __eq__(self, other):
return other.__class__ is self.__class__ and other.x==self.x and other.y==self.y
-class OldSchoolWithoutCompare:
+class OldSchoolWithoutCompare:
def __init__(self, x, y):
self.x = x
self.y = y
-
+
class BarWithoutCompare(object):
def __init__(self, x, y):
self.x = x
cextension = True
except ImportError:
cextension = False
-
+
while version_info:
version = '.'.join([str(v) for v in version_info])
if cextension:
no_support('maxdb', 'FIXME: verify not supported by database'),
no_support('informix', 'not supported by database'),
)
-
+
def identity(fn):
"""Target database must support GENERATED AS IDENTITY or a facsimile.
# no access to same table
no_support('mysql', 'requires SUPER priv'),
exclude('mysql', '<', (5, 0, 10), 'not supported by database'),
-
+
# huh? TODO: implement triggers for PG tests, remove this
- no_support('postgresql', 'PG triggers need to be implemented for tests'),
+ no_support('postgresql', 'PG triggers need to be implemented for tests'),
)
def correlated_outer_joins(fn):
"""Target must support an outer join to a subquery which correlates to the parent."""
-
+
return _chain_decorators_on(
fn,
no_support('oracle', 'Raises "ORA-01799: a column may not be outer-joined to a subquery"')
)
-
+
def savepoints(fn):
"""Target database must support savepoints."""
return _chain_decorators_on(
def denormalized_names(fn):
"""Target database must have 'denormalized', i.e. UPPERCASE as case insensitive names."""
-
+
return skip_if(
lambda: not testing.db.dialect.requires_name_normalize,
"Backend does not require denomralized names."
)(fn)
-
+
def schemas(fn):
"""Target database must support external schemas, and have one named 'test_schema'."""
-
+
return _chain_decorators_on(
fn,
no_support('sqlite', 'no schema support'),
no_support('firebird', 'no schema support')
)
-
+
def sequences(fn):
"""Target database must support SEQUENCEs."""
return _chain_decorators_on(
no_support('sqlite', 'no FOR UPDATE NOWAIT support'),
no_support('sybase', 'no FOR UPDATE NOWAIT support'),
)
-
+
def subqueries(fn):
"""Target database must support subqueries."""
return _chain_decorators_on(
fn,
fails_on('sybase', 'no support for OFFSET or equivalent'),
)
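# Hedged usage sketch: these requirement helpers are plain decorators applied
# directly to test callables; the nose plugin later in this diff also honors a
# __requires__ class attribute. The test name below is hypothetical.
#
#     from sqlalchemy.test import requires
#
#     @requires.subqueries
#     def test_scalar_subquery_roundtrip():
#         pass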
-
+
def returning(fn):
return _chain_decorators_on(
fn,
no_support('sybase', 'not supported by database'),
no_support('informix', 'not supported by database'),
)
-
+
def two_phase_transactions(fn):
"""Target database must support two-phase transactions."""
return _chain_decorators_on(
fn,
skip_if(lambda: not _has_cextensions(), "C extensions not installed")
)
-
+
def dbapi_lastrowid(fn):
return _chain_decorators_on(
fn,
fails_on_everything_except('mysql+mysqldb', 'mysql+oursql', 'sqlite+pysqlite')
)
-
+
def sane_multi_rowcount(fn):
return _chain_decorators_on(
fn,
fn,
fails_on_everything_except('postgresql', 'oracle')
)
-
+
def python2(fn):
return _chain_decorators_on(
fn,
"Python version 2.5 or greater is required"
)
)
-
+
def _has_cextensions():
try:
from sqlalchemy import cresultproxy, cprocessors
return True
except ImportError:
return False
-
+
def _has_sqlite():
from sqlalchemy import create_engine
try:
return name[0:max(dialect.max_identifier_length - 6, 0)] + "_" + hex(hash(name) % 64)[2:]
else:
return name
-
+
from sqlalchemy.engine import default
from nose import SkipTest
-
+
_ops = { '<': operator.lt,
'>': operator.gt,
'==': operator.eq,
return engine.name in dialects or \
engine.driver in drivers or \
(engine.name, engine.driver) in specs
-
+
return check
-
+
def fails_on(dbs, reason):
"""Mark a test as expected to fail on the specified database
"""
spec = db_spec(dbs)
-
+
def decorate(fn):
fn_name = fn.__name__
def maybe(*args, **kw):
"""
spec = db_spec(*dbs)
-
+
def decorate(fn):
fn_name = fn.__name__
def maybe(*args, **kw):
return True
return function_named(maybe, fn_name)
return decorate
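# Usage sketch mirroring the dialect tests later in this diff; the decorated
# test body is hypothetical.
#
#     @fails_on('mysql+mysqldb', 'uses format')
#     def test_match_with_pyformat_binds():
#         pass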
-
+
def exclude(db, op, spec, reason):
"""Mark a test as unsupported by specific database server versions.
"""
carp = _should_carp_about_exclusion(reason)
-
+
def decorate(fn):
fn_name = fn.__name__
def maybe(*args, **kw):
if bind is None:
bind = config.db
-
+
# force metadata to be retrieved
conn = bind.connect()
version = getattr(bind.dialect, 'server_version_info', ())
"""Skip a test if predicate is true."""
reason = reason or predicate.__name__
carp = _should_carp_about_exclusion(reason)
-
+
def decorate(fn):
fn_name = fn.__name__
def maybe(*args, **kw):
warnings.filterwarnings().
"""
spec = db_spec(db)
-
+
def decorate(fn):
def maybe(*args, **kw):
if isinstance(db, basestring):
verbiage emitted by the sqlalchemy.util.deprecated decorator.
"""
-
+
def decorate(fn):
def safe(*args, **kw):
# todo: should probably be strict about this, too
def global_cleanup_assertions():
"""Check things that have to be finalized at the end of a test suite.
-
+
Hardcoded at the moment, a modular system can be built here
to support things like PG prepared transactions, tables all
dropped, etc.
-
+
"""
testutil.lazy_gc()
assert not pool._refs
-
-
+
+
def against(*queries):
"""Boolean predicate, compares to testing database configuration.
success = False
except except_cls, e:
success = True
-
+
# assert outside the block so it works for AssertionError too !
assert success, "Callable did not raise an exception"
def fail(msg):
assert False, msg
-
+
def fixture(table, columns, *rows):
"""Insert data into table after creation."""
def onload(event, schema_item, connection):
finally:
metadata.drop_all()
return function_named(maybe, fn.__name__)
-
+
def resolve_artifact_names(fn):
"""Decorator, augment function globals with tables and classes.
def assert_(self, val, msg=None):
assert val, msg
-
+
class AssertsCompiledSQL(object):
def assert_compile(self, clause, result, params=None, checkparams=None, dialect=None, use_default_dialect=False):
if use_default_dialect:
dialect = default.DefaultDialect()
-
+
if dialect is None:
dialect = getattr(self, '__dialect__', None)
kw = {}
if params is not None:
kw['column_keys'] = params.keys()
-
+
if isinstance(clause, orm.Query):
context = clause._compile_context()
context.statement.use_labels = True
clause = context.statement
-
+
c = clause.compile(dialect=dialect, **kw)
param_str = repr(getattr(c, 'params', {}))
# Py3K
#param_str = param_str.encode('utf-8').decode('ascii', 'ignore')
-
+
print "\nSQL String:\n" + str(c) + param_str
-
+
cc = re.sub(r'[\n\t]', '', str(c))
-
+
eq_(cc, result, "%r != %r on dialect %r" % (cc, result, dialect))
if checkparams is not None:
assert reflected_c is reflected_table.c[c.name]
eq_(c.primary_key, reflected_c.primary_key)
eq_(c.nullable, reflected_c.nullable)
-
+
if strict_types:
assert type(reflected_c.type) is type(c.type), \
"Type '%s' doesn't correspond to type '%s'" % (reflected_c.type, c.type)
assert len(table.primary_key) == len(reflected_table.primary_key)
for c in table.primary_key:
assert reflected_table.primary_key.columns[c.name] is not None
-
+
def assert_types_base(self, c1, c2):
assert c1.type._compare_type_affinity(c2.type),\
"On column %r, type '%s' doesn't correspond to type '%s'" % \
assertsql.asserter.statement_complete()
finally:
assertsql.asserter.clear_rules()
-
+
def assert_sql(self, db, callable_, list_, with_sequences=None):
if with_sequences is not None and config.db.name in ('firebird', 'oracle', 'postgresql'):
rules = with_sequences
else:
rules = list_
-
+
newrules = []
for rule in rules:
if isinstance(rule, dict):
else:
newrule = assertsql.ExactSQL(*rule)
newrules.append(newrule)
-
+
self.assert_sql_execution(db, callable_, *newrules)
def assert_sql_count(self, db, callable_, count):
gc.collect()
gc.collect()
return 0
-
+
# "lazy" gc, for VM's that don't GC on refcount == 0
lazy_gc = gc_collect
# end Py2K
import pickle
picklers.add(pickle)
-
+
# yes, this thing needs this much testing
for pickle in picklers:
for protocol in -1, 0, 1, 2:
yield pickle.loads, lambda d:pickle.dumps(d, protocol)
-
-
+
+
def round_decimal(value, prec):
if isinstance(value, float):
return round(value, prec)
-
+
import decimal
# can also use shift() here but that is 2.6 only
return (value * decimal.Decimal("1" + "0" * prec)).to_integral(decimal.ROUND_FLOOR) / \
pow(10, prec)
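# Worked example of the floor-based rounding above (illustrative only):
#   round_decimal(decimal.Decimal("2.129"), 2)  ->  Decimal("2.12")
#   round_decimal(2.129, 2)                     ->  2.13  (floats use round())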
-
+
class RandomSet(set):
def __iter__(self):
l = list(set.__iter__(self))
random.shuffle(l)
return iter(l)
-
+
def pop(self):
index = random.randint(0, len(self) - 1)
item = list(set.__iter__(self))[index]
self.remove(item)
return item
-
+
def union(self, other):
return RandomSet(set.union(self, other))
-
+
def difference(self, other):
return RandomSet(set.difference(self, other))
-
+
def intersection(self, other):
return RandomSet(set.intersection(self, other))
-
+
def copy(self):
return RandomSet(self)
-
\ No newline at end of file
edges = util.defaultdict(set)
for parent, child in tuples:
edges[child].add(parent)
-
+
todo = set(allitems)
while todo:
edges[parent].add(child)
output = set()
-
+
while todo:
node = todo.pop()
stack = [node]
cyc = stack[stack.index(node):]
todo.difference_update(cyc)
output.update(cyc)
-
+
if node in todo:
stack.append(node)
todo.remove(node)
import array
class AbstractType(Visitable):
-
+
def copy_value(self, value):
return value
def bind_processor(self, dialect):
"""Defines a bind parameter processing function.
-
+
:param dialect: Dialect instance in use.
"""
def result_processor(self, dialect, coltype):
"""Defines a result-column processing function.
-
+
:param dialect: Dialect instance in use.
:param coltype: DBAPI coltype argument received in cursor.description.
-
+
"""
return None
objects alone. Values such as dicts, lists which
are serialized into strings are examples of "mutable"
column structures.
-
+
When this method is overridden, :meth:`copy_value` should
also be supplied. The :class:`.MutableType` mixin
is recommended as a helper.
-
+
"""
return False
def get_dbapi_type(self, dbapi):
"""Return the corresponding type object from the underlying DB-API, if
any.
-
+
This can be useful for calling ``setinputsizes()``, for example.
"""
def _adapt_expression(self, op, othertype):
"""evaluate the return type of <self> <op> <othertype>,
and apply any adaptations to the given operator.
-
+
"""
return op, self
-
+
@util.memoized_property
def _type_affinity(self):
"""Return a rudimental 'affinity' value expressing the general class
typ = t
else:
return self.__class__
-
+
def _coerce_compared_value(self, op, value):
_coerced_type = type_map.get(type(value), NULLTYPE)
if _coerced_type is NULLTYPE or _coerced_type._type_affinity \
return self
else:
return _coerced_type
-
+
def _compare_type_affinity(self, other):
return self._type_affinity is other._type_affinity
def compile(self, dialect=None):
# arg, return value is inconsistent with
# ClauseElement.compile()....this is a mistake.
-
+
if not dialect:
dialect = self._default_dialect
-
+
return dialect.type_compiler.process(self)
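# Small usage sketch of the compile() hook above; names and the rendered
# string are illustrative of the default dialect's typical output.
#
#     from sqlalchemy import String
#     String(50).compile()    # e.g. 'VARCHAR(50)'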
-
+
@property
def _default_dialect(self):
if self.__class__.__module__.startswith("sqlalchemy.dialects"):
return getattr(__import__(mod).dialects, tokens[-1]).dialect()
else:
return default.DefaultDialect()
-
+
def __str__(self):
# Py3K
#return unicode(self.compile())
def _adapt_expression(self, op, othertype):
"""evaluate the return type of <self> <op> <othertype>,
and apply any adaptations to the given operator.
-
+
"""
return self.adapt_operator(op), self
def adapt_operator(self, op):
"""A hook which allows the given operator to be adapted
to something new.
-
+
See also UserDefinedType._adapt_expression(), an as-yet-
semi-public method with greater capability in this regard.
-
+
"""
return op
class TypeDecorator(AbstractType):
"""Allows the creation of types which add additional functionality
to an existing type.
-
+
This method is preferred to direct subclassing of SQLAlchemy's
built-in types as it ensures that all required functionality of
the underlying type is kept in place.
'''
impl = types.Unicode
-
+
def process_bind_param(self, value, dialect):
return "PREFIX:" + value
method. This is used to give the expression system a hint when coercing
Python objects into bind parameters within expressions. Consider this
expression::
-
+
mytable.c.somecol + datetime.date(2009, 5, 15)
-
+
Above, if "somecol" is an ``Integer`` variant, it makes sense that
we're doing date arithmetic, where above is usually interpreted
by databases as adding a number of days to the given date.
The expression system does the right thing by not attempting to
coerce the "date()" value into an integer-oriented bind parameter.
-
+
However, in the case of ``TypeDecorator``, we are usually changing an
incoming Python type to something new - ``TypeDecorator`` by default will
"coerce" the non-typed side to be the same type as itself. Such as below,
we define an "epoch" type that stores a date value as an integer::
-
+
class MyEpochType(types.TypeDecorator):
impl = types.Integer
-
+
epoch = datetime.date(1970, 1, 1)
-
+
def process_bind_param(self, value, dialect):
return (value - self.epoch).days
-
+
def process_result_value(self, value, dialect):
return self.epoch + timedelta(days=value)
Our expression of ``somecol + date`` with the above type will coerce the
- "date" on the right side to also be treated as ``MyEpochType``.
-
+ "date" on the right side to also be treated as ``MyEpochType``.
+
This behavior can be overridden via the
:meth:`~TypeDecorator.coerce_compared_value` method, which returns a type
that should be used for the value of the expression. Below we set it such
that an integer value will be treated as an ``Integer``, and any other
value is assumed to be a date and will be treated as a ``MyEpochType``::
-
+
def coerce_compared_value(self, op, value):
if isinstance(value, int):
return Integer()
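# A consolidated, standalone version of the docstring example above -- a
# hedged sketch shown only to illustrate the override, not part of the module.
import datetime
from sqlalchemy import types, Integer

class MyEpochType(types.TypeDecorator):
    impl = types.Integer

    epoch = datetime.date(1970, 1, 1)

    def process_bind_param(self, value, dialect):
        return (value - self.epoch).days

    def process_result_value(self, value, dialect):
        return self.epoch + datetime.timedelta(days=value)

    def coerce_compared_value(self, op, value):
        # plain integers compare as Integer; anything else keeps
        # the epoch conversion described in the docstring above
        if isinstance(value, int):
            return Integer()
        else:
            return self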
"'impl' which refers to the class of "
"type being decorated")
self.impl = to_instance(self.__class__.impl, *args, **kwargs)
-
+
def adapt(self, cls):
return cls()
-
+
def dialect_impl(self, dialect):
key = (dialect.__class__, dialect.server_version_info)
def type_engine(self, dialect):
"""Return a TypeEngine instance for this TypeDecorator.
-
+
"""
adapted = dialect.type_descriptor(self)
if adapted is not self:
return process
else:
return self.impl.result_processor(dialect, coltype)
-
+
def coerce_compared_value(self, op, value):
"""Suggest a type for a 'coerced' Python value in an expression.
-
+
By default, returns self. This method is called by
the expression system when an object using this type is
on the left or right side of an expression against a plain Python
object which does not yet have a SQLAlchemy type assigned::
-
+
expr = table.c.somecolumn + 35
-
+
Where above, if ``somecolumn`` uses this type, this method will
be called with the value ``operator.add``
and ``35``. The return value is whatever SQLAlchemy type should
be used for ``35`` for this particular operation.
-
+
"""
return self
def _coerce_compared_value(self, op, value):
return self.coerce_compared_value(op, value)
-
+
def copy(self):
instance = self.__class__.__new__(self.__class__)
instance.__dict__.update(self.__dict__)
objects alone. Values such as dicts, lists which
are serialized into strings are examples of "mutable"
column structures.
-
+
When this method is overridden, :meth:`copy_value` should
also be supplied. The :class:`.MutableType` mixin
is recommended as a helper.
-
+
"""
return self.impl.is_mutable()
which applies special rules to such values in order to guarantee
that changes are detected. These rules may have a significant
performance impact, described below.
-
+
A :class:`MutableType` usually allows a flag called
``mutable=True`` to enable/disable the "mutability" flag,
represented on this class by :meth:`is_mutable`. Examples
:class:`~sqlalchemy.dialects.postgresql.base.ARRAY`. Setting
this flag to ``False`` effectively disables any mutability-
specific behavior by the ORM.
-
+
:meth:`copy_value` and :meth:`compare_values` represent a copy
and compare function for values of this type - implementing
subclasses should override these appropriately.
execution of :class:`Query` will require a full scan of that subset of
the 6000 objects that have mutable attributes, possibly resulting
in tens of thousands of additional method calls for every query.
-
+
Note that for small numbers (< 100 in the Session at a time)
of objects with "mutable" values, the performance degradation is
negligible. In most cases it's likely that the convenience allowed
by "mutable" change detection outweighs the performance penalty.
-
+
It is perfectly fine to represent "mutable" data types with the
"mutable" flag set to False, which eliminates any performance
issues. It means that the ORM will only reliably detect changes
for values of this type if a newly modified value is of a different
identity (i.e., ``id(value)``) than what was present before -
i.e., instead of operations like these::
-
+
myobject.somedict['foo'] = 'bar'
myobject.someset.add('bar')
myobject.somelist.append('bar')
-
+
You'd instead say::
-
+
myobject.somevalue = {'foo':'bar'}
myobject.someset = myobject.someset.union(['bar'])
myobject.somelist = myobject.somelist + ['bar']
-
+
A future release of SQLAlchemy will include instrumented
collection support for mutable types, such that at least usage of
plain Python datastructures will be able to emit events for
def is_mutable(self):
"""Return True if the target Python type is 'mutable'.
-
+
For :class:`.MutableType`, this method is set to
return ``True``.
-
+
"""
return True
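# A minimal sketch of the mutable=False pattern described in the MutableType
# docstring above, using PickleType; attribute and value names are
# hypothetical.
#
#     somedata = Column('somedata', PickleType(mutable=False))
#
#     myobject.somedata = {'foo': 'bar'}      # reassignment is detected
#     myobject.somedata['foo'] = 'bar'        # in-place change is NOT detected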
class _DateAffinity(object):
"""Mixin date/time specific expression adaptations.
-
+
Rules are implemented within Date,Time,Interval,DateTime, Numeric,
Integer. Based on http://www.postgresql.org/docs/current/static
/functions-datetime.html.
-
+
"""
-
+
@property
def _expression_adaptations(self):
raise NotImplementedError()
for all String types by setting
:attr:`sqlalchemy.engine.base.Dialect.convert_unicode`
on create_engine().
-
+
To instruct SQLAlchemy to perform Unicode encoding/decoding
even on a platform that already handles Unicode natively,
set convert_unicode='force'. This will incur significant
performance overhead when fetching unicode result columns.
-
+
:param assert_unicode: Deprecated. A warning is raised in all cases
when a non-Unicode object is passed when SQLAlchemy would coerce
into an encoding (note: but **not** when the DBAPI handles unicode
if unicode_error is not None and convert_unicode != 'force':
raise exc.ArgumentError("convert_unicode must be 'force' "
"when unicode_error is set.")
-
+
if assert_unicode:
util.warn_deprecated('assert_unicode is deprecated. '
'SQLAlchemy emits a warning in all '
self.convert_unicode = convert_unicode
self.unicode_error = unicode_error
self._warn_on_bytestring = _warn_on_bytestring
-
+
def adapt(self, impltype):
return impltype(
length=self.length,
needs_convert = wants_unicode and \
(dialect.returns_unicode_strings is not True or
self.convert_unicode == 'force')
-
+
if needs_convert:
to_unicode = processors.to_unicode_processor_factory(
dialect.encoding, self.unicode_error)
-
+
if dialect.returns_unicode_strings:
# we wouldn't be here unless convert_unicode='force'
# was specified, or the driver has erratic unicode-returning
# habits. since we will be getting back unicode
- # in most cases, we check for it (decode will fail).
+ # in most cases, we check for it (decode will fail).
def process(value):
if isinstance(value, unicode):
return value
``u'somevalue'``) into encoded bytestrings when passing the value
to the database driver, and similarly decodes values from the
database back into Python ``unicode`` objects.
-
+
It's roughly equivalent to using a ``String`` object with
``convert_unicode=True``, however
the type has other significances in that it implies the usage
This may affect what type is emitted when issuing CREATE TABLE
and also may effect some DBAPI-specific details, such as type
information passed along to ``setinputsizes()``.
-
+
When using the ``Unicode`` type, it is only appropriate to pass
Python ``unicode`` objects, and not plain ``str``. If a
bytestring (``str``) is passed, a runtime warning is issued. If
"""
__visit_name__ = 'unicode'
-
+
def __init__(self, length=None, **kwargs):
"""
Create a Unicode-converting String type.
*length* for use in DDL, and will raise an exception when
the ``CREATE TABLE`` DDL is issued. Whether the value is
interpreted as bytes or characters is database specific.
-
+
:param \**kwargs: passed through to the underlying ``String``
type.
-
+
"""
kwargs.setdefault('convert_unicode', True)
kwargs.setdefault('_warn_on_bytestring', True)
def get_dbapi_type(self, dbapi):
return dbapi.NUMBER
-
+
@util.memoized_property
def _expression_adaptations(self):
# TODO: need a dictionary object that will
values should be sent as Python Decimal objects, or
as floats. Different DBAPIs send one or the other based on
datatypes - the Numeric type will ensure that return values
- are one or the other across DBAPIs consistently.
-
+ are one or the other across DBAPIs consistently.
+
When using the ``Numeric`` type, care should be taken to ensure
that the asdecimal setting is appropriate for the DBAPI in use -
when Numeric applies a conversion from Decimal->float or float->
Decimal, this conversion incurs an additional performance overhead
for all result columns received.
-
+
DBAPIs that return Decimal natively (e.g. psycopg2) will have
better accuracy and higher performance with a setting of ``True``,
as the native translation to Decimal reduces the amount of floating-
'consider storing Decimal numbers as strings '
'or integers on this platform for lossless '
'storage.' % (dialect.name, dialect.driver))
-
+
# we're a "numeric", DBAPI returns floats, convert.
if self.scale is not None:
return processors.to_decimal_processor_factory(
}
class Float(Numeric):
- """A type for ``float`` numbers.
-
+ """A type for ``float`` numbers.
+
Returns Python ``float`` objects by default, applying
conversion as needed.
-
+
"""
__visit_name__ = 'float'
:param precision: the numeric precision for use in DDL ``CREATE
TABLE``.
-
+
:param asdecimal: the same flag as that of :class:`Numeric`, but
defaults to ``False``. Note that setting this flag to ``True``
results in floating point conversion.
DateTime:Interval,
},
}
-
+
class Date(_DateAffinity,TypeEngine):
"""A type for ``datetime.date()`` objects."""
operators.sub:{
# date - integer = date
Integer:Date,
-
+
# date - date = integer.
Date:Integer,
Interval:DateTime,
-
+
# date - datetime = interval,
# this one is not in the PG docs
# but works
return self
else:
return super(_Binary, self)._coerce_compared_value(op, value)
-
+
def adapt(self, impltype):
return impltype(length=self.length)
def get_dbapi_type(self, dbapi):
return dbapi.BINARY
-
+
class LargeBinary(_Binary):
"""A type for large binary byte data.
class Binary(LargeBinary):
"""Deprecated. Renamed to LargeBinary."""
-
+
def __init__(self, *arg, **kw):
util.warn_deprecated('The Binary type has been renamed to '
'LargeBinary.')
class SchemaType(object):
"""Mark a type as possibly requiring schema-level DDL for usage.
-
+
Supports types that must be explicitly created/dropped (i.e. PG ENUM type)
as well as types that are complemented by table or schema level
constraints, triggers, and other rules.
-
+
"""
-
+
def __init__(self, **kw):
self.name = kw.pop('name', None)
self.quote = kw.pop('quote', None)
util.portable_instancemethod(self._on_metadata_create))
self.metadata.append_ddl_listener('after-drop',
util.portable_instancemethod(self._on_metadata_drop))
-
+
def _set_parent(self, column):
column._on_table_attach(util.portable_instancemethod(self._set_table))
-
+
def _set_table(self, table, column):
table.append_ddl_listener('before-create',
util.portable_instancemethod(
util.portable_instancemethod(self._on_metadata_create))
table.metadata.append_ddl_listener('after-drop',
util.portable_instancemethod(self._on_metadata_drop))
-
+
@property
def bind(self):
return self.metadata and self.metadata.bind or None
-
+
def create(self, bind=None, checkfirst=False):
"""Issue CREATE ddl for this type, if applicable."""
-
+
if bind is None:
bind = schema._bind_or_error(self)
t = self.dialect_impl(bind.dialect)
t = self.dialect_impl(bind.dialect)
if t is not self and isinstance(t, SchemaType):
t.drop(bind=bind, checkfirst=checkfirst)
-
+
def _on_table_create(self, event, target, bind, **kw):
t = self.dialect_impl(bind.dialect)
if t is not self and isinstance(t, SchemaType):
t = self.dialect_impl(bind.dialect)
if t is not self and isinstance(t, SchemaType):
t._on_metadata_drop(event, target, bind, **kw)
-
+
class Enum(String, SchemaType):
"""Generic Enum Type.
-
+
The Enum type provides a set of possible string values which the
column is constrained towards.
-
+
By default, uses the backend's native ENUM type if available,
else uses VARCHAR + a CHECK constraint.
"""
-
+
__visit_name__ = 'enum'
-
+
def __init__(self, *enums, **kw):
"""Construct an enum.
-
+
Keyword arguments which don't apply to a specific backend are ignored
by that backend.
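# Hedged sketch of the non-native fallback described above, mirroring the
# enum compile test later in this diff; table and column names are
# illustrative.
#
#     t = Table('sometable', MetaData(),
#               Column('somecolumn', Enum('x', 'y', 'z', name='somename')))
#
# which a backend without native ENUM renders roughly as:
#   CREATE TABLE sometable (somecolumn VARCHAR(1),
#       CHECK (somecolumn IN ('x', 'y', 'z')))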
break
else:
convert_unicode = False
-
+
if self.enums:
length = max(len(x) for x in self.enums)
else:
def _should_create_constraint(self, compiler):
return not self.native_enum or \
not compiler.dialect.supports_native_enum
-
+
def _set_table(self, table, column):
if self.native_enum:
SchemaType._set_table(self, table, column)
-
+
e = schema.CheckConstraint(
column.in_(self.enums),
self._should_create_constraint)
)
table.append_constraint(e)
-
+
def adapt(self, impltype):
if issubclass(impltype, Enum):
return impltype(name=self.name,
**Note:** be sure to read the notes for :class:`MutableType` regarding
ORM performance implications.
-
+
"""
impl = LargeBinary
:meth:`AbstractType.is_mutable`. When ``True``, incoming
objects should provide an ``__eq__()`` method which
performs the desired deep comparison of members, or the
- ``comparator`` argument must be present.
+ ``comparator`` argument must be present.
:param comparator: optional. a 2-arg callable predicate used
to compare values of this type. Otherwise,
def is_mutable(self):
"""Return True if the target Python type is 'mutable'.
-
+
When this method is overridden, :meth:`copy_value` should
also be supplied. The :class:`.MutableType` mixin
is recommended as a helper.
-
+
"""
return self.mutable
def __init__(self, create_constraint=True, name=None):
"""Construct a Boolean.
-
+
:param create_constraint: defaults to True. If the boolean
is generated as an int/smallint, also create a CHECK constraint
on the table that ensures 1 or 0 as a value.
-
+
:param name: if a CHECK constraint is generated, specify
the name of the constraint.
-
+
"""
self.create_constraint = create_constraint
self.name = name
-
+
def _should_create_constraint(self, compiler):
return not compiler.dialect.supports_native_boolean
-
+
def _set_table(self, table, column):
if not self.create_constraint:
return
-
+
e = schema.CheckConstraint(
column.in_([0, 1]),
name=self.name,
self._should_create_constraint)
)
table.append_constraint(e)
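# Sketch of the CHECK-constraint fallback controlled by create_constraint
# above; names are illustrative.
#
#     Column('flag', Boolean(create_constraint=True, name='ck_flag_bool'))
#
# a backend without native booleans gets an additional CHECK (flag IN (0, 1))
# constraint using the given name.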
-
+
def bind_processor(self, dialect):
if dialect.supports_native_boolean:
return None
else:
return processors.boolean_to_int
-
+
def result_processor(self, dialect, coltype):
if dialect.supports_native_boolean:
return None
(such as, conversion of both sides into integer epoch values first) which
currently is a manual procedure (such as via
:attr:`~sqlalchemy.sql.expression.func`).
-
+
"""
impl = DateTime
second_precision=None,
day_precision=None):
"""Construct an Interval object.
-
+
:param native: when True, use the actual
INTERVAL type provided by the database, if
- supported (currently Postgresql, Oracle).
+ supported (currently Postgresql, Oracle).
Otherwise, represent the interval data as
an epoch value regardless.
-
+
:param second_precision: For native interval types
which support a "fractional seconds precision" parameter,
i.e. Oracle and Postgresql
-
+
:param day_precision: for native interval types which
support a "day precision" parameter, i.e. Oracle.
-
+
"""
super(Interval, self).__init__()
self.native = native
return cls._adapt_from_generic_interval(self)
else:
return self
-
+
def bind_processor(self, dialect):
impl_processor = self.impl.bind_processor(dialect)
epoch = self.epoch
# a controversial feature, required by MySQLdb currently
def buffer(x):
return x
-
+
buffer = getattr(__builtin__, 'buffer', buffer)
# end Py2K
-
+
if sys.version_info >= (2, 5):
class PopulateDict(dict):
"""A dict which populates missing values via a creation function.
def __init__(self, creator):
self.creator = creator
-
+
def __missing__(self, key):
self[key] = val = self.creator(key)
return val
def __init__(self, creator):
self.creator = creator
-
+
def __getitem__(self, key):
try:
return dict.__getitem__(self, key)
d2 = self.copy()
d2.update(d)
return frozendict(d2)
-
+
def __repr__(self):
return "frozendict(%s)" % dict.__repr__(self)
class _probe(dict):
def __missing__(self, key):
return 1
-
+
try:
try:
_probe()['missing']
return fn(*(list(args[0:-1]) + args[-1]), **kw)
else:
return fn(*args, **kw)
-
+
return decorator(go)(fn)
return decorate
def update_copy(d, _new=None, **kw):
"""Copy the given dict and update with the given values."""
-
+
d = d.copy()
if _new:
d.update(_new)
d.update(**kw)
return d
-
+
def flatten_iterator(x):
"""Given an iterator of which further sub-elements may also be
iterators, flatten the sub-elements into a single iterator.
__init__ defines a \**kwargs catch-all, then the constructor is presumed to
pass along unrecognized keywords to its base classes, and the collection
process is repeated recursively on each of the bases.
-
+
"""
for c in cls.__mro__:
else:
return (['self'], 'args', 'kwargs', None)
-
+
def unbound_method_to_callable(func_or_cls):
"""Adjust the incoming callable such that a 'self' argument is not required."""
class portable_instancemethod(object):
"""Turn an instancemethod into a (parent, name) pair
to produce a serializable callable.
-
+
"""
def __init__(self, meth):
self.target = meth.im_self
def __call__(self, *arg, **kw):
return getattr(self.target, self.name)(*arg, **kw)
-
+
def class_hierarchy(cls):
"""Return an unordered sequence of all classes related to cls.
def bool_or_str(*text):
"""Return a callable that will evaulate a string as
boolean, or one of a set of "alternate" string values.
-
+
"""
def bool_or_value(obj):
if obj in text:
else:
return asbool(obj)
return bool_or_value
-
+
def coerce_kw_type(kw, key, type_, flexi_bool=True):
"""If 'key' is present in dict 'kw', coerce its value to type 'type\_' if
necessary. If 'flexi_bool' is True, the string '0' is considered false
class NamedTuple(tuple):
"""tuple() subclass that adds labeled names.
-
+
Is also pickleable.
-
+
"""
def __new__(cls, vals, labels=None):
This strategy has edge cases for builtin types - it's possible to have
two 'foo' strings in one of these sets, for example. Use sparingly.
-
+
"""
_working_set = set
-
+
def __init__(self, iterable=None):
self._members = dict()
if iterable:
result._members.update(
self._working_set(self._member_id_tuples()).symmetric_difference(_iter_id(iterable)))
return result
-
+
def _member_id_tuples(self):
return ((id(v), v) for v in self._members.itervalues())
-
+
def __xor__(self, other):
if not isinstance(other, IdentitySet):
return NotImplemented
# but it's safe here: IDS operates on (id, instance) tuples in the
# working set.
__sa_hash_exempt__ = True
-
+
def __init__(self, iterable=None):
IdentitySet.__init__(self)
self._members = OrderedDict()
def unique_list(seq, compare_with=set):
seen = compare_with()
- return [x for x in seq if x not in seen and not seen.add(x)]
+ return [x for x in seq if x not in seen and not seen.add(x)]
class UniqueAppender(object):
"""Appends items to a collection ensuring uniqueness.
class ScopedRegistry(object):
"""A Registry that can store one or multiple instances of a single
class on the basis of a "scope" function.
-
+
The object implements ``__call__`` as the "getter", so by
calling ``myregistry()`` the contained object is returned
for the current scope.
def __init__(self, createfunc, scopefunc):
"""Construct a new :class:`.ScopedRegistry`.
-
+
:param createfunc: A creation function that will generate
a new value for the current scope, if none is present.
-
+
:param scopefunc: A function that returns a hashable
token representing the current scope (such as, current
thread identifier).
-
+
"""
self.createfunc = createfunc
self.scopefunc = scopefunc
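# Minimal usage sketch relying only on what the docstring above states: one
# object is created per scope token, and calling the registry returns it.
# thread.get_ident (Py2K) serves as a per-thread scope token here.
#
#     import thread
#     registry = ScopedRegistry(createfunc=dict, scopefunc=thread.get_ident)
#     obj = registry()            # creates the dict for this thread
#     assert registry() is obj    # same object within the same scope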
def has(self):
"""Return True if an object is present in the current scope."""
-
+
return self.scopefunc() in self.registry
def set(self, obj):
"""Set the value forthe current scope."""
-
+
self.registry[self.scopefunc()] = obj
def clear(self):
"""Clear the current scope, if any."""
-
+
try:
del self.registry[self.scopefunc()]
except KeyError:
class ThreadLocalRegistry(ScopedRegistry):
"""A :class:`.ScopedRegistry` that uses a ``threading.local()``
variable for storage.
-
+
"""
def __init__(self, createfunc):
self.createfunc = createfunc
class importlater(object):
"""Deferred import object.
-
+
e.g.::
-
+
somesubmod = importlater("mypackage.somemodule", "somesubmod")
-
+
is equivalent to::
-
+
from mypackage.somemodule import somesubmod
-
+
except evaluated upon attribute access to "somesubmod".
-
+
"""
def __init__(self, path, addtl=None):
self._il_path = path
self._il_addtl = addtl
-
+
@memoized_property
def _il_module(self):
if self._il_addtl:
for token in self._il_path.split(".")[1:]:
m = getattr(m, token)
return m
-
+
def __getattr__(self, key):
try:
attr = getattr(self._il_module, key)
)
self.__dict__[key] = attr
return attr
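# Usage sketch based on the docstring above: nothing is imported until an
# attribute of the placeholder is first accessed; "mypackage" is the
# docstring's own example name and some_function is hypothetical.
#
#     somesubmod = importlater("mypackage.somemodule", "somesubmod")
#     # later, the first attribute access performs the actual import:
#     somesubmod.some_function()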
-
+
class WeakIdentityMapping(weakref.WeakKeyDictionary):
"""A WeakKeyDictionary with an object identity index.
del self.by_id[key]
except (KeyError, AttributeError): # pragma: no cover
pass # pragma: no cover
-
+
class _keyed_weakref(weakref.ref):
def __init__(self, object, callback):
weakref.ref.__init__(self, object, callback)
class LRUCache(dict):
"""Dictionary with 'squishy' removal of least
recently used items.
-
+
"""
def __init__(self, capacity=100, threshold=.5):
self.capacity = capacity
if message is None:
message = "Call to deprecated function %(func)s"
-
+
def decorate(fn):
return _decorate_with_warning(
fn, exc.SAPendingDeprecationWarning,
name += "()"
return name
return re.sub(r'\:(\w+)\:`~?\.?(.+?)`', repl, text)
-
-
+
+
def _decorate_with_warning(func, wtype, message, docstring_header=None):
"""Wrap a function with a warnings.warn and augmented docstring."""
message = _sanitize_rest(message)
-
+
@decorator
def warned(fn, *args, **kwargs):
warnings.warn(wtype(message), stacklevel=3)
module, but note that the
:class:`~.sqlalchemy.ext.declarative.declared_attr`
decorator should be used for this purpose with declarative.
-
+
"""
-
+
def __init__(self, fget, *arg, **kw):
super(classproperty, self).__init__(fget, *arg, **kw)
self.__doc__ = fget.__doc__
-
+
def __get__(desc, self, cls):
return desc.fget(cls)
file_config.readfp(StringIO.StringIO(base_config))
file_config.read(['test.cfg', os.path.expanduser('~/.satest.cfg')])
config.file_config = file_config
-
+
def configure(self, options, conf):
Plugin.configure(self, options, conf)
self.options = options
-
+
def begin(self):
global testing, requires, util
from sqlalchemy.test import testing, requires
from sqlalchemy import util
-
+
testing.db = db
testing.requires = requires
# Lazy setup of other options (post coverage)
for fn in post_configure:
fn(self.options, file_config)
-
+
def describeTest(self, test):
return ""
-
+
def wantClass(self, cls):
"""Return true if you want the main test selector to collect
tests from this class, false if you don't, and None if you don't
return True
else:
return not self.__should_skip_for(cls)
-
+
def __should_skip_for(self, cls):
if hasattr(cls, '__requires__'):
def test_suite(): return 'ok'
print "'%s' unsupported on DB implementation '%s'" % (
cls.__class__.__name__, testing.db.name)
return True
-
+
if getattr(cls, '__only_on__', None):
spec = testing.db_spec(*util.to_list(cls.__only_on__))
if not spec(testing.db):
print "'%s' unsupported on DB implementation '%s'" % (
cls.__class__.__name__, testing.db.name)
- return True
+ return True
if getattr(cls, '__skip_if__', False):
for c in getattr(cls, '__skip_if__'):
print "'%s' skipped by %s" % (
cls.__class__.__name__, c.__name__)
return True
-
+
for rule in getattr(cls, '__excluded_on__', ()):
if testing._is_excluded(*rule):
print "'%s' unsupported on DB %s version %s" % (
def afterTest(self, test):
testing.resetwarnings()
-
+
def afterContext(self):
testing.global_cleanup_assertions()
-
+
#def handleError(self, test, err):
#pass
yield line
else:
yield line
-
+
def consume_py3k():
yield "# start Py3K"
while lines:
lines.insert(0, line)
break
yield "# end Py3K"
-
+
def consume_py2k():
yield "# start Py2K"
while lines:
extra = {}
if sys.version_info >= (3, 0):
# monkeypatch our preprocessor
- # onto the 2to3 tool.
+ # onto the 2to3 tool.
from sa2to3 import refactor_string
from lib2to3.refactor import RefactoringTool
RefactoringTool.refactor_string = refactor_string
'sqlalchemy = sqlalchemy_nose.noseplugin:NoseSQLAlchemy',
]
},
-
+
long_description = """\
SQLAlchemy is:
class CompileTest(TestBase, AssertsExecutionResults):
@classmethod
def setup_class(cls):
-
+
global t1, t2, metadata
metadata = MetaData()
t1 = Table('t1', metadata,
from sqlalchemy import types
for t in types.type_map.values():
t._type_affinity
-
+
@profiling.function_call_count(69, {'2.4': 44,
'3.0':77, '3.1':77})
def test_insert(self):
def profile_memory(func):
# run the test 50 times. if length of gc.get_objects()
# keeps growing, assert false
-
+
def profile(*args):
gc_collect()
samples = [0 for x in range(0, 50)]
func(*args)
gc_collect()
samples[x] = len(gc.get_objects())
-
+
print "sample gc sizes:", samples
assert len(_sessions) == 0
-
+
for x in samples[-4:]:
if x != samples[-5]:
flatline = False
flatline = True
# object count is bigger than when it started
- if not flatline and samples[-1] > samples[0]:
+ if not flatline and samples[-1] > samples[0]:
for x in samples[1:-2]:
# see if a spike bigger than the endpoint exists
if x > samples[-1]:
_mapper_registry.clear()
class MemUsageTest(EnsureZeroed):
-
+
# ensure a pure growing test trips the assertion
@testing.fails_if(lambda: True)
def test_fixture(self):
class Foo(object):
pass
-
+
x = []
@profile_memory
def go():
x[-1:] = [Foo(), Foo(), Foo(), Foo(), Foo(), Foo()]
go()
-
+
def test_session(self):
metadata = MetaData(testing.db)
'pool_logging_name':'BAR'}
)
sess = create_session(bind=engine)
-
+
a1 = A(col2="a1")
a2 = A(col2="a2")
a3 = A(col2="a3")
def test_many_updates(self):
metadata = MetaData(testing.db)
-
+
wide_table = Table('t', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
*[Column('col%d' % i, Integer) for i in range(10)]
)
-
+
class Wide(object):
pass
-
+
mapper(Wide, wide_table, _compiled_cache_size=10)
-
+
metadata.create_all()
session = create_session()
w1 = Wide()
session.close()
del session
counter = [1]
-
+
@profile_memory
def go():
session = create_session()
session.flush()
session.close()
counter[0] += 1
-
+
try:
go()
finally:
metadata.drop_all()
-
+
@testing.fails_if(lambda : testing.db.dialect.name == 'sqlite' \
and testing.db.dialect.dbapi.version_info >= (2,
5),
go()
finally:
metadata.drop_all()
-
+
def test_mapper_reset(self):
metadata = MetaData(testing.db)
# dont need to clear_mappers()
del B
del A
-
+
metadata.create_all()
try:
go()
go()
finally:
metadata.drop_all()
-
-
+
+
def test_mutable_identity(self):
metadata = MetaData(testing.db)
test_needs_autoincrement=True),
Column('col2', PickleType(comparator=operator.eq))
)
-
+
class Foo(object):
def __init__(self, col2):
self.col2 = col2
-
+
mapper(Foo, table1)
metadata.create_all()
-
+
session = sessionmaker()()
-
+
def go():
obj = [
Foo({'a':1}),
Foo({'k':1}),
Foo({'l':1}),
]
-
+
session.add_all(obj)
session.commit()
-
+
testing.eq_(len(session.identity_map._mutable_attrs), 12)
testing.eq_(len(session.identity_map), 12)
obj = None
gc_collect()
testing.eq_(len(session.identity_map._mutable_attrs), 0)
testing.eq_(len(session.identity_map), 0)
-
+
try:
go()
finally:
class Connection(object):
def rollback(self):
pass
-
+
def close(self):
pass
class ZooMarkTest(TestBase):
"""Runs the ZooMark and squawks if method counts vary from the norm.
-
+
Each test has an associated `call_range`, the total number of
accepted function calls made during the test. The count can vary
between Python 2.4 and 2.5.
-
+
Unlike a unit test, this is an ordered collection of steps. Running
components individually will fail.
-
+
"""
__only_on__ = 'postgresql+psycopg2'
class ZooMarkTest(TestBase):
"""Runs the ZooMark and squawks if method counts vary from the norm.
-
+
Each test has an associated `call_range`, the total number of
accepted function calls made during the test. The count can vary
between Python 2.4 and 2.5.
-
+
Unlike a unit test, this is an ordered collection of steps. Running
components individually will fail.
-
+
"""
__only_on__ = 'postgresql+psycopg2'
Admission=4.95)
session.add(wap)
sdz = Zoo(Name=u'San Diego Zoo', Founded=datetime.date(1835, 9,
- 13), Opens=datetime.time(9, 0, 0), Admission=0)
+ 13), Opens=datetime.time(9, 0, 0), Admission=0)
session.add(sdz)
bio = Zoo(Name=u'Montr\xe9al Biod\xf4me',
Founded=datetime.date(1992, 6, 19),
d = util.frozendict({1:2, 3:4})
for loads, dumps in picklers():
print loads(dumps(d))
-
+
class MemoizedAttrTest(TestBase):
def test_memoized_property(self):
v = val[0]
val[0] += 1
return v
-
+
ne_(Foo.bar, None)
f1 = Foo()
assert 'bar' not in f1.__dict__
eq_(f1.bar(), 20)
eq_(f1.bar(), 20)
eq_(val[0], 21)
-
+
class ColumnCollectionTest(TestBase):
def test_in(self):
cc = sql.ColumnCollection()
class LRUTest(TestBase):
- def test_lru(self):
+ def test_lru(self):
class item(object):
def __init__(self, id):
self.id = id
l[25] = i2
assert 25 in l
assert l[25] is i2
-
+
class ImmutableSubclass(str):
pass
def test_str_with_iter(self):
"""ensure that a str object with an __iter__ method (like in
PyPy) is not interpreted as an iterable.
-
+
"""
class IterString(str):
def __iter__(self):
assert list(util.flatten_iterator([IterString('asdf'),
[IterString('x'), IterString('y')]])) == ['asdf',
'x', 'y']
-
+
class HashOverride(object):
def __init__(self, value=None):
self.value = value
return self.value != other.value
else:
return True
-
+
class HashEqOverride(object):
def __init__(self, value=None):
self.value = value
assert_raises(TypeError, lambda: s1 - [3, 4, 5])
class OrderedIdentitySetTest(TestBase):
-
+
def assert_eq(self, identityset, expected_iterable):
expected = [id(o) for o in expected_iterable]
found = [id(o) for o in identityset]
def test_intersection(self):
elem = object
eq_ = self.assert_eq
-
+
a, b, c, d, e, f, g = \
elem(), elem(), elem(), elem(), elem(), elem(), elem()
-
+
s1 = util.OrderedIdentitySet([a, b, c])
s2 = util.OrderedIdentitySet([d, e, f])
s3 = util.OrderedIdentitySet([a, d, f, g])
d = UserDict.UserDict(a=1,b=2,c=3)
self._ok(d)
# end Py2K
-
+
def test_object(self):
self._notok(object())
class ForcedSet(list):
__emulates__ = set
-
+
for type_ in (set,
# Py2K
sets.Set,
eq_(set(util.class_hierarchy(A)), set((A, B, C, object)))
eq_(set(util.class_hierarchy(B)), set((A, B, C, object)))
-
+
# Py2K
def test_oldstyle_mixin(self):
class A(object):
eq_(set(util.class_hierarchy(Mixin)), set())
eq_(set(util.class_hierarchy(A)), set((A, B, object)))
# end Py2K
-
+
class TestClassProperty(TestBase):
CREATE TABLE A (
ID DOM_ID /* INTEGER NOT NULL */ DEFAULT 0 )
"""
-
+
# the 'default' keyword is lower case here
TABLE_B = """\
CREATE TABLE B (
table_a = Table('a', metadata, autoload=True)
eq_(table_a.c.id.server_default.arg.text, "0")
-
+
def test_lowercase_default_name(self):
metadata = MetaData(testing.db)
table_b = Table('b', metadata, autoload=True)
eq_(table_b.c.id.server_default.arg.text, "0")
-
+
class CompileTest(TestBase, AssertsCompiledSQL):
'UPDATE sometable SET somecolumn=:somecolum'
'n WHERE sometable.somecolumn = '
':somecolumn_1', dict(somecolumn=10))
-
+
# TODO: should this be for *all* MS-SQL dialects ?
def test_mxodbc_binds(self):
"""mxodbc uses MS-SQL native binds, which aren't allowed in
various places."""
-
+
mxodbc_dialect = mxodbc.dialect()
t = table('sometable', column('foo'))
-
+
for expr, compile in [
(
select([literal("x"), literal("y")]),
)
]:
self.assert_compile(expr, compile, dialect=mxodbc_dialect)
-
+
def test_in_with_subqueries(self):
"""Test that when using subqueries in a binary expression
the == and != are changed to IN and NOT IN respectively.
'remotetable_1.value FROM mytable JOIN '
'remote_owner.remotetable AS remotetable_1 '
'ON remotetable_1.rem_id = mytable.myid')
-
+
self.assert_compile(select([table4.c.rem_id,
table4.c.value]).apply_labels().union(select([table1.c.myid,
table1.c.description]).apply_labels()).alias().select(),
"SELECT mytable.myid AS mytable_myid, mytable.description "
"AS mytable_description FROM mytable) AS anon_1"
)
-
-
+
+
def test_delete_schema(self):
metadata = MetaData()
tbl = Table('test', metadata, Column('id', Integer,
and table2.c['col1'].default
assert sequence.start == 2
assert sequence.increment == 3
-
+
@testing.emits_warning("Did not recognize")
@testing.provide_metadata
def test_skip_types(self):
@testing.provide_metadata
def test_indexes_cols(self):
-
+
t1 = Table('t', metadata, Column('x', Integer), Column('y', Integer))
Index('foo', t1.c.x, t1.c.y)
metadata.create_all()
-
+
m2 = MetaData()
t2 = Table('t', m2, autoload=True, autoload_with=testing.db)
-
+
eq_(
set(list(t2.indexes)[0].columns),
set([t2.c['x'], t2.c.y])
@testing.provide_metadata
def test_indexes_cols_with_commas(self):
-
+
t1 = Table('t', metadata,
Column('x, col', Integer, key='x'),
Column('y', Integer)
)
Index('foo', t1.c.x, t1.c.y)
metadata.create_all()
-
+
m2 = MetaData()
t2 = Table('t', m2, autoload=True, autoload_with=testing.db)
-
+
eq_(
set(list(t2.indexes)[0].columns),
set([t2.c['x, col'], t2.c.y])
)
-
+
@testing.provide_metadata
def test_indexes_cols_with_spaces(self):
-
+
t1 = Table('t', metadata, Column('x col', Integer, key='x'),
Column('y', Integer))
Index('foo', t1.c.x, t1.c.y)
metadata.create_all()
-
+
m2 = MetaData()
t2 = Table('t', m2, autoload=True, autoload_with=testing.db)
-
+
eq_(
set(list(t2.indexes)[0].columns),
set([t2.c['x col'], t2.c.y])
)
-
+
class QueryUnicodeTest(TestBase):
__only_on__ = 'mssql'
def test_fetchid_trigger(self):
"""
Verify identity return value on inserting to a trigger table.
-
+
MSSQL's OUTPUT INSERTED clause does not work for the
case of a table having an identity (autoincrement)
primary key column, and which also has a trigger configured
to fire upon each insert and subsequently perform an
insert into a different table.
-
+
SQLAlchemy's MSSQL dialect by default will attempt to
use an OUTPUT_INSERTED clause, which in this case will
raise the following error:
-
+
ProgrammingError: (ProgrammingError) ('42000', 334,
"[Microsoft][SQL Server Native Client 10.0][SQL Server]The
target table 't1' of the DML statement cannot have any enabled
triggers if the statement contains an OUTPUT clause without
INTO clause.", 7748) 'INSERT INTO t1 (descr) OUTPUT inserted.id
VALUES (?)' ('hello',)
-
+
This test verifies a workaround, which is to rely on the
older SCOPE_IDENTITY() call, which still works for this scenario.
To enable the workaround, the Table must be instantiated
dialect = mssql.dialect()
self.ddl_compiler = dialect.ddl_compiler(dialect,
schema.CreateTable(t))
-
+
def _column_spec(self):
return self.ddl_compiler.get_column_specification(self.column)
-
+
def test_that_mssql_default_nullability_emits_null(self):
eq_("test_column VARCHAR NULL", self._column_spec())
connection = dialect.create_connect_args(u)
eq_([['DRIVER={SQL Server};Server=hostspec;Database=database;UI'
'D=username;PWD=password'], {}], connection)
-
+
def test_pymssql_port_setting(self):
dialect = pymssql.dialect()
[[], {'host': 'somehost:5000', 'password': 'tiger',
'user': 'scott', 'database': 'test'}], connection
)
-
+
@testing.only_on(['mssql+pyodbc', 'mssql+pymssql'], "FreeTDS specific test")
def test_bad_freetds_warning(self):
engine = engines.testing_engine()
def teardown(self):
metadata.drop_all()
-
+
@testing.fails_on_everything_except('mssql+pyodbc',
'this is some pyodbc-specific feature')
def test_decimal_notation(self):
elif c.name.startswith('int_n'):
assert not c.autoincrement, name
assert tbl._autoincrement_column is not c, name
-
+
# mxodbc can't handle scope_identity() with DEFAULT VALUES
if testing.db.driver == 'mxodbc':
: False}),
engines.testing_engine(options={'implicit_returning'
: True})]
-
+
for counter, engine in enumerate(eng):
engine.execute(tbl.insert())
if 'int_y' in tbl.c:
class BinaryTest(TestBase, AssertsExecutionResults):
"""Test the Binary and VarBinary types"""
-
+
__only_on__ = 'mssql'
-
+
@classmethod
def setup_class(cls):
global binary_table, MyPickleType
value.stuff = 'this is the right stuff'
return value
- binary_table = Table(
+ binary_table = Table(
'binary_table',
MetaData(testing.db),
Column('primary_id', Integer, Sequence('binary_id_seq',
self.log = []
def connect(self, *args, **kwargs):
return MockConnection(self)
-
+
class MockConnection(object):
def __init__(self, parent):
self.parent = parent
__only_on__ = 'mysql'
__dialect__ = mysql.dialect()
-
+
@testing.uses_deprecated('Manually quoting ENUM value literals')
def test_basic(self):
meta1 = MetaData(testing.db)
# if needed, can break out the eq_() just to check for
# timestamps that are within a few seconds of "now"
# using timedelta.
-
+
now = testing.db.execute("select now()").scalar()
-
+
# TIMESTAMP without NULL inserts current time when passed
# NULL. when not passed, generates 0000-00-00 quite
# annoyingly.
ts_table.insert().execute({'t1':now, 't2':None})
ts_table.insert().execute({'t1':None, 't2':None})
-
+
eq_(
ts_table.select().execute().fetchall(),
[(now, now), (None, now)]
)
finally:
meta.drop_all()
-
+
def test_year(self):
"""Exercise YEAR."""
assert_raises(exc.SQLError, enum_table.insert().execute,
e1=None, e2=None, e3=None, e4=None)
-
+
assert_raises(exc.InvalidRequestError, enum_table.insert().execute,
e1='c', e2='c', e2generic='c', e3='c',
e4='c', e5='c', e5generic='c', e6='c')
eq_(res, expected)
enum_table.drop()
-
+
def test_unicode_enum(self):
unicode_engine = utf8_engine()
metadata = MetaData(unicode_engine)
(u'réveillé', u'drôle') #, u'S’il') # eh ?
finally:
metadata.drop_all()
-
+
def test_enum_compile(self):
e1 = Enum('x', 'y', 'z', name='somename')
t1 = Table('sometable', MetaData(), Column('somecolumn', e1))
"CREATE TABLE sometable (somecolumn "
"VARCHAR(1), CHECK (somecolumn IN ('x', "
"'y', 'z')))")
-
+
@testing.exclude('mysql', '<', (4,), "3.23 can't handle an ENUM of ''")
@testing.uses_deprecated('Manually quoting ENUM value literals')
def test_enum_parse(self):
eq_(
gen(True, ['high_priority', sql.text('sql_cache')]),
'SELECT high_priority sql_cache DISTINCT q')
-
+
def test_backslash_escaping(self):
self.assert_compile(
sql.column('foo').like('bar', escape='\\'),
"foo LIKE %s ESCAPE '\\'",
dialect=dialect
)
-
+
def test_limit(self):
t = sql.table('t', sql.column('col1'), sql.column('col2'))
select([t]).offset(10),
"SELECT t.col1, t.col2 FROM t LIMIT 10, 18446744073709551615"
)
-
+
def test_varchar_raise(self):
for type_ in (
String,
):
type_ = sqltypes.to_instance(type_)
assert_raises(exc.InvalidRequestError, type_.compile, dialect=mysql.dialect())
-
+
def test_update_limit(self):
t = sql.table('t', sql.column('col1'), sql.column('col2'))
def test_sysdate(self):
self.assert_compile(func.sysdate(), "SYSDATE()")
-
+
def test_cast(self):
t = sql.table('t', sql.column('col'))
m = mysql
for type_, expected in specs:
self.assert_compile(cast(t.c.col, type_), expected)
-
+
def test_no_cast_pre_4(self):
self.assert_compile(
cast(Column('foo', Integer), String),
"foo",
dialect=dialect
)
-
+
def test_extract(self):
t = sql.table('t', sql.column('col1'))
self.assert_compile(
select([extract('milliseconds', t.c.col1)]),
"SELECT EXTRACT(millisecond FROM t.col1) AS anon_1 FROM t")
-
+
def test_too_long_index(self):
exp = 'ix_zyrenian_zyme_zyzzogeton_zyzzogeton_zyrenian_zyme_zyz_5cd2'
tname = 'zyrenian_zyme_zyzzogeton_zyzzogeton'
cname = 'zyrenian_zyme_zyzzogeton_zo'
-
+
t1 = Table(tname, MetaData(),
Column(cname, Integer, index=True),
)
ix1 = list(t1.indexes)[0]
-
+
self.assert_compile(
schema.CreateIndex(ix1),
"CREATE INDEX %s "
"ON %s (%s)" % (exp, tname, cname),
dialect=mysql.dialect()
)
-
+
def test_innodb_autoincrement(self):
t1 = Table('sometable', MetaData(), Column('assigned_id',
Integer(), primary_key=True, autoincrement=False),
class SQLModeDetectionTest(TestBase):
__only_on__ = 'mysql'
-
+
def _options(self, modes):
class SetOptions(object):
def first_connect(self, con, record):
cursor = con.cursor()
cursor.execute("set sql_mode='%s'" % (",".join(modes)))
return engines.testing_engine(options={"listeners":[SetOptions()]})
-
+
def test_backslash_escapes(self):
engine = self._options(['NO_BACKSLASH_ESCAPES'])
c = engine.connect()
assert not engine.dialect._backslash_escapes
c.close()
engine.dispose()
-
+
class RawReflectionTest(TestBase):
def setup(self):
dialect = mysql.dialect()
meta.reflect(cx)
eq_(cx.dialect._connection_charset, charset)
cx.close()
-
+
def test_sysdate(self):
d = testing.db.scalar(func.sysdate())
assert isinstance(d, datetime.datetime)
self.assert_compile(
matchtable.c.title.match('somstr'),
"MATCH (matchtable.title) AGAINST (%s IN BOOLEAN MODE)" % format)
-
+
@testing.fails_on('mysql+mysqldb', 'uses format')
@testing.fails_on('mysql+oursql', 'uses format')
@testing.fails_on('mysql+pyodbc', 'uses format')
'col2 FROM sometable ORDER BY '
'sometable.col2) WHERE ROWNUM <= :ROWNUM_1 '
'FOR UPDATE')
-
+
s = select([t],
for_update=True).limit(10).offset(20).order_by(t.c.col2)
self.assert_compile(s,
'sometable.col2) WHERE ROWNUM <= '
':ROWNUM_1) WHERE ora_rn > :ora_rn_1 FOR '
'UPDATE')
-
-
+
+
def test_long_labels(self):
dialect = default.DefaultDialect()
dialect.max_identifier_length = 30
-
+
ora_dialect = oracle.dialect()
-
+
m = MetaData()
a_table = Table(
'thirty_characters_table_xxxxxx',
primary_key=True
)
)
-
+
anon = a_table.alias()
self.assert_compile(select([other_table,
anon]).
'thirty_characters_table__1.id = '
'other_thirty_characters_table_.thirty_char'
'acters_table_id', dialect=ora_dialect)
-
+
def test_outer_join(self):
table1 = table('mytable',
column('myid', Integer),
'mytable.name) WHERE ROWNUM <= :ROWNUM_1) '
'WHERE ora_rn > :ora_rn_1',
dialect=oracle.dialect(use_ansi=False))
-
+
subq = select([table1]).select_from(table1.outerjoin(table2,
table1.c.myid == table2.c.otherid)).alias()
q = select([table3]).select_from(table3.outerjoin(subq,
table3.c.userid == subq.c.myid))
-
+
self.assert_compile(q,
'SELECT thirdtable.userid, '
'thirdtable.otherstuff FROM thirdtable '
'mytable.myid = myothertable.otherid) '
'anon_1 ON thirdtable.userid = anon_1.myid'
, dialect=oracle.dialect(use_ansi=True))
-
+
self.assert_compile(q,
'SELECT thirdtable.userid, '
'thirdtable.otherstuff FROM thirdtable, '
'+)) anon_1 WHERE thirdtable.userid = '
'anon_1.myid(+)',
dialect=oracle.dialect(use_ansi=False))
-
+
q = select([table1.c.name]).where(table1.c.name == 'foo')
self.assert_compile(q,
'SELECT mytable.name FROM mytable WHERE '
'mytable.name) AS bar FROM mytable',
dialect=oracle.dialect(use_ansi=False))
-
+
def test_alias_outer_join(self):
address_types = table('address_types', column('id'),
column('name'))
class CompatFlagsTest(TestBase, AssertsCompiledSQL):
__only_on__ = 'oracle'
-
+
def test_ora8_flags(self):
def server_version_info(self):
return (8, 2, 5)
-
+
dialect = oracle.dialect(dbapi=testing.db.dialect.dbapi)
dialect._get_server_version_info = server_version_info
dialect._get_server_version_info = server_version_info
dialect.initialize(testing.db.connect())
assert dialect.implicit_returning
-
+
def test_default_flags(self):
"""test with no initialization or server version info"""
self.assert_compile(String(50),"VARCHAR(50 CHAR)",dialect=dialect)
self.assert_compile(Unicode(50),"NVARCHAR2(50)",dialect=dialect)
self.assert_compile(UnicodeText(),"NCLOB",dialect=dialect)
-
+
def test_ora10_flags(self):
def server_version_info(self):
return (10, 2, 5)
self.assert_compile(String(50),"VARCHAR(50 CHAR)",dialect=dialect)
self.assert_compile(Unicode(50),"NVARCHAR2(50)",dialect=dialect)
self.assert_compile(UnicodeText(),"NCLOB",dialect=dialect)
-
-
+
+
class MultiSchemaTest(TestBase, AssertsCompiledSQL):
__only_on__ = 'oracle'
-
+
@classmethod
def setup_class(cls):
# currently assuming full DBA privs for the user.
# don't really know how else to go here unless
# we connect as the other user.
-
+
for stmt in """
create table test_schema.parent(
id integer primary key,
data varchar2(50)
);
-
+
create table test_schema.child(
id integer primary key,
data varchar2(50),
-- can't make a ref from local schema to the
-- remote schema's table without this,
--- *and* cant give yourself a grant !
+-- *and* cant give yourself a grant !
-- so we give it to public. ideas welcome.
grant references on test_schema.parent to public;
grant references on test_schema.child to public;
""".split(";"):
if stmt.strip():
testing.db.execute(stmt)
-
+
@classmethod
def teardown_class(cls):
for stmt in """
""".split(";"):
if stmt.strip():
testing.db.execute(stmt)
-
+
def test_create_same_names_explicit_schema(self):
schema = testing.db.dialect.default_schema_name
meta = MetaData(testing.db)
ForeignKeyConstraint(['foo_id'], ['foo.id'],
onupdate='CASCADE'))
assert_raises(exc.SAWarning, bat.create)
-
+
class TypesTest(TestBase, AssertsCompiledSQL):
__only_on__ = 'oracle'
__dialect__ = oracle.OracleDialect()
b = bindparam("foo", u"hello world!")
assert b.type.dialect_impl(dialect).get_dbapi_type(dbapi) == 'STRING'
-
+
@testing.fails_on('+zxjdbc', 'zxjdbc lacks the FIXED_CHAR dbapi type')
def test_fixed_char(self):
m = MetaData(testing.db)
Column('id', Integer, primary_key=True),
Column('data', CHAR(30), nullable=False)
)
-
+
t.create()
try:
t.insert().execute(
eq_(t.select().where(t.c.data=='value 2').execute().fetchall(),
[(2, 'value 2 ')]
)
-
+
m2 = MetaData(testing.db)
t2 = Table('t1', m2, autoload=True)
assert type(t2.c.data.type) is CHAR
eq_(t2.select().where(t2.c.data=='value 2').execute().fetchall(),
[(2, 'value 2 ')]
)
-
+
finally:
t.drop()
-
+
def test_type_adapt(self):
dialect = cx_oracle.dialect()
assert isinstance(x, int)
finally:
t1.drop()
-
+
@testing.provide_metadata
def test_rowid(self):
t = Table('t1', metadata,
s1 = select([t])
s2 = select([column('rowid')]).select_from(s1)
rowid = s2.scalar()
-
+
# the ROWID type is not really needed here,
# as cx_oracle just treats it as a string,
# but we want to make sure the ROWID works...
eq_(s3.select().execute().fetchall(),
[(5, rowid)]
)
-
+
@testing.fails_on('+zxjdbc',
'Not yet known how to pass values of the '
'INTERVAL type')
seconds=5743))
finally:
metadata.drop_all()
-
+
def test_numerics(self):
m = MetaData(testing.db)
t1 = Table('t1', m,
Column('numbercol1', oracle.NUMBER(9)),
Column('numbercol2', oracle.NUMBER(9, 3)),
Column('numbercol3', oracle.NUMBER),
-
+
)
t1.create()
try:
numbercol2=14.85,
numbercol3=15.76
)
-
+
m2 = MetaData(testing.db)
t2 = Table('t1', m2, autoload=True)
finally:
t1.drop()
-
+
@testing.provide_metadata
def test_numerics_broken_inspection(self):
"""Numeric scenarios where Oracle type info is 'broken',
returning us precision, scale of the form (0, 0) or (0, -127).
We convert to Decimal and let int()/float() processors take over.
-
+
"""
-
+
# this test requires cx_oracle 5
-
+
foo = Table('foo', metadata,
Column('idata', Integer),
Column('ndata', Numeric(20, 2)),
Column('fdata', Float()),
)
foo.create()
-
+
foo.insert().execute(
{'idata':5, 'ndata':Decimal("45.6"), 'ndata2':Decimal("45.0"),
'nidata':Decimal('53'), 'fdata':45.68392},
fdata
FROM foo
"""
-
-
+
+
row = testing.db.execute(stmt).fetchall()[0]
eq_([type(x) for x in row], [int, Decimal, Decimal, int, float])
eq_(
row,
(5, Decimal('45.6'), 45, 53, Decimal('45.68392'))
)
-
+
row = testing.db.execute(text(stmt,
typemap={
'idata':Integer(),
eq_(row,
(5, Decimal('45.6'), Decimal('45'), Decimal('53'), 45.683920000000001)
)
-
+
stmt = """
SELECT
anon_1.idata AS anon_1_idata,
(5, 45.6, 45, 53, Decimal('45.68392'))
)
-
+
def test_reflect_dates(self):
metadata = MetaData(testing.db)
Table(
assert isinstance(t1.c.d3.type, TIMESTAMP)
assert t1.c.d3.type.timezone
assert isinstance(t1.c.d4.type, oracle.INTERVAL)
-
+
finally:
metadata.drop_all()
-
+
def test_reflect_raw(self):
types_table = Table('all_types', MetaData(testing.db),
Column('owner', String(30), primary_key=True),
assert isinstance(res, unicode)
finally:
metadata.drop_all()
-
+
def test_char_length(self):
self.assert_compile(VARCHAR(50),"VARCHAR(50 CHAR)")
self.assert_compile(NVARCHAR(50),"NVARCHAR2(50)")
self.assert_compile(CHAR(50),"CHAR(50)")
-
+
metadata = MetaData(testing.db)
t1 = Table('t1', metadata,
Column("c1", VARCHAR(50)),
eq_(row['bindata'].read(), 'this is binary')
finally:
t.drop(engine)
-
+
class EuroNumericTest(TestBase):
"""test the numeric output_type_handler when using non-US locale for NLS_LANG."""
-
+
__only_on__ = 'oracle+cx_oracle'
-
+
def setup(self):
self.old_nls_lang = os.environ.get('NLS_LANG', False)
os.environ['NLS_LANG'] = "GERMAN"
self.engine = testing_engine()
-
+
def teardown(self):
if self.old_nls_lang is not False:
os.environ['NLS_LANG'] = self.old_nls_lang
else:
del os.environ['NLS_LANG']
self.engine.dispose()
-
+
@testing.provide_metadata
def test_output_type_handler(self):
for stmt, exp, kw in [
exp
)
assert type(test_exp) is type(exp)
-
-
+
+
class DontReflectIOTTest(TestBase):
"""test that index overflow tables aren't included in
table_names."""
PCTTHRESHOLD 20
OVERFLOW TABLESPACE users
""")
-
+
def teardown(self):
testing.db.execute("drop table admin_docindex")
-
+
def test_reflect_all(self):
m = MetaData(testing.db)
m.reflect()
set(t.name for t in m.tables.values()),
set(['admin_docindex'])
)
-
+
class BufferedColumnTest(TestBase, AssertsCompiledSQL):
__only_on__ = 'oracle'
class UnsupportedIndexReflectTest(TestBase):
__only_on__ = 'oracle'
-
+
def setup(self):
global metadata
metadata = MetaData(testing.db)
Column('data', String(20), primary_key=True)
)
metadata.create_all()
-
+
def teardown(self):
metadata.drop_all()
-
+
def test_reflect_functional_index(self):
testing.db.execute('CREATE INDEX DATA_IDX ON '
'TEST_INDEX_REFLECT (UPPER(DATA))')
m2 = MetaData(testing.db)
t2 = Table('test_index_reflect', m2, autoload=True)
-
+
class RoundTripIndexTest(TestBase):
__only_on__ = 'oracle'
metadata.drop_all()
-
+
class SequenceTest(TestBase, AssertsCompiledSQL):
def test_basic(self):
seq = Sequence('My_Seq', schema='Some_Schema')
assert dialect.identifier_preparer.format_sequence(seq) \
== '"Some_Schema"."My_Seq"'
-
-
+
+
class ExecuteTest(TestBase):
__only_on__ = 'oracle'
def test_basic(self):
eq_(testing.db.execute('/*+ this is a comment */ SELECT 1 FROM '
'DUAL').fetchall(), [(1, )])
-
+
def test_sequences_are_integers(self):
seq = Sequence('foo_seq')
seq.create(testing.db)
assert type(val) is int
finally:
seq.drop(testing.db)
-
+
@testing.provide_metadata
def test_limit_offset_for_update(self):
# oracle can't actually do the ROWNUM thing with FOR UPDATE
# very well.
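# A plain LIMIT still works, but once OFFSET forces the nested ROWNUM
# subquery, FOR UPDATE on the outer query raises ORA-02014, which the
# final assertion below expects.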
-
+
t = Table('t1', metadata, Column('id', Integer, primary_key=True),
Column('data', Integer)
)
metadata.create_all()
-
+
t.insert().execute(
{'id':1, 'data':1},
{'id':2, 'data':7},
{'id':4, 'data':15},
{'id':5, 'data':32},
)
-
+
# here, we can't use ORDER BY.
eq_(
t.select(for_update=True).limit(2).execute().fetchall(),
"ORA-02014",
t.select(for_update=True).limit(2).offset(3).execute
)
-
-
+
+
'RETURNING length(mytable.name) AS length_1'
, dialect=dialect)
-
+
def test_insert_returning(self):
dialect = postgresql.dialect()
table1 = table('mytable',
'INSERT INTO mytable (name) VALUES '
'(%(name)s) RETURNING length(mytable.name) '
'AS length_1', dialect=dialect)
-
+
@testing.uses_deprecated('.*argument is deprecated. Please use '
'statement.returning.*')
def test_old_returning_names(self):
'INSERT INTO mytable (name) VALUES '
'(%(name)s) RETURNING mytable.myid, '
'mytable.name', dialect=dialect)
-
+
def test_create_partial_index(self):
m = MetaData()
tbl = Table('testtbl', m, Column('data', Integer))
{'data':52},
{'data':9},
)
-
+
@testing.resolve_artifact_names
def test_float_coercion(self):
for type_, result in [
])
).scalar()
eq_(round_decimal(ret, 9), result)
-
+
@testing.provide_metadata
def test_arrays(self):
t1 = Table('t', metadata,
row,
([5], [5], [6], [decimal.Decimal("6.4")])
)
-
+
class EnumTest(TestBase, AssertsExecutionResults, AssertsCompiledSQL):
__only_on__ = 'postgresql'
"CREATE TABLE sometable (somecolumn "
"VARCHAR(1), CHECK (somecolumn IN ('x', "
"'y', 'z')))")
-
+
@testing.fails_on('postgresql+zxjdbc',
'zxjdbc fails on ENUM: column "XXX" is of type '
'XXX but expression is of type character varying')
finally:
metadata.drop_all()
metadata.drop_all()
-
+
def test_name_required(self):
metadata = MetaData(testing.db)
etype = Enum('four', 'five', 'six', metadata=metadata)
Enum(u'réveillé', u'drôle', u'S’il',
name='onetwothreetype'))
)
-
+
metadata.create_all()
try:
t1.insert().execute(value=u'drôle')
metadata.drop_all()
assert not testing.db.dialect.has_type(testing.db,
'fourfivesixtype')
-
+
def test_no_support(self):
def server_version_info(self):
return (8, 2)
-
+
e = engines.testing_engine()
dialect = e.dialect
dialect._get_server_version_info = server_version_info
-
+
assert dialect.supports_native_enum
e.connect()
assert not dialect.supports_native_enum
-
+
# initialize is called again on new pool
e.dispose()
e.connect()
assert not dialect.supports_native_enum
-
+
def test_reflection(self):
metadata = MetaData(testing.db)
metadata.drop_all()
class NumericInterpretationTest(TestBase):
-
-
+
+
def test_numeric_codes(self):
from sqlalchemy.dialects.postgresql import pg8000, psycopg2, base
from decimal import Decimal
-
+
for dialect in (pg8000.dialect(), psycopg2.dialect()):
-
+
typ = Numeric().dialect_impl(dialect)
for code in base._INT_TYPES + base._FLOAT_TYPES + \
base._DECIMAL_TYPES:
if proc is not None:
val = proc(val)
assert val in (23.7, Decimal("23.7"))
-
+
class InsertTest(TestBase, AssertsExecutionResults):
__only_on__ = 'postgresql'
assert_raises_message(exc.DBAPIError,
'violates not-null constraint',
eng.execute, t2.insert())
-
+
def test_sequence_insert(self):
table = Table('testtable', metadata, Column('id', Integer,
Sequence('my_seq'), primary_key=True),
con.execute("DROP TABLE enum_test")
con.execute("DROP DOMAIN enumdomain")
con.execute("DROP TYPE testtype")
-
+
def test_table_is_reflected(self):
metadata = MetaData(testing.db)
table = Table('testtable', metadata, autoload=True)
"Reflected default value didn't equal expected value")
assert not table.columns.answer.nullable, \
'Expected reflected column to not be nullable.'
-
+
def test_enum_domain_is_reflected(self):
metadata = MetaData(testing.db)
table = Table('enum_test', metadata, autoload=True)
table.c.data.type.enums,
('test', )
)
-
+
def test_table_is_reflected_test_schema(self):
metadata = MetaData(testing.db)
table = Table('testtable', metadata, autoload=True,
isolation_level='SERIALIZABLE')
eq_(eng.execute('show transaction isolation level').scalar(),
'serializable')
-
+
# check that it stays
conn = eng.connect()
eq_(conn.execute('show transaction isolation level').scalar(),
eq_(conn.execute('show transaction isolation level').scalar(),
'serializable')
conn.close()
-
+
eng = create_engine(testing.db.url, isolation_level='FOO')
if testing.db.driver == 'zxjdbc':
exception_cls = eng.dialect.dbapi.Error
stmt = text("select cast('hi' as char) as hi", typemap={'hi'
: Numeric})
assert_raises(exc.InvalidRequestError, testing.db.execute, stmt)
-
+
class TimezoneTest(TestBase):
"""Test timezone-aware datetimes.
-
+
psycopg will return a datetime with a tzinfo attached to it, if
postgresql returns it. python then will not let you compare a
datetime with a tzinfo to a datetime that doesn't have one. this
eq_(t2.c.c6.type.timezone, True)
finally:
t1.drop()
-
+
class ArrayTest(TestBase, AssertsExecutionResults):
__only_on__ = 'postgresql'
foo.id = 2
sess.add(foo)
sess.flush()
-
+
@testing.provide_metadata
def test_tuple_flag(self):
assert_raises_message(
exc.ArgumentError,
"mutable must be set to False if as_tuple is True.",
postgresql.ARRAY, Integer, as_tuple=True)
-
+
t1 = Table('t1', metadata,
Column('id', Integer, primary_key=True),
Column('data', postgresql.ARRAY(String(5), as_tuple=True, mutable=False)),
testing.db.execute(t1.insert(), id=1, data=["1","2","3"], data2=[5.4, 5.6])
testing.db.execute(t1.insert(), id=2, data=["4", "5", "6"], data2=[1.0])
testing.db.execute(t1.insert(), id=3, data=[["4", "5"], ["6", "7"]], data2=[[5.4, 5.6], [1.0, 1.1]])
-
+
r = testing.db.execute(t1.select().order_by(t1.c.id)).fetchall()
eq_(
r,
set(row[1] for row in r),
set([('1', '2', '3'), ('4', '5', '6'), (('4', '5'), ('6', '7'))])
)
-
-
-
+
+
+
class TimestampTest(TestBase, AssertsExecutionResults):
__only_on__ = 'postgresql'
def test_timestamp(self):
engine = testing.db
connection = engine.connect()
-
+
s = select(["timestamp '2007-12-25'"])
result = connection.execute(s).first()
eq_(result[0], datetime.datetime(2007, 12, 25, 0, 0))
class SpecialTypesTest(TestBase, ComparesTables):
"""test DDL and reflection of PG-specific types """
-
+
__only_on__ = 'postgresql'
__excluded_on__ = (('postgresql', '<', (8, 3, 0)),)
-
+
@classmethod
def setup_class(cls):
global metadata, table
metadata = MetaData(testing.db)
-
+
# create these types so that we can issue
# special SQL92 INTERVAL syntax
class y2m(types.UserDefinedType, postgresql.INTERVAL):
class d2s(types.UserDefinedType, postgresql.INTERVAL):
def get_col_spec(self):
return "INTERVAL DAY TO SECOND"
-
+
table = Table('sometable', metadata,
Column('id', postgresql.PGUuid, primary_key=True),
Column('flag', postgresql.PGBit),
Column('month_interval', d2s()),
Column('precision_interval', postgresql.INTERVAL(precision=3))
)
-
+
metadata.create_all()
-
+
# cheat so that the "strict type check"
# works
table.c.year_interval.type = postgresql.INTERVAL()
table.c.month_interval.type = postgresql.INTERVAL()
-
+
@classmethod
def teardown_class(cls):
metadata.drop_all()
-
+
def test_reflection(self):
m = MetaData(testing.db)
t = Table('sometable', m, autoload=True)
-
+
self.assert_tables_equal(table, t, strict_types=True)
assert t.c.plain_interval.type.precision is None
assert t.c.precision_interval.type.precision == 3
class UUIDTest(TestBase):
"""Test the bind/return values of the UUID type."""
-
+
__only_on__ = 'postgresql'
-
+
@testing.requires.python25
@testing.fails_on('postgresql+pg8000', 'No support for UUID type')
def test_uuid_string(self):
str(uuid.uuid4()),
str(uuid.uuid4())
)
-
+
@testing.requires.python25
@testing.fails_on('postgresql+pg8000', 'No support for UUID type')
def test_uuid_uuid(self):
uuid.uuid4(),
uuid.uuid4()
)
-
+
def test_no_uuid_available(self):
from sqlalchemy.dialects.postgresql import base
uuid_type = base._python_UUID
)
finally:
base._python_UUID = uuid_type
-
+
def setup(self):
self.conn = testing.db.connect()
trans = self.conn.begin()
-
+
def teardown(self):
self.conn.close()
-
+
def _test_round_trip(self, utable, value1, value2):
utable.create(self.conn)
self.conn.execute(utable.insert(), {'data':value1})
)
eq_(r.fetchone()[0], value2)
eq_(r.fetchone(), None)
-
-
+
+
class MatchTest(TestBase, AssertsCompiledSQL):
__only_on__ = 'postgresql'
class TupleTest(TestBase):
__only_on__ = 'postgresql'
-
+
def test_tuple_containment(self):
-
+
for test, exp in [
([('a', 'b')], True),
([('a', 'c')], False),
def test_boolean(self):
"""Test that the boolean only treats 1 as True
-
+
"""
meta = MetaData(testing.db)
def test_extra_reserved_words(self):
"""Tests reserved words in identifiers.
-
+
'true', 'false', and 'column' are undocumented reserved words
when used as column identifiers (as of 3.5.1). Covering them
here to ensure they remain in place if the dialect's
Column('id', Integer, primary_key=True),
Column('t1_id', Integer, ForeignKey('master.t1.id')),
)
-
+
# schema->schema, generate REFERENCES with no schema name
self.assert_compile(
schema.CreateTable(t2),
"t1_id INTEGER, "
"PRIMARY KEY (id), "
"FOREIGN KEY(t1_id) REFERENCES t1 (id)"
- ")"
+ ")"
)
# schema->different schema, don't generate REFERENCES
"id INTEGER NOT NULL, "
"t1_id INTEGER, "
"PRIMARY KEY (id)"
- ")"
+ ")"
)
# same for local schema
"id INTEGER NOT NULL, "
"t1_id INTEGER, "
"PRIMARY KEY (id)"
- ")"
+ ")"
)
def setup_class(cls):
cls.engine = cls.create_engine()
super(AltEngineTest, cls).setup_class()
-
+
@classmethod
def teardown_class(cls):
cls.engine.dispose()
cls.engine = None
super(AltEngineTest, cls).teardown_class()
-
+
@classmethod
def create_engine(cls):
raise NotImplementedError
assert 'klptzyxm' not in strings
assert 'xyzzy' in strings
assert 'fnord' in strings
-
+
def test_conditional_constraint(self):
metadata, users, engine = self.metadata, self.users, self.engine
nonpg_mock = engines.mock_engine(dialect_name='sqlite')
metadata.drop_all(bind=pg_mock)
strings = ' '.join(str(x) for x in pg_mock.mock)
assert 'my_test_constraint' in strings
-
+
def test_metadata(self):
metadata, engine = self.metadata, self.engine
DDL('mxyzptlk').execute_at('before-create', metadata)
@engines.close_first
def teardown(self):
testing.db.connect().execute(users.delete())
-
+
@classmethod
def teardown_class(cls):
metadata.drop_all()
eng.update_execution_options(foo='hoho')
conn = eng.contextual_connect()
eq_(conn._execution_options['foo'], 'hoho')
-
+
class CompiledCacheTest(TestBase):
@classmethod
@engines.close_first
def teardown(self):
testing.db.connect().execute(users.delete())
-
+
@classmethod
def teardown_class(cls):
metadata.drop_all()
-
+
def test_cache(self):
conn = testing.db.connect()
cache = {}
cached_conn = conn.execution_options(compiled_cache=cache)
-
+
ins = users.insert()
cached_conn.execute(ins, {'user_name':'u1'})
cached_conn.execute(ins, {'user_name':'u2'})
cached_conn.execute(ins, {'user_name':'u3'})
assert len(cache) == 1
eq_(conn.execute("select count(*) from users").scalar(), 3)
-
+
class LogTest(TestBase):
def _test_logger(self, eng, eng_name, pool_name):
buf = logging.handlers.BufferingHandler(100)
]
for log in logs:
log.addHandler(buf)
-
+
eq_(eng.logging_name, eng_name)
eq_(eng.pool.logging_name, pool_name)
eng.execute(select([1]))
for log in logs:
log.removeHandler(buf)
-
+
names = set([b.name for b in buf.buffer])
assert 'sqlalchemy.engine.base.Engine.%s' % (eng_name,) in names
assert 'sqlalchemy.pool.%s.%s' % (eng.pool.__class__.__name__,
pool_name) in names
-
+
def test_named_logger(self):
options = {'echo':'debug', 'echo_pool':'debug',
'logging_name':'myenginename',
}
eng = engines.testing_engine(options=options)
self._test_logger(eng, "myenginename", "mypoolname")
-
+
eng.dispose()
self._test_logger(eng, "myenginename", "mypoolname")
-
+
def test_unnamed_logger(self):
eng = engines.testing_engine(options={'echo': 'debug',
"0x...%s" % hex(id(eng))[-4:],
"0x...%s" % hex(id(eng.pool))[-4:],
)
-
+
class ResultProxyTest(TestBase):
def test_nontuple_row(self):
"""ensure the C version of BaseRowProxy handles
duck-type-dependent rows."""
-
+
from sqlalchemy.engine import RowProxy
class MyList(object):
engine = engines.testing_engine()
metadata.bind = engine
-
+
t = Table('t1', metadata,
Column('data', String(10))
)
@property
def rowcount(self):
assert False
-
+
execution_ctx_cls = engine.dialect.execution_ctx_cls
engine.dialect.execution_ctx_cls = type("FakeCtx",
(BreakRowcountMixin,
assert_raises(AssertionError, t.delete().execute)
finally:
engine.dialect.execution_ctx_cls = execution_ctx_cls
-
+
@testing.requires.python26
def test_rowproxy_is_sequence(self):
import collections
row = RowProxy(object(), ['value'], [None], {'key'
: (None, 0), 0: (None, 0)})
assert isinstance(row, collections.Sequence)
-
+
@testing.requires.cextensions
def test_row_c_sequence_check(self):
import csv
import collections
from StringIO import StringIO
-
+
metadata = MetaData()
metadata.bind = 'sqlite://'
users = Table('users', metadata,
# csv performs PySequenceCheck call
writer.writerow(row)
assert s.getvalue().strip() == '1,Test'
-
+
class ProxyConnectionTest(TestBase):
@testing.fails_on('firebird', 'Data type unknown')
def test_proxy(self):
-
+
stmts = []
cursor_stmts = []
-
+
class MyProxy(ConnectionProxy):
def execute(
self,
):
cursor_stmts.append((str(statement), parameters, None))
return execute(cursor, statement, parameters, context)
-
+
def assert_stmts(expected, received):
for stmt, params, posn in expected:
if not received:
# be incorrect
assert_stmts(compiled, stmts)
assert_stmts(cursor, cursor_stmts)
-
+
def test_options(self):
track = []
class TrackProxy(ConnectionProxy):
c3 = c2.execution_options(bar='bat')
eq_(c3._execution_options, {'foo':'bar', 'bar':'bat'})
eq_(track, ['execute', 'cursor_execute'])
-
-
+
+
def test_transactional(self):
track = []
class TrackProxy(ConnectionProxy):
trans = conn.begin()
conn.execute(select([1]))
trans.commit()
-
+
eq_(track, [
'begin',
'execute',
'cursor_execute',
'commit',
])
-
+
@testing.requires.savepoints
@testing.requires.two_phase_transactions
def test_transactional_advanced(self):
engine = engines.testing_engine(options={'proxy':TrackProxy()})
conn = engine.connect()
-
+
trans = conn.begin()
trans2 = conn.begin_nested()
conn.execute(select([1]))
conn.execute(select([1]))
trans2.commit()
trans.rollback()
-
+
trans = conn.begin_twophase()
conn.execute(select([1]))
trans.prepare()
t2 = Table('t2', metadata, Column('x', Integer), schema='foo')
t3 = Table('t2', MetaData(), Column('x', Integer))
t4 = Table('t1', MetaData(), Column('x', Integer), schema='foo')
-
+
assert "t1" in metadata
assert "foo.t2" in metadata
assert "t2" not in metadata
assert t2 in metadata
assert t3 not in metadata
assert t4 not in metadata
-
+
def test_uninitialized_column_copy(self):
for col in [
Column('foo', String(), nullable=False),
cx = c1.copy()
t = Table('foo%d' % i, m, cx)
eq_(msgs, ['attach foo0.foo', 'attach foo1.foo', 'attach foo2.foo'])
-
-
+
+
def test_dupe_tables(self):
metadata = MetaData()
t1 = Table('table1', metadata,
"Table object."
finally:
metadata.drop_all()
-
+
def test_fk_copy(self):
c1 = Column('foo', Integer)
c2 = Column('bar', Integer)
m = MetaData()
t1 = Table('t', m, c1, c2)
-
+
kw = dict(onupdate="X",
ondelete="Y", use_alter=True, name='f1',
deferrable="Z", initially="Q", link_to_name=True)
-
+
fk1 = ForeignKey(c1, **kw)
fk2 = ForeignKeyConstraint((c1,), (c2,), **kw)
-
+
t1.append_constraint(fk2)
fk1c = fk1.copy()
fk2c = fk2.copy()
-
+
for k in kw:
eq_(getattr(fk1c, k), kw[k])
eq_(getattr(fk2c, k), kw[k])
-
+
def test_fk_construct(self):
c1 = Column('foo', Integer)
c2 = Column('bar', Integer)
t1 = Table('t', m, c1, c2)
fk1 = ForeignKeyConstraint(('foo', ), ('bar', ), table=t1)
assert fk1 in t1.constraints
-
+
@testing.exclude('mysql', '<', (4, 1, 1), 'early types are squirrely')
def test_to_metadata(self):
meta = MetaData()
assert not c.columns.contains_column(table.c.name)
finally:
meta.drop_all(testing.db)
-
+
def test_tometadata_with_schema(self):
meta = MetaData()
Column('data2', Integer),
)
Index('multi',table.c.data1,table.c.data2),
-
+
meta2 = MetaData()
table_c = table.tometadata(meta2)
return [i.name,i.unique] + \
sorted(i.kwargs.items()) + \
i.columns.keys()
-
+
eq_(
sorted([_get_key(i) for i in table.indexes]),
sorted([_get_key(i) for i in table_c.indexes])
@emits_warning("Table '.+' already exists within the given MetaData")
def test_tometadata_already_there(self):
-
+
meta1 = MetaData()
table1 = Table('mytable', meta1,
Column('myid', Integer, primary_key=True),
)
meta3 = MetaData()
-
+
table_c = table1.tometadata(meta2)
table_d = table2.tometadata(meta2)
c = Table('c', meta, Column('foo', Integer))
d = Table('d', meta, Column('foo', Integer))
e = Table('e', meta, Column('foo', Integer))
-
+
e.add_is_dependent_on(c)
a.add_is_dependent_on(b)
b.add_is_dependent_on(d)
meta.sorted_tables,
[d, b, a, c, e]
)
-
-
+
+
def test_tometadata_strip_schema(self):
meta = MetaData()
table1 = Table("temporary_table_1", MetaData(),
Column("col1", Integer),
prefixes = ["TEMPORARY"])
-
+
self.assert_compile(
schema.CreateTable(table1),
"CREATE TEMPORARY TABLE temporary_table_1 (col1 INTEGER)"
exec ('from sqlalchemy.dialects import %s\ndialect = '
'%s.dialect()' % (name, name), globals())
eq_(dialect.name, name)
-
+
class CreateEngineTest(TestBase):
"""test that create_engine arguments of different types get
propagated properly"""
mock_dbapi = MockDBAPI()
-class PoolTestBase(TestBase):
+class PoolTestBase(TestBase):
def setup(self):
pool.clear_managers()
-
+
@classmethod
def teardown_class(cls):
pool.clear_managers()
expected = [(1, )]
for row in cursor:
eq_(row, expected.pop(0))
-
+
def test_no_connect_on_recreate(self):
def creator():
raise Exception("no creates allowed")
-
+
for cls in (pool.SingletonThreadPool, pool.StaticPool,
pool.QueuePool, pool.NullPool, pool.AssertionPool):
p = cls(creator=creator)
p.dispose()
p.recreate()
-
+
mock_dbapi = MockDBAPI()
p = cls(creator=mock_dbapi.connect)
conn = p.connect()
mock_dbapi.throw_error = True
p.dispose()
p.recreate()
-
-
+
+
def testthreadlocal_del(self):
self._do_testthreadlocal(useclose=False)
snoop.assert_total(1, 1, 2, 1)
c.close()
snoop.assert_total(1, 1, 2, 2)
-
+
def test_listeners_callables(self):
dbapi = MockDBAPI()
c1.close()
lazy_gc()
assert not pool._refs
-
+
def test_timeout(self):
p = pool.QueuePool(creator=mock_dbapi.connect, pool_size=3,
max_overflow=0, use_threadlocal=False,
timeouts.append(time.time() - now)
continue
time.sleep(4)
- c1.close()
+ c1.close()
threads = []
for i in xrange(10):
def _test_overflow(self, thread_count, max_overflow):
gc_collect()
-
+
def creator():
time.sleep(.05)
return mock_dbapi.connect()
th.join()
self.assert_(max(peaks) <= max_overflow)
-
+
lazy_gc()
assert not pool._refs
def setup(self):
global db, dbapi
-
+
class MDBAPI(MockDBAPI):
def connect(self, *args, **kwargs):
return MConn(self)
-
+
class MConn(MockConnection):
def cursor(self):
return MCursor(self)
db = tsa.create_engine(
'postgresql://foo:bar@localhost/test',
module=dbapi, _initialize=False)
-
+
def test_cursor_explode(self):
conn = db.connect()
result = conn.execute("select foo")
result.close()
conn.close()
-
+
def teardown(self):
db.dispose()
-
+
engine = None
class RealReconnectTest(TestBase):
def setup(self):
conn = engine.connect()
conn.invalidate()
conn.invalidate()
-
+
def test_explode_in_initializer(self):
engine = engines.testing_engine()
def broken_initialize(connection):
connection.execute("select fake_stuff from _fake_table")
-
+
engine.dialect.initialize = broken_initialize
-
+
# raises a DBAPIError, not an AttributeError
assert_raises(exc.DBAPIError, engine.connect)
engine.dispose()
p1 = engine.pool
-
+
def is_disconnect(e):
return True
-
+
engine.dialect.is_disconnect = is_disconnect
# invalidate() also doesn't screw up
assert_raises(exc.DBAPIError, engine.connect)
-
+
# pool was recreated
assert engine.pool is not p1
assert conn.invalidated
eq_(conn.execute(select([1])).scalar(), 1)
assert not conn.invalidated
-
+
@testing.fails_on('+informixdb',
"Wrong error thrown, fix in informixdb?")
def test_close(self):
conn = engine.contextual_connect()
eq_(conn.execute(select([1])).scalar(), 1)
conn.close()
-
+
meta, table, engine = None, None, None
class InvalidateDuringResultTest(TestBase):
def setup(self):
self.assert_tables_equal(addresses, reflected_addresses)
finally:
meta.drop_all()
-
+
def test_two_foreign_keys(self):
meta = MetaData(testing.db)
t1 = Table(
assert t1r.c.t3id.references(t3r.c.id)
finally:
meta.drop_all()
-
+
def test_nonexistent(self):
meta = MetaData(testing.db)
assert_raises(sa.exc.NoSuchTableError, Table, 'nonexistent',
meta, autoload=True)
-
+
def test_include_columns(self):
meta = MetaData(testing.db)
foo = Table('foo', meta, *[Column(n, sa.String(30))
@testing.emits_warning(r".*omitted columns")
def test_include_columns_indexes(self):
m = MetaData(testing.db)
-
+
t1 = Table('t1', m, Column('a', sa.Integer), Column('b', sa.Integer))
sa.Index('foobar', t1.c.a, t1.c.b)
sa.Index('bat', t1.c.a)
def test_autoincrement_col(self):
"""test that 'autoincrement' is reflected according to sqla's policy.
-
+
Don't mark this test as unsupported for any backend !
-
+
(technically it fails with MySQL InnoDB since "id" comes before "id2")
-
+
"""
-
+
meta = MetaData(testing.db)
t1 = Table('test', meta,
Column('id', sa.Integer, primary_key=True),
m2 = MetaData(testing.db)
t1a = Table('test', m2, autoload=True)
assert t1a._autoincrement_column is t1a.c.id
-
+
t2a = Table('test2', m2, autoload=True)
assert t2a._autoincrement_column is t2a.c.id2
-
+
finally:
meta.drop_all()
-
+
def test_unknown_types(self):
meta = MetaData(testing.db)
t = Table("test", meta,
u4 = Table('users', meta4,
Column('id', sa.Integer, key='u_id', primary_key=True),
autoload=True)
-
+
a4 = Table(
'addresses',
meta4,
assert len(a4.constraints) == 2
finally:
meta.drop_all()
-
+
@testing.provide_metadata
def test_override_composite_fk(self):
"""Test double-remove of composite foreign key, when replaced."""
autoload=True,
autoload_with=testing.db
)
-
+
assert b1.c.x is c1
assert b1.c.y is c2
assert f1 in b1.constraints
assert len(b1.constraints) == 2
-
-
-
+
+
+
def test_override_keys(self):
"""test that columns can be overridden with a 'key',
and that ForeignKey targeting during reflection still works."""
-
+
meta = MetaData(testing.db)
a1 = Table('a', meta,
assert b2.c.y.references(a2.c.x1)
finally:
meta.drop_all()
-
+
def test_nonreflected_fk_raises(self):
"""test that a NoReferencedColumnError is raised when reflecting
a table with an FK to another table which has not included the target
column in its reflection.
-
+
"""
meta = MetaData(testing.db)
a1 = Table('a', meta,
m2 = MetaData(testing.db)
a2 = Table('a', m2, include_columns=['z'], autoload=True)
b2 = Table('b', m2, autoload=True)
-
+
assert_raises(sa.exc.NoReferencedColumnError, a2.join, b2)
finally:
meta.drop_all()
-
-
+
+
@testing.exclude('mysql', '<', (4, 1, 1), 'innodb funkiness')
def test_override_existing_fk(self):
"""test that you can override columns and specify new foreign
autoload=True)
u2 = Table('users', meta2, autoload=True)
s = sa.select([a2])
-
+
assert s.c.user_id is not None
assert len(a2.foreign_keys) == 1
assert len(a2.c.user_id.foreign_keys) == 1
assert list(a2.c.user_id.foreign_keys)[0].parent \
is a2.c.user_id
assert u2.join(a2).onclause.compare(u2.c.id == a2.c.user_id)
-
+
meta2 = MetaData(testing.db)
u2 = Table('users', meta2, Column('id', sa.Integer,
primary_key=True), autoload=True)
primary_key=True), Column('user_id', sa.Integer,
sa.ForeignKey('users.id')), autoload=True)
s = sa.select([a2])
-
+
assert s.c.user_id is not None
assert len(a2.foreign_keys) == 1
assert len(a2.c.user_id.foreign_keys) == 1
assert list(a2.c.user_id.foreign_keys)[0].parent \
is a2.c.user_id
assert u2.join(a2).onclause.compare(u2.c.id == a2.c.user_id)
-
+
finally:
meta.drop_all()
Column('pkg_id', sa.Integer, sa.ForeignKey('pkgs.pkg_id')),
Column('slot', sa.String(128)),
)
-
+
assert_raises_message(sa.exc.InvalidRequestError,
"Foreign key assocated with column 'slots.pkg_id' "
"could not find table 'pkgs' with which to generate "
else:
check_col = 'true'
quoter = meta.bind.dialect.identifier_preparer.quote_identifier
-
+
table_b = Table('false', meta,
Column('create', sa.Integer, primary_key=True),
Column('true', sa.Integer,sa.ForeignKey('select.not')),
sa.CheckConstraint('%s <> 1'
% quoter(check_col), name='limit')
)
-
+
table_c = Table('is', meta,
Column('or', sa.Integer, nullable=False, primary_key=True),
Column('join', sa.Integer, nullable=False, primary_key=True),
m2 = MetaData(testing.db)
users_v = Table("users_v", m2, autoload=True)
addresses_v = Table("email_addresses_v", m2, autoload=True)
-
+
for c1, c2 in zip(users.c, users_v.c):
eq_(c1.name, c2.name)
self.assert_types_base(c1, c2)
-
+
for c1, c2 in zip(addresses.c, addresses_v.c):
eq_(c1.name, c2.name)
self.assert_types_base(c1, c2)
finally:
- dropViews(metadata.bind)
-
+ dropViews(metadata.bind)
+
@testing.provide_metadata
def test_reflect_all_with_views(self):
users, addresses = createTables(metadata, None)
metadata.create_all()
createViews(metadata.bind, None)
m2 = MetaData(testing.db)
-
+
m2.reflect(views=False)
eq_(
set(m2.tables),
set([u'users', u'email_addresses'])
)
-
+
m2 = MetaData(testing.db)
m2.reflect(views=True)
eq_(
)
finally:
dropViews(metadata.bind)
-
+
class CreateDropTest(TestBase):
@classmethod
sa.Sequence('user_id_seq', optional=True),
primary_key=True),
Column('user_name',sa.String(40)))
-
+
addresses = Table('email_addresses', metadata,
Column('address_id', sa.Integer,
sa.Sequence('address_id_seq', optional=True),
table_names = [t.name for t in tables]
ua = [n for n in table_names if n in ('users', 'email_addresses')]
oi = [n for n in table_names if n in ('orders', 'items')]
-
+
eq_(ua, ['users', 'email_addresses'])
eq_(oi, ['orders', 'items'])
-
+
def testcheckfirst(self):
try:
schema = engine.dialect.default_schema_name
assert bool(schema)
-
+
metadata = MetaData(engine)
table1 = Table('table1', metadata,
Column('col1', sa.Integer, primary_key=True),
schema=test_schema), False)
eq_(testing.db.dialect.has_sequence(testing.db, 'user_id_seq'),
False)
-
+
# Tests related to engine.reflection
@testing.requires.schemas
def test_get_schema_names(self):
insp = Inspector(testing.db)
-
+
self.assert_('test_schema' in insp.get_schema_names())
def test_dialect_initialize(self):
assert not hasattr(engine.dialect, 'default_schema_name')
insp = Inspector(engine)
assert hasattr(engine.dialect, 'default_schema_name')
-
+
def test_get_default_schema_name(self):
insp = Inspector(testing.db)
eq_(insp.default_schema_name, testing.db.dialect.default_schema_name)
-
+
def _test_get_table_names(self, schema=None, table_type='table',
order_by=None):
meta = MetaData(testing.db)
eq_(users_pkeys, ['user_id'])
addr_cons = insp.get_pk_constraint(addresses.name,
schema=schema)
-
+
addr_pkeys = addr_cons['constrained_columns']
eq_(addr_pkeys, ['address_id'])
-
+
@testing.requires.reflects_pk_names
def go():
eq_(addr_cons['name'], 'email_ad_pk')
go()
-
+
finally:
addresses.drop()
users.drop()
result = connection.execute("select * from query_users")
assert len(result.fetchall()) == 0
connection.close()
-
+
def test_transaction_container(self):
def go(conn, table, data):
{'user_id': 1, 'user_name': 'user3'}])
eq_(testing.db.execute(users.select()).fetchall(), [(1, 'user1'
)])
-
+
def test_nested_rollback(self):
connection = testing.db.connect()
try:
eq_(connection.scalar("select count(*) from query_users"), 0)
finally:
connection.close()
-
+
def test_nesting(self):
connection = testing.db.connect()
transaction = connection.begin()
class ExplicitAutoCommitTest(TestBase):
"""test the 'autocommit' flag on select() and text() objects.
-
+
Requires PostgreSQL so that we may define a custom function which
modifies the database. """
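# Rough behavior under test: outside an explicit transaction, statements
# that SQLAlchemy recognizes as data-modifying (or that are flagged with
# autocommit=True) should COMMIT implicitly; plain SELECTs should not.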
# ensure tests start with engine closed
tlengine.close()
-
+
def test_rollback_no_trans(self):
tlengine = create_engine(testing.db.url, strategy="threadlocal")
# shouldn't fail
tlengine.rollback()
-
+
tlengine.begin()
tlengine.rollback()
# shouldn't fail
tlengine.prepare()
-
+
def test_connection_close(self):
"""test that when connections are closed for real, transactions
are rolled back and disposed."""
@testing.requires.independent_connections
def test_queued_update(self):
"""Test SELECT FOR UPDATE with concurrent modifications.
-
+
Runs concurrent modifications on a single row in the users
table, with each mutator trying to increment a value stored in
user_name.
-
+
"""
db = testing.db
ForeignKey('Parent.id')),
Column('foo', String(128)),
Column('name', String(128)))
-
+
class CustomProxy(_AssociationList):
def __init__(
self,
setter,
parent,
)
-
+
class Parent(object):
children = association_proxy('_children', 'name',
proxy_factory=CustomProxy,
self.metadata = metadata
self.session = create_session()
self.Parent, self.Child = Parent, Child
-
+
def test_sequence_ops(self):
self._test_sequence_ops()
-
-
+
+
class ScalarTest(TestBase):
def test_scalar_proxy(self):
metadata = MetaData(testing.db)
class PickleKeyFunc(object):
def __init__(self, name):
self.name = name
-
+
def __call__(self, obj):
return getattr(obj, self.name)
run_deletes = None
run_setup_mappers = 'once'
run_setup_classes = 'once'
-
+
@classmethod
def define_tables(cls, metadata):
Table('userkeywords', metadata, Column('keyword_id', Integer,
select([MyThingy('x'), MyThingy('y')]).where(MyThingy() == 5),
"SELECT >>x<<, >>y<< WHERE >>MYTHINGY!<< = :MYTHINGY!_1"
)
-
+
def test_types(self):
class MyType(TypeEngine):
pass
-
+
@compiles(MyType, 'sqlite')
def visit_type(type, compiler, **kw):
return "SQLITE_FOO"
"POSTGRES_FOO",
dialect=postgresql.dialect()
)
-
-
+
+
def test_stateful(self):
class MyThingy(ColumnClause):
def __init__(self):
"INSERT INTO mytable (SELECT mytable.x, mytable.y, mytable.z "
"FROM mytable WHERE mytable.x > :x_1)"
)
-
+
def test_annotations(self):
"""test that annotated clause constructs use the
decorated class' compiler.
-
+
"""
t1 = table('t1', column('c1'), column('c2'))
-
+
dispatch = Select._compiler_dispatch
try:
@compiles(Select)
def compile(element, compiler, **kw):
return "OVERRIDE"
-
+
s1 = select([t1])
self.assert_compile(
s1, "OVERRIDE"
Select._compiler_dispatch = dispatch
if hasattr(Select, '_compiler_dispatcher'):
del Select._compiler_dispatcher
-
+
def test_default_on_existing(self):
"""test that the existing compiler function remains
as 'default' when overriding the compilation of an
existing construct."""
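# i.e. the @compiles(Select, 'sqlite') override below should only kick in
# for the sqlite dialect; every other dialect keeps using the built-in
# Select compilation.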
-
+
t1 = table('t1', column('c1'), column('c2'))
-
+
dispatch = Select._compiler_dispatch
try:
-
+
@compiles(Select, 'sqlite')
def compile(element, compiler, **kw):
return "OVERRIDE"
-
+
s1 = select([t1])
self.assert_compile(
s1, "SELECT t1.c1, t1.c2 FROM t1",
Select._compiler_dispatch = dispatch
if hasattr(Select, '_compiler_dispatcher'):
del Select._compiler_dispatcher
-
+
def test_dialect_specific(self):
class AddThingy(DDLElement):
__visit_name__ = 'add_thingy'
def test_functions(self):
from sqlalchemy.dialects.postgresql import base as postgresql
-
+
class MyUtcFunction(FunctionElement):
pass
-
+
@compiles(MyUtcFunction)
def visit_myfunc(element, compiler, **kw):
return "utcnow()"
-
+
@compiles(MyUtcFunction, 'postgresql')
def visit_myfunc(element, compiler, **kw):
return "timezone('utc', current_timestamp)"
-
+
self.assert_compile(
MyUtcFunction(),
"utcnow()",
"timezone('utc', current_timestamp)",
dialect=postgresql.dialect()
)
-
+
def test_subclasses_one(self):
class Base(FunctionElement):
name = 'base'
-
+
class Sub1(Base):
name = 'sub1'
class Sub2(Base):
name = 'sub2'
-
+
@compiles(Base)
def visit_base(element, compiler, **kw):
return element.name
'SELECT FOOsub1, sub2',
use_default_dialect=True
)
-
+
def test_subclasses_two(self):
class Base(FunctionElement):
name = 'base'
-
+
class Sub1(Base):
name = 'sub1'
class Sub2(Base):
name = 'sub2'
-
+
class SubSub1(Sub1):
name = 'subsub1'
-
+
self.assert_compile(
select([Sub1(), Sub2(), SubSub1()]),
'SELECT sub1, sub2, subsub1',
'SELECT FOOsub1, sub2, FOOsubsub1',
use_default_dialect=True
)
-
\ No newline at end of file
def teardown(self):
clear_mappers()
Base.metadata.drop_all()
-
+
class DeclarativeTest(DeclarativeTestBase):
def test_basic(self):
class User(Base, ComparableEntity):
eq_(Address.__table__.c['id'].name, 'id')
eq_(Address.__table__.c['_email'].name, 'email')
eq_(Address.__table__.c['_user_id'].name, 'user_id')
-
+
u1 = User(name='u1', addresses=[
Address(email='one'),
Address(email='two'),
assert class_mapper(Bar).get_property('some_data').columns[0] \
is t.c.data
-
+
def test_difficult_class(self):
"""test no getattr() errors with a customized class"""
decl.instrument_declarative(User,{},Base.metadata)
-
+
def test_undefer_column_name(self):
# TODO: not sure if there was an explicit
# test for this elsewhere
eq_(str(foo), 'foo')
eq_(foo.key, 'foo')
eq_(foo.name, 'foo')
-
+
def test_recompile_on_othermapper(self):
"""declarative version of the same test in mappers.py"""
u = User()
assert User.addresses
assert mapperlib._new_mappers is False
-
+
def test_string_dependency_resolution(self):
from sqlalchemy.sql import desc
backref=backref('user',
primaryjoin='User.id==Address.user_id',
foreign_keys='[Address.user_id]'))
-
+
class Address(Base, ComparableEntity):
__tablename__ = 'addresses'
compile_mappers()
eq_(str(User.addresses.prop.primaryjoin),
'users.id = addresses.user_id')
-
+
def test_string_dependency_resolution_in_backref(self):
class User(Base, ComparableEntity):
compile_mappers()
eq_(str(User.addresses.property.primaryjoin),
str(Address.user.property.primaryjoin))
-
+
def test_string_dependency_resolution_tables(self):
class User(Base, ComparableEntity):
eq_(sess.query(User).filter(User.name == 'ed').one(),
User(name='ed', addresses=[Address(email='abc'),
Address(email='def'), Address(email='xyz')]))
-
+
def test_nice_dependency_error(self):
class User(Base):
Base = decl.declarative_base(cls=MyBase)
assert hasattr(Base, 'metadata')
assert Base().foobar() == "foobar"
-
+
def test_uses_get_on_class_col_fk(self):
# test [ticket:1492]
assert d1.master
self.assert_sql_count(testing.db, go, 0)
-
+
def test_index_doesnt_compile(self):
class User(Base):
__tablename__ = 'users'
id = Column('id', Integer, primary_key=True)
name = Column('name', String(50))
error = relationship("Address")
-
+
i = Index('my_index', User.name)
-
+
# compile fails due to the nonexistent Addresses relationship
assert_raises(sa.exc.InvalidRequestError, compile_mappers)
-
+
# index configured
assert i in User.__table__.indexes
assert User.__table__.c.id not in set(i.columns)
assert User.__table__.c.name in set(i.columns)
-
+
# tables create fine
Base.metadata.create_all()
-
+
def test_add_prop(self):
class User(Base, ComparableEntity):
a1 = sess.query(Address).filter(Address.email == 'two').one()
eq_(a1, Address(email='two'))
eq_(a1.user, User(name='u1'))
-
+
def test_eager_order_by(self):
class Address(Base, ComparableEntity):
assert_raises_message(sa.exc.ArgumentError,
'Mapper Mapper|User|users could not '
'assemble any primary key', define)
-
+
def test_table_args_bad_format(self):
def err():
assert_raises_message(sa.exc.ArgumentError,
'Tuple form of __table_args__ is ', err)
-
+
def test_table_args_type(self):
def err():
class Foo1(Base):
id = Column('id', Integer, primary_key=True)
assert_raises_message(sa.exc.ArgumentError,
'__table_args__ value must be a tuple, ', err)
-
+
def test_table_args_none(self):
-
+
class Foo2(Base):
__tablename__ = 'foo'
id = Column('id', Integer, primary_key=True)
assert Foo2.__table__.kwargs == {}
-
+
def test_table_args_dict_format(self):
-
+
class Foo2(Base):
__tablename__ = 'foo'
test_needs_autoincrement=True)
name = Column('name', String(50))
addresses = relationship('Address', backref='user')
-
+
@declared_attr
def address_count(cls):
# this doesn't really gain us anything. but if
# one is used, lets have it function as expected...
return sa.orm.column_property(sa.select([sa.func.count(Address.id)]).
where(Address.user_id == cls.id))
-
+
Base.metadata.create_all()
u1 = User(name='u1', addresses=[Address(email='one'),
Address(email='two')])
sess.expunge_all()
eq_(sess.query(User).all(), [User(name='u1', address_count=2,
addresses=[Address(email='one'), Address(email='two')])])
-
+
def test_column(self):
class User(Base, ComparableEntity):
primary_language = Column('primary_language', String(50))
assert class_mapper(Engineer).inherits is class_mapper(Person)
-
+
@testing.fails_if(lambda: True, "Not implemented until 0.7")
def test_foreign_keys_with_col(self):
"""Test that foreign keys that reference a literal 'id' subclass
- 'id' attribute behave intuitively.
-
+ 'id' attribute behave intuitively.
+
See ticket 1892.
-
+
"""
class Booking(Base):
__tablename__ = 'booking'
primary_key=True)
plan_booking_id = Column(Integer,
ForeignKey(PlanBooking.id))
-
+
plan_booking = relationship(PlanBooking,
backref='feature_bookings')
-
+
assert FeatureBooking.__table__.c.plan_booking_id.\
references(PlanBooking.__table__.c.id)
assert FeatureBooking.__table__.c.id.\
references(Booking.__table__.c.id)
-
+
def test_with_undefined_foreignkey(self):
class Parent(Base):
def test_single_colsonsub(self):
"""test single inheritance where the columns are local to their
class.
-
+
this is a newer usage.
-
+
"""
class Company(Base, ComparableEntity):
def test_single_fksonsub(self):
"""test single inheritance with a foreign key-holding column on
a subclass.
-
+
"""
class Person(Base, ComparableEntity):
eq_(obj.name, 'testing')
eq_(obj.foo(), 'bar1')
eq_(obj.baz, 'fu')
-
+
def test_mixin_overrides(self):
"""test a mixin that overrides a column on a superclass."""
-
+
class MixinA(object):
foo = Column(String(50))
-
+
class MixinB(MixinA):
foo = Column(Integer)
class MyModelA(Base, MixinA):
__tablename__ = 'testa'
id = Column(Integer, primary_key=True)
-
+
class MyModelB(Base, MixinB):
__tablename__ = 'testb'
id = Column(Integer, primary_key=True)
-
+
eq_(MyModelA.__table__.c.foo.type.__class__, String)
eq_(MyModelB.__table__.c.foo.type.__class__, Integer)
-
-
+
+
def test_not_allowed(self):
class MyMixin:
pass
eq_(MyModel.__table__.name, 'mymodel')
-
+
def test_classproperty_still_works(self):
class MyMixin(object):
@classproperty
__tablename__ = 'overridden'
eq_(MyModel.__table__.name, 'overridden')
-
+
def test_table_name_not_inherited(self):
class MyMixin:
mapped to a superclass and single-table inheritance subclass.
The superclass table gets the column, the subclass shares
the MapperProperty.
-
+
"""
-
+
class MyMixin(object):
foo = Column('foo', Integer)
bar = Column('bar_newname', Integer)
-
+
class General(Base, MyMixin):
__tablename__ = 'test'
id = Column(Integer, primary_key=True)
assert General.bar.prop.columns[0] is General.__table__.c.bar_newname
assert len(General.bar.prop.columns) == 1
assert Specific.bar.prop is General.bar.prop
-
+
def test_columns_joined_table_inheritance(self):
"""Test a column on a mixin with an alternate attribute name,
mapped to a superclass and joined-table inheritance subclass.
Both tables get the column, in the case of the subclass the two
columns are joined under one MapperProperty.
-
+
"""
class MyMixin(object):
foo = Column('foo', Integer)
bar = Column('bar_newname', Integer)
-
+
class General(Base, MyMixin):
__tablename__ = 'test'
id = Column(Integer, primary_key=True)
assert len(Specific.bar.prop.columns) == 2
assert Specific.bar.prop.columns[0] is General.__table__.c.bar_newname
assert Specific.bar.prop.columns[1] is Specific.__table__.c.bar_newname
-
+
def test_column_join_checks_superclass_type(self):
"""Test that the logic which joins subclass props to those
of the superclass checks that the superclass property is a column.
__tablename__ = 'sub'
id = Column(Integer, ForeignKey('test.id'), primary_key=True)
type_ = Column('foob', String(50))
-
+
assert isinstance(General.type_.property, sa.orm.RelationshipProperty)
assert Specific.type_.property.columns[0] is Specific.__table__.c.foob
assert_raises_message(
sa.exc.ArgumentError, "column 'foob' conflicts with property", go
)
-
+
def test_table_args_overridden(self):
class MyMixin:
if cls.__name__ != 'MyModel':
args.pop('polymorphic_on')
args['polymorphic_identity'] = cls.__name__
-
+
return args
id = Column(Integer, primary_key=True)
-
+
class MySubModel(MyModel):
pass
-
+
eq_(
MyModel.__mapper__.polymorphic_on.name,
'type_'
eq_(MyModel.__mapper__.always_refresh, True)
eq_(MySubModel.__mapper__.always_refresh, True)
eq_(MySubModel.__mapper__.polymorphic_identity, 'MySubModel')
-
+
def test_mapper_args_property(self):
class MyModel(Base):
-
+
@declared_attr
def __tablename__(cls):
return cls.__name__.lower()
-
+
@declared_attr
def __table_args__(cls):
return {'mysql_engine':'InnoDB'}
-
+
@declared_attr
def __mapper_args__(cls):
args = {}
args['polymorphic_identity'] = cls.__name__
return args
id = Column(Integer, primary_key=True)
-
+
class MySubModel(MyModel):
id = Column(Integer, ForeignKey('mymodel.id'), primary_key=True)
class MySubModel2(MyModel):
__tablename__ = 'sometable'
id = Column(Integer, ForeignKey('mymodel.id'), primary_key=True)
-
+
eq_(MyModel.__mapper__.polymorphic_identity, 'MyModel')
eq_(MySubModel.__mapper__.polymorphic_identity, 'MySubModel')
eq_(MyModel.__table__.kwargs['mysql_engine'], 'InnoDB')
eq_(MySubModel2.__table__.kwargs['mysql_engine'], 'InnoDB')
eq_(MyModel.__table__.name, 'mymodel')
eq_(MySubModel.__table__.name, 'mysubmodel')
-
+
def test_mapper_args_custom_base(self):
"""test the @declared_attr approach from a custom base."""
-
+
class Base(object):
@declared_attr
def __tablename__(cls):
return cls.__name__.lower()
-
+
@declared_attr
def __table_args__(cls):
return {'mysql_engine':'InnoDB'}
-
+
@declared_attr
def id(self):
return Column(Integer, primary_key=True)
-
+
Base = decl.declarative_base(cls=Base)
-
+
class MyClass(Base):
pass
-
+
class MyOtherClass(Base):
pass
-
+
eq_(MyClass.__table__.kwargs['mysql_engine'], 'InnoDB')
eq_(MyClass.__table__.name, 'myclass')
eq_(MyOtherClass.__table__.name, 'myotherclass')
assert MyClass.__table__.c.id.table is MyClass.__table__
assert MyOtherClass.__table__.c.id.table is MyOtherClass.__table__
-
+
def test_single_table_no_propagation(self):
class IdColumn:
class ColumnMixin:
tada = Column(Integer)
-
+
def go():
class Model(Base, ColumnMixin):
Column('data',Integer),
Column('id', Integer,primary_key=True))
foo = relationship("Dest")
-
+
assert_raises_message(sa.exc.ArgumentError,
"Can't add additional column 'tada' when "
"specifying __table__", go)
def test_table_in_model_and_different_named_alt_key_column_in_mixin(self):
-
+
# here, the __table__ has a column 'tada'. We disallow
# the add of the 'foobar' column, even though it's
# keyed to 'tada'.
-
+
class ColumnMixin:
tada = Column('foobar', Integer)
-
+
def go():
class Model(Base, ColumnMixin):
Column('tada', Integer),
Column('id', Integer,primary_key=True))
foo = relationship("Dest")
-
+
assert_raises_message(sa.exc.ArgumentError,
"Can't add additional column 'foobar' when "
"specifying __table__", go)
def test_doc(self):
"""test documentation transfer.
-
+
the documentation situation with @declared_attr is problematic.
at least see if mapped subclasses get the doc.
-
+
"""
class MyMixin(object):
def test_relationship_primryjoin(self):
self._test_relationship(True)
-
+
def test_map_to_table_not_string(self):
db = sqlsoup.SqlSoup(engine)
-
+
table = Table('users', db._metadata, Column('id', Integer, primary_key=True))
assert_raises_message(
exc.ArgumentError,
if name[0].isupper:
delattr(cls, name)
del cls.classes[name]
-
+
@classmethod
def _load_fixtures(cls):
headers, rows = {}, {}
Table('composite_pk_table', fixture_metadata,
Column('i', Integer, primary_key=True),
Column('j', Integer, primary_key=True),
- Column('k', Integer, nullable=False),
+ Column('k', Integer, nullable=False),
),
('i', 'j', 'k'),
(1, 2, 3),
class CompositePk(Base):
pass
-
+
class FixtureTest(_base.MappedTest):
"""A MappedTest pre-configured for fixtures.
keywords=[]),
Item(id=5,
keywords=[])]
-
+
@property
def user_item_keyword_result(self):
item1, item2, item3, item4, item5 = \
items=[item1, item5])]),
User(id=10, orders=[])]
return user_result
-
+
FixtureTest.static = CannedResults()
"""produce a testcase for A->B->C inheritance with a self-referential
relationship between two of the classes, using either one-to-many or
many-to-one.
-
+
the old "no discriminator column" pattern is used.
-
+
"""
class ABCTest(_base.MappedTest):
@classmethod
Column('id', Integer, primary_key=True),
Column('y', String(10)),
Column('xid', ForeignKey('t1.id')))
-
+
@testing.resolve_artifact_names
def test_bad_polymorphic_on(self):
class InterfaceBase(object):
"polymorphic loads will not function properly",
go
)
-
-
+
+
class FalseDiscriminatorTest(_base.MappedTest):
@classmethod
t1 = Table('t1', metadata,
Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('type', Boolean, nullable=False))
-
+
def test_false_on_sub(self):
class Foo(object):pass
class Bar(Foo):pass
assert d1.type is False
sess.expunge_all()
assert sess.query(Ding).one() is not None
-
+
class PolymorphicSynonymTest(_base.MappedTest):
@classmethod
def define_tables(cls, metadata):
Column('id', Integer, ForeignKey('t1.id'),
primary_key=True),
Column('data', String(10), nullable=False))
-
+
def test_polymorphic_synonym(self):
class T1(_fixtures.Base):
def info(self):
def _set_info(self, x):
self._info = x
info = property(info, _set_info)
-
+
class T2(T1):pass
-
+
mapper(T1, t1, polymorphic_on=t1.c.type, polymorphic_identity='t1',
properties={
'info':synonym('_info', map_column=True)
sess.expunge_all()
eq_(sess.query(T2).filter(T2.info=='at2').one(), at2)
eq_(at2.info, "THE INFO IS:at2")
-
-
+
+
class CascadeTest(_base.MappedTest):
"""that cascades on polymorphic relationships continue
cascading along the path of the instance's mapper, not
# the 'primaryjoin' looks just like "Sub"'s "get" clause (based on the Base id),
# and foreign_keys since that join condition doesn't actually have any fks in it
#'sub':relationship(Sub, primaryjoin=base.c.id==related.c.sub_id, foreign_keys=related.c.sub_id)
-
+
# now we can use this:
'sub':relationship(Sub)
})
-
+
assert class_mapper(Related).get_property('sub').strategy.use_get
-
+
sess = create_session()
s1 = Sub()
r1 = Related(sub=s1)
def go():
assert r1.sub
self.assert_sql_count(testing.db, go, 0)
-
+
class GetTest(_base.MappedTest):
@classmethod
Column('foo_id', Integer, ForeignKey('foo.id')),
Column('bar_id', Integer, ForeignKey('bar.id')),
Column('data', String(20)))
-
+
@classmethod
def setup_classes(cls):
class Foo(_base.BasicEntity):
def test_get_polymorphic(self):
self._do_get_test(True)
-
+
def test_get_nonpolymorphic(self):
self._do_get_test(False)
assert sess.query(Blub).get(f.id) is None
assert sess.query(Blub).get(b.id) is None
assert sess.query(Bar).get(f.id) is None
-
+
self.assert_sql_count(testing.db, go, 0)
else:
# this is testing the 'wrong' behavior of using get()
class EagerLazyTest(_base.MappedTest):
"""tests eager load/lazy load of child items off inheritance mappers, tests that
LazyLoader constructs the right query condition."""
-
+
@classmethod
def define_tables(cls, metadata):
global foo, bar, bar_foo
class EagerTargetingTest(_base.MappedTest):
"""test a scenario where joined table inheritance might be
confused as an eagerly loaded joined table."""
-
+
@classmethod
def define_tables(cls, metadata):
Table('a_table', metadata,
Column('id', Integer, ForeignKey('a_table.id'), primary_key=True),
Column('b_data', String(50)),
)
-
+
@testing.resolve_artifact_names
def test_adapt_stringency(self):
class A(_base.ComparableEntity):
pass
class B(A):
pass
-
+
mapper(A, a_table, polymorphic_on=a_table.c.type, polymorphic_identity='A',
properties={
'children': relationship(A, order_by=a_table.c.name)
mapper(B, b_table, inherits=A, polymorphic_identity='B', properties={
'b_derived':column_property(b_table.c.b_data + "DATA")
})
-
+
sess=create_session()
b1=B(id=1, name='b1',b_data='i')
node = sess.query(B).filter(B.id==bid).all()[0]
eq_(node, B(id=1, name='b1',b_data='i'))
eq_(node.children[0], B(id=2, name='b2',b_data='l'))
-
+
sess.expunge_all()
node = sess.query(B).options(joinedload(B.children)).filter(B.id==bid).all()[0]
eq_(node, B(id=1, name='b1',b_data='i'))
eq_(node.children[0], B(id=2, name='b2',b_data='l'))
-
+
class FlushTest(_base.MappedTest):
"""test dependency sorting among inheriting mappers"""
-
+
@classmethod
def define_tables(cls, metadata):
Table('users', metadata,
class DistinctPKTest(_base.MappedTest):
"""test the construction of mapper.primary_key when an inheriting relationship
joins on a column other than primary key column."""
-
+
run_inserts = 'once'
run_deletes = None
class SyncCompileTest(_base.MappedTest):
"""test that syncrules compile properly on custom inherit conds"""
-
+
@classmethod
def define_tables(cls, metadata):
global _a_table, _b_table, _c_table
class OverrideColKeyTest(_base.MappedTest):
"""test overriding of column attributes."""
-
+
@classmethod
def define_tables(cls, metadata):
global base, subtable
-
+
base = Table('base', metadata,
Column('base_id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('data', String(255)),
Column('sqlite_fixer', String(10))
)
-
+
subtable = Table('subtable', metadata,
Column('base_id', Integer, ForeignKey('base.base_id'), primary_key=True),
Column('subdata', String(255))
mapper(Base, base)
mapper(Sub, subtable, inherits=Base)
-
+
# Sub gets a "base_id" property using the "base_id"
# column of both tables.
eq_(
# this pattern is what you see when using declarative.
# in particular, here we do a "manual" version of
# what we'd like the mapper to do.
-
+
class Base(object):
pass
class Sub(Base):
pass
-
+
mapper(Base, base, properties={
'id':base.c.base_id
})
sess.add(s1)
sess.flush()
assert sess.query(Sub).get(10) is s1
-
+
def test_override_onlyinparent(self):
class Base(object):
pass
'id':base.c.base_id
})
mapper(Sub, subtable, inherits=Base)
-
+
eq_(
class_mapper(Sub).get_property('id').columns,
[base.c.base_id]
class_mapper(Sub).get_property('base_id').columns,
[subtable.c.base_id]
)
-
+
s1 = Sub()
s1.id = 10
-
+
s2 = Sub()
s2.base_id = 15
-
+
sess = create_session()
sess.add_all([s1, s2])
sess.flush()
-
+
# s1 gets '10'
assert sess.query(Sub).get(10) is s1
-
+
# s2 gets a new id, base_id is overwritten by the ultimate
# PK col
assert s2.id == s2.base_id != 15
-
+
@testing.emits_warning(r'Implicit')
def test_override_implicit(self):
# this is how the pattern looks intuitively when
# using declarative.
# fixed as part of [ticket:1111]
-
+
class Base(object):
pass
class Sub(Base):
mapper(Sub, subtable, inherits=Base, properties={
'id':subtable.c.base_id
})
-
+
# Sub mapper compilation needs to detect that "base.c.base_id"
# is renamed in the inherited mapper as "id", even though
# it has its own "id" property. Sub's "id" property
# gets joined normally with the extra column.
-
+
eq_(
set(class_mapper(Sub).get_property('id').columns),
set([base.c.base_id, subtable.c.base_id])
)
-
+
s1 = Sub()
s1.id = 10
sess = create_session()
def test_plain_descriptor(self):
"""test that descriptors prevent inheritance from propigating properties to subclasses."""
-
+
class Base(object):
pass
class Sub(Base):
mapper(Base, base)
mapper(Sub, subtable, inherits=Base)
-
+
s1 = Sub()
sess = create_session()
sess.add(s1)
if instance is None:
return self
return "im the data"
-
+
class Base(object):
pass
class Sub(Base):
sess.add(s1)
sess.flush()
assert sess.query(Sub).one().data == "im the data"
-
+
def test_sub_columns_over_base_descriptors(self):
class Base(object):
@property
mapper(Base, base)
mapper(Sub, subtable, inherits=Base)
-
+
sess = create_session()
b1 = Base()
assert b1.subdata == "this is base"
sess.add_all([s1, b1])
sess.flush()
sess.expunge_all()
-
+
assert sess.query(Base).get(b1.base_id).subdata == "this is base"
assert sess.query(Sub).get(s1.base_id).subdata == "this is sub"
class OptimizedLoadTest(_base.MappedTest):
"""tests for the "optimized load" routine."""
-
+
@classmethod
def define_tables(cls, metadata):
Table('base', metadata,
Column('a', String(10)),
Column('b', String(10))
)
-
+
@testing.resolve_artifact_names
def test_optimized_passes(self):
""""test that the 'optimized load' routine doesn't crash when
a column in the join condition is not available."""
-
+
class Base(_base.BasicEntity):
pass
class Sub(Base):
pass
-
+
mapper(Base, base, polymorphic_on=base.c.type, polymorphic_identity='base')
-
+
# redefine Sub's "id" to favor the "id" col in the subtable.
# "id" is also part of the primary join condition
mapper(Sub, sub, inherits=Base,
sess.add(s1)
sess.commit()
sess.expunge_all()
-
+
# load s1 via Base. s1.id won't populate since it's relative to
# the "sub" table. The optimized load kicks in and tries to
# generate on the primary join, but cannot since "id" is itself unloaded.
assert s2test.comp
eq_(s1test.comp, Comp('ham', 'cheese'))
eq_(s2test.comp, Comp('bacon', 'eggs'))
-
+
@testing.resolve_artifact_names
def test_load_expired_on_pending(self):
class Base(_base.ComparableEntity):
mapper(Base, base, polymorphic_on=base.c.type,
polymorphic_identity='base')
m = mapper(Sub, sub, inherits=Base, polymorphic_identity='sub')
-
+
s1 = Sub()
assert m._optimized_get_statement(attributes.instance_state(s1),
['counter2']) is None
-
+
# loads s1.id as None
eq_(s1.id, None)
-
+
# this now will come up with a value of None for id - should reject
assert m._optimized_get_statement(attributes.instance_state(s1),
['counter2']) is None
-
+
s1.id = 1
attributes.instance_state(s1).commit_all(s1.__dict__, None)
assert m._optimized_get_statement(attributes.instance_state(s1),
['counter2']) is not None
-
+
@testing.resolve_artifact_names
def test_load_expired_on_pending_twolevel(self):
class Base(_base.ComparableEntity):
pass
class SubSub(Sub):
pass
-
+
mapper(Base, base, polymorphic_on=base.c.type,
polymorphic_identity='base')
mapper(Sub, sub, inherits=Base, polymorphic_identity='sub')
lambda ctx:{'id':s1.id}
),
)
-
-
-
+
+
+
class PKDiscriminatorTest(_base.MappedTest):
@classmethod
def define_tables(cls, metadata):
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('name', String(60)))
-
+
children = Table('children', metadata,
Column('id', Integer, ForeignKey('parents.id'),
primary_key=True),
class A(Child):
pass
-
+
mapper(Parent, parents, properties={
'children': relationship(Child, backref='parent'),
})
mapper(Child, children, polymorphic_on=children.c.type,
polymorphic_identity=1)
-
+
mapper(A, inherits=Child, polymorphic_identity=2)
s = create_session()
assert a.id
assert a.type == 2
-
+
p.name='p1new'
a.name='a1new'
s.flush()
-
+
s.expire_all()
assert a.name=='a1new'
assert p.name=='p1new'
-
-
+
+
class DeleteOrphanTest(_base.MappedTest):
@classmethod
def define_tables(cls, metadata):
Column('data', String(50)),
Column('parent_id', Integer, ForeignKey('parent.id'), nullable=False),
)
-
+
parent = Table('parent', metadata,
Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('data', String(50))
)
-
+
def test_orphan_message(self):
class Base(_fixtures.Base):
pass
-
+
class SubClass(Base):
pass
-
+
class Parent(_fixtures.Base):
pass
-
+
mapper(Base, single, polymorphic_on=single.c.type, polymorphic_identity='base')
mapper(SubClass, inherits=Base, polymorphic_identity='sub')
mapper(Parent, parent, properties={
'related':relationship(Base, cascade="all, delete-orphan")
})
-
+
sess = create_session()
s1 = SubClass(data='s1')
sess.add(s1)
assert_raises_message(orm_exc.FlushError,
r"is not attached to any parent 'Parent' instance via "
"that classes' 'related' attribute", sess.flush)
-
-
+
+
primary_key=True, test_needs_autoincrement=True),
Column('some_dest_id', Integer, ForeignKey('dest_table.id')),
Column('cname', String(50)))
-
+
Table('dest_table', metadata, Column('id', Integer,
primary_key=True, test_needs_autoincrement=True),
Column('name', String(50)))
class B(A):
pass
-
+
class C(A):
pass
-
+
class Dest(_base.ComparableEntity):
pass
dest = Dest()
assert_raises(AttributeError, setattr, b, 'some_dest', dest)
clear_mappers()
-
+
mapper(A, a_table, properties={'a_id': a_table.c.id})
mapper(B, b_table, inherits=A, concrete=True)
mapper(Dest, dest_table)
b = B()
assert_raises(AttributeError, setattr, b, 'a_id', 3)
clear_mappers()
-
+
mapper(A, a_table, properties={'a_id': a_table.c.id})
mapper(B, b_table, inherits=A, concrete=True)
mapper(Dest, dest_table)
properties={
'some_dest': relationship(Dest, back_populates='many_b')
})
-
+
mapper(Dest, dest_table, properties={
'many_a': relationship(A,back_populates='some_dest'),
'many_b': relationship(B,back_populates='some_dest')
properties={
'some_dest': relationship(Dest, back_populates='many_a')},
)
-
+
mapper(Dest, dest_table, properties={
'many_a': relationship(A,
back_populates='some_dest',
order_by=ajoin.c.id)
}
)
-
+
sess = sessionmaker()()
dest1 = Dest(name='c1')
dest2 = Dest(name='c2')
b2 = B(some_dest=dest1, bname='b2', id=4)
c1 = C(some_dest=dest1, cname='c1', id=5)
c2 = C(some_dest=dest2, cname='c2', id=6)
-
+
eq_([a2, c2], dest2.many_a)
eq_([a1, b1, b2, c1], dest1.many_a)
sess.add_all([dest1, dest2])
sess.commit()
-
+
assert sess.query(Dest).filter(Dest.many_a.contains(a2)).one() is dest2
assert sess.query(Dest).filter(Dest.many_a.contains(b1)).one() is dest1
assert sess.query(Dest).filter(Dest.many_a.contains(c2)).one() is dest2
properties={
'some_dest': relationship(Dest, back_populates='many_a')},
)
-
+
mapper(Dest, dest_table, properties={
'many_a': relationship(A,
back_populates='some_dest',
c1 = C(some_dest=dest2, cname='c1')
sess.add_all([dest1, dest2, c1, a1, b1])
sess.commit()
-
+
sess2 = sessionmaker()()
merged_c1 = sess2.merge(c1)
eq_(merged_c1.some_dest.name, 'd2')
page3 = ClassifiedPage(magazine=magazine,page_no=3)
session.add(pub)
-
+
session.flush()
print [x for x in session]
session.expunge_all()
'engineer':people.join(engineers),
'manager':people.join(managers),
}, None, 'pjoin')
-
+
manager_join = people.join(managers).outerjoin(boss)
person_with_polymorphic = ['*', person_join]
manager_with_polymorphic = ['*', manager_join]
Engineer(status='CGG', engineer_name='engineer2', primary_language='python', **{person_attribute_name:'wally'}),
Manager(status='ABA', manager_name='manager2', **{person_attribute_name:'jsmith'})
]
-
+
pointy = employees[0]
jsmith = employees[-1]
dilbert = employees[1]
-
+
session = create_session()
c = Company(name='company1')
c.employees = employees
session.flush()
session.expunge_all()
-
+
eq_(session.query(Person).get(dilbert.person_id), dilbert)
session.expunge_all()
def go():
cc = session.query(Company).get(c.company_id)
eq_(cc.employees, employees)
-
+
if not lazy_relationship:
if with_polymorphic != 'none':
self.assert_sql_count(testing.db, go, 1)
self.assert_sql_count(testing.db, go, 2)
else:
self.assert_sql_count(testing.db, go, 6)
-
+
# test selecting from the query, using the base mapped table (people) as the selection criterion.
# in the case of the polymorphic Person query, the "people" selectable should be adapted to be "person_join"
eq_(
session.query(Engineer).filter(getattr(Person, person_attribute_name)=='dilbert').first(),
dilbert
)
-
+
# test selecting from the query, joining against an alias of the base "people" table. test that
# the "palias" alias does *not* get sucked up into the "person_join" conversion.
palias = people.alias("palias")
assert dilbert is session.query(Engineer).filter((palias.c.name=='dilbert') & (palias.c.person_id==Person.person_id)).first()
assert dilbert is session.query(Person).filter((Engineer.engineer_name=="engineer1") & (engineers.c.person_id==people.c.person_id)).first()
assert dilbert is session.query(Engineer).filter(Engineer.engineer_name=="engineer1")[0]
-
+
dilbert.engineer_name = 'hes dibert!'
session.flush()
session.expunge_all()
-
+
def go():
session.query(Person).filter(getattr(Person, person_attribute_name)=='dilbert').first()
self.assert_sql_count(testing.db, go, 1)
eq_(session.query(Manager).order_by(Manager.person_id).all(), manager_list)
c = session.query(Company).first()
-
+
session.delete(c)
session.flush()
-
+
eq_(people.count().scalar(), 0)
-
+
test_roundtrip = function_named(
test_roundtrip, "test_%s%s%s_%s" % (
(lazy_relationship and "lazy" or "eager"),
pass
class Manager(Person):
pass
-
+
# note that up until recently (0.4.4), we had to specify "foreign_keys" here
- # for this primary join.
+ # for this primary join.
mapper(Person, people, properties={
'manager':relationship(Manager, primaryjoin=(people.c.manager_id ==
managers.c.person_id),
})
mapper(Manager, managers, inherits=Person,
inherit_condition=people.c.person_id==managers.c.person_id)
-
+
eq_(class_mapper(Person).get_property('manager').synchronize_pairs, [(managers.c.person_id,people.c.manager_id)])
-
+
session = create_session()
p = Person(name='some person')
m = Manager(name='some manager')
session.flush()
session.expunge_all()
-
+
def go():
testcar = session.query(Car).options(joinedload('employee')).get(car1.car_id)
assert str(testcar.employee) == "Engineer E4, status X"
sess = create_session()
sess.add(t1)
sess.flush()
-
+
sess.expunge_all()
eq_(
sess.query(Taggable).order_by(Taggable.id).all(),
[User(data='u1'), Taggable(owner=User(data='u1'))]
)
-
+
class GenerativeTest(TestBase, AssertsExecutionResults):
@classmethod
def setup_class(cls):
# added here for testing
e = exists([Car.owner], Car.owner==employee_join.c.person_id)
Query(Person)._adapt_clause(employee_join, False, False)
-
+
r = session.query(Person).filter(Person.name.like('%2')).join('status').filter_by(name="active").order_by(Person.person_id)
eq_(str(list(r)), "[Manager M2, category YYYYYYYYY, status Status active, Engineer E2, field X, status Status active]")
r = session.query(Engineer).join('status').filter(Person.name.in_(['E2', 'E3', 'E4', 'M4', 'M2', 'M1']) & (status.c.name=="active")).order_by(Person.name)
Column('id', Integer, ForeignKey('tablec.id'), primary_key=True),
Column('ddata', String(50)),
)
-
+
def test_polyon_col_setsup(self):
class A(_fixtures.Base):
pass
pass
class D(C):
pass
-
+
poly_select = select([tablea, tableb.c.data.label('discriminator')], from_obj=tablea.join(tableb)).alias('poly')
-
+
mapper(B, tableb)
mapper(A, tablea, with_polymorphic=('*', poly_select), polymorphic_on=poly_select.c.discriminator, properties={
'b':relationship(B, uselist=False)
})
mapper(C, tablec, inherits=A,polymorphic_identity='c')
mapper(D, tabled, inherits=C, polymorphic_identity='d')
-
+
c = C(cdata='c1', adata='a1', b=B(data='c'))
d = D(cdata='c2', adata='a2', ddata='d2', b=B(data='d'))
sess = create_session()
sess.flush()
sess.expunge_all()
eq_(sess.query(A).all(), [C(cdata='c1', adata='a1'), D(cdata='c2', adata='a2', ddata='d2')])
-
+
class Machine(_fixtures.Base):
pass
-
+
class Paperwork(_fixtures.Base):
pass
run_inserts = 'once'
run_setup_mappers = 'once'
run_deletes = None
-
+
@classmethod
def define_tables(cls, metadata):
global companies, people, engineers, managers, boss, paperwork, machines
Column('engineer_name', String(50)),
Column('primary_language', String(50)),
)
-
+
machines = Table('machines', metadata,
Column('machine_id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('name', String(50)),
Column('engineer_id', Integer, ForeignKey('engineers.person_id')))
-
+
managers = Table('managers', metadata,
Column('person_id', Integer, ForeignKey('people.person_id'), primary_key=True),
Column('status', String(30)),
Column('person_id', Integer, ForeignKey('people.person_id')))
clear_mappers()
-
+
mapper(Company, companies, properties={
'employees':relationship(Person, order_by=people.c.person_id)
})
inherits=Person, polymorphic_identity='manager')
mapper(Boss, boss, inherits=Manager, polymorphic_identity='boss')
mapper(Paperwork, paperwork)
-
+
@classmethod
def insert_data(cls):
Machine(name="Commodore 64"),
Machine(name="IBM 3270")
])
-
+
c2.employees = [e3]
sess = create_session()
sess.add(c1)
all_employees = [e1, e2, b1, m1, e3]
c1_employees = [e1, e2, b1, m1]
c2_employees = [e3]
-
+
def test_loads_at_once(self):
"""test that all objects load from the full query, when with_polymorphic is used"""
-
+
sess = create_session()
def go():
eq_(sess.query(Person).all(), all_employees)
def test_foo(self):
sess = create_session()
-
+
def go():
eq_(sess.query(Person).options(subqueryload(Engineer.machines)).all(), all_employees)
self.assert_sql_count(testing.db, go, {'':14, 'Unions':8, 'Polymorphic':7}.get(select_type, 8))
# for both joinedload() and subqueryload(), if the original q is not loading
# the subclass table, the joinedload doesn't happen.
-
+
def go():
eq_(sess.query(Person).options(joinedload(Engineer.machines))[1:3], all_employees[1:3])
self.assert_sql_count(testing.db, go, {'':6, 'Polymorphic':3}.get(select_type, 4))
sess = create_session()
-
+
def go():
eq_(sess.query(Person).options(subqueryload(Engineer.machines)).all(), all_employees)
self.assert_sql_count(testing.db, go, {'':14, 'Unions':8, 'Polymorphic':7}.get(select_type, 8))
options(joinedload(Engineer.machines))[1:3],
all_employees[1:3])
self.assert_sql_count(testing.db, go, 3)
-
-
+
+
def test_get(self):
sess = create_session()
-
+
# for all mappers, ensure the primary key has been calculated as just the "person_id"
# column
eq_(sess.query(Person).get(e1.person_id), Engineer(name="dilbert", primary_language="java"))
eq_(sess.query(Engineer).get(e1.person_id), Engineer(name="dilbert", primary_language="java"))
eq_(sess.query(Manager).get(b1.person_id), Boss(name="pointy haired boss", golf_swing="fore"))
-
+
def test_multi_join(self):
sess = create_session()
e = aliased(Person)
c = aliased(Company)
-
+
q = sess.query(Company, Person, c, e).join((Person, Company.employees)).join((e, c.employees)).\
filter(Person.name=='dilbert').filter(e.name=='wally')
-
+
eq_(q.count(), 1)
eq_(q.all(), [
(
Engineer(status=u'regular engineer',engineer_name=u'wally',name=u'wally',company_id=1,primary_language=u'c++',person_id=2,type=u'engineer')
)
])
-
+
def test_filter_on_subclass(self):
sess = create_session()
eq_(sess.query(Engineer).all()[0], Engineer(name="dilbert"))
eq_(sess.query(Manager).filter(Manager.person_id==m1.person_id).one(), Manager(name="dogbert"))
eq_(sess.query(Manager).filter(Manager.person_id==b1.person_id).one(), Boss(name="pointy haired boss"))
-
+
eq_(sess.query(Boss).filter(Boss.person_id==b1.person_id).one(), Boss(name="pointy haired boss"))
def test_join_from_polymorphic(self):
sess.expunge_all()
eq_(sess.query(Person).with_polymorphic([Manager, Engineer]).join('paperwork', aliased=aliased).filter(Person.name.like('%dog%')).filter(Paperwork.description.like('%#2%')).all(), [m1])
-
+
def test_join_to_polymorphic(self):
sess = create_session()
eq_(sess.query(Company).join('employees').filter(Person.name=='vlad').one(), c2)
sess.query(Company).\
filter(Company.employees.any(Person.name=='vlad')).all(), [c2]
)
-
+
# test that the aliasing on "Person" does not bleed into the
# EXISTS clause generated by any()
eq_(
sess.query(Company).join(Company.employees, aliased=True).filter(Person.name=='dilbert').\
filter(Company.employees.any(Person.name=='vlad')).all(), []
)
-
+
eq_(
sess.query(Company).filter(Company.employees.of_type(Engineer).any(Engineer.primary_language=='cobol')).one(),
c2
)
-
+
calias = aliased(Company)
eq_(
sess.query(calias).filter(calias.employees.of_type(Engineer).any(Engineer.primary_language=='cobol')).one(),
eq_(
sess.query(Person).filter(Person.paperwork.any(Paperwork.description=="review #2")).all(), [m1]
)
-
+
eq_(
sess.query(Company).filter(Company.employees.of_type(Engineer).any(and_(Engineer.primary_language=='cobol'))).one(),
c2
)
-
+
def test_join_from_columns_or_subclass(self):
sess = create_session()
sess.query(Manager.name).order_by(Manager.name).all(),
[(u'dogbert',), (u'pointy haired boss',)]
)
-
+
eq_(
sess.query(Manager.name).join((Paperwork, Manager.paperwork)).order_by(Manager.name).all(),
[(u'dogbert',), (u'dogbert',), (u'pointy haired boss',)]
sess.query(Person.name).join((Paperwork, Person.paperwork)).order_by(Person.name).all(),
[(u'dilbert',), (u'dilbert',), (u'dogbert',), (u'dogbert',), (u'pointy haired boss',), (u'vlad',), (u'wally',), (u'wally',)]
)
-
+
eq_(
sess.query(Person.name).join((paperwork, Manager.person_id==paperwork.c.person_id)).order_by(Person.name).all(),
[(u'dilbert',), (u'dilbert',), (u'dogbert',), (u'dogbert',), (u'pointy haired boss',), (u'vlad',), (u'wally',), (u'wally',)]
)
-
+
eq_(
sess.query(Manager).join((Paperwork, Manager.paperwork)).order_by(Manager.name).all(),
[m1, b1]
sess.query(Manager.person_id).join((paperwork, Manager.person_id==paperwork.c.person_id)).order_by(Manager.name).all(),
[(4,), (4,), (3,)]
)
-
+
eq_(
sess.query(Manager.name, Paperwork.description).
join((Paperwork, Manager.person_id==Paperwork.person_id)).
all(),
[(u'pointy haired boss', u'review #1'), (u'dogbert', u'review #2'), (u'dogbert', u'review #3')]
)
-
+
malias = aliased(Manager)
eq_(
sess.query(malias.name).join((paperwork, malias.person_id==paperwork.c.person_id)).all(),
[(u'pointy haired boss',), (u'dogbert',), (u'dogbert',)]
)
-
+
def test_polymorphic_option(self):
"""test that polymorphic loading sets state.load_path with its actual mapper
on a subclass, and not the superclass mapper.
-
+
"""
paths = []
class MyOption(interfaces.MapperOption):
propagate_to_loaders = True
def process_query_conditionally(self, query):
paths.append(query._current_path)
-
+
sess = create_session()
dilbert, boss = sess.query(Person).\
options(MyOption()).\
filter(Person.name.in_(['dilbert', 'pointy haired boss'])).\
order_by(Person.name).\
all()
-
+
dilbert.machines
boss.paperwork
eq_(paths,
[(class_mapper(Engineer), 'machines'),
(class_mapper(Boss), 'paperwork')])
-
-
+
+
def test_expire(self):
"""test that individual column refresh doesn't get tripped up by the select_table mapper"""
-
+
sess = create_session()
m1 = sess.query(Manager).filter(Manager.name=='dogbert').one()
sess.expire(m1)
m2 = sess.query(Manager).filter(Manager.name=='pointy haired boss').one()
sess.expire(m2, ['manager_name', 'golf_swing'])
assert m2.golf_swing=='fore'
-
+
def test_with_polymorphic(self):
-
+
sess = create_session()
-
-
+
+
assert_raises(sa_exc.InvalidRequestError, sess.query(Person).with_polymorphic, Paperwork)
assert_raises(sa_exc.InvalidRequestError, sess.query(Engineer).with_polymorphic, Boss)
assert_raises(sa_exc.InvalidRequestError, sess.query(Engineer).with_polymorphic, Person)
-
+
# compare to entities without related collections to prevent additional lazy SQL from firing on
# loaded entities
emps_without_relationships = [
Engineer(name="vlad", engineer_name="vlad", primary_language="cobol", status="elbonian engineer")
]
eq_(sess.query(Person).with_polymorphic('*').all(), emps_without_relationships)
-
-
+
+
def go():
eq_(sess.query(Person).with_polymorphic(Engineer).filter(Engineer.primary_language=='java').all(), emps_without_relationships[0:1])
self.assert_sql_count(testing.db, go, 1)
-
+
sess.expunge_all()
def go():
eq_(sess.query(Person).with_polymorphic('*').all(), emps_without_relationships)
def go():
eq_(sess.query(Person).with_polymorphic(Engineer, people.outerjoin(engineers)).all(), emps_without_relationships)
self.assert_sql_count(testing.db, go, 3)
-
+
sess.expunge_all()
def go():
# limit the polymorphic join down to just "Person", overriding select_table
eq_(sess.query(Person).with_polymorphic(Person).all(), emps_without_relationships)
self.assert_sql_count(testing.db, go, 6)
-
+
def test_relationship_to_polymorphic(self):
assert_result = [
Company(name="MegaCorp, Inc.", employees=[
Engineer(name="vlad", engineer_name="vlad", primary_language="cobol", status="elbonian engineer")
])
]
-
+
sess = create_session()
-
+
def go():
# test load Companies with lazy load to 'employees'
eq_(sess.query(Company).all(), assert_result)
self.assert_sql_count(testing.db, go, {'':9, 'Polymorphic':4}.get(select_type, 5))
-
+
sess = create_session()
def go():
# currently, it doesn't matter if we say Company.employees,
joinedload_all(Company.employees.of_type(Engineer), Engineer.machines
)).all(),
assert_result)
-
+
# in the case of select_type='', the joinedload
# doesn't take effect; it joinedloads company->people,
- # then a load for each of 5 rows, then lazyload of "machines"
+ # then a load for each of 5 rows, then lazyload of "machines"
self.assert_sql_count(testing.db, go,
{'':7, 'Polymorphic':1}.get(select_type, 2)
)
-
+
sess = create_session()
def go():
eq_(
subqueryload_all(Company.employees.of_type(Engineer), Engineer.machines
)).all(),
assert_result)
-
+
self.assert_sql_count(
testing.db, go,
{'':8,
'Polymorphic':3,
'AliasedJoins':4}[select_type]
)
-
+
def test_joinedload_on_subclass(self):
sess = create_session()
def go():
)
self.assert_sql_count(testing.db, go, 2)
-
+
def test_query_subclass_join_to_base_relationship(self):
sess = create_session()
# non-polymorphic
if select_type == '':
eq_(sess.query(Company).select_from(companies.join(people).join(engineers)).filter(Engineer.primary_language=='java').all(), [c1])
eq_(sess.query(Company).join(('employees', people.join(engineers))).filter(Engineer.primary_language=='java').all(), [c1])
-
+
ealias = aliased(Engineer)
eq_(sess.query(Company).join(('employees', ealias)).filter(ealias.primary_language=='java').all(), [c1])
eq_(sess.query(Person).join(Engineer.machines).filter(Machine.name.ilike("%ibm%")).all(), [e1, e3])
eq_(sess.query(Company).join('employees', Engineer.machines).all(), [c1, c2])
eq_(sess.query(Company).join('employees', Engineer.machines).filter(Machine.name.ilike("%thinkpad%")).all(), [c1])
-
+
# non-polymorphic
eq_(sess.query(Engineer).join(Engineer.machines).all(), [e1, e2, e3])
eq_(sess.query(Engineer).join(Engineer.machines).filter(Machine.name.ilike("%ibm%")).all(), [e1, e3])
join('employees', 'paperwork', aliased=aliased).filter(Person.name.in_(['dilbert', 'vlad'])).filter(Paperwork.description.like('%#2%')).all(),
[c1]
)
-
+
eq_(
sess.query(Company).\
join('employees', 'paperwork', aliased=aliased).filter(Person.name.in_(['dilbert', 'vlad'])).filter(Paperwork.description.like('%#%')).all(),
sess.query(Company).join((Engineer, Company.company_id==Engineer.company_id)).filter(Engineer.engineer_name=='vlad').one(),
c2
)
-
-
+
+
def test_filter_on_baseclass(self):
sess = create_session()
eq_(sess.query(Person).all(), all_employees)
eq_(sess.query(Person).first(), all_employees[0])
-
+
eq_(sess.query(Person).filter(Person.person_id==e2.person_id).one(), e2)
-
+
def test_from_alias(self):
sess = create_session()
-
+
palias = aliased(Person)
eq_(
sess.query(palias).filter(palias.name.in_(['dilbert', 'wally'])).all(),
[e1, e2]
)
-
+
def test_self_referential(self):
sess = create_session()
-
+
c1_employees = [e1, e2, b1, m1]
-
+
palias = aliased(Person)
eq_(
sess.query(Person, palias).filter(Person.company_id==palias.company_id).filter(Person.name=='dogbert').\
(m1, b1),
]
)
-
+
def test_nesting_queries(self):
sess = create_session()
-
+
# query.statement places a flag "no_adapt" on the returned statement. This prevents
# the polymorphic adaptation in the second "filter" from hitting it, which would pollute
# the subquery and usually results in recursion overflow errors within the adaptation.
subq = sess.query(engineers.c.person_id).filter(Engineer.primary_language=='java').statement.as_scalar()
-
+
eq_(sess.query(Person).filter(Person.person_id==subq).one(), e1)
-
+
def test_mixed_entities(self):
sess = create_session()
[(Engineer(status=u'elbonian engineer',engineer_name=u'vlad',name=u'vlad',primary_language=u'cobol'),
u'Elbonia, Inc.')]
)
-
-
+
+
eq_(
sess.query(Manager.name).all(),
[('pointy haired boss', ), ('dogbert',)]
row = sess.query(Engineer.name, Engineer.primary_language).filter(Engineer.name=='dilbert').first()
assert row.name == 'dilbert'
assert row.primary_language == 'java'
-
+
eq_(
sess.query(Engineer.name, Engineer.primary_language).all(),
sess.query(Boss.name, Boss.golf_swing).all(),
[(u'pointy haired boss', u'fore')]
)
-
+
# TODO: I think raise error on these for now. different inheritance/loading schemes have different
# results here, all incorrect
#
# sess.query(Person.name, Engineer.primary_language).all(),
# []
# )
-
+
# self.assertEquals(
# sess.query(Person.name, Engineer.primary_language, Manager.manager_name).all(),
# []
(Engineer(status=u'elbonian engineer',engineer_name=u'vlad',name=u'vlad',company_id=2,primary_language=u'cobol',person_id=5,type=u'engineer'), u'Elbonia, Inc.')
]
)
-
+
eq_(
sess.query(Engineer.primary_language, Company.name).join(Company.employees).filter(Person.type=='engineer').order_by(desc(Engineer.primary_language)).all(),
[(u'java', u'MegaCorp, Inc.'), (u'cobol', u'Elbonia, Inc.'), (u'c++', u'MegaCorp, Inc.')]
sess.query(Person.name, Company.name, palias.name).join(Company.employees).filter(Company.name=='Elbonia, Inc.').filter(palias.name=='dilbert').all(),
[(u'vlad', u'Elbonia, Inc.', u'dilbert')]
)
-
+
palias = aliased(Person)
eq_(
sess.query(Person.type, Person.name, palias.type, palias.name).filter(Person.company_id==palias.company_id).filter(Person.name=='dogbert').\
(u'manager', u'dogbert', u'engineer', u'wally'),
(u'manager', u'dogbert', u'boss', u'pointy haired boss')]
)
-
+
eq_(
sess.query(Person.name, Paperwork.description).filter(Person.person_id==Paperwork.person_id).order_by(Person.name, Paperwork.description).all(),
[(u'dilbert', u'tps report #1'), (u'dilbert', u'tps report #2'), (u'dogbert', u'review #2'),
sess.query(func.count(Person.person_id)).filter(Engineer.primary_language=='java').all(),
[(1, )]
)
-
+
eq_(
sess.query(Company.name, func.count(Person.person_id)).filter(Company.company_id==Person.company_id).group_by(Company.name).order_by(Company.name).all(),
[(u'Elbonia, Inc.', 1), (u'MegaCorp, Inc.', 4)]
sess.query(Company.name, func.count(Person.person_id)).join(Company.employees).group_by(Company.name).order_by(Company.name).all(),
[(u'Elbonia, Inc.', 1), (u'MegaCorp, Inc.', 4)]
)
-
-
+
+
PolymorphicQueryTest.__name__ = "Polymorphic%sTest" % select_type
return PolymorphicQueryTest
for select_type in ('', 'Polymorphic', 'Unions', 'AliasedJoins', 'Joins'):
testclass = _produce_test(select_type)
exec("%s = testclass" % testclass.__name__)
-
+
del testclass
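# (illustrative sketch, not part of the original suite) The loop above stamps
# out one concrete test class per "select_type", so the same polymorphic query
# assertions run against each mapping configuration. A minimal, self-contained
# version of that parameterization pattern, using hypothetical names, looks
# like this:
def _produce_sketch_test(flavor):
    class SketchTest(object):
        def test_flavor_is_known(self):
            # the real tests branch on the flavor via dicts keyed by
            # select_type, e.g. {'':14, 'Unions':8, 'Polymorphic':7}.get(select_type, 8)
            assert flavor in ('', 'Polymorphic', 'Unions', 'AliasedJoins', 'Joins')
    SketchTest.__name__ = "Sketch%sTest" % flavor
    return SketchTest

for _flavor in ('', 'Polymorphic', 'Unions', 'AliasedJoins', 'Joins'):
    _sketch_cls = _produce_sketch_test(_flavor)
    globals()[_sketch_cls.__name__] = _sketch_cls
del _sketch_cls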
class SelfReferentialTestJoinedToBase(_base.MappedTest):
run_setup_mappers = 'once'
-
+
@classmethod
def define_tables(cls, metadata):
global people, engineers
polymorphic_identity='engineer', properties={
'reports_to':relationship(Person, primaryjoin=people.c.person_id==engineers.c.reports_to_id)
})
-
+
def test_has(self):
-
+
p1 = Person(name='dogbert')
e1 = Engineer(name='dilbert', primary_language='java', reports_to=p1)
sess = create_session()
sess.add(e1)
sess.flush()
sess.expunge_all()
-
+
eq_(sess.query(Engineer).filter(Engineer.reports_to.has(Person.name=='dogbert')).first(), Engineer(name='dilbert'))
def test_oftype_aliases_in_exists(self):
sess = create_session()
sess.add_all([e1, e2])
sess.flush()
-
+
eq_(sess.query(Engineer).filter(Engineer.reports_to.of_type(Engineer).has(Engineer.name=='dilbert')).first(), e2)
-
+
def test_join(self):
p1 = Person(name='dogbert')
e1 = Engineer(name='dilbert', primary_language='java', reports_to=p1)
sess.add(e1)
sess.flush()
sess.expunge_all()
-
+
eq_(
sess.query(Engineer).join('reports_to', aliased=True).filter(Person.name=='dogbert').first(),
Engineer(name='dilbert'))
Column('primary_language', String(50)),
Column('reports_to_id', Integer, ForeignKey('managers.person_id'))
)
-
+
managers = Table('managers', metadata,
Column('person_id', Integer, ForeignKey('people.person_id'), primary_key=True),
)
def setup_mappers(cls):
mapper(Person, people, polymorphic_on=people.c.type, polymorphic_identity='person')
mapper(Manager, managers, inherits=Person, polymorphic_identity='manager')
-
+
mapper(Engineer, engineers, inherits=Person,
polymorphic_identity='engineer', properties={
'reports_to':relationship(Manager, primaryjoin=managers.c.person_id==engineers.c.reports_to_id, backref='engineers')
eq_(
sess.query(Engineer).join('reports_to', aliased=True).filter(Manager.name=='dogbert').first(),
Engineer(name='dilbert'))
-
+
def test_filter_aliasing(self):
m1 = Manager(name='dogbert')
m2 = Manager(name='foo')
(m1, e1),
]
)
-
+
def test_relationship_compare(self):
m1 = Manager(name='dogbert')
m2 = Manager(name='foo')
[m1]
)
-
+
class M2MFilterTest(_base.MappedTest):
run_setup_mappers = 'once'
run_inserts = 'once'
run_deletes = None
-
+
@classmethod
def define_tables(cls, metadata):
global people, engineers, organizations, engineers_to_org
-
+
organizations = Table('organizations', metadata,
Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('name', String(50)),
Column('org_id', Integer, ForeignKey('organizations.id')),
Column('engineer_id', Integer, ForeignKey('engineers.person_id')),
)
-
+
people = Table('people', metadata,
Column('person_id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('name', String(50)),
global Organization
class Organization(_fixtures.Base):
pass
-
+
mapper(Organization, organizations, properties={
'engineers':relationship(Engineer, secondary=engineers_to_org, backref='organizations')
})
-
+
mapper(Person, people, polymorphic_on=people.c.type, polymorphic_identity='person')
mapper(Engineer, engineers, inherits=Person, polymorphic_identity='engineer')
-
+
@classmethod
def insert_data(cls):
e1 = Engineer(name='e1')
e4 = Engineer(name='e4')
org1 = Organization(name='org1', engineers=[e1, e2])
org2 = Organization(name='org2', engineers=[e3, e4])
-
+
sess = create_session()
sess.add(org1)
sess.add(org2)
sess.flush()
-
+
def test_not_contains(self):
sess = create_session()
-
+
e1 = sess.query(Person).filter(Engineer.name=='e1').one()
-
+
# this works
eq_(sess.query(Organization).filter(~Organization.engineers.of_type(Engineer).contains(e1)).all(), [Organization(name='org2')])
# this had a bug
eq_(sess.query(Organization).filter(~Organization.engineers.contains(e1)).all(), [Organization(name='org2')])
-
+
def test_any(self):
sess = create_session()
eq_(sess.query(Organization).filter(Organization.engineers.of_type(Engineer).any(Engineer.name=='e1')).all(), [Organization(name='org1')])
class SelfReferentialM2MTest(_base.MappedTest, AssertsCompiledSQL):
run_setup_mappers = 'once'
-
+
@classmethod
def define_tables(cls, metadata):
global Parent, Child1, Child2
uselist = False, backref="right_children"
)
-
+
def test_query_crit(self):
session = create_session()
c11, c12, c13 = Child1(), Child1(), Child1()
c21, c22, c23 = Child2(), Child2(), Child2()
-
+
c11.left_child2 = c22
c12.left_child2 = c22
c13.left_child2 = c23
-
+
session.add_all([c11, c12, c13, c21, c22, c23])
session.flush()
-
+
# test that the join to Child2 doesn't alias Child1 in the select
eq_(
set(session.query(Child1).join(Child1.left_child2)),
def test_eager_join(self):
session = create_session()
-
+
c1 = Child1()
c1.left_child2 = Child2()
session.add(c1)
session.flush()
-
+
q = session.query(Child1).options(joinedload('left_child2'))
# test that the splicing of the join works here, doesn't break in the middle of "parent join child1"
# another way to check
assert q.limit(1).with_labels().subquery().count().scalar() == 1
-
+
assert q.first() is c1
-
+
def test_subquery_load(self):
session = create_session()
-
+
c1 = Child1()
c1.left_child2 = Child2()
session.add(c1)
session.flush()
session.expunge_all()
-
+
for row in session.query(Child1).options(subqueryload('left_child2')).all():
assert row.left_child2
-
+
class EagerToSubclassTest(_base.MappedTest):
"""Test joinedloads to subclass mappers"""
mapper(Manager, inherits=Employee, polymorphic_identity='manager')
mapper(Engineer, inherits=Employee, polymorphic_identity='engineer')
mapper(JuniorEngineer, inherits=Engineer, polymorphic_identity='juniorengineer')
-
+
@testing.resolve_artifact_names
def test_single_inheritance(self):
assert session.query(Engineer).all() == [e1, e2]
assert session.query(Manager).all() == [m1]
assert session.query(JuniorEngineer).all() == [e2]
-
+
m1 = session.query(Manager).one()
session.expire(m1, ['manager_data'])
eq_(m1.manager_data, "knows how to manage things")
@testing.resolve_artifact_names
def test_multi_qualification(self):
session = create_session()
-
+
m1 = Manager(name='Tom', manager_data='knows how to manage things')
e1 = Engineer(name='Kurt', engineer_info='knows how to hack')
e2 = JuniorEngineer(name='Ed', engineer_info='oh that ed')
-
+
session.add_all([m1, e1, e2])
session.flush()
session.query(Manager, ealias).all(),
[(m1, e1), (m1, e2)]
)
-
+
eq_(
session.query(Manager.name).all(),
[("Tom",)]
session.query(Manager).add_entity(ealias).all(),
[(m1, e1), (m1, e2)]
)
-
+
eq_(
session.query(Manager.name).add_column(ealias.name).all(),
[("Tom", "Kurt"), ("Tom", "Ed")]
)
-
+
# TODO: I think raise error on this for now
# self.assertEquals(
# session.query(Employee.name, Manager.manager_data, Engineer.engineer_info).all(),
'anon_1 WHERE anon_1.employees_type IN '
'(:type_1, :type_2)',
use_default_dialect=True)
-
+
@testing.resolve_artifact_names
def test_select_from(self):
sess = create_session()
e2 = JuniorEngineer(name='Ed', engineer_info='oh that ed')
sess.add_all([m1, m2, e1, e2])
sess.flush()
-
+
eq_(
sess.query(Manager).select_from(employees.select().limit(10)).all(),
[m1, m2]
)
-
+
@testing.resolve_artifact_names
def test_count(self):
sess = create_session()
eq_(sess.query(Manager).count(), 2)
eq_(sess.query(Engineer).count(), 2)
eq_(sess.query(Employee).count(), 4)
-
+
eq_(sess.query(Manager).filter(Manager.name.like('%m%')).count(), 2)
eq_(sess.query(Employee).filter(Employee.name.like('%m%')).count(), 3)
Column('name', String(50)),
Column('type', String(20)),
)
-
+
Table('employee_stuff', metadata,
Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('employee_id', Integer, ForeignKey('employee.id')),
Column('name', String(50)),
)
-
+
@classmethod
def setup_classes(cls):
class Employee(ComparableEntity):
'stuff':relationship(Stuff)
})
mapper(Stuff, employee_stuff)
-
+
sess = create_session()
context = sess.query(Manager).options(subqueryload('stuff'))._compile_context()
subq = context.attributes[('subquery', (class_mapper(Employee), 'stuff'))]
Column('type', String(20)),
Column('company_id', Integer, ForeignKey('companies.company_id'))
)
-
+
Table('companies', metadata,
Column('company_id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('name', String(50)),
)
-
+
@classmethod
def setup_classes(cls):
class Company(ComparableEntity):
pass
-
+
class Employee(ComparableEntity):
pass
class Manager(Employee):
mapper(Engineer, inherits=Employee, polymorphic_identity='engineer')
mapper(JuniorEngineer, inherits=Engineer, polymorphic_identity='juniorengineer')
sess = sessionmaker()()
-
+
c1 = Company(name='c1')
c2 = Company(name='c2')
-
+
m1 = Manager(name='Tom', manager_data='data1', company=c1)
m2 = Manager(name='Tom2', manager_data='data2', company=c2)
e1 = Engineer(name='Kurt', engineer_info='knows how to hack', company=c2)
mapper(Engineer, inherits=Employee, polymorphic_identity='engineer')
mapper(JuniorEngineer, inherits=Engineer, polymorphic_identity='juniorengineer')
sess = sessionmaker()()
-
+
c1 = Company(name='c1')
c2 = Company(name='c2')
-
+
m1 = Manager(name='Tom', manager_data='data1', company=c1)
m2 = Manager(name='Tom2', manager_data='data2', company=c2)
e1 = Engineer(name='Kurt', engineer_info='knows how to hack', company=c2)
eq_(c1.engineers, [e2])
eq_(c2.engineers, [e1])
-
+
sess.expunge_all()
eq_(sess.query(Company).order_by(Company.name).all(),
[
(Company(name='c2'), Engineer(name='Kurt'))
]
)
-
+
# join() to Company.engineers, Engineer as the requested entity.
# this actually applies the IN criterion twice which is less than ideal.
sess.expunge_all()
]
)
- # this however fails as it does not limit the subtypes to just "Engineer".
+ # this however fails as it does not limit the subtypes to just "Engineer".
# with joins constructed by filter(), we seem to be following a policy where
# we don't try to make decisions on how to join to the target class, whereas when using join() we
# seem to have a lot more capabilities.
]
)
go()
-
+
class SingleOnJoinedTest(MappedTest):
@classmethod
def define_tables(cls, metadata):
global persons_table, employees_table
-
+
persons_table = Table('persons', metadata,
Column('person_id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('name', String(50)),
Column('employee_data', String(50)),
Column('manager_data', String(50)),
)
-
+
def test_single_on_joined(self):
class Person(_fixtures.Base):
pass
pass
class Manager(Employee):
pass
-
+
mapper(Person, persons_table, polymorphic_on=persons_table.c.type, polymorphic_identity='person')
mapper(Employee, employees_table, inherits=Person,polymorphic_identity='engineer')
mapper(Manager, inherits=Employee,polymorphic_identity='manager')
-
+
sess = create_session()
sess.add(Person(name='p1'))
sess.add(Employee(name='e1', employee_data='ed1'))
sess.add(Manager(name='m1', employee_data='ed2', manager_data='md1'))
sess.flush()
sess.expunge_all()
-
+
eq_(sess.query(Person).order_by(Person.person_id).all(), [
Person(name='p1'),
Employee(name='e1', employee_data='ed1'),
Manager(name='m1', employee_data='ed2', manager_data='md1')
])
sess.expunge_all()
-
+
def go():
eq_(sess.query(Person).with_polymorphic('*').order_by(Person.person_id).all(), [
Person(name='p1'),
Manager(name='m1', employee_data='ed2', manager_data='md1')
])
self.assert_sql_count(testing.db, go, 1)
-
+
run_deletes = None
run_inserts = "once"
run_setup_mappers = "once"
-
+
@classmethod
def define_tables(cls, metadata):
-
+
if testing.db.dialect.supports_native_boolean:
false = 'false'
else:
false = "0"
-
+
cls.other_artifacts['false'] = false
Table('owners', metadata ,
arb_result = [row['data_id'] for row in arb_result]
arb_data = arb_data.alias('arb')
-
+
# now query for Data objects using the select above, adding the
# "order by max desc" separately
q = (session.query(Data).
self.assert_(len(o4.mt2) == 1)
self.assert_(o4.mt2[0].a == 'abcde')
self.assert_(o4.mt2[0].b is None)
-
+
def test_state_gc(self):
"""test that InstanceState always has a dict, even after host object gc'ed."""
-
+
class Foo(object):
pass
-
+
attributes.register_class(Foo)
f = Foo()
state = attributes.instance_state(f)
gc_collect()
assert state.obj() is None
assert state.dict == {}
-
+
def test_deferred(self):
class Foo(object):pass
a.email_address = 'foo@bar.com'
u.addresses.append(a)
self.assert_(u.user_id == 7 and u.user_name == 'heythere' and u.addresses[0].email_address == 'lala@123.com' and u.addresses[1].email_address == 'foo@bar.com')
-
+
def test_extension_commit_attr(self):
"""test that an extension which commits attribute history
maintains the end-result history.
-
+
This won't work in conjunction with some unitofwork extensions.
-
+
"""
-
+
class Foo(_base.ComparableEntity):
pass
class Bar(_base.ComparableEntity):
pass
-
+
class ReceiveEvents(AttributeExtension):
def __init__(self, key):
self.key = key
-
+
def append(self, state, child, initiator):
if commit:
state.commit_all(state.dict)
attributes.register_class(Bar)
b1, b2, b3, b4 = Bar(id='b1'), Bar(id='b2'), Bar(id='b3'), Bar(id='b4')
-
+
def loadcollection(**kw):
if kw.get('passive') is attributes.PASSIVE_NO_FETCH:
return attributes.PASSIVE_NO_RESULT
return [b1, b2]
-
+
def loadscalar(**kw):
if kw.get('passive') is attributes.PASSIVE_NO_FETCH:
return attributes.PASSIVE_NO_RESULT
return b2
-
+
attributes.register_attribute(Foo, 'bars',
uselist=True,
useobject=True,
callable_=lambda o:loadcollection,
extension=[ReceiveEvents('bars')])
-
+
attributes.register_attribute(Foo, 'bar',
uselist=False,
useobject=True,
callable_=lambda o:loadscalar,
extension=[ReceiveEvents('bar')])
-
+
attributes.register_attribute(Foo, 'scalar',
uselist=False,
useobject=False, extension=[ReceiveEvents('scalar')])
-
-
+
+
def create_hist():
def hist(key, shouldmatch, fn, *arg):
attributes.instance_state(f1).commit_all(attributes.instance_dict(f1))
hist('scalar', True, setattr, f1, 'scalar', 5)
hist('scalar', True, setattr, f1, 'scalar', None)
hist('scalar', True, setattr, f1, 'scalar', 4)
-
+
histories = []
commit = False
create_hist()
eq_(woc, wic)
else:
ne_(woc, wic)
-
+
def test_extension_lazyload_assertion(self):
class Foo(_base.BasicEntity):
pass
def func1(**kw):
if kw.get('passive') is attributes.PASSIVE_NO_FETCH:
return attributes.PASSIVE_NO_RESULT
-
+
return [bar1, bar2, bar3]
attributes.register_attribute(Foo, 'bars', uselist=True, callable_=lambda o:func1, useobject=True, extension=[ReceiveEvents()])
x = Foo()
assert_raises(AssertionError, Bar(id=4).foos.append, x)
-
+
x.bars
b = Bar(id=4)
b.foos.append(x)
attributes.instance_state(x).expire_attributes(attributes.instance_dict(x), ['bars'])
assert_raises(AssertionError, b.foos.remove, x)
-
-
+
+
def test_scalar_listener(self):
# listeners on ScalarAttributeImpl and MutableScalarAttributeImpl aren't used normally.
# test that they work for the benefit of user extensions
class Foo(object):
pass
-
+
results = []
class ReceiveEvents(AttributeExtension):
def append(self, state, child, initiator):
def set(self, state, child, oldchild, initiator):
results.append(("set", state.obj(), child, oldchild))
return child
-
+
attributes.register_class(Foo)
attributes.register_attribute(Foo, 'x', uselist=False, mutable_scalars=False, useobject=False, extension=ReceiveEvents())
attributes.register_attribute(Foo, 'y', uselist=False, mutable_scalars=True, useobject=False, copy_function=lambda x:x, extension=ReceiveEvents())
-
+
f = Foo()
f.x = 5
f.x = 17
f.y = [1,2,3]
f.y = [4,5,6]
del f.y
-
+
eq_(results, [
('set', f, 5, None),
('set', f, 17, 5),
('set', f, [4,5,6], [1,2,3]),
('remove', f, [4,5,6])
])
-
-
+
+
def test_lazytrackparent(self):
"""test that the "hasparent" flag works properly
when lazy loaders and backrefs are used
-
+
"""
class Post(object):pass
attributes.unregister_attribute(Foo, "collection")
assert not attributes.manager_of_class(Foo).is_instrumented("collection")
-
+
try:
attributes.register_attribute(Foo, "collection", uselist=True, typecallable=dict, useobject=True)
assert False
class Bar(object):
pass
-
+
attributes.register_class(Foo)
attributes.register_class(Bar)
attributes.register_attribute(Foo, "coll", uselist=True, useobject=True)
-
+
f1 = Foo()
b1 = Bar()
b2 = Bar()
eq_(attributes.get_history(f1, "coll"), ([b1], [], []))
attributes.set_committed_value(f1, "coll", [b2])
eq_(attributes.get_history(f1, "coll"), ((), [b2], ()))
-
+
attributes.del_attribute(f1, "coll")
assert "coll" not in f1.__dict__
a token that is global to all InstrumentedAttribute objects
within a particular class, not just the individual IA object
since we use distinct objects in an inheritance scenario.
-
+
"""
class Parent(object):
pass
p_token = object()
c_token = object()
-
+
attributes.register_class(Parent)
attributes.register_class(Child)
attributes.register_class(SubChild)
extension=attributes.GenericBackrefExtension('child'),
parent_token = c_token,
useobject=True)
-
+
p1 = Parent()
c1 = Child()
p1.child = c1
-
+
c2 = SubChild()
c2.parent = p1
p_token = object()
c_token = object()
-
+
attributes.register_class(Parent)
attributes.register_class(SubParent)
attributes.register_class(Child)
extension=attributes.GenericBackrefExtension('children'),
parent_token = c_token,
useobject=True)
-
+
p1 = Parent()
p2 = SubParent()
c1 = Child()
-
+
p1.children.append(c1)
assert c1.parent is p1
assert c1 in p1.children
-
+
p2.children.append(c1)
assert c1.parent is p2
-
+
# note it's still in p1.children -
# the event model currently allows only
# one level deep. without the parent_token,
# it keeps going until a ValueError is raised
# and this condition changes.
assert c1 in p1.children
-
-
-
-
+
+
+
+
class PendingBackrefTest(_base.ORMTest):
def setup(self):
global Post, Blog, called, lazy_load
b = Blog("blog 1")
p = Post("post 4")
-
+
p.blog = b
p = Post("post 5")
p.blog = b
# calling backref calls the callable, populates extra posts
assert b.posts == [p1, p2, p3, Post("post 4"), Post("post 5")]
assert called[0] == 1
-
+
def test_lazy_history(self):
global lazy_load
p1, p2, p3 = Post("post 1"), Post("post 2"), Post("post 3")
lazy_load = [p1, p2, p3]
-
+
b = Blog("blog 1")
p = Post("post 4")
p.blog = b
-
+
p4 = Post("post 5")
p4.blog = b
assert called[0] == 0
f = Foo()
f.someattr = 3
eq_(Foo.someattr.impl.get_committed_value(attributes.instance_state(f), attributes.instance_dict(f)), None)
-
+
attributes.instance_state(f).commit(attributes.instance_dict(f), ['someattr'])
eq_(Foo.someattr.impl.get_committed_value(attributes.instance_state(f), attributes.instance_dict(f)), 3)
eq_(attributes.get_state_history(attributes.instance_state(f), 'someattr'), (['one'], (), ()))
f.someattr = 'two'
eq_(attributes.get_state_history(attributes.instance_state(f), 'someattr'), (['two'], (), ()))
-
-
+
+
def test_mutable_scalar(self):
class Foo(_base.BasicEntity):
pass
attributes.instance_state(f).commit_all(attributes.instance_dict(f))
eq_(attributes.get_state_history(attributes.instance_state(f), 'someattr'), ((), [hi, there, hi], ()))
-
+
f.someattr = []
eq_(attributes.get_state_history(attributes.instance_state(f), 'someattr'), ([], [], [hi, there, hi]))
-
+
def test_collections_via_backref(self):
class Foo(_base.BasicEntity):
pass
class ListenerTest(_base.ORMTest):
def test_receive_changes(self):
"""test that Listeners can mutate the given value.
-
+
This is a rudimentary test which would be better served by full-blown inclusion
into collection.py.
-
+
"""
class Foo(object):
pass
attributes.register_attribute(Foo, 'barlist', uselist=True, useobject=True, extension=AlteringListener())
attributes.register_attribute(Foo, 'barset', typecallable=set, uselist=True, useobject=True, extension=AlteringListener())
attributes.register_attribute(Bar, 'data', uselist=False, useobject=False)
-
+
f1 = Foo()
f1.data = "some data"
eq_(f1.data, "some data modified")
f1.barlist.append(b1)
assert b1.data == "some bar"
assert f1.barlist[0].data == "some bar appended"
-
+
f1.barset.add(b1)
assert f1.barset.pop().data == "some bar appended"
-
-
+
+
u2= User(name='ed')
sess.add_all([u1, a1, a2, a3])
sess.commit()
-
+
#u1.addresses
-
+
def go():
u2.addresses.append(a1)
u2.addresses.append(a2)
u2.addresses.append(a3)
self.assert_sql_count(testing.db, go, 0)
-
+
@testing.resolve_artifact_names
def test_collection_move_preloaded(self):
sess = sessionmaker()()
# backref fires
assert a1.user is u2
-
+
# u1.addresses wasn't loaded,
# so when it loads, it's correct
assert a1 not in u1.addresses
# backref fires
assert a1.user is u2
-
+
# everything expires, no changes in
# u1.addresses, so all is fine
sess.commit()
@testing.resolve_artifact_names
def test_plain_load_passive(self):
"""test that many-to-one set doesn't load the old value."""
-
+
sess = sessionmaker()()
u1 = User(name='jack')
u2 = User(name='ed')
def go():
a1.user = u2
self.assert_sql_count(testing.db, go, 0)
-
+
assert a1 not in u1.addresses
assert a1 in u2.addresses
-
+
@testing.resolve_artifact_names
def test_set_none(self):
sess = sessionmaker()()
def go():
a1.user = None
self.assert_sql_count(testing.db, go, 0)
-
+
assert a1 not in u1.addresses
-
-
-
+
+
+
@testing.resolve_artifact_names
def test_scalar_move_notloaded(self):
sess = sessionmaker()()
# "old" u1 here allows the backref
# to remove it from the addresses collection
a1.user = u2
-
+
sess.commit()
assert a1 not in u1.addresses
assert a1 in u2.addresses
# load a1.user
a1.user
-
+
# reassign
a2.user = u1
# backref fires
assert u1.address is a2
-
+
# stays on both sides
assert a1.user is u1
assert a2.user is u1
# backref fires
assert a1.user is u2
-
+
# u1.address loads now after a flush
assert u1.address is None
assert u2.address is a1
# load
assert a1.user is u1
-
+
# reassign
a2.user = u1
# didn't work this way though
assert a1.user is u1
-
+
# moves appropriately after commit
sess.commit()
assert u1.address is a2
sess = sessionmaker()()
a1 = Address(email_address="address1")
u1 = User(name='jack', address=a1)
-
+
sess.add(u1)
sess.commit()
sess.expunge(u1)
-
+
u2= User(name='ed')
# the _SingleParent extension sets the backref get to "active" !
# u1 gets loaded and deleted
u2.address = a1
sess.commit()
assert sess.query(User).count() == 1
-
-
+
+
class M2MScalarMoveTest(_fixtures.FixtureTest):
run_inserts = None
'keyword':relationship(Keyword, secondary=item_keywords, uselist=False, backref=backref("item", uselist=False))
})
mapper(Keyword, keywords)
-
+
@testing.resolve_artifact_names
def test_collection_move_preloaded(self):
sess = sessionmaker()()
-
+
k1 = Keyword(name='k1')
i1 = Item(description='i1', keyword=k1)
i2 = Item(description='i2')
sess.add_all([i1, i2, k1])
sess.commit() # everything is expired
-
+
# load i1.keyword
assert i1.keyword is k1
-
+
i2.keyword = k1
assert k1.item is i2
-
+
# nothing happens.
assert i1.keyword is k1
assert i2.keyword is k1
assert o3 in sess
sess.commit()
-
+
@testing.resolve_artifact_names
def test_delete(self):
sess = create_session()
def test_cascade_nosideeffects(self):
"""test that cascade leaves the state of unloaded
scalars/collections unchanged."""
-
+
sess = create_session()
u = User(name='jack')
sess.add(u)
assert 'orders' not in u.__dict__
sess.flush()
-
+
assert 'orders' not in u.__dict__
a = Address(email_address='foo@bar.com')
assert 'user' not in a.__dict__
a.user = u
sess.flush()
-
+
d = Dingaling(data='d1')
d.address_id = a.id
sess.add(d)
assert 'address' not in d.__dict__
sess.flush()
assert d.address is a
-
+
@testing.resolve_artifact_names
def test_cascade_delete_plusorphans(self):
sess = create_session()
class O2OCascadeTest(_fixtures.FixtureTest):
run_inserts = None
-
+
@classmethod
@testing.resolve_artifact_names
def setup_mappers(cls):
u1.address = a2
assert u1.address is not a1
assert a1.user is None
-
-
-
+
+
+
class O2MBackrefTest(_fixtures.FixtureTest):
run_inserts = None
class NoSaveCascadeTest(_fixtures.FixtureTest):
"""test that backrefs don't force save-update cascades to occur
when the cascade is initiated from the forwards side."""
-
+
@testing.resolve_artifact_names
def test_unidirectional_cascade_o2m(self):
mapper(Order, orders)
orders = relationship(
Order, backref=backref("user", cascade=None))
))
-
+
sess = create_session()
-
+
o1 = Order()
sess.add(o1)
u1 = User(orders=[o1])
assert u1 not in sess
assert o1 in sess
-
+
sess.expunge_all()
-
+
o1 = Order()
u1 = User(orders=[o1])
sess.add(o1)
'user':relationship(User, backref=backref("orders", cascade=None))
})
mapper(User, users)
-
+
sess = create_session()
-
+
u1 = User()
sess.add(u1)
o1 = Order()
o1.user = u1
assert o1 not in sess
assert u1 in sess
-
+
sess.expunge_all()
u1 = User()
i1.keywords.append(k1)
assert i1 in sess
assert k1 not in sess
-
+
sess.expunge_all()
-
+
i1 = Item()
k1 = Keyword()
sess.add(i1)
k1.items.append(i1)
assert i1 in sess
assert k1 not in sess
-
-
+
+
class O2MCascadeNoOrphanTest(_fixtures.FixtureTest):
run_inserts = None
pass
class Foo(_fixtures.Base):
pass
-
+
@classmethod
@testing.resolve_artifact_names
def setup_mappers(cls):
"""test a bug introduced by r6711"""
sess = sessionmaker(expire_on_commit=True)()
-
-
+
+
u1 = User(name='jack', foo=Foo(data='f1'))
sess.add(u1)
sess.commit()
attributes.get_history(u1, 'foo'),
([None], (), [attributes.PASSIVE_NO_RESULT])
)
-
+
sess.add(u1)
assert u1 in sess
sess.commit()
sess = sessionmaker(expire_on_commit=False)()
p1, p2 = Pref(data='p1'), Pref(data='p2')
-
-
+
+
u = User(name='jack', pref=p1)
sess.add(u)
sess.commit()
sess.close()
u.pref = p2
-
+
sess.add(u)
assert p1 in sess
assert p2 in sess
test_needs_autoincrement=True),
Column('data',String(50)),
Column('t2id', Integer, ForeignKey('t2.id')))
-
+
Table('t2', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('data',String(50)),
Column('t3id', Integer, ForeignKey('t3.id')))
-
+
Table('t3', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
test_needs_autoincrement=True),
Column('data', String(50)),
Column('t2id', Integer, ForeignKey('t2.id')))
-
+
Table('t2', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('data', String(50)),
Column('t3id', Integer, ForeignKey('t3.id')))
-
+
Table('t3', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
def test_single_parent_raise(self):
sess = create_session()
-
+
y = T2(data='T2a')
x = T1(data='T1a', t2=y)
assert_raises(sa_exc.InvalidRequestError, T1, data='T1b', t2=y)
def test_single_parent_backref(self):
sess = create_session()
-
+
y = T3(data='T3a')
x = T2(data='T2a', t3=y)
# cant attach the T3 to another T2
assert_raises(sa_exc.InvalidRequestError, T2, data='T2b', t3=y)
-
+
# set via backref though is OK, unsets from previous parent
# first
z = T2(data='T2b')
test_needs_autoincrement=True),
Column('data', String(30)),
test_needs_fk=True
-
+
)
Table('atob', metadata,
Column('aid', Integer, ForeignKey('a.id')),
Column('bid', Integer, ForeignKey('b.id')),
test_needs_fk=True
-
+
)
Table('c', metadata,
Column('id', Integer, primary_key=True,
Column('data', String(30)),
Column('bid', Integer, ForeignKey('b.id')),
test_needs_fk=True
-
+
)
@classmethod
sess = create_session()
b1 =B(data='b1')
a1 = A(data='a1', bs=[b1])
-
+
assert_raises(sa_exc.InvalidRequestError,
A, data='a2', bs=[b1]
)
@testing.resolve_artifact_names
def test_single_parent_backref(self):
"""test that setting m2m via a uselist=False backref bypasses the single_parent raise"""
-
+
mapper(A, a, properties={
'bs':relationship(B,
secondary=atob,
sess = create_session()
b1 =B(data='b1')
a1 = A(data='a1', bs=[b1])
-
+
assert_raises(
sa_exc.InvalidRequestError,
A, data='a2', bs=[b1]
)
-
+
a2 = A(data='a2')
b1.a = a2
assert b1 not in a1.bs
'addresses':relationship(Address, backref='user',
cascade_backrefs=False)
})
-
+
mapper(Dingaling, dingalings, properties={
'address' : relationship(Address, backref='dingalings',
cascade_backrefs=False)
@testing.resolve_artifact_names
def test_o2m(self):
sess = Session()
-
+
u1 = User(name='u1')
sess.add(u1)
-
+
a1 = Address(email_address='a1')
a1.user = u1
assert a1 not in sess
sess.commit()
-
+
assert a1 not in sess
-
+
sess.add(a1)
-
+
d1 = Dingaling()
d1.address = a1
assert d1 in a1.dingalings
a1 = Address(email_address='a1')
d1 = Dingaling()
sess.add(d1)
-
+
a1.dingalings.append(d1)
assert a1 not in sess
-
+
a2 = Address(email_address='a2')
sess.add(a2)
-
+
u1 = User(name='u1')
u1.addresses.append(a2)
assert u1 in sess
def test_double_parent_expunge_o2m(self):
"""test the delete-orphan uow event for multiple delete-orphan
parent relationships."""
-
+
class Customer(_fixtures.Base):
pass
class Account(_fixtures.Base):
assert c not in s, \
'Should expunge customer when both parents are gone'
-
+
class DoubleParentOrphanTest(_base.MappedTest):
"""test orphan detection for an entity with two parent relationships"""
class O2MConflictTest(_base.MappedTest):
"""test that O2M dependency detects a change in parent, does the
right thing, and even updates the collection/attribute.
-
+
"""
-
+
@classmethod
def define_tables(cls, metadata):
Table("parent", metadata,
Column('parent_id', Integer, ForeignKey('parent.id'),
nullable=False)
)
-
+
@classmethod
def setup_classes(cls):
class Parent(_base.ComparableEntity):
pass
class Child(_base.ComparableEntity):
pass
-
+
@testing.resolve_artifact_names
def _do_delete_old_test(self):
sess = create_session()
-
+
p1, p2, c1 = Parent(), Parent(), Child()
if Parent.child.property.uselist:
p1.child.append(c1)
p1.child = c1
sess.add_all([p1, c1])
sess.flush()
-
+
sess.delete(p1)
-
+
if Parent.child.property.uselist:
p2.child.append(c1)
else:
sess.flush()
eq_(sess.query(Child).filter(Child.parent_id==p2.id).all(), [c1])
-
+
@testing.resolve_artifact_names
def test_o2o_delete_old(self):
mapper(Parent, parent, properties={
mapper(Child, child)
self._do_delete_old_test()
self._do_move_test()
-
+
@testing.resolve_artifact_names
def test_o2o_delcascade_delete_old(self):
mapper(Parent, parent, properties={
})
self._do_delete_old_test()
self._do_move_test()
-
+
class PartialFlushTest(_base.MappedTest):
"""test cascade behavior as it relates to object lists passed to flush().
-
+
"""
@classmethod
def define_tables(cls, metadata):
@testing.resolve_artifact_names
def test_circular_sort(self):
"""test ticket 1306"""
-
+
class Base(_base.ComparableEntity):
pass
class Parent(Base):
c1, c2, c3 = Child(), Child(), Child()
p1.children = [c1, c2, c3]
sess.add(p1)
-
+
sess.flush([c1])
assert p1 in sess.new
assert c1 not in sess.new
assert c2 in sess.new
-
+
def invalid():
direct[slice(0, 6, 2)] = [creator()]
assert_raises(ValueError, invalid)
-
+
if hasattr(direct, '__delitem__'):
e = creator()
direct.append(e)
del direct[::2]
del control[::2]
assert_eq()
-
+
if hasattr(direct, 'remove'):
e = creator()
direct.append(e)
direct.remove(e)
control.remove(e)
assert_eq()
-
+
if hasattr(direct, '__setitem__') or hasattr(direct, '__setslice__'):
-
+
values = [creator(), creator()]
direct[:] = values
control[:] = values
assert_eq()
-
+
# test slice assignment where
# slice size goes over the number of items
values = [creator(), creator()]
direct[1:3] = values
control[1:3] = values
assert_eq()
-
+
values = [creator(), creator()]
direct[0:1] = values
control[0:1] = values
direct[1::2] = values
control[1::2] = values
assert_eq()
-
+
values = [creator(), creator()]
direct[-1:-3] = values
control[-1:-3] = values
direct[-2:-1] = values
control[-2:-1] = values
assert_eq()
-
+
if hasattr(direct, '__delitem__') or hasattr(direct, '__delslice__'):
for i in range(1, 4):
del direct[:]
del control[:]
assert_eq()
-
+
if hasattr(direct, 'extend'):
values = [creator(), creator(), creator()]
self._test_list_bulk(list)
def test_list_setitem_with_slices(self):
-
+
# this is a "list" that has no __setslice__
# or __delslice__ methods. The __setitem__
# and __delitem__ must therefore accept
p = session.query(Parent).get(pid)
-
+
eq_(set(p.children.keys()), set(['foo', 'bar']))
cid = p.children['foo'].id
def test_declarative_column_mapped(self):
"""test that uncompiled attribute usage works with column_mapped_collection"""
-
+
from sqlalchemy.ext.declarative import declarative_base
BaseObject = declarative_base()
__tablename__ = "foo"
id = Column(Integer(), primary_key=True, test_needs_autoincrement=True)
bar_id = Column(Integer, ForeignKey('bar.id'))
-
+
class Bar(BaseObject):
__tablename__ = "bar"
id = Column(Integer(), primary_key=True, test_needs_autoincrement=True)
foos = relationship(Foo, collection_class=collections.column_mapped_collection(Foo.id))
foos2 = relationship(Foo, collection_class=collections.column_mapped_collection((Foo.id, Foo.bar_id)))
-
+
eq_(Bar.foos.property.collection_class().keyfunc(Foo(id=3)), 3)
eq_(Bar.foos2.property.collection_class().keyfunc(Foo(id=3, bar_id=12)), (3, 12))
"for argument 'mapping_spec'; got: 'a'",
collections.column_mapped_collection,
text('a'))
-
-
+
+
@testing.resolve_artifact_names
def test_column_mapped_collection(self):
collection_class = collections.column_mapped_collection(
return self.data == other
def __repr__(self):
return 'ListLike(%s)' % repr(self.data)
-
+
self._test_list(ListLike)
-
+
@testing.resolve_artifact_names
def _test_list(self, listcls):
class Parent(object):
o = [Child()]
control[1:3] = o
-
+
p.children[1:3] = o
assert control == p.children
assert control == list(p.children)
'start':sa.orm.composite(Point, edges.c.x1, edges.c.y1),
'end': sa.orm.composite(Point, edges.c.x2, edges.c.y2)
})
-
+
@testing.resolve_artifact_names
def _fixture(self):
sess = Session()
sess.add(g)
sess.commit()
return sess
-
+
@testing.resolve_artifact_names
def test_round_trip(self):
g1 = sess.query(Graph).first()
sess.close()
-
+
g = sess.query(Graph).get(g1.id)
eq_(
[(e.start, e.end) for e in g.edges],
(Point(3, 4), Point(5, 6)),
(Point(14, 5), Point(2, 7)),
]
- )
+ )
@testing.resolve_artifact_names
def test_detect_change(self):
sess = self._fixture()
-
+
g = sess.query(Graph).first()
g.edges[1].end = Point(18, 4)
sess.commit()
def test_detect_mutation(self):
# only on 0.6
sess = self._fixture()
-
+
g = sess.query(Graph).first()
g.edges[1].end.x = 18
g.edges[1].end.y = 4
e = sess.query(Edge).get(g.edges[1].id)
eq_(e.end, Point(18, 4))
-
+
@testing.resolve_artifact_names
def test_eager_load(self):
sess = self._fixture()
g = sess.query(Graph).first()
sess.close()
-
+
def go():
g2 = sess.query(Graph).\
options(sa.orm.joinedload('edges')).\
get(g.id)
-
+
eq_(
[(e.start, e.end) for e in g2.edges],
[
(Point(3, 4), Point(5, 6)),
(Point(14, 5), Point(2, 7)),
]
- )
+ )
self.assert_sql_count(testing.db, go, 1)
-
+
@testing.resolve_artifact_names
def test_comparator(self):
sess = self._fixture()
sess.query(Edge.start, Edge.end).all(),
[(3, 4, 5, 6), (14, 5, 2, 7)]
)
-
+
# @testing.resolve_artifact_names
# def test_delete(self):
# only on 0.7
# sess = self._fixture()
# g = sess.query(Graph).first()
-
+
# e = g.edges[1]
# del e.end
# sess.flush()
def test_set_none(self):
sess = self._fixture()
g = sess.query(Graph).first()
-
+
e = g.edges[1]
e.end = None
sess.flush()
# only on 0.6
sess = self._fixture()
g = sess.query(Graph).first()
-
+
e = g.edges[1]
e.end.x = e.end.y = None
sess.flush()
@testing.resolve_artifact_names
def test_save_null(self):
"""test saving a null composite value
-
+
See google groups thread for more context:
http://groups.google.com/group/sqlalchemy/browse_thread/thread/0c6580a1761b2c29
-
+
"""
sess = Session()
g = Graph(id=1)
e = Edge(None, None)
g.edges.append(e)
-
+
sess.add(g)
sess.commit()
-
+
g2 = sess.query(Graph).get(1)
assert g2.edges[-1].start.x is None
assert g2.edges[-1].start.y is None
test_needs_autoincrement=True),
Column('version_id', Integer, primary_key=True, nullable=True),
Column('name', String(30)))
-
+
@classmethod
@testing.resolve_artifact_names
def setup_mappers(cls):
mapper(Graph, graphs, properties={
'version':sa.orm.composite(Version, graphs.c.id,
graphs.c.version_id)})
-
+
@testing.resolve_artifact_names
def _fixture(self):
sess.add(g)
sess.commit()
return sess
-
+
@testing.resolve_artifact_names
def test_get_by_col(self):
sess = self._fixture()
g = sess.query(Graph).first()
-
+
g2 = sess.query(Graph).get([g.version.id, g.version.version])
eq_(g.version, g2.version)
def test_get_by_composite(self):
sess = self._fixture()
g = sess.query(Graph).first()
-
+
g2 = sess.query(Graph).get(Version(g.version.id, g.version.version))
eq_(g.version, g2.version)
@testing.resolve_artifact_names
def test_null_pk(self):
sess = Session()
-
+
# test pk with one column NULL
# only sqlite can really handle this
g = Graph(Version(2, None))
sess.commit()
g2 = sess.query(Graph).filter_by(version=Version(2, None)).one()
eq_(g.version, g2.version)
-
+
class DefaultsTest(_base.MappedTest):
@classmethod
foobars.c.x3,
foobars.c.x4)
))
-
+
@testing.resolve_artifact_names
def test_attributes_with_defaults(self):
sess.flush()
assert f1.foob == FBComposite(2, 5, 15, None)
-
+
f2 = Foobar()
sess.add(f2)
sess.flush()
assert f2.foob == FBComposite(2, None, 15, None)
-
+
@testing.resolve_artifact_names
def test_set_composite_values(self):
sess = Session()
f1.foob = FBComposite(None, 5, None, None)
sess.add(f1)
sess.flush()
-
+
assert f1.foob == FBComposite(2, 5, 15, None)
-
+
class MappedSelectTest(_base.MappedTest):
@classmethod
def define_tables(cls, metadata):
Column('v1', String(20)),
Column('v2', String(20)),
)
-
+
@classmethod
@testing.resolve_artifact_names
def setup_mappers(cls):
desc_values.c.v2),
})
-
+
@testing.resolve_artifact_names
def test_set_composite_attrs_via_selectable(self):
session = Session()
test that the circular dependency sort can assemble a many-to-one
dependency processor when only the object on the "many" side is
- actually in the list of modified objects.
+ actually in the list of modified objects.
"""
mapper(C1, t1, properties={
mapper(C1, t1, properties={
'children':relationship(C1)
})
-
+
sess = create_session()
c1 = C1()
c2 = C1()
sess.add(c1)
sess.flush()
assert c2.parent_c1 == c1.c1
-
+
sess.delete(c1)
sess.flush()
assert c2.parent_c1 is None
-
+
sess.expire_all()
assert c2.parent_c1 is None
-
+
class SelfReferentialNoPKTest(_base.MappedTest):
"""A self-referential relationship that joins on a column other than the primary key column"""
class BiDirectionalManyToOneTest(_base.MappedTest):
run_define_tables = 'each'
-
+
@classmethod
def define_tables(cls, metadata):
Table('t1', metadata,
"""
run_define_tables = 'each'
-
+
@classmethod
def define_tables(cls, metadata):
Table('ball', metadata,
@testing.resolve_artifact_names
def test_post_update_backref(self):
"""test bidirectional post_update."""
-
+
mapper(Ball, ball)
mapper(Person, person, properties=dict(
balls=relationship(Ball,
favorite=relationship(Ball,
primaryjoin=person.c.favorite_ball_id == ball.c.id,
remote_side=person.c.favorite_ball_id)
-
+
))
-
+
sess = sessionmaker()()
p1 = Person(data='p1')
p2 = Person(data='p2')
p3 = Person(data='p3')
-
+
b1 = Ball(data='b1')
-
+
b1.person = p1
sess.add_all([p1, p2, p3])
sess.commit()
-
+
# switch here. the post_update
# on ball.person can't get tripped up
# by the fact that there's a "reverse" prop.
eq_(
p3, b1.person
)
-
+
@testing.resolve_artifact_names
),
)
-
+
sess.delete(p)
-
+
self.assert_sql_execution(testing.db, sess.flush,
CompiledSQL("UPDATE ball SET person_id=:person_id "
"WHERE ball.id = :ball_id",
class SelfReferentialPostUpdateTest(_base.MappedTest):
"""Post_update on a single self-referential mapper.
-
-
+
+
"""
@classmethod
session.flush()
remove_child(root, cats)
-
+
# pre-trigger lazy loader on 'cats' to make the test easier
cats.children
self.assert_sql_execution(
"WHERE node.id = :node_id",
lambda ctx:{'next_sibling_id':None, 'node_id':cats.id}),
),
-
+
CompiledSQL("DELETE FROM node WHERE node.id = :id",
lambda ctx:[{'id':cats.id}])
)
mapper(Child, child, properties={
'parent':relationship(Child, remote_side=child.c.id)
})
-
+
session = create_session()
p1 = Parent('p1')
c1 = Child('c1')
p1.children = [c1, c2]
c2.parent = c1
p1.child = c2
-
+
session.add_all([p1, c1, c2])
session.flush()
p2.children = [c3]
p2.child = c3
session.add(p2)
-
+
session.delete(c2)
p1.children.remove(c2)
p1.child = None
session.flush()
-
+
p2.child = None
session.flush()
-
+
class PostUpdateBatchingTest(_base.MappedTest):
"""test that lots of post update cols batch together into a single UPDATE."""
-
+
@classmethod
def define_tables(cls, metadata):
Table('parent', metadata,
Column('name', String(50), nullable=False),
Column('parent_id', Integer,
ForeignKey('parent.id'), nullable=False))
-
+
Table('child3', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
class Child3(_base.BasicEntity):
def __init__(self, name=''):
self.name = name
-
+
@testing.resolve_artifact_names
def test_one(self):
mapper(Parent, parent, properties={
'c1s':relationship(Child1, primaryjoin=child1.c.parent_id==parent.c.id),
'c2s':relationship(Child2, primaryjoin=child2.c.parent_id==parent.c.id),
'c3s':relationship(Child3, primaryjoin=child3.c.parent_id==parent.c.id),
-
+
'c1':relationship(Child1, primaryjoin=child1.c.id==parent.c.c1_id, post_update=True),
'c2':relationship(Child2, primaryjoin=child2.c.id==parent.c.c2_id, post_update=True),
'c3':relationship(Child3, primaryjoin=child3.c.id==parent.c.c3_id, post_update=True),
mapper(Child1, child1)
mapper(Child2, child2)
mapper(Child3, child3)
-
+
sess = create_session()
-
+
p1 = Parent('p1')
c11, c12, c13 = Child1('c1'), Child1('c2'), Child1('c3')
c21, c22, c23 = Child2('c1'), Child2('c2'), Child2('c3')
c31, c32, c33 = Child3('c1'), Child3('c2'), Child3('c3')
-
+
p1.c1s = [c11, c12, c13]
p1.c2s = [c21, c22, c23]
p1.c3s = [c31, c32, c33]
sess.add(p1)
sess.flush()
-
+
p1.c1 = c12
p1.c2 = c23
p1.c3 = c31
lambda ctx: {'c2_id': None, 'parent_id': p1.id, 'c1_id': None, 'c3_id': None}
)
)
-
\ No newline at end of file
),
):
ins.execute_at('after-create', dt)
-
+
sa.DDL("DROP TRIGGER dt_ins").execute_at('before-drop', dt)
for up in (
Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('col1', String(20), default="hello"),
)
-
+
@testing.resolve_artifact_names
def test_exclude(self):
class Foo(_base.ComparableEntity):
pass
mapper(Foo, dt, exclude_properties=('col1',))
-
+
f1 = Foo()
sess = create_session()
sess.add(f1)
sess.flush()
eq_(dt.select().execute().fetchall(), [(1, "hello")])
-
+
@testing.resolve_artifact_names
def test_override_get(self):
"""MapperExtension.get()
-
+
x = session.query.get(5)
-
+
"""
from sqlalchemy.orm.query import Query
cache = {}
x = super(MyQuery, self).get(ident)
cache[ident] = x
return x
-
+
session = sessionmaker(query_cls=MyQuery)()
-
+
ad1 = session.query(Address).get(1)
assert ad1 in cache.values()
-
+
@testing.resolve_artifact_names
def test_load(self):
"""x = session.query(Address).load(1)
-
+
x = session.load(Address, 1)
-
+
"""
session = create_session()
ad1 = session.query(Address).populate_existing().get(1)
assert bool(ad1)
-
-
+
+
@testing.resolve_artifact_names
def test_apply_max(self):
"""Query.apply_max(col)
q = sess.query(User)
u = q.filter(User.id==7).first()
-
+
eq_([User(id=7,
addresses=[Address(id=1, email_address='jack@bean.com')])],
q.filter(User.id==7).all())
def test_statement(self):
"""test that the .statement accessor returns the actual statement that
would render, without any _clones called."""
-
+
mapper(User, users, properties={
'addresses':dynamic_loader(mapper(Address, addresses))
})
"addresses WHERE :param_1 = addresses.user_id",
use_default_dialect=True
)
-
+
@testing.resolve_artifact_names
def test_order_by(self):
mapper(User, users, properties={
"Dynamic attributes don't support collection population.",
attributes.set_committed_value, u1, 'addresses', []
)
-
+
@testing.resolve_artifact_names
def test_m2m(self):
mapper(Order, orders, properties={
o.items.filter(order_items.c.item_id==2).all(),
[Item(id=2)]
)
-
-
+
+
@testing.resolve_artifact_names
def test_transient_detached(self):
mapper(User, users, properties={
sess = create_session()
sess.query(User).all()
m.add_property("addresses", relationship(mapper(Address, addresses)))
-
+
sess.expunge_all()
def go():
eq_(
sess.query(User).options(joinedload('addresses')).filter(User.id==7).all()
)
self.assert_sql_count(testing.db, go, 1)
-
-
+
+
@testing.resolve_artifact_names
def test_no_orphan(self):
"""An eagerly loaded child object is not marked as an orphan"""
-
+
mapper(User, users, properties={
'addresses':relationship(Address, cascade="all,delete-orphan", lazy='joined')
})
def test_orderby_related(self):
"""A regular mapper select on a single table can
order by a relationship to a second table"""
-
+
mapper(Address, addresses)
mapper(User, users, properties = dict(
addresses = relationship(Address, lazy='joined', order_by=addresses.c.id),
@testing.resolve_artifact_names
def test_disable_dynamic(self):
"""test no joined option on a dynamic."""
-
+
mapper(User, users, properties={
'addresses':relationship(Address, lazy="dynamic")
})
mapper(Address, addresses)
mapper(Order, orders)
-
+
open_mapper = mapper(Order, openorders, non_primary=True)
closed_mapper = mapper(Order, closedorders, non_primary=True)
-
+
mapper(User, users, properties = dict(
addresses = relationship(Address, lazy='joined', order_by=addresses.c.id),
open_orders = relationship(
def test_useget_cancels_eager(self):
"""test that a one to many lazyload cancels the unnecessary
eager many-to-one join on the other side."""
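# In outline (a sketch, not asserted verbatim here): Address.user is
# configured lazy='joined', but when the addresses are loaded via
# u1.addresses the parent User is already in the identity map, so the
# many-to-one can be satisfied by an identity-map "get" and no JOIN back
# to users should appear in the addresses SELECT, roughly:
#
#     SELECT addresses.* FROM addresses WHERE :param_1 = addresses.user_id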
-
+
mapper(User, users)
mapper(Address, addresses, properties={
'user':relationship(User, lazy='joined', backref='addresses')
})
-
+
sess = create_session()
u1 = sess.query(User).filter(User.id==8).one()
def go():
"addresses.user_id",
{'param_1': 8})
)
-
-
+
+
@testing.resolve_artifact_names
def test_manytoone_limit(self):
"""test that the subquery wrapping only occurs with
limit/offset and m2m or o2m joins present."""
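# Sketch of the behavior under test (assumed shape, not copied from the
# assertions below): a joined eager load combined with LIMIT/OFFSET has
# to wrap the primary entity in a subquery so the limit applies to
# parent rows rather than to the joined rows, roughly
#
#     SELECT anon_1.*, orders_1.* FROM
#         (SELECT users.id ... FROM users LIMIT :param_1) AS anon_1
#     LEFT OUTER JOIN orders AS orders_1 ON anon_1.users_id = orders_1.user_id
#
# whereas a chain of pure many-to-ones can keep the LIMIT on the flat
# SELECT with no wrapping.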
-
+
mapper(User, users, properties=odict(
orders=relationship(Order, backref='user')
))
))
mapper(Address, addresses)
mapper(Item, items)
-
+
sess = create_session()
self.assert_compile(
"orders_1.user_id JOIN addresses AS addresses_1 ON addresses_1.id = orders_1.address_id"
,use_default_dialect=True
)
-
+
@testing.resolve_artifact_names
def test_one_to_many_scalar(self):
mapper(User, users, properties = dict(
def test_many_to_one_null(self):
"""test that a many-to-one eager load which loads None does
not later trigger a lazy load.
-
+
"""
-
+
# use a primaryjoin intended to defeat SA's usage of
# query.get() for a many-to-one lazyload
mapper(Order, orders, properties = dict(
addresses.c.id==orders.c.address_id,
addresses.c.email_address != None
),
-
+
lazy='joined')
))
sess = create_session()
o1 = sess.query(Order).options(lazyload('address')).filter(Order.id==5).one()
eq_(o1.address, None)
self.assert_sql_count(testing.db, go, 2)
-
+
sess.expunge_all()
def go():
o1 = sess.query(Order).filter(Order.id==5).one()
eq_(o1.address, None)
self.assert_sql_count(testing.db, go, 1)
-
+
@testing.resolve_artifact_names
def test_one_and_many(self):
"""tests eager load for a parent object with a child object that
def test_uselist_false_warning(self):
"""test that multiple rows received by a
uselist=False raises a warning."""
-
+
mapper(User, users, properties={
'order':relationship(Order, uselist=False)
})
s = create_session()
assert_raises(sa.exc.SAWarning,
s.query(User).options(joinedload(User.order)).all)
-
+
@testing.resolve_artifact_names
def test_wide(self):
mapper(Order, orders, properties={'items':relationship(Item, secondary=order_items, lazy='joined',
innerjoin=True)
))
mapper(Item, items)
-
+
sess = create_session()
self.assert_compile(
sess.query(User),
"order_items_1.item_id",
use_default_dialect=True
)
-
+
self.assert_compile(
sess.query(User).options(joinedload(User.orders, innerjoin=False)),
"SELECT users.id AS users_id, users.name AS users_name, items_1.id AS "
"order_items_1.item_id",
use_default_dialect=True
)
-
+
@testing.resolve_artifact_names
def test_inner_join_chaining_fixed(self):
mapper(User, users, properties = dict(
innerjoin=True)
))
mapper(Item, items)
-
+
sess = create_session()
# joining from user, its all LEFT OUTER JOINs
"order_items_1.item_id",
use_default_dialect=True
)
-
+
# joining just from Order, innerjoin=True can be respected
self.assert_compile(
sess.query(Order),
"order_items_1.item_id",
use_default_dialect=True
)
-
-
+
+
@testing.resolve_artifact_names
def test_inner_join_options(self):
"order_items_1 ON orders_1.id = order_items_1.order_id JOIN items AS items_1 ON "
"items_1.id = order_items_1.item_id ORDER BY orders_1.id, items_1.id"
, use_default_dialect=True)
-
+
def go():
eq_(
sess.query(User).options(
joinedload(User.orders, innerjoin=True),
joinedload(User.orders, Order.items, innerjoin=True)).
order_by(User.id).all(),
-
+
[User(id=7,
orders=[
Order(id=1, items=[ Item(id=1), Item(id=2), Item(id=3)]),
]
)
self.assert_sql_count(testing.db, go, 1)
-
+
# test that default innerjoin setting is used for options
self.assert_compile(
sess.query(Order).options(joinedload(Order.user)).filter(Order.description == 'foo'),
"WHERE orders.description = :description_1",
use_default_dialect=True
)
-
+
class AddEntityTest(_fixtures.FixtureTest):
run_inserts = 'once'
run_deletes = None
pass
class B(_base.ComparableEntity):
pass
-
+
mapper(A,a_table)
mapper(B,b_table,properties = {
'parent_b1': relationship(B,
order_by = b_table.c.id
)
})
-
+
@classmethod
@testing.resolve_artifact_names
def insert_data(cls):
dict(id=13, parent_a_id=3, parent_b1_id=4, parent_b2_id=4),
dict(id=14, parent_a_id=3, parent_b1_id=7, parent_b2_id=2),
)
-
+
@testing.resolve_artifact_names
def test_eager_load(self):
session = create_session()
]
)
self.assert_sql_count(testing.db, go, 1)
-
+
class SelfReferentialM2MEagerTest(_base.MappedTest):
@classmethod
def define_tables(cls, metadata):
@testing.resolve_artifact_names
def test_two_entities_with_joins(self):
sess = create_session()
-
+
# two FROM clauses where there's a join on each one
def go():
u1 = aliased(User)
order_by(User.id, Order.id, u1.id, o1.id).all(),
)
self.assert_sql_count(testing.db, go, 1)
-
-
+
+
@testing.resolve_artifact_names
def test_aliased_entity(self):
class CorrelatedSubqueryTest(_base.MappedTest):
"""tests for #946, #947, #948.
-
+
The "users" table is joined to "stuff", and the relationship
would like to pull only the "stuff" entry with the most recent date.
-
+
Exercises a variety of ways to configure this.
-
+
"""
# another argument for joinedload learning about inner joins
-
+
__requires__ = ('correlated_outer_joins', )
-
+
@classmethod
def define_tables(cls, metadata):
users = Table('users', metadata,
Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('date', Date),
Column('user_id', Integer, ForeignKey('users.id')))
-
+
@classmethod
@testing.resolve_artifact_names
def insert_data(cls):
{'id':5, 'user_id':3, 'date':datetime.date(2007, 6, 15)},
{'id':6, 'user_id':3, 'date':datetime.date(2007, 3, 15)},
)
-
-
+
+
def test_labeled_on_date_noalias(self):
self._do_test('label', True, False)
def test_plain_on_limitid_alias(self):
self._do_test('none', False, True)
-
+
@testing.resolve_artifact_names
def _do_test(self, labeled, ondate, aliasstuff):
class User(_base.ComparableEntity):
class Stuff(_base.ComparableEntity):
pass
-
+
mapper(Stuff, stuff)
if aliasstuff:
mapper(User, users, properties={
'stuff':relationship(Stuff, primaryjoin=and_(users.c.id==stuff.c.user_id, stuff.c.id==stuff_view))
})
-
+
sess = create_session()
def go():
eq_(
]
)
self.assert_sql_count(testing.db, go, 1)
-
+
sess = create_session()
def go():
eq_(
Table('users', metadata,
Column('id', Integer, primary_key=True),
Column('name', String(64)))
-
+
@classmethod
def setup_classes(cls):
class User(_base.ComparableEntity):
pass
-
+
@classmethod
@testing.resolve_artifact_names
def setup_mappers(cls):
mapper(User, users)
-
+
@testing.resolve_artifact_names
def test_compare_to_value(self):
eval_eq(User.name == 'foo', testcases=[
(User(name='bar'), False),
(User(name=None), None),
])
-
+
eval_eq(User.id < 5, testcases=[
(User(id=3), True),
(User(id=5), False),
(User(id=None), None),
])
-
+
@testing.resolve_artifact_names
def test_compare_to_none(self):
eval_eq(User.name == None, testcases=[
(User(name='foo'), False),
(User(name=None), True),
])
-
+
@testing.resolve_artifact_names
def test_boolean_ops(self):
eval_eq(and_(User.name == 'foo', User.id == 1), testcases=[
(User(id=2, name='bar'), False),
(User(id=1, name=None), None),
])
-
+
eval_eq(or_(User.name == 'foo', User.id == 1), testcases=[
(User(id=1, name='foo'), True),
(User(id=2, name='foo'), True),
(User(id=1, name=None), True),
(User(id=2, name=None), None),
])
-
+
eval_eq(not_(User.id == 1), testcases=[
(User(id=1), False),
(User(id=2), True),
# trick the "deleted" flag so we can re-add for the sake
# of this test
del attributes.instance_state(u).deleted
-
+
# add it back
s.add(u)
# nope, raises ObjectDeletedError
# do a get()/remove u from session again
assert s.query(User).get(10) is None
assert u not in s
-
+
s.rollback()
assert u in s
@testing.resolve_artifact_names
def test_deferred(self):
"""test that unloaded, deferred attributes aren't included in the expiry list."""
-
+
mapper(Order, orders, properties={'description':deferred(orders.c.description)})
-
+
s = create_session()
o1 = s.query(Order).first()
assert 'description' not in o1.__dict__
assert o1.isopen is not None
assert 'description' not in o1.__dict__
assert o1.description
-
+
@testing.resolve_artifact_names
def test_lazyload_autoflushes(self):
mapper(User, users, properties={
def test_refresh_collection_exception(self):
"""test graceful failure for currently unsupported
immediate refresh of a collection"""
-
+
mapper(User, users, properties={
'addresses':relationship(Address, order_by=addresses.c.email_address)
})
assert_raises_message(sa_exc.InvalidRequestError,
"properties specified for refresh",
s.refresh, u, ['addresses'])
-
+
# in contrast to a regular query with no columns
assert_raises_message(sa_exc.InvalidRequestError,
"no columns with which to SELECT", s.query().all)
-
+
@testing.resolve_artifact_names
def test_refresh_cancels_expire(self):
mapper(User, users)
assert 'name' not in u.__dict__
sess.add(u)
assert u.name == 'jack'
-
+
@testing.resolve_artifact_names
def test_no_instance_key_no_pk(self):
# same as test_no_instance_key, but the PK columns
sess.add(u)
assert_raises(sa_exc.InvalidRequestError, getattr, u, 'name')
-
+
@testing.resolve_artifact_names
def test_expire_preserves_changes(self):
"""test that the expire load operation doesn't revert post-expire changes"""
def test_refresh_cascade_pending(self):
cascade = 'save-update, refresh-expire'
self._test_cascade_to_pending(cascade, False)
-
+
@testing.resolve_artifact_names
def _test_cascade_to_pending(self, cascade, expire_or_refresh):
mapper(User, users, properties={
u = s.query(User).get(8)
a = Address(id=12, email_address='foobar')
-
+
u.addresses.append(a)
if expire_or_refresh:
s.expire(u)
assert a not in s
else:
assert a in s
-
+
assert a not in u.addresses
s.flush()
"""Behavioral test to verify the current activity of loader callables."""
mapper(User, users)
-
+
sess = create_session()
-
+
# deferred attribute option, gets the LoadDeferredColumns
# callable
u1 = sess.query(User).options(defer(User.name)).first()
attributes.instance_state(u1).callables['name'],
strategies.LoadDeferredColumns
)
-
+
# expire the attr, it gets the InstanceState callable
sess.expire(u1, ['name'])
assert isinstance(
attributes.instance_state(u1).callables['name'],
state.InstanceState
)
-
+
# load it, callable is gone
u1.name
assert 'name' not in attributes.instance_state(u1).callables
attributes.instance_state(u1).callables['name'],
state.InstanceState
)
-
+
# load over it. everything normal.
sess.query(User).first()
assert 'name' not in attributes.instance_state(u1).callables
-
+
sess.expunge_all()
u1 = sess.query(User).first()
# for non present, still expires the same way
del u1.name
sess.expire(u1)
assert 'name' in attributes.instance_state(u1).callables
-
+
@testing.resolve_artifact_names
def test_state_deferred_to_col(self):
"""Behavioral test to verify the current activity of loader callables."""
-
+
mapper(User, users, properties={'name':deferred(users.c.name)})
sess = create_session()
u1 = sess.query(User).options(undefer(User.name)).first()
assert 'name' not in attributes.instance_state(u1).callables
-
+
# mass expire, the attribute was loaded,
# the attribute gets the callable
sess.expire(u1)
# load it, callable is gone
u1.name
assert 'name' not in attributes.instance_state(u1).callables
-
+
# mass expire, attribute was loaded but then deleted,
# the callable goes away - the state wants to flip
# it back to its "deferred" loader.
mapper(User, users, properties={'addresses':relationship(Address, lazy='noload')})
mapper(Address, addresses)
-
+
sess = create_session()
u1 = sess.query(User).options(lazyload(User.addresses)).first()
assert isinstance(
attributes.instance_state(u1).callables['addresses'],
strategies.LoadLazyAttribute
)
-
+
# load over it. callable goes away.
sess.query(User).first()
assert 'addresses' not in attributes.instance_state(u1).callables
-
+
sess.expunge_all()
u1 = sess.query(User).options(lazyload(User.addresses)).first()
sess.expire(u1, ['addresses'])
attributes.instance_state(u1).callables['addresses'],
strategies.LoadLazyAttribute
)
-
+
# load the attr, goes away
u1.addresses
assert 'addresses' not in attributes.instance_state(u1).callables
-
-
-
+
+
+
class PolymorphicExpireTest(_base.MappedTest):
run_inserts = 'once'
run_deletes = None
{'person_id':2, 'status':'new engineer'},
{'person_id':3, 'status':'old engineer'},
)
-
+
@classmethod
@testing.resolve_artifact_names
def setup_mappers(cls):
mapper(Person, people, polymorphic_on=people.c.type, polymorphic_identity='person')
mapper(Engineer, engineers, inherits=Person, polymorphic_identity='engineer')
-
+
@testing.resolve_artifact_names
def test_poly_deferred(self):
@testing.resolve_artifact_names
def test_no_instance_key(self):
-
+
sess = create_session()
e1 = sess.query(Engineer).get(2)
assert 'name' not in e1.__dict__
sess.add(e1)
assert e1.name == 'engineer1'
-
+
@testing.resolve_artifact_names
def test_no_instance_key_no_pk(self):
# same as test_no_instance_key, but the PK columns
run_setup_classes = 'once'
run_setup_mappers = None
run_inserts = None
-
+
@testing.resolve_artifact_names
def test_expired_pending(self):
mapper(User, users, properties={
a1 = Address(email_address='a1')
sess.add(a1)
sess.flush()
-
+
u1 = User(name='u1')
a1.user = u1
sess.flush()
# expire u1.addresses again. this expires
# "pending" as well.
sess.expire(u1, ['addresses'])
-
+
# insert a new row
sess.execute(addresses.insert(), dict(email_address='a3', user_id=u1.id))
-
+
# only two addresses pulled from the DB, no "pending"
assert len(u1.addresses) == 2
-
+
sess.flush()
sess.expire_all()
assert len(u1.addresses) == 3
-
+
class RefreshTest(_fixtures.FixtureTest):
def test_instance_dict(self):
class User(MyClass):
pass
-
+
attributes.register_class(User)
attributes.register_attribute(User, 'user_id', uselist=False, useobject=False)
attributes.register_attribute(User, 'user_name', uselist=False, useobject=False)
attributes.register_attribute(User, 'email_address', uselist=False, useobject=False)
-
+
u = User()
u.user_id = 7
u.user_name = 'john'
u.email_address = 'lala@123.com'
self.assert_(u.__dict__ == {'_my_state':u._my_state, '_goofy_dict':{'user_id':7, 'user_name':'john', 'email_address':'lala@123.com'}}, u.__dict__)
-
+
def test_basic(self):
for base in (object, MyBaseClass, MyClass):
class User(base):
manager.deferred_scalar_loader = loader
attributes.register_attribute(Foo, 'a', uselist=False, useobject=False)
attributes.register_attribute(Foo, 'b', uselist=False, useobject=False)
-
+
assert Foo in attributes.instrumentation_registry._state_finders
f = Foo()
attributes.instance_state(f).expire_attributes(attributes.instance_dict(f), None)
sess = create_session()
query = sess.query(Foo).order_by(Foo.id)
orig = query.all()
-
+
assert query[1] == orig[1]
assert query[-4] == orig[-4]
assert query[-1] == orig[-1]
-
+
assert list(query[10:20]) == orig[10:20]
assert list(query[10:]) == orig[10:]
assert list(query[:10]) == orig[:10]
assert list(query[-2:-5]) == orig[-2:-5]
assert list(query[-5:-2]) == orig[-5:-2]
assert list(query[:-2]) == orig[:-2]
-
+
assert query[10:20][5] == orig[10:20][5]
@testing.uses_deprecated('Call to deprecated function apply_max')
query = sess.query(Foo)
assert query.count() == 100
assert sess.query(func.min(foo.c.bar)).filter(foo.c.bar<30).one() == (0,)
-
+
assert sess.query(func.max(foo.c.bar)).filter(foo.c.bar<30).one() == (29,)
# Py3K
#assert query.filter(foo.c.bar<30).values(sa.func.max(foo.c.bar)).__next__()[0] == 29
# Py2K
assert query.filter(foo.c.bar<30).values(sa.func.max(foo.c.bar)).next()[0] == 29
# end Py2K
-
+
@testing.fails_if(lambda:testing.against('mysql+mysqldb') and
testing.db.dialect.dbapi.version_info[:4] == (1, 2, 1, 'gamma'),
"unknown incompatibility")
'addresses':relationship(Address)
})
sess = create_session()
-
+
l = sess.query(User).options(immediateload(User.addresses)).filter(users.c.id==7).all()
eq_(len(sess.identity_map), 2)
-
+
sess.close()
-
+
eq_(
[User(id=7, addresses=[Address(id=1, email_address='jack@bean.com')])],
l
'addresses':relationship(Address, lazy='immediate')
})
sess = create_session()
-
+
l = sess.query(User).filter(users.c.id==7).all()
eq_(len(sess.identity_map), 2)
sess.close()
-
+
eq_(
[User(id=7, addresses=[Address(id=1, email_address='jack@bean.com')])],
l
a = A()
assert not a.bs
-
+
def test_uninstrument(self):
class A(object):pass
-
+
manager = attributes.register_class(A)
-
+
assert attributes.manager_of_class(A) is manager
attributes.unregister_class(A)
assert attributes.manager_of_class(A) is None
-
+
def test_compileonattr_rel_backref_a(self):
m = MetaData()
t1 = Table('t1', m,
@testing.resolve_artifact_names
def test_many_to_one_binds(self):
mapper(Address, addresses, primary_key=[addresses.c.user_id, addresses.c.email_address])
-
+
mapper(User, users, properties = dict(
address = relationship(Address, uselist=False,
primaryjoin=sa.and_(users.c.id==addresses.c.user_id, addresses.c.email_address=='ed@bettyboop.com')
],
list(q)
)
-
+
@testing.resolve_artifact_names
def test_double(self):
closedorders = sa.alias(orders, 'closedorders')
mapper(Address, addresses)
-
+
mapper(Order, orders)
-
+
open_mapper = mapper(Order, openorders, non_primary=True)
closed_mapper = mapper(Order, closedorders, non_primary=True)
mapper(User, users, properties = dict(
class SmallintDecorator(TypeDecorator):
impl = SmallInteger
-
+
class SomeDBInteger(sa.Integer):
pass
-
+
for tt in [
Integer,
SmallInteger,
def setUp(self):
global Parent, Child, Base
Base = declarative_base()
-
+
class Parent(Base):
__tablename__ = 'parent'
-
+
id = Column(Integer, primary_key=True, test_needs_autoincrement=True)
name = Column(String(50), nullable=False)
children = relationship("Child", load_on_pending=True)
-
+
class Child(Base):
__tablename__ = 'child'
id = Column(Integer, primary_key=True, test_needs_autoincrement=True)
parent_id = Column(Integer, ForeignKey('parent.id'))
-
+
Base.metadata.create_all(engine)
def tearDown(self):
def test_annoying_autoflush_one(self):
sess = Session(engine)
-
+
p1 = Parent()
sess.add(p1)
p1.children = []
def test_annoying_autoflush_two(self):
sess = Session(engine)
-
+
p1 = Parent()
sess.add(p1)
assert p1.children == []
def test_dont_load_if_no_keys(self):
sess = Session(engine)
-
+
p1 = Parent()
sess.add(p1)
-
+
def go():
assert p1.children == []
self.assert_sql_count(testing.db, go, 0)
class LoadOnFKsTest(AssertsExecutionResults, TestBase):
-
+
def setUp(self):
global Parent, Child, Base
Base = declarative_base()
-
+
class Parent(Base):
__tablename__ = 'parent'
__table_args__ = {'mysql_engine':'InnoDB'}
-
+
id = Column(Integer, primary_key=True, test_needs_autoincrement=True)
class Child(Base):
id = Column(Integer, primary_key=True, test_needs_autoincrement=True)
parent_id = Column(Integer, ForeignKey('parent.id'))
-
+
parent = relationship(Parent, backref=backref("children"))
-
+
Base.metadata.create_all(engine)
global sess, p1, p2, c1, c2
assert c1 in sess
sess.commit()
-
+
def tearDown(self):
sess.rollback()
Base.metadata.drop_all(engine)
sess.add(c3)
c3.parent_id = p1.id
c3.parent = p1
-
+
# a side effect of load-on-pending with no autoflush.
# a change to the backref event handler to check
# collection membership before assuming "old == new so return"
sess.add(c3)
c3.parent_id = p1.id
c3.parent = p1
-
+
assert c3 in p1.children
def test_no_load_on_pending_allows_backref_event(self):
# users who stick with the program and don't use
# 'load_on_pending' get expected behavior
-
+
sess.autoflush = False
c3 = Child()
sess.add(c3)
c3.parent_id = p1.id
c3.parent = p1
-
+
assert c3 in p1.children
-
+
def test_autoflush_on_pending(self):
c3 = Child()
sess.add(c3)
c3.parent_id = p1.id
-
+
# pendings don't autoflush
assert c3.parent is None
c3 = Child()
sess.add(c3)
c3.parent_id = p1.id
-
+
# ...unless the flag is on
assert c3.parent is p1
-
+
def test_load_on_pending_with_set(self):
Child.parent.property.load_on_pending = True
c3 = Child()
sess.add(c3)
-
+
c3.parent_id = p1.id
def go():
c3.parent = p1
self.assert_sql_count(testing.db, go, 0)
-
+
def test_backref_doesnt_double(self):
Child.parent.property.load_on_pending = True
sess.autoflush = False
c3.parent = p1
c3.parent = p1
assert len(p1.children)== 2
-
+
def test_m2o_lazy_loader_on_persistent(self):
"""Compare the behaviors from the lazyloader using
the "committed" state in all cases, vs. the lazyloader
using the "current" state in all cases except during flush.
-
+
"""
for loadfk in (True, False):
for loadrel in (True, False):
for manualflush in (True, False):
for fake_autoexpire in (True, False):
sess.autoflush = autoflush
-
+
if loadfk:
c1.parent_id
if loadrel:
c1.parent
c1.parent_id = p2.id
-
+
if manualflush:
sess.flush()
-
+
# fake_autoexpire refers to the eventual
# auto-expire of 'parent' when c1.parent_id
# is altered.
if fake_autoexpire:
sess.expire(c1, ['parent'])
-
+
# old 0.6 behavior
#if manualflush and (not loadrel or fake_autoexpire):
# # a flush occurs, we get p2
# # if things were loaded, autoflush doesn't even
# # happen.
# assert c1.parent is p1
-
+
# new behavior
if loadrel and not fake_autoexpire:
assert c1.parent is p1
else:
assert c1.parent is p2
-
+
sess.rollback()
-
+
def test_m2o_lazy_loader_on_pending(self):
for loadonpending in (False, True):
for autoflush in (False, True):
c2 = Child()
sess.add(c2)
c2.parent_id = p2.id
-
+
if manualflush:
sess.flush()
-
+
if loadonpending or manualflush:
assert c2.parent is p2
else:
assert c2.parent is None
-
+
sess.rollback()
def test_m2o_lazy_loader_on_transient(self):
Child.parent.property.load_on_pending = loadonpending
sess.autoflush = autoflush
c2 = Child()
-
+
if attach:
sess._attach(instance_state(c2))
c2.parent_id = p2.id
-
+
if manualflush:
sess.flush()
-
+
if loadonpending and attach:
assert c2.parent is p2
else:
assert c2.parent is None
-
+
sess.rollback()
sess.add_all([p1, p2])
p1.parent_places.append(p2)
sess.flush()
-
+
sess.expire_all()
assert p1 in p2.parent_places
assert p2 in p1.parent_places
-
+
@testing.resolve_artifact_names
def test_double(self):
passive_updates=False)
})
mapper(Transition, transition)
-
+
p1 = Place('place1')
t1 = Transition('t1')
p1.transitions.append(t1)
p1.place_id
p1.transitions
-
+
sess.execute("delete from place_input", mapper=Place)
p1.place_id = 7
-
+
assert_raises_message(
orm_exc.StaleDataError,
r"UPDATE statement on table 'place_input' expected to "
sess.commit
)
sess.rollback()
-
+
p1.place_id
p1.transitions
sess.execute("delete from place_input", mapper=Place)
r"delete 1 row\(s\); Only 0 were matched.",
sess.commit
)
-
+
class M2MTest2(_base.MappedTest):
@classmethod
def define_tables(cls, metadata):
@testing.resolve_artifact_names
def test_duplicates_raise(self):
"""test constraint error is raised for dupe entries in a list"""
-
+
mapper(Student, student)
mapper(Course, course, properties={
'students': relationship(Student, enroll, backref='courses')})
s1.courses.append(c1)
sess.add(s1)
assert_raises(sa.exc.DBAPIError, sess.flush)
-
+
@testing.resolve_artifact_names
def test_delete(self):
"""A many-to-many table gets cleared out with deletion from the backref side"""
'a2s': relationship(A, secondary=c2a2, lazy='joined')})
assert create_session().query(C).with_labels().statement is not None
-
+
# TODO: seems like just a test for an ancient exception throw.
# how about some data/inserts/queries/assertions for this one
Column('t1', Integer, ForeignKey('table1.col1')),
Column('t2', Integer, ForeignKey('table2.col1')),
)
-
+
@testing.resolve_artifact_names
def test_delete_parent(self):
class A(_base.ComparableEntity):
@testing.resolve_artifact_names
def test_update_attr_keys(self):
"""test that update()/insert() use the correct key when given InstrumentedAttributes."""
-
+
mapper(User, users, properties={
'foobar':users.c.name
})
users.update().values({User.foobar:User.foobar + 'foo'}).execute()
eq_(sa.select([User.foobar]).where(User.foobar=='name1foo').execute().fetchall(), [('name1foo',)])
-
+
@testing.resolve_artifact_names
def test_utils(self):
from sqlalchemy.orm.util import _is_mapped_class, _is_aliased_class
-
+
class Foo(object):
x = "something"
@property
return "somethign else"
m = mapper(Foo, users)
a1 = aliased(Foo)
-
+
f = Foo()
for fn, arg, ret in [
"""test preservation of mapper compile errors raised during hasattr(),
as well as for redundant mapper compile calls. Test that
repeated calls don't stack up error messages.
-
+
"""
-
+
mapper(Address, addresses, properties={
'user':relationship(User)
})
-
+
hasattr(Address.user, 'property')
for i in range(3):
assert_raises_message(sa.exc.InvalidRequestError,
"Original exception was: Class "
"'test.orm._fixtures.User' is not mapped$"
, compile_mappers)
-
+
@testing.resolve_artifact_names
def test_column_prefix(self):
mapper(User, users, column_prefix='_', properties={
"not represented in the mapper's table",
mapper, User, users, properties={'foo'
: addresses.c.user_id})
-
+
@testing.resolve_artifact_names
def test_bad_constructor(self):
"""If the construction of a mapped class fails, the instance does not get placed in the session"""
-
+
class Foo(object):
def __init__(self, one, two, _sa_session=None):
pass
assert Foo.orders.impl.extensions is User.orders.impl.extensions
assert Foo.orders.impl.extensions is not ext_list
-
+
compile_mappers()
assert len(User.somename.impl.extensions) == 1
assert len(Foo.somename.impl.extensions) == 1
assert len(Foo.orders.impl.extensions) == 3
assert len(User.orders.impl.extensions) == 3
-
+
@testing.resolve_artifact_names
def test_compile_on_get_props_1(self):
assert not m.compiled
assert m.get_property('name')
assert m.compiled
-
+
@testing.resolve_artifact_names
def test_add_property(self):
assert_col = []
class UCComparator(sa.orm.PropComparator):
__hash__ = None
-
+
def __eq__(self, other):
cls = self.prop.parent.class_
col = getattr(cls, 'name')
def name(self):
pass
class Empty(object):pass
-
+
empty = mapper(Empty, t, properties={'empty_id' : t.c.id},
include_properties=[])
p_m = mapper(Person, t, polymorphic_on=t.c.type,
column_prefix="p_")
hd_m = mapper(HasDef, t, column_prefix="h_")
-
+
fb_m = mapper(Fub, t, include_properties=(t.c.id, t.c.type))
frb_m = mapper(Frob, t, column_prefix='f_',
exclude_properties=(t.c.boss_id,
'employee_number', t.c.vendor_id))
-
+
p_m.compile()
def assert_props(cls, want):
have = set([p.key for p in class_mapper(cls).iterate_properties])
want = set(want)
eq_(have, want)
-
+
assert_props(HasDef, ['h_boss_id', 'h_employee_number', 'h_id',
- 'name', 'h_name', 'h_vendor_id', 'h_type'])
+ 'name', 'h_name', 'h_vendor_id', 'h_type'])
assert_props(Person, ['id', 'name', 'type'])
assert_instrumented(Person, ['id', 'name', 'type'])
assert_props(Employee, ['boss', 'boss_id', 'employee_number',
'id', 'name', 'type'])
assert_props(Manager, ['boss', 'boss_id', 'employee_number', 'peon',
'id', 'name', 'type'])
-
+
# 'peon' and 'type' are both explicitly stated properties
assert_instrumented(Manager, ['peon', 'type', 'id'])
-
+
assert_props(Vendor, ['vendor_id', 'id', 'name', 'type'])
assert_props(Hoho, ['id', 'name', 'type'])
assert_props(Lala, ['p_employee_number', 'p_id', 'p_name', 'p_type'])
Foo, inherits=Person, polymorphic_identity='foo',
exclude_properties=('type', ),
)
-
+
@testing.resolve_artifact_names
def test_mapping_to_join_raises(self):
"""Test implicit merging of two cols warns."""
-
+
usersaddresses = sa.join(users, addresses,
users.c.id == addresses.c.user_id)
assert_raises_message(
mapper, User, usersaddresses, primary_key=[users.c.id]
)
sa.orm.clear_mappers()
-
+
@testing.emits_warning(r'Implicitly')
def go():
# but it works despite the warning
m1 = mapper(Item, items, primary_key=[items.c.id])
m2 = mapper(Keyword, keywords, primary_key=keywords.c.id)
m3 = mapper(User, users, primary_key=(users.c.id,))
-
+
assert m1.primary_key[0] is items.c.id
assert m2.primary_key[0] is keywords.c.id
assert m3.primary_key[0] is users.c.id
-
-
+
+
@testing.resolve_artifact_names
def test_custom_join(self):
"""select_from totally replace the FROM parameters."""
create_session().query(User).order_by(User.name).all(),
[User(id=10, name=u'chuck'), User(id=8, name=u'ed'), User(id=9, name=u'fred'), User(id=7, name=u'jack')]
)
-
+
# Raises an "expression evaluation not supported" error at prepare time
@testing.fails_on('firebird', 'FIXME: unknown')
@testing.resolve_artifact_names
@testing.resolve_artifact_names
def test_override_2(self):
"""exclude_properties cancels the error."""
-
+
mapper(User, users,
exclude_properties=['name'],
properties=dict(
name=relationship(mapper(Address, addresses))))
-
+
assert bool(User.name)
-
+
@testing.resolve_artifact_names
def test_override_3(self):
"""The column being named elsewhere also cancels the error,"""
adlist = synonym('addresses'),
adname = synonym('addresses')
))
-
+
# ensure the synonym can get at the proxied comparators without
# an explicit compile
User.name == 'ed'
# test compile
assert not isinstance(User.uname == 'jack', bool)
-
+
assert User.uname.property
assert User.adlist.property
-
+
sess = create_session()
-
+
# test RowTuple names
row = sess.query(User.id, User.uname).first()
assert row.uname == row[1]
-
+
u = sess.query(User).filter(User.uname=='jack').one()
fixture = self.static.user_address_result[0].addresses
def test_comparable(self):
class extendedproperty(property):
attribute = 123
-
+
def method1(self):
return "method1"
-
+
def __getitem__(self, key):
return 'value'
class UCComparator(sa.orm.PropComparator):
__hash__ = None
-
+
def method1(self):
return "uccmethod1"
-
+
def method2(self, other):
return "method2"
-
+
def __eq__(self, other):
cls = self.prop.parent.class_
col = getattr(cls, 'name')
AttributeError,
"Neither 'extendedproperty' object nor 'UCComparator' object has an attribute 'nonexistent'",
getattr, User.uc_name, 'nonexistent')
-
+
# test compile
assert not isinstance(User.uc_name == 'jack', bool)
u = q.filter(User.uc_name=='JACK').one()
def __eq__(self, other):
# lower case comparison
return func.lower(self.__clause_element__()) == func.lower(other)
-
+
def intersects(self, other):
# non-standard comparator
return self.__clause_element__().op('&=')(other)
-
+
mapper(User, users, properties={
'name':sa.orm.column_property(users.c.name, comparator_factory=MyComparator)
})
-
+
assert_raises_message(
AttributeError,
"Neither 'InstrumentedAttribute' object nor 'MyComparator' object has an attribute 'nonexistent'",
eq_(str((User.name == 'ed').compile(dialect=sa.engine.default.DefaultDialect())) , "lower(users.name) = lower(:lower_1)")
eq_(str((User.name.intersects('ed')).compile(dialect=sa.engine.default.DefaultDialect())), "users.name &= :name_1")
-
+
@testing.resolve_artifact_names
def test_reentrant_compile(self):
def post_instrument_class(self, mapper):
super(MyFakeProperty, self).post_instrument_class(mapper)
m2.compile()
-
+
m1 = mapper(User, users, properties={
'name':MyFakeProperty(users.c.name)
})
def post_instrument_class(self, mapper):
super(MyFakeProperty, self).post_instrument_class(mapper)
m1.compile()
-
+
m1 = mapper(User, users, properties={
'name':MyFakeProperty(users.c.name)
})
m2 = mapper(Address, addresses)
compile_mappers()
-
+
@testing.resolve_artifact_names
def test_reconstructor(self):
recon = []
pass
class Sub(Base):
pass
-
+
mapper(Base, users)
sa.orm.compile_mappers()
def test_unmapped_subclass_error_premap(self):
class Base(object):
pass
-
+
mapper(Base, users)
-
+
class Sub(Base):
pass
sa.orm.compile_mappers()
-
+
# we can create new instances, set attributes.
s = Sub()
s.name = 'foo'
attributes.get_history(s, 'name'),
(['foo'], (), ())
)
-
+
# using it with an ORM operation, raises
assert_raises(sa.orm.exc.UnmappedClassError,
create_session().add, Sub())
-
+
@testing.resolve_artifact_names
def test_oldstyle_mixin(self):
class OldStyle:
mapper(B, users)
class DocumentTest(testing.TestBase):
-
+
def test_doc_propagate(self):
metadata = MetaData()
t1 = Table('t1', metadata,
class Foo(object):
pass
-
+
class Bar(object):
pass
-
+
mapper(Foo, t1, properties={
'bars':relationship(Bar,
doc="bar relationship",
eq_(Foo.hoho.__doc__, "syn of col4")
eq_(Bar.col1.__doc__, "primary key column")
eq_(Bar.foo.__doc__, "foo relationship")
-
-
-
+
+
+
class OptionsTest(_fixtures.FixtureTest):
@testing.fails_on('maxdb', 'FIXME: unknown')
items = relationship(Item, secondary=order_items)
))
mapper(Item, items)
-
+
sess = create_session()
-
+
oalias = aliased(Order)
opt1 = sa.orm.joinedload(User.orders, Order.items)
opt2a, opt2b = sa.orm.contains_eager(User.orders, Order.items, alias=oalias)
assert opt1 in ustate.load_options
assert opt2a not in ustate.load_options
assert opt2b not in ustate.load_options
-
+
import pickle
pickle.dumps(u1)
@testing.resolve_artifact_names
def test_deep_options_2(self):
"""test (joined|subquery)load_all() options"""
-
+
sess = create_session()
l = (sess.query(User).
def validate_name(self, key, name):
assert name != 'fred'
return name + ' modified'
-
+
mapper(User, users)
sess = create_session()
u1 = User(name='ed')
sess.flush()
sess.expunge_all()
eq_(sess.query(User).filter_by(name='ed modified').one(), User(name='ed'))
-
+
@testing.resolve_artifact_names
def test_collection(self):
def validate_address(self, key, ad):
assert '@' in ad.email_address
return ad
-
+
mapper(User, users, properties={'addresses':relationship(Address)})
mapper(Address, addresses)
sess = create_session()
class DummyComposite(object):
def __init__(self, x, y):
pass
-
+
from sqlalchemy.orm.interfaces import PropComparator
-
+
class MyFactory(PropComparator):
pass
-
+
for args in (
(column_property, users.c.name),
(deferred, users.c.name),
fn = args[0]
args = args[1:]
fn(comparator_factory=MyFactory, *args)
-
+
@testing.resolve_artifact_names
def test_column(self):
from sqlalchemy.orm.properties import ColumnProperty
-
+
class MyFactory(ColumnProperty.Comparator):
__hash__ = None
def __eq__(self, other):
__hash__ = None
def __eq__(self, other):
return func.foobar(self.__clause_element__().c.id) == func.foobar(other.user_id)
-
+
mapper(User, users)
mapper(Address, addresses, properties={
'user':relationship(User, comparator_factory=MyFactory,
self.assert_compile(aliased(Address).user == User(id=5), "foobar(addresses_1.user_id) = foobar(:foobar_1)", dialect=default.DefaultDialect())
self.assert_compile(aliased(User).addresses == Address(id=5, user_id=7), "foobar(users_1.id) = foobar(:foobar_1)", dialect=default.DefaultDialect())
-
+
class DeferredTest(_fixtures.FixtureTest):
@testing.resolve_artifact_names
'isopen':synonym('_isopen', map_column=True),
'description':deferred(orders.c.description, group='foo')
})
-
+
sess = create_session()
o1 = sess.query(Order).get(1)
eq_(o1.description, "order 1")
def go():
q.all()[0].user_id
-
+
self.sql_eq_(go, [
("SELECT orders.id AS orders_id, "
"orders.address_id AS orders_address_id, "
run_inserts = 'once'
run_deletes = None
-
+
@classmethod
def define_tables(cls, metadata):
Table("base", metadata,
Table('related', metadata,
Column('id', Integer, ForeignKey('base.id'), primary_key=True),
)
-
+
@classmethod
@testing.resolve_artifact_names
def setup_mappers(cls):
})
mapper(Child2, child2, inherits=Base, polymorphic_identity='child2')
mapper(Related, related)
-
+
@classmethod
@testing.resolve_artifact_names
def insert_data(cls):
{'id':5},
{'id':6},
])
-
+
@testing.resolve_artifact_names
def test_contains_eager(self):
sess = create_session()
-
-
+
+
child1s = sess.query(Child1).join(Child1.related).options(sa.orm.contains_eager(Child1.related)).order_by(Child1.id)
def go():
[Child1(id=1, related=Related(id=1)), Child1(id=2, related=Related(id=2)), Child1(id=3, related=Related(id=3))]
)
self.assert_sql_count(testing.db, go, 1)
-
+
c1 = child1s[0]
self.assert_sql_execution(
[Child1(id=1, related=Related(id=1)), Child1(id=2, related=Related(id=2)), Child1(id=3, related=Related(id=3))]
)
self.assert_sql_count(testing.db, go, 1)
-
+
c1 = child1s[0]
self.assert_sql_execution(
[Child1(id=1, related=Related(id=1)), Child1(id=2, related=Related(id=2)), Child1(id=3, related=Related(id=3))]
)
self.assert_sql_count(testing.db, go, 4)
-
+
c1 = child1s[0]
# this *does* joinedload
{'param_1':4}
)
)
-
+
class DeferredPopulationTest(_base.MappedTest):
@classmethod
mapper(Human, human, properties={"thing": relationship(Thing)})
mapper(Thing, thing, properties={"name": deferred(thing.c.name)})
-
+
@classmethod
@testing.resolve_artifact_names
def insert_data(cls):
human.insert().execute([
{"id": 1, "thing_id": 1, "name": "Clark Kent"},
])
-
+
def _test(self, thing):
assert "name" in attributes.instance_state(thing).dict
result = session.query(Thing).first()
thing = session.query(Thing).options(sa.orm.undefer("name")).first()
self._test(thing)
-
+
@testing.resolve_artifact_names
def test_joinedload_with_clear(self):
session = create_session()
result = session.query(Human).add_entity(Thing).join("thing").first()
thing = session.query(Thing).options(sa.orm.undefer("name")).first()
self._test(thing)
-
+
class NoLoadTest(_fixtures.FixtureTest):
run_inserts = 'once'
Column('id', Integer, primary_key=True),
Column('type', String(40)),
Column('data', String(50))
-
+
)
@testing.resolve_artifact_names
def test_cascading_extensions(self):
ext_msg = []
-
+
class Ex1(sa.orm.AttributeExtension):
def set(self, state, value, oldvalue, initiator):
ext_msg.append("Ex1 %r" % value)
return "ex1" + value
-
+
class Ex2(sa.orm.AttributeExtension):
def set(self, state, value, oldvalue, initiator):
ext_msg.append("Ex2 %r" % value)
return "ex2" + value
-
+
class A(_base.BasicEntity):
pass
class B(A):
pass
class C(B):
pass
-
+
mapper(A, t1, polymorphic_on=t1.c.type, polymorphic_identity='a', properties={
'data':column_property(t1.c.data, extension=Ex1())
})
mc = mapper(C, polymorphic_identity='c', inherits=B, properties={
'data':column_property(t1.c.data, extension=Ex2())
})
-
+
a1 = A(data='a1')
b1 = B(data='b1')
c1 = C(data='c1')
-
+
eq_(a1.data, 'ex1a1')
eq_(b1.data, 'ex1b1')
eq_(c1.data, 'ex2c1')
-
+
a1.data = 'a2'
b1.data='b2'
c1.data = 'c2'
eq_(a1.data, 'ex1a2')
eq_(b1.data, 'ex1b2')
eq_(c1.data, 'ex2c2')
-
+
eq_(ext_msg, ["Ex1 'a1'", "Ex1 'b1'", "Ex2 'c1'", "Ex1 'a2'", "Ex1 'b2'", "Ex2 'c2'"])
-
-
+
+
class MapperExtensionTest(_fixtures.FixtureTest):
run_inserts = None
-
+
def extension(self):
methods = []
def test_before_after_only_collection(self):
"""before_update is called on parent for collection modifications,
after_update is called even if no columns were updated.
-
+
"""
Ext1, methods1 = self.extension()
'create_instance', 'populate_instance', 'reconstruct_instance',
'append_result', 'before_update', 'after_update', 'before_delete',
'after_delete'])
-
+
@testing.resolve_artifact_names
def test_create_instance(self):
class CreateUserExt(sa.orm.MapperExtension):
def create_instance(self, mapper, selectcontext, row, class_):
return User.__new__(User)
-
+
mapper(User, users, extension=CreateUserExt())
sess = create_session()
u1 = User()
sess.flush()
sess.expunge_all()
assert sess.query(User).first()
-
+
class RequirementsTest(_base.MappedTest):
"""Tests the contract for user classes."""
assert_raises(sa.exc.ArgumentError, mapper, OldStyle, ht1)
assert_raises(sa.exc.ArgumentError, mapper, 123)
-
+
class NoWeakrefSupport(str):
pass
# TODO: is weakref support detectable without an instance?
#self.assertRaises(sa.exc.ArgumentError, mapper, NoWeakrefSupport, t2)
# end Py2K
-
+
@testing.resolve_artifact_names
def test_comparison_overrides(self):
"""Simple tests to ensure users can supply comparison __methods__.
class H1(object):
def __len__(self):
return len(self.get_value())
-
+
def get_value(self):
self.value = "foobar"
return self.value
def get_value(self):
self.value = "foobar"
return self.value
-
+
mapper(H1, ht1)
mapper(H2, ht1)
-
+
h1 = H1()
h1.value = "Asdf"
h1.value = "asdf asdf" # ding
h2 = H2()
h2.value = "Asdf"
h2.value = "asdf asdf" # ding
-
+
class MagicNamesTest(_base.MappedTest):
@classmethod
sess.add(c)
sess.flush()
sess.expunge_all()
-
+
for C, M in ((Cartographer, Map),
(sa.orm.aliased(Cartographer), sa.orm.aliased(Map))):
c1 = (sess.query(C).
def go():
sess.merge(u)
self.assert_sql_count(testing.db, go, 0)
-
+
@testing.resolve_artifact_names
def test_transient_to_pending_collection(self):
mapper(User, users, properties={
@testing.resolve_artifact_names
def test_merge_empty_attributes(self):
mapper(User, dingalings)
-
+
sess = create_session()
-
+
# merge empty stuff. goes in as NULL.
# not sure what this was originally trying to
# test.
u2 = User(id=2, data="foo")
sess.add(u2)
sess.flush()
-
+
# merge User on u2's pk with
# no "data".
# value isn't whacked from the destination
# dict.
u3 = sess.merge(User(id=2))
eq_(u3.__dict__['data'], "foo")
-
+
# make a change.
u3.data = 'bar'
-
+
# merge another no-"data" user.
# attribute maintains modified state.
# (usually autoflush would have happened
u5 = User(id=3, data="foo")
sess.add(u5)
sess.flush()
-
+
# blow it away from u5, but don't
# mark as expired. so it would just
# be blank.
del u5.data
-
+
# the merge adds expiry to the
# attribute so that it loads.
# not sure if I like this - it currently is needed
u6.data = None
u7 = sess.merge(User(id=3))
assert u6.__dict__['data'] is None
-
-
+
+
@testing.resolve_artifact_names
def test_merge_irregular_collection(self):
mapper(User, users, properties={
a1 = Address(email_address="asdf", user=u1)
sess.add(a1)
sess.flush()
-
+
a2 = Address(id=a1.id, email_address="bar", user=User(name="hoho"))
a2 = sess.merge(a2)
sess.flush()
-
+
# no expire of the attribute
-
+
assert a2.__dict__['user'] is u1
-
+
# merge succeeded
eq_(
sess.query(Address).all(),
[Address(id=a1.id, email_address="bar")]
)
-
+
# didn't touch user
eq_(
sess.query(User).all(),
[User(name="fred")]
)
-
+
@testing.resolve_artifact_names
def test_one_to_many_cascade(self):
'user':relationship(User)
})
mapper(User, users)
-
+
u1 = User(id=1, name="u1")
a1 = Address(id=1, email_address="a1", user=u1)
u2 = User(id=2, name="u2")
-
+
sess = create_session()
sess.add_all([a1, u2])
sess.flush()
-
+
a1.user = u2
-
+
sess2 = create_session()
a2 = sess2.merge(a1)
eq_(
([u2], (), [attributes.PASSIVE_NO_RESULT])
)
assert a2 in sess2.dirty
-
+
sess.refresh(a1)
-
+
sess2 = create_session()
a2 = sess2.merge(a1, load=False)
eq_(
((), [u1], ())
)
assert a2 not in sess2.dirty
-
+
@testing.resolve_artifact_names
def test_many_to_many_cascade(self):
sess.add(u)
sess.commit()
sess.close()
-
+
u2 = User(id=7, name=None, address=None)
u3 = sess.merge(u2)
assert u3.name is None
assert u3.address is None
-
+
sess.close()
-
+
a1 = Address(id=1, user=None)
a2 = sess.merge(a1)
assert a2.user is None
-
+
@testing.resolve_artifact_names
def test_transient_no_load(self):
mapper(User, users)
'uid':synonym('id'),
'foobar':comparable_property(User.Comparator,User.value),
})
-
+
sess = create_session()
u = User()
u.name = 'ed'
@testing.resolve_artifact_names
def test_cascade_doesnt_blowaway_manytoone(self):
"""a merge test that was fixed by [ticket:1202]"""
-
+
s = create_session(autoflush=True)
mapper(User, users, properties={
'addresses':relationship(mapper(Address, addresses),backref='user')})
eq_(after_id, other_id)
eq_(before_id, after_id)
eq_(a1.user, a2.user)
-
+
@testing.resolve_artifact_names
def test_cascades_dont_autoflush(self):
sess = create_session(autoflush=True)
@testing.resolve_artifact_names
def test_dont_expire_pending(self):
"""test that pending instances aren't expired during a merge."""
-
+
mapper(User, users)
u = User(id=7)
sess = create_session(autoflush=True, autocommit=False)
def go():
eq_(u.name, None)
self.assert_sql_count(testing.db, go, 0)
-
+
@testing.resolve_artifact_names
def test_option_state(self):
"""test that the merged takes on the MapperOption characteristics
of that which is merged.
-
+
"""
class Option(MapperOption):
propagate_to_loaders = True
-
+
opt1, opt2 = Option(), Option()
sess = sessionmaker()()
-
+
umapper = mapper(User, users)
-
+
sess.add_all([
User(id=1, name='u1'),
User(id=2, name='u2'),
])
sess.commit()
-
+
sess2 = sessionmaker()()
s2_users = sess2.query(User).options(opt2).all()
-
+
# test 1. no options are replaced by merge options
sess = sessionmaker()()
s1_users = sess.query(User).all()
-
+
for u in s1_users:
ustate = attributes.instance_state(u)
eq_(ustate.load_path, ())
eq_(ustate.load_options, set())
-
+
for u in s2_users:
sess.merge(u)
ustate = attributes.instance_state(u)
eq_(ustate.load_path, (umapper, ))
eq_(ustate.load_options, set([opt2]))
-
+
# test 2. present options are replaced by merge options
sess = sessionmaker()()
s1_users = sess.query(User).options(opt1).all()
for u in s2_users:
sess.merge(u)
-
+
for u in s1_users:
ustate = attributes.instance_state(u)
eq_(ustate.load_path, (umapper, ))
eq_(ustate.load_options, set([opt2]))
-
+
class MutableMergeTest(_base.MappedTest):
@classmethod
Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('data', PickleType(comparator=operator.eq))
)
-
+
@classmethod
def setup_classes(cls):
class Data(_base.ComparableEntity):
pass
-
+
@testing.resolve_artifact_names
def test_list(self):
mapper(Data, data)
sess = sessionmaker()()
d = Data(data=["this", "is", "a", "list"])
-
+
sess.add(d)
sess.commit()
-
+
d2 = Data(id=d.id, data=["this", "is", "another", "list"])
d3 = sess.merge(d2)
eq_(d3.data, ["this", "is", "another", "list"])
-
-
-
+
+
+
class CompositeNullPksTest(_base.MappedTest):
@classmethod
def define_tables(cls, metadata):
Column('pk1', String(10), primary_key=True),
Column('pk2', String(10), primary_key=True),
)
-
+
@classmethod
def setup_classes(cls):
class Data(_base.ComparableEntity):
pass
-
+
@testing.resolve_artifact_names
def test_merge_allow_partial(self):
mapper(Data, data)
sess = sessionmaker()()
-
+
d1 = Data(pk1="someval", pk2=None)
-
+
def go():
return sess.merge(d1)
self.assert_sql_count(testing.db, go, 1)
def go():
return sess.merge(d1)
self.assert_sql_count(testing.db, go, 0)
-
+
fk_args = dict(deferrable=True, initially='deferred')
else:
fk_args = dict(onupdate='cascade')
-
+
users = Table('users', metadata,
Column('username', String(50), primary_key=True),
Column('fullname', String(100)),
sess.flush()
sess.expunge_all()
assert sess.query(User).get('ed').fullname == 'jack'
-
+
@testing.fails_on('sqlite', 'sqlite doesnt support ON UPDATE CASCADE')
@testing.fails_on('oracle', 'oracle doesnt support ON UPDATE CASCADE')
assert sess.query(Address).get('jack1').username is None
u1 = sess.query(User).get('fred')
eq_(User(username='fred', fullname='jack'), u1)
-
+
@testing.fails_on('sqlite', 'sqlite doesnt support ON UPDATE CASCADE')
@testing.fails_on('oracle', 'oracle doesnt support ON UPDATE CASCADE')
u1 = User(username='jack', fullname='jack')
sess.add(u1)
sess.flush()
-
+
a1 = Address(email='jack1')
u1.address = a1
sess.add(a1)
sess.expunge_all()
eq_([Address(username='ed')], sess.query(Address).all())
-
+
@testing.fails_on('sqlite', 'sqlite doesnt support ON UPDATE CASCADE')
@testing.fails_on('oracle', 'oracle doesnt support ON UPDATE CASCADE')
def test_bidirectional_passive(self):
eq_(['jack'], [u.username for u in r[0].users])
eq_(Item(itemname='item2'), r[1])
eq_(['ed', 'jack'], sorted([u.username for u in r[1].users]))
-
+
sess.expunge_all()
u2 = sess.query(User).get(u2.username)
u2.username='wendy'
class TransientExceptionTest(_fixtures.FixtureTest):
run_inserts = None
-
+
@testing.resolve_artifact_names
def test_transient_exception(self):
"""An object that goes from a pk value to transient/pending
doesn't count as a "pk" switch.
-
+
"""
mapper(User, users)
mapper(Address, addresses, properties={'user':relationship(User)})
-
+
sess = create_session()
u1 = User(id=5, name='u1')
ad1 = Address(email_address='e1', user=u1)
sess.add_all([u1, ad1])
sess.flush()
-
+
make_transient(u1)
u1.id = None
u1.username='u2'
sess.add(u1)
sess.flush()
-
+
eq_(ad1.user_id, 5)
-
+
sess.expire_all()
eq_(ad1.user_id, 5)
ne_(u1.id, 5)
ne_(u1.id, None)
eq_(sess.query(User).count(), 2)
-
+
class ReversePKsTest(_base.MappedTest):
"""reverse the primary keys of two entities and ensure bookkeeping
succeeds."""
-
-
+
+
@classmethod
def define_tables(cls, metadata):
Table(
Column('status', Integer, primary_key=True),
Column('username', Unicode(50), nullable=False),
)
-
+
@classmethod
def setup_classes(cls):
class User(_base.ComparableEntity):
@testing.resolve_artifact_names
def test_reverse(self):
PUBLISHED, EDITABLE, ARCHIVED = 1, 2, 3
-
+
mapper(User, user)
session = sa.orm.sessionmaker()()
-
+
a_published = User(1, PUBLISHED, u'a')
session.add(a_published)
session.commit()
assert session.query(User).get([1, PUBLISHED]) is a_published
assert session.query(User).get([1, EDITABLE]) is a_editable
-
+
class SelfReferentialTest(_base.MappedTest):
# mssql, mysql don't allow
# ON UPDATE on self-referential keys
fk_args = dict(deferrable=True, initially='deferred')
else:
fk_args = dict(onupdate='cascade')
-
+
Table('nodes', metadata,
Column('name', String(50), primary_key=True),
Column('parent', String(50),
n4 = Node(name='n13', parentnode=n1)
sess.add_all([n2, n3, n4])
sess.commit()
-
+
n1.name = 'new n1'
sess.commit()
eq_(['new n1', 'new n1', 'new n1'],
class CascadeToFKPKTest(_base.MappedTest, testing.AssertsCompiledSQL):
"""A primary key mutation cascades onto a foreign key that is itself a
primary key."""
-
+
@classmethod
def define_tables(cls, metadata):
if testing.against('oracle'):
pass
class Address(_base.ComparableEntity):
pass
-
+
@testing.fails_on('sqlite', 'sqlite doesnt support ON UPDATE CASCADE')
@testing.fails_on('oracle', 'oracle doesnt support ON UPDATE CASCADE')
def test_onetomany_passive(self):
@testing.fails_on_everything_except('sqlite', 'oracle', '+zxjdbc')
def test_onetomany_nonpassive(self):
self._test_onetomany(False)
-
+
def test_o2m_change_passive(self):
self._test_o2m_change(True)
-
+
def test_o2m_change_nonpassive(self):
self._test_o2m_change(False)
@testing.resolve_artifact_names
def _test_o2m_change(self, passive_updates):
"""Change the PK of a related entity to another.
-
+
"on update cascade" is not involved here, so the mapper has
to do the UPDATE itself.
-
+
"""
mapper(User, users, properties={
'addresses':relationship(Address,
a1 = Address(username='ed', email='ed@host1')
u1 = User(username='ed', addresses=[a1])
u2 = User(username='jack')
-
+
sess.add_all([a1, u1, u2])
sess.flush()
-
+
a1.username = 'jack'
sess.flush()
u1.addresses.remove(a1)
u2.addresses.append(a1)
sess.flush()
-
+
@testing.fails_on('oracle', 'oracle doesnt support ON UPDATE CASCADE '
'but requires referential integrity')
@testing.fails_on('sqlite', 'sqlite doesnt support ON UPDATE CASCADE')
def test_change_m2o_passive(self):
self._test_change_m2o(True)
-
+
@testing.fails_on_everything_except('sqlite', 'oracle', '+zxjdbc')
def test_change_m2o_nonpassive(self):
self._test_change_m2o(False)
-
+
@testing.resolve_artifact_names
def _test_change_m2o(self, passive_updates):
mapper(User, users)
a1 = Address(user=u1, email='foo@bar')
sess.add_all([u1, a1])
sess.flush()
-
+
u1.username='edmodified'
sess.flush()
eq_(a1.username, 'edmodified')
-
+
sess.expire_all()
eq_(a1.username, 'edmodified')
a1 = Address(user=u1, email='foo@bar')
sess.add_all([u1, u2, a1])
sess.flush()
-
+
a1.user = u2
sess.flush()
-
-
+
+
@testing.resolve_artifact_names
def test_rowswitch_doesntfire(self):
mapper(User, users)
sess = create_session()
u1 = User(username='ed')
a1 = Address(user=u1, email='ed@host1')
-
+
sess.add(u1)
sess.add(a1)
sess.flush()
-
+
sess.delete(u1)
sess.delete(a1)
sess.add(a2)
from sqlalchemy.test.assertsql import CompiledSQL
-
+
# test that the primary key columns of addresses are not
# being updated as well, since this is a row switch.
self.assert_sql_execution(testing.db,
{'etc': 'foo', 'addresses_username':'ed',
'addresses_email':'ed@host1'} ),
)
-
-
+
+
@testing.resolve_artifact_names
def _test_onetomany(self, passive_updates):
"""Change the PK of a related entity via foreign key cascade.
-
+
For databases that require "on update cascade", the mapper
has to identify the row by the new value, not the old, when
it does the update.
-
+
"""
mapper(User, users, properties={
'addresses':relationship(Address,
passive_updates=passive_updates)})
mapper(Address, addresses)
-
+
sess = create_session()
a1, a2 = Address(username='ed', email='ed@host1'),\
Address(username='ed', email='ed@host2')
eq_(a2.username, 'ed')
eq_(sa.select([addresses.c.username]).execute().fetchall(),
[('ed',), ('ed',)])
-
+
u1.username = 'jack'
a2.email='ed@host3'
sess.flush()
class JoinedInheritanceTest(_base.MappedTest):
"""Test cascades of pk->pk/fk on joined table inh."""
-
+
# mssql doesn't allow ON UPDATE on self-referential keys
__unsupported_on__ = ('mssql',)
Column('name', String(50), primary_key=True),
Column('type', String(50), nullable=False),
test_needs_fk=True)
-
+
Table('engineer', metadata,
Column('name', String(50), ForeignKey('person.name', **fk_args),
primary_key=True),
@testing.fails_on_everything_except('sqlite', 'oracle', '+zxjdbc')
def test_pk_nonpassive(self):
self._test_pk(False)
-
+
@testing.fails_on('sqlite', 'sqlite doesnt support ON UPDATE CASCADE')
@testing.fails_on('oracle', 'oracle doesnt support ON UPDATE CASCADE')
def test_fk_passive(self):
self._test_fk(True)
-
+
# PG etc. need passive=True to allow PK->PK cascade
@testing.fails_on_everything_except('sqlite', 'mysql+zxjdbc', 'oracle',
'postgresql+zxjdbc')
e1.name = 'wally'
e1.primary_language = 'c++'
sess.commit()
-
+
@testing.resolve_artifact_names
def _test_fk(self, passive_updates):
mapper(Person, person, polymorphic_on=person.c.type,
})
mapper(Manager, manager, inherits=Person,
polymorphic_identity='manager')
-
+
sess = sa.orm.sessionmaker()()
-
+
m1 = Manager(name='dogbert', paperwork='lots')
e1, e2 = \
Engineer(name='dilbert', primary_language='java', boss=m1),\
eq_(e1.boss_name, 'dogbert')
eq_(e2.boss_name, 'dogbert')
sess.expire_all()
-
+
m1.name = 'pointy haired'
e1.primary_language = 'scala'
e2.primary_language = 'cobol'
sess.commit()
-
+
eq_(e1.boss_name, 'pointy haired')
eq_(e2.boss_name, 'pointy haired')
-
-
-
+
+
+
session.add(j)
p = Port(name='fa0/1')
session.add(p)
-
+
j.port=p
session.flush()
jid = j.id
class PickleTest(_fixtures.FixtureTest):
run_inserts = None
-
+
@testing.resolve_artifact_names
def test_transient(self):
mapper(User, users, properties={
@testing.resolve_artifact_names
def test_no_mappers(self):
-
+
umapper = mapper(User, users)
u1 = User(name='ed')
u1_pickled = pickle.dumps(u1, -1)
# this fails unless the InstanceState
# compiles the mapper
eq_(str(u1), "User(name='ed')")
-
+
@testing.resolve_artifact_names
def test_serialize_path(self):
umapper = mapper(User, users, properties={
'addresses':relationship(Address, backref="user")
})
amapper = mapper(Address, addresses)
-
+
# this is a "relationship" path with mapper, key, mapper, key
p1 = (umapper, 'addresses', amapper, 'email_address')
eq_(
interfaces.deserialize_path(interfaces.serialize_path(p1)),
p1
)
-
+
# this is a "mapper" path with mapper, key, mapper, no key
# at the end.
p2 = (umapper, 'addresses', amapper, )
interfaces.deserialize_path(interfaces.serialize_path(p2)),
p2
)
-
+
# test a blank path
p3 = ()
eq_(
interfaces.deserialize_path(interfaces.serialize_path(p3)),
p3
)
-
+
@testing.resolve_artifact_names
def test_class_deferred_cols(self):
mapper(User, users, properties={
eq_(u2.name, 'ed')
assert 'addresses' not in u2.__dict__
ad = u2.addresses[0]
-
+
# mapper options now transmit over merge(),
# new as of 0.6, so email_address is deferred.
- assert 'email_address' not in ad.__dict__
-
+ assert 'email_address' not in ad.__dict__
+
eq_(ad.email_address, 'ed@bar.com')
eq_(u2, User(name='ed', addresses=[Address(email_address='ed@bar.com')]))
for protocol in -1, 0, 1, 2:
u2 = pickle.loads(pickle.dumps(u1, protocol))
eq_(u1, u2)
-
+
@testing.resolve_artifact_names
def test_options_with_descriptors(self):
mapper(User, users, properties={
]:
opt2 = pickle.loads(pickle.dumps(opt))
eq_(opt.key, opt2.key)
-
+
u1 = sess.query(User).options(opt).first()
-
+
u2 = pickle.loads(pickle.dumps(u1))
-
+
def test_collection_setstate(self):
"""test a particular cycle that requires CollectionAdapter
to not rely upon InstanceState to deserialize."""
-
+
global Child1, Child2, Parent, Screen
-
+
m = MetaData()
c1 = Table('c1', m,
Column('parent_id', String,
class Parent(_base.ComparableEntity):
pass
-
+
mapper(Parent, p, properties={
'children1':relationship(Child1),
'children2':relationship(Child2)
screen1.errors = [obj.children1, obj.children2]
screen2 = Screen(Child2(), screen1)
pickle.loads(pickle.dumps(screen2))
-
+
class PolymorphicDeferredTest(_base.MappedTest):
@classmethod
def define_tables(cls, metadata):
def test_rebuild_state(self):
"""not much of a 'test', but illustrate how to
remove instance-level state before pickling.
-
+
"""
mapper(User, users)
u2 = pickle.loads(pickle.dumps(u1))
attributes.manager_of_class(User).setup_instance(u2)
assert attributes.instance_state(u2)
-
+
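# Illustrative sketch (not part of the original suite): the recipe shown in
# test_rebuild_state above, packaged as a helper.  Assumes the default
# instrumentation, which keeps per-instance ORM state in '_sa_instance_state',
# and relies on this module's existing pickle/attributes imports.
def _pickle_without_instance_state(obj, cls):
    # drop the instance-level ORM state before pickling
    obj.__dict__.pop('_sa_instance_state', None)
    copy = pickle.loads(pickle.dumps(obj))
    # reinstall fresh state on the unpickled copy
    attributes.manager_of_class(cls).setup_instance(copy)
    return copy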
class UnpickleSA05Test(_fixtures.FixtureTest):
"""test loading picklestrings from SQLA 0.5."""
-
+
__requires__ = ('python2',)
-
+
@testing.resolve_artifact_names
def test_one(self):
mapper(User, users, properties={
mapper(User, users, properties={
'uname':users.c.name
})
-
+
row = create_session().\
query(User.id, User.uname).\
filter(User.id==7).first()
q.column_descriptions,
asserted
)
-
-
+
+
class GetTest(QueryTest):
def test_get(self):
s = create_session()
def test_get_composite_pk_no_result(self):
s = Session()
assert s.query(CompositePk).get((100,100)) is None
-
+
def test_get_composite_pk_result(self):
s = Session()
one_two = s.query(CompositePk).get((1,2))
assert one_two.i == 1
assert one_two.j == 2
assert one_two.k == 3
-
+
def test_get_too_few_params(self):
s = Session()
q = s.query(CompositePk)
s = Session()
q = s.query(CompositePk)
assert_raises(sa_exc.InvalidRequestError, q.get, (7, 10, 100))
-
+
def test_get_null_pk(self):
"""test that a mapping which can have None in a
PK (i.e. map to an outerjoin) works with get()."""
-
+
s = users.outerjoin(addresses)
-
+
class UserThing(_base.ComparableEntity):
pass
-
+
mapper(UserThing, s, properties={
'id':(users.c.id, addresses.c.user_id),
'address_id':addresses.c.id,
"""test that get()/load() does not use preexisting filter/etc. criterion"""
s = create_session()
-
+
q = s.query(User).join('addresses').filter(Address.user_id==8)
assert_raises(sa_exc.InvalidRequestError, q.get, 7)
assert_raises(sa_exc.InvalidRequestError, s.query(User).filter(User.id==7).get, 19)
-
+
# order_by()/get() doesn't raise
s.query(User).order_by(User.id).get(8)
# Py2K
ustring = 'petit voix m\xe2\x80\x99a'.decode('utf-8')
# end Py2K
-
+
table.insert().execute(id=ustring, data=ustring)
class LocalFoo(Base):
pass
class InvalidGenerationsTest(QueryTest, AssertsCompiledSQL):
def test_no_limit_offset(self):
s = create_session()
-
+
for q in (
s.query(User).limit(2),
s.query(User).offset(2),
assert_raises(sa_exc.InvalidRequestError, q.group_by, 'foo')
assert_raises(sa_exc.InvalidRequestError, q.having, 'foo')
-
+
q.enable_assertions(False).join("addresses")
q.enable_assertions(False).filter(User.name=='ed')
q.enable_assertions(False).order_by('foo')
q.enable_assertions(False).group_by('foo')
-
+
def test_no_from(self):
s = create_session()
-
+
q = s.query(User).select_from(users)
assert_raises(sa_exc.InvalidRequestError, q.select_from, users)
q = s.query(User).join('addresses')
assert_raises(sa_exc.InvalidRequestError, q.select_from, users)
-
+
q = s.query(User).order_by(User.id)
assert_raises(sa_exc.InvalidRequestError, q.select_from, users)
assert_raises(sa_exc.InvalidRequestError, q.select_from, users)
-
+
q.enable_assertions(False).select_from(users)
-
+
# this is fine, however
q.from_self()
-
+
def test_invalid_select_from(self):
s = create_session()
q = s.query(User)
q = s.query(User)
assert_raises(sa_exc.ArgumentError, q.from_statement, User.id==5)
assert_raises(sa_exc.ArgumentError, q.from_statement, users.join(addresses))
-
+
def test_invalid_column(self):
s = create_session()
q = s.query(User)
assert_raises(sa_exc.InvalidRequestError, q.add_column, object())
-
+
def test_distinct(self):
"""test that a distinct() call is not valid before 'clauseelement' conditions."""
-
+
s = create_session()
q = s.query(User).distinct()
assert_raises(sa_exc.InvalidRequestError, q.select_from, User)
assert_raises(sa_exc.InvalidRequestError, q.select_from, User)
assert_raises(sa_exc.InvalidRequestError, q.from_statement, text("select * from table"))
assert_raises(sa_exc.InvalidRequestError, q.with_polymorphic, User)
-
+
def test_cancel_order_by(self):
s = create_session()
# after False was set, this should pass
q._no_select_modifiers("foo")
-
+
def test_mapper_zero(self):
s = create_session()
-
+
q = s.query(User, Address)
assert_raises(sa_exc.InvalidRequestError, q.get, 5)
-
+
def test_from_statement(self):
s = create_session()
-
+
q = s.query(User).filter(User.id==5)
assert_raises(sa_exc.InvalidRequestError, q.from_statement, "x")
q = s.query(User).order_by(User.name)
assert_raises(sa_exc.InvalidRequestError, q.from_statement, "x")
-
+
class OperatorTest(QueryTest, AssertsCompiledSQL):
"""test sql.Comparator implementation for MapperProperties"""
def test_comparison(self):
create_session().query(User)
ualias = aliased(User)
-
+
for (py_op, fwd_op, rev_op) in ((operator.lt, '<', '>'),
(operator.gt, '>', '<'),
(operator.eq, '=', '='),
self.assert_(compiled == fwd_sql or compiled == rev_sql,
"\n'" + compiled + "'\n does not match\n'" +
fwd_sql + "'\n or\n'" + rev_sql + "'")
-
+
def test_negated_null(self):
self._test(User.id == None, "users.id IS NULL")
self._test(~(User.id==None), "users.id IS NOT NULL")
self._test(~(Address.user==None), "addresses.user_id IS NOT NULL")
self._test(None == Address.user, "addresses.user_id IS NULL")
self._test(~(None == Address.user), "addresses.user_id IS NOT NULL")
-
+
def test_relationship(self):
self._test(User.addresses.any(Address.id==17),
"EXISTS (SELECT 1 "
u7 = User(id=7)
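# (added note, inferred intent) snapshot u7's attributes as "committed" so it
# behaves like a clean, persistent-looking instance in the comparisons below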
attributes.instance_state(u7).commit_all(attributes.instance_dict(u7))
-
+
self._test(Address.user == u7, ":param_1 = addresses.user_id")
self._test(Address.user != u7, "addresses.user_id != :user_id_1 OR addresses.user_id IS NULL")
Node.children==None,
"NOT (EXISTS (SELECT 1 FROM nodes AS nodes_1 WHERE nodes.id = nodes_1.parent_id))"
)
-
+
self._test(
Node.parent==None,
"nodes.parent_id IS NULL"
nalias.children==None,
"NOT (EXISTS (SELECT 1 FROM nodes WHERE nodes_1.id = nodes.parent_id))"
)
-
+
self._test(
nalias.children.any(Node.data=='some data'),
"EXISTS (SELECT 1 FROM nodes WHERE "
"nodes_1.id = nodes.parent_id AND nodes.data = :data_1)")
-
+
# fails, but I think I want this to fail
#self._test(
# Node.children.any(nalias.data=='some data'),
Node.parent.has(Node.data=='some data'),
"EXISTS (SELECT 1 FROM nodes AS nodes_1 WHERE nodes_1.id = nodes.parent_id AND nodes_1.data = :data_1)"
)
-
+
self._test(
Node.parent == Node(id=7),
":param_1 = nodes.parent_id"
nalias.parent != Node(id=7),
'nodes_1.parent_id != :parent_id_1 OR nodes_1.parent_id IS NULL'
)
-
+
self._test(
nalias.children.contains(Node(id=7)), "nodes_1.id = :param_1"
)
-
+
def test_op(self):
self._test(User.name.op('ilike')('17'), "users.name ilike :name_1")
def test_in_on_relationship_not_supported(self):
assert_raises(NotImplementedError, Address.user.in_, [User(id=5)])
-
+
def test_neg(self):
self._test(-User.id, "-users.id")
self._test(User.id + -User.id, "users.id + -users.id")
-
+
def test_between(self):
self._test(User.id.between('a', 'b'),
"users.id BETWEEN :id_1 AND :id_2")
class RawSelectTest(QueryTest, AssertsCompiledSQL):
"""compare a bunch of select() tests with the equivalent Query using straight table/columns.
-
+
Results should be the same, as Query should act as a select() pass-through for ClauseElement entities.
-
+
"""
def test_select(self):
sess = create_session()
# TODO: can we detect only one table in the "froms" and then turn off use_labels ?
s = sess.query(addresses.c.id.label('id'), addresses.c.email_address.label('email')).\
filter(addresses.c.user_id==users.c.id).correlate(users).statement.alias()
-
+
self.assert_compile(sess.query(users, s.c.email).select_from(users.join(s, s.c.id==users.c.id)).with_labels().statement,
"SELECT users.id AS users_id, users.name AS users_name, anon_1.email AS anon_1_email "
"FROM users JOIN (SELECT addresses.id AS id, addresses.email_address AS email FROM addresses "
self.assert_compile(sess.query(x).filter(x==5).statement,
"SELECT lala(users.id) AS foo FROM users WHERE lala(users.id) = :param_1", dialect=default.DefaultDialect())
- self.assert_compile(sess.query(func.sum(x).label('bar')).statement,
+ self.assert_compile(sess.query(func.sum(x).label('bar')).statement,
"SELECT sum(lala(users.id)) AS bar FROM users", dialect=default.DefaultDialect())
class ExpressionTest(QueryTest, AssertsCompiledSQL):
-
+
def test_deferred_instances(self):
session = create_session()
s = session.query(User).filter(and_(addresses.c.email_address == bindparam('emailad'), Address.user_id==User.id)).statement
def test_scalar_subquery(self):
session = create_session()
-
+
q = session.query(User.id).filter(User.id==7).subquery()
-
+
q = session.query(User).filter(User.id==q)
-
+
eq_(User(id=7), q.one())
-
+
def test_label(self):
session = create_session()
"SELECT (SELECT users.id FROM users WHERE users.id = :id_1) AS foo",
use_default_dialect=True
)
-
+
def test_as_scalar(self):
session = create_session()
q = session.query(User.id).filter(User.id==7).as_scalar()
-
+
self.assert_compile(session.query(User).filter(User.id.in_(q)),
'SELECT users.id AS users_id, users.name '
'AS users_name FROM users WHERE users.id '
'IN (SELECT users.id FROM users WHERE '
'users.id = :id_1)',
use_default_dialect=True)
-
-
+
+
def test_param_transfer(self):
session = create_session()
-
+
q = session.query(User.id).filter(User.id==bindparam('foo')).params(foo=7).subquery()
-
+
q = session.query(User).filter(User.id==q)
-
+
eq_(User(id=7), q.one())
-
+
def test_in(self):
session = create_session()
s = session.query(User.id).join(User.addresses).group_by(User.id).having(func.count(Address.id) > 2)
def test_union(self):
s = create_session()
-
+
q1 = s.query(User).filter(User.name=='ed').with_labels()
q2 = s.query(User).filter(User.name=='fred').with_labels()
eq_(
s.query(User).from_statement(union(q1, q2).order_by('users_name')).all(),
[User(name='ed'), User(name='fred')]
)
-
+
def test_select(self):
s = create_session()
-
+
# this is actually not legal on most DBs since the subquery has no alias
q1 = s.query(User).filter(User.name=='ed')
"users.name AS users_name FROM users WHERE users.name = :name_1)",
dialect=default.DefaultDialect()
)
-
+
def test_join(self):
s = create_session()
s.query(User, adalias).join((adalias, User.id==adalias.user_id)).all(),
[(User(id=7,name=u'jack'), Address(email_address=u'jack@bean.com',user_id=7,id=1))]
)
-
+
# more slice tests are available in test/orm/generative.py
class SliceTest(QueryTest):
def test_first(self):
@testing.fails_on_everything_except('sqlite')
def test_limit_offset_applies(self):
"""Test that the expected LIMIT/OFFSET is applied for slices.
-
+
The LIMIT/OFFSET syntax differs slightly on all databases, and
query[x:y] executes immediately, so we are asserting against
SQL strings using sqlite's syntax.
-
+
"""
sess = create_session()
q = sess.query(User)
-
+
self.assert_sql(testing.db, lambda: q[10:20], [
("SELECT users.id AS users_id, users.name AS users_name FROM users LIMIT 10 OFFSET 10", {})
])
])
-
+
class FilterTest(QueryTest):
def test_basic(self):
assert [User(id=7), User(id=8), User(id=9),User(id=10)] == create_session().query(User).all()
assert [User(id=8), User(id=9)] == list(create_session().query(User).order_by(User.id)[1:3])
assert User(id=8) == create_session().query(User).order_by(User.id)[1]
-
+
assert [] == create_session().query(User).order_by(User.id)[3:3]
assert [] == create_session().query(User).order_by(User.id)[0:0]
-
+
@testing.requires.boolean_col_expressions
def test_exists(self):
sess = create_session(testing.db)
-
+
assert sess.query(exists().where(User.id==9)).scalar()
assert not sess.query(exists().where(User.id==29)).scalar()
-
+
def test_one_filter(self):
assert [User(id=8), User(id=9)] == create_session().query(User).filter(User.name.endswith('ed')).all()
-
+
def test_contains(self):
"""test comparing a collection to an object instance."""
filter(User.addresses.any(id=4)).all()
assert [User(id=9)] == sess.query(User).filter(User.addresses.any(email_address='fred@fred.com')).all()
-
+
# test that any() doesn't overcorrelate
assert [User(id=7), User(id=8)] == sess.query(User).join("addresses").filter(~User.addresses.any(Address.email_address=='fred@fred.com')).all()
-
+
# test that the contents are not adapted by the aliased join
assert [User(id=7), User(id=8)] == sess.query(User).join("addresses", aliased=True).filter(~User.addresses.any(Address.email_address=='fred@fred.com')).all()
assert [User(id=10)] == sess.query(User).outerjoin("addresses", aliased=True).filter(~User.addresses.any()).all()
-
+
@testing.crashes('maxdb', 'can dump core')
def test_has(self):
sess = create_session()
# test that has() doesn't get subquery contents adapted by aliased join
assert [Address(id=2), Address(id=3), Address(id=4)] == \
sess.query(Address).join("user", aliased=True).filter(Address.user.has(User.name.like('%ed%'), id=8)).order_by(Address.id).all()
-
+
dingaling = sess.query(Dingaling).get(2)
assert [User(id=9)] == sess.query(User).filter(User.addresses.any(Address.dingaling==dingaling)).all()
-
+
def test_contains_m2m(self):
sess = create_session()
item = sess.query(Item).get(3)
item2 = sess.query(Item).get(5)
assert [Order(id=3)] == sess.query(Order).filter(Order.items.contains(item)).filter(Order.items.contains(item2)).all()
-
+
def test_comparison(self):
"""test scalar comparison to an object instance"""
# m2m
eq_(sess.query(Item).filter(Item.keywords==None).order_by(Item.id).all(), [Item(id=4), Item(id=5)])
eq_(sess.query(Item).filter(Item.keywords!=None).order_by(Item.id).all(), [Item(id=1),Item(id=2), Item(id=3)])
-
+
def test_filter_by(self):
sess = create_session()
user = sess.query(User).get(8)
# one to many generates WHERE NOT EXISTS
assert [User(name='chuck')] == sess.query(User).filter_by(addresses = None).all()
assert [User(name='chuck')] == sess.query(User).filter_by(addresses = null()).all()
-
+
def test_none_comparison(self):
sess = create_session()
-
+
# scalar
eq_(
[Order(description="order 5")],
[Order(description="order 5")],
sess.query(Order).filter(Order.address_id==null()).all()
)
-
+
# o2o
eq_([Address(id=1), Address(id=3), Address(id=4)],
sess.query(Address).filter(Address.dingaling==None).order_by(Address.id).all())
sess.query(Address).filter(Address.dingaling==null()).order_by(Address.id).all())
eq_([Address(id=2), Address(id=5)], sess.query(Address).filter(Address.dingaling != None).order_by(Address.id).all())
eq_([Address(id=2), Address(id=5)], sess.query(Address).filter(Address.dingaling != null()).order_by(Address.id).all())
-
+
# m2o
eq_([Order(id=5)], sess.query(Order).filter(Order.address==None).all())
eq_([Order(id=1), Order(id=2), Order(id=3), Order(id=4)], sess.query(Order).order_by(Order.id).filter(Order.address!=None).all())
-
+
# o2m
eq_([User(id=10)], sess.query(User).filter(User.addresses==None).all())
eq_([User(id=7),User(id=8),User(id=9)], sess.query(User).filter(User.addresses!=None).order_by(User.id).all())
assert [User(id=8), User(id=9)] == create_session().query(User).order_by(User.id).slice(1,3).from_self().all()
assert [User(id=8)] == list(create_session().query(User).filter(User.id.in_([8,9])).from_self().order_by(User.id)[0:1])
-
+
def test_join(self):
assert [
(User(id=8), Address(id=2)),
(User(id=9), Address(id=5))
] == create_session().query(User).filter(User.id.in_([8,9])).from_self().\
join('addresses').add_entity(Address).order_by(User.id, Address.id).all()
-
+
def test_group_by(self):
eq_(
create_session().query(Address.user_id, func.count(Address.id).label('count')).\
group_by(Address.user_id).order_by(Address.user_id).all(),
[(7, 1), (8, 3), (9, 1)]
)
-
+
def test_no_joinedload(self):
"""test that joinedloads are pushed outwards and not rendered in subqueries."""
-
+
s = create_session()
-
+
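# (added note) pre-ternary conditional: render "AS " before the alias on every
# backend except Oracle, which does not accept AS for FROM-clause aliases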
oracle_as = not testing.against('oracle') and "AS " or ""
-
+
self.assert_compile(
s.query(User).options(joinedload(User.addresses)).from_self().statement,
"SELECT anon_1.users_id, anon_1.users_name, addresses_1.id, addresses_1.user_id, "\
'oracle_as':oracle_as
}
)
-
+
def test_aliases(self):
"""test that aliased objects are accessible externally to a from_self() call."""
-
+
s = create_session()
-
+
ualias = aliased(User)
eq_(
s.query(User, ualias).filter(User.id > ualias.id).from_self(User.name, ualias.name).
(u'jack', u'ed@wood.com'),
(u'jack', u'fred@fred.com')]
)
-
-
+
+
def test_multiple_entities(self):
sess = create_session()
eq_(
sess.query(User, Address).filter(User.id==Address.user_id).filter(Address.id.in_([2, 5])).from_self().options(joinedload('addresses')).first(),
-
+
# order_by(User.id, Address.id).first(),
(User(id=8, addresses=[Address(), Address(), Address()]), Address(id=2)),
)
def test_multiple_with_column_entities(self):
sess = create_session()
-
+
eq_(
sess.query(User.id).from_self().\
add_column(func.count().label('foo')).\
[
(7,1), (8, 1), (9, 1), (10, 1)
]
-
+
)
-
+
class SetOpsTest(QueryTest, AssertsCompiledSQL):
-
+
def test_union(self):
s = create_session()
-
+
fred = s.query(User).filter(User.name=='fred')
ed = s.query(User).filter(User.name=='ed')
jack = s.query(User).filter(User.name=='jack')
-
+
eq_(fred.union(ed).order_by(User.name).all(),
[User(name='ed'), User(name='fred')]
)
eq_(fred.union(ed, jack).order_by(User.name).all(),
[User(name='ed'), User(name='fred'), User(name='jack')]
)
-
+
def test_statement_labels(self):
"""test that label conflicts don't occur with joins etc."""
-
+
s = create_session()
q1 = s.query(User, Address).join(User.addresses).\
filter(Address.email_address=="ed@wood.com")
q2 = s.query(User, Address).join(User.addresses).\
filter(Address.email_address=="jack@bean.com")
q3 = q1.union(q2).order_by(User.name)
-
+
eq_(
q3.all(),
[
(User(name='jack'), Address(email_address="jack@bean.com")),
]
)
-
+
def test_union_labels(self):
"""test that column expressions translate during
the _from_statement() portion of union() and similar operations."""
-
+
s = create_session()
q1 = s.query(User, literal("x"))
q2 = s.query(User, literal_column("'y'"))
q4 = s.query(User, literal_column("'x'").label('foo'))
q5 = s.query(User, literal("y"))
q6 = q4.union(q5)
-
+
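# (added note, inferred) the unlabeled literal from q1/q2 gets an anonymous
# generated label in the union, hence ordering by "anon_1_anon_2"; q4 labeled
# its column 'foo' explicitly, so q6 can order by that name directly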
for q in (q3.order_by(User.id, "anon_1_anon_2"), q6.order_by(User.id, "foo")):
eq_(q.all(),
[
(User(id=10, name=u'chuck'), u'y')
]
)
-
+
c1, c2 = column('c1'), column('c2')
q1 = s.query(User, c1.label('foo'), c1.label('bar'))
q2 = s.query(User, c1.label('foo'), c2.label('bar'))
"FROM users) AS anon_1",
use_default_dialect=True
)
-
+
@testing.fails_on('mysql', "mysql doesn't support intersect")
def test_intersect(self):
s = create_session()
eq_(fred.union(ed).intersect(ed.union(jack)).all(),
[User(name='ed')]
)
-
+
def test_eager_load(self):
s = create_session()
]
)
self.assert_sql_count(testing.db, go, 1)
-
-
+
+
class AggregateTest(QueryTest):
def test_sum(self):
class CountTest(QueryTest):
def test_basic(self):
s = create_session()
-
+
eq_(s.query(User).count(), 4)
eq_(s.query(User).filter(users.c.name.endswith('ed')).count(), 2)
s = create_session()
q = s.query(User, Address)
eq_(q.count(), 20) # cartesian product
-
+
q = s.query(User, Address).join(User.addresses)
eq_(q.count(), 5)
-
+
def test_nested(self):
s = create_session()
q = s.query(User, Address).limit(2)
q = s.query(User, Address).join(User.addresses).limit(100)
eq_(q.count(), 5)
-
+
def test_cols(self):
"""test that column-based queries always nest."""
-
+
s = create_session()
-
+
q = s.query(func.count(distinct(User.name)))
eq_(q.count(), 1)
q = s.query(Address.user_id)
eq_(q.count(), 5)
eq_(q.distinct().count(), 3)
-
-
+
+
class DistinctTest(QueryTest):
def test_basic(self):
eq_(
def test_hints(self):
from sqlalchemy.dialects import mysql
dialect = mysql.dialect()
-
+
sess = create_session()
-
+
self.assert_compile(
sess.query(User).with_hint(User, 'USE INDEX (col1_index,col2_index)'),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users",
dialect=dialect
)
-
+
ualias = aliased(User)
self.assert_compile(
sess.query(User, ualias).with_hint(ualias, 'USE INDEX (col1_index,col2_index)').
"ON users.id < users_1.id",
dialect=dialect
)
-
+
class TextTest(QueryTest):
def test_fulltext(self):
o = sess.query(Order).filter(with_parent(u1, User.orders)).all()
assert [Order(description="order 1"), Order(description="order 3"), Order(description="order 5")] == o
-
+
# test generative criterion
o = sess.query(Order).with_parent(u1).filter(orders.c.id>2).all()
assert [Order(description="order 3"), Order(description="order 5")] == o
def test_with_transient(self):
sess = Session()
-
+
q = sess.query(User)
u1 = q.filter_by(name='jack').one()
utrans = User(id=u1.id)
[Order(description="order 1"), Order(description="order 3"), Order(description="order 5")],
o.all()
)
-
+
def test_with_pending_autoflush(self):
sess = Session()
sess.query(User).with_parent(opending, 'user').one(),
User(id=o1.user_id)
)
-
+
class InheritedJoinTest(_base.MappedTest, AssertsCompiledSQL):
run_setup_mappers = 'once'
-
+
@classmethod
def define_tables(cls, metadata):
Table('companies', metadata,
Column('engineer_name', String(50)),
Column('primary_language', String(50)),
)
-
+
Table('machines', metadata,
Column('machine_id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('name', String(50)),
Column('engineer_id', Integer, ForeignKey('engineers.person_id')))
-
+
Table('managers', metadata,
Column('person_id', Integer, ForeignKey('people.person_id'), primary_key=True),
Column('status', String(30)),
Column('paperwork_id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('description', String(50)),
Column('person_id', Integer, ForeignKey('people.person_id')))
-
+
@classmethod
@testing.resolve_artifact_names
def setup_classes(cls):
inherits=Person, polymorphic_identity='manager')
mapper(Boss, boss, inherits=Manager, polymorphic_identity='boss')
mapper(Paperwork, paperwork)
-
+
@testing.resolve_artifact_names
def test_single_prop(self):
sess = create_session()
-
+
self.assert_compile(
sess.query(Company).join(Company.employees),
"SELECT companies.company_id AS companies_company_id, companies.name AS companies_name "
"WHERE companies.company_id = people.company_id AND engineers.primary_language ="
" :primary_language_1",
use_default_dialect=True
-
+
)
-
+
@testing.resolve_artifact_names
def test_single_prop_of_type(self):
sess = create_session()
@testing.resolve_artifact_names
def test_prop_with_polymorphic(self):
sess = create_session()
-
+
self.assert_compile(
sess.query(Person).with_polymorphic(Manager).
join('paperwork').filter(Paperwork.description.like('%review%')),
"ORDER BY people.person_id"
, use_default_dialect=True
)
-
+
self.assert_compile(
sess.query(Person).with_polymorphic(Manager).
join('paperwork', aliased=True).
@testing.resolve_artifact_names
def test_explicit_polymorphic_join(self):
sess = create_session()
-
+
self.assert_compile(
sess.query(Company).join(Engineer).filter(Engineer.engineer_name=='vlad'),
"SELECT companies.company_id AS companies_company_id, companies.name AS "
def test_multiple_adaption(self):
"""test that multiple filter() adapters get chained together "
and work correctly within a multiple-entry join()."""
-
+
sess = create_session()
self.assert_compile(
"anon_1.people_company_id WHERE anon_1.people_name = :name_1"
, use_default_dialect = True
)
-
+
mach_alias = machines.select()
self.assert_compile(
sess.query(Company).join((people.join(engineers), Company.employees),
def setup_classes(cls):
class A(_fixtures.Base):
pass
-
+
class B(_fixtures.Base):
pass
-
+
class C(B):
pass
-
+
class D(A):
pass
-
+
mapper(A, a,
polymorphic_identity='a',
polymorphic_on=a.c.type,
)
mapper(C, c, inherits=B, polymorphic_identity='c')
mapper(D, d, inherits=A, polymorphic_identity='d')
-
+
@classmethod
@testing.resolve_artifact_names
def insert_data(cls):
A(name='a2')
])
sess.flush()
-
+
@testing.resolve_artifact_names
def test_add_entity_equivalence(self):
sess = create_session()
-
+
for q in [
sess.query( A,B).join( A.link),
sess.query( A).join( A.link).add_entity(B),
A(bid=2, id=1, name=u'a1', type=u'a')
)]
)
-
+
class JoinTest(QueryTest, AssertsCompiledSQL):
-
+
def test_single_name(self):
sess = create_session()
"ON addresses.id = orders.address_id"
, use_default_dialect=True
)
-
+
def test_common_mistake(self):
sess = create_session()
-
+
subq = sess.query(User).subquery()
assert_raises_message(
sa_exc.ArgumentError, "You appear to be passing a clause expression",
assert_raises_message(
sa_exc.ArgumentError, "You appear to be passing a clause expression",
sess.query(User).join, Order, User.id==Order.user_id)
-
+
def test_single_prop(self):
sess = create_session()
self.assert_compile(
"FROM orders AS orders_1 JOIN users ON users.id = orders_1.user_id"
, use_default_dialect=True
)
-
+
# another nonsensical query. (from [ticket:1537]).
# in this case, the contract of "left to right" is honored
self.assert_compile(
"orders AS orders_2 JOIN users ON users.id = orders_2.user_id"
, use_default_dialect=True
)
-
+
self.assert_compile(
sess.query(User).join(User.orders, Order.items),
"SELECT users.id AS users_id, users.name AS users_name FROM users "
"ON orders.id = order_items_1.order_id JOIN items ON items.id = order_items_1.item_id"
, use_default_dialect=True
)
-
+
ualias = aliased(User)
self.assert_compile(
sess.query(ualias).join(ualias.orders),
"FROM users AS users_1 JOIN orders ON users_1.id = orders.user_id"
, use_default_dialect=True
)
-
+
# this query is somewhat nonsensical. the old system didn't render a correct
# query for this. In this case it's the most faithful to what was asked -
# there's no linkage between User.orders and "oalias", so two FROM elements
"WHERE users.name = :name_1) AS anon_1 JOIN orders ON anon_1.users_id = orders.user_id"
, use_default_dialect=True
)
-
+
self.assert_compile(
sess.query(User).join(User.addresses, aliased=True).filter(Address.email_address=='foo'),
"SELECT users.id AS users_id, users.name AS users_name "
"WHERE items_1.id = :id_1"
, use_default_dialect=True
)
-
+
# test #1 for [ticket:1706]
ualias = aliased(User)
self.assert_compile(
"= addresses.user_id"
, use_default_dialect=True
)
-
+
# test #2 for [ticket:1706]
ualias2 = aliased(User)
self.assert_compile(
"ON users_2.id = addresses.user_id JOIN orders ON users_1.id = orders.user_id"
, use_default_dialect=True
)
-
+
def test_overlapping_paths(self):
for aliased in (True,False):
# load a user who has an order that contains item id 3 and address id 1 (order 3, owned by jack)
result = create_session().query(User).outerjoin('orders', 'items').\
filter_by(id=3).outerjoin('orders','address').filter_by(id=1).all()
assert [User(id=7, name='jack')] == result
-
+
def test_from_joinpoint(self):
sess = create_session()
-
+
for oalias,ialias in [(True, True), (False, False), (True, False), (False, True)]:
eq_(
sess.query(User).join('orders', aliased=oalias).join('items', from_joinpoint=True, aliased=ialias).filter(Item.description == 'item 4').all(),
sess.query(User).join('orders', aliased=oalias).filter(Order.user_id==9).join('items', from_joinpoint=True, aliased=ialias).filter(Item.description=='item 4').all(),
[]
)
-
+
orderalias = aliased(Order)
itemalias = aliased(Item)
eq_(
sess.query(User).join(('orders', orderalias), ('items', itemalias)).filter(orderalias.user_id==9).filter(itemalias.description=='item 4').all(),
[]
)
-
+
def test_join_nonmapped_column(self):
"""test that the search for a 'left' doesn't trip on non-mapped cols"""
sess = create_session()
-
+
# intentionally join() with a non-existent "left" side
self.assert_compile(
sess.query(User.id, literal_column('foo')).join(Order.user),
"SELECT users.id AS users_id, foo FROM orders JOIN users ON users.id = orders.user_id"
, use_default_dialect=True
)
-
-
-
+
+
+
def test_backwards_join(self):
# a more controversial feature. join from
# User->Address, but the onclause is Address.user.
-
+
sess = create_session()
eq_(
sess.query(User, Address).join(Address.user).filter(Address.email_address=='ed@wood.com').all(),
[(User(id=8,name=u'ed'), Address(email_address='ed@wood.com'))]
)
-
+
# this was the controversial part. now, raise an error if the feature is abused.
# before the error raise was added, this would silently work.....
assert_raises(
sa_exc.InvalidRequestError,
sess.query(User).join, (adalias, Address.user),
)
-
+
def test_multiple_with_aliases(self):
sess = create_session()
-
+
ualias = aliased(User)
oalias1 = aliased(Order)
oalias2 = aliased(Order)
def test_select_from_orm_joins(self):
sess = create_session()
-
+
ualias = aliased(User)
oalias1 = aliased(Order)
oalias2 = aliased(Order)
"users_1_name FROM users AS users_1 JOIN orders AS orders_1 ON users_1.id = orders_1.user_id, "
"users JOIN orders AS orders_2 ON users.id = orders_2.user_id "
"WHERE orders_1.user_id = :user_id_1 OR orders_2.user_id = :user_id_2",
-
+
use_default_dialect=True
)
-
-
+
+
def test_overlapping_backwards_joins(self):
sess = create_session()
oalias1 = aliased(Order)
oalias2 = aliased(Order)
-
- # this is invalid SQL - joins from orders_1/orders_2 to User twice.
+
+ # this is invalid SQL - joins from orders_1/orders_2 to User twice.
# but that is what was asked for so they get it !
self.assert_compile(
sess.query(User).join(oalias1.user).join(oalias2.user),
def test_replace_multiple_from_clause(self):
"""test adding joins onto multiple FROM clauses"""
-
+
sess = create_session()
-
+
self.assert_compile(
sess.query(Address, User).join(Address.dingaling).join(User.orders, Order.items),
"SELECT addresses.id AS addresses_id, addresses.user_id AS addresses_user_id, "
"ON orders.id = order_items_1.order_id JOIN items ON items.id = order_items_1.item_id",
use_default_dialect = True
)
-
+
def test_multiple_adaption(self):
sess = create_session()
"JOIN items AS items_1 ON items_1.id = order_items_1.item_id WHERE orders_1.id = :id_1 AND items_1.id = :id_2",
use_default_dialect=True
)
-
+
def test_onclause_conditional_adaption(self):
sess = create_session()
"ON orders_1.id = order_items.order_id AND order_items.item_id = items_1.id",
use_default_dialect=True
)
-
+
oalias = orders.select()
self.assert_compile(
sess.query(User).join((oalias, User.orders),
"ON anon_1.id = order_items.order_id AND order_items.item_id = items.id",
use_default_dialect=True
)
-
-
+
+
# query.join(<stuff>, aliased=True).join((target, sql_expression))
# or: query.join(path_to_some_joined_table_mapper).join((target, sql_expression))
-
+
def test_pure_expression_error(self):
sess = create_session()
-
+
assert_raises_message(sa.exc.InvalidRequestError, "Could not find a FROM clause to join from", sess.query(users).join, addresses)
-
-
+
+
def test_orderby_arg_bug(self):
sess = create_session()
# no arg error
result = sess.query(User).join('orders', aliased=True).order_by(Order.id).reset_joinpoint().order_by(users.c.id).all()
-
+
def test_no_onclause(self):
sess = create_session()
sess.query(User).join(Order, (Item, Order.items)).filter(Item.description == 'item 4').all(),
[User(name='jack')]
)
-
+
def test_clause_onclause(self):
sess = create_session()
).all(),
[User(name='fred')]
)
-
-
+
+
def test_aliased_classes(self):
sess = create_session()
q = sess.query(User, AdAlias).select_from(join(AdAlias, User, AdAlias.user)).filter(User.name=='ed')
eq_(l.all(), [(user8, address2),(user8, address3),(user8, address4),])
-
+
def test_implicit_joins_from_aliases(self):
sess = create_session()
OrderAlias = aliased(Order)
Order(address_id=1,description=u'order 3',isopen=1,user_id=7,id=3)
]
)
-
+
eq_(
sess.query(User, OrderAlias, Item.description).join(('orders', OrderAlias), 'items').filter_by(description='item 3').\
order_by(User.id, OrderAlias.id).all(),
(User(name=u'jack',id=7), Order(address_id=1,description=u'order 3',isopen=1,user_id=7,id=3), u'item 3'),
(User(name=u'fred',id=9), Order(address_id=4,description=u'order 2',isopen=0,user_id=9,id=2), u'item 3')
]
- )
-
+ )
+
def test_aliased_classes_m2m(self):
sess = create_session()
-
+
(order1, order2, order3, order4, order5) = sess.query(Order).all()
(item1, item2, item3, item4, item5) = sess.query(Item).all()
expected = [
(order4, item5),
(order5, item5),
]
-
+
q = sess.query(Order)
q = q.add_entity(Item).select_from(join(Order, Item, 'items')).order_by(Order.id, Item.id)
l = q.all()
(order3, item3),
]
)
-
+
def test_joins_from_adapted_entities(self):
# test for #1853
'(SELECT users.id AS id FROM users) AS '
'anon_2 ON anon_2.id = anon_1.users_id',
use_default_dialect=True)
-
+
def test_reset_joinpoint(self):
for aliased in (True, False):
# load a user who has an order that contains item id 3 and address id 1 (order 3, owned by jack)
result = create_session().query(User).outerjoin('orders', 'items', aliased=aliased).filter_by(id=3).reset_joinpoint().outerjoin('orders','address', aliased=aliased).filter_by(id=1).all()
assert [User(id=7, name='jack')] == result
-
+
def test_overlap_with_aliases(self):
oalias = orders.alias('oalias')
# the left half of the join condition of the any() is aliased.
q = sess.query(User).join('orders', aliased=True).filter(Order.items.any(Item.description=='item 4'))
assert [User(id=7)] == q.all()
-
+
# test that aliasing gets reset when join() is called
q = sess.query(User).join('orders', aliased=True).filter(Order.description=="order 3").join('orders', aliased=True).filter(Order.description=="order 5")
assert q.count() == 1
)
def test_plain_table(self):
-
+
sess = create_session()
-
+
eq_(
sess.query(User.name).join((addresses, User.id==addresses.c.user_id)).order_by(User.id).all(),
[(u'jack',), (u'ed',), (u'ed',), (u'ed',), (u'fred',)]
)
-
+
def test_no_joinpoint_expr(self):
sess = create_session()
-
+
# these are consistent regardless of
# select_from() being present.
-
+
assert_raises_message(
sa_exc.InvalidRequestError,
"Could not find a FROM",
sess.query(users.c.id).join, User
)
-
+
assert_raises_message(
sa_exc.InvalidRequestError,
"Could not find a FROM",
sess.query(users.c.id).select_from(users).join, User
)
-
+
def test_select_from(self):
"""Test that the left edge of the join can be set reliably with select_from()."""
-
+
sess = create_session()
self.assert_compile(
sess.query(Item.id).select_from(User).join(User.orders).join(Order.items),
"SELECT items.id AS items_id FROM users JOIN items ON users.id = items.id",
use_default_dialect=True
)
-
-
-
-
+
+
+
+
def test_from_self_resets_joinpaths(self):
"""test a join from from_self() doesn't confuse joins inside the subquery
with the outside.
"""
sess = create_session()
-
+
self.assert_compile(
sess.query(Item).join(Item.keywords).from_self(Keyword).join(Item.keywords),
"SELECT keywords.id AS keywords_id, keywords.name AS keywords_name FROM "
"keywords.id = item_keywords_2.keyword_id",
use_default_dialect=True
)
-
-
+
+
class MultiplePathTest(_base.MappedTest, AssertsCompiledSQL):
@classmethod
def define_tables(cls, metadata):
'users.id = addresses.user_id ORDER BY '
'users.id, addresses.id',
dialect=default.DefaultDialect())
-
+
def go():
assert self.static.user_address_result == q.all()
self.assert_sql_count(testing.db, go, 1)
selectquery = users.outerjoin(adalias).select(use_labels=True, order_by=[users.c.id, adalias.c.id])
sess = create_session()
q = sess.query(User)
-
+
# string alias name
def go():
l = list(q.options(contains_eager('addresses', alias="adalias")).instances(selectquery.execute()))
def test_mixed_eager_contains_with_limit(self):
sess = create_session()
-
+
q = sess.query(User)
def go():
# outerjoin to User.orders, offset 1/limit 2 so we get user 7 + second two orders.
Order(address_id=None,user_id=7,description=u'order 5',isopen=0,id=5)
])])
self.assert_sql_count(testing.db, go, 1)
-
-
+
+
class MixedEntitiesTest(QueryTest, AssertsCompiledSQL):
def test_values(self):
q = sess.query(User)
q2 = q.select_from(sel).values(User.name)
eq_(list(q2), [(u'jack',), (u'ed',)])
-
+
q = sess.query(User)
q2 = q.order_by(User.id).\
values(User.name, User.name + " " + cast(User.id, String(50)))
[(u'jack', u'jack 7'), (u'ed', u'ed 8'),
(u'fred', u'fred 9'), (u'chuck', u'chuck 10')]
)
-
+
q2 = q.join('addresses').\
filter(User.name.like('%e%')).\
order_by(User.id, Address.id).\
eq_(list(q2),
[(u'ed', u'ed@wood.com'), (u'ed', u'ed@bettyboop.com'),
(u'ed', u'ed@lala.com'), (u'fred', u'fred@fred.com')])
-
+
q2 = q.join('addresses').\
filter(User.name.like('%e%')).\
order_by(desc(Address.email_address)).\
slice(1, 3).values(User.name, Address.email_address)
eq_(list(q2), [(u'ed', u'ed@wood.com'), (u'ed', u'ed@lala.com')])
-
+
adalias = aliased(Address)
q2 = q.join(('addresses', adalias)).\
filter(User.name.like('%e%')).\
values(User.name, adalias.email_address)
eq_(list(q2), [(u'ed', u'ed@wood.com'), (u'ed', u'ed@bettyboop.com'),
(u'ed', u'ed@lala.com'), (u'fred', u'fred@fred.com')])
-
+
q2 = q.values(func.count(User.name))
assert q2.next() == (4,)
def test_correlated_subquery(self):
"""test that a subquery constructed from ORM attributes doesn't leak out
those entities to the outermost query.
-
+
"""
sess = create_session()
-
+
subq = select([func.count()]).\
where(User.id==Address.user_id).\
correlate(users).\
def test_tuple_labeling(self):
sess = create_session()
-
+
# test pickle + all the protocols !
for pickled in False, -1, 0, 1, 2:
for row in sess.query(User, Address).join(User.addresses).all():
if pickled is not False:
row = util.pickle.loads(util.pickle.dumps(row, pickled))
-
+
eq_(row.keys(), ['User', 'Address'])
eq_(row.User, row[0])
eq_(row.Address, row[1])
-
+
for row in sess.query(User.name, User.id.label('foobar')):
if pickled is not False:
row = util.pickle.loads(util.pickle.dumps(row, pickled))
eq_(row.keys(), ['User', 'orders'])
eq_(row.User, row[0])
eq_(row.orders, row[1])
-
+
# test here that first col is not labeled, only
# one name in keys, matches correctly
for row in sess.query(User.name + 'hoho', User.name):
eq_(row.keys(), ['name'])
eq_(row[0], row.name + 'hoho')
-
+
if pickled is not False:
ret = sess.query(User, Address).join(User.addresses).all()
util.pickle.loads(util.pickle.dumps(ret, pickled))
-
+
def test_column_queries(self):
sess = create_session()
eq_(sess.query(User.name).all(), [(u'jack',), (u'ed',), (u'fred',), (u'chuck',)])
-
+
sel = users.select(User.id.in_([7, 8])).alias()
q = sess.query(User.name)
q2 = q.select_from(sel).all()
(u'ed', u'ed@bettyboop.com'), (u'ed', u'ed@lala.com'),
(u'fred', u'fred@fred.com')
])
-
+
eq_(sess.query(User.name, func.count(Address.email_address)).\
outerjoin(User.addresses).group_by(User.id, User.name).\
order_by(User.id).all(),
[(1, User(name='jack',id=7)), (3, User(name='ed',id=8)),
(1, User(name='fred',id=9)), (0, User(name='chuck',id=10))]
)
-
+
adalias = aliased(Address)
eq_(sess.query(User, func.count(adalias.email_address)).\
outerjoin(('addresses', adalias)).group_by(User).\
(User(name=u'chuck',id=10), None)
]
)
-
+
# anon + select from aliasing
eq_(
sess.query(User).join(User.addresses, aliased=True).\
def test_column_from_limited_joinedload(self):
sess = create_session()
-
+
def go():
results = sess.query(User).limit(1).\
options(joinedload('addresses')).\
add_column(User.name).all()
eq_(results, [(User(name='jack'), 'jack')])
self.assert_sql_count(testing.db, go, 1)
-
+
@testing.fails_on('postgresql+pg8000', "'type oid 705 not mapped to py type' (due to literal)")
def test_self_referential(self):
-
+
sess = create_session()
oalias = aliased(Order)
sess.query(Order, oalias).from_self().filter(Order.user_id==oalias.user_id).\
filter(Order.user_id==7).filter(Order.id>oalias.id).\
order_by(Order.id, oalias.id),
-
- # same thing, but reversed.
+
+ # same thing, but reversed.
sess.query(oalias, Order).from_self().filter(oalias.user_id==Order.user_id).\
filter(oalias.user_id==7).filter(Order.id<oalias.id).\
order_by(oalias.id, Order.id),
-
+
# here we go....two layers of aliasing
sess.query(Order, oalias).filter(Order.user_id==oalias.user_id).\
filter(Order.user_id==7).filter(Order.id>oalias.id).\
limit(10).options(joinedload(Order.items)),
]:
-
+
eq_(
q.all(),
[
(Order(address_id=None,description=u'order 5',isopen=0,user_id=7,id=5),
Order(address_id=1,description=u'order 1',isopen=0,user_id=7,id=1)),
(Order(address_id=None,description=u'order 5',isopen=0,user_id=7,id=5),
- Order(address_id=1,description=u'order 3',isopen=1,user_id=7,id=3))
+ Order(address_id=1,description=u'order 3',isopen=1,user_id=7,id=3))
]
)
-
-
+
+
# ensure column expressions are taken from inside the subquery, not restated at the top
q = sess.query(Order.id, Order.description, literal_column("'q'").label('foo')).\
filter(Order.description == u'order 3').from_self()
q.all(),
[(3, u'order 3', 'q')]
)
-
-
+
+
def test_multi_mappers(self):
test_session = create_session()
def test_with_entities(self):
sess = create_session()
-
+
q = sess.query(User).filter(User.id==7).order_by(User.name)
-
+
self.assert_compile(
q.with_entities(User.id,Address).\
filter(Address.user_id == User.id),
'addresses.user_id = users.id ORDER BY '
'users.name',
use_default_dialect=True)
-
-
+
+
def test_multi_columns(self):
sess = create_session()
def test_add_multi_columns(self):
"""test that add_column accepts a FROM clause."""
-
+
sess = create_session()
-
+
eq_(
sess.query(User.id).add_column(users).all(),
[(7, 7, u'jack'), (8, 8, u'ed'), (9, 9, u'fred'), (10, 10, u'chuck')]
)
-
+
def test_multi_columns_2(self):
"""test aliased/nonalised joins with the usage of add_column()"""
sess = create_session()
add_column(func.count(Address.id).label('count'))
eq_(q.all(), expected)
sess.expunge_all()
-
+
adalias = aliased(Address)
q = sess.query(User)
q = q.group_by(users).order_by(User.id).outerjoin(('addresses', adalias)).\
filter(User.id.in_([8, 9])).
order_by(User.id).
one)
-
+
@testing.future
def test_getslice(self):
eq_(sess.query(User.id).filter_by(id=0).scalar(), None)
eq_(sess.query(User).filter_by(id=7).scalar(),
sess.query(User).filter_by(id=7).one())
-
+
assert_raises(sa.orm.exc.MultipleResultsFound, sess.query(User).scalar)
assert_raises(sa.orm.exc.MultipleResultsFound, sess.query(User.id, User.name).scalar)
-
+
@testing.resolve_artifact_names
def test_value(self):
sess = create_session()
def test_join_mapper_order_by(self):
"""test that mapper-level order_by is adapted to a selectable."""
-
+
mapper(User, users, order_by=users.c.id)
sel = users.select(users.c.id.in_([7, 8]))
def test_differentiate_self_external(self):
"""test some different combinations of joining a table to a subquery of itself."""
-
+
mapper(User, users)
-
+
sess = create_session()
sel = sess.query(User).filter(User.id.in_([7, 8])).subquery()
ualias = aliased(User)
-
+
self.assert_compile(
sess.query(User).join((sel, User.id>sel.c.id)),
"SELECT users.id AS users_id, users.name AS users_name FROM "
"users WHERE users.id IN (:id_1, :id_2)) AS anon_1 ON users.id > anon_1.id",
use_default_dialect=True
)
-
+
self.assert_compile(
sess.query(ualias).select_from(sel).filter(ualias.id>sel.c.id),
"SELECT users_1.id AS users_1_id, users_1.name AS users_1_name FROM "
"IN (:id_1, :id_2)) AS anon_1 JOIN users AS users_1 ON users_1.id > anon_1.id",
use_default_dialect=True
)
-
-
+
+
# this one uses an explicit join(left, right, onclause) so works
self.assert_compile(
sess.query(ualias).select_from(join(sel, ualias, ualias.id>sel.c.id)),
"IN (:id_1, :id_2)) AS anon_1 JOIN users AS users_1 ON users_1.id > anon_1.id",
use_default_dialect=True
)
-
-
-
+
+
+
def test_join_no_order_by(self):
mapper(User, users)
(User(name='ed',id=8), Address(user_id=8,email_address='ed@lala.com',id=4))
]
)
-
+
def test_more_joins(self):
mapper(User, users, properties={
sel = users.select(users.c.id.in_([7, 8]))
sess = create_session()
-
+
eq_(sess.query(User).select_from(sel).join('orders', 'items', 'keywords').filter(Keyword.name.in_(['red', 'big', 'round'])).all(), [
User(name=u'jack',id=7)
])
closed_orders = relationship(Order, primaryjoin = and_(orders.c.isopen == 0, users.c.id==orders.c.user_id), lazy='select')
))
q = create_session().query(User)
-
+
eq_(
q.join('open_orders', 'items', aliased=True).filter(Item.id==4).\
join('closed_orders', 'items', aliased=True).filter(Item.id==3).all(),
class SelfRefMixedTest(_base.MappedTest, AssertsCompiledSQL):
run_setup_mappers = 'once'
__dialect__ = default.DefaultDialect()
-
+
@classmethod
def define_tables(cls, metadata):
nodes = Table('nodes', metadata,
Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('parent_id', Integer, ForeignKey('nodes.id'))
)
-
+
sub_table = Table('sub_table', metadata,
Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('node_id', Integer, ForeignKey('nodes.id')),
)
-
+
assoc_table = Table('assoc_table', metadata,
Column('left_id', Integer, ForeignKey('nodes.id')),
Column('right_id', Integer, ForeignKey('nodes.id'))
)
-
+
@classmethod
@testing.resolve_artifact_names
def setup_classes(cls):
class Node(Base):
pass
-
+
class Sub(Base):
pass
"FROM nodes JOIN nodes AS nodes_1 ON nodes.id = nodes_1.parent_id "
"JOIN sub_table ON nodes_1.id = sub_table.node_id"
)
-
+
self.assert_compile(
sess.query(Node).join((n1, Node.children)).join((Sub, Node.subs)),
"SELECT nodes.id AS nodes_id, nodes.parent_id AS nodes_parent_id "
"assoc_table_1.left_id JOIN nodes AS nodes_1 ON nodes_1.id = "
"assoc_table_1.right_id JOIN sub_table ON nodes_1.id = sub_table.node_id",
)
-
+
self.assert_compile(
sess.query(Node).join((n1, Node.assoc)).join((Sub, Node.subs)),
"SELECT nodes.id AS nodes_id, nodes.parent_id AS nodes_parent_id "
"assoc_table_1.left_id JOIN nodes AS nodes_1 ON nodes_1.id = "
"assoc_table_1.right_id JOIN sub_table ON nodes.id = sub_table.node_id",
)
-
-
+
+
class SelfReferentialTest(_base.MappedTest, AssertsCompiledSQL):
run_setup_mappers = 'once'
run_inserts = 'once'
Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('parent_id', Integer, ForeignKey('nodes.id')),
Column('data', String(30)))
-
+
@classmethod
def insert_data(cls):
# TODO: somehow using setup_classes()
# here normally is screwing up the other tests.
-
+
global Node, Sub
class Node(Base):
def append(self, node):
self.children.append(node)
-
+
mapper(Node, nodes, properties={
'children':relationship(Node, lazy='select', join_depth=3,
backref=backref('parent', remote_side=[nodes.c.id])
sess.add(n1)
sess.flush()
sess.close()
-
+
@testing.resolve_artifact_names
def test_join(self):
sess = create_session()
ret = sess.query(Node.data).join(Node.children, aliased=True).filter_by(data='n122').all()
assert ret == [('n12',)]
-
+
node = sess.query(Node).join('children', 'children', aliased=True).filter_by(data='n122').first()
assert node.data=='n1'
node = sess.query(Node).filter_by(data='n122').join('parent', aliased=True).filter_by(data='n12').\
join('parent', aliased=True, from_joinpoint=True).filter_by(data='n1').first()
assert node.data == 'n122'
-
+
@testing.resolve_artifact_names
def test_string_or_prop_aliased(self):
"""test that join('foo') behaves the same as join(Cls.foo) in a self
referential scenario.
-
+
"""
-
+
sess = create_session()
nalias = aliased(Node, sess.query(Node).filter_by(data='n1').subquery())
-
+
q1 = sess.query(nalias).join(nalias.children, aliased=True).\
join(Node.children, from_joinpoint=True)
"nodes_1.parent_id JOIN nodes ON nodes_1.id = nodes.parent_id",
use_default_dialect=True
)
-
+
q1 = sess.query(Node).join(nalias.children, aliased=True).\
join(Node.children, aliased=True, from_joinpoint=True).\
join(Node.children, from_joinpoint=True)
q2 = sess.query(Node).join(nalias.children, aliased=True).\
join("children", aliased=True, from_joinpoint=True).\
join("children", from_joinpoint=True)
-
+
for q in (q1, q2):
self.assert_compile(
q,
"JOIN nodes ON nodes_2.id = nodes.parent_id",
use_default_dialect=True
)
-
+
@testing.resolve_artifact_names
def test_from_self_inside_excludes_outside(self):
"""test the propagation of aliased() from inside to outside
        on a from_self().
"""
sess = create_session()
-
+
n1 = aliased(Node)
-
+
# n1 is not inside the from_self(), so all cols must be maintained
# on the outside
self.assert_compile(
join((Node.parent, parent), (parent.parent, grandparent)).\
filter(Node.data=='n122').filter(parent.data=='n12').\
filter(grandparent.data=='n1').from_self().limit(1)
-
+
# parent, grandparent *are* inside the from_self(), so they
# should get aliased to the outside.
self.assert_compile(
"nodes_2.data = :data_3) AS anon_1 LIMIT 1",
use_default_dialect=True
)
-
+
@testing.resolve_artifact_names
def test_explicit_join(self):
sess = create_session()
-
+
n1 = aliased(Node)
n2 = aliased(Node)
-
+
self.assert_compile(
join(Node, n1, 'children').join(n2, 'children'),
"nodes JOIN nodes AS nodes_1 ON nodes.id = nodes_1.parent_id JOIN nodes AS nodes_2 ON nodes_1.id = nodes_2.parent_id",
"JOIN nodes AS nodes_2 ON nodes_1.id = nodes_2.parent_id",
use_default_dialect=True
)
-
+
self.assert_compile(
sess.query(Node).join((n1, Node.children)).join((n2, Node.children)),
"SELECT nodes.id AS nodes_id, nodes.parent_id AS nodes_parent_id, nodes.data AS "
"JOIN nodes AS nodes_2 ON nodes.id = nodes_2.parent_id",
use_default_dialect=True
)
-
+
node = sess.query(Node).select_from(join(Node, n1, 'children')).filter(n1.data=='n122').first()
assert node.data=='n12'
-
+
node = sess.query(Node).select_from(join(Node, n1, 'children').join(n2, 'children')).\
filter(n2.data=='n122').first()
assert node.data=='n1'
-
+
# mix explicit and named onclauses
node = sess.query(Node).select_from(join(Node, n1, Node.id==n1.parent_id).join(n2, 'children')).\
filter(n2.data=='n122').first()
list(sess.query(Node).select_from(join(Node, n1, 'parent').join(n2, 'parent')).\
filter(and_(Node.data=='n122', n1.data=='n12', n2.data=='n1')).values(Node.data, n1.data, n2.data)),
[('n122', 'n12', 'n1')])
-
+
@testing.resolve_artifact_names
def test_join_to_nonaliased(self):
sess = create_session()
-
+
n1 = aliased(Node)
# using 'n1.parent' implicitly joins to unaliased Node
sess.query(n1).join(n1.parent).filter(Node.data=='n1').all(),
[Node(parent_id=1,data=u'n11',id=2), Node(parent_id=1,data=u'n12',id=3), Node(parent_id=1,data=u'n13',id=4)]
)
-
+
# explicit (new syntax)
eq_(
sess.query(n1).join((Node, n1.parent)).filter(Node.data=='n1').all(),
[Node(parent_id=1,data=u'n11',id=2), Node(parent_id=1,data=u'n12',id=3), Node(parent_id=1,data=u'n13',id=4)]
)
-
-
+
+
@testing.resolve_artifact_names
def test_multiple_explicit_entities(self):
sess = create_session()
-
+
parent = aliased(Node)
grandparent = aliased(Node)
eq_(
options(joinedload(Node.children)).first(),
(Node(data='n122'), Node(data='n12'), Node(data='n1'))
)
-
-
+
+
@testing.resolve_artifact_names
def test_any(self):
sess = create_session()
@testing.resolve_artifact_names
def test_has(self):
sess = create_session()
-
+
eq_(sess.query(Node).filter(Node.parent.has(Node.data=='n12')).order_by(Node.id).all(),
[Node(data='n121'),Node(data='n122'),Node(data='n123')])
eq_(sess.query(Node).filter(Node.parent.has(Node.data=='n122')).all(), [])
@testing.resolve_artifact_names
def test_contains(self):
sess = create_session()
-
+
n122 = sess.query(Node).filter(Node.data=='n122').one()
eq_(sess.query(Node).filter(Node.children.contains(n122)).all(), [Node(data='n12')])
@testing.resolve_artifact_names
def test_eq_ne(self):
sess = create_session()
-
+
n12 = sess.query(Node).filter(Node.data=='n12').one()
eq_(sess.query(Node).filter(Node.parent==n12).all(), [Node(data='n121'),Node(data='n122'),Node(data='n123')])
-
+
eq_(sess.query(Node).filter(Node.parent != n12).all(), [Node(data='n1'), Node(data='n11'), Node(data='n12'), Node(data='n13')])
class SelfReferentialM2MTest(_base.MappedTest):
nodes = Table('nodes', metadata,
Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('data', String(30)))
-
+
        node_to_nodes = Table('node_to_nodes', metadata,
Column('left_node_id', Integer, ForeignKey('nodes.id'),primary_key=True),
Column('right_node_id', Integer, ForeignKey('nodes.id'),primary_key=True),
@classmethod
def insert_data(cls):
global Node
-
+
class Node(Base):
pass
n5 = Node(data='n5')
n6 = Node(data='n6')
n7 = Node(data='n7')
-
+
n1.children = [n2, n3, n4]
n2.children = [n3, n6, n7]
n3.children = [n5, n4]
def test_explicit_join(self):
sess = create_session()
-
+
n1 = aliased(Node)
eq_(
sess.query(Node).select_from(join(Node, n1, 'children')).filter(n1.data.in_(['n3', 'n7'])).order_by(Node.id).all(),
[Node(data='n1'), Node(data='n2')]
)
-
+
class ExternalColumnsTest(QueryTest):
"""test mappers with SQL-expressions added as column properties."""
def test_external_columns(self):
"""test querying mappings that reference external columns or selectables."""
-
+
mapper(User, users, properties={
'concat': column_property((users.c.id * 2)),
'count': column_property(
})
sess = create_session()
-
+
sess.query(Address).options(joinedload('user')).all()
eq_(sess.query(User).all(),
order_by(Address.id).all(),
address_result)
self.assert_sql_count(testing.db, go, 1)
-
+
ualias = aliased(User)
eq_(
sess.query(Address, ualias).join(('user', ualias)).all(),
pass
class Sub2(_base.ComparableEntity):
pass
-
+
mapper(Base, base, properties={
'sub1':relationship(Sub1),
'sub2':relationship(Sub2)
})
-
+
mapper(Sub1, sub1)
mapper(Sub2, sub2)
sess = create_session()
-
+
s11 = Sub1(data='s11')
s12 = Sub1(data='s12')
s2 = Sub2(data='s2')
sess.add(b1)
sess.add(b2)
sess.flush()
-
+
        # there's an overlapping ForeignKey here, so there's not much option
        # except to artificially control the flush order
b2.sub2 = [s2]
sess.flush()
-
+
q = sess.query(Base).outerjoin('sub2', aliased=True)
assert sub1.c.id not in q._filter_aliases.equivalents
filter(Sub1.id==1).one(),
b1
)
-
+
class UpdateDeleteTest(_base.MappedTest):
@classmethod
def define_tables(cls, metadata):
@testing.resolve_artifact_names
def test_illegal_operations(self):
s = create_session()
-
+
for q, mname in (
(s.query(User).limit(2), "limit"),
(s.query(User).offset(2), "offset"),
):
assert_raises_message(sa_exc.InvalidRequestError, r"Can't call Query.update\(\) when %s\(\) has been called" % mname, q.update, {'name':'ed'})
assert_raises_message(sa_exc.InvalidRequestError, r"Can't call Query.delete\(\) when %s\(\) has been called" % mname, q.delete)
-
-
+
+
@testing.resolve_artifact_names
def test_delete(self):
sess = create_session(bind=testing.db, autocommit=False)
-
+
john,jack,jill,jane = sess.query(User).order_by(User.id).all()
sess.query(User).filter(or_(User.name == 'john', User.name == 'jill')).delete()
-
+
assert john not in sess and jill not in sess
-
+
eq_(sess.query(User).order_by(User.id).all(), [jack,jane])
@testing.resolve_artifact_names
assert john not in sess and jill not in sess
sess.rollback()
assert john in sess and jill in sess
-
+
@testing.resolve_artifact_names
def test_delete_without_session_sync(self):
sess = create_session(bind=testing.db, autocommit=False)
-
+
john,jack,jill,jane = sess.query(User).order_by(User.id).all()
sess.query(User).filter(or_(User.name == 'john', User.name == 'jill')).delete(synchronize_session=False)
-
+
assert john in sess and jill in sess
-
+
eq_(sess.query(User).order_by(User.id).all(), [jack,jane])
@testing.resolve_artifact_names
def test_delete_with_fetch_strategy(self):
sess = create_session(bind=testing.db, autocommit=False)
-
+
john,jack,jill,jane = sess.query(User).order_by(User.id).all()
sess.query(User).filter(or_(User.name == 'john', User.name == 'jill')).delete(synchronize_session='fetch')
-
+
assert john not in sess and jill not in sess
-
+
eq_(sess.query(User).order_by(User.id).all(), [jack,jane])
@testing.fails_on('mysql', 'FIXME: unknown')
@testing.resolve_artifact_names
def test_delete_invalid_evaluation(self):
sess = create_session(bind=testing.db, autocommit=False)
-
+
john,jack,jill,jane = sess.query(User).order_by(User.id).all()
-
+
assert_raises(sa_exc.InvalidRequestError,
sess.query(User).filter(User.name == select([func.max(User.name)])).delete, synchronize_session='evaluate'
)
-
+
sess.query(User).filter(User.name == select([func.max(User.name)])).delete(synchronize_session='fetch')
-
+
assert john not in sess
-
+
eq_(sess.query(User).order_by(User.id).all(), [jack,jill,jane])
@testing.resolve_artifact_names
def test_update(self):
sess = create_session(bind=testing.db, autocommit=False)
-
+
john,jack,jill,jane = sess.query(User).order_by(User.id).all()
sess.query(User).filter(User.age > 29).update({'age': User.age - 10}, synchronize_session='evaluate')
-
+
eq_([john.age, jack.age, jill.age, jane.age], [25,37,29,27])
eq_(sess.query(User.age).order_by(User.id).all(), zip([25,37,29,27]))
)
class Data(_base.ComparableEntity):
pass
-
+
mapper(Data, data, properties={'cnt':data.c.counter})
metadata.create_all()
d1 = Data()
sess.query(Data).update({Data.cnt:Data.cnt + 1})
sess.flush()
-
+
eq_(d1.cnt, 1)
sess.query(Data).update({Data.cnt:Data.cnt + 1}, 'fetch')
sess.flush()
-
+
eq_(d1.cnt, 2)
sess.close()
sess = create_session(bind=testing.db, autocommit=False, autoflush=False)
john,jack,jill,jane = sess.query(User).order_by(User.id).all()
-
+
john.age = 50
jack.age = 37
-
+
# autoflush is false. therefore our '50' and '37' are getting blown away by this operation.
-
+
sess.query(User).filter(User.age > 29).update({'age': User.age - 10}, synchronize_session='evaluate')
for x in (john, jack, jill, jane):
assert not sess.is_modified(x)
eq_([john.age, jack.age, jill.age, jane.age], [25,37,29,27])
-
+
john.age = 25
assert john in sess.dirty
assert jack in sess.dirty
assert jill not in sess.dirty
assert sess.is_modified(john)
assert not sess.is_modified(jack)
-
-
+
+
@testing.resolve_artifact_names
def test_update_with_expire_strategy(self):
sess = create_session(bind=testing.db, autocommit=False)
-
+
john,jack,jill,jane = sess.query(User).order_by(User.id).all()
sess.query(User).filter(User.age > 29).update({'age': User.age - 10}, synchronize_session='fetch')
-
+
eq_([john.age, jack.age, jill.age, jane.age], [25,37,29,27])
eq_(sess.query(User.age).order_by(User.id).all(), zip([25,37,29,27]))
@testing.resolve_artifact_names
def test_update_all(self):
sess = create_session(bind=testing.db, autocommit=False)
-
+
john,jack,jill,jane = sess.query(User).order_by(User.id).all()
sess.query(User).update({'age': 42}, synchronize_session='evaluate')
-
+
eq_([john.age, jack.age, jill.age, jane.age], [42,42,42,42])
eq_(sess.query(User.age).order_by(User.id).all(), zip([42,42,42,42]))
@testing.resolve_artifact_names
def test_delete_all(self):
sess = create_session(bind=testing.db, autocommit=False)
-
+
john,jack,jill,jane = sess.query(User).order_by(User.id).all()
sess.query(User).delete(synchronize_session='evaluate')
-
+
assert not (john in sess or jack in sess or jill in sess or jane in sess)
eq_(sess.query(User).count(), 0)
-
+
class StatementOptionsTest(QueryTest):
""" Make sure a Query's execution_options are passed on to the
"""Tests a composite FK where, in
the relationship(), one col points
to itself in the same table.
-
+
this is a very unusual case::
-
+
company employee
---------- ----------
company_id <--- company_id ------+
name ^ |
+------------+
-
+
emp_id <---------+
name |
reports_to_id ---+
-
+
employee joins to its sub-employees
both on reports_to_id, *and on company_id to itself*.
-
+
"""
@classmethod
self.company = company
self.emp_id = emp_id
self.reports_to = reports_to
-
+
@testing.resolve_artifact_names
def test_explicit(self):
mapper(Company, company_t)
})
self._test()
-
+
@testing.resolve_artifact_names
def test_very_explicit(self):
mapper(Company, company_t)
})
self._test()
-
+
@testing.resolve_artifact_names
def _test(self):
sess = create_session()
test_needs_autoincrement=True),
Column("foo",Integer,),
test_needs_fk=True)
-
+
Table("tableB",metadata,
Column("id",Integer,ForeignKey("tableA.id"),primary_key=True),
test_needs_fk=True)
def test_onetoone_switch(self):
"""test that active history is enabled on a
one-to-many/one that has use_get==True"""
-
+
mapper(A, tableA, properties={
'b':relationship(B, cascade="all,delete-orphan", uselist=False)})
mapper(B, tableB)
-
+
compile_mappers()
assert A.b.property.strategy.use_get
-
+
sess = create_session()
-
+
a1 = A()
sess.add(a1)
sess.flush()
a1 = sess.query(A).first()
a1.b = B()
sess.flush()
-
+
@testing.resolve_artifact_names
def test_no_delete_PK_AtoB(self):
"""A cant be deleted without B because B would have no PK value."""
def test_delete_cascade_AtoB(self):
"""No 'blank the PK' error when the child is to
be deleted as part of a cascade"""
-
+
for cascade in ("save-update, delete",
#"save-update, delete-orphan",
"save-update, delete, delete-orphan"):
class UniqueColReferenceSwitchTest(_base.MappedTest):
"""test a relationship based on a primary
join against a unique non-pk column"""
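    # Roughly (hypothetical column/attribute names, not the exact fixture
    # below): table_b carries a ForeignKey to table_a.ident, a UNIQUE but
    # non-primary-key column, and the relationship() joins on that column
    # rather than on table_a's primary key:
    #
    #   'a': relationship(A,
    #       primaryjoin=table_a.c.ident == table_b.c.a_ident)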
-
+
@classmethod
def define_tables(cls, metadata):
Table("table_a", metadata,
ForeignKey('table_a.ident'),
nullable=False),
)
-
+
@classmethod
def setup_classes(cls):
class A(_base.ComparableEntity):
b.a = a2
session.delete(a1)
session.flush()
-
+
class RelationshipToSelectableTest(_base.MappedTest):
"""Test a map to a select that relates to a map to the table."""
"""test a relationship with a non-column entity in the primary join,
is not viewonly, and also has the non-column's clause mentioned in the
foreign keys list.
-
+
"""
-
+
@classmethod
def define_tables(cls, metadata):
Table('tags', metadata, Column("id", Integer, primary_key=True,
sess.add(t1)
sess.flush()
sess.expunge_all()
-
+
# relationship works
eq_(
sess.query(Tag).all(),
[Tag(data='some tag', foo=[TagInstance(data='iplc_case')])]
)
-
+
# both TagInstances were persisted
eq_(
sess.query(TagInstance).order_by(TagInstance.data).all(),
)
class BackrefPropagatesForwardsArgs(_base.MappedTest):
-
+
@classmethod
def define_tables(cls, metadata):
Table('users', metadata,
Column('user_id', Integer),
Column('email', String(50))
)
-
+
@classmethod
def setup_classes(cls):
class User(_base.ComparableEntity):
pass
class Address(_base.ComparableEntity):
pass
-
+
@testing.resolve_artifact_names
def test_backref(self):
-
+
mapper(User, users, properties={
'addresses':relationship(Address,
primaryjoin=addresses.c.user_id==users.c.id,
backref='user')
})
mapper(Address, addresses)
-
+
sess = sessionmaker()()
u1 = User(name='u1', addresses=[Address(email='a1')])
sess.add(u1)
eq_(sess.query(Address).all(), [
Address(email='a1', user=User(name='u1'))
])
-
+
class AmbiguousJoinInterpretedAsSelfRef(_base.MappedTest):
"""test ambiguous joins due to FKs on both sides treated as
self-referential.
-
+
this mapping is very similar to that of
test/orm/inheritance/query.py
    SelfReferentialTestJoinedToBase, except that inheritance is
not used here.
-
+
"""
-
+
@classmethod
def define_tables(cls, metadata):
subscriber_table = Table('subscriber', metadata,
'addresses' : relationship(Address,
backref=backref("customer"))
})
-
+
@testing.resolve_artifact_names
def test_mapping(self):
from sqlalchemy.orm.interfaces import ONETOMANY, MANYTOONE
sess = create_session()
assert Subscriber.addresses.property.direction is ONETOMANY
assert Address.customer.property.direction is MANYTOONE
-
+
s1 = Subscriber(type='A',
addresses = [
Address(type='D'),
]
)
a1 = Address(type='B', customer=Subscriber(type='C'))
-
+
assert s1.addresses[0].customer is s1
assert a1.customer.addresses[0] is a1
-
+
sess.add_all([s1, a1])
-
+
sess.flush()
sess.expunge_all()
-
+
eq_(
sess.query(Subscriber).order_by(Subscriber.type).all(),
[
"""Test explicit relationships that are backrefs to each other."""
run_inserts = None
-
+
@testing.resolve_artifact_names
def test_o2m(self):
mapper(User, users, properties={
'addresses':relationship(Address, back_populates='user')
})
-
+
mapper(Address, addresses, properties={
'user':relationship(User, back_populates='addresses')
})
-
+
sess = create_session()
-
+
u1 = User(name='u1')
a1 = Address(email_address='foo')
u1.addresses.append(a1)
assert a1.user is u1
-
+
sess.add(u1)
sess.flush()
sess.expire_all()
mapper(User, users, properties={
'addresses':relationship(Address, back_populates='userr')
})
-
+
mapper(Address, addresses, properties={
'user':relationship(User, back_populates='addresses')
})
-
+
assert_raises(sa.exc.InvalidRequestError, compile_mappers)
-
+
@testing.resolve_artifact_names
def test_invalid_target(self):
mapper(User, users, properties={
'addresses':relationship(Address, back_populates='dingaling'),
})
-
+
mapper(Dingaling, dingalings)
mapper(Address, addresses, properties={
'dingaling':relationship(Dingaling)
})
-
+
assert_raises_message(sa.exc.ArgumentError,
r"reverse_property 'dingaling' on relationship "
"User.addresses references "
"relationship Address.dingaling, which does not "
"reference mapper Mapper\|User\|users",
compile_mappers)
-
+
class JoinConditionErrorTest(testing.TestBase):
-
+
def test_clauseelement_pj(self):
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
id = Column('id', Integer, primary_key=True)
c1id = Column('c1id', Integer, ForeignKey('c1.id'))
c2 = relationship(C1, primaryjoin=C1.id)
-
+
assert_raises(sa.exc.ArgumentError, compile_mappers)
def test_clauseelement_pj_false(self):
c2 = relationship(C1, primaryjoin="x"=="y")
assert_raises(sa.exc.ArgumentError, compile_mappers)
-
+
def test_only_column_elements(self):
m = MetaData()
t1 = Table('t1', m,
class C2(object):
pass
- mapper(C1, t1, properties={'c2':relationship(C2,
+ mapper(C1, t1, properties={'c2':relationship(C2,
primaryjoin=t1.join(t2))})
mapper(C2, t2)
assert_raises(sa.exc.ArgumentError, compile_mappers)
-
+
def test_invalid_string_args(self):
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import util
-
+
for argname, arg in [
('remote_side', ['c1.id']),
('remote_side', ['id']),
class C1(Base):
__tablename__ = 'c1'
id = Column('id', Integer, primary_key=True)
-
+
class C2(Base):
__tablename__ = 'c2'
id_ = Column('id', Integer, primary_key=True)
c1id = Column('c1id', Integer, ForeignKey('c1.id'))
c2 = relationship(C1, **kw)
-
+
assert_raises_message(
sa.exc.ArgumentError,
"Column-based expression object expected "
"for argument '%s'; got: '%s', type %r" %
(argname, arg[0], type(arg[0])),
compile_mappers)
-
-
+
+
def test_fk_error_raised(self):
m = MetaData()
t1 = Table('t1', m,
Column('id', Integer, primary_key=True),
Column('t1id', Integer, ForeignKey('t1.id'))
)
-
+
class C1(object):
pass
class C2(object):
pass
-
+
mapper(C1, t1, properties={'c2':relationship(C2)})
mapper(C2, t3)
-
+
assert_raises(sa.exc.NoReferencedColumnError, compile_mappers)
-
+
def test_join_error_raised(self):
m = MetaData()
t1 = Table('t1', m,
mapper(C2, t3)
assert_raises(sa.exc.ArgumentError, compile_mappers)
-
+
def teardown(self):
- clear_mappers()
-
+ clear_mappers()
+
class TypeMatchTest(_base.MappedTest):
"""test errors raised when trying to add items
whose type is not handled by a relationship"""
Column('t1id', Integer, ForeignKey('t1.id'), primary_key=True),
Column('t2id', Integer, ForeignKey('t2.id'), primary_key=True),
)
-
+
@testing.resolve_artifact_names
def test_viewonly(self):
class A(_base.ComparableEntity):pass
class B(_base.ComparableEntity):pass
-
+
mapper(A, t1, properties={
'bs':relationship(B, secondary=t1t2,
backref=backref('as_', viewonly=True))
})
mapper(B, t2)
-
+
sess = create_session()
a1 = A()
b1 = B(as_=[a1])
eq_(
sess.query(B).first(), B(as_=[A(id=a1.id)])
)
-
+
class ViewOnlyOverlappingNames(_base.MappedTest):
"""'viewonly' mappings with overlapping PK column names."""
class ViewOnlyLocalRemoteM2M(testing.TestBase):
"""test that local-remote is correctly determined for m2m"""
-
+
def test_local_remote(self):
meta = MetaData()
-
+
t1 = Table('t1', meta,
Column('id', Integer, primary_key=True),
)
Column('t1_id', Integer, ForeignKey('t1.id',)),
Column('t2_id', Integer, ForeignKey('t2.id',)),
)
-
+
class A(object): pass
class B(object): pass
mapper( B, t2, )
m.get_property('b_plain').local_remote_pairs == \
[(t1.c.id, t12.c.t1_id), (t2.c.id, t12.c.t2_id)]
-
-
+
+
class ViewOnlyNonEquijoin(_base.MappedTest):
"""'viewonly' mappings based on non-equijoins."""
"mean to set remote_side on the many-to-one side ?",
sa.orm.compile_mappers)
-
+
class InvalidRelationshipEscalationTest(_base.MappedTest):
@classmethod
sa.exc.ArgumentError,
"could not determine any local/remote column pairs",
sa.orm.compile_mappers)
-
-
+
+
@testing.resolve_artifact_names
def test_no_equated_self_ref(self):
mapper(Foo, foos, properties={
viewonly=True)})
mapper(Bar, bars_with_fks)
sa.orm.compile_mappers()
-
+
@testing.resolve_artifact_names
def test_no_equated_self_ref_viewonly(self):
mapper(Foo, foos, properties={
"present, or are otherwise part of a "
"ForeignKeyConstraint on their parent "
"Table.", sa.orm.compile_mappers)
-
+
sa.orm.clear_mappers()
mapper(Foo, foos_with_fks, properties={
'foos':relationship(Foo,
"Could not determine relationship direction for primaryjoin "
"condition",
sa.orm.compile_mappers)
-
+
@testing.resolve_artifact_names
def test_equated_self_ref_wrong_fks(self):
primaryjoin=foos.c.id==foobars.c.fid,
secondaryjoin=foobars.c.bid==bars.c.id)})
mapper(Bar, bars)
-
+
assert_raises_message(sa.exc.SAWarning,
"No ForeignKey objects were present in "
"secondary table 'foobars'. Assumed "
"condition 'foos.id = foobars.fid' on "
"relationship Foo.bars",
sa.orm.compile_mappers)
-
+
sa.orm.clear_mappers()
mapper(Foo, foos, properties={
'bars': relationship(Bar,
Foo.bars.property.secondary_synchronize_pairs,
[(bars.c.id, foobars_with_many_columns.c.bid)]
)
-
-
+
+
@testing.resolve_artifact_names
def test_bad_primaryjoin(self):
mapper(Foo, foos, properties={
"Could not determine relationship direction for "
"primaryjoin condition",
sa.orm.compile_mappers)
-
+
sa.orm.clear_mappers()
mapper(Foo, foos, properties={
'bars': relationship(Bar,
viewonly=True)})
mapper(Bar, bars)
sa.orm.compile_mappers()
-
+
@testing.resolve_artifact_names
def test_bad_secondaryjoin(self):
mapper(Foo, foos, properties={
class ActiveHistoryFlagTest(_fixtures.FixtureTest):
run_inserts = None
run_deletes = None
-
+
def _test_attribute(self, obj, attrname, newvalue):
sess = Session()
sess.add(obj)
oldvalue = getattr(obj, attrname)
sess.commit()
-
+
# expired
assert attrname not in obj.__dict__
-
+
setattr(obj, attrname, newvalue)
eq_(
attributes.get_history(obj, attrname),
([newvalue,], (), [oldvalue,])
)
-
+
@testing.resolve_artifact_names
def test_column_property_flag(self):
mapper(User, users, properties={
u2 = User(name='ed')
a1 = Address(email_address='a1', user=u1)
self._test_attribute(a1, 'user', u2)
-
+
@testing.resolve_artifact_names
def test_composite_property_flag(self):
class MyComposite(object):
})
o1 = Order(composite=MyComposite('foo', 1))
self._test_attribute(o1, "composite", MyComposite('bar', 1))
-
-
+
+
class RelationDeprecationTest(_base.MappedTest):
"""test usage of the old 'relation' function."""
-
+
run_inserts = 'once'
run_deletes = None
def test_config_errors(self):
Session = scoped_session(sa.orm.sessionmaker())
-
+
s = Session()
assert_raises_message(
sa.exc.InvalidRequestError,
"At least one scoped session is already present. ",
Session.configure, bind=testing.db
)
-
+
class ScopedMapperTest(_ScopedTest):
@classmethod
def test_no_selects(self):
subset_select = select([common.c.id, common.c.data])
assert_raises(sa.exc.InvalidRequestError, mapper, Subset, subset_select)
-
+
@testing.resolve_artifact_names
def test_basic(self):
subset_select = select([common.c.id, common.c.data]).alias()
object_session,
User()
)
-
+
@testing.requires.sequences
def test_sequence_execute(self):
seq = Sequence("some_sequence")
eq_(sess.execute(seq), 1)
finally:
seq.drop(testing.db)
-
-
+
+
@testing.resolve_artifact_names
def test_expunge_cascade(self):
mapper(Address, addresses)
sess.execute(users_unbound.delete())
eq_(sess.execute(users_unbound.select()).fetchall(), [])
-
+
sess.close()
@engines.close_open_connections
sess = create_session()
sess.add(User(name='test'))
sess.flush()
-
+
u1 = sess.query(User).first()
make_transient(u1)
assert u1 not in sess
make_transient(u1)
sess.add(u1)
assert u1 in sess.new
-
+
# test expired attributes
# get unexpired
u1 = sess.query(User).first()
# works twice
make_transient(u1)
-
+
sess.close()
-
+
u1.name = 'test2'
sess.add(u1)
sess.flush()
sess.delete(u1)
sess.flush()
assert u1 not in sess
-
+
assert_raises(sa.exc.InvalidRequestError, sess.add, u1)
make_transient(u1)
sess.add(u1)
@testing.resolve_artifact_names
def test_deleted_flag(self):
mapper(User, users)
-
+
sess = sessionmaker()()
-
+
u1 = User(name='u1')
sess.add(u1)
sess.commit()
-
+
sess.delete(u1)
sess.flush()
assert u1 not in sess
assert_raises(sa.exc.InvalidRequestError, sess.add, u1)
sess.rollback()
assert u1 in sess
-
+
sess.delete(u1)
sess.commit()
assert u1 not in sess
assert_raises(sa.exc.InvalidRequestError, sess.add, u1)
-
+
make_transient(u1)
sess.add(u1)
sess.commit()
-
+
eq_(sess.query(User).count(), 1)
-
+
@testing.resolve_artifact_names
def test_autoflush_expressions(self):
"""test that an expression which is dependent on object state is
evaluated after the session autoflushes. This is the lambda
inside of strategies.py lazy_clause.
-
+
"""
mapper(User, users, properties={
'addresses':relationship(Address, backref="user")})
assert newad not in u.addresses
        # pending objects don't get expired
assert newad.email_address == 'a new address'
-
+
@testing.resolve_artifact_names
def test_autocommit_doesnt_raise_on_pending(self):
mapper(User, users)
session.begin()
session.flush()
session.commit()
-
+
def test_active_flag(self):
sess = create_session(bind=config.db, autocommit=True)
assert not sess.is_active
assert sess.is_active
sess.rollback()
assert not sess.is_active
-
+
@testing.resolve_artifact_names
def test_textual_execute(self):
"""test that Session.execute() converts to text()"""
def test_transactions_isolated(self):
mapper(User, users)
users.delete().execute()
-
+
s1 = create_session(bind=testing.db, autocommit=False)
s2 = create_session(bind=testing.db, autocommit=False)
u1 = User(name='u1')
s1.add(u1)
s1.flush()
-
+
assert s2.query(User).all() == []
-
+
@testing.requires.two_phase_transactions
@testing.resolve_artifact_names
def test_twophase(self):
sa.exc.DBAPIError,
sess.commit
)
-
+
for i in range(5):
assert_raises_message(sa.exc.InvalidRequestError,
"^This Session's transaction has been "
sess.rollback()
sess.add(User(id=5, name='some name'))
sess.commit()
-
-
+
+
@testing.resolve_artifact_names
def test_no_autocommit_with_explicit_commit(self):
mapper(User, users)
assert user not in s
s.delete(user)
assert user in s
-
+
s.flush()
assert user not in s
assert s.query(User).count() == 0
del user
s.add(u2)
-
+
del u2
gc_collect()
-
+
assert len(s.identity_map) == 1
assert len(s.dirty) == 1
assert None not in s.dirty
s.flush()
gc_collect()
assert not s.dirty
-
+
assert not s.identity_map
@testing.resolve_artifact_names
assert_raises(AssertionError, s.identity_map.add,
sa.orm.attributes.instance_state(u2))
-
-
+
+
@testing.resolve_artifact_names
def test_weakref_with_cycles_o2m(self):
s = sessionmaker()()
mapper(Address, addresses)
s.add(User(name="ed", addresses=[Address(email_address="ed1")]))
s.commit()
-
+
user = s.query(User).options(joinedload(User.addresses)).one()
user.addresses[0].user # lazyload
eq_(user, User(name="ed", addresses=[Address(email_address="ed1")]))
-
+
del user
gc_collect()
assert len(s.identity_map) == 0
del user
gc_collect()
assert len(s.identity_map) == 2
-
+
s.commit()
user = s.query(User).options(joinedload(User.addresses)).one()
eq_(user, User(name="ed", addresses=[Address(email_address="ed2")]))
-
+
@testing.resolve_artifact_names
def test_weakref_with_cycles_o2o(self):
s = sessionmaker()()
del user
gc_collect()
assert len(s.identity_map) == 2
-
+
s.commit()
user = s.query(User).options(joinedload(User.address)).one()
eq_(user, User(name="ed", address=Address(email_address="ed2")))
-
+
@testing.resolve_artifact_names
def test_strong_ref(self):
s = create_session(weak_identity_map=False)
assert s.identity_map._modified
s.flush()
eq_(users.select().execute().fetchall(), [(user.id, 'u2')])
-
+
@testing.fails_on('+zxjdbc', 'http://www.sqlalchemy.org/trac/ticket/1473')
@testing.resolve_artifact_names
def test_prune(self):
@testing.resolve_artifact_names
def test_before_flush(self):
"""test that the flush plan can be affected during before_flush()"""
-
+
mapper(User, users)
-
+
class MyExt(sa.orm.session.SessionExtension):
def before_flush(self, session, flush_context, objects):
for obj in list(session.new) + list(session.dirty):
x = session.query(User).filter(User.name
== 'another %s' % obj.name).one()
session.delete(x)
-
+
sess = create_session(extension = MyExt(), autoflush=True)
u = User(name='u1')
sess.add(u)
User(name='u1')
]
)
-
+
sess.flush()
eq_(sess.query(User).order_by(User.name).all(),
[
@testing.resolve_artifact_names
def test_before_flush_affects_dirty(self):
mapper(User, users)
-
+
class MyExt(sa.orm.session.SessionExtension):
def before_flush(self, session, flush_context, objects):
for obj in list(session.identity_map.values()):
obj.name += " modified"
-
+
sess = create_session(extension = MyExt(), autoflush=True)
u = User(name='u1')
sess.add(u)
User(name='u1')
]
)
-
+
sess.add(User(name='u2'))
sess.flush()
sess.expunge_all()
class MyExt(sa.orm.session.SessionExtension):
def before_flush(s, session, flush_context, objects):
session.flush()
-
+
sess = create_session(extension=MyExt())
sess.add(User(name='foo'))
assert_raises_message(sa.exc.InvalidRequestError,
mapper(User, users)
sess = Session()
-
+
sess.add_all([User(name='u1'), User(name='u2'), User(name='u3')])
sess.commit()
-
+
u1, u2, u3 = sess.query(User).all()
for i, (key, value) in enumerate(sess.identity_map.iteritems()):
if i == 2:
del u3
gc_collect()
-
-
+
+
class DisposedStates(_base.MappedTest):
run_setup_mappers = 'once'
run_inserts = 'once'
run_deletes = None
-
+
@classmethod
def define_tables(cls, metadata):
global t1
def __init__(self, data):
self.data = data
mapper(T, t1)
-
+
def teardown(self):
from sqlalchemy.orm.session import _sessions
_sessions.clear()
super(DisposedStates, self).teardown()
-
+
def _set_imap_in_disposal(self, sess, *objs):
"""remove selected objects from the given session, as though
they were dereferenced and removed from WeakIdentityMap.
-
+
Hardcodes the identity map's "all_states()" method to return the
full list of states. This simulates the all_states() method
        returning results, after which some of the states get garbage
collected (this normally only happens during asynchronous gc).
The Session now has one or more InstanceState's which have been
removed from the identity map and disposed.
-
+
Will the Session not trip over this ??? Stay tuned.
-
+
"""
all_states = sess.identity_map.all_states()
state = attributes.instance_state(obj)
sess.identity_map.remove(state)
state.dispose()
-
+
def _test_session(self, **kwargs):
global sess
sess = create_session(**kwargs)
o1.data = 't1modified'
o5.data = 't5modified'
-
+
self._set_imap_in_disposal(sess, o2, o4, o5)
return sess
-
+
def test_flush(self):
self._test_session().flush()
-
+
def test_clear(self):
self._test_session().expunge_all()
-
+
def test_close(self):
self._test_session().close()
-
+
def test_expunge_all(self):
self._test_session().expunge_all()
-
+
def test_expire_all(self):
self._test_session().expire_all()
-
+
def test_rollback(self):
sess = self._test_session(autocommit=False, expire_on_commit=True)
sess.commit()
-
+
sess.rollback()
-
-
+
+
class SessionInterface(testing.TestBase):
"""Bogus args to Session methods produce actionable exceptions."""
class EagerTest(_fixtures.FixtureTest, testing.AssertsCompiledSQL):
run_inserts = 'once'
run_deletes = None
-
+
@testing.resolve_artifact_names
def test_basic(self):
mapper(User, users, properties={
order_by=Address.id)
})
sess = create_session()
-
+
q = sess.query(User).options(subqueryload(User.addresses))
-
+
def go():
eq_(
[User(id=7, addresses=[
Address(id=1, email_address='jack@bean.com')])],
q.filter(User.id==7).all()
)
-
+
self.assert_sql_count(testing.db, go, 2)
-
+
def go():
eq_(
self.static.user_address_result,
order_by=Address.id)
})
sess = create_session()
-
+
u = aliased(User)
-
+
q = sess.query(u).options(subqueryload(u.addresses))
def go():
q.order_by(u.id).all()
)
self.assert_sql_count(testing.db, go, 2)
-
+
q = sess.query(u).\
options(subqueryload_all(u.addresses, Address.dingalings))
-
+
def go():
eq_(
[
q.filter(u.id.in_([8, 9])).all()
)
self.assert_sql_count(testing.db, go, 3)
-
-
+
+
@testing.resolve_artifact_names
def test_from_get(self):
mapper(User, users, properties={
order_by=Address.id)
})
sess = create_session()
-
+
q = sess.query(User).options(subqueryload(User.addresses))
def go():
eq_(
Address(id=1, email_address='jack@bean.com')]),
q.get(7)
)
-
+
self.assert_sql_count(testing.db, go, 2)
@testing.resolve_artifact_names
)
self.assert_sql_count(testing.db, go, 2)
-
+
@testing.resolve_artifact_names
def test_disable_dynamic(self):
"""test no subquery option on a dynamic."""
})
mapper(Address, addresses)
sess = create_session()
-
+
# previously this would not raise, but would emit
# the query needlessly and put the result nowhere.
assert_raises_message(
"User.addresses' does not support object population - eager loading cannot be applied.",
sess.query(User).options(subqueryload(User.addresses)).first,
)
-
+
@testing.resolve_artifact_names
def test_many_to_many(self):
mapper(Keyword, keywords)
def test_options_pathing(self):
self._do_options_test(self._pathing_runs)
-
+
def test_mapper_pathing(self):
self._do_mapper_test(self._pathing_runs)
-
+
@testing.resolve_artifact_names
def _do_options_test(self, configs):
mapper(User, users, properties={
order_by=keywords.c.id) #m2m
})
mapper(Keyword, keywords)
-
+
callables = {
'joinedload':joinedload,
'subqueryload':subqueryload
}
-
+
for o, i, k, count in configs:
options = []
if o in callables:
order_by=keywords.c.id)
})
mapper(Keyword, keywords)
-
+
try:
self._do_query_tests([], count)
finally:
clear_mappers()
-
+
@testing.resolve_artifact_names
def _do_query_tests(self, opts, count):
sess = create_session()
order_by(User.id).all(),
self.static.user_item_keyword_result[0:1]
)
-
-
+
+
@testing.resolve_artifact_names
def test_cyclical(self):
"""A circular eager relationship breaks the cycle with a lazy loader"""
n2.append(Node(data='n21'))
n2.children[0].append(Node(data='n211'))
n2.children[0].append(Node(data='n212'))
-
+
sess.add(n1)
sess.add(n2)
sess.flush()
], d)
self.assert_sql_count(testing.db, go, 4)
-
+
mapper(Address, addresses)
-
+
class FixtureDataTest(TransactionTest):
run_inserts = 'each'
-
+
def test_attrs_on_rollback(self):
sess = self.session()
u1 = sess.query(User).get(7)
s.rollback()
assert u1 in s
assert u1 not in s.deleted
-
+
def test_gced_delete_on_rollback(self):
s = self.session()
u1 = User(name='ed')
s.add(u1)
s.commit()
-
+
s.delete(u1)
u1_state = attributes.instance_state(u1)
assert u1_state in s.identity_map.all_states()
del u1
gc_collect()
assert u1_state.obj() is None
-
+
s.rollback()
assert u1_state in s.identity_map.all_states()
u1 = s.query(User).filter_by(name='ed').one()
s.flush()
assert s.scalar(users.count()) == 0
s.commit()
-
+
def test_trans_deleted_cleared_on_rollback(self):
s = self.session()
u1 = User(name='ed')
@testing.requires.two_phase_transactions
def test_rollback_on_prepare(self):
s = self.session(twophase=True)
-
+
u = User(name='ed')
s.add(u)
s.prepare()
s.rollback()
-
+
assert u not in s
-
+
class RollbackRecoverTest(TransactionTest):
def test_pk_violation(self):
sess.commit()
testing.db.execute(users.update(users.c.name=='ed').values(name='edward'))
-
+
assert u1.name == 'ed'
sess.expire_all()
assert u1.name == 'edward'
u1.name = 'edwardo'
sess.rollback()
-
+
testing.db.execute(users.update(users.c.name=='ed').values(name='edward'))
assert u1.name == 'edwardo'
assert u1.name == 'edwardo'
sess.commit()
-
+
assert testing.db.execute(select([users.c.name])).fetchall() == [('edwardo',)]
assert u1.name == 'edwardo'
sess.delete(u1)
sess.commit()
-
+
def test_preflush_no_accounting(self):
sess = sessionmaker(_enable_transaction_accounting=False, autocommit=True)()
u1 = User(name='ed')
sess.add(u1)
sess.flush()
-
+
sess.begin()
u1.name = 'edwardo'
u2 = User(name="some other user")
sess.add(u2)
-
+
sess.rollback()
sess.begin()
assert testing.db.execute(select([users.c.name])).fetchall() == [('ed',)]
-
-
+
+
class AutoCommitTest(TransactionTest):
def test_begin_nested_requires_trans(self):
sess = create_session(autocommit=True)
u1 = User(name='ed')
sess.add(u1)
-
+
sess.begin()
u2 = User(name='some other user')
sess.add(u2)
session.rollback()
-
-
+
+
uni_type = VARCHAR(50, collation='utf8_unicode_ci')
else:
uni_type = sa.Unicode(50)
-
+
Table('uni_t1', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
session.commit()
self.assert_(t1.txt == txt)
-
+
@testing.resolve_artifact_names
def test_relationship(self):
mapper(Test, uni_t1, properties={
@testing.resolve_artifact_names
def test_binary_equality(self):
-
+
# Py3K
#data = b"this is some data"
# Py2K
data = "this is some data"
# end Py2K
-
+
mapper(Foo, t1)
-
+
s = create_session()
-
+
f1 = Foo(data=data)
s.add(f1)
s.flush()
def go():
s.flush()
self.assert_sql_count(testing.db, go, 0)
-
+
class MutableTypesTest(_base.MappedTest):
@classmethod
@testing.resolve_artifact_names
def test_modified_status(self):
f1 = Foo(data = pickleable.Bar(4,5))
-
+
session = Session()
session.add(f1)
session.commit()
f2.data.y = 19
assert f2 in session.dirty
assert 'data' not in sa.orm.attributes.instance_state(f2).unmodified
-
+
@testing.resolve_artifact_names
def test_mutations_persisted(self):
f1 = Foo(data = pickleable.Bar(4,5))
-
+
session = Session()
session.add(f1)
session.commit()
f1.data
session.close()
-
+
f2 = session.query(Foo).first()
f2.data.y = 19
session.commit()
f2.data
session.close()
-
+
f3 = session.query(Foo).first()
ne_(f3.data,f1.data)
eq_(f3.data, pickleable.Bar(4, 19))
-
+
@testing.resolve_artifact_names
def test_no_unnecessary_update(self):
f1 = Foo(data = pickleable.Bar(4,5), val = u'hi')
session.commit()
self.sql_count_(0, session.commit)
-
+
f1.val = u'someothervalue'
self.assert_sql(testing.db, session.commit, [
("UPDATE mutable_t SET val=:val "
("UPDATE mutable_t SET data=:data, val=:val "
"WHERE mutable_t.id = :mutable_t_id",
{'mutable_t_id': f1.id, 'val': u'hi', 'data':f1.data})])
-
+
@testing.resolve_artifact_names
def test_mutated_state_resurrected(self):
f1 = Foo(data = pickleable.Bar(4,5), val = u'hi')
"""test that a non-mutable attribute event subsequent to
a mutable event prevents the object from falling into
resurrected state.
-
+
"""
f1 = Foo(data = pickleable.Bar(4, 5), val=u'some val')
session = Session()
f1.val=u'some new val'
assert sa.orm.attributes.instance_state(f1)._strong_obj is not None
-
+
del f1
session.commit()
eq_(
@testing.resolve_artifact_names
def test_non_mutated_state_not_resurrected(self):
f1 = Foo(data = pickleable.Bar(4,5))
-
+
session = Session()
session.add(f1)
session.commit()
-
+
session = Session()
f1 = session.query(Foo).first()
del f1
def test_scalar_no_net_change_no_update(self):
"""Test that a no-net-change on a scalar attribute event
doesn't cause an UPDATE for a mutable state.
-
+
"""
f1 = Foo(val=u'hi')
def test_expire_attribute_set(self):
"""test one SELECT emitted when assigning to an expired
mutable attribute - this will become 0 in 0.7.
-
+
"""
-
+
f1 = Foo(data = pickleable.Bar(4, 5), val=u'some val')
session = Session()
session.add(f1)
session.commit()
-
+
assert 'data' not in f1.__dict__
def go():
f1.data = pickleable.Bar(10, 15)
self.sql_count_(1, go)
session.commit()
-
+
eq_(f1.data.x, 10)
@testing.resolve_artifact_names
def test_expire_mutate(self):
"""test mutations are detected on an expired mutable
attribute."""
-
+
f1 = Foo(data = pickleable.Bar(4, 5), val=u'some val')
session = Session()
session.add(f1)
session.commit()
-
+
assert 'data' not in f1.__dict__
def go():
f1.data.x = 10
self.sql_count_(1, go)
session.commit()
-
+
eq_(f1.data.x, 10)
-
+
@testing.resolve_artifact_names
def test_deferred_attribute_set(self):
"""test one SELECT emitted when assigning to a deferred
mutable attribute - this will become 0 in 0.7.
-
+
"""
sa.orm.clear_mappers()
mapper(Foo, mutable_t, properties={
session = Session()
session.add(f1)
session.commit()
-
+
session.close()
-
+
f1 = session.query(Foo).first()
def go():
f1.data = pickleable.Bar(10, 15)
self.sql_count_(1, go)
session.commit()
-
+
eq_(f1.data.x, 10)
@testing.resolve_artifact_names
def test_deferred_mutate(self):
"""test mutations are detected on a deferred mutable
attribute."""
-
+
sa.orm.clear_mappers()
mapper(Foo, mutable_t, properties={
'data':sa.orm.deferred(mutable_t.c.data)
session = Session()
session.add(f1)
session.commit()
-
+
session.close()
-
+
f1 = session.query(Foo).first()
def go():
f1.data.x = 10
self.sql_count_(1, go)
session.commit()
-
+
def go():
eq_(f1.data.x, 10)
self.sql_count_(1, go)
-
+
class PickledDictsTest(_base.MappedTest):
assert mytable.count().scalar() == 0
assert myothertable.count().scalar() == 0
-
+
@testing.emits_warning(r".*'passive_deletes' is normally configured on one-to-many")
@testing.resolve_artifact_names
def test_backwards_pd(self):
"""Test that passive_deletes=True disables a delete from an m2o.
-
+
This is not the usual usage and it now raises a warning, but test
that it works nonetheless.
'myclass':relationship(MyClass, cascade="all, delete", passive_deletes=True)
})
mapper(MyClass, mytable)
-
+
session = create_session()
mc = MyClass()
mco = MyOtherClass()
assert mytable.count().scalar() == 1
assert myothertable.count().scalar() == 1
-
+
session.expire(mco, ['myclass'])
session.delete(mco)
session.flush()
-
+
        # the point is that mytable wasn't deleted.
assert mytable.count().scalar() == 1
assert myothertable.count().scalar() == 0
-
+
@testing.resolve_artifact_names
def test_aaa_m2o_emits_warning(self):
mapper(MyOtherClass, myothertable, properties={
})
mapper(MyClass, mytable)
assert_raises(sa.exc.SAWarning, sa.orm.compile_mappers)
-
+
class ExtraPassiveDeletesTest(_base.MappedTest):
__requires__ = ('foreign_keys',)
Column('book_id', String(50)),
Column('title', String(50))
)
-
+
@testing.resolve_artifact_names
def test_naming(self):
class Book(_base.ComparableEntity):
pass
-
+
mapper(Book, book)
sess = create_session()
-
+
b1 = Book(book_id='abc', title='def')
sess.add(b1)
sess.flush()
-
+
b1.title = 'ghi'
sess.flush()
sess.close()
sess.query(Book).first(),
Book(book_id='abc', title='ghi')
)
-
-
-
+
+
+
class DefaultTest(_base.MappedTest):
"""Exercise mappings on columns with DefaultGenerators.
Column('id', Integer, ForeignKey('data.id'), primary_key=True),
Column('c', String(50)),
)
-
+
@classmethod
def setup_mappers(cls):
class Data(_base.BasicEntity):
pass
-
+
@testing.resolve_artifact_names
def test_refreshes(self):
mapper(Data, data, properties={
m = mapper(Data, data)
m.add_property('aplusb', column_property(data.c.a + literal_column("' '") + data.c.b))
self._test()
-
+
@testing.resolve_artifact_names
def test_with_inheritance(self):
class SubData(Data):
'aplusb':column_property(data.c.a + literal_column("' '") + data.c.b)
})
mapper(SubData, subdata, inherits=Data)
-
+
sess = create_session()
sd1 = SubData(a="hello", b="there", c="hi")
sess.add(sd1)
sess.flush()
eq_(sd1.aplusb, "hello there")
-
+
@testing.resolve_artifact_names
def _test(self):
sess = create_session()
-
+
d1 = Data(a="hello", b="there")
sess.add(d1)
sess.flush()
-
+
eq_(d1.aplusb, "hello there")
-
+
d1.b = "bye"
sess.flush()
eq_(d1.aplusb, "hello bye")
-
+
d1.b = 'foobar'
d1.aplusb = 'im setting this explicitly'
sess.flush()
eq_(d1.aplusb, "im setting this explicitly")
-
+
class OneToManyTest(_fixtures.FixtureTest):
run_inserts = None
session = create_session()
session.add_all((u1, u2))
session.flush()
-
+
u3 = User(name='user3')
u4 = User(name='user4')
u5 = User(name='user5')
-
+
session.add_all([u4, u5, u3])
session.flush()
-
+
# test insert ordering is maintained
assert names == ['user1', 'user2', 'user4', 'user5', 'user3']
session.expunge_all()
-
+
sa.orm.clear_mappers()
m = mapper(User, users, extension=TestExtension())
class DontAllowFlushOnLoadingObjectTest(_base.MappedTest):
"""Test that objects with NULL identity keys aren't permitted to complete a flush.
-
+
    User-defined callables that execute during a load may modify state
    on instances, which results in their being autoflushed before attributes
    are populated.  If the primary key identifiers are missing, an explicit
    assertion is needed to ensure the object doesn't go through the flush
    process with no net changes and get placed in the identity map with an
    incorrect identity key.
-
+
"""
@classmethod
def define_tables(cls, metadata):
Column('id', Integer, primary_key=True),
Column('data', String(30)),
)
-
+
@testing.resolve_artifact_names
def test_flush_raises(self):
class T1(_base.ComparableEntity):
# before 'id' was even populated, i.e. a callable
# within an attribute_mapped_collection
self.__dict__.pop('id', None)
-
+
# generate a change event, perhaps this occurs because
# someone wrote a broken attribute_mapped_collection that
# inappropriately fires off change events when it should not,
# now we're dirty
self.data = 'foo bar'
-
+
# blow away that change, so an UPDATE does not occur
# (since it would break)
self.__dict__.pop('data', None)
-
+
# flush ! any lazyloader here would trigger
# autoflush, for example.
sess.flush()
-
+
mapper(T1, t1)
-
+
sess = Session()
sess.add(T1(data='test', id=5))
sess.commit()
sess.close()
-
+
# make sure that invalid state doesn't get into the session
# with the wrong key. If the identity key is not NULL, at least
# the population process would continue after the erroneous flush
'flush is occuring at an inappropriate '
'time, such as during a load operation.',
sess.query(T1).first)
-
-
-
+
+
+
class RowSwitchTest(_base.MappedTest):
@classmethod
def define_tables(cls, metadata):
assert o5 in sess.deleted
assert o5.t7s[0] in sess.deleted
assert o5.t7s[1] in sess.deleted
-
+
sess.add(o6)
sess.flush()
class C(P):
pass
-
+
@testing.resolve_artifact_names
def test_row_switch_no_child_table(self):
mapper(P, parent)
mapper(C, child, inherits=P)
-
+
sess = create_session()
c1 = C(id=1, pdata='c1', cdata='c1')
sess.add(c1)
sess.flush()
-
+
# establish a row switch between c1 and c2.
# c2 has no value for the "child" table
c2 = C(id=1, pdata='c2')
CompiledSQL("UPDATE parent SET pdata=:pdata WHERE parent.id = :parent_id",
{'pdata':'c2', 'parent_id':1}
),
-
+
            # this fires as of [ticket:1362], since we synchronize
# PK/FKs on UPDATES. c2 is new so the history shows up as
# pure added, update occurs. If a future change limits the
{'pid':1, 'child_id':1}
)
)
-
-
+
+
class TransactionTest(_base.MappedTest):
__requires__ = ('deferrable_constraints',)
for d in deleted:
uow.register_object(d, isdelete=True)
return uow
-
+
def _assert_uow_size(self,
session,
expected
a1, a2 = Address(email_address='a1'), Address(email_address='a2')
u1 = User(name='u1', addresses=[a1, a2])
sess.add(u1)
-
+
self.assert_sql_execution(
testing.db,
sess.flush,
u1 = User(name='u1', addresses=[a1, a2])
sess.add(u1)
sess.flush()
-
+
sess.delete(u1)
sess.delete(a1)
sess.delete(a2)
{'id':u1.id}
),
)
-
+
def test_many_to_one_save(self):
-
+
mapper(User, users)
mapper(Address, addresses, properties={
'user':relationship(User)
a1, a2 = Address(email_address='a1', user=u1), \
Address(email_address='a2', user=u1)
sess.add_all([a1, a2])
-
+
self.assert_sql_execution(
testing.db,
sess.flush,
Address(email_address='a2', user=u1)
sess.add_all([a1, a2])
sess.flush()
-
+
sess.delete(u1)
sess.delete(a1)
sess.delete(a2)
parent = User(name='p1')
c1, c2 = Address(email_address='c1', parent=parent), \
Address(email_address='c2', parent=parent)
-
+
session = Session()
session.add_all([c1, c2])
session.add(parent)
session.flush()
-
+
pid = parent.id
c1id = c1.id
c2id = c2.id
-
+
session.expire(parent)
session.expire(c1)
session.expire(c2)
-
+
session.delete(c1)
session.delete(c2)
session.delete(parent)
-
+
# testing that relationships
# are loaded even if all ids/references are
# expired
lambda ctx: {'id': pid}
),
)
-
+
def test_many_to_many(self):
mapper(Item, items, properties={
'keywords':relationship(Keyword, secondary=item_keywords)
})
mapper(Keyword, keywords)
-
+
sess = create_session()
k1 = Keyword(name='k1')
i1 = Item(description='i1', keywords=[k1])
lambda ctx:{'item_id':i1.id, 'keyword_id':k1.id}
)
)
-
+
# test that keywords collection isn't loaded
sess.expire(i1, ['keywords'])
i1.description = 'i2'
"WHERE items.id = :items_id",
lambda ctx:{'description':'i2', 'items_id':i1.id})
)
-
+
def test_m2o_flush_size(self):
mapper(User, users)
mapper(Address, addresses, properties={
n2, n3 = Node(data='n2'), Node(data='n3')
n1 = Node(data='n1', children=[n2, n3])
-
+
sess.add(n1)
-
+
self.assert_sql_execution(
testing.db,
sess.flush,
-
+
CompiledSQL(
"INSERT INTO nodes (parent_id, data) VALUES "
"(:parent_id, :data)",
sess.add(n1)
sess.flush()
-
+
sess.delete(n1)
sess.delete(n2)
sess.delete(n3)
CompiledSQL("DELETE FROM nodes WHERE nodes.id = :id",
lambda ctx:{'id':n1.id})
)
-
+
def test_many_to_one_save(self):
mapper(Node, nodes, properties={
'parent':relationship(Node, remote_side=nodes.c.id)
sess.add_all([n2, n3])
sess.flush()
-
+
sess.delete(n1)
sess.delete(n2)
sess.delete(n3)
CompiledSQL("DELETE FROM nodes WHERE nodes.id = :id",
lambda ctx: {'id':n1.id})
)
-
+
def test_cycle_rowswitch(self):
mapper(Node, nodes, properties={
'children':relationship(Node)
n3.id = n2.id
n1.children.append(n3)
sess.flush()
-
+
def test_bidirectional_mutations_one(self):
mapper(Node, nodes, properties={
'children':relationship(Node,
sess.delete(n2)
n1.children.append(n3)
sess.flush()
-
+
sess.delete(n1)
sess.delete(n3)
sess.flush()
-
+
def test_bidirectional_multilevel_save(self):
mapper(Node, nodes, properties={
'children':relationship(Node,
self._assert_uow_size(sess, 2)
sess.flush()
-
+
n1.data='jack'
self._assert_uow_size(sess, 2)
sess.flush()
-
+
n2 = Node(data='foo')
sess.add(n2)
sess.flush()
-
+
n1.children.append(n2)
self._assert_uow_size(sess, 3)
-
+
sess.flush()
-
+
sess = create_session()
n1 = sess.query(Node).first()
n1.data='ed'
self._assert_uow_size(sess, 2)
-
+
n1.children
self._assert_uow_size(sess, 2)
parent = Node()
c1, c2 = Node(parent=parent), Node(parent=parent)
-
+
session = Session()
session.add_all([c1, c2])
session.add(parent)
session.flush()
-
+
pid = parent.id
c1id = c1.id
c2id = c2.id
-
+
session.expire(parent)
session.expire(c1)
session.expire(c2)
-
+
session.delete(c1)
session.delete(c2)
session.delete(parent)
-
+
# testing that relationships
# are loaded even if all ids/references are
# expired
lambda ctx: {'id': pid}
),
)
-
-
-
+
+
+
class SingleCyclePlusAttributeTest(_base.MappedTest,
testing.AssertsExecutionResults, AssertsUOW):
@classmethod
Column('parent_id', Integer, ForeignKey('nodes.id')),
Column('data', String(30))
)
-
+
Table('foobars', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
sess.add(n1)
# ensure "foobars" doesn't get yanked in here
self._assert_uow_size(sess, 3)
-
+
n1.foobars.append(FooBar())
# saveupdateall/deleteall for FooBar added here,
# plus processstate node.foobars
# currently the "all" procs stay in pairs
self._assert_uow_size(sess, 6)
-
+
sess.flush()
class SingleCycleM2MTest(_base.MappedTest,
Column('data', String(30)),
Column('favorite_node_id', Integer, ForeignKey('nodes.id'))
)
-
+
node_to_nodes = Table('node_to_nodes', metadata,
Column('left_node_id', Integer,
ForeignKey('nodes.id'), primary_key=True),
Column('right_node_id', Integer,
ForeignKey('nodes.id'), primary_key=True),
)
-
+
@testing.resolve_artifact_names
def test_many_to_many_one(self):
class Node(Base):
pass
-
+
mapper(Node, nodes, properties={
'children':relationship(Node, secondary=node_to_nodes,
primaryjoin=nodes.c.id==node_to_nodes.c.left_node_id,
),
'favorite':relationship(Node, remote_side=nodes.c.id)
})
-
+
sess = create_session()
n1 = Node(data='n1')
n2 = Node(data='n2')
n3 = Node(data='n3')
n4 = Node(data='n4')
n5 = Node(data='n5')
-
+
n4.favorite = n3
n1.favorite = n5
n5.favorite = n2
-
+
n1.children = [n2, n3, n4]
n2.children = [n3, n5]
n3.children = [n5, n4]
-
+
sess.add_all([n1, n2, n3, n4, n5])
-
+
# can't really assert the SQL on this easily
# since there are too many ways to insert the rows.
# so check the end result
(n3.id, n5.id), (n3.id, n4.id)
])
)
-
+
sess.delete(n1)
-
+
self.assert_sql_execution(
testing.db,
sess.flush,
"node_to_nodes.right_node_id AND nodes.id = "
"node_to_nodes.left_node_id" ,
lambda ctx:{u'param_1': n1.id},
- ),
+ ),
CompiledSQL(
"DELETE FROM node_to_nodes WHERE "
"node_to_nodes.left_node_id = :left_node_id AND "
lambda ctx:{'id': n1.id}
),
)
-
+
for n in [n2, n3, n4, n5]:
sess.delete(n)
-
+
# load these collections
# outside of the flush() below
n4.children
n5.children
-
+
self.assert_sql_execution(
testing.db,
sess.flush,
lambda ctx:[{'id': n2.id}, {'id': n3.id}]
),
)
-
+
class RowswitchAccountingTest(_base.MappedTest):
@classmethod
def define_tables(cls, metadata):
Table('child', metadata,
Column('id', Integer, ForeignKey('parent.id'), primary_key=True)
)
-
+
@testing.resolve_artifact_names
def test_accounting_for_rowswitch(self):
class Parent(object):
backref="parent")
})
mapper(Child, child)
-
+
sess = create_session(autocommit=False)
p1 = Parent(1)
assert 'populate_instance' not in carrier
carrier.append(interfaces.MapperExtension)
-
+
# Py3K
#assert 'populate_instance' not in carrier
# Py2K
assert 'populate_instance' in carrier
# end Py2K
-
+
assert carrier.interface
for m in carrier.interface:
assert getattr(interfaces.MapperExtension, m)
row = {users.c.id: 1, users.c.name: "Frank"}
key = util.identity_key(User, row=row)
eq_(key, (User, (1,)))
-
+
'782a5f04b4364a53a6fce762f48921c1',
'bef510f2420f4476a7629013ead237f5',
]
-
+
def make_uuid():
"""generate uuids even on Python 2.4 which has no 'uuid'"""
return _uuids.pop(0)
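# (make_uuid simply pops pre-generated hex strings from the _uuids list
# above, keeping the versioning fixtures deterministic without needing
# the stdlib uuid module)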
class VersioningTest(_base.MappedTest):
-
+
@classmethod
def define_tables(cls, metadata):
Table('version_table', metadata,
@testing.resolve_artifact_names
def test_bump_version(self):
"""test that version number can be bumped.
-
+
Ensures that the UPDATE or DELETE is against the
last committed version of version_id_col, not the modified
state.
-
+
"""
mapper(Foo, version_table,
version_id_col=version_table.c.version_id)
f1.version_id = 2
s1.commit()
eq_(f1.version_id, 2)
-
+
# skip an id, test that history
# is honored
f1.version_id = 4
f1.value = "something new"
s1.commit()
eq_(f1.version_id, 4)
-
+
f1.version_id = 5
s1.delete(f1)
s1.commit()
eq_(s1.query(Foo).count(), 0)
-
+
@testing.emits_warning(r'.*does not support updated rowcount')
@engines.close_open_connections
@testing.resolve_artifact_names
# reload it - this expires the old version first
s1.refresh(f1s1, lockmode='read')
-
+
# now assert version OK
s1.query(Foo).with_lockmode('read').get(f1s1.id)
f1s2 = s2.query(Foo).get(f1s1.id)
s2.refresh(f1s2, lockmode='update')
f1s2.value='f1 new value'
-
+
assert_raises(
exc.DBAPIError,
s1.refresh, f1s1, lockmode='update_nowait'
)
s1.rollback()
-
+
s2.commit()
s1.refresh(f1s1, lockmode='update_nowait')
assert f1s1.version_id == f1s2.version_id
session.add(P(id='P1', data='P version 1'))
session.commit()
session.close()
-
+
p = session.query(P).first()
session.delete(p)
session.add(P(id='P1', data="really a row-switch"))
session.commit()
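# "row switch": when a pending object shares its identity with one
# marked deleted in the same flush, the unit of work is expected to
# collapse the DELETE + INSERT into a single UPDATE of the existing
# row (table/column names are whatever the P mapper is bound to).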
-
+
@testing.resolve_artifact_names
def test_child_row_switch(self):
assert P.c.property.strategy.use_get
-
+
session = sessionmaker()()
session.add(P(id='P1', data='P version 1'))
session.commit()
@testing.resolve_artifact_names
def test_child_row_switch_two(self):
Session = sessionmaker()
-
+
# TODO: not sure this test is
# testing exactly what it's looking for
-
+
sess1 = Session()
sess1.add(P(id='P1', data='P version 1'))
sess1.commit()
sess1.close()
-
+
p1 = sess1.query(P).first()
sess2 = Session()
p2 = sess2.query(P).first()
-
+
sess1.delete(p1)
sess1.commit()
-
+
# this can be removed and it still passes
sess1.add(P(id='P1', data='P version 2'))
sess1.commit()
-
+
p2.data = 'P overwritten by concurrent tx'
assert_raises_message(
orm.exc.StaleDataError,
r"1 row\(s\); 0 were matched.",
sess2.commit
)
-
+
class InheritanceTwoVersionIdsTest(_base.MappedTest):
"""Test versioning where both parent/child table have a
versioning column.
-
+
"""
@classmethod
def define_tables(cls, metadata):
# base is populated
eq_(select([base.c.version_id]).scalar(), 1)
-
+
@testing.resolve_artifact_names
def test_sub_only(self):
mapper(Base, base)
# base is not
eq_(select([base.c.version_id]).scalar(), None)
-
+
@testing.resolve_artifact_names
def test_mismatch_version_col_warning(self):
mapper(Base, base,
version_id_col=base.c.version_id)
-
+
assert_raises_message(
exc.SAWarning,
"Inheriting version_id_col 'version_id' does not "
mapper,
Sub, sub, inherits=Base,
version_id_col=sub.c.version_id)
-
\ No newline at end of file
data = [{'name':'John Doe','sex':1,'age':35, 'type':'employee'}] * 100
for j in xrange(500):
i.execute(data)
-
+
# note we aren't fetching from employee_table,
# so we can leave it empty even though it's "incorrect"
#i = Employee_table.insert()
#data = [{'foo':'foo', 'bar':'bar', 'bat':'bat'}] * 100
#for j in xrange(500):
# i.execute(data)
-
+
print "Inserted 50,000 rows"
def sqlite_select(entity_cls):
Column('c2', String(30)),
Column('t1id', Integer, ForeignKey('t1.c1'))
)
-
+
metadata.create_all()
l = []
for y in range(1, 100):
l.append({'c2':'this is t2 #%d' % y, 't1id':x})
t2.insert().execute(*l)
-
+
class T1(_fixtures.Base):
pass
class T2(_fixtures.Base):
't2s':relationship(T2, backref='t1')
})
mapper(T2, t2)
-
+
@classmethod
def teardown_class(cls):
metadata.drop_all()
clear_mappers()
-
+
@profiling.profiled('clean', report=True)
def test_session_clean(self):
for x in range(0, ITERATIONS):
for x in range(0, ITERATIONS):
sess = create_session()
t1s = sess.query(T1).filter(T1.c1.between(15, 48)).all()
-
+
for index in [2, 7, 12, 15, 18, 20]:
t1s[index].c2 = 'this is some modified text'
for t2 in t1s[index].t2s:
del t1s
gc_collect()
-
+
sess.close()
del sess
gc_collect()
def test_literal_interpretation(self):
t = table('test', column('col1'))
-
+
assert_raises(exc.ArgumentError, case, [("x", "y")])
-
+
self.assert_compile(case([("x", "y")], value=t.c.col1), "CASE test.col1 WHEN :param_1 THEN :param_2 END")
self.assert_compile(case([(t.c.col1==7, "y")], else_="z"), "CASE WHEN (test.col1 = :col1_1) THEN :param_1 ELSE :param_2 END")
-
+
def test_text_doesnt_explode(self):
for s in [
select([case([(info_table.c.info == 'pk_4_data',
text("'yes'"))], else_=text("'no'"
))]).order_by(info_table.c.info),
-
+
select([case([(info_table.c.info == 'pk_4_data',
literal_column("'yes'"))], else_=literal_column("'no'"
))]).order_by(info_table.c.info),
-
+
]:
eq_(s.execute().fetchall(), [
(u'no', ), (u'no', ), (u'no', ), (u'yes', ),
(u'no', ), (u'no', ),
])
-
-
-
+
+
+
@testing.fails_on('firebird', 'FIXME: unknown')
@testing.fails_on('maxdb', 'FIXME: unknown')
def testcase_with_dict(self):
],
whereclause=info_table.c.pk < 4,
from_obj=[info_table])
-
+
assert simple_query.execute().fetchall() == [
('one', 1),
('two', 2),
"SELECT mytable.myid, mytable.name, mytable.description, "
"myothertable.otherid, myothertable.othername FROM mytable, "
"myothertable")
-
+
def test_invalid_col_argument(self):
assert_raises(exc.ArgumentError, select, table1)
assert_raises(exc.ArgumentError, select, table1.c.myid)
-
+
def test_from_subquery(self):
"""tests placing select statements in the column clause of another select, for the
purposes of selecting from the exported columns of that select."""
-
+
s = select([table1], table1.c.name == 'jack')
self.assert_compile(
select(
select([ClauseList(column('a'), column('b'))]).select_from('sometable'),
'SELECT a, b FROM sometable'
)
-
+
def test_use_labels(self):
self.assert_compile(
select([table1.c.myid==5], use_labels=True),
select([cast("data", Integer)], use_labels=True),
"SELECT CAST(:param_1 AS INTEGER) AS anon_1"
)
-
+
self.assert_compile(
select([func.sum(func.lala(table1.c.myid).label('foo')).label('bar')]),
"SELECT sum(lala(mytable.myid)) AS bar FROM mytable"
)
-
+
def test_paramstyles(self):
stmt = text("select :foo, :bar, :bat from sometable")
-
+
self.assert_compile(
stmt,
"select ?, ?, ? from sometable"
"select %(foo)s, %(bar)s, %(bat)s from sometable"
, dialect=default.DefaultDialect(paramstyle='pyformat')
)
-
+
def test_dupe_columns(self):
"""test that deduping is performed against clause element identity, not rendered result."""
-
+
self.assert_compile(
select([column('a'), column('a'), column('a')]),
"SELECT a, a, a"
"SELECT a, b"
, dialect=default.DefaultDialect()
)
-
+
self.assert_compile(
select([bindparam('a'), bindparam('b'), bindparam('c')]),
"SELECT :a, :b, :c"
select(["a", "a", "a"]),
"SELECT a, a, a"
)
-
+
s = select([bindparam('a'), bindparam('b'), bindparam('c')])
s = s.compile(dialect=default.DefaultDialect(paramstyle='qmark'))
eq_(s.positiontup, ['a', 'b', 'c'])
-
+
def test_nested_uselabels(self):
"""test nested anonymous label generation. this
essentially tests the ANONYMOUS_LABEL regex.
'mytable.name AS name, mytable.description '
'AS description FROM mytable) AS anon_2) '
'AS anon_1')
-
+
def test_dont_overcorrelate(self):
self.assert_compile(select([table1], from_obj=[table1,
table1.select()]),
"mytable.myid AS myid, mytable.name AS "
"name, mytable.description AS description "
"FROM mytable)")
-
+
def test_full_correlate(self):
# intentional
t = table('t', column('a'), column('b'))
s2 = select([t.c.a, s])
self.assert_compile(s2, """SELECT t.a, (SELECT t.a WHERE t.a = :a_1) AS anon_1 FROM t""")
-
+
# unintentional
t2 = table('t2', column('c'), column('d'))
s = select([t.c.a]).where(t.c.a==t2.c.d).as_scalar()
s = s.correlate(t, t2)
s2 =select([t, t2, s])
self.assert_compile(s, "SELECT t.a WHERE t.a = t2.d")
-
+
def test_exists(self):
s = select([table1.c.myid]).where(table1.c.myid==5)
-
+
self.assert_compile(exists(s),
"EXISTS (SELECT mytable.myid FROM mytable WHERE mytable.myid = :myid_1)"
)
-
+
self.assert_compile(exists(s.as_scalar()),
"EXISTS (SELECT mytable.myid FROM mytable WHERE mytable.myid = :myid_1)"
)
-
+
self.assert_compile(exists([table1.c.myid], table1.c.myid
== 5).select(),
'SELECT EXISTS (SELECT mytable.myid FROM '
'WHERE EXISTS (SELECT * FROM myothertable '
'AS myothertable_1 WHERE '
'myothertable_1.otherid = mytable.myid)')
-
+
self.assert_compile(
select([
or_(
"OR (EXISTS (SELECT * FROM myothertable WHERE "
"myothertable.otherid = :otherid_2)) AS anon_1"
)
-
+
def test_where_subquery(self):
s = select([addresses.c.street], addresses.c.user_id
self.assert_compile(
label('bar', column('foo', type_=String))+ 'foo',
'foo || :param_1')
-
+
def test_conjunctions(self):
a, b, c = 'a', 'b', 'c'
select([x.label('foo')]),
'SELECT a AND b AND c AS foo'
)
-
+
self.assert_compile(
and_(table1.c.myid == 12, table1.c.name=='asdf',
table2.c.othername == 'foo', "sysdate() = today()"),
'today()',
checkparams = {'othername_1': 'asdf', 'othername_2':'foo', 'otherid_1': 9, 'myid_1': 12}
)
-
+
def test_distinct(self):
self.assert_compile(
select([func.count(distinct(table1.c.myid))]),
"SELECT count(DISTINCT mytable.myid) AS count_1 FROM mytable"
)
-
+
def test_operators(self):
for (py_op, sql_op) in ((operator.add, '+'), (operator.mul, '*'),
(operator.sub, '-'),
self.assert_(compiled == fwd_sql or compiled == rev_sql,
"\n'" + compiled + "'\n does not match\n'" +
fwd_sql + "'\n or\n'" + rev_sql + "'")
-
+
for (py_op, op) in (
(operator.neg, '-'),
(operator.inv, 'NOT '),
(table1.c.myid, "mytable.myid"),
(literal("foo"), ":param_1"),
):
-
+
compiled = str(py_op(expr))
sql = "%s%s" % (op, sql)
eq_(compiled, sql)
-
+
self.assert_compile(
table1.select((table1.c.myid != 12) & ~(table1.c.name=='john')),
"SELECT mytable.myid, mytable.name, mytable.description FROM "
postgresql.PGDialect()),
]:
self.assert_compile(expr, check, dialect=dialect)
-
+
def test_match(self):
for expr, check, dialect in [
(table1.c.myid.match('somstr'),
postgresql.dialect()),
(table1.c.myid.match('somstr'),
"CONTAINS (mytable.myid, :myid_1)",
- oracle.dialect()),
+ oracle.dialect()),
]:
self.assert_compile(expr, check, dialect=dialect)
"SELECT column1 AS foobar, column2 AS hoho, myid FROM "
"(SELECT column1 AS foobar, column2 AS hoho, mytable.myid AS myid FROM mytable)"
)
-
+
self.assert_compile(
select(['col1','col2'], from_obj='tablename').alias('myalias'),
"SELECT col1, col2 FROM tablename"
checkparams={'bar':4, 'whee': 7},
dialect=dialect
)
-
+
# test escaping out text() params with a backslash
self.assert_compile(
text("select * from foo where clock='05:06:07' and mork='\:mindy'"),
"SELECT CURRENT_DATE + s.a AS dates FROM generate_series(:x, :y, :z) as s(a)",
checkparams={'y': None, 'x': None, 'z': None}
)
-
+
self.assert_compile(
s.params(x=5, y=6, z=7),
"SELECT CURRENT_DATE + s.a AS dates FROM generate_series(:x, :y, :z) as s(a)",
checkparams={'y': 6, 'x': 5, 'z': 7}
)
-
+
@testing.emits_warning('.*empty sequence.*')
def test_render_binds_as_literal(self):
"""test a compiler that renders binds inline into
SQL in the columns clause."""
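# ansi_bind_rules = True on the Compiler subclass below is taken to
# mean "bind parameters aren't allowed in this position, so render the
# bound value as an inline literal instead" - hence the assertions that
# select([literal("someliteral")]) compiles to SELECT 'someliteral'
# rather than SELECT :param_1, and that a value-less bindparam raises
# CompileError.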
-
+
dialect = default.DefaultDialect()
class Compiler(dialect.statement_compiler):
ansi_bind_rules = True
dialect.statement_compiler = Compiler
-
+
self.assert_compile(
select([literal("someliteral")]),
"SELECT 'someliteral'",
"SELECT mod(mytable.myid, 5) AS mod_1 FROM mytable",
dialect=dialect
)
-
+
self.assert_compile(
select([literal("foo").in_([])]),
"SELECT 'foo' != 'foo' AS anon_1",
dialect=dialect
)
-
+
assert_raises(
exc.CompileError,
bindparam("foo").in_([]).compile, dialect=dialect
)
-
-
+
+
def test_literal(self):
-
+
self.assert_compile(select([literal('foo')]), "SELECT :param_1")
-
+
self.assert_compile(select([literal("foo") + literal("bar")], from_obj=[table1]),
"SELECT :param_1 || :param_2 AS anon_1 FROM mytable")
expr, "SELECT mytable.name COLLATE latin1_german2_ci AS anon_1 FROM mytable")
assert table1.c.name.collate('latin1_german2_ci').type is table1.c.name.type
-
+
expr = select([table1.c.name.collate('latin1_german2_ci').label('k1')]).order_by('k1')
self.assert_compile(expr,"SELECT mytable.name COLLATE latin1_german2_ci AS k1 FROM mytable ORDER BY k1")
'''"table%name"."spaces % more spaces" AS "table%name_spaces % '''\
'''more spaces" FROM "table%name"'''
)
-
-
+
+
def test_joins(self):
self.assert_compile(
join(table2, table1, table1.c.myid == table2.c.otherid).select(),
"select #1 has 2 columns, select #2 has 3",
union, table3.select(), table1.select()
)
-
+
x = union(
select([table1], table1.c.myid == 5),
select([table1], table1.c.myid == 12),
"FROM mytable UNION SELECT mytable.myid, mytable.name, "
"mytable.description FROM mytable) UNION SELECT mytable.myid,"
" mytable.name, mytable.description FROM mytable")
-
+
u1 = union(
select([table1.c.myid, table1.c.name]),
select([table2]),
"FROM thirdtable")
assert u1.corresponding_column(table2.c.otherid) is u1.c.myid
-
+
# TODO - why is there an extra space before the LIMIT ?
self.assert_compile(
union(
"SELECT thirdtable.userid FROM thirdtable)"
)
-
+
s = select([column('foo'), column('bar')])
# ORDER BYs, even though not supported by all DBs, are rendered if requested
union(s.order_by("foo").self_group(), s.order_by("bar").limit(10).self_group()),
"(SELECT foo, bar ORDER BY foo) UNION (SELECT foo, bar ORDER BY bar LIMIT 10)"
)
-
+
def test_compound_grouping(self):
s = select([column('foo'), column('bar')]).select_from('bat')
"((SELECT foo, bar FROM bat UNION SELECT foo, bar FROM bat) "
"UNION SELECT foo, bar FROM bat) UNION SELECT foo, bar FROM bat"
)
-
+
self.assert_compile(
union(s, s, s, s),
"SELECT foo, bar FROM bat UNION SELECT foo, bar "
"SELECT foo, bar FROM bat UNION (SELECT foo, bar FROM bat "
"UNION (SELECT foo, bar FROM bat UNION SELECT foo, bar FROM bat))"
)
-
+
self.assert_compile(
select([s.alias()]),
'SELECT anon_1.foo, anon_1.bar FROM (SELECT foo, bar FROM bat) AS anon_1'
"UNION SELECT foo, bar FROM bat) "
"UNION (SELECT foo, bar FROM bat "
"UNION SELECT foo, bar FROM bat)")
-
-
+
+
self.assert_compile(
union(
intersect(s, s),
def test_binds_no_hash_collision(self):
"""test that construct_params doesn't corrupt dict due to hash collisions"""
-
+
total_params = 100000
-
+
in_clause = [':in%d' % i for i in range(total_params)]
params = dict(('in%d' % i, i) for i in range(total_params))
sql = 'text clause %s' % ', '.join(in_clause)
pp = c.construct_params(params)
eq_(len(set(pp)), total_params, '%s %s' % (len(set(pp)), len(pp)))
eq_(len(set(pp.values())), total_params)
-
+
def test_bind_as_col(self):
t = table('foo', column('id'))
s = select([t, literal('lala').label('hoho')])
self.assert_compile(s, "SELECT foo.id, :param_1 AS hoho FROM foo")
-
+
assert [str(c) for c in s.c] == ["id", "hoho"]
-
+
@testing.emits_warning('.*empty sequence.*')
def test_in(self):
self.assert_compile(table1.c.myid.in_(['a']),
),
"(mytable.myid, mytable.name) IN ((myothertable.otherid, myothertable.othername))"
)
-
+
self.assert_compile(
tuple_(table1.c.myid, table1.c.name).in_(
select([table2.c.otherid, table2.c.othername])
"(mytable.myid, mytable.name) IN (SELECT "
"myothertable.otherid, myothertable.othername FROM myothertable)"
)
-
-
+
+
def test_cast(self):
tbl = table('casttest',
column('id', Integer),
self.assert_compile(cast(literal_column('NULL'), Integer),
'CAST(NULL AS INTEGER)',
dialect=sqlite.dialect())
-
+
def test_date_between(self):
import datetime
table = Table('dt', metadata,
"SELECT op.field FROM op WHERE (op.field = op.field) BETWEEN :param_1 AND :param_2")
self.assert_compile(table.select(between((table.c.field == table.c.field), False, True)),
"SELECT op.field FROM op WHERE (op.field = op.field) BETWEEN :param_1 AND :param_2")
-
+
def test_associativity(self):
f = column('f')
self.assert_compile( f - f, "f - f" )
self.assert_compile( f - f - f, "(f - f) - f" )
-
+
self.assert_compile( (f - f) - f, "(f - f) - f" )
self.assert_compile( (f - f).label('foo') - f, "(f - f) - f" )
-
+
self.assert_compile( f - (f - f), "f - (f - f)" )
self.assert_compile( f - (f - f).label('foo'), "f - (f - f)" )
self.assert_compile( f / f - f, "f / f - f" )
self.assert_compile( (f / f) - f, "f / f - f" )
self.assert_compile( (f / f).label('foo') - f, "f / f - f" )
-
+
# because / has higher precedence than -
self.assert_compile( f - (f / f), "f - f / f" )
self.assert_compile( f - (f / f).label('foo'), "f - f / f" )
self.assert_compile( f - f / f, "f - f / f" )
self.assert_compile( (f - f) / f, "(f - f) / f" )
-
+
self.assert_compile( ((f - f) / f) - f, "(f - f) / f - f")
self.assert_compile( (f - f) / (f - f), "(f - f) / (f - f)")
-
+
# higher precedence
self.assert_compile( (f / f) - (f / f), "f / f - f / f")
self.assert_compile( (f / f) - (f - f), "f / f - (f - f)")
self.assert_compile( (f / f) / (f - f), "(f / f) / (f - f)")
self.assert_compile( f / (f / (f - f)), "f / (f / (f - f))")
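# summary of the grouping exercised above: subtraction is treated as
# non-associative, so a right-hand operand that is itself a subtraction
# keeps its parentheses ("f - (f - f)"); "/" binds tighter than "-", so
# "f - f / f" needs no parentheses; and labeled sub-expressions group
# the same way as their unlabeled form.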
-
-
+
+
def test_delayed_col_naming(self):
my_str = Column(String)
-
+
sel1 = select([my_str])
-
+
assert_raises_message(
exc.InvalidRequestError,
"Cannot initialize a sub-selectable with this Column",
lambda: sel1.c
)
-
+
# calling label or as_scalar doesn't compile
- # anything.
+ # anything.
sel2 = select([func.substr(my_str, 2, 3)]).label('my_substr')
-
+
assert_raises_message(
exc.CompileError,
"Cannot compile Column object until it's 'name' is assigned.",
str, sel2
)
-
+
sel3 = select([my_str]).as_scalar()
assert_raises_message(
exc.CompileError,
"Cannot compile Column object until it's 'name' is assigned.",
str, sel3
)
-
+
my_str.name = 'foo'
-
+
self.assert_compile(
sel1,
"SELECT foo",
sel2,
'(SELECT substr(foo, :substr_2, :substr_3) AS substr_1)',
)
-
+
self.assert_compile(
sel3,
"(SELECT foo)"
)
-
+
def test_naming(self):
f1 = func.hoho(table1.c.name)
s1 = select([table1.c.myid, table1.c.myid.label('foobar'),
f1,
func.lala(table1.c.name).label('gg')])
-
+
eq_(
s1.c.keys(),
['myid', 'foobar', str(f1), 'gg']
meta = MetaData()
t1 = Table('mytable', meta, Column('col1', Integer))
-
+
exprs = (
table1.c.myid==12,
func.hoho(table1.c.myid),
t = col.table
else:
t = table1
-
+
s1 = select([col], from_obj=t)
assert s1.c.keys() == [key], s1.c.keys()
-
+
if label:
self.assert_compile(s1, "SELECT %s AS %s FROM mytable" % (expr, label))
else:
self.assert_compile(s1, "SELECT %s FROM mytable" % (expr,))
-
+
s1 = select([s1])
if label:
self.assert_compile(s1,
self.assert_compile(s1,
"SELECT %s FROM (SELECT %s FROM mytable)" %
(expr,expr))
-
+
def test_hints(self):
s = select([table1.c.myid]).with_hint(table1, "test hint %(name)s")
a1 = table1.alias()
s3 = select([a1.c.myid]).with_hint(a1, "index(%(name)s hint)")
-
+
subs4 = select([
table1, table2
]).select_from(table1.join(table2, table1.c.myid==table2.c.otherid)).\
with_hint(table1, 'hint1')
-
+
s4 = select([table3]).select_from(
table3.join(
subs4,
)
).\
with_hint(table3, 'hint3')
-
+
subs5 = select([
table1, table2
]).select_from(table1.join(table2, table1.c.myid==table2.c.otherid))
).\
with_hint(table3, 'hint3').\
with_hint(table1, 'hint1')
-
+
t1 = table('QuotedName', column('col1'))
s6 = select([t1.c.col1]).where(t1.c.col1>10).with_hint(t1, '%(name)s idx1')
a2 = t1.alias('SomeName')
s7 = select([a2.c.col1]).where(a2.c.col1>10).with_hint(a2, '%(name)s idx1')
-
+
mysql_d, oracle_d, sybase_d = \
mysql.dialect(), \
oracle.dialect(), \
expected,
dialect=dialect
)
-
+
class CRUDTest(TestBase, AssertsCompiledSQL):
def test_insert(self):
# generic insert, will create bind params for all columns
where(table1.c.name=='somename'),
"DELETE FROM mytable WHERE mytable.myid = :myid_1 "
"AND mytable.name = :name_1")
-
+
def test_correlated_delete(self):
# test a non-correlated WHERE clause
s = select([table2.c.othername], table2.c.otherid == 7)
"DELETE FROM mytable WHERE mytable.name = (SELECT "
"myothertable.othername FROM myothertable WHERE "
"myothertable.otherid = mytable.myid)")
-
+
def test_binds_that_match_columns(self):
"""test bind params named after column names
replace the normal SET/VALUES generation."""
-
+
t = table('foo', column('x'), column('y'))
u = t.update().where(t.c.x==bindparam('x'))
-
+
assert_raises(exc.CompileError, u.compile)
-
+
self.assert_compile(u, "UPDATE foo SET WHERE foo.x = :x", params={})
assert_raises(exc.CompileError, u.values(x=7).compile)
-
+
self.assert_compile(u.values(y=7), "UPDATE foo SET y=:y WHERE foo.x = :x")
-
+
assert_raises(exc.CompileError, u.values(x=7).compile, column_keys=['x', 'y'])
assert_raises(exc.CompileError, u.compile, column_keys=['x', 'y'])
-
+
self.assert_compile(u.values(x=3 + bindparam('x')),
"UPDATE foo SET x=(:param_1 + :x) WHERE foo.x = :x")
i = t.insert().values(x=3 + bindparam('y'), y=5)
assert_raises(exc.CompileError, i.compile)
-
+
i = t.insert().values(x=3 + bindparam('x2'))
self.assert_compile(i, "INSERT INTO foo (x) VALUES ((:param_1 + :x2))")
self.assert_compile(i, "INSERT INTO foo (x) VALUES ((:param_1 + :x2))", params={})
params={'x':1, 'y':2})
self.assert_compile(i, "INSERT INTO foo (x, y) VALUES ((:param_1 + :x2), :y)",
params={'x2':1, 'y':2})
-
+
def test_labels_no_collision(self):
-
+
t = table('foo', column('id'), column('foo_id'))
-
+
self.assert_compile(
t.update().where(t.c.id==5),
"UPDATE foo SET id=:id, foo_id=:foo_id WHERE foo.id = :id_1"
t.update().where(t.c.id==bindparam(key=t.c.id._label)),
"UPDATE foo SET id=:id, foo_id=:foo_id WHERE foo.id = :foo_id_1"
)
-
+
class InlineDefaultTest(TestBase, AssertsCompiledSQL):
def test_insert(self):
m = MetaData()
self.assert_compile(table4.select(),
"SELECT remote_owner.remotetable.rem_id, remote_owner.remotetable.datatype_id,"
" remote_owner.remotetable.value FROM remote_owner.remotetable")
-
+
self.assert_compile(table4.select(and_(table4.c.datatype_id==7, table4.c.value=='hi')),
"SELECT remote_owner.remotetable.rem_id, remote_owner.remotetable.datatype_id,"
" remote_owner.remotetable.value FROM remote_owner.remotetable WHERE "
' "dbo.remote_owner".remotetable.value AS dbo_remote_owner_remotetable_value FROM'
' "dbo.remote_owner".remotetable'
)
-
+
def test_alias(self):
a = alias(table4, 'remtable')
self.assert_compile(a.select(a.c.datatype_id==7),
def test_double_fk_usage_raises(self):
f = ForeignKey('b.id')
-
+
Column('x', Integer, f)
assert_raises(exc.InvalidRequestError, Column, "y", Integer, f)
-
+
def test_circular_constraint(self):
a = Table("a", metadata,
Column('id', Integer, primary_key=True),
('sometable', 'this_name_is_too_long', 'ix_sometable_t_09aa'),
('sometable', 'this_name_alsois_long', 'ix_sometable_t_3cf1'),
]:
-
+
t1 = Table(tname, MetaData(),
Column(cname, Integer, index=True),
)
ix1 = list(t1.indexes)[0]
-
+
self.assert_compile(
schema.CreateIndex(ix1),
"CREATE INDEX %s "
"ON %s (%s)" % (exp, tname, cname),
dialect=dialect
)
-
+
dialect.max_identifier_length = 22
dialect.max_index_name_length = None
-
+
t1 = Table('t', MetaData(), Column('c', Integer))
assert_raises(
exc.IdentifierError,
dialect=dialect
)
-
+
class ConstraintCompilationTest(TestBase, AssertsCompiledSQL):
def _test_deferrable(self, constraint_factory):
Column('a', Integer),
Column('b', Integer),
constraint_factory(deferrable=True))
-
+
sql = str(schema.CreateTable(t).compile(bind=testing.db))
assert 'DEFERRABLE' in sql, sql
assert 'NOT DEFERRABLE' not in sql, sql
-
+
t = Table('tbl', MetaData(),
Column('a', Integer),
Column('b', Integer),
CheckConstraint('a < b',
deferrable=True,
initially='DEFERRED')))
-
+
self.assert_compile(
schema.CreateTable(t),
"CREATE TABLE tbl (a INTEGER, b INTEGER CHECK (a < b) DEFERRABLE INITIALLY DEFERRED)"
)
-
+
def test_use_alter(self):
m = MetaData()
t = Table('t', m,
Column('a', Integer),
)
-
+
t2 = Table('t2', m,
Column('a', Integer, ForeignKey('t.a', use_alter=True, name='fk_ta')),
Column('b', Integer, ForeignKey('t.a', name='fk_tb')), # to ensure create ordering ...
'DROP TABLE t2',
'DROP TABLE t'
])
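# use_alter=True defers the 'fk_ta' constraint: it is expected to be
# emitted as a separate ALTER TABLE ... ADD CONSTRAINT after both
# tables exist, with a corresponding ALTER TABLE ... DROP CONSTRAINT
# ahead of the DROP TABLE statements asserted above (exact DDL varies
# by dialect).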
-
-
+
+
def test_add_drop_constraint(self):
m = MetaData()
-
+
t = Table('tbl', m,
Column('a', Integer),
Column('b', Integer)
)
-
+
t2 = Table('t2', m,
Column('a', Integer),
Column('b', Integer)
)
-
+
constraint = CheckConstraint('a < b',name="my_test_constraint",
deferrable=True,initially='DEFERRED', table=t)
-
+
# before we create an AddConstraint,
# the CONSTRAINT comes out inline
self.assert_compile(
schema.AddConstraint(constraint),
"ALTER TABLE t2 ADD CONSTRAINT uq_cst UNIQUE (a, b)"
)
-
+
constraint = UniqueConstraint(t2.c.a, t2.c.b, name="uq_cs2")
self.assert_compile(
schema.AddConstraint(constraint),
"ALTER TABLE t2 ADD CONSTRAINT uq_cs2 UNIQUE (a, b)"
)
-
+
assert t.c.a.primary_key is False
constraint = PrimaryKeyConstraint(t.c.a)
assert t.c.a.primary_key is True
schema.AddConstraint(constraint),
"ALTER TABLE tbl ADD PRIMARY KEY (a)"
)
-
-
+
+
assert_raises_message(sa.exc.ArgumentError,
ex_msg,
sa.ColumnDefault, fn)
-
+
def test_arg_signature(self):
def fn1(): pass
def fn2(): pass
assert r.lastrow_has_defaults()
eq_(set(r.context.postfetch_cols),
set([t.c.col3, t.c.col5, t.c.col4, t.c.col6]))
-
+
eq_(t.select(t.c.col1==54).execute().fetchall(),
[(54, 'imthedefault', f, ts, ts, ctexec, True, False,
12, today, None)])
12, today, 'py'),
(53, 'imthedefault', f, ts, ts, ctexec, True, False,
12, today, 'py')])
-
+
def test_missing_many_param(self):
assert_raises_message(exc.InvalidRequestError,
"A value is required for bind parameter 'col7', in parameter group 1",
{'col4':7, 'col8':19},
{'col4':7, 'col7':12, 'col8':19},
)
-
+
def test_insert_values(self):
t.insert(values={'col3':50}).execute()
l = t.select().execute()
l = l.first()
eq_(55, l['col3'])
-
+
class PKDefaultTest(_base.TablesTest):
__requires__ = ('subqueries',)
Column('id', Integer, primary_key=True,
default=sa.select([func.max(t2.c.nextid)]).as_scalar()),
Column('data', String(30)))
-
+
@testing.requires.returning
def test_with_implicit_returning(self):
self._test(True)
-
+
def test_regular(self):
self._test(False)
-
+
@testing.resolve_artifact_names
def _test(self, returning):
if not returning and not testing.db.dialect.implicit_returning:
ids.add(last)
eq_(ids, set([1,2,3,4]))
-
+
eq_(list(bind.execute(aitable.select().order_by(aitable.c.id))),
[(1, 1, None), (2, None, 'row 2'), (3, 3, 'row 3'), (4, 4, None)])
t1 = Table('t1', metadata,
Column('is_true', Boolean, server_default=('1')))
metadata.create_all()
-
+
try:
result = t1.insert().execute()
eq_(1, select([func.count(text('*'))], from_obj=t1).scalar())
"DROP SEQUENCE foo_seq",
use_default_dialect=True,
)
-
+
@testing.fails_on('firebird', 'no FB support for start/increment')
@testing.requires.sequences
start = seq.start or 1
inc = seq.increment or 1
assert values == list(xrange(start, start + inc * 3, inc))
-
+
finally:
seq.drop(testing.db)
-
+
@testing.requires.sequences
def test_seq_nonpk(self):
"""test sequences fire off as defaults on non-pk columns"""
self.assert_compile(func.nosuchfunction(), "nosuchfunction", dialect=dialect)
else:
self.assert_compile(func.nosuchfunction(), "nosuchfunction()", dialect=dialect)
-
- # test generic function compile
+
+ # test generic function compile
class fake_func(GenericFunction):
__return_type__ = sqltypes.Integer
"fake_func(%s)" %
bindtemplate % {'name':'param_1', 'position':1},
dialect=dialect)
-
+
def test_use_labels(self):
self.assert_compile(select([func.foo()], use_labels=True),
"SELECT foo() AS foo_1"
)
def test_underscores(self):
self.assert_compile(func.if_(), "if()")
-
+
def test_generic_now(self):
assert isinstance(func.now().type, sqltypes.DateTime)
('random', oracle.dialect())
]:
self.assert_compile(func.random(), ret, dialect=dialect)
-
+
def test_namespacing_conflicts(self):
self.assert_compile(func.text('foo'), 'text(:text_1)')
-
+
def test_generic_count(self):
assert isinstance(func.count().type, sqltypes.Integer)
assert True
def test_return_type_detection(self):
-
+
for fn in [func.coalesce, func.max, func.min, func.sum]:
for args, type_ in [
((datetime.date(2007, 10, 5),
datetime.datetime(2005, 10, 15, 14, 45, 33)), sqltypes.DateTime)
]:
assert isinstance(fn(*args).type, type_), "%s / %s" % (fn(), type_)
-
+
assert isinstance(func.concat("foo", "bar").type, sqltypes.String)
@engines.close_first
def tearDown(self):
pass
-
+
def test_standalone_execute(self):
x = testing.db.func.current_date().execute().scalar()
y = testing.db.func.current_date().select().execute().scalar()
def test_conn_execute(self):
from sqlalchemy.sql.expression import FunctionElement
from sqlalchemy.ext.compiler import compiles
-
+
class myfunc(FunctionElement):
type = Date()
-
+
@compiles(myfunc)
def compile(elem, compiler, **kw):
return compiler.process(func.current_date())
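# with the @compiles hook above in place, something like
# select([myfunc()]) should render using whatever func.current_date()
# compiles to on the active dialect, so executing it is expected to
# return the same value as the plain current_date() calls in the
# surrounding tests.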
def test_exec_options(self):
f = func.foo()
eq_(f._execution_options, {})
-
+
f = f.execution_options(foo='bar')
eq_(f._execution_options, {'foo':'bar'})
s = f.select()
eq_(s._execution_options, {'foo':'bar'})
-
+
ret = testing.db.execute(func.now().execution_options(foo='bar'))
eq_(ret.context.execution_options, {'foo':'bar'})
ret.close()
-
-
+
+
@engines.close_first
def test_update(self):
"""
return other is self
__hash__ = ClauseElement.__hash__
-
+
def __eq__(self, other):
return other.expr == self.expr
s2 = vis.traverse(struct)
assert struct == s2
assert not struct.is_other(s2)
-
+
def test_no_clone(self):
struct = B(A("expr1"), A("expr2"), B(A("expr1b"), A("expr2b")), A("expr3"))
class CustomObj(Column):
pass
-
+
assert CustomObj.__visit_name__ == Column.__visit_name__ == 'column'
-
+
foo, bar = CustomObj('foo', String), CustomObj('bar', String)
bin = foo == bar
s = set(ClauseVisitor().iterate(bin))
f = sql_util.ClauseAdapter(a).traverse(f)
self.assert_compile(select([f]), "SELECT t1_1.col1 * :col1_1 AS anon_1 FROM t1 AS t1_1")
-
+
def test_join(self):
clause = t1.join(t2, t1.c.col2==t2.c.col2)
c1 = str(clause)
clause2 = Vis().traverse(clause)
assert c1 == str(clause)
assert str(clause2) == str(t1.join(t2, t1.c.col2==t2.c.col3))
-
+
def test_aliased_column_adapt(self):
clause = t1.select()
-
+
aliased = t1.select().alias()
aliased2 = t1.alias()
adapter = sql_util.ColumnAdapter(aliased)
-
+
f = select([
adapter.columns[c]
for c in aliased2.c
]).select_from(aliased)
-
+
s = select([aliased2]).select_from(aliased)
eq_(str(s), str(f))
str(select([func.count(aliased2.c.col1)]).select_from(aliased)),
str(f)
)
-
-
+
+
def test_text(self):
clause = text("select * from table where foo=:bar", bindparams=[bindparam('bar')])
c1 = str(clause)
print str(s5)
assert str(s5) == s5_assert
assert str(s4) == s4_assert
-
+
def test_union(self):
u = union(t1.select(), t2.select())
u2 = CloningVisitor().traverse(u)
u2 = CloningVisitor().traverse(u)
assert str(u) == str(u2)
assert [str(c) for c in u2.c] == cols
-
+
s1 = select([t1], t1.c.col1 == bindparam('id_param'))
s2 = select([t2])
u = union(s1, s2)
-
+
u2 = u.params(id_param=7)
u3 = u.params(id_param=10)
assert str(u) == str(u2) == str(u3)
assert u2.compile().params == {'id_param':7}
assert u3.compile().params == {'id_param':10}
-
+
def test_in(self):
expr = t1.c.col1.in_(['foo', 'bar'])
expr2 = CloningVisitor().traverse(expr)
assert str(expr) == str(expr2)
-
+
def test_adapt_union(self):
u = union(t1.select().where(t1.c.col1==4), t1.select().where(t1.c.col1==5)).alias()
-
+
assert sql_util.ClauseAdapter(u).traverse(t1) is u
-
+
def test_binds(self):
"""test that unique bindparams change their name upon clone() to prevent conflicts"""
"table1.col3 AS col3 FROM table1 WHERE table1.col1 = :col1_1) AS anon_1, "\
"(SELECT table1.col1 AS col1, table1.col2 AS col2, table1.col3 AS col3 FROM table1 WHERE table1.col1 = :col1_2) AS anon_2 "\
"WHERE anon_1.col2 = anon_2.col2")
-
+
def test_extract(self):
s = select([extract('foo', t1.c.col1).label('col1')])
self.assert_compile(s, "SELECT EXTRACT(foo FROM table1.col1) AS col1 FROM table1")
-
+
s2 = CloningVisitor().traverse(s).alias()
s3 = select([s2.c.col1])
self.assert_compile(s, "SELECT EXTRACT(foo FROM table1.col1) AS col1 FROM table1")
self.assert_compile(s3, "SELECT anon_1.col1 FROM (SELECT EXTRACT(foo FROM table1.col1) AS col1 FROM table1) AS anon_1")
-
-
+
+
@testing.emits_warning('.*replaced by another column with the same key')
def test_alias(self):
subq = t2.select().alias('subq')
s = select([t1.c.col1, subq.c.col1], from_obj=[t1, subq, t1.join(subq, t1.c.col1==subq.c.col2)])
s5 = CloningVisitor().traverse(s)
assert orig == str(s) == str(s5)
-
+
def test_correlated_select(self):
s = select(['*'], t1.c.col1==t2.c.col1, from_obj=[t1, t2]).correlate(t2)
class Vis(CloningVisitor):
select.append_whereclause(t1.c.col2==7)
self.assert_compile(Vis().traverse(s), "SELECT * FROM table1 WHERE table1.col1 = table2.col1 AND table1.col2 = :col2_1")
-
+
def test_this_thing(self):
s = select([t1]).where(t1.c.col1=='foo').alias()
s2 = select([s.c.col1])
-
+
self.assert_compile(s2, "SELECT anon_1.col1 FROM (SELECT table1.col1 AS col1, table1.col2 AS col2, table1.col3 AS col3 FROM table1 WHERE table1.col1 = :col1_1) AS anon_1")
t1a = t1.alias()
s2 = sql_util.ClauseAdapter(t1a).traverse(s2)
self.assert_compile(s2, "SELECT anon_1.col1 FROM (SELECT table1_1.col1 AS col1, table1_1.col2 AS col2, table1_1.col3 AS col3 FROM table1 AS table1_1 WHERE table1_1.col1 = :col1_1) AS anon_1")
-
+
def test_select_fromtwice(self):
t1a = t1.alias()
-
+
s = select([1], t1.c.col1==t1a.c.col1, from_obj=t1a).correlate(t1)
self.assert_compile(s, "SELECT 1 FROM table1 AS table1_1 WHERE table1.col1 = table1_1.col1")
-
+
s = CloningVisitor().traverse(s)
self.assert_compile(s, "SELECT 1 FROM table1 AS table1_1 WHERE table1.col1 = table1_1.col1")
-
+
s = select([t1]).where(t1.c.col1=='foo').alias()
-
+
s2 = select([1], t1.c.col1==s.c.col1, from_obj=s).correlate(t1)
self.assert_compile(s2, "SELECT 1 FROM (SELECT table1.col1 AS col1, table1.col2 AS col2, table1.col3 AS col3 FROM table1 WHERE table1.col1 = :col1_1) AS anon_1 WHERE table1.col1 = anon_1.col1")
s2 = ReplacingCloningVisitor().traverse(s2)
self.assert_compile(s2, "SELECT 1 FROM (SELECT table1.col1 AS col1, table1.col2 AS col2, table1.col3 AS col3 FROM table1 WHERE table1.col1 = :col1_1) AS anon_1 WHERE table1.col1 = anon_1.col1")
-
+
class ClauseAdapterTest(TestBase, AssertsCompiledSQL):
@classmethod
def setup_class(cls):
self.assert_compile(select(['*'], t2alias.c.col1==s), "SELECT * FROM table2 AS t2alias WHERE t2alias.col1 = (SELECT * FROM table1 AS t1alias)")
s = CloningVisitor().traverse(s)
self.assert_compile(select(['*'], t2alias.c.col1==s), "SELECT * FROM table2 AS t2alias WHERE t2alias.col1 = (SELECT * FROM table1 AS t1alias)")
-
+
s = select(['*']).where(t1.c.col1==t2.c.col1).as_scalar()
self.assert_compile(select([t1.c.col1, s]), "SELECT table1.col1, (SELECT * FROM table2 WHERE table1.col1 = table2.col1) AS anon_1 FROM table1")
vis = sql_util.ClauseAdapter(t1alias)
j1 = addresses.join(ualias, addresses.c.user_id==ualias.c.id)
self.assert_compile(sql_util.ClauseAdapter(j1).traverse(s), "SELECT count(addresses.id) AS count_1 FROM addresses WHERE users_1.id = addresses.user_id")
-
+
def test_table_to_alias(self):
t1alias = t1.alias('t1alias')
a = Table('a', m, Column('x', Integer), Column('y', Integer))
b = Table('b', m, Column('x', Integer), Column('y', Integer))
c = Table('c', m, Column('x', Integer), Column('y', Integer))
-
+
# force a recursion overflow, by linking a.c.x<->c.c.x, and
# asking for a nonexistent col. corresponding_column should prevent
# endless depth.
c = Table('c', m, Column('x', Integer), Column('y', Integer))
alias = select([a]).select_from(a.join(b, a.c.x==b.c.x)).alias()
-
+
# two levels of indirection from c.x->b.x->a.x, requires recursive
# corresponding_column call
adapt = sql_util.ClauseAdapter(alias, equivalents= {b.c.x: set([ a.c.x]), c.c.x:set([b.c.x])})
assert adapt._corresponding_column(a.c.x, False) is alias.c.x
assert adapt._corresponding_column(c.c.x, False) is alias.c.x
-
+
def test_join_to_alias(self):
metadata = MetaData()
a = Table('a', metadata,
"(SELECT foo.col1 AS col1, foo.col2 AS col2, foo.col3 AS col3 FROM "\
"(SELECT table1.col1 AS col1, table1.col2 AS col2, table1.col3 AS col3 FROM table1) AS foo LIMIT 5 OFFSET 10) AS anon_1 "\
"LEFT OUTER JOIN table1 AS bar ON anon_1.col1 = bar.col1")
-
+
def test_functions(self):
self.assert_compile(sql_util.ClauseAdapter(t1.alias()).traverse(func.count(t1.c.col1)), "count(table1_1.col1)")
s = select([func.count(t1.c.col1)])
self.assert_compile(sql_util.ClauseAdapter(t1.alias()).traverse(s), "SELECT count(table1_1.col1) AS count_1 FROM table1 AS table1_1")
-
+
def test_recursive(self):
metadata = MetaData()
a = Table('a', metadata,
u = union(
a.join(b).select().apply_labels(),
a.join(d).select().apply_labels()
- ).alias()
-
+ ).alias()
+
self.assert_compile(
sql_util.ClauseAdapter(u).traverse(select([c.c.bid]).where(c.c.bid==u.c.b_aid)),
"SELECT c.bid "\
global table1, table2, table3, table4
def _table(name):
return table(name, column("col1"), column("col2"),column("col3"))
-
- table1, table2, table3, table4 = [_table(name) for name in ("table1", "table2", "table3", "table4")]
+
+ table1, table2, table3, table4 = [_table(name) for name in ("table1", "table2", "table3", "table4")]
def test_splice(self):
(t1, t2, t3, t4) = (table1, table2, table1.alias(), table2.alias())
-
+
j = t1.join(t2, t1.c.col1==t2.c.col1).join(t3, t2.c.col1==t3.c.col1).join(t4, t4.c.col1==t1.c.col1)
-
+
s = select([t1]).where(t1.c.col2<5).alias()
-
+
self.assert_compile(sql_util.splice_joins(s, j),
"(SELECT table1.col1 AS col1, table1.col2 AS col2, "\
"table1.col3 AS col3 FROM table1 WHERE table1.col2 < :col2_1) AS anon_1 "\
def test_stop_on(self):
(t1, t2, t3) = (table1, table2, table3)
-
+
j1= t1.join(t2, t1.c.col1==t2.c.col1)
j2 = j1.join(t3, t2.c.col1==t3.c.col1)
-
+
s = select([t1]).select_from(j1).alias()
-
+
self.assert_compile(sql_util.splice_joins(s, j2),
"(SELECT table1.col1 AS col1, table1.col2 AS col2, table1.col3 AS col3 FROM table1 JOIN table2 "\
"ON table1.col1 = table2.col1) AS anon_1 JOIN table2 ON anon_1.col1 = table2.col1 JOIN table3 "\
self.assert_compile(sql_util.splice_joins(s, j2, j1),
"(SELECT table1.col1 AS col1, table1.col2 AS col2, table1.col3 AS col3 FROM table1 "\
"JOIN table2 ON table1.col1 = table2.col1) AS anon_1 JOIN table3 ON table2.col1 = table3.col1")
-
+
def test_splice_2(self):
t2a = table2.alias()
t3a = table3.alias()
j1 = table1.join(t2a, table1.c.col1==t2a.c.col1).join(t3a, t2a.c.col2==t3a.c.col2)
-
+
t2b = table4.alias()
j2 = table1.join(t2b, table1.c.col3==t2b.c.col3)
-
+
self.assert_compile(sql_util.splice_joins(table1, j1),
"table1 JOIN table2 AS table2_1 ON table1.col1 = table2_1.col1 "\
"JOIN table3 AS table3_1 ON table2_1.col2 = table3_1.col2")
-
+
self.assert_compile(sql_util.splice_joins(table1, j2), "table1 JOIN table4 AS table4_1 ON table1.col3 = table4_1.col3")
self.assert_compile(sql_util.splice_joins(sql_util.splice_joins(table1, j1), j2),
"table1 JOIN table2 AS table2_1 ON table1.col1 = table2_1.col1 "\
"JOIN table3 AS table3_1 ON table2_1.col2 = table3_1.col2 "\
"JOIN table4 AS table4_1 ON table1.col3 = table4_1.col3")
-
-
+
+
class SelectTest(TestBase, AssertsCompiledSQL):
"""tests the generative capability of Select"""
assert s._execution_options == dict(foo='bar')
# s2 should have its execution_options based on s, though.
assert s2._execution_options == dict(foo='bar', bar='baz')
-
+
# this feature is not available yet
def _NOTYET_test_execution_options_in_text(self):
s = text('select 42', execution_options=dict(foo='bar'))
assert_raises(exceptions.IdentifierError, m.drop_all)
assert_raises(exceptions.IdentifierError, t1.create)
assert_raises(exceptions.IdentifierError, t1.drop)
-
+
def test_result(self):
table1.insert().execute(**{"this_is_the_primarykey_column":1, "this_is_the_data_column":"data1"})
table1.insert().execute(**{"this_is_the_primarykey_column":2, "this_is_the_data_column":"data2"})
(1, "data1"),
(2, "data2"),
], repr(result)
-
+
@testing.requires.offset
def go():
r = s.limit(2).offset(1).execute()
(3, "data3"),
], repr(result)
go()
-
+
def test_table_alias_names(self):
if testing.against('oracle'):
self.assert_compile(
self.assert_compile(
select([table1, ta]).select_from(table1.join(ta, table1.c.this_is_the_data_column==ta.c.this_is_the_data_column)).\
where(ta.c.this_is_the_data_column=='data3'),
-
+
"SELECT some_large_named_table.this_is_the_primarykey_column, some_large_named_table.this_is_the_data_column, "
"table_with_exactly_29_c_1.this_is_the_primarykey_column, table_with_exactly_29_c_1.this_is_the_data_column FROM "
"some_large_named_table JOIN table_with_exactly_29_characs AS table_with_exactly_29_c_1 ON "
"WHERE table_with_exactly_29_c_1.this_is_the_data_column = :this_is_the_data_column_1",
dialect=dialect
)
-
+
table2.insert().execute(
{"this_is_the_primarykey_column":1, "this_is_the_data_column":"data1"},
{"this_is_the_primarykey_column":2, "this_is_the_data_column":"data2"},
{"this_is_the_primarykey_column":3, "this_is_the_data_column":"data3"},
{"this_is_the_primarykey_column":4, "this_is_the_data_column":"data4"},
)
-
+
r = table2.alias().select().execute()
assert r.fetchall() == [(x, "data%d" % x) for x in range(1, 5)]
-
+
def test_colbinds(self):
table1.insert().execute(**{"this_is_the_primarykey_column":1, "this_is_the_data_column":"data1"})
table1.insert().execute(**{"this_is_the_primarykey_column":2, "this_is_the_data_column":"data2"})
self.assert_compile(x, "SELECT _1.this_is_the_primarykey_column AS _1, _1.this_is_the_data_column AS _2 FROM "
"(SELECT some_large_named_table.this_is_the_primarykey_column AS _3, some_large_named_table.this_is_the_data_column AS _4 "
"FROM some_large_named_table WHERE some_large_named_table.this_is_the_primarykey_column = :_1) AS _1", dialect=compile_dialect)
-
-
+
+
Column('address', String(30)),
test_needs_acid=True
)
-
+
users2 = Table('u2', metadata,
Column('user_id', INT, primary_key = True),
Column('user_name', VARCHAR(20)),
def test_insert_heterogeneous_params(self):
"""test that executemany parameters are asserted to match the parameter set of the first."""
-
+
assert_raises_message(exc.InvalidRequestError,
"A value is required for bind parameter 'user_name', in parameter group 2",
users.insert().execute,
comp = ins.compile(engine, column_keys=list(values))
if not set(values).issuperset(c.key for c in table.primary_key):
assert comp.returning
-
+
result = engine.execute(table.insert(), **values)
ret = values.copy()
-
+
for col, id in zip(table.primary_key, result.inserted_primary_key):
ret[col.key] = id
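# result.inserted_primary_key is the primary key of the row just
# inserted, whether it came back via RETURNING (implicit_returning) or
# via a post-insert fetch such as cursor.lastrowid or a sequence, which
# is why it can be zipped against table.primary_key here.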
if testing.against('firebird', 'postgresql', 'oracle', 'mssql'):
assert testing.db.dialect.implicit_returning
-
+
if testing.db.dialect.implicit_returning:
test_engines = [
engines.testing_engine(options={'implicit_returning':False}),
]
else:
test_engines = [testing.db]
-
+
for engine in test_engines:
metadata = MetaData()
for supported, table, values, assertvalues in [
]
else:
test_engines = [testing.db]
-
+
for engine in test_engines:
-
+
r = engine.execute(users.insert(),
{'user_name':'jack'},
)
assert r.closed
-
+
def test_row_iteration(self):
users.insert().execute(
{'user_id':7, 'user_name':'jack'},
@testing.fails_on('firebird', "kinterbasdb doesn't send full type information")
def test_order_by_label(self):
"""test that a label within an ORDER BY works on each backend.
-
+
This test should be modified to support [ticket:1068] when that ticket
is implemented. For now, you need to put the actual string in the
ORDER BY.
-
+
"""
users.insert().execute(
{'user_id':7, 'user_name':'jack'},
{'user_id':8, 'user_name':'ed'},
{'user_id':9, 'user_name':'fred'},
)
-
+
concat = ("test: " + users.c.user_name).label('thedata')
print select([concat]).order_by("thedata")
eq_(
select([concat]).order_by("thedata").execute().fetchall(),
[("test: ed",), ("test: fred",), ("test: jack",)]
)
-
+
eq_(
select([concat]).order_by("thedata").execute().fetchall(),
[("test: ed",), ("test: fred",), ("test: jack",)]
[("test: ed",), ("test: fred",), ("test: jack",)]
)
go()
-
-
+
+
def test_row_comparison(self):
users.insert().execute(user_id = 7, user_name = 'jack')
rp = users.select().execute().first()
for pickle in False, True:
for use_labels in False, True:
result = users.select(use_labels=use_labels).order_by(users.c.user_id).execute().fetchall()
-
+
if pickle:
result = util.pickle.loads(util.pickle.dumps(result))
-
+
eq_(
result,
[(7, "jack"), (8, "ed"), (9, "fred")]
else:
eq_(result[0]['user_id'], 7)
eq_(result[0].keys(), ["user_id", "user_name"])
-
+
eq_(result[0][0], 7)
eq_(result[0][users.c.user_id], 7)
eq_(result[0][users.c.user_name], 'jack')
-
+
if use_labels:
assert_raises(exc.NoSuchColumnError, lambda: result[0][addresses.c.user_id])
else:
# test with a different table. name resolution is
# causing 'user_id' to match when use_labels wasn't used.
eq_(result[0][addresses.c.user_id], 7)
-
+
assert_raises(exc.NoSuchColumnError, lambda: result[0]['fake key'])
assert_raises(exc.NoSuchColumnError, lambda: result[0][addresses.c.address_id])
-
+
@testing.requires.boolean_col_expressions
def test_or_and_as_columns(self):
true, false = literal(True), literal(False)
-
+
eq_(testing.db.execute(select([and_(true, false)])).scalar(), False)
eq_(testing.db.execute(select([and_(true, true)])).scalar(), True)
eq_(testing.db.execute(select([or_(true, false)])).scalar(), True)
row = testing.db.execute(select([or_(true, false).label("x"), and_(true, false).label("y")])).first()
assert row.x == True
assert row.y == False
-
+
def test_fetchmany(self):
users.insert().execute(user_id = 7, user_name = 'jack')
users.insert().execute(user_id = 8, user_name = 'ed')
), [(5,)]),
):
eq_(expr.execute().fetchall(), result)
-
+
@testing.fails_on("firebird", "see dialect.test_firebird:MiscTest.test_percents_in_text")
@testing.fails_on("oracle", "neither % nor %% are accepted")
@testing.fails_on("informix", "neither % nor %% are accepted")
(text("select 'hello % world'"), "hello % world")
):
eq_(testing.db.scalar(expr), result)
-
+
def test_ilike(self):
users.insert().execute(
{'user_id':1, 'user_name':'one'},
use_labels=labels,
order_by=[users.c.user_id.desc()]),
[(3,), (2,), (1,)])
-
+
@testing.fails_on("+pyodbc", "pyodbc row doesn't seem to accept slices")
def test_column_slices(self):
users.insert().execute(user_id=1, user_name='john')
self.assert_(r[0:1] == (1,))
self.assert_(r[1:] == (2, 'foo@bar.com'))
self.assert_(r[:-1] == (1, 2))
-
+
def test_column_accessor(self):
users.insert().execute(user_id=1, user_name='john')
users.insert().execute(user_id=2, user_name='jack')
r = users.select(users.c.user_id==2).execute().first()
self.assert_(r.user_id == r['user_id'] == r[users.c.user_id] == 2)
self.assert_(r.user_name == r['user_name'] == r[users.c.user_name] == 'jack')
-
+
r = text("select * from query_users where user_id=2", bind=testing.db).execute().first()
self.assert_(r.user_id == r['user_id'] == r[users.c.user_id] == 2)
self.assert_(r.user_name == r['user_name'] == r[users.c.user_name] == 'jack')
-
+
# test a little sqlite weirdness - with the UNION,
# cols come back as "query_users.user_id" in cursor.description
r = text("select query_users.user_id, query_users.user_name from query_users "
users.insert(),
{'user_id':1, 'user_name':'ed'}
)
-
+
eq_(r.lastrowid, 1)
-
-
+
+
def test_graceful_fetch_on_non_rows(self):
"""test that calling fetchone() etc. on a result that doesn't
return rows fails gracefully.
-
+
"""
# these proxies don't work with no cursor.description present.
getattr(result, meth),
)
trans.rollback()
-
+
def test_fetchone_til_end(self):
result = testing.db.execute("select * from query_users")
eq_(result.fetchone(), None)
def test_result_case_sensitivity(self):
"""test name normalization for result sets."""
-
+
row = testing.db.execute(
select([
literal_column("1").label("case_insensitive"),
literal_column("2").label("CaseSensitive")
])
).first()
-
+
assert row.keys() == ["case_insensitive", "CaseSensitive"]
-
+
def test_row_as_args(self):
users.insert().execute(user_id=1, user_name='john')
r = users.select(users.c.user_id==1).execute().first()
r = users.select().execute()
users2.insert().execute(list(r))
assert users2.select().execute().fetchall() == [(1, 'john'), (2, 'ed')]
-
+
users2.delete().execute()
r = users.select().execute()
users2.insert().execute(*list(r))
assert users2.select().execute().fetchall() == [(1, 'john'), (2, 'ed')]
-
+
def test_ambiguous_column(self):
users.insert().execute(user_id=1, user_name='john')
r = users.outerjoin(addresses).select().execute().first()
"Ambiguous column name",
lambda: r['user_id']
)
-
+
result = users.outerjoin(addresses).select().execute()
result = base.BufferedColumnResultProxy(result.context)
r = result.first()
users.insert().execute(user_id=1, user_name='foo')
r = users.select().execute().first()
eq_(len(r), 2)
-
+
r = testing.db.execute('select user_name, user_id from query_users').first()
eq_(len(r), 2)
r = testing.db.execute('select user_name from query_users').first()
"uses sql-92 rules")
def test_bind_in(self):
"""test calling IN against a bind parameter.
-
+
this isn't allowed on several platforms since we
generate ? = ?.
-
+
"""
users.insert().execute(user_id = 7, user_name = 'jack')
users.insert().execute(user_id = 8, user_name = 'fred')
assert len(r) == 3
r = s.execute(search_key=None).fetchall()
assert len(r) == 0
-
+
@testing.emits_warning('.*empty sequence.*')
@testing.fails_on('firebird', 'uses sql-92 bind rules')
def test_literal_in(self):
s = users.select(not_(literal("john").in_([])))
r = s.execute().fetchall()
assert len(r) == 3
-
-
+
+
@testing.emits_warning('.*empty sequence.*')
@testing.requires.boolean_col_expressions
def test_in_filtering_advanced(self):
"""test the behavior of the in_() function when
comparing against an empty collection, specifically
that a proper boolean value is generated.
-
+
"""
users.insert().execute(user_id = 7, user_name = 'jack')
class PercentSchemaNamesTest(TestBase):
"""tests using percent signs, spaces in table and column names.
-
+
Doesn't pass for mysql, postgresql, but this is really a
SQLAlchemy bug - we should be escaping out %% signs for this
operation the same way we do for text() and column labels.
-
+
"""
@classmethod
def teardown(self):
percent_table.delete().execute()
-
+
@classmethod
def teardown_class(cls):
metadata.drop_all()
-
+
def test_single_roundtrip(self):
percent_table.insert().execute(
{'percent%':5, 'spaces % more spaces':12},
{'percent%':11, 'spaces % more spaces':9},
)
self._assert_table()
-
+
@testing.crashes('mysql+mysqldb', 'MySQLdb handles executemany() inconsistently vs. execute()')
def test_executemany_roundtrip(self):
percent_table.insert().execute(
{'percent%':11, 'spaces % more spaces':9},
)
self._assert_table()
-
+
def _assert_table(self):
for table in (percent_table, percent_table.alias()):
eq_(
(11, 15)
]
)
-
-
-
+
+
+
class LimitTest(TestBase):
@classmethod
addresses.insert().execute(address_id=6, user_id=6, address='addr5')
users.insert().execute(user_id=7, user_name='fido')
addresses.insert().execute(address_id=7, user_id=7, address='addr5')
-
+
@classmethod
def teardown_class(cls):
metadata.drop_all()
dict(col2="t3col2r2", col3="bbb", col4="aaa"),
dict(col2="t3col2r3", col3="ccc", col4="bbb"),
])
-
+
@engines.close_first
def teardown(self):
pass
-
+
@classmethod
def teardown_class(cls):
metadata.drop_all()
"""like test_union_all, but breaks the sub-union into
a subquery with an explicit column reference on the outside,
more palatable to a wider variety of engines.
-
+
"""
u = union(
select([t1.c.col3]),
select([t1.c.col3]),
).alias()
-
+
e = union_all(
select([t1.c.col3]),
select([u.c.col3])
def test_except_style2(self):
# same as style1, but add alias().select() to the except_().
# sqlite can handle it now.
-
+
e = except_(union(
select([t1.c.col3, t1.c.col4]),
select([t2.c.col3, t2.c.col4]),
select([t3.c.col3], t3.c.col3 == 'ccc'), #ccc
).alias().select()
)
-
+
eq_(e.execute().fetchall(), [('ccc',)])
eq_(
e.alias().select().execute().fetchall(),
found = self._fetchall_sorted(u.execute())
eq_(found, wanted)
-
+
@testing.requires.intersect
def test_intersect_unions_3(self):
u = intersect(
@classmethod
def teardown_class(cls):
metadata.drop_all()
-
+
# TODO: seems like more tests warranted for this setup.
def test_modulo(self):
'''SELECT 1 FROM (SELECT "foo"."t1"."col1" AS "col1" FROM '''\
'''"foo"."t1") AS anon WHERE anon."col1" = :col1_1'''
)
-
+
metadata = MetaData()
t1 = Table('TableOne', metadata,
Column('ColumnOne', Integer, quote=False), quote=False, schema="FooBar", quote_schema=False)
self.assert_compile(t1.select().apply_labels(),
"SELECT FooBar.TableOne.ColumnOne AS "\
"FooBar_TableOne_ColumnOne FROM FooBar.TableOne" # TODO: is this what we really want here ? what if table/schema
- # *are* quoted?
+ # *are* quoted?
)
a = t1.select().alias('anon')
def setup(self):
meta = MetaData(testing.db)
global table, GoofyType
-
+
class GoofyType(TypeDecorator):
impl = String
-
+
def process_bind_param(self, value, dialect):
if value is None:
return None
if value is None:
return None
return value + "BAR"
-
+
table = Table('tables', meta,
Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('persons', Integer),
Column('goofy', GoofyType(50))
)
table.create(checkfirst=True)
-
+
def teardown(self):
table.drop()
-
+
@testing.exclude('firebird', '<', (2, 0), '2.0+ feature')
@testing.exclude('postgresql', '<', (8, 2), '8.2+ feature')
def test_column_targeting(self):
result = table.insert().returning(table.c.id, table.c.full).execute({'persons': 1, 'full': False})
-
+
row = result.first()
assert row[table.c.id] == row['id'] == 1
assert row[table.c.full] == row['full'] == False
-
+
result = table.insert().values(persons=5, full=True, goofy="somegoofy").\
returning(table.c.persons, table.c.full, table.c.goofy).execute()
row = result.first()
eq_(row[table.c.goofy], row['goofy'])
eq_(row['goofy'], "FOOsomegoofyBAR")
-
+
@testing.fails_on('firebird', "fb can't handle returning x AS y")
@testing.exclude('firebird', '<', (2, 0), '2.0+ feature')
@testing.exclude('postgresql', '<', (8, 2), '8.2+ feature')
returning(table.c.persons + 18).execute()
row = result.first()
assert row[0] == 30
-
+
@testing.exclude('firebird', '<', (2, 1), '2.1+ feature')
@testing.exclude('postgresql', '<', (8, 2), '8.2+ feature')
def test_update_returning(self):
test_executemany()
-
-
+
+
@testing.exclude('firebird', '<', (2, 1), '2.1+ feature')
@testing.exclude('postgresql', '<', (8, 2), '8.2+ feature')
@testing.fails_on_everything_except('postgresql', 'firebird')
class KeyReturningTest(TestBase, AssertsExecutionResults):
"""test returning() works with columns that define 'key'."""
-
+
__unsupported_on__ = ('sqlite', 'mysql', 'maxdb', 'sybase', 'access')
def setup(self):
result = table.insert().returning(table.c.foo_id).execute(data='somedata')
row = result.first()
assert row[table.c.foo_id] == row['id'] == 1
-
+
result = table.select().execute().first()
assert row[table.c.foo_id] == row['id'] == 1
-
+
class FoundRowsTest(TestBase, AssertsExecutionResults):
"""tests rowcount functionality"""
-
+
__requires__ = ('sane_rowcount', )
-
+
@classmethod
def setup_class(cls):
global employees_table, metadata
def test_indirect_correspondence_on_labels(self):
# this test depends upon 'distance' to
# get the right result
-
+
# same column three times
s = select([table1.c.col1.label('c2'), table1.c.col1,
def test_direct_correspondence_on_labels(self):
# this test depends on labels being part
# of the proxy set to get the right result
-
+
l1, l2 = table1.c.col1.label('foo'), table1.c.col1.label('bar')
sel = select([l1, l2])
j2 = jjj.alias('foo')
assert j2.corresponding_column(table1.c.col1) \
is j2.c.table1_col1
-
+
def test_against_cloned_non_table(self):
# test that corresponding column digs across
# clone boundaries with anonymous labeled elements
col = func.count().label('foo')
sel = select([col])
-
+
sel2 = visitors.ReplacingCloningVisitor().traverse(sel)
assert sel2.corresponding_column(col) is sel2.c.foo
sel3 = visitors.ReplacingCloningVisitor().traverse(sel2)
assert sel3.corresponding_column(col) is sel3.c.foo
-
+
def test_select_on_table(self):
sel = select([table1, table2], use_labels=True)
def test_union_precedence(self):
# conflicting column correspondence should be resolved based on
# the order of the select()s in the union
-
+
s1 = select([table1.c.col1, table1.c.col2])
s2 = select([table1.c.col2, table1.c.col1])
s3 = select([table1.c.col3, table1.c.colx])
s4 = select([table1.c.colx, table1.c.col3])
-
+
u1 = union(s1, s2)
assert u1.corresponding_column(table1.c.col1) is u1.c.col1
assert u1.corresponding_column(table1.c.col2) is u1.c.col2
-
+
u1 = union(s1, s2, s3, s4)
assert u1.corresponding_column(table1.c.col1) is u1.c.col1
assert u1.corresponding_column(table1.c.col2) is u1.c.col2
assert u1.corresponding_column(table1.c.colx) is u1.c.col2
assert u1.corresponding_column(table1.c.col3) is u1.c.col1
-
+
def test_singular_union(self):
u = union(select([table1.c.col1, table1.c.col2,
j = join(a, table2)
criterion = a.c.acol1 == table2.c.col2
self.assert_(criterion.compare(j.onclause))
-
+
def test_labeled_select_corresponding(self):
l1 = select([func.max(table1.c.col1)]).label('foo')
s = select([l1])
eq_(s.corresponding_column(l1), s.c.foo)
-
+
s = select([table1.c.col1, l1])
eq_(s.corresponding_column(l1), s.c.foo)
s = select([t2, t3], use_labels=True)
assert_raises(exc.NoReferencedTableError, s.join, t1)
-
+
def test_join_condition(self):
m = MetaData()
]:
assert expected.compare(sql_util.join_condition(left,
right, a_subset=a_subset))
-
+
# these are ambiguous, or have no joins
for left, right, a_subset in [
(t1t2, t3, None),
sql_util.join_condition,
left, right, a_subset=a_subset
)
-
+
als = t2t3.alias()
# test join's behavior, including natural
for left, right, expected in [
"Perhaps you meant to convert the right "
"side to a subquery using alias\(\)\?",
t1t2.join, t2t3.select(use_labels=True))
-
+
class PrimaryKeyTest(TestBase, AssertsExecutionResults):
t3 = Table('t3', meta,
Column('t3id', Integer, ForeignKey('t2.t2id'), primary_key=True),
Column('t3data', String(30)))
-
+
eq_(util.column_set(sql_util.reduce_columns([
t1.c.t1id,
t1.c.t1data,
t3.c.t3data,
])), util.column_set([t1.c.t1id, t1.c.t1data, t2.c.t2data,
t3.c.t3data]))
-
+
def test_reduce_selectable(self):
metadata = MetaData()
eq_(util.column_set(sql_util.reduce_columns(list(s.c), s)),
util.column_set([s.c.engineer_id, s.c.engineer_name,
s.c.manager_id]))
-
+
def test_reduce_aliased_join(self):
metadata = MetaData()
eq_(util.column_set(sql_util.reduce_columns([pjoin.c.people_person_id,
pjoin.c.engineers_person_id, pjoin.c.managers_person_id])),
util.column_set([pjoin.c.people_person_id]))
-
+
def test_reduce_aliased_union(self):
metadata = MetaData()
item_join.c.dummy, item_join.c.child_name])),
util.column_set([item_join.c.id, item_join.c.dummy,
item_join.c.child_name]))
-
+
def test_reduce_aliased_union_2(self):
metadata = MetaData()
select_from(
page_table.join(magazine_page_table).
join(classified_page_table)),
-
+
select([
page_table.c.id,
magazine_page_table.c.page_id,
cast(null(), Integer).label('magazine_page_id')
]).
select_from(page_table.join(magazine_page_table)),
-
+
select([
page_table.c.id,
magazine_page_table.c.page_id,
def __init__(self):
Column.__init__(self, 'foo', Integer)
_constructor = Column
-
+
t1 = Table('t1', MetaData(), MyColumn())
s1 = t1.select()
assert isinstance(t1.c.foo, MyColumn)
assert isinstance(s2.c.foo, Column)
annot_2 = s1._annotate({})
assert isinstance(annot_2.c.foo, Column)
-
+
def test_annotated_corresponding_column(self):
table1 = table('table1', column("col1"))
-
+
s1 = select([table1.c.col1])
t1 = s1._annotate({})
t2 = s1
-
+
# t1 needs to share the same _make_proxy() columns as t2, even
# though it's annotated. otherwise paths will diverge once they
# are corresponded against "inner" below.
def test_annotated_visit(self):
table1 = table('table1', column("col1"), column("col2"))
-
+
bin = table1.c.col1 == bindparam('foo', value=None)
assert str(bin) == "table1.col1 = :foo"
def visit_binary(b):
b.right = table1.c.col2
-
+
b2 = visitors.cloned_traverse(bin, {}, {'binary':visit_binary})
assert str(b2) == "table1.col1 = table1.col2"
def visit_binary(b):
b.left = bindparam('bar')
-
+
b4 = visitors.cloned_traverse(b2, {}, {'binary':visit_binary})
assert str(b4) == ":bar = table1.col2"
b5 = visitors.cloned_traverse(b3, {}, {'binary':visit_binary})
assert str(b5) == ":bar = table1.col2"
-
+
def test_annotate_expressions(self):
table1 = table('table1', column('col1'), column('col2'))
eq_(str(sql_util._deep_annotate(expr, {})), expected)
eq_(str(sql_util._deep_annotate(expr, {},
exclude=[table1.c.col1])), expected)
-
+
def test_deannotate(self):
table1 = table('table1', column("col1"), column("col2"))
-
+
bin = table1.c.col1 == bindparam('foo', value=None)
b2 = sql_util._deep_annotate(bin, {'_orm_adapt':True})
for elem in (b2._annotations, b2.left._annotations):
assert '_orm_adapt' in elem
-
+
for elem in b3._annotations, b3.left._annotations, \
b4._annotations, b4.left._annotations:
assert elem == {}
assert b3.left is not b2.left is not bin.left
assert b4.left is bin.left # since column is immutable
assert b4.right is not bin.right is not b2.right is not b3.right
-
+
def test_uppercase_rendering(self):
"""Test that uppercase types from types.py always render as their
type.
-
+
As of SQLA 0.6, using an uppercase type means you want specifically
that type. If the database in use doesn't support that DDL, it (the DB
backend) should raise an error - it means you should be using a
lowercased (genericized) type.
-
+
"""
-
+
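# Illustrative preamble (not an original assertion; relies on this module's
# imports): an uppercase type compiles to exactly its own name on any
# dialect, which is what the loop below verifies across the bundled dialects.
assert types.to_instance(types.VARCHAR(30)).compile(
            dialect=sqlite.dialect()) == "VARCHAR(30)"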
for dialect in [
oracle.dialect(),
mysql.dialect(),
compiled = types.to_instance(type_).\
compile(dialect=dialect)
-
+
assert compiled in expected, \
"%r matches none of %r for dialect %s" % \
(compiled, expected, dialect.name)
-
+
assert str(types.to_instance(type_)) in expected, \
"default str() of type %r not expected, %r" % \
(type_, expected)
-
+
class TypeAffinityTest(TestBase):
def test_type_affinity(self):
for type_, affin in [
(LargeBinary(), types._Binary)
]:
eq_(type_._type_affinity, affin)
-
+
for t1, t2, comp in [
(Integer(), SmallInteger(), True),
(Integer(), String(), False),
Table('foo', meta, column_type)
ct = loads(dumps(column_type))
mt = loads(dumps(meta))
-
+
class UserDefinedTest(TestBase, AssertsCompiledSQL):
"""tests user-defined types."""
]:
for dialect_ in (postgresql, mssql, mysql):
dialect_ = dialect_.dialect()
-
+
raw_impl = types.to_instance(impl_, **kw)
-
+
class MyType(types.TypeDecorator):
impl = impl_
-
+
dec_type = MyType(**kw)
-
+
eq_(dec_type.impl.__class__, raw_impl.__class__)
-
+
raw_dialect_impl = raw_impl.dialect_impl(dialect_)
dec_dialect_impl = dec_type.dialect_impl(dialect_)
eq_(dec_dialect_impl.__class__, MyType)
eq_(raw_dialect_impl.__class__ , dec_dialect_impl.impl.__class__)
-
+
self.assert_compile(
MyType(**kw),
exp,
dialect=dialect_
)
-
+
def test_user_defined_typedec_impl(self):
class MyType(types.TypeDecorator):
impl = Float
-
+
def load_dialect_impl(self, dialect):
if dialect.name == 'sqlite':
return String(50)
else:
return super(MyType, self).load_dialect_impl(dialect)
-
+
sl = sqlite.dialect()
pg = postgresql.dialect()
t = MyType()
t.dialect_impl(dialect=pg).impl.__class__,
Float().dialect_impl(pg).__class__
)
-
+
@testing.provide_metadata
def test_type_coerce(self):
"""test ad-hoc usage of custom types with type_coerce()."""
-
+
class MyType(types.TypeDecorator):
impl = String
def process_bind_param(self, value, dialect):
return value[0:-8]
-
+
def process_result_value(self, value, dialect):
return value + "BIND_OUT"
-
+
t = Table('t', metadata, Column('data', String(50)))
metadata.create_all()
-
+
t.insert().values(data=type_coerce('d1BIND_OUT',MyType)).execute()
eq_(
select([type_coerce(t.c.data, MyType)]).execute().fetchall(),
[('d1BIND_OUT', )]
)
-
+
eq_(
select([t.c.data, type_coerce(t.c.data, MyType)]).execute().fetchall(),
[('d1', 'd1BIND_OUT')]
)
-
+
eq_(
select([t.c.data, type_coerce(t.c.data, MyType)]).\
where(type_coerce(t.c.data, MyType) == 'd1BIND_OUT').\
execute().fetchall(),
[]
)
-
+
@classmethod
def setup_class(cls):
global users, metadata
Column('unicode_text', UnicodeText),
)
metadata.create_all()
-
+
@classmethod
def teardown_class(cls):
metadata.drop_all()
@engines.close_first
def teardown(self):
unicode_table.delete().execute()
-
+
def test_native_unicode(self):
"""assert expected values for 'native unicode' mode"""
-
+
if \
(testing.against('mssql+pyodbc') and not testing.db.dialect.freetds):
assert testing.db.dialect.returns_unicode_strings == 'conditional'
return
-
+
if testing.against('mssql+pymssql'):
assert testing.db.dialect.returns_unicode_strings == ('charset' in testing.db.url.query)
return
-
+
assert testing.db.dialect.returns_unicode_strings == \
((testing.db.name, testing.db.driver) in \
(
('postgresql','psycopg2'),
('postgresql','pypostgresql'),
('postgresql','pg8000'),
- ('postgresql','zxjdbc'),
+ ('postgresql','zxjdbc'),
('mysql','oursql'),
('mysql','zxjdbc'),
('mysql','mysqlconnector'),
(testing.db.name,
testing.db.driver,
testing.db.dialect.returns_unicode_strings)
-
+
def test_round_trip(self):
unicodedata = u"Alors vous imaginez ma surprise, au lever du jour, "\
u"quand une drôle de petit voix m’a réveillé. Elle "\
u"disait: « S’il vous plaît… dessine-moi un mouton! »"
-
+
unicode_table.insert().execute(unicode_varchar=unicodedata,unicode_text=unicodedata)
-
+
x = unicode_table.select().execute().first()
assert isinstance(x['unicode_varchar'], unicode)
assert isinstance(x['unicode_text'], unicode)
def test_round_trip_executemany(self):
# cx_oracle was producing different behavior for cursor.executemany()
# vs. cursor.execute()
-
+
unicodedata = u"Alors vous imaginez ma surprise, au lever du jour, quand "\
u"une drôle de petit voix m’a réveillé. "\
u"Elle disait: « S’il vous plaît… dessine-moi un mouton! »"
u"Elle disait: « S’il vous plaît… dessine-moi un mouton! »"
unicode_table.insert().execute(unicode_varchar=unicodedata,unicode_text=unicodedata)
-
+
x = union(
select([unicode_table.c.unicode_varchar]),
select([unicode_table.c.unicode_varchar])
).execute().first()
-
+
assert isinstance(x['unicode_varchar'], unicode)
eq_(x['unicode_varchar'], unicodedata)
def test_unicode_warnings(self):
"""test the warnings raised when SQLA must coerce unicode binds,
*and* is using the Unicode type.
-
+
"""
unicodedata = u"Alors vous imaginez ma surprise, au lever du jour, quand "\
u"une drôle de petit voix m’a réveillé. "\
u"Elle disait: « S’il vous plaît… dessine-moi un mouton! »"
-
+
# using Unicode explicitly - warning should be emitted
u = Unicode()
uni = u.dialect_impl(testing.db.dialect).bind_processor(testing.db.dialect)
assert_raises(exc.SAWarning, uni, 'x')
assert isinstance(uni(unicodedata), str)
# end Py2K
-
+
eq_(uni(unicodedata), unicodedata.encode('utf-8'))
-
+
# using convert unicode at engine level -
# this should not be raising a warning
unicode_engine = engines.utf8_engine(options={'convert_unicode':True,})
unicode_engine.dialect.supports_unicode_binds = False
-
+
s = String()
uni = s.dialect_impl(unicode_engine.dialect).bind_processor(unicode_engine.dialect)
# this is not the unicode type - no warning
uni('x')
assert isinstance(uni(unicodedata), str)
# end Py2K
-
+
eq_(uni(unicodedata), unicodedata.encode('utf-8'))
-
+
@testing.fails_if(
lambda: testing.db_spec("postgresql+pg8000")(testing.db) and util.py3k,
"pg8000 appropriately does not accept 'bytes' for a VARCHAR column."
)
def test_ignoring_unicode_error(self):
"""checks String(unicode_error='ignore') is passed to underlying codec."""
-
+
unicodedata = u"Alors vous imaginez ma surprise, au lever du jour, quand "\
u"une drôle de petit voix m’a réveillé. "\
u"Elle disait: « S’il vous plaît… dessine-moi un mouton! »"
-
+
asciidata = unicodedata.encode('ascii', 'ignore')
-
+
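# Quick standalone illustration of the 'ignore' error handler relied on
# above (plain Python codec behavior, no SQLAlchemy involved): the accented
# and typographic characters in the sample text are simply dropped, so the
# ascii version comes out shorter than the original.
assert len(asciidata) < len(unicodedata)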
m = MetaData()
table = Table('unicode_err_table', m,
Column('sort', Integer),
Column('plain_varchar_no_coding_error', \
String(248, convert_unicode='force', unicode_error='ignore'))
)
-
+
m2 = MetaData()
utf8_table = Table('unicode_err_table', m2,
Column('sort', Integer),
Column('plain_varchar_no_coding_error', \
String(248, convert_unicode=True))
)
-
+
engine = engines.testing_engine(options={'encoding':'ascii'})
m.create_all(engine)
try:
# switch to utf-8
engine.dialect.encoding = 'utf-8'
from binascii import hexlify
-
+
# the row that we put in was stored as hexlified ascii
row = engine.execute(utf8_table.select()).first()
x = row['plain_varchar_no_coding_error']
a = hexlify(x)
b = hexlify(asciidata)
eq_(a, b)
-
+
# insert another row which will be stored with
# utf-8 only chars
engine.execute(
ascii_row = result.fetchone()
utf8_row = result.fetchone()
result.close()
-
+
x = ascii_row['plain_varchar_no_coding_error']
# on python3 "x" comes back as string (i.e. unicode),
# hexlify requires bytes
else:
a = hexlify(x)
eq_(a, b)
-
+
finally:
m.drop_all(engine)
)
metadata.create_all()
-
+
def teardown(self):
enum_table.delete().execute()
non_native_enum_table.delete().execute()
-
+
@classmethod
def teardown_class(cls):
metadata.drop_all()
{'id':2, 'someenum':'two'},
{'id':3, 'someenum':'one'},
])
-
+
eq_(
enum_table.select().order_by(enum_table.c.id).execute().fetchall(),
[
(3, 'one'),
]
)
-
+
def test_adapt(self):
from sqlalchemy.dialects.postgresql import ENUM
e1 = Enum('one','two','three', native_enum=False)
e1 = Enum('one','two','three', name='foo', schema='bar')
eq_(e1.adapt(ENUM).name, 'foo')
eq_(e1.adapt(ENUM).schema, 'bar')
-
+
@testing.fails_on('mysql+mysqldb', "MySQL seems to issue a 'data truncated' warning.")
def test_constraint(self):
assert_raises(exc.DBAPIError,
non_native_enum_table.insert().execute,
{'id':4, 'someenum':'four'}
)
-
+
class BinaryTest(TestBase, AssertsExecutionResults):
__excluded_on__ = (
('mysql', '<', (4, 1, 1)), # screwy varbinary types
if value:
value.stuff = 'this is the right stuff'
return value
-
+
metadata = MetaData(testing.db)
binary_table = Table('binary_table', metadata,
Column('primary_id', Integer, primary_key=True, test_needs_autoincrement=True),
'data, not really known how to make this work')
def test_comparison(self):
"""test that type coercion occurs on comparison for binary"""
-
+
expr = binary_table.c.data == 'foo'
assert isinstance(expr.right.type, LargeBinary)
-
+
data = os.urandom(32)
binary_table.insert().execute(data=data)
eq_(binary_table.select().where(binary_table.c.data==data).alias().count().scalar(), 1)
-
-
+
+
def load_stream(self, name):
f = os.path.join(os.path.dirname(__file__), "..", name)
return open(f, mode='rb').read()
return process
def adapt_operator(self, op):
return {operators.add:operators.sub, operators.sub:operators.add}.get(op, op)
-
+
class MyTypeDec(types.TypeDecorator):
impl = String
-
+
def process_bind_param(self, value, dialect):
return "BIND_IN" + str(value)
def process_result_value(self, value, dialect):
return value + "BIND_OUT"
-
+
meta = MetaData(testing.db)
test_table = Table('test', meta,
Column('id', Integer, primary_key=True),
expr = test_table.c.bvalue == bindparam("somevalue")
eq_(expr.right.type._type_affinity, String)
-
+
eq_(
testing.db.execute(test_table.select().where(expr),
{"somevalue":"foo"}).fetchall(),
[(1, 'somedata',
datetime.date(2007, 10, 15), 25, 'BIND_INfooBIND_OUT')]
)
-
+
def test_literal_adapt(self):
# literals get typed based on the types dictionary, unless
# compatible with the left side type
expr = column('foo', CHAR) == "asdf"
eq_(expr.right.type.__class__, CHAR)
-
-
+
+
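def _literal_default_typing_sketch(self):
    # Illustrative sketch, not an original test: a standalone literal with
    # no left-hand type to be "compatible" with is typed straight from the
    # default type map.
    eq_(literal(5).type._type_affinity, Integer)
    eq_(literal("five").type._type_affinity, String)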
@testing.fails_on('firebird', 'Data type unknown on the parameter')
def test_operator_adapt(self):
"""test type-based overloading of operators"""
assert testing.db.execute(select([expr.label('foo')])).scalar() == 21
expr = test_table.c.avalue + literal(40, type_=MyCustomType)
-
+
# + operator converted to -
# value is calculated as: (250 - (40 * 10)) / 10 == -15
assert testing.db.execute(select([expr.label('foo')])).scalar() == -15
def test_typedec_operator_adapt(self):
expr = test_table.c.bvalue + "hi"
-
+
assert expr.type.__class__ is MyTypeDec
assert expr.right.type.__class__ is MyTypeDec
-
+
eq_(
testing.db.execute(select([expr.label('foo')])).scalar(),
"BIND_INfooBIND_INhiBIND_OUT"
def test_typedec_righthand_coercion(self):
class MyTypeDec(types.TypeDecorator):
impl = String
-
+
def process_bind_param(self, value, dialect):
return "BIND_IN" + str(value)
tab = table('test', column('bvalue', MyTypeDec))
expr = tab.c.bvalue + 6
-
+
self.assert_compile(
expr,
"test.bvalue || :bvalue_1",
use_default_dialect=True
)
-
+
assert expr.type.__class__ is MyTypeDec
eq_(
testing.db.execute(select([expr.label('foo')])).scalar(),
"BIND_INfooBIND_IN6BIND_OUT"
)
-
-
+
+
def test_bind_typing(self):
from sqlalchemy.sql import column
-
+
class MyFoobarType(types.UserDefinedType):
pass
-
+
class Foo(object):
pass
-
+
# unknown type + integer, right hand bind
# is an Integer
expr = column("foo", MyFoobarType) + 5
assert expr.right.type._type_affinity is types.Integer
-
+
# untyped bind - it gets assigned MyFoobarType
expr = column("foo", MyFoobarType) + bindparam("foo")
assert expr.right.type._type_affinity is MyFoobarType
# coerces to the left
expr = column("foo", MyFoobarType) + Foo()
assert expr.right.type._type_affinity is MyFoobarType
-
+
# including for non-commutative ops
expr = column("foo", MyFoobarType) - Foo()
assert expr.right.type._type_affinity is MyFoobarType
expr = column("foo", MyFoobarType) - datetime.date(2010, 8, 25)
assert expr.right.type._type_affinity is types.Date
-
+
def test_date_coercion(self):
from sqlalchemy.sql import column
-
+
expr = column('bar', types.NULLTYPE) - column('foo', types.TIMESTAMP)
eq_(expr.type._type_affinity, types.NullType)
-
+
expr = func.sysdate() - column('foo', types.TIMESTAMP)
eq_(expr.type._type_affinity, types.Interval)
expr = func.current_date() - column('foo', types.TIMESTAMP)
eq_(expr.type._type_affinity, types.Interval)
-
+
def test_numerics_coercion(self):
from sqlalchemy.sql import column
import operator
-
+
for op in (
operator.add,
operator.mul,
str(column('a', types.NullType()) + column('b', types.NullType())),
"a + b"
)
-
+
def test_expression_typing(self):
expr = column('bar', Integer) - 3
-
+
eq_(expr.type._type_affinity, Integer)
expr = bindparam('bar') + bindparam('foo')
eq_(expr.type, types.NULLTYPE)
-
+
def test_distinct(self):
s = select([distinct(test_table.c.avalue)])
eq_(testing.db.execute(s).scalar(), 25)
assert distinct(test_table.c.data).type == test_table.c.data.type
assert test_table.c.data.distinct().type == test_table.c.data.type
-
+
class CompileTest(TestBase, AssertsCompiledSQL):
def test_default_compile(self):
"""test that the base dialect of the type object is used
for default compilation.
-
+
"""
for type_, expected in (
(String(), "VARCHAR"),
datetime.time(23, 59, 59, time_micro)),
(10, 'colber', None, None, None),
]
-
-
+
+
fnames = ['user_id', 'user_name', 'user_datetime',
'user_date', 'user_time']
def setup(self):
global metadata
metadata = MetaData(testing.db)
-
+
def teardown(self):
metadata.drop_all()
-
+
@testing.emits_warning(r".*does \*not\* support Decimal objects natively")
def _do_test(self, type_, input_, output, filter_ = None):
t = Table('t', metadata, Column('x', type_))
#print result
#print output
eq_(result, output)
-
+
def test_numeric_as_decimal(self):
self._do_test(
Numeric(precision=8, scale=4),
[15.7563],
filter_ = lambda n:n is not None and round(n, 5) or None
)
-
+
@testing.fails_on('mssql+pymssql', 'FIXME: improve pymssql dec handling')
def test_precision_decimal(self):
numbers = set([
decimal.Decimal("0.004354"),
decimal.Decimal("900.0"),
])
-
+
self._do_test(
Numeric(precision=18, scale=12),
numbers,
@testing.fails_on('mssql+pymssql', 'FIXME: improve pymssql dec handling')
def test_enotation_decimal(self):
"""test exceedingly small decimals.
-
+
Decimal reports values with E notation when the adjusted
exponent drops below -6, i.e. more than six leading zeroes
after the decimal point.
-
+
"""
-
+
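# Quick illustration of the stdlib behavior described in the docstring
# (pure Python, no database involved): str() switches to E notation once
# the adjusted exponent drops below -6.
assert str(decimal.Decimal("0.000001")) == "0.000001"
assert str(decimal.Decimal("0.0000001")) == "1E-7"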
numbers = set([
decimal.Decimal('1E-2'),
decimal.Decimal('1E-3'),
numbers,
numbers
)
-
+
@testing.fails_on("sybase+pyodbc",
"Don't know how do get these values through FreeTDS + Sybase")
@testing.fails_on("firebird", "Precision must be from 1 to 18")
numbers,
numbers
)
-
+
@testing.fails_on('sqlite', 'TODO')
@testing.fails_on('postgresql+pg8000', 'TODO')
@testing.fails_on("firebird", "Precision must be from 1 to 18")
numbers,
numbers
)
-
-
+
+
class IntervalTest(TestBase, AssertsExecutionResults):
@classmethod
def setup_class(cls):
eq_(row['native_interval'], None)
eq_(row['native_interval_args'], None)
eq_(row['non_native_interval'], None)
-
-
+
+
class BooleanTest(TestBase, AssertsExecutionResults):
@classmethod
def setup_class(cls):
Column('unconstrained_value', Boolean(create_constraint=False)),
)
bool_table.create()
-
+
@classmethod
def teardown_class(cls):
bool_table.drop()
-
+
def teardown(self):
bool_table.delete().execute()
-
+
def test_boolean(self):
bool_table.insert().execute(id=1, value=True)
bool_table.insert().execute(id=2, value=False)
eq_(res3, [(1, True), (2, False),
(3, True), (4, True),
(5, True), (6, None)])
-
+
# ensure we're getting True/False, not just ints
assert res3[0][1] is True
assert res3[1][1] is False
-
+
@testing.fails_on('mysql',
"The CHECK clause is parsed but ignored by all storage engines.")
@testing.fails_on('mssql',
def test_unconstrained(self):
testing.db.execute(
"insert into booltest (id, unconstrained_value) values (1, 5)")
-
+
class PickleTest(TestBase):
def test_eq_comparison(self):
p1 = PickleType()
-
+
for obj in (
{'1':'2'},
pickleable.Bar(5, 6),
p1.compare_values,
pickleable.BrokenComparable('foo'),
pickleable.BrokenComparable('foo'))
-
+
def test_nonmutable_comparison(self):
p1 = PickleType()
pickleable.OldSchool(10, 11)
):
assert p1.compare_values(p1.copy_value(obj), obj)
-
+
class CallableTest(TestBase):
@classmethod
def setup_class(cls):
try:
engine = metadata.bind
-
+
# reset the identifier preparer, so that we can force it to cache
# a unicode identifier
engine.dialect.identifier_preparer = engine.dialect.preparer(engine.dialect)
select([column(u'special_col')]).select_from(t1).execute().close()
assert isinstance(engine.dialect.identifier_preparer.format_sequence(Sequence('special_col')), unicode)
-
+
# now execute, run the sequence. it should run in u"Special_col.nextid" or similar as
# a unicode object; cx_oracle asserts that this is None or a String (postgresql lets it pass thru).
# ensure that executioncontext._exec_default() is encoding.
def create_tables(cls):
tables.metadata.drop_all(bind=testing.db)
tables.metadata.create_all(bind=testing.db)
-
+
@classmethod
def drop_tables(cls):
tables.metadata.drop_all(bind=testing.db)
@classmethod
def setup_class(cls):
super(SavePostTest, cls).setup_class()
-
+
mappers.zblog_mappers()
global blog_id, user_id
s = create_session(bind=testing.db)
def test_attach_noautoflush(self):
"""Test pending backref behavior."""
-
+
s = create_session(bind=testing.db, autoflush=False)
s.begin()