:tags:
:tickets:
types types types! still weren't working....have to use TypeDecorator again :(
.. change::
:tags:
:tickets:
fixed attributes bug where if an object is committed, its lazy-loaded list got
blown away if it hadn't been loaded
.. change::
:tags:
:tickets:
two issues related to postgres, which doesn't want to give you the "lastrowid"
since oids are deprecated:
* postgres database-side defaults that are on primary key cols *do* execute
explicitly beforehand, even though that's not the idea of a PassiveDefault. this is
because sequences on columns get reflected as PassiveDefaults, but need to be explicitly
executed on a primary key col so we know what we just inserted.
* if you did add a row that has a bunch of database-side defaults on it,
unit-of-work does a better check for "orphaned" objects that are
part of a "delete-orphan" cascade, for certain conditions where the
parent isn't available to cascade from.
.. change::
:tags:
so far will convert this to "TIME[STAMP] (WITH|WITHOUT) TIME ZONE",
so that timezone presence is more controllable (psycopg2
returns datetimes with tzinfo's if available, which can create confusion
against datetimes that don't).
.. change::
:tags:
with the session, and the INSERT statements are then sorted within the
mapper save_obj. the INSERT ordering has basically been pushed all
the way to the end of the flush cycle. that way the various sorts and
organizations occurring within UOWTask (particularly the circular task
sort) don't have to worry about maintaining order (which they weren't anyway)
.. change::
:tags:
:tickets:
overhaul to MapperExtension calling scheme, wasn't working very well
previously
.. change::
:tags:
:tickets:
select_table mappers *still* weren't always compiling
.. change::
:tags:
:tickets: 206
utterly remarkable: added a single space between 'CREATE TABLE'
and '(<the rest of it>' since *that's how MySQL indicates a non-
reserved word tablename.....*
.. change::
of an attribute is no longer micromanaged with each change and is
instead part of a "CommittedState" object created when the
instance is first loaded. HistoryArraySet is gone, the behavior of
list attributes is now more open ended (i.e. they're not sets anymore).
.. change::
:tags:
:tickets:
fix to transaction control, so that repeated rollback() calls
don't fail (was failing pretty badly when flush() would raise
an exception in a larger try/except transaction block)
.. change::
:tags:
:tickets:
fixed bug where tables with schema names weren't getting indexed in
the MetaData object properly
.. change::
:tags:
:tickets: 207
fixed bug where Column with redefined "key" property wasn't getting
type conversion happening in the ResultProxy
.. change::
:tickets:
fixed old bug where if a many-to-many table mapped as "secondary"
had extra columns, delete operations didn't work
.. change::
:tags:
:tickets: 138
added NonExistentTable exception thrown when reflecting a table
that doesn't exist
.. change::
:tags:
:tickets:
placeholder dispose() method added to SingletonThreadPool, doesn't
do anything yet
.. change::
:tickets:
rollback() is automatically called when an exception is raised,
but only if there's no transaction in process (i.e. works more like
autocommit).
.. change::
"oid" system has been totally moved into compile-time behavior;
if they are used in an order_by where they are not available, the order_by
doesn't get compiled, fixes
.. change::
:tags: sql
:tickets: 768
don't assume join criterion consists only of column objects
.. change::
:tags: sql
:tickets:
ForeignKey to a table in a schema that's not the default schema
requires the schema to be explicit; i.e. ForeignKey('alt_schema.users.id')
.. change::
:tags: sqlite
:tickets: 603
string PK column inserts don't get overwritten with OID
.. change::
:tags: mssql
parentheses are applied to clauses via a new _Grouping
construct. uses operator precedence to more intelligently apply
parentheses to clauses, provides cleaner nesting of clauses
(doesn't mutate clauses placed in other clauses, i.e. no 'parens'
flag)
.. change::
:tags: sql
:tickets: 578
removed "no group by's in a select that's part of a UNION"
restriction
.. change::
:tags: orm
:tickets:
fixed bug in query.instances() that wouldn't handle more than
one additional mapper or one additional column.
.. change::
means their lengths are dialect-dependent. So on oracle a label
that gets truncated to 30 chars will go out to 63 characters
on postgres. Also, the true labelname is always attached as the
accessor on the parent Selectable so there's no need to be aware
of the "truncated" label names.
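The dialect-dependent truncation described above can be sketched roughly as follows; the helper name and the hash suffix are illustrative assumptions, not SQLAlchemy's actual implementation:

```python
import hashlib

def truncate_label(name, max_len):
    # Hypothetical sketch: if a generated label exceeds the dialect's
    # identifier length limit (e.g. 30 on oracle, 63 on postgres),
    # truncate it and append a short hash so truncated names stay unique.
    if len(name) <= max_len:
        return name
    suffix = hashlib.md5(name.encode("utf-8")).hexdigest()[:4]
    return name[: max_len - 5] + "_" + suffix
```

The same label thus truncates differently per dialect, which is why user code should rely on the untruncated accessor on the parent Selectable rather than the truncated names.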
.. change::
:tickets: 513
the "mini" column labels generated when using subqueries, which
are to work around glitchy SQLite behavior that doesn't understand
"foo.id" as equivalent to "id", are now only generated in the case
that those named columns are selected from (part of)
:tickets:
mysql uses "DESCRIBE.<tablename>", catching exceptions
if table doesn't exist, in order to determine if a table exists.
this supports unicode table names as well as schema names. tested
with MySQL5 but should work with 4.1 series as well. (#557)
more fixes to polymorphic relations, involving proper lazy-clause
generation on many-to-one relationships to polymorphic mappers.
also fixes to detection of "direction", more specific
targeting of columns that belong to the polymorphic union vs. those
that don't.
.. change::
:tags: orm
got binary working for any size input ! cx_oracle works fine,
it was my fault as BINARY was being passed and not BLOB for
setinputsizes (also unit tests weren't even setting input sizes).
.. change::
:tags: oracle
:tags: orm, bugs
:tickets:
fix to deferred so that load operation doesn't mistakenly occur when only
PK col attributes are set
.. change::
:tickets:
type system slightly modified to support TypeDecorators that can be
overridden by the dialect (ok, that's not very clear, it allows the mssql
tweak below to be possible)
.. change::
:tickets: 420
mysql is inconsistent with what kinds of quotes it uses in foreign keys
during a SHOW CREATE TABLE, reflection updated to accommodate all three
styles
.. change::
:tags: orm
:tickets: 407
fixed bug in mapper refresh/expire whereby eager loaders didn't properly
re-populate item lists
.. change::
:tickets:
MySQL detects errors 2006 (server has gone away) and 2014
(commands out of sync) and invalidates the connection on which it occurred.
.. change::
:tags:
:tickets:
added onupdate and ondelete keyword arguments to ForeignKey; propagate
to underlying ForeignKeyConstraint if present. (don't propagate in the
other direction, however)
.. change::
fixed bug in circular dependency sorting at flush time; if object A
contained a cyclical many-to-one relationship to object B, and object B
was just attached to object A, *but* object B itself wasn't changed,
the many-to-one synchronize of B's primary key attribute to A's foreign key
attribute wouldn't occur.
.. change::
:tags: orm
a fair amount of cleanup to the schema package, removal of ambiguous
methods, methods that are no longer needed. slightly more constrained
usage, greater emphasis on explicitness
.. change::
:tags: schema
:tags: connections/pooling/execution
:tickets:
fixed bug where Connection wouldn't lose its Transaction
after commit/rollback
.. change::
including the addition of a MutableType mixin which is implemented by
PickleType. unit-of-work now tracks the "dirty" list as an expression
of all persistent objects where the attribute manager detects changes.
The basic issue that's fixed is detecting changes on PickleType
objects, but also generalizes type handling and "modified" object
checking to be more complete and extensible.
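The PickleType change detection described above can be approximated by comparing against a snapshot taken at load time; this is a rough conceptual sketch, not SQLAlchemy's internal API:

```python
import pickle

class MutableSnapshot:
    """Hypothetical sketch: detect in-place mutations on a pickled
    value by comparing its current pickle against a snapshot taken
    when the value was first loaded."""

    def __init__(self, value):
        self.value = value
        self._loaded = pickle.dumps(value, pickle.HIGHEST_PROTOCOL)

    def is_modified(self):
        # a changed pickle stream means the object was mutated in place
        return pickle.dumps(self.value, pickle.HIGHEST_PROTOCOL) != self._loaded
```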
implemented "version check" logic in Query/Mapper, used
when version_id_col is in effect and query.with_lockmode()
is used to get() an instance that's already loaded
.. change::
:tags: orm
transaction directly to the parent of the transaction
that could be rolled back to. Now it rolls back the next
transaction up that can handle it, but sets the current
transaction to its parent and inactivates the
transactions in between. Inactive transactions can only
be rolled back or closed, any other call results in an
error.
subtransactions.
- unitofwork flush didn't close the failed transaction
when the session was not in a transaction and committing
the transaction failed.
.. change::
Better support for schemas in SQLite (linked in by ATTACH
DATABASE ... AS name). In some cases in the past, schema
names were omitted from generated SQL for SQLite. This is
no longer the case.
.. change::
:tags:
:tickets:
The 'Smallinteger' compatibility name (small i!) is no longer imported,
but remains in schema.py for now. SmallInteger (big I!) is still
imported.
:tickets: 643
Class-level properties are now usable as query elements... no more
'.c.'! "Class.c.propname" is now superseded by "Class.propname". All
clause operators are supported, as well as higher level operators such
as Class.prop==<some instance> for scalar attributes,
Class.prop.contains(<some instance>) and Class.prop.any(<some
query.get() clauses, etc. and act as though they are regular single-column
scalars... except they're not! Use the function composite(cls, \*columns)
inside of the mapper's "properties" dict, and instances of cls will be
created/mapped to a single attribute, comprised of the values corresponding
to \*columns.
.. change::
Joined-table inheritance will now generate the primary key columns of all
inherited classes against the root table of the join only. This implies
that each row in the root table is distinct to a single instance. If for
some rare reason this is not desirable, explicit primary_key settings on
individual mappers will override it.
.. change::
:tickets:
Speed! Clause compilation as well as the mechanics of SQL constructs have
been streamlined and simplified to a significant degree, for a 20-30%
improvement of the statement construction/compilation overhead of 0.3.
.. change::
case_sensitive=(True|False) setting removed from schema items, since
checking this state added a lot of method call overhead and there was no
decent reason to ever set it to False. Table and column names which are
all lower case will be treated as case-insensitive (yes we adjust for
Oracle's UPPERCASE style too).
.. change::
Very rudimental support for OUT parameters added; use sql.outparam(name,
type) to set up an OUT parameter, just like bindparam(); after execution,
values are available via result.out_parameters dictionary.
correspondence for cloned selectables which contain
free-standing column expressions. This bug is
generally only noticeable when exercising newer
ORM behavior only available in 0.6 via,
but is more correct at the SQL expression level
as well.
:tags: orm
:tickets: 1501
Fixed recursion issue which occurred if a mapped object's
`__len__()` or `__nonzero__()` method resulted in state
changes.
and secondaryjoin do. For the extremely rare use case where
the backref of a relation() has intentionally different
"foreign_keys" configured, both sides now need to be
configured explicitly (if they do in fact require this setting,
see the next note...).
.. change::
graph of mappers.
- Cached a wasteful "table sort" operation that previously
occurred multiple times per flush, also removing significant
method call count from flush().
- Other redundant behaviors have been simplified in
when determining "orphan" status - for a persistent object
it only detects an in-python de-association event to establish
the object as an "orphan". Next, the good news: to support
one-to-one via a foreign key or association table, or to
support one-to-many via an association table, a new flag
single_parent=True may be set which indicates objects
linked to the relation are only meant to have a single parent.
The "unicode warning" against non-unicode bind data
is now raised only when the
Unicode type is used explicitly; not when
convert_unicode=True is used on the engine
or String type.
A warning is now emitted if a mapper is created against a
join or other single selectable that includes multiple
columns with the same name in its .c. collection,
and those columns aren't explicitly named as part of
the same or separate attributes (or excluded).
In 0.7 this warning will be an exception. Note that
this warning is not emitted when the combination occurs
the _Label construct, i.e. the one that is produced
whenever you say somecol.label(), now counts itself
in its "proxy_set" unioned with that of its
contained column's proxy set, instead of
directly returning that of the contained column.
This allows column correspondence
:tags: examples
:tickets:
The beaker_caching example has been reorganized
such that the Session, cache manager,
declarative_base are part of environment, and
custom cache code is portable and now within
:tags: orm
:tickets:
To accommodate the fact that there are now two kinds of eager
loading available, the new names for eagerload() and
eagerload_all() are joinedload() and joinedload_all(). The
old names will remain as synonyms for the foreseeable future.
:tags: postgresql
:tickets: 997
the TIME and TIMESTAMP types are now available from the
postgresql dialect directly, which add the PG-specific
argument 'precision' to both. 'precision' and
'timezone' are correctly reflected for both TIME and
Fixed bug in session.rollback() which involved not removing
formerly "pending" objects from the session before
re-integrating "deleted" objects, typically occurred with
natural primary keys. If there was a primary key conflict
between them, the attach of the deleted would fail
internally. The formerly "pending" objects are now expunged
the date/time/interval system created for Postgresql
EXTRACT in has now been generalized into
the type system. The previous behavior which often
occurred of an expression "column + literal" forcing
the type of "literal" to be the same as that of "column"
will now usually not occur - the type of
"literal" is first derived from the Python type of the
postgresql://scott:tiger@localhost/test
postgresql+pg8000://scott:tiger@localhost/test
The "postgres" name remains for backwards compatibility
in the following ways:
- There is a "postgres.py" dummy dialect which
a column of type TIMESTAMP now defaults to NULL if
"nullable=False" is not passed to Column(), and no default
is present. This is now consistent with all other types,
and in the case of TIMESTAMP explicitly renders "NULL"
due to MySQL's "switching" of default nullability
for TIMESTAMP columns.
:tickets: 2529
Added gaerdbms import to mysql/__init__.py,
the absence of which was preventing the new
GAE dialect from being loaded.
.. change::
:tickets:
Streamlined the process by which a Select
determines what's in its '.c' collection.
Behaves identically, except that a
raw ClauseList() passed to select([])
(which is not a documented case anyway) will
:tags: schema
:tickets: 2109
The 'useexisting' flag on Table has been superseded
by a new pair of flags 'keep_existing' and
'extend_existing'. 'extend_existing' is equivalent
to 'useexisting' - the existing Table is returned,
:tags: general
:tickets: 1902
New event system, supersedes all extensions, listeners,
etc.
.. change::
:tags: orm
:tickets: 1903
Hybrid Attributes, implements/supersedes synonym()
.. change::
:tags: orm
:tickets:
Mutation Event Extension, supersedes "mutable=True"
.. seealso::
execution_options() on Connection accepts
"isolation_level" argument, sets transaction isolation
level for that connection only until returned to the
connection pool, for those backends which support it
(SQLite, Postgresql)
.. change::
attempts when an existing connection attempt is blocking. Previously,
the production of new connections was serialized within the block
that monitored overflow; the overflow counter is now altered within
its own critical section outside of the connection process itself.
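The separation described here can be sketched as follows; the class and method names are hypothetical, not the actual pool internals:

```python
import threading

class OverflowCounter:
    """Hypothetical sketch: the overflow counter lives in its own
    critical section, so a slow connection attempt no longer blocks
    other threads from checking or updating the counter."""

    def __init__(self, max_overflow):
        self._lock = threading.Lock()
        self._count = 0
        self._max = max_overflow

    def try_increment(self):
        # the lock covers only the counter update, never connect()
        with self._lock:
            if self._count >= self._max:
                return False
            self._count += 1
            return True

    def decrement(self):
        with self._lock:
            self._count -= 1
```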
.. change::
:tags: bug, engine, pool
that the cursor rowcount matches the number of primary keys that should
have matched; this behavior had been taken off in most cases
(except when version_id is used) to support the unusual edge case of
self-referential ON DELETE CASCADE; to accommodate this, the message
is now just a warning, not an exception, and the flag can be used
to indicate a mapping that expects self-referential cascaded
deletes of this nature. See also :ticket:`2403` for background on the
:tags: feature, orm
The :class:`.exc.StatementError` or DBAPI-related subclass
now can accommodate additional information about the "reason" for
the exception; the :class:`.Session` now adds some detail to it
when the exception occurs within an autoflush. This approach
is taken as opposed to combining :class:`.FlushError` with
operations. End user code which emulates the behavior of backrefs
must now ensure that recursive event propagation schemes are halted,
if the scheme does not use the backref handlers. Using this new system,
backref handlers can now perform a
"two-hop" operation when an object is appended to a collection,
associated with a new many-to-one, de-associated with the previous
many-to-one, and then removed from a previous collection. Before this
The "auto-aliasing" behavior of the :meth:`.Query.select_from`
method has been turned off. The specific behavior is now
available via a new method :meth:`.Query.select_entity_from`.
The auto-aliasing behavior here was never well documented and
is generally not what's desired, as :meth:`.Query.select_from`
has become more oriented towards controlling how a JOIN is
Notice the nice clean alias names too. The joining doesn't
care if it's against the same immediate table or some other
object which then cycles back to the beginning. Any kind
of chain of eager loads can cycle back onto itself when
``join_depth`` is specified. When not present, eager
loading automatically stops when it hits a cycle.
To make room for the new subquery load feature, the existing
``eagerload()``/``eagerload_all()`` options are now
superseded by ``joinedload()`` and
``joinedload_all()``. The old names will hang around
for the foreseeable future just like ``relation()``.
which is invalid SQL as "t1" is not referred to in any FROM clause.
Now, in the absence of an enclosing SELECT, it returns::
SELECT t1.x, t2.y FROM t1, t2
:meth:`.AttributeEvents.append`, or :meth:`.AttributeEvents.remove` events,
and b. initiates further attribute modification operations as a result of these
events may need to be modified to prevent recursive loops, as the attribute system
no longer stops a chain of events from propagating endlessly in the absence of the backref
event handlers. Additionally, code which depends upon the value of the ``initiator``
will need to be adjusted to the new API, and furthermore must be ready for the
value of ``initiator`` to change from its original value within a string of
:func:`.bindparam` expression.
The potentially backwards-compatible changes involve two unlikely
scenarios. Since the bound parameter is
**cloned**, users should not be relying upon making in-place changes to a
:func:`.bindparam` construct once created. Additionally, code which uses
:func:`.bindparam` within an :class:`.Insert` or :class:`.Update` statement
:func:`~sqlalchemy.schema.MetaData.create_all` is issued, and
:class:`~sqlalchemy.schema.ForeignKeyConstraint` invokes the "CONSTRAINT"
keyword inline with "CREATE TABLE". There are some cases where this is
undesirable, particularly when two tables reference each other mutually, each
with a foreign key referencing the other. In such a situation at least one of
the foreign key constraints must be generated after both tables have been
built. To support such a scheme, :class:`~sqlalchemy.schema.ForeignKey` and
.. seealso::
:paramref:`.MetaData.naming_convention` - for additional usage details
as well as a listing of all available naming components.
:ref:`alembic:tutorial_constraint_names` - in the Alembic documentation.
the internals of both SQLAlchemy Core and ORM.
.. versionadded:: 0.7
The system supersedes the previous system of "extension", "proxy",
and "listener" classes.
Event Registration
:members:
.. versionadded:: 0.7
The event system supersedes the previous system of "extension", "listener",
and "proxy" classes.
Connection Pool Events
* The value of the ``.quote`` setting for :class:`.Column` or :class:`.Table`
* The association of a particular :class:`.Sequence` with a given :class:`.Column`
The relational database also in many cases reports on table metadata in a
different format than what was specified in SQLAlchemy. The :class:`.Table`
:class:`~sqlalchemy.schema.Table` construct, which resembles regular SQL
CREATE TABLE statements. We'll make two tables, one of which represents
"users" in an application, and another which represents zero or more "email
addresses" for each row in the "users" table:
.. sourcecode:: pycon+sql
(4,)
{stop}[(u'wendy', 2)]
A common system of dealing with duplicates in composed SELECT statements
is the DISTINCT modifier. A simple DISTINCT clause can be added using the
:meth:`.Select.distinct` method:
flushed. Without further steps, you instead would need to replace the existing
value with a new one on each parent object to detect changes. Note that
there's nothing wrong with this, as many applications may not require that the
values are ever mutated once created. For those which do have this requirement,
support for mutability is best applied using the ``sqlalchemy.ext.mutable``
extension - see the example in :ref:`mutable_toplevel`.
See :ref:`session_deleting_from_collections` for a description of this behavior.
why isn't my ``__init__()`` called when I load objects?
-------------------------------------------------------
See :ref:`mapping_constructors` for a description of this behavior.
many-to-one relationships load as according to foreign key attributes
regardless of the object being in any particular state.
Both techniques are **not recommended for general use**; they were added to suit
specific programming scenarios encountered by users which involve the repurposing
of the ORM's usual object states.
The recipe `ExpireRelationshipOnFKChange <http://www.sqlalchemy.org/trac/wiki/UsageRecipes/ExpireRelationshipOnFKChange>`_ features an example using SQLAlchemy events
A subquery comes in two general flavors, one known as a "scalar select"
which specifically must return exactly one row and one column, and the
other form which acts as a "derived table" and serves as a source of
rows for the FROM clause of another select. A scalar select is eligible
to be placed in the :term:`WHERE clause`, :term:`columns clause`,
ORDER BY clause or HAVING clause of the enclosing select, whereas the
derived table form is eligible to be placed in the FROM clause of the
The above subquery refers to the ``user_account`` table, which is not itself
in the ``FROM`` clause of this nested query. Instead, the ``user_account``
table is received from the enclosing query, where each row selected from
``user_account`` results in a distinct execution of the subquery.
A correlated subquery is in most cases present in the :term:`WHERE clause`
The ORM includes a wide variety of hooks available for subscription.
.. versionadded:: 0.7
The event supersedes the previous system of "extension" classes.
For an introduction to the event API, see :ref:`event_toplevel`. Non-ORM events
such as those regarding connections and low-level statement execution are described in
session.query(MyClass).options(lazyload('*'))
Above, the ``lazyload('*')`` option will supersede the ``lazy`` setting
of all :func:`.relationship` constructs in use for that query,
except for those which use the ``'dynamic'`` style of loading.
If some relationships specify
cause all those relationships to use ``'select'`` loading, e.g. emit a
SELECT statement when each attribute is accessed.
The option does not supersede loader options stated in the
query, such as :func:`.eagerload`,
:func:`.subqueryload`, etc. The query below will still use joined loading
for the ``widget`` relationship::
While the :func:`.synonym` is useful for simple mirroring, the use case
of augmenting attribute behavior with descriptors is better handled in modern
usage using the :ref:`hybrid attribute <mapper_hybrids>` feature, which
is more oriented towards Python descriptors. Technically, a :func:`.synonym`
can do everything that a :class:`.hybrid_property` can do, as it also supports
injection of custom SQL capabilities, but the hybrid is more straightforward
to use in more complex situations.
The above UPDATE statement is updating the row that not only matches
``user.id = 1``, it also is requiring that ``user.version_id = 1``, where "1"
is the last version identifier we've been known to use on this object.
If a transaction elsewhere has modified the row independently, this version id
will no longer match, and the UPDATE statement will report that no rows matched;
this is the condition that SQLAlchemy tests, that exactly one row matched our
UPDATE (or DELETE) statement. If zero rows match, that indicates our version
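A rough sketch of this version-counter check, using plain Python in place of a real UPDATE statement; all names here are illustrative:

```python
rows = [{"id": 1, "version_id": 1, "name": "ed"}]

def versioned_update(table, pk, known_version, **values):
    # Sketch of the check described above: the UPDATE matches both the
    # primary key AND the last known version id; exactly one row must
    # match, otherwise someone else modified the row concurrently.
    matched = 0
    for row in table:
        if row["id"] == pk and row["version_id"] == known_version:
            row.update(values)
            row["version_id"] += 1
            matched += 1
    if matched != 1:
        raise RuntimeError("stale data: %d rows matched" % matched)

versioned_update(rows, 1, 1, name="jack")  # bumps version_id to 2
```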
* The new instance is returned.
With :meth:`~.Session.merge`, the given "source"
instance is not modified nor is it associated with the target :class:`.Session`,
and remains available to be merged with any number of other :class:`.Session`
objects. :meth:`~.Session.merge` is useful for
taking the state of any kind of object structure without regard for its
:meth:`~.Session.merge` is an extremely useful method for many purposes. However,
it deals with the intricate border between objects that are transient/detached and
-those that are persistent, as well as the automated transferrence of state.
+those that are persistent, as well as the automated transference of state.
The wide variety of scenarios that can present themselves here often require a
more careful approach to the state of objects. Common problems with merge usually involve
some unexpected state regarding the object being passed to :meth:`~.Session.merge`.
relationship, SQLAlchemy's default behavior of setting a foreign key
to ``NULL`` can be caught in one of two ways:
- * The easiest and most common is just to to set the
+ * The easiest and most common is just to set the
foreign-key-holding column to ``NOT NULL`` at the database schema
level. An attempt by SQLAlchemy to set the column to NULL will
fail with a simple NOT NULL constraint exception.
of usage, and can in some cases lead to concurrent connection
checkouts.
- In the absense of a demarcated transaction, the :class:`.Session`
+ In the absence of a demarcated transaction, the :class:`.Session`
cannot make appropriate decisions as to when autoflush should
occur nor when auto-expiration should occur, so these features
should be disabled with ``autoflush=False, expire_on_commit=False``.
given a primary key, returns a list of shards
to search. here, we don't have any particular information from a
- pk so we just return all shard ids. often, youd want to do some
+ pk so we just return all shard ids. often, you'd want to do some
kind of round-robin strategy here so that requests are evenly
distributed among DBs.
make_transient(self)
self.id = None
- # history of the 'elements' collecton.
+ # history of the 'elements' collection.
# this is a tuple of groups: (added, unchanged, deleted)
hist = attributes.get_history(self, 'elements')
config_value_id = Column(ForeignKey('config_value.id'), primary_key=True)
- """Reference the primary key of hte ConfigValue object."""
+ """Reference the primary key of the ConfigValue object."""
config_value = relationship("ConfigValue", lazy="joined", innerjoin=True)
"""Reference the related ConfigValue object."""
* ``concurrency_level`` - set the backend policy with regards to threading
issues: by default SQLAlchemy uses policy 1. See the linked documents
- below for futher information.
+ below for further information.
.. seealso::
zxjdbc/JDBC layer. To allow multiple character sets to be sent from the
MySQL Connector/J JDBC driver, by default SQLAlchemy sets its
``characterEncoding`` connection property to ``UTF-8``. It may be
-overriden via a ``create_engine`` URL parameter.
+overridden via a ``create_engine`` URL parameter.
"""
import re
def get_table_names(self, connection, schema=None, **kw):
schema = self.denormalize_name(schema or self.default_schema_name)
- # note that table_names() isnt loading DBLINKed or synonym'ed tables
+ # note that table_names() isn't loading DBLINKed or synonym'ed tables
if schema is None:
schema = self.default_schema_name
s = sql.text(
locale. Under OCI_, this is controlled by the NLS_LANG
environment variable. Upon first connection, the dialect runs a
test to determine the current "decimal" character, which can be
-a comma "," for european locales. From that point forward the
+a comma "," for European locales. From that point forward the
outputtypehandler uses that character to represent a decimal
point. Note that cx_oracle 5.0.3 or greater is required
when dealing with numerics with locale settings that don't use
def _detect_decimal_char(self, connection):
"""detect if the decimal separator character is not '.', as
- is the case with european locale settings for NLS_LANG.
+ is the case with European locale settings for NLS_LANG.
cx_oracle itself uses similar logic when it formats Python
Decimal objects to strings on the bind side (as of 5.0.3),
def _parse_hstore(hstore_str):
- """Parse an hstore from it's literal string representation.
+ """Parse an hstore from its literal string representation.
Attempts to approximate PG's hstore input parsing rules as closely as
possible. Although currently this is not strictly necessary, since the
:func:`.create_engine`.
SQLAlchemy can also be instructed to skip the usage of the psycopg2
-``UNICODE`` extension and to instead utilize it's own unicode encode/decode
+``UNICODE`` extension and to instead utilize its own unicode encode/decode
services, which are normally reserved only for those DBAPIs that don't
fully support unicode directly. Passing ``use_native_unicode=False`` to
:func:`.create_engine` will disable usage of ``psycopg2.extensions.UNICODE``.
If this function returns a list of HSTORE identifiers, we then determine that
the ``HSTORE`` extension is present.
-2. If the ``use_native_hstore`` flag is at it's default of ``True``, and
+2. If the ``use_native_hstore`` flag is at its default of ``True``, and
we've detected that ``HSTORE`` oids are available, the
``psycopg2.extensions.register_hstore()`` extension is invoked for all
connections.
Constraint checking on SQLite has three prerequisites:
* At least version 3.6.19 of SQLite must be in use
-* The SQLite libary must be compiled *without* the SQLITE_OMIT_FOREIGN_KEY
+* The SQLite library must be compiled *without* the SQLITE_OMIT_FOREIGN_KEY
or SQLITE_OMIT_TRIGGER symbols enabled.
* The ``PRAGMA foreign_keys = ON`` statement must be emitted on all connections
before use.
:param regexp: regular expression which will be applied to incoming result
rows. If the regexp contains named groups, the resulting match dict is
applied to the Python datetime() constructor as keyword arguments.
- Otherwise, if positional groups are used, the the datetime() constructor
+ Otherwise, if positional groups are used, the datetime() constructor
is called with positional arguments via
``*map(int, match_obj.groups(0))``.
"""
incoming result rows. If the regexp contains named groups, the
resulting match dict is applied to the Python date() constructor
as keyword arguments. Otherwise, if positional groups are used, the
- the date() constructor is called with positional arguments via
+ date() constructor is called with positional arguments via
``*map(int, match_obj.groups(0))``.
"""
:param regexp: regular expression which will be applied to incoming result
rows. If the regexp contains named groups, the resulting match dict is
applied to the Python time() constructor as keyword arguments. Otherwise,
- if positional groups are used, the the time() constructor is called with
+ if positional groups are used, the time() constructor is called with
positional arguments via ``*map(int, match_obj.groups(0))``.
"""
def _resolve_type_affinity(self, type_):
"""Return a data type from a reflected column, using affinity tules.
- SQLite's goal for universal compatability introduces some complexity
+ SQLite's goal for universal compatibility introduces some complexity
during reflection, as a column's defined type might not actually be a
type that SQLite understands - or indeed, may not be defined *at all*.
Internally, SQLite handles this with a 'data type affinity' for each
self._reentrant_error = True
try:
# non-DBAPI error - if we already got a context,
- # or theres no string statement, don't wrap it
+ # or there's no string statement, don't wrap it
should_wrap = isinstance(e, self.dialect.dbapi.Error) or \
(statement is not None and context is None)
The connection passed here is a SQLAlchemy Connection object,
with full capabilities.
- The initalize() method of the base dialect should be called via
+ The initialize() method of the base dialect should be called via
super().
"""
:meth:`.Dialect.do_autocommit`
hook is provided for DBAPIs that need some extra commands emitted
after a commit in order to enter the next transaction, when the
- SQLAlchemy :class:`.Connection` is used in it's default "autocommit"
+ SQLAlchemy :class:`.Connection` is used in its default "autocommit"
mode.
:param dbapi_connection: a DBAPI connection, typically
_dispatch_target = SchemaEventTarget
def before_create(self, target, connection, **kw):
- """Called before CREATE statments are emitted.
+ """Called before CREATE statements are emitted.
:param target: the :class:`.MetaData` or :class:`.Table`
object which is the target of the event.
"""
def after_create(self, target, connection, **kw):
- """Called after CREATE statments are emitted.
+ """Called after CREATE statements are emitted.
:param target: the :class:`.MetaData` or :class:`.Table`
object which is the target of the event.
"""
def before_drop(self, target, connection, **kw):
- """Called before DROP statments are emitted.
+ """Called before DROP statements are emitted.
:param target: the :class:`.MetaData` or :class:`.Table`
object which is the target of the event.
"""
def after_drop(self, target, connection, **kw):
- """Called after DROP statments are emitted.
+ """Called after DROP statements are emitted.
:param target: the :class:`.MetaData` or :class:`.Table`
object which is the target of the event.
The :meth:`.PoolEvents.reset` event is usually followed by the
- the :meth:`.PoolEvents.checkin` event is called, except in those
+ :meth:`.PoolEvents.checkin` event, except in those
cases where the connection is discarded immediately after reset.
:param dbapi_connection: a DBAPI connection.
This event is called with the DBAPI exception instance
received from the DBAPI itself, *before* SQLAlchemy wraps the
- exception with it's own exception wrappers, and before any
+ exception with its own exception wrappers, and before any
other operations are performed on the DBAPI cursor; the
existing transaction remains in effect as well as any state
on the cursor.
object present is matched up to the class to which it is to be mapped,
if any, else it is skipped.
-3. As the :class:`.ForeignKeyConstraint` we are examining correponds to a reference
+3. As the :class:`.ForeignKeyConstraint` we are examining corresponds to a reference
from the immediate mapped class,
the relationship will be set up as a many-to-one referring to the referred class;
a corresponding one-to-many backref will be created on the referred class referring
primaryjoin=lambda: Target.id==cls.target_id
)
-or alternatively, the string form (which ultmately generates a lambda)::
+or alternatively, the string form (which ultimately generates a lambda)::
class RefTargetMixin(object):
@declared_attr
Above, the ``HasStringCollection`` mixin produces a :func:`.relationship`
which refers to a newly generated class called ``StringAttribute``. The
-``StringAttribute`` class is generated with it's own :class:`.Table`
+``StringAttribute`` class is generated with its own :class:`.Table`
definition which is local to the parent class making usage of the
``HasStringCollection`` mixin. It also produces an :func:`.association_proxy`
object which proxies references to the ``strings`` attribute onto the ``value``
SQLAlchemy's unit of work performs all INSERTs before DELETEs within a
single flush. In the case of a primary key, it will trade
an INSERT/DELETE of the same primary key for an UPDATE statement in order
- to lessen the impact of this lmitation, however this does not take place
+ to lessen the impact of this limitation, however this does not take place
for a UNIQUE column.
A future feature will allow the "DELETE before INSERT" behavior to be
possible, alleviating this limitation, though this feature will require
import sys
# set initial level to WARN. This so that
-# log statements don't occur in the absense of explicit
+# log statements don't occur in the absence of explicit
# logging being enabled for 'sqlalchemy'.
rootlogger = logging.getLogger('sqlalchemy')
if rootlogger.level == logging.NOTSET:
any other kind of SQL expression other than a :class:`.Column`,
the attribute will refer to the :attr:`.MapperProperty.info` dictionary
associated directly with the :class:`.ColumnProperty`, assuming the SQL
- expression itself does not have it's own ``.info`` attribute
+ expression itself does not have its own ``.info`` attribute
(which should be the case, unless a user-defined SQL construct
has defined one).
replaces it.
:param initiator: An instance of :class:`.attributes.Event`
representing the initiation of the event. May be modified
- from it's original value by backref handlers in order to control
+ from its original value by backref handlers in order to control
chained event propagation.
.. versionchanged:: 0.9.0 the ``initiator`` argument is now
:param value: the value being removed.
:param initiator: An instance of :class:`.attributes.Event`
representing the initiation of the event. May be modified
- from it's original value by backref handlers in order to control
+ from its original value by backref handlers in order to control
chained event propagation.
.. versionchanged:: 0.9.0 the ``initiator`` argument is now
or expired.
:param initiator: An instance of :class:`.attributes.Event`
representing the initiation of the event. May be modified
- from it's original value by backref handlers in order to control
+ from its original value by backref handlers in order to control
chained event propagation.
.. versionchanged:: 0.9.0 the ``initiator`` argument is now
See the section :ref:`concrete_inheritance` for an example.
:param confirm_deleted_rows: defaults to True; when a DELETE occurs
- of one more more rows based on specific primary keys, a warning is
+ of one or more rows based on specific primary keys, a warning is
emitted when the number of rows matched does not equal the number
of rows expected. This parameter may be set to False to handle the case
where database ON DELETE CASCADE rules may be deleting some of those
if self.inherit_condition is None:
# figure out inherit condition from our table to the
# immediate table of the inherited mapper, not its
- # full table which could pull in other stuff we dont
+ # full table which could pull in other stuff we don't
# want (allows test/inheritance.InheritTest4 to pass)
self.inherit_condition = sql_util.join_condition(
self.inherits.local_table,
setter = True
if isinstance(self.polymorphic_on, util.string_types):
- # polymorphic_on specified as as string - link
+ # polymorphic_on specified as a string - link
# it to mapped ColumnProperty
try:
self.polymorphic_on = self._props[self.polymorphic_on]
# attempt to skip dependencies that are not
# significant to the inheritance chain
# for two tables that are related by inheritance.
- # while that dependency may be important, it's techinically
+ # while that dependency may be important, it's technically
# not what we mean to sort on here.
parent = table_to_mapper.get(fk.parent.table)
dep = table_to_mapper.get(fk.column.table)
.. seealso::
- :ref:`self_referential` - in-depth explaination of how
+ :ref:`self_referential` - in-depth explanation of how
:paramref:`~.relationship.remote_side`
is used to configure self-referential relationships.
self._deleted = {}
# TODO: need much more test coverage for bind_mapper() and similar !
- # TODO: + crystalize + document resolution order
+ # TODO: + crystallize + document resolution order
# vis. bind_mapper/bind_table
def bind_mapper(self, mapper, bind):
e.add_detail(
"raised as a result of Query-invoked autoflush; "
"consider using a session.no_autoflush block if this "
- "flush is occuring prematurely")
+ "flush is occurring prematurely")
util.raise_from_cause(e)
def refresh(self, instance, attribute_names=None, lockmode=None):
def setup_query(self, context, entity, path, loadopt, adapter, \
column_collection=None, parentmapper=None,
**kwargs):
- """Add a left outer join to the statement thats being constructed."""
+ """Add a left outer join to the statement that's being constructed."""
if not context.query._enable_eagerloads:
return
self.mapper.identity_key_from_row(row, decorator)
return decorator
except KeyError:
- # no identity key - dont return a row
+ # no identity key - don't return a row
# processor, will cause a degrade to lazy
return False
for key in table_map:
table = table_map[key]
- # mysql doesnt like selecting from a select;
+ # mysql doesn't like selecting from a select;
# make it an alias of the select
if isinstance(table, sql.Select):
table = table.alias()
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
-"""Compatiblity namespace for sqlalchemy.sql.schema and related.
+"""Compatibility namespace for sqlalchemy.sql.schema and related.
"""
c in implicit_return_defaults:
self.returning.append(c)
elif not c.primary_key:
- # dont add primary key column to postfetch
+ # don't add primary key column to postfetch
self.postfetch.append(c)
else:
values.append(
%(fullname)s - the Table name including schema, quoted if needed
The DDL's "context", if any, will be combined with the standard
- substutions noted above. Keys present in the context will override
+ substitutions noted above. Keys present in the context will override
the standard substitutions.
"""
class _CreateDropBase(DDLElement):
- """Base class for DDL constucts that represent CREATE and DROP or
+ """Base class for DDL constructs that represent CREATE and DROP or
equivalents.
The common theme of _CreateDropBase is a single
passed to :func:`.type_coerce` as targets.
For example, if a type implements the :meth:`.TypeEngine.bind_expression`
method or :meth:`.TypeEngine.bind_processor` method or equivalent,
- these functions will take effect at statement compliation/execution time
+ these functions will take effect at statement compilation/execution time
when a literal value is passed, as in::
# bound-value handling of MyStringType will be applied to the
or a Python string which will be coerced into a bound literal value.
:param type_: A :class:`.TypeEngine` class or instance indicating
- the type to which the the expression is coerced.
+ the type to which the expression is coerced.
.. seealso::
expr = users_table.c.name == 'Wendy'
The above expression will produce a :class:`.BinaryExpression`
- contruct, where the left side is the :class:`.Column` object
+ construct, where the left side is the :class:`.Column` object
representing the ``name`` column, and the right side is a :class:`.BindParameter`
representing the literal value::
languages. It returns an instance of :class:`.Case`.
:func:`.case` in its usual form is passed a list of "when"
- contructs, that is, a list of conditions and results as tuples::
+ constructs, that is, a list of conditions and results as tuples::
from sqlalchemy import case
"""Represent an expression that is ``LEFT <operator> RIGHT``.
A :class:`.BinaryExpression` is generated automatically
- whenever two column expressions are used in a Python binary expresion::
+ whenever two column expressions are used in a Python binary expression::
>>> from sqlalchemy.sql import column
>>> column('a') + column('b')
def _maybe_wrap_callable(self, fn):
"""Wrap callables that don't accept a context.
- This is to allow easy compatiblity with default callables
+ This is to allow easy compatibility with default callables
that aren't specific to accepting of a context.
"""
:param \*expressions:
Column expressions to include in the index. The expressions
are normally instances of :class:`.Column`, but may also
- be arbitrary SQL expressions which ultmately refer to a
+ be arbitrary SQL expressions which ultimately refer to a
:class:`.Column`.
:param unique=False:
The values associated with each "constraint class" or "constraint
mnemonic" key are string naming templates, such as
``"uq_%(table_name)s_%(column_0_name)s"``,
- which decribe how the name should be composed. The values associated
+ which describe how the name should be composed. The values associated
with user-defined "token" keys should be callables of the form
``fn(constraint, table)``, which accepts the constraint/index
object and :class:`.Table` as arguments, returning a string
self.__engines[bind] = e
self.context._engine = e
else:
- # TODO: this is squirrely. we shouldnt have to hold onto engines
+ # TODO: this is squirrelly. we shouldn't have to hold onto engines
# in a case like this
if bind not in self.__engines:
self.__engines[bind] = bind
Use this parameter to explicitly specify "from" objects which are not
automatically locatable. This could include
:class:`~sqlalchemy.schema.Table` objects that aren't otherwise present,
- or :class:`.Join` objects whose presence will supercede that of the
+ or :class:`.Join` objects whose presence will supersede that of the
:class:`~sqlalchemy.schema.Table` objects already located in the other
clauses.
# here the same item is in _correlate as in _from_obj but the
# _correlate version has an annotation on it - (specifically
# RelationshipProperty.Comparator._criterion_exists() does
- # this). Also keep _correlate liberally open with it's previous
+ # this). Also keep _correlate liberally open with its previous
# contents, as this set is used for matching, not rendering.
self._correlate = set(clone(f) for f in
self._correlate).union(self._correlate)
# 4. clone other things. The difficulty here is that Column
# objects are not actually cloned, and refer to their original
# .table, resulting in the wrong "from" parent after a clone
- # operation. Hence _from_cloned and _from_obj supercede what is
+ # operation. Hence _from_cloned and _from_obj supersede what is
# present here.
self._raw_columns = [clone(c, **kw) for c in self._raw_columns]
for attr in '_whereclause', '_having', '_order_by_clause', \
except KeyError:
pass
else:
- # couldnt adapt - so just return the type itself
+ # couldn't adapt - so just return the type itself
# (it may be a user-defined type)
return typeobj
# if we adapted the given generic type to a database-specific type,
def visit(element):
if isinstance(element, ScalarSelect):
- # we dont want to dig into correlated subqueries,
+ # we don't want to dig into correlated subqueries,
# those are just column elements by themselves
yield element
elif element.__visit_name__ == 'binary' and \
This function is primarily used to determine the most minimal "primary key"
from a selectable, by reducing the set of primary key columns present
- in the the selectable to just those that are not repeated.
+ in the selectable to just those that are not repeated.
"""
ignore_nonexistent_tables = kw.pop('ignore_nonexistent_tables', False)
# so the error doesn't at least keep happening.
pool._refs.clear()
_STRAY_CONNECTION_FAILURES = 0
- assert False, "Stray conections in cleanup: %s" % err
+ assert False, "Stray connections in cleanup: %s" % err
def eq_(a, b, msg=None):
def _after_test_ctx(self):
# this can cause a deadlock with pg8000 - pg8000 acquires
- # prepared statment lock inside of rollback() - if async gc
+ # prepared statement lock inside of rollback() - if async gc
# is collecting in finalize_fairy, deadlock.
# not sure if this should be if pypy/jython only.
# note that firebird/fdb definitely needs this though
_recursion_stack.add(id(self))
try:
- # pick the entity thats not SA persisted as the source
+ # pick the entity that's not SA persisted as the source
try:
self_key = sa.orm.attributes.instance_state(self).key
except sa.orm.exc.NO_STATE:
target database.
External dialect test suites should subclass SuiteRequirements
-to provide specific inclusion/exlusions.
+to provide specific inclusion/exclusions.
"""
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
-"""Compatiblity namespace for sqlalchemy.sql.types.
+"""Compatibility namespace for sqlalchemy.sql.types.
"""
try:
del self[item[0]]
except KeyError:
- # if we couldnt find a key, most
+ # if we couldn't find a key, most
# likely some other thread broke in
# on us. loop around and try again
break
return self.scopefunc() in self.registry
def set(self, obj):
- """Set the value forthe current scope."""
+ """Set the value for the current scope."""
self.registry[self.scopefunc()] = obj
Probes a class's __init__ method, collecting all named arguments. If the
__init__ defines a \**kwargs catch-all, then the constructor is presumed to
- pass along unrecognized keywords to it's base classes, and the collection
+ pass along unrecognized keywords to its base classes, and the collection
process is repeated recursively on each of the bases.
Uses a subset of inspect.getargspec() to cut down on method overhead.
sess.delete(a)
sess.flush()
- # dont need to clear_mappers()
+ # don't need to clear_mappers()
del B
del A
sess.delete(a)
sess.flush()
- # dont need to clear_mappers()
+ # don't need to clear_mappers()
del B
del A
def teardown(self):
# the tests leave some fake connections
- # around which dont necessarily
+ # around which don't necessarily
# get gc'ed as quickly as we'd like,
# on backends like pypy, python3.2
pool_module._refs.clear()
testing.db,
lambda: engine.execute(t1.insert()),
ExactSQL("INSERT INTO t1 DEFAULT VALUES"),
- # we dont have an event for
+ # we don't have an event for
# "SELECT @@IDENTITY" part here.
# this will be in 0.8 with #2459
)
startswith(".".join(str(x) for x in v))
# currently not passing with pg 9.3 that does not seem to generate
- # any notices here, woudl rather find a way to mock this
+ # any notices here, would rather find a way to mock this
@testing.only_on('postgresql+psycopg2', 'psycopg2-specific feature')
def _test_notice_logging(self):
log = logging.getLogger('sqlalchemy.dialects.postgresql')
table.insert(inline=True).execute({'data': 'd8'})
- # note that the test framework doesnt capture the "preexecute"
+ # note that the test framework doesn't capture the "preexecute"
# of a sequence or default. we just see it in the bind params.
self.assert_sql(self.engine, go, [], with_sequences=[
psycopg will return a datetime with a tzinfo attached to it, if
postgresql returns it. python then will not let you compare a
- datetime with a tzinfo to a datetime that doesnt have one. this
+ datetime with a tzinfo to a datetime that doesn't have one. this
test illustrates two ways to have datetime types with and without
timezone info. """
self._test_stmt_exception_pickleable(Exception("hello world"))
@testing.crashes("postgresql+psycopg2",
- "Older versions dont support cursor pickling, newer ones do")
+ "Older versions don't support cursor pickling, newer ones do")
@testing.fails_on("mysql+oursql",
"Exception doesn't come back exactly the same from pickle")
@testing.fails_on("mysql+mysqlconnector",
assert not conn.closed
assert conn.invalidated
- # close shouldnt break
+ # close shouldn't break
conn.close()
name = Column('name', String(50))
# this is not "valid" but we want to test that Address.id
- # doesnt get stuck into user's table
+ # doesn't get stuck into user's table
adr_count = Address.id
# assert that the "id" column is available without a second
# load. as of 0.7, the ColumnProperty tests all columns
- # in it's list to see which is present in the row.
+ # in its list to see which is present in the row.
sess.expunge_all()
# create an Admin, and append a Role. the dependency processors
# corresponding to the "roles" attribute for the Admin mapper and the User mapper
- # have to ensure that two dependency processors dont fire off and insert the
+ # have to ensure that two dependency processors don't fire off and insert the
# many to many row twice.
a = Admin()
a.roles.append(adminrole)
def test_eager_terminate(self):
"""Eager query generation does not include the same mapper's table twice.
- Or, that bi-directional eager loads dont include each other in eager
+ Or, that bi-directional eager loads don't include each other in eager
query generation.
"""
attributes.register_attribute(MyTest2, 'b', uselist=False,
useobject=False)
- # shouldnt be pickling callables at the class level
+ # shouldn't be pickling callables at the class level
def somecallable(state, passive):
return None
p = Post("post 5")
- # setting blog doesnt call 'posts' callable, calls with no fetch
+ # setting blog doesn't call 'posts' callable, calls with no fetch
p.blog = b
eq_(
lazy_posts.mock_calls, [
# backref fires
assert u1.address is a2
- # didnt work this way tho
+ # didn't work this way tho
assert a1.user is u1
# moves appropriately after commit
# the bug here is that the dependency sort comes up with T1/T2 in a
# cycle, but there are no T1/T2 objects to be saved. therefore no
# "cyclical subtree" gets generated, and one or the other of T1/T2
- # gets lost, and processors on T3 dont fire off. the test will then
+ # gets lost, and processors on T3 don't fire off. the test will then
# fail because the FK's on T3 are not nullable.
o3 = T3()
o3.t1 = o1
def test_cycle(self):
"""
- This test has a peculiar aspect in that it doesnt create as many
+ This test has a peculiar aspect in that it doesn't create as many
dependent relationships as the other tests, and revealed a small
glitch in the circular dependency sorting.
self.sql_count_(0, go)
def test_unsaved_group(self):
- """Deferred loading doesnt kick in when just PK cols are set"""
+ """Deferred loading doesn't kick in when just PK cols are set"""
orders, Order = self.tables.orders, self.classes.Order
assert o2 not in sess.dirty
# this will mark it as 'dirty', but nothing actually changed
o2.description = 'order 3'
- # therefore the flush() shouldnt actually issue any SQL
+ # therefore the flush() shouldn't actually issue any SQL
self.assert_sql_count(testing.db, sess.flush, 0)
def test_map_selectable_wo_deferred(self):
# change the value in the DB
users.update(users.c.id==7, values=dict(name='jack')).execute()
sess.expire(u)
- # object isnt refreshed yet, using dict to bypass trigger
+ # object isn't refreshed yet, using dict to bypass trigger
assert u.__dict__.get('name') != 'jack'
assert 'name' in attributes.instance_state(u).expired_attributes
assert 'addresses' not in u.__dict__
# hit the lazy loader. just does the lazy load,
- # doesnt do the overall refresh
+ # doesn't do the overall refresh
def go():
assert u.addresses[0].email_address=='ed@wood.com'
self.assert_sql_count(testing.db, go, 1)
def test_refresh_with_lazy(self):
"""test that when a lazy loader is set as a trigger on an object's attribute
- (at the attribute level, not the class level), a refresh() operation doesnt
+ (at the attribute level, not the class level), a refresh() operation doesn't
fire the lazy loader or create any problems"""
User, Address, addresses, users = (self.classes.User,
sess.add(b2)
sess.flush()
- # theres an overlapping ForeignKey here, so not much option except
- # to artifically control the flush order
+ # there's an overlapping ForeignKey here, so not much option except
+ # to artificially control the flush order
b2.sub2 = [s2]
sess.flush()
x = "something"
@property
def y(self):
- return "somethign else"
+ return "something else"
m = mapper(Foo, users, properties={"addresses":relationship(Address)})
x = "something"
@property
def y(self):
- return "somethign else"
+ return "something else"
m = mapper(Foo, users)
a1 = aliased(Foo)
'converted' to represent the correct objects. However, at the
moment I'd rather not support this use case; if you are merging
with load=False, you're typically dealing with caching and the
- merged objects shouldnt be 'dirty'.
+ merged objects shouldn't be 'dirty'.
"""
self.assert_sql_count(testing.db, go, 0)
def test_no_load_disallows_dirty(self):
- """load=False doesnt support 'dirty' objects right now
+ """load=False doesn't support 'dirty' objects right now
(see test_no_load_with_eager()). Therefore let's assert it.
'somenewaddress')
# this use case is not supported; this is with a pending Address
- # on the pre-merged object, and we currently dont support
+ # on the pre-merged object, and we currently don't support
# 'dirty' objects being merged with load=False. in this case,
# the empty '_state.parents' collection would be an issue, since
# the optimistic flag is False in _is_orphan() for pending
users.update(values={User.username:'jack'}).execute(username='ed')
# expire/refresh works off of primary key. the PK is gone
- # in this case so theres no way to look it up. criterion-
+ # in this case so there's no way to look it up. criterion-
# based session invalidation could solve this [ticket:911]
sess.expire(u1)
assert_raises(sa.orm.exc.ObjectDeletedError, getattr, u1, 'username')
u.addresses[0].email_address = 'lala'
u.orders[1].items[2].description = 'item 12'
- # test that lazy load doesnt change child items
+ # test that lazy load doesn't change child items
s.query(User).populate_existing().all()
assert u.addresses[0].email_address == 'lala'
assert u.orders[1].items[2].description == 'item 12'
assert [Address(id=2), Address(id=3), Address(id=4)] == \
sess.query(Address).join("user").filter(Address.user.has(User.name.like('%ed%'), id=8)).order_by(Address.id).all()
- # test has() doesnt' get subquery contents adapted by aliased join
+ # test has() doesn't get subquery contents adapted by aliased join
assert [Address(id=2), Address(id=3), Address(id=4)] == \
sess.query(Address).join("user", aliased=True).filter(Address.user.has(User.name.like('%ed%'), id=8)).order_by(Address.id).all()
assert len(u.addresses) == 3
assert newad not in u.addresses
- # pending objects dont get expired
+ # pending objects don't get expired
assert newad.email_address == 'a new address'
def test_expunge_cascade(self):
session.add_all((u, u2))
session.flush()
- # assert the first one retreives the same from the identity map
+ # assert the first one retrieves the same from the identity map
nu = session.query(m).get(u.id)
assert u is nu
mapper(Order, orders, properties={
'description': sa.orm.deferred(orders.c.description)})
- # dont set deferred attribute, commit session
+ # don't set deferred attribute, commit session
o = Order(id=42)
session = create_session(autocommit=False)
session.add(o)
else:
s1.commit()
- # new in 0.5 ! dont need to close the session
+ # new in 0.5 ! don't need to close the session
f1 = s1.query(Foo).get(f1.id)
f2 = s1.query(Foo).get(f2.id)
expected_test_params_list
)
- # check that params() doesnt modify original statement
+ # check that params() doesn't modify original statement
s = select([table1], or_(table1.c.myid == bindparam('myid'),
table2.c.otherid ==
bindparam('myotherid')))
@testing.fails_on_everything_except('postgresql')
def test_as_from(self):
- # TODO: shouldnt this work on oracle too ?
+ # TODO: shouldn't this work on oracle too ?
x = func.current_date(bind=testing.db).execute().scalar()
y = func.current_date(bind=testing.db).select().execute().scalar()
z = func.current_date(bind=testing.db).scalar()
def setup_class(cls):
global A, B
- # establish two ficticious ClauseElements.
+ # establish two fictitious ClauseElements.
# define deep equality semantics as well as deep
# identity semantics.
class A(ClauseElement):
table_c.c.bar.onupdate.arg) == 'z'
assert isinstance(table2_c.c.id.default, Sequence)
- # constraints dont get reflected for any dialect right
+ # constraints don't get reflected for any dialect right
# now
if has_constraints:
jj = select([table1.c.col1.label('bar_col1')])
jjj = join(table1, jj, table1.c.col1 == jj.c.bar_col1)
- # test column directly agaisnt itself
+ # test column directly against itself
assert jjj.corresponding_column(jjj.c.table1_col1) \
is jjj.c.table1_col1
)
def test_select_composition_four(self):
- # test that use_labels doesnt interfere with literal columns
+ # test that use_labels doesn't interfere with literal columns
self.assert_compile(
select(["column1", "column2", table1.c.myid], from_obj=table1,
use_labels=True),
)
def test_select_composition_five(self):
- # test that use_labels doesnt interfere
+ # test that use_labels doesn't interfere
# with literal columns that have textual labels
self.assert_compile(
select(["column1 AS foobar", "column2 AS hoho", table1.c.myid],
def test_select_composition_six(self):
# test that "auto-labeling of subquery columns"
- # doesnt interfere with literal columns,
- # exported columns dont get quoted
+ # doesn't interfere with literal columns,
+ # exported columns don't get quoted
self.assert_compile(
select(["column1 AS foobar", "column2 AS hoho", table1.c.myid],
from_obj=[table1]).select(),
MyType = self.MyType
# test coerce from nulltype - e.g. use an object that
- # doens't match to a known type
+ # doesn't match to a known type
class MyObj(object):
def __str__(self):
return "THISISMYOBJ"