added 'version_id' keyword argument to mapper. this keyword should reference a
Column object with type Integer, preferably non-nullable, which will be used on
the mapped table to track version numbers. this number is incremented on each
- save operation and is specifed in the UPDATE/DELETE conditions so that it
+ save operation and is specified in the UPDATE/DELETE conditions so that it
factors into the returned row count, which results in a ConcurrencyError if the
value received is not the expected count.
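In modern releases this mapper argument is spelled ``version_id_col``; a minimal sketch in declarative form (the ``Widget`` model and all names here are illustrative, not from the original text):

```python
# Sketch: a mapping using the version_id_col setting described above.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Widget(Base):
    __tablename__ = "widget"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    version_id = Column(Integer, nullable=False)
    __mapper_args__ = {"version_id_col": version_id}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
w = Widget(name="a")
session.add(w)
session.commit()   # INSERT sets version_id to 1
w.name = "b"
session.commit()   # UPDATE includes "WHERE ... version_id = 1" and bumps it
```

If a concurrent transaction had already changed the row, the UPDATE would match zero rows and the stale-data error described above would be raised instead.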
changed to StaleDataError, and descriptive
error messages have been revised to reflect
exactly what the issue is. Both names will
- remain available for the forseeable future
+ remain available for the foreseeable future
for schemes that may be specifying
ConcurrentModificationError in an "except:"
clause.
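A sketch of what that compatibility claim means in practice (assuming the current ``sqlalchemy.orm.exc`` module path; the legacy name is retained as an alias of the new exception):

```python
# Code catching the legacy name still catches the new exception.
from sqlalchemy.orm.exc import ConcurrentModificationError, StaleDataError

def provoke():
    try:
        raise StaleDataError("UPDATE matched an unexpected row count")
    except ConcurrentModificationError as err:
        # the legacy name matches, since it aliases StaleDataError
        return str(err)
```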
Fixed an 0.9 regression where ORM instance or mapper events applied
to a base class such as a declarative base with the propagate=True
flag would fail to apply to existing mapped classes which also
- used inheritance due to an assertion. Addtionally, repaired an
+ used inheritance due to an assertion. Additionally, repaired an
attribute error which could occur during removal of such an event,
depending on how it was first assigned.
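The pattern involved can be sketched as follows (an illustrative single-table-inheritance mapping; none of these names come from the original text):

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Employee(Base):
    __tablename__ = "employee"
    id = Column(Integer, primary_key=True)
    type = Column(String(20))
    __mapper_args__ = {"polymorphic_on": type,
                       "polymorphic_identity": "employee"}

class Manager(Employee):
    __mapper_args__ = {"polymorphic_identity": "manager"}

loaded = []

# propagate=True applies the listener to mapped subclasses as well
@event.listens_for(Employee, "load", propagate=True)
def on_load(target, context):
    loaded.append(type(target).__name__)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
s = Session(engine)
s.add(Manager(id=1))
s.commit()
s.expunge_all()

obj = s.query(Employee).first()  # fresh load fires the event
```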
adaptation which goes on has been made more robust, such that if a descriptor
returns another instrumented attribute, rather than a compound SQL
expression element, the operation will still proceed.
- Addtionally, the "adapted" operator will retain its class; previously,
+ Additionally, the "adapted" operator will retain its class; previously,
a change in class from ``InstrumentedAttribute`` to ``QueryableAttribute``
(a superclass) would interact with Python's operator system such that
an expression like ``aliased(MyClass.x) > MyClass.x`` would reverse itself
:versions: 1.1.0b3
Fixed bug in :paramref:`.Select.with_for_update.of`, where the Oracle
- "rownum" approach to LIMIT/OFFSET would fail to accomodate for the
+ "rownum" approach to LIMIT/OFFSET would fail to accommodate for the
expressions inside the "OF" clause, which must be stated at the topmost
level referring to the expressions within the subquery. The expressions are
now added to the subquery if needed.
:tickets: 3690
Fixed bug where when using ``case_sensitive=False`` with an
- :class:`.Engine`, the result set would fail to correctly accomodate
+ :class:`.Engine`, the result set would fail to correctly accommodate
for duplicate column names in the result set, causing an error
when the statement is executed in 1.0, and preventing the
"ambiguous column" exception from functioning in 1.1.
pool of only a single connection was used, this means the pool would
be fully checked out until that stack trace was freed. This mostly
impacts very specific debugging scenarios and is unlikely to have been
- noticable in any production application. The fix applies an
+ noticeable in any production application. The fix applies an
explicit checkin of the record before re-raising the caught exception.
Fixed bug where the truncation of long labels in SQL could produce
a label that overlapped another label that is not truncated; this is
- because the length threshhold for truncation was greater than
+ because the length threshold for truncation was greater than
the portion of the label that remains after truncation. These
two values have now been made the same; label_length - 6.
The effect here is that shorter column labels will be "truncated"
The Postgresql :class:`.postgresql.ENUM` type will emit a
DROP TYPE instruction when a plain ``table.drop()`` is called,
assuming the object is not associated directly with a
- :class:`.MetaData` object. In order to accomodate the use case of
+ :class:`.MetaData` object. In order to accommodate the use case of
an enumerated type shared between multiple tables, the type should
be associated directly with the :class:`.MetaData` object; in this
case the type will only be created at the metadata level, or if
A new method is added to :class:`.TypeEngine` :meth:`.TypeEngine.literal_processor`
as well as :meth:`.TypeDecorator.process_literal_param` for :class:`.TypeDecorator`
-which take on the task of rendering so-called "inline literal paramters" - parameters
+which take on the task of rendering so-called "inline literal parameters" - parameters
that normally render as "bound" values, but are instead being rendered inline
into the SQL statement due to the compiler configuration. This feature is used
when generating DDL for constructs such as :class:`.CheckConstraint`, as well
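A sketch of the :class:`.TypeDecorator` hook (the ``UpperString`` type and its behavior are invented purely for illustration):

```python
from sqlalchemy import String, TypeDecorator, column, select

class UpperString(TypeDecorator):
    """Illustrative type: upper-cases values rendered as inline literals."""
    impl = String
    cache_ok = True

    def process_literal_param(self, value, dialect):
        # invoked only when the compiler renders inline literals,
        # e.g. during DDL generation or with literal_binds=True;
        # ordinary bound-parameter execution does not call this
        return value.upper()

stmt = select(column("x")).where(column("name", UpperString()) == "abc")
sql = str(stmt.compile(compile_kwargs={"literal_binds": True}))
```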
using caching, which upon successive calls features vastly reduced
Python function call overhead (over 75%). By specifying a
:class:`.Query` object as a series of lambdas which are only invoked
-once, a query as a pre-compiled unit begins to be feasable::
+once, a query as a pre-compiled unit begins to be feasible::
from sqlalchemy.ext import baked
from sqlalchemy import bindparam
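Continuing in that vein, a self-contained sketch of a baked query (model and data are illustrative; note the baked extension is considered a legacy API in recent releases):

```python
from sqlalchemy import Column, Integer, String, bindparam, create_engine
from sqlalchemy.ext import baked
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(User(id=1, name="ed"))
session.commit()

bakery = baked.bakery()

def lookup(session, ident):
    # each lambda runs only once; subsequent calls reuse the cached,
    # pre-compiled query, skipping most construction overhead
    bq = bakery(lambda s: s.query(User))
    bq += lambda q: q.filter(User.id == bindparam("ident"))
    return bq(session).params(ident=ident).first()

user = lookup(session, 1)
```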
an unresolvable cycle; in this case a warning is emitted, and the tables
are dropped with **no** ordering, which is usually fine on SQLite unless
constraints are enabled. To resolve the warning and proceed with at least
-a partial ordering on a SQLite database, particuarly one where constraints
+a partial ordering on a SQLite database, particularly one where constraints
are enabled, re-apply "use_alter" flags to those
:class:`.ForeignKey` and :class:`.ForeignKeyConstraint` objects which should
be explicitly omitted from the sort.
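For instance, a mutually-dependent pair of tables where one side is flagged (table and constraint names are illustrative):

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table

metadata = MetaData()

# two tables that reference each other; flagging one ForeignKey with
# use_alter removes it from the topological drop sort
a = Table(
    "a", metadata,
    Column("id", Integer, primary_key=True),
    Column("b_id", Integer,
           ForeignKey("b.id", use_alter=True, name="fk_a_b")),
)
b = Table(
    "b", metadata,
    Column("id", Integer, primary_key=True),
    Column("a_id", Integer, ForeignKey("a.id")),
)

flagged = next(iter(a.c.b_id.foreign_keys))
```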
(None,)
Note above, there is a comparison ``WHERE ? = address.user_id`` where the
-bound value ``?`` is receving ``None``, or ``NULL`` in SQL. **This will
+bound value ``?`` is receiving ``None``, or ``NULL`` in SQL. **This will
always return False in SQL**. The comparison here would in theory
generate SQL as follows::
LIMIT :param_1
In the case that the LEFT OUTER JOIN returns more than one row, the ORM
-has always emitted a warning here and ignored addtional results for
+has always emitted a warning here and ignored additional results for
``uselist=False``, so the results in that error situation should not change.
:ticket:`3249`
table.drop(engine) # will emit DROP TABLE and DROP TYPE - new for 1.0
This means that if a second table also has an enum named 'myenum', the
-above DROP operation will now fail. In order to accomodate the use case
+above DROP operation will now fail. In order to accommodate the use case
of a common shared enumerated type, the behavior of a metadata-associated
enumeration has been enhanced.
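The metadata-associated form can be sketched as follows (names are illustrative):

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects.postgresql import ENUM

metadata = MetaData()

# associating the type with MetaData makes it a shared object that is
# created and dropped independently, rather than per-table
myenum = ENUM("one", "two", "three", name="myenum", metadata=metadata)

t1 = Table("t1", metadata, Column("id", Integer), Column("v", myenum))
t2 = Table("t2", metadata, Column("id", Integer), Column("v", myenum))

# t1.drop(engine) would no longer emit DROP TYPE;
# metadata.drop_all(engine) drops the type after both tables
```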
for INSERT and UPDATE statements and "OUTPUT DELETED" for DELETE statements;
the key caveat is that triggers are not supported in conjunction with this
keyword. On Oracle, it is known as "RETURNING...INTO", and requires that the
- value be placed into an OUT paramter, meaning not only is the syntax awkward,
+ value be placed into an OUT parameter, meaning not only is the syntax awkward,
but it can also only be used for one row at a time.
SQLAlchemy's :meth:`.UpdateBase.returning` system provides a layer of abstraction
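The abstraction looks the same regardless of backend; for example, compiled against the Postgresql dialect (table and column names illustrative):

```python
from sqlalchemy import column, insert, table
from sqlalchemy.dialects import postgresql

t = table("t", column("id"), column("x"))

# the same .returning() call compiles to RETURNING on Postgresql,
# OUTPUT INSERTED on SQL Server, and RETURNING...INTO on Oracle
stmt = insert(t).values(x=5).returning(t.c.id)
sql = str(stmt.compile(dialect=postgresql.dialect()))
```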
The rationale for this system is to greatly reduce Python interpreter
overhead for everything that occurs **before the SQL is emitted**.
The caching of the "baked" system does **not** in any way reduce SQL calls or
-cache the **return results** from the database. A technique that demonstates
+cache the **return results** from the database. A technique that demonstrates
the caching of the SQL calls and result sets themselves is available in
:ref:`examples_caching`.
:func:`~sqlalchemy.orm.joinedload()` option except it is assumed that the
:class:`~sqlalchemy.orm.query.Query` will specify the appropriate joins
explicitly. Below, we specify a join between ``User`` and ``Address``
-and addtionally establish this as the basis for eager loading of ``User.addresses``::
+and additionally establish this as the basis for eager loading of ``User.addresses``::
class User(Base):
__tablename__ = 'user'
FROM a
WHERE ? = a.b_id
-This SELECT is redundant becasue ``b.a`` is the same value as ``a1``. We
+This SELECT is redundant because ``b.a`` is the same value as ``a1``. We
can create an on-load rule to populate this for us::
from sqlalchemy import event
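A self-contained sketch of such a rule (the ``A``/``B`` mapping is illustrative; ``set_committed_value`` is used so the assignment records no change history):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, event
from sqlalchemy.orm import Session, declarative_base, relationship
from sqlalchemy.orm.attributes import set_committed_value

Base = declarative_base()

class A(Base):
    __tablename__ = "a"
    id = Column(Integer, primary_key=True)
    bs = relationship("B", back_populates="a")

class B(Base):
    __tablename__ = "b"
    id = Column(Integer, primary_key=True)
    a_id = Column(Integer, ForeignKey("a.id"))
    a = relationship("A", back_populates="bs")

@event.listens_for(A, "load")
def populate_backrefs(target, context):
    # when an A loads, pre-populate B.a on its children so that
    # touching b.a later emits no redundant SELECT
    for b in target.bs:
        set_committed_value(b, "a", target)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
s = Session(engine)
s.add(A(id=1, bs=[B(id=1)]))
s.commit()
s.expunge_all()

a1 = s.query(A).first()
b1 = a1.bs[0]   # b1.a is already populated; no extra SELECT
```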
Objects can appear in the :class:`.Session` directly in the :term:`persistent`
state when they are loaded from the database. Tracking this state transition
-is synonymous with tracking objects as they are loaded, and is synonomous
+is synonymous with tracking objects as they are loaded, and is synonymous
with using the :meth:`.InstanceEvents.load` instance-level event. However, the
:meth:`.SessionEvents.loaded_as_persistent` event is provided as a
session-centric hook for intercepting objects as they enter the persistent
:class:`.Session`, if desired, by placing them into the
:attr:`.Session.info` dictionary.
-An event based approach is also feasable. A simple recipe that provides
+An event-based approach is also feasible. A simple recipe that provides
"strong referencing" behavior for all objects as they remain within
the :term:`persistent` state is as follows::
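A sketch of such a recipe, along the lines of the one in the session events documentation (the event names are the 1.1 state-transition events; the ``Thing`` model is illustrative):

```python
from sqlalchemy import Column, Integer, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Thing(Base):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)

def strong_reference_session(session):
    @event.listens_for(session, "pending_to_persistent")
    @event.listens_for(session, "deleted_to_persistent")
    @event.listens_for(session, "detached_to_persistent")
    @event.listens_for(session, "loaded_as_persistent")
    def strong_ref_object(sess, instance):
        # hold a strong reference for as long as the object is persistent
        sess.info.setdefault("refs", set()).add(instance)

    @event.listens_for(session, "persistent_to_detached")
    @event.listens_for(session, "persistent_to_deleted")
    @event.listens_for(session, "persistent_to_transient")
    def deref_object(sess, instance):
        sess.info.get("refs", set()).discard(instance)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
s = Session(engine)
strong_reference_session(s)

t = Thing(id=1)
s.add(t)
s.commit()   # pending -> persistent fires at flush time
```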
This
-is an auxilliary use case suitable for testing and bulk insert scenarios.
+is an auxiliary use case suitable for testing and bulk insert scenarios.
MAX on VARCHAR / NVARCHAR
-------------------------
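Per the SQL Server dialect's documented handling, "MAX" is requested by leaving the length as ``None``, which stays an "unlengthed" VARCHAR on other backends (a sketch; table and column names are illustrative):

```python
from sqlalchemy import Column, MetaData, Table, VARCHAR
from sqlalchemy.dialects import mssql
from sqlalchemy.schema import CreateTable

# length None compiles to VARCHAR(max) on SQL Server
t = Table("t", MetaData(), Column("x", VARCHAR(None)))
ddl = str(CreateTable(t).compile(dialect=mssql.dialect()))
```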
limitselect._oracle_visit = True
limitselect._is_wrapper = True
- # add expressions to accomodate FOR UPDATE OF
+ # add expressions to accommodate FOR UPDATE OF
for_update = select._for_update_arg
if for_update is not None and for_update.of:
for_update = for_update._clone()
def Any(other, arrexpr, operator=operators.eq):
"""A synonym for the :meth:`.ARRAY.Comparator.any` method.
- This method is legacy and is here for backwards-compatiblity.
+ This method is legacy and is here for backwards-compatibility.
.. seealso::
def All(other, arrexpr, operator=operators.eq):
"""A synonym for the :meth:`.ARRAY.Comparator.all` method.
- This method is legacy and is here for backwards-compatiblity.
+ This method is legacy and is here for backwards-compatibility.
.. seealso::
# encoding
client_encoding = utf8
-The ``client_encoding`` can be overriden for a session by executing the SQL:
+The ``client_encoding`` can be overridden for a session by executing the SQL:
SET CLIENT_ENCODING TO 'utf8';
"""
def loaded_as_persistent(self, session, instance):
- """Intercept the "loaded as peristent" transition for a specific object.
+ """Intercept the "loaded as persistent" transition for a specific object.
This event is invoked within the ORM loading process, and is invoked
very similarly to the :meth:`.InstanceEvents.load` event. However,
* Part of the primary key
- * Not refering to another column via :class:`.ForeignKey`, unless
+ * Not referring to another column via :class:`.ForeignKey`, unless
the value is specified as ``'ignore_fk'``::
# turn on autoincrement for this column despite
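The snippet above continues roughly as follows, shown here as a self-contained sketch (table names are illustrative):

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table

m = MetaData()
other = Table("other", m, Column("id", Integer, primary_key=True))

t = Table(
    "t", m,
    # turn on autoincrement for this column despite the ForeignKey()
    Column("id", Integer, ForeignKey("other.id"),
           primary_key=True, autoincrement="ignore_fk"),
)
```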
# fires off to load "addresses", but needs foreign key or primary key
# attributes in order to lazy load; hits those attributes, such as
# below it hits "u.id". "u.id" triggers full unexpire operation,
- # joinedloads addresses since lazy='joined'. this is all wihtin lazy load
+ # joinedloads addresses since lazy='joined'. this is all within lazy load
# which fires unconditionally; so an unnecessary joinedload (or
# lazyload) was issued. would prefer not to complicate lazyloading to
# "figure out" that the operation should be aborted right now.
# we didn't insert a value for 'data',
# so it's not in dict, but also when we hit it, it isn't
- # expired because there's no column default on it or anyhting like that
+ # expired because there's no column default on it or anything like that
assert 'data' not in d1.__dict__
def go():
eq_(d1.data, None)
def test_set_none_replaces_scalar(self):
# this case worked before #3060, because a straight scalar
- # set of None shows up. Howver, as test_set_none_w_get
+ # set of None shows up. However, as test_set_none_w_get
# shows, we can't rely on this - the get of None will blow
# away the history.
A, B, C = self._fixture()
'CAST(%s AS %s)' % (literal, expected_results[4]))
# fixme: shoving all of this dialect-specific stuff in one test
- # is now officialy completely ridiculous AND non-obviously omits
+ # is now officially completely ridiculous AND non-obviously omits
# coverage on other dialects.
sel = select([tbl, cast(tbl.c.v1, Numeric)]).compile(
dialect=dialect)