:tags:
:tickets:
- added assertion to tx = session.begin(); tx.rollback(); tx.begin(), i.e. cant
+ added assertion to tx = session.begin(); tx.rollback(); tx.begin(), i.e. can't
use it after a rollback()
.. change::
executed on a primary key col so we know what we just inserted.
* if you did add a row that has a bunch of database-side defaults on it,
and the PassiveDefault thing was working the old way, i.e. they just execute on
- the DB side, the "cant get the row back without an OID" exception that occurred
+ the DB side, the "can't get the row back without an OID" exception that occurred
also will not happen unless someone (usually the ORM) explicitly asks for it.
.. change::
:tickets:
fix to postgres, where it will explicitly pre-execute a PassiveDefault on a table
- if it is a primary key column, pursuant to the ongoing "we cant get inserted rows
+ if it is a primary key column, pursuant to the ongoing "we can't get inserted rows
back from postgres" issue
.. change::
:tickets:
supports_sane_rowcount() set to False due to ticket #370.
- versioned_id_col feature wont work in FB.
+ versioned_id_col feature won't work in FB.
.. change::
:tags: firebird
:tags: sql
:tickets:
- use_labels flag on select() wont auto-create labels for literal text
+ use_labels flag on select() won't auto-create labels for literal text
column elements, since we can make no assumptions about the text. to
create labels for literal columns, you can say "somecol AS
somelabel", or use literal_column("somecol").label("somelabel")
:tags: sql
:tickets:
- quoting wont occur for literal columns when they are "proxied" into
+ quoting won't occur for literal columns when they are "proxied" into
the column collection for their selectable (is_literal flag is
propagated). literal columns are specified via
literal_column("somestring").
placed in the select statement by something other than the eager
loader itself, to fix possibility of dupe columns as illustrated in. however, this means you have to be more careful with
the columns placed in the "order by" of Query.select(), that you
- have explicitly named them in your criterion (i.e. you cant rely on
+ have explicitly named them in your criterion (i.e. you can't rely on
the eager loader adding them in for you)
.. change::
:tags: oracle
:tickets: 363
- issues a log warning when a related table cant be reflected due to
+ issues a log warning when a related table can't be reflected due to
certain permission errors
.. change::
:tags: orm, bugs
:tickets:
- eager relation to an inheriting mapper wont fail if no rows returned for
+ eager relation to an inheriting mapper won't fail if no rows returned for
the relationship.
.. change::
:tags: orm
:tickets: 346
- session.flush() wont close a connection it opened
+ session.flush() won't close a connection it opened
.. change::
:tags: orm
:tickets:
Wrote a docstring for Oracle dialect. Apparently that Ohloh
- "few source code comments" label is starting to sting :).
+ "few source code comments" label is starting to string :).
.. change::
:tags: oracle
.. topic:: the Python DBAPI is where autobegin actually happens
The design of "commit as you go" is intended to be complementary to the
- design of the :term:`DBAPI`, which is the underyling database interface
+ design of the :term:`DBAPI`, which is the underlying database interface
that SQLAlchemy interacts with. In the DBAPI, the ``connection`` object does
not assume changes to the database will be automatically committed, instead
requiring in the default case that the ``connection.commit()`` method is
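A minimal commit-as-you-go sketch of the behavior described above, assuming a SQLAlchemy 2.0 style ``Engine`` named ``engine`` and an in-memory SQLite database (both illustrative choices); nothing is persisted until the explicit ``commit()`` call::

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://")
    with engine.connect() as conn:
        conn.execute(text("CREATE TABLE t (x INTEGER)"))
        conn.execute(text("INSERT INTO t (x) VALUES (:x)"), {"x": 1})
        conn.commit()  # the DBAPI connection does not autocommit; commit explicitly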
can be any number of "schemas" which then contain the actual table objects.
A table within a specific schema is referred towards explicitly using the
- syntax "<schemaname>.<tablename>". Constrast this to an architecture such
+ syntax "<schemaname>.<tablename>". Contrast this to an architecture such
as that of MySQL, where there are only "databases", however SQL statements
can refer to multiple databases at once, using the same syntax except it
is "<database>.<tablename>". On Oracle, this syntax refers to yet another
.. note:: The above reference to a "pre-buffered" vs. "un-buffered"
:class:`_result.Result` object refers to the process by which the ORM
converts incoming raw database rows from the :term:`DBAPI` into ORM
- objects. It does not imply whether or not the underyling ``cursor``
+ objects. It does not imply whether or not the underlying ``cursor``
object itself, which represents pending results from the DBAPI, is itself
buffered or unbuffered, as this is essentially a lower layer of buffering.
For background on buffering of the ``cursor`` results itself, see the
attribute is also added which will always refer to the real driver-level
connection regardless of what API it presents.
-Accessing the underlying connnection for an asyncio driver
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Accessing the underlying connection for an asyncio driver
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When an asyncio driver is in use, there are two changes to the above
scheme. The first is that when using an :class:`_asyncio.AsyncConnection`,
particular SQLAlchemy API has been invoked by end-user code, and *before*
some other internal aspect of that API occurs.
- Constrast this to the architecture of the asyncio extension, which takes
+ Contrast this to the architecture of the asyncio extension, which takes
place on the **exterior** of SQLAlchemy's usual flow from end-user API to
DBAPI function.
query, such as the WHERE clause, the ORDER BY clause, and make use of the
ad-hoc expression; that is, this won't work::
- # wont work
+ # won't work
q = session.query(A).options(
with_expression(A.expr, A.x + A.y)
).filter(A.expr > 5).order_by(A.expr)
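A sketch of the form that does work, assuming the same mapping as above where ``A.expr`` is a :func:`_orm.query_expression` attribute: the ad-hoc expression itself is repeated in the criteria rather than referenced through ``A.expr``::

    expr = A.x + A.y
    q = (
        session.query(A)
        .options(with_expression(A.expr, expr))
        .filter(expr > 5)
        .order_by(expr)
    )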
which will be introduced later in this tutorial.
The RETURNING feature is generally [1]_ only
supported for statement executions that use a single set of bound
- parameters; that is, it wont work with the "executemany" form introduced
+ parameters; that is, it won't work with the "executemany" form introduced
at :ref:`tutorial_multiple_parameters`. Additionally, some dialects
such as the Oracle dialect only allow RETURNING to return a single row
overall, meaning it won't work with "INSERT..FROM SELECT" nor will it
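As an illustration of the single-parameter-set case that RETURNING does support, a sketch using a hypothetical ``user_table`` and an open :class:`_engine.Connection` named ``conn``::

    from sqlalchemy import insert

    stmt = insert(user_table).returning(user_table.c.id)
    result = conn.execute(stmt, {"name": "spongebob"})  # one set of parameters
    print(result.first())  # the newly generated primary key value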
asyncio_engine = create_async_engine("postgresql+psycopg_async://scott:tiger@localhost/test")
The ``psycopg`` dialect has the same API features as that of ``psycopg2``,
-with the exeption of the "fast executemany" helpers. The "fast executemany"
+with the exception of the "fast executemany" helpers. The "fast executemany"
helpers are expected to be generalized and ported to ``psycopg`` before the final
release of SQLAlchemy 2.0, however.
keeping the effect of such an option localized to a "sub" connection.
.. versionchanged:: 2.0 The :meth:`_engine.Connection.execution_options`
- method, in constrast to other objects with this method, modifies
+ method, in contrast to other objects with this method, modifies
the connection in-place without creating copy of it.
As discussed elsewhere, the :meth:`_engine.Connection.execution_options`
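A short sketch of the in-place behavior noted above, assuming an existing ``engine``; the same ``Connection`` object is modified rather than copied::

    from sqlalchemy import text

    with engine.connect() as conn:
        # modifies conn in place; isolation_level shown as an illustrative
        # option whose availability depends on the dialect in use
        conn.execution_options(isolation_level="AUTOCOMMIT")
        conn.execute(text("SELECT 1"))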
as with textual non-ordered columns.
The name-matched system of merging is the same as that used by
- SQLAlchemy for all cases up through te 0.9 series. Positional
+ SQLAlchemy for all cases up through the 0.9 series. Positional
matching for compiled SQL expressions was introduced in 1.0 as a
major performance feature, and positional matching for textual
:class:`_expression.TextualSelect` objects in 1.1.
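For reference, a sketch of the positional form mentioned above for :class:`_expression.TextualSelect`, using a made-up ``users`` table and assuming that passing ``column()`` constructs positionally (rather than by keyword) is what opts into positional matching::

    from sqlalchemy import Integer, String, column, text

    stmt = text("SELECT id, name FROM users").columns(
        column("id", Integer),
        column("name", String),
    )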
# TODO: figure out a more robust way to check this. The node is some
# kind of _SpecialForm, there's a typing.Optional that's _SpecialForm,
- # but I cant figure out how to get them to match up
+ # but I can't figure out how to get them to match up
if typ.name == "Optional":
# convert from "Optional?" to the more familiar
# UnionType[..., NoneType()]
def __getitem__(self, entity):
try:
return self.path[entity]
- except TypeError as te:
- raise IndexError(f"{entity}") from te
+ except TypeError as err:
+ raise IndexError(f"{entity}") from err
class PropRegistry(PathRegistry):
# it to provide a real expression object.
#
# from there, it starts to look much like Query itself won't be
- # passed into the execute process and wont generate its own cache
+ # passed into the execute process and won't generate its own cache
# key; this will all occur in terms of the ORM-enabled Select.
if (
not self._compile_options._set_base_alias
# scenario which should only be occurring in a loader
# that is against a non-aliased lead element with
# single path. otherwise the
- # "B" wont match into the B(B, B2).
+ # "B" won't match into the B(B, B2).
#
# i>=2 prevents this check from proceeding for
# the first path element.
pool.
:class:`.PoolProxiedConnection` is basically the public-facing interface
- for the :class:`._ConnectionFairy` implemenatation object, users familiar
+ for the :class:`._ConnectionFairy` implementation object, users familiar
with :class:`._ConnectionFairy` can consider this object to be
equivalent.
in a result row subsequent to statement execution time.
Subclasses of :class:`_types.TypeDecorator` can override this method
- to provide custom column expresion behavior for the type. This
+ to provide custom column expression behavior for the type. This
implementation will **replace** that of the underlying implementation
type.
lines.extend([line, to_inject, "\n"])
to_inject = None
elif line.endswith("::"):
- # TODO: this still wont cover if the code example itself has blank
- # lines in it, need to detect those via indentation.
+ # TODO: this still won't cover if the code example itself has
+ # blank lines in it, need to detect those via indentation.
lines.extend([line, doclines.popleft()])
continue
lines.append(line)
[mypy-sqlalchemy.ext.mypy.*]
ignore_errors = False
-
[sqla_testing]
requirement_cls = test.requirements:DefaultRequirements
profile_file = test/profiles.txt
def test_one_unique(self):
# assert that one() counts rows after uniqueness has been applied.
- # this would raise if we didnt have unique
+ # this would raise if we didn't have unique
result = self._fixture(data=[(1, 1, 1), (1, 1, 1)])
row = result.unique().one()
def test_one_unique_mapping(self):
# assert that one() counts rows after uniqueness has been applied.
- # this would raise if we didnt have unique
+ # this would raise if we didn't have unique
result = self._fixture(data=[(1, 1, 1), (1, 1, 1)])
row = result.mappings().unique().one()
-- can't make a ref from local schema to the
-- remote schema's table without this,
- -- *and* cant give yourself a grant !
+ -- *and* can't give yourself a grant !
-- so we give it to public. ideas welcome.
grant references on %(test_schema)s.parent to public;
grant references on %(test_schema)s.child to public;
"connection not open",
"could not receive data from server",
"could not send data to server",
- # psycopg2 client errors, psycopg2/conenction.h,
+ # psycopg2 client errors, psycopg2/connection.h,
# psycopg2/cursor.h
"connection already closed",
"cursor already closed",
if cast_fn:
value = cast_fn(value, JSON)
- # why wont this work?!?!?
+ # why won't this work?!?!?
# should be exactly json_to_recordset(to_json('string'::text))
#
fn = (
extras = psycopg_extras()
else:
- assert False, "Unknonw dialect"
+ assert False, "Unknown dialect"
return extras
@classmethod
"REFERENCES implicit_referred_comp)"
)
- # worst case - FK that refers to nonexistent table so we cant
+ # worst case - FK that refers to nonexistent table so we can't
# get pks. requires FK pragma is turned off
conn.exec_driver_sql(
"CREATE TABLE implicit_referrer_comp_fake "
"concrete": True,
}
- # didnt call configure_mappers() again
+ # didn't call configure_mappers() again
assert_raises_message(
orm_exc.UnmappedClassError,
".*and has a mapping pending",
# calling with *args
eq_(bq(sess).params(uname="fred").count(), 1)
# with multiple params, the **kwargs will be used
- bq += lambda q: q.filter(User.id == bindparam("anid"))
- eq_(bq(sess).params(uname="fred", anid=9).count(), 1)
+ bq += lambda q: q.filter(User.id == bindparam("an_id"))
+ eq_(bq(sess).params(uname="fred", an_id=9).count(), 1)
eq_(
# wrong id, so 0 results:
- bq(sess).params(uname="fred", anid=8).count(),
+ bq(sess).params(uname="fred", an_id=8).count(),
0,
)
def define_tables(cls, metadata):
global foo, bar, blub, bar_foo, blub_bar, blub_foo
- # the 'data' columns are to appease SQLite which cant handle a blank
+ # the 'data' columns are to appease SQLite which can't handle a blank
# INSERT
foo = Table(
"foo",
cls is self.classes.Sub1
and Link.child.entity.class_ is self.classes.Parent
):
- # in 1.x we werent checking for this:
+ # in 1.x we weren't checking for this:
# query(Sub1).options(
# joinedload(Sub1.links).joinedload(Link.child).joinedload(Sub1.links)
# )
ra = aliased(Report, subq)
# this test previously used select_entity_from(). the standard
- # conversion to use aliased() neds to be adjusted to be against
- # Employee, not Manger, otherwise the ORM will add the manager single
+ # conversion to use aliased() needs to be adjusted to be against
+ # Employee, not Manager, otherwise the ORM will add the manager single
# inh criteria to the outside which will break the outer join
ma = aliased(Employee, subq)
d2 = sess.get(DerivedII, "uid2")
sess.expunge_all()
- # object is not in the session; therefore the lazy load cant trigger
+ # object is not in the session; therefore the lazy load can't trigger
# here, eager load had to succeed
assert len([c for c in d2.comments]) == 1
y = T3(data="T3a")
x = T2(data="T2a", t3=y)
- # cant attach the T3 to another T2
+ # can't attach the T3 to another T2
assert_raises(sa_exc.InvalidRequestError, T2, data="T2b", t3=y)
# set via backref tho is OK, unsets from previous parent
# the column properties
stmt = select(stmt.subquery())
- # TODO: shouldnt we be able to get to stmt.subquery().c.count ?
+ # TODO: shouldn't we be able to get to stmt.subquery().c.count ?
self.assert_compile(
stmt,
"SELECT anon_2.anon_1, anon_2.anon_3, anon_2.id, anon_2.name "
)
# note this doesn't apply to "bound" loaders since they don't seem
- # to have this ".*" featue.
+ # to have this ".*" feature.
def test_load_only_subclass_of_type(self):
s = fixture_session()
"people.person_id = managers.person_id ORDER BY people.person_id",
)
# note this doesn't apply to "bound" loaders since they don't seem
- # to have this ".*" featue.
+ # to have this ".*" feature.
assert u.uname == "jack2"
assert "name" in u.__dict__
- # this wont work unless we add API hooks through the attr. system to
+ # this won't work unless we add API hooks through the attr. system to
# provide "expire" behavior on a synonym
# sess.expire(u, ['uname'])
# users.update(users.c.id==7).execute(name='jack3')
use_default_dialect=True,
)
- # this fails (and we cant quite fix right now).
+ # this fails (and we can't quite fix right now).
if False:
self.assert_compile(
sess.query(User, ualias)
sess = fixture_session()
u1 = sess.get(User, 7)
u2 = sess.get(User, 8)
- # comparaison ops need to work
+ # comparison ops need to work
a1 = sess.query(Address).filter(Address.user == u1).one()
eq_(a1.id, 1)
a1.user = u2
@testing.provide_metadata
def test_works_two(self):
- # doesn't actually work with real FKs beacuse it creates conflicts :)
+ # doesn't actually work with real FKs because it creates conflicts :)
self._fixture_one(
add_b_a=True, add_b_a_overlaps="a_member", add_bsub1_a=True
)
sess.commit()
def test_no_delete_PK_AtoB(self):
- """A cant be deleted without B because B would have no PK value."""
+ """A can't be deleted without B because B would have no PK value."""
tableB, A, B, tableA = (
self.tables.tableB,
def test_nullPKsOK_BtoA(self, metadata, connection):
A, tableA = self.classes.A, self.tables.tableA
- # postgresql cant handle a nullable PK column...?
+ # postgresql can't handle a nullable PK column...?
tableC = Table(
"tablec",
metadata,
# here, the "lazy" strategy has to ensure the "secondary"
# table is part of the "select_from()", since it's a join().
- # referring to just the columns wont actually render all those
+ # referring to just the columns won't actually render all those
# join conditions.
self.assert_sql_execution(
testing.db,
"""
return fails_if(
self._mysql_not_mariadb_103,
- 'MySQL error 1093 "Cant specify target table '
+ "MySQL error 1093 \"Can't specify target table "
'for update in FROM clause", resolved by MariaDB 10.3',
)
@property
def dupe_order_by_ok(self):
- """target db wont choke if ORDER BY specifies the same expression
+ """target db won't choke if ORDER BY specifies the same expression
more than once
"""
)
def test_boolean_inversion_mysql(self):
- # because mysql doesnt have native boolean
+ # because mysql doesn't have native boolean
self.assert_compile(
~self.table1.c.myid.match("somstr"),
"NOT MATCH (mytable.myid) AGAINST (%s IN BOOLEAN MODE)",
)
def test_boolean_inversion_mssql(self):
- # because mssql doesnt have native boolean
+ # because mssql doesn't have native boolean
self.assert_compile(
~self.table1.c.myid.match("somstr"),
"NOT CONTAINS (mytable.myid, :myid_1)",