:ticket:`4617`
+
+.. _change_5284:
+
+select() now accepts positional expressions
+-------------------------------------------
+
+The :func:`.select` construct will now accept "columns clause"
+arguments positionally::
+
+ # new way, supports 2.0
+ stmt = select(table.c.col1, table.c.col2, ...)
+
+When sending the arguments positionally, no other keyword arguments are permitted.
+In SQLAlchemy 2.0, the above calling style will be the only one
+supported.
+
+For the duration of 1.4, the previous calling style, which passes the list
+of columns or other expressions as a single list, will continue to
+function::
+
+ # old way, still works in 1.4
+ stmt = select([table.c.col1, table.c.col2, ...])
+
+The above legacy calling style also accepts the old keyword arguments that have
+since been removed from most narrative documentation::
+
+ # very much the old way, but still works in 1.4
+ stmt = select([table.c.col1, table.c.col2, ...], whereclause=table.c.col1 == 5)
+
+The detection between the two styles is based on whether or not the first
+positional argument is a list. Unfortunately, there are still likely some
+usages that look like the following, where the keyword for the "whereclause"
+is omitted::
+
+ # very much the old way, but still works in 1.4
+ stmt = select([table.c.col1, table.c.col2, ...], table.c.col1 == 5)
+
+As part of this change, the :class:`.Select` construct also gains the 2.0-style
+"future" API which includes an updated :meth:`.Select.join` method as well
+as methods like :meth:`.Select.filter_by` and :meth:`.Select.join_from`.
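+
+For example, a join with simple keyword-based criteria may now be spelled
+entirely through method chaining (an illustrative sketch; ``user_table`` and
+``address_table`` are assumed to be :class:`_schema.Table` objects related by
+a foreign key)::
+
+    stmt = (
+        select(user_table.c.name)
+        .join_from(user_table, address_table)
+        .filter_by(email_address="some address")
+    )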
+
+.. seealso::
+
+ :ref:`error_c9ae`
+
+ :ref:`migration_20_toplevel`
+
+
+:ticket:`5284`
+
.. _change_4645:
All IN expressions render parameters for each value in the list on the fly (e.g. expanding parameters)
an application can gradually adjust all of its 1.4-style code to work fully
against 2.0 as well.
-* APIs which are explicitly incompatible with SQLAlchemy 1.x style will be
- available in two new packages ``sqlalchemy.future`` and
- ``sqlalchemy.future.orm``. The most prominent objects in these new packages
- will be the :func:`sqlalchemy.future.select` object, which now features
- a refined constructor, and additionally will be compatible with ORM
- querying, as well as the new declarative base construct in
- ``sqlalchemy.future.orm``.
-
-* SQLAlchemy 2.0 will include the same ``sqlalchemy.future`` and
- ``sqlalchemy.future.orm`` packages; once an application only needs to run on
- SQLAlchemy 2.0 (as well as Python 3 only of course :) ), the "future" imports
- can be changed to refer to the canonical import, for example ``from
- sqlalchemy.future import select`` becomes ``from sqlalchemy import select``.
-
+* Currently, the main API which is explicitly incompatible with SQLAlchemy 1.x
+  style is the behavior of the :class:`_engine.Engine` and
+  :class:`_engine.Connection` objects in terms of connectionless execution as
+  well as "autocommit"; the future API no longer has these behaviors, and
+  two new methods :meth:`_future.Connection.commit` and
+  :meth:`_future.Connection.rollback` are added to accommodate
+  commit-as-you-go use (see the sketch following this list). These new
+  objects are currently in a separate package ``sqlalchemy.future``; in order
+  to access the future versions of these, pass the parameter
+  :paramref:`_engine.create_engine.future` to the
+  :func:`_engine.create_engine` function.
+
+* The :class:`_orm.Session` object also has a newer behavior when using the
+  :meth:`_orm.Session.execute` method: incoming statements are interpreted
+  in an ORM context if applicable, and the :class:`_engine.Result` object
+  returned uses new-style tuples (see :ref:`migration_20_result_rows`).
+  Within 1.4 this newer style is enabled by passing
+  :paramref:`_orm.Session.future` to the session constructor or
+  :class:`_orm.sessionmaker` object.
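+
+For example, a minimal sketch of the "future" engine pattern described in the
+first bullet above (the URL and ``some_table`` are illustrative)::
+
+    from sqlalchemy import create_engine
+
+    engine = create_engine(
+        "postgresql://scott:tiger@localhost/test", future=True
+    )
+
+    with engine.connect() as conn:
+        conn.execute(some_table.insert(), {"data": "some value"})
+        conn.commit()  # explicit commit-as-you-go; no autocommit
+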
Python 3 Only
=============
result[0].all() # same as result.scalars().all()
result[2:5].all() # same as result.columns('c', 'd', 'e').all()
+.. _migration_20_result_rows:
+
Result rows unified between Core and ORM on named-tuple interface
==================================================================
--- /dev/null
+.. change::
+ :tags: change, sql
+ :tickets: 5284
+
+ The :func:`_expression.select` construct is moving towards a new calling
+ form that is ``select(col1, col2, col3, ..)``, with all other keyword
+    arguments removed, as these are all better suited to generative methods.
+    The single list of column or table arguments passed to ``select()`` is
+    still accepted; however, it is no longer necessary if expressions are
+    passed in a simple positional style. Other keyword arguments are
+    disallowed when this form is used.
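+
+    E.g., the new form (``table`` is illustrative)::
+
+        stmt = select(table.c.col1, table.c.col2).where(table.c.col1 == 5)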
+
+
+ .. seealso::
+
+ :ref:`change_5284`
.. sourcecode:: pycon+sql
>>> from sqlalchemy.sql import select
- >>> s = select([users])
+ >>> s = select(users)
>>> result = conn.execute(s)
{opensql}SELECT users.id, users.name, users.fullname
FROM users
Above, we issued a basic :func:`_expression.select` call, placing the ``users`` table
within the COLUMNS clause of the select, and then executing. SQLAlchemy
expanded the ``users`` table into the set of each of its columns, and also
-generated a FROM clause for us. The result returned is again a
+generated a FROM clause for us.
+
+.. versionchanged:: 1.4 The :func:`_expression.select` construct now accepts
+ column arguments positionally, as ``select(*args)``. The previous style
+ of ``select()`` accepting a list of column elements is now deprecated.
+ See :ref:`change_5284`.
+
+The result returned is again a
:class:`~sqlalchemy.engine.CursorResult` object, which acts much like a
DBAPI cursor, including methods such as
:func:`~sqlalchemy.engine.CursorResult.fetchone` and
.. sourcecode:: pycon+sql
- >>> s = select([users.c.name, users.c.fullname])
+ >>> s = select(users.c.name, users.c.fullname)
{sql}>>> result = conn.execute(s)
SELECT users.name, users.fullname
FROM users
.. sourcecode:: pycon+sql
- {sql}>>> for row in conn.execute(select([users, addresses])):
+ {sql}>>> for row in conn.execute(select(users, addresses)):
... print(row)
SELECT users.id, users.name, users.fullname, addresses.id, addresses.user_id, addresses.email_address
FROM users, addresses
.. sourcecode:: pycon+sql
- >>> s = select([users, addresses]).where(users.c.id == addresses.c.user_id)
+ >>> s = select(users, addresses).where(users.c.id == addresses.c.user_id)
{sql}>>> for row in conn.execute(s):
... print(row)
SELECT users.id, users.name, users.fullname, addresses.id,
from sqlalchemy import type_coerce
expr = type_coerce(somecolumn.op('-%>')('foo'), MySpecialType())
- stmt = select([expr])
+ stmt = select(expr)
For boolean operators, use the :meth:`.Operators.bool_op` method, which
.. sourcecode:: pycon+sql
- >>> s = select([(users.c.fullname +
+ >>> s = select((users.c.fullname +
... ", " + addresses.c.email_address).
- ... label('title')]).\
+ ... label('title')).\
... where(
... and_(
... users.c.id == addresses.c.user_id,
.. sourcecode:: pycon+sql
- >>> s = select([(users.c.fullname +
+ >>> s = select((users.c.fullname +
... ", " + addresses.c.email_address).
- ... label('title')]).\
+ ... label('title')).\
... where(users.c.id == addresses.c.user_id).\
... where(users.c.name.between('m', 'z')).\
... where(
j = stmt.join(addresses, stmt.c.id == addresses.c.user_id)
- new_stmt = select([stmt.c.id, addresses.c.id]).\
+ new_stmt = select(stmt.c.id, addresses.c.id).\
select_from(j).where(stmt.c.name == 'x')
The positional form of :meth:`_expression.TextClause.columns` is particularly useful
.. sourcecode:: pycon+sql
- >>> s = select([
+ >>> s = select(
... text("users.fullname || ', ' || addresses.email_address AS title")
- ... ]).\
+ ... ).\
... where(
... and_(
... text("users.id = addresses.user_id"),
>>> from sqlalchemy import select, and_, text, String
>>> from sqlalchemy.sql import table, literal_column
- >>> s = select([
+ >>> s = select(
... literal_column("users.fullname", String) +
... ', ' +
... literal_column("addresses.email_address").label("title")
- ... ]).\
+ ... ).\
... where(
... and_(
... literal_column("users.id") == literal_column("addresses.user_id"),
.. sourcecode:: pycon+sql
>>> from sqlalchemy import func
- >>> stmt = select([
+ >>> stmt = select(
... addresses.c.user_id,
- ... func.count(addresses.c.id).label('num_addresses')]).\
+ ... func.count(addresses.c.id).label('num_addresses')).\
... group_by("user_id").order_by("user_id", "num_addresses")
{sql}>>> conn.execute(stmt).fetchall()
.. sourcecode:: pycon+sql
>>> from sqlalchemy import func, desc
- >>> stmt = select([
+ >>> stmt = select(
... addresses.c.user_id,
- ... func.count(addresses.c.id).label('num_addresses')]).\
+ ... func.count(addresses.c.id).label('num_addresses')).\
... group_by("user_id").order_by("user_id", desc("num_addresses"))
{sql}>>> conn.execute(stmt).fetchall()
.. sourcecode:: pycon+sql
>>> u1a, u1b = users.alias(), users.alias()
- >>> stmt = select([u1a, u1b]).\
+ >>> stmt = select(u1a, u1b).\
... where(u1a.c.name > u1b.c.name).\
... order_by(u1a.c.name) # using "name" here would be ambiguous
>>> a1 = addresses.alias()
>>> a2 = addresses.alias()
- >>> s = select([users]).\
+ >>> s = select(users).\
... where(and_(
... users.c.id == a1.c.user_id,
... users.c.id == a2.c.user_id,
.. sourcecode:: pycon+sql
>>> address_subq = s.subquery()
- >>> s = select([users.c.name]).where(users.c.id == address_subq.c.id)
+ >>> s = select(users.c.name).where(users.c.id == address_subq.c.id)
>>> conn.execute(s).fetchall()
{opensql}SELECT users.name
FROM users,
.. sourcecode:: pycon+sql
- >>> s = select([users.c.fullname]).select_from(
+ >>> s = select(users.c.fullname).select_from(
... users.join(addresses,
... addresses.c.email_address.like(users.c.name + '%'))
... )
.. sourcecode:: pycon+sql
- >>> s = select([users.c.fullname]).select_from(users.outerjoin(addresses))
+ >>> s = select(users.c.fullname).select_from(users.outerjoin(addresses))
>>> print(s)
SELECT users.fullname
FROM users
.. sourcecode:: pycon+sql
- >>> users_cte = select([users.c.id, users.c.name]).where(users.c.name == 'wendy').cte()
- >>> stmt = select([addresses]).where(addresses.c.user_id == users_cte.c.id).order_by(addresses.c.id)
+ >>> users_cte = select(users.c.id, users.c.name).where(users.c.name == 'wendy').cte()
+ >>> stmt = select(addresses).where(addresses.c.user_id == users_cte.c.id).order_by(addresses.c.id)
>>> conn.execute(stmt).fetchall()
{opensql}WITH anon_1 AS
(SELECT users.id AS id, users.name AS name
.. sourcecode:: pycon+sql
- >>> users_cte = select([users.c.id, users.c.name]).cte(recursive=True)
+ >>> users_cte = select(users.c.id, users.c.name).cte(recursive=True)
>>> users_recursive = users_cte.alias()
- >>> users_cte = users_cte.union(select([users.c.id, users.c.name]).where(users.c.id > users_recursive.c.id))
- >>> stmt = select([addresses]).where(addresses.c.user_id == users_cte.c.id).order_by(addresses.c.id)
+ >>> users_cte = users_cte.union(select(users.c.id, users.c.name).where(users.c.id > users_recursive.c.id))
+ >>> stmt = select(addresses).where(addresses.c.user_id == users_cte.c.id).order_by(addresses.c.id)
>>> conn.execute(stmt).fetchall()
{opensql}WITH RECURSIVE anon_1(id, name) AS
(SELECT users.id AS id, users.name AS name
.. sourcecode:: pycon+sql
>>> from sqlalchemy.sql import bindparam
- >>> s = users.select(users.c.name == bindparam('username'))
- {sql}>>> conn.execute(s, username='wendy').fetchall()
+ >>> s = users.select().where(users.c.name == bindparam('username'))
+ {sql}>>> conn.execute(s, {"username": "wendy"}).fetchall()
SELECT users.id, users.name, users.fullname
FROM users
WHERE users.name = ?
.. sourcecode:: pycon+sql
- >>> s = users.select(users.c.name.like(bindparam('username', type_=String) + text("'%'")))
- {sql}>>> conn.execute(s, username='wendy').fetchall()
+ >>> s = users.select().where(users.c.name.like(bindparam('username', type_=String) + text("'%'")))
+ {sql}>>> conn.execute(s, {"username": "wendy"}).fetchall()
SELECT users.id, users.name, users.fullname
FROM users
WHERE users.name LIKE ? || '%'
.. sourcecode:: pycon+sql
- >>> s = select([users, addresses]).\
+ >>> s = select(users, addresses).\
... where(
... or_(
... users.c.name.like(
... ).\
... select_from(users.outerjoin(addresses)).\
... order_by(addresses.c.id)
- {sql}>>> conn.execute(s, name='jack').fetchall()
+ {sql}>>> conn.execute(s, {"name": "jack"}).fetchall()
SELECT users.id, users.name, users.fullname, addresses.id,
addresses.user_id, addresses.email_address
FROM users LEFT OUTER JOIN addresses ON users.id = addresses.user_id
have type-specific operator behavior as well as result-set behaviors, such
as date and numeric coercions, the type may need to be specified explicitly::
- stmt = select([func.date(some_table.c.date_string, type_=Date)])
+ stmt = select(func.date(some_table.c.date_string, type_=Date))
Functions are most typically used in the columns clause of a select statement,
.. sourcecode:: pycon+sql
>>> conn.execute(
- ... select([
+ ... select(
... func.max(addresses.c.email_address, type_=String).
... label('maxemail')
- ... ])
+ ... )
... ).scalar()
{opensql}SELECT max(addresses.email_address) AS maxemail
FROM addresses
.. sourcecode:: pycon+sql
>>> from sqlalchemy.sql import column
- >>> calculate = select([column('q'), column('z'), column('r')]).\
+ >>> calculate = select(column('q'), column('z'), column('r')).\
... select_from(
... func.calculate(
... bindparam('x'),
... )
... )
>>> calc = calculate.alias()
- >>> print(select([users]).where(users.c.id > calc.c.z))
+ >>> print(select(users).where(users.c.id > calc.c.z))
SELECT users.id, users.name, users.fullname
FROM users, (SELECT q, z, r
FROM calculate(:x, :y)) AS anon_1
>>> calc1 = calculate.alias('c1').unique_params(x=17, y=45)
>>> calc2 = calculate.alias('c2').unique_params(x=5, y=12)
- >>> s = select([users]).\
+ >>> s = select(users).\
... where(users.c.id.between(calc1.c.z, calc2.c.z))
>>> print(s)
SELECT users.id, users.name, users.fullname
:data:`~.expression.func`, can be turned into a "window function", that is an
OVER clause, using the :meth:`.FunctionElement.over` method::
- >>> s = select([
+ >>> s = select(
... users.c.id,
... func.row_number().over(order_by=users.c.name)
- ... ])
+ ... )
>>> print(s)
SELECT users.id, row_number() OVER (ORDER BY users.name) AS anon_1
FROM users
either the :paramref:`.expression.over.rows` or
:paramref:`.expression.over.range` parameters::
- >>> s = select([
+ >>> s = select(
... users.c.id,
... func.row_number().over(
... order_by=users.c.name,
... rows=(-2, None))
- ... ])
+ ... )
>>> print(s)
SELECT users.id, row_number() OVER
(ORDER BY users.name ROWS BETWEEN :param_1 PRECEDING AND UNBOUNDED FOLLOWING) AS anon_1
.. sourcecode:: pycon+sql
>>> from sqlalchemy import cast
- >>> s = select([cast(users.c.id, String)])
+ >>> s = select(cast(users.c.id, String))
>>> conn.execute(s).fetchall()
{opensql}SELECT CAST(users.id AS VARCHAR) AS id
FROM users
>>> from sqlalchemy import JSON
>>> from sqlalchemy import type_coerce
>>> from sqlalchemy.dialects import mysql
- >>> s = select([
+ >>> s = select(
... type_coerce(
... {'some_key': {'foo': 'bar'}}, JSON
... )['some_key']
- ... ])
+ ... )
>>> print(s.compile(dialect=mysql.dialect()))
SELECT JSON_EXTRACT(%s, %s) AS anon_1
... addresses.select().
... where(addresses.c.email_address.like('%@msn.com'))
... ).subquery().select(), # apply subquery here
- ... addresses.select(addresses.c.email_address.like('%@msn.com'))
+ ... addresses.select().where(addresses.c.email_address.like('%@msn.com'))
... )
{sql}>>> conn.execute(u).fetchall()
SELECT anon_1.id, anon_1.user_id, anon_1.email_address
.. sourcecode:: pycon+sql
- >>> subq = select([func.count(addresses.c.id)]).\
+ >>> subq = select(func.count(addresses.c.id)).\
... where(users.c.id == addresses.c.user_id).\
... scalar_subquery()
.. sourcecode:: pycon+sql
- >>> conn.execute(select([users.c.name, subq])).fetchall()
+ >>> conn.execute(select(users.c.name, subq)).fetchall()
{opensql}SELECT users.name, (SELECT count(addresses.id) AS count_1
FROM addresses
WHERE users.id = addresses.user_id) AS anon_1
.. sourcecode:: pycon+sql
- >>> subq = select([func.count(addresses.c.id)]).\
+ >>> subq = select(func.count(addresses.c.id)).\
... where(users.c.id == addresses.c.user_id).\
... label("address_count")
- >>> conn.execute(select([users.c.name, subq])).fetchall()
+ >>> conn.execute(select(users.c.name, subq)).fetchall()
{opensql}SELECT users.name, (SELECT count(addresses.id) AS count_1
FROM addresses
WHERE users.id = addresses.user_id) AS address_count
.. sourcecode:: pycon+sql
- >>> stmt = select([addresses.c.user_id]).\
+ >>> stmt = select(addresses.c.user_id).\
... where(addresses.c.user_id == users.c.id).\
... where(addresses.c.email_address == 'jack@yahoo.com')
- >>> enclosing_stmt = select([users.c.name]).\
+ >>> enclosing_stmt = select(users.c.name).\
... where(users.c.id == stmt.scalar_subquery())
>>> conn.execute(enclosing_stmt).fetchall()
{opensql}SELECT users.name
.. sourcecode:: pycon+sql
- >>> stmt = select([users.c.id]).\
+ >>> stmt = select(users.c.id).\
... where(users.c.id == addresses.c.user_id).\
... where(users.c.name == 'jack').\
... correlate(addresses)
>>> enclosing_stmt = select(
- ... [users.c.name, addresses.c.email_address]).\
+ ... users.c.name, addresses.c.email_address).\
... select_from(users.join(addresses)).\
... where(users.c.id == stmt.scalar_subquery())
>>> conn.execute(enclosing_stmt).fetchall()
.. sourcecode:: pycon+sql
- >>> stmt = select([users.c.id]).\
+ >>> stmt = select(users.c.id).\
... where(users.c.name == 'wendy').\
... correlate(None)
- >>> enclosing_stmt = select([users.c.name]).\
+ >>> enclosing_stmt = select(users.c.name).\
... where(users.c.id == stmt.scalar_subquery())
>>> conn.execute(enclosing_stmt).fetchall()
{opensql}SELECT users.name
.. sourcecode:: pycon+sql
- >>> stmt = select([users.c.id]).\
+ >>> stmt = select(users.c.id).\
... where(users.c.id == addresses.c.user_id).\
... where(users.c.name == 'jack').\
... correlate_except(users)
>>> enclosing_stmt = select(
- ... [users.c.name, addresses.c.email_address]).\
+ ... users.c.name, addresses.c.email_address).\
... select_from(users.join(addresses)).\
... where(users.c.id == stmt.scalar_subquery())
>>> conn.execute(enclosing_stmt).fetchall()
>>> from sqlalchemy import table, column, select, true
>>> people = table('people', column('people_id'), column('age'), column('name'))
>>> books = table('books', column('book_id'), column('owner_id'))
- >>> subq = select([books.c.book_id]).\
+ >>> subq = select(books.c.book_id).\
... where(books.c.owner_id == people.c.people_id).lateral("book_subq")
- >>> print(select([people]).select_from(people.join(subq, true())))
+ >>> print(select(people).select_from(people.join(subq, true())))
SELECT people.people_id, people.age, people.name
FROM people JOIN LATERAL (SELECT books.book_id AS book_id
FROM books WHERE books.owner_id = people.people_id)
.. sourcecode:: pycon+sql
- >>> stmt = select([users.c.name]).order_by(users.c.name)
+ >>> stmt = select(users.c.name).order_by(users.c.name)
>>> conn.execute(stmt).fetchall()
{opensql}SELECT users.name
FROM users ORDER BY users.name
.. sourcecode:: pycon+sql
- >>> stmt = select([users.c.name]).order_by(users.c.name.desc())
+ >>> stmt = select(users.c.name).order_by(users.c.name.desc())
>>> conn.execute(stmt).fetchall()
{opensql}SELECT users.name
FROM users ORDER BY users.name DESC
.. sourcecode:: pycon+sql
- >>> stmt = select([users.c.name, func.count(addresses.c.id)]).\
+ >>> stmt = select(users.c.name, func.count(addresses.c.id)).\
... select_from(users.join(addresses)).\
... group_by(users.c.name)
>>> conn.execute(stmt).fetchall()
.. sourcecode:: pycon+sql
- >>> stmt = select([users.c.name, func.count(addresses.c.id)]).\
+ >>> stmt = select(users.c.name, func.count(addresses.c.id)).\
... select_from(users.join(addresses)).\
... group_by(users.c.name).\
... having(func.length(users.c.name) > 4)
.. sourcecode:: pycon+sql
- >>> stmt = select([users.c.name]).\
+ >>> stmt = select(users.c.name).\
... where(addresses.c.email_address.
... contains(users.c.name)).\
... distinct()
.. sourcecode:: pycon+sql
- >>> stmt = select([users.c.name, addresses.c.email_address]).\
+ >>> stmt = select(users.c.name, addresses.c.email_address).\
... select_from(users.join(addresses)).\
... limit(1).offset(1)
>>> conn.execute(stmt).fetchall()
.. sourcecode:: pycon+sql
- >>> stmt = select([addresses.c.email_address]).\
+ >>> stmt = select(addresses.c.email_address).\
... where(addresses.c.user_id == users.c.id).\
... limit(1)
>>> conn.execute(users.update().values(fullname=stmt.scalar_subquery()))
Legacy API Features
===================
+.. _error_c9ae:
+
+select() construct created in "legacy" mode; keyword arguments, etc.
+--------------------------------------------------------------------
+
+The :func:`_expression.select` construct has been updated as of SQLAlchemy
+1.4 to support the newer calling style that will be standard in
+:ref:`SQLAlchemy 2.0 <error_b8d9>`. For backwards compatibility in the
+interim, the construct accepts arguments in both the "legacy" style as well
+as the "new" style.
+
+The "new" style features that column and table expressions are passed
+positionally to the :func:`_expression.select` construct only; any other
+modifiers to the object must be passed using subsequent method chaining::
+
+ # this is the way to do it going forward
+ stmt = select(table1.c.myid).where(table1.c.myid == table2.c.otherid)
+
+For comparison, a :func:`_expression.select` in legacy forms of SQLAlchemy,
+before methods like :meth:`.Select.where` were even added, would look like::
+
+ # this is how it was documented in original SQLAlchemy versions
+ # many years ago
+ stmt = select([table1.c.myid], whereclause=table1.c.myid == table2.c.otherid)
+
+Or even that the "whereclause" would be passed positionally::
+
+ # this is also how it was documented in original SQLAlchemy versions
+ # many years ago
+ stmt = select([table1.c.myid], table1.c.myid == table2.c.otherid)
+
+For some years now, the additional "whereclause" and other arguments that are
+accepted have been removed from most narrative documentation, leading to the
+calling style that is most familiar today, where the column arguments are
+passed as a list but no further arguments are given::
+
+ # this is how it's been documented since around version 1.0 or so
+ stmt = select([table1.c.myid]).where(table1.c.myid == table2.c.otherid)
+
+.. seealso::
+
+ :ref:`error_b8d9`
+
+ :ref:`change_5284`
+
+ :ref:`migration_20_toplevel`
+
+
+
.. _error_b8d9:
-The <some function> in SQLAlchemy 2.0 will no longer <something>; use the "future" construct
+The <some function> in SQLAlchemy 2.0 will no longer <something>
--------------------------------------------------------------------------------------------
SQLAlchemy 2.0 is expected to be a major shift for a wide variety of key
else
return tmp;
- tmp = BaseRow_subscript_mapping(self, name);
+ tmp = BaseRow_subscript_impl(self, name, 1);
+
if (tmp == NULL && PyErr_ExceptionMatches(PyExc_KeyError)) {
#if PY_MAJOR_VERSION >= 3
def has_table(self, connection, tablename, dbname, owner, schema):
tables = ischema.tables
- s = sql.select([tables.c.table_name]).where(
+ s = sql.select(tables.c.table_name).where(
sql.and_(
tables.c.table_type == "BASE TABLE",
tables.c.table_name == tablename,
def has_sequence(self, connection, sequencename, dbname, owner, schema):
sequences = ischema.sequences
- s = sql.select([sequences.c.sequence_name]).where(
+ s = sql.select(sequences.c.sequence_name).where(
sequences.c.sequence_name == sequencename
)
def get_sequence_names(self, connection, dbname, owner, schema, **kw):
sequences = ischema.sequences
- s = sql.select([sequences.c.sequence_name])
+ s = sql.select(sequences.c.sequence_name)
if owner:
s = s.where(sequences.c.sequence_schema == owner)
def get_table_names(self, connection, dbname, owner, schema, **kw):
tables = ischema.tables
s = (
- sql.select([tables.c.table_name])
+ sql.select(tables.c.table_name)
.where(
sql.and_(
tables.c.table_schema == owner,
@_db_plus_owner_listing
def get_view_names(self, connection, dbname, owner, schema, **kw):
tables = ischema.tables
- s = sql.select(
- [tables.c.table_name],
- sql.and_(
- tables.c.table_schema == owner, tables.c.table_type == "VIEW"
- ),
- order_by=[tables.c.table_name],
+ s = (
+ sql.select(tables.c.table_name)
+ .where(
+ sql.and_(
+ tables.c.table_schema == owner,
+ tables.c.table_type == "VIEW",
+ )
+ )
+ .order_by(tables.c.table_name)
)
view_names = [r[0] for r in connection.execute(s)]
return view_names
computed_cols.c.definition, NVARCHAR(4000)
)
- s = sql.select(
- [columns, computed_definition, computed_cols.c.is_persisted],
- whereclause,
- from_obj=join,
- order_by=[columns.c.ordinal_position],
+ s = (
+ sql.select(
+ columns, computed_definition, computed_cols.c.is_persisted
+ )
+ .where(whereclause)
+ .select_from(join)
+ .order_by(columns.c.ordinal_position)
)
c = connection.execution_options(future_result=True).execute(s)
# Primary key constraints
s = sql.select(
- [C.c.column_name, TC.c.constraint_type, C.c.constraint_name],
+ C.c.column_name, TC.c.constraint_type, C.c.constraint_name
+ ).where(
sql.and_(
TC.c.constraint_name == C.c.constraint_name,
TC.c.table_schema == C.c.table_schema,
R = ischema.key_constraints.alias("R")
# Foreign key constraints
- s = sql.select(
- [
+ s = (
+ sql.select(
C.c.column_name,
R.c.table_schema,
R.c.table_name,
RR.c.match_option,
RR.c.update_rule,
RR.c.delete_rule,
- ],
- sql.and_(
- C.c.table_name == tablename,
- C.c.table_schema == owner,
- RR.c.constraint_schema == C.c.table_schema,
- C.c.constraint_name == RR.c.constraint_name,
- R.c.constraint_name == RR.c.unique_constraint_name,
- R.c.constraint_schema == RR.c.unique_constraint_schema,
- C.c.ordinal_position == R.c.ordinal_position,
- ),
- order_by=[RR.c.constraint_name, R.c.ordinal_position],
+ )
+ .where(
+ sql.and_(
+ C.c.table_name == tablename,
+ C.c.table_schema == owner,
+ RR.c.constraint_schema == C.c.table_schema,
+ C.c.constraint_name == RR.c.constraint_name,
+ R.c.constraint_name == RR.c.unique_constraint_name,
+ R.c.constraint_schema == RR.c.unique_constraint_schema,
+ C.c.ordinal_position == R.c.ordinal_position,
+ )
+ )
+ .order_by(RR.c.constraint_name, R.c.ordinal_position)
)
# group rows by constraint ID, to handle multi-column FKs
# legacy stuff.
if should_close_with_result and context._soft_closed:
assert not self._is_future
- assert not context._is_future_result
# CursorResult already exhausted rows / has no rows.
# close us now
be applied to all connections. See
:meth:`~sqlalchemy.engine.Connection.execution_options`
+ :param future: Use the 2.0 style :class:`_future.Engine` and
+ :class:`_future.Connection` API.
+
+      .. versionadded:: 1.4
+
+ .. seealso::
+
+ :ref:`migration_20_toplevel`
+
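+      E.g., a minimal sketch (the URL is illustrative)::
+
+          engine = create_engine(
+              "postgresql://scott:tiger@localhost/test", future=True
+          )
+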
:param hide_parameters: Boolean, when set to True, SQL statement parameters
will not be displayed in INFO logging nor will they be formatted into
the string representation of :class:`.StatementError` objects.
pool._dialect = dialect
# create engine.
- engineclass = kwargs.pop("_future_engine_class", base.Engine)
+ if kwargs.pop("future", False):
+ from sqlalchemy import future
+
+ default_engine_class = future.Engine
+ else:
+ default_engine_class = base.Engine
+
+ engineclass = kwargs.pop("_future_engine_class", default_engine_class)
engine_args = {}
for k in util.get_cls_kwargs(engineclass):
parameters = {}
def check_unicode(test):
- statement = cast_to(
- expression.select([test]).compile(dialect=self)
- )
+ statement = cast_to(expression.select(test).compile(dialect=self))
try:
cursor = connection.connection.cursor()
connection._cursor_execute(cursor, statement, parameters)
cursor.execute(
cast_to(
expression.select(
- [expression.literal_column("'x'").label("some_label")]
+ expression.literal_column("'x'").label("some_label")
).compile(dialect=self)
)
)
)
result = self.session.execute(
- statement, params, execution_options=execution_options
+ statement, params, execution_options=execution_options, future=True
)
if result._attributes.get("is_single_entity", False):
result = result.scalars()
from .engine import Connection # noqa
from .engine import create_engine # noqa
from .engine import Engine # noqa
-from .selectable import Select # noqa
+from ..sql.selectable import Select # noqa
from ..util.langhelpers import public_factory
select = public_factory(Select._create_future_select, ".future.select")
+++ /dev/null
-from ..sql import coercions
-from ..sql import roles
-from ..sql.base import _generative
-from ..sql.selectable import GenerativeSelect
-from ..sql.selectable import Select as _LegacySelect
-from ..sql.selectable import SelectState
-from ..sql.util import _entity_namespace_key
-
-
-class Select(_LegacySelect):
- _is_future = True
- _setup_joins = ()
- _legacy_setup_joins = ()
- inherit_cache = True
-
- @classmethod
- def _create_select(cls, *entities):
- raise NotImplementedError("use _create_future_select")
-
- @classmethod
- def _create_future_select(cls, *entities):
- r"""Construct a new :class:`_expression.Select` using the 2.
- x style API.
-
- .. versionadded:: 2.0 - the :func:`_future.select` construct is
- the same construct as the one returned by
- :func:`_expression.select`, except that the function only
- accepts the "columns clause" entities up front; the rest of the
- state of the SELECT should be built up using generative methods.
-
- Similar functionality is also available via the
- :meth:`_expression.FromClause.select` method on any
- :class:`_expression.FromClause`.
-
- .. seealso::
-
- :ref:`coretutorial_selecting` - Core Tutorial description of
- :func:`_expression.select`.
-
- :param \*entities:
- Entities to SELECT from. For Core usage, this is typically a series
- of :class:`_expression.ColumnElement` and / or
- :class:`_expression.FromClause`
- objects which will form the columns clause of the resulting
- statement. For those objects that are instances of
- :class:`_expression.FromClause` (typically :class:`_schema.Table`
- or :class:`_expression.Alias`
- objects), the :attr:`_expression.FromClause.c`
- collection is extracted
- to form a collection of :class:`_expression.ColumnElement` objects.
-
- This parameter will also accept :class:`_expression.TextClause`
- constructs as
- given, as well as ORM-mapped classes.
-
- """
-
- self = cls.__new__(cls)
- self._raw_columns = [
- coercions.expect(
- roles.ColumnsClauseRole, ent, apply_propagate_attrs=self
- )
- for ent in entities
- ]
-
- GenerativeSelect.__init__(self)
-
- return self
-
- def filter(self, *criteria):
- """A synonym for the :meth:`_future.Select.where` method."""
-
- return self.where(*criteria)
-
- def _exported_columns_iterator(self):
- meth = SelectState.get_plugin_class(self).exported_columns_iterator
- return meth(self)
-
- def _filter_by_zero(self):
- if self._setup_joins:
- meth = SelectState.get_plugin_class(
- self
- ).determine_last_joined_entity
- _last_joined_entity = meth(self)
- if _last_joined_entity is not None:
- return _last_joined_entity
-
- if self._from_obj:
- return self._from_obj[0]
-
- return self._raw_columns[0]
-
- def filter_by(self, **kwargs):
- r"""Apply the given filtering criterion as a WHERE clause
- to this select.
-
- """
- from_entity = self._filter_by_zero()
-
- clauses = [
- _entity_namespace_key(from_entity, key) == value
- for key, value in kwargs.items()
- ]
- return self.filter(*clauses)
-
- @property
- def column_descriptions(self):
- """Return a 'column descriptions' structure which may be
- plugin-specific.
-
- """
- meth = SelectState.get_plugin_class(self).get_column_descriptions
- return meth(self)
-
- @_generative
- def join(self, target, onclause=None, isouter=False, full=False):
- r"""Create a SQL JOIN against this :class:`_expression.Select`
- object's criterion
- and apply generatively, returning the newly resulting
- :class:`_expression.Select`.
-
-
- """
- target = coercions.expect(
- roles.JoinTargetRole, target, apply_propagate_attrs=self
- )
- if onclause is not None:
- onclause = coercions.expect(roles.OnClauseRole, onclause)
- self._setup_joins += (
- (target, onclause, None, {"isouter": isouter, "full": full}),
- )
-
- @_generative
- def join_from(
- self, from_, target, onclause=None, isouter=False, full=False
- ):
- r"""Create a SQL JOIN against this :class:`_expression.Select`
- object's criterion
- and apply generatively, returning the newly resulting
- :class:`_expression.Select`.
-
-
- """
- # note the order of parsing from vs. target is important here, as we
- # are also deriving the source of the plugin (i.e. the subject mapper
- # in an ORM query) which should favor the "from_" over the "target"
-
- from_ = coercions.expect(
- roles.FromClauseRole, from_, apply_propagate_attrs=self
- )
- target = coercions.expect(
- roles.JoinTargetRole, target, apply_propagate_attrs=self
- )
-
- self._setup_joins += (
- (target, onclause, from_, {"isouter": isouter, "full": full}),
- )
-
- def outerjoin(self, target, onclause=None, full=False):
- """Create a left outer join.
-
-
-
- """
- return self.join(target, onclause=onclause, isouter=True, full=full,)
from ..sql import roles
from ..sql import util as sql_util
from ..sql import visitors
+from ..sql.base import _entity_namespace_key
from ..sql.base import _select_iterables
from ..sql.base import CacheableOptions
from ..sql.base import CompileState
# were passed to session.execute:
# session.execute(legacy_select([User.id, User.name]))
# see test_query->test_legacy_tuple_old_select
- if not statement._is_future:
- return result
load_options = execution_options.get(
"_sa_orm_load_options", QueryContext.default_load_options
compound_eager_adapter = None
correlate = None
+ correlate_except = None
_where_criteria = ()
_having_criteria = ()
def create_for_statement(cls, statement, compiler, **kw):
"""compiler hook, we arrive here from compiler.visit_select() only."""
- if not statement._is_future:
- return SelectState(statement, compiler, **kw)
-
if compiler is not None:
toplevel = not compiler.stack
compiler._rewrites_selected_columns = True
for s in query._correlate
)
)
+ elif query._correlate_except:
+ self.correlate_except = tuple(
+ util.flatten_iterator(
+ sql_util.surface_selectables(s) if s is not None else None
+ for s in query._correlate_except
+ )
+ )
elif not query._auto_correlate:
self.correlate = (None,)
hints=self.select_statement._hints,
statement_hints=self.select_statement._statement_hints,
correlate=self.correlate,
+ correlate_except=self.correlate_except,
**self._select_args
)
hints=self.select_statement._hints,
statement_hints=self.select_statement._statement_hints,
correlate=self.correlate,
+ correlate_except=self.correlate_except,
**self._select_args
)
hints,
statement_hints,
correlate,
+ correlate_except,
limit_clause,
offset_clause,
distinct,
if correlate:
statement.correlate.non_generative(statement, *correlate)
+ if correlate_except:
+ statement.correlate_except.non_generative(
+ statement, *correlate_except
+ )
+
return statement
def _adapt_polymorphic_element(self, element):
# string given, e.g. query(Foo).join("bar").
# we look to the left entity or what we last joined
# towards
- onclause = sql.util._entity_namespace_key(
+ onclause = _entity_namespace_key(
inspect(self._joinpoint_zero()), onclause
)
info = inspect(jp0)
if getattr(info, "mapper", None) is onclause._parententity:
- onclause = sql.util._entity_namespace_key(
- info, onclause.key
- )
+ onclause = _entity_namespace_key(info, onclause.key)
# legacy ^^^^^^^^^^^^^^^^^^^^^^^^^^^
if isinstance(onclause, interfaces.PropComparator):
params=load_options._params,
execution_options={"_sa_orm_load_options": load_options},
bind_arguments=bind_arguments,
+ future=True,
)
.unique()
.scalars()
from .. import sql
from .. import util
from ..engine import result as _result
-from ..future import select as future_select
from ..sql import coercions
from ..sql import expression
from ..sql import operators
from ..sql import roles
+from ..sql import select
+from ..sql.base import _entity_namespace_key
from ..sql.base import CompileState
from ..sql.base import Options
from ..sql.dml import DeleteDMLState
from ..sql.dml import UpdateDMLState
from ..sql.elements import BooleanClauseList
-from ..sql.util import _entity_namespace_key
def _bulk_insert(
)
)
- stmt = table.update(clauses)
+ stmt = table.update().where(clauses)
return stmt
cached_stmt = base_mapper._memo(("update", table), update_stmt)
)
)
- stmt = table.update(clauses)
+ stmt = table.update().where(clauses)
if mapper.version_id_col is not None:
stmt = stmt.return_defaults(mapper.version_id_col)
)
)
- return table.delete(clauses)
+ return table.delete().where(clauses)
statement = base_mapper._memo(("delete", table), delete_stmt)
for connection, recs in groupby(delete, lambda rec: rec[1]): # connection
for k, v in iterator:
if mapper:
if isinstance(k, util.string_types):
- desc = sql.util._entity_namespace_key(mapper, k)
+ desc = _entity_namespace_key(mapper, k)
values.extend(desc._bulk_update_tuples(v))
elif "entity_namespace" in k._annotations:
k_anno = k._annotations
):
mapper = update_options._subject_mapper
- select_stmt = future_select(
+ select_stmt = select(
*(mapper.primary_key + (mapper.select_identity_token,))
)
select_stmt._where_criteria = statement._where_criteria
execution_options,
bind_arguments,
_add_event=skip_for_full_returning,
+ future=True,
)
matched_rows = result.fetchall()
from .. import log
from .. import sql
from .. import util
-from ..future.selectable import Select as FutureSelect
from ..sql import coercions
from ..sql import expression
from ..sql import roles
+from ..sql import Select
from ..sql import util as sql_util
from ..sql.annotation import SupportsCloneAnnotations
+from ..sql.base import _entity_namespace_key
from ..sql.base import _generative
from ..sql.base import Executable
from ..sql.selectable import _SelectFromElements
from ..sql.selectable import LABEL_STYLE_NONE
from ..sql.selectable import LABEL_STYLE_TABLENAME_PLUS_COL
from ..sql.selectable import SelectStatementGrouping
-from ..sql.util import _entity_namespace_key
from ..sql.visitors import InternalTraversal
from ..util import collections_abc
stmt._propagate_attrs = self._propagate_attrs
else:
# Query / select() internal attributes are 99% cross-compatible
- stmt = FutureSelect.__new__(FutureSelect)
+ stmt = Select.__new__(Select)
stmt.__dict__.update(self.__dict__)
stmt.__dict__.update(
_label_style=self._label_style,
statement,
params,
execution_options={"_sa_orm_load_options": self.load_options},
+ future=True,
)
# legacy: automatically set scalars, unique
delete_,
self.load_options._params,
execution_options={"synchronize_session": synchronize_session},
+ future=True,
)
bulk_del.result = result
self.session.dispatch.after_bulk_delete(bulk_del)
upd,
self.load_options._params,
execution_options={"synchronize_session": synchronize_session},
+ future=True,
)
bulk_ud.result = result
self.session.dispatch.after_bulk_update(bulk_ud)
crit = j & sql.True_._ifnone(criterion)
if secondary is not None:
- ex = sql.exists(
- [1], crit, from_obj=[dest, secondary]
- ).correlate_except(dest, secondary)
+ ex = (
+ sql.exists(1)
+ .where(crit)
+ .select_from(dest, secondary)
+ .correlate_except(dest, secondary)
+ )
else:
- ex = sql.exists([1], crit, from_obj=dest).correlate_except(
- dest
+ ex = (
+ sql.exists(1)
+ .where(crit)
+ .select_from(dest)
+ .correlate_except(dest)
)
return ex
"_compile_state_cls",
"_starting_event_idx",
"_events_todo",
+ "_future",
)
def __init__(
bind_arguments,
compile_state_cls,
events_todo,
+ future,
):
self.session = session
self.statement = statement
self.bind_arguments = bind_arguments
self._compile_state_cls = compile_state_cls
self._events_todo = list(events_todo)
+ self._future = future
def _remaining_events(self):
return self._events_todo[self._starting_event_idx + 1 :]
_execution_options,
_bind_arguments,
_parent_execute_state=self,
+ future=self._future,
)
@property
self,
bind=None,
autoflush=True,
+ future=False,
expire_on_commit=True,
autocommit=False,
twophase=False,
so that all attribute/object access subsequent to a completed
transaction will load from the most recent database state.
+ :param future: if True, use 2.0 style behavior for the
+ :meth:`_orm.Session.execute` method. This includes that the
+ :class:`_engine.Result` object returned will return new-style
+ tuple rows, as well as that Core constructs such as
+ :class:`_sql.Select`,
+ :class:`_sql.Update` and :class:`_sql.Delete` will be interpreted
+ in an ORM context if they are made against ORM entities rather than
+ plain :class:`.Table` metadata objects.
+
+ The "future" flag is also available on a per-execution basis
+ using the :paramref:`_orm.Session.execute.future` flag.
+
+ .. versionadded:: 1.4
+
+ .. seealso::
+
+ :ref:`migration_20_toplevel`
+
+ :ref:`migration_20_result_rows`
+
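+          E.g. (``some_engine`` is illustrative)::
+
+              session = Session(some_engine, future=True)
+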
:param info: optional dictionary of arbitrary data to be associated
with this :class:`.Session`. Is available via the
:attr:`.Session.info` attribute. Note the dictionary is copied at
self._flushing = False
self._warn_on_events = False
self._transaction = None
+ self.future = future
self.hash_key = _new_sessionid()
self.autoflush = autoflush
self.autocommit = autocommit
params=None,
execution_options=util.immutabledict(),
bind_arguments=None,
+ future=False,
_parent_execute_state=None,
_add_event=None,
**kw
Contents of this dictionary are passed to the
:meth:`.Session.get_bind` method.
+ :param future:
+ Use future style execution for this statement. This is
+ the same effect as the :paramref:`_orm.Session.future` flag,
+ except at the level of this single statement execution. See
+ that flag for details.
+
+ .. versionadded:: 1.4
+
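+          E.g., enabling future mode for a single execution (``stmt`` is
+          illustrative)::
+
+              result = session.execute(stmt, future=True)
+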
:param mapper:
deprecated; use the bind_arguments dictionary
"""
statement = coercions.expect(roles.CoerceTextStatementRole, statement)
+ future = future or self.future
+
if not bind_arguments:
bind_arguments = kw
elif kw:
bind_arguments.update(kw)
- if (
+ if future and (
statement._propagate_attrs.get("compile_state_plugin", None)
== "orm"
):
+ # note that even without "future" mode, we need
compile_state_cls = CompileState._get_plugin_class_for_plugin(
statement, "orm"
)
)
else:
bind_arguments.setdefault("clause", statement)
- if statement._is_future:
+ if future:
execution_options = util.immutabledict().merge_with(
execution_options, {"future_result": True}
)
bind_arguments,
compile_state_cls,
events_todo,
+ future,
)
for idx, fn in enumerate(events_todo):
orm_exec_state._starting_event_idx = idx
)
raise exc.UnboundExecutionError(msg)
return bind
+
+
+def _entity_namespace_key(entity, key):
+ """Return an entry from an entity_namespace.
+
+    Raises :class:`_exc.InvalidRequestError` rather than ``AttributeError``
+    when the key is not found.
+
+ """
+
+ ns = entity.entity_namespace
+ try:
+ return getattr(ns, key)
+ except AttributeError as err:
+ util.raise_(
+ exc.InvalidRequestError(
+ 'Entity namespace for "%s" has no property "%s"'
+ % (entity, key)
+ ),
+ replace_context=err,
+ )
lateral = public_factory(Lateral._factory, ".sql.expression.lateral")
or_ = public_factory(BooleanClauseList.or_, ".sql.expression.or_")
bindparam = public_factory(BindParameter, ".sql.expression.bindparam")
-select = public_factory(Select, ".sql.expression.select")
+select = public_factory(Select._create, ".sql.expression.select")
text = public_factory(TextClause._create_text, ".sql.expression.text")
table = public_factory(TableClause, ".sql.expression.table")
column = public_factory(ColumnClause, ".sql.expression.column")
from .base import _clone
from .base import _cloned_difference
from .base import _cloned_intersection
+from .base import _entity_namespace_key
from .base import _expand_cloned
from .base import _from_objects
from .base import _generative
:func:`_expression.select` function.
"""
- return Select(*args, **kwargs).subquery(alias)
+ return Select.create_legacy_select(*args, **kwargs).subquery(alias)
class ReturnsRows(roles.ReturnsRowsRole, ClauseElement):
_use_schema_map = False
- def select(self, whereclause=None, **params):
- """Return a SELECT of this :class:`_expression.FromClause`.
+ @util.deprecated_params(
+ whereclause=(
+ "2.0",
+ "The :paramref:`_sql.FromClause.select().whereclause` parameter "
+ "is deprecated and will be removed in version 2.0. "
+ "Please make use of "
+ "the :meth:`.Select.where` "
+ "method to add WHERE criteria to the SELECT statement.",
+ ),
+ kwargs=(
+ "2.0",
+ "The :meth:`_sql.FromClause.select` method will no longer accept "
+ "keyword arguments in version 2.0. Please use generative methods "
+ "from the "
+ ":class:`_sql.Select` construct in order to apply additional "
+ "modifications.",
+ ),
+ )
+ def select(self, whereclause=None, **kwargs):
+ r"""Return a SELECT of this :class:`_expression.FromClause`.
+
+ e.g.::
+
+ stmt = some_table.select().where(some_table.c.id == 5)
+
+ :param whereclause: a WHERE clause, equivalent to calling the
+ :meth:`_sql.Select.where` method.
+
+ :param \**kwargs: additional keyword arguments are passed to the
+ legacy constructor for :class:`_sql.Select` described at
+ :meth:`_sql.Select.create_legacy_select`.
.. seealso::
method which allows for arbitrary column lists.
"""
-
- return Select([self], whereclause, **params)
+ if whereclause is not None:
+ kwargs["whereclause"] = whereclause
+ return Select._create_select_from_fromclause(self, [self], **kwargs)
def join(self, right, onclause=None, isouter=False, full=False):
"""Return a :class:`_expression.Join` from this
"join explicitly." % (a.description, b.description)
)
+ @util.deprecated_params(
+ whereclause=(
+ "2.0",
+ "The :paramref:`_sql.Join.select().whereclause` parameter "
+ "is deprecated and will be removed in version 2.0. "
+ "Please make use of "
+ "the :meth:`.Select.where` "
+ "method to add WHERE criteria to the SELECT statement.",
+ ),
+ kwargs=(
+ "2.0",
+ "The :meth:`_sql.Join.select` method will no longer accept "
+ "keyword arguments in version 2.0. Please use generative "
+ "methods from the "
+ ":class:`_sql.Select` construct in order to apply additional "
+ "modifications.",
+ ),
+ )
def select(self, whereclause=None, **kwargs):
r"""Create a :class:`_expression.Select` from this
:class:`_expression.Join`.
- The equivalent long-hand form, given a :class:`_expression.Join`
- object
- ``j``, is::
+ E.g.::
+
+ stmt = table_a.join(table_b, table_a.c.id == table_b.c.a_id)
- from sqlalchemy import select
- j = select([j.left, j.right], **kw).\
- where(whereclause).\
- select_from(j)
+ stmt = stmt.select()
- :param whereclause: the WHERE criterion that will be sent to
- the :func:`select()` function
+ The above will produce a SQL string resembling::
- :param \**kwargs: all other kwargs are sent to the
- underlying :func:`select()` function.
+ SELECT table_a.id, table_a.col, table_b.id, table_b.a_id
+ FROM table_a JOIN table_b ON table_a.id = table_b.a_id
+
+ :param whereclause: WHERE criteria, same as calling
+ :meth:`_sql.Select.where` on the resulting statement
+
+ :param \**kwargs: additional keyword arguments are passed to the
+ legacy constructor for :class:`_sql.Select` described at
+ :meth:`_sql.Select.create_legacy_select`.
"""
collist = [self.left, self.right]
def select(self, *arg, **kw):
return self._implicit_subquery.select(*arg, **kw)
- @util.deprecated(
- "1.4",
- "The :meth:`_expression.SelectBase.join` method is deprecated "
- "and will be removed in a future release; this method implicitly "
- "creates a subquery that should be explicit. "
- "Please call :meth:`_expression.SelectBase.subquery` "
- "first in order to create "
- "a subquery, which then can be selected.",
- )
- def join(self, *arg, **kw):
- return self._implicit_subquery.join(*arg, **kw)
-
- @util.deprecated(
- "1.4",
- "The :meth:`_expression.SelectBase.outerjoin` method is deprecated "
- "and will be removed in a future release; this method implicitly "
- "creates a subquery that should be explicit. "
- "Please call :meth:`_expression.SelectBase.subquery` "
- "first in order to create "
- "a subquery, which then can be selected.",
- )
- def outerjoin(self, *arg, **kw):
- return self._implicit_subquery.outerjoin(*arg, **kw)
-
@HasMemoized.memoized_attribute
def _implicit_subquery(self):
return self.subquery()
for s in selects
]
+ if kwargs and util.SQLALCHEMY_WARN_20:
+ util.warn_deprecated_20(
+            "Set functions such as union(), union_all(), except_(), etc. "
+ "in SQLAlchemy 2.0 will accept a "
+ "series of SELECT statements only. "
+ "Please use generative methods such as order_by() for "
+ "additional modifications to this CompoundSelect.",
+ stacklevel=4,
+ )
+
GenerativeSelect.__init__(self, **kwargs)
@classmethod
__visit_name__ = "select"
- _is_future = False
_setup_joins = ()
_legacy_setup_joins = ()
("compile_options", InternalTraversal.dp_has_cache_key)
]
- @classmethod
- def _create_select(cls, *entities):
- r"""Construct an old style :class:`_expression.Select` using the
- the 2.x style constructor.
-
- """
-
- self = cls.__new__(cls)
- self._raw_columns = [
- coercions.expect(roles.ColumnsClauseRole, ent) for ent in entities
- ]
-
- GenerativeSelect.__init__(self)
-
- return self
-
@classmethod
def _create_select_from_fromclause(cls, target, entities, *arg, **kw):
if arg or kw:
- if util.SQLALCHEMY_WARN_20:
- util.warn_deprecated_20(
- "Passing arguments to %s.select() is deprecated and "
- "will be removed in SQLAlchemy 2.0. "
- "Please use generative "
- "methods such as select().where(), etc."
- % (target.__class__.__name__,)
- )
- return Select(entities, *arg, **kw)
+ return Select.create_legacy_select(entities, *arg, **kw)
else:
return Select._create_select(*entities)
- def __init__(
- self,
+ @classmethod
+ @util.deprecated(
+ "2.0",
+ "The legacy calling style of :func:`_sql.select` is deprecated and "
+ "will be removed in SQLAlchemy 2.0. Please use the new calling "
+ "style described at :func:`_sql.select`.",
+ )
+ def create_legacy_select(
+ cls,
columns=None,
whereclause=None,
from_obj=None,
suffixes=None,
**kwargs
):
- """Construct a new :class:`_expression.Select` using the 1.x style
- API.
+ """Construct a new :class:`_expression.Select` using the 1.x style API.
+
+ This method is called implicitly when the :func:`_expression.select`
+ construct is used and the first argument is a Python list or other
+ plain sequence object, which is taken to refer to the columns
+ collection.
+
+ .. versionchanged:: 1.4 Added the :meth:`.Select.create_legacy_select`
+ constructor which documents the calling style in use when the
+ :func:`.select` construct is invoked using 1.x-style arguments.
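+
+        E.g., an invocation that is routed to this constructor (``table``
+        is illustrative)::
+
+            stmt = select([table.c.col1], table.c.col1 == 5)
+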
Similar functionality is also available via the
:meth:`_expression.FromClause.select` method on any
:class:`_expression.FromClause`.
- All arguments which accept :class:`_expression.ClauseElement`
- arguments also
- accept string arguments, which will be converted as appropriate into
- either :func:`_expression.text` or
- :func:`_expression.literal_column` constructs.
+ All arguments which accept :class:`_expression.ClauseElement` arguments
+ also accept string arguments, which will be converted as appropriate
+ into either :func:`_expression.text()` or
+ :func:`_expression.literal_column()` constructs.
.. seealso::
:meth:`_expression.Select.apply_labels`
"""
- if util.SQLALCHEMY_WARN_20:
- util.warn_deprecated_20(
- "The select() function in SQLAlchemy 2.0 will accept a "
- "series of columns / tables and other entities only, "
- "passed positionally. For forwards compatibility, use the "
- "sqlalchemy.future.select() construct.",
- stacklevel=4,
- )
+ self = cls.__new__(cls)
self._auto_correlate = correlate
except TypeError as err:
util.raise_(
exc.ArgumentError(
- "columns argument to select() must "
- "be a Python list or other iterable"
+ "select() construct created in legacy mode, i.e. with "
+ "keyword arguments, must provide the columns argument as "
+ "a Python list or other iterable.",
+ code="c9ae",
),
from_=err,
)
self._setup_suffixes(suffixes)
GenerativeSelect.__init__(self, **kwargs)
+ return self
+
+ @classmethod
+ def _create_future_select(cls, *entities):
+        r"""Construct a new :class:`_expression.Select` using the
+        2.x style API.
+
+ .. versionadded:: 1.4 - The :func:`_sql.select` function now accepts
+ column arguments positionally. The top-level :func:`_sql.select`
+ function will automatically use the 1.x or 2.x style API based on
+          the incoming arguments; using :func:`_future.select` from the
+ ``sqlalchemy.future`` module will enforce that only the 2.x style
+ constructor is used.
+
+ Similar functionality is also available via the
+ :meth:`_expression.FromClause.select` method on any
+ :class:`_expression.FromClause`.
+
+ .. seealso::
+
+ :ref:`coretutorial_selecting` - Core Tutorial description of
+ :func:`_expression.select`.
+
+ :param \*entities:
+ Entities to SELECT from. For Core usage, this is typically a series
+ of :class:`_expression.ColumnElement` and / or
+ :class:`_expression.FromClause`
+ objects which will form the columns clause of the resulting
+ statement. For those objects that are instances of
+ :class:`_expression.FromClause` (typically :class:`_schema.Table`
+ or :class:`_expression.Alias`
+ objects), the :attr:`_expression.FromClause.c`
+ collection is extracted
+ to form a collection of :class:`_expression.ColumnElement` objects.
+
+ This parameter will also accept :class:`_expression.TextClause`
+ constructs as
+ given, as well as ORM-mapped classes.
+
+ """
+
+ self = cls.__new__(cls)
+ self._raw_columns = [
+ coercions.expect(
+ roles.ColumnsClauseRole, ent, apply_propagate_attrs=self
+ )
+ for ent in entities
+ ]
+
+ GenerativeSelect.__init__(self)
+
+ return self
+
+ _create_select = _create_future_select
+
+ @classmethod
+ def _create(cls, *args, **kw):
+ r"""Create a :class:`.Select` using either the 1.x or 2.0 constructor
+ style.
+
+        For the legacy calling style, see
+        :meth:`.Select.create_legacy_select`. If the first argument passed
+        is a Python list or if keyword arguments are present, the legacy
+        style is used.
+
+ .. versionadded:: 2.0 - the :func:`_future.select` construct is
+ the same construct as the one returned by
+ :func:`_expression.select`, except that the function only
+ accepts the "columns clause" entities up front; the rest of the
+ state of the SELECT should be built up using generative methods.
+
+ Similar functionality is also available via the
+ :meth:`_expression.FromClause.select` method on any
+ :class:`_expression.FromClause`.
+
+ .. seealso::
+
+ :ref:`coretutorial_selecting` - Core Tutorial description of
+ :func:`_expression.select`.
+
+ :param \*entities:
+ Entities to SELECT from. For Core usage, this is typically a series
+ of :class:`_expression.ColumnElement` and / or
+ :class:`_expression.FromClause`
+ objects which will form the columns clause of the resulting
+ statement. For those objects that are instances of
+ :class:`_expression.FromClause` (typically :class:`_schema.Table`
+ or :class:`_expression.Alias`
+ objects), the :attr:`_expression.FromClause.c`
+ collection is extracted
+ to form a collection of :class:`_expression.ColumnElement` objects.
+
+ This parameter will also accept :class:`_expression.TextClause`
+ constructs as given, as well as ORM-mapped classes.
+
+ """
+ if (args and isinstance(args[0], list)) or kw:
+ return cls.create_legacy_select(*args, **kw)
+ else:
+ return cls._create_future_select(*args)
+
+    def __init__(self):
+ raise NotImplementedError()
def _scalar_type(self):
elem = self._raw_columns[0]
cols = list(elem._select_iterable)
return cols[0].type
+ def filter(self, *criteria):
+ """A synonym for the :meth:`_future.Select.where` method."""
+
+ return self.where(*criteria)
+
+ def _filter_by_zero(self):
+ if self._setup_joins:
+ meth = SelectState.get_plugin_class(
+ self
+ ).determine_last_joined_entity
+ _last_joined_entity = meth(self)
+ if _last_joined_entity is not None:
+ return _last_joined_entity
+
+ if self._from_obj:
+ return self._from_obj[0]
+
+ return self._raw_columns[0]
+
+ def filter_by(self, **kwargs):
+        r"""Apply the given filtering criterion as a WHERE clause
+ to this select.
+
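+        E.g. (illustrative; ``users`` is assumed to be a
+        :class:`_schema.Table`)::
+
+            stmt = select(users).filter_by(name="some name")
+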
+ """
+ from_entity = self._filter_by_zero()
+
+ clauses = [
+ _entity_namespace_key(from_entity, key) == value
+ for key, value in kwargs.items()
+ ]
+ return self.filter(*clauses)
+
+ @property
+ def column_descriptions(self):
+ """Return a 'column descriptions' structure which may be
+ plugin-specific.
+
+ """
+ meth = SelectState.get_plugin_class(self).get_column_descriptions
+ return meth(self)
+
+ @_generative
+ def join(self, target, onclause=None, isouter=False, full=False):
+        r"""Create a SQL JOIN against this :class:`_expression.Select`
+ object's criterion
+ and apply generatively, returning the newly resulting
+ :class:`_expression.Select`.
+
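+        E.g. (a minimal sketch; ``users`` and ``addresses`` are assumed to be
+        related :class:`_schema.Table` objects)::
+
+            stmt = select(users).join(
+                addresses, users.c.id == addresses.c.user_id
+            )
+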
+ .. versionchanged:: 1.4 :meth:`_expression.Select.join` now modifies
+ the FROM list of the :class:`.Select` object in place, rather than
+ implicitly producing a subquery.
+
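+ E.g., assuming ``user_table`` and ``address_table`` are
+ :class:`_schema.Table` objects (hypothetical names)::
+
+ stmt = select(user_table).join(
+ address_table, user_table.c.id == address_table.c.user_id
+ )
+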
+ :param target: target table to join towards
+
+ :param onclause: ON clause of the join.
+
+ :param isouter: if True, generate LEFT OUTER join. Same as
+ :meth:`_expression.Select.outerjoin`.
+
+ :param full: if True, generate FULL OUTER join.
+
+ .. seealso::
+
+ :meth:`_expression.Select.join_from`
+
+ """
+ target = coercions.expect(
+ roles.JoinTargetRole, target, apply_propagate_attrs=self
+ )
+ if onclause is not None:
+ onclause = coercions.expect(roles.OnClauseRole, onclause)
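+ # the join is accumulated as a (target, onclause, from_, flags)
+ # tuple (from_ is None here) and is resolved against the FROM list
+ # at compile time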
+ self._setup_joins += (
+ (target, onclause, None, {"isouter": isouter, "full": full}),
+ )
+
+ @_generative
+ def join_from(
+ self, from_, target, onclause=None, isouter=False, full=False
+ ):
+ r"""Create a SQL JOIN against this :class:`_expresson.Select`
+ object's criterion
+ and apply generatively, returning the newly resulting
+ :class:`_expression.Select`.
+
+ .. versionadded:: 1.4
+
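+ E.g., assuming ``user_table`` and ``address_table`` are
+ :class:`_schema.Table` objects (hypothetical names)::
+
+ stmt = select(user_table.c.name).join_from(
+ user_table, address_table, user_table.c.id == address_table.c.user_id
+ )
+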
+ :param from\_: the left side of the join, will be rendered in the
+ FROM clause and is roughly equivalent to using the
+ :meth:`.Select.select_from` method.
+
+ :param target: target table to join towards
+
+ :param onclause: ON clause of the join.
+
+ :param isouter: if True, generate LEFT OUTER join. Same as
+ :meth:`_expression.Select.outerjoin`.
+
+ :param full: if True, generate FULL OUTER join.
+
+ .. seealso::
+
+ :meth:`_expression.Select.join`
+
+ """
+ # note the order of parsing from vs. target is important here, as we
+ # are also deriving the source of the plugin (i.e. the subject mapper
+ # in an ORM query) which should favor the "from_" over the "target"
+
+ from_ = coercions.expect(
+ roles.FromClauseRole, from_, apply_propagate_attrs=self
+ )
+ target = coercions.expect(
+ roles.JoinTargetRole, target, apply_propagate_attrs=self
+ )
+ if onclause is not None:
+ onclause = coercions.expect(roles.OnClauseRole, onclause)
+
+ self._setup_joins += (
+ (target, onclause, from_, {"isouter": isouter, "full": full}),
+ )
+
+ def outerjoin(self, target, onclause=None, full=False):
+ """Create a left outer join.
+
+ Parameters are the same as that of :meth:`_expression.Select.join`.
+
+ .. versionchanged:: 1.4 :meth:`_expression.Select.outerjoin` now
+ modifies the FROM list of the :class:`.Select` object in place,
+ rather than implicitly producing a subquery.
+
+ """
+ return self.join(target, onclause=onclause, isouter=True, full=full)
+
@property
def froms(self):
"""Return the displayed list of :class:`_expression.FromClause`
return ColumnCollection(collection).as_immutable()
def _exported_columns_iterator(self):
- return _select_iterables(self._raw_columns)
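+ # the set of exported columns is now determined by the compile-state
+ # plugin for this statement (e.g. the ORM plugin), rather than
+ # directly from _raw_columns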
+ meth = SelectState.get_plugin_class(self).exported_columns_iterator
+ return meth(self)
def _ensure_disambiguated_names(self):
if self._label_style is LABEL_STYLE_NONE:
inherit_cache = True
def __init__(self, *args, **kwargs):
- """Construct a new :class:`_expression.Exists` against an existing
- :class:`_expression.Select` object.
+ """Construct a new :class:`_expression.Exists` construct.
- Calling styles are of the following forms::
+ The modern form of :func:`.exists` is to invoke with no arguments,
+ which will produce an ``"EXISTS *"`` construct. A WHERE clause
+ is then added using the :meth:`.Exists.where` method::
- # use on an existing select()
- s = select([table.c.col1]).where(table.c.col2==5)
- s_e = exists(s)
+ exists_criteria = exists().where(table1.c.col1 == table2.c.col2)
- # an exists is usually used in a where of another select
- # to produce a WHERE EXISTS (SELECT ... )
- select([table.c.col1]).where(s_e)
+ The EXISTS criteria is then used inside of an enclosing SELECT::
- # but can also be used in a select to produce a
- # SELECT EXISTS (SELECT ... ) query
- select([s_e])
+ stmt = select(table1.c.col1).where(exists_criteria)
- # construct a select() at once
- exists(['*'], **select_arguments).where(criterion)
+ The above statement will then be of the form::
- # columns argument is optional, generates "EXISTS (SELECT *)"
- # by default.
- exists().where(table.c.col2==5)
+ SELECT col1 FROM table1 WHERE EXISTS
+ (SELECT * FROM table2 WHERE table2.col2 = table1.col1)
"""
if args and isinstance(args[0], (SelectBase, ScalarSelect)):
s = args[0]
else:
if not args:
- args = ([literal_column("*")],)
- s = Select(*args, **kwargs).scalar_subquery()
+ args = (literal_column("*"),)
+ s = Select._create(*args, **kwargs).scalar_subquery()
UnaryExpression.__init__(
self,
element = fn(element)
return element.self_group(against=operators.exists)
- def select(self, whereclause=None, **params):
+ @util.deprecated_params(
+ whereclause=(
+ "2.0",
+ "The :paramref:`_sql.Exists.select().whereclause` parameter "
+ "is deprecated and will be removed in version 2.0. "
+ "Please make use "
+ "of the :meth:`.Select.where` "
+ "method to add WHERE criteria to the SELECT statement.",
+ ),
+ kwargs=(
+ "2.0",
+ "The :meth:`_sql.Exists.select` method will no longer accept "
+ "keyword arguments in version 2.0. "
+ "Please use generative methods from the "
+ ":class:`_sql.Select` construct in order to apply additional "
+ "modifications.",
+ ),
+ )
+ def select(self, whereclause=None, **kwargs):
+ r"""Return a SELECT of this :class:`_expression.Exists`.
+
+ e.g.::
+
+ stmt = exists(some_table.c.id).where(some_table.c.id == 5).select()
+
+ This will produce a statement resembling::
+
+ SELECT EXISTS (SELECT id FROM some_table WHERE some_table.id = :id_1) AS anon_1
+
+ :param whereclause: a WHERE clause, equivalent to calling the
+ :meth:`_sql.Select.where` method.
+
+ :param \**kwargs: additional keyword arguments are passed to the
+ legacy constructor for :class:`_sql.Select` described at
+ :meth:`_sql.Select.create_legacy_select`.
+
+ .. seealso::
+
+ :func:`_expression.select` - general purpose
+ function which allows for arbitrary column lists.
+
+ """ # noqa
+
if whereclause is not None:
- params["whereclause"] = whereclause
- return Select._create_select_from_fromclause(self, [self], **params)
+ kwargs["whereclause"] = whereclause
+ return Select._create_select_from_fromclause(self, [self], **kwargs)
def correlate(self, *fromclause):
e = self._clone()
)
return e
- def select_from(self, clause):
+ def select_from(self, *froms):
"""Return a new :class:`_expression.Exists` construct,
applying the given
expression to the :meth:`_expression.Select.select_from`
"""
e = self._clone()
- e.element = self._regroup(lambda element: element.select_from(clause))
+ e.element = self._regroup(lambda element: element.select_from(*froms))
return e
def where(self, clause):
def __setstate__(self, state):
self.__dict__.update(state)
self.columns = util.WeakPopulateDict(self._locate_col)
-
-
-def _entity_namespace_key(entity, key):
- """Return an entry from an entity_namespace.
-
-
- Raises :class:`_exc.InvalidRequestError` rather than attribute error
- on not found.
-
- """
-
- ns = entity.entity_namespace
- try:
- return getattr(ns, key)
- except AttributeError as err:
- util.raise_(
- exc.InvalidRequestError(
- 'Entity namespace for "%s" has no property "%s"'
- % (entity, key)
- ),
- replace_context=err,
- )
with mock.patch("warnings.warn", our_warn), mock.patch(
"sqlalchemy.util.SQLALCHEMY_WARN_20", True
- ), mock.patch("sqlalchemy.engine.row.LegacyRow._default_key_style", 2):
+ ), mock.patch(
+ "sqlalchemy.util.deprecations.SQLALCHEMY_WARN_20", True
+ ), mock.patch(
+ "sqlalchemy.engine.row.LegacyRow._default_key_style", 2
+ ):
yield
if assert_ and (not py2konly or not compat.py3k):
connection.execute(unicode_table.insert(), {"unicode_data": self.data})
- row = connection.execute(
- select([unicode_table.c.unicode_data])
- ).first()
+ row = connection.execute(select(unicode_table.c.unicode_data)).first()
eq_(row, (self.data,))
assert isinstance(row[0], util.text_type)
)
rows = connection.execute(
- select([unicode_table.c.unicode_data])
+ select(unicode_table.c.unicode_data)
).fetchall()
eq_(rows, [(self.data,) for i in range(3)])
for row in rows:
unicode_table = self.tables.unicode_table
connection.execute(unicode_table.insert(), {"unicode_data": None})
- row = connection.execute(
- select([unicode_table.c.unicode_data])
- ).first()
+ row = connection.execute(select(unicode_table.c.unicode_data)).first()
eq_(row, (None,))
def _test_empty_strings(self, connection):
unicode_table = self.tables.unicode_table
connection.execute(unicode_table.insert(), {"unicode_data": u("")})
- row = connection.execute(
- select([unicode_table.c.unicode_data])
- ).first()
+ row = connection.execute(select(unicode_table.c.unicode_data)).first()
eq_(row, (u(""),))
def test_literal(self):
text_table = self.tables.text_table
connection.execute(text_table.insert(), {"text_data": "some text"})
- row = connection.execute(select([text_table.c.text_data])).first()
+ row = connection.execute(select(text_table.c.text_data)).first()
eq_(row, ("some text",))
@testing.requires.empty_strings_text
text_table = self.tables.text_table
connection.execute(text_table.insert(), {"text_data": ""})
- row = connection.execute(select([text_table.c.text_data])).first()
+ row = connection.execute(select(text_table.c.text_data)).first()
eq_(row, ("",))
def test_text_null_strings(self, connection):
text_table = self.tables.text_table
connection.execute(text_table.insert(), {"text_data": None})
- row = connection.execute(select([text_table.c.text_data])).first()
+ row = connection.execute(select(text_table.c.text_data)).first()
eq_(row, (None,))
def test_literal(self):
connection.execute(date_table.insert(), {"date_data": self.data})
- row = connection.execute(select([date_table.c.date_data])).first()
+ row = connection.execute(select(date_table.c.date_data)).first()
compare = self.compare or self.data
eq_(row, (compare,))
connection.execute(date_table.insert(), {"date_data": None})
- row = connection.execute(select([date_table.c.date_data])).first()
+ row = connection.execute(select(date_table.c.date_data)).first()
eq_(row, (None,))
@testing.requires.datetime_literals
date_table.insert(), {"date_data": self.data}
)
id_ = result.inserted_primary_key[0]
- stmt = select([date_table.c.id]).where(
+ stmt = select(date_table.c.id).where(
case(
[
(
connection.execute(int_table.insert(), {"integer_data": data})
- row = connection.execute(select([int_table.c.integer_data])).first()
+ row = connection.execute(select(int_table.c.integer_data)).first()
eq_(row, (data,))
def test_float_coerce_round_trip(self, connection):
expr = 15.7563
- val = connection.scalar(select([literal(expr)]))
+ val = connection.scalar(select(literal(expr)))
eq_(val, expr)
# this does not work in MySQL, see #4036, however we choose not
def test_decimal_coerce_round_trip(self, connection):
expr = decimal.Decimal("15.7563")
- val = connection.scalar(select([literal(expr)]))
+ val = connection.scalar(select(literal(expr)))
eq_(val, expr)
@testing.emits_warning(r".*does \*not\* support Decimal objects natively")
def test_decimal_coerce_round_trip_w_cast(self, connection):
expr = decimal.Decimal("15.7563")
- val = connection.scalar(select([cast(expr, Numeric(10, 4))]))
+ val = connection.scalar(select(cast(expr, Numeric(10, 4))))
eq_(val, expr)
@testing.requires.precision_numerics_general
)
row = connection.execute(
- select(
- [boolean_table.c.value, boolean_table.c.unconstrained_value]
- )
+ select(boolean_table.c.value, boolean_table.c.unconstrained_value)
).first()
eq_(row, (True, False))
)
row = connection.execute(
- select(
- [boolean_table.c.value, boolean_table.c.unconstrained_value]
- )
+ select(boolean_table.c.value, boolean_table.c.unconstrained_value)
).first()
eq_(row, (None, None))
eq_(
conn.scalar(
- select([boolean_table.c.id]).where(boolean_table.c.value)
+ select(boolean_table.c.id).where(boolean_table.c.value)
),
1,
)
eq_(
conn.scalar(
- select([boolean_table.c.id]).where(
+ select(boolean_table.c.id).where(
boolean_table.c.unconstrained_value
)
),
)
eq_(
conn.scalar(
- select([boolean_table.c.id]).where(~boolean_table.c.value)
+ select(boolean_table.c.id).where(~boolean_table.c.value)
),
2,
)
eq_(
conn.scalar(
- select([boolean_table.c.id]).where(
+ select(boolean_table.c.id).where(
~boolean_table.c.unconstrained_value
)
),
data_table.insert(), {"name": "row1", "data": data_element}
)
- row = connection.execute(select([data_table.c.data])).first()
+ row = connection.execute(select(data_table.c.data)).first()
eq_(row, (data_element,))
expr = data_table.c.data["key1"]
expr = getattr(expr, "as_%s" % datatype)()
- roundtrip = conn.scalar(select([expr]))
+ roundtrip = conn.scalar(select(expr))
eq_(roundtrip, value)
if util.py3k: # skip py2k to avoid comparing unicode to str etc.
is_(type(roundtrip), type(value))
expr = data_table.c.data["key1"]
expr = getattr(expr, "as_%s" % datatype)()
- row = conn.execute(select([expr]).where(expr == value)).first()
+ row = conn.execute(select(expr).where(expr == value)).first()
# make sure we get a row even if value is None
eq_(row, (value,))
expr = data_table.c.data[("key1", "subkey1")]
expr = getattr(expr, "as_%s" % datatype)()
- row = conn.execute(select([expr]).where(expr == value)).first()
+ row = conn.execute(select(expr).where(expr == value)).first()
# make sure we get a row even if value is None
eq_(row, (value,))
)
row = conn.execute(
- select([data_table.c.data, data_table.c.nulldata])
+ select(data_table.c.data, data_table.c.nulldata)
).first()
eq_(row, (data_element, data_element))
conn.execute(
data_table.insert(), {"name": "row1", "data": data_element}
)
- row = conn.execute(select([data_table.c.data])).first()
+ row = conn.execute(select(data_table.c.data)).first()
eq_(row, (data_element,))
eq_(js.mock_calls, [mock.call(data_element)])
eq_(
conn.scalar(
- select([self.tables.data_table.c.name]).where(col.is_(null()))
+ select(self.tables.data_table.c.name).where(col.is_(null()))
),
"r1",
)
- eq_(conn.scalar(select([col])), None)
+ eq_(conn.scalar(select(col)), None)
def test_round_trip_json_null_as_json_null(self, connection):
col = self.tables.data_table.c["data"]
eq_(
conn.scalar(
- select([self.tables.data_table.c.name]).where(
+ select(self.tables.data_table.c.name).where(
cast(col, String) == "null"
)
),
"r1",
)
- eq_(conn.scalar(select([col])), None)
+ eq_(conn.scalar(select(col)), None)
def test_round_trip_none_as_json_null(self):
col = self.tables.data_table.c["data"]
eq_(
conn.scalar(
- select([self.tables.data_table.c.name]).where(
+ select(self.tables.data_table.c.name).where(
cast(col, String) == "null"
)
),
"r1",
)
- eq_(conn.scalar(select([col])), None)
+ eq_(conn.scalar(select(col)), None)
def test_unicode_round_trip(self):
# note we include Unicode supplementary characters as well
)
eq_(
- conn.scalar(select([self.tables.data_table.c.data])),
+ conn.scalar(select(self.tables.data_table.c.data)),
{
util.u("réve🐍 illé"): util.u("réve🐍 illé"),
"data": {"k1": util.u("drôl🐍e")},
def _test_index_criteria(self, crit, expected, test_literal=True):
self._criteria_fixture()
with config.db.connect() as conn:
- stmt = select([self.tables.data_table.c.name]).where(crit)
+ stmt = select(self.tables.data_table.c.name).where(crit)
eq_(conn.scalar(stmt), expected)
"ignore", category=DeprecationWarning, message=".*inspect.get.*argspec"
)
- # ignore 2.0 warnings unless we are explicitly testing for them
- warnings.filterwarnings("ignore", category=sa_exc.RemovedIn20Warning)
-
# ignore things that are deprecated *as of* 2.0 :)
warnings.filterwarnings(
"ignore",
from .deprecations import deprecated_cls # noqa
from .deprecations import deprecated_params # noqa
from .deprecations import inject_docstring_text # noqa
+from .deprecations import SQLALCHEMY_WARN_20 # noqa
from .deprecations import warn_deprecated # noqa
from .deprecations import warn_deprecated_20 # noqa
from .langhelpers import add_parameter_text # noqa
from .langhelpers import warn_exception # noqa
from .langhelpers import warn_limited # noqa
from .langhelpers import wrap_callable # noqa
-
-
-SQLALCHEMY_WARN_20 = False
"""Helpers related to deprecation of functions, methods, classes, other
functionality."""
+import os
import re
import warnings
from .. import exc
+SQLALCHEMY_WARN_20 = False
+
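+ # e.g. running a test suite as ``SQLALCHEMY_WARN_20=1 pytest`` enables
+ # RemovedIn20Warning for the whole process; "true", "yes", and "1" are
+ # each accepted as truthy values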
+if os.getenv("SQLALCHEMY_WARN_20", "false").lower() in ("true", "yes", "1"):
+ SQLALCHEMY_WARN_20 = True
+
+
def _warn_with_version(msg, version, type_, stacklevel):
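+ # RemovedIn20Warning is opt-in: it is suppressed entirely unless the
+ # SQLALCHEMY_WARN_20 flag is set, and when emitted it carries a link
+ # to the SQLAlchemy 2.0 background documentation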
+ if type_ is exc.RemovedIn20Warning and not SQLALCHEMY_WARN_20:
+ return
+
+ if type_ is exc.RemovedIn20Warning:
+ msg += " (Background on SQLAlchemy 2.0 at: http://sqlalche.me/e/b8d9)"
+
warn = type_(msg)
warn.deprecated_since = version
def warn_deprecated_20(msg, stacklevel=3):
- msg += " (Background on SQLAlchemy 2.0 at: http://sqlalche.me/e/b8d9)"
_warn_with_version(
msg,
def deprecated_20_cls(clsname, alternative=None, constructor="__init__"):
message = (
- ".. deprecated:: 2.0 The %s class is considered legacy as of the "
+ ".. deprecated:: 1.4 The %s class is considered legacy as of the "
"1.x series of SQLAlchemy and will be removed in 2.0." % clsname
)
"""
+ # nothing is deprecated "since" 2.0 at this time. All "removed in 2.0"
+ # should emit the RemovedIn20Warning, but messaging should be expressed
+ # in terms of "deprecated since 1.4".
+
+ if version == "2.0":
+ if warning is None:
+ warning = exc.RemovedIn20Warning
+ version = "1.4"
if add_deprecation_to_docstring:
- header = ".. deprecated:: %s %s" % (version, (message or ""))
+ header = ".. deprecated:: %s %s" % (version, (message or ""),)
else:
header = None
if warning is None:
warning = exc.SADeprecationWarning
- message += " (deprecated since: %s)" % version
+ if warning is not exc.RemovedIn20Warning:
+ message += " (deprecated since: %s)" % version
def decorate(fn):
return _decorate_with_warning(
messages = {}
versions = {}
version_warnings = {}
+
for param, (version, message) in specs.items():
versions[param] = version
messages[param] = _sanitize_restructured_text(message)
def decorate(fn):
spec = compat.inspect_getfullargspec(fn)
+
if spec.defaults is not None:
defaults = dict(
zip(
check_defaults = ()
check_kw = set(messages)
+ check_any_kw = spec.varkw
+
@decorator
def warned(fn, *args, **kwargs):
for m in check_defaults:
version_warnings[m],
stacklevel=3,
)
+
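+ # when the function's own **kwargs collector is named in the
+ # deprecation specs, warn if any keyword argument outside the
+ # known defaults was passed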
+ if check_any_kw in messages and set(kwargs).difference(
+ check_defaults
+ ):
+
+ _warn_with_version(
+ messages[check_any_kw],
+ versions[check_any_kw],
+ version_warnings[check_any_kw],
+ stacklevel=3,
+ )
+
for m in check_kw:
if m in kwargs:
_warn_with_version(
version_warnings[m],
stacklevel=3,
)
-
return fn(*args, **kwargs)
doc = fn.__doc__ is not None and fn.__doc__ or ""
doc = inject_param_text(
doc,
{
- param: ".. deprecated:: %s %s" % (version, (message or ""))
+ param: ".. deprecated:: %s %s"
+ % ("1.4" if version == "2.0" else version, (message or ""))
for param, (version, message) in specs.items()
},
)
while doclines:
line = doclines.pop(0)
if to_inject is None:
- m = re.match(r"(\s+):param (?:\\\*\*?)?(.+?):", line)
+ m = re.match(r"(\s+):param (.+?):", line)
if m:
- param = m.group(2)
+ param = m.group(2).lstrip("*")
if param in inject_params:
# default indent to that of :param: plus one
indent = " " * len(m.group(1)) + " "
s = select([t1], t1.c.c2 == t2.c.c1)
s.compile(dialect=self.dialect)
- @profiling.function_call_count()
+ @profiling.function_call_count(variance=0.15, warmup=1)
def go():
s = select([t1], t1.c.c2 == t2.c.c1)
s.compile(dialect=self.dialect)
s = select([t1], t1.c.c2 == t2.c.c1).apply_labels()
s.compile(dialect=self.dialect)
- @profiling.function_call_count()
+ @profiling.function_call_count(variance=0.15, warmup=1)
def go():
s = select([t1], t1.c.c2 == t2.c.c1).apply_labels()
s.compile(dialect=self.dialect)
from sqlalchemy import testing
from sqlalchemy import Unicode
from sqlalchemy import util
-from sqlalchemy.future import select as future_select
from sqlalchemy.orm import aliased
from sqlalchemy.orm import clear_mappers
from sqlalchemy.orm import configure_mappers
def go():
sessmaker = sessionmaker(bind=self.engine)
sess = sessmaker()
- r = sess.execute(select([1]))
+ r = sess.execute(select(1))
r.close()
sess.close()
del sess
# execute with a non-unicode object. a warning is emitted,
# this warning shouldn't clog up memory.
- self.engine.execute(
- table1.select().where(table1.c.col2 == "foo%d" % i[0])
- )
+ with self.engine.connect() as conn:
+ conn.execute(
+ table1.select().where(table1.c.col2 == "foo%d" % i[0])
+ )
i[0] += 1
try:
@assert_cycles()
def go():
- stmt = future_select(User)
+ stmt = select(User)
s.execute(stmt)
go()
@assert_cycles()
def go():
- stmt = future_select(User)
+ stmt = select(User)
stmt._generate_cache_key()
go()
# as of cache key
@assert_cycles(7)
def go():
- s = select([users]).select_from(users.join(addresses))
+ s = select(users).select_from(users.join(addresses))
state = s._compile_state_factory(s, s.compile())
state.froms
def test_adapt_statement_replacement_traversal(self):
User, Address = self.classes("User", "Address")
- statement = select([User]).select_from(
+ statement = select(User).select_from(
orm_join(User, Address, User.addresses)
)
def test_adapt_statement_cloned_traversal(self):
User, Address = self.classes("User", "Address")
- statement = select([User]).select_from(
+ statement = select(User).select_from(
orm_join(User, Address, User.addresses)
)
t = Table("sometable", m, Column("somecolumn", String))
self.assert_compile(
- select([t]).order_by(
+ select(t).order_by(
t.c.somecolumn.collate("Latin1_General_CS_AS_KS_WS_CI").asc()
),
"SELECT sometable.somecolumn FROM sometable "
"#other", meta, Column("sym", String), Column("newval", Integer)
)
stmt = table.update().values(
- val=select([other.c.newval])
+ val=select(other.c.newval)
.where(table.c.sym == other.c.sym)
.scalar_subquery()
)
@testing.combinations(
(
- lambda: select([literal("x"), literal("y")]),
+ lambda: select(literal("x"), literal("y")),
"SELECT [POSTCOMPILE_param_1] AS anon_1, "
"[POSTCOMPILE_param_2] AS anon_2",
{
},
),
(
- lambda t: select([t]).where(t.c.foo.in_(["x", "y", "z"])),
+ lambda t: select(t).where(t.c.foo.in_(["x", "y", "z"])),
"SELECT sometable.foo FROM sometable WHERE sometable.foo "
"IN ([POSTCOMPILE_foo_1])",
{
# for now, we don't really know what the above means, at least
# don't lose the dot
self.assert_compile(
- select([tbl]),
+ select(tbl),
"SELECT [abc.def.efg].hij.test.id FROM [abc.def.efg].hij.test",
)
)
self.assert_compile(
- select([tbl]),
+ select(tbl),
"SELECT [abc].[def].[efg].hij.test.id "
"FROM [abc].[def].[efg].hij.test",
)
schema=quoted_name("foo.dbo", True),
)
self.assert_compile(
- select([tbl]), "SELECT [foo.dbo].test.id FROM [foo.dbo].test"
+ select(tbl), "SELECT [foo.dbo].test.id FROM [foo.dbo].test"
)
def test_force_schema_quoted_w_dot_case_insensitive(self):
schema=quoted_name("foo.dbo", True),
)
self.assert_compile(
- select([tbl]), "SELECT [foo.dbo].test.id FROM [foo.dbo].test"
+ select(tbl), "SELECT [foo.dbo].test.id FROM [foo.dbo].test"
)
def test_force_schema_quoted_name_w_dot_case_sensitive(self):
schema=quoted_name("Foo.dbo", True),
)
self.assert_compile(
- select([tbl]), "SELECT [Foo.dbo].test.id FROM [Foo.dbo].test"
+ select(tbl), "SELECT [Foo.dbo].test.id FROM [Foo.dbo].test"
)
def test_force_schema_quoted_w_dot_case_sensitive(self):
schema="[Foo.dbo]",
)
self.assert_compile(
- select([tbl]), "SELECT [Foo.dbo].test.id FROM [Foo.dbo].test"
+ select(tbl), "SELECT [Foo.dbo].test.id FROM [Foo.dbo].test"
)
def test_schema_autosplit_w_dot_case_insensitive(self):
schema="foo.dbo",
)
self.assert_compile(
- select([tbl]), "SELECT foo.dbo.test.id FROM foo.dbo.test"
+ select(tbl), "SELECT foo.dbo.test.id FROM foo.dbo.test"
)
def test_schema_autosplit_w_dot_case_sensitive(self):
schema="Foo.dbo",
)
self.assert_compile(
- select([tbl]), "SELECT [Foo].dbo.test.id FROM [Foo].dbo.test"
+ select(tbl), "SELECT [Foo].dbo.test.id FROM [Foo].dbo.test"
)
def test_delete_schema(self):
tbl.delete(tbl.c.id == 1),
"DELETE FROM paj.test WHERE paj.test.id = " ":id_1",
)
- s = select([tbl.c.id]).where(tbl.c.id == 1)
+ s = select(tbl.c.id).where(tbl.c.id == 1)
self.assert_compile(
tbl.delete().where(tbl.c.id.in_(s)),
"DELETE FROM paj.test WHERE paj.test.id IN "
tbl.delete(tbl.c.id == 1),
"DELETE FROM banana.paj.test WHERE " "banana.paj.test.id = :id_1",
)
- s = select([tbl.c.id]).where(tbl.c.id == 1)
+ s = select(tbl.c.id).where(tbl.c.id == 1)
self.assert_compile(
tbl.delete().where(tbl.c.id.in_(s)),
"DELETE FROM banana.paj.test WHERE "
"DELETE FROM [banana split].paj.test WHERE "
"[banana split].paj.test.id = :id_1",
)
- s = select([tbl.c.id]).where(tbl.c.id == 1)
+ s = select(tbl.c.id).where(tbl.c.id == 1)
self.assert_compile(
tbl.delete().where(tbl.c.id.in_(s)),
"DELETE FROM [banana split].paj.test WHERE "
"space].test WHERE [banana split].[paj "
"with a space].test.id = :id_1",
)
- s = select([tbl.c.id]).where(tbl.c.id == 1)
+ s = select(tbl.c.id).where(tbl.c.id == 1)
self.assert_compile(
tbl.delete().where(tbl.c.id.in_(s)),
"DELETE FROM [banana split].[paj with a space].test "
"sometable", m, Column("col1", Integer), Column("col2", Integer)
)
self.assert_compile(
- select([func.max(t.c.col1)]),
+ select(func.max(t.c.col1)),
"SELECT max(sometable.col1) AS max_1 FROM " "sometable",
)
for field in "day", "month", "year":
self.assert_compile(
- select([extract(field, t.c.col1)]),
+ select(extract(field, t.c.col1)),
"SELECT DATEPART(%s, t.col1) AS anon_1 FROM t" % field,
)
def test_limit_using_top(self):
t = table("t", column("x", Integer), column("y", Integer))
- s = select([t]).where(t.c.x == 5).order_by(t.c.y).limit(10)
+ s = select(t).where(t.c.x == 5).order_by(t.c.y).limit(10)
self.assert_compile(
s,
def test_limit_zero_using_top(self):
t = table("t", column("x", Integer), column("y", Integer))
- s = select([t]).where(t.c.x == 5).order_by(t.c.y).limit(0)
+ s = select(t).where(t.c.x == 5).order_by(t.c.y).limit(0)
self.assert_compile(
s,
def test_offset_using_window(self):
t = table("t", column("x", Integer), column("y", Integer))
- s = select([t]).where(t.c.x == 5).order_by(t.c.y).offset(20)
+ s = select(t).where(t.c.x == 5).order_by(t.c.y).offset(20)
# test that the select is not altered with subsequent compile
# calls
t = table("t", column("x", Integer), column("y", Integer))
s = (
- select([t])
+ select(t)
.where(t.c.x == 5)
.order_by(t.c.y)
.limit(10)
def test_limit_offset_using_window(self):
t = table("t", column("x", Integer), column("y", Integer))
- s = select([t]).where(t.c.x == 5).order_by(t.c.y).limit(10).offset(20)
+ s = select(t).where(t.c.x == 5).order_by(t.c.y).limit(10).offset(20)
self.assert_compile(
s,
dialect_2012 = mssql.base.MSDialect()
dialect_2012._supports_offset_fetch = True
- s = select([t]).where(t.c.x == 5).order_by(t.c.y).limit(10).offset(20)
+ s = select(t).where(t.c.x == 5).order_by(t.c.y).limit(10).offset(20)
self.assert_compile(
s,
t1 = table("t1", column("x", Integer), column("y", Integer))
t2 = table("t2", column("x", Integer), column("y", Integer))
- order_by = select([t2.c.y]).where(t1.c.x == t2.c.x).scalar_subquery()
+ order_by = select(t2.c.y).where(t1.c.x == t2.c.x).scalar_subquery()
s = (
- select([t1])
+ select(t1)
.where(t1.c.x == 5)
.order_by(order_by)
.limit(10)
expr1 = func.foo(t.c.x).label("x")
expr2 = func.foo(t.c.x).label("y")
- stmt1 = select([expr1]).order_by(expr1.desc()).offset(1)
- stmt2 = select([expr2]).order_by(expr2.desc()).offset(1)
+ stmt1 = select(expr1).order_by(expr1.desc()).offset(1)
+ stmt2 = select(expr2).order_by(expr2.desc()).offset(1)
self.assert_compile(
stmt1,
def test_limit_zero_offset_using_window(self):
t = table("t", column("x", Integer), column("y", Integer))
- s = select([t]).where(t.c.x == 5).order_by(t.c.y).limit(0).offset(0)
+ s = select(t).where(t.c.x == 5).order_by(t.c.y).limit(0).offset(0)
# render the LIMIT of zero, but not the OFFSET
# of zero, so produces TOP 0
t1 = Table("t1", metadata, Column("id", Integer, primary_key=True))
self.assert_compile(
- select([try_cast(t1.c.id, Integer)]),
+ select(try_cast(t1.c.id, Integer)),
"SELECT TRY_CAST (t1.id AS INTEGER) AS id FROM t1",
)
def test_union_schema_to_non(self):
t1, t2 = self.t1, self.t2
s = (
- select([t2.c.a, t2.c.b])
+ select(t2.c.a, t2.c.b)
.apply_labels()
- .union(select([t1.c.a, t1.c.b]).apply_labels())
+ .union(select(t1.c.a, t1.c.b).apply_labels())
.alias()
.select()
)
def test_column_subquery_to_alias(self):
a1 = self.t2.alias("a1")
- s = select([self.t2, select([a1.c.a]).scalar_subquery()])
+ s = select(self.t2, select(a1.c.a).scalar_subquery())
self._assert_sql(
s,
"SELECT t2_1.a, t2_1.b, t2_1.c, "
def test_insert_plain_param(self):
with testing.db.connect() as conn:
conn.execute(cattable.insert(), id=5)
- eq_(conn.scalar(select([cattable.c.id])), 5)
+ eq_(conn.scalar(select(cattable.c.id)), 5)
def test_insert_values_key_plain(self):
with testing.db.connect() as conn:
conn.execute(cattable.insert().values(id=5))
- eq_(conn.scalar(select([cattable.c.id])), 5)
+ eq_(conn.scalar(select(cattable.c.id)), 5)
def test_insert_values_key_expression(self):
with testing.db.connect() as conn:
conn.execute(cattable.insert().values(id=literal(5)))
- eq_(conn.scalar(select([cattable.c.id])), 5)
+ eq_(conn.scalar(select(cattable.c.id)), 5)
def test_insert_values_col_plain(self):
with testing.db.connect() as conn:
conn.execute(cattable.insert().values({cattable.c.id: 5}))
- eq_(conn.scalar(select([cattable.c.id])), 5)
+ eq_(conn.scalar(select(cattable.c.id)), 5)
def test_insert_values_col_expression(self):
with testing.db.connect() as conn:
conn.execute(cattable.insert().values({cattable.c.id: literal(5)}))
- eq_(conn.scalar(select([cattable.c.id])), 5)
+ eq_(conn.scalar(select(cattable.c.id)), 5)
class QueryUnicodeTest(fixtures.TestBase):
from sqlalchemy import Column
from sqlalchemy import DECIMAL
from sqlalchemy import Integer
+from sqlalchemy import select
from sqlalchemy import Sequence
from sqlalchemy import String
from sqlalchemy import Table
-from sqlalchemy.future import select
from sqlalchemy.testing import eq_
from sqlalchemy.testing import fixtures
("day", fivedaysago.day),
):
r = connection.execute(
- select([extract(field, fivedaysago)])
+ select(extract(field, fivedaysago))
).scalar()
eq_(r, exp)
if convert_int:
last_ts_1 = int(codecs.encode(last_ts_1, "hex"), 16)
- eq_(conn.scalar(select([t.c.rv])), last_ts_1)
+ eq_(conn.scalar(select(t.c.rv)), last_ts_1)
conn.execute(
t.update().values(data="bar").where(t.c.data == "foo")
if convert_int:
last_ts_2 = int(codecs.encode(last_ts_2, "hex"), 16)
- eq_(conn.scalar(select([t.c.rv])), last_ts_2)
+ eq_(conn.scalar(select(t.c.rv)), last_ts_2)
def test_cant_insert_rowvalue(self):
self._test_cant_insert(self.tables.rv_t)
)
primary_key = result.inserted_primary_key
returned = conn.scalar(
- select([numeric_table.c.numericcol]).where(
+ select(numeric_table.c.numericcol).where(
numeric_table.c.id == primary_key[0]
)
)
conn.execute(tbl.insert())
if "int_y" in tbl.c:
eq_(
- conn.execute(select([tbl.c.int_y])).scalar(),
+ conn.execute(select(tbl.c.int_y)).scalar(),
counter + 1,
)
assert (
with engine.connect() as conn:
conn.execute(binary_table.insert(), data=data)
- eq_(conn.scalar(select([binary_table.c.data])), expected)
+ eq_(conn.scalar(select(binary_table.c.data)), expected)
eq_(
conn.scalar(
conn.execute(binary_table.delete())
conn.execute(binary_table.insert(), data=None)
- eq_(conn.scalar(select([binary_table.c.data])), None)
+ eq_(conn.scalar(select(binary_table.c.data)), None)
eq_(
conn.scalar(
Column("col1", Integer),
Column("master_ssl_verify_server_cert", Integer),
)
- x = select([table.c.col1, table.c.master_ssl_verify_server_cert])
+ x = select(table.c.col1, table.c.master_ssl_verify_server_cert)
self.assert_compile(
x,
t = sql.table("t", sql.column("col1"), sql.column("col2"))
self.assert_compile(
- select([t]).limit(10).offset(20),
+ select(t).limit(10).offset(20),
"SELECT t.col1, t.col2 FROM t LIMIT %s, %s",
{"param_1": 20, "param_2": 10},
)
self.assert_compile(
- select([t]).limit(10),
+ select(t).limit(10),
"SELECT t.col1, t.col2 FROM t LIMIT %s",
{"param_1": 10},
)
self.assert_compile(
- select([t]).offset(10),
+ select(t).offset(10),
"SELECT t.col1, t.col2 FROM t LIMIT %s, 18446744073709551615",
{"param_1": 10},
)
for field in "year", "month", "day":
self.assert_compile(
- select([extract(field, t.c.col1)]),
+ select(extract(field, t.c.col1)),
"SELECT EXTRACT(%s FROM t.col1) AS anon_1 FROM t" % field,
)
# "milliseconds" is adjusted to "millisecond"
self.assert_compile(
- select([extract("milliseconds", t.c.col1)]),
+ select(extract("milliseconds", t.c.col1)),
"SELECT EXTRACT(millisecond FROM t.col1) AS anon_1 FROM t",
)
__dialect__ = mysql.dialect()
def test_distinct_string(self):
- s = select(["*"]).select_from(table("foo"))
+ s = select("*").select_from(table("foo"))
s._distinct = "foo"
with expect_deprecated(
__backend__ = True
@testing.emits_warning()
- def test_is_boolean_symbols_despite_no_native(self):
+ def test_is_boolean_symbols_despite_no_native(self, connection):
+
is_(
- testing.db.scalar(select([cast(true().is_(true()), Boolean)])),
- True,
+ connection.scalar(select(cast(true().is_(true()), Boolean))), True,
)
is_(
- testing.db.scalar(select([cast(true().isnot(true()), Boolean)])),
+ connection.scalar(select(cast(true().isnot(true()), Boolean))),
False,
)
is_(
- testing.db.scalar(select([cast(false().is_(false()), Boolean)])),
+ connection.scalar(select(cast(false().is_(false()), Boolean))),
True,
)
# test [ticket:3263]
result = connection.execute(
select(
- [
- matchtable.c.title.match("Agile Ruby Programming").label(
- "ruby"
- ),
- matchtable.c.title.match("Dive Python").label("python"),
- matchtable.c.title,
- ]
+ matchtable.c.title.match("Agile Ruby Programming").label(
+ "ruby"
+ ),
+ matchtable.c.title.match("Dive Python").label("python"),
+ matchtable.c.title,
).order_by(matchtable.c.id)
).fetchall()
eq_(
def test_any_w_comparator(self, connection):
stuff = self.tables.stuff
- stmt = select([stuff.c.id]).where(
- stuff.c.value > any_(select([stuff.c.value]).scalar_subquery())
+ stmt = select(stuff.c.id).where(
+ stuff.c.value > any_(select(stuff.c.value).scalar_subquery())
)
eq_(connection.execute(stmt).fetchall(), [(2,), (3,), (4,), (5,)])
def test_all_w_comparator(self, connection):
stuff = self.tables.stuff
- stmt = select([stuff.c.id]).where(
- stuff.c.value >= all_(select([stuff.c.value]).scalar_subquery())
+ stmt = select(stuff.c.id).where(
+ stuff.c.value >= all_(select(stuff.c.value).scalar_subquery())
)
eq_(connection.execute(stmt).fetchall(), [(5,)])
def test_any_literal(self, connection):
stuff = self.tables.stuff
- stmt = select([4 == any_(select([stuff.c.value]).scalar_subquery())])
+ stmt = select(4 == any_(select(stuff.c.value).scalar_subquery()))
is_(connection.execute(stmt).scalar(), True)
assert not c.autoincrement
tbl.insert().execute()
if "int_y" in tbl.c:
- assert select([tbl.c.int_y]).scalar() == 1
+ assert select(tbl.c.int_y).scalar() == 1
assert list(tbl.select().execute().first()).count(1) == 1
else:
assert 1 not in list(tbl.select().execute().first())
scale_value=45.768392065789,
unscale_value=45.768392065789,
)
- result = conn.scalar(select([t.c.scale_value]))
+ result = conn.scalar(select(t.c.scale_value))
eq_(result, decimal.Decimal("45.768392065789"))
- result = conn.scalar(select([t.c.unscale_value]))
+ result = conn.scalar(select(t.c.unscale_value))
eq_(result, decimal.Decimal("45.768392065789"))
@testing.only_if("mysql")
# MySQLdb 1.2.3 and also need to pass either use_unicode=1
# or charset=utf8 to the URL.
t.insert().execute(id=1, data=u("some text"))
- assert isinstance(
- testing.db.scalar(select([t.c.data])), util.text_type
- )
+ assert isinstance(testing.db.scalar(select(t.c.data)), util.text_type)
@testing.metadata_fixture(ddl="class")
def bit_table(self, metadata):
conn.execute(t.insert().values(t1=datetime.time(8, 37, 35)))
eq_(
- conn.execute(select([t.c.t1])).scalar(),
+ conn.execute(select(t.c.t1)).scalar(),
datetime.time(8, 37, 35),
)
with testing.db.connect() as conn:
conn.execute(mysql_json.insert(), foo=value)
- eq_(conn.scalar(select([mysql_json.c.foo])), value)
+ eq_(conn.scalar(select(mysql_json.c.foo)), value)
class EnumSetTest(
eq_(
conn.execute(
- select([t.c.e1, t.c.e2]).order_by(t.c.id)
+ select(t.c.e1, t.c.e2).order_by(t.c.id)
).fetchall(),
[("", ""), ("", ""), ("two", "two"), (None, None)],
)
def test_subquery(self):
t = table("sometable", column("col1"), column("col2"))
- s = select([t]).subquery()
- s = select([s.c.col1, s.c.col2])
+ s = select(t).subquery()
+ s = select(s.c.col1, s.c.col2)
self.assert_compile(
s,
)
included_parts = (
- select([part.c.sub_part, part.c.part, part.c.quantity])
+ select(part.c.sub_part, part.c.part, part.c.quantity)
.where(part.c.part == "p1")
.cte(name="included_parts", recursive=True)
.suffix_with(
parts_alias = part.alias("p")
included_parts = included_parts.union_all(
select(
- [
- parts_alias.c.sub_part,
- parts_alias.c.part,
- parts_alias.c.quantity,
- ]
+ parts_alias.c.sub_part,
+ parts_alias.c.part,
+ parts_alias.c.quantity,
).where(parts_alias.c.part == incl_alias.c.sub_part)
)
q = select(
- [
- included_parts.c.sub_part,
- func.sum(included_parts.c.quantity).label("total_quantity"),
- ]
+ included_parts.c.sub_part,
+ func.sum(included_parts.c.quantity).label("total_quantity"),
).group_by(included_parts.c.sub_part)
self.assert_compile(
def test_limit_one(self):
t = table("sometable", column("col1"), column("col2"))
- s = select([t])
+ s = select(t)
c = s.compile(dialect=oracle.OracleDialect())
assert t.c.col1 in set(c._create_result_map()["col1"][1])
- s = select([t]).limit(10).offset(20)
+ s = select(t).limit(10).offset(20)
self.assert_compile(
s,
"SELECT anon_1.col1, anon_1.col2 FROM "
def test_limit_one_firstrows(self):
t = table("sometable", column("col1"), column("col2"))
- s = select([t])
- s = select([t]).limit(10).offset(20)
+ s = select(t)
+ s = select(t).limit(10).offset(20)
self.assert_compile(
s,
"SELECT anon_1.col1, anon_1.col2 FROM "
def test_limit_two(self):
t = table("sometable", column("col1"), column("col2"))
- s = select([t]).limit(10).offset(20).subquery()
+ s = select(t).limit(10).offset(20).subquery()
- s2 = select([s.c.col1, s.c.col2])
+ s2 = select(s.c.col1, s.c.col2)
self.assert_compile(
s2,
"SELECT anon_1.col1, anon_1.col2 FROM "
def test_limit_three(self):
t = table("sometable", column("col1"), column("col2"))
- s = select([t]).limit(10).offset(20).order_by(t.c.col2)
+ s = select(t).limit(10).offset(20).order_by(t.c.col2)
self.assert_compile(
s,
"SELECT anon_1.col1, anon_1.col2 FROM "
def test_limit_four(self):
t = table("sometable", column("col1"), column("col2"))
- s = select([t]).with_for_update().limit(10).order_by(t.c.col2)
+ s = select(t).with_for_update().limit(10).order_by(t.c.col2)
self.assert_compile(
s,
"SELECT anon_1.col1, anon_1.col2 FROM (SELECT "
def test_limit_four_firstrows(self):
t = table("sometable", column("col1"), column("col2"))
- s = select([t]).with_for_update().limit(10).order_by(t.c.col2)
+ s = select(t).with_for_update().limit(10).order_by(t.c.col2)
self.assert_compile(
s,
"SELECT /*+ FIRST_ROWS([POSTCOMPILE_ora_frow_1]) */ "
def test_limit_five(self):
t = table("sometable", column("col1"), column("col2"))
- s = (
- select([t])
- .with_for_update()
- .limit(10)
- .offset(20)
- .order_by(t.c.col2)
- )
+ s = select(t).with_for_update().limit(10).offset(20).order_by(t.c.col2)
self.assert_compile(
s,
"SELECT anon_1.col1, anon_1.col2 FROM "
t = table("sometable", column("col1"), column("col2"))
s = (
- select([t])
+ select(t)
.limit(10)
.offset(literal(10) + literal(20))
.order_by(t.c.col2)
col = literal_column("SUM(ABC)").label("SUM(ABC)")
tbl = table("my_table")
- query = select([col]).select_from(tbl).order_by(col).limit(100)
+ query = select(col).select_from(tbl).order_by(col).limit(100)
self.assert_compile(
query,
col = literal_column("SUM(ABC)").label(quoted_name("SUM(ABC)", True))
tbl = table("my_table")
- query = select([col]).select_from(tbl).order_by(col).limit(100)
+ query = select(col).select_from(tbl).order_by(col).limit(100)
self.assert_compile(
query,
col = literal_column("SUM(ABC)").label("SUM(ABC)_")
tbl = table("my_table")
- query = select([col]).select_from(tbl).order_by(col).limit(100)
+ query = select(col).select_from(tbl).order_by(col).limit(100)
self.assert_compile(
query,
col = literal_column("SUM(ABC)").label(quoted_name("SUM(ABC)_", True))
tbl = table("my_table")
- query = select([col]).select_from(tbl).order_by(col).limit(100)
+ query = select(col).select_from(tbl).order_by(col).limit(100)
self.assert_compile(
query,
table1 = table("mytable", column("myid"), column("name"))
self.assert_compile(
- select([table1.c.myid, table1.c.name])
+ select(table1.c.myid, table1.c.name)
.where(table1.c.myid == 7)
.with_for_update(nowait=True, of=table1.c.name)
.limit(10),
table1 = table("mytable", column("myid"), column("name"))
self.assert_compile(
- select([table1.c.myid])
+ select(table1.c.myid)
.where(table1.c.myid == 7)
.with_for_update(nowait=True, of=table1.c.name)
.limit(10),
table1 = table("mytable", column("myid"), column("name"))
self.assert_compile(
- select([table1.c.myid, table1.c.name])
+ select(table1.c.myid, table1.c.name)
.where(table1.c.myid == 7)
.with_for_update(nowait=True, of=table1.c.name)
.limit(10)
table1 = table("mytable", column("myid"), column("name"))
self.assert_compile(
- select([table1.c.myid])
+ select(table1.c.myid)
.where(table1.c.myid == 7)
.with_for_update(nowait=True, of=table1.c.name)
.limit(10)
table1 = table("mytable", column("myid"), column("foo"), column("bar"))
self.assert_compile(
- select([table1.c.myid, table1.c.bar])
+ select(table1.c.myid, table1.c.bar)
.where(table1.c.myid == 7)
.with_for_update(nowait=True, of=[table1.c.foo, table1.c.bar])
.limit(10)
class MyType(TypeDecorator):
impl = Integer
- stmt = select([type_coerce(column("x"), MyType).label("foo")]).limit(1)
+ stmt = select(type_coerce(column("x"), MyType).label("foo")).limit(1)
dialect = oracle.dialect()
compiled = stmt.compile(dialect=dialect)
assert isinstance(compiled._create_result_map()["foo"][-1], MyType)
dialect = oracle.OracleDialect(use_binds_for_limits=False)
self.assert_compile(
- select([t]).limit(10),
+ select(t).limit(10),
"SELECT anon_1.col1, anon_1.col2 FROM "
"(SELECT sometable.col1 AS col1, "
"sometable.col2 AS col2 FROM sometable) anon_1 "
dialect = oracle.OracleDialect(use_binds_for_limits=False)
self.assert_compile(
- select([t]).offset(10),
+ select(t).offset(10),
"SELECT anon_1.col1, anon_1.col2 FROM (SELECT "
"anon_2.col1 AS col1, anon_2.col2 AS col2, ROWNUM AS ora_rn "
"FROM (SELECT sometable.col1 AS col1, sometable.col2 AS col2 "
dialect = oracle.OracleDialect(use_binds_for_limits=False)
self.assert_compile(
- select([t]).limit(10).offset(10),
+ select(t).limit(10).offset(10),
"SELECT anon_1.col1, anon_1.col2 FROM (SELECT "
"anon_2.col1 AS col1, anon_2.col2 AS col2, ROWNUM AS ora_rn "
"FROM (SELECT sometable.col1 AS col1, sometable.col2 AS col2 "
dialect = oracle.OracleDialect(use_binds_for_limits=True)
self.assert_compile(
- select([t]).limit(10),
+ select(t).limit(10),
"SELECT anon_1.col1, anon_1.col2 FROM "
"(SELECT sometable.col1 AS col1, "
"sometable.col2 AS col2 FROM sometable) anon_1 WHERE ROWNUM "
dialect = oracle.OracleDialect(use_binds_for_limits=True)
self.assert_compile(
- select([t]).offset(10),
+ select(t).offset(10),
"SELECT anon_1.col1, anon_1.col2 FROM "
"(SELECT anon_2.col1 AS col1, anon_2.col2 AS col2, "
"ROWNUM AS ora_rn "
dialect = oracle.OracleDialect(use_binds_for_limits=True)
self.assert_compile(
- select([t]).limit(10).offset(10),
+ select(t).limit(10).offset(10),
"SELECT anon_1.col1, anon_1.col2 FROM "
"(SELECT anon_2.col1 AS col1, anon_2.col2 AS col2, "
"ROWNUM AS ora_rn "
anon = a_table.alias()
self.assert_compile(
- select([other_table, anon])
+ select(other_table, anon)
.select_from(other_table.outerjoin(anon))
.apply_labels(),
"SELECT other_thirty_characters_table_.id "
dialect=dialect,
)
self.assert_compile(
- select([other_table, anon])
+ select(other_table, anon)
.select_from(other_table.outerjoin(anon))
.apply_labels(),
"SELECT other_thirty_characters_table_.id "
table1, table2, table3 = self._test_outer_join_fixture()
subq = (
- select([table1])
+ select(table1)
.select_from(
table1.outerjoin(table2, table1.c.myid == table2.c.otherid)
)
.alias()
)
- q = select([table3]).select_from(
+ q = select(table3).select_from(
table3.outerjoin(subq, table3.c.userid == subq.c.myid)
)
def test_outer_join_seven(self):
table1, table2, table3 = self._test_outer_join_fixture()
- q = select([table1.c.name]).where(table1.c.name == "foo")
+ q = select(table1.c.name).where(table1.c.name == "foo")
self.assert_compile(
q,
"SELECT mytable.name FROM mytable WHERE " "mytable.name = :name_1",
def test_outer_join_eight(self):
table1, table2, table3 = self._test_outer_join_fixture()
subq = (
- select([table3.c.otherstuff])
+ select(table3.c.otherstuff)
.where(table3.c.otherstuff == table1.c.name)
.label("bar")
)
- q = select([table1.c.name, subq])
+ q = select(table1.c.name, subq)
self.assert_compile(
q,
"SELECT mytable.name, (SELECT "
column("othername", String),
)
- stmt = select([table1]).select_from(
+ stmt = select(table1).select_from(
table1.outerjoin(
table2,
and_(
dialect=oracle.dialect(use_ansi=False),
)
- stmt = select([table1]).select_from(
+ stmt = select(table1).select_from(
table1.outerjoin(
table2,
and_(
j = a.join(b.join(c, b.c.b == c.c.c), a.c.a == b.c.b)
self.assert_compile(
- select([j]),
+ select(j),
"SELECT a.a, b.b, c.c FROM a, b, c "
"WHERE a.a = b.b AND b.b = c.c",
dialect=oracle.OracleDialect(use_ansi=False),
j = a.outerjoin(b.join(c, b.c.b == c.c.c), a.c.a == b.c.b)
self.assert_compile(
- select([j]),
+ select(j),
"SELECT a.a, b.b, c.c FROM a, b, c "
"WHERE a.a = b.b(+) AND b.b = c.c",
dialect=oracle.OracleDialect(use_ansi=False),
j = a.join(b.outerjoin(c, b.c.b == c.c.c), a.c.a == b.c.b)
self.assert_compile(
- select([j]),
+ select(j),
"SELECT a.a, b.b, c.c FROM a, b, c "
"WHERE a.a = b.b AND b.b = c.c(+)",
dialect=oracle.OracleDialect(use_ansi=False),
)
at_alias = address_types.alias()
s = (
- select([at_alias, addresses])
+ select(at_alias, addresses)
.select_from(
addresses.outerjoin(
at_alias, addresses.c.address_type_id == at_alias.c.id
eq_(result.returned_defaults, (47,))
- eq_(conn.scalar(select([test.c.bar])), 47)
+ eq_(conn.scalar(select(test.c.bar)), 47)
def test_computed_update_warning(self):
test = self.tables.test
# returns the *old* value
eq_(result.returned_defaults, (47,))
- eq_(conn.scalar(select([test.c.bar])), 52)
+ eq_(conn.scalar(select(test.c.bar)), 52)
def test_computed_update_no_warning(self):
test = self.tables.test_no_returning
# no returning
eq_(result.returned_defaults, None)
- eq_(conn.scalar(select([test.c.bar])), 52)
+ eq_(conn.scalar(select(test.c.bar)), 52)
class OutParamTest(fixtures.TestBase, AssertsExecutionResults):
eq_(
connection.scalar(
select(
- [
- literal_column("2", type_=Integer())
- + bindparam("2_1", value=2)
- ]
+ literal_column("2", type_=Integer())
+ + bindparam("2_1", value=2)
)
),
4,
t.create(connection)
connection.execute(
- select([t]).where(t.c.foo.in_(bindparam("uid", expanding=True))),
+ select(t).where(t.c.foo.in_(bindparam("uid", expanding=True))),
uid=[1, 2, 3],
)
"%(test_schema)s_pt.data FROM %(test_schema)s_pt"
% {"test_schema": testing.config.test_schema},
)
- select([parent]).execute().fetchall()
+ select(parent).execute().fetchall()
def test_reflect_alt_synonym_owner_local_table(self):
meta = MetaData(testing.db)
"FROM %(test_schema)s.local_table"
% {"test_schema": testing.config.test_schema},
)
- select([parent]).execute().fetchall()
+ select(parent).execute().fetchall()
@testing.provide_metadata
def test_create_same_names_implicit_schema(self):
"%(test_schema)s.parent.id = %(test_schema)s.child.parent_id"
% {"test_schema": testing.config.test_schema},
)
- select([parent, child]).select_from(
+ select(parent, child).select_from(
parent.join(child)
).execute().fetchall()
"localtable.parent_id"
% {"test_schema": testing.config.test_schema},
)
- select([parent, lcl]).select_from(
+ select(parent, lcl).select_from(
parent.join(lcl)
).execute().fetchall()
finally:
"%(test_schema)s.child.parent_id"
% {"test_schema": testing.config.test_schema},
)
- select([parent, child]).select_from(
+ select(parent, child).select_from(
parent.join(child)
).execute().fetchall()
"localtable.parent_id"
% {"test_schema": testing.config.test_schema},
)
- select([parent, lcl]).select_from(
+ select(parent, lcl).select_from(
parent.join(lcl)
).execute().fetchall()
finally:
"%(test_schema)s.ctable.parent_id"
% {"test_schema": testing.config.test_schema},
)
- select([parent, child]).select_from(
+ select(parent, child).select_from(
parent.join(child)
).execute().fetchall()
with testing.db.begin() as conn:
t.create(conn)
conn.execute(t.insert(), {"x": 5})
- s1 = select([t]).subquery()
- s2 = select([column("rowid")]).select_from(s1)
+ s1 = select(t).subquery()
+ s2 = select(column("rowid")).select_from(s1)
rowid = conn.scalar(s2)
# the ROWID type is not really needed here,
# as cx_oracle just treats it as a string,
# but we want to make sure the ROWID works...
rowid_col = column("rowid", oracle.ROWID)
- s3 = select([t.c.x, rowid_col]).where(
+ s3 = select(t.c.x, rowid_col).where(
rowid_col == cast(rowid, oracle.ROWID)
)
eq_(conn.execute(s3).fetchall(), [(5, rowid)])
)
eq_(
- select([t1.c.numericcol])
- .order_by(t1.c.intcol)
- .execute()
- .fetchall(),
+ select(t1.c.numericcol).order_by(t1.c.intcol).execute().fetchall(),
[(float("inf"),), (float("-inf"),)],
)
)
eq_(
- select([t1.c.numericcol])
- .order_by(t1.c.intcol)
- .execute()
- .fetchall(),
+ select(t1.c.numericcol).order_by(t1.c.intcol).execute().fetchall(),
[(decimal.Decimal("Infinity"),), (decimal.Decimal("-Infinity"),)],
)
eq_(
[
tuple(str(col) for col in row)
- for row in select([t1.c.numericcol])
+ for row in select(t1.c.numericcol)
.order_by(t1.c.intcol)
.execute()
],
)
eq_(
- select([t1.c.numericcol])
- .order_by(t1.c.intcol)
- .execute()
- .fetchall(),
+ select(t1.c.numericcol).order_by(t1.c.intcol).execute().fetchall(),
[(decimal.Decimal("NaN"),), (decimal.Decimal("NaN"),)],
)
t = Table("t", metadata, Column("data", oracle.LONG))
metadata.create_all(testing.db)
connection.execute(t.insert(), data="xyz")
- eq_(connection.scalar(select([t.c.data])), "xyz")
+ eq_(connection.scalar(select(t.c.data)), "xyz")
@testing.provide_metadata
def test_longstring(self, connection):
Column("col1", Integer),
Column("variadic", Integer),
)
- x = select([table.c.col1, table.c.variadic])
+ x = select(table.c.col1, table.c.variadic)
self.assert_compile(
x, """SELECT pg_table.col1, pg_table."variadic" FROM pg_table"""
expected = "SELECT foo.id FROM ONLY testtbl1 AS foo"
self.assert_compile(stmt, expected)
- stmt = select([tbl1, tbl2]).with_hint(tbl1, "ONLY", "postgresql")
+ stmt = select(tbl1, tbl2).with_hint(tbl1, "ONLY", "postgresql")
expected = (
"SELECT testtbl1.id, testtbl2.id FROM ONLY testtbl1, " "testtbl2"
)
self.assert_compile(stmt, expected)
- stmt = select([tbl1, tbl2]).with_hint(tbl2, "ONLY", "postgresql")
+ stmt = select(tbl1, tbl2).with_hint(tbl2, "ONLY", "postgresql")
expected = (
"SELECT testtbl1.id, testtbl2.id FROM testtbl1, ONLY " "testtbl2"
)
self.assert_compile(stmt, expected)
- stmt = select([tbl1, tbl2])
+ stmt = select(tbl1, tbl2)
stmt = stmt.with_hint(tbl1, "ONLY", "postgresql")
stmt = stmt.with_hint(tbl2, "ONLY", "postgresql")
expected = (
m = MetaData()
table = Table("table1", m, Column("a", Integer), Column("b", Integer))
expr = func.array_agg(aggregate_order_by(table.c.a, table.c.b.desc()))
- stmt = select([expr])
+ stmt = select(expr)
# note this tests that the object exports FROM objects
# correctly
expr = func.string_agg(
table.c.a, aggregate_order_by(literal_column("','"), table.c.a)
)
- stmt = select([expr])
+ stmt = select(expr)
self.assert_compile(
stmt,
literal_column("','"), table.c.a, table.c.b.desc()
),
)
- stmt = select([expr])
+ stmt = select(expr)
self.assert_compile(
stmt,
m = MetaData()
table = Table("table1", m, Column("a", Integer), Column("b", Integer))
expr = func.array_agg(aggregate_order_by(table.c.a, table.c.b.desc()))
- stmt = select([expr])
+ stmt = select(expr)
a1 = table.alias("foo")
stmt2 = sql_util.ClauseAdapter(a1).traverse(stmt)
.cte("i_upsert")
)
- stmt = select([i])
+ stmt = select(i)
self.assert_compile(
stmt,
def test_plain_generative(self):
self.assert_compile(
- select([self.table]).distinct(),
+ select(self.table).distinct(),
"SELECT DISTINCT t.id, t.a, t.b FROM t",
)
def test_on_columns_generative(self):
self.assert_compile(
- select([self.table]).distinct(self.table.c.a),
+ select(self.table).distinct(self.table.c.a),
"SELECT DISTINCT ON (t.a) t.id, t.a, t.b FROM t",
)
def test_on_columns_generative_multi_call(self):
self.assert_compile(
- select([self.table])
+ select(self.table)
.distinct(self.table.c.a)
.distinct(self.table.c.b),
"SELECT DISTINCT ON (t.a, t.b) t.id, t.a, t.b FROM t",
def test_literal_binds(self):
self.assert_compile(
- select([self.table]).distinct(self.table.c.a == 10),
+ select(self.table).distinct(self.table.c.a == 10),
"SELECT DISTINCT ON (t.a = 10) t.id, t.a, t.b FROM t",
literal_binds=True,
)
def test_distinct_on_subquery_anon(self):
- sq = select([self.table]).alias()
+ sq = select(self.table).alias()
q = (
- select([self.table.c.id, sq.c.id])
+ select(self.table.c.id, sq.c.id)
.distinct(sq.c.id)
.where(self.table.c.id == sq.c.id)
)
)
def test_distinct_on_subquery_named(self):
- sq = select([self.table]).alias("sq")
+ sq = select(self.table).alias("sq")
q = (
- select([self.table.c.id, sq.c.id])
+ select(self.table.c.id, sq.c.id)
.distinct(sq.c.id)
.where(self.table.c.id == sq.c.id)
)
raise ValueError(c)
def test_match_basic(self):
- s = select([self.table_alt.c.id]).where(
+ s = select(self.table_alt.c.id).where(
self.table_alt.c.title.match("somestring")
)
self.assert_compile(
)
def test_match_regconfig(self):
- s = select([self.table_alt.c.id]).where(
+ s = select(self.table_alt.c.id).where(
self.table_alt.c.title.match(
"somestring", postgresql_regconfig="english"
)
)
def test_match_tsvector(self):
- s = select([self.table_alt.c.id]).where(
+ s = select(self.table_alt.c.id).where(
func.to_tsvector(self.table_alt.c.title).match("somestring")
)
self.assert_compile(
)
def test_match_tsvectorconfig(self):
- s = select([self.table_alt.c.id]).where(
+ s = select(self.table_alt.c.id).where(
func.to_tsvector("english", self.table_alt.c.title).match(
"somestring"
)
)
def test_match_tsvectorconfig_regconfig(self):
- s = select([self.table_alt.c.id]).where(
+ s = select(self.table_alt.c.id).where(
func.to_tsvector("english", self.table_alt.c.title).match(
"somestring", postgresql_regconfig="english"
)
)
eq_(
- conn.execute(select([self.tables.data])).fetchall(),
+ conn.execute(select(self.tables.data)).fetchall(),
[
(1, "x1", "y1", 5),
(2, "x2", "y2", 5),
)
eq_(
conn.execute(
- select([self.tables.data]).order_by(self.tables.data.c.id)
+ select(self.tables.data).order_by(self.tables.data.c.id)
).fetchall(),
[(1, "x1", "y5", 5), (2, "x2", "y2", 5), (3, "x3", "y6", 5)],
)
ins = t.insert(inline=True).values(
id=bindparam("id"),
- x=select([literal_column("5")])
+ x=select(literal_column("5"))
.select_from(self.tables.data)
.scalar_subquery(),
y=bindparam("y"),
ins = t.insert(inline=True).values(
id=bindparam("id"),
- x=select([literal_column("5")])
+ x=select(literal_column("5"))
.select_from(self.tables.data)
.scalar_subquery(),
y=bindparam("y"),
def test_extract(self, connection):
fivedaysago = testing.db.scalar(
- select([func.now()])
+ select(func.now())
) - datetime.timedelta(days=5)
for field, exp in (
("year", fivedaysago.year),
):
r = connection.execute(
select(
- [extract(field, func.now() + datetime.timedelta(days=-5))]
+ extract(field, func.now() + datetime.timedelta(days=-5))
)
).scalar()
eq_(r, exp)
eq_(
conn.scalar(
select(
- [
- cast(
- literal(quoted_name("some_name", False)),
- String,
- )
- ]
+ cast(literal(quoted_name("some_name", False)), String)
)
),
"some_name",
conn.execute(i, {"id": 1, "data": "initial data"})
eq_(
- conn.scalar(sql.select([bind_targets.c.data])),
+ conn.scalar(sql.select(bind_targets.c.data)),
"initial data processed",
)
conn.execute(i, {"id": 1, "data": "new inserted data"})
eq_(
- conn.scalar(sql.select([bind_targets.c.data])),
+ conn.scalar(sql.select(bind_targets.c.data)),
"new updated data processed",
)
with expect_warnings(
".*has no Python-side or server-side default.*"
):
- assert_raises(
- (exc.IntegrityError, exc.ProgrammingError),
- eng.execute,
- t2.insert(),
- )
+ with eng.connect() as conn:
+ assert_raises(
+ (exc.IntegrityError, exc.ProgrammingError),
+ conn.execute,
+ t2.insert(),
+ )
def test_sequence_insert(self):
table = Table(
for field in fields:
result = self.bind.scalar(
- select([extract(field, expr)]).select_from(t)
+ select(extract(field, expr)).select_from(t)
)
eq_(result, fields[field])
(Numeric(asdecimal=False), 140.381230939),
]:
ret = connection.execute(
- select([func.stddev_pop(data_table.c.data, type_=type_)])
+ select(func.stddev_pop(data_table.c.data, type_=type_))
).scalar()
eq_(round_decimal(ret, 9), result)
ret = connection.execute(
- select([cast(func.stddev_pop(data_table.c.data), type_)])
+ select(cast(func.stddev_pop(data_table.c.data), type_))
).scalar()
eq_(round_decimal(ret, 9), result)
],
)
connection.execute(t1.insert(), {"bar": "two"})
- eq_(connection.scalar(select([t1.c.bar])), "two")
+ eq_(connection.scalar(select(t1.c.bar)), "two")
@testing.provide_metadata
def test_non_native_enum_w_unicode(self, connection):
)
connection.execute(t1.insert(), {"bar": util.u("Ü")})
- eq_(connection.scalar(select([t1.c.bar])), util.u("Ü"))
+ eq_(connection.scalar(select(t1.c.bar)), util.u("Ü"))
@testing.provide_metadata
def test_disable_create(self):
self.metadata.create_all(testing.db)
connection.execute(t1.insert(), {"data": "two"})
- eq_(connection.scalar(select([t1.c.data])), "twoHITHERE")
+ eq_(connection.scalar(select(t1.c.data)), "twoHITHERE")
@testing.provide_metadata
def test_generic_w_pg_variant(self, connection):
@staticmethod
def _scalar(expression):
with testing.db.connect() as conn:
- return conn.scalar(select([expression]))
+ return conn.scalar(select(expression))
def test_cast_name(self):
eq_(self._scalar(cast("pg_class", postgresql.REGCLASS)), "pg_class")
)
with testing.db.connect() as conn:
oid = conn.scalar(
- select([pga.c.attrelid]).where(
+ select(pga.c.attrelid).where(
pga.c.attrelid == cast("pg_class", postgresql.REGCLASS)
)
)
def test_array_any(self):
col = column("x", postgresql.ARRAY(Integer))
self.assert_compile(
- select([col.any(7, operator=operators.lt)]),
+ select(col.any(7, operator=operators.lt)),
"SELECT %(param_1)s < ANY (x) AS anon_1",
checkparams={"param_1": 7},
)
def test_array_all(self):
col = column("x", postgresql.ARRAY(Integer))
self.assert_compile(
- select([col.all(7, operator=operators.lt)]),
+ select(col.all(7, operator=operators.lt)),
"SELECT %(param_1)s < ALL (x) AS anon_1",
checkparams={"param_1": 7},
)
literal = array([4, 5])
self.assert_compile(
- select([col + literal]),
+ select(col + literal),
"SELECT x || ARRAY[%(param_1)s, %(param_2)s] AS anon_1",
checkparams={"param_1": 4, "param_2": 5},
)
values_table.insert(), [{"value": i} for i in range(1, 10)]
)
- stmt = select([func.array_agg(values_table.c.value)])
+ stmt = select(func.array_agg(values_table.c.value))
eq_(connection.execute(stmt).scalar(), list(range(1, 10)))
stmt = select([func.array_agg(values_table.c.value)[3]])
strarr=[util.u("abc"), util.u("def")],
)
results = connection.execute(
- select([arrtable.c.id]).where(arrtable.c.intarr < [4, 5, 6])
+ select(arrtable.c.id).where(arrtable.c.intarr < [4, 5, 6])
).fetchall()
eq_(len(results), 1)
eq_(results[0][0], 5)
arrtable = self.tables.arrtable
connection.execute(arrtable.insert(), dimarr=[[1, 2, 3], [4, 5, 6]])
eq_(
- connection.scalar(select([arrtable.c.dimarr])),
+ connection.scalar(select(arrtable.c.dimarr)),
[[-1, 0, 1], [2, 3, 4]],
)
connection.execute(arrtable.insert(), intarr=[4, 5, 6])
eq_(
connection.scalar(
- select([arrtable.c.intarr]).where(
+ select(arrtable.c.intarr).where(
postgresql.Any(5, arrtable.c.intarr)
)
),
connection.execute(arrtable.insert(), intarr=[4, 5, 6])
eq_(
connection.scalar(
- select([arrtable.c.intarr]).where(
+ select(arrtable.c.intarr).where(
arrtable.c.intarr.all(4, operator=operators.le)
)
),
with testing.db.begin() as conn:
eq_(
conn.scalar(
- select([arrtable.c.intarr]).where(
+ select(arrtable.c.intarr).where(
arrtable.c.intarr.contains(struct([4, 5]))
)
),
with testing.db.begin() as conn:
eq_(
conn.scalar(
- select([dim_arrtable.c.intarr]).where(
+ select(dim_arrtable.c.intarr).where(
dim_arrtable.c.intarr.contains(struct([4, 5]))
)
),
self._fixture_456(arrtable)
eq_(
connection.scalar(
- select([arrtable.c.intarr]).where(
- arrtable.c.intarr.contains([])
- )
+ select(arrtable.c.intarr).where(arrtable.c.intarr.contains([]))
),
[4, 5, 6],
)
connection.execute(arrtable.insert(), intarr=[4, 5, 6])
eq_(
connection.scalar(
- select([arrtable.c.intarr]).where(
+ select(arrtable.c.intarr).where(
arrtable.c.intarr.overlap([7, 6])
)
),
tbl.insert(), [{"enum_col": ["foo"]}, {"enum_col": ["foo", "bar"]}]
)
- sel = select([tbl.c.enum_col]).order_by(tbl.c.id)
+ sel = select(tbl.c.enum_col).order_by(tbl.c.id)
eq_(
connection.execute(sel).fetchall(), [(["foo"],), (["foo", "bar"],)]
)
if util.py3k:
connection.execute(tbl.insert(), {"pyenum_col": [MyEnum.a]})
- sel = select([tbl.c.pyenum_col]).order_by(tbl.c.id.desc())
+ sel = select(tbl.c.pyenum_col).order_by(tbl.c.id.desc())
eq_(connection.scalar(sel), [MyEnum.a])
],
)
- sel = select([tbl.c.json_col]).order_by(tbl.c.id)
+ sel = select(tbl.c.json_col).order_by(tbl.c.id)
eq_(
connection.execute(sel).fetchall(),
[(["foo"],), ([{"foo": "bar"}, [1]],), ([None],)],
__backend__ = True
def test_timestamp(self, connection):
- s = select([text("timestamp '2007-12-25'")])
+ s = select(text("timestamp '2007-12-25'"))
result = connection.execute(s).first()
eq_(result[0], datetime.datetime(2007, 12, 25, 0, 0))
def test_interval_arithmetic(self, connection):
# basically testing that we get timedelta back for an INTERVAL
# result; more of a driver assertion.
- s = select([text("timestamp '2007-12-25' - timestamp '2007-11-15'")])
+ s = select(text("timestamp '2007-12-25' - timestamp '2007-11-15'"))
result = connection.execute(s).first()
eq_(result[0], datetime.timedelta(40))
t = Table("t1", self.metadata, Column("data", postgresql.TSVECTOR))
t.create()
connection.execute(t.insert(), data="a fat cat sat")
- eq_(connection.scalar(select([t.c.data])), "'a' 'cat' 'fat' 'sat'")
+ eq_(connection.scalar(select(t.c.data)), "'a' 'cat' 'fat' 'sat'")
connection.execute(t.update(), data="'a' 'cat' 'fat' 'mat' 'sat'")
eq_(
- connection.scalar(select([t.c.data])),
- "'a' 'cat' 'fat' 'mat' 'sat'",
+ connection.scalar(select(t.c.data)), "'a' 'cat' 'fat' 'mat' 'sat'",
)
@testing.provide_metadata
connection.execute(utable.insert(), {"data": value1})
connection.execute(utable.insert(), {"data": value2})
r = connection.execute(
- select([utable.c.data]).where(utable.c.data != value1)
+ select(utable.c.data).where(utable.c.data != value1)
)
eq_(r.fetchone()[0], value2)
eq_(r.fetchone(), None)
self.hashcol = self.test_table.c.hash
def _test_where(self, whereclause, expected):
- stmt = select([self.test_table]).where(whereclause)
+ stmt = select(self.test_table).where(whereclause)
self.assert_compile(
stmt,
"SELECT test_table.id, test_table.hash FROM test_table "
)
def test_cols(self, colclause_fn, expected, from_):
colclause = colclause_fn(self)
- stmt = select([colclause])
+ stmt = select(colclause)
self.assert_compile(
stmt,
("SELECT %s" + (" FROM test_table" if from_ else "")) % expected,
def _assert_data(self, compare, conn):
data = conn.execute(
- select([self.tables.data_table.c.data]).order_by(
+ select(self.tables.data_table.c.data).order_by(
self.tables.data_table.c.name
)
).fetchall()
expr = hstore(
postgresql.array(["1", "2"]), postgresql.array(["3", None])
)["1"]
- eq_(connection.scalar(select([expr])), "3")
+ eq_(connection.scalar(select(expr)), "3")
@testing.requires.psycopg2_native_hstore
def test_insert_native(self):
data_table = self.tables.data_table
with engine.begin() as conn:
result = conn.execute(
- select([data_table.c.data]).where(
+ select(data_table.c.data).where(
data_table.c.data["k1"] == "r3v1"
)
).first()
assert isinstance(cols[0]["type"], self._col_type)
def _assert_data(self, conn):
- data = conn.execute(
- select([self.tables.data_table.c.range])
- ).fetchall()
+ data = conn.execute(select(self.tables.data_table.c.range)).fetchall()
eq_(data, [(self._data_obj(),)])
def test_insert_obj(self, connection):
)
# select
range_ = self.tables.data_table.c.range
- data = connection.execute(select([range_ + range_])).fetchall()
+ data = connection.execute(select(range_ + range_)).fetchall()
eq_(data, [(self._data_obj(),)])
def test_intersection_result(self, connection):
)
# select
range_ = self.tables.data_table.c.range
- data = connection.execute(select([range_ * range_])).fetchall()
+ data = connection.execute(select(range_ * range_)).fetchall()
eq_(data, [(self._data_obj(),)])
def test_difference_result(self, connection):
)
# select
range_ = self.tables.data_table.c.range
- data = connection.execute(select([range_ - range_])).fetchall()
+ data = connection.execute(select(range_ - range_)).fetchall()
eq_(data, [(self._data_obj().__class__(empty=True),)])
)
def test_where(self, whereclause_fn, expected):
whereclause = whereclause_fn(self)
- stmt = select([self.test_table]).where(whereclause)
+ stmt = select(self.test_table).where(whereclause)
self.assert_compile(
stmt,
"SELECT test_table.id, test_table.test_column FROM test_table "
)
def test_cols(self, colclause_fn, expected, from_):
colclause = colclause_fn(self)
- stmt = select([colclause])
+ stmt = select(colclause)
self.assert_compile(
stmt,
("SELECT %s" + (" FROM test_table" if from_ else "")) % expected,
def _assert_data(self, compare, conn, column="data"):
col = self.tables.data_table.c[column]
data = conn.execute(
- select([col]).order_by(self.tables.data_table.c.name)
+ select(col).order_by(self.tables.data_table.c.name)
).fetchall()
eq_([d for d, in data], compare)
def _assert_column_is_NULL(self, conn, column="data"):
col = self.tables.data_table.c[column]
- data = conn.execute(select([col]).where(col.is_(null()))).fetchall()
+ data = conn.execute(select(col).where(col.is_(null()))).fetchall()
eq_([d for d, in data], [None])
def _assert_column_is_JSON_NULL(self, conn, column="data"):
col = self.tables.data_table.c[column]
data = conn.execute(
- select([col]).where(cast(col, String) == "null")
+ select(col).where(cast(col, String) == "null")
).fetchall()
eq_([d for d, in data], [None])
options=dict(json_serializer=dumps, json_deserializer=loads)
)
- s = select([cast({"key": "value", "x": "q"}, self.test_type)])
+ s = select(cast({"key": "value", "x": "q"}, self.test_type))
with engine.begin() as conn:
eq_(conn.scalar(s), {"key": "value", "x": "dumps_y_loads"})
data_table = self.tables.data_table
result = connection.execute(
- select([data_table.c.name]).where(
+ select(data_table.c.name).where(
data_table.c.data[("k1", "r6v1", "subr")].astext == "[1, 2, 3]"
)
)
data_table = self.tables.data_table
result = connection.execute(
- select([data_table.c.name]).where(
+ select(data_table.c.name).where(
data_table.c.data["k1"]["r6v1"]["subr"].astext == "[1, 2, 3]"
)
)
data_table = self.tables.data_table
with engine.begin() as conn:
result = conn.execute(
- select([data_table.c.data]).where(
+ select(data_table.c.data).where(
data_table.c.data["k1"].astext == "r3v1"
)
).first()
eq_(result, ({"k1": "r3v1", "k2": "r3v2"},))
result = conn.execute(
- select([data_table.c.data]).where(
+ select(data_table.c.data).where(
data_table.c.data["k1"].astext.cast(String) == "r3v1"
)
).first()
def test_fixed_round_trip(self, connection):
s = select(
- [
- cast(
- {"key": "value", "key2": {"k1": "v1", "k2": "v2"}},
- self.test_type,
- )
- ]
+ cast(
+ {"key": "value", "key2": {"k1": "v1", "k2": "v2"}},
+ self.test_type,
+ )
)
eq_(
connection.scalar(s),
def test_unicode_round_trip(self, connection):
s = select(
- [
- cast(
- {
- util.u("réveillé"): util.u("réveillé"),
- "data": {"k1": util.u("drôle")},
- },
- self.test_type,
- )
- ]
+ cast(
+ {
+ util.u("réveillé"): util.u("réveillé"),
+ "data": {"k1": util.u("drôle")},
+ },
+ self.test_type,
+ )
)
eq_(
connection.scalar(s),
def test_alias(self):
t = table("sometable", column("col1"), column("col2"))
- s = select([t.alias()])
+ s = select(t.alias())
self.assert_compile(
s,
"SELECT sometable_1.col1, sometable_1.col2 "
"sometable", m, Column("col1", Integer), Column("col2", Integer)
)
self.assert_compile(
- select([func.max(t.c.col1)]),
+ select(func.max(t.c.col1)),
"SELECT max(sometable.col1) AS max_1 FROM " "sometable",
)
assert_raises(
exc.StatementError,
connection.execute,
- select([1]).where(bindparam("date", type_=Date)),
+ select(1).where(bindparam("date", type_=Date)),
date=str(datetime.date(2007, 10, 30)),
)
[("2004-05-21T00:00:00",), ("2010-10-15T12:37:00",)],
)
eq_(
- connection.execute(select([t.c.d]).order_by(t.c.d)).fetchall(),
+ connection.execute(select(t.c.d).order_by(t.c.d)).fetchall(),
[
(datetime.datetime(2004, 5, 21, 0, 0),),
(datetime.datetime(2010, 10, 15, 12, 37),),
[("20040521000000",), ("20101015123700",)],
)
eq_(
- connection.execute(select([t.c.d]).order_by(t.c.d)).fetchall(),
+ connection.execute(select(t.c.d).order_by(t.c.d)).fetchall(),
[
(datetime.datetime(2004, 5, 21, 0, 0),),
(datetime.datetime(2010, 10, 15, 12, 37),),
[("20040521",), ("20101015",)],
)
eq_(
- connection.execute(select([t.c.d]).order_by(t.c.d)).fetchall(),
+ connection.execute(select(t.c.d).order_by(t.c.d)).fetchall(),
[(datetime.date(2004, 5, 21),), (datetime.date(2010, 10, 15),)],
)
[("2004|05|21",), ("2010|10|15",)],
)
eq_(
- connection.execute(select([t.c.d]).order_by(t.c.d)).fetchall(),
+ connection.execute(select(t.c.d).order_by(t.c.d)).fetchall(),
[(datetime.date(2004, 5, 21),), (datetime.date(2010, 10, 15),)],
)
with testing.db.connect() as conn:
conn.execute(sqlite_json.insert(), foo=value)
- eq_(conn.scalar(select([sqlite_json.c.foo])), value)
+ eq_(conn.scalar(select(sqlite_json.c.foo)), value)
@testing.provide_metadata
def test_extract_subobject(self):
with engine.begin() as conn:
conn.execute(sqlite_json.insert(), {"foo": data_element})
- row = conn.execute(select([sqlite_json.c.foo])).first()
+ row = conn.execute(select(sqlite_json.c.foo)).first()
eq_(row, (data_element,))
eq_(js.mock_calls, [mock.call(data_element)])
conn.execute(t.insert())
conn.execute(t.insert().values(x=today))
eq_(
- conn.execute(select([t.c.x]).order_by(t.c.id)).fetchall(),
+ conn.execute(select(t.c.x).order_by(t.c.id)).fetchall(),
[(now,), (today,)],
)
conn.execute(t.insert())
conn.execute(t.insert().values(x=35))
eq_(
- conn.execute(select([t.c.x]).order_by(t.c.id)).fetchall(),
+ conn.execute(select(t.c.x).order_by(t.c.id)).fetchall(),
[(22,), (35,)],
)
}
for field, subst in mapping.items():
self.assert_compile(
- select([extract(field, t.c.col1)]),
+ select(extract(field, t.c.col1)),
"SELECT CAST(STRFTIME('%s', t.col1) AS "
"INTEGER) AS anon_1 FROM t" % subst,
)
__only_on__ = "sqlite"
- # empty insert (i.e. INSERT INTO table DEFAULT VALUES) fails on
- # 3.3.7 and before
+ # empty insert (i.e. INSERT INTO table DEFAULT VALUES) is supported
+ # as of SQLite 3.3.8.
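+ # e.g. an empty insert compiles on SQLite as:
+ #
+ #     INSERT INTO a DEFAULT VALUES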
- def _test_empty_insert(self, table, expect=1):
+ def _test_empty_insert(self, connection, table, expect=1):
try:
- table.create()
+ table.create(connection)
for wanted in expect, expect * 2:
- table.insert().execute()
- rows = table.select().execute().fetchall()
+ connection.execute(table.insert())
+ rows = connection.execute(table.select()).fetchall()
eq_(len(rows), wanted)
finally:
- table.drop()
+ table.drop(connection)
- @testing.exclude("sqlite", "<", (3, 3, 8), "no database support")
- def test_empty_insert_pk1(self):
+ def test_empty_insert_pk1(self, connection):
self._test_empty_insert(
- Table(
- "a",
- MetaData(testing.db),
- Column("id", Integer, primary_key=True),
- )
+ connection,
+ Table("a", MetaData(), Column("id", Integer, primary_key=True),),
)
- @testing.exclude("sqlite", "<", (3, 3, 8), "no database support")
- def test_empty_insert_pk2(self):
+ def test_empty_insert_pk2(self, connection):
# now warns due to [ticket:3216]
with expect_warnings(
assert_raises(
exc.IntegrityError,
self._test_empty_insert,
+ connection,
Table(
"b",
- MetaData(testing.db),
+ MetaData(),
Column("x", Integer, primary_key=True),
Column("y", Integer, primary_key=True),
),
)
- @testing.exclude("sqlite", "<", (3, 3, 8), "no database support")
- def test_empty_insert_pk2_fv(self):
+ def test_empty_insert_pk2_fv(self, connection):
assert_raises(
exc.DBAPIError,
self._test_empty_insert,
+ connection,
Table(
"b",
- MetaData(testing.db),
+ MetaData(),
Column(
"x",
Integer,
),
)
- @testing.exclude("sqlite", "<", (3, 3, 8), "no database support")
- def test_empty_insert_pk3(self):
+ def test_empty_insert_pk3(self, connection):
# now warns due to [ticket:3216]
with expect_warnings(
"Column 'c.x' is marked as a member of the primary key for table"
assert_raises(
exc.IntegrityError,
self._test_empty_insert,
+ connection,
Table(
"c",
- MetaData(testing.db),
+ MetaData(),
Column("x", Integer, primary_key=True),
Column(
"y", Integer, DefaultClause("123"), primary_key=True
),
)
- @testing.exclude("sqlite", "<", (3, 3, 8), "no database support")
- def test_empty_insert_pk3_fv(self):
+ def test_empty_insert_pk3_fv(self, connection):
assert_raises(
exc.DBAPIError,
self._test_empty_insert,
+ connection,
Table(
"c",
- MetaData(testing.db),
+ MetaData(),
Column(
"x",
Integer,
),
)
- @testing.exclude("sqlite", "<", (3, 3, 8), "no database support")
- def test_empty_insert_pk4(self):
+ def test_empty_insert_pk4(self, connection):
self._test_empty_insert(
+ connection,
Table(
"d",
- MetaData(testing.db),
+ MetaData(),
Column("x", Integer, primary_key=True),
Column("y", Integer, DefaultClause("123")),
- )
+ ),
)
- @testing.exclude("sqlite", "<", (3, 3, 8), "no database support")
- def test_empty_insert_nopk1(self):
+ def test_empty_insert_nopk1(self, connection):
self._test_empty_insert(
- Table("e", MetaData(testing.db), Column("id", Integer))
+ connection, Table("e", MetaData(), Column("id", Integer))
)
- @testing.exclude("sqlite", "<", (3, 3, 8), "no database support")
- def test_empty_insert_nopk2(self):
+ def test_empty_insert_nopk2(self, connection):
self._test_empty_insert(
+ connection,
Table(
- "f",
- MetaData(testing.db),
- Column("x", Integer),
- Column("y", Integer),
- )
+ "f", MetaData(), Column("x", Integer), Column("y", Integer),
+ ),
)
- def test_inserts_with_spaces(self):
+ @testing.provide_metadata
+ def test_inserts_with_spaces(self, connection):
tbl = Table(
"tbl",
- MetaData("sqlite:///"),
+ self.metadata,
Column("with space", Integer),
Column("without", Integer),
)
- tbl.create()
- try:
- tbl.insert().execute({"without": 123})
- assert list(tbl.select().execute()) == [(None, 123)]
- tbl.insert().execute({"with space": 456})
- assert list(tbl.select().execute()) == [(None, 123), (456, None)]
- finally:
- tbl.drop()
+ tbl.create(connection)
+ connection.execute(tbl.insert(), {"without": 123})
+ eq_(connection.execute(tbl.select()).fetchall(), [(None, 123)])
+ connection.execute(tbl.insert(), {"with space": 456})
+ eq_(
+ connection.execute(tbl.select()).fetchall(),
+ [(None, 123), (456, None)],
+ )
def full_text_search_missing():
transaction.commit()
eq_(
connection.execute(
- select([users.c.user_id]).order_by(users.c.user_id)
+ select(users.c.user_id).order_by(users.c.user_id)
).fetchall(),
[(1,), (3,)],
)
transaction.commit()
eq_(
connection.execute(
- select([users.c.user_id]).order_by(users.c.user_id)
+ select(users.c.user_id).order_by(users.c.user_id)
).fetchall(),
[(1,), (2,), (3,)],
)
transaction.commit()
eq_(
connection.execute(
- select([users.c.user_id]).order_by(users.c.user_id)
+ select(users.c.user_id).order_by(users.c.user_id)
).fetchall(),
[(1,), (4,)],
)
for field, subst in list(mapping.items()):
self.assert_compile(
- select([extract(field, t.c.col1)]),
+ select(extract(field, t.c.col1)),
'SELECT DATEPART("%s", t.col1) AS anon_1 FROM t' % subst,
)
def test_limit_offset(self):
- stmt = select([1]).limit(5).offset(6)
+ stmt = select(1).limit(5).offset(6)
assert stmt.compile().params == {"param_1": 5, "param_2": 6}
self.assert_compile(
stmt, "SELECT 1 ROWS LIMIT :param_1 OFFSET :param_2"
)
def test_offset(self):
- stmt = select([1]).offset(10)
+ stmt = select(1).offset(10)
assert stmt.compile().params == {"param_1": 10}
self.assert_compile(stmt, "SELECT 1 ROWS OFFSET :param_1")
def test_limit(self):
- stmt = select([1]).limit(5)
+ stmt = select(1).limit(5)
assert stmt.compile().params == {"param_1": 5}
self.assert_compile(stmt, "SELECT 1 ROWS LIMIT :param_1")
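+ # note: LIMIT/OFFSET values render as bound parameters, so the
+ # compiled statement exposes them via .params, e.g. (sketch):
+ #
+ #     select(1).limit(5).offset(6).compile().params
+ #     # -> {"param_1": 5, "param_2": 6}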
cls.users.create(testing.db)
def teardown(self):
- testing.db.execute(self.users.delete()).close()
+ with testing.db.connect() as conn:
+ conn.execute(self.users.delete())
@classmethod
def teardown_class(cls):
return go
def _assert_no_data(self):
- eq_(
- testing.db.scalar(
- select([func.count("*")]).select_from(self.table)
- ),
- 0,
- )
+ with testing.db.connect() as conn:
+ eq_(
+ conn.scalar(select(func.count("*")).select_from(self.table)),
+ 0,
+ )
def _assert_fn(self, x, value=None):
- eq_(testing.db.execute(self.table.select()).fetchall(), [(x, value)])
+ with testing.db.connect() as conn:
+ eq_(conn.execute(self.table.select()).fetchall(), [(x, value)])
def test_transaction_engine_fn_commit(self):
fn = self._trans_fn()
).default_from()
)
- result = (
- connection.execution_options(no_parameters=True)
- .execute(stmt)
- .scalar()
- )
+ with _string_deprecation_expect():
+ result = (
+ connection.execution_options(no_parameters=True)
+ .execute(stmt)
+ .scalar()
+ )
eq_(result, "%")
@testing.requires.qmark_paramstyle
tsa.exc.StatementError,
r"\(.*.SomeException\) " r"nope\n\[SQL\: u?SELECT 1 ",
conn.execute,
- select([1]).where(column("foo") == literal("bar", MyType())),
+ select(1).where(column("foo") == literal("bar", MyType())),
)
- _go(testing.db)
with testing.db.connect() as conn:
_go(conn)
".*SELECT users.user_name AS .méil."
),
conn.execute,
- select([users.c.user_name.label(name)]).where(
+ select(users.c.user_name.label(name)).where(
users.c.user_name == bindparam("uname")
),
{"uname_incorrect": "foo"},
MyException,
"nope",
conn.execute,
- select([1]).where(column("foo") == literal("bar", MyType())),
+ select(1).where(column("foo") == literal("bar", MyType())),
)
- _go(testing.db)
conn = testing.db.connect()
try:
_go(conn)
finally:
conn.close()
- def test_empty_insert(self):
+ def test_empty_insert(self, connection):
"""test that execute() interprets [] as a list with no params"""
users_autoinc = self.tables.users_autoinc
- testing.db.execute(
+ connection.execute(
users_autoinc.insert().values(user_name=bindparam("name", None)),
[],
)
- eq_(testing.db.execute(users_autoinc.select()).fetchall(), [(1, None)])
+ eq_(connection.execute(users_autoinc.select()).fetchall(), [(1, None)])
@testing.only_on("sqlite")
def test_execute_compiled_favors_compiled_paramstyle(self):
d1 = default.DefaultDialect(paramstyle="format")
d2 = default.DefaultDialect(paramstyle="pyformat")
- testing.db.execute(stmt.compile(dialect=d1))
- testing.db.execute(stmt.compile(dialect=d2))
+ with testing.db.connect() as conn:
+ conn.execute(stmt.compile(dialect=d1))
+ conn.execute(stmt.compile(dialect=d2))
eq_(
do_exec.mock_calls,
eng = create_engine(testing.db.url)
def my_init(connection):
- connection.execution_options(foo="bar").execute(select([1]))
+ connection.execution_options(foo="bar").execute(select(1))
with patch.object(eng.dialect, "initialize", my_init):
conn = eng.connect()
def test_works_after_dispose(self):
eng = create_engine(testing.db.url)
for i in range(3):
- eq_(eng.scalar(select([1])), 1)
+ with eng.connect() as conn:
+ eq_(conn.scalar(select(1)), 1)
eng.dispose()
def test_works_after_dispose_testing_engine(self):
eng = engines.testing_engine()
for i in range(3):
- eq_(eng.scalar(select([1])), 1)
+ with eng.connect() as conn:
+ eq_(conn.scalar(select(1)), 1)
eng.dispose()
return go
def _assert_no_data(self):
- eq_(
- testing.db.scalar(
- select([func.count("*")]).select_from(self.table)
- ),
- 0,
- )
+ with testing.db.connect() as conn:
+ eq_(
+ conn.scalar(select(func.count("*")).select_from(self.table)),
+ 0,
+ )
def _assert_fn(self, x, value=None):
- eq_(testing.db.execute(self.table.select()).fetchall(), [(x, value)])
+ with testing.db.connect() as conn:
+ eq_(conn.execute(self.table.select()).fetchall(), [(x, value)])
def test_transaction_engine_ctx_commit(self):
fn = self._trans_fn()
@engines.close_first
def teardown(self):
- testing.db.execute(users.delete())
+ with testing.db.connect() as conn:
+ conn.execute(users.delete())
@classmethod
def teardown_class(cls):
m = MetaData()
t1 = Table("x", m, Column("q", Integer))
ins = t1.insert()
- stmt = select([t1.c.q])
+ stmt = select(t1.c.q)
cache = {}
with config.db.connect().execution_options(
eq_(
conn._execute_20(
- select([t1.c.x]), execution_options=execution_options
+ select(t1.c.x), execution_options=execution_options
).scalar(),
1,
)
eq_(
conn._execute_20(
- select([t2.c.x]), execution_options=execution_options
+ select(t2.c.x), execution_options=execution_options
).scalar(),
2,
)
eq_(
conn._execute_20(
- select([t3.c.x]), execution_options=execution_options
+ select(t3.c.x), execution_options=execution_options
).scalar(),
3,
)
conn.execute(t2.update().values(x=2).where(t2.c.x == 1))
conn.execute(t3.update().values(x=3).where(t3.c.x == 1))
- eq_(conn.scalar(select([t1.c.x])), 1)
- eq_(conn.scalar(select([t2.c.x])), 2)
- eq_(conn.scalar(select([t3.c.x])), 3)
+ eq_(conn.scalar(select(t1.c.x)), 1)
+ eq_(conn.scalar(select(t2.c.x)), 2)
+ eq_(conn.scalar(select(t3.c.x)), 3)
conn.execute(t1.delete())
conn.execute(t2.delete())
with self.sql_execution_asserter(config.db) as asserter:
eng = config.db.execution_options(schema_translate_map=map_)
conn = eng.connect()
- conn.execute(select([t2.c.x]))
+ conn.execute(select(t2.c.x))
asserter.assert_(
CompiledSQL("SELECT [SCHEMA_foo].t2.x FROM [SCHEMA_foo].t2")
)
canary = Mock()
event.listen(e1, "before_execute", canary)
- s1 = select([1])
- s2 = select([2])
+ s1 = select(1)
+ s2 = select(2)
with e1.connect() as conn:
conn.execute(s1)
e2.connect()
with e1.connect() as conn:
- conn.execute(select([1]))
+ conn.execute(select(1))
eq_(canary.be1.call_count, 1)
eq_(canary.be2.call_count, 1)
with e2.connect() as conn:
- conn.execute(select([1]))
+ conn.execute(select(1))
eq_(canary.be1.call_count, 2)
eq_(canary.be2.call_count, 1)
conn = e1.connect()
event.listen(conn, "before_execute", canary.be2)
- conn.execute(select([1]))
+ conn.execute(select(1))
eq_(canary.be1.call_count, 1)
eq_(canary.be2.call_count, 1)
if testing.requires.legacy_engine.enabled:
- conn._branch().execute(select([1]))
+ conn._branch().execute(select(1))
eq_(canary.be1.call_count, 2)
eq_(canary.be2.call_count, 2)
conn = e1.connect()
event.listen(e1, "before_execute", canary.be1)
- conn.execute(select([1]))
+ conn.execute(select(1))
eq_(canary.be1.call_count, 1)
- conn._branch().execute(select([1]))
+ conn._branch().execute(select(1))
eq_(canary.be1.call_count, 2)
def test_force_conn_events_false(self):
e1, connection=e1.raw_connection(), _has_events=False
)
- conn.execute(select([1]))
+ conn.execute(select(1))
eq_(canary.be1.call_count, 0)
- conn._branch().execute(select([1]))
+ conn._branch().execute(select(1))
eq_(canary.be1.call_count, 0)
def test_cursor_events_ctx_execute_scalar(self):
event.listen(e1, "before_cursor_execute", canary.bce)
event.listen(e1, "after_cursor_execute", canary.ace)
- stmt = str(select([1]).compile(dialect=e1.dialect))
+ stmt = str(select(1).compile(dialect=e1.dialect))
with e1.connect() as conn:
dialect = conn.dialect
event.listen(e1, "before_cursor_execute", canary.bce)
event.listen(e1, "after_cursor_execute", canary.ace)
- stmt = str(select([1]).compile(dialect=e1.dialect))
+ stmt = str(select(1).compile(dialect=e1.dialect))
with e1.connect() as conn:
event.listen(e1, "after_execute", after_execute)
with e1.connect() as conn:
- conn.execute(select([1]))
- conn.execute(select([1]).compile(dialect=e1.dialect).statement)
- conn.execute(select([1]).compile(dialect=e1.dialect))
+ conn.execute(select(1))
+ conn.execute(select(1).compile(dialect=e1.dialect).statement)
+ conn.execute(select(1).compile(dialect=e1.dialect))
conn._execute_compiled(
- select([1]).compile(dialect=e1.dialect), (), {}, {}
+ select(1).compile(dialect=e1.dialect), (), {}, {}
)
def test_execute_events(self):
),
)
- if isinstance(engine, Connection) and engine._is_future:
+ if isinstance(engine, Connection):
ctx = None
conn = engine
- elif engine._is_future:
- ctx = conn = engine.connect()
else:
- ctx = None
- conn = engine
+ ctx = conn = engine.connect()
try:
m.create_all(conn, checkfirst=False)
conn = engine.connect()
c2 = conn.execution_options(foo="bar")
eq_(c2._execution_options, {"foo": "bar"})
- c2.execute(select([1]))
+ c2.execute(select(1))
c3 = c2.execution_options(bar="bat")
eq_(c3._execution_options, {"foo": "bar", "bar": "bat"})
eq_(canary, ["execute", "cursor_execute"])
event.listen(eng1, "before_execute", l3)
with eng.connect() as conn:
- conn.execute(select([1]))
+ conn.execute(select(1))
eq_(canary, ["l1", "l2"])
with eng1.connect() as conn:
- conn.execute(select([1]))
+ conn.execute(select(1))
eq_(canary, ["l1", "l2", "l3", "l1", "l2"])
event.listen(eng1, "before_execute", l4)
with eng.connect() as conn:
- conn.execute(select([1]))
+ conn.execute(select(1))
eq_(canary, ["l1", "l2", "l3"])
with eng1.connect() as conn:
- conn.execute(select([1]))
+ conn.execute(select(1))
eq_(canary, ["l1", "l2", "l3", "l4", "l1", "l2", "l3"])
event.remove(eng, "before_execute", l3)
with eng1.connect() as conn:
- conn.execute(select([1]))
+ conn.execute(select(1))
eq_(canary, ["l2"])
@testing.requires.ad_hoc_engines
engine, "before_cursor_execute", cursor_execute, retval=True
)
with engine.connect() as conn:
- conn.execute(select([1]))
+ conn.execute(select(1))
eq_(canary, ["execute", "cursor_execute"])
@testing.requires.legacy_engine
conn = engine.connect()
trans = conn.begin()
- conn.execute(select([1]))
+ conn.execute(select(1))
trans.rollback()
trans = conn.begin()
- conn.execute(select([1]))
+ conn.execute(select(1))
trans.commit()
eq_(
conn = engine.connect()
trans = conn.begin()
- conn.execute(select([1]))
+ conn.execute(select(1))
trans.rollback()
trans = conn.begin()
- conn.execute(select([1]))
+ conn.execute(select(1))
trans.commit()
eq_(
trans = conn.begin()
trans2 = conn.begin_nested()
- conn.execute(select([1]))
+ conn.execute(select(1))
trans2.rollback()
trans2 = conn.begin_nested()
- conn.execute(select([1]))
+ conn.execute(select(1))
trans2.commit()
trans.rollback()
trans = conn.begin_twophase()
- conn.execute(select([1]))
+ conn.execute(select(1))
trans.prepare()
trans.commit()
tsa.exc.StatementError,
r"\(.*.SomeException\) " r"nope\n\[SQL\: u?SELECT 1 ",
conn.execute,
- select([1]).where(column("foo") == literal("bar", MyType())),
+ select(1).where(column("foo") == literal("bar", MyType())),
)
ctx = listener.mock_calls[0][1][0]
"sqlalchemy.engine.cursor.BaseCursorResult.__init__",
Mock(side_effect=tsa.exc.InvalidRequestError("duplicate col")),
):
- assert_raises(
- tsa.exc.InvalidRequestError, engine.execute, text("select 1"),
- )
+ with engine.connect() as conn:
+ assert_raises(
+ tsa.exc.InvalidRequestError,
+ conn.execute,
+ text("select 1"),
+ )
# cursor is closed
assert_raises_message(
dbapi.connect = Mock(side_effect=self.ProgrammingError("random error"))
- assert_raises(MySpecialException, conn.execute, select([1]))
+ assert_raises(MySpecialException, conn.execute, select(1))
def test_handle_error_custom_connect(self):
dbapi = self.dbapi
connection = connection.execution_options(**conn_opts)
if exec_opts:
- connection.execute(select([1]), execution_options=exec_opts)
+ connection.execute(select(1), execution_options=exec_opts)
else:
- connection.execute(select([1]))
+ connection.execute(select(1))
eq_(opts, [expected])
):
opts.append(("after", execution_options))
- stmt = select([1])
+ stmt = select(1)
if stmt_opts:
stmt = stmt.execution_options(**stmt_opts)
eq_(opts, [("before", expected), ("after", expected)])
def test_no_branching(self, connection):
- assert_raises_message(
- NotImplementedError,
- "sqlalchemy.future.Connection does not support "
- "'branching' of new connections.",
- connection.connect,
- )
+ with testing.expect_deprecated(
+ r"The Connection.connect\(\) function/method is considered legacy"
+ ):
+ assert_raises_message(
+ NotImplementedError,
+ "sqlalchemy.future.Connection does not support "
+ "'branching' of new connections.",
+ connection.connect,
+ )
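+ # note: under the future engine API, Connection.connect() (the 1.x
+ # "branching" mechanism) is itself deprecated, hence the
+ # expect_deprecated() wrapper around the NotImplementedError check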
# connection works
- conn.execute(select([1]))
+ conn.execute(select(1))
# create a second connection within the pool, which we'll ensure
# also goes away
# set it to fail
self.dbapi.shutdown()
- assert_raises(tsa.exc.DBAPIError, conn.execute, select([1]))
+ assert_raises(tsa.exc.DBAPIError, conn.execute, select(1))
# assert was invalidated
[[call()], [call()], []],
)
- conn.execute(select([1]))
+ conn.execute(select(1))
conn.close()
eq_(
trans = conn.begin()
self.dbapi.shutdown()
- assert_raises(tsa.exc.DBAPIError, conn.execute, select([1]))
+ assert_raises(tsa.exc.DBAPIError, conn.execute, select(1))
eq_([c.close.mock_calls for c in self.dbapi.connections], [[call()]])
assert not conn.closed
tsa.exc.PendingRollbackError,
"Can't reconnect until invalid transaction is rolled back",
conn.execute,
- select([1]),
+ select(1),
)
assert trans.is_active
tsa.exc.PendingRollbackError,
"Can't reconnect until invalid transaction is rolled back",
conn.execute,
- select([1]),
+ select(1),
)
assert not trans.is_active
trans.rollback()
assert not trans.is_active
- conn.execute(select([1]))
+ conn.execute(select(1))
assert not conn.invalidated
eq_(
[c.close.mock_calls for c in self.dbapi.connections],
tsa.exc.PendingRollbackError,
"Can't reconnect until invalid transaction is rolled back",
conn.execute,
- select([1]),
+ select(1),
)
assert not trans.is_active
tsa.exc.PendingRollbackError,
"Can't reconnect until invalid transaction is rolled back",
conn.execute,
- select([1]),
+ select(1),
)
assert not trans.is_active
trans.rollback()
assert not trans.is_active
- conn.execute(select([1]))
+ conn.execute(select(1))
assert not conn.invalidated
def test_commit_fails_contextmanager(self):
tsa.exc.PendingRollbackError,
"This connection is on an inactive transaction. Please rollback",
conn.execute,
- select([1]),
+ select(1),
)
assert not trans.is_active
tsa.exc.PendingRollbackError,
"This connection is on an inactive transaction. Please rollback",
conn.execute,
- select([1]),
+ select(1),
)
assert not trans.is_active
trans.rollback()
assert not trans.is_active
- conn.execute(select([1]))
+ conn.execute(select(1))
assert not conn.invalidated
def test_invalidate_dont_call_finalizer(self):
def test_conn_reusable(self):
conn = self.db.connect()
- conn.execute(select([1]))
+ conn.execute(select(1))
eq_(self.dbapi.connect.mock_calls, [self.mock_connect])
self.dbapi.shutdown()
- assert_raises(tsa.exc.DBAPIError, conn.execute, select([1]))
+ assert_raises(tsa.exc.DBAPIError, conn.execute, select(1))
assert not conn.closed
assert conn.invalidated
eq_([c.close.mock_calls for c in self.dbapi.connections], [[call()]])
# test reconnects
- conn.execute(select([1]))
+ conn.execute(select(1))
assert not conn.invalidated
eq_(
self.dbapi.shutdown()
- assert_raises(tsa.exc.DBAPIError, conn.execute, select([1]))
+ assert_raises(tsa.exc.DBAPIError, conn.execute, select(1))
conn.close()
assert conn.closed
tsa.exc.ResourceClosedError,
"This Connection is closed",
conn.execute,
- select([1]),
+ select(1),
)
def test_noreconnect_execute_plus_closewresult(self):
tsa.exc.DBAPIError,
"something broke on execute but we didn't lose the connection",
conn.execute,
- select([1]),
+ select(1),
)
assert conn.closed
"something broke on rollback but we didn't "
"lose the connection",
conn.execute,
- select([1]),
+ select(1),
)
assert conn.closed
tsa.exc.ResourceClosedError,
"This Connection is closed",
conn.execute,
- select([1]),
+ select(1),
)
def test_reconnect_on_reentrant(self):
conn = self.db.connect()
- conn.execute(select([1]))
+ conn.execute(select(1))
assert len(self.dbapi.connections) == 1
tsa.exc.DBAPIError,
"Lost the DB connection on rollback",
conn.execute,
- select([1]),
+ select(1),
)
assert not conn.closed
tsa.exc.DBAPIError,
"Lost the DB connection on rollback",
conn.execute,
- select([1]),
+ select(1),
)
assert conn.closed
tsa.exc.ResourceClosedError,
"This Connection is closed",
conn.execute,
- select([1]),
+ select(1),
)
def test_check_disconnect_no_cursor(self):
conn = self.db.connect()
- result = conn.execute(select([1]))
+ result = conn.execute(select(1))
result.cursor.close()
conn.close()
def go():
with conn.begin():
- conn.execute(select([1]))
+ conn.execute(select(1))
assert_raises(MockExitIsh, go)
eq_(pool._invalidate_time, 0) # pool not invalidated
- conn.execute(select([1]))
+ conn.execute(select(1))
assert not conn.invalidated
def test_invalidate_conn_interrupt_nodisconnect_workaround(self):
def go():
with conn.begin():
- conn.execute(select([1]))
+ conn.execute(select(1))
assert_raises(MockExitIsh, go)
eq_(pool._invalidate_time, 0) # pool not invalidated
- conn.execute(select([1]))
+ conn.execute(select(1))
assert not conn.invalidated
def test_invalidate_conn_w_contextmanager_disconnect(self):
def go():
with conn.begin():
- conn.execute(select([1]))
+ conn.execute(select(1))
assert_raises(exc.DBAPIError, go) # wraps a MockDisconnect
ne_(pool._invalidate_time, 0) # pool is invalidated
- conn.execute(select([1]))
+ conn.execute(select(1))
assert not conn.invalidated
def test_reconnect(self):
conn = self.engine.connect()
- eq_(conn.execute(select([1])).scalar(), 1)
+ eq_(conn.execute(select(1)).scalar(), 1)
assert not conn.closed
self.engine.test_shutdown()
- _assert_invalidated(conn.execute, select([1]))
+ _assert_invalidated(conn.execute, select(1))
assert not conn.closed
assert conn.invalidated
assert conn.invalidated
- eq_(conn.execute(select([1])).scalar(), 1)
+ eq_(conn.execute(select(1)).scalar(), 1)
assert not conn.invalidated
# one more time
self.engine.test_shutdown()
- _assert_invalidated(conn.execute, select([1]))
+ _assert_invalidated(conn.execute, select(1))
assert conn.invalidated
- eq_(conn.execute(select([1])).scalar(), 1)
+ eq_(conn.execute(select(1)).scalar(), 1)
assert not conn.invalidated
conn.close()
c1 = self.engine.connect()
c2 = self.engine.connect()
- eq_(c1.execute(select([1])).scalar(), 1)
+ eq_(c1.execute(select(1)).scalar(), 1)
self.engine.test_shutdown()
- _assert_invalidated(c1.execute, select([1]))
+ _assert_invalidated(c1.execute, select(1))
p2 = self.engine.pool
- _assert_invalidated(c2.execute, select([1]))
+ _assert_invalidated(c2.execute, select(1))
# pool isn't replaced
assert self.engine.pool is p2
with patch.object(self.engine.pool, "logger") as logger:
c1_branch = c1.connect()
- eq_(c1_branch.execute(select([1])).scalar(), 1)
+ eq_(c1_branch.execute(select(1)).scalar(), 1)
self.engine.test_shutdown()
- _assert_invalidated(c1_branch.execute, select([1]))
+ _assert_invalidated(c1_branch.execute, select(1))
assert c1.invalidated
assert c1_branch.invalidated
c1 = self.engine.connect()
c1_branch = c1.connect()
- eq_(c1_branch.execute(select([1])).scalar(), 1)
+ eq_(c1_branch.execute(select(1)).scalar(), 1)
self.engine.test_shutdown()
- _assert_invalidated(c1.execute, select([1]))
+ _assert_invalidated(c1.execute, select(1))
assert c1.invalidated
assert c1_branch.invalidated
c1_branch = c1.connect()
- eq_(c1_branch.execute(select([1])).scalar(), 1)
+ eq_(c1_branch.execute(select(1)).scalar(), 1)
self.engine.test_shutdown()
- _assert_invalidated(c1_branch.execute, select([1]))
+ _assert_invalidated(c1_branch.execute, select(1))
assert not c1_branch.closed
assert not c1_branch._still_open_and_dbapi_connection_is_valid
with expect_warnings(
"An exception has occurred during handling .*", py2konly=True
):
- assert_raises(tsa.exc.DBAPIError, conn.execute, select([1]))
+ assert_raises(tsa.exc.DBAPIError, conn.execute, select(1))
def test_rollback_on_invalid_plain(self):
conn = self.engine.connect()
options=dict(poolclass=pool.NullPool)
)
conn = engine.connect()
- eq_(conn.execute(select([1])).scalar(), 1)
+ eq_(conn.execute(select(1)).scalar(), 1)
assert not conn.closed
engine.test_shutdown()
- _assert_invalidated(conn.execute, select([1]))
+ _assert_invalidated(conn.execute, select(1))
assert not conn.closed
assert conn.invalidated
- eq_(conn.execute(select([1])).scalar(), 1)
+ eq_(conn.execute(select(1)).scalar(), 1)
assert not conn.invalidated
def test_close(self):
conn = self.engine.connect()
- eq_(conn.execute(select([1])).scalar(), 1)
+ eq_(conn.execute(select(1)).scalar(), 1)
assert not conn.closed
self.engine.test_shutdown()
- _assert_invalidated(conn.execute, select([1]))
+ _assert_invalidated(conn.execute, select(1))
conn.close()
conn = self.engine.connect()
- eq_(conn.execute(select([1])).scalar(), 1)
+ eq_(conn.execute(select(1)).scalar(), 1)
def test_with_transaction(self):
conn = self.engine.connect()
trans = conn.begin()
assert trans.is_valid
- eq_(conn.execute(select([1])).scalar(), 1)
+ eq_(conn.execute(select(1)).scalar(), 1)
assert not conn.closed
self.engine.test_shutdown()
- _assert_invalidated(conn.execute, select([1]))
+ _assert_invalidated(conn.execute, select(1))
assert not conn.closed
assert conn.invalidated
assert trans.is_active
tsa.exc.PendingRollbackError,
"Can't reconnect until invalid transaction is rolled back",
conn.execute,
- select([1]),
+ select(1),
)
assert trans.is_active
assert not trans.is_valid
tsa.exc.PendingRollbackError,
"Can't reconnect until invalid transaction is rolled back",
conn.execute,
- select([1]),
+ select(1),
)
# still asks us...
tsa.exc.PendingRollbackError,
"Can't reconnect until invalid transaction is rolled back",
conn.execute,
- select([1]),
+ select(1),
)
# OK!
# conn still invalid but we can reconnect
assert conn.invalidated
- eq_(conn.execute(select([1])).scalar(), 1)
+ eq_(conn.execute(select(1)).scalar(), 1)
assert not conn.invalidated
engine = engines.reconnecting_engine()
conn = engine.connect()
- eq_(conn.execute(select([1])).scalar(), 1)
+ eq_(conn.execute(select(1)).scalar(), 1)
conn.close()
# set the pool recycle down to 1.
# can connect, no exception
conn = engine.connect()
- eq_(conn.execute(select([1])).scalar(), 1)
+ eq_(conn.execute(select(1)).scalar(), 1)
conn.close()
engine = engines.reconnecting_engine(options={"pool_pre_ping": True})
conn = engine.connect()
- eq_(conn.execute(select([1])).scalar(), 1)
+ eq_(conn.execute(select(1)).scalar(), 1)
stale_connection = conn.connection.connection
conn.close()
engine.test_restart()
conn = engine.connect()
- eq_(conn.execute(select([1])).scalar(), 1)
+ eq_(conn.execute(select(1)).scalar(), 1)
conn.close()
def exercise_stale_connection():
engine = engines.reconnecting_engine(options={"pool_pre_ping": True})
conn = engine.connect()
- eq_(conn.execute(select([1])).scalar(), 1)
+ eq_(conn.execute(select(1)).scalar(), 1)
conn.close()
engine.test_shutdown(stop=True)
from sqlalchemy import text
from sqlalchemy import util
from sqlalchemy import VARCHAR
-from sqlalchemy.future import select as future_select
from sqlalchemy.testing import assert_raises
from sqlalchemy.testing import assert_raises_message
from sqlalchemy.testing import eq_
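+ # note: as of 1.4 the plain select() construct provides the same
+ # 2.0-style behaviors as sqlalchemy.future.select(), so the aliased
+ # future import is dropped; a minimal sketch of the replacement:
+ #
+ #     from sqlalchemy import select
+ #     stmt = select(func.count(1)).select_from(users)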
with testing.db.connect() as conn:
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 0,
+ conn.scalar(select(func.count(1)).select_from(users)), 0,
)
def test_inactive_due_to_subtransaction_no_commit(self, local_connection):
conn.execute(users.insert(), {"user_id": 1, "user_name": "name"})
conn.rollback()
- eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)), 0
- )
+ eq_(conn.scalar(select(func.count(1)).select_from(users)), 0)
@testing.requires.autocommit
def test_autocommit_isolation_level(self):
with testing.db.connect() as conn:
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 1,
+ conn.scalar(select(func.count(1)).select_from(users)), 1,
)
@testing.requires.autocommit
def test_no_autocommit_w_autobegin(self):
with testing.db.connect() as conn:
- conn.execute(future_select(1))
+ conn.execute(select(1))
assert_raises_message(
exc.InvalidRequestError,
assert not conn.in_transaction()
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 1,
+ conn.scalar(select(func.count(1)).select_from(users)), 1,
)
conn.execute(users.insert(), {"user_id": 2, "user_name": "name 2"})
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 2,
+ conn.scalar(select(func.count(1)).select_from(users)), 2,
)
assert conn.in_transaction()
assert not conn.in_transaction()
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 1,
+ conn.scalar(select(func.count(1)).select_from(users)), 1,
)
def test_rollback_on_close(self):
conn.rollback()
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 1,
+ conn.scalar(select(func.count(1)).select_from(users)), 1,
)
def test_rollback_no_begin(self):
conn.commit()
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 1,
+ conn.scalar(select(func.count(1)).select_from(users)), 1,
)
def test_no_double_begin(self):
with testing.db.connect() as conn:
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 0,
+ conn.scalar(select(func.count(1)).select_from(users)), 0,
)
def test_begin_block(self):
with testing.db.connect() as conn:
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 1,
+ conn.scalar(select(func.count(1)).select_from(users)), 1,
)
@testing.requires.savepoints
conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 2,
+ conn.scalar(select(func.count(1)).select_from(users)), 2,
)
savepoint.rollback()
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 1,
+ conn.scalar(select(func.count(1)).select_from(users)), 1,
)
with testing.db.connect() as conn:
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 1,
+ conn.scalar(select(func.count(1)).select_from(users)), 1,
)
@testing.requires.savepoints
conn.execute(users.insert(), {"user_id": 2, "user_name": "name2"})
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 2,
+ conn.scalar(select(func.count(1)).select_from(users)), 2,
)
savepoint.commit()
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 2,
+ conn.scalar(select(func.count(1)).select_from(users)), 2,
)
with testing.db.connect() as conn:
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 2,
+ conn.scalar(select(func.count(1)).select_from(users)), 2,
)
@testing.requires.savepoints
with testing.db.connect() as conn:
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 0,
+ conn.scalar(select(func.count(1)).select_from(users)), 0,
)
@testing.requires.savepoints
with testing.db.connect() as conn:
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 2,
+ conn.scalar(select(func.count(1)).select_from(users)), 2,
)
@testing.requires.savepoints
with testing.db.connect() as conn:
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 3,
+ conn.scalar(select(func.count(1)).select_from(users)), 3,
)
@testing.requires.savepoints
with testing.db.connect() as conn:
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 1,
+ conn.scalar(select(func.count(1)).select_from(users)), 1,
)
@testing.requires.savepoints
with testing.db.connect() as conn:
eq_(
- conn.scalar(future_select(func.count(1)).select_from(users)),
- 0,
+ conn.scalar(select(func.count(1)).select_from(users)), 0,
)
q = sess.query(User).filter(User.id == 7).set_cache_key("user7")
- eq_(sess.execute(q).all(), [(User(id=7, addresses=[Address(id=1)]),)])
+ eq_(
+ sess.execute(q, future=True).all(),
+ [(User(id=7, addresses=[Address(id=1)]),)],
+ )
eq_(list(q.cache), ["user7"])
- eq_(sess.execute(q).all(), [(User(id=7, addresses=[Address(id=1)]),)])
+ eq_(
+ sess.execute(q, future=True).all(),
+ [(User(id=7, addresses=[Address(id=1)]),)],
+ )
def test_use_w_baked(self):
User, Address = self._o2m_fixture()
from sqlalchemy import inspect
from sqlalchemy import Integer
from sqlalchemy import MetaData
+from sqlalchemy import select
from sqlalchemy import sql
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import update
from sqlalchemy import util
from sqlalchemy.ext.horizontal_shard import ShardedSession
-from sqlalchemy.future import select as future_select
from sqlalchemy.orm import clear_mappers
from sqlalchemy.orm import create_session
from sqlalchemy.orm import deferred
tokyo.reports.append(Report(80.0, id_=1))
newyork.reports.append(Report(75, id_=1))
quito.reports.append(Report(85))
- sess = create_session()
+ sess = create_session(future=True)
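+ # note: future=True opts the Session into 2.0-style transactional
+ # behavior, matching the future-style statements executed below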
for c in [tokyo, newyork, toronto, london, dublin, brasilia, quito]:
sess.add(c)
sess.flush()
def test_query_explicit_shard_via_bind_opts(self):
sess = self._fixture_data()
- stmt = future_select(WeatherLocation).filter(WeatherLocation.id == 1)
+ stmt = select(WeatherLocation).filter(WeatherLocation.id == 1)
tokyo = (
sess.execute(stmt, bind_arguments={"shard_id": "asia"})
sess = self._fixture_data()
tokyo = (
- sess.execute(
- future_select(WeatherLocation).filter_by(city="Tokyo")
- )
+ sess.execute(select(WeatherLocation).filter_by(city="Tokyo"))
.scalars()
.one()
)
eq_(tokyo.city, "Tokyo")
asia_and_europe = sess.execute(
- future_select(WeatherLocation).filter(
+ select(WeatherLocation).filter(
WeatherLocation.continent.in_(["Europe", "Asia"])
)
).scalars()
eq_(
set(
row.temperature
- for row in sess.execute(future_select(Report.temperature))
+ for row in sess.execute(select(Report.temperature))
),
{80.0, 75.0, 85.0},
)
- temps = sess.execute(future_select(Report)).scalars().all()
+ temps = sess.execute(select(Report)).scalars().all()
eq_(set(t.temperature for t in temps), {80.0, 75.0, 85.0})
sess.execute(
eq_(
set(
row.temperature
- for row in sess.execute(future_select(Report.temperature))
+ for row in sess.execute(select(Report.temperature))
),
{86.0, 75.0, 91.0},
)
eq_(
set(
row.temperature
- for row in sess.execute(future_select(Report.temperature))
+ for row in sess.execute(select(Report.temperature))
),
{80.0, 75.0, 85.0},
)
- temps = sess.execute(future_select(Report)).scalars().all()
+ temps = sess.execute(select(Report)).scalars().all()
eq_(set(t.temperature for t in temps), {80.0, 75.0, 85.0})
eq_(
set(
row.temperature
- for row in sess.execute(future_select(Report.temperature))
+ for row in sess.execute(select(Report.temperature))
),
{86.0, 81.0, 91.0},
)
def test_bulk_delete_future_synchronize_evaluate(self):
sess = self._fixture_data()
- temps = sess.execute(future_select(Report)).scalars().all()
+ temps = sess.execute(select(Report)).scalars().all()
eq_(set(t.temperature for t in temps), {80.0, 75.0, 85.0})
sess.execute(
eq_(
set(
row.temperature
- for row in sess.execute(future_select(Report.temperature))
+ for row in sess.execute(select(Report.temperature))
),
{75.0},
)
def test_bulk_delete_future_synchronize_fetch(self):
sess = self._fixture_data()
- temps = sess.execute(future_select(Report)).scalars().all()
+ temps = sess.execute(select(Report)).scalars().all()
eq_(set(t.temperature for t in temps), {80.0, 75.0, 85.0})
sess.execute(
eq_(
set(
row.temperature
- for row in sess.execute(future_select(Report.temperature))
+ for row in sess.execute(select(Report.temperature))
),
{75.0},
)
if use_correlate_except:
Common.num_superclass = column_property(
- select([func.count(Superclass.id)])
+ select(func.count(Superclass.id))
.where(Superclass.common_id == Common.id)
.correlate_except(Superclass)
.scalar_subquery()
if not use_correlate_except:
Common.num_superclass = column_property(
- select([func.count(Superclass.id)])
+ select(func.count(Superclass.id))
.where(Superclass.common_id == Common.id)
.correlate(Common)
.scalar_subquery()
sess.add(a)
sess.flush()
- eq_(select([func.count("*")]).select_from(user_roles).scalar(), 1)
+ eq_(select(func.count("*")).select_from(user_roles).scalar(), 1)
def test_two(self):
admins, users, roles, user_roles = (
a.password = "sadmin"
sess.flush()
- eq_(select([func.count("*")]).select_from(user_roles).scalar(), 1)
+ eq_(select(func.count("*")).select_from(user_roles).scalar(), 1)
class PassiveDeletesTest(fixtures.MappedTest):
s = Session()
- q = s.query(ASingleSubA).select_entity_from(
- select([ASingle]).subquery()
- )
+ q = s.query(ASingleSubA).select_entity_from(select(ASingle).subquery())
assert_raises_message(
sa_exc.InvalidRequestError,
s = Session()
- q = s.query(AJoinedSubA).select_entity_from(
- select([AJoined]).subquery()
- )
+ q = s.query(AJoinedSubA).select_entity_from(select(AJoined).subquery())
assert_raises_message(
sa_exc.InvalidRequestError,
a_table, c_table, d_table, e_table = self.tables("a", "c", "d", "e")
poly = (
- select([a_table.c.id, a_table.c.type, c_table, d_table, e_table])
+ select(a_table.c.id, a_table.c.type, c_table, d_table, e_table)
.select_from(
a_table.join(c_table).outerjoin(d_table).outerjoin(e_table)
)
from sqlalchemy import select
from sqlalchemy import testing
from sqlalchemy import true
-from sqlalchemy.future import select as future_select
from sqlalchemy.orm import aliased
from sqlalchemy.orm import create_session
from sqlalchemy.orm import defaultload
self.assert_sql_count(testing.db, go, 3)
eq_(
- select([func.count("*")])
+ select(func.count("*"))
.select_from(
sess.query(Person)
.with_polymorphic("*")
)
def test_multi_join_future(self):
- sess = create_session()
+ sess = create_session(future=True)
e = aliased(Person)
c = aliased(Company)
q = (
- future_select(Company, Person, c, e)
+ select(Company, Person, c, e)
.join(Person, Company.employees)
.join(e, c.employees)
.filter(Person.person_id != e.person_id)
eq_(
sess.execute(
- future_select(func.count()).select_from(q.subquery())
+ select(func.count()).select_from(q.subquery())
).scalar(),
1,
)
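+ # note: the 2.0-style select() above joins along relationship paths
+ # with explicit targets, e.g. (sketch):
+ #
+ #     select(Company, Person, c, e).join(Person, Company.employees)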
eq_(sess.query(Engineer).all()[0], Engineer(name="dilbert"))
def test_filter_on_subclass_one_future(self):
- sess = create_session()
+ sess = create_session(future=True)
eq_(
- sess.execute(future_select(Engineer)).scalar(),
- Engineer(name="dilbert"),
+ sess.execute(select(Engineer)).scalar(), Engineer(name="dilbert"),
)
def test_filter_on_subclass_two(self):
)
def test_join_from_polymorphic_nonaliased_one_future(self):
- sess = create_session()
+ sess = create_session(future=True)
eq_(
sess.execute(
- future_select(Person)
+ select(Person)
.join(Person.paperwork)
.filter(Paperwork.description.like("%review%"))
)
)
def test_join_from_polymorphic_flag_aliased_one_future(self):
- sess = create_session()
+ sess = create_session(future=True)
pa = aliased(Paperwork)
eq_(
sess.execute(
- future_select(Person)
+ select(Person)
.order_by(Person.person_id)
.join(Person.paperwork.of_type(pa))
.filter(pa.description.like("%review%"))
)
def test_join_from_with_polymorphic_nonaliased_one_future(self):
- sess = create_session()
+ sess = create_session(future=True)
pm = with_polymorphic(Person, [Manager])
eq_(
sess.execute(
- future_select(pm)
+ select(pm)
.order_by(pm.person_id)
.join(pm.paperwork)
.filter(Paperwork.description.like("%review%"))
expected,
)
- def test_self_referential_two_newstyle(self):
+ def test_self_referential_two_future(self):
# TODO: this is the first test *EVER* of an aliased class of
# an aliased class. we should add many more tests for this.
# new case added in Id810f485c5f7ed971529489b84694e02a3356d6d
- sess = create_session()
+ sess = create_session(future=True)
expected = [(m1, e1), (m1, e2), (m1, b1)]
p1 = aliased(Person)
p2 = aliased(Person)
stmt = (
- future_select(p1, p2)
+ select(p1, p2)
.filter(p1.company_id == p2.company_id)
.filter(p1.name == "dogbert")
.filter(p1.person_id > p2.person_id)
pa1 = aliased(p1, subq)
pa2 = aliased(p2, subq)
- stmt = future_select(pa1, pa2).order_by(pa1.person_id, pa2.person_id)
+ stmt = select(pa1, pa2).order_by(pa1.person_id, pa2.person_id)
eq_(
sess.execute(stmt).unique().all(), expected,
C1 = aliased(Child1, flat=True)
- # figure out all the things we need to do in Core to make
- # the identical query that the ORM renders.
+ # this comment formerly read "figure out all the things we need to
+ # do in Core to make the identical query that the ORM renders";
+ # however, as of I765a0b912b3dcd0e995426427d8bb7997cbffd51 the ORM
+ # is used to create the query in any case
salias = secondary.alias()
stmt = (
- select([Child2])
+ select(Child2)
.select_from(
join(
Child2,
self.assert_compile(
stmt.apply_labels(),
- "SELECT parent.id AS parent_id, "
- "parent.cls AS parent_cls, child2.id AS child2_id "
+ "SELECT child2.id AS child2_id, parent.id AS parent_id, "
+ "parent.cls AS parent_cls "
"FROM secondary AS secondary_1, "
"parent JOIN child2 ON parent.id = child2.id JOIN secondary AS "
"secondary_2 ON parent.id = secondary_2.left_id JOIN "
# another way to check
eq_(
- select([func.count("*")])
+ select(func.count("*"))
.select_from(q.limit(1).with_labels().subquery())
.scalar(),
1,
Base, Child = self.classes.Base, self.classes.Child
base, child = self.tables.base, self.tables.child
- base_select = select([base]).alias()
+ base_select = select(base).alias()
mapper(
Base,
base_select,
sess.flush()
stmt = (
- select([reports, employees])
+ select(reports, employees)
.select_from(
reports.outerjoin(
employees,
poly = (
select(
- [
- employee.c.id,
- employee.c.type,
- employee.c.name,
- manager.c.manager_data,
- null().label("engineer_info"),
- null().label("manager_id"),
- ]
+ employee.c.id,
+ employee.c.type,
+ employee.c.name,
+ manager.c.manager_data,
+ null().label("engineer_info"),
+ null().label("manager_id"),
)
.select_from(employee.join(manager))
.apply_labels()
.union_all(
select(
- [
- employee.c.id,
- employee.c.type,
- employee.c.name,
- null().label("manager_data"),
- engineer.c.engineer_info,
- engineer.c.manager_id,
- ]
+ employee.c.id,
+ employee.c.type,
+ employee.c.name,
+ null().label("manager_data"),
+ engineer.c.engineer_info,
+ engineer.c.manager_id,
)
.select_from(employee.join(engineer))
.apply_labels()
b_id = Column(ForeignKey("b.id"))
partition = select(
- [
- B,
- func.row_number()
- .over(order_by=B.id, partition_by=B.a_id)
- .label("index"),
- ]
+ B,
+ func.row_number()
+ .over(order_by=B.id, partition_by=B.a_id)
+ .label("index"),
).alias()
partitioned_b = aliased(B, alias=partition)
)
sess.add_all((item1, item2))
sess.flush()
- eq_(select([func.count("*")]).select_from(item_keywords).scalar(), 3)
+ eq_(select(func.count("*")).select_from(item_keywords).scalar(), 3)
sess.delete(item1)
sess.delete(item2)
sess.flush()
- eq_(select([func.count("*")]).select_from(item_keywords).scalar(), 0)
+ eq_(select(func.count("*")).select_from(item_keywords).scalar(), 0)
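
Nearly all of the mechanical changes in this patch apply one rule: the legacy
list argument to :func:`.select` becomes positional arguments. A before/after
sketch, where ``item_keywords`` stands in for any :class:`.Table`::

    from sqlalchemy import func, select

    # legacy 1.x calling style: columns passed as a single list
    stmt = select([func.count("*")]).select_from(item_keywords)

    # 1.4 / 2.0 calling style: columns passed positionally
    stmt = select(func.count("*")).select_from(item_keywords)
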
from sqlalchemy import table
from sqlalchemy import testing
from sqlalchemy import true
-from sqlalchemy.future import select as future_select
from sqlalchemy.orm import backref
from sqlalchemy.orm import create_session
from sqlalchemy.orm import mapper
mapper(User, users)
session = create_session()
+
session.execute(users.insert(), dict(name="Johnny"))
assert len(session.query(User).filter_by(name="Johnny").all()) == 1
),
(lambda Address: {"mapper": Address}, "e2"),
(lambda Address: {"clause": Query([Address])._statement_20()}, "e2"),
- (lambda addresses: {"clause": select([addresses])}, "e2"),
+ (lambda addresses: {"clause": select(addresses)}, "e2"),
(
lambda User, addresses: {
"mapper": User,
- "clause": select([addresses]),
+ "clause": select(addresses),
},
"e1",
),
(
lambda e2, User, addresses: {
"mapper": User,
- "clause": select([addresses]),
+ "clause": select(addresses),
"bind": e2,
},
"e2",
),
(
lambda User, Address: {
- "clause": future_select(1).join_from(User, Address)
+ "clause": select(1).join_from(User, Address)
},
"e1",
),
(
lambda User, Address: {
- "clause": future_select(1).join_from(Address, User)
+ "clause": select(1).join_from(Address, User)
},
"e2",
),
- (
- lambda User: {
- "clause": future_select(1).where(User.name == "ed"),
- },
- "e1",
- ),
- (lambda: {"clause": future_select(1)}, "e3"),
+ (lambda User: {"clause": select(1).where(User.name == "ed")}, "e1",),
+ (lambda: {"clause": select(1)}, "e3"),
(lambda User: {"clause": Query([User])._statement_20()}, "e1"),
(lambda: {"clause": Query([1])._statement_20()}, "e3"),
(
),
(
lambda User: {
- "clause": future_select(1)
- .select_from(User)
- .join(User.addresses)
+ "clause": select(1).select_from(User).join(User.addresses)
},
"e1",
),
lambda Address: {"mapper": inspect(Address), "clause": mock.ANY},
"e2",
),
- (lambda: future_select(1), lambda: {"clause": mock.ANY}, "e3"),
+ (lambda: select(1), lambda: {"clause": mock.ANY}, "e3"),
(
- lambda User, Address: future_select(1).join_from(User, Address),
+ lambda User, Address: select(1).join_from(User, Address),
lambda User: {"clause": mock.ANY, "mapper": inspect(User)},
"e1",
),
(
- lambda User, Address: future_select(1).join_from(Address, User),
+ lambda User, Address: select(1).join_from(Address, User),
lambda Address: {"clause": mock.ANY, "mapper": inspect(Address)},
"e2",
),
(
- lambda User: future_select(1).where(User.name == "ed"),
+ lambda User: select(1).where(User.name == "ed"),
# no mapper for this one because the plugin is not "orm"
lambda User: {"clause": mock.ANY},
"e1",
),
(
- lambda User: future_select(1)
- .select_from(User)
- .where(User.name == "ed"),
+ lambda User: select(1).select_from(User).where(User.name == "ed"),
lambda User: {"clause": mock.ANY, "mapper": inspect(User)},
"e1",
),
(
- lambda User: future_select(User.id),
+ lambda User: select(User.id),
lambda User: {"clause": mock.ANY, "mapper": inspect(User)},
"e1",
),
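
The combinations above assert how bind resolution works for 2.0-style
statements: the left-most ORM entity present in the statement, including
entities referenced by :meth:`.Select.join_from`, selects the engine. A hedged
sketch, assuming the ``User`` and ``Address`` mappings plus two hypothetical
engines::

    from sqlalchemy import select
    from sqlalchemy.orm import Session

    session = Session(binds={User: engine_one, Address: engine_two})

    # per the cases above, the left-most entity of join_from()
    # determines the bind; this statement executes on engine_one
    session.execute(select(1).join_from(User, Address))
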
canary.get_bind(**kw)
return Session.get_bind(self, **kw)
- sess = GetBindSession(e3)
+ sess = GetBindSession(e3, future=True)
sess.bind_mapper(User, e1)
sess.bind_mapper(Address, e2)
def test_bind_selectable_union(self, two_table_fixture):
session, base_class_bind, concrete_sub_bind = two_table_fixture
- stmt = select([self.tables.base_table]).union(
- select([self.tables.concrete_sub_table])
+ stmt = select(self.tables.base_table).union(
+ select(self.tables.concrete_sub_table)
)
is_(session.get_bind(clause=stmt), base_class_bind)
b1 = Bundle("b1", Data.d1, Data.d2)
self.assert_compile(
- select([b1.c.d1, b1.c.d2]), "SELECT data.d1, data.d2 FROM data"
+ select(b1.c.d1, b1.c.d2), "SELECT data.d1, data.d2 FROM data"
)
def test_result(self):
import random
from sqlalchemy import inspect
+from sqlalchemy import select
from sqlalchemy import testing
from sqlalchemy import text
-from sqlalchemy.future import select
-from sqlalchemy.future import select as future_select
from sqlalchemy.orm import aliased
from sqlalchemy.orm import defaultload
from sqlalchemy.orm import defer
]:
eq_(left._generate_cache_key(), right._generate_cache_key())
- def test_future_selects_w_orm_joins(self):
+ def test_selects_w_orm_joins(self):
User, Address, Keyword, Order, Item = self.classes(
"User", "Address", "Keyword", "Order", "Item"
self._run_cache_key_fixture(
lambda: (
- future_select(User).join(User.addresses),
- future_select(User).join(User.orders),
- future_select(User).join(User.addresses).join(User.orders),
- future_select(User).join(Address, User.addresses),
- future_select(User).join(a1, User.addresses),
- future_select(User).join(User.addresses.of_type(a1)),
- future_select(User)
+ select(User).join(User.addresses),
+ select(User).join(User.orders),
+ select(User).join(User.addresses).join(User.orders),
+ select(User).join(Address, User.addresses),
+ select(User).join(a1, User.addresses),
+ select(User).join(User.addresses.of_type(a1)),
+ select(User)
.join(Address, User.addresses)
.join_from(User, Order),
- future_select(User)
+ select(User)
.join(Address, User.addresses)
.join_from(User, User.orders),
- future_select(User.id, Order.id).select_from(
+ select(User.id, Order.id).select_from(
orm_join(User, Order, User.orders)
),
),
with_polymorphic(
Person,
[Manager, Engineer],
- future_select(Person)
+ select(Person)
.outerjoin(Manager)
.outerjoin(Engineer)
.subquery(),
def five():
subq = (
- future_select(Person)
+ select(Person)
.outerjoin(Manager)
.outerjoin(Engineer)
.subquery()
def six():
subq = (
- future_select(Person)
+ select(Person)
.outerjoin(Manager)
.outerjoin(Engineer)
.subquery()
# query.
User, Address = plain_fixture
- s = Session()
+ s = Session(future=True)
def query(names):
stmt = (
sess.delete(u)
sess.flush()
- eq_(select([func.count("*")]).select_from(users).scalar(), 0)
- eq_(select([func.count("*")]).select_from(orders).scalar(), 0)
+ eq_(select(func.count("*")).select_from(users).scalar(), 0)
+ eq_(select(func.count("*")).select_from(orders).scalar(), 0)
def test_delete_unloaded_collections(self):
"""Unloaded collections are still included in a delete-cascade
sess.add(u)
sess.flush()
sess.expunge_all()
- eq_(select([func.count("*")]).select_from(addresses).scalar(), 2)
- eq_(select([func.count("*")]).select_from(users).scalar(), 1)
+ eq_(select(func.count("*")).select_from(addresses).scalar(), 2)
+ eq_(select(func.count("*")).select_from(users).scalar(), 1)
u = sess.query(User).get(u.id)
assert "addresses" not in u.__dict__
sess.delete(u)
sess.flush()
- eq_(select([func.count("*")]).select_from(addresses).scalar(), 0)
- eq_(select([func.count("*")]).select_from(users).scalar(), 0)
+ eq_(select(func.count("*")).select_from(addresses).scalar(), 0)
+ eq_(select(func.count("*")).select_from(users).scalar(), 0)
def test_cascades_onlycollection(self):
"""Cascade only reaches instances that are still part of the
sess.add(u2)
sess.flush()
sess.expunge_all()
- eq_(select([func.count("*")]).select_from(users).scalar(), 1)
- eq_(select([func.count("*")]).select_from(orders).scalar(), 1)
+ eq_(select(func.count("*")).select_from(users).scalar(), 1)
+ eq_(select(func.count("*")).select_from(orders).scalar(), 1)
eq_(
sess.query(User).all(),
[User(name="newuser", orders=[Order(description="someorder")])],
)
sess.add(u)
sess.flush()
- eq_(select([func.count("*")]).select_from(users).scalar(), 1)
- eq_(select([func.count("*")]).select_from(orders).scalar(), 2)
+ eq_(select(func.count("*")).select_from(users).scalar(), 1)
+ eq_(select(func.count("*")).select_from(orders).scalar(), 2)
del u.orders[0]
sess.delete(u)
sess.flush()
- eq_(select([func.count("*")]).select_from(users).scalar(), 0)
- eq_(select([func.count("*")]).select_from(orders).scalar(), 0)
+ eq_(select(func.count("*")).select_from(users).scalar(), 0)
+ eq_(select(func.count("*")).select_from(orders).scalar(), 0)
def test_collection_orphans(self):
User, users, orders, Order = (
sess.add(u)
sess.flush()
- eq_(select([func.count("*")]).select_from(users).scalar(), 1)
- eq_(select([func.count("*")]).select_from(orders).scalar(), 2)
+ eq_(select(func.count("*")).select_from(users).scalar(), 1)
+ eq_(select(func.count("*")).select_from(orders).scalar(), 2)
u.orders[:] = []
sess.flush()
- eq_(select([func.count("*")]).select_from(users).scalar(), 1)
- eq_(select([func.count("*")]).select_from(orders).scalar(), 0)
+ eq_(select(func.count("*")).select_from(users).scalar(), 1)
+ eq_(select(func.count("*")).select_from(orders).scalar(), 0)
class O2MCascadeTest(fixtures.MappedTest):
)
sess.add(u)
sess.flush()
- eq_(select([func.count("*")]).select_from(users).scalar(), 1)
- eq_(select([func.count("*")]).select_from(orders).scalar(), 2)
+ eq_(select(func.count("*")).select_from(users).scalar(), 1)
+ eq_(select(func.count("*")).select_from(orders).scalar(), 2)
del u.orders[0]
sess.delete(u)
sess.flush()
- eq_(select([func.count("*")]).select_from(users).scalar(), 0)
- eq_(select([func.count("*")]).select_from(orders).scalar(), 1)
+ eq_(select(func.count("*")).select_from(users).scalar(), 0)
+ eq_(select(func.count("*")).select_from(orders).scalar(), 1)
class O2OSingleParentTest(_fixtures.FixtureTest):
)
sess = create_session()
- eq_(select([func.count("*")]).select_from(prefs).scalar(), 3)
- eq_(select([func.count("*")]).select_from(extra).scalar(), 3)
+ eq_(select(func.count("*")).select_from(prefs).scalar(), 3)
+ eq_(select(func.count("*")).select_from(extra).scalar(), 3)
jack = sess.query(User).filter_by(name="jack").one()
jack.pref = None
sess.flush()
- eq_(select([func.count("*")]).select_from(prefs).scalar(), 2)
- eq_(select([func.count("*")]).select_from(extra).scalar(), 2)
+ eq_(select(func.count("*")).select_from(prefs).scalar(), 2)
+ eq_(select(func.count("*")).select_from(extra).scalar(), 2)
def test_cascade_on_deleted(self):
"""test a bug introduced by r6711"""
assert p in sess
assert e in sess
sess.flush()
- eq_(select([func.count("*")]).select_from(prefs).scalar(), 2)
- eq_(select([func.count("*")]).select_from(extra).scalar(), 2)
+ eq_(select(func.count("*")).select_from(prefs).scalar(), 2)
+ eq_(select(func.count("*")).select_from(extra).scalar(), 2)
def test_pending_expunge(self):
Pref, User = self.classes.Pref, self.classes.User
a1.bs.remove(b1)
sess.flush()
- eq_(select([func.count("*")]).select_from(atob).scalar(), 0)
- eq_(select([func.count("*")]).select_from(b).scalar(), 0)
- eq_(select([func.count("*")]).select_from(a).scalar(), 1)
+ eq_(select(func.count("*")).select_from(atob).scalar(), 0)
+ eq_(select(func.count("*")).select_from(b).scalar(), 0)
+ eq_(select(func.count("*")).select_from(a).scalar(), 1)
def test_delete_orphan_dynamic(self):
a, A, B, b, atob = (
a1.bs.remove(b1)
sess.flush()
- eq_(select([func.count("*")]).select_from(atob).scalar(), 0)
- eq_(select([func.count("*")]).select_from(b).scalar(), 0)
- eq_(select([func.count("*")]).select_from(a).scalar(), 1)
+ eq_(select(func.count("*")).select_from(atob).scalar(), 0)
+ eq_(select(func.count("*")).select_from(b).scalar(), 0)
+ eq_(select(func.count("*")).select_from(a).scalar(), 1)
def test_delete_orphan_cascades(self):
a, A, c, b, C, B, atob = (
a1.bs.remove(b1)
sess.flush()
- eq_(select([func.count("*")]).select_from(atob).scalar(), 0)
- eq_(select([func.count("*")]).select_from(b).scalar(), 0)
- eq_(select([func.count("*")]).select_from(a).scalar(), 1)
- eq_(select([func.count("*")]).select_from(c).scalar(), 0)
+ eq_(select(func.count("*")).select_from(atob).scalar(), 0)
+ eq_(select(func.count("*")).select_from(b).scalar(), 0)
+ eq_(select(func.count("*")).select_from(a).scalar(), 1)
+ eq_(select(func.count("*")).select_from(c).scalar(), 0)
def test_cascade_delete(self):
a, A, B, b, atob = (
sess.delete(a1)
sess.flush()
- eq_(select([func.count("*")]).select_from(atob).scalar(), 0)
- eq_(select([func.count("*")]).select_from(b).scalar(), 0)
- eq_(select([func.count("*")]).select_from(a).scalar(), 0)
+ eq_(select(func.count("*")).select_from(atob).scalar(), 0)
+ eq_(select(func.count("*")).select_from(b).scalar(), 0)
+ eq_(select(func.count("*")).select_from(a).scalar(), 0)
def test_single_parent_error(self):
a, A, B, b, atob = (
def test_selectable_column_mapped(self):
from sqlalchemy import select
- s = select([self.tables.foo]).alias()
+ s = select(self.tables.foo).alias()
Foo = self.classes.Foo
mapper(Foo, s)
self._run_test([(Foo.b, Foo(b=5), 5), (s.c.b, Foo(b=5), 5)])
from sqlalchemy import String
from sqlalchemy import testing
from sqlalchemy import update
-from sqlalchemy.future import select as future_select
from sqlalchemy.orm import aliased
from sqlalchemy.orm import composite
from sqlalchemy.orm import CompositeProperty
},
)
- def _fixture(self):
+ def _fixture(self, future=False):
Graph, Edge, Point = (
self.classes.Graph,
self.classes.Edge,
self.classes.Point,
)
- sess = Session()
+ sess = Session(future=future)
g = Graph(
id=1,
edges=[
def test_bulk_update_sql(self):
Edge, Point = (self.classes.Edge, self.classes.Point)
- sess = self._fixture()
+ sess = self._fixture(future=True)
e1 = sess.execute(
- future_select(Edge).filter(Edge.start == Point(14, 5))
+ select(Edge).filter(Edge.start == Point(14, 5))
).scalar_one()
eq_(e1.end, Point(2, 7))
def test_bulk_update_evaluate(self):
Edge, Point = (self.classes.Edge, self.classes.Point)
- sess = self._fixture()
+ sess = self._fixture(future=True)
e1 = sess.execute(
- future_select(Edge).filter(Edge.start == Point(14, 5))
+ select(Edge).filter(Edge.start == Point(14, 5))
).scalar_one()
eq_(e1.end, Point(2, 7))
start, end = Edge.start, Edge.end
- stmt = select([start, end]).where(start == Point(3, 4))
+ stmt = select(start, end).where(start == Point(3, 4))
self.assert_compile(
stmt,
"SELECT edges.x1, edges.y1, edges.x2, edges.y2 "
configure_mappers()
self.assert_compile(
- select([Edge]).order_by(Edge.start),
+ select(Edge).order_by(Edge.start),
"SELECT edge.id, edge.x1, edge.y1, edge.x2, edge.y2 FROM edge "
"ORDER BY edge.x1, edge.y1",
)
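
The assertion above relies on the composite comparator expanding
``Edge.start`` into its component columns for ORDER BY. A sketch of the
underlying classical mapping, assuming the ``Point`` class and an
``edges_table`` as in these fixtures::

    from sqlalchemy.orm import composite, mapper

    # Edge.start / Edge.end are composites of two columns each;
    # ordering by Edge.start orders by (x1, y1)
    mapper(
        Edge,
        edges_table,
        properties={
            "start": composite(Point, edges_table.c.x1, edges_table.c.y1),
            "end": composite(Point, edges_table.c.x2, edges_table.c.y2),
        },
    )
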
from sqlalchemy import insert
from sqlalchemy import literal_column
from sqlalchemy import or_
+from sqlalchemy import select
from sqlalchemy import testing
from sqlalchemy import util
-from sqlalchemy.future import select
from sqlalchemy.orm import aliased
from sqlalchemy.orm import column_property
from sqlalchemy.orm import contains_eager
Order, orders = self.classes.Order, self.tables.orders
order_select = sa.select(
- [
- orders.c.id,
- orders.c.user_id,
- orders.c.address_id,
- orders.c.description,
- orders.c.isopen,
- ]
+ orders.c.id,
+ orders.c.user_id,
+ orders.c.address_id,
+ orders.c.description,
+ orders.c.isopen,
).alias()
mapper(
Order,
)
sess = create_session()
- stmt = sa.select([Order]).order_by(Order.id)
+ stmt = sa.select(Order).order_by(Order.id)
o1 = (sess.query(Order).from_statement(stmt).all())[0]
def go():
)
sess = create_session()
- stmt = sa.select([Order]).order_by(Order.id)
+ stmt = sa.select(Order).order_by(Order.id)
o1 = (sess.query(Order).from_statement(stmt).all())[0]
assert_raises_message(
mapper(Order, orders)
sess = create_session()
- stmt = sa.select([Order]).order_by(Order.id)
+ stmt = sa.select(Order).order_by(Order.id)
o1 = (
sess.query(Order)
.from_statement(stmt)
mapper(Order, orders)
sess = create_session()
- stmt = sa.select([Order]).order_by(Order.id)
+ stmt = sa.select(Order).order_by(Order.id)
o1 = (
sess.query(Order)
.from_statement(stmt)
a_id = Column(ForeignKey("a.id"))
A.b_count = deferred(
- select([func.count(1)]).where(A.id == B.a_id).scalar_subquery()
+ select(func.count(1)).where(A.id == B.a_id).scalar_subquery()
)
def test_deferred_autoflushes(self):
Engineer = _poly_fixtures.Engineer
with DeprecatedQueryTest._expect_implicit_subquery():
- p_poly = with_polymorphic(Person, [Engineer], select([Person]))
+ p_poly = with_polymorphic(Person, [Engineer], select(Person))
is_true(
- sa.inspect(p_poly).selectable.compare(select([Person]).subquery())
+ sa.inspect(p_poly).selectable.compare(select(Person).subquery())
)
def test_multiple_adaption(self):
eq_(
testing.db.scalar(
- select([func.count(cast(1, Integer))]).where(
+ select(func.count(cast(1, Integer))).where(
addresses.c.user_id != None
)
), # noqa
eq_(
testing.db.execute(
- select([addresses]).where(addresses.c.user_id != None) # noqa
+ select(addresses).where(addresses.c.user_id != None) # noqa
).fetchall(),
[(a1.id, u1.id, "foo")],
)
sess.flush()
eq_(
testing.db.scalar(
- select([func.count(cast(1, Integer))]).where(
+ select(func.count(cast(1, Integer))).where(
addresses.c.user_id != None
)
), # noqa
sess.flush()
eq_(
testing.db.execute(
- select([addresses]).where(addresses.c.user_id != None) # noqa
+ select(addresses).where(addresses.c.user_id != None) # noqa
).fetchall(),
[(a1.id, u1.id, "foo")],
)
sess.flush()
eq_(
testing.db.execute(
- select([addresses]).where(addresses.c.user_id != None) # noqa
+ select(addresses).where(addresses.c.user_id != None) # noqa
).fetchall(),
[(a2.id, u1.id, "bar")],
)
sess.commit()
eq_(
testing.db.scalar(
- select([func.count("*")]).where(addresses.c.user_id == None)
+ select(func.count("*")).where(addresses.c.user_id == None)
), # noqa
0,
)
eq_(
testing.db.scalar(
- select([func.count("*")]).where(addresses.c.user_id != None)
+ select(func.count("*")).where(addresses.c.user_id != None)
), # noqa
6,
)
if expected:
eq_(
testing.db.scalar(
- select([func.count("*")]).where(
+ select(func.count("*")).where(
addresses.c.user_id == None
) # noqa
),
)
eq_(
testing.db.scalar(
- select([func.count("*")]).where(
+ select(func.count("*")).where(
addresses.c.user_id != None
) # noqa
),
else:
eq_(
testing.db.scalar(
- select([func.count("*")]).select_from(addresses)
+ select(func.count("*")).select_from(addresses)
),
0,
)
mapper(Item, items)
open_mapper = aliased(
- Order, select([orders]).where(orders.c.isopen == 1).alias()
+ Order, select(orders).where(orders.c.isopen == 1).alias()
)
closed_mapper = aliased(
- Order, select([orders]).where(orders.c.isopen == 0).alias()
+ Order, select(orders).where(orders.c.isopen == 0).alias()
)
mapper(
def test_column_property(self):
A = self.classes.A
b_table, a_table = self.tables.b, self.tables.a
- cp = select([func.sum(b_table.c.value)]).where(
+ cp = select(func.sum(b_table.c.value)).where(
b_table.c.a_id == a_table.c.id
)
def test_column_property_desc(self):
A = self.classes.A
b_table, a_table = self.tables.b, self.tables.a
- cp = select([func.sum(b_table.c.value)]).where(
+ cp = select(func.sum(b_table.c.value)).where(
b_table.c.a_id == a_table.c.id
)
A = self.classes.A
b_table, a_table = self.tables.b, self.tables.a
cp = (
- select([func.sum(b_table.c.value)])
+ select(func.sum(b_table.c.value))
.where(b_table.c.a_id == a_table.c.id)
.correlate(a_table)
)
b_table, a_table = self.tables.b, self.tables.a
self._fixture({})
cp = (
- select([func.sum(b_table.c.value)])
+ select(func.sum(b_table.c.value))
.where(b_table.c.a_id == a_table.c.id)
.correlate(a_table)
.scalar_subquery()
b_table, a_table = self.tables.b, self.tables.a
self._fixture({})
cp = (
- select([func.sum(b_table.c.value)])
+ select(func.sum(b_table.c.value))
.where(b_table.c.a_id == a_table.c.id)
.correlate(a_table)
.scalar_subquery()
b_table, a_table = self.tables.b, self.tables.a
self._fixture({})
cp = (
- select([func.sum(b_table.c.value)])
+ select(func.sum(b_table.c.value))
.where(b_table.c.a_id == a_table.c.id)
.correlate(a_table)
.scalar_subquery()
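
The ``cp`` expressions in these fixtures build a correlated scalar subquery
for use as a mapped column attribute. A sketch of how such an expression is
typically wired up, assuming the same ``a``/``b`` table layout and classical
mappings::

    from sqlalchemy import func, select
    from sqlalchemy.orm import column_property, mapper

    mapper(
        A,
        a_table,
        properties={
            # sum of related b.value rows, correlated to each A row
            "b_total": column_property(
                select(func.sum(b_table.c.value))
                .where(b_table.c.a_id == a_table.c.id)
                .correlate(a_table)
                .scalar_subquery()
            )
        },
    )
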
if ondate:
# the more 'relational' way to do this, join on the max date
stuff_view = (
- select([func.max(salias.c.date).label("max_date")])
+ select(func.max(salias.c.date).label("max_date"))
.where(salias.c.user_id == users.c.id)
.correlate(users)
)
# perform better in some
# cases - subquery does a limit with order by DESC, join on the id
stuff_view = (
- select([salias.c.id])
+ select(salias.c.id)
.where(salias.c.user_id == users.c.id)
.correlate(users)
.order_by(salias.c.date.desc())
sess.close()
sess = create_session()
- d1 = sess.query(Data).from_statement(select([Data.id])).first()
+ d1 = sess.query(Data).from_statement(select(Data.id)).first()
# cols not present in the row are implicitly expired
def go():
sess = create_session()
d1 = (
sess.query(Data)
- .from_statement(select([Data.id]))
+ .from_statement(select(Data.id))
.options(undefer(Data.data))
.first()
)
from sqlalchemy import union
from sqlalchemy import util
from sqlalchemy.engine import default
-from sqlalchemy.future import select as future_select
from sqlalchemy.orm import aliased
from sqlalchemy.orm import backref
from sqlalchemy.orm import clear_mappers
query = select(
[func.count(addresses.c.id)], addresses.c.user_id == users.c.id
).scalar_subquery()
- query = select([users.c.name.label("users_name"), query])
+ query = select(users.c.name.label("users_name"), query)
self.assert_compile(
query, self.query_correlated, dialect=default.DefaultDialect()
)
.correlate(users)
.scalar_subquery()
)
- query = select([users.c.name.label("users_name"), query])
+ query = select(users.c.name.label("users_name"), query)
self.assert_compile(
query, self.query_correlated, dialect=default.DefaultDialect()
)
.correlate(None)
.scalar_subquery()
)
- query = select([users.c.name.label("users_name"), query])
+ query = select(users.c.name.label("users_name"), query)
self.assert_compile(
query, self.query_not_correlated, dialect=default.DefaultDialect()
)
def test_correlate_to_union_newstyle(self):
User = self.classes.User
- q = future_select(User).apply_labels()
+ q = select(User).apply_labels()
- q = future_select(User).union(q).apply_labels().subquery()
+ q = select(User).union(q).apply_labels().subquery()
u_alias = aliased(User)
raw_subq = exists().where(u_alias.id > q.c[0])
self.assert_compile(
- future_select(q, raw_subq).apply_labels(),
+ select(q, raw_subq).apply_labels(),
"SELECT anon_1.users_id AS anon_1_users_id, "
"anon_1.users_name AS anon_1_users_name, "
"EXISTS (SELECT * FROM users AS users_1 "
q3.all(), [(7, 1), (8, 1), (9, 1), (10, 1)],
)
- q3 = future_select(q2)
+ q3 = select(q2)
eq_(sess.execute(q3).fetchall(), [(7, 1), (8, 1), (9, 1), (10, 1)])
from sqlalchemy.sql import column
c1, c2 = column("c1"), column("c2")
- q1 = future_select(c1, c2).where(c1 == "dog")
- q2 = future_select(c1, c2).where(c1 == "cat")
+ q1 = select(c1, c2).where(c1 == "dog")
+ q2 = select(c1, c2).where(c1 == "cat")
subq = q1.union(q2).subquery()
- q3 = future_select(subq).apply_labels()
+ q3 = select(subq).apply_labels()
self.assert_compile(
q3.order_by(subq.c.c1),
from sqlalchemy.sql import column
t1 = table("t1", column("c1"), column("c2"))
- stmt = (
- future_select(t1.c.c1, t1.c.c2)
- .where(t1.c.c1 == "dog")
- .apply_labels()
- )
+ stmt = select(t1.c.c1, t1.c.c2).where(t1.c.c1 == "dog").apply_labels()
subq1 = stmt.subquery("anon_2").select().apply_labels()
subq2 = subq1.subquery("anon_1")
- q1 = future_select(subq2).apply_labels()
+ q1 = select(subq2).apply_labels()
self.assert_compile(
# as in test_anonymous_expression_from_self_twice_newstyle_wlabels,
from sqlalchemy.sql import column
c1, c2 = column("c1"), column("c2")
- subq = future_select(c1, c2).where(c1 == "dog").subquery()
+ subq = select(c1, c2).where(c1 == "dog").subquery()
- subq2 = future_select(subq).apply_labels().subquery()
+ subq2 = select(subq).apply_labels().subquery()
- stmt = future_select(subq2).apply_labels()
+ stmt = select(subq2).apply_labels()
self.assert_compile(
# because of the apply labels we don't have simple keys on
from sqlalchemy.sql import column
c1, c2 = column("c1"), column("c2")
- subq = future_select(c1, c2).where(c1 == "dog").subquery()
+ subq = select(c1, c2).where(c1 == "dog").subquery()
- subq2 = future_select(subq).subquery()
+ subq2 = select(subq).subquery()
- stmt = future_select(subq2)
+ stmt = select(subq2)
self.assert_compile(
# without labels we can access .c1 but the statement will not
def test_anonymous_labeled_expression_newstyle(self):
c1, c2 = column("c1"), column("c2")
- q1 = future_select(c1.label("foo"), c2.label("bar")).where(c1 == "dog")
- q2 = future_select(c1.label("foo"), c2.label("bar")).where(c1 == "cat")
+ q1 = select(c1.label("foo"), c2.label("bar")).where(c1 == "dog")
+ q2 = select(c1.label("foo"), c2.label("bar")).where(c1 == "cat")
subq = union(q1, q2).subquery()
- q3 = future_select(subq).apply_labels()
+ q3 = select(subq).apply_labels()
self.assert_compile(
q3.order_by(subq.c.foo),
"SELECT anon_1.foo AS anon_1_foo, anon_1.bar AS anon_1_bar FROM "
sess = create_session()
subq = (
- select([func.count()])
+ select(func.count())
.where(User.id == Address.user_id)
.correlate(users)
.label("count")
# same thing without the correlate, as it should
# not be needed
subq = (
- select([func.count()])
+ select(func.count())
.where(User.id == Address.user_id)
.label("count")
)
def test_from_self_internal_literals_newstyle(self):
Order = self.classes.Order
- stmt = future_select(
+ stmt = select(
Order.id, Order.description, literal_column("'q'").label("foo")
).where(Order.description == "order 3")
subq = aliased(Order, stmt.apply_labels().subquery())
- stmt = future_select(subq).apply_labels()
+ stmt = select(subq).apply_labels()
self.assert_compile(
stmt,
"SELECT anon_1.orders_id AS "
# TODO: figure out why group_by(users) doesn't work here
count = func.count(addresses.c.id).label("count")
s = (
- select([users, count])
+ select(users, count)
.select_from(users.outerjoin(addresses))
.group_by(*[c for c in users.c])
.order_by(User.id)
sess = create_session()
q = sess.query(User.id, User.name)
- stmt = select([users]).order_by(users.c.id)
+ stmt = select(users).order_by(users.c.id)
q = q.from_statement(stmt)
eq_(q.all(), [(7, "jack"), (8, "ed"), (9, "fred"), (10, "chuck")])
sess = create_session()
not_users = table("users", column("id"), column("name"))
- ua = aliased(User, select([not_users]).alias(), adapt_on_names=True)
+ ua = aliased(User, select(not_users).alias(), adapt_on_names=True)
q = sess.query(User.name).select_entity_from(ua).order_by(User.name)
self.assert_compile(
addresses,
properties={
"username": column_property(
- select([User.fullname])
+ select(User.fullname)
.where(User.id == addresses.c.user_id)
.label("y")
)
eq_(
sess.query(User)
.select_entity_from(
- select([users]).order_by(User.id).offset(2).alias()
+ select(users).order_by(User.id).offset(2).alias()
)
.join(Order, User.id == Order.user_id)
.all(),
mp = mapper(Parent, parent)
mapper(Child, child)
- derived = select([child]).alias()
+ derived = select(child).alias()
npc = aliased(Child, derived)
cls.npc = npc
cls.derived = derived
stmt = s.query(Person).subquery()
subq = (
- select([Book.book_id])
+ select(Book.book_id)
.where(Person.people_id == Book.book_owner_id)
.subquery()
.lateral()
stmt = s.query(Person).subquery()
subq = (
- select([Book.book_id])
+ select(Book.book_id)
.correlate(Person)
.where(Person.people_id == Book.book_owner_id)
.subquery()
def test_cols_round_trip(self, plain_fixture):
User, Address = plain_fixture
- s = Session()
+ s = Session(future=True)
# note this does a traversal + _clone of the InstrumentedAttribute
# for the first time ever
def test_entity_round_trip(self, plain_fixture):
User, Address = plain_fixture
- s = Session()
+ s = Session(future=True)
def query(names):
stmt = lambda_stmt(
def test_subqueryload_internal_lambda(self, plain_fixture):
User, Address = plain_fixture
- s = Session()
+ s = Session(future=True)
def query(names):
stmt = (
def test_subqueryload_external_lambda_caveats(self, plain_fixture):
User, Address = plain_fixture
- s = Session()
+ s = Session(future=True)
def query(names):
stmt = lambda_stmt(
def test_does_filter_aliasing_work(self, plain_fixture):
User, Address = plain_fixture
- s = Session()
+ s = Session(future=True)
# aliased=True is to be deprecated; other filter lambdas
# that come into effect include polymorphic filtering.
def test_join_entity_arg(self, plain_fixture, test_case):
User, Address = plain_fixture
- s = Session()
+ s = Session(future=True)
stmt = testing.resolve_lambda(test_case, **locals())
self.assert_compile(
Company = self.classes.Company
Manager = self.classes.Manager
- s = Session()
+ s = Session(future=True)
q = s.query(Company).join(lambda: Manager, lambda: Company.employees)
def test_update(self):
User, Address = self.classes("User", "Address")
- s = Session()
+ s = Session(future=True)
def go(ids, values):
stmt = lambda_stmt(lambda: update(User).where(User.id.in_(ids)))
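
``lambda_stmt()`` builds a statement whose Python lambdas are analyzed once
and cached, with closure variables such as ``ids`` tracked as bound parameters
on each invocation. A minimal sketch against the ``User`` mapping from these
fixtures::

    from sqlalchemy import lambda_stmt, update

    def bulk_rename(session, ids, new_name):
        # the lambda body is interpreted once; "ids" and "new_name"
        # become per-call bound parameter values
        stmt = lambda_stmt(lambda: update(User).where(User.id.in_(ids)))
        stmt += lambda s: s.values(name=new_name)
        session.execute(stmt)
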
mapper(Item, items)
open_mapper = aliased(
- Order, select([orders]).where(orders.c.isopen == 1).alias()
+ Order, select(orders).where(orders.c.isopen == 1).alias()
)
closed_mapper = aliased(
- Order, select([orders]).where(orders.c.isopen == 0).alias()
+ Order, select(orders).where(orders.c.isopen == 0).alias()
)
mapper(
mapper(Stuff, stuff)
stuff_view = (
- sa.select([stuff.c.id])
+ sa.select(stuff.c.id)
.where(stuff.c.user_id == user_t.c.id)
.correlate(user_t)
.order_by(sa.desc(stuff.c.date))
s = Session()
q = s.query(User.id, User.name)
- stmt = select([User.id])
+ stmt = select(User.id)
assert_raises_message(
exc.NoSuchColumnError,
users.insert().values({User.foobar: "name1"}).execute()
eq_(
- sa.select([User.foobar])
+ sa.select(User.foobar)
.where(User.foobar == "name1")
.execute()
.fetchall(),
users.update().values({User.foobar: User.foobar + "foo"}).execute()
eq_(
- sa.select([User.foobar])
+ sa.select(User.foobar)
.where(User.foobar == "name1foo")
.execute()
.fetchall(),
def test_no_pks_1(self):
User, users = self.classes.User, self.tables.users
- s = sa.select([users.c.name]).alias("foo")
+ s = sa.select(users.c.name).alias("foo")
assert_raises(sa.exc.ArgumentError, mapper, User, s)
def test_no_pks_2(self):
User, users = self.classes.User, self.tables.users
- s = sa.select([users.c.name]).alias()
+ s = sa.select(users.c.name).alias()
assert_raises(sa.exc.ArgumentError, mapper, User, s)
def test_reconfigure_on_other_mapper(self):
a = (
s.query(Address)
.from_statement(
- sa.select([addresses.c.id, addresses.c.user_id]).order_by(
+ sa.select(addresses.c.id, addresses.c.user_id).order_by(
addresses.c.id
)
)
assert User.id.property.columns[0] is users.c.id
assert User.name.property.columns[0] is users.c.name
expr = User.name + "name"
- expr2 = sa.select([User.name, users.c.id])
+ expr2 = sa.select(User.name, users.c.id)
m.add_property("x", column_property(expr))
m.add_property("y", column_property(expr2.scalar_subquery()))
sess.add(a)
sess.flush()
- eq_(select([func.count("*")]).select_from(addresses).scalar(), 6)
- eq_(select([func.count("*")]).select_from(email_bounces).scalar(), 5)
+ eq_(select(func.count("*")).select_from(addresses).scalar(), 6)
+ eq_(select(func.count("*")).select_from(email_bounces).scalar(), 5)
def test_mapping_to_outerjoin(self):
"""Mapping to an outer join with a nullable composite primary key."""
h1.h1s.append(H1())
s.flush()
- eq_(select([func.count("*")]).select_from(ht1).scalar(), 4)
+ eq_(select(func.count("*")).select_from(ht1).scalar(), 4)
h6 = H6()
h6.h1a = h1
a1 = u1.addresses[0]
eq_(
- sa.select([addresses.c.username]).execute().fetchall(),
+ sa.select(addresses.c.username).execute().fetchall(),
[("jack",), ("jack",)],
)
sess.flush()
assert u1.addresses[0].username == "ed"
eq_(
- sa.select([addresses.c.username]).execute().fetchall(),
+ sa.select(addresses.c.username).execute().fetchall(),
[("ed",), ("ed",)],
)
eq_(a1.username, None)
eq_(
- sa.select([addresses.c.username]).execute().fetchall(),
+ sa.select(addresses.c.username).execute().fetchall(),
[(None,), (None,)],
)
eq_(a1.username, "ed")
eq_(a2.username, "ed")
eq_(
- sa.select([addresses.c.username]).execute().fetchall(),
+ sa.select(addresses.c.username).execute().fetchall(),
[("ed",), ("ed",)],
)
eq_(a1.username, "jack")
eq_(a2.username, "jack")
eq_(
- sa.select([addresses.c.username]).execute().fetchall(),
+ sa.select(addresses.c.username).execute().fetchall(),
[("jack",), ("jack",)],
)
from sqlalchemy import util
from sqlalchemy.engine import default
from sqlalchemy.ext.compiler import compiles
-from sqlalchemy.future import select as future_select
from sqlalchemy.orm import aliased
from sqlalchemy.orm import attributes
from sqlalchemy.orm import backref
(lambda s, users: s.query(users),),
(lambda s, User: s.query(User.id, User.name),),
(lambda s, users: s.query(users.c.id, users.c.name),),
- (lambda s, users: future_select(users),),
- (lambda s, User: future_select(User.id, User.name),),
- (lambda s, users: future_select(users.c.id, users.c.name),),
+ (lambda s, users: select(users),),
+ (lambda s, User: select(User.id, User.name),),
+ (lambda s, users: select(users.c.id, users.c.name),),
)
def test_modern_tuple_future(self, test_case):
# check we are not getting a LegacyRow back
mapper(User, users)
- s = Session()
+ s = Session(future=True)
q = testing.resolve_lambda(test_case, **locals())
assert "jack" in row
@testing.combinations(
- (lambda s, users: select([users]),),
- (lambda s, User: select([User.id, User.name]),),
- (lambda s, users: select([users.c.id, users.c.name]),),
+ (lambda s, users: select(users),),
+ (lambda s, User: select(User.id, User.name),),
+ (lambda s, users: select(users.c.id, users.c.name),),
)
def test_legacy_tuple_old_select(self, test_case):
q = testing.resolve_lambda(test_case, **locals())
row = s.execute(q.order_by(User.id)).first()
+
+ # old style row
assert "jack" not in row
assert "jack" in tuple(row)
+ row = s.execute(q.order_by(User.id), future=True).first()
+
+ # new style row - exact membership behavior w/ future=True is
+ # still undecided
+ assert "jack" in row
+ assert "jack" in tuple(row)
+
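
These assertions capture the membership difference between the legacy and new
result rows: the legacy row's ``__contains__`` tests keys, while the
named-tuple-like row tests values (as the comment notes, the final
``future=True`` behavior here is still being settled). Illustrated against a
row for a user named "jack"::

    legacy_row = s.execute(stmt).first()
    assert "jack" not in legacy_row        # keys, not values
    assert "jack" in tuple(legacy_row)     # values via plain tuple

    future_row = s.execute(stmt, future=True).first()
    assert "jack" in future_row            # tuple-like: tests values
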
def test_entity_mapping_access(self):
User, users = self.classes.User, self.tables.users
Address, addresses = self.classes.Address, self.tables.addresses
return "max(id)"
# assert that there is no "AS max_" or any label of any kind.
- eq_(str(select([not_named_max()])), "SELECT max(id)")
+ eq_(str(select(not_named_max())), "SELECT max(id)")
# ColumnElement still handles it by applying label()
q = sess.query(not_named_max()).select_from(users)
q1 = s.query(User).filter(User.name == "ed")
self.assert_compile(
- select([q1.with_labels().subquery()]),
+ select(q1.with_labels().subquery()),
"SELECT anon_1.users_id, anon_1.users_name FROM "
"(SELECT users.id AS users_id, "
"users.name AS users_name "
q1 = s.query(User.id, User.name).group_by(User.name)
self.assert_compile(
- select([q1.with_labels().subquery()]),
+ select(q1.with_labels().subquery()),
"SELECT anon_1.users_id, anon_1.users_name FROM "
"(SELECT users.id AS users_id, "
"users.name AS users_name FROM users GROUP BY users.name) "
# test append something to group_by
self.assert_compile(
- select([q1.group_by(User.id).with_labels().subquery()]),
+ select(q1.group_by(User.id).with_labels().subquery()),
"SELECT anon_1.users_id, anon_1.users_name FROM "
"(SELECT users.id AS users_id, "
"users.name AS users_name FROM users "
# test cancellation by using None, replacement with something else
self.assert_compile(
select(
- [q1.group_by(None).group_by(User.id).with_labels().subquery()]
+ q1.group_by(None).group_by(User.id).with_labels().subquery()
),
"SELECT anon_1.users_id, anon_1.users_name FROM "
"(SELECT users.id AS users_id, "
# test cancellation by using None, replacement with nothing
self.assert_compile(
- select([q1.group_by(None).with_labels().subquery()]),
+ select(q1.group_by(None).with_labels().subquery()),
"SELECT anon_1.users_id, anon_1.users_name FROM "
"(SELECT users.id AS users_id, "
"users.name AS users_name FROM users) AS anon_1",
q1 = s.query(User.id, User.name).order_by(User.name)
self.assert_compile(
- select([q1.with_labels().subquery()]),
+ select(q1.with_labels().subquery()),
"SELECT anon_1.users_id, anon_1.users_name FROM "
"(SELECT users.id AS users_id, "
"users.name AS users_name FROM users ORDER BY users.name) "
# test append something to order_by
self.assert_compile(
- select([q1.order_by(User.id).with_labels().subquery()]),
+ select(q1.order_by(User.id).with_labels().subquery()),
"SELECT anon_1.users_id, anon_1.users_name FROM "
"(SELECT users.id AS users_id, "
"users.name AS users_name FROM users "
# test cancellation by using None, replacement with something else
self.assert_compile(
select(
- [q1.order_by(None).order_by(User.id).with_labels().subquery()]
+ q1.order_by(None).order_by(User.id).with_labels().subquery()
),
"SELECT anon_1.users_id, anon_1.users_name FROM "
"(SELECT users.id AS users_id, "
# test cancellation by using None, replacement with nothing
self.assert_compile(
- select([q1.order_by(None).with_labels().subquery()]),
+ select(q1.order_by(None).with_labels().subquery()),
"SELECT anon_1.users_id, anon_1.users_name FROM "
"(SELECT users.id AS users_id, "
"users.name AS users_name FROM users) AS anon_1",
# test cancellation by using None, replacement with something else
self.assert_compile(
select(
- [q1.order_by(False).order_by(User.id).with_labels().subquery()]
+ q1.order_by(False).order_by(User.id).with_labels().subquery()
),
"SELECT anon_1.users_id, anon_1.users_name FROM "
"(SELECT users.id AS users_id, "
# test cancellation by using None, replacement with nothing
self.assert_compile(
- select([q1.order_by(False).with_labels().subquery()]),
+ select(q1.order_by(False).with_labels().subquery()),
"SELECT anon_1.users_id, anon_1.users_name FROM "
"(SELECT users.id AS users_id, "
"users.name AS users_name FROM users) AS anon_1",
User, Address = self.classes("User", "Address")
users, addresses = self.tables("users", "addresses")
stmt = (
- select([func.max(addresses.c.email_address)])
+ select(func.max(addresses.c.email_address))
.where(addresses.c.user_id == users.c.id)
.correlate(users)
)
s = create_session()
eq_(
- s.execute(future_select(func.count()).select_from(User)).scalar(),
- 4,
+ s.execute(select(func.count()).select_from(User)).scalar(), 4,
)
eq_(
s.execute(
- future_select(func.count()).filter(User.name.endswith("ed"))
+ select(func.count()).filter(User.name.endswith("ed"))
).scalar(),
2,
)
s = create_session()
- stmt = future_select(User, Address).join(Address, true())
+ stmt = select(User, Address).join(Address, true())
- stmt = future_select(func.count()).select_from(stmt.subquery())
+ stmt = select(func.count()).select_from(stmt.subquery())
eq_(s.scalar(stmt), 20) # cartesian product
- stmt = future_select(User, Address).join(Address)
+ stmt = select(User, Address).join(Address)
- stmt = future_select(func.count()).select_from(stmt.subquery())
+ stmt = select(func.count()).select_from(stmt.subquery())
eq_(s.scalar(stmt), 5)
def test_nested(self):
s = create_session()
- stmt = future_select(User, Address).join(Address, true()).limit(2)
+ stmt = select(User, Address).join(Address, true()).limit(2)
eq_(
- s.scalar(future_select(func.count()).select_from(stmt.subquery())),
- 2,
+ s.scalar(select(func.count()).select_from(stmt.subquery())), 2,
)
- stmt = future_select(User, Address).join(Address, true()).limit(100)
+ stmt = select(User, Address).join(Address, true()).limit(100)
eq_(
- s.scalar(future_select(func.count()).select_from(stmt.subquery())),
- 20,
+ s.scalar(select(func.count()).select_from(stmt.subquery())), 20,
)
- stmt = future_select(User, Address).join(Address).limit(100)
+ stmt = select(User, Address).join(Address).limit(100)
eq_(
- s.scalar(future_select(func.count()).select_from(stmt.subquery())),
- 5,
+ s.scalar(select(func.count()).select_from(stmt.subquery())), 5,
)
def test_cols(self):
s = create_session()
- stmt = future_select(func.count(distinct(User.name)))
+ stmt = select(func.count(distinct(User.name)))
eq_(
- s.scalar(future_select(func.count()).select_from(stmt.subquery())),
- 1,
+ s.scalar(select(func.count()).select_from(stmt.subquery())), 1,
)
- stmt = future_select(func.count(distinct(User.name))).distinct()
+ stmt = select(func.count(distinct(User.name))).distinct()
eq_(
- s.scalar(future_select(func.count()).select_from(stmt.subquery())),
- 1,
+ s.scalar(select(func.count()).select_from(stmt.subquery())), 1,
)
- stmt = future_select(User.name)
+ stmt = select(User.name)
eq_(
- s.scalar(future_select(func.count()).select_from(stmt.subquery())),
- 4,
+ s.scalar(select(func.count()).select_from(stmt.subquery())), 4,
)
- stmt = future_select(User.name, Address).join(Address, true())
+ stmt = select(User.name, Address).join(Address, true())
eq_(
- s.scalar(future_select(func.count()).select_from(stmt.subquery())),
- 20,
+ s.scalar(select(func.count()).select_from(stmt.subquery())), 20,
)
- stmt = future_select(Address.user_id)
+ stmt = select(Address.user_id)
eq_(
- s.scalar(future_select(func.count()).select_from(stmt.subquery())),
- 5,
+ s.scalar(select(func.count()).select_from(stmt.subquery())), 5,
)
stmt = stmt.distinct()
eq_(
- s.scalar(future_select(func.count()).select_from(stmt.subquery())),
- 3,
+ s.scalar(select(func.count()).select_from(stmt.subquery())), 3,
)
eq_(
s.query(User)
.from_statement(
- select([column("id"), column("name")])
+ select(column("id"), column("name"))
.select_from(table("users"))
.order_by("id")
)
mapper.add_property(
"score",
column_property(
- select([func.sum(Address.id)])
+ select(func.sum(Address.id))
.where(Address.user_id == User.id)
.scalar_subquery()
),
def test_join_targets_o2m_selfref(self):
joincond = self._join_fixture_o2m_selfref()
- left = select([joincond.parent_persist_selectable]).alias("pj")
+ left = select(joincond.parent_persist_selectable).alias("pj")
pj, sj, sec, adapter, ds = joincond.join_targets(
left, joincond.child_persist_selectable, True
)
self.assert_compile(pj, "pj.id = selfref.sid")
self.assert_compile(pj, "pj.id = selfref.sid")
- right = select([joincond.child_persist_selectable]).alias("pj")
+ right = select(joincond.child_persist_selectable).alias("pj")
pj, sj, sec, adapter, ds = joincond.join_targets(
joincond.parent_persist_selectable, right, True
)
def test_join_targets_o2m_left_aliased(self):
joincond = self._join_fixture_o2m()
- left = select([joincond.parent_persist_selectable]).alias("pj")
+ left = select(joincond.parent_persist_selectable).alias("pj")
pj, sj, sec, adapter, ds = joincond.join_targets(
left, joincond.child_persist_selectable, True
)
def test_join_targets_o2m_right_aliased(self):
joincond = self._join_fixture_o2m()
- right = select([joincond.child_persist_selectable]).alias("pj")
+ right = select(joincond.child_persist_selectable).alias("pj")
pj, sj, sec, adapter, ds = joincond.join_targets(
joincond.parent_persist_selectable, right, True
)
def test_join_targets_o2m_composite_selfref(self):
joincond = self._join_fixture_o2m_composite_selfref()
- right = select([joincond.child_persist_selectable]).alias("pj")
+ right = select(joincond.child_persist_selectable).alias("pj")
pj, sj, sec, adapter, ds = joincond.join_targets(
joincond.parent_persist_selectable, right, True
)
def test_join_targets_m2o_composite_selfref(self):
joincond = self._join_fixture_m2o_composite_selfref()
- right = select([joincond.child_persist_selectable]).alias("pj")
+ right = select(joincond.child_persist_selectable).alias("pj")
pj, sj, sec, adapter, ds = joincond.join_targets(
joincond.parent_persist_selectable, right, True
)
sess.add(a)
sess.flush()
- eq_(select([func.count("*")]).select_from(t3).scalar(), 2)
+ eq_(select(func.count("*")).select_from(t3).scalar(), 2)
a.t2s.remove(c)
sess.flush()
- eq_(select([func.count("*")]).select_from(t3).scalar(), 1)
+ eq_(select(func.count("*")).select_from(t3).scalar(), 1)
class CustomOperatorTest(fixtures.MappedTest, AssertsCompiledSQL):
a, b = cls.tables("a", "b")
secondary = (
- select([a.c.id.label("aid"), b])
+ select(a.c.id.label("aid"), b)
.select_from(a.join(b, a.c.b_ids.like("%" + b.c.id + "%")))
.alias()
)
def test_no_tables(self):
Subset = self.classes.Subset
- selectable = select([column("x"), column("y"), column("z")]).alias()
+ selectable = select(column("x"), column("y"), column("z")).alias()
mapper(Subset, selectable, primary_key=[selectable.c.x])
self.assert_compile(
def test_no_table_needs_pl(self):
Subset = self.classes.Subset
- selectable = select([column("x"), column("y"), column("z")]).alias()
+ selectable = select(column("x"), column("y"), column("z")).alias()
assert_raises_message(
sa.exc.ArgumentError,
"could not assemble any primary key columns",
def test_no_selects(self):
Subset, common = self.classes.Subset, self.tables.common
- subset_select = select([common.c.id, common.c.data])
+ subset_select = select(common.c.id, common.c.data)
assert_raises(sa.exc.ArgumentError, mapper, Subset, subset_select)
def test_basic(self):
Subset, common = self.classes.Subset, self.tables.common
- subset_select = select([common.c.id, common.c.data]).alias()
+ subset_select = select(common.c.id, common.c.data).alias()
mapper(Subset, subset_select)
sess = Session(bind=testing.db)
sess.add(Subset(data=1))
mapper(Item, items)
open_mapper = aliased(
- Order, select([orders]).where(orders.c.isopen == 1).alias()
+ Order, select(orders).where(orders.c.isopen == 1).alias()
)
closed_mapper = aliased(
- Order, select([orders]).where(orders.c.isopen == 0).alias()
+ Order, select(orders).where(orders.c.isopen == 0).alias()
)
mapper(
sess.execute(users.insert(), {"id": 9, "name": "u9"})
eq_(
sess.execute(
- sa.select([users.c.id]).order_by(users.c.id)
+ sa.select(users.c.id).order_by(users.c.id)
).fetchall(),
[(7,), (8,), (9,)],
)
is_(s._transaction, None)
- s.execute(select([1]))
+ s.execute(select(1))
is_not_(s._transaction, None)
s.commit()
is_(s._transaction, None)
- s.execute(select([1]))
+ s.execute(select(1))
is_not_(s._transaction, None)
s.close()
is_(s._transaction, None)
- s.execute(select([1]))
+ s.execute(select(1))
is_not_(s._transaction, None)
s.close()
mapper(Item, items)
open_mapper = aliased(
- Order, select([orders]).where(orders.c.isopen == 1).alias()
+ Order, select(orders).where(orders.c.isopen == 1).alias()
)
closed_mapper = aliased(
- Order, select([orders]).where(orders.c.isopen == 0).alias()
+ Order, select(orders).where(orders.c.isopen == 0).alias()
)
mapper(
if the original Query has a from_self() present, it needs to create
.subquery() in terms of the Query class, as a from_self() selectable
doesn't work correctly with the future select. So it has
- to create a Query object now that it gets only a future_select.
+ to create a Query object now that it gets only a select.
neutron is currently dependent on this use case, which means others
are too.
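
As the docstring notes, ``from_self()`` has no direct equivalent on the future
:func:`.select`; the modern spelling selects from an explicit aliased subquery
instead. Roughly, and assuming the ``User`` mapping::

    from sqlalchemy import select
    from sqlalchemy.orm import aliased

    # legacy: session.query(User).filter(...).from_self(User.name)
    subq = select(User).where(User.name.like("%ed%")).subquery()
    ua = aliased(User, subq)
    stmt = select(ua.name)
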
sess.commit()
sess.close()
engine2.dispose()
- eq_(select([func.count("*")]).select_from(users).scalar(), 1)
- eq_(select([func.count("*")]).select_from(addresses).scalar(), 1)
+ eq_(select(func.count("*")).select_from(users).scalar(), 1)
+ eq_(select(func.count("*")).select_from(addresses).scalar(), 1)
@testing.requires.independent_connections
def test_invalidate(self):
sess.add(to_flush.pop())
sess.commit()
eq_(x, [2])
- eq_(sess.scalar(select([func.count(users.c.id)])), 3)
+ eq_(sess.scalar(select(func.count(users.c.id))), 3)
def test_continue_flushing_guard(self):
users, User = self.tables.users, self.classes.User
u1 = s.query(User).filter_by(name="ed").one()
assert u1_state not in s.identity_map.all_states()
- eq_(s.scalar(select([func.count("*")]).select_from(users)), 1)
+ eq_(s.scalar(select(func.count("*")).select_from(users)), 1)
s.delete(u1)
s.flush()
- eq_(s.scalar(select([func.count("*")]).select_from(users)), 0)
+ eq_(s.scalar(select(func.count("*")).select_from(users)), 0)
s.commit()
def test_trans_deleted_cleared_on_rollback(self):
session.flush()
p_count = (
- select([func.count("*")])
+ select(func.count("*"))
.where(people.c.person == "im the key")
.scalar()
)
eq_(p_count, 1)
eq_(
- select([func.count("*")])
+ select(func.count("*"))
.where(peoplesites.c.person == "im the key")
.scalar(),
1,
def test_insert(self):
User = self.classes.User
- u = User(name="test", counter=sa.select([5]).scalar_subquery())
+ u = User(name="test", counter=sa.select(5).scalar_subquery())
session = create_session()
session.add(u)
session.flush()
session.expunge_all()
- eq_(select([func.count("*")]).select_from(myothertable).scalar(), 4)
+ eq_(select(func.count("*")).select_from(myothertable).scalar(), 4)
mc = session.query(MyClass).get(mc.id)
session.delete(mc)
session.flush()
- eq_(select([func.count("*")]).select_from(mytable).scalar(), 0)
- eq_(select([func.count("*")]).select_from(myothertable).scalar(), 0)
+ eq_(select(func.count("*")).select_from(mytable).scalar(), 0)
+ eq_(select(func.count("*")).select_from(myothertable).scalar(), 0)
@testing.emits_warning(
r".*'passive_deletes' is normally configured on one-to-many"
session.add(mco)
session.flush()
- eq_(select([func.count("*")]).select_from(mytable).scalar(), 1)
- eq_(select([func.count("*")]).select_from(myothertable).scalar(), 1)
+ eq_(session.scalar(select(func.count("*")).select_from(mytable)), 1)
+ eq_(
+ session.scalar(select(func.count("*")).select_from(myothertable)),
+ 1,
+ )
session.expire(mco, ["myclass"])
session.delete(mco)
session.flush()
# mytable wasn't deleted, is the point.
- eq_(select([func.count("*")]).select_from(mytable).scalar(), 1)
- eq_(select([func.count("*")]).select_from(myothertable).scalar(), 0)
+ eq_(session.scalar(select(func.count("*")).select_from(mytable)), 1)
+ eq_(
+ session.scalar(select(func.count("*")).select_from(myothertable)),
+ 0,
+ )
def test_aaa_m2o_emits_warning(self):
myothertable, MyClass, MyOtherClass, mytable = (
session.flush()
session.expunge_all()
- eq_(select([func.count("*")]).select_from(myothertable).scalar(), 4)
+ eq_(select(func.count("*")).select_from(myothertable).scalar(), 4)
mc = session.query(MyClass).get(mc.id)
session.delete(mc)
assert_raises(sa.exc.DBAPIError, session.flush)
session.flush()
session.expunge_all()
- eq_(select([func.count("*")]).select_from(myothertable).scalar(), 1)
+ eq_(select(func.count("*")).select_from(myothertable).scalar(), 1)
mc = session.query(MyClass).get(mc.id)
session.delete(mc)
u = session.query(User).get(u.id)
session.delete(u)
session.flush()
- eq_(select([func.count("*")]).select_from(users).scalar(), 0)
- eq_(select([func.count("*")]).select_from(addresses).scalar(), 0)
+ eq_(select(func.count("*")).select_from(users).scalar(), 0)
+ eq_(select(func.count("*")).select_from(addresses).scalar(), 0)
def test_batch_mode(self):
"""The 'batch=False' flag on mapper()"""
session.add(i)
session.flush()
- eq_(select([func.count("*")]).select_from(item_keywords).scalar(), 2)
+ eq_(select(func.count("*")).select_from(item_keywords).scalar(), 2)
i.keywords = []
session.flush()
- eq_(select([func.count("*")]).select_from(item_keywords).scalar(), 0)
+ eq_(select(func.count("*")).select_from(item_keywords).scalar(), 0)
def test_scalar(self):
"""sa.dependency won't delete an m2m relationship referencing None."""
session.add(i)
session.flush()
- eq_(select([func.count("*")]).select_from(assoc).scalar(), 2)
+ eq_(select(func.count("*")).select_from(assoc).scalar(), 2)
i.keywords = []
session.flush()
- eq_(select([func.count("*")]).select_from(assoc).scalar(), 0)
+ eq_(select(func.count("*")).select_from(assoc).scalar(), 0)
class BooleanColTest(fixtures.MappedTest):
eq_(
sess.scalar(
- select([func.count("*")]).select_from(self.tables.parent)
+ select(func.count("*")).select_from(self.tables.parent)
),
0,
)
from sqlalchemy import testing
from sqlalchemy import text
from sqlalchemy import update
-from sqlalchemy.future import select as future_select
from sqlalchemy.orm import backref
from sqlalchemy.orm import joinedload
from sqlalchemy.orm import mapper
User = self.classes.User
- s = Session()
+ s = Session(future=True)
jill = s.query(User).filter(User.name == "jill").one()
User = self.classes.User
- s = Session()
+ s = Session(future=True)
jill = s.query(User).filter(User.name == "jill").one()
assert_raises(
exc.InvalidRequestError,
sess.query(User)
- .filter(
- User.name == select([func.max(User.name)]).scalar_subquery()
- )
+ .filter(User.name == select(func.max(User.name)).scalar_subquery())
.delete,
synchronize_session="evaluate",
)
sess.query(User).filter(
- User.name == select([func.max(User.name)]).scalar_subquery()
+ User.name == select(func.max(User.name)).scalar_subquery()
).delete(synchronize_session="fetch")
assert john not in sess
def test_update_future(self):
User, users = self.classes.User, self.tables.users
- sess = Session()
+ sess = Session(future=True)
john, jack, jill, jane = (
- sess.execute(future_select(User).order_by(User.id)).scalars().all()
+ sess.execute(select(User).order_by(User.id)).scalars().all()
)
sess.execute(
eq_([john.age, jack.age, jill.age, jane.age], [25, 37, 29, 27])
eq_(
- sess.execute(future_select(User.age).order_by(User.id)).all(),
+ sess.execute(select(User.age).order_by(User.id)).all(),
list(zip([25, 37, 29, 27])),
)
def test_update_multi_values_error_future(self):
User = self.classes.User
- session = Session()
+ session = Session(future=True)
# Do update using a tuple and check that order is preserved
def test_update_preserve_parameter_order_future(self):
User = self.classes.User
- session = Session()
+ session = Session(future=True)
# Do update using a tuple and check that order is preserved
person = self.classes.Person.__table__
engineer = self.classes.Engineer.__table__
- sess = Session()
+ sess = Session(future=True)
sess.query(person.join(engineer)).filter(person.c.name == "e2").update(
{person.c.name: "updated", engineer.c.engineer_name: "e2a"},
)
obj = sess.execute(
- future_select(self.classes.Engineer).filter(
+ select(self.classes.Engineer).filter(
self.classes.Engineer.name == "updated"
)
).scalar()
session.commit()
eq_(b1.version_id, 1)
# base is populated
- eq_(select([base.c.version_id]).scalar(), 1)
+ eq_(select(base.c.version_id).scalar(), 1)
def test_sub_both(self):
Base, sub, base, Sub = (
session.commit()
# table is populated
- eq_(select([sub.c.version_id]).scalar(), 1)
+ eq_(select(sub.c.version_id).scalar(), 1)
# base is populated
- eq_(select([base.c.version_id]).scalar(), 1)
+ eq_(select(base.c.version_id).scalar(), 1)
def test_sub_only(self):
Base, sub, base, Sub = (
session.commit()
# table is populated
- eq_(select([sub.c.version_id]).scalar(), 1)
+ eq_(select(sub.c.version_id).scalar(), 1)
# base is not
- eq_(select([base.c.version_id]).scalar(), None)
+ eq_(select(base.c.version_id).scalar(), None)
def test_mismatch_version_col_warning(self):
Base, sub, base, Sub = (
# /home/classic/dev/sqlalchemy/test/profiles.txt
# This file is written out on a per-environment basis.
-# For each test in aaa_profiling, the corresponding function and
+# For each test in aaa_profiling, the corresponding function and
# environment is located within this file. If it doesn't exist,
# the test is skipped.
-# If a callcount does exist, it is compared to what we received.
+# If a callcount does exist, it is compared to what we received.
# assertions are raised if the counts do not match.
-#
-# To add a new callcount test, apply the function_call_count
-# decorator and re-run the tests using the --write-profiles
+#
+# To add a new callcount test, apply the function_call_count
+# decorator and re-run the tests using the --write-profiles
# option - this file will be rewritten including the new count.
-#
+#
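
Per the header above, a sketch of what adding a callcount test looks like,
assuming the test suite's ``profiling`` helpers::

    from sqlalchemy.testing import fixtures, profiling

    class MyCompileTest(fixtures.TestBase):
        @profiling.function_call_count()
        def test_something(self):
            # hypothetical workload whose Python function-call count
            # gets pinned; regenerate this file with --write-profiles
            run_the_thing_being_measured()
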
# TEST: test.aaa_profiling.test_compiler.CompileTest.test_insert
-test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mssql_pyodbc_dbapiunicode_cextensions 63
-test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mssql_pyodbc_dbapiunicode_nocextensions 63
-test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_mysqldb_dbapiunicode_cextensions 63
-test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_mysqldb_dbapiunicode_nocextensions 63
-test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_pymysql_dbapiunicode_cextensions 63
-test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_pymysql_dbapiunicode_nocextensions 63
+test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mssql_pyodbc_dbapiunicode_cextensions 64
+test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mssql_pyodbc_dbapiunicode_nocextensions 64
+test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_mysqldb_dbapiunicode_cextensions 64
+test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_mysqldb_dbapiunicode_nocextensions 64
+test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_pymysql_dbapiunicode_cextensions 64
+test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_pymysql_dbapiunicode_nocextensions 64
test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_oracle_cx_oracle_dbapiunicode_cextensions 62
test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 62
-test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_postgresql_psycopg2_dbapiunicode_cextensions 63
-test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 63
-test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_sqlite_pysqlite_dbapiunicode_cextensions 63
-test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 63
-test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mssql_pyodbc_dbapiunicode_cextensions 67
-test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mssql_pyodbc_dbapiunicode_nocextensions 67
-test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mysql_mysqldb_dbapiunicode_cextensions 67
-test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mysql_mysqldb_dbapiunicode_nocextensions 67
-test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mysql_pymysql_dbapiunicode_cextensions 67
-test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mysql_pymysql_dbapiunicode_nocextensions 67
+test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_postgresql_psycopg2_dbapiunicode_cextensions 64
+test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 64
+test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_sqlite_pysqlite_dbapiunicode_cextensions 64
+test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 64
+test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mssql_pyodbc_dbapiunicode_cextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mssql_pyodbc_dbapiunicode_nocextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mysql_mysqldb_dbapiunicode_cextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mysql_mysqldb_dbapiunicode_nocextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mysql_pymysql_dbapiunicode_cextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mysql_pymysql_dbapiunicode_nocextensions 69
test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_oracle_cx_oracle_dbapiunicode_cextensions 67
test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_oracle_cx_oracle_dbapiunicode_nocextensions 67
-test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_postgresql_psycopg2_dbapiunicode_cextensions 67
-test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_postgresql_psycopg2_dbapiunicode_nocextensions 67
-test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_sqlite_pysqlite_dbapiunicode_cextensions 67
-test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 67
+test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_postgresql_psycopg2_dbapiunicode_cextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_postgresql_psycopg2_dbapiunicode_nocextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_sqlite_pysqlite_dbapiunicode_cextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 69
# TEST: test.aaa_profiling.test_compiler.CompileTest.test_select
-test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mssql_pyodbc_dbapiunicode_cextensions 154
-test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mssql_pyodbc_dbapiunicode_nocextensions 154
-test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_mysqldb_dbapiunicode_cextensions 154
-test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_mysqldb_dbapiunicode_nocextensions 154
-test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_pymysql_dbapiunicode_cextensions 154
-test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_pymysql_dbapiunicode_nocextensions 154
+test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mssql_pyodbc_dbapiunicode_cextensions 162
+test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mssql_pyodbc_dbapiunicode_nocextensions 162
+test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_mysqldb_dbapiunicode_cextensions 162
+test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_mysqldb_dbapiunicode_nocextensions 162
+test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_pymysql_dbapiunicode_cextensions 162
+test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_pymysql_dbapiunicode_nocextensions 162
test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_oracle_cx_oracle_dbapiunicode_cextensions 152
-test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 152
-test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_postgresql_psycopg2_dbapiunicode_cextensions 154
-test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 154
-test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_sqlite_pysqlite_dbapiunicode_cextensions 154
-test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 154
-test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mssql_pyodbc_dbapiunicode_cextensions 167
-test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mssql_pyodbc_dbapiunicode_nocextensions 167
-test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mysql_mysqldb_dbapiunicode_cextensions 167
-test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mysql_mysqldb_dbapiunicode_nocextensions 167
-test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mysql_pymysql_dbapiunicode_cextensions 167
-test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mysql_pymysql_dbapiunicode_nocextensions 167
+test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 165
+test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_postgresql_psycopg2_dbapiunicode_cextensions 162
+test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 162
+test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_sqlite_pysqlite_dbapiunicode_cextensions 162
+test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 162
+test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mssql_pyodbc_dbapiunicode_cextensions 177
+test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mssql_pyodbc_dbapiunicode_nocextensions 177
+test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mysql_mysqldb_dbapiunicode_cextensions 177
+test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mysql_mysqldb_dbapiunicode_nocextensions 177
+test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mysql_pymysql_dbapiunicode_cextensions 177
+test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mysql_pymysql_dbapiunicode_nocextensions 177
test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_oracle_cx_oracle_dbapiunicode_cextensions 167
-test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_oracle_cx_oracle_dbapiunicode_nocextensions 167
-test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_postgresql_psycopg2_dbapiunicode_cextensions 167
-test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_postgresql_psycopg2_dbapiunicode_nocextensions 167
-test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_sqlite_pysqlite_dbapiunicode_cextensions 167
-test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 167
+test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_oracle_cx_oracle_dbapiunicode_nocextensions 177
+test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_postgresql_psycopg2_dbapiunicode_cextensions 177
+test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_postgresql_psycopg2_dbapiunicode_nocextensions 177
+test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_sqlite_pysqlite_dbapiunicode_cextensions 177
+test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 177
# TEST: test.aaa_profiling.test_compiler.CompileTest.test_select_labels
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mssql_pyodbc_dbapiunicode_cextensions 171
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mssql_pyodbc_dbapiunicode_nocextensions 171
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_mysqldb_dbapiunicode_cextensions 171
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_mysqldb_dbapiunicode_nocextensions 171
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_pymysql_dbapiunicode_cextensions 171
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_pymysql_dbapiunicode_nocextensions 171
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mssql_pyodbc_dbapiunicode_cextensions 179
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mssql_pyodbc_dbapiunicode_nocextensions 179
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_mysqldb_dbapiunicode_cextensions 179
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_mysqldb_dbapiunicode_nocextensions 179
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_pymysql_dbapiunicode_cextensions 179
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_pymysql_dbapiunicode_nocextensions 179
test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_oracle_cx_oracle_dbapiunicode_cextensions 170
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 170
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_postgresql_psycopg2_dbapiunicode_cextensions 171
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 171
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_sqlite_pysqlite_dbapiunicode_cextensions 171
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 171
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mssql_pyodbc_dbapiunicode_cextensions 185
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mssql_pyodbc_dbapiunicode_nocextensions 185
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mysql_mysqldb_dbapiunicode_cextensions 185
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mysql_mysqldb_dbapiunicode_nocextensions 185
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mysql_pymysql_dbapiunicode_cextensions 185
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mysql_pymysql_dbapiunicode_nocextensions 185
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 182
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_postgresql_psycopg2_dbapiunicode_cextensions 179
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 179
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_sqlite_pysqlite_dbapiunicode_cextensions 179
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 179
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mssql_pyodbc_dbapiunicode_cextensions 194
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mssql_pyodbc_dbapiunicode_nocextensions 194
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mysql_mysqldb_dbapiunicode_cextensions 194
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mysql_mysqldb_dbapiunicode_nocextensions 194
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mysql_pymysql_dbapiunicode_cextensions 194
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mysql_pymysql_dbapiunicode_nocextensions 194
test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_oracle_cx_oracle_dbapiunicode_cextensions 185
test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_oracle_cx_oracle_dbapiunicode_nocextensions 185
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_postgresql_psycopg2_dbapiunicode_cextensions 185
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_postgresql_psycopg2_dbapiunicode_nocextensions 185
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_sqlite_pysqlite_dbapiunicode_cextensions 185
-test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 185
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_postgresql_psycopg2_dbapiunicode_cextensions 194
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_postgresql_psycopg2_dbapiunicode_nocextensions 194
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_sqlite_pysqlite_dbapiunicode_cextensions 194
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 194
# TEST: test.aaa_profiling.test_compiler.CompileTest.test_update
-test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mssql_pyodbc_dbapiunicode_cextensions 68
-test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mssql_pyodbc_dbapiunicode_nocextensions 68
-test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_mysqldb_dbapiunicode_cextensions 68
-test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_mysqldb_dbapiunicode_nocextensions 68
-test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_pymysql_dbapiunicode_cextensions 68
-test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_pymysql_dbapiunicode_nocextensions 68
+test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mssql_pyodbc_dbapiunicode_cextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mssql_pyodbc_dbapiunicode_nocextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_mysqldb_dbapiunicode_cextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_mysqldb_dbapiunicode_nocextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_pymysql_dbapiunicode_cextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_pymysql_dbapiunicode_nocextensions 69
test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_oracle_cx_oracle_dbapiunicode_cextensions 67
test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 67
-test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_postgresql_psycopg2_dbapiunicode_cextensions 68
-test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 68
-test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_sqlite_pysqlite_dbapiunicode_cextensions 68
-test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 68
-test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mssql_pyodbc_dbapiunicode_cextensions 70
-test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mssql_pyodbc_dbapiunicode_nocextensions 70
-test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mysql_mysqldb_dbapiunicode_cextensions 70
-test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mysql_mysqldb_dbapiunicode_nocextensions 70
-test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mysql_pymysql_dbapiunicode_cextensions 70
-test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mysql_pymysql_dbapiunicode_nocextensions 70
+test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_postgresql_psycopg2_dbapiunicode_cextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_sqlite_pysqlite_dbapiunicode_cextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 69
+test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mssql_pyodbc_dbapiunicode_cextensions 72
+test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mssql_pyodbc_dbapiunicode_nocextensions 72
+test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mysql_mysqldb_dbapiunicode_cextensions 72
+test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mysql_mysqldb_dbapiunicode_nocextensions 72
+test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mysql_pymysql_dbapiunicode_cextensions 72
+test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mysql_pymysql_dbapiunicode_nocextensions 72
test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_oracle_cx_oracle_dbapiunicode_cextensions 70
test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_oracle_cx_oracle_dbapiunicode_nocextensions 70
-test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_postgresql_psycopg2_dbapiunicode_cextensions 70
-test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_postgresql_psycopg2_dbapiunicode_nocextensions 70
-test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_sqlite_pysqlite_dbapiunicode_cextensions 70
-test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 70
+test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_postgresql_psycopg2_dbapiunicode_cextensions 72
+test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_postgresql_psycopg2_dbapiunicode_nocextensions 72
+test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_sqlite_pysqlite_dbapiunicode_cextensions 72
+test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 72
# TEST: test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mssql_pyodbc_dbapiunicode_cextensions 151
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mssql_pyodbc_dbapiunicode_nocextensions 151
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_mysqldb_dbapiunicode_cextensions 151
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_mysqldb_dbapiunicode_nocextensions 151
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_pymysql_dbapiunicode_cextensions 151
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_pymysql_dbapiunicode_nocextensions 151
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mssql_pyodbc_dbapiunicode_cextensions 152
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mssql_pyodbc_dbapiunicode_nocextensions 152
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_mysqldb_dbapiunicode_cextensions 152
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_mysqldb_dbapiunicode_nocextensions 152
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_pymysql_dbapiunicode_cextensions 152
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_pymysql_dbapiunicode_nocextensions 152
test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_oracle_cx_oracle_dbapiunicode_cextensions 150
test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 150
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_postgresql_psycopg2_dbapiunicode_cextensions 151
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 151
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_sqlite_pysqlite_dbapiunicode_cextensions 151
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 151
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mssql_pyodbc_dbapiunicode_cextensions 156
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mssql_pyodbc_dbapiunicode_nocextensions 156
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mysql_mysqldb_dbapiunicode_cextensions 156
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mysql_mysqldb_dbapiunicode_nocextensions 156
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mysql_pymysql_dbapiunicode_cextensions 156
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mysql_pymysql_dbapiunicode_nocextensions 156
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_postgresql_psycopg2_dbapiunicode_cextensions 152
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 152
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_sqlite_pysqlite_dbapiunicode_cextensions 152
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 152
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mssql_pyodbc_dbapiunicode_cextensions 158
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mssql_pyodbc_dbapiunicode_nocextensions 158
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mysql_mysqldb_dbapiunicode_cextensions 158
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mysql_mysqldb_dbapiunicode_nocextensions 158
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mysql_pymysql_dbapiunicode_cextensions 158
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mysql_pymysql_dbapiunicode_nocextensions 158
test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_oracle_cx_oracle_dbapiunicode_cextensions 156
test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_oracle_cx_oracle_dbapiunicode_nocextensions 156
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_postgresql_psycopg2_dbapiunicode_cextensions 156
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_postgresql_psycopg2_dbapiunicode_nocextensions 156
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_sqlite_pysqlite_dbapiunicode_cextensions 156
-test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 156
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_postgresql_psycopg2_dbapiunicode_cextensions 158
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_postgresql_psycopg2_dbapiunicode_nocextensions 158
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_sqlite_pysqlite_dbapiunicode_cextensions 158
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 158
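
The callcount bumps above follow from the reworked ``select()`` internals;
the file is regenerated with ``--write-profiles`` as its header describes.
Not the test harness itself, but a rough self-contained way to observe
compile-time call counts of this kind (exact numbers vary by Python version
and C-extension availability, per the environment keys in the file)::

    import cProfile
    import pstats

    from sqlalchemy import column, select, table

    t = table("mytable", column("myid"), column("name"))

    pr = cProfile.Profile()
    pr.enable()
    select(t.c.myid, t.c.name).compile()
    pr.disable()

    # loose analogue of the recorded callcounts
    print(pstats.Stats(pr).total_calls)
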
# TEST: test.aaa_profiling.test_misc.CacheKeyTest.test_statement_key_is_cached
from sqlalchemy import values
from sqlalchemy.dialects import mysql
from sqlalchemy.dialects import postgresql
-from sqlalchemy.future import select as future_select
from sqlalchemy.schema import Sequence
from sqlalchemy.sql import bindparam
from sqlalchemy.sql import ColumnElement
.correlate_except(table_b),
),
lambda: (
- future_select(table_a.c.a),
- future_select(table_a.c.a).join(
- table_b, table_a.c.a == table_b.c.a
- ),
- future_select(table_a.c.a).join_from(
+ select(table_a.c.a),
+ select(table_a.c.a).join(table_b, table_a.c.a == table_b.c.a),
+ select(table_a.c.a).join_from(
table_a, table_b, table_a.c.a == table_b.c.a
),
- future_select(table_a.c.a).join_from(table_a, table_b),
- future_select(table_a.c.a).join_from(table_c, table_b),
- future_select(table_a.c.a)
+ select(table_a.c.a).join_from(table_a, table_b),
+ select(table_a.c.a).join_from(table_c, table_b),
+ select(table_a.c.a)
.join(table_b, table_a.c.a == table_b.c.a)
.join(table_c, table_b.c.b == table_c.c.x),
- future_select(table_a.c.a).join(table_b),
- future_select(table_a.c.a).join(table_c),
- future_select(table_a.c.a).join(
- table_b, table_a.c.a == table_b.c.b
- ),
- future_select(table_a.c.a).join(
- table_c, table_a.c.a == table_c.c.x
- ),
+ select(table_a.c.a).join(table_b),
+ select(table_a.c.a).join(table_c),
+ select(table_a.c.a).join(table_b, table_a.c.a == table_b.c.b),
+ select(table_a.c.a).join(table_c, table_a.c.a == table_c.c.x),
),
lambda: (
select([table_a.c.a]).cte(),
# lambda statements don't collect bindparameter objects
# for fixed values, has to be in a variable
value = random.randint(10, 20)
- return lambda_stmt(lambda: future_select(table_a)) + (
+ return lambda_stmt(lambda: select(table_a)) + (
lambda s: s.where(table_a.c.a == value)
)
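
A sketch of the lambda statement construct exercised above: the lambda's
code location forms part of the statement cache key, while ``value`` is
extracted as a bound parameter on each invocation, which is why it must
live in a variable rather than be inlined::

    from sqlalchemy import column, lambda_stmt, select, table

    table_a = table("a", column("a"))

    value = 15
    stmt = lambda_stmt(lambda: select(table_a)) + (
        lambda s: s.where(table_a.c.a == value)
    )
    # renders the WHERE criteria with ``value`` as a bound parameter
    print(stmt)
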
"columns; use this object directly within a "
"column-level expression.",
getattr,
- select([table1.c.myid]).scalar_subquery().self_group(),
+ select(table1.c.myid).scalar_subquery().self_group(),
"columns",
)
"columns; use this object directly within a "
"column-level expression.",
getattr,
- select([table1.c.myid]).scalar_subquery(),
+ select(table1.c.myid).scalar_subquery(),
"columns",
)
)
self.assert_compile(
- select([table1, table2]),
+ select(table1, table2),
"SELECT mytable.myid, mytable.name, mytable.description, "
"myothertable.otherid, myothertable.othername FROM mytable, "
"myothertable",
)
- def test_invalid_col_argument(self):
- assert_raises(exc.ArgumentError, select, table1)
- assert_raises(exc.ArgumentError, select, table1.c.myid)
-
def test_int_limit_offset_coercion(self):
for given, exp in [
("5", 5),
]:
eq_(select().limit(given)._limit, exp)
eq_(select().offset(given)._offset, exp)
- eq_(select(limit=given)._limit, exp)
- eq_(select(offset=given)._offset, exp)
assert_raises(ValueError, select().limit, "foo")
assert_raises(ValueError, select().offset, "foo")
- assert_raises(ValueError, select, offset="foo")
- assert_raises(ValueError, select, limit="foo")
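
The removed ``limit=``/``offset=`` keyword arguments map to the generative
methods, which retain the integer coercion tested above; a minimal sketch::

    from sqlalchemy import select

    stmt = select(1).limit("5").offset("10")
    assert stmt._limit == 5 and stmt._offset == 10
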
def test_limit_offset_no_int_coercion_one(self):
exp1 = literal_column("Q")
exp2 = literal_column("Y")
self.assert_compile(
- select([1]).limit(exp1).offset(exp2), "SELECT 1 LIMIT Q OFFSET Y"
+ select(1).limit(exp1).offset(exp2), "SELECT 1 LIMIT Q OFFSET Y"
)
self.assert_compile(
- select([1]).limit(bindparam("x")).offset(bindparam("y")),
+ select(1).limit(bindparam("x")).offset(bindparam("y")),
"SELECT 1 LIMIT :x OFFSET :y",
)
def test_limit_offset_no_int_coercion_two(self):
exp1 = literal_column("Q")
exp2 = literal_column("Y")
- sel = select([1]).limit(exp1).offset(exp2)
+ sel = select(1).limit(exp1).offset(exp2)
assert_raises_message(
exc.CompileError,
def test_limit_offset_no_int_coercion_three(self):
exp1 = bindparam("Q")
exp2 = bindparam("Y")
- sel = select([1]).limit(exp1).offset(exp2)
+ sel = select(1).limit(exp1).offset(exp2)
assert_raises_message(
exc.CompileError,
),
]:
self.assert_compile(
- select([1]).limit(lim).offset(offset),
+ select(1).limit(lim).offset(offset),
"SELECT 1 " + exp,
checkparams=params,
)
def test_select_precol_compile_ordering(self):
s1 = (
- select([column("x")])
+ select(column("x"))
.select_from(text("a"))
.limit(5)
.scalar_subquery()
)
- s2 = select([s1]).limit(10)
+ s2 = select(s1).limit(10)
class MyCompiler(compiler.SQLCompiler):
def get_select_precolumns(self, select, **kw):
another select, for the
purposes of selecting from the exported columns of that select."""
- s = select([table1], table1.c.name == "jack").subquery()
+ s = select(table1).where(table1.c.name == "jack").subquery()
self.assert_compile(
- select([s], s.c.myid == 7),
+ select(s).where(s.c.myid == 7),
"SELECT anon_1.myid, anon_1.name, anon_1.description FROM "
"(SELECT mytable.myid AS myid, "
"mytable.name AS name, mytable.description AS description "
"anon_1.myid = :myid_1",
)
- sq = select([table1])
+ sq = select(table1)
self.assert_compile(
sq.subquery().select(),
"SELECT anon_1.myid, anon_1.name, anon_1.description FROM "
"AS description FROM mytable) AS anon_1",
)
- sq = select([table1]).alias("sq")
+ sq = select(table1).alias("sq")
self.assert_compile(
- sq.select(sq.c.myid == 7),
+ sq.select().where(sq.c.myid == 7),
"SELECT sq.myid, sq.name, sq.description FROM "
"(SELECT mytable.myid AS myid, mytable.name AS name, "
"mytable.description AS description FROM mytable) AS sq "
"WHERE sq.myid = :myid_1",
)
- sq = select(
- [table1, table2],
- and_(table1.c.myid == 7, table2.c.otherid == table1.c.myid),
- use_labels=True,
- ).alias("sq")
+ sq = (
+ select(table1, table2)
+ .where(and_(table1.c.myid == 7, table2.c.otherid == table1.c.myid))
+ .apply_labels()
+ .alias("sq")
+ )
sqstring = (
"SELECT mytable.myid AS mytable_myid, mytable.name AS "
"sq.myothertable_othername FROM (%s) AS sq" % sqstring,
)
- sq2 = select([sq], use_labels=True).alias("sq2")
+ sq2 = select(sq).apply_labels().alias("sq2")
self.assert_compile(
sq2.select(),
def test_select_from_clauselist(self):
self.assert_compile(
- select([ClauseList(column("a"), column("b"))]).select_from(
+ select(ClauseList(column("a"), column("b"))).select_from(
text("sometable")
),
"SELECT a, b FROM sometable",
def test_use_labels(self):
self.assert_compile(
- select([table1.c.myid == 5], use_labels=True),
+ select(table1.c.myid == 5).apply_labels(),
"SELECT mytable.myid = :myid_1 AS anon_1 FROM mytable",
)
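
``apply_labels()`` is the generative replacement for ``use_labels=True``
seen throughout these hunks; it prefixes column labels with the table name
and anonymizes expression labels::

    from sqlalchemy import column, select, table

    t = table("mytable", column("myid"), column("name"))
    print(select(t).apply_labels())
    # SELECT mytable.myid AS mytable_myid, mytable.name AS mytable_name
    # FROM mytable
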
self.assert_compile(
- select([func.foo()], use_labels=True), "SELECT foo() AS foo_1"
+ select(func.foo()).apply_labels(), "SELECT foo() AS foo_1"
)
# this is native_boolean=False for default dialect
self.assert_compile(
- select([not_(True)], use_labels=True),
- "SELECT :param_1 = 0 AS anon_1",
+ select(not_(True)).apply_labels(), "SELECT :param_1 = 0 AS anon_1",
)
self.assert_compile(
- select([cast("data", Integer)], use_labels=True),
+ select(cast("data", Integer)).apply_labels(),
"SELECT CAST(:param_1 AS INTEGER) AS anon_1",
)
self.assert_compile(
select(
- [func.sum(func.lala(table1.c.myid).label("foo")).label("bar")]
+ func.sum(func.lala(table1.c.myid).label("foo")).label("bar")
),
"SELECT sum(lala(mytable.myid)) AS bar FROM mytable",
)
self.assert_compile(
- select([keyed]), "SELECT keyed.x, keyed.y" ", keyed.z FROM keyed"
+ select(keyed), "SELECT keyed.x, keyed.y, keyed.z FROM keyed"
)
self.assert_compile(
- select([keyed]).apply_labels(),
+ select(keyed).apply_labels(),
"SELECT keyed.x AS keyed_x, keyed.y AS "
"keyed_y, keyed.z AS keyed_z FROM keyed",
)
self.assert_compile(
- select([select([keyed]).apply_labels().subquery()]).apply_labels(),
+ select(select(keyed).apply_labels().subquery()).apply_labels(),
"SELECT anon_1.keyed_x AS anon_1_keyed_x, "
"anon_1.keyed_y AS anon_1_keyed_y, "
"anon_1.keyed_z AS anon_1_keyed_z "
"""as of 1.4, there's no deduping."""
self.assert_compile(
- select([column("a"), column("a"), column("a")]),
+ select(column("a"), column("a"), column("a")),
"SELECT a, a, a",
dialect=default.DefaultDialect(),
)
c = column("a")
self.assert_compile(
- select([c, c, c]),
+ select(c, c, c),
"SELECT a, a, a",
dialect=default.DefaultDialect(),
)
a, b = column("a"), column("b")
self.assert_compile(
- select([a, b, b, b, a, a]),
+ select(a, b, b, b, a, a),
"SELECT a, b, b, b, a, a",
dialect=default.DefaultDialect(),
)
Column("c", Integer, key="a"),
)
self.assert_compile(
- select([a, b, c, a, b, c]),
+ select(a, b, c, a, b, c),
"SELECT a, b, c, a, b, c",
dialect=default.DefaultDialect(),
)
self.assert_compile(
- select([bindparam("a"), bindparam("b"), bindparam("c")]),
+ select(bindparam("a"), bindparam("b"), bindparam("c")),
"SELECT :a AS anon_1, :b AS anon_2, :c AS anon_3",
dialect=default.DefaultDialect(paramstyle="named"),
)
self.assert_compile(
- select([bindparam("a"), bindparam("b"), bindparam("c")]),
+ select(bindparam("a"), bindparam("b"), bindparam("c")),
"SELECT ? AS anon_1, ? AS anon_2, ? AS anon_3",
dialect=default.DefaultDialect(paramstyle="qmark"),
)
self.assert_compile(
- select([column("a"), column("a"), column("a")]), "SELECT a, a, a"
+ select(column("a"), column("a"), column("a")), "SELECT a, a, a"
)
- s = select([bindparam("a"), bindparam("b"), bindparam("c")])
+ s = select(bindparam("a"), bindparam("b"), bindparam("c"))
s = s.compile(dialect=default.DefaultDialect(paramstyle="qmark"))
eq_(s.positiontup, ["a", "b", "c"])
foo = table("foo", column("id"), column("bar_id"))
foo_bar = table("foo_bar", column("id"))
- stmt = select([foo, foo_bar]).apply_labels()
+ stmt = select(foo, foo_bar).apply_labels()
self.assert_compile(
stmt,
"SELECT foo.id AS foo_id, foo.bar_id AS foo_bar_id, "
# robust behavior when dupes are present is still very useful.
stmt = select(
- [
- foo.c.id,
- foo.c.bar_id,
- foo_bar.c.id,
- foo.c.bar_id,
- foo.c.id,
- foo.c.bar_id,
- foo_bar.c.id,
- foo_bar.c.id,
- ]
+ foo.c.id,
+ foo.c.bar_id,
+ foo_bar.c.id,
+ foo.c.bar_id,
+ foo.c.id,
+ foo.c.bar_id,
+ foo_bar.c.id,
+ foo_bar.c.id,
).apply_labels()
self.assert_compile(
stmt,
# of the same column are not used. only the label applied to the
# first occurrence of each column is used
self.assert_compile(
- select([stmt.subquery()]),
+ select(stmt.subquery()),
"SELECT "
"anon_1.foo_id, " # from 1st foo.id in derived (line 1)
"anon_1.foo_bar_id, " # from 1st foo.bar_id in derived (line 2)
def test_dupe_columns_use_labels(self):
t = table("t", column("a"), column("b"))
self.assert_compile(
- select([t.c.a, t.c.a, t.c.b, t.c.a]).apply_labels(),
+ select(t.c.a, t.c.a, t.c.b, t.c.a).apply_labels(),
"SELECT t.a AS t_a, t.a AS t_a__1, t.b AS t_b, "
"t.a AS t_a__1 FROM t",
)
def test_dupe_columns_use_labels_derived_selectable(self):
t = table("t", column("a"), column("b"))
- stmt = select([t.c.a, t.c.a, t.c.b, t.c.a]).apply_labels().subquery()
+ stmt = select(t.c.a, t.c.a, t.c.b, t.c.a).apply_labels().subquery()
self.assert_compile(
- select([stmt]),
+ select(stmt),
"SELECT anon_1.t_a, anon_1.t_a, anon_1.t_b, anon_1.t_a FROM "
"(SELECT t.a AS t_a, t.a AS t_a__1, t.b AS t_b, t.a AS t_a__1 "
"FROM t) AS anon_1",
a, b, a_a = t.c.a, t.c.b, t.c.a._annotate({"some_orm_thing": True})
self.assert_compile(
- select([a, a_a, b, a_a]).apply_labels(),
+ select(a, a_a, b, a_a).apply_labels(),
"SELECT t.a AS t_a, t.a AS t_a__1, t.b AS t_b, "
"t.a AS t_a__1 FROM t",
)
self.assert_compile(
- select([a_a, a, b, a_a]).apply_labels(),
+ select(a_a, a, b, a_a).apply_labels(),
"SELECT t.a AS t_a, t.a AS t_a__1, t.b AS t_b, "
"t.a AS t_a__1 FROM t",
)
self.assert_compile(
- select([a_a, a_a, b, a]).apply_labels(),
+ select(a_a, a_a, b, a).apply_labels(),
"SELECT t.a AS t_a, t.a AS t_a__1, t.b AS t_b, "
"t.a AS t_a__1 FROM t",
)
def test_dupe_columns_use_labels_derived_selectable_mix_annotations(self):
t = table("t", column("a"), column("b"))
a, b, a_a = t.c.a, t.c.b, t.c.a._annotate({"some_orm_thing": True})
- stmt = select([a, a_a, b, a_a]).apply_labels().subquery()
+ stmt = select(a, a_a, b, a_a).apply_labels().subquery()
self.assert_compile(
- select([stmt]),
+ select(stmt),
"SELECT anon_1.t_a, anon_1.t_a, anon_1.t_b, anon_1.t_a FROM "
"(SELECT t.a AS t_a, t.a AS t_a__1, t.b AS t_b, t.a AS t_a__1 "
"FROM t) AS anon_1",
foo_bar__id = foo_bar.c.id._annotate({"some_orm_thing": True})
stmt = select(
- [
- foo.c.bar_id,
- foo_bar.c.id,
- foo_bar.c.id,
- foo_bar__id,
- foo_bar__id,
- ]
+ foo.c.bar_id, foo_bar.c.id, foo_bar.c.id, foo_bar__id, foo_bar__id,
).apply_labels()
self.assert_compile(
# second and third occurrences of a.c.a are labeled, but are
# dupes of each other.
self.assert_compile(
- select([a.c.a, a.c.a, a.c.b, a.c.a]).apply_labels(),
+ select(a.c.a, a.c.a, a.c.b, a.c.a).apply_labels(),
"SELECT t_1.a AS t_1_a, t_1.a AS t_1_a__1, t_1.b AS t_1_b, "
"t_1.a AS t_1_a__1 "
"FROM t AS t_1",
"""
s1 = table1.select()
s2 = s1.alias()
- s3 = select([s2], use_labels=True)
+ s3 = select(s2).apply_labels()
s4 = s3.alias()
- s5 = select([s4], use_labels=True)
+ s5 = select(s4).apply_labels()
self.assert_compile(
s5,
"SELECT anon_1.anon_2_myid AS "
def test_nested_label_targeting_keyed(self):
s1 = keyed.select()
s2 = s1.alias()
- s3 = select([s2], use_labels=True)
+ s3 = select(s2).apply_labels()
self.assert_compile(
s3,
"SELECT anon_1.x AS anon_1_x, "
)
s4 = s3.alias()
- s5 = select([s4], use_labels=True)
+ s5 = select(s4).apply_labels()
self.assert_compile(
s5,
"SELECT anon_1.anon_2_x AS anon_1_anon_2_x, "
)
def test_exists(self):
- s = select([table1.c.myid]).where(table1.c.myid == 5)
+ s = select(table1.c.myid).where(table1.c.myid == 5)
self.assert_compile(
exists(s),
)
self.assert_compile(
- exists([table1.c.myid], table1.c.myid == 5).select(),
+ exists(table1.c.myid).where(table1.c.myid == 5).select(),
"SELECT EXISTS (SELECT mytable.myid FROM "
"mytable WHERE mytable.myid = :myid_1) AS anon_1",
params={"mytable_myid": 5},
)
self.assert_compile(
- select([table1, exists([1], from_obj=table2)]),
+ select(table1, exists(1).select_from(table2)),
"SELECT mytable.myid, mytable.name, "
"mytable.description, EXISTS (SELECT 1 "
"FROM myothertable) AS anon_1 FROM mytable",
params={},
)
self.assert_compile(
- select([table1, exists([1], from_obj=table2).label("foo")]),
+ select(table1, exists(1).select_from(table2).label("foo")),
"SELECT mytable.myid, mytable.name, "
"mytable.description, EXISTS (SELECT 1 "
"FROM myothertable) AS foo FROM mytable",
)
self.assert_compile(
- table1.select(
+ table1.select().where(
exists()
.where(table2.c.otherid == table1.c.myid)
.correlate(table1)
"myothertable.otherid = mytable.myid)",
)
self.assert_compile(
- table1.select(
+ table1.select().where(
exists()
.where(table2.c.otherid == table1.c.myid)
.correlate(table1)
self.assert_compile(
select(
- [
- or_(
- exists().where(table2.c.otherid == "foo"),
- exists().where(table2.c.otherid == "bar"),
- )
- ]
+ or_(
+ exists().where(table2.c.otherid == "foo"),
+ exists().where(table2.c.otherid == "bar"),
+ )
),
"SELECT (EXISTS (SELECT * FROM myothertable "
"WHERE myothertable.otherid = :otherid_1)) "
)
self.assert_compile(
- select([exists([1])]), "SELECT EXISTS (SELECT 1) AS anon_1"
+ select(exists(1)), "SELECT EXISTS (SELECT 1) AS anon_1"
)
self.assert_compile(
- select([~exists([1])]), "SELECT NOT (EXISTS (SELECT 1)) AS anon_1"
+ select(~exists(1)), "SELECT NOT (EXISTS (SELECT 1)) AS anon_1"
)
self.assert_compile(
- select([~(~exists([1]))]),
+ select(~(~exists(1))),
"SELECT NOT (NOT (EXISTS (SELECT 1))) AS anon_1",
)
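
The ``exists()`` rewrites above follow the same positional pattern: the
inner columns are passed positionally and the construct is refined
generatively rather than through keyword arguments. A standalone sketch
following the spelling in these hunks::

    from sqlalchemy import column, exists, select, table

    t1 = table("mytable", column("myid"))
    t2 = table("myothertable", column("otherid"))

    stmt = select(t1).where(
        exists(1).select_from(t2).where(t2.c.otherid == t1.c.myid)
    )
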
def test_where_subquery(self):
- s = select(
- [addresses.c.street],
- addresses.c.user_id == users.c.user_id,
- correlate=True,
- ).alias("s")
+ s = (
+ select(addresses.c.street)
+ .where(addresses.c.user_id == users.c.user_id)
+ .alias("s")
+ )
# don't correlate in a FROM list
self.assert_compile(
- select([users, s.c.street], from_obj=s),
+ select(users, s.c.street).select_from(s),
"SELECT users.user_id, users.user_name, "
"users.password, s.street FROM users, "
"(SELECT addresses.street AS street FROM "
"users.user_id) AS s",
)
self.assert_compile(
- table1.select(
+ table1.select().where(
table1.c.myid
- == select(
- [table1.c.myid], table1.c.name == "jack"
- ).scalar_subquery()
+ == select(table1.c.myid)
+ .where(table1.c.name == "jack")
+ .scalar_subquery()
),
"SELECT mytable.myid, mytable.name, "
"mytable.description FROM mytable WHERE "
"mytable WHERE mytable.name = :name_1)",
)
self.assert_compile(
- table1.select(
+ table1.select().where(
table1.c.myid
- == select(
- [table2.c.otherid], table1.c.name == table2.c.othername
- ).scalar_subquery()
+ == select(table2.c.otherid)
+ .where(table1.c.name == table2.c.othername)
+ .scalar_subquery()
),
"SELECT mytable.myid, mytable.name, "
"mytable.description FROM mytable WHERE "
"e)",
)
self.assert_compile(
- table1.select(exists([1], table2.c.otherid == table1.c.myid)),
+ table1.select().where(
+ exists(1).where(table2.c.otherid == table1.c.myid)
+ ),
"SELECT mytable.myid, mytable.name, "
"mytable.description FROM mytable WHERE "
"EXISTS (SELECT 1 FROM myothertable WHERE "
"myothertable.otherid = mytable.myid)",
)
talias = table1.alias("ta")
- s = select(
- [talias], exists([1], table2.c.otherid == talias.c.myid)
- ).subquery("sq2")
+ s = (
+ select(talias)
+ .where(exists(1).where(table2.c.otherid == talias.c.myid))
+ .subquery("sq2")
+ )
self.assert_compile(
- select([s, table1]),
+ select(s, table1),
"SELECT sq2.myid, sq2.name, "
"sq2.description, mytable.myid, "
"mytable.name, mytable.description FROM "
# test constructing the outer query via append_column(), which
# occurs in the ORM's Query object
- s = select(
- [], exists([1], table2.c.otherid == table1.c.myid), from_obj=table1
+ s = (
+ select()
+ .where(exists(1).where(table2.c.otherid == table1.c.myid))
+ .select_from(table1)
)
s.add_columns.non_generative(s, table1)
self.assert_compile(
def test_orderby_subquery(self):
self.assert_compile(
table1.select().order_by(
- select(
- [table2.c.otherid], table1.c.myid == table2.c.otherid
- ).scalar_subquery()
+ select(table2.c.otherid)
+ .where(table1.c.myid == table2.c.otherid)
+ .scalar_subquery()
),
"SELECT mytable.myid, mytable.name, "
"mytable.description FROM mytable ORDER BY "
self.assert_compile(
table1.select().order_by(
desc(
- select(
- [table2.c.otherid], table1.c.myid == table2.c.otherid
- ).scalar_subquery()
+ select(table2.c.otherid)
+ .where(table1.c.myid == table2.c.otherid)
+ .scalar_subquery()
)
),
"SELECT mytable.myid, mytable.name, "
)
def test_scalar_select(self):
- s = select([table1.c.myid], correlate=False).scalar_subquery()
+ s = select(table1.c.myid).correlate(None).scalar_subquery()
self.assert_compile(
- select([table1, s]),
+ select(table1, s),
"SELECT mytable.myid, mytable.name, "
"mytable.description, (SELECT mytable.myid "
"FROM mytable) AS anon_1 FROM mytable",
)
- s = select([table1.c.myid]).scalar_subquery()
+ s = select(table1.c.myid).scalar_subquery()
self.assert_compile(
- select([table2, s]),
+ select(table2, s),
"SELECT myothertable.otherid, "
"myothertable.othername, (SELECT "
"mytable.myid FROM mytable) AS anon_1 FROM "
"myothertable",
)
- s = select([table1.c.myid]).correlate(None).scalar_subquery()
+ s = select(table1.c.myid).correlate(None).scalar_subquery()
self.assert_compile(
- select([table1, s]),
+ select(table1, s),
"SELECT mytable.myid, mytable.name, "
"mytable.description, (SELECT mytable.myid "
"FROM mytable) AS anon_1 FROM mytable",
)
- s = select([table1.c.myid]).scalar_subquery()
+ s = select(table1.c.myid).scalar_subquery()
s2 = s.where(table1.c.myid == 5)
self.assert_compile(
s2,
# test that aliases use scalar_subquery() when used in an explicitly
# scalar context
- s = select([table1.c.myid]).scalar_subquery()
+ s = select(table1.c.myid).scalar_subquery()
self.assert_compile(
- select([table1.c.myid]).where(table1.c.myid == s),
+ select(table1.c.myid).where(table1.c.myid == s),
"SELECT mytable.myid FROM mytable WHERE "
"mytable.myid = (SELECT mytable.myid FROM "
"mytable)",
)
self.assert_compile(
- select([table1.c.myid]).where(table1.c.myid < s),
+ select(table1.c.myid).where(table1.c.myid < s),
"SELECT mytable.myid FROM mytable WHERE "
"mytable.myid < (SELECT mytable.myid FROM "
"mytable)",
)
- s = select([table1.c.myid]).scalar_subquery()
+ s = select(table1.c.myid).scalar_subquery()
self.assert_compile(
- select([table2, s]),
+ select(table2, s),
"SELECT myothertable.otherid, "
"myothertable.othername, (SELECT "
"mytable.myid FROM mytable) AS anon_1 FROM "
# test expressions against scalar selects
self.assert_compile(
- select([s - literal(8)]),
+ select(s - literal(8)),
"SELECT (SELECT mytable.myid FROM mytable) "
"- :param_1 AS anon_1",
)
self.assert_compile(
- select([select([table1.c.name]).scalar_subquery() + literal("x")]),
+ select(select(table1.c.name).scalar_subquery() + literal("x")),
"SELECT (SELECT mytable.name FROM mytable) "
"|| :param_1 AS anon_1",
)
self.assert_compile(
- select([s > literal(8)]),
+ select(s > literal(8)),
"SELECT (SELECT mytable.myid FROM mytable) "
"> :param_1 AS anon_1",
)
self.assert_compile(
- select([select([table1.c.name]).label("foo")]),
+ select(select(table1.c.name).label("foo")),
"SELECT (SELECT mytable.name FROM mytable) " "AS foo",
)
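
``scalar_subquery()`` (the 1.4 rename of ``as_scalar()``) is what lets a
SELECT participate in column-level expressions in the assertions above::

    from sqlalchemy import column, literal, select, table

    t = table("mytable", column("myid"))

    s = select(t.c.myid).scalar_subquery()
    print(select(s - literal(8)))
    # SELECT (SELECT mytable.myid FROM mytable) - :param_1 AS anon_1
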
# scalar selects should not have any attributes on their 'c' or
# 'columns' attribute
- s = select([table1.c.myid]).scalar_subquery()
+ s = select(table1.c.myid).scalar_subquery()
assert_raises_message(
exc.InvalidRequestError,
"Scalar Select expression has no columns; use this "
places = table("places", column("id"), column("nm"))
zipcode = "12345"
qlat = (
- select([zips.c.latitude], zips.c.zipcode == zipcode)
+ select(zips.c.latitude)
+ .where(zips.c.zipcode == zipcode)
.correlate(None)
.scalar_subquery()
)
qlng = (
- select([zips.c.longitude], zips.c.zipcode == zipcode)
+ select(zips.c.longitude)
+ .where(zips.c.zipcode == zipcode)
.correlate(None)
.scalar_subquery()
)
- q = select(
- [
+ q = (
+ select(
places.c.id,
places.c.nm,
zips.c.zipcode,
func.latlondist(qlat, qlng).label("dist"),
- ],
- zips.c.zipcode == zipcode,
- order_by=["dist", places.c.nm],
+ )
+ .where(zips.c.zipcode == zipcode)
+ .order_by("dist", places.c.nm)
)
self.assert_compile(
)
zalias = zips.alias("main_zip")
- qlat = select(
- [zips.c.latitude], zips.c.zipcode == zalias.c.zipcode
- ).scalar_subquery()
- qlng = select(
- [zips.c.longitude], zips.c.zipcode == zalias.c.zipcode
- ).scalar_subquery()
- q = select(
- [
- places.c.id,
- places.c.nm,
- zalias.c.zipcode,
- func.latlondist(qlat, qlng).label("dist"),
- ],
- order_by=["dist", places.c.nm],
+ qlat = (
+ select(zips.c.latitude)
+ .where(zips.c.zipcode == zalias.c.zipcode)
+ .scalar_subquery()
)
+ qlng = (
+ select(zips.c.longitude)
+ .where(zips.c.zipcode == zalias.c.zipcode)
+ .scalar_subquery()
+ )
+ q = select(
+ places.c.id,
+ places.c.nm,
+ zalias.c.zipcode,
+ func.latlondist(qlat, qlng).label("dist"),
+ ).order_by("dist", places.c.nm)
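
The rewritten query also shows string ``order_by()`` entries resolving
against matching labels in the columns clause; a standalone sketch with
assumed tables::

    from sqlalchemy import column, func, select, table

    places = table("places", column("id"), column("nm"))

    stmt = select(
        places.c.id, func.char_length(places.c.nm).label("dist")
    ).order_by("dist", places.c.nm)
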
self.assert_compile(
q,
"SELECT places.id, places.nm, "
)
a1 = table2.alias("t2alias")
- s1 = select(
- [a1.c.otherid], table1.c.myid == a1.c.otherid
- ).scalar_subquery()
+ s1 = (
+ select(a1.c.otherid)
+ .where(table1.c.myid == a1.c.otherid)
+ .scalar_subquery()
+ )
j1 = table1.join(table2, table1.c.myid == table2.c.otherid)
- s2 = select([table1, s1], from_obj=j1)
+ s2 = select(table1, s1).select_from(j1)
self.assert_compile(
s2,
"SELECT mytable.myid, mytable.name, "
def test_label_comparison_one(self):
x = func.lala(table1.c.myid).label("foo")
self.assert_compile(
- select([x], x == 5),
+ select(x).where(x == 5),
"SELECT lala(mytable.myid) AS foo FROM "
"mytable WHERE lala(mytable.myid) = "
":param_1",
dialect = default.DefaultDialect()
self.assert_compile(
- select([lab1, lab2]).order_by(lab1, desc(lab2)),
+ select(lab1, lab2).order_by(lab1, desc(lab2)),
"SELECT mytable.myid + :myid_1 AS foo, "
"somefunc(mytable.name) AS bar FROM mytable "
"ORDER BY foo, bar DESC",
# the function embedded label renders as the function
self.assert_compile(
- select([lab1, lab2]).order_by(func.hoho(lab1), desc(lab2)),
+ select(lab1, lab2).order_by(func.hoho(lab1), desc(lab2)),
"SELECT mytable.myid + :myid_1 AS foo, "
"somefunc(mytable.name) AS bar FROM mytable "
"ORDER BY hoho(mytable.myid + :myid_1), bar DESC",
# binary expressions render as the expression without labels
self.assert_compile(
- select([lab1, lab2]).order_by(lab1 + "test"),
+ select(lab1, lab2).order_by(lab1 + "test"),
"SELECT mytable.myid + :myid_1 AS foo, "
"somefunc(mytable.name) AS bar FROM mytable "
"ORDER BY mytable.myid + :myid_1 + :param_1",
# labels within functions in the columns clause render
# with the expression
self.assert_compile(
- select([lab1, func.foo(lab1)]).order_by(lab1, func.foo(lab1)),
+ select(lab1, func.foo(lab1)).order_by(lab1, func.foo(lab1)),
"SELECT mytable.myid + :myid_1 AS foo, "
"foo(mytable.myid + :myid_1) AS foo_1 FROM mytable "
"ORDER BY foo, foo(mytable.myid + :myid_1)",
ly = (func.lower(table1.c.name) + table1.c.description).label("ly")
self.assert_compile(
- select([lx, ly]).order_by(lx, ly.desc()),
+ select(lx, ly).order_by(lx, ly.desc()),
"SELECT mytable.myid + mytable.myid AS lx, "
"lower(mytable.name) || mytable.description AS ly "
"FROM mytable ORDER BY lx, ly DESC",
# expression isn't actually the same thing (even though label is)
self.assert_compile(
- select([lab1, lab2]).order_by(
+ select(lab1, lab2).order_by(
table1.c.myid.label("foo"), desc(table1.c.name.label("bar"))
),
"SELECT mytable.myid + :myid_1 AS foo, "
# it's also an exact match, not aliased etc.
self.assert_compile(
- select([lab1, lab2]).order_by(
+ select(lab1, lab2).order_by(
desc(table1.alias().c.name.label("bar"))
),
"SELECT mytable.myid + :myid_1 AS foo, "
# but! it's based on lineage
lab2_lineage = lab2.element._clone()
self.assert_compile(
- select([lab1, lab2]).order_by(desc(lab2_lineage.label("bar"))),
+ select(lab1, lab2).order_by(desc(lab2_lineage.label("bar"))),
"SELECT mytable.myid + :myid_1 AS foo, "
"somefunc(mytable.name) AS bar FROM mytable "
"ORDER BY bar DESC",
# want to render a name that isn't specifically a Label elsewhere
# in the query
self.assert_compile(
- select([table1.c.myid]).order_by(table1.c.name.label("name")),
+ select(table1.c.myid).order_by(table1.c.name.label("name")),
"SELECT mytable.myid FROM mytable ORDER BY mytable.name",
)
# as well as if it doesn't match
self.assert_compile(
- select([table1.c.myid]).order_by(
+ select(table1.c.myid).order_by(
func.lower(table1.c.name).label("name")
),
"SELECT mytable.myid FROM mytable ORDER BY lower(mytable.name)",
dialect = default.DefaultDialect()
dialect.supports_simple_order_by_label = False
self.assert_compile(
- select([lab1, lab2]).order_by(lab1, desc(lab2)),
+ select(lab1, lab2).order_by(lab1, desc(lab2)),
"SELECT mytable.myid + :myid_1 AS foo, "
"somefunc(mytable.name) AS bar FROM mytable "
"ORDER BY mytable.myid + :myid_1, somefunc(mytable.name) DESC",
dialect=dialect,
)
self.assert_compile(
- select([lab1, lab2]).order_by(func.hoho(lab1), desc(lab2)),
+ select(lab1, lab2).order_by(func.hoho(lab1), desc(lab2)),
"SELECT mytable.myid + :myid_1 AS foo, "
"somefunc(mytable.name) AS bar FROM mytable "
"ORDER BY hoho(mytable.myid + :myid_1), "
dialect = default.DefaultDialect()
self.assert_compile(
- select([lab1, lab2]).group_by(lab1, lab2),
+ select(lab1, lab2).group_by(lab1, lab2),
"SELECT mytable.myid + :myid_1 AS foo, somefunc(mytable.name) "
"AS bar FROM mytable GROUP BY mytable.myid + :myid_1, "
"somefunc(mytable.name)",
assert isinstance(x.type, Boolean)
assert str(x) == "a AND b AND c"
self.assert_compile(
- select([x.label("foo")]), "SELECT a AND b AND c AS foo"
+ select(x.label("foo")), "SELECT a AND b AND c AS foo"
)
self.assert_compile(
t = table("t", column("x"))
self.assert_compile(
- select([t]).where(and_(t.c.x == 5, or_(and_(or_(t.c.x == 7))))),
+ select(t).where(and_(t.c.x == 5, or_(and_(or_(t.c.x == 7))))),
"SELECT t.x FROM t WHERE t.x = :x_1 AND t.x = :x_2",
)
self.assert_compile(
- select([t]).where(and_(or_(t.c.x == 12, and_(or_(t.c.x == 8))))),
+ select(t).where(and_(or_(t.c.x == 12, and_(or_(t.c.x == 8))))),
"SELECT t.x FROM t WHERE t.x = :x_1 OR t.x = :x_2",
)
self.assert_compile(
- select([t]).where(
+ select(t).where(
and_(or_(or_(t.c.x == 12), and_(or_(and_(t.c.x == 8)))))
),
"SELECT t.x FROM t WHERE t.x = :x_1 OR t.x = :x_2",
)
self.assert_compile(
- select([t]).where(
+ select(t).where(
and_(
or_(
or_(t.c.x == 12),
t = table("t", column("x"))
self.assert_compile(
- select([t]).where(true()),
+ select(t).where(true()),
"SELECT t.x FROM t WHERE 1 = 1",
dialect=default.DefaultDialect(supports_native_boolean=False),
)
self.assert_compile(
- select([t]).where(true()),
+ select(t).where(true()),
"SELECT t.x FROM t WHERE true",
dialect=default.DefaultDialect(supports_native_boolean=True),
)
self.assert_compile(
- select([t]),
+ select(t),
"SELECT t.x FROM t",
dialect=default.DefaultDialect(supports_native_boolean=True),
)
def test_distinct(self):
self.assert_compile(
- select([table1.c.myid.distinct()]),
+ select(table1.c.myid.distinct()),
"SELECT DISTINCT mytable.myid FROM mytable",
)
self.assert_compile(
- select([distinct(table1.c.myid)]),
+ select(distinct(table1.c.myid)),
"SELECT DISTINCT mytable.myid FROM mytable",
)
self.assert_compile(
- select([table1.c.myid]).distinct(),
+ select(table1.c.myid).distinct(),
"SELECT DISTINCT mytable.myid FROM mytable",
)
self.assert_compile(
- select([func.count(table1.c.myid.distinct())]),
+ select(func.count(table1.c.myid.distinct())),
"SELECT count(DISTINCT mytable.myid) AS count_1 FROM mytable",
)
self.assert_compile(
- select([func.count(distinct(table1.c.myid))]),
+ select(func.count(distinct(table1.c.myid))),
"SELECT count(DISTINCT mytable.myid) AS count_1 FROM mytable",
)
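
The three DISTINCT spellings exercised above are interchangeable::

    from sqlalchemy import column, distinct, func, select, table

    t = table("mytable", column("myid"))

    select(t.c.myid).distinct()  # SELECT DISTINCT mytable.myid FROM mytable
    select(distinct(t.c.myid))  # same rendering
    select(func.count(t.c.myid.distinct()))  # count(DISTINCT mytable.myid)
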
"DISTINCT ON is currently supported only by the PostgreSQL "
"dialect"
):
- select(["*"]).distinct(table1.c.myid).compile()
+ select("*").distinct(table1.c.myid).compile()
def test_where_empty(self):
self.assert_compile(
- select([table1.c.myid]).where(
+ select(table1.c.myid).where(
BooleanClauseList._construct_raw(operators.and_)
),
"SELECT mytable.myid FROM mytable",
)
self.assert_compile(
- select([table1.c.myid]).where(
+ select(table1.c.myid).where(
BooleanClauseList._construct_raw(operators.or_)
),
"SELECT mytable.myid FROM mytable",
def test_order_by_nulls(self):
self.assert_compile(
- table2.select(
- order_by=[
- table2.c.otherid,
- table2.c.othername.desc().nullsfirst(),
- ]
+ table2.select().order_by(
+ table2.c.otherid, table2.c.othername.desc().nullsfirst(),
),
"SELECT myothertable.otherid, myothertable.othername FROM "
"myothertable ORDER BY myothertable.otherid, "
)
self.assert_compile(
- table2.select(
- order_by=[
- table2.c.otherid,
- table2.c.othername.desc().nullslast(),
- ]
+ table2.select().order_by(
+ table2.c.otherid, table2.c.othername.desc().nullslast(),
),
"SELECT myothertable.otherid, myothertable.othername FROM "
"myothertable ORDER BY myothertable.otherid, "
)
self.assert_compile(
- table2.select(
- order_by=[
- table2.c.otherid.nullslast(),
- table2.c.othername.desc().nullsfirst(),
- ]
+ table2.select().order_by(
+ table2.c.otherid.nullslast(),
+ table2.c.othername.desc().nullsfirst(),
),
"SELECT myothertable.otherid, myothertable.othername FROM "
"myothertable ORDER BY myothertable.otherid NULLS LAST, "
)
self.assert_compile(
- table2.select(
- order_by=[
- table2.c.otherid.nullsfirst(),
- table2.c.othername.desc(),
- ]
+ table2.select().order_by(
+ table2.c.otherid.nullsfirst(), table2.c.othername.desc(),
),
"SELECT myothertable.otherid, myothertable.othername FROM "
"myothertable ORDER BY myothertable.otherid NULLS FIRST, "
)
self.assert_compile(
- table2.select(
- order_by=[
- table2.c.otherid.nullsfirst(),
- table2.c.othername.desc().nullslast(),
- ]
+ table2.select().order_by(
+ table2.c.otherid.nullsfirst(),
+ table2.c.othername.desc().nullslast(),
),
"SELECT myothertable.otherid, myothertable.othername FROM "
"myothertable ORDER BY myothertable.otherid NULLS FIRST, "
def test_orderby_groupby(self):
self.assert_compile(
- table2.select(
- order_by=[table2.c.otherid, asc(table2.c.othername)]
+ table2.select().order_by(
+ table2.c.otherid, asc(table2.c.othername)
),
"SELECT myothertable.otherid, myothertable.othername FROM "
"myothertable ORDER BY myothertable.otherid, "
)
self.assert_compile(
- table2.select(
- order_by=[table2.c.otherid, table2.c.othername.desc()]
+ table2.select().order_by(
+ table2.c.otherid, table2.c.othername.desc()
),
"SELECT myothertable.otherid, myothertable.othername FROM "
"myothertable ORDER BY myothertable.otherid, "
)
self.assert_compile(
- select(
- [table2.c.othername, func.count(table2.c.otherid)],
- group_by=[table2.c.othername],
+ select(table2.c.othername, func.count(table2.c.otherid)).group_by(
+ table2.c.othername
),
"SELECT myothertable.othername, "
"count(myothertable.otherid) AS count_1 "
# generative group by
self.assert_compile(
- select(
- [table2.c.othername, func.count(table2.c.otherid)]
- ).group_by(table2.c.othername),
+ select(table2.c.othername, func.count(table2.c.otherid)).group_by(
+ table2.c.othername
+ ),
"SELECT myothertable.othername, "
"count(myothertable.otherid) AS count_1 "
"FROM myothertable GROUP BY myothertable.othername",
)
self.assert_compile(
- select([table2.c.othername, func.count(table2.c.otherid)])
+ select(table2.c.othername, func.count(table2.c.otherid))
.group_by(table2.c.othername)
.group_by(None),
"SELECT myothertable.othername, "
)
self.assert_compile(
- select(
- [table2.c.othername, func.count(table2.c.otherid)],
- group_by=[table2.c.othername],
- order_by=[table2.c.othername],
- ),
+ select(table2.c.othername, func.count(table2.c.otherid))
+ .group_by(table2.c.othername)
+ .order_by(table2.c.othername),
"SELECT myothertable.othername, "
"count(myothertable.otherid) AS count_1 "
"FROM myothertable "
name = "custom"
statement_compiler = CustomCompiler
- stmt = select([table1.c.myid]).order_by(table1.c.myid)
+ stmt = select(table1.c.myid).order_by(table1.c.myid)
self.assert_compile(
stmt,
"SELECT mytable.myid FROM mytable ORDER BY "
name = "custom"
statement_compiler = CustomCompiler
- stmt = select([table1.c.myid]).group_by(table1.c.myid)
+ stmt = select(table1.c.myid).group_by(table1.c.myid)
self.assert_compile(
stmt,
"SELECT mytable.myid FROM mytable GROUP BY "
def test_for_update(self):
self.assert_compile(
- table1.select(table1.c.myid == 7).with_for_update(),
+ table1.select().where(table1.c.myid == 7).with_for_update(),
"SELECT mytable.myid, mytable.name, mytable.description "
"FROM mytable WHERE mytable.myid = :myid_1 FOR UPDATE",
)
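+ # note: the positional whereclause argument to FromClause.select()
+ # becomes an explicit .where() call under the 2.0 style.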
# nowait is not supported by the dialect; should just render FOR UPDATE
self.assert_compile(
- table1.select(table1.c.myid == 7).with_for_update(nowait=True),
+ table1.select()
+ .where(table1.c.myid == 7)
+ .with_for_update(nowait=True),
"SELECT mytable.myid, mytable.name, mytable.description "
"FROM mytable WHERE mytable.myid = :myid_1 FOR UPDATE",
)
# test the alias for a table1. column names stay the same,
# table name "changes" to "foo".
self.assert_compile(
- select([table1.alias("foo")]),
+ select(table1.alias("foo")),
"SELECT foo.myid, foo.name, foo.description FROM mytable AS foo",
)
for dialect in (oracle.dialect(),):
self.assert_compile(
- select([table1.alias("foo")]),
+ select(table1.alias("foo")),
"SELECT foo.myid, foo.name, foo.description FROM mytable foo",
dialect=dialect,
)
self.assert_compile(
- select([table1.alias()]),
+ select(table1.alias()),
"SELECT mytable_1.myid, mytable_1.name, mytable_1.description "
"FROM mytable AS mytable_1",
)
# which become the column keys accessible off the Selectable object.
# also, only use one column from the second table and all columns
# from the first table1.
- q = select(
- [table1, table2.c.otherid],
- table1.c.myid == table2.c.otherid,
- use_labels=True,
+ q = (
+ select(table1, table2.c.otherid)
+ .where(table1.c.myid == table2.c.otherid)
+ .apply_labels()
)
# make an alias of the "selectable". column names
# should produce two underscores.
# also, reference the column "mytable_myid" off of the t2view alias.
self.assert_compile(
- a.select(a.c.mytable_myid == 9, use_labels=True),
+ a.select().where(a.c.mytable_myid == 9).apply_labels(),
"SELECT t2view.mytable_myid AS t2view_mytable_myid, "
"t2view.mytable_name "
"AS t2view_mytable_name, "
def test_alias_nesting_table(self):
self.assert_compile(
- select([table1.alias("foo").alias("bar").alias("bat")]),
+ select(table1.alias("foo").alias("bar").alias("bat")),
"SELECT bat.myid, bat.name, bat.description FROM mytable AS bat",
)
self.assert_compile(
- select([table1.alias(None).alias("bar").alias("bat")]),
+ select(table1.alias(None).alias("bar").alias("bat")),
"SELECT bat.myid, bat.name, bat.description FROM mytable AS bat",
)
self.assert_compile(
- select([table1.alias("foo").alias(None).alias("bat")]),
+ select(table1.alias("foo").alias(None).alias("bat")),
"SELECT bat.myid, bat.name, bat.description FROM mytable AS bat",
)
self.assert_compile(
- select([table1.alias("foo").alias("bar").alias(None)]),
+ select(table1.alias("foo").alias("bar").alias(None)),
"SELECT bar_1.myid, bar_1.name, bar_1.description "
"FROM mytable AS bar_1",
)
self.assert_compile(
- select([table1.alias("foo").alias(None).alias(None)]),
+ select(table1.alias("foo").alias(None).alias(None)),
"SELECT anon_1.myid, anon_1.name, anon_1.description "
"FROM mytable AS anon_1",
)
def test_alias_nesting_subquery(self):
- stmt = select([table1]).subquery()
+ stmt = select(table1).subquery()
self.assert_compile(
- select([stmt.alias("foo").alias("bar").alias("bat")]),
+ select(stmt.alias("foo").alias("bar").alias("bat")),
"SELECT bat.myid, bat.name, bat.description FROM "
"(SELECT mytable.myid AS myid, mytable.name AS name, "
"mytable.description AS description FROM mytable) AS bat",
)
self.assert_compile(
- select([stmt.alias("foo").alias(None).alias(None)]),
+ select(stmt.alias("foo").alias(None).alias(None)),
"SELECT anon_1.myid, anon_1.name, anon_1.description FROM "
"(SELECT mytable.myid AS myid, mytable.name AS name, "
"mytable.description AS description FROM mytable) AS anon_1",
def test_collate(self):
# columns clause
self.assert_compile(
- select([column("x").collate("bar")]),
+ select(column("x").collate("bar")),
"SELECT x COLLATE bar AS anon_1",
)
# WHERE clause
self.assert_compile(
- select([column("x")]).where(column("x").collate("bar") == "foo"),
+ select(column("x")).where(column("x").collate("bar") == "foo"),
"SELECT x WHERE (x COLLATE bar) = :param_1",
)
# ORDER BY clause
self.assert_compile(
- select([column("x")]).order_by(column("x").collate("bar")),
+ select(column("x")).order_by(column("x").collate("bar")),
"SELECT x ORDER BY x COLLATE bar",
)
def test_literal(self):
self.assert_compile(
- select([literal("foo")]), "SELECT :param_1 AS anon_1"
+ select(literal("foo")), "SELECT :param_1 AS anon_1"
)
self.assert_compile(
- select([literal("foo") + literal("bar")], from_obj=[table1]),
+ select(literal("foo") + literal("bar")).select_from(table1),
"SELECT :param_1 || :param_2 AS anon_1 FROM mytable",
)
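+ # note: the from_obj=[...] keyword is now expressed as .select_from();
+ # only the construction API changes, not the statement produced.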
self.assert_compile(
select(
- [
- value_tbl.c.id,
- (value_tbl.c.val2 - value_tbl.c.val1) / value_tbl.c.val1,
- ]
+ value_tbl.c.id,
+ (value_tbl.c.val2 - value_tbl.c.val1) / value_tbl.c.val1,
),
"SELECT values.id, (values.val2 - values.val1) "
"/ values.val1 AS anon_1 FROM values",
)
self.assert_compile(
- select(
- [value_tbl.c.id],
+ select(value_tbl.c.id).where(
(value_tbl.c.val2 - value_tbl.c.val1) / value_tbl.c.val1 > 2.0,
),
"SELECT values.id FROM values WHERE "
)
self.assert_compile(
- select(
- [value_tbl.c.id],
+ select(value_tbl.c.id).where(
value_tbl.c.val1
/ (value_tbl.c.val2 - value_tbl.c.val1)
/ value_tbl.c.val1
column("spaces % more spaces"),
)
self.assert_compile(
- t.select(use_labels=True),
+ t.select().apply_labels(),
"""SELECT "table%name"."percent%" AS "table%name_percent%", """
""""table%name"."%(oneofthese)s" AS """
""""table%name_%(oneofthese)s", """
)
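+ # note: use_labels=True is spelled .apply_labels() in 1.4; both
+ # produce the same <tablename>_<columnname> labeling scheme.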
self.assert_compile(
- select(
- [table1],
- from_obj=[
- join(table1, table2, table1.c.myid == table2.c.otherid)
- ],
+ select(table1).select_from(
+ join(table1, table2, table1.c.myid == table2.c.otherid)
),
"SELECT mytable.myid, mytable.name, mytable.description FROM "
"mytable JOIN myothertable ON mytable.myid = myothertable.otherid",
self.assert_compile(
select(
- [
- join(
- join(
- table1, table2, table1.c.myid == table2.c.otherid
- ),
- table3,
- table1.c.myid == table3.c.userid,
- )
- ]
+ join(
+ join(table1, table2, table1.c.myid == table2.c.otherid),
+ table3,
+ table1.c.myid == table3.c.userid,
+ )
),
"SELECT mytable.myid, mytable.name, mytable.description, "
"myothertable.otherid, myothertable.othername, "
)
self.assert_compile(
- select(
- [table1, table2, table3],
- from_obj=[
- join(
- table1, table2, table1.c.myid == table2.c.otherid
- ).outerjoin(table3, table1.c.myid == table3.c.userid)
- ],
+ select(table1, table2, table3).select_from(
+ join(
+ table1, table2, table1.c.myid == table2.c.otherid
+ ).outerjoin(table3, table1.c.myid == table3.c.userid)
),
"SELECT mytable.myid, mytable.name, mytable.description, "
"myothertable.otherid, myothertable.othername, "
" thirdtable.userid",
)
self.assert_compile(
- select(
- [table1, table2, table3],
- from_obj=[
- outerjoin(
- table1,
- join(
- table2, table3, table2.c.otherid == table3.c.userid
- ),
- table1.c.myid == table2.c.otherid,
- )
- ],
+ select(table1, table2, table3).select_from(
+ outerjoin(
+ table1,
+ join(table2, table3, table2.c.otherid == table3.c.userid),
+ table1.c.myid == table2.c.otherid,
+ )
),
"SELECT mytable.myid, mytable.name, mytable.description, "
"myothertable.otherid, myothertable.othername, "
"mytable.myid = myothertable.otherid",
)
- query = select(
- [table1, table2],
- or_(
- table1.c.name == "fred",
- table1.c.myid == 10,
- table2.c.othername != "jack",
- text("EXISTS (select yay from foo where boo = lar)"),
- ),
- from_obj=[
+ query = (
+ select(table1, table2)
+ .where(
+ or_(
+ table1.c.name == "fred",
+ table1.c.myid == 10,
+ table2.c.othername != "jack",
+ text("EXISTS (select yay from foo where boo = lar)"),
+ )
+ )
+ .select_from(
outerjoin(table1, table2, table1.c.myid == table2.c.otherid)
- ],
+ )
)
self.assert_compile(
query,
table2, table1.c.myid == table2.c.otherid, full=True
),
]:
- stmt = select([table1]).select_from(spec)
+ stmt = select(table1).select_from(spec)
self.assert_compile(
stmt,
"SELECT mytable.myid, mytable.name, mytable.description FROM "
)
x = union(
- select([table1], table1.c.myid == 5),
- select([table1], table1.c.myid == 12),
- order_by=[table1.c.myid],
- )
+ select(table1).where(table1.c.myid == 5),
+ select(table1).where(table1.c.myid == 12),
+ ).order_by(table1.c.myid)
self.assert_compile(
x,
"ORDER BY myid",
)
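+ # note: compound-select keywords such as order_by=, limit= and
+ # offset= likewise move to generative methods on the union itself.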
- x = union(select([table1]), select([table1]))
- x = union(x, select([table1]))
+ x = union(select(table1), select(table1))
+ x = union(x, select(table1))
self.assert_compile(
x,
"(SELECT mytable.myid, mytable.name, mytable.description "
)
u1 = union(
- select([table1.c.myid, table1.c.name]),
- select([table2]),
- select([table3]),
+ select(table1.c.myid, table1.c.name),
+ select(table2),
+ select(table3),
).order_by("name")
self.assert_compile(
u1,
assert u1s.corresponding_column(table2.c.otherid) is u1s.c.myid
self.assert_compile(
- union(
- select([table1.c.myid, table1.c.name]),
- select([table2]),
- order_by=["myid"],
- offset=10,
- limit=5,
- ),
+ union(select(table1.c.myid, table1.c.name), select(table2))
+ .order_by("myid")
+ .offset(10)
+ .limit(5),
# note table name is omitted here. The CompoundSelect, inside of
# _label_resolve_dict(), creates a subquery of itself and then
# turns "named_with_column" off, so that we can order by the
"Can't resolve label reference for ORDER BY / GROUP BY / "
"DISTINCT etc. Textual "
"SQL expression 'noname'",
- union(
- select([table1.c.myid, table1.c.name]),
- select([table2]),
- order_by=["noname"],
- ).compile,
+ union(select(table1.c.myid, table1.c.name), select(table2),)
+ .order_by("noname")
+ .compile,
)
self.assert_compile(
union(
select(
- [
- table1.c.myid,
- table1.c.name,
- func.max(table1.c.description),
- ],
- table1.c.name == "name2",
- group_by=[table1.c.myid, table1.c.name],
- ),
- table1.select(table1.c.name == "name1"),
+ table1.c.myid,
+ table1.c.name,
+ func.max(table1.c.description),
+ )
+ .where(table1.c.name == "name2")
+ .group_by(table1.c.myid, table1.c.name),
+ table1.select().where(table1.c.name == "name1"),
),
"SELECT mytable.myid, mytable.name, "
"max(mytable.description) AS max_1 "
self.assert_compile(
union(
- select([literal(100).label("value")]),
- select([literal(200).label("value")]),
+ select(literal(100).label("value")),
+ select(literal(200).label("value")),
),
"SELECT :param_1 AS value UNION SELECT :param_2 AS value",
)
self.assert_compile(
union_all(
- select([table1.c.myid]),
- union(select([table2.c.otherid]), select([table3.c.userid])),
+ select(table1.c.myid),
+ union(select(table2.c.otherid), select(table3.c.userid)),
),
"SELECT mytable.myid FROM mytable UNION ALL "
"(SELECT myothertable.otherid FROM myothertable UNION "
"SELECT thirdtable.userid FROM thirdtable)",
)
- s = select([column("foo"), column("bar")])
+ s = select(column("foo"), column("bar"))
self.assert_compile(
union(s.order_by("foo"), s.order_by("bar")),
def test_dupe_cols_hey_we_can_union(self):
"""test the original inspiration for [ticket:4753]."""
- s1 = select([table1, table1.c.myid]).where(table1.c.myid == 5)
- s2 = select([table1, table2.c.otherid]).where(
+ s1 = select(table1, table1.c.myid).where(table1.c.myid == 5)
+ s2 = select(table1, table2.c.otherid).where(
table1.c.myid == table2.c.otherid
)
self.assert_compile(
)
def test_compound_grouping(self):
- s = select([column("foo"), column("bar")]).select_from(text("bat"))
+ s = select(column("foo"), column("bar")).select_from(text("bat"))
self.assert_compile(
union(union(union(s, s), s), s),
)
self.assert_compile(
- select([s.alias()]),
+ select(s.alias()),
"SELECT anon_1.foo, anon_1.bar FROM "
"(SELECT foo, bar FROM bat) AS anon_1",
)
self.assert_compile(
- select([union(s, s).alias()]),
+ select(union(s, s).alias()),
"SELECT anon_1.foo, anon_1.bar FROM "
"(SELECT foo, bar FROM bat UNION "
"SELECT foo, bar FROM bat) AS anon_1",
)
self.assert_compile(
- select([except_(s, s).alias()]),
+ select(except_(s, s).alias()),
"SELECT anon_1.foo, anon_1.bar FROM "
"(SELECT foo, bar FROM bat EXCEPT "
"SELECT foo, bar FROM bat) AS anon_1",
# fixme: shoving all of this dialect-specific stuff in one test
# is now officially completely ridiculous AND non-obviously omits
# coverage on other dialects.
- sel = select([tbl, cast(tbl.c.v1, Numeric)]).compile(
- dialect=dialect
- )
+ sel = select(tbl, cast(tbl.c.v1, Numeric)).compile(dialect=dialect)
if isinstance(dialect, type(mysql.dialect())):
eq_(
str(sel),
)
self.assert_compile(
select(
- [
- func.row_number()
- .over(order_by=table1.c.description)
- .label("foo")
- ]
+ func.row_number()
+ .over(order_by=table1.c.description)
+ .label("foo")
),
"SELECT row_number() OVER (ORDER BY mytable.description) "
"AS foo FROM mytable",
# test from_obj generation.
# from func:
self.assert_compile(
- select(
- [func.max(table1.c.name).over(partition_by=["description"])]
- ),
+ select(func.max(table1.c.name).over(partition_by=["description"])),
"SELECT max(mytable.name) OVER (PARTITION BY mytable.description) "
"AS anon_1 FROM mytable",
)
# from partition_by
self.assert_compile(
- select([func.row_number().over(partition_by=[table1.c.name])]),
+ select(func.row_number().over(partition_by=[table1.c.name])),
"SELECT row_number() OVER (PARTITION BY mytable.name) "
"AS anon_1 FROM mytable",
)
# from order_by
self.assert_compile(
- select([func.row_number().over(order_by=table1.c.name)]),
+ select(func.row_number().over(order_by=table1.c.name)),
"SELECT row_number() OVER (ORDER BY mytable.name) "
"AS anon_1 FROM mytable",
)
# this tests that _from_objects
# concatenates OK
self.assert_compile(
- select([column("x") + over(func.foo())]),
+ select(column("x") + over(func.foo())),
"SELECT x + foo() OVER () AS anon_1",
)
# test a reference to a label that is in the referenced selectable;
# this resolves
expr = (table1.c.myid + 5).label("sum")
- stmt = select([expr]).alias()
+ stmt = select(expr).alias()
self.assert_compile(
- select([stmt.c.sum, func.row_number().over(order_by=stmt.c.sum)]),
+ select(stmt.c.sum, func.row_number().over(order_by=stmt.c.sum)),
"SELECT anon_1.sum, row_number() OVER (ORDER BY anon_1.sum) "
"AS anon_2 FROM (SELECT mytable.myid + :myid_1 AS sum "
"FROM mytable) AS anon_1",
# in the columns clause; doesn't resolve
expr = (table1.c.myid + 5).label("sum")
self.assert_compile(
- select([expr, func.row_number().over(order_by=expr)]),
+ select(expr, func.row_number().over(order_by=expr)),
"SELECT mytable.myid + :myid_1 AS sum, "
"row_number() OVER "
"(ORDER BY mytable.myid + :myid_1) AS anon_1 FROM mytable",
expr = table1.c.myid
self.assert_compile(
- select([func.row_number().over(order_by=expr, rows=(0, None))]),
+ select(func.row_number().over(order_by=expr, rows=(0, None))),
"SELECT row_number() OVER "
"(ORDER BY mytable.myid ROWS BETWEEN CURRENT "
"ROW AND UNBOUNDED FOLLOWING)"
)
self.assert_compile(
- select([func.row_number().over(order_by=expr, rows=(None, None))]),
+ select(func.row_number().over(order_by=expr, rows=(None, None))),
"SELECT row_number() OVER "
"(ORDER BY mytable.myid ROWS BETWEEN UNBOUNDED "
"PRECEDING AND UNBOUNDED FOLLOWING)"
)
self.assert_compile(
- select([func.row_number().over(order_by=expr, range_=(None, 0))]),
+ select(func.row_number().over(order_by=expr, range_=(None, 0))),
"SELECT row_number() OVER "
"(ORDER BY mytable.myid RANGE BETWEEN "
"UNBOUNDED PRECEDING AND CURRENT ROW)"
)
self.assert_compile(
- select([func.row_number().over(order_by=expr, range_=(-5, 10))]),
+ select(func.row_number().over(order_by=expr, range_=(-5, 10))),
"SELECT row_number() OVER "
"(ORDER BY mytable.myid RANGE BETWEEN "
":param_1 PRECEDING AND :param_2 FOLLOWING)"
)
self.assert_compile(
- select([func.row_number().over(order_by=expr, range_=(1, 10))]),
+ select(func.row_number().over(order_by=expr, range_=(1, 10))),
"SELECT row_number() OVER "
"(ORDER BY mytable.myid RANGE BETWEEN "
":param_1 FOLLOWING AND :param_2 FOLLOWING)"
)
self.assert_compile(
- select([func.row_number().over(order_by=expr, range_=(-10, -1))]),
+ select(func.row_number().over(order_by=expr, range_=(-10, -1))),
"SELECT row_number() OVER "
"(ORDER BY mytable.myid RANGE BETWEEN "
":param_1 PRECEDING AND :param_2 PRECEDING)"
from sqlalchemy import within_group
stmt = select(
- [
- table1.c.myid,
- within_group(
- func.percentile_cont(0.5), table1.c.name.desc()
- ).over(
- range_=(1, 2),
- partition_by=table1.c.name,
- order_by=table1.c.myid,
- ),
- ]
+ table1.c.myid,
+ within_group(func.percentile_cont(0.5), table1.c.name.desc()).over(
+ range_=(1, 2),
+ partition_by=table1.c.name,
+ order_by=table1.c.myid,
+ ),
)
eq_ignore_whitespace(
str(stmt),
)
stmt = select(
- [
- table1.c.myid,
- within_group(
- func.percentile_cont(0.5), table1.c.name.desc()
- ).over(
- rows=(1, 2),
- partition_by=table1.c.name,
- order_by=table1.c.myid,
- ),
- ]
+ table1.c.myid,
+ within_group(func.percentile_cont(0.5), table1.c.name.desc()).over(
+ rows=(1, 2),
+ partition_by=table1.c.name,
+ order_by=table1.c.myid,
+ ),
)
eq_ignore_whitespace(
str(stmt),
table = Table("dt", metadata, Column("date", Date))
self.assert_compile(
- table.select(
+ table.select().where(
table.c.date.between(
datetime.date(2006, 6, 1), datetime.date(2006, 6, 5)
)
)
self.assert_compile(
- table.select(
+ table.select().where(
sql.between(
table.c.date,
datetime.date(2006, 6, 1),
def test_delayed_col_naming(self):
my_str = Column(String)
- sel1 = select([my_str])
+ sel1 = select(my_str)
assert_raises_message(
exc.InvalidRequestError,
# calling label or scalar_subquery doesn't compile
# anything.
- sel2 = select([func.substr(my_str, 2, 3)]).label("my_substr")
+ sel2 = select(func.substr(my_str, 2, 3)).label("my_substr")
assert_raises_message(
exc.CompileError,
dialect=default.DefaultDialect(),
)
- sel3 = select([my_str]).scalar_subquery()
+ sel3 = select(my_str).scalar_subquery()
assert_raises_message(
exc.CompileError,
"Cannot compile Column object until its 'name' is assigned.",
f1 = func.hoho(table1.c.name)
s1 = select(
- [
- table1.c.myid,
- table1.c.myid.label("foobar"),
- f1,
- func.lala(table1.c.name).label("gg"),
- ]
+ table1.c.myid,
+ table1.c.myid.label("foobar"),
+ f1,
+ func.lala(table1.c.name).label("gg"),
)
eq_(list(s1.subquery().c.keys()), ["myid", "foobar", str(f1), "gg"])
else:
t = table1
- s1 = select([col], from_obj=t)
+ s1 = select(col).select_from(t)
assert list(s1.subquery().c.keys()) == [key], list(s1.c.keys())
if lbl:
else:
self.assert_compile(s1, "SELECT %s FROM mytable" % (expr,))
- s1 = select([s1.subquery()])
+ s1 = select(s1.subquery())
if lbl:
alias_ = "anon_2" if lbl == "anon_1" else "anon_1"
self.assert_compile(
)
def test_hints(self):
- s = select([table1.c.myid]).with_hint(table1, "test hint %(name)s")
+ s = select(table1.c.myid).with_hint(table1, "test hint %(name)s")
s2 = (
- select([table1.c.myid])
+ select(table1.c.myid)
.with_hint(table1, "index(%(name)s idx)", "oracle")
.with_hint(table1, "WITH HINT INDEX idx", "sybase")
)
a1 = table1.alias()
- s3 = select([a1.c.myid]).with_hint(a1, "index(%(name)s hint)")
+ s3 = select(a1.c.myid).with_hint(a1, "index(%(name)s hint)")
subs4 = (
- select([table1, table2])
+ select(table1, table2)
.select_from(
table1.join(table2, table1.c.myid == table2.c.otherid)
)
).subquery()
s4 = (
- select([table3])
+ select(table3)
.select_from(
table3.join(subs4, subs4.c.othername == table3.c.otherstuff)
)
t1 = table("QuotedName", column("col1"))
s6 = (
- select([t1.c.col1])
+ select(t1.c.col1)
.where(t1.c.col1 > 10)
.with_hint(t1, "%(name)s idx1")
)
a2 = t1.alias("SomeName")
s7 = (
- select([a2.c.col1])
+ select(a2.c.col1)
.where(a2.c.col1 > 10)
.with_hint(a2, "%(name)s idx1")
)
def test_statement_hints(self):
stmt = (
- select([table1.c.myid])
+ select(table1.c.myid)
.with_statement_hint("test hint one")
.with_statement_hint("test hint two", "mysql")
)
expected_test_params_list,
) in [
(
- select(
- [table1, table2],
+ select(table1, table2).where(
and_(
table1.c.myid == table2.c.otherid,
table1.c.name == bindparam("mytablename"),
[5],
),
(
- select(
- [table1],
+ select(table1).where(
or_(
table1.c.myid == bindparam("myid"),
table2.c.otherid == bindparam("myid"),
[5, 5],
),
(
- select(
- [table1],
+ select(table1).where(
or_(
table1.c.myid == bindparam("myid", unique=True),
table2.c.otherid == bindparam("myid", unique=True),
(
# testing select.params() here - bindparam() objects
# must get required flag set to False
- select(
- [table1],
+ select(table1)
+ .where(
or_(
table1.c.myid == bindparam("myid"),
table2.c.otherid == bindparam("myotherid"),
),
- ).params({"myid": 8, "myotherid": 7}),
+ )
+ .params({"myid": 8, "myotherid": 7}),
"SELECT mytable.myid, mytable.name, mytable.description FROM "
"mytable, myothertable WHERE mytable.myid = "
":myid OR myothertable.otherid = :myotherid",
[5, 7],
),
(
- select(
- [table1],
+ select(table1).where(
or_(
table1.c.myid
== bindparam("myid", value=7, unique=True),
)
# check that params() doesn't modify original statement
- s = select(
- [table1],
+ s = select(table1).where(
or_(
table1.c.myid == bindparam("myid"),
table2.c.otherid == bindparam("myotherid"),
assert s3.compile().params == {"myid": 9, "myotherid": 7}
# test using same 'unique' param object twice in one compile
- s = (
- select([table1.c.myid])
- .where(table1.c.myid == 12)
- .scalar_subquery()
- )
- s2 = select([table1, s], table1.c.myid == s)
+ s = select(table1.c.myid).where(table1.c.myid == 12).scalar_subquery()
+ s2 = select(table1, s).where(table1.c.myid == s)
self.assert_compile(
s2,
"SELECT mytable.myid, mytable.name, mytable.description, "
assert [pp[k] for k in positional.positiontup] == [12, 12]
# check that conflicts with "unique" params are caught
- s = select(
- [table1],
+ s = select(table1).where(
or_(table1.c.myid == 7, table1.c.myid == bindparam("myid_1")),
)
assert_raises_message(
s,
)
- s = select(
- [table1],
+ s = select(table1).where(
or_(
table1.c.myid == 7,
table1.c.myid == 8,
l = c.label(None)
# new case as of Id810f485c5f7ed971529489b84694e02a3356d6d
- subq = select([l]).subquery()
+ subq = select(l).subquery()
# this creates a ColumnClause as a proxy to the Label() that has
# an anonymous name, so the column has one too.
def test_bind_as_col(self):
t = table("foo", column("id"))
- s = select([t, literal("lala").label("hoho")])
+ s = select(t, literal("lala").label("hoho"))
self.assert_compile(s, "SELECT foo.id, :param_1 AS hoho FROM foo")
assert [str(c) for c in s.subquery().c] == ["anon_1.id", "anon_1.hoho"]
assert_raises_message(
exc.InvalidRequestError,
r"A value is required for bind parameter 'x'",
- select([table1])
+ select(table1)
.where(
and_(
table1.c.myid == bindparam("x", required=True),
assert_raises_message(
exc.InvalidRequestError,
r"A value is required for bind parameter 'x'",
- select([table1])
+ select(table1)
.where(table1.c.myid == bindparam("x", required=True))
.compile()
.construct_params,
exc.InvalidRequestError,
r"A value is required for bind parameter 'x', "
"in parameter group 2",
- select([table1])
+ select(table1)
.where(
and_(
table1.c.myid == bindparam("x", required=True),
exc.InvalidRequestError,
r"A value is required for bind parameter 'x', "
"in parameter group 2",
- select([table1])
+ select(table1)
.where(table1.c.myid == bindparam("x", required=True))
.compile()
.construct_params,
@testing.combinations(
(
- select([table1]).where(table1.c.myid == 5),
- select([table1]).where(table1.c.myid == 10),
+ select(table1).where(table1.c.myid == 5),
+ select(table1).where(table1.c.myid == 10),
{"myid_1": 5},
{"myid_1": 10},
None,
None,
),
(
- select([table1]).where(
+ select(table1).where(
table1.c.myid
== bindparam(None, unique=True, callable_=lambda: 5)
),
- select([table1]).where(
+ select(table1).where(
table1.c.myid
== bindparam(None, unique=True, callable_=lambda: 10)
),
),
(
union(
- select([table1]).where(table1.c.myid == 5),
- select([table1]).where(table1.c.myid == 12),
+ select(table1).where(table1.c.myid == 5),
+ select(table1).where(table1.c.myid == 12),
),
union(
- select([table1]).where(table1.c.myid == 5),
- select([table1]).where(table1.c.myid == 15),
+ select(table1).where(table1.c.myid == 5),
+ select(table1).where(table1.c.myid == 15),
),
{"myid_1": 5, "myid_2": 12},
{"myid_1": 5, "myid_2": 15},
"""
- stmt = select([table1.c.myid]).where(table1.c.myid == 5)
+ stmt = select(table1.c.myid).where(table1.c.myid == 5)
# get the original bindparam.
original_bind = stmt._where_criteria[0].right
)
# now make a totally new statement with the same cache key
- new_stmt = select([table1.c.myid]).where(table1.c.myid == 10)
+ new_stmt = select(table1.c.myid).where(table1.c.myid == 10)
new_cache_key = new_stmt._generate_cache_key()
# cache keys match
"""
- stmt = select([table1.c.myid]).where(table1.c.myid == 5)
+ stmt = select(table1.c.myid).where(table1.c.myid == 5)
original_bind = stmt._where_criteria[0].right
# it's anonymous so unique=True
# make a new statement that uses the clones as distinct
# parameters
- modified_stmt = select([table1.c.myid]).where(
+ modified_stmt = select(table1.c.myid).where(
or_(table1.c.myid == b1, table1.c.myid == b2)
)
# make a new statement doing the same thing and make sure
# the binds match up correctly
- new_stmt = select([table1.c.myid]).where(table1.c.myid == 8)
+ new_stmt = select(table1.c.myid).where(table1.c.myid == 8)
new_original_bind = new_stmt._where_criteria[0].right
new_b1 = new_original_bind._clone()
new_b1.value = 20
new_b2 = new_original_bind._clone()
new_b2.value = 18
- modified_new_stmt = select([table1.c.myid]).where(
+ modified_new_stmt = select(table1.c.myid).where(
or_(table1.c.myid == new_b1, table1.c.myid == new_b2)
)
self.assert_compile(
tuple_(table1.c.myid, table1.c.name).in_(
- select([table2.c.otherid, table2.c.othername])
+ select(table2.c.otherid, table2.c.othername)
),
"(mytable.myid, mytable.name) IN (SELECT "
"myothertable.otherid, myothertable.othername FROM myothertable)",
)
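+ # note: a select() may be passed directly to tuple_(...).in_(); it is
+ # coerced to a subquery when the IN expression is rendered.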
def test_limit_offset_select_literal_binds(self):
- stmt = select([1]).limit(5).offset(6)
+ stmt = select(1).limit(5).offset(6)
self.assert_compile(
stmt, "SELECT 1 LIMIT 5 OFFSET 6", literal_binds=True
)
def test_limit_offset_compound_select_literal_binds(self):
- stmt = select([1]).union(select([2])).limit(5).offset(6)
+ stmt = select(1).union(select(2)).limit(5).offset(6)
self.assert_compile(
stmt,
"SELECT 1 UNION SELECT 2 LIMIT 5 OFFSET 6",
def test_multiple_col_binds(self):
self.assert_compile(
- select(
- [literal_column("*")],
+ select(literal_column("*")).where(
or_(
table1.c.myid == 12,
table1.c.myid == "asdf",
@testing.combinations(
(
"one",
- select([literal("someliteral")]),
+ select(literal("someliteral")),
"SELECT [POSTCOMPILE_param_1] AS anon_1",
dict(
check_literal_execute={"param_1": "someliteral"},
),
(
"two",
- select([table1.c.myid + 3]),
+ select(table1.c.myid + 3),
"SELECT mytable.myid + [POSTCOMPILE_myid_1] "
"AS anon_1 FROM mytable",
dict(check_literal_execute={"myid_1": 3}, check_post_param={}),
),
(
"three",
- select([table1.c.myid.in_([4, 5, 6])]),
+ select(table1.c.myid.in_([4, 5, 6])),
"SELECT mytable.myid IN ([POSTCOMPILE_myid_1]) "
"AS anon_1 FROM mytable",
dict(
),
(
"four",
- select([func.mod(table1.c.myid, 5)]),
+ select(func.mod(table1.c.myid, 5)),
"SELECT mod(mytable.myid, [POSTCOMPILE_mod_2]) "
"AS mod_1 FROM mytable",
dict(check_literal_execute={"mod_2": 5}, check_post_param={}),
),
(
"five",
- select([literal("foo").in_([])]),
+ select(literal("foo").in_([])),
"SELECT [POSTCOMPILE_param_1] IN ([POSTCOMPILE_param_2]) "
"AS anon_1",
dict(
),
(
"six",
- select([literal(util.b("foo"))]),
+ select(literal(util.b("foo"))),
"SELECT [POSTCOMPILE_param_1] AS anon_1",
dict(
check_literal_execute={"param_1": util.b("foo")},
),
(
"seven",
- select([table1.c.myid == bindparam("foo", callable_=lambda: 5)]),
+ select(table1.c.myid == bindparam("foo", callable_=lambda: 5)),
"SELECT mytable.myid = [POSTCOMPILE_foo] AS anon_1 FROM mytable",
dict(check_literal_execute={"foo": 5}, check_post_param={}),
),
def test_render_literal_execute_parameter(self):
self.assert_compile(
- select([table1.c.myid]).where(
+ select(table1.c.myid).where(
table1.c.myid == bindparam("foo", 5, literal_execute=True)
),
"SELECT mytable.myid FROM mytable "
def test_render_literal_execute_parameter_literal_binds(self):
self.assert_compile(
- select([table1.c.myid]).where(
+ select(table1.c.myid).where(
table1.c.myid == bindparam("foo", 5, literal_execute=True)
),
"SELECT mytable.myid FROM mytable " "WHERE mytable.myid = 5",
def test_render_literal_execute_parameter_render_postcompile(self):
self.assert_compile(
- select([table1.c.myid]).where(
+ select(table1.c.myid).where(
table1.c.myid == bindparam("foo", 5, literal_execute=True)
),
"SELECT mytable.myid FROM mytable " "WHERE mytable.myid = 5",
def test_render_expanding_parameter(self):
self.assert_compile(
- select([table1.c.myid]).where(
+ select(table1.c.myid).where(
table1.c.myid.in_(bindparam("foo", expanding=True))
),
"SELECT mytable.myid FROM mytable "
def test_render_expanding_parameter_literal_binds(self):
self.assert_compile(
- select([table1.c.myid]).where(
+ select(table1.c.myid).where(
table1.c.myid.in_(bindparam("foo", [1, 2, 3], expanding=True))
),
"SELECT mytable.myid FROM mytable "
# parameters on the fly.
self.assert_compile(
- select([table1.c.myid]).where(
+ select(table1.c.myid).where(
table1.c.myid.in_(bindparam("foo", [1, 2, 3], expanding=True))
),
"SELECT mytable.myid FROM mytable "
class StringifySpecialTest(fixtures.TestBase):
def test_basic(self):
- stmt = select([table1]).where(table1.c.myid == 10)
+ stmt = select(table1).where(table1.c.myid == 10)
eq_ignore_whitespace(
str(stmt),
"SELECT mytable.myid, mytable.name, mytable.description "
def test_cte(self):
# stringification of these was supported anyway by the default dialect.
- stmt = select([table1.c.myid]).cte()
- stmt = select([stmt])
+ stmt = select(table1.c.myid).cte()
+ stmt = select(stmt)
eq_ignore_whitespace(
str(stmt),
"WITH anon_1 AS (SELECT mytable.myid AS myid FROM mytable) "
)
def test_array_index(self):
- stmt = select([column("foo", types.ARRAY(Integer))[5]])
+ stmt = select(column("foo", types.ARRAY(Integer))[5])
eq_ignore_whitespace(str(stmt), "SELECT foo[:foo_1] AS anon_1")
class MyType(types.TypeEngine):
__visit_name__ = "mytype"
- stmt = select([cast(table1.c.myid, MyType)])
+ stmt = select(cast(table1.c.myid, MyType))
eq_ignore_whitespace(
str(stmt),
from sqlalchemy import within_group
stmt = select(
- [
- table1.c.myid,
- within_group(func.percentile_cont(0.5), table1.c.name.desc()),
- ]
+ table1.c.myid,
+ within_group(func.percentile_cont(0.5), table1.c.name.desc()),
)
eq_ignore_whitespace(
str(stmt),
def test_with_hint_table(self):
stmt = (
- select([table1])
+ select(table1)
.select_from(
table1.join(table2, table1.c.myid == table2.c.otherid)
)
def test_with_hint_statement(self):
stmt = (
- select([table1])
+ select(table1)
.select_from(
table1.join(table2, table1.c.myid == table2.c.otherid)
)
def test_select(self):
s = (
- select([self.column])
+ select(self.column)
.select_from(self.table)
.where(self.column == self.criterion)
.order_by(self.column)
def test_embedded_element_true_to_none(self):
stmt = table1.insert().cte()
eq_(stmt._execution_options, {"autocommit": True})
- s2 = select([table1]).select_from(stmt)
+ s2 = select(table1).select_from(stmt)
eq_(s2._execution_options, {})
compiled = s2.compile()
stmt = table1.insert().cte()
eq_(stmt._execution_options, {"autocommit": True})
s2 = (
- select([table1])
+ select(table1)
.select_from(stmt)
.execution_options(autocommit=False)
)
)
self.assert_compile(
- table4.select(
+ table4.select().where(
and_(table4.c.datatype_id == 7, table4.c.value == "hi")
),
"SELECT remote_owner.remotetable.rem_id, "
" remote_owner.remotetable.value = :value_1",
)
- s = table4.select(
- and_(table4.c.datatype_id == 7, table4.c.value == "hi"),
- use_labels=True,
+ s = (
+ table4.select()
+ .where(and_(table4.c.datatype_id == 7, table4.c.value == "hi"))
+ .apply_labels()
)
self.assert_compile(
s,
# multi-part schema name labels - convert '.' to '_'
self.assert_compile(
- table5.select(use_labels=True),
+ table5.select().apply_labels(),
'SELECT "dbo.remote_owner".remotetable.rem_id AS'
" dbo_remote_owner_remotetable_rem_id, "
'"dbo.remote_owner".remotetable.datatype_id'
schema_translate_map = {"remote_owner": "foob"}
self.assert_compile(
- select([table1, table4]).select_from(
+ select(table1, table4).select_from(
join(table1, table4, table1.c.myid == table4.c.rem_id)
),
"SELECT mytable.myid, mytable.name, mytable.description, "
alias = table1.alias()
stmt = (
- select([table2, alias])
+ select(table2, alias)
.select_from(table2.join(alias, table2.c.otherid == alias.c.myid))
.where(alias.c.name == "foo")
)
def test_alias(self):
a = alias(table4, "remtable")
self.assert_compile(
- a.select(a.c.datatype_id == 7),
+ a.select().where(a.c.datatype_id == 7),
"SELECT remtable.rem_id, remtable.datatype_id, "
"remtable.value FROM"
" remote_owner.remotetable AS remtable "
def test_update(self):
self.assert_compile(
- table4.update(
- table4.c.value == "test", values={table4.c.datatype_id: 12}
- ),
+ table4.update()
+ .where(table4.c.value == "test")
+ .values({table4.c.datatype_id: 12}),
"UPDATE remote_owner.remotetable SET datatype_id=:datatype_id "
"WHERE remote_owner.remotetable.value = :value_1",
)
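+ # note: update(whereclause, values=...) splits into generative
+ # .where() and .values() calls, as does insert(values=...) just below;
+ # the rendered statements are unchanged.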
def test_insert(self):
self.assert_compile(
- table4.insert(values=(2, 5, "test")),
+ table4.insert().values((2, 5, "test")),
"INSERT INTO remote_owner.remotetable "
"(rem_id, datatype_id, value) VALUES "
"(:rem_id, :datatype_id, :value)",
# test that "schema" works correctly when passed to table
t1 = table("foo", column("a"), column("b"), schema="bar")
self.assert_compile(
- select([t1]).select_from(t1),
+ select(t1).select_from(t1),
"SELECT bar.foo.a, bar.foo.b FROM bar.foo",
)
# test alias behavior
t1 = table("foo", schema="bar")
self.assert_compile(
- select(["*"]).select_from(t1.alias("t")),
+ select("*").select_from(t1.alias("t")),
"SELECT * FROM bar.foo AS t",
)
)
self.assert_compile(
- select([t1]).select_from(t1).apply_labels(),
+ select(t1).select_from(t1).apply_labels(),
"SELECT here.baz.id AS here_baz_id, here.baz.name AS "
"here_baz_name, here.baz.meta AS here_baz_meta FROM here.baz",
)
.columns(id=Integer, name=String)
.subquery()
)
- stmt = select([t1.c.anotherid]).select_from(
+ stmt = select(t1.c.anotherid).select_from(
t1.join(s, t1.c.anotherid == s.c.id)
)
compiled = stmt.compile()
def test_dont_overcorrelate(self):
self.assert_compile(
- select([table1])
+ select(table1)
.select_from(table1)
.select_from(table1.select().subquery()),
"SELECT mytable.myid, mytable.name, "
def _fixture(self):
t1 = table("t1", column("a"))
t2 = table("t2", column("a"))
- return t1, t2, select([t1]).where(t1.c.a == t2.c.a)
+ return t1, t2, select(t1).where(t1.c.a == t2.c.a)
def _assert_where_correlated(self, stmt):
self.assert_compile(
def test_correlate_semiauto_where(self):
t1, t2, s1 = self._fixture()
self._assert_where_correlated(
- select([t2]).where(t2.c.a == s1.correlate(t2).scalar_subquery())
+ select(t2).where(t2.c.a == s1.correlate(t2).scalar_subquery())
)
def test_correlate_semiauto_column(self):
t1, t2, s1 = self._fixture()
self._assert_column_correlated(
- select([t2, s1.correlate(t2).scalar_subquery()])
+ select(t2, s1.correlate(t2).scalar_subquery())
)
def test_correlate_semiauto_from(self):
t1, t2, s1 = self._fixture()
- self._assert_from_uncorrelated(select([t2, s1.correlate(t2).alias()]))
+ self._assert_from_uncorrelated(select(t2, s1.correlate(t2).alias()))
def test_correlate_semiauto_having(self):
t1, t2, s1 = self._fixture()
self._assert_having_correlated(
- select([t2]).having(t2.c.a == s1.correlate(t2).scalar_subquery())
+ select(t2).having(t2.c.a == s1.correlate(t2).scalar_subquery())
)
def test_correlate_except_inclusion_where(self):
t1, t2, s1 = self._fixture()
self._assert_where_correlated(
- select([t2]).where(
+ select(t2).where(
t2.c.a == s1.correlate_except(t1).scalar_subquery()
)
)
def test_correlate_except_exclusion_where(self):
t1, t2, s1 = self._fixture()
self._assert_where_uncorrelated(
- select([t2]).where(
+ select(t2).where(
t2.c.a == s1.correlate_except(t2).scalar_subquery()
)
)
def test_correlate_except_inclusion_column(self):
t1, t2, s1 = self._fixture()
self._assert_column_correlated(
- select([t2, s1.correlate_except(t1).scalar_subquery()])
+ select(t2, s1.correlate_except(t1).scalar_subquery())
)
def test_correlate_except_exclusion_column(self):
t1, t2, s1 = self._fixture()
self._assert_column_uncorrelated(
- select([t2, s1.correlate_except(t2).scalar_subquery()])
+ select(t2, s1.correlate_except(t2).scalar_subquery())
)
def test_correlate_except_inclusion_from(self):
t1, t2, s1 = self._fixture()
self._assert_from_uncorrelated(
- select([t2, s1.correlate_except(t1).alias()])
+ select(t2, s1.correlate_except(t1).alias())
)
def test_correlate_except_exclusion_from(self):
t1, t2, s1 = self._fixture()
self._assert_from_uncorrelated(
- select([t2, s1.correlate_except(t2).alias()])
+ select(t2, s1.correlate_except(t2).alias())
)
def test_correlate_except_none(self):
t1, t2, s1 = self._fixture()
self._assert_where_all_correlated(
- select([t1, t2]).where(
+ select(t1, t2).where(
t2.c.a == s1.correlate_except(None).scalar_subquery()
)
)
def test_correlate_except_having(self):
t1, t2, s1 = self._fixture()
self._assert_having_correlated(
- select([t2]).having(
+ select(t2).having(
t2.c.a == s1.correlate_except(t1).scalar_subquery()
)
)
def test_correlate_auto_where(self):
t1, t2, s1 = self._fixture()
self._assert_where_correlated(
- select([t2]).where(t2.c.a == s1.scalar_subquery())
+ select(t2).where(t2.c.a == s1.scalar_subquery())
)
def test_correlate_auto_column(self):
t1, t2, s1 = self._fixture()
- self._assert_column_correlated(select([t2, s1.scalar_subquery()]))
+ self._assert_column_correlated(select(t2, s1.scalar_subquery()))
def test_correlate_auto_from(self):
t1, t2, s1 = self._fixture()
- self._assert_from_uncorrelated(select([t2, s1.alias()]))
+ self._assert_from_uncorrelated(select(t2, s1.alias()))
def test_correlate_auto_having(self):
t1, t2, s1 = self._fixture()
self._assert_having_correlated(
- select([t2]).having(t2.c.a == s1.scalar_subquery())
+ select(t2).having(t2.c.a == s1.scalar_subquery())
)
def test_correlate_disabled_where(self):
t1, t2, s1 = self._fixture()
self._assert_where_uncorrelated(
- select([t2]).where(t2.c.a == s1.correlate(None).scalar_subquery())
+ select(t2).where(t2.c.a == s1.correlate(None).scalar_subquery())
)
def test_correlate_disabled_column(self):
t1, t2, s1 = self._fixture()
self._assert_column_uncorrelated(
- select([t2, s1.correlate(None).scalar_subquery()])
+ select(t2, s1.correlate(None).scalar_subquery())
)
def test_correlate_disabled_from(self):
t1, t2, s1 = self._fixture()
- self._assert_from_uncorrelated(
- select([t2, s1.correlate(None).alias()])
- )
+ self._assert_from_uncorrelated(select(t2, s1.correlate(None).alias()))
def test_correlate_disabled_having(self):
t1, t2, s1 = self._fixture()
self._assert_having_uncorrelated(
- select([t2]).having(t2.c.a == s1.correlate(None).scalar_subquery())
+ select(t2).having(t2.c.a == s1.correlate(None).scalar_subquery())
)
def test_correlate_all_where(self):
t1, t2, s1 = self._fixture()
self._assert_where_all_correlated(
- select([t1, t2]).where(
+ select(t1, t2).where(
t2.c.a == s1.correlate(t1, t2).scalar_subquery()
)
)
def test_correlate_all_column(self):
t1, t2, s1 = self._fixture()
self._assert_column_all_correlated(
- select([t1, t2, s1.correlate(t1, t2).scalar_subquery()])
+ select(t1, t2, s1.correlate(t1, t2).scalar_subquery())
)
def test_correlate_all_from(self):
t1, t2, s1 = self._fixture()
self._assert_from_all_uncorrelated(
- select([t1, t2, s1.correlate(t1, t2).alias()])
+ select(t1, t2, s1.correlate(t1, t2).alias())
)
def test_correlate_where_all_unintentional(self):
assert_raises_message(
exc.InvalidRequestError,
"returned no FROM clauses due to auto-correlation",
- select([t1, t2]).where(t2.c.a == s1.scalar_subquery()).compile,
+ select(t1, t2).where(t2.c.a == s1.scalar_subquery()).compile,
)
def test_correlate_from_all_ok(self):
t1, t2, s1 = self._fixture()
self.assert_compile(
- select([t1, t2, s1.subquery()]),
+ select(t1, t2, s1.subquery()),
"SELECT t1.a, t2.a, anon_1.a FROM t1, t2, "
"(SELECT t1.a AS a FROM t1, t2 WHERE t1.a = t2.a) AS anon_1",
)
def test_correlate_auto_where_singlefrom(self):
t1, t2, s1 = self._fixture()
- s = select([t1.c.a])
- s2 = select([t1]).where(t1.c.a == s.scalar_subquery())
+ s = select(t1.c.a)
+ s2 = select(t1).where(t1.c.a == s.scalar_subquery())
self.assert_compile(
s2, "SELECT t1.a FROM t1 WHERE t1.a = " "(SELECT t1.a FROM t1)"
)
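+ # note: auto-correlation never strips a subquery's only FROM element,
+ # which is why the inner SELECT above retains "FROM t1".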
def test_correlate_semiauto_where_singlefrom(self):
t1, t2, s1 = self._fixture()
- s = select([t1.c.a])
+ s = select(t1.c.a)
- s2 = select([t1]).where(t1.c.a == s.correlate(t1).scalar_subquery())
+ s2 = select(t1).where(t1.c.a == s.correlate(t1).scalar_subquery())
self._assert_where_single_full_correlated(s2)
def test_correlate_except_semiauto_where_singlefrom(self):
t1, t2, s1 = self._fixture()
- s = select([t1.c.a])
+ s = select(t1.c.a)
- s2 = select([t1]).where(
+ s2 = select(t1).where(
t1.c.a == s.correlate_except(t2).scalar_subquery()
)
self._assert_where_single_full_correlated(s2)
# new as of #2748
t1 = table("t1", column("a"))
t2 = table("t2", column("a"), column("b"))
- s = select([t2.c.b]).where(t1.c.a == t2.c.a)
+ s = select(t2.c.b).where(t1.c.a == t2.c.a)
s = s.correlate_except(t2).alias("s")
- s2 = select([func.foo(s.c.b)]).scalar_subquery()
- s3 = select([t1], order_by=s2)
+ s2 = select(func.foo(s.c.b)).scalar_subquery()
+ s3 = select(t1).order_by(s2)
self.assert_compile(
s3,
s = s.correlate(p).subquery()
s = exists().select_from(s).where(s.c.id == 1)
- s = select([p]).where(s)
+ s = select(p).where(s)
self.assert_compile(
s,
"SELECT parent.id FROM parent WHERE EXISTS (SELECT * "
t3 = table("t3", column("z"))
s = (
- select([t1])
+ select(t1)
.where(t1.c.x == t2.c.y)
.where(t2.c.y == t3.c.z)
.correlate_except(t1)
t2 = table("t2", column("y"))
t3 = table("t3", column("z"))
- s = select([t1.c.x]).where(t1.c.x == t2.c.y)
- s2 = select([t3.c.z]).where(t3.c.z == s.scalar_subquery())
- s3 = select([t1]).where(t1.c.x == s2.scalar_subquery())
+ s = select(t1.c.x).where(t1.c.x == t2.c.y)
+ s2 = select(t3.c.z).where(t3.c.z == s.scalar_subquery())
+ s3 = select(t1).where(t1.c.x == s2.scalar_subquery())
self.assert_compile(
s3,
t1 = table("t1", column("x"))
t2 = table("t2", column("y"))
- s = select([t1.c.x]).where(t1.c.x == t2.c.y)
- s2 = select([t2, s.subquery()])
- s3 = select([t1, s2.subquery()])
+ s = select(t1.c.x).where(t1.c.x == t2.c.y)
+ s2 = select(t2, s.subquery())
+ s3 = select(t1, s2.subquery())
self.assert_compile(
s3,
def test_coerce_bool_where(self):
self.assert_compile(
- select([self.bool_table]).where(self.bool_table.c.x),
+ select(self.bool_table).where(self.bool_table.c.x),
"SELECT t.x FROM t WHERE t.x",
)
def test_coerce_bool_where_non_native(self):
self.assert_compile(
- select([self.bool_table]).where(self.bool_table.c.x),
+ select(self.bool_table).where(self.bool_table.c.x),
"SELECT t.x FROM t WHERE t.x = 1",
dialect=default.DefaultDialect(supports_native_boolean=False),
)
self.assert_compile(
- select([self.bool_table]).where(~self.bool_table.c.x),
+ select(self.bool_table).where(~self.bool_table.c.x),
"SELECT t.x FROM t WHERE t.x = 0",
dialect=default.DefaultDialect(supports_native_boolean=False),
)
def test_compound_populates(self):
t = Table("t", MetaData(), Column("a", Integer), Column("b", Integer))
- stmt = select([t]).union(select([t]))
+ stmt = select(t).union(select(t))
comp = stmt.compile()
eq_(
comp._create_result_map(),
def test_compound_not_toplevel_doesnt_populate(self):
t = Table("t", MetaData(), Column("a", Integer), Column("b", Integer))
- subq = select([t]).union(select([t])).subquery()
- stmt = select([t.c.a]).select_from(t.join(subq, t.c.a == subq.c.a))
+ subq = select(t).union(select(t)).subquery()
+ stmt = select(t.c.a).select_from(t.join(subq, t.c.a == subq.c.a))
comp = stmt.compile()
eq_(
comp._create_result_map(),
def test_compound_only_top_populates(self):
t = Table("t", MetaData(), Column("a", Integer), Column("b", Integer))
- stmt = select([t.c.a]).union(select([t.c.b]))
+ stmt = select(t.c.a).union(select(t.c.b))
comp = stmt.compile()
eq_(
comp._create_result_map(),
t = Table("t", MetaData(), Column("a", Integer))
l1 = t.c.a.label("bar")
tc = type_coerce(t.c.a + "str", String)
- stmt = select([t.c.a, l1, tc])
+ stmt = select(t.c.a, l1, tc)
comp = stmt.compile()
tc_anon_label = comp._create_result_map()["anon_1"][1][0]
eq_(
"t1", MetaData(), Column("a", Integer), Column("b", Integer)
)
t2 = Table("t2", MetaData(), Column("t1_a", Integer))
- union = select([t2]).union(select([t2])).alias()
+ union = select(t2).union(select(t2)).alias()
t1_alias = t1.alias()
stmt = (
- select([t1, t1_alias])
+ select(t1, t1_alias)
.select_from(t1.join(union, t1.c.a == union.c.t1_a))
.apply_labels()
)
stmt = (
t2.insert()
- .values(a=select([astring]).scalar_subquery())
+ .values(a=select(astring).scalar_subquery())
.returning(aint)
)
comp = stmt.compile(dialect=postgresql.dialect())
Table("t1", m, astring)
t2 = Table("t2", m, aint)
- stmt = (
- t2.insert().from_select(["a"], select([astring])).returning(aint)
- )
+ stmt = t2.insert().from_select(["a"], select(astring)).returning(aint)
comp = stmt.compile(dialect=postgresql.dialect())
eq_(
comp._create_result_map(),
def test_nested_api(self):
from sqlalchemy.engine.cursor import CursorResultMetaData
- stmt2 = select([table2]).subquery()
+ stmt2 = select(table2).subquery()
- stmt1 = select([table1]).select_from(stmt2)
+ stmt1 = select(table1).select_from(stmt2)
contexts = {}
l1, l2, l3 = t.c.z.label("a"), t.c.x.label("b"), t.c.x.label("c")
orig = [t.c.x, t.c.y, l1, l2, l3]
- stmt = select(orig)
+ stmt = select(*orig)
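+ # note: an existing list of columns must now be star-unpacked into
+ # select(), since the single-list calling convention is deprecated.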
wrapped = stmt._generate()
wrapped = wrapped.add_columns(
func.ROW_NUMBER().over(order_by=t.c.z)
).alias()
- wrapped_again = select([c for c in wrapped.c])
+ wrapped_again = select(*[c for c in wrapped.c])
dialect = default.DefaultDialect()
# create the statement with some duplicate columns. right now
# the behavior is that these redundant columns are deduped.
- stmt = select([t.c.x, t.c.y, l1, t.c.y, l2, t.c.x, l3])
+ stmt = select(t.c.x, t.c.y, l1, t.c.y, l2, t.c.x, l3)
# so the statement has 7 inner columns...
eq_(len(list(stmt.selected_columns)), 7)
).alias()
# so when we wrap here we're going to have only 5 columns
- wrapped_again = select([c for c in wrapped.c])
+ wrapped_again = select(*[c for c in wrapped.c])
# so the compiler logic that matches up the "wrapper" to the
# "select_wraps_for" can't use inner_columns to match because
from sqlalchemy import column
from sqlalchemy import create_engine
from sqlalchemy import exc
+from sqlalchemy import exists
from sqlalchemy import ForeignKey
from sqlalchemy import func
from sqlalchemy import INT
)
def test_select_of_select(self):
- stmt = select([self.table1.c.myid])
+ stmt = select(self.table1.c.myid)
with testing.expect_deprecated(
r"The SelectBase.select\(\) method is deprecated and will be "
"FROM mytable) AS anon_1",
)
- def test_join_of_select(self):
- stmt = select([self.table1.c.myid])
-
- with testing.expect_deprecated(
- r"The SelectBase.join\(\) method is deprecated and will be "
- "removed"
- ):
- self.assert_compile(
- stmt.join(
- self.table2, self.table2.c.otherid == self.table1.c.myid
- ),
- # note the SQL is wrong here as the subquery now has a name.
- # however, even SQLite which accepts unnamed subqueries in a
- # JOIN cannot actually join with how SQLAlchemy 1.3 and
- # earlier would render:
- # sqlite> select myid, otherid from (select myid from mytable)
- # join myothertable on mytable.myid=myothertable.otherid;
- # Error: no such column: mytable.myid
- # if using stmt.c.col, that fails often as well if there are
- # any naming overlaps:
- # sqlalchemy.exc.OperationalError: (sqlite3.OperationalError)
- # ambiguous column name: id
- # [SQL: SELECT id, data
- # FROM (SELECT a.id AS id, a.data AS data
- # FROM a) JOIN b ON b.a_id = id]
- # so that shows that nobody is using this anyway
- "(SELECT mytable.myid AS myid FROM mytable) AS anon_1 "
- "JOIN myothertable ON myothertable.otherid = mytable.myid",
- )
-
- def test_outerjoin_of_select(self):
- stmt = select([self.table1.c.myid])
-
- with testing.expect_deprecated(
- r"The SelectBase.outerjoin\(\) method is deprecated and will be "
- "removed"
- ):
- self.assert_compile(
- stmt.outerjoin(
- self.table2, self.table2.c.otherid == self.table1.c.myid
- ),
- # note the SQL is wrong here as the subquery now has a name
- "(SELECT mytable.myid AS myid FROM mytable) AS anon_1 "
- "LEFT OUTER JOIN myothertable "
- "ON myothertable.otherid = mytable.myid",
- )
-
def test_standalone_alias(self):
with testing.expect_deprecated(
"Implicit coercion of SELECT and textual SELECT constructs"
):
- stmt = alias(select([self.table1.c.myid]), "foo")
+ stmt = alias(select(self.table1.c.myid), "foo")
self.assert_compile(stmt, "SELECT mytable.myid FROM mytable")
is_true(
- stmt.compare(select([self.table1.c.myid]).subquery().alias("foo"))
+ stmt.compare(select(self.table1.c.myid).subquery().alias("foo"))
)
def test_as_scalar(self):
r"The SelectBase.as_scalar\(\) method is deprecated and "
"will be removed in a future release."
):
- stmt = select([self.table1.c.myid]).as_scalar()
+ stmt = select(self.table1.c.myid).as_scalar()
- is_true(stmt.compare(select([self.table1.c.myid]).scalar_subquery()))
+ is_true(stmt.compare(select(self.table1.c.myid).scalar_subquery()))
def test_as_scalar_from_subquery(self):
with testing.expect_deprecated(
r"The Subquery.as_scalar\(\) method, which was previously "
r"``Alias.as_scalar\(\)`` prior to version 1.4"
):
- stmt = select([self.table1.c.myid]).subquery().as_scalar()
+ stmt = select(self.table1.c.myid).subquery().as_scalar()
- is_true(stmt.compare(select([self.table1.c.myid]).scalar_subquery()))
+ is_true(stmt.compare(select(self.table1.c.myid).scalar_subquery()))
def test_fromclause_subquery(self):
- stmt = select([self.table1.c.myid])
+ stmt = select(self.table1.c.myid)
with testing.expect_deprecated(
"Implicit coercion of SELECT and textual SELECT constructs "
"into FROM clauses is deprecated"
):
element = coercions.expect(
roles.FromClauseRole,
- SelectStatementGrouping(select([self.table1])),
+ SelectStatementGrouping(select(self.table1)),
)
is_true(
element.compare(
- SelectStatementGrouping(select([self.table1])).subquery()
+ SelectStatementGrouping(select(self.table1)).subquery()
)
)
"Implicit coercion of SELECT and textual SELECT constructs "
"into FROM clauses is deprecated"
):
- stmt = select(["*"]).select_from(expr.select())
+ stmt = select("*").select_from(expr.select())
self.assert_compile(
stmt, "SELECT * FROM (SELECT rows(:rows_2) AS rows_1) AS anon_1"
)
users = table(
"users", column("id"), column("name"), column("fullname")
)
- calculate = select(
- [column("q"), column("z"), column("r")],
- from_obj=[
- func.calculate(bindparam("x", None), bindparam("y", None))
- ],
+ calculate = select(column("q"), column("z"), column("r")).select_from(
+ func.calculate(bindparam("x", None), bindparam("y", None))
)
with testing.expect_deprecated(
"deprecated and will be removed"
):
self.assert_compile(
- select([users], users.c.id > calculate.c.z),
+ select(users).where(users.c.id > calculate.c.z),
"SELECT users.id, users.name, users.fullname "
"FROM users, (SELECT q, z, r "
"FROM calculate(:x, :y)) AS anon_1 "
"deprecated"
)
+ def test_select_list_argument(self):
+
+ with testing.expect_deprecated_20(
+ r"The legacy calling style of select\(\) is deprecated "
+ "and will be removed in SQLAlchemy 2.0"
+ ):
+ stmt = select([column("q")])
+ self.assert_compile(stmt, "SELECT q")
+
+ def test_select_kw_argument(self):
+
+ with testing.expect_deprecated_20(
+ r"The legacy calling style of select\(\) is deprecated "
+ "and will be removed in SQLAlchemy 2.0"
+ ):
+ stmt = select(whereclause=column("q") == 5).add_columns(
+ column("q")
+ )
+ self.assert_compile(stmt, "SELECT q WHERE q = :q_1")
+
+ @testing.combinations(
+ (
+ lambda table1: table1.select(table1.c.col1 == 5),
+ "FromClause",
+ "whereclause",
+ "SELECT table1.col1, table1.col2, table1.col3, table1.colx "
+ "FROM table1 WHERE table1.col1 = :col1_1",
+ ),
+ (
+ lambda table1: table1.select(whereclause=table1.c.col1 == 5),
+ "FromClause",
+ "whereclause",
+ "SELECT table1.col1, table1.col2, table1.col3, table1.colx "
+ "FROM table1 WHERE table1.col1 = :col1_1",
+ ),
+ (
+ lambda table1: table1.select(order_by=table1.c.col1),
+ "FromClause",
+ "kwargs",
+ "SELECT table1.col1, table1.col2, table1.col3, table1.colx "
+ "FROM table1 ORDER BY table1.col1",
+ ),
+ (
+ lambda table1: exists().select(table1.c.col1 == 5),
+ "Exists",
+ "whereclause",
+ "SELECT EXISTS (SELECT *) AS anon_1 FROM table1 "
+ "WHERE table1.col1 = :col1_1",
+ ),
+ (
+ lambda table1: exists().select(whereclause=table1.c.col1 == 5),
+ "Exists",
+ "whereclause",
+ "SELECT EXISTS (SELECT *) AS anon_1 FROM table1 "
+ "WHERE table1.col1 = :col1_1",
+ ),
+ (
+ lambda table1: exists().select(
+ order_by=table1.c.col1, from_obj=table1
+ ),
+ "Exists",
+ "kwargs",
+ "SELECT EXISTS (SELECT *) AS anon_1 FROM table1 "
+ "ORDER BY table1.col1",
+ ),
+ (
+ lambda table1, table2: table1.join(table2).select(
+ table1.c.col1 == 5
+ ),
+ "Join",
+ "whereclause",
+ "SELECT table1.col1, table1.col2, table1.col3, table1.colx, "
+ "table2.col1, table2.col2, table2.col3, table2.coly FROM table1 "
+ "JOIN table2 ON table1.col1 = table2.col2 "
+ "WHERE table1.col1 = :col1_1",
+ ),
+ (
+ lambda table1, table2: table1.join(table2).select(
+ whereclause=table1.c.col1 == 5
+ ),
+ "Join",
+ "whereclause",
+ "SELECT table1.col1, table1.col2, table1.col3, table1.colx, "
+ "table2.col1, table2.col2, table2.col3, table2.coly FROM table1 "
+ "JOIN table2 ON table1.col1 = table2.col2 "
+ "WHERE table1.col1 = :col1_1",
+ ),
+ (
+ lambda table1, table2: table1.join(table2).select(
+ order_by=table1.c.col1
+ ),
+ "Join",
+ "kwargs",
+ "SELECT table1.col1, table1.col2, table1.col3, table1.colx, "
+ "table2.col1, table2.col2, table2.col3, table2.coly FROM table1 "
+ "JOIN table2 ON table1.col1 = table2.col2 "
+ "ORDER BY table1.col1",
+ ),
+ )
+ def test_select_method_parameters(
+ self, stmt, clsname, paramname, expected_sql
+ ):
+ if paramname == "whereclause":
+ warning_txt = (
+ r"The %s.select\(\).whereclause parameter is deprecated "
+ "and will be removed in version 2.0" % clsname
+ )
+ else:
+ warning_txt = (
+ r"The %s.select\(\) method will no longer accept "
+ "keyword arguments in version 2.0. " % clsname
+ )
+ with testing.expect_deprecated_20(
+ warning_txt,
+ r"The legacy calling style of select\(\) is deprecated "
+ "and will be removed in SQLAlchemy 2.0",
+ ):
+ stmt = testing.resolve_lambda(
+ stmt, table1=self.table1, table2=self.table2
+ )
+
+ self.assert_compile(stmt, expected_sql)
+
def test_deprecated_subquery_standalone(self):
from sqlalchemy import subquery
)
self.assert_compile(
- select([stmt]),
+ select(stmt),
"SELECT anon_1.a FROM (SELECT 1 AS a ORDER BY 1) AS anon_1",
)
def test_column(self):
- stmt = select([column("x")])
+ stmt = select(column("x"))
with testing.expect_deprecated(
r"The Select.column\(\) method is deprecated and will be "
"removed in a future release."
self.assert_compile(stmt, "SELECT x, q")
def test_append_column_after_replace_selectable(self):
- basesel = select([literal_column("1").label("a")])
+ basesel = select(literal_column("1").label("a"))
tojoin = select(
- [literal_column("1").label("a"), literal_column("2").label("b")]
+ literal_column("1").label("a"), literal_column("2").label("b")
)
basefrom = basesel.alias("basefrom")
joinfrom = tojoin.alias("joinfrom")
- sel = select([basefrom.c.a])
+ sel = select(basefrom.c.a)
with testing.expect_deprecated(
r"The Selectable.replace_selectable\(\) " "method is deprecated"
# test that corresponding column digs across
# clone boundaries with anonymous labeled elements
col = func.count().label("foo")
- sel = select([col])
+ sel = select(col)
sel2 = visitors.ReplacingCloningVisitor().traverse(sel)
with testing.expect_deprecated("The SelectBase.c"):
u = (
select(
- [
- self.table1.c.col1,
- self.table1.c.col2,
- self.table1.c.col3,
- self.table1.c.colx,
- null().label("coly"),
- ]
+ self.table1.c.col1,
+ self.table1.c.col2,
+ self.table1.c.col3,
+ self.table1.c.colx,
+ null().label("coly"),
)
.union(
select(
- [
- self.table2.c.col1,
- self.table2.c.col2,
- self.table2.c.col3,
- null().label("colx"),
- self.table2.c.coly,
- ]
+ self.table2.c.col1,
+ self.table2.c.col2,
+ self.table2.c.col3,
+ null().label("colx"),
+ self.table2.c.coly,
)
)
.alias("analias")
)
- s1 = self.table1.select(use_labels=True)
- s2 = self.table2.select(use_labels=True)
+ s1 = self.table1.select().apply_labels()
+ s2 = self.table2.select().apply_labels()
with self._c_deprecated():
assert u.corresponding_column(s1.c.table1_col2) is u.c.col2
assert u.corresponding_column(s2.c.table2_col2) is u.c.col2
assert s2.c.corresponding_column(u.c.coly) is s2.c.table2_coly
def test_join_against_self_implicit_subquery(self):
- jj = select([self.table1.c.col1.label("bar_col1")])
+ jj = select(self.table1.c.col1.label("bar_col1"))
with testing.expect_deprecated(
"The SelectBase.c and SelectBase.columns attributes are "
"deprecated and will be removed",
assert j2.corresponding_column(self.table1.c.col1) is j2.c.table1_col1
def test_select_labels(self):
- a = self.table1.select(use_labels=True)
+ a = self.table1.select().apply_labels()
j = join(a._implicit_subquery, self.table2)
criterion = a._implicit_subquery.c.table1_col1 == self.table2.c.col2
r"The SelectBase.select\(\) method is deprecated"
):
self.assert_compile(
- select([col]).select(),
+ select(col).select(),
'SELECT anon_1."NEEDS QUOTES" FROM '
'(SELECT NEEDS QUOTES AS "NEEDS QUOTES") AS anon_1',
)
def test_append_whereclause(self):
t = table("t", column("q"))
- stmt = select([t])
+ stmt = select(t)
with self._expect_deprecated("Select", "whereclause", "where"):
stmt.append_whereclause(t.c.q == 5)
def test_append_having(self):
t = table("t", column("q"))
- stmt = select([t]).group_by(t.c.q)
+ stmt = select(t).group_by(t.c.q)
with self._expect_deprecated("Select", "having", "having"):
stmt.append_having(t.c.q == 5)
def test_append_order_by(self):
t = table("t", column("q"), column("x"))
- stmt = select([t]).where(t.c.q == 5)
+ stmt = select(t).where(t.c.q == 5)
with self._expect_deprecated(
"GenerativeSelect", "order_by", "order_by"
def test_append_group_by(self):
t = table("t", column("q"))
- stmt = select([t])
+ stmt = select(t)
with self._expect_deprecated(
"GenerativeSelect", "group_by", "group_by"
t1 = table("t1", column("q"))
t2 = table("t2", column("q"), column("p"))
- inner = select([t2.c.p]).where(t2.c.q == t1.c.q)
+ inner = select(t2.c.p).where(t2.c.q == t1.c.q)
with self._expect_deprecated("Select", "correlation", "correlate"):
inner.append_correlation(t1)
- stmt = select([t1]).where(t1.c.q == inner.scalar_subquery())
+ stmt = select(t1).where(t1.c.q == inner.scalar_subquery())
self.assert_compile(
stmt,
def test_append_column(self):
t1 = table("t1", column("q"), column("p"))
- stmt = select([t1.c.q])
+ stmt = select(t1.c.q)
with self._expect_deprecated("Select", "column", "column"):
stmt.append_column(t1.c.p)
self.assert_compile(stmt, "SELECT t1.q, t1.p FROM t1")
def test_append_prefix(self):
t1 = table("t1", column("q"), column("p"))
- stmt = select([t1.c.q])
+ stmt = select(t1.c.q)
with self._expect_deprecated("Select", "prefix", "prefix_with"):
stmt.append_prefix("FOO BAR")
self.assert_compile(stmt, "SELECT FOO BAR t1.q FROM t1")
t1 = table("t1", column("q"))
t2 = table("t2", column("q"))
- stmt = select([t1])
+ stmt = select(t1)
with self._expect_deprecated("Select", "from", "select_from"):
stmt.append_from(t1.join(t2, t1.c.q == t2.c.q))
self.assert_compile(stmt, "SELECT t1.q FROM t1 JOIN t2 ON t1.q = t2.q")
def test_column_label_overlap_fallback(self, connection):
content, bar = self.tables.content, self.tables.bar
row = connection.execute(
- select([content.c.type.label("content_type")])
+ select(content.c.type.label("content_type"))
).first()
not_in_(content.c.type, row)
in_(sql.column("content_type"), row)
row = connection.execute(
- select([func.now().label("content_type")])
+ select(func.now().label("content_type"))
).first()
not_in_(content.c.type, row)
not_in_(bar.c.content_type, row)
# columns which the statement is against to be lightweight
# cols, which results in a more liberal comparison scheme
a, b = sql.column("a"), sql.column("b")
- stmt = select([a, b]).select_from(table("keyed2"))
+ stmt = select(a, b).select_from(table("keyed2"))
row = connection.execute(stmt).first()
with testing.expect_deprecated(
keyed2 = self.tables.keyed2
a, b = sql.column("a"), sql.column("b")
- stmt = select([keyed2.c.a, keyed2.c.b])
+ stmt = select(keyed2.c.a, keyed2.c.b)
row = connection.execute(stmt).first()
with testing.expect_deprecated(
# this will create column() objects inside
# the select(), these need to match on name anyway
r = connection.execute(
- select([column("user_id"), column("user_name")])
+ select(column("user_id"), column("user_name"))
.select_from(table("users"))
.where(text("user_id=2"))
).first()
self.metadata.create_all(testing.db)
connection.execute(content.insert().values(type="t1"))
- row = connection.execute(content.select(use_labels=True)).first()
+ row = connection.execute(content.select().apply_labels()).first()
in_(content.c.type, row._mapping)
not_in_(bar.c.content_type, row)
with testing.expect_deprecated(
in_(sql.column("content_type"), row)
row = connection.execute(
- select([content.c.type.label("content_type")])
+ select(content.c.type.label("content_type"))
).first()
with testing.expect_deprecated(
"Retrieving row values using Column objects "
in_(sql.column("content_type"), row)
row = connection.execute(
- select([func.now().label("content_type")])
+ select(func.now().label("content_type"))
).first()
not_in_(content.c.type, row)
for pickle in False, True:
for use_labels in False, True:
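+                # apply_labels() replaces the deprecated
+                # select(use_labels=True)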
+ stmt = users.select()
+ if use_labels:
+ stmt = stmt.apply_labels()
+
result = conn.execute(
- users.select(use_labels=use_labels).order_by(
- users.c.user_id
- )
+ stmt.order_by(users.c.user_id)
).fetchall()
if pickle:
with eng.connect() as conn:
row = conn.execute(
select(
- [
- literal_column("1").label("SOMECOL"),
- literal_column("1").label("SOMECOL"),
- ]
+ literal_column("1").label("SOMECOL"),
+ literal_column("1").label("SOMECOL"),
)
).first()
"Using non-integer/slice indices on Row is deprecated "
"and will be removed in version 2.0;"
):
- row = connection.execute(select([col])).first()
+ row = connection.execute(select(col)).first()
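+            # string-key access on Row emits the warning; row._mapping is
+            # the forward-compatible accessor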
eq_(row["foo"], 1)
eq_(row._mapping["foo"], 1)
"Using non-integer/slice indices on Row is deprecated "
"and will be removed in version 2.0;"
):
- row = connection.execute(select([col])).first()
+ row = connection.execute(select(col)).first()
eq_(row[col], 1)
eq_(row._mapping[col], 1)
def mydefault_using_connection(ctx):
conn = ctx.connection
try:
- return conn.execute(select([text("12")])).scalar()
+ return conn.execute(select(text("12"))).scalar()
finally:
# ensure a "close()" on this connection does nothing,
# since its a "branched" connection
):
conn.execute(table.insert().values(x=5))
- eq_(conn.execute(select([table])).first(), (5, 12))
+ eq_(conn.execute(select(table)).first(), (5, 12))
class DMLTest(fixtures.TestBase, AssertsCompiledSQL):
Column(
"col2",
Integer,
- default=select([func.coalesce(func.max(foo.c.id))]),
+ default=select(func.coalesce(func.max(foo.c.id))),
),
)
Column(
"col2",
Integer,
- onupdate=select([func.coalesce(func.max(foo.c.id))]),
+ onupdate=select(func.coalesce(func.max(foo.c.id))),
),
Column("col3", String(30)),
)
from sqlalchemy import text
from sqlalchemy import tuple_
from sqlalchemy import union
-from sqlalchemy.future import select as future_select
from sqlalchemy.sql import ClauseElement
from sqlalchemy.sql import column
from sqlalchemy.sql import operators
from sqlalchemy.sql.elements import Grouping
c1 = Grouping(literal_column("q"))
- s1 = select([c1])
+ s1 = select(c1)
class Vis(CloningVisitor):
def visit_grouping(self, elem):
def test_subquery(self):
a, b, c = column("a"), column("b"), column("c")
- subq = select([c]).where(c == a).scalar_subquery()
+ subq = select(c).where(c == a).scalar_subquery()
expr = and_(a == b, b == subq)
self._assert_traversal(
expr, [(operators.eq, a, b), (operators.eq, b, subq)]
f = t.c.col1 * 5
self.assert_compile(
- select([f]), "SELECT t1.col1 * :col1_1 AS anon_1 FROM t1"
+ select(f), "SELECT t1.col1 * :col1_1 AS anon_1 FROM t1"
)
f.anon_label
f = sql_util.ClauseAdapter(a).traverse(f)
self.assert_compile(
- select([f]), "SELECT t1_1.col1 * :col1_1 AS anon_1 FROM t1 AS t1_1"
+ select(f), "SELECT t1_1.col1 * :col1_1 AS anon_1 FROM t1 AS t1_1"
)
def test_join(self):
aliased
)
- s = select([aliased2]).select_from(aliased)
+ s = select(aliased2).select_from(aliased)
eq_(str(s), str(f))
f = select([adapter.columns[func.count(aliased2.c.col1)]]).select_from(
aliased
)
eq_(
- str(select([func.count(aliased2.c.col1)]).select_from(aliased)),
+ str(select(func.count(aliased2.c.col1)).select_from(aliased)),
str(f),
)
def test_aliased_cloned_column_adapt_inner(self):
- clause = select([t1.c.col1, func.foo(t1.c.col2).label("foo")])
+ clause = select(t1.c.col1, func.foo(t1.c.col2).label("foo"))
c_sub = clause.subquery()
- aliased1 = select([c_sub.c.col1, c_sub.c.foo]).subquery()
+ aliased1 = select(c_sub.c.col1, c_sub.c.foo).subquery()
aliased2 = clause
aliased2.selected_columns.col1, aliased2.selected_columns.foo
aliased3 = cloned_traverse(aliased2, {}, {})
eq_(str(f1), str(f2))
def test_aliased_cloned_column_adapt_exported(self):
- clause = select(
- [t1.c.col1, func.foo(t1.c.col2).label("foo")]
- ).subquery()
+ clause = select(t1.c.col1, func.foo(t1.c.col2).label("foo")).subquery()
- aliased1 = select([clause.c.col1, clause.c.foo]).subquery()
+ aliased1 = select(clause.c.col1, clause.c.foo).subquery()
aliased2 = clause
aliased2.c.col1, aliased2.c.foo
aliased3 = cloned_traverse(aliased2, {}, {})
eq_(str(f1), str(f2))
def test_aliased_cloned_schema_column_adapt_exported(self):
- clause = select(
- [t3.c.col1, func.foo(t3.c.col2).label("foo")]
- ).subquery()
+ clause = select(t3.c.col1, func.foo(t3.c.col2).label("foo")).subquery()
- aliased1 = select([clause.c.col1, clause.c.foo]).subquery()
+ aliased1 = select(clause.c.col1, clause.c.foo).subquery()
aliased2 = clause
aliased2.c.col1, aliased2.c.foo
aliased3 = cloned_traverse(aliased2, {}, {})
lblx_adapted = adapter.traverse(lbl_x)
self.assert_compile(
- select([lblx_adapted.self_group()]),
+ select(lblx_adapted.self_group()),
"SELECT (table3_1.col1 = :col1_1) AS x FROM table3 AS table3_1",
)
self.assert_compile(
- select([lblx_adapted.is_(True)]),
+ select(lblx_adapted.is_(True)),
"SELECT (table3_1.col1 = :col1_1) IS 1 AS anon_1 "
"FROM table3 AS table3_1",
)
def test_cte_w_union(self):
- t = select([func.values(1).label("n")]).cte("t", recursive=True)
- t = t.union_all(select([t.c.n + 1]).where(t.c.n < 100))
- s = select([func.sum(t.c.n)])
+ t = select(func.values(1).label("n")).cte("t", recursive=True)
+ t = t.union_all(select(t.c.n + 1).where(t.c.n < 100))
+ s = select(func.sum(t.c.n))
from sqlalchemy.sql.visitors import cloned_traverse
def test_aliased_cte_w_union(self):
t = (
- select([func.values(1).label("n")])
+ select(func.values(1).label("n"))
.cte("t", recursive=True)
.alias("foo")
)
- t = t.union_all(select([t.c.n + 1]).where(t.c.n < 100))
- s = select([func.sum(t.c.n)])
+ t = t.union_all(select(t.c.n + 1).where(t.c.n < 100))
+ s = select(func.sum(t.c.n))
from sqlalchemy.sql.visitors import cloned_traverse
assert set(clause2._bindparams.keys()) == set(["bar", "lala"])
def test_select(self):
- s2 = select([t1])
+ s2 = select(t1)
s2_assert = str(s2)
s3_assert = str(select([t1], t1.c.col2 == 7))
eq_([str(c) for c in u2.selected_columns], cols)
s1 = select([t1], t1.c.col1 == bindparam("id_param"))
- s2 = select([t2])
+ s2 = select(t2)
u = union(s1, s2)
u2 = u.params(id_param=7)
)
def test_extract(self):
- s = select([extract("foo", t1.c.col1).label("col1")])
+ s = select(extract("foo", t1.c.col1).label("col1"))
self.assert_compile(
s, "SELECT EXTRACT(foo FROM table1.col1) AS col1 FROM table1"
)
s2 = CloningVisitor().traverse(s).alias()
- s3 = select([s2.c.col1])
+ s3 = select(s2.c.col1)
self.assert_compile(
s, "SELECT EXTRACT(foo FROM table1.col1) AS col1 FROM table1"
)
@testing.emits_warning(".*replaced by another column with the same key")
def test_alias(self):
subq = t2.select().alias("subq")
- s = select(
- [t1.c.col1, subq.c.col1],
- from_obj=[t1, subq, t1.join(subq, t1.c.col1 == subq.c.col2)],
+ s = select(t1.c.col1, subq.c.col1).select_from(
+ t1, subq, t1.join(subq, t1.c.col1 == subq.c.col2)
)
orig = str(s)
s2 = CloningVisitor().traverse(s)
eq_(str(s), str(s4))
subq = subq.alias("subq")
- s = select(
- [t1.c.col1, subq.c.col1],
- from_obj=[t1, subq, t1.join(subq, t1.c.col1 == subq.c.col2)],
+ s = select(t1.c.col1, subq.c.col1).select_from(
+ t1, subq, t1.join(subq, t1.c.col1 == subq.c.col2),
)
s5 = CloningVisitor().traverse(s)
eq_(str(s), str(s5))
select.where.non_generative(select, t1.c.col2 == 7)
self.assert_compile(
- select([t2]).where(
- t2.c.col1 == Vis().traverse(s).scalar_subquery()
- ),
+ select(t2).where(t2.c.col1 == Vis().traverse(s).scalar_subquery()),
"SELECT table2.col1, table2.col2, table2.col3 "
"FROM table2 WHERE table2.col1 = "
"(SELECT * FROM table1 WHERE table1.col1 = table2.col1 "
)
def test_this_thing(self):
- s = select([t1]).where(t1.c.col1 == "foo").alias()
- s2 = select([s.c.col1])
+ s = select(t1).where(t1.c.col1 == "foo").alias()
+ s2 = select(s.c.col1)
self.assert_compile(
s2,
)
def test_this_thing_using_setup_joins_one(self):
- s = (
- future_select(t1)
- .join_from(t1, t2, t1.c.col1 == t2.c.col2)
- .subquery()
- )
- s2 = future_select(s.c.col1).join_from(t3, s, t3.c.col2 == s.c.col1)
+ s = select(t1).join_from(t1, t2, t1.c.col1 == t2.c.col2).subquery()
+ s2 = select(s.c.col1).join_from(t3, s, t3.c.col2 == s.c.col1)
self.assert_compile(
s2,
)
def test_this_thing_using_setup_joins_two(self):
- s = (
- future_select(t1.c.col1)
- .join(t2, t1.c.col1 == t2.c.col2)
- .subquery()
- )
- s2 = future_select(s.c.col1)
+ s = select(t1.c.col1).join(t2, t1.c.col1 == t2.c.col2).subquery()
+ s2 = select(s.c.col1)
self.assert_compile(
s2,
t1a = t1.alias()
s = select([1], t1.c.col1 == t1a.c.col1, from_obj=t1a).correlate(t1a)
- s = select([t1]).where(t1.c.col1 == s.scalar_subquery())
+ s = select(t1).where(t1.c.col1 == s.scalar_subquery())
self.assert_compile(
s,
"SELECT table1.col1, table1.col2, table1.col3 FROM table1 "
)
def test_select_fromtwice_two(self):
- s = select([t1]).where(t1.c.col1 == "foo").alias()
+ s = select(t1).where(t1.c.col1 == "foo").alias()
s2 = select([1], t1.c.col1 == s.c.col1, from_obj=s).correlate(t1)
- s3 = select([t1]).where(t1.c.col1 == s2.scalar_subquery())
+ s3 = select(t1).where(t1.c.col1 == s2.scalar_subquery())
self.assert_compile(
s3,
"SELECT table1.col1, table1.col2, table1.col3 "
)
def test_select_setup_joins_adapt_element_one(self):
- s = future_select(t1).join(t2, t1.c.col1 == t2.c.col2)
+ s = select(t1).join(t2, t1.c.col1 == t2.c.col2)
t1a = t1.alias()
)
def test_select_setup_joins_adapt_element_two(self):
- s = future_select(literal_column("1")).join_from(
+ s = select(literal_column("1")).join_from(
t1, t2, t1.c.col1 == t2.c.col2
)
)
def test_select_setup_joins_adapt_element_three(self):
- s = future_select(literal_column("1")).join_from(
+ s = select(literal_column("1")).join_from(
t1, t2, t1.c.col1 == t2.c.col2
)
)
def test_select_setup_joins_straight_clone(self):
- s = future_select(t1).join(t2, t1.c.col1 == t2.c.col2)
+ s = select(t1).join(t2, t1.c.col1 == t2.c.col2)
s2 = CloningVisitor().traverse(s)
t1a = t1.alias()
adapter = sql_util.ColumnAdapter(t1a, anonymize_labels=True)
- expr = select([t1a.c.col1]).label("x")
+ expr = select(t1a.c.col1).label("x")
expr_adapted = adapter.traverse(expr)
is_not_(expr, expr_adapted)
is_(adapter.columns[expr], expr_adapted)
t1a = t1.alias()
adapter = sql_util.ColumnAdapter(t1a, anonymize_labels=True)
- expr = select([t1a.c.col1]).label("x")
+ expr = select(t1a.c.col1).label("x")
expr_adapted = adapter.traverse(expr)
is_not_(expr, expr_adapted)
is_(adapter.traverse(expr), expr_adapted)
t1a = t1.alias()
adapter = sql_util.ColumnAdapter(t1a, anonymize_labels=True)
- expr = select([t1a.c.col1]).label("x")
+ expr = select(t1a.c.col1).label("x")
expr_adapted = adapter.columns[expr]
is_not_(expr, expr_adapted)
is_(adapter.columns[expr], expr_adapted)
t2a = t2.alias(name="t2a")
a1 = sql_util.ColumnAdapter(t1a)
- s1 = select([t1a.c.col1, t2a.c.col1]).apply_labels().alias()
+ s1 = select(t1a.c.col1, t2a.c.col1).apply_labels().alias()
a2 = sql_util.ColumnAdapter(s1)
a3 = a2.wrap(a1)
a4 = a1.wrap(a2)
"""
- stmt = select([t1.c.col1, t2.c.col1]).apply_labels().subquery()
+ stmt = select(t1.c.col1, t2.c.col1).apply_labels().subquery()
sa = stmt.alias()
- stmt2 = select([t2, sa]).subquery()
+ stmt2 = select(t2, sa).subquery()
a1 = sql_util.ColumnAdapter(stmt)
a2 = sql_util.ColumnAdapter(stmt2)
a2 = sql_util.ColumnAdapter(t2a)
a3 = a2.wrap(a1)
- stmt = select([t1.c.col1, t2.c.col2])
+ stmt = select(t1.c.col1, t2.c.col2)
self.assert_compile(
a3.traverse(stmt),
t1a, include_fn=lambda col: "a1" in col._annotations
)
- s1 = select([t1a, t2a]).apply_labels().alias()
+ s1 = select(t1a, t2a).apply_labels().alias()
a2 = sql_util.ColumnAdapter(
s1, include_fn=lambda col: "a2" in col._annotations
)
)
s = (
- select([literal_column("*")])
+ select(literal_column("*"))
.where(t1.c.col1 == t2.c.col1)
.scalar_subquery()
)
self.assert_compile(
- select([t1.c.col1, s]),
+ select(t1.c.col1, s),
"SELECT table1.col1, (SELECT * FROM table2 "
"WHERE table1.col1 = table2.col1) AS "
"anon_1 FROM table1",
vis = sql_util.ClauseAdapter(t1alias)
s = vis.traverse(s)
self.assert_compile(
- select([t1alias.c.col1, s]),
+ select(t1alias.c.col1, s),
"SELECT t1alias.col1, (SELECT * FROM "
"table2 WHERE t1alias.col1 = table2.col1) "
"AS anon_1 FROM table1 AS t1alias",
)
s = CloningVisitor().traverse(s)
self.assert_compile(
- select([t1alias.c.col1, s]),
+ select(t1alias.c.col1, s),
"SELECT t1alias.col1, (SELECT * FROM "
"table2 WHERE t1alias.col1 = table2.col1) "
"AS anon_1 FROM table1 AS t1alias",
)
s = (
- select([literal_column("*")])
+ select(literal_column("*"))
.where(t1.c.col1 == t2.c.col1)
.correlate(t1)
.scalar_subquery()
)
self.assert_compile(
- select([t1.c.col1, s]),
+ select(t1.c.col1, s),
"SELECT table1.col1, (SELECT * FROM table2 "
"WHERE table1.col1 = table2.col1) AS "
"anon_1 FROM table1",
vis = sql_util.ClauseAdapter(t1alias)
s = vis.traverse(s)
self.assert_compile(
- select([t1alias.c.col1, s]),
+ select(t1alias.c.col1, s),
"SELECT t1alias.col1, (SELECT * FROM "
"table2 WHERE t1alias.col1 = table2.col1) "
"AS anon_1 FROM table1 AS t1alias",
)
s = CloningVisitor().traverse(s)
self.assert_compile(
- select([t1alias.c.col1, s]),
+ select(t1alias.c.col1, s),
"SELECT t1alias.col1, (SELECT * FROM "
"table2 WHERE t1alias.col1 = table2.col1) "
"AS anon_1 FROM table1 AS t1alias",
# "control" subquery - uses correlate which has worked w/ adaption
# for a long time
control_s = (
- select([t2.c.col1])
+ select(t2.c.col1)
.where(t2.c.col1 == t1.c.col1)
.correlate(t2)
.scalar_subquery()
# will do the same thing as the "control" query since the correlation
# works out the same
s = (
- select([t2.c.col1])
+ select(t2.c.col1)
.where(t2.c.col1 == t1.c.col1)
.correlate_except(t1)
.scalar_subquery()
)
# use both subqueries in statements
- control_stmt = select([control_s, t1.c.col1, t2.c.col1]).select_from(
+ control_stmt = select(control_s, t1.c.col1, t2.c.col1).select_from(
t1.join(t2, t1.c.col1 == t2.c.col1)
)
- stmt = select([s, t1.c.col1, t2.c.col1]).select_from(
+ stmt = select(s, t1.c.col1, t2.c.col1).select_from(
t1.join(t2, t1.c.col1 == t2.c.col1)
)
# they are the same
t1alias = t1.alias("t1alias")
vis = sql_util.ClauseAdapter(t1alias)
self.assert_compile(
- select([t1alias, t2]).where(
+ select(t1alias, t2).where(
t1alias.c.col1
== vis.traverse(
select(
t1alias = t1.alias("t1alias")
vis = sql_util.ClauseAdapter(t1alias)
self.assert_compile(
- select([t1alias, t2]).where(
+ select(t1alias, t2).where(
t1alias.c.col1
== vis.traverse(
select(
vis = sql_util.ClauseAdapter(t1alias)
ff = vis.traverse(func.count(t1.c.col1).label("foo"))
self.assert_compile(
- select([ff]),
+ select(ff),
"SELECT count(t1alias.col1) AS foo FROM " "table1 AS t1alias",
)
assert list(_from_objects(ff)) == [t1alias]
# def test_table_to_alias_2(self):
- # TODO: self.assert_compile(vis.traverse(select([func.count(t1.c
- # .col1).l abel('foo')]), clone=True), "SELECT
+    # TODO: self.assert_compile(vis.traverse(select(func.count(t1.c
+    # .col1).label('foo')), clone=True), "SELECT
# count(t1alias.col1) AS foo FROM table1 AS t1alias")
def test_table_to_alias_13(self):
t2alias = t2.alias("t2alias")
vis.chain(sql_util.ClauseAdapter(t2alias))
self.assert_compile(
- select([t1alias, t2alias]).where(
+ select(t1alias, t2alias).where(
t1alias.c.col1
== vis.traverse(
select(["*"], t1.c.col1 == t2.c.col2, from_obj=[t1, t2])
b = Table("b", m, Column("x", Integer), Column("y", Integer))
c = Table("c", m, Column("x", Integer), Column("y", Integer))
- alias = select([a]).select_from(a.join(b, a.c.x == b.c.x)).alias()
+ alias = select(a).select_from(a.join(b, a.c.x == b.c.x)).alias()
# two levels of indirection from c.x->b.x->a.x, requires recursive
# corresponding_column call
)
def test_derived_from(self):
- assert select([t1]).is_derived_from(t1)
- assert not select([t2]).is_derived_from(t1)
- assert not t1.is_derived_from(select([t1]))
+ assert select(t1).is_derived_from(t1)
+ assert not select(t2).is_derived_from(t1)
+ assert not t1.is_derived_from(select(t1))
assert t1.alias().is_derived_from(t1)
- s1 = select([t1, t2]).alias("foo")
- s2 = select([s1]).limit(5).offset(10).alias()
+ s1 = select(t1, t2).alias("foo")
+ s2 = select(s1).limit(5).offset(10).alias()
assert s2.is_derived_from(s1)
s2 = s2._clone()
assert s2.is_derived_from(s1)
# original issue from ticket #904
- s1 = select([t1]).alias("foo")
- s2 = select([s1]).limit(5).offset(10).alias()
+ s1 = select(t1).alias("foo")
+ s2 = select(s1).limit(5).offset(10).alias()
self.assert_compile(
sql_util.ClauseAdapter(s2).traverse(s1),
"SELECT foo.col1, foo.col2, foo.col3 FROM "
)
def test_aliasedselect_to_aliasedselect_join(self):
- s1 = select([t1]).alias("foo")
- s2 = select([s1]).limit(5).offset(10).alias()
+ s1 = select(t1).alias("foo")
+ s2 = select(s1).limit(5).offset(10).alias()
j = s1.outerjoin(t2, s1.c.col1 == t2.c.col1)
self.assert_compile(
sql_util.ClauseAdapter(s2).traverse(j).select(),
)
def test_aliasedselect_to_aliasedselect_join_nested_table(self):
- s1 = select([t1]).alias("foo")
- s2 = select([s1]).limit(5).offset(10).alias()
+ s1 = select(t1).alias("foo")
+ s2 = select(s1).limit(5).offset(10).alias()
talias = t1.alias("bar")
# here is the problem. s2 is derived from s1 which is derived
sql_util.ClauseAdapter(t1.alias()).traverse(func.count(t1.c.col1)),
"count(table1_1.col1)",
)
- s = select([func.count(t1.c.col1)])
+ s = select(func.count(t1.c.col1))
self.assert_compile(
sql_util.ClauseAdapter(t1.alias()).traverse(s),
"SELECT count(table1_1.col1) AS count_1 "
self.assert_compile(
sql_util.ClauseAdapter(u).traverse(
- select([c.c.bid]).where(c.c.bid == u.c.b_aid)
+ select(c.c.bid).where(c.c.bid == u.c.b_aid)
),
"SELECT c.bid "
"FROM c, (SELECT a.id AS a_id, b.id AS b_id, b.aid AS b_aid "
t1a = t1.alias()
adapter = sql_util.ClauseAdapter(t1a, anonymize_labels=True)
- expr = select([t1.c.col2]).where(t1.c.col3 == 5).label("expr")
+ expr = select(t1.c.col2).where(t1.c.col3 == 5).label("expr")
expr_adapted = adapter.traverse(expr)
- stmt = select([expr, expr_adapted]).order_by(expr, expr_adapted)
+ stmt = select(expr, expr_adapted).order_by(expr, expr_adapted)
self.assert_compile(
stmt,
"SELECT "
t1a = t1.alias()
adapter = sql_util.ClauseAdapter(t1a, anonymize_labels=True)
- expr = select([t1.c.col2]).where(t1.c.col3 == 5).label(None)
+ expr = select(t1.c.col2).where(t1.c.col3 == 5).label(None)
expr_adapted = adapter.traverse(expr)
- stmt = select([expr, expr_adapted]).order_by(expr, expr_adapted)
+ stmt = select(expr, expr_adapted).order_by(expr, expr_adapted)
self.assert_compile(
stmt,
"SELECT "
t1a, anonymize_labels=True, allow_label_resolve=False
)
- expr = select([t1.c.col2]).where(t1.c.col3 == 5).label(None)
+ expr = select(t1.c.col2).where(t1.c.col3 == 5).label(None)
l1 = expr
is_(l1._order_by_label_element, l1)
eq_(l1._allow_label_resolve, True)
.join(t3, t2.c.col1 == t3.c.col1)
.join(t4, t4.c.col1 == t1.c.col1)
)
- s = select([t1]).where(t1.c.col2 < 5).alias()
+ s = select(t1).where(t1.c.col2 < 5).alias()
self.assert_compile(
sql_util.splice_joins(s, j),
"(SELECT table1.col1 AS col1, table1.col2 "
t1, t2, t3 = table1, table2, table3
j1 = t1.join(t2, t1.c.col1 == t2.c.col1)
j2 = j1.join(t3, t2.c.col1 == t3.c.col1)
- s = select([t1]).select_from(j1).alias()
+ s = select(t1).select_from(j1).alias()
self.assert_compile(
sql_util.splice_joins(s, j2),
"(SELECT table1.col1 AS col1, table1.col2 "
r"Coercing CTE object into a select\(\) for use in "
r"IN\(\); please pass a select\(\) construct explicitly",
):
- s2 = select([column("q").in_(stmt)])
+ s2 = select(column("q").in_(stmt))
self.assert_compile(
s2,
from sqlalchemy.engine import default
from sqlalchemy.engine import Row
from sqlalchemy.ext.compiler import compiles
-from sqlalchemy.future import select as future_select
from sqlalchemy.sql import ColumnElement
from sqlalchemy.sql import expression
from sqlalchemy.sql.selectable import TextualSelect
eq_(dict(row), {"user_id": 1, "user_name": "foo"})
eq_(row.keys(), ["user_id", "user_name"])
+ def test_row_namedtuple_legacy_ok(self, connection):
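+        # Row keeps named-tuple style attribute access under the
+        # 1.4 result API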
+ users = self.tables.users
+
+ connection.execute(users.insert(), user_id=1, user_name="foo")
+ result = connection.execute(users.select())
+ row = result.first()
+ eq_(row.user_id, 1)
+ eq_(row.user_name, "foo")
+
def test_keys_anon_labels(self, connection):
"""test [ticket:3483]"""
],
)
- result = connection.execute(
- future_select(users).order_by(users.c.user_id)
- )
+ result = connection.execute(select(users).order_by(users.c.user_id))
eq_(
result.all(),
[(7, "jack", 1, 2), (8, "ed", 2, 3), (9, "fred", 15, 20)],
],
)
- result = connection.execute(
- future_select(users).order_by(users.c.user_id)
- )
+ result = connection.execute(select(users).order_by(users.c.user_id))
all_ = result.columns(*columns).all()
eq_(all_, expected)
[{"user_id": 7, "user_name": "jack", "x": 1, "y": 2}],
)
- result = connection.execute(
- future_select(users).order_by(users.c.user_id)
- )
+ result = connection.execute(select(users).order_by(users.c.user_id))
all_ = (
result.columns("x", "y", "user_name", "user_id")
[{"user_id": 7, "user_name": "jack", "x": 1, "y": 2}],
)
- result = connection.execute(
- future_select(users).order_by(users.c.user_id)
- )
+ result = connection.execute(select(users).order_by(users.c.user_id))
result = result.columns("x", "y", "user_name")
getter = result._metadata._getter("y")
],
)
- result = connection.execute(
- future_select(users).order_by(users.c.user_id)
- )
+ result = connection.execute(select(users).order_by(users.c.user_id))
start = 0
for partition in result.columns(0, 1).partitions(20):
is_true(
expect(
roles.LabeledColumnExprRole,
- select([column("q")]).scalar_subquery(),
- ).compare(select([column("q")]).label(None))
+ select(column("q")).scalar_subquery(),
+ ).compare(select(column("q")).label(None))
)
is_true(
"implicitly coercing SELECT object to scalar subquery"
):
expect(
- roles.LabeledColumnExprRole, select([column("q")]),
+ roles.LabeledColumnExprRole, select(column("q")),
)
with testing.expect_warnings(
"implicitly coercing SELECT object to scalar subquery"
):
expect(
- roles.LabeledColumnExprRole, select([column("q")]).alias(),
+ roles.LabeledColumnExprRole, select(column("q")).alias(),
)
def test_statement_no_text_coercion(self):
"constructs into FROM clauses is deprecated;"
):
element = expect(
- roles.FromClauseRole, SelectStatementGrouping(select([t]))
+ roles.FromClauseRole, SelectStatementGrouping(select(t))
)
is_true(
- element.compare(
- SelectStatementGrouping(select([t])).subquery()
- )
+ element.compare(SelectStatementGrouping(select(t)).subquery())
)
def test_offset_or_limit_role_only_ints_or_clauseelement(self):
- assert_raises(ValueError, select([t]).limit, "some limit")
+ assert_raises(ValueError, select(t).limit, "some limit")
- assert_raises(ValueError, select([t]).offset, "some offset")
+ assert_raises(ValueError, select(t).offset, "some offset")
def test_offset_or_limit_role_clauseelement(self):
bind = bindparam("x")
- stmt = select([t]).limit(bind)
+ stmt = select(t).limit(bind)
is_(stmt._limit_clause, bind)
- stmt = select([t]).offset(bind)
+ stmt = select(t).offset(bind)
is_(stmt._offset_clause, bind)
def test_from_clause_is_not_a_select(self):
def test_statement_coercion_select(self):
is_true(
- expect(roles.CoerceTextStatementRole, select([t])).compare(
- select([t])
- )
+ expect(roles.CoerceTextStatementRole, select(t)).compare(select(t))
)
def test_statement_coercion_ddl(self):
is_(expect(roles.CoerceTextStatementRole, d1), d1)
def test_strict_from_clause_role(self):
- stmt = select([t]).subquery()
+ stmt = select(t).subquery()
is_true(
expect(roles.StrictFromClauseRole, stmt).compare(
- select([t]).subquery()
+ select(t).subquery()
)
)
def test_strict_from_clause_role_disallow_select(self):
- stmt = select([t])
+ stmt = select(t)
assert_raises_message(
exc.ArgumentError,
r"FROM expression, such as a Table or alias\(\) "
# than just replacing the outer alias.
is_true(
expect(
- roles.AnonymizedFromClauseRole, select([t]).subquery()
- ).compare(select([t]).subquery().alias())
+ roles.AnonymizedFromClauseRole, select(t).subquery()
+ ).compare(select(t).subquery().alias())
)
def test_statement_coercion_sequence(self):
)
def test_column_roles(self):
- stmt = select([self.table1.c.myid])
+ stmt = select(self.table1.c.myid)
for role in [
roles.WhereHavingRole,
is_true(coerced.compare(stmt.scalar_subquery()))
def test_labeled_role(self):
- stmt = select([self.table1.c.myid])
+ stmt = select(self.table1.c.myid)
with testing.expect_warnings(
"implicitly coercing SELECT object to scalar subquery"
"implicitly coercing SELECT object to scalar subquery"
):
self.assert_compile(
- func.coalesce(select([self.table1.c.myid])),
+ func.coalesce(select(self.table1.c.myid)),
"coalesce((SELECT mytable.myid FROM mytable))",
)
with testing.expect_warnings(
"implicitly coercing SELECT object to scalar subquery"
):
- s = select([self.table1.c.myid]).alias()
+ s = select(self.table1.c.myid).alias()
self.assert_compile(
- select([self.table1.c.myid]).where(self.table1.c.myid == s),
+ select(self.table1.c.myid).where(self.table1.c.myid == s),
"SELECT mytable.myid FROM mytable WHERE "
"mytable.myid = (SELECT mytable.myid FROM "
"mytable)",
"implicitly coercing SELECT object to scalar subquery"
):
self.assert_compile(
- select([self.table1.c.myid]).where(s > self.table1.c.myid),
+ select(self.table1.c.myid).where(s > self.table1.c.myid),
"SELECT mytable.myid FROM mytable WHERE "
"mytable.myid < (SELECT mytable.myid FROM "
"mytable)",
with testing.expect_warnings(
"implicitly coercing SELECT object to scalar subquery"
):
- s = select([self.table1.c.myid]).alias()
+ s = select(self.table1.c.myid).alias()
self.assert_compile(
- select([self.table1.c.myid]).where(self.table1.c.myid == s),
+ select(self.table1.c.myid).where(self.table1.c.myid == s),
"SELECT mytable.myid FROM mytable WHERE "
"mytable.myid = (SELECT mytable.myid FROM "
"mytable)",
"implicitly coercing SELECT object to scalar subquery"
):
self.assert_compile(
- select([self.table1.c.myid]).where(s > self.table1.c.myid),
+ select(self.table1.c.myid).where(s > self.table1.c.myid),
"SELECT mytable.myid FROM mytable WHERE "
"mytable.myid < (SELECT mytable.myid FROM "
"mytable)",
with testing.expect_warnings(
"implicitly coercing SELECT object to scalar subquery"
):
- u = update(
- table1,
- values={
- table1.c.name: select(
- [mt.c.name], mt.c.myid == table1.c.myid
+ u = update(table1).values(
+ {
+ table1.c.name: select(mt.c.name).where(
+ mt.c.myid == table1.c.myid
)
},
)
with testing.expect_warnings(
"implicitly coercing SELECT object to scalar subquery"
):
- u = update(
- table1, table1.c.name == "jack", values={table1.c.name: s}
+ u = (
+ update(table1)
+ .where(table1.c.name == "jack")
+ .values({table1.c.name: s})
)
self.assert_compile(
u,
from sqlalchemy import ForeignKey
from sqlalchemy import Integer
from sqlalchemy import MetaData
+from sqlalchemy import select
from sqlalchemy import String
from sqlalchemy import Table
-from sqlalchemy.future import select as future_select
from sqlalchemy.sql import column
from sqlalchemy.sql import table
from sqlalchemy.testing import assert_raises_message
class FutureSelectTest(fixtures.TestBase, AssertsCompiledSQL):
__dialect__ = "default"
- def test_join_nofrom_implicit_left_side_explicit_onclause(self):
- stmt = future_select(table1).join(
- table2, table1.c.myid == table2.c.otherid
+ def test_legacy_calling_style_kw_only(self):
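+        # keyword-only construction with no positional columns keeps
+        # select() in legacy mode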
+ stmt = select(
+ whereclause=table1.c.myid == table2.c.otherid
+ ).add_columns(table1.c.myid)
+
+ self.assert_compile(
+ stmt,
+ "SELECT mytable.myid FROM mytable, myothertable "
+ "WHERE mytable.myid = myothertable.otherid",
+ )
+
+ def test_legacy_calling_style_col_seq_only(self):
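+        # a sequence as the first positional argument selects the
+        # 1.x calling style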
+ stmt = select([table1.c.myid]).where(table1.c.myid == table2.c.otherid)
+
+ self.assert_compile(
+ stmt,
+ "SELECT mytable.myid FROM mytable, myothertable "
+ "WHERE mytable.myid = myothertable.otherid",
+ )
+
+ def test_new_calling_style(self):
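+        # positional column expressions: the 2.0-style constructor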
+ stmt = select(table1.c.myid).where(table1.c.myid == table2.c.otherid)
+
+ self.assert_compile(
+ stmt,
+ "SELECT mytable.myid FROM mytable, myothertable "
+ "WHERE mytable.myid = myothertable.otherid",
)
+ def test_kw_triggers_old_style(self):
+
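+        # positional columns combined with a legacy keyword are
+        # ambiguous and raise ArgumentError up front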
+ assert_raises_message(
+ exc.ArgumentError,
+ r"select\(\) construct created in legacy mode, "
+ "i.e. with keyword arguments",
+ select,
+ table1.c.myid,
+ whereclause=table1.c.myid == table2.c.otherid,
+ )
+
+ def test_join_nofrom_implicit_left_side_explicit_onclause(self):
+ stmt = select(table1).join(table2, table1.c.myid == table2.c.otherid)
+
self.assert_compile(
stmt,
"SELECT mytable.myid, mytable.name, mytable.description "
)
def test_join_nofrom_explicit_left_side_explicit_onclause(self):
- stmt = future_select(table1).join_from(
+ stmt = select(table1).join_from(
table1, table2, table1.c.myid == table2.c.otherid
)
)
def test_join_nofrom_implicit_left_side_implicit_onclause(self):
- stmt = future_select(parent).join(child)
+ stmt = select(parent).join(child)
self.assert_compile(
stmt,
)
def test_join_nofrom_explicit_left_side_implicit_onclause(self):
- stmt = future_select(parent).join_from(parent, child)
+ stmt = select(parent).join_from(parent, child)
self.assert_compile(
stmt,
def test_join_froms_implicit_left_side_explicit_onclause(self):
stmt = (
- future_select(table1)
+ select(table1)
.select_from(table1)
.join(table2, table1.c.myid == table2.c.otherid)
)
def test_join_froms_explicit_left_side_explicit_onclause(self):
stmt = (
- future_select(table1)
+ select(table1)
.select_from(table1)
.join_from(table1, table2, table1.c.myid == table2.c.otherid)
)
)
def test_join_froms_implicit_left_side_implicit_onclause(self):
- stmt = future_select(parent).select_from(parent).join(child)
+ stmt = select(parent).select_from(parent).join(child)
self.assert_compile(
stmt,
)
def test_join_froms_explicit_left_side_implicit_onclause(self):
- stmt = (
- future_select(parent).select_from(parent).join_from(parent, child)
- )
+ stmt = select(parent).select_from(parent).join_from(parent, child)
self.assert_compile(
stmt,
def test_joins_w_filter_by(self):
stmt = (
- future_select(parent)
+ select(parent)
.filter_by(data="p1")
.join(child)
.filter_by(data="c1")
assert_raises_message(
exc.InvalidRequestError,
'Entity namespace for "mytable" has no property "foo"',
- future_select(table1).filter_by,
+ select(table1).filter_by,
foo="bar",
)
# same column three times
s = select(
- [
- table1.c.col1.label("c2"),
- table1.c.col1,
- table1.c.col1.label("c1"),
- ]
+ table1.c.col1.label("c2"),
+ table1.c.col1,
+ table1.c.col1.label("c1"),
).subquery()
# this tests the same thing as
assert s.corresponding_column(s.c.c1) is s.c.c1
def test_labeled_select_twice(self):
- scalar_select = select([table1.c.col1]).label("foo")
+ scalar_select = select(table1.c.col1).label("foo")
- s1 = select([scalar_select])
- s2 = select([scalar_select, scalar_select])
+ s1 = select(scalar_select)
+ s2 = select(scalar_select, scalar_select)
eq_(
s1.selected_columns.foo.proxy_set,
)
def test_labeled_subquery_twice(self):
- scalar_select = select([table1.c.col1]).label("foo")
+ scalar_select = select(table1.c.col1).label("foo")
- s1 = select([scalar_select]).subquery()
- s2 = select([scalar_select, scalar_select]).subquery()
+ s1 = select(scalar_select).subquery()
+ s2 = select(scalar_select, scalar_select).subquery()
eq_(
s1.c.foo.proxy_set,
assert s2.corresponding_column(scalar_select) is s2.c.foo
def test_labels_name_w_separate_key(self):
- label = select([table1.c.col1]).label("foo")
+ label = select(table1.c.col1).label("foo")
label.key = "bar"
- s1 = select([label])
+ s1 = select(label)
assert s1.corresponding_column(label) is s1.selected_columns.bar
# renders as foo
)
def test_labels_anon_w_separate_key(self):
- label = select([table1.c.col1]).label(None)
+ label = select(table1.c.col1).label(None)
label.key = "bar"
- s1 = select([label])
+ s1 = select(label)
# .bar is there
assert s1.corresponding_column(label) is s1.selected_columns.bar
)
def test_labels_anon_w_separate_key_subquery(self):
- label = select([table1.c.col1]).label(None)
+ label = select(table1.c.col1).label(None)
label.key = label._key_label = "bar"
- s1 = select([label])
+ s1 = select(label)
subq = s1.subquery()
- s2 = select([subq]).where(subq.c.bar > 5)
+ s2 = select(subq).where(subq.c.bar > 5)
self.assert_compile(
s2,
"SELECT anon_2.anon_1 FROM (SELECT (SELECT table1.col1 "
)
def test_labels_anon_generate_binds_subquery(self):
- label = select([table1.c.col1]).label(None)
+ label = select(table1.c.col1).label(None)
label.key = label._key_label = "bar"
- s1 = select([label])
+ s1 = select(label)
subq = s1.subquery()
- s2 = select([subq]).where(subq.c[0] > 5)
+ s2 = select(subq).where(subq.c[0] > 5)
self.assert_compile(
s2,
"SELECT anon_2.anon_1 FROM (SELECT (SELECT table1.col1 "
)
def test_select_label_grouped_still_corresponds(self):
- label = select([table1.c.col1]).label("foo")
+ label = select(table1.c.col1).label("foo")
label2 = label.self_group()
- s1 = select([label])
- s2 = select([label2])
+ s1 = select(label)
+ s2 = select(label2)
assert s1.corresponding_column(label) is s1.selected_columns.foo
assert s2.corresponding_column(label) is s2.selected_columns.foo
def test_subquery_label_grouped_still_corresponds(self):
- label = select([table1.c.col1]).label("foo")
+ label = select(table1.c.col1).label("foo")
label2 = label.self_group()
- s1 = select([label]).subquery()
- s2 = select([label2]).subquery()
+ s1 = select(label).subquery()
+ s2 = select(label2).subquery()
assert s1.corresponding_column(label) is s1.c.foo
assert s2.corresponding_column(label) is s2.c.foo
# of the proxy set to get the right result
l1, l2 = table1.c.col1.label("foo"), table1.c.col1.label("bar")
- sel = select([l1, l2])
+ sel = select(l1, l2)
sel2 = sel.alias()
assert sel2.corresponding_column(l1) is sel2.c.foo
assert sel2.corresponding_column(l2) is sel2.c.bar
- sel2 = select([table1.c.col1.label("foo"), table1.c.col2.label("bar")])
+ sel2 = select(table1.c.col1.label("foo"), table1.c.col2.label("bar"))
sel3 = sel.union(sel2).alias()
assert sel3.corresponding_column(l1) is sel3.c.foo
assert sel3.corresponding_column(l2) is sel3.c.bar
def test_keyed_gen(self):
- s = select([keyed])
+ s = select(keyed)
eq_(s.selected_columns.colx.key, "colx")
eq_(s.selected_columns.colx.name, "x")
assert sel2.corresponding_column(keyed.c.z) is sel2.c.z
def test_keyed_label_gen(self):
- s = select([keyed]).apply_labels()
+ s = select(keyed).apply_labels()
assert (
s.selected_columns.corresponding_column(keyed.c.colx)
def test_clone_c_proxy_key_upper(self):
c = Column("foo", Integer, key="bar")
t = Table("t", MetaData(), c)
- s = select([t])._clone()
+ s = select(t)._clone()
assert c in s.selected_columns.bar.proxy_set
- s = select([t]).subquery()._clone()
+ s = select(t).subquery()._clone()
assert c in s.c.bar.proxy_set
def test_clone_c_proxy_key_lower(self):
c = column("foo")
c.key = "bar"
t = table("t", c)
- s = select([t])._clone()
+ s = select(t)._clone()
assert c in s.selected_columns.bar.proxy_set
- s = select([t]).subquery()._clone()
+ s = select(t).subquery()._clone()
assert c in s.c.bar.proxy_set
def test_no_error_on_unsupported_expr_key(self):
expr = BinaryExpression(t.c.x, t.c.y, myop)
- s = select([t, expr])
+ s = select(t, expr)
# anon_label, e.g. a truncated_label, is used here because
# the expr has no name, no key, and myop() can't create a
# string, so this is the last resort
eq_(s.selected_columns.keys(), ["x", "y", expr.anon_label])
- s = select([t, expr]).subquery()
+ s = select(t, expr).subquery()
eq_(s.c.keys(), ["x", "y", expr.anon_label])
def test_cloned_intersection(self):
assert s.corresponding_column(a1.c.col1) is s.c.a1_col1
def test_join_against_self(self):
- jj = select([table1.c.col1.label("bar_col1")]).subquery()
+ jj = select(table1.c.col1.label("bar_col1")).subquery()
jjj = join(table1, jj, table1.c.col1 == jj.c.bar_col1)
# test column directly against itself
assert j2.corresponding_column(table1.c.col1) is j2.c.table1_col1
def test_clone_append_column(self):
- sel = select([literal_column("1").label("a")])
+ sel = select(literal_column("1").label("a"))
eq_(list(sel.selected_columns.keys()), ["a"])
cloned = visitors.ReplacingCloningVisitor().traverse(sel)
cloned.add_columns.non_generative(
def test_clone_col_list_changes_then_proxy(self):
t = table("t", column("q"), column("p"))
- stmt = select([t.c.q]).subquery()
+ stmt = select(t.c.q).subquery()
def add_column(stmt):
stmt.add_columns.non_generative(stmt, t.c.p)
def test_clone_col_list_changes_then_schema_proxy(self):
t = Table("t", MetaData(), Column("q", Integer), Column("p", Integer))
- stmt = select([t.c.q]).subquery()
+ stmt = select(t.c.q).subquery()
def add_column(stmt):
stmt.add_columns.non_generative(stmt, t.c.p)
def test_append_column_after_visitor_replace(self):
# test for a supported idiom that matches the deprecated / removed
# replace_selectable method
- basesel = select([literal_column("1").label("a")])
+ basesel = select(literal_column("1").label("a"))
tojoin = select(
- [literal_column("1").label("a"), literal_column("2").label("b")]
+ literal_column("1").label("a"), literal_column("2").label("b")
)
basefrom = basesel.alias("basefrom")
joinfrom = tojoin.alias("joinfrom")
- sel = select([basefrom.c.a])
+ sel = select(basefrom.c.a)
replace_from = basefrom.join(joinfrom, basefrom.c.a == joinfrom.c.a)
# test that corresponding column digs across
# clone boundaries with anonymous labeled elements
col = func.count().label("foo")
- sel = select([col]).subquery()
+ sel = select(col).subquery()
sel2 = visitors.ReplacingCloningVisitor().traverse(sel)
assert sel2.corresponding_column(col) is sel2.c.foo
class MyType(TypeDecorator):
impl = Integer
- stmt = select([type_coerce(column("x"), MyType).label("foo")])
+ stmt = select(type_coerce(column("x"), MyType).label("foo"))
subq = stmt.subquery()
stmt2 = subq.select()
subq2 = stmt2.subquery()
j2 = t4.join(j, onclause=t4.c.d == t2.c.b)
- stmt = select([t1, t2, t3, t4]).select_from(j2)
+ stmt = select(t1, t2, t3, t4).select_from(j2)
self.assert_compile(
stmt,
"SELECT t1.a, t2.b, t3.c, t4.d FROM t3, "
"t4 JOIN (t1 JOIN t2 ON t1.a = t3.c) ON t4.d = t2.b",
)
- stmt = select([t1]).select_from(t3).select_from(j2)
+ stmt = select(t1).select_from(t3).select_from(j2)
self.assert_compile(
stmt,
"SELECT t1.a FROM t3, t4 JOIN (t1 JOIN t2 ON t1.a = t3.c) "
# not quite a use case yet but this is expected to become
# prominent w/ PostgreSQL's tuple functions
- stmt = select([table1.c.col1, table1.c.col2])
+ stmt = select(table1.c.col1, table1.c.col2)
a = stmt.alias("a")
# TODO: this case is crazy, sending SELECT or FROMCLAUSE has to
# statements go into functions in PG. seems likely select statement,
# but not alias, subquery or other FROM object
self.assert_compile(
- select([func.foo(a)]),
+ select(func.foo(a)),
"SELECT foo(SELECT table1.col1, table1.col2 FROM table1) "
"AS foo_1 FROM "
"(SELECT table1.col1 AS col1, table1.col2 AS col2 FROM table1) "
# its underlying Selects matches to that same Table
u = select(
- [
- table1.c.col1,
- table1.c.col2,
- table1.c.col3,
- table1.c.colx,
- null().label("coly"),
- ]
+ table1.c.col1,
+ table1.c.col2,
+ table1.c.col3,
+ table1.c.colx,
+ null().label("coly"),
).union(
select(
- [
- table2.c.col1,
- table2.c.col2,
- table2.c.col3,
- null().label("colx"),
- table2.c.coly,
- ]
+ table2.c.col1,
+ table2.c.col2,
+ table2.c.col3,
+ null().label("colx"),
+ table2.c.coly,
)
)
s1 = table1.select(use_labels=True)
# conflicting column correspondence should be resolved based on
# the order of the select()s in the union
- s1 = select([table1.c.col1, table1.c.col2])
- s2 = select([table1.c.col2, table1.c.col1])
- s3 = select([table1.c.col3, table1.c.colx])
- s4 = select([table1.c.colx, table1.c.col3])
+ s1 = select(table1.c.col1, table1.c.col2)
+ s2 = select(table1.c.col2, table1.c.col1)
+ s3 = select(table1.c.col3, table1.c.colx)
+ s4 = select(table1.c.colx, table1.c.col3)
u1 = union(s1, s2).subquery()
assert u1.corresponding_column(table1.c.col1) is u1.c.col1
assert u1.corresponding_column(table1.c.col3) is u1.c.col1
def test_proxy_set_pollution(self):
- s1 = select([table1.c.col1, table1.c.col2])
- s2 = select([table1.c.col2, table1.c.col1])
+ s1 = select(table1.c.col1, table1.c.col2)
+ s2 = select(table1.c.col2, table1.c.col1)
for c in s1.selected_columns:
c.proxy_set
def test_singular_union(self):
u = union(
- select([table1.c.col1, table1.c.col2, table1.c.col3]),
- select([table1.c.col1, table1.c.col2, table1.c.col3]),
+ select(table1.c.col1, table1.c.col2, table1.c.col3),
+ select(table1.c.col1, table1.c.col2, table1.c.col3),
)
- u = union(select([table1.c.col1, table1.c.col2, table1.c.col3]))
+ u = union(select(table1.c.col1, table1.c.col2, table1.c.col3))
assert u.selected_columns.col1 is not None
assert u.selected_columns.col2 is not None
assert u.selected_columns.col3 is not None
u = (
select(
- [
- table1.c.col1,
- table1.c.col2,
- table1.c.col3,
- table1.c.colx,
- null().label("coly"),
- ]
+ table1.c.col1,
+ table1.c.col2,
+ table1.c.col3,
+ table1.c.colx,
+ null().label("coly"),
)
.union(
select(
- [
- table2.c.col1,
- table2.c.col2,
- table2.c.col3,
- null().label("colx"),
- table2.c.coly,
- ]
+ table2.c.col1,
+ table2.c.col2,
+ table2.c.col3,
+ null().label("colx"),
+ table2.c.coly,
)
)
.alias("analias")
assert s2.corresponding_column(u.c.coly) is s2.c.table2_coly
def test_union_of_alias(self):
- s1 = select([table1.c.col1, table1.c.col2])
- s2 = select([table1.c.col1, table1.c.col2]).alias()
+ s1 = select(table1.c.col1, table1.c.col2)
+ s2 = select(table1.c.col1, table1.c.col2).alias()
# previously this worked
assert_raises_message(
)
def test_union_of_text(self):
- s1 = select([table1.c.col1, table1.c.col2])
+ s1 = select(table1.c.col1, table1.c.col2)
s2 = text("select col1, col2 from foo").columns(
column("col1"), column("col2")
)
assert u2.corresponding_column(s2.selected_columns.col1) is u2.c.col1
def test_foo(self):
- s1 = select([table1.c.col1, table1.c.col2])
- s2 = select([table1.c.col2, table1.c.col1])
+ s1 = select(table1.c.col1, table1.c.col2)
+ s2 = select(table1.c.col2, table1.c.col1)
u1 = union(s1, s2).subquery()
assert u1.corresponding_column(table1.c.col2) is u1.c.col2
)
# table1_new = table1
- s1 = select([table1_new.c.col1, table1_new.c.col2])
- s2 = select([table1_new.c.col2, table1_new.c.col1])
+ s1 = select(table1_new.c.col1, table1_new.c.col2)
+ s2 = select(table1_new.c.col2, table1_new.c.col1)
u1 = union(s1, s2).subquery()
# TODO: failing due to proxy_set not correct
assert u1.corresponding_column(table1_new.c.col2) is u1.c.col2
def test_union_alias_dupe_keys(self):
- s1 = select([table1.c.col1, table1.c.col2, table2.c.col1])
- s2 = select([table2.c.col1, table2.c.col2, table2.c.col3])
+ s1 = select(table1.c.col1, table1.c.col2, table2.c.col1)
+ s2 = select(table2.c.col1, table2.c.col2, table2.c.col3)
u1 = union(s1, s2).subquery()
assert (
assert u1.corresponding_column(table2.c.col3) is u1.c._all_columns[2]
def test_union_alias_dupe_keys_disambiguates_in_subq_compile_one(self):
- s1 = select([table1.c.col1, table1.c.col2, table2.c.col1]).limit(1)
- s2 = select([table2.c.col1, table2.c.col2, table2.c.col3]).limit(1)
+ s1 = select(table1.c.col1, table1.c.col2, table2.c.col1).limit(1)
+ s2 = select(table2.c.col1, table2.c.col2, table2.c.col3).limit(1)
u1 = union(s1, s2).subquery()
eq_(u1.c.keys(), ["col1", "col2", "col1_1"])
- stmt = select([u1])
+ stmt = select(u1)
eq_(stmt.selected_columns.keys(), ["col1", "col2", "col1_1"])
eq_(u1.c.keys(), ["a_id", "b_id", "b_aid"])
- stmt = select([u1])
+ stmt = select(u1)
eq_(stmt.selected_columns.keys(), ["a_id", "b_id", "b_aid"])
)
def test_union_alias_dupe_keys_grouped(self):
- s1 = select([table1.c.col1, table1.c.col2, table2.c.col1]).limit(1)
- s2 = select([table2.c.col1, table2.c.col2, table2.c.col3]).limit(1)
+ s1 = select(table1.c.col1, table1.c.col2, table2.c.col1).limit(1)
+ s2 = select(table2.c.col1, table2.c.col2, table2.c.col3).limit(1)
u1 = union(s1, s2).subquery()
assert (
u = (
select(
- [
- table1.c.col1,
- table1.c.col2,
- table1.c.col3,
- table1.c.colx,
- null().label("coly"),
- ]
+ table1.c.col1,
+ table1.c.col2,
+ table1.c.col3,
+ table1.c.colx,
+ null().label("coly"),
)
.union(
select(
- [
- table2.c.col1,
- table2.c.col2,
- table2.c.col3,
- null().label("colx"),
- table2.c.coly,
- ]
+ table2.c.col1,
+ table2.c.col2,
+ table2.c.col3,
+ null().label("colx"),
+ table2.c.coly,
)
)
.alias("analias")
)
- s = select([u]).subquery()
+ s = select(u).subquery()
s1 = table1.select(use_labels=True).subquery()
s2 = table2.select(use_labels=True).subquery()
assert s.corresponding_column(s1.c.table1_col2) is s.c.col2
u = (
select(
- [
- table1.c.col1,
- table1.c.col2,
- table1.c.col3,
- table1.c.colx,
- null().label("coly"),
- ]
+ table1.c.col1,
+ table1.c.col2,
+ table1.c.col3,
+ table1.c.colx,
+ null().label("coly"),
)
.union(
select(
- [
- table2.c.col1,
- table2.c.col2,
- table2.c.col3,
- null().label("colx"),
- table2.c.coly,
- ]
+ table2.c.col1,
+ table2.c.col2,
+ table2.c.col3,
+ null().label("colx"),
+ table2.c.coly,
)
)
.alias("analias")
self.assert_(criterion.compare(j.onclause))
def test_scalar_cloned_comparator(self):
- sel = select([table1.c.col1]).scalar_subquery()
+ sel = select(table1.c.col1).scalar_subquery()
sel == table1.c.col1
sel2 = visitors.ReplacingCloningVisitor().traverse(sel)
def test_column_labels(self):
a = select(
- [
- table1.c.col1.label("acol1"),
- table1.c.col2.label("acol2"),
- table1.c.col3.label("acol3"),
- ]
+ table1.c.col1.label("acol1"),
+ table1.c.col2.label("acol2"),
+ table1.c.col3.label("acol3"),
).subquery()
j = join(a, table2)
criterion = a.c.acol1 == table2.c.col2
self.assert_(criterion.compare(j.onclause))
def test_labeled_select_corresponding(self):
- l1 = select([func.max(table1.c.col1)]).label("foo")
+ l1 = select(func.max(table1.c.col1)).label("foo")
- s = select([l1])
+ s = select(l1)
eq_(s.corresponding_column(l1), s.selected_columns.foo)
- s = select([table1.c.col1, l1])
+ s = select(table1.c.col1, l1)
eq_(s.corresponding_column(l1), s.selected_columns.foo)
def test_labeled_subquery_corresponding(self):
- l1 = select([func.max(table1.c.col1)]).label("foo")
- s = select([l1]).subquery()
+ l1 = select(func.max(table1.c.col1)).label("foo")
+ s = select(l1).subquery()
eq_(s.corresponding_column(l1), s.c.foo)
- s = select([table1.c.col1, l1]).subquery()
+ s = select(table1.c.col1, l1).subquery()
eq_(s.corresponding_column(l1), s.c.foo)
def test_select_alias_labels(self):
metadata = MetaData()
a = Table("a", metadata, Column("id", Integer, primary_key=True))
- j2 = select([a.c.id.label("aid")]).alias("bar")
+ j2 = select(a.c.id.label("aid")).alias("bar")
j3 = a.join(j2, j2.c.aid == a.c.id)
- j4 = select([j3]).alias("foo")
+ j4 = select(j3).alias("foo")
assert j4.corresponding_column(j2.c.aid) is j4.c.aid
assert j4.corresponding_column(a.c.id) is j4.c.id
def test_multi_label_chain_naming_col(self):
# See [ticket:2167] for this one.
l1 = table1.c.col1.label("a")
- l2 = select([l1]).label("b")
- s = select([l2]).subquery()
+ l2 = select(l1).label("b")
+ s = select(l2).subquery()
assert s.c.b is not None
self.assert_compile(
s.select(),
"(SELECT (SELECT table1.col1 AS a FROM table1) AS b) AS anon_1",
)
- s2 = select([s.element.label("c")]).subquery()
+ s2 = select(s.element.label("c")).subquery()
self.assert_compile(
s2.select(),
"SELECT anon_1.c FROM (SELECT (SELECT ("
# style to the select, eliminating the self-referential call unless
# the select already had labeling applied
- s = select([t]).apply_labels()
+ s = select(t).apply_labels()
with testing.expect_deprecated("The SelectBase.c"):
s.where.non_generative(s, s.c.t_x > 5)
def test_unusual_column_elements_text(self):
"""test that .c excludes text()."""
- s = select([table1.c.col1, text("foo")]).subquery()
+ s = select(table1.c.col1, text("foo")).subquery()
eq_(list(s.c), [s.c.col1])
def test_unusual_column_elements_clauselist(self):
from sqlalchemy.sql.expression import ClauseList
s = select(
- [table1.c.col1, ClauseList(table1.c.col2, table1.c.col3)]
+ table1.c.col1, ClauseList(table1.c.col2, table1.c.col3)
).subquery()
eq_(list(s.c), [s.c.col1, s.c.col2, s.c.col3])
"""test that BooleanClauseList is placed as single element in .c."""
c2 = and_(table1.c.col2 == 5, table1.c.col3 == 4)
- s = select([table1.c.col1, c2]).subquery()
+ s = select(table1.c.col1, c2).subquery()
eq_(list(s.c), [s.c.col1, s.corresponding_column(c2)])
def test_from_list_deferred_constructor(self):
c1 = Column("c1", Integer)
c2 = Column("c2", Integer)
- select([c1])
+ select(c1)
t = Table("t", MetaData(), c1, c2)
eq_(c1._from_objects, [t])
eq_(c2._from_objects, [t])
- self.assert_compile(select([c1]), "SELECT t.c1 FROM t")
- self.assert_compile(select([c2]), "SELECT t.c2 FROM t")
+ self.assert_compile(select(c1), "SELECT t.c1 FROM t")
+ self.assert_compile(select(c2), "SELECT t.c2 FROM t")
def test_from_list_deferred_whereclause(self):
c1 = Column("c1", Integer)
c2 = Column("c2", Integer)
- select([c1]).where(c1 == 5)
+ select(c1).where(c1 == 5)
t = Table("t", MetaData(), c1, c2)
eq_(c1._from_objects, [t])
eq_(c2._from_objects, [t])
- self.assert_compile(select([c1]), "SELECT t.c1 FROM t")
- self.assert_compile(select([c2]), "SELECT t.c2 FROM t")
+ self.assert_compile(select(c1), "SELECT t.c1 FROM t")
+ self.assert_compile(select(c2), "SELECT t.c2 FROM t")
def test_from_list_deferred_fromlist(self):
m = MetaData()
t1 = Table("t1", m, Column("x", Integer))
c1 = Column("c1", Integer)
- select([c1]).where(c1 == 5).select_from(t1)
+ select(c1).where(c1 == 5).select_from(t1)
t2 = Table("t2", MetaData(), c1)
eq_(c1._from_objects, [t2])
- self.assert_compile(select([c1]), "SELECT t2.c1 FROM t2")
+ self.assert_compile(select(c1), "SELECT t2.c1 FROM t2")
def test_from_list_deferred_cloning(self):
c1 = Column("c1", Integer)
c2 = Column("c2", Integer)
- s = select([c1])
- s2 = select([c2])
+ s = select(c1)
+ s2 = select(c2)
s3 = sql_util.ClauseAdapter(s).traverse(s2)
Table("t", MetaData(), c1, c2)
def test_from_list_with_columns(self):
table1 = table("t1", column("a"))
table2 = table("t2", column("b"))
- s1 = select([table1.c.a, table2.c.b])
+ s1 = select(table1.c.a, table2.c.b)
self.assert_compile(s1, "SELECT t1.a, t2.b FROM t1, t2")
s2 = s1.with_only_columns([table2.c.b])
self.assert_compile(s2, "SELECT t2.b FROM t2")
def test_from_list_against_existing_one(self):
c1 = Column("c1", Integer)
- s = select([c1])
+ s = select(c1)
# force a compile.
self.assert_compile(s, "SELECT c1")
c1 = Column("c1", Integer)
c2 = Column("c2", Integer)
- s = select([c1])
+ s = select(c1)
# force a compile.
eq_(str(s), "SELECT c1")
eq_(c2._from_objects, [t])
self.assert_compile(s, "SELECT t.c1 FROM t")
- self.assert_compile(select([c1]), "SELECT t.c1 FROM t")
- self.assert_compile(select([c2]), "SELECT t.c2 FROM t")
+ self.assert_compile(select(c1), "SELECT t.c1 FROM t")
+ self.assert_compile(select(c2), "SELECT t.c2 FROM t")
def test_label_gen_resets_on_table(self):
c1 = Column("c1", Integer)
def test_whereclause_adapted(self):
table1 = table("t1", column("a"))
- s1 = select([table1]).subquery()
+ s1 = select(table1).subquery()
- s2 = select([s1]).where(s1.c.a == 5)
+ s2 = select(s1).where(s1.c.a == 5)
assert s2._whereclause.left.table is s1
- ta = select([table1]).subquery()
+ ta = select(table1).subquery()
s3 = sql_util.ClauseAdapter(ta).traverse(s2)
def test_select_samename_init(self):
a = table("a", column("x"))
b = table("b", column("y"))
- s = select([a, b]).apply_labels()
+ s = select(a, b).apply_labels()
s.selected_columns
q = column("x")
b.append_column(q)
def test_alias_alias_samename_init(self):
a = table("a", column("x"))
b = table("b", column("y"))
- s1 = select([a, b]).apply_labels().alias()
+ s1 = select(a, b).apply_labels().alias()
s2 = s1.alias()
s1.c
def test_aliased_select_samename_uninit(self):
a = table("a", column("x"))
b = table("b", column("y"))
- s = select([a, b]).apply_labels().alias()
+ s = select(a, b).apply_labels().alias()
q = column("x")
b.append_column(q)
s._refresh_for_new_column(q)
def test_aliased_select_samename_init(self):
a = table("a", column("x"))
b = table("b", column("y"))
- s = select([a, b]).apply_labels().alias()
+ s = select(a, b).apply_labels().alias()
s.c
q = column("x")
b.append_column(q)
a = table("a", column("x"))
b = table("b", column("y"))
c = table("c", column("z"))
- s = select([a, b]).apply_labels().alias()
+ s = select(a, b).apply_labels().alias()
s.c
q = column("x")
c.append_column(q)
def test_aliased_select_no_cols_clause(self):
a = table("a", column("x"))
- s = select([a.c.x]).apply_labels().alias()
+ s = select(a.c.x).apply_labels().alias()
s.c
q = column("q")
a.append_column(q)
def test_union_uninit(self):
a = table("a", column("x"))
- s1 = select([a])
- s2 = select([a])
+ s1 = select(a)
+ s2 = select(a)
s3 = s1.union(s2)
q = column("q")
a.append_column(q)
def test_union_init(self):
a = table("a", column("x"))
- s1 = select([a])
- s2 = select([a])
+ s1 = select(a)
+ s2 = select(a)
s3 = s1.union(s2)
s3.selected_columns
q = column("q")
c1 = column("x")
assert c1.label(None) is not c1
- eq_(str(select([c1.label(None)])), "SELECT x AS x_1")
+ eq_(str(select(c1.label(None))), "SELECT x AS x_1")
def test_anon_labels_literal_column(self):
c1 = literal_column("x")
assert c1.label(None) is not c1
- eq_(str(select([c1.label(None)])), "SELECT x AS x_1")
+ eq_(str(select(c1.label(None))), "SELECT x AS x_1")
def test_anon_labels_func(self):
c1 = func.count("*")
assert c1.label(None) is not c1
- eq_(str(select([c1])), "SELECT count(:count_2) AS count_1")
- select([c1]).compile()
+ eq_(str(select(c1)), "SELECT count(:count_2) AS count_1")
+ select(c1).compile()
- eq_(str(select([c1.label(None)])), "SELECT count(:count_2) AS count_1")
+ eq_(str(select(c1.label(None))), "SELECT count(:count_2) AS count_1")
def test_named_labels_named_column(self):
c1 = column("x")
- eq_(str(select([c1.label("y")])), "SELECT x AS y")
+ eq_(str(select(c1.label("y"))), "SELECT x AS y")
def test_named_labels_literal_column(self):
c1 = literal_column("x")
- eq_(str(select([c1.label("y")])), "SELECT x AS y")
+ eq_(str(select(c1.label("y"))), "SELECT x AS y")
class JoinAliasingTest(fixtures.TestBase, AssertsCompiledSQL):
j1 = a.join(b, a.c.a == b.c.b)
j2 = c.join(d, c.c.c == d.c.d)
self.assert_compile(
- select([j1.join(j2, b.c.b == c.c.c).alias()]),
+ select(j1.join(j2, b.c.b == c.c.c).alias()),
"SELECT anon_1.a_a, anon_1.b_b, anon_1.c_c, anon_1.d_d "
"FROM (SELECT a.a AS a_a, b.b AS b_b, c.c AS c_c, d.d AS d_d "
"FROM a JOIN b ON a.a = b.b "
Column("manager_name", String(50)),
)
s = (
- select([engineers, managers])
+ select(engineers, managers)
.where(engineers.c.engineer_name == managers.c.manager_name)
.subquery()
)
Column("z", Integer, ForeignKey("t1.x")),
Column("q", Integer),
)
- s1 = select([t1, t2])
+ s1 = select(t1, t2)
s2 = s1.reduce_columns(only_synonyms=False)
eq_(set(s2.selected_columns), set([t1.c.x, t1.c.y, t2.c.q]))
Column("x", Integer, ForeignKey("t1.x")),
Column("q", Integer, ForeignKey("t1.y")),
)
- s1 = select([t1, t2])
+ s1 = select(t1, t2)
s1 = s1.reduce_columns(only_synonyms=True)
eq_(
set(s1.selected_columns),
)
# test that the first appearance in the columns clause
# wins - t1 is first, t1.c.x wins
- s1 = select([t1]).subquery()
- s2 = select([t1, s1]).where(t1.c.x == s1.c.x).where(s1.c.y == t1.c.z)
+ s1 = select(t1).subquery()
+ s2 = select(t1, s1).where(t1.c.x == s1.c.x).where(s1.c.y == t1.c.z)
eq_(
set(s2.reduce_columns().selected_columns),
set([t1.c.x, t1.c.y, t1.c.z, s1.c.y, s1.c.z]),
)
# reverse order, s1.c.x wins
- s1 = select([t1]).subquery()
- s2 = select([s1, t1]).where(t1.c.x == s1.c.x).where(s1.c.y == t1.c.z)
+ s1 = select(t1).subquery()
+ s2 = select(s1, t1).where(t1.c.x == s1.c.x).where(s1.c.y == t1.c.z)
eq_(
set(s2.reduce_columns().selected_columns),
set([s1.c.x, t1.c.y, t1.c.z, s1.c.y, s1.c.z]),
pjoin = union(
select(
- [
- page_table.c.id,
- magazine_page_table.c.page_id,
- classified_page_table.c.magazine_page_id,
- ]
+ page_table.c.id,
+ magazine_page_table.c.page_id,
+ classified_page_table.c.magazine_page_id,
).select_from(
page_table.join(magazine_page_table).join(
classified_page_table
)
),
select(
- [
- page_table.c.id,
- magazine_page_table.c.page_id,
- cast(null(), Integer).label("magazine_page_id"),
- ]
+ page_table.c.id,
+ magazine_page_table.c.page_id,
+ cast(null(), Integer).label("magazine_page_id"),
).select_from(page_table.join(magazine_page_table)),
).alias("pjoin")
eq_(
pjoin = union(
select(
- [
- page_table.c.id,
- magazine_page_table.c.page_id,
- cast(null(), Integer).label("magazine_page_id"),
- ]
+ page_table.c.id,
+ magazine_page_table.c.page_id,
+ cast(null(), Integer).label("magazine_page_id"),
).select_from(page_table.join(magazine_page_table)),
select(
- [
- page_table.c.id,
- magazine_page_table.c.page_id,
- classified_page_table.c.magazine_page_id,
- ]
+ page_table.c.id,
+ magazine_page_table.c.page_id,
+ classified_page_table.c.magazine_page_id,
).select_from(
page_table.join(magazine_page_table).join(
classified_page_table
assert t1.select().is_derived_from(t1)
assert not t2.select().is_derived_from(t1)
- assert select([t1, t2]).is_derived_from(t1)
+ assert select(t1, t2).is_derived_from(t1)
assert t1.select().alias("foo").is_derived_from(t1)
- assert select([t1, t2]).alias("foo").is_derived_from(t1)
+ assert select(t1, t2).alias("foo").is_derived_from(t1)
assert not t2.select().alias("foo").is_derived_from(t1)
c1 = Column("foo", Integer)
- stmt = select([c1]).alias()
+ stmt = select(c1).alias()
proxy = stmt.c.foo
proxy.proxy_set
c1 = Column("foo", Integer)
- stmt = select([c1]).alias()
+ stmt = select(c1).alias()
proxy = stmt.c.foo
c1.proxy_set
assert isinstance(s1.c.foo, Column)
annot_1 = t1.c.foo._annotate({})
- s2 = select([annot_1]).subquery()
+ s2 = select(annot_1).subquery()
assert isinstance(s2.c.foo, Column)
annot_2 = s1._annotate({})
assert isinstance(annot_2.c.foo, Column)
def test_annotated_corresponding_column(self):
table1 = table("table1", column("col1"))
- s1 = select([table1.c.col1]).subquery()
+ s1 = select(table1.c.col1).subquery()
t1 = s1._annotate({})
t2 = s1
assert t1.c is t2.c
assert t1.c.col1 is t2.c.col1
- inner = select([s1]).subquery()
+ inner = select(s1).subquery()
assert (
inner.corresponding_column(t2.c.col1, require_embedded=False)
def test_annotate_aliased(self):
t1 = table("t1", column("c1"))
- s = select([(t1.c.c1 + 3).label("bat")])
+ s = select((t1.c.c1 + 3).label("bat"))
a = s.alias()
a = sql_util._deep_annotate(a, {"foo": "bar"})
eq_(a._annotations["foo"], "bar")
table1 = table("table1", column("col1"), column("col2"))
subq = (
- select([table1])
- .where(table1.c.col1 == bindparam("foo"))
- .subquery()
+ select(table1).where(table1.c.col1 == bindparam("foo")).subquery()
)
- stmt = select([subq])
+ stmt = select(subq)
s2 = sql_util._deep_annotate(stmt, {"_orm_adapt": True})
s3 = sql_util._deep_deannotate(s2)
table1 = table("table1", column("x"))
table2 = table("table2", column("y"))
a1 = table1.alias()
- s = select([a1.c.x]).select_from(a1.join(table2, a1.c.x == table2.c.y))
+ s = select(a1.c.x).select_from(a1.join(table2, a1.c.x == table2.c.y))
for sel in (
sql_util._deep_deannotate(s),
visitors.cloned_traverse(s, {}, {}),
"""
t1 = table("table1", column("col1"), column("col2"))
- s = select([t1.c.col1._annotate({"foo": "bar"})])
- s2 = select([t1.c.col1._annotate({"bat": "hoho"})])
+ s = select(t1.c.col1._annotate({"foo": "bar"}))
+ s2 = select(t1.c.col1._annotate({"bat": "hoho"}))
s3 = s.union(s2)
sel = sql_util._deep_annotate(s3, {"new": "thing"})
table1 = table("table1", column("x"))
table2 = table("table2", column("y"))
a1 = table1.alias()
- s = select([a1.c.x]).select_from(a1.join(table2, a1.c.x == table2.c.y))
+ s = select(a1.c.x).select_from(a1.join(table2, a1.c.x == table2.c.y))
- assert_s = select([select([s.subquery()]).subquery()])
+ assert_s = select(select(s.subquery()).subquery())
for fn in (
sql_util._deep_deannotate,
lambda s: sql_util._deep_annotate(s, {"foo": "bar"}),
lambda s: visitors.replacement_traverse(s, {}, lambda x: None),
):
- sel = fn(select([fn(select([fn(s.subquery())]).subquery())]))
+ sel = fn(select(fn(select(fn(s.subquery())).subquery())))
eq_(str(assert_s), str(sel))
def test_bind_unique_test(self):
m = MetaData()
t1 = Table("t1", m, Column("x", Integer))
t2 = Table("t2", m, Column("x", Integer))
- return select([t1, t2])
+ return select(t1, t2)
def test_names_overlap_nolabel(self):
sel = self._names_overlap()
m = MetaData()
t1 = Table("t1", m, Column("x", Integer, key="a"))
t2 = Table("t2", m, Column("x", Integer, key="b"))
- return select([t1, t2])
+ return select(t1, t2)
def test_names_overlap_keys_dont_nolabel(self):
sel = self._names_overlap_keys_dont()
m = MetaData()
t1 = Table("t", m, Column("x_id", Integer))
t2 = Table("t_x", m, Column("id", Integer))
- return select([t1, t2])
+ return select(t1, t2)
def test_labels_overlap_nolabel(self):
sel = self._labels_overlap()
m = MetaData()
t1 = Table("t", m, Column("x_id", Integer, key="a"))
t2 = Table("t_x", m, Column("id", Integer, key="b"))
- return select([t1, t2])
+ return select(t1, t2)
def test_labels_overlap_keylabels_dont_nolabel(self):
sel = self._labels_overlap_keylabels_dont()
m = MetaData()
t1 = Table("t", m, Column("a", Integer, key="x_id"))
t2 = Table("t_x", m, Column("b", Integer, key="id"))
- return select([t1, t2])
+ return select(t1, t2)
def test_keylabels_overlap_labels_dont_nolabel(self):
sel = self._keylabels_overlap_labels_dont()
m = MetaData()
t1 = Table("t", m, Column("x_id", Integer, key="x_a"))
t2 = Table("t_x", m, Column("id", Integer, key="a"))
- return select([t1, t2])
+ return select(t1, t2)
def test_keylabels_overlap_labels_overlap_nolabel(self):
sel = self._keylabels_overlap_labels_overlap()
m = MetaData()
t1 = Table("t1", m, Column("a", Integer, key="x"))
t2 = Table("t2", m, Column("b", Integer, key="x"))
- return select([t1, t2])
+ return select(t1, t2)
def test_keys_overlap_names_dont_nolabel(self):
sel = self._keys_overlap_names_dont()
def test_select_label_alt_name(self):
t = self._fixture()
l1, l2 = t.c.x.label("a"), t.c.y.label("b")
- s = select([l1, l2])
+ s = select(l1, l2)
mapping = self._mapping(s)
assert l1 in mapping
def test_select_alias_label_alt_name(self):
t = self._fixture()
l1, l2 = t.c.x.label("a"), t.c.y.label("b")
- s = select([l1, l2]).alias()
+ s = select(l1, l2).alias()
mapping = self._mapping(s)
assert l1 in mapping
def test_select_alias_column(self):
t = self._fixture()
x, y = t.c.x, t.c.y
- s = select([x, y]).alias()
+ s = select(x, y).alias()
mapping = self._mapping(s)
assert t.c.x in mapping
def test_select_alias_column_apply_labels(self):
t = self._fixture()
x, y = t.c.x, t.c.y
- s = select([x, y]).apply_labels().alias()
+ s = select(x, y).apply_labels().alias()
mapping = self._mapping(s)
assert t.c.x in mapping
x = t.c.x
ta = t.alias()
- s = select([ta.c.x, ta.c.y])
+ s = select(ta.c.x, ta.c.y)
mapping = self._mapping(s)
assert x not in mapping
ta = t.alias()
l1, l2 = ta.c.x.label("a"), ta.c.y.label("b")
- s = select([l1, l2])
+ s = select(l1, l2)
mapping = self._mapping(s)
assert x not in mapping
assert l1 in mapping
eq_(
[
type(entry[-1])
- for entry in select([expr]).compile()._result_columns
+ for entry in select(expr).compile()._result_columns
],
[Boolean],
)
eq_(
[
type(entry[-1])
- for entry in select([expr]).compile()._result_columns
+ for entry in select(expr).compile()._result_columns
],
[Boolean],
)
eq_(
[
type(entry[-1])
- for entry in select([expr]).compile()._result_columns
+ for entry in select(expr).compile()._result_columns
],
[Boolean],
)
def test_column_subquery_plain(self):
t = self._fixture()
- s1 = select([t.c.x]).where(t.c.x > 5).scalar_subquery()
- s2 = select([s1])
+ s1 = select(t.c.x).where(t.c.x > 5).scalar_subquery()
+ s2 = select(s1)
mapping = self._mapping(s2)
assert t.c.x not in mapping
assert s1 in mapping
def test_basic_clone(self):
t = table("t", column("c"))
- s = select([t]).with_for_update(read=True, of=t.c.c)
+ s = select(t).with_for_update(read=True, of=t.c.c)
s2 = visitors.ReplacingCloningVisitor().traverse(s)
assert s2._for_update_arg is not s._for_update_arg
eq_(s2._for_update_arg.read, True)
def test_adapt(self):
t = table("t", column("c"))
- s = select([t]).with_for_update(read=True, of=t.c.c)
+ s = select(t).with_for_update(read=True, of=t.c.c)
a = t.alias()
s2 = sql_util.ClauseAdapter(a).traverse(s)
eq_(s2._for_update_arg.of, [a.c.c])
def test_get_children_preserves_multiple_nesting(self):
t = table("t", column("c"))
- stmt = select([t])
+ stmt = select(t)
a1 = stmt.alias()
a2 = a1.alias()
eq_(set(a2.get_children(column_collections=False)), {a1})
def test_correspondence_multiple_nesting(self):
t = table("t", column("c"))
- stmt = select([t])
+ stmt = select(t)
a1 = stmt.alias()
a2 = a1.alias()
def test_copy_internals_multiple_nesting(self):
t = table("t", column("c"))
- stmt = select([t])
+ stmt = select(t)
a1 = stmt.alias()
a2 = a1.alias()