certain contexts, such as :class:`.AliasedInsp` and
:class:`.AttributeState`.
-A walkthrough of some key capabilities follows::
+A walkthrough of some key capabilities follows:
+
+.. sourcecode:: pycon+sql
>>> class User(Base):
... __tablename__ = "user"
>>> # an expression
>>> print(b.expression)
- "user".id = address.user_id
+ {printsql}"user".id = address.user_id{stop}
>>> # inspect works on instances
>>> u1 = User(id=3, name="x")
Above, the ``LowerString`` type defines a SQL expression that will be emitted
whenever the ``test_table.c.data`` column is rendered in the columns
-clause of a SELECT statement::
+clause of a SELECT statement:
+
+.. sourcecode:: pycon+sql
>>> print(select([test_table]).where(test_table.c.data == "HI"))
{printsql}SELECT lower(test_table.data) AS data
The "collate" keyword, long accepted by the MySQL dialect, is now established
on all :class:`.String` types and will render on any backend, including
-when features such as :meth:`_schema.MetaData.create_all` and :func:`.cast` is used::
+when features such as :meth:`_schema.MetaData.create_all` and :func:`.cast` are used:
+
+.. sourcecode:: pycon+sql
>>> stmt = select([cast(sometable.c.somechar, String(20, collation="utf8"))])
>>> print(stmt)
The :meth:`.ColumnOperators.collate` operator now works more appropriately within an
``ORDER BY`` expression as well, as a specific precedence has been given to the
``ASC`` and ``DESC`` operators which will again ensure no parentheses are
-generated::
+generated:
+
+.. sourcecode:: pycon+sql
>>> # 0.8
>>> print(column("x").collate("en_EN").desc())
- (x COLLATE en_EN) DESC
+ {printsql}(x COLLATE en_EN) DESC{stop}
>>> # 0.9
>>> print(column("x").collate("en_EN").desc())
- x COLLATE en_EN DESC
+ {printsql}x COLLATE en_EN DESC{stop}
:ticket:`2879`
----------------------------------------------------------------
The :class:`_postgresql.ENUM` type will now apply escaping to single quote
-signs within the enumerated values::
+signs within the enumerated values:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy.dialects import postgresql
>>> type = postgresql.ENUM("one", "two", "three's", name="myenum")
syntactical feature has been added, and is also backported to 0.8.3,
so technically isn't "new" in 0.9. A :func:`_expression.select` construct or other
compatible construct can be passed to the new method :meth:`_expression.Insert.from_select`
-where it will be used to render an ``INSERT .. SELECT`` construct::
+where it will be used to render an ``INSERT .. SELECT`` construct:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy.sql import table, column
>>> t1 = table("t1", column("a"), column("b"))
t1 = Table("t", MetaData(), Column("x", Boolean()), Column("y", Integer))
A select construct will now render the boolean column as a binary expression
-on backends that don't feature ``true``/``false`` constant behavior::
+on backends that don't feature ``true``/``false`` constant behavior:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import select, and_, false, true
>>> from sqlalchemy.dialects import mysql, postgresql
The :func:`.and_` and :func:`.or_` constructs will now exhibit quasi
"short circuit" behavior, that is truncating a rendered expression, when a
-:func:`.true` or :func:`.false` constant is present::
+:func:`.true` or :func:`.false` constant is present:
+
+.. sourcecode:: pycon+sql
>>> print(
... select([t1]).where(and_(t1.c.y > 5, false())).compile(dialect=postgresql.dialect())
... )
{printsql}SELECT t.x, t.y FROM t WHERE false
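
The same truncation applies with :func:`.or_` when :func:`.true` is present; a
brief sketch, assuming ``or_`` is imported alongside the names above:

.. sourcecode:: pycon+sql

    >>> from sqlalchemy import or_
    >>> print(
    ...     select([t1]).where(or_(t1.c.y > 5, true())).compile(dialect=postgresql.dialect())
    ... )
    {printsql}SELECT t.x, t.y FROM t WHERE true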
-:func:`.true` can be used as the base to build up an expression::
+:func:`.true` can be used as the base to build up an expression:
+
+.. sourcecode:: pycon+sql
>>> expr = true()
>>> expr = expr & (t1.c.y > 5)
{printsql}SELECT t.x, t.y FROM t WHERE t.y > :y_1
The boolean constants :func:`.true` and :func:`.false` themselves render as
-``0 = 1`` and ``1 = 1`` for a backend with no boolean constants::
+``0 = 1`` and ``1 = 1`` for a backend with no boolean constants:
+
+.. sourcecode:: pycon+sql
>>> print(select([t1]).where(and_(t1.c.y > 5, false())).compile(dialect=mysql.dialect()))
{printsql}SELECT t.x, t.y FROM t WHERE 0 = 1
Interpretation of ``None``, while not particularly valid SQL, is at least
-now consistent::
+now consistent:
+
+.. sourcecode:: pycon+sql
>>> print(select([t1.c.x]).where(None))
- {printsql}SELECT t.x FROM t WHERE NULL
+ {printsql}SELECT t.x FROM t WHERE NULL{stop}
>>> print(select([t1.c.x]).where(None).where(None))
- {printsql}SELECT t.x FROM t WHERE NULL AND NULL
+ {printsql}SELECT t.x FROM t WHERE NULL AND NULL{stop}
>>> print(select([t1.c.x]).where(and_(None, None)))
- {printsql}SELECT t.x FROM t WHERE NULL AND NULL
+ {printsql}SELECT t.x FROM t WHERE NULL AND NULL{stop}
:ticket:`2804`
in the SELECT statement as was the case in 1.0.
Below is an example that renders UPDATE, INSERT and SELECT all in one
-statement::
+statement:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import table, column, select, literal, exists
>>> orders = table(
----------------------------------------------------------------
New :paramref:`.expression.over.range_` and :paramref:`.expression.over.rows` parameters allow
-RANGE and ROWS expressions for window functions::
+RANGE and ROWS expressions for window functions:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import func
>>> print(func.row_number().over(order_by="x", range_=(-5, 10)))
- row_number() OVER (ORDER BY x RANGE BETWEEN :param_1 PRECEDING AND :param_2 FOLLOWING)
+ {printsql}row_number() OVER (ORDER BY x RANGE BETWEEN :param_1 PRECEDING AND :param_2 FOLLOWING){stop}
>>> print(func.row_number().over(order_by="x", rows=(None, 0)))
- row_number() OVER (ORDER BY x ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
+ {printsql}row_number() OVER (ORDER BY x ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW){stop}
>>> print(func.row_number().over(order_by="x", range_=(-2, None)))
- row_number() OVER (ORDER BY x RANGE BETWEEN :param_1 PRECEDING AND UNBOUNDED FOLLOWING)
+ {printsql}row_number() OVER (ORDER BY x RANGE BETWEEN :param_1 PRECEDING AND UNBOUNDED FOLLOWING){stop}
:paramref:`.expression.over.range_` and :paramref:`.expression.over.rows` are specified as
2-tuples and indicate negative and positive values for specific ranges,
is added to Core. The implementation of :meth:`_expression.Select.lateral` employs
special logic beyond just rendering the LATERAL keyword to allow for
correlation of tables that are derived from the same FROM clause as the
-selectable, e.g. lateral correlation::
+selectable, e.g. lateral correlation:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import table, column, select, true
>>> people = table("people", column("people_id"), column("age"), column("name"))
New operators :meth:`.ColumnOperators.is_distinct_from` and
:meth:`.ColumnOperators.isnot_distinct_from` allow the IS DISTINCT
-FROM and IS NOT DISTINCT FROM sql operation::
+FROM and IS NOT DISTINCT FROM SQL operations:
+
+.. sourcecode:: pycon+sql
>>> print(column("x").is_distinct_from(None))
- x IS DISTINCT FROM NULL
+ {printsql}x IS DISTINCT FROM NULL{stop}
-Handling is provided for NULL, True and False::
+Handling is provided for NULL, True and False:
+
+.. sourcecode:: pycon+sql
>>> print(column("x").isnot_distinct_from(False))
- x IS NOT DISTINCT FROM false
+ {printsql}x IS NOT DISTINCT FROM false{stop}
For SQLite, which doesn't have this operator, "IS" / "IS NOT" is rendered,
-which on SQLite works for NULL unlike other backends::
+which on SQLite works for NULL unlike other backends:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy.dialects import sqlite
>>> print(column("x").is_distinct_from(None).compile(dialect=sqlite.dialect()))
- x IS NOT NULL
+ {printsql}x IS NOT NULL{stop}
.. _change_1957:
Calling ``str()`` on a Core SQL construct will now produce a string
in more cases than before, supporting various SQL constructs not normally
present in default SQL such as RETURNING, array indexes, and non-standard
-datatypes::
+datatypes:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import table, column
>>> t = table('x', column('a'), column('b'))
A server default passed to :paramref:`_schema.Column.server_default` as a plain
Python string that has quotes embedded is now
-passed through the literal quoting system::
+passed through the literal quoting system:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy.schema import MetaData, Table, Column, CreateTable
>>> from sqlalchemy.types import String
)
The legacy mode of behavior will attempt to turn a schema-qualified table
-name into an alias::
+name into an alias:
+
+.. sourcecode:: pycon+sql
>>> eng = create_engine("mssql+pymssql://mydsn", legacy_schema_aliasing=True)
>>> print(account_table.select().compile(eng))
:attr:`.func` namespace. In the case of CUBE and ROLLUP, these functions
already work in previous versions, however for GROUPING SETS, a placeholder
is added to the compiler to allow for the space. All three functions
-are named in the documentation now::
+are named in the documentation now:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import select, table, column, func, tuple_
>>> t = table("t", column("value"), column("x"), column("y"), column("z"), column("q"))
To assist with boolean comparison operators, a new shorthand method
:meth:`.Operators.bool_op` has been added. This method should be preferred
-for on-the-fly boolean operators::
+for on-the-fly boolean operators:
+
+.. sourcecode:: pycon+sql
>>> print(column("x", types.Integer).bool_op("-%>")(5))
- x -%> :x_1
+ {printsql}x -%> :x_1
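
:meth:`.Operators.bool_op` is essentially shorthand for calling
:meth:`.Operators.op` with its ``is_comparison`` flag set; as a sketch, reusing
the ``types`` namespace from the example above, the equivalent spelling renders
the same SQL and differs only in the boolean typing of the resulting expression:

.. sourcecode:: pycon+sql

    >>> print(column("x", types.Integer).op("-%>", is_comparison=True)(5))
    {printsql}x -%> :x_1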
.. _change_3740:
percent-sign-sensitive paramstyle or not (e.g. 'format' or 'pyformat').
Previously, it was not possible to produce a :obj:`_expression.literal_column`
-construct that stated a single percent sign::
+construct that stated a single percent sign:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import literal_column
>>> print(literal_column("some%symbol"))
- some%%symbol
+ {printsql}some%%symbol
The percent sign is now unaffected for dialects that are not set to
use the 'format' or 'pyformat' paramstyles; dialects such as most MySQL
dialects which do state one of these paramstyles will continue to escape
-as is appropriate::
+as is appropriate:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import literal_column
>>> print(literal_column("some%symbol"))
- some%symbol
+ {printsql}some%symbol{stop}
>>> from sqlalchemy.dialects import mysql
>>> print(literal_column("some%symbol").compile(dialect=mysql.dialect()))
- some%%symbol
+ {printsql}some%%symbol{stop}
As part of this change, the doubling that has been present when using
operators like :meth:`.ColumnOperators.contains`,
value = Column(String)
The ``User.values`` association proxy refers to the ``Element.value`` column.
-Standard column operations are now available, such as ``like``::
+Standard column operations are now available, such as ``like``:
+
+.. sourcecode:: pycon+sql
>>> print(s.query(User).filter(User.values.like("%foo%")))
{printsql}SELECT "user".id AS user_id
FROM element
WHERE "user".id = element.user_id AND element.value LIKE :value_1)
-``equals``::
+``equals``:
+
+.. sourcecode:: pycon+sql
>>> print(s.query(User).filter(User.values == "foo"))
{printsql}SELECT "user".id AS user_id
When comparing to ``None``, the ``IS NULL`` expression is augmented with
a test that the related row does not exist at all; this is the same
-behavior as before::
+behavior as before:
+
+.. sourcecode:: pycon+sql
>>> print(s.query(User).filter(User.values == None))
{printsql}SELECT "user".id AS user_id
Note that the :meth:`.ColumnOperators.contains` operator is in fact a string
comparison operator; **this is a change in behavior** in that previously,
the association proxy used ``.contains`` as a list containment operator only.
-With a column-oriented comparison, it now behaves like a "like"::
+With a column-oriented comparison, it now behaves like a "like":
+
+.. sourcecode:: pycon+sql
>>> print(s.query(User).filter(User.values.contains("foo")))
{printsql}SELECT "user".id AS user_id
value = Column(String)
The ``.contains()`` method produces the same expression as before, testing
-the list of ``User.elements`` for the presence of an ``Element`` object::
+the list of ``User.elements`` for the presence of an ``Element`` object:
+
+.. sourcecode:: pycon+sql
>>> print(s.query(User).filter(User.elements.contains(Element(id=1))))
SELECT "user".id AS user_id
stmt = select(A.id, A.data).where(A.id.in_([1, 2, 3]))
-The pre-execution string representation is::
+The pre-execution string representation is:
+
+.. sourcecode:: pycon+sql
>>> print(stmt)
{printsql}SELECT a.id, a.data
FROM a
WHERE a.id IN ([POSTCOMPILE_id_1])
-To render the values directly, use ``literal_binds`` as was the case previously::
+To render the values directly, use ``literal_binds`` as was the case previously:
+
+.. sourcecode:: pycon+sql
>>> print(stmt.compile(compile_kwargs={"literal_binds": True}))
{printsql}SELECT a.id, a.data
WHERE a.id IN (1, 2, 3)
A new flag, "render_postcompile", is added as a helper to allow the current
-bound value to be rendered as it would be passed to the database::
+bound value to be rendered as it would be passed to the database:
+
+.. sourcecode:: pycon+sql
>>> print(stmt.compile(compile_kwargs={"render_postcompile": True}))
{printsql}SELECT a.id, a.data
The warning is only generated by default when the statement is compiled by the
:class:`_engine.Connection` for execution; calling the :meth:`_expression.ClauseElement.compile`
-method will not emit a warning unless the linting flag is supplied::
+method will not emit a warning unless the linting flag is supplied:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy.sql import FROM_LINTING
>>> print(q.statement.compile(linting=FROM_LINTING))
tuples are organized and ordered in the identical way in which the columns were
selected. The ORM :class:`_query.Query` already works this way, so this change
allows for greater cross-compatibility between the two, which is a key goal of
-the 2.0 transition::
+the 2.0 transition:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import column, select
>>> c1, c2, c3, c4 = column("c1"), column("c2"), column("c3"), column("c4")
>>> stmt = select(c1, c2, c3.label("c2"), c2, c4)
>>> print(stmt)
- SELECT c1, c2, c3 AS c2, c2, c4
+ {printsql}SELECT c1, c2, c3 AS c2, c2, c4
To support this change, the :class:`_expression.ColumnCollection` used by
:class:`_expression.SelectBase` as well as for derived FROM clauses such as subqueries
also support duplicate columns; this includes the new
:attr:`_expression.SelectBase.selected_columns` attribute, the deprecated ``SelectBase.c``
attribute, as well as the :attr:`_expression.FromClause.c` attribute seen on constructs
-such as :class:`.Subquery` and :class:`_expression.Alias`::
+such as :class:`.Subquery` and :class:`_expression.Alias`:
+
+.. sourcecode:: pycon+sql
>>> list(stmt.selected_columns)
[
statements."`` is **removed**; the :meth:`_expression.Select.apply_labels` is still
available and is still used by the ORM for all SELECT operations, however it
does not imply deduplication of column objects, although it does imply
-deduplication of implicitly generated labels::
+deduplication of implicitly generated labels:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import table
>>> user = table("user", column("id"), column("name"))
Finally, the change makes it easier to create UNION and other
:class:`_selectable.CompoundSelect` objects, by ensuring that the number and position
of columns in a SELECT statement mirrors what was given, in a use case such
-as::
+as:
+
+.. sourcecode:: pycon+sql
>>> s1 = select(user, user.c.id)
>>> s2 = select(c1, c2, c3)
In SQLAlchemy Core expressions, we never deal with a raw generated name like
the above, as SQLAlchemy applies auto-labeling to expressions like these, which
-are up until now always a so-called "anonymous" expression::
+are up until now always a so-called "anonymous" expression:
+
+.. sourcecode:: pycon+sql
>>> print(select(cast(foo.c.data, String)))
- SELECT CAST(foo.data AS VARCHAR) AS anon_1 # old behavior
+ {printsql}SELECT CAST(foo.data AS VARCHAR) AS anon_1 # old behavior
FROM foo
These anonymous expressions were necessary as SQLAlchemy's
is becoming more comfortable with SELECT statements that have duplicate column
or label names such as in :ref:`change_4753`. So we now emulate PostgreSQL's
reasonable behavior for simple modifications to a single column, most
-prominently with CAST::
+prominently with CAST:
+
+.. sourcecode:: pycon+sql
>>> print(select(cast(foo.c.data, String)))
{printsql}SELECT CAST(foo.data AS VARCHAR) AS data
FROM foo
For CAST against expressions that don't have a name, the previous logic is used
-to generate the usual "anonymous" labels::
+to generate the usual "anonymous" labels:
+
+.. sourcecode:: pycon+sql
>>> print(select(cast("hi there," + foo.c.data, String)))
{printsql}SELECT CAST(:data_1 + foo.data AS VARCHAR) AS anon_1
A :func:`.cast` against a :class:`.Label`, despite having to omit the label
expression as these don't render inside of a CAST, will nonetheless make use of
-the given name::
+the given name:
+
+.. sourcecode:: pycon+sql
>>> print(select(cast(("hi there," + foo.c.data).label("hello_data"), String)))
{printsql}SELECT CAST(:data_1 + foo.data AS VARCHAR) AS hello_data
FROM foo
And of course as was always the case, :class:`.Label` can be applied to the
-expression on the outside to apply an "AS <name>" label directly::
+expression on the outside to apply an "AS <name>" label directly:
+
+.. sourcecode:: pycon+sql
>>> print(select(cast(("hi there," + foo.c.data), String).label("hello_data")))
{printsql}SELECT CAST(:data_1 + foo.data AS VARCHAR) AS hello_data
with it in order to eagerly load the ``bs`` collection. After the
``session.commit()``, the state of the object is expired. Upon accessing
the ``.data`` column attribute, the object is refreshed and this will now
-include the joinedload operation as well::
+include the joinedload operation as well:
+
+.. sourcecode:: pycon+sql
>>> a1.data
{execsql}SELECT a.id AS a_id, a.data AS a_data, b_1.id AS b_1_id, b_1.a_id AS b_1_a_id
strategy for these loaders is not necessary in order to eagerly load attributes
on a single object; so they will instead invoke the "immediateload" strategy in
a refresh scenario, which resembles the query emitted by "lazyload", emitted as
-an additional query::
+an additional query:
+
+.. sourcecode:: pycon+sql
>>> a1 = session.query(A).options(selectinload(A.bs)).first()
>>> a1.data = "new data"
-----------------------------------------------------------------------------
Literal rendering is now implemented for date and time types for backend
-specific compilation, including PostgreSQL and Oracle::
+specific compilation, including PostgreSQL and Oracle:
+
+.. sourcecode:: pycon+sql
>>> import datetime
... dialect=postgresql.dialect(), compile_kwargs={"literal_binds": True}
... )
... )
- '2022-12-17 11:02:13.575789'
+ {printsql}'2022-12-17 11:02:13.575789'{stop}
>>> print(
... date_literal.compile(
... dialect=oracle.dialect(), compile_kwargs={"literal_binds": True}
... )
... )
- TO_TIMESTAMP('2022-12-17 11:02:13.575789', 'YYYY-MM-DD HH24:MI:SS.FF')
+ {printsql}TO_TIMESTAMP('2022-12-17 11:02:13.575789', 'YYYY-MM-DD HH24:MI:SS.FF'){stop}
Previously, such literal rendering only worked when stringifying statements
without any dialect given; when attempting to render with a dialect-specific
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prior to SQLAlchemy 1.4, the :class:`.Sequence` construct would emit only
-simple ``CREATE SEQUENCE`` DDL, if no additional arguments were specified::
+simple ``CREATE SEQUENCE`` DDL, if no additional arguments were specified:
+
+.. sourcecode:: pycon+sql
>>> # SQLAlchemy 1.3 (and 2.0)
>>> from sqlalchemy import Sequence
However, as :class:`.Sequence` support was added for MS SQL Server, where the
default start value is inconveniently set to ``-2**63``,
version 1.4 decided to default the DDL to emit a start value of 1, if
-:paramref:`.Sequence.start` were not otherwise provided::
+:paramref:`.Sequence.start` were not otherwise provided:
+
+.. sourcecode:: pycon+sql
>>> # SQLAlchemy 1.4 (only)
>>> from sqlalchemy import Sequence
should interact with each other.
Therefore, to ensure that the start value is 1 on all backends,
-**the start value of 1 may be indicated explicitly**, as below::
+**the start value of 1 may be indicated explicitly**, as below:
+
+.. sourcecode:: pycon+sql
>>> # All SQLAlchemy versions
>>> from sqlalchemy import Sequence
>>> from sqlalchemy.schema import CreateSequence
>>> print(CreateSequence(Sequence("my_seq", start=1)))
- CREATE SEQUENCE my_seq START WITH 1
+ {printsql}CREATE SEQUENCE my_seq START WITH 1
Beyond all of that, for autogeneration of integer primary keys on modern
backends including PostgreSQL, Oracle, SQL Server, the :class:`.Identity`
"0". For this and similar backends, SQLAlchemy now renders the SQL using
a form which is equivalent to:
-.. sourcecode:: text
+.. sourcecode:: sql
%(param_1)s / CAST(%(param_2)s AS NUMERIC)
would return the floating point value "0.5". For these and similar backends,
SQLAlchemy now renders the SQL using a form which is equivalent to:
-.. sourcecode:: text
+.. sourcecode:: sql
FLOOR(%(param_1)s / %(param_2)s)
already supplied methods above, the most expedient way to produce this operator is
to use the :meth:`_sql.Operators.op` method on any SQL expression object; this method
is given a string representing the SQL operator to render, and the return value
-is a Python callable that accepts any arbitrary right-hand side expression::
+is a Python callable that accepts any arbitrary right-hand side expression:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import column
>>> expr = column("x").op(">>")(column("y"))
>>> print(expr)
- x >> y
+ {printsql}x >> y
When making use of custom SQL types, there is also a means of implementing
custom operators as above that are automatically present upon any column
referring to a new class, subclassing the :class:`.TypeEngine.Comparator` class
associated with the :class:`.Integer` type.
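
A minimal sketch of such a type, assuming the ``MyInt`` name and the "goofy"
operator used in the example below, might look like::

    from sqlalchemy import Integer


    class MyInt(Integer):
        class comparator_factory(Integer.Comparator):
            def __add__(self, other):
                # render the Python "+" operator as the custom "goofy" operator
                return self.op("goofy")(other)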
-Usage::
+Usage:
+
+.. sourcecode:: pycon+sql
>>> sometable = Table("sometable", metadata, Column("data", MyInt))
>>> print(sometable.c.data + 5)
- sometable.data goofy :data_1
+ {printsql}sometable.data goofy :data_1
The implementation for :meth:`.ColumnOperators.__add__` is consulted
by an owning SQL expression, by instantiating the :class:`.TypeEngine.Comparator` with
def log(self, other):
return func.log(self.expr, other)
-Using the above type::
+Using the above type:
+
+.. sourcecode:: pycon+sql
>>> print(sometable.c.data.log(5))
- log(:log_1, :log_2)
+ {printsql}log(:log_1, :log_2)
When using :meth:`.Operators.op` for comparison operations that return a
boolean result, the :paramref:`.Operators.op.is_comparison` flag should be
self.expr, modifier=operators.custom_op("!"), type_=MyInteger
)
-Using the above type::
+Using the above type:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy.sql import column
>>> print(column("x", MyInteger).factorial())
- x !
+ {printsql}x !
.. seealso::
objects that don't correspond directly to the database-native type name, if we
create a new :class:`_schema.Table` object against a new :class:`_schema.MetaData` collection
for this database table elsewhere using reflection, it will not have this
-datatype. For example::
+datatype. For example:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import (
... Table,
However, if we create another instance of :class:`_schema.Table` using reflection,
the use of :class:`.PickleType` is not represented in the SQLite database we've
-created; we instead get back :class:`.BLOB`::
+created; we instead get back :class:`.BLOB`:
+
+.. sourcecode:: pycon+sql
>>> metadata_two = MetaData()
>>> my_reflected_table = Table("my_table", metadata_two, autoload_with=engine)
In order to embed the "next value" function of a :class:`.Sequence`
inside of a SQL statement like a SELECT or INSERT, use the :meth:`.Sequence.next_value`
method, which will render at statement compilation time a SQL function that is
-appropriate for the target backend::
+appropriate for the target backend:
+
+.. sourcecode:: pycon+sql
>>> my_seq = Sequence("some_sequence", start=1)
>>> stmt = select(my_seq.next_value())
SQL that is rendered using this :class:`_schema.Table`, such as the SELECT
statement below, will explicitly qualify the table name ``financial_info`` with
-the ``remote_banks`` schema name::
+the ``remote_banks`` schema name:
+
+.. sourcecode:: pycon+sql
>>> print(select(financial_info))
- SELECT remote_banks.financial_info.id, remote_banks.financial_info.value
+ {printsql}SELECT remote_banks.financial_info.id, remote_banks.financial_info.value
FROM remote_banks.financial_info
When a :class:`_schema.Table` object is declared with an explicit schema
+.. highlight:: pycon+sql
+
Operator Reference
===============================
* :meth:`_sql.ColumnOperators.__eq__` (Python "``==``" operator)::
>>> print(column("x") == 5)
- x = :x_1
+ {printsql}x = :x_1
..
* :meth:`_sql.ColumnOperators.__ne__` (Python "``!=``" operator)::
>>> print(column("x") != 5)
- x != :x_1
+ {printsql}x != :x_1
..
* :meth:`_sql.ColumnOperators.__gt__` (Python "``>``" operator)::
>>> print(column("x") > 5)
- x > :x_1
+ {printsql}x > :x_1
..
* :meth:`_sql.ColumnOperators.__lt__` (Python "``<``" operator)::
>>> print(column("x") < 5)
- x < :x_1
+ {printsql}x < :x_1
..
* :meth:`_sql.ColumnOperators.__ge__` (Python "``>=``" operator)::
>>> print(column("x") >= 5)
- x >= :x_1
+ {printsql}x >= :x_1
..
* :meth:`_sql.ColumnOperators.__le__` (Python "``<=``" operator)::
>>> print(column("x") <= 5)
- x <= :x_1
+ {printsql}x <= :x_1
..
* :meth:`_sql.ColumnOperators.between`::
>>> print(column("x").between(5, 10))
- x BETWEEN :x_1 AND :x_2
+ {printsql}x BETWEEN :x_1 AND :x_2
..
IN is available most typically by passing a list of
values to the :meth:`_sql.ColumnOperators.in_` method::
-
>>> print(column("x").in_([1, 2, 3]))
- x IN (__[POSTCOMPILE_x_1])
+ {printsql}x IN (__[POSTCOMPILE_x_1])
The special bound form ``__[POSTCOMPILE`` is rendered into individual parameters
-at execution time, illustrated below:
-
-.. sourcecode:: pycon+sql
+at execution time, illustrated below::
>>> stmt = select(User.id).where(User.id.in_([1, 2, 3]))
>>> result = conn.execute(stmt)
SQLAlchemy produces a mathematically valid result for an empty IN expression
by rendering a backend-specific subquery that returns no rows. Again
-in other words, "it just works":
-
-.. sourcecode:: pycon+sql
+in other words, "it just works"::
>>> stmt = select(User.id).where(User.id.in_([]))
>>> result = conn.execute(stmt)
"NOT IN" is available via the :meth:`_sql.ColumnOperators.not_in` operator::
>>> print(column("x").not_in([1, 2, 3]))
- (x NOT IN (__[POSTCOMPILE_x_1]))
+ {printsql}(x NOT IN (__[POSTCOMPILE_x_1]))
This is typically more easily available by negating with the ``~`` operator::
>>> print(~column("x").in_([1, 2, 3]))
- (x NOT IN (__[POSTCOMPILE_x_1]))
+ {printsql}(x NOT IN (__[POSTCOMPILE_x_1]))
Tuple IN Expressions
~~~~~~~~~~~~~~~~~~~~
>>> tup = tuple_(column("x", Integer), column("y", Integer))
>>> expr = tup.in_([(1, 2), (3, 4)])
>>> print(expr)
- (x, y) IN (__[POSTCOMPILE_param_1])
-
-To illustrate the parameters rendered:
-
-.. sourcecode:: pycon+sql
+ {printsql}(x, y) IN (__[POSTCOMPILE_param_1])
+To illustrate the parameters rendered::
>>> tup = tuple_(User.id, Address.id)
>>> stmt = select(User.name).join(Address).where(tup.in_([(1, 1), (2, 2)]))
subquery::
>>> print(column("x").in_(select(user_table.c.id)))
- x IN (SELECT user_account.id
+ {printsql}x IN (SELECT user_account.id
FROM user_account)
Tuples work as expected::
... select(user_table.c.id, address_table.c.id).join(address_table)
... )
... )
- (x, y) IN (SELECT user_account.id, address.id
+ {printsql}(x, y) IN (SELECT user_account.id, address.id
FROM user_account JOIN address ON user_account.id = address.user_id)
Identity Comparisons
using regular Python ``None``::
>>> print(column("x").is_(None))
- x IS NULL
+ {printsql}x IS NULL
SQL NULL is also explicitly available, if needed, using the
:func:`_sql.null` construct::
>>> from sqlalchemy import null
>>> print(column("x").is_(null()))
- x IS NULL
+ {printsql}x IS NULL
The :meth:`_sql.ColumnOperators.is_` operator is automatically invoked when
using the :meth:`_sql.ColumnOperators.__eq__` overloaded operator, i.e.
>>> a = None
>>> print(column("x") == a)
- x IS NULL
+ {printsql}x IS NULL
Note that the Python ``is`` operator is **not overloaded**. Even though
Python provides hooks to overload operators such as ``==`` and ``!=``,
Similar to :meth:`_sql.ColumnOperators.is_`, produces "IS NOT"::
>>> print(column("x").is_not(None))
- x IS NOT NULL
+ {printsql}x IS NOT NULL
Is similarly equivalent to ``!= None``::
>>> print(column("x") != None)
- x IS NOT NULL
+ {printsql}x IS NOT NULL
* :meth:`_sql.ColumnOperators.is_distinct_from`:
Produces SQL IS DISTINCT FROM::
>>> print(column("x").is_distinct_from("some value"))
- x IS DISTINCT FROM :x_1
+ {printsql}x IS DISTINCT FROM :x_1
* :meth:`_sql.ColumnOperators.isnot_distinct_from`:
Produces SQL IS NOT DISTINCT FROM::
>>> print(column("x").isnot_distinct_from("some value"))
- x IS NOT DISTINCT FROM :x_1
+ {printsql}x IS NOT DISTINCT FROM :x_1
String Comparisons
^^^^^^^^^^^^^^^^^^
* :meth:`_sql.ColumnOperators.like`::
>>> print(column("x").like("word"))
- x LIKE :x_1
+ {printsql}x LIKE :x_1
..
generic backend. On the PostgreSQL backend it will use ``ILIKE``::
>>> print(column("x").ilike("word"))
- lower(x) LIKE lower(:x_1)
+ {printsql}lower(x) LIKE lower(:x_1)
..
* :meth:`_sql.ColumnOperators.notlike`::
>>> print(column("x").notlike("word"))
- x NOT LIKE :x_1
+ {printsql}x NOT LIKE :x_1
..
* :meth:`_sql.ColumnOperators.notilike`::
>>> print(column("x").notilike("word"))
- lower(x) NOT LIKE lower(:x_1)
+ {printsql}lower(x) NOT LIKE lower(:x_1)
..
* :meth:`_sql.ColumnOperators.startswith`::
>>> print(column("x").startswith("word"))
- x LIKE :x_1 || '%'
+ {printsql}x LIKE :x_1 || '%'
..
* :meth:`_sql.ColumnOperators.endswith`::
>>> print(column("x").endswith("word"))
- x LIKE '%' || :x_1
+ {printsql}x LIKE '%' || :x_1
..
* :meth:`_sql.ColumnOperators.contains`::
>>> print(column("x").contains("word"))
- x LIKE '%' || :x_1 || '%'
+ {printsql}x LIKE '%' || :x_1 || '%'
..
feature of the underlying database, if available::
>>> print(column("x").match("word"))
- x MATCH :x_1
+ {printsql}x MATCH :x_1
..
>>> from sqlalchemy.dialects import postgresql
>>> print(column("x").regexp_match("word").compile(dialect=postgresql.dialect()))
- x ~ %(x_1)s
+ {printsql}x ~ %(x_1)s
Or MySQL::
>>> from sqlalchemy.dialects import mysql
>>> print(column("x").regexp_match("word").compile(dialect=mysql.dialect()))
- x REGEXP %s
+ {printsql}x REGEXP %s
..
String concatenation::
>>> print(column("x").concat("some string"))
- x || :x_1
+ {printsql}x || :x_1
This operator is available via :meth:`_sql.ColumnOperators.__add__`, that
is, the Python ``+`` operator, when working with a column expression that
derives from :class:`_types.String`::
>>> print(column("x", String) + "some string")
- x || :x_1
+ {printsql}x || :x_1
The operator will produce the appropriate database-specific construct,
for example, on MySQL it has historically been the ``concat()`` SQL function::
>>> print((column("x", String) + "some string").compile(dialect=mysql.dialect()))
- concat(x, %s)
+ {printsql}concat(x, %s)
..
REPLACE equivalent for the backends which support it::
>>> print(column("x").regexp_replace("foo", "bar").compile(dialect=postgresql.dialect()))
- REGEXP_REPLACE(x, %(x_1)s, %(x_2)s)
+ {printsql}REGEXP_REPLACE(x, %(x_1)s, %(x_2)s)
..
... dialect=mysql.dialect()
... )
... )
- (x COLLATE latin1_german2_ci) = %s
+ {printsql}(x COLLATE latin1_german2_ci) = %s
To use COLLATE against a literal value, use the :func:`_sql.literal` construct::
... dialect=mysql.dialect()
... )
... )
- (%s COLLATE latin1_german2_ci) = x
+ {printsql}(%s COLLATE latin1_german2_ci) = x
..
* :meth:`_sql.ColumnOperators.__add__`, :meth:`_sql.ColumnOperators.__radd__` (Python "``+``" operator)::
>>> print(column("x") + 5)
- x + :x_1
+ {printsql}x + :x_1{stop}
>>> print(5 + column("x"))
- :x_1 + x
+ {printsql}:x_1 + x{stop}
..
* :meth:`_sql.ColumnOperators.__sub__`, :meth:`_sql.ColumnOperators.__rsub__` (Python "``-``" operator)::
>>> print(column("x") - 5)
- x - :x_1
+ {printsql}x - :x_1{stop}
>>> print(5 - column("x"))
- :x_1 - x
+ {printsql}:x_1 - x{stop}
..
* :meth:`_sql.ColumnOperators.__mul__`, :meth:`_sql.ColumnOperators.__rmul__` (Python "``*``" operator)::
>>> print(column("x") * 5)
- x * :x_1
+ {printsql}x * :x_1{stop}
>>> print(5 * column("x"))
- :x_1 * x
+ {printsql}:x_1 * x{stop}
..
This is the Python ``truediv`` operator, which will ensure integer true division occurs::
>>> print(column("x") / 5)
- x / CAST(:x_1 AS NUMERIC)
+ {printsql}x / CAST(:x_1 AS NUMERIC){stop}
>>> print(5 / column("x"))
- :x_1 / CAST(x AS NUMERIC)
+ {printsql}:x_1 / CAST(x AS NUMERIC){stop}
.. versionchanged:: 2.0 The Python ``/`` operator now ensures integer true division takes place
behaves this way for integer values::
>>> print(column("x") // 5)
- x / :x_1
+ {printsql}x / :x_1{stop}
>>> print(5 // column("x", Integer))
- :x_1 / x
+ {printsql}:x_1 / x{stop}
For backends that don't use floor division by default, or when used with numeric values,
the FLOOR() function is used to ensure floor division::
>>> print(column("x") // 5.5)
- FLOOR(x / :x_1)
+ {printsql}FLOOR(x / :x_1){stop}
>>> print(5 // column("x", Numeric))
- FLOOR(:x_1 / x)
+ {printsql}FLOOR(:x_1 / x){stop}
.. versionadded:: 2.0 Support for FLOOR division
* :meth:`_sql.ColumnOperators.__mod__`, :meth:`_sql.ColumnOperators.__rmod__` (Python "``%``" operator)::
>>> print(column("x") % 5)
- x % :x_1
+ {printsql}x % :x_1{stop}
>>> print(5 % column("x"))
- :x_1 % x
+ {printsql}:x_1 % x{stop}
..
>>> from sqlalchemy import not_
>>> print(not_(column("x") == 5))
- x != :x_1
+ {printsql}x != :x_1
It also may apply a keyword such as ``NOT`` when appropriate::
>>> from sqlalchemy import Boolean
>>> print(not_(column("x", Boolean)))
- NOT x
+ {printsql}NOT x
Conjunction Operators
as :func:`_sql.and_` (note parentheses around the two operands)::
>>> print((column("x") == 5) & (column("y") == 10))
- x = :x_1 AND y = :y_1
+ {printsql}x = :x_1 AND y = :y_1
..
as :func:`_sql.or_` (note parentheses around the two operands)::
>>> print((column("x") == 5) | (column("y") == 10))
- x = :x_1 OR y = :y_1
+ {printsql}x = :x_1 OR y = :y_1
..
applying the ``NOT`` keyword to the expression as a whole::
>>> print(~(column("x") == 5))
- x != :x_1
+ {printsql}x != :x_1{stop}
>>> from sqlalchemy import Boolean
>>> print(~column("x", Boolean))
- NOT x
+ {printsql}NOT x{stop}
..
background as well.
Normally, a Core SQL construct or ORM :class:`_query.Query` object can be stringified
-directly, such as when we use ``print()``::
+directly, such as when we use ``print()``:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import column
>>> print(column("x") == 5)
- x = :x_1
+ {printsql}x = :x_1
When the above SQL expression is stringified, the :class:`.StrSQLCompiler`
compiler class is used, which is a special statement compiler that is invoked
In order to stringify constructs that are specific to a particular backend,
the :meth:`_expression.ClauseElement.compile` method must be used, passing either an
:class:`_engine.Engine` or a :class:`.Dialect` object which will invoke the correct
-compiler. Below we use a PostgreSQL dialect::
+compiler. Below we use a PostgreSQL dialect:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy.dialects import postgresql
>>> print(insert_stmt.compile(dialect=postgresql.dialect()))
- INSERT INTO my_table (x) VALUES (%(x)s) ON CONFLICT (y) DO NOTHING
+ {printsql}INSERT INTO my_table (x) VALUES (%(x)s) ON CONFLICT (y) DO NOTHING
For an ORM :class:`_query.Query` object, the statement can be accessed using the
:attr:`~.orm.query.Query.statement` accessor::
in the majority of simple cases is as simple as using
the ``str()`` builtin function, as below when we use it with the ``print``
function (note the Python ``print`` function also calls ``str()`` automatically
-if we don't use it explicitly)::
+if we don't use it explicitly):
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import table, column, select
>>> t = table("my_table", column("x"))
The ``str()`` builtin, or an equivalent, can be invoked on ORM
:class:`_query.Query` object as well as any statement such as that of
:func:`_expression.select`, :func:`_expression.insert` etc. and also any expression fragment, such
-as::
+as:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import column
>>> print(column("x") == "some value")
- x = :x_1
+ {printsql}x = :x_1
Stringifying for Specific Databases
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> stmt = select(A).where(A.id.in_([1, 2, 3]))
To render the IN clause with real bound parameter symbols, use the
-``render_postcompile=True`` flag with :meth:`_sql.ClauseElement.compile`::
+``render_postcompile=True`` flag with :meth:`_sql.ClauseElement.compile`:
+
+.. sourcecode:: pycon+sql
>>> e = create_engine("postgresql+psycopg2://")
>>> print(stmt.compile(e, compile_kwargs={"render_postcompile": True}))
- SELECT a.id, a.data
+ {printsql}SELECT a.id, a.data
FROM a
WHERE a.id IN (%(id_1_1)s, %(id_1_2)s, %(id_1_3)s)
The ``literal_binds`` flag, described in the previous section regarding
rendering of bound parameters, automatically sets ``render_postcompile`` to
True, so for a statement with simple ints/strings, these can be stringified
-directly::
+directly:
+
+.. sourcecode:: pycon+sql
>>> # render_postcompile is implied by literal_binds
>>> print(stmt.compile(e, compile_kwargs={"literal_binds": True}))
- SELECT a.id, a.data
+ {printsql}SELECT a.id, a.data
FROM a
WHERE a.id IN (1, 2, 3)
The :attr:`.SQLCompiler.params` and :attr:`.SQLCompiler.positiontup` are
also compatible with ``render_postcompile``, so that
the previous recipes for rendering inline bound parameters will work here
-in the same way, such as SQLite's positional form::
+in the same way, such as SQLite's positional form:
+
+.. sourcecode:: pycon+sql
>>> u1, u2, u3 = uuid.uuid4(), uuid.uuid4(), uuid.uuid4()
>>> stmt = select(A).where(A.data.in_([u1, u2, u3]))
>>> compiled = stmt.compile(e, compile_kwargs={"render_postcompile": True})
>>> params = (repr(compiled.params[name]) for name in compiled.positiontup)
>>> print(re.sub(r"\?", lambda m: next(params), str(compiled)))
- SELECT a.id, a.data
+ {printsql}SELECT a.id, a.data
FROM a
WHERE a.data IN (UUID('aa1944d6-9a5a-45d5-b8da-0ba1ef0a4f38'), UUID('a81920e6-15e2-4392-8a3c-d775ffa9ccd2'), UUID('b5574cdb-ff9b-49a3-be52-dbc89f087bfa'))
The default compilers for databases like PostgreSQL (default DBAPI is psycopg2)
and MySQL (default DBAPI is mysqlclient) will have this percent sign
-escaping behavior::
+escaping behavior:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import table, column
>>> from sqlalchemy.dialects import postgresql
When such a dialect is being used, if non-DBAPI statements are desired that
don't include bound parameter symbols, one quick way to remove the percent
signs is to simply substitute in an empty set of parameters using Python's
-``%`` operator directly::
+``%`` operator directly:
+
+.. sourcecode:: pycon+sql
>>> strstmt = str(t.select().compile(dialect=postgresql.dialect()))
>>> print(strstmt % ())
dialect to use the given parameter style. Below, the very common ``named``
parameter style is set within the dialect used for the compilation so that
percent signs are no longer significant in the compiled form of SQL, and will
-no longer be escaped::
+no longer be escaped:
+
+.. sourcecode:: pycon+sql
>>> print(t.select().compile(dialect=postgresql.dialect(paramstyle="named")))
{printsql}SELECT my_table."value % one", my_table."value % two"
---------------------------------------------------------------------------------------------
The :meth:`.Operators.op` method allows one to create a custom database operator
-otherwise not known by SQLAlchemy::
+otherwise not known by SQLAlchemy:
+
+.. sourcecode:: pycon+sql
>>> print(column("q").op("->")(column("p")))
- q -> p
+ {printsql}q -> p
However, when using it on the right side of a compound expression, it doesn't
-generate parenthesis as we expect::
+generate parentheses as we expect:
+
+.. sourcecode:: pycon+sql
>>> print((column("q1") + column("q2")).op("->")(column("p")))
- q1 + q2 -> p
+ {printsql}q1 + q2 -> p
Where above, we probably want ``(q1 + q2) -> p``.
The solution to this case is to set the precedence of the operator, using
the :paramref:`.Operators.op.precedence` parameter, to a high
number, where 100 is the maximum value, and the highest number used by any
-SQLAlchemy operator is currently 15::
+SQLAlchemy operator is currently 15:
+
+.. sourcecode:: pycon+sql
>>> print((column("q1") + column("q2")).op("->", precedence=100)(column("p")))
- (q1 + q2) -> p
+ {printsql}(q1 + q2) -> p
We can also usually force parenthesization around a binary expression (e.g.
an expression that has left/right operands and an operator) using the
-:meth:`_expression.ColumnElement.self_group` method::
+:meth:`_expression.ColumnElement.self_group` method:
+
+.. sourcecode:: pycon+sql
>>> print((column("q1") + column("q2")).self_group().op("->")(column("p")))
- (q1 + q2) -> p
+ {printsql}(q1 + q2) -> p
Why are the parentheses rules like this?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
column("q", ARRAY(Integer, dimensions=2))[5][6]
-would produce::
+would produce:
+
+.. sourcecode:: sql
((q[5])[6])
What if we defaulted the value of :paramref:`.Operators.op.precedence` to 100,
e.g. the highest? Then this expression makes more parentheses, but is
-otherwise OK, that is, these two are equivalent::
+otherwise OK, that is, these two are equivalent:
+
+.. sourcecode:: pycon+sql
>>> print((column("q") - column("y")).op("+", precedence=100)(column("z")))
- (q - y) + z
+ {printsql}(q - y) + z{stop}
>>> print((column("q") - column("y")).op("+")(column("z")))
- q - y + z
+ {printsql}q - y + z{stop}
-but these two are not::
+but these two are not:
+
+.. sourcecode:: pycon+sql
>>> print(column("q") - column("y").op("+", precedence=100)(column("z")))
- q - y + z
+ {printsql}q - y + z{stop}
>>> print(column("q") - column("y").op("+")(column("z")))
- q - (y + z)
+ {printsql}q - (y + z){stop}
For now, it's not clear that as long as we are doing parenthesization based on
operator precedence and associativity, if there is really a way to parenthesize
the ``__eq__()`` method and enforcing that the return type of an expression is
``numpy.True`` or ``numpy.False``, which breaks SQLAlchemy's expression
language behavior that expects to return :class:`_sql.ColumnElement`
-expressions from Python equality comparisons::
+expressions from Python equality comparisons:
+
+.. sourcecode:: pycon+sql
>>> import numpy
>>> from sqlalchemy import column, Integer
>>> print(column("x", Integer) == numpy.int64(10)) # works
- x = :x_1
+ {printsql}x = :x_1{stop}
>>> print(numpy.int64(10) == column("x", Integer)) # breaks
False
of :class:`.InstrumentedAttribute`, which are descriptors that
provide the above mentioned ``__get__()``, ``__set__()`` and
``__delete__()`` methods. The :class:`.InstrumentedAttribute`
- will generate a SQL expression when used at the class level::
+ will generate a SQL expression when used at the class level:
+
+ .. sourcecode:: pycon+sql
>>> print(MyClass.data == 5)
- data = :data_1
+ {printsql}data = :data_1
and at the instance level, keeps track of changes to values,
and also :term:`lazy loads` unloaded attributes
Since ``Point`` is a dataclass, we may make use of
``dataclasses.astuple()`` to get a tuple form of ``Point`` instances.
-The custom comparator then returns the appropriate SQL expression::
+The custom comparator then returns the appropriate SQL expression:
+
+.. sourcecode:: pycon+sql
>>> print(Vertex.start > Point(5, 6))
- vertices.x1 > :x1_1 AND vertices.y1 > :y1_1
+ {printsql}vertices.x1 > :x1_1 AND vertices.y1 > :y1_1
Nesting Composites
object called ``LogRecord``.
The features above may all be demonstrated using a :func:`_sql.select`
-example::
+example:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import select
>>> print(select(MyModel).join(MyModel.log_record))
id: Mapped[int] = mapped_column(primary_key=True)
With the above mapping, each of ``Foo`` and ``Bar`` contain a relationship
-to ``Target`` accessed along the ``.target`` attribute::
+to ``Target`` accessed along the ``.target`` attribute:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import select
>>> print(select(Foo).join(Foo.target))
{printsql}SELECT foo.id, foo.target_id
- FROM foo JOIN target ON target.id = foo.target_id
+ FROM foo JOIN target ON target.id = foo.target_id{stop}
>>> print(select(Bar).join(Bar.target))
{printsql}SELECT bar.id, bar.target_id
- FROM bar JOIN target ON target.id = bar.target_id
+ FROM bar JOIN target ON target.id = bar.target_id{stop}
Special arguments such as :paramref:`_orm.relationship.primaryjoin` may also
be used within mixed-in classmethods, which often need to refer to the class
id: Mapped[int] = mapped_column(primary_key=True)
Above, we may make use of ``Something.x_plus_y`` in a statement where
-it produces the full expression::
+it produces the full expression:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import select
>>> print(select(Something.x_plus_y))
status: Mapped[str]
Below illustrates the CREATE TABLE statement generated for the above mapping,
-first on the Microsoft SQL Server backend, illustrating the ``NVARCHAR`` datatype::
+first on the Microsoft SQL Server backend, illustrating the ``NVARCHAR`` datatype:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy.schema import CreateTable
>>> from sqlalchemy.dialects import mssql, postgresql
PRIMARY KEY (id)
)
-Then on the PostgreSQL backend, illustrating ``TIMESTAMP WITH TIME ZONE``::
+Then on the PostgreSQL backend, illustrating ``TIMESTAMP WITH TIME ZONE``:
+
+.. sourcecode:: pycon+sql
>>> print(CreateTable(SomeClass.__table__).compile(dialect=postgresql.dialect()))
{printsql}CREATE TABLE some_table (
short_num_value: Mapped[num_6_2]
a CREATE TABLE for the above mapping will illustrate the different variants
-of ``VARCHAR`` and ``NUMERIC`` we've configured, and looks like::
+of ``VARCHAR`` and ``NUMERIC`` we've configured, and looks like:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy.schema import CreateTable
>>> print(CreateTable(SomeClass.__table__))
- CREATE TABLE some_table (
+ {printsql}CREATE TABLE some_table (
short_name VARCHAR(30) NOT NULL,
long_name VARCHAR(50) NOT NULL,
num_value NUMERIC(12, 4) NOT NULL,
name: Mapped[required_name]
created_at: Mapped[timestamp]
-``CREATE TABLE`` for our above mapping looks like::
+``CREATE TABLE`` for our above mapping looks like:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy.schema import CreateTable
>>> print(CreateTable(SomeClass.__table__))
The CREATE TABLE statement illustrates these per-attribute settings,
adding a ``FOREIGN KEY`` constraint as well as substituting
-``UTC_TIMESTAMP`` for ``CURRENT_TIMESTAMP``::
+``UTC_TIMESTAMP`` for ``CURRENT_TIMESTAMP``:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy.schema import CreateTable
>>> print(CreateTable(SomeClass.__table__))
- CREATE TABLE some_table (
+ {printsql}CREATE TABLE some_table (
id INTEGER NOT NULL,
created_at DATETIME DEFAULT UTC_TIMESTAMP() NOT NULL,
PRIMARY KEY (id),
Where above ``User.id`` resolves to a column named ``user_id``
and ``User.name`` resolves to a column named ``user_name``. We
may write a :func:`_sql.select` statement using our Python attribute names
-and will see the SQL names generated::
+and will see the SQL names generated:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import select
>>> print(select(User.id, User.name).where(User.name == "x"))
the need for additional modifications to the enclosing query. If the
immediate target of an association proxy is a **mapped column expression**,
standard column operators can be used which will be embedded in the subquery.
-For example a straight equality operator::
+For example a straight equality operator:
+
+.. sourcecode:: pycon+sql
>>> print(session.scalars(select(User).where(User.special_keys == "jek")))
{printsql}SELECT "user".id AS user_id, "user".name AS user_name
FROM user_keyword
WHERE "user".id = user_keyword.user_id AND user_keyword.special_key = :special_key_1)
-a LIKE operator::
+a LIKE operator:
+
+.. sourcecode:: pycon+sql
>>> print(session.scalars(select(User).where(User.special_keys.like("%jek"))))
{printsql}SELECT "user".id AS user_id, "user".name AS user_name
operators can be used instead, such as :meth:`_orm.PropComparator.has` and
:meth:`_orm.PropComparator.any`. The ``User.keywords`` attribute is in fact
two association proxies linked together, so when using this proxy for generating
-SQL phrases, we get two levels of EXISTS subqueries::
+SQL phrases, we get two levels of EXISTS subqueries:
+
+.. sourcecode:: pycon+sql
>>> print(session.scalars(select(User).where(User.keywords.any(Keyword.keyword == "jek"))))
{printsql}SELECT "user".id AS user_id, "user".name AS user_name
Using the above mapping, queries can be produced in terms of the ``Employee``
class and any attributes that are locally declared upon it, such as the
-``Employee.name``::
+``Employee.name``:
+
+.. sourcecode:: pycon+sql
>>> stmt = select(Employee).where(Employee.name == "n1")
>>> print(stmt)
The above class ``MyClass`` has two attributes, ``.job_status`` and
``.status`` that will behave as one attribute, both at the expression
-level::
+level:
+
+.. sourcecode:: pycon+sql
>>> print(MyClass.job_status == "some_status")
- my_table.job_status = :job_status_1
+ {printsql}my_table.job_status = :job_status_1{stop}
>>> print(MyClass.status == "some_status")
- my_table.job_status = :job_status_1
+ {printsql}my_table.job_status = :job_status_1{stop}
and at the instance level::
object given::
>>> user_obj = session.get(User, 1)
- {execsql}SELECT ...
+ {execsql}SELECT ...{stop}
>>> print(select(Address).where(Address.user == user_obj))
{printsql}SELECT address.id, address.user_id, address.email_address
FROM address
primary key equals the value of the foreign key in a related object::
>>> address_obj = session.get(Address, 1)
- {execsql}SELECT ...
+ {execsql}SELECT ...{stop}
>>> print(select(User).where(User.addresses.contains(address_obj)))
{printsql}SELECT user_account.id, user_account.name, user_account.fullname
FROM user_account
and :meth:`_sql.ColumnElement.desc` modifiers, which are present
from ORM-bound attributes as well::
-
>>> print(select(User).order_by(User.fullname.desc()))
{printsql}SELECT user_account.id, user_account.name, user_account.fullname
FROM user_account ORDER BY user_account.fullname DESC
>>> from sqlalchemy.dialects import postgresql
>>> print(select(func.now()).compile(dialect=postgresql.dialect()))
- {printsql}SELECT now() AS now_1
-
+ {printsql}SELECT now() AS now_1{stop}
>>> from sqlalchemy.dialects import oracle
>>> print(select(func.now()).compile(dialect=oracle.dialect()))
- {printsql}SELECT CURRENT_TIMESTAMP AS now_1 FROM DUAL
+ {printsql}SELECT CURRENT_TIMESTAMP AS now_1 FROM DUAL{stop}
Functions Have Return Types
~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>> stmt = select(function_expr["def"])
>>> print(stmt)
- SELECT json_object(:json_object_1)[:json_object_2] AS anon_1
+ {printsql}SELECT json_object(:json_object_1)[:json_object_2] AS anon_1
Built-in Functions Have Pre-Configured Return Types
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
user_account.name, address.email_address
FROM user_account JOIN address ON user_account.id = address.user_id
[...] ()
- [(1, 'sandy', 'sandy@sqlalchemy.org'), (2, 'sandy', 'sandy@squirrelpower.org'), (1, 'spongebob', 'spongebob@sqlalchemy.org')]
- ROLLBACK
+ {stop}[(1, 'sandy', 'sandy@sqlalchemy.org'), (2, 'sandy', 'sandy@squirrelpower.org'), (1, 'spongebob', 'spongebob@sqlalchemy.org')]
+ {printsql}ROLLBACK{stop}
Above, the :paramref:`_functions.FunctionElement.over.partition_by` parameter
is used so that the ``PARTITION BY`` clause is rendered within the OVER clause.
user_account.name, address.email_address
FROM user_account JOIN address ON user_account.id = address.user_id
[...] ()
- [(2, 'sandy', 'sandy@sqlalchemy.org'), (2, 'sandy', 'sandy@squirrelpower.org'), (3, 'spongebob', 'spongebob@sqlalchemy.org')]
- ROLLBACK
+ {stop}[(2, 'sandy', 'sandy@sqlalchemy.org'), (2, 'sandy', 'sandy@squirrelpower.org'), (3, 'spongebob', 'spongebob@sqlalchemy.org')]
+ {printsql}ROLLBACK{stop}
Further options for window functions include usage of ranges; see
:func:`_expression.over` for more examples.
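
As a brief sketch using a free-standing :func:`_sql.column` (rather than the
tables above), a ``rows`` value of ``(-2, 0)`` limits the window to the two
preceding rows plus the current row:

.. sourcecode:: pycon+sql

    >>> from sqlalchemy import column, func
    >>> print(func.row_number().over(order_by=column("x"), rows=(-2, 0)))
    {printsql}row_number() OVER (ORDER BY x ROWS BETWEEN :param_1 PRECEDING AND CURRENT ROW){stop}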
... func.percentile_disc([0.25, 0.5, 0.75, 1]).within_group(user_table.c.name)
... )
... )
- unnest(percentile_disc(:percentile_disc_1) WITHIN GROUP (ORDER BY user_account.name))
+ {printsql}unnest(percentile_disc(:percentile_disc_1) WITHIN GROUP (ORDER BY user_account.name))
"FILTER" is supported by some backends to limit the range of an aggregate function to a
particular subset of rows compared to the total range of rows returned, available
>>> from sqlalchemy import JSON
>>> print(cast("{'a': 'b'}", JSON)["a"])
- CAST(:param_1 AS JSON)[:param_2]
+ {printsql}CAST(:param_1 AS JSON)[:param_2]
type_coerce() - a Python-only "cast"
----------------
The :class:`.Sequence` object creates "real" sequences, i.e.,
-``CREATE SEQUENCE``::
+``CREATE SEQUENCE``:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import Sequence
>>> from sqlalchemy.schema import CreateSequence
For integer primary key generation, SQL Server's ``IDENTITY`` construct should
generally be preferred vs. sequence.
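
A minimal sketch of that preferred form, using illustrative names, might
look like::

    from sqlalchemy import Column, Identity, Integer, MetaData, Table

    metadata_obj = MetaData()

    t = Table(
        "t",
        metadata_obj,
        # renders an IDENTITY-based integer primary key on SQL Server,
        # rather than relying on a Sequence
        Column("id", Integer, Identity(start=1), primary_key=True),
    )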
-..tip::
+.. tip::
The default start value for T-SQL is ``-2**63`` instead of 1 as
in most other SQL databases. Users should explicitly set the
this legacy mode of rendering would assume that "customer_schema.account"
would not be accepted by all parts of the SQL statement, as illustrated
-below::
+below:
+
+.. sourcecode:: pycon+sql
>>> eng = create_engine("mssql+pymssql://mydsn", legacy_schema_aliasing=True)
>>> print(account_table.select().compile(eng))
Examples from PostgreSQL's reference documentation follow below:
-* ``json_each()``::
+* ``json_each()``:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import select, func
>>> stmt = select(func.json_each('{"a":"foo", "b":"bar"}').table_valued("key", "value"))
{printsql}SELECT anon_1.key, anon_1.value
FROM json_each(:json_each_1) AS anon_1
-* ``json_populate_record()``::
+* ``json_populate_record()``:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import select, func, literal_column
>>> stmt = select(
types to produce them. The :meth:`_functions.FunctionElement.table_valued`
method produces a :class:`_sql.TableValuedAlias` construct, and the
:meth:`_sql.TableValuedAlias.render_derived` method sets up the derived
- columns specification::
+ columns specification:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import select, func, column, Integer, Text
>>> stmt = select(
of PostgreSQL functions including ``unnest()`` and ``generate_series()``. The
:meth:`_functions.FunctionElement.table_valued` method accepts a keyword
parameter ``with_ordinality`` for this purpose, which accepts the string name
- that will be applied to the "ordinality" column::
+ that will be applied to the "ordinality" column:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import select, func
>>> stmt = select(
``unnest()`` and ``generate_series()`` may use this form. Column valued functions are available using the
:meth:`_functions.FunctionElement.column_valued` method of :class:`_functions.FunctionElement`:
-* ``json_array_elements()``::
+* ``json_array_elements()``:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import select, func
>>> stmt = select(func.json_array_elements('["one", "two"]').column_valued("x"))
>>> print(stmt)
- SELECT x
+ {printsql}SELECT x
FROM json_array_elements(:json_array_elements_1) AS x
* ``unnest()`` - in order to generate a PostgreSQL ARRAY literal, the
- :func:`_postgresql.array` construct may be used::
+ :func:`_postgresql.array` construct may be used:
+
+  .. sourcecode:: pycon+sql
>>> from sqlalchemy.dialects.postgresql import array
>>> from sqlalchemy import select, func
FROM unnest(ARRAY[%(param_1)s, %(param_2)s]) AS anon_1
The function can of course be used against an existing table-bound column
- that's of type :class:`_types.ARRAY`::
+ that's of type :class:`_types.ARRAY`:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import table, column, ARRAY, Integer
>>> from sqlalchemy import select, func
Built-in support for rendering a ``ROW`` may be approximated using
``func.ROW`` with the :attr:`_sa.func` namespace, or by using the
-:func:`_sql.tuple_` construct::
+:func:`_sql.tuple_` construct:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import table, column, func, tuple_
>>> t = table("t", column("id"), column("fk"))
... func.ROW(t.c.id, t.c.fk) < func.ROW(3, 7)
... )
>>> print(stmt)
- SELECT t.id, t.fk
+ {printsql}SELECT t.id, t.fk
FROM t
WHERE (t.id, t.fk) > (:param_1, :param_2) AND ROW(t.id, t.fk) < ROW(:ROW_1, :ROW_2)
:meth:`_sql.FromClause.table_valued` method, which is comparable to the
:meth:`_functions.FunctionElement.table_valued` method except that the collection
of columns is already established by that of the :class:`_sql.FromClause`
-itself::
+itself:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import table, column, func, select
>>> a = table("a", column("id"), column("x"), column("y"))
descriptor evaluates the function body given the ``Interval`` class as
the argument, which when evaluated with SQLAlchemy expression mechanics
(here using the :attr:`.QueryableAttribute.expression` accessor)
-returns a new SQL expression::
+returns a new SQL expression:
+
+.. sourcecode:: pycon+sql
>>> print(Interval.length.expression)
-interval."end" - interval.start
+{printsql}interval."end" - interval.start
ORM methods such as :meth:`_query.Query.filter_by`
generally use ``getattr()`` to
-locate attributes, so can also be used with hybrid attributes::
+locate attributes, so they can also be used with hybrid attributes:
+
+.. sourcecode:: pycon+sql
>>> print(Session().query(Interval).filter_by(length=5))
{printsql}SELECT interval.id AS interval_id, interval.start AS interval_start,
methods that :class:`.hybrid_property` applies to attributes. The
methods return boolean values, and take advantage of the Python ``|``
and ``&`` bitwise operators to produce equivalent instance-level and
-SQL expression-level boolean behavior::
+SQL expression-level boolean behavior:
+
+.. sourcecode:: pycon+sql
>>> i1.contains(6)
True
{printsql}SELECT interval.id AS interval_id, interval.start AS interval_start,
interval."end" AS interval_end
FROM interval
- WHERE interval.start <= :start_1 AND interval."end" > :end_1
+ WHERE interval.start <= :start_1 AND interval."end" > :end_1{stop}
>>> ia = aliased(Interval)
>>> print(Session().query(Interval, ia).filter(Interval.intersects(ia)))
WHERE interval.start <= interval_1.start
AND interval."end" > interval_1.start
OR interval.start <= interval_1."end"
- AND interval."end" > interval_1."end"
+ AND interval."end" > interval_1."end"{stop}
.. _hybrid_distinct_expression:
Above, the Python function ``abs()`` is used for instance-level
operations; the SQL function ``ABS()`` is used via the :data:`.func`
-object for class-level expressions::
+object for class-level expressions:
+
+.. sourcecode:: pycon+sql
>>> i1.radius
2
>>> print(Session().query(Interval).filter(Interval.radius > 5))
- SELECT interval.id AS interval_id, interval.start AS interval_start,
+ {printsql}SELECT interval.id AS interval_id, interval.start AS interval_start,
interval."end" AS interval_end
FROM interval
WHERE abs(interval."end" - interval.start) / :abs_1 > :param_1
session.query(Interval).update(
{Interval.length: 25}, synchronize_session='fetch')
-We'll get an UPDATE statement along the lines of::
+We'll get an UPDATE statement along the lines of:
+
+.. sourcecode:: sql
UPDATE interval SET end=start + :value
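As a minimal sketch of a hybrid that could produce such an UPDATE (the setter
and update-expression hooks below are illustrative assumptions, not the exact
mapping used above):

.. sourcecode:: python

    from sqlalchemy import Column, Integer
    from sqlalchemy.ext.hybrid import hybrid_property
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()


    class Interval(Base):
        __tablename__ = "interval"

        id = Column(Integer, primary_key=True)
        start = Column(Integer, nullable=False)
        end = Column(Integer, nullable=False)

        @hybrid_property
        def length(self):
            # instance-level value
            return self.end - self.start

        @length.setter
        def length(self, value):
            # instance-level assignment
            self.end = self.start + value

        @length.update_expression
        def length(cls, value):
            # used by Query.update() to render SET end=start + :value
            return [(cls.end, cls.start + value)]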
However, at the expression level, it's expected that the ``User`` class will
be used in an appropriate context such that an appropriate join to
-``SavingsAccount`` will be present::
+``SavingsAccount`` will be present:
+
+.. sourcecode:: pycon+sql
>>> print(Session().query(User, User.balance).
... join(User.accounts).filter(User.balance > 5000))
- SELECT "user".id AS user_id, "user".name AS user_name,
+ {printsql}SELECT "user".id AS user_id, "user".name AS user_name,
account.balance AS account_balance
FROM "user" JOIN account ON "user".id = account.user_id
WHERE account.balance > :balance_1
Note however, that while the instance level accessors need to worry
about whether ``self.accounts`` is even present, this issue expresses
itself differently at the SQL expression level, where we basically
-would use an outer join::
+would use an outer join:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import or_
>>> print(Session().query(User, User.balance).outerjoin(User.accounts).
label('total_balance')
The above recipe will give us the ``balance`` column which renders
-a correlated SELECT::
+a correlated SELECT:
+
+.. sourcecode:: pycon+sql
>>> print(s.query(User).filter(User.balance > 400))
- SELECT "user".id AS user_id, "user".name AS user_name
+ {printsql}SELECT "user".id AS user_id, "user".name AS user_name
FROM "user"
WHERE (SELECT sum(account.balance) AS sum_1
FROM account
return CaseInsensitiveComparator(cls.word)
Above, SQL expressions against ``word_insensitive`` will apply the ``LOWER()``
-SQL function to both sides::
+SQL function to both sides:
+
+.. sourcecode:: pycon+sql
>>> print(Session().query(SearchWord).filter_by(word_insensitive="Trucks"))
{printsql}SELECT searchword.id AS searchword_id, searchword.word AS searchword_word
The ``word_insensitive`` attribute now has case-insensitive comparison behavior
universally, including SQL expression vs. Python expression (note the Python
-value is converted to lower case on the Python side here)::
+value is converted to lower case on the Python side here):
+
+.. sourcecode:: pycon+sql
>>> print(Session().query(SearchWord).filter_by(word_insensitive="Trucks"))
- SELECT searchword.id AS searchword_id, searchword.word AS searchword_word
+ {printsql}SELECT searchword.id AS searchword_id, searchword.word AS searchword_word
FROM searchword
WHERE lower(searchword.word) = :lower_1
-SQL expression versus SQL expression::
+SQL expression versus SQL expression:
+
+.. sourcecode:: pycon+sql
>>> sw1 = aliased(SearchWord)
>>> sw2 = aliased(SearchWord)
def false() -> False_:
"""Return a :class:`.False_` construct.
- E.g.::
+ E.g.:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import false
>>> print(select(t.c.x).where(false()))
{printsql}SELECT x FROM t WHERE false
A backend which does not support true/false constants will render as
- an expression against 1 or 0::
+ an expression against 1 or 0:
+
+ .. sourcecode:: pycon+sql
>>> print(select(t.c.x).where(false()))
{printsql}SELECT x FROM t WHERE 0 = 1
The :func:`.true` and :func:`.false` constants also feature
"short circuit" operation within an :func:`.and_` or :func:`.or_`
- conjunction::
+ conjunction:
+
+ .. sourcecode:: pycon+sql
>>> print(select(t.c.x).where(or_(t.c.x > 5, true())))
- {printsql}SELECT x FROM t WHERE true
+ {printsql}SELECT x FROM t WHERE true{stop}
>>> print(select(t.c.x).where(and_(t.c.x > 5, false())))
- {printsql}SELECT x FROM t WHERE false
+ {printsql}SELECT x FROM t WHERE false{stop}
.. versionchanged:: 0.9 :func:`.true` and :func:`.false` feature
better integrated behavior within conjunctions and on dialects
def true() -> True_:
"""Return a constant :class:`.True_` construct.
- E.g.::
+ E.g.:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import true
>>> print(select(t.c.x).where(true()))
{printsql}SELECT x FROM t WHERE true
A backend which does not support true/false constants will render as
- an expression against 1 or 0::
+ an expression against 1 or 0:
+
+ .. sourcecode:: pycon+sql
>>> print(select(t.c.x).where(true()))
{printsql}SELECT x FROM t WHERE 1 = 1
The :func:`.true` and :func:`.false` constants also feature
"short circuit" operation within an :func:`.and_` or :func:`.or_`
- conjunction::
+ conjunction:
+
+ .. sourcecode:: pycon+sql
>>> print(select(t.c.x).where(or_(t.c.x > 5, true())))
- {printsql}SELECT x FROM t WHERE true
+ {printsql}SELECT x FROM t WHERE true{stop}
>>> print(select(t.c.x).where(and_(t.c.x > 5, false())))
- {printsql}SELECT x FROM t WHERE false
+ {printsql}SELECT x FROM t WHERE false{stop}
.. versionchanged:: 0.9 :func:`.true` and :func:`.false` feature
better integrated behavior within conjunctions and on dialects
The above construct will produce a :class:`.TypeCoerce` object, which
does not modify the rendering in any way on the SQL side, with the
possible exception of a generated label if used in a columns clause
- context::
+ context:
+
+ .. sourcecode:: sql
SELECT date_string AS date_string FROM log
When using :func:`.type_coerce` with composed expressions, note that
**parentheses are not applied**. If :func:`.type_coerce` is being
used in an operator context where the parentheses normally present from
- CAST are necessary, use the :meth:`.TypeCoerce.self_group` method::
+ CAST are necessary, use the :meth:`.TypeCoerce.self_group` method:
+
+ .. sourcecode:: pycon+sql
>>> some_integer = column("someint", Integer)
>>> some_string = column("somestr", String)
>>> expr = type_coerce(some_integer + 5, String) + some_string
>>> print(expr)
- someint + :someint_1 || somestr
+ {printsql}someint + :someint_1 || somestr{stop}
>>> expr = type_coerce(some_integer + 5, String).self_group() + some_string
>>> print(expr)
- (someint + :someint_1) || somestr
+ {printsql}(someint + :someint_1) || somestr{stop}
:param expression: A SQL expression, such as a
:class:`_expression.ColumnElement`
together with the addition operator ``+`` to produce
a :class:`.BinaryExpression`.
Both :class:`.ColumnClause` and :class:`.BinaryExpression` are subclasses
- of :class:`_expression.ColumnElement`::
+ of :class:`_expression.ColumnElement`:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy.sql import column
>>> column('a') + column('b')
<sqlalchemy.sql.expression.BinaryExpression object at 0x101029dd0>
>>> print(column('a') + column('b'))
- a + b
+ {printsql}a + b
.. seealso::
"""Represent an expression that is ``LEFT <operator> RIGHT``.
A :class:`.BinaryExpression` is generated automatically
- whenever two column expressions are used in a Python binary expression::
+ whenever two column expressions are used in a Python binary expression:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy.sql import column
>>> column('a') + column('b')
<sqlalchemy.sql.expression.BinaryExpression object at 0x101029dd0>
>>> print(column('a') + column('b'))
- a + b
+ {printsql}a + b
"""
construct, except no FROM clause is generated; the function is rendered
in a similar way to a scalar subquery.
- E.g.::
+ E.g.:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import func, select
>>> fn = func.jsonb_each("{'k', 'v'}").scalar_table_valued("key")
r"""Return a :class:`_sql.TableValuedAlias` representation of this
:class:`_functions.FunctionElement` with table-valued expressions added.
- e.g.::
+ e.g.:
+
+ .. sourcecode:: pycon+sql
>>> fn = (
... func.generate_series(1, 5).
>>> print(select(fn))
{printsql}SELECT anon_1.value, anon_1.start, anon_1.stop, anon_1.step
- FROM generate_series(:generate_series_1, :generate_series_2) AS anon_1
+ FROM generate_series(:generate_series_1, :generate_series_2) AS anon_1{stop}
>>> print(select(fn.c.value, fn.c.stop).where(fn.c.value > 2))
{printsql}SELECT anon_1.value, anon_1.stop
FROM generate_series(:generate_series_1, :generate_series_2) AS anon_1
- WHERE anon_1.value > :value_1
+ WHERE anon_1.value > :value_1{stop}
A WITH ORDINALITY expression may be generated by passing the keyword
- argument "with_ordinality"::
+ argument "with_ordinality":
+
+ .. sourcecode:: pycon+sql
>>> fn = func.generate_series(4, 1, -1).table_valued("gen", with_ordinality="ordinality")
>>> print(select(fn))
"""Return this :class:`_functions.FunctionElement` as a column expression that
selects from itself as a FROM clause.
- E.g.::
+ E.g.:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import select, func
>>> gs = func.generate_series(1, 5, -1).column_valued()
r"""The set of columns exported by this :class:`.FunctionElement`.
This is a placeholder collection that allows the function to be
- placed in the FROM clause of a statement::
+ placed in the FROM clause of a statement:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import column, select, func
>>> stmt = select(column('x'), column('y')).select_from(func.myfunction())
:meth:`_functions.FunctionElement.table_valued` method first to
establish named columns.
- e.g.::
+ e.g.:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import func, select, column
>>> data_view = func.unnest([1, 2, 3]).alias("data_view")
FROM unnest(:unnest_1) AS data_view
The :meth:`_functions.FunctionElement.column_valued` method provides
- a shortcut for the above pattern::
+ a shortcut for the above pattern:
+
+ .. sourcecode:: pycon+sql
>>> data_view = func.unnest([1, 2, 3]).column_valued("data_view")
>>> print(select(data_view))
"""Generate SQL function expressions.
:data:`.func` is a special object instance which generates SQL
- functions based on name-based attributes, e.g.::
+ functions based on name-based attributes, e.g.:
+
+ .. sourcecode:: pycon+sql
>>> print(func.count(1))
- count(:param_1)
+ {printsql}count(:param_1)
The returned object is an instance of :class:`.Function`, and is a
- column-oriented SQL element like any other, and is used in that way::
+ column-oriented SQL element like any other, and is used in that way:
+
+ .. sourcecode:: pycon+sql
>>> print(select(func.count(table.c.id)))
{printsql}SELECT count(sometable.id) FROM sometable
Any name can be given to :data:`.func`. If the function name is unknown to
SQLAlchemy, it will be rendered exactly as is. For common SQL functions
which SQLAlchemy is aware of, the name may be interpreted as a *generic
- function* which will be compiled appropriately to the target database::
+ function* which will be compiled appropriately to the target database:
+
+ .. sourcecode:: pycon+sql
>>> print(func.current_timestamp())
{printsql}CURRENT_TIMESTAMP
To call functions which are present in dot-separated packages,
- specify them in the same manner::
+ specify them in the same manner:
+
+ .. sourcecode:: pycon+sql
>>> print(func.stats.yield_curve(5, 10))
{printsql}stats.yield_curve(:yield_curve_1, :yield_curve_2)
treated as a string in expressions, specify
:class:`~sqlalchemy.types.Unicode` as the type:
+
+    .. sourcecode:: pycon+sql
>>> print(func.my_string(u'hi', type_=Unicode) + ' ' +
... func.my_string(u'there', type_=Unicode))
{printsql}my_string(:my_string_1) || :my_string_2 || my_string(:my_string_3)
identifier = "buffer"
inherit_cache = True
- The above function will render as follows::
+ The above function will render as follows:
+
+ .. sourcecode:: pycon+sql
>>> print(func.geo.buffer())
- ST_Buffer()
+ {printsql}ST_Buffer()
The name will be rendered as is; it is not quoted unless the name
contains special characters that require quoting. To force quoting
identifier = "buffer"
inherit_cache = True
- The above function will render as::
+ The above function will render as:
+
+ .. sourcecode:: pycon+sql
>>> print(func.geo.buffer())
- "ST_Buffer"()
+ {printsql}"ST_Buffer"()
.. versionadded:: 1.3.13 The :class:`.quoted_name` construct is now
recognized for quoting when used with the "name" attribute of the
class concat(GenericFunction[str]):
"""The SQL CONCAT() function, which concatenates strings.
- E.g.::
+ E.g.:
+
+ .. sourcecode:: pycon+sql
>>> print(select(func.concat('a', 'b')))
{printsql}SELECT concat(:concat_2, :concat_3) AS concat_1
String concatenation in SQLAlchemy is more commonly available using the
Python ``+`` operator with string datatypes, which will render a
- backend-specific concatenation operator, such as ::
+    backend-specific concatenation operator, such as:
+
+ .. sourcecode:: pycon+sql
>>> print(select(literal("a") + "b"))
{printsql}SELECT :param_1 || :param_2 AS anon_1
backend dependent, and is supported in various forms by backends
such as PostgreSQL, Oracle and SQL Server.
- E.g.::
+ E.g.:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import select, column, func, table
>>> a = table("a", column("id"), column("x"), column("y"))
Below, the columns named ``columna`` are both rendered as is, meaning that
the name ``columna`` can only refer to the first occurrence of this name
- within a result set, as well as if the statement were used as a subquery::
+ within a result set, as well as if the statement were used as a subquery:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import table, column, select, true, LABEL_STYLE_NONE
>>> table1 = table("table1", column("columna"), column("columnb"))
Below, all column names are given a label so that the two same-named
columns ``columna`` are disambiguated as ``table1_columna`` and
- ``table2_columna``::
+ ``table2_columna``:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import table, column, select, true, LABEL_STYLE_TABLENAME_PLUS_COL
>>> table1 = table("table1", column("columna"), column("columnb"))
Below, most column names are left unaffected, except for the second
occurrence of the name ``columna``, which is labeled using the
- label ``columna_1`` to disambiguate it from that of ``tablea.columna``::
+ label ``columna_1`` to disambiguate it from that of ``tablea.columna``:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import table, column, select, true, LABEL_STYLE_DISAMBIGUATE_ONLY
>>> table1 = table("table1", column("columna"), column("columnb"))
This construct provides for a SQL function that returns columns
to be used in the FROM clause of a SELECT statement. The
object is generated using the :meth:`_functions.FunctionElement.table_valued`
- method, e.g.::
+ method, e.g.:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import select, func
>>> fn = func.json_array_elements_text('["one", "two", "three"]').table_valued("value")
:meth:`_functions.FunctionElement.column_valued` method. See that
method for further details.
- E.g.::
+ E.g.:
+
+ .. sourcecode:: pycon+sql
>>> print(select(func.some_func().table_valued("value").column))
- SELECT anon_1 FROM some_func() AS anon_1
+ {printsql}SELECT anon_1 FROM some_func() AS anon_1
.. seealso::
"""Apply "render derived" to this :class:`_sql.TableValuedAlias`.
This has the effect of listing out the individual column names
- after the alias name in the "AS" sequence, e.g.::
+ after the alias name in the "AS" sequence, e.g.:
+
+ .. sourcecode:: pycon+sql
>>> print(
... select(
The ``with_types`` keyword will render column types inline within
the alias expression (this syntax currently applies to the
- PostgreSQL database)::
+ PostgreSQL database):
+
+ .. sourcecode:: pycon+sql
>>> print(
... select(
:param collation: Optional, a column-level collation for
use in DDL and CAST expressions. Renders using the
COLLATE keyword supported by SQLite, MySQL, and PostgreSQL.
- E.g.::
+ E.g.:
+
+ .. sourcecode:: pycon+sql
>>> from sqlalchemy import cast, select, String
>>> print(select(cast('some string', String(collation='utf8'))))