to Engine, Connection::
with engine.begin() as conn:
- <work with conn in a transaction>
+ # <work with conn in a transaction>
+ ...
and::
with engine.connect() as conn:
- <work with conn>
+ # <work with conn>
+ ...
Both close out the connection when done; ``engine.begin()`` additionally
commits the transaction, or rolls it back if an error is raised.
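As a sketch of the two patterns side by side (assuming an in-memory SQLite
database for illustration):

.. sourcecode:: python

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite://")

    # engine.begin(): the transaction commits on success,
    # or rolls back if an exception is raised
    with engine.begin() as conn:
        conn.execute(text("CREATE TABLE t (x INTEGER)"))
        conn.execute(text("INSERT INTO t (x) VALUES (1)"))

    # engine.connect(): work with the connection directly;
    # it is closed at the end of the block
    with engine.connect() as conn:
        result = conn.execute(text("SELECT x FROM t"))
        print(result.fetchall())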
Will maintain the columns clause of the SELECT as coming from the
unaliased "user", as specified; the select_from only takes place in the
- FROM clause::
+ FROM clause:
+
+ .. sourcecode:: sql
SELECT users.name AS users_name FROM users AS users_1
JOIN users ON users.name < users_1.name
session.query(User.name).select_from(user_table.select().where(user_table.c.id > 5))
- Which produces::
+ Which produces:
+
+ .. sourcecode:: sql
SELECT anon_1.name AS anon_1_name FROM (SELECT users.id AS id,
users.name AS name FROM users WHERE users.id > :id_1) AS anon_1
::
- session.query(Order).join('items')...
+ session.query(Order).join("items")
Now you can alias them:
::
- session.query(Order).join('items', aliased=True).
- filter(Item.name=='item 1').join('items', aliased=True).filter(Item.name=='item 3')
+ session.query(Order).join("items", aliased=True).filter(Item.name == "item 1").join(
+ "items", aliased=True
+ ).filter(Item.name == "item 3")
The above will create two joins from orders->items using
aliases. the ``filter()`` call subsequent to each will
::
- session.query(Order).join('items', id='j1', aliased=True).
- filter(Item.name == 'item 1').join('items', aliased=True, id='j2').
- filter(Item.name == 'item 3').add_entity(Item, id='j1').add_entity(Item, id='j2')
+ session.query(Order).join("items", id="j1", aliased=True).filter(
+ Item.name == "item 1"
+ ).join("items", aliased=True, id="j2").filter(Item.name == "item 3").add_entity(
+ Item, id="j1"
+ ).add_entity(
+ Item, id="j2"
+ )
Returns tuples in the form: ``(Order, Item, Item)``.
A join along aliases, three levels deep off the parent:
-::
+.. sourcecode:: sql
SELECT
nodes_3.id AS nodes_3_id, nodes_3.parent_id AS nodes_3_parent_id, nodes_3.name AS nodes_3_name,
a typical query looks like:
-::
+.. sourcecode:: sql
SELECT (SELECT count(1) FROM posts WHERE users.id = posts.user_id) AS count,
users.firstname || users.lastname AS fullname,
and ``ThreadLocalMetaData``. The older names have been
removed in 0.4. Updating is simple:
-::
+.. sourcecode:: text
+-------------------------------------+-------------------------+
|If You Had | Now Use |
::
class MyType(AdaptOldConvertMethods, TypeEngine):
- ..
+ ...
* The ``quote`` flag on ``Column`` and ``Table`` as well as
the ``quote_schema`` flag on ``Table`` now control quoting
datetime columns to store the new format (NOTE: please
test this, I'm pretty sure it's correct):
- ::
+ .. sourcecode:: sql
UPDATE mytable SET somedatecol =
substr(somedatecol, 0, 19) || '.' || substr((substr(somedatecol, 21, -1) / 1000000), 3, -1);
would produce SQL like:
- ::
+ .. sourcecode:: sql
SELECT * FROM
(SELECT * FROM addresses LIMIT 10) AS anon_1
eager loaders represent many-to-ones, in which case the
eager joins don't affect the rowcount:
- ::
+ .. sourcecode:: sql
SELECT * FROM addresses LEFT OUTER JOIN users AS users_1 ON users_1.id = addresses.user_id LIMIT 10
SQL:
-::
+.. sourcecode:: sql
SELECT empsalary.depname, empsalary.empno, empsalary.salary,
avg(empsalary.salary) OVER (PARTITION BY empsalary.depname) AS avg
The SQL emitted by ``query.count()`` is now always of the
form:
-::
+.. sourcecode:: sql
SELECT count(1) AS count_1 FROM (
SELECT user.id AS user_id, user.name AS user_name from user
In 0.6, this would render:
-::
+.. sourcecode:: sql
SELECT parent.id AS parent_id
FROM parent
in 0.7, you get:
-::
+.. sourcecode:: sql
SELECT parent.id AS parent_id
FROM parent, child
Which on both 0.6 and 0.7 renders:
-::
+.. sourcecode:: sql
SELECT parent.id AS parent_id, child.id AS child_id
FROM parent LEFT OUTER JOIN child ON parent.id = child.id
statement. Note the join condition within a basic eager
load:
- ::
+ .. sourcecode:: sql
SELECT
folder.account_id AS folder_account_id,
would produce:
-::
+.. sourcecode:: sql
UPDATE engineer SET engineer_data='java' FROM person
WHERE person.id=engineer.id AND person.name='dilbert'
Note that the SQLAlchemy APIs used by the Dogpile example as well
as the previous Beaker example have changed slightly; in particular,
-this change is needed as illustrated in the Beaker example::
+this change is needed as illustrated in the Beaker example:
+
+.. sourcecode:: diff
--- examples/beaker_caching/caching_query.py
+++ examples/beaker_caching/caching_query.py
.. seealso::
- :mod:`dogpile_caching`
+ :ref:`examples_caching`
:ticket:`2589`
print(s)
-Prior to this change, the above would return::
+Prior to this change, the above would return:
+
+.. sourcecode:: sql
SELECT t1.x, t2.y FROM t2
which is invalid SQL as "t1" is not referred to in any FROM clause.
-Now, in the absence of an enclosing SELECT, it returns::
+Now, in the absence of an enclosing SELECT, it returns:
+
+.. sourcecode:: sql
SELECT t1.x, t2.y FROM t1, t2
-Within a SELECT, the correlation takes effect as expected::
+Within a SELECT, the correlation takes effect as expected:
- s2 = select([t1, t2]).where(t1.c.x == t2.c.y).where(t1.c.x == s)
+.. sourcecode:: python
+ s2 = select([t1, t2]).where(t1.c.x == t2.c.y).where(t1.c.x == s)
print(s2)
+.. sourcecode:: sql
+
SELECT t1.x, t2.y FROM t1, t2
WHERE t1.x = t2.y AND t1.x =
(SELECT t1.x, t2.y FROM t2)
reflection, in the case that additional information from the
database is needed. As this is a new event not widely used
yet, we'll be adding the ``inspector`` argument into it
-directly:
-
-::
+directly::
@event.listens_for(Table, "column_reflect")
def listen_for_col(inspector, table, column_info):
- # ...
+ ...
:ticket:`2418`
.filter(User.name == "ed")
)
-The above statement predictably renders SQL like the following::
+The above statement predictably renders SQL like the following:
+
+.. sourcecode:: sql
SELECT "user".id AS user_id, "user".name AS user_name
FROM "user" JOIN (SELECT "user".id AS id, "user".name AS name
However, in version 0.8 and earlier, the above use of :meth:`_query.Query.select_from`
would apply the ``select_stmt`` to **replace** the ``User`` entity, as it
-selects from the ``user`` table which is compatible with ``User``::
+selects from the ``user`` table which is compatible with ``User``:
+
+.. sourcecode:: sql
-- SQLAlchemy 0.8 and earlier...
SELECT anon_1.id AS anon_1_id, anon_1.name AS anon_1_name
q = session.query(user_from_stmt).filter(user_from_stmt.name == "ed")
So with SQLAlchemy 0.9, our query that selects from ``select_stmt`` produces
-the SQL we expect::
+the SQL we expect:
+
+.. sourcecode:: sql
-- SQLAlchemy 0.9
SELECT "user".id AS user_id, "user".name AS user_name
s.query(A).filter(A.b_value == None).all()
-would produce::
+would produce:
+
+.. sourcecode:: sql
SELECT a.id AS a_id, a.b_id AS a_b_id
FROM a
FROM b
WHERE b.id = a.b_id AND b.value IS NULL)
-In 0.9, it now produces::
+In 0.9, it now produces:
+
+.. sourcecode:: sql
SELECT a.id AS a_id, a.b_id AS a_b_id
FROM a
comparison where some parent rows have no association row.
More critically, a correct expression is emitted for ``A.b_value != None``.
-In 0.8, this would return ``True`` for ``A`` rows that had no ``b``::
+In 0.8, this would return ``True`` for ``A`` rows that had no ``b``:
+
+.. sourcecode:: sql
SELECT a.id AS a_id, a.b_id AS a_b_id
FROM a
Now in 0.9, the check has been reworked so that it ensures
the A.b_id row is present, in addition to ``B.value`` being
-non-NULL::
+non-NULL:
+
+.. sourcecode:: sql
SELECT a.id AS a_id, a.b_id AS a_b_id
FROM a
s.query(A).filter(A.b_value.has()).all()
-output::
+output:
+
+.. sourcecode:: sql
SELECT a.id AS a_id, a.b_id AS a_b_id
FROM a
This was a very old bug for which a deprecation warning was added to the
0.8 series, but because nobody ever runs Python with the "-W" flag, it
-was mostly never seen::
+was mostly never seen:
+
+.. sourcecode:: text
$ python -W always::DeprecationWarning ~/dev/sqlalchemy/test.py
/Users/classic/dev/sqlalchemy/test.py:5: SADeprecationWarning: Passing arguments to
now only encodes ":", "@", or "/" and nothing else, and is now applied to both the
``username`` and ``password`` fields (previously it only applied to the
password). On parsing, encoded characters are converted, but plus signs and
-spaces are passed through as is::
+spaces are passed through as is:
+
+.. sourcecode:: text
# password: "pass word + other:words"
dbtype://user:pass word + other%3Awords@host/dbname
print((column("x") == "somevalue").collate("en_EN"))
-would produce an expression like this::
+would produce an expression like this:
+
+.. sourcecode:: sql
-- 0.8 behavior
(x = :x_1) COLLATE en_EN
The above is misunderstood by MSSQL and is generally not the syntax suggested
for any database. The expression will now produce the syntax illustrated
-by that of most database documentation::
+by that of most database documentation:
+
+.. sourcecode:: sql
-- 0.9 behavior
x = :x_1 COLLATE en_EN
print(column("x") == literal("somevalue").collate("en_EN"))
-In 0.8, this produces::
+In 0.8, this produces:
+
+.. sourcecode:: sql
x = :param_1 COLLATE en_EN
However in 0.9, it will now produce the more accurate, but probably not what you
-want, form of::
+want, form of:
+
+.. sourcecode:: sql
x = (:param_1 COLLATE en_EN)
q = s.query(User.id, User.name).filter_by(name="ed")
ins = insert(Address).from_select((Address.id, Address.email_address), q)
-rendering::
+rendering:
+
+.. sourcecode:: sql
INSERT INTO addresses (id, email_address)
SELECT users.id AS users_id, users.name AS users_name
stmt = select([table]).with_for_update(read=True, nowait=True, of=table)
-On Posgtresql the above statement might render like::
+On PostgreSQL the above statement might render like:
+
+.. sourcecode:: sql
SELECT table.a, table.b FROM table FOR SHARE OF table NOWAIT
For many years, the SQLAlchemy ORM has been held back from being able to nest
a JOIN inside the right side of an existing JOIN (typically a LEFT OUTER JOIN,
-as INNER JOINs could always be flattened)::
+as INNER JOINs could always be flattened):
+
+.. sourcecode:: sql
SELECT a.*, b.*, c.* FROM a LEFT OUTER JOIN (b JOIN c ON b.id = c.id) ON a.id
-This was due to the fact that SQLite up until version **3.7.16** cannot parse a statement of the above format::
+This was due to the fact that SQLite up until version **3.7.16** could not parse a statement of the above format:
+
+.. sourcecode:: text
SQLite version 3.7.15.2 2013-01-09 11:53:05
Enter ".help" for instructions
Right-outer-joins are of course another way to work around right-side
parenthesization; this would be significantly complicated and visually unpleasant
-to implement, but fortunately SQLite doesn't support RIGHT OUTER JOIN either :)::
+to implement, but fortunately SQLite doesn't support RIGHT OUTER JOIN either :):
+
+.. sourcecode:: sql
sqlite> select a.id, b.id, c.id from b join c on b.id=c.id
...> right outer join a on b.id=a.id;
(Oracle 8, a very old database, doesn't support the JOIN keyword at all,
but SQLAlchemy has always had a simple rewriting scheme in place for Oracle's syntax).
To make matters worse, SQLAlchemy's usual workaround of applying a
-SELECT often degrades performance on platforms like PostgreSQL and MySQL::
+SELECT often degrades performance on platforms like PostgreSQL and MySQL:
+
+.. sourcecode:: sql
SELECT a.*, anon_1.* FROM a LEFT OUTER JOIN (
SELECT b.id AS b_id, c.id AS c_id
session.query(Order).outerjoin(Order.items)
Assuming a many-to-many from ``Order`` to ``Item`` which actually refers to a subclass
-like ``Subitem``, the SQL for the above would look like::
+like ``Subitem``, the SQL for the above would look like:
+
+.. sourcecode:: sql
SELECT order.id, order.name
FROM order LEFT OUTER JOIN order_item ON order.id = order_item.order_id
let us know!).
So a regular ``query(Parent).join(Subclass)`` will now usually produce a simpler
-expression::
+expression:
+
+.. sourcecode:: sql
SELECT parent.id AS parent_id
FROM parent JOIN (
ON base_table.id = subclass_table.id) ON parent.id = base_table.parent_id
Joined eager loads like ``query(Parent).options(joinedload(Parent.subclasses))``
-will alias the individual tables instead of wrapping in an ``ANON_1``::
+will alias the individual tables instead of wrapping in an ``ANON_1``:
+
+.. sourcecode:: sql
SELECT parent.*, base_table_1.*, subclass_table_1.* FROM parent
LEFT OUTER JOIN (
ON base_table_1.id = subclass_table_1.id)
ON parent.id = base_table_1.parent_id
-Many-to-many joins and eagerloads will right nest the "secondary" and "right" tables::
+Many-to-many joins and eager loads will right nest the "secondary" and "right" tables:
+
+.. sourcecode:: sql
SELECT order.id, order.name
FROM order LEFT OUTER JOIN
joins into nested SELECT statements, while maintaining the identical labeling used by
the :class:`_expression.Select`. So SQLite, the one database that won't support this very
common SQL syntax even in 2013, shoulders the extra complexity itself,
-with the above queries rewritten as::
+with the above queries rewritten as:
+
+.. sourcecode:: sql
-- sqlite only!
SELECT parent.id AS parent_id
or_(Engineer.primary_language == "python", Manager.manager_name == "dilbert")
)
-Generates (everywhere except SQLite)::
+Generates (everywhere except SQLite):
+
+.. sourcecode:: sql
SELECT companies.company_id AS companies_company_id, companies.name AS companies_name
FROM companies JOIN (
Would not produce an inner join; because of the LEFT OUTER JOIN from user->order,
joined eager loading could not use an INNER join from order->items without changing
the user rows that are returned, and would instead ignore the "chained" ``innerjoin=True``
-directive. How 0.9.0 should have delivered this would be that instead of::
+directive. How 0.9.0 should have delivered this would be that instead of:
+
+.. sourcecode:: sql
FROM users LEFT OUTER JOIN orders ON <onclause> LEFT OUTER JOIN items ON <onclause>
-the new "right-nested joins are OK" logic would kick in, and we'd get::
+the new "right-nested joins are OK" logic would kick in, and we'd get:
+
+.. sourcecode:: sql
FROM users LEFT OUTER JOIN (orders JOIN items ON <onclause>) ON <onclause>
targeting columns that do not comprise the primary key, as when loading
along a many-to-one.
-That is, when subquery loading on a many-to-one from A->B::
+That is, when subquery loading on a many-to-one from A->B:
+
+.. sourcecode:: sql
SELECT b.id AS b_id, b.name AS b_name, anon_1.b_id AS a_b_id
FROM (SELECT DISTINCT a_b_id FROM a) AS anon_1
print(stmt)
-Prior to 0.9 would render as::
+Prior to 0.9 would render as:
+
+.. sourcecode:: sql
SELECT foo(t.c1) + t.c2 AS expr
FROM t ORDER BY foo(t.c1) + t.c2
-And now renders as::
+And now renders as:
+
+.. sourcecode:: sql
SELECT foo(t.c1) + t.c2 AS expr
FROM t ORDER BY expr
outperforms both, or lags very slightly behind the faster object, depending on
the scenario. In the "sweet spot", where we are both creating a good number
of new types as well as fetching a good number of rows, the lightweight
-object totally smokes both namedtuple and KeyedTuple::
+object totally smokes both namedtuple and KeyedTuple:
+
+.. sourcecode:: text
-----------------
size=10 num=10000 # few rows, lots of queries
A bench that makes use of heapy to measure the startup size of Nova
illustrates a difference of about 3.7 fewer megs, or 46%,
taken up by SQLAlchemy's objects, associated dictionaries, as
-well as weakrefs, within a basic import of "nova.db.sqlalchemy.models"::
+well as weakrefs, within a basic import of "nova.db.sqlalchemy.models":
+
+.. sourcecode:: text
# reported by heapy, summation of SQLAlchemy objects +
# associated dicts + weakref-related objects with core of Nova imported:
print(sess.query(A, a1).order_by(a1.b))
-This would order by the wrong column::
+This would order by the wrong column:
+
+.. sourcecode:: sql
SELECT a.id AS a_id, (SELECT max(b.id) AS max_1 FROM b
WHERE b.a_id = a.id) AS anon_1, a_1.id AS a_1_id,
FROM b WHERE b.a_id = a_1.id) AS anon_2
FROM a, a AS a_1 ORDER BY anon_1
-New output::
+New output:
+
+.. sourcecode:: sql
SELECT a.id AS a_id, (SELECT max(b.id) AS max_1
FROM b WHERE b.a_id = a.id) AS anon_1, a_1.id AS a_1_id,
__mapper_args__ = {"polymorphic_on": type, "with_polymorphic": "*"}
The order_by would fail to use the label, as it would be anonymized due
-to the polymorphic loading::
+to the polymorphic loading:
+
+.. sourcecode:: sql
SELECT a.id AS a_id, a.type AS a_type, (SELECT max(b.id) AS max_1
FROM b WHERE b.a_id = a.id) AS anon_1
FROM a ORDER BY (SELECT max(b.id) AS max_2
FROM b WHERE b.a_id = a.id)
-Now that the order by label tracks the anonymized label, this now works::
+Now that the order by label tracks the anonymized label, this now works:
+
+.. sourcecode:: sql
SELECT a.id AS a_id, a.type AS a_type, (SELECT max(b.id) AS max_1
FROM b WHERE b.a_id = a.id) AS anon_1
CheckConstraint(foo.c.value > 5)
-Will render::
+Will render:
+
+.. sourcecode:: sql
CREATE TABLE foo (
value INTEGER,
stmt = select([t.c.x])
print(t.insert().from_select(["x"], stmt))
-Will render::
+Will render:
+
+.. sourcecode:: sql
INSERT INTO t (x, y) SELECT t.x, somefunction() AS somefunction_1
FROM t
print(CreateTable(tbl).compile(dialect=postgresql.dialect()))
-Now renders::
+Now renders:
+
+.. sourcecode:: sql
CREATE TABLE derp (
arr TEXT[] DEFAULT ARRAY['foo', 'bar', 'baz']
select([cast(("foo_%d" % random.randint(0, 1000000)).encode("ascii"), Unicode)])
)
-The format of the warning here is::
+The format of the warning here is:
+
+.. sourcecode:: text
/path/lib/sqlalchemy/sql/sqltypes.py:186: SAWarning: Unicode type received
non-unicode bind param value 'foo_4852'. (this warning may be
session.query(Address).filter(Address.user == User(id=None))
This pattern is not currently supported in SQLAlchemy. For all versions,
-it emits SQL resembling::
+it emits SQL resembling:
+
+.. sourcecode:: sql
SELECT address.id AS address_id, address.user_id AS address_user_id,
address.email_address AS address_email_address
Note above, there is a comparison ``WHERE ? = address.user_id`` where the
bound value ``?`` is receiving ``None``, or ``NULL`` in SQL. **This will
always return False in SQL**. The comparison here would in theory
-generate SQL as follows::
+generate SQL as follows:
+
+.. sourcecode:: sql
SELECT address.id AS address_id, address.user_id AS address_user_id,
address.email_address AS address_email_address
fact that "NULL = NULL" produces False in all cases run the risk that
someday, SQLAlchemy might fix this issue to generate "IS NULL", and the queries
will then produce different results. Therefore with this kind of operation,
-you will see a warning::
+you will see a warning:
+
+.. sourcecode:: text
SAWarning: Got None for value of column user.id; this is unsupported
for a relationship comparison and will not currently produce an
s.query(B).filter(B.a == a1)
-Produces::
+Produces:
+
+.. sourcecode:: sql
SELECT b.id AS b_id, b.a_id AS b_a_id
FROM b
s.query(B).filter(B.a != a1)
-Produces (in 0.9 and all versions prior to 1.0.1)::
+Produces (in 0.9 and all versions prior to 1.0.1):
+
+.. sourcecode:: sql
SELECT b.id AS b_id, b.a_id AS b_a_id
FROM b
WHERE b.a_id != ? OR b.a_id IS NULL
(7,)
-For a transient object, it would produce a broken query::
+For a transient object, it would produce a broken query:
+
+.. sourcecode:: sql
SELECT b.id, b.a_id
FROM b
WHERE b.a_id != :a_id_1 OR b.a_id IS NULL
- {u'a_id_1': symbol('NEVER_SET')}
+ -- {u'a_id_1': symbol('NEVER_SET')}
This inconsistency has been repaired, and in all queries the current attribute
value, in this example ``10``, will now be used.
print(s.query(A).join(A.bs).join(A.bs))
-Will render::
+Will render:
+
+.. sourcecode:: sql
SELECT a.id AS a_id
FROM a JOIN b ON a.id = b.a_id
That is, the ``A.bs`` is part of a "path". As part of :ticket:`3367`,
arriving at the same endpoint twice without it being part of a
-larger path will now emit a warning::
+larger path will now emit a warning:
+
+.. sourcecode:: text
SAWarning: Pathed join target A.bs has already been joined to; skipping
print(s.query(A).join(B, B.a_id == A.id).join(B, B.a_id == A.id))
-In 0.9, this would render as follows::
+In 0.9, this would render as follows:
+
+.. sourcecode:: sql
SELECT a.id AS a_id
FROM a JOIN b ON b.a_id = a.id JOIN b AS b_1 ON b_1.a_id = a.id
This is problematic since the aliasing is implicit and in the case of different
ON clauses can lead to unpredictable results.
-In 1.0, no automatic aliasing is applied and we get::
+In 1.0, no automatic aliasing is applied and we get:
+
+.. sourcecode:: sql
SELECT a.id AS a_id
FROM a JOIN b ON b.a_id = a.id JOIN b ON b.a_id = a.id
print(s.query(ASub1).join(B, ASub1.b).join(ASub2, ASub2.id == B.a_id))
The two queries at the bottom are equivalent, and should both render
-the identical SQL::
+the identical SQL:
+
+.. sourcecode:: sql
SELECT a.id AS a_id, a.type AS a_type
FROM a JOIN b ON b.a_id = a.id JOIN a ON b.a_id = a.id AND a.type IN (:type_1)
The above SQL is invalid, as it renders "a" within the FROM list twice.
However, the implicit aliasing bug would occur with the second query only
-and render this instead::
+and render this instead:
+
+.. sourcecode:: sql
SELECT a.id AS a_id, a.type AS a_type
FROM a JOIN b ON b.a_id = a.id JOIN a AS a_1
joinedload("orders", innerjoin=False).joinedload("items", innerjoin=True)
)
-With the new default, this will render the FROM clause in the form::
+With the new default, this will render the FROM clause in the form:
+
+.. sourcecode:: text
FROM users LEFT OUTER JOIN (orders JOIN items ON <onclause>) ON <onclause>
)
This will avoid right-nested joins and chain the joins together using all
-OUTER joins despite the innerjoin directive::
+OUTER joins despite the innerjoin directive:
+
+.. sourcecode:: text
FROM users LEFT OUTER JOIN orders ON <onclause> LEFT OUTER JOIN items ON <onclause>
relationship. However, joined eager loading has always treated the
above as a situation where the main query needs to be inside a
subquery, as would normally be needed for a collection of B objects
-where the main query has a LIMIT applied::
+where the main query has a LIMIT applied:
+
+.. sourcecode:: sql
SELECT anon_1.a_id AS anon_1_a_id, b_1.id AS b_1_id, b_1.a_id AS b_1_a_id
FROM (SELECT a.id AS a_id
However, since the relationship of the inner query to the outer one is
that at most only one row is shared in the case of ``uselist=False``
(in the same way as a many-to-one), the "subquery" used with LIMIT +
-joined eager loading is now dropped in this case::
+joined eager loading is now dropped in this case:
+
+.. sourcecode:: sql
SELECT a.id AS a_id, b_1.id AS b_1_id, b_1.a_id AS b_1_a_id
FROM a LEFT OUTER JOIN b AS b_1 ON a.id = b_1.a_id
sess.query(FooWidget).from_self().all()
-rendering::
+rendering:
+
+.. sourcecode:: sql
SELECT
anon_1.widgets_id AS anon_1_widgets_id,
and produces a bad query). This decision
apparently goes way back to 0.6.5 with the note "may need to make more
adjustments to this". Well, those adjustments have arrived! So now the
-above query will render::
+above query will render:
+
+.. sourcecode:: sql
SELECT
anon_1.widgets_id AS anon_1_widgets_id,
sess.query(FooWidget.id).count()
-Renders::
+Renders:
+
+.. sourcecode:: sql
SELECT count(*) AS count_1
FROM (SELECT widgets.id AS widgets_id
s.query(Related).join(FooWidget, Related.widget).all()
-SQL output::
+SQL output:
+
+.. sourcecode:: sql
SELECT related.id AS related_id
FROM related JOIN widget ON related.id = widget.related_id AND widget.type IN (:type_1)
stmt = select(["a", "b"]).where("a = b").select_from("sometable")
The statement is built up normally, with all the same coercions as before.
-However, one will see the following warnings emitted::
+However, one will see the following warnings emitted:
+
+.. sourcecode:: text
SAWarning: Textual column expression 'a' should be explicitly declared
with text('a'), or use column('a') for more specificity
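As a sketch, the explicit forms that these warnings point toward look like
the following, using ``text()``, ``column()`` and ``table()``:

.. sourcecode:: python

    from sqlalchemy import column, select, table, text

    stmt = (
        select([column("a"), column("b")])
        .where(text("a = b"))
        .select_from(table("sometable"))
    )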
re-statement of the function. The string argument given is actively
matched to an entry in the columns clause during compilation, so the above
statement would produce as we expect, without warnings (though note that
-the ``"name"`` expression has been resolved to ``users.name``!)::
+the ``"name"`` expression has been resolved to ``users.name``!):
+
+.. sourcecode:: sql
SELECT users.name, count(users.id) AS id_count
FROM users GROUP BY users.name ORDER BY id_count
"some_label"
)
-The output does what we say, but again it warns us::
+The output does what we say, but again it warns us:
+
+.. sourcecode:: text
SAWarning: Can't resolve label reference 'some_label'; converting to
text() (this warning may be suppressed after 10 occurrences)
+.. sourcecode:: sql
+
SELECT users.name, count(users.id) AS id_count
FROM users ORDER BY some_label
)
The above example will invoke ``next(counter)`` for each row individually
-as would be expected::
+as would be expected:
+
+.. sourcecode:: sql
INSERT INTO my_table (id, data) VALUES (?, ?), (?, ?), (?, ?)
-- (1, 'd1', 2, 'd2', 3, 'd3')
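For reference, a sketch of the kind of construct under discussion, assuming a
table ``my_table`` whose primary key has a Python-side callable default:

.. sourcecode:: python

    import itertools

    from sqlalchemy import Column, Integer, MetaData, String, Table

    counter = itertools.count(1)

    metadata = MetaData()
    my_table = Table(
        "my_table",
        metadata,
        # zero-argument callable invoked once per row on INSERT
        Column("id", Integer, primary_key=True, default=lambda: next(counter)),
        Column("data", String(50)),
    )

    # multi-valued INSERT; "id" is generated individually for each row
    stmt = my_table.insert().values([{"data": "d1"}, {"data": "d2"}, {"data": "d3"}])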
Previously, a positional dialect would fail as a bind would not be generated
-for additional positions::
+for additional positions:
+
+.. sourcecode:: text
Incorrect number of bindings supplied. The current statement uses 6,
and there are 4 supplied.
And with a "named" dialect, the same value for "id" would be re-used in
each row (hence this change is backwards-incompatible with a system that
-relied on this)::
+relied on this):
+
+.. sourcecode:: sql
INSERT INTO my_table (id, data) VALUES (:id, :data_0), (:id, :data_1), (:id, :data_2)
- {u'data_2': 'd3', u'data_1': 'd2', u'data_0': 'd1', 'id': 1}
+ -- {u'data_2': 'd3', u'data_1': 'd2', u'data_0': 'd1', 'id': 1}
The system will also refuse to invoke a "server side" default as inline-rendered
SQL, since it cannot be guaranteed that a server side default is compatible
)
)
-will raise::
+will raise:
+
+.. sourcecode:: text
sqlalchemy.exc.CompileError: INSERT value for column my_table.data is
explicitly rendered as a boundparameter in the VALUES clause; a
Python-side value or SQL expression is required
Previously, the value "d1" would be copied into that of the third
-row (but again, only with named format!)::
+row (but again, only with named format!):
+
+.. sourcecode:: sql
INSERT INTO my_table (data) VALUES (:data_0), (:data_1), (:data_0)
- {u'data_1': 'd2', u'data_0': 'd1'}
+ -- {u'data_1': 'd2', u'data_0': 'd1'}
:ticket:`3288`
session.query(q).all()
-Produces::
+Produces:
+
+.. sourcecode:: sql
SELECT EXISTS (SELECT 1
FROM widget
s.add(A(id=1))
s.commit()
-The above program would raise::
+The above program would raise:
+
+.. sourcecode:: text
FlushError: New instance <User at 0x7f0287eca4d0> with identity key
(<class 'test.orm.test_transaction.User'>, ('u1',)) conflicts
session.delete(some_b)
session.commit()
-Will emit SQL as::
+Will emit SQL as:
+
+.. sourcecode:: sql
DELETE FROM a WHERE a.id = %(id)s
- {'id': 1}
+ -- {'id': 1}
COMMIT
As always, the target database must have foreign key support with
== "Elbonia, Inc."
)
-The above query now produces::
+The above query now produces:
+
+.. sourcecode:: sql
SELECT people.name AS people_name
FROM people
Before the fix, the call to ``correlate(Person)`` would inadvertently
attempt to correlate to the join of ``Person``, ``Engineer`` and ``Manager``
-as a single unit, so ``Person`` wouldn't be correlated::
+as a single unit, so ``Person`` wouldn't be correlated:
+
+.. sourcecode:: sql
-- old, incorrect query
SELECT people.name AS people_name
q = q.join(c_alias_2, A.c)
q = q.options(contains_eager(A.c, alias=c_alias_2))
-The above query emits SQL like this::
+The above query emits SQL like this:
+
+.. sourcecode:: sql
SELECT
d.id AS d_id,
stmt = select([selectable.c.people_id])
Assuming ``people`` with a column ``people_id``, the above
-statement would render as::
+statement would render as:
+
+.. sourcecode:: sql
SELECT alias.people_id FROM
people AS alias TABLESAMPLE bernoulli(:bernoulli_1)
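A sketch of how the ``selectable`` above might be constructed, assuming the
standalone ``tablesample()`` function:

.. sourcecode:: python

    from sqlalchemy import column, func, select, table, tablesample

    people = table("people", column("people_id"))

    # sample the table using the bernoulli sampling method
    selectable = tablesample(people, func.bernoulli(1), name="alias")
    stmt = select([selectable.c.people_id])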
Column("y", Integer, primary_key=True),
)
-An INSERT emitted with no values for this table will produce this warning::
+An INSERT emitted with no values for this table will produce this warning:
+
+.. sourcecode:: text
SAWarning: Column 'b.x' is marked as a member of the primary
key for table 'b', but has no Python-side or server-side default
ua = users.alias("ua")
stmt = select([users.c.user_id, ua.c.user_id])
-The above statement will compile to::
+The above statement will compile to:
+
+.. sourcecode:: sql
SELECT users.user_id, ua.user_id FROM users, users AS ua
expr = func.array_agg(aggregate_order_by(table.c.a, table.c.b.desc()))
stmt = select([expr])
-Producing::
+Producing:
+
+.. sourcecode:: sql
SELECT array_agg(table1.a ORDER BY table1.b DESC) AS array_agg_1 FROM table1
]
)
-The above statement would produce SQL similar to::
+The above statement would produce SQL similar to:
+
+.. sourcecode:: sql
SELECT department.id, percentile_cont(0.5)
WITHIN GROUP (ORDER BY department.salary DESC)
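A sketch of the construct being described, assuming a ``department`` table:

.. sourcecode:: python

    from sqlalchemy import column, func, select, table

    department = table("department", column("id"), column("salary"))

    stmt = select(
        [
            department.c.id,
            # renders WITHIN GROUP (ORDER BY department.salary DESC)
            func.percentile_cont(0.5).within_group(department.c.salary.desc()),
        ]
    )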
Python. We are then using :func:`.cast` so that as a SQL expression,
the VARCHAR "id" column will be CAST to an integer for a regular non-
converted join as with :meth:`_query.Query.join` or :func:`_orm.joinedload`.
-That is, a joinedload of ``.pets`` looks like::
+That is, a joinedload of ``.pets`` looks like:
+
+.. sourcecode:: sql
SELECT person.id AS person_id, pets_1.id AS pets_1_id,
pets_1.person_id AS pets_1_person_id
the ``Person.id`` column at load time with a bound parameter, which receives
a Python-loaded value. This replacement is specifically where the intent
of our :func:`.type_coerce` function would be lost. Prior to the change,
-this lazy load comes out as::
+this lazy load comes out as:
+
+.. sourcecode:: sql
SELECT pets.id AS pets_id, pets.person_id AS pets_person_id
FROM pets
WHERE pets.person_id = CAST(CAST(%(param_1)s AS VARCHAR) AS INTEGER)
- {'param_1': 5}
+ -- {'param_1': 5}
Where above, we see that our in-Python value of ``5`` is CAST first
to a VARCHAR, then back to an INTEGER in SQL; a double CAST which works,
With the change, the :func:`.type_coerce` function maintains a wrapper
even after the column is swapped out for a bound parameter, and the query now
-looks like::
+looks like:
+
+.. sourcecode:: sql
SELECT pets.id AS pets_id, pets.person_id AS pets_person_id
FROM pets
WHERE pets.person_id = CAST(%(param_1)s AS INTEGER)
- {'param_1': 5}
+ -- {'param_1': 5}
Where our outer CAST that's in our primaryjoin still takes effect, but the
needless CAST that's in part of the ``StringAsInt`` custom type is removed
.order_by(User.id, User.name, User.fullname)
)
-Produces::
+Produces:
+
+.. sourcecode:: sql
SELECT DISTINCT user.id AS a_id, user.name AS name,
user.fullname AS a_fullname
FROM a ORDER BY user.id, user.name, user.fullname
-Previously, it would produce::
+Previously, it would produce:
+
+.. sourcecode:: sql
SELECT DISTINCT user.id AS a_id, user.name AS name, user.name AS a_name,
user.fullname AS a_fullname
configure_mappers()
-Will raise::
+Will raise:
+
+.. sourcecode:: text
- sqlalchemy.exc.InvalidRequestError: A validation function for mapped attribute 'data' on mapper Mapper|A|a already exists.
+ sqlalchemy.exc.InvalidRequestError: A validation function for mapped attribute 'data'
+ on mapper Mapper|A|a already exists.
:ticket:`3776`
An issue that, like others, was long driven by SQLite's lack of capabilities
has now been enhanced to work on all supporting backends. We refer to a query that
is a UNION of SELECT statements that themselves contain row-limiting or ordering
-features which include LIMIT, OFFSET, and/or ORDER BY::
+features which include LIMIT, OFFSET, and/or ORDER BY:
+
+.. sourcecode:: sql
(SELECT x FROM table1 ORDER BY y LIMIT 1) UNION
(SELECT x FROM table2 ORDER BY y LIMIT 2)
conn.execute(do_update_stmt)
-The above will render::
+The above will render:
+
+.. sourcecode:: sql
INSERT INTO my_table (id, data)
VALUES (:id, :data)
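A sketch of how a ``do_update_stmt`` along these lines might be constructed,
assuming a table ``my_table``:

.. sourcecode:: python

    from sqlalchemy import Column, Integer, MetaData, String, Table
    from sqlalchemy.dialects.postgresql import insert

    metadata = MetaData()
    my_table = Table(
        "my_table",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("data", String(50)),
    )

    insert_stmt = insert(my_table).values(id=1, data="some data")

    # on conflict with the primary key, update the "data" column instead
    do_update_stmt = insert_stmt.on_conflict_do_update(
        index_elements=[my_table.c.id],
        set_=dict(data="updated value"),
    )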
e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
Base.metadata.create_all(e)
-emits::
+emits:
+
+.. sourcecode:: sql
CREATE TYPE work_place_roles AS ENUM (
'manager', 'place_admin', 'carwash_admin', 'parking_admin',
mysql_engine="InnoDB",
)
-DDL such as the following would be generated::
+DDL such as the following would be generated:
+
+.. sourcecode:: sql
CREATE TABLE some_table (
x INTEGER NOT NULL,
the AUTO_INCREMENT would otherwise fail on InnoDB without this additional KEY.
This workaround has been removed and replaced with the much better system
-of just stating the AUTO_INCREMENT column *first* within the primary key::
+of just stating the AUTO_INCREMENT column *first* within the primary key:
+
+.. sourcecode:: sql
CREATE TABLE some_table (
x INTEGER NOT NULL,
)
The SQL produced would be the query against ``User`` followed by the
-subqueryload for ``User.addresses`` (note the parameters are also listed)::
+subqueryload for ``User.addresses`` (note the parameters are also listed):
+
+.. sourcecode:: sql
SELECT users.id AS users_id, users.name AS users_name
FROM users
.options(selectinload(User.addresses))
)
-Produces::
+Produces:
+
+.. sourcecode:: sql
SELECT users.id AS users_id, users.name AS users_name
FROM users
SELECT statement, but then the attributes of the additional subclasses
are loaded with additional SELECT statements:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
- from sqlalchemy.orm import selectin_polymorphic
+ >>> from sqlalchemy.orm import selectin_polymorphic
- query = session.query(Employee).options(
- selectin_polymorphic(Employee, [Manager, Engineer])
- )
+ >>> query = session.query(Employee).options(
+ ... selectin_polymorphic(Employee, [Manager, Engineer])
+ ... )
- {opensql}query.all()
- SELECT
+ >>> query.all()
+ {opensql}SELECT
employee.id AS employee_id,
employee.name AS employee_name,
employee.type AS employee_type
In SQL, the IN and NOT IN operators do not support comparison to a
collection of values that is explicitly empty; meaning, this syntax is
-illegal::
+illegal:
+
+.. sourcecode:: sql
mycolumn IN ()
only theoretical, and could not be tested since databases don't support that
syntax. However, as it turns out, you can in fact ask a relational database
what value it would return for "NULL IN ()" by simulating the empty set as
-follows::
+follows:
+
+.. sourcecode:: sql
SELECT NULL IN (SELECT 1 WHERE 1 != 1)
conn.execute(stmt)
The resulting SQL from the above statement on a PostgreSQL backend
-would render as::
+would render as:
+
+.. sourcecode:: sql
DELETE FROM users USING addresses
WHERE users.id = addresses.id
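A sketch of a multi-table DELETE along these lines, assuming ``users`` and
``addresses`` table constructs:

.. sourcecode:: python

    from sqlalchemy import column, delete, table

    users = table("users", column("id"))
    addresses = table("addresses", column("id"))

    # the extra table referenced in the WHERE clause is rendered
    # in the USING clause on PostgreSQL
    stmt = delete(users).where(users.c.id == addresses.c.id)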
sess.query(Manager.id)
-Would generate SQL as::
+Would generate SQL as:
+
+.. sourcecode:: sql
SELECT employee.id FROM employee WHERE employee.type IN ('manager')
sess.query(func.count(1)).select_from(Manager)
-would generate::
+would generate:
+
+.. sourcecode:: sql
SELECT count(1) FROM employee
-With the fix, :meth:`_query.Query.select_from` now works correctly and we get::
+With the fix, :meth:`_query.Query.select_from` now works correctly and we get:
+
+.. sourcecode:: sql
SELECT count(1) FROM employee WHERE employee.type IN ('manager')
Above, the previous behavior would be that an UPDATE would be emitted after the
INSERT, thus triggering the "onupdate" and overwriting the value
-"5". The SQL now looks like::
+"5". The SQL now looks like:
+
+.. sourcecode:: sql
INSERT INTO a (favorite_b_id, updated) VALUES (?, ?)
-- (None, 5)
mytable.c.somecolumn.collate("fr_FR")
)
-now renders::
+now renders:
+
+.. sourcecode:: sql
SELECT mytable.x, mytable.y,
FROM mytable ORDER BY mytable.somecolumn COLLATE "fr_FR"
conn.execute(on_conflict_stmt)
-The above will render::
+The above will render:
+
+.. sourcecode:: sql
INSERT INTO my_table (id, data)
VALUES (:id, :data)
That is, the JOIN would implicitly be against the first entity that matches.
The new behavior is that an exception is raised, requesting that this ambiguity be
-resolved::
+resolved:
+
+.. sourcecode:: text
sqlalchemy.exc.InvalidRequestError: Can't determine which FROM clause to
join from, there are multiple FROMS which can join to this entity.
session.query(func.current_timestamp(), User).join(Address)
-Prior to this enhancement, the above query would raise::
+Prior to this enhancement, the above query would raise:
+
+.. sourcecode:: text
sqlalchemy.exc.InvalidRequestError: Don't know how to join from
CURRENT_TIMESTAMP; please use select_from() to establish the
session.query(A).options(joinedload(A.b)).limit(5)
The :class:`_query.Query` object renders a SELECT of the following form when joined
-eager loading is combined with LIMIT::
+eager loading is combined with LIMIT:
+
+.. sourcecode:: sql
SELECT subq.a_id, subq.a_data, b_alias.id, b_alias.data FROM (
SELECT a.id AS a_id, a.data AS a_data FROM a LIMIT 5
This is so that the limit of rows takes place for the primary entity without
affecting the joined eager load of related items. When the above query is
-combined with "SELECT..FOR UPDATE", the behavior has been this::
+combined with "SELECT..FOR UPDATE", the behavior has been this:
+
+.. sourcecode:: sql
SELECT subq.a_id, subq.a_data, b_alias.id, b_alias.data FROM (
SELECT a.id AS a_id, a.data AS a_data FROM a LIMIT 5
However, due to https://bugs.mysql.com/bug.php?id=90693, MySQL does not lock
the rows inside the subquery, unlike PostgreSQL and other databases.
-So the above query now renders as::
+So the above query now renders as:
+
+.. sourcecode:: sql
SELECT subq.a_id, subq.a_data, b_alias.id, b_alias.data FROM (
SELECT a.id AS a_id, a.data AS a_data FROM a LIMIT 5 FOR UPDATE
session.query(A).options(joinedload(A.b)).with_for_update(of=A).limit(5)
-The query would now render as::
+The query would now render as:
+
+.. sourcecode:: sql
SELECT subq.a_id, subq.a_data, b_alias.id, b_alias.data FROM (
SELECT a.id AS a_id, a.data AS a_data FROM a LIMIT 5 FOR UPDATE OF a
UniqueConstraint("a", "b", "c"),
)
-The CREATE TABLE for the above table will render as::
+The CREATE TABLE for the above table will render as:
+
+.. sourcecode:: sql
CREATE TABLE info (
a INTEGER,
)
The truncation logic will ensure a too-long name isn't generated for the
-UNIQUE constraint::
+UNIQUE constraint:
+
+.. sourcecode:: sql
CREATE TABLE long_names (
information_channel_code INTEGER,
print(AddConstraint(uq).compile(dialect=postgresql.dialect()))
-will output::
+will output:
+
+.. sourcecode:: text
sqlalchemy.exc.IdentifierError: Identifier
'this_is_too_long_of_a_name_for_any_database_backend_even_postgresql'
name=conv("this_is_too_long_of_a_name_for_any_database_backend_even_postgresql"),
)
-This will again output deterministically truncated SQL as in::
+This will again output deterministically truncated SQL as in:
+
+.. sourcecode:: sql
ALTER TABLE t ADD CONSTRAINT this_is_too_long_of_a_name_for_any_database_backend_eve_ac05 UNIQUE (x)
Above, the :paramref:`_orm.relationship.primaryjoin` of the "descendants" relationship
will produce a "left" and a "right" expression based on the first and second
arguments passed to ``instr()``. This allows features like the ORM
-lazyload to produce SQL like::
+lazyload to produce SQL like:
+
+.. sourcecode:: sql
SELECT venue.id AS venue_id, venue.name AS venue_name
FROM venue
.one()
)
-to work as::
+to work as:
+
+.. sourcecode:: sql
SELECT venue.id AS venue_id, venue.name AS venue_name,
venue_1.id AS venue_1_id, venue_1.name AS venue_1_name
print(select([column("x", CompressedLargeBinary)]).compile(dialect=sqlite.dialect()))
-will render::
+will render:
+
+.. sourcecode:: sql
SELECT uncompress(x) AS x
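A sketch of how such a type might be defined using the ``bind_expression()``
and ``column_expression()`` hooks; the ``compress`` / ``uncompress`` SQL
functions are assumed to exist on the target database:

.. sourcecode:: python

    from sqlalchemy import LargeBinary, func
    from sqlalchemy.types import TypeDecorator


    class CompressedLargeBinary(TypeDecorator):
        impl = LargeBinary

        def bind_expression(self, bindvalue):
            # wrap bound parameters with compress() on the way in
            return func.compress(bindvalue, type_=self)

        def column_expression(self, col):
            # wrap the column with uncompress() on the way out
            return func.uncompress(col, type_=self)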
engine.begin()
- table.insert().execute(<params>)
+ table.insert().execute(parameters)
result = table.select().execute()
- table.update().execute(<params>)
+ table.update().execute(parameters)
engine.commit()
try:
trans = conn.begin()
- conn.execute(table.insert(), <params>)
+ conn.execute(table.insert(), parameters)
result = conn.execute(table.select())
- conn.execute(table.update(), <params>)
+ conn.execute(table.update(), parameters)
trans.commit()
except:
the original pattern, thanks to context managers::
with engine.begin() as conn:
- conn.execute(table.insert(), <params>)
+ conn.execute(table.insert(), parameters)
result = conn.execute(table.select())
- conn.execute(table.update(), <params>)
+ conn.execute(table.update(), parameters)
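As a runnable sketch of that pattern, assuming a simple table and an
in-memory SQLite database:

.. sourcecode:: python

    from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

    engine = create_engine("sqlite://")
    metadata = MetaData()
    table = Table(
        "t",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("data", String(50)),
    )
    metadata.create_all(engine)

    # one transaction for the whole block; committed on success
    with engine.begin() as conn:
        conn.execute(table.insert(), {"data": "d1"})
        result = conn.execute(table.select())
        conn.execute(table.update().values(data="d2"))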
At this point, any remaining code that is still relying upon the "threadlocal"
style will be encouraged via this deprecation to modernize - the feature should
UniqueConstraint("id", "data", sqlite_on_conflict="IGNORE"),
)
-The above table would render in a CREATE TABLE statement as::
+The above table would render in a CREATE TABLE statement as:
+
+.. sourcecode:: sql
CREATE TABLE some_table (
id INTEGER NOT NULL,
statement are maintained. The goal is to improve readability while still
keeping the original error message on one line for logging purposes.
-This means that an error message that previously looked like this::
+This means that an error message that previously looked like this:
+
+.. sourcecode:: text
+
+ sqlalchemy.exc.StatementError: (sqlalchemy.exc.InvalidRequestError) A value is
+ required for bind parameter 'id' [SQL: 'select * from reviews\nwhere id = ?']
+ (Background on this error at: https://sqlalche.me/e/cd3x)
- sqlalchemy.exc.StatementError: (sqlalchemy.exc.InvalidRequestError) A value is required for bind parameter 'id' [SQL: 'select * from reviews\nwhere id = ?'] (Background on this error at: https://sqlalche.me/e/cd3x)
-Will now look like this::
+Will now look like this:
+
+.. sourcecode:: text
sqlalchemy.exc.StatementError: (sqlalchemy.exc.InvalidRequestError) A value is required for bind parameter 'id'
[SQL: select * from reviews
result = session.query(Customer).filter(Customer.id == id_).one()
This example in the 1.3 release of SQLAlchemy on a Dell XPS13 running Linux
-completes as follows::
+completes as follows:
+
+.. sourcecode:: text
test_orm_query : (10000 iterations); total time 3.440652 sec
-In 1.4, the code above without modification completes::
+In 1.4, the code above without modification completes:
+
+.. sourcecode:: text
test_orm_query : (10000 iterations); total time 2.367934 sec
stmt += lambda s: s.where(Customer.id == id_)
session.execute(stmt).scalar_one()
-The code above completes::
+The code above completes:
+
+.. sourcecode:: text
test_orm_query_newstyle_w_lambdas : (10000 iterations); total time 1.247092 sec
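For context, a sketch of the full lambda form of the query, assuming an
ORM-mapped ``Customer`` class:

.. sourcecode:: python

    from sqlalchemy import lambda_stmt, select


    def query_customer(session, id_):
        # the lambdas themselves serve as the cache key; only the
        # closed-over value of id_ changes between invocations
        stmt = lambda_stmt(lambda: select(Customer))
        stmt += lambda s: s.where(Customer.id == id_)
        return session.execute(stmt).scalar_one()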
stmt1 = select(user.c.id, user.c.name)
stmt2 = select(addresses, stmt1).select_from(addresses.join(stmt1))
-Raising::
+Raising:
+
+.. sourcecode:: text
sqlalchemy.exc.ArgumentError: Column expression or FROM clause expected,
got <...Select object ...>. To create a FROM clause from a <class
without first creating an alias or subquery would be that it creates an
unnamed subquery. While standard SQL does support this syntax, in practice
it is rejected by most databases. For example, both MySQL and PostgreSQL
- outright reject the usage of unnamed subqueries::
+ outright reject the usage of unnamed subqueries:
+
+ .. sourcecode:: sql
-- MySQL / MariaDB:
HINT: For example, FROM (SELECT ...) [AS] foo.
A database like SQLite accepts them, however it is still often the case that
- the names produced from such a subquery are too ambiguous to be useful::
+ the names produced from such a subquery are too ambiguous to be useful:
+
+ .. sourcecode:: sql
sqlite> CREATE TABLE a(id integer);
sqlite> CREATE TABLE b(id integer);
addresses_table, user_table.c.id == addresses_table.c.user_id
)
-producing::
+producing:
+
+.. sourcecode:: sql
SELECT user.id, user.name FROM user JOIN address ON user.id=address.user_id
stmt = select(Address.email_address, User.name).join_from(User, Address)
-producing::
+producing:
+
+.. sourcecode:: sql
SELECT address.email_address, user.name FROM user JOIN address ON user.id == address.user_id
FROM a
WHERE a.id IN (:id_1_1, :id_1_2, :id_1_3)
-Engine logging output shows the ultimate rendered statement as well::
+Engine logging output shows the ultimate rendered statement as well:
+
+.. sourcecode:: sql
INFO sqlalchemy.engine.base.Engine SELECT a.id, a.data
FROM a
The above query selects from a JOIN of ``User`` and ``address_alias``, the
latter of which is an alias of the ``Address`` entity. However, the
``Address`` entity is used within the WHERE clause directly, so the above would
-result in the SQL::
+result in the SQL:
+
+.. sourcecode:: sql
SELECT
users.id AS users_id, users.name AS users_name,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The difference between a named tuple and a mapping as far as boolean operators
-can be summarized. Given a "named tuple" in pseudo code as::
+can be summarized as follows. Given a "named tuple" in pseudo code as:
+
+.. sourcecode:: text
row = (id: 5, name: 'some name')
A user pointed out that the PostgreSQL database has a convenient behavior when
using functions like CAST against a named column, in that the result column name
-is named the same as the inner expression::
+is named the same as the inner expression:
+
+.. sourcecode:: text
test=> SELECT CAST(data AS VARCHAR) FROM foo;
This allows one to apply CAST to table columns while not losing the column
name (above using the name ``"data"``) in the result row. Compare to
databases such as MySQL/MariaDB, as well as most others, where the column
-name is taken from the full SQL expression and is not very portable::
+name is taken from the full SQL expression and is not very portable:
+
+.. sourcecode:: text
MariaDB [test]> SELECT CAST(data AS CHAR) FROM foo;
+--------------------+
While SQLAlchemy has used bound parameters for LIMIT/OFFSET schemes for many
years, a few outliers remained where such parameters were not allowed, including
-a SQL Server "TOP N" statement, such as::
+a SQL Server "TOP N" statement, such as:
+
+.. sourcecode:: sql
SELECT TOP 5 mytable.id, mytable.data FROM mytable
use if the ``optimize_limits=True`` parameter is passed to
:func:`_sa.create_engine` with an Oracle URL) does not allow them,
but also that using bound parameters with ROWNUM comparisons has been reported
-as producing slower query plans::
+as producing slower query plans:
+
+.. sourcecode:: sql
SELECT anon_1.id, anon_1.data FROM (
SELECT /*+ FIRST_ROWS(5) */
SQL Server and Oracle dialects, so that the drivers receive the literal
rendered value but the rest of SQLAlchemy can still consider this as a
bound parameter. The above two statements when stringified using
-``str(statement.compile(dialect=<dialect>))`` now look like::
+``str(statement.compile(dialect=<dialect>))`` now look like:
+
+.. sourcecode:: sql
SELECT TOP [POSTCOMPILE_param_1] mytable.id, mytable.data FROM mytable
-and::
+and:
+
+.. sourcecode:: sql
SELECT anon_1.id, anon_1.data FROM (
SELECT /*+ FIRST_ROWS([POSTCOMPILE__ora_frow_1]) */
"expanding IN" is used.
When viewing the SQL logging output, the final form of the statement will
-be seen::
+be seen:
+
+.. sourcecode:: sql
SELECT anon_1.id, anon_1.data FROM (
SELECT /*+ FIRST_ROWS(5) */
SQLAlchemy includes a :ref:`performance suite <examples_performance>` within
its examples, where we can compare the times generated for the "batch_inserts"
runner against 1.3 and 1.4, revealing a 3x-5x speedup for most flavors
-of batch insert::
+of batch insert:
+
+.. sourcecode:: text
# 1.3
$ python -m examples.performance bulk_inserts --dburl postgresql://scott:tiger@localhost/test
Note that the ``execute_values()`` extension modifies the INSERT statement in the psycopg2
layer, **after** it's been logged by SQLAlchemy. So with SQL logging, one will see the
parameter sets batched together, but the joining of multiple "values" will not be visible
-on the application side::
+on the application side:
+
+.. sourcecode:: text
2020-06-27 19:08:18,166 INFO sqlalchemy.engine.Engine INSERT INTO a (data) VALUES (%(data)s) RETURNING a.id
2020-06-27 19:08:18,166 INFO sqlalchemy.engine.Engine [generated in 0.00698s] ({'data': 'data 1'}, {'data': 'data 2'}, {'data': 'data 3'}, {'data': 'data 4'}, {'data': 'data 5'}, {'data': 'data 6'}, {'data': 'data 7'}, {'data': 'data 8'} ... displaying 10 of 4999 total bound parameter sets ... {'data': 'data 4998'}, {'data': 'data 4999'})
2020-06-27 19:08:18,254 INFO sqlalchemy.engine.Engine COMMIT
-The ultimate INSERT statement can be seen by enabling statement logging on the PostgreSQL side::
+The ultimate INSERT statement can be seen by enabling statement logging on the PostgreSQL side:
+
+.. sourcecode:: text
2020-06-27 19:08:18.169 EDT [26960] LOG: statement: INSERT INTO a (data)
VALUES ('data 1'),('data 2'),('data 3'),('data 4'),('data 5'),('data 6'),('data
Will now use RETURNING if the backend database supports it; this currently
includes PostgreSQL and SQL Server (the Oracle dialect does not support RETURNING
-of multiple rows)::
+of multiple rows):
+
+.. sourcecode:: text
UPDATE users SET age_int=(users.age_int - %(age_int_1)s) WHERE users.age_int > %(age_int_2)s RETURNING users.id
[generated in 0.00060s] {'age_int_1': 10, 'age_int_2': 29}
Row (4,)
For backends that do not support RETURNING of multiple rows, the previous approach
-of emitting SELECT for the primary keys beforehand is still used::
+of emitting SELECT for the primary keys beforehand is still used:
+
+.. sourcecode:: text
SELECT users.id FROM users WHERE users.age_int > %(age_int_1)s
[generated in 0.00043s] {'age_int_1': 29}
session.add(Product(id=1))
s.commit() # <-- will raise FlushError
-The change is that the :class:`.FlushError` is altered to be only a warning::
+The change is that the :class:`.FlushError` is altered to be only a warning:
+
+.. sourcecode:: text
sqlalchemy/orm/persistence.py:408: SAWarning: New instance <Product at 0x7f1ff65e0ba8> with identity key (<class '__main__.Product'>, (1,), None) conflicts with persistent instance <Product at 0x7f1ff60a4550>
Subsequent to that, the condition will attempt to insert the row into the
database which will emit :class:`.IntegrityError`, which is the same error that
would be raised if the primary key identity was not already present in the
-:class:`.Session`::
+:class:`.Session`:
+
+.. sourcecode:: text
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: product.id
duplicates to function regardless of the existing state of the
:class:`.Session`, as is often done using savepoints::
-
# add another Product with same primary key
try:
with session.begin_nested():
# this is now an error
addresses = relationship("Address", viewonly=True, cascade="all, delete-orphan")
-The above will raise::
+The above will raise:
+
+.. sourcecode:: text
sqlalchemy.exc.ArgumentError: Cascade settings
"delete, delete-orphan, merge, save-update" apply to persistence
The subquery selects both the ``Engineer`` and the ``Manager`` rows, and
even though the outer query is against ``Manager``, we get a non ``Manager``
-object back::
+object back:
+
+.. sourcecode:: text
SELECT anon_1.type AS anon_1_type, anon_1.id AS anon_1_id
FROM (SELECT employee.type AS type, employee.id AS id
2020-01-29 18:04:13,524 INFO sqlalchemy.engine.base.Engine ()
[<__main__.Engineer object at 0x7f7f5b9a9810>, <__main__.Manager object at 0x7f7f5b9a9750>]
-The new behavior is that this condition raises an error::
+The new behavior is that this condition raises an error:
+
+.. sourcecode:: text
sqlalchemy.exc.InvalidRequestError: Row with identity key
(<class '__main__.Employee'>, (1,), None) can't be loaded into an object;
In the case of single inheritance mapping, the change in behavior is slightly
more involved; if ``Engineer`` and ``Manager`` above are mapped with
single table inheritance, in 1.3 the following query would be emitted and
-only a ``Manager`` object is returned::
+only a ``Manager`` object is returned:
+
+.. sourcecode:: text
SELECT anon_1.type AS anon_1_type, anon_1.id AS anon_1_id
FROM (SELECT employee.type AS type, employee.id AS id
entity are NULL, which is a valid use case. The behavior is now equivalent
to that of joined table inheritance, where it is assumed that the subquery
returns the correct rows and an error is raised if an unexpected polymorphic
-identity is encountered::
+identity is encountered:
+
+.. sourcecode:: text
SELECT anon_1.type AS anon_1_type, anon_1.id AS anon_1_id
FROM (SELECT employee.type AS type, employee.id AS id
discriminator column::
print(
- s.query(Manager).select_entity_from(
- s.query(Employee).filter(Employee.discriminator == 'manager').
- subquery()).all()
+ s.query(Manager)
+ .select_entity_from(
+ s.query(Employee).filter(Employee.discriminator == "manager").subquery()
+ )
+ .all()
)
+.. sourcecode:: sql
+
SELECT anon_1.type AS anon_1_type, anon_1.id AS anon_1_id
FROM (SELECT employee.type AS type, employee.id AS id
FROM employee
A quick test of the ``execute_values()`` approach using the
``bulk_inserts.py`` script in the :ref:`examples_performance` example
-suite reveals an approximate **fivefold performance increase**::
+suite reveals an approximate **fivefold performance increase**:
+
+.. sourcecode:: text
$ python -m examples.performance bulk_inserts --test test_core_insert --num 100000 --dburl postgresql://scott:tiger@localhost/test
The above program uses several patterns that many users will already identify
as "legacy", namely the use of the :meth:`_engine.Engine.execute` method
that's part of the "connectionless execution" API. When we run the above
-program against 1.4, it returns a single line::
+program against 1.4, it returns a single line:
+
+.. sourcecode:: text
$ python test3.py
[(1,)]
To enable "2.0 deprecations mode", we enable the ``SQLALCHEMY_WARN_20=1``
variable, and additionally ensure that a `warnings filter`_ that will not
-suppress any warnings is selected::
+suppress any warnings is selected:
+
+.. sourcecode:: text
SQLALCHEMY_WARN_20=1 python -W always::DeprecationWarning test3.py
.. _warnings filter: https://docs.python.org/3/library/warnings.html#the-warnings-filter
-With warnings turned on, our program now has a lot to say::
+With warnings turned on, our program now has a lot to say:
+
+.. sourcecode:: text
$ SQLALCHEMY_WARN_20=1 python2 -W always::DeprecationWarning test3.py
test3.py:9: RemovedIn20Warning: The Engine.execute() function/method is considered legacy as of the 1.x series of SQLAlchemy and will be removed in 2.0. All statement execution in SQLAlchemy 2.0 is performed by the Connection.execute() method of Connection, or in the ORM by the Session.execute() method of Session. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9) (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
result = session.execute(stmt)
The above query will disambiguate the ``.id`` column of ``User`` and
-``Address``, where ``Address.id`` is rendered and tracked as ``id_1``::
+``Address``, where ``Address.id`` is rendered and tracked as ``id_1``:
+
+.. sourcecode:: sql
SELECT anon_1.id AS anon_1_id, anon_1.id_1 AS anon_1_id_1,
anon_1.user_id AS anon_1_user_id,
session.flush()
session.commit()
-This test can be run from any SQLAlchemy source tree as follows::
+This test can be run from any SQLAlchemy source tree as follows:
+
+.. sourcecode:: text
python -m examples.performance.bulk_inserts --test test_flush_no_pk
import typing
-
from sqlalchemy import String
from sqlalchemy.dialects.mysql import VARCHAR
-
type_ = String(255).with_variant(VARCHAR(255, charset="utf8mb4"), "mysql", "mariadb")
if typing.TYPE_CHECKING:
reveal_type(type_)
-A type checker like pyright will now report the type as::
+A type checker like pyright will now report the type as:
+
+.. sourcecode:: text
info: Type of "type_" is "String"
The SQL division operator on PostgreSQL for example normally acts as "floor division"
when used against integers, meaning the above result would return the integer
"0". For this and similar backends, SQLAlchemy now renders the SQL using
-a form which is equivalent towards::
+a form which is equivalent towards:
+
+.. sourcecode:: text
%(param_1)s / CAST(%(param_2)s AS NUMERIC)
The SQL division operator on MySQL and Oracle for example normally acts
as "true division" when used against integers, meaning the above result
would return the floating point value "0.5". For these and similar backends,
-SQLAlchemy now renders the SQL using a form which is equivalent towards::
+SQLAlchemy now renders the SQL using a form which is equivalent towards:
+
+.. sourcecode:: text
FLOOR(%(param_1)s / %(param_2)s)
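As a sketch, the Python-side expressions that correspond to these two
renderings:

.. sourcecode:: python

    from sqlalchemy import literal, select

    # "/" is true division; floor-division backends add a CAST to NUMERIC
    print(select(literal(5) / literal(10)))

    # "//" is floor division; true-division backends wrap with FLOOR()
    print(select(literal(5) // literal(10)))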
method that wants to get the current connection to run a database query.
Using the test script illustrated at :ticket:`7433`, the previous
-error case looks like::
+error case looks like:
+
+.. sourcecode:: text
Traceback (most recent call last):
File "/home/classic/dev/sqlalchemy/test3.py", line 30, in worker
Where the ``_connection_for_bind()`` method isn't able to continue since
concurrent access placed it into an invalid state. Using the new approach, the
-originator of the state change throws the error instead::
+originator of the state change throws the error instead:
+
+.. sourcecode:: text
File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 1785, in close
self._close_impl(invalidate=False)
result = connection.execute(user_table.select())
-The above code will invoke SQL on the database of the form::
+The above code will invoke SQL on the database of the form:
+
+.. sourcecode:: sql
SELECT user_schema_one.user.id, user_schema_one.user.name FROM
user_schema_one.user
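One way to produce schema-qualified SQL of this kind without hardcoding the
schema name is the ``schema_translate_map`` execution option; a hedged sketch,
assuming the ``engine`` and ``user_table`` of the surrounding example:

.. sourcecode:: python

    # map Table objects that have schema=None to "user_schema_one"
    # at execution time
    with engine.connect().execution_options(
        schema_translate_map={None: "user_schema_one"}
    ) as connection:
        result = connection.execute(user_table.select())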
is nothing known about what kinds of result rows will be returned since
SQLAlchemy does not parse SQL strings ahead of time.
-The next statements we see are the CREATE TABLE statements::
+The next statements we see are the CREATE TABLE statements:
+
+.. sourcecode:: sql
INFO sqlalchemy.engine.Engine
CREATE TABLE a (
-a segment looks like::
+a segment looks like:
+
+.. sourcecode:: sql
INFO sqlalchemy.engine.Engine INSERT INTO a (data) VALUES (?)
INFO sqlalchemy.engine.Engine [generated in 0.00011s] (None,)
INFO sqlalchemy.engine.Engine INSERT INTO a (data) VALUES (?)
Our example program then performs some SELECTs where we can see the same
pattern of "generated" then "cached", for the SELECT of the "a" table as well
-as for subsequent lazy loads of the "b" table::
+as for subsequent lazy loads of the "b" table:
+
+.. sourcecode:: text
INFO sqlalchemy.engine.Engine SELECT a.id AS a_id, a.data AS a_data
FROM a
return text
-The approach above will generate a compiled SELECT statement that looks like::
+The approach above will generate a compiled SELECT statement that looks like:
+
+.. sourcecode:: sql
SELECT x FROM y
LIMIT __[POSTCOMPILE_param_1]
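Post-compile parameters can be inspected directly as well; a minimal sketch
using the ``render_postcompile`` compile option with an "expanding IN"
parameter (table and column names are illustrative):

.. sourcecode:: python

    from sqlalchemy import column, select, table

    y = table("y", column("x"))
    stmt = select(y.c.x).where(y.c.x.in_([1, 2, 3]))

    # the plain compiled form defers the parameter as __[POSTCOMPILE_...]
    print(stmt.compile())

    # render_postcompile expands it as it would appear at execution time
    print(stmt.compile(compile_kwargs={"render_postcompile": True}))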
Concretely, for most backends the behavior will rewrite a statement of the
form:
-.. sourcecode:: none
+.. sourcecode:: sql
INSERT INTO a (data, x, y) VALUES (%(data)s, %(x)s, %(y)s) RETURNING a.id
into a "batched" form as:
-.. sourcecode:: none
+.. sourcecode:: sql
INSERT INTO a (data, x, y) VALUES
(%(data_0)s, %(x_0)s, %(y_0)s),
SQLAlchemy, where the production of multiple INSERT statements was hidden from
logging and events. Logging display will truncate the long lists of parameters for readability,
and will also indicate the specific batch of each statement. The example below illustrates
-an excerpt of this logging::
+an excerpt of this logging:
+
+.. sourcecode:: text
INSERT INTO a (data, x, y) VALUES (?, ?, ?), ... 795 characters truncated ... (?, ?, ?), (?, ?, ?) RETURNING id
[generated in 0.00177s (insertmanyvalues)] ('d0', 0, 0, 'd1', ...
)
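The feature is reached through ordinary "executemany"-style execution, i.e.
passing a list of parameter dictionaries; a sketch, assuming an ``engine`` and
a ``Table`` object ``a_table`` shaped like the logged statement above:

.. sourcecode:: python

    with engine.begin() as conn:
        # executemany-style invocation; on supporting backends this may
        # be batched into the multi-VALUES form shown above
        conn.execute(
            a_table.insert(),
            [{"data": f"d{i}", "x": i, "y": i} for i in range(1000)],
        )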
On the PostgreSQL dialect, names longer than 63 characters will be truncated
-as in the following example::
+as in the following example:
+
+.. sourcecode:: sql
CREATE TABLE long_names (
information_channel_code INTEGER,
CheckConstraint("value > 5", name="value_gt_5"),
)
-The above table will produce the name ``ck_foo_value_gt_5``::
+The above table will produce the name ``ck_foo_value_gt_5``:
+
+.. sourcecode:: sql
CREATE TABLE foo (
value INTEGER,
"foo", metadata_obj, Column("value", Integer), CheckConstraint(column("value") > 5)
)
-Both will produce the name ``ck_foo_value``::
+Both will produce the name ``ck_foo_value``:
+
+.. sourcecode:: sql
CREATE TABLE foo (
value INTEGER,
Table("foo", metadata_obj, Column("flag", Boolean(name="flag_bool")))
-The above table will produce the constraint name ``ck_foo_flag_bool``::
+The above table will produce the constraint name ``ck_foo_flag_bool``:
+
+.. sourcecode:: sql
CREATE TABLE foo (
flag BOOL,
Table("foo", metadata_obj, Column("flag", Boolean()))
-The above schema will produce::
+The above schema will produce:
+
+.. sourcecode:: sql
CREATE TABLE foo (
flag BOOL,
The resulting SQL embeds both functions as appropriate. ``ST_AsText``
is applied to the columns clause so that the return value is run through
the function before passing into a result set, and ``ST_GeomFromText``
-is run on the bound parameter so that the passed-in value is converted::
+is run on the bound parameter so that the passed-in value is converted:
+
+.. sourcecode:: sql
SELECT geometry.geom_id, ST_AsText(geometry.geom_data) AS geom_data_1
FROM geometry
print(select(geometry.c.geom_data.label("my_data")))
-Output::
+Output:
+
+.. sourcecode:: sql
SELECT ST_AsText(geometry.geom_data) AS my_data
FROM geometry
)
The ``pgp_sym_encrypt`` and ``pgp_sym_decrypt`` functions are applied
-to the INSERT and SELECT statements::
+to the INSERT and SELECT statements:
+
+.. sourcecode:: sql
INSERT INTO message (username, message)
VALUES (%(username)s, pgp_sym_encrypt(%(message)s, %(pgp_sym_encrypt_1)s))
- {'username': 'some user', 'message': 'this is my message',
- 'pgp_sym_encrypt_1': 'this is my passphrase'}
+ -- {'username': 'some user', 'message': 'this is my message',
+ -- 'pgp_sym_encrypt_1': 'this is my passphrase'}
SELECT pgp_sym_decrypt(message.message, %(pgp_sym_decrypt_1)s) AS message_1
FROM message
WHERE message.username = %(username_1)s
- {'pgp_sym_decrypt_1': 'this is my passphrase', 'username_1': 'some user'}
+ -- {'pgp_sym_decrypt_1': 'this is my passphrase', 'username_1': 'some user'}
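Both of these patterns rest on the same pair of hooks on the type object; a
condensed sketch of the geometry case, assuming the ``UserDefinedType`` base:

.. sourcecode:: python

    from sqlalchemy import func
    from sqlalchemy.types import UserDefinedType

    class Geometry(UserDefinedType):
        def get_col_spec(self):
            return "GEOMETRY"

        def bind_expression(self, bindvalue):
            # wrap values bound as parameters in INSERT / WHERE
            return func.ST_GeomFromText(bindvalue, type_=self)

        def column_expression(self, col):
            # wrap the column when it appears in the columns clause
            return func.ST_AsText(col, type_=self)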
PostgreSQL dialect. If we run ``meta.create_all()`` against the SQLite
dialect, for example, neither construct will be included:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import create_engine
>>> sqlite_engine = create_engine("sqlite+pysqlite://", echo=True)
see inline DDL for the CHECK constraint as well as a separate CREATE
statement emitted for the index:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import create_engine
>>> postgresql_engine = create_engine(
Column("index_value", Integer, server_default=text("0")),
)
-A create call for the above table will produce::
+A create call for the above table will produce:
+
+.. sourcecode:: sql
CREATE TABLE test (
abc varchar(20) default 'abc',
is passed for the "cart_id" column, the "cart_id_seq" sequence will be used to
generate a value. Typically, the sequence function is embedded in the
INSERT statement, which is combined with RETURNING so that the newly generated
-value can be returned to the Python code::
+value can be returned to the Python code:
+
+.. sourcecode:: sql
INSERT INTO cartitems (cart_id, description, createdate)
VALUES (next_val(cart_id_seq), 'some description', '2015-10-15 12:00:15')
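A sketch of the Python side of such an INSERT, with the column set abbreviated
from the example:

.. sourcecode:: python

    from sqlalchemy import (
        Column,
        DateTime,
        Integer,
        MetaData,
        Sequence,
        String,
        Table,
    )

    metadata_obj = MetaData()

    cartitems = Table(
        "cartitems",
        metadata_obj,
        # the sequence fires when no explicit value is passed for cart_id
        Column("cart_id", Integer, Sequence("cart_id_seq"), primary_key=True),
        Column("description", String(40)),
        Column("createdate", DateTime),
    )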
createdate = Column(DateTime)
When the "CREATE TABLE" statement is emitted, on PostgreSQL it would be
-emitted as::
+emitted as:
+
+.. sourcecode:: sql
CREATE TABLE cartitems (
cart_id INTEGER DEFAULT nextval('cart_id_seq') NOT NULL,
)
The DDL for the ``square`` table when run on a PostgreSQL 12 backend will look
-like::
+like:
+
+.. sourcecode:: sql
CREATE TABLE square (
id SERIAL NOT NULL,
)
The DDL for the ``data`` table when run on a PostgreSQL 12 backend will look
-like::
+like:
+
+.. sourcecode:: sql
CREATE TABLE data (
id INTEGER GENERATED BY DEFAULT AS IDENTITY (START WITH 42 CYCLE) NOT NULL,
error, depending on the backend. To activate this mode, set the parameter
:paramref:`_schema.Identity.always` to ``True`` in the
:class:`.Identity` construct. Updating the previous
-example to include this parameter will generate the following DDL::
+example to include this parameter will generate the following DDL:
+
+.. sourcecode:: sql
CREATE TABLE data (
id INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 42 CYCLE) NOT NULL,
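A sketch of the Python-side table definition corresponding to that DDL:

.. sourcecode:: python

    from sqlalchemy import Column, Identity, Integer, MetaData, Table

    metadata_obj = MetaData()

    data = Table(
        "data",
        metadata_obj,
        Column(
            "id",
            Integer,
            # GENERATED ALWAYS AS IDENTITY (START WITH 42 CYCLE)
            Identity(always=True, start=42, cycle=True),
            primary_key=True,
        ),
    )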
a file path is accepted, and in others a "data source name" replaces the "host"
and "database" portions. The typical form of a database URL is:
-.. sourcecode:: none
+.. sourcecode:: text
dialect+driver://username:password@host:port/database
"at" sign and slash characters are represented as ``%40`` and ``%2F``,
respectively:
-.. sourcecode:: none
+.. sourcecode:: text
postgresql+pg8000://dbuser:kx%40jj5%2Fg@pghost10/appdb
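Rather than escaping by hand, the encoding can be produced with the standard
library; a minimal sketch:

.. sourcecode:: python

    from urllib.parse import quote_plus

    password = quote_plus("kx@jj5/g")  # -> 'kx%40jj5%2Fg'
    url = f"postgresql+pg8000://dbuser:{password}@pghost10/appdb"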
schema is the default schema of our database connection, or if using a database
such as PostgreSQL suppose the "project" schema is set up in the PostgreSQL
``search_path``. This would mean that the database accepts the following
-two SQL statements as equivalent::
+two SQL statements as equivalent:
+
+.. sourcecode:: sql
-- schema qualified
SELECT message_id FROM project.messages
in conjunction with the :meth:`_types.TypeEngine.as_generic` method.
Given a table in MySQL (chosen because MySQL has a lot of vendor-specific
-datatypes and options)::
+datatypes and options):
+
+.. sourcecode:: sql
CREATE TABLE IF NOT EXISTS my_table (
id INTEGER PRIMARY KEY AUTO_INCREMENT,
parent_id = Column(ForeignKey("parent.id"))
parent = relationship("Parent")
-The above mapping will generate warnings::
+The above mapping will generate warnings:
+
+.. sourcecode:: text
SAWarning: relationship 'Child.parent' will copy column parent.id to column child.parent_id,
which conflicts with relationship(s): 'Parent.children' (copies parent.id to child.parent_id).
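One standard resolution is to link the two relationships with
``back_populates`` so that the ORM knows they coordinate against the same
columns; a sketch of the corrected mapping, reconstructed from the warning
above:

.. sourcecode:: python

    from sqlalchemy import Column, ForeignKey, Integer
    from sqlalchemy.orm import declarative_base, relationship

    Base = declarative_base()

    class Parent(Base):
        __tablename__ = "parent"

        id = Column(Integer, primary_key=True)
        children = relationship("Child", back_populates="parent")

    class Child(Base):
        __tablename__ = "child"

        id = Column(Integer, primary_key=True)
        parent_id = Column(ForeignKey("parent.id"))
        parent = relationship("Parent", back_populates="children")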
All tables in a relational database should have primary keys. Even a many-to-many
association table - the primary key would be the composite of the two association
-columns::
+columns:
+
+.. sourcecode:: sql
CREATE TABLE my_association (
user_id INTEGER REFERENCES user(id),
the results of which are matched up to the results from the first query.
We see two queries emitted like this:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> session.scalars(select(User).options(subqueryload(User.addresses))).all()
{opensql}-- the "main" query
When the inner query uses ``OFFSET`` and/or ``LIMIT`` without ordering,
the two queries may not see the same results:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> user = session.scalars(
... select(User).options(subqueryload(User.addresses)).limit(1)
or via the ``echo=True`` argument on :func:`_sa.create_engine`) can give an
idea how long things are taking. For example, if you log something
right after a SQL operation, you'd see something like this in your
-log::
+log:
+
+.. sourcecode:: text
17:37:48,325 INFO [sqlalchemy.engine.base.Engine.0x...048c] SELECT ...
17:37:48,326 INFO [sqlalchemy.engine.base.Engine.0x...048c] {<params>}
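Log output of this form can be enabled through the standard ``logging`` module
as an alternative to ``echo=True``; a minimal sketch:

.. sourcecode:: python

    import logging

    logging.basicConfig()
    # INFO on the engine logger emits SQL statements and parameters
    logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)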
While this is theoretically possible, the usefulness of the enhancement is
greatly decreased by the fact that many database operations require a ROLLBACK
in any case. Postgres in particular has operations which, once failed, the
-transaction is not allowed to continue::
+transaction is not allowed to continue:
+
+.. sourcecode:: text
test=> create table foo(id integer primary key);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "foo_pkey" for table "foo"
print(cursor.mogrify(str(compiled), compiled.params))
- The above code will produce psycopg2's raw bytestring::
+ The above code will produce psycopg2's raw bytestring:
+
+ .. sourcecode:: sql
b"SELECT a.id, a.data \nFROM a \nWHERE a.data = 'a511b0fc-76da-4c47-a4b4-716a8189b7ac'::uuid"
print(str(compiled) % compiled.params)
This will produce a non-working string that is nonetheless suitable for
- debugging::
+ debugging:
+
+ .. sourcecode:: sql
SELECT a.id, a.data
FROM a
print(re.sub(r"\?", lambda m: next(params), str(compiled)))
- The above snippet prints::
+ The above snippet prints:
+
+ .. sourcecode:: sql
SELECT a.id, a.data
FROM a
e = create_engine("postgresql+psycopg2://")
print(stmt.compile(e, compile_kwargs={"use_my_literal_recipe": True}))
- The above recipe will print::
+ The above recipe will print:
+
+ .. sourcecode:: sql
SELECT a.id, a.data
FROM a
print(stmt.compile(e, compile_kwargs={"literal_binds": True}))
- Again printing the same form::
+ Again printing the same form:
+
+ .. sourcecode:: sql
SELECT a.id, a.data
FROM a
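Putting the pieces together, a self-contained sketch of the ``literal_binds``
approach, with an illustrative table:

.. sourcecode:: python

    from sqlalchemy import column, select, table

    a = table("a", column("id"), column("data"))
    stmt = select(a.c.id, a.c.data).where(a.c.data == "some value")

    # stringify with parameter values rendered inline (simple types only)
    print(stmt.compile(compile_kwargs={"literal_binds": True}))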
`paramstyle <https://www.python.org/dev/peps/pep-0249/#paramstyle>`_, which
necessarily involve percent signs in their syntax. Most DBAPIs that do this
expect percent signs used for other reasons to be doubled up (i.e. escaped) in
-the string form of the statements used, e.g.::
+the string form of the statements used, e.g.:
+
+.. sourcecode:: sql
SELECT a, b FROM some_table WHERE a = %s AND c = %s AND num %% modulus = 0
substitution of bound parameters works in the same way as the Python string
interpolation operator ``%``, and in many cases the DBAPI actually uses this
operator directly. Above, the substitution of bound parameters would then look
-like::
+like:
+
+.. sourcecode:: sql
SELECT a, b FROM some_table WHERE a = 5 AND c = 10 AND num % modulus = 0
were created, as well as a way to get at server-generated
default values in an atomic way.
- An example of RETURNING, idiomatic to PostgreSQL, looks like::
+ An example of RETURNING, idiomatic to PostgreSQL, looks like:
+
+ .. sourcecode:: sql
INSERT INTO user_account (name) VALUES ('new name') RETURNING id, timestamp
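In Core, a statement of this form is built with ``Insert.returning()``; a
sketch against an illustrative ``table()``:

.. sourcecode:: python

    from sqlalchemy import column, insert, table

    user_account = table(
        "user_account", column("id"), column("name"), column("timestamp")
    )

    stmt = (
        insert(user_account)
        .values(name="new name")
        .returning(user_account.c.id, user_account.c.timestamp)
    )
    print(stmt)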
the latest 1.4 release.
When ``pip`` is available, the distribution can be
-downloaded from PyPI and installed in one step::
+downloaded from PyPI and installed in one step:
+
+.. sourcecode:: text
pip install SQLAlchemy
downloaded which provides native Cython / C extensions prebuilt.
In order to install the latest **prerelease** version, such as ``2.0.0b1``,
-pip requires that the ``--pre`` flag be used::
+pip requires that the ``--pre`` flag be used:
+
+.. sourcecode:: text
pip install --pre SQLAlchemy
-------------------------------------------------
When not installing from pip, the source distribution may be installed
-using the ``setup.py`` script::
+using the ``setup.py`` script:
+
+.. sourcecode:: text
python setup.py install
``setup.py`` will automatically build the extensions if an appropriate platform
is detected, assuming the Cython package is installed. A complete manual
-build looks like::
+build looks like:
+
+.. sourcecode:: text
# cd into SQLAlchemy source distribution
cd path/to/sqlalchemy
python setup.py install
Source builds may also be performed using :pep:`517` techniques, such as
-using build_::
+using build_:
+
+.. sourcecode:: text
# cd into SQLAlchemy source distribution
cd path/to/sqlalchemy
extensions, the ``DISABLE_SQLALCHEMY_CEXT`` environment variable may be
specified. The use case for this is either for special testing circumstances,
or in the rare case of compatibility/build issues not overcome by the usual
-"rebuild" mechanism::
+"rebuild" mechanism:
+
+.. sourcecode:: text
export DISABLE_SQLALCHEMY_CEXT=1; python setup.py install
This documentation covers SQLAlchemy version 2.0. If you're working on a
system that already has SQLAlchemy installed, check the version from your
-Python prompt like this:
-
-.. sourcecode:: python+sql
+Python prompt like this::
>>> import sqlalchemy
>>> sqlalchemy.__version__ # doctest: +SKIP
If we mark ``user1`` for deletion, after the flush operation proceeds,
``address1`` and ``address2`` will also be deleted:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> sess.delete(user1)
>>> sess.commit()
Upon deletion of a parent ``User`` object, the rows in ``address`` are not
deleted, but are instead de-associated:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> sess.delete(user1)
>>> sess.commit()
def __repr__(self):
return f"Vertex(start={self.start}, end={self.end})"
-The above mapping would correspond to a CREATE TABLE statement as::
+The above mapping would correspond to a CREATE TABLE statement as follows:
+
+.. sourcecode:: pycon+sql
>>> from sqlalchemy.schema import CreateTable
{sql}>>> print(CreateTable(Vertex.__table__))
-
CREATE TABLE vertices (
id INTEGER NOT NULL,
x1 INTEGER NOT NULL,
We can create a ``Vertex`` object, assign ``Point`` objects as members,
and they will be persisted as expected:
- .. sourcecode:: python+sql
+ .. sourcecode:: pycon+sql
>>> v = Vertex(start=Point(3, 4), end=Point(5, 6))
>>> session.add(v)
as possible when using the ORM :class:`_orm.Session` (including the legacy
:class:`_orm.Query` object) to select ``Point`` objects:
- .. sourcecode:: python+sql
+ .. sourcecode:: pycon+sql
>>> stmt = select(Vertex.start, Vertex.end)
{sql}>>> session.execute(stmt).all()
The ``Vertex.start`` and ``Vertex.end`` attributes may be used in
WHERE criteria and similar, using ad-hoc ``Point`` objects for comparisons:
- .. sourcecode:: python+sql
+ .. sourcecode:: pycon+sql
>>> stmt = select(Vertex).where(Vertex.start == Point(3, 4)).where(Vertex.end < Point(7, 8))
{sql}>>> session.scalars(stmt).all()
By default, the ``Point`` object **must be replaced by a new object** for
changes to be detected:
- .. sourcecode:: python+sql
+ .. sourcecode:: pycon+sql
{sql}>>> v1 = session.scalars(select(Vertex)).one()
SELECT vertices.id, vertices.x1, vertices.y1, vertices.x2, vertices.y2
[...] ()
{stop}
- v1.end = Point(x=10, y=14)
- {sql}session.commit()
+ >>> v1.end = Point(x=10, y=14)
+ {sql}>>> session.commit()
UPDATE vertices SET x2=?, y2=? WHERE vertices.id = ?
[...] (10, 14, 1)
COMMIT
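To have in-place mutations on the composite detected instead, the documented
approach is the ``MutableComposite`` mixin; a sketch, reusing the ``Point``
example:

.. sourcecode:: python

    import dataclasses

    from sqlalchemy.ext.mutable import MutableComposite

    @dataclasses.dataclass
    class Point(MutableComposite):
        x: int
        y: int

        def __setattr__(self, key, value):
            # set the attribute, then alert all parent objects of the change
            object.__setattr__(self, key, value)
            self.changed()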
remote_side=ip_address,
)
-The above relationship will produce a join like::
+The above relationship will produce a join like:
+
+.. sourcecode:: sql
SELECT host_entry.id, host_entry.ip_address, host_entry.content
FROM host_entry JOIN host_entry AS host_entry_1
select(IPA).join(IPA.network)
-Will render as::
+Will render as:
+
+.. sourcecode:: sql
SELECT ip_address.id AS ip_address_id, ip_address.v4address AS ip_address_v4address
FROM ip_address JOIN network ON ip_address.v4address << network.v4representation
)
Above, if given an ``Element`` object with a path attribute of ``"/foo/bar2"``,
-we seek for a load of ``Element.descendants`` to look like::
+we would like a load of ``Element.descendants`` to look like:
+
+.. sourcecode:: sql
SELECT element.path AS element_path
FROM element
With a mapping similar to the above, the SQL rendered by the ORM for
INSERT and UPDATE will include ``created`` and ``updated`` in the RETURNING
-clause::
+clause:
+
+.. sourcecode:: sql
INSERT INTO my_table (created) VALUES (now()) RETURNING my_table.id, my_table.created, my_table.updated
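Such RETURNING fetches are enabled on the mapping with the ``eager_defaults``
mapper option; a hedged sketch of a mapping consistent with the statement
above:

.. sourcecode:: python

    from sqlalchemy import Column, DateTime, Integer, func
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class MyModel(Base):
        __tablename__ = "my_table"
        __mapper_args__ = {"eager_defaults": True}

        id = Column(Integer, primary_key=True)
        created = Column(DateTime, default=func.now())
        updated = Column(DateTime, onupdate=func.now())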
trigger typically issues a SQL call at the point of access
in order to load the related object or objects:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> spongebob.addresses
{opensql}SELECT
collections rather than many-to-one references. This is achieved
using the :func:`_orm.joinedload` loader option:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import select
>>> from sqlalchemy.orm import joinedload
The JOIN will right-nest itself when applied in a chain that includes
an OUTER JOIN:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import select
>>> from sqlalchemy.orm import joinedload
against ``Address.email_address`` is not valid - the ``Address`` entity is not
named in the query:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import select
>>> from sqlalchemy.orm import joinedload
FROM list. The correct way to load the ``User`` records and order by email
address is to use :meth:`_sql.Select.join`:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import select
>>> stmt = (
are ordering on, the other is used anonymously to load the contents of the
``User.addresses`` collection:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> stmt = (
to see why :func:`joinedload` does what it does, consider if we were
**filtering** on a particular ``Address``:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> stmt = (
... select(User)
retrieve the actual ``User`` rows we want. Below we change :func:`_orm.joinedload`
into :func:`.subqueryload`:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> stmt = (
... select(User)
relationship to those of the child objects, inside of an IN clause, in
order to load related associations:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import select
>>> from sqlalchemy import selectinload
For simple [1]_ many-to-one loads, a JOIN is also not needed as the foreign key
value from the parent object is used:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import select
>>> from sqlalchemy import selectinload
for the primary object being returned, then link that to the sum of all
the collection members to load them at once:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> from sqlalchemy import select
>>> from sqlalchemy.orm import subqueryload
Most :meth:`~.Session.merge` issues can be examined by first checking -
is the object prematurely in the session?
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> a1 = Address(id=existing_a1, user_id=user.id)
>>> assert a1 not in session
correspond to the ``id`` and ``name`` columns are gone. If we were to access
one of these columns and are watching SQL, we'd see this:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> print(user.name)
{opensql}SELECT user.id AS user_id, user.name AS user_name
``version_id``. When an object of type ``User`` is first flushed, the
``version_id`` column will be given a value of "1". Then, an UPDATE
of the table later on will always be emitted in a manner similar to the
-following::
+following:
+
+.. sourcecode:: sql
UPDATE user SET version_id=:version_id, name=:name
WHERE user.id = :user_id AND user.version_id = :user_version_id
- {"name": "new name", "version_id": 2, "user_id": 1, "user_version_id": 1}
+ -- {"name": "new name", "version_id": 2, "user_id": 1, "user_version_id": 1}
The above UPDATE statement is updating the row that not only matches
``user.id = 1``, it also requires that ``user.version_id = 1``, where "1"
race condition where the version counter may change before it can be fetched.
When the target database supports RETURNING, an INSERT statement for our ``User`` class will look
-like this::
+like this:
+
+.. sourcecode:: sql
INSERT INTO "user" (name) VALUES (%(name)s) RETURNING "user".id, "user".xmin
- {'name': 'ed'}
+ -- {'name': 'ed'}
Where above, the ORM can acquire any newly generated primary key values along
with server-generated version identifiers in one statement. When the backend
does not support RETURNING, an additional SELECT must be emitted for **every**
INSERT and UPDATE, which is much less efficient, and also introduces the possibility of
-missed version counters::
+missed version counters:
+
+.. sourcecode:: sql
INSERT INTO "user" (name) VALUES (%(name)s)
- {'name': 'ed'}
+ -- {'name': 'ed'}
SELECT "user".version_id AS user_version_id FROM "user" where
"user".id = :param_1
- {"param_1": 1}
+ -- {"param_1": 1}
It is *strongly recommended* that server side version counters only be used
when absolutely necessary and only on backends that support :term:`RETURNING`,
git+https://github.com/sqlalchemyorg/changelog.git#egg=changelog
git+https://github.com/sqlalchemyorg/sphinx-paramlinks.git#egg=sphinx-paramlinks
git+https://github.com/sqlalchemyorg/zzzeeksphinx.git#egg=zzzeeksphinx
-sphinx-copybutton
\ No newline at end of file
+sphinx-copybutton
+sphinx-autobuild
user name fields as well as count of addresses, for those users that have more
than one address:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> with engine.connect() as conn:
... result = conn.execute(
each ``Address`` object ultimately came from a subquery against the
``address`` table rather than that table directly:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> subq = select(Address).where(~Address.email_address.like("%@aol.com")).subquery()
>>> address_subq = aliased(Address, subq)
Another example follows, which is exactly the same except it makes use of the
:class:`_sql.CTE` construct instead:
-.. sourcecode:: python+sql
+.. sourcecode:: pycon+sql
>>> cte_obj = select(Address).where(~Address.email_address.like("%@aol.com")).cte()
>>> address_cte = aliased(Address, cte_obj)
.. sourcecode:: pycon+sql
- {sql}>>> sandy = session.execute(select(User).filter_by(name="sandy")).scalar_one()
- BEGIN (implicit)
+ >>> sandy = session.execute(select(User).filter_by(name="sandy")).scalar_one()
+ {opensql}BEGIN (implicit)
SELECT user_account.id, user_account.name, user_account.fullname
FROM user_account
WHERE user_account.name = ?
.. sourcecode:: pycon+sql
- {sql}>>> patrick = session.get(User, 3)
- SELECT user_account.id AS user_account_id, user_account.name AS user_account_name,
+ >>> patrick = session.get(User, 3)
+ {opensql}SELECT user_account.id AS user_account_id, user_account.name AS user_account_name,
user_account.fullname AS user_account_fullname
FROM user_account
WHERE user_account.id = ?
.. sourcecode:: pycon+sql
- {sql}>>> session.execute(select(User).where(User.name == 'patrick')).scalar_one() is patrick
- SELECT user_account.id, user_account.name, user_account.fullname
+ >>> session.execute(select(User).where(User.name == "patrick")).scalar_one() is patrick
+ {opensql}SELECT user_account.id, user_account.name, user_account.fullname
FROM user_account
WHERE user_account.name = ?
[...] ('patrick',){stop}
else:
add_padding = None
code = "\n".join(c for *_, c in input_block)
+
try:
formatted = format_str(code, mode=BLACK_MODE)
except Exception as e:
disable_format = False
for line_no, line in enumerate(original.splitlines(), 1):
# start_code_section requires no spaces at the start
+
if start_code_section.match(line.strip()):
if plain_block:
buffer.extend(