From: provinzkraut <25355197+provinzkraut@users.noreply.github.com> Date: Tue, 22 Mar 2022 17:17:56 +0000 (-0400) Subject: Upgrade parts of the documentation to 2.0 style X-Git-Tag: rel_2_0_0b1~410 X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=6652c62bd90a455843c77f41acd50af920126351;p=thirdparty%2Fsqlalchemy%2Fsqlalchemy.git Upgrade parts of the documentation to 2.0 style I've started to work on #7659, implementing the low hanging fruit changes for now. Some still remain, which I've outlined as a [comment](https://github.com/sqlalchemy/sqlalchemy/issues/7659#issuecomment-1073029151), and probably also some that I didn't catch. This pull request is: - [x] A documentation / typographical error fix - Good to go, no issue or tests are needed - [ ] A short code fix - please include the issue number, and create an issue if none exists, which must include a complete example of the issue. one line code fixes without an issue and demonstration will not be accepted. - Please include: `Fixes: #` in the commit message - please include tests. one line code fixes without tests will not be accepted. - [ ] A new feature implementation - please include the issue number, and create an issue if none exists, which must include a complete example of how the feature would look. - Please include: `Fixes: #` in the commit message - please include tests. **Have a nice day!** Closes: #7829 Pull-request: https://github.com/sqlalchemy/sqlalchemy/pull/7829 Pull-request-sha: a89561dd0c96c2f9a6d992fa0fb94683afaf7e30 Change-Id: Ibc5ea94b5c4b2d7b1cf7bea24f4394d1fde749be --- diff --git a/doc/build/faq/ormconfiguration.rst b/doc/build/faq/ormconfiguration.rst index 3eab218547..d33046685f 100644 --- a/doc/build/faq/ormconfiguration.rst +++ b/doc/build/faq/ormconfiguration.rst @@ -234,25 +234,22 @@ The same idea applies to all the other arguments, such as ``foreign_keys``:: .. 
_faq_subqueryload_limit_sort: -Why is ``ORDER BY`` required with ``LIMIT`` (especially with ``subqueryload()``)? ---------------------------------------------------------------------------------- - -A relational database can return rows in any -arbitrary order, when an explicit ordering is not set. -While this ordering very often corresponds to the natural -order of rows within a table, this is not the case for all databases and -all queries. The consequence of this is that any query that limits rows -using ``LIMIT`` or ``OFFSET`` should **always** specify an ``ORDER BY``. -Otherwise, it is not deterministic which rows will actually be returned. - -When we use a SQLAlchemy method like :meth:`_query.Query.first`, we are in fact -applying a ``LIMIT`` of one to the query, so without an explicit ordering -it is not deterministic what row we actually get back. +Why is ``ORDER BY`` recommended with ``LIMIT`` (especially with ``subqueryload()``)? +------------------------------------------------------------------------------------ + +When ORDER BY is not used for a SELECT statement that returns rows, the +relational database is free to return matched rows in any arbitrary +order. While this ordering very often corresponds to the natural +order of rows within a table, this is not the case for all databases and all +queries. The consequence of this is that any query that limits rows using +``LIMIT`` or ``OFFSET``, or which merely selects the first row of the result, +discarding the rest, will not be deterministic in terms of what result row is +returned, assuming there's more than one row that matches the query's criteria. + While we may not notice this for simple queries on a database that usually -returns rows in their natural -order, it becomes much more of an issue if we also use :func:`_orm.subqueryload` -to load related collections, and we may not be loading the collections -as intended.
+returns rows in their natural order, it becomes more of an issue if we +also use :func:`_orm.subqueryload` to load related collections, and we may not +be loading the collections as intended. SQLAlchemy implements :func:`_orm.subqueryload` by issuing a separate query, the results of which are matched up to the results from the first query. @@ -260,7 +257,7 @@ We see two queries emitted like this: .. sourcecode:: python+sql - >>> session.query(User).options(subqueryload(User.addresses)).all() + >>> session.scalars(select(User).options(subqueryload(User.addresses))).all() {opensql}-- the "main" query SELECT users.id AS users_id FROM users @@ -279,7 +276,7 @@ the two queries may not see the same results: .. sourcecode:: python+sql - >>> user = session.query(User).options(subqueryload(User.addresses)).first() + >>> user = session.scalars(select(User).options(subqueryload(User.addresses)).limit(1)).first() {opensql}-- the "main" query SELECT users.id AS users_id FROM users @@ -321,10 +318,10 @@ won't see that anything actually went wrong. The solution to this problem is to always specify a deterministic sort order, so that the main query always returns the same set of rows. This generally -means that you should :meth:`_query.Query.order_by` on a unique column on the table. +means that you should :meth:`_sql.Select.order_by` on a unique column on the table. 
The primary key is a good choice for this:: - session.query(User).options(subqueryload(User.addresses)).order_by(User.id).first() + session.scalars(select(User).options(subqueryload(User.addresses)).order_by(User.id).limit(1)).first() Note that the :func:`_orm.joinedload` eager loader strategy does not suffer from the same problem because only one query is ever issued, so the load query diff --git a/doc/build/faq/performance.rst b/doc/build/faq/performance.rst index 781d6c79d3..9da73c7a7d 100644 --- a/doc/build/faq/performance.rst +++ b/doc/build/faq/performance.rst @@ -271,7 +271,7 @@ Below is a simple recipe which works profiling into a context manager:: To profile a section of code:: with profiled(): - Session.query(FooClass).filter(FooClass.somevalue==8).all() + session.scalars(select(FooClass).where(FooClass.somevalue==8)).all() The output of profiling can be used to give an idea where time is being spent. A section of profiling output looks like this:: @@ -403,18 +403,18 @@ Common strategies to mitigate this include: * fetch individual columns instead of full entities, that is:: - session.query(User.id, User.name) + select(User.id, User.name) instead of:: - session.query(User) + select(User) * Use :class:`.Bundle` objects to organize column-based results:: u_b = Bundle('user', User.id, User.name) a_b = Bundle('address', Address.id, Address.email) - for user, address in session.query(u_b, a_b).join(User.addresses): + for user, address in session.execute(select(u_b, a_b).join(User.addresses)): # ... * Use result caching - see :ref:`examples_caching` for an in-depth example diff --git a/doc/build/faq/sessions.rst b/doc/build/faq/sessions.rst index 6027ab3714..8281b4bf55 100644 --- a/doc/build/faq/sessions.rst +++ b/doc/build/faq/sessions.rst @@ -386,7 +386,7 @@ ORM behind the scenes, the end user sets up object relationships naturally. 
Therefore, the recommended way to set ``o.foo`` is to do just that - set it!:: - foo = Session.query(Foo).get(7) + foo = session.get(Foo, 7) o.foo = foo session.commit() @@ -395,7 +395,7 @@ setting a foreign-key attribute to a new value currently does not trigger an "expire" event of the :func:`_orm.relationship` in which it's involved. This means that for the following sequence:: - o = Session.query(SomeClass).first() + o = session.scalars(select(SomeClass).limit(1)).first() assert o.foo is None # accessing an un-set attribute sets it to None o.foo_id = 7 @@ -413,18 +413,18 @@ and expires all state:: session.commit() # expires all attributes - foo_7 = Session.query(Foo).get(7) + foo_7 = session.get(Foo, 7) assert o.foo is foo_7 # o.foo lazyloads on access A more minimal operation is to expire the attribute individually - this can be performed for any :term:`persistent` object using :meth:`.Session.expire`:: - o = Session.query(SomeClass).first() + o = session.scalars(select(SomeClass).limit(1)).first() o.foo_id = 7 session.expire(o, ['foo']) # object must be persistent for this - foo_7 = Session.query(Foo).get(7) + foo_7 = session.get(Foo, 7) assert o.foo is foo_7 # o.foo lazyloads on access diff --git a/doc/build/orm/extensions/associationproxy.rst b/doc/build/orm/extensions/associationproxy.rst index aef046b049..13882b8991 100644 --- a/doc/build/orm/extensions/associationproxy.rst +++ b/doc/build/orm/extensions/associationproxy.rst @@ -461,7 +461,7 @@ immediate target of an association proxy is a **mapped column expression**, standard column operators can be used which will be embedded in the subquery.
For example a straight equality operator:: - >>> print(session.query(User).filter(User.special_keys == "jek")) + >>> print(session.scalars(select(User).where(User.special_keys == "jek"))) SELECT "user".id AS user_id, "user".name AS user_name FROM "user" WHERE EXISTS (SELECT 1 @@ -470,7 +470,7 @@ For example a straight equality operator:: a LIKE operator:: - >>> print(session.query(User).filter(User.special_keys.like("%jek"))) + >>> print(session.scalars(select(User).where(User.special_keys.like("%jek")))) SELECT "user".id AS user_id, "user".name AS user_name FROM "user" WHERE EXISTS (SELECT 1 @@ -484,7 +484,7 @@ operators can be used instead, such as :meth:`_orm.PropComparator.has` and two association proxies linked together, so when using this proxy for generating SQL phrases, we get two levels of EXISTS subqueries:: - >>> print(session.query(User).filter(User.keywords.any(Keyword.keyword == "jek"))) + >>> print(session.scalars(select(User).where(User.keywords.any(Keyword.keyword == "jek")))) SELECT "user".id AS user_id, "user".name AS user_name FROM "user" WHERE EXISTS (SELECT 1 diff --git a/doc/build/orm/join_conditions.rst b/doc/build/orm/join_conditions.rst index a6d0309918..51385efed8 100644 --- a/doc/build/orm/join_conditions.rst +++ b/doc/build/orm/join_conditions.rst @@ -166,7 +166,7 @@ is generally only significant when SQLAlchemy is rendering SQL in order to load or represent this relationship. That is, it's used in the SQL statement that's emitted in order to perform a per-attribute lazy load, or when a join is constructed at query time, such as via -:meth:`_query.Query.join`, or via the eager "joined" or "subquery" styles of +:meth:`Select.join`, or via the eager "joined" or "subquery" styles of loading. 
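The role that :meth:`Select.join` plays here can be sketched as a self-contained example; the ``User``/``Address`` models and table names below are illustrative assumptions added for this sketch, not part of the patch:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, select
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    addresses = relationship("Address", back_populates="user")

class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    email = Column(String(100))
    user_id = Column(Integer, ForeignKey("user_account.id"))
    user = relationship("User", back_populates="addresses")

# joining along the relationship attribute lets the configured join
# condition (the primaryjoin) supply the ON clause of the rendered SQL
stmt = select(User).join(User.addresses)
print(stmt)
```

Printing the statement shows the ON clause derived from the relationship's join condition, without it being spelled out by hand.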
When in-memory objects are being manipulated, we can place any ``Address`` object we'd like into the ``boston_addresses`` collection, regardless of what the value of the ``.city`` attribute @@ -299,7 +299,7 @@ A complete example:: Above, a query such as:: - session.query(IPA).join(IPA.network) + select(IPA).join(IPA.network) Will render as:: @@ -700,7 +700,7 @@ directly. A query from ``A`` to ``D`` looks like: .. sourcecode:: python+sql - sess.query(A).join(A.d).all() + sess.scalars(select(A).join(A.d)).all() {opensql}SELECT a.id AS a_id, a.b_id AS a_b_id FROM a JOIN ( @@ -801,7 +801,7 @@ With the above mapping, a simple join looks like: .. sourcecode:: python+sql - sess.query(A).join(A.b).all() + sess.scalars(select(A).join(A.b)).all() {opensql}SELECT a.id AS a_id, a.b_id AS a_b_id FROM a JOIN (b JOIN d ON d.b_id = b.id JOIN c ON c.id = d.c_id) ON a.b_id = b.id @@ -827,7 +827,7 @@ A query using the above ``A.b`` relationship will render a subquery: .. sourcecode:: python+sql - sess.query(A).join(A.b).all() + sess.scalars(select(A).join(A.b)).all() {opensql}SELECT a.id AS a_id, a.b_id AS a_b_id FROM a JOIN (SELECT b.id AS id, b.some_b_column AS some_b_column @@ -838,10 +838,11 @@ so in terms of ``B_viacd_subquery`` rather than ``B`` directly: .. sourcecode:: python+sql - ( - sess.query(A).join(A.b). - filter(B_viacd_subquery.some_b_column == "some b"). 
- order_by(B_viacd_subquery.id) + sess.scalars( + select(A) + .join(A.b) + .where(B_viacd_subquery.some_b_column == "some b") + .order_by(B_viacd_subquery.id) ).all() {opensql}SELECT a.id AS a_id, a.b_id AS a_b_id FROM a JOIN (SELECT b.id AS id, b.some_b_column AS some_b_column @@ -890,8 +891,8 @@ ten items for each collection:: We can use the above ``partitioned_bs`` relationship with most of the loader strategies, such as :func:`.selectinload`:: - for a1 in s.query(A).options(selectinload(A.partitioned_bs)): - print(a1.partitioned_bs) # <-- will be no more than ten objects + for a1 in session.scalars(select(A).options(selectinload(A.partitioned_bs))): + print(a1.partitioned_bs) # <-- will be no more than ten objects Where above, the "selectinload" query looks like: diff --git a/doc/build/orm/loading_columns.rst b/doc/build/orm/loading_columns.rst index de10901e46..6b3673dbae 100644 --- a/doc/build/orm/loading_columns.rst +++ b/doc/build/orm/loading_columns.rst @@ -15,7 +15,7 @@ Deferred Column Loading Deferred column loading allows particular columns of a table to be loaded only upon direct access, instead of when the entity is queried using -:class:`_query.Query`. This feature is useful when one wants to avoid loading a large text or binary field into memory when it's not needed. +:class:`_sql.Select`. This feature is useful when one wants to avoid loading a large text or binary field into memory when it's not needed.
Individual columns can be lazy loaded by themselves or placed into groups that lazy-load together, using the :func:`_orm.deferred` function to @@ -65,16 +65,18 @@ Deferred Column Loader Query Options ------------------------------------ Columns can be marked as "deferred" or reset to "undeferred" at query time -using options which are passed to the :meth:`_query.Query.options` method; the most +using options which are passed to the :meth:`_sql.Select.options` method; the most basic query options are :func:`_orm.defer` and :func:`_orm.undefer`:: from sqlalchemy.orm import defer from sqlalchemy.orm import undefer + from sqlalchemy import select + + stmt = select(Book) + stmt = stmt.options(defer('summary'), undefer('excerpt')) + session.scalars(stmt).all() - query = session.query(Book) - query = query.options(defer('summary'), undefer('excerpt')) - query.all() Above, the "summary" column will not load until accessed, and the "excerpt" column will load immediately even if it was mapped as a "deferred" column. @@ -83,23 +85,28 @@ column will load immediately even if it was mapped as a "deferred" column. using :func:`_orm.undefer_group`, sending in the group name:: from sqlalchemy.orm import undefer_group + from sqlalchemy import select + + stmt = select(Book) + stmt = stmt.options(undefer_group('photos')) + session.scalars(stmt).all() - query = session.query(Book) - query.options(undefer_group('photos')).all() .. 
_deferred_loading_w_multiple: Deferred Loading across Multiple Entities ----------------------------------------- -To specify column deferral for a :class:`_query.Query` that loads multiple types of +To specify column deferral for a :class:`_sql.Select` that loads multiple types of entities at once, the deferral options may be specified more explicitly using class-bound attributes, rather than string names:: from sqlalchemy.orm import defer + from sqlalchemy import select + + stmt = select(Book, Author).join(Book.author) + stmt = stmt.options(defer(Author.bio)) - query = session.query(Book, Author).join(Book.author) - query = query.options(defer(Author.bio)) Column deferral options may also indicate that they take place along various relationship paths, which are themselves often :ref:`eagerly loaded @@ -114,11 +121,12 @@ option (described later in this section) to defer all ``Book`` columns except those explicitly specified:: from sqlalchemy.orm import joinedload + from sqlalchemy import select - query = session.query(Author) - query = query.options( - joinedload(Author.books).load_only(Book.summary, Book.excerpt), - ) + stmt = select(Author) + stmt = stmt.options( + joinedload(Author.books).load_only(Book.summary, Book.excerpt) + ) Option structures as above can also be organized in more complex ways, such as hierarchically using the :meth:`_orm.Load.options` @@ -129,17 +137,20 @@ may be used:: from sqlalchemy.orm import defer from sqlalchemy.orm import joinedload from sqlalchemy.orm import load_only - - query = session.query(Author) - query = query.options( - joinedload(Author.book).options( - load_only(Book.summary, Book.excerpt), - joinedload(Book.citations).options( - joinedload(Citation.author), - defer(Citation.fulltext) - ) - ) + from sqlalchemy import select + + stmt = select(Author) + stmt = stmt.options( + joinedload(Author.book).options( + load_only(Book.summary, Book.excerpt), + joinedload(Book.citations).options( + joinedload(Citation.author), + 
defer(Citation.fulltext) ) + ) + ) + + .. versionadded:: 1.3.6 Added :meth:`_orm.Load.options` to allow easier construction of hierarchies of loader options. @@ -150,8 +161,11 @@ option structure without actually setting any options at that level, so that fur sub-options may be applied. The :func:`_orm.defaultload` function can be used to create the same structure as we did above using :meth:`_orm.Load.options` as:: - query = session.query(Author) - query = query.options( + from sqlalchemy import select + from sqlalchemy.orm import defaultload + + stmt = select(Author) + stmt = stmt.options( joinedload(Author.book).load_only(Book.summary, Book.excerpt), defaultload(Author.book).joinedload(Book.citations).joinedload(Citation.author), defaultload(Author.book).defaultload(Book.citations).defer(Citation.fulltext) @@ -172,8 +186,9 @@ the "summary" and "excerpt" columns, we could say:: from sqlalchemy.orm import defer from sqlalchemy.orm import undefer + from sqlalchemy import select - session.query(Book).options( + select(Book).options( defer('*'), undefer("summary"), undefer("excerpt")) Above, the :func:`.defer` option is applied using a wildcard to all column @@ -189,8 +204,9 @@ which will apply deferred behavior to all column attributes except those that are named:: from sqlalchemy.orm import load_only + from sqlalchemy import select - session.query(Book).options(load_only(Book.summary, Book.excerpt)) + select(Book).options(load_only(Book.summary, Book.excerpt)) Wildcard and Exclusionary Options with Multiple-Entity Queries ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -235,7 +251,7 @@ loading, discussed at :ref:`prevent_lazy_with_raiseload`. 
Using the :paramref:`.orm.defer.raiseload` parameter on the :func:`.defer` option, an exception is raised if the attribute is accessed:: - book = session.query(Book).options(defer(Book.summary, raiseload=True)).first() + book = session.scalars(select(Book).options(defer(Book.summary, raiseload=True)).limit(1)).first() # would raise an exception book.summary @@ -253,7 +269,8 @@ Deferred "raiseload" can be configured at the mapper level via summary = deferred(Column(String(2000)), raiseload=True) excerpt = deferred(Column(Text), raiseload=True) - book_w_excerpt = session.query(Book).options(undefer(Book.excerpt)).first() + book_w_excerpt = session.scalars(select(Book).options(undefer(Book.excerpt)).limit(1)).first() + @@ -285,9 +302,10 @@ namespace. The bundle allows columns to be grouped together:: from sqlalchemy.orm import Bundle + from sqlalchemy import select bn = Bundle('mybundle', MyClass.data1, MyClass.data2) - for row in session.query(bn).filter(bn.c.data1 == 'd1'): + for row in session.execute(select(bn).where(bn.c.data1 == "d1")): print(row.mybundle.data1, row.mybundle.data2) The bundle can be subclassed to provide custom behaviors when results @@ -323,7 +341,7 @@ return structure with a straight Python dictionary:: A result from the above bundle will return dictionary values:: bn = DictBundle('mybundle', MyClass.data1, MyClass.data2) - for row in session.query(bn).filter(bn.c.data1 == 'd1'): + for row in session.execute(select(bn).where(bn.c.data1 == 'd1')): print(row.mybundle['data1'], row.mybundle['data2']) The :class:`.Bundle` construct is also integrated into the behavior diff --git a/doc/build/orm/mapped_attributes.rst b/doc/build/orm/mapped_attributes.rst index a4fd3115d5..cd36384c55 100644 --- a/doc/build/orm/mapped_attributes.rst +++ b/doc/build/orm/mapped_attributes.rst @@ -153,7 +153,7 @@ The approach above will work, but there's more we can add.
While our ``EmailAddress`` object will shuttle the value through the ``email`` descriptor and into the ``_email`` mapped attribute, the class level ``EmailAddress.email`` attribute does not have the usual expression semantics -usable with :class:`_query.Query`. To provide these, we instead use the +usable with :class:`_sql.Select`. To provide these, we instead use the :mod:`~sqlalchemy.ext.hybrid` extension as follows:: from sqlalchemy.ext.hybrid import hybrid_property @@ -180,11 +180,12 @@ that is, from the ``EmailAddress`` class directly: .. sourcecode:: python+sql from sqlalchemy.orm import Session + from sqlalchemy import select session = Session() - {sql}address = session.query(EmailAddress).\ - filter(EmailAddress.email == 'address@example.com').\ - one() + {sql}address = session.scalars( + select(EmailAddress).where(EmailAddress.email == 'address@example.com') + ).one() SELECT address.email AS address_email, address.id AS address_id FROM address WHERE address.email = ? @@ -240,7 +241,7 @@ attribute, a SQL function is rendered which produces the same effect: .. sourcecode:: python+sql - {sql}address = session.query(EmailAddress).filter(EmailAddress.email == 'address').one() + {sql}address = session.scalars(select(EmailAddress).where(EmailAddress.email == 'address')).one() SELECT address.email AS address_email, address.id AS address_id FROM address WHERE substr(address.email, ?, length(address.email) - ?) = ?
diff --git a/doc/build/orm/mapped_sql_expr.rst b/doc/build/orm/mapped_sql_expr.rst index d9d675fd70..818a6a9520 100644 --- a/doc/build/orm/mapped_sql_expr.rst +++ b/doc/build/orm/mapped_sql_expr.rst @@ -34,12 +34,13 @@ will provide for us the ``fullname``, which is the string concatenation of the t Above, the ``fullname`` attribute is interpreted at both the instance and class level, so that it is available from an instance:: - some_user = session.query(User).first() + some_user = session.scalars(select(User).limit(1)).first() print(some_user.fullname) as well as usable within queries:: - some_user = session.query(User).filter(User.fullname == "John Smith").first() + some_user = session.scalars(select(User).where(User.fullname == "John Smith").limit(1)).first() + The string concatenation example is a simple one, where the Python expression can be dual purposed at the instance and class level. Often, the SQL expression @@ -251,7 +252,7 @@ assigned to ``filename`` and ``path`` are usable directly. The use of the :attr:`.ColumnProperty.expression` attribute is only necessary when using the :class:`.ColumnProperty` directly within the mapping definition:: - q = session.query(File.path).filter(File.filename == 'foo.txt') + stmt = select(File.path).where(File.filename == 'foo.txt') Using a plain descriptor diff --git a/doc/build/orm/nonstandard_mappings.rst b/doc/build/orm/nonstandard_mappings.rst index ff02109e89..cd7638ee0c 100644 --- a/doc/build/orm/nonstandard_mappings.rst +++ b/doc/build/orm/nonstandard_mappings.rst @@ -78,7 +78,7 @@ time while making use of the proper context, that is, accommodating for aliases and similar, the accessor :attr:`.ColumnProperty.Comparator.expressions` may be used:: - q = session.query(AddressUser).group_by(*AddressUser.id.expressions) + stmt = select(AddressUser).group_by(*AddressUser.id.expressions) .. versionadded:: 1.3.17 Added the :attr:`.ColumnProperty.Comparator.expressions` accessor.
diff --git a/doc/build/orm/persistence_techniques.rst b/doc/build/orm/persistence_techniques.rst index 18bb984cdd..0d7f1684b0 100644 --- a/doc/build/orm/persistence_techniques.rst +++ b/doc/build/orm/persistence_techniques.rst @@ -21,7 +21,7 @@ an attribute:: value = Column(Integer) - someobject = session.query(SomeClass).get(5) + someobject = session.get(SomeClass, 5) # set 'value' attribute to a SQL expression adding one someobject.value = SomeClass.value + 1 diff --git a/doc/build/orm/self_referential.rst b/doc/build/orm/self_referential.rst index 2f1c021020..b1afb1a461 100644 --- a/doc/build/orm/self_referential.rst +++ b/doc/build/orm/self_referential.rst @@ -130,7 +130,7 @@ Self-Referential Query Strategies Querying of self-referential structures works like any other query:: # get all nodes named 'child2' - session.query(Node).filter(Node.data=='child2') + session.scalars(select(Node).where(Node.data=='child2')) However extra care is needed when attempting to join along the foreign key from one level of the tree to the next. 
In SQL, @@ -147,10 +147,12 @@ looks like: from sqlalchemy.orm import aliased nodealias = aliased(Node) - session.query(Node).filter(Node.data=='subchild1').\ - join(Node.parent.of_type(nodealias)).\ - filter(nodealias.data=="child2").\ - all() + session.scalars( + select(Node) + .where(Node.data == "subchild1") + .join(Node.parent.of_type(nodealias)) + .where(nodealias.data == "child2") + ).all() {opensql}SELECT node.id AS node_id, node.parent_id AS node_parent_id, node.data AS node_data @@ -190,7 +192,7 @@ configured via :paramref:`~.relationships.join_depth`: lazy="joined", join_depth=2) - session.query(Node).all() + session.scalars(select(Node)).all() {opensql}SELECT node_1.id AS node_1_id, node_1.parent_id AS node_1_parent_id, node_1.data AS node_1_data, diff --git a/doc/build/orm/session_basics.rst b/doc/build/orm/session_basics.rst index 3047fdc4fd..1755c62fe7 100644 --- a/doc/build/orm/session_basics.rst +++ b/doc/build/orm/session_basics.rst @@ -481,8 +481,8 @@ This means if we emit two separate queries, each for the same row, and get a mapped object back, the two queries will have returned the same Python object:: - >>> u1 = session.query(User).filter(id=5).first() - >>> u2 = session.query(User).filter(id=5).first() + >>> u1 = session.scalars(select(User).where(User.id == 5)).one() + >>> u2 = session.scalars(select(User).where(User.id == 5)).one() >>> u1 is u2 True @@ -522,7 +522,11 @@ ways to refresh its contents with new data from the current transaction: and indicates that it should return objects that are unconditionally re-populated from their contents in the database:: - u2 = session.query(User).populate_existing().filter(id=5).first() + u2 = session.scalars( + select(User) + .where(User.id == 5) + .execution_options(populate_existing=True) + ).one() .. @@ -972,7 +976,7 @@ E.g. 
**don't do this**:: def go(self): session = Session() try: - session.query(FooBar).update({"x": 5}) + session.execute(update(FooBar).values(x=5)) session.commit() except: session.rollback() @@ -982,7 +986,7 @@ E.g. **don't do this**:: def go(self): session = Session() try: - session.query(Widget).update({"q": 18}) + session.execute(update(Widget).values(q=18)) session.commit() except: session.rollback() @@ -1002,11 +1006,11 @@ transaction automatically:: class ThingOne: def go(self, session): - session.query(FooBar).update({"x": 5}) + session.execute(update(FooBar).values(x=5)) class ThingTwo: def go(self, session): - session.query(Widget).update({"q": 18}) + session.execute(update(Widget).values(q=18)) def run_my_program(): with Session() as session: @@ -1024,7 +1028,7 @@ Is the Session a cache? Yeee...no. It's somewhat used as a cache, in that it implements the :term:`identity map` pattern, and stores objects keyed to their primary key. However, it doesn't do any kind of query caching. This means, if you say -``session.query(Foo).filter_by(name='bar')``, even if ``Foo(name='bar')`` +``session.scalars(select(Foo).filter_by(name='bar'))``, even if ``Foo(name='bar')`` is right there, in the identity map, the session has no idea about that. It has to issue SQL to the database, get the rows back, and then when it sees the primary key in the row, *then* it can look in the local identity diff --git a/doc/build/orm/session_state_management.rst b/doc/build/orm/session_state_management.rst index 47b4fbe7fd..b3e15e7689 100644 --- a/doc/build/orm/session_state_management.rst +++ b/doc/build/orm/session_state_management.rst @@ -414,7 +414,7 @@ When we talk about expiration of data we are usually talking about an object that is in the :term:`persistent` state. 
For example, if we load an object as follows:: - user = session.query(User).filter_by(name='user1').first() + user = session.scalars(select(User).filter_by(name='user1').limit(1)).first() The above ``User`` object is persistent, and has a series of attributes present; if we were to look inside its ``__dict__``, we'd see that state diff --git a/doc/build/orm/session_transaction.rst b/doc/build/orm/session_transaction.rst index 465668c1b5..26a7491975 100644 --- a/doc/build/orm/session_transaction.rst +++ b/doc/build/orm/session_transaction.rst @@ -343,8 +343,8 @@ point at which the "begin" operation occurs. To suit this, the session = Session() session.begin() try: - item1 = session.query(Item).get(1) - item2 = session.query(Item).get(2) + item1 = session.get(Item, 1) + item2 = session.get(Item, 2) item1.foo = 'bar' item2.bar = 'foo' session.commit() @@ -357,8 +357,8 @@ The above pattern is more idiomatically invoked using a context manager:: Session = sessionmaker(bind=engine) session = Session() with session.begin(): - item1 = session.query(Item).get(1) - item2 = session.query(Item).get(2) + item1 = session.get(Item, 1) + item2 = session.get(Item, 2) item1.foo = 'bar' item2.bar = 'foo'
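The ``Session.begin()`` pattern in the final hunk can be sketched as a complete program; the ``Item`` model, engine, and values are assumptions added for illustration:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    foo = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

# seed a row; the begin() block commits on success, rolls back on error
with Session() as session, session.begin():
    session.add(Item(id=1, foo="bar"))

# "begin once" style: fetch and mutate inside one explicit transaction,
# with the commit emitted automatically when the block exits cleanly
with Session() as session, session.begin():
    item1 = session.get(Item, 1)
    item1.foo = "updated"
```
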