From: Mike Bayer Date: Mon, 16 Dec 2019 22:06:43 +0000 (-0500) Subject: introduce deferred lambdas X-Git-Tag: rel_1_4_0b1~242 X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=3dc9a4a2392d033f9d1bd79dd6b6ecea6281a61c;p=thirdparty%2Fsqlalchemy%2Fsqlalchemy.git introduce deferred lambdas The coercions system allows us to add in lambdas as arguments to Core and ORM elements without changing them at all. Allowing the lambda to produce a deterministic cache key, where we can also cheat and yank out literal parameters, means we can move towards having 90% of "baked" functionality in a clearer way right in Core / ORM. As a second step, we can have whole statements inside the lambda, and can then add generation with __add__(), so then we have 100% of "baked" functionality with full support of ad-hoc literal values. Adds some more short_selects tests for the moment for comparison. Other tweaks inside cache key generation as we're trying to approach a certain level of performance such that we can remove the use of "baked" from the loader strategies. While we have not yet closed #4639, the caching feature has been fully integrated as of b0cfa7379cf8513a821a3dbe3028c4965d9f85bd, so we will also add complete caching documentation here and close that issue as well. Closes: #4639 Fixes: #5380 Change-Id: If91f61527236fd4d7ae3cad1f24c38be921c90ba --- diff --git a/doc/build/changelog/migration_14.rst b/doc/build/changelog/migration_14.rst index bfd5be4814..0ea6faf35b 100644 --- a/doc/build/changelog/migration_14.rst +++ b/doc/build/changelog/migration_14.rst @@ -23,6 +23,212 @@ What's New in SQLAlchemy 1.4? Behavioral Changes - General ============================ +..
_change_4639: + +Transparent SQL Compilation Caching added to All DQL, DML Statements in Core, ORM +---------------------------------------------------------------------------------- + +One of the most broadly encompassing changes to ever land in a single +SQLAlchemy version, a many-month reorganization and refactoring of all querying +systems from the base of Core all the way through ORM now allows the +majority of Python computation involved in producing SQL strings and related +statement metadata from a user-constructed statement to be cached in memory, +such that subsequent invocations of an identical statement construct will use +35-60% fewer resources. + +This caching goes beyond the construction of the SQL string to also include the +construction of result fetching structures that link the SQL construct to the +result set, and in the ORM it includes the accommodation of ORM-enabled +attribute loaders, relationship eager loaders and other options, and object +construction routines that must be built up each time an ORM query seeks to run +and construct ORM objects from result sets. + +To introduce the general idea of the feature, consider the following code from +the :ref:`examples_performance` suite, which invokes +a very simple query "n" times, for a default value of n=10000. The +query returns only a single row, as the overhead we are looking to decrease +is that of **many small queries**.
The optimization is not as significant +for queries that return many rows:: + + session = Session(bind=engine) + for id_ in random.sample(ids, n): + result = session.query(Customer).filter(Customer.id == id_).one() + +This example in the 1.3 release of SQLAlchemy on a Dell XPS13 running Linux +completes as follows:: + + test_orm_query : (10000 iterations); total time 3.440652 sec + +In 1.4, the code above without modification completes:: + + test_orm_query : (10000 iterations); total time 2.367934 sec + +This first test indicates that regular ORM queries when using caching can run +over many iterations in the range of **30% faster**. + +"Baked Query" style construction now available for all Core and ORM Queries +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The "Baked Query" extension has been in SQLAlchemy for several years and +provides a caching system that is based on defining segments of SQL statements +within Python functions, so that the functions both serve as cache keys +(since they uniquely and persistently identify a specific line in the +source code) and also allow the construction of a statement +to be deferred so that it only needs to be invoked once, rather than every +time the query is rendered. The functionality of "Baked Query" is now a native +part of the new caching system, which is available by simply using Python +functions, typically lambda expressions, either inside of a statement, +or on the outside using the ``lambda_stmt()`` function that works just +like a Baked Query.
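To illustrate why a Python function can "uniquely and persistently identify a specific line in the source code", the following standalone sketch (plain Python, not SQLAlchemy internals; the ``build`` function and the SQL string are invented for illustration) shows that a lambda defined at a fixed source location yields the same code object on every run, while its closure still carries fresh values:

```python
def build(id_):
    # the lambda below is defined at one fixed line of source code
    return lambda: ("SELECT * FROM customer WHERE id = ?", id_)

f1, f2 = build(1), build(2)

# same definition site -> same code object, usable as a stable cache key
assert f1.__code__ is f2.__code__
# while each function still carries its own up-to-date closure value
assert f1()[1] == 1 and f2()[1] == 2
```

This is the property the lambda-based caching relies on: the code object identifies *where* the statement is built, while the closure carries the per-call literal values.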
+ +Making use of the newer 2.0 style of using ``select()`` and adding the use +of **optional** lambdas to defer the computation:: + + session = Session(bind=engine) + for id_ in random.sample(ids, n): + stmt = lambda_stmt(lambda: future_select(Customer)) + stmt += lambda s: s.where(Customer.id == id_) + session.execute(stmt).scalar_one() + +The code above completes:: + + test_orm_query_newstyle_w_lambdas : (10000 iterations); total time 1.247092 sec + +This test indicates that using the newer "select()" style of ORM querying, +in conjunction with a full "baked" style invocation that caches the entire +construction, can run over many iterations in the range of **60% faster**. +This performance is roughly the same as what the Baked Query extension +provides. The new approach effectively supersedes the Baked Query +extension. + +For comparison, a Baked Query looks like the following:: + + bakery = baked.bakery() + s = Session(bind=engine) + for id_ in random.sample(ids, n): + q = bakery(lambda s: s.query(Customer)) + q += lambda q: q.filter(Customer.id == bindparam("id")) + q(s).params(id=id_).one() + +The new API allows the same very fast "baked query" approach of building up a +statement with lambdas, but does not require any other syntactical changes from +regular statements. It also no longer requires that "bindparam()" be used for +literal values that may change; the "closure" of the Python function is scanned +on every call to extract Python literal values that should be turned into +parameters.
That is, if you create a particular select() object, make use of +the compiled cache feature, and pass the same select() object each time, the +SQL compilation would be cached. This feature was of limited use since +SQLAlchemy's programming paradigm is based on the continuous construction of +new SQL expression objects each time one is required. + +The new caching feature uses the same "compiled_cache"; however, instead of +using the statement object itself as the cache key, a separate tuple-oriented +cache key is generated which represents the complete structure of the +statement. Two SQL constructs that are composed in exactly the same way will +produce the same cache key, independent of the bound parameter values that are +bundled with the statement; these are collected separately from each statement +and are used when the cached SQL is executed. The ORM ``Query`` integrates by +producing a ``select()`` object from itself that is interpreted as an +ORM-enabled SELECT within the SQL compilation process that occurs beyond the +cache boundary. + +A general listing of architectural changes needed to support this feature: + +* The system by which arguments passed to SQL constructs are type-checked and + coerced into their desired form was rewritten from an ad-hoc and disorganized + system into the ``sqlalchemy.sql.roles`` and + ``sqlalchemy.sql.coercions`` modules which provide a type-based approach + to the task of composing SQL expression objects, error handling, coercion + of objects such as turning SELECT statements into subqueries, as well as + integrating with a new "plugin" system that allows SQL constructs to include + ORM functionality. + +* The system by which clause expression constructs are iterated and compared + from an object structure point of view was also + rewritten from one which was ad-hoc and inconsistent into a complete system + within the new ``sqlalchemy.sql.traversals`` module.
A test suite was added + which ensures that all SQL construction objects include fully consistent + comparison and iteration behavior. This work began with :ticket:`4336`. + +* The new iteration system naturally gave rise to the cache-key creation + system, which also uses a performance-optimized version of the + ``sqlalchemy.sql.traversals`` module to generate a deterministic cache key + for any SQL expression based on its structure. Two instances of a SQL + expression that represent the same SQL structure, such as ``select(table('x', + column('q'))).where(column('z') > 5)``, are guaranteed to produce the same + cache key, independent of the bound parameters, which for this statement would + be one parameter with the value "5". Two instances of a SQL expression + where any elements are different will produce different cache keys. When + the cache key is generated, the parameters are also collected which will be + used to formulate the final parameter list. This work was completed over + many merges and was overall related to :ticket:`4639`. + +* The mechanism by which statements such as ``select()`` generate expensive + collections and data members that are only used for SQL compilation, such + as the list of columns and their labels, was organized into a new + decoupled system called ``CompileState``. + +* All elements of queries that needed to be made compatible with the concept of + deterministic SQL compilation were updated, including an expansion of the + "postcompile" concept used to render individual parameters inside of "IN" + expressions first included in 1.3 as well as alterations to how dialects like + the SQL Server dialect render LIMIT / OFFSET expressions that are not + compatible with bound parameters.
+ +* The ORM ``Query`` object was fully refactored such that all of the intense + computation which would previously occur whenever methods of ``Query`` were + called, such as during the construction of the ``Query`` itself or when + methods such as ``filter()`` or ``join()`` were called, was completely reorganized + to take place within the ``CompileState`` architecture, meaning the ORM + process that generates a Core ``select()`` to render now takes place + **within** the SQL compilation process, beyond the caching boundary. More + detail on this change is at + :ref:`change_deferred_construction`. + +* The ``Query`` object was unified with the ``select()`` object, such that + these two objects now have cross-compatible internal state. The ``Query`` + can turn itself into a ``select()`` that generates ORM queries by copying its + ``__dict__`` into a new ``Select`` object. + +* The 2.0-style :class:`.Result` object as well as the "future" version of + :class:`_engine.Engine` were developed and integrated into Core, and the + ORM was later also integrated on top of :class:`.Result`. + +* The Core and ORM execution models were completely reworked to integrate the + new cache key system, and in particular the ORM ``Query`` was reworked such + that its execution model now produces a ``Select`` which is passed to + ``Session.execute()``, which then invokes the 2.0-style execution model that + allows the ``Select`` to be processed as an ORM query beyond the caching + boundary. + +* Other systems such as ``Query`` bulk updates and deletes, the horizontal + sharding extension, the Baked Query extension, and the dogpile caching + example were updated to integrate with the new execution model and a new + event hook :meth:`.SessionEvents.do_orm_execute` has been added. + +* Caching has been enabled via the :paramref:`.create_engine.query_cache_size` + parameter, new logging features were added, and the "lambda" argument + construction module was added. + +..
seealso:: + + :ref:`sql_caching` + +:ticket:`4639` +:ticket:`5380` +:ticket:`4645` +:ticket:`4808` +:ticket:`5004` + + .. _change_deferred_construction: @@ -35,20 +241,28 @@ of statement creation and compilation, where the compilation step would be cached, based on a cache key generated by the created statement object, which itself is newly created for each use. Towards this goal, much of the Python computation which occurs within the construction of statements, particularly -the ORM :class:`_query.Query`, is being moved to occur only when the statement is -invoked. This means that some of the error messages which can arise based on -arguments passed to the object will no longer be raised immediately, and -instead will occur only when the statement is invoked. +the ORM :class:`_query.Query`, is being moved to occur later, when the +statement is actually compiled, and additionally that it will only occur if the +compiled form of the statement is not already cached. This means that some of +the error messages which can arise based on arguments passed to the object will +no longer be raised immediately, and instead will occur only when the statement +is invoked and its compiled form is not yet cached. Error conditions which fall under this category include: * when a :class:`_selectable.CompoundSelect` is constructed (e.g. a UNION, EXCEPT, etc.) and the SELECT statements passed do not have the same number of columns, a - :class:`.CompileError` is now raised to this effect; previously, a + :class:`.CompileError` is now raised to this effect; previously, an :class:`.ArgumentError` would be raised immediately upon statement construction. -* To be continued... +* Various error conditions which may arise when calling upon :meth:`.Query.join` + will be evaluated at statement compilation time rather than when the method + is first called. + +.. seealso:: + + :ref:`change_4639` .. 
_change_4656: diff --git a/doc/build/changelog/unreleased_14/4639.rst b/doc/build/changelog/unreleased_14/4639.rst new file mode 100644 index 0000000000..51255fa22f --- /dev/null +++ b/doc/build/changelog/unreleased_14/4639.rst @@ -0,0 +1,25 @@ +.. change:: + :tags: feature, performance + :tickets: 4639 + + An all-encompassing reorganization and refactoring of Core and ORM + internals now allows all Core and ORM statements within the areas of + DQL (e.g. SELECTs) and DML (e.g. INSERT, UPDATE, DELETE) to have their + SQL compilation as well as the construction of result-fetching metadata + fully cached in most cases. This effectively provides a transparent + and generalized version of what the "Baked Query" extension has offered + for the ORM in past versions. The new feature can calculate the + cache key for any given SQL construction based on the string that + it would ultimately produce for a given dialect, allowing functions that + compose the equivalent select(), Query(), insert(), update() or delete() + object each time to have that statement cached after it's generated + the first time. + + The feature is enabled transparently but includes some new programming + paradigms that may be employed to make the caching even more efficient. + + .. seealso:: + + :ref:`change_4639` + + :ref:`sql_caching` diff --git a/doc/build/changelog/unreleased_14/5380.rst b/doc/build/changelog/unreleased_14/5380.rst new file mode 100644 index 0000000000..d1f7e02d18 --- /dev/null +++ b/doc/build/changelog/unreleased_14/5380.rst @@ -0,0 +1,24 @@ +..
change:: + :tags: feature, performance + :tickets: 5380 + + Along with the new transparent statement caching feature introduced as part + of :ticket:`4639`, a new feature intended to decrease the Python overhead + of creating statements is added, allowing lambdas to be used when + indicating arguments being passed to a statement object such as select(), + Query(), update(), etc., as well as allowing the construction of full + statements within lambdas in a similar manner as that of the "baked query" + system. The rationale of using lambdas is adapted from that of the "baked + query" approach which uses lambdas to encapsulate any amount of Python code + into a callable that only needs to be called when the statement is first + constructed into a string. The new feature however is more sophisticated + in that Python literal values that would be passed as parameters are + automatically extracted, so that there is no longer a need to use + bindparam() objects with such queries. Use of the feature is optional and + can be used to as small or as great a degree as is desired, while still + allowing statements to be fully cacheable. + + .. seealso:: + + :ref:`engine_lambda_caching` + diff --git a/doc/build/core/connections.rst b/doc/build/core/connections.rst index 976ac27e14..83ad86e25d 100644 --- a/doc/build/core/connections.rst +++ b/doc/build/core/connections.rst @@ -415,6 +415,478 @@ as the schema name is passed to these methods explicitly. .. versionadded:: 1.1 +.. _sql_caching: + + +SQL Compilation Caching +======================= + +.. versionadded:: 1.4 SQLAlchemy now has a transparent query caching system + that substantially lowers the Python computational overhead involved in + converting SQL statement constructs into SQL strings across both + Core and ORM. See the introduction at :ref:`change_4639`. + +SQLAlchemy includes a comprehensive caching system for the SQL compiler as well +as its ORM variants.
This caching system is transparent within the +:class:`.Engine` and ensures that the SQL compilation process for a given Core +or ORM SQL statement, as well as related computations which assemble +result-fetching mechanics for that statement, will only occur once for that +statement object and all others with the identical +structure, for the duration that the particular structure remains within the +engine's "compiled cache". A "statement object with the identical +structure" generally corresponds to a SQL statement that is +constructed within a function and is built each time that function runs:: + + def run_my_statement(connection, parameter): + stmt = select(table) + stmt = stmt.where(table.c.col == parameter) + stmt = stmt.order_by(table.c.id) + return connection.execute(stmt) + +The above statement will generate SQL resembling +``SELECT id, col FROM table WHERE col = :col ORDER BY id``, noting that +while the value of ``parameter`` is a plain Python object such as a string +or an integer, the string SQL form of the statement does not include this +value as it uses bound parameters. Subsequent invocations of the above +``run_my_statement()`` function will use a cached compilation construct +within the scope of the ``connection.execute()`` call for enhanced performance. + +.. note:: It is important to note that the SQL compilation cache is caching + the **SQL string that is passed to the database only**, and **not** the + results returned by a query. It is in no way a data cache and does not + impact the results returned for a particular SQL statement nor does it + imply any memory use linked to fetching of result rows. + +While SQLAlchemy has had a rudimentary statement cache since the early 1.x +series, and additionally has featured the "Baked Query" extension for the ORM, +both of these systems required a high degree of special API use in order for +the cache to be effective.
The new cache as of 1.4 is instead completely +automatic and requires no change in programming style to be effective. + +The cache is automatically used without any configuration changes and no +special steps are needed in order to enable it. The following sections +detail the configuration and advanced usage patterns for the cache. + + +Configuration +------------- + +The cache itself is a dictionary-like object called an ``LRUCache``, which is +an internal SQLAlchemy dictionary subclass that tracks the usage of particular +keys and features a periodic "pruning" step which removes the least recently +used items when the size of the cache reaches a certain threshold. The size +of this cache defaults to 500 and may be configured using the +:paramref:`_sa.create_engine.query_cache_size` parameter:: + + engine = create_engine("postgresql://scott:tiger@localhost/test", query_cache_size=1200) + +The cache can grow to 150% of the size given before +it is pruned back down to the target size. A cache of size 1200 above can therefore +grow to 1800 elements in size, at which point it will be pruned to 1200. + +The sizing of the cache is based on a single entry per unique SQL statement rendered, +per engine. SQL statements generated from both the Core and the ORM are +treated equally. DDL statements will usually not be cached. To determine +what the cache is doing, engine logging will include details about the +cache's behavior, described in the next section. + + +Estimating Cache Performance Using Logging +------------------------------------------ + +The above cache size of 1200 is actually fairly large. For small applications, +a size of 100 is likely sufficient. To estimate the optimal size of the cache, +assuming enough memory is present on the target host, the size of the cache +should be based on the number of unique SQL strings that may be rendered for the +target engine in use.
The most expedient way to see this is to use +SQL echoing, which is most directly enabled by using the +:paramref:`_sa.create_engine.echo` flag, or by using Python logging; see the +section :ref:`dbengine_logging` for background on logging configuration. + +As an example, we will examine the logging produced by the following program:: + + from sqlalchemy import Column + from sqlalchemy import create_engine + from sqlalchemy import ForeignKey + from sqlalchemy import Integer + from sqlalchemy import String + from sqlalchemy.ext.declarative import declarative_base + from sqlalchemy.orm import relationship + from sqlalchemy.orm import Session + + Base = declarative_base() + + + class A(Base): + __tablename__ = "a" + + id = Column(Integer, primary_key=True) + data = Column(String) + bs = relationship("B") + + + class B(Base): + __tablename__ = "b" + id = Column(Integer, primary_key=True) + a_id = Column(ForeignKey("a.id")) + data = Column(String) + + + e = create_engine("sqlite://", echo=True) + Base.metadata.create_all(e) + + s = Session(e) + + s.add_all( + [A(bs=[B(), B(), B()]), A(bs=[B(), B(), B()]), A(bs=[B(), B(), B()])] + ) + s.commit() + + for a_rec in s.query(A): + print(a_rec.bs) + +When run, each SQL statement that's logged will include a bracketed +cache statistics badge to the left of the parameters passed. The four +types of message we may see are summarized as follows: + +* ``[raw sql]`` - the driver or the end-user emitted raw SQL using + :meth:`.Connection.exec_driver_sql` - caching does not apply + +* ``[no key]`` - the statement object is a DDL statement that is not cached, or + the statement object contains uncacheable elements such as user-defined + constructs or arbitrarily large VALUES clauses. + +* ``[generated in Xs]`` - the statement was a **cache miss** and had to be + compiled, then stored in the cache. It took X seconds to produce the + compiled construct. The number X will be in the small fractional seconds.
+ +* ``[cached since Xs ago]`` - the statement was a **cache hit** and did not + have to be recompiled. The statement has been stored in the cache since + X seconds ago. The number X will be proportional to how long the application + has been running and how long the statement has been cached, so for example + would be 86400 for a 24 hour period. + +Each badge is described in more detail below. + +The first statements we see for the above program will be the SQLite dialect +checking for the existence of the "a" and "b" tables:: + + INFO sqlalchemy.engine.Engine PRAGMA temp.table_info("a") + INFO sqlalchemy.engine.Engine [raw sql] () + INFO sqlalchemy.engine.Engine PRAGMA main.table_info("b") + INFO sqlalchemy.engine.Engine [raw sql] () + +For the above two SQLite PRAGMA statements, the badge reads ``[raw sql]``, +which indicates the driver is sending a Python string directly to the +database using :meth:`.Connection.exec_driver_sql`. Caching does not apply +to such statements because they already exist in string form, and there +is nothing known about what kinds of result rows will be returned since +SQLAlchemy does not parse SQL strings ahead of time. + +The next statements we see are the CREATE TABLE statements:: + + INFO sqlalchemy.engine.Engine + CREATE TABLE a ( + id INTEGER NOT NULL, + data VARCHAR, + PRIMARY KEY (id) + ) + + INFO sqlalchemy.engine.Engine [no key 0.00007s] () + INFO sqlalchemy.engine.Engine + CREATE TABLE b ( + id INTEGER NOT NULL, + a_id INTEGER, + data VARCHAR, + PRIMARY KEY (id), + FOREIGN KEY(a_id) REFERENCES a (id) + ) + + INFO sqlalchemy.engine.Engine [no key 0.00006s] () + +For each of these statements, the badge reads ``[no key 0.00006s]``. This +indicates that for these two particular statements, caching did not occur because +the DDL-oriented :class:`_schema.CreateTable` construct did not produce a +cache key.
DDL constructs generally do not participate in caching because +they are not typically subject to being repeated a second time and DDL +is also a database configuration step where performance is not as critical. + +The ``[no key]`` badge is important for one other reason, as it can be produced +for SQL statements that are cacheable except for some particular sub-construct +that is not currently cacheable. Examples of this include custom user-defined +SQL elements that don't define caching parameters, as well as some constructs +that generate arbitrarily long and non-reproducible SQL strings, the main +examples being the :class:`.Values` construct as well as when using "multivalued +inserts" with the :meth:`.Insert.values` method. + +So far our cache is still empty. The next statements will be cached, however; a +segment of the log looks like:: + + + INFO sqlalchemy.engine.Engine INSERT INTO a (data) VALUES (?) + INFO sqlalchemy.engine.Engine [generated in 0.00011s] (None,) + INFO sqlalchemy.engine.Engine INSERT INTO a (data) VALUES (?) + INFO sqlalchemy.engine.Engine [cached since 0.0003533s ago] (None,) + INFO sqlalchemy.engine.Engine INSERT INTO a (data) VALUES (?) + INFO sqlalchemy.engine.Engine [cached since 0.0005326s ago] (None,) + INFO sqlalchemy.engine.Engine INSERT INTO b (a_id, data) VALUES (?, ?) + INFO sqlalchemy.engine.Engine [generated in 0.00010s] (1, None) + INFO sqlalchemy.engine.Engine INSERT INTO b (a_id, data) VALUES (?, ?) + INFO sqlalchemy.engine.Engine [cached since 0.0003232s ago] (1, None) + INFO sqlalchemy.engine.Engine INSERT INTO b (a_id, data) VALUES (?, ?) + INFO sqlalchemy.engine.Engine [cached since 0.0004887s ago] (1, None) + +Above, we see essentially two unique SQL strings; ``"INSERT INTO a (data) VALUES (?)"`` +and ``"INSERT INTO b (a_id, data) VALUES (?, ?)"``.
Since SQLAlchemy uses +bound parameters for all literal values, the actual SQL string stays the same +even though these statements are repeated many times for different objects; +only the parameters differ. + +.. note:: The above two statements are generated by the ORM unit of work + process, and are in fact cached in a separate cache that is + local to each mapper. However, the mechanics and terminology are the same. + The section :ref:`engine_compiled_cache` below will describe how user-facing + code can also use an alternate caching container on a per-statement basis. + +The caching badge we see for the first occurrence of each of these two +statements is ``[generated in 0.00011s]``. This indicates that the statement +was **not in the cache, was compiled into a string in .00011s and was then +cached**. When we see the ``[generated]`` badge, we know that this means +there was a **cache miss**. This is to be expected for the first occurrence of +a particular statement. However, if lots of new ``[generated]`` badges are +observed for a long-running application that is generally using the same series +of SQL statements over and over, this may be a sign that the +:paramref:`_sa.create_engine.query_cache_size` parameter is too small. When a +statement that was cached is then evicted from the cache due to the LRU +cache pruning lesser-used items, it will display the ``[generated]`` badge +when it is next used. + +The caching badge that we then see for the subsequent occurrences of each of +these two statements looks like ``[cached since 0.0003533s ago]``. This +indicates that the statement **was found in the cache, and was originally +placed into the cache .0003533 seconds ago**.
It is important to note that +while the ``[generated]`` and ``[cached since]`` badges refer to a number of +seconds, they mean different things; in the case of ``[generated]``, the number +is a rough timing of how long it took to compile the statement, and will be an +extremely small amount of time. In the case of ``[cached since]``, this is +the total time that a statement has been present in the cache. For an +application that's been running for six hours, this number may read ``[cached +since 21600 seconds ago]``, and that's a good thing. Seeing high numbers for +"cached since" is an indication that these statements have not been subject to +cache misses for a long time. Statements that frequently have a low number of +"cached since" even if the application has been running a long time may +indicate these statements are too frequently subject to cache misses, and that +the +:paramref:`_sa.create_engine.query_cache_size` may need to be increased. + +Our example program then performs some SELECTs where we can see the same +pattern of "generated" then "cached", for the SELECT of the "a" table as well +as for subsequent lazy loads of the "b" table:: + + INFO sqlalchemy.engine.Engine SELECT a.id AS a_id, a.data AS a_data + FROM a + INFO sqlalchemy.engine.Engine [generated in 0.00009s] () + INFO sqlalchemy.engine.Engine SELECT b.id AS b_id, b.a_id AS b_a_id, b.data AS b_data + FROM b + WHERE ? = b.a_id + INFO sqlalchemy.engine.Engine [generated in 0.00010s] (1,) + INFO sqlalchemy.engine.Engine SELECT b.id AS b_id, b.a_id AS b_a_id, b.data AS b_data + FROM b + WHERE ? = b.a_id + INFO sqlalchemy.engine.Engine [cached since 0.0005922s ago] (2,) + INFO sqlalchemy.engine.Engine SELECT b.id AS b_id, b.a_id AS b_a_id, b.data AS b_data + FROM b + WHERE ? = b.a_id + +From our above program, a full run shows a total of four distinct SQL strings +being cached, which indicates that a cache size of **four** would be sufficient.
This is +obviously an extremely small size, and the default size of 500 can simply be left +in place. + +How much memory does the cache use? +----------------------------------- + +The previous section detailed some techniques to check if the +:paramref:`_sa.create_engine.query_cache_size` needs to be bigger. How do we know +if the cache is not too large? The reason we may want to set +:paramref:`_sa.create_engine.query_cache_size` to not be higher than a certain +number would be because we have an application that may make use of a very large +number of different statements, such as an application that is building queries +on the fly from a search UX, and we don't want our host to run out of memory +if, for example, a hundred thousand different queries were run in the past 24 hours +and they were all cached. + +It is extremely difficult to measure how much memory is occupied by Python +data structures, however using a process to measure growth in memory via ``top`` as a +successive series of 250 new statements are added to the cache suggests that a +moderate Core statement takes up about 12K while a small ORM statement takes about +20K, including result-fetching structures which for the ORM will be much greater. + + +.. _engine_compiled_cache: + +Disabling or using an alternate dictionary to cache some (or all) statements +----------------------------------------------------------------------------- + +The internal cache used is known as ``LRUCache``, but this is mostly just +a dictionary. Any dictionary may be used as a cache for any series of +statements by using the :paramref:`.Connection.execution_options.compiled_cache` +option as an execution option. Execution options may be set on a statement, +on an :class:`_engine.Engine` or :class:`_engine.Connection`, as well as +when using the ORM :meth:`_orm.Session.execute` method for SQLAlchemy-2.0 +style invocations.
For example, to run a series of SQL statements and have
+them cached in a particular dictionary::
+
+    my_cache = {}
+    with engine.connect().execution_options(compiled_cache=my_cache) as conn:
+        conn.execute(table.select())
+
+The SQLAlchemy ORM uses the above technique to hold onto per-mapper caches
+within the unit of work "flush" process that are separate from the default
+cache configured on the :class:`_engine.Engine`, as well as for some
+relationship loader queries.
+
+The cache can also be disabled with this argument by sending a value of
+``None``::
+
+    # disable caching for this connection
+    with engine.connect().execution_options(compiled_cache=None) as conn:
+        conn.execute(table.select())
+
+.. _engine_lambda_caching:
+
+Using Lambdas to add significant speed gains to statement production
+--------------------------------------------------------------------
+
+.. warning:: This technique is generally non-essential except in very
+   performance-intensive scenarios, and is intended for experienced Python
+   programmers. While fairly straightforward, it involves metaprogramming
+   concepts that are not appropriate for novice Python developers. The lambda
+   approach can be applied at a later time to existing code with a minimal
+   amount of effort.
+
+The caching system has its roots in the SQLAlchemy :ref:`"baked query"
+<baked_toplevel>` extension, which made novel use of Python lambdas in order to
+produce SQL statements that were intrinsically cacheable, while at the same
+time decreasing not just the overhead involved to compile the statement into
+SQL, but also the overhead in constructing the statement object from a Python
+perspective. The new caching in SQLAlchemy by default does not substantially
+optimize the construction of SQL constructs.
This refers to the Python
+overhead taken up to construct the statement object itself before it is
+compiled or executed, such as the :class:`_sql.Select` object used in the
+example below::
+
+    def run_my_statement(connection, parameter):
+        stmt = select(table)
+        stmt = stmt.where(table.c.col == parameter)
+        stmt = stmt.order_by(table.c.id)
+
+        return connection.execute(stmt)
+
+Above, in order to construct ``stmt``, we see three Python functions or methods
+``select()``, ``.where()`` and ``.order_by()`` being invoked directly, and
+additionally there is a Python method invoked when we construct ``table.c.col
+== parameter``, as the expression language overrides the ``__eq__()`` method to
+produce a SQL construct. Within each of these calls is a series of argument
+checking and internal construction logic that makes use of many more Python
+function calls. With intense production of thousands of statement objects,
+these function calls can add up. Using the recipe for profiling at
+:ref:`faq_code_profiling`, the above Python code within the scope of the
+``select()`` call down to the ``.order_by()`` call uses 73 Python function
+calls to produce.
+
+Additionally, statement caching requires that a cache key be generated against
+the above statement, which must be composed of all elements within the
+statement that uniquely identify the SQL that it would produce. Measuring
+this process for the above statement takes another 40 Python function calls.
+
+In order to ensure the full performance gains of the prior "baked query"
+extension are still available, the "lambda:" system used by baked queries has
+been adapted into a more capable and easier-to-use system as an intrinsic part
+of the SQLAlchemy Core expression language (which by extension then includes
+ORM queries, which as of SQLAlchemy 1.4 using 2.0-style APIs may also be
+invoked directly from SQLAlchemy Core expression objects).
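Figures like the function-call counts quoted above can be gathered with the standard library profiler. The following is a minimal sketch of the technique; the workload profiled here is an arbitrary stand-in rather than a SQLAlchemy construct, and the full recipe at :ref:`faq_code_profiling` is more elaborate:

```python
import cProfile
import io
import pstats


def count_calls(fn):
    """Return the number of function calls made while running fn()."""
    profiler = cProfile.Profile()
    profiler.enable()
    fn()
    profiler.disable()
    stats = pstats.Stats(profiler, stream=io.StringIO())
    return stats.total_calls


# profile an arbitrary construction-style workload; with SQLAlchemy
# available, the same wrapper can profile building a select() construct
calls = count_calls(lambda: [str(i) for i in range(10)])
print(calls)
```

Exact numbers vary by Python version and what the profiled block does, which is why the counts in the text above are best read as relative comparisons.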
We can
+adapt our statement above to be built using "lambdas" by making use of the
+:func:`_sql.lambda_stmt` element. Using this approach, we indicate that the
+:func:`_sql.select` construct should be returned by a lambda. We can then add
+new criteria to the statement by composing further lambdas onto the object, in
+a manner similar to how "baked queries" worked::
+
+    from sqlalchemy import lambda_stmt
+
+    def run_my_statement(connection, parameter):
+        stmt = lambda_stmt(lambda: select(table))
+        stmt += lambda s: s.where(table.c.col == parameter)
+        stmt += lambda s: s.order_by(table.c.id)
+
+        return connection.execute(stmt)
+
+    result = run_my_statement(some_connection, "some parameter")
+
+The above code produces a :class:`.StatementLambdaElement`, which behaves like
+a Core SQL construct but defers the construction of the statement in most
+cases until it is needed by the compiler. If the statement is already cached,
+the lambdas will not be called.
+
+The cache key is based on the **Python source code location of each lambda
+itself**, which in the Python interpreter is essentially the ``__code__``
+element of the Python function. This means that the lambda approach should
+only be used inside of a function where the lambdas themselves will be the
+**same lambdas each time, from a Python source code perspective**.
+
+The execution process for the above lambda will **extract literal parameters**
+from the statement each time, without needing to actually run the lambdas. In
+the above example, the variable ``parameter`` is used within the lambda to
+generate the WHERE clause of the statement; while the lambda itself will not
+actually be re-run, the current value of ``parameter`` will be tracked, and
+that value will be used within the statement parameters at execution time.
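The mechanics described above can be illustrated in plain Python (a simplified sketch for illustration only, not SQLAlchemy's actual implementation): a lambda's ``__code__`` object is stable across invocations from the same source location and can serve as a cache key, while its closure cells carry the current parameter values, which can be read out without the lambda being re-run:

```python
analysis_cache = {}


def analyze(fn):
    """Key the cached 'analysis' on the lambda's code object, while
    reading fresh parameter values out of the closure on every call."""
    key = fn.__code__                  # same source location -> same code object
    if key not in analysis_cache:
        analysis_cache[key] = fn()     # the lambda runs only the first time
    closure_values = tuple(
        cell.cell_contents for cell in (fn.__closure__ or ())
    )
    return analysis_cache[key], closure_values


def make_stmt(parameter):
    # hypothetical statement "template" paired with a closed-over parameter
    return analyze(lambda: ("SELECT ... WHERE col = ?", parameter))


first = make_stmt("a")
second = make_stmt("b")

# the template is cached from the first run, but the current
# parameter value is still extracted from the closure
print(second[0] == first[0])  # -> True
print(second[1])              # -> ('b',)
```

SQLAlchemy's real implementation performs a much deeper analysis of the code object than this sketch, but the division of labor is the same: the code object identifies the SQL structure, and the closure supplies the per-invocation values.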
This is a feature that was not possible with the
+"baked query" extension and involves the use of up-front analysis of the
+incoming ``__code__`` object to determine how parameters can be extracted from
+future lambdas against that same code object.
+
+More simply, this means it's safe for the lambda statement
+to use arbitrary literal parameters, which don't modify the structure
+of the statement, on each invocation::
+
+    def run_my_statement(connection, parameter):
+        stmt = lambda_stmt(lambda: select(table))
+        stmt += lambda s: s.where(table.c.col == parameter)
+        stmt += lambda s: s.order_by(table.c.id)
+
+        return connection.execute(stmt)
+
+However, it's not safe for an individual lambda to modify the SQL structure
+of the statement across calls::
+
+    # incorrect example
+    def run_my_statement(connection, parameter, add_criteria=False):
+        stmt = lambda_stmt(lambda: select(table))
+
+        # will not be cached correctly, as the SQL structure produced
+        # by the lambda changes based on add_criteria
+        stmt += lambda s: (
+            s.where(table.c.col == parameter).distinct()
+            if add_criteria
+            else s.where(table.c.col == parameter)
+        )
+
+        stmt += lambda s: s.order_by(table.c.id)
+
+        return connection.execute(stmt)
+
+The lambda statements indicated above will invoke all of the lambdas the first
+time they are constructed; subsequent to that, the lambdas will not be invoked.
+On these subsequent runs, a lambda construct will use far fewer Python function
+calls both to construct the otherwise-uncached object and to generate the
+cache key. The above statement using lambdas takes only
+41 Python function calls to generate the whole structure as well as to produce
+the cache key, including the extraction of the bound parameters. This is
+compared to a total of about 115 Python function calls for the non-lambda
+version.
+
+For a series of examples of "lambda" caching with performance comparisons,
+see the "short_selects" test suite within the :ref:`examples_performance`
+performance example.
+
..
_engine_disposal: Engine Disposal diff --git a/doc/build/core/sqlelement.rst b/doc/build/core/sqlelement.rst index 46cda7bf0d..3e2c1d7fbf 100644 --- a/doc/build/core/sqlelement.rst +++ b/doc/build/core/sqlelement.rst @@ -48,6 +48,8 @@ is placed in the FROM clause of a SELECT statement. .. autofunction:: label +.. autofunction:: lambda_stmt + .. autofunction:: literal .. autofunction:: literal_column @@ -132,12 +134,18 @@ is placed in the FROM clause of a SELECT statement. .. autoclass:: Label :members: +.. autoclass:: LambdaElement + :members: + .. autoclass:: sqlalchemy.sql.elements.Null :members: .. autoclass:: Over :members: +.. autoclass:: StatementLambdaElement + :members: + .. autoclass:: TextClause :members: diff --git a/doc/build/faq/performance.rst b/doc/build/faq/performance.rst index f636d7cf1a..ff02e7cc95 100644 --- a/doc/build/faq/performance.rst +++ b/doc/build/faq/performance.rst @@ -74,6 +74,8 @@ point around when a statement is executed. We attach a timer onto the connection using the :class:`._ConnectionRecord.info` dictionary; we use a stack here for the occasional case where the cursor execute events may be nested. +.. _faq_code_profiling: + Code Profiling ^^^^^^^^^^^^^^ diff --git a/doc/build/orm/extensions/baked.rst b/doc/build/orm/extensions/baked.rst index 72479e64d5..9ff3432392 100644 --- a/doc/build/orm/extensions/baked.rst +++ b/doc/build/orm/extensions/baked.rst @@ -22,8 +22,12 @@ the caching of the SQL calls and result sets themselves is available in .. deprecated:: 1.4 SQLAlchemy 1.4 and 2.0 feature an all-new direct query caching system that removes the need for the :class:`.BakedQuery` system. - Caching is now built in to all Core and ORM queries using the - :paramref:`_engine.create_engine.query_cache_size` parameter. + Caching is now transparently active for all Core and ORM queries with no + action taken by the user, using the system described at :ref:`sql_caching`. 
+ For background on using lambda-style construction for cacheable Core and ORM + SQL constructs, which is now an optional technique to provide additional + performance gains, see the section :ref:`engine_lambda_caching`. + .. versionadded:: 1.0.0 diff --git a/examples/performance/short_selects.py b/examples/performance/short_selects.py index 64d9b05516..ff9156360b 100644 --- a/examples/performance/short_selects.py +++ b/examples/performance/short_selects.py @@ -16,6 +16,7 @@ from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.future import select as future_select from sqlalchemy.orm import deferred from sqlalchemy.orm import Session +from sqlalchemy.sql import lambdas from . import Profiler @@ -65,76 +66,67 @@ def setup_database(dburl, echo, num): @Profiler.profile -def test_orm_query(n): - """test a straight ORM query of the full entity.""" +def test_orm_query_classic_style(n): + """classic ORM query of the full entity.""" session = Session(bind=engine) for id_ in random.sample(ids, n): - # new style - # stmt = future_select(Customer).where(Customer.id == id_) - # session.execute(stmt).scalars().unique().one() session.query(Customer).filter(Customer.id == id_).one() @Profiler.profile -def test_orm_query_newstyle(n): - """test a straight ORM query of the full entity.""" - - # the newstyle query is faster for the following reasons: - # 1. it uses LABEL_STYLE_DISAMBIGUATE_ONLY, which saves on a huge amount - # of label generation and compilation calls - # 2. it does not use the Query @_assertions decorators. - - # however, both test_orm_query and test_orm_query_newstyle are still - # 25-30% slower than the full blown Query version in 1.3.x and this - # continues to be concerning. 
+def test_orm_query_new_style(n): + """new style ORM select() of the full entity.""" session = Session(bind=engine) for id_ in random.sample(ids, n): stmt = future_select(Customer).where(Customer.id == id_) - session.execute(stmt).scalars().unique().one() + session.execute(stmt).scalar_one() @Profiler.profile -def test_orm_query_cols_only(n): - """test an ORM query of only the entity columns.""" +def test_orm_query_new_style_using_embedded_lambdas(n): + """new style ORM select() of the full entity w/ embedded lambdas.""" session = Session(bind=engine) for id_ in random.sample(ids, n): - # new style - # stmt = future_select( - # Customer.id, Customer.name, Customer.description - # ).filter(Customer.id == id_) - # session.execute(stmt).scalars().unique().one() - session.query(Customer.id, Customer.name, Customer.description).filter( - Customer.id == id_ - ).one() + stmt = future_select(lambda: Customer).where( + lambda: Customer.id == id_ + ) + session.execute(stmt).scalar_one() -cache = {} +@Profiler.profile +def test_orm_query_new_style_using_external_lambdas(n): + """new style ORM select() of the full entity w/ external lambdas.""" + + session = Session(bind=engine) + for id_ in random.sample(ids, n): + + stmt = lambdas.lambda_stmt(lambda: future_select(Customer)) + stmt += lambda s: s.where(Customer.id == id_) + session.execute(stmt).scalar_one() @Profiler.profile -def test_cached_orm_query(n): - """test new style cached queries of the full entity.""" - s = Session(bind=engine) +def test_orm_query_classic_style_cols_only(n): + """classic ORM query against columns""" + session = Session(bind=engine) for id_ in random.sample(ids, n): - # this runs significantly faster - stmt = future_select(Customer).where(Customer.id == id_) - # stmt = s.query(Customer).filter(Customer.id == id_) - s.execute(stmt, execution_options={"compiled_cache": cache}).one() + session.query(Customer.id, Customer.name, Customer.description).filter( + Customer.id == id_ + ).one() 
@Profiler.profile -def test_cached_orm_query_cols_only(n): - """test new style cached queries of the full entity.""" +def test_orm_query_new_style_ext_lambdas_cols_only(n): + """new style ORM query w/ external lambdas against columns.""" s = Session(bind=engine) for id_ in random.sample(ids, n): - stmt = future_select( - Customer.id, Customer.name, Customer.description - ).filter(Customer.id == id_) - # stmt = s.query( - # Customer.id, Customer.name, Customer.description - # ).filter(Customer.id == id_) - s.execute(stmt, execution_options={"compiled_cache": cache}).one() + stmt = lambdas.lambda_stmt( + lambda: future_select( + Customer.id, Customer.name, Customer.description + ) + ) + (lambda s: s.filter(Customer.id == id_)) + s.execute(stmt).one() @Profiler.profile @@ -212,15 +204,5 @@ def test_core_reuse_stmt_compiled_cache(n): tuple(row) -@Profiler.profile -def test_core_just_statement_construct_plus_cache_key(n): - for i in range(n): - stmt = future_select(Customer.__table__).where( - Customer.id == bindparam("id") - ) - - stmt._generate_cache_key() - - if __name__ == "__main__": Profiler.main() diff --git a/lib/sqlalchemy/__init__.py b/lib/sqlalchemy/__init__.py index d52393d0b1..3a244a95f8 100644 --- a/lib/sqlalchemy/__init__.py +++ b/lib/sqlalchemy/__init__.py @@ -54,6 +54,7 @@ from .sql import insert # noqa from .sql import intersect # noqa from .sql import intersect_all # noqa from .sql import join # noqa +from .sql import lambda_stmt # noqa from .sql import lateral # noqa from .sql import literal # noqa from .sql import literal_column # noqa diff --git a/lib/sqlalchemy/engine/base.py b/lib/sqlalchemy/engine/base.py index ed4133dbbb..9ac61fe127 100644 --- a/lib/sqlalchemy/engine/base.py +++ b/lib/sqlalchemy/engine/base.py @@ -1046,6 +1046,7 @@ class Connection(Connectable): distilled_parameters, _EMPTY_EXECUTION_OPTS, ) + try: meth = object_._execute_on_connection except AttributeError as err: diff --git a/lib/sqlalchemy/engine/create.py 
b/lib/sqlalchemy/engine/create.py index cc138412b1..c199c21e07 100644 --- a/lib/sqlalchemy/engine/create.py +++ b/lib/sqlalchemy/engine/create.py @@ -459,8 +459,7 @@ def create_engine(url, **kwargs): .. seealso:: - ``engine_caching`` - TODO: this will be an upcoming section describing - the SQL caching system. + :ref:`sql_caching` .. versionadded:: 1.4 diff --git a/lib/sqlalchemy/engine/default.py b/lib/sqlalchemy/engine/default.py index 51bff223c9..f1fc505acf 100644 --- a/lib/sqlalchemy/engine/default.py +++ b/lib/sqlalchemy/engine/default.py @@ -1031,7 +1031,7 @@ class DefaultExecutionContext(interfaces.ExecutionContext): if self.compiled.cache_key is None: return "no key %.5fs" % (now - self.compiled._gen_time,) elif self.cache_hit: - return "cached for %.4gs" % (now - self.compiled._gen_time,) + return "cached since %.4gs ago" % (now - self.compiled._gen_time,) else: return "generated in %.5fs" % (now - self.compiled._gen_time,) diff --git a/lib/sqlalchemy/engine/result.py b/lib/sqlalchemy/engine/result.py index f75cba57db..7df17cf22d 100644 --- a/lib/sqlalchemy/engine/result.py +++ b/lib/sqlalchemy/engine/result.py @@ -963,7 +963,10 @@ class Result(InPlaceGenerative): else: return None - make_row = self._row_getter + if scalar and self._source_supports_scalars: + make_row = None + else: + make_row = self._row_getter row = make_row(row) if make_row else row @@ -1016,7 +1019,7 @@ class Result(InPlaceGenerative): if post_creational_filter: row = post_creational_filter(row) - if scalar and row: + if scalar and make_row: return row[0] else: return row diff --git a/lib/sqlalchemy/future/selectable.py b/lib/sqlalchemy/future/selectable.py index 473242bf83..9d0ae7c89e 100644 --- a/lib/sqlalchemy/future/selectable.py +++ b/lib/sqlalchemy/future/selectable.py @@ -124,6 +124,8 @@ class Select(_LegacySelect): target = coercions.expect( roles.JoinTargetRole, target, apply_propagate_attrs=self ) + if onclause is not None: + onclause = coercions.expect(roles.OnClauseRole, 
onclause) self._setup_joins += ( (target, onclause, None, {"isouter": isouter, "full": full}), ) diff --git a/lib/sqlalchemy/orm/attributes.py b/lib/sqlalchemy/orm/attributes.py index bf07061c68..6dd95a5a90 100644 --- a/lib/sqlalchemy/orm/attributes.py +++ b/lib/sqlalchemy/orm/attributes.py @@ -59,6 +59,8 @@ class QueryableAttribute( interfaces.InspectionAttr, interfaces.PropComparator, roles.JoinTargetRole, + roles.OnClauseRole, + sql_base.Immutable, sql_base.MemoizedHasCacheKey, ): """Base class for :term:`descriptor` objects that intercept diff --git a/lib/sqlalchemy/orm/context.py b/lib/sqlalchemy/orm/context.py index 3a0cce609e..09163d4e99 100644 --- a/lib/sqlalchemy/orm/context.py +++ b/lib/sqlalchemy/orm/context.py @@ -4,7 +4,6 @@ # # This module is part of SQLAlchemy and is released under # the MIT License: http://www.opensource.org/licenses/mit-license.php - from . import attributes from . import interfaces from . import loading @@ -664,10 +663,13 @@ class ORMSelectCompileState(ORMCompileState, SelectState): self._aliased_generations = {} self._polymorphic_adapters = {} + compile_options = cls.default_compile_options.safe_merge( + query.compile_options + ) # legacy: only for query.with_polymorphic() - if query.compile_options._with_polymorphic_adapt_map: + if compile_options._with_polymorphic_adapt_map: self._with_polymorphic_adapt_map = dict( - query.compile_options._with_polymorphic_adapt_map + compile_options._with_polymorphic_adapt_map ) self._setup_with_polymorphics() @@ -1065,6 +1067,10 @@ class ORMSelectCompileState(ORMCompileState, SelectState): # maybe? 
self._reset_joinpoint() + right = inspect(right) + if onclause is not None: + onclause = inspect(onclause) + if onclause is None and isinstance( right, interfaces.PropComparator ): @@ -1084,23 +1090,23 @@ class ORMSelectCompileState(ORMCompileState, SelectState): onclause = right right = None elif "parententity" in right._annotations: - right = right._annotations["parententity"].entity + right = right._annotations["parententity"] if onclause is None: - r_info = inspect(right) - if not r_info.is_selectable and not hasattr(r_info, "mapper"): + if not right.is_selectable and not hasattr(right, "mapper"): raise sa_exc.ArgumentError( "Expected mapped entity or " "selectable/table as join target" ) - if isinstance(onclause, interfaces.PropComparator): - of_type = getattr(onclause, "_of_type", None) - else: - of_type = None + + of_type = None if isinstance(onclause, interfaces.PropComparator): # descriptor/property given (or determined); this tells us # explicitly what the expected "left" side of the join is. 
+ + of_type = getattr(onclause, "_of_type", None) + if right is None: if of_type: right = of_type @@ -1164,6 +1170,14 @@ class ORMSelectCompileState(ORMCompileState, SelectState): full = flags["full"] aliased_generation = flags["aliased_generation"] + # do a quick inspect to accommodate for a lambda + if right is not None and not isinstance(right, util.string_types): + right = inspect(right) + if onclause is not None and not isinstance( + onclause, util.string_types + ): + onclause = inspect(onclause) + # legacy vvvvvvvvvvvvvvvvvvvvvvvvvv if not from_joinpoint: self._reset_joinpoint() @@ -1190,11 +1204,10 @@ class ORMSelectCompileState(ORMCompileState, SelectState): onclause = right right = None elif "parententity" in right._annotations: - right = right._annotations["parententity"].entity + right = right._annotations["parententity"] if onclause is None: - r_info = inspect(right) - if not r_info.is_selectable and not hasattr(r_info, "mapper"): + if not right.is_selectable and not hasattr(right, "mapper"): raise sa_exc.ArgumentError( "Expected mapped entity or " "selectable/table as join target" @@ -1379,7 +1392,7 @@ class ORMSelectCompileState(ORMCompileState, SelectState): self.from_clauses = self.from_clauses + [ orm_join( - left_clause, right, onclause, isouter=outerjoin, full=full + left_clause, r_info, onclause, isouter=outerjoin, full=full ) ] @@ -1964,6 +1977,13 @@ class _QueryEntity(object): @classmethod def to_compile_state(cls, compile_state, entities): for entity in entities: + if entity._is_lambda_element: + if entity._is_sequence: + cls.to_compile_state(compile_state, entity._resolved) + continue + else: + entity = entity._resolved + if entity.is_clause_element: if entity.is_selectable: if "parententity" in entity._annotations: diff --git a/lib/sqlalchemy/orm/query.py b/lib/sqlalchemy/orm/query.py index 336b7d9aa8..1ca65c7335 100644 --- a/lib/sqlalchemy/orm/query.py +++ b/lib/sqlalchemy/orm/query.py @@ -20,6 +20,7 @@ database to return iterable result 
sets. """ import itertools import operator +import types from . import attributes from . import exc as orm_exc @@ -2229,7 +2230,8 @@ class Query( # non legacy argument form _props = [(target,)] elif not legacy and isinstance( - target, (expression.Selectable, type, AliasedClass,) + target, + (expression.Selectable, type, AliasedClass, types.FunctionType), ): # non legacy argument form _props = [(target, onclause)] @@ -2284,7 +2286,13 @@ class Query( legacy=True, apply_propagate_attrs=self, ), - prop[1] if len(prop) == 2 else None, + ( + coercions.expect(roles.OnClauseRole, prop[1]) + if not isinstance(prop[1], str) + else prop[1] + ) + if len(prop) == 2 + else None, None, { "isouter": isouter, diff --git a/lib/sqlalchemy/orm/strategies.py b/lib/sqlalchemy/orm/strategies.py index 5f039aff71..53cc99ccdd 100644 --- a/lib/sqlalchemy/orm/strategies.py +++ b/lib/sqlalchemy/orm/strategies.py @@ -1524,6 +1524,16 @@ class SubqueryLoader(PostLoader): # orig_compile_state = compile_state_cls.create_for_statement( # orig_query, None) + if orig_query._is_lambda_element: + util.warn( + 'subqueryloader for "%s" must invoke lambda callable at %r in ' + "order to produce a new query, decreasing the efficiency " + "of caching for this statement. Consider using " + "selectinload() for more effective full-lambda caching" + % (self, orig_query) + ) + orig_query = orig_query._resolved + # this is the more "quick" version, however it's not clear how # much of this we need. in particular I can't get a test to # fail if the "set_base_alias" is missing and not sure why that is. 
diff --git a/lib/sqlalchemy/sql/__init__.py b/lib/sqlalchemy/sql/__init__.py index a25c1b0832..2fe6f35d2b 100644 --- a/lib/sqlalchemy/sql/__init__.py +++ b/lib/sqlalchemy/sql/__init__.py @@ -46,6 +46,8 @@ from .expression import intersect_all # noqa from .expression import Join # noqa from .expression import join # noqa from .expression import label # noqa +from .expression import lambda_stmt # noqa +from .expression import LambdaElement # noqa from .expression import lateral # noqa from .expression import literal # noqa from .expression import literal_column # noqa @@ -62,6 +64,7 @@ from .expression import quoted_name # noqa from .expression import Select # noqa from .expression import select # noqa from .expression import Selectable # noqa +from .expression import StatementLambdaElement # noqa from .expression import Subquery # noqa from .expression import subquery # noqa from .expression import table # noqa @@ -106,18 +109,22 @@ def __go(lcls): from . import coercions from . import elements from . import events # noqa + from . import lambdas from . import selectable from . import schema from . import sqltypes + from . import traversals from . import type_api base.coercions = elements.coercions = coercions base.elements = elements base.type_api = type_api coercions.elements = elements + coercions.lambdas = lambdas coercions.schema = schema coercions.selectable = selectable coercions.sqltypes = sqltypes + coercions.traversals = traversals _prepare_annotations(ColumnElement, AnnotatedColumnElement) _prepare_annotations(FromClause, AnnotatedFromClause) diff --git a/lib/sqlalchemy/sql/base.py b/lib/sqlalchemy/sql/base.py index 9dcd7dca92..6cdab8eacf 100644 --- a/lib/sqlalchemy/sql/base.py +++ b/lib/sqlalchemy/sql/base.py @@ -14,6 +14,7 @@ import itertools import operator import re +from . 
import roles from .traversals import HasCacheKey # noqa from .traversals import MemoizedHasCacheKey # noqa from .visitors import ClauseVisitor @@ -447,13 +448,17 @@ class CompileState(object): "compile_state_plugin", "default" ) klass = cls.plugins.get( - (plugin_name, statement.__visit_name__), None + (plugin_name, statement._effective_plugin_target), None ) if klass is None: - klass = cls.plugins[("default", statement.__visit_name__)] + klass = cls.plugins[ + ("default", statement._effective_plugin_target) + ] else: - klass = cls.plugins[("default", statement.__visit_name__)] + klass = cls.plugins[ + ("default", statement._effective_plugin_target) + ] if klass is cls: return cls(statement, compiler, **kw) @@ -469,14 +474,18 @@ class CompileState(object): "compile_state_plugin", "default" ) try: - return cls.plugins[(plugin_name, statement.__visit_name__)] + return cls.plugins[ + (plugin_name, statement._effective_plugin_target) + ] except KeyError: return None @classmethod def _get_plugin_class_for_plugin(cls, statement, plugin_name): try: - return cls.plugins[(plugin_name, statement.__visit_name__)] + return cls.plugins[ + (plugin_name, statement._effective_plugin_target) + ] except KeyError: return None @@ -637,6 +646,10 @@ class Executable(Generative): ("_propagate_attrs", ExtendedInternalTraversal.dp_propagate_attrs), ] + @property + def _effective_plugin_target(self): + return self.__visit_name__ + @_generative def options(self, *options): """Apply options to this statement. 
@@ -667,7 +680,9 @@ class Executable(Generative): to the usage of ORM queries """ - self._with_options += options + self._with_options += tuple( + coercions.expect(roles.HasCacheKeyRole, opt) for opt in options + ) @_generative def _add_context_option(self, callable_, cache_args): diff --git a/lib/sqlalchemy/sql/coercions.py b/lib/sqlalchemy/sql/coercions.py index 4c6a0317a4..be412c7700 100644 --- a/lib/sqlalchemy/sql/coercions.py +++ b/lib/sqlalchemy/sql/coercions.py @@ -21,9 +21,11 @@ if util.TYPE_CHECKING: from types import ModuleType elements = None # type: ModuleType +lambdas = None # type: ModuleType schema = None # type: ModuleType selectable = None # type: ModuleType sqltypes = None # type: ModuleType +traversals = None # type: ModuleType def _is_literal(element): @@ -51,6 +53,23 @@ def _document_text_coercion(paramname, meth_rst, param_rst): def expect(role, element, apply_propagate_attrs=None, argname=None, **kw): + if ( + role.allows_lambda + # note callable() will not invoke a __getattr__() method, whereas + # hasattr(obj, "__call__") will. by keeping the callable() check here + # we prevent most needless calls to hasattr() and therefore + # __getattr__(), which is present on ColumnElement. 
+ and callable(element) + and hasattr(element, "__code__") + ): + return lambdas.LambdaElement( + element, + role, + apply_propagate_attrs=apply_propagate_attrs, + argname=argname, + **kw + ) + # major case is that we are given a ClauseElement already, skip more # elaborate logic up front if possible impl = _impl_lookup[role] @@ -106,7 +125,12 @@ def expect(role, element, apply_propagate_attrs=None, argname=None, **kw): if impl._role_class in resolved.__class__.__mro__: if impl._post_coercion: - resolved = impl._post_coercion(resolved, argname=argname, **kw) + resolved = impl._post_coercion( + resolved, + argname=argname, + original_element=original_element, + **kw + ) return resolved else: return impl._implicit_coercions( @@ -230,6 +254,8 @@ class _ColumnCoercions(object): ): self._warn_for_scalar_subquery_coercion() return resolved.element.scalar_subquery() + elif self._role_class.allows_lambda and resolved._is_lambda_element: + return resolved else: self._raise_for_expected(original_element, argname, resolved) @@ -319,6 +345,21 @@ class _SelectIsNotFrom(object): ) +class HasCacheKeyImpl(RoleImpl): + __slots__ = () + + def _implicit_coercions( + self, original_element, resolved, argname=None, **kw + ): + if isinstance(original_element, traversals.HasCacheKey): + return original_element + else: + self._raise_for_expected(original_element, argname, resolved) + + def _literal_coercion(self, element, **kw): + return element + + class ExpressionElementImpl(_ColumnCoercions, RoleImpl): __slots__ = () @@ -420,7 +461,14 @@ class InElementImpl(RoleImpl): assert not len(element.clauses) == 0 return element.self_group(against=operator) - elif isinstance(element, elements.BindParameter) and element.expanding: + elif isinstance(element, elements.BindParameter): + if not element.expanding: + # coercing to expanding at the moment to work with the + # lambda system. not sure if this is the right approach. 
+ # is there a valid use case to send a single non-expanding + # param to IN? check for ARRAY type? + element = element._clone(maintain_key=True) + element.expanding = True if isinstance(expr, elements.Tuple): element = element._with_expanding_in_types( [elem.type for elem in expr] @@ -431,6 +479,22 @@ class InElementImpl(RoleImpl): return element +class OnClauseImpl(_CoerceLiterals, _ColumnCoercions, RoleImpl): + __slots__ = () + + _coerce_consts = True + + def _post_coercion(self, resolved, original_element=None, **kw): + # this is a hack right now as we want to use coercion on an + # ORM InstrumentedAttribute, but we want to return the object + # itself if it is one, not its clause element. + # ORM context _join and _legacy_join() would need to be improved + # to look for annotations in a clause element form. + if isinstance(original_element, roles.JoinTargetRole): + return original_element + return resolved + + class WhereHavingImpl(_CoerceLiterals, _ColumnCoercions, RoleImpl): __slots__ = () @@ -635,6 +699,24 @@ class StatementImpl(_NoTextCoercion, RoleImpl): class CoerceTextStatementImpl(_CoerceLiterals, RoleImpl): __slots__ = () + def _literal_coercion(self, element, **kw): + if callable(element) and hasattr(element, "__code__"): + return lambdas.StatementLambdaElement(element, self._role_class) + else: + return super(CoerceTextStatementImpl, self)._literal_coercion( + element, **kw + ) + + def _implicit_coercions( + self, original_element, resolved, argname=None, **kw + ): + if resolved._is_lambda_element: + return resolved + else: + return super(CoerceTextStatementImpl, self)._implicit_coercions( + original_element, resolved, argname=argname, **kw + ) + def _text_coercion(self, element, argname=None): # TODO: this should emit deprecation warning, # see deprecation warning in engine/base.py execute() diff --git a/lib/sqlalchemy/sql/compiler.py b/lib/sqlalchemy/sql/compiler.py index 6152a28e78..3a3ce5c45d 100644 --- a/lib/sqlalchemy/sql/compiler.py +++ 
b/lib/sqlalchemy/sql/compiler.py @@ -1296,6 +1296,10 @@ class SQLCompiler(Compiled): "Cannot compile Column object until " "its 'name' is assigned." ) + def visit_lambda_element(self, element, **kw): + sql_element = element._resolved + return self.process(sql_element, **kw) + def visit_column( self, column, @@ -1624,7 +1628,7 @@ class SQLCompiler(Compiled): return func.clause_expr._compiler_dispatch(self, **kwargs) def visit_compound_select( - self, cs, asfrom=False, compound_index=0, **kwargs + self, cs, asfrom=False, compound_index=None, **kwargs ): toplevel = not self.stack @@ -1635,10 +1639,14 @@ class SQLCompiler(Compiled): entry = self._default_stack_entry if toplevel else self.stack[-1] need_result_map = toplevel or ( - compound_index == 0 + not compound_index and entry.get("need_result_map_for_compound", False) ) + # indicates there is already a CompoundSelect in play + if compound_index == 0: + entry["select_0"] = cs + self.stack.append( { "correlate_froms": entry["correlate_froms"], @@ -2654,7 +2662,7 @@ class SQLCompiler(Compiled): select_stmt, asfrom=False, fromhints=None, - compound_index=0, + compound_index=None, select_wraps_for=None, lateral=False, from_linter=None, @@ -2709,7 +2717,9 @@ class SQLCompiler(Compiled): or entry.get("need_result_map_for_nested", False) ) - if compound_index > 0: + # indicates there is a CompoundSelect in play and we are not the + # first select + if compound_index: populate_result_map = False # this was first proposed as part of #3372; however, it is not @@ -2844,11 +2854,10 @@ class SQLCompiler(Compiled): correlate_froms = entry["correlate_froms"] asfrom_froms = entry["asfrom_froms"] - if compound_index > 0: - # note this is cached - select_0 = entry["selectable"].selects[0] - if select_0._is_select_container: - select_0 = select_0.element + if compound_index == 0: + entry["select_0"] = select + elif compound_index: + select_0 = entry["select_0"] numcols = len(select_0.selected_columns) if 
len(compile_state.columns_plus_names) != numcols: diff --git a/lib/sqlalchemy/sql/elements.py b/lib/sqlalchemy/sql/elements.py index af5eab257c..6ce5054121 100644 --- a/lib/sqlalchemy/sql/elements.py +++ b/lib/sqlalchemy/sql/elements.py @@ -215,6 +215,7 @@ class ClauseElement( _is_select_statement = False _is_bind_parameter = False _is_clause_list = False + _is_lambda_element = False _order_by_label_element = None @@ -1337,9 +1338,6 @@ class BindParameter(roles.InElementRole, ColumnElement): :ref:`change_4808`. - - - """ if required is NO_ARG: @@ -1406,15 +1404,15 @@ class BindParameter(roles.InElementRole, ColumnElement): the context of an expanding IN against a tuple. """ - cloned = self._clone() + cloned = self._clone(maintain_key=True) cloned._expanding_in_types = types return cloned - def _with_value(self, value): + def _with_value(self, value, maintain_key=False): """Return a copy of this :class:`.BindParameter` with the given value set. """ - cloned = self._clone() + cloned = self._clone(maintain_key=maintain_key) cloned.value = value cloned.callable = None cloned.required = False @@ -1442,9 +1440,9 @@ class BindParameter(roles.InElementRole, ColumnElement): c.type = type_ return c - def _clone(self): + def _clone(self, maintain_key=False): c = ClauseElement._clone(self) - if self.unique: + if not maintain_key and self.unique: c.key = _anonymous_label( "%%(%d %s)s" % (id(c), c._orig_key or "param") ) diff --git a/lib/sqlalchemy/sql/expression.py b/lib/sqlalchemy/sql/expression.py index e25063372b..37441a125a 100644 --- a/lib/sqlalchemy/sql/expression.py +++ b/lib/sqlalchemy/sql/expression.py @@ -29,6 +29,8 @@ __all__ = [ "Insert", "Join", "Lateral", + "LambdaElement", + "StatementLambdaElement", "Select", "Selectable", "TableClause", @@ -59,6 +61,7 @@ __all__ = [ "join", "label", "lateral", + "lambda_stmt", "literal", "literal_column", "not_", @@ -135,6 +138,9 @@ from .functions import func # noqa from .functions import Function # noqa from .functions import 
FunctionElement # noqa from .functions import modifier # noqa +from .lambdas import lambda_stmt # noqa +from .lambdas import LambdaElement # noqa +from .lambdas import StatementLambdaElement # noqa from .selectable import Alias # noqa from .selectable import AliasedReturnsRows # noqa from .selectable import CompoundSelect # noqa diff --git a/lib/sqlalchemy/sql/functions.py b/lib/sqlalchemy/sql/functions.py index 6fff26842d..c1b8bbd27a 100644 --- a/lib/sqlalchemy/sql/functions.py +++ b/lib/sqlalchemy/sql/functions.py @@ -614,7 +614,7 @@ class Function(FunctionElement): new :class:`.Function` instances. """ - self.packagenames = kw.pop("packagenames", None) or [] + self.packagenames = kw.pop("packagenames", None) or () self.name = name self._bind = kw.get("bind", None) self.type = sqltypes.to_instance(kw.get("type_", None)) @@ -759,7 +759,7 @@ class GenericFunction(util.with_metaclass(_GenericMeta, Function)): for c in args ] self._has_args = self._has_args or bool(parsed_args) - self.packagenames = [] + self.packagenames = () self._bind = kwargs.get("bind", None) self.clause_expr = ClauseList( operator=operators.comma_op, group_contents=True, *parsed_args diff --git a/lib/sqlalchemy/sql/lambdas.py b/lib/sqlalchemy/sql/lambdas.py new file mode 100644 index 0000000000..7924111896 --- /dev/null +++ b/lib/sqlalchemy/sql/lambdas.py @@ -0,0 +1,607 @@ +# sql/lambdas.py +# Copyright (C) 2005-2019 the SQLAlchemy authors and contributors +# +# +# This module is part of SQLAlchemy and is released under +# the MIT License: http://www.opensource.org/licenses/mit-license.php + +import itertools +import operator +import sys +import weakref + +from . import coercions +from . import elements +from . import roles +from . import schema +from . import traversals +from . import type_api +from . import visitors +from .operators import ColumnOperators +from .. import exc +from .. import inspection +from .. 
import util +from ..util import collections_abc + +_trackers = weakref.WeakKeyDictionary() + + +_TRACKERS = 0 +_STALE_CHECK = 1 +_REAL_FN = 2 +_EXPR = 3 +_IS_SEQUENCE = 4 +_PROPAGATE_ATTRS = 5 + + +def lambda_stmt(lmb): + """Produce a SQL statement that is cached as a lambda. + + This SQL statement will only be constructed if element has not been + compiled yet. The approach is used to save on Python function overhead + when constructing statements that will be cached. + + E.g.:: + + from sqlalchemy import lambda_stmt + + stmt = lambda_stmt(lambda: table.select()) + stmt += lambda s: s.where(table.c.id == 5) + + result = connection.execute(stmt) + + The object returned is an instance of :class:`_sql.StatementLambdaElement`. + + .. versionadded:: 1.4 + + .. seealso:: + + :ref:`engine_lambda_caching` + + + """ + return coercions.expect(roles.CoerceTextStatementRole, lmb) + + +class LambdaElement(elements.ClauseElement): + """A SQL construct where the state is stored as an un-invoked lambda. + + The :class:`_sql.LambdaElement` is produced transparently whenever + passing lambda expressions into SQL constructs, such as:: + + stmt = select(table).where(lambda: table.c.col == parameter) + + The :class:`_sql.LambdaElement` is the base of the + :class:`_sql.StatementLambdaElement` which represents a full statement + within a lambda. + + .. versionadded:: 1.4 + + .. 
seealso:: + + :ref:`engine_lambda_caching` + + """ + + __visit_name__ = "lambda_element" + + _is_lambda_element = True + + _resolved_bindparams = () + + _traverse_internals = [ + ("_resolved", visitors.InternalTraversal.dp_clauseelement) + ] + + def __repr__(self): + return "%s(%r)" % (self.__class__.__name__, self.fn.__code__) + + def __init__(self, fn, role, apply_propagate_attrs=None, **kw): + self.fn = fn + self.role = role + self.parent_lambda = None + + if apply_propagate_attrs is None and ( + role is roles.CoerceTextStatementRole + ): + apply_propagate_attrs = self + + if fn.__code__ not in _trackers: + rec = self._initialize_var_trackers( + role, apply_propagate_attrs, kw + ) + else: + rec = _trackers[self.fn.__code__] + closure = fn.__closure__ + + # check if the objects fixed inside the lambda that we've cached + # have been changed. This can apply to things like mappers that + # were recreated in test suites. if so, re-initialize. + # + # this is a small performance hit on every use for a not very + # common situation, however it's very hard to debug if the + # condition does occur. 
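[editorial aside] The staleness check described in the comment above can be sketched in plain Python. The helper names below are illustrative, not part of SQLAlchemy; the point is that the tracker cache is keyed on the lambda's shared `__code__` object, so the identities of closed-over objects must be re-verified on every use:

```python
def make_lambda(obj):
    # each call returns a new lambda, but every one shares a single
    # code object, which is what the tracker cache is keyed on
    return lambda: obj


class Mapper:
    pass


m1 = Mapper()
fn = make_lambda(m1)

# at cache time, record the identity of each closed-over object
stale_check = [
    (idx, cell.cell_contents) for idx, cell in enumerate(fn.__closure__)
]


def rec_is_stale(other_fn, stale_check):
    # the cached record is stale if any closure cell of the new lambda
    # holds a different object than the one recorded at cache time
    closure = other_fn.__closure__
    return any(
        closure[idx].cell_contents is not obj for idx, obj in stale_check
    )


fn2 = make_lambda(Mapper())  # same code object, e.g. a recreated mapper
assert fn2.__code__ is fn.__code__
assert not rec_is_stale(fn, stale_check)
assert rec_is_stale(fn2, stale_check)
```

As the comment notes, this identity scan runs on every lambda invocation, trading a small constant cost for a failure mode that would otherwise be very hard to debug.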
+ for idx, obj in rec[_STALE_CHECK]: + if closure[idx].cell_contents is not obj: + rec = self._initialize_var_trackers( + role, apply_propagate_attrs, kw + ) + break + self._rec = rec + + if apply_propagate_attrs is not None: + propagate_attrs = rec[_PROPAGATE_ATTRS] + if propagate_attrs: + apply_propagate_attrs._propagate_attrs = propagate_attrs + + if rec[_TRACKERS]: + self._resolved_bindparams = bindparams = [] + for tracker in rec[_TRACKERS]: + tracker(self.fn, bindparams) + + def __getattr__(self, key): + return getattr(self._rec[_EXPR], key) + + @property + def _is_sequence(self): + return self._rec[_IS_SEQUENCE] + + @property + def _select_iterable(self): + if self._is_sequence: + return itertools.chain.from_iterable( + [element._select_iterable for element in self._resolved] + ) + + else: + return self._resolved._select_iterable + + @property + def _from_objects(self): + if self._is_sequence: + return itertools.chain.from_iterable( + [element._from_objects for element in self._resolved] + ) + + else: + return self._resolved._from_objects + + def _param_dict(self): + return {b.key: b.value for b in self._resolved_bindparams} + + @util.memoized_property + def _resolved(self): + bindparam_lookup = {b.key: b for b in self._resolved_bindparams} + + def replace(thing): + if ( + isinstance(thing, elements.BindParameter) + and thing.key in bindparam_lookup + ): + bind = bindparam_lookup[thing.key] + # TODO: consider + # if we should clone the bindparam here, re-cache the new + # version, etc. also we make an assumption about "expanding" + # in this case. 
+ if thing.expanding: + bind.expanding = True + return bind + + expr = self._rec[_EXPR] + + if self._rec[_IS_SEQUENCE]: + expr = [ + visitors.replacement_traverse(sub_expr, {}, replace) + for sub_expr in expr + ] + elif getattr(expr, "is_clause_element", False): + expr = visitors.replacement_traverse(expr, {}, replace) + + return expr + + def _gen_cache_key(self, anon_map, bindparams): + + cache_key = (self.fn.__code__, self.__class__) + + if self._resolved_bindparams: + bindparams.extend(self._resolved_bindparams) + + return cache_key + + def _invoke_user_fn(self, fn, *arg): + return fn() + + def _initialize_var_trackers(self, role, apply_propagate_attrs, coerce_kw): + fn = self.fn + + # track objects referenced inside of lambdas, create bindparams + # ahead of time for literal values. If bindparams are produced, + # then rewrite the function globals and closure as necessary so that + # it refers to the bindparams, then invoke the function + new_closure = {} + new_globals = fn.__globals__.copy() + tracker_collection = [] + check_closure_for_stale = [] + + for name in fn.__code__.co_names: + if name not in new_globals: + continue + + bound_value = _roll_down_to_literal(new_globals[name]) + + if coercions._is_literal(bound_value): + new_globals[name] = bind = PyWrapper(name, bound_value) + tracker_collection.append(_globals_tracker(name, bind)) + + if fn.__closure__: + for closure_index, (fv, cell) in enumerate( + zip(fn.__code__.co_freevars, fn.__closure__) + ): + + bound_value = _roll_down_to_literal(cell.cell_contents) + + if coercions._is_literal(bound_value): + new_closure[fv] = bind = PyWrapper(fv, bound_value) + tracker_collection.append( + _closure_tracker(fv, bind, closure_index) + ) + else: + new_closure[fv] = cell.cell_contents + # for normal cell contents, add them to a list that + # we can compare later when we get new lambdas. if + # any identities have changed, then we will recalculate + # the whole lambda and run it again. 
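[editorial aside] The core trick the tracker machinery above relies on can be reduced to a small sketch: lambdas created at the same source location share one `__code__` object, which serves as the deterministic cache key, while the closed-over literals vary per call and become the bound parameter values. The `extract_params` helper here is illustrative only, standing in for the `PyWrapper` / tracker plumbing:

```python
def extract_params(fn):
    # pull closed-over plain literals out of the lambda's closure; these
    # play the role of the per-invocation bound parameter values
    params = {}
    for name, cell in zip(fn.__code__.co_freevars, fn.__closure__ or ()):
        value = cell.cell_contents
        if isinstance(value, (int, float, str, bytes)):
            params[name] = value
    return params


def make_stmt(user_id):
    return lambda: ("SELECT * FROM users WHERE id = :id", user_id)


fn1 = make_stmt(5)
fn2 = make_stmt(7)

# lambdas from the same source location share one code object, which is
# the deterministic part of the cache key...
assert fn1.__code__ is fn2.__code__
# ...while the closed-over literals vary per invocation
assert extract_params(fn1) == {"user_id": 5}
assert extract_params(fn2) == {"user_id": 7}
```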
+ check_closure_for_stale.append( + (closure_index, cell.cell_contents) + ) + + if tracker_collection: + new_fn = _rewrite_code_obj( + fn, + [new_closure[name] for name in fn.__code__.co_freevars], + new_globals, + ) + expr = self._invoke_user_fn(new_fn) + + else: + new_fn = fn + expr = self._invoke_user_fn(new_fn) + tracker_collection = [] + + if self.parent_lambda is None: + if isinstance(expr, collections_abc.Sequence): + expected_expr = [ + coercions.expect( + role, + sub_expr, + apply_propagate_attrs=apply_propagate_attrs, + **coerce_kw + ) + for sub_expr in expr + ] + is_sequence = True + else: + expected_expr = coercions.expect( + role, + expr, + apply_propagate_attrs=apply_propagate_attrs, + **coerce_kw + ) + is_sequence = False + else: + expected_expr = expr + is_sequence = False + + if apply_propagate_attrs is not None: + propagate_attrs = apply_propagate_attrs._propagate_attrs + else: + propagate_attrs = util.immutabledict() + + rec = _trackers[self.fn.__code__] = ( + tracker_collection, + check_closure_for_stale, + new_fn, + expected_expr, + is_sequence, + propagate_attrs, + ) + return rec + + +class StatementLambdaElement(roles.AllowsLambdaRole, LambdaElement): + """Represent a composable SQL statement as a :class:`_sql.LambdaElement`. + + The :class:`_sql.StatementLambdaElement` is constructed using the + :func:`_sql.lambda_stmt` function:: + + + from sqlalchemy import lambda_stmt + + stmt = lambda_stmt(lambda: select(table)) + + Once constructed, additional criteria can be built onto the statement + by adding subsequent lambdas, which accept the existing statement + object as a single parameter:: + + stmt += lambda s: s.where(table.c.col == parameter) + + + .. versionadded:: 1.4 + + .. 
seealso:: + + :ref:`engine_lambda_caching` + + """ + + def __add__(self, other): + return LinkedLambdaElement(other, parent_lambda=self) + + def _execute_on_connection( + self, connection, multiparams, params, execution_options + ): + if self._rec[_EXPR].supports_execution: + return connection._execute_clauseelement( + self, multiparams, params, execution_options + ) + else: + raise exc.ObjectNotExecutableError(self) + + @property + def _with_options(self): + return self._rec[_EXPR]._with_options + + @property + def _effective_plugin_target(self): + return self._rec[_EXPR]._effective_plugin_target + + @property + def _is_future(self): + return self._rec[_EXPR]._is_future + + @property + def _execution_options(self): + return self._rec[_EXPR]._execution_options + + +class LinkedLambdaElement(StatementLambdaElement): + def __init__(self, fn, parent_lambda, **kw): + self.fn = fn + self.parent_lambda = parent_lambda + role = None + + apply_propagate_attrs = self + + if fn.__code__ not in _trackers: + rec = self._initialize_var_trackers( + role, apply_propagate_attrs, kw + ) + else: + rec = _trackers[self.fn.__code__] + + closure = fn.__closure__ + + # check if objects referred to by the lambda have changed and + # re-scan the lambda if so. see comments for this same section in + # LambdaElement. 
+ for idx, obj in rec[_STALE_CHECK]: + if closure[idx].cell_contents is not obj: + rec = self._initialize_var_trackers( + role, apply_propagate_attrs, kw + ) + break + + self._rec = rec + + self._propagate_attrs = parent_lambda._propagate_attrs + + self._resolved_bindparams = bindparams = [] + rec = self._rec + while True: + if rec[_TRACKERS]: + for tracker in rec[_TRACKERS]: + tracker(self.fn, bindparams) + if self.parent_lambda is not None: + self = self.parent_lambda + rec = self._rec + else: + break + + def _invoke_user_fn(self, fn, *arg): + return fn(self.parent_lambda._rec[_EXPR]) + + def _gen_cache_key(self, anon_map, bindparams): + if self._resolved_bindparams: + bindparams.extend(self._resolved_bindparams) + + cache_key = (self.fn.__code__, self.__class__) + + parent = self.parent_lambda + while parent is not None: + cache_key = (parent.fn.__code__,) + cache_key + parent = parent.parent_lambda + + return cache_key + + +class PyWrapper(ColumnOperators): + def __init__(self, name, to_evaluate, getter=None): + self._name = name + self._to_evaluate = to_evaluate + self._param = None + self._bind_paths = {} + self._getter = getter + + def __call__(self, *arg, **kw): + elem = object.__getattribute__(self, "_to_evaluate") + value = elem(*arg, **kw) + if coercions._is_literal(value) and not isinstance( + # TODO: coverage where an ORM option or similar is here + value, + traversals.HasCacheKey, + ): + # TODO: we can instead scan the arguments and make sure they + # are all Python literals + + # TODO: coverage + name = object.__getattribute__(self, "_name") + raise exc.InvalidRequestError( + "Can't invoke Python callable %s() inside of lambda " + "expression argument; lambda cache keys should not call " + "regular functions since the caching " + "system does not track the values of the arguments passed " + "to the functions. Call the function outside of the lambda " + "and assign to a local variable that is used in the lambda." 
+ % (name) + ) + else: + return value + + def operate(self, op, *other, **kwargs): + elem = object.__getattribute__(self, "__clause_element__")() + return op(elem, *other, **kwargs) + + def reverse_operate(self, op, other, **kwargs): + elem = object.__getattribute__(self, "__clause_element__")() + return op(other, elem, **kwargs) + + def _extract_bound_parameters(self, starting_point, result_list): + param = object.__getattribute__(self, "_param") + if param is not None: + param = param._with_value(starting_point, maintain_key=True) + result_list.append(param) + for pywrapper in object.__getattribute__(self, "_bind_paths").values(): + getter = object.__getattribute__(pywrapper, "_getter") + element = getter(starting_point) + pywrapper._sa__extract_bound_parameters(element, result_list) + + def __clause_element__(self): + param = object.__getattribute__(self, "_param") + to_evaluate = object.__getattribute__(self, "_to_evaluate") + if param is None: + name = object.__getattribute__(self, "_name") + self._param = param = elements.BindParameter(name, unique=True) + param.type = type_api._resolve_value_to_type(to_evaluate) + + return param._with_value(to_evaluate, maintain_key=True) + + def __getattribute__(self, key): + if key.startswith("_sa_"): + return object.__getattribute__(self, key[4:]) + elif key in ("__clause_element__", "operate", "reverse_operate"): + return object.__getattribute__(self, key) + + if key.startswith("__"): + elem = object.__getattribute__(self, "_to_evaluate") + return getattr(elem, key) + else: + return self._sa__add_getter(key, operator.attrgetter) + + def __getitem__(self, key): + if isinstance(key, PyWrapper): + # TODO: coverage + raise exc.InvalidRequestError( + "Dictionary keys / list indexes inside of a cached " + "lambda must be Python literals only" + ) + return self._sa__add_getter(key, operator.itemgetter) + + def _add_getter(self, key, getter_fn): + + bind_paths = object.__getattribute__(self, "_bind_paths") + + bind_path_key = 
(key, getter_fn) + if bind_path_key in bind_paths: + return bind_paths[bind_path_key] + + getter = getter_fn(key) + elem = object.__getattribute__(self, "_to_evaluate") + value = getter(elem) + + if coercions._is_literal(value): + wrapper = PyWrapper(key, value, getter) + bind_paths[bind_path_key] = wrapper + return wrapper + else: + return value + + +def _roll_down_to_literal(element): + is_clause_element = hasattr(element, "__clause_element__") + + if is_clause_element: + while not isinstance( + element, (elements.ClauseElement, schema.SchemaItem) + ): + try: + element = element.__clause_element__() + except AttributeError: + break + + if not is_clause_element: + insp = inspection.inspect(element, raiseerr=False) + if insp is not None: + try: + return insp.__clause_element__() + except AttributeError: + return insp + + # TODO: should we coerce consts None/True/False here? + return element + else: + return element + + +def _globals_tracker(name, wrapper): + def extract_parameter_value(current_fn, result): + object.__getattribute__(wrapper, "_extract_bound_parameters")( + current_fn.__globals__[name], result + ) + + return extract_parameter_value + + +def _closure_tracker(name, wrapper, closure_index): + def extract_parameter_value(current_fn, result): + object.__getattribute__(wrapper, "_extract_bound_parameters")( + current_fn.__closure__[closure_index].cell_contents, result + ) + + return extract_parameter_value + + +def _rewrite_code_obj(f, cell_values, globals_): + """Return a copy of f, with a new closure and new globals + + yes it works in pypy :P + + """ + + argrange = range(len(cell_values)) + + code = "def make_cells():\n" + if cell_values: + code += " (%s) = (%s)\n" % ( + ", ".join("i%d" % i for i in argrange), + ", ".join("o%d" % i for i in argrange), + ) + code += " def closure():\n" + code += " return %s\n" % ", ".join("i%d" % i for i in argrange) + code += " return closure.__closure__" + vars_ = {"o%d" % i: cell_values[i] for i in argrange} + 
exec(code, vars_, vars_) + closure = vars_["make_cells"]() + + func = type(f)(f.__code__, globals_, f.__name__, f.__defaults__, closure) + if sys.version_info >= (3,): + func.__annotations__ = f.__annotations__ + func.__kwdefaults__ = f.__kwdefaults__ + func.__doc__ = f.__doc__ + func.__module__ = f.__module__ + + return func + + +@inspection._inspects(LambdaElement) +def insp(lmb): + return inspection.inspect(lmb._resolved) diff --git a/lib/sqlalchemy/sql/roles.py b/lib/sqlalchemy/sql/roles.py index 3d94ec9ff5..4205d9f0d3 100644 --- a/lib/sqlalchemy/sql/roles.py +++ b/lib/sqlalchemy/sql/roles.py @@ -19,9 +19,21 @@ class SQLRole(object): """ + allows_lambda = False + uses_inspection = False + class UsesInspection(object): _post_inspect = None + uses_inspection = True + + +class AllowsLambdaRole(object): + allows_lambda = True + + +class HasCacheKeyRole(SQLRole): + _role_name = "Cacheable Core or ORM object" class ColumnArgumentRole(SQLRole): @@ -40,7 +52,7 @@ class TruncatedLabelRole(SQLRole): _role_name = "String SQL identifier" -class ColumnsClauseRole(UsesInspection, ColumnListRole): +class ColumnsClauseRole(AllowsLambdaRole, UsesInspection, ColumnListRole): _role_name = "Column expression or FROM clause" @property @@ -56,7 +68,7 @@ class ByOfRole(ColumnListRole): _role_name = "GROUP BY / OF / etc. expression" -class GroupByRole(UsesInspection, ByOfRole): +class GroupByRole(AllowsLambdaRole, UsesInspection, ByOfRole): # note there's a special case right now where you can pass a whole # ORM entity to group_by() and it splits out. 
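[editorial aside] `_rewrite_code_obj` above rebuilds a function around its original code object with substituted closure cells. A standalone sketch of the same technique, simplified and without the globals swap, shows why the `exec` of a generated factory is needed: closure cells cannot be constructed directly in a version-portable way, but a nested function that closes over freshly bound locals yields real cells via its `__closure__`:

```python
def rewrite_closure(f, cell_values):
    # build real closure cells by exec'ing a factory that closes over
    # freshly bound locals, then harvesting its __closure__ tuple
    argrange = range(len(cell_values))
    code = "def make_cells():\n"
    if cell_values:
        code += "    (%s,) = (%s,)\n" % (
            ", ".join("i%d" % i for i in argrange),
            ", ".join("o%d" % i for i in argrange),
        )
    code += "    def closure():\n"
    code += "        return %s\n" % ", ".join("i%d" % i for i in argrange)
    code += "    return closure.__closure__\n"
    vars_ = {"o%d" % i: cell_values[i] for i in argrange}
    exec(code, vars_, vars_)
    closure = vars_["make_cells"]()
    # same code object, same globals, new closure cells
    return type(f)(
        f.__code__, f.__globals__, f.__name__, f.__defaults__, closure
    )


def outer(x):
    return lambda: x + 1


f = outer(10)
g = rewrite_closure(f, [100])
assert f() == 11
assert g() == 101
```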
we may not want to keep # this around @@ -64,7 +76,7 @@ class GroupByRole(UsesInspection, ByOfRole): _role_name = "GROUP BY expression" -class OrderByRole(ByOfRole): +class OrderByRole(AllowsLambdaRole, ByOfRole): _role_name = "ORDER BY expression" @@ -76,7 +88,11 @@ class StatementOptionRole(StructuralRole): _role_name = "statement sub-expression element" -class WhereHavingRole(StructuralRole): +class OnClauseRole(AllowsLambdaRole, StructuralRole): + _role_name = "SQL expression for ON clause" + + +class WhereHavingRole(OnClauseRole): _role_name = "SQL expression for WHERE/HAVING role" @@ -102,7 +118,7 @@ class InElementRole(SQLRole): ) -class JoinTargetRole(UsesInspection, StructuralRole): +class JoinTargetRole(AllowsLambdaRole, UsesInspection, StructuralRole): _role_name = ( "Join target, typically a FROM expression, or ORM " "relationship attribute" @@ -176,7 +192,7 @@ class HasCTERole(ReturnsRowsRole): pass -class CompoundElementRole(SQLRole): +class CompoundElementRole(AllowsLambdaRole, SQLRole): """SELECT statements inside a CompoundSelect, e.g. UNION, EXTRACT, etc.""" _role_name = ( diff --git a/lib/sqlalchemy/sql/selectable.py b/lib/sqlalchemy/sql/selectable.py index 59c292a079..832da1a577 100644 --- a/lib/sqlalchemy/sql/selectable.py +++ b/lib/sqlalchemy/sql/selectable.py @@ -847,7 +847,7 @@ class Join(roles.DMLTableRole, FromClause): # note: taken from If91f61527236fd4d7ae3cad1f24c38be921c90ba # not merged yet self.onclause = coercions.expect( - roles.WhereHavingRole, onclause + roles.OnClauseRole, onclause ).self_group(against=operators._asbool) self.isouter = isouter diff --git a/lib/sqlalchemy/sql/traversals.py b/lib/sqlalchemy/sql/traversals.py index 8d01b7ff7d..f41480a947 100644 --- a/lib/sqlalchemy/sql/traversals.py +++ b/lib/sqlalchemy/sql/traversals.py @@ -115,45 +115,37 @@ class HasCacheKey(object): in the structures that would affect the SQL string or the type handlers should result in a different cache key. 
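[editorial aside] The `anon_map` referenced in the revised docstring replaces raw `id()` values with small per-traversal counters, so that two structurally identical constructs produce equal cache keys regardless of memory addresses. A reduced sketch (the `Element` class here is hypothetical, not SQLAlchemy's):

```python
class anon_map(dict):
    # maps id(obj) -> a small counter string assigned in traversal
    # order, so keys never depend on actual memory addresses
    def __init__(self):
        self.index = 0


class Element:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def _gen_cache_key(self, amap):
        idself = id(self)
        if idself in amap:
            # already visited: emit only the anonymized reference
            return (amap[idself], self.__class__)
        amap[idself] = id_ = str(amap.index)
        amap.index += 1
        return (id_, self.__class__, self.name) + tuple(
            c._gen_cache_key(amap) for c in self.children
        )


col = Element("col")
k1 = Element("select", [col, col])._gen_cache_key(anon_map())

col2 = Element("col")
k2 = Element("select", [col2, col2])._gen_cache_key(anon_map())

# equivalent structures at different addresses produce equal keys
assert k1 == k2
```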
- If a structure cannot produce a useful cache key, it should raise - NotImplementedError, which will result in the entire structure - for which it's part of not being useful as a cache key. - + If a structure cannot produce a useful cache key, the NO_CACHE + symbol should be added to the anon_map and the method should + return None. """ - elements = util.preloaded.sql_elements - idself = id(self) + cls = self.__class__ - if anon_map is not None: - if idself in anon_map: - return (anon_map[idself], self.__class__) - else: - # inline of - # id_ = anon_map[idself] - anon_map[idself] = id_ = str(anon_map.index) - anon_map.index += 1 + if idself in anon_map: + return (anon_map[idself], cls) else: - id_ = None + # inline of + # id_ = anon_map[idself] + anon_map[idself] = id_ = str(anon_map.index) + anon_map.index += 1 try: - dispatcher = self.__class__.__dict__[ - "_generated_cache_key_traversal" - ] + dispatcher = cls.__dict__["_generated_cache_key_traversal"] except KeyError: # most of the dispatchers are generated up front # in sqlalchemy/sql/__init__.py -> # traversals.py-> _preconfigure_traversals(). # this block will generate any remaining dispatchers. - dispatcher = self.__class__._generate_cache_attrs() + dispatcher = cls._generate_cache_attrs() if dispatcher is NO_CACHE: - if anon_map is not None: - anon_map[NO_CACHE] = True + anon_map[NO_CACHE] = True return None - result = (id_, self.__class__) + result = (id_, cls) # inline of _cache_key_traversal_visitor.run_generated_dispatch() @@ -163,15 +155,12 @@ class HasCacheKey(object): if obj is not None: # TODO: see if C code can help here as Python lacks an # efficient switch construct - if meth is CACHE_IN_PLACE: - # cache in place is always going to be a Python - # tuple, dict, list, etc. 
so we can do a boolean check - if obj: - result += (attrname, obj) - elif meth is STATIC_CACHE_KEY: + + if meth is STATIC_CACHE_KEY: result += (attrname, obj._static_cache_key) elif meth is ANON_NAME: - if elements._anonymous_label in obj.__class__.__mro__: + elements = util.preloaded.sql_elements + if isinstance(obj, elements._anonymous_label): obj = obj.apply_map(anon_map) result += (attrname, obj) elif meth is CALL_GEN_CACHE_KEY: @@ -179,8 +168,14 @@ class HasCacheKey(object): attrname, obj._gen_cache_key(anon_map, bindparams), ) - elif meth is PROPAGATE_ATTRS: - if obj: + + # remaining cache functions are against + # Python tuples, dicts, lists, etc. so we can skip + # if they are empty + elif obj: + if meth is CACHE_IN_PLACE: + result += (attrname, obj) + elif meth is PROPAGATE_ATTRS: result += ( attrname, obj["compile_state_plugin"], @@ -188,16 +183,14 @@ class HasCacheKey(object): anon_map, bindparams ), ) - elif meth is InternalTraversal.dp_annotations_key: - # obj is here is the _annotations dict. however, - # we want to use the memoized cache key version of it. - # for Columns, this should be long lived. For select() - # statements, not so much, but they usually won't have - # annotations. - if obj: + elif meth is InternalTraversal.dp_annotations_key: + # obj is here is the _annotations dict. however, we + # want to use the memoized cache key version of it. for + # Columns, this should be long lived. For select() + # statements, not so much, but they usually won't have + # annotations. result += self._annotations_cache_key - elif meth is InternalTraversal.dp_clauseelement_list: - if obj: + elif meth is InternalTraversal.dp_clauseelement_list: result += ( attrname, tuple( @@ -207,14 +200,7 @@ class HasCacheKey(object): ] ), ) - else: - # note that all the "ClauseElement" standalone cases - # here have been handled by inlines above; so we can - # safely assume the object is a standard list/tuple/dict - # which we can skip if it evaluates to false. 
- # improvement would be to have this as a flag delivered - # up front in the dispatcher list - if obj: + else: result += meth( attrname, obj, self, anon_map, bindparams ) @@ -384,6 +370,14 @@ class CacheKey(namedtuple("CacheKey", ["key", "bindparams"])): return "CacheKey(key=%s)" % ("\n".join(output),) + def _generate_param_dict(self): + """used for testing""" + + from .compiler import prefix_anon_map + + _anon_map = prefix_anon_map() + return {b.key % _anon_map: b.effective_value for b in self.bindparams} + def _clone(element, **kw): return element._clone() @@ -506,6 +500,7 @@ class _CacheKey(ExtendedInternalTraversal): ): if not obj: return () + return ( attrname, tuple( diff --git a/test/base/test_result.py b/test/base/test_result.py index bacf09d39d..8b2c253ad7 100644 --- a/test/base/test_result.py +++ b/test/base/test_result.py @@ -898,6 +898,19 @@ class OnlyScalarsTest(fixtures.TestBase): return chunks + @testing.fixture + def no_tuple_one_fixture(self): + data = [(1, 1, 1)] + + def chunks(num): + while data: + rows = data[0:num] + data[:] = [] + + yield [row[0] for row in rows] + + return chunks + @testing.fixture def normal_fixture(self): data = [(1, 1, 1), (2, 1, 2), (1, 1, 1), (1, 3, 2), (4, 1, 2)] @@ -1004,3 +1017,21 @@ class OnlyScalarsTest(fixtures.TestBase): ) eq_(list(r), [(1,), (2,), (1,), (1,), (4,)]) + + def test_scalar_mode_first(self, no_tuple_one_fixture): + metadata = result.SimpleResultMetaData(["a", "b", "c"]) + + r = result.ChunkedIteratorResult( + metadata, no_tuple_one_fixture, source_supports_scalars=True + ) + + eq_(r.one(), (1,)) + + def test_scalar_mode_scalar_one(self, no_tuple_one_fixture): + metadata = result.SimpleResultMetaData(["a", "b", "c"]) + + r = result.ChunkedIteratorResult( + metadata, no_tuple_one_fixture, source_supports_scalars=True + ) + + eq_(r.scalar_one(), 1) diff --git a/test/ext/test_baked.py b/test/ext/test_baked.py index 15919765cb..89fff954a3 100644 --- a/test/ext/test_baked.py +++ b/test/ext/test_baked.py @@ 
@@ -1294,6 +1294,7 @@ class LazyLoaderTest(testing.AssertsCompiledSQL, BakedTest):
     def _test_baked_lazy_loading_relationship_flag(self, flag):
         User, Address = self._o2m_fixture(bake_queries=flag)
         from sqlalchemy import inspect
+        from sqlalchemy.orm.interfaces import UserDefinedOption
 
         address_mapper = inspect(Address)
         sess = Session(testing.db)
@@ -1302,13 +1303,40 @@ class LazyLoaderTest(testing.AssertsCompiledSQL, BakedTest):
         # or core level and it is not easy to patch. the option object
         # is the one thing that will get carried into the lazyload from the
         # outside and invoked on a per-compile basis
-        mock_opt = mock.Mock(
-            _is_compile_state=True,
-            propagate_to_loaders=True,
-            _gen_cache_key=lambda *args: ("hi",),
-            _generate_path_cache_key=lambda path: ("hi",),
-            _generate_cache_key=lambda *args: (("hi",), []),
-        )
+
+        class MockOpt(UserDefinedOption):
+            _is_compile_state = True
+            propagate_to_loaders = True
+            _is_legacy_option = True
+
+            def _gen_cache_key(self, *args):
+                return ("hi",)
+
+            def _generate_path_cache_key(self, *args):
+                return ("hi",)
+
+            def _generate_cache_key(self, *args):
+                return (("hi",), [])
+
+            _mock = mock.Mock()
+
+            def process_query(self, *args):
+                self._mock.process_query(*args)
+
+            def process_query_conditionally(self, *args):
+                self._mock.process_query_conditionally(*args)
+
+            def process_compile_state(self, *args):
+                self._mock.process_compile_state(*args)
+
+            def orm_execute(self):
+                self._mock.orm_execute()
+
+            @property
+            def mock_calls(self):
+                return self._mock.mock_calls
+
+        mock_opt = MockOpt()
 
         u1 = sess.query(User).options(mock_opt).first()
 
diff --git a/test/orm/test_cache_key.py b/test/orm/test_cache_key.py
index 3ade732472..c02eca8591 100644
--- a/test/orm/test_cache_key.py
+++ b/test/orm/test_cache_key.py
@@ -1,5 +1,9 @@
+import random
+
 from sqlalchemy import inspect
+from sqlalchemy import testing
 from sqlalchemy import text
+from sqlalchemy.future import select
 from sqlalchemy.future import select as future_select
 from sqlalchemy.orm import aliased
 from sqlalchemy.orm import defaultload
@@ -7,15 +11,19 @@ from sqlalchemy.orm import defer
 from sqlalchemy.orm import join as orm_join
 from sqlalchemy.orm import joinedload
 from sqlalchemy.orm import Load
+from sqlalchemy.orm import mapper
+from sqlalchemy.orm import relationship
 from sqlalchemy.orm import selectinload
 from sqlalchemy.orm import Session
 from sqlalchemy.orm import subqueryload
 from sqlalchemy.orm import with_polymorphic
 from sqlalchemy.sql.base import CacheableOptions
 from sqlalchemy.sql.visitors import InternalTraversal
+from sqlalchemy.testing import AssertsCompiledSQL
 from sqlalchemy.testing import eq_
 from test.orm import _fixtures
 from .inheritance import _poly_fixtures
+from .test_query import QueryTest
 from ..sql.test_compare import CacheKeyFixture
@@ -419,3 +427,82 @@ class PolyCacheKeyTest(CacheKeyFixture, _poly_fixtures._Polymorphic):
         self._run_cache_key_fixture(
             lambda: stmt_20(one(), two(), three()),
             compare_values=True,
         )
+
+
+class RoundTripTest(QueryTest, AssertsCompiledSQL):
+    __dialect__ = "default"
+
+    run_setup_mappers = None
+
+    @testing.fixture
+    def plain_fixture(self):
+        users, Address, addresses, User = (
+            self.tables.users,
+            self.classes.Address,
+            self.tables.addresses,
+            self.classes.User,
+        )
+
+        mapper(
+            User,
+            users,
+            properties={
+                "addresses": relationship(Address, back_populates="user")
+            },
+        )
+
+        mapper(
+            Address,
+            addresses,
+            properties={
+                "user": relationship(User, back_populates="addresses")
+            },
+        )
+
+        return User, Address
+
+    def test_subqueryload(self, plain_fixture):
+
+        # subqueryload works pretty poorly w/ caching because it has
+        # to create a new query.  previously, baked query went through a
+        # bunch of hoops to improve upon this and they were found to be
+        # broken anyway.  so subqueryload currently pulls out the original
+        # query as well as the requested query and works with them at row
+        # processing time to create its own query.  all of which is fairly
+        # non-performant compared to the selectinloader that has a fixed
+        # query.
+        User, Address = plain_fixture
+
+        s = Session()
+
+        def query(names):
+            stmt = (
+                select(User)
+                .where(User.name.in_(names))
+                .options(subqueryload(User.addresses))
+                .order_by(User.id)
+            )
+            return s.execute(stmt)
+
+        def go1():
+            r1 = query(["ed"])
+            eq_(
+                r1.scalars().all(),
+                [User(name="ed", addresses=[Address(), Address(), Address()])],
+            )
+
+        def go2():
+            r1 = query(["ed", "fred"])
+            eq_(
+                r1.scalars().all(),
+                [
+                    User(
+                        name="ed", addresses=[Address(), Address(), Address()]
+                    ),
+                    User(name="fred", addresses=[Address()]),
+                ],
+            )
+
+        for i in range(5):
+            fn = random.choice([go1, go2])
+            self.assert_sql_count(testing.db, fn, 2)
diff --git a/test/orm/test_lambdas.py b/test/orm/test_lambdas.py
new file mode 100644
index 0000000000..407f70094e
--- /dev/null
+++ b/test/orm/test_lambdas.py
@@ -0,0 +1,438 @@
+import random
+
+from sqlalchemy import exc
+from sqlalchemy import ForeignKey
+from sqlalchemy import Integer
+from sqlalchemy import lambda_stmt
+from sqlalchemy import String
+from sqlalchemy import testing
+from sqlalchemy import update
+from sqlalchemy.future import select
+from sqlalchemy.orm import mapper
+from sqlalchemy.orm import relationship
+from sqlalchemy.orm import selectinload
+from sqlalchemy.orm import Session
+from sqlalchemy.orm import subqueryload
+from sqlalchemy.testing import assert_raises_message
+from sqlalchemy.testing import AssertsCompiledSQL
+from sqlalchemy.testing import eq_
+from sqlalchemy.testing import fixtures
+from sqlalchemy.testing.schema import Column
+from sqlalchemy.testing.schema import Table
+from .inheritance import _poly_fixtures
+from .test_query import QueryTest
+
+
+class LambdaTest(QueryTest, AssertsCompiledSQL):
+    __dialect__ = "default"
+
+    # we want to test the lambda expiration logic so use backend
+    # to exercise that
+
+    __backend__ = True
+    run_setup_mappers = None
+
+    @testing.fixture
+    def plain_fixture(self):
+        users, Address, addresses, User = (
+            self.tables.users,
+            self.classes.Address,
+            self.tables.addresses,
+            self.classes.User,
+        )
+
+        mapper(
+            User,
+            users,
+            properties={
+                "addresses": relationship(Address, back_populates="user")
+            },
+        )
+
+        mapper(
+            Address,
+            addresses,
+            properties={
+                "user": relationship(User, back_populates="addresses")
+            },
+        )
+
+        return User, Address
+
+    def test_user_cols_single_lambda(self, plain_fixture):
+        User, Address = plain_fixture
+
+        q = select(lambda: (User.id, User.name)).select_from(lambda: User)
+
+        self.assert_compile(q, "SELECT users.id, users.name FROM users")
+
+    def test_user_cols_single_lambda_query(self, plain_fixture):
+        User, Address = plain_fixture
+
+        s = Session()
+        q = s.query(lambda: (User.id, User.name)).select_from(lambda: User)
+
+        self.assert_compile(
+            q,
+            "SELECT users.id AS users_id, users.name AS users_name FROM users",
+        )
+
+    def test_multiple_entities_single_lambda(self, plain_fixture):
+        User, Address = plain_fixture
+
+        q = select(lambda: (User, Address)).join(lambda: User.addresses)
+
+        self.assert_compile(
+            q,
+            "SELECT users.id, users.name, addresses.id AS id_1, "
+            "addresses.user_id, addresses.email_address "
+            "FROM users JOIN addresses ON users.id = addresses.user_id",
+        )
+
+    def test_cols_round_trip(self, plain_fixture):
+        User, Address = plain_fixture
+
+        s = Session()
+
+        # note this does a traversal + _clone of the InstrumentedAttribute
+        # for the first time ever
+        def query(names):
+            stmt = lambda_stmt(
+                lambda: select(User.name, Address.email_address)
+                .where(User.name.in_(names))
+                .join(User.addresses)
+            ) + (lambda s: s.order_by(User.id, Address.id))
+
+            return s.execute(stmt)
+
+        def go1():
+            r1 = query(["ed"])
+            eq_(
+                r1.all(),
+                [
+                    ("ed", "ed@wood.com"),
+                    ("ed", "ed@bettyboop.com"),
+                    ("ed", "ed@lala.com"),
+                ],
+            )
+
+        def go2():
+            r1 = query(["ed", "fred"])
+            eq_(
+                r1.all(),
+                [
+                    ("ed", "ed@wood.com"),
+                    ("ed", "ed@bettyboop.com"),
+                    ("ed", "ed@lala.com"),
+                    ("fred", "fred@fred.com"),
+                ],
+            )
+
+        for i in range(5):
+            fn = random.choice([go1, go2])
+            fn()
+
+    def test_entity_round_trip(self, plain_fixture):
+        User, Address = plain_fixture
+
+        s = Session()
+
+        def query(names):
+            stmt = lambda_stmt(
+                lambda: select(User)
+                .where(User.name.in_(names))
+                .options(selectinload(User.addresses))
+            ) + (lambda s: s.order_by(User.id))
+
+            return s.execute(stmt)
+
+        def go1():
+            r1 = query(["ed"])
+            eq_(
+                r1.scalars().all(),
+                [User(name="ed", addresses=[Address(), Address(), Address()])],
+            )
+
+        def go2():
+            r1 = query(["ed", "fred"])
+            eq_(
+                r1.scalars().all(),
+                [
+                    User(
+                        name="ed", addresses=[Address(), Address(), Address()]
+                    ),
+                    User(name="fred", addresses=[Address()]),
+                ],
+            )
+
+        for i in range(5):
+            fn = random.choice([go1, go2])
+            self.assert_sql_count(testing.db, fn, 2)
+
+    def test_lambdas_rejected_in_options(self, plain_fixture):
+        User, Address = plain_fixture
+
+        assert_raises_message(
+            exc.ArgumentError,
+            "Cacheable Core or ORM object expected, got",
+            select(lambda: User).options,
+            lambda: subqueryload(User.addresses),
+        )
+
+    def test_subqueryload_internal_lambda(self, plain_fixture):
+        User, Address = plain_fixture
+
+        s = Session()
+
+        def query(names):
+            stmt = (
+                select(lambda: User)
+                .where(lambda: User.name.in_(names))
+                .options(subqueryload(User.addresses))
+                .order_by(lambda: User.id)
+            )
+
+            return s.execute(stmt)
+
+        def go1():
+            r1 = query(["ed"])
+            eq_(
+                r1.scalars().all(),
+                [User(name="ed", addresses=[Address(), Address(), Address()])],
+            )
+
+        def go2():
+            r1 = query(["ed", "fred"])
+            eq_(
+                r1.scalars().all(),
+                [
+                    User(
+                        name="ed", addresses=[Address(), Address(), Address()]
+                    ),
+                    User(name="fred", addresses=[Address()]),
+                ],
+            )
+
+        for i in range(5):
+            fn = random.choice([go1, go2])
+            self.assert_sql_count(testing.db, fn, 2)
+
+    def test_subqueryload_external_lambda_caveats(self, plain_fixture):
+        User, Address = plain_fixture
+
+        s = Session()
+
+        def query(names):
+            stmt = lambda_stmt(
+                lambda: select(User)
+                .where(User.name.in_(names))
+                .options(subqueryload(User.addresses))
+            ) + (lambda s: s.order_by(User.id))
+
+            return s.execute(stmt)
+
+        def go1():
+            r1 = query(["ed"])
+            eq_(
+                r1.scalars().all(),
+                [User(name="ed", addresses=[Address(), Address(), Address()])],
+            )
+
+        def go2():
+            r1 = query(["ed", "fred"])
+            eq_(
+                r1.scalars().all(),
+                [
+                    User(
+                        name="ed", addresses=[Address(), Address(), Address()]
+                    ),
+                    User(name="fred", addresses=[Address()]),
+                ],
+            )
+
+        for i in range(5):
+            fn = random.choice([go1, go2])
+            with testing.expect_warnings(
+                'subqueryloader for "User.addresses" must invoke lambda '
+                r"callable at .*LambdaElement\( "
+                r".*test_lambdas.py.* in order to produce a new query, "
+                r"decreasing the efficiency of caching"
+            ):
+                self.assert_sql_count(testing.db, fn, 2)
+
+    def test_does_filter_aliasing_work(self, plain_fixture):
+        User, Address = plain_fixture
+
+        s = Session()
+
+        # aliased=True is to be deprecated, other filter lambdas
+        # that go into effect include polymorphic filtering.
+        q = (
+            s.query(lambda: User)
+            .join(lambda: User.addresses, aliased=True)
+            .filter(lambda: Address.email_address == "foo")
+        )
+        self.assert_compile(
+            q,
+            "SELECT users.id AS users_id, users.name AS users_name "
+            "FROM users JOIN addresses AS addresses_1 "
+            "ON users.id = addresses_1.user_id "
+            "WHERE addresses_1.email_address = :email_address_1",
+        )
+
+    @testing.combinations(
+        lambda s, User, Address: s.query(lambda: User).join(lambda: Address),
+        lambda s, User, Address: s.query(lambda: User).join(
+            lambda: User.addresses
+        ),
+        lambda s, User, Address: s.query(lambda: User).join(
+            lambda: Address, lambda: User.addresses
+        ),
+        lambda s, User, Address: s.query(lambda: User).join(
+            Address, lambda: User.addresses
+        ),
+        lambda s, User, Address: s.query(lambda: User).join(
+            lambda: Address, User.addresses
+        ),
+        lambda User, Address: select(lambda: User)
+        .join(lambda: Address)
+        .apply_labels(),
+        lambda User, Address: select(lambda: User)
+        .join(lambda: User.addresses)
+        .apply_labels(),
+        lambda User, Address: select(lambda: User)
+        .join(lambda: Address, lambda: User.addresses)
+        .apply_labels(),
+        lambda User, Address: select(lambda: User)
+        .join(Address, lambda: User.addresses)
+        .apply_labels(),
+        lambda User, Address: select(lambda: User)
+        .join(lambda: Address, User.addresses)
+        .apply_labels(),
+        argnames="test_case",
+    )
+    def test_join_entity_arg(self, plain_fixture, test_case):
+        User, Address = plain_fixture
+
+        s = Session()
+
+        stmt = testing.resolve_lambda(test_case, **locals())
+        self.assert_compile(
+            stmt,
+            "SELECT users.id AS users_id, users.name AS users_name "
+            "FROM users JOIN addresses ON users.id = addresses.user_id",
+        )
+
+
+class PolymorphicTest(_poly_fixtures._Polymorphic):
+    run_setup_mappers = "once"
+    __dialect__ = "default"
+
+    def test_join_second_prop_lambda(self):
+        Company = self.classes.Company
+        Manager = self.classes.Manager
+
+        s = Session()
+
+        q = s.query(Company).join(lambda: Manager, lambda: Company.employees)
+
+        self.assert_compile(
+            q,
+            "SELECT companies.company_id AS companies_company_id, "
+            "companies.name AS companies_name FROM companies "
+            "JOIN (people JOIN managers ON people.person_id = "
+            "managers.person_id) ON companies.company_id = people.company_id",
+        )
+
+
+class UpdateDeleteTest(fixtures.MappedTest):
+    __backend__ = True
+
+    run_setup_mappers = "once"
+
+    @classmethod
+    def define_tables(cls, metadata):
+        Table(
+            "users",
+            metadata,
+            Column(
+                "id", Integer, primary_key=True, test_needs_autoincrement=True
+            ),
+            Column("name", String(32)),
+            Column("age_int", Integer),
+        )
+        Table(
+            "addresses",
+            metadata,
+            Column("id", Integer, primary_key=True),
+            Column("user_id", ForeignKey("users.id")),
+        )
+
+    @classmethod
+    def setup_classes(cls):
+        class User(cls.Comparable):
+            pass
+
+        class Address(cls.Comparable):
+            pass
+
+    @classmethod
+    def insert_data(cls, connection):
+        users = cls.tables.users
+
+        connection.execute(
+            users.insert(),
+            [
+                dict(id=1, name="john", age_int=25),
+                dict(id=2, name="jack", age_int=47),
+                dict(id=3, name="jill", age_int=29),
+                dict(id=4, name="jane", age_int=37),
+            ],
+        )
+
+    @classmethod
+    def setup_mappers(cls):
+        User = cls.classes.User
+        users = cls.tables.users
+
+        Address = cls.classes.Address
+        addresses = cls.tables.addresses
+
+        mapper(
+            User,
+            users,
+            properties={
+                "age": users.c.age_int,
+                "addresses": relationship(Address),
+            },
+        )
+        mapper(Address, addresses)
+
+    def test_update(self):
+        User, Address = self.classes("User", "Address")
+
+        s = Session()
+
+        def go(ids, values):
+            stmt = lambda_stmt(lambda: update(User).where(User.id.in_(ids)))
+            s.execute(
+                stmt,
+                values,
+                # note this currently just unrolls the lambda on the
+                # statement.  so lambda caching for updates is not actually
+                # that useful unless synchronize_session is turned off.
+                # evaluate is similar just doesn't work for IN yet.
+                execution_options={"synchronize_session": "fetch"},
+            )
+
+        go([1, 2], {"name": "jack2"})
+        eq_(
+            s.execute(select(User.id, User.name).order_by(User.id)).all(),
+            [(1, "jack2"), (2, "jack2"), (3, "jill"), (4, "jane")],
+        )
+
+        go([3], {"name": "jane2"})
+        eq_(
+            s.execute(select(User.id, User.name).order_by(User.id)).all(),
+            [(1, "jack2"), (2, "jack2"), (3, "jane2"), (4, "jane")],
+        )
diff --git a/test/profiles.txt b/test/profiles.txt
index 79097197f0..1903d35275 100644
--- a/test/profiles.txt
+++ b/test/profiles.txt
@@ -1,30 +1,30 @@
 # /home/classic/dev/sqlalchemy/test/profiles.txt
 # This file is written out on a per-environment basis.
-# For each test in aaa_profiling, the corresponding function and 
+# For each test in aaa_profiling, the corresponding function and
 # environment is located within this file.  If it doesn't exist,
 # the test is skipped.
-# If a callcount does exist, it is compared to what we received. 
+# If a callcount does exist, it is compared to what we received.
 # assertions are raised if the counts do not match.
-# 
-# To add a new callcount test, apply the function_call_count 
-# decorator and re-run the tests using the --write-profiles 
+#
+# To add a new callcount test, apply the function_call_count
+# decorator and re-run the tests using the --write-profiles
 # option - this file will be rewritten including the new count.
-# +# # TEST: test.aaa_profiling.test_compiler.CompileTest.test_insert -test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mssql_pyodbc_dbapiunicode_cextensions 62 -test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mssql_pyodbc_dbapiunicode_nocextensions 62 -test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_mysqldb_dbapiunicode_cextensions 62 -test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_mysqldb_dbapiunicode_nocextensions 62 -test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_pymysql_dbapiunicode_cextensions 62 -test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_pymysql_dbapiunicode_nocextensions 62 +test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mssql_pyodbc_dbapiunicode_cextensions 63 +test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mssql_pyodbc_dbapiunicode_nocextensions 63 +test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_mysqldb_dbapiunicode_cextensions 63 +test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_mysqldb_dbapiunicode_nocextensions 63 +test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_pymysql_dbapiunicode_cextensions 63 +test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_mysql_pymysql_dbapiunicode_nocextensions 63 test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_oracle_cx_oracle_dbapiunicode_cextensions 62 test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 62 -test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_postgresql_psycopg2_dbapiunicode_cextensions 62 -test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 62 -test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_sqlite_pysqlite_dbapiunicode_cextensions 62 -test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 62 
+test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_postgresql_psycopg2_dbapiunicode_cextensions 63 +test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 63 +test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_sqlite_pysqlite_dbapiunicode_cextensions 63 +test.aaa_profiling.test_compiler.CompileTest.test_insert 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 63 test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mssql_pyodbc_dbapiunicode_cextensions 67 test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mssql_pyodbc_dbapiunicode_nocextensions 67 test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_mysql_mysqldb_dbapiunicode_cextensions 67 @@ -40,18 +40,18 @@ test.aaa_profiling.test_compiler.CompileTest.test_insert 3.8_sqlite_pysqlite_dba # TEST: test.aaa_profiling.test_compiler.CompileTest.test_select -test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mssql_pyodbc_dbapiunicode_cextensions 152 -test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mssql_pyodbc_dbapiunicode_nocextensions 152 -test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_mysqldb_dbapiunicode_cextensions 152 -test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_mysqldb_dbapiunicode_nocextensions 152 -test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_pymysql_dbapiunicode_cextensions 152 -test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_pymysql_dbapiunicode_nocextensions 152 +test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mssql_pyodbc_dbapiunicode_cextensions 154 +test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mssql_pyodbc_dbapiunicode_nocextensions 154 +test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_mysqldb_dbapiunicode_cextensions 154 +test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_mysqldb_dbapiunicode_nocextensions 154 
+test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_pymysql_dbapiunicode_cextensions 154 +test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_mysql_pymysql_dbapiunicode_nocextensions 154 test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_oracle_cx_oracle_dbapiunicode_cextensions 152 test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 152 -test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_postgresql_psycopg2_dbapiunicode_cextensions 152 -test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 152 -test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_sqlite_pysqlite_dbapiunicode_cextensions 152 -test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 152 +test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_postgresql_psycopg2_dbapiunicode_cextensions 154 +test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 154 +test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_sqlite_pysqlite_dbapiunicode_cextensions 154 +test.aaa_profiling.test_compiler.CompileTest.test_select 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 154 test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mssql_pyodbc_dbapiunicode_cextensions 167 test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mssql_pyodbc_dbapiunicode_nocextensions 167 test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_mysql_mysqldb_dbapiunicode_cextensions 167 @@ -67,18 +67,18 @@ test.aaa_profiling.test_compiler.CompileTest.test_select 3.8_sqlite_pysqlite_dba # TEST: test.aaa_profiling.test_compiler.CompileTest.test_select_labels -test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mssql_pyodbc_dbapiunicode_cextensions 170 -test.aaa_profiling.test_compiler.CompileTest.test_select_labels 
2.7_mssql_pyodbc_dbapiunicode_nocextensions 170 -test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_mysqldb_dbapiunicode_cextensions 170 -test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_mysqldb_dbapiunicode_nocextensions 170 -test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_pymysql_dbapiunicode_cextensions 170 -test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_pymysql_dbapiunicode_nocextensions 170 +test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mssql_pyodbc_dbapiunicode_cextensions 171 +test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mssql_pyodbc_dbapiunicode_nocextensions 171 +test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_mysqldb_dbapiunicode_cextensions 171 +test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_mysqldb_dbapiunicode_nocextensions 171 +test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_pymysql_dbapiunicode_cextensions 171 +test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_mysql_pymysql_dbapiunicode_nocextensions 171 test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_oracle_cx_oracle_dbapiunicode_cextensions 170 test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 170 -test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_postgresql_psycopg2_dbapiunicode_cextensions 170 -test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 170 -test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_sqlite_pysqlite_dbapiunicode_cextensions 170 -test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 170 +test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_postgresql_psycopg2_dbapiunicode_cextensions 171 
+test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 171 +test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_sqlite_pysqlite_dbapiunicode_cextensions 171 +test.aaa_profiling.test_compiler.CompileTest.test_select_labels 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 171 test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mssql_pyodbc_dbapiunicode_cextensions 185 test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mssql_pyodbc_dbapiunicode_nocextensions 185 test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_mysql_mysqldb_dbapiunicode_cextensions 185 @@ -94,18 +94,18 @@ test.aaa_profiling.test_compiler.CompileTest.test_select_labels 3.8_sqlite_pysql # TEST: test.aaa_profiling.test_compiler.CompileTest.test_update -test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mssql_pyodbc_dbapiunicode_cextensions 67 -test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mssql_pyodbc_dbapiunicode_nocextensions 67 -test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_mysqldb_dbapiunicode_cextensions 67 -test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_mysqldb_dbapiunicode_nocextensions 67 -test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_pymysql_dbapiunicode_cextensions 67 -test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_pymysql_dbapiunicode_nocextensions 67 +test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mssql_pyodbc_dbapiunicode_cextensions 68 +test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mssql_pyodbc_dbapiunicode_nocextensions 68 +test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_mysqldb_dbapiunicode_cextensions 68 +test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_mysqldb_dbapiunicode_nocextensions 68 +test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_pymysql_dbapiunicode_cextensions 68 
+test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_mysql_pymysql_dbapiunicode_nocextensions 68 test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_oracle_cx_oracle_dbapiunicode_cextensions 67 test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 67 -test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_postgresql_psycopg2_dbapiunicode_cextensions 67 -test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 67 -test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_sqlite_pysqlite_dbapiunicode_cextensions 67 -test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 67 +test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_postgresql_psycopg2_dbapiunicode_cextensions 68 +test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 68 +test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_sqlite_pysqlite_dbapiunicode_cextensions 68 +test.aaa_profiling.test_compiler.CompileTest.test_update 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 68 test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mssql_pyodbc_dbapiunicode_cextensions 70 test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mssql_pyodbc_dbapiunicode_nocextensions 70 test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_mysql_mysqldb_dbapiunicode_cextensions 70 @@ -121,18 +121,18 @@ test.aaa_profiling.test_compiler.CompileTest.test_update 3.8_sqlite_pysqlite_dba # TEST: test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause -test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mssql_pyodbc_dbapiunicode_cextensions 150 -test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mssql_pyodbc_dbapiunicode_nocextensions 150 -test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 
2.7_mysql_mysqldb_dbapiunicode_cextensions 150 -test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_mysqldb_dbapiunicode_nocextensions 150 -test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_pymysql_dbapiunicode_cextensions 150 -test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_pymysql_dbapiunicode_nocextensions 150 +test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mssql_pyodbc_dbapiunicode_cextensions 151 +test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mssql_pyodbc_dbapiunicode_nocextensions 151 +test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_mysqldb_dbapiunicode_cextensions 151 +test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_mysqldb_dbapiunicode_nocextensions 151 +test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_pymysql_dbapiunicode_cextensions 151 +test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_mysql_pymysql_dbapiunicode_nocextensions 151 test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_oracle_cx_oracle_dbapiunicode_cextensions 150 test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 150 -test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_postgresql_psycopg2_dbapiunicode_cextensions 150 -test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 150 -test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_sqlite_pysqlite_dbapiunicode_cextensions 150 -test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 150 +test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_postgresql_psycopg2_dbapiunicode_cextensions 151 
+test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 151 +test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_sqlite_pysqlite_dbapiunicode_cextensions 151 +test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 151 test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mssql_pyodbc_dbapiunicode_cextensions 156 test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mssql_pyodbc_dbapiunicode_nocextensions 156 test.aaa_profiling.test_compiler.CompileTest.test_update_whereclause 3.8_mysql_mysqldb_dbapiunicode_cextensions 156 @@ -153,8 +153,8 @@ test.aaa_profiling.test_misc.CacheKeyTest.test_statement_key_is_cached 3.8_sqlit # TEST: test.aaa_profiling.test_misc.CacheKeyTest.test_statement_key_is_not_cached -test.aaa_profiling.test_misc.CacheKeyTest.test_statement_key_is_not_cached 3.8_sqlite_pysqlite_dbapiunicode_cextensions 4303 -test.aaa_profiling.test_misc.CacheKeyTest.test_statement_key_is_not_cached 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 4303 +test.aaa_profiling.test_misc.CacheKeyTest.test_statement_key_is_not_cached 3.8_sqlite_pysqlite_dbapiunicode_cextensions 5425 +test.aaa_profiling.test_misc.CacheKeyTest.test_statement_key_is_not_cached 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 5425 # TEST: test.aaa_profiling.test_misc.EnumTest.test_create_enum_from_pep_435_w_expensive_members @@ -165,66 +165,66 @@ test.aaa_profiling.test_misc.EnumTest.test_create_enum_from_pep_435_w_expensive_ # TEST: test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_w_annotation -test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_w_annotation 2.7_sqlite_pysqlite_dbapiunicode_cextensions 46005 -test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_w_annotation 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 56805 
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_w_annotation 3.8_sqlite_pysqlite_dbapiunicode_cextensions 49505 -test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_w_annotation 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 61105 +test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_w_annotation 2.7_sqlite_pysqlite_dbapiunicode_cextensions 45105 +test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_w_annotation 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 55905 +test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_w_annotation 3.8_sqlite_pysqlite_dbapiunicode_cextensions 49105 +test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_w_annotation 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 60705 # TEST: test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_wo_annotation -test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_wo_annotation 2.7_sqlite_pysqlite_dbapiunicode_cextensions 44905 -test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_wo_annotation 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 55705 -test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_wo_annotation 3.8_sqlite_pysqlite_dbapiunicode_cextensions 48405 -test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_wo_annotation 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 60005 +test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_wo_annotation 2.7_sqlite_pysqlite_dbapiunicode_cextensions 44005 +test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_wo_annotation 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 54805 +test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_wo_annotation 3.8_sqlite_pysqlite_dbapiunicode_cextensions 48005 +test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_bundle_wo_annotation 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 59605 # TEST: test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_w_annotations 
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_w_annotations 2.7_sqlite_pysqlite_dbapiunicode_cextensions 44005
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_w_annotations 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 52305
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_w_annotations 3.8_sqlite_pysqlite_dbapiunicode_cextensions 46805
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_w_annotations 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 55905
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_w_annotations 2.7_sqlite_pysqlite_dbapiunicode_cextensions 43105
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_w_annotations 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 51405
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_w_annotations 3.8_sqlite_pysqlite_dbapiunicode_cextensions 46405
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_w_annotations 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 55505

# TEST: test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_wo_annotations

-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_wo_annotations 2.7_sqlite_pysqlite_dbapiunicode_cextensions 43205
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_wo_annotations 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 51505
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_wo_annotations 3.8_sqlite_pysqlite_dbapiunicode_cextensions 46005
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_wo_annotations 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 55105
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_wo_annotations 2.7_sqlite_pysqlite_dbapiunicode_cextensions 42305
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_wo_annotations 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 50605
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_wo_annotations 3.8_sqlite_pysqlite_dbapiunicode_cextensions 45605
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_entity_wo_annotations 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 54705

# TEST: test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle

-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle 2.7_sqlite_pysqlite_dbapiunicode_cextensions 42605
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 45905
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle 3.8_sqlite_pysqlite_dbapiunicode_cextensions 44805
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 48905
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle 2.7_sqlite_pysqlite_dbapiunicode_cextensions 41705
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 45005
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle 3.8_sqlite_pysqlite_dbapiunicode_cextensions 44405
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 48505

# TEST: test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_w_annotations

-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_w_annotations 2.7_sqlite_pysqlite_dbapiunicode_cextensions 44005
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_w_annotations 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 52305
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_w_annotations 3.8_sqlite_pysqlite_dbapiunicode_cextensions 46805
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_w_annotations 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 55905
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_w_annotations 2.7_sqlite_pysqlite_dbapiunicode_cextensions 43105
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_w_annotations 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 51405
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_w_annotations 3.8_sqlite_pysqlite_dbapiunicode_cextensions 46405
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_w_annotations 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 55505

# TEST: test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_wo_annotations

-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_wo_annotations 2.7_sqlite_pysqlite_dbapiunicode_cextensions 43205
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_wo_annotations 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 51505
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_wo_annotations 3.8_sqlite_pysqlite_dbapiunicode_cextensions 46005
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_wo_annotations 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 55105
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_wo_annotations 2.7_sqlite_pysqlite_dbapiunicode_cextensions 42305
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_wo_annotations 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 50605
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_wo_annotations 3.8_sqlite_pysqlite_dbapiunicode_cextensions 45605
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_bundle_wo_annotations 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 54705

# TEST: test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_w_annotations

-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_w_annotations 2.7_sqlite_pysqlite_dbapiunicode_cextensions 28105
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_w_annotations 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 30305
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_w_annotations 3.8_sqlite_pysqlite_dbapiunicode_cextensions 30505
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_w_annotations 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 32905
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_w_annotations 2.7_sqlite_pysqlite_dbapiunicode_cextensions 27205
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_w_annotations 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 29405
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_w_annotations 3.8_sqlite_pysqlite_dbapiunicode_cextensions 30105
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_w_annotations 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 32505

# TEST: test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_wo_annotations

-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_wo_annotations 2.7_sqlite_pysqlite_dbapiunicode_cextensions 27305
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_wo_annotations 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 29505
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_wo_annotations 3.8_sqlite_pysqlite_dbapiunicode_cextensions 29705
-test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_wo_annotations 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 32105
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_wo_annotations 2.7_sqlite_pysqlite_dbapiunicode_cextensions 26405
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_wo_annotations 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 28605
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_wo_annotations 3.8_sqlite_pysqlite_dbapiunicode_cextensions 29305
+test.aaa_profiling.test_orm.AnnotatedOverheadTest.test_no_entity_wo_annotations 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 31705

# TEST: test.aaa_profiling.test_orm.AttributeOverheadTest.test_attribute_set
@@ -242,73 +242,73 @@ test.aaa_profiling.test_orm.AttributeOverheadTest.test_collection_append_remove

# TEST: test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_key_bound_branching

-test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_key_bound_branching 2.7_sqlite_pysqlite_dbapiunicode_cextensions 61
-test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_key_bound_branching 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 61
-test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_key_bound_branching 3.8_sqlite_pysqlite_dbapiunicode_cextensions 74
-test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_key_bound_branching 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 74
+test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_key_bound_branching 2.7_sqlite_pysqlite_dbapiunicode_cextensions 60
+test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_key_bound_branching 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 60
+test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_key_bound_branching 3.8_sqlite_pysqlite_dbapiunicode_cextensions 73
+test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_key_bound_branching 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 73

# TEST: test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_unbound_branching

-test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_unbound_branching 2.7_sqlite_pysqlite_dbapiunicode_cextensions 409
-test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_unbound_branching 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 409
-test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_unbound_branching 3.8_sqlite_pysqlite_dbapiunicode_cextensions 415
-test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_unbound_branching 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 415
+test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_unbound_branching 2.7_sqlite_pysqlite_dbapiunicode_cextensions 408
+test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_unbound_branching 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 408
+test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_unbound_branching 3.8_sqlite_pysqlite_dbapiunicode_cextensions 414
+test.aaa_profiling.test_orm.BranchedOptionTest.test_query_opts_unbound_branching 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 414

# TEST: test.aaa_profiling.test_orm.DeferOptionsTest.test_baseline

-test.aaa_profiling.test_orm.DeferOptionsTest.test_baseline 2.7_sqlite_pysqlite_dbapiunicode_cextensions 15156
-test.aaa_profiling.test_orm.DeferOptionsTest.test_baseline 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 26165
-test.aaa_profiling.test_orm.DeferOptionsTest.test_baseline 3.8_sqlite_pysqlite_dbapiunicode_cextensions 15189
-test.aaa_profiling.test_orm.DeferOptionsTest.test_baseline 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 27201
+test.aaa_profiling.test_orm.DeferOptionsTest.test_baseline 2.7_sqlite_pysqlite_dbapiunicode_cextensions 15150
+test.aaa_profiling.test_orm.DeferOptionsTest.test_baseline 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 26159
+test.aaa_profiling.test_orm.DeferOptionsTest.test_baseline 3.8_sqlite_pysqlite_dbapiunicode_cextensions 15188
+test.aaa_profiling.test_orm.DeferOptionsTest.test_baseline 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 27200

# TEST: test.aaa_profiling.test_orm.DeferOptionsTest.test_defer_many_cols

-test.aaa_profiling.test_orm.DeferOptionsTest.test_defer_many_cols 2.7_sqlite_pysqlite_dbapiunicode_cextensions 21313
-test.aaa_profiling.test_orm.DeferOptionsTest.test_defer_many_cols 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 26322
-test.aaa_profiling.test_orm.DeferOptionsTest.test_defer_many_cols 3.8_sqlite_pysqlite_dbapiunicode_cextensions 21353
-test.aaa_profiling.test_orm.DeferOptionsTest.test_defer_many_cols 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 27365
+test.aaa_profiling.test_orm.DeferOptionsTest.test_defer_many_cols 2.7_sqlite_pysqlite_dbapiunicode_cextensions 21294
+test.aaa_profiling.test_orm.DeferOptionsTest.test_defer_many_cols 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 26303
+test.aaa_profiling.test_orm.DeferOptionsTest.test_defer_many_cols 3.8_sqlite_pysqlite_dbapiunicode_cextensions 21339
+test.aaa_profiling.test_orm.DeferOptionsTest.test_defer_many_cols 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 27351

# TEST: test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_aliased

-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_aliased 2.7_sqlite_pysqlite_dbapiunicode_cextensions 9503
-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_aliased 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 9653
-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_aliased 3.8_sqlite_pysqlite_dbapiunicode_cextensions 9954
-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_aliased 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 10104
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_aliased 2.7_sqlite_pysqlite_dbapiunicode_cextensions 9703
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_aliased 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 9853
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_aliased 3.8_sqlite_pysqlite_dbapiunicode_cextensions 10154
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_aliased 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 10304

# TEST: test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_plain

-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_plain 2.7_sqlite_pysqlite_dbapiunicode_cextensions 3703
-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_plain 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 3853
-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_plain 3.8_sqlite_pysqlite_dbapiunicode_cextensions 3704
-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_plain 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 3854
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_plain 2.7_sqlite_pysqlite_dbapiunicode_cextensions 3953
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_plain 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 4103
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_plain 3.8_sqlite_pysqlite_dbapiunicode_cextensions 3954
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_b_plain 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 4104

# TEST: test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d

-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d 2.7_sqlite_pysqlite_dbapiunicode_cextensions 93188
-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 93338
-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d 3.8_sqlite_pysqlite_dbapiunicode_cextensions 100804
-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 100954
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d 2.7_sqlite_pysqlite_dbapiunicode_cextensions 93738
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 94088
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d 3.8_sqlite_pysqlite_dbapiunicode_cextensions 101554
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 101704

# TEST: test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d_aliased

-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d_aliased 2.7_sqlite_pysqlite_dbapiunicode_cextensions 91288
-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d_aliased 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 91438
-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d_aliased 3.8_sqlite_pysqlite_dbapiunicode_cextensions 99219
-test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d_aliased 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 99369
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d_aliased 2.7_sqlite_pysqlite_dbapiunicode_cextensions 91788
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d_aliased 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 92138
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d_aliased 3.8_sqlite_pysqlite_dbapiunicode_cextensions 99919
+test.aaa_profiling.test_orm.JoinConditionTest.test_a_to_d_aliased 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 100069

# TEST: test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_build_query

-test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_build_query 2.7_sqlite_pysqlite_dbapiunicode_cextensions 434604
-test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_build_query 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 436461
-test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_build_query 3.8_sqlite_pysqlite_dbapiunicode_cextensions 465386
-test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_build_query 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 467228
+test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_build_query 2.7_sqlite_pysqlite_dbapiunicode_cextensions 434915
+test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_build_query 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 436762
+test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_build_query 3.8_sqlite_pysqlite_dbapiunicode_cextensions 465676
+test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_build_query 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 467518

# TEST: test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_fetch_results

-test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_fetch_results 2.7_sqlite_pysqlite_dbapiunicode_cextensions 389400
-test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_fetch_results 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 405407
-test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_fetch_results 3.8_sqlite_pysqlite_dbapiunicode_cextensions 395813
-test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_fetch_results 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 412127
+test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_fetch_results 2.7_sqlite_pysqlite_dbapiunicode_cextensions 391100
+test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_fetch_results 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 405907
+test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_fetch_results 3.8_sqlite_pysqlite_dbapiunicode_cextensions 395113
+test.aaa_profiling.test_orm.JoinedEagerLoadTest.test_fetch_results 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 412627

# TEST: test.aaa_profiling.test_orm.LoadManyToOneFromIdentityTest.test_many_to_one_load_identity
@@ -319,24 +319,24 @@ test.aaa_profiling.test_orm.LoadManyToOneFromIdentityTest.test_many_to_one_load_

# TEST: test.aaa_profiling.test_orm.LoadManyToOneFromIdentityTest.test_many_to_one_load_no_identity

-test.aaa_profiling.test_orm.LoadManyToOneFromIdentityTest.test_many_to_one_load_no_identity 2.7_sqlite_pysqlite_dbapiunicode_cextensions 79757
-test.aaa_profiling.test_orm.LoadManyToOneFromIdentityTest.test_many_to_one_load_no_identity 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 81819
-test.aaa_profiling.test_orm.LoadManyToOneFromIdentityTest.test_many_to_one_load_no_identity 3.8_sqlite_pysqlite_dbapiunicode_cextensions 81093
-test.aaa_profiling.test_orm.LoadManyToOneFromIdentityTest.test_many_to_one_load_no_identity 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 83860
+test.aaa_profiling.test_orm.LoadManyToOneFromIdentityTest.test_many_to_one_load_no_identity 2.7_sqlite_pysqlite_dbapiunicode_cextensions 77560
+test.aaa_profiling.test_orm.LoadManyToOneFromIdentityTest.test_many_to_one_load_no_identity 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 79780
+test.aaa_profiling.test_orm.LoadManyToOneFromIdentityTest.test_many_to_one_load_no_identity 3.8_sqlite_pysqlite_dbapiunicode_cextensions 80098
+test.aaa_profiling.test_orm.LoadManyToOneFromIdentityTest.test_many_to_one_load_no_identity 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 83365

# TEST: test.aaa_profiling.test_orm.MergeBackrefsTest.test_merge_pending_with_all_pks

-test.aaa_profiling.test_orm.MergeBackrefsTest.test_merge_pending_with_all_pks 2.7_sqlite_pysqlite_dbapiunicode_cextensions 18871
-test.aaa_profiling.test_orm.MergeBackrefsTest.test_merge_pending_with_all_pks 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 19225
-test.aaa_profiling.test_orm.MergeBackrefsTest.test_merge_pending_with_all_pks 3.8_sqlite_pysqlite_dbapiunicode_cextensions 19702
-test.aaa_profiling.test_orm.MergeBackrefsTest.test_merge_pending_with_all_pks 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 20198
+test.aaa_profiling.test_orm.MergeBackrefsTest.test_merge_pending_with_all_pks 2.7_sqlite_pysqlite_dbapiunicode_cextensions 18701
+test.aaa_profiling.test_orm.MergeBackrefsTest.test_merge_pending_with_all_pks 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 19147
+test.aaa_profiling.test_orm.MergeBackrefsTest.test_merge_pending_with_all_pks 3.8_sqlite_pysqlite_dbapiunicode_cextensions 19670
+test.aaa_profiling.test_orm.MergeBackrefsTest.test_merge_pending_with_all_pks 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 20154

# TEST: test.aaa_profiling.test_orm.MergeTest.test_merge_load

-test.aaa_profiling.test_orm.MergeTest.test_merge_load 2.7_sqlite_pysqlite_dbapiunicode_cextensions 1054
-test.aaa_profiling.test_orm.MergeTest.test_merge_load 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 1081
-test.aaa_profiling.test_orm.MergeTest.test_merge_load 3.8_sqlite_pysqlite_dbapiunicode_cextensions 1088
-test.aaa_profiling.test_orm.MergeTest.test_merge_load 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 1122
+test.aaa_profiling.test_orm.MergeTest.test_merge_load 2.7_sqlite_pysqlite_dbapiunicode_cextensions 1035
+test.aaa_profiling.test_orm.MergeTest.test_merge_load 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 1062
+test.aaa_profiling.test_orm.MergeTest.test_merge_load 3.8_sqlite_pysqlite_dbapiunicode_cextensions 1079
+test.aaa_profiling.test_orm.MergeTest.test_merge_load 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 1113

# TEST: test.aaa_profiling.test_orm.MergeTest.test_merge_no_load
@@ -347,10 +347,10 @@ test.aaa_profiling.test_orm.MergeTest.test_merge_no_load 3.8_sqlite_pysqlite_dba

# TEST: test.aaa_profiling.test_orm.QueryTest.test_query_cols

-test.aaa_profiling.test_orm.QueryTest.test_query_cols 2.7_sqlite_pysqlite_dbapiunicode_cextensions 5355
-test.aaa_profiling.test_orm.QueryTest.test_query_cols 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 6075
-test.aaa_profiling.test_orm.QueryTest.test_query_cols 3.8_sqlite_pysqlite_dbapiunicode_cextensions 5673
-test.aaa_profiling.test_orm.QueryTest.test_query_cols 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 6413
+test.aaa_profiling.test_orm.QueryTest.test_query_cols 2.7_sqlite_pysqlite_dbapiunicode_cextensions 5306
+test.aaa_profiling.test_orm.QueryTest.test_query_cols 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 6026
+test.aaa_profiling.test_orm.QueryTest.test_query_cols 3.8_sqlite_pysqlite_dbapiunicode_cextensions 5674
+test.aaa_profiling.test_orm.QueryTest.test_query_cols 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 6414

# TEST: test.aaa_profiling.test_orm.SelectInEagerLoadTest.test_round_trip_results
@@ -361,10 +361,10 @@ test.aaa_profiling.test_orm.SelectInEagerLoadTest.test_round_trip_results 3.8_sq

# TEST: test.aaa_profiling.test_orm.SessionTest.test_expire_lots

-test.aaa_profiling.test_orm.SessionTest.test_expire_lots 2.7_sqlite_pysqlite_dbapiunicode_cextensions 1145
+test.aaa_profiling.test_orm.SessionTest.test_expire_lots 2.7_sqlite_pysqlite_dbapiunicode_cextensions 1135
test.aaa_profiling.test_orm.SessionTest.test_expire_lots 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 1150
-test.aaa_profiling.test_orm.SessionTest.test_expire_lots 3.8_sqlite_pysqlite_dbapiunicode_cextensions 1245
-test.aaa_profiling.test_orm.SessionTest.test_expire_lots 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 1245
+test.aaa_profiling.test_orm.SessionTest.test_expire_lots 3.8_sqlite_pysqlite_dbapiunicode_cextensions 1241
+test.aaa_profiling.test_orm.SessionTest.test_expire_lots 3.8_sqlite_pysqlite_dbapiunicode_nocextensions 1256

# TEST: test.aaa_profiling.test_pool.QueuePoolTest.test_first_connect
@@ -463,18 +463,18 @@ test.aaa_profiling.test_resultset.ResultSetTest.test_contains_doesnt_compile 3.8

# TEST: test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy

-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_mssql_pyodbc_dbapiunicode_cextensions 1508
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_mssql_pyodbc_dbapiunicode_nocextensions 13511
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_mysql_mysqldb_dbapiunicode_cextensions 1515
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_mysql_mysqldb_dbapiunicode_nocextensions 13518
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_mysql_pymysql_dbapiunicode_cextensions 123482
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_mysql_pymysql_dbapiunicode_nocextensions 135485
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_mssql_pyodbc_dbapiunicode_cextensions 1530
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_mssql_pyodbc_dbapiunicode_nocextensions 13532
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_mysql_mysqldb_dbapiunicode_cextensions 1535
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_mysql_mysqldb_dbapiunicode_nocextensions 13537
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_mysql_pymysql_dbapiunicode_cextensions 123501
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_mysql_pymysql_dbapiunicode_nocextensions 135503
test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_oracle_cx_oracle_dbapiunicode_cextensions 1541
test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 43564
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_postgresql_psycopg2_dbapiunicode_cextensions 1484
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 13487
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_sqlite_pysqlite_dbapiunicode_cextensions 1438
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 13441
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_postgresql_psycopg2_dbapiunicode_cextensions 1505
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 13507
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_sqlite_pysqlite_dbapiunicode_cextensions 1458
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 13460
test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 3.8_mssql_pyodbc_dbapiunicode_cextensions 1509
test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 3.8_mssql_pyodbc_dbapiunicode_nocextensions 13512
test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 3.8_mysql_mysqldb_dbapiunicode_cextensions 1516
@@ -490,18 +490,18 @@ test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_legacy 3.8_sql

# TEST: test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_mssql_pyodbc_dbapiunicode_cextensions 2515
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_mssql_pyodbc_dbapiunicode_cextensions 2535
test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_mssql_pyodbc_dbapiunicode_nocextensions 15518
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_mysql_mysqldb_dbapiunicode_cextensions 2522
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_mysql_mysqldb_dbapiunicode_nocextensions 15525
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_mysql_pymysql_dbapiunicode_cextensions 124489
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_mysql_pymysql_dbapiunicode_nocextensions 137492
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_mysql_mysqldb_dbapiunicode_cextensions 2542
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_mysql_mysqldb_dbapiunicode_nocextensions 15544
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_mysql_pymysql_dbapiunicode_cextensions 124508
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_mysql_pymysql_dbapiunicode_nocextensions 137510
test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_oracle_cx_oracle_dbapiunicode_cextensions 2548
test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 45571
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_postgresql_psycopg2_dbapiunicode_cextensions 2491
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 15494
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_sqlite_pysqlite_dbapiunicode_cextensions 2445
-test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 15448
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_postgresql_psycopg2_dbapiunicode_cextensions 2510
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 15512
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_sqlite_pysqlite_dbapiunicode_cextensions 2465
+test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 15467
test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 3.8_mssql_pyodbc_dbapiunicode_cextensions 2517
test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 3.8_mssql_pyodbc_dbapiunicode_nocextensions 15520
test.aaa_profiling.test_resultset.ResultSetTest.test_fetch_by_key_mappings 3.8_mysql_mysqldb_dbapiunicode_cextensions 2524
@@ -679,18 +679,18 @@ test.aaa_profiling.test_resultset.ResultSetTest.test_raw_unicode 3.8_sqlite_pysq

# TEST: test.aaa_profiling.test_resultset.ResultSetTest.test_string

-test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_mssql_pyodbc_dbapiunicode_cextensions 517
+test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_mssql_pyodbc_dbapiunicode_cextensions 528
test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_mssql_pyodbc_dbapiunicode_nocextensions 6517
-test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_mysql_mysqldb_dbapiunicode_cextensions 524
-test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_mysql_mysqldb_dbapiunicode_nocextensions 6524
-test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_mysql_pymysql_dbapiunicode_cextensions 122491
-test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_mysql_pymysql_dbapiunicode_nocextensions 128491
+test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_mysql_mysqldb_dbapiunicode_cextensions 535
+test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_mysql_mysqldb_dbapiunicode_nocextensions 6537
+test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_mysql_pymysql_dbapiunicode_cextensions 122501
+test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_mysql_pymysql_dbapiunicode_nocextensions 128503
test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_oracle_cx_oracle_dbapiunicode_cextensions 550
test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 36570
-test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_postgresql_psycopg2_dbapiunicode_cextensions 493
-test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 6493
-test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_sqlite_pysqlite_dbapiunicode_cextensions 447
-test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 6447
+test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_postgresql_psycopg2_dbapiunicode_cextensions 503
+test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 6505
+test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_sqlite_pysqlite_dbapiunicode_cextensions 458
+test.aaa_profiling.test_resultset.ResultSetTest.test_string 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 6460
test.aaa_profiling.test_resultset.ResultSetTest.test_string 3.8_mssql_pyodbc_dbapiunicode_cextensions 521
test.aaa_profiling.test_resultset.ResultSetTest.test_string 3.8_mssql_pyodbc_dbapiunicode_nocextensions 6521
test.aaa_profiling.test_resultset.ResultSetTest.test_string 3.8_mysql_mysqldb_dbapiunicode_cextensions 528
@@ -706,18 +706,18 @@ test.aaa_profiling.test_resultset.ResultSetTest.test_string 3.8_sqlite_pysqlite_

# TEST: test.aaa_profiling.test_resultset.ResultSetTest.test_unicode

-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_mssql_pyodbc_dbapiunicode_cextensions 517
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_mssql_pyodbc_dbapiunicode_cextensions 528
test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_mssql_pyodbc_dbapiunicode_nocextensions 6517
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_mysql_mysqldb_dbapiunicode_cextensions 524
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_mysql_mysqldb_dbapiunicode_nocextensions 6524
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_mysql_pymysql_dbapiunicode_cextensions 122491
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_mysql_pymysql_dbapiunicode_nocextensions 128491
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_mysql_mysqldb_dbapiunicode_cextensions 535
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_mysql_mysqldb_dbapiunicode_nocextensions 6537
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_mysql_pymysql_dbapiunicode_cextensions 122501
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_mysql_pymysql_dbapiunicode_nocextensions 128503
test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_oracle_cx_oracle_dbapiunicode_cextensions 550
test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_oracle_cx_oracle_dbapiunicode_nocextensions 36570
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_postgresql_psycopg2_dbapiunicode_cextensions 493
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 6493
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_sqlite_pysqlite_dbapiunicode_cextensions 447
-test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 6447
+test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_postgresql_psycopg2_dbapiunicode_cextensions 503 +test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_postgresql_psycopg2_dbapiunicode_nocextensions 6505 +test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_sqlite_pysqlite_dbapiunicode_cextensions 458 +test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 2.7_sqlite_pysqlite_dbapiunicode_nocextensions 6460 test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 3.8_mssql_pyodbc_dbapiunicode_cextensions 521 test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 3.8_mssql_pyodbc_dbapiunicode_nocextensions 6521 test.aaa_profiling.test_resultset.ResultSetTest.test_unicode 3.8_mysql_mysqldb_dbapiunicode_cextensions 528 diff --git a/test/sql/test_compare.py b/test/sql/test_compare.py index 6944101346..da588b988b 100644 --- a/test/sql/test_compare.py +++ b/test/sql/test_compare.py @@ -37,6 +37,7 @@ from sqlalchemy.sql import dml from sqlalchemy.sql import False_ from sqlalchemy.sql import func from sqlalchemy.sql import operators +from sqlalchemy.sql import roles from sqlalchemy.sql import True_ from sqlalchemy.sql import type_coerce from sqlalchemy.sql import visitors @@ -55,6 +56,8 @@ from sqlalchemy.sql.elements import UnaryExpression from sqlalchemy.sql.functions import FunctionElement from sqlalchemy.sql.functions import GenericFunction from sqlalchemy.sql.functions import ReturnTypeFromArgs +from sqlalchemy.sql.lambdas import lambda_stmt +from sqlalchemy.sql.lambdas import LambdaElement from sqlalchemy.sql.selectable import _OffsetLimitParam from sqlalchemy.sql.selectable import AliasedReturnsRows from sqlalchemy.sql.selectable import FromGrouping @@ -791,6 +794,69 @@ class CoreFixtures(object): if util.py37: fixtures.append(_update_dml_w_dicts) + def _lambda_fixtures(): + def one(): + return LambdaElement( + lambda: table_a.c.a == column("q"), roles.WhereHavingRole + ) + + def two(): + r = random.randint(1, 
10) + q = 20 + return LambdaElement( + lambda: table_a.c.a + q == r, roles.WhereHavingRole + ) + + some_value = random.randint(20, 30) + + def three(y): + return LambdaElement( + lambda: and_(table_a.c.a == some_value, table_a.c.b > y), + roles.WhereHavingRole, + ) + + class Foo: + x = 10 + y = 15 + + def four(): + return LambdaElement( + lambda: and_(table_a.c.a == Foo.x), roles.WhereHavingRole + ) + + def five(): + return LambdaElement( + lambda: and_(table_a.c.a == Foo.x, table_a.c.b == Foo.y), + roles.WhereHavingRole, + ) + + def six(): + d = {"g": random.randint(40, 45)} + + return LambdaElement( + lambda: and_(table_a.c.b == d["g"]), roles.WhereHavingRole + ) + + def seven(): + # lambda statements don't collect bindparameter objects + # for fixed values, has to be in a variable + value = random.randint(10, 20) + return lambda_stmt(lambda: future_select(table_a)) + ( + lambda s: s.where(table_a.c.a == value) + ) + + return [ + one(), + two(), + three(random.randint(5, 10)), + four(), + five(), + six(), + seven(), + ] + + dont_compare_values_fixtures.append(_lambda_fixtures) + class CacheKeyFixture(object): def _run_cache_key_fixture(self, fixture, compare_values): @@ -1076,7 +1142,7 @@ class CompareAndCopyTest(CoreFixtures, fixtures.TestBase): need = set( cls for cls in class_hierarchy(ClauseElement) - if issubclass(cls, (ColumnElement, Selectable)) + if issubclass(cls, (ColumnElement, Selectable, LambdaElement)) and ( "__init__" in cls.__dict__ or issubclass(cls, AliasedReturnsRows) diff --git a/test/sql/test_lambdas.py b/test/sql/test_lambdas.py new file mode 100644 index 0000000000..53f6a9544c --- /dev/null +++ b/test/sql/test_lambdas.py @@ -0,0 +1,679 @@ +from sqlalchemy import exc +from sqlalchemy import testing +from sqlalchemy.future import select as future_select +from sqlalchemy.schema import Column +from sqlalchemy.schema import ForeignKey +from sqlalchemy.schema import Table +from sqlalchemy.sql import and_ +from sqlalchemy.sql import coercions 
+from sqlalchemy.sql import column
+from sqlalchemy.sql import join
+from sqlalchemy.sql import lambda_stmt
+from sqlalchemy.sql import lambdas
+from sqlalchemy.sql import roles
+from sqlalchemy.sql import select
+from sqlalchemy.sql import table
+from sqlalchemy.sql import util as sql_util
+from sqlalchemy.testing import assert_raises_message
+from sqlalchemy.testing import AssertsCompiledSQL
+from sqlalchemy.testing import eq_
+from sqlalchemy.testing import fixtures
+from sqlalchemy.testing import is_
+from sqlalchemy.testing.assertsql import CompiledSQL
+from sqlalchemy.types import Integer
+from sqlalchemy.types import String
+
+
+class DeferredLambdaTest(
+    fixtures.TestBase, testing.AssertsExecutionResults, AssertsCompiledSQL
+):
+    __dialect__ = "default"
+
+    def test_select_whereclause(self):
+        t1 = table("t1", column("q"), column("p"))
+
+        x = 10
+        y = 5
+
+        def go():
+            return select([t1]).where(lambda: and_(t1.c.q == x, t1.c.p == y))
+
+        self.assert_compile(
+            go(), "SELECT t1.q, t1.p FROM t1 WHERE t1.q = :x_1 AND t1.p = :y_1"
+        )
+
+        self.assert_compile(
+            go(), "SELECT t1.q, t1.p FROM t1 WHERE t1.q = :x_1 AND t1.p = :y_1"
+        )
+
+    def test_stale_checker_embedded(self):
+        def go(x):
+
+            stmt = select([lambda: x])
+            return stmt
+
+        c1 = column("x")
+        s1 = go(c1)
+        s2 = go(c1)
+
+        self.assert_compile(s1, "SELECT x")
+        self.assert_compile(s2, "SELECT x")
+
+        c1 = column("q")
+
+        s3 = go(c1)
+        self.assert_compile(s3, "SELECT q")
+
+    def test_stale_checker_statement(self):
+        def go(x):
+
+            stmt = lambdas.lambda_stmt(lambda: select([x]))
+            return stmt
+
+        c1 = column("x")
+        s1 = go(c1)
+        s2 = go(c1)
+
+        self.assert_compile(s1, "SELECT x")
+        self.assert_compile(s2, "SELECT x")
+
+        c1 = column("q")
+
+        s3 = go(c1)
+        self.assert_compile(s3, "SELECT q")
+
+    def test_stale_checker_linked(self):
+        def go(x, y):
+
+            stmt = lambdas.lambda_stmt(lambda: select([x])) + (
+                lambda s: s.where(y > 5)
+            )
+            return stmt
+
+        c1 = column("x")
+        c2 = column("y")
+        s1 = go(c1, c2)
+        s2 = go(c1, c2)
+
+        self.assert_compile(s1, "SELECT x WHERE y > :y_1")
+        self.assert_compile(s2, "SELECT x WHERE y > :y_1")
+
+        c1 = column("q")
+        c2 = column("p")
+
+        s3 = go(c1, c2)
+        self.assert_compile(s3, "SELECT q WHERE p > :p_1")
+
+    def test_coercion_cols_clause(self):
+        assert_raises_message(
+            exc.ArgumentError,
+            "Textual column expression 'f' should be explicitly declared",
+            select,
+            [lambda: "foo"],
+        )
+
+    def test_coercion_where_clause(self):
+        assert_raises_message(
+            exc.ArgumentError,
+            "SQL expression for WHERE/HAVING role expected, got 5",
+            select([column("q")]).where,
+            5,
+        )
+
+    def test_propagate_attrs_full_stmt(self):
+        col = column("q")
+        col._propagate_attrs = col._propagate_attrs.union(
+            {"compile_state_plugin": "x", "plugin_subject": "y"}
+        )
+
+        stmt = lambdas.lambda_stmt(lambda: select([col]))
+
+        eq_(
+            stmt._propagate_attrs,
+            {"compile_state_plugin": "x", "plugin_subject": "y"},
+        )
+
+    def test_propagate_attrs_cols_clause(self):
+        col = column("q")
+        col._propagate_attrs = col._propagate_attrs.union(
+            {"compile_state_plugin": "x", "plugin_subject": "y"}
+        )
+
+        stmt = select([lambda: col])
+
+        eq_(
+            stmt._propagate_attrs,
+            {"compile_state_plugin": "x", "plugin_subject": "y"},
+        )
+
+    def test_propagate_attrs_from_clause(self):
+        col = column("q")
+
+        t = table("t", column("y"))
+
+        t._propagate_attrs = t._propagate_attrs.union(
+            {"compile_state_plugin": "x", "plugin_subject": "y"}
+        )
+
+        stmt = future_select(lambda: col).join(t)
+
+        eq_(
+            stmt._propagate_attrs,
+            {"compile_state_plugin": "x", "plugin_subject": "y"},
+        )
+
+    def test_select_legacy_expanding_columns(self):
+        q, p, r = column("q"), column("p"), column("r")
+
+        stmt = select([lambda: (q, p, r)])
+
+        self.assert_compile(stmt, "SELECT q, p, r")
+
+    def test_select_future_expanding_columns(self):
+        q, p, r = column("q"), column("p"), column("r")
+
+        stmt = future_select(lambda: (q, p, r))
+
+        self.assert_compile(stmt, "SELECT q, p, r")
+
+    def test_select_fromclause(self):
+        t1 = table("t1", column("q"), column("p"))
+        t2 = table("t2", column("y"))
+
+        def go():
+            return select([t1]).select_from(
+                lambda: join(t1, t2, lambda: t1.c.q == t2.c.y)
+            )
+
+        self.assert_compile(
+            go(), "SELECT t1.q, t1.p FROM t1 JOIN t2 ON t1.q = t2.y"
+        )
+
+        self.assert_compile(
+            go(), "SELECT t1.q, t1.p FROM t1 JOIN t2 ON t1.q = t2.y"
+        )
+
+    def test_in_parameters_one(self):
+
+        expr1 = select([1]).where(column("q").in_(["a", "b", "c"]))
+        self.assert_compile(expr1, "SELECT 1 WHERE q IN ([POSTCOMPILE_q_1])")
+
+        self.assert_compile(
+            expr1,
+            "SELECT 1 WHERE q IN (:q_1_1, :q_1_2, :q_1_3)",
+            render_postcompile=True,
+            checkparams={"q_1_1": "a", "q_1_2": "b", "q_1_3": "c"},
+        )
+
+    def test_in_parameters_two(self):
+        expr2 = select([1]).where(lambda: column("q").in_(["a", "b", "c"]))
+        self.assert_compile(expr2, "SELECT 1 WHERE q IN ([POSTCOMPILE_q_1])")
+        self.assert_compile(
+            expr2,
+            "SELECT 1 WHERE q IN (:q_1_1, :q_1_2, :q_1_3)",
+            render_postcompile=True,
+            checkparams={"q_1_1": "a", "q_1_2": "b", "q_1_3": "c"},
+        )
+
+    def test_in_parameters_three(self):
+        expr3 = lambdas.lambda_stmt(
+            lambda: select([1]).where(column("q").in_(["a", "b", "c"]))
+        )
+        self.assert_compile(expr3, "SELECT 1 WHERE q IN ([POSTCOMPILE_q_1])")
+        self.assert_compile(
+            expr3,
+            "SELECT 1 WHERE q IN (:q_1_1, :q_1_2, :q_1_3)",
+            render_postcompile=True,
+            checkparams={"q_1_1": "a", "q_1_2": "b", "q_1_3": "c"},
+        )
+
+    def test_in_parameters_four(self):
+        def go(names):
+            return lambdas.lambda_stmt(
+                lambda: select([1]).where(column("q").in_(names))
+            )
+
+        expr4 = go(["a", "b", "c"])
+        self.assert_compile(
+            expr4, "SELECT 1 WHERE q IN ([POSTCOMPILE_names_1])"
+        )
+        self.assert_compile(
+            expr4,
+            "SELECT 1 WHERE q IN (:names_1_1, :names_1_2, :names_1_3)",
+            render_postcompile=True,
+            checkparams={"names_1_1": "a", "names_1_2": "b", "names_1_3": "c"},
+        )
+
+    def test_in_parameters_five(self):
+        def go(n1, n2):
+            stmt = lambdas.lambda_stmt(
+                lambda: select([1]).where(column("q").in_(n1))
+            )
+            stmt += lambda s: s.where(column("y").in_(n2))
+            return stmt
+
+        expr = go(["a", "b", "c"], ["d", "e", "f"])
+        self.assert_compile(
+            expr,
+            "SELECT 1 WHERE q IN (:n1_1_1, :n1_1_2, :n1_1_3) "
+            "AND y IN (:n2_1_1, :n2_1_2, :n2_1_3)",
+            render_postcompile=True,
+            checkparams={
+                "n1_1_1": "a",
+                "n1_1_2": "b",
+                "n1_1_3": "c",
+                "n2_1_1": "d",
+                "n2_1_2": "e",
+                "n2_1_3": "f",
+            },
+        )
+
+    def test_select_columns_clause(self):
+        t1 = table("t1", column("q"), column("p"))
+
+        g = 5
+
+        def go():
+            return select([lambda: t1.c.q, lambda: t1.c.p + g])
+
+        stmt = go()
+        self.assert_compile(
+            stmt,
+            "SELECT t1.q, t1.p + :g_1 AS anon_1 FROM t1",
+            checkparams={"g_1": 5},
+        )
+        eq_(stmt._generate_cache_key()._generate_param_dict(), {"g_1": 5})
+
+        g = 10
+        stmt = go()
+        self.assert_compile(
+            stmt,
+            "SELECT t1.q, t1.p + :g_1 AS anon_1 FROM t1",
+            checkparams={"g_1": 10},
+        )
+        eq_(stmt._generate_cache_key()._generate_param_dict(), {"g_1": 10})
+
+    @testing.metadata_fixture()
+    def user_address_fixture(self, metadata):
+        users = Table(
+            "users",
+            metadata,
+            Column("id", Integer, primary_key=True),
+            Column("name", String(50)),
+        )
+        addresses = Table(
+            "addresses",
+            metadata,
+            Column("id", Integer),
+            Column("user_id", ForeignKey("users.id")),
+            Column("email", String(50)),
+        )
+        return users, addresses
+
+    def test_adapt_select(self, user_address_fixture):
+        users, addresses = user_address_fixture
+
+        stmt = (
+            select([users])
+            .select_from(
+                users.join(
+                    addresses, lambda: users.c.id == addresses.c.user_id
+                )
+            )
+            .where(lambda: users.c.name == "ed")
+        )
+
+        self.assert_compile(
+            stmt,
+            "SELECT users.id, users.name FROM users "
+            "JOIN addresses ON users.id = addresses.user_id "
+            "WHERE users.name = :name_1",
+        )
+
+        u1 = users.alias()
+        adapter = sql_util.ClauseAdapter(u1)
+
+        s2 = adapter.traverse(stmt)
+
+        self.assert_compile(
+            s2,
+            "SELECT users_1.id, users_1.name FROM users AS users_1 "
+            "JOIN addresses ON users_1.id = addresses.user_id "
+            "WHERE users_1.name = :name_1",
+        )
+
+    def test_no_var_dict_keys(self, user_address_fixture):
+        users, addresses = user_address_fixture
+
+        names = {"x": "some name"}
+        foo = "x"
+        expr = lambda: users.c.name == names[foo]  # noqa
+
+        assert_raises_message(
+            exc.InvalidRequestError,
+            "Dictionary keys / list indexes inside of a cached "
+            "lambda must be Python literals only",
+            coercions.expect,
+            roles.WhereHavingRole,
+            expr,
+        )
+
+    def test_dict_literal_keys(self, user_address_fixture):
+        users, addresses = user_address_fixture
+
+        names = {"x": "some name"}
+        lmb = lambda: users.c.name == names["x"]  # noqa
+
+        expr = coercions.expect(roles.WhereHavingRole, lmb)
+
+        self.assert_compile(
+            expr,
+            "users.name = :x_1",
+            params=expr._param_dict(),
+            checkparams={"x_1": "some name"},
+        )
+
+    def test_assignment_one(self, user_address_fixture):
+        users, addresses = user_address_fixture
+
+        x = 5
+
+        def my_lambda():
+
+            y = 10
+            z = y + 18
+
+            expr1 = users.c.name > x
+            expr2 = users.c.name < z
+            return and_(expr1, expr2)
+
+        expr = coercions.expect(roles.WhereHavingRole, my_lambda)
+        self.assert_compile(
+            expr,
+            "users.name > :x_1 AND users.name < :name_1",
+            params=expr._param_dict(),
+            checkparams={"name_1": 28, "x_1": 5},
+        )
+
+        expr = coercions.expect(roles.WhereHavingRole, my_lambda)
+        self.assert_compile(
+            expr,
+            "users.name > :x_1 AND users.name < :name_1",
+            params=expr._param_dict(),
+            checkparams={"name_1": 28, "x_1": 5},
+        )
+
+    def test_assignment_two(self, user_address_fixture):
+        users, addresses = user_address_fixture
+
+        x = 5
+        z = 10
+
+        def my_lambda():
+
+            y = x + z
+
+            expr1 = users.c.name > x
+            expr2 = users.c.name < y
+            return and_(expr1, expr2)
+
+        expr = coercions.expect(roles.WhereHavingRole, my_lambda)
+        self.assert_compile(
+            expr,
+            "users.name > :x_1 AND users.name < :x_1 + :z_1",
+            params=expr._param_dict(),
+            checkparams={"x_1": 5, "z_1": 10},
+        )
+
+        x = 15
+        z = 18
+
+        expr = coercions.expect(roles.WhereHavingRole, my_lambda)
+        self.assert_compile(
+            expr,
+            "users.name > :x_1 AND users.name < :x_1 + :z_1",
+            params=expr._param_dict(),
+            checkparams={"x_1": 15, "z_1": 18},
+        )
+
+    def test_assignment_three(self, user_address_fixture):
+        users, addresses = user_address_fixture
+
+        x = 5
+        z = 10
+
+        def my_lambda():
+
+            y = 10 + z
+
+            expr1 = users.c.name > x
+            expr2 = users.c.name < y
+            return and_(expr1, expr2)
+
+        expr = coercions.expect(roles.WhereHavingRole, my_lambda)
+        self.assert_compile(
+            expr,
+            "users.name > :x_1 AND users.name < :param_1 + :z_1",
+            params=expr._param_dict(),
+            checkparams={"x_1": 5, "z_1": 10, "param_1": 10},
+        )
+
+        x = 15
+        z = 18
+
+        expr = coercions.expect(roles.WhereHavingRole, my_lambda)
+        self.assert_compile(
+            expr,
+            "users.name > :x_1 AND users.name < :param_1 + :z_1",
+            params=expr._param_dict(),
+            checkparams={"x_1": 15, "z_1": 18, "param_1": 10},
+        )
+
+    def test_op_reverse(self, user_address_fixture):
+        user, addresses = user_address_fixture
+
+        x = "foo"
+
+        def mylambda():
+            return x + user.c.name
+
+        expr = coercions.expect(roles.WhereHavingRole, mylambda)
+        self.assert_compile(
+            expr, ":x_1 || users.name", checkparams={"x_1": "foo"}
+        )
+
+        x = "bar"
+        expr = coercions.expect(roles.WhereHavingRole, mylambda)
+        self.assert_compile(
+            expr, ":x_1 || users.name", checkparams={"x_1": "bar"}
+        )
+
+    def test_op_forwards(self, user_address_fixture):
+        user, addresses = user_address_fixture
+
+        x = "foo"
+
+        def mylambda():
+            return user.c.name + x
+
+        expr = coercions.expect(roles.WhereHavingRole, mylambda)
+        self.assert_compile(
+            expr, "users.name || :x_1", checkparams={"x_1": "foo"}
+        )
+
+        x = "bar"
+        expr = coercions.expect(roles.WhereHavingRole, mylambda)
+        self.assert_compile(
+            expr, "users.name || :x_1", checkparams={"x_1": "bar"}
+        )
+
+    def test_execute_constructed_uncached(self, user_address_fixture):
+        users, addresses = user_address_fixture
+
+        def go(name):
+            stmt = select([lambda: users.c.id]).where(
+                lambda: users.c.name == name
+            )
+            with testing.db.connect().execution_options(
+                compiled_cache=None
+            ) as conn:
+                conn.execute(stmt)
+
+        with self.sql_execution_asserter(testing.db) as asserter:
+            go("name1")
+            go("name2")
+            go("name1")
+            go("name3")
+
+        asserter.assert_(
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name1"}],
+            ),
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name2"}],
+            ),
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name1"}],
+            ),
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name3"}],
+            ),
+        )
+
+    def test_execute_full_uncached(self, user_address_fixture):
+        users, addresses = user_address_fixture
+
+        def go(name):
+            stmt = lambda_stmt(
+                lambda: select([users.c.id]).where(  # noqa
+                    users.c.name == name
+                )
+            )
+
+            with testing.db.connect().execution_options(
+                compiled_cache=None
+            ) as conn:
+                conn.execute(stmt)
+
+        with self.sql_execution_asserter(testing.db) as asserter:
+            go("name1")
+            go("name2")
+            go("name1")
+            go("name3")
+
+        asserter.assert_(
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name1"}],
+            ),
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name2"}],
+            ),
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name1"}],
+            ),
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name3"}],
+            ),
+        )
+
+    def test_execute_constructed_cached(self, user_address_fixture):
+        users, addresses = user_address_fixture
+
+        cache = {}
+
+        def go(name):
+            stmt = select([lambda: users.c.id]).where(
+                lambda: users.c.name == name
+            )
+
+            with testing.db.connect().execution_options(
+                compiled_cache=cache
+            ) as conn:
+                conn.execute(stmt)
+
+        with self.sql_execution_asserter(testing.db) as asserter:
+            go("name1")
+            go("name2")
+            go("name1")
+            go("name3")
+
+        asserter.assert_(
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name1"}],
+            ),
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name2"}],
+            ),
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name1"}],
+            ),
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name3"}],
+            ),
+        )
+
+    def test_execute_full_cached(self, user_address_fixture):
+        users, addresses = user_address_fixture
+
+        cache = {}
+
+        def go(name):
+            stmt = lambda_stmt(
+                lambda: select([users.c.id]).where(  # noqa
+                    users.c.name == name
+                )
+            )
+
+            with testing.db.connect().execution_options(
+                compiled_cache=cache
+            ) as conn:
+                conn.execute(stmt)
+
+        with self.sql_execution_asserter(testing.db) as asserter:
+            go("name1")
+            go("name2")
+            go("name1")
+            go("name3")
+
+        asserter.assert_(
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name1"}],
+            ),
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name2"}],
+            ),
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name1"}],
+            ),
+            CompiledSQL(
+                "SELECT users.id FROM users WHERE users.name = :name_1",
+                lambda ctx: [{"name_1": "name3"}],
+            ),
+        )
+
+    def test_cache_key_thing(self):
+        t1 = table("t1", column("q"), column("p"))
+
+        def go(x):
+            return coercions.expect(roles.WhereHavingRole, lambda: t1.c.q == x)
+
+        expr1 = go(5)
+        expr2 = go(10)
+
+        is_(expr1._generate_cache_key().bindparams[0], expr1._resolved.right)
+        is_(expr2._generate_cache_key().bindparams[0], expr2._resolved.right)
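The property these tests exercise over and over — the same lambda source producing the same compiled SQL while closure values become fresh bound parameters — rests on a Python fact: every evaluation of a given `lambda` expression yields a new function object that shares one `__code__` object, while each carries its own closure cells. The mechanism can be illustrated in plain Python as a rough conceptual sketch (not SQLAlchemy's actual implementation; `compile_deferred` and `make` are hypothetical names):

```python
# Conceptual sketch of the deferred-lambda idea: a lambda's __code__ object
# identifies the *structure* of an expression, while its closure cells carry
# the per-invocation *parameter* values.  A cache keyed on __code__ can then
# reuse an expensive "compilation" yet still substitute fresh parameters.

compilation_cache = {}


def compile_deferred(fn):
    # extract closure values as the would-be bound parameters
    params = [cell.cell_contents for cell in (fn.__closure__ or ())]
    key = fn.__code__
    if key not in compilation_cache:
        # the expensive work happens once per distinct lambda source
        compilation_cache[key] = "compiled:%s" % fn.__code__.co_code.hex()
    return compilation_cache[key], params


def make(x):
    # each call returns a new function object sharing one code object
    return lambda: x + 1


sql1, p1 = compile_deferred(make(5))
sql2, p2 = compile_deferred(make(10))
assert sql1 is sql2            # same structure -> cached "compilation" reused
assert (p1, p2) == ([5], [10])  # parameter values still tracked per call
```

This mirrors what the commit message calls producing "a deterministic cache key where we can also cheat and yank out literal parameters": `__code__` plays the role of the cache key, the closure cells the role of the extracted literals.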