dan [Fri, 8 Aug 2014 16:52:28 +0000 (16:52 +0000)]
Because SQLite internally calculates query plan costs using a logarithmic scale, very large estimated sorting costs can cause all other estimated costs to be rounded down to zero. In these cases, break ties between plans with the same total cost by comparing the costs with sorting excluded. This is an alternative fix for the same problem addressed by [2af630c572].
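A minimal sketch of the tie-breaking idea follows; the LogEst, PlanCost and
comparePlans names are illustrative only, not the actual planner structures.
Costs are kept on a logarithmic scale, so two plans whose totals collapse to
the same value are separated by re-comparing their costs with the sort term
left out.

  #include <stdio.h>

  typedef short LogEst;          /* cost on a logarithmic scale, roughly 10*log2(N) */

  typedef struct PlanCost {
    LogEst rTotal;               /* estimated total cost, sorting included */
    LogEst rUnsorted;            /* estimated cost with sorting excluded   */
  } PlanCost;

  /* Negative means a is the cheaper plan; ties on rTotal fall back to rUnsorted. */
  static int comparePlans(const PlanCost *a, const PlanCost *b){
    if( a->rTotal != b->rTotal ) return a->rTotal - b->rTotal;
    return a->rUnsorted - b->rUnsorted;
  }

  int main(void){
    PlanCost a = {200, 55};      /* same rounded total...                          */
    PlanCost b = {200, 60};      /* ...but a is cheaper once sorting is excluded   */
    printf("%s\n", comparePlans(&a, &b) < 0 ? "plan a preferred" : "plan b preferred");
    return 0;
  }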
drh [Fri, 8 Aug 2014 15:38:11 +0000 (15:38 +0000)]
The SQLITE_IOERR_BLOCKED extended error code is no longer used, so remove
assert() statements and documentation for that error code. Also make other
documentation improvements.
drh [Thu, 7 Aug 2014 20:42:33 +0000 (20:42 +0000)]
When the estimated sorting cost overwhelms the estimated lookup cost, ensure
that lookup costs are still taken into account when selecting a lookup
algorithm.
drh [Thu, 7 Aug 2014 16:50:00 +0000 (16:50 +0000)]
When the estimated cost to do a sort overwhelms the estimated cost to do
individual table lookups, make sure that the table lookup costs are still
taken into consideration when selecting the lookup algorithm.
drh [Wed, 6 Aug 2014 00:29:06 +0000 (00:29 +0000)]
A simpler fix for ticket [3a88d85f36704eebe1] - one that uses less code.
The error message is not quite as good, but as this error has apparently
not previously occurred in over 8 years of heavy use, that is not seen as
a serious problem.
drh [Tue, 5 Aug 2014 19:16:22 +0000 (19:16 +0000)]
Add the ability to evaluate IN operators as a sequence of comparisons as
an alternative to the long-standing algorithm of building a lookup table.
Use the new implementation in circumstances where it is likely to be faster,
such as when the RHS of the IN changes between successive evaluations.
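As an illustration (table and column names here are invented), a correlated
subquery on the RHS of IN is the kind of case described: the RHS result set
depends on the current outer row, so rebuilding a lookup table for every row
is wasteful and a direct sequence of comparisons can win.

  #include <stdio.h>
  #include <sqlite3.h>

  static int show(void *p, int n, char **v, char **c){
    (void)p; (void)c;
    for(int i = 0; i < n; i++) printf("%s%s", v[i] ? v[i] : "NULL", i+1 < n ? "|" : "\n");
    return 0;
  }

  int main(void){
    sqlite3 *db;
    if( sqlite3_open(":memory:", &db) != SQLITE_OK ) return 1;
    sqlite3_exec(db,
      "CREATE TABLE t1(grp, x);"
      "CREATE TABLE t2(grp, y);"
      "INSERT INTO t1 VALUES(1,10),(1,11),(2,20);"
      "INSERT INTO t2 VALUES(1,10),(2,20);"
      /* The RHS of the IN depends on t1.grp, so it changes on every
      ** evaluation of the operator. */
      "SELECT grp, x FROM t1 WHERE x IN (SELECT y FROM t2 WHERE t2.grp=t1.grp);",
      show, 0, 0);
    sqlite3_close(db);
    return 0;
  }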
drh [Mon, 4 Aug 2014 21:26:58 +0000 (21:26 +0000)]
Part of the change in the previous check-in was incorrect and can result
in an incorrect UPDATE for WITHOUT ROWID tables. This check-in fixes the
problem.
drh [Sat, 2 Aug 2014 21:03:33 +0000 (21:03 +0000)]
Enhancements to the code generator for the IN operator that result in much
faster queries in some cases, for example when the RHS of the IN operator
changes for each row of a large table scan.
drh [Fri, 1 Aug 2014 18:00:24 +0000 (18:00 +0000)]
Remove an unnecessary OP_Null in the IN-operator logic. Attempt to clarify
comments explaining the IN-operator code, though it is not clear that the
comments are correct even yet - more work to be done.
drh [Fri, 1 Aug 2014 15:51:36 +0000 (15:51 +0000)]
Clean up the IN operator code generation logic to make it easier to reason
about. In the process, improve code generation to omit some unused OP_Null
operations.
drh [Fri, 1 Aug 2014 15:34:36 +0000 (15:34 +0000)]
The idea of coding an IN operator with a short list on the RHS as an OR expression
turns out not to be helpful. If the list is of length 1 or 2, the OR expression
is very slightly faster, but the ephemeral table approach is clearly better for
all list lengths greater than 2. Better to keep the code simple.
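Purely for illustration (schema invented here), the rewrite that was evaluated
and rejected treats a short IN list as a disjunction; both statements below
return the same rows:

  #include <stdio.h>
  #include <sqlite3.h>

  static int show(void *tag, int n, char **v, char **c){
    (void)c;
    printf("%s: %s\n", (const char*)tag, n > 0 && v[0] ? v[0] : "NULL");
    return 0;
  }

  int main(void){
    sqlite3 *db;
    if( sqlite3_open(":memory:", &db) != SQLITE_OK ) return 1;
    sqlite3_exec(db, "CREATE TABLE t(x); INSERT INTO t VALUES(1),(2),(3);", 0, 0, 0);
    /* IN form: the list is loaded into an ephemeral table and probed. */
    sqlite3_exec(db, "SELECT x FROM t WHERE x IN (1,2);", show, (void*)"IN form", 0);
    /* Equivalent OR form, which the check-in decides is not worth generating. */
    sqlite3_exec(db, "SELECT x FROM t WHERE x=1 OR x=2;", show, (void*)"OR form", 0);
    sqlite3_close(db);
    return 0;
  }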
Deactivate the DISTINCT in a SELECT on the right-hand side of an IN operator,
since it should not make any difference in the output but does consume extra
memory and CPU time.
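A small illustration (schema invented): because IN only tests membership,
removing DISTINCT from the RHS subquery cannot change which outer rows
qualify; the two SELECTs below return the same result.

  #include <stdio.h>
  #include <sqlite3.h>

  static int count(void *p, int n, char **v, char **c){
    (void)n; (void)v; (void)c;
    ++*(int*)p;
    return 0;
  }

  int main(void){
    sqlite3 *db;
    int n1 = 0, n2 = 0;
    if( sqlite3_open(":memory:", &db) != SQLITE_OK ) return 1;
    sqlite3_exec(db,
      "CREATE TABLE t1(a); CREATE TABLE t2(b);"
      "INSERT INTO t1 VALUES(1),(2),(3);"
      "INSERT INTO t2 VALUES(1),(1),(1),(2);", 0, 0, 0);
    sqlite3_exec(db, "SELECT a FROM t1 WHERE a IN (SELECT DISTINCT b FROM t2);", count, &n1, 0);
    sqlite3_exec(db, "SELECT a FROM t1 WHERE a IN (SELECT b FROM t2);", count, &n2, 0);
    printf("rows with DISTINCT: %d, without: %d\n", n1, n2);   /* both print 2 */
    sqlite3_close(db);
    return 0;
  }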
Add the "eForce" parameter to the sqlite3_multiplex_shutdown() entry point
in test_multiplex.c. Shutdown is forced if true. Shutdown is not done if
there are pending database connections and eForce is false, but an error log
entry is made instead.
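A usage sketch, assuming the post-change signature
int sqlite3_multiplex_shutdown(int eForce), the existing
sqlite3_multiplex_initialize() entry point, and a build that links against
test_multiplex.c from the SQLite source tree:

  #include <stdio.h>
  #include "sqlite3.h"
  #include "test_multiplex.h"    /* declares the multiplex entry points */

  int main(void){
    sqlite3 *db;
    sqlite3_multiplex_initialize(0, 1);        /* shim the default VFS */
    if( sqlite3_open("multiplexed.db", &db) != SQLITE_OK ) return 1;

    /* eForce==0 with a connection still open: shutdown is refused and an
    ** error log entry is written instead. */
    printf("non-forced shutdown rc=%d\n", sqlite3_multiplex_shutdown(0));

    sqlite3_close(db);
    sqlite3_multiplex_shutdown(0);             /* no pending connections: succeeds */
    return 0;
  }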
Ensure that the correct number of columns in a UNIQUE index are checked for
uniqueness, regardless of whether or not the original table has a ROWID or
if the columns are NOT NULL, etc. Ticket [9a6daf340df99ba93c].
Fix a bug in the whereRangeSkipScanEst() procedure (added by
check-in [d09ca6d5efad3e4cfa]) where it fails to consider the possibility
of a ROWID column when computing the affinity of a table column.
In the command-line shell, in CSV output mode, terminate rows with CRNL but
do not expand NL characters in data into CRNL. Provide the extra -newline
command-line option and the extra argument to .separator to designate an
alternative newline character sequence for CSV output.
Ugh. Consecutive UNIQUE index entries are only distinct if the index is
on NOT NULL columns. So the previous version was not quite right. This
check-in fixes the problem.
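The underlying SQL fact (example schema invented here): NULLs are all distinct
from one another for UNIQUE purposes, so consecutive entries in a UNIQUE index
can carry equal-looking keys unless the indexed columns are NOT NULL.

  #include <stdio.h>
  #include <sqlite3.h>

  int main(void){
    sqlite3 *db;
    char *err = 0;
    if( sqlite3_open(":memory:", &db) != SQLITE_OK ) return 1;
    sqlite3_exec(db, "CREATE TABLE t(a UNIQUE);", 0, 0, 0);
    /* Two NULLs do not violate the UNIQUE constraint... */
    sqlite3_exec(db, "INSERT INTO t VALUES(NULL),(NULL);", 0, 0, &err);
    printf("NULL inserts: %s\n", err ? err : "ok");
    sqlite3_free(err); err = 0;
    /* ...but two equal non-NULL values do. */
    sqlite3_exec(db, "INSERT INTO t VALUES(7),(7);", 0, 0, &err);
    printf("duplicate 7s: %s\n", err ? err : "ok");
    sqlite3_free(err);
    sqlite3_close(db);
    return 0;
  }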
Change the hex literal processing so that only the SQL parser understands
hex literals. Casting and coercing string literals into numeric values does
not understand hexadecimal integers. This preserves backwards compatibility.
Also: Throw an error on any hex literal that is too big to fit into 64 bits.
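For example (behavior as described above): the parser accepts a hex literal
directly, but coercing a string that merely looks like hex does not treat it
as hexadecimal.

  #include <stdio.h>
  #include <sqlite3.h>

  static int show(void *p, int n, char **v, char **c){
    (void)p;
    for(int i = 0; i < n; i++) printf("%s = %s\n", c[i], v[i] ? v[i] : "NULL");
    return 0;
  }

  int main(void){
    sqlite3 *db;
    if( sqlite3_open(":memory:", &db) != SQLITE_OK ) return 1;
    /* The parser understands the hex literal: this yields 16. */
    sqlite3_exec(db, "SELECT 0x10 AS hex_literal;", show, 0, 0);
    /* String-to-number coercion does not: this yields 0, as before the change. */
    sqlite3_exec(db, "SELECT CAST('0x10' AS INTEGER) AS cast_of_string;", show, 0, 0);
    sqlite3_close(db);
    return 0;
  }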
When running ANALYZE, it is not necessary to check the right-most key column
for changes since that column will always change if none of the previous
columns have.
Add the OP_ReopenIdx opcode that works like OP_OpenRead except that it becomes
a no-op if the cursor is already open on the same index. Update the
OR-optimization logic to make use of OP_ReopenIdx in order to avoid
unnecessary cursor open requests sent to the B-Tree layer.
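A conceptual sketch only (none of these names are the real VDBE structures):
the new opcode checks whether the requested cursor is already open on the same
index and, if so, does nothing, avoiding a round trip to the B-Tree layer.

  #include <stdio.h>

  typedef struct DemoCursor {
    int isOpen;     /* true once the cursor has been opened    */
    int iRoot;      /* root page of the index the cursor is on */
  } DemoCursor;

  static void openRead(DemoCursor *p, int iRoot){
    printf("B-Tree open on root %d\n", iRoot);   /* stands in for the real open */
    p->isOpen = 1;
    p->iRoot = iRoot;
  }

  /* OP_ReopenIdx behaviour: no-op when already open on the same index. */
  static void reopenIdx(DemoCursor *p, int iRoot){
    if( p->isOpen && p->iRoot == iRoot ) return;
    openRead(p, iRoot);
  }

  int main(void){
    DemoCursor cur = {0, 0};
    reopenIdx(&cur, 5);     /* opens the cursor          */
    reopenIdx(&cur, 5);     /* second request is a no-op */
    return 0;
  }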
The optimization of check-in [b67a6e33f2] does not work (it generates
incorrect VDBE code) if an OR term is AND-ed with a constant expression.
So back that optimization out and add a test case to make sure it does not
get added back in.
Fix the index name for the shadow tables in the spellfix1 extension so that
multiple instances of the spellfix1 virtual table can each have their own
index.
SQLite has long accepted some unadvertised and non-standard join syntax.
Add a test to ensure that future versions continue to accept this non-standard
syntax, to avoid breaking legacy applications that rely on it.