sqlite_memory_db = create_engine('sqlite://')
-The :class:`~sqlalchemy.engine.base.Engine` will ask the connection pool for a
+The :class:`.Engine` will ask the connection pool for a
connection when the ``connect()`` or ``execute()`` methods are called. The
-default connection pool, :class:`~sqlalchemy.pool.QueuePool`, as well as the
-default connection pool used with SQLite,
-:class:`~sqlalchemy.pool.SingletonThreadPool`, will open connections to the
+default connection pool, :class:`~.QueuePool`, will open connections to the
database on an as-needed basis. As concurrent statements are executed,
-:class:`~sqlalchemy.pool.QueuePool` will grow its pool of connections to a
+:class:`.QueuePool` will grow its pool of connections to a
default size of five, and will allow a default "overflow" of ten. Since the
-:class:`~sqlalchemy.engine.base.Engine` is essentially "home base" for the
+:class:`.Engine` is essentially "home base" for the
connection pool, it follows that you should keep a single
-:class:`~sqlalchemy.engine.base.Engine` per database established within an
+:class:`.Engine` per database established within an
application, rather than creating a new one for each connection.
+.. note:: :class:`.QueuePool` is not used by default for SQLite engines. See
+ :ref:`sqlite_toplevel` for details on SQLite connection pool usage.
+
.. autoclass:: sqlalchemy.engine.url.URL
:members:
engine = create_engine('postgresql://me@localhost/mydb',
pool_size=20, max_overflow=0)
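The sizing behavior described above can be exercised against :class:`~sqlalchemy.pool.QueuePool` directly. The following is an illustrative sketch only, using the standard library ``sqlite3`` module as the connection creator; it is not how applications normally construct pools (``create_engine()`` does this for you):

```python
# Illustrative only: drive QueuePool directly with a sqlite3 creator.
import sqlite3

from sqlalchemy.pool import QueuePool

# Connections are opened on an as-needed basis, up to
# pool_size + max_overflow concurrent connections.
pool = QueuePool(lambda: sqlite3.connect(':memory:'),
                 pool_size=5, max_overflow=10)

c1 = pool.connect()
c2 = pool.connect()

print(pool.size())        # the configured pool size
print(pool.checkedout())  # connections currently checked out
```

Returning a connection with ``c1.close()`` places it back in the pool rather than closing the underlying DBAPI connection.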
-In the case of SQLite, a :class:`SingletonThreadPool` is provided instead,
-to provide compatibility with SQLite's restricted threading model, as well
-as to provide a reasonable default behavior to SQLite "memory" databases,
-which maintain their entire dataset within the scope of a single connection.
+In the case of SQLite, the :class:`.SingletonThreadPool` or
+:class:`.NullPool` are selected by the dialect to provide
+greater compatibility with SQLite's threading and locking
+model, as well as to provide a reasonable default behavior
+to SQLite "memory" databases, which maintain their entire
+dataset within the scope of a single connection.
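The selection rule can be summarized with a small sketch. Note that ``select_sqlite_pool_name()`` below is a hypothetical helper written for illustration, not part of the SQLAlchemy API; the real decision is made by the pysqlite dialect:

```python
def select_sqlite_pool_name(database):
    """Hypothetical helper mirroring the pysqlite dialect's choice.

    A file path selects NullPool; ':memory:' (or an empty database
    portion, which also means a memory database) selects
    SingletonThreadPool.
    """
    if database and database != ':memory:':
        return 'NullPool'
    return 'SingletonThreadPool'

select_sqlite_pool_name('foo.db')     # file-based -> 'NullPool'
select_sqlite_pool_name(':memory:')   # -> 'SingletonThreadPool'
```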
All SQLAlchemy pool implementations have in common
that none of them "pre create" connections - all implementations wait
+.. _sqlite_toplevel:
+
SQLite
======
import datetime
-# step 2. databases
+# step 2. databases.
+# db1 is used for id generation. The "pool_threadlocal" flag
+# causes the id_generator() to use the same connection as that
+# of an ongoing transaction within db1.
echo = True
-db1 = create_engine('sqlite://', echo=echo)
+db1 = create_engine('sqlite://', echo=echo, pool_threadlocal=True)
db2 = create_engine('sqlite://', echo=echo)
db3 = create_engine('sqlite://', echo=echo)
db4 = create_engine('sqlite://', echo=echo)
"func.current_timestamp()" is registered as returning a DATETIME type in
SQLAlchemy, so this function still receives SQLAlchemy-level result processing.
-Threading Behavior
+Pooling Behavior
------------------
Pysqlite connections do not support being moved between threads, unless
access to the connection is limited to a single worker thread which communicates
through a queueing mechanism to concurrent threads.
-To provide a default which accomodates SQLite's default threading capabilities
-somewhat reasonably, the SQLite dialect will specify that the :class:`~sqlalchemy.pool.SingletonThreadPool`
-be used by default. This pool maintains a single SQLite connection per thread
-that is held open up to a count of five concurrent threads. When more than five threads
-are used, a cleanup mechanism will dispose of excess unused connections.
-
-Two optional pool implementations that may be appropriate for particular SQLite usage scenarios:
-
- * the :class:`sqlalchemy.pool.StaticPool` might be appropriate for a multithreaded
- application using an in-memory database, assuming the threading issues inherent in
- pysqlite are somehow accomodated for. This pool holds persistently onto a single connection
- which is never closed, and is returned for all requests.
+To provide for these two behaviors, the pysqlite dialect will select a
+suitable :class:`.Pool` implementation:
+
+* When a ``:memory:`` SQLite database is specified, the dialect will use :class:`.SingletonThreadPool`.
+  This pool maintains a single connection per thread, so that all access to the engine within
+  the current thread uses the same ``:memory:`` database.
+* When a file-based database is specified, the dialect will use :class:`.NullPool` as the source
+ of connections. This pool closes and discards connections which are returned to the pool immediately.
+  SQLite file-based connections have extremely low overhead, so pooling is not necessary.
+ The scheme also prevents a connection from being used again in a different thread
+ and works best with SQLite's coarse-grained file locking.
- * the :class:`sqlalchemy.pool.NullPool` might be appropriate for an application that
- makes use of a file-based sqlite database. This pool disables any actual "pooling"
- behavior, and simply opens and closes real connections corresonding to the :func:`connect()`
- and :func:`close()` methods. SQLite can "connect" to a particular file with very high
- efficiency, so this option may actually perform better without the extra overhead
- of :class:`SingletonThreadPool`. NullPool will of course render a ``:memory:`` connection
- useless since the database would be lost as soon as the connection is "returned" to the pool.
+.. note:: The default selection of :class:`.NullPool` for SQLite file-based databases
+   is new in SQLAlchemy 0.7. Previous versions selected
+   :class:`.SingletonThreadPool` by default for all SQLite databases.
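Either default can be overridden by passing ``poolclass`` to ``create_engine()``. As one sketch, :class:`~sqlalchemy.pool.StaticPool` can share a single ``:memory:`` connection across all requests; ``check_same_thread=False`` is the pysqlite connect argument that permits that connection to be used from multiple threads, and whether that is safe depends on your application's locking:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import StaticPool

# StaticPool holds one connection open persistently and returns it
# for every request, so all users of the engine see the same
# :memory: database.
engine = create_engine('sqlite://',
                       poolclass=StaticPool,
                       connect_args={'check_same_thread': False})
```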
Unicode
-------
class SQLiteDialect_pysqlite(SQLiteDialect):
default_paramstyle = 'qmark'
- poolclass = pool.SingletonThreadPool
colspecs = util.update_copy(
SQLiteDialect.colspecs,
raise e
return sqlite
+ @classmethod
+ def get_pool_class(cls, url):
+ if url.database and url.database != ':memory:':
+ return pool.NullPool
+ else:
+ return pool.SingletonThreadPool
+
def _get_server_version_info(self, connection):
return self.dbapi.sqlite_version_info
import re, random
from sqlalchemy.engine import base, reflection
from sqlalchemy.sql import compiler, expression
-from sqlalchemy import exc, types as sqltypes, util
+from sqlalchemy import exc, types as sqltypes, util, pool
AUTOCOMMIT_REGEXP = re.compile(
r'\s*(?:UPDATE|INSERT|CREATE|DELETE|DROP|ALTER)',
@property
def dialect_description(self):
return self.name + "+" + self.driver
+
+ @classmethod
+ def get_pool_class(cls, url):
+ return getattr(cls, 'poolclass', pool.QueuePool)
def initialize(self, connection):
try:
class DefaultEngineStrategy(EngineStrategy):
    """Base class for built-in strategies."""
- pool_threadlocal = False
-
def create(self, name_or_url, **kwargs):
# create url.URL object
u = url.make_url(name_or_url)
creator = kwargs.pop('creator', connect)
- poolclass = (kwargs.pop('poolclass', None) or
- getattr(dialect_cls, 'poolclass', poollib.QueuePool))
+ poolclass = kwargs.pop('poolclass', None)
+ if poolclass is None:
+ poolclass = dialect_cls.get_pool_class(u)
pool_args = {}
# consume pool arguments from kwargs, translating a few of
tk = translate.get(k, k)
if tk in kwargs:
pool_args[k] = kwargs.pop(tk)
- pool_args.setdefault('use_threadlocal', self.pool_threadlocal)
pool = poolclass(creator, **pool_args)
else:
if isinstance(pool, poollib._DBProxy):
    """Strategy for configuring an Engine with threadlocal behavior."""
name = 'threadlocal'
- pool_threadlocal = True
engine_cls = threadlocal.TLEngine
ThreadLocalEngineStrategy()
Maintains one connection per each thread, never moving a connection to a
thread other than the one which it was created in.
- This is used for SQLite, which both does not handle multithreading by
- default, and also requires a singleton connection if a :memory: database
- is being used.
-
Options are the same as those of :class:`Pool`, as well as:
:param pool_size: The number of threads in which to maintain connections
at once. Defaults to five.
-
+
+ :class:`.SingletonThreadPool` is used by the SQLite dialect
+ automatically when a memory-based database is used.
+ See :ref:`sqlite_toplevel`.
+
"""
def __init__(self, creator, pool_size=5, **kw):
return c
class QueuePool(Pool):
- """A Pool that imposes a limit on the number of open connections."""
+ """A :class:`Pool` that imposes a limit on the number of open connections.
+
+ :class:`.QueuePool` is the default pooling implementation used for
+ all :class:`.Engine` objects, unless the SQLite dialect is in use.
+
+ """
def __init__(self, creator, pool_size=5, max_overflow=10, timeout=30,
**kw):
Reconnect-related functions such as ``recycle`` and connection
invalidation are not supported by this Pool implementation, since
no connections are held persistently.
+
+    :class:`.NullPool` is used by the SQLite dialect automatically
+ when a file-based database is used (as of SQLAlchemy 0.7).
+ See :ref:`sqlite_toplevel`.
"""
assert_raises_message
import datetime
from sqlalchemy import *
-from sqlalchemy import exc, sql, schema
+from sqlalchemy import exc, sql, schema, pool
from sqlalchemy.dialects.sqlite import base as sqlite, \
pysqlite as pysqlite_dialect
from sqlalchemy.test import *
except exc.DBAPIError:
pass
raise
+
+ def test_pool_class(self):
+ e = create_engine('sqlite+pysqlite://')
+ assert e.pool.__class__ is pool.SingletonThreadPool
+ e = create_engine('sqlite+pysqlite:///:memory:')
+ assert e.pool.__class__ is pool.SingletonThreadPool
+
+ e = create_engine('sqlite+pysqlite:///foo.db')
+ assert e.pool.__class__ is pool.NullPool
+
+
def test_dont_reflect_autoindex(self):
meta = MetaData(testing.db)
t = Table('foo', meta, Column('bar', String, primary_key=True))
global db1, db2, db3, db4, weather_locations, weather_reports
try:
- db1 = create_engine('sqlite:///shard1.db')
+ db1 = create_engine('sqlite:///shard1.db', pool_threadlocal=True)
except ImportError:
raise SkipTest('Requires sqlite')
db2 = create_engine('sqlite:///shard2.db')
def id_generator(ctx):
# in reality, might want to use a separate transaction for this.
- c = db1.connect()
+
+ c = db1.contextual_connect()
nextid = c.execute(ids.select(for_update=True)).scalar()
c.execute(ids.update(values={ids.c.nextid : ids.c.nextid + 1}))
return nextid