succeeded. added a test script to attempt testing this.
- fixed mysql reflection of default values to be PassiveDefault
- added reflected 'tinyint' type to MS-SQL [ticket:263]
-- temporary workaround dispose_local() added to SingletonThreadPool
-for sqlite applications that dispose of threads en masse
+- SingletonThreadPool has a size and does a cleanup pass, so that
+only a given number of thread-local connections stay around (needed
+for sqlite applications that dispose of threads en masse)
0.2.6
- big overhaul to schema to allow truly composite primary and foreign keys
engine = create_engine('mysql', pool=pool.QueuePool(getconn, pool_size=20, max_overflow=40))
-* pool_size=5 : the number of connections to keep open inside the connection pool. This is only used with `QueuePool`.
+* pool_size=5 : the number of connections to keep open inside the connection pool. This is used with `QueuePool` as well as `SingletonThreadPool` as of 0.2.7.
* max_overflow=10 : the number of connections to allow in "overflow", that is connections that can be opened above and beyond the initial five. this is only used with `QueuePool`.
* pool_timeout=30 : number of seconds to wait before giving up on getting a connection from the pool. This is only used with `QueuePool`.
* echo=False : if True, the Engine will log all statements as well as a repr() of their parameter lists to the engines logger, which defaults to sys.stdout. The `echo` attribute of `ComposedSQLEngine` can be modified at any time to turn logging on and off. If set to the string `"debug"`, result rows will be printed to the standard output as well.
* echo\_pool=False : if set to True, connections being pulled and retrieved from/to the pool will be logged to the standard output, as well as pool sizing information.
* use\_threadlocal=True : if set to True, repeated calls to connect() within the same application thread will be guaranteed to return the **same** connection object, if one has already been retrieved from the pool and has not been returned yet. This allows code to retrieve a connection from the pool, and then while still holding on to that connection, to call other functions which also ask the pool for a connection of the same arguments; those functions will act upon the same connection that the calling method is using. Note that once the connection is returned to the pool, it then may be used by another thread. To guarantee a single unique connection per thread that **never** changes, use the option `poolclass=SingletonThreadPool`, in which case the use_threadlocal parameter is not used.
* poolclass=QueuePool : the Pool class used by the pool module to provide pooling. QueuePool uses the Python `Queue.Queue` class to maintain a list of available connections. A developer can supply his or her own Pool class to supply a different pooling algorithm. Also included is the `SingletonThreadPool`, which provides a single distinct connection per thread and is required with SQLite.
-* pool\_size=5 : used by `QueuePool` - the size of the pool to be maintained. This is the largest number of connections that will be kept persistently in the pool. Note that the pool begins with no connections; once this number of connections is requested, that number of connections will remain.
+* pool\_size=5 : used by `QueuePool` as well as `SingletonThreadPool` as of 0.2.7 - the size of the pool to be maintained. This is the largest number of connections that will be kept persistently in the pool. Note that the pool begins with no connections; once this number of connections is requested, that number of connections will remain.
* max\_overflow=10 : used by `QueuePool` - the maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is `pool_size` + `max_overflow`, and the total number of "sleeping" connections the pool will allow is `pool_size`. `max_overflow` can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections.
* timeout=30 : used by `QueuePool` - the timeout before giving up on returning a connection, if none are available and the `max_overflow` has been reached.
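The interaction of `pool_size` and `max_overflow` described above can be illustrated with a minimal stand-in pool. This is a sketch using only the standard library, not SQLAlchemy's actual `QueuePool`; the class and helper names (`MiniQueuePool`, `getconn`) are invented for the example:

```python
import queue
import threading

class MiniQueuePool:
    """Toy illustration of pool_size/max_overflow semantics (not
    SQLAlchemy's QueuePool): at most pool_size connections sleep in
    the pool, and at most pool_size + max_overflow exist at once."""

    def __init__(self, creator, pool_size=5, max_overflow=10, timeout=30):
        self._creator = creator
        self._pool = queue.Queue(pool_size)    # "sleeping" connections
        self._max = pool_size + max_overflow   # cap on simultaneous connections
        self._timeout = timeout
        self._live = 0
        self._lock = threading.Lock()

    def connect(self):
        try:
            return self._pool.get_nowait()     # reuse a pooled connection
        except queue.Empty:
            pass
        with self._lock:
            if self._live < self._max:
                self._live += 1
                return self._creator()         # open a new/overflow connection
        # at the cap: block up to `timeout` seconds for a returned connection
        return self._pool.get(timeout=self._timeout)

    def return_conn(self, conn):
        try:
            self._pool.put_nowait(conn)        # back into the pool
        except queue.Full:
            with self._lock:
                self._live -= 1                # overflow connection: discard

conns = []
def getconn():
    conns.append(object())
    return conns[-1]

p = MiniQueuePool(getconn, pool_size=2, max_overflow=3)
checked_out = [p.connect() for _ in range(5)]  # pool_size + max_overflow
for c in checked_out:
    p.return_conn(c)                           # only 2 stay "sleeping"
```

After all five connections are returned, only `pool_size` of them remain in the queue; the three overflow connections are discarded, matching the behavior described for `max_overflow`.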
# SQLite connections require the SingletonThreadPool
p = pool.SingletonThreadPool(getconn)
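The per-thread behavior can be demonstrated with a small stand-alone sketch in the spirit of `SingletonThreadPool` (a toy dictionary-of-connections, not SQLAlchemy's actual class; `MiniSingletonThreadPool` and `getconn` are invented names for the example):

```python
import threading

class MiniSingletonThreadPool:
    """Toy per-thread connection map, sketching SingletonThreadPool's
    approach: one connection per thread ident, with a size-bounded
    cleanup pass."""

    def __init__(self, creator, pool_size=5):
        self._creator = creator
        self._conns = {}       # thread ident -> connection
        self.size = pool_size

    def connect(self):
        ident = threading.get_ident()
        try:
            return self._conns[ident]          # same thread, same connection
        except KeyError:
            c = self._creator()
            self._conns[ident] = c
            if len(self._conns) > self.size:
                self.cleanup()                 # too many thread-local entries
            return c

    def cleanup(self):
        # evict entries until only `size` remain; entries belonging to
        # dead threads are just as likely to be evicted as live ones
        for key in list(self._conns):
            if len(self._conns) <= self.size:
                break
            del self._conns[key]

counter = [0]
def getconn():
    counter[0] += 1
    return "conn%d" % counter[0]

p = MiniSingletonThreadPool(getconn)
a = p.connect()
b = p.connect()              # same thread: returns the same object as `a`

other = []
t = threading.Thread(target=lambda: other.append(p.connect()))
t.start()
t.join()                     # another thread gets a distinct connection
```

Within one thread, repeated `connect()` calls hand back the identical connection; a second thread gets its own, which is exactly why SQLite (whose connections cannot cross threads) requires this pool class.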
-#### Important Note about Disposing Threads with SingletonThreadPool {@name=note}
-
-SQLite connections automatically use `SingletonThreadPool` to manage connections. This is a simple dictionary of connections mapped to the identifiers of threads. If you are running an application which disposes of threads, such as FastCGI or SimpleHTTPServer, it is *extremely important* that the corresponding SQLite connection within the exiting thread also be removed, or the connection will remain:
-
- {python}
- pool.dispose_local()
-
-Or with an engine:
-
- {python}
- engine.connection_provider._pool.dispose_local()
-
-A future release of SQLAlchemy will address this in a more automated way.
\ No newline at end of file
class SingletonThreadPool(Pool):
"""Maintains one connection per each thread, never moving to another thread. this is
used for SQLite."""
- def __init__(self, creator, **params):
+ def __init__(self, creator, pool_size=5, **params):
Pool.__init__(self, **params)
self._conns = {}
self._creator = creator
+ self.size = pool_size
    def dispose(self):
        for key, conn in self._conns.items():
            try:
                conn.close()
            except Exception:
                pass
        self._conns.clear()
    def dispose_local(self):
        try:
            del self._conns[thread.get_ident()]
        except KeyError:
            pass
-
+
+    def cleanup(self):
+        for key in self._conns.keys():
+            try:
+                del self._conns[key]
+            except KeyError:
+                pass
+            if len(self._conns) <= self.size:
+                return
+
def status(self):
return "SingletonThreadPool id:%d thread:%d size: %d" % (id(self), thread.get_ident(), len(self._conns))
    def do_get(self):
        try:
            return self._conns[thread.get_ident()]
        except KeyError:
            c = self._creator()
            self._conns[thread.get_ident()] = c
+            if len(self._conns) > self.size:
+                self.cleanup()
            return c
class QueuePool(Pool):