removed pre_exec assertion from oracle/firebird regarding "check for sequence/primary key value"
fix to Unicode type to check for null, fixes [ticket:109]
create_engine() now uses genericized parameters; host/hostname, db/dbname/database, password/passwd, etc. for all engine connections
fix to select([func(column)]) so that it creates a FROM clause to the column's table, fixes [ticket:111]
doc updates for column defaults, indexes, connection pooling, engine params
unit tests for the above bugfixes
0.1.4
+- create_engine() now uses genericized parameters; host/hostname, db/dbname/database, password/passwd, etc. for all engine connections. makes
+engine URIs much more "universal"
+- another overhaul to EagerLoading when used in conjunction with mappers that
+inherit; improvements to eager loads figuring out their aliased queries
+correctly, also relations set up against a mapper with inherited mappers will
+create joins against the table that is specific to the mapper itself (i.e., not
+any tables that are inherited or further down the inheritance chain); this can
+be overridden by using custom primary/secondary joins.
+- added onupdate parameter to Column, will exec SQL/python upon an update
+statement. Also adds "for_update=True" to all DefaultGenerator subclasses
+- added rudimentary support for Oracle table reflection.
+- checked in an initial Firebird module, awaiting testing.
+- added sql.ClauseParameters dictionary object as the result for
+compiled.get_params(), does late-typeprocessing of bind parameters so
+that the original values are easier to access
+- more docs for indexes, column defaults, connection pooling, engine construction
+- overhaul to the construction of the types system. uses a simpler inheritance pattern
+so that any of the generic types can be easily subclassed, with no need for TypeDecorator.
+- added "convert_unicode=False" parameter to SQLEngine, will cause all String types to
+perform unicode encoding/decoding (makes Strings act like Unicodes)
+- added 'encoding="utf8"' parameter to engine. the given encoding will be
+used for all encode/decode calls within Unicode types as well as Strings
+when convert_unicode=True.
+- improved support to column defaults when used by mappers; mappers will pull
+pre-executed defaults from statement's executed bind parameters
+(pre-conversion) to populate them into a saved object's attributes; if any
+PassiveDefaults have fired off, will instead post-fetch the row from the DB to
+populate the object.
+- added 'get_session().invalidate(*obj)' method to objectstore, instances will
+refresh() themselves upon the next attribute access.
+- improvements to SQL func calls including an "engine" keyword argument so
+they can be execute()d or scalar()ed standalone, also added func accessor to
+SQLEngine
0.1.3
-- completed "post_update" feature, will add a second update statement before inserts
-and after deletes in order to reconcile a relationship without any dependencies
-being created; used when persisting two rows that are dependent on each other
-- completed mapper.using(session) function, localized per-object Session functionality;
-objects can be declared and manipulated as local to any user-defined Session
+- completed "post_update" feature, will add a second update statement before
+inserts and after deletes in order to reconcile a relationship without any
+dependencies being created; used when persisting two rows that are dependent
+on each other
+- completed mapper.using(session) function, localized per-object Session
+functionality; objects can be declared and manipulated as local to any
+user-defined Session
- fix to Oracle "row_number over" clause with multiple tables
- mapper.get() was not selecting multiple-keyed objects if the mapper's table was a join,
such as in an inheritance relationship; this is fixed.
<p>
Engines exist for SQLite, Postgres, MySQL, and Oracle, using the Pysqlite, Psycopg (1 or 2), MySQLdb, and cx_Oracle modules. Each engine imports its corresponding module, which is required to be installed. For Postgres and Oracle, an alternate module may be specified at construction time as well; an example appears after the connection examples below.
</p>
- <p>An example of connecting to each engine is as follows:</p>
+    <p>The string-based argument names for connecting are translated to the appropriate names when the connection is made; argument names include "host" or "hostname" for the database host, "database", "db", or "dbname" for the database name (also "dsn" for Oracle), "user" or "username" for the user, and "password", "pw", or "passwd" for the password. SQLite expects "filename" or "file" for the filename, or if None it defaults to ":memory:".</p>
+ <p>The connection arguments can be specified as a string + dictionary pair, or a single URL-encoded string, as follows:</p>
<&|formatting.myt:code&>
from sqlalchemy import *
# sqlite in memory
sqlite_engine = create_engine('sqlite', {'filename':':memory:'}, **opts)
+
+ # via URL
+ sqlite_engine = create_engine('sqlite://', **opts)
# sqlite using a file
sqlite_engine = create_engine('sqlite', {'filename':'querytest.db'}, **opts)
+ # via URL
+ sqlite_engine = create_engine('sqlite://filename=querytest.db', **opts)
+
# postgres
postgres_engine = create_engine('postgres',
{'database':'test',
'user':'scott',
'password':'tiger'}, **opts)
+ # via URL
+ postgres_engine = create_engine('postgres://database=test&host=127.0.0.1&user=scott&password=tiger')
+
# mysql
mysql_engine = create_engine('mysql',
{
</&>
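+    <p>For Postgres and Oracle, an alternate DBAPI module can be swapped in via the "module" keyword argument (described further in the options section below). A minimal sketch, assuming the psycopg 1 module is importable as "psycopg":</p>
+    <&|formatting.myt:code&>
+        import psycopg
+        # use psycopg 1 instead of the default psycopg2 module
+        postgres_engine = create_engine('postgres',
+                            {'database':'test',
+                            'user':'scott',
+                            'password':'tiger'},
+                            module=psycopg)
+    </&>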
<p>Note that the general form of connecting to an engine is:</p>
- <&|formatting.myt:code&>
+ <&|formatting.myt:code &>
+ # separate arguments
engine = create_engine(
<enginename>,
{<named DBAPI arguments>},
        <sqlalchemy options>
)
+
+ # url
+ engine = create_engine('<enginename>://<named DBAPI arguments>', <sqlalchemy options>)
</&>
- <p>The second argument is a dictionary whose key/value pairs will be passed to the underlying DBAPI connect() method as keyword arguments. Any keyword argument supported by the DBAPI module can be in this dictionary.</p>
- <p>Engines can also be loaded by URL. The above format is converted into <span class="codeline"><% '<enginename>://key=val&key=val' |h %></span>:
- <&|formatting.myt:code&>
- sqlite_engine = create_engine('sqlite://filename=querytest.db')
- postgres_engine = create_engine('postgres://database=test&user=scott&password=tiger')
- </&>
- </p>
</&>
<&|doclib.myt:item, name="methods", description="Database Engine Methods" &>
    <p>A few useful methods on the SQLEngine are described here:</p>
<&|doclib.myt:item, name="options", description="Database Engine Options" &>
<p>The remaining arguments to <span class="codeline">create_engine</span> are keyword arguments that are passed to the specific subclass of <span class="codeline">sqlalchemy.engine.SQLEngine</span> being used, as well as the underlying <span class="codeline">sqlalchemy.pool.Pool</span> instance. All of the options described in the previous section <&formatting.myt:link, path="pooling_configuration"&> can be specified, as well as engine-specific options:</p>
<ul>
- <li>pool=None : an instance of <span class="codeline">sqlalchemy.pool.DBProxy</span> to be used as the underlying source for connections (DBProxy is described in the previous section). If None, a default DBProxy will be created using the engine's own database module with the given arguments.</li>
+ <li><p>pool=None : an instance of <span class="codeline">sqlalchemy.pool.Pool</span> to be used as the underlying source for connections, overriding the engine's connect arguments (pooling is described in the previous section). If None, a default Pool (QueuePool or SingletonThreadPool as appropriate) will be created using the engine's connect arguments.</p>
+ <p>Example:</p>
+ <&|formatting.myt:code&>
+ from sqlalchemy import *
+ import sqlalchemy.pool as pool
+ import MySQLdb
+
+ def getconn():
+        return MySQLdb.connect(user='ed', db='mydb')
+
+ engine = create_engine('mysql', pool=pool.QueuePool(getconn, pool_size=20, max_overflow=40))
+ </&></li>
    <li>echo=False : if True, the SQLEngine will log all statements as well as a repr() of their parameter lists to the engine's logger, which defaults to sys.stdout. A SQLEngine instance's "echo" data member can be modified at any time to turn logging on and off. If set to the string 'debug', result rows will be printed to the standard output as well.</li>
<li>logger=None : a file-like object where logging output can be sent, if echo is set to True. This defaults to sys.stdout.</li>
    <li>module=None : used by Oracle and Postgres, this is a reference to a DBAPI2 module to be used instead of the engine's default module. For Postgres, the default is psycopg2, or psycopg1 if 2 cannot be found. For Oracle, it's cx_Oracle.</li>
    <li>use_ansi=True : used only by Oracle; when False, the Oracle driver attempts to support a particular "quirk" of some Oracle databases, that the LEFT OUTER JOIN SQL syntax is not supported, and the "Oracle join" syntax of using <% "<column1>(+)=<column2>" |h%> must be used in order to achieve a LEFT OUTER JOIN. It's advised that the Oracle database be configured to have full ANSI support instead of using this feature.</li>
<li>use_oids=False : used only by Postgres, will enable the column name "oid" as the object ID column. Postgres as of 8.1 has object IDs disabled by default.</li>
    <li>convert_unicode=False : if set to True, all String/character-based types will convert Unicode values to raw byte values going into the database, and all raw byte values to Python Unicode coming out in result sets. This is an engine-wide method to provide unicode across the board. For unicode conversion on a column-by-column level, use the Unicode column type instead. An example follows this list.</li>
- <li>encoding='utf-8' : the encoding to use when doing unicode translations.</li>
+ <li>encoding='utf-8' : the encoding to use for Unicode translations - passed to all encode/decode functions.</li>
+ <li>echo_uow=False : when True, logs unit of work commit plans to the standard output.</li>
</ul>
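+    <p>As an example of the convert_unicode and encoding options working together, the engine below will encode all unicode bind parameters to 'latin1' byte strings going into the database, and decode result values back to unicode (the connect arguments here are placeholders):</p>
+    <&|formatting.myt:code&>
+        engine = create_engine('postgres',
+                    {'database':'test', 'user':'scott', 'password':'tiger'},
+                    convert_unicode=True, encoding='latin1')
+    </&>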
</&>
<&|doclib.myt:item, name="proxy", description="Using the Proxy Engine" &>
<& pydoc.myt:obj_doc, obj=schema &>
<& pydoc.myt:obj_doc, obj=engine, classes=[engine.SQLEngine, engine.ResultProxy, engine.RowProxy] &>
-<& pydoc.myt:obj_doc, obj=sql, classes=[sql.Compiled, sql.ClauseElement, sql.TableClause, sql.ColumnClause] &>
+<& pydoc.myt:obj_doc, obj=sql, classes=[sql.ClauseParameters, sql.Compiled, sql.ClauseElement, sql.TableClause, sql.ColumnClause] &>
<& pydoc.myt:obj_doc, obj=pool, classes=[pool.DBProxy, pool.Pool, pool.QueuePool, pool.SingletonThreadPool] &>
<& pydoc.myt:obj_doc, obj=mapping &>
<& pydoc.myt:obj_doc, obj=mapping.objectstore, classes=[mapping.objectstore.Session, mapping.objectstore.Session.SessionTrans, mapping.objectstore.UnitOfWork] &>
onepage='documentation'
index='index'
title='SQLAlchemy Documentation'
- version = '0.1.3'
+ version = '0.1.4'
</%attr>
<%method title>
>>> othertable is news_articles
True
</&>
-
-
-
</&>
<&|doclib.myt:item, name="creating", description="Creating and Dropping Database Tables" &>
<p>Creating and dropping is easy, just use the <span class="codeline">create()</span> and <span class="codeline">drop()</span> methods:
<&|formatting.myt:code&>
- <&formatting.myt:poplink&>
- employees = Table('employees', engine,
+ <&formatting.myt:poplink&>employees = Table('employees', engine,
Column('employee_id', Integer, primary_key=True),
Column('employee_name', String(60), nullable=False, key='name'),
Column('employee_dept', Integer, ForeignKey("departments.department_id"))
employee_name VARCHAR(60) NOT NULL,
employee_dept INTEGER REFERENCES departments(department_id)
)
-
{} </&>
-
- <&formatting.myt:poplink&>
- employees.drop() <&|formatting.myt:codepopper, link="sql" &>
+
+ <&formatting.myt:poplink&>employees.drop() <&|formatting.myt:codepopper, link="sql" &>
DROP TABLE employees
{} </&>
</&>
</&>
+
+ <&|doclib.myt:item, name="defaults", description="Column Defaults and OnUpdates" &>
+        <p>SQLAlchemy includes flexible constructs for creating default column values upon the insertion of rows, as well as upon update. These defaults can take several forms: a constant, a Python callable to be pre-executed before the SQL statement, a SQL expression or function to be pre-executed before the statement, a pre-executed Sequence (for databases that support sequences), or a "passive" default, which is a default generated by the database itself upon insert, the value of which can then be post-fetched by the engine, provided the row has a primary key with which to locate it.</p>
+ <&|doclib.myt:item, name="oninsert", description="Pre-Executed Insert Defaults" &>
+ <p>A basic default is most easily specified by the "default" keyword argument to Column:</p>
+ <&|formatting.myt:code&>
+ # a function to create primary key ids
+            i = 0
+            def mydefault():
+                global i  # the counter must be declared global to be incremented
+                i += 1
+                return i
+
+ t = Table("mytable", db,
+ # function-based default
+ Column('id', Integer, primary_key=True, default=mydefault),
+
+ # a scalar default
+ Column('key', String(10), default="default")
+ )
+ </&>
+ <p>The "default" keyword can also take SQL expressions, including select statements or direct function calls:</p>
+ <&|formatting.myt:code&>
+ t = Table("mytable", db,
+ Column('id', Integer, primary_key=True),
+
+ # define 'create_date' to default to now()
+ Column('create_date', DateTime, default=func.now()),
+
+ # define 'key' to pull its default from the 'keyvalues' table
+                Column('key', String(20), default=keyvalues.select(keyvalues.c.type=='type1', limit=1))
+ )
+ </&>
+ <p>The "default" keyword argument is shorthand for using a ColumnDefault object in a column definition. This syntax is optional, but is required for other types of defaults, futher described below:</p>
+ <&|formatting.myt:code&>
+ Column('mycolumn', String(30), ColumnDefault(func.get_data()))
+ </&>
+ </&>
+
+ <&|doclib.myt:item, name="onupdate", description="Pre-Executed OnUpdate Defaults" &>
+        <p>Similar to an on-insert default is an on-update default, which is most easily specified by the "onupdate" keyword to Column, which can also be a constant, plain Python function, or SQL expression:</p>
+ <&|formatting.myt:code&>
+ t = Table("mytable", db,
+ Column('id', Integer, primary_key=True),
+
+ # define 'last_updated' to be populated with current_timestamp (the ANSI-SQL version of now())
+ Column('last_updated', DateTime, onupdate=func.current_timestamp()),
+ )
+ </&>
+ <p>To use a ColumnDefault explicitly for an on-update, use the "for_update" keyword argument:</p>
+ <&|formatting.myt:code&>
+ Column('mycolumn', String(30), ColumnDefault(func.get_data(), for_update=True))
+ </&>
+ </&>
+
+ <&|doclib.myt:item, name="passive", description="Inline Default Execution: PassiveDefault" &>
+        <p>A PassiveDefault indicates a column default or on-update value that is executed automatically by the database. This construct is used to specify a SQL function that will be rendered as the "DEFAULT" clause when creating tables, and also to indicate the presence of new data that is available to be "post-fetched" after an insert or update execution.</p>
+ <&|formatting.myt:code&>
+ t = Table('test', e,
+ Column('mycolumn', DateTime, PassiveDefault("sysdate"))
+ )
+ </&>
+ <p>A create call for the above table will produce:</p>
+ <&|formatting.myt:code&>
+ CREATE TABLE test (
+ mycolumn datetime default sysdate
+ )
+ </&>
+ <p>PassiveDefaults also send a message to the SQLEngine that data is available after update or insert. The object-relational mapper system uses this information to post-fetch rows after insert or update, so that instances can be refreshed with the new data. Below is a simplified version:</p>
+ <&|formatting.myt:code&>
+ # table with passive defaults
+ mytable = Table('mytable', engine,
+ Column('my_id', Integer, primary_key=True),
+
+ # an on-insert database-side default
+ Column('data1', Integer, PassiveDefault("d1_func")),
+
+ # an on-update database-side default
+ Column('data2', Integer, PassiveDefault("d2_func", for_update=True))
+ )
+            # insert a row
+            mytable.insert().execute(my_id=1)
+
+            # ask the engine: were there defaults fired off on that row ?
+            if mytable.engine.lastrow_has_defaults():
+                # postfetch the row based on primary key.
+                # this only works for a table with primary key columns defined
+                primary_key = mytable.engine.last_inserted_ids()
+                row = mytable.select(mytable.c.my_id == primary_key[0]).execute().fetchone()
+ </&>
+        <p>Tables that are reflected from the database and have default values set on them will receive those defaults as PassiveDefaults.</p>
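+        <p>For example, a sketch of reflecting such a table (this assumes a table named "mytable" with database-side defaults already exists):</p>
+        <&|formatting.myt:code&>
+            # reflected columns that have a database-side default
+            # will carry it as a PassiveDefault
+            t = Table('mytable', engine, autoload=True)
+        </&>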
+
+ <&|doclib.myt:item, name="postgres", description="The Catch: Postgres Primary Key Defaults always Pre-Execute" &>
+        <p>Current Postgres support does not rely upon OIDs to determine the identity of a row. This is because the usage of OIDs has been deprecated with Postgres, and they are disabled by default for table creates as of PG version 8. Psycopg2's "cursor.lastrowid" attribute only returns OIDs. Therefore, when inserting a new row which has passive defaults set on the primary key columns, the default function is <b>still pre-executed</b> since SQLAlchemy would otherwise have no way of retrieving the row just inserted.</p>
+ </&>
+ </&>
+ <&|doclib.myt:item, name="sequences", description="Defining Sequences" &>
+        <p>A table with a sequence looks like:</p>
+ <&|formatting.myt:code&>
+ table = Table("cartitems", db,
+ Column("cart_id", Integer, Sequence('cart_id_seq'), primary_key=True),
+ Column("description", String(40)),
+ Column("createdate", DateTime())
+ )
+ </&>
+ <p>The Sequence is used with Postgres or Oracle to indicate the name of a Sequence that will be used to create default values for a column. When a table with a Sequence on a column is created by SQLAlchemy, the Sequence object is also created. Similarly, the Sequence is dropped when the table is dropped. Sequences are typically used with primary key columns. When using Postgres, if an integer primary key column defines no explicit Sequence or other default method, SQLAlchemy will create the column with the SERIAL keyword, and will pre-execute a sequence named "tablename_columnname_seq" in order to retrieve new primary key values. Oracle, which has no "auto-increment" keyword, requires that a Sequence be created for a table if automatic primary key generation is desired. Note that for all databases, primary key values can always be explicitly stated within the bind parameters for any insert statement as well, removing the need for any kind of default generation function.</p>
+
+ <p>A Sequence object can be defined on a Table that is then used for a non-sequence-supporting database. In that case, the Sequence object is simply ignored. Note that a Sequence object is <b>entirely optional for all databases except Oracle</b>, as other databases offer options for auto-creating primary key values, such as AUTOINCREMENT, SERIAL, etc. SQLAlchemy will use these default methods for creating primary key values if no Sequence is present on the table metadata.</p>
+
+ <p>A sequence can also be specified with <span class="codeline">optional=True</span> which indicates the Sequence should only be used on a database that requires an explicit sequence, and not those that supply some other method of providing integer values. At the moment, it essentially means "use this sequence only with Oracle and not Postgres".</p>
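+        <p>A sketch of the "cartitems" table above using an optional sequence:</p>
+        <&|formatting.myt:code&>
+            table = Table("cartitems", db,
+                # used on Oracle; Postgres will use its own
+                # SERIAL-based integer generation instead
+                Column("cart_id", Integer, Sequence('cart_id_seq', optional=True), primary_key=True),
+                Column("description", String(40)),
+                Column("createdate", DateTime())
+            )
+        </&>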
+ </&>
+ </&>
+ <&|doclib.myt:item, name="indexes", description="Defining Indexes" &>
+        <p>Indexes can be defined on table columns: named or anonymous, unique or non-unique, and single- or multiple-column. Indexes are included along with table create and drop statements. They are not used for any kind of run-time constraint checking...SQLAlchemy leaves that job to the expert on constraint checking, the database itself.</p>
+ <&|formatting.myt:code&>
+ mytable = Table('mytable', engine,
+
+ # define a unique index
+ Column('col1', Integer, unique=True),
+
+ # define a unique index with a specific name
+ Column('col2', Integer, unique='mytab_idx_1'),
+
+ # define a non-unique index
+ Column('col3', Integer, index=True),
+
+ # define a non-unique index with a specific name
+ Column('col4', Integer, index='mytab_idx_2'),
+
+ # pass the same name to multiple columns to add them to the same index
+ Column('col5', Integer, index='mytab_idx_2'),
+
+ Column('col6', Integer),
+ Column('col7', Integer)
+ )
+
+ # create the table. all the indexes will be created along with it.
+ mytable.create()
+
+ # indexes can also be specified standalone
+ i = Index('mytab_idx_3', mytable.c.col6, mytable.c.col7, unique=False)
+
+ # which can then be created separately (will also get created with table creates)
+ i.create()
+
+ </&>
+ </&>
<&|doclib.myt:item, name="adapting", description="Adapting Tables to Alternate Engines" &>
- <p>Occasionally an application will need to reference the same tables within multiple databases simultaneously. Since a Table object is specific to a SQLEngine, an extra method is provided to create copies of the Table object for a different SQLEngine instance, which can represent a different set of connection parameters, or a totally different database driver:
+ <p>A Table object created against a specific engine can be re-created against a new engine using the <span class="codeline">toengine</span> method:</p>
<&|formatting.myt:code&>
# create two engines
pg_users = users.toengine(postgres_engine)
</&>
- <p>You can also create tables using a "database neutral" engine, which can serve as a starting point for tables that are then adapted to specific engines:</p>
+ <p>Also available is the "database neutral" ansisql engine:</p>
<&|formatting.myt:code&>
import sqlalchemy.ansisql as ansisql
generic_engine = ansisql.engine()
Column('user_id', Integer),
Column('user_name', String(50))
)
-
- sqlite_engine = create_engine('sqlite', {'filename':'querytest.db'})
- sqlite_users = users.toengine(sqlite_engine)
- sqlite_users.create()
</&>
-
- </&>
+ <p>Flexible "multi-engined" tables can also be achieved via the proxy engine, described in the section <&formatting.myt:link, path="dbengine_proxy"&>.</p>
- <&|doclib.myt:item, name="sequences", description="Defining Sequences" &>
- <P>A table with a sequence looks like:</p>
- <&|formatting.myt:code&>
- table = Table("cartitems", db,
- Column("cart_id", Integer, Sequence('cart_id_seq'), primary_key=True),
- Column("description", String(40)),
- Column("createdate", DateTime())
- )
- </&>
- <p>The Sequence is used when a Postgres or Oracle database schema defines a sequence of a specific name which must be used to create integer values. If a Sequence is not defined, Postgres will default to regular SERIAL access. Oracle currently has no default primary key method; so explicit primary key values or Sequence objects are required to insert new rows.</p>
-
-<p>Defining a Sequence means that it will be created along with the table.create() call, and that the sequence will be explicitly used when inserting new rows for this table, for databases that support sequences. If the Table is connected to a database that doesnt support sequences, the Sequence object is simply ignored. Note that a Sequence object is <b>entirely optional for all databases except Oracle</b>, as other databases offer options for auto-creating primary key values, such as AUTOINCREMENT, SERIAL, etc. SQLAlchemy will use these default methods for creating primary key values if no Sequence is present on the table metadata.</p>
+ <&|doclib.myt:item, name="primitives", description="Non-engine primitives: TableClause/ColumnClause" &>
-<p>A sequence can also be specified with <span class="codeline">optional=True</span> which indicates the Sequence should only be used on a database that requires an explicit sequence, and not those that supply some other method of providing integer values. At the moment, it essentially means "use this sequence only with Oracle and not Postgres".</p>
-
-<p>More docs TODO in this area include the ColumnDefault and PassiveDefault objects which provide more options to automatic generation of column values.</p>
+ <p>TableClause and ColumnClause are "primitive" versions of the Table and Column objects which dont use engines at all; applications that just want to generate SQL strings but not directly communicate with a database can use TableClause and ColumnClause objects, which are non-singleton and serve as the "lexical" base class of Table and Column:</p>
+ <&|formatting.myt:code&>
+ tab1 = TableClause('table1',
+ ColumnClause('id'),
+ ColumnClause('name'))
+
+ tab2 = TableClause('table2',
+ ColumnClause('id'),
+ ColumnClause('email'))
+
+ tab1.select(tab1.c.name == 'foo')
+ </&>
+
+        <p>TableClause and ColumnClause are strictly lexical. This means they are supported within the full range of SQL statement generation, but they don't support schema-level concepts like creates, drops, primary keys, defaults, nullable status, indexes, or foreign keys.</p>
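+        <p>To render a SQL string from these primitives without any database, the "database neutral" ansisql engine can be used to compile them (a sketch, assuming the compile() method accepts the engine to compile against):</p>
+        <&|formatting.myt:code&>
+            import sqlalchemy.ansisql as ansisql
+            generic_engine = ansisql.engine()
+
+            # compile the select against the generic engine to get its SQL string
+            s = tab1.select(tab1.c.name == 'foo')
+            print s.compile(engine=generic_engine)
+        </&>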
+ </&>
</&>
+
</&>
</p>
<ul>
<li>echo=False : if set to True, connections being pulled and retrieved from/to the pool will be logged to the standard output, as well as pool sizing information.</li>
- <li>use_threadlocal=True : if set to True, repeated calls to connect() within the same application thread will be guaranteed to return the <b>same</b> connection object, if one has already been retrieved from the pool and has not been returned yet. This allows code to retrieve a connection from the pool, and then while still holding on to that connection, to call other functions which also ask the pool for a connection of the same arguments; those functions will act upon the same connection that the calling method is using.</li>
- <li>poolclass=QueuePool : the Pool class used by the pool module to provide pooling. QueuePool uses the Python <span class="codeline">Queue.Queue</span> class to maintain a list of available connections. A developer can supply his or her own Pool class to supply a different pooling algorithm.</li>
+        <li>use_threadlocal=True : if set to True, repeated calls to connect() within the same application thread will be guaranteed to return the <b>same</b> connection object, if one has already been retrieved from the pool and has not been returned yet. This allows code to retrieve a connection from the pool, and then while still holding on to that connection, to call other functions which also ask the pool for a connection of the same arguments; those functions will act upon the same connection that the calling method is using. Note that once the connection is returned to the pool, it then may be used by another thread. To guarantee a single unique connection per thread that <b>never</b> changes, use the option <span class="codeline">poolclass=SingletonThreadPool</span>, in which case the use_threadlocal parameter is automatically set to False. An example follows this list.</li>
+        <li>poolclass=QueuePool : the Pool class used by the pool module to provide pooling. QueuePool uses the Python <span class="codeline">Queue.Queue</span> class to maintain a list of available connections. A developer can supply his or her own Pool class to supply a different pooling algorithm. Also included is the SingletonThreadPool, which provides a single distinct connection per thread and is required with SQLite.</li>
<li>pool_size=5 : used by QueuePool - the size of the pool to be maintained. This is the largest number of connections that will be kept persistently in the pool. Note that the pool begins with no connections; once this number of connections is requested, that number of connections will remain.</li>
- <li>max_overflow=10 : the maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow, and the total number of "sleeping" connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections.</li>
+ <li>max_overflow=10 : used by QueuePool - the maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow, and the total number of "sleeping" connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections.</li>
</ul>
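+    <p>A short sketch of the use_threadlocal behavior, using the transparent DBProxy created by pool.manage as described in the previous section (the connect arguments are placeholders):</p>
+    <&|formatting.myt:code&>
+        import sqlalchemy.pool as pool
+        import psycopg2
+
+        psycopg2 = pool.manage(psycopg2, pool_size=5, use_threadlocal=True)
+
+        # within a single thread, both calls below return a proxy for the
+        # same underlying connection, as long as the first is still checked out
+        c1 = psycopg2.connect("host=127.0.0.1 dbname=test user=scott password=tiger")
+        c2 = psycopg2.connect("host=127.0.0.1 dbname=test user=scott password=tiger")
+    </&>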
</&>
+
+ <&|doclib.myt:item, name="custom", description="Custom Pool Construction" &>
+    <p>One level below using a DBProxy to provide transparent pooling is creating the pool yourself. The pool module comes with two implementations of connection pools: <span class="codeline">QueuePool</span> and <span class="codeline">SingletonThreadPool</span>. While QueuePool uses Queue.Queue to provide connections, SingletonThreadPool provides a single per-thread connection, which SQLite requires.</p>
+
+    <p>Constructing your own pool involves passing a callable that is used to create a connection. Through this method, custom connection schemes can be made, such as connections that automatically execute initialization commands when created. The options from the previous section can be used as they apply to QueuePool or SingletonThreadPool.</p>
+ <&|formatting.myt:code, title="Plain QueuePool"&>
+ import sqlalchemy.pool as pool
+ import psycopg2
+
+ def getconn():
+            c = psycopg2.connect("host=127.0.0.1 dbname=test user=ed")
+            # execute a (hypothetical) initialization function on the connection before returning it
+            c.cursor().execute("select setup_encodings()")
+ return c
+
+ p = pool.QueuePool(getconn, max_overflow=10, pool_size=5, use_threadlocal=True)
+ </&>
+
+ <&|formatting.myt:code, title="SingletonThreadPool"&>
+ import sqlalchemy.pool as pool
+ import sqlite
+
+ def getconn():
+            return sqlite.connect('myfile.db')
+
+ # SQLite connections require the SingletonThreadPool
+ p = pool.SingletonThreadPool(getconn)
+ </&>
+
+ </&>
</&>
\ No newline at end of file
objects compiled within this object. The output is dependent on the paramstyle
of the DBAPI being used; if a named style, the return result will be a dictionary
with keynames matching the compiled statement. If a positional style, the output
- will be a list corresponding to the bind positions in the compiled statement.
+ will be a list, with an iterator that will return parameter
+ values in an order corresponding to the bind positions in the compiled statement.
For an executemany style of call, this method should be called for each element
in the list of parameter groups that will ultimately be executed.
bindparams = {}
bindparams.update(params)
+ d = sql.ClauseParameters(self.engine)
if self.positional:
- d = OrderedDict()
for k in self.positiontup:
b = self.binds[k]
- if self.engine is not None:
- d[k] = b.typeprocess(b.value, self.engine)
- else:
- d[k] = b.value
+ d.set_parameter(k, b.value, b)
else:
- d = {}
for b in self.binds.values():
- if self.engine is not None:
- d[b.key] = b.typeprocess(b.value, self.engine)
- else:
- d[b.key] = b.value
+ d.set_parameter(b.key, b.value, b)
for key, value in bindparams.iteritems():
try:
b = self.binds[key]
except KeyError:
continue
- if self.engine is not None:
- d[b.key] = b.typeprocess(value, self.engine)
- else:
- d[b.key] = value
+ d.set_parameter(b.key, value, b)
return d
def get_named_params(self, parameters):
return self.context.last_inserted_ids
def pre_exec(self, proxy, compiled, parameters, **kwargs):
- # this is just an assertion that all the primary key columns in an insert statement
- # have a value set up, or have a default generator ready to go
- if getattr(compiled, "isinsert", False):
- if isinstance(parameters, list):
- plist = parameters
- else:
- plist = [parameters]
- for param in plist:
- for primary_key in compiled.statement.table.primary_key:
- if not param.has_key(primary_key.key) or param[primary_key.key] is None:
- if primary_key.default is None:
- raise "Column '%s.%s': Firebird primary key columns require a default value or a schema.Sequence to create ids" % (primary_key.table.name, primary_key.name)
-
+ pass
+
def _executemany(self, c, statement, parameters):
rowcount = 0
for param in parameters:
def __init__(self, opts, module = None, **params):
if module is None:
self.module = mysql
- self.opts = opts or {}
+ self.opts = self._translate_connect_args(('host', 'db', 'user', 'passwd'), opts)
ansisql.ANSISQLEngine.__init__(self, **params)
def connect_args(self):
class OracleSQLEngine(ansisql.ANSISQLEngine):
def __init__(self, opts, use_ansi = True, module = None, **params):
self._use_ansi = use_ansi
- self.opts = opts or {}
+ self.opts = self._translate_connect_args((None, 'dsn', 'user', 'password'), opts)
if module is None:
self.module = cx_Oracle
else:
return self.context.last_inserted_ids
def pre_exec(self, proxy, compiled, parameters, **kwargs):
- # this is just an assertion that all the primary key columns in an insert statement
- # have a value set up, or have a default generator ready to go
- if getattr(compiled, "isinsert", False):
- if isinstance(parameters, list):
- plist = parameters
- else:
- plist = [parameters]
- for param in plist:
- for primary_key in compiled.statement.table.primary_key:
- if not param.has_key(primary_key.key) or param[primary_key.key] is None:
- if primary_key.default is None:
- raise "Column '%s.%s': Oracle primary key columns require a default value or a schema.Sequence to create ids" % (primary_key.table.name, primary_key.name)
+ pass
def _executemany(self, c, statement, parameters):
rowcount = 0
self.version = 1
except:
self.version = 1
- self.opts = opts or {}
+ self.opts = self._translate_connect_args(('host', 'database', 'user', 'password'), opts)
if self.opts.has_key('port'):
if self.version == 2:
self.opts['port'] = int(self.opts['port'])
self._figure_paramstyle()
self.logger = logger or util.Logger(origin='engine')
+ def _translate_connect_args(self, names, args):
+ """translates a dictionary of connection arguments to those used by a specific dbapi.
+ the names parameter is a tuple of argument names in the form ('host', 'database', 'user', 'password')
+ where the given strings match the corresponding argument names for the dbapi. Will return a dictionary
+ with the dbapi-specific parameters, the generic ones removed, and any additional parameters still remaining,
+        from the dictionary represented by args. Returns an empty dictionary if args is None."""
+ if args is None:
+ return {}
+ a = args.copy()
+ standard_names = [('host','hostname'), ('database', 'dbname'), ('user', 'username'), ('password', 'passwd', 'pw')]
+ for n in names:
+ sname = standard_names.pop(0)
+ if n is None:
+ continue
+ for sn in sname:
+ if sn != n and a.has_key(sn):
+ a[n] = a[sn]
+ del a[sn]
+ return a
def _get_ischema(self):
# We use a property for ischema so that the accessor
# creation only happens as needed, since otherwise we
parameters = [compiled.get_params(**m) for m in parameters]
else:
parameters = compiled.get_params(**parameters)
-
def proxy(statement=None, parameters=None):
if statement is None:
return cursor
for c in table.c:
if c.primary_key or not params.has_key(c.name):
continue
- if self._getattrbycolumn(obj, c) != params[c.name]:
- self._setattrbycolumn(obj, c, params[c.name])
+ if self._getattrbycolumn(obj, c) != params.get_original(c.name):
+ self._setattrbycolumn(obj, c, params.get_original(c.name))
def delete_obj(self, objects, uow):
"""called by a UnitOfWork object to delete objects, which involves a
name will all be included in the index, in the order of their
creation.
- unique=None : True or undex name. Indicates that this column is
+ unique=None : True or index name. Indicates that this column is
indexed in a unique index . Pass true to autogenerate the index
name. Pass a string to specify the index name. Multiple columns that
specify the same index name will all be included in the index, in the
def is_column(col):
return isinstance(col, ColumnElement)
+class ClauseParameters(util.OrderedDict):
+ """represents a dictionary/iterator of bind parameter key names/values. Includes parameters compiled with a Compiled object as well as additional arguments passed to the Compiled object's get_params() method. Parameter values will be converted as per the TypeEngine objects present in the bind parameter objects. The non-converted value can be retrieved via the get_original method. For Compiled objects that compile positional parameters, the values() iteration of the object will return the parameter values in the correct order."""
+ def __init__(self, engine=None):
+    super(ClauseParameters, self).__init__()
+ self.engine = engine
+ self.binds = {}
+ def set_parameter(self, key, value, bindparam):
+ self[key] = value
+ self.binds[key] = bindparam
+ def get_original(self, key):
+ return super(ClauseParameters, self).__getitem__(key)
+ def __getitem__(self, key):
+ v = super(ClauseParameters, self).__getitem__(key)
+ if self.engine is not None and self.binds.has_key(key):
+ v = self.binds[key].typeprocess(v, self.engine)
+ return v
+ def values(self):
+ return [self[key] for key in self]
+ def get_original_dict(self):
+ return self.copy()
+
class ClauseVisitor(object):
"""Defines the visiting of ClauseElements."""
def visit_column(self, column):pass
clause = BindParamClause(self.name, clause, shortname=self.name, type=None)
self.clauses.append(clause)
def _process_from_dict(self, data, asfrom):
+ super(Function, self)._process_from_dict(data, asfrom)
+ # this helps a Select object get the engine from us
data.setdefault(self, self)
def copy_container(self):
clauses = [clause.copy_container() for clause in self.clauses]
class Unicode(String):
def convert_bind_param(self, value, engine):
- if isinstance(value, unicode):
+ if value is not None and isinstance(value, unicode):
return value.encode(engine.encoding)
else:
return value
def convert_result_value(self, value, engine):
- if not isinstance(value, unicode):
+ if value is not None and not isinstance(value, unicode):
return value.decode(engine.encoding)
else:
return value
def itervalues(self):
return iter([self[key] for key in self._list])
- def iterkeys(self): return self.__iter__()
+ def iterkeys(self):
+ return self.__iter__()
def iteritems(self):
return iter([(key, self[key]) for key in self.keys()])
from setuptools import setup, find_packages
setup(name = "SQLAlchemy",
- version = "0.1.3",
+ version = "0.1.4",
description = "Database Abstraction Library",
author = "Mike Bayer",
author_email = "mike_mp@zzzcomputing.com",
def test_index_create_inline(self):
"""Test indexes defined with tables"""
- testbase.db.echo = True
capt = []
class dummy:
pass
au = AddressUser.mapper.get_by(user_name='jack')
self.assert_(au.email_address == 'jack@gmail.com')
-
+
class DeferredTest(MapperSuperTest):
def testbasic(self):
self.assert_(name_of(7) != name1, msg="user_name should not be %s" % name1)
self.assert_(name_of(8) != name2, msg="user_name should not be %s" % name2)
+class UnicodeTest(AssertMixin):
+ def setUpAll(self):
+ global uni_table
+ uni_table = Table('test', db,
+ Column('id', Integer, primary_key=True),
+ Column('txt', Unicode(50))).create()
+
+ def tearDownAll(self):
+ uni_table.drop()
+ uni_table.deregister()
+
+ def testbasic(self):
+ class Test(object):
+ pass
+ assign_mapper(Test, uni_table)
+
+ txt = u"\u0160\u0110\u0106\u010c\u017d"
+ t1 = Test(id=1, txt = txt)
+ self.assert_(t1.txt == txt)
+ objectstore.commit()
+ self.assert_(t1.txt == txt)
+
+
class PKTest(AssertMixin):
def setUpAll(self):
db.echo = False
)
]
)
- return
objects[2].keywords.append(k)
dkid = objects[5].keywords[1].keyword_id
del objects[5].keywords[1]
def testliteral(self):
self.runtest(select([literal("foo") + literal("bar")], from_obj=[table1]),
- "SELECT :literal + :literal_1 FROM mytable", engine=db)
+ "SELECT :literal + :literal_1 FROM mytable")
def testfunction(self):
self.runtest(func.lala(3, 4, literal("five"), table1.c.myid) * table2.c.otherid,
- "lala(:lala, :lala_1, :literal, mytable.myid) * myothertable.otherid", engine=db)
+ "lala(:lala, :lala_1, :literal, mytable.myid) * myothertable.otherid")
+
+ self.runtest(select([func.count(table1.c.myid)]),
+ "SELECT count(mytable.myid) FROM mytable")
def testjoin(self):
self.runtest(
elif DBTYPE == 'postgres':
db_uri = 'postgres://database=test&port=5432&host=127.0.0.1&user=scott&password=tiger'
elif DBTYPE == 'mysql':
- db_uri = 'mysql://db=test&host=127.0.0.1&user=scott&passwd=tiger'
+ db_uri = 'mysql://database=test&host=127.0.0.1&user=scott&password=tiger'
elif DBTYPE == 'oracle':
db_uri = 'oracle://user=scott&password=tiger'