From: Mike Bayer Date: Fri, 23 Mar 2007 21:38:12 +0000 (+0000) Subject: cleanup continued X-Git-Tag: rel_0_3_6 X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=e07cf69deff7e2a9152b41385c4505f1acaa958b;p=thirdparty%2Fsqlalchemy%2Fsqlalchemy.git cleanup continued --- diff --git a/CHANGES b/CHANGES index 4a958c2521..02a361d4b4 100644 --- a/CHANGES +++ b/CHANGES @@ -1,81 +1,64 @@ 0.3.6 - sql: - bindparam() names are now repeatable! specify two - distinct bindparam()s with the same name in a single statement, - and the key will be shared. proper positional/named args translate - at compile time. for the old behavior of "aliasing" bind parameters - with conflicting names, specify "unique=True" - this option is - still used internally for all the auto-genererated (value-based) - bind parameters. + distinct bindparam()s with the same name in a single statement, + and the key will be shared. proper positional/named args translate + at compile time. for the old behavior of "aliasing" bind parameters + with conflicting names, specify "unique=True" - this option is + still used internally for all the auto-genererated (value-based) + bind parameters. - slightly better support for bind params as column clauses, either - via bindparam() or via literal(), i.e. select([literal('foo')]) + via bindparam() or via literal(), i.e. select([literal('foo')]) - MetaData can bind to an engine either via "url" or "engine" kwargs - to constructor, or by using connect() method. BoundMetaData is - identical to MetaData except engine_or_url param is required. - DynamicMetaData is the same and provides thread-local connections - be default. + to constructor, or by using connect() method. BoundMetaData is + identical to MetaData except engine_or_url param is required. + DynamicMetaData is the same and provides thread-local connections be + default. - - exists() becomes useable as a standalone selectable, not just in a - WHERE clause, i.e. 
exists([columns], criterion).select() + - exists() becomes usable as a standalone selectable, not just in a + WHERE clause, i.e. exists([columns], criterion).select() - correlated subqueries work inside of ORDER BY, GROUP BY - - fixed function execution with explicit connections, i.e. - conn.execute(func.dosomething()) + - fixed function execution with explicit connections, i.e. + conn.execute(func.dosomething()) - use_labels flag on select() wont auto-create labels for literal text column elements, since we can make no assumptions about the text. to - create labels for literal columns, you can say "somecol AS somelabel", - or use literal_column("somecol").label("somelabel") + create labels for literal columns, you can say "somecol AS + somelabel", or use literal_column("somecol").label("somelabel") - - quoting wont occur for literal columns when they are "proxied" into the - column collection for their selectable (is_literal flag is propigated). - literal columns are specified via literal_column("somestring"). + - quoting won't occur for literal columns when they are "proxied" into + the column collection for their selectable (is_literal flag is + propagated). literal columns are specified via + literal_column("somestring"). - - added "fold_equivalents" boolean argument to Join.select(), which removes - 'duplicate' columns from the resulting column clause that are known to be - equivalent based on the join condition. this is of great usage when - constructing subqueries of joins which Postgres complains about if - duplicate column names are present. + - added "fold_equivalents" boolean argument to Join.select(), which + removes 'duplicate' columns from the resulting column clause that + are known to be equivalent based on the join condition. this is of + great use when constructing subqueries of joins which Postgres + complains about if duplicate column names are present. 
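The repeatable bindparam() change above hinges on named parameters translating to positional ones at compile time, with repeated names sharing a single value. Here is a toy stdlib-only sketch of that translation; it is illustrative only, not SQLAlchemy's actual statement compiler, and the function name and regex are made up:

```python
import re

def to_positional(sql, params):
    """Translate :name-style binds to positional '?' marks.

    Repeated names share one value, but every occurrence still
    emits its own positional slot, a toy model of the
    compile-time translation described above.
    """
    ordered = []

    def repl(match):
        ordered.append(params[match.group(1)])
        return "?"

    return re.sub(r":(\w+)", repl, sql), tuple(ordered)

sql, args = to_positional(
    "SELECT * FROM t WHERE a = :x OR b = :x AND c = :y",
    {"x": 5, "y": 7},
)
# sql  == "SELECT * FROM t WHERE a = ? OR b = ? AND c = ?"
# args == (5, 5, 7)
```

The old "aliasing" behavior (unique=True) would instead generate a distinct key per occurrence rather than sharing one value.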
- fixed use_alter flag on ForeignKeyConstraint [ticket:503] - fixed usage of 2.4-only "reversed" in topological.py [ticket:506] - for hackers, refactored the "visitor" system of ClauseElement and - SchemaItem so that the traversal of items is controlled by the - ClauseVisitor itself, using the method visitor.traverse(item). - accept_visitor() methods can still be called directly but will - not do any traversal of child items. ClauseElement/SchemaItem now - have a configurable get_children() method to return the collection - of child elements for each parent object. This allows the full - traversal of items to be clear and unambiguous (as well as loggable), - with an easy method of limiting a traversal (just pass flags which - are picked up by appropriate get_children() methods). [ticket:501] + SchemaItem so that the traversal of items is controlled by the + ClauseVisitor itself, using the method visitor.traverse(item). + accept_visitor() methods can still be called directly but will not + do any traversal of child items. ClauseElement/SchemaItem now have a + configurable get_children() method to return the collection of child + elements for each parent object. This allows the full traversal of + items to be clear and unambiguous (as well as loggable), with an + easy method of limiting a traversal (just pass flags which are + picked up by appropriate get_children() methods). [ticket:501] - the "else_" parameter to the case statement now properly works when - set to zero. + set to zero. - -- oracle: - - got binary working for any size input ! cx_oracle works fine, - it was my fault as BINARY was being passed and not BLOB for - setinputsizes (also unit tests werent even setting input sizes). - - - also fixed CLOB read/write on a separate changeset. - - - auto_setinputsizes defaults to True for Oracle, fixed cases where - it improperly propigated bad types. 
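The visitor refactor described above ([ticket:501]) moves traversal out of accept_visitor() and into the visitor itself, driven by each element's get_children(). A schematic sketch of that shape, using hypothetical stand-in classes rather than SQLAlchemy's real ClauseElement/ClauseVisitor:

```python
class ClauseVisitor:
    # traversal is controlled by the visitor itself: walk the
    # children reported by get_children(), then visit the node
    def traverse(self, obj):
        for child in obj.get_children():
            self.traverse(child)
        obj.accept_visitor(self)

class Node:
    # stand-in for a ClauseElement/SchemaItem exposing get_children()
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def get_children(self):
        return self.children

    def accept_visitor(self, visitor):
        # no traversal of child items here any more, just the callback
        visitor.visit(self)

class Collector(ClauseVisitor):
    def __init__(self):
        self.seen = []

    def visit(self, node):
        self.seen.append(node.name)

tree = Node("select", [Node("where", [Node("column")]), Node("from")])
collector = Collector()
collector.traverse(tree)
# collector.seen == ["column", "where", "from", "select"]
```

Limiting a traversal then amounts to passing flags that a get_children() implementation inspects to prune the returned collection.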
- -- mysql: - - added a catchall **kwargs to MSString, to help reflection of - obscure types (like "varchar() binary" in MS 4.0) - - - added explicit MSTimeStamp type which takes effect when using - types.TIMESTAMP. - - orm: - the full featureset of the SelectResults extension has been merged into a new set of methods available off of Query. These methods @@ -100,41 +83,42 @@ like they always did. join_to()/join_via() are still there although the generative join()/outerjoin() methods are easier to use. - - the return value for multiple mappers used with instances() now returns - a cartesian product of the requested list of mappers, represented - as a list of tuples. this corresponds to the documented behavior. - So that instances match up properly, the "uniquing" is disabled when - this feature is used. + - the return value for multiple mappers used with instances() now + returns a cartesian product of the requested list of mappers, + represented as a list of tuples. this corresponds to the documented + behavior. So that instances match up properly, the "uniquing" is + disabled when this feature is used. - - many-to-many table will be properly handled even for operations that - occur on the "backref" side of the operation [ticket:249] - - - Query has add_entity() and add_column() generative methods. these - will add the given mapper/class or ColumnElement to the query at compile - time, and apply them to the instances() method. the user is responsible - for constructing reasonable join conditions (otherwise you can get - full cartesian products). result set is the list of tuples, non-uniqued. + - Query has add_entity() and add_column() generative methods. these + will add the given mapper/class or ColumnElement to the query at + compile time, and apply them to the instances() method. the user is + responsible for constructing reasonable join conditions (otherwise + you can get full cartesian products). result set is the list of + tuples, non-uniqued. 
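The instances() note above warns that requesting multiple mappers (or using add_entity()/add_column()) without a reasonable join condition yields the full cartesian product, returned as a non-uniqued list of tuples. The shape of that result can be sketched with the stdlib; the sample data is made up:

```python
from itertools import product

# made-up rows standing in for two mapped result sets
users = ["ed", "wendy"]
orders = [1, 2]

# with no join condition relating them, each user pairs with
# every order: a full cartesian product of result tuples
rows = list(product(users, orders))
# rows == [("ed", 1), ("ed", 2), ("wendy", 1), ("wendy", 2)]
```

This is why uniquing is disabled for this feature: collapsing duplicates would misalign the tuples.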
- - eager loading will not "aliasize" "order by" clauses that were placed - in the select statement by something other than the eager loader - itself, to fix possibility of dupe columns as illustrated in - [ticket:495]. however, this means you have to be more careful with - the columns placed in the "order by" of Query.select(), that you have - explicitly named them in your criterion (i.e. you cant rely on the - eager loader adding them in for you) - - - strings and columns can also be sent to the *args of instances() where - those exact result columns will be part of the result tuples. + - strings and columns can also be sent to the *args of instances() + where those exact result columns will be part of the result tuples. - a full select() construct can be passed to query.select() (which - worked anyway), but also query.selectfirst(), query.selectone() which - will be used as is (i.e. no query is compiled). works similarly to - sending the results to instances(). + worked anyway), but also query.selectfirst(), query.selectone() + which will be used as is (i.e. no query is compiled). works + similarly to sending the results to instances(). + - eager loading will not "aliasize" "order by" clauses that were + placed in the select statement by something other than the eager + loader itself, to fix possibility of dupe columns as illustrated in + [ticket:495]. however, this means you have to be more careful with + the columns placed in the "order by" of Query.select(), that you + have explicitly named them in your criterion (i.e. you cant rely on + the eager loader adding them in for you) + - added a handy multi-use "identity_key()" method to Session, allowing the generation of identity keys for primary key values, instances, and rows, courtesy Daniel Miller - + + - many-to-many table will be properly handled even for operations that + occur on the "backref" side of the operation [ticket:249] + - added "refresh-expire" cascade [ticket:492]. 
allows refresh() and expire() calls to propigate along relationships. @@ -148,24 +132,23 @@ in other tables into the join condition which arent parent of the relationship's parent/child mappings - - flush fixes on cyclical-referential relationships that contain references - to other instances outside of the cyclical chain, when some of the - objects in the cycle are not actually part of the flush + - flush fixes on cyclical-referential relationships that contain + references to other instances outside of the cyclical chain, when + some of the objects in the cycle are not actually part of the flush - - put an aggressive check for "flushing object A with a collection - of B's, but you put a C in the collection" error condition - - **even if C is a subclass of B**, unless B's mapper loads polymorphically. - Otherwise, the collection will later load a "B" which should be a "C" - (since its not polymorphic) which breaks in bi-directional relationships - (i.e. C has its A, but A's backref will lazyload it as a different - instance of type "B") [ticket:500] - This check is going to bite some of you who do this without issues, - so the error message will also document a flag "enable_typechecks=False" - to disable this checking. But be aware that bi-directional relationships - in particular become fragile without this check. + - put an aggressive check for "flushing object A with a collection of + B's, but you put a C in the collection" error condition - **even if + C is a subclass of B**, unless B's mapper loads polymorphically. + Otherwise, the collection will later load a "B" which should be a + "C" (since its not polymorphic) which breaks in bi-directional + relationships (i.e. C has its A, but A's backref will lazyload it as + a different instance of type "B") [ticket:500] This check is going + to bite some of you who do this without issues, so the error message + will also document a flag "enable_typechecks=False" to disable this + checking. 
But be aware that bi-directional relationships in + particular become fragile without this check. - extensions: - - options() method on SelectResults now implemented "generatively" like the rest of the SelectResults methods [ticket:472]. But you're going to just use Query now anyway. @@ -191,6 +174,23 @@ - cleanup of module importing code; specifiable DB-API module; more explicit ordering of module preferences. [ticket:480] +- oracle: + - got binary working for any size input ! cx_oracle works fine, + it was my fault as BINARY was being passed and not BLOB for + setinputsizes (also unit tests werent even setting input sizes). + + - also fixed CLOB read/write on a separate changeset. + + - auto_setinputsizes defaults to True for Oracle, fixed cases where + it improperly propigated bad types. + +- mysql: + - added a catchall **kwargs to MSString, to help reflection of + obscure types (like "varchar() binary" in MS 4.0) + + - added explicit MSTimeStamp type which takes effect when using + types.TIMESTAMP. + 0.3.5 - sql: diff --git a/doc/build/content/dbengine.txt b/doc/build/content/dbengine.txt index c04d6a6bbe..92767f3aff 100644 --- a/doc/build/content/dbengine.txt +++ b/doc/build/content/dbengine.txt @@ -135,7 +135,7 @@ For example, to log SQL queries as well as unit of work debugging: By default, the log level is set to `logging.ERROR` within the entire `sqlalchemy` namespace so that no log operations occur, even within an application that has logging enabled otherwise. -The `echo` flags present as keyword arguments to `create_engine()` and others as well as the `echo` property on `Engine`, when set to `True`, will first attempt to insure that logging is enabled. Unfortunately, the `logging` module provides no way of determining if output has already been configured (note we are referring to if a logging configuration has been set up, not just that the logging level is set). 
For this reason, any `echo=True` flags will result in a call to `logging.basicConfig()` using sys.stdout as the destination. It also sets up a default format using the level name, timestamp, and logger name. Note that this configuration has the affect of being configured **in addition** to any existing logger configurations. Therefore, **when using Python logging, insure all echo flags are set to False at all times**, to avoid getting duplicate log lines. +The `echo` flags present as keyword arguments to `create_engine()` and others as well as the `echo` property on `Engine`, when set to `True`, will first attempt to ensure that logging is enabled. Unfortunately, the `logging` module provides no way of determining if output has already been configured (note we are referring to whether a logging configuration has been set up, not just that the logging level is set). For this reason, any `echo=True` flags will result in a call to `logging.basicConfig()` using sys.stdout as the destination. It also sets up a default format using the level name, timestamp, and logger name. Note that this configuration has the effect of being configured **in addition** to any existing logger configurations. Therefore, **when using Python logging, ensure all echo flags are set to False at all times**, to avoid getting duplicate log lines. ### Using Connections {@name=connections} diff --git a/lib/sqlalchemy/orm/properties.py b/lib/sqlalchemy/orm/properties.py index 45a0f9517e..c3bbef1ef1 100644 --- a/lib/sqlalchemy/orm/properties.py +++ b/lib/sqlalchemy/orm/properties.py @@ -194,7 +194,7 @@ class PropertyLoader(StrategizedProperty): else: raise exceptions.ArgumentError("relation '%s' expects a class or a mapper argument (received: %s)" % (self.key, type(self.argument))) - # insure the "select_mapper", if different from the regular target mapper, is compiled. + # ensure the "select_mapper", if different from the regular target mapper, is compiled. 
self.mapper.get_select_mapper()._check_compile()

        if self.association is not None:
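The dbengine.txt change above warns that `echo=True` calls `logging.basicConfig()` on top of any handlers the application already configured, producing duplicate log lines. The pitfall can be demonstrated with the stdlib alone; `force=True` is used only to make the demo deterministic and is not something `echo` does:

```python
import io
import logging

# the application has already attached its own handler
app_stream = io.StringIO()
logger = logging.getLogger("sqlalchemy.engine.demo")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(app_stream))

# roughly what echo=True does: configure the root logger as well
root_stream = io.StringIO()
logging.basicConfig(stream=root_stream, level=logging.INFO, force=True)

logger.info("SELECT 1")

# the record hits the logger's own handler *and* propagates to the
# root handler installed by basicConfig(): the duplicate lines the
# documentation tells you to avoid
```

Hence the advice to keep all echo flags False when you manage logging configuration yourself.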