From: brln
Date: Tue, 2 Aug 2016 22:37:35 +0000 (-0400)
Subject: Warn that bulk save groups inserts/updates by type
X-Git-Tag: rel_1_1_0~58
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=ce1492ef3aae692a3dc10fff400e178e7b2edff8;p=thirdparty%2Fsqlalchemy%2Fsqlalchemy.git

Warn that bulk save groups inserts/updates by type

Users who pass many different object types to bulk_save_objects may be
surprised that the INSERT/UPDATE batches must necessarily be broken up
by type.  Add this to the list of caveats.

Co-authored-by: Mike Bayer
Change-Id: I8390c1c971ced50c41268b479a9dcd09c695b135
Pull-request: https://github.com/zzzeek/sqlalchemy/pull/294
---

diff --git a/doc/build/orm/persistence_techniques.rst b/doc/build/orm/persistence_techniques.rst
index a30d486b52..06b8faff73 100644
--- a/doc/build/orm/persistence_techniques.rst
+++ b/doc/build/orm/persistence_techniques.rst
@@ -307,6 +307,14 @@ to this approach is strictly one of reduced Python overhead:
   objects and assigning state to them, which normally is also subject to
   expensive tracking of history on a per-attribute basis.
 
+* Objects passed to the bulk methods are processed in the order
+  they are received. In the case of
+  :meth:`.Session.bulk_save_objects`, when objects of different
+  types are passed, the INSERT and UPDATE statements are
+  necessarily broken up into per-type groups. To reduce the number
+  of batch INSERT or UPDATE statements passed to the DBAPI, ensure
+  that the incoming list of objects is grouped by type.
+
 * The process of fetching primary keys after an INSERT also is disabled by
   default. When performed correctly, INSERT statements can now more readily
   be batched by the unit of work process into ``executemany()`` blocks, which
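
To illustrate the caveat this patch adds, here is a minimal sketch in
SQLAlchemy 1.x style. The User and Address mapped classes and the
in-memory SQLite engine are assumptions invented for the example, not
part of the patch. Sorting a mixed list by type before calling
Session.bulk_save_objects keeps same-class objects adjacent, so the
unit of work can emit one batch per type rather than alternating
between types:

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import Session

    Base = declarative_base()

    class User(Base):
        __tablename__ = "user"
        id = Column(Integer, primary_key=True)
        name = Column(String(50))

    class Address(Base):
        __tablename__ = "address"
        id = Column(Integer, primary_key=True)
        email = Column(String(100))

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)
    session = Session(engine)

    # an interleaved list like this one forces bulk_save_objects to
    # break its INSERTs into many small per-type groups
    mixed = [
        User(name="u1"),
        Address(email="a1@example.com"),
        User(name="u2"),
        Address(email="a2@example.com"),
    ]

    # sorting by type makes same-class objects adjacent, so each class
    # can be flushed as a single executemany() batch
    session.bulk_save_objects(sorted(mixed, key=lambda o: type(o).__name__))
    session.commit()

Any key that groups objects of the same class together works equally
well; the class name is used here only because it gives a stable sort.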