table to the new one, then drop the old table. In order to accommodate this
workflow in a way that is reasonably predictable, while remaining compatible
with other databases, Alembic provides the "batch" operations context.
Within this context, a relational table is named, and then a series of
any number of mutation operations to that table may be specified within
the block. When the context is complete, a process begins whereby the
existing table structure is reflected from the database, a new version of
this table is created with the given changes, data is copied from the
old table to the new table, and finally the old table is dropped and the
new one renamed to the original name.

The :meth:`.Operations.batch_alter_table` method provides the gateway to
this process::

    with op.batch_alter_table("some_table") as batch_op:
        batch_op.add_column(Column('foo', Integer))
        batch_op.drop_column('bar')

With the above directives, on a SQLite backend we would see SQL like::

    CREATE TABLE _alembic_batch_temp (
        id INTEGER NOT NULL,
        foo INTEGER,
        PRIMARY KEY (id)
    );
    INSERT INTO _alembic_batch_temp (id) SELECT some_table.id FROM some_table;
    DROP TABLE some_table;
    ALTER TABLE _alembic_batch_temp RENAME TO some_table;

On other backends, we'd see the usual ``ALTER`` statements run as though
there were no batch directive; by default, the batch context only runs the
"move and copy" process if SQLite is in use. It can be configured to run
"move and copy" on other backends as well, if desired.

Batch mode with Autogenerate
----------------------------

The syntax of batch mode is essentially that :meth:`.Operations.batch_alter_table`
is used to enter a batch block, and the returned :class:`.BatchOperations` context
works just like the regular :class:`.Operations` context, except that
the "table name" and "schema name" arguments are omitted.

To support rendering of migration commands in batch mode for autogenerate,
configure the :paramref:`.EnvironmentContext.configure.render_as_batch`
flag in ``env.py``::

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        render_as_batch=True
    )

Autogenerate will now generate along the lines of::

    def upgrade():
        ### commands auto generated by Alembic - please adjust! ###
        with op.batch_alter_table('address', schema=None) as batch_op:
            batch_op.add_column(sa.Column('street', sa.String(length=50), nullable=True))


Batch mode with databases other than SQLite
--------------------------------------------

Some installations have a use case where the "move and copy" style of
migration is useful even for databases that do already support ALTER:
an ALTER operation may block access to the table for a long time, which
might not be acceptable. "Move and copy" can be made to work on other
backends, though with a few extra caveats.

The batch mode directive will run the "recreate" system regardless of
backend if the flag ``recreate='always'`` is passed::

    with op.batch_alter_table("some_table", recreate='always') as batch_op:
        batch_op.add_column(Column('foo', Integer))

TODO: "recreate" doesn't work very well for Postgresql due to constraint naming
TODO: caveats, beta status