From: Mike Bayer
Date: Tue, 4 Aug 2020 14:13:51 +0000 (-0400)
Subject: Add note that fast_executemany uses memory
X-Git-Tag: rel_1_3_19~15
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=460de4963d9a8b4dae39374c21eed976dbf10597;p=thirdparty%2Fsqlalchemy%2Fsqlalchemy.git

Add note that fast_executemany uses memory

Ideally this would be a per-execution option, or Pyodbc could perhaps
run the data in chunks.

Fixes: #5334
Change-Id: If4a11b312346b8e4c2b8cd38840b3a2ba56dec3b
(cherry picked from commit 64f3c097970dd8f2e141ef9bc71c685b19671267)
---

diff --git a/lib/sqlalchemy/dialects/mssql/pyodbc.py b/lib/sqlalchemy/dialects/mssql/pyodbc.py
index 26c5d3d545..783a65e3dd 100644
--- a/lib/sqlalchemy/dialects/mssql/pyodbc.py
+++ b/lib/sqlalchemy/dialects/mssql/pyodbc.py
@@ -125,18 +125,22 @@ Fast Executemany Mode
 
 The Pyodbc driver has added support for a "fast executemany" mode of execution
 which greatly reduces round trips for a DBAPI ``executemany()`` call when using
-Microsoft ODBC drivers. The feature is enabled by setting the flag
-``.fast_executemany`` on the DBAPI cursor when an executemany call is to be
-used. The SQLAlchemy pyodbc SQL Server dialect supports setting this flag
-automatically when the ``.fast_executemany`` flag is passed to
-:func:`_sa.create_engine`
-; note that the ODBC driver must be the Microsoft driver
-in order to use this flag::
+Microsoft ODBC drivers, for **limited size batches that fit in memory**.  The
+feature is enabled by setting the flag ``.fast_executemany`` on the DBAPI
+cursor when an executemany call is to be used.  The SQLAlchemy pyodbc SQL
+Server dialect supports setting this flag automatically when the
+``.fast_executemany`` flag is passed to :func:`_sa.create_engine`; note
+that the ODBC driver must be the Microsoft driver in order to use this
+flag::
 
     engine = create_engine(
         "mssql+pyodbc://scott:tiger@mssql2017:1433/test?driver=ODBC+Driver+13+for+SQL+Server",
         fast_executemany=True)
 
+.. warning:: The pyodbc fast_executemany mode **buffers all rows in memory**
+   and is not compatible with very large batches of data.  A future version
+   of SQLAlchemy may support this flag as a per-execution option instead.
+
 .. versionadded:: 1.3
 
 .. seealso::
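Until the chunking mentioned in the commit message exists in pyodbc itself, the memory pressure described by the new warning can be bounded on the application side by splitting a large parameter set into fixed-size batches and issuing one ``executemany()`` per batch. A minimal sketch of such a splitter — the ``chunked`` helper, the batch size, and the usage shown in comments are illustrative, not part of SQLAlchemy or pyodbc:

```python
from itertools import islice


def chunked(rows, size):
    """Yield successive lists of at most ``size`` rows from ``rows``.

    Passing each chunk to a separate executemany() call means that
    fast_executemany only ever buffers one chunk in memory at a time,
    rather than the entire data set.
    """
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch


# Hypothetical usage with a pyodbc-backed engine (not runnable as-is;
# ``engine``, ``table``, and ``all_rows`` are assumed to exist):
#
#     with engine.begin() as conn:
#         for batch in chunked(all_rows, 10_000):
#             conn.execute(table.insert(), batch)
```

A chunk size of a few thousand rows keeps most of the round-trip savings of fast_executemany while capping the driver's buffer at one batch.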