Dialect Layout
===============
-The file structure of a dialect is typically similar to the following::
+The file structure of a dialect is typically similar to the following:
+
+.. sourcecode:: text
sqlalchemy-<dialect>/
setup.py
dialect to be usable from create_engine(), e.g.::
entry_points = {
- 'sqlalchemy.dialects': [
- 'access.pyodbc = sqlalchemy_access.pyodbc:AccessDialect_pyodbc',
- ]
+ "sqlalchemy.dialects": [
+ "access.pyodbc = sqlalchemy_access.pyodbc:AccessDialect_pyodbc",
+ ]
}
Above, the entrypoint ``access.pyodbc`` allows URLs to be used such as::
* setup.cfg - this file contains the traditional contents such as
[tool:pytest] directives, but also contains new directives that are used
- by SQLAlchemy's testing framework. E.g. for Access::
+ by SQLAlchemy's testing framework. E.g. for Access:
+
+ .. sourcecode:: text
[tool:pytest]
addopts= --tb native -v -r fxX --maxfail=25 -p no:warnings
from sqlalchemy.testing import exclusions
+
class Requirements(SuiteRequirements):
@property
def nullable_booleans(self):
The requirements system can also be used when running SQLAlchemy's
primary test suite against the external dialect. In this use case,
a ``--dburi`` as well as a ``--requirements`` flag are passed to SQLAlchemy's
- test runner so that exclusions specific to the dialect take place::
+ test runner so that exclusions specific to the dialect take place:
+
+ .. sourcecode:: text
cd /path/to/sqlalchemy
pytest -v \
from sqlalchemy.testing.suite import IntegerTest as _IntegerTest
+
class IntegerTest(_IntegerTest):
@testing.skip("access")
A generic pytest run looks like::
- pytest -n4
+ $ pytest -n4
Above, the full test suite will run against SQLite, using four processes.
If the "-n" flag is not used, pytest-xdist is skipped and the tests will
E.g.::
- node = TreeNode('rootnode')
- node.append('node1')
- node.append('node3')
+ node = TreeNode("rootnode")
+ node.append("node1")
+ node.append("node3")
session.add(node)
session.commit()
The demo scripts themselves, in order of complexity, are run as Python
modules so that relative imports work::
- python -m examples.dogpile_caching.helloworld
+ $ python -m examples.dogpile_caching.helloworld
- python -m examples.dogpile_caching.relationship_caching
+ $ python -m examples.dogpile_caching.relationship_caching
- python -m examples.dogpile_caching.advanced
+ $ python -m examples.dogpile_caching.advanced
- python -m examples.dogpile_caching.local_session_caching
+ $ python -m examples.dogpile_caching.local_session_caching
.. autosource::
:files: environment.py, caching_query.py, model.py, fixture_data.py, \
class Parent(Base):
- __tablename__ = 'parent'
+ __tablename__ = "parent"
id = Column(Integer, primary_key=True)
children = relationship("Child")
class Child(Base):
- __tablename__ = 'child'
+ __tablename__ = "child"
id = Column(Integer, primary_key=True)
- parent_id = Column(Integer, ForeignKey('parent.id'))
+ parent_id = Column(Integer, ForeignKey("parent.id"))
# Init with name of file, default number of items
Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)
sess = Session(engine)
- sess.add_all([
- Parent(children=[Child() for j in range(100)])
- for i in range(num)
- ])
+ sess.add_all(
+ [
+ Parent(children=[Child() for j in range(100)])
+ for i in range(num)
+ ]
+ )
sess.commit()
for parent in session.query(Parent).options(subqueryload("children")):
parent.children
- if __name__ == '__main__':
+
+ if __name__ == "__main__":
Profiler.main()
We can run our new script directly::
To run::
- python -m examples.space_invaders.space_invaders
+ $ python -m examples.space_invaders.space_invaders
While it runs, watch the SQL output in the log::
- tail -f space_invaders.log
+ $ tail -f space_invaders.log
enjoy!
Usage is illustrated via a unit test module ``test_versioning.py``, which is
run using SQLAlchemy's internal pytest plugin::
- pytest test/base/test_examples.py
+ $ pytest test/base/test_examples.py
A fragment of example usage, using declarative::
from history_meta import Versioned, versioned_session
+
class Base(DeclarativeBase):
pass
+
class SomeClass(Versioned, Base):
- __tablename__ = 'sometable'
+ __tablename__ = "sometable"
id = Column(Integer, primary_key=True)
name = Column(String(50))
def __eq__(self, other):
assert type(other) is SomeClass and other.id == self.id
+
Session = sessionmaker(bind=engine)
versioned_session(Session)
sess = Session()
- sc = SomeClass(name='sc1')
+ sc = SomeClass(name="sc1")
sess.add(sc)
sess.commit()
- sc.name = 'sc1modified'
+ sc.name = "sc1modified"
sess.commit()
assert sc.version == 2
SomeClassHistory = SomeClass.__history_mapper__.class_
- assert sess.query(SomeClassHistory).\\
- filter(SomeClassHistory.version == 1).\\
- all() \\
- == [SomeClassHistory(version=1, name='sc1')]
+ assert sess.query(SomeClassHistory).filter(
+ SomeClassHistory.version == 1
+ ).all() == [SomeClassHistory(version=1, name="sc1")]
The ``Versioned`` mixin is designed to work with declarative. To use
the extension with classical mappers, the ``_history_mapper`` function
set the flag ``Versioned.use_mapper_versioning`` to True::
class SomeClass(Versioned, Base):
- __tablename__ = 'sometable'
+ __tablename__ = "sometable"
use_mapper_versioning = True
Example::
- shrew = Animal(u'shrew')
- shrew[u'cuteness'] = 5
- shrew[u'weasel-like'] = False
- shrew[u'poisonous'] = True
+ shrew = Animal("shrew")
+ shrew["cuteness"] = 5
+ shrew["weasel-like"] = False
+ shrew["poisonous"] = True
session.add(shrew)
session.flush()
- q = (session.query(Animal).
- filter(Animal.facts.any(
- and_(AnimalFact.key == u'weasel-like',
- AnimalFact.value == True))))
- print('weasel-like animals', q.all())
+ q = session.query(Animal).filter(
+ Animal.facts.any(
+ and_(AnimalFact.key == "weasel-like", AnimalFact.value == True)
+ )
+ )
+ print("weasel-like animals", q.all())
.. autosource::
Builds upon the dictlike.py example to also add differently typed
columns to the "fact" table, e.g.::
- Table('properties', metadata
- Column('owner_id', Integer, ForeignKey('owner.id'),
- primary_key=True),
- Column('key', UnicodeText),
- Column('type', Unicode(16)),
- Column('int_value', Integer),
- Column('char_value', UnicodeText),
- Column('bool_value', Boolean),
- Column('decimal_value', Numeric(10,2)))
+ Table(
+ "properties",
+ metadata,
+ Column("owner_id", Integer, ForeignKey("owner.id"), primary_key=True),
+ Column("key", UnicodeText),
+ Column("type", Unicode(16)),
+ Column("int_value", Integer),
+ Column("char_value", UnicodeText),
+ Column("bool_value", Boolean),
+ Column("decimal_value", Numeric(10, 2)),
+ )
For any given properties row, the value of the 'type' column will point to the
'_value' column active for that row.
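That lookup rule can be sketched in plain Python (a stand-in dict takes the place of a result row; the ``active_value`` helper is hypothetical, for illustration only, not part of the example module):

```python
# Sketch of the rule above: the 'type' column names which "_value"
# column is active for a given properties row.
def active_value(row):
    return row[row["type"] + "_value"]


row = {
    "type": "int",
    "int_value": 5,
    "char_value": None,
    "bool_value": None,
    "decimal_value": None,
}
assert active_value(row) == 5
```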
example, instead of::
# A regular ("horizontal") table has columns for 'species' and 'size'
- Table('animal', metadata,
- Column('id', Integer, primary_key=True),
- Column('species', Unicode),
- Column('size', Unicode))
+ Table(
+ "animal",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ Column("species", Unicode),
+ Column("size", Unicode),
+ )
A vertical table models this as two tables: one table for the base or parent
entity, and another related table holding key/value pairs::
- Table('animal', metadata,
- Column('id', Integer, primary_key=True))
+ Table("animal", metadata, Column("id", Integer, primary_key=True))
# The properties table will have one row for a 'species' value, and
# another row for the 'size' value.
- Table('properties', metadata
- Column('animal_id', Integer, ForeignKey('animal.id'),
- primary_key=True),
- Column('key', UnicodeText),
- Column('value', UnicodeText))
+ Table(
+ "properties",
+ metadata,
+ Column(
+ "animal_id", Integer, ForeignKey("animal.id"), primary_key=True
+ ),
+ Column("key", UnicodeText),
+ Column("value", UnicodeText),
+ )
Because the key/value pairs in a vertical scheme are not fixed in advance,
accessing them like a Python dict can be very convenient. The example below
styles are otherwise equivalent to those documented in the pyodbc section::
from sqlalchemy.ext.asyncio import create_async_engine
+
engine = create_async_engine(
"mssql+aioodbc://scott:tiger@mssql2017:1433/test?"
"driver=ODBC+Driver+18+for+SQL+Server&TrustServerCertificate=yes"
)
-
-
"""
from __future__ import annotations
from sqlalchemy import Table, MetaData, Column, Integer
m = MetaData()
- t = Table('t', m,
- Column('id', Integer, primary_key=True),
- Column('x', Integer))
+ t = Table(
+ "t",
+ m,
+ Column("id", Integer, primary_key=True),
+ Column("x", Integer),
+ )
m.create_all(engine)
The above example will generate DDL as:
on the first integer primary key column::
m = MetaData()
- t = Table('t', m,
- Column('id', Integer, primary_key=True, autoincrement=False),
- Column('x', Integer))
+ t = Table(
+ "t",
+ m,
+ Column("id", Integer, primary_key=True, autoincrement=False),
+ Column("x", Integer),
+ )
m.create_all(engine)
To add the ``IDENTITY`` keyword to a non-primary key column, specify
is set to ``False`` on any integer primary key column::
m = MetaData()
- t = Table('t', m,
- Column('id', Integer, primary_key=True, autoincrement=False),
- Column('x', Integer, autoincrement=True))
+ t = Table(
+ "t",
+ m,
+ Column("id", Integer, primary_key=True, autoincrement=False),
+ Column("x", Integer, autoincrement=True),
+ )
m.create_all(engine)
.. versionchanged:: 1.4 Added :class:`_schema.Identity` construct
from sqlalchemy import Table, Integer, Column, Identity
test = Table(
- 'test', metadata,
+ "test",
+ metadata,
Column(
- 'id',
- Integer,
- primary_key=True,
- Identity(start=100, increment=10)
+ "id", Integer, primary_key=True, Identity(start=100, increment=10)
),
- Column('name', String(20))
+ Column("name", String(20)),
)
The CREATE TABLE for the above :class:`_schema.Table` object would be:
CREATE TABLE test (
id INTEGER NOT NULL IDENTITY(100,10) PRIMARY KEY,
name VARCHAR(20) NULL,
- )
+ )
.. note::
Base = declarative_base()
+
class TestTable(Base):
__tablename__ = "test"
id = Column(
from sqlalchemy import TypeDecorator
+
class NumericAsInteger(TypeDecorator):
- '''normalize floating point return values into ints'''
+ """normalize floating point return values into ints"""
impl = Numeric(10, 0, asdecimal=False)
cache_ok = True
value = int(value)
return value
+
class TestTable(Base):
__tablename__ = "test"
id = Column(
fetched in order to receive the value. Given a table as::
t = Table(
- 't',
+ "t",
metadata,
- Column('id', Integer, primary_key=True),
- Column('x', Integer),
- implicit_returning=False
+ Column("id", Integer, primary_key=True),
+ Column("x", Integer),
+ implicit_returning=False,
)
an INSERT will look like:
execution. Given this example::
m = MetaData()
- t = Table('t', m, Column('id', Integer, primary_key=True),
- Column('x', Integer))
+ t = Table(
+ "t", m, Column("id", Integer, primary_key=True), Column("x", Integer)
+ )
m.create_all(engine)
with engine.begin() as conn:
- conn.execute(t.insert(), {'id': 1, 'x':1}, {'id':2, 'x':2})
+ conn.execute(t.insert(), {"id": 1, "x": 1}, {"id": 2, "x": 2})
The above column will be created with IDENTITY, however the INSERT statement
we emit is specifying explicit values. In the echo output we can see
>>> from sqlalchemy import Sequence
>>> from sqlalchemy.schema import CreateSequence
>>> from sqlalchemy.dialects import mssql
- >>> print(CreateSequence(Sequence("my_seq", start=1)).compile(dialect=mssql.dialect()))
+ >>> print(
+ ... CreateSequence(Sequence("my_seq", start=1)).compile(
+ ... dialect=mssql.dialect()
+ ... )
+ ... )
{printsql}CREATE SEQUENCE my_seq START WITH 1
For integer primary key generation, SQL Server's ``IDENTITY`` construct should
To build a SQL Server VARCHAR or NVARCHAR with MAX length, use None::
my_table = Table(
- 'my_table', metadata,
- Column('my_data', VARCHAR(None)),
- Column('my_n_data', NVARCHAR(None))
+ "my_table",
+ metadata,
+ Column("my_data", VARCHAR(None)),
+ Column("my_n_data", NVARCHAR(None)),
)
-
Collation Support
-----------------
specified by the string argument "collation"::
from sqlalchemy import VARCHAR
- Column('login', VARCHAR(32, collation='Latin1_General_CI_AS'))
+
+ Column("login", VARCHAR(32, collation="Latin1_General_CI_AS"))
When such a column is associated with a :class:`_schema.Table`, the
-CREATE TABLE statement for this column will yield::
+CREATE TABLE statement for this column will yield:
+
+.. sourcecode:: sql
login VARCHAR(32) COLLATE Latin1_General_CI_AS NULL
select(some_table).limit(5)
-will render similarly to::
+will render similarly to:
+
+.. sourcecode:: sql
SELECT TOP 5 col1, col2.. FROM table
select(some_table).order_by(some_table.c.col3).limit(5).offset(10)
-will render similarly to::
+will render similarly to:
+
+.. sourcecode:: sql
SELECT anon_1.col1, anon_1.col2 FROM (SELECT col1, col2,
ROW_NUMBER() OVER (ORDER BY col3) AS
To set isolation level using :func:`_sa.create_engine`::
engine = create_engine(
- "mssql+pyodbc://scott:tiger@ms_2008",
- isolation_level="REPEATABLE READ"
+ "mssql+pyodbc://scott:tiger@ms_2008", isolation_level="REPEATABLE READ"
)
To set using per-connection execution options::
connection = engine.connect()
- connection = connection.execution_options(
- isolation_level="READ COMMITTED"
- )
+ connection = connection.execution_options(isolation_level="READ COMMITTED")
Valid values for ``isolation_level`` include:
mssql_engine = create_engine(
"mssql+pyodbc://scott:tiger^5HHH@mssql2017:1433/test?driver=ODBC+Driver+17+for+SQL+Server",
-
# disable default reset-on-return scheme
pool_reset_on_return=None,
)
-----------
MSSQL has support for three levels of column nullability. The default
nullability allows nulls and is explicit in the CREATE TABLE
-construct::
+construct:
+
+.. sourcecode:: sql
name VARCHAR(20) NULL
If ``nullable=None`` is specified then no specification is made. In
other words the database's configured default is used. This will
-render::
+render:
+
+.. sourcecode:: sql
name VARCHAR(20)
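The three levels correspond to the ``nullable`` flag on ``Column``; a minimal plain-Python sketch of the rendering rule (the ``render_nullability`` helper is hypothetical, for illustration only):

```python
# Hypothetical helper illustrating the three nullability levels above:
# True -> NULL, False -> NOT NULL, None -> no specification at all.
def render_nullability(nullable):
    if nullable is None:
        return ""  # fall back to the database's configured default
    return " NULL" if nullable else " NOT NULL"


assert "name VARCHAR(20)" + render_nullability(True) == "name VARCHAR(20) NULL"
assert render_nullability(None) == ""
```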
* The flag can be set to either ``True`` or ``False`` when the dialect
is created, typically via :func:`_sa.create_engine`::
- eng = create_engine("mssql+pymssql://user:pass@host/db",
- deprecate_large_types=True)
+ eng = create_engine(
+ "mssql+pymssql://user:pass@host/db", deprecate_large_types=True
+ )
* Complete control over whether the "old" or "new" types are rendered is
available in all SQLAlchemy versions by using the UPPERCASE type objects
:class:`_schema.Table`::
Table(
- "some_table", metadata,
+ "some_table",
+ metadata,
Column("q", String(50)),
- schema="mydatabase.dbo"
+ schema="mydatabase.dbo",
)
When performing operations such as table or component reflection, a schema
special characters. Given an argument as below::
Table(
- "some_table", metadata,
+ "some_table",
+ metadata,
Column("q", String(50)),
- schema="MyDataBase.dbo"
+ schema="MyDataBase.dbo",
)
The above schema would be rendered as ``[MyDataBase].dbo``, and also in
"database" will be None::
Table(
- "some_table", metadata,
+ "some_table",
+ metadata,
Column("q", String(50)),
- schema="[MyDataBase.dbo]"
+ schema="[MyDataBase.dbo]",
)
To individually specify both database and owner name with special characters
or embedded dots, use two sets of brackets::
Table(
- "some_table", metadata,
+ "some_table",
+ metadata,
Column("q", String(50)),
- schema="[MyDataBase.Period].[MyOwner.Dot]"
+ schema="[MyDataBase.Period].[MyOwner.Dot]",
)
-
.. versionchanged:: 1.2 the SQL Server dialect now treats brackets as
identifier delimiters splitting the schema into separate database
and owner tokens, to allow dots within either name itself.
SELECT statement; given a table::
account_table = Table(
- 'account', metadata,
- Column('id', Integer, primary_key=True),
- Column('info', String(100)),
- schema="customer_schema"
+ "account",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ Column("info", String(100)),
+ schema="customer_schema",
)
this legacy mode of rendering would assume that "customer_schema.account"
To generate a clustered primary key use::
- Table('my_table', metadata,
- Column('x', ...),
- Column('y', ...),
- PrimaryKeyConstraint("x", "y", mssql_clustered=True))
+ Table(
+ "my_table",
+ metadata,
+ Column("x", ...),
+ Column("y", ...),
+ PrimaryKeyConstraint("x", "y", mssql_clustered=True),
+ )
-which will render the table, for example, as::
+which will render the table, for example, as:
- CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL,
- PRIMARY KEY CLUSTERED (x, y))
+.. sourcecode:: sql
+
+ CREATE TABLE my_table (
+ x INTEGER NOT NULL,
+ y INTEGER NOT NULL,
+ PRIMARY KEY CLUSTERED (x, y)
+ )
Similarly, we can generate a clustered unique constraint using::
- Table('my_table', metadata,
- Column('x', ...),
- Column('y', ...),
- PrimaryKeyConstraint("x"),
- UniqueConstraint("y", mssql_clustered=True),
- )
+ Table(
+ "my_table",
+ metadata,
+ Column("x", ...),
+ Column("y", ...),
+ PrimaryKeyConstraint("x"),
+ UniqueConstraint("y", mssql_clustered=True),
+ )
To explicitly request a non-clustered primary key (for example, when
a separate clustered index is desired), use::
- Table('my_table', metadata,
- Column('x', ...),
- Column('y', ...),
- PrimaryKeyConstraint("x", "y", mssql_clustered=False))
+ Table(
+ "my_table",
+ metadata,
+ Column("x", ...),
+ Column("y", ...),
+ PrimaryKeyConstraint("x", "y", mssql_clustered=False),
+ )
-which will render the table, for example, as::
+which will render the table, for example, as:
+
+.. sourcecode:: sql
- CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL,
- PRIMARY KEY NONCLUSTERED (x, y))
+ CREATE TABLE my_table (
+ x INTEGER NOT NULL,
+ y INTEGER NOT NULL,
+ PRIMARY KEY NONCLUSTERED (x, y)
+ )
Columnstore Index Support
-------------------------
The ``mssql_include`` option renders INCLUDE(colname) for the given string
names::
- Index("my_index", table.c.x, mssql_include=['y'])
+ Index("my_index", table.c.x, mssql_include=["y"])
would render the index as ``CREATE INDEX my_index ON table (x) INCLUDE (y)``
specify ``implicit_returning=False`` for each :class:`_schema.Table`
which has triggers::
- Table('mytable', metadata,
- Column('id', Integer, primary_key=True),
+ Table(
+ "mytable",
+ metadata,
+ Column("id", Integer, primary_key=True),
# ...,
- implicit_returning=False
+ implicit_returning=False,
)
Declarative form::
class MyClass(Base):
# ...
- __table_args__ = {'implicit_returning':False}
-
+ __table_args__ = {"implicit_returning": False}
.. _mssql_rowcount_versioning:
applications to have long held locks and frequent deadlocks.
Enabling snapshot isolation for the database as a whole is recommended
for modern levels of concurrency support. This is accomplished via the
-following ALTER DATABASE commands executed at the SQL prompt::
+following ALTER DATABASE commands executed at the SQL prompt:
+
+.. sourcecode:: sql
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
dictionary or list, the :meth:`_types.JSON.Comparator.as_json` accessor
should be used::
- stmt = select(
- data_table.c.data["some key"].as_json()
- ).where(
+ stmt = select(data_table.c.data["some key"].as_json()).where(
data_table.c.data["some key"].as_json() == {"sub": "structure"}
)
:meth:`_types.JSON.Comparator.as_integer`,
:meth:`_types.JSON.Comparator.as_float`::
- stmt = select(
- data_table.c.data["some key"].as_string()
- ).where(
+ stmt = select(data_table.c.data["some key"].as_string()).where(
data_table.c.data["some key"].as_string() == "some string"
)
engine = create_engine("mssql+pyodbc://scott:tiger@some_dsn")
-Which above, will pass the following connection string to PyODBC::
+The above URL will pass the following connection string to PyODBC:
+
+.. sourcecode:: text
DSN=some_dsn;UID=scott;PWD=tiger
query parameters of the URL. As these names usually have spaces in them, the
name must be URL encoded which means using plus signs for spaces::
- engine = create_engine("mssql+pyodbc://scott:tiger@myhost:port/databasename?driver=ODBC+Driver+17+for+SQL+Server")
+ engine = create_engine(
+ "mssql+pyodbc://scott:tiger@myhost:port/databasename?driver=ODBC+Driver+17+for+SQL+Server"
+ )
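The plus-sign encoding can be produced with the standard library's ``urllib.parse.quote_plus``, for example:

```python
from urllib.parse import quote_plus

# spaces in the ODBC driver name become plus signs in the URL query string
driver = "ODBC Driver 17 for SQL Server"
assert quote_plus(driver) == "ODBC+Driver+17+for+SQL+Server"
```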
The ``driver`` keyword is significant to the pyodbc dialect and must be
specified in lowercase.
The equivalent URL can be constructed using :class:`_sa.engine.URL`::
from sqlalchemy.engine import URL
+
connection_url = URL.create(
"mssql+pyodbc",
username="scott",
},
)
-
Pass through exact Pyodbc string
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
can help make this easier::
from sqlalchemy.engine import URL
+
connection_string = "DRIVER={SQL Server Native Client 10.0};SERVER=dagger;DATABASE=test;UID=user;PWD=password"
- connection_url = URL.create("mssql+pyodbc", query={"odbc_connect": connection_string})
+ connection_url = URL.create(
+ "mssql+pyodbc", query={"odbc_connect": connection_string}
+ )
engine = create_engine(connection_url)
from sqlalchemy.engine.url import URL
from azure import identity
- SQL_COPT_SS_ACCESS_TOKEN = 1256 # Connection option for access tokens, as defined in msodbcsql.h
+ # Connection option for access tokens, as defined in msodbcsql.h
+ SQL_COPT_SS_ACCESS_TOKEN = 1256
TOKEN_URL = "https://database.windows.net/" # The token URL for any Azure SQL database
connection_string = "mssql+pyodbc://@my-server.database.windows.net/myDb?driver=ODBC+Driver+17+for+SQL+Server"
azure_credentials = identity.DefaultAzureCredential()
+
@event.listens_for(engine, "do_connect")
def provide_token(dialect, conn_rec, cargs, cparams):
# remove the "Trusted_Connection" parameter that SQLAlchemy adds
cargs[0] = cargs[0].replace(";Trusted_Connection=Yes", "")
# create token credential
- raw_token = azure_credentials.get_token(TOKEN_URL).token.encode("utf-16-le")
- token_struct = struct.pack(f"<I{len(raw_token)}s", len(raw_token), raw_token)
+ raw_token = azure_credentials.get_token(TOKEN_URL).token.encode(
+ "utf-16-le"
+ )
+ token_struct = struct.pack(
+ f"<I{len(raw_token)}s", len(raw_token), raw_token
+ )
# apply it to keyword arguments
cparams["attrs_before"] = {SQL_COPT_SS_ACCESS_TOKEN: token_struct}
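The ``struct.pack`` call above produces a length-prefixed byte string: a 4-byte little-endian length followed by the UTF-16-LE token bytes. A standalone illustration with a dummy token (not a real credential):

```python
import struct

raw_token = "abc".encode("utf-16-le")  # dummy token, 6 bytes

# pack as: 4-byte little-endian length prefix, then the token bytes
token_struct = struct.pack(f"<I{len(raw_token)}s", len(raw_token), raw_token)

assert token_struct[:4] == len(raw_token).to_bytes(4, "little")
assert token_struct[4:] == raw_token
```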
This specific case can be handled by passing ``ignore_no_transaction_on_rollback=True`` to
the SQL Server dialect via the :func:`_sa.create_engine` function as follows::
- engine = create_engine(connection_url, ignore_no_transaction_on_rollback=True)
+ engine = create_engine(
+ connection_url, ignore_no_transaction_on_rollback=True
+ )
Using the above parameter, the dialect will catch ``ProgrammingError``
exceptions raised during ``connection.rollback()`` and emit a warning
},
)
-
Pyodbc Pooling / connection close behavior
------------------------------------------
engine = create_engine(
"mssql+pyodbc://scott:tiger@mssql2017:1433/test?driver=ODBC+Driver+17+for+SQL+Server",
- fast_executemany=True)
+ fast_executemany=True,
+ )
.. versionchanged:: 2.0.9 - the ``fast_executemany`` parameter now has its
intended effect of this PyODBC feature taking effect for all INSERT
:func:`_asyncio.create_async_engine` engine creation function::
from sqlalchemy.ext.asyncio import create_async_engine
- engine = create_async_engine("mysql+aiomysql://user:pass@hostname/dbname?charset=utf8mb4")
+ engine = create_async_engine(
+ "mysql+aiomysql://user:pass@hostname/dbname?charset=utf8mb4"
+ )
""" # noqa
from .pymysql import MySQLDialect_pymysql
:func:`_asyncio.create_async_engine` engine creation function::
from sqlalchemy.ext.asyncio import create_async_engine
- engine = create_async_engine("mysql+asyncmy://user:pass@hostname/dbname?charset=utf8mb4")
+ engine = create_async_engine(
+ "mysql+asyncmy://user:pass@hostname/dbname?charset=utf8mb4"
+ )
""" # noqa
from __future__ import annotations
To connect to a MariaDB database, no changes to the database URL are required::
- engine = create_engine("mysql+pymysql://user:pass@some_mariadb/dbname?charset=utf8mb4")
+ engine = create_engine(
+ "mysql+pymysql://user:pass@some_mariadb/dbname?charset=utf8mb4"
+ )
Upon first connect, the SQLAlchemy dialect employs a
server version detection scheme that determines if the
and is not compatible with a MySQL database. To use this mode of operation,
replace the "mysql" token in the above URL with "mariadb"::
- engine = create_engine("mariadb+pymysql://user:pass@some_mariadb/dbname?charset=utf8mb4")
+ engine = create_engine(
+ "mariadb+pymysql://user:pass@some_mariadb/dbname?charset=utf8mb4"
+ )
The above engine, upon first connect, will raise an error if the server version
detection detects that the backing database is not MariaDB.
a connection will be discarded and replaced with a new one if it has been
present in the pool for a fixed number of seconds::
- engine = create_engine('mysql+mysqldb://...', pool_recycle=3600)
+ engine = create_engine("mysql+mysqldb://...", pool_recycle=3600)
For more comprehensive disconnect detection of pooled connections, including
accommodation of server restarts and network issues, a pre-ping approach may
``ENGINE`` of ``InnoDB``, ``CHARSET`` of ``utf8mb4``, and ``KEY_BLOCK_SIZE``
of ``1024``::
- Table('mytable', metadata,
- Column('data', String(32)),
- mysql_engine='InnoDB',
- mysql_charset='utf8mb4',
- mysql_key_block_size="1024"
- )
+ Table(
+ "mytable",
+ metadata,
+ Column("data", String(32)),
+ mysql_engine="InnoDB",
+ mysql_charset="utf8mb4",
+ mysql_key_block_size="1024",
+ )
When supporting :ref:`mysql_mariadb_only_mode` mode, similar keys against
the "mariadb" prefix must be included as well. The values can of course
# support both "mysql" and "mariadb-only" engine URLs
- Table('mytable', metadata,
- Column('data', String(32)),
-
- mysql_engine='InnoDB',
- mariadb_engine='InnoDB',
-
- mysql_charset='utf8mb4',
- mariadb_charset='utf8',
-
- mysql_key_block_size="1024"
- mariadb_key_block_size="1024"
-
- )
+ Table(
+ "mytable",
+ metadata,
+ Column("data", String(32)),
+ mysql_engine="InnoDB",
+ mariadb_engine="InnoDB",
+ mysql_charset="utf8mb4",
+ mariadb_charset="utf8",
+ mysql_key_block_size="1024",
+ mariadb_key_block_size="1024",
+ )
The MySQL / MariaDB dialects will normally transfer any keyword specified as
``mysql_keyword_name`` to be rendered as ``KEYWORD_NAME`` in the
To set isolation level using :func:`_sa.create_engine`::
engine = create_engine(
- "mysql+mysqldb://scott:tiger@localhost/test",
- isolation_level="READ UNCOMMITTED"
- )
+ "mysql+mysqldb://scott:tiger@localhost/test",
+ isolation_level="READ UNCOMMITTED",
+ )
To set using per-connection execution options::
connection = engine.connect()
- connection = connection.execution_options(
- isolation_level="READ COMMITTED"
- )
+ connection = connection.execution_options(isolation_level="READ COMMITTED")
Valid values for ``isolation_level`` include:
the first :class:`.Integer` primary key column which is not marked as a
foreign key::
- >>> t = Table('mytable', metadata,
- ... Column('mytable_id', Integer, primary_key=True)
+ >>> t = Table(
+ ... "mytable", metadata, Column("mytable_id", Integer, primary_key=True)
... )
>>> t.create()
CREATE TABLE mytable (
can also be used to enable auto-increment on a secondary column in a
multi-column key for some storage engines::
- Table('mytable', metadata,
- Column('gid', Integer, primary_key=True, autoincrement=False),
- Column('id', Integer, primary_key=True)
- )
+ Table(
+ "mytable",
+ metadata,
+ Column("gid", Integer, primary_key=True, autoincrement=False),
+ Column("id", Integer, primary_key=True),
+ )
.. _mysql_ss_cursors:
option::
with engine.connect() as conn:
- result = conn.execution_options(stream_results=True).execute(text("select * from table"))
+ result = conn.execution_options(stream_results=True).execute(
+ text("select * from table")
+ )
Note that some kinds of SQL statements may not be supported with
server side cursors; generally, only SQL statements that return rows should be
in the URL, such as::
e = create_engine(
- "mysql+pymysql://scott:tiger@localhost/test?charset=utf8mb4")
+ "mysql+pymysql://scott:tiger@localhost/test?charset=utf8mb4"
+ )
This charset is the **client character set** for the connection. Some
MySQL DBAPIs will default this to a value such as ``latin1``, and some
DBAPI, as in::
e = create_engine(
- "mysql+pymysql://scott:tiger@localhost/test?charset=utf8mb4")
+ "mysql+pymysql://scott:tiger@localhost/test?charset=utf8mb4"
+ )
All modern DBAPIs should support the ``utf8mb4`` charset.
MySQL versions 5.6, 5.7 and later (not MariaDB at the time of this writing) now
emit a warning when attempting to pass binary data to the database, while a
character set encoding is also in place, when the binary data itself is not
-valid for that encoding::
+valid for that encoding:
+
+.. sourcecode:: text
default.py:509: Warning: (1300, "Invalid utf8mb4 character string:
'F9876A'")
interpret the binary string as a unicode object even if a datatype such
as :class:`.LargeBinary` is in use. To resolve this, the SQL statement requires
a binary "character set introducer" be present before any non-NULL value
-that renders like this::
+that renders like this:
+
+.. sourcecode:: sql
INSERT INTO table (data) VALUES (_binary %s)
# mysqlclient
engine = create_engine(
- "mysql+mysqldb://scott:tiger@localhost/test?charset=utf8mb4&binary_prefix=true")
+ "mysql+mysqldb://scott:tiger@localhost/test?charset=utf8mb4&binary_prefix=true"
+ )
# PyMySQL
engine = create_engine(
- "mysql+pymysql://scott:tiger@localhost/test?charset=utf8mb4&binary_prefix=true")
-
+ "mysql+pymysql://scott:tiger@localhost/test?charset=utf8mb4&binary_prefix=true"
+ )
The ``binary_prefix`` flag may or may not be supported by other MySQL drivers.
from sqlalchemy import create_engine, event
- eng = create_engine("mysql+mysqldb://scott:tiger@localhost/test", echo='debug')
+ eng = create_engine(
+ "mysql+mysqldb://scott:tiger@localhost/test", echo="debug"
+ )
+
# `insert=True` will ensure this is the very first listener to run
@event.listens_for(eng, "connect", insert=True)
cursor = dbapi_connection.cursor()
cursor.execute("SET sql_mode = 'STRICT_ALL_TABLES'")
+
conn = eng.connect()
In the example illustrated above, the "connect" event will invoke the "SET"
Many of the MySQL / MariaDB SQL extensions are handled through SQLAlchemy's generic
function and operator support::
- table.select(table.c.password==func.md5('plaintext'))
- table.select(table.c.username.op('regexp')('^[a-d]'))
+ table.select(table.c.password == func.md5("plaintext"))
+ table.select(table.c.username.op("regexp")("^[a-d]"))
And of course any valid SQL statement can be executed as a string as well.
* SELECT pragma, use :meth:`_expression.Select.prefix_with` and
:meth:`_query.Query.prefix_with`::
- select(...).prefix_with(['HIGH_PRIORITY', 'SQL_SMALL_RESULT'])
+ select(...).prefix_with(["HIGH_PRIORITY", "SQL_SMALL_RESULT"])
* UPDATE with LIMIT::
select(...).with_hint(some_table, "USE INDEX xyz")
-* MATCH operator support::
+* MATCH operator support::
+
+ from sqlalchemy.dialects.mysql import match
- from sqlalchemy.dialects.mysql import match
- select(...).where(match(col1, col2, against="some expr").in_boolean_mode())
+ select(...).where(match(col1, col2, against="some expr").in_boolean_mode())
- .. seealso::
+ .. seealso::
- :class:`_mysql.match`
+ :class:`_mysql.match`
INSERT/DELETE...RETURNING
-------------------------
# INSERT..RETURNING
result = connection.execute(
- table.insert().
- values(name='foo').
- returning(table.c.col1, table.c.col2)
+ table.insert().values(name="foo").returning(table.c.col1, table.c.col2)
)
print(result.all())
# DELETE..RETURNING
result = connection.execute(
- table.delete().
- where(table.c.name=='foo').
- returning(table.c.col1, table.c.col2)
+ table.delete()
+ .where(table.c.name == "foo")
+ .returning(table.c.col1, table.c.col2)
)
print(result.all())
>>> from sqlalchemy.dialects.mysql import insert
>>> insert_stmt = insert(my_table).values(
- ... id='some_existing_id',
- ... data='inserted value')
+ ... id="some_existing_id", data="inserted value"
+ ... )
>>> on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(
- ... data=insert_stmt.inserted.data,
- ... status='U'
+ ... data=insert_stmt.inserted.data, status="U"
... )
>>> print(on_duplicate_key_stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (%s, %s)
.. sourcecode:: pycon+sql
>>> insert_stmt = insert(my_table).values(
- ... id='some_existing_id',
- ... data='inserted value')
+ ... id="some_existing_id", data="inserted value"
+ ... )
>>> on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(
... data="some data",
.. sourcecode:: pycon+sql
>>> stmt = insert(my_table).values(
- ... id='some_id',
- ... data='inserted value',
- ... author='jlh')
+ ... id="some_id", data="inserted value", author="jlh"
+ ... )
>>> do_update_stmt = stmt.on_duplicate_key_update(
- ... data="updated value",
- ... author=stmt.inserted.author
+ ... data="updated value", author=stmt.inserted.author
... )
>>> print(do_update_stmt)
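The statement above can also be exercised without a database by compiling against the MySQL dialect. A minimal sketch, assuming an illustrative ``my_table`` definition:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.dialects import mysql
from sqlalchemy.dialects.mysql import insert

metadata = MetaData()
my_table = Table(
    "my_table",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("data", String(50)),
    Column("author", String(50)),
)

stmt = insert(my_table).values(id=1, data="inserted value", author="jlh")
do_update_stmt = stmt.on_duplicate_key_update(
    data="updated value", author=stmt.inserted.author
)
# the ``inserted`` alias renders as the MySQL VALUES() function
sql = str(do_update_stmt.compile(dialect=mysql.dialect()))
```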
become part of the index. SQLAlchemy provides this feature via the
``mysql_length`` and/or ``mariadb_length`` parameters::
- Index('my_index', my_table.c.data, mysql_length=10, mariadb_length=10)
+ Index("my_index", my_table.c.data, mysql_length=10, mariadb_length=10)
- Index('a_b_idx', my_table.c.a, my_table.c.b, mysql_length={'a': 4,
- 'b': 9})
+ Index("a_b_idx", my_table.c.a, my_table.c.b, mysql_length={"a": 4, "b": 9})
- Index('a_b_idx', my_table.c.a, my_table.c.b, mariadb_length={'a': 4,
- 'b': 9})
+ Index(
+ "a_b_idx", my_table.c.a, my_table.c.b, mariadb_length={"a": 4, "b": 9}
+ )
Prefix lengths are given in characters for nonbinary string types and in bytes
for binary string types. The value passed to the keyword argument *must* be
an index. SQLAlchemy provides this feature via the
``mysql_prefix`` parameter on :class:`.Index`::
- Index('my_index', my_table.c.data, mysql_prefix='FULLTEXT')
+ Index("my_index", my_table.c.data, mysql_prefix="FULLTEXT")
The value passed to the keyword argument will be simply passed through to the
underlying CREATE INDEX, so it *must* be a valid index prefix for your MySQL
an index or primary key constraint. SQLAlchemy provides this feature via the
``mysql_using`` parameter on :class:`.Index`::
- Index('my_index', my_table.c.data, mysql_using='hash', mariadb_using='hash')
+ Index(
+ "my_index", my_table.c.data, mysql_using="hash", mariadb_using="hash"
+ )
As well as the ``mysql_using`` parameter on :class:`.PrimaryKeyConstraint`::
- PrimaryKeyConstraint("data", mysql_using='hash', mariadb_using='hash')
+ PrimaryKeyConstraint("data", mysql_using="hash", mariadb_using="hash")
The value passed to the keyword argument will be simply passed through to the
underlying CREATE INDEX or PRIMARY KEY clause, so it *must* be a valid index
is available using the keyword argument ``mysql_with_parser``::
Index(
- 'my_index', my_table.c.data,
- mysql_prefix='FULLTEXT', mysql_with_parser="ngram",
- mariadb_prefix='FULLTEXT', mariadb_with_parser="ngram",
+ "my_index",
+ my_table.c.data,
+ mysql_prefix="FULLTEXT",
+ mysql_with_parser="ngram",
+ mariadb_prefix="FULLTEXT",
+ mariadb_with_parser="ngram",
)
.. versionadded:: 1.3
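The resulting DDL can be verified without a server by compiling a ``CreateIndex`` construct against the MySQL dialect. A minimal sketch, with an illustrative table:

```python
from sqlalchemy import Column, Index, MetaData, String, Table
from sqlalchemy.dialects import mysql
from sqlalchemy.schema import CreateIndex

metadata = MetaData()
my_table = Table("my_table", metadata, Column("data", String(255)))
ix = Index(
    "my_index",
    my_table.c.data,
    mysql_prefix="FULLTEXT",
    mysql_with_parser="ngram",
)
# render the CREATE INDEX statement as a string
ddl = str(CreateIndex(ix).compile(dialect=mysql.dialect()))
```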
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.schema import ForeignKeyConstraint
+
@compiles(ForeignKeyConstraint, "mysql", "mariadb")
def process(element, compiler, **kw):
element.deferrable = element.initially = None
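Completed as a runnable sketch, the recipe looks like the following; the final ``return`` restores the default rendering once the unsupported attributes are cleared (the table definitions are illustrative):

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table
from sqlalchemy.dialects import mysql
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.schema import CreateTable, ForeignKeyConstraint


@compiles(ForeignKeyConstraint, "mysql", "mariadb")
def process(element, compiler, **kw):
    # blank out the unsupported keywords, then render normally
    element.deferrable = element.initially = None
    return compiler.visit_foreign_key_constraint(element, **kw)


metadata = MetaData()
parent = Table("parent", metadata, Column("id", Integer, primary_key=True))
child = Table(
    "child",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("parent_id", Integer, ForeignKey("parent.id", deferrable=True)),
)
# DEFERRABLE no longer appears in the emitted DDL
ddl = str(CreateTable(child).compile(dialect=mysql.dialect()))
```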
reflection will not include foreign keys. For these tables, you may supply a
:class:`~sqlalchemy.ForeignKeyConstraint` at reflection time::
- Table('mytable', metadata,
- ForeignKeyConstraint(['other_id'], ['othertable.other_id']),
- autoload_with=engine
- )
+ Table(
+ "mytable",
+ metadata,
+ ForeignKeyConstraint(["other_id"], ["othertable.other_id"]),
+ autoload_with=engine,
+ )
.. seealso::
mytable = Table(
"mytable",
metadata,
- Column('id', Integer, primary_key=True),
- Column('data', String(50)),
+ Column("id", Integer, primary_key=True),
+ Column("data", String(50)),
Column(
- 'last_updated',
+ "last_updated",
TIMESTAMP,
- server_default=text("CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP")
- )
+ server_default=text(
+ "CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"
+ ),
+ ),
)
The same instructions apply to use of the :class:`_types.DateTime` and
mytable = Table(
"mytable",
metadata,
- Column('id', Integer, primary_key=True),
- Column('data', String(50)),
+ Column("id", Integer, primary_key=True),
+ Column("data", String(50)),
Column(
- 'last_updated',
+ "last_updated",
DateTime,
- server_default=text("CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP")
- )
+ server_default=text(
+ "CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"
+ ),
+ ),
)
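Whether the intended DDL is emitted can be checked by compiling ``CreateTable`` against the MySQL dialect; a minimal sketch:

```python
from sqlalchemy import Column, Integer, MetaData, Table, text
from sqlalchemy.dialects import mysql
from sqlalchemy.dialects.mysql import TIMESTAMP
from sqlalchemy.schema import CreateTable

metadata = MetaData()
mytable = Table(
    "mytable",
    metadata,
    Column("id", Integer, primary_key=True),
    Column(
        "last_updated",
        TIMESTAMP,
        # the textual default is passed through verbatim into the DDL
        server_default=text("CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"),
    ),
)
ddl = str(CreateTable(mytable).compile(dialect=mysql.dialect()))
```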
-
Even though the :paramref:`_schema.Column.server_onupdate` feature does not
generate this DDL, it still may be desirable to signal to the ORM that this
updated value should be fetched. This syntax looks like the following::
from sqlalchemy.schema import FetchedValue
+
class MyClass(Base):
- __tablename__ = 'mytable'
+ __tablename__ = "mytable"
id = Column(Integer, primary_key=True)
data = Column(String(50))
last_updated = Column(
TIMESTAMP,
- server_default=text("CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"),
- server_onupdate=FetchedValue()
+ server_default=text(
+ "CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"
+ ),
+ server_onupdate=FetchedValue(),
)
-
.. _mysql_timestamp_null:
TIMESTAMP Columns and NULL
TIMESTAMP datatype implicitly includes a default value of
CURRENT_TIMESTAMP, even though this is not stated, and additionally
sets the column as NOT NULL, the opposite behavior vs. that of all
-other datatypes::
+other datatypes:
+
+.. sourcecode:: text
mysql> CREATE TABLE ts_test (
-> a INTEGER,
from sqlalchemy.dialects.mysql import TIMESTAMP
m = MetaData()
- t = Table('ts_test', m,
- Column('a', Integer),
- Column('b', Integer, nullable=False),
- Column('c', TIMESTAMP),
- Column('d', TIMESTAMP, nullable=False)
- )
+ t = Table(
+ "ts_test",
+ m,
+ Column("a", Integer),
+ Column("b", Integer, nullable=False),
+ Column("c", TIMESTAMP),
+ Column("d", TIMESTAMP, nullable=False),
+ )
from sqlalchemy import create_engine
+
e = create_engine("mysql+mysqldb://scott:tiger@localhost/test", echo=True)
m.create_all(e)
-output::
+output:
+
+.. sourcecode:: sql
CREATE TABLE ts_test (
a INTEGER,
in :ref:`tutorial_parameter_ordered_updates`::
insert().on_duplicate_key_update(
- [("name", "some name"), ("value", "some value")])
+ [
+ ("name", "some name"),
+ ("value", "some value"),
+ ]
+ )
.. versionchanged:: 1.3 parameters can be specified as a dictionary
or list of 2-tuples; the latter form provides for parameter
E.g.::
- Column('myenum', ENUM("foo", "bar", "baz"))
+ Column("myenum", ENUM("foo", "bar", "baz"))
:param enums: The range of valid values for this ENUM. Values in
enums are not quoted, they will be escaped and surrounded by single
E.g.::
- Column('myset', SET("foo", "bar", "baz"))
-
+ Column("myset", SET("foo", "bar", "baz"))
The list of potential values is required in the case that this
set will be used to generate DDL for a table, or if the
.order_by(desc(match_expr))
)
- Would produce SQL resembling::
+ Would produce SQL resembling:
+
+ .. sourcecode:: sql
SELECT id, firstname, lastname
FROM user
"ssl": {
"ca": "/home/gord/client-ssl/ca.pem",
"cert": "/home/gord/client-ssl/client-cert.pem",
- "key": "/home/gord/client-ssl/client-key.pem"
+ "key": "/home/gord/client-ssl/client-key.pem",
}
- }
+ },
)
For convenience, the following keys may also be specified inline within the URL
-----------------------------------
Google Cloud SQL now recommends use of the MySQLdb dialect. Connect
-using a URL like the following::
+using a URL like the following:
+
+.. sourcecode:: text
mysql+mysqldb://root@/<dbname>?unix_socket=/cloudsql/<projectid>:<instancename>
"&ssl_check_hostname=false"
)
-
MySQL-Python Compatibility
--------------------------
Pass through exact pyodbc connection string::
import urllib
+
connection_string = (
- 'DRIVER=MySQL ODBC 8.0 ANSI Driver;'
- 'SERVER=localhost;'
- 'PORT=3307;'
- 'DATABASE=mydb;'
- 'UID=root;'
- 'PWD=(whatever);'
- 'charset=utf8mb4;'
+ "DRIVER=MySQL ODBC 8.0 ANSI Driver;"
+ "SERVER=localhost;"
+ "PORT=3307;"
+ "DATABASE=mydb;"
+ "UID=root;"
+ "PWD=(whatever);"
+ "charset=utf8mb4;"
)
params = urllib.parse.quote_plus(connection_string)
connection_uri = "mysql+pyodbc:///?odbc_connect=%s" % params
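The quoting step matters because the raw ODBC string contains characters such as ``;``, ``@`` and spaces that conflict with URL syntax. A small standard-library illustration (the DSN values here are placeholders, not a working configuration):

```python
import urllib.parse

# ";" separates ODBC keywords and "@" may appear in passwords; both
# must be percent-encoded before embedding in a SQLAlchemy URL
connection_string = (
    "DRIVER=MySQL ODBC 8.0 ANSI Driver;SERVER=localhost;PWD=p@ss"
)
params = urllib.parse.quote_plus(connection_string)
connection_uri = "mysql+pyodbc:///?odbc_connect=%s" % params
```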
Starting from version 12, Oracle Database can make use of identity columns
using the :class:`_sql.Identity` to specify the autoincrementing behavior::
- t = Table('mytable', metadata,
- Column('id', Integer, Identity(start=3), primary_key=True),
- Column(...), ...
+ t = Table(
+ "mytable",
+ metadata,
+ Column("id", Integer, Identity(start=3), primary_key=True),
+ Column(...),
+ ...,
)
The CREATE TABLE for the above :class:`_schema.Table` object would be:
sequences, use the sqlalchemy.schema.Sequence object which is passed to a
Column construct::
- t = Table('mytable', metadata,
- Column('id', Integer, Sequence('id_seq', start=1), primary_key=True),
- Column(...), ...
+ t = Table(
+ "mytable",
+ metadata,
+ Column("id", Integer, Sequence("id_seq", start=1), primary_key=True),
+ Column(...),
+ ...,
)
This step is also required when using table reflection, i.e. autoload_with=engine::
- t = Table('mytable', metadata,
- Column('id', Integer, Sequence('id_seq', start=1), primary_key=True),
- autoload_with=engine
+ t = Table(
+ "mytable",
+ metadata,
+ Column("id", Integer, Sequence("id_seq", start=1), primary_key=True),
+ autoload_with=engine,
)
In addition to the standard options, Oracle Database supports the following
To set using per-connection execution options::
connection = engine.connect()
- connection = connection.execution_options(
- isolation_level="AUTOCOMMIT"
- )
+ connection = connection.execution_options(isolation_level="AUTOCOMMIT")
For ``READ COMMITTED`` and ``SERIALIZABLE``, the Oracle Database dialects sets
the level at the session level using ``ALTER SESSION``, which is reverted back
engine = create_engine(
"oracle+oracledb://scott:tiger@localhost:1521?service_name=freepdb1",
- max_identifier_length=30)
+ max_identifier_length=30,
+ )
If :paramref:`_sa.create_engine.max_identifier_length` is not set, the oracledb
dialect internally uses the ``max_identifier_length`` attribute available on
oracle_dialect = oracle.dialect(max_identifier_length=30)
print(CreateIndex(ix).compile(dialect=oracle_dialect))
-With an identifier length of 30, the above CREATE INDEX looks like::
+With an identifier length of 30, the above CREATE INDEX looks like:
+
+.. sourcecode:: sql
CREATE INDEX ix_some_column_name_1s_70cd ON t
(some_column_name_1, some_column_name_2, some_column_name_3)
-However with length of 128, it becomes::
+However with length of 128, it becomes:
+
+.. sourcecode:: sql
CREATE INDEX ix_some_column_name_1some_column_name_2some_column_name_3 ON t
(some_column_name_1, some_column_name_2, some_column_name_3)
accessed over DBLINK, by passing the flag ``oracle_resolve_synonyms=True`` as
a keyword argument to the :class:`_schema.Table` construct::
- some_table = Table('some_table', autoload_with=some_engine,
- oracle_resolve_synonyms=True)
+ some_table = Table(
+ "some_table", autoload_with=some_engine, oracle_resolve_synonyms=True
+ )
When this flag is set, the given name (such as ``some_table`` above) will be
searched not just in the ``ALL_TABLES`` view, but also within the
from sqlalchemy import create_engine, inspect
- engine = create_engine("oracle+oracledb://scott:tiger@localhost:1521?service_name=freepdb1")
+ engine = create_engine(
+ "oracle+oracledb://scott:tiger@localhost:1521?service_name=freepdb1"
+ )
inspector = inspect(engine)
all_check_constraints = inspector.get_check_constraints(
- "some_table", include_all=True)
+ "some_table", include_all=True
+ )
* in most cases, when reflecting a :class:`_schema.Table`, a UNIQUE constraint
will **not** be available as a :class:`.UniqueConstraint` object, as Oracle
# exclude SYSAUX and SOME_TABLESPACE, but not SYSTEM
e = create_engine(
- "oracle+oracledb://scott:tiger@localhost:1521/?service_name=freepdb1",
- exclude_tablespaces=["SYSAUX", "SOME_TABLESPACE"])
+ "oracle+oracledb://scott:tiger@localhost:1521/?service_name=freepdb1",
+ exclude_tablespaces=["SYSAUX", "SOME_TABLESPACE"],
+ )
DateTime Compatibility
----------------------
* ``ON COMMIT``::
Table(
- "some_table", metadata, ...,
- prefixes=['GLOBAL TEMPORARY'], oracle_on_commit='PRESERVE ROWS')
+ "some_table",
+ metadata,
+ ...,
+ prefixes=["GLOBAL TEMPORARY"],
+ oracle_on_commit="PRESERVE ROWS",
+ )
* ``COMPRESS``::
- Table('mytable', metadata, Column('data', String(32)),
- oracle_compress=True)
+ Table(
+ "mytable", metadata, Column("data", String(32)), oracle_compress=True
+ )
- Table('mytable', metadata, Column('data', String(32)),
- oracle_compress=6)
+ Table("mytable", metadata, Column("data", String(32)), oracle_compress=6)
- The ``oracle_compress`` parameter accepts either an integer compression
- level, or ``True`` to use the default compression level.
+ The ``oracle_compress`` parameter accepts either an integer compression
+ level, or ``True`` to use the default compression level.
* ``TABLESPACE``::
- Table('mytable', metadata, ...,
- oracle_tablespace="EXAMPLE_TABLESPACE")
+ Table("mytable", metadata, ..., oracle_tablespace="EXAMPLE_TABLESPACE")
- The ``oracle_tablespace`` parameter specifies the tablespace in which the
- table is to be created. This is useful when you want to create a table in a
- tablespace other than the default tablespace of the user.
+ The ``oracle_tablespace`` parameter specifies the tablespace in which the
+ table is to be created. This is useful when you want to create a table in a
+ tablespace other than the default tablespace of the user.
- .. versionadded:: 2.0.37
+ .. versionadded:: 2.0.37
.. _oracle_index_options:
You can specify the ``oracle_bitmap`` parameter to create a bitmap index
instead of a B-tree index::
- Index('my_index', my_table.c.data, oracle_bitmap=True)
+ Index("my_index", my_table.c.data, oracle_bitmap=True)
Bitmap indexes cannot be unique and cannot be compressed. SQLAlchemy will not
check for such limitations, only the database will.
of repeated values. Use the ``oracle_compress`` parameter to turn on key
compression::
- Index('my_index', my_table.c.data, oracle_compress=True)
+ Index("my_index", my_table.c.data, oracle_compress=True)
- Index('my_index', my_table.c.data1, my_table.c.data2, unique=True,
- oracle_compress=1)
+ Index(
+ "my_index",
+ my_table.c.data1,
+ my_table.c.data2,
+ unique=True,
+ oracle_compress=1,
+ )
The ``oracle_compress`` parameter accepts either an integer specifying the
number of prefix columns to compress, or ``True`` to use the default (all
from Oracle Database's Easy Connect syntax then connect in SQLAlchemy using the
``service_name`` query string parameter::
- engine = create_engine("oracle+cx_oracle://scott:tiger@hostname:port?service_name=myservice&encoding=UTF-8&nencoding=UTF-8")
+ engine = create_engine(
+ "oracle+cx_oracle://scott:tiger@hostname:port?service_name=myservice&encoding=UTF-8&nencoding=UTF-8"
+ )
Note that the default driver value for encoding and nencoding was changed to
“UTF-8” in cx_Oracle 8.0 so these parameters can be omitted when using that
:paramref:`_sa.create_engine.connect_args` dictionary::
import cx_Oracle
+
e = create_engine(
"oracle+cx_oracle://@",
connect_args={
"user": "scott",
"password": "tiger",
- "dsn": "hostname:port/myservice?transport_connect_timeout=30&expire_time=60"
- }
+ "dsn": "hostname:port/myservice?transport_connect_timeout=30&expire_time=60",
+ },
)
Connections with tnsnames.ora or to Oracle Autonomous Database
Alternatively, if no port, database name, or service name is provided, the
dialect will use an Oracle Database DSN "connection string". This takes the
"hostname" portion of the URL as the data source name. For example, if the
-``tnsnames.ora`` file contains a TNS Alias of ``myalias`` as below::
+``tnsnames.ora`` file contains a TNS Alias of ``myalias`` as below:
+
+.. sourcecode:: text
myalias =
(DESCRIPTION =
To use Oracle Database's obsolete System Identifier connection syntax, the SID
can be passed in a "database name" portion of the URL::
- engine = create_engine("oracle+cx_oracle://scott:tiger@hostname:port/dbname")
+ engine = create_engine(
+ "oracle+cx_oracle://scott:tiger@hostname:port/dbname"
+ )
Above, the DSN passed to cx_Oracle is created by ``cx_Oracle.makedsn()`` as
follows::
symbol::
e = create_engine(
- "oracle+cx_oracle://user:pass@dsn?encoding=UTF-8&nencoding=UTF-8&mode=SYSDBA&events=true")
+ "oracle+cx_oracle://user:pass@dsn?encoding=UTF-8&nencoding=UTF-8&mode=SYSDBA&events=true"
+ )
.. versionchanged:: 1.3 the cx_Oracle dialect now accepts all argument names
within the URL string itself, to be passed to the cx_Oracle DBAPI. As
Any cx_Oracle parameter value and/or constant may be passed, such as::
import cx_Oracle
+
e = create_engine(
"oracle+cx_oracle://user:pass@dsn",
connect_args={
"encoding": "UTF-8",
"nencoding": "UTF-8",
"mode": cx_Oracle.SYSDBA,
- "events": True
- }
+ "events": True,
+ },
)
Note that the default driver value for ``encoding`` and ``nencoding`` was
, such as::
e = create_engine(
- "oracle+cx_oracle://user:pass@dsn", coerce_to_decimal=False)
+ "oracle+cx_oracle://user:pass@dsn", coerce_to_decimal=False
+ )
The parameters accepted by the cx_oracle dialect are as follows:
from sqlalchemy.pool import NullPool
pool = cx_Oracle.SessionPool(
- user="scott", password="tiger", dsn="orclpdb",
- min=1, max=4, increment=1, threaded=True,
- encoding="UTF-8", nencoding="UTF-8"
+ user="scott",
+ password="tiger",
+ dsn="orclpdb",
+ min=1,
+ max=4,
+ increment=1,
+ threaded=True,
+ encoding="UTF-8",
+ nencoding="UTF-8",
)
- engine = create_engine("oracle+cx_oracle://", creator=pool.acquire, poolclass=NullPool)
+ engine = create_engine(
+ "oracle+cx_oracle://", creator=pool.acquire, poolclass=NullPool
+ )
The above engine may then be used normally where cx_Oracle's pool handles
connection pooling::
from sqlalchemy.pool import NullPool
pool = cx_Oracle.SessionPool(
- user="scott", password="tiger", dsn="orclpdb",
- min=2, max=5, increment=1, threaded=True,
- encoding="UTF-8", nencoding="UTF-8"
+ user="scott",
+ password="tiger",
+ dsn="orclpdb",
+ min=2,
+ max=5,
+ increment=1,
+ threaded=True,
+ encoding="UTF-8",
+ nencoding="UTF-8",
)
+
def creator():
- return pool.acquire(cclass="MYCLASS", purity=cx_Oracle.ATTR_PURITY_SELF)
+ return pool.acquire(
+ cclass="MYCLASS", purity=cx_Oracle.ATTR_PURITY_SELF
+ )
+
- engine = create_engine("oracle+cx_oracle://", creator=creator, poolclass=NullPool)
+ engine = create_engine(
+ "oracle+cx_oracle://", creator=creator, poolclass=NullPool
+ )
The above engine may then be used normally where cx_Oracle handles session
pooling and Oracle Database additionally uses DRCP::
the ``encoding`` and ``nencoding`` parameters directly to its ``.connect()``
function. These can be present in the URL as follows::
- engine = create_engine("oracle+cx_oracle://scott:tiger@tnsalias?encoding=UTF-8&nencoding=UTF-8")
+ engine = create_engine(
+ "oracle+cx_oracle://scott:tiger@tnsalias?encoding=UTF-8&nencoding=UTF-8"
+ )
For the meaning of the ``encoding`` and ``nencoding`` parameters, please
consult
engine = create_engine("oracle+cx_oracle://scott:tiger@host/xe")
+
@event.listens_for(engine, "do_setinputsizes")
def _log_setinputsizes(inputsizes, cursor, statement, parameters, context):
for bindparam, dbapitype in inputsizes.items():
- log.info(
- "Bound parameter name: %s SQLAlchemy type: %r "
- "DBAPI object: %s",
- bindparam.key, bindparam.type, dbapitype)
+ log.info(
+ "Bound parameter name: %s SQLAlchemy type: %r DBAPI object: %s",
+ bindparam.key,
+ bindparam.type,
+ dbapitype,
+ )
Example 2 - remove all bindings to CLOB
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
engine = create_engine("oracle+cx_oracle://scott:tiger@host/xe")
+
@event.listens_for(engine, "do_setinputsizes")
def _remove_clob(inputsizes, cursor, statement, parameters, context):
for bindparam, dbapitype in list(inputsizes.items()):
automatically select the sync version::
from sqlalchemy import create_engine
- sync_engine = create_engine("oracle+oracledb://scott:tiger@localhost?service_name=FREEPDB1")
+
+ sync_engine = create_engine(
+ "oracle+oracledb://scott:tiger@localhost?service_name=FREEPDB1"
+ )
* calling :func:`_asyncio.create_async_engine` with ``oracle+oracledb://...``
will automatically select the async version::
from sqlalchemy.ext.asyncio import create_async_engine
- asyncio_engine = create_async_engine("oracle+oracledb://scott:tiger@localhost?service_name=FREEPDB1")
+
+ asyncio_engine = create_async_engine(
+ "oracle+oracledb://scott:tiger@localhost?service_name=FREEPDB1"
+ )
The asyncio version of the dialect may also be specified explicitly using the
``oracledb_async`` suffix::
from sqlalchemy.ext.asyncio import create_async_engine
- asyncio_engine = create_async_engine("oracle+oracledb_async://scott:tiger@localhost?service_name=FREEPDB1")
+
+ asyncio_engine = create_async_engine(
+ "oracle+oracledb_async://scott:tiger@localhost?service_name=FREEPDB1"
+ )
.. versionadded:: 2.0.25 added support for the async version of oracledb.
``init_oracle_client()``, like the ``lib_dir`` path, a dict may be passed, for
example::
- engine = sa.create_engine("oracle+oracledb://...", thick_mode={
- "lib_dir": "/path/to/oracle/client/lib",
- "config_dir": "/path/to/network_config_file_directory",
- "driver_name": "my-app : 1.0.0"
- })
+ engine = sa.create_engine(
+ "oracle+oracledb://...",
+ thick_mode={
+ "lib_dir": "/path/to/oracle/client/lib",
+ "config_dir": "/path/to/network_config_file_directory",
+ "driver_name": "my-app : 1.0.0",
+ },
+ )
Note that passing a ``lib_dir`` path should only be done on macOS or
Windows. On Linux it does not behave as you might expect.
Given the hostname, port and service name of the target database, you can
connect in SQLAlchemy using the ``service_name`` query string parameter::
- engine = create_engine("oracle+oracledb://scott:tiger@hostname:port?service_name=myservice")
+ engine = create_engine(
+ "oracle+oracledb://scott:tiger@hostname:port?service_name=myservice"
+ )
Connecting with Easy Connect strings
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
connect_args={
"user": "scott",
"password": "tiger",
- "dsn": "hostname:port/myservice?transport_connect_timeout=30&expire_time=60"
- }
+ "dsn": "hostname:port/myservice?transport_connect_timeout=30&expire_time=60",
+ },
)
The Easy Connect syntax has been enhanced during the life of Oracle Database.
is at `Understanding the Easy Connect Naming Method
<https://www.oracle.com/pls/topic/lookup?ctx=dblatest&id=GUID-B0437826-43C1-49EC-A94D-B650B6A4A6EE>`_.
-The general syntax is similar to::
+The general syntax is similar to:
+
+.. sourcecode:: text
[[protocol:]//]host[:port][/[service_name]][?parameter_name=value{&parameter_name=value}]
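The pieces of this syntax compose mechanically, which a short standard-library sketch can illustrate. The helper name, its defaults and the host values below are purely hypothetical, not part of any driver API:

```python
# hypothetical helper showing how the Easy Connect pieces compose
def easy_connect(host, service_name, port=1521, protocol=None, **params):
    prefix = f"{protocol}://" if protocol else ""
    dsn = f"{prefix}{host}:{port}/{service_name}"
    if params:
        # optional name=value pairs joined with "&" after "?"
        dsn += "?" + "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return dsn


dsn = easy_connect(
    "dbhost.example.com",
    "myservice",
    expire_time=60,
    transport_connect_timeout=30,
)
```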
"password": "tiger",
"dsn": "hostname:port/myservice",
"events": True,
- "mode": oracledb.AUTH_MODE_SYSDBA
- }
+ "mode": oracledb.AUTH_MODE_SYSDBA,
+ },
)
Connecting with tnsnames.ora TNS aliases
the URL as the data source name. For example, if the ``tnsnames.ora`` file
contains a `TNS Alias
<https://python-oracledb.readthedocs.io/en/latest/user_guide/connection_handling.html#tns-aliases-for-connection-strings>`_
-of ``myalias`` as below::
+of ``myalias`` as below:
+
+.. sourcecode:: text
myalias =
(DESCRIPTION =
path in ``sqlnet.ora`` appropriately::
e = create_engine(
- "oracle+oracledb://@",
- thick_mode={
- # directory containing tnsnames.ora and cwallet.so
- "config_dir": "/opt/oracle/wallet_dir",
- },
- connect_args={
- "user": "scott",
- "password": "tiger",
- "dsn": "mydb_high"
- }
- )
+ "oracle+oracledb://@",
+ thick_mode={
+ # directory containing tnsnames.ora and cwallet.so
+ "config_dir": "/opt/oracle/wallet_dir",
+ },
+ connect_args={
+ "user": "scott",
+ "password": "tiger",
+ "dsn": "mydb_high",
+ },
+ )
Thin mode users of mTLS should pass the appropriate directories and PEM wallet
password when creating the engine, similar to::
e = create_engine(
- "oracle+oracledb://@",
- connect_args={
- "user": "scott",
- "password": "tiger",
- "dsn": "mydb_high",
- "config_dir": "/opt/oracle/wallet_dir", # directory containing tnsnames.ora
- "wallet_location": "/opt/oracle/wallet_dir", # directory containing ewallet.pem
- "wallet_password": "top secret" # password for the PEM file
- }
- )
+ "oracle+oracledb://@",
+ connect_args={
+ "user": "scott",
+ "password": "tiger",
+ "dsn": "mydb_high",
+ "config_dir": "/opt/oracle/wallet_dir", # directory containing tnsnames.ora
+ "wallet_location": "/opt/oracle/wallet_dir", # directory containing ewallet.pem
+ "wallet_password": "top secret", # password for the PEM file
+ },
+ )
Typically ``config_dir`` and ``wallet_location`` are the same directory, which
is where the Oracle Autonomous Database wallet zip file was extracted. Note
# Uncomment to use the optional python-oracledb Thick mode.
# Review the python-oracledb doc for the appropriate parameters
- #oracledb.init_oracle_client(<your parameters>)
-
- pool = oracledb.create_pool(user="scott", password="tiger", dsn="localhost:1521/freepdb1",
- min=1, max=4, increment=1)
- engine = create_engine("oracle+oracledb://", creator=pool.acquire, poolclass=NullPool)
+ # oracledb.init_oracle_client(<your parameters>)
+
+ pool = oracledb.create_pool(
+ user="scott",
+ password="tiger",
+ dsn="localhost:1521/freepdb1",
+ min=1,
+ max=4,
+ increment=1,
+ )
+ engine = create_engine(
+ "oracle+oracledb://", creator=pool.acquire, poolclass=NullPool
+ )
The above engine may then be used normally. Internally, python-oracledb handles
connection pooling::
# Uncomment to use the optional python-oracledb Thick mode.
# Review the python-oracledb doc for the appropriate parameters
- #oracledb.init_oracle_client(<your parameters>)
-
- pool = oracledb.create_pool(user="scott", password="tiger", dsn="localhost:1521/freepdb1",
- min=1, max=4, increment=1,
- cclass="MYCLASS", purity=oracledb.PURITY_SELF)
- engine = create_engine("oracle+oracledb://", creator=pool.acquire, poolclass=NullPool)
+ # oracledb.init_oracle_client(<your parameters>)
+
+ pool = oracledb.create_pool(
+ user="scott",
+ password="tiger",
+ dsn="localhost:1521/freepdb1",
+ min=1,
+ max=4,
+ increment=1,
+ cclass="MYCLASS",
+ purity=oracledb.PURITY_SELF,
+ )
+ engine = create_engine(
+ "oracle+oracledb://", creator=pool.acquire, poolclass=NullPool
+ )
The above engine may then be used normally where python-oracledb handles
application connection pooling and Oracle Database additionally uses DRCP::
# Uncomment to use python-oracledb Thick mode.
# Review the python-oracledb doc for the appropriate parameters
- #oracledb.init_oracle_client(<your parameters>)
+ # oracledb.init_oracle_client(<your parameters>)
+
- pool = oracledb.create_pool(user="scott", password="tiger", dsn="localhost:1521/freepdb1",
- min=1, max=4, increment=1,
- cclass="MYCLASS", purity=oracledb.PURITY_SELF)
+ pool = oracledb.create_pool(
+ user="scott",
+ password="tiger",
+ dsn="localhost:1521/freepdb1",
+ min=1,
+ max=4,
+ increment=1,
+ cclass="MYCLASS",
+ purity=oracledb.PURITY_SELF,
+ )
def creator():
return pool.acquire(cclass="MYOTHERCLASS", purity=oracledb.PURITY_NEW)
- engine = create_engine("oracle+oracledb://", creator=creator, poolclass=NullPool)
+
+ engine = create_engine(
+ "oracle+oracledb://", creator=creator, poolclass=NullPool
+ )
Engine Options consumed by the SQLAlchemy oracledb dialect outside of the driver
--------------------------------------------------------------------------------
itself. These options are always passed directly to :func:`_sa.create_engine`,
such as::
- e = create_engine(
- "oracle+oracledb://user:pass@tnsalias", arraysize=500)
+ e = create_engine("oracle+oracledb://user:pass@tnsalias", arraysize=500)
The parameters accepted by the oracledb dialect are as follows:
from sqlalchemy import create_engine, event
- engine = create_engine("oracle+oracledb://scott:tiger@localhost:1521?service_name=freepdb1")
+ engine = create_engine(
+ "oracle+oracledb://scott:tiger@localhost:1521?service_name=freepdb1"
+ )
+
@event.listens_for(engine, "do_setinputsizes")
def _log_setinputsizes(inputsizes, cursor, statement, parameters, context):
for bindparam, dbapitype in inputsizes.items():
- log.info(
- "Bound parameter name: %s SQLAlchemy type: %r "
- "DBAPI object: %s",
- bindparam.key, bindparam.type, dbapitype)
+ log.info(
+ "Bound parameter name: %s SQLAlchemy type: %r DBAPI object: %s",
+ bindparam.key,
+ bindparam.type,
+ dbapitype,
+ )
Example 2 - remove all bindings to CLOB
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
from sqlalchemy import create_engine, event
from oracledb import CLOB
- engine = create_engine("oracle+oracledb://scott:tiger@localhost:1521?service_name=freepdb1")
+ engine = create_engine(
+ "oracle+oracledb://scott:tiger@localhost:1521?service_name=freepdb1"
+ )
+
@event.listens_for(engine, "do_setinputsizes")
def _remove_clob(inputsizes, cursor, statement, parameters, context):
disable this coercion to decimal for performance reasons, pass the flag
``coerce_to_decimal=False`` to :func:`_sa.create_engine`::
- engine = create_engine("oracle+oracledb://scott:tiger@tnsalias", coerce_to_decimal=False)
+ engine = create_engine(
+ "oracle+oracledb://scott:tiger@tnsalias", coerce_to_decimal=False
+ )
The ``coerce_to_decimal`` flag only impacts the results of plain string
SQL statements that are not otherwise associated with a :class:`.Numeric`
from sqlalchemy.dialects import postgresql
from sqlalchemy import select, func
- stmt = select(array([1,2]) + array([3,4,5]))
+ stmt = select(array([1, 2]) + array([3, 4, 5]))
print(stmt.compile(dialect=postgresql.dialect()))
- Produces the SQL::
+ Produces the SQL:
+
+ .. sourcecode:: sql
SELECT ARRAY[%(param_1)s, %(param_2)s] ||
- ARRAY[%(param_3)s, %(param_4)s, %(param_5)s]) AS anon_1
+ ARRAY[%(param_3)s, %(param_4)s, %(param_5)s] AS anon_1
:class:`_types.ARRAY`. The "inner" type of the array is inferred from
the values present, unless the ``type_`` keyword argument is passed::
- array(['foo', 'bar'], type_=CHAR)
+ array(["foo", "bar"], type_=CHAR)
Multidimensional arrays are produced by nesting :class:`.array` constructs.
The dimensionality of the final :class:`_types.ARRAY`
type::
stmt = select(
- array([
- array([1, 2]), array([3, 4]), array([column('q'), column('x')])
- ])
+ array(
+ [array([1, 2]), array([3, 4]), array([column("q"), column("x")])]
+ )
)
print(stmt.compile(dialect=postgresql.dialect()))
- Produces::
+ Produces:
+
- SELECT ARRAY[ARRAY[%(param_1)s, %(param_2)s],
- ARRAY[%(param_3)s, %(param_4)s], ARRAY[q, x]] AS anon_1
+ .. sourcecode:: sql
+
+ SELECT ARRAY[
+ ARRAY[%(param_1)s, %(param_2)s],
+ ARRAY[%(param_3)s, %(param_4)s],
+ ARRAY[q, x]
+ ] AS anon_1
.. versionadded:: 1.3.6 added support for multidimensional array literals
:class:`_postgresql.ARRAY`
- """
+ """ # noqa: E501
__visit_name__ = "array"
from sqlalchemy.dialects import postgresql
- mytable = Table("mytable", metadata,
- Column("data", postgresql.ARRAY(Integer, dimensions=2))
- )
+ mytable = Table(
+ "mytable",
+ metadata,
+ Column("data", postgresql.ARRAY(Integer, dimensions=2)),
+ )
The :class:`_postgresql.ARRAY` type provides all operations defined on the
core :class:`_types.ARRAY` type, including support for "dimensions",
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.ext.mutable import MutableList
+
class SomeOrmClass(Base):
# ...
E.g.::
- Column('myarray', ARRAY(Integer))
+ Column("myarray", ARRAY(Integer))
Arguments are:
:func:`_asyncio.create_async_engine` engine creation function::
from sqlalchemy.ext.asyncio import create_async_engine
- engine = create_async_engine("postgresql+asyncpg://user:pass@hostname/dbname")
+
+ engine = create_async_engine(
+ "postgresql+asyncpg://user:pass@hostname/dbname"
+ )
.. versionadded:: 1.4
argument)::
- engine = create_async_engine("postgresql+asyncpg://user:pass@hostname/dbname?prepared_statement_cache_size=500")
+ engine = create_async_engine(
+ "postgresql+asyncpg://user:pass@hostname/dbname?prepared_statement_cache_size=500"
+ )
To disable the prepared statement cache, use a value of zero::
- engine = create_async_engine("postgresql+asyncpg://user:pass@hostname/dbname?prepared_statement_cache_size=0")
+ engine = create_async_engine(
+ "postgresql+asyncpg://user:pass@hostname/dbname?prepared_statement_cache_size=0"
+ )
.. versionadded:: 1.4.0b2 Added ``prepared_statement_cache_size`` for asyncpg.
"postgresql+asyncpg://user:pass@somepgbouncer/dbname",
poolclass=NullPool,
connect_args={
- 'prepared_statement_name_func': lambda: f'__asyncpg_{uuid4()}__',
+ "prepared_statement_name_func": lambda: f"__asyncpg_{uuid4()}__",
},
)
metadata,
Column(
"id", Integer, Sequence("some_id_seq", start=1), primary_key=True
- )
+ ),
)
When SQLAlchemy issues a single INSERT statement, to fulfill the contract of
"data",
metadata,
Column(
- 'id', Integer, Identity(start=42, cycle=True), primary_key=True
+ "id", Integer, Identity(start=42, cycle=True), primary_key=True
),
- Column('data', String)
+ Column("data", String),
)
The CREATE TABLE for the above :class:`_schema.Table` object would be:
from sqlalchemy.ext.compiler import compiles
- @compiles(CreateColumn, 'postgresql')
+ @compiles(CreateColumn, "postgresql")
def use_identity(element, compiler, **kw):
text = compiler.visit_create_column(element, **kw)
- text = text.replace(
- "SERIAL", "INT GENERATED BY DEFAULT AS IDENTITY"
- )
+ text = text.replace("SERIAL", "INT GENERATED BY DEFAULT AS IDENTITY")
return text
Using the above, a table such as::
t = Table(
- 't', m,
- Column('id', Integer, primary_key=True),
- Column('data', String)
+ "t", m, Column("id", Integer, primary_key=True), Column("data", String)
)
- Will generate on the backing database as::
+ Will generate on the backing database as:
+
+ .. sourcecode:: sql
CREATE TABLE t (
id INT GENERATED BY DEFAULT AS IDENTITY,
option::
with engine.connect() as conn:
- result = conn.execution_options(stream_results=True).execute(text("select * from table"))
+ result = conn.execution_options(stream_results=True).execute(
+ text("select * from table")
+ )
Note that some kinds of SQL statements may not be supported with
server side cursors; generally, only SQL statements that return rows should be
engine = create_engine(
"postgresql+pg8000://scott:tiger@localhost/test",
- isolation_level = "REPEATABLE READ"
+ isolation_level="REPEATABLE READ",
)
To set using per-connection execution options::
with engine.connect() as conn:
- conn = conn.execution_options(
- isolation_level="REPEATABLE READ"
- )
+ conn = conn.execution_options(isolation_level="REPEATABLE READ")
with conn.begin():
- # ... work with transaction
+ ... # work with transaction
There are also more options for isolation level configurations, such as
"sub-engine" objects linked to a main :class:`_engine.Engine` which each apply
conn = conn.execution_options(
isolation_level="SERIALIZABLE",
postgresql_readonly=True,
- postgresql_deferrable=True
+ postgresql_deferrable=True,
)
with conn.begin():
- # ... work with transaction
+ ... # work with transaction
Note that some DBAPIs such as asyncpg only support "readonly" with
SERIALIZABLE isolation.
postgresql_engine = create_engine(
"postgresql+pyscopg2://scott:tiger@hostname/dbname",
-
# disable default reset-on-return scheme
pool_reset_on_return=None,
)
engine = create_engine("postgresql+psycopg2://scott:tiger@host/dbname")
+
@event.listens_for(engine, "connect", insert=True)
def set_search_path(dbapi_connection, connection_record):
existing_autocommit = dbapi_connection.autocommit
:ref:`schema_set_default_connections` - in the :ref:`metadata_toplevel` documentation
-
-
-
.. _postgresql_schema_reflection:
Remote-Schema Table Introspection and PostgreSQL search_path
to **determine the default schema for the current database connection**.
It does this using the PostgreSQL ``current_schema()``
function, illustrated below using a PostgreSQL client session (i.e. using
-the ``psql`` tool)::
+the ``psql`` tool):
+
+.. sourcecode:: sql
test=> select current_schema();
current_schema
However, if your database username **matches the name of a schema**, PostgreSQL's
default is to then **use that name as the default schema**. Below, we log in
using the username ``scott``. When we create a schema named ``scott``, **it
-implicitly changes the default schema**::
+implicitly changes the default schema**:
+
+.. sourcecode:: sql
test=> select current_schema();
current_schema
The behavior of ``current_schema()`` is derived from the
`PostgreSQL search path
<https://www.postgresql.org/docs/current/static/ddl-schemas.html#DDL-SCHEMAS-PATH>`_
-variable ``search_path``, which in modern PostgreSQL versions defaults to this::
+variable ``search_path``, which in modern PostgreSQL versions defaults to this:
+
+.. sourcecode:: sql
test=> show search_path;
search_path
returns a sample definition for a particular foreign key constraint,
omitting the referenced schema name from that definition when the name is
also in the PostgreSQL schema search path. The interaction below
-illustrates this behavior::
+illustrates this behavior:
+
+.. sourcecode:: sql
test=> CREATE TABLE test_schema.referred(id INTEGER PRIMARY KEY);
CREATE TABLE
the function.
On the other hand, if we set the search path back to the typical default
-of ``public``::
+of ``public``:
+
+.. sourcecode:: sql
test=> SET search_path TO public;
SET
The same query against ``pg_get_constraintdef()`` now returns the fully
-schema-qualified name for us::
+schema-qualified name for us:
+
+.. sourcecode:: sql
test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
>>> with engine.connect() as conn:
... conn.execute(text("SET search_path TO test_schema, public"))
... metadata_obj = MetaData()
- ... referring = Table('referring', metadata_obj,
- ... autoload_with=conn)
- ...
+ ... referring = Table("referring", metadata_obj, autoload_with=conn)
<sqlalchemy.engine.result.CursorResult object at 0x101612ed0>
The above process would deliver to the :attr:`_schema.MetaData.tables`
collection
``referred`` table named **without** the schema::
- >>> metadata_obj.tables['referred'].schema is None
+ >>> metadata_obj.tables["referred"].schema is None
True
To alter the behavior of reflection such that the referred schema is
>>> with engine.connect() as conn:
... conn.execute(text("SET search_path TO test_schema, public"))
... metadata_obj = MetaData()
- ... referring = Table('referring', metadata_obj,
- ... autoload_with=conn,
- ... postgresql_ignore_search_path=True)
- ...
+ ... referring = Table(
+ ... "referring",
+ ... metadata_obj,
+ ... autoload_with=conn,
+ ... postgresql_ignore_search_path=True,
+ ... )
<sqlalchemy.engine.result.CursorResult object at 0x1016126d0>
We will now have ``test_schema.referred`` stored as schema-qualified::
- >>> metadata_obj.tables['test_schema.referred'].schema
+ >>> metadata_obj.tables["test_schema.referred"].schema
'test_schema'
.. sidebar:: Best Practices for PostgreSQL Schema reflection
use the :meth:`._UpdateBase.returning` method on a per-statement basis::
# INSERT..RETURNING
- result = table.insert().returning(table.c.col1, table.c.col2).\
- values(name='foo')
+ result = (
+ table.insert().returning(table.c.col1, table.c.col2).values(name="foo")
+ )
print(result.fetchall())
# UPDATE..RETURNING
- result = table.update().returning(table.c.col1, table.c.col2).\
- where(table.c.name=='foo').values(name='bar')
+ result = (
+ table.update()
+ .returning(table.c.col1, table.c.col2)
+ .where(table.c.name == "foo")
+ .values(name="bar")
+ )
print(result.fetchall())
# DELETE..RETURNING
- result = table.delete().returning(table.c.col1, table.c.col2).\
- where(table.c.name=='foo')
+ result = (
+ table.delete()
+ .returning(table.c.col1, table.c.col2)
+ .where(table.c.name == "foo")
+ )
print(result.fetchall())
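The RETURNING constructs above can be exercised without a connection by compiling against the PostgreSQL dialect. A sketch, using a hypothetical lightweight ``table()`` construct rather than a full ``Table``:

```python
from sqlalchemy import column, table
from sqlalchemy.dialects import postgresql

# hypothetical table; real code would typically use a Table with metadata
t = table("some_table", column("col1"), column("col2"), column("name"))

# INSERT..RETURNING, compiled to a SQL string for inspection
stmt = t.insert().returning(t.c.col1, t.c.col2).values(name="foo")
sql = str(stmt.compile(dialect=postgresql.dialect()))
```

The compiled string ends with a ``RETURNING`` clause listing the two columns.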
.. _postgresql_insert_on_conflict:
>>> from sqlalchemy.dialects.postgresql import insert
>>> insert_stmt = insert(my_table).values(
- ... id='some_existing_id',
- ... data='inserted value')
- >>> do_nothing_stmt = insert_stmt.on_conflict_do_nothing(
- ... index_elements=['id']
+ ... id="some_existing_id", data="inserted value"
... )
+ >>> do_nothing_stmt = insert_stmt.on_conflict_do_nothing(index_elements=["id"])
>>> print(do_nothing_stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT (id) DO NOTHING
{stop}
>>> do_update_stmt = insert_stmt.on_conflict_do_update(
- ... constraint='pk_my_table',
- ... set_=dict(data='updated value')
+ ... constraint="pk_my_table", set_=dict(data="updated value")
... )
>>> print(do_update_stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
.. sourcecode:: pycon+sql
>>> do_update_stmt = insert_stmt.on_conflict_do_update(
- ... index_elements=['id'],
- ... set_=dict(data='updated value')
+ ... index_elements=["id"], set_=dict(data="updated value")
... )
>>> print(do_update_stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
{stop}
>>> do_update_stmt = insert_stmt.on_conflict_do_update(
- ... index_elements=[my_table.c.id],
- ... set_=dict(data='updated value')
+ ... index_elements=[my_table.c.id], set_=dict(data="updated value")
... )
>>> print(do_update_stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
.. sourcecode:: pycon+sql
- >>> stmt = insert(my_table).values(user_email='a@b.com', data='inserted data')
+ >>> stmt = insert(my_table).values(user_email="a@b.com", data="inserted data")
>>> stmt = stmt.on_conflict_do_update(
... index_elements=[my_table.c.user_email],
- ... index_where=my_table.c.user_email.like('%@gmail.com'),
- ... set_=dict(data=stmt.excluded.data)
+ ... index_where=my_table.c.user_email.like("%@gmail.com"),
+ ... set_=dict(data=stmt.excluded.data),
... )
>>> print(stmt)
{printsql}INSERT INTO my_table (data, user_email)
.. sourcecode:: pycon+sql
>>> do_update_stmt = insert_stmt.on_conflict_do_update(
- ... constraint='my_table_idx_1',
- ... set_=dict(data='updated value')
+ ... constraint="my_table_idx_1", set_=dict(data="updated value")
... )
>>> print(do_update_stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
{stop}
>>> do_update_stmt = insert_stmt.on_conflict_do_update(
- ... constraint='my_table_pk',
- ... set_=dict(data='updated value')
+ ... constraint="my_table_pk", set_=dict(data="updated value")
... )
>>> print(do_update_stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
.. sourcecode:: pycon+sql
>>> do_update_stmt = insert_stmt.on_conflict_do_update(
- ... constraint=my_table.primary_key,
- ... set_=dict(data='updated value')
+ ... constraint=my_table.primary_key, set_=dict(data="updated value")
... )
>>> print(do_update_stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
.. sourcecode:: pycon+sql
- >>> stmt = insert(my_table).values(id='some_id', data='inserted value')
+ >>> stmt = insert(my_table).values(id="some_id", data="inserted value")
>>> do_update_stmt = stmt.on_conflict_do_update(
- ... index_elements=['id'],
- ... set_=dict(data='updated value')
+ ... index_elements=["id"], set_=dict(data="updated value")
... )
>>> print(do_update_stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
.. sourcecode:: pycon+sql
>>> stmt = insert(my_table).values(
- ... id='some_id',
- ... data='inserted value',
- ... author='jlh'
+ ... id="some_id", data="inserted value", author="jlh"
... )
>>> do_update_stmt = stmt.on_conflict_do_update(
- ... index_elements=['id'],
- ... set_=dict(data='updated value', author=stmt.excluded.author)
+ ... index_elements=["id"],
+ ... set_=dict(data="updated value", author=stmt.excluded.author),
... )
>>> print(do_update_stmt)
{printsql}INSERT INTO my_table (id, data, author)
.. sourcecode:: pycon+sql
>>> stmt = insert(my_table).values(
- ... id='some_id',
- ... data='inserted value',
- ... author='jlh'
+ ... id="some_id", data="inserted value", author="jlh"
... )
>>> on_update_stmt = stmt.on_conflict_do_update(
- ... index_elements=['id'],
- ... set_=dict(data='updated value', author=stmt.excluded.author),
- ... where=(my_table.c.status == 2)
+ ... index_elements=["id"],
+ ... set_=dict(data="updated value", author=stmt.excluded.author),
+ ... where=(my_table.c.status == 2),
... )
>>> print(on_update_stmt)
{printsql}INSERT INTO my_table (id, data, author)
.. sourcecode:: pycon+sql
- >>> stmt = insert(my_table).values(id='some_id', data='inserted value')
- >>> stmt = stmt.on_conflict_do_nothing(index_elements=['id'])
+ >>> stmt = insert(my_table).values(id="some_id", data="inserted value")
+ >>> stmt = stmt.on_conflict_do_nothing(index_elements=["id"])
>>> print(stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT (id) DO NOTHING
.. sourcecode:: pycon+sql
- >>> stmt = insert(my_table).values(id='some_id', data='inserted value')
+ >>> stmt = insert(my_table).values(id="some_id", data="inserted value")
>>> stmt = stmt.on_conflict_do_nothing()
>>> print(stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
select(sometable.c.text.match("search string"))
-would emit to the database::
+would emit to the database:
+
+.. sourcecode:: sql
SELECT text @@ plainto_tsquery('search string') FROM table
from sqlalchemy import func
- select(
- sometable.c.text.bool_op("@@")(func.to_tsquery("search string"))
- )
+ select(sometable.c.text.bool_op("@@")(func.to_tsquery("search string")))
- Which would emit::
+ Which would emit:
+
+ .. sourcecode:: sql
SELECT text @@ to_tsquery('search string') FROM table
For example, the query::
- select(
- func.to_tsquery('cat').bool_op("@>")(func.to_tsquery('cat & rat'))
- )
+ select(func.to_tsquery("cat").bool_op("@>")(func.to_tsquery("cat & rat")))
would generate:
from sqlalchemy.dialects.postgresql import TSVECTOR
from sqlalchemy import select, cast
+
select(cast("some text", TSVECTOR))
-produces a statement equivalent to::
+produces a statement equivalent to:
+
+.. sourcecode:: sql
SELECT CAST('some text' AS TSVECTOR) AS anon_1
specified using the ``postgresql_regconfig`` parameter, such as::
select(mytable.c.id).where(
- mytable.c.title.match('somestring', postgresql_regconfig='english')
+ mytable.c.title.match("somestring", postgresql_regconfig="english")
)
-Which would emit::
+Which would emit:
+
+.. sourcecode:: sql
SELECT mytable.id FROM mytable
WHERE mytable.title @@ plainto_tsquery('english', 'somestring')
)
)
-produces a statement equivalent to::
+produces a statement equivalent to:
+
+.. sourcecode:: sql
SELECT mytable.id FROM mytable
WHERE to_tsvector('english', mytable.title) @@
syntaxes. It uses SQLAlchemy's hints mechanism::
# SELECT ... FROM ONLY ...
- result = table.select().with_hint(table, 'ONLY', 'postgresql')
+ result = table.select().with_hint(table, "ONLY", "postgresql")
print(result.fetchall())
# UPDATE ONLY ...
- table.update(values=dict(foo='bar')).with_hint('ONLY',
- dialect_name='postgresql')
+ table.update(values=dict(foo="bar")).with_hint(
+ "ONLY", dialect_name="postgresql"
+ )
# DELETE FROM ONLY ...
- table.delete().with_hint('ONLY', dialect_name='postgresql')
-
+ table.delete().with_hint("ONLY", dialect_name="postgresql")
.. _postgresql_indexes:
The ``postgresql_include`` option renders INCLUDE(colname) for the given
string names::
- Index("my_index", table.c.x, postgresql_include=['y'])
+ Index("my_index", table.c.x, postgresql_include=["y"])
would render the index as ``CREATE INDEX my_index ON table (x) INCLUDE (y)``
applied to a subset of rows. These can be specified on :class:`.Index`
using the ``postgresql_where`` keyword argument::
- Index('my_index', my_table.c.id, postgresql_where=my_table.c.value > 10)
+ Index("my_index", my_table.c.id, postgresql_where=my_table.c.value > 10)
.. _postgresql_operator_classes:
``postgresql_ops`` keyword argument::
Index(
- 'my_index', my_table.c.id, my_table.c.data,
- postgresql_ops={
- 'data': 'text_pattern_ops',
- 'id': 'int4_ops'
- })
+ "my_index",
+ my_table.c.id,
+ my_table.c.data,
+ postgresql_ops={"data": "text_pattern_ops", "id": "int4_ops"},
+ )
Note that the keys in the ``postgresql_ops`` dictionaries are the
"key" name of the :class:`_schema.Column`, i.e. the name used to access it from
that is identified in the dictionary by name, e.g.::
Index(
- 'my_index', my_table.c.id,
- func.lower(my_table.c.data).label('data_lower'),
- postgresql_ops={
- 'data_lower': 'text_pattern_ops',
- 'id': 'int4_ops'
- })
+ "my_index",
+ my_table.c.id,
+ func.lower(my_table.c.data).label("data_lower"),
+ postgresql_ops={"data_lower": "text_pattern_ops", "id": "int4_ops"},
+ )
Operator classes are also supported by the
:class:`_postgresql.ExcludeConstraint` construct using the
https://www.postgresql.org/docs/current/static/indexes-types.html). These can be
specified on :class:`.Index` using the ``postgresql_using`` keyword argument::
- Index('my_index', my_table.c.data, postgresql_using='gin')
+ Index("my_index", my_table.c.data, postgresql_using="gin")
The value passed to the keyword argument will be simply passed through to the
underlying CREATE INDEX command, so it *must* be a valid index type for your
parameters can be specified on :class:`.Index` using the ``postgresql_with``
keyword argument::
- Index('my_index', my_table.c.data, postgresql_with={"fillfactor": 50})
+ Index("my_index", my_table.c.data, postgresql_with={"fillfactor": 50})
PostgreSQL allows defining the tablespace in which to create the index.
The tablespace can be specified on :class:`.Index` using the
``postgresql_tablespace`` keyword argument::
- Index('my_index', my_table.c.data, postgresql_tablespace='my_tablespace')
+ Index("my_index", my_table.c.data, postgresql_tablespace="my_tablespace")
Note that the same option is available on :class:`_schema.Table` as well.
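These index options can be previewed by compiling the index DDL directly. A sketch using ``postgresql_using`` with illustrative names:

```python
from sqlalchemy import Column, Index, MetaData, Table, Text
from sqlalchemy.schema import CreateIndex
from sqlalchemy.dialects import postgresql

m = MetaData()
t = Table("some_table", m, Column("data", Text))

# the value of postgresql_using is passed through verbatim
ix = Index("my_index", t.c.data, postgresql_using="gin")

ddl = str(CreateIndex(ix).compile(dialect=postgresql.dialect()))
```

The compiled DDL reads along the lines of ``CREATE INDEX my_index ON some_table USING gin (data)``.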
The PostgreSQL index option CONCURRENTLY is supported by passing the
flag ``postgresql_concurrently`` to the :class:`.Index` construct::
- tbl = Table('testtbl', m, Column('data', Integer))
+ tbl = Table("testtbl", m, Column("data", Integer))
- idx1 = Index('test_idx1', tbl.c.data, postgresql_concurrently=True)
+ idx1 = Index("test_idx1", tbl.c.data, postgresql_concurrently=True)
The above index construct will render DDL for CREATE INDEX, assuming
-PostgreSQL 8.2 or higher is detected or for a connection-less dialect, as::
+PostgreSQL 8.2 or higher is detected or for a connection-less dialect, as:
+
+.. sourcecode:: sql
CREATE INDEX CONCURRENTLY test_idx1 ON testtbl (data)
For DROP INDEX, assuming PostgreSQL 9.2 or higher is detected or for
-a connection-less dialect, it will emit::
+a connection-less dialect, it will emit:
+
+.. sourcecode:: sql
DROP INDEX CONCURRENTLY test_idx1
construct, the DBAPI's "autocommit" mode must be used::
metadata = MetaData()
- table = Table(
- "foo", metadata,
- Column("id", String))
- index = Index(
- "foo_idx", table.c.id, postgresql_concurrently=True)
+ table = Table("foo", metadata, Column("id", String))
+ index = Index("foo_idx", table.c.id, postgresql_concurrently=True)
with engine.connect() as conn:
- with conn.execution_options(isolation_level='AUTOCOMMIT'):
+ with conn.execution_options(isolation_level="AUTOCOMMIT"):
table.create(conn)
.. seealso::
* ``ON COMMIT``::
- Table("some_table", metadata, ..., postgresql_on_commit='PRESERVE ROWS')
+ Table("some_table", metadata, ..., postgresql_on_commit="PRESERVE ROWS")
-* ``PARTITION BY``::
+*
+ ``PARTITION BY``::
- Table("some_table", metadata, ...,
- postgresql_partition_by='LIST (part_column)')
+ Table(
+ "some_table",
+ metadata,
+ ...,
+ postgresql_partition_by="LIST (part_column)",
+ )
- .. versionadded:: 1.2.6
+ .. versionadded:: 1.2.6
-* ``TABLESPACE``::
+*
+ ``TABLESPACE``::
- Table("some_table", metadata, ..., postgresql_tablespace='some_tablespace')
+ Table("some_table", metadata, ..., postgresql_tablespace="some_tablespace")
The above option is also available on the :class:`.Index` construct.
-* ``USING``::
+*
+ ``USING``::
- Table("some_table", metadata, ..., postgresql_using='heap')
+ Table("some_table", metadata, ..., postgresql_using="heap")
- .. versionadded:: 2.0.26
+ .. versionadded:: 2.0.26
* ``WITH OIDS``::
"user",
["user_id"],
["id"],
- postgresql_not_valid=True
+ postgresql_not_valid=True,
)
The keyword is ultimately accepted directly by the
CheckConstraint("some_field IS NOT NULL", postgresql_not_valid=True)
- ForeignKeyConstraint(["some_id"], ["some_table.some_id"], postgresql_not_valid=True)
+ ForeignKeyConstraint(
+ ["some_id"], ["some_table.some_id"], postgresql_not_valid=True
+ )
.. versionadded:: 1.4.32
.. sourcecode:: pycon+sql
>>> from sqlalchemy import select, func
- >>> stmt = select(func.json_each('{"a":"foo", "b":"bar"}').table_valued("key", "value"))
+ >>> stmt = select(
+ ... func.json_each('{"a":"foo", "b":"bar"}').table_valued("key", "value")
+ ... )
>>> print(stmt)
{printsql}SELECT anon_1.key, anon_1.value
FROM json_each(:json_each_1) AS anon_1
>>> from sqlalchemy import select, func, literal_column
>>> stmt = select(
... func.json_populate_record(
- ... literal_column("null::myrowtype"),
- ... '{"a":1,"b":2}'
+ ... literal_column("null::myrowtype"), '{"a":1,"b":2}'
... ).table_valued("a", "b", name="x")
... )
>>> print(stmt)
>>> from sqlalchemy import select, func, column, Integer, Text
>>> stmt = select(
- ... func.json_to_record('{"a":1,"b":[1,2,3],"c":"bar"}').table_valued(
- ... column("a", Integer), column("b", Text), column("d", Text),
- ... ).render_derived(name="x", with_types=True)
+ ... func.json_to_record('{"a":1,"b":[1,2,3],"c":"bar"}')
+ ... .table_valued(
+ ... column("a", Integer),
+ ... column("b", Text),
+ ... column("d", Text),
+ ... )
+ ... .render_derived(name="x", with_types=True)
... )
>>> print(stmt)
{printsql}SELECT x.a, x.b, x.d
>>> from sqlalchemy import select, func
>>> stmt = select(
- ... func.generate_series(4, 1, -1).
- ... table_valued("value", with_ordinality="ordinality").
- ... render_derived()
+ ... func.generate_series(4, 1, -1)
+ ... .table_valued("value", with_ordinality="ordinality")
+ ... .render_derived()
... )
>>> print(stmt)
{printsql}SELECT anon_1.value, anon_1.ordinality
.. sourcecode:: pycon+sql
>>> from sqlalchemy import select, func
- >>> stmt = select(func.json_array_elements('["one", "two"]').column_valued("x"))
+ >>> stmt = select(
+ ... func.json_array_elements('["one", "two"]').column_valued("x")
+ ... )
>>> print(stmt)
{printsql}SELECT x
FROM json_array_elements(:json_array_elements_1) AS x
>>> from sqlalchemy import table, column, ARRAY, Integer
>>> from sqlalchemy import select, func
- >>> t = table("t", column('value', ARRAY(Integer)))
+ >>> t = table("t", column("value", ARRAY(Integer)))
>>> stmt = select(func.unnest(t.c.value).column_valued("unnested_value"))
>>> print(stmt)
{printsql}SELECT unnested_value
>>> from sqlalchemy import table, column, func, tuple_
>>> t = table("t", column("id"), column("fk"))
- >>> stmt = t.select().where(
- ... tuple_(t.c.id, t.c.fk) > (1,2)
- ... ).where(
- ... func.ROW(t.c.id, t.c.fk) < func.ROW(3, 7)
+ >>> stmt = (
+ ... t.select()
+ ... .where(tuple_(t.c.id, t.c.fk) > (1, 2))
+ ... .where(func.ROW(t.c.id, t.c.fk) < func.ROW(3, 7))
... )
>>> print(stmt)
{printsql}SELECT t.id, t.fk
.. sourcecode:: pycon+sql
>>> from sqlalchemy import table, column, func, select
- >>> a = table( "a", column("id"), column("x"), column("y"))
+ >>> a = table("a", column("id"), column("x"), column("y"))
>>> stmt = select(func.row_to_json(a.table_valued()))
>>> print(stmt)
{printsql}SELECT row_to_json(a) AS row_to_json_1
E.g.::
from sqlalchemy.dialects.postgresql import aggregate_order_by
+
expr = func.array_agg(aggregate_order_by(table.c.a, table.c.b.desc()))
stmt = select(expr)
- would represent the expression::
+ would represent the expression:
+
+ .. sourcecode:: sql
SELECT array_agg(a ORDER BY b DESC) FROM table;
Similarly::
expr = func.string_agg(
- table.c.a,
- aggregate_order_by(literal_column("','"), table.c.a)
+ table.c.a, aggregate_order_by(literal_column("','"), table.c.a)
)
stmt = select(expr)
- Would represent::
+ Would represent:
+
+ .. sourcecode:: sql
SELECT string_agg(a, ',' ORDER BY a) FROM table;
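The rendering of :class:`.aggregate_order_by` can be confirmed by compiling a statement against the PostgreSQL dialect; a sketch with an illustrative table:

```python
from sqlalchemy import column, func, select, table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import aggregate_order_by

t = table("t", column("a"), column("b"))

# ORDER BY is embedded inside the aggregate function call
expr = func.array_agg(aggregate_order_by(t.c.a, t.c.b.desc()))

sql = str(select(expr).compile(dialect=postgresql.dialect()))
```

The compiled statement contains ``array_agg(t.a ORDER BY t.b DESC)``.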
E.g.::
const = ExcludeConstraint(
- (Column('period'), '&&'),
- (Column('group'), '='),
- where=(Column('group') != 'some group'),
- ops={'group': 'my_operator_class'}
+ (Column("period"), "&&"),
+ (Column("group"), "="),
+ where=(Column("group") != "some group"),
+ ops={"group": "my_operator_class"},
)
The constraint is normally embedded into the :class:`_schema.Table`
directly, or added later using :meth:`.append_constraint`::
some_table = Table(
- 'some_table', metadata,
- Column('id', Integer, primary_key=True),
- Column('period', TSRANGE()),
- Column('group', String)
+ "some_table",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ Column("period", TSRANGE()),
+ Column("group", String),
)
some_table.append_constraint(
ExcludeConstraint(
- (some_table.c.period, '&&'),
- (some_table.c.group, '='),
- where=some_table.c.group != 'some group',
- name='some_table_excl_const',
- ops={'group': 'my_operator_class'}
+ (some_table.c.period, "&&"),
+ (some_table.c.group, "="),
+ where=some_table.c.group != "some group",
+ name="some_table_excl_const",
+ ops={"group": "my_operator_class"},
)
)
The :class:`.HSTORE` type stores dictionaries containing strings, e.g.::
- data_table = Table('data_table', metadata,
- Column('id', Integer, primary_key=True),
- Column('data', HSTORE)
+ data_table = Table(
+ "data_table",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ Column("data", HSTORE),
)
with engine.connect() as conn:
conn.execute(
- data_table.insert(),
- data = {"key1": "value1", "key2": "value2"}
+ data_table.insert(), data={"key1": "value1", "key2": "value2"}
)
:class:`.HSTORE` provides for a wide range of operations, including:
* Index operations::
- data_table.c.data['some key'] == 'some value'
+ data_table.c.data["some key"] == "some value"
* Containment operations::
- data_table.c.data.has_key('some key')
+ data_table.c.data.has_key("some key")
- data_table.c.data.has_all(['one', 'two', 'three'])
+ data_table.c.data.has_all(["one", "two", "three"])
* Concatenation::
from sqlalchemy.ext.mutable import MutableDict
+
class MyClass(Base):
- __tablename__ = 'data_table'
+ __tablename__ = "data_table"
id = Column(Integer, primary_key=True)
data = Column(MutableDict.as_mutable(HSTORE))
+
my_object = session.query(MyClass).one()
# in-place mutation, requires Mutable extension
# in order for the ORM to detect
- my_object.data['some_key'] = 'some value'
+ my_object.data["some_key"] = "some value"
session.commit()
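The HSTORE index operator can be inspected without a database by compiling an expression; a sketch with an illustrative table:

```python
from sqlalchemy import column, select, table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import HSTORE

# hypothetical table with an HSTORE column
t = table("data_table", column("data", HSTORE))

# indexing an HSTORE column renders the -> operator
sql = str(select(t.c.data["some key"]).compile(dialect=postgresql.dialect()))
```

The compiled string contains ``data_table.data -> %(data_1)s``.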
:class:`.hstore` - render the PostgreSQL ``hstore()`` function.
- """
+ """ # noqa: E501
__visit_name__ = "HSTORE"
hashable = False
from sqlalchemy.dialects.postgresql import array, hstore
- select(hstore('key1', 'value1'))
+ select(hstore("key1", "value1"))
select(
hstore(
- array(['key1', 'key2', 'key3']),
- array(['value1', 'value2', 'value3'])
+ array(["key1", "key2", "key3"]),
+ array(["value1", "value2", "value3"]),
)
)
* Index operations (the ``->`` operator)::
- data_table.c.data['some key']
+ data_table.c.data["some key"]
data_table.c.data[5]
    * Index operations returning text (the ``->>`` operator)::

-    data_table.c.data['some key'].astext == 'some value'
+    data_table.c.data["some key"].astext == "some value"
Note that equivalent functionality is available via the
:attr:`.JSON.Comparator.as_string` accessor.
* Index operations with CAST
(equivalent to ``CAST(col ->> ['some key'] AS <type>)``)::
- data_table.c.data['some key'].astext.cast(Integer) == 5
+ data_table.c.data["some key"].astext.cast(Integer) == 5
Note that equivalent functionality is available via the
:attr:`.JSON.Comparator.as_integer` and similar accessors.
* Path index operations (the ``#>`` operator)::
- data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')]
+ data_table.c.data[("key_1", "key_2", 5, ..., "key_n")]
* Path index operations returning text (the ``#>>`` operator)::
- data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')].astext == 'some value'
+ data_table.c.data[
+ ("key_1", "key_2", 5, ..., "key_n")
+ ].astext == "some value"
    Index operations return an expression object whose type defaults to
    :class:`_types.JSON`,
using psycopg2, the DBAPI only allows serializers at the per-cursor
or per-connection level. E.g.::
- engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test",
- json_serializer=my_serialize_fn,
- json_deserializer=my_deserialize_fn
- )
+ engine = create_engine(
+ "postgresql+psycopg2://scott:tiger@localhost/test",
+ json_serializer=my_serialize_fn,
+ json_deserializer=my_deserialize_fn,
+ )
When using the psycopg2 dialect, the json_deserializer is registered
against the database using ``psycopg2.extras.register_default_json``.
be used to persist a NULL value::
from sqlalchemy import null
+
conn.execute(table.insert(), {"data": null()})
.. seealso::
E.g.::
- select(data_table.c.data['some key'].astext)
+ select(data_table.c.data["some key"].astext)
.. seealso::
The :class:`_postgresql.JSONB` type stores arbitrary JSONB format data,
e.g.::
- data_table = Table('data_table', metadata,
- Column('id', Integer, primary_key=True),
- Column('data', JSONB)
+ data_table = Table(
+ "data_table",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ Column("data", JSONB),
)
with engine.connect() as conn:
conn.execute(
- data_table.insert(),
- data = {"key1": "value1", "key2": "value2"}
+ data_table.insert(), data={"key1": "value1", "key2": "value2"}
)
The :class:`_postgresql.JSONB` type includes all operations provided by
:meth:`_schema.Table.drop`
methods are called::
- table = Table('sometable', metadata,
- Column('some_enum', ENUM('a', 'b', 'c', name='myenum'))
+ table = Table(
+ "sometable",
+ metadata,
+ Column("some_enum", ENUM("a", "b", "c", name="myenum")),
)
table.create(engine) # will emit CREATE ENUM and CREATE TABLE
:class:`_postgresql.ENUM` independently, and associate it with the
:class:`_schema.MetaData` object itself::
- my_enum = ENUM('a', 'b', 'c', name='myenum', metadata=metadata)
+ my_enum = ENUM("a", "b", "c", name="myenum", metadata=metadata)
- t1 = Table('sometable_one', metadata,
- Column('some_enum', myenum)
- )
+ t1 = Table("sometable_one", metadata, Column("some_enum", myenum))
- t2 = Table('sometable_two', metadata,
- Column('some_enum', myenum)
- )
+ t2 = Table("sometable_two", metadata, Column("some_enum", myenum))
When this pattern is used, care must still be taken at the level
of individual table creates. Emitting CREATE TABLE without also
specifying ``checkfirst=True`` will still cause issues::
- t1.create(engine) # will fail: no such type 'myenum'
+ t1.create(engine) # will fail: no such type 'myenum'
If we specify ``checkfirst=True``, the individual table-level create
operation will check for the ``ENUM`` and create if not exists::
A domain is essentially a data type with optional constraints
that restrict the allowed set of values. E.g.::
- PositiveInt = DOMAIN(
- "pos_int", Integer, check="VALUE > 0", not_null=True
- )
+ PositiveInt = DOMAIN("pos_int", Integer, check="VALUE > 0", not_null=True)
UsPostalCode = DOMAIN(
"us_postal_code",
Text,
- check="VALUE ~ '^\d{5}$' OR VALUE ~ '^\d{5}-\d{4}$'"
+ check="VALUE ~ '^\d{5}$' OR VALUE ~ '^\d{5}-\d{4}$'",
)
See the `PostgreSQL documentation`__ for additional details
.. versionadded:: 2.0
- """
+ """ # noqa: E501
DDLGenerator = DomainGenerator
DDLDropper = DomainDropper
the ``postgresql.conf`` file, which often defaults to ``SQL_ASCII``.
Typically, this can be changed to ``utf-8``, as a more useful default::
- #client_encoding = sql_ascii # actually, defaults to database
- # encoding
+ # client_encoding = sql_ascii # actually, defaults to database encoding
client_encoding = utf8
The ``client_encoding`` can be overridden for a session by executing the SQL:
-SET CLIENT_ENCODING TO 'utf8';
+.. sourcecode:: sql
+
+ SET CLIENT_ENCODING TO 'utf8';
SQLAlchemy will execute this SQL on all new connections based on the value
passed to :func:`_sa.create_engine` using the ``client_encoding`` parameter::
engine = create_engine(
- "postgresql+pg8000://user:pass@host/dbname", client_encoding='utf8')
+ "postgresql+pg8000://user:pass@host/dbname", client_encoding="utf8"
+ )
.. _pg8000_ssl:
:paramref:`_sa.create_engine.connect_args` dictionary::
import ssl
+
ssl_context = ssl.create_default_context()
engine = sa.create_engine(
"postgresql+pg8000://scott:tiger@192.168.0.199/test",
necessary to disable hostname checking::
import ssl
+
ssl_context = ssl.create_default_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE
automatically select the sync version, e.g.::
from sqlalchemy import create_engine
- sync_engine = create_engine("postgresql+psycopg://scott:tiger@localhost/test")
+
+ sync_engine = create_engine(
+ "postgresql+psycopg://scott:tiger@localhost/test"
+ )
* calling :func:`_asyncio.create_async_engine` with
``postgresql+psycopg://...`` will automatically select the async version,
e.g.::
from sqlalchemy.ext.asyncio import create_async_engine
- asyncio_engine = create_async_engine("postgresql+psycopg://scott:tiger@localhost/test")
+
+ asyncio_engine = create_async_engine(
+ "postgresql+psycopg://scott:tiger@localhost/test"
+ )
The asyncio version of the dialect may also be specified explicitly using the
``psycopg_async`` suffix, as::
from sqlalchemy.ext.asyncio import create_async_engine
- asyncio_engine = create_async_engine("postgresql+psycopg_async://scott:tiger@localhost/test")
+
+ asyncio_engine = create_async_engine(
+ "postgresql+psycopg_async://scott:tiger@localhost/test"
+ )
.. seealso::
"postgresql+psycopg2://scott:tiger@192.168.0.199:5432/test?sslmode=require"
)
-
Unix Domain Connections
------------------------
was built. This value can be overridden by passing a pathname to psycopg2,
using ``host`` as an additional keyword argument::
- create_engine("postgresql+psycopg2://user:password@/dbname?host=/var/lib/postgresql")
+ create_engine(
+ "postgresql+psycopg2://user:password@/dbname?host=/var/lib/postgresql"
+ )
.. warning:: The format accepted here allows for a hostname in the main URL
in addition to the "host" query string argument. **When using this URL
format, the initial host is silently ignored**. That is, this URL::
- engine = create_engine("postgresql+psycopg2://user:password@myhost1/dbname?host=myhost2")
+ engine = create_engine(
+ "postgresql+psycopg2://user:password@myhost1/dbname?host=myhost2"
+ )
Above, the hostname ``myhost1`` is **silently ignored and discarded.** The
host which is connected is the ``myhost2`` host.
For this form, the URL can be passed without any elements other than the
initial scheme::
- engine = create_engine('postgresql+psycopg2://')
+ engine = create_engine("postgresql+psycopg2://")
In the above form, a blank "dsn" string is passed to the ``psycopg2.connect()``
function which in turn represents an empty DSN passed to libpq.
engine = create_engine(
"postgresql+psycopg2://scott:tiger@host/dbname",
- executemany_mode='values_plus_batch')
-
+ executemany_mode="values_plus_batch",
+ )
Possible options for ``executemany_mode`` include:
engine = create_engine(
"postgresql+psycopg2://scott:tiger@host/dbname",
- executemany_mode='values_plus_batch',
- insertmanyvalues_page_size=5000, executemany_batch_page_size=500)
+ executemany_mode="values_plus_batch",
+ insertmanyvalues_page_size=5000,
+ executemany_batch_page_size=500,
+ )
.. seealso::
passed in the database URL; this parameter is consumed by the underlying
``libpq`` PostgreSQL client library::
- engine = create_engine("postgresql+psycopg2://user:pass@host/dbname?client_encoding=utf8")
+ engine = create_engine(
+ "postgresql+psycopg2://user:pass@host/dbname?client_encoding=utf8"
+ )
Alternatively, the above ``client_encoding`` value may be passed using
:paramref:`_sa.create_engine.connect_args` for programmatic establishment with
engine = create_engine(
"postgresql+psycopg2://user:pass@host/dbname",
- connect_args={'client_encoding': 'utf8'}
+ connect_args={"client_encoding": "utf8"},
)
* For all PostgreSQL versions, psycopg2 supports a client-side encoding
``client_encoding`` parameter passed to :func:`_sa.create_engine`::
engine = create_engine(
- "postgresql+psycopg2://user:pass@host/dbname",
- client_encoding="utf8"
+ "postgresql+psycopg2://user:pass@host/dbname", client_encoding="utf8"
)
.. tip:: The above ``client_encoding`` parameter admittedly is very similar
# postgresql.conf file
# client_encoding = sql_ascii # actually, defaults to database
- # encoding
+ # encoding
client_encoding = utf8
-
-
Transactions
------------
import logging
- logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)
+ logging.getLogger("sqlalchemy.dialects.postgresql").setLevel(logging.INFO)
Above, it is assumed that logging is configured externally. If this is not
the case, configuration such as ``logging.basicConfig()`` must be utilized::
import logging
- logging.basicConfig() # log messages to stdout
- logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)
+ logging.basicConfig() # log messages to stdout
+ logging.getLogger("sqlalchemy.dialects.postgresql").setLevel(logging.INFO)
.. seealso::
use of the hstore extension by setting ``use_native_hstore`` to ``False`` as
follows::
- engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test",
- use_native_hstore=False)
+ engine = create_engine(
+ "postgresql+psycopg2://scott:tiger@localhost/test",
+ use_native_hstore=False,
+ )
The ``HSTORE`` type is **still supported** when the
``psycopg2.extensions.register_hstore()`` extension is not used. It merely
from sqlalchemy import Dialect
from sqlalchemy import TypeDecorator
+
class NumericMoney(TypeDecorator):
impl = MONEY
- def process_result_value(
- self, value: Any, dialect: Dialect
- ) -> None:
+            def process_result_value(self, value: Any, dialect: Dialect) -> Any:
if value is not None:
# adjust this for the currency and numeric
m = re.match(r"\$([\d.]+)", value)
from sqlalchemy import cast
from sqlalchemy import TypeDecorator
+
class NumericMoney(TypeDecorator):
impl = MONEY
.. versionadded:: 1.2
- """
+ """ # noqa: E501
__visit_name__ = "MONEY"
:func:`_asyncio.create_async_engine` engine creation function::
from sqlalchemy.ext.asyncio import create_async_engine
+
engine = create_async_engine("sqlite+aiosqlite:///filename")
The URL passes through all arguments to the ``pysqlite`` driver, so all
engine = create_async_engine("sqlite+aiosqlite:///myfile.db")
+
@event.listens_for(engine.sync_engine, "connect")
def do_connect(dbapi_connection, connection_record):
# disable aiosqlite's emitting of the BEGIN statement entirely.
# also stops it from emitting COMMIT before any DDL.
dbapi_connection.isolation_level = None
+
@event.listens_for(engine.sync_engine, "begin")
def do_begin(conn):
# emit our own BEGIN
# mypy: ignore-errors
-r"""
+r'''
.. dialect:: sqlite
:name: SQLite
:normal_support: 3.12+
when rendering DDL, add the flag ``sqlite_autoincrement=True`` to the Table
construct::
- Table('sometable', metadata,
- Column('id', Integer, primary_key=True),
- sqlite_autoincrement=True)
+ Table(
+ "sometable",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ sqlite_autoincrement=True,
+ )
-Allowing autoincrement behavior SQLAlchemy types other than Integer/INTEGER
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Allowing autoincrement behavior with SQLAlchemy types other than Integer/INTEGER
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
only using :meth:`.TypeEngine.with_variant`::
table = Table(
- "my_table", metadata,
- Column("id", BigInteger().with_variant(Integer, "sqlite"), primary_key=True)
+ "my_table",
+ metadata,
+ Column(
+ "id",
+ BigInteger().with_variant(Integer, "sqlite"),
+ primary_key=True,
+ ),
)
Another is to use a subclass of :class:`.BigInteger` that overrides its DDL
from sqlalchemy import BigInteger
from sqlalchemy.ext.compiler import compiles
+
class SLBigInteger(BigInteger):
pass
- @compiles(SLBigInteger, 'sqlite')
+
+ @compiles(SLBigInteger, "sqlite")
def bi_c(element, compiler, **kw):
return "INTEGER"
+
@compiles(SLBigInteger)
def bi_c(element, compiler, **kw):
return compiler.visit_BIGINT(element, **kw)
table = Table(
- "my_table", metadata,
- Column("id", SLBigInteger(), primary_key=True)
+ "my_table", metadata, Column("id", SLBigInteger(), primary_key=True)
)
.. seealso::
# INSERT..RETURNING
result = connection.execute(
- table.insert().
- values(name='foo').
- returning(table.c.col1, table.c.col2)
+ table.insert().values(name="foo").returning(table.c.col1, table.c.col2)
)
print(result.all())
# UPDATE..RETURNING
result = connection.execute(
- table.update().
- where(table.c.name=='foo').
- values(name='bar').
- returning(table.c.col1, table.c.col2)
+ table.update()
+ .where(table.c.name == "foo")
+ .values(name="bar")
+ .returning(table.c.col1, table.c.col2)
)
print(result.all())
# DELETE..RETURNING
result = connection.execute(
- table.delete().
- where(table.c.name=='foo').
- returning(table.c.col1, table.c.col2)
+ table.delete()
+ .where(table.c.name == "foo")
+ .returning(table.c.col1, table.c.col2)
)
print(result.all())
from sqlalchemy.engine import Engine
from sqlalchemy import event
+
@event.listens_for(Engine, "connect")
def set_sqlite_pragma(dbapi_connection, connection_record):
cursor = dbapi_connection.cursor()
that specifies the IGNORE algorithm::
some_table = Table(
- 'some_table', metadata,
- Column('id', Integer, primary_key=True),
- Column('data', Integer),
- UniqueConstraint('id', 'data', sqlite_on_conflict='IGNORE')
+ "some_table",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ Column("data", Integer),
+ UniqueConstraint("id", "data", sqlite_on_conflict="IGNORE"),
)
-The above renders CREATE TABLE DDL as::
+The above renders CREATE TABLE DDL as:
+
+.. sourcecode:: sql
CREATE TABLE some_table (
id INTEGER NOT NULL,
UNIQUE constraint in the DDL::
some_table = Table(
- 'some_table', metadata,
- Column('id', Integer, primary_key=True),
- Column('data', Integer, unique=True,
- sqlite_on_conflict_unique='IGNORE')
+ "some_table",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ Column(
+ "data", Integer, unique=True, sqlite_on_conflict_unique="IGNORE"
+ ),
)
-rendering::
+rendering:
+
+.. sourcecode:: sql
CREATE TABLE some_table (
id INTEGER NOT NULL,
``sqlite_on_conflict_not_null`` is used::
some_table = Table(
- 'some_table', metadata,
- Column('id', Integer, primary_key=True),
- Column('data', Integer, nullable=False,
- sqlite_on_conflict_not_null='FAIL')
+ "some_table",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ Column(
+ "data", Integer, nullable=False, sqlite_on_conflict_not_null="FAIL"
+ ),
)
-this renders the column inline ON CONFLICT phrase::
+this renders the column inline ON CONFLICT phrase:
+
+.. sourcecode:: sql
CREATE TABLE some_table (
id INTEGER NOT NULL,
Similarly, for an inline primary key, use ``sqlite_on_conflict_primary_key``::
some_table = Table(
- 'some_table', metadata,
- Column('id', Integer, primary_key=True,
- sqlite_on_conflict_primary_key='FAIL')
+ "some_table",
+ metadata,
+ Column(
+ "id",
+ Integer,
+ primary_key=True,
+ sqlite_on_conflict_primary_key="FAIL",
+ ),
)
SQLAlchemy renders the PRIMARY KEY constraint separately, so the conflict
-resolution algorithm is applied to the constraint itself::
+resolution algorithm is applied to the constraint itself:
+
+.. sourcecode:: sql
CREATE TABLE some_table (
id INTEGER NOT NULL,
.. _sqlite_on_conflict_insert:
INSERT...ON CONFLICT (Upsert)
------------------------------------
+-----------------------------
.. seealso:: This section describes the :term:`DML` version of "ON CONFLICT" for
SQLite, which occurs within an INSERT statement. For "ON CONFLICT" as
>>> from sqlalchemy.dialects.sqlite import insert
>>> insert_stmt = insert(my_table).values(
- ... id='some_existing_id',
- ... data='inserted value')
+ ... id="some_existing_id", data="inserted value"
+ ... )
>>> do_update_stmt = insert_stmt.on_conflict_do_update(
- ... index_elements=['id'],
- ... set_=dict(data='updated value')
+ ... index_elements=["id"], set_=dict(data="updated value")
... )
>>> print(do_update_stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (?, ?)
ON CONFLICT (id) DO UPDATE SET data = ?{stop}
- >>> do_nothing_stmt = insert_stmt.on_conflict_do_nothing(
- ... index_elements=['id']
- ... )
+ >>> do_nothing_stmt = insert_stmt.on_conflict_do_nothing(index_elements=["id"])
>>> print(do_nothing_stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (?, ?)
.. sourcecode:: pycon+sql
- >>> stmt = insert(my_table).values(user_email='a@b.com', data='inserted data')
+ >>> stmt = insert(my_table).values(user_email="a@b.com", data="inserted data")
>>> do_update_stmt = stmt.on_conflict_do_update(
... index_elements=[my_table.c.user_email],
- ... index_where=my_table.c.user_email.like('%@gmail.com'),
- ... set_=dict(data=stmt.excluded.data)
- ... )
+ ... index_where=my_table.c.user_email.like("%@gmail.com"),
+ ... set_=dict(data=stmt.excluded.data),
+ ... )
>>> print(do_update_stmt)
{printsql}INSERT INTO my_table (data, user_email) VALUES (?, ?)
.. sourcecode:: pycon+sql
- >>> stmt = insert(my_table).values(id='some_id', data='inserted value')
+ >>> stmt = insert(my_table).values(id="some_id", data="inserted value")
>>> do_update_stmt = stmt.on_conflict_do_update(
- ... index_elements=['id'],
- ... set_=dict(data='updated value')
+ ... index_elements=["id"], set_=dict(data="updated value")
... )
>>> print(do_update_stmt)
.. sourcecode:: pycon+sql
>>> stmt = insert(my_table).values(
- ... id='some_id',
- ... data='inserted value',
- ... author='jlh'
+ ... id="some_id", data="inserted value", author="jlh"
... )
>>> do_update_stmt = stmt.on_conflict_do_update(
- ... index_elements=['id'],
- ... set_=dict(data='updated value', author=stmt.excluded.author)
+ ... index_elements=["id"],
+ ... set_=dict(data="updated value", author=stmt.excluded.author),
... )
>>> print(do_update_stmt)
.. sourcecode:: pycon+sql
>>> stmt = insert(my_table).values(
- ... id='some_id',
- ... data='inserted value',
- ... author='jlh'
+ ... id="some_id", data="inserted value", author="jlh"
... )
>>> on_update_stmt = stmt.on_conflict_do_update(
- ... index_elements=['id'],
- ... set_=dict(data='updated value', author=stmt.excluded.author),
- ... where=(my_table.c.status == 2)
+ ... index_elements=["id"],
+ ... set_=dict(data="updated value", author=stmt.excluded.author),
+ ... where=(my_table.c.status == 2),
... )
>>> print(on_update_stmt)
{printsql}INSERT INTO my_table (id, data, author) VALUES (?, ?, ?)
.. sourcecode:: pycon+sql
- >>> stmt = insert(my_table).values(id='some_id', data='inserted value')
- >>> stmt = stmt.on_conflict_do_nothing(index_elements=['id'])
+ >>> stmt = insert(my_table).values(id="some_id", data="inserted value")
+ >>> stmt = stmt.on_conflict_do_nothing(index_elements=["id"])
>>> print(stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (?, ?) ON CONFLICT (id) DO NOTHING
.. sourcecode:: pycon+sql
- >>> stmt = insert(my_table).values(id='some_id', data='inserted value')
+ >>> stmt = insert(my_table).values(id="some_id", data="inserted value")
>>> stmt = stmt.on_conflict_do_nothing()
>>> print(stmt)
{printsql}INSERT INTO my_table (id, data) VALUES (?, ?) ON CONFLICT DO NOTHING
A partial index, e.g. one which uses a WHERE clause, can be specified
with the DDL system using the argument ``sqlite_where``::
- tbl = Table('testtbl', m, Column('data', Integer))
- idx = Index('test_idx1', tbl.c.data,
- sqlite_where=and_(tbl.c.data > 5, tbl.c.data < 10))
+ tbl = Table("testtbl", m, Column("data", Integer))
+ idx = Index(
+ "test_idx1",
+ tbl.c.data,
+ sqlite_where=and_(tbl.c.data > 5, tbl.c.data < 10),
+ )
-The index will be rendered at create time as::
+
+The index will be rendered at create time as:
+
+.. sourcecode:: sql
CREATE INDEX test_idx1 ON testtbl (data)
WHERE data > 5 AND data < 10
import sqlite3
- assert sqlite3.sqlite_version_info < (3, 10, 0), "bug is fixed in this version"
+ assert sqlite3.sqlite_version_info < (
+ 3,
+ 10,
+ 0,
+ ), "bug is fixed in this version"
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("insert into x (a, b) values (2, 2)")
cursor.execute("select x.a, x.b from x")
- assert [c[0] for c in cursor.description] == ['a', 'b']
+ assert [c[0] for c in cursor.description] == ["a", "b"]
- cursor.execute('''
+ cursor.execute(
+ """
select x.a, x.b from x where a=1
union
select x.a, x.b from x where a=2
- ''')
- assert [c[0] for c in cursor.description] == ['a', 'b'], \
- [c[0] for c in cursor.description]
+ """
+ )
+ assert [c[0] for c in cursor.description] == ["a", "b"], [
+ c[0] for c in cursor.description
+ ]
-The second assertion fails::
+The second assertion fails:
+
+.. sourcecode:: text
Traceback (most recent call last):
File "test.py", line 19, in <module>
result = conn.exec_driver_sql("select x.a, x.b from x")
assert result.keys() == ["a", "b"]
- result = conn.exec_driver_sql('''
+ result = conn.exec_driver_sql(
+ """
select x.a, x.b from x where a=1
union
select x.a, x.b from x where a=2
- ''')
+ """
+ )
assert result.keys() == ["a", "b"]
Note that above, even though SQLAlchemy filters out the dots, *both
the ``sqlite_raw_colnames`` execution option may be provided, either on a
per-:class:`_engine.Connection` basis::
- result = conn.execution_options(sqlite_raw_colnames=True).exec_driver_sql('''
+ result = conn.execution_options(sqlite_raw_colnames=True).exec_driver_sql(
+ """
select x.a, x.b from x where a=1
union
select x.a, x.b from x where a=2
- ''')
+ """
+ )
assert result.keys() == ["x.a", "x.b"]
or on a per-:class:`_engine.Engine` basis::
- engine = create_engine("sqlite://", execution_options={"sqlite_raw_colnames": True})
+ engine = create_engine(
+ "sqlite://", execution_options={"sqlite_raw_colnames": True}
+ )
When using the per-:class:`_engine.Engine` execution option, note that
**Core and ORM queries that use UNION may not function properly**.
`SQLite Internal Schema Objects <https://www.sqlite.org/fileformat2.html#intschema>`_ - in the SQLite
documentation.
-""" # noqa
+''' # noqa
from __future__ import annotations
import datetime
"%(year)04d-%(month)02d-%(day)02d %(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d"
- e.g.::
+ e.g.:
+
+ .. sourcecode:: text
2021-03-15 12:05:57.105542
import re
from sqlalchemy.dialects.sqlite import DATETIME
- dt = DATETIME(storage_format="%(year)04d/%(month)02d/%(day)02d "
- "%(hour)02d:%(minute)02d:%(second)02d",
- regexp=r"(\d+)/(\d+)/(\d+) (\d+)-(\d+)-(\d+)"
+ dt = DATETIME(
+ storage_format=(
+ "%(year)04d/%(month)02d/%(day)02d %(hour)02d:%(minute)02d:%(second)02d"
+ ),
+ regexp=r"(\d+)/(\d+)/(\d+) (\d+)-(\d+)-(\d+)",
)
:param storage_format: format string which will be applied to the dict
"%(year)04d-%(month)02d-%(day)02d"
- e.g.::
+ e.g.:
+
+ .. sourcecode:: text
2011-03-15
from sqlalchemy.dialects.sqlite import DATE
d = DATE(
- storage_format="%(month)02d/%(day)02d/%(year)04d",
- regexp=re.compile("(?P<month>\d+)/(?P<day>\d+)/(?P<year>\d+)")
- )
+ storage_format="%(month)02d/%(day)02d/%(year)04d",
+            regexp=re.compile(r"(?P<month>\d+)/(?P<day>\d+)/(?P<year>\d+)"),
+ )
:param storage_format: format string which will be applied to the
dict with keys year, month, and day.
"%(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d"
- e.g.::
+ e.g.:
+
+ .. sourcecode:: text
12:05:57.10558
import re
from sqlalchemy.dialects.sqlite import TIME
- t = TIME(storage_format="%(hour)02d-%(minute)02d-"
- "%(second)02d-%(microsecond)06d",
- regexp=re.compile("(\d+)-(\d+)-(\d+)-(?:-(\d+))?")
+ t = TIME(
+ storage_format="%(hour)02d-%(minute)02d-%(second)02d-%(microsecond)06d",
+            regexp=re.compile(r"(\d+)-(\d+)-(\d+)(?:-(\d+))?"),
)
:param storage_format: format string which will be applied to the dict
e = create_engine(
"sqlite+pysqlcipher://:password@/dbname.db",
- module=sqlcipher_compatible_driver
+ module=sqlcipher_compatible_driver,
)
These drivers make use of the SQLCipher engine. This system essentially
of the :mod:`~sqlalchemy.dialects.sqlite.pysqlite` driver, except that the
"password" field is now accepted, which should contain a passphrase::
- e = create_engine('sqlite+pysqlcipher://:testing@/foo.db')
+ e = create_engine("sqlite+pysqlcipher://:testing@/foo.db")
For an absolute file path, two leading slashes should be used for the
database name::
- e = create_engine('sqlite+pysqlcipher://:testing@//path/to/foo.db')
+ e = create_engine("sqlite+pysqlcipher://:testing@//path/to/foo.db")
A selection of additional encryption-related pragmas supported by SQLCipher
as documented at https://www.zetetic.net/sqlcipher/sqlcipher-api/ can be passed
-new connection. Currently, ``cipher``, ``kdf_iter``
+new connection. Currently, ``cipher``, ``kdf_iter``,
``cipher_page_size`` and ``cipher_use_hmac`` are supported::
- e = create_engine('sqlite+pysqlcipher://:testing@/foo.db?cipher=aes-256-cfb&kdf_iter=64000')
+ e = create_engine(
+ "sqlite+pysqlcipher://:testing@/foo.db?cipher=aes-256-cfb&kdf_iter=64000"
+ )
.. warning:: Previous versions of sqlalchemy did not take into consideration
the encryption-related pragmas passed in the url string, that were silently
---------------
The file specification for the SQLite database is taken as the "database"
-portion of the URL. Note that the format of a SQLAlchemy url is::
+portion of the URL. Note that the format of a SQLAlchemy url is:
+
+.. sourcecode:: text
driver://user:pass@host/database
looks like::
# relative path
- e = create_engine('sqlite:///path/to/database.db')
+ e = create_engine("sqlite:///path/to/database.db")
An absolute path, which is denoted by starting with a slash, means you
need **four** slashes::
# absolute path
- e = create_engine('sqlite:////path/to/database.db')
+ e = create_engine("sqlite:////path/to/database.db")
To use a Windows path, regular drive specifications and backslashes can be
used. Double backslashes are probably needed::
# absolute path on Windows
- e = create_engine('sqlite:///C:\\path\\to\\database.db')
+ e = create_engine("sqlite:///C:\\path\\to\\database.db")
To use sqlite ``:memory:`` database specify it as the filename using
``sqlite:///:memory:``. It's also the default if no filepath is
present, specifying only ``sqlite://`` and nothing else::
# in-memory database (note three slashes)
- e = create_engine('sqlite:///:memory:')
+ e = create_engine("sqlite:///:memory:")
# also in-memory database
- e2 = create_engine('sqlite://')
+ e2 = create_engine("sqlite://")
.. _pysqlite_uri_connections:
sqlite3.connect(
"file:path/to/database?mode=ro&nolock=1",
- check_same_thread=True, timeout=10, uri=True
+ check_same_thread=True,
+ timeout=10,
+ uri=True,
)
Regarding future parameters added to either the Python or native drivers. new
def regexp(a, b):
return re.search(a, b) is not None
+
sqlite_connection.create_function(
- "regexp", 2, regexp,
+ "regexp",
+ 2,
+ regexp,
)
There is currently no support for regular expression flags as a separate
nor should be necessary, for use with SQLAlchemy, usage of PARSE_DECLTYPES
can be forced if one configures "native_datetime=True" on create_engine()::
- engine = create_engine('sqlite://',
- connect_args={'detect_types':
- sqlite3.PARSE_DECLTYPES|sqlite3.PARSE_COLNAMES},
- native_datetime=True
+ engine = create_engine(
+ "sqlite://",
+ connect_args={
+ "detect_types": sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES
+ },
+ native_datetime=True,
)
With this flag enabled, the DATE and TIMESTAMP types (but note - not the
parameter::
from sqlalchemy import NullPool
+
engine = create_engine("sqlite:///myfile.db", poolclass=NullPool)
It's been observed that the :class:`.NullPool` implementation incurs an
as ``False``::
from sqlalchemy.pool import StaticPool
- engine = create_engine('sqlite://',
- connect_args={'check_same_thread':False},
- poolclass=StaticPool)
+
+ engine = create_engine(
+ "sqlite://",
+ connect_args={"check_same_thread": False},
+ poolclass=StaticPool,
+ )
Note that using a ``:memory:`` database in multiple threads requires a recent
version of SQLite.
# maintain the same connection per thread
from sqlalchemy.pool import SingletonThreadPool
- engine = create_engine('sqlite:///mydb.db',
- poolclass=SingletonThreadPool)
+
+ engine = create_engine("sqlite:///mydb.db", poolclass=SingletonThreadPool)
# maintain the same connection across all threads
from sqlalchemy.pool import StaticPool
- engine = create_engine('sqlite:///mydb.db',
- poolclass=StaticPool)
+
+ engine = create_engine("sqlite:///mydb.db", poolclass=StaticPool)
Note that :class:`.SingletonThreadPool` should be configured for the number
of threads that are to be used; beyond that number, connections will be
from sqlalchemy import String
from sqlalchemy import TypeDecorator
+
class MixedBinary(TypeDecorator):
impl = String
cache_ok = True
def process_result_value(self, value, dialect):
if isinstance(value, str):
- value = bytes(value, 'utf-8')
+ value = bytes(value, "utf-8")
elif value is not None:
value = bytes(value)
engine = create_engine("sqlite:///myfile.db")
+
@event.listens_for(engine, "connect")
def do_connect(dbapi_connection, connection_record):
# disable pysqlite's emitting of the BEGIN statement entirely.
# also stops it from emitting COMMIT before any DDL.
dbapi_connection.isolation_level = None
+
@event.listens_for(engine, "begin")
def do_begin(conn):
# emit our own BEGIN
with engine.connect() as conn:
print(conn.scalar(text("SELECT UDF()")))
-
""" # noqa
import math
with conn.begin() as trans:
conn.execute(table.insert(), {"username": "sandy"})
-
The returned object is an instance of :class:`_engine.RootTransaction`.
This object represents the "scope" of the transaction,
which completes when either the :meth:`_engine.Transaction.rollback`
trans.rollback() # rollback to savepoint
# outer transaction continues
- connection.execute( ... )
+ connection.execute(...)
If :meth:`_engine.Connection.begin_nested` is called without first
calling :meth:`_engine.Connection.begin` or
with engine.connect() as connection: # begin() wasn't called
- with connection.begin_nested(): will auto-"begin()" first
- connection.execute( ... )
+ with connection.begin_nested(): # will auto-"begin()" first
+ connection.execute(...)
# savepoint is released
- connection.execute( ... )
+ connection.execute(...)
# explicitly commit outer transaction
connection.commit()
conn.exec_driver_sql(
"INSERT INTO table (id, value) VALUES (%(id)s, %(value)s)",
- [{"id":1, "value":"v1"}, {"id":2, "value":"v2"}]
+ [{"id": 1, "value": "v1"}, {"id": 2, "value": "v2"}],
)
Single dictionary::
conn.exec_driver_sql(
"INSERT INTO table (id, value) VALUES (%(id)s, %(value)s)",
- dict(id=1, value="v1")
+ dict(id=1, value="v1"),
)
Single tuple::
conn.exec_driver_sql(
- "INSERT INTO table (id, value) VALUES (?, ?)",
- (1, 'v1')
+ "INSERT INTO table (id, value) VALUES (?, ?)", (1, "v1")
)
.. note:: The :meth:`_engine.Connection.exec_driver_sql` method does
:class:`_engine.Connection`::
from sqlalchemy import create_engine
+
engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
connection = engine.connect()
trans = connection.begin()
shards = {"default": "base", "shard_1": "db1", "shard_2": "db2"}
+
@event.listens_for(Engine, "before_cursor_execute")
- def _switch_shard(conn, cursor, stmt,
- params, context, executemany):
- shard_id = conn.get_execution_options().get('shard_id', "default")
+ def _switch_shard(conn, cursor, stmt, params, context, executemany):
+ shard_id = conn.get_execution_options().get("shard_id", "default")
current_shard = conn.info.get("current_shard", None)
if current_shard != shard_id:
E.g.::
with engine.begin() as conn:
- conn.execute(
- text("insert into table (x, y, z) values (1, 2, 3)")
- )
+ conn.execute(text("insert into table (x, y, z) values (1, 2, 3)"))
conn.execute(text("my_special_procedure(5)"))
Upon successful operation, the :class:`.Transaction`
:meth:`_engine.Connection.begin` - start a :class:`.Transaction`
for a particular :class:`_engine.Connection`.
- """
+ """ # noqa: E501
with self.connect() as conn:
with conn.begin():
yield conn
and its underlying :class:`.Dialect` and :class:`_pool.Pool`
constructs::
- engine = create_engine("mysql+mysqldb://scott:tiger@hostname/dbname",
- pool_recycle=3600, echo=True)
+ engine = create_engine(
+ "mysql+mysqldb://scott:tiger@hostname/dbname",
+ pool_recycle=3600,
+ echo=True,
+ )
The string form of the URL is
``dialect[+driver]://user:password@host/dbname[?key=value..]``, where
result = conn.execution_options(
stream_results=True, max_row_buffer=50
- ).execute(text("select * from table"))
+ ).execute(text("select * from table"))
.. versionadded:: 1.4 ``max_row_buffer`` may now exceed 1000 rows.
r1 = connection.execute(
users.insert().returning(
- users.c.user_name,
- users.c.user_id,
- sort_by_parameter_order=True
+ users.c.user_name, users.c.user_id, sort_by_parameter_order=True
),
- user_values
+ user_values,
)
r2 = connection.execute(
addresses.c.address_id,
addresses.c.address,
addresses.c.user_id,
- sort_by_parameter_order=True
+ sort_by_parameter_order=True,
),
- address_values
+ address_values,
)
rows = r1.splice_horizontally(r2).all()
- assert (
- rows ==
- [
- ("john", 1, 1, "foo@bar.com", 1),
- ("jack", 2, 2, "bar@bat.com", 2),
- ]
- )
+ assert rows == [
+ ("john", 1, 1, "foo@bar.com", 1),
+ ("jack", 2, 2, "bar@bat.com", 2),
+ ]
.. versionadded:: 2.0
:meth:`.CursorResult.splice_vertically`
- """
+ """ # noqa: E501
clone = self._generate()
total_rows = [
from sqlalchemy import event, create_engine
- def before_cursor_execute(conn, cursor, statement, parameters, context,
- executemany):
+
+ def before_cursor_execute(
+ conn, cursor, statement, parameters, context, executemany
+ ):
log.info("Received statement: %s", statement)
- engine = create_engine('postgresql+psycopg2://scott:tiger@localhost/test')
+
+ engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
event.listen(engine, "before_cursor_execute", before_cursor_execute)
or with a specific :class:`_engine.Connection`::
with engine.begin() as conn:
- @event.listens_for(conn, 'before_cursor_execute')
- def before_cursor_execute(conn, cursor, statement, parameters,
- context, executemany):
+
+ @event.listens_for(conn, "before_cursor_execute")
+ def before_cursor_execute(
+ conn, cursor, statement, parameters, context, executemany
+ ):
log.info("Received statement: %s", statement)
When the methods are called with a `statement` parameter, such as in
from sqlalchemy.engine import Engine
from sqlalchemy import event
+
@event.listens_for(Engine, "before_cursor_execute", retval=True)
- def comment_sql_calls(conn, cursor, statement, parameters,
- context, executemany):
+ def comment_sql_calls(
+ conn, cursor, statement, parameters, context, executemany
+ ):
statement = statement + " -- some comment"
return statement, parameters
returned as a two-tuple in this case::
@event.listens_for(Engine, "before_cursor_execute", retval=True)
- def before_cursor_execute(conn, cursor, statement,
- parameters, context, executemany):
+ def before_cursor_execute(
+ conn, cursor, statement, parameters, context, executemany
+ ):
# do something with statement, parameters
return statement, parameters
@event.listens_for(Engine, "handle_error")
def handle_exception(context):
- if isinstance(context.original_exception,
- psycopg2.OperationalError) and \
- "failed" in str(context.original_exception):
+ if isinstance(
+ context.original_exception, psycopg2.OperationalError
+ ) and "failed" in str(context.original_exception):
raise MySpecialException("failed operation")
.. warning:: Because the
@event.listens_for(Engine, "handle_error", retval=True)
def handle_exception(context):
- if context.chained_exception is not None and \
- "special" in context.chained_exception.message:
- return MySpecialException("failed",
- cause=context.chained_exception)
+ if (
+ context.chained_exception is not None
+ and "special" in context.chained_exception.message
+ ):
+ return MySpecialException(
+ "failed", cause=context.chained_exception
+ )
Handlers that return ``None`` may be used within the chain; when
a handler returns ``None``, the previous exception instance,
e = create_engine("postgresql+psycopg2://user@host/dbname")
- @event.listens_for(e, 'do_connect')
+
+ @event.listens_for(e, "do_connect")
def receive_do_connect(dialect, conn_rec, cargs, cparams):
cparams["password"] = "some_password"
e = create_engine("postgresql+psycopg2://user@host/dbname")
- @event.listens_for(e, 'do_connect')
+
+ @event.listens_for(e, "do_connect")
def receive_do_connect(dialect, conn_rec, cargs, cparams):
return psycopg2.connect(*cargs, **cparams)
To implement, establish as a series of tuples, as in::
construct_arguments = [
- (schema.Index, {
- "using": False,
- "where": None,
- "ops": None
- })
+ (schema.Index, {"using": False, "where": None, "ops": None}),
]
If the above construct is established on the PostgreSQL dialect,
from sqlalchemy.engine import CreateEnginePlugin
from sqlalchemy import event
+
class LogCursorEventsPlugin(CreateEnginePlugin):
def __init__(self, url, kwargs):
# consume the parameter "log_cursor_logging_name" from the
# URL query
- logging_name = url.query.get("log_cursor_logging_name", "log_cursor")
+ logging_name = url.query.get(
+ "log_cursor_logging_name", "log_cursor"
+ )
self.log = logging.getLogger(logging_name)
"attach an event listener after the new Engine is constructed"
event.listen(engine, "before_cursor_execute", self._log_event)
-
def _log_event(
self,
conn,
statement,
parameters,
context,
- executemany):
+ executemany,
+ ):
self.log.info("Plugin logged cursor event: %s", statement)
-
-
Plugins are registered using entry points in a similar way as that
of dialects::
- entry_points={
- 'sqlalchemy.plugins': [
- 'log_cursor_plugin = myapp.plugins:LogCursorEventsPlugin'
+ entry_points = {
+ "sqlalchemy.plugins": [
+ "log_cursor_plugin = myapp.plugins:LogCursorEventsPlugin"
]
+ }
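The ``"name = module:attr"`` strings used in ``entry_points`` follow a fixed shape. As a stdlib-only illustration (setuptools and ``importlib.metadata`` perform the real resolution against installed packages), the parts can be split apart like this:

```python
# A stdlib-only sketch; real resolution of installed entry points is done
# by setuptools / importlib.metadata.  The spec string mirrors the
# example above.
spec = "log_cursor_plugin = myapp.plugins:LogCursorEventsPlugin"
name, _, target = (part.strip() for part in spec.partition("="))
module, _, attr = target.partition(":")
print(name, module, attr)
# -> log_cursor_plugin myapp.plugins LogCursorEventsPlugin
```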
A plugin that uses the above names would be invoked from a database
URL as in::
in the URL::
engine = create_engine(
- "mysql+pymysql://scott:tiger@localhost/test?"
- "plugin=plugin_one&plugin=plugin_twp&plugin=plugin_three")
+ "mysql+pymysql://scott:tiger@localhost/test?"
+ "plugin=plugin_one&plugin=plugin_two&plugin=plugin_three"
+ )
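Because the ``plugin`` query parameter may be repeated, a plain ``urllib.parse`` sketch (independent of SQLAlchemy's own URL handling) shows how such repeats group into a list of values:

```python
# Stdlib sketch, independent of SQLAlchemy's URL handling: parse_qs
# collects a repeated query key into a list of values.
from urllib.parse import parse_qs, urlsplit

url = (
    "mysql+pymysql://scott:tiger@localhost/test?"
    "plugin=plugin_one&plugin=plugin_two&plugin=plugin_three"
)
query = parse_qs(urlsplit(url).query)
print(query["plugin"])  # -> ['plugin_one', 'plugin_two', 'plugin_three']
```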
The plugin names may also be passed directly to :func:`_sa.create_engine`
using the :paramref:`_sa.create_engine.plugins` argument::
engine = create_engine(
- "mysql+pymysql://scott:tiger@localhost/test",
- plugins=["myplugin"])
+ "mysql+pymysql://scott:tiger@localhost/test", plugins=["myplugin"]
+ )
.. versionadded:: 1.2.3 plugin names can also be specified
to :func:`_sa.create_engine` as a list
class MyPlugin(CreateEnginePlugin):
def __init__(self, url, kwargs):
- self.my_argument_one = url.query['my_argument_one']
- self.my_argument_two = url.query['my_argument_two']
- self.my_argument_three = kwargs.pop('my_argument_three', None)
+ self.my_argument_one = url.query["my_argument_one"]
+ self.my_argument_two = url.query["my_argument_two"]
+ self.my_argument_three = kwargs.pop("my_argument_three", None)
def update_url(self, url):
return url.difference_update_query(
from sqlalchemy import create_engine
engine = create_engine(
- "mysql+pymysql://scott:tiger@localhost/test?"
- "plugin=myplugin&my_argument_one=foo&my_argument_two=bar",
- my_argument_three='bat'
+ "mysql+pymysql://scott:tiger@localhost/test?"
+ "plugin=myplugin&my_argument_one=foo&my_argument_two=bar",
+ my_argument_three="bat",
)
.. versionchanged:: 1.4
def __init__(self, url, kwargs):
if hasattr(CreateEnginePlugin, "update_url"):
# detect the 1.4 API
- self.my_argument_one = url.query['my_argument_one']
- self.my_argument_two = url.query['my_argument_two']
+ self.my_argument_one = url.query["my_argument_one"]
+ self.my_argument_two = url.query["my_argument_two"]
else:
# detect the 1.3 and earlier API - mutate the
# URL directly
- self.my_argument_one = url.query.pop('my_argument_one')
- self.my_argument_two = url.query.pop('my_argument_two')
+ self.my_argument_one = url.query.pop("my_argument_one")
+ self.my_argument_two = url.query.pop("my_argument_two")
- self.my_argument_three = kwargs.pop('my_argument_three', None)
+ self.my_argument_three = kwargs.pop("my_argument_three", None)
def update_url(self, url):
# this method is only called in the 1.4 version
engine = create_async_engine(...)
+
@event.listens_for(engine.sync_engine, "connect")
- def register_custom_types(dbapi_connection, ...):
+ def register_custom_types(
+ dbapi_connection, # ...
+ ):
dbapi_connection.run_async(
lambda connection: connection.set_type_codec(
- 'MyCustomType', encoder, decoder, ...
+ "MyCustomType", encoder, decoder, ...
)
)
from sqlalchemy import create_mock_engine
+
def dump(sql, *multiparams, **params):
print(sql.compile(dialect=engine.dialect))
- engine = create_mock_engine('postgresql+psycopg2://', dump)
+
+ engine = create_mock_engine("postgresql+psycopg2://", dump)
metadata.create_all(engine, checkfirst=False)
:param url: A string URL which typically needs to contain only the
or a :class:`_engine.Connection`::
from sqlalchemy import inspect, create_engine
- engine = create_engine('...')
+
+ engine = create_engine("...")
insp = inspect(engine)
Where above, the :class:`~sqlalchemy.engine.interfaces.Dialect` associated
from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy import inspect
- engine = create_engine('...')
+ engine = create_engine("...")
meta = MetaData()
- user_table = Table('user', meta)
+ user_table = Table("user", meta)
insp = inspect(engine)
insp.reflect_table(user_table, None)
statement = select(table.c.x, table.c.y, table.c.z)
result = connection.execute(statement)
- for z, y in result.columns('z', 'y'):
- # ...
-
+ for z, y in result.columns("z", "y"):
+ ...
Example of using the column objects from the statement itself::
for z, y in result.columns(
- statement.selected_columns.c.z,
- statement.selected_columns.c.y
+ statement.selected_columns.c.z, statement.selected_columns.c.y
):
- # ...
+ ...
.. versionadded:: 1.4
as iteration of keys, values, and items::
for row in result:
- if 'a' in row._mapping:
- print("Column 'a': %s" % row._mapping['a'])
+ if "a" in row._mapping:
+ print("Column 'a': %s" % row._mapping["a"])
print("Column b: %s" % row._mapping[table.c.b])
-
.. versionadded:: 1.4 The :class:`.RowMapping` object replaces the
mapping-like access previously provided by a database result row,
which now seeks to behave mostly like a named tuple.
for keys and either strings or tuples of strings for values, e.g.::
>>> from sqlalchemy.engine import make_url
- >>> url = make_url("postgresql+psycopg2://user:pass@host/dbname?alt_host=host1&alt_host=host2&ssl_cipher=%2Fpath%2Fto%2Fcrt")
+ >>> url = make_url(
+ ... "postgresql+psycopg2://user:pass@host/dbname?alt_host=host1&alt_host=host2&ssl_cipher=%2Fpath%2Fto%2Fcrt"
+ ... )
>>> url.query
immutabledict({'alt_host': ('host1', 'host2'), 'ssl_cipher': '/path/to/crt'})
>>> from sqlalchemy.engine import make_url
>>> url = make_url("postgresql+psycopg2://user:pass@host/dbname")
- >>> url = url.update_query_string("alt_host=host1&alt_host=host2&ssl_cipher=%2Fpath%2Fto%2Fcrt")
+ >>> url = url.update_query_string(
+ ... "alt_host=host1&alt_host=host2&ssl_cipher=%2Fpath%2Fto%2Fcrt"
+ ... )
>>> str(url)
'postgresql+psycopg2://user:pass@host/dbname?alt_host=host1&alt_host=host2&ssl_cipher=%2Fpath%2Fto%2Fcrt'
>>> from sqlalchemy.engine import make_url
>>> url = make_url("postgresql+psycopg2://user:pass@host/dbname")
- >>> url = url.update_query_pairs([("alt_host", "host1"), ("alt_host", "host2"), ("ssl_cipher", "/path/to/crt")])
+ >>> url = url.update_query_pairs(
+ ... [
+ ... ("alt_host", "host1"),
+ ... ("alt_host", "host2"),
+ ... ("ssl_cipher", "/path/to/crt"),
+ ... ]
+ ... )
>>> str(url)
'postgresql+psycopg2://user:pass@host/dbname?alt_host=host1&alt_host=host2&ssl_cipher=%2Fpath%2Fto%2Fcrt'
>>> from sqlalchemy.engine import make_url
>>> url = make_url("postgresql+psycopg2://user:pass@host/dbname")
- >>> url = url.update_query_dict({"alt_host": ["host1", "host2"], "ssl_cipher": "/path/to/crt"})
+ >>> url = url.update_query_dict(
+ ... {"alt_host": ["host1", "host2"], "ssl_cipher": "/path/to/crt"}
+ ... )
>>> str(url)
'postgresql+psycopg2://user:pass@host/dbname?alt_host=host1&alt_host=host2&ssl_cipher=%2Fpath%2Fto%2Fcrt'
E.g.::
- url = url.difference_update_query(['foo', 'bar'])
+ url = url.difference_update_query(["foo", "bar"])
Equivalent to using :meth:`_engine.URL.set` as follows::
url = url.set(
query={
key: url.query[key]
- for key in set(url.query).difference(['foo', 'bar'])
+ for key in set(url.query).difference(["foo", "bar"])
}
)
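The same key removal can be exercised on a plain dict to see what the comprehension above computes; the values here are made up for illustration:

```python
# Plain-dict sketch of the key removal shown above; keep only the entries
# whose keys are not in the set being removed.
query = {
    "alt_host": ("host1", "host2"),
    "ssl_cipher": "/path/to/crt",
    "foo": "x",
    "bar": "y",
}
new_query = {key: query[key] for key in set(query).difference(["foo", "bar"])}
print(sorted(new_query))  # -> ['alt_host', 'ssl_cipher']
```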
>>> from sqlalchemy.engine import make_url
- >>> url = make_url("postgresql+psycopg2://user:pass@host/dbname?alt_host=host1&alt_host=host2&ssl_cipher=%2Fpath%2Fto%2Fcrt")
+ >>> url = make_url(
+ ... "postgresql+psycopg2://user:pass@host/dbname?alt_host=host1&alt_host=host2&ssl_cipher=%2Fpath%2Fto%2Fcrt"
+ ... )
>>> url.query
immutabledict({'alt_host': ('host1', 'host2'), 'ssl_cipher': '/path/to/crt'})
>>> url.normalized_query
from sqlalchemy import event
from sqlalchemy.schema import UniqueConstraint
+
def unique_constraint_name(const, table):
- const.name = "uq_%s_%s" % (
- table.name,
- list(const.columns)[0].name
- )
+ const.name = "uq_%s_%s" % (table.name, list(const.columns)[0].name)
+
+
event.listen(
- UniqueConstraint,
- "after_parent_attach",
- unique_constraint_name)
+ UniqueConstraint, "after_parent_attach", unique_constraint_name
+ )
:param bool insert: The default behavior for event handlers is to append
the decorated user defined function to an internal list of registered
from sqlalchemy import event
from sqlalchemy.schema import UniqueConstraint
+
@event.listens_for(UniqueConstraint, "after_parent_attach")
def unique_constraint_name(const, table):
- const.name = "uq_%s_%s" % (
- table.name,
- list(const.columns)[0].name
- )
+ const.name = "uq_%s_%s" % (table.name, list(const.columns)[0].name)
A given function can also be invoked for only the first invocation
of the event using the ``once`` argument::
def on_config():
do_config()
-
.. warning:: The ``once`` argument does not imply automatic de-registration
of the listener function after it has been invoked a first time; a
listener entry will remain associated with the target object.
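A minimal sketch, not SQLAlchemy's event machinery, of the behavior the warning describes: the wrapper below stops invoking the function after the first call, yet the listener entry itself stays registered in the list.

```python
# Sketch only, not SQLAlchemy's event registry: the "once" wrapper stops
# invoking the function after the first call, but the listener entry
# remains in the listeners list.
def once(fn):
    state = {"called": False}

    def wrapper(*arg, **kw):
        if not state["called"]:
            state["called"] = True
            return fn(*arg, **kw)
        return None

    return wrapper


calls = []


def on_config():
    calls.append("configured")


listeners = [once(on_config)]

for _ in range(2):  # fire the event twice
    for listener in listeners:
        listener()

print(len(calls), len(listeners))  # -> 1 1
```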
def my_listener_function(*arg):
pass
+
# ... it's removed like this
event.remove(SomeMappedClass, "before_insert", my_listener_function)
from sqlalchemy.exc import DontWrapMixin
+
class MyCustomException(Exception, DontWrapMixin):
pass
+
class MySpecialType(TypeDecorator):
impl = String
def process_bind_param(self, value, dialect):
- if value == 'invalid':
+ if value == "invalid":
raise MyCustomException("invalid!")
"""
class User(Base):
# ...
- keywords = association_proxy('kws', 'keyword')
+ keywords = association_proxy("kws", "keyword")
If we access this :class:`.AssociationProxy` from
:attr:`_orm.Mapper.all_orm_descriptors`, and we want to view the
:attr:`.AssociationProxyInstance.remote_attr` attributes separately::
stmt = (
- select(Parent).
- join(Parent.proxied.local_attr).
- join(Parent.proxied.remote_attr)
+ select(Parent)
+ .join(Parent.proxied.local_attr)
+ .join(Parent.proxied.remote_attr)
)
A future release may seek to provide a more succinct join pattern
``@contextlib.asynccontextmanager`` supports, and the usage pattern
is different as well.
- Typical usage::
+ Typical usage:
+
+ .. sourcecode:: text
@asyncstartablecontext
async def some_async_generator(<arguments>):
method of :class:`_asyncio.AsyncEngine`::
from sqlalchemy.ext.asyncio import create_async_engine
+
engine = create_async_engine("postgresql+asyncpg://user:pass@host/dbname")
async with engine.connect() as conn:
E.g.::
- result = await conn.stream(stmt):
+ result = await conn.stream(stmt)
async for row in result:
print(f"{row}")
*arg: _P.args,
**kw: _P.kwargs,
) -> _T:
- """Invoke the given synchronous (i.e. not async) callable,
+ '''Invoke the given synchronous (i.e. not async) callable,
passing a synchronous-style :class:`_engine.Connection` as the first
argument.
E.g.::
def do_something_with_core(conn: Connection, arg1: int, arg2: str) -> str:
- '''A synchronous function that does not require awaiting
+ """A synchronous function that does not require awaiting
:param conn: a Core SQLAlchemy Connection, used synchronously
:return: an optional return value is supported
- '''
- conn.execute(
- some_table.insert().values(int_col=arg1, str_col=arg2)
- )
+ """
+ conn.execute(some_table.insert().values(int_col=arg1, str_col=arg2))
return "success"
async def do_something_async(async_engine: AsyncEngine) -> None:
- '''an async function that uses awaiting'''
+ """an async function that uses awaiting"""
async with async_engine.begin() as async_conn:
# run do_something_with_core() with a sync-style
# Connection, proxied into an awaitable
- return_code = await async_conn.run_sync(do_something_with_core, 5, "strval")
+ return_code = await async_conn.run_sync(
+ do_something_with_core, 5, "strval"
+ )
print(return_code)
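As a rough analogy only: SQLAlchemy performs this bridging with greenlets rather than threads, but the calling shape, handing a synchronous callable plus its positional arguments to an awaitable runner, can be approximated with ``asyncio.to_thread``:

```python
# Rough analogy only: SQLAlchemy bridges with greenlets, not threads,
# but the calling shape is similar: an awaitable runner receives a
# synchronous callable plus its positional arguments.
import asyncio


def do_something_sync(arg1, arg2):
    return "success:%s:%s" % (arg1, arg2)


async def main():
    return await asyncio.to_thread(do_something_sync, 5, "strval")


print(asyncio.run(main()))  # -> success:5:strval
```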
This method maintains the asyncio event loop all the way through
:ref:`session_run_sync`
- """ # noqa: E501
+ ''' # noqa: E501
return await greenlet_spawn(
fn, self._proxied, *arg, _require_await=False, **kw
:func:`_asyncio.create_async_engine` function::
from sqlalchemy.ext.asyncio import create_async_engine
+
engine = create_async_engine("postgresql+asyncpg://user:pass@host/dbname")
.. versionadded:: 1.4
)
await conn.execute(text("my_special_procedure(5)"))
-
"""
conn = self.connect()
object is entered::
async with async_session.begin():
- # .. ORM transaction is begun
+ ... # ORM transaction is begun
Note that database IO will not normally occur when the session-level
transaction is begun, as database transactions begin on an
# construct async engines w/ async drivers
engines = {
- 'leader':create_async_engine("sqlite+aiosqlite:///leader.db"),
- 'other':create_async_engine("sqlite+aiosqlite:///other.db"),
- 'follower1':create_async_engine("sqlite+aiosqlite:///follower1.db"),
- 'follower2':create_async_engine("sqlite+aiosqlite:///follower2.db"),
+ "leader": create_async_engine("sqlite+aiosqlite:///leader.db"),
+ "other": create_async_engine("sqlite+aiosqlite:///other.db"),
+ "follower1": create_async_engine("sqlite+aiosqlite:///follower1.db"),
+ "follower2": create_async_engine("sqlite+aiosqlite:///follower2.db"),
}
+
class RoutingSession(Session):
def get_bind(self, mapper=None, clause=None, **kw):
# within get_bind(), return sync engines
if mapper and issubclass(mapper.class_, MyOtherClass):
- return engines['other'].sync_engine
+ return engines["other"].sync_engine
elif self._flushing or isinstance(clause, (Update, Delete)):
- return engines['leader'].sync_engine
+ return engines["leader"].sync_engine
else:
return engines[
- random.choice(['follower1','follower2'])
+ random.choice(["follower1", "follower2"])
].sync_engine
+
# apply to AsyncSession using sync_session_class
- AsyncSessionMaker = async_sessionmaker(
- sync_session_class=RoutingSession
- )
+ AsyncSessionMaker = async_sessionmaker(sync_session_class=RoutingSession)
The :meth:`_orm.Session.get_bind` method is called in a non-asyncio,
implicitly non-blocking context in the same manner as ORM event hooks
*arg: _P.args,
**kw: _P.kwargs,
) -> _T:
- """Invoke the given synchronous (i.e. not async) callable,
+ '''Invoke the given synchronous (i.e. not async) callable,
passing a synchronous-style :class:`_orm.Session` as the first
argument.
E.g.::
def some_business_method(session: Session, param: str) -> str:
- '''A synchronous function that does not require awaiting
+ """A synchronous function that does not require awaiting
:param session: a SQLAlchemy Session, used synchronously
:return: an optional return value is supported
- '''
+ """
session.add(MyObject(param=param))
session.flush()
return "success"
async def do_something_async(async_engine: AsyncEngine) -> None:
- '''an async function that uses awaiting'''
+ """an async function that uses awaiting"""
with AsyncSession(async_engine) as async_session:
# run some_business_method() with a sync-style
# Session, proxied into an awaitable
- return_code = await async_session.run_sync(some_business_method, param="param1")
+ return_code = await async_session.run_sync(
+ some_business_method, param="param1"
+ )
print(return_code)
This method maintains the asyncio event loop all the way through
:meth:`.AsyncConnection.run_sync`
:ref:`session_run_sync`
- """ # noqa: E501
+ ''' # noqa: E501
return await greenlet_spawn(
fn, self.sync_session, *arg, _require_await=False, **kw
# construct async engines w/ async drivers
engines = {
- 'leader':create_async_engine("sqlite+aiosqlite:///leader.db"),
- 'other':create_async_engine("sqlite+aiosqlite:///other.db"),
- 'follower1':create_async_engine("sqlite+aiosqlite:///follower1.db"),
- 'follower2':create_async_engine("sqlite+aiosqlite:///follower2.db"),
+ "leader": create_async_engine("sqlite+aiosqlite:///leader.db"),
+ "other": create_async_engine("sqlite+aiosqlite:///other.db"),
+ "follower1": create_async_engine("sqlite+aiosqlite:///follower1.db"),
+ "follower2": create_async_engine("sqlite+aiosqlite:///follower2.db"),
}
+
class RoutingSession(Session):
def get_bind(self, mapper=None, clause=None, **kw):
# within get_bind(), return sync engines
if mapper and issubclass(mapper.class_, MyOtherClass):
- return engines['other'].sync_engine
+ return engines["other"].sync_engine
elif self._flushing or isinstance(clause, (Update, Delete)):
- return engines['leader'].sync_engine
+ return engines["leader"].sync_engine
else:
return engines[
- random.choice(['follower1','follower2'])
+ random.choice(["follower1", "follower2"])
].sync_engine
+
# apply to AsyncSession using sync_session_class
- AsyncSessionMaker = async_sessionmaker(
- sync_session_class=RoutingSession
- )
+ AsyncSessionMaker = async_sessionmaker(sync_session_class=RoutingSession)
The :meth:`_orm.Session.get_bind` method is called in a non-asyncio,
implicitly non-blocking context in the same manner as ORM event hooks
object is entered::
async with async_session.begin():
- # .. ORM transaction is begun
+ ... # ORM transaction is begun
Note that database IO will not normally occur when the session-level
transaction is begun, as database transactions begin on an
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.ext.asyncio import async_sessionmaker
- async def run_some_sql(async_session: async_sessionmaker[AsyncSession]) -> None:
+
+ async def run_some_sql(
+ async_session: async_sessionmaker[AsyncSession],
+ ) -> None:
async with async_session() as session:
session.add(SomeObject(data="object"))
session.add(SomeOtherObject(name="other object"))
await session.commit()
+
async def main() -> None:
# an AsyncEngine, which the AsyncSession will use for connection
# resources
- engine = create_async_engine('postgresql+asyncpg://scott:tiger@localhost/')
+ engine = create_async_engine(
+ "postgresql+asyncpg://scott:tiger@localhost/"
+ )
# create a reusable factory for new AsyncSession instances
async_session = async_sessionmaker(engine)
# commits transaction, closes session
-
"""
session = self()
AsyncSession = async_sessionmaker(some_engine)
- AsyncSession.configure(bind=create_async_engine('sqlite+aiosqlite://'))
+ AsyncSession.configure(bind=create_async_engine("sqlite+aiosqlite://"))
""" # noqa: E501
self.kw.update(new_kw)
Base = automap_base()
Base.prepare(e, modulename_for_table=module_name_for_table)
- Base.prepare(e, schema="test_schema", modulename_for_table=module_name_for_table)
- Base.prepare(e, schema="test_schema_2", modulename_for_table=module_name_for_table)
+ Base.prepare(
+ e, schema="test_schema", modulename_for_table=module_name_for_table
+ )
+ Base.prepare(
+ e, schema="test_schema_2", modulename_for_table=module_name_for_table
+ )
The same named-classes are organized into a hierarchical collection available
at :attr:`.AutomapBase.by_module`. This collection is traversed using the
id = Column(Integer, ForeignKey("employee.id"), primary_key=True)
favorite_employee_id = Column(Integer, ForeignKey("employee.id"))
- favorite_employee = relationship(Employee, foreign_keys=favorite_employee_id)
+ favorite_employee = relationship(
+ Employee, foreign_keys=favorite_employee_id
+ )
__mapper_args__ = {
"polymorphic_identity": "engineer",
We can resolve this conflict by using an underscore as follows::
- def name_for_scalar_relationship(base, local_cls, referred_cls, constraint):
+ def name_for_scalar_relationship(
+ base, local_cls, referred_cls, constraint
+ ):
name = referred_cls.__name__.lower()
local_table = local_cls.__table__
if name in local_table.columns:
newname = name + "_"
- warnings.warn("Already detected name %s present. using %s" % (name, newname))
+ warnings.warn(
+ "Already detected name %s present, using %s" % (name, newname)
+ )
return newname
return name
is passed to the lambda::
sub_bq = self.bakery(lambda s: s.query(User.name))
- sub_bq += lambda q: q.filter(
- User.id == Address.user_id).correlate(Address)
+ sub_bq += lambda q: q.filter(User.id == Address.user_id).correlate(Address)
main_bq = self.bakery(lambda s: s.query(Address))
- main_bq += lambda q: q.filter(
- sub_bq.to_query(q).exists())
+ main_bq += lambda q: q.filter(sub_bq.to_query(q).exists())
In the case where the subquery is used in the first callable against
a :class:`.Session`, the :class:`.Session` is also accepted::
sub_bq = self.bakery(lambda s: s.query(User.name))
- sub_bq += lambda q: q.filter(
- User.id == Address.user_id).correlate(Address)
+ sub_bq += lambda q: q.filter(User.id == Address.user_id).correlate(Address)
main_bq = self.bakery(
- lambda s: s.query(
- Address.id, sub_bq.to_query(q).scalar_subquery())
+ lambda s: s.query(Address.id, sub_bq.to_query(q).scalar_subquery())
)
:param query_or_session: a :class:`_query.Query` object or a class
.. versionadded:: 1.3
- """
+ """ # noqa: E501
if isinstance(query_or_session, Session):
session = query_or_session
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ColumnClause
+
class MyColumn(ColumnClause):
inherit_cache = True
+
@compiles(MyColumn)
def compile_mycolumn(element, compiler, **kw):
return "[%s]" % element.name
from sqlalchemy import select
- s = select(MyColumn('x'), MyColumn('y'))
+ s = select(MyColumn("x"), MyColumn("y"))
print(str(s))
-Produces::
+Produces:
+
+.. sourcecode:: sql
SELECT [x], [y]
from sqlalchemy.schema import DDLElement
+
class AlterColumn(DDLElement):
inherit_cache = False
self.column = column
self.cmd = cmd
+
@compiles(AlterColumn)
def visit_alter_column(element, compiler, **kw):
return "ALTER COLUMN %s ..." % element.column.name
- @compiles(AlterColumn, 'postgresql')
+
+ @compiles(AlterColumn, "postgresql")
def visit_alter_column(element, compiler, **kw):
- return "ALTER TABLE %s ALTER COLUMN %s ..." % (element.table.name,
- element.column.name)
+ return "ALTER TABLE %s ALTER COLUMN %s ..." % (
+ element.table.name,
+ element.column.name,
+ )
The second ``visit_alter_column`` will be invoked when any ``postgresql``
dialect is used.
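The dispatch rule, a dialect-specific entry wins and otherwise the default applies, can be sketched with a toy registry. This is an analogy only, not SQLAlchemy's actual ``@compiles`` implementation:

```python
# Toy registry, an analogy to (not the implementation of) @compiles:
# a dialect-specific entry wins when present, otherwise the default
# entry registered under None is used.
_registry = {}


def compiles(cls, dialect=None):
    def decorate(fn):
        _registry[(cls, dialect)] = fn
        return fn

    return decorate


def compile_element(element, dialect):
    fn = _registry.get((type(element), dialect))
    if fn is None:
        fn = _registry[(type(element), None)]
    return fn(element)


class AlterColumn:
    def __init__(self, name):
        self.name = name


@compiles(AlterColumn)
def visit_alter_column(element):
    return "ALTER COLUMN %s ..." % element.name


@compiles(AlterColumn, "postgresql")
def pg_visit_alter_column(element):
    return "ALTER TABLE t ALTER COLUMN %s ..." % element.name


print(compile_element(AlterColumn("x"), "sqlite"))
# -> ALTER COLUMN x ...
print(compile_element(AlterColumn("x"), "postgresql"))
# -> ALTER TABLE t ALTER COLUMN x ...
```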
from sqlalchemy.sql.expression import Executable, ClauseElement
+
class InsertFromSelect(Executable, ClauseElement):
inherit_cache = False
self.table = table
self.select = select
+
@compiles(InsertFromSelect)
def visit_insert_from_select(element, compiler, **kw):
return "INSERT INTO %s (%s)" % (
compiler.process(element.table, asfrom=True, **kw),
- compiler.process(element.select, **kw)
+ compiler.process(element.select, **kw),
)
- insert = InsertFromSelect(t1, select(t1).where(t1.c.x>5))
+
+ insert = InsertFromSelect(t1, select(t1).where(t1.c.x > 5))
print(insert)
-Produces::
+Produces (formatted for readability):
+
+.. sourcecode:: sql
- "INSERT INTO mytable (SELECT mytable.x, mytable.y, mytable.z
- FROM mytable WHERE mytable.x > :x_1)"
+ INSERT INTO mytable (
+ SELECT mytable.x, mytable.y, mytable.z
+ FROM mytable
+ WHERE mytable.x > :x_1
+ )
.. note::
@compiles(MyConstraint)
def compile_my_constraint(constraint, ddlcompiler, **kw):
- kw['literal_binds'] = True
+ kw["literal_binds"] = True
return "CONSTRAINT %s CHECK (%s)" % (
constraint.name,
- ddlcompiler.sql_compiler.process(
- constraint.expression, **kw)
+ ddlcompiler.sql_compiler.process(constraint.expression, **kw),
)
Above, we add an additional flag to the process step as called by
from sqlalchemy.sql.expression import Insert
+
@compiles(Insert)
def prefix_inserts(insert, compiler, **kw):
return compiler.visit_insert(insert.prefix_with("some prefix"), **kw)
``compiler`` works for types, too, such as below where we implement the
MS-SQL specific 'max' keyword for ``String``/``VARCHAR``::
- @compiles(String, 'mssql')
- @compiles(VARCHAR, 'mssql')
+ @compiles(String, "mssql")
+ @compiles(VARCHAR, "mssql")
def compile_varchar(element, compiler, **kw):
- if element.length == 'max':
+ if element.length == "max":
return "VARCHAR('max')"
else:
return compiler.visit_VARCHAR(element, **kw)
- foo = Table('foo', metadata,
- Column('data', VARCHAR('max'))
- )
+
+ foo = Table("foo", metadata, Column("data", VARCHAR("max")))
Subclassing Guidelines
======================
from sqlalchemy.sql.expression import FunctionElement
+
class coalesce(FunctionElement):
- name = 'coalesce'
+ name = "coalesce"
inherit_cache = True
+
@compiles(coalesce)
def compile(element, compiler, **kw):
return "coalesce(%s)" % compiler.process(element.clauses, **kw)
- @compiles(coalesce, 'oracle')
+
+ @compiles(coalesce, "oracle")
def compile(element, compiler, **kw):
if len(element.clauses) > 2:
- raise TypeError("coalesce only supports two arguments on "
- "Oracle Database")
+ raise TypeError(
+ "coalesce only supports two arguments on Oracle Database"
+ )
return "nvl(%s)" % compiler.process(element.clauses, **kw)
* :class:`.ExecutableDDLElement` - The root of all DDL expressions,
class MyColumn(ColumnClause):
inherit_cache = True
+
@compiles(MyColumn)
def compile_mycolumn(element, compiler, **kw):
return "[%s]" % element.name
self.table = table
self.select = select
+
@compiles(InsertFromSelect)
def visit_insert_from_select(element, compiler, **kw):
return "INSERT INTO %s (%s)" % (
compiler.process(element.table, asfrom=True, **kw),
- compiler.process(element.select, **kw)
+ compiler.process(element.select, **kw),
)
While it is also possible that the above ``InsertFromSelect`` could be made to
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.types import DateTime
+
class utcnow(expression.FunctionElement):
type = DateTime()
inherit_cache = True
- @compiles(utcnow, 'postgresql')
+
+ @compiles(utcnow, "postgresql")
def pg_utcnow(element, compiler, **kw):
return "TIMEZONE('utc', CURRENT_TIMESTAMP)"
- @compiles(utcnow, 'mssql')
+
+ @compiles(utcnow, "mssql")
def ms_utcnow(element, compiler, **kw):
return "GETUTCDATE()"
Example usage::
- from sqlalchemy import (
- Table, Column, Integer, String, DateTime, MetaData
- )
+ from sqlalchemy import Table, Column, Integer, String, DateTime, MetaData
+
metadata = MetaData()
- event = Table("event", metadata,
+ event = Table(
+ "event",
+ metadata,
Column("id", Integer, primary_key=True),
Column("description", String(50), nullable=False),
- Column("timestamp", DateTime, server_default=utcnow())
+ Column("timestamp", DateTime, server_default=utcnow()),
)
"GREATEST" function
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.types import Numeric
+
class greatest(expression.FunctionElement):
type = Numeric()
- name = 'greatest'
+ name = "greatest"
inherit_cache = True
+
@compiles(greatest)
def default_greatest(element, compiler, **kw):
return compiler.visit_function(element)
- @compiles(greatest, 'sqlite')
- @compiles(greatest, 'mssql')
- @compiles(greatest, 'oracle')
+
+ @compiles(greatest, "sqlite")
+ @compiles(greatest, "mssql")
+ @compiles(greatest, "oracle")
def case_greatest(element, compiler, **kw):
arg1, arg2 = list(element.clauses)
return compiler.process(case((arg1 > arg2, arg1), else_=arg2), **kw)
Example usage::
- Session.query(Account).\
- filter(
- greatest(
- Account.checking_balance,
- Account.savings_balance) > 10000
- )
+ Session.query(Account).filter(
+ greatest(Account.checking_balance, Account.savings_balance) > 10000
+ )
"false" expression
------------------
from sqlalchemy.sql import expression
from sqlalchemy.ext.compiler import compiles
+
class sql_false(expression.ColumnElement):
inherit_cache = True
+
@compiles(sql_false)
def default_false(element, compiler, **kw):
return "false"
- @compiles(sql_false, 'mssql')
- @compiles(sql_false, 'mysql')
- @compiles(sql_false, 'oracle')
+
+ @compiles(sql_false, "mssql")
+ @compiles(sql_false, "mysql")
+ @compiles(sql_false, "oracle")
def int_false(element, compiler, **kw):
return "0"
exp = union_all(
select(users.c.name, sql_false().label("enrolled")),
- select(customers.c.name, customers.c.enrolled)
+ select(customers.c.name, customers.c.enrolled),
)
"""
from sqlalchemy.ext.declarative import ConcreteBase
+
class Employee(ConcreteBase, Base):
- __tablename__ = 'employee'
+ __tablename__ = "employee"
employee_id = Column(Integer, primary_key=True)
name = Column(String(50))
__mapper_args__ = {
- 'polymorphic_identity':'employee',
- 'concrete':True}
+ "polymorphic_identity": "employee",
+ "concrete": True,
+ }
+
class Manager(Employee):
- __tablename__ = 'manager'
+ __tablename__ = "manager"
employee_id = Column(Integer, primary_key=True)
name = Column(String(50))
manager_data = Column(String(40))
__mapper_args__ = {
- 'polymorphic_identity':'manager',
- 'concrete':True}
-
+ "polymorphic_identity": "manager",
+ "concrete": True,
+ }
The name of the discriminator column used by :func:`.polymorphic_union`
defaults to the name ``type``. To suit the use case of a mapping where an
``_concrete_discriminator_name`` attribute::
class Employee(ConcreteBase, Base):
- _concrete_discriminator_name = '_concrete_discriminator'
+ _concrete_discriminator_name = "_concrete_discriminator"
.. versionadded:: 1.3.19 Added the ``_concrete_discriminator_name``
attribute to :class:`_declarative.ConcreteBase` so that the
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.ext.declarative import AbstractConcreteBase
+
class Base(DeclarativeBase):
pass
+
class Employee(AbstractConcreteBase, Base):
pass
+
class Manager(Employee):
- __tablename__ = 'manager'
+ __tablename__ = "manager"
employee_id = Column(Integer, primary_key=True)
name = Column(String(50))
manager_data = Column(String(40))
__mapper_args__ = {
- 'polymorphic_identity':'manager',
- 'concrete':True
+ "polymorphic_identity": "manager",
+ "concrete": True,
}
+
Base.registry.configure()
The abstract base class is handled by declarative in a special way;
from sqlalchemy.ext.declarative import AbstractConcreteBase
+
class Company(Base):
- __tablename__ = 'company'
+ __tablename__ = "company"
id = Column(Integer, primary_key=True)
+
class Employee(AbstractConcreteBase, Base):
strict_attrs = True
@declared_attr
def company_id(cls):
- return Column(ForeignKey('company.id'))
+ return Column(ForeignKey("company.id"))
@declared_attr
def company(cls):
return relationship("Company")
+
class Manager(Employee):
- __tablename__ = 'manager'
+ __tablename__ = "manager"
name = Column(String(50))
manager_data = Column(String(40))
__mapper_args__ = {
- 'polymorphic_identity':'manager',
- 'concrete':True
+ "polymorphic_identity": "manager",
+ "concrete": True,
}
+
Base.registry.configure()
When we make use of our mappings however, both ``Manager`` and
``Employee`` will have an independently usable ``.company`` attribute::
- session.execute(
- select(Employee).filter(Employee.company.has(id=5))
- )
+ session.execute(select(Employee).filter(Employee.company.has(id=5)))
:param strict_attrs: when specified on the base class, "strict" attribute
mode is enabled which attempts to limit ORM mapped attributes on the
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.declarative import DeferredReflection
+
Base = declarative_base()
+
class MyClass(DeferredReflection, Base):
- __tablename__ = 'mytable'
+ __tablename__ = "mytable"
Above, ``MyClass`` is not yet mapped. After a series of
classes have been defined in the above fashion, all tables
class ReflectedOne(DeferredReflection, Base):
__abstract__ = True
+
class ReflectedTwo(DeferredReflection, Base):
__abstract__ = True
+
class MyClass(ReflectedOne):
- __tablename__ = 'mytable'
+ __tablename__ = "mytable"
+
class MyOtherClass(ReflectedOne):
- __tablename__ = 'myothertable'
+ __tablename__ = "myothertable"
+
class YetAnotherClass(ReflectedTwo):
- __tablename__ = 'yetanothertable'
+ __tablename__ = "yetanothertable"
+
# ... etc.
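The two bases above can then be prepared against separate engines. Below is a minimal runnable sketch of that workflow, using two in-memory SQLite engines and illustrative single-column tables in place of the real databases:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.ext.declarative import DeferredReflection
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class ReflectedOne(DeferredReflection, Base):
    __abstract__ = True


class ReflectedTwo(DeferredReflection, Base):
    __abstract__ = True


class MyClass(ReflectedOne):
    __tablename__ = "mytable"


class YetAnotherClass(ReflectedTwo):
    __tablename__ = "yetanothertable"


# two in-memory SQLite databases stand in for two distinct backends
engine_one = create_engine("sqlite://")
engine_two = create_engine("sqlite://")

with engine_one.begin() as conn:
    conn.execute(text("CREATE TABLE mytable (id INTEGER PRIMARY KEY)"))
with engine_two.begin() as conn:
    conn.execute(text("CREATE TABLE yetanothertable (id INTEGER PRIMARY KEY)"))

# each base reflects and maps only its own subclasses,
# against its own engine
ReflectedOne.prepare(engine_one)
ReflectedTwo.prepare(engine_two)
```

Each ``prepare()`` call reflects only the tables belonging to that base's subclasses, so the two groups of classes can point at entirely different databases.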
The shard_id can be passed for a 2.0 style execution to the
bind_arguments dictionary of :meth:`.Session.execute`::
- results = session.execute(
- stmt,
- bind_arguments={"shard_id": "my_shard"}
- )
+ results = session.execute(stmt, bind_arguments={"shard_id": "my_shard"})
- """
+ """ # noqa: E501
return self.execution_options(_sa_shard_id=shard_id)
the :meth:`_sql.Executable.options` method of any executable statement::
stmt = (
- select(MyObject).
- where(MyObject.name == 'some name').
- options(set_shard_id("shard1"))
+ select(MyObject)
+ .where(MyObject.name == "some name")
+ .options(set_shard_id("shard1"))
)
Above, the statement when invoked will limit to the "shard1" shard
class Base(DeclarativeBase):
pass
+
class Interval(Base):
- __tablename__ = 'interval'
+ __tablename__ = "interval"
id: Mapped[int] = mapped_column(primary_key=True)
start: Mapped[int]
def intersects(self, other: Interval) -> bool:
return self.contains(other.start) | self.contains(other.end)
-
Above, the ``length`` property returns the difference between the
``end`` and ``start`` attributes. With an instance of ``Interval``,
this subtraction occurs in Python, using normal Python descriptor
from sqlalchemy import func
from sqlalchemy import type_coerce
+
class Interval(Base):
# ...
# correct use, however is not accepted by pep-484 tooling
+
class Interval(Base):
# ...
# correct use which is also accepted by pep-484 tooling
+
class Interval(Base):
# ...
``Interval.start``, this could be substituted directly::
from sqlalchemy import update
+
stmt = update(Interval).values({Interval.start_point: 10})
However, when using a composite hybrid like ``Interval.length``, this
from typing import List, Tuple, Any
+
class Interval(Base):
# ...
self.end = self.start + value
@length.inplace.update_expression
- def _length_update_expression(cls, value: Any) -> List[Tuple[Any, Any]]:
- return [
- (cls.end, cls.start + value)
- ]
+ def _length_update_expression(
+ cls, value: Any
+ ) -> List[Tuple[Any, Any]]:
+ return [(cls.end, cls.start + value)]
Above, if we use ``Interval.length`` in an UPDATE expression, we get
a hybrid SET expression:
class SavingsAccount(Base):
- __tablename__ = 'account'
+ __tablename__ = "account"
id: Mapped[int] = mapped_column(primary_key=True)
- user_id: Mapped[int] = mapped_column(ForeignKey('user.id'))
+ user_id: Mapped[int] = mapped_column(ForeignKey("user.id"))
balance: Mapped[Decimal] = mapped_column(Numeric(15, 5))
owner: Mapped[User] = relationship(back_populates="accounts")
+
class User(Base):
- __tablename__ = 'user'
+ __tablename__ = "user"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str] = mapped_column(String(100))
@balance.inplace.expression
@classmethod
def _balance_expression(cls) -> SQLColumnExpression[Optional[Decimal]]:
- return cast("SQLColumnExpression[Optional[Decimal]]", SavingsAccount.balance)
+ return cast(
+ "SQLColumnExpression[Optional[Decimal]]",
+ SavingsAccount.balance,
+ )
The above hybrid property ``balance`` works with the first
``SavingsAccount`` entry in the list of accounts for this user. The
.. sourcecode:: pycon+sql
>>> from sqlalchemy import select
- >>> print(select(User, User.balance).
- ... join(User.accounts).filter(User.balance > 5000))
+ >>> print(
+ ... select(User, User.balance)
+ ... .join(User.accounts)
+ ... .filter(User.balance > 5000)
+ ... )
{printsql}SELECT "user".id AS user_id, "user".name AS user_name,
account.balance AS account_balance
FROM "user" JOIN account ON "user".id = account.user_id
>>> from sqlalchemy import select
>>> from sqlalchemy import or_
- >>> print (select(User, User.balance).outerjoin(User.accounts).
- ... filter(or_(User.balance < 5000, User.balance == None)))
+ >>> print(
+ ... select(User, User.balance)
+ ... .outerjoin(User.accounts)
+ ... .filter(or_(User.balance < 5000, User.balance == None))
+ ... )
{printsql}SELECT "user".id AS user_id, "user".name AS user_name,
account.balance AS account_balance
FROM "user" LEFT OUTER JOIN account ON "user".id = account.user_id
class SavingsAccount(Base):
- __tablename__ = 'account'
+ __tablename__ = "account"
id: Mapped[int] = mapped_column(primary_key=True)
- user_id: Mapped[int] = mapped_column(ForeignKey('user.id'))
+ user_id: Mapped[int] = mapped_column(ForeignKey("user.id"))
balance: Mapped[Decimal] = mapped_column(Numeric(15, 5))
owner: Mapped[User] = relationship(back_populates="accounts")
+
class User(Base):
- __tablename__ = 'user'
+ __tablename__ = "user"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str] = mapped_column(String(100))
@hybrid_property
def balance(self) -> Decimal:
- return sum((acc.balance for acc in self.accounts), start=Decimal("0"))
+ return sum(
+ (acc.balance for acc in self.accounts), start=Decimal("0")
+ )
@balance.inplace.expression
@classmethod
.label("total_balance")
)
-
The above recipe will give us the ``balance`` column which renders
a correlated SELECT:
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
+
class Base(DeclarativeBase):
pass
def __eq__(self, other: Any) -> ColumnElement[bool]: # type: ignore[override] # noqa: E501
return func.lower(self.__clause_element__()) == func.lower(other)
+
class SearchWord(Base):
- __tablename__ = 'searchword'
+ __tablename__ = "searchword"
id: Mapped[int] = mapped_column(primary_key=True)
word: Mapped[str]
def _name_setter(self, value: str) -> None:
self.first_name = value
+
class FirstNameLastName(FirstNameOnly):
# ...
# of FirstNameOnly.name that is local to FirstNameLastName
@FirstNameOnly.name.getter
def name(self) -> str:
- return self.first_name + ' ' + self.last_name
+ return self.first_name + " " + self.last_name
@name.inplace.setter
def _name_setter(self, value: str) -> None:
- self.first_name, self.last_name = value.split(' ', 1)
+ self.first_name, self.last_name = value.split(" ", 1)
Above, the ``FirstNameLastName`` class refers to the hybrid from
``FirstNameOnly.name`` to repurpose its getter and setter for the subclass.
@FirstNameOnly.name.overrides.expression
@classmethod
def name(cls):
- return func.concat(cls.first_name, ' ', cls.last_name)
-
+ return func.concat(cls.first_name, " ", cls.last_name)
Hybrid Value Objects
--------------------
def __str__(self):
return self.word
- key = 'word'
+ key = "word"
"Label to apply to Query tuple results"
Above, the ``CaseInsensitiveWord`` object represents ``self.word``, which may
``CaseInsensitiveWord`` object unconditionally from a single hybrid call::
class SearchWord(Base):
- __tablename__ = 'searchword'
+ __tablename__ = "searchword"
id: Mapped[int] = mapped_column(primary_key=True)
word: Mapped[str]
from sqlalchemy.ext.hybrid import hybrid_method
+
class SomeClass:
@hybrid_method
def value(self, x, y):
from sqlalchemy.ext.hybrid import hybrid_property
+
class SomeClass:
@hybrid_property
def value(self):
def foobar(self):
return self._foobar
+
class SubClass(SuperClass):
# ...
@fullname.update_expression
def fullname(cls, value):
fname, lname = value.split(" ", 1)
- return [
- (cls.first_name, fname),
- (cls.last_name, lname)
- ]
+ return [(cls.first_name, fname), (cls.last_name, lname)]
.. versionadded:: 1.2
Base = declarative_base()
+
class Person(Base):
- __tablename__ = 'person'
+ __tablename__ = "person"
id = Column(Integer, primary_key=True)
data = Column(JSON)
- name = index_property('data', 'name')
-
+ name = index_property("data", "name")
Above, the ``name`` attribute now behaves like a mapped column. We
can compose a new ``Person`` and set the value of ``name``::
- >>> person = Person(name='Alchemist')
+ >>> person = Person(name="Alchemist")
The value is now accessible::
and the field was set::
>>> person.data
- {"name": "Alchemist'}
+ {'name': 'Alchemist'}
The field is mutable in place::
- >>> person.name = 'Renamed'
+ >>> person.name = "Renamed"
>>> person.name
'Renamed'
>>> person.data
>>> person = Person()
>>> person.name
- ...
AttributeError: 'name'
Unless you set a default value::
>>> class Person(Base):
- >>> __tablename__ = 'person'
- >>>
- >>> id = Column(Integer, primary_key=True)
- >>> data = Column(JSON)
- >>>
- >>> name = index_property('data', 'name', default=None) # See default
+ ... __tablename__ = "person"
+ ...
+ ... id = Column(Integer, primary_key=True)
+ ... data = Column(JSON)
+ ...
+ ... name = index_property("data", "name", default=None) # See default
>>> person = Person()
>>> print(person.name)
>>> from sqlalchemy.orm import Session
>>> session = Session()
- >>> query = session.query(Person).filter(Person.name == 'Alchemist')
+ >>> query = session.query(Person).filter(Person.name == "Alchemist")
The above query is equivalent to::
- >>> query = session.query(Person).filter(Person.data['name'] == 'Alchemist')
+ >>> query = session.query(Person).filter(Person.data["name"] == "Alchemist")
Multiple :class:`.index_property` objects can be chained to produce
multiple levels of indexing::
Base = declarative_base()
+
class Person(Base):
- __tablename__ = 'person'
+ __tablename__ = "person"
id = Column(Integer, primary_key=True)
data = Column(JSON)
- birthday = index_property('data', 'birthday')
- year = index_property('birthday', 'year')
- month = index_property('birthday', 'month')
- day = index_property('birthday', 'day')
+ birthday = index_property("data", "birthday")
+ year = index_property("birthday", "year")
+ month = index_property("birthday", "month")
+ day = index_property("birthday", "day")
Above, a query such as::
- q = session.query(Person).filter(Person.year == '1980')
+ q = session.query(Person).filter(Person.year == "1980")
-On a PostgreSQL backend, the above query will render as::
+On a PostgreSQL backend, the above query will render as:
+
+.. sourcecode:: sql
SELECT person.id, person.data
FROM person
Base = declarative_base()
+
class Person(Base):
- __tablename__ = 'person'
+ __tablename__ = "person"
id = Column(Integer, primary_key=True)
data = Column(JSON)
- age = pg_json_property('data', 'age', Integer)
+ age = pg_json_property("data", "age", Integer)
The ``age`` attribute at the instance level works as before; however
when rendering SQL, PostgreSQL's ``->>`` operator will be used
>>> query = session.query(Person).filter(Person.age < 20)
-The above query will render::
+The above query will render:
+
+.. sourcecode:: sql
SELECT person.id, person.data
FROM person
from sqlalchemy.types import TypeDecorator, VARCHAR
import json
+
class JSONEncodedDict(TypeDecorator):
"Represents an immutable structure as a json-encoded string."
from sqlalchemy.ext.mutable import Mutable
+
class MutableDict(Mutable, dict):
@classmethod
def coerce(cls, key, value):
from sqlalchemy import Table, Column, Integer
- my_data = Table('my_data', metadata,
- Column('id', Integer, primary_key=True),
- Column('data', MutableDict.as_mutable(JSONEncodedDict))
+ my_data = Table(
+ "my_data",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ Column("data", MutableDict.as_mutable(JSONEncodedDict)),
)
Above, :meth:`~.Mutable.as_mutable` returns an instance of ``JSONEncodedDict``
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
+
class Base(DeclarativeBase):
pass
+
class MyDataClass(Base):
- __tablename__ = 'my_data'
+ __tablename__ = "my_data"
id: Mapped[int] = mapped_column(primary_key=True)
- data: Mapped[dict[str, str]] = mapped_column(MutableDict.as_mutable(JSONEncodedDict))
+ data: Mapped[dict[str, str]] = mapped_column(
+ MutableDict.as_mutable(JSONEncodedDict)
+ )
The ``MyDataClass.data`` member will now be notified of in place changes
to its value.
>>> from sqlalchemy.orm import Session
>>> sess = Session(some_engine)
- >>> m1 = MyDataClass(data={'value1':'foo'})
+ >>> m1 = MyDataClass(data={"value1": "foo"})
>>> sess.add(m1)
>>> sess.commit()
- >>> m1.data['value1'] = 'bar'
+ >>> m1.data["value1"] = "bar"
>>> assert m1 in sess.dirty
True
MutableDict.associate_with(JSONEncodedDict)
+
class Base(DeclarativeBase):
pass
+
class MyDataClass(Base):
- __tablename__ = 'my_data'
+ __tablename__ = "my_data"
id: Mapped[int] = mapped_column(primary_key=True)
data: Mapped[dict[str, str]] = mapped_column(JSONEncodedDict)
-
Supporting Pickling
--------------------
class MyMutableType(Mutable):
def __getstate__(self):
d = self.__dict__.copy()
- d.pop('_parents', None)
+ d.pop("_parents", None)
return d
With our dictionary example, we need to return the contents of the dict itself
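Applied to the dictionary example, that recipe looks like the sketch below (the class name ``MyMutableDict`` is illustrative): ``__getstate__`` returns only the plain dict contents, leaving out the ``_parents`` collection, and ``__setstate__`` restores them:

```python
import pickle

from sqlalchemy.ext.mutable import Mutable


class MyMutableDict(Mutable, dict):
    """Dictionary recipe: pickle only the dict contents."""

    def __getstate__(self):
        # omit the _parents weak-reference collection
        return dict(self)

    def __setstate__(self, state):
        self.update(state)


original = MyMutableDict({"value1": "foo"})
restored = pickle.loads(pickle.dumps(original))
```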
from sqlalchemy.orm import mapped_column
from sqlalchemy import event
+
class Base(DeclarativeBase):
pass
+
class MyDataClass(Base):
- __tablename__ = 'my_data'
+ __tablename__ = "my_data"
id: Mapped[int] = mapped_column(primary_key=True)
- data: Mapped[dict[str, str]] = mapped_column(MutableDict.as_mutable(JSONEncodedDict))
+ data: Mapped[dict[str, str]] = mapped_column(
+ MutableDict.as_mutable(JSONEncodedDict)
+ )
+
@event.listens_for(MyDataClass.data, "modified")
def modified_json(instance, initiator):
import dataclasses
from sqlalchemy.ext.mutable import MutableComposite
+
@dataclasses.dataclass
class Point(MutableComposite):
x: int
# alert all parents to the change
self.changed()
-
The :class:`.MutableComposite` class makes use of class mapping events to
automatically establish listeners for any usage of :func:`_orm.composite` that
specifies our ``Point`` type. Below, when ``Point`` is mapped to the ``Vertex``
from sqlalchemy.orm import DeclarativeBase, Mapped
from sqlalchemy.orm import composite, mapped_column
+
class Base(DeclarativeBase):
pass
id: Mapped[int] = mapped_column(primary_key=True)
- start: Mapped[Point] = composite(mapped_column("x1"), mapped_column("y1"))
- end: Mapped[Point] = composite(mapped_column("x2"), mapped_column("y2"))
+ start: Mapped[Point] = composite(
+ mapped_column("x1"), mapped_column("y1")
+ )
+ end: Mapped[Point] = composite(
+ mapped_column("x2"), mapped_column("y2")
+ )
def __repr__(self):
return f"Vertex(start={self.start}, end={self.end})"
The type is returned unconditionally as an instance, so that
:meth:`.as_mutable` can be used inline::
- Table('mytable', metadata,
- Column('id', Integer, primary_key=True),
- Column('data', MyMutableType.as_mutable(PickleType))
+ Table(
+ "mytable",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ Column("data", MyMutableType.as_mutable(PickleType)),
)
Note that the returned type is always an instance, even if a class
To one that describes the final Python behavior to Mypy::
+ ... format: off
+
class User(Base):
# ...
attrname : Mapped[Optional[int]] = <meaningless temp node>
+ ... format: on
+
"""
left_node = lvalue.node
assert isinstance(left_node, Var)
class MyClass:
# ...
- a : Mapped[int]
+ a: Mapped[int]
- b : Mapped[str]
+ b: Mapped[str]
c: Mapped[int]
Base = declarative_base()
+
class Slide(Base):
- __tablename__ = 'slide'
+ __tablename__ = "slide"
id = Column(Integer, primary_key=True)
name = Column(String)
bullets = relationship("Bullet", order_by="Bullet.position")
+
class Bullet(Base):
- __tablename__ = 'bullet'
+ __tablename__ = "bullet"
id = Column(Integer, primary_key=True)
- slide_id = Column(Integer, ForeignKey('slide.id'))
+ slide_id = Column(Integer, ForeignKey("slide.id"))
position = Column(Integer)
text = Column(String)
Base = declarative_base()
+
class Slide(Base):
- __tablename__ = 'slide'
+ __tablename__ = "slide"
id = Column(Integer, primary_key=True)
name = Column(String)
- bullets = relationship("Bullet", order_by="Bullet.position",
- collection_class=ordering_list('position'))
+ bullets = relationship(
+ "Bullet",
+ order_by="Bullet.position",
+ collection_class=ordering_list("position"),
+ )
+
class Bullet(Base):
- __tablename__ = 'bullet'
+ __tablename__ = "bullet"
id = Column(Integer, primary_key=True)
- slide_id = Column(Integer, ForeignKey('slide.id'))
+ slide_id = Column(Integer, ForeignKey("slide.id"))
position = Column(Integer)
text = Column(String)
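With ``collection_class=ordering_list("position")`` configured as above, the ``position`` column is maintained automatically as the collection is manipulated; a minimal runnable sketch (bullet texts are illustrative):

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.orderinglist import ordering_list
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class Slide(Base):
    __tablename__ = "slide"

    id = Column(Integer, primary_key=True)
    name = Column(String)
    bullets = relationship(
        "Bullet",
        order_by="Bullet.position",
        collection_class=ordering_list("position"),
    )


class Bullet(Base):
    __tablename__ = "bullet"

    id = Column(Integer, primary_key=True)
    slide_id = Column(Integer, ForeignKey("slide.id"))
    position = Column(Integer)
    text = Column(String)


s = Slide(name="intro")
s.bullets.append(Bullet(text="first"))
s.bullets.append(Bullet(text="second"))
# inserting renumbers every bullet that follows
s.bullets.insert(1, Bullet(text="middle"))
```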
from sqlalchemy.ext.orderinglist import ordering_list
+
class Slide(Base):
- __tablename__ = 'slide'
+ __tablename__ = "slide"
id = Column(Integer, primary_key=True)
name = Column(String)
- bullets = relationship("Bullet", order_by="Bullet.position",
- collection_class=ordering_list('position'))
+ bullets = relationship(
+ "Bullet",
+ order_by="Bullet.position",
+ collection_class=ordering_list("position"),
+ )
:param attr:
Name of the mapped attribute to use for storage and retrieval of
Usage is nearly the same as that of the standard Python pickle module::
from sqlalchemy.ext.serializer import loads, dumps
+
metadata = MetaData(bind=some_engine)
Session = scoped_session(sessionmaker())
# ... define mappers
- query = Session.query(MyClass).
- filter(MyClass.somedata=='foo').order_by(MyClass.sortkey)
+ query = (
+ Session.query(MyClass)
+ .filter(MyClass.somedata == "foo")
+ .order_by(MyClass.sortkey)
+ )
# pickle the query
serialized = dumps(query)
# unpickle. Pass in metadata + scoped_session
query2 = loads(serialized, metadata, Session)
- print query2.all()
+ print(query2.all())
Similar restrictions as when using raw pickle apply; mapped classes must
themselves be pickleable, meaning they are importable from a module-level
stmt = select(User).options(
selectinload(User.addresses),
- with_loader_criteria(Address, Address.email_address != 'foo'))
+ with_loader_criteria(Address, Address.email_address != "foo"),
)
Above, the "selectinload" for ``User.addresses`` will apply the
ON clause of the join, in this example using :term:`1.x style`
queries::
- q = session.query(User).outerjoin(User.addresses).options(
- with_loader_criteria(Address, Address.email_address != 'foo'))
+ q = (
+ session.query(User)
+ .outerjoin(User.addresses)
+ .options(with_loader_criteria(Address, Address.email_address != "foo"))
)
The primary purpose of :func:`_orm.with_loader_criteria` is to use
session = Session(bind=engine)
+
@event.listens_for(session, "do_orm_execute")
def _add_filtering_criteria(execute_state):
execute_state.statement = execute_state.statement.options(
with_loader_criteria(
SecurityRole,
- lambda cls: cls.role.in_(['some_role']),
- include_aliases=True
+ lambda cls: cls.role.in_(["some_role"]),
+ include_aliases=True,
)
)
``A -> A.bs -> B``, the given :func:`_orm.with_loader_criteria`
option will affect the way in which the JOIN is rendered::
- stmt = select(A).join(A.bs).options(
- contains_eager(A.bs),
- with_loader_criteria(B, B.flag == 1)
+ stmt = (
+ select(A)
+ .join(A.bs)
+ .options(contains_eager(A.bs), with_loader_criteria(B, B.flag == 1))
)
Above, the given :func:`_orm.with_loader_criteria` option will
affect the ON clause of the JOIN that is specified by
``.join(A.bs)``, so is applied as expected. The
:func:`_orm.contains_eager` option has the effect that columns from
- ``B`` are added to the columns clause::
+ ``B`` are added to the columns clause:
+
+ .. sourcecode:: sql
SELECT
b.id, b.a_id, b.data, b.flag,
.. versionadded:: 1.4.0b2
- """
+ """ # noqa: E501
return LoaderCriteriaOption(
entity_or_base,
where_criteria,
e.g.::
class MyClass(Base):
- __tablename__ = 'my_table'
+ __tablename__ = "my_table"
id = Column(Integer, primary_key=True)
job_status = Column(String(50))
status = synonym("job_status")
-
:param name: the name of the existing mapped property. This
can refer to the string name ORM-mapped attribute
configured on the class, including column-bound attributes
:paramref:`.synonym.descriptor` parameter::
my_table = Table(
- "my_table", metadata,
- Column('id', Integer, primary_key=True),
- Column('job_status', String(50))
+ "my_table",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ Column("job_status", String(50)),
)
+
class MyClass:
@property
def _job_status_descriptor(self):
mapper(
- MyClass, my_table, properties={
+ MyClass,
+ my_table,
+ properties={
"job_status": synonym(
- "_job_status", map_column=True,
- descriptor=MyClass._job_status_descriptor)
- }
+ "_job_status",
+ map_column=True,
+ descriptor=MyClass._job_status_descriptor,
+ )
+ },
)
Above, the attribute named ``_job_status`` is automatically
E.g.::
- 'items':relationship(
- SomeItem, backref=backref('parent', lazy='subquery'))
+ "items": relationship(SomeItem, backref=backref("parent", lazy="subquery"))
The :paramref:`_orm.relationship.backref` parameter is generally
considered to be legacy; for modern applications, using
:ref:`relationships_backref` - background on backrefs
- """
+ """ # noqa: E501
return (name, kwargs)
aggregate functions::
class UnitPrice(Base):
- __tablename__ = 'unit_price'
+ __tablename__ = "unit_price"
...
unit_id = Column(Integer)
price = Column(Numeric)
- aggregated_unit_price = Session.query(
- func.sum(UnitPrice.price).label('price')
- ).group_by(UnitPrice.unit_id).subquery()
- aggregated_unit_price = aliased(UnitPrice,
- alias=aggregated_unit_price, adapt_on_names=True)
+ aggregated_unit_price = (
+ Session.query(func.sum(UnitPrice.price).label("price"))
+ .group_by(UnitPrice.unit_id)
+ .subquery()
+ )
+
+ aggregated_unit_price = aliased(
+ UnitPrice, alias=aggregated_unit_price, adapt_on_names=True
+ )
Above, functions on ``aggregated_unit_price`` which refer to
``.price`` will return the
:meth:`_sql.Select.select_from` method, as in::
from sqlalchemy.orm import join
- stmt = select(User).\
- select_from(join(User, Address, User.addresses)).\
- filter(Address.email_address=='foo@bar.com')
+
+ stmt = (
+ select(User)
+ .select_from(join(User, Address, User.addresses))
+ .filter(Address.email_address == "foo@bar.com")
+ )
In modern SQLAlchemy the above join can be written more
succinctly as::
- stmt = select(User).\
- join(User.addresses).\
- filter(Address.email_address=='foo@bar.com')
+ stmt = (
+ select(User)
+ .join(User.addresses)
+ .filter(Address.email_address == "foo@bar.com")
+ )
.. warning:: using :func:`_orm.join` directly may not work properly
with modern ORM options such as :func:`_orm.with_loader_criteria`.
This function is used to provide direct access to collection internals
for a previously unloaded attribute. e.g.::
- collection_adapter = init_collection(someobject, 'elements')
+ collection_adapter = init_collection(someobject, "elements")
for elem in values:
collection_adapter.append_without_event(elem)
and return values to events::
from sqlalchemy.orm.collections import collection
+
+
class MyClass:
# ...
def pop(self):
return self.data.pop()
-
The second approach is a bundle of targeted decorators that wrap appropriate
append and remove notifiers around the mutation methods present in the
standard Python ``list``, ``set`` and ``dict`` interfaces. These could be
method that's already instrumented. For example::
class QueueIsh(list):
- def push(self, item):
- self.append(item)
- def shift(self):
- return self.pop(0)
+ def push(self, item):
+ self.append(item)
+
+ def shift(self):
+ return self.pop(0)
There's no need to decorate these methods. ``append`` and ``pop`` are already
instrumented as part of the ``list`` interface. Decorating them would fire
The recipe decorators all require parens, even those that take no
arguments::
- @collection.adds('entity')
+ @collection.adds("entity")
def insert(self, position, entity): ...
+
@collection.removes_return()
def popitem(self): ...
@collection.appender
def add(self, append): ...
+
# or, equivalently
@collection.appender
@collection.adds(1)
def add(self, append): ...
+
# for mapping type, an 'append' may kick out a previous value
# that occupies that slot. consider d['a'] = 'foo'- any previous
# value in d['a'] is discarded.
@collection.remover
def zap(self, entity): ...
+
# or, equivalently
@collection.remover
@collection.removes_return()
- def zap(self, ): ...
+ def zap(self): ...
If the value to remove is not present in the collection, you may
raise an exception or return None to ignore the error.
@collection.adds(1)
def push(self, item): ...
- @collection.adds('entity')
+
+ @collection.adds("entity")
def do_stuff(self, thing, entity=None): ...
"""
:paramref:`.orm.synonym.descriptor` parameter::
class MyClass(Base):
- __tablename__ = 'my_table'
+ __tablename__ = "my_table"
id = Column(Integer, primary_key=True)
_job_status = Column("job_status", String(50))
for subclasses::
class Employee(Base):
- __tablename__ = 'employee'
+ __tablename__ = "employee"
id: Mapped[int] = mapped_column(primary_key=True)
type: Mapped[str] = mapped_column(String(50))
@declared_attr.directive
def __mapper_args__(cls) -> Dict[str, Any]:
- if cls.__name__ == 'Employee':
+ if cls.__name__ == "Employee":
return {
- "polymorphic_on":cls.type,
- "polymorphic_identity":"Employee"
+ "polymorphic_on": cls.type,
+ "polymorphic_identity": "Employee",
}
else:
- return {"polymorphic_identity":cls.__name__}
+ return {"polymorphic_identity": cls.__name__}
+
class Engineer(Employee):
pass
from sqlalchemy.orm import declared_attr
from sqlalchemy.orm import declarative_mixin
+
@declarative_mixin
class MyMixin:
def __tablename__(cls):
return cls.__name__.lower()
- __table_args__ = {'mysql_engine': 'InnoDB'}
- __mapper_args__= {'always_refresh': True}
+ __table_args__ = {"mysql_engine": "InnoDB"}
+ __mapper_args__ = {"always_refresh": True}
+
- id = Column(Integer, primary_key=True)
+ id = Column(Integer, primary_key=True)
class MyModel(MyMixin, Base):
name = Column(String(1000))
from sqlalchemy.orm import DeclarativeBase
+
class Base(DeclarativeBase):
pass
-
The above ``Base`` class is now usable as the base for new declarative
mappings. The superclass makes use of the ``__init_subclass__()``
method to set up new classes; metaclasses are not used.
bigint = Annotated[int, "bigint"]
my_metadata = MetaData()
+
class Base(DeclarativeBase):
metadata = my_metadata
type_annotation_map = {
str: String().with_variant(String(255), "mysql", "mariadb"),
- bigint: BigInteger()
+ bigint: BigInteger(),
}
Class-level attributes which may be specified include:
Base = mapper_registry.generate_base()
+
class MyClass(Base):
__tablename__ = "my_table"
id = Column(Integer, primary_key=True)
mapper_registry = registry()
+
class Base(metaclass=DeclarativeMeta):
__abstract__ = True
registry = mapper_registry
mapper_registry = registry()
+
@mapper_registry.mapped
class Foo:
- __tablename__ = 'some_table'
+ __tablename__ = "some_table"
id = Column(Integer, primary_key=True)
name = Column(String)
mapper_registry = registry()
+
@mapper_registry.as_declarative_base()
class Base:
@declared_attr
def __tablename__(cls):
return cls.__name__.lower()
+
id = Column(Integer, primary_key=True)
- class MyMappedClass(Base):
- # ...
+
+ class MyMappedClass(Base): ...
All keyword arguments passed to
:meth:`_orm.registry.as_declarative_base` are passed
mapper_registry = registry()
+
class Foo:
- __tablename__ = 'some_table'
+ __tablename__ = "some_table"
id = Column(Integer, primary_key=True)
name = Column(String)
+
mapper = mapper_registry.map_declaratively(Foo)
This function is more conveniently invoked indirectly via either the
my_table = Table(
"my_table",
mapper_registry.metadata,
- Column('id', Integer, primary_key=True)
+ Column("id", Integer, primary_key=True),
)
+
class MyClass:
pass
+
mapper_registry.map_imperatively(MyClass, my_table)
See the section :ref:`orm_imperative_mapping` for complete background
from sqlalchemy.orm import as_declarative
+
@as_declarative()
class Base:
@declared_attr
def __tablename__(cls):
return cls.__name__.lower()
+
id = Column(Integer, primary_key=True)
- class MyMappedClass(Base):
- # ...
+
+ class MyMappedClass(Base): ...
.. seealso::
from sqlalchemy import event
+
def my_load_listener(target, context):
print("on load!")
- event.listen(SomeClass, 'load', my_load_listener)
+
+ event.listen(SomeClass, "load", my_load_listener)
Available targets include:
the existing loading context is maintained for the object after the
event is called::
- @event.listens_for(
- SomeClass, "load", restore_load_context=True)
+ @event.listens_for(SomeClass, "load", restore_load_context=True)
def on_load(instance, context):
instance.some_unloaded_attribute
:meth:`.SessionEvents.loaded_as_persistent`
- """
+ """ # noqa: E501
def refresh(
self, target: _O, context: QueryContext, attrs: Optional[Iterable[str]]
from sqlalchemy import event
+
def my_before_insert_listener(mapper, connection, target):
# execute a stored procedure upon INSERT,
# apply the value to the row to be inserted
text("select my_special_function(%d)" % target.special_number)
).scalar()
+
# associate the listener function with SomeClass,
# to execute during the "before_insert" hook
- event.listen(
- SomeClass, 'before_insert', my_before_insert_listener)
+ event.listen(SomeClass, "before_insert", my_before_insert_listener)
Available targets include:
Base = declarative_base()
+
@event.listens_for(Base, "instrument_class", propagate=True)
def on_new_class(mapper, cls_):
- " ... "
+ "..."
:param mapper: the :class:`_orm.Mapper` which is the target
of this event.
DontConfigureBase = declarative_base()
+
@event.listens_for(
DontConfigureBase,
- "before_mapper_configured", retval=True, propagate=True)
+ "before_mapper_configured",
+ retval=True,
+ propagate=True,
+ )
def dont_configure(mapper, cls):
return EXT_SKIP
-
.. seealso::
:meth:`.MapperEvents.before_configured`
from sqlalchemy.orm import Mapper
+
@event.listens_for(Mapper, "before_configured")
- def go():
- ...
+ def go(): ...
Contrast this event to :meth:`.MapperEvents.after_configured`,
which is invoked after the series of mappers has been configured,
from sqlalchemy.orm import mapper
- @event.listens_for(mapper, "before_configured", once=True)
- def go():
- ...
+ @event.listens_for(mapper, "before_configured", once=True)
+ def go(): ...
.. seealso::
from sqlalchemy.orm import Mapper
+
@event.listens_for(Mapper, "after_configured")
- def go():
- # ...
+ def go(): ...
Theoretically this event is called once per
application, but is actually called any time new mappers
from sqlalchemy.orm import mapper
+
@event.listens_for(mapper, "after_configured", once=True)
- def go():
- # ...
+ def go(): ...
.. seealso::
from sqlalchemy import event
from sqlalchemy.orm import sessionmaker
+
def my_before_commit(session):
print("before commit!")
+
Session = sessionmaker()
event.listen(Session, "before_commit", my_before_commit)
@event.listens_for(session, "after_transaction_create")
def after_transaction_create(session, transaction):
if transaction.parent is None:
- # work with top-level transaction
+ ... # work with top-level transaction
To detect if the :class:`.SessionTransaction` is a SAVEPOINT, use the
:attr:`.SessionTransaction.nested` attribute::
@event.listens_for(session, "after_transaction_create")
def after_transaction_create(session, transaction):
if transaction.nested:
- # work with SAVEPOINT transaction
-
+ ... # work with SAVEPOINT transaction
.. seealso::
@event.listens_for(session, "after_transaction_create")
def after_transaction_end(session, transaction):
if transaction.parent is None:
- # work with top-level transaction
+ ... # work with top-level transaction
To detect if the :class:`.SessionTransaction` is a SAVEPOINT, use the
:attr:`.SessionTransaction.nested` attribute::
@event.listens_for(session, "after_transaction_create")
def after_transaction_end(session, transaction):
if transaction.nested:
- # work with SAVEPOINT transaction
-
+ ... # work with SAVEPOINT transaction
.. seealso::
from sqlalchemy import event
- @event.listens_for(MyClass.collection, 'append', propagate=True)
+
+ @event.listens_for(MyClass.collection, "append", propagate=True)
def my_append_listener(target, value, initiator):
print("received append event for target: %s" % target)
-
Listeners have the option to return a possibly modified version of the
value, when the :paramref:`.AttributeEvents.retval` flag is passed to
:func:`.event.listen` or :func:`.event.listens_for`, such as below,
def validate_phone(target, value, oldvalue, initiator):
"Strip non-numeric characters from a phone number"
- return re.sub(r'\D', '', value)
+ return re.sub(r"\D", "", value)
+
# setup listener on UserContact.phone attribute, instructing
# it to use the return value
- listen(UserContact.phone, 'set', validate_phone, retval=True)
+ listen(UserContact.phone, "set", validate_phone, retval=True)
A validation function like the above can also raise an exception
such as :exc:`ValueError` to halt the operation.
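A runnable sketch of the ``retval=True`` validator described above; the minimal ``UserContact`` mapping is an illustrative assumption, not part of the patch:

```python
import re

from sqlalchemy import Column, Integer, String, event
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class UserContact(Base):
    __tablename__ = "user_contact"
    id = Column(Integer, primary_key=True)
    phone = Column(String)


def validate_phone(target, value, oldvalue, initiator):
    "Strip non-numeric characters from a phone number"
    return re.sub(r"\D", "", value)


# retval=True instructs the event machinery to store the returned value
event.listen(UserContact.phone, "set", validate_phone, retval=True)

contact = UserContact()
contact.phone = "(555) 123-4567"
```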
as when using mapper inheritance patterns::
- @event.listens_for(MySuperClass.attr, 'set', propagate=True)
+ @event.listens_for(MySuperClass.attr, "set", propagate=True)
def receive_set(target, value, initiator):
print("value set: %s" % target)
from sqlalchemy.orm.attributes import OP_BULK_REPLACE
+
@event.listens_for(SomeObject.collection, "bulk_replace")
def process_collection(target, values, initiator):
values[:] = [_make_value(value) for value in values]
+
@event.listens_for(SomeObject.collection, "append", retval=True)
def process_collection(target, value, initiator):
# make sure bulk_replace didn't already do it
SOME_CONSTANT = 3.1415926
+
class MyClass(Base):
# ...
some_attribute = Column(Numeric, default=SOME_CONSTANT)
+
@event.listens_for(
- MyClass.some_attribute, "init_scalar",
- retval=True, propagate=True)
+ MyClass.some_attribute, "init_scalar", retval=True, propagate=True
+ )
def _init_some_attribute(target, dict_, value):
- dict_['some_attribute'] = SOME_CONSTANT
+ dict_["some_attribute"] = SOME_CONSTANT
return SOME_CONSTANT
Above, we initialize the attribute ``MyClass.some_attribute`` to the
SOME_CONSTANT = 3.1415926
+
@event.listens_for(
- MyClass.some_attribute, "init_scalar",
- retval=True, propagate=True)
+ MyClass.some_attribute, "init_scalar", retval=True, propagate=True
+ )
def _init_some_attribute(target, dict_, value):
# will also fire off attribute set events
target.some_attribute = SOME_CONSTANT
:ref:`examples_instrumentation` - see the
``active_column_defaults.py`` example.
- """
+ """ # noqa: E501
def init_collection(
self,
@event.listens_for(Query, "before_compile", retval=True)
def no_deleted(query):
for desc in query.column_descriptions:
- if desc['type'] is User:
- entity = desc['entity']
+ if desc["type"] is User:
+ entity = desc["entity"]
query = query.filter(entity.deleted == False)
return query
re-establish the query being cached, apply the event adding the
``bake_ok`` flag::
- @event.listens_for(
- Query, "before_compile", retval=True, bake_ok=True)
+ @event.listens_for(Query, "before_compile", retval=True, bake_ok=True)
def my_event(query):
for desc in query.column_descriptions:
- if desc['type'] is User:
- entity = desc['entity']
+ if desc["type"] is User:
+ entity = desc["entity"]
query = query.filter(entity.deleted == False)
return query
:ref:`baked_with_before_compile`
- """
+ """ # noqa: E501
def before_compile_update(
self, query: Query[Any], update_context: BulkUpdate
@event.listens_for(Query, "before_compile_update", retval=True)
def no_deleted(query, update_context):
for desc in query.column_descriptions:
- if desc['type'] is User:
- entity = desc['entity']
+ if desc["type"] is User:
+ entity = desc["entity"]
query = query.filter(entity.deleted == False)
- update_context.values['timestamp'] = (
- datetime.datetime.now(datetime.UTC)
+ update_context.values["timestamp"] = datetime.datetime.now(
+ datetime.UTC
)
return query
:meth:`.QueryEvents.before_compile_delete`
- """
+ """ # noqa: E501
def before_compile_delete(
self, query: Query[Any], delete_context: BulkDelete
@event.listens_for(Query, "before_compile_delete", retval=True)
def no_deleted(query, delete_context):
for desc in query.column_descriptions:
- if desc['type'] is User:
- entity = desc['entity']
+ if desc["type"] is User:
+ entity = desc["entity"]
query = query.filter(entity.deleted == False)
return query
# definition of custom PropComparator subclasses
- from sqlalchemy.orm.properties import \
- ColumnProperty,\
- Composite,\
- Relationship
+ from sqlalchemy.orm.properties import (
+ ColumnProperty,
+ Composite,
+ Relationship,
+ )
+
class MyColumnComparator(ColumnProperty.Comparator):
def __eq__(self, other):
return self.__clause_element__() == other
+
class MyRelationshipComparator(Relationship.Comparator):
def any(self, expression):
"define the 'any' operation"
# ...
+
class MyCompositeComparator(Composite.Comparator):
def __gt__(self, other):
"redefine the 'greater than' operation"
- return sql.and_(*[a>b for a, b in
- zip(self.__clause_element__().clauses,
- other.__composite_values__())])
+ return sql.and_(
+ *[
+ a > b
+ for a, b in zip(
+ self.__clause_element__().clauses,
+ other.__composite_values__(),
+ )
+ ]
+ )
# application of custom PropComparator subclasses
from sqlalchemy.orm import column_property, relationship, composite
from sqlalchemy import Column, String
+
class SomeMappedClass(Base):
- some_column = column_property(Column("some_column", String),
- comparator_factory=MyColumnComparator)
+ some_column = column_property(
+ Column("some_column", String),
+ comparator_factory=MyColumnComparator,
+ )
- some_relationship = relationship(SomeOtherClass,
- comparator_factory=MyRelationshipComparator)
+ some_relationship = relationship(
+ SomeOtherClass, comparator_factory=MyRelationshipComparator
+ )
some_composite = composite(
- Column("a", String), Column("b", String),
- comparator_factory=MyCompositeComparator
- )
+ Column("a", String),
+ Column("b", String),
+ comparator_factory=MyCompositeComparator,
+ )
Note that for column-level operator redefinition, it's usually
simpler to define the operators at the Core level, using the
e.g.::
- query.join(Company.employees.of_type(Engineer)).\
- filter(Engineer.name=='foo')
+ query.join(Company.employees.of_type(Engineer)).filter(
+ Engineer.name == "foo"
+ )
:param \class_: a class or mapper indicating that criterion will be
against this specific subclass.
stmt = select(User).join(
- User.addresses.and_(Address.email_address != 'foo')
+ User.addresses.and_(Address.email_address != "foo")
)
stmt = select(User).options(
- joinedload(User.addresses.and_(Address.email_address != 'foo'))
+ joinedload(User.addresses.and_(Address.email_address != "foo"))
)
.. versionadded:: 1.4
class User(Base):
__table__ = user_table
- __mapper_args__ = {'column_prefix':'_'}
+ __mapper_args__ = {"column_prefix": "_"}
The above mapping will assign the ``user_id``, ``user_name``, and
``password`` columns to attributes named ``_user_id``,
base-most mapped :class:`.Table`::
class Employee(Base):
- __tablename__ = 'employee'
+ __tablename__ = "employee"
id: Mapped[int] = mapped_column(primary_key=True)
discriminator: Mapped[str] = mapped_column(String(50))
__mapper_args__ = {
- "polymorphic_on":discriminator,
- "polymorphic_identity":"employee"
+ "polymorphic_on": discriminator,
+ "polymorphic_identity": "employee",
}
It may also be specified
approach::
class Employee(Base):
- __tablename__ = 'employee'
+ __tablename__ = "employee"
id: Mapped[int] = mapped_column(primary_key=True)
discriminator: Mapped[str] = mapped_column(String(50))
__mapper_args__ = {
- "polymorphic_on":case(
+ "polymorphic_on": case(
(discriminator == "EN", "engineer"),
(discriminator == "MA", "manager"),
- else_="employee"),
- "polymorphic_identity":"employee"
+ else_="employee",
+ ),
+ "polymorphic_identity": "employee",
}
It may also refer to any attribute using its string name,
configurations::
class Employee(Base):
- __tablename__ = 'employee'
+ __tablename__ = "employee"
id: Mapped[int] = mapped_column(primary_key=True)
discriminator: Mapped[str]
__mapper_args__ = {
"polymorphic_on": "discriminator",
- "polymorphic_identity": "employee"
+ "polymorphic_identity": "employee",
}
When setting ``polymorphic_on`` to reference an
from sqlalchemy import event
from sqlalchemy.orm import object_mapper
+
@event.listens_for(Employee, "init", propagate=True)
def set_identity(instance, *arg, **kw):
mapper = object_mapper(instance)
The resulting structure is a dictionary of columns mapped
to lists of equivalent columns, e.g.::
- {
- tablea.col1:
- {tableb.col1, tablec.col1},
- tablea.col2:
- {tabled.col2}
- }
+ {tablea.col1: {tableb.col1, tablec.col1}, tablea.col2: {tabled.col2}}
- """
+ """ # noqa: E501
result: _EquivalentColumnMap = {}
def visit_binary(binary):
given::
- class A:
- ...
+ class A: ...
+
class B(A):
__mapper_args__ = {"polymorphic_load": "selectin"}
- class C(B):
- ...
+
+ class C(B): ...
+
class D(B):
__mapper_args__ = {"polymorphic_load": "selectin"}
name = Column(String(64))
extension = Column(String(8))
- filename = column_property(name + '.' + extension)
- path = column_property('C:/' + filename.expression)
+ filename = column_property(name + "." + extension)
+ path = column_property("C:/" + filename.expression)
.. seealso::
from sqlalchemy.orm import aliased
+
class Part(Base):
- __tablename__ = 'part'
+ __tablename__ = "part"
part = Column(String, primary_key=True)
sub_part = Column(String, primary_key=True)
quantity = Column(Integer)
- included_parts = session.query(
- Part.sub_part,
- Part.part,
- Part.quantity).\
- filter(Part.part=="our part").\
- cte(name="included_parts", recursive=True)
+
+ included_parts = (
+ session.query(Part.sub_part, Part.part, Part.quantity)
+ .filter(Part.part == "our part")
+ .cte(name="included_parts", recursive=True)
+ )
incl_alias = aliased(included_parts, name="pr")
parts_alias = aliased(Part, name="p")
included_parts = included_parts.union_all(
session.query(
- parts_alias.sub_part,
- parts_alias.part,
- parts_alias.quantity).\
- filter(parts_alias.part==incl_alias.c.sub_part)
- )
+ parts_alias.sub_part, parts_alias.part, parts_alias.quantity
+ ).filter(parts_alias.part == incl_alias.c.sub_part)
+ )
q = session.query(
- included_parts.c.sub_part,
- func.sum(included_parts.c.quantity).
- label('total_quantity')
- ).\
- group_by(included_parts.c.sub_part)
+ included_parts.c.sub_part,
+ func.sum(included_parts.c.quantity).label("total_quantity"),
+ ).group_by(included_parts.c.sub_part)
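The recursive CTE example in this hunk runs as-is once a ``Part`` mapping and some rows exist; the in-memory SQLite engine and the two sample rows below are illustrative assumptions:

```python
from sqlalchemy import Column, Integer, String, create_engine, func
from sqlalchemy.orm import Session, aliased, declarative_base

Base = declarative_base()


class Part(Base):
    __tablename__ = "part"
    part = Column(String, primary_key=True)
    sub_part = Column(String, primary_key=True)
    quantity = Column(Integer)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all(
    [
        Part(part="our part", sub_part="wheel", quantity=4),
        Part(part="wheel", sub_part="tire", quantity=1),
    ]
)
session.commit()

# anchor portion of the recursive CTE
included_parts = (
    session.query(Part.sub_part, Part.part, Part.quantity)
    .filter(Part.part == "our part")
    .cte(name="included_parts", recursive=True)
)

incl_alias = aliased(included_parts, name="pr")
parts_alias = aliased(Part, name="p")
included_parts = included_parts.union_all(
    session.query(
        parts_alias.sub_part, parts_alias.part, parts_alias.quantity
    ).filter(parts_alias.part == incl_alias.c.sub_part)
)

q = session.query(
    included_parts.c.sub_part,
    func.sum(included_parts.c.quantity).label("total_quantity"),
).group_by(included_parts.c.sub_part)

totals = {row.sub_part: row.total_quantity for row in q}
```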
.. seealso::
:meth:`_sql.Select.cte` - v2 equivalent method.
- """
+ """ # noqa: E501
return (
self.enable_eagerloads(False)
._get_select_statement_only()
:attr:`_query.Query.statement` using :meth:`.Session.execute`::
result = session.execute(
- query
- .set_label_style(LABEL_STYLE_TABLENAME_PLUS_COL)
- .statement
+ query.set_label_style(LABEL_STYLE_TABLENAME_PLUS_COL).statement
)
.. versionadded:: 1.4
some_object = session.query(VersionedFoo).get((5, 10))
- some_object = session.query(VersionedFoo).get(
- {"id": 5, "version_id": 10})
+ some_object = session.query(VersionedFoo).get({"id": 5, "version_id": 10})
:meth:`_query.Query.get` is special in that it provides direct
access to the identity map of the owning :class:`.Session`.
:return: The object instance, or ``None``.
- """
+ """ # noqa: E501
self._no_criterion_assertion("get", order_by=False, distinct=False)
# we still implement _get_impl() so that baked query can override
# Users, filtered on some arbitrary criterion
# and then ordered by related email address
- q = session.query(User).\
- join(User.address).\
- filter(User.name.like('%ed%')).\
- order_by(Address.email)
+ q = (
+ session.query(User)
+ .join(User.address)
+ .filter(User.name.like("%ed%"))
+ .order_by(Address.email)
+ )
# given *only* User.id==5, Address.email, and 'q', what
# would the *next* User in the result be ?
- subq = q.with_entities(Address.email).\
- order_by(None).\
- filter(User.id==5).\
- subquery()
- q = q.join((subq, subq.c.email < Address.email)).\
- limit(1)
+ subq = (
+ q.with_entities(Address.email)
+ .order_by(None)
+ .filter(User.id == 5)
+ .subquery()
+ )
+ q = q.join((subq, subq.c.email < Address.email)).limit(1)
.. seealso::
def filter_something(criterion):
def transform(q):
return q.filter(criterion)
+
return transform
- q = q.with_transformation(filter_something(x==5))
+
+ q = q.with_transformation(filter_something(x == 5))
This allows ad-hoc recipes to be created for :class:`_query.Query`
objects.
E.g.::
- q = sess.query(User).populate_existing().with_for_update(nowait=True, of=User)
+ q = (
+ sess.query(User)
+ .populate_existing()
+ .with_for_update(nowait=True, of=User)
+ )
- The above query on a PostgreSQL backend will render like::
+ The above query on a PostgreSQL backend will render like:
+
+ .. sourcecode:: sql
SELECT users.id AS users_id FROM users FOR UPDATE OF users NOWAIT
e.g.::
- session.query(MyClass).filter(MyClass.name == 'some name')
+ session.query(MyClass).filter(MyClass.name == "some name")
Multiple criteria may be specified as comma separated; the effect
is that they will be joined together using the :func:`.and_`
function::
- session.query(MyClass).\
- filter(MyClass.name == 'some name', MyClass.id > 5)
+ session.query(MyClass).filter(MyClass.name == "some name", MyClass.id > 5)
The criterion is any SQL expression object applicable to the
WHERE clause of a select. String expressions are coerced
:meth:`_sql.Select.where` - v2 equivalent method.
- """
+ """ # noqa: E501
for crit in list(criterion):
crit = coercions.expect(
roles.WhereHavingRole, crit, apply_propagate_attrs=self
e.g.::
- session.query(MyClass).filter_by(name = 'some name')
+ session.query(MyClass).filter_by(name="some name")
Multiple criteria may be specified as comma separated; the effect
is that they will be joined together using the :func:`.and_`
function::
- session.query(MyClass).\
- filter_by(name = 'some name', id = 5)
+ session.query(MyClass).filter_by(name="some name", id=5)
The keyword expressions are extracted from the primary
entity of the query, or the last entity that was the
HAVING criterion makes it possible to use filters on aggregate
functions like COUNT, SUM, AVG, MAX, and MIN, e.g.::
- q = session.query(User.id).\
- join(User.addresses).\
- group_by(User.id).\
- having(func.count(Address.id) > 2)
+ q = (
+ session.query(User.id)
+ .join(User.addresses)
+ .group_by(User.id)
+ .having(func.count(Address.id) > 2)
+ )
.. seealso::
e.g.::
- q1 = sess.query(SomeClass).filter(SomeClass.foo=='bar')
- q2 = sess.query(SomeClass).filter(SomeClass.bar=='foo')
+ q1 = sess.query(SomeClass).filter(SomeClass.foo == "bar")
+ q2 = sess.query(SomeClass).filter(SomeClass.bar == "foo")
q3 = q1.union(q2)
x.union(y).union(z).all()
- will nest on each ``union()``, and produces::
+ will nest on each ``union()``, and produces:
+
+ .. sourcecode:: sql
SELECT * FROM (SELECT * FROM (SELECT * FROM X UNION
SELECT * FROM y) UNION SELECT * FROM Z)
x.union(y, z).all()
- produces::
+ produces:
+
+ .. sourcecode:: sql
SELECT * FROM (SELECT * FROM X UNION SELECT * FROM y UNION
SELECT * FROM Z)
q = session.query(User).join(User.addresses)
Where above, the call to :meth:`_query.Query.join` along
- ``User.addresses`` will result in SQL approximately equivalent to::
+ ``User.addresses`` will result in SQL approximately equivalent to:
+
+ .. sourcecode:: sql
SELECT user.id, user.name
FROM user JOIN address ON user.id = address.user_id
calls may be used. The relationship-bound attribute implies both
the left and right side of the join at once::
- q = session.query(User).\
- join(User.orders).\
- join(Order.items).\
- join(Item.keywords)
+ q = (
+ session.query(User)
+ .join(User.orders)
+ .join(Order.items)
+ .join(Item.keywords)
+ )
.. note:: as seen in the above example, **the order in which each
call to the join() method occurs is important**. Query would not,
as the ON clause to be passed explicitly. An example that includes
a SQL expression as the ON clause is as follows::
- q = session.query(User).join(Address, User.id==Address.user_id)
+ q = session.query(User).join(Address, User.id == Address.user_id)
The above form may also use a relationship-bound attribute as the
ON clause as well::
a1 = aliased(Address)
a2 = aliased(Address)
- q = session.query(User).\
- join(a1, User.addresses).\
- join(a2, User.addresses).\
- filter(a1.email_address=='ed@foo.com').\
- filter(a2.email_address=='ed@bar.com')
+ q = (
+ session.query(User)
+ .join(a1, User.addresses)
+ .join(a2, User.addresses)
+ .filter(a1.email_address == "ed@foo.com")
+ .filter(a2.email_address == "ed@bar.com")
+ )
The relationship-bound calling form can also specify a target entity
using the :meth:`_orm.PropComparator.of_type` method; a query
a1 = aliased(Address)
a2 = aliased(Address)
- q = session.query(User).\
- join(User.addresses.of_type(a1)).\
- join(User.addresses.of_type(a2)).\
- filter(a1.email_address == 'ed@foo.com').\
- filter(a2.email_address == 'ed@bar.com')
+ q = (
+ session.query(User)
+ .join(User.addresses.of_type(a1))
+ .join(User.addresses.of_type(a2))
+ .filter(a1.email_address == "ed@foo.com")
+ .filter(a2.email_address == "ed@bar.com")
+ )
**Augmenting Built-in ON Clauses**
with the default criteria using AND::
q = session.query(User).join(
- User.addresses.and_(Address.email_address != 'foo@bar.com')
+ User.addresses.and_(Address.email_address != "foo@bar.com")
)
.. versionadded:: 1.4
appropriate ``.subquery()`` method in order to make a subquery
out of a query::
- subq = session.query(Address).\
- filter(Address.email_address == 'ed@foo.com').\
- subquery()
+ subq = (
+ session.query(Address)
+ .filter(Address.email_address == "ed@foo.com")
+ .subquery()
+ )
- q = session.query(User).join(
- subq, User.id == subq.c.user_id
- )
+ q = session.query(User).join(subq, User.id == subq.c.user_id)
Joining to a subquery in terms of a specific relationship and/or
target entity may be achieved by linking the subquery to the
entity using :func:`_orm.aliased`::
- subq = session.query(Address).\
- filter(Address.email_address == 'ed@foo.com').\
- subquery()
+ subq = (
+ session.query(Address)
+ .filter(Address.email_address == "ed@foo.com")
+ .subquery()
+ )
address_subq = aliased(Address, subq)
- q = session.query(User).join(
- User.addresses.of_type(address_subq)
- )
-
+ q = session.query(User).join(User.addresses.of_type(address_subq))
**Controlling what to Join From**
:class:`_query.Query` is not in line with what we want to join from,
the :meth:`_query.Query.select_from` method may be used::
- q = session.query(Address).select_from(User).\
- join(User.addresses).\
- filter(User.name == 'ed')
+ q = (
+ session.query(Address)
+ .select_from(User)
+ .join(User.addresses)
+ .filter(User.name == "ed")
+ )
- Which will produce SQL similar to::
+ Which will produce SQL similar to:
+
+ .. sourcecode:: sql
SELECT address.* FROM user
JOIN address ON user.id=address.user_id
A typical example::
- q = session.query(Address).select_from(User).\
- join(User.addresses).\
- filter(User.name == 'ed')
+ q = (
+ session.query(Address)
+ .select_from(User)
+ .join(User.addresses)
+ .filter(User.name == "ed")
+ )
- Which produces SQL equivalent to::
+ Which produces SQL equivalent to:
+
+ .. sourcecode:: sql
SELECT address.* FROM user
JOIN address ON user.id=address.user_id
Format is a list of dictionaries::
- user_alias = aliased(User, name='user2')
+ user_alias = aliased(User, name="user2")
q = sess.query(User, User.id, user_alias)
# this expression:
# would return:
[
{
- 'name':'User',
- 'type':User,
- 'aliased':False,
- 'expr':User,
- 'entity': User
+ "name": "User",
+ "type": User,
+ "aliased": False,
+ "expr": User,
+ "entity": User,
},
{
- 'name':'id',
- 'type':Integer(),
- 'aliased':False,
- 'expr':User.id,
- 'entity': User
+ "name": "id",
+ "type": Integer(),
+ "aliased": False,
+ "expr": User.id,
+ "entity": User,
},
{
- 'name':'user2',
- 'type':User,
- 'aliased':True,
- 'expr':user_alias,
- 'entity': user_alias
- }
+ "name": "user2",
+ "type": User,
+ "aliased": True,
+ "expr": user_alias,
+ "entity": user_alias,
+ },
]
.. seealso::
e.g.::
- q = session.query(User).filter(User.name == 'fred')
+ q = session.query(User).filter(User.name == "fred")
session.query(q.exists())
- Producing SQL similar to::
+ Producing SQL similar to:
+
+ .. sourcecode:: sql
SELECT EXISTS (
SELECT 1 FROM users WHERE users.name = :name_1
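A minimal runnable sketch of ``Query.exists()``, assuming a hypothetical ``User`` mapping and an in-memory SQLite database:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(User(name="fred"))
session.commit()

# wrap the criteria query in EXISTS and select the boolean result
q = session.query(User).filter(User.name == "fred")
fred_exists = session.query(q.exists()).scalar()
```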
r"""Return a count of rows that the SQL formed by this :class:`Query`
would return.
- This generates the SQL for this Query as follows::
+ This generates the SQL for this Query as follows:
+
+ .. sourcecode:: sql
SELECT count(1) AS count_1 FROM (
SELECT <rest of query follows...>
# return count of user "id" grouped
# by "name"
- session.query(func.count(User.id)).\
- group_by(User.name)
+ session.query(func.count(User.id)).group_by(User.name)
from sqlalchemy import distinct
E.g.::
- sess.query(User).filter(User.age == 25).\
- delete(synchronize_session=False)
+ sess.query(User).filter(User.age == 25).delete(synchronize_session=False)
- sess.query(User).filter(User.age == 25).\
- delete(synchronize_session='evaluate')
+ sess.query(User).filter(User.age == 25).delete(
+ synchronize_session="evaluate"
+ )
.. warning::
:ref:`orm_expression_update_delete`
- """
+ """ # noqa: E501
bulk_del = BulkDelete(self)
if self.dispatch.before_compile_delete:
E.g.::
- sess.query(User).filter(User.age == 25).\
- update({User.age: User.age - 10}, synchronize_session=False)
+ sess.query(User).filter(User.age == 25).update(
+ {User.age: User.age - 10}, synchronize_session=False
+ )
- sess.query(User).filter(User.age == 25).\
- update({"age": User.age - 10}, synchronize_session='evaluate')
+ sess.query(User).filter(User.age == 25).update(
+ {"age": User.age - 10}, synchronize_session="evaluate"
+ )
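A sketch of the bulk UPDATE with ``synchronize_session="evaluate"``; the ``User.age`` mapping and the two sample rows are illustrative assumptions:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    age = Column(Integer)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([User(age=25), User(age=40)])
session.commit()

# "evaluate" also applies the change to matching objects in the Session
session.query(User).filter(User.age == 25).update(
    {User.age: User.age - 10}, synchronize_session="evaluate"
)
ages = sorted(a for (a,) in session.query(User.age))
```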
.. warning::
def __eq__(self, other: Any) -> ColumnElement[bool]: # type: ignore[override] # noqa: E501
"""Implement the ``==`` operator.
- In a many-to-one context, such as::
+ In a many-to-one context, such as:
+
+ .. sourcecode:: text
MyClass.some_prop == <some object>
this will typically produce a
- clause such as::
+ clause such as:
+
+ .. sourcecode:: text
mytable.related_id == <some id>
An expression like::
session.query(MyClass).filter(
- MyClass.somereference.any(SomeRelated.x==2)
+ MyClass.somereference.any(SomeRelated.x == 2)
)
- Will produce a query like::
+ Will produce a query like:
+
+ .. sourcecode:: sql
SELECT * FROM my_table WHERE
EXISTS (SELECT 1 FROM related WHERE related.my_id=my_table.id
:meth:`~.Relationship.Comparator.any` is particularly
useful for testing for empty collections::
- session.query(MyClass).filter(
- ~MyClass.somereference.any()
- )
+ session.query(MyClass).filter(~MyClass.somereference.any())
- will produce::
+ will produce:
+
+ .. sourcecode:: sql
SELECT * FROM my_table WHERE
NOT (EXISTS (SELECT 1 FROM related WHERE
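Testing for empty collections with ``~...any()`` can be sketched end-to-end; the mapping and rows below are assumptions chosen to mirror the docstring names:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class MyClass(Base):
    __tablename__ = "my_table"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    somereference = relationship("SomeRelated")


class SomeRelated(Base):
    __tablename__ = "related"
    id = Column(Integer, primary_key=True)
    my_id = Column(ForeignKey("my_table.id"))


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all(
    [
        MyClass(name="has_related", somereference=[SomeRelated()]),
        MyClass(name="empty"),
    ]
)
session.commit()

# select only rows whose collection is empty
empties = session.query(MyClass).filter(~MyClass.somereference.any())
names = [obj.name for obj in empties]
```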
An expression like::
session.query(MyClass).filter(
- MyClass.somereference.has(SomeRelated.x==2)
+ MyClass.somereference.has(SomeRelated.x == 2)
)
- Will produce a query like::
+ Will produce a query like:
+
+ .. sourcecode:: sql
SELECT * FROM my_table WHERE
EXISTS (SELECT 1 FROM related WHERE
MyClass.contains(other)
- Produces a clause like::
+ Produces a clause like:
+
+ .. sourcecode:: sql
mytable.id == <some id>
query(MyClass).filter(MyClass.contains(other))
- Produces a query like::
+ Produces a query like:
+
+ .. sourcecode:: sql
SELECT * FROM my_table, my_association_table AS
my_association_table_1 WHERE
def __ne__(self, other: Any) -> ColumnElement[bool]: # type: ignore[override] # noqa: E501
"""Implement the ``!=`` operator.
- In a many-to-one context, such as::
+ In a many-to-one context, such as:
+
+ .. sourcecode:: text
MyClass.some_prop != <some object>
- This will typically produce a clause such as::
+ This will typically produce a clause such as:
+
+ .. sourcecode:: sql
mytable.related_id != <some id>
Session = scoped_session(sessionmaker())
+
class MyClass:
query: QueryPropertyDescriptor = Session.query_property()
+
# after mappers are defined
- result = MyClass.query.filter(MyClass.name=='foo').all()
+ result = MyClass.query.filter(MyClass.name == "foo").all()
Produces instances of the session's configured query class by
default. To override and use a custom implementation, provide
E.g.::
from sqlalchemy import select
- result = session.execute(
- select(User).where(User.id == 5)
- )
+
+ result = session.execute(select(User).where(User.id == 5))
The API contract of :meth:`_orm.Session.execute` is similar to that
of :meth:`_engine.Connection.execute`, the :term:`2.0 style` version
some_object = session.get(VersionedFoo, (5, 10))
- some_object = session.get(
- VersionedFoo,
- {"id": 5, "version_id": 10}
- )
+ some_object = session.get(VersionedFoo, {"id": 5, "version_id": 10})
.. versionadded:: 1.4 Added :meth:`_orm.Session.get`, which is moved
from the now legacy :meth:`_orm.Query.get` method.
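A minimal sketch of :meth:`_orm.Session.get` against a single-column primary key (composite keys use the tuple and dictionary forms shown in the hunk); the mapping is hypothetical:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(User(id=5, name="ed"))
session.commit()

some_object = session.get(User, 5)  # hits the identity map or database
missing = session.get(User, 99)  # returns None when no row exists
```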
operation. The complete heuristics for resolution are
described at :meth:`.Session.get_bind`. Usage looks like::
- Session = sessionmaker(binds={
- SomeMappedClass: create_engine('postgresql+psycopg2://engine1'),
- SomeDeclarativeBase: create_engine('postgresql+psycopg2://engine2'),
- some_mapper: create_engine('postgresql+psycopg2://engine3'),
- some_table: create_engine('postgresql+psycopg2://engine4'),
- })
+ Session = sessionmaker(
+ binds={
+ SomeMappedClass: create_engine("postgresql+psycopg2://engine1"),
+ SomeDeclarativeBase: create_engine(
+ "postgresql+psycopg2://engine2"
+ ),
+ some_mapper: create_engine("postgresql+psycopg2://engine3"),
+ some_table: create_engine("postgresql+psycopg2://engine4"),
+ }
+ )
.. seealso::
E.g.::
from sqlalchemy import select
- result = session.execute(
- select(User).where(User.id == 5)
- )
+
+ result = session.execute(select(User).where(User.id == 5))
The API contract of :meth:`_orm.Session.execute` is similar to that
of :meth:`_engine.Connection.execute`, the :term:`2.0 style` version
e.g.::
- obj = session._identity_lookup(inspect(SomeClass), (1, ))
+ obj = session._identity_lookup(inspect(SomeClass), (1,))
:param mapper: mapper in use
:param primary_key_identity: the primary key we are searching for, as
some_object = session.get(VersionedFoo, (5, 10))
- some_object = session.get(
- VersionedFoo,
- {"id": 5, "version_id": 10}
- )
+ some_object = session.get(VersionedFoo, {"id": 5, "version_id": 10})
.. versionadded:: 1.4 Added :meth:`_orm.Session.get`, which is moved
from the now legacy :meth:`_orm.Query.get` method.
:return: The object instance, or ``None``.
- """
+ """ # noqa: E501
return self._get_impl(
entity,
ident,
# an Engine, which the Session will use for connection
# resources
- engine = create_engine('postgresql+psycopg2://scott:tiger@localhost/')
+ engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/")
Session = sessionmaker(engine)
with engine.connect() as connection:
with Session(bind=connection) as session:
- # work with session
+ ... # work with session
The class also includes a method :meth:`_orm.sessionmaker.configure`, which
can be used to specify additional keyword arguments to the factory, which
# ... later, when an engine URL is read from a configuration
# file or other events allow the engine to be created
- engine = create_engine('sqlite:///foo.db')
+ engine = create_engine("sqlite:///foo.db")
Session.configure(bind=engine)
sess = Session()
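The deferred-configuration pattern above runs as a complete program; the SQLite URL stands in for the engine URL read later from configuration:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

# construct the factory before any engine exists
Session = sessionmaker()

# ... later, once the URL is known, bind the factory
engine = create_engine("sqlite://")
Session.configure(bind=engine)

sess = Session()
value = sess.execute(text("select 1")).scalar()
```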
Session = sessionmaker()
- Session.configure(bind=create_engine('sqlite://'))
+ Session.configure(bind=create_engine("sqlite://"))
"""
self.kw.update(new_kw)
The option is used in conjunction with an explicit join that loads
the desired rows, i.e.::
- sess.query(Order).join(Order.user).options(
- contains_eager(Order.user)
- )
+ sess.query(Order).join(Order.user).options(contains_eager(Order.user))
The above query would join from the ``Order`` entity to its related
``User`` entity, and the returned ``Order`` objects would have the
select(User).options(joinedload(User.orders))
# joined-load Order.items and then Item.keywords
- select(Order).options(
- joinedload(Order.items).joinedload(Item.keywords)
- )
+ select(Order).options(joinedload(Order.items).joinedload(Item.keywords))
# lazily load Order.items, but when Items are loaded,
# joined-load the keywords collection
- select(Order).options(
- lazyload(Order.items).joinedload(Item.keywords)
- )
+ select(Order).options(lazyload(Order.items).joinedload(Item.keywords))
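A runnable sketch of joined eager loading; the ``User``/``Order`` mapping is a minimal assumption. Closing the session afterwards shows the collection was loaded up front rather than lazily:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, joinedload, relationship

Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    orders = relationship("Order")


class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    user_id = Column(ForeignKey("users.id"))


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(User(name="ed", orders=[Order(), Order()]))
session.commit()
session.expunge_all()

user = (
    session.query(User)
    .options(joinedload(User.orders))
    .filter(User.name == "ed")
    .one()
)
session.close()  # collection is already loaded; no lazy load is needed
count = len(user.orders)
```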
:param innerjoin: if ``True``, indicates that the joined eager load
should use an inner join instead of the default of left outer join::
OUTER and others INNER, right-nested joins are used to link them::
select(A).options(
- joinedload(A.bs, innerjoin=False).joinedload(
- B.cs, innerjoin=True
- )
+ joinedload(A.bs, innerjoin=False).joinedload(B.cs, innerjoin=True)
)
The above query, linking A.bs via "outer" join and B.cs via "inner"
will render as LEFT OUTER JOIN. For example, supposing ``A.bs``
is an outerjoin::
- select(A).options(
- joinedload(A.bs).joinedload(B.cs, innerjoin="unnested")
- )
-
+ select(A).options(joinedload(A.bs).joinedload(B.cs, innerjoin="unnested"))
The above join will render as "a LEFT OUTER JOIN b LEFT OUTER JOIN c",
rather than as "a LEFT OUTER JOIN (b JOIN c)".
:ref:`joined_eager_loading`
- """
+ """ # noqa: E501
loader = self._set_relationship_strategy(
attr,
{"lazy": "joined"},
# lazily load Order.items, but when Items are loaded,
# subquery-load the keywords collection
- select(Order).options(
- lazyload(Order.items).subqueryload(Item.keywords)
- )
-
+ select(Order).options(lazyload(Order.items).subqueryload(Item.keywords))
.. seealso::
# lazily load Order.items, but when Items are loaded,
# selectin-load the keywords collection
- select(Order).options(
- lazyload(Order.items).selectinload(Item.keywords)
- )
+ select(Order).options(lazyload(Order.items).selectinload(Item.keywords))
:param recursion_depth: optional int; when set to a positive integer
in conjunction with a self-referential relationship,
from sqlalchemy.orm import defer
session.query(MyClass).options(
- defer(MyClass.attribute_one),
- defer(MyClass.attribute_two)
+ defer(MyClass.attribute_one), defer(MyClass.attribute_two)
)
To specify a deferred load of an attribute on a related class,
defaultload(MyClass.someattr).options(
defer(RelatedClass.some_column),
defer(RelatedClass.some_other_column),
- defer(RelatedClass.another_column)
+ defer(RelatedClass.another_column),
)
)
)
# undefer all columns specific to a single class using Load + *
- session.query(MyClass, MyOtherClass).options(
- Load(MyClass).undefer("*")
- )
+ session.query(MyClass, MyOtherClass).options(Load(MyClass).undefer("*"))
# undefer a column on a related object
- select(MyClass).options(
- defaultload(MyClass.items).undefer(MyClass.text)
- )
+ select(MyClass).options(defaultload(MyClass.items).undefer(MyClass.text))
:param key: Attribute to be undeferred.
:func:`_orm.undefer_group`
- """
+ """ # noqa: E501
return self._set_column_strategy(
(key,), {"deferred": False, "instrument": True}
)
query = session.query(Author)
query = query.options(
- joinedload(Author.book).options(
- load_only(Book.summary, Book.excerpt),
- joinedload(Book.citations).options(
- joinedload(Citation.author)
- )
- )
- )
+ joinedload(Author.book).options(
+ load_only(Book.summary, Book.excerpt),
+ joinedload(Book.citations).options(joinedload(Citation.author)),
+ )
+ )
:param \*opts: A series of loader option objects (ultimately
:class:`_orm.Load` objects) which should be applied to the path
loads, and adjusts the given path to be relative to the
current_path.
- E.g. given a loader path and current path::
+ E.g. given a loader path and current path:
+
+ .. sourcecode:: text
lp: User -> orders -> Order -> items -> Item -> keywords -> Keyword
cp: User -> orders -> Order -> items
- The adjusted path would be::
+ The adjusted path would be:
+
+ .. sourcecode:: text
Item -> keywords -> Keyword
e.g.::
- raiseload('*')
- Load(User).lazyload('*')
- defer('*')
+ raiseload("*")
+ Load(User).lazyload("*")
+ defer("*")
load_only(User.name, User.email) # will create a defer('*')
- joinedload(User.addresses).raiseload('*')
+ joinedload(User.addresses).raiseload("*")
"""
E.g.::
- >>> row = engine.execute(\
- text("select * from table where a=1 and b=2")\
- ).first()
+ >>> row = engine.execute(text("select * from table where a=1 and b=2")).first()
>>> identity_key(MyClass, row=row)
(<class '__main__.MyClass'>, (1, 2), None)
.. versionadded:: 1.2 added identity_token
- """
+ """ # noqa: E501
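The ``identity_key`` forms above reduce to a small runnable sketch; the mapping is hypothetical and no database is needed:

```python
from sqlalchemy import Column, Integer
from sqlalchemy.orm import declarative_base
from sqlalchemy.orm.util import identity_key

Base = declarative_base()


class MyClass(Base):
    __tablename__ = "my_table"
    id = Column(Integer, primary_key=True)


# class plus primary-key identity; the third tuple element is the
# identity_token added in 1.2
key = identity_key(MyClass, 1)
```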
if class_ is not None:
mapper = class_mapper(class_)
if row is None:
# find all pairs of users with the same name
user_alias = aliased(User)
- session.query(User, user_alias).\
- join((user_alias, User.id > user_alias.id)).\
- filter(User.name == user_alias.name)
+ session.query(User, user_alias).join(
+ (user_alias, User.id > user_alias.id)
+ ).filter(User.name == user_alias.name)
:class:`.AliasedClass` is also capable of mapping an existing mapped
class to an entirely new selectable, provided this selectable is column-
using :func:`_sa.inspect`::
from sqlalchemy import inspect
+
my_alias = aliased(MyClass)
insp = inspect(my_alias)
bn = Bundle("mybundle", MyClass.x, MyClass.y)
- for row in session.query(bn).filter(
- bn.c.x == 5).filter(bn.c.y == 4):
+ for row in session.query(bn).filter(bn.c.x == 5).filter(bn.c.y == 4):
print(row.mybundle.x, row.mybundle.y)
:param name: name of the bundle.
can be returned as a "single entity" outside of any enclosing tuple
in the same manner as a mapped entity.
- """
+ """ # noqa: E501
self.name = self._label = name
coerced_exprs = [
coercions.expect(
Nesting of bundles is also supported::
- b1 = Bundle("b1",
- Bundle('b2', MyClass.a, MyClass.b),
- Bundle('b3', MyClass.x, MyClass.y)
- )
+ b1 = Bundle(
+ "b1",
+ Bundle("b2", MyClass.a, MyClass.b),
+ Bundle("b3", MyClass.x, MyClass.y),
+ )
- q = sess.query(b1).filter(
- b1.c.b2.c.a == 5).filter(b1.c.b3.c.y == 9)
+ q = sess.query(b1).filter(b1.c.b2.c.a == 5).filter(b1.c.b3.c.y == 9)
.. seealso::
:attr:`.Bundle.c`
- """
+ """ # noqa: E501
c: ReadOnlyColumnCollection[str, KeyedColumnElement[Any]]
"""An alias for :attr:`.Bundle.columns`."""
from sqlalchemy.orm import Bundle
+
class DictBundle(Bundle):
def create_row_processor(self, query, procs, labels):
- 'Override create_row_processor to return values as
- dictionaries'
+ "Override create_row_processor to return values as dictionaries"
def proc(row):
- return dict(
- zip(labels, (proc(row) for proc in procs))
- )
+ return dict(zip(labels, (proc(row) for proc in procs)))
+
return proc
A result from the above :class:`_orm.Bundle` will return dictionary
values::
- bn = DictBundle('mybundle', MyClass.data1, MyClass.data2)
- for row in session.execute(select(bn)).where(bn.c.data1 == 'd1'):
- print(row.mybundle['data1'], row.mybundle['data2'])
+ bn = DictBundle("mybundle", MyClass.data1, MyClass.data2)
+ for row in session.execute(select(bn)).where(bn.c.data1 == "d1"):
+ print(row.mybundle["data1"], row.mybundle["data2"])
- """
+ """ # noqa: E501
keyed_tuple = result_tuple(labels, [() for l in labels])
def proc(row: Row[Unpack[TupleAny]]) -> Any:
stmt = select(Address).where(with_parent(some_user, User.addresses))
-
The SQL rendered is the same as that rendered when a lazy loader
would fire off from the given parent on that attribute, meaning
that the appropriate state is taken from the parent object in
a1 = aliased(Address)
a2 = aliased(Address)
- stmt = select(a1, a2).where(
- with_parent(u1, User.addresses.of_type(a2))
- )
+ stmt = select(a1, a2).where(with_parent(u1, User.addresses.of_type(a2)))
The above use is equivalent to using the
:func:`_orm.with_parent.from_entity` argument::
.. versionadded:: 1.2
- """
+ """ # noqa: E501
prop_t: RelationshipProperty[Any]
if isinstance(prop, str):
someoption(A).someoption(C.d) # -> fn(A, C) -> False
a1 = aliased(A)
- someoption(a1).someoption(A.b) # -> fn(a1, A) -> False
- someoption(a1).someoption(a1.b) # -> fn(a1, a1) -> True
+ someoption(a1).someoption(A.b) # -> fn(a1, A) -> False
+ someoption(a1).someoption(a1.b) # -> fn(a1, a1) -> True
wp = with_polymorphic(A, [A1, A2])
someoption(wp).someoption(A1.foo) # -> fn(wp, A1) -> False
someoption(wp).someoption(wp.A1.foo) # -> fn(wp, wp.A1) -> True
-
"""
if insp_is_aliased_class(given):
return (
from sqlalchemy import event
+
def my_on_checkout(dbapi_conn, connection_rec, connection_proxy):
"handle an on checkout event"
- event.listen(Pool, 'checkout', my_on_checkout)
+
+ event.listen(Pool, "checkout", my_on_checkout)
In addition to accepting the :class:`_pool.Pool` class and
:class:`_pool.Pool` instances, :class:`_events.PoolEvents` also accepts
engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
# will associate with engine.pool
- event.listen(engine, 'checkout', my_on_checkout)
+ event.listen(engine, "checkout", my_on_checkout)
""" # noqa: E501
from sqlalchemy import insert
- stmt = (
- insert(user_table).
- values(name='username', fullname='Full Username')
- )
+ stmt = insert(user_table).values(name="username", fullname="Full Username")
Similar functionality is available via the
:meth:`_expression.TableClause.insert` method on
:ref:`tutorial_core_insert` - in the :ref:`unified_tutorial`
- """
+ """ # noqa: E501
return Insert(table)
from sqlalchemy import update
stmt = (
- update(user_table).
- where(user_table.c.id == 5).
- values(name='user #5')
+ update(user_table).where(user_table.c.id == 5).values(name="user #5")
)
Similar functionality is available via the
:ref:`tutorial_core_update_delete` - in the :ref:`unified_tutorial`
- """
+ """ # noqa: E501
return Update(table)
from sqlalchemy import delete
- stmt = (
- delete(user_table).
- where(user_table.c.id == 5)
- )
+ stmt = delete(user_table).where(user_table.c.id == 5)
Similar functionality is available via the
:meth:`_expression.TableClause.delete` method on
from sqlalchemy import and_
stmt = select(users_table).where(
- and_(
- users_table.c.name == 'wendy',
- users_table.c.enrolled == True
- )
- )
+ and_(users_table.c.name == "wendy", users_table.c.enrolled == True)
+ )
The :func:`.and_` conjunction is also available using the
Python ``&`` operator (though note that compound expressions
operator precedence behavior)::
stmt = select(users_table).where(
- (users_table.c.name == 'wendy') &
- (users_table.c.enrolled == True)
- )
+ (users_table.c.name == "wendy") & (users_table.c.enrolled == True)
+ )
The :func:`.and_` operation is also implicit in some cases;
the :meth:`_expression.Select.where`
times against a statement, which will have the effect of each
clause being combined using :func:`.and_`::
- stmt = select(users_table).\
- where(users_table.c.name == 'wendy').\
- where(users_table.c.enrolled == True)
+ stmt = (
+ select(users_table)
+ .where(users_table.c.name == "wendy")
+ .where(users_table.c.enrolled == True)
+ )
The :func:`.and_` construct must be given at least one positional
argument in order to be valid; a :func:`.and_` construct with no
specified::
from sqlalchemy import true
+
criteria = and_(true(), *expressions)
The above expression will compile to SQL as the expression ``true``
from sqlalchemy import and_
stmt = select(users_table).where(
- and_(
- users_table.c.name == 'wendy',
- users_table.c.enrolled == True
- )
- )
+ and_(users_table.c.name == "wendy", users_table.c.enrolled == True)
+ )
The :func:`.and_` conjunction is also available using the
Python ``&`` operator (though note that compound expressions
operator precedence behavior)::
stmt = select(users_table).where(
- (users_table.c.name == 'wendy') &
- (users_table.c.enrolled == True)
- )
+ (users_table.c.name == "wendy") & (users_table.c.enrolled == True)
+ )
The :func:`.and_` operation is also implicit in some cases;
the :meth:`_expression.Select.where`
times against a statement, which will have the effect of each
clause being combined using :func:`.and_`::
- stmt = select(users_table).\
- where(users_table.c.name == 'wendy').\
- where(users_table.c.enrolled == True)
+ stmt = (
+ select(users_table)
+ .where(users_table.c.name == "wendy")
+ .where(users_table.c.enrolled == True)
+ )
The :func:`.and_` construct must be given at least one positional
argument in order to be valid; a :func:`.and_` construct with no
specified::
from sqlalchemy import true
+
criteria = and_(true(), *expressions)
The above expression will compile to SQL as the expression ``true``
:func:`.or_`
- """
+ """ # noqa: E501
return BooleanClauseList.and_(*clauses)
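As a quick sanity check of the reformatted ``and_()`` examples above (not part of the patch; the ``users`` table definition here is made up for illustration), the following self-contained sketch shows that an explicit ``and_()`` and chained ``.where()`` calls compile to the same AND-joined WHERE clause:

```python
from sqlalchemy import (
    Boolean,
    Column,
    Integer,
    MetaData,
    String,
    Table,
    and_,
    select,
)

# Hypothetical table, for illustration only
users_table = Table(
    "users",
    MetaData(),
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
    Column("enrolled", Boolean),
)

# Explicit and_() ...
stmt1 = select(users_table).where(
    and_(users_table.c.name == "wendy", users_table.c.enrolled == True)  # noqa: E712
)

# ... versus chained .where() calls, which are combined with AND implicitly
stmt2 = (
    select(users_table)
    .where(users_table.c.name == "wendy")
    .where(users_table.c.enrolled == True)  # noqa: E712
)
```

Both statements stringify identically, which is the behavior the docstring describes.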
e.g.::
from sqlalchemy import asc
+
stmt = select(users_table).order_by(asc(users_table.c.name))
- will produce SQL as::
+ will produce SQL as:
+
+ .. sourcecode:: sql
SELECT id, name FROM user ORDER BY name ASC
e.g.::
- collate(mycolumn, 'utf8_bin')
+ collate(mycolumn, "utf8_bin")
- produces::
+ produces:
+
+ .. sourcecode:: sql
mycolumn COLLATE utf8_bin
E.g.::
from sqlalchemy import between
+
stmt = select(users_table).where(between(users_table.c.id, 5, 7))
- Would produce SQL resembling::
+ Would produce SQL resembling:
+
+ .. sourcecode:: sql
SELECT id, name FROM user WHERE id BETWEEN :id_1 AND :id_2
users_table.c.name == bindparam("username")
)
- The above statement, when rendered, will produce SQL similar to::
+ The above statement, when rendered, will produce SQL similar to:
+
+ .. sourcecode:: sql
SELECT id, name FROM user WHERE name = :username
coerced into fixed :func:`.bindparam` constructs. For example, given
a comparison operation such as::
- expr = users_table.c.name == 'Wendy'
+ expr = users_table.c.name == "Wendy"
The above expression will produce a :class:`.BinaryExpression`
construct, where the left side is the :class:`_schema.Column` object
:class:`.BindParameter` representing the literal value::
print(repr(expr.right))
- BindParameter('%(4327771088 name)s', 'Wendy', type_=String())
+ BindParameter("%(4327771088 name)s", "Wendy", type_=String())
- The expression above will render SQL such as::
+ The expression above will render SQL such as:
+
+ .. sourcecode:: sql
user.name = :name_1
along where it is later used within statement execution. If we
invoke a statement like the following::
- stmt = select(users_table).where(users_table.c.name == 'Wendy')
+ stmt = select(users_table).where(users_table.c.name == "Wendy")
result = connection.execute(stmt)
- We would see SQL logging output as::
+ We would see SQL logging output as:
+
+ .. sourcecode:: sql
SELECT "user".id, "user".name
FROM "user"
stmt = users_table.insert()
result = connection.execute(stmt, {"name": "Wendy"})
- The above will produce SQL output as::
+ The above will produce SQL output as:
+
+ .. sourcecode:: sql
INSERT INTO "user" (name) VALUES (%(name)s)
{'name': 'Wendy'}
from sqlalchemy import case
- stmt = select(users_table).\
- where(
- case(
- (users_table.c.name == 'wendy', 'W'),
- (users_table.c.name == 'jack', 'J'),
- else_='E'
- )
- )
+ stmt = select(users_table).where(
+ case(
+ (users_table.c.name == "wendy", "W"),
+ (users_table.c.name == "jack", "J"),
+ else_="E",
+ )
+ )
- The above statement will produce SQL resembling::
+ The above statement will produce SQL resembling:
+
+ .. sourcecode:: sql
SELECT id, name FROM user
WHERE CASE
compared against keyed to result expressions. The statement below is
equivalent to the preceding statement::
- stmt = select(users_table).\
- where(
- case(
- {"wendy": "W", "jack": "J"},
- value=users_table.c.name,
- else_='E'
- )
- )
+ stmt = select(users_table).where(
+ case({"wendy": "W", "jack": "J"}, value=users_table.c.name, else_="E")
+ )
The values which are accepted as result values in
:paramref:`.case.whens` as well as with :paramref:`.case.else_` are
from sqlalchemy import case, literal_column
case(
- (
- orderline.c.qty > 100,
- literal_column("'greaterthan100'")
- ),
- (
- orderline.c.qty > 10,
- literal_column("'greaterthan10'")
- ),
- else_=literal_column("'lessthan10'")
+ (orderline.c.qty > 100, literal_column("'greaterthan100'")),
+ (orderline.c.qty > 10, literal_column("'greaterthan10'")),
+ else_=literal_column("'lessthan10'"),
)
The above will render the given constants without using bound
parameters for the result values (but still for the comparison
- values), as in::
+ values), as in:
+
+ .. sourcecode:: sql
CASE
WHEN (orderline.qty > :qty_1) THEN 'greaterthan100'
resulting value, e.g.::
case(
- (users_table.c.name == 'wendy', 'W'),
- (users_table.c.name == 'jack', 'J')
+ (users_table.c.name == "wendy", "W"),
+ (users_table.c.name == "jack", "J"),
)
In the second form, it accepts a Python dictionary of comparison
:paramref:`.case.value` to be present, and values will be compared
using the ``==`` operator, e.g.::
- case(
- {"wendy": "W", "jack": "J"},
- value=users_table.c.name
- )
+ case({"wendy": "W", "jack": "J"}, value=users_table.c.name)
:param value: An optional SQL expression which will be used as a
fixed "comparison point" for candidate values within a dictionary
expressions evaluate to true.
- """
+ """ # noqa: E501
return Case(*whens, value=value, else_=else_)
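The two ``case()`` call styles reformatted above can be exercised without a database; here is a minimal sketch (not part of the patch; the ``users`` table is hypothetical) contrasting the positional-tuple form with the dictionary shorthand:

```python
from sqlalchemy import Column, MetaData, String, Table, case, select

# Hypothetical table, for illustration only
users_table = Table("users", MetaData(), Column("name", String(50)))

# Positional 2-tuples: arbitrary boolean conditions
stmt = select(
    case(
        (users_table.c.name == "wendy", "W"),
        (users_table.c.name == "jack", "J"),
        else_="E",
    )
)

# Dictionary shorthand: equality comparisons against a single "value"
stmt2 = select(
    case({"wendy": "W", "jack": "J"}, value=users_table.c.name, else_="E")
)
```

The first form renders a searched CASE (``CASE WHEN ... THEN ...``); the second renders a simple CASE keyed on the given value expression.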
stmt = select(cast(product_table.c.unit_price, Numeric(10, 4)))
- The above statement will produce SQL resembling::
+ The above statement will produce SQL resembling:
+
+ .. sourcecode:: sql
SELECT CAST(unit_price AS NUMERIC(10, 4)) FROM product
from sqlalchemy import select, try_cast, Numeric
- stmt = select(
- try_cast(product_table.c.unit_price, Numeric(10, 4))
- )
+ stmt = select(try_cast(product_table.c.unit_price, Numeric(10, 4)))
- The above would render on Microsoft SQL Server as::
+ The above would render on Microsoft SQL Server as:
+
+ .. sourcecode:: sql
SELECT TRY_CAST (product_table.unit_price AS NUMERIC(10, 4))
FROM product_table
id, name = column("id"), column("name")
stmt = select(id, name).select_from("user")
- The above statement would produce SQL like::
+ The above statement would produce SQL like:
+
+ .. sourcecode:: sql
SELECT id, name FROM user
from sqlalchemy import table, column, select
- user = table("user",
- column("id"),
- column("name"),
- column("description"),
+ user = table(
+ "user",
+ column("id"),
+ column("name"),
+ column("description"),
)
- stmt = select(user.c.description).where(user.c.name == 'wendy')
+ stmt = select(user.c.description).where(user.c.name == "wendy")
A :func:`_expression.column` / :func:`.table`
construct like that illustrated
stmt = select(users_table).order_by(desc(users_table.c.name))
- will produce SQL as::
+ will produce SQL as:
+
+ .. sourcecode:: sql
SELECT id, name FROM user ORDER BY name DESC
an aggregate function, as in::
from sqlalchemy import distinct, func
+
stmt = select(users_table.c.id, func.count(distinct(users_table.c.name)))
- The above would produce an statement resembling::
+ The above would produce a statement resembling:
+
+ .. sourcecode:: sql
SELECT user.id, count(DISTINCT user.name) FROM user
from sqlalchemy import extract
from sqlalchemy import table, column
- logged_table = table("user",
- column("id"),
- column("date_created"),
+ logged_table = table(
+ "user",
+ column("id"),
+ column("date_created"),
)
stmt = select(logged_table.c.id).where(
Similarly, one can also select an extracted component::
- stmt = select(
- extract("YEAR", logged_table.c.date_created)
- ).where(logged_table.c.id == 1)
+ stmt = select(extract("YEAR", logged_table.c.date_created)).where(
+ logged_table.c.id == 1
+ )
The implementation of ``EXTRACT`` may vary across database backends.
Users are reminded to consult their database documentation.
E.g.::
from sqlalchemy import funcfilter
- funcfilter(func.count(1), MyClass.name == 'some name')
+
+ funcfilter(func.count(1), MyClass.name == "some name")
Would produce "COUNT(1) FILTER (WHERE myclass.name = 'some name')".
from sqlalchemy import desc, nulls_first
- stmt = select(users_table).order_by(
- nulls_first(desc(users_table.c.name)))
+ stmt = select(users_table).order_by(nulls_first(desc(users_table.c.name)))
- The SQL expression from the above would resemble::
+ The SQL expression from the above would resemble:
+
+ .. sourcecode:: sql
SELECT id, name FROM user ORDER BY name DESC NULLS FIRST
function version, as in::
stmt = select(users_table).order_by(
- users_table.c.name.desc().nulls_first())
+ users_table.c.name.desc().nulls_first()
+ )
.. versionchanged:: 1.4 :func:`.nulls_first` is renamed from
:func:`.nullsfirst` in previous releases.
:meth:`_expression.Select.order_by`
- """
+ """ # noqa: E501
return UnaryExpression._create_nulls_first(column)
from sqlalchemy import desc, nulls_last
- stmt = select(users_table).order_by(
- nulls_last(desc(users_table.c.name)))
+ stmt = select(users_table).order_by(nulls_last(desc(users_table.c.name)))
- The SQL expression from the above would resemble::
+ The SQL expression from the above would resemble:
+
+ .. sourcecode:: sql
SELECT id, name FROM user ORDER BY name DESC NULLS LAST
rather than as its standalone
function version, as in::
- stmt = select(users_table).order_by(
- users_table.c.name.desc().nulls_last())
+ stmt = select(users_table).order_by(users_table.c.name.desc().nulls_last())
.. versionchanged:: 1.4 :func:`.nulls_last` is renamed from
:func:`.nullslast` in previous releases.
:meth:`_expression.Select.order_by`
- """
+ """ # noqa: E501
return UnaryExpression._create_nulls_last(column)
from sqlalchemy import or_
stmt = select(users_table).where(
- or_(
- users_table.c.name == 'wendy',
- users_table.c.name == 'jack'
- )
- )
+ or_(users_table.c.name == "wendy", users_table.c.name == "jack")
+ )
The :func:`.or_` conjunction is also available using the
Python ``|`` operator (though note that compound expressions
operator precedence behavior)::
stmt = select(users_table).where(
- (users_table.c.name == 'wendy') |
- (users_table.c.name == 'jack')
- )
+ (users_table.c.name == "wendy") | (users_table.c.name == "jack")
+ )
The :func:`.or_` construct must be given at least one positional
argument in order to be valid; a :func:`.or_` construct with no
specified::
from sqlalchemy import false
+
or_criteria = or_(false(), *expressions)
The above expression will compile to SQL as the expression ``false``
from sqlalchemy import or_
stmt = select(users_table).where(
- or_(
- users_table.c.name == 'wendy',
- users_table.c.name == 'jack'
- )
- )
+ or_(users_table.c.name == "wendy", users_table.c.name == "jack")
+ )
The :func:`.or_` conjunction is also available using the
Python ``|`` operator (though note that compound expressions
operator precedence behavior)::
stmt = select(users_table).where(
- (users_table.c.name == 'wendy') |
- (users_table.c.name == 'jack')
- )
+ (users_table.c.name == "wendy") | (users_table.c.name == "jack")
+ )
The :func:`.or_` construct must be given at least one positional
argument in order to be valid; a :func:`.or_` construct with no
specified::
from sqlalchemy import false
+
or_criteria = or_(false(), *expressions)
The above expression will compile to SQL as the expression ``false``
:func:`.and_`
- """
+ """ # noqa: E501
return BooleanClauseList.or_(*clauses)
func.row_number().over(order_by=mytable.c.some_column)
- Would produce::
+ Would produce:
+
+ .. sourcecode:: sql
ROW_NUMBER() OVER(ORDER BY some_column)
mutually-exclusive parameters each accept a 2-tuple, which contains
a combination of integers and None::
- func.row_number().over(
- order_by=my_table.c.some_column, range_=(None, 0))
+ func.row_number().over(order_by=my_table.c.some_column, range_=(None, 0))
- The above would produce::
+ The above would produce:
+
+ .. sourcecode:: sql
ROW_NUMBER() OVER(ORDER BY some_column
RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
* RANGE BETWEEN 5 PRECEDING AND 10 FOLLOWING::
- func.row_number().over(order_by='x', range_=(-5, 10))
+ func.row_number().over(order_by="x", range_=(-5, 10))
* ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW::
- func.row_number().over(order_by='x', rows=(None, 0))
+ func.row_number().over(order_by="x", rows=(None, 0))
* RANGE BETWEEN 2 PRECEDING AND UNBOUNDED FOLLOWING::
- func.row_number().over(order_by='x', range_=(-2, None))
+ func.row_number().over(order_by="x", range_=(-2, None))
* RANGE BETWEEN 1 FOLLOWING AND 3 FOLLOWING::
- func.row_number().over(order_by='x', range_=(1, 3))
+ func.row_number().over(order_by="x", range_=(1, 3))
:param element: a :class:`.FunctionElement`, :class:`.WithinGroup`,
or other compatible construct.
:func:`_expression.within_group`
- """
+ """ # noqa: E501
return Over(element, partition_by, order_by, range_, rows)
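The ``range_`` / ``rows`` frame examples in this hunk compile without a database as well; a self-contained sketch (not part of the patch; table ``t`` is invented for illustration):

```python
from sqlalchemy import Column, Integer, MetaData, Table, func, select

# Hypothetical table, for illustration only
t = Table("t", MetaData(), Column("x", Integer))

# rows=(None, 0): ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
running = func.row_number().over(order_by=t.c.x, rows=(None, 0))

# range_=(-5, 10): RANGE BETWEEN 5 PRECEDING AND 10 FOLLOWING
windowed = func.row_number().over(order_by=t.c.x, range_=(-5, 10))

stmt = select(running, windowed)
```

Printing ``stmt`` shows both frame specifications spelled out in the OVER clause.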
method allows
specification of return columns including names and types::
- t = text("SELECT * FROM users WHERE id=:user_id").\
- bindparams(user_id=7).\
- columns(id=Integer, name=String)
+ t = (
+ text("SELECT * FROM users WHERE id=:user_id")
+ .bindparams(user_id=7)
+ .columns(id=Integer, name=String)
+ )
for id, name in connection.execute(t):
print(id, name)
from sqlalchemy import tuple_
- tuple_(table.c.col1, table.c.col2).in_(
- [(1, 2), (5, 12), (10, 19)]
- )
+ tuple_(table.c.col1, table.c.col2).in_([(1, 2), (5, 12), (10, 19)])
.. versionchanged:: 1.3.6 Added support for SQLite IN tuples.
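The composite-IN example above is easy to verify standalone (not part of the patch; the table definition is hypothetical):

```python
from sqlalchemy import Column, Integer, MetaData, Table, tuple_

# Hypothetical table, for illustration only
t = Table("t", MetaData(), Column("col1", Integer), Column("col2", Integer))

# Compare a pair of columns against a list of value tuples
expr = tuple_(t.c.col1, t.c.col2).in_([(1, 2), (5, 12), (10, 19)])
```

Stringifying ``expr`` shows the tuple of columns on the left-hand side of the IN.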
:meth:`_expression.ColumnElement.label`::
stmt = select(
- type_coerce(log_table.date_string, StringDateTime()).label('date')
+ type_coerce(log_table.date_string, StringDateTime()).label("date")
)
-
A type that features bound-value handling will also have that behavior
take effect when literal values or :func:`.bindparam` constructs are
passed to :func:`.type_coerce` as targets.
the :meth:`.FunctionElement.within_group` method, e.g.::
from sqlalchemy import within_group
+
stmt = select(
department.c.id,
- func.percentile_cont(0.5).within_group(
- department.c.salary.desc()
- )
+ func.percentile_cont(0.5).within_group(department.c.salary.desc()),
)
The above statement would produce SQL similar to
:meth:`_sql.SelectBase.exists` method::
exists_criteria = (
- select(table2.c.col2).
- where(table1.c.col1 == table2.c.col2).
- exists()
+ select(table2.c.col2).where(table1.c.col1 == table2.c.col2).exists()
)
The EXISTS criteria is then used inside of an enclosing SELECT::
stmt = select(table1.c.col1).where(exists_criteria)
- The above statement will then be of the form::
+ The above statement will then be of the form:
+
+ .. sourcecode:: sql
SELECT col1 FROM table1 WHERE EXISTS
(SELECT table2.col2 FROM table2 WHERE table2.col2 = table1.col1)
E.g.::
- j = join(user_table, address_table,
- user_table.c.id == address_table.c.user_id)
+ j = join(
+ user_table, address_table, user_table.c.id == address_table.c.user_id
+ )
stmt = select(user_table).select_from(j)
- would emit SQL along the lines of::
+ would emit SQL along the lines of:
+
+ .. sourcecode:: sql
SELECT user.id, user.name FROM user
JOIN address ON user.id = address.user_id
:class:`_expression.Join` - the type of object produced.
- """
+ """ # noqa: E501
return Join(left, right, onclause, isouter, full)
from sqlalchemy import func
selectable = people.tablesample(
- func.bernoulli(1),
- name='alias',
- seed=func.random())
+ func.bernoulli(1), name="alias", seed=func.random()
+ )
stmt = select(selectable.c.people_id)
Assuming ``people`` with a column ``people_id``, the above
- statement would render as::
+ statement would render as:
+
+ .. sourcecode:: sql
SELECT alias.people_id FROM
people AS alias TABLESAMPLE bernoulli(:bernoulli_1)
from sqlalchemy import values
value_expr = values(
- column('id', Integer),
- column('name', String),
- name="my_values"
- ).data(
- [(1, 'name1'), (2, 'name2'), (3, 'name3')]
- )
+ column("id", Integer),
+ column("name", String),
+ name="my_values",
+ ).data([(1, "name1"), (2, "name2"), (3, "name3")])
:param \*columns: column expressions, typically composed using
:func:`_expression.column` objects.
Index.argument_for("mydialect", "length", None)
- some_index = Index('a', 'b', mydialect_length=5)
+ some_index = Index("a", "b", mydialect_length=5)
The :meth:`.DialectKWArgs.argument_for` method is a per-argument
way adding extra arguments to the
and ``<argument_name>``. For example, the ``postgresql_where``
argument would be locatable as::
- arg = my_object.dialect_options['postgresql']['where']
+ arg = my_object.dialect_options["postgresql"]["where"]
.. versionadded:: 0.9.2
execution_options,
) = QueryContext.default_load_options.from_execution_options(
"_sa_orm_load_options",
- {
- "populate_existing",
- "autoflush",
- "yield_per"
- },
+ {"populate_existing", "autoflush", "yield_per"},
execution_options,
statement._execution_options,
)
from sqlalchemy import event
+
@event.listens_for(some_engine, "before_execute")
def _process_opt(conn, statement, multiparams, params, execution_options):
"run a SQL function before invoking a statement"
mean either two columns with the same key, in which case the column
returned by key access is **arbitrary**::
- >>> x1, x2 = Column('x', Integer), Column('x', Integer)
+ >>> x1, x2 = Column("x", Integer), Column("x", Integer)
>>> cc = ColumnCollection(columns=[(x1.name, x1), (x2.name, x2)])
>>> list(cc)
[Column('x', Integer(), table=None),
Column('x', Integer(), table=None)]
- >>> cc['x'] is x1
+ >>> cc["x"] is x1
False
- >>> cc['x'] is x2
+ >>> cc["x"] is x2
True
Or it can also mean the same column multiple times. These cases are
e.g.::
- t = Table('sometable', metadata, Column('col1', Integer))
- t.columns.replace(Column('col1', Integer, key='columnone'))
+ t = Table("sometable", metadata, Column("col1", Integer))
+ t.columns.replace(Column("col1", Integer, key="columnone"))
will remove the original 'col1' from the collection, and add
the new column under the name 'columnone'.
event.listen(
users,
- 'after_create',
- AddConstraint(constraint).execute_if(dialect='postgresql')
+ "after_create",
+ AddConstraint(constraint).execute_if(dialect="postgresql"),
)
.. seealso::
Used to provide a wrapper for event listening::
event.listen(
- metadata,
- 'before_create',
- DDL("my_ddl").execute_if(dialect='postgresql')
- )
+ metadata,
+ "before_create",
+ DDL("my_ddl").execute_if(dialect="postgresql"),
+ )
:param dialect: May be a string or tuple of strings.
If a string, it will be compared to the name of the
executing database dialect::
- DDL('something').execute_if(dialect='postgresql')
+ DDL("something").execute_if(dialect="postgresql")
If a tuple, specifies multiple dialect names::
- DDL('something').execute_if(dialect=('postgresql', 'mysql'))
+ DDL("something").execute_if(dialect=("postgresql", "mysql"))
:param callable\_: A callable, which will be invoked with
three positional arguments as well as optional keyword
from sqlalchemy import event, DDL
- tbl = Table('users', metadata, Column('uid', Integer))
- event.listen(tbl, 'before_create', DDL('DROP TRIGGER users_trigger'))
+ tbl = Table("users", metadata, Column("uid", Integer))
+ event.listen(tbl, "before_create", DDL("DROP TRIGGER users_trigger"))
- spow = DDL('ALTER TABLE %(table)s SET secretpowers TRUE')
- event.listen(tbl, 'after_create', spow.execute_if(dialect='somedb'))
+ spow = DDL("ALTER TABLE %(table)s SET secretpowers TRUE")
+ event.listen(tbl, "after_create", spow.execute_if(dialect="somedb"))
- drop_spow = DDL('ALTER TABLE users SET secretpowers FALSE')
+ drop_spow = DDL("ALTER TABLE users SET secretpowers FALSE")
connection.execute(drop_spow)
When operating on Table events, the following ``statement``
- string substitutions are available::
+ string substitutions are available:
+
+ .. sourcecode:: text
%(table)s - the Table name, with any required quoting applied
%(schema)s - the schema name, with any required quoting applied
from sqlalchemy import schema
from sqlalchemy.ext.compiler import compiles
+
@compiles(schema.CreateColumn)
def compile(element, compiler, **kw):
column = element.element
return compiler.visit_create_column(element, **kw)
text = "%s SPECIAL DIRECTIVE %s" % (
- column.name,
- compiler.type_compiler.process(column.type)
- )
+ column.name,
+ compiler.type_compiler.process(column.type),
+ )
default = compiler.get_column_default_string(column)
if default is not None:
text += " DEFAULT " + default
if column.constraints:
text += " ".join(
- compiler.process(const)
- for const in column.constraints)
+ compiler.process(const) for const in column.constraints
+ )
return text
The above construct can be applied to a :class:`_schema.Table`
metadata = MetaData()
- table = Table('mytable', MetaData(),
- Column('x', Integer, info={"special":True}, primary_key=True),
- Column('y', String(50)),
- Column('z', String(20), info={"special":True})
- )
+ table = Table(
+ "mytable",
+ MetaData(),
+ Column("x", Integer, info={"special": True}, primary_key=True),
+ Column("y", String(50)),
+ Column("z", String(20), info={"special": True}),
+ )
metadata.create_all(conn)
Above, the directives we've added to the :attr:`_schema.Column.info`
collection
- will be detected by our custom compilation scheme::
+ will be detected by our custom compilation scheme:
+
+ .. sourcecode:: sql
CREATE TABLE mytable (
x SPECIAL DIRECTIVE INTEGER NOT NULL,
from sqlalchemy.schema import CreateColumn
+
@compiles(CreateColumn, "postgresql")
def skip_xmin(element, compiler, **kw):
- if element.element.name == 'xmin':
+ if element.element.name == "xmin":
return None
else:
return compiler.visit_create_column(element, **kw)
- my_table = Table('mytable', metadata,
- Column('id', Integer, primary_key=True),
- Column('xmin', Integer)
- )
+ my_table = Table(
+ "mytable",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ Column("xmin", Integer),
+ )
Above, a :class:`.CreateTable` construct will generate a ``CREATE TABLE``
which only includes the ``id`` column in the string; the ``xmin`` column
E.g.::
- stmt = table.insert().values(data='newdata').return_defaults()
+ stmt = table.insert().values(data="newdata").return_defaults()
result = connection.execute(stmt)
- server_created_at = result.returned_defaults['created_at']
+ server_created_at = result.returned_defaults["created_at"]
When used against an UPDATE statement
:meth:`.UpdateBase.return_defaults` instead looks for columns that
users.insert().values(name="some name")
- users.update().where(users.c.id==5).values(name="some name")
+ users.update().where(users.c.id == 5).values(name="some name")
:param \*args: As an alternative to passing key/value parameters,
a dictionary, tuple, or list of dictionaries or tuples can be passed
this syntax is supported on backends such as SQLite, PostgreSQL,
MySQL, but not necessarily others::
- users.insert().values([
- {"name": "some name"},
- {"name": "some other name"},
- {"name": "yet another name"},
- ])
+ users.insert().values(
+ [
+ {"name": "some name"},
+ {"name": "some other name"},
+ {"name": "yet another name"},
+ ]
+ )
- The above form would render a multiple VALUES statement similar to::
+ The above form would render a multiple VALUES statement similar to:
+
+ .. sourcecode:: sql
INSERT INTO users (name) VALUES
(:name_1),
e.g.::
sel = select(table1.c.a, table1.c.b).where(table1.c.c > 5)
- ins = table2.insert().from_select(['a', 'b'], sel)
+ ins = table2.insert().from_select(["a", "b"], sel)
:param names: a sequence of string column names or
:class:`_schema.Column`
E.g.::
- stmt = table.update().ordered_values(
- ("name", "ed"), ("ident", "foo")
- )
+ stmt = table.update().ordered_values(("name", "ed"), ("ident", "foo"))
.. seealso::
:paramref:`_expression.update.preserve_parameter_order`
parameter, which will be removed in SQLAlchemy 2.0.
- """
+ """ # noqa: E501
if self._values:
raise exc.ArgumentError(
"This statement already has values present"
from sqlalchemy.sql import table, column, select
- t = table('t', column('x'))
+ t = table("t", column("x"))
s = select(t).where(t.c.x == 5)
:func:`_expression.bindparam`
elements replaced with values taken from the given dictionary::
- >>> clause = column('x') + bindparam('foo')
+ >>> clause = column("x") + bindparam("foo")
>>> print(clause.compile().params)
{'foo':None}
- >>> print(clause.params({'foo':7}).compile().params)
+ >>> print(clause.params({"foo": 7}).compile().params)
{'foo':7}
"""
.. sourcecode:: pycon+sql
>>> from sqlalchemy.sql import column
- >>> column('a') + column('b')
+ >>> column("a") + column("b")
<sqlalchemy.sql.expression.BinaryExpression object at 0x101029dd0>
- >>> print(column('a') + column('b'))
+ >>> print(column("a") + column("b"))
{printsql}a + b
.. seealso::
SQL.
Concretely, this is the "name" of a column or a label in a
- SELECT statement; ``<columnname>`` and ``<labelname>`` below::
+ SELECT statement; ``<columnname>`` and ``<labelname>`` below:
+
+ .. sourcecode:: sql
SELECT <columnname> FROM table
t = text("SELECT * FROM users")
result = connection.execute(t)
-
The :class:`_expression.TextClause` construct is produced using the
:func:`_expression.text`
function; see that function for full documentation.
Given a text construct such as::
from sqlalchemy import text
- stmt = text("SELECT id, name FROM user WHERE name=:name "
- "AND timestamp=:timestamp")
+
+ stmt = text(
+ "SELECT id, name FROM user WHERE name=:name AND timestamp=:timestamp"
+ )
the :meth:`_expression.TextClause.bindparams`
method can be used to establish
the initial value of ``:name`` and ``:timestamp``,
using simple keyword arguments::
- stmt = stmt.bindparams(name='jack',
- timestamp=datetime.datetime(2012, 10, 8, 15, 12, 5))
+ stmt = stmt.bindparams(
+ name="jack", timestamp=datetime.datetime(2012, 10, 8, 15, 12, 5)
+ )
Where above, new :class:`.BindParameter` objects
will be generated with the names ``name`` and ``timestamp``, and
argument, then an optional value and type::
from sqlalchemy import bindparam
+
stmt = stmt.bindparams(
- bindparam('name', value='jack', type_=String),
- bindparam('timestamp', type_=DateTime)
- )
+ bindparam("name", value="jack", type_=String),
+ bindparam("timestamp", type_=DateTime),
+ )
Above, we specified the type of :class:`.DateTime` for the
``timestamp`` bind, and the type of :class:`.String` for the ``name``
Additional bound parameters can be supplied at statement execution
time, e.g.::
- result = connection.execute(stmt,
- timestamp=datetime.datetime(2012, 10, 8, 15, 12, 5))
+ result = connection.execute(
+ stmt, timestamp=datetime.datetime(2012, 10, 8, 15, 12, 5)
+ )
The :meth:`_expression.TextClause.bindparams`
method can be called repeatedly,
first with typing information, and a
second time with value information, and it will be combined::
- stmt = text("SELECT id, name FROM user WHERE name=:name "
- "AND timestamp=:timestamp")
+ stmt = text(
+ "SELECT id, name FROM user WHERE name=:name "
+ "AND timestamp=:timestamp"
+ )
stmt = stmt.bindparams(
- bindparam('name', type_=String),
- bindparam('timestamp', type_=DateTime)
+ bindparam("name", type_=String), bindparam("timestamp", type_=DateTime)
)
stmt = stmt.bindparams(
- name='jack',
- timestamp=datetime.datetime(2012, 10, 8, 15, 12, 5)
+ name="jack", timestamp=datetime.datetime(2012, 10, 8, 15, 12, 5)
)
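The two-step pattern above (types first, values second) can be verified without a database by compiling the statement and inspecting its parameters; this is a sketch, and the ``user`` table here is hypothetical:

```python
import datetime

from sqlalchemy import DateTime, String, bindparam, text

stmt = text("SELECT id, name FROM user WHERE name=:name AND timestamp=:timestamp")

# First call: typing information only.
stmt = stmt.bindparams(
    bindparam("name", type_=String), bindparam("timestamp", type_=DateTime)
)

# Second call: values, merged with the types established above.
stmt = stmt.bindparams(
    name="jack", timestamp=datetime.datetime(2012, 10, 8, 15, 12, 5)
)

compiled = stmt.compile()
assert compiled.params == {
    "name": "jack",
    "timestamp": datetime.datetime(2012, 10, 8, 15, 12, 5),
}
```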
The :meth:`_expression.TextClause.bindparams`
object::
stmt1 = text("select id from table where name=:name").bindparams(
- bindparam("name", value='name1', unique=True)
+ bindparam("name", value="name1", unique=True)
)
stmt2 = text("select id from table where name=:name").bindparams(
- bindparam("name", value='name2', unique=True)
+ bindparam("name", value="name2", unique=True)
)
- union = union_all(
- stmt1.columns(column("id")),
- stmt2.columns(column("id"))
- )
+ union = union_all(stmt1.columns(column("id")), stmt2.columns(column("id")))
+
- The above statement will render as::
+ The above statement will render as:
+
+ .. sourcecode:: sql
select id from table where name=:name_1
UNION ALL select id from table where name=:name_2
:func:`_expression.text`
constructs.
- """
+ """ # noqa: E501
self._bindparams = new_params = self._bindparams.copy()
for bind in binds:
from sqlalchemy.sql import column, text
stmt = text("SELECT id, name FROM some_table")
- stmt = stmt.columns(column('id'), column('name')).subquery('st')
+ stmt = stmt.columns(column("id"), column("name")).subquery("st")
- stmt = select(mytable).\
- select_from(
- mytable.join(stmt, mytable.c.name == stmt.c.name)
- ).where(stmt.c.id > 5)
+ stmt = (
+ select(mytable)
+ .select_from(mytable.join(stmt, mytable.c.name == stmt.c.name))
+ .where(stmt.c.id > 5)
+ )
Above, we pass a series of :func:`_expression.column` elements to the
:meth:`_expression.TextClause.columns` method positionally. These
stmt = text("SELECT id, name, timestamp FROM some_table")
stmt = stmt.columns(
- column('id', Integer),
- column('name', Unicode),
- column('timestamp', DateTime)
- )
+ column("id", Integer),
+ column("name", Unicode),
+ column("timestamp", DateTime),
+ )
for id, name, timestamp in connection.execute(stmt):
print(id, name, timestamp)
types alone may be used, if only type conversion is needed::
stmt = text("SELECT id, name, timestamp FROM some_table")
- stmt = stmt.columns(
- id=Integer,
- name=Unicode,
- timestamp=DateTime
- )
+ stmt = stmt.columns(id=Integer, name=Unicode, timestamp=DateTime)
for id, name, timestamp in connection.execute(stmt):
print(id, name, timestamp)
the result set will match to those columns positionally, meaning the
name or origin of the column in the textual SQL doesn't matter::
- stmt = text("SELECT users.id, addresses.id, users.id, "
- "users.name, addresses.email_address AS email "
- "FROM users JOIN addresses ON users.id=addresses.user_id "
- "WHERE users.id = 1").columns(
- User.id,
- Address.id,
- Address.user_id,
- User.name,
- Address.email_address
- )
+ stmt = text(
+ "SELECT users.id, addresses.id, users.id, "
+ "users.name, addresses.email_address AS email "
+ "FROM users JOIN addresses ON users.id=addresses.user_id "
+ "WHERE users.id = 1"
+ ).columns(
+ User.id,
+ Address.id,
+ Address.user_id,
+ User.name,
+ Address.email_address,
+ )
- query = session.query(User).from_statement(stmt).options(
- contains_eager(User.addresses))
+ query = (
+ session.query(User)
+ .from_statement(stmt)
+ .options(contains_eager(User.addresses))
+ )
The :meth:`_expression.TextClause.columns` method provides a direct
route to calling :meth:`_expression.FromClause.subquery` as well as
:meth:`_expression.SelectBase.cte`
against a textual SELECT statement::
- stmt = stmt.columns(id=Integer, name=String).cte('st')
+ stmt = stmt.columns(id=Integer, name=String).cte("st")
stmt = select(sometable).where(sometable.c.id == stmt.c.id)
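Putting the ``cte()`` form above together end to end; ``sometable`` is a hypothetical table used only for illustration, and the statement is merely stringified rather than executed:

```python
from sqlalchemy import Integer, String, column, select, table, text

sometable = table("sometable", column("id"), column("name"))

# Give the textual SELECT typing information, then turn it into a CTE.
stmt = text("SELECT id, name FROM some_table").columns(id=Integer, name=String)
stmt = stmt.cte("st")

query = select(sometable).where(sometable.c.id == stmt.c.id)

# The textual SELECT renders as a WITH clause named "st".
sql = str(query)
assert "WITH st AS" in sql
```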
from sqlalchemy import case
- stmt = select(users_table).\
- where(
- case(
- (users_table.c.name == 'wendy', 'W'),
- (users_table.c.name == 'jack', 'J'),
- else_='E'
- )
- )
+ stmt = select(users_table).where(
+ case(
+ (users_table.c.name == "wendy", "W"),
+ (users_table.c.name == "jack", "J"),
+ else_="E",
+ )
+ )
Details on :class:`.Case` usage is at :func:`.case`.
.. sourcecode:: pycon+sql
>>> from sqlalchemy.sql import column
- >>> column('a') + column('b')
+ >>> column("a") + column("b")
<sqlalchemy.sql.expression.BinaryExpression object at 0x101029dd0>
- >>> print(column('a') + column('b'))
+ >>> print(column("a") + column("b"))
{printsql}a + b
"""
The rationale here is so that ColumnElement objects can be hashable.
What? Well, suppose you do this::
- c1, c2 = column('x'), column('y')
+ c1, c2 = column("x"), column("y")
s1 = set([c1, c2])
We do that **a lot**, columns inside of sets is an extremely basic
The expression::
- func.rank().filter(MyClass.y > 5).over(order_by='x')
+ func.rank().filter(MyClass.y > 5).over(order_by="x")
is shorthand for::
from sqlalchemy import over, funcfilter
- over(funcfilter(func.rank(), MyClass.y > 5), order_by='x')
+
+ over(funcfilter(func.rank(), MyClass.y > 5), order_by="x")
See :func:`_expression.over` for a full description.
id, name = column("id"), column("name")
stmt = select(id, name).select_from("user")
- The above statement would produce SQL like::
+ The above statement would produce SQL like:
+
+ .. sourcecode:: sql
SELECT id, name FROM user
E.g. when we create a :class:`.Constraint` using a naming convention
as follows::
- m = MetaData(naming_convention={
- "ck": "ck_%(table_name)s_%(constraint_name)s"
- })
- t = Table('t', m, Column('x', Integer),
- CheckConstraint('x > 5', name='x5'))
+ m = MetaData(
+ naming_convention={"ck": "ck_%(table_name)s_%(constraint_name)s"}
+ )
+ t = Table(
+ "t", m, Column("x", Integer), CheckConstraint("x > 5", name="x5")
+ )
The name of the above constraint will be rendered as ``"ck_t_x5"``.
That is, the existing name ``x5`` is used in the naming convention as the
use this explicitly as follows::
- m = MetaData(naming_convention={
- "ck": "ck_%(table_name)s_%(constraint_name)s"
- })
- t = Table('t', m, Column('x', Integer),
- CheckConstraint('x > 5', name=conv('ck_t_x5')))
+ m = MetaData(
+ naming_convention={"ck": "ck_%(table_name)s_%(constraint_name)s"}
+ )
+ t = Table(
+ "t",
+ m,
+ Column("x", Integer),
+ CheckConstraint("x > 5", name=conv("ck_t_x5")),
+ )
Where above, the :func:`_schema.conv` marker indicates that the constraint
name here is final, and the name will render as ``"ck_t_x5"`` and not
- from sqlalchemy import Table, Column, Metadata, Integer
+ from sqlalchemy import Table, Column, MetaData, Integer
m = MetaData()
- some_table = Table('some_table', m, Column('data', Integer))
+ some_table = Table("some_table", m, Column("data", Integer))
+
@event.listens_for(some_table, "after_create")
def after_create(target, connection, **kw):
- connection.execute(text(
- "ALTER TABLE %s SET name=foo_%s" % (target.name, target.name)
- ))
+ connection.execute(
+ text("ALTER TABLE %s SET name=foo_%s" % (target.name, target.name))
+ )
some_engine = create_engine("postgresql://scott:tiger@host/test")
as listener callables::
from sqlalchemy import DDL
+
event.listen(
some_table,
"after_create",
- DDL("ALTER TABLE %(table)s SET name=foo_%(table)s")
+ DDL("ALTER TABLE %(table)s SET name=foo_%(table)s"),
)
**Event Propagation to MetaData Copies**
some_table,
"after_create",
DDL("ALTER TABLE %(table)s SET name=foo_%(table)s"),
- propagate=True
+ propagate=True,
)
new_metadata = MetaData()
:ref:`schema_ddl_sequences`
- """
+ """ # noqa: E501
_target_class_doc = "SomeSchemaClassOrObject"
_dispatch_target = SchemaEventTarget
metadata = MetaData()
- @event.listens_for(metadata, 'column_reflect')
+
+ @event.listens_for(metadata, "column_reflect")
def receive_column_reflect(inspector, table, column_info):
# receives for all Table objects that are reflected
# under this MetaData
+ ...
# will use the above event hook
my_table = Table("my_table", metadata, autoload_with=some_engine)
-
.. versionadded:: 1.4.0b2 The :meth:`_events.DDLEvents.column_reflect`
hook may now be applied to a :class:`_schema.MetaData` object as
well as the :class:`_schema.MetaData` class itself where it will
from sqlalchemy import Table
- @event.listens_for(Table, 'column_reflect')
+
+ @event.listens_for(Table, "column_reflect")
def receive_column_reflect(inspector, table, column_info):
# receives for all Table objects that are reflected
+ ...
It can also be applied to a specific :class:`_schema.Table` at the
point that one is being reflected using the
t1 = Table(
"my_table",
autoload_with=some_engine,
- listeners=[
- ('column_reflect', receive_column_reflect)
- ]
+ listeners=[("column_reflect", receive_column_reflect)],
)
The dictionary of column information as returned by the
.. sourcecode:: pycon+sql
- >>> fn = (
- ... func.generate_series(1, 5).
- ... table_valued("value", "start", "stop", "step")
+ >>> fn = func.generate_series(1, 5).table_valued(
+ ... "value", "start", "stop", "step"
... )
>>> print(select(fn))
.. sourcecode:: pycon+sql
- >>> fn = func.generate_series(4, 1, -1).table_valued("gen", with_ordinality="ordinality")
+ >>> fn = func.generate_series(4, 1, -1).table_valued(
+ ... "gen", with_ordinality="ordinality"
+ ... )
>>> print(select(fn))
{printsql}SELECT anon_1.gen, anon_1.ordinality
FROM generate_series(:generate_series_1, :generate_series_2, :generate_series_3) WITH ORDINALITY AS anon_1
.. sourcecode:: pycon+sql
>>> from sqlalchemy import column, select, func
- >>> stmt = select(column('x'), column('y')).select_from(func.myfunction())
+ >>> stmt = select(column("x"), column("y")).select_from(func.myfunction())
>>> print(stmt)
{printsql}SELECT x, y FROM myfunction()
The expression::
- func.row_number().over(order_by='x')
+ func.row_number().over(order_by="x")
is shorthand for::
from sqlalchemy import over
- over(func.row_number(), order_by='x')
+
+ over(func.row_number(), order_by="x")
See :func:`_expression.over` for a full description.
is shorthand for::
from sqlalchemy import funcfilter
+
funcfilter(func.count(1), True)
.. seealso::
An ORM example is as follows::
class Venue(Base):
- __tablename__ = 'venue'
+ __tablename__ = "venue"
id = Column(Integer, primary_key=True)
name = Column(String)
"Venue",
primaryjoin=func.instr(
remote(foreign(name)), name + "/"
- ).as_comparison(1, 2) == 1,
+ ).as_comparison(1, 2)
+ == 1,
viewonly=True,
- order_by=name
+ order_by=name,
)
Above, the "Venue" class can load descendant "Venue" objects by
.. sourcecode:: pycon+sql
- >>> print(func.my_string(u'hi', type_=Unicode) + ' ' +
- ... func.my_string(u'there', type_=Unicode))
+ >>> print(
+ ... func.my_string("hi", type_=Unicode)
+ ... + " "
+ ... + func.my_string("there", type_=Unicode)
+ ... )
{printsql}my_string(:my_string_1) || :my_string_2 || my_string(:my_string_3)
The object returned by a :data:`.func` call is usually an instance of
from sqlalchemy.sql.functions import GenericFunction
from sqlalchemy.types import DateTime
+
class as_utc(GenericFunction):
type = DateTime()
inherit_cache = True
+
print(select(func.as_utc()))
User-defined generic functions can be organized into
from sqlalchemy.sql import quoted_name
+
class GeoBuffer(GenericFunction):
type = Geometry()
package = "geo"
.. sourcecode:: pycon+sql
- >>> print(select(func.concat('a', 'b')))
+ >>> print(select(func.concat("a", "b")))
{printsql}SELECT concat(:concat_2, :concat_3) AS concat_1
String concatenation in SQLAlchemy is more commonly available using the
from sqlalchemy import select
from sqlalchemy import table, column
- my_table = table('some_table', column('id'))
+ my_table = table("some_table", column("id"))
stmt = select(func.count()).select_from(my_table)
- Executing ``stmt`` would emit::
+ Executing ``stmt`` would emit:
+
+ .. sourcecode:: sql
SELECT count(*) AS count_1
FROM some_table
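The ``count(*)`` rendering above can be confirmed by stringifying the statement, with no database connection needed:

```python
from sqlalchemy import column, func, select, table

my_table = table("some_table", column("id"))
stmt = select(func.count()).select_from(my_table)

sql = str(stmt)
assert "count(*)" in sql
assert "FROM some_table" in sql
```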
from sqlalchemy import tuple_
stmt = select(
- func.sum(table.c.value),
- table.c.col_1, table.c.col_2,
- table.c.col_3
+ func.sum(table.c.value), table.c.col_1, table.c.col_2, table.c.col_3
).group_by(
func.grouping_sets(
tuple_(table.c.col_1, table.c.col_2),
)
)
-
.. versionadded:: 1.2
- """
+ """ # noqa: E501
_has_args = True
inherit_cache = True
stmt += lambda s: s.where(table.c.col == parameter)
-
.. versionadded:: 1.4
.. seealso::
... stmt = lambda_stmt(
... lambda: select(table.c.x, table.c.y),
... )
- ... stmt = stmt.add_criteria(
- ... lambda: table.c.x > parameter
- ... )
+ ... stmt = stmt.add_criteria(lambda: table.c.x > parameter)
... return stmt
The :meth:`_sql.StatementLambdaElement.add_criteria` method is
>>> def my_stmt(self, foo):
... stmt = lambda_stmt(
... lambda: select(func.max(foo.x, foo.y)),
- ... track_closure_variables=False
- ... )
- ... stmt = stmt.add_criteria(
- ... lambda: self.where_criteria,
- ... track_on=[self]
+ ... track_closure_variables=False,
... )
+ ... stmt = stmt.add_criteria(lambda: self.where_criteria, track_on=[self])
... return stmt
See :func:`_sql.lambda_stmt` for a description of the parameters
accepted.
- """
+ """ # noqa: E501
opts = self.opts + dict(
enable_tracking=enable_tracking,
is equivalent to::
from sqlalchemy import and_
+
and_(a, b)
Care should be taken when using ``&`` regarding
is equivalent to::
from sqlalchemy import or_
+
or_(a, b)
Care should be taken when using ``|`` regarding
is equivalent to::
from sqlalchemy import not_
+
not_(a)
"""
This function can also be used to make bitwise operators explicit. For
example::
- somecolumn.op('&')(0xff)
+ somecolumn.op("&")(0xFF)
is a bitwise AND of the value in ``somecolumn``.
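A minimal sketch of the ``op()`` bitwise form, rendered with the default compiler; ``somecolumn`` here is a stand-in column:

```python
from sqlalchemy import column

somecolumn = column("somecolumn")  # stand-in for a real table column
expr = somecolumn.op("&")(0xFF)

# The custom operator renders inline, with the value as a bound parameter.
assert str(expr) == "somecolumn & :somecolumn_1"
```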
e.g.::
- >>> expr = column('x').op('+', python_impl=lambda a, b: a + b)('y')
+ >>> expr = column("x").op("+", python_impl=lambda a, b: a + b)("y")
The operator for the above expression will also work for non-SQL
left and right objects::
from sqlalchemy.sql import operators
from sqlalchemy import Numeric
- unary = UnaryExpression(table.c.somecolumn,
- modifier=operators.custom_op("!"),
- type_=Numeric)
-
+ unary = UnaryExpression(
+ table.c.somecolumn, modifier=operators.custom_op("!"), type_=Numeric
+ )
.. seealso::
:meth:`.Operators.bool_op`
- """
+ """ # noqa: E501
__name__ = "custom_op"
) -> ColumnOperators:
r"""Implement the ``like`` operator.
- In a column context, produces the expression::
+ In a column context, produces the expression:
+
+ .. sourcecode:: sql
a LIKE other
E.g.::
- stmt = select(sometable).\
- where(sometable.c.column.like("%foobar%"))
+ stmt = select(sometable).where(sometable.c.column.like("%foobar%"))
:param other: expression to be compared
:param escape: optional escape character, renders the ``ESCAPE``
) -> ColumnOperators:
r"""Implement the ``ilike`` operator, e.g. case insensitive LIKE.
- In a column context, produces an expression either of the form::
+ In a column context, produces an expression either of the form:
+
+ .. sourcecode:: sql
lower(a) LIKE lower(other)
- Or on backends that support the ILIKE operator::
+ Or on backends that support the ILIKE operator:
+
+ .. sourcecode:: sql
a ILIKE other
E.g.::
- stmt = select(sometable).\
- where(sometable.c.column.ilike("%foobar%"))
+ stmt = select(sometable).where(sometable.c.column.ilike("%foobar%"))
:param other: expression to be compared
:param escape: optional escape character, renders the ``ESCAPE``
:meth:`.ColumnOperators.like`
- """
+ """ # noqa: E501
return self.operate(ilike_op, other, escape=escape)
def bitwise_xor(self, other: Any) -> ColumnOperators:
The given parameter ``other`` may be:
- * A list of literal values, e.g.::
+ * A list of literal values,
+ e.g.::
stmt.where(column.in_([1, 2, 3]))
In this calling form, the list of items is converted to a set of
- bound parameters the same length as the list given::
+ bound parameters the same length as the list given:
+
+ .. sourcecode:: sql
WHERE COL IN (?, ?, ?)
:func:`.tuple_` containing multiple expressions::
from sqlalchemy import tuple_
+
stmt.where(tuple_(col1, col2).in_([(1, 10), (2, 20), (3, 30)]))
- * An empty list, e.g.::
+ * An empty list,
+ e.g.::
stmt.where(column.in_([]))
In this calling form, the expression renders an "empty set"
expression. These expressions are tailored to individual backends
and are generally trying to get an empty SELECT statement as a
- subquery. Such as on SQLite, the expression is::
+ subquery. Such as on SQLite, the expression is:
+
+ .. sourcecode:: sql
WHERE col IN (SELECT 1 FROM (SELECT 1) WHERE 1!=1)
* A bound parameter, e.g. :func:`.bindparam`, may be used if it
includes the :paramref:`.bindparam.expanding` flag::
- stmt.where(column.in_(bindparam('value', expanding=True)))
+ stmt.where(column.in_(bindparam("value", expanding=True)))
In this calling form, the expression renders a special non-SQL
- placeholder expression that looks like::
+ placeholder expression that looks like:
+
+ .. sourcecode:: sql
WHERE COL IN ([EXPANDING_value])
connection.execute(stmt, {"value": [1, 2, 3]})
- The database would be passed a bound parameter for each value::
+ The database would be passed a bound parameter for each value:
+
+ .. sourcecode:: sql
WHERE COL IN (?, ?, ?)
If an empty list is passed, a special "empty list" expression,
which is specific to the database in use, is rendered. On
- SQLite this would be::
+ SQLite this would be:
+
+ .. sourcecode:: sql
WHERE COL IN (SELECT 1 FROM (SELECT 1) WHERE 1!=1)
correlated scalar select::
stmt.where(
- column.in_(
- select(othertable.c.y).
- where(table.c.x == othertable.c.x)
- )
+ column.in_(select(othertable.c.y).where(table.c.x == othertable.c.x))
)
- In this calling form, :meth:`.ColumnOperators.in_` renders as given::
+ In this calling form, :meth:`.ColumnOperators.in_` renders as given:
+
+ .. sourcecode:: sql
WHERE COL IN (SELECT othertable.y
FROM othertable WHERE othertable.x = table.x)
construct, or a :func:`.bindparam` construct that includes the
:paramref:`.bindparam.expanding` flag set to True.
- """
+ """ # noqa: E501
return self.operate(in_op, other)
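The correlated-subquery form of ``in_()`` described above can be checked by stringifying the expression; ``mytable`` and ``othertable`` are hypothetical tables for illustration:

```python
from sqlalchemy import column, select, table

mytable = table("mytable", column("x"))
othertable = table("othertable", column("x"), column("y"))

# The subquery form of in_(): the SELECT renders as given, with no
# parameter expansion.
expr = mytable.c.x.in_(
    select(othertable.c.y).where(mytable.c.x == othertable.c.x)
)

sql = str(expr)
assert "IN (SELECT othertable.y" in sql
```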
def not_in(self, other: Any) -> ColumnOperators:
r"""Implement the ``startswith`` operator.
Produces a LIKE expression that tests against a match for the start
- of a string value::
+ of a string value:
+
+ .. sourcecode:: sql
column LIKE <other> || '%'
E.g.::
- stmt = select(sometable).\
- where(sometable.c.column.startswith("foobar"))
+ stmt = select(sometable).where(sometable.c.column.startswith("foobar"))
Since the operator uses ``LIKE``, wildcard characters
``"%"`` and ``"_"`` that are present inside the <other> expression
somecolumn.startswith("foo%bar", autoescape=True)
- Will render as::
+ Will render as:
+
+ .. sourcecode:: sql
somecolumn LIKE :param || '%' ESCAPE '/'
somecolumn.startswith("foo/%bar", escape="^")
- Will render as::
+ Will render as:
+
+ .. sourcecode:: sql
somecolumn LIKE :param || '%' ESCAPE '^'
:meth:`.ColumnOperators.like`
- """
+ """ # noqa: E501
return self.operate(
startswith_op, other, escape=escape, autoescape=autoescape
)
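The LIKE/ESCAPE renderings described above can be confirmed against the default compiler; ``somecolumn`` is a stand-in column:

```python
from sqlalchemy import column

somecolumn = column("somecolumn")

# autoescape=True escapes %, _ and / in the value and appends ESCAPE '/'.
expr = somecolumn.startswith("foo%bar", autoescape=True)
sql = str(expr)
assert "LIKE" in sql and "ESCAPE '/'" in sql

# An explicit escape character renders as given.
expr2 = somecolumn.startswith("foo/%bar", escape="^")
assert "ESCAPE '^'" in str(expr2)
```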
version of :meth:`.ColumnOperators.startswith`.
Produces a LIKE expression that tests against an insensitive
- match for the start of a string value::
+ match for the start of a string value:
+
+ .. sourcecode:: sql
lower(column) LIKE lower(<other>) || '%'
E.g.::
- stmt = select(sometable).\
- where(sometable.c.column.istartswith("foobar"))
+ stmt = select(sometable).where(sometable.c.column.istartswith("foobar"))
Since the operator uses ``LIKE``, wildcard characters
``"%"`` and ``"_"`` that are present inside the <other> expression
somecolumn.istartswith("foo%bar", autoescape=True)
- Will render as::
+ Will render as:
+
+ .. sourcecode:: sql
lower(somecolumn) LIKE lower(:param) || '%' ESCAPE '/'
somecolumn.istartswith("foo/%bar", escape="^")
- Will render as::
+ Will render as:
+
+ .. sourcecode:: sql
lower(somecolumn) LIKE lower(:param) || '%' ESCAPE '^'
.. seealso::
:meth:`.ColumnOperators.startswith`
- """
+ """ # noqa: E501
return self.operate(
istartswith_op, other, escape=escape, autoescape=autoescape
)
r"""Implement the 'endswith' operator.
Produces a LIKE expression that tests against a match for the end
- of a string value::
+ of a string value:
+
+ .. sourcecode:: sql
column LIKE '%' || <other>
E.g.::
- stmt = select(sometable).\
- where(sometable.c.column.endswith("foobar"))
+ stmt = select(sometable).where(sometable.c.column.endswith("foobar"))
Since the operator uses ``LIKE``, wildcard characters
``"%"`` and ``"_"`` that are present inside the <other> expression
somecolumn.endswith("foo%bar", autoescape=True)
- Will render as::
+ Will render as:
+
+ .. sourcecode:: sql
somecolumn LIKE '%' || :param ESCAPE '/'
somecolumn.endswith("foo/%bar", escape="^")
- Will render as::
+ Will render as:
+
+ .. sourcecode:: sql
somecolumn LIKE '%' || :param ESCAPE '^'
:meth:`.ColumnOperators.like`
- """
+ """ # noqa: E501
return self.operate(
endswith_op, other, escape=escape, autoescape=autoescape
)
version of :meth:`.ColumnOperators.endswith`.
Produces a LIKE expression that tests against an insensitive match
- for the end of a string value::
+ for the end of a string value:
+
+ .. sourcecode:: sql
lower(column) LIKE '%' || lower(<other>)
E.g.::
- stmt = select(sometable).\
- where(sometable.c.column.iendswith("foobar"))
+ stmt = select(sometable).where(sometable.c.column.iendswith("foobar"))
Since the operator uses ``LIKE``, wildcard characters
``"%"`` and ``"_"`` that are present inside the <other> expression
somecolumn.iendswith("foo%bar", autoescape=True)
- Will render as::
+ Will render as:
+
+ .. sourcecode:: sql
lower(somecolumn) LIKE '%' || lower(:param) ESCAPE '/'
somecolumn.iendswith("foo/%bar", escape="^")
- Will render as::
+ Will render as:
+
+ .. sourcecode:: sql
lower(somecolumn) LIKE '%' || lower(:param) ESCAPE '^'
.. seealso::
:meth:`.ColumnOperators.endswith`
- """
+ """ # noqa: E501
return self.operate(
iendswith_op, other, escape=escape, autoescape=autoescape
)
r"""Implement the 'contains' operator.
Produces a LIKE expression that tests against a match for the middle
- of a string value::
+ of a string value:
+
+ .. sourcecode:: sql
column LIKE '%' || <other> || '%'
E.g.::
- stmt = select(sometable).\
- where(sometable.c.column.contains("foobar"))
+ stmt = select(sometable).where(sometable.c.column.contains("foobar"))
Since the operator uses ``LIKE``, wildcard characters
``"%"`` and ``"_"`` that are present inside the <other> expression
somecolumn.contains("foo%bar", autoescape=True)
- Will render as::
+ Will render as:
+
+ .. sourcecode:: sql
somecolumn LIKE '%' || :param || '%' ESCAPE '/'
somecolumn.contains("foo/%bar", escape="^")
- Will render as::
+ Will render as:
+
+ .. sourcecode:: sql
somecolumn LIKE '%' || :param || '%' ESCAPE '^'
:meth:`.ColumnOperators.like`
- """
+ """ # noqa: E501
return self.operate(contains_op, other, **kw)
def icontains(self, other: Any, **kw: Any) -> ColumnOperators:
version of :meth:`.ColumnOperators.contains`.
Produces a LIKE expression that tests against an insensitive match
- for the middle of a string value::
+ for the middle of a string value:
+
+ .. sourcecode:: sql
lower(column) LIKE '%' || lower(<other>) || '%'
E.g.::
- stmt = select(sometable).\
- where(sometable.c.column.icontains("foobar"))
+ stmt = select(sometable).where(sometable.c.column.icontains("foobar"))
Since the operator uses ``LIKE``, wildcard characters
``"%"`` and ``"_"`` that are present inside the <other> expression
somecolumn.icontains("foo%bar", autoescape=True)
- Will render as::
+ Will render as:
+
+ .. sourcecode:: sql
lower(somecolumn) LIKE '%' || lower(:param) || '%' ESCAPE '/'
somecolumn.icontains("foo/%bar", escape="^")
- Will render as::
+ Will render as:
+
+ .. sourcecode:: sql
lower(somecolumn) LIKE '%' || lower(:param) || '%' ESCAPE '^'
:meth:`.ColumnOperators.contains`
- """
+ """ # noqa: E501
return self.operate(icontains_op, other, **kw)
def match(self, other: Any, **kwargs: Any) -> ColumnOperators:
E.g.::
stmt = select(table.c.some_column).where(
- table.c.some_column.regexp_match('^(b|c)')
+ table.c.some_column.regexp_match("^(b|c)")
)
:meth:`_sql.ColumnOperators.regexp_match` attempts to resolve to
E.g.::
stmt = select(
- table.c.some_column.regexp_replace(
- 'b(..)',
- 'X\1Y',
- flags='g'
- )
+ table.c.some_column.regexp_replace("b(..)", "X\1Y", flags="g")
)
:meth:`_sql.ColumnOperators.regexp_replace` attempts to resolve to
e.g.::
mytable = Table(
- "mytable", metadata,
- Column('mytable_id', Integer, primary_key=True),
- Column('value', String(50))
+ "mytable",
+ metadata,
+ Column("mytable_id", Integer, primary_key=True),
+ Column("value", String(50)),
)
The :class:`_schema.Table`
:class:`_schema.Column`
named "y"::
- Table("mytable", metadata,
- Column('y', Integer),
- extend_existing=True,
- autoload_with=engine
- )
+ Table(
+ "mytable",
+ metadata,
+ Column("y", Integer),
+ extend_existing=True,
+ autoload_with=engine,
+ )
.. seealso::
"handle the column reflection event"
# ...
+
t = Table(
- 'sometable',
+ "sometable",
autoload_with=engine,
- listeners=[
- ('column_reflect', listen_for_reflect)
- ])
+ listeners=[("column_reflect", listen_for_reflect)],
+ )
.. seealso::
m1 = MetaData()
- user = Table('user', m1, Column('id', Integer, primary_key=True))
+ user = Table("user", m1, Column("id", Integer, primary_key=True))
m2 = MetaData()
user_copy = user.to_metadata(m2)
unless
set explicitly::
- m2 = MetaData(schema='newschema')
+ m2 = MetaData(schema="newschema")
# user_copy_one will have "newschema" as the schema name
user_copy_one = user.to_metadata(m2, schema=None)
E.g.::
- def referred_schema_fn(table, to_schema,
- constraint, referred_schema):
- if referred_schema == 'base_tables':
+ def referred_schema_fn(table, to_schema, constraint, referred_schema):
+ if referred_schema == "base_tables":
return referred_schema
else:
return to_schema
- new_table = table.to_metadata(m2, schema="alt_schema",
- referred_schema_fn=referred_schema_fn)
+
+ new_table = table.to_metadata(
+ m2, schema="alt_schema", referred_schema_fn=referred_schema_fn
+ )
:param name: optional string name indicating the target table name.
If not specified or None, the table name is retained. This allows
:class:`_schema.MetaData` target
with a new name.
- """
+ """ # noqa: E501
if name is None:
name = self.name
as well, e.g.::
# use a type with arguments
- Column('data', String(50))
+ Column("data", String(50))
# use no arguments
- Column('level', Integer)
+ Column("level", Integer)
The ``type`` argument may be the second positional argument
or specified by keyword.
# turn on autoincrement for this column despite
# the ForeignKey()
- Column('id', ForeignKey('other.id'),
- primary_key=True, autoincrement='ignore_fk')
+ Column(
+ "id",
+ ForeignKey("other.id"),
+ primary_key=True,
+ autoincrement="ignore_fk",
+ )
It is typically not desirable to have "autoincrement" enabled on a
column that refers to another via foreign key, as such a column is
"some_table",
metadata,
Column("x", Integer),
- Index("ix_some_table_x", "x")
+ Index("ix_some_table_x", "x"),
)
To add the :paramref:`_schema.Index.unique` flag to the
String types will be emitted as-is, surrounded by single quotes::
- Column('x', Text, server_default="val")
+ Column("x", Text, server_default="val")
+
+ will render:
+
+ .. sourcecode:: sql
x TEXT DEFAULT 'val'
A :func:`~sqlalchemy.sql.expression.text` expression will be
rendered as-is, without quotes::
- Column('y', DateTime, server_default=text('NOW()'))
+ Column("y", DateTime, server_default=text("NOW()"))
+
+ will render:
+
+ .. sourcecode:: sql
y DATETIME DEFAULT NOW()
from sqlalchemy.dialects.postgresql import array
engine = create_engine(
- 'postgresql+psycopg2://scott:tiger@localhost/mydatabase'
+ "postgresql+psycopg2://scott:tiger@localhost/mydatabase"
)
metadata_obj = MetaData()
tbl = Table(
- "foo",
- metadata_obj,
- Column("bar",
- ARRAY(Text),
- server_default=array(["biz", "bang", "bash"])
- )
+ "foo",
+ metadata_obj,
+ Column(
+ "bar", ARRAY(Text), server_default=array(["biz", "bang", "bash"])
+ ),
)
metadata_obj.create_all(engine)
- The above results in a table created with the following SQL::
+ The above results in a table created with the following SQL:
+
+ .. sourcecode:: sql
CREATE TABLE foo (
bar TEXT[] DEFAULT ARRAY['biz', 'bang', 'bash']
:class:`_schema.UniqueConstraint` construct explicitly at the
level of the :class:`_schema.Table` construct itself::
- Table(
- "some_table",
- metadata,
- Column("x", Integer),
- UniqueConstraint("x")
- )
+ Table("some_table", metadata, Column("x", Integer), UniqueConstraint("x"))
The :paramref:`_schema.UniqueConstraint.name` parameter
of the unique constraint object is left at its default value
object,
e.g.::
- t = Table("remote_table", metadata,
- Column("remote_id", ForeignKey("main_table.id"))
+ t = Table(
+ "remote_table",
+ metadata,
+ Column("remote_id", ForeignKey("main_table.id")),
)
Note that ``ForeignKey`` is only a marker object that defines
For example, the following::
- Column('foo', Integer, default=50)
+ Column("foo", Integer, default=50)
Is equivalent to::
- Column('foo', Integer, ColumnDefault(50))
-
+ Column("foo", Integer, ColumnDefault(50))
"""
The :class:`.Sequence` is typically associated with a primary key column::
some_table = Table(
- 'some_table', metadata,
- Column('id', Integer, Sequence('some_table_seq', start=1),
- primary_key=True)
+ "some_table",
+ metadata,
+ Column(
+ "id",
+ Integer,
+ Sequence("some_table_seq", start=1),
+ primary_key=True,
+ ),
)
When CREATE TABLE is emitted for the above :class:`_schema.Table`, if the
E.g.::
- Column('foo', Integer, FetchedValue())
+ Column("foo", Integer, FetchedValue())
Would indicate that some trigger or default generator
will create a new value for the ``foo`` column during an
For example, the following::
- Column('foo', Integer, server_default="50")
+ Column("foo", Integer, server_default="50")
Is equivalent to::
- Column('foo', Integer, DefaultClause("50"))
+ Column("foo", Integer, DefaultClause("50"))
"""
:class:`_schema.Column` objects corresponding to those marked with
the :paramref:`_schema.Column.primary_key` flag::
- >>> my_table = Table('mytable', metadata,
- ... Column('id', Integer, primary_key=True),
- ... Column('version_id', Integer, primary_key=True),
- ... Column('data', String(50))
- ... )
+ >>> my_table = Table(
+ ... "mytable",
+ ... metadata,
+ ... Column("id", Integer, primary_key=True),
+ ... Column("version_id", Integer, primary_key=True),
+ ... Column("data", String(50)),
+ ... )
>>> my_table.primary_key
PrimaryKeyConstraint(
Column('id', Integer(), table=<mytable>,
the "name" of the constraint can also be specified, as well as other
options which may be recognized by dialects::
- my_table = Table('mytable', metadata,
- Column('id', Integer),
- Column('version_id', Integer),
- Column('data', String(50)),
- PrimaryKeyConstraint('id', 'version_id',
- name='mytable_pk')
- )
+ my_table = Table(
+ "mytable",
+ metadata,
+ Column("id", Integer),
+ Column("version_id", Integer),
+ Column("data", String(50)),
+ PrimaryKeyConstraint("id", "version_id", name="mytable_pk"),
+ )
The two styles of column-specification should generally not be mixed.
A warning is emitted if the columns present in the
primary key column collection from the :class:`_schema.Table` based on the
flags::
- my_table = Table('mytable', metadata,
- Column('id', Integer, primary_key=True),
- Column('version_id', Integer, primary_key=True),
- Column('data', String(50)),
- PrimaryKeyConstraint(name='mytable_pk',
- mssql_clustered=True)
- )
+ my_table = Table(
+ "mytable",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ Column("version_id", Integer, primary_key=True),
+ Column("data", String(50)),
+ PrimaryKeyConstraint(name="mytable_pk", mssql_clustered=True),
+ )
"""
E.g.::
- sometable = Table("sometable", metadata,
- Column("name", String(50)),
- Column("address", String(100))
- )
+ sometable = Table(
+ "sometable",
+ metadata,
+ Column("name", String(50)),
+ Column("address", String(100)),
+ )
Index("some_index", sometable.c.name)
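As a quick sanity check of the reformatted example (a sketch; the table and index names are illustrative), the DDL emitted for such an index can be inspected with ``CreateIndex``:

```python
from sqlalchemy import Column, Index, MetaData, String, Table
from sqlalchemy.schema import CreateIndex

metadata = MetaData()
sometable = Table(
    "sometable",
    metadata,
    Column("name", String(50)),
    Column("address", String(100)),
)

# attach a single-column index, then render its CREATE INDEX statement
ix = Index("some_index", sometable.c.name)
ddl = str(CreateIndex(ix))
```

The rendered string contains ``CREATE INDEX some_index ON sometable (name)``.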
For a no-frills, single column index, adding
:class:`_schema.Column` also supports ``index=True``::
- sometable = Table("sometable", metadata,
- Column("name", String(50), index=True)
- )
+ sometable = Table(
+ "sometable", metadata, Column("name", String(50), index=True)
+ )
For a composite index, multiple columns can be specified::
the names
of the indexed columns can be specified as strings::
- Table("sometable", metadata,
- Column("name", String(50)),
- Column("address", String(100)),
- Index("some_index", "name", "address")
- )
+ Table(
+ "sometable",
+ metadata,
+ Column("name", String(50)),
+ Column("address", String(100)),
+ Index("some_index", "name", "address"),
+ )
To support functional or expression-based indexes in this form, the
:func:`_expression.text` construct may be used::
from sqlalchemy import text
- Table("sometable", metadata,
- Column("name", String(50)),
- Column("address", String(100)),
- Index("some_index", text("lower(name)"))
- )
+ Table(
+ "sometable",
+ metadata,
+ Column("name", String(50)),
+ Column("address", String(100)),
+ Index("some_index", text("lower(name)")),
+ )
.. seealso::
from sqlalchemy import Computed
- Table('square', metadata_obj,
- Column('side', Float, nullable=False),
- Column('area', Float, Computed('side * side'))
+ Table(
+ "square",
+ metadata_obj,
+ Column("side", Float, nullable=False),
+ Column("area", Float, Computed("side * side")),
)
See the linked documentation below for complete details.
from sqlalchemy import Identity
- Table('foo', metadata_obj,
- Column('id', Integer, Identity())
- Column('description', Text),
+ Table(
+ "foo",
+ metadata_obj,
+ Column("id", Integer, Identity()),
+ Column("description", Text),
)
See the linked documentation below for complete details.
stmt = table.insert().prefix_with("LOW_PRIORITY", dialect="mysql")
# MySQL 5.7 optimizer hints
- stmt = select(table).prefix_with(
- "/*+ BKA(t1) */", dialect="mysql")
+ stmt = select(table).prefix_with("/*+ BKA(t1) */", dialect="mysql")
Multiple prefixes can be specified by multiple calls
to :meth:`_expression.HasPrefixes.prefix_with`.
E.g.::
- stmt = select(col1, col2).cte().suffix_with(
- "cycle empno set y_cycle to 1 default 0", dialect="oracle")
+ stmt = (
+ select(col1, col2)
+ .cte()
+ .suffix_with(
+ "cycle empno set y_cycle to 1 default 0", dialect="oracle"
+ )
+ )
Multiple suffixes can be specified by multiple calls
to :meth:`_expression.HasSuffixes.suffix_with`.
the table or alias. E.g. when using Oracle Database, the
following::
- select(mytable).\
- with_hint(mytable, "index(%(name)s ix_mytable)")
+ select(mytable).with_hint(mytable, "index(%(name)s ix_mytable)")
- Would render SQL as::
+ Would render SQL as:
+
+ .. sourcecode:: sql
select /*+ index(mytable ix_mytable) */ ... from mytable
The ``dialect_name`` option will limit the rendering of a particular
hint to a particular backend. Such as, to add hints for both Oracle
- Database and Sybase simultaneously::
+ Database and SQL Server simultaneously::
- select(mytable).\
- with_hint(mytable, "index(%(name)s ix_mytable)", 'oracle').\
- with_hint(mytable, "WITH INDEX ix_mytable", 'mssql')
+ select(mytable).with_hint(
+ mytable, "index(%(name)s ix_mytable)", "oracle"
+ ).with_hint(mytable, "WITH INDEX ix_mytable", "mssql")
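The dialect-gating behavior described above can be verified by compiling the same statement against two dialects (a sketch; ``mytable`` and the hint text are illustrative):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, select
from sqlalchemy.dialects import oracle

metadata = MetaData()
mytable = Table(
    "mytable", metadata, Column("id", Integer), Column("name", String(50))
)

stmt = select(mytable).with_hint(
    mytable, "index(%(name)s ix_mytable)", "oracle"
)

# the hint renders only when compiled for the named dialect;
# %(name)s is substituted with the table name
oracle_sql = str(stmt.compile(dialect=oracle.dialect()))
default_sql = str(stmt.compile())
```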
.. seealso::
from sqlalchemy import join
- j = user_table.join(address_table,
- user_table.c.id == address_table.c.user_id)
+ j = user_table.join(
+ address_table, user_table.c.id == address_table.c.user_id
+ )
stmt = select(user_table).select_from(j)
- would emit SQL along the lines of::
+ would emit SQL along the lines of:
+
+ .. sourcecode:: sql
SELECT user.id, user.name FROM user
JOIN address ON user.id = address.user_id
from sqlalchemy import outerjoin
- j = user_table.outerjoin(address_table,
- user_table.c.id == address_table.c.user_id)
+ j = user_table.outerjoin(
+ address_table, user_table.c.id == address_table.c.user_id
+ )
The above is equivalent to::
j = user_table.join(
- address_table,
- user_table.c.id == address_table.c.user_id,
- isouter=True)
+ address_table, user_table.c.id == address_table.c.user_id, isouter=True
+ )
:param right: the right side of the join; this is any
:class:`_expression.FromClause` object such as a
:class:`_expression.Join`
- """
+ """ # noqa: E501
return Join(self, right, onclause, True, full)
E.g.::
- a2 = some_table.alias('a2')
+ a2 = some_table.alias("a2")
The above code creates an :class:`_expression.Alias`
object which can be used
This is the namespace that is used to resolve "filter_by()" type
expressions, such as::
- stmt.filter_by(address='some address')
+ stmt.filter_by(address="some address")
It defaults to the ``.c`` collection, however internally it can
be overridden using the "entity_namespace" annotation to deliver
>>> from sqlalchemy import table, column, select, true, LABEL_STYLE_NONE
>>> table1 = table("table1", column("columna"), column("columnb"))
>>> table2 = table("table2", column("columna"), column("columnc"))
- >>> print(select(table1, table2).join(table2, true()).set_label_style(LABEL_STYLE_NONE))
+ >>> print(
+ ... select(table1, table2)
+ ... .join(table2, true())
+ ... .set_label_style(LABEL_STYLE_NONE)
+ ... )
{printsql}SELECT table1.columna, table1.columnb, table2.columna, table2.columnc
FROM table1 JOIN table2 ON true
.. sourcecode:: pycon+sql
- >>> from sqlalchemy import table, column, select, true, LABEL_STYLE_TABLENAME_PLUS_COL
+ >>> from sqlalchemy import (
+ ... table,
+ ... column,
+ ... select,
+ ... true,
+ ... LABEL_STYLE_TABLENAME_PLUS_COL,
+ ... )
>>> table1 = table("table1", column("columna"), column("columnb"))
>>> table2 = table("table2", column("columna"), column("columnc"))
- >>> print(select(table1, table2).join(table2, true()).set_label_style(LABEL_STYLE_TABLENAME_PLUS_COL))
+ >>> print(
+ ... select(table1, table2)
+ ... .join(table2, true())
+ ... .set_label_style(LABEL_STYLE_TABLENAME_PLUS_COL)
+ ... )
{printsql}SELECT table1.columna AS table1_columna, table1.columnb AS table1_columnb, table2.columna AS table2_columna, table2.columnc AS table2_columnc
FROM table1 JOIN table2 ON true
.. sourcecode:: pycon+sql
- >>> from sqlalchemy import table, column, select, true, LABEL_STYLE_DISAMBIGUATE_ONLY
+ >>> from sqlalchemy import (
+ ... table,
+ ... column,
+ ... select,
+ ... true,
+ ... LABEL_STYLE_DISAMBIGUATE_ONLY,
+ ... )
>>> table1 = table("table1", column("columna"), column("columnb"))
>>> table2 = table("table2", column("columna"), column("columnc"))
- >>> print(select(table1, table2).join(table2, true()).set_label_style(LABEL_STYLE_DISAMBIGUATE_ONLY))
+ >>> print(
+ ... select(table1, table2)
+ ... .join(table2, true())
+ ... .set_label_style(LABEL_STYLE_DISAMBIGUATE_ONLY)
+ ... )
{printsql}SELECT table1.columna, table1.columnb, table2.columna AS columna_1, table2.columnc
FROM table1 JOIN table2 ON true
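The label-style doctests above can be exercised programmatically; this sketch repeats the ``LABEL_STYLE_TABLENAME_PLUS_COL`` case and inspects the compiled string:

```python
from sqlalchemy import (
    LABEL_STYLE_TABLENAME_PLUS_COL,
    column,
    select,
    table,
    true,
)

table1 = table("table1", column("columna"), column("columnb"))
table2 = table("table2", column("columna"), column("columnc"))

# every column is labeled <tablename>_<columnname> under this style
stmt = (
    select(table1, table2)
    .join(table2, true())
    .set_label_style(LABEL_STYLE_TABLENAME_PLUS_COL)
)
sql = str(stmt)
```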
stmt = stmt.select()
- The above will produce a SQL string resembling::
+ The above will produce a SQL string resembling:
+
+ .. sourcecode:: sql
SELECT table_a.id, table_a.col, table_b.id, table_b.a_id
FROM table_a JOIN table_b ON table_a.id = table_b.a_id
.. sourcecode:: pycon+sql
>>> from sqlalchemy import select, func
- >>> fn = func.json_array_elements_text('["one", "two", "three"]').table_valued("value")
+ >>> fn = func.json_array_elements_text('["one", "two", "three"]').table_valued(
+ ... "value"
+ ... )
>>> print(select(fn.c.value))
{printsql}SELECT anon_1.value
FROM json_array_elements_text(:json_array_elements_text_1) AS anon_1
>>> print(
... select(
- ... func.unnest(array(["one", "two", "three"])).
- table_valued("x", with_ordinality="o").render_derived()
+ ... func.unnest(array(["one", "two", "three"]))
+ ... .table_valued("x", with_ordinality="o")
+ ... .render_derived()
... )
... )
{printsql}SELECT anon_1.x, anon_1.o
>>> print(
... select(
- ... func.json_to_recordset(
- ... '[{"a":1,"b":"foo"},{"a":"2","c":"bar"}]'
- ... )
+ ... func.json_to_recordset('[{"a":1,"b":"foo"},{"a":"2","c":"bar"}]')
... .table_valued(column("a", Integer), column("b", String))
... .render_derived(with_types=True)
... )
E.g.::
from sqlalchemy import table, column, select
- t = table('t', column('c1'), column('c2'))
+
+ t = table("t", column("c1"), column("c2"))
ins = t.insert().values({"c1": "x", "c2": "y"}).cte()
stmt = select(t).add_cte(ins)
- Would render::
+ Would render:
+
+ .. sourcecode:: sql
- WITH anon_1 AS
- (INSERT INTO t (c1, c2) VALUES (:param_1, :param_2))
+ WITH anon_1 AS (
+ INSERT INTO t (c1, c2) VALUES (:param_1, :param_2)
+ )
SELECT t.c1, t.c2
FROM t
t = table("t", column("c1"), column("c2"))
- delete_statement_cte = (
- t.delete().where(t.c.c1 < 1).cte("deletions")
- )
+ delete_statement_cte = t.delete().where(t.c.c1 < 1).cte("deletions")
insert_stmt = insert(t).values({"c1": 1, "c2": 2})
update_statement = insert_stmt.on_conflict_do_update(
print(update_statement)
- The above statement renders as::
+ The above statement renders as:
+
+ .. sourcecode:: sql
- WITH deletions AS
- (DELETE FROM t WHERE t.c1 < %(c1_1)s)
+ WITH deletions AS (
+ DELETE FROM t WHERE t.c1 < %(c1_1)s
+ )
INSERT INTO t (c1, c2) VALUES (%(c1)s, %(c2)s)
ON CONFLICT (c1) DO UPDATE SET c1 = excluded.c1, c2 = excluded.c2
:paramref:`.HasCTE.cte.nesting`
- """
- opt = _CTEOpts(
- nest_here,
- )
+ """ # noqa: E501
+ opt = _CTEOpts(nest_here)
for cte in ctes:
cte = coercions.expect(roles.IsCTERole, cte)
self._independent_ctes += (cte,)
Example 1, non recursive::
- from sqlalchemy import (Table, Column, String, Integer,
- MetaData, select, func)
+ from sqlalchemy import (
+ Table,
+ Column,
+ String,
+ Integer,
+ MetaData,
+ select,
+ func,
+ )
metadata = MetaData()
- orders = Table('orders', metadata,
- Column('region', String),
- Column('amount', Integer),
- Column('product', String),
- Column('quantity', Integer)
+ orders = Table(
+ "orders",
+ metadata,
+ Column("region", String),
+ Column("amount", Integer),
+ Column("product", String),
+ Column("quantity", Integer),
)
- regional_sales = select(
- orders.c.region,
- func.sum(orders.c.amount).label('total_sales')
- ).group_by(orders.c.region).cte("regional_sales")
+ regional_sales = (
+ select(orders.c.region, func.sum(orders.c.amount).label("total_sales"))
+ .group_by(orders.c.region)
+ .cte("regional_sales")
+ )
- top_regions = select(regional_sales.c.region).\
- where(
- regional_sales.c.total_sales >
- select(
- func.sum(regional_sales.c.total_sales) / 10
- )
- ).cte("top_regions")
+ top_regions = (
+ select(regional_sales.c.region)
+ .where(
+ regional_sales.c.total_sales
+ > select(func.sum(regional_sales.c.total_sales) / 10)
+ )
+ .cte("top_regions")
+ )
- statement = select(
- orders.c.region,
- orders.c.product,
- func.sum(orders.c.quantity).label("product_units"),
- func.sum(orders.c.amount).label("product_sales")
- ).where(orders.c.region.in_(
- select(top_regions.c.region)
- )).group_by(orders.c.region, orders.c.product)
+ statement = (
+ select(
+ orders.c.region,
+ orders.c.product,
+ func.sum(orders.c.quantity).label("product_units"),
+ func.sum(orders.c.amount).label("product_sales"),
+ )
+ .where(orders.c.region.in_(select(top_regions.c.region)))
+ .group_by(orders.c.region, orders.c.product)
+ )
result = conn.execute(statement).fetchall()
Example 2, WITH RECURSIVE::
- from sqlalchemy import (Table, Column, String, Integer,
- MetaData, select, func)
+ from sqlalchemy import (
+ Table,
+ Column,
+ String,
+ Integer,
+ MetaData,
+ select,
+ func,
+ )
metadata = MetaData()
- parts = Table('parts', metadata,
- Column('part', String),
- Column('sub_part', String),
- Column('quantity', Integer),
+ parts = Table(
+ "parts",
+ metadata,
+ Column("part", String),
+ Column("sub_part", String),
+ Column("quantity", Integer),
)
- included_parts = select(\
- parts.c.sub_part, parts.c.part, parts.c.quantity\
- ).\
- where(parts.c.part=='our part').\
- cte(recursive=True)
+ included_parts = (
+ select(parts.c.sub_part, parts.c.part, parts.c.quantity)
+ .where(parts.c.part == "our part")
+ .cte(recursive=True)
+ )
incl_alias = included_parts.alias()
parts_alias = parts.alias()
included_parts = included_parts.union_all(
select(
- parts_alias.c.sub_part,
- parts_alias.c.part,
- parts_alias.c.quantity
- ).\
- where(parts_alias.c.part==incl_alias.c.sub_part)
+ parts_alias.c.sub_part, parts_alias.c.part, parts_alias.c.quantity
+ ).where(parts_alias.c.part == incl_alias.c.sub_part)
)
statement = select(
- included_parts.c.sub_part,
- func.sum(included_parts.c.quantity).
- label('total_quantity')
- ).\
- group_by(included_parts.c.sub_part)
+ included_parts.c.sub_part,
+ func.sum(included_parts.c.quantity).label("total_quantity"),
+ ).group_by(included_parts.c.sub_part)
result = conn.execute(statement).fetchall()
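Example 2 above compiles without a database; this condensed sketch of the same recursive CTE shows that the ``WITH RECURSIVE`` keyword is emitted as soon as ``recursive=True`` is set:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, select

metadata = MetaData()
parts = Table(
    "parts",
    metadata,
    Column("part", String),
    Column("sub_part", String),
    Column("quantity", Integer),
)

included_parts = (
    select(parts.c.sub_part, parts.c.part, parts.c.quantity)
    .where(parts.c.part == "our part")
    .cte(recursive=True)
)

incl_alias = included_parts.alias()
parts_alias = parts.alias()
included_parts = included_parts.union_all(
    select(
        parts_alias.c.sub_part, parts_alias.c.part, parts_alias.c.quantity
    ).where(parts_alias.c.part == incl_alias.c.sub_part)
)

# selecting from the CTE renders the full WITH RECURSIVE preamble
sql = str(select(included_parts.c.sub_part))
```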
Example 3, an upsert using UPDATE and INSERT with CTEs::
from datetime import date
- from sqlalchemy import (MetaData, Table, Column, Integer,
- Date, select, literal, and_, exists)
+ from sqlalchemy import (
+ MetaData,
+ Table,
+ Column,
+ Integer,
+ Date,
+ select,
+ literal,
+ and_,
+ exists,
+ )
metadata = MetaData()
- visitors = Table('visitors', metadata,
- Column('product_id', Integer, primary_key=True),
- Column('date', Date, primary_key=True),
- Column('count', Integer),
+ visitors = Table(
+ "visitors",
+ metadata,
+ Column("product_id", Integer, primary_key=True),
+ Column("date", Date, primary_key=True),
+ Column("count", Integer),
)
# add 5 visitors for the product_id == 1
update_cte = (
visitors.update()
- .where(and_(visitors.c.product_id == product_id,
- visitors.c.date == day))
+ .where(
+ and_(visitors.c.product_id == product_id, visitors.c.date == day)
+ )
.values(count=visitors.c.count + count)
.returning(literal(1))
- .cte('update_cte')
+ .cte("update_cte")
)
upsert = visitors.insert().from_select(
[visitors.c.product_id, visitors.c.date, visitors.c.count],
- select(literal(product_id), literal(day), literal(count))
- .where(~exists(update_cte.select()))
+ select(literal(product_id), literal(day), literal(count)).where(
+ ~exists(update_cte.select())
+ ),
)
connection.execute(upsert)
Example 4, Nesting CTE (SQLAlchemy 1.4.24 and above)::
- value_a = select(
- literal("root").label("n")
- ).cte("value_a")
+ value_a = select(literal("root").label("n")).cte("value_a")
# A nested CTE with the same name as the root one
- value_a_nested = select(
- literal("nesting").label("n")
- ).cte("value_a", nesting=True)
+ value_a_nested = select(literal("nesting").label("n")).cte(
+ "value_a", nesting=True
+ )
# Nesting CTEs takes ascendency locally
# over the CTEs at a higher level
value_ab = select(value_a.c.n.label("a"), value_b.c.n.label("b"))
The above query will render the second CTE nested inside the first,
- shown with inline parameters below as::
+ shown with inline parameters below as:
+
+ .. sourcecode:: sql
WITH
value_a AS
The same CTE can be set up using the :meth:`.HasCTE.add_cte` method
as follows (SQLAlchemy 2.0 and above)::
- value_a = select(
- literal("root").label("n")
- ).cte("value_a")
+ value_a = select(literal("root").label("n")).cte("value_a")
# A nested CTE with the same name as the root one
- value_a_nested = select(
- literal("nesting").label("n")
- ).cte("value_a")
+ value_a_nested = select(literal("nesting").label("n")).cte("value_a")
# Nesting CTEs takes ascendency locally
# over the CTEs at a higher level
value_b = (
- select(value_a_nested.c.n).
- add_cte(value_a_nested, nest_here=True).
- cte("value_b")
+ select(value_a_nested.c.n)
+ .add_cte(value_a_nested, nest_here=True)
+ .cte("value_b")
)
value_ab = select(value_a.c.n.label("a"), value_b.c.n.label("b"))
Column("right", Integer),
)
- root_node = select(literal(1).label("node")).cte(
- "nodes", recursive=True
- )
+ root_node = select(literal(1).label("node")).cte("nodes", recursive=True)
left_edge = select(edge.c.left).join(
root_node, edge.c.right == root_node.c.node
subgraph = select(subgraph_cte)
- The above query will render 2 UNIONs inside the recursive CTE::
+ The above query will render 2 UNIONs inside the recursive CTE:
+
+ .. sourcecode:: sql
WITH RECURSIVE nodes(node) AS (
SELECT 1 AS node
:meth:`_orm.Query.cte` - ORM version of
:meth:`_expression.HasCTE.cte`.
- """
+ """ # noqa: E501
return CTE._construct(
self, name=name, recursive=recursive, nesting=nesting
)
from sqlalchemy import table, column
- user = table("user",
- column("id"),
- column("name"),
- column("description"),
+ user = table(
+ "user",
+ column("id"),
+ column("name"),
+ column("description"),
)
The :class:`_expression.TableClause` construct serves as the base for
E.g.::
- table.insert().values(name='foo')
+ table.insert().values(name="foo")
See :func:`_expression.insert` for argument and usage information.
E.g.::
- table.update().where(table.c.id==7).values(name='foo')
+ table.update().where(table.c.id == 7).values(name="foo")
See :func:`_expression.update` for argument and usage information.
E.g.::
- table.delete().where(table.c.id==7)
+ table.delete().where(table.c.id == 7)
See :func:`_expression.delete` for argument and usage information.
E.g.::
- my_values = my_values.data([(1, 'value 1'), (2, 'value2')])
+ my_values = my_values.data([(1, "value 1"), (2, "value2")])
:param values: a sequence (i.e. list) of tuples that map to the
column expressions given in the :class:`_expression.Values`
stmt = select(table.c.id, table.c.name)
- The above statement might look like::
+ The above statement might look like:
+
+ .. sourcecode:: sql
SELECT table.id, table.name FROM table
subq = stmt.subquery()
new_stmt = select(subq)
- The above renders as::
+ The above renders as:
+
+ .. sourcecode:: sql
SELECT anon_1.id, anon_1.name
FROM (SELECT table.id, table.name FROM table) AS anon_1
stmt = select(table).with_for_update(nowait=True)
On a database like PostgreSQL or Oracle Database, the above would
- render a statement like::
+ render a statement like:
+
+ .. sourcecode:: sql
SELECT table.a, table.b FROM table FOR UPDATE NOWAIT
on other backends, the ``nowait`` option is ignored and instead
- would produce::
+ would produce:
+
+ .. sourcecode:: sql
SELECT table.a, table.b FROM table FOR UPDATE
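The fallback behavior described here can be confirmed by compiling for different targets (a sketch; the table is illustrative):

```python
from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.dialects import postgresql

metadata = MetaData()
t = Table("t", metadata, Column("a", Integer), Column("b", Integer))

# the default compiler renders a plain FOR UPDATE, ignoring nowait
plain = str(select(t).with_for_update(nowait=True))

# PostgreSQL supports the option and renders FOR UPDATE NOWAIT
pg = str(
    select(t).with_for_update(nowait=True).compile(dialect=postgresql.dialect())
)
```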
e.g.::
- stmt = select(table.c.name, func.max(table.c.stat)).\
- group_by(table.c.name)
+ stmt = select(table.c.name, func.max(table.c.stat)).group_by(table.c.name)
:param \*clauses: a series of :class:`_expression.ColumnElement`
constructs
:ref:`tutorial_order_by_label` - in the :ref:`unified_tutorial`
- """
+ """ # noqa: E501
if not clauses and __first is None:
self._group_by_clauses = ()
E.g.::
- stmt = select(user_table).join(address_table, user_table.c.id == address_table.c.user_id)
+ stmt = select(user_table).join(
+ address_table, user_table.c.id == address_table.c.user_id
+ )
- The above statement generates SQL similar to::
+ The above statement generates SQL similar to:
- SELECT user.id, user.name FROM user JOIN address ON user.id = address.user_id
+ .. sourcecode:: sql
+
+ SELECT user.id, user.name
+ FROM user
+ JOIN address ON user.id = address.user_id
.. versionchanged:: 1.4 :meth:`_expression.Select.join` now creates
a :class:`_sql.Join` object between a :class:`_sql.FromClause`
user_table, address_table, user_table.c.id == address_table.c.user_id
)
- The above statement generates SQL similar to::
+ The above statement generates SQL similar to:
+
+ .. sourcecode:: sql
SELECT user.id, user.name, address.id, address.email, address.user_id
FROM user JOIN address ON user.id = address.user_id
E.g.::
from sqlalchemy import select
+
stmt = select(users_table.c.id, users_table.c.name).distinct()
- The above would produce an statement resembling::
+ The above would produce a statement resembling:
+
+ .. sourcecode:: sql
SELECT DISTINCT user.id, user.name FROM user
E.g.::
- table1 = table('t1', column('a'))
- table2 = table('t2', column('b'))
- s = select(table1.c.a).\
- select_from(
- table1.join(table2, table1.c.a==table2.c.b)
- )
+ table1 = table("t1", column("a"))
+ table2 = table("t2", column("b"))
+ s = select(table1.c.a).select_from(
+ table1.join(table2, table1.c.a == table2.c.b)
+ )
The "from" list is a unique set on the identity of each element,
so adding an already present :class:`_schema.Table`
if desired, in the case that the FROM clause cannot be fully
derived from the columns clause::
- select(func.count('*')).select_from(table1)
+ select(func.count("*")).select_from(table1)
"""
:class:`_expression.ColumnElement` objects are directly present as they
were given, e.g.::
- col1 = column('q', Integer)
- col2 = column('p', Integer)
+ col1 = column("q", Integer)
+ col2 = column("p", Integer)
stmt = select(col1, col2)
Above, ``stmt.selected_columns`` would be a collection that contains
criteria, e.g.::
def filter_on_id(my_select, id):
- return my_select.where(my_select.selected_columns['id'] == id)
+ return my_select.where(my_select.selected_columns["id"] == id)
+
stmt = select(MyModel)
stmt = exists(some_table.c.id).where(some_table.c.id == 5).select()
- This will produce a statement resembling::
+ This will produce a statement resembling:
+
+ .. sourcecode:: sql
SELECT EXISTS (SELECT id FROM some_table WHERE some_table = :param) AS anon_1
.. sourcecode:: pycon+sql
>>> from sqlalchemy import cast, select, String
- >>> print(select(cast('some string', String(collation='utf8'))))
+ >>> print(select(cast("some string", String(collation="utf8"))))
{printsql}SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1
.. note::
Column(
"float_data",
- Float(5).with_variant(oracle.FLOAT(binary_precision=16), "oracle")
+ Float(5).with_variant(oracle.FLOAT(binary_precision=16), "oracle"),
)
:param asdecimal: the same flag as that of :class:`.Numeric`, but
import enum
from sqlalchemy import Enum
+
class MyEnum(enum.Enum):
one = 1
two = 2
three = 3
- t = Table(
- 'data', MetaData(),
- Column('value', Enum(MyEnum))
- )
+
+ t = Table("data", MetaData(), Column("value", Enum(MyEnum)))
connection.execute(t.insert(), {"value": MyEnum.two})
assert connection.scalar(t.select()) is MyEnum.two
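The round trip shown above runs as-is against an in-memory SQLite database, where ``Enum`` is emulated as VARCHAR plus a CHECK constraint; this sketch fills in the engine and table setup:

```python
import enum

from sqlalchemy import Column, Enum, MetaData, Table, create_engine


class MyEnum(enum.Enum):
    one = 1
    two = 2
    three = 3


metadata = MetaData()
t = Table("data", metadata, Column("value", Enum(MyEnum)))

engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as connection:
    connection.execute(t.insert(), {"value": MyEnum.two})

# the stored name "two" round-trips back to the enum member itself
with engine.connect() as connection:
    result = connection.scalar(t.select())
```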
The :class:`_types.JSON` type stores arbitrary JSON format data, e.g.::
- data_table = Table('data_table', metadata,
- Column('id', Integer, primary_key=True),
- Column('data', JSON)
+ data_table = Table(
+ "data_table",
+ metadata,
+ Column("id", Integer, primary_key=True),
+ Column("data", JSON),
)
with engine.connect() as conn:
conn.execute(
- data_table.insert(),
- {"data": {"key1": "value1", "key2": "value2"}}
+ data_table.insert(), {"data": {"key1": "value1", "key2": "value2"}}
)
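The JSON insert above also works end-to-end on an in-memory SQLite database, where the generic ``JSON`` type serializes the dictionary to text and deserializes it on fetch (a sketch filling in the engine setup):

```python
from sqlalchemy import JSON, Column, Integer, MetaData, Table, create_engine

metadata = MetaData()
data_table = Table(
    "data_table",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("data", JSON),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(
        data_table.insert(), {"data": {"key1": "value1", "key2": "value2"}}
    )

# the dictionary round-trips through JSON serialization intact
with engine.connect() as conn:
    row = conn.execute(data_table.select()).one()
```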
**JSON-Specific Expression Operators**
* Keyed index operations::
- data_table.c.data['some key']
+ data_table.c.data["some key"]
* Integer index operations::
* Path index operations::
- data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')]
+ data_table.c.data[("key_1", "key_2", 5, ..., "key_n")]
* Data casters for specific JSON element types, subsequent to an index
or path operation being invoked::
from sqlalchemy import cast, type_coerce
from sqlalchemy import String, JSON
- cast(
- data_table.c.data['some_key'], String
- ) == type_coerce(55, JSON)
+
+ cast(data_table.c.data["some_key"], String) == type_coerce(55, JSON)
The above case now works directly as::
- data_table.c.data['some_key'].as_integer() == 5
+ data_table.c.data["some_key"].as_integer() == 5
For details on the previous comparison approach within the 1.3.x
series, see the documentation for SQLAlchemy 1.2 or the included HTML
should be SQL NULL as opposed to JSON ``"null"``::
from sqlalchemy import null
+
conn.execute(table.insert(), {"json_value": null()})
To insert or select against a value that is JSON ``"null"``, use the
engine = create_engine(
"sqlite://",
- json_serializer=lambda obj: json.dumps(obj, ensure_ascii=False))
+ json_serializer=lambda obj: json.dumps(obj, ensure_ascii=False),
+ )
.. versionchanged:: 1.3.7
:class:`sqlalchemy.dialects.sqlite.JSON`
- """
+ """ # noqa: E501
__visit_name__ = "JSON"
transparent method is to use :func:`_expression.text`::
Table(
- 'my_table', metadata,
- Column('json_data', JSON, default=text("'null'"))
+ "my_table", metadata, Column("json_data", JSON, default=text("'null'"))
)
While it is possible to use :attr:`_types.JSON.NULL` in this context, the
generated defaults.
- """
+ """ # noqa: E501
def __init__(self, none_as_null: bool = False):
"""Construct a :class:`_types.JSON` type.
as SQL NULL::
from sqlalchemy import null
+
conn.execute(table.insert(), {"data": null()})
.. note::
e.g.::
- stmt = select(
- mytable.c.json_column['some_data'].as_boolean()
- ).where(
- mytable.c.json_column['some_data'].as_boolean() == True
+ stmt = select(mytable.c.json_column["some_data"].as_boolean()).where(
+ mytable.c.json_column["some_data"].as_boolean() == True
)
.. versionadded:: 1.3.11
- """
+ """ # noqa: E501
return self._binary_w_type(Boolean(), "as_boolean")
def as_string(self):
e.g.::
- stmt = select(
- mytable.c.json_column['some_data'].as_string()
- ).where(
- mytable.c.json_column['some_data'].as_string() ==
- 'some string'
+ stmt = select(mytable.c.json_column["some_data"].as_string()).where(
+ mytable.c.json_column["some_data"].as_string() == "some string"
)
.. versionadded:: 1.3.11
- """
+ """ # noqa: E501
return self._binary_w_type(Unicode(), "as_string")
def as_integer(self):
e.g.::
- stmt = select(
- mytable.c.json_column['some_data'].as_integer()
- ).where(
- mytable.c.json_column['some_data'].as_integer() == 5
+ stmt = select(mytable.c.json_column["some_data"].as_integer()).where(
+ mytable.c.json_column["some_data"].as_integer() == 5
)
.. versionadded:: 1.3.11
- """
+ """ # noqa: E501
return self._binary_w_type(Integer(), "as_integer")
def as_float(self):
e.g.::
- stmt = select(
- mytable.c.json_column['some_data'].as_float()
- ).where(
- mytable.c.json_column['some_data'].as_float() == 29.75
+ stmt = select(mytable.c.json_column["some_data"].as_float()).where(
+ mytable.c.json_column["some_data"].as_float() == 29.75
)
.. versionadded:: 1.3.11
- """
+ """ # noqa: E501
return self._binary_w_type(Float(), "as_float")
def as_numeric(self, precision, scale, asdecimal=True):
e.g.::
- stmt = select(
- mytable.c.json_column['some_data'].as_numeric(10, 6)
- ).where(
- mytable.c.
- json_column['some_data'].as_numeric(10, 6) == 29.75
+ stmt = select(mytable.c.json_column["some_data"].as_numeric(10, 6)).where(
+ mytable.c.json_column["some_data"].as_numeric(10, 6) == 29.75
)
.. versionadded:: 1.4.0b2
- """
+ """ # noqa: E501
return self._binary_w_type(
Numeric(precision, scale, asdecimal=asdecimal), "as_numeric"
)
e.g.::
- stmt = select(mytable.c.json_column['some_data'].as_json())
+ stmt = select(mytable.c.json_column["some_data"].as_json())
This is typically the default behavior of indexed elements in any
case.
An :class:`_types.ARRAY` type is constructed given the "type"
of element::
- mytable = Table("mytable", metadata,
- Column("data", ARRAY(Integer))
- )
+ mytable = Table("mytable", metadata, Column("data", ARRAY(Integer)))
The above type represents an N-dimensional array,
meaning a supporting backend such as PostgreSQL will interpret values
with any number of dimensions automatically. To produce an INSERT
construct that passes in a 1-dimensional array of integers::
- connection.execute(
- mytable.insert(),
- {"data": [1,2,3]}
- )
+ connection.execute(mytable.insert(), {"data": [1, 2, 3]})
The :class:`_types.ARRAY` type can be constructed given a fixed number
of dimensions::
- mytable = Table("mytable", metadata,
- Column("data", ARRAY(Integer, dimensions=2))
- )
+ mytable = Table(
+ "mytable", metadata, Column("data", ARRAY(Integer, dimensions=2))
+ )
Sending a number of dimensions is optional, but recommended if the
datatype is to represent arrays of more than one dimension. This number
as well as UPDATE statements when the :meth:`_expression.Update.values`
method is used::
- mytable.update().values({
- mytable.c.data[5]: 7,
- mytable.c.data[2:7]: [1, 2, 3]
- })
+ mytable.update().values(
+ {mytable.c.data[5]: 7, mytable.c.data[2:7]: [1, 2, 3]}
+ )
Indexed access is one-based by default;
for zero-based index conversion, set :paramref:`_types.ARRAY.zero_indexes`.
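On a supporting backend the generic ``ARRAY`` maps to the native array type; this sketch checks the PostgreSQL DDL rendering for the table above:

```python
from sqlalchemy import ARRAY, Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable

metadata = MetaData()
mytable = Table("mytable", metadata, Column("data", ARRAY(Integer)))

# PostgreSQL renders ARRAY(Integer) as INTEGER[]
ddl = str(CreateTable(mytable).compile(dialect=postgresql.dialect()))
```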
from sqlalchemy import ARRAY
from sqlalchemy.ext.mutable import MutableList
+
class SomeOrmClass(Base):
# ...
E.g.::
- Column('myarray', ARRAY(Integer))
+ Column("myarray", ARRAY(Integer))
Arguments are:
from sqlalchemy.sql import operators
conn.execute(
- select(table.c.data).where(
- table.c.data.any(7, operator=operators.lt)
- )
+ select(table.c.data).where(table.c.data.any(7, operator=operators.lt))
)
:param other: expression to be compared
:meth:`.types.ARRAY.Comparator.all`
- """
+ """ # noqa: E501
elements = util.preloaded.sql_elements
operator = operator if operator else operators.eq
from sqlalchemy.sql import operators
conn.execute(
- select(table.c.data).where(
- table.c.data.all(7, operator=operators.lt)
- )
+ select(table.c.data).where(table.c.data.all(7, operator=operators.lt))
)
:param other: expression to be compared
:meth:`.types.ARRAY.Comparator.any`
- """
+ """ # noqa: E501
elements = util.preloaded.sql_elements
operator = operator if operator else operators.eq
t = Table(
"t",
metadata_obj,
- Column('uuid_data', Uuid, primary_key=True),
- Column("other_data", String)
+ Column("uuid_data", Uuid, primary_key=True),
+ Column("other_data", String),
)
with engine.begin() as conn:
conn.execute(
- t.insert(),
- {"uuid_data": uuid.uuid4(), "other_data", "some data"}
+ t.insert(), {"uuid_data": uuid.uuid4(), "other_data": "some data"}
)
To have the :class:`_sqltypes.Uuid` datatype work with string-based
:class:`_sqltypes.UUID` - represents exactly the ``UUID`` datatype
without any backend-agnostic behaviors.
- """
+ """ # noqa: E501
__visit_name__ = "uuid"
E.g.::
Table(
- 'some_table', metadata,
+ "some_table",
+ metadata,
Column(
String(50).evaluates_none(),
nullable=True,
- server_default='no value')
+ server_default="no value",
+ ),
)
The ORM uses this flag to indicate that a positive value of ``None``
string_type = String()
string_type = string_type.with_variant(
- mysql.VARCHAR(collation='foo'), 'mysql', 'mariadb'
+ mysql.VARCHAR(collation="foo"), "mysql", "mariadb"
)
The variant mapping indicates that when this type is
"""
cache_ok: Optional[bool] = None
- """Indicate if statements using this :class:`.ExternalType` are "safe to
+ '''Indicate if statements using this :class:`.ExternalType` are "safe to
cache".
The default value ``None`` will emit a warning and then not allow caching
series of tuples. Given a previously un-cacheable type as::
class LookupType(UserDefinedType):
- '''a custom type that accepts a dictionary as a parameter.
+ """a custom type that accepts a dictionary as a parameter.
this is the non-cacheable version, as "self.lookup" is not
hashable.
- '''
+ """
def __init__(self, lookup):
self.lookup = lookup
def get_col_spec(self, **kw):
return "VARCHAR(255)"
- def bind_processor(self, dialect):
- # ... works with "self.lookup" ...
+ def bind_processor(self, dialect): ... # works with "self.lookup" ...
Where "lookup" is a dictionary. The type will not be able to generate
a cache key::
to the ".lookup" attribute::
class LookupType(UserDefinedType):
- '''a custom type that accepts a dictionary as a parameter.
+ """a custom type that accepts a dictionary as a parameter.
The dictionary is stored both as itself in a private variable,
and published in a public variable as a sorted tuple of tuples,
two equivalent dictionaries. Note it assumes the keys and
values of the dictionary are themselves hashable.
- '''
+ """
cache_ok = True
# assume keys/values of "lookup" are hashable; otherwise
# they would also need to be converted in some way here
- self.lookup = tuple(
- (key, lookup[key]) for key in sorted(lookup)
- )
+ self.lookup = tuple((key, lookup[key]) for key in sorted(lookup))
def get_col_spec(self, **kw):
return "VARCHAR(255)"
- def bind_processor(self, dialect):
- # ... works with "self._lookup" ...
+ def bind_processor(self, dialect): ... # works with "self._lookup" ...
Where above, the cache key for ``LookupType({"a": 10, "b": 20})`` will be::
:ref:`sql_caching`
- """ # noqa: E501
+ ''' # noqa: E501
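The dictionary-to-sorted-tuple conversion shown above generalizes beyond SQLAlchemy. A standalone sketch (plain Python, hypothetical helper name) of why sorting by key makes the parameter both hashable and deterministic:

```python
def hashable_form(lookup):
    # sort by key so two equal dicts always yield the identical tuple,
    # regardless of insertion order; tuples of hashable pairs are hashable
    return tuple((key, lookup[key]) for key in sorted(lookup))


k1 = hashable_form({"a": 10, "b": 20})
k2 = hashable_form({"b": 20, "a": 10})
assert k1 == k2 == (("a", 10), ("b", 20))
assert hash(k1) == hash(k2)  # a plain dict would raise TypeError here
```

This is the property a cache key needs: equal inputs produce equal, hashable keys.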
@util.non_memoized_property
def _static_cache_key(
import sqlalchemy.types as types
+
class MyType(types.UserDefinedType):
cache_ok = True
- def __init__(self, precision = 8):
+ def __init__(self, precision=8):
self.precision = precision
def get_col_spec(self, **kw):
def bind_processor(self, dialect):
def process(value):
return value
+
return process
def result_processor(self, dialect, coltype):
def process(value):
return value
+
return process
Once the type is made, it's immediately usable::
- table = Table('foo', metadata_obj,
- Column('id', Integer, primary_key=True),
- Column('data', MyType(16))
- )
+ table = Table(
+ "foo",
+ metadata_obj,
+ Column("id", Integer, primary_key=True),
+ Column("data", MyType(16)),
+ )
The ``get_col_spec()`` method will in most cases receive a keyword
argument ``type_expression`` which refers to the owning expression
class TypeDecorator(SchemaEventTarget, ExternalType, TypeEngine[_T]):
- """Allows the creation of types which add additional functionality
+ '''Allows the creation of types which add additional functionality
to an existing type.
This method is preferred to direct subclassing of SQLAlchemy's
import sqlalchemy.types as types
+
class MyType(types.TypeDecorator):
- '''Prefixes Unicode values with "PREFIX:" on the way in and
+ """Prefixes Unicode values with "PREFIX:" on the way in and
strips it off on the way out.
- '''
+ """
impl = types.Unicode
from sqlalchemy import JSON
from sqlalchemy import TypeDecorator
+
class MyJsonType(TypeDecorator):
impl = JSON
from sqlalchemy import ARRAY
from sqlalchemy import TypeDecorator
+
class MyArrayType(TypeDecorator):
impl = ARRAY
def coerce_compared_value(self, op, value):
return self.impl.coerce_compared_value(op, value)
-
- """
+ '''
__visit_name__ = "type_decorator"
would produce an expression along the lines of::
- tablea.c.id==tableb.c.tablea_id
+ tablea.c.id == tableb.c.tablea_id
The join is determined based on the foreign key relationships
between the two selectables. If there are multiple ways
The function is of the form::
- def my_fn(binary, left, right)
+ def my_fn(binary, left, right): ...
For each binary expression located which has a
comparison operator, the product of "left" and
Hence an expression like::
- and_(
- (a + b) == q + func.sum(e + f),
- j == r
- )
+ and_((a + b) == q + func.sum(e + f), j == r)
+
- would have the traversal::
+ would have the traversal:
+
+ .. sourcecode:: text
a <eq> q
a <eq> e
E.g.::
- >>> expr = and_(
- ... table.c.foo==5, table.c.foo==7
- ... )
+ >>> expr = and_(table.c.foo == 5, table.c.foo == 7)
>>> bind_values(expr)
[5, 7]
"""
E.g.::
- table1 = Table('sometable', metadata,
- Column('col1', Integer),
- Column('col2', Integer)
- )
- table2 = Table('someothertable', metadata,
- Column('col1', Integer),
- Column('col2', Integer)
- )
+ table1 = Table(
+ "sometable",
+ metadata,
+ Column("col1", Integer),
+ Column("col2", Integer),
+ )
+ table2 = Table(
+ "someothertable",
+ metadata,
+ Column("col1", Integer),
+ Column("col2", Integer),
+ )
condition = table1.c.col1 == table2.c.col1
make an alias of table1::
- s = table1.alias('foo')
+ s = table1.alias("foo")
calling ``ClauseAdapter(s).traverse(condition)`` converts
condition to read::
from sqlalchemy.sql import visitors
- stmt = select(some_table).where(some_table.c.foo == 'bar')
+ stmt = select(some_table).where(some_table.c.foo == "bar")
+
def visit_bindparam(bind_param):
print("found bound value: %s" % bind_param.value)
+
visitors.traverse(stmt, {}, {"bindparam": visit_bindparam})
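As a rough illustration of the dispatch mechanism, here is a pure-Python sketch of visiting nodes keyed by a string type name, loosely analogous to ``visitors.traverse(stmt, {}, {"bindparam": visit_bindparam})``. The ``Node``/``BindParam``/``visit_name`` names are hypothetical stand-ins, not SQLAlchemy internals:

```python
class Node:
    visit_name = "node"

    def __init__(self, *children):
        self.children = children


class BindParam(Node):
    visit_name = "bindparam"

    def __init__(self, value):
        super().__init__()
        self.value = value


def traverse(node, visitors):
    # depth-first walk; invoke the visitor registered for the node's
    # visit_name, if any, then recurse into children
    fn = visitors.get(node.visit_name)
    if fn is not None:
        fn(node)
    for child in node.children:
        traverse(child, visitors)
    return node


found = []
expr = Node(BindParam(5), Node(BindParam(7)))
traverse(expr, {"bindparam": lambda n: found.append(n.value)})
assert found == [5, 7]
```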
The iteration of objects uses the :func:`.visitors.iterate` function,
passed, each argument combination is turned into a pytest.param() object,
mapping the elements of the argument tuple to produce an id based on a
character value in the same position within the string template using the
- following scheme::
+ following scheme:
+
+ .. sourcecode:: text
i - the given argument is a string that is part of the id only, don't
pass it as an argument
(operator.ne, "ne"),
(operator.gt, "gt"),
(operator.lt, "lt"),
- id_="na"
+ id_="na",
)
def test_operator(self, opfunc, name):
pass
@testing.variation("querytyp", ["select", "subquery", "legacy_query"])
@testing.variation("lazy", ["select", "raise", "raise_on_sql"])
- def test_thing(
- self,
- querytyp,
- lazy,
- decl_base
- ):
+ def test_thing(self, querytyp, lazy, decl_base):
class Thing(decl_base):
- __tablename__ = 'thing'
+ __tablename__ = "thing"
# use name directly
rel = relationship("Rel", lazy=lazy.name)
else:
querytyp.fail()
-
The variable provided is a slots object of boolean variables, as well
as the name of the case itself under the attribute ".name"
"""Generate a set of URLs to test given configured URLs plus additional
driver names.
- Given::
+ Given:
+
+ .. sourcecode:: text
--dburi postgresql://db1 \
--dburi postgresql://db2 \
--dbdriver=psycopg2 --dbdriver=asyncpg
Noting that the default postgresql driver is psycopg2, the output
- would be::
+ would be:
+
+ .. sourcecode:: text
postgresql+psycopg2://db1
postgresql+asyncpg://db1
we want to keep it in that dburi.
Driver specific query options can be specified by adding them to the
- driver name. For example, to a sample option the asyncpg::
+ driver name. For example, to add a sample option to the asyncpg driver:
+
+ .. sourcecode:: text
--dburi postgresql://db1 \
--dbdriver=asyncpg?some_option=a_value
@property
def table_value_constructor(self):
- """Database / dialect supports a query like::
+ """Database / dialect supports a query like:
+
+ .. sourcecode:: sql
SELECT * FROM VALUES ( (c1, c2), (c1, c2), ...)
AS some_table(col1, col2)
@property
def binary_literals(self):
"""target backend supports simple binary literals, e.g. an
- expression like::
+ expression like:
+
+ .. sourcecode:: sql
SELECT CAST('foo' AS BINARY)
expr = decimal.Decimal("15.7563")
- value = e.scalar(
- select(literal(expr))
- )
+ value = e.scalar(select(literal(expr)))
assert value == expr
present in a subquery in the WHERE clause.
This is an ANSI-standard syntax that apparently MySQL can't handle,
- such as::
+ such as:
+
+ .. sourcecode:: sql
UPDATE documents SET flag=1 WHERE documents.title IN
(SELECT max(documents.title) AS title
"""target database supports ordering by a column from a SELECT
inside of a UNION
- E.g. (SELECT id, ...) UNION (SELECT id, ...) ORDER BY id
+ E.g.:
+
+ .. sourcecode:: sql
+
+ (SELECT id, ...) UNION (SELECT id, ...) ORDER BY id
"""
return exclusions.open()
"""target backend supports ORDER BY a column label within an
expression.
- Basically this::
+ Basically this:
+
+ .. sourcecode:: sql
select data as foo from test order by foo || 'bar'
dict(lazy=False, passive=True),
dict(lazy=False, passive=True, raiseload=True),
)
-
+ def test_fn(lazy, passive, raiseload): ...
would result in::
@testing.combinations(
- ('', False, False, False),
- ('lazy', True, False, False),
- ('lazy_passive', True, True, False),
- ('lazy_passive', True, True, True),
- id_='iaaa',
- argnames='lazy,passive,raiseload'
+ ("", False, False, False),
+ ("lazy", True, False, False),
+ ("lazy_passive", True, True, False),
+ ("lazy_passive", True, True, True),
+ id_="iaaa",
+ argnames="lazy,passive,raiseload",
)
+ def test_fn(lazy, passive, raiseload): ...
"""
Example::
- >>> a = ['__tablename__', 'id', 'x', 'created_at']
- >>> b = ['id', 'name', 'data', 'y', 'created_at']
+ >>> a = ["__tablename__", "id", "x", "created_at"]
+ >>> b = ["id", "name", "data", "y", "created_at"]
>>> merge_lists_w_ordering(a, b)
['__tablename__', 'id', 'name', 'data', 'y', 'x', 'created_at']
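One way such an order-preserving merge can be implemented with only the standard library is via ``difflib.SequenceMatcher``. This is an illustrative sketch, not SQLAlchemy's actual implementation:

```python
from difflib import SequenceMatcher


def merge_lists_w_ordering(a, b):
    # align the two lists on their common elements; for divergent runs,
    # emit b's elements before a's, preserving each list's relative order
    result = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        if op == "equal":
            result.extend(a[i1:i2])
        else:
            result.extend(b[j1:j2])
            result.extend(a[i1:i2])
    return result


a = ["__tablename__", "id", "x", "created_at"]
b = ["id", "name", "data", "y", "created_at"]
assert merge_lists_w_ordering(a, b) == [
    "__tablename__", "id", "name", "data", "y", "x", "created_at"
]
```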
weak_identity_map=(
"0.7",
"the :paramref:`.Session.weak_identity_map parameter "
- "is deprecated."
+ "is deprecated.",
)
-
)
+ def some_function(**kwargs): ...
"""
"""format_argspec_plus with considerations for typical __init__ methods
Wraps format_argspec_plus with error handling strategies for typical
- __init__ cases::
+ __init__ cases:
+
+ .. sourcecode:: text
object.__init__ -> (self)
other unreflectable (usually C) -> (self, *args, **kwargs)
def getargspec_init(method):
"""inspect.getargspec with considerations for typical __init__ methods
- Wraps inspect.getargspec with error handling for typical __init__ cases::
+ Wraps inspect.getargspec with error handling for typical __init__ cases:
+
+ .. sourcecode:: text
object.__init__ -> (self)
other unreflectable (usually C) -> (self, *args, **kwargs)
class symbol(int):
"""A constant symbol.
- >>> symbol('foo') is symbol('foo')
+ >>> symbol("foo") is symbol("foo")
True
- >>> symbol('foo')
+ >>> symbol("foo")
<symbol 'foo'>
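A minimal self-contained sketch of such an interning pattern, where constructing the same name twice returns the identical object (illustrative only; SQLAlchemy's real ``symbol`` differs in detail):

```python
class symbol(int):
    # interned constants: the same name always yields the same object,
    # so identity ("is") comparisons are safe
    _symbols = {}

    def __new__(cls, name):
        try:
            return cls._symbols[name]
        except KeyError:
            sym = cls._symbols[name] = int.__new__(cls, len(cls._symbols) + 1)
            sym.name = name
            return sym

    def __repr__(self):
        return "<symbol %r>" % self.name


assert symbol("foo") is symbol("foo")
assert repr(symbol("foo")) == "<symbol 'foo'>"
```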
A slight refinement of the MAGICCOOKIE=object() pattern. The primary
database in process.
"""
+
import logging
import sys
ll = list
+
def make_class() -> None:
x: ll[int] = [1, 2, 3]
-
""" # noqa: E501
class Foo(decl_base):
e.g.::
self._fixture_from_geometry(
- "a": {
- "subclasses": {
- "b": {"polymorphic_load": "selectin"},
- "c": {
- "subclasses": {
- "d": {
- "polymorphic_load": "inlne", "single": True
- },
- "e": {
- "polymorphic_load": "inline", "single": True
+ {
+ "a": {
+ "subclasses": {
+ "b": {"polymorphic_load": "selectin"},
+ "c": {
+ "subclasses": {
+ "d": {"polymorphic_load": "inlne", "single": True},
+ "e": {
+ "polymorphic_load": "inline",
+ "single": True,
+ },
},
+ "polymorphic_load": "selectin",
},
- "polymorphic_load": "selectin",
}
}
}
would provide the equivalent of::
class a(Base):
- __tablename__ = 'a'
+ __tablename__ = "a"
id = Column(Integer, primary_key=True)
a_data = Column(String(50))
type = Column(String(50))
- __mapper_args__ = {
- "polymorphic_on": type,
- "polymorphic_identity": "a"
- }
+ __mapper_args__ = {"polymorphic_on": type, "polymorphic_identity": "a"}
+
class b(a):
- __tablename__ = 'b'
- id = Column(ForeignKey('a.id'), primary_key=True)
+ __tablename__ = "b"
+ id = Column(ForeignKey("a.id"), primary_key=True)
b_data = Column(String(50))
__mapper_args__ = {
"polymorphic_identity": "b",
- "polymorphic_load": "selectin"
+ "polymorphic_load": "selectin",
}
# ...
+
class c(a):
- __tablename__ = 'c'
- class d(c):
- # ...
- class e(c):
- # ...
+ __tablename__ = "c"
+ class d(c): ...
+
+
+ class e(c): ...
Declarative is used so that we get extra behaviors of declarative,
such as single-inheritance column masking.
- """
+ """ # noqa: E501
run_create_tables = "each"
run_define_tables = "each"
that points to itself, e.g. within a SQL function or similar.
The test is against a materialized path setup.
- this is an **extremely** unusual case::
+ this is an **extremely** unusual case:
+
+ .. sourcecode:: text
Entity
------
the relationship(), one col points
to itself in the same table.
- this is a very unusual case::
+ this is a very unusual case:
+
+ .. sourcecode:: text
company employee
---------- ----------
@property
def binary_literals(self):
"""target backend supports simple binary literals, e.g. an
- expression like::
+ expression like:
+
+ .. sourcecode:: sql
SELECT CAST('foo' AS BINARY)
present in a subquery in the WHERE clause.
This is an ANSI-standard syntax that apparently MySQL can't handle,
- such as::
+ such as:
+
+ .. sourcecode:: sql
UPDATE documents SET flag=1 WHERE documents.title IN
(SELECT max(documents.title) AS title
expr = decimal.Decimal("15.7563")
- value = e.scalar(
- select(literal(expr))
- )
+ value = e.scalar(select(literal(expr)))
assert value == expr
def test_recursive_union_no_alias_two(self):
"""
- pg's example::
+ pg's example:
+
+ .. sourcecode:: sql
WITH RECURSIVE t(n) AS (
VALUES (1)
@testing.combinations(("lateral",), ("cartesian",), ("join",))
def test_lateral_subqueries(self, control):
"""
- ::
+ .. sourcecode:: sql
test=> create table a (id integer);
CREATE TABLE
def test_alias_column(self):
"""
-
- ::
+ .. sourcecode:: sql
SELECT x, y
FROM
def test_column_valued_two(self):
"""
-
- ::
+ .. sourcecode:: sql
SELECT x, y
FROM
def test_function_alias(self):
"""
- ::
+ .. sourcecode:: sql
SELECT result_elem -> 'Field' as field
FROM "check" AS check_, json_array_elements(
"""test the quoting of labels.
If labels aren't quoted, a query in postgresql in particular will
- fail since it produces::
+ fail since it produces:
+
+ .. sourcecode:: sql
SELECT
LaLa.lowercase, LaLa."UPPERCASE", LaLa."MixedCase", LaLa."ASC"
def run(cmd: code_writer_cmd):
i = 0
- for file in sa_path.glob(f"**/*_cy.py"):
+ for file in sa_path.glob("**/*_cy.py"):
run_file(cmd, file)
i += 1
cmd.write_status(f"\nDone. Processed {i} files.")
from argparse import RawDescriptionHelpFormatter
from collections.abc import Iterator
from functools import partial
+from itertools import chain
from pathlib import Path
import re
from typing import NamedTuple
home = Path(__file__).parent.parent
-ignore_paths = (re.compile(r"changelog/unreleased_\d{2}"),)
+ignore_paths = (
+ re.compile(r"changelog/unreleased_\d{2}"),
+ re.compile(r"README\.unittests\.rst"),
+ re.compile(r"\.tox"),
+ re.compile(r"build"),
+)
class BlockLine(NamedTuple):
errors: list[tuple[int, str, Exception]],
is_doctest: bool,
file: str,
+ is_python_file: bool,
) -> list[str]:
if not is_doctest:
# The first line may have additional padding. Remove then restore later
add_padding = None
code = "\n".join(l.code for l in input_block)
+ mode = PYTHON_BLACK_MODE if is_python_file else RST_BLACK_MODE
try:
- formatted = format_str(code, mode=BLACK_MODE)
+ formatted = format_str(code, mode=mode)
except Exception as e:
start_line = input_block[0].line_no
first_error = not errors
r"^(((?!\.\.).+::)|(\.\.\s*sourcecode::(.*py.*)?)|(::))$"
)
start_space = re.compile(r"^(\s*)[^ ]?")
+not_python_line = re.compile(r"^\s+[$:]")
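The ``not_python_line`` pattern above detects indented console-prompt or field-style lines that should not be fed to black. A quick demonstration of what the regex, as written, does and does not match:

```python
import re

not_python_line = re.compile(r"^\s+[$:]")

# indented shell prompts and colon-led lines match
assert not_python_line.match("    $ pytest -n4")
assert not_python_line.match("  :param x: a field line")
# ordinary indented Python does not
assert not not_python_line.match("    print('hi')")
# an unindented line never matches: the pattern requires leading whitespace
assert not not_python_line.match("$ top-level prompt")
```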
def format_file(
doctest_block: _Block | None = None
plain_block: _Block | None = None
+ is_python_file = file.suffix == ".py"
+
plain_code_section = False
plain_padding = None
plain_padding_len = None
errors=errors,
is_doctest=True,
file=str(file),
+ is_python_file=is_python_file,
)
def doctest_format():
errors=errors,
is_doctest=False,
file=str(file),
+ is_python_file=is_python_file,
)
def plain_format():
]
continue
buffer.append(line)
+ elif (
+ is_python_file
+ and not plain_block
+ and not_python_line.match(line)
+ ):
+ # not a python block. ignore it
+ plain_code_section = False
+ buffer.append(line)
else:
# start of a plain block
assert not doctest_block
def iter_files(directory: str) -> Iterator[Path]:
+ dir_path = home / directory
yield from (
file
- for file in (home / directory).glob("./**/*.rst")
+ for file in chain(
+ dir_path.glob("./**/*.rst"), dir_path.glob("./**/*.py")
+ )
if not any(pattern.search(file.as_posix()) for pattern in ignore_paths)
)
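The effect of the ``ignore_paths`` patterns in the filter above can be seen with plain ``re`` and ``PurePosixPath`` (illustrative file names, and only a subset of the patterns):

```python
import re
from pathlib import PurePosixPath

ignore_paths = (
    re.compile(r"changelog/unreleased_\d{2}"),
    re.compile(r"\.tox"),
)

candidates = [
    PurePosixPath("doc/build/core/types.rst"),
    PurePosixPath("changelog/unreleased_20/8000.rst"),
    PurePosixPath(".tox/py310/tmp.py"),
]
# keep only files whose posix path matches none of the ignore patterns
kept = [
    f
    for f in candidates
    if not any(p.search(f.as_posix()) for p in ignore_paths)
]
assert [str(f) for f in kept] == ["doc/build/core/types.rst"]
```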
"-d",
"--directory",
help="Find documents in this directory and its sub dirs",
- default="doc/build",
+ default=".",
)
parser.add_argument(
"-c",
"-l",
"--project-line-length",
help="Configure the line length to the project value instead "
- "of using the black default of 88",
+ "of using the black default of 88. Python files always use the"
+ "project line length",
action="store_true",
)
parser.add_argument(
args = parser.parse_args()
config = parse_pyproject_toml(home / "pyproject.toml")
- BLACK_MODE = Mode(
- target_versions={
- TargetVersion[val.upper()]
- for val in config.get("target_version", [])
- if val != "py27"
- },
+ target_versions = {
+ TargetVersion[val.upper()]
+ for val in config.get("target_version", [])
+ if val != "py27"
+ }
+
+ RST_BLACK_MODE = Mode(
+ target_versions=target_versions,
line_length=(
config.get("line_length", DEFAULT_LINE_LENGTH)
if args.project_line_length
else DEFAULT_LINE_LENGTH
),
)
+ PYTHON_BLACK_MODE = Mode(
+ target_versions=target_versions,
+ # Remove a few chars to account for normal indent
+ line_length=(config.get("line_length", 4) - 4 or DEFAULT_LINE_LENGTH),
+ )
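The ``or`` fallback in the Python-mode line length is subtle: when the ``line_length`` key is absent, the default of 4 minus 4 yields 0, which is falsy, so the expression falls back to black's default. A standalone check of that expression (hypothetical wrapper function):

```python
DEFAULT_LINE_LENGTH = 88


def python_block_line_length(config):
    # mirrors: config.get("line_length", 4) - 4 or DEFAULT_LINE_LENGTH
    return config.get("line_length", 4) - 4 or DEFAULT_LINE_LENGTH


assert python_block_line_length({"line_length": 79}) == 75
assert python_block_line_length({}) == DEFAULT_LINE_LENGTH
```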
REPORT_ONLY_DOCTEST = args.report_doctest
main(args.file, args.directory, args.exit_on_error, args.check)
# use tempfile in same path as the module, or at least in the
# current working directory, so that black / zimports use
# local pyproject.toml
- with NamedTemporaryFile(
- mode="w",
- delete=False,
- suffix=".py",
- ) as buf, open(filename) as orig_py:
+ with (
+ NamedTemporaryFile(
+ mode="w",
+ delete=False,
+ suffix=".py",
+ ) as buf,
+ open(filename) as orig_py,
+ ):
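The parenthesized form above groups multiple context managers in one ``with`` statement; note that this syntax requires Python 3.10 or later. A minimal runnable sketch with stand-in temporary files:

```python
import os
import tempfile

# parenthesized multi-item "with" is a Python 3.10+ feature
with (
    tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".py") as buf,
    tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".txt") as log,
):
    buf.write("x = 1\n")
    log.write("processed\n")

# delete=False keeps the files around after the block, mirroring how the
# tool reopens its temp file later
with open(buf.name) as f:
    assert f.read() == "x = 1\n"
os.unlink(buf.name)
os.unlink(log.name)
```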
in_block = False
current_clsname = None
for line in orig_py:
def process_functions(filename: str, cmd: code_writer_cmd) -> str:
- with NamedTemporaryFile(
- mode="w",
- delete=False,
- suffix=".py",
- ) as buf, open(filename) as orig_py:
+ with (
+ NamedTemporaryFile(
+ mode="w",
+ delete=False,
+ suffix=".py",
+ ) as buf,
+ open(filename) as orig_py,
+ ):
indent = ""
in_block = False
# current working directory, so that black / zimports use
# local pyproject.toml
found = 0
- with NamedTemporaryFile(
- mode="w",
- delete=False,
- suffix=".py",
- ) as buf, open(filename) as orig_py:
+ with (
+ NamedTemporaryFile(
+ mode="w",
+ delete=False,
+ suffix=".py",
+ ) as buf,
+ open(filename) as orig_py,
+ ):
indent = ""
in_block = False
current_fnname = given_fnname = None
Demos::
- python tools/trace_orm_adapter.py -m pytest \
+ $ python tools/trace_orm_adapter.py -m pytest \
test/orm/inheritance/test_polymorphic_rel.py::PolymorphicAliasedJoinsTest::test_primary_eager_aliasing_joinedload
- python tools/trace_orm_adapter.py -m pytest \
+ $ python tools/trace_orm_adapter.py -m pytest \
test/orm/test_eager_relations.py::LazyLoadOptSpecificityTest::test_pathed_joinedload_aliased_abs_bcs
- python tools/trace_orm_adapter.py my_test_script.py
+ $ python tools/trace_orm_adapter.py my_test_script.py
The above two tests should spit out a ton of debug output. If a test or program
has no debug output at all, that's a good thing! It means ORMAdapter isn't
used for that case.
-You can then set a breakpoint at the end of any adapt step:
+You can then set a breakpoint at the end of any adapt step::
- python tools/trace_orm_adapter.py -d 10 -m pytest -s \
+ $ python tools/trace_orm_adapter.py -d 10 -m pytest -s \
test/orm/test_eager_relations.py::LazyLoadOptSpecificityTest::test_pathed_joinedload_aliased_abs_bcs