Several tests require alternate usernames or schemas to be present, which
are used to test dotted-name access scenarios. On some databases such
-as Oracle or Sybase, these are usernames, and others such as Postgresql
+as Oracle or Sybase, these are usernames, and on others such as PostgreSQL
and MySQL they are schemas. The requirement applies to all backends
except SQLite and Firebird. The names are::
test_schema
- test_schema_2 (only used on Postgresql)
+ test_schema_2 (only used on PostgreSQL)
Please refer to your vendor documentation for the proper syntax to create
these namespaces - the database user must have permission to create and drop
tox -e -- -n 4 --db sqlite --db postgresql --db mysql
-Each backend has a different scheme for setting up the database. Postgresql
+Each backend has a different scheme for setting up the database. PostgreSQL
still needs the "test_schema" and "test_schema_2" schemas present, as the
parallel databases are created using the base database as a "template".
of the auto-generated sequence of a SERIAL column,
which currently only occurs if implicit_returning=False,
now accommodates if the table + column name is greater
- than 63 characters using the same logic Postgresql uses.
+ than 63 characters using the same logic PostgreSQL uses.
.. change::
:tags: postgresql
:tags: postgresql
:tickets: 1071
- Postgresql now reflects sequence names associated with
+ PostgreSQL now reflects sequence names associated with
SERIAL columns correctly, after the name of the sequence
has been changed. Thanks to Kumar McMillan for the patch.
:tags: postgresql
:tickets: 1769
- Postgresql reflects the name of primary key constraints,
+ PostgreSQL reflects the name of primary key constraints,
if one exists.
.. change::
as well as the adaptation of the Python operator into
a SQL operator, based on the full left/right/operator
of the given expression. In particular
- the date/time/interval system created for Postgresql
+ the date/time/interval system created for PostgreSQL
EXTRACT has now been generalized into
the type system. The previous behavior which often
occurred of an expression "column + literal" forcing
returning() support is native to insert(), update(),
delete(). Implementations of varying levels of
- functionality exist for Postgresql, Firebird, MSSQL and
+ functionality exist for PostgreSQL, Firebird, MSSQL and
Oracle. returning() can be called explicitly with column
expressions which are then returned in the resultset,
usually via fetchone() or first().
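For illustration, a minimal sketch of this pattern (the table, connection and URL below are placeholders)::

    from sqlalchemy import MetaData, Table, Column, Integer, String, create_engine

    metadata = MetaData()
    users = Table('users', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(50)))

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    metadata.create_all(engine)

    conn = engine.connect()
    result = conn.execute(
        users.insert().returning(users.c.id, users.c.name),
        name='some name')
    new_id, new_name = result.first()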
another will now be grouped with parenthesis - previously,
the first compound element in the list would not be grouped,
as SQLite doesn't like a statement to start with
- parenthesis. However, Postgresql in particular has
+ parenthesis. However, PostgreSQL in particular has
precedence rules regarding INTERSECT, and it is
more consistent for parenthesis to be applied equally
to all sub-elements. So now, the workaround for SQLite
The "start" and "increment" attributes on Sequence now
generate "START WITH" and "INCREMENT BY" by default,
- on Oracle and Postgresql. Firebird doesn't support
+ on Oracle and PostgreSQL. Firebird doesn't support
these keywords right now.
.. change::
optimized, resulting in varying speed improvements:
Unicode, PickleType, Interval, TypeDecorator, Binary.
Also the following dbapi-specific implementations have been improved:
- Time, Date and DateTime on Sqlite, ARRAY on Postgresql,
+ Time, Date and DateTime on SQLite, ARRAY on PostgreSQL,
Time on MySQL, Numeric(as_decimal=False) on MySQL, oursql and
pypostgresql, DateTime on cx_oracle and LOB-based types on cx_oracle.
:tickets: 2676
:versions: 0.8.0
- Added support for Postgresql's traditional SUBSTRING
+ Added support for PostgreSQL's traditional SUBSTRING
function syntax, renders as "SUBSTRING(x FROM y FOR z)"
when regular ``func.substring()`` is used.
Courtesy Gunnlaugur Þór Briem.
:tickets: 2445
Added new for_update/with_lockmode()
- options for Postgresql: for_update="read"/
+ options for PostgreSQL: for_update="read"/
with_lockmode("read"),
for_update="read_nowait"/
with_lockmode("read_nowait").
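A rough sketch of both spellings (``users``, ``User`` and ``session`` are assumed placeholders; on PostgreSQL the "read" variants render ``FOR SHARE``)::

    # Core: SELECT ... FOR SHARE / FOR SHARE NOWAIT
    stmt = users.select(for_update="read")
    stmt_nowait = users.select(for_update="read_nowait")

    # ORM equivalent
    session.query(User).with_lockmode("read").all()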
The update() construct can now accommodate
multiple tables in the WHERE clause, which will
render an "UPDATE..FROM" construct, recognized by
- Postgresql and MSSQL. When compiled on MySQL,
+ PostgreSQL and MSSQL. When compiled on MySQL,
will instead generate "UPDATE t1, t2, ..". MySQL
additionally can render against multiple tables in the
SET clause, if Column objects are used as keys
:tickets: 1679
a "has_schema" method has been implemented
- on dialect, but only works on Postgresql so far.
+ on dialect, but only works on PostgreSQL so far.
Courtesy Manlio Perillo.
.. change::
:tags: postgresql, bug
:tickets: 2311
- Postgresql dialect memoizes that an ENUM of a
+ PostgreSQL dialect memoizes that an ENUM of a
particular name was processed
during a create/drop sequence. This allows
a create/drop sequence to work without any
:tickets: 2081
REAL has been added to the core types. Supported
- by Postgresql, SQL Server, MySQL, SQLite. Note
+ by PostgreSQL, SQL Server, MySQL, SQLite. Note
that the SQL Server and MySQL versions, which
add extra arguments, are also still available
from those dialects.
:tickets: 1069
Query.distinct() now accepts column expressions
- as \*args, interpreted by the Postgresql dialect
+ as \*args, interpreted by the PostgreSQL dialect
as DISTINCT ON (<expr>).
.. change::
:tickets: 1069
select.distinct() now accepts column expressions
- as \*args, interpreted by the Postgresql dialect
+ as \*args, interpreted by the PostgreSQL dialect
as DISTINCT ON (<expr>). Note this was already
available via passing a list to the `distinct`
keyword argument to select().
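A short sketch of the Core form (the ``users`` table here is illustrative)::

    from sqlalchemy import MetaData, Table, Column, Integer, String, select

    users = Table('users', MetaData(),
        Column('id', Integer, primary_key=True),
        Column('name', String(50)))

    stmt = select([users.c.id, users.c.name]).distinct(users.c.name)
    # renders on PostgreSQL:
    # SELECT DISTINCT ON (users.name) users.id, users.name FROM users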
"isolation_level" argument, sets transaction isolation
level for that connection only until returned to the
connection pool, for those backends which support it
- (SQLite, Postgresql)
+ (SQLite, PostgreSQL)
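For example, a minimal sketch of the per-connection form (the URL is a placeholder)::

    from sqlalchemy import create_engine

    engine = create_engine("postgresql://scott:tiger@localhost/test")

    conn = engine.connect()
    conn = conn.execution_options(isolation_level="SERIALIZABLE")
    # ... work on this connection at SERIALIZABLE isolation ...
    conn.close()  # the setting lasts only until the connection returns to the pool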
.. change::
:tags: sql
of the auto-generated sequence of a SERIAL column,
which currently only occurs if implicit_returning=False,
now accommodates if the table + column name is greater
- than 63 characters using the same logic Postgresql uses. (also in 0.6.7)
+ than 63 characters using the same logic PostgreSQL uses. (also in 0.6.7)
.. change::
:tags: postgresql
'unbounded'. This also occurs for the VARBINARY type.
This behavior makes these types more closely compatible
- with Postgresql's VARCHAR type which is similarly unbounded
+ with PostgreSQL's VARCHAR type which is similarly unbounded
when no length is specified.
.. change::
:versions: 0.9.4
Fixed regression caused by release 0.8.5 / 0.9.3's compatibility
- enhancements where index reflection on Postgresql versions specific
+ enhancements where index reflection on PostgreSQL versions specific
to only the 8.1, 8.2 series again
broke, surrounding the ever problematic int2vector type. While
int2vector supports array operations as of 8.1, apparently it only
:tags: postgresql, bug
:versions: 0.9.3
- Support has been improved for Postgresql reflection behavior on very old
- (pre 8.1) versions of Postgresql, and potentially other PG engines
+ Support has been improved for PostgreSQL reflection behavior on very old
+ (pre 8.1) versions of PostgreSQL, and potentially other PG engines
such as Redshift (assuming Redshift reports the version as < 8.1).
The query for "indexes" as well as "primary keys" relies upon inspecting
a so-called "int2vector" datatype, which refuses to coerce to an array
:tickets: 2291
:versions: 0.9.3
- Revised this very old issue where the Postgresql "get primary key"
+ Revised this very old issue where the PostgreSQL "get primary key"
reflection query was updated to take into account primary key constraints
that were renamed; the newer query fails on very old versions of
- Postgresql such as version 7, so the old query is restored in those cases
+ PostgreSQL such as version 7, so the old query is restored in those cases
when server_version_info < (8, 0) is detected.
.. change::
:tickets: 2819
:versions: 0.9.0b1
- Fixed bug where Postgresql version strings that had a prefix preceding
- the words "Postgresql" or "EnterpriseDB" would not parse.
+ Fixed bug where PostgreSQL version strings that had a prefix preceding
+ the words "PostgreSQL" or "EnterpriseDB" would not parse.
Courtesy Scott Schaefer.
.. change::
Added a new flag ``system=True`` to :class:`.Column`, which marks
the column as a "system" column which is automatically made present
- by the database (such as Postgresql ``oid`` or ``xmin``). The
+ by the database (such as PostgreSQL ``oid`` or ``xmin``). The
column will be omitted from the ``CREATE TABLE`` statement but will
otherwise be available for querying. In addition, the
:class:`.CreateColumn` construct can be applied to a custom
form of some expressions when referring to the ``.c`` collection
on a ``select()`` construct, but the ``str()`` form isn't available
since the element relies on dialect-specific compilation constructs,
- notably the ``__getitem__()`` operator as used with a Postgresql
+ notably the ``__getitem__()`` operator as used with a PostgreSQL
``ARRAY`` element. The fix also adds a new exception class
:exc:`.UnsupportedCompilationError` which is raised in those cases
where a compiler is asked to compile something it doesn't know
:versions: 0.9.0b1
The behavior of :func:`.extract` has been simplified on the
- Postgresql dialect to no longer inject a hardcoded ``::timestamp``
+ PostgreSQL dialect to no longer inject a hardcoded ``::timestamp``
or similar cast into the given expression, as this interfered
with types such as timezone-aware datetimes, but also
does not appear to be at all necessary with modern versions
:versions: 0.9.0b1
Fixed bug where the order of columns in a multi-column
- Postgresql index would be reflected in the wrong order.
+ PostgreSQL index would be reflected in the wrong order.
Courtesy Roman Podolyaka.
.. change::
:tags: feature, postgresql
:versions: 0.9.0b1
- Support for Postgresql 9.2 range types has been added.
+ Support for PostgreSQL 9.2 range types has been added.
Currently, no type translation is provided, so works
directly with strings or psycopg2 2.5 range extension types
at the moment. Patch courtesy Chris Withers.
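For illustration, a small sketch using one of the range types (table and column names are made up here)::

    from sqlalchemy import MetaData, Table, Column, Integer
    from sqlalchemy.dialects.postgresql import INT4RANGE

    booking = Table('booking', MetaData(),
        Column('id', Integer, primary_key=True),
        Column('during', INT4RANGE))

    # with no type translation, values may be passed as strings, e.g. "[1, 4)"
    stmt = booking.insert().values(during="[1, 4)")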
:tags: bug, postgresql
:tickets: 2681
- The operators for the Postgresql ARRAY type supports
+ The operators for the PostgreSQL ARRAY type support
input types of sets, generators, etc. even when
a dimension is not specified, by turning the given
iterable into a collection unconditionally.
is now copied in all cases when :meth:`.Table.tometadata` happens,
and if ``inherit_schema=True``, the type will take on the new
schema name passed to the method. The ``schema`` is important
- when used with the Postgresql backend, as the type results in
+ when used with the PostgreSQL backend, as the type results in
a ``CREATE TYPE`` statement.
.. change::
The :class:`.Insert` construct now supports multi-valued inserts,
that is, an INSERT that renders like
"INSERT INTO table VALUES (...), (...), ...".
- Supported by Postgresql, SQLite, and MySQL.
+ Supported by PostgreSQL, SQLite, and MySQL.
Big thanks to Idan Kamara for doing the legwork on this one.
.. seealso::
:tags: postgresql, feature
:tickets: 2606
- :class:`.HSTORE` is now available in the Postgresql dialect.
+ :class:`.HSTORE` is now available in the PostgreSQL dialect.
Will also use psycopg2's extensions if available. Courtesy
Audrius Kažukauskas.
the `getitem` operator, i.e. the bracket
operator in Python. This is used at first
to provide index and slice behavior to the
- Postgresql ARRAY type, and also provides a hook
+ PostgreSQL ARRAY type, and also provides a hook
for end-user definition of custom __getitem__
schemes which can be applied at the type
level as well as within ORM-level custom
String types. When present, renders as
COLLATE <collation>. This to support the
COLLATE keyword now supported by several
- databases including MySQL, SQLite, and Postgresql.
+ databases including MySQL, SQLite, and PostgreSQL.
.. change::
:tags: change, sql
:tags: postgresql, feature
:tickets: 2506
- Added support for the Postgresql ONLY
+ Added support for the PostgreSQL ONLY
keyword, which can appear corresponding to a
table in a SELECT, UPDATE, or DELETE statement.
The phrase is established using with_hint().
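For example, a sketch along these lines (``mytable`` is illustrative)::

    from sqlalchemy import MetaData, Table, Column, Integer

    mytable = Table('mytable', MetaData(), Column('id', Integer))

    # SELECT ... FROM ONLY mytable
    stmt = mytable.select().with_hint(mytable, 'ONLY', 'postgresql')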
:tickets:
The "ischema_names" dictionary of the
- Postgresql dialect is "unofficially" customizable.
+ PostgreSQL dialect is "unofficially" customizable.
Meaning, new types such as PostGIS types can
be added into this dictionary, and the PG type
reflection code should be able to handle simple
:pullreq: bitbucket:45
:versions: 1.0.0b1
- Added support for the ``CONCURRENTLY`` keyword with Postgresql
+ Added support for the ``CONCURRENTLY`` keyword with PostgreSQL
indexes, established using ``postgresql_concurrently``. Pull
request courtesy Iuri de Silvio.
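A brief sketch of the flag in use (table and index names are illustrative)::

    from sqlalchemy import MetaData, Table, Column, String, Index

    tbl = Table('testtbl', MetaData(), Column('data', String(50)))

    # CREATE INDEX CONCURRENTLY test_idx ON testtbl (data)
    idx = Index('test_idx', tbl.c.data, postgresql_concurrently=True)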
:tickets: 2940
:versions: 1.0.0b1
- Repaired support for Postgresql UUID types in conjunction with
+ Repaired support for PostgreSQL UUID types in conjunction with
the ARRAY type when using psycopg2. The psycopg2 dialect now
employs use of the psycopg2.extras.register_uuid() hook
so that UUID values are always passed to/from the DBAPI as
additionally, the newly added psycopg2 extension
``extras.register_default_jsonb`` is used to establish a JSON
deserializer passed to the dialect via the ``json_deserializer``
- argument. Also repaired the Postgresql integration tests which
+ argument. Also repaired the PostgreSQL integration tests which
weren't actually round-tripping the JSONB type as opposed to the
JSON type. Pull request courtesy Mateusz Susik.
:versions: 1.0.0b1
:tickets: 3174
- Fixed bug where Postgresql dialect would fail to render an
+ Fixed bug where PostgreSQL dialect would fail to render an
expression in an :class:`.Index` that did not correspond directly
to a table-bound column; typically when a :func:`.text` construct
was one of the expressions within the index; or could misinterpret the
:versions: 1.0.0b1
:tickets: 3159
- Fixed bug where Postgresql JSON type was not able to persist or
+ Fixed bug where PostgreSQL JSON type was not able to persist or
otherwise render a SQL NULL column value, rather than a JSON-encoded
``'null'``. To support this case, changes are as follows:
then force all :class:`.Boolean` and :class:`.Enum` types to
require names as well, as these implicitly create a
constraint, even if the ultimate target backend were one that does
- not require generation of the constraint such as Postgresql.
+ not require generation of the constraint such as PostgreSQL.
The mechanics of naming conventions for these particular
constraints has been reorganized such that the naming
determination is done at DDL compile time, rather than at
:versions: 1.0.0b1
:pullreq: github:101
- Added support for Postgresql JSONB via :class:`.JSONB`. Pull request
+ Added support for PostgreSQL JSONB via :class:`.JSONB`. Pull request
courtesy Damian Dimmich.
.. change::
:tickets: 3002
:versions: 1.0.0b1
- Added a new type :class:`.postgresql.OID` to the Postgresql dialect.
+ Added a new type :class:`.postgresql.OID` to the PostgreSQL dialect.
While "oid" is generally a private type within PG that is not exposed
in modern versions, there are some PG use cases such as large object
support where these types might be exposed, as well as within some
:pullreq: bitbucket:18
:versions: 1.0.0b1
- Added a new flag :paramref:`.ARRAY.zero_indexes` to the Postgresql
+ Added a new flag :paramref:`.ARRAY.zero_indexes` to the PostgreSQL
:class:`.ARRAY` type. When set to ``True``, a value of one will be
added to all array index values before passing to the database, allowing
better interoperability between Python style zero-based indexes and
- Postgresql one-based indexes. Pull request courtesy Alexey Terentev.
+ PostgreSQL one-based indexes. Pull request courtesy Alexey Terentev.
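For illustration, a minimal sketch of the flag (names are placeholders)::

    from sqlalchemy import MetaData, Table, Column, Integer
    from sqlalchemy.dialects import postgresql

    tbl = Table('data_table', MetaData(),
        Column('id', Integer, primary_key=True),
        Column('data', postgresql.ARRAY(Integer, zero_indexes=True)))

    # Python zero-based index; renders as data[1] on the PostgreSQL side
    expr = tbl.c.data[0]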
.. change::
:tags: bug, engine
Added a new dialect-level argument ``postgresql_ignore_search_path``;
this argument is accepted by both the :class:`.Table` constructor
as well as by the :meth:`.MetaData.reflect` method. When in use
- against Postgresql, a foreign-key referenced table which specifies
+ against PostgreSQL, a foreign-key referenced table which specifies
a remote schema name will retain that schema name even if the name
is present in the ``search_path``; the default behavior since 0.7.3
has been that schemas present in ``search_path`` would not be copied
:tickets: 2581
:pullreq: github:50
- Support for Postgresql JSON has been added, using the new
+ Support for PostgreSQL JSON has been added, using the new
:class:`.JSON` type. Huge thanks to Nathan Rice for
implementing and testing this.
:tags: feature, postgresql
:pullreq: bitbucket:8
- Added support for Postgresql TSVECTOR via the
+ Added support for PostgreSQL TSVECTOR via the
:class:`.postgresql.TSVECTOR` type. Pull request courtesy
Noufal Ibrahim.
:tags: feature, sql, postgresql, mysql
:tickets: 2183
- The Postgresql and MySQL dialects now support reflection/inspection
- of foreign key options, including ON UPDATE, ON DELETE. Postgresql
+ The PostgreSQL and MySQL dialects now support reflection/inspection
+ of foreign key options, including ON UPDATE, ON DELETE. PostgreSQL
also reflects MATCH, DEFERRABLE, and INITIALLY. Courtesy ijl.
.. change::
Added support for rendering ``SMALLSERIAL`` when a :class:`.SmallInteger`
type is used on a primary key autoincrement column, based on server
- version detection of Postgresql version 9.2 or greater.
+ version detection of PostgreSQL version 9.2 or greater.
.. change::
:tags: feature, mysql
:versions: 1.1.0b3
Fixed bug whereby :class:`.TypeDecorator` and :class:`.Variant`
- types were not deeply inspected enough by the Postgresql dialect
+ types were not deeply inspected enough by the PostgreSQL dialect
to determine if SMALLSERIAL or BIGSERIAL needed to be rendered
rather than SERIAL.
Fixed bug in :func:`.expression.text` construct where a double-colon
expression would not escape properly, e.g. ``some\:\:expr``, as is most
- commonly required when rendering Postgresql-style CAST expressions.
+ commonly required when rendering PostgreSQL-style CAST expressions.
.. change::
:tags: bug, sql
Fixed bug where CREATE TABLE with a no-column table, but a constraint
such as a CHECK constraint would render an erroneous comma in the
- definition; this scenario can occur such as with a Postgresql
+ definition; this scenario can occur such as with a PostgreSQL
INHERITS table that has no columns of its own.
.. change::
:tickets: 3573
- Fixed issue where the "FOR UPDATE OF" Postgresql-specific SELECT
+ Fixed issue where the "FOR UPDATE OF" PostgreSQL-specific SELECT
modifier would fail if the referred table had a schema qualifier;
PG needs the schema name to be omitted. Pull request courtesy
Diana Clarke.
Fixed regression in 1.0 where new feature of using "executemany"
for UPDATE statements in the ORM (e.g. :ref:`feature_updatemany`)
- would break on Postgresql and other RETURNING backends
+ would break on PostgreSQL and other RETURNING backends
when using server-side version generation
schemes, as the server side value is retrieved via RETURNING which
is not supported with executemany.
:pullreq: github:190
- An adjustment to the new Postgresql feature of reflecting storage
+ An adjustment to the new PostgreSQL feature of reflecting storage
options and USING of :ticket:`3455` released in 1.0.6,
- to disable the feature for Postgresql versions < 8.2 where the
+ to disable the feature for PostgreSQL versions < 8.2 where the
``reloptions`` column is not provided; this allows Amazon Redshift
- to again work as it is based on an 8.0.x version of Postgresql.
+ to again work as it is based on an 8.0.x version of PostgreSQL.
Fix courtesy Pete Hollobon.
:pullreq: github:186
Added support for the MINVALUE, MAXVALUE, NO MINVALUE, NO MAXVALUE,
- and CYCLE arguments for CREATE SEQUENCE as supported by Postgresql
+ and CYCLE arguments for CREATE SEQUENCE as supported by PostgreSQL
and Oracle. Pull request courtesy jakeogh.
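A small sketch of the new arguments (sequence name and values are illustrative)::

    from sqlalchemy import Sequence

    seq = Sequence('my_seq', minvalue=1, maxvalue=10, cycle=True)
    # roughly: CREATE SEQUENCE my_seq MINVALUE 1 MAXVALUE 10 CYCLE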
.. change::
label name for all backends, as described in :ref:`migration_1068`,
even though 1.0 includes a rewrite of this logic as part of
:ticket:`2992`. As far
- as emitting GROUP BY against a simple label, even Postgresql has
+ as emitting GROUP BY against a simple label, even PostgreSQL has
cases where it will raise an error even though the label to group
on should be apparent, so it is clear that GROUP BY should never
be rendered in this way automatically.
:tickets: 3343
Fixed bug where updated PG index reflection as a result of
- :ticket:`3184` would cause index operations to fail on Postgresql
+ :ticket:`3184` would cause index operations to fail on PostgreSQL
versions 8.4 and earlier. The enhancements are now
- disabled when using an older version of Postgresql.
+ disabled when using an older version of PostgreSQL.
.. change::
:tags: bug, sql
:tags: bug, postgresql
:tickets: 3319
- The Postgresql :class:`.postgresql.ENUM` type will emit a
+ The PostgreSQL :class:`.postgresql.ENUM` type will emit a
DROP TYPE instruction when a plain ``table.drop()`` is called,
assuming the object is not associated directly with a
:class:`.MetaData` object. In order to accommodate the use case of
be associated directly with the :class:`.MetaData` object; in this
case the type will only be created at the metadata level, or if
created directly. The rules for create/drop of
- Postgresql enumerated types have been highly reworked in general.
+ PostgreSQL enumerated types have been highly reworked in general.
.. seealso::
``pg_catalog.pg_table_is_visible(c.oid)``, rather than testing
for an exact schema match, when the schema name is None; this
so that the method will also illustrate that temporary tables
- are present. Note that this is a behavioral change, as Postgresql
+ are present. Note that this is a behavioral change, as PostgreSQL
allows a non-temporary table to silently overwrite an existing
temporary table of the same name, so this changes the behavior
of ``checkfirst`` in that unusual scenario.
The :class:`.UniqueConstraint` construct is now included when
reflecting a :class:`.Table` object, for databases where this
is applicable. In order to achieve this
- with sufficient accuracy, MySQL and Postgresql now contain features
+ with sufficient accuracy, MySQL and PostgreSQL now contain features
that correct for the duplication of indexes and unique constraints
when reflecting tables, indexes, and constraints.
In the case of MySQL, there is not actually a "unique constraint"
concept independent of a "unique index", so for this backend
:class:`.UniqueConstraint` continues to remain non-present for a
- reflected :class:`.Table`. For Postgresql, the query used to
+ reflected :class:`.Table`. For PostgreSQL, the query used to
detect indexes against ``pg_index`` has been improved to check for
the same construct in ``pg_constraint``, and the implicitly
constructed unique index is not included with a
In both cases, the :meth:`.Inspector.get_indexes` and the
:meth:`.Inspector.get_unique_constraints` methods return both
constructs individually, but include a new token
- ``duplicates_constraint`` in the case of Postgresql or
+ ``duplicates_constraint`` in the case of PostgreSQL or
``duplicates_index`` in the case
of MySQL to indicate when this condition is detected.
Pull request courtesy Johannes Erdfelt.
:pullreq: github:134
Added support for the FILTER keyword as applied to aggregate
- functions, supported by Postgresql 9.4. Pull request
+ functions, supported by PostgreSQL 9.4. Pull request
courtesy Ilja Everilä.
.. seealso::
for a non-nullable or ``ondelete="SET NULL"`` for a nullable set
of columns, the argument ``passive_deletes=True`` is also added to the
relationship. Note that not all backends support reflection of
- ondelete, but backends that do include Postgresql and MySQL.
+ ondelete, but backends that do include PostgreSQL and MySQL.
.. change::
:tags: feature, sql
and foreign tables, as well as support for materialized views
within :meth:`.Inspector.get_view_names`, and a new method
:meth:`.PGInspector.get_foreign_table_names` available on the
- Postgresql version of :class:`.Inspector`. Pull request courtesy
+ PostgreSQL version of :class:`.Inspector`. Pull request courtesy
Rodrigo Menezes.
.. seealso::
:pullreq: github:126
Added new method :meth:`.PGInspector.get_enums`, which when using the
- inspector for Postgresql will provide a list of ENUM types.
+ inspector for PostgreSQL will provide a list of ENUM types.
Pull request courtesy Ilya Pekelny.
.. change::
:tags: bug, sql, postgresql
:tickets: 3806
- Added compiler-level flags used by Postgresql to place additional
+ Added compiler-level flags used by PostgreSQL to place additional
parenthesis than would normally be generated by precedence rules
around operations involving JSON, HSTORE indexing operators as well as
- within their operands since it has been observed that Postgresql's
+ within their operands since it has been observed that PostgreSQL's
precedence rules for at least the HSTORE indexing operator are not
consistent between 9.4 and 9.5.
Fixed regression in JSON datatypes where the "literal processor" for
a JSON index value would not be invoked. The native String and Integer
datatypes are now called upon from within the JSONIndexType
- and JSONPathType. This is applied to the generic, Postgresql, and
+ and JSONPathType. This is applied to the generic, PostgreSQL, and
MySQL JSON types and also has a dependency on :ticket:`3766`.
.. change::
to handle any number of DBAPIs for a particular backend,
using a scheme that is inspired by that of JDBC. The
previous format still works, and will select a "default"
-DBAPI implementation, such as the Postgresql URL below that
+DBAPI implementation, such as the PostgreSQL URL below that
will use psycopg2:
::
that of the first compound element within another compound
(such as, a ``union()`` inside of an ``except_()``) wouldn't
be parenthesized. This is inconsistent and produces the
-wrong results on Postgresql, which has precedence rules
+wrong results on PostgreSQL, which has precedence rules
regarding INTERSECT, and it's generally a surprise. When
using complex composites with SQLite, you now need to turn
the first element into a subquery (which is also compatible
the ``from_engine()`` method will in some cases provide a
backend-specific inspector with additional capabilities,
-such as that of Postgresql which provides a
+such as that of PostgreSQL which provides a
``get_table_oid()`` method:
::
The ``insert()``, ``update()`` and ``delete()`` constructs
now support a ``returning()`` method, which corresponds to
-the SQL RETURNING clause as supported by Postgresql, Oracle,
+the SQL RETURNING clause as supported by PostgreSQL, Oracle,
MS-SQL, and Firebird. It is not supported for any other
backend at this time.
SQLAlchemy allows the DBAPI and backend database in use to
handle Unicode parameters when available, and does not add
operational overhead by checking the incoming type; modern
-systems like sqlite and Postgresql will raise an encoding
+systems like sqlite and PostgreSQL will raise an encoding
error on their end if invalid data is passed. In those
cases where SQLAlchemy does need to coerce a bind parameter
from Python Unicode to an encoded string, or when the
the largest label, and applies a CHECK constraint to the
table within the CREATE TABLE statement. When using MySQL,
the type by default uses MySQL's ENUM type, and when using
-Postgresql the type will generate a user defined type using
+PostgreSQL the type will generate a user defined type using
``CREATE TYPE <mytype> AS ENUM``. In order to create the
-type using Postgresql, the ``name`` parameter must be
+type using PostgreSQL, the ``name`` parameter must be
specified to the constructor. The type also accepts a
``native_enum=False`` option which will issue the
VARCHAR/CHECK strategy for all databases. Note that
-Postgresql ENUM types currently don't work with pg8000 or
+PostgreSQL ENUM types currently don't work with pg8000 or
zxjdbc.
Reflection Returns Dialect-Specific Types
Joined-eagerly loaded scalars and collections can now be
instructed to use INNER JOIN instead of OUTER JOIN. On
-Postgresql this is observed to provide a 300-600% speedup on
+PostgreSQL this is observed to provide a 300-600% speedup on
some queries. Set this flag for any many-to-one which is
on a NOT NULLable foreign key, and similarly for any
collection where related items are guaranteed to exist.
A joined table inheritance config where the child table has
a PK that foreign keys to the parent PK can now be updated
-on a CASCADE-capable database like Postgresql.
+on a CASCADE-capable database like PostgreSQL.
``mapper()`` now has an option ``passive_updates=True``
which indicates this foreign key is updated automatically.
If on a non-cascading database like SQLite or MySQL/MyISAM,
:ticket:`723`
-select.distinct(), query.distinct() accepts \*args for Postgresql DISTINCT ON
+select.distinct(), query.distinct() accepts \*args for PostgreSQL DISTINCT ON
-----------------------------------------------------------------------------
This was already available by passing a list of expressions
to the ``distinct`` keyword argument of ``select()``; the
``distinct()`` method of ``select()`` and ``Query`` now
accept positional arguments which are rendered as DISTINCT
-ON when a Postgresql backend is used.
+ON when a PostgreSQL backend is used.
`distinct() <http://www.sqlalchemy.org/docs/07/core/expression_api.html#sqlalchemy.sql.expression.Select.distinct>`_
about the result set as it's produced. This allows criteria
against various things like "row number", "rank" and so
forth. They are known to be supported at least by
-Postgresql, SQL Server and Oracle, possibly others.
+PostgreSQL, SQL Server and Oracle, possibly others.
-The best introduction to window functions is on Postgresql's
+The best introduction to window functions is on PostgreSQL's
site, where window functions have been supported since
version 8.4:
``isolation_level`` argument to ``create_engine()``.
Transaction isolation support is currently only supported by
-the Postgresql and SQLite backends.
+the PostgreSQL and SQLite backends.
`execution_options() <http://www.sqlalchemy.org/docs/07/core/connections.html#sqlalchemy.engine.base.Connection.execution_options>`_
On the MS-SQL backend, the String/Unicode types, and their
counterparts VARCHAR/ NVARCHAR, as well as VARBINARY
(:ticket:`1833`) emit "max" as the length when no length is
-specified. This makes it more compatible with Postgresql's
+specified. This makes it more compatible with PostgreSQL's
VARCHAR type which is similarly unbounded when no length is
specified. SQL Server defaults the length on these types
to '1' when no length is specified.
New features which have come from this immediately include
-support for Postgresql's HSTORE type, as well as new
-operations associated with Postgresql's ARRAY
+support for PostgreSQL's HSTORE type, as well as new
+operations associated with PostgreSQL's ARRAY
type. It also paves the way for existing types to acquire
lots more operators that are specific to those types, such
as more string, integer and date operators.
The :meth:`.Insert.values` method now supports a list of dictionaries,
which will render a multi-VALUES statement such as
``VALUES (<row1>), (<row2>), ...``. This is only relevant to backends which
-support this syntax, including Postgresql, SQLite, and MySQL. It is
+support this syntax, including PostgreSQL, SQLite, and MySQL. It is
not the same thing as the usual ``executemany()`` style of INSERT which
remains unchanged::
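
    # illustrative sketch (the ``users`` table is assumed); a list of
    # dictionaries passed to values() renders a single multi-VALUES INSERT
    conn.execute(
        users.insert().values([
            {"name": "name1"},
            {"name": "name2"},
            {"name": "name3"},
        ])
    )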
:meth:`.Select.correlate_except`
-Postgresql HSTORE type
+PostgreSQL HSTORE type
----------------------
-Support for Postgresql's ``HSTORE`` type is now available as
+Support for PostgreSQL's ``HSTORE`` type is now available as
:class:`.postgresql.HSTORE`. This type makes great usage
of the new operator system to provide a full range of operators
for HSTORE types, including index access, concatenation,
:ticket:`2606`
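A small sketch of the kind of usage this enables (table and key names are illustrative)::

    from sqlalchemy import MetaData, Table, Column, Integer, select
    from sqlalchemy.dialects.postgresql import HSTORE

    data = Table('data_table', MetaData(),
        Column('id', Integer, primary_key=True),
        Column('data', HSTORE))

    # index access and containment via the operator system
    stmt = select([data.c.data['some key']]).where(
        data.c.data.has_key('some key'))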
-Enhanced Postgresql ARRAY type
+Enhanced PostgreSQL ARRAY type
------------------------------
The :class:`.postgresql.ARRAY` type will accept an optional
:ticket:`2363`
-"COLLATE" supported across all dialects; in particular MySQL, Postgresql, SQLite
+"COLLATE" supported across all dialects; in particular MySQL, PostgreSQL, SQLite
--------------------------------------------------------------------------------
The "collate" keyword, long accepted by the MySQL dialect, is now established
from sqlalchemy import create_engine
from sqlalchemy.orm import Session
- # note we're using Postgresql to ensure that referential integrity
+ # note we're using PostgreSQL to ensure that referential integrity
# is enforced, for demonstration purposes.
e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
.. _migration_2878:
-Postgresql CREATE TYPE <x> AS ENUM now applies quoting to values
+PostgreSQL CREATE TYPE <x> AS ENUM now applies quoting to values
----------------------------------------------------------------
The :class:`.postgresql.ENUM` type will now apply escaping to single quote
An attempt is made to simplify the specification of the ``FOR UPDATE``
clause on ``SELECT`` statements made within Core and ORM, and support is added
-for the ``FOR UPDATE OF`` SQL supported by Postgresql and Oracle.
+for the ``FOR UPDATE OF`` SQL supported by PostgreSQL and Oracle.
Using the core :meth:`.GenerativeSelect.with_for_update`, options like ``FOR SHARE`` and
``NOWAIT`` can be specified individually, rather than linking to arbitrary
from each row at the same time the INSERT or UPDATE is emitted. When using a
server-generated version identifier, it is strongly
recommended that this feature be used only on a backend with strong RETURNING
-support (Postgresql, SQL Server; Oracle also supports RETURNING but the cx_oracle
+support (PostgreSQL, SQL Server; Oracle also supports RETURNING but the cx_oracle
driver has only limited support), else the additional SELECT statements will
add significant performance
overhead. The example provided at :ref:`server_side_version_counter` illustrates
-the usage of the Postgresql ``xmin`` system column in order to integrate it with
+the usage of the PostgreSQL ``xmin`` system column in order to integrate it with
the ORM's versioning feature.
.. seealso::
:ticket:`1535`
-Postgresql JSON Type
+PostgreSQL JSON Type
--------------------
-The Postgresql dialect now features a :class:`.postgresql.JSON` type to
+The PostgreSQL dialect now features a :class:`.postgresql.JSON` type to
complement the :class:`.postgresql.HSTORE` type.
.. seealso::
(Oracle 8, a very old database, doesn't support the JOIN keyword at all,
but SQLAlchemy has always had a simple rewriting scheme in place for Oracle's syntax).
To make matters worse, SQLAlchemy's usual workaround of applying a
-SELECT often degrades performance on platforms like Postgresql and MySQL::
+SELECT often degrades performance on platforms like PostgreSQL and MySQL::
SELECT a.*, anon_1.* FROM a LEFT OUTER JOIN (
SELECT b.id AS b_id, c.id AS c_id
In 0.9, as a result of the version id enhancements, ``eager_defaults`` can now
emit a RETURNING clause for these values, so on a backend with strong RETURNING
-support in particular Postgresql, the ORM can fetch newly generated default
+support in particular PostgreSQL, the ORM can fetch newly generated default
and SQL expression values inline with the INSERT or UPDATE. ``eager_defaults``,
when enabled, makes use of RETURNING automatically when the target backend
and :class:`.Table` supports "implicit returning".
A :class:`.Table` object populated using ``autoload=True`` will now
include :class:`.UniqueConstraint` constructs as well as
:class:`.Index` constructs. This logic has a few caveats for
-Postgresql and Mysql:
+PostgreSQL and MySQL:
-Postgresql
+PostgreSQL
^^^^^^^^^^
-Postgresql has the behavior such that when a UNIQUE constraint is
+PostgreSQL has the behavior such that when a UNIQUE constraint is
created, it implicitly creates a UNIQUE INDEX corresponding to that
constraint as well. The :meth:`.Inspector.get_indexes` and the
:meth:`.Inspector.get_unique_constraints` methods will continue to
The above behavior applies to all those places where we might want to refer
to a so-called "label reference"; ORDER BY and GROUP BY, but also within an
OVER clause as well as a DISTINCT ON clause that refers to columns (e.g. the
-Postgresql syntax).
+PostgreSQL syntax).
We can still specify any arbitrary expression for ORDER BY or others using
:func:`.text`::
:ticket:`3204`
-Dialect Improvements and Changes - Postgresql
+Dialect Improvements and Changes - PostgreSQL
=============================================
.. _change_3319:
Overhaul of ENUM type create/drop rules
---------------------------------------
-The rules for Postgresql :class:`.postgresql.ENUM` have been made more strict
+The rules for PostgreSQL :class:`.postgresql.ENUM` have been made more strict
with regards to creating and dropping of the TYPE.
An :class:`.postgresql.ENUM` that is created **without** being explicitly
:ticket:`3319`
-New Postgresql Table options
+New PostgreSQL Table options
-----------------------------
Added support for PG table options TABLESPACE, ON COMMIT,
.. _feature_get_enums:
-New get_enums() method with Postgresql Dialect
+New get_enums() method with PostgreSQL Dialect
----------------------------------------------
The :func:`.inspect` method returns a :class:`.PGInspector` object in the
-case of Postgresql, which includes a new :meth:`.PGInspector.get_enums`
+case of PostgreSQL, which includes a new :meth:`.PGInspector.get_enums`
method that returns information on all available ``ENUM`` types::
from sqlalchemy import inspect, create_engine
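# sketch continuation (the connection URL is a placeholder); inspect()
# returns a PGInspector against a PostgreSQL engine
engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
insp = inspect(engine)
print(insp.get_enums())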
.. _feature_2891:
-Postgresql Dialect reflects Materialized Views, Foreign Tables
+PostgreSQL Dialect reflects Materialized Views, Foreign Tables
--------------------------------------------------------------
Changes are as follows:
* :meth:`.Inspector.get_view_names` will return plain and materialized view
names.
-* :meth:`.Inspector.get_table_names` does **not** change for Postgresql, it
+* :meth:`.Inspector.get_table_names` does **not** change for PostgreSQL, it
continues to return only the names of plain tables.
* A new method :meth:`.PGInspector.get_foreign_table_names` is added which
will return the names of tables that are specifically marked as "foreign"
- in the Postgresql schema tables.
+ in the PostgreSQL schema tables.
The change to reflection involves adding ``'m'`` and ``'f'`` to the list
of qualifiers we use when querying ``pg_class.relkind``, but this change
.. _change_3264:
-Postgresql ``has_table()`` now works for temporary tables
+PostgreSQL ``has_table()`` now works for temporary tables
---------------------------------------------------------
This is a simple fix such that "has table" for temporary tables now works,
user_tmp.create(conn, checkfirst=True)
The very unlikely case that this behavior will cause a non-failing application
-to behave differently, is because Postgresql allows a non-temporary table
+to behave differently, is because PostgreSQL allows a non-temporary table
to silently overwrite a temporary table. So code like the following will
now act completely differently, no longer creating the real table following
the temporary table::
.. _feature_gh134:
-Postgresql FILTER keyword
+PostgreSQL FILTER keyword
-------------------------
The SQL standard FILTER keyword for aggregate functions is now supported
-by Postgresql as of 9.4. SQLAlchemy allows this using
+by PostgreSQL as of 9.4. SQLAlchemy allows this using
:meth:`.FunctionElement.filter`::
func.count(1).filter(True)
As part of the changes in :ref:`change_3503`, the workings of the
:meth:`.ColumnElement.cast` operator on :class:`.postgresql.JSON` and
:class:`.postgresql.JSONB` no longer implicitly invoke the
-:attr:`.postgresql.JSON.Comparator.astext` modifier; Postgresql's JSON/JSONB types
+:attr:`.postgresql.JSON.Comparator.astext` modifier; PostgreSQL's JSON/JSONB types
support CAST operations to each other without the "astext" aspect.
This means that in most cases, an application that was doing this::
)
When we call upon :meth:`.MetaData.create_all` on a backend such as the
-Postgresql backend, the cycle between these two tables is resolved and the
+PostgreSQL backend, the cycle between these two tables is resolved and the
constraints are created separately:
.. sourcecode:: pycon+sql
)
The :class:`.SchemaType` classes use special internal symbols so that
-the naming convention is only determined at DDL compile time. On Postgresql,
+the naming convention is only determined at DDL compile time. On PostgreSQL,
there's a native BOOLEAN type, so the CHECK constraint of :class:`.Boolean`
is not needed; we are safe to set up a :class:`.Boolean` type without a
name, even though a naming convention is in place for check constraints.
Index('someindex', mytable.c.somecol.desc())
-Or with a backend that supports functional indexes such as Postgresql,
+Or with a backend that supports functional indexes such as PostgreSQL,
a "case insensitive" index can be created using the ``lower()`` function::
from sqlalchemy import func, Index
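# sketch continuation; ``mytable`` is assumed from the surrounding section
Index('someindex_lower', func.lower(mytable.c.somecol))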
^^^^^^^^^^^^^^^^^^^^^^^^^^
Receives and returns Python uuid() objects. Uses the PG UUID type
-when using Postgresql, CHAR(32) on other backends, storing them
+when using PostgreSQL, CHAR(32) on other backends, storing them
in stringified hex format. Can be modified to store
binary in CHAR(16) if desired::
class GUID(TypeDecorator):
"""Platform-independent GUID type.
- Uses Postgresql's UUID type, otherwise uses
+ Uses PostgreSQL's UUID type, otherwise uses
CHAR(32), storing as stringified hex values.
"""
only the relational database contains a particular series of functions that are necessary
to coerce incoming and outgoing data between an application and persistence format.
Examples include using database-defined encryption/decryption functions, as well
-as stored procedures that handle geographic data. The Postgis extension to Postgresql
+as stored procedures that handle geographic data. The Postgis extension to PostgreSQL
includes an extensive array of SQL functions that are necessary for coercing
data into particular formats.
For an example of subclassing a built in type directly, we subclass
:class:`.postgresql.BYTEA` to provide a ``PGPString``, which will make use of the
-Postgresql ``pgcrypto`` extension to encrypt/decrypt values
+PostgreSQL ``pgcrypto`` extension to encrypt/decrypt values
transparently::
from sqlalchemy import create_engine, String, select, func, \
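    MetaData, Table, Column, Integer, type_coerce

from sqlalchemy.dialects.postgresql import BYTEA

# sketch continuation (passphrase handling is simplified for illustration);
# bind_expression / column_expression wrap values in pgp_sym_encrypt and
# pgp_sym_decrypt so the encryption happens inside the SQL statement
class PGPString(BYTEA):
    def bind_expression(self, bindvalue):
        # coerce the bind to String so plain text reaches pgp_sym_encrypt()
        bindvalue = type_coerce(bindvalue, String)
        return func.pgp_sym_encrypt(bindvalue, 'this_is_my_passphrase')

    def column_expression(self, col):
        return func.pgp_sym_decrypt(col, 'this_is_my_passphrase')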
Unary operations
are also possible. For example, to add an implementation of the
-Postgresql factorial operator, we combine the :class:`.UnaryExpression` construct
+PostgreSQL factorial operator, we combine the :class:`.UnaryExpression` construct
along with a :class:`.custom_op` to produce the factorial expression::
from sqlalchemy import Integer
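from sqlalchemy.sql.expression import UnaryExpression
from sqlalchemy.sql import operators

# sketch continuation (the class name is illustrative); a custom "!" operator
# is exposed as a factorial() method on an Integer subclass
class MyInteger(Integer):
    class comparator_factory(Integer.Comparator):
        def factorial(self):
            return UnaryExpression(
                self.expr,
                modifier=operators.custom_op("!"),
                type_=MyInteger)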
ability to be invoked conditionally based on inspection of the
database. This feature is available using the :meth:`.DDLElement.execute_if`
method. For example, if we wanted to create a trigger but only on
-the Postgresql backend, we could invoke this as::
+the PostgreSQL backend, we could invoke this as::
mytable = Table(
'mytable', metadata,
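    # sketch continuation (column definitions, the trigger body and the DDL /
    # event / Column imports are assumed for illustration)
    Column('id', Integer, primary_key=True),
    Column('data', String(50)))

trigger = DDL(
    "CREATE TRIGGER dt_ins BEFORE INSERT ON mytable "
    "FOR EACH ROW EXECUTE PROCEDURE my_func()")

event.listen(
    mytable,
    'after_create',
    trigger.execute_if(dialect='postgresql'))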
The :meth:`.DDLElement.execute_if` method can also work against a callable
function that will receive the database connection in use. In the
example below, we use this to conditionally create a CHECK constraint,
-first looking within the Postgresql catalogs to see if it exists:
+first looking within the PostgreSQL catalogs to see if it exists:
.. sourcecode:: python+sql
well as some MySQL dialects.
* the dialect does not support the "RETURNING" clause or similar, or the
``implicit_returning`` flag is set to ``False`` for the dialect. Dialects
- which support RETURNING currently include Postgresql, Oracle, Firebird, and
+ which support RETURNING currently include PostgreSQL, Oracle, Firebird, and
MS-SQL.
* the statement is a single execution, i.e. only supplies one set of
parameters and doesn't use "executemany" behavior
SQLAlchemy represents database sequences using the
:class:`~sqlalchemy.schema.Sequence` object, which is considered to be a
special case of "column default". It only has an effect on databases which
-have explicit support for sequences, which currently includes Postgresql,
+have explicit support for sequences, which currently includes PostgreSQL,
Oracle, and Firebird. The :class:`~sqlalchemy.schema.Sequence` object is
otherwise ignored.
parent table.
The :class:`~sqlalchemy.schema.Sequence` object also implements special
-functionality to accommodate Postgresql's SERIAL datatype. The SERIAL type in
+functionality to accommodate PostgreSQL's SERIAL datatype. The SERIAL type in
PG automatically generates a sequence that is used implicitly during inserts.
This means that if a :class:`~sqlalchemy.schema.Table` object defines a
:class:`~sqlalchemy.schema.Sequence` on its primary key column so that it
Column("createdate", DateTime())
)
-The above metadata will generate a CREATE TABLE statement on Postgresql as::
+The above metadata will generate a CREATE TABLE statement on PostgreSQL as::
CREATE TABLE cartitems (
cart_id INTEGER DEFAULT nextval('cart_id_seq') NOT NULL,
We place the :class:`.Sequence` also as a Python-side default above, that
is, it is mentioned twice in the :class:`.Column` definition. Depending
on the backend in use, this may not be strictly necessary, for example
-on the Postgresql backend the Core will use ``RETURNING`` to access the
+on the PostgreSQL backend the Core will use ``RETURNING`` to access the
newly generated primary key value in any case. However, for the best
compatibility, :class:`.Sequence` was originally intended to be a Python-side
directive first and foremost so it's probably a good idea to specify it
detailed information on all included dialects as well as links to third-party dialects, see
:ref:`dialect_toplevel`.
-Postgresql
+PostgreSQL
----------
-The Postgresql dialect uses psycopg2 as the default DBAPI. pg8000 is
+The PostgreSQL dialect uses psycopg2 as the default DBAPI. pg8000 is
also available as a pure-Python substitute::
# default
# pg8000
engine = create_engine('postgresql+pg8000://scott:tiger@localhost/mydatabase')
-More notes on connecting to Postgresql at :ref:`postgresql_toplevel`.
+More notes on connecting to PostgreSQL at :ref:`postgresql_toplevel`.
MySQL
-----
DDL as the original Python-defined :class:`.Table` objects. Areas where
this occurs includes server defaults, column-associated sequences and various
idiosyncrasies regarding constraints and datatypes. Server side defaults may
-be returned with cast directives (typically Postgresql will include a ``::<type>``
+be returned with cast directives (typically PostgreSQL will include a ``::<type>``
cast) or different quoting patterns than originally specified.
Another category of limitation includes schema structures for which reflection
.. note::
Users familiar with the syntax of CREATE TABLE may notice that the
- VARCHAR columns were generated without a length; on SQLite and Postgresql,
+ VARCHAR columns were generated without a length; on SQLite and PostgreSQL,
this is a valid datatype, but on others, it's not allowed. So if running
this tutorial on one of those databases, and you wish to use SQLAlchemy to
issue CREATE TABLE, a "length" may be provided to the :class:`~sqlalchemy.types.String` type as
allows a selectable unit to refer to another selectable unit within a
single FROM clause. This is an extremely special use case which, while
part of the SQL standard, is only known to be supported by recent
-versions of Postgresql.
+versions of PostgreSQL.
Normally, if a SELECT statement refers to
``table1 JOIN (some SELECT) AS subquery`` in its FROM clause, the subquery
Most database backends support a system of limiting how many rows
are returned, and the majority also feature a means of starting to return
-rows after a given "offset". While common backends like Postgresql,
+rows after a given "offset". While common backends like PostgreSQL,
MySQL and SQLite support LIMIT and OFFSET keywords, other backends
need to refer to more esoteric features such as "window functions"
and row ids to achieve the same effect. The :meth:`~.Select.limit`
.. versionadded:: 0.7.4
-The Postgresql, Microsoft SQL Server, and MySQL backends all support UPDATE statements
+The PostgreSQL, Microsoft SQL Server, and MySQL backends all support UPDATE statements
that refer to multiple tables. For PG and MSSQL, this is the "UPDATE FROM" syntax,
which updates one table at a time, but can reference additional tables in an additional
"FROM" clause that can then be referenced in the WHERE clause directly. On MySQL,
)
Where above, the INTEGER and VARCHAR types are ultimately from
-sqlalchemy.types, and INET is specific to the Postgresql dialect.
+sqlalchemy.types, and INET is specific to the PostgreSQL dialect.
Some dialect level types have the same name as the SQL standard type,
but also provide additional arguments. For example, MySQL implements
as external projects. The rationale here is to keep the base
SQLAlchemy install and test suite from growing inordinately large.
- The "classic" dialects such as SQLite, MySQL, Postgresql, Oracle,
+ The "classic" dialects such as SQLite, MySQL, PostgreSQL, Oracle,
SQL Server, and Firebird will remain in the Core for the time being.
.. versionchanged:: 1.0
* `ibm_db_sa <http://code.google.com/p/ibm-db/wiki/README>`_ - driver for IBM DB2 and Informix,
developed jointly by IBM and SQLAlchemy developers.
* `sqlalchemy-redshift <https://pypi.python.org/pypi/sqlalchemy-redshift>`_ - driver for Amazon Redshift, adapts
- the existing Postgresql/psycopg2 driver.
+ the existing PostgreSQL/psycopg2 driver.
* `sqlalchemy_exasol <https://github.com/blue-yonder/sqlalchemy_exasol>`_ - driver for EXASolution.
* `sqlalchemy-sqlany <https://github.com/sqlanywhere/sqlalchemy-sqlany>`_ - driver for SAP Sybase SQL
Anywhere, developed by SAP.
must be assumed that a transaction is always in progress. The
connection pool issues ``connection.rollback()`` when a connection is returned.
This is so that any transactional resources remaining on the connection are
-released. On a database like Postgresql or MSSQL where table resources are
+released. On a database like PostgreSQL or MSSQL where table resources are
aggressively locked, this is critical so that rows and tables don't remain
locked within connections that are no longer in use. An application can
otherwise hang. It's not just for locks, however, and is equally critical on
were created, as well as a way to get at server-generated
default values in an atomic way.
- An example of RETURNING, idiomatic to Postgresql, looks like::
+ An example of RETURNING, idiomatic to PostgreSQL, looks like::
INSERT INTO user_account (name) VALUES ('new name') RETURNING id, timestamp
or SQL expressions can be placed into RETURNING, not just default-value columns).
The backends that currently support
- RETURNING or a similar construct are Postgresql, SQL Server, Oracle,
- and Firebird. The Postgresql and Firebird implementations are generally
+ RETURNING or a similar construct are PostgreSQL, SQL Server, Oracle,
+ and Firebird. The PostgreSQL and Firebird implementations are generally
full featured, whereas the implementations of SQL Server and Oracle
have caveats. On SQL Server, the clause is known as "OUTPUT INSERTED"
for INSERT and UPDATE statements and "OUTPUT DELETED" for DELETE statements;
establish such a join.
Below, a class ``HostEntry`` joins to itself, equating the string ``content``
-column to the ``ip_address`` column, which is a Postgresql type called ``INET``.
+column to the ``ip_address`` column, which is a PostgreSQL type called ``INET``.
We need to use :func:`.cast` in order to cast one side of the join to the
type of the other::
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Another use case for relationships is the use of custom operators, such
-as Postgresql's "is contained within" ``<<`` operator when joining with
+as PostgreSQL's "is contained within" ``<<`` operator when joining with
types such as :class:`.postgresql.INET` and :class:`.postgresql.CIDR`.
For custom operators we use the :meth:`.Operators.op` function::
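
    # illustrative sketch (model and column names are assumed); the "<<"
    # operator is applied inside primaryjoin via Operators.op()
    from sqlalchemy import Column, Integer
    from sqlalchemy.orm import relationship
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.dialects.postgresql import INET, CIDR

    Base = declarative_base()

    class Network(Base):
        __tablename__ = 'network'
        id = Column(Integer, primary_key=True)
        v4representation = Column(CIDR)

    class IPAddress(Base):
        __tablename__ = 'ip_address'
        id = Column(Integer, primary_key=True)
        v4address = Column(INET)

        network = relationship(
            "Network",
            primaryjoin="IPAddress.v4address.op('<<')"
                        "(foreign(Network.v4representation))",
            viewonly=True)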
:ref:`SQLite Transaction Isolation <sqlite_isolation_level>`
- :ref:`Postgresql Isolation Level <postgresql_isolation_level>`
+ :ref:`PostgreSQL Isolation Level <postgresql_isolation_level>`
:ref:`MySQL Isolation Level <mysql_isolation_level>`
.. topic:: Minimal Table Descriptions vs. Full Descriptions
Users familiar with the syntax of CREATE TABLE may notice that the
- VARCHAR columns were generated without a length; on SQLite and Postgresql,
+ VARCHAR columns were generated without a length; on SQLite and PostgreSQL,
this is a valid datatype, but on others, it's not allowed. So if running
this tutorial on one of those databases, and you wish to use SQLAlchemy to
issue CREATE TABLE, a "length" may be provided to the :class:`~sqlalchemy.types.String` type as
.. seealso::
- `Repeatable Read Isolation Level <http://www.postgresql.org/docs/9.1/static/transaction-iso.html#XACT-REPEATABLE-READ>`_ - Postgresql's implementation of repeatable read, including a description of the error condition.
+ `Repeatable Read Isolation Level <http://www.postgresql.org/docs/9.1/static/transaction-iso.html#XACT-REPEATABLE-READ>`_ - PostgreSQL's implementation of repeatable read, including a description of the error condition.
Simple Version Counting
-----------------------
some means of generating new identifiers when a row is subject to an INSERT
as well as with an UPDATE. For the UPDATE case, typically an update trigger
is needed, unless the database in question supports some other native
-version identifier. The Postgresql database in particular supports a system
+version identifier. The PostgreSQL database in particular supports a system
column called `xmin <http://www.postgresql.org/docs/9.1/static/ddl-system-columns.html>`_
which provides UPDATE versioning. We can make use
-of the Postgresql ``xmin`` column to version our ``User``
+of the PostgreSQL ``xmin`` column to version our ``User``
class as follows::
class User(Base):
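    # sketch continuation (column names are illustrative); xmin is mapped as a
    # system column and used as the server-side version counter
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    name = Column(String(50), nullable=False)
    xmin = Column("xmin", Integer, system=True)

    __mapper_args__ = {
        'version_id_col': xmin,
        'version_id_generator': False
    }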
.. topic:: creating tables that refer to system columns
- In the above scenario, as ``xmin`` is a system column provided by Postgresql,
+ In the above scenario, as ``xmin`` is a system column provided by PostgreSQL,
we use the ``system=True`` argument to mark it as a system-provided
column, omitted from the ``CREATE TABLE`` statement.
It is *strongly recommended* that server side version counters only be used
when absolutely necessary and only on backends that support :term:`RETURNING`,
-e.g. Postgresql, Oracle, SQL Server (though SQL Server has
+e.g. PostgreSQL, Oracle, SQL Server (though SQL Server has
`major caveats <http://blogs.msdn.com/b/sqlprogrammability/archive/2008/07/11/update-with-output-clause-triggers-and-sqlmoreresults.aspx>`_ when triggers are used), Firebird.
.. versionadded:: 0.9.0
class array(expression.Tuple):
- """A Postgresql ARRAY literal.
+ """A PostgreSQL ARRAY literal.
This is used to produce ARRAY literals in SQL expressions, e.g.::
class ARRAY(SchemaEventTarget, sqltypes.ARRAY):
- """Postgresql ARRAY type.
+ """PostgreSQL ARRAY type.
.. versionchanged:: 1.1 The :class:`.postgresql.ARRAY` type is now
a subclass of the core :class:`.types.ARRAY` type.
they were declared.
:param zero_indexes=False: when True, index values will be converted
- between Python zero-based and Postgresql one-based indexes, e.g.
+ between Python zero-based and PostgreSQL one-based indexes, e.g.
a value of one will be added to all index values before passing
to the database.
having the "last insert identifier" available, a RETURNING clause is added to
the INSERT statement which specifies the primary key columns should be
returned after the statement completes. The RETURNING functionality only takes
-place if Postgresql 8.2 or later is in use. As a fallback approach, the
+place if PostgreSQL 8.2 or later is in use. As a fallback approach, the
sequence, whether specified explicitly or implicitly via ``SERIAL``, is
executed independently beforehand, the returned value to be used in the
subsequent insert. Note that when an
Transaction Isolation Level
---------------------------
-All Postgresql dialects support setting of transaction isolation level
+All PostgreSQL dialects support setting of transaction isolation level
both via a dialect-specific parameter
:paramref:`.create_engine.isolation_level` accepted by :func:`.create_engine`,
as well as the :paramref:`.Connection.execution_options.isolation_level`
.. _postgresql_schema_reflection:
-Remote-Schema Table Introspection and Postgresql search_path
+Remote-Schema Table Introspection and PostgreSQL search_path
------------------------------------------------------------
-The Postgresql dialect can reflect tables from any schema. The
+The PostgreSQL dialect can reflect tables from any schema. The
:paramref:`.Table.schema` argument, or alternatively the
:paramref:`.MetaData.reflect.schema` argument determines which schema will
be searched for the table or tables. The reflected :class:`.Table` objects
via foreign key constraint, a decision must be made as to how the ``.schema``
is represented in those remote tables, in the case where that remote
schema name is also a member of the current
-`Postgresql search path
+`PostgreSQL search path
<http://www.postgresql.org/docs/current/static/ddl-schemas.html#DDL-SCHEMAS-PATH>`_.
-By default, the Postgresql dialect mimics the behavior encouraged by
-Postgresql's own ``pg_get_constraintdef()`` builtin procedure. This function
+By default, the PostgreSQL dialect mimics the behavior encouraged by
+PostgreSQL's own ``pg_get_constraintdef()`` builtin procedure. This function
returns a sample definition for a particular foreign key constraint,
omitting the referenced schema name from that definition when the name is
-also in the Postgresql schema search path. The interaction below
+also in the PostgreSQL schema search path. The interaction below
illustrates this behavior::
test=> CREATE TABLE test_schema.referred(id INTEGER PRIMARY KEY);
>>> meta.tables['test_schema.referred'].schema
'test_schema'
-.. sidebar:: Best Practices for Postgresql Schema reflection
+.. sidebar:: Best Practices for PostgreSQL Schema reflection
- The description of Postgresql schema reflection behavior is complex, and
+ The description of PostgreSQL schema reflection behavior is complex, and
is the product of many years of dealing with widely varied use cases and
user preferences. But in fact, there's no need to understand any of it if
you just stick to the simplest use pattern: leave the ``search_path`` set
within these guidelines.
Note that **in all cases**, the "default" schema is always reflected as
-``None``. The "default" schema on Postgresql is that which is returned by the
-Postgresql ``current_schema()`` function. On a typical Postgresql
+``None``. The "default" schema on PostgreSQL is that which is returned by the
+PostgreSQL ``current_schema()`` function. On a typical PostgreSQL
installation, this is the name ``public``. So a table that refers to another
which is in the ``public`` (i.e. default) schema will always have the
``.schema`` attribute set to ``None``.
`The Schema Search Path
<http://www.postgresql.org/docs/9.0/static/ddl-schemas.html#DDL-SCHEMAS-PATH>`_
- - on the Postgresql website.
+ - on the PostgreSQL website.
INSERT/UPDATE...RETURNING
-------------------------
or they may be *inferred* by stating the columns and conditions that comprise
the indexes.
-SQLAlchemy provides ``ON CONFLICT`` support via the Postgresql-specific
+SQLAlchemy provides ``ON CONFLICT`` support via the PostgreSQL-specific
:func:`.postgresql.dml.insert()` function, which provides
the generative methods :meth:`~.postgresql.dml.Insert.on_conflict_do_update`
and :meth:`~.postgresql.dml.Insert.on_conflict_do_nothing`::
stmt = stmt.on_conflict_do_nothing()
conn.execute(stmt)
-.. versionadded:: 1.1 Added support for Postgresql ON CONFLICT clauses
+.. versionadded:: 1.1 Added support for PostgreSQL ON CONFLICT clauses
.. seealso::
- `INSERT .. ON CONFLICT <http://www.postgresql.org/docs/current/static/sql-insert.html#SQL-ON-CONFLICT>`_ - in the Postgresql documentation.
+ `INSERT .. ON CONFLICT <http://www.postgresql.org/docs/current/static/sql-insert.html#SQL-ON-CONFLICT>`_ - in the PostgreSQL documentation.
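As an illustrative sketch only (the ``my_table`` table and ``conn`` connection
are assumed), a "do update" upsert keyed to the primary key might be spelled::

    from sqlalchemy.dialects.postgresql import insert

    insert_stmt = insert(my_table).values(id=1, data='inserted value')
    do_update_stmt = insert_stmt.on_conflict_do_update(
        index_elements=['id'],
        set_=dict(data='updated value')
    )
    conn.execute(do_update_stmt)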
.. _postgresql_match:
Full Text Search
----------------
-SQLAlchemy makes available the Postgresql ``@@`` operator via the
+SQLAlchemy makes available the PostgreSQL ``@@`` operator via the
:meth:`.ColumnElement.match` method on any textual column expression.
-On a Postgresql dialect, an expression like the following::
+On a PostgreSQL dialect, an expression like the following::
select([sometable.c.text.match("search string")])
SELECT text @@ to_tsquery('search string') FROM table
-The Postgresql text search functions such as ``to_tsquery()``
+The PostgreSQL text search functions such as ``to_tsquery()``
and ``to_tsvector()`` are available
explicitly using the standard :data:`.func` construct. For example::
SELECT CAST('some text' AS TSVECTOR) AS anon_1
-Full Text Searches in Postgresql are influenced by a combination of: the
+Full Text Searches in PostgreSQL are influenced by a combination of: the
PostgreSQL setting of ``default_text_search_config``, the ``regconfig`` used
to build the GIN/GiST indexes, and the ``regconfig`` optionally passed in
during a query.
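As a rough sketch, assuming a table ``sometable`` with a text column, both
spellings look like::

    from sqlalchemy import select, func

    # match() renders "text @@ to_tsquery(:text_1)" on PostgreSQL
    stmt = select([sometable]).where(
        sometable.c.text.match('search string')
    )

    # the same comparison written out explicitly via func
    stmt = select([sometable]).where(
        func.to_tsvector('english', sometable.c.text).op('@@')(
            func.to_tsquery('english', 'search & string')
        )
    )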
.. _postgresql_indexes:
-Postgresql-Specific Index Options
+PostgreSQL-Specific Index Options
---------------------------------
Several extensions to the :class:`.Index` construct are available, specific
Indexes with CONCURRENTLY
^^^^^^^^^^^^^^^^^^^^^^^^^
-The Postgresql index option CONCURRENTLY is supported by passing the
+The PostgreSQL index option CONCURRENTLY is supported by passing the
flag ``postgresql_concurrently`` to the :class:`.Index` construct::
tbl = Table('testtbl', m, Column('data', Integer))
idx1 = Index('test_idx1', tbl.c.data, postgresql_concurrently=True)
The above index construct will render DDL for CREATE INDEX, assuming
-Postgresql 8.2 or higher is detected or for a connection-less dialect, as::
+PostgreSQL 8.2 or higher is detected or for a connection-less dialect, as::
CREATE INDEX CONCURRENTLY test_idx1 ON testtbl (data)
-For DROP INDEX, assuming Postgresql 9.2 or higher is detected or for
+For DROP INDEX, assuming PostgreSQL 9.2 or higher is detected or for
a connection-less dialect, it will emit::
DROP INDEX CONCURRENTLY test_idx1
.. versionadded:: 1.1 support for CONCURRENTLY on DROP INDEX. The
CONCURRENTLY keyword is now only emitted if a high enough version
- of Postgresql is detected on the connection (or for a connection-less
+ of PostgreSQL is detected on the connection (or for a connection-less
dialect).
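The ``postgresql_where`` and ``postgresql_using`` keyword arguments mentioned
elsewhere in this document can be sketched in the same way; table, column and
index names below are illustrative::

    from sqlalchemy import Table, Column, Integer, MetaData, Index
    from sqlalchemy.dialects.postgresql import ARRAY

    m = MetaData()
    tbl = Table(
        'testtbl', m,
        Column('data', Integer),
        Column('tags', ARRAY(Integer)),
    )

    # partial index: CREATE INDEX test_idx2 ON testtbl (data) WHERE data > 5
    idx2 = Index('test_idx2', tbl.c.data, postgresql_where=tbl.c.data > 5)

    # GIN index: CREATE INDEX test_idx3 ON testtbl USING gin (tags)
    idx3 = Index('test_idx3', tbl.c.tags, postgresql_using='gin')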
.. _postgresql_index_reflection:
-Postgresql Index Reflection
+PostgreSQL Index Reflection
---------------------------
-The Postgresql database creates a UNIQUE INDEX implicitly whenever the
+The PostgreSQL database creates a UNIQUE INDEX implicitly whenever the
UNIQUE CONSTRAINT construct is used. When inspecting a table using
:class:`.Inspector`, the :meth:`.Inspector.get_indexes`
and the :meth:`.Inspector.get_unique_constraints` will report on these
.. versionchanged:: 1.0.0 - :class:`.Table` reflection now includes
:class:`.UniqueConstraint` objects present in the :attr:`.Table.constraints`
- collection; the Postgresql backend will no longer include a "mirrored"
+ collection; the PostgreSQL backend will no longer include a "mirrored"
:class:`.Index` construct in :attr:`.Table.indexes` if it is detected
as corresponding to a unique constraint.
Special Reflection Options
--------------------------
-The :class:`.Inspector` used for the Postgresql backend is an instance
+The :class:`.Inspector` used for the PostgreSQL backend is an instance
of :class:`.PGInspector`, which offers additional methods::
from sqlalchemy import create_engine, inspect
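    # illustrative continuation; the URL below is a placeholder
    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
    insp = inspect(engine)   # a PGInspector when bound to a PostgreSQL engine

    insp.get_enums()                   # reflect ENUM types
    insp.get_foreign_table_names()     # names of FOREIGN TABLE objects
    insp.get_table_oid("sometable")    # OID of a given table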
.. seealso::
- `Postgresql CREATE TABLE options
+ `PostgreSQL CREATE TABLE options
<http://www.postgresql.org/docs/current/static/sql-createtable.html>`_
ARRAY Types
-----------
-The Postgresql dialect supports arrays, both as multidimensional column types
+The PostgreSQL dialect supports arrays, both as multidimensional column types
as well as array literals:
* :class:`.postgresql.ARRAY` - ARRAY datatype
JSON Types
----------
-The Postgresql dialect supports both JSON and JSONB datatypes, including
-psycopg2's native support and support for all of Postgresql's special
+The PostgreSQL dialect supports both JSON and JSONB datatypes, including
+psycopg2's native support and support for all of PostgreSQL's special
operators:
* :class:`.postgresql.JSON`
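For illustration only, a minimal column definition plus an indexed comparison
(table and key names are assumed) could look like::

    from sqlalchemy import Table, Column, Integer, MetaData, select
    from sqlalchemy.dialects.postgresql import JSONB

    m = MetaData()
    data_table = Table(
        'data_table', m,
        Column('id', Integer, primary_key=True),
        Column('data', JSONB),
    )

    # compare "data -> 'some_key'" as text via the ->> operator
    stmt = select([data_table]).where(
        data_table.c.data['some_key'].astext == 'some value'
    )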
HSTORE Type
-----------
-The Postgresql HSTORE type as well as hstore literals are supported:
+The PostgreSQL HSTORE type as well as hstore literals are supported:
* :class:`.postgresql.HSTORE` - HSTORE datatype
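A brief sketch, assuming a ``bookmarks`` table defined for the purpose::

    from sqlalchemy import Table, Column, Integer, MetaData, select
    from sqlalchemy.dialects.postgresql import HSTORE

    m = MetaData()
    bookmarks = Table(
        'bookmarks', m,
        Column('id', Integer, primary_key=True),
        Column('attrs', HSTORE),
    )

    # store a plain dictionary of strings
    ins = bookmarks.insert().values(attrs={'color': 'blue', 'size': 'large'})

    # "attrs -> 'color'" indexed access in the WHERE clause
    stmt = select([bookmarks]).where(bookmarks.c.attrs['color'] == 'blue')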
ENUM Types
----------
-Postgresql has an independently creatable TYPE structure which is used
+PostgreSQL has an independently creatable TYPE structure which is used
to implement an enumerated type. This approach introduces significant
complexity on the SQLAlchemy side in terms of when this type should be
CREATED and DROPPED. The type object is also an independently reflectable
class OID(sqltypes.TypeEngine):
- """Provide the Postgresql OID type.
+ """Provide the PostgreSQL OID type.
.. versionadded:: 0.9.5
class INTERVAL(sqltypes.TypeEngine):
- """Postgresql INTERVAL type.
+ """PostgreSQL INTERVAL type.
The INTERVAL type may not be supported on all DBAPIs.
It is known to work on psycopg2 and not pg8000 or zxjdbc.
class UUID(sqltypes.TypeEngine):
- """Postgresql UUID type.
+ """PostgreSQL UUID type.
Represents the UUID column type, interpreting
data either as natively returned by the DBAPI
class TSVECTOR(sqltypes.TypeEngine):
- """The :class:`.postgresql.TSVECTOR` type implements the Postgresql
+ """The :class:`.postgresql.TSVECTOR` type implements the PostgreSQL
text search type TSVECTOR.
It can be used to do full text queries on natural language
class ENUM(sqltypes.Enum):
- """Postgresql ENUM type.
+ """PostgreSQL ENUM type.
This is a subclass of :class:`.types.Enum` which includes
support for PG's ``CREATE TYPE`` and ``DROP TYPE``.
When the builtin type :class:`.types.Enum` is used and the
:paramref:`.Enum.native_enum` flag is left at its default of
- True, the Postgresql backend will use a :class:`.postgresql.ENUM`
+ True, the PostgreSQL backend will use a :class:`.postgresql.ENUM`
type as the implementation, so the special create/drop rules
will be used.
my_enum.create(engine)
my_enum.drop(engine)
- .. versionchanged:: 1.0.0 The Postgresql :class:`.postgresql.ENUM` type
+ .. versionchanged:: 1.0.0 The PostgreSQL :class:`.postgresql.ENUM` type
now behaves more strictly with regards to CREATE/DROP. A metadata-level
ENUM type will only be created and dropped at the metadata level,
not the table level, with the exception of
:class:`~.postgresql.ENUM`.
If the underlying dialect does not support
- Postgresql CREATE TYPE, no action is taken.
+ PostgreSQL CREATE TYPE, no action is taken.
:param bind: a connectable :class:`.Engine`,
:class:`.Connection`, or similar object to emit
:class:`~.postgresql.ENUM`.
If the underlying dialect does not support
- Postgresql DROP TYPE, no action is taken.
+ PostgreSQL DROP TYPE, no action is taken.
:param bind: a connectable :class:`.Engine`,
:class:`.Connection`, or similar object to emit
def format_type(self, type_, use_schema=True):
if not type_.name:
- raise exc.CompileError("Postgresql ENUM type requires a name.")
+ raise exc.CompileError("PostgreSQL ENUM type requires a name.")
name = self.quote(type_.name)
effective_schema = self.schema_for_object(type_)
class Insert(StandardInsert):
- """Postgresql-specific implementation of INSERT.
+ """PostgreSQL-specific implementation of INSERT.
Adds methods for PG-specific syntaxes such as ON CONFLICT.
class aggregate_order_by(expression.ColumnElement):
- """Represent a Postgresql aggregate order by expression.
+ """Represent a PostgreSQL aggregate order by expression.
E.g.::
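    # illustrative sketch; a "table" with columns a and b is assumed
    from sqlalchemy import select, func
    from sqlalchemy.dialects.postgresql import aggregate_order_by

    expr = func.array_agg(aggregate_order_by(table.c.a, table.c.b.desc()))
    stmt = select([expr])
    # renders roughly: array_agg(table.a ORDER BY table.b DESC)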
def array_agg(*arg, **kw):
- """Postgresql-specific form of :class:`.array_agg`, ensures
+ """PostgreSQL-specific form of :class:`.array_agg`, ensures
return type is :class:`.postgresql.ARRAY` and not
the plain :class:`.types.ARRAY`.
class HSTORE(sqltypes.Indexable, sqltypes.Concatenable, sqltypes.TypeEngine):
- """Represent the Postgresql HSTORE type.
+ """Represent the PostgreSQL HSTORE type.
The :class:`.HSTORE` type stores dictionaries containing strings, e.g.::
.. seealso::
- :class:`.hstore` - render the Postgresql ``hstore()`` function.
+ :class:`.hstore` - render the PostgreSQL ``hstore()`` function.
"""
class hstore(sqlfunc.GenericFunction):
"""Construct an hstore value within a SQL expression using the
- Postgresql ``hstore()`` function.
+ PostgreSQL ``hstore()`` function.
The :class:`.hstore` function accepts one or two arguments as described
- in the Postgresql documentation.
+ in the PostgreSQL documentation.
E.g.::
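    # illustrative sketch of the one- and two-argument forms
    from sqlalchemy import select
    from sqlalchemy.dialects.postgresql import array, hstore

    # a single key/value pair
    stmt = select([hstore('key1', 'value1')])

    # parallel arrays of keys and values
    stmt = select([
        hstore(
            array(['key1', 'key2', 'key3']),
            array(['value1', 'value2', 'value3'])
        )
    ])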
.. seealso::
- :class:`.HSTORE` - the Postgresql ``HSTORE`` datatype.
+ :class:`.HSTORE` - the PostgreSQL ``HSTORE`` datatype.
"""
type = HSTORE
class JSON(sqltypes.JSON):
- """Represent the Postgresql JSON type.
+ """Represent the PostgreSQL JSON type.
This type is a specialization of the Core-level :class:`.types.JSON`
type. Be sure to read the documentation for :class:`.types.JSON` for
important tips regarding treatment of NULL values and ORM use.
- .. versionchanged:: 1.1 :class:`.postgresql.JSON` is now a Postgresql-
+ .. versionchanged:: 1.1 :class:`.postgresql.JSON` is now a PostgreSQL-
specific specialization of the new :class:`.types.JSON` type.
- The operators provided by the Postgresql version of :class:`.JSON`
+ The operators provided by the PostgreSQL version of :class:`.JSON`
include:
* Index operations (the ``->`` operator)::
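    # e.g., assuming a "data_table" with a JSON column named "data":
    data_table.c.data['some key']
    # renders the -> operator, with the key passed as a bound parameter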
class JSONB(JSON):
- """Represent the Postgresql JSONB type.
+ """Represent the PostgreSQL JSONB type.
The :class:`.JSONB` type stores arbitrary JSONB format data, e.g.::
:func:`.create_engine` using the ``client_encoding`` parameter::
# set_client_encoding() setting;
- # works for *all* Postgresql versions
+ # works for *all* PostgreSQL versions
engine = create_engine("postgresql://user:pass@host/dbname",
client_encoding='utf8')
-This overrides the encoding specified in the Postgresql client configuration.
+This overrides the encoding specified in the PostgreSQL client configuration.
When using the parameter in this way, the psycopg2 driver emits
``SET client_encoding TO 'utf8'`` on the connection explicitly, and works
-in all Postgresql versions.
+in all PostgreSQL versions.
Note that the ``client_encoding`` setting as passed to :func:`.create_engine`
is **not the same** as the more recently added ``client_encoding`` parameter
using the :paramref:`.create_engine.connect_args` parameter::
# libpq direct parameter setting;
- # only works for Postgresql **9.1 and above**
+ # only works for PostgreSQL **9.1 and above**
engine = create_engine("postgresql://user:pass@host/dbname",
connect_args={'client_encoding': 'utf8'})
# using the query string is equivalent
engine = create_engine("postgresql://user:pass@host/dbname?client_encoding=utf8")
-The above parameter was only added to libpq as of version 9.1 of Postgresql,
+The above parameter was only added to libpq as of version 9.1 of PostgreSQL,
so using the previous method is better for cross-version support.
.. _psycopg2_disable_native_unicode:
-------------------------------------
As discussed in :ref:`postgresql_isolation_level`,
-all Postgresql dialects support setting of transaction isolation level
+all PostgreSQL dialects support setting of transaction isolation level
both via the ``isolation_level`` parameter passed to :func:`.create_engine`,
as well as the ``isolation_level`` argument used by
:meth:`.Connection.execution_options`. When using the psycopg2 dialect, these
options make use of psycopg2's ``set_isolation_level()`` connection method,
-rather than emitting a Postgresql directive; this is because psycopg2's
+rather than emitting a PostgreSQL directive; this is because psycopg2's
API-level setting is always emitted at the start of each transaction in any
case.
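A short sketch of both spellings (the URL is a placeholder)::

    from sqlalchemy import create_engine

    engine = create_engine(
        "postgresql+psycopg2://scott:tiger@localhost/test",
        isolation_level="REPEATABLE READ"
    )

    # or per-connection
    conn = engine.connect().execution_options(isolation_level="SERIALIZABLE")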
NOTICE logging
---------------
-The psycopg2 dialect will log Postgresql NOTICE messages via the
+The psycopg2 dialect will log PostgreSQL NOTICE messages via the
``sqlalchemy.dialects.postgresql`` logger::
import logging
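    # route NOTICE output through the standard logging module at INFO level
    logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)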
Table 9-45 of the PostgreSQL documentation. For these, the normal
:func:`~sqlalchemy.sql.expression.func` object should be used.
- .. versionadded:: 0.8.2 Support for Postgresql RANGE operations.
+ .. versionadded:: 0.8.2 Support for PostgreSQL RANGE operations.
"""
class INT4RANGE(RangeOperators, sqltypes.TypeEngine):
- """Represent the Postgresql INT4RANGE type.
+ """Represent the PostgreSQL INT4RANGE type.
.. versionadded:: 0.8.2
class INT8RANGE(RangeOperators, sqltypes.TypeEngine):
- """Represent the Postgresql INT8RANGE type.
+ """Represent the PostgreSQL INT8RANGE type.
.. versionadded:: 0.8.2
class NUMRANGE(RangeOperators, sqltypes.TypeEngine):
- """Represent the Postgresql NUMRANGE type.
+ """Represent the PostgreSQL NUMRANGE type.
.. versionadded:: 0.8.2
class DATERANGE(RangeOperators, sqltypes.TypeEngine):
- """Represent the Postgresql DATERANGE type.
+ """Represent the PostgreSQL DATERANGE type.
.. versionadded:: 0.8.2
class TSRANGE(RangeOperators, sqltypes.TypeEngine):
- """Represent the Postgresql TSRANGE type.
+ """Represent the PostgreSQL TSRANGE type.
.. versionadded:: 0.8.2
class TSTZRANGE(RangeOperators, sqltypes.TypeEngine):
- """Represent the Postgresql TSTZRANGE type.
+ """Represent the PostgreSQL TSTZRANGE type.
.. versionadded:: 0.8.2
MySQL names it SET in the dialect's base.py, and it subclasses types.String, since
it ultimately deals with strings.
-Example 5. Postgresql has a DATETIME type. The DBAPIs handle dates correctly,
+Example 5. PostgreSQL has a DATETIME type. The DBAPIs handle dates correctly,
and no special arguments are used in PG's DDL beyond what types.py provides.
-Postgresql dialect therefore imports types.DATETIME into its base.py.
+PostgreSQL dialect therefore imports types.DATETIME into its base.py.
Ideally one should be able to specify a schema using names imported completely from a
dialect, all matching the real name on that backend:
fetch newly generated primary key values when a single row
INSERT statement is emitted with no existing returning()
clause. This applies to those backends which support RETURNING
- or a compatible construct, including Postgresql, Firebird, Oracle,
+ or a compatible construct, including PostgreSQL, Firebird, Oracle,
Microsoft SQL Server. Set this to ``False`` to disable
the automatic usage of RETURNING.
:ref:`SQLite Transaction Isolation <sqlite_isolation_level>`
- :ref:`Postgresql Transaction Isolation <postgresql_isolation_level>`
+ :ref:`PostgreSQL Transaction Isolation <postgresql_isolation_level>`
:ref:`MySQL Transaction Isolation <mysql_isolation_level>`
:ref:`SQLite Transaction Isolation <sqlite_isolation_level>`
- :ref:`Postgresql Transaction Isolation <postgresql_isolation_level>`
+ :ref:`PostgreSQL Transaction Isolation <postgresql_isolation_level>`
:ref:`MySQL Transaction Isolation <mysql_isolation_level>`
})
]
- If the above construct is established on the Postgresql dialect,
+ If the above construct is established on the PostgreSQL dialect,
the :class:`.Index` construct will now accept the keyword arguments
``postgresql_using``, ``postgresql_where``, and ``postgresql_ops``.
Any other argument specified to the constructor of :class:`.Index`
preexecute_autoincrement_sequences
True if 'implicit' primary key functions must be executed separately
in order to get their value. This is currently oriented towards
- Postgresql.
+ PostgreSQL.
implicit_returning
use RETURNING or equivalent during INSERT execution in order to load
sequences_optional
If True, indicates if the "optional" flag on the Sequence() construct
should signal to not generate a CREATE SEQUENCE. Applies only to
- dialects that support sequences. Currently used only to allow Postgresql
+ dialects that support sequences. Currently used only to allow PostgreSQL
SERIAL to be used on a column that specifies Sequence() for usage on
other backends.
"""Return the default schema name presented by the dialect
for the current engine's database user.
- E.g. this is typically ``public`` for Postgresql and ``dbo``
+ E.g. this is typically ``public`` for PostgreSQL and ``dbo``
for SQL Server.
"""
encodings - they're best applied only at the endpoints of an application
(i.e. convert to UTC upon user input, re-apply desired timezone upon display).
-For Postgresql and Microsoft SQL Server::
+For PostgreSQL and Microsoft SQL Server::
from sqlalchemy.sql import expression
from sqlalchemy.ext.compiler import compiles
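    # reconstructed sketch of the usual "utcnow" compiler recipe; the
    # original example is truncated here
    from sqlalchemy.types import DateTime

    class utcnow(expression.FunctionElement):
        type = DateTime()

    @compiles(utcnow, 'postgresql')
    def pg_utcnow(element, compiler, **kw):
        return "TIMEZONE('utc', CURRENT_TIMESTAMP)"

    @compiles(utcnow, 'mssql')
    def ms_utcnow(element, compiler, **kw):
        return "GETUTCDATE()"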
q = session.query(Person).filter(Person.year == '1980')
-On a Postgresql backend, the above query will render as::
+On a PostgreSQL backend, the above query will render as::
SELECT person.id, person.data
FROM person
:class:`.index_property` can be subclassed, in particular for the common
use case of providing coercion of values or SQL expressions as they are
-accessed. Below is a common recipe for use with a Postgresql JSON type,
+accessed. Below is a common recipe for use with a PostgreSQL JSON type,
where we want to also include automatic casting plus ``astext()``::
class pg_json_property(index_property):
expr = super(pg_json_property, self).expr(model)
return expr.astext.cast(self.cast_type)
-The above subclass can be used with the Postgresql-specific
+The above subclass can be used with the PostgreSQL-specific
version of :class:`.postgresql.JSON`::
from sqlalchemy import Column, Integer
age = pg_json_property('data', 'age', Integer)
The ``age`` attribute at the instance level works as before; however
-when rendering SQL, Postgresql's ``->>`` operator will be used
+when rendering SQL, PostgreSQL's ``->>`` operator will be used
for indexed access, instead of the usual index operator of ``->``::
>>> query = session.query(Person).filter(Person.age < 20)
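Pieced together, a minimal end-to-end sketch of this recipe (the ``Person``
mapping shown here is illustrative) might read::

    from sqlalchemy import Column, Integer
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.ext.indexable import index_property
    from sqlalchemy.dialects.postgresql import JSON

    Base = declarative_base()

    class pg_json_property(index_property):
        def __init__(self, attr_name, index, cast_type):
            super(pg_json_property, self).__init__(attr_name, index)
            self.cast_type = cast_type

        def expr(self, model):
            expr = super(pg_json_property, self).expr(model)
            return expr.astext.cast(self.cast_type)

    class Person(Base):
        __tablename__ = 'person'

        id = Column(Integer, primary_key=True)
        data = Column(JSON)

        # indexed access with a text cast to Integer
        age = pg_json_property('data', 'age', Integer)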
:meth:`.SelectBase.cte` method; see that method for
further details.
- Here is the `Postgresql WITH
+ Here is the `PostgreSQL WITH
RECURSIVE example
<http://www.postgresql.org/docs/8.4/static/queries-with.html>`_.
Note that, in this example, the ``included_parts`` cte and the
q = sess.query(User).with_for_update(nowait=True, of=User)
- The above query on a Postgresql backend will render like::
+ The above query on a PostgreSQL backend will render like::
SELECT users.id AS users_id FROM users FOR UPDATE OF users NOWAIT
:attr:`.Query.statement` accessor, however.
:param \*expr: optional column expressions. When present,
- the Postgresql dialect will render a ``DISTINCT ON (<expressions>>)``
+ the PostgreSQL dialect will render a ``DISTINCT ON (<expressions>)``
construct.
"""
as an implicitly-present "system" column.
For example, suppose we wish to produce a :class:`.Table` which skips
- rendering of the Postgresql ``xmin`` column against the Postgresql
+ rendering of the PostgreSQL ``xmin`` column against the PostgreSQL
backend, but on other backends does render it, in anticipation of a
triggered rule. A conditional compilation rule could skip this name only
- on Postgresql::
+ on PostgreSQL::
from sqlalchemy.schema import CreateColumn
Above, a :class:`.CreateTable` construct will generate a ``CREATE TABLE``
which only includes the ``id`` column in the string; the ``xmin`` column
- will be omitted, but only against the Postgresql backend.
+ will be omitted, but only against the PostgreSQL backend.
.. versionadded:: 0.8.3 The :class:`.CreateColumn` construct supports
skipping of columns by returning ``None`` from a custom compilation
The :class:`.Insert` construct also supports being passed a list
of dictionaries or full-table-tuples, which on the server will
render the less common SQL syntax of "multiple values" - this
- syntax is supported on backends such as SQLite, Postgresql, MySQL,
+ syntax is supported on backends such as SQLite, PostgreSQL, MySQL,
but not necessarily others::
users.insert().values([
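    {'name': 'some name'},
    {'name': 'some other name'},
    {'name': 'yet another name'},
])  # a single INSERT with a multi-row VALUES clause (names are illustrative)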
Above, we see that ``Wendy`` is passed as a parameter to the database,
while the placeholder ``:name_1`` is rendered in the appropriate form
- for the target database, in this case the Postgresql database.
+ for the target database, in this case the PostgreSQL database.
Similarly, :func:`.bindparam` is invoked automatically
when working with :term:`CRUD` statements as far as the "VALUES"
.. warning::
The composite IN construct is not supported by all backends,
- and is currently known to work on Postgresql and MySQL,
+ and is currently known to work on PostgreSQL and MySQL,
but not SQLite. Unsupported backends will raise
a subclass of :class:`~sqlalchemy.exc.DBAPIError` when such
an expression is invoked.
ANY and ALL.
The ANY and ALL keywords are available in different ways on different
- backends. On Postgresql, they only work for an ARRAY type. On
+ backends. On PostgreSQL, they only work for an ARRAY type. On
MySQL, they only work for subqueries.
"""
"""Represent SQL for a Python array-slice object.
This is not a specific SQL construct at this level, but
- may be interpreted by specific dialects, e.g. Postgresql.
+ may be interpreted by specific dialects, e.g. PostgreSQL.
"""
__visit_name__ = 'slice'
This construct wraps the function in a named alias which
is suitable for the FROM clause, in the style accepted for example
- by Postgresql.
+ by PostgreSQL.
e.g.::
"""Implement the [] operator.
This can be used by some database-specific types
- such as Postgresql ARRAY and HSTORE.
+ such as PostgreSQL ARRAY and HSTORE.
"""
return self.operate(getitem, index)
a MATCH-like function or operator provided by the backend.
Examples include:
- * Postgresql - renders ``x @@ to_tsquery(y)``
+ * PostgreSQL - renders ``x @@ to_tsquery(y)``
* MySQL - renders ``MATCH (x) AGAINST (y IN BOOLEAN MODE)``
* Oracle - renders ``CONTAINS(x, y)``
* other backends may provide special implementations.
an INTEGER type with no stated client-side or python-side defaults
should receive auto increment semantics automatically;
all other varieties of primary key columns will not. This
- includes that :term:`DDL` such as Postgresql SERIAL or MySQL
+ includes that :term:`DDL` such as PostgreSQL SERIAL or MySQL
AUTO_INCREMENT will be emitted for this column during a table
create, as well as that the column is assumed to generate new
integer primary key values when an INSERT statement invokes which
* DDL issued for the column will include database-specific
keywords intended to signify this column as an
"autoincrement" column, such as AUTO INCREMENT on MySQL,
- SERIAL on Postgresql, and IDENTITY on MS-SQL. It does
+ SERIAL on PostgreSQL, and IDENTITY on MS-SQL. It does
*not* issue AUTOINCREMENT for SQLite since this is a
special SQLite flag that is not required for autoincrementing
behavior.
:class:`.Sequence` object only needs to be explicitly generated
on backends that don't provide another way to generate primary
key identifiers. Currently, it essentially means, "don't create
- this sequence on the Postgresql backend, where the SERIAL keyword
+ this sequence on the PostgreSQL backend, where the SERIAL keyword
creates a sequence for us automatically".
:param quote: boolean value, when ``True`` or ``False``, explicitly
forces quoting of the schema name on or off. When left at its
FROM clause of an enclosing SELECT, but may correlate to other
FROM clauses of that SELECT. It is a special case of subquery
only supported by a small number of backends, currently more recent
- Postgresql versions.
+ PostgreSQL versions.
.. versionadded:: 1.1
on all :class:`.FromClause` subclasses.
While LATERAL is part of the SQL standard, currently only more recent
- Postgresql versions provide support for this keyword.
+ PostgreSQL versions provide support for this keyword.
.. versionadded:: 1.1
conjunction with UNION ALL in order to derive rows
from those already selected.
- The following examples include two from Postgresql's documentation at
+ The following examples include two from PostgreSQL's documentation at
http://www.postgresql.org/docs/current/static/queries-with.html,
as well as additional examples.
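A condensed sketch of the recursive form, loosely following that
documentation (table and column names are illustrative)::

    from sqlalchemy import Table, Column, String, MetaData, select

    metadata = MetaData()
    parts = Table('parts', metadata,
        Column('part', String),
        Column('sub_part', String),
    )

    included_parts = select([parts.c.sub_part, parts.c.part]).\
        where(parts.c.part == 'our part').\
        cte(recursive=True)

    parts_alias = parts.alias()
    included_alias = included_parts.alias()

    # the recursive term, joined back to the CTE via UNION ALL
    included_parts = included_parts.union_all(
        select([parts_alias.c.sub_part, parts_alias.c.part]).
        where(parts_alias.c.part == included_alias.c.sub_part)
    )

    stmt = select([included_parts.c.sub_part]).distinct()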
stmt = select([table]).with_for_update(nowait=True)
- On a database like Postgresql or Oracle, the above would render a
+ On a database like PostgreSQL or Oracle, the above would render a
statement like::
SELECT table.a, table.b FROM table FOR UPDATE NOWAIT
variants.
:param nowait: boolean; will render ``FOR UPDATE NOWAIT`` on Oracle
- and Postgresql dialects.
+ and PostgreSQL dialects.
:param read: boolean; will render ``LOCK IN SHARE MODE`` on MySQL,
- ``FOR SHARE`` on Postgresql. On Postgresql, when combined with
+ ``FOR SHARE`` on PostgreSQL. On PostgreSQL, when combined with
``nowait``, will render ``FOR SHARE NOWAIT``.
:param of: SQL expression or list of SQL expression elements
backend.
:param skip_locked: boolean, will render ``FOR UPDATE SKIP LOCKED``
- on Oracle and Postgresql dialects or ``FOR SHARE SKIP LOCKED`` if
+ on Oracle and PostgreSQL dialects or ``FOR SHARE SKIP LOCKED`` if
``read=True`` is also specified.
.. versionadded:: 1.1.0
:param key_share: boolean, will render ``FOR NO KEY UPDATE``,
or if combined with ``read=True`` will render ``FOR KEY SHARE``,
- on the Postgresql dialect.
+ on the PostgreSQL dialect.
.. versionadded:: 1.1.0
The boolean argument may also be a column expression or list
of column expressions - this is a special calling form which
- is understood by the Postgresql dialect to render the
+ is understood by the PostgreSQL dialect to render the
``DISTINCT ON (<columns>)`` syntax.
``distinct`` is also available on an existing :class:`.Select`
specific backends, including:
* ``"read"`` - on MySQL, translates to ``LOCK IN SHARE MODE``;
- on Postgresql, translates to ``FOR SHARE``.
- * ``"nowait"`` - on Postgresql and Oracle, translates to
+ on PostgreSQL, translates to ``FOR SHARE``.
+ * ``"nowait"`` - on PostgreSQL and Oracle, translates to
``FOR UPDATE NOWAIT``.
- * ``"read_nowait"`` - on Postgresql, translates to
+ * ``"read_nowait"`` - on PostgreSQL, translates to
``FOR SHARE NOWAIT``.
.. seealso::
columns clause.
:param \*expr: optional column expressions. When present,
- the Postgresql dialect will render a ``DISTINCT ON (<expressions>>)``
+ the PostgreSQL dialect will render a ``DISTINCT ON (<expressions>)``
construct.
"""
:param collation: Optional, a column-level collation for
use in DDL and CAST expressions. Renders using the
- COLLATE keyword supported by SQLite, MySQL, and Postgresql.
+ COLLATE keyword supported by SQLite, MySQL, and PostgreSQL.
E.g.::
>>> from sqlalchemy import cast, select, String
The :class:`.LargeBinary` type corresponds to a large and/or unlengthed
binary type for the target platform, such as BLOB on MySQL and BYTEA for
- Postgresql. It also handles the necessary conversions for the DBAPI.
+ PostgreSQL. It also handles the necessary conversions for the DBAPI.
"""
:param metadata: Associate this type directly with a ``MetaData``
object. For types that exist on the target database as an
- independent schema construct (Postgresql), this type will be
+ independent schema construct (PostgreSQL), this type will be
created and dropped within ``create_all()`` and ``drop_all()``
operations. If the type is not associated with any ``MetaData``
object, it will associate itself with each ``Table`` in which it is
only dropped when ``drop_all()`` is called for that ``Table``
object's metadata, however.
- :param name: The name of this type. This is required for Postgresql
+ :param name: The name of this type. This is required for PostgreSQL
and any future supported database which requires an explicitly
named type, or an explicitly named constraint in order to generate
the type and/or a table that uses it. If a PEP-435 enumerated
constraint for all backends.
:param schema: Schema name of this type. For types that exist on the
- target database as an independent schema construct (Postgresql),
+ target database as an independent schema construct (PostgreSQL),
this parameter specifies the named schema in which the type is
present.
:param native: when True, use the actual
INTERVAL type provided by the database, if
- supported (currently Postgresql, Oracle).
+ supported (currently PostgreSQL, Oracle).
Otherwise, represent the interval data as
an epoch value regardless.
:param second_precision: For native interval types
which support a "fractional seconds precision" parameter,
- i.e. Oracle and Postgresql
+ i.e. Oracle and PostgreSQL
:param day_precision: for native interval types which
support a "day precision" parameter, i.e. Oracle.
.. note:: :class:`.types.JSON` is provided as a facade for vendor-specific
JSON types. Since it supports JSON SQL operations, it only
works on backends that have an actual JSON type, currently
- Postgresql as well as certain versions of MySQL.
+ PostgreSQL as well as certain versions of MySQL.
:class:`.types.JSON` is part of the Core in support of the growing
popularity of native JSON datatypes.
"""Represent a SQL Array type.
.. note:: This type serves as the basis for all ARRAY operations.
- However, currently **only the Postgresql backend has support
+ However, currently **only the PostgreSQL backend has support
for SQL arrays in SQLAlchemy**. It is recommended to use the
:class:`.postgresql.ARRAY` type directly when using ARRAY types
with PostgreSQL, as it provides additional operators specific
)
The above type represents an N-dimensional array,
- meaning a supporting backend such as Postgresql will interpret values
+ meaning a supporting backend such as PostgreSQL will interpret values
with any number of dimensions automatically. To produce an INSERT
construct that passes in a 1-dimensional array of integers::
"""The SQL TIMESTAMP type.
:class:`~.types.TIMESTAMP` datatypes have support for timezone
- storage on some backends, such as Postgresql and Oracle. Use the
+ storage on some backends, such as PostgreSQL and Oracle. Use the
:paramref:`~types.TIMESTAMP.timezone` argument in order to enable
"TIMESTAMP WITH TIMEZONE" for these backends.
However, using the addition operator with an :class:`.Integer`
and a :class:`.Date` object will produce a :class:`.Date`, assuming
"days delta" behavior by the database (in reality, most databases
- other than Postgresql don't accept this particular operation).
+ other than PostgreSQL don't accept this particular operation).
The method returns a tuple of the form <operator>, <type>.
The resulting operator and type will be those applied to the
:ref:`session_forcing_null` - in the ORM documentation
- :paramref:`.postgresql.JSON.none_as_null` - Postgresql JSON
+ :paramref:`.postgresql.JSON.none_as_null` - PostgreSQL JSON
interaction with this flag.
:attr:`.TypeEngine.should_evaluate_none` - class-level flag
select data as foo from test order by foo || 'bar'
- Lots of databases including Postgresql don't support this,
+ Lots of databases including PostgreSQL don't support this,
so this is off by default.
"""
"""Test should be skipped if coverage is enabled.
This is to block tests that exercise libraries that seem to be
- sensitive to coverage, such as Postgresql notice logging.
+ sensitive to coverage, such as PostgreSQL notice logging.
"""
return exclusions.skip_if(
"""tests using percent signs, spaces in table and column names.
This is a very fringe use case, doesn't work for MySQL
- or Postgresql. the requirement, "percent_schema_names",
+ or PostgreSQL. the requirement, "percent_schema_names",
is marked "skip" by default.
"""
def test_array_functions_plus_getitem(self):
"""test parenthesizing of functions plus indexing, which seems
- to be required by Postgresql.
+ to be required by PostgreSQL.
"""
stmt = select([
@testing.fails_on(
"postgresql < 9.4",
- "Improvement in Postgresql behavior?")
+ "Improvement in PostgreSQL behavior?")
def test_multi_index_query(self):
engine = testing.db
self._fixture_data(engine)
"""Target database must support tables that can automatically generate
PKs assuming they were reflected.
- this is essentially all the DBs in "identity" plus Postgresql, which
+ this is essentially all the DBs in "identity" plus PostgreSQL, which
has SERIAL support. FB and Oracle (and sybase?) require the Sequence to
be explicitly added, including if the table was reflected.
"""
e = engines.testing_engine()
# starts as False. This is because all of Firebird,
- # Postgresql, Oracle, SQL Server started supporting RETURNING
+ # PostgreSQL, Oracle, SQL Server started supporting RETURNING
# as of a certain version, and the flag is not set until
# version detection occurs. If some DB comes along that has
# RETURNING in all cases, this test can be adjusted.
this logic is triggered currently by a left side that doesn't
have a key. The current supported use case is updating the index
- of a Postgresql ARRAY type.
+ of a PostgreSQL ARRAY type.
"""
table1 = self.tables.mytable