and "operators" dictionaries in compiler subclasses with straightforward
visitor methods, and also allows compiler subclasses complete control over
rendering, as the full _Function or _BinaryExpression object is passed in.
+
+- postgresql
+ - the "postgres" dialect is now named "postgresql" ! Connection strings look
+ like:
+
+ postgresql://scott:tiger@localhost/test
+ postgresql+pg8000://scott:tiger@localhost/test
+ The "postgres" name remains for backwards compatiblity in the following ways:
+
+ - There is a "postgres.py" dummy dialect which allows old URLs to work,
+ i.e. postgres://scott:tiger@localhost/test
+
+ - The "postgres" name can be imported from the old "databases" module,
+ i.e. "from sqlalchemy.databases import postgres" as well as "dialects",
+ "from sqlalchemy.dialects.postgres import base as pg", will send
+ a deprecation warning.
+
+ - Special expression arguments are now named "postgresql_returning"
+ and "postgresql_where", but the older "postgres_returning" and
+ "postgres_where" names still work with a deprecation warning.
+
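The deprecated-name fallback described above can be sketched as a generic helper (illustrative only; this is not the dialect's actual internal code, and `renamed_kwarg` is a hypothetical name):

```python
import warnings

def renamed_kwarg(kwargs, new_name, old_name, default=None):
    # Prefer the new spelling; accept the old one with a DeprecationWarning.
    if new_name in kwargs:
        return kwargs[new_name]
    if old_name in kwargs:
        warnings.warn(
            "The '%s' argument has been renamed to '%s'" % (old_name, new_name),
            DeprecationWarning, stacklevel=2)
        return kwargs[old_name]
    return default

# the old spelling still works, but warns
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = renamed_kwarg({'postgres_where': 'x > 10'},
                          'postgresql_where', 'postgres_where')
```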
- mysql
- all the _detect_XXX() functions now run once underneath dialect.initialize()
SQLAlchemy operations.
- new dialects
- - postgres+pg8000
- - postgres+pypostgresql (partial)
- - postgres+zxjdbc
+ - postgresql+pg8000
+ - postgresql+pypostgresql (partial)
+ - postgresql+zxjdbc
- mysql+pyodbc
- mysql+zxjdbc
- Repaired the printing of SQL exceptions which are not
based on parameters or are not executemany() style.
-- postgres
+- postgresql
- Deprecated the hardcoded TIMESTAMP function, which when
used as func.TIMESTAMP(value) would render "TIMESTAMP value".
- This breaks on some platforms as Postgres doesn't allow
+ This breaks on some platforms as PostgreSQL doesn't allow
bind parameters to be used in this context. The hard-coded
uppercase is also inappropriate and there's lots of other
PG casts that we'd need to support. So instead, use
fail on recent versions of pysqlite which raise
an error when fetchone() is called with no rows present.
-- postgres
+- postgresql
- Index reflection won't fail when an index with
multiple expressions is encountered.
- sql
- Improved the methodology to handling percent signs in column
names from [ticket:1256]. Added more tests. MySQL and
- Postgres dialects still do not issue correct CREATE TABLE
+ PostgreSQL dialects still do not issue correct CREATE TABLE
statements for identifiers with percent signs in them.
- schema
- Calling alias.execute() in conjunction with
server_side_cursors won't raise AttributeError.
- - Added Index reflection support to Postgres, using a great
+ - Added Index reflection support to PostgreSQL, using a great
patch we long neglected, submitted by Ken
Kuhlman. [ticket:714]
- simple label names in ORDER BY expressions render as
themselves, and not as a re-statement of their corresponding
expression. This feature is currently enabled only for
- SQLite, MySQL, and Postgres. It can be enabled on other
+ SQLite, MySQL, and PostgreSQL. It can be enabled on other
dialects as each is shown to support this
behavior. [ticket:1068]
Tests will target an in-memory SQLite database by default. To test against
another database, use the --dburi option with any standard SQLAlchemy URL:
- --dburi=postgres://user:password@localhost/test
+ --dburi=postgresql://user:password@localhost/test
Use an empty database and a database user with general DBA privileges. The
test suite will be creating and dropping many tables and other DDL, and
Available --db options (use --dburi to override)
mysql mysql://scott:tiger@127.0.0.1:3306/test
oracle oracle://scott:tiger@127.0.0.1:1521
- postgres postgres://scott:tiger@127.0.0.1:5432/test
+ postgresql postgresql://scott:tiger@127.0.0.1:5432/test
[...]
To run tests against an aliased database:
- $ nosetests --db=postgres
+ $ nosetests --db=postgresql
To customize the URLs with your own users or hostnames, make a simple .ini
file called `test.cfg` at the top level of the SQLAlchemy source distribution
or a `.satest.cfg` in your home directory:
[db]
- postgres=postgres://myuser:mypass@localhost/mydb
+ postgresql=postgresql://myuser:mypass@localhost/mydb
Your custom entries will override the defaults and you'll see them reflected
in the output of --dbs.
TIPS
----
-Postgres: The tests require an 'alt_schema' and 'alt_schema_2' to be present in
+PostgreSQL: The tests require the schemas 'alt_schema' and 'alt_schema_2' to be present in
the testing database.
-Postgres: When running the tests on postgres, postgres can get slower and
+PostgreSQL: When running the tests on PostgreSQL, the database can get slower and
slower each time you run the tests. This seems to be related to the constant
creation/dropping of tables. Running a "VACUUM FULL" on the database will
speed it up again.
Creating an engine is just a matter of issuing a single call, :func:`create_engine()`::
- engine = create_engine('postgres://scott:tiger@localhost:5432/mydatabase')
+ engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')
-The above engine invokes the ``postgres`` dialect and a connection pool which references ``localhost:5432``.
+The above engine invokes the ``postgresql`` dialect and a connection pool which references ``localhost:5432``.
The engine can be used directly to issue SQL to the database. The most generic way is to use connections, which you get via the ``connect()`` method::
Supported Databases
====================
-Recall that the ``Dialect`` is used to describe how to talk to a specific kind of database. Dialects are included with SQLAlchemy for SQLite, Postgres, MySQL, MS-SQL, Firebird, Informix, and Oracle; these can each be seen as a Python module present in the :mod:``~sqlalchemy.databases`` package. Each dialect requires the appropriate DBAPI drivers to be installed separately.
+Recall that the ``Dialect`` is used to describe how to talk to a specific kind of database. Dialects are included with SQLAlchemy for SQLite, PostgreSQL, MySQL, MS-SQL, Firebird, Informix, and Oracle; these can each be seen as a Python module present in the :mod:`~sqlalchemy.databases` package. Each dialect requires the appropriate DBAPI drivers to be installed separately.
Downloads for each DBAPI at the time of this writing are as follows:
-* Postgres: `psycopg2 <http://www.initd.org/tracker/psycopg/wiki/PsycopgTwo>`_
+* PostgreSQL: `psycopg2 <http://www.initd.org/tracker/psycopg/wiki/PsycopgTwo>`_
* SQLite: `sqlite3 <http://www.python.org/doc/2.5.2/lib/module-sqlite3.html>`_ (included in Python 2.5 or greater) `pysqlite <http://initd.org/tracker/pysqlite>`_
* MySQL: `MySQLDB <http://sourceforge.net/projects/mysql-python>`_
* Oracle: `cx_Oracle <http://cx-oracle.sourceforge.net/>`_
driver://username:password@host:port/database
-Available drivernames are ``sqlite``, ``mysql``, ``postgres``, ``oracle``, ``mssql``, and ``firebird``. For sqlite, the database name is the filename to connect to, or the special name ":memory:" which indicates an in-memory database. The URL is typically sent as a string to the ``create_engine()`` function:
+Available drivernames are ``sqlite``, ``mysql``, ``postgresql``, ``oracle``, ``mssql``, and ``firebird``. For sqlite, the database name is the filename to connect to, or the special name ":memory:" which indicates an in-memory database. The URL is typically sent as a string to the ``create_engine()`` function:
.. sourcecode:: python+sql
- # postgres
- pg_db = create_engine('postgres://scott:tiger@localhost:5432/mydatabase')
+ # postgresql
+ pg_db = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')
# sqlite (note the four slashes for an absolute path)
sqlite_db = create_engine('sqlite:////absolute/path/to/database.txt')
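For illustration only, the components of such a URL can be pulled apart with the standard library's ``urlsplit`` (SQLAlchemy uses its own URL parser; ``urllib.parse`` is just a stand-in here to show the structure):

```python
from urllib.parse import urlsplit

url = urlsplit('postgresql://scott:tiger@localhost:5432/mydatabase')
drivername = url.scheme                        # 'postgresql'
user, password = url.username, url.password    # 'scott', 'tiger'
host, port = url.hostname, url.port            # 'localhost', 5432
database = url.path.lstrip('/')                # 'mydatabase'
```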
.. sourcecode:: python+sql
- db = create_engine('postgres://scott:tiger@localhost/test?argument1=foo&argument2=bar')
+ db = create_engine('postgresql://scott:tiger@localhost/test?argument1=foo&argument2=bar')
If SQLAlchemy's database connector is aware of a particular query argument, it may convert the value from a string into its proper type.
.. sourcecode:: python+sql
- db = create_engine('postgres://scott:tiger@localhost/test', connect_args = {'argument1':17, 'argument2':'bar'})
+ db = create_engine('postgresql://scott:tiger@localhost/test', connect_args = {'argument1':17, 'argument2':'bar'})
The most customizable connection method of all is to pass a ``creator`` argument, which specifies a callable that returns a DBAPI connection:
def connect():
return psycopg.connect(user='scott', host='localhost')
- db = create_engine('postgres://', creator=connect)
+ db = create_engine('postgresql://', creator=connect)
.. _create_engine_args:
.. sourcecode:: python+sql
- db = create_engine('postgres://...', encoding='latin1', echo=True)
+ db = create_engine('postgresql://...', encoding='latin1', echo=True)
Options common to all database dialects are described at :func:`~sqlalchemy.create_engine`.
widget_id name favorite_entry_id entry_id name widget_id
1 'somewidget' 5 5 'someentry' 1
-In the first case, a row points to itself. Technically, a database that uses sequences such as Postgres or Oracle can INSERT the row at once using a previously generated value, but databases which rely upon autoincrement-style primary key identifiers cannot. The ``relation()`` always assumes a "parent/child" model of row population during flush, so unless you are populating the primary key/foreign key columns directly, ``relation()`` needs to use two statements.
+In the first case, a row points to itself. Technically, a database that uses sequences such as PostgreSQL or Oracle can INSERT the row at once using a previously generated value, but databases which rely upon autoincrement-style primary key identifiers cannot. The ``relation()`` always assumes a "parent/child" model of row population during flush, so unless you are populating the primary key/foreign key columns directly, ``relation()`` needs to use two statements.
In the second case, the "widget" row must be inserted before any referring "entry" rows, but then the "favorite_entry_id" column of that "widget" row cannot be set until the "entry" rows have been generated. In this case, it's typically impossible to insert the "widget" and "entry" rows using just two INSERT statements; an UPDATE must be performed in order to keep foreign key constraints fulfilled. The exception is if the foreign keys are configured as "deferred until commit" (a feature some databases support) and if the identifiers were populated manually (again essentially bypassing ``relation()``).
* the column is a primary key column
-* the database dialect does not support a usable ``cursor.lastrowid`` accessor (or equivalent); this currently includes Postgres, Oracle, and Firebird.
+* the database dialect does not support a usable ``cursor.lastrowid`` accessor (or equivalent); this currently includes PostgreSQL, Oracle, and Firebird.
* the statement is a single execution, i.e. only supplies one set of parameters and doesn't use "executemany" behavior
When the ``Sequence`` is associated with a table, CREATE and DROP statements issued for that table will also issue CREATE/DROP for the sequence object as well, thus "bundling" the sequence object with its parent table.
-The flag ``optional=True`` on ``Sequence`` will produce a sequence that is only used on databases which have no "autoincrementing" capability. For example, Postgres supports primary key generation using the SERIAL keyword, whereas Oracle has no such capability. Therefore, a ``Sequence`` placed on a primary key column with ``optional=True`` will only be used with an Oracle backend but not Postgres.
+The flag ``optional=True`` on ``Sequence`` will produce a sequence that is only used on databases which have no "autoincrementing" capability. For example, PostgreSQL supports primary key generation using the SERIAL keyword, whereas Oracle has no such capability. Therefore, a ``Sequence`` placed on a primary key column with ``optional=True`` will only be used with an Oracle backend but not PostgreSQL.
A sequence can also be executed standalone, using an ``Engine`` or ``Connection``, returning its next value in a database-independent fashion:
()
COMMIT
-Users familiar with the syntax of CREATE TABLE may notice that the VARCHAR columns were generated without a length; on SQLite, this is a valid datatype, but on most databases it's not allowed. So if running this tutorial on a database such as Postgres or MySQL, and you wish to use SQLAlchemy to generate the tables, a "length" may be provided to the ``String`` type as below::
+Users familiar with the syntax of CREATE TABLE may notice that the VARCHAR columns were generated without a length; on SQLite, this is a valid datatype, but on most databases it's not allowed. So if running this tutorial on a database such as PostgreSQL or MySQL, and you wish to use SQLAlchemy to generate the tables, a "length" may be provided to the ``String`` type as below::
Column('name', String(50))
mssql
mysql
oracle
- postgres
+ postgresql
sqlite
sybase
PostgreSQL
==========
-.. automodule:: sqlalchemy.dialects.postgres.base
+.. automodule:: sqlalchemy.dialects.postgresql.base
psycopg2 Notes
--------------
-.. automodule:: sqlalchemy.dialects.postgres.psycopg2
+.. automodule:: sqlalchemy.dialects.postgresql.psycopg2
pg8000 Notes
--------------
-.. automodule:: sqlalchemy.dialects.postgres.pg8000
+.. automodule:: sqlalchemy.dialects.postgresql.pg8000
``pool_size``, ``max_overflow``, ``pool_recycle`` and
``pool_timeout``. For example::
- engine = create_engine('postgres://me@localhost/mydb',
+ engine = create_engine('postgresql://me@localhost/mydb',
pool_size=20, max_overflow=0)
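The interaction of ``pool_size`` and ``max_overflow`` can be sketched with a toy pool (a simplified illustration, not SQLAlchemy's actual ``QueuePool``; ``MiniPool`` is a hypothetical name):

```python
import queue

class MiniPool(object):
    """Toy sketch of pool_size / max_overflow semantics."""
    def __init__(self, creator, pool_size=5, max_overflow=10):
        self._creator = creator
        self._pool = queue.Queue(pool_size)     # holds checked-in connections
        self._max = pool_size + max_overflow    # hard cap on live connections
        self._created = 0

    def connect(self):
        # reuse a checked-in connection when one is available
        try:
            return self._pool.get_nowait()
        except queue.Empty:
            if self._created >= self._max:
                raise RuntimeError("connection limit reached")
            self._created += 1
            return self._creator()

    def return_conn(self, conn):
        # overflow connections are discarded once the pool itself is full
        try:
            self._pool.put_nowait(conn)
        except queue.Full:
            self._created -= 1
```

With ``max_overflow=0`` as in the example above, the pool size is a hard limit on concurrent connections.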
In the case of SQLite, a :class:`SingletonThreadPool` is provided instead,
Or some PostgreSQL types::
- from sqlalchemy.dialect.postgres import dialect as postgresql
+ from sqlalchemy.dialects.postgresql import dialect as postgresql
table = Table('foo', meta,
Column('ipaddress', postgresql.INET),
Session = sessionmaker()
# later, we create the engine
- engine = create_engine('postgres://...')
+ engine = create_engine('postgresql://...')
# associate it with our custom Session class
Session.configure(bind=engine)
# global application scope. create Session class, engine
Session = sessionmaker()
- engine = create_engine('postgres://...')
+ engine = create_engine('postgresql://...')
...
Finally, for MySQL, PostgreSQL, and soon Oracle as well, the session can be instructed to use two-phase commit semantics. This will coordinate the committing of transactions across databases so that the transaction is either committed or rolled back in all databases. You can also ``prepare()`` the session for interacting with transactions not managed by SQLAlchemy. To use two-phase transactions, set the flag ``twophase=True`` on the session::
- engine1 = create_engine('postgres://db1')
- engine2 = create_engine('postgres://db2')
+ engine1 = create_engine('postgresql://db1')
+ engine2 = create_engine('postgresql://db2')
Session = sessionmaker(twophase=True)
When using the ``threadlocal`` engine context, the process above is simplified; the ``Session`` uses the same connection/transaction as everyone else in the current thread, whether or not you explicitly bind it::
- engine = create_engine('postgres://mydb', strategy="threadlocal")
+ engine = create_engine('postgresql://mydb', strategy="threadlocal")
engine.begin()
session = Session() # session takes place in the transaction like everyone else
Vertical partitioning places different kinds of objects, or different tables, across multiple databases::
- engine1 = create_engine('postgres://db1')
- engine2 = create_engine('postgres://db2')
+ engine1 = create_engine('postgresql://db1')
+ engine2 = create_engine('postgresql://db2')
Session = sessionmaker(twophase=True)
return runner.failures, runner.tries
def replace_file(s, newfile):
- engine = r"'(sqlite|postgres|mysql):///.*'"
+ engine = r"'(sqlite|postgresql|mysql):///.*'"
engine = re.compile(engine, re.MULTILINE)
s, n = re.subn(engine, "'sqlite:///" + newfile + "'", s)
if not n:
from sqlalchemy.orm import sessionmaker, column_property
from sqlalchemy.ext.declarative import declarative_base
- engine = create_engine('postgres://scott:tiger@localhost/gistest', echo=True)
+ engine = create_engine('postgresql://scott:tiger@localhost/gistest', echo=True)
metadata = MetaData(engine)
Base = declarative_base(metadata=metadata)
# the MIT License: http://www.opensource.org/licenses/mit-license.php
from sqlalchemy.dialects.sqlite import base as sqlite
-from sqlalchemy.dialects.postgres import base as postgres
+from sqlalchemy.dialects.postgresql import base as postgresql
+postgres = postgresql
from sqlalchemy.dialects.mysql import base as mysql
from sqlalchemy.dialects.oracle import base as oracle
from sqlalchemy.dialects.firebird import base as firebird
'maxdb',
'mssql',
'mysql',
- 'postgres',
+ 'postgresql',
'sqlite',
'oracle',
'sybase',
# 'mssql',
'mysql',
'oracle',
- 'postgres',
+ 'postgresql',
'sqlite',
# 'sybase',
)
--- /dev/null
+# backwards compat with the old name
+from sqlalchemy.util import warn_deprecated
+
+warn_deprecated(
+ "The SQLAlchemy PostgreSQL dialect has been renamed from 'postgres' to 'postgresql'. "
+ "The new URL format is postgresql[+driver]://<user>:<pass>@<host>/<dbname>"
+ )
+
+from sqlalchemy.dialects.postgresql import *
\ No newline at end of file
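A generic sketch of this dummy-module technique in plain Python (``mylib.oldname`` and ``mylib.newname`` are hypothetical stand-ins, and ``types.ModuleType`` stands in for real module files):

```python
import sys
import types
import warnings

# stand-in for the renamed package
new_mod = types.ModuleType('mylib.newname')
new_mod.dialect = object()
sys.modules['mylib.newname'] = new_mod

# the compat shim: warn at import time, then re-export the public names
warnings.warn(
    "'mylib.oldname' has been renamed to 'mylib.newname'",
    DeprecationWarning)
old_mod = types.ModuleType('mylib.oldname')
for name, value in vars(new_mod).items():
    if not name.startswith('_'):
        setattr(old_mod, name, value)
sys.modules['mylib.oldname'] = old_mod
```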
+++ /dev/null
-from sqlalchemy.dialects.postgres import base, psycopg2
-
-base.dialect = psycopg2.dialect
\ No newline at end of file
--- /dev/null
+from sqlalchemy.dialects.postgresql import base, psycopg2
+
+base.dialect = psycopg2.dialect
\ No newline at end of file
-# postgres.py
+# postgresql.py
# Copyright (C) 2005, 2006, 2007, 2008, 2009 Michael Bayer mike_mp@zzzcomputing.com
#
# This module is part of SQLAlchemy and is released under
Sequences/SERIAL
----------------
-Postgres supports sequences, and SQLAlchemy uses these as the default means of creating
+PostgreSQL supports sequences, and SQLAlchemy uses these as the default means of creating
new primary key values for integer-based primary key columns. When creating tables,
SQLAlchemy will issue the ``SERIAL`` datatype for integer-based primary key columns,
which generates a sequence corresponding to the column and associated with it based on
"executemany" semantics, the sequence is not pre-executed and normal PG SERIAL behavior
is used.
-Postgres 8.3 supports an ``INSERT...RETURNING`` syntax which SQLAlchemy supports
+PostgreSQL 8.3 supports an ``INSERT...RETURNING`` syntax which SQLAlchemy supports
as well. A future release of SQLA will use this feature by default in lieu of
sequence pre-execution in order to retrieve new primary key values, when available.
but must be explicitly enabled on a per-statement basis::
# INSERT..RETURNING
- result = table.insert(postgres_returning=[table.c.col1, table.c.col2]).\\
+ result = table.insert(postgresql_returning=[table.c.col1, table.c.col2]).\\
values(name='foo')
print result.fetchall()
# UPDATE..RETURNING
- result = table.update(postgres_returning=[table.c.col1, table.c.col2]).\\
+ result = table.update(postgresql_returning=[table.c.col1, table.c.col2]).\\
where(table.c.name=='foo').values(name='bar')
print result.fetchall()
Indexes
-------
-PostgreSQL supports partial indexes. To create them pass a postgres_where
+PostgreSQL supports partial indexes. To create them pass a postgresql_where
option to the Index constructor::
- Index('my_index', my_table.c.id, postgres_where=tbl.c.value > 10)
+ Index('my_index', my_table.c.id, postgresql_where=tbl.c.value > 10)
def post_process_text(self, text):
if '%%' in text:
- util.warn("The SQLAlchemy postgres dialect now automatically escapes '%' in text() expressions to '%%'.")
+ util.warn("The SQLAlchemy postgresql dialect now automatically escapes '%' in text() expressions to '%%'.")
return text.replace('%', '%%')
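The doubling matters because DBAPIs using the ``format``/``pyformat`` paramstyle pass the SQL text through ``%``-style interpolation when binding parameters; a stdlib-only illustration (``escape_percents`` is a hypothetical helper mirroring the replace above):

```python
def escape_percents(text):
    # a literal '%' must be doubled so the DBAPI's own '%' interpolation
    # (used for pyformat bind parameters) passes it through unchanged
    return text.replace('%', '%%')

sql = "SELECT * FROM t WHERE name LIKE 'foo%'"
escaped = escape_percents(sql)
# simulating the DBAPI's pyformat substitution step:
final = escaped % {}
```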
def visit_sequence(self, seq):
return super(PGCompiler, self).for_update_clause(select)
def _append_returning(self, text, stmt):
- returning_cols = stmt.kwargs['postgres_returning']
+ try:
+ returning_cols = stmt.kwargs['postgresql_returning']
+ except KeyError:
+ returning_cols = stmt.kwargs['postgres_returning']
+ util.warn_deprecated("The 'postgres_returning' argument has been renamed to 'postgresql_returning'")
+
def flatten_columnlist(collist):
for c in collist:
if isinstance(c, expression.Selectable):
def visit_update(self, update_stmt):
text = super(PGCompiler, self).visit_update(update_stmt)
- if 'postgres_returning' in update_stmt.kwargs:
+ if 'postgresql_returning' in update_stmt.kwargs or 'postgres_returning' in update_stmt.kwargs:
return self._append_returning(text, update_stmt)
else:
return text
def visit_insert(self, insert_stmt):
text = super(PGCompiler, self).visit_insert(insert_stmt)
- if 'postgres_returning' in insert_stmt.kwargs:
+ if 'postgresql_returning' in insert_stmt.kwargs or 'postgres_returning' in insert_stmt.kwargs:
return self._append_returning(text, insert_stmt)
else:
return text
% (preparer.quote(self._validate_identifier(index.name, True), index.quote),
preparer.format_table(index.table),
', '.join([preparer.format_column(c) for c in index.columns]))
-
- whereclause = index.kwargs.get('postgres_where', None)
+
+ if "postgres_where" in index.kwargs:
+ whereclause = index.kwargs['postgres_where']
+ util.warn_deprecated("The 'postgres_where' argument has been renamed to 'postgresql_where'.")
+ elif 'postgresql_where' in index.kwargs:
+ whereclause = index.kwargs['postgresql_where']
+ else:
+ whereclause = None
+
if whereclause is not None:
compiler = self._compile(whereclause, None)
# this might belong to the compiler class
class PGDialect(default.DefaultDialect):
- name = 'postgres'
+ name = 'postgresql'
supports_alter = True
max_identifier_length = 63
supports_sane_rowcount = True
Connecting
----------
-URLs are of the form `postgres+pg8000://user@password@host:port/dbname[?key=value&key=value...]`.
+URLs are of the form `postgresql+pg8000://user:password@host:port/dbname[?key=value&key=value...]`.
Unicode
-------
-pg8000 requires that the postgres client encoding be configured in the postgresql.conf file
+pg8000 requires that the PostgreSQL client encoding be configured in the postgresql.conf file
in order to use encodings other than ascii. Set this value to the same value as
the "encoding" parameter on create_engine(), usually "utf-8".
import decimal
from sqlalchemy import util
from sqlalchemy import types as sqltypes
-from sqlalchemy.dialects.postgres.base import PGDialect, PGCompiler
+from sqlalchemy.dialects.postgresql.base import PGDialect, PGCompiler
class _PGNumeric(sqltypes.Numeric):
def bind_processor(self, dialect):
return value
return process
-class Postgres_pg8000ExecutionContext(default.DefaultExecutionContext):
+class PostgreSQL_pg8000ExecutionContext(default.DefaultExecutionContext):
pass
-class Postgres_pg8000Compiler(PGCompiler):
+class PostgreSQL_pg8000Compiler(PGCompiler):
def visit_mod(self, binary, **kw):
return self.process(binary.left) + " %% " + self.process(binary.right)
-class Postgres_pg8000(PGDialect):
+class PostgreSQL_pg8000(PGDialect):
driver = 'pg8000'
supports_unicode_statements = True
default_paramstyle = 'format'
supports_sane_multi_rowcount = False
- execution_ctx_cls = Postgres_pg8000ExecutionContext
- statement_compiler = Postgres_pg8000Compiler
+ execution_ctx_cls = PostgreSQL_pg8000ExecutionContext
+ statement_compiler = PostgreSQL_pg8000Compiler
colspecs = util.update_copy(
PGDialect.colspecs,
def is_disconnect(self, e):
return "connection is closed" in str(e)
-dialect = Postgres_pg8000
+dialect = PostgreSQL_pg8000
Connecting
----------
-URLs are of the form `postgres+psycopg2://user@password@host:port/dbname[?key=value&key=value...]`.
+URLs are of the form `postgresql+psycopg2://user:password@host:port/dbname[?key=value&key=value...]`.
psycopg2-specific keyword arguments which are accepted by :func:`~sqlalchemy.create_engine()` are:
from sqlalchemy.sql import expression
from sqlalchemy.sql import operators as sql_operators
from sqlalchemy import types as sqltypes
-from sqlalchemy.dialects.postgres.base import PGDialect, PGCompiler
+from sqlalchemy.dialects.postgresql.base import PGDialect, PGCompiler
class _PGNumeric(sqltypes.Numeric):
def bind_processor(self, dialect):
r'\s*SELECT',
re.I | re.UNICODE)
-class Postgres_psycopg2ExecutionContext(default.DefaultExecutionContext):
+class PostgreSQL_psycopg2ExecutionContext(default.DefaultExecutionContext):
def create_cursor(self):
# TODO: coverage for server side cursors + select.for_update()
is_server_side = \
else:
return base.ResultProxy(self)
-class Postgres_psycopg2Compiler(PGCompiler):
+class PostgreSQL_psycopg2Compiler(PGCompiler):
def visit_mod(self, binary, **kw):
return self.process(binary.left) + " %% " + self.process(binary.right)
def post_process_text(self, text):
return text.replace('%', '%%')
-class Postgres_psycopg2(PGDialect):
+class PostgreSQL_psycopg2(PGDialect):
driver = 'psycopg2'
supports_unicode_statements = False
default_paramstyle = 'pyformat'
supports_sane_multi_rowcount = False
- execution_ctx_cls = Postgres_psycopg2ExecutionContext
- statement_compiler = Postgres_psycopg2Compiler
+ execution_ctx_cls = PostgreSQL_psycopg2ExecutionContext
+ statement_compiler = PostgreSQL_psycopg2Compiler
colspecs = util.update_copy(
PGDialect.colspecs,
else:
return False
-dialect = Postgres_psycopg2
+dialect = PostgreSQL_psycopg2
Connecting
----------
-URLs are of the form `postgres+pypostgresql://user@password@host:port/dbname[?key=value&key=value...]`.
+URLs are of the form `postgresql+pypostgresql://user:password@host:port/dbname[?key=value&key=value...]`.
"""
import decimal
from sqlalchemy import util
from sqlalchemy import types as sqltypes
-from sqlalchemy.dialects.postgres.base import PGDialect, PGDefaultRunner
+from sqlalchemy.dialects.postgresql.base import PGDialect, PGDefaultRunner
class PGNumeric(sqltypes.Numeric):
def bind_processor(self, dialect):
return value
return process
-class Postgres_pypostgresqlExecutionContext(default.DefaultExecutionContext):
+class PostgreSQL_pypostgresqlExecutionContext(default.DefaultExecutionContext):
pass
-class Postgres_pypostgresqlDefaultRunner(PGDefaultRunner):
+class PostgreSQL_pypostgresqlDefaultRunner(PGDefaultRunner):
def execute_string(self, stmt, params=None):
return PGDefaultRunner.execute_string(self, stmt, params or ())
-class Postgres_pypostgresql(PGDialect):
+class PostgreSQL_pypostgresql(PGDialect):
driver = 'pypostgresql'
supports_unicode_statements = True
supports_unicode_binds = True
description_encoding = None
- defaultrunner = Postgres_pypostgresqlDefaultRunner
+ defaultrunner = PostgreSQL_pypostgresqlDefaultRunner
default_paramstyle = 'format'
supports_sane_multi_rowcount = False
- execution_ctx_cls = Postgres_pypostgresqlExecutionContext
+ execution_ctx_cls = PostgreSQL_pypostgresqlExecutionContext
colspecs = util.update_copy(
PGDialect.colspecs,
{
def is_disconnect(self, e):
return "connection is closed" in str(e)
-dialect = Postgres_pypostgresql
+dialect = PostgreSQL_pypostgresql
-from sqlalchemy.dialects.postgres.base import PGDialect
+from sqlalchemy.dialects.postgresql.base import PGDialect
from sqlalchemy.connectors.zxJDBC import ZxJDBCConnector
from sqlalchemy.engine import default
-class Postgres_jdbcExecutionContext(default.DefaultExecutionContext):
+class PostgreSQL_jdbcExecutionContext(default.DefaultExecutionContext):
pass
-class Postgres_jdbc(ZxJDBCConnector, PGDialect):
- execution_ctx_cls = Postgres_jdbcExecutionContext
+class PostgreSQL_jdbc(ZxJDBCConnector, PGDialect):
+ execution_ctx_cls = PostgreSQL_jdbcExecutionContext
jdbc_db_name = 'postgresql'
jdbc_driver_name = "org.postgresql.Driver"
def _get_server_version_info(self, connection):
return tuple(int(x) for x in connection.connection.dbversion.split('.'))
-dialect = Postgres_jdbc
\ No newline at end of file
+dialect = PostgreSQL_jdbc
\ No newline at end of file
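The ``_get_server_version_info`` method above converts the dotted version string into a tuple of integers, which compare correctly where raw strings do not. A standalone sketch of the same idiom (``version_info`` is a hypothetical name):

```python
def version_info(dbversion):
    # '8.3.7' -> (8, 3, 7); integer tuples order correctly,
    # unlike the raw strings ('10.1' sorts before '9.6' as a string)
    return tuple(int(x) for x in dbversion.split('.'))
```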
the current mixed case naming can remain, i.e. _PGNumeric for Numeric - in this case,
end users would never need to use _PGNumeric directly. However, if a dialect-specific
type is specifying a type *or* arguments that are not present generically, it should
-match the real name of the type on that backend, in uppercase. E.g. postgres.INET,
-mysql.ENUM, postgres.ARRAY.
+match the real name of the type on that backend, in uppercase. E.g. postgresql.INET,
+mysql.ENUM, postgresql.ARRAY.
Or follow this handy flowchart:
Ideally one should be able to specify a schema using names imported completely from a
dialect, all matching the real name on that backend:
- from sqlalchemy.dialects.postgres import base as pg
+ from sqlalchemy.dialects.postgresql import base as pg
t = Table('mytable', metadata,
Column('id', pg.INTEGER, primary_key=True),
The URL is a string in the form
``dialect://user:password@host/dbname[?key=value..]``, where
- ``dialect`` is a name such as ``mysql``, ``oracle``, ``postgres``,
+ ``dialect`` is a name such as ``mysql``, ``oracle``, ``postgresql``,
etc. Alternatively, the URL can be an instance of
:class:`~sqlalchemy.engine.url.URL`.
:param module=None: used by database implementations which
support multiple DBAPI modules, this is a reference to a DBAPI2
module to be used instead of the engine's default module. For
- Postgres, the default is psycopg2. For Oracle, it's cx_Oracle.
+ PostgreSQL, the default is psycopg2. For Oracle, it's cx_Oracle.
:param pool=None: an already-constructed instance of
:class:`~sqlalchemy.pool.Pool`, such as a
"""Return a new cursor generated from this ExecutionContext's connection.
Some dialects may wish to change the behavior of
- connection.cursor(), such as postgres which may return a PG
+ connection.cursor(), such as the postgresql dialect, which may return a PG
"server side" cursor.
"""
else:
dialect, driver = self.drivername, 'base'
- module = __import__('sqlalchemy.dialects.%s.%s' % (dialect, driver)).dialects
+ module = __import__('sqlalchemy.dialects.%s' % (dialect, )).dialects
module = getattr(module, dialect)
module = getattr(module, driver)
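The ``dialect+driver`` naming used by the lookup above can be sketched on its own (``split_drivername`` is a hypothetical helper mirroring the split, not the actual URL class):

```python
def split_drivername(drivername):
    # 'postgresql+pg8000' -> ('postgresql', 'pg8000');
    # a bare 'postgresql' implies the default 'base' driver module
    if '+' in drivername:
        dialect, driver = drivername.split('+', 1)
    else:
        dialect, driver = drivername, 'base'
    return dialect, driver
```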
def visit_alter_column(element, compiler, **kw):
return "ALTER COLUMN %s ..." % element.column.name
- @compiles(AlterColumn, 'postgres')
+ @compiles(AlterColumn, 'postgresql')
def visit_alter_column(element, compiler, **kw):
return "ALTER TABLE %s ALTER COLUMN %s ..." % (element.table.name, element.column.name)
-The second ``visit_alter_table`` will be invoked when any ``postgres`` dialect is used.
+The second ``visit_alter_table`` will be invoked when any ``postgresql`` dialect is used.
The ``compiler`` argument is the :class:`~sqlalchemy.engine.base.Compiled` object
in use. This object can be inspected for any information about the in-progress
objects. A typical application setup using :func:`~sqlalchemy.orm.scoped_session` might look
like::
- engine = create_engine('postgres://scott:tiger@localhost/test')
+ engine = create_engine('postgresql://scott:tiger@localhost/test')
Session = scoped_session(sessionmaker(autocommit=False,
autoflush=False,
bind=engine))
the foreign key in the database, and that the database will
handle propagation of an UPDATE from a source column to
dependent rows. Note that with databases which enforce
- referential integrity (i.e. Postgres, MySQL with InnoDB tables),
+ referential integrity (i.e. PostgreSQL, MySQL with InnoDB tables),
ON UPDATE CASCADE is required for this operation. The
relation() will update the value of the attribute on related
items which are locally present in the session during a flush.
like::
sess = Session(binds={
- SomeMappedClass: create_engine('postgres://engine1'),
- somemapper: create_engine('postgres://engine2'),
- some_table: create_engine('postgres://engine3'),
+ SomeMappedClass: create_engine('postgresql://engine1'),
+ somemapper: create_engine('postgresql://engine2'),
+ some_table: create_engine('postgresql://engine3'),
})
Also see the ``bind_mapper()`` and ``bind_table()`` methods.
or :meth:`create_all()`. The flag has no relevance at any
other time.
* The database supports autoincrementing behavior, such as
- Postgres or MySQL, and this behavior can be disabled (which does
+ PostgreSQL or MySQL, and this behavior can be disabled (which does
not include SQLite).
:param default: A scalar, Python callable, or :class:`~sqlalchemy.sql.expression.ClauseElement`
unique
Defaults to False: create a unique index.
- postgres_where
+ postgresql_where
Defaults to None: create a partial index when using PostgreSQL
"""
predicate. If a string, it will be compared to the name of the
executing database dialect::
- DDL('something', on='postgres')
+ DDL('something', on='postgresql')
If a tuple, specifies multiple dialect names:
- DDL('something', on=('postgres', 'mysql'))
+ DDL('something', on=('postgresql', 'mysql'))
If a callable, it will be invoked with three positional arguments
as well as optional keyword arguments:
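Taken together, the three forms of the ``on`` criterion (string, tuple, callable) can be sketched as a single filter function; this is an illustrative stand-in, not the actual DDL implementation, and the callable's argument handling is abbreviated:

```python
def should_execute(on, dialect_name, ddl=None, event=None, target=None):
    # 'on' may be None (always execute), a single dialect name, a tuple
    # of names, or a callable returning a boolean -- the documented forms.
    if on is None:
        return True
    if isinstance(on, str):
        return dialect_name == on
    if isinstance(on, (tuple, list, set)):
        return dialect_name in on
    if callable(on):
        # Abbreviated: the real callable also receives keyword arguments.
        return bool(on(ddl, event, target))
    raise TypeError("unexpected value for 'on': %r" % (on,))
```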
[db]
sqlite=sqlite:///:memory:
sqlite_file=sqlite:///querytest.db
-postgres=postgres://scott:tiger@127.0.0.1:5432/test
-pg8000=postgres+pg8000://scott:tiger@127.0.0.1:5432/test
-postgres_jython=postgres+zxjdbc://scott:tiger@127.0.0.1:5432/test
+postgresql=postgresql://scott:tiger@127.0.0.1:5432/test
+postgres=postgresql://scott:tiger@127.0.0.1:5432/test
+pg8000=postgresql+pg8000://scott:tiger@127.0.0.1:5432/test
+postgresql_jython=postgresql+zxjdbc://scott:tiger@127.0.0.1:5432/test
mysql_jython=mysql+zxjdbc://scott:tiger@127.0.0.1:5432/test
mysql=mysql://scott:tiger@127.0.0.1:3306/test
oracle=oracle://scott:tiger@127.0.0.1:1521
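Each entry above follows the ``dialect[+driver]://user:password@host[:port]/database`` form. A rough way to split such a URL apart (a hypothetical helper, not SQLAlchemy's own URL parser):

```python
import re

# Hypothetical parser for dialect[+driver]://user:pass@host[:port]/db URLs.
_URL_RE = re.compile(
    r'(?P<dialect>[^+:]+)(?:\+(?P<driver>[^:]+))?://'
    r'(?:(?P<user>[^:@]+)(?::(?P<pw>[^@]*))?@)?'
    r'(?P<host>[^:/]*)(?::(?P<port>\d+))?'
    r'(?:/(?P<db>.*))?$'
)

def parse_url(url):
    m = _URL_RE.match(url)
    if m is None:
        raise ValueError("could not parse URL: %r" % url)
    return m.groupdict()
```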
fn,
no_support('firebird', 'not supported by database'),
no_support('oracle', 'not supported by database'),
- no_support('postgres', 'not supported by database'),
+ no_support('postgresql', 'not supported by database'),
no_support('sybase', 'not supported by database'),
)
exclude('mysql', '<', (5, 0, 10), 'not supported by database'),
# huh? TODO: implement triggers for PG tests, remove this
- no_support('postgres', 'PG triggers need to be implemented for tests'),
+ no_support('postgresql', 'PG triggers need to be implemented for tests'),
)
def correlated_outer_joins(fn):
Also supports comparison to database version when provided with one or
more 3-tuples of dialect name, operator, and version specification::
- testing.against('mysql', 'postgres')
+ testing.against('mysql', 'postgresql')
testing.against(('mysql', '>=', (5, 0, 0))
"""
elif against(('mysql', '<', (5, 0))):
# ignore reflection of bogus db-generated DefaultClause()
pass
- elif not c.primary_key or not against('postgres', 'mssql'):
+ elif not c.primary_key or not against('postgresql', 'mssql'):
#print repr(c)
assert reflected_c.default is None, reflected_c.default
assertsql.asserter.clear_rules()
def assert_sql(self, db, callable_, list_, with_sequences=None):
- if with_sequences is not None and config.db.name in ('firebird', 'oracle', 'postgres'):
+ if with_sequences is not None and config.db.name in ('firebird', 'oracle', 'postgresql'):
rules = with_sequences
else:
rules = list_
"""
- __only_on__ = 'postgres+psycopg2'
+ __only_on__ = 'postgresql+psycopg2'
__skip_if__ = ((lambda: sys.version_info < (2, 4)), )
def test_baseline_0_setup(self):
global metadata
player = lambda: dbapi_session.player()
- engine = create_engine('postgres:///', creator=player)
+ engine = create_engine('postgresql:///', creator=player)
metadata = MetaData(engine)
@profiling.function_call_count(2991, {'2.4': 1796})
"""
- __only_on__ = 'postgres+psycopg2'
+ __only_on__ = 'postgresql+psycopg2'
__skip_if__ = ((lambda: sys.version_info < (2, 5)), ) # TODO: get 2.4 support
def test_baseline_0_setup(self):
global metadata, session
player = lambda: dbapi_session.player()
- engine = create_engine('postgres:///', creator=player)
+ engine = create_engine('postgresql:///', creator=player)
metadata = MetaData(engine)
session = sessionmaker()()
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy import exc, schema
-from sqlalchemy.dialects.postgres import base as postgres
+from sqlalchemy.dialects.postgresql import base as postgresql
from sqlalchemy.engine.strategies import MockEngineStrategy
from sqlalchemy.test import *
from sqlalchemy.sql import table, column
class SequenceTest(TestBase, AssertsCompiledSQL):
def test_basic(self):
seq = Sequence("my_seq_no_schema")
- dialect = postgres.PGDialect()
+ dialect = postgresql.PGDialect()
assert dialect.identifier_preparer.format_sequence(seq) == "my_seq_no_schema"
seq = Sequence("my_seq", schema="some_schema")
assert dialect.identifier_preparer.format_sequence(seq) == '"Some_Schema"."My_Seq"'
class CompileTest(TestBase, AssertsCompiledSQL):
- __dialect__ = postgres.dialect()
+ __dialect__ = postgresql.dialect()
def test_update_returning(self):
- dialect = postgres.dialect()
+ dialect = postgresql.dialect()
table1 = table('mytable',
column('myid', Integer),
column('name', String(128)),
column('description', String(128)),
)
- u = update(table1, values=dict(name='foo'), postgres_returning=[table1.c.myid, table1.c.name])
+ u = update(table1, values=dict(name='foo'), postgresql_returning=[table1.c.myid, table1.c.name])
self.assert_compile(u, "UPDATE mytable SET name=%(name)s RETURNING mytable.myid, mytable.name", dialect=dialect)
- u = update(table1, values=dict(name='foo'), postgres_returning=[table1])
+ u = update(table1, values=dict(name='foo'), postgresql_returning=[table1])
self.assert_compile(u, "UPDATE mytable SET name=%(name)s "\
"RETURNING mytable.myid, mytable.name, mytable.description", dialect=dialect)
- u = update(table1, values=dict(name='foo'), postgres_returning=[func.length(table1.c.name)])
+ u = update(table1, values=dict(name='foo'), postgresql_returning=[func.length(table1.c.name)])
self.assert_compile(u, "UPDATE mytable SET name=%(name)s RETURNING length(mytable.name)", dialect=dialect)
+
def test_insert_returning(self):
- dialect = postgres.dialect()
+ dialect = postgresql.dialect()
table1 = table('mytable',
column('myid', Integer),
column('name', String(128)),
column('description', String(128)),
)
- i = insert(table1, values=dict(name='foo'), postgres_returning=[table1.c.myid, table1.c.name])
+ i = insert(table1, values=dict(name='foo'), postgresql_returning=[table1.c.myid, table1.c.name])
self.assert_compile(i, "INSERT INTO mytable (name) VALUES (%(name)s) RETURNING mytable.myid, mytable.name", dialect=dialect)
- i = insert(table1, values=dict(name='foo'), postgres_returning=[table1])
+ i = insert(table1, values=dict(name='foo'), postgresql_returning=[table1])
self.assert_compile(i, "INSERT INTO mytable (name) VALUES (%(name)s) "\
"RETURNING mytable.myid, mytable.name, mytable.description", dialect=dialect)
- i = insert(table1, values=dict(name='foo'), postgres_returning=[func.length(table1.c.name)])
+ i = insert(table1, values=dict(name='foo'), postgresql_returning=[func.length(table1.c.name)])
self.assert_compile(i, "INSERT INTO mytable (name) VALUES (%(name)s) RETURNING length(mytable.name)", dialect=dialect)
+
+ @testing.uses_deprecated(r".*'postgres_returning' argument has been renamed.*")
+ def test_old_returning_names(self):
+ dialect = postgresql.dialect()
+ table1 = table('mytable',
+ column('myid', Integer),
+ column('name', String(128)),
+ column('description', String(128)),
+ )
+ u = update(table1, values=dict(name='foo'), postgres_returning=[table1.c.myid, table1.c.name])
+ self.assert_compile(u, "UPDATE mytable SET name=%(name)s RETURNING mytable.myid, mytable.name", dialect=dialect)
+
+ i = insert(table1, values=dict(name='foo'), postgres_returning=[table1.c.myid, table1.c.name])
+ self.assert_compile(i, "INSERT INTO mytable (name) VALUES (%(name)s) RETURNING mytable.myid, mytable.name", dialect=dialect)
+
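The deprecation behavior these tests rely on -- accepting the old ``postgres_returning`` name with a warning while ``postgresql_returning`` is canonical -- follows a common keyword-renaming pattern; a hypothetical sketch, not SQLAlchemy's actual code:

```python
import warnings

def rename_kwarg(kwargs, old, new):
    # Accept the deprecated keyword, warn, and move its value over to the
    # new name; an existing value under the new name takes precedence.
    if old in kwargs:
        warnings.warn("'%s' argument has been renamed to '%s'" % (old, new),
                      DeprecationWarning, stacklevel=2)
        kwargs.setdefault(new, kwargs.pop(old))
    return kwargs
```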
def test_create_partial_index(self):
+ tbl = Table('testtbl', MetaData(), Column('data',Integer))
+ idx = Index('test_idx1', tbl.c.data, postgresql_where=and_(tbl.c.data > 5, tbl.c.data < 10))
+
+ self.assert_compile(schema.CreateIndex(idx),
+ "CREATE INDEX test_idx1 ON testtbl (data) WHERE testtbl.data > 5 AND testtbl.data < 10", dialect=postgresql.dialect())
+
+ @testing.uses_deprecated(r".*'postgres_where' argument has been renamed.*")
+ def test_old_create_partial_index(self):
tbl = Table('testtbl', MetaData(), Column('data',Integer))
idx = Index('test_idx1', tbl.c.data, postgres_where=and_(tbl.c.data > 5, tbl.c.data < 10))
self.assert_compile(schema.CreateIndex(idx),
- "CREATE INDEX test_idx1 ON testtbl (data) WHERE testtbl.data > 5 AND testtbl.data < 10", dialect=postgres.dialect())
+ "CREATE INDEX test_idx1 ON testtbl (data) WHERE testtbl.data > 5 AND testtbl.data < 10", dialect=postgresql.dialect())
def test_extract(self):
t = table('t', column('col1'))
"FROM t" % field)
class ReturningTest(TestBase, AssertsExecutionResults):
- __only_on__ = 'postgres'
+ __only_on__ = 'postgresql'
- @testing.exclude('postgres', '<', (8, 2), '8.3+ feature')
+ @testing.exclude('postgresql', '<', (8, 2), '8.3+ feature')
def test_update_returning(self):
meta = MetaData(testing.db)
table = Table('tables', meta,
try:
table.insert().execute([{'persons': 5, 'full': False}, {'persons': 3, 'full': False}])
- result = table.update(table.c.persons > 4, dict(full=True), postgres_returning=[table.c.id]).execute()
+ result = table.update(table.c.persons > 4, dict(full=True), postgresql_returning=[table.c.id]).execute()
eq_(result.fetchall(), [(1,)])
result2 = select([table.c.id, table.c.full]).order_by(table.c.id).execute()
finally:
table.drop()
- @testing.exclude('postgres', '<', (8, 2), '8.3+ feature')
+ @testing.exclude('postgresql', '<', (8, 2), '8.3+ feature')
def test_insert_returning(self):
meta = MetaData(testing.db)
table = Table('tables', meta,
)
table.create()
try:
- result = table.insert(postgres_returning=[table.c.id]).execute({'persons': 1, 'full': False})
+ result = table.insert(postgresql_returning=[table.c.id]).execute({'persons': 1, 'full': False})
eq_(result.fetchall(), [(1,)])
- @testing.fails_on('postgres', 'Known limitation of psycopg2')
+ @testing.fails_on('postgresql', 'Known limitation of psycopg2')
def test_executemany():
# return value is documented as failing with psycopg2/executemany
- result2 = table.insert(postgres_returning=[table]).execute(
+ result2 = table.insert(postgresql_returning=[table]).execute(
[{'persons': 2, 'full': False}, {'persons': 3, 'full': True}])
eq_(result2.fetchall(), [(2, 2, False), (3,3,True)])
test_executemany()
- result3 = table.insert(postgres_returning=[(table.c.id*2).label('double_id')]).execute({'persons': 4, 'full': False})
+ result3 = table.insert(postgresql_returning=[(table.c.id*2).label('double_id')]).execute({'persons': 4, 'full': False})
eq_([dict(row) for row in result3], [{'double_id':8}])
result4 = testing.db.execute('insert into tables (id, persons, "full") values (5, 10, true) returning persons')
class InsertTest(TestBase, AssertsExecutionResults):
- __only_on__ = 'postgres'
+ __only_on__ = 'postgresql'
@classmethod
def setup_class(cls):
class DomainReflectionTest(TestBase, AssertsExecutionResults):
"Test PostgreSQL domains"
- __only_on__ = 'postgres'
+ __only_on__ = 'postgresql'
@classmethod
def setup_class(cls):
assert table.columns.answer.nullable, "Expected reflected column to be nullable."
def test_unknown_types(self):
- from sqlalchemy.databases import postgres
+ from sqlalchemy.databases import postgresql
- ischema_names = postgres.PGDialect.ischema_names
- postgres.PGDialect.ischema_names = {}
+ ischema_names = postgresql.PGDialect.ischema_names
+ postgresql.PGDialect.ischema_names = {}
try:
m2 = MetaData(testing.db)
assert_raises(exc.SAWarning, Table, "testtable", m2, autoload=True)
assert t3.c.answer.type.__class__ == sa.types.NullType
finally:
- postgres.PGDialect.ischema_names = ischema_names
+ postgresql.PGDialect.ischema_names = ischema_names
class MiscTest(TestBase, AssertsExecutionResults, AssertsCompiledSQL):
- __only_on__ = 'postgres'
+ __only_on__ = 'postgresql'
def test_date_reflection(self):
m1 = MetaData(testing.db)
class TimezoneTest(TestBase, AssertsExecutionResults):
"""Test timezone-aware datetimes.
- psycopg will return a datetime with a tzinfo attached to it, if postgres
+ psycopg2 will return a datetime with a tzinfo attached to it, if PostgreSQL
returns it. Python then will not let you compare a datetime with a tzinfo
to a datetime that doesn't have one. This test illustrates two ways to
have datetime types with and without timezone info.
"""
- __only_on__ = 'postgres'
+ __only_on__ = 'postgresql'
@classmethod
def setup_class(cls):
global tztable, notztable, metadata
metadata = MetaData(testing.db)
- # current_timestamp() in postgres is assumed to return TIMESTAMP WITH TIMEZONE
+ # current_timestamp() in PostgreSQL is assumed to return TIMESTAMP WITH TIME ZONE
tztable = Table('tztable', metadata,
Column("id", Integer, primary_key=True),
Column("date", DateTime(timezone=True), onupdate=func.current_timestamp()),
print notztable.select(tztable.c.id==1).execute().fetchone()
class ArrayTest(TestBase, AssertsExecutionResults):
- __only_on__ = 'postgres'
+ __only_on__ = 'postgresql'
@classmethod
def setup_class(cls):
arrtable = Table('arrtable', metadata,
Column('id', Integer, primary_key=True),
- Column('intarr', postgres.PGArray(Integer)),
- Column('strarr', postgres.PGArray(String(convert_unicode=True)), nullable=False)
+ Column('intarr', postgresql.PGArray(Integer)),
+ Column('strarr', postgresql.PGArray(String(convert_unicode=True)), nullable=False)
)
metadata.create_all()
def test_reflect_array_column(self):
metadata2 = MetaData(testing.db)
tbl = Table('arrtable', metadata2, autoload=True)
- assert isinstance(tbl.c.intarr.type, postgres.PGArray)
- assert isinstance(tbl.c.strarr.type, postgres.PGArray)
+ assert isinstance(tbl.c.intarr.type, postgresql.PGArray)
+ assert isinstance(tbl.c.strarr.type, postgresql.PGArray)
assert isinstance(tbl.c.intarr.type.item_type, Integer)
assert isinstance(tbl.c.strarr.type.item_type, String)
eq_(results[0]['intarr'], [1,2,3])
eq_(results[0]['strarr'], ['abc','def'])
- @testing.fails_on('postgres+pg8000', 'pg8000 has poor support for PG arrays')
+ @testing.fails_on('postgresql+pg8000', 'pg8000 has poor support for PG arrays')
def test_array_where(self):
arrtable.insert().execute(intarr=[1,2,3], strarr=['abc', 'def'])
arrtable.insert().execute(intarr=[4,5,6], strarr='ABC')
eq_(len(results), 1)
eq_(results[0]['intarr'], [1,2,3])
- @testing.fails_on('postgres+pg8000', 'pg8000 has poor support for PG arrays')
+ @testing.fails_on('postgresql+pg8000', 'pg8000 has poor support for PG arrays')
def test_array_concat(self):
arrtable.insert().execute(intarr=[1,2,3], strarr=['abc', 'def'])
results = select([arrtable.c.intarr + [4,5,6]]).execute().fetchall()
eq_(len(results), 1)
eq_(results[0][0], [1,2,3,4,5,6])
- @testing.fails_on('postgres+pg8000', 'pg8000 has poor support for PG arrays')
+ @testing.fails_on('postgresql+pg8000', 'pg8000 has poor support for PG arrays')
def test_array_subtype_resultprocessor(self):
arrtable.insert().execute(intarr=[4,5,6], strarr=[[u'm\xe4\xe4'], [u'm\xf6\xf6']])
arrtable.insert().execute(intarr=[1,2,3], strarr=[u'm\xe4\xe4', u'm\xf6\xf6'])
eq_(results[0]['strarr'], [u'm\xe4\xe4', u'm\xf6\xf6'])
eq_(results[1]['strarr'], [[u'm\xe4\xe4'], [u'm\xf6\xf6']])
- @testing.fails_on('postgres+pg8000', 'pg8000 has poor support for PG arrays')
+ @testing.fails_on('postgresql+pg8000', 'pg8000 has poor support for PG arrays')
def test_array_mutability(self):
class Foo(object): pass
footable = Table('foo', metadata,
Column('id', Integer, primary_key=True),
- Column('intarr', postgres.PGArray(Integer), nullable=True)
+ Column('intarr', postgresql.PGArray(Integer), nullable=True)
)
mapper(Foo, footable)
metadata.create_all()
sess.flush()
class TimestampTest(TestBase, AssertsExecutionResults):
- __only_on__ = 'postgres'
+ __only_on__ = 'postgresql'
def test_timestamp(self):
engine = testing.db
eq_(result[0], datetime.datetime(2007, 12, 25, 0, 0))
class ServerSideCursorsTest(TestBase, AssertsExecutionResults):
- __only_on__ = 'postgres+psycopg2'
+ __only_on__ = 'postgresql+psycopg2'
@classmethod
def setup_class(cls):
class SpecialTypesTest(TestBase, ComparesTables):
"""test DDL and reflection of PG-specific types """
- __only_on__ = 'postgres'
- __excluded_on__ = (('postgres', '<', (8, 3, 0)),)
+ __only_on__ = 'postgresql'
+ __excluded_on__ = (('postgresql', '<', (8, 3, 0)),)
@classmethod
def setup_class(cls):
metadata = MetaData(testing.db)
table = Table('sometable', metadata,
- Column('id', postgres.PGUuid, primary_key=True),
- Column('flag', postgres.PGBit),
- Column('addr', postgres.PGInet),
- Column('addr2', postgres.PGMacAddr),
- Column('addr3', postgres.PGCidr)
+ Column('id', postgresql.PGUuid, primary_key=True),
+ Column('flag', postgresql.PGBit),
+ Column('addr', postgresql.PGInet),
+ Column('addr2', postgresql.PGMacAddr),
+ Column('addr3', postgresql.PGCidr)
)
metadata.create_all()
class MatchTest(TestBase, AssertsCompiledSQL):
- __only_on__ = 'postgres'
- __excluded_on__ = (('postgres', '<', (8, 3, 0)),)
+ __only_on__ = 'postgresql'
+ __excluded_on__ = (('postgresql', '<', (8, 3, 0)),)
@classmethod
def setup_class(cls):
def teardown_class(cls):
metadata.drop_all()
- @testing.fails_on('postgres+pg8000', 'uses positional')
+ @testing.fails_on('postgresql+pg8000', 'uses positional')
def test_expression_pyformat(self):
self.assert_compile(matchtable.c.title.match('somstr'), "matchtable.title @@ to_tsquery(%(title_1)s)")
- @testing.fails_on('postgres+psycopg2', 'uses pyformat')
+ @testing.fails_on('postgresql+psycopg2', 'uses pyformat')
def test_expression_positional(self):
self.assert_compile(matchtable.c.title.match('somstr'), "matchtable.title @@ to_tsquery(%s)")
def test_conditional_constraint(self):
metadata, users, engine = self.metadata, self.users, self.engine
nonpg_mock = engines.mock_engine(dialect_name='sqlite')
- pg_mock = engines.mock_engine(dialect_name='postgres')
+ pg_mock = engines.mock_engine(dialect_name='postgresql')
constraint = CheckConstraint('a < b',name="my_test_constraint", table=users)
# by placing the constraint in an Add/Drop construct,
# the 'inline_ddl' flag is set to False
- AddConstraint(constraint, on='postgres').execute_at("after-create", users)
- DropConstraint(constraint, on='postgres').execute_at("before-drop", users)
+ AddConstraint(constraint, on='postgresql').execute_at("after-create", users)
+ DropConstraint(constraint, on='postgresql').execute_at("before-drop", users)
metadata.create_all(bind=nonpg_mock)
strings = " ".join(str(x) for x in nonpg_mock.mock)
assert res.fetchall() == [(1, "jack"), (2, "fred"), (3, "ed"), (4, "horse"), (5, "barney"), (6, "donkey"), (7, 'sally')]
conn.execute("delete from users")
- @testing.fails_on_everything_except('mysql+mysqldb', 'postgres')
- @testing.fails_on('postgres+zxjdbc', 'sprintf not supported')
+ @testing.fails_on_everything_except('mysql+mysqldb', 'postgresql')
+ @testing.fails_on('postgresql+zxjdbc', 'sprintf not supported')
# some psycopg2 versions bomb this.
def test_raw_sprintf(self):
for conn in (testing.db, testing.db.connect()):
# pyformat is supported for mysql, but skipping because a few driver
# versions have a bug that bombs out on this test. (1.2.2b3, 1.2.2c1, 1.2.2)
@testing.skip_if(lambda: testing.against('mysql+mysqldb'), 'db-api flaky')
- @testing.fails_on_everything_except('postgres+psycopg2')
+ @testing.fails_on_everything_except('postgresql+psycopg2')
def test_raw_python(self):
for conn in (testing.db, testing.db.connect()):
conn.execute("insert into users (user_id, user_name) values (%(id)s, %(name)s)", {'id':1, 'name':'jack'})
dbapi = MockDBAPI(foober='12', lala='18', fooz='somevalue')
e = create_engine(
- 'postgres://scott:tiger@somehost/test?foober=12&lala=18&fooz=somevalue',
+ 'postgresql://scott:tiger@somehost/test?foober=12&lala=18&fooz=somevalue',
module=dbapi,
_initialize=False
)
dbapi = MockDBAPI(foober=12, lala=18, hoho={'this':'dict'}, fooz='somevalue')
e = create_engine(
- 'postgres://scott:tiger@somehost/test?fooz=somevalue',
+ 'postgresql://scott:tiger@somehost/test?fooz=somevalue',
connect_args={'foober':12, 'lala':18, 'hoho':{'this':'dict'}},
module=dbapi,
_initialize=False
def test_coerce_config(self):
raw = r"""
[prefixed]
-sqlalchemy.url=postgres://scott:tiger@somehost/test?fooz=somevalue
+sqlalchemy.url=postgresql://scott:tiger@somehost/test?fooz=somevalue
sqlalchemy.convert_unicode=0
sqlalchemy.echo=false
sqlalchemy.echo_pool=1
sqlalchemy.pool_threadlocal=1
sqlalchemy.pool_timeout=10
[plain]
-url=postgres://scott:tiger@somehost/test?fooz=somevalue
+url=postgresql://scott:tiger@somehost/test?fooz=somevalue
convert_unicode=0
echo=0
echo_pool=1
ini.readfp(StringIO.StringIO(raw))
expected = {
- 'url': 'postgres://scott:tiger@somehost/test?fooz=somevalue',
+ 'url': 'postgresql://scott:tiger@somehost/test?fooz=somevalue',
'convert_unicode': 0,
'echo': False,
'echo_pool': True,
dbapi = MockDBAPI()
config = {
- 'sqlalchemy.url':'postgres://scott:tiger@somehost/test?fooz=somevalue',
+ 'sqlalchemy.url':'postgresql://scott:tiger@somehost/test?fooz=somevalue',
'sqlalchemy.pool_recycle':'50',
'sqlalchemy.echo':'true'
}
e = engine_from_config(config, module=dbapi)
assert e.pool._recycle == 50
- assert e.url == url.make_url('postgres://scott:tiger@somehost/test?fooz=somevalue')
+ assert e.url == url.make_url('postgresql://scott:tiger@somehost/test?fooz=somevalue')
assert e.echo is True
def test_custom(self):
def connect():
return dbapi.connect(foober=12, lala=18, fooz='somevalue', hoho={'this':'dict'})
- # start the postgres dialect, but put our mock DBAPI as the module instead of psycopg
- e = create_engine('postgres://', creator=connect, module=dbapi, _initialize=False)
+ # start the postgresql dialect, but put our mock DBAPI as the module instead of psycopg
+ e = create_engine('postgresql://', creator=connect, module=dbapi, _initialize=False)
c = e.connect()
def test_recycle(self):
dbapi = MockDBAPI(foober=12, lala=18, hoho={'this':'dict'}, fooz='somevalue')
- e = create_engine('postgres://', pool_recycle=472, module=dbapi, _initialize=False)
+ e = create_engine('postgresql://', pool_recycle=472, module=dbapi, _initialize=False)
assert e.pool._recycle == 472
def test_badargs(self):
assert_raises(ImportError, create_engine, "foobar://", module=MockDBAPI())
# bad arg
- assert_raises(TypeError, create_engine, 'postgres://', use_ansi=True, module=MockDBAPI())
+ assert_raises(TypeError, create_engine, 'postgresql://', use_ansi=True, module=MockDBAPI())
# bad arg
assert_raises(TypeError, create_engine, 'oracle://', lala=5, use_ansi=True, module=MockDBAPI())
- assert_raises(TypeError, create_engine, 'postgres://', lala=5, module=MockDBAPI())
+ assert_raises(TypeError, create_engine, 'postgresql://', lala=5, module=MockDBAPI())
assert_raises(TypeError, create_engine,'sqlite://', lala=5)
def test_poolargs(self):
"""test that connection pool args make it thru"""
- e = create_engine('postgres://', creator=None, pool_recycle=50, echo_pool=None, module=MockDBAPI(), _initialize=False)
+ e = create_engine('postgresql://', creator=None, pool_recycle=50, echo_pool=None, module=MockDBAPI(), _initialize=False)
assert e.pool._recycle == 50
# these args work for QueuePool
- e = create_engine('postgres://', max_overflow=8, pool_timeout=60, poolclass=tsa.pool.QueuePool, module=MockDBAPI())
+ e = create_engine('postgresql://', max_overflow=8, pool_timeout=60, poolclass=tsa.pool.QueuePool, module=MockDBAPI())
# but not SingletonThreadPool
assert_raises(TypeError, create_engine, 'sqlite://', max_overflow=8, pool_timeout=60, poolclass=tsa.pool.SingletonThreadPool)
dbapi = MockDBAPI()
# create engine using our current dburi
- db = tsa.create_engine('postgres://foo:bar@localhost/test', module=dbapi, _initialize=False)
+ db = tsa.create_engine('postgresql://foo:bar@localhost/test', module=dbapi, _initialize=False)
# monkeypatch disconnect checker
db.dialect.is_disconnect = lambda e: isinstance(e, MockDisconnect)
def test_basic(self):
try:
# the 'convert_unicode' should not get in the way of the reflection
- # process. reflecttable for oracle, postgres (others?) expect non-unicode
+ # process. reflecttable for oracle, postgresql (others?) expect non-unicode
# strings in result sets/bind params
bind = engines.utf8_engine(options={'convert_unicode':True})
metadata = MetaData(bind)
if testing.against('mysql+mysqldb'):
schema = testing.db.url.database
- elif testing.against('postgres'):
+ elif testing.against('postgresql'):
schema = 'public'
elif testing.against('sqlite'):
# Works for CREATE TABLE main.foo, SELECT FROM main.foo, etc.,
self._test_get_view_definition(schema=get_schema())
def _test_get_table_oid(self, table_name, schema=None):
- if testing.against('postgres'):
+ if testing.against('postgresql'):
meta = MetaData(testing.db)
(users, addresses) = createTables(meta, schema)
meta.create_all()
conn1.close()
# without auto-rollback in the connection pool's return() logic, this
- # deadlocks in Postgres, because conn1 is returned to the pool but
+ # deadlocks in PostgreSQL, because conn1 is returned to the pool but
# still has a lock on "deadlock_users".
# comment out the rollback in pool/ConnectionFairy._close() to see !
users.drop(conn2)
class ExplicitAutoCommitTest(TestBase):
"""test the 'autocommit' flag on select() and text() objects.
- Requires Postgres so that we may define a custom function which modifies the database.
+ Requires PostgreSQL so that we may define a custom function which modifies the database.
"""
- __only_on__ = 'postgres'
+ __only_on__ = 'postgresql'
@classmethod
def setup_class(cls):
def visit_type(type, compiler, **kw):
return "SQLITE_FOO"
- @compiles(MyType, 'postgres')
+ @compiles(MyType, 'postgresql')
def visit_type(type, compiler, **kw):
return "POSTGRES_FOO"
from sqlalchemy.dialects.sqlite import base as sqlite
- from sqlalchemy.dialects.postgres import base as postgres
+ from sqlalchemy.dialects.postgresql import base as postgresql
self.assert_compile(
MyType(),
self.assert_compile(
MyType(),
"POSTGRES_FOO",
- dialect=postgres.dialect()
+ dialect=postgresql.dialect()
)
# test pk with one column NULL
# TODO: can't seem to get NULL in for a PK value
- # in either mysql or postgres, autoincrement=False etc.
+ # in either mysql or postgresql, autoincrement=False etc.
# notwithstanding
@testing.fails_on_everything_except("sqlite")
def go():
@testing.requires.unicode_connections
def test_unicode(self):
"""test that Query.get properly sets up the type for the bind parameter. using unicode would normally fail
- on postgres, mysql and oracle unless it is converted to an encoded string"""
+ on postgresql, mysql and oracle unless it is converted to an encoded string"""
metadata = MetaData(engines.utf8_engine())
table = Table('unicode_data', metadata,
@testing.fails_on('mssql', 'FIXME: unknown')
@testing.fails_on('oracle', "Oracle doesn't support boolean expressions as columns")
- @testing.fails_on('postgres+pg8000', "pg8000 parses the SQL itself before passing on to PG, doesn't parse this")
+ @testing.fails_on('postgresql+pg8000', "pg8000 parses the SQL itself before passing on to PG, doesn't parse this")
def test_values_with_boolean_selects(self):
"""Tests a values clause that works with select boolean evaluations"""
sess = create_session()
@testing.fails_on_everything_except('sqlite', 'mysql')
@testing.resolve_artifact_names
def test_nullPKsOK_BtoA(self):
- # postgres cant handle a nullable PK column...?
+ # PostgreSQL can't handle a nullable PK column...?
tableC = Table('tablec', tableA.metadata,
Column('id', Integer, primary_key=True),
Column('a_id', Integer, ForeignKey('tableA.id'),
@classmethod
def define_tables(cls, metadata):
- use_string_defaults = testing.against('postgres', 'oracle', 'sqlite', 'mssql')
+ use_string_defaults = testing.against('postgresql', 'oracle', 'sqlite', 'mssql')
if use_string_defaults:
hohotype = String(30)
Column('id', Integer, primary_key=True, test_needs_autoincrement=True),
Column('data', String(50)))
- if testing.against('postgres', 'oracle'):
+ if testing.against('postgresql', 'oracle'):
dt.append_column(
Column('secondary_id', Integer, sa.Sequence('sec_id_seq'),
unique=True))
# todo: on 8.3 at least, the failed commit seems to close the cursor?
# needs investigation. leaving in the DDL above now to help verify
# that the new deferrable support on FK isn't involved in this issue.
- if testing.against('postgres'):
+ if testing.against('postgresql'):
t1.bind.engine.dispose()
from sqlalchemy.engine import ddl
from sqlalchemy.test.testing import eq_
from sqlalchemy.test.assertsql import AllOf, RegexSQL, ExactSQL, CompiledSQL
-from sqlalchemy.dialects.postgres import base as postgres
+from sqlalchemy.dialects.postgresql import base as postgresql
class ConstraintTest(TestBase, AssertsExecutionResults, AssertsCompiledSQL):
Column('b', Integer, ForeignKey('t.a', name='fk_tb')), # to ensure create ordering ...
)
- e = engines.mock_engine(dialect_name='postgres')
+ e = engines.mock_engine(dialect_name='postgresql')
m.create_all(e)
m.drop_all(e)
# since its a "branched" connection
conn.close()
- use_function_defaults = testing.against('postgres', 'mssql', 'maxdb')
+ use_function_defaults = testing.against('postgresql', 'mssql', 'maxdb')
is_oracle = testing.against('oracle')
# select "count(1)" returns different results on different DBs also
l = l.fetchone()
eq_(55, l['col3'])
- @testing.fails_on_everything_except('postgres')
+ @testing.fails_on_everything_except('postgresql')
def test_passive_override(self):
"""
- Primarily for postgres, tests that when we get a primary key column
+ Primarily for PostgreSQL, tests that when we get a primary key column
back from reflecting a table which has a default value on it, we
pre-execute that DefaultClause upon insert, even though DefaultClause
- says "let the database execute this", because in postgres we must have
+ says "let the database execute this", because in postgresql we must have
all the primary key values in memory before insert; otherwise we can't
locate the just inserted row.
"""
- # TODO: move this to dialect/postgres
+ # TODO: move this to dialect/postgresql
try:
meta = MetaData(testing.db)
testing.db.execute("""
try:
- # postgres + mysql strict will fail on first row,
+ # postgresql + mysql strict will fail on first row,
# mysql in legacy mode fails on second row
nonai.insert().execute(data='row 1')
nonai.insert().execute(data='row 2')
for ret, dialect in [
('CURRENT_TIMESTAMP', sqlite.dialect()),
- ('now()', postgres.dialect()),
+ ('now()', postgresql.dialect()),
('now()', mysql.dialect()),
('CURRENT_TIMESTAMP', oracle.dialect())
]:
for ret, dialect in [
('random()', sqlite.dialect()),
- ('random()', postgres.dialect()),
+ ('random()', postgresql.dialect()),
('rand()', mysql.dialect()),
('random', oracle.dialect())
]:
finally:
meta.drop_all()
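The per-dialect strings asserted in the loops above amount to a name-translation table for generic functions; a simplified sketch (SQLAlchemy actually dispatches through each dialect's compiler, so treat the fallback rule here as an assumption):

```python
# Hypothetical translation table mirroring the expected strings above:
# func.now() and func.random() render differently per dialect.
_GENERIC_FUNCTIONS = {
    'now': {'sqlite': 'CURRENT_TIMESTAMP', 'postgresql': 'now()',
            'mysql': 'now()', 'oracle': 'CURRENT_TIMESTAMP'},
    'random': {'sqlite': 'random()', 'postgresql': 'random()',
               'mysql': 'rand()', 'oracle': 'random'},
}

def render_generic_function(name, dialect_name):
    # Fall back to "name()" for functions with no special per-dialect form.
    return _GENERIC_FUNCTIONS.get(name, {}).get(dialect_name, name + '()')
```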
- @testing.fails_on_everything_except('postgres')
+ @testing.fails_on_everything_except('postgresql')
def test_as_from(self):
# TODO: shouldn't this work on oracle too?
x = testing.db.func.current_date().execute().scalar()
eq_(select([users.c.user_id]).where(users.c.user_name.ilike('TWO')).execute().fetchall(), [(2, )])
- if testing.against('postgres'):
+ if testing.against('postgresql'):
eq_(select([users.c.user_id]).where(users.c.user_name.like('one')).execute().fetchall(), [(1, )])
eq_(select([users.c.user_id]).where(users.c.user_name.like('TWO')).execute().fetchall(), [])
class PercentSchemaNamesTest(TestBase):
"""tests using percent signs, spaces in table and column names.
- Doesn't pass for mysql, postgres, but this is really a
+ Doesn't pass for mysql, postgresql, but this is really a
SQLAlchemy bug - we should be escaping out %% signs for this
operation the same way we do for text() and column labels.
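The escaping the docstring refers to can be sketched as: under the ``format``/``pyformat`` paramstyles, a literal percent sign must be doubled so the DBAPI does not read it as a placeholder (hypothetical helper):

```python
def escape_percents(sql, paramstyle):
    # 'format' and 'pyformat' DBAPIs (e.g. psycopg2, MySQLdb) treat '%'
    # as the start of a placeholder, so literal percents are doubled.
    if paramstyle in ('format', 'pyformat'):
        return sql.replace('%', '%%')
    return sql
```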
@classmethod
@testing.crashes('mysql', 'mysqldb calls name % (params)')
- @testing.crashes('postgres', 'postgres calls name % (params)')
+ @testing.crashes('postgresql', 'postgresql calls name % (params)')
def setup_class(cls):
global percent_table, metadata
metadata = MetaData(testing.db)
@classmethod
@testing.crashes('mysql', 'mysqldb calls name % (params)')
- @testing.crashes('postgres', 'postgres calls name % (params)')
+ @testing.crashes('postgresql', 'postgresql calls name % (params)')
def teardown_class(cls):
metadata.drop_all()
@testing.crashes('mysql', 'mysqldb calls name % (params)')
- @testing.crashes('postgres', 'postgres calls name % (params)')
+ @testing.crashes('postgresql', 'postgresql calls name % (params)')
def test_roundtrip(self):
percent_table.insert().execute(
{'percent%':5, '%(oneofthese)s':7, 'spaces % more spaces':12},
def testlabels(self):
"""test the quoting of labels.
- if labels arent quoted, a query in postgres in particular will fail since it produces:
+ if labels aren't quoted, a query in postgresql in particular will fail since it produces:
SELECT LaLa.lowercase, LaLa."UPPERCASE", LaLa."MixedCase", LaLa."ASC"
FROM (SELECT DISTINCT "WorstCase1".lowercase AS lowercase, "WorstCase1"."UPPERCASE" AS UPPERCASE, "WorstCase1"."MixedCase" AS MixedCase, "WorstCase1"."ASC" AS ASC \nFROM "WorstCase1") AS LaLa
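The quoting rule being exercised can be sketched on its own: PostgreSQL folds unquoted identifiers to lower case, so any label that is not already all-lowercase, or that collides with a reserved word such as ASC, must be double-quoted to keep its meaning. A hypothetical helper for illustration only, not SQLAlchemy's actual IdentifierPreparer:

```python
RESERVED = {'asc', 'desc', 'select', 'from', 'where'}  # tiny illustrative subset

def quote_label(name):
    """Double-quote a label when PostgreSQL's case folding or a
    reserved word would otherwise change how it resolves."""
    if name != name.lower() or name.lower() in RESERVED:
        return '"%s"' % name
    return name

# Matches the quoting pattern in the SELECT above:
assert quote_label('lowercase') == 'lowercase'
assert quote_label('UPPERCASE') == '"UPPERCASE"'
assert quote_label('MixedCase') == '"MixedCase"'
assert quote_label('ASC') == '"ASC"'
```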
(~table1.c.myid.like('somstr', escape='\\'), "mytable.myid NOT LIKE :myid_1 ESCAPE '\\'", None),
(table1.c.myid.ilike('somstr', escape='\\'), "lower(mytable.myid) LIKE lower(:myid_1) ESCAPE '\\'", None),
(~table1.c.myid.ilike('somstr', escape='\\'), "lower(mytable.myid) NOT LIKE lower(:myid_1) ESCAPE '\\'", None),
- (table1.c.myid.ilike('somstr', escape='\\'), "mytable.myid ILIKE %(myid_1)s ESCAPE '\\'", postgres.PGDialect()),
- (~table1.c.myid.ilike('somstr', escape='\\'), "mytable.myid NOT ILIKE %(myid_1)s ESCAPE '\\'", postgres.PGDialect()),
+ (table1.c.myid.ilike('somstr', escape='\\'), "mytable.myid ILIKE %(myid_1)s ESCAPE '\\'", postgresql.PGDialect()),
+ (~table1.c.myid.ilike('somstr', escape='\\'), "mytable.myid NOT ILIKE %(myid_1)s ESCAPE '\\'", postgresql.PGDialect()),
(table1.c.name.ilike('%something%'), "lower(mytable.name) LIKE lower(:name_1)", None),
- (table1.c.name.ilike('%something%'), "mytable.name ILIKE %(name_1)s", postgres.PGDialect()),
+ (table1.c.name.ilike('%something%'), "mytable.name ILIKE %(name_1)s", postgresql.PGDialect()),
(~table1.c.name.ilike('%something%'), "lower(mytable.name) NOT LIKE lower(:name_1)", None),
- (~table1.c.name.ilike('%something%'), "mytable.name NOT ILIKE %(name_1)s", postgres.PGDialect()),
+ (~table1.c.name.ilike('%something%'), "mytable.name NOT ILIKE %(name_1)s", postgresql.PGDialect()),
]:
self.assert_compile(expr, check, dialect=dialect)
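The ilike() expectations above encode two strategies for case-insensitive matching: PostgreSQL has a native ILIKE operator, while the generic compiler falls back to wrapping both sides in lower(). A rough sketch of that dispatch, assuming a simplified dialect switch rather than the real compiler machinery:

```python
def compile_ilike(col, bind, dialect='default', negate=False):
    """Render an ilike() expression the way the test cases expect:
    native ILIKE on postgresql, lower()/LIKE everywhere else."""
    not_ = 'NOT ' if negate else ''
    if dialect == 'postgresql':
        return '%s %sILIKE %s' % (col, not_, bind)
    return 'lower(%s) %sLIKE lower(%s)' % (col, not_, bind)

assert compile_ilike('mytable.name', ':name_1') == \
    'lower(mytable.name) LIKE lower(:name_1)'
assert compile_ilike('mytable.name', '%(name_1)s', 'postgresql') == \
    'mytable.name ILIKE %(name_1)s'
assert compile_ilike('mytable.name', '%(name_1)s', 'postgresql', negate=True) == \
    'mytable.name NOT ILIKE %(name_1)s'
```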
(table1.c.myid.match('somstr'), "mytable.myid MATCH ?", sqlite.SQLiteDialect()),
(table1.c.myid.match('somstr'), "MATCH (mytable.myid) AGAINST (%s IN BOOLEAN MODE)", mysql.dialect()),
(table1.c.myid.match('somstr'), "CONTAINS (mytable.myid, :myid_1)", mssql.dialect()),
- (table1.c.myid.match('somstr'), "mytable.myid @@ to_tsquery(%(myid_1)s)", postgres.dialect()),
+ (table1.c.myid.match('somstr'), "mytable.myid @@ to_tsquery(%(myid_1)s)", postgresql.dialect()),
(table1.c.myid.match('somstr'), "CONTAINS (mytable.myid, :myid_1)", oracle.dialect()),
]:
self.assert_compile(expr, check, dialect=dialect)
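match() diverges even more sharply per backend, as the cases above show; each dialect owns its full-text syntax outright. One way to picture those expectations is a per-dialect template table (an illustrative data structure, not how the compiler is actually organized):

```python
# Per-dialect rendering of col.match(value), per the test expectations above.
MATCH_TEMPLATES = {
    'sqlite':     '{col} MATCH {bind}',
    'mysql':      'MATCH ({col}) AGAINST ({bind} IN BOOLEAN MODE)',
    'mssql':      'CONTAINS ({col}, {bind})',
    'postgresql': '{col} @@ to_tsquery({bind})',
    'oracle':     'CONTAINS ({col}, {bind})',
}

def compile_match(col, bind, dialect):
    return MATCH_TEMPLATES[dialect].format(col=col, bind=bind)

assert compile_match('mytable.myid', '?', 'sqlite') == 'mytable.myid MATCH ?'
assert compile_match('mytable.myid', '%(myid_1)s', 'postgresql') == \
    'mytable.myid @@ to_tsquery(%(myid_1)s)'
```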
params={},
)
- dialect = postgres.dialect()
+ dialect = postgresql.dialect()
self.assert_compile(
text("select * from foo where lala=:bar and hoho=:whee", bindparams=[bindparam('bar',4), bindparam('whee',7)]),
"select * from foo where lala=%(bar)s and hoho=%(whee)s",
else:
eq_(str(sel), "SELECT casttest.id, casttest.v1, casttest.v2, casttest.ts, CAST(casttest.v1 AS NUMERIC) AS anon_1 \nFROM casttest")
- # first test with Postgres engine
- check_results(postgres.dialect(), ['NUMERIC', 'NUMERIC(12, 9)', 'DATE', 'TEXT', 'VARCHAR(20)'], '%(param_1)s')
+ # first test with PostgreSQL engine
+ check_results(postgresql.dialect(), ['NUMERIC', 'NUMERIC(12, 9)', 'DATE', 'TEXT', 'VARCHAR(20)'], '%(param_1)s')
# then the Oracle engine
check_results(oracle.dialect(), ['NUMERIC', 'NUMERIC(12, 9)', 'DATE', 'CLOB', 'VARCHAR(20)'], ':param_1')
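Beyond the type names (generic Text renders as TEXT on PostgreSQL but CLOB on Oracle), the two check_results() calls also differ in bind-parameter style: '%(param_1)s' versus ':param_1'. That follows from the DBAPI paramstyles of PEP 249 (psycopg2 uses pyformat, cx_oracle uses named). A minimal sketch of the distinction:

```python
def render_bind(name, paramstyle):
    """Render one bind parameter in a given PEP 249 paramstyle."""
    if paramstyle == 'pyformat':   # e.g. psycopg2
        return '%%(%s)s' % name
    if paramstyle == 'named':      # e.g. cx_oracle
        return ':%s' % name
    if paramstyle == 'qmark':      # e.g. pysqlite
        return '?'
    raise ValueError('unsupported paramstyle: %s' % paramstyle)

assert render_bind('param_1', 'pyformat') == '%(param_1)s'
assert render_bind('param_1', 'named') == ':param_1'
```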
for dialect in [
oracle.dialect(),
mysql.dialect(),
- postgres.dialect(),
+ postgresql.dialect(),
sqlite.dialect(),
sybase.dialect(),
informix.dialect(),
Column('user_date', Date),
Column('user_time', Time)]
- if testing.against('sqlite', 'postgres'):
+ if testing.against('sqlite', 'postgresql'):
insert_data.append(
(11, 'historic',
datetime.datetime(1850, 11, 10, 11, 52, 35, datetime_micro),
assert isinstance(engine.dialect.identifier_preparer.format_sequence(Sequence('special_col')), unicode)
# now execute, run the sequence. it should run in u"Special_col.nextid" or similar as
- # a unicode object; cx_oracle asserts that this is None or a String (postgres lets it pass thru).
+ # a unicode object; cx_oracle asserts that this is None or a String (postgresql lets it pass through).
# ensure that base.DefaultRunner is encoding.
t1.insert().execute(data='foo')
finally: