.. changelog::
:version: 1.1.0b1
+ .. change::
+ :tags: bug, postgresql
+ :tickets: 3499
+
+ The "hashable" flag on special datatypes such as :class:`.postgresql.ARRAY`,
+ :class:`.postgresql.JSON` and :class:`.postgresql.HSTORE` is now
+        set to False, which allows these types to be fetched in ORM
+        queries that include entities within the row.
+
+ .. seealso::
+
+ :ref:`change_3499`
+
+ :ref:`change_3499_postgresql`
+
+ .. change::
+ :tags: bug, postgresql
+ :tickets: 3487
+
+ The Postgresql :class:`.postgresql.ARRAY` type now supports multidimensional
+ indexed access, e.g. expressions such as ``somecol[5][6]`` without
+ any need for explicit casts or type coercions, provided
+ that the :paramref:`.postgresql.ARRAY.dimensions` parameter is set to the
+ desired number of dimensions.
+
+ .. seealso::
+
+ :ref:`change_3503`
+
+ .. change::
+ :tags: bug, postgresql
+ :tickets: 3503
+
+        The return type for the :class:`.postgresql.JSON` and
+        :class:`.postgresql.JSONB` types when using indexed access has been
+        fixed to work like Postgresql itself, returning an expression
+        that is itself of type :class:`.postgresql.JSON` or
+        :class:`.postgresql.JSONB`.  Previously, the accessor would return
+        :class:`.NullType`, which disallowed the use of any subsequent
+        JSON-like operators.
+
+ .. seealso::
+
+ :ref:`change_3503`
+
+ .. change::
+ :tags: bug, postgresql
+ :tickets: 3503
+
+ The :class:`.postgresql.JSON`, :class:`.postgresql.JSONB` and
+ :class:`.postgresql.HSTORE` datatypes now allow full control over the
+ return type from an indexed textual access operation, either ``column[someindex].astext``
+ for a JSON type or ``column[someindex]`` for an HSTORE type,
+ via the :paramref:`.postgresql.JSON.astext_type` and
+ :paramref:`.postgresql.HSTORE.text_type` parameters.
+
+ .. seealso::
+
+ :ref:`change_3503`
+
+
+ .. change::
+ :tags: bug, postgresql
+ :tickets: 3503
+
+ The :attr:`.postgresql.JSON.Comparator.astext` modifier no longer
+ calls upon :meth:`.ColumnElement.cast` implicitly, as PG's JSON/JSONB
+ types allow cross-casting between each other as well. Code that
+ makes use of :meth:`.ColumnElement.cast` on JSON indexed access,
+ e.g. ``col[someindex].cast(Integer)``, will need to be changed
+ to call :attr:`.postgresql.JSON.Comparator.astext` explicitly.
+
+ .. seealso::
+
+ :ref:`change_3503_cast`
+
+
.. change::
:tags: bug, sql
:tickets: 2528
New Features and Improvements - ORM
===================================
+.. _change_3499:
+
+Changes regarding "unhashable" types
+------------------------------------
+
+The :class:`.Query` object has a well-known behavior of "deduping"
+returned rows that contain at least one ORM-mapped entity (e.g., a
+full mapped object, as opposed to individual column values). The
+primary purpose of this is so that the handling of entities works
+smoothly in conjunction with the identity map, including
+accommodating the duplicate entities normally represented within
+joined eager loading, as well as when joins are used for the purposes
+of filtering on additional columns.
+
+This deduplication relies upon the hashability of the elements within
+the row. With the introduction of Postgresql's special types like
+:class:`.postgresql.ARRAY`, :class:`.postgresql.HSTORE` and
+:class:`.postgresql.JSON`, the experience of encountering unhashable
+types within rows, and the problems that result, is more prevalent
+than it was previously.
+
+In fact, SQLAlchemy has included a flag on datatypes since version 0.8
+noting them as "unhashable"; however, this flag was not used consistently
+on built-in types.  As described in :ref:`change_3499_postgresql`, this
+flag is now set consistently for all of Postgresql's "structural" types.
+
+The "unhashable" flag is also set on the :class:`.NullType` type,
+as :class:`.NullType` is used to refer to any expression of unknown
+type.
+
+Additionally, the treatment of a so-called "unhashable" type is slightly
+different than it has been in previous releases; internally, we use the
+``id()`` function to get a "hash value" from these structures, just as we
+would for any ordinary mapped object.  This replaces the previous
+approach, which applied a counter to the object.
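+
+For example, a query that mixes full entities with unhashable column
+values now proceeds without error; a minimal sketch, assuming a
+hypothetical mapped class ``Thing`` whose ``data`` column is of type
+:class:`.postgresql.ARRAY`::
+
+    # rows contain both the Thing entity and the unhashable list
+    # value; deduplication now uses id() for the list
+    for thing, data in session.query(Thing, Thing.data):
+        print(thing, data)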
+
+:ticket:`3499`
New Features and Improvements - Core
====================================
Dialect Improvements and Changes - Postgresql
=============================================
+.. _change_3499_postgresql:
+
+ARRAY and JSON types now correctly specify "unhashable"
+-------------------------------------------------------
+
+As described in :ref:`change_3499`, the ORM relies upon being able to
+produce a hash function for column values when a query's selected entities
+mixes full ORM entities with column expressions. The ``hashable=False``
+flag is now correctly set on all of PG's "data structure" types, including
+:class:`.ARRAY` and :class:`.JSON`. The :class:`.JSONB` and :class:`.HSTORE`
+types already included this flag. For :class:`.ARRAY`,
+this is conditional upon the :paramref:`.postgresql.ARRAY.as_tuple`
+flag; however, it should no longer be necessary to set this flag
+in order to have an array value present in a composed ORM row.
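+
+A minimal sketch of a query that previously required
+``ARRAY(Integer, as_tuple=True)`` in order to succeed; ``Widget`` is a
+hypothetical mapped class with an ``arrdata`` column of type
+:class:`.postgresql.ARRAY`::
+
+    # the list-valued "arrdata" column may now appear in the result
+    # row alongside the Widget entity without as_tuple=True
+    rows = session.query(Widget, Widget.arrdata).all()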
+
+.. seealso::
+
+ :ref:`change_3499`
+
+ :ref:`change_3503`
+
+:ticket:`3499`
+
+.. _change_3503:
+
+Correct SQL Types are Established from Indexed Access of ARRAY, JSON, HSTORE
+-----------------------------------------------------------------------------
+
+For all three of :class:`~.postgresql.ARRAY`, :class:`~.postgresql.JSON` and :class:`.HSTORE`,
+the SQL type assigned to the expression returned by indexed access, e.g.
+``col[someindex]``, should be correct in all cases.
+
+This includes:
+
+* The SQL type assigned to indexed access of an :class:`~.postgresql.ARRAY` takes into
+ account the number of dimensions configured. An :class:`~.postgresql.ARRAY` with three
+ dimensions will return a SQL expression with a type of :class:`~.postgresql.ARRAY` of
+ one less dimension. Given a column with type ``ARRAY(Integer, dimensions=3)``,
+ we can now perform this expression::
+
+ int_expr = col[5][6][7] # returns an Integer expression object
+
+ Previously, the indexed access to ``col[5]`` would return an expression of
+ type :class:`.Integer` where we could no longer perform indexed access
+ for the remaining dimensions, unless we used :func:`.cast` or :func:`.type_coerce`.
+
+* The :class:`~.postgresql.JSON` and :class:`~.postgresql.JSONB` types now mirror what Postgresql
+ itself does for indexed access. This means that all indexed access for
+  a :class:`~.postgresql.JSON` or :class:`~.postgresql.JSONB` type returns an expression
+  that is *always* itself of type :class:`~.postgresql.JSON` or :class:`~.postgresql.JSONB`, unless the
+ :attr:`~.postgresql.JSON.Comparator.astext` modifier is used. This means that whether
+ the indexed access of the JSON structure ultimately refers to a string,
+ list, number, or other JSON structure, Postgresql always considers it
+ to be JSON itself unless it is explicitly cast differently. Like
+ the :class:`~.postgresql.ARRAY` type, this means that it is now straightforward
+ to produce JSON expressions with multiple levels of indexed access::
+
+ json_expr = json_col['key1']['attr1'][5]
+
+* The "textual" type that is returned by indexed access of :class:`.HSTORE`
+ as well as the "textual" type that is returned by indexed access of
+ :class:`~.postgresql.JSON` and :class:`~.postgresql.JSONB` in conjunction with the
+ :attr:`~.postgresql.JSON.Comparator.astext` modifier is now configurable; it defaults
+ to :class:`.Text` in both cases but can be set to a user-defined
+ type using the :paramref:`.postgresql.JSON.astext_type` or
+ :paramref:`.postgresql.HSTORE.text_type` parameters.
+
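+A minimal sketch of the new parameters; the column names here are
+hypothetical::
+
+    from sqlalchemy import Column, Text
+    from sqlalchemy.dialects.postgresql import HSTORE, JSON
+
+    # "astext" on indexed access of this column returns type Text
+    Column('jdata', JSON(astext_type=Text()))
+
+    # indexed access of this column returns type Text
+    Column('hdata', HSTORE(text_type=Text()))
+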
+.. seealso::
+
+ :ref:`change_3503_cast`
+
+:ticket:`3503`
+:ticket:`3487`
+
+.. _change_3503_cast:
+
+The JSON cast() operation now requires ``.astext`` be called explicitly
+------------------------------------------------------------------------
+
+As part of the changes in :ref:`change_3503`, the workings of the
+:meth:`.ColumnElement.cast` operator on :class:`.postgresql.JSON` and
+:class:`.postgresql.JSONB` no longer implicitly invoke the
+:attr:`.JSON.Comparator.astext` modifier; Postgresql's JSON/JSONB types
+support CAST operations to each other without the "astext" aspect.
+
+This means that in most cases, an application that was doing this::
+
+ expr = json_col['somekey'].cast(Integer)
+
+will now need to change to this::
+
+ expr = json_col['somekey'].astext.cast(Integer)
+
+
+
Dialect Improvements and Changes - MySQL
=============================================
.. autoclass:: Concatenable
:members:
- :inherited-members:
+.. autoclass:: Indexable
+ :members:
+
.. autoclass:: NullType
from .base import \
INTEGER, BIGINT, SMALLINT, VARCHAR, CHAR, TEXT, NUMERIC, FLOAT, REAL, \
INET, CIDR, UUID, BIT, MACADDR, OID, DOUBLE_PRECISION, TIMESTAMP, TIME, \
- DATE, BYTEA, BOOLEAN, INTERVAL, ARRAY, ENUM, dialect, array, Any, All, \
- TSVECTOR, DropEnumType
+ DATE, BYTEA, BOOLEAN, INTERVAL, ENUM, dialect, TSVECTOR, DropEnumType, \
+ CreateEnumType
from .constraints import ExcludeConstraint
from .hstore import HSTORE, hstore
-from .json import JSON, JSONElement, JSONB
+from .json import JSON, JSONB
+from .array import array, ARRAY, Any, All
+
from .ranges import INT4RANGE, INT8RANGE, NUMRANGE, DATERANGE, TSRANGE, \
TSTZRANGE
'DOUBLE_PRECISION', 'TIMESTAMP', 'TIME', 'DATE', 'BYTEA', 'BOOLEAN',
'INTERVAL', 'ARRAY', 'ENUM', 'dialect', 'Any', 'All', 'array', 'HSTORE',
'hstore', 'INT4RANGE', 'INT8RANGE', 'NUMRANGE', 'DATERANGE',
- 'TSRANGE', 'TSTZRANGE', 'json', 'JSON', 'JSONB', 'JSONElement',
- 'DropEnumType'
+ 'TSRANGE', 'TSTZRANGE', 'json', 'JSON', 'JSONB',
+ 'DropEnumType', 'CreateEnumType', 'ExcludeConstraint'
)
--- /dev/null
+# postgresql/array.py
+# Copyright (C) 2005-2015 the SQLAlchemy authors and contributors
+# <see AUTHORS file>
+#
+# This module is part of SQLAlchemy and is released under
+# the MIT License: http://www.opensource.org/licenses/mit-license.php
+
+from .base import ischema_names
+from ...sql import expression, operators
+from ... import types as sqltypes
+
+
+class Any(expression.ColumnElement):
+
+ """Represent the clause ``left operator ANY (right)``. ``right`` must be
+ an array expression.
+
+ .. seealso::
+
+ :class:`.postgresql.ARRAY`
+
+ :meth:`.postgresql.ARRAY.Comparator.any` - ARRAY-bound method
+
+ """
+ __visit_name__ = 'any'
+
+ def __init__(self, left, right, operator=operators.eq):
+ self.type = sqltypes.Boolean()
+ self.left = expression._literal_as_binds(left)
+ self.right = right
+ self.operator = operator
+
+
+class All(expression.ColumnElement):
+
+ """Represent the clause ``left operator ALL (right)``. ``right`` must be
+ an array expression.
+
+ .. seealso::
+
+ :class:`.postgresql.ARRAY`
+
+ :meth:`.postgresql.ARRAY.Comparator.all` - ARRAY-bound method
+
+ """
+ __visit_name__ = 'all'
+
+ def __init__(self, left, right, operator=operators.eq):
+ self.type = sqltypes.Boolean()
+ self.left = expression._literal_as_binds(left)
+ self.right = right
+ self.operator = operator
+
+
+class array(expression.Tuple):
+
+ """A Postgresql ARRAY literal.
+
+ This is used to produce ARRAY literals in SQL expressions, e.g.::
+
+ from sqlalchemy.dialects.postgresql import array
+ from sqlalchemy.dialects import postgresql
+ from sqlalchemy import select, func
+
+ stmt = select([
+ array([1,2]) + array([3,4,5])
+ ])
+
+        print(stmt.compile(dialect=postgresql.dialect()))
+
+ Produces the SQL::
+
+ SELECT ARRAY[%(param_1)s, %(param_2)s] ||
+        ARRAY[%(param_3)s, %(param_4)s, %(param_5)s] AS anon_1
+
+ An instance of :class:`.array` will always have the datatype
+ :class:`.ARRAY`. The "inner" type of the array is inferred from
+ the values present, unless the ``type_`` keyword argument is passed::
+
+ array(['foo', 'bar'], type_=CHAR)
+
+ .. versionadded:: 0.8 Added the :class:`~.postgresql.array` literal type.
+
+ See also:
+
+ :class:`.postgresql.ARRAY`
+
+ """
+ __visit_name__ = 'array'
+
+ def __init__(self, clauses, **kw):
+ super(array, self).__init__(*clauses, **kw)
+ self.type = ARRAY(self.type)
+
+ def _bind_param(self, operator, obj):
+ return array([
+ expression.BindParameter(None, o, _compared_to_operator=operator,
+ _compared_to_type=self.type, unique=True)
+ for o in obj
+ ])
+
+ def self_group(self, against=None):
+ return self
+
+
+CONTAINS = operators.custom_op("@>", precedence=5)
+
+CONTAINED_BY = operators.custom_op("<@", precedence=5)
+
+OVERLAP = operators.custom_op("&&", precedence=5)
+
+
+class ARRAY(sqltypes.Indexable, sqltypes.Concatenable, sqltypes.TypeEngine):
+
+ """Postgresql ARRAY type.
+
+ Represents values as Python lists.
+
+ An :class:`.ARRAY` type is constructed given the "type"
+ of element::
+
+ mytable = Table("mytable", metadata,
+ Column("data", ARRAY(Integer))
+ )
+
+ The above type represents an N-dimensional array,
+ meaning Postgresql will interpret values with any number
+ of dimensions automatically. To produce an INSERT
+ construct that passes in a 1-dimensional array of integers::
+
+ connection.execute(
+ mytable.insert(),
+ data=[1,2,3]
+ )
+
+ The :class:`.ARRAY` type can be constructed given a fixed number
+ of dimensions::
+
+ mytable = Table("mytable", metadata,
+ Column("data", ARRAY(Integer, dimensions=2))
+ )
+
+ This has the effect of the :class:`.ARRAY` type
+ specifying that number of bracketed blocks when a :class:`.Table`
+ is used in a CREATE TABLE statement, or when the type is used
+ within a :func:`.expression.cast` construct; it also causes
+ the bind parameter and result set processing of the type
+ to optimize itself to expect exactly that number of dimensions.
+ Note that Postgresql itself still allows N dimensions with such a type.
+
+ SQL expressions of type :class:`.ARRAY` have support for "index" and
+ "slice" behavior. The Python ``[]`` operator works normally here, given
+ integer indexes or slices. Note that Postgresql arrays default
+ to 1-based indexing. The operator produces binary expression
+ constructs which will produce the appropriate SQL, both for
+ SELECT statements::
+
+ select([mytable.c.data[5], mytable.c.data[2:7]])
+
+ as well as UPDATE statements when the :meth:`.Update.values` method
+ is used::
+
+ mytable.update().values({
+ mytable.c.data[5]: 7,
+ mytable.c.data[2:7]: [1, 2, 3]
+ })
+
+    Multi-dimensional array index support is provided automatically based
+    on the value specified for the :paramref:`.ARRAY.dimensions` parameter.
+ E.g. an :class:`.ARRAY` with dimensions set to 2 would return an expression
+ of type :class:`.ARRAY` for a single index operation::
+
+ type = ARRAY(Integer, dimensions=2)
+
+ expr = column('x', type) # expr is of type ARRAY(Integer, dimensions=2)
+
+ expr = column('x', type)[5] # expr is of type ARRAY(Integer, dimensions=1)
+
+ An index expression from ``expr`` above would then return an expression
+ of type Integer::
+
+        sub_expr = expr[10]  # sub_expr is of type Integer
+
+ .. versionadded:: 1.1 support for index operations on multi-dimensional
+ :class:`.postgresql.ARRAY` objects is added.
+
+ :class:`.ARRAY` provides special methods for containment operations,
+ e.g.::
+
+ mytable.c.data.contains([1, 2])
+
+ For a full list of special methods see :class:`.ARRAY.Comparator`.
+
+ .. versionadded:: 0.8 Added support for index and slice operations
+ to the :class:`.ARRAY` type, including support for UPDATE
+ statements, and special array containment operations.
+
+ The :class:`.ARRAY` type may not be supported on all DBAPIs.
+ It is known to work on psycopg2 and not pg8000.
+
+ See also:
+
+ :class:`.postgresql.array` - produce a literal array value.
+
+ """
+ __visit_name__ = 'ARRAY'
+
+ class Comparator(
+ sqltypes.Indexable.Comparator, sqltypes.Concatenable.Comparator):
+
+ """Define comparison operations for :class:`.ARRAY`."""
+
+ def _setup_getitem(self, index):
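+            # a slice, e.g. col[2:7], keeps the same ARRAY type; a
+            # single index peels off one dimension, returning the
+            # item_type once no dimensions remain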
+ if isinstance(index, slice):
+ return_type = self.type
+ elif self.type.dimensions is None or self.type.dimensions == 1:
+ return_type = self.type.item_type
+ else:
+ adapt_kw = {'dimensions': self.type.dimensions - 1}
+ return_type = self.type.adapt(self.type.__class__, **adapt_kw)
+
+ return operators.getitem, index, return_type
+
+ def any(self, other, operator=operators.eq):
+ """Return ``other operator ANY (array)`` clause.
+
+            Argument places are switched, because ANY requires the
+            array expression to be on the right-hand side.
+
+ E.g.::
+
+ from sqlalchemy.sql import operators
+
+ conn.execute(
+ select([table.c.data]).where(
+ table.c.data.any(7, operator=operators.lt)
+ )
+ )
+
+ :param other: expression to be compared
+ :param operator: an operator object from the
+ :mod:`sqlalchemy.sql.operators`
+ package, defaults to :func:`.operators.eq`.
+
+ .. seealso::
+
+ :class:`.postgresql.Any`
+
+ :meth:`.postgresql.ARRAY.Comparator.all`
+
+ """
+ return Any(other, self.expr, operator=operator)
+
+ def all(self, other, operator=operators.eq):
+ """Return ``other operator ALL (array)`` clause.
+
+            Argument places are switched, because ALL requires the
+            array expression to be on the right-hand side.
+
+ E.g.::
+
+ from sqlalchemy.sql import operators
+
+ conn.execute(
+ select([table.c.data]).where(
+ table.c.data.all(7, operator=operators.lt)
+ )
+ )
+
+ :param other: expression to be compared
+ :param operator: an operator object from the
+ :mod:`sqlalchemy.sql.operators`
+ package, defaults to :func:`.operators.eq`.
+
+ .. seealso::
+
+ :class:`.postgresql.All`
+
+ :meth:`.postgresql.ARRAY.Comparator.any`
+
+ """
+ return All(other, self.expr, operator=operator)
+
+ def contains(self, other, **kwargs):
+ """Boolean expression. Test if elements are a superset of the
+ elements of the argument array expression.
+ """
+ return self.operate(CONTAINS, other, result_type=sqltypes.Boolean)
+
+ def contained_by(self, other):
+ """Boolean expression. Test if elements are a proper subset of the
+ elements of the argument array expression.
+ """
+ return self.operate(
+ CONTAINED_BY, other, result_type=sqltypes.Boolean)
+
+ def overlap(self, other):
+ """Boolean expression. Test if array has elements in common with
+ an argument array expression.
+ """
+ return self.operate(OVERLAP, other, result_type=sqltypes.Boolean)
+
+ comparator_factory = Comparator
+
+ def __init__(self, item_type, as_tuple=False, dimensions=None,
+ zero_indexes=False):
+ """Construct an ARRAY.
+
+ E.g.::
+
+ Column('myarray', ARRAY(Integer))
+
+ Arguments are:
+
+ :param item_type: The data type of items of this array. Note that
+ dimensionality is irrelevant here, so multi-dimensional arrays like
+          ``INTEGER[][]`` are constructed as ``ARRAY(Integer)``, not as
+ ``ARRAY(ARRAY(Integer))`` or such.
+
+ :param as_tuple=False: Specify whether return results
+ should be converted to tuples from lists. DBAPIs such
+ as psycopg2 return lists by default. When tuples are
+ returned, the results are hashable.
+
+ :param dimensions: if non-None, the ARRAY will assume a fixed
+ number of dimensions. This will cause the DDL emitted for this
+ ARRAY to include the exact number of bracket clauses ``[]``,
+ and will also optimize the performance of the type overall.
+ Note that PG arrays are always implicitly "non-dimensioned",
+ meaning they can store any number of dimensions no matter how
+ they were declared.
+
+ :param zero_indexes=False: when True, index values will be converted
+ between Python zero-based and Postgresql one-based indexes, e.g.
+ a value of one will be added to all index values before passing
+ to the database.
+
+ .. versionadded:: 0.9.5
+
+
+ """
+ if isinstance(item_type, ARRAY):
+ raise ValueError("Do not nest ARRAY types; ARRAY(basetype) "
+ "handles multi-dimensional arrays of basetype")
+ if isinstance(item_type, type):
+ item_type = item_type()
+ self.item_type = item_type
+ self.as_tuple = as_tuple
+ self.dimensions = dimensions
+ self.zero_indexes = zero_indexes
+
+ @property
+ def hashable(self):
+ return self.as_tuple
+
+ @property
+ def python_type(self):
+ return list
+
+ def compare_values(self, x, y):
+ return x == y
+
+ def _proc_array(self, arr, itemproc, dim, collection):
+ if dim is None:
+ arr = list(arr)
+ if dim == 1 or dim is None and (
+ # this has to be (list, tuple), or at least
+ # not hasattr('__iter__'), since Py3K strings
+ # etc. have __iter__
+ not arr or not isinstance(arr[0], (list, tuple))):
+ if itemproc:
+ return collection(itemproc(x) for x in arr)
+ else:
+ return collection(arr)
+ else:
+ return collection(
+ self._proc_array(
+ x, itemproc,
+ dim - 1 if dim is not None else None,
+ collection)
+ for x in arr
+ )
+
+ def bind_processor(self, dialect):
+ item_proc = self.item_type.dialect_impl(dialect).\
+ bind_processor(dialect)
+
+ def process(value):
+ if value is None:
+ return value
+ else:
+ return self._proc_array(
+ value,
+ item_proc,
+ self.dimensions,
+ list)
+ return process
+
+ def result_processor(self, dialect, coltype):
+ item_proc = self.item_type.dialect_impl(dialect).\
+ result_processor(dialect, coltype)
+
+ def process(value):
+ if value is None:
+ return value
+ else:
+ return self._proc_array(
+ value,
+ item_proc,
+ self.dimensions,
+ tuple if self.as_tuple else list)
+ return process
+
+ischema_names['_array'] = ARRAY
`Postgresql CREATE TABLE options
<http://www.postgresql.org/docs/9.3/static/sql-createtable.html>`_
+ARRAY Types
+-----------
+
+The Postgresql dialect supports arrays, both as multidimensional column types
+and as array literals:
+
+* :class:`.postgresql.ARRAY` - ARRAY datatype
+
+* :class:`.postgresql.array` - array literal
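+
+For example, index and slice access on an ARRAY column (a brief sketch;
+``mytable`` is assumed)::
+
+    select([mytable.c.data[5], mytable.c.data[2:7]])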
+
+JSON Types
+----------
+
+The Postgresql dialect supports both JSON and JSONB datatypes, including
+psycopg2's native support and support for all of Postgresql's special
+operators:
+
+* :class:`.postgresql.JSON`
+
+* :class:`.postgresql.JSONB`
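+
+For example, indexed access returns a JSON-typed expression, with the
+``astext`` modifier providing textual access (a brief sketch;
+``data_table`` is assumed)::
+
+    expr = data_table.c.data['some key'].astext.cast(Integer)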
+
+HSTORE Type
+-----------
+
+The Postgresql HSTORE type as well as hstore literals are supported:
+
+* :class:`.postgresql.HSTORE` - HSTORE datatype
+
+* :class:`.postgresql.hstore` - hstore literal
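+
+For example (a brief sketch; ``mytable`` with an HSTORE column
+``hdata`` is assumed)::
+
+    mytable.c.hdata['some key'] == 'some value'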
+
ENUM Types
----------
from ... import sql, schema, exc, util
from ...engine import default, reflection
-from ...sql import compiler, expression, operators, default_comparator
+from ...sql import compiler, expression
from ... import types as sqltypes
try:
__visit_name__ = 'TSVECTOR'
-class _Slice(expression.ColumnElement):
- __visit_name__ = 'slice'
- type = sqltypes.NULLTYPE
-
- def __init__(self, slice_, source_comparator):
- self.start = default_comparator._check_literal(
- source_comparator.expr,
- operators.getitem, slice_.start)
- self.stop = default_comparator._check_literal(
- source_comparator.expr,
- operators.getitem, slice_.stop)
-
-
-class Any(expression.ColumnElement):
-
- """Represent the clause ``left operator ANY (right)``. ``right`` must be
- an array expression.
-
- .. seealso::
-
- :class:`.postgresql.ARRAY`
-
- :meth:`.postgresql.ARRAY.Comparator.any` - ARRAY-bound method
-
- """
- __visit_name__ = 'any'
-
- def __init__(self, left, right, operator=operators.eq):
- self.type = sqltypes.Boolean()
- self.left = expression._literal_as_binds(left)
- self.right = right
- self.operator = operator
-
-
-class All(expression.ColumnElement):
-
- """Represent the clause ``left operator ALL (right)``. ``right`` must be
- an array expression.
-
- .. seealso::
-
- :class:`.postgresql.ARRAY`
-
- :meth:`.postgresql.ARRAY.Comparator.all` - ARRAY-bound method
-
- """
- __visit_name__ = 'all'
-
- def __init__(self, left, right, operator=operators.eq):
- self.type = sqltypes.Boolean()
- self.left = expression._literal_as_binds(left)
- self.right = right
- self.operator = operator
-
-
-class array(expression.Tuple):
-
- """A Postgresql ARRAY literal.
-
- This is used to produce ARRAY literals in SQL expressions, e.g.::
-
- from sqlalchemy.dialects.postgresql import array
- from sqlalchemy.dialects import postgresql
- from sqlalchemy import select, func
-
- stmt = select([
- array([1,2]) + array([3,4,5])
- ])
-
- print stmt.compile(dialect=postgresql.dialect())
-
- Produces the SQL::
-
- SELECT ARRAY[%(param_1)s, %(param_2)s] ||
- ARRAY[%(param_3)s, %(param_4)s, %(param_5)s]) AS anon_1
-
- An instance of :class:`.array` will always have the datatype
- :class:`.ARRAY`. The "inner" type of the array is inferred from
- the values present, unless the ``type_`` keyword argument is passed::
-
- array(['foo', 'bar'], type_=CHAR)
-
- .. versionadded:: 0.8 Added the :class:`~.postgresql.array` literal type.
-
- See also:
-
- :class:`.postgresql.ARRAY`
-
- """
- __visit_name__ = 'array'
-
- def __init__(self, clauses, **kw):
- super(array, self).__init__(*clauses, **kw)
- self.type = ARRAY(self.type)
-
- def _bind_param(self, operator, obj):
- return array([
- expression.BindParameter(None, o, _compared_to_operator=operator,
- _compared_to_type=self.type, unique=True)
- for o in obj
- ])
-
- def self_group(self, against=None):
- return self
-
-
-class ARRAY(sqltypes.Concatenable, sqltypes.TypeEngine):
-
- """Postgresql ARRAY type.
-
- Represents values as Python lists.
-
- An :class:`.ARRAY` type is constructed given the "type"
- of element::
-
- mytable = Table("mytable", metadata,
- Column("data", ARRAY(Integer))
- )
-
- The above type represents an N-dimensional array,
- meaning Postgresql will interpret values with any number
- of dimensions automatically. To produce an INSERT
- construct that passes in a 1-dimensional array of integers::
-
- connection.execute(
- mytable.insert(),
- data=[1,2,3]
- )
-
- The :class:`.ARRAY` type can be constructed given a fixed number
- of dimensions::
-
- mytable = Table("mytable", metadata,
- Column("data", ARRAY(Integer, dimensions=2))
- )
-
- This has the effect of the :class:`.ARRAY` type
- specifying that number of bracketed blocks when a :class:`.Table`
- is used in a CREATE TABLE statement, or when the type is used
- within a :func:`.expression.cast` construct; it also causes
- the bind parameter and result set processing of the type
- to optimize itself to expect exactly that number of dimensions.
- Note that Postgresql itself still allows N dimensions with such a type.
-
- SQL expressions of type :class:`.ARRAY` have support for "index" and
- "slice" behavior. The Python ``[]`` operator works normally here, given
- integer indexes or slices. Note that Postgresql arrays default
- to 1-based indexing. The operator produces binary expression
- constructs which will produce the appropriate SQL, both for
- SELECT statements::
-
- select([mytable.c.data[5], mytable.c.data[2:7]])
-
- as well as UPDATE statements when the :meth:`.Update.values` method
- is used::
-
- mytable.update().values({
- mytable.c.data[5]: 7,
- mytable.c.data[2:7]: [1, 2, 3]
- })
-
- .. note::
-
- Multi-dimensional support for the ``[]`` operator is not supported
- in SQLAlchemy 1.0. Please use the :func:`.type_coerce` function
- to cast an intermediary expression to ARRAY again as a workaround::
-
- expr = type_coerce(my_array_column[5], ARRAY(Integer))[6]
-
- Multi-dimensional support will be provided in a future release.
-
- :class:`.ARRAY` provides special methods for containment operations,
- e.g.::
-
- mytable.c.data.contains([1, 2])
-
- For a full list of special methods see :class:`.ARRAY.Comparator`.
-
- .. versionadded:: 0.8 Added support for index and slice operations
- to the :class:`.ARRAY` type, including support for UPDATE
- statements, and special array containment operations.
-
- The :class:`.ARRAY` type may not be supported on all DBAPIs.
- It is known to work on psycopg2 and not pg8000.
-
- See also:
-
- :class:`.postgresql.array` - produce a literal array value.
-
- """
- __visit_name__ = 'ARRAY'
-
- class Comparator(sqltypes.Concatenable.Comparator):
-
- """Define comparison operations for :class:`.ARRAY`."""
-
- def __getitem__(self, index):
- shift_indexes = 1 if self.expr.type.zero_indexes else 0
- if isinstance(index, slice):
- if shift_indexes:
- index = slice(
- index.start + shift_indexes,
- index.stop + shift_indexes,
- index.step
- )
- index = _Slice(index, self)
- return_type = self.type
- else:
- index += shift_indexes
- return_type = self.type.item_type
-
- return default_comparator._binary_operate(
- self.expr, operators.getitem, index,
- result_type=return_type)
-
- def any(self, other, operator=operators.eq):
- """Return ``other operator ANY (array)`` clause.
-
- Argument places are switched, because ANY requires array
- expression to be on the right hand-side.
-
- E.g.::
-
- from sqlalchemy.sql import operators
-
- conn.execute(
- select([table.c.data]).where(
- table.c.data.any(7, operator=operators.lt)
- )
- )
-
- :param other: expression to be compared
- :param operator: an operator object from the
- :mod:`sqlalchemy.sql.operators`
- package, defaults to :func:`.operators.eq`.
-
- .. seealso::
-
- :class:`.postgresql.Any`
-
- :meth:`.postgresql.ARRAY.Comparator.all`
-
- """
- return Any(other, self.expr, operator=operator)
-
- def all(self, other, operator=operators.eq):
- """Return ``other operator ALL (array)`` clause.
-
- Argument places are switched, because ALL requires array
- expression to be on the right hand-side.
-
- E.g.::
-
- from sqlalchemy.sql import operators
-
- conn.execute(
- select([table.c.data]).where(
- table.c.data.all(7, operator=operators.lt)
- )
- )
-
- :param other: expression to be compared
- :param operator: an operator object from the
- :mod:`sqlalchemy.sql.operators`
- package, defaults to :func:`.operators.eq`.
-
- .. seealso::
-
- :class:`.postgresql.All`
-
- :meth:`.postgresql.ARRAY.Comparator.any`
-
- """
- return All(other, self.expr, operator=operator)
-
- def contains(self, other, **kwargs):
- """Boolean expression. Test if elements are a superset of the
- elements of the argument array expression.
- """
- return self.expr.op('@>')(other)
-
- def contained_by(self, other):
- """Boolean expression. Test if elements are a proper subset of the
- elements of the argument array expression.
- """
- return self.expr.op('<@')(other)
-
- def overlap(self, other):
- """Boolean expression. Test if array has elements in common with
- an argument array expression.
- """
- return self.expr.op('&&')(other)
-
- def _adapt_expression(self, op, other_comparator):
- if isinstance(op, operators.custom_op):
- if op.opstring in ['@>', '<@', '&&']:
- return op, sqltypes.Boolean
- return sqltypes.Concatenable.Comparator.\
- _adapt_expression(self, op, other_comparator)
-
- comparator_factory = Comparator
-
- def __init__(self, item_type, as_tuple=False, dimensions=None,
- zero_indexes=False):
- """Construct an ARRAY.
-
- E.g.::
-
- Column('myarray', ARRAY(Integer))
-
- Arguments are:
-
- :param item_type: The data type of items of this array. Note that
- dimensionality is irrelevant here, so multi-dimensional arrays like
- ``INTEGER[][]``, are constructed as ``ARRAY(Integer)``, not as
- ``ARRAY(ARRAY(Integer))`` or such.
-
- :param as_tuple=False: Specify whether return results
- should be converted to tuples from lists. DBAPIs such
- as psycopg2 return lists by default. When tuples are
- returned, the results are hashable.
-
- :param dimensions: if non-None, the ARRAY will assume a fixed
- number of dimensions. This will cause the DDL emitted for this
- ARRAY to include the exact number of bracket clauses ``[]``,
- and will also optimize the performance of the type overall.
- Note that PG arrays are always implicitly "non-dimensioned",
- meaning they can store any number of dimensions no matter how
- they were declared.
-
- :param zero_indexes=False: when True, index values will be converted
- between Python zero-based and Postgresql one-based indexes, e.g.
- a value of one will be added to all index values before passing
- to the database.
-
- .. versionadded:: 0.9.5
-
- """
- if isinstance(item_type, ARRAY):
- raise ValueError("Do not nest ARRAY types; ARRAY(basetype) "
- "handles multi-dimensional arrays of basetype")
- if isinstance(item_type, type):
- item_type = item_type()
- self.item_type = item_type
- self.as_tuple = as_tuple
- self.dimensions = dimensions
- self.zero_indexes = zero_indexes
-
- @property
- def python_type(self):
- return list
-
- def compare_values(self, x, y):
- return x == y
-
- def _proc_array(self, arr, itemproc, dim, collection):
- if dim is None:
- arr = list(arr)
- if dim == 1 or dim is None and (
- # this has to be (list, tuple), or at least
- # not hasattr('__iter__'), since Py3K strings
- # etc. have __iter__
- not arr or not isinstance(arr[0], (list, tuple))):
- if itemproc:
- return collection(itemproc(x) for x in arr)
- else:
- return collection(arr)
- else:
- return collection(
- self._proc_array(
- x, itemproc,
- dim - 1 if dim is not None else None,
- collection)
- for x in arr
- )
-
- def bind_processor(self, dialect):
- item_proc = self.item_type.\
- dialect_impl(dialect).\
- bind_processor(dialect)
-
- def process(value):
- if value is None:
- return value
- else:
- return self._proc_array(
- value,
- item_proc,
- self.dimensions,
- list)
- return process
-
- def result_processor(self, dialect, coltype):
- item_proc = self.item_type.\
- dialect_impl(dialect).\
- result_processor(dialect, coltype)
-
- def process(value):
- if value is None:
- return value
- else:
- return self._proc_array(
- value,
- item_proc,
- self.dimensions,
- tuple if self.as_tuple else list)
- return process
-
-PGArray = ARRAY
-
-
class ENUM(sqltypes.Enum):
"""Postgresql ENUM type.
else:
colspec += " SERIAL"
else:
- colspec += " " + self.dialect.type_compiler.process(column.type,
- type_expression=column)
+ colspec += " " + self.dialect.type_compiler.process(
+ column.type, type_expression=column)
default = self.get_column_default_string(column)
if default is not None:
colspec += " DEFAULT " + default
if coltype:
coltype = coltype(*args, **kwargs)
if is_array:
- coltype = ARRAY(coltype)
+ coltype = self.ischema_names['_array'](coltype)
else:
util.warn("Did not recognize type '%s' of column '%s'" %
(attype, name))
import re
-from .base import ARRAY, ischema_names
+from .base import ischema_names
+from .array import ARRAY
from ... import types as sqltypes
from ...sql import functions as sqlfunc
+from ...sql import operators
from ...sql.operators import custom_op
from ... import util
__all__ = ('HSTORE', 'hstore')
-# My best guess at the parsing rules of hstore literals, since no formal
-# grammar is given. This is mostly reverse engineered from PG's input parser
-# behavior.
-HSTORE_PAIR_RE = re.compile(r"""
-(
- "(?P<key> (\\ . | [^"])* )" # Quoted key
-)
-[ ]* => [ ]* # Pair operator, optional adjoining whitespace
-(
- (?P<value_null> NULL ) # NULL value
- | "(?P<value> (\\ . | [^"])* )" # Quoted value
-)
-""", re.VERBOSE)
-
-HSTORE_DELIMITER_RE = re.compile(r"""
-[ ]* , [ ]*
-""", re.VERBOSE)
-
-
-def _parse_error(hstore_str, pos):
- """format an unmarshalling error."""
-
- ctx = 20
- hslen = len(hstore_str)
-
- parsed_tail = hstore_str[max(pos - ctx - 1, 0):min(pos, hslen)]
- residual = hstore_str[min(pos, hslen):min(pos + ctx + 1, hslen)]
- if len(parsed_tail) > ctx:
- parsed_tail = '[...]' + parsed_tail[1:]
- if len(residual) > ctx:
- residual = residual[:-1] + '[...]'
-
- return "After %r, could not parse residual at position %d: %r" % (
- parsed_tail, pos, residual)
-
-
-def _parse_hstore(hstore_str):
- """Parse an hstore from its literal string representation.
-
- Attempts to approximate PG's hstore input parsing rules as closely as
- possible. Although currently this is not strictly necessary, since the
- current implementation of hstore's output syntax is stricter than what it
- accepts as input, the documentation makes no guarantees that will always
- be the case.
-
-
-
- """
- result = {}
- pos = 0
- pair_match = HSTORE_PAIR_RE.match(hstore_str)
-
- while pair_match is not None:
- key = pair_match.group('key').replace(r'\"', '"').replace(
- "\\\\", "\\")
- if pair_match.group('value_null'):
- value = None
- else:
- value = pair_match.group('value').replace(
- r'\"', '"').replace("\\\\", "\\")
- result[key] = value
-
- pos += pair_match.end()
-
- delim_match = HSTORE_DELIMITER_RE.match(hstore_str[pos:])
- if delim_match is not None:
- pos += delim_match.end()
-
- pair_match = HSTORE_PAIR_RE.match(hstore_str[pos:])
-
- if pos != len(hstore_str):
- raise ValueError(_parse_error(hstore_str, pos))
+INDEX = custom_op(
+ "->", precedence=5, natural_self_precedent=True
+)
- return result
+HAS_KEY = operators.custom_op(
+ "?", precedence=5, natural_self_precedent=True
+)
+HAS_ALL = operators.custom_op(
+ "?&", precedence=5, natural_self_precedent=True
+)
-def _serialize_hstore(val):
- """Serialize a dictionary into an hstore literal. Keys and values must
- both be strings (except None for values).
+HAS_ANY = operators.custom_op(
+ "?|", precedence=5, natural_self_precedent=True
+)
- """
- def esc(s, position):
- if position == 'value' and s is None:
- return 'NULL'
- elif isinstance(s, util.string_types):
- return '"%s"' % s.replace("\\", "\\\\").replace('"', r'\"')
- else:
- raise ValueError("%r in %s position is not a string." %
- (s, position))
+CONTAINS = operators.custom_op(
+ "@>", precedence=5, natural_self_precedent=True
+)
- return ', '.join('%s=>%s' % (esc(k, 'key'), esc(v, 'value'))
- for k, v in val.items())
+CONTAINED_BY = operators.custom_op(
+ "<@", precedence=5, natural_self_precedent=True
+)
-class HSTORE(sqltypes.Concatenable, sqltypes.TypeEngine):
+class HSTORE(sqltypes.Indexable, sqltypes.Concatenable, sqltypes.TypeEngine):
"""Represent the Postgresql HSTORE type.
The :class:`.HSTORE` type stores dictionaries containing strings, e.g.::
__visit_name__ = 'HSTORE'
hashable = False
+ text_type = sqltypes.Text()
+
+ def __init__(self, text_type=None):
+ """Construct a new :class:`.HSTORE`.
+
+ :param text_type: the type that should be used for indexed values.
+ Defaults to :class:`.types.Text`.
+
+ .. versionadded:: 1.1.0
- class comparator_factory(sqltypes.Concatenable.Comparator):
+ """
+ if text_type is not None:
+ self.text_type = text_type
+
+ class Comparator(
+ sqltypes.Indexable.Comparator, sqltypes.Concatenable.Comparator):
"""Define comparison operations for :class:`.HSTORE`."""
def has_key(self, other):
"""Boolean expression. Test for presence of a key. Note that the
key may be a SQLA expression.
"""
- return self.expr.op('?')(other)
+ return self.operate(HAS_KEY, other, result_type=sqltypes.Boolean)
def has_all(self, other):
- """Boolean expression. Test for presence of all keys in the PG
- array.
+        """Boolean expression.  Test for presence of all keys in hstore
"""
- return self.expr.op('?&')(other)
+ return self.operate(HAS_ALL, other, result_type=sqltypes.Boolean)
def has_any(self, other):
- """Boolean expression. Test for presence of any key in the PG
- array.
+        """Boolean expression.  Test for presence of any key in hstore
"""
- return self.expr.op('?|')(other)
-
- def defined(self, key):
- """Boolean expression. Test for presence of a non-NULL value for
- the key. Note that the key may be a SQLA expression.
- """
- return _HStoreDefinedFunction(self.expr, key)
+ return self.operate(HAS_ANY, other, result_type=sqltypes.Boolean)
def contains(self, other, **kwargs):
- """Boolean expression. Test if keys are a superset of the keys of
- the argument hstore expression.
+        """Boolean expression.  Test if keys (or array) are a superset
+        of / contain the keys of the argument hstore expression.
"""
- return self.expr.op('@>')(other)
+ return self.operate(CONTAINS, other, result_type=sqltypes.Boolean)
def contained_by(self, other):
"""Boolean expression. Test if keys are a proper subset of the
- keys of the argument hstore expression.
+        keys of the argument hstore expression.
"""
- return self.expr.op('<@')(other)
+ return self.operate(
+ CONTAINED_BY, other, result_type=sqltypes.Boolean)
- def __getitem__(self, other):
- """Text expression. Get the value at a given key. Note that the
- key may be a SQLA expression.
+ def _setup_getitem(self, index):
+ return INDEX, index, self.type.text_type
+
+ def defined(self, key):
+ """Boolean expression. Test for presence of a non-NULL value for
+ the key. Note that the key may be a SQLA expression.
"""
- return self.expr.op('->', precedence=5)(other)
+ return _HStoreDefinedFunction(self.expr, key)
def delete(self, key):
"""HStore expression. Returns the contents of this hstore with the
"""Text array expression. Returns array of [key, value] pairs."""
return _HStoreMatrixFunction(self.expr)
- def _adapt_expression(self, op, other_comparator):
- if isinstance(op, custom_op):
- if op.opstring in ['?', '?&', '?|', '@>', '<@']:
- return op, sqltypes.Boolean
- elif op.opstring == '->':
- return op, sqltypes.Text
- return sqltypes.Concatenable.Comparator.\
- _adapt_expression(self, op, other_comparator)
+ comparator_factory = Comparator
def bind_processor(self, dialect):
if util.py2k:
class _HStoreMatrixFunction(sqlfunc.GenericFunction):
type = ARRAY(sqltypes.Text)
name = 'hstore_to_matrix'
+
+
+#
+# parsing. note that none of this is used with the psycopg2 backend,
+# which provides its own native extensions.
+#
+
+# My best guess at the parsing rules of hstore literals, since no formal
+# grammar is given. This is mostly reverse engineered from PG's input parser
+# behavior.
+HSTORE_PAIR_RE = re.compile(r"""
+(
+ "(?P<key> (\\ . | [^"])* )" # Quoted key
+)
+[ ]* => [ ]* # Pair operator, optional adjoining whitespace
+(
+ (?P<value_null> NULL ) # NULL value
+ | "(?P<value> (\\ . | [^"])* )" # Quoted value
+)
+""", re.VERBOSE)
+
+HSTORE_DELIMITER_RE = re.compile(r"""
+[ ]* , [ ]*
+""", re.VERBOSE)
+
+
+def _parse_error(hstore_str, pos):
+ """format an unmarshalling error."""
+
+ ctx = 20
+ hslen = len(hstore_str)
+
+ parsed_tail = hstore_str[max(pos - ctx - 1, 0):min(pos, hslen)]
+ residual = hstore_str[min(pos, hslen):min(pos + ctx + 1, hslen)]
+
+ if len(parsed_tail) > ctx:
+ parsed_tail = '[...]' + parsed_tail[1:]
+ if len(residual) > ctx:
+ residual = residual[:-1] + '[...]'
+
+ return "After %r, could not parse residual at position %d: %r" % (
+ parsed_tail, pos, residual)
+
+
+def _parse_hstore(hstore_str):
+ """Parse an hstore from its literal string representation.
+
+ Attempts to approximate PG's hstore input parsing rules as closely as
+ possible. Although currently this is not strictly necessary, since the
+ current implementation of hstore's output syntax is stricter than what it
+ accepts as input, the documentation makes no guarantees that will always
+ be the case.
+
+
+
+ """
+ result = {}
+ pos = 0
+ pair_match = HSTORE_PAIR_RE.match(hstore_str)
+
+ while pair_match is not None:
+ key = pair_match.group('key').replace(r'\"', '"').replace(
+ "\\\\", "\\")
+ if pair_match.group('value_null'):
+ value = None
+ else:
+ value = pair_match.group('value').replace(
+ r'\"', '"').replace("\\\\", "\\")
+ result[key] = value
+
+ pos += pair_match.end()
+
+ delim_match = HSTORE_DELIMITER_RE.match(hstore_str[pos:])
+ if delim_match is not None:
+ pos += delim_match.end()
+
+ pair_match = HSTORE_PAIR_RE.match(hstore_str[pos:])
+
+ if pos != len(hstore_str):
+ raise ValueError(_parse_error(hstore_str, pos))
+
+ return result
+
+
+def _serialize_hstore(val):
+ """Serialize a dictionary into an hstore literal. Keys and values must
+ both be strings (except None for values).
+
+ """
+ def esc(s, position):
+ if position == 'value' and s is None:
+ return 'NULL'
+ elif isinstance(s, util.string_types):
+ return '"%s"' % s.replace("\\", "\\\\").replace('"', r'\"')
+ else:
+ raise ValueError("%r in %s position is not a string." %
+ (s, position))
+
+ return ', '.join('%s=>%s' % (esc(k, 'key'), esc(v, 'value'))
+ for k, v in val.items())
+
+
# the MIT License: http://www.opensource.org/licenses/mit-license.php
from __future__ import absolute_import
+import collections
import json
from .base import ischema_names
from ... import types as sqltypes
-from ...sql.operators import custom_op
-from ... import sql
-from ...sql import elements, default_comparator
+from ...sql import operators
+from ...sql import elements
from ... import util
-__all__ = ('JSON', 'JSONElement', 'JSONB')
+__all__ = ('JSON', 'JSONB')
-class JSONElement(elements.BinaryExpression):
- """Represents accessing an element of a :class:`.JSON` value.
+# json : returns json
+INDEX = operators.custom_op(
+ "->", precedence=5, natural_self_precedent=True
+)
- The :class:`.JSONElement` is produced whenever using the Python index
- operator on an expression that has the type :class:`.JSON`::
+# path operator: returns json
+PATHIDX = operators.custom_op(
+ "#>", precedence=5, natural_self_precedent=True
+)
- expr = mytable.c.json_data['some_key']
+# json + astext: returns text
+ASTEXT = operators.custom_op(
+ "->>", precedence=5, natural_self_precedent=True
+)
- The expression typically compiles to a JSON access such as ``col -> key``.
- Modifiers are then available for typing behavior, including
- :meth:`.JSONElement.cast` and :attr:`.JSONElement.astext`.
+# path operator + astext: returns text
+ASTEXT_PATHIDX = operators.custom_op(
+ "#>>", precedence=5, natural_self_precedent=True
+)
- """
-
- def __init__(self, left, right, astext=False,
- opstring=None, result_type=None):
- self._astext = astext
- if opstring is None:
- if hasattr(right, '__iter__') and \
- not isinstance(right, util.string_types):
- opstring = "#>"
- right = "{%s}" % (
- ", ".join(util.text_type(elem) for elem in right))
- else:
- opstring = "->"
-
- self._json_opstring = opstring
- operator = custom_op(opstring, precedence=5)
- right = default_comparator._check_literal(
- left, operator, right)
- super(JSONElement, self).__init__(
- left, right, operator, type_=result_type)
-
- @property
- def astext(self):
- """Convert this :class:`.JSONElement` to use the 'astext' operator
- when evaluated.
-
- E.g.::
-
- select([data_table.c.data['some key'].astext])
-
- .. seealso::
-
- :meth:`.JSONElement.cast`
+HAS_KEY = operators.custom_op(
+ "?", precedence=5, natural_self_precedent=True
+)
- """
- if self._astext:
- return self
- else:
- return JSONElement(
- self.left,
- self.right,
- astext=True,
- opstring=self._json_opstring + ">",
- result_type=sqltypes.String(convert_unicode=True)
- )
-
- def cast(self, type_):
- """Convert this :class:`.JSONElement` to apply both the 'astext' operator
- as well as an explicit type cast when evaluated.
+HAS_ALL = operators.custom_op(
+ "?&", precedence=5, natural_self_precedent=True
+)
- E.g.::
+HAS_ANY = operators.custom_op(
+ "?|", precedence=5, natural_self_precedent=True
+)
- select([data_table.c.data['some key'].cast(Integer)])
+CONTAINS = operators.custom_op(
+ "@>", precedence=5, natural_self_precedent=True
+)
- .. seealso::
+CONTAINED_BY = operators.custom_op(
+ "<@", precedence=5, natural_self_precedent=True
+)
- :attr:`.JSONElement.astext`
-
- """
- if not self._astext:
- return self.astext.cast(type_)
- else:
- return sql.cast(self, type_)
-
-class JSON(sqltypes.TypeEngine):
+class JSON(sqltypes.Indexable, sqltypes.TypeEngine):
"""Represent the Postgresql JSON type.
The :class:`.JSON` type stores arbitrary JSON format data, e.g.::
:class:`.JSON` provides several operations:
- * Index operations::
+ * Index operations (the ``->`` operator)::
data_table.c.data['some key']
- * Index operations returning text (required for text comparison)::
+ * Index operations returning text (the ``->>`` operator)::
data_table.c.data['some key'].astext == 'some value'
- * Index operations with a built-in CAST call::
+ * Index operations with CAST
+ (equivalent to ``CAST(col ->> ['some key'] AS <type>)``)::
- data_table.c.data['some key'].cast(Integer) == 5
+ data_table.c.data['some key'].astext.cast(Integer) == 5
- * Path index operations::
+ * Path index operations (the ``#>`` operator)::
data_table.c.data[('key_1', 'key_2', ..., 'key_n')]
- * Path index operations returning text (required for text comparison)::
+ * Path index operations returning text (the ``#>>`` operator)::
+
+      data_table.c.data[('key_1', 'key_2', ..., 'key_n')].astext == \
+          'some value'
- data_table.c.data[('key_1', 'key_2', ..., 'key_n')].astext == \\
- 'some value'
+ .. versionchanged:: 1.1 The :meth:`.ColumnElement.cast` operator on
+ JSON objects now requires that the :attr:`.JSON.Comparator.astext`
+       modifier be called explicitly if the cast is intended to work
+       from a textual string.
- Index operations return an instance of :class:`.JSONElement`, which
- represents an expression such as ``column -> index``. This element then
- defines methods such as :attr:`.JSONElement.astext` and
- :meth:`.JSONElement.cast` for setting up type behavior.
+    Index operations return an expression object whose type defaults to
+    :class:`.JSON`, so that further JSON-oriented instructions
+    may be called upon the result type.
The :class:`.JSON` type, when used with the SQLAlchemy ORM, does not
detect in-place mutations to the structure. In order to detect these, the
.. versionadded:: 0.9
+ .. seealso::
+
+ :class:`.JSONB`
+
"""
__visit_name__ = 'JSON'
- def __init__(self, none_as_null=False):
+ hashable = False
+ astext_type = sqltypes.Text()
+
+ def __init__(self, none_as_null=False, astext_type=None):
"""Construct a :class:`.JSON` type.
:param none_as_null: if True, persist the value ``None`` as a
.. versionchanged:: 0.9.8 - Added ``none_as_null``, and :func:`.null`
is now supported in order to persist a NULL value.
+ :param astext_type: the type to use for the
+ :attr:`.JSON.Comparator.astext`
+ accessor on indexed attributes. Defaults to :class:`.types.Text`.
+
+ .. versionadded:: 1.1.0
+
"""
self.none_as_null = none_as_null
+ if astext_type is not None:
+ self.astext_type = astext_type
- class comparator_factory(sqltypes.Concatenable.Comparator):
+ class Comparator(
+ sqltypes.Indexable.Comparator, sqltypes.Concatenable.Comparator):
"""Define comparison operations for :class:`.JSON`."""
- def __getitem__(self, other):
- """Get the value at a given key."""
+ @property
+ def astext(self):
+ """On an indexed expression, use the "astext" (e.g. "->>")
+ conversion when rendered in SQL.
+
+ E.g.::
+
+ select([data_table.c.data['some key'].astext])
+
+ .. seealso::
+
+ :meth:`.ColumnElement.cast`
+
+ """
+ against = self.expr.operator
+ if against is PATHIDX:
+ against = ASTEXT_PATHIDX
+ else:
+ against = ASTEXT
+
+ return self.expr.left.operate(
+ against, self.expr.right, result_type=self.type.astext_type)
+
+ def _setup_getitem(self, index):
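+            # a non-string index, e.g. a tuple of keys, is rendered as a
+            # Postgresql path using the "#>" operator; a plain string key
+            # uses "->".  in both cases the result type remains JSON.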
+ if not isinstance(index, util.string_types):
+ assert isinstance(index, collections.Sequence)
+ tokens = [util.text_type(elem) for elem in index]
+ index = "{%s}" % (", ".join(tokens))
+ operator = PATHIDX
+ else:
+ operator = INDEX
- return JSONElement(self.expr, other)
+ return operator, index, self.type
- def _adapt_expression(self, op, other_comparator):
- if isinstance(op, custom_op):
- if op.opstring == '->':
- return op, sqltypes.Text
- return sqltypes.Concatenable.Comparator.\
- _adapt_expression(self, op, other_comparator)
+ comparator_factory = Comparator
def bind_processor(self, dialect):
json_serializer = dialect._json_serializer or json.dumps
data = {"key1": "value1", "key2": "value2"}
)
- :class:`.JSONB` provides several operations:
-
- * Index operations::
-
- data_table.c.data['some key']
-
- * Index operations returning text (required for text comparison)::
+ The :class:`.JSONB` type includes all operations provided by
+ :class:`.JSON`, including the same behaviors for indexing operations.
+    It also adds operators specific to JSONB, including
+ :meth:`.JSONB.Comparator.has_key`, :meth:`.JSONB.Comparator.has_all`,
+ :meth:`.JSONB.Comparator.has_any`, :meth:`.JSONB.Comparator.contains`,
+ and :meth:`.JSONB.Comparator.contained_by`.
+
+ Like the :class:`.JSON` type, the :class:`.JSONB` type does not detect
+ in-place changes when used with the ORM, unless the
+ :mod:`sqlalchemy.ext.mutable` extension is used.
+
+ Custom serializers and deserializers
+ are shared with the :class:`.JSON` class, using the ``json_serializer``
+ and ``json_deserializer`` keyword arguments. These must be specified
+ at the dialect level using :func:`.create_engine`. When using
+ psycopg2, the serializers are associated with the jsonb type using
+ ``psycopg2.extras.register_default_jsonb`` on a per-connection basis,
+ in the same way that ``psycopg2.extras.register_default_json`` is used
+ to register these handlers with the json type.
- data_table.c.data['some key'].astext == 'some value'
-
- * Index operations with a built-in CAST call::
-
- data_table.c.data['some key'].cast(Integer) == 5
-
- * Path index operations::
-
- data_table.c.data[('key_1', 'key_2', ..., 'key_n')]
-
- * Path index operations returning text (required for text comparison)::
-
- data_table.c.data[('key_1', 'key_2', ..., 'key_n')].astext == \\
- 'some value'
-
- Index operations return an instance of :class:`.JSONElement`, which
- represents an expression such as ``column -> index``. This element then
- defines methods such as :attr:`.JSONElement.astext` and
- :meth:`.JSONElement.cast` for setting up type behavior.
-
- The :class:`.JSON` type, when used with the SQLAlchemy ORM, does not
- detect in-place mutations to the structure. In order to detect these, the
- :mod:`sqlalchemy.ext.mutable` extension must be used. This extension will
- allow "in-place" changes to the datastructure to produce events which
- will be detected by the unit of work. See the example at :class:`.HSTORE`
- for a simple example involving a dictionary.
-
- Custom serializers and deserializers are specified at the dialect level,
- that is using :func:`.create_engine`. The reason for this is that when
- using psycopg2, the DBAPI only allows serializers at the per-cursor
- or per-connection level. E.g.::
+ .. versionadded:: 0.9.7
- engine = create_engine("postgresql://scott:tiger@localhost/test",
- json_serializer=my_serialize_fn,
- json_deserializer=my_deserialize_fn
- )
+ .. seealso::
- When using the psycopg2 dialect, the json_deserializer is registered
- against the database using ``psycopg2.extras.register_default_json``.
-
- .. versionadded:: 0.9.7
+ :class:`.JSON`
"""
__visit_name__ = 'JSONB'
- hashable = False
- class comparator_factory(sqltypes.Concatenable.Comparator):
+ class Comparator(JSON.Comparator):
"""Define comparison operations for :class:`.JSON`."""
- def __getitem__(self, other):
- """Get the value at a given key."""
-
- return JSONElement(self.expr, other)
-
- def _adapt_expression(self, op, other_comparator):
- # How does one do equality?? jsonb also has "=" eg.
- # '[1,2,3]'::jsonb = '[1,2,3]'::jsonb
- if isinstance(op, custom_op):
- if op.opstring in ['?', '?&', '?|', '@>', '<@']:
- return op, sqltypes.Boolean
- if op.opstring == '->':
- return op, sqltypes.Text
- return sqltypes.Concatenable.Comparator.\
- _adapt_expression(self, op, other_comparator)
-
def has_key(self, other):
"""Boolean expression. Test for presence of a key. Note that the
key may be a SQLA expression.
"""
- return self.expr.op('?')(other)
+ return self.operate(HAS_KEY, other, result_type=sqltypes.Boolean)
def has_all(self, other):
"""Boolean expression. Test for presence of all keys in jsonb
"""
- return self.expr.op('?&')(other)
+ return self.operate(HAS_ALL, other, result_type=sqltypes.Boolean)
def has_any(self, other):
"""Boolean expression. Test for presence of any key in jsonb
"""
- return self.expr.op('?|')(other)
+ return self.operate(HAS_ANY, other, result_type=sqltypes.Boolean)
def contains(self, other, **kwargs):
- """Boolean expression. Test if keys (or array) are a superset of/contained
- the keys of the argument jsonb expression.
+ """Boolean expression. Test if keys (or array) are a superset
+        of / contain the keys of the argument jsonb expression.
"""
- return self.expr.op('@>')(other)
+ return self.operate(CONTAINS, other, result_type=sqltypes.Boolean)
def contained_by(self, other):
"""Boolean expression. Test if keys are a proper subset of the
keys of the argument jsonb expression.
"""
- return self.expr.op('<@')(other)
+ return self.operate(
+ CONTAINED_BY, other, result_type=sqltypes.Boolean)
+
+ comparator_factory = Comparator
ischema_names['jsonb'] = JSONB
from ... import types as sqltypes
from .base import PGDialect, PGCompiler, \
PGIdentifierPreparer, PGExecutionContext, \
- ENUM, ARRAY, _DECIMAL_TYPES, _FLOAT_TYPES,\
+ ENUM, _DECIMAL_TYPES, _FLOAT_TYPES,\
_INT_TYPES, UUID
from .hstore import HSTORE
from .json import JSON, JSONB
context.runid = _new_runid()
- filter_fns = [ent.filter_fn for ent in query._entities]
- filtered = id in filter_fns
+ filtered = query._has_mapper_entities
single_entity = len(query._entities) == 1 and \
query._entities[0].supports_single_entity
filter_fn = id
else:
def filter_fn(row):
- return tuple(fn(x) for x, fn in zip(row, filter_fns))
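+            # produce a hashable row: unhashable values such as
+            # entities and ARRAY/JSON values are keyed on id();
+            # hashable column values are used directly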
+ return tuple(
+ id(item)
+ if ent.use_id_for_hash
+ else item
+ for ent, item in zip(query._entities, row)
+ )
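In other words, the dedup key for each row now substitutes ``id()`` only in
the positions that hold mapped entities (or otherwise unhashable values),
so a row mixing entities with unhashable column values still yields a
hashable key. A standalone sketch of the idea, using illustrative stand-ins
for the query entities::

    class _Ent(object):
        # stand-in for a _QueryEntity carrying its use_id_for_hash flag
        def __init__(self, use_id_for_hash):
            self.use_id_for_hash = use_id_for_hash

    entities = [_Ent(True), _Ent(False)]  # a mapped entity + a plain column
    row = (object(), 5)                   # entity instance, hashable scalar

    key = tuple(
        id(item) if ent.use_id_for_hash else item
        for ent, item in zip(entities, row)
    )
    hash(key)  # does not raise, so the row can be deduplicated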
try:
(process, labels) = \
_orm_only_adapt = True
_orm_only_from_obj_alias = True
_current_path = _path_registry
+ _has_mapper_entities = False
def __init__(self, entities, session=None):
self.session = session
entity_wrapper = _QueryEntity
self._entities = []
self._primary_entity = None
+ self._has_mapper_entities = False
for ent in util.to_list(entities):
entity_wrapper(self, ent)
if not query._primary_entity:
query._primary_entity = self
query._entities.append(self)
-
+ query._has_mapper_entities = True
self.entities = [entity]
self.expr = entity
supports_single_entity = True
+ use_id_for_hash = True
+
def setup_entity(self, ext_info, aliased_adapter):
self.mapper = ext_info.mapper
self.aliased_adapter = aliased_adapter
self.mapper, sql_util.ColumnAdapter(
from_obj, self.mapper._equivalent_columns))
- filter_fn = id
-
@property
def type(self):
return self.mapper.class_
class _BundleEntity(_QueryEntity):
+ use_id_for_hash = False
+
def __init__(self, query, bundle, setup_entities=True):
query._entities.append(self)
self.bundle = self.expr = bundle
self.entities = ()
- self.filter_fn = lambda item: item
-
self.supports_single_entity = self.bundle.single_entity
@property
search_entities = True
self.type = type_ = column.type
- if type_.hashable:
- self.filter_fn = lambda item: item
- else:
- counter = util.counter()
- self.filter_fn = lambda item: counter()
+ self.use_id_for_hash = not type_.hashable
# If the Column is unnamed, give it a
# label() so that mutable column expressions
from .elements import BindParameter, True_, False_, BinaryExpression, \
Null, _const_expr, _clause_element_as_expr, \
ClauseList, ColumnElement, TextClause, UnaryExpression, \
- collate, _is_literal, _literal_as_text, ClauseElement, and_, or_
+ collate, _is_literal, _literal_as_text, ClauseElement, and_, or_, \
+ Slice
from .selectable import SelectBase, Alias, Selectable, ScalarSelect
negate=negate_op)
+def _getitem_impl(expr, op, other, **kw):
+ if isinstance(expr.type, type_api.INDEXABLE):
+ if isinstance(other, slice):
+ if expr.type.zero_indexes:
+ other = slice(
+ other.start + 1,
+ other.stop + 1,
+ other.step
+ )
+ other = Slice(
+ _check_literal(expr, op, other.start),
+ _check_literal(expr, op, other.stop),
+ _check_literal(expr, op, other.step),
+ )
+ else:
+ if expr.type.zero_indexes:
+ other += 1
+
+ return _binary_operate(expr, op, other, **kw)
+ else:
+ _unsupported_impl(expr, op, other, **kw)
+
+
def _unsupported_impl(expr, op, *arg, **kw):
raise NotImplementedError("Operator '%s' is not supported on "
"this expression" % op.__name__)
"between_op": (_between_impl, ),
"notbetween_op": (_between_impl, ),
"neg": (_neg_impl,),
- "getitem": (_unsupported_impl,),
+ "getitem": (_getitem_impl,),
"lshift": (_unsupported_impl,),
"rshift": (_unsupported_impl,),
}
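Together with the ``Slice`` construct added below, ``getitem`` is now a
supported operation for :class:`.Indexable` types: integer indexes are
shifted by one when ``zero_indexes`` is set, and Python slices are coerced
into a ``Slice`` element for dialects that can render them. A short sketch
against the Postgresql dialect::

    from sqlalchemy import Integer, select
    from sqlalchemy.sql import column
    from sqlalchemy.dialects import postgresql

    col = column('x', postgresql.ARRAY(Integer, zero_indexes=True))

    # Python index 2 is sent to the database as 3:
    # "SELECT x[%(x_1)s] ..." with x_1=3
    select([col[2]])

    # a slice becomes a Slice construct, rendered as an array range:
    # "SELECT x[%(x_1)s:%(x_2)s] ..." with x_1=3, x_2=6
    select([col[2:5]])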
return super(BinaryExpression, self)._negate()
+class Slice(ColumnElement):
+ """Represent SQL for a Python array-slice object.
+
+ This is not a specific SQL construct at this level, but
+ may be interpreted by specific dialects, e.g. Postgresql.
+
+ """
+ __visit_name__ = 'slice'
+
+ def __init__(self, start, stop, step):
+ self.start = start
+ self.stop = stop
+ self.step = step
+ self.type = type_api.NULLTYPE
+
+
+class IndexExpression(BinaryExpression):
+ """Represent the class of expressions that are like an "index" operation.
+ """
+ pass
+
+
class Grouping(ColumnElement):
"""Represent a grouping within a column expression"""
"""
__name__ = 'custom_op'
- def __init__(self, opstring, precedence=0, is_comparison=False):
+ def __init__(
+ self, opstring, precedence=0, is_comparison=False,
+ natural_self_precedent=False):
self.opstring = opstring
self.precedence = precedence
self.is_comparison = is_comparison
+ self.natural_self_precedent = natural_self_precedent
def __eq__(self, other):
return isinstance(other, custom_op) and \
return op in (asc_op, desc_op,
nullsfirst_op, nullslast_op)
+
+def is_natural_self_precedent(op):
+ return op in _natural_self_precedent or \
+ isinstance(op, custom_op) and op.natural_self_precedent
+
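The ``natural_self_precedent`` flag gives a custom operator the same
self-grouping exemption that ``getitem`` has, so chained applications of
the operator don't accumulate parentheses. A hedged sketch using a
hypothetical ``->`` operator::

    from sqlalchemy.sql import column
    from sqlalchemy.sql.operators import custom_op

    arrow = custom_op('->', natural_self_precedent=True)

    # renders "x -> <param> -> <param>" rather than
    # "(x -> <param>) -> <param>"
    expr = arrow(arrow(column('x'), 5), 6)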
_associative = _commutative.union([concat_op, and_, or_])
_natural_self_precedent = _associative.union([getitem])
def is_precedent(operator, against):
- if operator is against and operator in _natural_self_precedent:
+ if operator is against and is_natural_self_precedent(operator):
return False
else:
return (_PRECEDENCE.get(operator,
"""
+import collections
import datetime as dt
import codecs
)):
return operators.concat_op, self.expr.type
else:
- return op, self.expr.type
+ return super(Concatenable.Comparator, self)._adapt_expression(
+ op, other_comparator)
+
+ comparator_factory = Comparator
+
+
+class Indexable(object):
+ """A mixin that marks a type as supporting indexing operations,
+ such as array or JSON structures.
+
+ .. versionadded:: 1.1.0
+
+ """
+
+ zero_indexes = False
+ """if True, Python zero-based indexes should be interpreted as one-based
+ on the SQL expression side."""
+
+ class Comparator(TypeEngine.Comparator):
+
+ def _setup_getitem(self, index):
+ raise NotImplementedError()
+
+ def __getitem__(self, index):
+ operator, adjusted_right_expr, result_type = \
+ self._setup_getitem(index)
+ return self.operate(
+ operator,
+ adjusted_right_expr,
+ result_type=result_type
+ )
comparator_factory = Comparator
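Concrete types participate by subclassing ``Indexable.Comparator`` and
implementing ``_setup_getitem()``, which returns the operator to use, the
(possibly adjusted) right-hand expression, and the type of the resulting
expression. A minimal sketch of a hypothetical integer-vector type;
rendering of the ``getitem`` operator itself remains dialect-specific, as
the ``IndexableTest`` later in this change illustrates with a custom
compiler::

    from sqlalchemy.sql import operators
    from sqlalchemy.types import Indexable, Integer, TypeEngine


    class IntVector(Indexable, TypeEngine):
        """Hypothetical indexable type holding integers."""

        __visit_name__ = 'INTEGER'  # placeholder DDL name for the sketch

        class Comparator(Indexable.Comparator):
            def _setup_getitem(self, index):
                # an integer index yields an Integer element
                return operators.getitem, index, Integer()

        comparator_factory = Comparator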
_isnull = True
+ hashable = False
+
def literal_processor(self, dialect):
def process(value):
return "NULL"
type_api.INTEGERTYPE = INTEGERTYPE
type_api.NULLTYPE = NULLTYPE
type_api.MATCHTYPE = MATCHTYPE
+type_api.INDEXABLE = Indexable
type_api._type_map = _type_map
TypeEngine.Comparator.BOOLEANTYPE = BOOLEANTYPE
NULLTYPE = None
STRINGTYPE = None
MATCHTYPE = None
+INDEXABLE = None
class TypeEngine(Visitable):
'SMALLINT', 'INTEGER', 'DATE', 'TIME', 'String', 'Integer',
'SmallInteger', 'BigInteger', 'Numeric', 'Float', 'DateTime',
'Date', 'Time', 'LargeBinary', 'Binary', 'Boolean', 'Unicode',
- 'Concatenable', 'UnicodeText', 'PickleType', 'Interval', 'Enum']
+ 'Concatenable', 'UnicodeText', 'PickleType', 'Interval', 'Enum',
+ 'Indexable']
from .sql.type_api import (
adapt_type,
Enum,
FLOAT,
Float,
+ Indexable,
INT,
INTEGER,
Integer,
"""
names = get_cls_kwargs(cls)
- kw.update((k, obj.__dict__[k]) for k in names if k in obj.__dict__)
+ kw.update(
+ (k, obj.__dict__[k]) for k in names.difference(kw)
+ if k in obj.__dict__)
return cls(*args, **kw)
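The practical effect is that :meth:`.TypeEngine.adapt` can now pass keyword
overrides through ``constructor_copy()`` without the original object's
state clobbering them; the new ``test_adapt_constructor_copy_override_kw``
test below exercises exactly this. For instance::

    from sqlalchemy import String, Text

    t1 = String(length=50, convert_unicode=False)

    # the explicit kw wins; length is still copied from t1
    t2 = t1.adapt(Text, convert_unicode=True)
    assert t2.length == 50
    assert t2.convert_unicode is True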
Text
from sqlalchemy.dialects.postgresql import ExcludeConstraint, array
from sqlalchemy import exc, schema
-from sqlalchemy.dialects.postgresql import base as postgresql
+from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import TSRANGE
from sqlalchemy.orm import mapper, aliased, Session
from sqlalchemy.sql import table, column, operators
def test_format(self):
seq = Sequence('my_seq_no_schema')
- dialect = postgresql.PGDialect()
+ dialect = postgresql.dialect()
assert dialect.identifier_preparer.format_sequence(seq) \
== 'my_seq_no_schema'
seq = Sequence('my_seq', schema='some_schema')
import datetime
from sqlalchemy import Table, MetaData, Column, Integer, Enum, Float, select, \
func, DateTime, Numeric, exc, String, cast, REAL, TypeDecorator, Unicode, \
- Text, null, text
+ Text, null, text, column
from sqlalchemy.sql import operators
from sqlalchemy import types
import sqlalchemy as sa
-from sqlalchemy.dialects.postgresql import base as postgresql
+from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import HSTORE, hstore, array, \
INT4RANGE, INT8RANGE, NUMRANGE, DATERANGE, TSRANGE, TSTZRANGE, \
JSON, JSONB
from sqlalchemy.testing.util import round_decimal
from sqlalchemy import inspect
from sqlalchemy import event
+from sqlalchemy.ext.declarative import declarative_base
+from sqlalchemy.orm import Session
tztable = notztable = metadata = table = None
eq_(t2.c.c6.type.timezone, True)
-class ArrayTest(fixtures.TablesTest, AssertsExecutionResults):
+class ArrayTest(AssertsCompiledSQL, fixtures.TestBase):
+ __dialect__ = 'postgresql'
+
+ def test_array_int_index(self):
+ col = column('x', postgresql.ARRAY(Integer))
+ self.assert_compile(
+ select([col[3]]),
+ "SELECT x[%(x_1)s] AS anon_1",
+ checkparams={'x_1': 3}
+ )
+
+ def test_array_any(self):
+ col = column('x', postgresql.ARRAY(Integer))
+ self.assert_compile(
+ select([col.any(7, operator=operators.lt)]),
+ "SELECT %(param_1)s < ANY (x) AS anon_1",
+ checkparams={'param_1': 7}
+ )
+
+ def test_array_all(self):
+ col = column('x', postgresql.ARRAY(Integer))
+ self.assert_compile(
+ select([col.all(7, operator=operators.lt)]),
+ "SELECT %(param_1)s < ALL (x) AS anon_1",
+ checkparams={'param_1': 7}
+ )
+
+ def test_array_contains(self):
+ col = column('x', postgresql.ARRAY(Integer))
+ self.assert_compile(
+ select([col.contains(array([4, 5, 6]))]),
+ "SELECT x @> ARRAY[%(param_1)s, %(param_2)s, %(param_3)s] "
+ "AS anon_1",
+ checkparams={'param_1': 4, 'param_3': 6, 'param_2': 5}
+ )
+
+ def test_array_contained_by(self):
+ col = column('x', postgresql.ARRAY(Integer))
+ self.assert_compile(
+ select([col.contained_by(array([4, 5, 6]))]),
+ "SELECT x <@ ARRAY[%(param_1)s, %(param_2)s, %(param_3)s] "
+ "AS anon_1",
+ checkparams={'param_1': 4, 'param_3': 6, 'param_2': 5}
+ )
+
+ def test_array_overlap(self):
+ col = column('x', postgresql.ARRAY(Integer))
+ self.assert_compile(
+ select([col.overlap(array([4, 5, 6]))]),
+ "SELECT x && ARRAY[%(param_1)s, %(param_2)s, %(param_3)s] "
+ "AS anon_1",
+ checkparams={'param_1': 4, 'param_3': 6, 'param_2': 5}
+ )
+
+ def test_array_slice_index(self):
+ col = column('x', postgresql.ARRAY(Integer))
+ self.assert_compile(
+ select([col[5:10]]),
+ "SELECT x[%(x_1)s:%(x_2)s] AS anon_1",
+ checkparams={'x_2': 10, 'x_1': 5}
+ )
+
+ def test_array_dim_index(self):
+ col = column('x', postgresql.ARRAY(Integer, dimensions=2))
+ self.assert_compile(
+ select([col[3][5]]),
+ "SELECT x[%(x_1)s][%(param_1)s] AS anon_1",
+ checkparams={'x_1': 3, 'param_1': 5}
+ )
+
+ def test_array_concat(self):
+ col = column('x', postgresql.ARRAY(Integer))
+ literal = array([4, 5])
+
+ self.assert_compile(
+ select([col + literal]),
+ "SELECT x || ARRAY[%(param_1)s, %(param_2)s] AS anon_1",
+ checkparams={'param_1': 4, 'param_2': 5}
+ )
+
+ def test_array_index_map_dimensions(self):
+ col = column('x', postgresql.ARRAY(Integer, dimensions=3))
+ is_(
+ col[5].type._type_affinity, postgresql.ARRAY
+ )
+ eq_(
+ col[5].type.dimensions, 2
+ )
+ is_(
+ col[5][6].type._type_affinity, postgresql.ARRAY
+ )
+ eq_(
+ col[5][6].type.dimensions, 1
+ )
+ is_(
+ col[5][6][7].type._type_affinity, Integer
+ )
+
+ def test_array_getitem_single_type(self):
+ m = MetaData()
+ arrtable = Table(
+ 'arrtable', m,
+ Column('intarr', postgresql.ARRAY(Integer)),
+ Column('strarr', postgresql.ARRAY(String)),
+ )
+ is_(arrtable.c.intarr[1].type._type_affinity, Integer)
+ is_(arrtable.c.strarr[1].type._type_affinity, String)
+
+ def test_array_getitem_slice_type(self):
+ m = MetaData()
+ arrtable = Table(
+ 'arrtable', m,
+ Column('intarr', postgresql.ARRAY(Integer)),
+ Column('strarr', postgresql.ARRAY(String)),
+ )
+ is_(arrtable.c.intarr[1:3].type._type_affinity, postgresql.ARRAY)
+ is_(arrtable.c.strarr[1:3].type._type_affinity, postgresql.ARRAY)
+
+
+class ArrayRoundTripTest(fixtures.TablesTest, AssertsExecutionResults):
__only_on__ = 'postgresql'
__backend__ = True
), True
)
- def test_array_getitem_single_type(self):
- arrtable = self.tables.arrtable
- is_(arrtable.c.intarr[1].type._type_affinity, Integer)
- is_(arrtable.c.strarr[1].type._type_affinity, String)
-
- def test_array_getitem_slice_type(self):
- arrtable = self.tables.arrtable
- is_(arrtable.c.intarr[1:3].type._type_affinity, postgresql.ARRAY)
- is_(arrtable.c.strarr[1:3].type._type_affinity, postgresql.ARRAY)
-
def test_array_getitem_single_exec(self):
arrtable = self.tables.arrtable
self._fixture_456(arrtable)
lambda elem: (
x for x in elem))
+ def test_multi_dim_roundtrip(self):
+ arrtable = self.tables.arrtable
+ testing.db.execute(arrtable.insert(), dimarr=[[1, 2, 3], [4, 5, 6]])
+ eq_(
+ testing.db.scalar(select([arrtable.c.dimarr])),
+ [[-1, 0, 1], [2, 3, 4]]
+ )
+
def test_array_contained_by_exec(self):
arrtable = self.tables.arrtable
with testing.db.connect() as conn:
set([('1', '2', '3'), ('4', '5', '6'), (('4', '5'), ('6', '7'))])
)
- def test_dimension(self):
- arrtable = self.tables.arrtable
- testing.db.execute(arrtable.insert(), dimarr=[[1, 2, 3], [4, 5, 6]])
+
+class HashableFlagORMTest(fixtures.TestBase):
+ """test the various 'collection' types that they flip the 'hashable' flag
+ appropriately. [ticket:3499]"""
+
+ __only_on__ = 'postgresql'
+
+ def _test(self, type_, data):
+ Base = declarative_base(metadata=self.metadata)
+
+ class A(Base):
+ __tablename__ = 'a1'
+ id = Column(Integer, primary_key=True)
+ data = Column(type_)
+ Base.metadata.create_all(testing.db)
+ s = Session(testing.db)
+ s.add_all([
+ A(data=elem) for elem in data
+ ])
+ s.commit()
+
eq_(
- testing.db.scalar(select([arrtable.c.dimarr])),
- [[-1, 0, 1], [2, 3, 4]]
+ [(obj.A.id, obj.data) for obj in
+ s.query(A, A.data).order_by(A.id)],
+ list(enumerate(data, 1))
+ )
+
+ @testing.provide_metadata
+ def test_array(self):
+ self._test(
+ postgresql.ARRAY(Text()),
+ [['a', 'b', 'c'], ['d', 'e', 'f']]
+ )
+
+ @testing.requires.hstore
+ @testing.provide_metadata
+ def test_hstore(self):
+ self._test(
+ postgresql.HSTORE(),
+ [
+ {'a': '1', 'b': '2', 'c': '3'},
+ {'d': '4', 'e': '5', 'f': '6'}
+ ]
+ )
+
+ @testing.provide_metadata
+ def test_json(self):
+ self._test(
+ postgresql.JSON(),
+ [
+ {'a': '1', 'b': '2', 'c': '3'},
+ {'d': '4', 'e': {'e1': '5', 'e2': '6'},
+ 'f': {'f1': [9, 10, 11]}}
+ ]
+ )
+
+ @testing.requires.postgresql_jsonb
+ @testing.provide_metadata
+ def test_jsonb(self):
+ self._test(
+ postgresql.JSONB(),
+ [
+ {'a': '1', 'b': '2', 'c': '3'},
+ {'d': '4', 'e': {'e1': '5', 'e2': '6'},
+ 'f': {'f1': [9, 10, 11]}}
+ ]
)
{"key1": "value1", "key2": "value2"}
)
+ def test_ret_type_text(self):
+ col = column('x', HSTORE())
+
+ is_(col['foo'].type.__class__, Text)
+
+ def test_ret_type_custom(self):
+ class MyType(types.UserDefinedType):
+ pass
+
+ col = column('x', HSTORE(text_type=MyType))
+
+ is_(col['foo'].type.__class__, MyType)
+
def test_where_has_key(self):
self._test_where(
# hide from 2to3
"(test_table.test_column #> %(test_column_1)s) IS NULL"
)
+ def test_path_typing(self):
+ col = column('x', JSON())
+ is_(
+ col['q'].type._type_affinity, JSON
+ )
+ is_(
+ col[('q', )].type._type_affinity, JSON
+ )
+ is_(
+ col['q']['p'].type._type_affinity, JSON
+ )
+ is_(
+ col[('q', 'p')].type._type_affinity, JSON
+ )
+
+ def test_custom_astext_type(self):
+ class MyType(types.UserDefinedType):
+ pass
+
+ col = column('x', JSON(astext_type=MyType))
+
+ is_(
+ col['q'].astext.type.__class__, MyType
+ )
+
+ is_(
+ col[('q', 'p')].astext.type.__class__, MyType
+ )
+
+ is_(
+ col['q']['p'].astext.type.__class__, MyType
+ )
+
def test_where_getitem_as_text(self):
self._test_where(
self.jsoncol['bar'].astext == None,
"(test_table.test_column ->> %(test_column_1)s) IS NULL"
)
- def test_where_getitem_as_cast(self):
+ def test_where_getitem_astext_cast(self):
self._test_where(
- self.jsoncol['bar'].cast(Integer) == 5,
+ self.jsoncol['bar'].astext.cast(Integer) == 5,
"CAST(test_table.test_column ->> %(test_column_1)s AS INTEGER) "
"= %(param_1)s"
)
+ def test_where_getitem_json_cast(self):
+ self._test_where(
+ self.jsoncol['bar'].cast(Integer) == 5,
+ "CAST(test_table.test_column -> %(test_column_1)s AS INTEGER) "
+ "= %(param_1)s"
+ )
+
def test_where_path_as_text(self):
self._test_where(
self.jsoncol[("foo", 1)].astext == None,
{'name': 'r3', 'data': {"k1": "r3v1", "k2": "r3v2"}},
{'name': 'r4', 'data': {"k1": "r4v1", "k2": "r4v2"}},
{'name': 'r5', 'data': {"k1": "r5v1", "k2": "r5v2", "k3": 5}},
+ {'name': 'r6', 'data': {"k1": {"r6v1": {'subr': [1, 2, 3]}}}},
)
def _assert_data(self, compare, column='data'):
engine = testing.db
self._fixture_data(engine)
data_table = self.tables.data_table
+
result = engine.execute(
- select([data_table.c.data]).where(
- data_table.c.data[('k1',)].astext == 'r3v1'
+ select([data_table.c.name]).where(
+ data_table.c.data[('k1', 'r6v1', 'subr')].astext == "[1, 2, 3]"
)
- ).first()
- eq_(result, ({'k1': 'r3v1', 'k2': 'r3v2'},))
+ )
+ eq_(result.scalar(), 'r6')
+
+ def test_multi_index_query(self):
+ engine = testing.db
+ self._fixture_data(engine)
+ data_table = self.tables.data_table
+
+ result = engine.execute(
+ select([data_table.c.name]).where(
+ data_table.c.data['k1']['r6v1']['subr'].astext == "[1, 2, 3]"
+ )
+ )
+ eq_(result.scalar(), 'r6')
def test_query_returned_as_text(self):
engine = testing.db
self._fixture_data(engine)
data_table = self.tables.data_table
result = engine.execute(
- select([data_table.c.data['k3'].cast(Integer)]).where(
+ select([data_table.c.data['k3'].astext.cast(Integer)]).where(
data_table.c.name == 'r5')
).first()
assert isinstance(result[0], int)
sess.query(User). \
filter(User.addresses.any(email_address='fred@fred.com')).all()
- # test that any() doesn't overcorrelate
- assert [User(id=7), User(id=8)] == \
- sess.query(User).join("addresses"). \
- filter(
- ~User.addresses.any(
- Address.email_address == 'fred@fred.com')).all()
-
# test that the contents are not adapted by the aliased join
assert [User(id=7), User(id=8)] == \
sess.query(User).join("addresses", aliased=True). \
sess.query(User).outerjoin("addresses", aliased=True). \
filter(~User.addresses.any()).all()
+ def test_any_doesnt_overcorrelate(self):
+ User, Address = self.classes.User, self.classes.Address
+
+ sess = create_session()
+
+ # test that any() doesn't overcorrelate
+ assert [User(id=7), User(id=8)] == \
+ sess.query(User).join("addresses"). \
+ filter(
+ ~User.addresses.any(
+ Address.email_address == 'fred@fred.com')).all()
+
def test_has(self):
Dingaling, User, Address = (
self.classes.Dingaling, self.classes.User, self.classes.Address)
)
+class HasMapperEntitiesTest(QueryTest):
+ def test_entity(self):
+ User = self.classes.User
+ s = Session()
+
+ q = s.query(User)
+
+ assert q._has_mapper_entities
+
+ def test_cols(self):
+ User = self.classes.User
+ s = Session()
+
+ q = s.query(User.id)
+
+ assert not q._has_mapper_entities
+
+ def test_cols_set_entities(self):
+ User = self.classes.User
+ s = Session()
+
+ q = s.query(User.id)
+
+ q._set_entities(User)
+ assert q._has_mapper_entities
+
+ def test_entity_set_entities(self):
+ User = self.classes.User
+ s = Session()
+
+ q = s.query(User)
+
+ q._set_entities(User.id)
+ assert not q._has_mapper_entities
+
+
class SetOpsTest(QueryTest, AssertsCompiledSQL):
__dialect__ = 'default'
eq_(
s.query(Thing.value).scalar().text, "foo"
)
+
from sqlalchemy.engine import default
from sqlalchemy.sql.elements import _literal_as_text
from sqlalchemy.schema import Column, Table, MetaData
+from sqlalchemy.sql import compiler
from sqlalchemy.types import TypeEngine, TypeDecorator, UserDefinedType, \
- Boolean, NullType, MatchType
+ Boolean, NullType, MatchType, Indexable
from sqlalchemy.dialects import mysql, firebird, postgresql, oracle, \
sqlite, mssql
from sqlalchemy import util
import collections
from sqlalchemy import text, literal_column
from sqlalchemy import and_, not_, between, or_
-from sqlalchemy.sql import true, false, null
class LoopOperate(operators.ColumnOperators):
)
+class IndexableTest(fixtures.TestBase, testing.AssertsCompiledSQL):
+ def setUp(self):
+ class MyTypeCompiler(compiler.GenericTypeCompiler):
+ def visit_mytype(self, type, **kw):
+ return "MYTYPE"
+
+ def visit_myothertype(self, type, **kw):
+ return "MYOTHERTYPE"
+
+ class MyCompiler(compiler.SQLCompiler):
+ def visit_slice(self, element, **kw):
+ return "%s:%s" % (
+ self.process(element.start, **kw),
+ self.process(element.stop, **kw),
+ )
+
+ def visit_getitem_binary(self, binary, operator, **kw):
+ return "%s[%s]" % (
+ self.process(binary.left, **kw),
+ self.process(binary.right, **kw)
+ )
+
+ class MyDialect(default.DefaultDialect):
+ statement_compiler = MyCompiler
+ type_compiler = MyTypeCompiler
+
+ class MyType(Indexable, TypeEngine):
+ __visit_name__ = 'mytype'
+
+ def __init__(self, zero_indexes=False, dimensions=1):
+ if zero_indexes:
+ self.zero_indexes = zero_indexes
+ self.dimensions = dimensions
+
+ class Comparator(Indexable.Comparator):
+ def _setup_getitem(self, index):
+ if isinstance(index, slice):
+ return_type = self.type
+ elif self.type.dimensions is None or \
+ self.type.dimensions == 1:
+ return_type = Integer()
+ else:
+ adapt_kw = {'dimensions': self.type.dimensions - 1}
+ # this also tests the behavior of adapt(), namely that
+ # we can pass kw that override constructor kws; this
+ # required a small change to util.constructor_copy().
+ return_type = self.type.adapt(
+ self.type.__class__, **adapt_kw)
+
+ return operators.getitem, index, return_type
+ comparator_factory = Comparator
+
+ self.MyType = MyType
+ self.__dialect__ = MyDialect()
+
+ def test_setup_getitem_w_dims(self):
+ """test the behavior of the _setup_getitem() method given a simple
+ 'dimensions' scheme - this is identical to postgresql.ARRAY."""
+
+ col = Column('x', self.MyType(dimensions=3))
+
+ is_(
+ col[5].type._type_affinity, self.MyType
+ )
+ eq_(
+ col[5].type.dimensions, 2
+ )
+ is_(
+ col[5][6].type._type_affinity, self.MyType
+ )
+ eq_(
+ col[5][6].type.dimensions, 1
+ )
+ is_(
+ col[5][6][7].type._type_affinity, Integer
+ )
+
+ def test_getindex_literal(self):
+
+ col = Column('x', self.MyType())
+
+ self.assert_compile(
+ col[5],
+ "x[:x_1]",
+ checkparams={'x_1': 5}
+ )
+
+ def test_getindex_sqlexpr(self):
+
+ col = Column('x', self.MyType())
+ col2 = Column('y', Integer())
+
+ self.assert_compile(
+ col[col2],
+ "x[y]",
+ checkparams={}
+ )
+
+ self.assert_compile(
+ col[col2 + 8],
+ "x[(y + :y_1)]",
+ checkparams={'y_1': 8}
+ )
+
+ def test_getslice_literal(self):
+
+ col = Column('x', self.MyType())
+
+ self.assert_compile(
+ col[5:6],
+ "x[:x_1::x_2]",
+ checkparams={'x_1': 5, 'x_2': 6}
+ )
+
+ def test_getslice_sqlexpr(self):
+
+ col = Column('x', self.MyType())
+ col2 = Column('y', Integer())
+
+ self.assert_compile(
+ col[col2:col2 + 5],
+ "x[y:y + :y_1]",
+ checkparams={'y_1': 5}
+ )
+
+ def test_getindex_literal_zeroind(self):
+
+ col = Column('x', self.MyType(zero_indexes=True))
+
+ self.assert_compile(
+ col[5],
+ "x[:x_1]",
+ checkparams={'x_1': 6}
+ )
+
+ def test_getindex_sqlexpr_zeroind(self):
+
+ col = Column('x', self.MyType(zero_indexes=True))
+ col2 = Column('y', Integer())
+
+ self.assert_compile(
+ col[col2],
+ "x[(y + :y_1)]",
+ checkparams={'y_1': 1}
+ )
+
+ self.assert_compile(
+ col[col2 + 8],
+ "x[(y + :y_1 + :param_1)]",
+ checkparams={'y_1': 8, 'param_1': 1}
+ )
+
+ def test_getslice_literal_zeroind(self):
+
+ col = Column('x', self.MyType(zero_indexes=True))
+
+ self.assert_compile(
+ col[5:6],
+ "x[:x_1::x_2]",
+ checkparams={'x_1': 6, 'x_2': 7}
+ )
+
+ def test_getslice_sqlexpr_zeroind(self):
+
+ col = Column('x', self.MyType(zero_indexes=True))
+ col2 = Column('y', Integer())
+
+ self.assert_compile(
+ col[col2:col2 + 5],
+ "x[y + :y_1:y + :y_2 + :param_1]",
+ checkparams={'y_1': 1, 'y_2': 5, 'param_1': 1}
+ )
+
+ def test_override_operators(self):
+ special_index_op = operators.custom_op('->')
+
+ class MyOtherType(Indexable, TypeEngine):
+ __visit_name__ = 'myothertype'
+
+ class Comparator(TypeEngine.Comparator):
+
+ def _adapt_expression(self, op, other_comparator):
+ return special_index_op, MyOtherType()
+
+ comparator_factory = Comparator
+
+ col = Column('x', MyOtherType())
+ self.assert_compile(
+ col[5],
+ "x -> :x_1",
+ checkparams={'x_1': 5}
+ )
+
+
class BooleanEvalTest(fixtures.TestBase, testing.AssertsCompiledSQL):
"""test standalone booleans being wrapped in an AsBoolean, as well
t1 = typ()
repr(t1)
+ def test_adapt_constructor_copy_override_kw(self):
+ """test that adapt() can accept kw args that override
+ the state of the original object.
+
+ This essentially is testing the behavior of util.constructor_copy().
+
+ """
+ t1 = String(length=50, convert_unicode=False)
+ t2 = t1.adapt(Text, convert_unicode=True)
+ eq_(
+ t2.length, 50
+ )
+ eq_(
+ t2.convert_unicode, True
+ )
+
class TypeAffinityTest(fixtures.TestBase):