.. changelog::
:version: 1.1.0b1
+ .. change::
+ :tags: feature, mysql
+ :tickets: 3547
+
+ Added :class:`.mysql.JSON` for MySQL 5.7. The JSON type provides
+ persistence of JSON values in MySQL as well as basic operator support
+ for "getitem" and "getpath", making use of the ``JSON_EXTRACT``
+ function in order to refer to individual paths in a JSON structure.
+
+ .. seealso::
+
+ :ref:`change_3547`
+
+ .. change::
+ :tags: feature, sql
+ :tickets: 3619
+
+ Added a new type to Core, :class:`.types.JSON`. This is the
+ base of the PostgreSQL :class:`.postgresql.JSON` type as well as that
+ of the new :class:`.mysql.JSON` type, so that a PG/MySQL-agnostic
+ JSON column may be used. The type features basic index and path
+ searching support.
+
+ .. seealso::
+
+ :ref:`change_3619`
+
.. change::
:tags: bug, sql
:tickets: 3616, 3132

Added support for the SQL-standard function :class:`.array_agg`,
- which automatically returns an :class:`.Array` of the correct type
+ which automatically returns a :class:`.types.ARRAY` of the correct type
and supports index / slice operations, as well as
:func:`.postgresql.array_agg`, which returns a :class:`.postgresql.ARRAY`
with additional comparison features. As arrays are only
:tags: feature, sql
:tickets: 3516
- Added a new type to core :class:`.types.Array`. This is the
- base of the PostgreSQL :class:`.ARRAY` type, and is now part of Core
+ Added a new type to Core, :class:`.types.ARRAY`. This is the
+ base of the PostgreSQL :class:`.postgresql.ARRAY` type, and is now part of Core
to begin supporting various SQL-standard array-supporting features
including some functions and eventual support for native arrays
on other databases that have an "array" concept, such as DB2 or Oracle.
:tickets: 3514
Additional fixes have been made regarding the value of ``None``
- in conjunction with the Postgresql :class:`.JSON` type. When
+ in conjunction with the Postgresql :class:`.postgresql.JSON` type. When
the :paramref:`.JSON.none_as_null` flag is left at its default
value of ``False``, the ORM will now correctly insert the JSON
``"null"`` string into the column whenever the value on the ORM
:ticket:`2528`
+.. _change_3619:
+
+JSON support added to Core
+--------------------------
+
+As MySQL now has a JSON datatype in addition to the Postgresql JSON datatype,
+the core now gains a :class:`sqlalchemy.types.JSON` datatype that is the basis
+for both of these. Using this type allows access to the "getitem" operator
+as well as the "getpath" operator in a way that is agnostic across Postgresql
+and MySQL.
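+
+A minimal sketch of this cross-backend use, assuming a hypothetical
+``data_table`` defined against either backend::
+
+    from sqlalchemy import Column, Integer, MetaData, Table
+    from sqlalchemy.types import JSON
+
+    metadata = MetaData()
+    data_table = Table("data_table", metadata,
+        Column("id", Integer, primary_key=True),
+        Column("data", JSON)
+    )
+
+    # "getitem": renders data -> 'some key' on Postgresql,
+    # JSON_EXTRACT(data, '$."some key"') on MySQL
+    expr = data_table.c.data['some key']
+
+    # "getpath": renders data #> '{key_1, key_2}' on Postgresql,
+    # JSON_EXTRACT(data, '$."key_1"."key_2"') on MySQL
+    expr = data_table.c.data[('key_1', 'key_2')]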
+
+The new datatype also has a series of improvements to the handling of
+NULL values as well as expression handling.
+
+.. seealso::
+
+ :ref:`change_3547`
+
+ :class:`.types.JSON`
+
+ :class:`.postgresql.JSON`
+
+ :class:`.mysql.JSON`
+
+:ticket:`3619`
+
+.. _change_3514:
+
+JSON "null" is inserted as expected with ORM operations, regardless of column default present
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The :class:`.types.JSON` type and its descendant types :class:`.postgresql.JSON`
+and :class:`.mysql.JSON` have a flag :paramref:`.types.JSON.none_as_null` which
+when set to True indicates that the Python value ``None`` should translate
+into a SQL NULL rather than a JSON NULL value. This flag defaults to False,
+which means that the column should *never* insert SQL NULL or fall back
+to a default unless the :func:`.null` constant is used. However, this would
+fail in the ORM under two circumstances. The first is when the column also
+contains a default or server_default value: a positive value of ``None``
+on the mapped attribute would still result in the column-level default
+being triggered, replacing the ``None`` value::
+
+ obj = MyObject(json_value=None)
+ session.add(obj)
+ session.commit() # would fire off default / server_default, not encode "'null'"
+
+The second is when the :meth:`.Session.bulk_insert_mappings`
+method is used; here, ``None`` would be ignored in all cases::
+
+ session.bulk_insert_mappings(
+ MyObject,
+ [{"json_value": None}]) # would insert SQL NULL and/or trigger defaults
+
+The :class:`.types.JSON` type now implements the
+:attr:`.TypeEngine.should_evaluate_none` flag,
+indicating that ``None`` should not be ignored here; it is configured
+automatically based on the value of :paramref:`.types.JSON.none_as_null`.
+Thanks to :ticket:`3061`, we can differentiate when the value ``None`` is actively
+set by the user versus when it was never set at all.
+
+If the attribute is not set at all, then column level defaults *will*
+fire off and/or SQL NULL will be inserted as expected, as was the behavior
+previously. Below, the two variants are illustrated::
+
+ obj = MyObject(json_value=None)
+ session.add(obj)
+ session.commit() # *will not* fire off column defaults, will insert JSON 'null'
+
+ obj = MyObject()
+ session.add(obj)
+ session.commit() # *will* fire off column defaults, and/or insert SQL NULL
+
+The feature applies as well to the new base :class:`.types.JSON` type
+and its descendant types.
+
+:ticket:`3514`
+
+.. _change_3514_jsonnull:
+
+New JSON.NULL Constant Added
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To ensure that an application can always have full control at the value level
+of whether a :class:`.types.JSON`, :class:`.postgresql.JSON`, :class:`.mysql.JSON`,
+or :class:`.postgresql.JSONB` column
+should receive a SQL NULL or JSON ``"null"`` value, the constant
+:attr:`.types.JSON.NULL` has been added, which in conjunction with
+:func:`.null` can be used to determine fully between SQL NULL and
+JSON ``"null"``, regardless of what :paramref:`.types.JSON.none_as_null` is set
+to::
+
+ from sqlalchemy import null
+ from sqlalchemy.dialects.postgresql import JSON
+
+ obj1 = MyObject(json_value=null()) # will *always* insert SQL NULL
+ obj2 = MyObject(json_value=JSON.NULL) # will *always* insert JSON string "null"
+
+ session.add_all([obj1, obj2])
+ session.commit()
+
+The feature applies as well to the new base :class:`.types.JSON` type
+and its descendant types.
+
+:ticket:`3514`
+
.. _change_3516:
Array support added to Core; new ANY and ALL operators
------------------------------------------------------
-Along with the enhancements made to the Postgresql :class:`.ARRAY`
-type described in :ref:`change_3503`, the base class of :class:`.ARRAY`
-itself has been moved to Core in a new class :class:`.types.Array`.
+Along with the enhancements made to the Postgresql :class:`.postgresql.ARRAY`
+type described in :ref:`change_3503`, the base class of :class:`.postgresql.ARRAY`
+itself has been moved to Core in a new class :class:`.types.ARRAY`.
Arrays are part of the SQL standard, as are several array-oriented functions
such as ``array_agg()`` and ``unnest()``. In support of these constructs
for not just PostgreSQL but also potentially for other array-capable backends
in the future such as DB2, the majority of array logic for SQL expressions
-is now in Core. The :class:`.Array` type still **only works on
+is now in Core. The :class:`.types.ARRAY` type still **only works on
Postgresql**; however, it can be used directly, supporting special array
use cases such as indexed access, as well as the ANY and ALL operators::
mytable = Table("mytable", metadata,
- Column("data", Array(Integer, dimensions=2))
+ Column("data", ARRAY(Integer, dimensions=2))
)
expr = mytable.c.data[5][6]
expr = mytable.c.data[5].any(12)
-In support of ANY and ALL, the :class:`.Array` type retains the same
-:meth:`.Array.Comparator.any` and :meth:`.Array.Comparator.all` methods
+In support of ANY and ALL, the :class:`.types.ARRAY` type retains the same
+:meth:`.types.ARRAY.Comparator.any` and :meth:`.types.ARRAY.Comparator.all` methods
from the PostgreSQL type, but also exports these operations to new
standalone operator functions :func:`.sql.expression.any_` and
:func:`.sql.expression.all_`. These two functions work in more
For the PostgreSQL-specific operators "contains", "contained_by", and
"overlaps", one should continue to use the :class:`.postgresql.ARRAY`
-type directly, which provides all functionality of the :class:`.Array`
+type directly, which provides all functionality of the :class:`.types.ARRAY`
type as well.
The :func:`.sql.expression.any_` and :func:`.sql.expression.all_` operators
New Function features, "WITHIN GROUP", array_agg and set aggregate functions
----------------------------------------------------------------------------
-With the new :class:`.Array` type we can also implement a pre-typed
+With the new :class:`.types.ARRAY` type we can also implement a pre-typed
function for the ``array_agg()`` SQL function that returns an array,
which is now available using :class:`.array_agg`::
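
    # a minimal sketch; ``table`` with an Integer ``value`` column is
    # assumed for illustration
    from sqlalchemy import func, select

    # the expression is typed as ARRAY(Integer), so index / slice
    # operations are available on it
    expr = func.array_agg(table.c.value)
    stmt = select([expr[3]])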
produce a hash function for column values when a query's selected entities
mix full ORM entities with column expressions. The ``hashable=False``
flag is now correctly set on all of PG's "data structure" types, including
-:class:`.ARRAY` and :class:`.JSON`. The :class:`.JSONB` and :class:`.HSTORE`
-types already included this flag. For :class:`.ARRAY`,
+:class:`.postgresql.ARRAY` and :class:`.postgresql.JSON`.
+The :class:`.JSONB` and :class:`.HSTORE`
+types already included this flag. For :class:`.postgresql.ARRAY`,
this is conditional based on the :paramref:`.postgresql.ARRAY.as_tuple`
flag, however it should no longer be necessary to set this flag
in order to have an array value present in a composed ORM row.
As part of the changes in :ref:`change_3503`, the workings of the
:meth:`.ColumnElement.cast` operator on :class:`.postgresql.JSON` and
:class:`.postgresql.JSONB` no longer implicitly invoke the
-:attr:`.JSON.Comparator.astext` modifier; Postgresql's JSON/JSONB types
+:attr:`.postgresql.JSON.Comparator.astext` modifier; Postgresql's JSON/JSONB types
support CAST operations to each other without the "astext" aspect.
This means that in most cases, an application that was doing this::
expr = json_col['somekey'].astext.cast(Integer)
-
-.. _change_3514:
-
-Postgresql JSON "null" is inserted as expected with ORM operations, regardless of column default present
------------------------------------------------------------------------------------------------------------
-
-The :class:`.JSON` type has a flag :paramref:`.JSON.none_as_null` which
-when set to True indicates that the Python value ``None`` should translate
-into a SQL NULL rather than a JSON NULL value. This flag defaults to False,
-which means that the column should *never* insert SQL NULL or fall back
-to a default unless the :func:`.null` constant were used. However, this would
-fail in the ORM under two circumstances; one is when the column also contained
-a default or server_default value, a positive value of ``None`` on the mapped
-attribute would still result in the column-level default being triggered,
-replacing the ``None`` value::
-
- obj = MyObject(json_value=None)
- session.add(obj)
- session.commit() # would fire off default / server_default, not encode "'none'"
-
-The other is when the :meth:`.Session.bulk_insert_mappings`
-method were used, ``None`` would be ignored in all cases::
-
- session.bulk_insert_mappings(
- MyObject,
- [{"json_value": None}]) # would insert SQL NULL and/or trigger defaults
-
-The :class:`.JSON` type now implements the
-:attr:`.TypeEngine.should_evaluate_none` flag,
-indicating that ``None`` should not be ignored here; it is configured
-automatically based on the value of :paramref:`.JSON.none_as_null`.
-Thanks to :ticket:`3061`, we can differentiate when the value ``None`` is actively
-set by the user versus when it was never set at all.
-
-If the attribute is not set at all, then column level defaults *will*
-fire off and/or SQL NULL will be inserted as expected, as was the behavior
-previously. Below, the two variants are illustrated::
-
- obj = MyObject(json_value=None)
- session.add(obj)
- session.commit() # *will not* fire off column defaults, will insert JSON 'null'
-
- obj = MyObject()
- session.add(obj)
- session.commit() # *will* fire off column defaults, and/or insert SQL NULL
-
-:ticket:`3514`
-
-.. seealso::
-
- :ref:`change_3250`
-
- :ref:`change_3514_jsonnull`
-
-.. _change_3514_jsonnull:
-
-New JSON.NULL Constant Added
-----------------------------
-
-To ensure that an application can always have full control at the value level
-of whether a :class:`.postgresql.JSON` or :class:`.postgresql.JSONB` column
-should receive a SQL NULL or JSON ``"null"`` value, the constant
-:attr:`.postgresql.JSON.NULL` has been added, which in conjunction with
-:func:`.null` can be used to determine fully between SQL NULL and
-JSON ``"null"``, regardless of what :paramref:`.JSON.none_as_null` is set
-to::
-
- from sqlalchemy import null
- from sqlalchemy.dialects.postgresql import JSON
-
- obj1 = MyObject(json_value=null()) # will *always* insert SQL NULL
- obj2 = MyObject(json_value=JSON.NULL) # will *always* insert JSON string "null"
-
- session.add_all([obj1, obj2])
- session.commit()
-
-.. seealso::
-
- :ref:`change_3514`
-
-:ticket:`3514`
-
.. _change_2729:
ARRAY with ENUM will now emit CREATE TYPE for the ENUM
Dialect Improvements and Changes - MySQL
=============================================
+.. _change_3547:
+
+MySQL JSON Support
+------------------
+
+A new type :class:`.mysql.JSON` is added to the MySQL dialect supporting
+the JSON type newly added to MySQL 5.7. This type provides both persistence
+of JSON values and rudimentary indexed access, using the ``JSON_EXTRACT``
+function internally. An indexable JSON column that works across both MySQL
+and Postgresql can be achieved by using the :class:`.types.JSON` datatype
+common to the two dialects.
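+
+A minimal sketch, assuming a hypothetical ``data_table`` with a column of
+this type::
+
+    from sqlalchemy import Table, Column, Integer, MetaData
+    from sqlalchemy.dialects.mysql import JSON
+
+    metadata = MetaData()
+    data_table = Table("data_table", metadata,
+        Column("id", Integer, primary_key=True),
+        Column("data", JSON)
+    )
+
+    # renders JSON_EXTRACT(data_table.data, %s), where the bound
+    # parameter is the path string '$."key1"'
+    expr = data_table.c.data['key1']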
+
+.. seealso::
+
+ :ref:`change_3619`
+
+:ticket:`3547`
+
+
.. _change_mysql_3216:
No more generation of an implicit KEY for composite primary key w/ AUTO_INCREMENT
type is emitted in ``CREATE TABLE``, such as ``VARCHAR``, see `SQL
Standard Types`_ and the other sections of this chapter.
-.. autoclass:: Array
- :members:
-
.. autoclass:: BigInteger
:members:
.. _types_sqlstandard:
-SQL Standard Types
-------------------
+SQL Standard and Multiple Vendor Types
+--------------------------------------
+
+This category of types refers to types that are either part of the
+SQL standard, or are potentially found within a subset of database backends.
+Unlike the "generic" types, the SQL standard/multi-vendor types have **no**
+guarantee of working on all backends, and will only work on those backends
+that explicitly support them by name. That is, the type will always emit
+its exact name in DDL when ``CREATE TABLE`` is issued.
-The SQL standard types always create database column types of the same
-name when ``CREATE TABLE`` is issued. Some types may not be supported
-on all databases.
+
+.. autoclass:: ARRAY
+ :members:
.. autoclass:: BIGINT
.. autoclass:: INT
+.. autoclass:: JSON
+ :members:
+
.. autoclass:: sqlalchemy.types.INTEGER
.. autoclass:: INTEGER
:members: __init__
+.. autoclass:: JSON
+ :members:
.. autoclass:: LONGBLOB
:members: __init__
.. autoclass:: JSONB
:members:
-.. autoclass:: JSONElement
- :members:
-
.. autoclass:: MACADDR
:members: __init__
)
from .types import (
- Array,
+ ARRAY,
BIGINT,
BINARY,
BLOB,
INTEGER,
Integer,
Interval,
+ JSON,
LargeBinary,
NCHAR,
NVARCHAR,
from .base import \
BIGINT, BINARY, BIT, BLOB, BOOLEAN, CHAR, DATE, DATETIME, \
DECIMAL, DOUBLE, ENUM, DECIMAL,\
- FLOAT, INTEGER, INTEGER, LONGBLOB, LONGTEXT, MEDIUMBLOB, \
+ FLOAT, INTEGER, INTEGER, JSON, LONGBLOB, LONGTEXT, MEDIUMBLOB, \
MEDIUMINT, MEDIUMTEXT, NCHAR, \
NVARCHAR, NUMERIC, SET, SMALLINT, REAL, TEXT, TIME, TIMESTAMP, \
TINYBLOB, TINYINT, TINYTEXT,\
__all__ = (
'BIGINT', 'BINARY', 'BIT', 'BLOB', 'BOOLEAN', 'CHAR', 'DATE', 'DATETIME',
'DECIMAL', 'DOUBLE', 'ENUM', 'DECIMAL', 'FLOAT', 'INTEGER', 'INTEGER',
- 'LONGBLOB', 'LONGTEXT', 'MEDIUMBLOB', 'MEDIUMINT', 'MEDIUMTEXT', 'NCHAR',
- 'NVARCHAR', 'NUMERIC', 'SET', 'SMALLINT', 'REAL', 'TEXT', 'TIME',
+ 'JSON', 'LONGBLOB', 'LONGTEXT', 'MEDIUMBLOB', 'MEDIUMINT', 'MEDIUMTEXT',
+ 'NCHAR', 'NVARCHAR', 'NUMERIC', 'SET', 'SMALLINT', 'REAL', 'TEXT', 'TIME',
'TIMESTAMP', 'TINYBLOB', 'TINYINT', 'TINYTEXT', 'VARBINARY', 'VARCHAR',
'YEAR', 'dialect'
)
import re
import sys
+import json
from ... import schema as sa_schema
from ... import exc, log, sql, util
-from ...sql import compiler
+from ...sql import compiler, elements
from array import array as _array
from ...engine import reflection
from .types import _StringType, _IntegerType, _NumericType, \
_FloatType, _MatchType
from .enumerated import ENUM, SET
+from .json import JSON, JSONIndexType, JSONPathType
RESERVED_WORDS = set(
sqltypes.Float: FLOAT,
sqltypes.Time: TIME,
sqltypes.Enum: ENUM,
- sqltypes.MatchType: _MatchType
+ sqltypes.MatchType: _MatchType,
+ sqltypes.JSON: JSON,
+ sqltypes.JSON.JSONIndexType: JSONIndexType,
+ sqltypes.JSON.JSONPathType: JSONPathType
}
# Everything 3.23 through 5.1 excepting OpenGIS types.
'float': FLOAT,
'int': INTEGER,
'integer': INTEGER,
+ 'json': JSON,
'longblob': LONGBLOB,
'longtext': LONGTEXT,
'mediumblob': MEDIUMBLOB,
def visit_sysdate_func(self, fn, **kw):
return "SYSDATE()"
+ def visit_json_getitem_op_binary(self, binary, operator, **kw):
+ return "JSON_EXTRACT(%s, %s)" % (
+ self.process(binary.left),
+ self.process(binary.right))
+
+ def visit_json_path_getitem_op_binary(self, binary, operator, **kw):
+ return "JSON_EXTRACT(%s, %s)" % (
+ self.process(binary.left),
+ self.process(binary.right))
+
def visit_concat_op_binary(self, binary, operator, **kw):
return "concat(%s, %s)" % (self.process(binary.left),
self.process(binary.right))
return self.dialect.type_compiler.process(adapted)
elif isinstance(type_, sqltypes._Binary):
return 'BINARY'
+ elif isinstance(type_, sqltypes.JSON):
+ return "JSON"
elif isinstance(type_, sqltypes.NUMERIC):
return self.dialect.type_compiler.process(
type_).replace('NUMERIC', 'DECIMAL')
def visit_VARBINARY(self, type_, **kw):
return "VARBINARY(%d)" % type_.length
+ def visit_JSON(self, type_, **kw):
+ return "JSON"
+
def visit_large_binary(self, type_, **kw):
return self.visit_BLOB(type_)
})
]
- def __init__(self, isolation_level=None, **kwargs):
+ def __init__(self, isolation_level=None, json_serializer=None,
+ json_deserializer=None, **kwargs):
kwargs.pop('use_ansiquotes', None) # legacy
default.DefaultDialect.__init__(self, **kwargs)
self.isolation_level = isolation_level
+ self._json_serializer = json_serializer
+ self._json_deserializer = json_deserializer
def on_connect(self):
if self.isolation_level is not None:
default.DefaultDialect.initialize(self, connection)
+ @property
+ def _is_mariadb(self):
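+        # the parsed server version info tuple retains the string
+        # 'MariaDB' for MariaDB servers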
+ return 'MariaDB' in self.server_version_info
+
@property
def _supports_cast(self):
return self.server_version_info is None or \
--- /dev/null
+# mysql/json.py
+# Copyright (C) 2005-2015 the SQLAlchemy authors and contributors
+# <see AUTHORS file>
+#
+# This module is part of SQLAlchemy and is released under
+# the MIT License: http://www.opensource.org/licenses/mit-license.php
+
+from __future__ import absolute_import
+
+import json
+
+from ...sql import elements
+from ... import types as sqltypes
+from ... import util
+
+
+class JSON(sqltypes.JSON):
+ """MySQL JSON type.
+
+ MySQL supports JSON as of version 5.7. Note that MariaDB does **not**
+ support JSON at the time of this writing.
+
+ The :class:`.mysql.JSON` type supports persistence of JSON values
+ as well as the core index operations provided by the :class:`.types.JSON`
+ datatype, by adapting the operations to render the ``JSON_EXTRACT``
+ function at the database level.
+
+ .. versionadded:: 1.1
+
+ """
+
+ @util.memoized_property
+ def _str_impl(self):
+ return sqltypes.String(convert_unicode=True)
+
+ def bind_processor(self, dialect):
+ string_process = self._str_impl.bind_processor(dialect)
+
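+        # the serializer defaults to json.dumps; a custom serializer may be
+        # configured at the dialect level, i.e. via create_engine()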
+ json_serializer = dialect._json_serializer or json.dumps
+
+ def process(value):
+ if value is self.NULL:
+ value = None
+ elif isinstance(value, elements.Null) or (
+ value is None and self.none_as_null
+ ):
+ return None
+
+ serialized = json_serializer(value)
+ if string_process:
+ serialized = string_process(serialized)
+ return serialized
+
+ return process
+
+ def result_processor(self, dialect, coltype):
+ string_process = self._str_impl.result_processor(dialect, coltype)
+ json_deserializer = dialect._json_deserializer or json.loads
+
+ def process(value):
+ if value is None:
+ return None
+ if string_process:
+ value = string_process(value)
+ return json_deserializer(value)
+ return process
+
+
+class JSONIndexType(sqltypes.JSON.JSONIndexType):
+ def bind_processor(self, dialect):
+ def process(value):
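+            # integer indexes render as MySQL paths like "$[5]";
+            # string keys render as '$."key"'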
+ if isinstance(value, int):
+ return "$[%s]" % value
+ else:
+ return '$."%s"' % value
+
+ return process
+
+
+class JSONPathType(sqltypes.JSON.JSONPathType):
+ def bind_processor(self, dialect):
+ def process(value):
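+            # e.g. the tuple ("key_1", 5, "key_2") renders as the
+            # MySQL path string '$."key_1"[5]."key_2"'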
+ return "$%s" % (
+ "".join([
+ "[%s]" % elem if isinstance(elem, int)
+ else '."%s"' % elem for elem in value
+ ])
+ )
+
+ return process
super(array, self).__init__(*clauses, **kw)
self.type = ARRAY(self.type)
- def _bind_param(self, operator, obj):
- return array([
- expression.BindParameter(None, o, _compared_to_operator=operator,
- _compared_to_type=self.type, unique=True)
- for o in obj
- ])
+ def _bind_param(self, operator, obj, _assume_scalar=False, type_=None):
+ if _assume_scalar or operator is operators.getitem:
+ # if getitem->slice were called, Indexable produces
+ # a Slice object from that
+ assert isinstance(obj, int)
+ return expression.BindParameter(
+ None, obj, _compared_to_operator=operator,
+ type_=type_,
+ _compared_to_type=self.type, unique=True)
+
+ else:
+ return array([
+ self._bind_param(operator, o, _assume_scalar=True, type_=type_)
+ for o in obj])
def self_group(self, against=None):
if (against in (
OVERLAP = operators.custom_op("&&", precedence=5)
-class ARRAY(SchemaEventTarget, sqltypes.Array):
+class ARRAY(SchemaEventTarget, sqltypes.ARRAY):
"""Postgresql ARRAY type.
.. versionchanged:: 1.1 The :class:`.postgresql.ARRAY` type is now
- a subclass of the core :class:`.Array` type.
+ a subclass of the core :class:`.types.ARRAY` type.
The :class:`.postgresql.ARRAY` type is constructed in the same way
- as the core :class:`.Array` type; a member type is required, and a
+ as the core :class:`.types.ARRAY` type; a member type is required, and a
number of dimensions is recommended if the type is to be used for more
than one dimension::
)
The :class:`.postgresql.ARRAY` type provides all operations defined on the
- core :class:`.Array` type, including support for "dimensions", indexed
- access, and simple matching such as :meth:`.Array.Comparator.any`
- and :meth:`.Array.Comparator.all`. :class:`.postgresql.ARRAY` class also
+ core :class:`.types.ARRAY` type, including support for "dimensions", indexed
+ access, and simple matching such as :meth:`.types.ARRAY.Comparator.any`
+ and :meth:`.types.ARRAY.Comparator.all`. The :class:`.postgresql.ARRAY` class also
provides PostgreSQL-specific methods for containment operations, including
:meth:`.postgresql.ARRAY.Comparator.contains`,
:meth:`.postgresql.ARRAY.Comparator.contained_by`,
.. seealso::
- :class:`.types.Array` - base array type
+ :class:`.types.ARRAY` - base array type
:class:`.postgresql.array` - produces a literal array value.
"""
- class Comparator(sqltypes.Array.Comparator):
+ class Comparator(sqltypes.ARRAY.Comparator):
"""Define comparison operations for :class:`.ARRAY`.
Note that these operations are in addition to those provided
- by the base :class:`.types.Array.Comparator` class, including
- :meth:`.types.Array.Comparator.any` and
- :meth:`.types.Array.Comparator.all`.
+ by the base :class:`.types.ARRAY.Comparator` class, including
+ :meth:`.types.ARRAY.Comparator.any` and
+ :meth:`.types.ARRAY.Comparator.all`.
"""
self.process(element.stop, **kw),
)
+ def visit_json_getitem_op_binary(self, binary, operator, **kw):
+ return self._generate_generic_binary(
+ binary, " -> ", **kw
+ )
+
+ def visit_json_path_getitem_op_binary(self, binary, operator, **kw):
+ return self._generate_generic_binary(
+ binary, " #> ", **kw
+ )
+
def visit_getitem_binary(self, binary, operator, **kw):
return "%s[%s]" % (
self.process(binary.left, **kw),
def array_agg(*arg, **kw):
"""Postgresql-specific form of :class:`.array_agg`, ensures
return type is :class:`.postgresql.ARRAY` and not
- the plain :class:`.types.Array`.
+ the plain :class:`.types.ARRAY`.
.. versionadded:: 1.1
from ... import types as sqltypes
from ...sql import functions as sqlfunc
from ...sql import operators
-from ...sql.operators import custom_op
from ... import util
__all__ = ('HSTORE', 'hstore')
-INDEX = custom_op(
- "->", precedence=5, natural_self_precedent=True
+GETITEM = operators.custom_op(
+ "->", precedence=15, natural_self_precedent=True,
)
HAS_KEY = operators.custom_op(
- "?", precedence=5, natural_self_precedent=True
+ "?", precedence=15, natural_self_precedent=True
)
HAS_ALL = operators.custom_op(
- "?&", precedence=5, natural_self_precedent=True
+ "?&", precedence=15, natural_self_precedent=True
)
HAS_ANY = operators.custom_op(
- "?|", precedence=5, natural_self_precedent=True
+ "?|", precedence=15, natural_self_precedent=True
)
CONTAINS = operators.custom_op(
- "@>", precedence=5, natural_self_precedent=True
+ "@>", precedence=15, natural_self_precedent=True
)
CONTAINED_BY = operators.custom_op(
- "<@", precedence=5, natural_self_precedent=True
+ "<@", precedence=15, natural_self_precedent=True
)
CONTAINED_BY, other, result_type=sqltypes.Boolean)
def _setup_getitem(self, index):
- return INDEX, index, self.type.text_type
+ return GETITEM, index, self.type.text_type
def defined(self, key):
"""Boolean expression. Test for presence of a non-NULL value for
# the MIT License: http://www.opensource.org/licenses/mit-license.php
from __future__ import absolute_import
-import collections
import json
+import collections
-from .base import ischema_names
+from .base import ischema_names, colspecs
from ... import types as sqltypes
from ...sql import operators
from ...sql import elements
__all__ = ('JSON', 'JSONB')
-
-# json : returns json
-INDEX = operators.custom_op(
- "->", precedence=5, natural_self_precedent=True
-)
-
-# path operator: returns json
-PATHIDX = operators.custom_op(
- "#>", precedence=5, natural_self_precedent=True
-)
-
-# json + astext: returns text
ASTEXT = operators.custom_op(
- "->>", precedence=5, natural_self_precedent=True
+ "->>", precedence=15, natural_self_precedent=True,
)
-# path operator + astext: returns text
-ASTEXT_PATHIDX = operators.custom_op(
- "#>>", precedence=5, natural_self_precedent=True
+JSONPATH_ASTEXT = operators.custom_op(
+ "#>>", precedence=15, natural_self_precedent=True,
)
+
HAS_KEY = operators.custom_op(
- "?", precedence=5, natural_self_precedent=True
+ "?", precedence=15, natural_self_precedent=True
)
HAS_ALL = operators.custom_op(
- "?&", precedence=5, natural_self_precedent=True
+ "?&", precedence=15, natural_self_precedent=True
)
HAS_ANY = operators.custom_op(
- "?|", precedence=5, natural_self_precedent=True
+ "?|", precedence=15, natural_self_precedent=True
)
CONTAINS = operators.custom_op(
- "@>", precedence=5, natural_self_precedent=True
+ "@>", precedence=15, natural_self_precedent=True
)
CONTAINED_BY = operators.custom_op(
- "<@", precedence=5, natural_self_precedent=True
+ "<@", precedence=15, natural_self_precedent=True
)
-class JSON(sqltypes.Indexable, sqltypes.TypeEngine):
- """Represent the Postgresql JSON type.
+class JSONPathType(sqltypes.JSON.JSONPathType):
+ def bind_processor(self, dialect):
+ def process(value):
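+            # render the Postgresql text[] path syntax, e.g.
+            # ("key_1", "key_2") becomes '{key_1, key_2}'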
+ assert isinstance(value, collections.Sequence)
+ tokens = [util.text_type(elem) for elem in value]
+ return "{%s}" % (", ".join(tokens))
- The :class:`.JSON` type stores arbitrary JSON format data, e.g.::
+ return process
- data_table = Table('data_table', metadata,
- Column('id', Integer, primary_key=True),
- Column('data', JSON)
- )
+colspecs[sqltypes.JSON.JSONPathType] = JSONPathType
- with engine.connect() as conn:
- conn.execute(
- data_table.insert(),
- data = {"key1": "value1", "key2": "value2"}
- )
- :class:`.JSON` provides several operations:
+class JSON(sqltypes.JSON):
+ """Represent the Postgresql JSON type.
+
+ This type is a specialization of the Core-level :class:`.types.JSON`
+ type. Be sure to read the documentation for :class:`.types.JSON` for
+ important tips regarding treatment of NULL values and ORM use.
+
+ .. versionchanged:: 1.1 :class:`.postgresql.JSON` is now a Postgresql-
+ specific specialization of the new :class:`.types.JSON` type.
+
+ The operators provided by the Postgresql version of :class:`.JSON`
+ include:
* Index operations (the ``->`` operator)::
data_table.c.data['some key']
+ data_table.c.data[5]
+
+
* Index operations returning text (the ``->>`` operator)::
data_table.c.data['some key'].astext == 'some value'
* Path index operations (the ``#>`` operator)::
- data_table.c.data[('key_1', 'key_2', ..., 'key_n')]
+ data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')]
* Path index operations returning text (the ``#>>`` operator)::
- data_table.c.data[('key_1', 'key_2', ..., 'key_n')].astext == \
+ data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')].astext == \
'some value'
.. versionchanged:: 1.1 The :meth:`.ColumnElement.cast` operator on
:class:`.JSON` by default, so that further JSON-oriented instructions
may be called upon the result type.
- The :class:`.JSON` type, when used with the SQLAlchemy ORM, does not
- detect in-place mutations to the structure. In order to detect these, the
- :mod:`sqlalchemy.ext.mutable` extension must be used. This extension will
- allow "in-place" changes to the datastructure to produce events which
- will be detected by the unit of work. See the example at :class:`.HSTORE`
- for a simple example involving a dictionary.
-
- When working with NULL values, the :class:`.JSON` type recommends the
- use of two specific constants in order to differentiate between a column
- that evaluates to SQL NULL, e.g. no value, vs. the JSON-encoded string
- of ``"null"``. To insert or select against a value that is SQL NULL,
- use the constant :func:`.null`::
-
- conn.execute(table.insert(), json_value=null())
-
- To insert or select against a value that is JSON ``"null"``, use the
- constant :attr:`.JSON.NULL`::
-
- conn.execute(table.insert(), json_value=JSON.NULL)
-
- The :class:`.JSON` type supports a flag
- :paramref:`.JSON.none_as_null` which when set to True will result
- in the Python constant ``None`` evaluating to the value of SQL
- NULL, and when set to False results in the Python constant
- ``None`` evaluating to the value of JSON ``"null"``. The Python
- value ``None`` may be used in conjunction with either
- :attr:`.JSON.NULL` and :func:`.null` in order to indicate NULL
- values, but care must be taken as to the value of the
- :paramref:`.JSON.none_as_null` in these cases.
-
Custom serializers and deserializers are specified at the dialect level,
that is using :func:`.create_engine`. The reason for this is that when
using psycopg2, the DBAPI only allows serializers at the per-cursor
When using the psycopg2 dialect, the json_deserializer is registered
against the database using ``psycopg2.extras.register_default_json``.
- .. versionadded:: 0.9
-
.. seealso::
+ :class:`.types.JSON` - Core level JSON type
+
:class:`.JSONB`
"""
- __visit_name__ = 'JSON'
-
- hashable = False
astext_type = sqltypes.Text()
- NULL = util.symbol('JSON_NULL')
- """Describe the json value of NULL.
-
- This value is used to force the JSON value of ``"null"`` to be
- used as the value. A value of Python ``None`` will be recognized
- either as SQL NULL or JSON ``"null"``, based on the setting
- of the :paramref:`.JSON.none_as_null` flag; the :attr:`.JSON.NULL`
- constant can be used to always resolve to JSON ``"null"`` regardless
- of this setting. This is in contrast to the :func:`.sql.null` construct,
- which always resolves to SQL NULL. E.g.::
-
- from sqlalchemy import null
- from sqlalchemy.dialects.postgresql import JSON
-
- obj1 = MyObject(json_value=null()) # will *always* insert SQL NULL
- obj2 = MyObject(json_value=JSON.NULL) # will *always* insert JSON string "null"
-
- session.add_all([obj1, obj2])
- session.commit()
-
- .. versionadded:: 1.1
-
- """
-
def __init__(self, none_as_null=False, astext_type=None):
"""Construct a :class:`.JSON` type.
:attr:`.JSON.Comparator.astext`
accessor on indexed attributes. Defaults to :class:`.types.Text`.
- .. versionadded:: 1.1.0
+ .. versionadded:: 1.1
"""
- self.none_as_null = none_as_null
+ super(JSON, self).__init__(none_as_null=none_as_null)
if astext_type is not None:
self.astext_type = astext_type
- class Comparator(
- sqltypes.Indexable.Comparator, sqltypes.Concatenable.Comparator):
+ class Comparator(sqltypes.JSON.Comparator):
"""Define comparison operations for :class:`.JSON`."""
@property
:meth:`.ColumnElement.cast`
"""
- against = self.expr.operator
- if against is PATHIDX:
- against = ASTEXT_PATHIDX
- else:
- against = ASTEXT
- return self.expr.left.operate(
- against, self.expr.right, result_type=self.type.astext_type)
-
- def _setup_getitem(self, index):
- if not isinstance(index, util.string_types):
- assert isinstance(index, collections.Sequence)
- tokens = [util.text_type(elem) for elem in index]
- index = "{%s}" % (", ".join(tokens))
- operator = PATHIDX
+ if isinstance(self.expr.right.type, sqltypes.JSON.JSONPathType):
+ return self.expr.left.operate(
+ JSONPATH_ASTEXT,
+ self.expr.right, result_type=self.type.astext_type)
else:
- operator = INDEX
-
- return operator, index, self.type
+ return self.expr.left.operate(
+ ASTEXT, self.expr.right, result_type=self.type.astext_type)
comparator_factory = Comparator
- @property
- def should_evaluate_none(self):
- return not self.none_as_null
-
- def bind_processor(self, dialect):
- json_serializer = dialect._json_serializer or json.dumps
- if util.py2k:
- encoding = dialect.encoding
- else:
- encoding = None
-
- def process(value):
- if value is self.NULL:
- value = None
- elif isinstance(value, elements.Null) or (
- value is None and self.none_as_null
- ):
- return None
- if encoding:
- return json_serializer(value).encode(encoding)
- else:
- return json_serializer(value)
-
- return process
-
- def result_processor(self, dialect, coltype):
- json_deserializer = dialect._json_deserializer or json.loads
- if util.py2k:
- encoding = dialect.encoding
- else:
- encoding = None
-
- def process(value):
- if value is None:
- return None
- if encoding:
- value = value.decode(encoding)
- return json_deserializer(value)
- return process
-
+colspecs[sqltypes.JSON] = JSON
ischema_names['json'] = JSON
sqltypes.Enum: _PGEnum, # needs force_unicode
HSTORE: _PGHStore,
JSON: _PGJSON,
+ sqltypes.JSON: _PGJSON,
JSONB: _PGJSONB,
UUID: _PGUUID
}
else:
return text
+ def _get_operator_dispatch(self, operator_, qualifier1, qualifier2):
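+        # look up a compiler visit method of the form
+        # "visit_<operator name>_<qualifier1>[_<qualifier2>]",
+        # e.g. visit_json_getitem_op_binary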
+ attrname = "visit_%s_%s%s" % (
+ operator_.__name__, qualifier1,
+ "_" + qualifier2 if qualifier2 else "")
+ return getattr(self, attrname, None)
+
def visit_unary(self, unary, **kw):
if unary.operator:
if unary.modifier:
raise exc.CompileError(
"Unary expression does not support operator "
"and modifier simultaneously")
- disp = getattr(self, "visit_%s_unary_operator" %
- unary.operator.__name__, None)
+ disp = self._get_operator_dispatch(
+ unary.operator, "unary", "operator")
if disp:
return disp(unary, unary.operator, **kw)
else:
return self._generate_generic_unary_operator(
unary, OPERATORS[unary.operator], **kw)
elif unary.modifier:
- disp = getattr(self, "visit_%s_unary_modifier" %
- unary.modifier.__name__, None)
+ disp = self._get_operator_dispatch(
+ unary.modifier, "unary", "modifier")
if disp:
return disp(unary, unary.modifier, **kw)
else:
kw['literal_binds'] = True
operator_ = override_operator or binary.operator
- disp = getattr(self, "visit_%s_binary" % operator_.__name__, None)
+ disp = self._get_operator_dispatch(operator_, "binary", None)
if disp:
return disp(binary, operator_, **kw)
else:
def _getitem_impl(expr, op, other, **kw):
if isinstance(expr.type, type_api.INDEXABLE):
- if isinstance(other, slice):
- if expr.type.zero_indexes:
- other = slice(
- other.start + 1,
- other.stop + 1,
- other.step
- )
- other = Slice(
- _literal_as_binds(
- other.start, name=expr.key, type_=type_api.INTEGERTYPE),
- _literal_as_binds(
- other.stop, name=expr.key, type_=type_api.INTEGERTYPE),
- _literal_as_binds(
- other.step, name=expr.key, type_=type_api.INTEGERTYPE)
- )
- else:
- if expr.type.zero_indexes:
- other += 1
-
- other = _literal_as_binds(
- other, name=expr.key, type_=type_api.INTEGERTYPE)
+ other = _check_literal(expr, op, other)
return _binary_operate(expr, op, other, **kw)
else:
_unsupported_impl(expr, op, other, **kw)
"mod": (_binary_operate,),
"truediv": (_binary_operate,),
"custom_op": (_binary_operate,),
+ "json_path_getitem_op": (_binary_operate, ),
+ "json_getitem_op": (_binary_operate, ),
"concat_op": (_binary_operate,),
"lt": (_boolean_compare, operators.ge),
"le": (_boolean_compare, operators.gt),
}
-def _check_literal(expr, operator, other):
+def _check_literal(expr, operator, other, bindparam_type=None):
if isinstance(other, (ColumnElement, TextClause)):
if isinstance(other, BindParameter) and \
other.type._isnull:
if isinstance(other, (SelectBase, Alias)):
return other.as_scalar()
elif not isinstance(other, Visitable):
- return expr._bind_param(operator, other)
+ return expr._bind_param(operator, other, type_=bindparam_type)
else:
return other
def reverse_operate(self, op, other, **kwargs):
return op(other, self.comparator, **kwargs)
- def _bind_param(self, operator, obj):
+ def _bind_param(self, operator, obj, type_=None):
return BindParameter(None, obj,
_compared_to_operator=operator,
+ type_=type_,
_compared_to_type=self.type, unique=True)
@property
def _select_iterable(self):
return (self, )
- def _bind_param(self, operator, obj):
+ def _bind_param(self, operator, obj, type_=None):
return Tuple(*[
BindParameter(None, o, _compared_to_operator=operator,
- _compared_to_type=type_, unique=True)
- for o, type_ in zip(obj, self._type_tuple)
+ _compared_to_type=compared_to_type, unique=True,
+ type_=type_)
+ for o, compared_to_type in zip(obj, self._type_tuple)
]).self_group()
else:
return name
- def _bind_param(self, operator, obj):
+ def _bind_param(self, operator, obj, type_=None):
return BindParameter(self.key, obj,
_compared_to_operator=operator,
_compared_to_type=self.type,
+ type_=type_,
unique=True)
def _make_proxy(self, selectable, name=None, attach=True,
"""
return self.select().execute()
- def _bind_param(self, operator, obj):
+ def _bind_param(self, operator, obj, type_=None):
return BindParameter(None, obj, _compared_to_operator=operator,
- _compared_to_type=self.type, unique=True)
+ _compared_to_type=self.type, unique=True,
+ type_=type_)
def self_group(self, against=None):
# for the moment, we are parenthesizing all array-returning
# expressions against getitem. This may need to be made
# more portable if in the future we support other DBs
# besides postgresql.
- if against is operators.getitem:
+ if against is operators.getitem and \
+ isinstance(self.type, sqltypes.ARRAY):
return Grouping(self)
else:
return super(FunctionElement, self).self_group(against=against)
FunctionElement.__init__(self, *clauses, **kw)
- def _bind_param(self, operator, obj):
+ def _bind_param(self, operator, obj, type_=None):
return BindParameter(self.name, obj,
_compared_to_operator=operator,
_compared_to_type=self.type,
+ type_=type_,
unique=True)
"""support for the ARRAY_AGG function.
The ``func.array_agg(expr)`` construct returns an expression of
- type :class:`.Array`.
+ type :class:`.types.ARRAY`.
e.g.::
.. seealso::
:func:`.postgresql.array_agg` - PostgreSQL-specific version that
- returns :class:`.ARRAY`, which has PG-specific operators added.
+ returns :class:`.postgresql.ARRAY`, which has PG-specific operators added.
"""
- type = sqltypes.Array
+ type = sqltypes.ARRAY
def __init__(self, *args, **kwargs):
args = [_literal_as_binds(c) for c in args]
func_clauses = self.clause_expr.element
order_by = sqlutil.unwrap_order_by(within_group.order_by)
if self.array_for_multi_clause and len(func_clauses.clauses) > 1:
- return sqltypes.Array(order_by[0].type)
+ return sqltypes.ARRAY(order_by[0].type)
else:
return order_by[0].type
modifier to supply a sort expression to operate upon.
The return type of this function is the same as the sort expression,
- or if the arguments are an array, an :class:`.Array` of the sort
+ or if the arguments are an array, an :class:`.types.ARRAY` of the sort
expression's type.
.. versionadded:: 1.1
modifier to supply a sort expression to operate upon.
The return type of this function is the same as the sort expression,
- or if the arguments are an array, an :class:`.Array` of the sort
+ or if the arguments are an array, an :class:`.types.ARRAY` of the sort
expression's type.
.. versionadded:: 1.1
from .. import util
-
from operator import (
and_, or_, inv, add, mul, sub, mod, truediv, lt, le, ne, gt, ge, eq, neg,
getitem, lshift, rshift
def isfalse(a):
raise NotImplementedError()
-
def is_(a, b):
return a.is_(b)
return a.nullslast()
+def json_getitem_op(a, b):
+ raise NotImplementedError()
+
+
+def json_path_getitem_op(a, b):
+ raise NotImplementedError()
+
+
_commutative = set([eq, ne, add, mul])
_comparison = set([eq, ne, lt, gt, ge, le, between_op, like_op])
_associative = _commutative.union([concat_op, and_, or_])
-_natural_self_precedent = _associative.union([getitem])
+_natural_self_precedent = _associative.union([
+ getitem, json_getitem_op, json_path_getitem_op])
"""Operators where if we have (a op b) op c, we don't want to
parenthesize (a op b).
from_: 15,
any_op: 15,
all_op: 15,
+ json_getitem_op: 15,
+ json_path_getitem_op: 15,
getitem: 15,
mul: 8,
truediv: 8,
import datetime as dt
import codecs
+import collections
+import json
+from . import elements
from .type_api import TypeEngine, TypeDecorator, to_instance
-from .elements import quoted_name, TypeCoerce as type_coerce, _defer_name
+from .elements import quoted_name, TypeCoerce as type_coerce, _defer_name, \
+    Slice, _literal_as_binds
from .. import exc, util, processors
from .base import _bind_or_error, SchemaEventTarget
from . import operators
"""
- zero_indexes = False
- """if True, Python zero-based indexes should be interpreted as one-based
- on the SQL expression side."""
-
class Comparator(TypeEngine.Comparator):
def _setup_getitem(self, index):
raise NotImplementedError()
def __getitem__(self, index):
- operator, adjusted_right_expr, result_type = \
+ adjusted_op, adjusted_right_expr, result_type = \
self._setup_getitem(index)
return self.operate(
- operator,
+ adjusted_op,
adjusted_right_expr,
result_type=result_type
)
return self.impl.coerce_compared_value(op, value)
-class Array(Indexable, Concatenable, TypeEngine):
+class JSON(Indexable, TypeEngine):
+ """Represent a SQL JSON type.
+
+ .. note:: :class:`.types.JSON` is provided as a facade for vendor-specific
+ JSON types. Since it supports JSON SQL operations, it only
+ works on backends that have an actual JSON type, currently
+ Postgresql as well as certain versions of MySQL.
+
+ :class:`.types.JSON` is part of the Core in support of the growing
+ popularity of native JSON datatypes.
+
+ The :class:`.types.JSON` type stores arbitrary JSON format data, e.g.::
+
+ data_table = Table('data_table', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('data', JSON)
+ )
+
+ with engine.connect() as conn:
+ conn.execute(
+ data_table.insert(),
+ data = {"key1": "value1", "key2": "value2"}
+ )
+
+ The base :class:`.types.JSON` provides the following operations:
+
+ * Keyed index operations::
+
+ data_table.c.data['some key']
+
+ * Integer index operations::
+
+ data_table.c.data[3]
+
+ * Path index operations::
+
+ data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')]
+
+ Additional operations are available from the dialect-specific versions
+ of :class:`.types.JSON`, such as :class:`.postgresql.JSON` and
+ :class:`.postgresql.JSONB`, each of which offer more operators than
+ just the basic type.
+
+ Index operations return an expression object whose type defaults to
+ :class:`.JSON`, so that further JSON-oriented instructions
+ may be called upon the result type.
+
+ The :class:`.JSON` type, when used with the SQLAlchemy ORM, does not
+ detect in-place mutations to the structure. In order to detect these, the
+ :mod:`sqlalchemy.ext.mutable` extension must be used. This extension will
+ allow "in-place" changes to the datastructure to produce events which
+ will be detected by the unit of work. See the example at :class:`.HSTORE`
+ for a simple example involving a dictionary.
+
+ When working with NULL values, the :class:`.JSON` type recommends the
+ use of two specific constants in order to differentiate between a column
+ that evaluates to SQL NULL, e.g. no value, vs. the JSON-encoded string
+ of ``"null"``. To insert or select against a value that is SQL NULL,
+ use the constant :func:`.null`::
+
+ from sqlalchemy import null
+ conn.execute(table.insert(), json_value=null())
+
+ To insert or select against a value that is JSON ``"null"``, use the
+ constant :attr:`.JSON.NULL`::
+
+ conn.execute(table.insert(), json_value=JSON.NULL)
+
+ The :class:`.JSON` type supports a flag
+ :paramref:`.JSON.none_as_null` which when set to True will result
+ in the Python constant ``None`` evaluating to the value of SQL
+ NULL, and when set to False results in the Python constant
+ ``None`` evaluating to the value of JSON ``"null"``. The Python
+ value ``None`` may be used in conjunction with either
+ :attr:`.JSON.NULL` or :func:`.null` in order to indicate NULL
+ values, but care must be taken as to the value of the
+ :paramref:`.JSON.none_as_null` in these cases.
+
+ .. seealso::
+
+ :class:`.postgresql.JSON`
+
+ :class:`.postgresql.JSONB`
+
+ :class:`.mysql.JSON`
+
+ .. versionadded:: 1.1
+
+ """
+ __visit_name__ = 'JSON'
+
+ hashable = False
+ NULL = util.symbol('JSON_NULL')
+ """Describe the json value of NULL.
+
+ This value is used to force the JSON value of ``"null"`` to be
+ used as the value. A value of Python ``None`` will be recognized
+ either as SQL NULL or JSON ``"null"``, based on the setting
+ of the :paramref:`.JSON.none_as_null` flag; the :attr:`.JSON.NULL`
+ constant can be used to always resolve to JSON ``"null"`` regardless
+ of this setting. This is in contrast to the :func:`.sql.null` construct,
+ which always resolves to SQL NULL. E.g.::
+
+ from sqlalchemy import null
+ from sqlalchemy.dialects.postgresql import JSON
+
+ obj1 = MyObject(json_value=null()) # will *always* insert SQL NULL
+ obj2 = MyObject(json_value=JSON.NULL) # will *always* insert JSON string "null"
+
+ session.add_all([obj1, obj2])
+ session.commit()
+
+ """
+
+ def __init__(self, none_as_null=False):
+ """Construct a :class:`.types.JSON` type.
+
+ :param none_as_null=False: if True, persist the value ``None`` as a
+ SQL NULL value, not the JSON encoding of ``null``. Note that
+ when this flag is False, the :func:`.null` construct can still
+ be used to persist a NULL value::
+
+ from sqlalchemy import null
+ conn.execute(table.insert(), data=null())
+
+ .. seealso::
+
+ :attr:`.types.JSON.NULL`
+
+ """
+ self.none_as_null = none_as_null
+
+ class JSONIndexType(TypeEngine):
+ """Placeholder for the datatype of a JSON index value.
+
+ This allows execution-time processing of JSON index values
+ for special syntaxes.
+
+ """
+
+ class JSONPathType(TypeEngine):
+ """Placeholder type for JSON path operations.
+
+ This allows execution-time processing of a path-based
+ index value into a specific SQL syntax.
+
+ """
+
+ class Comparator(Indexable.Comparator, Concatenable.Comparator):
+ """Define comparison operations for :class:`.types.JSON`."""
+
+ @util.dependencies('sqlalchemy.sql.default_comparator')
+ def _setup_getitem(self, default_comparator, index):
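+            # a non-string sequence, e.g. a tuple, indicates a JSON path
+            # ("getpath") operation; a scalar string or integer uses the
+            # plain "getitem" operator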
+ if not isinstance(index, util.string_types) and \
+ isinstance(index, collections.Sequence):
+ index = default_comparator._check_literal(
+ self.expr, operators.json_path_getitem_op,
+ index, bindparam_type=JSON.JSONPathType
+ )
+
+ operator = operators.json_path_getitem_op
+ else:
+ index = default_comparator._check_literal(
+ self.expr, operators.json_getitem_op,
+ index, bindparam_type=JSON.JSONIndexType
+ )
+ operator = operators.json_getitem_op
+
+ return operator, index, self.type
+
+ comparator_factory = Comparator
+
+ @property
+ def should_evaluate_none(self):
+ return not self.none_as_null
+
+ @util.memoized_property
+ def _str_impl(self):
+ return String(convert_unicode=True)
+
+ def bind_processor(self, dialect):
+ string_process = self._str_impl.bind_processor(dialect)
+
+ json_serializer = dialect._json_serializer or json.dumps
+
+ def process(value):
+ if value is self.NULL:
+ value = None
+ elif isinstance(value, elements.Null) or (
+ value is None and self.none_as_null
+ ):
+ return None
+
+ serialized = json_serializer(value)
+ if string_process:
+ serialized = string_process(serialized)
+ return serialized
+
+ return process
+
+ def result_processor(self, dialect, coltype):
+ string_process = self._str_impl.result_processor(dialect, coltype)
+ json_deserializer = dialect._json_deserializer or json.loads
+
+ def process(value):
+ if value is None:
+ return None
+ if string_process:
+ value = string_process(value)
+ return json_deserializer(value)
+ return process
+
+
+class ARRAY(Indexable, Concatenable, TypeEngine):
"""Represent a SQL Array type.
.. note:: This type serves as the basis for all ARRAY operations.
with PostgreSQL, as it provides additional operators specific
to that backend.
- :class:`.Array` is part of the Core in support of various SQL standard
+ :class:`.types.ARRAY` is part of the Core in support of various SQL standard
functions such as :class:`.array_agg` which explicitly involve arrays;
however, with the exception of the PostgreSQL backend and possibly
some third-party dialects, no other SQLAlchemy built-in dialect has
support for this type.
- An :class:`.Array` type is constructed given the "type"
+ An :class:`.types.ARRAY` type is constructed given the "type"
of element::
mytable = Table("mytable", metadata,
- Column("data", Array(Integer))
+ Column("data", ARRAY(Integer))
)
The above type represents an N-dimensional array,
data=[1,2,3]
)
- The :class:`.Array` type can be constructed given a fixed number
+ The :class:`.types.ARRAY` type can be constructed given a fixed number
of dimensions::
mytable = Table("mytable", metadata,
- Column("data", Array(Integer, dimensions=2))
+ Column("data", ARRAY(Integer, dimensions=2))
)
Sending a number of dimensions is optional, but recommended if the
>>> expr = table.c.column[5] # returns ARRAY(Integer, dimensions=1)
>>> expr = expr[6] # returns Integer
- For 1-dimensional arrays, an :class:`.Array` instance with no
+ For 1-dimensional arrays, an :class:`.types.ARRAY` instance with no
dimension parameter will generally assume single-dimensional behaviors.
- SQL expressions of type :class:`.Array` have support for "index" and
+ SQL expressions of type :class:`.types.ARRAY` have support for "index" and
"slice" behavior. The Python ``[]`` operator works normally here, given
integer indexes or slices. Arrays default to 1-based indexing.
The operator produces binary expression
mytable.c.data[2:7]: [1, 2, 3]
})
- The :class:`.Array` type also provides for the operators
- :meth:`.Array.Comparator.any` and :meth:`.Array.Comparator.all`.
- The PostgreSQL-specific version of :class:`.Array` also provides additional
+ The :class:`.types.ARRAY` type also provides for the operators
+ :meth:`.types.ARRAY.Comparator.any` and :meth:`.types.ARRAY.Comparator.all`.
+ The PostgreSQL-specific version of :class:`.types.ARRAY` also provides additional
operators.
.. versionadded:: 1.1.0
"""
__visit_name__ = 'ARRAY'
+ zero_indexes = False
+ """if True, Python zero-based indexes should be interpreted as one-based
+ on the SQL expression side."""
+
class Comparator(Indexable.Comparator, Concatenable.Comparator):
- """Define comparison operations for :class:`.Array`.
+ """Define comparison operations for :class:`.types.ARRAY`.
More operators are available on the dialect-specific form
of this type. See :class:`.postgresql.ARRAY.Comparator`.
def _setup_getitem(self, index):
if isinstance(index, slice):
return_type = self.type
- elif self.type.dimensions is None or self.type.dimensions == 1:
- return_type = self.type.item_type
+ if self.type.zero_indexes:
+ index = slice(
+ index.start + 1,
+ index.stop + 1,
+ index.step
+ )
+ index = Slice(
+ _literal_as_binds(
+ index.start, name=self.expr.key,
+ type_=type_api.INTEGERTYPE),
+ _literal_as_binds(
+ index.stop, name=self.expr.key,
+ type_=type_api.INTEGERTYPE),
+ _literal_as_binds(
+ index.step, name=self.expr.key,
+ type_=type_api.INTEGERTYPE)
+ )
else:
- adapt_kw = {'dimensions': self.type.dimensions - 1}
- return_type = self.type.adapt(self.type.__class__, **adapt_kw)
+ if self.type.zero_indexes:
+ index += 1
+ if self.type.dimensions is None or self.type.dimensions == 1:
+ return_type = self.type.item_type
+ else:
+ adapt_kw = {'dimensions': self.type.dimensions - 1}
+ return_type = self.type.adapt(
+ self.type.__class__, **adapt_kw)
return operators.getitem, index, return_type
:func:`.sql.expression.any_`
- :meth:`.Array.Comparator.all`
+ :meth:`.types.ARRAY.Comparator.all`
"""
operator = operator if operator else operators.eq
:func:`.sql.expression.all_`
- :meth:`.Array.Comparator.any`
+ :meth:`.types.ARRAY.Comparator.any`
"""
operator = operator if operator else operators.eq
def __init__(self, item_type, as_tuple=False, dimensions=None,
zero_indexes=False):
- """Construct an :class:`.Array`.
+ """Construct an :class:`.types.ARRAY`.
E.g.::
- Column('myarray', Array(Integer))
+ Column('myarray', ARRAY(Integer))
Arguments are:
:param item_type: The data type of items of this array. Note that
dimensionality is irrelevant here, so multi-dimensional arrays like
- ``INTEGER[][]``, are constructed as ``Array(Integer)``, not as
- ``Array(Array(Integer))`` or such.
+ ``INTEGER[][]`` are constructed as ``ARRAY(Integer)``, not as
+ ``ARRAY(ARRAY(Integer))`` or such.
:param as_tuple=False: Specify whether return results
should be converted to tuples from lists. This parameter is
on the database, how it goes about interpreting Python and
result values, as well as how expression behavior in conjunction
with the "getitem" operator works. See the description at
- :class:`.Array` for additional detail.
+ :class:`.types.ARRAY` for additional detail.
:param zero_indexes=False: when True, index values will be converted
between Python zero-based and SQL one-based indexes, e.g.
a value of one will be added to all index values before passing
to the database.
"""
- if isinstance(item_type, Array):
+ if isinstance(item_type, ARRAY):
raise ValueError("Do not nest ARRAY types; ARRAY(basetype) "
"handles multi-dimensional arrays of basetype")
if isinstance(item_type, type):
return exclusions.open()
+ @property
+ def json_type(self):
+ """target platform implements a native JSON type."""
+
+ return exclusions.closed()
+
@property
def precision_numerics_general(self):
"""target backend has general support for moderately high-precision
from ..config import requirements
from sqlalchemy import Integer, Unicode, UnicodeText, select
from sqlalchemy import Date, DateTime, Time, MetaData, String, \
- Text, Numeric, Float, literal, Boolean
+ Text, Numeric, Float, literal, Boolean, cast, null, JSON
from ..schema import Table, Column
from ... import testing
import decimal
)
-__all__ = ('UnicodeVarcharTest', 'UnicodeTextTest',
+class JSONTest(_LiteralRoundTripFixture, fixtures.TablesTest):
+ __requires__ = 'json_type',
+ __backend__ = True
+
+ datatype = JSON
+
+ data1 = {
+ "key1": "value1",
+ "key2": "value2"
+ }
+
+ data2 = {
+ "Key 'One'": "value1",
+ "key two": "value2",
+ "key three": "value ' three '"
+ }
+
+ data3 = {
+ "key1": [1, 2, 3],
+ "key2": ["one", "two", "three"],
+ "key3": [{"four": "five"}, {"six": "seven"}]
+ }
+
+ data4 = ["one", "two", "three"]
+
+ data5 = {
+ "nested": {
+ "elem1": [
+ {"a": "b", "c": "d"},
+ {"e": "f", "g": "h"}
+ ],
+ "elem2": {
+ "elem3": {"elem4": "elem5"}
+ }
+ }
+ }
+
+ @classmethod
+ def define_tables(cls, metadata):
+ Table('data_table', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('name', String(30), nullable=False),
+ Column('data', cls.datatype),
+ Column('nulldata', cls.datatype(none_as_null=True))
+ )
+
+ def test_round_trip_data1(self):
+ self._test_round_trip(self.data1)
+
+ def _test_round_trip(self, data_element):
+ data_table = self.tables.data_table
+
+ config.db.execute(
+ data_table.insert(),
+ {'name': 'row1', 'data': data_element}
+ )
+
+ row = config.db.execute(
+ select([
+ data_table.c.data,
+ ])
+ ).first()
+
+ eq_(row, (data_element, ))
+
+ def test_round_trip_none_as_sql_null(self):
+ col = self.tables.data_table.c['nulldata']
+
+ with config.db.connect() as conn:
+ conn.execute(
+ self.tables.data_table.insert(),
+ {"name": "r1", "data": None}
+ )
+
+ eq_(
+ conn.scalar(
+ select([self.tables.data_table.c.name]).
+ where(col.is_(null()))
+ ),
+ "r1"
+ )
+
+ eq_(
+ conn.scalar(
+ select([col])
+ ),
+ None
+ )
+
+ def test_round_trip_json_null_as_json_null(self):
+ col = self.tables.data_table.c['data']
+
+ with config.db.connect() as conn:
+ conn.execute(
+ self.tables.data_table.insert(),
+ {"name": "r1", "data": JSON.NULL}
+ )
+
+ eq_(
+ conn.scalar(
+ select([self.tables.data_table.c.name]).
+ where(cast(col, String) == 'null')
+ ),
+ "r1"
+ )
+
+ eq_(
+ conn.scalar(
+ select([col])
+ ),
+ None
+ )
+
+ def test_round_trip_none_as_json_null(self):
+ col = self.tables.data_table.c['data']
+
+ with config.db.connect() as conn:
+ conn.execute(
+ self.tables.data_table.insert(),
+ {"name": "r1", "data": None}
+ )
+
+ eq_(
+ conn.scalar(
+ select([self.tables.data_table.c.name]).
+ where(cast(col, String) == 'null')
+ ),
+ "r1"
+ )
+
+ eq_(
+ conn.scalar(
+ select([col])
+ ),
+ None
+ )
+
+ def _criteria_fixture(self):
+ config.db.execute(
+ self.tables.data_table.insert(),
+ [{"name": "r1", "data": self.data1},
+ {"name": "r2", "data": self.data2},
+ {"name": "r3", "data": self.data3},
+ {"name": "r4", "data": self.data4},
+ {"name": "r5", "data": self.data5}]
+ )
+
+ def _test_index_criteria(self, crit, expected):
+ self._criteria_fixture()
+ with config.db.connect() as conn:
+ eq_(
+ conn.scalar(
+ select([self.tables.data_table.c.name]).
+ where(crit)
+ ),
+ expected
+ )
+
+ def test_crit_spaces_in_key(self):
+ col = self.tables.data_table.c['data']
+ self._test_index_criteria(
+ cast(col["key two"], String) == '"value2"',
+ "r2"
+ )
+
+ def test_crit_simple_int(self):
+ col = self.tables.data_table.c['data']
+ self._test_index_criteria(
+ cast(col[1], String) == '"two"',
+ "r4"
+ )
+
+ def test_crit_mixed_path(self):
+ col = self.tables.data_table.c['data']
+ self._test_index_criteria(
+ cast(col[("key3", 1, "six")], String) == '"seven"',
+ "r3"
+ )
+
+ def test_crit_string_path(self):
+ col = self.tables.data_table.c['data']
+ self._test_index_criteria(
+ cast(col[("nested", "elem2", "elem3", "elem4")], String)
+ == '"elem5"',
+ "r5"
+ )
+
+ def test_unicode_round_trip(self):
+ s = select([
+ cast(
+ {
+ util.u('réveillé'): util.u('réveillé'),
+ "data": {"k1": util.u('drôle')}
+ },
+ self.datatype
+ )
+ ])
+ eq_(
+ config.db.scalar(s),
+ {
+ util.u('réveillé'): util.u('réveillé'),
+ "data": {"k1": util.u('drôle')}
+ },
+ )
+
+ def test_eval_none_flag_orm(self):
+ from sqlalchemy.ext.declarative import declarative_base
+ from sqlalchemy.orm import Session
+
+ Base = declarative_base()
+
+ class Data(Base):
+ __table__ = self.tables.data_table
+
+ s = Session(testing.db)
+
+ d1 = Data(name='d1', data=None, nulldata=None)
+ s.add(d1)
+ s.commit()
+
+ s.bulk_insert_mappings(
+ Data, [{"name": "d2", "data": None, "nulldata": None}]
+ )
+ eq_(
+ s.query(
+ cast(self.tables.data_table.c.data, String),
+ cast(self.tables.data_table.c.nulldata, String)
+ ).filter(self.tables.data_table.c.name == 'd1').first(),
+ ("null", None)
+ )
+ eq_(
+ s.query(
+ cast(self.tables.data_table.c.data, String),
+ cast(self.tables.data_table.c.nulldata, String)
+ ).filter(self.tables.data_table.c.name == 'd2').first(),
+ ("null", None)
+ )
+
+
+__all__ = ('UnicodeVarcharTest', 'UnicodeTextTest', 'JSONTest',
'DateTest', 'DateTimeTest', 'TextTest',
'NumericTest', 'IntegerTest',
'DateTimeHistoricTest', 'DateTimeCoercedToDateTimeTest',
'SmallInteger', 'BigInteger', 'Numeric', 'Float', 'DateTime',
'Date', 'Time', 'LargeBinary', 'Binary', 'Boolean', 'Unicode',
'Concatenable', 'UnicodeText', 'PickleType', 'Interval', 'Enum',
- 'Indexable', 'Array']
+ 'Indexable', 'ARRAY', 'JSON']
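Taken together, the round-trip tests above pin down the null-handling
contract: on a plain :class:`.types.JSON` column, both Python ``None``
and the ``JSON.NULL`` constant persist as the JSON ``'null'`` value,
while ``none_as_null=True`` routes ``None`` to SQL NULL instead. A
minimal application-level sketch of the same column setup (table name
hypothetical)::

    from sqlalchemy import Column, Integer, MetaData, String, Table
    from sqlalchemy.types import JSON

    metadata = MetaData()
    documents = Table(
        'documents', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(30), nullable=False),
        # None assigned here is stored as the JSON 'null' value
        Column('data', JSON),
        # None assigned here falls through to SQL NULL
        Column('nulldata', JSON(none_as_null=True)),
    )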
from .sql.type_api import (
adapt_type,
UserDefinedType
)
from .sql.sqltypes import (
- Array,
+ ARRAY,
BIGINT,
BINARY,
BLOB,
INTEGER,
Integer,
Interval,
+ JSON,
LargeBinary,
MatchType,
NCHAR,
import datetime
from sqlalchemy import Table, MetaData, Column, Integer, Enum, Float, select, \
func, DateTime, Numeric, exc, String, cast, REAL, TypeDecorator, Unicode, \
- Text, null, text, column, Array, any_, all_
+ Text, null, text, column, ARRAY, any_, all_
from sqlalchemy.sql import operators
from sqlalchemy import types
import sqlalchemy as sa
def test_array_index_map_dimensions(self):
col = column('x', postgresql.ARRAY(Integer, dimensions=3))
is_(
- col[5].type._type_affinity, Array
+ col[5].type._type_affinity, ARRAY
)
assert isinstance(
col[5].type, postgresql.ARRAY
col[5].type.dimensions, 2
)
is_(
- col[5][6].type._type_affinity, Array
+ col[5][6].type._type_affinity, ARRAY
)
assert isinstance(
col[5][6].type, postgresql.ARRAY
)
# type affinity is ARRAY...
- is_(arrtable.c.intarr[1:3].type._type_affinity, Array)
- is_(arrtable.c.strarr[1:3].type._type_affinity, Array)
+ is_(arrtable.c.intarr[1:3].type._type_affinity, ARRAY)
+ is_(arrtable.c.strarr[1:3].type._type_affinity, ARRAY)
# but the slice returns the actual type
assert isinstance(arrtable.c.intarr[1:3].type, postgresql.ARRAY)
type_=postgresql.ARRAY(Integer)
)[3],
"(array_cat(ARRAY[%(param_1)s, %(param_2)s, %(param_3)s], "
- "ARRAY[%(param_4)s, %(param_5)s, %(param_6)s]))[%(param_7)s]"
+ "ARRAY[%(param_4)s, %(param_5)s, %(param_6)s]))[%(array_cat_1)s]"
)
def test_array_agg_generic(self):
expr = func.array_agg(column('q', Integer))
- is_(expr.type.__class__, types.Array)
+ is_(expr.type.__class__, types.ARRAY)
is_(expr.type.item_type.__class__, Integer)
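As the assertions above show, the generic ``func.array_agg`` now derives
a :class:`.types.ARRAY` of the aggregated column's type, so array index
operations are available directly on the expression; a brief sketch::

    from sqlalchemy import Integer, column, func, select

    # typed as ARRAY(Integer), so "getitem" works; the index is
    # one-based, following SQL array semantics
    agg = func.array_agg(column('q', Integer))
    stmt = select([agg[1]])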
def test_array_agg_specific(self):
def test_where_getitem(self):
self._test_where(
self.hashcol['bar'] == None,
- "(test_table.hash -> %(hash_1)s) IS NULL"
+ "test_table.hash -> %(hash_1)s IS NULL"
)
def test_cols_get(self):
def test_cols_concat_get(self):
self._test_cols(
(self.hashcol + self.hashcol)['foo'],
- "test_table.hash || test_table.hash -> %(param_1)s AS anon_1"
+ "(test_table.hash || test_table.hash) -> %(param_1)s AS anon_1"
)
def test_cols_keys(self):
cols = insp.get_columns('data_table')
assert isinstance(cols[2]['type'], HSTORE)
+ def test_literal_round_trip(self):
+ # in particular, this tests that the array index
+ # operator against the function is handled by PG; with some
+ # array functions it requires outer parenthesization on the left and
+ # we may not be doing that here
+ expr = hstore(
+ postgresql.array(['1', '2']),
+ postgresql.array(['3', None]))['1']
+ eq_(
+ testing.db.scalar(
+ select([expr])
+ ),
+ "3"
+ )
+
@testing.requires.psycopg2_native_hstore
def test_insert_native(self):
engine = testing.db
) % expected
)
- def test_bind_serialize_default(self):
- dialect = postgresql.dialect()
- proc = self.test_table.c.test_column.type._cached_bind_processor(
- dialect)
- eq_(
- proc({"A": [1, 2, 3, True, False]}),
- '{"A": [1, 2, 3, true, false]}'
- )
-
- def test_bind_serialize_None(self):
- dialect = postgresql.dialect()
- proc = self.test_table.c.test_column.type._cached_bind_processor(
- dialect)
- eq_(
- proc(None),
- 'null'
- )
-
- def test_bind_serialize_none_as_null(self):
- dialect = postgresql.dialect()
- proc = JSON(none_as_null=True)._cached_bind_processor(
- dialect)
- eq_(
- proc(None),
- None
- )
- eq_(
- proc(null()),
- None
- )
-
- def test_bind_serialize_null(self):
- dialect = postgresql.dialect()
- proc = self.test_table.c.test_column.type._cached_bind_processor(
- dialect)
- eq_(
- proc(null()),
- None
- )
-
- def test_result_deserialize_default(self):
- dialect = postgresql.dialect()
- proc = self.test_table.c.test_column.type._cached_result_processor(
- dialect, None)
- eq_(
- proc('{"A": [1, 2, 3, true, false]}'),
- {"A": [1, 2, 3, True, False]}
- )
-
- def test_result_deserialize_null(self):
- dialect = postgresql.dialect()
- proc = self.test_table.c.test_column.type._cached_result_processor(
- dialect, None)
- eq_(
- proc('null'),
- None
- )
-
- def test_result_deserialize_None(self):
- dialect = postgresql.dialect()
- proc = self.test_table.c.test_column.type._cached_result_processor(
- dialect, None)
- eq_(
- proc(None),
- None
- )
-
# This test is a bit misleading -- in real life you will need to cast to
# do anything
def test_where_getitem(self):
self._test_where(
self.jsoncol['bar'] == None,
- "(test_table.test_column -> %(test_column_1)s) IS NULL"
+ "test_table.test_column -> %(test_column_1)s IS NULL"
)
def test_where_path(self):
self._test_where(
self.jsoncol[("foo", 1)] == None,
- "(test_table.test_column #> %(test_column_1)s) IS NULL"
+ "test_table.test_column #> %(test_column_1)s IS NULL"
)
def test_path_typing(self):
col = column('x', JSON())
is_(
- col['q'].type._type_affinity, JSON
+ col['q'].type._type_affinity, types.JSON
)
is_(
- col[('q', )].type._type_affinity, JSON
+ col[('q', )].type._type_affinity, types.JSON
)
is_(
- col['q']['p'].type._type_affinity, JSON
+ col['q']['p'].type._type_affinity, types.JSON
)
is_(
- col[('q', 'p')].type._type_affinity, JSON
+ col[('q', 'p')].type._type_affinity, types.JSON
)
def test_custom_astext_type(self):
def test_where_getitem_as_text(self):
self._test_where(
self.jsoncol['bar'].astext == None,
- "(test_table.test_column ->> %(test_column_1)s) IS NULL"
+ "test_table.test_column ->> %(test_column_1)s IS NULL"
)
def test_where_getitem_astext_cast(self):
def test_where_path_as_text(self):
self._test_where(
self.jsoncol[("foo", 1)].astext == None,
- "(test_table.test_column #>> %(test_column_1)s) IS NULL"
+ "test_table.test_column #>> %(test_column_1)s IS NULL"
)
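These compilation changes record the removal of redundant parentheses
around the ``->`` / ``#>`` operator family. As the comment above notes,
real-world criteria usually cast the extracted value before comparing;
a hedged sketch against a hypothetical table::

    from sqlalchemy import Column, Integer, MetaData, Table, cast, select
    from sqlalchemy.dialects.postgresql import JSON

    metadata = MetaData()
    data_table = Table(
        'data_table', metadata,
        Column('id', Integer, primary_key=True),
        Column('data', JSON),
    )

    # astext renders the ->> operator; its result is text, so cast
    # it in order to compare numerically
    stmt = select([data_table.c.id]).where(
        cast(data_table.c.data['counter'].astext, Integer) == 5
    )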
def test_cols_get(self):
return fails_on_everything_except('postgresql', 'oracle', 'mssql',
'sybase')
+ @property
+ def json_type(self):
+ return only_on([
+ lambda config: against(config, "mysql >= 5.7") and
+ not config.db.dialect._is_mariadb,
+ "postgresql >= 9.3"
+ ])
+
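The predicate above opens native JSON support only on MySQL 5.7 and
later (excluding MariaDB) and on Postgresql 9.3 and later. Suite tests
opt into the gate by naming the requirement, as the ``JSONTest`` suite
class earlier in this patch does; a minimal sketch::

    from sqlalchemy.testing import fixtures

    class MyJSONTest(fixtures.TablesTest):
        # skipped entirely unless the backend opens 'json_type'
        __requires__ = 'json_type',
        __backend__ = True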
@property
def datetime_literals(self):
"""target dialect supports rendering of a date, time, or datetime as a
import datetime
from sqlalchemy import func, select, Integer, literal, DateTime, Table, \
Column, Sequence, MetaData, extract, Date, String, bindparam, \
- literal_column, Array, Numeric
+ literal_column, ARRAY, Numeric
from sqlalchemy.sql import table, column
from sqlalchemy import sql, util
from sqlalchemy.sql.compiler import BIND_TEMPLATES
def test_array_agg(self):
expr = func.array_agg(column('data', Integer))
- is_(expr.type._type_affinity, Array)
+ is_(expr.type._type_affinity, ARRAY)
is_(expr.type.item_type._type_affinity, Integer)
def test_mode(self):
def test_percentile_cont_array(self):
expr = func.percentile_cont(0.5, 0.7).within_group(
column('data', Integer))
- is_(expr.type._type_affinity, Array)
+ is_(expr.type._type_affinity, ARRAY)
is_(expr.type.item_type._type_affinity, Integer)
def test_percentile_cont_array_desc(self):
expr = func.percentile_cont(0.5, 0.7).within_group(
column('data', Integer).desc())
- is_(expr.type._type_affinity, Array)
+ is_(expr.type._type_affinity, ARRAY)
is_(expr.type.item_type._type_affinity, Integer)
def test_cume_dist(self):
from sqlalchemy.schema import Column, Table, MetaData
from sqlalchemy.sql import compiler
from sqlalchemy.types import TypeEngine, TypeDecorator, UserDefinedType, \
- Boolean, NullType, MatchType, Indexable, Concatenable, Array
+ Boolean, NullType, MatchType, Indexable, Concatenable, ARRAY, JSON
from sqlalchemy.dialects import mysql, firebird, postgresql, oracle, \
sqlite, mssql
from sqlalchemy import util
)
-class IndexableTest(fixtures.TestBase, testing.AssertsCompiledSQL):
+class JSONIndexOpTest(fixtures.TestBase, testing.AssertsCompiledSQL):
+ def setUp(self):
+ class MyTypeCompiler(compiler.GenericTypeCompiler):
+ def visit_mytype(self, type, **kw):
+ return "MYTYPE"
+
+ def visit_myothertype(self, type, **kw):
+ return "MYOTHERTYPE"
+
+ class MyCompiler(compiler.SQLCompiler):
+
+ def visit_json_getitem_op_binary(self, binary, operator, **kw):
+ return self._generate_generic_binary(
+ binary, " -> ", **kw
+ )
+
+ def visit_json_path_getitem_op_binary(
+ self, binary, operator, **kw):
+ return self._generate_generic_binary(
+ binary, " #> ", **kw
+ )
+
+ def visit_getitem_binary(self, binary, operator, **kw):
+ raise NotImplementedError()
+
+ class MyDialect(default.DefaultDialect):
+ statement_compiler = MyCompiler
+ type_compiler = MyTypeCompiler
+
+ class MyType(JSON):
+ __visit_name__ = 'mytype'
+
+ self.MyType = MyType
+ self.__dialect__ = MyDialect()
+
+ def test_setup_getitem(self):
+ col = Column('x', self.MyType())
+
+ is_(
+ col[5].type._type_affinity, JSON
+ )
+ is_(
+ col[5]['foo'].type._type_affinity, JSON
+ )
+ is_(
+ col[('a', 'b', 'c')].type._type_affinity, JSON
+ )
+
+ def test_getindex_literal_integer(self):
+
+ col = Column('x', self.MyType())
+
+ self.assert_compile(
+ col[5],
+ "x -> :x_1",
+ checkparams={'x_1': 5}
+ )
+
+ def test_getindex_literal_string(self):
+
+ col = Column('x', self.MyType())
+
+ self.assert_compile(
+ col['foo'],
+ "x -> :x_1",
+ checkparams={'x_1': 'foo'}
+ )
+
+ def test_path_getindex_literal(self):
+
+ col = Column('x', self.MyType())
+
+ self.assert_compile(
+ col[('a', 'b', 3, 4, 'd')],
+ "x #> :x_1",
+ checkparams={'x_1': ('a', 'b', 3, 4, 'd')}
+ )
+
+ def test_getindex_sqlexpr(self):
+
+ col = Column('x', self.MyType())
+ col2 = Column('y', Integer())
+
+ self.assert_compile(
+ col[col2],
+ "x -> y",
+ checkparams={}
+ )
+
+ self.assert_compile(
+ col[col2 + 8],
+ "x -> (y + :y_1)",
+ checkparams={'y_1': 8}
+ )
+
+ def test_override_operators(self):
+ special_index_op = operators.custom_op('$$>')
+
+ class MyOtherType(JSON, TypeEngine):
+ __visit_name__ = 'myothertype'
+
+ class Comparator(TypeEngine.Comparator):
+
+ def _adapt_expression(self, op, other_comparator):
+ return special_index_op, MyOtherType()
+
+ comparator_factory = Comparator
+
+ col = Column('x', MyOtherType())
+ self.assert_compile(
+ col[5],
+ "x $$> :x_1",
+ checkparams={'x_1': 5}
+ )
+
+
+class ArrayIndexOpTest(fixtures.TestBase, testing.AssertsCompiledSQL):
def setUp(self):
class MyTypeCompiler(compiler.GenericTypeCompiler):
def visit_mytype(self, type, **kw):
statement_compiler = MyCompiler
type_compiler = MyTypeCompiler
- class MyType(Indexable, TypeEngine):
+ class MyType(ARRAY):
__visit_name__ = 'mytype'
def __init__(self, zero_indexes=False, dimensions=1):
if zero_indexes:
self.zero_indexes = zero_indexes
self.dimensions = dimensions
-
- class Comparator(Indexable.Comparator):
- def _setup_getitem(self, index):
- if isinstance(index, slice):
- return_type = self.type
- elif self.type.dimensions is None or \
- self.type.dimensions == 1:
- return_type = Integer()
- else:
- adapt_kw = {'dimensions': self.type.dimensions - 1}
- # this is also testing the behavior of adapt()
- # that we can pass kw that override constructor kws.
- # required a small change to util.constructor_copy().
- return_type = self.type.adapt(
- self.type.__class__, **adapt_kw)
-
- return operators.getitem, index, return_type
- comparator_factory = Comparator
+ self.item_type = Integer()
self.MyType = MyType
self.__dialect__ = MyDialect()
col = Column('x', self.MyType(dimensions=3))
is_(
- col[5].type._type_affinity, self.MyType
+ col[5].type._type_affinity, ARRAY
)
eq_(
col[5].type.dimensions, 2
)
is_(
- col[5][6].type._type_affinity, self.MyType
+ col[5][6].type._type_affinity, ARRAY
)
eq_(
col[5][6].type.dimensions, 1
t = Table(
'tab1', m,
- Column('arrval', Array(Integer)),
+ Column('arrval', ARRAY(Integer)),
Column('data', Integer)
)
return t
and_, func, Date, LargeBinary, literal, cast, text, Enum,
type_coerce, VARCHAR, Time, DateTime, BigInteger, SmallInteger, BOOLEAN,
BLOB, NCHAR, NVARCHAR, CLOB, TIME, DATE, DATETIME, TIMESTAMP, SMALLINT,
- INTEGER, DECIMAL, NUMERIC, FLOAT, REAL, Array)
+ INTEGER, DECIMAL, NUMERIC, FLOAT, REAL, ARRAY, JSON)
from sqlalchemy.sql import ddl
from sqlalchemy.sql import visitors
from sqlalchemy import inspection
from sqlalchemy import exc, types, util, dialects
for name in dialects.__all__:
__import__("sqlalchemy.dialects.%s" % name)
-from sqlalchemy.sql import operators, column, table
+from sqlalchemy.sql import operators, column, table, null
from sqlalchemy.schema import CheckConstraint, AddConstraint
from sqlalchemy.engine import default
from sqlalchemy.testing.schema import Table, Column
for is_down_adaption, typ, target_adaptions in adaptions():
if typ in (types.TypeDecorator, types.TypeEngine, types.Variant):
continue
- elif issubclass(typ, Array):
+ elif issubclass(typ, ARRAY):
t1 = typ(String)
else:
t1 = typ()
if not issubclass(typ, types.Enum) and \
issubclass(cls, types.Enum):
continue
+ if cls.__module__.startswith("test"):
+ continue
# print("ADAPT %s -> %s" % (t1.__class__, cls))
t2 = t1.adapt(cls)
for typ in self._all_types():
if typ in (types.TypeDecorator, types.TypeEngine, types.Variant):
continue
- elif issubclass(typ, Array):
+ elif issubclass(typ, ARRAY):
t1 = typ(String)
else:
t1 = typ()
return o.read()
+class JSONTest(fixtures.TestBase):
+
+ def setup(self):
+ metadata = MetaData()
+ self.test_table = Table('test_table', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('test_column', JSON),
+ )
+ self.jsoncol = self.test_table.c.test_column
+
+ self.dialect = default.DefaultDialect()
+ self.dialect._json_serializer = None
+ self.dialect._json_deserializer = None
+
+ def test_bind_serialize_default(self):
+ proc = self.test_table.c.test_column.type._cached_bind_processor(
+ self.dialect)
+ eq_(
+ proc({"A": [1, 2, 3, True, False]}),
+ '{"A": [1, 2, 3, true, false]}'
+ )
+
+ def test_bind_serialize_None(self):
+ proc = self.test_table.c.test_column.type._cached_bind_processor(
+ self.dialect)
+ eq_(
+ proc(None),
+ 'null'
+ )
+
+ def test_bind_serialize_none_as_null(self):
+ proc = JSON(none_as_null=True)._cached_bind_processor(
+ self.dialect)
+ eq_(
+ proc(None),
+ None
+ )
+ eq_(
+ proc(null()),
+ None
+ )
+
+ def test_bind_serialize_null(self):
+ proc = self.test_table.c.test_column.type._cached_bind_processor(
+ self.dialect)
+ eq_(
+ proc(null()),
+ None
+ )
+
+ def test_result_deserialize_default(self):
+ proc = self.test_table.c.test_column.type._cached_result_processor(
+ self.dialect, None)
+ eq_(
+ proc('{"A": [1, 2, 3, true, false]}'),
+ {"A": [1, 2, 3, True, False]}
+ )
+
+ def test_result_deserialize_null(self):
+ proc = self.test_table.c.test_column.type._cached_result_processor(
+ self.dialect, None)
+ eq_(
+ proc('null'),
+ None
+ )
+
+ def test_result_deserialize_None(self):
+ proc = self.test_table.c.test_column.type._cached_result_processor(
+ self.dialect, None)
+ eq_(
+ proc(None),
+ None
+ )
+
+
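The relocated unit tests above now build bind/result processors against
a plain ``DefaultDialect`` whose ``_json_serializer`` /
``_json_deserializer`` hooks are unset, exercising the default
``json.dumps`` / ``json.loads`` path. On a backend with native JSON
support these hooks can be swapped at engine level; a sketch using the
long-standing Postgresql engine arguments (connection URL is a
placeholder)::

    import json
    from sqlalchemy import create_engine

    # json_serializer / json_deserializer replace the stdlib defaults
    # used when binding and fetching JSON column values
    engine = create_engine(
        "postgresql://scott:tiger@localhost/test",
        json_serializer=lambda value: json.dumps(value, sort_keys=True),
        json_deserializer=json.loads,
    )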
class ArrayTest(fixtures.TestBase):
def _myarray_fixture(self):
- class MyArray(Array):
+ class MyArray(ARRAY):
pass
return MyArray
def test_array_index_map_dimensions(self):
- col = column('x', Array(Integer, dimensions=3))
+ col = column('x', ARRAY(Integer, dimensions=3))
is_(
- col[5].type._type_affinity, Array
+ col[5].type._type_affinity, ARRAY
)
eq_(
col[5].type.dimensions, 2
)
is_(
- col[5][6].type._type_affinity, Array
+ col[5][6].type._type_affinity, ARRAY
)
eq_(
col[5][6].type.dimensions, 1
m = MetaData()
arrtable = Table(
'arrtable', m,
- Column('intarr', Array(Integer)),
- Column('strarr', Array(String)),
+ Column('intarr', ARRAY(Integer)),
+ Column('strarr', ARRAY(String)),
)
is_(arrtable.c.intarr[1].type._type_affinity, Integer)
is_(arrtable.c.strarr[1].type._type_affinity, String)
m = MetaData()
arrtable = Table(
'arrtable', m,
- Column('intarr', Array(Integer)),
- Column('strarr', Array(String)),
+ Column('intarr', ARRAY(Integer)),
+ Column('strarr', ARRAY(String)),
)
- is_(arrtable.c.intarr[1:3].type._type_affinity, Array)
- is_(arrtable.c.strarr[1:3].type._type_affinity, Array)
+ is_(arrtable.c.intarr[1:3].type._type_affinity, ARRAY)
+ is_(arrtable.c.strarr[1:3].type._type_affinity, ARRAY)
def test_array_getitem_slice_type_dialect_level(self):
MyArray = self._myarray_fixture()
Column('intarr', MyArray(Integer)),
Column('strarr', MyArray(String)),
)
- is_(arrtable.c.intarr[1:3].type._type_affinity, Array)
- is_(arrtable.c.strarr[1:3].type._type_affinity, Array)
+ is_(arrtable.c.intarr[1:3].type._type_affinity, ARRAY)
+ is_(arrtable.c.strarr[1:3].type._type_affinity, ARRAY)
# but the slice returns the actual type
assert isinstance(arrtable.c.intarr[1:3].type, MyArray)