Use cursive to make it look like mathematical formulas.
to ``1142331976``. The hash code is then used to calculate a location in an
internal array where the value will be stored. Assuming that you're storing
keys that all have different hash values, this means that dictionaries take
-constant time -- O(1), in Big-O notation -- to retrieve a key.
+constant time -- *O*\ (1), in Big-O notation -- to retrieve a key.
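A quick illustrative sketch of that lookup path (the variable names are made up
and the exact hash value changes between runs because of hash randomization)::

    d = {"some key": "some value"}
    h = hash("some key")   # the hash value decides where the entry lives in the internal array
    print(d["some key"])   # average-case O(1): hash once, probe the table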
Why must dictionary keys be immutable?
list
A built-in Python :term:`sequence`. Despite its name it is more akin
to an array in other languages than to a linked list since access to
- elements is O(1).
+ elements is *O*\ (1).
list comprehension
A compact way to process all or part of the elements in a sequence and
It works reliably even when the asyncio event loop is run in a non-main OS thread.
- There is no noticeable overhead when handling a big number of children (*O(1)* each
+ There is no noticeable overhead when handling a big number of children (*O*\ (1) each
time a child terminates), but starting a thread per process requires extra memory.
This watcher is used by default.
watcher is installed.
The solution is safe but it has a significant overhead when
- handling a big number of processes (*O(n)* each time a
+ handling a big number of processes (*O*\ (*n*) each time a
:py:data:`SIGCHLD` is received).
.. versionadded:: 3.8
The watcher avoids disrupting other code spawning processes
by polling every process explicitly on a :py:data:`SIGCHLD` signal.
- This solution is as safe as :class:`MultiLoopChildWatcher` and has the same *O(N)*
+ This solution is as safe as :class:`MultiLoopChildWatcher` and has the same *O*\ (*n*)
complexity but requires a running event loop in the main thread to work.
.. deprecated:: 3.12
processes and waiting for their termination.
There is no noticeable overhead when handling a big number of
- children (*O(1)* each time a child terminates).
+ children (*O*\ (1) each time a child terminates).
This solution requires a running event loop in the main thread to work, as
:class:`SafeChildWatcher`.
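From application code the watchers are an implementation detail; children are
normally spawned through the high-level subprocess API and reaped by whichever
watcher is active.  A minimal sketch, assuming a POSIX system with ``echo`` on
the path::

    import asyncio

    async def main():
        # The active child watcher reaps this process when it exits; with the
        # default ThreadedChildWatcher that is O(1) work per terminated child.
        proc = await asyncio.create_subprocess_exec(
            "echo", "hello",
            stdout=asyncio.subprocess.PIPE,
        )
        out, _ = await proc.communicate()
        print(out.decode().strip(), proc.returncode)

    asyncio.run(main())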
To support inserting records in a table, the *key* function (if any) is
applied to *x* for the search step but not for the insertion step.
- Keep in mind that the ``O(log n)`` search is dominated by the slow O(n)
+ Keep in mind that the *O*\ (log *n*) search is dominated by the slow *O*\ (*n*)
insertion step.
.. versionchanged:: 3.10
To support inserting records in a table, the *key* function (if any) is
applied to *x* for the search step but not for the insertion step.
- Keep in mind that the ``O(log n)`` search is dominated by the slow O(n)
+ Keep in mind that the *O*\ (log *n*) search is dominated by the slow *O*\ (*n*)
insertion step.
.. versionchanged:: 3.10
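A small usage sketch of the behaviour described above (requires Python 3.10+
for the *key* argument; the records are invented)::

    import bisect

    # The key function is applied to the new record for the O(log n) search
    # (and to the stored records for comparison), but the record itself is
    # inserted unchanged by the O(n) insertion step.
    table = [("apple", 1), ("banana", 3), ("cherry", 7)]
    bisect.insort(table, ("date", 5), key=lambda record: record[1])
    print(table)
    # [('apple', 1), ('banana', 3), ('date', 5), ('cherry', 7)]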
* Bisection is effective for searching ranges of values.
For locating specific values, dictionaries are more performant.
-* The *insort()* functions are ``O(n)`` because the logarithmic search step
+* The *insort()* functions are *O*\ (*n*) because the logarithmic search step
is dominated by the linear time insertion step.
* The search functions are stateless and discard key function results after
Deques are a generalization of stacks and queues (the name is pronounced "deck"
and is short for "double-ended queue"). Deques support thread-safe, memory
efficient appends and pops from either side of the deque with approximately the
- same O(1) performance in either direction.
+ same *O*\ (1) performance in either direction.
Though :class:`list` objects support similar operations, they are optimized for
- fast fixed-length operations and incur O(n) memory movement costs for
+ fast fixed-length operations and incur *O*\ (*n*) memory movement costs for
``pop(0)`` and ``insert(0, v)`` operations which change both the size and
position of the underlying data representation.
In addition to the above, deques support iteration, pickling, ``len(d)``,
``reversed(d)``, ``copy.copy(d)``, ``copy.deepcopy(d)``, membership testing with
the :keyword:`in` operator, and subscript references such as ``d[0]`` to access
-the first element. Indexed access is O(1) at both ends but slows to O(n) in
+the first element. Indexed access is *O*\ (1) at both ends but slows to *O*\ (*n*) in
the middle. For fast random access, use lists instead.
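A small comparison of the two access patterns (illustrative only)::

    from collections import deque

    d = deque("bcd")
    d.appendleft("a")                     # O(1) at the left end
    d.append("e")                         # O(1) at the right end
    print(d.popleft(), d.pop(), list(d))  # a e ['b', 'c', 'd']

    lst = list("bcd")
    lst.insert(0, "a")                    # O(n): every element shifts right
    lst.pop(0)                            # O(n): every element shifts left again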
Starting in version 3.5, deques support ``__add__()``, ``__mul__()``,
ctx: Context = copy_context()
print(list(ctx.items()))
- The function has an O(1) complexity, i.e. works equally fast for
+ The function has an *O*\ (1) complexity, i.e. works equally fast for
contexts with a few context variables and for contexts that have
a lot of them.
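For example (a small sketch)::

    from contextvars import ContextVar, copy_context

    var = ContextVar("var", default=0)
    var.set(1)

    ctx = copy_context()        # O(1): the snapshot shares an immutable mapping
    var.set(2)                  # later changes do not affect the copy
    print(ctx[var], var.get())  # 1 2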
to move some loser (let's say cell 30 in the diagram above) into the 0 position,
and then percolate this new 0 down the tree, exchanging values, until the
invariant is re-established. This is clearly logarithmic on the total number of
-items in the tree. By iterating over all items, you get an O(n log n) sort.
+items in the tree. By iterating over all items, you get an *O*\ (*n* log *n*) sort.
A nice feature of this sort is that you can efficiently insert new items while
the sort is going on, provided that the inserted items are not "better" than the
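That is the essence of heapsort; a compact sketch using :mod:`heapq`::

    import heapq

    def heapsort(iterable):
        """O(n log n): heapify is O(n), then n pops of O(log n) each."""
        heap = list(iterable)
        heapq.heapify(heap)
        return [heapq.heappop(heap) for _ in range(len(heap))]

    print(heapsort([5, 1, 4, 2, 3]))   # [1, 2, 3, 4, 5]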
-----------------------------
Solaris and derivatives have ``/dev/poll``. While :c:func:`!select` is
-O(highest file descriptor) and :c:func:`!poll` is O(number of file
-descriptors), ``/dev/poll`` is O(active file descriptors).
+*O*\ (*highest file descriptor*) and :c:func:`!poll` is *O*\ (*number of file
+descriptors*), ``/dev/poll`` is *O*\ (*active file descriptors*).
``/dev/poll`` behaviour is very close to the standard :c:func:`!poll`
object.
time. :c:func:`!poll` scales better because the system call only requires listing
the file descriptors of interest, while :c:func:`!select` builds a bitmap, turns
on bits for the fds of interest, and then afterward the whole bitmap has to be
-linearly scanned again. :c:func:`!select` is O(highest file descriptor), while
-:c:func:`!poll` is O(number of file descriptors).
+linearly scanned again. :c:func:`!select` is *O*\ (*highest file descriptor*), while
+:c:func:`!poll` is *O*\ (*number of file descriptors*).
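A minimal :c:func:`!poll`-style sketch (POSIX-only; the portable high-level
interface is the :mod:`selectors` module)::

    import select
    import socket

    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen()

    poller = select.poll()                        # cost scales with registered fds,
    poller.register(srv.fileno(), select.POLLIN)  # not with the highest fd number
    print(poller.poll(100))                       # [] unless a client connects within 100 ms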
.. method:: poll.register(fd[, eventmask])
This is intended to provide protection against a denial-of-service caused
by carefully chosen inputs that exploit the worst case performance of a
- dict insertion, O(n\ :sup:`2`) complexity. See
+ dict insertion, *O*\ (*n*\ :sup:`2`) complexity. See
http://ocert.org/advisories/ocert-2011-003.html for details.
Changing hash values affects the iteration order of sets.
Hash randomization is intended to provide protection against a
denial-of-service caused by carefully chosen inputs that exploit the worst
- case performance of a dict construction, O(n\ :sup:`2`) complexity. See
+ case performance of a dict construction, *O*\ (*n*\ :sup:`2`) complexity. See
http://ocert.org/advisories/ocert-2011-003.html for details.
:envvar:`PYTHONHASHSEED` allows you to set a fixed value for the hash
* Multiplication of large long integers is now much faster thanks to an
implementation of Karatsuba multiplication, an algorithm that scales better than
- the O(n\*n) required for the grade-school multiplication algorithm. (Original
+ the *O*\ (*n*\ :sup:`2`) required for the grade-school multiplication algorithm. (Original
patch by Christopher A. Craig, and significantly reworked by Tim Peters.)
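  For reference, a pure-Python sketch of the idea behind Karatsuba
  multiplication (CPython's implementation is in C and differs in many
  details)::

      def karatsuba(x, y):
          """Multiply non-negative ints in roughly O(n**1.585) digit operations."""
          if x < 16 or y < 16:
              return x * y
          half = max(x.bit_length(), y.bit_length()) // 2
          xh, xl = x >> half, x & ((1 << half) - 1)
          yh, yl = y >> half, y & ((1 << half) - 1)
          a = karatsuba(xh, yh)            # high * high
          b = karatsuba(xl, yl)            # low * low
          c = karatsuba(xh + xl, yh + yl)  # (high+low) * (high+low)
          # The cross term costs one multiplication instead of two.
          return (a << (2 * half)) + ((c - a - b) << half) + b

      assert karatsuba(12345, 6789) == 12345 * 6789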
* The ``SET_LINENO`` opcode is now gone. This may provide a small speed
partially sorted order such that, for every index *k*, ``heap[k] <=
heap[2*k+1]`` and ``heap[k] <= heap[2*k+2]``. This makes it quick to remove the
smallest item, and inserting a new item while maintaining the heap property is
- O(lg n). (See https://xlinux.nist.gov/dads//HTML/priorityque.html for more
+ *O*\ (log *n*). (See https://xlinux.nist.gov/dads//HTML/priorityque.html for more
information about the priority queue data structure.)
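A tiny priority-queue sketch built on those functions::

    import heapq

    pq = []                                  # invariant: pq[k] <= pq[2*k+1] and pq[2*k+2]
    heapq.heappush(pq, (2, "write report"))  # each push is O(log n)
    heapq.heappush(pq, (1, "fix bug"))
    heapq.heappush(pq, (3, "tidy desk"))
    while pq:
        priority, task = heapq.heappop(pq)   # always the smallest remaining priority
        print(priority, task)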
The :mod:`heapq` module provides :func:`~heapq.heappush` and :func:`~heapq.heappop` functions
doubly linked list of keys, appending new keys to the list as they're inserted.
A secondary dictionary maps keys to their corresponding list node, so
deletion doesn't have to traverse the entire linked list and therefore
-remains O(1).
+remains *O*\ (1).
The standard library now supports use of ordered dictionaries in several
modules.
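A small usage sketch; the constant-time deletion is what keeps reordering
patterns such as LRU tracking cheap::

    from collections import OrderedDict

    od = OrderedDict.fromkeys("abcde")
    del od["c"]                    # O(1): the secondary mapping finds the list node directly
    od["c"] = None                 # reinserting appends the key at the end
    print(list(od))                # ['a', 'b', 'd', 'e', 'c']
    print(od.popitem(last=False))  # ('a', None): also O(1), from the front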
b or c are now hashable. (Contributed by Antoine Pitrou in :issue:`13411`.)
* Arbitrary slicing of any 1-D arrays type is supported. For example, it
- is now possible to reverse a memoryview in O(1) by using a negative step.
+ is now possible to reverse a memoryview in *O*\ (1) by using a negative step.
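  For example, reversing without copying the underlying buffer::

      data = bytearray(b"abcde")
      rev = memoryview(data)[::-1]   # O(1): only the strides change, no bytes move
      print(bytes(rev))              # b'edcba'
      data[0] = ord("z")             # the view still refers to the same buffer
      print(bytes(rev))              # b'edcbz'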
API changes
-----------
.. nonce: Jf6gAj
.. section: Core and Builtins
-Cache the result of :c:func:`PyCode_GetCode` function to restore the O(1)
+Cache the result of :c:func:`PyCode_GetCode` function to restore the *O*\ (1)
lookup of the :attr:`~types.CodeType.co_code` attribute.
..
Speed up setting or deleting mutable attributes on non-dataclass subclasses
of frozen dataclasses. Due to the implementation of ``__setattr__`` and
``__delattr__`` for frozen dataclasses, this previously had a time
-complexity of ``O(n)``. It now has a time complexity of ``O(1)``.
+complexity of *O*\ (*n*). It now has a time complexity of *O*\ (1).
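A sketch of the case this entry refers to (class names invented)::

    from dataclasses import dataclass, FrozenInstanceError

    @dataclass(frozen=True)
    class Point:
        x: int
        y: int

    class TaggedPoint(Point):   # plain (non-dataclass) subclass of a frozen dataclass
        pass

    p = TaggedPoint(1, 2)
    p.label = "origin"          # allowed; this field-membership check is now O(1)
    try:
        p.x = 5                 # still rejected: x is a frozen field
    except FrozenInstanceError:
        print("fields stay frozen")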
..
.. nonce: THJSYB
.. section: Library
-Changed FeedParser feed() to avoid O(N\ :sup:`2`) behavior when parsing long line.
+Changed FeedParser feed() to avoid *O*\ (*n*\ :sup:`2`) behavior when parsing a long line.
Original patch by Raymond Hettinger.
..