# optype
Building blocks for precise & flexible type hints.
---
## Installation
Optype is available as [`optype`][OPTYPE] on PyPI:
```shell
pip install optype
```

For optional [NumPy][NUMPY] support, it is recommended to use the
`numpy` extra.
This ensures that the installed `numpy` version is compatible with
`optype`, following [NEP 29][NEP29] and [SPEC 0][SPEC0].

```shell
pip install "optype[numpy]"
```

See the [`optype.numpy` docs](#optypenumpy) for more info.
[OPTYPE]: https://pypi.org/project/optype/
[NUMPY]: https://github.com/numpy/numpy

## Example
Let's say you're writing a `twice(x)` function that evaluates `2 * x`.
Implementing it is trivial, but what about the type annotations?

Because `twice(2) == 4`, `twice(3.14) == 6.28` and `twice('I') == 'II'`, it
might seem like a good idea to type it as `twice[T](x: T) -> T: ...`.
However, that wouldn't include cases such as `twice(True) == 2` or
`twice((42, True)) == (42, True, 42, True)`, where the input- and output types
differ.
Moreover, `twice` should accept *any* type with a custom `__rmul__` method
that accepts `2` as argument.

This is where `optype` comes in handy: it has single-method protocols for
*all* the builtin special methods.
For `twice`, we can use `optype.CanRMul[T, R]`, which, as the name suggests,
is a protocol with (only) the `def __rmul__(self, lhs: T) -> R: ...` method.
With this, the `twice` function can be written as follows.

On Python 3.10:

```python
from typing import Literal
from typing import TypeAlias, TypeVar
from optype import CanRMul

R = TypeVar("R")
Two: TypeAlias = Literal[2]
RMul2: TypeAlias = CanRMul[Two, R]


def twice(x: RMul2[R]) -> R:
    return 2 * x
```

On Python 3.12+:

```python
from typing import Literal
from optype import CanRMul

type Two = Literal[2]
type RMul2[R] = CanRMul[Two, R]


def twice[R](x: RMul2[R]) -> R:
    return 2 * x
```

But what about types that implement `__mul__` but not `__rmul__`?
In this case, we could return `x * 2` as fallback (assuming commutativity).
Because the `optype.Can*` protocols are runtime-checkable, the revised
`twice2` function can be compactly written as follows.

On Python 3.10:

```python
from optype import CanMul

Mul2: TypeAlias = CanMul[Two, R]
CMul2: TypeAlias = Mul2[R] | RMul2[R]


def twice2(x: CMul2[R]) -> R:
    if isinstance(x, CanRMul):
        return 2 * x
    else:
        return x * 2
```

On Python 3.12+:

```python
from optype import CanMul

type Mul2[R] = CanMul[Two, R]
type CMul2[R] = Mul2[R] | RMul2[R]


def twice2[R](x: CMul2[R]) -> R:
    if isinstance(x, CanRMul):
        return 2 * x
    else:
        return x * 2
```

See [`examples/twice.py`](examples/twice.py) for the full example.
## Reference
The API of `optype` is flat; a single `import optype as opt` is all you need
(except for `optype.numpy`).

- [`optype`](#optype)
  - [Builtin type conversion](#builtin-type-conversion)
  - [Rich relations](#rich-relations)
  - [Binary operations](#binary-operations)
  - [Reflected operations](#reflected-operations)
  - [Inplace operations](#inplace-operations)
  - [Unary operations](#unary-operations)
  - [Rounding](#rounding)
  - [Callables](#callables)
  - [Iteration](#iteration)
  - [Awaitables](#awaitables)
  - [Async Iteration](#async-iteration)
  - [Containers](#containers)
  - [Attributes](#attributes)
  - [Context managers](#context-managers)
  - [Descriptors](#descriptors)
  - [Buffer types](#buffer-types)
- [`optype.copy`](#optypecopy)
- [`optype.dataclasses`](#optypedataclasses)
- [`optype.inspect`](#optypeinspect)
- [`optype.json`](#optypejson)
- [`optype.pickle`](#optypepickle)
- [`optype.string`](#optypestring)
- [`optype.typing`](#optypetyping)
  - [`Any*` type aliases](#any-type-aliases)
  - [`Empty*` type aliases](#empty-type-aliases)
  - [Literal types](#literal-types)
- [`optype.dlpack`](#optypedlpack)
- [`optype.numpy`](#optypenumpy)
  - [`Array`](#array)
  - [`UFunc`](#ufunc)
  - [Shape type aliases](#shape-type-aliases)
  - [`Scalar`](#scalar)
  - [`DType`](#dtype)
  - [`Any*Array` and `Any*DType`](#anyarray-and-anydtype)
  - [Low-level interfaces](#low-level-interfaces)

### `optype`
There are four flavors of things that live within `optype`:

- `optype.Can{}` types describe *what can be done* with it.
  For instance, any `CanAbs[T]` type can be used as argument to the `abs()`
  builtin function with return type `T`. Most `Can{}` protocols implement a
  single special method, whose name directly matches that of the type:
  `CanAbs` implements `__abs__`, `CanAdd` implements `__add__`, etc.
- `optype.Has{}` is the analogue of `Can{}`, but for special *attributes*.
  `HasName` has a `__name__` attribute, `HasDict` has a `__dict__`, etc.
- `optype.Does{}` types describe the *type of operators*.
  So `DoesAbs` is the type of the `abs({})` builtin function,
  and `DoesPos` the type of the `+{}` prefix operator.
- `optype.do_{}` are the correctly-typed implementations of `Does{}`. For
  each `do_{}` there is a `Does{}`, and vice versa.
  So `do_abs: DoesAbs` is the typed alias of `abs({})`,
  and `do_pos: DoesPos` is a typed version of `operator.pos`.
  The `optype.do_` operators are more complete than the `operator` standard
  library, have runtime-accessible type annotations, and have names you don't
  need to know by heart.

The reference docs are structured as follows:
All [typing protocols][PC] here live in the root `optype` namespace.
They are [runtime-checkable][RC] so that you can do e.g.
`isinstance('snail', optype.CanAdd)`, in case you want to check whether
`snail` implements `__add__`.

Unlike `collections.abc`, `optype`'s protocols aren't abstract base classes,
i.e. they don't extend `abc.ABC`, only `typing.Protocol`.
This allows the `optype` protocols to be used as building blocks for `.pyi`
type stubs.

[PC]: https://typing.readthedocs.io/en/latest/spec/protocol.html
[RC]: https://typing.readthedocs.io/en/latest/spec/protocol.html#runtime-checkable-decorator-and-narrowing-types-by-isinstance

#### Builtin type conversion
The return type of these special methods is *invariant*. Python will raise an
error if some other (sub)type is returned.
This is why these `optype` interfaces don't accept generic type arguments.
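For instance, a function that accepts anything convertible to an `int` could be annotated with `CanInt` (a minimal sketch; `as_int` is a hypothetical helper, not part of `optype`):

```python
import optype as opt


# `CanInt` matches anything with an `__int__` method, e.g. `float`,
# `fractions.Fraction`, or a NumPy integer scalar.
def as_int(x: opt.CanInt) -> int:
    return int(x)
```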
| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `complex(_)` | `do_complex` | `DoesComplex` | `__complex__` | `CanComplex` |
| `float(_)` | `do_float` | `DoesFloat` | `__float__` | `CanFloat` |
| `int(_)` | `do_int` | `DoesInt` | `__int__` | `CanInt[R: int = int]` |
| `bool(_)` | `do_bool` | `DoesBool` | `__bool__` | `CanBool[R: bool = bool]` |
| `bytes(_)` | `do_bytes` | `DoesBytes` | `__bytes__` | `CanBytes[R: bytes = bytes]` |
| `str(_)` | `do_str` | `DoesStr` | `__str__` | `CanStr[R: str = str]` |
> [!NOTE]
> The `Can*` interfaces of the types that can be used as `typing.Literal`
> accept an optional type parameter `R`.
> This can be used to indicate a literal return type,
> for surgically precise typing, e.g. `None`, `True`, and `42` are
> instances of `CanBool[Literal[False]]`, `CanInt[Literal[1]]`, and
> `CanStr[Literal['42']]`, respectively.

These formatting methods are allowed to return instances that are a subtype
of the `str` builtin. The same holds for the `__format__` argument.
So if you're a 10x developer who wants to hack Python's f-strings, but only
if your type hints are spot-on, `optype` is your friend.
| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `repr(_)` | `do_repr` | `DoesRepr` | `__repr__` | `CanRepr[R: str = str]` |
| `format(_, x)` | `do_format` | `DoesFormat` | `__format__` | `CanFormat[T: str = str, R: str = str]` |
Additionally, `optype` provides protocols for types with (custom) *hash* or
*index* methods:
| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `hash(_)` | `do_hash` | `DoesHash` | `__hash__` | `CanHash` |
| `_.__index__()` | `do_index` | `DoesIndex` | `__index__` | `CanIndex[R: int = int]` |
#### Rich relations
The "rich" comparison special methods often return a `bool`.
However, instances of any type can be returned (e.g. a numpy array).
This is why the corresponding `optype.Can*` interfaces accept a second type
argument for the return type, that defaults to `bool` when omitted.
The first type parameter matches the passed method argument, i.e. the
right-hand side operand, denoted here as `x`.
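As a small illustration (a sketch; `is_negative` is a hypothetical function, not part of `optype`):

```python
import optype as opt


# `CanLt[int]` matches any left-hand operand of `_ < 0`; the return type
# parameter is omitted here, so it defaults to `bool`.
def is_negative(x: opt.CanLt[int]) -> bool:
    return x < 0
```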
| expression | reflected | function | function type | method | operand type |
|---|---|---|---|---|---|
| `_ == x` | `x == _` | `do_eq` | `DoesEq` | `__eq__` | `CanEq[T = object, R = bool]` |
| `_ != x` | `x != _` | `do_ne` | `DoesNe` | `__ne__` | `CanNe[T = object, R = bool]` |
| `_ < x` | `x > _` | `do_lt` | `DoesLt` | `__lt__` | `CanLt[T, R = bool]` |
| `_ <= x` | `x >= _` | `do_le` | `DoesLe` | `__le__` | `CanLe[T, R = bool]` |
| `_ > x` | `x < _` | `do_gt` | `DoesGt` | `__gt__` | `CanGt[T, R = bool]` |
| `_ >= x` | `x <= _` | `do_ge` | `DoesGe` | `__ge__` | `CanGe[T, R = bool]` |
#### Binary operations
In the [Python docs][NT], these are referred to as "arithmetic operations".
But the operands aren't limited to numeric types, and the operations aren't
required to be commutative, might be non-deterministic, and could have
side-effects.
Classifying them as "arithmetic" is, at the very least, a bit of a stretch.
| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `_ + x` | `do_add` | `DoesAdd` | `__add__` | `CanAdd[T, R]` |
| `_ - x` | `do_sub` | `DoesSub` | `__sub__` | `CanSub[T, R]` |
| `_ * x` | `do_mul` | `DoesMul` | `__mul__` | `CanMul[T, R]` |
| `_ @ x` | `do_matmul` | `DoesMatmul` | `__matmul__` | `CanMatmul[T, R]` |
| `_ / x` | `do_truediv` | `DoesTruediv` | `__truediv__` | `CanTruediv[T, R]` |
| `_ // x` | `do_floordiv` | `DoesFloordiv` | `__floordiv__` | `CanFloordiv[T, R]` |
| `_ % x` | `do_mod` | `DoesMod` | `__mod__` | `CanMod[T, R]` |
| `divmod(_, x)` | `do_divmod` | `DoesDivmod` | `__divmod__` | `CanDivmod[T, R]` |
| `_ ** x`, `pow(_, x)` | `do_pow/2` | `DoesPow` | `__pow__` | `CanPow2[T, R]`, `CanPow[T, None, R, Never]` |
| `pow(_, x, m)` | `do_pow/3` | `DoesPow` | `__pow__` | `CanPow3[T, M, R]`, `CanPow[T, M, Never, R]` |
| `_ << x` | `do_lshift` | `DoesLshift` | `__lshift__` | `CanLshift[T, R]` |
| `_ >> x` | `do_rshift` | `DoesRshift` | `__rshift__` | `CanRshift[T, R]` |
| `_ & x` | `do_and` | `DoesAnd` | `__and__` | `CanAnd[T, R]` |
| `_ ^ x` | `do_xor` | `DoesXor` | `__xor__` | `CanXor[T, R]` |
| `_ \| x` | `do_or` | `DoesOr` | `__or__` | `CanOr[T, R]` |
> [!NOTE]
> Because `pow()` can take an optional third argument, `optype`
> provides separate interfaces for `pow()` with two and three arguments.
> Additionally, there is the overloaded intersection type
> `CanPow[T, M, R, RM] =: CanPow2[T, R] & CanPow3[T, M, RM]`, as interface
> for types that can take an optional third argument.

#### Reflected operations
For the binary infix operators above, `optype` additionally provides
interfaces with *reflected* (swapped) operands, e.g. `__radd__` is a reflected
`__add__`.
They are named like the original, but with a `CanR` prefix instead of `Can`,
i.e. `__name__.replace('Can', 'CanR')`.
| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `x + _` | `do_radd` | `DoesRAdd` | `__radd__` | `CanRAdd[T, R]` |
| `x - _` | `do_rsub` | `DoesRSub` | `__rsub__` | `CanRSub[T, R]` |
| `x * _` | `do_rmul` | `DoesRMul` | `__rmul__` | `CanRMul[T, R]` |
| `x @ _` | `do_rmatmul` | `DoesRMatmul` | `__rmatmul__` | `CanRMatmul[T, R]` |
| `x / _` | `do_rtruediv` | `DoesRTruediv` | `__rtruediv__` | `CanRTruediv[T, R]` |
| `x // _` | `do_rfloordiv` | `DoesRFloordiv` | `__rfloordiv__` | `CanRFloordiv[T, R]` |
| `x % _` | `do_rmod` | `DoesRMod` | `__rmod__` | `CanRMod[T, R]` |
| `divmod(x, _)` | `do_rdivmod` | `DoesRDivmod` | `__rdivmod__` | `CanRDivmod[T, R]` |
| `x ** _`, `pow(x, _)` | `do_rpow` | `DoesRPow` | `__rpow__` | `CanRPow[T, R]` |
| `x << _` | `do_rlshift` | `DoesRLshift` | `__rlshift__` | `CanRLshift[T, R]` |
| `x >> _` | `do_rrshift` | `DoesRRshift` | `__rrshift__` | `CanRRshift[T, R]` |
| `x & _` | `do_rand` | `DoesRAnd` | `__rand__` | `CanRAnd[T, R]` |
| `x ^ _` | `do_rxor` | `DoesRXor` | `__rxor__` | `CanRXor[T, R]` |
| `x \| _` | `do_ror` | `DoesROr` | `__ror__` | `CanROr[T, R]` |
> [!NOTE]
> `CanRPow` corresponds to `CanPow2`; the 3-parameter "modulo" `pow` does not
> reflect in Python.
>
> According to the relevant [python docs][RPOW]:
> > Note that ternary `pow()` will not try calling `__rpow__()` (the coercion
> > rules would become too complicated).

[RPOW]: https://docs.python.org/3/reference/datamodel.html#object.__rpow__
#### Inplace operations
Similar to the reflected ops, the inplace/augmented ops are prefixed with
`CanI`, namely:
| expression | function | function type | method | operand types |
|---|---|---|---|---|
| `_ += x` | `do_iadd` | `DoesIAdd` | `__iadd__` | `CanIAdd[T, R]`, `CanIAddSelf[T]` |
| `_ -= x` | `do_isub` | `DoesISub` | `__isub__` | `CanISub[T, R]`, `CanISubSelf[T]` |
| `_ *= x` | `do_imul` | `DoesIMul` | `__imul__` | `CanIMul[T, R]`, `CanIMulSelf[T]` |
| `_ @= x` | `do_imatmul` | `DoesIMatmul` | `__imatmul__` | `CanIMatmul[T, R]`, `CanIMatmulSelf[T]` |
| `_ /= x` | `do_itruediv` | `DoesITruediv` | `__itruediv__` | `CanITruediv[T, R]`, `CanITruedivSelf[T]` |
| `_ //= x` | `do_ifloordiv` | `DoesIFloordiv` | `__ifloordiv__` | `CanIFloordiv[T, R]`, `CanIFloordivSelf[T]` |
| `_ %= x` | `do_imod` | `DoesIMod` | `__imod__` | `CanIMod[T, R]`, `CanIModSelf[T]` |
| `_ **= x` | `do_ipow` | `DoesIPow` | `__ipow__` | `CanIPow[T, R]`, `CanIPowSelf[T]` |
| `_ <<= x` | `do_ilshift` | `DoesILshift` | `__ilshift__` | `CanILshift[T, R]`, `CanILshiftSelf[T]` |
| `_ >>= x` | `do_irshift` | `DoesIRshift` | `__irshift__` | `CanIRshift[T, R]`, `CanIRshiftSelf[T]` |
| `_ &= x` | `do_iand` | `DoesIAnd` | `__iand__` | `CanIAnd[T, R]`, `CanIAndSelf[T]` |
| `_ ^= x` | `do_ixor` | `DoesIXor` | `__ixor__` | `CanIXor[T, R]`, `CanIXorSelf[T]` |
| `_ \|= x` | `do_ior` | `DoesIOr` | `__ior__` | `CanIOr[T, R]`, `CanIOrSelf[T]` |
These inplace operators usually return the object itself (after some in-place
mutation). But unfortunately, it currently isn't possible to use `Self` for
this (i.e. something like `type MyAlias[T] = optype.CanIAdd[T, Self]` isn't
allowed).
So to help ease this unbearable pain, `optype` comes equipped with ready-made
aliases for you to use. They bear the same name, with an additional `*Self`
suffix, e.g. `optype.CanIAddSelf[T]`.

#### Unary operations
| expression | function | function type | method | operand types |
|---|---|---|---|---|
| `+_` | `do_pos` | `DoesPos` | `__pos__` | `CanPos[R]`, `CanPosSelf` |
| `-_` | `do_neg` | `DoesNeg` | `__neg__` | `CanNeg[R]`, `CanNegSelf` |
| `~_` | `do_invert` | `DoesInvert` | `__invert__` | `CanInvert[R]`, `CanInvertSelf` |
| `abs(_)` | `do_abs` | `DoesAbs` | `__abs__` | `CanAbs[R]`, `CanAbsSelf` |
#### Rounding
The `round()` built-in function takes an optional second argument.
From a typing perspective, `round()` has two overloads: one with one
parameter, and one with two.
For both overloads, `optype` provides separate operand interfaces:
`CanRound1[R]` and `CanRound2[T, RT]`.
Additionally, `optype` also provides their (overloaded) intersection type:
`CanRound[T, R, RT] = CanRound1[R] & CanRound2[T, RT]`.
| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `round(_)` | `do_round/1` | `DoesRound` | `__round__/1` | `CanRound1[T = int]` |
| `round(_, n)` | `do_round/2` | `DoesRound` | `__round__/2` | `CanRound2[T = int, RT = float]` |
| `round(_, n=...)` | `do_round` | `DoesRound` | `__round__` | `CanRound[T = int, R = int, RT = float]` |
For example, type-checkers will mark the following code as valid (tested with
pyright in strict mode):

```python
from optype import CanRound, CanRound1, CanRound2

x: float = 3.14
x1: CanRound1[int] = x
x2: CanRound2[int, float] = x
x3: CanRound[int, int, float] = x
```

Furthermore, there are the alternative rounding functions from the
[`math`][MATH] standard library:
| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `math.trunc(_)` | `do_trunc` | `DoesTrunc` | `__trunc__` | `CanTrunc[R = int]` |
| `math.floor(_)` | `do_floor` | `DoesFloor` | `__floor__` | `CanFloor[R = int]` |
| `math.ceil(_)` | `do_ceil` | `DoesCeil` | `__ceil__` | `CanCeil[R = int]` |
Almost all implementations use `int` for `R`.
In fact, if no type for `R` is specified, it will default to `int`.
But technically speaking, these methods can be made to return anything.

[MATH]: https://docs.python.org/3/library/math.html
[NT]: https://docs.python.org/3/reference/datamodel.html#emulating-numeric-types

#### Callables
Unlike `operator`, `optype` provides the operator for callable objects:
`optype.do_call(f, *args, **kwargs)`.

`CanCall` is similar to `collections.abc.Callable`, but is runtime-checkable,
and doesn't use esoteric hacks.
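For example (a sketch using PEP 695 syntax; `call_twice` is a hypothetical helper, not part of `optype`):

```python
import optype as opt


# `CanCall[**Pss, R]` is parameterized with a `ParamSpec` and a return type,
# just like `collections.abc.Callable`, but as a runtime-checkable protocol.
def call_twice[**P, R](
    f: opt.CanCall[P, R],
    *args: P.args,
    **kwargs: P.kwargs,
) -> tuple[R, R]:
    return f(*args, **kwargs), f(*args, **kwargs)
```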
| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `_(*args, **kwargs)` | `do_call` | `DoesCall` | `__call__` | `CanCall[**Pss, R]` |
> [!NOTE]
> Pyright (and probably other typecheckers) tend to accept
> `collections.abc.Callable` in more places than `optype.CanCall`.
> This could be related to the lack of co/contra-variance specification for
> `typing.ParamSpec` (they should almost always be contravariant, but
> currently they can only be invariant).
>
> In case you encounter such a situation, please open an issue about it, so we
> can investigate further.

#### Iteration
The operand `x` of `iter(_)` is within Python known as an *iterable*, which is
what `collections.abc.Iterable[V]` is often used for (e.g. as base class, or
for instance checking).

The `optype` analogue is `CanIter[R]`, which, as the name suggests,
also implements `__iter__`. But unlike `Iterable[V]`, its type parameter `R`
binds to the return type of `iter(_) -> R`. This makes it possible to annotate
the specific type of the *iterable* that `iter(_)` returns. `Iterable[V]` is
only able to annotate the type of the iterated value. To see why that isn't
possible, see [python/typing#548](https://github.com/python/typing/issues/548).

The `collections.abc.Iterator[V]` is even more awkward; it is a subtype of
`Iterable[V]`. For those familiar with `collections.abc` this might come as a
surprise, but an iterator only needs to implement `__next__`; `__iter__` isn't
needed. This means that `Iterator[V]` is unnecessarily restrictive.
Apart from that being theoretically "ugly", it has significant performance
implications, because the time-complexity of `isinstance` on a
`typing.Protocol` is $O(n)$, with the $n$ referring to the number of members.
So even if the overhead of the inheritance and the `abc.ABC` usage is ignored,
`collections.abc.Iterator` is twice as slow as it needs to be.

That's one of the (many) reasons that `optype.CanNext[V]` and
`optype.CanIter[R]` are the better alternatives to `Iterator` and `Iterable`
from the abracadabra collections. This is how they are defined:
| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `next(_)` | `do_next` | `DoesNext` | `__next__` | `CanNext[V]` |
| `iter(_)` | `do_iter` | `DoesIter` | `__iter__` | `CanIter[R: CanNext[object]]` |
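For instance, a sketch of a function that accepts any iterable whose iterator yields strings (`first_line` is a hypothetical helper, not part of `optype`):

```python
import optype as opt


# `CanIter` binds to the *return type of `iter()`*, so the iterator's value
# type is expressed through the nested `CanNext[str]`.
def first_line(f: opt.CanIter[opt.CanNext[str]]) -> str:
    return next(iter(f))
```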
For the sake of compatibility with `collections.abc`, there is
`optype.CanIterSelf[V]`, which is a protocol whose `__iter__` returns
`typing.Self`, as well as a `__next__` method that returns `V`.
I.e. it is equivalent to `collections.abc.Iterator[V]`, but without the `abc`
nonsense.

#### Awaitables
The `optype` analogue of `collections.abc.Awaitable[R]` is almost identical,
except that `optype.CanAwait[R]` is a pure interface, whereas `Awaitable` is
also an abstract base class (making it absolutely useless when writing stubs).
| expression | method | type |
|---|---|---|
| `await _` | `__await__` | `CanAwait[R]` |
#### Async Iteration
Yes, you guessed it right; the abracadabra collections made the exact same
mistakes for the async iterablors (or was it "iteramblers"...?).

But fret not; the `optype` alternatives are right here:
| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `anext(_)` | `do_anext` | `DoesANext` | `__anext__` | `CanANext[V]` |
| `aiter(_)` | `do_aiter` | `DoesAIter` | `__aiter__` | `CanAIter[R: CanANext[object]]` |
But wait, shouldn't `V` be a `CanAwait`? Well, only if you don't want to get
fired...
Technically speaking, `__anext__` can return any type, and `anext` will pass
it along without nagging (instance checks are slow, now stop bothering that
liberal). For details, see the discussion at [python/typeshed#7491][AN].
Just because something is legal, doesn't mean it's a good idea (don't eat the
yellow snow).

Additionally, there is `optype.CanAIterSelf[R]`, with both the
`__aiter__() -> Self` and the `__anext__() -> V` methods.

[AN]: https://github.com/python/typeshed/pull/7491
#### Containers
| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `len(_)` | `do_len` | `DoesLen` | `__len__` | `CanLen[R: int = int]` |
| `_.__length_hint__()` | `do_length_hint` | `DoesLengthHint` | `__length_hint__` | `CanLengthHint[R: int = int]` |
| `_[k]` | `do_getitem` | `DoesGetitem` | `__getitem__` | `CanGetitem[K, V]` |
| `_.__missing__()` | `do_missing` | `DoesMissing` | `__missing__` | `CanMissing[K, D]` |
| `_[k] = v` | `do_setitem` | `DoesSetitem` | `__setitem__` | `CanSetitem[K, V]` |
| `del _[k]` | `do_delitem` | `DoesDelitem` | `__delitem__` | `CanDelitem[K]` |
| `k in _` | `do_contains` | `DoesContains` | `__contains__` | `CanContains[K = object]` |
| `reversed(_)` | `do_reversed` | `DoesReversed` | `__reversed__` | `CanReversed[R]`, or `CanSequence[I, V, N = int]` |
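For example, a sketch of a function that only needs subscript access (`get_name` is a hypothetical helper, not part of `optype`):

```python
import optype as opt


# `CanGetitem[K, V]` only requires `__getitem__`, so this accepts a `dict`,
# a `shelve.Shelf`, or any custom mapping-like container.
def get_name(record: opt.CanGetitem[str, str]) -> str:
    return record["name"]
```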
Because `CanMissing[K, D]` generally doesn't show itself without
`CanGetitem[K, V]` there to hold its hand, `optype` conveniently stitched them
together as `optype.CanGetMissing[K, V, D=V]`.

Similarly, there is `optype.CanSequence[K: CanIndex | slice, V]`, which is the
combination of both `CanLen` and `CanGetitem[I, V]`, and serves as a more
specific and flexible `collections.abc.Sequence[V]`.

#### Attributes
| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `v = _.k` or `v = getattr(_, k)` | `do_getattr` | `DoesGetattr` | `__getattr__` | `CanGetattr[K: str = str, V = object]` |
| `_.k = v` or `setattr(_, k, v)` | `do_setattr` | `DoesSetattr` | `__setattr__` | `CanSetattr[K: str = str, V = object]` |
| `del _.k` or `delattr(_, k)` | `do_delattr` | `DoesDelattr` | `__delattr__` | `CanDelattr[K: str = str]` |
| `dir(_)` | `do_dir` | `DoesDir` | `__dir__` | `CanDir[R: CanIter[CanIterSelf[str]]]` |
#### Context managers
Support for the `with` statement.
| expression | method(s) | type(s) |
|---|---|---|
| | `__enter__` | `CanEnter[C]`, or `CanEnterSelf` |
| | `__exit__` | `CanExit[R = None]` |
| `with _ as c:` | `__enter__`, and `__exit__` | `CanWith[C, R = None]`, or `CanWithSelf[R = None]` |
`CanEnterSelf` and `CanWithSelf` are (runtime-checkable) aliases for
`CanEnter[Self]` and `CanWith[Self, R]`, respectively.

For the `async with` statement, the interfaces look very similar:
| expression | method(s) | type(s) |
|---|---|---|
| | `__aenter__` | `CanAEnter[C]`, or `CanAEnterSelf` |
| | `__aexit__` | `CanAExit[R = None]` |
| `async with _ as c:` | `__aenter__`, and `__aexit__` | `CanAsyncWith[C, R = None]`, or `CanAsyncWithSelf[R = None]` |
#### Descriptors
Interfaces for [descriptors](https://docs.python.org/3/howto/descriptor.html).
| expression | method | type |
|---|---|---|
| `v: V = T().d`<br>`vt: VT = T.d` | `__get__` | `CanGet[T: object, V, VT = V]` |
| `T().k = v` | `__set__` | `CanSet[T: object, V]` |
| `del T().k` | `__delete__` | `CanDelete[T: object]` |
| `class T: d = _` | `__set_name__` | `CanSetName[T: object, N: str = str]` |
#### Buffer types
Interfaces for emulating buffer types using the [buffer protocol][BP].
| expression | method | type |
|---|---|---|
| `v = memoryview(_)` | `__buffer__` | `CanBuffer[T: int = int]` |
| `del v` | `__release_buffer__` | `CanReleaseBuffer` |
[BP]: https://docs.python.org/3/reference/datamodel.html#python-buffer-protocol
### `optype.copy`
For the [`copy`][CP] standard library, `optype.copy` provides the following
runtime-checkable interfaces:
| `copy` function | method | `optype.copy` type |
|---|---|---|
| `copy.copy(_) -> R` | `__copy__() -> R` | `CanCopy[R]` |
| `copy.deepcopy(_, memo={}) -> R` | `__deepcopy__(memo, /) -> R` | `CanDeepcopy[R]` |
| `copy.replace(_, /, **changes: V) -> R`¹ | `__replace__(**changes: V) -> R` | `CanReplace[V, R]` |

¹ *`copy.replace` requires `python>=3.13` (but `optype.copy.CanReplace` doesn't).*

In practice, it makes sense that a copy of an instance is the same type as the
original.
But because `typing.Self` cannot be used as a type argument, this is
difficult to type properly.
Instead, you can use the `optype.copy.Can{}Self` types, which are the
runtime-checkable equivalents of the following (recursive) type aliases:

```python
type CanCopySelf = CanCopy[CanCopySelf]
type CanDeepcopySelf = CanDeepcopy[CanDeepcopySelf]
type CanReplaceSelf[V] = CanReplace[V, CanReplaceSelf[V]]
```

[CP]: https://docs.python.org/3/library/copy.html
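As a small illustration (a sketch; the `Box` class is hypothetical):

```python
import copy

import optype.copy


# any class with a `__copy__` method satisfies the `CanCopy` protocol
class Box:
    def __init__(self, value: int) -> None:
        self.value = value

    def __copy__(self) -> "Box":
        return Box(self.value)


box: optype.copy.CanCopySelf = Box(42)
assert isinstance(box, optype.copy.CanCopy)
box2 = copy.copy(box)
```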
### `optype.dataclasses`
For the [`dataclasses`][DC] standard library, `optype.dataclasses` provides the
`HasDataclassFields[V: Mapping[str, Field]]` interface.
It can conveniently be used to check whether a type or instance is a
dataclass, i.e. `isinstance(obj, HasDataclassFields)`.

[DC]: https://docs.python.org/3/library/dataclasses.html
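For example (a hedged sketch, assuming the protocol checks for the `__dataclass_fields__` attribute at runtime):

```pycon
>>> from dataclasses import dataclass
>>> from optype.dataclasses import HasDataclassFields
>>> @dataclass
... class Point:
...     x: int
...     y: int
...
>>> isinstance(Point(1, 2), HasDataclassFields)
True
```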
### `optype.inspect`
A collection of functions for runtime inspection of types, modules, and other
objects.
**`get_args(_)`**

A better alternative to [`typing.get_args()`][GET_ARGS], that

- unpacks `typing.Annotated` and Python 3.12 `type _` alias types
  (i.e. `typing.TypeAliasType`),
- recursively flattens unions and nested `typing.Literal` types, and
- raises `TypeError` if not a type expression.

Returns a `tuple[type | object, ...]` of type arguments or parameters.
To illustrate one of the (many) issues with `typing.get_args`:
```pycon
>>> from typing import Literal, TypeAlias, get_args
>>> Falsy: TypeAlias = Literal[None] | Literal[False, 0] | Literal["", b""]
>>> get_args(Falsy)
(typing.Literal[None], typing.Literal[False, 0], typing.Literal['', b''])
```

But this is in direct contradiction with the
[official typing documentation][LITERAL-DOCS]:

> When a Literal is parameterized with more than one value, it’s treated as
> exactly equivalent to the union of those types.
> That is, `Literal[v1, v2, v3]` is equivalent to
> `Literal[v1] | Literal[v2] | Literal[v3]`.

So this is why `optype.inspect.get_args` should be used instead:
```pycon
>>> import optype as opt
>>> opt.inspect.get_args(Falsy)
(None, False, 0, '', b'')
```

Another issue of `typing.get_args` is with Python 3.12 `type _ = ...` aliases,
which are meant as a replacement for `_: typing.TypeAlias = ...`, and should
therefore be treated equally:

```pycon
>>> import typing
>>> import optype as opt
>>> type StringLike = str | bytes
>>> typing.get_args(StringLike)
()
>>> opt.inspect.get_args(StringLike)
(<class 'str'>, <class 'bytes'>)
```

Clearly, `typing.get_args` fails miserably here; it would have been better
if it had raised an error, but instead it returns an empty tuple,
hiding the fact that it doesn't support the new `type _ = ...` aliases.
But luckily, `optype.inspect.get_args` doesn't have this problem, and treats
these aliases just like it treats `typing.TypeAlias` ones (and so do the
other `optype.inspect` functions).
**`get_protocol_members(_)`**

A better alternative to [`typing.get_protocol_members()`][PROTO_MEM], that

- doesn't require Python 3.13 or above,
- supports [PEP 695][PEP695] `type _` alias types on Python 3.12 and above,
- unpacks unions of `typing.Literal` ...
- ... and flattens them if nested within another `typing.Literal`,
- treats `typing.Annotated[T]` as `T`, and
- raises a `TypeError` if the passed value isn't a type expression.

Returns a `frozenset[str]` with member names.
**`get_protocols(_)`**

Returns a `frozenset[type]` of the public protocols within the passed module.
Pass `private=True` to also return the private protocols.
**`is_iterable(_)`**

Checks whether the object can be iterated over, i.e. whether it can be used
in a `for` loop, without attempting to do so.
If `True` is returned, then the object is an `optype.typing.AnyIterable`
instance.
**`is_final(_)`**

Checks whether the type, method / classmethod / staticmethod / property is
decorated with [`@typing.final`][@FINAL].

Note that a `@property` won't be recognized unless the `@final` decorator is
placed *below* the `@property` decorator.
See the function docstring for more information.
**`is_protocol(_)`**

A backport of [`typing.is_protocol`][IS_PROTO], which was added in Python
3.13; it is a re-export of [`typing_extensions.is_protocol`][IS_PROTO_EXT].
**`is_runtime_protocol(_)`**

Checks whether the type expression is a *runtime-protocol*, i.e. a
`typing.Protocol` *type* decorated with `@typing.runtime_checkable` (also
supports `typing_extensions`).
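For instance (a hedged sketch; the `optype.Can*` protocols are documented above as runtime-checkable, so this should hold):

```pycon
>>> import optype as opt
>>> opt.inspect.is_runtime_protocol(opt.CanAdd)
True
```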
**`is_union_type(_)`**

Checks whether the type is a [`typing.Union`][UNION] type, e.g. `str | int`.

Unlike `isinstance(_, types.UnionType)`, this function also returns `True`
for unions of user-defined `Generic` or `Protocol` types (because those are
different union types for some reason).
**`is_generic_alias(_)`**

Checks whether the type is a *subscripted* type, e.g. `list[str]` or
`optype.CanNext[int]`, but not `list` or `CanNext`.

Unlike `isinstance(_, types.GenericAlias)`, this function also returns `True`
for user-defined `Generic` or `Protocol` types (because those use a different
generic alias for some reason).

Even though technically `T1 | T2` is represented as `typing.Union[T1, T2]`
(which is a (special) generic alias), `is_generic_alias` will return `False`
for such union types, because calling `T1 | T2` a subscripted type just
doesn't make much sense.
> [!NOTE]
> All functions in `optype.inspect` also work for Python 3.12 `type _` aliases
> (i.e. `typing.TypeAliasType`) and with `typing.Annotated`.

[UNION]: https://docs.python.org/3/library/typing.html#typing.Union
[LITERAL-DOCS]: https://typing.readthedocs.io/en/latest/spec/literal.html#shortening-unions-of-literals
[@FINAL]: https://docs.python.org/3/library/typing.html#typing.final
[GET_ARGS]: https://docs.python.org/3/library/typing.html#typing.get_args
[IS_PROTO]: https://docs.python.org/3.13/library/typing.html#typing.is_protocol
[IS_PROTO_EXT]: https://typing-extensions.readthedocs.io/en/latest/#typing_extensions.is_protocol
[PROTO_MEM]: https://docs.python.org/3.13/library/typing.html#typing.get_protocol_members

### `optype.json`
Type aliases for the `json` standard library:
| `json.load(s)` return type | `json.dumps(s)` input type |
|---|---|
| `Value` | `AnyValue` |
| `Array[V: Value = Value]` | `AnyArray[V: AnyValue = AnyValue]` |
| `Object[V: Value = Value]` | `AnyObject[V: AnyValue = AnyValue]` |
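A quick sketch of how these aliases could be used (the `roundtrip` function is hypothetical):

```python
import json

import optype.json


# `AnyValue` covers anything `json.dumps` accepts, and `Value` is what
# `json.loads` gives back.
def roundtrip(data: optype.json.AnyValue) -> optype.json.Value:
    return json.loads(json.dumps(data))
```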
The `(Any)Value` type can be any JSON input, i.e. `Value | Array | Object` is
equivalent to `Value`.
It's also worth noting that `Value` is a subtype of `AnyValue`, which means
that `AnyValue | Value` is equivalent to `AnyValue`.

### `optype.pickle`
For the [`pickle`][PK] standard library, `optype.pickle` provides the following
interfaces:

| method(s) | signature (bound) | type |
|---|---|---|
| `__reduce__` | `() -> R` | `CanReduce[R: str \| tuple = ...]` |
| `__reduce_ex__` | `(CanIndex) -> R` | `CanReduceEx[R: str \| tuple = ...]` |
| `__getstate__` | `() -> S` | `CanGetstate[S]` |
| `__setstate__` | `(S) -> None` | `CanSetstate[S]` |
| `__getnewargs__`<br>`__new__` | `() -> tuple[V, ...]`<br>`(V) -> Self` | `CanGetnewargs[V]` |
| `__getnewargs_ex__`<br>`__new__` | `() -> tuple[tuple[V, ...], dict[str, KV]]`<br>`(*tuple[V, ...], **dict[str, KV]) -> Self` | `CanGetnewargsEx[V, KV]` |

[PK]: https://docs.python.org/3/library/pickle.html
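As a small illustration (a sketch; the `Point` class is hypothetical):

```python
import optype.pickle


# a class with `__getstate__` and `__setstate__` satisfies both
# `CanGetstate[dict[str, int]]` and `CanSetstate[dict[str, int]]`
class Point:
    def __init__(self, x: int = 0, y: int = 0) -> None:
        self.x, self.y = x, y

    def __getstate__(self) -> dict[str, int]:
        return {"x": self.x, "y": self.y}

    def __setstate__(self, state: dict[str, int]) -> None:
        self.x, self.y = state["x"], state["y"]


p: optype.pickle.CanGetstate[dict[str, int]] = Point(1, 2)
```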
### `optype.string`
The [`string`](https://docs.python.org/3/library/string.html) standard
library contains practical constants, but it has two issues:

- The constants contain a collection of characters, but are represented as
  a single string. This makes it practically impossible to type-hint the
  individual characters, so typeshed currently types these constants as a
  `LiteralString`.
- The names of the constants are inconsistent, and don't follow
  [PEP 8](https://peps.python.org/pep-0008/#constants).

So instead, `optype.string` provides an alternative interface that is
compatible with `string`, but with slight differences:

- For each constant, there is a corresponding `Literal` type alias for
  the *individual* characters. Its name matches the name of the constant,
  but is singular instead of plural.
- Instead of a single string, `optype.string` uses a `tuple` of characters,
  so that each character has its own `typing.Literal` annotation.
  Note that this is only tested with (based)pyright / pylance, so it might
  not work with mypy (it has more bugs than it has lines of code).
- The names of the constants are consistent with PEP 8, and use a postfix
  notation for variants, e.g. `DIGITS_HEX` instead of `hexdigits`.
- Unlike `string`, `optype.string` has a constant (and type alias) for
  the binary digits `'0'` and `'1'`: `DIGITS_BIN` (and `DigitBin`), because
  besides the `oct` and `hex` functions in `builtins`, there's also
  `builtins.bin`.
| `string` constant | char type | `optype.string` constant | char type |
|---|---|---|---|
| *missing* | | `DIGITS_BIN` | `DigitBin` |
| `octdigits` | `LiteralString` | `DIGITS_OCT` | `DigitOct` |
| `digits` | `LiteralString` | `DIGITS` | `Digit` |
| `hexdigits` | `LiteralString` | `DIGITS_HEX` | `DigitHex` |
| `ascii_letters` | `LiteralString` | `LETTERS` | `Letter` |
| `ascii_lowercase` | `LiteralString` | `LETTERS_LOWER` | `LetterLower` |
| `ascii_uppercase` | `LiteralString` | `LETTERS_UPPER` | `LetterUpper` |
| `punctuation` | `LiteralString` | `PUNCTUATION` | `Punctuation` |
| `whitespace` | `LiteralString` | `WHITESPACE` | `Whitespace` |
| `printable` | `LiteralString` | `PRINTABLE` | `Printable` |
Each of the `optype.string` constants is exactly the same as the corresponding
`string` constant (after concatenation / splitting), e.g.

```pycon
>>> import string
>>> import optype as opt
>>> "".join(opt.string.PRINTABLE) == string.printable
True
>>> tuple(string.printable) == opt.string.PRINTABLE
True
```

Similarly, the values within a constant's `Literal` type exactly match the
values of its constant:

```pycon
>>> import optype as opt
>>> from optype.inspect import get_args
>>> get_args(opt.string.Printable) == opt.string.PRINTABLE
True
```

`optype.inspect.get_args` is a non-broken variant of `typing.get_args`
that correctly flattens nested literals, type unions, and PEP 695 type
aliases, so that it matches the official typing specs.
*In other words: `typing.get_args` is yet another fundamentally broken
python-typing feature that's useless in the situations where you need it
most.*

### `optype.typing`
#### `Any*` type aliases
Type aliases for anything that can *always* be passed to
`int`, `float`, `complex`, `iter`, or `typing.Literal`:
| Python constructor | `optype.typing` alias |
|---|---|
| `int(_)` | `AnyInt` |
| `float(_)` | `AnyFloat` |
| `complex(_)` | `AnyComplex` |
| `iter(_)` | `AnyIterable` |
| `typing.Literal[_]` | `AnyLiteral` |
> [!NOTE]
> Even though *some* `str` and `bytes` can be converted to `int`, `float`,
> `complex`, most of them can't, and are therefore not included in these
> type aliases.

#### `Empty*` type aliases
These are builtin types or collections that are empty, i.e. have length 0 or
yield no elements.
| instance | `optype.typing` type |
|---|---|
| `''` | `EmptyString` |
| `b''` | `EmptyBytes` |
| `()` | `EmptyTuple` |
| `[]` | `EmptyList` |
| `{}` | `EmptyDict` |
| `set()` | `EmptySet` |
| `(i for i in range(0))` | `EmptyIterable` |
#### Literal types
| Literal values | `optype.typing` type | Notes |
|---|---|---|
| `{False, True}` | `LiteralFalse` | Similar to `typing.LiteralString`, but for `bool`. |
| `{0, 1, ..., 255}` | `LiteralByte` | Integers in the range 0–255 that make up a `bytes` or `bytearray` object. |
### `optype.dlpack`
A collection of low-level types for working with [DLPack][DOC-DLPACK].
#### Protocols
**`CanDLPack[+T = int, +D: int = int]`**, with bound method:

```python
def __dlpack__(
    *,
    stream: int | None = ...,
    max_version: tuple[int, int] | None = ...,
    dl_device: tuple[T, D] | None = ...,
    copy: bool | None = ...,
) -> types.CapsuleType: ...
```
**`CanDLPackDevice[+T = int, +D: int = int]`**, with bound method:

```python
def __dlpack_device__() -> tuple[T, D]: ...
```
The `+` prefix indicates that the type parameter is *co*variant.
#### Enums
There are also two convenient
[`IntEnum`](https://docs.python.org/3/library/enum.html#enum.IntEnum)s
in `optype.dlpack`: `DLDeviceType` for the device types, and `DLDataTypeCode` for the
internal type-codes of the `DLPack` data types.

### `optype.numpy`
Optype supports both NumPy 1 and 2.
The current minimum supported version is `1.24`,
following [NEP 29][NEP29] and [SPEC 0][SPEC0].

When using `optype.numpy`, it is recommended to install `optype` with the
`numpy` extra, ensuring version compatibility:

```shell
pip install "optype[numpy]"
```

> [!NOTE]
> For the remainder of the `optype.numpy` docs, assume that the following
> import aliases are available.
>
> ```python
> from typing import Any, Literal
> import numpy as np
> import numpy.typing as npt
> import optype.numpy as onp
> ```
>
> For the sake of brevity and readability, the [PEP 695][PEP695] and
> [PEP 696][PEP696] type parameter syntax will be used, which is supported
> since Python 3.13.

#### `Array`
Optype provides the generic `onp.Array` type alias for `np.ndarray`.
It is similar to `npt.NDArray`, but includes two (optional) type parameters:
one that matches the *shape type* (`ND: tuple[int, ...]`),
and one that matches the *scalar type* (`ST: np.generic`).

When the definitions of `npt.NDArray` and `onp.Array` are put side by side,
their differences become clear:
`numpy.typing.NDArray`:

```python
type NDArray[
    # no shape type
    ST: np.generic,  # no default
] = np.ndarray[Any, np.dtype[ST]]
```

`optype.numpy.Array`:

```python
type Array[
    ND: tuple[int, ...] = tuple[int, ...],
    ST: np.generic = np.generic,
] = np.ndarray[ND, np.dtype[ST]]
```

> [!IMPORTANT]
> The shape type parameter (`ND`) of `np.ndarray` is currently defined as
> invariant.
> This is incorrect: it should be covariant.
>
> This means that `ND: tuple[int, ...]` is also invariant in `onp.Array`.
>
> The consequence is that e.g. `def span(a: onp.Array[tuple[int], ST])`,
> won't accept `onp.Array[tuple[Literal[42]]]` as argument, even though
> `Literal[42]` is a subtype of `int`.
>
> See [numpy/numpy#25729](https://github.com/numpy/numpy/issues/25729) and
> [numpy/numpy#26081](https://github.com/numpy/numpy/pull/26081) for details.

In fact, `onp.Array` is *almost* a generalization of `npt.NDArray`.
This is because `npt.NDArray` can be defined purely in terms of `onp.Array`
(but not vice versa):

```python
type NDArray[ST: np.generic] = onp.Array[Any, ST]
```

With `onp.Array`, it becomes possible to type the shape of arrays.
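For example (a sketch that focuses on the annotations; `normalize` is a hypothetical function):

```python
import numpy as np

import optype.numpy as onp


# only accepts (and returns) 1-D float64 arrays, via the shape type parameter
def normalize(
    v: onp.Array[tuple[int], np.float64],
) -> onp.Array[tuple[int], np.float64]:
    return v / np.linalg.norm(v)
```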
> [!NOTE]
> A little bird told me that `onp.Array` might be backported to NumPy in the
> near future.

#### `UFunc`
A large portion of numpy's public API consists of *universal functions*, often
denoted as [ufuncs][DOC-UFUNC], which are (callable) instances of
[`np.ufunc`][REF_UFUNC].

> [!TIP]
> Custom ufuncs can be created using [`np.frompyfunc`][REF_FROMPY], but also
> through a user-defined class that implements the required attributes and
> methods (i.e., duck typing).

But `np.ufunc` has a big issue; it accepts no type parameters.
This makes it very difficult to properly annotate its callable signature and
its literal attributes (e.g. `.nin` and `.identity`).

This is where `optype.numpy.UFunc` comes into play:
It's a runtime-checkable generic typing protocol, that has been thoroughly
type- and unit-tested to ensure compatibility with all of numpy's ufunc
definitions.
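Since it's runtime-checkable, a quick (unparameterized) sanity check is possible, e.g. (a hedged sketch):

```python
import numpy as np

import optype.numpy as onp

# `np.add` is a ufunc, so it should satisfy the (unsubscripted) protocol
assert isinstance(np.add, onp.UFunc)
```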
Its generic type signature looks roughly like:

```python
type UFunc[
# The type of the (bound) `__call__` method.
Fn: CanCall = CanCall,
# The types of the `nin` and `nout` (readonly) attributes.
# Within numpy these match either `Literal[1]` or `Literal[2]`.
Nin: int = int,
Nout: int = int,
# The type of the `signature` (readonly) attribute;
# Must be `None` unless this is a generalized ufunc (gufunc), e.g.
# `np.matmul`.
Sig: str | None = str | None,
# The type of the `identity` (readonly) attribute (used in `.reduce`).
# Unless `Nin: Literal[2]`, `Nout: Literal[1]`, and `Sig: None`,
# this should always be `None`.
# Note that `complex` also includes `bool | int | float`.
Id: complex | bytes | str | None = float | None,
] = ...
```

> [!NOTE]
> Unfortunately, the extra callable methods of `np.ufunc` (`at`, `reduce`,
> `reduceat`, `accumulate`, and `outer`), are incorrectly annotated (as `None`
> *attributes*, even though at runtime they're methods that raise a
> `ValueError` when called).
> This currently makes it impossible to properly type these in
> `optype.numpy.UFunc`; doing so would make it incompatible with numpy's
> ufuncs.

#### Shape type aliases
A *shape* is nothing more than a tuple of (non-negative) integers, i.e.
an instance of `tuple[int, ...]` such as `(42,)`, `(6, 6, 6)` or `()`.
The length of a shape is often referred to as the *number of dimensions*
or the *dimensionality* of the array or scalar.
For arrays this is accessible through `np.ndarray.ndim`, which is
an alias for `len(np.ndarray.shape)`.

> [!NOTE]
> Before NumPy 2, the maximum number of dimensions was 32, but it has since
> been increased to 64.

To make typing the shape of an array easier, optype provides two families of
shape type aliases: `AtLeast{N}D` and `AtMost{N}D`.
The `{N}` should be replaced by the number of dimensions, which currently
is limited to `0`, `1`, `2`, and `3`.

Both of these families are generic, and their (optional) type parameters must
be either `int` (default), or a literal (non-negative) integer, i.e. like
`typing.Literal[N: int]`.

> [!NOTE]
> NumPy's functions with a `shape` parameter usually also accept a "base"
> `int`, which is shorthand for `tuple[int]`.
> But for the sake of consistency, `AtLeast{N}D` and `AtMost{N}D` are
> limited to integer *tuples*.

The names `AtLeast{N}D` and `AtMost{N}D` are pretty much self-explanatory:
- `AtLeast{N}D` is a `tuple[int, ...]` with `ndim >= N`
- `AtMost{N}D` is a `tuple[int, ...]` with `ndim <= N`

The shape aliases are roughly defined as:
```python
type AtLeast0D[Ds: int = int] = tuple[Ds, ...]
type AtMost0D = tuple[()]

type AtLeast1D[D0: int = int, Ds: int = int] = tuple[D0, *tuple[Ds, ...]]
type AtMost1D[D0: int = int] = tuple[D0] | AtMost0D

type AtLeast2D[D0: int = int, D1: int = int, Ds: int = int] = tuple[D0, D1, *tuple[Ds, ...]]
type AtMost2D[D0: int = int, D1: int = int] = tuple[D0, D1] | AtMost1D[D0]

type AtLeast3D[D0: int = int, D1: int = int, D2: int = int, Ds: int = int] = tuple[D0, D1, D2, *tuple[Ds, ...]]
type AtMost3D[D0: int = int, D1: int = int, D2: int = int] = tuple[D0, D1, D2] | AtMost2D[D0, D1]
```

#### `Scalar`
The `optype.numpy.Scalar` interface is a generic runtime-checkable protocol,
that can be seen as a "more specific" `np.generic`, both in name, and from
a typing perspective.

Its type signature looks roughly like this:
```python
type Scalar[
# The "Python type", so that `Scalar.item() -> PT`.
PT: object,
# The "N-bits" type (without having to deal with `npt.NBitBase`).
# It matches the `itemsize: NB` property.
NB: int = int,
] = ...
```

It can be used as e.g.
```python
are_birds_real: Scalar[bool, Literal[1]] = np.bool_(True)
the_answer: Scalar[int, Literal[2]] = np.uint16(42)
alpha: Scalar[float, Literal[8]] = np.float64(1 / 137)
```

> [!NOTE]
> The second type argument for `itemsize` can be omitted, which is equivalent
> to setting it to `int`, so `Scalar[PT]` and `Scalar[PT, int]` are equivalent.

#### `DType`
In NumPy, a *dtype* (data type) object, is an instance of the
`numpy.dtype[ST: np.generic]` type.
It's commonly used to convey metadata of a scalar type, e.g. within arrays.

Because the type parameter of `np.dtype` isn't optional, it could be more
convenient to use the alias `optype.numpy.DType`, which is defined as:

```python
type DType[ST: np.generic = np.generic] = np.dtype[ST]
```

Apart from the "CamelCase" name, the only difference with `np.dtype` is that
the type parameter can be omitted, in which case it's equivalent to
`np.dtype[np.generic]`, but shorter.

#### `Any*Array` and `Any*DType`
The `Any{Scalar}Array` type aliases describe *everything* that, when passed to
`numpy.asarray` (or any other `numpy.ndarray` constructor), results in a
`numpy.ndarray` with specific [dtype][REF-DTYPE], i.e.
`numpy.dtypes.{Scalar}DType`.

> [!NOTE]
> The [`numpy.dtypes` docs][REF-DTYPES] exist since NumPy 1.25, but the
> type annotations were incorrect before NumPy 2.1 (see
> [numpy/numpy#27008](https://github.com/numpy/numpy/pull/27008)).

See the [docs][REF-SCT] for more info on the NumPy scalar type hierarchy.
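For example (a sketch; `as_float64` is a hypothetical function):

```python
import numpy as np

import optype.numpy as onp


# accepts anything that `np.asarray` turns into a float64 array,
# e.g. `[1.5, 2.5]`, `np.float64(1.0)`, or an existing float64 ndarray
def as_float64(x: onp.AnyFloat64Array) -> onp.Array[tuple[int, ...], np.float64]:
    return np.asarray(x, dtype=np.float64)
```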
[REF-SCT]: https://numpy.org/doc/stable/reference/arrays.scalars.html
[REF-DTYPE]: https://numpy.org/doc/stable/reference/arrays.dtypes.html
[REF-DTYPES]: https://numpy.org/doc/stable/reference/arrays.dtypes.html

##### Abstract types
| `numpy._` scalar type | base type | `optype.numpy._` array-like type | dtype-like type |
|---|---|---|---|
| `generic` | | `AnyArray` | `AnyDType` |
| `number` | `generic` | `AnyNumberArray` | `AnyNumberDType` |
| `integer` | `number` | `AnyIntegerArray` | `AnyIntegerDType` |
| `inexact` | `number` | `AnyInexactArray` | `AnyInexactDType` |
| `unsignedinteger` | `integer` | `AnyUnsignedIntegerArray` | `AnyUnsignedIntegerDType` |
| `signedinteger` | `integer` | `AnySignedIntegerArray` | `AnySignedIntegerDType` |
| `floating` | `inexact` | `AnyFloatingArray` | `AnyFloatingDType` |
| `complexfloating` | `inexact` | `AnyComplexFloatingArray` | `AnyComplexFloatingDType` |
##### Unsigned integers
| `numpy._` scalar type | base type | `optype.numpy._` array-like type | dtype-like type |
|---|---|---|---|
| `uint8` | `unsignedinteger` | `AnyUInt8Array` | `AnyUInt8DType` |
| `uint16` | `unsignedinteger` | `AnyUInt16Array` | `AnyUInt16DType` |
| `uint32` | `unsignedinteger` | `AnyUInt32Array` | `AnyUInt32DType` |
| `uint64` | `unsignedinteger` | `AnyUInt64Array` | `AnyUInt64DType` |
| `uintp` | `unsignedinteger` | `AnyUIntPArray` | `AnyUIntPDType` |
| `ubyte` | `unsignedinteger` | `AnyUByteArray` | `AnyUByteDType` |
| `ushort` | `unsignedinteger` | `AnyUShortArray` | `AnyUShortDType` |
| `uintc` | `unsignedinteger` | `AnyUIntCArray` | `AnyUIntCDType` |
| `ulong` | `unsignedinteger` | `AnyULongArray` | `AnyULongDType` |
| `ulonglong` | `unsignedinteger` | `AnyULongLongArray` | `AnyULongLongDType` |
##### Signed integers
| `numpy._` scalar type | base type | `optype.numpy._` array-like type | dtype-like type |
|---|---|---|---|
| `int8` | `signedinteger` | `AnyInt8Array` | `AnyInt8DType` |
| `int16` | `signedinteger` | `AnyInt16Array` | `AnyInt16DType` |
| `int32` | `signedinteger` | `AnyInt32Array` | `AnyInt32DType` |
| `int64` | `signedinteger` | `AnyInt64Array` | `AnyInt64DType` |
| `intp` | `signedinteger` | `AnyIntPArray` | `AnyIntPDType` |
| `byte` | `signedinteger` | `AnyByteArray` | `AnyByteDType` |
| `short` | `signedinteger` | `AnyShortArray` | `AnyShortDType` |
| `intc` | `signedinteger` | `AnyIntCArray` | `AnyIntCDType` |
| `long` | `signedinteger` | `AnyLongArray` | `AnyLongDType` |
| `longlong` | `signedinteger` | `AnyLongLongArray` | `AnyLongLongDType` |
##### Floats
| `numpy._` scalar type | base type | `optype.numpy._` array-like type | dtype-like type |
|---|---|---|---|
| `float16`, `half` | `floating` | `AnyFloat16Array` | `AnyFloat16DType` |
| `float32`, `single` | `floating` | `AnyFloat32Array` | `AnyFloat32DType` |
| `float64`, `double` | `floating` | `AnyFloat64Array` | `AnyFloat64DType` |
| `longdouble` | `floating` | `AnyLongDoubleArray` | `AnyLongDoubleDType` |
##### Complex numbers
| `numpy._` scalar type | base type | `optype.numpy._` array-like type | dtype-like type |
|---|---|---|---|
| `complex64`, `csingle` | `complexfloating` | `AnyComplex64Array` | `AnyComplex64DType` |
| `complex128`, `cdouble` | `complexfloating` | `AnyComplex128Array` | `AnyComplex128DType` |
| `clongdouble` | `complexfloating` | `AnyCLongDoubleArray` | `AnyCLongDoubleDType` |
##### "Flexible"
Scalar types with "flexible" length, whose values have a (constant) length
that depends on the specific `np.dtype` instantiation.
| `numpy._` scalar type | base type | `optype.numpy._` array-like type | dtype-like type |
|---|---|---|---|
| `str_` | `character` | `AnyStrArray` | `AnyStrDType` |
| `bytes_` | `character` | `AnyBytesArray` | `AnyBytesDType` |
| `void` | `flexible` | `AnyVoidArray` | `AnyVoidDType` |
##### Other types
| `numpy._` scalar type | base type | `optype.numpy._` array-like type | dtype-like type |
|---|---|---|---|
| `bool_` | `generic` | `AnyBoolArray` | `AnyBoolDType` |
| `datetime64` | `generic` | `AnyDateTime64Array` | `AnyDateTime64DType` |
| `timedelta64` | `generic` | `AnyTimeDelta64Array` | `AnyTimeDelta64DType` |
| `object_` | `generic` | `AnyObjectArray` | `AnyObjectDType` |
| *missing* | | `AnyStringArray` | `AnyStringDType` |
#### Low-level interfaces
Within `optype.numpy` there are several `Can*` (single-method) and `Has*`
(single-attribute) protocols, related to the `__array_*__` dunders of the
NumPy Python API.
These typing protocols are, just like the `optype.Can*` and `optype.Has*` ones,
runtime-checkable and extensible (i.e. not `@final`).

> [!TIP]
> All type parameters of these protocols can be omitted, which is equivalent
> to passing its upper type bound.
**`CanArray`**, which implements `__array__`
([User Guide: Interoperability with NumPy][DOC-ARRAY]):

```python
class CanArray[
    ND: tuple[int, ...] = ...,
    ST: np.generic = ...,
]: ...
```

```python
def __array__[RT = ST](
    _,
    dtype: DType[RT] | None = ...,
) -> Array[ND, RT]: ...
```

**`CanArrayUFunc`**, which implements `__array_ufunc__` ([NEP 13][NEP13]):

```python
class CanArrayUFunc[
    U: UFunc = ...,
    R: object = ...,
]: ...
```

```python
def __array_ufunc__(
    _,
    ufunc: U,
    method: LiteralString,
    *args: object,
    **kwargs: object,
) -> R: ...
```

**`CanArrayFunction`**, which implements `__array_function__` ([NEP 18][NEP18]):

```python
class CanArrayFunction[
    F: CanCall[..., object] = ...,
    R = object,
]: ...
```

```python
def __array_function__(
    _,
    func: F,
    types: CanIterSelf[type[CanArrayFunction]],
    args: tuple[object, ...],
    kwargs: Mapping[str, object],
) -> R: ...
```

**`CanArrayFinalize`**, which implements `__array_finalize__`
([User Guide: Subclassing ndarray][DOC-AFIN]):

```python
class CanArrayFinalize[
    T: object = ...,
]: ...
```

```python
def __array_finalize__(_, obj: T): ...
```

**`CanArrayWrap`**, which implements `__array_wrap__`
([API: Standard array subclasses][REF_ARRAY-WRAP]):

```python
class CanArrayWrap: ...
```

```python
def __array_wrap__[ND, ST](
    _,
    array: Array[ND, ST],
    context: (...) | None = ...,
    return_scalar: bool = ...,
) -> Self | Array[ND, ST]: ...
```

**`HasArrayInterface`**, which has an `__array_interface__` attribute
([API: The array interface protocol][REF_ARRAY-INTER]):

```python
class HasArrayInterface[
    V: Mapping[str, object] = ...,
]: ...
```

```python
__array_interface__: V
```

**`HasArrayPriority`**, which has an `__array_priority__` attribute
([API: Standard array subclasses][REF_ARRAY-PRIO]):

```python
class HasArrayPriority: ...
```

```python
__array_priority__: float
```

**`HasDType`**, which has a `dtype` attribute
([API: Specifying and constructing data types][REF_DTYPE]):

```python
class HasDType[
    DT: DType = ...,
]: ...
```

```python
dtype: DT
```
[DOC-UFUNC]: https://numpy.org/doc/stable/reference/ufuncs.html
[DOC-ARRAY]: https://numpy.org/doc/stable/user/basics.interoperability.html#the-array-method
[DOC-AFIN]: https://numpy.org/doc/stable/user/basics.subclassing.html#the-role-of-array-finalize

[REF_UFUNC]: https://numpy.org/doc/stable/reference/generated/numpy.ufunc.html
[REF_FROMPY]: https://numpy.org/doc/stable/reference/generated/numpy.frompyfunc.html
[REF_ARRAY-WRAP]: https://numpy.org/doc/stable/reference/arrays.classes.html#numpy.class.__array_wrap__
[REF_ARRAY-INTER]: https://numpy.org/doc/stable/reference/arrays.interface.html#python-side
[REF_ARRAY-PRIO]: https://numpy.org/doc/stable/reference/arrays.classes.html#numpy.class.__array_priority__
[REF_DTYPE]: https://numpy.org/doc/stable/reference/arrays.dtypes.html#specifying-and-constructing-data-types

[NEP13]: https://numpy.org/neps/nep-0013-ufunc-overrides.html
[NEP18]: https://numpy.org/neps/nep-0018-array-function-protocol.html
[NEP29]: https://numpy.org/neps/nep-0029-deprecation_policy.html

[SPEC0]: https://scientific-python.org/specs/spec-0000/
[PEP695]: https://peps.python.org/pep-0695/
[PEP696]: https://peps.python.org/pep-0696/