https://github.com/jorenham/optype
Typing Protocols for Precise Type Hints in Python 3.12+
- Host: GitHub
- URL: https://github.com/jorenham/optype
- Owner: jorenham
- License: bsd-3-clause
- Created: 2024-02-22T03:07:03.000Z (about 1 year ago)
- Default Branch: master
- Last Pushed: 2024-04-29T22:01:56.000Z (about 1 year ago)
- Last Synced: 2024-05-01T15:54:15.045Z (about 1 year ago)
- Language: Python
- Size: 479 KB
- Stars: 5
- Watchers: 4
- Forks: 0
- Open Issues: 4
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
Awesome Lists containing this project
- awesome-python-typing - optype - Opinionated `collections.abc` and `operators` alternative: Flexible single-method protocols and typed operators with predictable names. (Additional types)
README

# optype

Building blocks for precise & flexible type hints.
---
## Installation
### PyPI
Optype is available as [`optype`][PYPI] on PyPI:
```shell
pip install optype
```

For optional [NumPy][NUMPY] support, it is recommended to use the
`numpy` extra.
This ensures that the installed `numpy` version is compatible with
`optype`, following [NEP 29][NEP29] and [SPEC 0][SPEC0]:

```shell
pip install "optype[numpy]"
```

See the [`optype.numpy` docs](#optypenumpy) for more info.
### Conda
Optype can also be installed with `conda` from the [`conda-forge`][CONDA] channel:
```shell
conda install conda-forge::optype
```

[PYPI]: https://pypi.org/project/optype/
[CONDA]: https://anaconda.org/conda-forge/optype
[NUMPY]: https://github.com/numpy/numpy

## Example
Let's say you're writing a `twice(x)` function, that evaluates `2 * x`.
Implementing it is trivial, but what about the type annotations?

Because `twice(2) == 4`, `twice(3.14) == 6.28`, and `twice('I') == 'II'`, it
might seem like a good idea to type it as `twice[T](x: T) -> T: ...`.
However, that wouldn't include cases such as `twice(True) == 2` or
`twice((42, True)) == (42, True, 42, True)`, where the input- and output types
differ.
Moreover, `twice` should accept *any* type with a custom `__rmul__` method
that accepts `2` as an argument.

This is where `optype` comes in handy: it has single-method protocols for
*all* the builtin special methods.
For `twice`, we can use `optype.CanRMul[T, R]`, which, as the name suggests,
is a protocol with (only) the `def __rmul__(self, lhs: T) -> R: ...` method.
With this, the `twice` function can written as:Python 3.10
Python 3.12+```python
from typing import Literal
from typing import TypeAlias, TypeVar
from optype import CanRMulR = TypeVar("R")
Two: TypeAlias = Literal[2]
RMul2: TypeAlias = CanRMul[Two, R]def twice(x: RMul2[R]) -> R:
return 2 * x
``````python
from typing import Literal
from optype import CanRMultype Two = Literal[2]
type RMul2[R] = CanRMul[Two, R]def twice[R](x: RMul2[R]) -> R:
return 2 * x
But what about types that implement `__mul__` but not `__rmul__`?
In this case, we could return `x * 2` as a fallback (assuming commutativity).
Because the `optype.Can*` protocols are runtime-checkable, the revised
`twice2` function can be compactly written as:

Python 3.10:

```python
from optype import CanMul

Mul2: TypeAlias = CanMul[Two, R]
CMul2: TypeAlias = Mul2[R] | RMul2[R]


def twice2(x: CMul2[R]) -> R:
if isinstance(x, CanRMul):
return 2 * x
else:
return x * 2
```

Python 3.12+:

```python
from optype import CanMul

type Mul2[R] = CanMul[Two, R]
type CMul2[R] = Mul2[R] | RMul2[R]


def twice2[R](x: CMul2[R]) -> R:
if isinstance(x, CanRMul):
return 2 * x
else:
return x * 2
```

See [`examples/twice.py`](examples/twice.py) for the full example.
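Since the snippet above needs `optype` installed to run, here is a self-contained sketch of the same runtime dispatch, using hypothetical, non-generic stand-ins for `optype.CanRMul` and `optype.CanMul` built from plain `typing.Protocol` (the type parameters are dropped because `isinstance` can only check for the presence of a method, not its signature):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class CanRMul(Protocol):
    # minimal stand-in for optype.CanRMul: only __rmul__ is required
    def __rmul__(self, lhs, /): ...


@runtime_checkable
class CanMul(Protocol):
    # minimal stand-in for optype.CanMul: only __mul__ is required
    def __mul__(self, rhs, /): ...


def twice2(x):
    # prefer the reflected `2 * x`, and fall back to `x * 2`
    if isinstance(x, CanRMul):
        return 2 * x
    return x * 2
```

The `isinstance` branch behaves exactly like the `optype` version, since `optype.Can*` protocols are runtime-checkable in the same way.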
## Reference
The API of `optype` is flat; a single `import optype as opt` is all you need
(except for `optype.numpy`).

- [`optype`](#optype)
  - [`Just`](#just)
  - [Builtin type conversion](#builtin-type-conversion)
  - [Rich relations](#rich-relations)
  - [Binary operations](#binary-operations)
  - [Reflected operations](#reflected-operations)
  - [Inplace operations](#inplace-operations)
  - [Unary operations](#unary-operations)
  - [Rounding](#rounding)
  - [Callables](#callables)
  - [Iteration](#iteration)
  - [Awaitables](#awaitables)
  - [Async Iteration](#async-iteration)
  - [Containers](#containers)
  - [Attributes](#attributes)
  - [Context managers](#context-managers)
  - [Descriptors](#descriptors)
  - [Buffer types](#buffer-types)
- [`optype.copy`](#optypecopy)
- [`optype.dataclasses`](#optypedataclasses)
- [`optype.inspect`](#optypeinspect)
- [`optype.io`](#optypeio)
- [`optype.json`](#optypejson)
- [`optype.pickle`](#optypepickle)
- [`optype.string`](#optypestring)
- [`optype.typing`](#optypetyping)
  - [`Any*` type aliases](#any-type-aliases)
  - [`Empty*` type aliases](#empty-type-aliases)
  - [Literal types](#literal-types)
- [`optype.dlpack`](#optypedlpack)
- [`optype.numpy`](#optypenumpy)
  - [Shape-typing](#shape-typing)
  - [Array-likes](#array-likes)
  - [Literals](#literals)
  - [`compat` submodule](#compat-submodule)
  - [`random` submodule](#random-submodule)
  - [`Any*Array` and `Any*DType`](#anyarray-and-anydtype)
  - [Low-level interfaces](#low-level-interfaces)

### `optype`
There are five flavors of things that live within `optype`:

- `optype.Just[T]` and its `optype.Just{Int,Float,Complex}` subtypes only accept
  instances of the type itself, while rejecting instances of strict subtypes.
  This can be used to e.g. work around the `float` and `complex`
  [type promotions][BAD], annotate `object()` sentinels with `Just[object]`,
  or reject `bool` in functions that accept `int`.
- `optype.Can{}` types describe *what can be done* with it.
  For instance, any `CanAbs[T]` type can be used as argument to the `abs()`
  builtin function with return type `T`. Most `Can{}` protocols implement a
  single special method, whose name directly matches that of the type:
  `CanAbs` implements `__abs__`, `CanAdd` implements `__add__`, etc.
- `optype.Has{}` is the analogue of `Can{}`, but for special *attributes*.
  `HasName` has a `__name__` attribute, `HasDict` has a `__dict__`, etc.
- `optype.Does{}` types describe the *type of operators*.
  So `DoesAbs` is the type of the `abs({})` builtin function,
  and `DoesPos` the type of the `+{}` prefix operator.
- `optype.do_{}` are the correctly-typed implementations of `Does{}`. For
  each `do_{}` there is a `Does{}`, and vice versa.
  So `do_abs: DoesAbs` is the typed alias of `abs({})`,
  and `do_pos: DoesPos` is a typed version of `operator.pos`.
  The `optype.do_` operators are more complete than `operator`,
  have runtime-accessible type annotations, and have names you don't
  need to know by heart.

The reference docs are structured as follows:
All [typing protocols][PC] here live in the root `optype` namespace.
They are [runtime-checkable][RC] so that you can do e.g.
`isinstance('snail', optype.CanAdd)`, in case you want to check whether
`snail` implements `__add__`.

Unlike `collections.abc`, `optype`'s protocols aren't abstract base classes,
i.e. they don't extend `abc.ABC`, only `typing.Protocol`.
This allows the `optype` protocols to be used as building blocks for `.pyi`
type stubs.

[BAD]: https://typing.readthedocs.io/en/latest/spec/special-types.html#special-cases-for-float-and-complex
[PC]: https://typing.readthedocs.io/en/latest/spec/protocol.html
[RC]: https://typing.readthedocs.io/en/latest/spec/protocol.html#runtime-checkable-decorator-and-narrowing-types-by-isinstance

#### `Just`
`Just` is an invariant type "wrapper", where `Just[T]` only accepts instances of `T`,
and rejects instances of any strict subtype of `T`.

Note that e.g. `Literal[""]` and `LiteralString` are not strict `str` subtypes,
and are therefore assignable to `Just[str]`, but instances of `class S(str): ...`
are **not** assignable to `Just[str]`.

Disallow passing `bool` as `int`:
```py
import optype as op


def assert_int(x: op.Just[int]) -> int:
    assert type(x) is int
    return x


assert_int(42)  # ok
assert_int(False)  # rejected
```

Annotating a sentinel:
```py
import optype as op

_DEFAULT = object()


def intmap(
    value: int,
    # same as `dict[int, int] | op.Just[object]`
    mapping: dict[int, int] | op.JustObject = _DEFAULT,
    /,
) -> int:
    # same as `type(mapping) is object`
    if isinstance(mapping, op.JustObject):
        return value

    return mapping[value]


intmap(1)  # ok
intmap(1, {1: 42})  # ok
intmap(1, "some object")  # rejected
```

> [!TIP]
> The `Just{Bytes,Int,Float,Complex,Date,Object}` protocols are runtime-checkable,
> so that `isinstance(42, JustInt) is True` and `isinstance(bool(), JustInt) is False`.
> This is implemented through metaclasses, and type-checkers have no problem with it.

| `optype` type | accepts instances of |
| ------------- | -------------------- |
| `Just[T]` | `T` |
| `JustInt` | `builtins.int` |
| `JustFloat` | `builtins.float` |
| `JustComplex` | `builtins.complex` |
| `JustBytes` | `builtins.bytes` |
| `JustObject` | `builtins.object` |
| `JustDate`    | `datetime.date`      |

##### :warning: Compatibility: (based)pyright
On `pyright<1.1.390` and `basedpyright<1.22.1` this `Just[T]` type does not work,
due to a bug in the `typeshed` stubs for `object.__class__` (fixed in
[python/typeshed#13021](https://github.com/python/typeshed/pull/13021)).

However, you can use the `JustInt`, `JustFloat`, and `JustComplex` types to
work around this: these already work on `pyright<1.1.390` without problems.

##### :warning: Compatibility: (based)mypy
On `mypy<1.15` this does not work with promoted types, such as `float` and `bytes`
(fixed in [python/mypy#18360](https://github.com/python/mypy/pull/18360)).

For other ("unpromoted") types like `Just[int]`, this already works, even
before the `typeshed` fix above (mypy ignores `@property` setter types and
overwrites them with the getter's return type).

#### Builtin type conversion
The return type of these special methods is *invariant*. Python will raise an
error if some other (sub)type is returned.
This is why these `optype` interfaces don't accept generic type arguments.
| expression   | function     | function type | method        | operand type                 |
| ------------ | ------------ | ------------- | ------------- | ---------------------------- |
| `complex(_)` | `do_complex` | `DoesComplex` | `__complex__` | `CanComplex`                 |
| `float(_)`   | `do_float`   | `DoesFloat`   | `__float__`   | `CanFloat`                   |
| `int(_)`     | `do_int`     | `DoesInt`     | `__int__`     | `CanInt[R: int = int]`       |
| `bool(_)`    | `do_bool`    | `DoesBool`    | `__bool__`    | `CanBool[R: bool = bool]`    |
| `bytes(_)`   | `do_bytes`   | `DoesBytes`   | `__bytes__`   | `CanBytes[R: bytes = bytes]` |
| `str(_)`     | `do_str`     | `DoesStr`     | `__str__`     | `CanStr[R: str = str]`       |
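As a runnable illustration of these conversion protocols, here are minimal runtime stand-ins (hypothetical, simplified versions of `optype.CanInt` and `optype.CanBool`; the generic return-type parameter is dropped because `isinstance` cannot check return types), together with a toy `Pixel` class that satisfies both:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class CanInt(Protocol):
    # stand-in for optype.CanInt: anything usable as int(_)
    def __int__(self) -> int: ...


@runtime_checkable
class CanBool(Protocol):
    # stand-in for optype.CanBool: anything usable as bool(_)
    def __bool__(self) -> bool: ...


class Pixel:
    def __init__(self, brightness: float) -> None:
        self.brightness = brightness

    def __int__(self) -> int:
        return int(self.brightness)

    def __bool__(self) -> bool:
        return self.brightness > 0
```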
> [!NOTE]
> The `Can*` interfaces of the types that can be used as `typing.Literal`
> accept an optional type parameter `R`.
> This can be used to indicate a literal return type,
> for surgically precise typing, e.g. `None`, `True`, and `42` are
> instances of `CanBool[Literal[False]]`, `CanInt[Literal[1]]`, and
> `CanStr[Literal['42']]`, respectively.

These formatting methods are allowed to return instances that are a subtype
of the `str` builtin. The same holds for the `__format__` argument.
So if you're a 10x developer that wants to hack Python's f-strings, but only
if your type hints are spot-on; `optype` is your friend.
| expression     | function    | function type | method       | operand type                            |
| -------------- | ----------- | ------------- | ------------ | --------------------------------------- |
| `repr(_)`      | `do_repr`   | `DoesRepr`    | `__repr__`   | `CanRepr[R: str = str]`                 |
| `format(_, x)` | `do_format` | `DoesFormat`  | `__format__` | `CanFormat[T: str = str, R: str = str]` |
Additionally, `optype` provides protocols for types with (custom) *hash* or
*index* methods:
| expression             | function          | function type    | method            | operand type                  |
| ---------------------- | ----------------- | ---------------- | ----------------- | ----------------------------- |
| `hash(_)`              | `do_hash`         | `DoesHash`       | `__hash__`        | `CanHash`                     |
| `_.__index__()` (docs) | `do_index`        | `DoesIndex`      | `__index__`       | `CanIndex[R: int = int]`      |
#### Rich relations
The "rich" comparison special methods often return a `bool`.
However, instances of any type can be returned (e.g. a numpy array).
This is why the corresponding `optype.Can*` interfaces accept a second type
argument for the return type, which defaults to `bool` when omitted.
The first type parameter matches the passed method argument, i.e. the
right-hand side operand, denoted here as `x`.
| expression | reflected | function | function type | method   | operand type                  |
| ---------- | --------- | -------- | ------------- | -------- | ----------------------------- |
| `_ == x`   | `x == _`  | `do_eq`  | `DoesEq`      | `__eq__` | `CanEq[T = object, R = bool]` |
| `_ != x`   | `x != _`  | `do_ne`  | `DoesNe`      | `__ne__` | `CanNe[T = object, R = bool]` |
| `_ < x`    | `x > _`   | `do_lt`  | `DoesLt`      | `__lt__` | `CanLt[T, R = bool]`          |
| `_ <= x`   | `x >= _`  | `do_le`  | `DoesLe`      | `__le__` | `CanLe[T, R = bool]`          |
| `_ > x`    | `x < _`   | `do_gt`  | `DoesGt`      | `__gt__` | `CanGt[T, R = bool]`          |
| `_ >= x`   | `x <= _`  | `do_ge`  | `DoesGe`      | `__ge__` | `CanGe[T, R = bool]`          |
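To see why the return-type parameter matters, here is a sketch with a hypothetical element-wise vector (in the spirit of numpy arrays) whose `__lt__` returns a list of booleans rather than a single `bool`; such a type would be a `CanLt[Vec, list[bool]]` rather than a `CanLt[Vec]`:

```python
class Vec:
    """Toy vector whose comparisons are element-wise, like a numpy array."""

    def __init__(self, *xs: float) -> None:
        self.xs = xs

    def __lt__(self, other: "Vec") -> list[bool]:
        # returns list[bool], not bool: hence CanLt's second type parameter
        return [a < b for a, b in zip(self.xs, other.xs)]
```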
#### Binary operations
In the [Python docs][NT], these are referred to as "arithmetic operations".
But the operands aren't limited to numeric types; the operations aren't
required to be commutative, they might be non-deterministic, and they could
have side-effects.
Classifying them as "arithmetic" is, at the very least, a bit of a stretch.
| expression              | function      | function type  | method         | operand type                                   |
| ----------------------- | ------------- | -------------- | -------------- | ---------------------------------------------- |
| `_ + x`                 | `do_add`      | `DoesAdd`      | `__add__`      | `CanAdd[T, R]`                                 |
| `_ - x`                 | `do_sub`      | `DoesSub`      | `__sub__`      | `CanSub[T, R]`                                 |
| `_ * x`                 | `do_mul`      | `DoesMul`      | `__mul__`      | `CanMul[T, R]`                                 |
| `_ @ x`                 | `do_matmul`   | `DoesMatmul`   | `__matmul__`   | `CanMatmul[T, R]`                              |
| `_ / x`                 | `do_truediv`  | `DoesTruediv`  | `__truediv__`  | `CanTruediv[T, R]`                             |
| `_ // x`                | `do_floordiv` | `DoesFloordiv` | `__floordiv__` | `CanFloordiv[T, R]`                            |
| `_ % x`                 | `do_mod`      | `DoesMod`      | `__mod__`      | `CanMod[T, R]`                                 |
| `divmod(_, x)`          | `do_divmod`   | `DoesDivmod`   | `__divmod__`   | `CanDivmod[T, R]`                              |
| `_ ** x`<br>`pow(_, x)` | `do_pow/2`    | `DoesPow`      | `__pow__`      | `CanPow2[T, R]`<br>`CanPow[T, None, R, Never]` |
| `pow(_, x, m)`          | `do_pow/3`    | `DoesPow`      | `__pow__`      | `CanPow3[T, M, R]`<br>`CanPow[T, M, Never, R]` |
| `_ << x`                | `do_lshift`   | `DoesLshift`   | `__lshift__`   | `CanLshift[T, R]`                              |
| `_ >> x`                | `do_rshift`   | `DoesRshift`   | `__rshift__`   | `CanRshift[T, R]`                              |
| `_ & x`                 | `do_and`      | `DoesAnd`      | `__and__`      | `CanAnd[T, R]`                                 |
| `_ ^ x`                 | `do_xor`      | `DoesXor`      | `__xor__`      | `CanXor[T, R]`                                 |
| `_ \| x`                | `do_or`       | `DoesOr`       | `__or__`       | `CanOr[T, R]`                                  |
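The two shapes of `__pow__` listed above can be illustrated with a hypothetical toy `Num` class, whose `__pow__` handles both the binary `_ ** x` form and the ternary `pow(_, x, m)` form (the method shape that `CanPow2` and `CanPow3` describe):

```python
class Num:
    """Hypothetical numeric wrapper with a two- and three-argument __pow__."""

    def __init__(self, value: int) -> None:
        self.value = value

    def __pow__(self, exp, mod=None):
        # binary form (`_ ** x`): returns another Num
        e = exp.value if isinstance(exp, Num) else exp
        if mod is None:
            return Num(self.value**e)
        # ternary form (`pow(_, x, m)`): returns a plain int
        return pow(self.value, e, mod)
```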
> [!NOTE]
> Because `pow()` can take an optional third argument, `optype`
> provides separate interfaces for `pow()` with two and three arguments.
> Additionally, there is the overloaded intersection type
> `CanPow[T, M, R, RM] =: CanPow2[T, R] & CanPow3[T, M, RM]`, as interface
> for types that can take an optional third argument.

#### Reflected operations
For the binary infix operators above, `optype` additionally provides
interfaces with *reflected* (swapped) operands, e.g. `__radd__` is a reflected
`__add__`.
They are named like the originals, but with a `CanR` prefix, i.e.
`__name__.replace('Can', 'CanR')`.
| expression              | function       | function type   | method          | operand type        |
| ----------------------- | -------------- | --------------- | --------------- | ------------------- |
| `x + _`                 | `do_radd`      | `DoesRAdd`      | `__radd__`      | `CanRAdd[T, R]`     |
| `x - _`                 | `do_rsub`      | `DoesRSub`      | `__rsub__`      | `CanRSub[T, R]`     |
| `x * _`                 | `do_rmul`      | `DoesRMul`      | `__rmul__`      | `CanRMul[T, R]`     |
| `x @ _`                 | `do_rmatmul`   | `DoesRMatmul`   | `__rmatmul__`   | `CanRMatmul[T, R]`  |
| `x / _`                 | `do_rtruediv`  | `DoesRTruediv`  | `__rtruediv__`  | `CanRTruediv[T, R]` |
| `x // _`                | `do_rfloordiv` | `DoesRFloordiv` | `__rfloordiv__` | `CanRFloordiv[T, R]` |
| `x % _`                 | `do_rmod`      | `DoesRMod`      | `__rmod__`      | `CanRMod[T, R]`     |
| `divmod(x, _)`          | `do_rdivmod`   | `DoesRDivmod`   | `__rdivmod__`   | `CanRDivmod[T, R]`  |
| `x ** _`<br>`pow(x, _)` | `do_rpow`      | `DoesRPow`      | `__rpow__`      | `CanRPow[T, R]`     |
| `x << _`                | `do_rlshift`   | `DoesRLshift`   | `__rlshift__`   | `CanRLshift[T, R]`  |
| `x >> _`                | `do_rrshift`   | `DoesRRshift`   | `__rrshift__`   | `CanRRshift[T, R]`  |
| `x & _`                 | `do_rand`      | `DoesRAnd`      | `__rand__`      | `CanRAnd[T, R]`     |
| `x ^ _`                 | `do_rxor`      | `DoesRXor`      | `__rxor__`      | `CanRXor[T, R]`     |
| `x \| _`                | `do_ror`       | `DoesROr`       | `__ror__`       | `CanROr[T, R]`      |
> [!NOTE]
> `CanRPow` corresponds to `CanPow2`; the 3-parameter "modulo" `pow` does not
> reflect in Python.
>
> According to the relevant [python docs][RPOW]:
> > Note that ternary `pow()` will not try calling `__rpow__()` (the coercion
> > rules would become too complicated).

[RPOW]: https://docs.python.org/3/reference/datamodel.html#object.__rpow__
#### Inplace operations
Similar to the reflected ops, the inplace/augmented ops are prefixed with
`CanI`, namely:
| expression | function       | function type   | method          | operand types                               |
| ---------- | -------------- | --------------- | --------------- | ------------------------------------------- |
| `_ += x`   | `do_iadd`      | `DoesIAdd`      | `__iadd__`      | `CanIAdd[T, R]`<br>`CanIAddSelf[T]`         |
| `_ -= x`   | `do_isub`      | `DoesISub`      | `__isub__`      | `CanISub[T, R]`<br>`CanISubSelf[T]`         |
| `_ *= x`   | `do_imul`      | `DoesIMul`      | `__imul__`      | `CanIMul[T, R]`<br>`CanIMulSelf[T]`         |
| `_ @= x`   | `do_imatmul`   | `DoesIMatmul`   | `__imatmul__`   | `CanIMatmul[T, R]`<br>`CanIMatmulSelf[T]`   |
| `_ /= x`   | `do_itruediv`  | `DoesITruediv`  | `__itruediv__`  | `CanITruediv[T, R]`<br>`CanITruedivSelf[T]` |
| `_ //= x`  | `do_ifloordiv` | `DoesIFloordiv` | `__ifloordiv__` | `CanIFloordiv[T, R]`<br>`CanIFloordivSelf[T]` |
| `_ %= x`   | `do_imod`      | `DoesIMod`      | `__imod__`      | `CanIMod[T, R]`<br>`CanIModSelf[T]`         |
| `_ **= x`  | `do_ipow`      | `DoesIPow`      | `__ipow__`      | `CanIPow[T, R]`<br>`CanIPowSelf[T]`         |
| `_ <<= x`  | `do_ilshift`   | `DoesILshift`   | `__ilshift__`   | `CanILshift[T, R]`<br>`CanILshiftSelf[T]`   |
| `_ >>= x`  | `do_irshift`   | `DoesIRshift`   | `__irshift__`   | `CanIRshift[T, R]`<br>`CanIRshiftSelf[T]`   |
| `_ &= x`   | `do_iand`      | `DoesIAnd`      | `__iand__`      | `CanIAnd[T, R]`<br>`CanIAndSelf[T]`         |
| `_ ^= x`   | `do_ixor`      | `DoesIXor`      | `__ixor__`      | `CanIXor[T, R]`<br>`CanIXorSelf[T]`         |
| `_ \|= x`  | `do_ior`       | `DoesIOr`       | `__ior__`       | `CanIOr[T, R]`<br>`CanIOrSelf[T]`           |
These inplace operators usually return the object itself (after some in-place
mutation). But unfortunately, it currently isn't possible to use `Self` for
this (i.e. something like `type MyAlias[T] = optype.CanIAdd[T, Self]` isn't
allowed). So to help ease this unbearable pain, `optype` comes equipped with
ready-made aliases for you to use. They bear the same name, with an additional
`*Self` suffix, e.g. `optype.CanIAddSelf[T]`.

#### Unary operations
| expression | function    | function type | method       | operand types                   |
| ---------- | ----------- | ------------- | ------------ | ------------------------------- |
| `+_`       | `do_pos`    | `DoesPos`     | `__pos__`    | `CanPos[R]`<br>`CanPosSelf`     |
| `-_`       | `do_neg`    | `DoesNeg`     | `__neg__`    | `CanNeg[R]`<br>`CanNegSelf`     |
| `~_`       | `do_invert` | `DoesInvert`  | `__invert__` | `CanInvert[R]`<br>`CanInvertSelf` |
| `abs(_)`   | `do_abs`    | `DoesAbs`     | `__abs__`    | `CanAbs[R]`<br>`CanAbsSelf`     |
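As a runnable sketch of the `abs(_)` row: a hypothetical `Offset` type whose `__abs__` returns a *different* type than the operand (a `float` magnitude), which is exactly the case `CanAbs[R]` is parameterized for. The `CanAbs` protocol below is a simplified, non-generic stand-in for `optype.CanAbs`:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class CanAbs(Protocol):
    # stand-in for optype.CanAbs: anything usable with abs(_)
    def __abs__(self): ...


class Offset:
    def __init__(self, dx: int, dy: int) -> None:
        self.dx, self.dy = dx, dy

    def __abs__(self) -> float:
        # abs(_) may return a different type than the operand (here: float)
        return (self.dx**2 + self.dy**2) ** 0.5
```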
#### Rounding
The `round()` built-in function takes an optional second argument.
From a typing perspective, `round()` has two overloads: one with one
parameter, and one with two.
For both overloads, `optype` provides separate operand interfaces:
`CanRound1[R]` and `CanRound2[T, RT]`.
Additionally, `optype` also provides their (overloaded) intersection type:
`CanRound[T, R, RT] = CanRound1[R] & CanRound2[T, RT]`.
| expression        | function     | function type | method        | operand type                             |
| ----------------- | ------------ | ------------- | ------------- | ---------------------------------------- |
| `round(_)`        | `do_round/1` | `DoesRound`   | `__round__/1` | `CanRound1[T = int]`                     |
| `round(_, n)`     | `do_round/2` | `DoesRound`   | `__round__/2` | `CanRound2[T = int, RT = float]`         |
| `round(_, n=...)` | `do_round`   | `DoesRound`   | `__round__`   | `CanRound[T = int, R = int, RT = float]` |
For example, type-checkers will mark the following code as valid (tested with
pyright in strict mode):

```python
x: float = 3.14
x1: CanRound1[int] = x
x2: CanRound2[int, float] = x
x3: CanRound[int, int, float] = x
```

Furthermore, there are the alternative rounding functions from the
[`math`][MATH] standard library:
| expression      | function   | function type | method      | operand type       |
| --------------- | ---------- | ------------- | ----------- | ------------------ |
| `math.trunc(_)` | `do_trunc` | `DoesTrunc`   | `__trunc__` | `CanTrunc[R = int]` |
| `math.floor(_)` | `do_floor` | `DoesFloor`   | `__floor__` | `CanFloor[R = int]` |
| `math.ceil(_)`  | `do_ceil`  | `DoesCeil`    | `__ceil__`  | `CanCeil[R = int]`  |
Almost all implementations use `int` for `R`.
In fact, if no type for `R` is specified, it will default to `int`.
But technically speaking, these methods can be made to return anything.

[MATH]: https://docs.python.org/3/library/math.html
[NT]: https://docs.python.org/3/reference/datamodel.html#emulating-numeric-types

#### Callables
Unlike `operator`, `optype` provides the operator for callable objects:
`optype.do_call(f, *args, **kwargs)`.

`CanCall` is similar to `collections.abc.Callable`, but is runtime-checkable,
and doesn't use esoteric hacks.
| expression           | function  | function type | method     | operand type        |
| -------------------- | --------- | ------------- | ---------- | ------------------- |
| `_(*args, **kwargs)` | `do_call` | `DoesCall`    | `__call__` | `CanCall[**Pss, R]` |
> [!NOTE]
> Pyright (and probably other typecheckers) tend to accept
> `collections.abc.Callable` in more places than `optype.CanCall`.
> This could be related to the lack of co/contra-variance specification for
> `typing.ParamSpec` (they should almost always be contravariant, but
> currently they can only be invariant).
>
> In case you encounter such a situation, please open an issue about it, so we
> can investigate further.

#### Iteration
The operand `x` of `iter(_)` is known within Python as an *iterable*, which is
what `collections.abc.Iterable[V]` is often used for (e.g. as base class, or
for instance checking).

The `optype` analogue is `CanIter[R]`, which, as the name suggests,
also implements `__iter__`. But unlike `Iterable[V]`, its type parameter `R`
binds to the return type of `iter(_) -> R`. This makes it possible to annotate
the specific type of the *iterable* that `iter(_)` returns. `Iterable[V]` is
only able to annotate the type of the iterated value. To see why that isn't
possible, see [python/typing#548](https://github.com/python/typing/issues/548).

The `collections.abc.Iterator[V]` is even more awkward; it is a subtype of
`Iterable[V]`. For those familiar with `collections.abc` this might come as a
surprise, but an iterator only needs to implement `__next__`; `__iter__` isn't
needed. This means that `Iterator[V]` is unnecessarily restrictive.
Apart from that being theoretically "ugly", it has significant performance
implications, because the time-complexity of `isinstance` on a
`typing.Protocol` is $O(n)$, with $n$ referring to the number of members.
So even if the overhead of the inheritance and the `abc.ABC` usage is ignored,
`collections.abc.Iterator` is twice as slow as it needs to be.

That's one of the (many) reasons that `optype.CanNext[V]` and
`optype.CanIter[R]` are the better alternatives to `Iterator` and `Iterable`
from the abracadabra collections. This is how they are defined:
| expression | function  | function type | method     | operand type                  |
| ---------- | --------- | ------------- | ---------- | ----------------------------- |
| `next(_)`  | `do_next` | `DoesNext`    | `__next__` | `CanNext[V]`                  |
| `iter(_)`  | `do_iter` | `DoesIter`    | `__iter__` | `CanIter[R: CanNext[object]]` |
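The point that `__next__` alone makes an iterator can be demonstrated with hypothetical, non-generic `typing.Protocol` stand-ins for `optype.CanNext` and `optype.CanIter`, plus a `Countdown` class that deliberately implements only `__next__`:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class CanNext(Protocol):
    # stand-in for optype.CanNext: only __next__ is required
    def __next__(self): ...


@runtime_checkable
class CanIter(Protocol):
    # stand-in for optype.CanIter: only __iter__ is required
    def __iter__(self): ...


class Countdown:
    """Usable with next(), even though it has no __iter__ at all."""

    def __init__(self, n: int) -> None:
        self.n = n

    def __next__(self) -> int:
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n
```

A `Countdown` instance matches `CanNext` but not `CanIter`, which `collections.abc.Iterator` cannot express.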
For the sake of compatibility with `collections.abc`, there is
`optype.CanIterSelf[V]`, which is a protocol whose `__iter__` returns
`typing.Self`, and whose `__next__` method returns `V`.
I.e. it is equivalent to `collections.abc.Iterator[V]`, but without the `abc`
nonsense.

#### Awaitables

The `optype` variant is almost the same as `collections.abc.Awaitable[R]`,
except that `optype.CanAwait[R]` is a pure interface, whereas `Awaitable` is
also an abstract base class (making it absolutely useless when writing stubs).
| expression | method      | type          |
| ---------- | ----------- | ------------- |
| `await _`  | `__await__` | `CanAwait[R]` |
#### Async Iteration
Yes, you guessed it right; the abracadabra collections made the exact same
mistakes for the async iterablors (or was it "iteramblers"...?).

But fret not; the `optype` alternatives are right here:
| expression | function   | function type | method      | operand type                    |
| ---------- | ---------- | ------------- | ----------- | ------------------------------- |
| `anext(_)` | `do_anext` | `DoesANext`   | `__anext__` | `CanANext[V]`                   |
| `aiter(_)` | `do_aiter` | `DoesAIter`   | `__aiter__` | `CanAIter[R: CanANext[object]]` |
But wait, shouldn't `V` be a `CanAwait`? Well, only if you don't want to get
fired...
Technically speaking, `__anext__` can return any type, and `anext` will pass
it along without nagging (instance checks are slow, now stop bothering that
liberal). For details, see the discussion at [python/typeshed#7491][AN].
Just because something is legal, doesn't mean it's a good idea (don't eat the
yellow snow).

Additionally, there is `optype.CanAIterSelf[R]`, with both the
`__aiter__() -> Self` and the `__anext__() -> V` methods.

[AN]: https://github.com/python/typeshed/pull/7491
#### Containers
| expression                   | function         | function type    | method            | operand type                                       |
| ---------------------------- | ---------------- | ---------------- | ----------------- | -------------------------------------------------- |
| `len(_)`                     | `do_len`         | `DoesLen`        | `__len__`         | `CanLen[R: int = int]`                             |
| `_.__length_hint__()` (docs) | `do_length_hint` | `DoesLengthHint` | `__length_hint__` | `CanLengthHint[R: int = int]`                      |
| `_[k]`                       | `do_getitem`     | `DoesGetitem`    | `__getitem__`     | `CanGetitem[K, V]`                                 |
| `_.__missing__()` (docs)     | `do_missing`     | `DoesMissing`    | `__missing__`     | `CanMissing[K, D]`                                 |
| `_[k] = v`                   | `do_setitem`     | `DoesSetitem`    | `__setitem__`     | `CanSetitem[K, V]`                                 |
| `del _[k]`                   | `do_delitem`     | `DoesDelitem`    | `__delitem__`     | `CanDelitem[K]`                                    |
| `k in _`                     | `do_contains`    | `DoesContains`   | `__contains__`    | `CanContains[K = object]`                          |
| `reversed(_)`                | `do_reversed`    | `DoesReversed`   | `__reversed__`    | `CanReversed[R]`, or<br>`CanSequence[I, V, N = int]` |
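A minimal sketch of the container protocols above, using hypothetical, non-generic `typing.Protocol` stand-ins for `optype.CanLen` and `optype.CanGetitem`, and a toy wrap-around sequence that implements only those two methods:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class CanLen(Protocol):
    # stand-in for optype.CanLen: usable with len(_)
    def __len__(self) -> int: ...


@runtime_checkable
class CanGetitem(Protocol):
    # stand-in for optype.CanGetitem: usable as _[k]
    def __getitem__(self, key, /): ...


class Ring:
    """Fixed-size wrap-around sequence: only __len__ and __getitem__."""

    def __init__(self, *items) -> None:
        self.items = items

    def __len__(self) -> int:
        return len(self.items)

    def __getitem__(self, i: int):
        return self.items[i % len(self.items)]
```

Something like `Ring` satisfies both protocols (and hence the spirit of `CanSequence`) without subclassing `collections.abc.Sequence`.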
Because `CanMissing[K, D]` generally doesn't show itself without
`CanGetitem[K, V]` there to hold its hand, `optype` conveniently stitched them
together as `optype.CanGetMissing[K, V, D = V]`.

Similarly, there is `optype.CanSequence[K: CanIndex | slice, V]`, which is the
combination of both `CanLen` and `CanGetitem[I, V]`, and serves as a more
specific and flexible `collections.abc.Sequence[V]`.

#### Attributes
| expression                          | function     | function type | method        | operand type                           |
| ----------------------------------- | ------------ | ------------- | ------------- | -------------------------------------- |
| `v = _.k` or<br>`v = getattr(_, k)` | `do_getattr` | `DoesGetattr` | `__getattr__` | `CanGetattr[K: str = str, V = object]` |
| `_.k = v` or<br>`setattr(_, k, v)`  | `do_setattr` | `DoesSetattr` | `__setattr__` | `CanSetattr[K: str = str, V = object]` |
| `del _.k` or<br>`delattr(_, k)`     | `do_delattr` | `DoesDelattr` | `__delattr__` | `CanDelattr[K: str = str]`             |
| `dir(_)`                            | `do_dir`     | `DoesDir`     | `__dir__`     | `CanDir[R: CanIter[CanIterSelf[str]]]` |
#### Context managers
Support for the `with` statement.
| expression     | method(s)                     | type(s)                                                |
| -------------- | ----------------------------- | ------------------------------------------------------ |
|                | `__enter__`                   | `CanEnter[C]`, or<br>`CanEnterSelf`                     |
|                | `__exit__`                    | `CanExit[R = None]`                                     |
| `with _ as c:` | `__enter__` and<br>`__exit__` | `CanWith[C, R = None]`, or<br>`CanWithSelf[R = None]`   |
`CanEnterSelf` and `CanWithSelf` are (runtime-checkable) aliases for
`CanEnter[Self]` and `CanWith[Self, R]`, respectively.

For the `async with` statement the interfaces look very similar:
| expression           | method(s)                       | type(s)                                                         |
| -------------------- | ------------------------------- | --------------------------------------------------------------- |
|                      | `__aenter__`                    | `CanAEnter[C]`, or<br>`CanAEnterSelf`                            |
|                      | `__aexit__`                     | `CanAExit[R = None]`                                             |
| `async with _ as c:` | `__aenter__` and<br>`__aexit__` | `CanAsyncWith[C, R = None]`, or<br>`CanAsyncWithSelf[R = None]`  |
#### Descriptors
Interfaces for [descriptors](https://docs.python.org/3/howto/descriptor.html).
| expression                       | method         | type                                   |
| -------------------------------- | -------------- | -------------------------------------- |
| `v: V = T().d`<br>`vt: VT = T.d` | `__get__`      | `CanGet[T: object, V, VT = V]`         |
| `T().k = v`                      | `__set__`      | `CanSet[T: object, V]`                 |
| `del T().k`                      | `__delete__`   | `CanDelete[T: object]`                 |
| `class T: d = _`                 | `__set_name__` | `CanSetName[T: object, N: str = str]`  |
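The `__get__`, `__set__`, and `__set_name__` rows above map directly onto a classic data descriptor. Here is a hypothetical `Positive` validator descriptor (the names `Positive` and `Account` are illustrative, not part of `optype`):

```python
class Positive:
    """A data descriptor with __set_name__, __get__, and __set__."""

    def __set_name__(self, owner: type, name: str) -> None:
        # matches CanSetName: called when the class body is executed
        self.name = "_" + name

    def __get__(self, instance, owner=None):
        if instance is None:  # accessed on the class itself: T.d
            return self
        return getattr(instance, self.name)

    def __set__(self, instance, value) -> None:
        # matches CanSet: validate before storing
        if value <= 0:
            raise ValueError("must be positive")
        setattr(instance, self.name, value)


class Account:
    balance = Positive()
```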
#### Buffer types
Interfaces for emulating buffer types using the [buffer protocol][BP].
| expression          | method               | type                      |
| ------------------- | -------------------- | ------------------------- |
| `v = memoryview(_)` | `__buffer__`         | `CanBuffer[T: int = int]` |
| `del v`             | `__release_buffer__` | `CanReleaseBuffer`        |
[BP]: https://docs.python.org/3/reference/datamodel.html#python-buffer-protocol
### `optype.copy`
For the [`copy`][CP] standard library, `optype.copy` provides the following
runtime-checkable interfaces:
| `copy` standard library function            | method                           | `optype.copy` type |
| ------------------------------------------- | -------------------------------- | ------------------ |
| `copy.copy(_) -> R`                         | `__copy__() -> R`                | `CanCopy[R]`       |
| `copy.deepcopy(_, memo={}) -> R`            | `__deepcopy__(memo, /) -> R`     | `CanDeepcopy[R]`   |
| `copy.replace(_, /, **changes: V) -> R` [1] | `__replace__(**changes: V) -> R` | `CanReplace[V, R]` |
[1] *`copy.replace` requires `python>=3.13` (but `optype.copy.CanReplace`
doesn't).*

In practice, it makes sense that a copy of an instance is the same type as the
original.
But because `typing.Self` cannot be used as a type argument, this is difficult
to type properly.
Instead, you can use the `optype.copy.Can{}Self` types, which are the
runtime-checkable equivalents of the following (recursive) type aliases:

```python
type CanCopySelf = CanCopy[CanCopySelf]
type CanDeepcopySelf = CanDeepcopy[CanDeepcopySelf]
type CanReplaceSelf[V] = CanReplace[V, CanReplaceSelf[V]]
```

[CP]: https://docs.python.org/3/library/copy.html
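To make the `__copy__` hook concrete, here is a sketch with a hypothetical `Grid` class and a non-generic `typing.Protocol` stand-in for `optype.copy.CanCopy` (`copy.copy` will prefer `__copy__` when it is defined):

```python
import copy
from typing import Protocol, runtime_checkable


@runtime_checkable
class CanCopy(Protocol):
    # stand-in for optype.copy.CanCopy: implements __copy__
    def __copy__(self): ...


class Grid:
    def __init__(self, cells: list[int]) -> None:
        self.cells = cells

    def __copy__(self) -> "Grid":
        # copy.copy(_) calls this; returning the same type is the usual choice
        return Grid(list(self.cells))
```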
### `optype.dataclasses`
For the [`dataclasses`][DC] standard library, `optype.dataclasses` provides the
`HasDataclassFields[V: Mapping[str, Field]]` interface.
It can conveniently be used to check whether a type or instance is a
dataclass, i.e. `isinstance(obj, HasDataclassFields)`.

[DC]: https://docs.python.org/3/library/dataclasses.html
### `optype.inspect`
A collection of functions for runtime inspection of types, modules, and other
objects.
**`get_args(_)`**

A better alternative to [`typing.get_args()`][GET_ARGS], that

- unpacks `typing.Annotated` and Python 3.12 `type _` alias types
  (i.e. `typing.TypeAliasType`),
- recursively flattens unions and nested `typing.Literal` types, and
- raises a `TypeError` if the passed value isn't a type expression.

Returns a `tuple[type | object, ...]` of type arguments or parameters.
To illustrate one of the (many) issues with `typing.get_args`:
```pycon
>>> from typing import Literal, TypeAlias, get_args
>>> Falsy: TypeAlias = Literal[None] | Literal[False, 0] | Literal["", b""]
>>> get_args(Falsy)
(typing.Literal[None], typing.Literal[False, 0], typing.Literal['', b''])
```

But this is in direct contradiction with the
[official typing documentation][LITERAL-DOCS]:

> When a Literal is parameterized with more than one value, it's treated as
> exactly equivalent to the union of those types.
> That is, `Literal[v1, v2, v3]` is equivalent to
> `Literal[v1] | Literal[v2] | Literal[v3]`.

So this is why `optype.inspect.get_args` should be used instead:
```pycon
>>> import optype as opt
>>> opt.inspect.get_args(Falsy)
(None, False, 0, '', b'')
```

Another issue of `typing.get_args` is with Python 3.12 `type _ = ...` aliases,
which are meant as a replacement for `_: typing.TypeAlias = ...`, and should
therefore be treated equally:

```pycon
>>> import typing
>>> import optype as opt
>>> type StringLike = str | bytes
>>> typing.get_args(StringLike)
()
>>> opt.inspect.get_args(StringLike)
(<class 'str'>, <class 'bytes'>)
```

Clearly, `typing.get_args` fails miserably here; it would have been better
if it had raised an error, but it instead returns an empty tuple,
hiding the fact that it doesn't support the new `type _ = ...` aliases.
But luckily, `optype.inspect.get_args` doesn't have this problem, and treats
it just like it treats `typing.TypeAlias` (and so do the other
`optype.inspect` functions).
**`get_protocol_members(_)`**

A better alternative to [`typing.get_protocol_members()`][PROTO_MEM], that

- doesn't require Python 3.13 or above,
- supports [PEP 695][PEP695] `type _` alias types on Python 3.12 and above,
- unpacks unions of `typing.Literal` ...
- ... and flattens them if nested within another `typing.Literal`,
- treats `typing.Annotated[T]` as `T`, and
- raises a `TypeError` if the passed value isn't a type expression.

Returns a `frozenset[str]` with member names.
**`get_protocols(_)`**

Returns a `frozenset[type]` of the public protocols within the passed module.
Pass `private=True` to also return the private protocols.
**`is_iterable(_)`**

Checks whether the object can be iterated over, i.e. whether it can be used in
a `for` loop, without actually attempting to do so.
If `True` is returned, then the object is an `optype.typing.AnyIterable`
instance.
**`is_final(_)`**

Checks if the type, method / classmethod / staticmethod / property, is
decorated with [`@typing.final`][@FINAL].

Note that a `@property` won't be recognized unless the `@final` decorator is
placed *below* the `@property` decorator.
See the function docstring for more information.
**`is_protocol(_)`**

A backport of [`typing.is_protocol`][IS_PROTO], which was added in Python 3.13;
a re-export of [`typing_extensions.is_protocol`][IS_PROTO_EXT].
**`is_runtime_protocol(_)`**

Checks if the type expression is a *runtime-protocol*, i.e. a
`typing.Protocol` *type* decorated with `@typing.runtime_checkable` (also
supports `typing_extensions`).
**`is_union_type(_)`**

Checks if the type is a [`typing.Union`][UNION] type, e.g. `str | int`.
Unlike `isinstance(_, types.UnionType)`, this function also returns `True` for
unions of user-defined `Generic` or `Protocol` types (because those use a
different union type for some reason).
**`is_generic_alias(_)`**

Checks if the type is a *subscripted* type, e.g. `list[str]` or
`optype.CanNext[int]`, but not `list` or `CanNext`.

Unlike `isinstance(_, types.GenericAlias)`, this function also returns `True`
for user-defined `Generic` or `Protocol` types (because those use a different
generic alias for some reason).

Even though technically `T1 | T2` is represented as `typing.Union[T1, T2]`
(which is a (special) generic alias), `is_generic_alias` returns `False`
for such union types, because calling `T1 | T2` a subscripted type just
doesn't make much sense.
> [!NOTE]
> All functions in `optype.inspect` also work for Python 3.12 `type _` aliases
> (i.e. `types.TypeAliasType`) and with `typing.Annotated`.

[UNION]: https://docs.python.org/3/library/typing.html#typing.Union
[LITERAL-DOCS]: https://typing.readthedocs.io/en/latest/spec/literal.html#shortening-unions-of-literals
[@FINAL]: https://docs.python.org/3/library/typing.html#typing.final
[GET_ARGS]: https://docs.python.org/3/library/typing.html#typing.get_args
[IS_PROTO]: https://docs.python.org/3.13/library/typing.html#typing.is_protocol
[IS_PROTO_EXT]: https://typing-extensions.readthedocs.io/en/latest/#typing_extensions.is_protocol
[PROTO_MEM]: https://docs.python.org/3.13/library/typing.html#typing.get_protocol_members

### `optype.io`
A collection of protocols and type-aliases that, unlike their analogues in `_typeshed`,
are accessible at runtime, and use a consistent naming scheme.
| `optype.io` protocol | implements | replaces |
| --- | --- | --- |
| `CanFSPath[+T: str \| bytes = ...]` | `__fspath__: () -> T` | `os.PathLike[AnyStr]` |
| `CanRead[+T]` | `read: () -> T` | |
| `CanReadN[+T]` | `read: (int) -> T` | `_typeshed.SupportsRead[T]` |
| `CanReadline[+T]` | `readline: () -> T` | `_typeshed.SupportsNoArgReadline[T]` |
| `CanReadlineN[+T]` | `readline: (int) -> T` | `_typeshed.SupportsReadline[T]` |
| `CanWrite[-T, +RT = object]` | `write: (T) -> RT` | `_typeshed.SupportsWrite[T]` |
| `CanFlush[+RT = object]` | `flush: () -> RT` | `_typeshed.SupportsFlush` |
| `CanFileno` | `fileno: () -> int` | `_typeshed.HasFileno` |
| `optype.io` type alias | expression | replaces |
| --- | --- | --- |
| `ToPath[+T: str \| bytes = ...]` | `T \| CanFSPath[T]` | `_typeshed.StrPath`, `_typeshed.BytesPath`, `_typeshed.StrOrBytesPath`, `_typeshed.GenericPath[AnyStr]` |
| `ToFileno` | `int \| CanFileno` | `_typeshed.FileDescriptorLike` |
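Because these protocols are structural, any object with a matching method satisfies them; no inheritance is needed. A minimal sketch (the `optype` import is guarded behind `TYPE_CHECKING`, so the snippet runs even without `optype` installed):

```python
from __future__ import annotations

import io
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from optype.io import CanReadN  # structural: any `read(int) -> T`


def read_prefix(stream: CanReadN[bytes], n: int) -> bytes:
    # Accepts open binary files, io.BytesIO buffers, etc. --
    # anything with a `read(int) -> bytes` method.
    return stream.read(n)


print(read_prefix(io.BytesIO(b"hello world"), 5))  # b'hello'
```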
### `optype.json`
Type aliases for the `json` standard library:
| `json.load(s)` return type | `json.dump(s)` input type |
| --- | --- |
| `Value` | `AnyValue` |
| `Array[V: Value = Value]` | `AnyArray[V: AnyValue = AnyValue]` |
| `Object[V: Value = Value]` | `AnyObject[V: AnyValue = AnyValue]` |
The `(Any)Value` type can be any JSON input, i.e. `Value | Array | Object` is
equivalent to `Value`.
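For example, a function that parses a JSON document and narrows it to a JSON object could be annotated with these aliases (a sketch with a hypothetical `load_object` helper; the `optype` import is deferred to type-checking time, so the snippet runs without the package installed):

```python
from __future__ import annotations

import json
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from optype.json import Object, Value


def load_object(raw: str) -> Object:
    # `json.loads` returns a `Value`; here we additionally require that the
    # top-level value is a JSON object (a `dict` at runtime).
    data: Value = json.loads(raw)
    if not isinstance(data, dict):
        raise TypeError("expected a JSON object")
    return data


print(load_object('{"answer": 42}'))  # {'answer': 42}
```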
It's also worth noting that `Value` is a subtype of `AnyValue`, which means
that `AnyValue | Value` is equivalent to `AnyValue`.

### `optype.pickle`

For the [`pickle`][PK] standard library, `optype.pickle` provides the following
interfaces:

[PK]: https://docs.python.org/3/library/pickle.html
| method(s) | signature (bound) | type |
| --- | --- | --- |
| `__reduce__` | `() -> R` | `CanReduce[R: str \| tuple = ...]` |
| `__reduce_ex__` | `(CanIndex) -> R` | `CanReduceEx[R: str \| tuple = ...]` |
| `__getstate__` | `() -> S` | `CanGetstate[S]` |
| `__setstate__` | `(S) -> None` | `CanSetstate[S]` |
| `__getnewargs__`<br>`__new__` | `() -> tuple[V, ...]`<br>`(*tuple[V, ...]) -> Self` | `CanGetnewargs[V]` |
| `__getnewargs_ex__`<br>`__new__` | `() -> tuple[tuple[V, ...], dict[str, KV]]`<br>`(*tuple[V, ...], **dict[str, KV]) -> Self` | `CanGetnewargsEx[V, KV]` |
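For example, a class with custom pickling state matches `CanGetstate` and `CanSetstate` structurally (a sketch; the `optype` import is type-checking-only, so this runs without the package installed):

```python
from __future__ import annotations

import pickle
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from optype.pickle import CanGetstate, CanSetstate


class Point:
    def __init__(self, x: float, y: float) -> None:
        self.x, self.y = x, y

    # satisfies `CanGetstate[tuple[float, float]]`
    def __getstate__(self) -> tuple[float, float]:
        return self.x, self.y

    # satisfies `CanSetstate[tuple[float, float]]`
    def __setstate__(self, state: tuple[float, float]) -> None:
        self.x, self.y = state


p = pickle.loads(pickle.dumps(Point(1.0, 2.0)))
print(p.x, p.y)  # 1.0 2.0
```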
### `optype.string`
The [`string`](https://docs.python.org/3/library/string.html) standard
library contains practical constants, but it has two issues:

- The constants contain a collection of characters, but are represented as a
  single string. This makes it practically impossible to type-hint the
  individual characters, so typeshed currently types these constants as
  `LiteralString`.
- The names of the constants are inconsistent, and don't follow
  [PEP 8](https://peps.python.org/pep-0008/#constants).

So instead, `optype.string` provides an alternative interface that is
compatible with `string`, but with slight differences:

- For each constant, there is a corresponding `Literal` type alias for
the *individual* characters. Its name matches the name of the constant,
but is singular instead of plural.
- Instead of a single string, `optype.string` uses a `tuple` of characters,
  so that each character has its own `typing.Literal` annotation.
  Note that this is only tested with (based)pyright / pylance, so it might
  not work with mypy (which has more bugs than it has lines of code).
- The names of the constants are consistent with PEP 8, and use a postfix
  notation for variants, e.g. `DIGITS_HEX` instead of `hexdigits`.
- Unlike `string`, `optype.string` has a constant (and type alias) for the
  binary digits `'0'` and `'1'`: `DIGITS_BIN` (and `DigitBin`). After all,
  besides the `oct` and `hex` functions, `builtins` also has a `bin` function.
| `string._` constant | char type | `optype.string._` constant | char type |
| --- | --- | --- | --- |
| *missing* | | `DIGITS_BIN` | `DigitBin` |
| `octdigits` | `LiteralString` | `DIGITS_OCT` | `DigitOct` |
| `digits` | `LiteralString` | `DIGITS` | `Digit` |
| `hexdigits` | `LiteralString` | `DIGITS_HEX` | `DigitHex` |
| `ascii_letters` | `LiteralString` | `LETTERS` | `Letter` |
| `ascii_lowercase` | `LiteralString` | `LETTERS_LOWER` | `LetterLower` |
| `ascii_uppercase` | `LiteralString` | `LETTERS_UPPER` | `LetterUpper` |
| `punctuation` | `LiteralString` | `PUNCTUATION` | `Punctuation` |
| `whitespace` | `LiteralString` | `WHITESPACE` | `Whitespace` |
| `printable` | `LiteralString` | `PRINTABLE` | `Printable` |
Each of the `optype.string` constants is exactly the same as the corresponding
`string` constant (after concatenation / splitting), e.g.

```pycon
>>> import string
>>> import optype as opt
>>> "".join(opt.string.PRINTABLE) == string.printable
True
>>> tuple(string.printable) == opt.string.PRINTABLE
True
```

Similarly, the values within a constant's `Literal` type exactly match the
values of its constant:

```pycon
>>> import optype as opt
>>> from optype.inspect import get_args
>>> get_args(opt.string.Printable) == opt.string.PRINTABLE
True
```

The `optype.inspect.get_args` function is a non-broken variant of
`typing.get_args` that correctly flattens nested literals, type unions, and
PEP 695 type aliases, so that it matches the official typing specs.
*In other words: `typing.get_args` is yet another fundamentally broken
python-typing feature that's useless in the situations where you need it
most.*

### `optype.typing`
#### `Any*` type aliases
Type aliases for anything that can *always* be passed to
`int`, `float`, `complex`, `iter`, or `typing.Literal`:

| Python constructor | `optype.typing` alias |
| --- | --- |
| `int(_)` | `AnyInt` |
| `float(_)` | `AnyFloat` |
| `complex(_)` | `AnyComplex` |
| `iter(_)` | `AnyIterable` |
| `typing.Literal[_]` | `AnyLiteral` |
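As an illustration, `iter()` (and therefore the `iter(_)` row above) also accepts objects that only implement the legacy `__getitem__`-based iteration protocol, which plain `Iterable` does not. A sketch, assuming `AnyIterable` is subscriptable with the element type (the `optype` import is type-checking-only, so this runs without the package installed):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from optype.typing import AnyIterable


def first(xs: AnyIterable[int]) -> int:
    # `iter()` accepts both `__iter__` objects and `__getitem__` sequences
    return next(iter(xs))


class Legacy:
    # old-style iteration: indexed from 0 until IndexError is raised
    def __getitem__(self, i: int) -> int:
        if i < 3:
            return i * 10
        raise IndexError(i)


print(first([7, 8]), first(Legacy()))  # 7 0
```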
> [!NOTE]
> Even though *some* `str` and `bytes` can be converted to `int`, `float`,
> `complex`, most of them can't, and are therefore not included in these
> type aliases.

#### `Empty*` type aliases
These are builtin types or collections that are empty, i.e. have length 0 or
yield no elements.
| instance | `optype.typing` type |
| --- | --- |
| `''` | `EmptyString` |
| `b''` | `EmptyBytes` |
| `()` | `EmptyTuple` |
| `[]` | `EmptyList` |
| `{}` | `EmptyDict` |
| `set()` | `EmptySet` |
| `(i for i in range(0))` | `EmptyIterable` |
#### Literal types
| Literal values | `optype.typing` type | Notes |
| --- | --- | --- |
| `{False, True}` | `LiteralBool` | Similar to `typing.LiteralString`, but for `bool`. |
| `{0, 1, ..., 255}` | `LiteralByte` | Integers in the range 0-255, which make up `bytes` and `bytearray` objects. |
### `optype.dlpack`
A collection of low-level types for working with [DLPack][DOC-DLPACK].
#### Protocols

`CanDLPack` (type signature, with the `__dlpack__` method it binds):

```plain
CanDLPack[
    +T = int,
    +D: int = int,
]
```

```python
def __dlpack__(
    *,
    stream: int | None = ...,
    max_version: tuple[int, int] | None = ...,
    dl_device: tuple[T, D] | None = ...,
    copy: bool | None = ...,
) -> types.CapsuleType: ...
```

`CanDLPackDevice` (type signature and bound method):

```plain
CanDLPackDevice[
    +T = int,
    +D: int = int,
]
```

```python
def __dlpack_device__() -> tuple[T, D]: ...
```
The `+` prefix indicates that the type parameter is *co*variant.
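For instance, a minimal CPU "tensor" can satisfy `CanDLPackDevice` through duck typing alone. A sketch (the device code `1` is DLPack's `kDLCPU`; the `DLDeviceType` enum below is a hand-rolled stand-in for the one in `optype.dlpack`):

```python
import enum


class DLDeviceType(enum.IntEnum):
    # a small subset of DLPack's device type codes
    CPU = 1   # kDLCPU
    CUDA = 2  # kDLCUDA


class CPUTensor:
    # duck-typed `CanDLPackDevice[int, int]`: only `__dlpack_device__` needed
    def __dlpack_device__(self) -> tuple[int, int]:
        # (device type, device id)
        return int(DLDeviceType.CPU), 0


print(CPUTensor().__dlpack_device__())  # (1, 0)
```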
#### Enums
There are also two convenient
[`IntEnum`](https://docs.python.org/3/library/enum.html#enum.IntEnum)s
in `optype.dlpack`: `DLDeviceType` for the device types, and `DLDataTypeCode` for the
internal type-codes of the `DLPack` data types.

### `optype.numpy`
Optype supports both NumPy 1 and 2.
The current minimum supported version is `1.24`,
following [NEP 29][NEP29] and [SPEC 0][SPEC0].

When using `optype.numpy`, it is recommended to install `optype` with the
`numpy` extra, ensuring version compatibility:

```shell
pip install "optype[numpy]"
```

> [!NOTE]
> For the remainder of the `optype.numpy` docs, assume that the following
> import aliases are available.
>
> ```python
> from typing import Any, Literal
> import numpy as np
> import numpy.typing as npt
> import optype.numpy as onp
> ```
>
> For the sake of brevity and readability, the [PEP 695][PEP695] and
> [PEP 696][PEP696] type parameter syntax will be used, which is supported
> since Python 3.13.

#### Shape-typing
##### Array aliases
Optype provides the generic `onp.Array` type alias for `np.ndarray`.
It is similar to `npt.NDArray`, but includes two (optional) type parameters:
one that matches the *shape type* (`ND: tuple[int, ...]`),
and one that matches the *scalar type* (`ST: np.generic`).

When we put the definitions of `npt.NDArray` and `onp.Array` side-by-side,
their differences become clear:

`numpy.typing.NDArray`[^1]:

```python
type NDArray[
    # no shape type
    SCT: generic,  # no default
] = ndarray[Any, dtype[SCT]]
```

`optype.numpy.Array`:

```python
type Array[
    NDT: (int, ...) = (int, ...),
    SCT: generic = generic,
] = ndarray[NDT, dtype[SCT]]
```

`optype.numpy.ArrayND`:

```python
type ArrayND[
    SCT: generic = generic,
    NDT: (int, ...) = (int, ...),
] = ndarray[NDT, dtype[SCT]]
```

Additionally, there are the four `Array{0,1,2,3}D` aliases, which are
equivalent to `Array` with `tuple[()]`, `tuple[int]`, `tuple[int, int]`, and
`tuple[int, int, int]` as shape-type, respectively.

[^1]: Since `numpy>=2.2` the `NDArray` alias uses `tuple[int, ...]` as
    shape-type instead of `Any`.

> [!TIP]
> Before NumPy 2.1, the shape type parameter of `ndarray` (i.e. the type of
> `ndarray.shape`) was invariant. It is therefore recommended to not use `Literal`
> within shape types on `numpy<2.1`. So with `numpy>=2.1` you can use
> `tuple[Literal[3], Literal[3]]` without problem, but with `numpy<2.1` you should use
> `tuple[int, int]` instead.
>
> See [numpy/numpy#25729](https://github.com/numpy/numpy/issues/25729) and
> [numpy/numpy#26081](https://github.com/numpy/numpy/pull/26081) for details.

In the same way as `ArrayND` for `ndarray` (shown for reference), its subtypes
`np.ma.MaskedArray` and `np.matrix` are also aliased:

`ArrayND` (`np.ndarray`):

```python
type ArrayND[
    SCT: generic = generic,
    NDT: (int, ...) = (int, ...),
] = ndarray[NDT, dtype[SCT]]
```

`MArray` (`np.ma.MaskedArray`):

```python
type MArray[
    SCT: generic = generic,
    NDT: (int, ...) = (int, ...),
] = ma.MaskedArray[NDT, dtype[SCT]]
```

`Matrix` (`np.matrix`):

```python
type Matrix[
    SCT: generic = generic,
    M: int = int,
    N: int = M,
] = matrix[(M, N), dtype[SCT]]
```

For masked arrays with a specific `ndim`, you could also use one of the four
`MArray{0,1,2,3}D` aliases.

##### Array typeguards
static type-checkers also understand it, the following [PEP 742][PEP742] typeguards can
be used:
| typeguard | narrows to (`optype.numpy._`) | shape type (`builtins._`) |
| --- | --- | --- |
| `is_array_nd` | `ArrayND[ST]` | `tuple[int, ...]` |
| `is_array_0d` | `Array0D[ST]` | `tuple[()]` |
| `is_array_1d` | `Array1D[ST]` | `tuple[int]` |
| `is_array_2d` | `Array2D[ST]` | `tuple[int, int]` |
| `is_array_3d` | `Array3D[ST]` | `tuple[int, int, int]` |
These functions additionally accept an optional `dtype` argument, which can be
either a `np.dtype[ST]` instance, a `type[ST]`, or something with a
`dtype: np.dtype[ST]` attribute.
The signatures are almost identical to each other, and in the `0d` case it roughly
looks like this:

```py
ST = TypeVar("ST", bound=np.generic, default=np.generic)
_ToDType: TypeAlias = type[ST] | np.dtype[ST] | HasDType[np.dtype[ST]]

def is_array_0d(a, /, dtype: _ToDType[ST] | None = None) -> TypeIs[Array0D[ST]]: ...
```

##### Shape aliases
A *shape* is nothing more than a tuple of (non-negative) integers, i.e.
an instance of `tuple[int, ...]` such as `(42,)`, `(480, 720, 3)` or `()`.
The length of a shape is often referred to as the *number of dimensions*
or the *dimensionality* of the array or scalar.
For arrays this is accessible through `np.ndarray.ndim`, which is an alias
for `len(np.ndarray.shape)`.

> [!NOTE]
> Before NumPy 2, the maximum number of dimensions was `32`, but it has since
> been increased to `ndim <= 64`.

To make typing the shape of an array easier, optype provides two families of
shape type aliases: `AtLeast{N}D` and `AtMost{N}D`.
The `{N}` should be replaced by the number of dimensions, which currently
is limited to `0`, `1`, `2`, and `3`.

Both of these families are generic, and their (optional) type parameters must
be either `int` (the default) or a literal (non-negative) integer, i.e. like
`typing.Literal[N: int]`.

The names `AtLeast{N}D` and `AtMost{N}D` are pretty much self-explanatory:

- `AtLeast{N}D` is a `tuple[int, ...]` with `ndim >= N`
- `AtMost{N}D` is a `tuple[int, ...]` with `ndim <= N`

The shape aliases are roughly defined as:
`AtLeast{N}D`:

```python
type AtLeast0D[Ds: int = int] = tuple[Ds, ...]
type AtLeast1D[D0: int = int, Ds: int = int] = tuple[D0, *tuple[Ds, ...]]
type AtLeast2D[D0: int = int, D1: int = int, Ds: int = int] = (
    tuple[D0, D1, *tuple[Ds, ...]]
)
type AtLeast3D[D0: int = int, D1: int = int, D2: int = int, Ds: int = int] = (
    tuple[D0, D1, D2, *tuple[Ds, ...]]
)
```

`AtMost{N}D`:

```python
type AtMost0D = tuple[()]
type AtMost1D[D0: int = int] = tuple[D0] | AtMost0D
type AtMost2D[D0: int = int, D1: int = int] = tuple[D0, D1] | AtMost1D[D0]
type AtMost3D[D0: int = int, D1: int = int, D2: int = int] = (
    tuple[D0, D1, D2] | AtMost2D[D0, D1]
)
```

#### Array-likes
Similar to the `numpy._typing._ArrayLike{}_co` *coercible array-like* types,
`optype.numpy` provides the `To{}ND` aliases. Unlike the ones in `numpy`,
these don't accept "bare" scalar types (a `__len__` method is required).
Additionally, there are the `To{}1D`, `To{}2D`, and `To{}3D` aliases for
vector-likes, matrix-likes, and cuboid-likes, as well as the `To{}` aliases
for "bare" scalar types.
| exact `builtins` types | exact `numpy` types | scalar-like | `{1,2,3,N}`-d array-like | strict `{1,2,3}`-d array-like |
| --- | --- | --- | --- | --- |
| `False` | `False_` | `ToJustFalse` | | |
| `False \| 0` | `False_` | `ToFalse` | | |
| `True` | `True_` | `ToJustTrue` | | |
| `True \| 1` | `True_` | `ToTrue` | | |
| `bool` | `bool_` | `ToJustBool` | `ToJustBool{}D` | `ToJustBoolStrict{}D` |
| `bool \| 0 \| 1` | `bool_` | `ToBool` | `ToBool{}D` | `ToBoolStrict{}D` |
| `~int` | `integer` | `ToJustInt` | `ToJustInt{}D` | `ToJustIntStrict{}D` |
| `int \| bool` | `integer \| bool_` | `ToInt` | `ToInt{}D` | `ToIntStrict{}D` |
| `~float` | `float64` | `ToJustFloat64` | `ToJustFloat64_{}D` | `ToJustFloat64Strict{}D` |
| `float \| int \| bool` | `float64 \| float32 \| float16 \| integer \| bool_` | `ToFloat64` | `ToFloat64_{}D` | `ToFloat64Strict{}D` |
| `~float` | `floating` | `ToJustFloat` | `ToJustFloat{}D` | `ToJustFloatStrict{}D` |
| `float \| int \| bool` | `floating \| integer \| bool_` | `ToFloat` | `ToFloat{}D` | `ToFloatStrict{}D` |
| `~complex` | `complex128` | `ToJustComplex128` | `ToJustComplex128_{}D` | `ToJustComplex128Strict{}D` |
| `complex \| float \| int \| bool` | `complex128 \| complex64 \| float64 \| float32 \| float16 \| integer \| bool_` | `ToComplex128` | `ToComplex128_{}D` | `ToComplex128Strict{}D` |
| `~complex` | `complexfloating` | `ToJustComplex` | `ToJustComplex{}D` | `ToJustComplexStrict{}D` |
| `complex \| float \| int \| bool` | `number \| bool_` | `ToComplex` | `ToComplex{}D` | `ToComplexStrict{}D` |
| `bytes \| str \| complex \| float \| int \| bool` | `generic` | `ToScalar` | `ToArray{}D` | `ToArrayStrict{}D` |
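A minimal sketch of how these aliases are used in practice (numpy is needed at runtime, while `optype` itself is imported only for type checking, so this runs without `optype` installed):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

import numpy as np

if TYPE_CHECKING:
    import optype.numpy as onp


def mean2d(x: onp.ToFloat2D) -> float:
    # accepts nested sequences like [[1, 2.5], [3, 4]] and 2-D float arrays,
    # but (statically) rejects bare scalars and 1-D input
    return float(np.mean(np.asarray(x)))


print(mean2d([[1.0, 2.0], [3.0, 4.0]]))  # 2.5
```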
> [!NOTE]
> The `To*Strict{1,2,3}D` aliases were added in `optype 0.7.3`.
>
> These array-likes with a *strict shape-type* require their input to be
> shape-typed.
> This means that e.g. `ToFloat1D` and `ToFloat2D` are disjoint
> (non-overlapping), which makes them suitable for overloading array-likes of
> a particular dtype for different numbers of dimensions.

> [!NOTE]
> The `ToJust{Bool,Float,Complex}*` type aliases were added in `optype 0.8.0`.
>
> See [`optype.Just`](#just) for more information.

> [!NOTE]
> The `To[Just]{False,True}` type aliases were added in `optype 0.9.1`.
>
> These only include the `np.bool` types on `numpy>=2.2`. Before that,
> `np.bool` wasn't generic, making it impossible to distinguish between
> `np.False_` and `np.True_` using static typing.

> [!NOTE]
> The `ToArrayStrict{1,2,3}D` types are generic since `optype 0.9.1`,
> analogous to their non-strict dual types, `ToArray{1,2,3}D`.

Source code: [`optype/numpy/_to.py`][CODE-NP-TO]

#### Literals
| Type Alias | String values |
| --------------- | ------------------------------------------------------------------ |
| `ByteOrder` | `ByteOrderChar \| ByteOrderName \| {L, B, N, I, S}` |
| `ByteOrderChar` | `{<, >, =, \|}` |
| `ByteOrderName` | `{little, big, native, ignore, swap}` |
| `Casting` | `CastingUnsafe \| CastingSafe` |
| `CastingUnsafe` | `{unsafe}` |
| `CastingSafe` | `{no, equiv, safe, same_kind}` |
| `ConvolveMode` | `{full, same, valid}` |
| `Device` | `{cpu}` |
| `IndexMode` | `{raise, wrap, clip}` |
| `OrderCF` | `{C, F}` |
| `OrderACF` | `{A, C, F}` |
| `OrderKACF` | `{K, A, C, F}` |
| `PartitionKind` | `{introselect}` |
| `SortKind` | `{Q, quick[sort], M, merge[sort], H, heap[sort], S, stable[sort]}` |
| `SortSide` | `{left, right}` |

#### `compat` submodule
Compatibility module for supporting a wide range of numpy versions (currently
`1.23` - `2.2`). It contains two kinds of things:

- All [`numpy.exceptions`][NP-EXC], which don't exist on `numpy<1.25`, making
  them very difficult to use if you need to support those versions, especially
  within stubs.
- The abstract numeric scalar types, with the `numpy>=2.2` type-parameter
  defaults, which I explained in the [release notes][NP-REL22].

[NP-EXC]: https://numpy.org/doc/stable/reference/routines.exceptions.html
[NP-REL22]: https://numpy.org/doc/stable/release/2.2.0-notes.html#new-features

#### `random` submodule
[SPEC 7](https://scientific-python.org/specs/spec-0007/)-compatible type
aliases. The `optype.numpy.random` module provides three of them: `RNG`,
`ToRNG`, and `ToSeed`.

In general, the most useful one is `ToRNG`, which describes what can be
passed to `numpy.random.default_rng`. It is defined as the union of `RNG`,
`ToSeed`, and `numpy.random.BitGenerator`.

`RNG` is the union of `numpy.random.Generator` and its legacy counterpart,
`numpy.random.RandomState`.

`ToSeed` accepts integer-like scalars, sequences, and arrays, as well as
instances of `numpy.random.SeedSequence`.

#### `DType`
In NumPy, a *dtype* (data type) object is an instance of the
`numpy.dtype[ST: np.generic]` type.
It's commonly used to convey metadata of a scalar type, e.g. within arrays.

Because the type parameter of `np.dtype` isn't optional, it can be more
convenient to use the alias `optype.numpy.DType`, which is defined as:

```python
type DType[ST: np.generic = np.generic] = np.dtype[ST]
```

Apart from the "CamelCase" name, the only difference with `np.dtype` is that
the type parameter can be omitted, in which case it's equivalent to
`np.dtype[np.generic]`, but shorter.

#### `Scalar`
The `optype.numpy.Scalar` interface is a generic runtime-checkable protocol,
that can be seen as a "more specific" `np.generic`, both in name, and from
a typing perspective.

Its type signature looks roughly like this:

```python
type Scalar[
# The "Python type", so that `Scalar.item() -> PT`.
PT: object,
# The "N-bits" type (without having to deal with `npt.NBitBase`).
# It matches the `itemsize: NB` property.
NB: int = int,
] = ...
```

It can be used as e.g.

```python
are_birds_real: Scalar[bool, Literal[1]] = np.bool_(True)
the_answer: Scalar[int, Literal[2]] = np.uint16(42)
alpha: Scalar[float, Literal[8]] = np.float64(1 / 137)
```

> [!NOTE]
> The second type argument for `itemsize` can be omitted, which is equivalent
> to setting it to `int`, so `Scalar[PT]` and `Scalar[PT, int]` are equivalent.

#### `UFunc`
A large portion of numpy's public API consists of *universal functions*, often
denoted as [ufuncs][DOC-UFUNC], which are (callable) instances of
[`np.ufunc`][REF_UFUNC].

> [!TIP]
> Custom ufuncs can be created using [`np.frompyfunc`][REF_FROMPY], but also
> through a user-defined class that implements the required attributes and
> methods (i.e., duck typing).
But `np.ufunc` has a big issue: it accepts no type parameters.
This makes it very difficult to properly annotate its callable signature and
its literal attributes (e.g. `.nin` and `.identity`).

This is where `optype.numpy.UFunc` comes into play:
It's a runtime-checkable generic typing protocol, that has been thoroughly
type- and unit-tested to ensure compatibility with all of numpy's ufunc
definitions.
Its generic type signature looks roughly like:

```python
type UFunc[
# The type of the (bound) `__call__` method.
Fn: CanCall = CanCall,
# The types of the `nin` and `nout` (readonly) attributes.
# Within numpy these match either `Literal[1]` or `Literal[2]`.
Nin: int = int,
Nout: int = int,
# The type of the `signature` (readonly) attribute;
# Must be `None` unless this is a generalized ufunc (gufunc), e.g.
# `np.matmul`.
Sig: str | None = str | None,
# The type of the `identity` (readonly) attribute (used in `.reduce`).
# Unless `Nin: Literal[2]`, `Nout: Literal[1]`, and `Sig: None`,
# this should always be `None`.
# Note that `complex` also includes `bool | int | float`.
Id: complex | bytes | str | None = float | None,
] = ...
```

> [!NOTE]
> Unfortunately, the extra callable methods of `np.ufunc` (`at`, `reduce`,
> `reduceat`, `accumulate`, and `outer`), are incorrectly annotated (as `None`
> *attributes*, even though at runtime they're methods that raise a
> `ValueError` when called).
> This currently makes it impossible to properly type these in
> `optype.numpy.UFunc`; doing so would make it incompatible with numpy's
> ufuncs.

#### `Any*Array` and `Any*DType`
The `Any{Scalar}Array` type aliases describe array-likes that are coercible
to a `numpy.ndarray` with a specific [dtype][REF-DTYPES].

Unlike `numpy.typing.ArrayLike`, these `optype.numpy` aliases **don't**
accept "bare" scalar types such as `float` and `np.float64`. However,
"zero-dimensional" arrays like `onp.Array[tuple[()], np.float64]` are
accepted.
This is in line with the behavior of [`numpy.isscalar`][REF-ISSCALAR] on
`numpy >= 2`.

```py
import numpy.typing as npt
import optype.numpy as onp

v_np: npt.ArrayLike = 3.14  # accepted
v_op: onp.AnyArray = 3.14  # rejected

sigma1_np: npt.ArrayLike = [[0, 1], [1, 0]]  # accepted
sigma1_op: onp.AnyArray = [[0, 1], [1, 0]]  # accepted
```

> [!NOTE]
> The [`numpy.dtypes`][REF-DTYPES] module exists since NumPy 1.25, but its
> type annotations were incorrect before NumPy 2.1 (see
> [numpy/numpy#27008](https://github.com/numpy/numpy/pull/27008)).

See the [docs][REF-SCT] for more info on the NumPy scalar type hierarchy.
[REF-SCT]: https://numpy.org/doc/stable/reference/arrays.scalars.html
[REF-DTYPES]: https://numpy.org/doc/stable/reference/arrays.dtypes.html
[REF-ISSCALAR]: https://numpy.org/doc/stable/reference/generated/numpy.isscalar.html

##### Abstract types
| `numpy._` scalar | scalar base | `optype.numpy._` array-like | dtype-like |
| --- | --- | --- | --- |
| `generic` | | `AnyArray` | `AnyDType` |
| `number` | `generic` | `AnyNumberArray` | `AnyNumberDType` |
| `integer` | `number` | `AnyIntegerArray` | `AnyIntegerDType` |
| `inexact` | `number` | `AnyInexactArray` | `AnyInexactDType` |
| `unsignedinteger` | `integer` | `AnyUnsignedIntegerArray` | `AnyUnsignedIntegerDType` |
| `signedinteger` | `integer` | `AnySignedIntegerArray` | `AnySignedIntegerDType` |
| `floating` | `inexact` | `AnyFloatingArray` | `AnyFloatingDType` |
| `complexfloating` | `inexact` | `AnyComplexFloatingArray` | `AnyComplexFloatingDType` |
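For example, a function that accepts any floating-point array-like could be annotated as follows (a sketch, assuming `AnyFloatingArray` covers nested float sequences; `optype` is imported only for type checking, so this runs without it installed):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

import numpy as np

if TYPE_CHECKING:
    import optype.numpy as onp


def as_float_array(x: onp.AnyFloatingArray) -> onp.ArrayND[np.floating]:
    # accepts float array-likes such as [[1.0], [2.0]], but statically
    # rejects bare scalars like `3.14`
    return np.asarray(x)


print(as_float_array([[1.0], [2.0]]).dtype)  # float64
```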
##### Unsigned integers
| `numpy._` scalar | scalar base | `numpy.dtypes._` dtype | `optype.numpy._` array-like | dtype-like |
| --- | --- | --- | --- | --- |
| `uint_`[^5] | `unsignedinteger` | | `AnyUIntArray` | `AnyUIntDType` |
| `uintp` | `unsignedinteger` | | `AnyUIntPArray` | `AnyUIntPDType` |
| `uint8`, `ubyte` | `unsignedinteger` | `UInt8DType` | `AnyUInt8Array` | `AnyUInt8DType` |
| `uint16`, `ushort` | `unsignedinteger` | `UInt16DType` | `AnyUInt16Array` | `AnyUInt16DType` |
| `uint32`[^6] | `unsignedinteger` | `UInt32DType` | `AnyUInt32Array` | `AnyUInt32DType` |
| `uint64` | `unsignedinteger` | `UInt64DType` | `AnyUInt64Array` | `AnyUInt64DType` |
| `uintc`[^6] | `unsignedinteger` | `UIntDType` | `AnyUIntCArray` | `AnyUIntCDType` |
| `ulong`[^7] | `unsignedinteger` | `ULongDType` | `AnyULongArray` | `AnyULongDType` |
| `ulonglong` | `unsignedinteger` | `ULongLongDType` | `AnyULongLongArray` | `AnyULongLongDType` |
##### Signed integers
| `numpy._` scalar | scalar base | `numpy.dtypes._` dtype | `optype.numpy._` array-like | dtype-like |
| --- | --- | --- | --- | --- |
| `int_`[^5] | `signedinteger` | | `AnyIntArray` | `AnyIntDType` |
| `intp` | `signedinteger` | | `AnyIntPArray` | `AnyIntPDType` |
| `int8`, `byte` | `signedinteger` | `Int8DType` | `AnyInt8Array` | `AnyInt8DType` |
| `int16`, `short` | `signedinteger` | `Int16DType` | `AnyInt16Array` | `AnyInt16DType` |
| `int32`[^6] | `signedinteger` | `Int32DType` | `AnyInt32Array` | `AnyInt32DType` |
| `int64` | `signedinteger` | `Int64DType` | `AnyInt64Array` | `AnyInt64DType` |
| `intc`[^6] | `signedinteger` | `IntDType` | `AnyIntCArray` | `AnyIntCDType` |
| `long`[^7] | `signedinteger` | `LongDType` | `AnyLongArray` | `AnyLongDType` |
| `longlong` | `signedinteger` | `LongLongDType` | `AnyLongLongArray` | `AnyLongLongDType` |
[^5]: Since NumPy 2, `np.uint` and `np.int_` are aliases for `np.uintp` and `np.intp`, respectively.
[^6]: On unix-based platforms `np.[u]intc` are aliases for `np.[u]int32`.
[^7]: On NumPy 1, `np.uint` and `np.int_` correspond to what NumPy 2 calls `np.ulong` and `np.long`, respectively.

##### Real floats
| `numpy._` scalar | scalar base | `numpy.dtypes._` dtype | `optype.numpy._` array-like | dtype-like |
| --- | --- | --- | --- | --- |
| `float16`, `half` | `np.floating` | `Float16DType` | `AnyFloat16Array` | `AnyFloat16DType` |
| `float32`, `single` | `np.floating` | `Float32DType` | `AnyFloat32Array` | `AnyFloat32DType` |
| `float64`, `double` | `np.floating & builtins.float` | `Float64DType` | `AnyFloat64Array` | `AnyFloat64DType` |
| `longdouble`[^13] | `np.floating` | `LongDoubleDType` | `AnyLongDoubleArray` | `AnyLongDoubleDType` |
[^13]: Depending on the platform, `np.longdouble` is (almost always) an alias for **either** `float128`,
    `float96`, or (sometimes) `float64`.

##### Complex floats
| `numpy._` scalar | scalar base | `numpy.dtypes._` dtype | `optype.numpy._` array-like | dtype-like |
| --- | --- | --- | --- | --- |
| `complex64`, `csingle` | `complexfloating` | `Complex64DType` | `AnyComplex64Array` | `AnyComplex64DType` |
| `complex128`, `cdouble` | `complexfloating & builtins.complex` | `Complex128DType` | `AnyComplex128Array` | `AnyComplex128DType` |
| `clongdouble`[^16] | `complexfloating` | `CLongDoubleDType` | `AnyCLongDoubleArray` | `AnyCLongDoubleDType` |
[^16]: Depending on the platform, `np.clongdouble` is (almost always) an alias for **either** `complex256`,
    `complex192`, or (sometimes) `complex128`.

##### "Flexible"
Scalar types with "flexible" length, whose values have a (constant) length
that depends on the specific `np.dtype` instantiation.
| `numpy._` scalar | scalar base | `numpy.dtypes._` dtype | `optype.numpy._` array-like | dtype-like |
| --- | --- | --- | --- | --- |
| `str_` | `character` | `StrDType` | `AnyStrArray` | `AnyStrDType` |
| `bytes_` | `character` | `BytesDType` | `AnyBytesArray` | `AnyBytesDType` |
| `dtype("c")` | | | | `AnyBytes8DType` |
| `void` | `flexible` | `VoidDType` | `AnyVoidArray` | `AnyVoidDType` |
##### Other types
| `numpy._` scalar | scalar base | `numpy.dtypes._` dtype | `optype.numpy._` array-like | dtype-like |
| --- | --- | --- | --- | --- |
| `bool_`[^0] | `generic` | `BoolDType` | `AnyBoolArray` | `AnyBoolDType` |
| `object_` | `generic` | `ObjectDType` | `AnyObjectArray` | `AnyObjectDType` |
| `datetime64` | `generic` | `DateTime64DType` | `AnyDateTime64Array` | `AnyDateTime64DType` |
| `timedelta64` | *`generic`*[^22] | `TimeDelta64DType` | `AnyTimeDelta64Array` | `AnyTimeDelta64DType` |
| [^2056] | | `StringDType` | `AnyStringArray` | `AnyStringDType` |
[^0]: Since NumPy 2, `np.bool` is preferred over `np.bool_`, which only exists for backwards compatibility.
[^22]: At runtime `np.timedelta64` is a subclass of `np.signedinteger`, but this is currently not
    reflected in the type annotations.
[^2056]: The `np.dtypes.StringDType` has no associated numpy scalar type, and its `.type` attribute returns
    the `builtins.str` type instead. But from a typing perspective, such a `np.dtype[builtins.str]` isn't a
    valid type.

#### Low-level interfaces
Within `optype.numpy` there are several `Can*` (single-method) and `Has*`
(single-attribute) protocols, related to the `__array_*__` dunders of the
NumPy Python API.
These typing protocols are, just like the `optype.Can*` and `optype.Has*` ones,
runtime-checkable and extensible (i.e. not `@final`).

> [!TIP]
> All type parameters of these protocols can be omitted, which is equivalent
> to passing its upper type bound.
`CanArray` — [User Guide: Interoperability with NumPy][DOC-ARRAY]:

```python
class CanArray[
    ND: tuple[int, ...] = ...,
    ST: np.generic = ...,
]: ...
```

```python
def __array__[RT = ST](
    _,
    dtype: DType[RT] | None = ...,
) -> Array[ND, RT]
```

`CanArrayUFunc` — [NEP 13][NEP13]:

```python
class CanArrayUFunc[
    U: UFunc = ...,
    R: object = ...,
]: ...
```

```python
def __array_ufunc__(
    _,
    ufunc: U,
    method: LiteralString,
    *args: object,
    **kwargs: object,
) -> R: ...
```

`CanArrayFunction` — [NEP 18][NEP18]:

```python
class CanArrayFunction[
    F: CanCall[..., object] = ...,
    R = object,
]: ...
```

```python
def __array_function__(
    _,
    func: F,
    types: CanIterSelf[type[CanArrayFunction]],
    args: tuple[object, ...],
    kwargs: Mapping[str, object],
) -> R: ...
```

`CanArrayFinalize` — [User Guide: Subclassing ndarray][DOC-AFIN]:

```python
class CanArrayFinalize[
    T: object = ...,
]: ...
```

```python
def __array_finalize__(_, obj: T): ...
```

`CanArrayWrap` — [API: Standard array subclasses][REF_ARRAY-WRAP]:

```python
class CanArrayWrap: ...
```

```python
def __array_wrap__[ND, ST](
    _,
    array: Array[ND, ST],
    context: (...) | None = ...,
    return_scalar: bool = ...,
) -> Self | Array[ND, ST]
```

`HasArrayInterface` — [API: The array interface protocol][REF_ARRAY-INTER]:

```python
class HasArrayInterface[
    V: Mapping[str, object] = ...,
]: ...
```

```python
__array_interface__: V
```

`HasArrayPriority` — [API: Standard array subclasses][REF_ARRAY-PRIO]:

```python
class HasArrayPriority: ...
```

```python
__array_priority__: float
```

`HasDType` — [API: Specifying and constructing data types][REF_DTYPE]:

```python
class HasDType[
    DT: DType = ...,
]: ...
```

```python
dtype: DT
```
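These protocols make duck-typed array implementations checkable. For instance, a class that only implements `__array__` is already enough for `np.asarray` (and for `CanArray`-style structural checks). A sketch (the `copy` keyword is accepted for NumPy 2 compatibility; the `Diagonal` class is a hypothetical example):

```python
import numpy as np


class Diagonal:
    # duck-typed: implementing `__array__` is all that `np.asarray` needs
    def __init__(self, values: list[float]) -> None:
        self._values = values

    def __array__(self, dtype=None, *, copy=None) -> np.ndarray:
        # materialize the lazily-stored values as a diagonal matrix
        return np.diag(np.asarray(self._values, dtype=dtype))


print(np.asarray(Diagonal([1.0, 2.0])).shape)  # (2, 2)
```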
[DOC-UFUNC]: https://numpy.org/doc/stable/reference/ufuncs.html
[DOC-ARRAY]: https://numpy.org/doc/stable/user/basics.interoperability.html#the-array-method
[DOC-AFIN]: https://numpy.org/doc/stable/user/basics.subclassing.html#the-role-of-array-finalize

[REF_UFUNC]: https://numpy.org/doc/stable/reference/generated/numpy.ufunc.html
[REF_FROMPY]: https://numpy.org/doc/stable/reference/generated/numpy.frompyfunc.html
[REF_ARRAY-WRAP]: https://numpy.org/doc/stable/reference/arrays.classes.html#numpy.class.__array_wrap__
[REF_ARRAY-INTER]: https://numpy.org/doc/stable/reference/arrays.interface.html#python-side
[REF_ARRAY-PRIO]: https://numpy.org/doc/stable/reference/arrays.classes.html#numpy.class.__array_priority__
[REF_DTYPE]: https://numpy.org/doc/stable/reference/arrays.dtypes.html#specifying-and-constructing-data-types

[CODE-NP-TO]: https://github.com/jorenham/optype/blob/master/optype/numpy/_to.py
[NEP13]: https://numpy.org/neps/nep-0013-ufunc-overrides.html
[NEP18]: https://numpy.org/neps/nep-0018-array-function-protocol.html
[NEP29]: https://numpy.org/neps/nep-0029-deprecation_policy.html

[SPEC0]: https://scientific-python.org/specs/spec-0000/
[PEP695]: https://peps.python.org/pep-0695/
[PEP696]: https://peps.python.org/pep-0696/
[PEP742]: https://peps.python.org/pep-0742/