Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/xtensor-stack/xtensor
C++ tensors with broadcasting and lazy computing
- Host: GitHub
- URL: https://github.com/xtensor-stack/xtensor
- Owner: xtensor-stack
- License: bsd-3-clause
- Created: 2016-10-30T10:40:13.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2024-03-26T08:24:03.000Z (8 months ago)
- Last Synced: 2024-04-14T07:17:45.516Z (7 months ago)
- Topics: c-plus-plus-14, multidimensional-arrays, numpy, tensors
- Language: C++
- Homepage:
- Size: 11 MB
- Stars: 3,198
- Watchers: 89
- Forks: 389
- Open Issues: 397
- Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-list - xtensor - C++ tensors with broadcasting and lazy computing. (Linear Algebra / Statistics Toolkit / General Purpose Tensor Library)
README
# ![xtensor](docs/source/xtensor.svg)
[![GHA Linux](https://github.com/xtensor-stack/xtensor/actions/workflows/linux.yml/badge.svg)](https://github.com/xtensor-stack/xtensor/actions/workflows/linux.yml)
[![GHA OSX](https://github.com/xtensor-stack/xtensor/actions/workflows/osx.yml/badge.svg)](https://github.com/xtensor-stack/xtensor/actions/workflows/osx.yml)
[![GHA Windows](https://github.com/xtensor-stack/xtensor/actions/workflows/windows.yml/badge.svg)](https://github.com/xtensor-stack/xtensor/actions/workflows/windows.yml)
[![Documentation](http://readthedocs.org/projects/xtensor/badge/?version=latest)](https://xtensor.readthedocs.io/en/latest/?badge=latest)
[![Doxygen -> gh-pages](https://github.com/xtensor-stack/xtensor/workflows/gh-pages/badge.svg)](https://xtensor-stack.github.io/xtensor)
[![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/xtensor-stack/xtensor/stable?filepath=notebooks%2Fxtensor.ipynb)
[![Join the Gitter Chat](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/QuantStack/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

Multi-dimensional arrays with broadcasting and lazy computing.
## Introduction
`xtensor` is a C++ library meant for numerical analysis with multi-dimensional
array expressions.

`xtensor` provides
- an extensible expression system enabling **lazy broadcasting**.
- an API following the idioms of the **C++ standard library**.
- tools to manipulate array expressions and build upon `xtensor`.

Containers of `xtensor` are inspired by [NumPy](http://www.numpy.org), the
Python array programming library. **Adaptors** for existing data structures to
be plugged into our expression system can easily be written.

In fact, `xtensor` can be used to **process NumPy data structures inplace**
using Python's [buffer protocol](https://docs.python.org/3/c-api/buffer.html).
Similarly, we can operate on Julia and R arrays. For more details on the NumPy,
Julia and R bindings, check out the [xtensor-python](https://github.com/xtensor-stack/xtensor-python),
[xtensor-julia](https://github.com/xtensor-stack/Xtensor.jl) and
[xtensor-r](https://github.com/xtensor-stack/xtensor-r) projects respectively.

`xtensor` requires a modern C++ compiler supporting C++14. The following C++
compilers are supported:

- On Windows platforms, Visual C++ 2015 Update 2, or more recent
- On Unix platforms, gcc 4.9 or a recent version of Clang

## Installation
### Package managers
We provide a package for the mamba (or conda) package manager:
```bash
mamba install -c conda-forge xtensor
```

### Install from sources
`xtensor` is a header-only library.
You can directly install it from the sources:
```bash
cmake -DCMAKE_INSTALL_PREFIX=your_install_prefix .
make install
```

### Installing xtensor using vcpkg
You can download and install xtensor using the [vcpkg](https://github.com/Microsoft/vcpkg) dependency manager:
```bash
git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install xtensor
```

The xtensor port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please [create an issue or pull request](https://github.com/Microsoft/vcpkg) on the vcpkg repository.
## Trying it online
You can play with `xtensor` interactively in a Jupyter notebook right now! Just click on the binder link below:
[![Binder](docs/source/binder-logo.svg)](https://mybinder.org/v2/gh/xtensor-stack/xtensor/stable?filepath=notebooks/xtensor.ipynb)
The C++ support in Jupyter is powered by the [xeus-cling](https://github.com/jupyter-xeus/xeus-cling) C++ kernel. Together with xeus-cling, xtensor enables a similar workflow to that of NumPy with the IPython Jupyter kernel.
![xeus-cling](docs/source/xeus-cling-screenshot.png)
## Documentation
For more information on using `xtensor`, check out the reference documentation:
http://xtensor.readthedocs.io/
## Dependencies
`xtensor` depends on the [xtl](https://github.com/xtensor-stack/xtl) library and
has an optional dependency on the [xsimd](https://github.com/xtensor-stack/xsimd)
library:

| `xtensor` | `xtl`   | `xsimd` (optional) |
|-----------|---------|-------------------|
| master | ^0.7.5 | ^11.0.0 |
| 0.25.0 | ^0.7.5 | ^11.0.0 |
| 0.24.7 | ^0.7.0 | ^10.0.0 |
| 0.24.6 | ^0.7.0 | ^10.0.0 |
| 0.24.5 | ^0.7.0 | ^10.0.0 |
| 0.24.4 | ^0.7.0 | ^10.0.0 |
| 0.24.3 | ^0.7.0 | ^8.0.3 |
| 0.24.2 | ^0.7.0 | ^8.0.3 |
| 0.24.1 | ^0.7.0 | ^8.0.3 |
| 0.24.0 | ^0.7.0 | ^8.0.3 |
| 0.23.x | ^0.7.0 | ^7.4.8 |
| 0.22.0   | ^0.6.23 | ^7.4.8            |

The dependency on `xsimd` is required if you want to enable SIMD acceleration
in `xtensor`. This can be done by defining the macro `XTENSOR_USE_XSIMD`
*before* including any header of `xtensor`.
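A minimal sketch of what this looks like in practice (assuming `xsimd` is installed where the compiler can find it; the array values are arbitrary):

```cpp
// Enable SIMD-accelerated assignment. The macro must be visible before any
// xtensor header is included; it can also be passed on the command line,
// e.g. -DXTENSOR_USE_XSIMD.
#define XTENSOR_USE_XSIMD

#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

int main()
{
    xt::xarray<double> a = {{1.0, 2.0, 3.0},
                            {4.0, 5.0, 6.0}};
    xt::xarray<double> b = {10.0, 20.0, 30.0};

    // The element-wise assignment below is vectorized when xsimd is enabled.
    xt::xarray<double> c = a + b;
    std::cout << c << std::endl;
}
```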
## Usage

### Basic usage
**Initialize a 2-D array and compute the sum of one of its rows and a 1-D array.**
```cpp
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"
#include "xtensor/xview.hpp"xt::xarray arr1
{{1.0, 2.0, 3.0},
{2.0, 5.0, 7.0},
{2.0, 5.0, 7.0}};xt::xarray arr2
{5.0, 6.0, 7.0};xt::xarray res = xt::view(arr1, 1) + arr2;
std::cout << res;
```

Outputs:
```
{7, 11, 14}
```

**Initialize a 1-D array and reshape it inplace.**
```cpp
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"xt::xarray arr
{1, 2, 3, 4, 5, 6, 7, 8, 9};arr.reshape({3, 3});
std::cout << arr;
```

Outputs:
```
{{1, 2, 3},
{4, 5, 6},
{7, 8, 9}}
```

**Index Access**
```cpp
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"xt::xarray arr1
{{1.0, 2.0, 3.0},
{2.0, 5.0, 7.0},
{2.0, 5.0, 7.0}};std::cout << arr1(0, 0) << std::endl;
xt::xarray arr2
{1, 2, 3, 4, 5, 6, 7, 8, 9};std::cout << arr2(0);
```

Outputs:
```
1.0
1
```

### The NumPy to xtensor cheat sheet
If you are familiar with NumPy APIs, and you are interested in xtensor, you can
check out the [NumPy to xtensor cheat sheet](https://xtensor.readthedocs.io/en/latest/numpy.html)
provided in the documentation.

### Lazy broadcasting with `xtensor`
Xtensor can operate on arrays of different shapes or dimensions in an
element-wise fashion. Broadcasting rules of xtensor are similar to those of
[NumPy](http://www.numpy.org) and [libdynd](http://libdynd.org).

### Broadcasting rules
In an operation involving two arrays of different dimensions, the array with
the lesser dimensions is broadcast across the leading dimensions of the other.

For example, if `A` has shape `(2, 3)`, and `B` has shape `(4, 2, 3)`, the
result of a broadcasted operation with `A` and `B` has shape `(4, 2, 3)`.

```
   (2, 3) # A
(4, 2, 3) # B
---------
(4, 2, 3) # Result
```

The same rule holds for scalars, which are handled as 0-D expressions. If `A`
is a scalar, the equation becomes:

```
       () # A
(4, 2, 3) # B
---------
(4, 2, 3) # Result
```

If matched up dimensions of two input arrays are different, and one of them has
size `1`, it is broadcast to match the size of the other. Let's say B has the
shape `(4, 2, 1)` in the previous example, so the broadcasting happens as
follows:

```
   (2, 3) # A
(4, 2, 1) # B
---------
(4, 2, 3) # Result
```
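As an illustrative sketch of the first rule above (using `xt::ones` from `xbuilder.hpp` to build the operands; the shapes are taken from the example):

```cpp
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xbuilder.hpp"  // xt::ones
#include "xtensor/xio.hpp"

int main()
{
    xt::xarray<double> a = xt::ones<double>({2, 3});     // shape (2, 3)
    xt::xarray<double> b = xt::ones<double>({4, 2, 3});  // shape (4, 2, 3)

    // `a` is broadcast across the leading dimension of `b`;
    // the result has shape (4, 2, 3) and every element equals 2.
    xt::xarray<double> res = a + b;
    std::cout << res.dimension() << std::endl;  // 3
    std::cout << res(3, 1, 2) << std::endl;     // 2
}
```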
### Universal functions, laziness and vectorization

With `xtensor`, if `x`, `y` and `z` are arrays of *broadcastable shapes*, the
return type of an expression such as `x + y * sin(z)` is **not an array**. It
is an `xexpression` object offering the same interface as an N-dimensional
array, which does not hold the result. **Values are only computed upon access
or when the expression is assigned to an xarray object**. This makes it possible to
operate symbolically on very large arrays and only compute the result for the
indices of interest.

We provide utilities to **vectorize any scalar function** (taking multiple
scalar arguments) into a function that will perform on `xexpression`s, applying
the lazy broadcasting rules which we just described. These functions are called
*xfunction*s. They are `xtensor`'s counterpart to NumPy's universal functions.

In `xtensor`, arithmetic operations (`+`, `-`, `*`, `/`) and all special
functions are *xfunction*s.
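A small sketch of this lazy behaviour (variable names are illustrative; `xt::sin` is provided by `xtensor/xmath.hpp`):

```cpp
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xmath.hpp"  // xt::sin
#include "xtensor/xio.hpp"

int main()
{
    xt::xarray<double> x = {1.0, 2.0, 3.0};
    xt::xarray<double> y = {4.0, 5.0, 6.0};
    xt::xarray<double> z = {0.1, 0.2, 0.3};

    // `expr` is an unevaluated xexpression: no computation happens here.
    auto expr = x + y * xt::sin(z);

    // A single element is computed on access...
    std::cout << expr(1) << std::endl;

    // ...and the full result only when assigned to a container.
    xt::xarray<double> result = expr;
    std::cout << result << std::endl;
}
```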
### Iterating over `xexpression`s and broadcasting Iterators

All `xexpression`s offer two sets of functions to retrieve iterator pairs (and
their `const` counterpart).

- `begin()` and `end()` provide instances of `xiterator`s which can be used to
iterate over all the elements of the expression. The order in which
elements are listed is `row-major` in that the index of last dimension is
incremented first.
- `begin(shape)` and `end(shape)` are similar but take a *broadcasting shape*
as an argument. Elements are iterated upon in a row-major way, but certain
dimensions are repeated to match the provided shape as per the rules
described above. For an expression `e`, `e.begin(e.shape())` and `e.begin()`
are equivalent.
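A minimal sketch of the two iteration modes, assuming the `begin(shape)`/`end(shape)` overloads described above accept a standard shape container:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>
#include "xtensor/xarray.hpp"

int main()
{
    xt::xarray<int> a = {{1, 2, 3},
                         {4, 5, 6}};  // shape (2, 3)

    // Row-major traversal of all six elements.
    for (auto it = a.begin(); it != a.end(); ++it)
    {
        std::cout << *it << ' ';
    }
    std::cout << '\n';

    // Broadcasting iteration: the (2, 3) expression is virtually repeated
    // along a new leading dimension to match the shape (2, 2, 3).
    std::vector<std::size_t> shape = {2, 2, 3};
    for (auto it = a.begin(shape); it != a.end(shape); ++it)
    {
        std::cout << *it << ' ';  // prints the six values twice
    }
    std::cout << '\n';
}
```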
### Runtime vs compile-time dimensionality

Two container classes implementing multi-dimensional arrays are provided:
`xarray` and `xtensor`.

- `xarray` can be reshaped dynamically to any number of dimensions. It is the
container that is the most similar to NumPy arrays.
- `xtensor` has a dimension set at compilation time, which enables many
optimizations. For example, shapes and strides of `xtensor` instances are
allocated on the stack instead of the heap.

`xarray` and `xtensor` containers are both `xexpression`s and can be involved
and mixed in universal functions, assigned to each other, etc.

Besides, two access operators are provided:
- The variadic template `operator()` which can take multiple integral
arguments or none.
- And the `operator[]` which takes a single multi-index argument, which can be
of size determined at runtime. `operator[]` also supports access with braced
initializers.
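The following sketch contrasts the two containers and the two access operators (the values are arbitrary):

```cpp
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xtensor.hpp"

int main()
{
    // Dynamic number of dimensions; shape and strides live on the heap.
    xt::xarray<double> a = {{1.0, 2.0},
                            {3.0, 4.0}};

    // Dimension fixed at compile time (here 2); shape and strides on the stack.
    xt::xtensor<double, 2> t = {{1.0, 2.0},
                                {3.0, 4.0}};

    // Both are xexpressions and can be mixed in the same expression.
    auto sum = a + t;

    // Variadic operator() takes one integral index per dimension...
    std::cout << sum(1, 0) << std::endl;  // 6

    // ...while operator[] takes a single (possibly runtime-sized) multi-index.
    std::cout << a[{0, 1}] << std::endl;  // 2
}
```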
## Performances

Xtensor operations make use of SIMD acceleration depending on what instruction
sets are available on the platform at hand (SSE, AVX, AVX512, Neon).

### [![xsimd](docs/source/xsimd-small.svg)](https://github.com/xtensor-stack/xsimd)
The [xsimd](https://github.com/xtensor-stack/xsimd) project underlies the
detection of the available instruction sets, and provides generic high-level
wrappers and memory allocators for client libraries such as xtensor.

### Continuous benchmarking
Xtensor operations are continuously benchmarked, and are significantly improved
at each new version. Current performances on statically dimensioned tensors
match those of the Eigen library. Dynamically dimensioned tensors for which the
shape is heap allocated come at a small additional cost.

### Stack allocation for shapes and strides
More generally, the library implements a `promote_shape` mechanism at build time
to determine the optimal sequence type to hold the shape of an expression. The
shape type of a broadcasting expression whose members have a dimensionality
determined at compile time will have a stack allocated sequence type. If at
least one member of a broadcasting expression has a dynamic dimension
(for example an `xarray`), it bubbles up to the entire broadcasting expression
which will have a heap-allocated shape. The same holds for views, broadcast
expressions, etc.

Therefore, when building an application with xtensor, we recommend using
statically-dimensioned containers whenever possible to improve the overall
performance of the application.

## Language bindings
### [![xtensor-python](docs/source/xtensor-python-small.svg)](https://github.com/xtensor-stack/xtensor-python)
The [xtensor-python](https://github.com/xtensor-stack/xtensor-python) project
provides the implementation of two `xtensor` containers, `pyarray` and
`pytensor` which effectively wrap NumPy arrays, allowing inplace modification,
including reshapes.

Utilities to automatically generate NumPy-style universal functions, exposed to
Python from scalar functions are also provided.

### [![xtensor-julia](docs/source/xtensor-julia-small.svg)](https://github.com/xtensor-stack/xtensor-julia)
The [xtensor-julia](https://github.com/xtensor-stack/xtensor-julia) project
provides the implementation of two `xtensor` containers, `jlarray` and
`jltensor` which effectively wrap Julia arrays, allowing inplace modification,
including reshapes.

Like in the Python case, utilities to generate NumPy-style universal functions
are provided.

### [![xtensor-r](docs/source/xtensor-r-small.svg)](https://github.com/xtensor-stack/xtensor-r)
The [xtensor-r](https://github.com/xtensor-stack/xtensor-r) project provides the
implementation of two `xtensor` containers, `rarray` and `rtensor` which
effectively wrap R arrays, allowing inplace modification, including reshapes.

Like for the Python and Julia bindings, utilities to generate NumPy-style
universal functions are provided.

## Library bindings
### [![xtensor-blas](docs/source/xtensor-blas-small.svg)](https://github.com/xtensor-stack/xtensor-blas)
The [xtensor-blas](https://github.com/xtensor-stack/xtensor-blas) project provides
bindings to BLAS libraries, enabling linear-algebra operations on xtensor
expressions.
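As a hedged sketch (assuming xtensor-blas and a BLAS implementation are installed, and that `xt::linalg::dot` from `xtensor-blas/xlinalg.hpp` is used for the matrix product):

```cpp
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"
#include "xtensor-blas/xlinalg.hpp"  // provided by the xtensor-blas project

int main()
{
    xt::xarray<double> a = {{1.0, 2.0},
                            {3.0, 4.0}};
    xt::xarray<double> b = {{5.0, 6.0},
                            {7.0, 8.0}};

    // Matrix product evaluated through the BLAS bindings.
    xt::xarray<double> c = xt::linalg::dot(a, b);
    std::cout << c << std::endl;  // {{19, 22}, {43, 50}}
}
```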
### [![xtensor-io](docs/source/xtensor-io-small.svg)](https://github.com/xtensor-stack/xtensor-io)

The [xtensor-io](https://github.com/xtensor-stack/xtensor-io) project enables the
loading of a variety of file formats into xtensor expressions, such as image
files, sound files, HDF5 files, as well as NumPy npy and npz files.

## Building and running the tests
Building the tests requires the [GTest](https://github.com/google/googletest)
testing framework and [cmake](https://cmake.org).

gtest and cmake are available as packages for most Linux distributions.
Besides, they can also be installed with the `conda` package manager (even on
Windows):

```bash
conda install -c conda-forge gtest cmake
```

Once `gtest` and `cmake` are installed, you can build and run the tests:
```bash
mkdir build
cd build
cmake -DBUILD_TESTS=ON ../
make xtest
```

You can also use CMake to download the source of `gtest`, build it, and use the
generated libraries:

```bash
mkdir build
cd build
cmake -DBUILD_TESTS=ON -DDOWNLOAD_GTEST=ON ../
make xtest
```

## Building the HTML documentation
xtensor's documentation is built with three tools:
- [doxygen](http://www.doxygen.org)
- [sphinx](http://www.sphinx-doc.org)
- [breathe](https://breathe.readthedocs.io)

While doxygen must be installed separately, you can install breathe by typing
```bash
pip install breathe sphinx_rtd_theme
```

Breathe can also be installed with `conda`:
```bash
conda install -c conda-forge breathe
```

Finally, go to the `docs` subdirectory and build the documentation with the
following command:

```bash
make html
```

## License
We use a shared copyright model that enables all contributors to maintain the
copyright on their contributions.

This software is licensed under the BSD-3-Clause license. See the
[LICENSE](LICENSE) file for details.