{"id":13419880,"url":"https://github.com/xtensor-stack/xtensor","last_synced_at":"2025-05-12T15:33:01.135Z","repository":{"id":12758057,"uuid":"72343220","full_name":"xtensor-stack/xtensor","owner":"xtensor-stack","description":"C++ tensors with broadcasting and lazy computing","archived":false,"fork":false,"pushed_at":"2025-04-25T08:09:06.000Z","size":12307,"stargazers_count":3514,"open_issues_count":425,"forks_count":409,"subscribers_count":88,"default_branch":"master","last_synced_at":"2025-05-10T01:02:43.338Z","etag":null,"topics":["c-plus-plus-14","multidimensional-arrays","numpy","tensors"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/xtensor-stack.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2016-10-30T10:40:13.000Z","updated_at":"2025-05-08T23:22:14.000Z","dependencies_parsed_at":"2023-01-16T20:15:46.455Z","dependency_job_id":"6af4c938-7a66-4f77-a53a-2f4429bbb6d9","html_url":"https://github.com/xtensor-stack/xtensor","commit_stats":{"total_commits":2209,"total_committers":133,"mean_commits":16.60902255639098,"dds":0.6097781801720236,"last_synced_commit":"283f2b8375215fc05da9f91a1e7f16e5f75cf238"},"previous_names":["quantstack/xtensor"],"tags_count":120,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xtensor-stack%2Fxtensor","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xtensor-stack%2Fxtensor/tags","releases_url":"https://repos.ecosyste.ms/api/v
1/hosts/GitHub/repositories/xtensor-stack%2Fxtensor/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xtensor-stack%2Fxtensor/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/xtensor-stack","download_url":"https://codeload.github.com/xtensor-stack/xtensor/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253765909,"owners_count":21960816,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["c-plus-plus-14","multidimensional-arrays","numpy","tensors"],"created_at":"2024-07-30T22:01:22.285Z","updated_at":"2025-05-12T15:33:01.082Z","avatar_url":"https://github.com/xtensor-stack.png","language":"C++","readme":"# ![xtensor](docs/source/xtensor.svg)\n\n[![GHA Linux](https://github.com/xtensor-stack/xtensor/actions/workflows/linux.yml/badge.svg)](https://github.com/xtensor-stack/xtensor/actions/workflows/linux.yml)\n[![GHA OSX](https://github.com/xtensor-stack/xtensor/actions/workflows/osx.yml/badge.svg)](https://github.com/xtensor-stack/xtensor/actions/workflows/osx.yml)\n[![GHA Windows](https://github.com/xtensor-stack/xtensor/actions/workflows/windows.yml/badge.svg)](https://github.com/xtensor-stack/xtensor/actions/workflows/windows.yml)\n[![Documentation](http://readthedocs.org/projects/xtensor/badge/?version=latest)](https://xtensor.readthedocs.io/en/latest/?badge=latest)\n[![Doxygen -\u003e 
gh-pages](https://github.com/xtensor-stack/xtensor/workflows/gh-pages/badge.svg)](https://xtensor-stack.github.io/xtensor)\n[![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/xtensor-stack/xtensor/stable?filepath=notebooks%2Fxtensor.ipynb)\n[![Join the Gitter Chat](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/QuantStack/Lobby?utm_source=badge\u0026utm_medium=badge\u0026utm_campaign=pr-badge\u0026utm_content=badge)\n\nMulti-dimensional arrays with broadcasting and lazy computing.\n\n## Introduction\n\n`xtensor` is a C++ library meant for numerical analysis with multi-dimensional\narray expressions.\n\n`xtensor` provides\n\n - an extensible expression system enabling **lazy broadcasting**.\n - an API following the idioms of the **C++ standard library**.\n - tools to manipulate array expressions and build upon `xtensor`.\n\nContainers of `xtensor` are inspired by [NumPy](http://www.numpy.org), the\nPython array programming library. **Adaptors** for existing data structures to\nbe plugged into our expression system can easily be written.\n\nIn fact, `xtensor` can be used to **process NumPy data structures inplace**\nusing Python's [buffer protocol](https://docs.python.org/3/c-api/buffer.html).\nSimilarly, we can operate on Julia and R arrays. 
For more details on the NumPy,\nJulia and R bindings, check out the [xtensor-python](https://github.com/xtensor-stack/xtensor-python),\n[xtensor-julia](https://github.com/xtensor-stack/Xtensor.jl) and\n[xtensor-r](https://github.com/xtensor-stack/xtensor-r) projects respectively.\n\nVersions of `xtensor` prior to 0.26.0 require a C++ compiler supporting C++14.\n\n`xtensor` 0.26.x requires a C++ compiler supporting C++17.\n\n\n## Installation\n\n### Package managers\n\nWe provide a package for the mamba (or conda) package manager:\n\n```bash\nmamba install -c conda-forge xtensor\n```\n\n### Install from sources\n\n`xtensor` is a header-only library.\n\nYou can directly install it from the sources:\n\n```bash\ncmake -DCMAKE_INSTALL_PREFIX=your_install_prefix .\nmake install\n```\n\n### Installing xtensor using vcpkg\n\nYou can download and install xtensor using the [vcpkg](https://github.com/Microsoft/vcpkg) dependency manager:\n\n```bash\ngit clone https://github.com/Microsoft/vcpkg.git\ncd vcpkg\n./bootstrap-vcpkg.sh\n./vcpkg integrate install\n./vcpkg install xtensor\n```\n\nThe xtensor port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please [create an issue or pull request](https://github.com/Microsoft/vcpkg) on the vcpkg repository.\n\n## Trying it online\n\nYou can play with `xtensor` interactively in a Jupyter notebook right now! Just click on the Binder link below:\n\n[![Binder](docs/source/binder-logo.svg)](https://mybinder.org/v2/gh/xtensor-stack/xtensor/stable?filepath=notebooks/xtensor.ipynb)\n\nThe C++ support in Jupyter is powered by the [xeus-cling](https://github.com/jupyter-xeus/xeus-cling) C++ kernel. 
Together with xeus-cling, xtensor enables a similar workflow to that of NumPy with the IPython Jupyter kernel.\n\n![xeus-cling](docs/source/xeus-cling-screenshot.png)\n\n## Documentation\n\nFor more information on using `xtensor`, check out the reference documentation\n\nhttp://xtensor.readthedocs.io/\n\n## Dependencies\n\n`xtensor` depends on the [xtl](https://github.com/xtensor-stack/xtl) library and\nhas an optional dependency on the [xsimd](https://github.com/xtensor-stack/xsimd)\nlibrary:\n\n| `xtensor` | `xtl`   |`xsimd` (optional) |\n|-----------|---------|-------------------|\n|  master   | ^0.8.0  |       ^13.2.0     |\n|  0.26.0   | ^0.8.0  |       ^13.2.0     |\n|  0.25.0   | ^0.7.5  |       ^11.0.0     |\n|  0.24.7   | ^0.7.0  |       ^10.0.0     |\n|  0.24.6   | ^0.7.0  |       ^10.0.0     |\n|  0.24.5   | ^0.7.0  |       ^10.0.0     |\n|  0.24.4   | ^0.7.0  |       ^10.0.0     |\n|  0.24.3   | ^0.7.0  |       ^8.0.3      |\n|  0.24.2   | ^0.7.0  |       ^8.0.3      |\n|  0.24.1   | ^0.7.0  |       ^8.0.3      |\n|  0.24.0   | ^0.7.0  |       ^8.0.3      |\n|  0.23.x   | ^0.7.0  |       ^7.4.8      |\n|  0.22.0   | ^0.6.23 |       ^7.4.8      |\n\nThe dependency on `xsimd` is required if you want to enable SIMD acceleration\nin `xtensor`. 
This can be done by defining the macro `XTENSOR_USE_XSIMD`\n*before* including any header of `xtensor`.\n\n## Usage\n\n### Basic usage\n\n**Initialize a 2-D array and compute the sum of one of its rows and a 1-D array.**\n\n```cpp\n#include \u003ciostream\u003e\n#include \"xtensor/xarray.hpp\"\n#include \"xtensor/xio.hpp\"\n#include \"xtensor/xview.hpp\"\n\nxt::xarray\u003cdouble\u003e arr1\n  {{1.0, 2.0, 3.0},\n   {2.0, 5.0, 7.0},\n   {2.0, 5.0, 7.0}};\n\nxt::xarray\u003cdouble\u003e arr2\n  {5.0, 6.0, 7.0};\n\nxt::xarray\u003cdouble\u003e res = xt::view(arr1, 1) + arr2;\n\nstd::cout \u003c\u003c res;\n```\n\nOutputs:\n\n```\n{7, 11, 14}\n```\n\n**Initialize a 1-D array and reshape it inplace.**\n\n```cpp\n#include \u003ciostream\u003e\n#include \"xtensor/xarray.hpp\"\n#include \"xtensor/xio.hpp\"\n\nxt::xarray\u003cint\u003e arr\n  {1, 2, 3, 4, 5, 6, 7, 8, 9};\n\narr.reshape({3, 3});\n\nstd::cout \u003c\u003c arr;\n```\n\nOutputs:\n\n```\n{{1, 2, 3},\n {4, 5, 6},\n {7, 8, 9}}\n```\n\n**Index Access**\n\n```cpp\n#include \u003ciostream\u003e\n#include \"xtensor/xarray.hpp\"\n#include \"xtensor/xio.hpp\"\n\nxt::xarray\u003cdouble\u003e arr1\n  {{1.0, 2.0, 3.0},\n   {2.0, 5.0, 7.0},\n   {2.0, 5.0, 7.0}};\n\nstd::cout \u003c\u003c arr1(0, 0) \u003c\u003c std::endl;\n\nxt::xarray\u003cint\u003e arr2\n  {1, 2, 3, 4, 5, 6, 7, 8, 9};\n\nstd::cout \u003c\u003c arr2(0);\n```\n\nOutputs:\n\n```\n1\n1\n```\n\n### The NumPy to xtensor cheat sheet\n\nIf you are familiar with NumPy APIs, and you are interested in xtensor, you can\ncheck out the [NumPy to xtensor cheat sheet](https://xtensor.readthedocs.io/en/latest/numpy.html)\nprovided in the documentation.\n\n### Lazy broadcasting with `xtensor`\n\nXtensor can operate on arrays of different shapes and dimensions in an\nelement-wise fashion. 
Broadcasting rules of xtensor are similar to those of\n[NumPy](http://www.numpy.org) and [libdynd](http://libdynd.org).\n\n### Broadcasting rules\n\nIn an operation involving two arrays of different dimensions, the array with\nthe fewer dimensions is broadcast across the leading dimensions of the other.\n\nFor example, if `A` has shape `(2, 3)`, and `B` has shape `(4, 2, 3)`, the\nresult of a broadcasted operation with `A` and `B` has shape `(4, 2, 3)`.\n\n```\n   (2, 3) # A\n(4, 2, 3) # B\n---------\n(4, 2, 3) # Result\n```\n\nThe same rule holds for scalars, which are handled as 0-D expressions. If `A`\nis a scalar, the equation becomes:\n\n```\n       () # A\n(4, 2, 3) # B\n---------\n(4, 2, 3) # Result\n```\n\nIf the matched-up dimensions of two input arrays are different, and one of them has\nsize `1`, it is broadcast to match the size of the other. Let's say `B` has the\nshape `(4, 2, 1)` in the previous example, so the broadcasting happens as\nfollows:\n\n```\n   (2, 3) # A\n(4, 2, 1) # B\n---------\n(4, 2, 3) # Result\n```\n\n### Universal functions, laziness and vectorization\n\nWith `xtensor`, if `x`, `y` and `z` are arrays of *broadcastable shapes*, the\nreturn type of an expression such as `x + y * sin(z)` is **not an array**. It\nis an `xexpression` object offering the same interface as an N-dimensional\narray, which does not hold the result. **Values are only computed upon access\nor when the expression is assigned to an xarray object**. This makes it possible to\noperate symbolically on very large arrays and only compute the result for the\nindices of interest.\n\nWe provide utilities to **vectorize any scalar function** (taking multiple\nscalar arguments) into a function that will operate on `xexpression`s, applying\nthe lazy broadcasting rules which we just described. These functions are called\n*xfunction*s. 
They are `xtensor`'s counterpart to NumPy's universal functions.\n\nIn `xtensor`, arithmetic operations (`+`, `-`, `*`, `/`) and all special\nfunctions are *xfunction*s.\n\n### Iterating over `xexpression`s and broadcasting iterators\n\nAll `xexpression`s offer two sets of functions to retrieve iterator pairs (and\ntheir `const` counterparts).\n\n - `begin()` and `end()` provide instances of `xiterator`s which can be used to\n   iterate over all the elements of the expression. The order in which\n   elements are listed is `row-major` in that the index of the last dimension is\n   incremented first.\n - `begin(shape)` and `end(shape)` are similar but take a *broadcasting shape*\n   as an argument. Elements are iterated upon in a row-major way, but certain\n   dimensions are repeated to match the provided shape as per the rules\n   described above. For an expression `e`, `e.begin(e.shape())` and `e.begin()`\n   are equivalent.\n\n### Runtime vs compile-time dimensionality\n\nTwo container classes implementing multi-dimensional arrays are provided:\n`xarray` and `xtensor`.\n\n - `xarray` can be reshaped dynamically to any number of dimensions. It is the\n   container that is the most similar to NumPy arrays.\n - `xtensor` has a dimension set at compilation time, which enables many\n   optimizations. For example, shapes and strides of `xtensor` instances are\n   allocated on the stack instead of the heap.\n\nThe `xarray` and `xtensor` containers are both `xexpression`s and can be involved\nand mixed in universal functions, assigned to each other, etc.\n\nIn addition, two access operators are provided:\n\n - The variadic template `operator()`, which can take multiple integral\n   arguments or none.\n - The `operator[]`, which takes a single multi-index argument, which can be\n   of size determined at runtime. 
`operator[]` also supports access with braced\n   initializers.\n\n## Performance\n\nXtensor operations make use of SIMD acceleration depending on what instruction\nsets are available on the platform at hand (SSE, AVX, AVX512, Neon).\n\n### [![xsimd](docs/source/xsimd-small.svg)](https://github.com/xtensor-stack/xsimd)\n\nThe [xsimd](https://github.com/xtensor-stack/xsimd) project underlies the\ndetection of the available instruction sets, and provides generic high-level\nwrappers and memory allocators for client libraries such as xtensor.\n\n### Continuous benchmarking\n\nXtensor operations are continuously benchmarked, and are significantly improved\nat each new version. Current performance on statically dimensioned tensors\nmatches that of the Eigen library. Dynamically dimensioned tensors for which the\nshape is heap allocated come at a small additional cost.\n\n### Stack allocation for shapes and strides\n\nMore generally, the library implements a `promote_shape` mechanism at build time\nto determine the optimal sequence type to hold the shape of an expression. The\nshape type of a broadcasting expression whose members have a dimensionality\ndetermined at compile time will be a stack allocated sequence type. If at\nleast one node of a broadcasting expression has a dynamic dimension\n(for example an `xarray`), it bubbles up to the entire broadcasting expression\nwhich will have a heap allocated shape. 
The same holds for views, broadcast\nexpressions, etc.\n\nTherefore, when building an application with xtensor, we recommend using\nstatically-dimensioned containers whenever possible to improve the overall\nperformance of the application.\n\n## Language bindings\n\n### [![xtensor-python](docs/source/xtensor-python-small.svg)](https://github.com/xtensor-stack/xtensor-python)\n\nThe [xtensor-python](https://github.com/xtensor-stack/xtensor-python) project\nprovides the implementation of two `xtensor` containers, `pyarray` and\n`pytensor`, which effectively wrap NumPy arrays, allowing inplace modification,\nincluding reshapes.\n\nUtilities to automatically generate NumPy-style universal functions, exposed to\nPython from scalar functions, are also provided.\n\n### [![xtensor-julia](docs/source/xtensor-julia-small.svg)](https://github.com/xtensor-stack/xtensor-julia)\n\nThe [xtensor-julia](https://github.com/xtensor-stack/xtensor-julia) project\nprovides the implementation of two `xtensor` containers, `jlarray` and\n`jltensor`, which effectively wrap Julia arrays, allowing inplace modification,\nincluding reshapes.\n\nLike in the Python case, utilities to generate NumPy-style universal functions\nare provided.\n\n### [![xtensor-r](docs/source/xtensor-r-small.svg)](https://github.com/xtensor-stack/xtensor-r)\n\nThe [xtensor-r](https://github.com/xtensor-stack/xtensor-r) project provides the\nimplementation of two `xtensor` containers, `rarray` and `rtensor`, which\neffectively wrap R arrays, allowing inplace modification, including reshapes.\n\nLike for the Python and Julia bindings, utilities to generate NumPy-style\nuniversal functions are provided.\n\n## Library bindings\n\n### [![xtensor-blas](docs/source/xtensor-blas-small.svg)](https://github.com/xtensor-stack/xtensor-blas)\n\nThe [xtensor-blas](https://github.com/xtensor-stack/xtensor-blas) project provides\nbindings to BLAS libraries, enabling linear-algebra operations on xtensor\nexpressions.\n\n### 
[![xtensor-io](docs/source/xtensor-io-small.svg)](https://github.com/xtensor-stack/xtensor-io)\n\nThe [xtensor-io](https://github.com/xtensor-stack/xtensor-io) project enables the\nloading of a variety of file formats into xtensor expressions, such as image\nfiles, sound files, and HDF5 files, as well as NumPy npy and npz files.\n\n## Building and running the tests\n\nBuilding the tests requires the [GTest](https://github.com/google/googletest)\ntesting framework and [cmake](https://cmake.org).\n\ngtest and cmake are available as packages for most Linux distributions.\nBesides, they can also be installed with the `conda` package manager (even on\nWindows):\n\n```bash\nconda install -c conda-forge gtest cmake\n```\n\nOnce `gtest` and `cmake` are installed, you can build and run the tests:\n\n```bash\nmkdir build\ncd build\ncmake -DBUILD_TESTS=ON ../\nmake xtest\n```\n\nYou can also use CMake to download the source of `gtest`, build it, and use the\ngenerated libraries:\n\n```bash\nmkdir build\ncd build\ncmake -DBUILD_TESTS=ON -DDOWNLOAD_GTEST=ON ../\nmake xtest\n```\n\n## Building the HTML documentation\n\nxtensor's documentation is built with three tools:\n\n - [doxygen](http://www.doxygen.org)\n - [sphinx](http://www.sphinx-doc.org)\n - [breathe](https://breathe.readthedocs.io)\n\nWhile doxygen must be installed separately, you can install breathe by typing\n\n```bash\npip install breathe sphinx_rtd_theme\n```\n\nBreathe can also be installed with `conda`\n\n```bash\nconda install -c conda-forge breathe\n```\n\nFinally, go to the `docs` subdirectory and build the documentation with the\nfollowing command:\n\n```bash\nmake html\n```\n\n## License\n\nWe use a shared copyright model that enables all contributors to maintain the\ncopyright on their contributions.\n\nThis software is licensed under the BSD-3-Clause license. 
See the\n[LICENSE](LICENSE) file for details.\n","funding_links":[],"categories":["TODO scan for Android support in followings","Math","C++","Linear Algebra / Statistics Toolkit"],"sub_categories":["General Purpose Tensor Library"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fxtensor-stack%2Fxtensor","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fxtensor-stack%2Fxtensor","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fxtensor-stack%2Fxtensor/lists"}