{"id":13591391,"url":"https://github.com/xtensor-stack/xsimd","last_synced_at":"2025-10-03T22:21:15.840Z","repository":{"id":37484581,"uuid":"52121329","full_name":"xtensor-stack/xsimd","owner":"xtensor-stack","description":"C++ wrappers for SIMD intrinsics and parallelized, optimized mathematical functions (SSE, AVX, AVX512, NEON, SVE))","archived":false,"fork":false,"pushed_at":"2025-04-27T14:35:43.000Z","size":3997,"stargazers_count":2358,"open_issues_count":47,"forks_count":265,"subscribers_count":75,"default_branch":"master","last_synced_at":"2025-04-27T15:31:20.961Z","etag":null,"topics":["avx","avx512","c-plus-plus-11","cpp","mathematical-functions","neon","simd","simd-instructions","simd-intrinsics","sse","sve","vectorization"],"latest_commit_sha":null,"homepage":"https://xsimd.readthedocs.io/","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/xtensor-stack.png","metadata":{"files":{"readme":"README.md","changelog":"Changelog.rst","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2016-02-19T22:41:39.000Z","updated_at":"2025-04-26T12:12:22.000Z","dependencies_parsed_at":"2023-12-29T13:30:41.620Z","dependency_job_id":"e46e3e7d-29a1-48e3-86d9-1fc61d1eafe2","html_url":"https://github.com/xtensor-stack/xsimd","commit_stats":{"total_commits":1256,"total_committers":74,"mean_commits":"16.972972972972972","dds":0.6011146496815287,"last_synced_commit":"96edf0340492fa9c080f5182b38358ca85baef5e"},"previous_names":[],"tags_count":79,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories
/xtensor-stack%2Fxsimd","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xtensor-stack%2Fxsimd/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xtensor-stack%2Fxsimd/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xtensor-stack%2Fxsimd/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/xtensor-stack","download_url":"https://codeload.github.com/xtensor-stack/xsimd/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":251337701,"owners_count":21573420,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["avx","avx512","c-plus-plus-11","cpp","mathematical-functions","neon","simd","simd-instructions","simd-intrinsics","sse","sve","vectorization"],"created_at":"2024-08-01T16:00:56.993Z","updated_at":"2025-10-03T22:21:15.816Z","avatar_url":"https://github.com/xtensor-stack.png","language":"C++","readme":"# ![xsimd](docs/source/xsimd.svg)\n\n[![GHA android](https://github.com/xtensor-stack/xsimd/actions/workflows/android.yml/badge.svg)](https://github.com/xtensor-stack/xsimd/actions/workflows/android.yml)\n[![GHA cross-rvv](https://github.com/xtensor-stack/xsimd/actions/workflows/cross-rvv.yml/badge.svg)](https://github.com/xtensor-stack/xsimd/actions/workflows/cross-rvv.yml)\n[![GHA cross-sve](https://github.com/xtensor-stack/xsimd/actions/workflows/cross-sve.yml/badge.svg)](https://github.com/xtensor-stack/xsimd/actions/workflows/cross-sve.yml)\n[![GHA 
cross](https://github.com/xtensor-stack/xsimd/actions/workflows/cross.yml/badge.svg)](https://github.com/xtensor-stack/xsimd/actions/workflows/cross.yml)\n[![GHA cxx-no-exceptions](https://github.com/xtensor-stack/xsimd/actions/workflows/cxx-no-exceptions.yml/badge.svg)](https://github.com/xtensor-stack/xsimd/actions/workflows/cxx-no-exceptions.yml)\n[![GHA cxx-versions](https://github.com/xtensor-stack/xsimd/actions/workflows/cxx-versions.yml/badge.svg)](https://github.com/xtensor-stack/xsimd/actions/workflows/cxx-versions.yml)\n[![GHA emscripten](https://github.com/xtensor-stack/xsimd/actions/workflows/emscripten.yml/badge.svg)](https://github.com/xtensor-stack/xsimd/actions/workflows/emscripten.yml)\n[![GHA linux](https://github.com/xtensor-stack/xsimd/actions/workflows/linux.yml/badge.svg)](https://github.com/xtensor-stack/xsimd/actions/workflows/linux.yml)\n[![GHA macos](https://github.com/xtensor-stack/xsimd/actions/workflows/macos.yml/badge.svg)](https://github.com/xtensor-stack/xsimd/actions/workflows/macos.yml)\n[![GHA windows](https://github.com/xtensor-stack/xsimd/actions/workflows/windows.yml/badge.svg)](https://github.com/xtensor-stack/xsimd/actions/workflows/windows.yml)\n[![Documentation Status](http://readthedocs.org/projects/xsimd/badge/?version=latest)](https://xsimd.readthedocs.io/en/latest/?badge=latest)\n[![Join the Gitter Chat](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/QuantStack/Lobby?utm_source=badge\u0026utm_medium=badge\u0026utm_campaign=pr-badge\u0026utm_content=badge)\n\nC++ wrappers for SIMD intrinsics\n\n## Introduction\n\nSIMD (Single Instruction, Multiple Data) is a feature of microprocessors that has been available for many years. SIMD instructions perform a single operation\non a batch of values at once, and thus provide a way to significantly accelerate code execution. 
However, these instructions differ between microprocessor\nvendors and compilers.\n\n`xsimd` provides a unified means for using these features for library authors. Namely, it enables manipulation of batches of numbers with the same arithmetic operators as for single values. It also provides accelerated implementation of common mathematical functions operating on batches.\n\n## Adoption\n\nBeyond Xtensor, `xsimd` has been adopted by major open-source projects, such as Mozilla Firefox, Apache Arrow, Pythran, and Krita.\n\n## History\n\nThe `xsimd` project started with a series of blog articles by Johan Mabille on how to implement wrappers for SIMD intrinsics.\nThe archives of the blog can be found here: [The C++ Scientist](http://johanmabille.github.io/blog/archives/). The design described in\nthe articles remained close to the actual architecture of `xsimd` up until version 8.0.\n\nThe mathematical functions are a lightweight implementation of the algorithms originally implemented in the now deprecated [boost.SIMD](https://github.com/NumScale/boost.simd) project.\n\n## Requirements\n\n`xsimd` requires a C++11 compliant compiler. 
The following C++ compilers are supported:\n\nCompiler                | Version\n------------------------|-------------------------------\nMicrosoft Visual Studio | MSVC 2015 update 2 and above\ng++                     | 4.9 and above\nclang                   | 4.0 and above\n\nThe following SIMD instruction set extensions are supported:\n\nArchitecture | Instruction set extensions\n-------------|-----------------------------------------------------\nx86          | SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, AVX2, FMA3+SSE, FMA3+AVX, FMA3+AVX2\nx86          | AVX512BW, AVX512CD, AVX512DQ, AVX512F (gcc7 and higher)\nx86 AMD      | FMA4\nARM          | NEON, NEON64, SVE128/256/512 (fixed vector size)\nWebAssembly  | WASM\npowerpc64    | VSX\nRISC-V       | RISC-V128/256/512 (fixed vector size)\n\n## Installation\n\n### Install from conda-forge\n\nA package for xsimd is available on the mamba (or conda) package manager.\n\n```bash\nmamba install -c conda-forge xsimd\n```\n\n### Install with Spack\n\nA package for xsimd is available on the Spack package manager.\n\n```bash\nspack install xsimd\nspack load xsimd\n```\n\n### Install from sources\n\nYou can directly install it from the sources with cmake:\n\n```bash\ncmake -D CMAKE_INSTALL_PREFIX=your_install_prefix .\nmake install\n```\n\n## Documentation\n\nTo get started with using `xsimd`, check out the full documentation\n\nhttp://xsimd.readthedocs.io/\n\n## Dependencies\n\n`xsimd` has an optional dependency on the [xtl](https://github.com/xtensor-stack/xtl) library:\n\n| `xsimd` | `xtl` (optional) |\n|---------|------------------|\n|  master |     ^0.7.0       |\n|  12.x   |     ^0.7.0       |\n|  11.x   |     ^0.7.0       |\n|  10.x   |     ^0.7.0       |\n|   9.x   |     ^0.7.0       |\n|   8.x   |     ^0.7.0       |\n\nThe dependency on `xtl` is required if you want to support vectorization for `xtl::xcomplex`. 
In this case, you must build your project with C++14 support enabled.\n\n## Usage\n\nVersion 8 of the library is a complete rewrite and there are some slight differences from the 7.x versions.\nA migration guide will be available soon. In the meantime, the following examples show how to use both versions\n7 and 8 of the library.\n\n### Explicit use of an instruction set extension\n\nHere is an example that computes the mean of two sets of 4 double-precision floating-point values, assuming the AVX2 extension is supported:\n```cpp\n#include \u003ciostream\u003e\n#include \"xsimd/xsimd.hpp\"\n\nnamespace xs = xsimd;\n\nint main(int argc, char* argv[])\n{\n    xs::batch\u003cdouble, xs::avx2\u003e a = {1.5, 2.5, 3.5, 4.5};\n    xs::batch\u003cdouble, xs::avx2\u003e b = {2.5, 3.5, 4.5, 5.5};\n    auto mean = (a + b) / 2;\n    std::cout \u003c\u003c mean \u003c\u003c std::endl;\n    return 0;\n}\n```\n\nDo not forget to enable the AVX2 extension when building the example. With gcc or clang, this is done with the `-mavx2` flag;\non MSVC you have to pass the `/arch:AVX2` option.\n\nThis example outputs:\n\n```\n(2.0, 3.0, 4.0, 5.0)\n```\n\n### Auto detection of the instruction set extension to be used\n\nThe same computation operating on vectors and using the most performant instruction set available:\n\n```cpp\n#include \u003ccstddef\u003e\n#include \u003cvector\u003e\n#include \"xsimd/xsimd.hpp\"\n\nnamespace xs = xsimd;\nusing vector_type = std::vector\u003cdouble, xsimd::aligned_allocator\u003cdouble\u003e\u003e;\n\nvoid mean(const vector_type\u0026 a, const vector_type\u0026 b, vector_type\u0026 res)\n{\n    std::size_t size = a.size();\n    constexpr std::size_t simd_size = xsimd::simd_type\u003cdouble\u003e::size;\n    std::size_t vec_size = size - size % simd_size;\n\n    for(std::size_t i = 0; i \u003c vec_size; i += simd_size)\n    {\n        auto ba = xs::load_aligned(\u0026a[i]);\n        auto bb = xs::load_aligned(\u0026b[i]);\n        auto bres = (ba + bb) / 2.;\n        
bres.store_aligned(\u0026res[i]);\n    }\n    for(std::size_t i = vec_size; i \u003c size; ++i)\n    {\n        res[i] = (a[i] + b[i]) / 2.;\n    }\n}\n```\n\n## Building and Running the Tests\n\nBuilding the tests requires [cmake](https://cmake.org).\n\n`cmake` is available as a package for most Linux distributions. It can also be installed with the `conda` package manager (even on Windows):\n\n```bash\nconda install -c conda-forge cmake\n```\n\nOnce `cmake` is installed, you can build and run the tests:\n\n```bash\nmkdir build\ncd build\ncmake ../ -DBUILD_TESTS=ON\nmake xtest\n```\n\nIn the context of continuous integration with Travis CI, tests are run in a `conda` environment, which can be activated with\n\n```bash\ncd test\nconda env create -f ./test-environment.yml\nsource activate test-xsimd\ncd ..\ncmake . -DBUILD_TESTS=ON\nmake xtest\n```\n\n## Building the HTML Documentation\n\nxsimd's documentation is built with three tools:\n\n - [doxygen](http://www.doxygen.org)\n - [sphinx](http://www.sphinx-doc.org)\n - [breathe](https://breathe.readthedocs.io)\n\nWhile doxygen must be installed separately, you can install breathe by typing\n\n```bash\npip install breathe\n```\n\nBreathe can also be installed with `conda`:\n\n```bash\nconda install -c conda-forge breathe\n```\n\nFinally, build the documentation with\n\n```bash\nmake html\n```\n\nfrom the `docs` subdirectory.\n\n## License\n\nWe use a shared copyright model that enables all contributors to maintain the\ncopyright on their contributions.\n\nThis software is licensed under the BSD-3-Clause license. 
See the [LICENSE](LICENSE) file for details.\n","funding_links":[],"categories":["Maths","C++","CPU_RISC-V"],"sub_categories":["Resource transfer and download"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fxtensor-stack%2Fxsimd","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fxtensor-stack%2Fxsimd","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fxtensor-stack%2Fxsimd/lists"}