{"id":13474604,"url":"https://github.com/intel/MLSL","last_synced_at":"2025-03-26T21:32:00.280Z","repository":{"id":143102147,"uuid":"76486441","full_name":"intel/MLSL","owner":"intel","description":"Intel(R) Machine Learning Scaling Library is a library providing an efficient implementation of communication patterns used in deep learning.","archived":true,"fork":false,"pushed_at":"2023-01-07T00:08:17.000Z","size":16588,"stargazers_count":109,"open_issues_count":3,"forks_count":34,"subscribers_count":23,"default_branch":"master","last_synced_at":"2024-10-30T07:48:30.068Z","etag":null,"topics":["artificial-intelligence","deep-learning","distributed","intel","machine-learning","mlsl","mpi"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/intel.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2016-12-14T18:34:08.000Z","updated_at":"2024-06-14T16:29:06.000Z","dependencies_parsed_at":null,"dependency_job_id":"5a6d9964-6fdb-4d2f-8f81-e4ef8fc529f8","html_url":"https://github.com/intel/MLSL","commit_stats":null,"previous_names":[],"tags_count":8,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2FMLSL","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2FMLSL/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2FMLSL/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2FMLSL/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/intel","download_url":"https://co
deload.github.com/intel/MLSL/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":245738778,"owners_count":20664341,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["artificial-intelligence","deep-learning","distributed","intel","machine-learning","mlsl","mpi"],"created_at":"2024-07-31T16:01:13.509Z","updated_at":"2025-03-26T21:31:55.257Z","avatar_url":"https://github.com/intel.png","language":"C++","readme":"DISCONTINUATION OF PROJECT\n\nThis project will no longer be maintained by Intel.\n\nIntel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.  \n\nIntel no longer accepts patches to this project.\n\nIf you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.  \n\nContact: webadmin@linux.intel.com\n# Intel(R) Machine Learning Scaling Library for Linux* OS\nIntel® MLSL is no longer supported, no new releases are available. 
Please switch to the new API introduced in [Intel® oneAPI Collective Communications Library (oneCCL)](https://github.com/intel/oneccl).\n## Introduction ##\nIntel(R) Machine Learning Scaling Library (Intel(R) MLSL) is a library providing\nan efficient implementation of communication patterns used in deep learning.\n\n    - Built on top of MPI; allows for use of other communication libraries\n    - Optimized to drive scalability of communication patterns\n    - Works across various interconnects: Intel(R) Omni-Path Architecture,\n      InfiniBand*, and Ethernet\n    - Common API to support Deep Learning frameworks (Caffe*, Theano*,\n      Torch*, etc.)\n\nThe Intel(R) MLSL package comprises the Intel MLSL Software Development Kit (SDK)\nand the Intel(R) MPI Library Runtime components.\n## SOFTWARE SYSTEM REQUIREMENTS ##\nThis section describes the required software.\n\nOperating Systems:\n\n    - Red Hat* Enterprise Linux* 6 or 7\n    - SuSE* Linux* Enterprise Server 12\n    - Ubuntu* 16\n\nCompilers:\n\n    - GNU*: C, C++ 4.4.0 or higher\n    - Intel(R) C++ Compiler for Linux* OS 16.0 through 17.0 or higher\n\nVirtual Environments:\n\n    - Docker*\n    - KVM*\n## Installing Intel(R) Machine Learning Scaling Library ##\nInstalling Intel(R) MLSL by building from source:\n\n        $ make all\n        $ [MLSL_INSTALL_PATH=/path] make install\n\n    By default, MLSL_INSTALL_PATH=$PWD/_install.\n\nBinary releases are available on our [release page](https://github.com/intel/MLSL/releases).\n\nInstalling Intel(R) MLSL using RPM Package Manager (root mode):\n\n    1. Log in as root\n\n    2. Install the package:\n\n        $ rpm -i intel-mlsl-devel-64-\u003cversion\u003e.\u003cupdate\u003e-\u003cpackage#\u003e.x86_64.rpm\n\n        where \u003cversion\u003e.\u003cupdate\u003e-\u003cpackage#\u003e is a string, such as: 2017.0-009\n\n
Uninstalling Intel(R) MLSL using the RPM Package Manager:\n\n        $ rpm -e intel-mlsl-devel-64-\u003cversion\u003e.\u003cupdate\u003e-\u003cpackage#\u003e.x86_64\n\nInstalling Intel(R) MLSL using the tar file (user mode):\n\n        $ tar zxf l_mlsl-devel-64-\u003cversion\u003e.\u003cupdate\u003e.\u003cpackage#\u003e.tgz\n        $ cd l_mlsl_\u003cversion\u003e.\u003cupdate\u003e.\u003cpackage#\u003e\n        $ ./install.sh\n\n    There is no uninstall script. To uninstall Intel(R) MLSL, delete the\n    full directory into which you installed the package.\n\n## Launching Sample Application ##\n\nThe sample application needs Python with the NumPy package installed.\nYou can use the [Intel Distribution for Python](https://software.intel.com/en-us/distribution-for-python),\n[Anaconda](https://conda.io/docs/user-guide/install/download.html),\nor the Python and NumPy that come with your OS.\nBefore you start using Intel(R) MLSL, make sure to set up the library environment.\n\nUse the following commands:\n\n    $ source \u003cinstall_dir\u003e/intel64/bin/mlslvars.sh\n    $ cd \u003cinstall_dir\u003e/test\n    $ make run\n\nHere \u003cinstall_dir\u003e is the Intel MLSL installation directory.\nIf the test fails, look in the log files in the same directory.\n\n## Migration to oneCCL ##\n\nIntel® MLSL is no longer supported; no new releases are available. Please switch to the new API introduced in [Intel® oneAPI Collective Communications Library (oneCCL)](https://github.com/intel/oneccl).\nThe following examples can help you get started with oneCCL:\n\n```\n$ cd ./mlsl_to_ccl\n$ . ${MLSL_ROOT}/intel64/bin/mlslvars.sh\n$ . 
${CCL_ROOT}/env/vars.sh\n$ make run -f Makefile\n```\n\nIf you used MLSL before, here is an example that demonstrates the key differences between libraries' APIs.\n\n```diff\n#include \u003ciostream\u003e\n#include \u003cstdio.h\u003e\n- #include \"mlsl.hpp\"\n+ #include \"ccl.hpp\"\n\n- using namespace MLSL;\n+ using namespace ccl;\n\n#define COUNT 128\n \nint main(int argc, char** argv)\n{\n    int i, size, rank;\n \n    auto sendbuf = new float[COUNT];\n    auto recvbuf = new float[COUNT];\n \n-    Environment::GetEnv().Init(\u0026argc, \u0026argv);\n-    rank = Environment::GetEnv().GetProcessIdx();\n-    size = Environment::GetEnv().GetProcessCount();     \n-    auto dist = Environment::GetEnv().CreateDistribution(size, 1);\n+    auto stream = environment::instance().create_stream();\n+    auto comm = environment::instance().create_communicator();\n+    rank = comm-\u003erank();\n+    size = comm-\u003esize();\n \n    /* initialize sendbuf */\n    for (i = 0; i \u003c COUNT; i++)\n        sendbuf[i] = rank;\n \n    /* invoke allreduce */\n-    auto req = dist-\u003eAllReduce(sendbuf, recvbuf, COUNT,                      \n-                               DT_FLOAT, RT_SUM, GT_GLOBAL);\n-    Environment::GetEnv().Wait(req);\n+    comm-\u003eallreduce(sendbuf, recvbuf, COUNT,\n+                    reduction::sum,\n+                    nullptr /* coll_attr */,\n+                    stream)-\u003ewait(); \n    /* check correctness of recvbuf */\n    float expected = (size - 1) * ((float)size / 2);\n    for (i = 0; i \u003c COUNT; i++)\n    {\n        if (recvbuf[i] != expected)\n        {\n            std::cout \u003c\u003c \"idx \" \u003c\u003c i\n                      \u003c\u003c \": got \" \u003c\u003c recvbuf[i]\n                      \u003c\u003c \" but expected \" \u003c\u003c expected\n                      \u003c\u003c std::endl;\n            break;\n        }\n    }\n \n    if (i == COUNT \u0026\u0026 rank == 0)\n        std::cout \u003c\u003c \"PASSED\" 
\u003c\u003c std::endl;\n \n-    Environment::GetEnv().DeleteDistribution(dist);\n-    Environment::GetEnv().Finalize();\n \n    delete[] sendbuf;\n    delete[] recvbuf;\n \n    return 0;\n}\n```\n\n\n## License ##\nIntel MLSL is licensed under [Apache License Version 2.0](https://github.com/01org/MLSL/blob/master/LICENSE).\n## Optimization Notice ##\nIntel's compilers may or may not optimize to the same degree for non-Intel\nmicroprocessors for optimizations that are not unique to Intel microprocessors.\nThese optimizations include SSE2, SSE3, and SSSE3 instruction sets and other\noptimizations. Intel does not guarantee the availability, functionality, or\neffectiveness of any optimization on microprocessors not manufactured by Intel.\nMicroprocessor-dependent optimizations in this product are intended for use \nwith Intel microprocessors. Certain optimizations not specific to Intel \nmicroarchitecture are reserved for Intel microprocessors. Please refer to the \napplicable product User and Reference Guides for more information regarding the\nspecific instruction sets covered by this notice.\n\nNotice revision #20110804\n\n*Other names and brands may be claimed as the property of others.\n\n","funding_links":[],"categories":["C++"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fintel%2FMLSL","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fintel%2FMLSL","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fintel%2FMLSL/lists"}