{"id":13357412,"url":"https://github.com/microsoft/LightGBM","last_synced_at":"2025-03-12T11:31:01.624Z","repository":{"id":37415502,"uuid":"64991887","full_name":"microsoft/LightGBM","owner":"microsoft","description":"A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.","archived":false,"fork":false,"pushed_at":"2024-10-29T11:40:32.000Z","size":23018,"stargazers_count":16654,"open_issues_count":373,"forks_count":3832,"subscribers_count":434,"default_branch":"master","last_synced_at":"2024-10-29T11:46:43.338Z","etag":null,"topics":["data-mining","decision-trees","distributed","gbdt","gbm","gbrt","gradient-boosting","kaggle","lightgbm","machine-learning","microsoft","parallel","python","r"],"latest_commit_sha":null,"homepage":"https://lightgbm.readthedocs.io/en/latest/","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/microsoft.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2016-08-05T05:45:50.000Z","updated_at":"2024-10-29T10:47:37.000Z","dependencies_parsed_at":"2024-03-06T07:26:27.575Z","dependency_job_id":"d0da7559-74f0-4c2f-9957-76b5f61ba03e","html_url":"https://github.com/microsoft/LightGBM","commit_stats":{"total_commits":3435,"total_committers":318,"mean_commits":10.80188679245283,"dds":0.770014556040757,"last_synced_commit":"e0cda880fc74ca6d1b7d6cb425a24e3a69764bb1"},"previous_names":[],"tags_count":39,"t
emplate":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2FLightGBM","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2FLightGBM/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2FLightGBM/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2FLightGBM/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/microsoft","download_url":"https://codeload.github.com/microsoft/LightGBM/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":242932081,"owners_count":20208751,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["data-mining","decision-trees","distributed","gbdt","gbm","gbrt","gradient-boosting","kaggle","lightgbm","machine-learning","microsoft","parallel","python","r"],"created_at":"2024-07-29T21:03:06.907Z","updated_at":"2025-03-12T11:31:01.615Z","avatar_url":"https://github.com/microsoft.png","language":"C++","readme":"\u003cimg src=https://github.com/microsoft/LightGBM/blob/master/docs/logo/LightGBM_logo_black_text.svg width=300 /\u003e\n\nLight Gradient Boosting Machine\n===============================\n\n[![Python-package GitHub Actions Build Status](https://github.com/microsoft/LightGBM/actions/workflows/python_package.yml/badge.svg?branch=master)](https://github.com/microsoft/LightGBM/actions/workflows/python_package.yml)\n[![R-package GitHub Actions Build 
Status](https://github.com/microsoft/LightGBM/actions/workflows/r_package.yml/badge.svg?branch=master)](https://github.com/microsoft/LightGBM/actions/workflows/r_package.yml)\n[![CUDA Version GitHub Actions Build Status](https://github.com/microsoft/LightGBM/actions/workflows/cuda.yml/badge.svg?branch=master)](https://github.com/microsoft/LightGBM/actions/workflows/cuda.yml)\n[![Static Analysis GitHub Actions Build Status](https://github.com/microsoft/LightGBM/actions/workflows/static_analysis.yml/badge.svg?branch=master)](https://github.com/microsoft/LightGBM/actions/workflows/static_analysis.yml)\n[![Azure Pipelines Build Status](https://lightgbm-ci.visualstudio.com/lightgbm-ci/_apis/build/status/Microsoft.LightGBM?branchName=master)](https://lightgbm-ci.visualstudio.com/lightgbm-ci/_build/latest?definitionId=1)\n[![Appveyor Build Status](https://ci.appveyor.com/api/projects/status/1ys5ot401m0fep6l/branch/master?svg=true)](https://ci.appveyor.com/project/guolinke/lightgbm/branch/master)\n[![Documentation Status](https://readthedocs.org/projects/lightgbm/badge/?version=latest)](https://lightgbm.readthedocs.io/)\n[![Link checks](https://github.com/microsoft/LightGBM/actions/workflows/linkchecker.yml/badge.svg?branch=master)](https://github.com/microsoft/LightGBM/actions/workflows/linkchecker.yml)\n[![License](https://img.shields.io/github/license/microsoft/lightgbm.svg)](https://github.com/microsoft/LightGBM/blob/master/LICENSE)\n[![Python Versions](https://img.shields.io/pypi/pyversions/lightgbm.svg?logo=python\u0026logoColor=white)](https://pypi.org/project/lightgbm)\n[![PyPI Version](https://img.shields.io/pypi/v/lightgbm.svg?logo=pypi\u0026logoColor=white)](https://pypi.org/project/lightgbm)\n[![conda Version](https://img.shields.io/conda/vn/conda-forge/lightgbm?logo=conda-forge\u0026logoColor=white\u0026label=conda)](https://anaconda.org/conda-forge/lightgbm)\n[![CRAN 
Version](https://www.r-pkg.org/badges/version/lightgbm)](https://cran.r-project.org/package=lightgbm)\n[![NuGet Version](https://img.shields.io/nuget/v/lightgbm?logo=nuget\u0026logoColor=white)](https://www.nuget.org/packages/LightGBM)\n\nLightGBM is a gradient boosting framework that uses tree-based learning algorithms. It is designed to be distributed and efficient, with the following advantages:\n\n- Faster training speed and higher efficiency.\n- Lower memory usage.\n- Better accuracy.\n- Support of parallel, distributed, and GPU learning.\n- Capable of handling large-scale data.\n\nFor further details, please refer to [Features](https://github.com/microsoft/LightGBM/blob/master/docs/Features.rst).\n\nBenefiting from these advantages, LightGBM is widely used in many [winning solutions](https://github.com/microsoft/LightGBM/blob/master/examples/README.md#machine-learning-challenge-winning-solutions) of machine learning competitions.\n\n[Comparison experiments](https://github.com/microsoft/LightGBM/blob/master/docs/Experiments.rst#comparison-experiment) on public datasets show that LightGBM can outperform existing boosting frameworks on both efficiency and accuracy, with significantly lower memory consumption. What's more, [distributed learning experiments](https://github.com/microsoft/LightGBM/blob/master/docs/Experiments.rst#parallel-experiment) show that LightGBM can achieve a linear speed-up by using multiple machines for training in specific settings.\n\nGet Started and Documentation\n-----------------------------\n\nOur primary documentation is at https://lightgbm.readthedocs.io/ and is generated from this repository. 
If you are new to LightGBM, follow [the installation instructions](https://lightgbm.readthedocs.io/en/latest/Installation-Guide.html) on that site.\n\nNext you may want to read:\n\n- [**Examples**](https://github.com/microsoft/LightGBM/tree/master/examples) showing command line usage of common tasks.\n- [**Features**](https://github.com/microsoft/LightGBM/blob/master/docs/Features.rst) and algorithms supported by LightGBM.\n- [**Parameters**](https://github.com/microsoft/LightGBM/blob/master/docs/Parameters.rst) is an exhaustive list of customizations you can make.\n- [**Distributed Learning**](https://github.com/microsoft/LightGBM/blob/master/docs/Parallel-Learning-Guide.rst) and [**GPU Learning**](https://github.com/microsoft/LightGBM/blob/master/docs/GPU-Tutorial.rst) can speed up computation.\n- [**FLAML**](https://www.microsoft.com/en-us/research/project/fast-and-lightweight-automl-for-large-scale-data/articles/flaml-a-fast-and-lightweight-automl-library/) provides automated tuning for LightGBM ([code examples](https://microsoft.github.io/FLAML/docs/Examples/AutoML-for-LightGBM/)).\n- [**Optuna Hyperparameter Tuner**](https://medium.com/optuna/lightgbm-tuner-new-optuna-integration-for-hyperparameter-optimization-8b7095e99258) provides automated tuning for LightGBM hyperparameters ([code examples](https://github.com/optuna/optuna-examples/blob/main/lightgbm/lightgbm_tuner_simple.py)).\n- [**Understanding LightGBM Parameters (and How to Tune Them using Neptune)**](https://neptune.ai/blog/lightgbm-parameters-guide).\n\nDocumentation for contributors:\n\n- [**How we update readthedocs.io**](https://github.com/microsoft/LightGBM/blob/master/docs/README.rst).\n- Check out the [**Development Guide**](https://github.com/microsoft/LightGBM/blob/master/docs/Development-Guide.rst).\n\nNews\n----\n\nPlease refer to the changelogs on the [GitHub releases](https://github.com/microsoft/LightGBM/releases) page.\n\nExternal (Unofficial) 
Repositories\n----------------------------------\n\nProjects listed here offer alternative ways to use LightGBM.\nThey are not maintained or officially endorsed by the `LightGBM` development team.\n\nJPMML (Java PMML converter): https://github.com/jpmml/jpmml-lightgbm\n\nNyoka (Python PMML converter): https://github.com/SoftwareAG/nyoka\n\nTreelite (model compiler for efficient deployment): https://github.com/dmlc/treelite\n\nlleaves (LLVM-based model compiler for efficient inference): https://github.com/siboehm/lleaves\n\nHummingbird (model compiler into tensor computations): https://github.com/microsoft/hummingbird\n\ncuML Forest Inference Library (GPU-accelerated inference): https://github.com/rapidsai/cuml\n\ndaal4py (Intel CPU-accelerated inference): https://github.com/intel/scikit-learn-intelex/tree/master/daal4py\n\nm2cgen (model appliers for various languages): https://github.com/BayesWitnesses/m2cgen\n\nleaves (Go model applier): https://github.com/dmitryikh/leaves\n\nONNXMLTools (ONNX converter): https://github.com/onnx/onnxmltools\n\nSHAP (model output explainer): https://github.com/slundberg/shap\n\nShapash (model visualization and interpretation): https://github.com/MAIF/shapash\n\ndtreeviz (decision tree visualization and model interpretation): https://github.com/parrt/dtreeviz\n\nsupertree (interactive visualization of decision trees): https://github.com/mljar/supertree\n\nSynapseML (LightGBM on Spark): https://github.com/microsoft/SynapseML\n\nKubeflow Fairing (LightGBM on Kubernetes): https://github.com/kubeflow/fairing\n\nKubeflow Operator (LightGBM on Kubernetes): https://github.com/kubeflow/xgboost-operator\n\nlightgbm_ray (LightGBM on Ray): https://github.com/ray-project/lightgbm_ray\n\nMars (LightGBM on Mars): https://github.com/mars-project/mars\n\nML.NET (.NET/C#-package): https://github.com/dotnet/machinelearning\n\nLightGBM.NET (.NET/C#-package): https://github.com/rca22/LightGBM.Net\n\nLightGBM Ruby (Ruby gem): 
https://github.com/ankane/lightgbm-ruby\n\nLightGBM4j (Java high-level binding): https://github.com/metarank/lightgbm4j\n\nLightGBM4J (JVM interface for LightGBM written in Scala): https://github.com/seek-oss/lightgbm4j\n\nJulia-package: https://github.com/IQVIA-ML/LightGBM.jl\n\nlightgbm3 (Rust binding): https://github.com/Mottl/lightgbm3-rs\n\nMLServer (inference server for LightGBM): https://github.com/SeldonIO/MLServer\n\nMLflow (experiment tracking, model monitoring framework): https://github.com/mlflow/mlflow\n\nFLAML (AutoML library for hyperparameter optimization): https://github.com/microsoft/FLAML\n\nMLJAR AutoML (AutoML on tabular data): https://github.com/mljar/mljar-supervised\n\nOptuna (hyperparameter optimization framework): https://github.com/optuna/optuna\n\nLightGBMLSS (probabilistic modelling with LightGBM): https://github.com/StatMixedML/LightGBMLSS\n\nmlforecast (time series forecasting with LightGBM): https://github.com/Nixtla/mlforecast\n\nskforecast (time series forecasting with LightGBM): https://github.com/JoaquinAmatRodrigo/skforecast\n\n`{bonsai}` (R `{parsnip}`-compliant interface): https://github.com/tidymodels/bonsai\n\n`{mlr3extralearners}` (R `{mlr3}`-compliant interface): https://github.com/mlr-org/mlr3extralearners\n\nlightgbm-transform (feature transformation binding): https://github.com/microsoft/lightgbm-transform\n\n`postgresml` (LightGBM training and prediction in SQL, via a Postgres extension): https://github.com/postgresml/postgresml\n\n`pyodide` (run `lightgbm` Python-package in a web browser): https://github.com/pyodide/pyodide\n\n`vaex-ml` (Python DataFrame library with its own interface to LightGBM): https://github.com/vaexio/vaex\n\nSupport\n-------\n\n- Ask a question [on Stack Overflow with the `lightgbm` tag](https://stackoverflow.com/questions/ask?tags=lightgbm), we monitor this for new questions.\n- Open **bug reports** and **feature requests** on [GitHub 
issues](https://github.com/microsoft/LightGBM/issues).\n\nHow to Contribute\n-----------------\n\nCheck the [CONTRIBUTING](https://github.com/microsoft/LightGBM/blob/master/CONTRIBUTING.md) page.\n\nMicrosoft Open Source Code of Conduct\n-------------------------------------\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.\n\nReference Papers\n----------------\n\nYu Shi, Guolin Ke, Zhuoming Chen, Shuxin Zheng, Tie-Yan Liu. \"Quantized Training of Gradient Boosting Decision Trees\" ([link](https://papers.nips.cc/paper_files/paper/2022/hash/77911ed9e6e864ca1a3d165b2c3cb258-Abstract.html)). Advances in Neural Information Processing Systems 35 (NeurIPS 2022), pp. 18822-18833.\n\nGuolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, Tie-Yan Liu. \"[LightGBM: A Highly Efficient Gradient Boosting Decision Tree](https://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision-tree)\". Advances in Neural Information Processing Systems 30 (NIPS 2017), pp. 3149-3157.\n\nQi Meng, Guolin Ke, Taifeng Wang, Wei Chen, Qiwei Ye, Zhi-Ming Ma, Tie-Yan Liu. \"[A Communication-Efficient Parallel Algorithm for Decision Tree](http://papers.nips.cc/paper/6380-a-communication-efficient-parallel-algorithm-for-decision-tree)\". Advances in Neural Information Processing Systems 29 (NIPS 2016), pp. 1279-1287.\n\nHuan Zhang, Si Si and Cho-Jui Hsieh. \"[GPU Acceleration for Large-scale Tree Boosting](https://arxiv.org/abs/1706.08359)\". SysML Conference, 2018.\n\nLicense\n-------\n\nThis project is licensed under the terms of the MIT license. 
See [LICENSE](https://github.com/microsoft/LightGBM/blob/master/LICENSE) for additional details.\n","funding_links":[],"categories":["C++","Machine Learning","梯度提升和树模型","Frameworks-for-Training","Frameworks for Training","Machine Learning Framework","Table of Contents","Recently Updated","Computation and Communication Optimisation","The Data Science Toolbox","Related Resources","Training","语言资源库","机器学习框架","Repos","Implementations","📚 فهرست","📋 Contents","🤖 Machine Learning \u0026 AI"],"sub_categories":["Monitoring","Popular-LLM","General Purpose Framework","[Oct 18, 2024](/content/2024/10/18/README.md)","General Machine Learning Packages","Vector search","Frameworks for Training","c++","LightGBM","یادگیری ماشین","🧬 1. Core Frameworks \u0026 Libraries","Tools"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmicrosoft%2FLightGBM","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmicrosoft%2FLightGBM","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmicrosoft%2FLightGBM/lists"}