{"id":21461170,"url":"https://github.com/dholzmueller/pytabkit","last_synced_at":"2025-05-16T11:06:23.408Z","repository":{"id":246844902,"uuid":"824118360","full_name":"dholzmueller/pytabkit","owner":"dholzmueller","description":"ML models + benchmark for tabular data classification and regression","archived":false,"fork":false,"pushed_at":"2025-03-31T09:22:21.000Z","size":1332,"stargazers_count":117,"open_issues_count":0,"forks_count":7,"subscribers_count":3,"default_branch":"main","last_synced_at":"2025-04-12T08:38:30.719Z","etag":null,"topics":["deep-learning","deep-neural-networks","deeplearning","machine-learning","tabular","tabular-data","tabular-data-package","tabular-methods","tabular-model"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/dholzmueller.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2024-07-04T11:56:43.000Z","updated_at":"2025-04-10T16:44:29.000Z","dependencies_parsed_at":null,"dependency_job_id":"185f5c1b-1c5b-4241-a8f9-7e86e60a6f6b","html_url":"https://github.com/dholzmueller/pytabkit","commit_stats":null,"previous_names":["dholzmueller/pytabkit"],"tags_count":9,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dholzmueller%2Fpytabkit","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dholzmueller%2Fpytabkit/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dholzmueller%2Fpytabkit/releases","manifests_u
rl":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dholzmueller%2Fpytabkit/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/dholzmueller","download_url":"https://codeload.github.com/dholzmueller/pytabkit/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254518383,"owners_count":22084374,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","deep-neural-networks","deeplearning","machine-learning","tabular","tabular-data","tabular-data-package","tabular-methods","tabular-model"],"created_at":"2024-11-23T07:07:48.909Z","updated_at":"2025-05-16T11:06:18.392Z","avatar_url":"https://github.com/dholzmueller.png","language":"Python","readme":"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/dholzmueller/pytabkit/blob/main/examples/tutorial_notebook.ipynb)\n[![](https://readthedocs.org/projects/pytabkit/badge/?version=latest\u0026style=flat-default)](https://pytabkit.readthedocs.io/en/latest/)\n[![test](https://github.com/dholzmueller/pytabkit/actions/workflows/testing.yml/badge.svg)](https://github.com/dholzmueller/pytabkit/actions/workflows/testing.yml)\n[![Downloads](https://img.shields.io/pypi/dm/pytabkit)](https://pypistats.org/packages/pytabkit)\n\n# PyTabKit: Tabular ML models and benchmarking (NeurIPS 2024)\n\n [Paper](https://arxiv.org/abs/2407.04491) | [Documentation](https://pytabkit.readthedocs.io) | [RealMLP-TD-S standalone 
implementation](https://github.com/dholzmueller/realmlp-td-s_standalone) | [Grinsztajn et al. benchmark code](https://github.com/LeoGrin/tabular-benchmark/tree/better_by_default) | [Data archive](https://doi.org/10.18419/darus-4555) |\n|-------------------------------------------|--------------------------------------------------|---------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|-----------------------------------------------------|\n\nPyTabKit provides **scikit-learn interfaces for modern tabular classification and regression methods**\nbenchmarked in our [paper](https://arxiv.org/abs/2407.04491), see below.\nIt also contains the code we used for **benchmarking** these methods\non our benchmarks.\n\n![Meta-test benchmark results](./figures/meta-test_benchmark_results.png)\n\n## Installation\n\n```bash\npip install pytabkit\n```\n\n- If you want to use **TabR**, you have to manually install\n  [faiss](https://github.com/facebookresearch/faiss/blob/main/INSTALL.md),\n  which is only available on **conda**.\n- Please install torch separately if you want to control the version (CPU/GPU etc.)\n- Use `pytabkit[autogluon,extra,hpo,bench,dev]` to install additional dependencies for\n  AutoGluon models, extra preprocessing,\n  hyperparameter optimization methods beyond random search (hyperopt/SMAC),\n  the benchmarking part, and testing/documentation. For the hpo part,\n  you might need to install *swig* (e.g. 
via pip) if the build of *pyrfr* fails.\n  See also the [documentation](https://pytabkit.readthedocs.io).\n  To run the data download for the meta-train benchmark, you need one of rar, unrar, or 7-zip\n  to be installed on the system.\n\n## Using the ML models\n\nMost of our machine learning models are directly available via scikit-learn interfaces.\nFor example, you can use RealMLP-TD for classification as follows:\n\n```python\nfrom pytabkit import RealMLP_TD_Classifier\n\nmodel = RealMLP_TD_Classifier()  # or TabR_S_D_Classifier, CatBoost_TD_Classifier, etc.\nmodel.fit(X_train, y_train)\nmodel.predict(X_test)\n```\n\nThe code above will automatically select a GPU if available,\ntry to detect categorical columns in dataframes,\npreprocess numerical variables and regression targets (no standardization required),\nand use a training-validation split for early stopping.\nAll of this (and much more) can be configured through the constructor\nand the parameters of the fit() method.\nFor example, it is possible to do bagging\n(ensembling of models on 5-fold cross-validation)\nsimply by passing `n_cv=5` to the constructor.\nHere is an example for some of the parameters that can be set explicitly:\n\n```python\nfrom pytabkit import RealMLP_TD_Classifier\n\nmodel = RealMLP_TD_Classifier(device='cpu', random_state=0, n_cv=1, n_refit=0,\n                              n_epochs=256, batch_size=256, hidden_sizes=[256] * 3,\n                              val_metric_name='cross_entropy',\n                              use_ls=False,  # for metrics like AUC / log-loss\n                              lr=0.04, verbosity=2)\nmodel.fit(X_train, y_train, X_val, y_val, cat_col_names=['Education'])\nmodel.predict_proba(X_test)\n```\n\nSee [this notebook](https://colab.research.google.com/github/dholzmueller/pytabkit/blob/main/examples/tutorial_notebook.ipynb)\nfor more examples. 
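Since missing numerical values need to be imputed before calling `fit()`, here is a minimal stdlib sketch of per-column median imputation. The helper `impute_median` and the row-of-lists layout are illustrative only and not part of the pytabkit API:

```python
from statistics import median

def impute_median(rows):
    """Replace None entries with the per-column median.
    Illustrative helper (not part of pytabkit); `rows` is a list of
    equal-length rows of numbers, with None marking missing values."""
    cols = list(zip(*rows))
    medians = [median(v for v in col if v is not None) for col in cols]
    return [[m if v is None else v for v, m in zip(row, medians)]
            for row in rows]

X_train = [[1.0, 7.0], [None, 5.0], [3.0, None]]
print(impute_median(X_train))  # [[1.0, 7.0], [2.0, 5.0], [3.0, 6.0]]
```

In practice, `sklearn.impute.SimpleImputer` does the same job for numpy arrays and dataframes.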
### Available ML models

Our ML models are available in up to three variants, all with best-epoch selection:

- library defaults (D)
- our tuned defaults (TD)
- random search hyperparameter optimization (HPO), sometimes also Tree-structured Parzen Estimator (HPO-TPE)

We provide the following ML models:

- **RealMLP** (TD, HPO): our new neural net models with tuned defaults (TD)
  or random search hyperparameter optimization (HPO)
- **XGB**, **LGBM**, **CatBoost** (D, TD, HPO, HPO-TPE): interfaces for the gradient-boosted
  tree libraries XGBoost, LightGBM, and CatBoost
- **MLP**, **ResNet**, **FTT** (D, HPO): models
  from [Revisiting Deep Learning Models for Tabular Data](https://proceedings.neurips.cc/paper_files/paper/2021/hash/9d86d83f925f2149e9edb0ac3b49229c-Abstract.html)
- **MLP-PLR** (D, HPO): MLP with numerical embeddings
  from [On Embeddings for Numerical Features in Tabular Deep Learning](https://proceedings.neurips.cc/paper_files/paper/2022/hash/9e9f0ffc3d836836ca96cbf8fe14b105-Abstract-Conference.html)
- **TabR** (D, HPO): TabR model
  from [TabR: Tabular Deep Learning Meets Nearest Neighbors](https://openreview.net/forum?id=rhgIgTSSxW)
- **TabM** (D): TabM model
  from [TabM: Advancing Tabular Deep Learning with Parameter-Efficient Ensembling](https://arxiv.org/abs/2410.24210)
- **RealTabR** (D): our new TabR variant with default parameters
- **Ensemble-TD**: weighted ensemble of all TD models (RealMLP, XGB, LGBM, CatBoost)

## Post-hoc calibration and refinement stopping

To use post-hoc temperature scaling and refinement stopping from our
paper [Rethinking Early Stopping: Refine, Then Calibrate](https://arxiv.org/abs/2501.19195),
you can pass the following parameters to the scikit-learn interfaces:

```python
from pytabkit import RealMLP_TD_Classifier

clf = RealMLP_TD_Classifier(
    val_metric_name='ref-ll-ts',  # short for 'refinement_logloss_ts-mix_all'
    calibration_method='ts-mix',  # temperature scaling with Laplace smoothing
    use_ls=False,  # recommended for cross-entropy loss
)
```

Other calibration methods and validation metrics
from [probmetrics](https://github.com/dholzmueller/probmetrics)
can be used as well.

For reproducing the results from this paper, we refer to the
[documentation](https://pytabkit.readthedocs.io/en/latest/bench/refine_then_calibrate.html).

## Benchmarking code

Our benchmarking code provides functionality for

- downloading datasets
- running methods in parallel on single-node/multi-node/multi-GPU hardware,
  with automatic scheduling that tries to respect RAM constraints
- analyzing and plotting results

For more details, we refer to the [documentation](https://pytabkit.readthedocs.io).

## Preprocessing code

While many preprocessing methods are implemented in this repository,
a standalone version of our robust scaling + smooth clipping
can be found [here](https://github.com/dholzmueller/realmlp-td-s_standalone/blob/main/preprocessing.py#L65C7-L65C37).
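As a rough illustration of this kind of preprocessing, here is a stdlib sketch: center each column at its median, scale by the interquartile range, and smoothly clip so that outliers are softly bounded instead of cut off. The function name, the IQR-based scale, and the clipping constant 3 are assumptions for this sketch; the exact transform used by pytabkit is in the linked `preprocessing.py`:

```python
import math
from statistics import median, quantiles

def robust_scale_smooth_clip(col):
    """Sketch of robust scaling + smooth clipping for one numeric column.
    Illustrative only; the exact transform in pytabkit differs in details."""
    med = median(col)
    q1, _, q3 = quantiles(col, n=4)      # quartiles for a robust scale estimate
    scale = (q3 - q1) or 1.0             # avoid division by zero for constant columns
    scaled = [(v - med) / scale for v in col]
    # smooth clipping: approximately the identity near 0, bounded by +-3
    return [v / math.sqrt(1.0 + (v / 3.0) ** 2) for v in scaled]

out = robust_scale_smooth_clip([0.0, 1.0, 2.0, 3.0, 1000.0])
print(all(abs(v) < 3.0 for v in out))  # True: the outlier is softly capped
```

Unlike a hard clip, this transform stays differentiable and preserves the ordering of values, which is friendlier to neural-net training.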
## Citation

If you use this repository for research purposes, please cite our [paper](https://arxiv.org/abs/2407.04491):

```
@inproceedings{holzmuller2024better,
  title={Better by default: {S}trong pre-tuned {MLPs} and boosted trees on tabular data},
  author={Holzm{\"u}ller, David and Grinsztajn, Leo and Steinwart, Ingo},
  booktitle = {Neural {Information} {Processing} {Systems}},
  year={2024}
}
```

## Contributors

- David Holzmüller (main developer)
- Léo Grinsztajn (deep learning baselines, plotting)
- Ingo Steinwart (UCI dataset download)
- Katharina Strecker (PyTorch-Lightning interface)
- Lennart Purucker (some features/fixes)
- Jérôme Dockès (deployment, continuous integration)

## Acknowledgements

Code from other repositories is acknowledged as well as possible in code comments.
In particular, we used code from https://github.com/yandex-research/rtdl
and its sub-packages (Apache 2.0 license),
from https://github.com/catboost/benchmarks/ (Apache 2.0 license),
and from https://docs.ray.io/en/latest/cluster/vms/user-guides/community/slurm.html
(Apache 2.0 license).

## Releases (see git tags)

- v1.3.0:
  - Added multiquantile regression for RealMLP;
    see the [documentation](https://pytabkit.readthedocs.io/en/latest/models/quantile_reg.html)
  - More hyperparameters for RealMLP
  - Added a [TabICL](https://github.com/soda-inria/tabicl) wrapper
  - Small fixes
- v1.2.1: Avoid an error for older skorch versions.
- v1.2.0:
  - Included post-hoc calibration and more metrics through
    [probmetrics](https://github.com/dholzmueller/probmetrics).
  - Added benchmarking code for [Rethinking Early Stopping: Refine, Then Calibrate](https://arxiv.org/abs/2501.19195).
  - Updated the format for saving predictions and allowed stopping on multiple metrics
    during the same training run in the benchmark.
  - Better categorical handling: avoids an error for string and object columns,
    and boolean columns are now treated as categorical by default instead of being ignored.
  - Added `Ensemble_HPO_Classifier` and `Ensemble_HPO_Regressor`.
- v1.1.3:
  - Fixed a bug where the categorical encoding was incorrect if categories
    were missing in the training or validation set. The bug affected XGBoost
    and potentially many other models except RealMLP.
  - Scikit-learn interfaces now accept and auto-detect categorical datatypes
    (category, string, object) in dataframes.
- v1.1.2:
  - Some compatibility improvements for scikit-learn 1.6
    (but 1.6 is disabled for now since skorch is not compatible with it).
  - Improved documentation for the PyTorch Lightning interface.
  - Other small bugfixes and improvements.
- v1.1.1:
  - Added the parameters `weight_decay`, `tfms`,
    and `gradient_clipping_norm` to TabM.
    The updated default parameters now apply the RTDL quantile transform.
- v1.1.0:
  - Included TabM.
  - Replaced `__` by `_` in parameter names for MLP, MLP-PLR, ResNet, and FTT
    to comply with scikit-learn interface requirements.
  - Fixed non-determinism in NN baselines
    by initializing the random state of the quantile (and KDI)
    preprocessing transforms.
  - The `n_threads` parameter is no longer ignored by NNs.
  - Changes by [Lennart Purucker](https://github.com/LennartPurucker):
    added a time limit for RealMLP,
    added support for `lightning` (while still allowing `pytorch-lightning`),
    made skorch a lazy import, and removed the msgpack_numpy dependency.
- v1.0.0: Release for the NeurIPS version and arXiv v2.
  - More baselines (MLP-PLR, FT-Transformer, TabR-HPO, RF-HPO),
    plus some unpolished internal interfaces for other methods,
    especially the ones in AutoGluon.
  - Updated benchmarking code (configurations, plots),
    including the new version of the Grinsztajn et al. benchmark.
  - Updated `fit()` parameters in scikit-learn interfaces, etc.
- v0.0.1: First release for arXiv v1.
  Code and data are archived at [DaRUS](https://doi.org/10.18419/darus-4255).