{"id":20096640,"url":"https://github.com/hendrycks/pre-training","last_synced_at":"2025-05-06T05:31:50.206Z","repository":{"id":36233080,"uuid":"168016371","full_name":"hendrycks/pre-training","owner":"hendrycks","description":"Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019)","archived":false,"fork":false,"pushed_at":"2022-03-01T02:46:35.000Z","size":66922,"stargazers_count":100,"open_issues_count":3,"forks_count":18,"subscribers_count":6,"default_branch":"master","last_synced_at":"2025-04-09T08:51:17.499Z","etag":null,"topics":["adversarial-examples","calibration","data-corruption","ml-safety","out-of-distribution-detection","pretrained","robustness","uncertainty"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/hendrycks.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2019-01-28T18:47:49.000Z","updated_at":"2024-12-05T14:51:33.000Z","dependencies_parsed_at":"2022-08-08T13:46:45.456Z","dependency_job_id":null,"html_url":"https://github.com/hendrycks/pre-training","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hendrycks%2Fpre-training","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hendrycks%2Fpre-training/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hendrycks%2Fpre-training/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hendrycks%2Fpre-training/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/hendrycks","download_url":"https://codeload.github.com/hendrycks/pre-training/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252629089,"owners_count":21779139,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["adversarial-examples","calibration","data-corruption","ml-safety","out-of-distribution-detection","pretrained","robustness","uncertainty"],"created_at":"2024-11-13T16:59:27.828Z","updated_at":"2025-05-06T05:31:45.194Z","avatar_url":"https://github.com/hendrycks.png","language":"Python","readme":"# Using Pre-Training Can Improve Model Robustness and Uncertainty\n\nThis repository contains the essential code for the paper [_Using Pre-Training Can Improve Model Robustness and Uncertainty_](https://arxiv.org/abs/1901.09960), ICML 2019.\n\nRequires Python 3+ and PyTorch 0.4.1+.\n\n\u003cimg align=\"center\" src=\"table_adv.png\" width=\"600\"\u003e\n\n## Abstract\n\n[Kaiming He et al. (2018)](https://arxiv.org/abs/1811.08883) have called into question the utility of pre-training by showing that training from scratch can often yield similar performance, should the model train long enough. We show that although pre-training may not improve performance on traditional classification metrics, it does provide large benefits to model robustness and uncertainty. With pre-training, we show approximately a 30% relative improvement in label noise robustness and a _10% absolute improvement in adversarial robustness_ on CIFAR-10 and CIFAR-100. Pre-training also improves model calibration. In some cases, using pre-training without task-specific methods surpasses the state-of-the-art, highlighting the importance of using pre-training when evaluating future methods on robustness and uncertainty tasks.\n\n\n## Citation\n\nIf you find this useful in your research, please consider citing:\n\n    @article{hendrycks2019pretraining,\n      title={Using Pre-Training Can Improve Model Robustness and Uncertainty},\n      author={Hendrycks, Dan and Lee, Kimin and Mazeika, Mantas},\n      journal={Proceedings of the International Conference on Machine Learning},\n      year={2019}\n    }\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhendrycks%2Fpre-training","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhendrycks%2Fpre-training","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhendrycks%2Fpre-training/lists"}