{"id":19603910,"url":"https://github.com/divelab/atta","last_synced_at":"2025-07-21T06:35:20.642Z","repository":{"id":218570765,"uuid":"746790677","full_name":"divelab/ATTA","owner":"divelab","description":"Active Test-Time Adaptation: Theoretical Analyses and An Algorithm [ICLR 2024]","archived":false,"fork":false,"pushed_at":"2024-11-04T16:57:50.000Z","size":82614,"stargazers_count":21,"open_issues_count":0,"forks_count":3,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-07-14T05:28:30.816Z","etag":null,"topics":["active-learning","active-test-time-adaptation","domain-adaptation","test-time-adaptation"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/divelab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-01-22T17:20:51.000Z","updated_at":"2025-06-27T02:14:24.000Z","dependencies_parsed_at":"2024-01-22T21:35:42.612Z","dependency_job_id":"9c17bc2b-d6b5-4abe-a1ad-2bfb81fcb9e3","html_url":"https://github.com/divelab/ATTA","commit_stats":null,"previous_names":["divelab/atta"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/divelab/ATTA","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/divelab%2FATTA","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/divelab%2FATTA/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/divelab%2FATTA/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/divelab%2FATTA/manifests","ow
ner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/divelab","download_url":"https://codeload.github.com/divelab/ATTA/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/divelab%2FATTA/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":266253821,"owners_count":23900056,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["active-learning","active-test-time-adaptation","domain-adaptation","test-time-adaptation"],"created_at":"2024-11-11T09:33:39.156Z","updated_at":"2025-07-21T06:35:20.619Z","avatar_url":"https://github.com/divelab.png","language":"Python","readme":"# Active Test-Time Adaptation: Theoretical Analyses and An Algorithm\n\n[![arXiv](https://img.shields.io/badge/arXiv-2404.05094-b31b1b.svg)](https://arxiv.org/abs/2404.05094)\n[![Static Badge](https://img.shields.io/badge/ICLR-2024-orange)](https://openreview.net/forum?id=YHUGlwTzFB)\n[![License][license-image]][license-url]\n\n[license-url]: https://github.com/divelab/ATTA/blob/main/LICENSE\n[license-image]:https://img.shields.io/badge/license-GPL3.0-green.svg\n\n\nThis is the official implementation of the ICLR 2024 accepted paper: Active Test-Time Adaptation: Theoretical Analyses and An Algorithm.\n\n## News\n- Code released [Mar 12th, 2024]\n\n## Table of Contents\n- [Introduction](#introduction)\n- [Installation](#installation)\n- [Run SimTTA](#run-simtta)\n- [Locked Environments for references](#locked-environments-for-references)\n- [Cite](#cite)\n\n## Introduction\n\nTest-time adaptation (TTA) addresses distribution 
shifts for streaming test data in unsupervised settings. Currently, most TTA methods can only deal with minor shifts and rely heavily on heuristics and empirical studies. \nTo advance TTA under domain shifts, we propose the novel problem setting of active test-time adaptation (ATTA), which integrates active learning within the fully TTA setting.\nWe provide a learning theory analysis demonstrating that incorporating limited labeled test instances enhances overall performance across test domains with a theoretical guarantee. We also present a sample entropy balancing strategy for implementing ATTA while avoiding catastrophic forgetting (CF). \nWe introduce a simple yet effective ATTA algorithm, known as SimATTA, using real-time sample selection techniques. \nExtensive experimental results confirm consistency with our theoretical analyses and show that the proposed ATTA method yields substantial performance improvements over TTA methods while maintaining efficiency, and achieves effectiveness similar to that of the more demanding active domain adaptation (ADA) methods.\n\n![Framework](/docs/imgs/ATTA.png)\n\n## Installation\n\n- Ubuntu 20.04\n- Python 3.10\n- PyTorch 1.10 or 2.1\n- scikit-learn=1.2.2\n- others\n\n### An installation example is provided below:\n\n```shell\nconda create -n atta python=3.10\nconda activate atta\nconda install -y pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=11.8 -c pytorch -c nvidia\nconda install -y -c conda-forge tqdm pandas tensorboard matplotlib scikit-learn=1.2.2\npip install cilog psutil pynvml munch wilds gdown typed-argument-parser ruamel.yaml\n```\nTo run the code in PyTorch **1.10**,\nplease remove all `@torch.compile` decorators.\n\n### Install this project as a package\n\n```shell\npip install -e .\n```\nAfter installing this project as a package, you may replace `python -m ATTA.kernel.alg_main` with `attatg` in the following commands.\n\n## Run SimTTA\n\n```shell\npython -m ATTA.kernel.alg_main --task train --config_path 
TTA_configs/PACS/SimATTA.yaml --atta.SimATTA.cold_start 100 --atta.SimATTA.nc_increase 1 --gpu_idx 0 --exp_round 1 [--atta.gpu_clustering]\npython -m ATTA.kernel.alg_main --task train --config_path TTA_configs/VLCS/SimATTA.yaml --atta.SimATTA.cold_start 100 --atta.SimATTA.nc_increase 1 --gpu_idx 0 --exp_round 1 [--atta.gpu_clustering]\npython -m ATTA.kernel.alg_main --task train --config_path TTA_configs/OfficeHome/SimATTA.yaml --atta.SimATTA.cold_start 100 --atta.SimATTA.nc_increase 1 --gpu_idx 1 --exp_round 1 [--atta.gpu_clustering]\npython -m ATTA.kernel.alg_main --task train --config_path TTA_configs/TinyImageNetC/SimATTA.yaml --atta.SimATTA.cold_start 100 --atta.SimATTA.el [1e-1, 1e-2] --atta.SimATTA.nc_increase [1, 1.1, 1.2] --gpu_idx 0 --exp_round 1 --atta.gpu_clustering\n```\n\n- For GPU K-Means, add `--atta.gpu_clustering` to the above commands. In this implementation, we use PyTorch to perform batched K-Means on the GPU, but the JAX library is also recommended for GPU K-Means. Although a JAX implementation is generally much faster than a PyTorch one, the `ott` library's K-Means implementation is not as efficient as the PyTorch implementation provided here. 
Therefore, to use JAX's K-Means, you need to implement a more efficient K-Means algorithm yourself (e.g., by porting the PyTorch implementation to JAX).\n- `atta.SimATTA.cold_start` is the number of labeled samples for which we maintain the constraint $\\alpha\\ge 0.2$ to avoid training corruption.\n- `atta.SimATTA.nc_increase` is the number of clusters added at each iteration.\n- `atta.SimATTA.el` is the bound $\\epsilon_l$ for low-entropy sample selection.\n- `atta.SimATTA.eh` is the bound $\\epsilon_h$ for high-entropy sample selection.\n- `atta.SimATTA.gpu_idx` is the GPU index to use.\n- `atta.SimATTA.target_cluster [0, 1]` is a flag that determines whether to use the incremental clustering selection strategy.\n- `atta.SimATTA.LE [0, 1]` is a flag that determines whether to use the low-entropy sample selection strategy.\n\nPre-trained model checkpoints for PACS and VLCS are provided in `\u003cproject_root\u003e/storage`.\n\n## Locked Environments for references\nRequirements are provided in `environment_PyTorch110_locked.yml` and `environment_PyTorch21_locked.yml`.\n\n- `environment_PyTorch21_locked.yml`: PyTorch 2.1 environment.\n- `environment_PyTorch110_locked.yml`: PyTorch 1.10 environment. 
\n\n## Cite\nIf you find this repo useful, please consider citing our paper:\n```bibtex\n@inproceedings{\ngui2024atta,\ntitle={Active Test-Time Adaptation: Theoretical Analyses and An Algorithm},\nauthor={Shurui Gui and Xiner Li and Shuiwang Ji},\nbooktitle={The Twelfth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=YHUGlwTzFB}\n}\n```\n\n## Acknowledgements\n\nThis work was supported in part by National Science Foundation grant IIS-2006861 and National Institutes of Health grant U01AG070112.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdivelab%2Fatta","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdivelab%2Fatta","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdivelab%2Fatta/lists"}