Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/Xuchen-Li/Awesome-Visual-Object-Tracking
A visual object tracking paper list; articles related to visual object tracking are documented here.
List: Awesome-Visual-Object-Tracking
awesome-list computer-vision deep-learning visual-object-tracking
Last synced: 3 months ago
- Host: GitHub
- URL: https://github.com/Xuchen-Li/Awesome-Visual-Object-Tracking
- Owner: Xuchen-Li
- Created: 2023-12-15T13:02:41.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2024-08-01T01:31:34.000Z (3 months ago)
- Last Synced: 2024-08-01T04:24:50.406Z (3 months ago)
- Topics: awesome-list, computer-vision, deep-learning, visual-object-tracking
- Homepage:
- Size: 544 KB
- Stars: 13
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- ultimate-awesome - Awesome-Visual-Object-Tracking - A visual object tracking paper list, articles related to visual object tracking have been documented. (Other Lists / PowerShell Lists)
README
# Visual_Object_Tracking_Paper_List
## Papers
### CVPR 2024
- **ARTrackV2:** Yifan Bai, Zeyang Zhao, Yihong Gong, Xing Wei
"ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe" CVPR 2024
[[paper](https://arxiv.org/abs/2312.17133)]
[[code](https://artrackv2.github.io/)]
- **AQATrack:** Jinxia Xie, Bineng Zhong, Zhiyi Mo, Shengping Zhang, Liangtao Shi, Shuxiang Song, Rongrong Ji
"Autoregressive Queries for Adaptive Tracking with Spatio-Temporal Transformers" CVPR 2024
[[paper](https://arxiv.org/abs/2403.10574)]
[[code](https://github.com/GXNU-ZhongLab/AQATrack)]
- **DiffusionTrack:** Fei Xie, Zhongdao Wang, Chao Ma
"DiffusionTrack: Point Set Diffusion Model for Visual Object Tracking" CVPR 2024
[[paper](https://openaccess.thecvf.com/content/CVPR2024/html/Xie_DiffusionTrack_Point_Set_Diffusion_Model_for_Visual_Object_Tracking_CVPR_2024_paper.html)]
[[code](https://github.com/VISION-SJTU/DiffusionTrack)]
- **HIPTrack:** Wenrui Cai, Qingjie Liu, Yunhong Wang
"HIPTrack: Visual Tracking with Historical Prompts" CVPR 2024
[[paper](https://arxiv.org/abs/2311.02072)]
[[code](https://github.com/WenRuiCai/HIPTrack)]
- **OneTracker:** Lingyi Hong, Shilin Yan, Renrui Zhang, Wanyun Li, Xinyu Zhou, Pinxue Guo, Kaixun Jiang, Yiting Cheng, Jinglun Li, Zhaoyu Chen, Wenqiang Zhang
"OneTracker: Unifying Visual Object Tracking with Foundation Models and Efficient Tuning" CVPR 2024
[[paper](https://arxiv.org/abs/2403.09634)]
- **RTracker:** Yuqing Huang, Xin Li, Zikun Zhou, Yaowei Wang, Zhenyu He, Ming-Hsuan Yang
"RTracker: Recoverable Tracking via PN Tree Structured Memory" CVPR 2024
[[paper](https://arxiv.org/abs/2403.19242)]
[[code](https://github.com/NorahGreen/RTracker)]
### ECCV 2024
- **Diff-Tracker:** Zhengbo Zhang, Li Xu, Duo Peng, Hossein Rahmani, Jun Liu
"Diff-Tracker: Text-to-Image Diffusion Models are Unsupervised Trackers" ECCV 2024
[[paper](https://arxiv.org/abs/2407.08394)]
- **LoRAT:** Liting Lin, Heng Fan, Zhipeng Zhang, Yaowei Wang, Yong Xu, Haibin Ling
"Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance" ECCV 2024
[[paper](https://arxiv.org/abs/2403.05231)]
[[code](https://github.com/LitingLin/LoRAT)]
### AAAI 2024
- **ODTrack:** Yaozong Zheng, Bineng Zhong, Qihua Liang, Zhiyi Mo, Shengping Zhang, Xianxian Li
"ODTrack: Online Dense Temporal Token Learning for Visual Tracking" AAAI 2024
[[paper](https://arxiv.org/abs/2401.01686)]
[[code](https://github.com/GXNU-ZhongLab/ODTrack)]
- **EVPTrack:** Liangtao Shi, Bineng Zhong, Qihua Liang, Ning Li, Shengping Zhang, Xianxian Li
"Explicit Visual Prompts for Visual Object Tracking" AAAI 2024
[[paper](https://arxiv.org/abs/2401.03142)]
[[code](https://github.com/GXNU-ZhongLab/EVPTrack)]
### WACV 2024
- **SMAT:** Goutam Yelluru Gopal, Maria A. Amer
"Separable Self and Mixed Attention Transformers for Efficient Object Tracking" WACV 2024
[[paper](https://arxiv.org/abs/2309.03979)]
[[code](https://github.com/goutamyg/SMAT)]
- **DATr:** Jie Zhao, Johan Edstedt, Michael Felsberg, Dong Wang, Huchuan Lu
"Leveraging the Power of Data Augmentation for Transformer-based Tracking" WACV 2024
[[paper](https://arxiv.org/abs/2309.08264)]
[[code](https://github.com/zj5559/DATr)]
### ArXiv 2024
- **DyTrack:** Jiawen Zhu, Xin Chen, Haiwen Diao, Shuai Li, Jun-Yan He, Chenyang Li, Bin Luo, Dong Wang, Huchuan Lu
"Exploring Dynamic Transformer for Efficient Object Tracking" ArXiv 2024
[[paper](https://arxiv.org/abs/2403.17651)]
- **OIFTrack:** Janani Kugarajeevan, Thanikasalam Kokul, Amirthalingam Ramanan, Subha Fernando
"Optimized Information Flow for Transformer Tracking" ArXiv 2024
[[paper](https://arxiv.org/abs/2402.08195)]
[[code](https://github.com/JananiKugaa/OIFTrack)]
- **SuperSBT:** Fei Xie, Wankou Yang, Chunyu Wang, Lei Chu, Yue Cao, Chao Ma, Wenjun Zeng
"Correlation-Embedded Transformer Tracking: A Single-Branch Framework" ArXiv 2024
[[paper](https://arxiv.org/abs/2401.12743)]
[[code](https://github.com/phiphiphi31/SBT)]
### TPAMI 2023
- **GIT:** Shiyu Hu, Xin Zhao, Lianghua Huang, Kaiqi Huang
"Global Instance Tracking: Locating Target More Like Humans" TPAMI 2023
[[paper](https://arxiv.org/pdf/2202.13073.pdf)]
[[platform](http://videocube.aitestunion.com/)]
### IJCV 2023
- **SOTVerse:** Shiyu Hu, Xin Zhao, Kaiqi Huang
"SOTVerse: A User-defined Task Space of Single Object Tracking" IJCV 2023
[[paper](https://arxiv.org/abs/2204.07414)]
[[platform](http://metaverse.aitestunion.com/sotverse)]
- **STRtrack:** Shaochuan Zhao, Tianyang Xu, Xiaojun Wu, Josef Kittler
"A Spatio-Temporal Robust Tracker with Spatial-Channel Transformer and Jitter Suppression" IJCV 2023
[[paper](https://link.springer.com/article/10.1007/s11263-023-01902-x)]
### TIP 2023
- **SRT:** Tianpeng Liu, Jing Li, Jia Wu, Lefei Zhang, Jun Chang, Jun Wan, Lezhi Lian
"Tracking with Saliency Region Transformer" TIP 2023
[[paper](https://ieeexplore.ieee.org/document/10359476)]
- **SiamTactic:** Tianyang Xu, Zhenhua Feng, Xiao-Jun Wu, Josef Kittler
"Toward Robust Visual Object Tracking With Independent Target-Agnostic Detection and Effective Siamese Cross-Task Interaction" TIP 2023
[[paper](https://dl.acm.org/doi/abs/10.1109/TIP.2023.3246800)]
### CVPR 2023
- **DropMAE:** Qiangqiang Wu, Tianyu Yang, Ziquan Liu, Baoyuan Wu, Ying Shan, Antoni B. Chan
"DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks" CVPR 2023
[[paper](https://arxiv.org/abs/2304.00571)]
[[code](https://github.com/jimmy-dq/DropMAE)]
- **VideoTrack:** Fei Xie, Lei Chu, Jiahao Li, Yan Lu, Chao Ma
"VideoTrack: Learning to Track Objects via Video Transformer" CVPR 2023
[[paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Xie_VideoTrack_Learning_To_Track_Objects_via_Video_Transformer_CVPR_2023_paper.pdf)]
[[code](https://github.com/phiphiphi31/VideoTrack)]
- **GRM:** Shenyuan Gao, Chunluan Zhou, Jun Zhang
"Generalized Relation Modeling for Transformer Tracking" CVPR 2023
[[paper](https://arxiv.org/pdf/2303.16580v1.pdf)]
[[code](https://github.com/Little-Podi/GRM)]
- **ARTrack:** Xing Wei, Yifan Bai, Yongchao Zheng, Dahu Shi, Yihong Gong
"Autoregressive Visual Tracking" CVPR 2023
[[paper](https://openaccess.thecvf.com/content/CVPR2023/html/Wei_Autoregressive_Visual_Tracking_CVPR_2023_paper.html)]
[[code](https://github.com/MIV-XJTU/ARTrack)]
- **MAT:** Haojie Zhao, Dong Wang, Huchuan Lu
"Representation Learning for Visual Object Tracking by Masked Appearance Transfer" CVPR 2023
[[paper](https://openaccess.thecvf.com/content/CVPR2023/html/Zhao_Representation_Learning_for_Visual_Object_Tracking_by_Masked_Appearance_Transfer_CVPR_2023_paper.html)]
[[code](https://github.com/difhnp/MAT)]
- **SeqTrack:** Xin Chen, Houwen Peng, Dong Wang, Huchuan Lu, Han Hu
"SeqTrack: Sequence to Sequence Learning for Visual Object Tracking" CVPR 2023
[[paper](https://arxiv.org/abs/2304.14394)]
[[code](https://github.com/microsoft/VideoX)]
### ICCV 2023
- **HiT:** Ben Kang, Xin Chen, Dong Wang, Houwen Peng, Huchuan Lu
"Exploring Lightweight Hierarchical Vision Transformers for Efficient Visual Tracking" ICCV 2023
[[paper](https://arxiv.org/abs/2308.06904)]
[[code](https://github.com/kangben258/HiT)]
- **ROMTrack:** Yidong Cai, Jie Liu, Jie Tang, Gangshan Wu
"Robust Object Modeling for Visual Tracking" ICCV 2023
[[paper](https://arxiv.org/abs/2308.05140)]
[[code](https://github.com/dawnyc/ROMTrack)]
- **F-BDMTrack:** Dawei Yang, Jianfeng He, Yinchao Ma, Qianjin Yu, Tianzhu Zhang
"Foreground-Background Distribution Modeling Transformer for Visual Object Tracking" ICCV 2023
[[paper](https://openaccess.thecvf.com/content/ICCV2023/papers/Yang_Foreground-Background_Distribution_Modeling_Transformer_for_Visual_Object_Tracking_ICCV_2023_paper.pdf)]
### NeurIPS 2023
- **MixFormerV2:** Yutao Cui, Tianhui Song, Gangshan Wu, Limin Wang
"MixFormerV2: Efficient Fully Transformer Tracking" NeurIPS 2023
[[paper](https://arxiv.org/abs/2305.15896)]
[[code](https://github.com/MCG-NJU/MixFormerV2)]
- **ZoomTrack:** Yutong Kou, Jin Gao, Bing Li, Gang Wang, Weiming Hu, Yizheng Wang, Liang Li
"ZoomTrack: Target-aware Non-uniform Resizing for Efficient Visual Tracking" NeurIPS 2023
[[paper](https://arxiv.org/abs/2310.10071)]
[[code](https://github.com/Kou-99/ZoomTrack)]
- **MGIT:** Shiyu Hu, Dailin Zhang, Meiqi Wu, Xiaokun Feng, Xuchen Li, Xin Zhao, Kaiqi Huang
"A Multi-modal Global Instance Tracking Benchmark (MGIT): Better Locating Target in Complex Spatio-temporal and Causal Relationship" NeurIPS 2023
[[paper](https://huuuuusy.github.io/files/MGIT.pdf)]
[[platform](http://videocube.aitestunion.com/)]
- **RFGM:** Xinyu Zhou, Pinxue Guo, Lingyi Hong, Jinglun Li, Wei Zhang, Weifeng Ge, Wenqiang Zhang
"Reading Relevant Feature from Global Representation Memory for Visual Object Tracking" NeurIPS 2023
[[paper](https://arxiv.org/pdf/2402.14392.pdf)]
### AAAI 2023
- **CTTrack:** Zikai Song, Run Luo, Junqing Yu, Yi-Ping Phoebe Chen, Wei Yang
"Compact Transformer Tracker with Correlative Masked Modeling" AAAI 2023
[[paper](https://arxiv.org/abs/2301.10938)]
[[code](https://github.com/HUSTDML/CTTrack)]
- **TATrack:** Kaijie He, Canlong Zhang, Sheng Xie, Zhixin Li, Zhiwen Wang
"Target-Aware Tracking with Long-term Context Attention" AAAI 2023
[[paper](https://arxiv.org/abs/2302.13840)]
[[code](https://github.com/hekaijie123/TATrack)]
- **GdaTFT:** Yun Liang, Qiaoqiao Li, Fumian Long
"Global Dilated Attention and Target Focusing Network for Robust Tracking" AAAI 2023
[[paper](https://underline.io/lecture/69278-global-dilated-attention-and-target-focusing-network-for-robust-tracking)]
[[code](https://github.com/)]
### ACM MM 2023
- **UTrack:** Jie Gao, Bineng Zhong, Yan Chen
"Unambiguous Object Tracking by Exploiting Target Cues" ACM MM 2023
[[paper](https://dl.acm.org/doi/10.1145/3581783.3612240)]
### TCSVT 2023
- **SiamTHN:** Jiahao Bao, Kaiqiang Chen, Xian Sun, Liangjin Zhao, Wenhui Diao, Menglong Yan
"SiamTHN: Siamese Target Highlight Network for Visual Tracking" TCSVT 2023
[[paper](https://arxiv.org/abs/2303.12304)]
### ACM MM Asia 2023
- **UPVPT:** Guangtong Zhang, Qihua Liang, Ning Li, Zhiyi Mo, Bineng Zhong
"Robust Tracking via Unifying Pretrain-Finetuning and Visual Prompt Tuning" ACM MM Asia 2023
[[paper](https://dl.acm.org/doi/10.1145/3595916.3626410)]
### BMVC 2023
- **MVT:** Goutam Yelluru Gopal, Maria A. Amer
"Mobile Vision Transformer-based Visual Object Tracking" BMVC 2023
[[paper](https://arxiv.org/abs/2309.05829)]
[[code](https://github.com/goutamyg/MVT)]
### WACV 2023
- **E.T.Track:** Philippe Blatter, Menelaos Kanakis, Martin Danelljan, Luc Van Gool
"Efficient Visual Tracking with Exemplar Transformers" WACV 2023
[[paper](https://arxiv.org/abs/2112.09686)]
[[code](https://github.com/pblatter/ettrack)]
### ArXiv 2023
- **CycleTrack:** Chuanming Tang, Kai Wang, Joost van de Weijer, Jianlin Zhang, Yongmei Huang
"Exploiting Image-Related Inductive Biases in Single-Branch Visual Tracking" ArXiv 2023
[[paper](https://arxiv.org/abs/2310.19542)]
- **CoTracker:** Nikita Karaev, Ignacio Rocco, Benjamin Graham, Natalia Neverova, Andrea Vedaldi, Christian Rupprecht
"CoTracker: It is Better to Track Together" ArXiv 2023
[[paper](https://arxiv.org/abs/2307.07635)]
[[code](https://co-tracker.github.io/)]
- **LiteTrack:** Qingmao Wei, Bi Zeng, Jianqi Liu, Li He, Guotian Zeng
"LiteTrack: Layer Pruning with Asynchronous Feature Extraction for Lightweight and Efficient Visual Tracking" ArXiv 2023
[[paper](https://arxiv.org/abs/2309.09249)]
[[code](https://github.com/TsingWei/LiteTrack)]
- **LightFC:** Yunfeng Li, Bo Wang, Ye Li, Zhuoyan Liu, Xueyi Wu
"Lightweight Full-Convolutional Siamese Tracker" ArXiv 2023
[[paper](https://arxiv.org/abs/2310.05392)]
[[code](https://github.com/LiYunfengLYF/LightFC)]
- **DETRrack:** Qingmao Wei, Bi Zeng, Guotian Zeng
"Efficient Training for Visual Tracking with Deformable Transformer" ArXiv 2023
[[paper](https://arxiv.org/abs/2309.02676)]
- **JN:** Qingmao Wei, Bi Zeng, Guotian Zeng
"Towards Efficient Training with Negative Samples in Visual Tracking" ArXiv 2023
[[paper](https://arxiv.org/abs/2309.02903)]
- **TransSOT:** Janani Thangavel, Thanikasalam Kokul, Amirthalingam Ramanan, Subha Fernando
"Transformers in Single Object Tracking: An Experimental Survey" ArXiv 2023
[[paper](https://arxiv.org/abs/2302.11867)]
### CVPR 2022
- **MixFormer:** Yutao Cui, Cheng Jiang, Limin Wang, Gangshan Wu
"MixFormer: End-to-End Tracking with Iterative Mixed Attention" CVPR 2022
[[paper](https://arxiv.org/abs/2203.11082)]
[[code](https://github.com/MCG-NJU/MixFormer)]
- **UTT:** Fan Ma, Mike Zheng Shou, Linchao Zhu, Haoqi Fan, Yilei Xu, Yi Yang, Zhicheng Yan
"Unified Transformer Tracker for Object Tracking" CVPR 2022
[[paper](https://arxiv.org/abs/2203.15175)]
[[code](https://github.com/Flowerfan/Trackron)]
- **CSWinTT:** Zikai Song, Junqing Yu, Yi-Ping Phoebe Chen, Wei Yang
"Transformer Tracking with Cyclic Shifting Window Attention" CVPR 2022
[[paper](https://arxiv.org/abs/2205.03806)]
[[code](https://github.com/SkyeSong38/CSWinTT)]
- **ToMP:** Christoph Mayer, Martin Danelljan, Goutam Bhat, Matthieu Paul, Danda Pani Paudel, Fisher Yu, Luc Van Gool
"Transforming Model Prediction for Tracking" CVPR 2022
[[paper](https://arxiv.org/abs/2203.11192)]
[[code](https://github.com/visionml/pytracking)]
- **SBT:** Fei Xie, Chunyu Wang, Guangting Wang, Yue Cao, Wankou Yang, Wenjun Zeng
"Correlation-Aware Deep Tracking" CVPR 2022
[[paper](https://arxiv.org/abs/2203.01666)]
[[code](https://github.com/phiphiphi31/SuperSBT)]
- **GTELT:** Zikun Zhou, Jianqiu Chen, Wenjie Pei, Kaige Mao, Hongpeng Wang, Zhenyu He
"Global Tracking via Ensemble of Local Trackers" CVPR 2022
[[paper](https://arxiv.org/abs/2203.16092)]
[[code](https://github.com/ZikunZhou/GTELT)]
- **RBO:** Feng Tang, Qiang Ling
"Ranking-based Siamese Visual Tracking" CVPR 2022
[[paper](https://arxiv.org/pdf/2205.11761.pdf)]
[[code](https://github.com/sansanfree/RBO)]
- **ULAST:** Qiuhong Shen, Lei Qiao, Jinyang Guo, Peixia Li, Xin Li, Bo Li, Weitao Feng, Weihao Gan, Wei Wu, Wanli Ouyang
"Unsupervised Learning of Accurate Siamese Tracking" CVPR 2022
[[paper](https://arxiv.org/abs/2204.01475)]
[[code](https://github.com/FlorinShum/ULAST)]
### ECCV 2022
- **OSTrack:** Botao Ye, Hong Chang, Bingpeng Ma, Shiguang Shan
"Joint Feature Learning and Relation Modeling for Tracking: A One-Stream Framework" ECCV 2022
[[paper](https://arxiv.org/abs/2203.11991)]
[[code](https://github.com/botaoye/OSTrack)]
- **SimTrack:** Boyu Chen, Peixia Li, Lei Bai, Lei Qiao, Qiuhong Shen, Bo Li, Weihao Gan, Wei Wu, Wanli Ouyang
"Backbone is All Your Need: A Simplified Architecture for Visual Object Tracking" ECCV 2022
[[paper](https://arxiv.org/abs/2203.05328)]
[[code](https://github.com/LPXTT/SimTrack)]
- **CIA:** Zhixiong Pi, Weitao Wan, Chong Sun, Changxin Gao, Nong Sang, Chen Li
"Hierarchical Feature Embedding for Visual Tracking" ECCV 2022
[[paper](https://www.ecva.net/papers/eccv_2022/papers_ECCV/html/4400_ECCV_2022_paper.php)]
[[code](https://github.com/zxgravity/CIA)]
- **RTS:** Matthieu Paul, Martin Danelljan, Christoph Mayer, Luc Van Gool
"Robust Visual Tracking by Segmentation" ECCV 2022
[[paper](https://arxiv.org/abs/2203.11191)]
[[code](https://github.com/visionml/pytracking)]
- **AiATrack:** Shenyuan Gao, Chunluan Zhou, Chao Ma, Xinggang Wang, Junsong Yuan
"AiATrack: Attention in Attention for Transformer Visual Tracking" ECCV 2022
[[paper](https://arxiv.org/abs/2207.09603)]
[[code](https://github.com/Little-Podi/AiATrack)]
- **SLTtrack:** Minji Kim, Seungkwan Lee, Jungseul Ok, Bohyung Han, Minsu Cho
"Towards Sequence-Level Training for Visual Tracking" ECCV 2022
[[paper](https://arxiv.org/abs/2208.05810)]
[[code](https://github.com/byminji/SLTtrack)]
- **FEAR:** Vasyl Borsuk, Roman Vei, Orest Kupyn, Tetiana Martyniuk, Igor Krashenyi, Jiři Matas
"FEAR: Fast, Efficient, Accurate and Robust Visual Tracker" ECCV 2022
[[paper](https://arxiv.org/pdf/2112.07957.pdf)]
- **P3AFormer:** Zelin Zhao, Ze Wu, Yueqing Zhuang, Boxun Li, Jiaya Jia
"Tracking Objects as Pixel-wise Distributions" ECCV 2022
[[paper](https://arxiv.org/abs/2207.05518)]
[[code](https://sjtuytc.github.io/zelin_pages/p3aformer.html)]
### NeurIPS 2022
- **SwinTrack:** Liting Lin, Heng Fan, Yong Xu, Haibin Ling
"SwinTrack: A Simple and Strong Baseline for Transformer Tracking" NeurIPS 2022
[[paper](https://arxiv.org/abs/2112.00995)]
[[code](https://github.com/LitingLin/SwinTrack)]
### IJCAI 2022
- **InBN:** Mingzhe Guo, Zhipeng Zhang, Heng Fan, Liping Jing, Yilin Lyu, Bing Li, Weiming Hu
"Learning Target-aware Representation for Visual Tracking via Informative Interactions" IJCAI 2022
[[paper](https://arxiv.org/abs/2201.02526)]
- **SparseTT:** Zhihong Fu, Zehua Fu, Qingjie Liu, Yunhong Wang
"SparseTT: Visual Tracking with Sparse Transformers" IJCAI 2022
[[paper](https://arxiv.org/abs/2205.03776)]
[[code](https://github.com/fzh0917/SparseTT)]
### CVIU 2022
- **VOTSurvey:** Fei Chen, Xiaodong Wang, Yunxiang Zhao, Shaohe Lv, Xin Niu
"Visual object tracking: A survey" CVIU 2022
[[paper](https://www.sciencedirect.com/science/article/pii/S1077314222001011?dgcid=author)]
### ArXiv 2022
- **NeighborTrack:** Yu-Hsi Chen, Chien-Yao Wang, Cheng-Yun Yang, Hung-Shuo Chang, Youn-Long Lin, Yung-Yu Chuang, Hong-Yuan Mark Liao
"NeighborTrack: Improving Single Object Tracking by Bipartite Matching with Neighbor Tracklets" ArXiv 2022
[[paper](https://arxiv.org/pdf/2211.06663.pdf)]
- **SUSHI:** Orcun Cetintas, Guillem Brasó, Laura Leal-Taixé
"Unifying Short and Long-Term Tracking with Graph Hierarchies" ArXiv 2022
[[paper](https://arxiv.org/abs/2212.03038)]
- **PruningInTracking:** Saksham Aggarwal, Taneesh Gupta, Pawan Kumar Sahu, Arnav Chavan, Rishabh Tiwari, Dilip K. Prasad, Deepak K. Gupta
"On designing light-weight object trackers through network pruning: Use CNNs or transformers?" ArXiv 2022
[[paper](https://arxiv.org/abs/2211.13769)]
- **ProContEXT:** Jin-Peng Lan, Zhi-Qi Cheng, Jun-Yan He, Chenyang Li, Bin Luo, Xu Bao, Wangmeng Xiang, Yifeng Geng, Xuansong Xie
"ProContEXT: Exploring Progressive Context Transformer for Tracking" ArXiv 2022
[[paper](https://arxiv.org/abs/2210.15511)]
[[code](https://drive.google.com/drive/folders/18kHdBNEwvbk8S4-mwHaI-mw5w6cK-pyY?usp=sharing)]
- **SFTransT:** Chuanming Tang, Xiao Wang, Yuanchao Bai, Zhe Wu, Jianlin Zhang, Yongmei Huang
"Learning Spatial-Frequency Transformer for Visual Object Tracking" ArXiv 2022
[[paper](https://arxiv.org/abs/2208.08829)]
[[code](https://github.com/Tchuanm/SFTransT.git)]
- **SOTSurvey:** Zahra Soleimanitaleb, Mohammad Ali Keyvanrad
"Single Object Tracking: A Survey of Methods, Datasets, and Evaluation Metrics" ArXiv 2022
[[paper](https://arxiv.org/abs/2201.13066)]
- **SOTResearch:** Ruize Han, Wei Feng, Qing Guo, Qinghua Hu
"Single Object Tracking Research: A Survey" ArXiv 2022
[[paper](https://arxiv.org/abs/2204.11410)]
- **HCAT:** Xin Chen, Dong Wang, Dongdong Li, Huchuan Lu
"Efficient Visual Tracking via Hierarchical Cross-Attention Transformer" ArXiv 2022
[[paper](https://arxiv.org/abs/2203.13537)]
[[code](https://github.com/chenxin-dlut/HCAT)]
- **TransT-M:** Xin Chen, Bin Yan, Jiawen Zhu, Dong Wang, Huchuan Lu
"High-Performance Transformer Tracking" ArXiv 2022
[[paper](https://arxiv.org/abs/2203.13533)]
[[code](https://github.com/chenxin-dlut/TransT-M)]
- **GUSOT:** Zhiruo Zhou, Hongyu Fu, Suya You, C. -C. Jay Kuo
"GUSOT: Green and Unsupervised Single Object Tracking for Long Video Sequences" ArXiv 2022
[[paper](https://arxiv.org/abs/2207.07629)]
- **SRRT:** Jiawen Zhu, Xin Chen, Dong Wang, Wenda Zhao, Huchuan Lu
"SRRT: Search Region Regulation Tracking" ArXiv 2022
[[paper](https://arxiv.org/abs/2207.04438)]
- **DIMBA:** Xiangyu Yin, Wenjie Ruan, Jonathan Fieldsend
"DIMBA: Discretely Masked Black-Box Attack in Single Object Tracking" ArXiv 2022
[[paper](https://arxiv.org/abs/2207.08044)]
- **CAJMU:** Qiuhong Shen, Xin Li, Fanyang Meng, Yongsheng Liang
"Context-aware Visual Tracking with Joint Meta-updating" ArXiv 2022
[[paper](https://arxiv.org/abs/2204.01513)]
- **SiamPSA:** Guangze Zheng, Changhong Fu, Junjie Ye, Bowen Li, Geng Lu, Jia Pan
"SiamPSA: Siamese Object Tracking for Vision-Based UAM Approaching with Pairwise Scale-Channel Attention" ArXiv 2022
[[paper](https://arxiv.org/abs/xxxxxxxx)]
[[code](https://github.com/vision4robotics/SiamPSA)]
- **AdaptiveSiam:** Madhu Kiran, Le Thanh Nguyen-Meidine, Rajat Sahay, Rafael Menelau Oliveira E Cruz, Louis-Antoine Blais-Morin, Eric Granger
"Generative Target Update for Adaptive Siamese Tracking" ArXiv 2022
[[paper](https://arxiv.org/abs/2202.09938)]
[[code](https://anonymous.4open.science/r/AdaptiveSiamese-CE78/)]
- **SiamLA:** Jiahao Nie, Han Wu, Zhiwei He, Yuxiang Yang, Mingyu Gao, Zhekang Dong
"Learning Localization-aware Target Confidence for Siamese Visual Tracking" ArXiv 2022
[[paper](https://arxiv.org/abs/2204.14093)]
### CVPR 2021
- **TransT:** Xin Chen, Bin Yan, Jiawen Zhu, Dong Wang, Xiaoyun Yang, Huchuan Lu
"Transformer Tracking" CVPR 2021
[[paper](https://arxiv.org/abs/2103.15436)]
[[code](https://github.com/chenxin-dlut/TransT)]
- **Alpha-Refine:** Bin Yan, Xinyu Zhang, Dong Wang, Huchuan Lu, Xiaoyun Yang
"Alpha-Refine: Boosting Tracking Performance by Precise Bounding Box Estimation" CVPR 2021
[[paper](https://arxiv.org/pdf/1911.12836.pdf)]
[[code](https://github.com/MasterBin-IIAU/AlphaRefine)]
- **LightTrack:** Bin Yan, Houwen Peng, Kan Wu, Dong Wang, Jianlong Fu, Huchuan Lu
"LightTrack: Finding Lightweight Neural Networks for Object Tracking via One-Shot Architecture Search" CVPR 2021
[[paper](https://arxiv.org/abs/2104.14545)]
[[code](https://github.com/cvpr-2021/lighttrack)]
- **TrTrack:** Ning Wang, Wengang Zhou, Jie Wang, Houqiang Li
"Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking" CVPR 2021
[[paper](https://arxiv.org/pdf/2103.11681.pdf)]
[[code](https://github.com/594422814/TransformerTrack)]
- **STMTrack:** Zhihong Fu, Qingjie Liu, Zehua Fu, Yunhong Wang
"STMTrack: Template-free Visual Tracking with Space-time Memory Networks" CVPR 2021
[[paper](https://arxiv.org/abs/2104.00324)]
[[code](https://github.com/fzh0917/STMTrack)]
- **SiamGAT:** Dongyan Guo, Yanyan Shao, Ying Cui, Zhenhua Wang, Liyan Zhang, Chunhua Shen
"Graph Attention Tracking" CVPR 2021
[[paper](https://arxiv.org/abs/2011.11204)]
[[code](https://github.com/ohhhyeahhh/SiamGAT)]
- **SiamACM:** Wencheng Han, Xingping Dong, Fahad Shahbaz Khan, Ling Shao, Jianbing Shen
"Learning to Fuse Asymmetric Feature Maps in Siamese Trackers" CVPR 2021
[[paper](https://arxiv.org/pdf/2012.02776.pdf)]
[[code](https://github.com/wencheng256/SiamBAN-ACM)]
- **PUL:** Qiangqiang Wu, Jia Wan, Antoni B. Chan
"Progressive Unsupervised Learning for Visual Object Tracking" CVPR 2021
[[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Wu_Progressive_Unsupervised_Learning_for_Visual_Object_Tracking_CVPR_2021_paper.pdf)]
[[code](https://github.com/PUL)]
- **CapsuleRRT:** Ding Ma, Xiangqian Wu
"CapsuleRRT: Relationships-Aware Regression Tracking via Capsules" CVPR 2021
[[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Ma_CapsuleRRT_Relationships-Aware_Regression_Tracking_via_Capsules_CVPR_2021_paper.pdf)]
[[code](https://github.com/CapsuleRRT)]
- **RE-Siam:** Deepak K. Gupta, Devanshu Arya, Efstratios Gavves
"Rotation Equivariant Siamese Networks for Tracking" CVPR 2021
[[paper](https://arxiv.org/abs/2012.13078)]
[[code](https://github.com/dkgupta90/re-siamnet)]
- **LF-Siam:** Siyuan Cheng, Bineng Zhong, Guorong Li, Xin Liu, Zhenjun Tang, Xianxian Li, Jing Wang
"Learning to Filter: Siamese Relation Network for Robust Tracking" CVPR 2021
[[paper](https://arxiv.org/abs/2104.00829)]
[[code](https://github.com/hqucv/siamrn)]
### ICCV 2021
- **STARK:** Bin Yan, Houwen Peng, Jianlong Fu, Dong Wang, Huchuan Lu
"Learning Spatio-Temporal Transformer for Visual Tracking" ICCV 2021
[[paper](https://arxiv.org/pdf/2103.17154.pdf)]
[[code](https://github.com/researchmm/Stark)]
- **AutoMatch:** Zhipeng Zhang, Yihao Liu, Xiao Wang, Bing Li, Weiming Hu
"Learn to Match: Automatic Matching Network Design for Visual Tracking" ICCV 2021
[[paper](https://arxiv.org/pdf/2108.00803.pdf)]
[[code](https://github.com/JudasDie/SOTS)]
- **DDT:** Bin Yu, Ming Tang, Linyu Zheng, Guibo Zhu, Jinqiao Wang
"High-Performance Discriminative Tracking with Transformers" ICCV 2021
[[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Yu_High-Performance_Discriminative_Tracking_With_Transformers_ICCV_2021_paper.pdf)]
- **KeepTrack:** Christoph Mayer, Martin Danelljan, Danda Pani Paudel, Luc Van Gool
"Learning Target Candidate Association to Keep Track of What Not to Track" ICCV 2021
[[paper](https://arxiv.org/abs/2103.16556)]
[[code](https://github.com/visionml/pytracking)]
- **SAOT:** Zikun Zhou, Wenjie Pei, Xin Li, Hongpeng Wang, Feng Zheng, Zhenyu He
"Saliency-Associated Object Tracking" ICCV 2021
[[paper](https://arxiv.org/pdf/2108.03637.pdf)]
[[code](https://github.com/ZikunZhou/SAOT)]
### NeurIPS 2021
- **PathTrack:** Drew Linsley, Girik Malik, Junkyung Kim, Lakshmi Narasimhan Govindarajan, Ennio Mingolla, Thomas Serre
"Tracking Without Re-recognition in Humans and Machines" NeurIPS 2021
[[paper](https://proceedings.neurips.cc/paper/2021/hash/a2557a7b2e94197ff767970b67041697-Abstract.html)]
[[code](http://bit.ly/InTcircuit)]
### AAAI 2021
- **MUG:** Lijun Zhou, Antoine Ledent, Qintao Hu, Ting Liu, Jianlin Zhang, Marius Kloft
"Model Uncertainty Guides Visual Object Tracking" AAAI 2021
[[paper](https://ojs.aaai.org/index.php/AAAI/article/view/16473)]
- **UPA:** Li Ding, Yongwei Wang, Kaiwen Yuan, Minyang Jiang, Ping Wang, Hua Huang, Z. Jane Wang
"Towards Universal Physical Attacks on Single Object Tracking" AAAI 2021
[[paper](https://www.aaai.org/AAAI21Papers/AAAI-2606.DingL.pdf)]
- **PACNet:** Dawei Zhang, Zhonglong Zheng, Riheng Jia, Minglu Li
"Visual Tracking via Hierarchical Deep Reinforcement Learning" AAAI 2021
[[paper](https://ojs.aaai.org/index.php/AAAI/article/view/16443)]
- **MSANet:** Xuesong Chen, Canmiao Fu, Feng Zheng, Yong Zhao, Hongsheng Li, Ping Luo, Guo-Jun Qi
"A Unified Multi-Scenario Attacking Network for Visual Object Tracking" AAAI 2021
[[paper](https://ojs.aaai.org/index.php/AAAI/article/view/16195)]
### ICCVW 2021
- **DMB:** Fei Xie, Wankou Yang, Kaihua Zhang, Bo Liu, Wanli Xue, Wangmeng Zuo
"Learning Spatio-Appearance Memory Network for High-Performance Visual Tracking" ICCVW 2021
[[paper](https://arxiv.org/pdf/2009.09669.pdf)]
[[code](https://github.com/phiphiphi31/DMB)]
- **DualTFR:** Fei Xie, Chunyu Wang, Guangting Wang, Wankou Yang, Wenjun Zeng
"Learning Tracking Representations via Dual-Branch Fully Transformer Networks" ICCVW 2021
[[paper](https://arxiv.org/abs/2112.02571)]
[[code](https://github.com/phiphiphi31/DualTFR)]
### TNNLS 2021
- **CCR:** Shiming Ge, Chunhui Zhang, Shikun Li, Dan Zeng, Dacheng Tao
"Cascaded Correlation Refinement for Robust Deep Tracking" TNNLS 2021
[[paper](https://ieeexplore.ieee.org/document/9069312)]
[[code](https://github.com/983632847/CCR)]
### IJCNN 2021
- **TrackMLP:** Tianyu Zhu, Rongkai Ma, Mehrtash Harandi, Tom Drummond
"Learning Online for Unified Segmentation and Tracking Models" IJCNN 2021
[[paper](https://arxiv.org/abs/2111.06994)]
- **SiamGAN:** Yifei Zhou, Jing Li, Jun Chang, Yafu Xiao, Jun Wan, Hang Sun
"Siamese Guided Anchoring Network for Visual Tracking" IJCNN 2021
[[paper](https://ieeexplore.ieee.org/document/9533985)]
[[code](https://github.com/xxxxx.xx)]
### WACV 2021
- **MART:** Heng Fan, Haibin Ling
"MART: Motion-Aware Recurrent Neural Network for Robust Visual Tracking" WACV 2021
[[paper](https://openaccess.thecvf.com/content/WACV2021/papers/Fan_MART_Motion-Aware_Recurrent_Neural_Network_for_Robust_Visual_Tracking_WACV_2021_paper.pdf)]
[[code](https://hengfan2010.github.io/projects/MART/MART.htm)]
- **SiamSE:** Ivan Sosnovik, Artem Moskalev, Arnold Smeulders
"Scale Equivariance Improves Siamese Tracking" WACV 2021
[[paper](https://arxiv.org/pdf/2007.09115.pdf)]
[[code](https://github.com/ISosnovik/SiamSE)]
### BMVC 2021
- **CHASE:** Seyed Mojtaba Marvasti-Zadeh, Javad Khaghani, Li Cheng, Hossein Ghanei-Yakhdan, Shohreh Kasaei
"CHASE: Robust Visual Tracking via Cell-Level Differentiable Neural Architecture Search" BMVC 2021
[[paper](https://arxiv.org/abs/2107.03463)]
- **TAPL:** Wei Han, Hantao Huang, Xiaoxi Yu
"TAPL: Dynamic Part-based Visual Tracking via Attention-guided Part Localization" BMVC 2021
[[paper](https://arxiv.org/abs/2110.13027)]
### ArXiv 2021
- **RPT++:** Ziang Ma, Haitao Zhang, Linyuan Wang, Jun Yin
"RPT++: Customized Feature Representation for Siamese Visual Tracking" ArXiv 2021
[[paper](https://arxiv.org/abs/2110.12194)]
- **IAT:** Mengmeng Wang, Xiaoqian Yang, Yong Liu
"Explicitly Modeling the Discriminability for Instance-Aware Visual Object Tracking" ArXiv 2021
[[paper](https://arxiv.org/abs/2110.13259)]
- **ALT:** Di Yuan, Xiaojun Chang, Qiao Liu, Dehua Wang, Zhenyu He
"Active Learning for Deep Visual Tracking" ArXiv 2021
[[paper](https://arxiv.org/abs/2110.15030)]
- **DML:** Jinghao Zhou, Bo Li, Lei Qiao, Peng Wang, Weihao Gan, Wei Wu, Junjie Yan, Wanli Ouyang
"Higher Performance Visual Tracking with Dual-Modal Localization" ArXiv 2021
[[paper](https://arxiv.org/pdf/2103.10089.pdf)]
- **TREG:** Yutao Cui, Cheng Jiang, Limin Wang, Gangshan Wu
"Target Transformed Regression for Accurate Tracking" ArXiv 2021
[[paper](https://arxiv.org/pdf/2104.00403.pdf)]
[[code](https://github.com/MCG-NJU/TREG)]
- **SiamSTM:** Jinpu Zhang, Yuehuan Wang
"Spatio-Temporal Matching for Siamese Visual Tracking" ArXiv 2021
[[paper](https://arxiv.org/pdf/2105.02408.pdf)]
- **TrTr:** Moju Zhao, Kei Okada, Masayuki Inaba
"TrTr: Visual Tracking with Transformer" ArXiv 2021
[[paper](https://arxiv.org/pdf/2105.03817.pdf)]
[[code](https://github.com/tongtybj/TrTr)]