{"id":15027522,"url":"https://github.com/d-li14/involution","last_synced_at":"2025-05-16T14:06:53.531Z","repository":{"id":37399736,"uuid":"334181506","full_name":"d-li14/involution","owner":"d-li14","description":"[CVPR 2021] Involution: Inverting the Inherence of Convolution for Visual Recognition, a brand new neural operator","archived":false,"fork":false,"pushed_at":"2021-07-16T06:01:08.000Z","size":499,"stargazers_count":1309,"open_issues_count":26,"forks_count":178,"subscribers_count":15,"default_branch":"main","last_synced_at":"2025-05-16T14:06:49.085Z","etag":null,"topics":["cvpr2021","image-classification","instance-segmentation","involution","object-detection","operator","pre-trained-model","pytorch","semantic-segmentation"],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/2103.06255","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/d-li14.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2021-01-29T15:16:07.000Z","updated_at":"2025-05-05T13:08:45.000Z","dependencies_parsed_at":"2022-08-08T20:15:28.038Z","dependency_job_id":null,"html_url":"https://github.com/d-li14/involution","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/d-li14%2Finvolution","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/d-li14%2Finvolution/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/d-li14%2Finvolution/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/d-li14%2Finvolution/manifests","owner_url":"
https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/d-li14","download_url":"https://codeload.github.com/d-li14/involution/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254544146,"owners_count":22088807,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cvpr2021","image-classification","instance-segmentation","involution","object-detection","operator","pre-trained-model","pytorch","semantic-segmentation"],"created_at":"2024-09-24T20:06:36.610Z","updated_at":"2025-05-16T14:06:53.511Z","avatar_url":"https://github.com/d-li14.png","language":"Python","readme":"# involution\n\nOfficial implementation of a neural operator as described in [Involution: Inverting the Inherence of Convolution for Visual Recognition](https://arxiv.org/abs/2103.06255) (CVPR'21)\n\nBy [Duo Li](https://duoli.org/), [Jie Hu](https://github.com/hujie-frank), [Changhu Wang](https://scholar.google.com/citations?user=DsVZkjAAAAAJ), [Xiangtai Li](https://github.com/lxtGH), [Qi She](https://scholar.google.com/citations?user=iHoGTt4AAAAJ), [Lei Zhu](https://github.com/zh460045050), [Tong Zhang](http://tongzhang-ml.org/), and [Qifeng Chen](https://cqf.io/)\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"fig/involution.png\" width=\"500\" /\u003e\u003c/p\u003e\n\n**TL; DR.** `involution` is a general-purpose neural primitive that is versatile for a spectrum of deep learning models on different vision tasks. 
`involution` bridges `convolution` and `self-attention` in design, while being more efficient and effective than `convolution` and simpler in form than `self-attention`. \n\n\u003cp align=\"center\"\u003e\u003cimg src=\"fig/complexity.png\" width=\"400\" /\u003e\u003cimg src=\"fig/parameter.png\" width=\"400\" /\u003e\u003c/p\u003e\n\nIf you find our work useful in your research, please cite:\n```\n@InProceedings{Li_2021_CVPR,\n    author = {Li, Duo and Hu, Jie and Wang, Changhu and Li, Xiangtai and She, Qi and Zhu, Lei and Zhang, Tong and Chen, Qifeng},\n    title = {Involution: Inverting the Inherence of Convolution for Visual Recognition},\n    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n    month = {June},\n    year = {2021}\n}\n```\n\n## Getting Started\n\nThis repository is fully built upon the [OpenMMLab](https://openmmlab.com/) toolkits. For each individual task, the config and model files follow the same directory organization as [mmcls](https://github.com/open-mmlab/mmclassification), [mmdet](https://github.com/open-mmlab/mmdetection), and [mmseg](https://github.com/open-mmlab/mmsegmentation) respectively, so just copy-and-paste them to the corresponding locations to get started.\n\nFor example, to evaluate detectors:\n```shell\ngit clone https://github.com/open-mmlab/mmdetection # and install\n\n# copy model files\ncp det/mmdet/models/backbones/* mmdetection/mmdet/models/backbones\ncp det/mmdet/models/necks/* mmdetection/mmdet/models/necks\ncp det/mmdet/models/dense_heads/* mmdetection/mmdet/models/dense_heads\ncp det/mmdet/models/roi_heads/* mmdetection/mmdet/models/roi_heads\ncp det/mmdet/models/roi_heads/mask_heads/* mmdetection/mmdet/models/roi_heads/mask_heads\ncp det/mmdet/models/utils/* mmdetection/mmdet/models/utils\ncp det/mmdet/datasets/* mmdetection/mmdet/datasets\n\n# copy config files\ncp det/configs/_base_/models/* mmdetection/configs/_base_/models\ncp det/configs/_base_/schedules/* 
mmdetection/configs/_base_/schedules\ncp det/configs/involution mmdetection/configs -r\n\n# evaluate checkpoints\ncd mmdetection\nbash tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}]\n```\n\nFor more detailed guidance, please refer to the original [mmcls](https://github.com/open-mmlab/mmclassification), [mmdet](https://github.com/open-mmlab/mmdetection), and [mmseg](https://github.com/open-mmlab/mmsegmentation) tutorials.\n\nCurrently, we provide a memory-efficient implementation of the involution operator based on [CuPy](https://cupy.dev/). Please install this library in advance. A customized CUDA kernel would bring further acceleration on the hardware. Any contribution from the community in this regard is welcome!\n\n## Model Zoo\n\nThe parameters/FLOPs\u0026#8595; and performance\u0026#8593; compared to the convolution baselines are marked in parentheses. Some of these checkpoints were obtained in our reimplementation runs, so their performance may differ slightly from that reported in our paper. 
Models are trained with 64 GPUs on ImageNet, 8 GPUs on COCO, and 4 GPUs on Cityscapes.\n\n### Image Classification on ImageNet\n\n|         Model         | Params(M) | FLOPs(G) | Top-1 (%) | Top-5 (%) | Config | Download |\n|:---------------------:|:---------:|:--------:|:---------:|:---------:|:---------:|:--------:|\n| RedNet-26             |  9.23\u003csub\u003e(32.8%\u0026#8595;)\u003c/sub\u003e     | 1.73\u003csub\u003e(29.2%\u0026#8595;)\u003c/sub\u003e     | 75.96 | 93.19 | [config](https://github.com/d-li14/involution/blob/main/cls/configs/rednet/rednet26_b32x64_warmup_coslr_imagenet.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EWmTnvB1cqtIi-OI4HfxGBgBKzO0w_qc3CnErHhNfBitlg?e=XPws5X) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EVJ_eDMSsr1JqhInx67OCxcB-P54pj3o5mGO_rYVsRSk3A?e=70tJAc) |\n| RedNet-38             | 12.39\u003csub\u003e(36.7%\u0026#8595;)\u003c/sub\u003e     | 2.22\u003csub\u003e(31.3%\u0026#8595;)\u003c/sub\u003e     | 77.48 | 93.57 | [config](https://github.com/d-li14/involution/blob/main/cls/configs/rednet/rednet38_b32x64_warmup_coslr_imagenet.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/ETZIquU7P3lDvru0OAPiTYIBAt-B__2LpP_NeB4sR0hJsg?e=b9Rbl0) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/Ed62YcJgC-NCp72NpEsMLGABkb7f-EkCQ1X-RyLmAMYoUQ?e=Hqetbj) |\n| RedNet-50             | 15.54\u003csub\u003e(39.5%\u0026#8595;)\u003c/sub\u003e     | 2.71\u003csub\u003e(34.1%\u0026#8595;)\u003c/sub\u003e     | 78.35 | 94.13 | [config](https://github.com/d-li14/involution/blob/main/cls/configs/rednet/rednet50_b32x64_warmup_coslr_imagenet.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EZjRG3qUMu5IuR7YH4Giyc8B6koPvu6s8rOlIG8-BuFevg?e=f4ce5G) \u0026#124; 
[log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/ETL5NxDwnQpCldbJb906aOABjjuhZSquxKzK5xYQm-6Bhw?e=lOzEEf) |\n| RedNet-101            | 25.65\u003csub\u003e(42.6%\u0026#8595;)\u003c/sub\u003e     | 4.74\u003csub\u003e(40.5%\u0026#8595;)\u003c/sub\u003e     | 78.92 | 94.35 | [config](https://github.com/d-li14/involution/blob/main/cls/configs/rednet/rednet101_b32x64_warmup_coslr_imagenet.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EXAuVXdXz1xAg5eG-dkvwTUBkds2IOK1kglHtkMeGz5z_A?e=vHvh5y) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EbbiBxdZoZJFmTPSg9hW3BIBLRmRpfPa70nu8pi_8ddOSw?e=CdAV86) |\n| RedNet-152            | 33.99\u003csub\u003e(43.5%\u0026#8595;)\u003c/sub\u003e     | 6.79\u003csub\u003e(41.4%\u0026#8595;)\u003c/sub\u003e     | 79.12 | 94.38 | [config](https://github.com/d-li14/involution/blob/main/cls/configs/rednet/rednet152_b32x64_warmup_coslr_imagenet.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/ERxcS4wXUCtPl4uUnPoT9vcByzhLA0eHgDE-fw_EESfP0w?e=x0dZWB) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EYr2Yx-p4w1AuT-Q3E7M2m0BFhAGDoYvxps09vYy4Cnj3A?e=XGxzPF) |\n\nBefore finetuning on the following downstream tasks, download the ImageNet pre-trained [RedNet-50 weights](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EaVInpb6TGJApN6QCAWwKJAB3cK9Iz55QfJgmhhaV7yuHw?e=yuWxyI) and set the `pretrained` argument in `det/configs/_base_/models/*.py` or `seg/configs/_base_/models/*.py` to your local path.\n\n### Object Detection and Instance Segmentation on COCO\n\n#### Faster R-CNN\n|    Backbone     |     Neck    |     Head    |  Style  | Lr schd | Params(M) | FLOPs(G) | box AP | Config | Download |\n| :-------------: | :---------: | :---------: | :-----: | :-----: |:---------:|:--------:| :----: | :------: | :--------: |\n|    
RedNet-50-FPN     | convolution | convolution | pytorch |   1x    | 31.6\u003csub\u003e(23.9%\u0026#8595;)\u003c/sub\u003e | 177.9\u003csub\u003e(14.1%\u0026#8595;)\u003c/sub\u003e | 39.5\u003csub\u003e(1.8\u0026#8593;)\u003c/sub\u003e   | [config](https://github.com/d-li14/involution/blob/main/det/configs/involution/faster_rcnn_red50_fpn_1x_coco.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/ESOJAF74jK5HrevtBdMDku0Bgf71nC7F4UcMmGWER5z1_w?e=qGPdA5) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/ESYSpzei_INMn1wu5qa0Su8B9YxXf_rOtib5xHjb1y2alA?e=Qn3lyd) |\n|    RedNet-50-FPN     |  involution | convolution | pytorch |   1x    | 29.5\u003csub\u003e(28.9%\u0026#8595;)\u003c/sub\u003e | 135.0\u003csub\u003e(34.8%\u0026#8595;)\u003c/sub\u003e | 40.2\u003csub\u003e(2.5\u0026#8593;)\u003c/sub\u003e   | [config](https://github.com/d-li14/involution/blob/main/det/configs/involution/faster_rcnn_red50_neck_fpn_1x_coco.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EV90stAJIXxEnDRe0QM0lvwB_jm9jwqwHoBOVVOqosPHJw?e=0QoikN) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/Ec8z-SZbJTxJrAJ3FLq0PSsB1Q7T1dXLvhfHmegQqH7rqA?e=5O9jDY) |\n|    RedNet-50-FPN     |  involution |  involution | pytorch |   1x    | 29.0\u003csub\u003e(30.1%\u0026#8595;)\u003c/sub\u003e | 91.5\u003csub\u003e(55.8%\u0026#8595;)\u003c/sub\u003e | 39.2\u003csub\u003e(1.5\u0026#8593;)\u003c/sub\u003e   | [config](https://github.com/d-li14/involution/blob/main/det/configs/involution/faster_rcnn_red50_neck_fpn_head_1x_coco.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EeTwxsehR5VLhvf5TbTr8WwBmiNUwUeuXtbdOJlg0mFkmw?e=DL3gWX) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EUBsDdHQ10BKp8wW2aj2GHYBzhHtmW2BP65PIhn3KcSYqA?e=6dmNn7) |\n\n#### Mask R-CNN\n|    
Backbone     |     Neck    |     Head    |  Style  | Lr schd | Params(M) | FLOPs(G) | box AP | mask AP | Config | Download |\n| :-------------: | :---------: | :---------: | :-----: | :-----: |:---------:|:--------:| :----: | :-----: | :------: | :--------: |\n|    RedNet-50-FPN     | convolution | convolution | pytorch |   1x    | 34.2\u003csub\u003e(22.6%\u0026#8595;)\u003c/sub\u003e | 224.2\u003csub\u003e(11.5%\u0026#8595;)\u003c/sub\u003e | 39.9\u003csub\u003e(1.5\u0026#8593;)\u003c/sub\u003e   | 35.7\u003csub\u003e(0.6\u0026#8593;)\u003c/sub\u003e    |  [config](https://github.com/d-li14/involution/blob/main/det/configs/involution/mask_rcnn_red50_fpn_1x_coco.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EdheYm71X2pFu427_557zqcBmuKaLKEoU5R0Z2Kwo2alvg?e=qXShyW) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EQK-5qH_XxhHn4QnxmQbJ4cBL3sz9HqjS0EoybT2s1751g?e=4gpwK2) |\n|    RedNet-50-FPN     |  involution | convolution | pytorch |   1x    | 32.2\u003csub\u003e(27.1%\u0026#8595;)\u003c/sub\u003e | 181.3\u003csub\u003e(28.5%\u0026#8595;)\u003c/sub\u003e | 40.8\u003csub\u003e(2.4\u0026#8593;)\u003c/sub\u003e   | 36.4\u003csub\u003e(1.3\u0026#8593;)\u003c/sub\u003e    |  [config](https://github.com/d-li14/involution/blob/main/det/configs/involution/mask_rcnn_red50_neck_fpn_1x_coco.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EYYgUzXjJ3VBrscng-5QW_oB9wFK-dcqSDYB-LUXldFweg?e=idFEgd) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/ETWdfYuhjY5AlGkUH11rLl4BLk9zsyKgwAbay47TYzIU-w?e=6ey6cD) |\n|    RedNet-50-FPN     |  involution |  involution | pytorch |   1x    | 29.5\u003csub\u003e(33.3%\u0026#8595;)\u003c/sub\u003e | 104.6\u003csub\u003e(58.7%\u0026#8595;)\u003c/sub\u003e | 39.6\u003csub\u003e(1.2\u0026#8593;)\u003c/sub\u003e   | 35.1\u003csub\u003e(0.0\u0026#8593;)\u003c/sub\u003e    |  
[config](https://github.com/d-li14/involution/blob/main/det/configs/involution/mask_rcnn_red50_neck_fpn_head_1x_coco.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EZwtdWXX8sBLp7L__TrmkykBPEe7kJInbkbUblP3PxuURQ?e=09l25P) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/Ebevxbj_0OtNkb3uCdpM0aoBeMQUABiQ0bDfZ9P9Jw1AZA?e=ZUcbUo) |\n\n#### RetinaNet\n|    Backbone     |     Neck    |  Style  | Lr schd | Params(M) | FLOPs(G) | box AP | Config | Download |\n| :-------------: | :---------: | :-----: | :-----: |:---------:|:--------:| :----: | :------: | :--------: |\n|    RedNet-50-FPN     | convolution | pytorch |   1x    | 27.8\u003csub\u003e(26.3%\u0026#8595;)\u003c/sub\u003e | 210.1\u003csub\u003e(12.2%\u0026#8595;)\u003c/sub\u003e | 38.2\u003csub\u003e(1.6\u0026#8593;)\u003c/sub\u003e   | [config](https://github.com/d-li14/involution/blob/main/det/configs/involution/retinanet_red50_fpn_1x_coco.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EfUY9orEyCVCsYMlcDhIZ2wBBDw7k1HqfTm9u11KfTopmA?e=4Jhu79) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EQQ_EVDmVg1FlfgpAu9NF5wB6xe6qnqaYWKJw9lL7kRxdw?e=fXxjPg) |\n|    RedNet-50-FPN     |  involution | pytorch |   1x    | 26.3\u003csub\u003e(30.2%\u0026#8595;)\u003c/sub\u003e | 199.9\u003csub\u003e(16.5%\u0026#8595;)\u003c/sub\u003e | 38.2\u003csub\u003e(1.6\u0026#8593;)\u003c/sub\u003e   | [config](https://github.com/d-li14/involution/blob/main/det/configs/involution/retinanet_red50_neck_fpn_1x_coco.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EedZ3bMWZkJIvKjyLkTZHksBc_8wdOMHhFZA7RDewjPO8g?e=jsSjYI) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/ES7chxQh5-lGr5--GqroMScBKNTNACyvosdVuThPvkZGkg?e=CrlN9F) |\n\n\n### Semantic Segmentation on Cityscapes\n\n| Method | Backbone | 
Neck | Crop Size | Lr schd | Params(M) | FLOPs(G) | mIoU  | Config |                                                                                                                                                                               download                                                                                                                                                                               |\n|--------|----------|------|-----------|--------:|:---------:|:--------:|------:|:------:|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| FPN    | RedNet-50     | convolution | 512x1024  |   80000 | 18.5\u003csub\u003e(35.1%\u0026#8595;)\u003c/sub\u003e | 293.9\u003csub\u003e(19.0%\u0026#8595;)\u003c/sub\u003e | 78.0\u003csub\u003e(3.6\u0026#8593;)\u003c/sub\u003e | [config](https://github.com/d-li14/involution/blob/main/seg/configs/involution/fpn_red50_512x1024_80k_cityscapes.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EYstjiI28SJPohJE54wapFUBW5Wc95Di2Rsh0vf6K79vPw?e=lOvbkZ) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EXdupIgFuAlFuH854wThyXcBQTyL7YhK3wPYcR98rw7PJg?e=MyXx2w) |\n| FPN    | RedNet-50     |  involution | 512x1024  |   80000 | 16.4\u003csub\u003e(42.5%\u0026#8595;)\u003c/sub\u003e | 205.2\u003csub\u003e(43.4%\u0026#8595;)\u003c/sub\u003e | 79.1\u003csub\u003e(4.7\u0026#8593;)\u003c/sub\u003e | [config](https://github.com/d-li14/involution/blob/main/seg/configs/involution/fpn_red50_neck_512x1024_80k_cityscapes.py) | 
[model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EZzDyESh0ElFp2pIFL1xN70BAj1EyvhFyqi0g7Mp1OZxog?e=F7kZYH) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EXcP_3ujO_1Juj8ap7rqDJ8BWZDCyJL86BWjeZiJ_FfLOw?e=47lvtq) |\n| UPerNet| RedNet-50     | convolution | 512x1024  |   80000 | 56.4\u003csub\u003e(15.1%\u0026#8595;)\u003c/sub\u003e | 1825.6\u003csub\u003e(3.6%\u0026#8595;)\u003c/sub\u003e | 80.6\u003csub\u003e(2.4\u0026#8593;)\u003c/sub\u003e | [config](https://github.com/d-li14/involution/blob/main/seg/configs/involution/upernet_red50_512x1024_80k_cityscapes.py) | [model](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/Eb8-frsvSuNAm7qQ6-H2DtEBdACuf-mUOBhvE3YIOiobmA?e=Ibb2cN) \u0026#124; [log](https://hkustconnect-my.sharepoint.com/:u:/g/personal/dlibh_connect_ust_hk/EWhyFAZpxfRBoFi1myoT-RMB6-HeaP7NjSv88YQve4bZkg?e=wC8ccl) |\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fd-li14%2Finvolution","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fd-li14%2Finvolution","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fd-li14%2Finvolution/lists"}