Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.

List: [Awesome-Anything](https://github.com/VainF/Awesome-Anything)

General AI methods for Anything: AnyObject, AnyGeneration, AnyModel, AnyTask, AnyX

Topics: anything, anything-ai, awesome, awesome-segment-anything, general-ai, segment-anything


# Awesome-Anything
[![Awesome](https://awesome.re/badge.svg)](https://github.com/sindresorhus/awesome)
[![Awesome Anything](https://img.shields.io/badge/Awesome-Anything-blue)](https://github.com/topics/awesome)

A curated list of **general AI methods for Anything**: AnyObject, AnyGeneration, AnyModel, AnyTask, etc.

[Contributions](https://github.com/VainF/Awesome-Anything/pulls) are welcome!

- [Awesome-Anything](#awesome-anything)
  - [AnyObject](#anyobject) - Segmentation, Detection, Classification, Medical Image, OCR, Pose, etc.
  - [AnyGeneration](#anygeneration) - Text-to-Image Generation, Editing, Inpainting, Style Transfer, Video Frame Interpolation, etc.
  - [Any3D](#any3d) - 3D Generation, Segmentation, etc.
  - [AnyModel](#anymodel) - Any Pruning, Any Quantization, Model Reuse.
  - [AnyTask](#anytask) - LLM Controller + ModelZoo, General Decoding, Multi-Task Learning.
  - [AnyX](#anyx) - Other Topics: Captioning, etc.
  - [Paper List](#paper-list-for-anything-ai)

## AnyObject

| Title & Authors | Intro | Useful Links |
|:----| :----: | :---:|
| [![Star](https://img.shields.io/github/stars/facebookresearch/segment-anything.svg?style=social&label=Star)](https://github.com/facebookresearch/segment-anything)
[**Segment Anything**](https://arxiv.org/abs/2304.02643)
*Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick*
> Meta Research
> Preprint'23

[[**Segment Anything (Project)**](https://github.com/facebookresearch/segment-anything)] | ![intro](https://github.com/facebookresearch/segment-anything/blob/main/assets/masks2.jpg?raw=true) | [[Github](https://github.com/facebookresearch/segment-anything)]
[[Page](https://segment-anything.com/)]
[[Demo](https://segment-anything.com/demo)] |
| [![Star](https://img.shields.io/github/stars/facebookresearch/ov-seg.svg?style=social&label=Star)](https://github.com/facebookresearch/ov-seg)
[**OVSeg: Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP**](https://arxiv.org/abs/2210.04150)
*Feng Liang, Bichen Wu, Xiaoliang Dai, Kunpeng Li, Yinan Zhao, Hang Zhang, Peizhao Zhang, Peter Vajda, Diana Marculescu*
> Meta Research
> Preprint'23

[[**OVSeg (Project)**](https://github.com/facebookresearch/ov-seg)] | image | [[Github](https://github.com/facebookresearch/ov-seg)]
[[Page](https://jeff-liangf.github.io/projects/ovseg/)] |
| [![Star](https://img.shields.io/github/stars/ronghanghu/seg_every_thing.svg?style=social&label=Star)](https://github.com/ronghanghu/seg_every_thing)
[**Learning to Segment Every Thing**](https://openaccess.thecvf.com/content_cvpr_2018/papers/Hu_Learning_to_Segment_CVPR_2018_paper.pdf)
*Ronghang Hu, Piotr Dollar, Kaiming He, Trevor Darrell, Ross Girshick*
> UC Berkeley, FAIR
> CVPR'18

[[**seg_every_thing (Project)**](https://github.com/ronghanghu/seg_every_thing)] | image | [[Github](https://github.com/ronghanghu/seg_every_thing)]
[[Page](https://github.com/ronghanghu/seg_every_thing)] |
| [![Star](https://img.shields.io/github/stars/IDEA-Research/Grounded-Segment-Anything.svg?style=social&label=Star)](https://github.com/IDEA-Research/Grounded-Segment-Anything)
[**Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection**](https://arxiv.org/abs/2303.05499)
*Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang*
> IDEA-Research
> Preprint'23

[[**Grounded-SAM**](https://github.com/IDEA-Research/Grounded-Segment-Anything), [**GroundingDINO (Project)**](https://github.com/IDEA-Research/GroundingDINO)] | ![intro](https://github.com/IDEA-Research/Grounded-Segment-Anything/raw/main/assets/grounded_sam_demo3_demo4.png) | [[Github](https://github.com/IDEA-Research/Grounded-Segment-Anything)]
[[Demo](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb)] |
| [![Star](https://img.shields.io/github/stars/baaivision/Painter.svg?style=social&label=Star)](https://github.com/baaivision/Painter)
[**SegGPT: Segmenting Everything In Context**](https://arxiv.org/abs/2304.03284)
*Xinlong Wang, Xiaosong Zhang, Yue Cao, Wen Wang, Chunhua Shen, Tiejun Huang*
> BAAI-Vision
> Preprint'23

[[**SegGPT (Project)**](https://github.com/baaivision/Painter)] | image | [[Github](https://github.com/baaivision/Painter)] |
| [**V3Det: Vast Vocabulary Visual Detection Dataset**](https://arxiv.org/abs/2304.03752)
*Jiaqi Wang, Pan Zhang, Tao Chu, Yuhang Cao, Yujie Zhou, Tong Wu, Bin Wang, Conghui He, Dahua Lin*
> Shanghai AI Laboratory, CUHK
> Preprint'23 | ![image](https://user-images.githubusercontent.com/18592211/230936730-4837c3ea-1af5-470c-8532-d0d7bd245df7.png) | -- |
| [![Star](https://img.shields.io/github/stars/kadirnar/segment-anything-video.svg?style=social&label=Star)](https://github.com/kadirnar/segment-anything-video)
[**segment-anything-video (Project)**](https://github.com/kadirnar/segment-anything-video)
Kadir Nar | ![intro](https://github.com/kadirnar/segment-anything-pip/releases/download/v0.2.2/metaseg_demo.gif) | [[Github](https://github.com/kadirnar/segment-anything-video)] |
| [![Star](https://img.shields.io/github/stars/achalddave/segment-any-moving.svg?style=social&label=Star)](https://github.com/achalddave/segment-any-moving)
[**Towards Segmenting Anything That Moves**](https://arxiv.org/abs/1902.03715)
*Achal Dave, Pavel Tokmakov, Deva Ramanan*
> ICCV'19 Workshop

[[**segment-any-moving (Project)**](https://github.com/achalddave/segment-any-moving)] | [video 1](http://www.achaldave.com/projects/anything-that-moves/videos/ZXN6A-tracked-with-objectness-trimmed.mp4), [video 2](http://www.achaldave.com/projects/anything-that-moves/videos/c95cd17749.mp4) | [[Github](https://github.com/achalddave/segment-any-moving)] |
| [![Star](https://img.shields.io/github/stars/fudan-zvg/Semantic-Segment-Anything.svg?style=social&label=Star)](https://github.com/fudan-zvg/Semantic-Segment-Anything)
[**Semantic Segment Anything**](https://github.com/fudan-zvg/Semantic-Segment-Anything)
*Jiaqi Chen, Zeyu Yang, Li Zhang*

[[**Semantic-Segment-Anything (Project)**](https://github.com/fudan-zvg/Semantic-Segment-Anything)] | image | [[Github](https://github.com/fudan-zvg/Semantic-Segment-Anything)] |
| [![Star](https://img.shields.io/github/stars/Cheems-Seminar/segment-anything-and-name-it.svg?style=social&label=Star)](https://github.com/Cheems-Seminar/segment-anything-and-name-it)
[Grounded Segment Anything: From Objects to **Parts** (Project)](https://github.com/Cheems-Seminar/segment-anything-and-name-it)
*Peize Sun* and *Shoufa Chen* | ![intro](https://github.com/Cheems-Seminar/segment-anything-and-name-it/raw/main/assets/logo.png) | [[Github](https://github.com/Cheems-Seminar/segment-anything-and-name-it)]
| [![Star](https://img.shields.io/github/stars/caoyunkang/GroundedSAM-zero-shot-anomaly-detection.svg?style=social&label=Star)](https://github.com/caoyunkang/GroundedSAM-zero-shot-anomaly-detection)
[**GroundedSAM-zero-shot-anomaly-detection (Project)**](https://github.com/caoyunkang/GroundedSAM-zero-shot-anomaly-detection)
*Yunkang Cao* | image | [[Github](https://github.com/caoyunkang/GroundedSAM-zero-shot-anomaly-detection)] |
| [![Star](https://img.shields.io/github/stars/anuragxel/salt.svg?style=social&label=Star)](https://github.com/anuragxel/salt)
[**Segment Anything Labelling Tool (SALT) (Project)**](https://github.com/anuragxel/salt)
*Anurag Ghosh* | ![intro](https://github.com/anuragxel/salt/raw/main/assets/how-it-works.gif) | [[Github](https://github.com/anuragxel/salt)] |
| [![Star](https://img.shields.io/github/stars/RockeyCoss/Prompt-Segment-Anything.svg?style=social&label=Star)](https://github.com/RockeyCoss/Prompt-Segment-Anything)
[**Prompt-Segment-Anything (Project)**](https://github.com/RockeyCoss/Prompt-Segment-Anything)
*Rockey* | ![intro](https://github.com/RockeyCoss/Prompt-Segment-Anything/raw/master/assets/example1.jpg) | [[Github](https://github.com/RockeyCoss/Prompt-Segment-Anything)]|
| [![Star](https://img.shields.io/github/stars/Li-Qingyun/sam-mmrotate.svg?style=social&label=Star)](https://github.com/Li-Qingyun/sam-mmrotate)
[**SAM-RBox (Project)**](https://github.com/Li-Qingyun/sam-mmrotate)
*Qingyun Li* | ![intro](https://user-images.githubusercontent.com/79644233/230732578-649086b4-7720-4450-9e87-25873bec07cb.png) | [[Github](https://github.com/Li-Qingyun/sam-mmrotate)] |
| [![Star](https://img.shields.io/github/stars/BingfengYan/VISAM.svg?style=social&label=Star)](https://github.com/BingfengYan/VISAM)
[**VISAM (Project)**](https://github.com/BingfengYan/VISAM)
*Feng Yan, Weixin Luo, Yujie Zhong, Yiyang Gan, Lin Ma* | ![intro](https://raw.githubusercontent.com/BingfengYan/MOTSAM/main/tmp.gif) | [[Github](https://github.com/BingfengYan/VISAM)] |
| [![Star](https://img.shields.io/github/stars/aliaksandr960/segment-anything-eo.svg?style=social&label=Star)](https://github.com/aliaksandr960/segment-anything-eo)
[**Segment Anything EO tools: Earth observation tools for Meta AI Segment Anything (Project)**](https://github.com/aliaksandr960/segment-anything-eo)
*Aliaksandr Hancharenka, Alexander Chichigin* | ![intro](https://github.com/aliaksandr960/segment-anything-eo/raw/main/title_sameo.png?raw=true) | [[Github](https://github.com/aliaksandr960/segment-anything-eo)] |
| [![Star](https://img.shields.io/github/stars/JoOkuma/napari-segment-anything.svg?style=social&label=Star)](https://github.com/JoOkuma/napari-segment-anything)
[**napari-segment-anything: Segment Anything Model (SAM) native Qt UI (Project)**](https://github.com/JoOkuma/napari-segment-anything)
*Jordão Bragantini, Kyle I S Harrington, Ajinkya Kulkarni* | image | [[Github](https://github.com/JoOkuma/napari-segment-anything)] |
| [![Star](https://img.shields.io/github/stars/amine0110/SAM-Medical-Imaging.svg?style=social&label=Star)](https://github.com/amine0110/SAM-Medical-Imaging)
[**SAM-Medical-Imaging: Segment Anything Model (SAM) for medical imaging (Project)**](https://github.com/amine0110/SAM-Medical-Imaging)
*amine0110* | ![image](https://user-images.githubusercontent.com/18592211/231660993-4b7fbdc8-8f0d-44ab-b8f4-1b330f9168e5.png) | [[Github](https://github.com/amine0110/SAM-Medical-Imaging)] |
| [![Star](https://img.shields.io/github/stars/yeungchenwa/OCR-SAM.svg?style=social&label=Star)](https://github.com/yeungchenwa/OCR-SAM)
[**OCR-SAM: Combining MMOCR with Segment Anything & Stable Diffusion. (Project)**](https://github.com/yeungchenwa/OCR-SAM)
*Zhenhua Yang, Qing Jiang* | ![image](https://github.com/yeungchenwa/OCR-SAM/raw/main/imgs/sam_vis.png) | [[Github](https://github.com/yeungchenwa/OCR-SAM)] |
| [![Star](https://img.shields.io/github/stars/Maybeshewill-CV/segment-anything-u-specify.svg?style=social&label=Star)](https://github.com/MaybeShewill-CV/segment-anything-u-specify)
[**segment-anything-u-specify: using sam+clip to segment any objs u specify with text prompts. (Project)**](https://github.com/MaybeShewill-CV/segment-anything-u-specify)
*MaybeShewill-CV* | ![image](https://github.com/MaybeShewill-CV/segment-anything-u-specify/blob/main/data/resources/test_baseball_insseg_result.jpg) | [[Github](https://github.com/MaybeShewill-CV/segment-anything-u-specify)] |
| [![Star](https://img.shields.io/github/stars/UX-Decoder/Segment-Everything-Everywhere-All-At-Once.svg?style=social&label=Star)](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)
[**Segment Everything Everywhere All at Once**](https://arxiv.org/abs/2304.06718)
*Xueyan Zou, Jianwei Yang, Hao Zhang, Feng Li, Linjie Li, Jianfeng Gao, Yong Jae Lee*

[[SEEM (Project)](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)] | ![image](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once/raw/main/assets/referring_video_visualize.png?raw=true) | [[Github](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)] |
| [![Star](https://img.shields.io/github/stars/lujiazho/SegDrawer.svg?style=social&label=Star)](https://github.com/lujiazho/SegDrawer)
[**SegDrawer: Simple static web-based mask drawer (Project)**](https://github.com/lujiazho/SegDrawer)
*Harry* | ![image](https://github.com/lujiazho/SegDrawer/raw/main/example/demo1.gif) | [[Github](https://github.com/lujiazho/SegDrawer)] |
| [![Star](https://img.shields.io/github/stars/kevmo314/magic-copy.svg?style=social&label=Star)](https://github.com/kevmo314/magic-copy)
[**Magic Copy: a Chrome extension (Project)**](https://github.com/kevmo314/magic-copy)
*kevmo314* | image | [[Github](https://github.com/kevmo314/magic-copy)] |
| [![Star](https://img.shields.io/github/stars/gaomingqi/Track-Anything.svg?style=social&label=Star)](https://github.com/gaomingqi/Track-Anything)
[**Track Anything: Segment Anything Meets Videos**](https://arxiv.org/abs/2304.11968)
*Jinyu Yang, Mingqi Gao, Zhe Li, Shang Gao, Fangjing Wang, Feng Zheng*

[[Track-Anything (Project)](https://github.com/gaomingqi/Track-Anything)] | ![Image](https://github.com/gaomingqi/Track-Anything/raw/master/assets/avengers.gif) | [[Github](https://github.com/gaomingqi/Track-Anything)]
[[Demo](https://huggingface.co/spaces/watchtowerss/Track-Anything)]|
| [![Star](https://img.shields.io/github/stars/ylqi/Count-Anything.svg?style=social&label=Star)](https://github.com/ylqi/Count-Anything)
[**Count Anything (Project)**](https://github.com/ylqi/Count-Anything)
*Liqi Yan* | image | [[Github](https://github.com/ylqi/Count-Anything)]|
| [![Star](https://img.shields.io/github/stars/z-x-yang/Segment-and-Track-Anything.svg?style=social&label=Star)](https://github.com/z-x-yang/Segment-and-Track-Anything)
[**Segment-and-Track-Anything (Project)**](https://github.com/z-x-yang/Segment-and-Track-Anything)
*Zongxin Yang* | image | [[Github](https://github.com/z-x-yang/Segment-and-Track-Anything)]|
| [![Star](https://img.shields.io/github/stars/luminxu/Pose-for-Everything.svg?style=social&label=Star)](https://github.com/luminxu/Pose-for-Everything)
[**Pose for Everything: Towards Category-Agnostic Pose Estimation**](https://arxiv.org/abs/2207.10387)
*Lumin Xu\*, Sheng Jin\*, Wang Zeng, Wentao Liu, Chen Qian, Wanli Ouyang, Ping Luo, Xiaogang Wang*
> CUHK, SenseTime
> ECCV'22 Oral

[[**Pose-for-Everything (Project)**](https://github.com/luminxu/Pose-for-Everything)] | ![image](https://github.com/luminxu/Pose-for-Everything/blob/main/assets/intro.png) | [[Github](https://github.com/luminxu/Pose-for-Everything)]|
| [![Star](https://img.shields.io/github/stars/Luodian/RelateAnything.svg?style=social&label=Star)](https://github.com/Luodian/RelateAnything)
[**Relate Anything Model (Project)**](https://github.com/Luodian/RelateAnything)
*Zujin Guo\*, Bo Li\*, Jingkang Yang\*, Zijian Zhou\*, Ziwei Liu*
> MMLab@NTU
> VisCom Lab, KCL/TongJi | ![intro](https://github.com/Luodian/RelateAnything/raw/main/assets/soccer.png) | [Github](https://github.com/Luodian/RelateAnything) |
| [![Star](https://img.shields.io/github/stars/Jun-CEN/SegmentAnyRGBD.svg?style=social&label=Star)](https://github.com/Jun-CEN/SegmentAnyRGBD)
[**SegmentAnyRGBD (Project)**](https://github.com/Jun-CEN/SegmentAnyRGBD)
Jun Cen, Yizheng Wu, Xingyi Li, Jingkang Yang, Yixuan Pei, Lingdong Kong
> Visual Intelligence Lab@HKUST,
> HUST,
> MMLab@NTU,
> Smiles Lab@XJTU,
> NUS | ![intro](https://github.com/Jun-CEN/SegmentAnyRGBD/raw/main/resources/flowchart_3.png) | [Github](https://github.com/Jun-CEN/SegmentAnyRGBD) |
| [**Type-to-Track: Retrieve Any Object via Prompt-based Tracking**](https://arxiv.org/abs/2305.13495)
*Pha Nguyen, Kha Gia Quach, Kris Kitani, Khoa Luu*
> CVIU@UArk,
> pdActive Inc.,
> RI@CMU | ![intro](https://anonymo-user.github.io/images/type-to-track.png) | [[ArXiv](https://arxiv.org/abs/2305.13495)]
[[Page](https://uark-cviu.github.io/Type-to-Track/)] |
| [![Star](https://img.shields.io/github/stars/jamesjg/FoodSAM.svg?style=social&label=Star)](https://github.com/jamesjg/FoodSAM)
[**FoodSAM (Project)**](https://github.com/jamesjg/FoodSAM)
Xing Lan, Jiayi Lyu, Hanyu Jiang, Kun Dong, Zehai Niu, Yi Zhang, Jian Xue
> UCAS | ![intro](https://github.com/jamesjg/FoodSAM/blob/main/assets/crossdomain.png) | [[Github](https://github.com/jamesjg/FoodSAM)]
[[Page](https://starhiking.github.io/FoodSAM_Page/)]
[[ArXiv](https://arxiv.org/abs/2308.05938)]|
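
Most of the tools in this table build on SAM's promptable-mask interface from the Segment Anything entry at the top. A minimal sketch of that interface, assuming the official `segment-anything` package is installed and the ViT-H checkpoint has been downloaded from the project page (the image path and click coordinates are illustrative):

```python
# Point-prompted segmentation with the official segment-anything API.
# Assumes `pip install segment-anything` plus the ViT-H checkpoint from
# the project page; the image path and click location are placeholders.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # runs the heavy image encoder once

# One foreground click at (x, y); label 1 = foreground, 0 = background.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return three candidate masks
)
best_mask = masks[np.argmax(scores)]  # (H, W) boolean array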



## AnyGeneration
| Title & Authors | Intro | Useful Links |
|:----| :----: | :---:|
| [![Star](https://img.shields.io/github/stars/CompVis/stable-diffusion.svg?style=social&label=Star)](https://github.com/CompVis/stable-diffusion)
[**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752)
*Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer*
> LMU München, Runway ML
> CVPR'22

[[**Stable-Diffusion (Project)**](https://github.com/CompVis/stable-diffusion)] | ![intro](https://r2.stablediffusionweb.com/images/stable-diffusion-demo-2.webp) | [[Github](https://github.com/CompVis/stable-diffusion)]
[[Page](https://stablediffusionweb.com/)]
[[Demo](https://stablediffusionweb.com/#demo)] |
| [![Star](https://img.shields.io/github/stars/lllyasviel/ControlNet.svg?style=social&label=Star)](https://github.com/lllyasviel/ControlNet)
[**Adding Conditional Control to Text-to-Image Diffusion Models**](https://arxiv.org/abs/2302.05543)
*Lvmin Zhang, Maneesh Agrawala*
> Stanford University
> Preprint'23

[[**ControlNet (Project)**](https://github.com/lllyasviel/ControlNet)] | ![intro](https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_16_output_1.jpeg) | [[Github](https://github.com/lllyasviel/ControlNet)]
[[Demo](https://huggingface.co/spaces/hysts/ControlNet)] |
| [**GigaGAN: Large-scale GAN for Text-to-Image Synthesis**](https://arxiv.org/abs/2303.05511)
*Minguk Kang, Jun-Yan Zhu, Richard Zhang, Jaesik Park, Eli Shechtman, Sylvain Paris, Taesung Park*
> POSTECH, Carnegie Mellon University, Adobe Research
> CVPR'23 | image | [[Page](https://mingukkang.github.io/GigaGAN/)] |
| [![Star](https://img.shields.io/github/stars/geekyutao/Inpaint-Anything.svg?style=social&label=Star)](https://github.com/geekyutao/Inpaint-Anything)
[**Inpaint-Anything: Segment Anything Meets Image Inpainting (Project)**](https://github.com/geekyutao/Inpaint-Anything)
*Tao Yu* | ![intro](https://github.com/geekyutao/Inpaint-Anything/raw/main/example/MainFramework.png) | [[Github](https://github.com/geekyutao/Inpaint-Anything)] |
| [![Star](https://img.shields.io/github/stars/feizc/IEA.svg?style=social&label=Star)](https://github.com/feizc/IEA)
[**IEA: Image Editing Anything (Project)**](https://github.com/feizc/IEA)
*Zhengcong Fei* | ![intro](https://user-images.githubusercontent.com/37614046/230707537-206c0714-de32-41cd-a277-203fd57cd300.png) | [[Github](https://github.com/feizc/IEA)] |
| [![Star](https://img.shields.io/github/stars/sail-sg/EditAnything.svg?style=social&label=Star)](https://github.com/sail-sg/EditAnything)
[**EditAnything (Project)**](https://github.com/sail-sg/EditAnything)
*Shanghua Gao, Pan Zhou* | ![intro](https://github.com/sail-sg/EditAnything/raw/main/images/edit_sample1.jpg) | [[Github](https://github.com/sail-sg/EditAnything)] |
| [![Star](https://img.shields.io/github/stars/continue-revolution/sd-webui-segment-anything.svg?style=social&label=Star)](https://github.com/continue-revolution/sd-webui-segment-anything)
[**Segment Anything for Stable Diffusion Webui (Project)**](https://github.com/continue-revolution/sd-webui-segment-anything)
*Chengsong Zhang* | image | [[Github](https://github.com/continue-revolution/sd-webui-segment-anything)] |
| [![Star](https://img.shields.io/github/stars/Curt-Park/segment-anything-with-clip.svg?style=social&label=Star)](https://github.com/Curt-Park/segment-anything-with-clip)
[**Segment Anything with Clip (Project)**](https://github.com/Curt-Park/segment-anything-with-clip)
*Jinwoo Park* | ![intro](https://user-images.githubusercontent.com/14961526/230437084-79ef6e02-a254-421e-bd4c-32e87415c623.png) | [[Github](https://github.com/Curt-Park/segment-anything-with-clip)] |
| [![Star](https://img.shields.io/github/stars/showlab/ShowAnything.svg?style=social&label=Star)](https://github.com/showlab/ShowAnything)
[**ShowAnything: Edit and Generate Anything In Image and Video (Project)**](https://github.com/showlab/ShowAnything)
*Showlab, NUS* | ![intro](https://github.com/showlab/ShowAnything/blob/main/assets/video/showcase_3.gif) | [Github](https://github.com/showlab/ShowAnything) |
| [![Star](https://img.shields.io/github/stars/Huage001/Transfer-Any-Style.svg?style=social&label=Star)](https://github.com/Huage001/Transfer-Any-Style)
[**Transfer-Any-Style: About An interactive demo based on Segment-Anything for style transfer (Project)**](https://github.com/Huage001/Transfer-Any-Style)
*LV-Lab, NUS* | ![intro](https://github.com/Huage001/Transfer-Any-Style/raw/main/picture/demo1.gif) | [Github](https://github.com/Huage001/Transfer-Any-Style) |
| [![Star](https://img.shields.io/github/stars/Zeqiang-Lai/Anything2Image.svg?style=social&label=Star)](https://github.com/Zeqiang-Lai/Anything2Image)
[**Anything To Image: Generate image from anything with ImageBind and Stable Diffusion (Project)**](https://github.com/Zeqiang-Lai/Anything2Image)
*Zeqiang-Lai* | ![intro](https://github.com/VainF/Awesome-Anything/assets/26198430/c245f4f9-939e-41d9-b919-663426c83f90) | [Github](https://github.com/Zeqiang-Lai/Anything2Image) |
| [![Star](https://img.shields.io/github/stars/zzh-tech/InterpAny-Clearer.svg?style=social&label=Star)](https://github.com/zzh-tech/InterpAny-Clearer)
[**Clearer Frames, Anytime: Resolving Velocity Ambiguity in Video Frame Interpolation**](https://zzh-tech.github.io/InterpAny-Clearer/)
*Zhihang Zhong, Gurunandan Krishnan, Xiao Sun, Yu Qiao, Sizhuo Ma, Jian Wang*
> Shanghai AI Laboratory, Snap Inc.
> Preprint'23 | ![intro](https://github.com/zzh-tech/InterpAny-Clearer/blob/site/docs/static/InterpAny-Clearer-Demo-Small.gif) | [[Github]](https://github.com/zzh-tech/InterpAny-Clearer)
[[Page]](https://zzh-tech.github.io/InterpAny-Clearer/)
[[ArXiv]](https://arxiv.org/abs/2311.08007) |
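
For orientation, here is a minimal text-to-image sketch for the Stable Diffusion entry above. It uses the Hugging Face `diffusers` wrapper rather than the original CompVis codebase, and the checkpoint id and prompt are illustrative:

```python
# Text-to-image with a latent diffusion model via the `diffusers` library.
# The checkpoint id and prompt below are illustrative choices.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=30,
    guidance_scale=7.5,  # classifier-free guidance strength
).images[0]
image.save("lighthouse.png")
```

ControlNet-style conditioning plugs into the same pipeline family (`StableDiffusionControlNetPipeline` in `diffusers`) by supplying an extra control image such as edges, depth, or pose.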



## Any3D
| Title & Authors | Intro | Useful Links |
|:----| :----: | :---:|
| [![Star](https://img.shields.io/github/stars/Pointcept/OpenIns3D.svg?style=social&label=Star)](https://github.com/Pointcept/OpenIns3D)
[**OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation**](https://arxiv.org/abs/2309.00616)
*Zhening Huang, Xiaoyang Wu, Xi Chen, Hengshuang Zhao, Lei Zhu, Joan Lasenby*
> Cambridge, HKU, HKUST

[[**OpenIns3D**](https://github.com/Pointcept/OpenIns3D)] | ![image](https://github.com/Pointcept/OpenIns3D/blob/main/demo/3D_reasoning_seg.gif) | [[Github](https://github.com/Pointcept/OpenIns3D)]
[[Page](https://zheninghuang.github.io/OpenIns3D/)]|
| [![Star](https://img.shields.io/github/stars/Anything-of-anything/Anything-3D.svg?style=social&label=Star)](https://github.com/Anything-of-anything/Anything-3D)
[**Anything-3D: Segment-Anything + 3D, Let's lift the anything to 3D (Project)**](https://github.com/Anything-of-anything/Anything-3D)
*LV-Lab, NUS* | ![intro](https://github.com/Anything-of-anything/Anything-3D/raw/main/novel-view/assets/3.jpeg)
![intro2](https://github.com/Anything-of-anything/Anything-3D/raw/main/novel-view/assets/2.jpeg) | [Github](https://github.com/Anything-of-anything/Anything-3D) |
| [![Star](https://img.shields.io/github/stars/nexuslrf/SAM-3D-Selector.svg?style=social&label=Star)](https://github.com/nexuslrf/SAM-3D-Selector)
[**SAM 3D Selector: Utilizing segment-anything to help the region selection of 3D point cloud or mesh. (Project)**](https://github.com/nexuslrf/SAM-3D-Selector)
*Nexuslrf* | ![intro](https://github.com/nexuslrf/SAM-3D-Selector/raw/main/figs/demo_1.gif) | [Github](https://github.com/nexuslrf/SAM-3D-Selector) |
| [![Star](https://img.shields.io/github/stars/dvlab-research/3D-Box-Segment-Anything.svg?style=social&label=Star)](https://github.com/dvlab-research/3D-Box-Segment-Anything)
[**3D-Box via Segment Anything. (Project)**](https://github.com/dvlab-research/3D-Box-Segment-Anything)
*dvlab-research* | ![image](https://github.com/dvlab-research/3D-Box-Segment-Anything/raw/main/images/sam-voxelnext.png) | [[Github](https://github.com/dvlab-research/3D-Box-Segment-Anything)] |
| [![Star](https://img.shields.io/github/stars/Pointcept/SegmentAnything3D.svg?style=social&label=Star)](https://github.com/Pointcept/SegmentAnything3D)
[**SAM3D: Segment Anything in 3D Scenes**](https://arxiv.org/abs/2306.03908)
*Yunhan Yang, Xiaoyang Wu, Tong He, Hengshuang Zhao, Xihui Liu*
> Shanghai AI Laboratory, HKU

[[**SAM3D: Segment Anything in 3D Scenes (Project)**](https://github.com/Pointcept/SegmentAnything3D)] | ![image](https://github.com/Pointcept/SegmentAnything3D/raw/main/docs/0.png) | [[Github](https://github.com/Pointcept/SegmentAnything3D)] |
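
A step shared by several of these projects (e.g. SAM3D and 3D-Box via Segment Anything) is lifting a 2D SAM mask into 3D using a depth map and camera intrinsics. The sketch below shows only that generic back-projection step, with a synthetic mask and made-up pinhole intrinsics, not any project's actual code:

```python
# Back-project masked pixels into a camera-frame point cloud using a depth
# map and pinhole intrinsics. Depth values and intrinsics are synthetic.
import numpy as np

def mask_to_points(mask, depth, fx, fy, cx, cy):
    """mask: (H, W) bool; depth: (H, W) metres -> (N, 3) points."""
    v, u = np.nonzero(mask & (depth > 0))   # pixel rows/cols inside the mask
    z = depth[v, u]
    x = (u - cx) * z / fx                   # pinhole model: x = (u - cx) z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

depth = np.random.uniform(0.5, 3.0, (480, 640))   # stand-in depth map
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 300:380] = True                     # pretend SAM produced this
points = mask_to_points(mask, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)  # (6400, 3)
```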



## AnyModel
| Title & Authors | Intro | Useful Links |
|:----| :----: | :---:|
| [![Star](https://img.shields.io/github/stars/VainF/Torch-Pruning.svg?style=social&label=Star)](https://github.com/VainF/Torch-Pruning)
[**DepGraph: Towards Any Structural Pruning**](https://arxiv.org/abs/2301.12900)
*Gongfan Fang, Xinyin Ma, Mingli Song, Michael Bi Mi, Xinchao Wang*
> Learning and Vision Lab @ NUS
> CVPR'23

[[**Torch-Pruning (Project)**](https://github.com/VainF/Torch-Pruning)] | ![intro](https://github.com/VainF/Torch-Pruning/raw/master/assets/intro.png) | [[Github](https://github.com/VainF/Torch-Pruning)]
[[Demo](https://colab.research.google.com/drive/1TRvELQDNj9PwM-EERWbF3IQOyxZeDepp?usp=sharing)] |
| [![Star](https://img.shields.io/github/stars/ModelTC/MQBench.svg?style=social&label=Star)](https://github.com/ModelTC/MQBench)
[**MQBench: Towards Reproducible and Deployable Model Quantization Benchmark**](https://arxiv.org/abs/2111.03759)
*Yuhang Li, Mingzhu Shen, Jian Ma, Yan Ren, Mingxin Zhao, Qi Zhang, Ruihao Gong, Fengwei Yu, Junjie Yan*
> SenseTime Research
> NeurIPS'21

[[**MQBench (Project)**](https://github.com/ModelTC/MQBench)] | ![intro](http://mqbench.tech/assets/img/overview.png) | [[Github](https://github.com/ModelTC/MQBench)]
[[Page](http://mqbench.tech/)] |
| [![Star](https://img.shields.io/github/stars/tianyic/only_train_once.svg?style=social&label=Star)](https://github.com/tianyic/only_train_once)
[**OTOv2: Automatic, Generic, User-Friendly**](https://openreview.net/pdf?id=7ynoX1ojPMt)
*Tianyi Chen, Luming Liang, Tianyu Ding, Ilya Zharkov*
> Microsoft
> ICLR'23

[[**Only Train Once (Project)**](https://github.com/tianyic/only_train_once)] | ![intro](https://user-images.githubusercontent.com/8930611/230513048-e07b09a2-b29b-49ad-a47f-52630337ab2a.png) | [[Github](https://github.com/tianyic/only_train_once)] |
| [![Star](https://img.shields.io/github/stars/Adamdad/DeRy.svg?style=social&label=Star)](https://github.com/Adamdad/DeRy)
[**Deep Model Reassembly**](https://arxiv.org/abs/2210.17409)
*Xingyi Yang, Daquan Zhou, Songhua Liu, Jingwen Ye, Xinchao Wang*
> LV Lab, NUS
> NeurIPS'22

[[**Deep Model Reassembly (Project)**](https://github.com/Adamdad/DeRy)]
| ![intro](https://github.com/Adamdad/DeRy/raw/main/assets/pipeline.png) | [[Github](https://github.com/Adamdad/DeRy)]
[[Page](https://adamdad.github.io/dery/)] |
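
DepGraph is implemented in the Torch-Pruning library linked above: a dependency graph groups structurally coupled layers so that pruning one layer consistently prunes everything tied to it (batch norms, downstream convolutions, residual partners). A minimal sketch following the repository's documented usage; API details may differ across versions:

```python
# Structural pruning with Torch-Pruning's dependency graph. Removing two
# output channels of conv1 automatically propagates to all coupled layers.
import torch
import torch_pruning as tp
from torchvision.models import resnet18

model = resnet18()
example_inputs = torch.randn(1, 3, 224, 224)

# Trace the model once to build the layer dependency graph.
DG = tp.DependencyGraph().build_dependency(model, example_inputs=example_inputs)

# Group everything coupled to channels 0 and 1 of the first convolution.
group = DG.get_pruning_group(model.conv1, tp.prune_conv_out_channels, idxs=[0, 1])

if DG.check_pruning_group(group):  # refuse groups that would empty a layer
    group.prune()

print(model.conv1)  # out_channels: 64 -> 62
```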



## AnyTask
| Title & Authors | Intro | Useful Links |
|:----| :----: | :---:|
| [![Star](https://img.shields.io/github/stars/microsoft/JARVIS.svg?style=social&label=Star)](https://github.com/microsoft/JARVIS)
[**HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face**](https://arxiv.org/abs/2303.17580)
*Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang*
> Zhejiang University, MSRA
> Preprint'23

[[**Jarvis (Project)**](https://github.com/microsoft/JARVIS)] | | [[Github](https://github.com/microsoft/JARVIS)]
[[Demo](https://huggingface.co/spaces/microsoft/HuggingGPT)] |
| [**TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs**](https://arxiv.org/abs/2303.16434)
*Yaobo Liang, Chenfei Wu, Ting Song, Wenshan Wu, Yan Xia, Yu Liu, Yang Ou, Shuai Lu, Lei Ji, Shaoguang Mao, Yun Wang, Linjun Shou, Ming Gong, Nan Duan*
> Microsoft
> Preprint'23 | ![intro](https://github.com/microsoft/visual-chatgpt/raw/main/assets/overview.png) | [[Github](https://github.com/microsoft/visual-chatgpt/tree/main/TaskMatrix.AI)] |
| [![Star](https://img.shields.io/github/stars/microsoft/X-Decoder.svg?style=social&label=Star)](https://github.com/microsoft/X-Decoder)
[**Generalized Decoding for Pixel, Image and Language**](https://arxiv.org/abs/2212.11270)
*Xueyan Zou, Zi-Yi Dou, Jianwei Yang, Zhe Gan, Linjie Li, Chunyuan Li, Xiyang Dai, Harkirat Behl, Jianfeng Wang, Lu Yuan, Nanyun Peng, Lijuan Wang, Yong Jae Lee, Jianfeng Gao*
> Microsoft
> CVPR'23

[[**X-Decoder (Project)**](https://github.com/microsoft/X-Decoder/)] | ![intro](https://user-images.githubusercontent.com/11957155/210801832-c9143c42-ef65-4501-95a5-0d54749dcc52.gif) | [[Github](https://github.com/microsoft/X-Decoder/)]
[[Page](https://x-decoder-vl.github.io)]
[[Demo](https://huggingface.co/spaces/xdecoder/Demo)] |
| [![Star](https://img.shields.io/github/stars/huawei-noah/Pretrained-IPT.svg?style=social&label=Star)](https://github.com/huawei-noah/Pretrained-IPT)
[**Pre-Trained Image Processing Transformer**](https://arxiv.org/abs/2012.00364)
*Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao*
> Huawei-Noah
> CVPR'21

[[**Pretrained-IPT (Project)**](https://github.com/huawei-noah/Pretrained-IPT)] | ![intro](https://github.com/huawei-noah/Pretrained-IPT/raw/main/images/intro.png) | [[Github](https://github.com/huawei-noah/Pretrained-IPT)] |
| [![Star](https://img.shields.io/github/stars/agiresearch/OpenAGI.svg?style=social&label=Star)](https://github.com/agiresearch/OpenAGI)
[**OpenAGI: When LLM Meets Domain Experts**](https://arxiv.org/pdf/2304.04370.pdf)
*Yingqiang Ge, Wenyue Hua, Jianchao Ji, Juntao Tan, Shuyuan Xu, Yongfeng Zhang*
> Rutgers University
> Preprint'23

[[**OpenAGI (Project)**](https://github.com/agiresearch/OpenAGI)] | ![intro](https://github.com/agiresearch/OpenAGI/raw/main/image/pipeline.png) | [Github](https://github.com/agiresearch/OpenAGI) |
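
HuggingGPT, TaskMatrix.AI, and OpenAGI share one controller pattern: an LLM decomposes the request into subtasks, dispatches each to a model in a zoo, then composes the results into an answer. The sketch below is purely schematic; `llm_plan`, `MODEL_ZOO`, and `llm_summarize` are hypothetical stand-ins, not APIs from any of these projects:

```python
# Schematic LLM-as-controller loop. All names here are hypothetical
# stand-ins for illustration, not code from HuggingGPT or TaskMatrix.AI.
from typing import Callable, Dict, List

MODEL_ZOO: Dict[str, Callable[[dict], dict]] = {
    "image-segmentation": lambda args: {"masks": f"masks for {args['image']}"},
    "image-captioning": lambda args: {"caption": f"caption for {args['image']}"},
}

def llm_plan(request: str) -> List[dict]:
    """Stand-in for the LLM planner: decompose a request into typed subtasks."""
    return [
        {"task": "image-segmentation", "args": {"image": "photo.jpg"}},
        {"task": "image-captioning", "args": {"image": "photo.jpg"}},
    ]

def llm_summarize(request: str, results: List[dict]) -> str:
    """Stand-in for the LLM that writes the final response."""
    return f"For {request!r}: " + "; ".join(map(str, results))

def controller(request: str) -> str:
    results = [MODEL_ZOO[step["task"]](step["args"]) for step in llm_plan(request)]
    return llm_summarize(request, results)

print(controller("segment and describe photo.jpg"))
```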



## AnyX
| Title & Authors | Intro | Useful Links |
|:----| :----: | :---:|
| [![Star](https://img.shields.io/github/stars/ttengwang/Caption-Anything.svg?style=social&label=Star)](https://github.com/ttengwang/Caption-Anything)
[**Caption Anything: Interactive Image Description with Diverse Multimodal Controls**](https://arxiv.org/abs/2305.02677)
*Teng Wang, Jinrui Zhang, Junjie Fei, Hao Zheng, Yunlong Tang, Zhe Li, Mingqi Gao, Shanshan Zhao*
> SUSTech VIP Lab
> Preprint'23

[**Caption Anything (Project)**](https://github.com/ttengwang/Caption-Anything) | ![intro](https://github.com/ttengwang/Caption-Anything/raw/main/assets/qingming.gif) | [[Github](https://github.com/ttengwang/Caption-Anything)]
[[Demo](https://huggingface.co/spaces/TencentARC/Caption-Anything)] |
| [![Star](https://img.shields.io/github/stars/showlab/Image2Paragraph.svg?style=social&label=Star)](https://github.com/showlab/Image2Paragraph)
[**Image2Paragraph: Transform Image into Unique Paragraph (Project)**](https://github.com/showlab/Image2Paragraph)
*Jinpeng Wang* | ![intro](https://github.com/showlab/Image2Paragraph/raw/main/output/2_result.png) | [Github](https://github.com/showlab/Image2Paragraph) |
...
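
Caption Anything's core composition is "segment, then describe": crop the user-selected SAM region and hand it to a captioner. A sketch of that idea using BLIP from the `transformers` library; the mask below is synthetic, and the project's actual models and prompt controls differ:

```python
# Caption a selected region: crop the bounding box of a (synthetic) mask
# and pass it to BLIP. In practice the mask would come from SamPredictor.
import numpy as np
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")

image = Image.open("example.jpg").convert("RGB")
mask = np.zeros((image.height, image.width), dtype=bool)
mask[100:300, 150:400] = True  # stand-in for a SAM mask

ys, xs = np.nonzero(mask)
region = image.crop((xs.min(), ys.min(), xs.max() + 1, ys.max() + 1))

inputs = processor(images=region, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```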



# Paper List for Anything AI

A paper list for Anything AI

## AnyObject
| Paper | First Author | Venue | Topic |
| :--- | :---: | :--: | :--: |
| [Segment Anything](https://arxiv.org/abs/2304.02643) | Alexander Kirillov | Preprint'23 | Segmentation |
| [Learning to Segment Every Thing](https://openaccess.thecvf.com/content_cvpr_2018/papers/Hu_Learning_to_Segment_CVPR_2018_paper.pdf) | Ronghang Hu | CVPR'18 | Segmentation |
| [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) | Shilong Liu | Preprint'23 | Grounding+Detection |
| [SegGPT: Segmenting Everything In Context](https://arxiv.org/abs/2304.03284) | Xinlong Wang | Preprint'23 | Segmentation |
| [V3Det: Vast Vocabulary Visual Detection Dataset](https://arxiv.org/abs/2304.03752) | Jiaqi Wang | Preprint'23 | Dataset |
| [Pose for Everything: Towards Category-Agnostic Pose Estimation](https://arxiv.org/abs/2207.10387) | Lumin Xu | ECCV'22 Oral | Pose |
| [Type-to-Track: Retrieve Any Object via Prompt-based Tracking](https://arxiv.org/abs/2305.13495) | Pha Nguyen | NeurIPS'23 | Grounding+Tracking |

## AnyGeneration
| Paper | First Author | Venue | Topic |
| :--- | :---: | :--: | :--: |
| [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) | Robin Rombach | CVPR'22 | Text-to-Image Generation |
| [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) | Lvmin Zhang | Preprint'23 | Controllable Generation |
| [GigaGAN: Large-scale GAN for Text-to-Image Synthesis](https://arxiv.org/abs/2303.05511) | Minguk Kang | CVPR'23 | Large-scale GAN |
| [Inpaint Anything: Segment Anything Meets Image Inpainting](https://arxiv.org/abs/2304.06790) | Tao Yu | Preprint'23 | Inpainting |

## AnyModel
| Paper | First Author | Venue | Topic |
| :--- | :---: | :--: | :--: |
| [DepGraph: Towards Any Structural Pruning](https://arxiv.org/abs/2301.12900) | Gongfan Fang | CVPR'23 | Network Pruning |
| [MQBench: Towards Reproducible and Deployable Model Quantization Benchmark](https://arxiv.org/abs/2111.03759) | Yuhang Li | NeurIPS'21 | Network Quantization |
| [OTOv2: Automatic, Generic, User-Friendly](https://openreview.net/pdf?id=7ynoX1ojPMt) | Tianyi Chen | ICLR'23 | Network Pruning |
| [Deep Model Reassembly](https://arxiv.org/abs/2210.17409) | Xingyi Yang | NeurIPS'22 | Model Reuse |

## AnyTask
| Paper | First Author | Venue | Topic |
| :--- | :---: | :--: | :--: |
| [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](https://arxiv.org/abs/2303.17580) | Yongliang Shen | Preprint'23 | Modelzoo + LLM |
| [TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs](https://arxiv.org/abs/2303.16434) | Yaobo Liang | Preprint'23 | Modelzoo + LLM |
| [Generalized Decoding for Pixel, Image and Language](https://arxiv.org/abs/2212.11270) | Xueyan Zou | CVPR'23 | Multi-Task Learning |
| [Pre-Trained Image Processing Transformer](https://arxiv.org/abs/2012.00364) | Hanting Chen | CVPR'21 | Low-level Vision |