
Awesome-Anything

General AI methods for Anything: AnyObject, AnyGeneration, AnyModel, AnyTask, AnyX
https://github.com/VainF/Awesome-Anything


  • AnyObject

    • [**Segment Anything**](https://arxiv.org/abs/2304.02643) <br> *Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick* <br> Meta Research, Preprint'23 <br> [[Github](https://github.com/facebookresearch/segment-anything)] [[Page](https://segment-anything.com/)] [[Demo](https://segment-anything.com/demo)] (a minimal usage sketch appears after this list)
    • [**OVSeg: Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP**](https://arxiv.org/abs/2210.04150) <br> *Feng Liang, Bichen Wu, Xiaoliang Dai, Kunpeng Li, Yinan Zhao, Hang Zhang, Peizhao Zhang, Peter Vajda, Diana Marculescu* <br> Meta Research, Preprint'23 <br> [[Github](https://github.com/facebookresearch/ov-seg)] [[Page](https://jeff-liangf.github.io/projects/ovseg/)]
    • [**Learning to Segment Every Thing**](https://arxiv.org/abs/1711.10370) <br> *Ronghang Hu, Piotr Dollár, Kaiming He, Trevor Darrell, Ross Girshick* <br> CVPR'18 <br> [[Github](https://github.com/ronghanghu/seg_every_thing)]
    • [**Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection**](https://arxiv.org/abs/2303.05499) <br> *Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang* <br> IDEA-Research, Preprint'23 <br> [[Grounded-SAM (Github)](https://github.com/IDEA-Research/Grounded-Segment-Anything)] [[GroundingDINO (Github)](https://github.com/IDEA-Research/GroundingDINO)] [[Demo](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb)]
    • [**SegGPT: Segmenting Everything In Context**](https://arxiv.org/abs/2304.03284) <br> BAAI-Vision, Preprint'23 <br> [[Github](https://github.com/baaivision/Painter)]
    • [**segment-anything-video (Project)**](https://github.com/kadirnar/segment-anything-video) <br> *Kadir Nar* <br> [[Github](https://github.com/kadirnar/segment-anything-video)]
    • [**Towards Segmenting Anything That Moves**](https://arxiv.org/abs/1902.03715) <br> *Achal Dave, Pavel Tokmakov, Deva Ramanan* <br> ICCV'19 Workshop <br> [[Github](https://github.com/achalddave/segment-any-moving)]
    • [**Semantic Segment Anything**](https://github.com/fudan-zvg/Semantic-Segment-Anything) <br> *Jiaqi Chen, Zeyu Yang, Li Zhang* <br> [[Github](https://github.com/fudan-zvg/Semantic-Segment-Anything)]
    • [**Grounded Segment Anything: From Objects to Parts (Project)**](https://github.com/Cheems-Seminar/segment-anything-and-name-it) <br> *Peize Sun, Shoufa Chen* <br> [[Github](https://github.com/Cheems-Seminar/segment-anything-and-name-it)]
    • [**GroundedSAM-zero-shot-anomaly-detection (Project)**](https://github.com/caoyunkang/GroundedSAM-zero-shot-anomaly-detection) <br> *Yunkang Cao* <br> [[Github](https://github.com/caoyunkang/GroundedSAM-zero-shot-anomaly-detection)]
    • [**Prompt-Segment-Anything (Project)**](https://github.com/RockeyCoss/Prompt-Segment-Anything) <br> *Rockey* <br> [[Github](https://github.com/RockeyCoss/Prompt-Segment-Anything)]
    • [**SAM-RBox (Project)**](https://github.com/Li-Qingyun/sam-mmrotate) <br> *Qingyun Li* <br> [[Github](https://github.com/Li-Qingyun/sam-mmrotate)]
    • [**Segment Anything EO tools: Earth observation tools for Meta AI Segment Anything (Project)**](https://github.com/aliaksandr960/segment-anything-eo) <br> *Aliaksandr Hancharenka, Alexander Chichigin* <br> [[Github](https://github.com/aliaksandr960/segment-anything-eo)]
    • [**napari-segment-anything: Segment Anything Model (SAM) native Qt UI (Project)**](https://github.com/JoOkuma/napari-segment-anything) <br> *Jordão Bragantini, Kyle I S Harrington, Ajinkya Kulkarni* <br> [[Github](https://github.com/JoOkuma/napari-segment-anything)]
    • [**SAM-Medical-Imaging: Segment Anything Model (SAM) for medical imaging (Project)**](https://github.com/amine0110/SAM-Medical-Imaging) <br> *amine0110* <br> [[Github](https://github.com/amine0110/SAM-Medical-Imaging)]
    • [**OCR-SAM: Combining MMOCR with Segment Anything & Stable Diffusion (Project)**](https://github.com/yeungchenwa/OCR-SAM) <br> *Zhenhua Yang, Qing Jiang* <br> [[Github](https://github.com/yeungchenwa/OCR-SAM)]
    • [**segment-anything-u-specify: use SAM + CLIP to segment any objects you specify with text prompts (Project)**](https://github.com/MaybeShewill-CV/segment-anything-u-specify) <br> *MaybeShewill-CV* <br> [[Github](https://github.com/MaybeShewill-CV/segment-anything-u-specify)]
    • [**Segment Everything Everywhere All at Once**](https://arxiv.org/abs/2304.06718) <br> *Xueyan Zou, Jianwei Yang, Hao Zhang, Feng Li, Linjie Li, Jianfeng Gao, Yong Jae Lee* <br> [[Github](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)]
    • [**SegDrawer: web-based mask drawer (Project)**](https://github.com/lujiazho/SegDrawer) <br> *Harry* <br> [[Github](https://github.com/lujiazho/SegDrawer)]
    • [**Magic Copy: a Chrome extension (Project)**](https://github.com/kevmo314/magic-copy) <br> *kevmo314* <br> [[Github](https://github.com/kevmo314/magic-copy)]
    • [**Track Anything: Segment Anything Meets Videos**](https://arxiv.org/abs/2304.11968) <br> *Jinyu Yang, Mingqi Gao, Zhe Li, Shang Gao, Fangjing Wang, Feng Zheng* <br> [[Github](https://github.com/gaomingqi/Track-Anything)] [[Demo](https://huggingface.co/spaces/watchtowerss/Track-Anything)]
    • [**Count Anything (Project)**](https://github.com/ylqi/Count-Anything) <br> *Liqi Yan* <br> [[Github](https://github.com/ylqi/Count-Anything)]
    • [**Segment-and-Track-Anything (Project)**](https://github.com/z-x-yang/Segment-and-Track-Anything) <br> *Zongxin Yang* <br> [[Github](https://github.com/z-x-yang/Segment-and-Track-Anything)]
    • [**Pose for Everything: Towards Category-Agnostic Pose Estimation**](https://arxiv.org/abs/2207.10387) <br> *Lumin Xu\*, Sheng Jin\*, Wang Zeng, Wentao Liu, Chen Qian, Wanli Ouyang, Ping Luo, Xiaogang Wang* <br> CUHK, SenseTime, ECCV'22 Oral <br> [[Github](https://github.com/luminxu/Pose-for-Everything)]
    • [**SegmentAnyRGBD (Project)**](https://github.com/Jun-CEN/SegmentAnyRGBD) <br> *Jun Cen, Yizheng Wu, Xingyi Li, Jingkang Yang, Yixuan Pei, Lingdong Kong* <br> Visual Intelligence Lab@HKUST, HUST, MMLab@NTU, Smiles Lab@XJTU, NUS <br> [[Github](https://github.com/Jun-CEN/SegmentAnyRGBD)]
    • V3Det: Vast Vocabulary Visual Detection Dataset
    • Type-to-Track: Retrieve Any Object via Prompt-based Tracking
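
    Most of the AnyObject tools above wrap SAM's promptable interface: embed an image once, then query it with point, box, or mask prompts. Below is a minimal sketch of that workflow with the official segment-anything package; the checkpoint filename, image path, and point prompt are illustrative assumptions.

    ```python
    # Minimal sketch of promptable segmentation with segment-anything
    # (facebookresearch/segment-anything). Assumes `pip install segment-anything`
    # plus opencv-python and numpy, and a downloaded ViT-H checkpoint; the image
    # path and point prompt are illustrative.
    import cv2
    import numpy as np
    from segment_anything import SamPredictor, sam_model_registry

    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
    predictor = SamPredictor(sam)

    image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
    predictor.set_image(image)  # compute the image embedding once

    # One foreground click at (x=500, y=375); label 1 marks foreground.
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),
        point_labels=np.array([1]),
        multimask_output=True,  # return several candidate masks
    )
    best_mask = masks[scores.argmax()]  # boolean (H, W) array
    ```

    Because the embedding is computed once and prompts are cheap, the tracking, labeling, and editing tools in this list can re-query the same image interactively.
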
  • AnyGeneration

    • [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752) <br> *Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer* <br> LMU München, Runway ML, CVPR'22 <br> [[Github](https://github.com/CompVis/stable-diffusion)] [[Page](https://stablediffusionweb.com/)] [[Demo](https://stablediffusionweb.com/#demo)] (a minimal text-to-image sketch appears after this list)
    • [**Adding Conditional Control to Text-to-Image Diffusion Models**](https://arxiv.org/abs/2302.05543) <br> *Lvmin Zhang, Maneesh Agrawala* <br> Stanford University, Preprint'23 <br> [[Github](https://github.com/lllyasviel/ControlNet)] [[Demo](https://huggingface.co/spaces/hysts/ControlNet)]
    • [**Inpaint-Anything: Segment Anything Meets Image Inpainting (Project)**](https://github.com/geekyutao/Inpaint-Anything) <br> *Tao Yu* <br> [[Github](https://github.com/geekyutao/Inpaint-Anything)]
    • [**IEA: Image Editing Anything (Project)**](https://github.com/feizc/IEA) <br> [[Github](https://github.com/feizc/IEA)]
    • [**EditAnything (Project)**](https://github.com/sail-sg/EditAnything) <br> *Shanghua Gao, Pan Zhou* <br> [[Github](https://github.com/sail-sg/EditAnything)]
    • [**Segment Anything for Stable Diffusion WebUI (Project)**](https://github.com/continue-revolution/sd-webui-segment-anything) <br> *Chengsong Zhang* <br> [[Github](https://github.com/continue-revolution/sd-webui-segment-anything)]
    • [**Segment Anything with CLIP (Project)**](https://github.com/Curt-Park/segment-anything-with-clip) <br> *Jinwoo Park* <br> [[Github](https://github.com/Curt-Park/segment-anything-with-clip)]
    • [**Transfer-Any-Style: an interactive demo based on Segment-Anything for style transfer (Project)**](https://github.com/Huage001/Transfer-Any-Style) <br> *LV-Lab, NUS* <br> [[Github](https://github.com/Huage001/Transfer-Any-Style)]
    • [**Anything To Image: generate images from anything with ImageBind and Stable Diffusion (Project)**](https://github.com/Zeqiang-Lai/Anything2Image) <br> *Zeqiang Lai* <br> [[Github](https://github.com/Zeqiang-Lai/Anything2Image)]
    • [**Clearer Frames, Anytime: Resolving Velocity Ambiguity in Video Frame Interpolation**](https://arxiv.org/abs/2311.08007) <br> *Zhihang Zhong, Gurunandan Krishnan, Xiao Sun, Yu Qiao, Sizhuo Ma, Jian Wang* <br> Shanghai AI Laboratory, Snap Inc., Preprint'23 <br> [[Github](https://github.com/zzh-tech/InterpAny-Clearer)] [[Page](https://zzh-tech.github.io/InterpAny-Clearer/)]
    • GigaGAN: Large-scale GAN for Text-to-Image Synthesis
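
    Many of the generation entries above (Stable Diffusion, ControlNet, and the tools built on them) are most easily driven through the Hugging Face diffusers library. A minimal text-to-image sketch, assuming diffusers is installed and a CUDA GPU is available; the model id, prompt, and sampler settings are illustrative.

    ```python
    # Minimal text-to-image sketch with Hugging Face diffusers. Assumes
    # `pip install diffusers transformers accelerate` and a CUDA GPU;
    # the model id and prompt are illustrative.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    image = pipe(
        "a watercolor painting of a lighthouse at dusk",
        num_inference_steps=30,  # fewer steps trade quality for speed
        guidance_scale=7.5,      # classifier-free guidance strength
    ).images[0]
    image.save("lighthouse.png")
    ```

    ControlNet variants follow the same pattern via diffusers' StableDiffusionControlNetPipeline, with an extra conditioning image (edges, depth, pose) passed alongside the text prompt.
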
  • Any3D

    • [**OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation**](https://arxiv.org/abs/2309.00616) <br> *Zhening Huang, Xiaoyang Wu, Xi Chen, Hengshuang Zhao, Lei Zhu, Joan Lasenby* <br> Cambridge, HKU, HKUST <br> [[Github](https://github.com/Pointcept/OpenIns3D)] [[Page](https://zheninghuang.github.io/OpenIns3D/)]
    • [**Anything-3D: Segment-Anything + 3D, lifting anything to 3D (Project)**](https://github.com/Anything-of-anything/Anything-3D) <br> *LV-Lab, NUS* <br> [[Github](https://github.com/Anything-of-anything/Anything-3D)]
    • [**SAM 3D Selector: using segment-anything to assist region selection on 3D point clouds and meshes (Project)**](https://github.com/nexuslrf/SAM-3D-Selector) <br> *nexuslrf* <br> [[Github](https://github.com/nexuslrf/SAM-3D-Selector)]
    • [**3D-Box via Segment Anything (Project)**](https://github.com/dvlab-research/3D-Box-Segment-Anything) <br> *dvlab-research* <br> [[Github](https://github.com/dvlab-research/3D-Box-Segment-Anything)]
  • AnyModel

    • [**DepGraph: Towards Any Structural Pruning**](https://arxiv.org/abs/2301.12900) <br> *Gongfan Fang, Xinyin Ma, Mingli Song, Michael Bi Mi, Xinchao Wang* <br> Learning and Vision Lab @ NUS, CVPR'23 <br> [[Github](https://github.com/VainF/Torch-Pruning)] [[Demo](https://colab.research.google.com/drive/1TRvELQDNj9PwM-EERWbF3IQOyxZeDepp?usp=sharing)] (a minimal pruning sketch appears after this list)
    • [**OTOv2: Automatic, Generic, User-Friendly**](https://openreview.net/pdf?id=7ynoX1ojPMt) <br> *Tianyi Chen, Luming Liang, Tianyu Ding, Ilya Zharkov* <br> Microsoft, ICLR'23 <br> [[Github](https://github.com/tianyic/only_train_once)]
    • MQBench: Towards Reproducible and Deployable Model Quantization Benchmark
    • Deep Model Reassembly
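
    The pruning entries above share one core problem: structurally removing a channel from one layer forces consistent changes in every coupled layer. Torch-Pruning, the project behind DepGraph, traces those dependencies automatically. A minimal sketch, assuming a recent torch-pruning release; argument names such as pruning_ratio have changed across versions, and the ResNet-18 target and 50% ratio are illustrative.

    ```python
    # Minimal structural-pruning sketch with Torch-Pruning (VainF/Torch-Pruning).
    # Assumes `pip install torch-pruning torchvision`; argument names follow
    # recent releases and may differ in older ones.
    import torch
    import torch_pruning as tp
    from torchvision.models import resnet18

    model = resnet18(weights=None)
    example_inputs = torch.randn(1, 3, 224, 224)  # used to trace layer dependencies

    importance = tp.importance.MagnitudeImportance(p=2)  # L2-norm channel importance
    pruner = tp.pruner.MagnitudePruner(
        model,
        example_inputs,
        importance=importance,
        pruning_ratio=0.5,          # remove roughly half of the channels
        ignored_layers=[model.fc],  # keep the classifier output intact
    )
    pruner.step()  # rewrites conv/bn/linear shapes in place

    print(sum(p.numel() for p in model.parameters()))  # parameter count after pruning
    ```

    After pruning, the model usually needs fine-tuning to recover accuracy; the Colab demo linked in the DepGraph entry walks through that step.
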
  • AnyTask

    • [**Generalized Decoding for Pixel, Image and Language**](https://arxiv.org/abs/2212.11270) <br> *Xueyan Zou, Zi-Yi Dou, Jianwei Yang, Zhe Gan, Linjie Li, Chunyuan Li, Xiyang Dai, Harkirat Behl, Jianfeng Wang, Lu Yuan, Nanyun Peng, Lijuan Wang, Yong Jae Lee, Jianfeng Gao* <br> Microsoft, CVPR'23 <br> [[Github](https://github.com/microsoft/X-Decoder/)] [[Page](https://x-decoder-vl.github.io)] [[Demo](https://huggingface.co/spaces/xdecoder/Demo)]
    • [**Pre-Trained Image Processing Transformer**](https://arxiv.org/abs/2012.00364) <br> *Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao* <br> Huawei Noah's Ark Lab, CVPR'21 <br> [[Github](https://github.com/huawei-noah/Pretrained-IPT)]
    • HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace
    • TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs
  • AnyX