Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.


https://github.com/amrzv/awesome-colab-notebooks

Collection of google colaboratory notebooks for fast and easy experiments

List: awesome-colab-notebooks

cnn colab-notebooks deep-learning deep-neural-networks generative-adversarial-network google-colab google-colab-notebook google-colab-notebooks google-colab-tutorial google-colaboratory google-colabs jupyter-notebooks machine-learning pytorch tensorflow tensorflow-tutorials

Last synced: 2 days ago

README

[![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https://github.com/amrzv/awesome-colab-notebooks)](https://hits.seeyoufarm.com)
![awesome-colab-notebooks](https://count.getloli.com/get/@awesome-colab-notebooks?theme=rule34)

The page might not render properly. Please open the [README.md](https://github.com/amrzv/awesome-colab-notebooks/blob/main/README.md) file directly.
# Awesome colab notebooks collection for ML experiments
## Trending
### Repositories

  • datachain [![](https://img.shields.io/github/stars/iterative/datachain?style=social)](https://github.com/iterative/datachain)
  • IC-Light [![](https://img.shields.io/github/stars/lllyasviel/IC-Light?style=social)](https://github.com/lllyasviel/IC-Light)
  • BiRefNet [![](https://img.shields.io/github/stars/ZhengPeng7/BiRefNet?style=social)](https://github.com/ZhengPeng7/BiRefNet)
  • SAELens [![](https://img.shields.io/github/stars/jbloomAus/SAELens?style=social)](https://github.com/jbloomAus/SAELens)
  • PuLID [![](https://img.shields.io/github/stars/ToTheBeginning/PuLID?style=social)](https://github.com/ToTheBeginning/PuLID)
  • ARENA_3.0 [![](https://img.shields.io/github/stars/callummcdougall/ARENA_3.0?style=social)](https://github.com/callummcdougall/ARENA_3.0)
  • autogen [![](https://img.shields.io/github/stars/microsoft/autogen?style=social)](https://github.com/microsoft/autogen)
  • langgraph [![](https://img.shields.io/github/stars/langchain-ai/langgraph?style=social)](https://github.com/langchain-ai/langgraph)
  • segment-anything-2 [![](https://img.shields.io/github/stars/facebookresearch/segment-anything-2?style=social)](https://github.com/facebookresearch/segment-anything-2)
  • unsloth [![](https://img.shields.io/github/stars/unslothai/unsloth?style=social)](https://github.com/unslothai/unsloth)
  • ComfyUI [![](https://img.shields.io/github/stars/comfyanonymous/ComfyUI?style=social)](https://github.com/comfyanonymous/ComfyUI)
  • TransformerLens [![](https://img.shields.io/github/stars/TransformerLensOrg/TransformerLens?style=social)](https://github.com/TransformerLensOrg/TransformerLens)
  • fab-torch [![](https://img.shields.io/github/stars/lollcat/fab-torch?style=social)](https://github.com/lollcat/fab-torch)
  • llama-recipes [![](https://img.shields.io/github/stars/meta-llama/llama-recipes?style=social)](https://github.com/meta-llama/llama-recipes)
  • rl_games [![](https://img.shields.io/github/stars/Denys88/rl_games?style=social)](https://github.com/Denys88/rl_games)
  • InstantMesh [![](https://img.shields.io/github/stars/TencentARC/InstantMesh?style=social)](https://github.com/TencentARC/InstantMesh)
  • instructor [![](https://img.shields.io/github/stars/jxnl/instructor?style=social)](https://github.com/jxnl/instructor)
  • co-tracker [![](https://img.shields.io/github/stars/facebookresearch/co-tracker?style=social)](https://github.com/facebookresearch/co-tracker)
  • DDColor [![](https://img.shields.io/github/stars/piddnad/DDColor?style=social)](https://github.com/piddnad/DDColor)
  • ultralytics [![](https://img.shields.io/github/stars/ultralytics/ultralytics?style=social)](https://github.com/ultralytics/ultralytics)
  • normalizing-flows [![](https://img.shields.io/github/stars/VincentStimper/normalizing-flows?style=social)](https://github.com/VincentStimper/normalizing-flows)
  • open-interpreter [![](https://img.shields.io/github/stars/KillianLucas/open-interpreter?style=social)](https://github.com/KillianLucas/open-interpreter)
  • pymdp [![](https://img.shields.io/github/stars/infer-actively/pymdp?style=social)](https://github.com/infer-actively/pymdp)

### Papers

  • DifFace [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/TPAMI.2024.3432651)](https://doi.org/10.1109/TPAMI.2024.3432651)
  • UniFormerV2 [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV51070.2023.00157)](https://doi.org/10.1109/ICCV51070.2023.00157)
  • Panini-Net [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1609/aaai.v36i3.20159)](https://doi.org/10.1609/aaai.v36i3.20159)
  • PyMAF-X [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/TPAMI.2023.3271691)](https://doi.org/10.1109/TPAMI.2023.3271691)
  • GraphCast [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1126/science.adi2336)](https://doi.org/10.1126/science.adi2336)
  • Gaussian Splatting [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3592433)](https://doi.org/10.1145/3592433)
  • MMOCR [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3474085.3478328)](https://doi.org/10.1145/3474085.3478328)
  • CodeTalker [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52729.2023.01229)](https://doi.org/10.1109/CVPR52729.2023.01229)
  • VideoReTalking [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3550469.3555399)](https://doi.org/10.1145/3550469.3555399)
  • VRT [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/TIP.2024.3372454)](https://doi.org/10.1109/TIP.2024.3372454)
  • FILM [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-031-20071-7_15)](https://doi.org/10.1007/978-3-031-20071-7_15)
  • SadTalker [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52729.2023.00836)](https://doi.org/10.1109/CVPR52729.2023.00836)
  • f-BRS [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00865)](https://doi.org/10.1109/CVPR42600.2020.00865)
  • HiDT [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00751)](https://doi.org/10.1109/CVPR42600.2020.00751)
  • Score Jacobian Chaining [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52729.2023.01214)](https://doi.org/10.1109/CVPR52729.2023.01214)
  • RealBasicVSR [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.00587)](https://doi.org/10.1109/CVPR52688.2022.00587)
  • OWL-ViT [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-031-20080-9_42)](https://doi.org/10.1007/978-3-031-20080-9_42)
  • LaSAFT [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICASSP39728.2021.9413896)](https://doi.org/10.1109/ICASSP39728.2021.9413896)
  • Geometry-Free View Synthesis [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV48922.2021.01409)](https://doi.org/10.1109/ICCV48922.2021.01409)
  • SAM [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3450626.3459805)](https://doi.org/10.1145/3450626.3459805)
  • Rethinking Style Transfer: From Pixels to Parameterized Brushstrokes [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.01202)](https://doi.org/10.1109/CVPR46437.2021.01202)
  • PyTorchVideo [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3474085.3478329)](https://doi.org/10.1145/3474085.3478329)
  • Omnivore [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.01563)](https://doi.org/10.1109/CVPR52688.2022.01563)

### Packages

  • unsloth [![](https://img.shields.io/pypi/dw/unsloth?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/unsloth/)
  • Crawl4AI [![](https://img.shields.io/pypi/dw/Crawl4AI?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/Crawl4AI/)
  • langgraph [![](https://img.shields.io/pypi/dw/langgraph?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/langgraph/)
  • llama-index [![](https://img.shields.io/pypi/dw/llama-index?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/llama-index/)
  • ollama [![](https://img.shields.io/pypi/dw/ollama?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/ollama/)
  • langchain [![](https://img.shields.io/pypi/dw/langchain?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/langchain/)
  • catboost [![](https://img.shields.io/pypi/dw/catboost?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/catboost/)
  • rl-games [![](https://img.shields.io/pypi/dw/rl-games?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/rl-games/)
  • img2dataset [![](https://img.shields.io/pypi/dw/img2dataset?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/img2dataset/)
  • reformer-pytorch [![](https://img.shields.io/pypi/dw/reformer-pytorch?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/reformer-pytorch/)
  • xgboost [![](https://img.shields.io/pypi/dw/xgboost?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/xgboost/)
  • mmpose [![](https://img.shields.io/pypi/dw/mmpose?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/mmpose/)
  • sae-lens [![](https://img.shields.io/pypi/dw/sae-lens?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/sae-lens/)
  • lightautoml [![](https://img.shields.io/pypi/dw/lightautoml?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/lightautoml/)
  • mistral-inference [![](https://img.shields.io/pypi/dw/mistral-inference?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/mistral-inference/)
  • neural-tangents [![](https://img.shields.io/pypi/dw/neural-tangents?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/neural-tangents/)
  • TensorFlowTTS [![](https://img.shields.io/pypi/dw/TensorFlowTTS?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/TensorFlowTTS/)
  • dm-reverb [![](https://img.shields.io/pypi/dw/dm-reverb?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/dm-reverb/)
  • xmanager [![](https://img.shields.io/pypi/dw/xmanager?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/xmanager/)
  • mmrotate [![](https://img.shields.io/pypi/dw/mmrotate?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/mmrotate/)
  • clip-retrieval [![](https://img.shields.io/pypi/dw/clip-retrieval?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/clip-retrieval/)
  • contextualized_topic_models [![](https://img.shields.io/pypi/dw/contextualized_topic_models?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/contextualized_topic_models/)
  • datachain [![](https://img.shields.io/pypi/dw/datachain?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/project/datachain/)

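Everything in the packages list ships on PyPI, so any of them can be pulled into a Colab session with a single setup cell. A minimal sketch (the package choice here is purely illustrative):

```python
# Colab setup cell: install a few of the trending packages straight from PyPI
!pip install -q langgraph ollama catboost xgboost
```
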
## Research
| name | description | authors | links | colaboratory | update |
|------|-------------|:--------|:------|:------------:|:------:|
| GraphCast | Learning skillful medium-range global weather forecasting |

  • [Rémi Lam](https://github.com/remilam)
  • [Alvaro Sanchez-Gonzalez](https://github.com/alvarosg)
  • [Matthew Willson](https://github.com/mjwillson)
  • [Peter Wirnsberger](https://pewi.org/)
  • others
  • [Meire Fortunato](https://scholar.google.com/citations?user=_fMHSIUAAAAJ)
  • [Ferran Alet](https://scholar.google.com/citations?user=1lmBq3QAAAAJ)
  • [Suman Ravuri](https://www.linkedin.com/in/suman-ravuri-81928082)
  • [Timo Ewalds](https://github.com/tewalds)
  • [Zach Eaton-Rosen](https://scholar.google.com/citations?user=mQ3zD_wAAAAJ)
  • [Weihua Hu](https://weihua916.github.io/)
  • [Alexander Merose](https://alex.merose.com/)
  • [Stephan Hoyer](https://stephanhoyer.com/)
  • [George Holland](https://www.linkedin.com/in/g-aracil-holland)
  • [Oriol Vinyals](https://research.google/people/oriol-vinyals/)
  • [Jacklynn Stott](https://linkedin.com/in/jacklynnstott)
  • [Alexander Pritzel](https://github.com/a-pritzel)
  • [Shakir Mohamed](https://www.shakirm.com/)
  • [Peter Battaglia](https://scholar.google.com/citations?user=nQ7Ij30AAAAJ)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1126/science.adi2336)](https://doi.org/10.1126/science.adi2336) [![](https://img.shields.io/github/stars/google-deepmind/graphcast?style=social)](https://github.com/google-deepmind/graphcast)

  • [arxiv](https://arxiv.org/abs/2212.12794)

  • [data](https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era5)

  • [deepmind](https://deepmind.google/discover/blog/graphcast-ai-model-for-faster-and-more-accurate-global-weather-forecasting/)

  • [git](https://github.com/google-deepmind/chex), [git](https://github.com/dask/dask), [git](https://github.com/google-deepmind/jaxline), [git](https://github.com/google-deepmind/tree), [git](https://github.com/mikedh/trimesh)

  • [medium](https://towardsdatascience.com/graphcast-how-to-get-things-done-f2fd5630c5fb)

  • [yt](https://youtu.be/BufUW7h9TB8), [yt](https://youtu.be/PD1v5PCJs_o), [yt](https://youtu.be/Eul-JN9Nwb0), [yt](https://youtu.be/BTyhgp9Hugc), [yt](https://youtu.be/aJ_H4exg0xU)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/graphcast/blob/master/graphcast_demo.ipynb) | 04.12.2024 |
| TAPIR | Tracking Any Point with per-frame Initialization and temporal Refinement |

  • [Carl Doersch](http://www.carldoersch.com/)
  • [Yi Yang](https://yangyi02.github.io/)
  • [Mel Vecerik](https://scholar.google.com/citations?user=Jvi_XPAAAAAJ)
  • [Dilara Gokay](https://scholar.google.com/citations?user=cnbENAEAAAAJ)
  • others
  • [Ankush Gupta](https://ankushgupta.org/)
  • [Yusuf Aytar](https://people.csail.mit.edu/yusuf/)
  • [Joao Carreira](https://scholar.google.com/citations?user=IUZ-7_cAAAAJ)
  • [Andrew Zisserman](https://www.robots.ox.ac.uk/~az/)

| [![](https://img.shields.io/github/stars/google-deepmind/tapnet?style=social)](https://github.com/google-deepmind/tapnet)

  • [arxiv](https://arxiv.org/abs/2306.08637), [arxiv](https://arxiv.org/abs/2308.15975)

  • [blog post](https://deepmind-tapir.github.io/), [blog post](https://deepmind-tapir.github.io/blogpost.html)

  • [deepmind](https://www.deepmind.com/open-source/kinetics)

  • [git](https://github.com/google-research/kubric/tree/main/challenges/point_tracking)

  • [medium](https://medium.com/@jumabek4044/what-is-tapir-tracking-any-point-with-per-frame-initialization-and-temporal-refinement-and-how-it-bdad9946dc53)

  • [neurips](https://proceedings.neurips.cc/paper_files/paper/2022/hash/58168e8a92994655d6da3939e7cc0918-Abstract-Datasets_and_Benchmarks.html)

  • [yt](https://youtu.be/2HSHofqoJ9M), [yt](https://youtu.be/I1DQJH3v7Nk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/tapnet/blob/master/colabs/causal_tapir_demo.ipynb) | 30.11.2024 |
| T2M-GPT | Conditional generative framework based on Vector Quantised-Variational AutoEncoder and Generative Pre-trained Transformer for human motion generation from textual descriptions |

  • [Jianrong Zhang](https://github.com/Jiro-zhang)
  • [Yangsong Zhang](https://github.com/Mael-zys)
  • [Xiaodong Cun](https://vinthony.github.io/academic/)
  • [Shaoli Huang](https://shaoli-huang.github.io/)
  • others
  • [Yong Zhang](https://yzhang2016.github.io/)
  • [Hongwei Zhao](https://teachers.jlu.edu.cn/zhaohongwei/en/index.htm)
  • [Hongtao Lu](https://www.cs.sjtu.edu.cn/en/PeopleDetail.aspx?id=156)
  • [Xi Shen](https://xishen0220.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52729.2023.01415)](https://doi.org/10.1109/CVPR52729.2023.01415) [![](https://img.shields.io/github/stars/Mael-zys/T2M-GPT?style=social)](https://github.com/Mael-zys/T2M-GPT)

  • [arxiv](https://arxiv.org/abs/2301.06052)

  • [git](https://github.com/EricGuo5513/HumanML3D), [git](https://github.com/EricGuo5513/text-to-motion), [git](https://github.com/GuyTevet/motion-diffusion-model), [git](https://github.com/EricGuo5513/TM2T)

  • [hf](https://huggingface.co/vumichien/T2M-GPT), [hf](https://huggingface.co/spaces/vumichien/generate_human_motion)

  • [medium](https://medium.com/@kaveh.kamali/t2m-gpt-pioneering-human-motion-generation-from-textual-descriptions-48dc62b5cd7a)

  • [project](https://mael-zys.github.io/T2M-GPT/)

  • [yt](https://youtu.be/09K2cx9P0_0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1Vy69w2q2d-Hg19F-KibqG0FRdpSj3L4O) | 24.11.2024 |
| PuLID | Pure and Lightning ID customization, a tuning-free ID customization method for text-to-image generation |

  • [Zinan Guo](https://github.com/guozinan126)
  • [Yanze Wu](https://tothebeginning.github.io/)
  • [Zhuowei Chen](https://scholar.google.com/citations?user=ow1jGJkAAAAJ)
  • [Lang Chen](https://scholar.google.com/citations?user=h5xex20AAAAJ)
  • [Qian He](https://scholar.google.com/citations?user=9rWWCgUAAAAJ)

| [![](https://img.shields.io/github/stars/ToTheBeginning/PuLID?style=social)](https://github.com/ToTheBeginning/PuLID)

  • [arxiv](https://arxiv.org/abs/2404.16022)

  • [git](https://github.com/cubiq/PuLID_ComfyUI), [git](https://github.com/ZHO-ZHO-ZHO/ComfyUI-PuLID-ZHO), [git](https://github.com/Mikubill/sd-webui-controlnet/pull/2838)

  • [reddit](https://www.reddit.com/r/comfyui/comments/1cnv269/pulid_pure_and_lightning_id_customization_via/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/PuLID-jupyter/blob/main/PuLID_jupyter.ipynb) | 09.11.2024 |
| CoTracker | Architecture that jointly tracks multiple points throughout an entire video |

  • [Nikita Karaev](https://nikitakaraevv.github.io/)
  • [Ignacio Rocco](https://www.irocco.info/)
  • [Benjamin Graham](https://ai.meta.com/people/benjamin-graham/)
  • [Natalia Neverova](https://nneverova.github.io/)
  • others
  • [Andrea Vedaldi](https://www.robots.ox.ac.uk/~vedaldi/)
  • [Christian Rupprecht](https://chrirupp.github.io/)

| [![](https://img.shields.io/github/stars/facebookresearch/co-tracker?style=social)](https://github.com/facebookresearch/co-tracker)

  • [arxiv](https://arxiv.org/abs/2307.07635), [arxiv](https://arxiv.org/abs/2303.11898)

  • [git](https://github.com/benjiebob/BADJA)

  • [project](https://co-tracker.github.io/)

  • [yt](https://youtu.be/w5QVc7BVGPA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/co-tracker/blob/main/notebooks/demo.ipynb) | 16.10.2024 |
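
CoTracker is published as a torch.hub model, so a minimal point-tracking sketch needs no manual checkout (the entrypoint name `cotracker2` is an assumption; check the repo README for the current one):

```python
import torch

# Load the pretrained tracker via torch.hub (entrypoint name assumed: "cotracker2")
cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker2")

video = torch.randn(1, 30, 3, 480, 640)  # (batch, frames, channels, H, W) stand-in clip
pred_tracks, pred_visibility = cotracker(video, grid_size=10)  # track a 10x10 grid of points
print(pred_tracks.shape)  # (1, 30, 100, 2): per-frame (x, y) for each of the 100 points
```
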
| PIFu | Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization |

  • [Ryota Natsume](https://github.com/nanopoteto)
  • [Shunsuke Saito](https://shunsukesaito.github.io/)
  • [Zeng Huang](https://zeng.science/)
  • [Angjoo Kanazawa](https://people.eecs.berkeley.edu/~kanazawa/)
  • [Hao Li](http://hao.li)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV.2019.00239)](https://doi.org/10.1109/ICCV.2019.00239) [![](https://img.shields.io/github/stars/shunsukesaito/PIFu?style=social)](https://github.com/shunsukesaito/PIFu)

  • [arxiv](https://arxiv.org/abs/1905.05172)

  • [yt](https://www.youtube.com/watch?v=S1FpjwKqtPs)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1GFSsqP2BWz4gtq0e-nki00ZHSirXwFyY) | 08.10.2024 |
| DifFace | Method that is capable of coping with unseen and complex degradations more gracefully without complicated loss designs |

  • [Zongsheng Yue](https://zsyoaoa.github.io/)
  • [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/TPAMI.2024.3432651)](https://doi.org/10.1109/TPAMI.2024.3432651) [![](https://img.shields.io/github/stars/zsyOAOA/DifFace?style=social)](https://github.com/zsyOAOA/DifFace)

  • [arxiv](https://arxiv.org/abs/2212.06512)

  • [git](https://github.com/NVlabs/ffhq-dataset), [git](https://github.com/openai/improved-diffusion), [git](https://github.com/deepcam-cn/yolov5-face), [git](https://github.com/xinntao/facexlib)

  • [hf](https://huggingface.co/spaces/OAOA/DifFace)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1BNtoPPRuJwNDvqfwDOOmD9XJyF05Zh4m) | 05.10.2024 |
| Segment Anything 2 | Foundation model towards solving promptable visual segmentation in images and videos |

  • [Nikhila Ravi](https://nikhilaravi.com/)
  • [Valentin Gabeur](https://gabeur.github.io/)
  • [Yuan-Ting Hu](https://scholar.google.com/citations?user=E8DVVYQAAAAJ)
  • [Ronghang Hu](https://ronghanghu.com/)
  • others
  • [Chaitanya Ryali](https://scholar.google.com/citations?user=4LWx24UAAAAJ)
  • [Tengyu Ma](https://scholar.google.com/citations?user=VeTSl0wAAAAJ)
  • [Haitham Khedr](https://hkhedr.com/)
  • [Roman Rädle](https://scholar.google.de/citations?user=Tpt57v0AAAAJ)
  • [Chloé Rolland](https://scholar.google.com/citations?user=n-SnMhoAAAAJ)
  • [Laura Gustafson](https://scholar.google.com/citations?user=c8IpF9gAAAAJ)
  • [Eric Mintun](https://ericmintun.github.io/)
  • [Junting Pan](https://junting.github.io/)
  • [Kalyan Vasudev Alwala](https://scholar.google.co.in/citations?user=m34oaWEAAAAJ)
  • [Nicolas Carion](https://www.nicolascarion.com/)
  • [Chao-Yuan Wu](https://chaoyuan.org/)
  • [Ross Girshick](https://www.rossgirshick.info/)
  • [Piotr Dollár](https://pdollar.github.io/)
  • [Christoph Feichtenhofer](https://feichtenhofer.github.io/)

| [![](https://img.shields.io/github/stars/facebookresearch/segment-anything-2?style=social)](https://github.com/facebookresearch/segment-anything-2)

  • [arxiv](https://arxiv.org/abs/2408.00714)

  • [demo](https://sam2.metademolab.com/)

  • [git](https://github.com/zsef123/Connected_components_PyTorch)

  • [hf](https://huggingface.co/models?search=facebook/sam2)

  • [meta](https://ai.meta.com/research/publications/sam-2-segment-anything-in-images-and-videos/), [meta](https://ai.meta.com/datasets/segment-anything-video), [meta](https://ai.meta.com/blog/segment-anything-2)

  • [project](https://ai.meta.com/sam2/)

  • [twitter](https://x.com/AIatMeta/status/1818055906179105010)

  • [yt](https://www.youtube.com/watch?v=w-cmMcMZoZ4&t=2325s), [yt](https://youtu.be/O8QdvZbRDp4), [yt](https://www.youtube.com/live/Dv003fTyO-Y), [yt](https://youtu.be/IW7jFq3vQbw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/segment-anything-2/blob/main/notebooks/image_predictor_example.ipynb) | 01.10.2024 |
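
The image-predictor notebook linked above boils down to a few calls; a hedged sketch (the config and checkpoint file names are assumptions that must match the weights you download from the repo):

```python
import numpy as np
import torch
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Build the predictor (file names assumed; see the repo's checkpoint table)
predictor = SAM2ImagePredictor(build_sam2("sam2_hiera_l.yaml", "checkpoints/sam2_hiera_large.pt"))

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a real RGB image
with torch.inference_mode():
    predictor.set_image(image)
    # One positive click near the image center prompts a mask
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[320, 240]]), point_labels=np.array([1])
    )
```
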
| Open-Unmix | A deep neural network reference implementation for music source separation, applicable for researchers, audio engineers and artists |

  • [Fabian-Robert Stöter](http://faroit.com/)
  • [Antoine Liutkus](https://github.com/aliutkus)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.21105/joss.01667)](https://doi.org/10.21105/joss.01667) [![](https://img.shields.io/github/stars/sigsep/open-unmix-pytorch?style=social)](https://github.com/sigsep/open-unmix-pytorch)

  • [data](https://sigsep.github.io/datasets/musdb.html#musdb18-compressed-stems)

  • [git](https://github.com/sigsep/norbert)

  • [project](https://sigsep.github.io/open-unmix/)

  • [pwc](https://paperswithcode.com/sota/music-source-separation-on-musdb18?p=open-unmix-a-reference-implementation-for)

  • [yt](https://www.youtube.com/playlist?list=PLhA3b2k8R3t0VpYCpCTU2B1h604rvnV4N)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1mijF0zGWxN-KaxTnd0q6hayAlrID5fEQ) | 25.09.2024 |
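
Open-Unmix also exposes its pretrained separators through torch.hub; a minimal sketch, assuming the `umxhq` MUSDB18-HQ weights:

```python
import torch

# "umxhq" separates a mix into vocals / drums / bass / other
separator = torch.hub.load("sigsep/open-unmix-pytorch", "umxhq")

audio = torch.rand(1, 2, 44100 * 5)  # (batch, channels, samples): 5 s of stereo at 44.1 kHz
estimates = separator(audio)         # (batch, targets, channels, samples)
```
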
| Deep Painterly Harmonization | Algorithm that produces significantly better results than photo compositing or global stylization techniques and enables creative painterly edits that would otherwise be difficult to achieve |

  • [Fujun Luan](https://luanfujun.github.io/)
  • [Sylvain Paris](http://people.csail.mit.edu/sparis/)
  • [Eli Shechtman](https://research.adobe.com/person/eli-shechtman/)
  • [Kavita Bala](https://www.cs.cornell.edu/~kb/)

| [![](https://img.shields.io/github/stars/luanfujun/deep-painterly-harmonization?style=social)](https://github.com/luanfujun/deep-painterly-harmonization)

  • [arxiv](https://arxiv.org/abs/1804.03189), [arxiv](https://arxiv.org/abs/1701.08893)

  • [git](https://github.com/jcjohnson/neural-style), [git](https://github.com/torch/torch7), [git](https://github.com/szagoruyko/loadcaffe)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/gist/eyaler/5303782669fb43510d398bd346c6e3e6/deep-painterly-harmonization.ipynb) | 23.09.2024 |
| audio2photoreal | Framework for generating full-bodied photorealistic avatars that gesture according to the conversational dynamics of a dyadic interaction |

  • [Evonne Ng](https://people.eecs.berkeley.edu/~evonne_ng/)
  • [Javier Romero](https://scholar.google.com/citations?user=Wx62iOsAAAAJ)
  • [Timur Bagautdinov](https://scholar.google.ch/citations?user=oLi7xJ0AAAAJ)
  • [Shaojie Bai](https://jerrybai1995.github.io/)
  • others
  • [Trevor Darrell](https://people.eecs.berkeley.edu/~trevor/)
  • [Angjoo Kanazawa](https://people.eecs.berkeley.edu/~kanazawa/)
  • [Alexander Richard](https://alexanderrichard.github.io/)

| [![](https://img.shields.io/github/stars/facebookresearch/audio2photoreal?style=social)](https://github.com/facebookresearch/audio2photoreal)

  • [arxiv](https://arxiv.org/abs/2401.01885)

  • [git](https://github.com/facebookresearch/ca_body)

  • [project](https://people.eecs.berkeley.edu/~evonne_ng/projects/audio2photoreal/)

  • [yt](https://youtu.be/Y0GMaMtUynQ)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1A6EwKM3PeX7dcKV66zxQWuP-v_dKlX_0) | 13.09.2024 |
| Fast Segment Anything | CNN Segment Anything Model trained using only 2% of the SA-1B dataset published by SAM authors |

  • [Xu Zhao](https://scholar.google.com/citations?user=F0cYEyAAAAAJ)
  • [Wenchao Ding](https://github.com/berry-ding)
  • [Yongqi An](https://github.com/an-yongqi)
  • [Yinglong Du](https://github.com/YinglongDu)
  • others
  • [Tao Yu](https://github.com/tianjinren)
  • [Min Li](https://github.com/limin2021)
  • [Ming Tang](https://www.researchgate.net/profile/Ming-Tang-2)
  • [Jinqiao Wang](https://scholar.google.com/citations?user=7_BkyxEAAAAJ)

| [![](https://img.shields.io/github/stars/CASIA-IVA-Lab/FastSAM?style=social)](https://github.com/CASIA-IVA-Lab/FastSAM)

  • [arxiv](https://arxiv.org/abs/2306.12156), [arxiv](https://arxiv.org/abs/2112.10003)

  • [git](https://github.com/ChuRuaNh0/FastSam_Awsome_TensorRT)

  • [medium](https://medium.com/@mahimairaja/so-what-exactly-is-fastsam-the-ultimate-guide-ddae21d3b486)

  • [yt](https://youtu.be/yHNPyqazYYU), [yt](https://youtu.be/SslzS0AsiAw), [yt](https://www.youtube.com/live/qvqkjP1wCDE)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1oX14f6IneGGw612WgVlAiy91UHwFAvr9) | 10.09.2024 |
| Neuralangelo | Framework for high-fidelity 3D surface reconstruction from RGB video captures |

  • [Zhaoshuo Li](https://mli0603.github.io/)
  • [Thomas Müller](https://tom94.net/)
  • [Alex Evans](https://scholar.google.com/citations?user=ToqGImkAAAAJ)
  • [Russell Taylor](https://www.cs.jhu.edu/~rht/)
  • others
  • [Mathias Unberath](https://mathiasunberath.github.io/)
  • [Ming-Yu Liu](https://mingyuliu.net/)
  • [Chen-Hsuan Lin](https://chenhsuanlin.bitbucket.io/)

| [![](https://img.shields.io/github/stars/NVlabs/neuralangelo?style=social)](https://github.com/NVlabs/neuralangelo)

  • [arxiv](https://arxiv.org/abs/2306.03092)

  • [blog post](https://blogs.nvidia.com/blog/2023/06/01/neuralangelo-ai-research-3d-reconstruction/)

  • [git](https://github.com/mli0603/BlenderNeuralangelo)

  • [project](https://research.nvidia.com/labs/dir/neuralangelo/)

  • [yt](https://youtu.be/PQMNCXR-WF8), [yt](https://youtu.be/Qpdw3SW54kI), [yt](https://youtu.be/lC2uPDfaTcE)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1i16s8W_OV0Hd3-PIuo64JKDwwdOesgXQ) | 02.09.2024 |
| BiRefNet | Bilateral reference framework for high-resolution dichotomous image segmentation |

  • [Peng Zheng](https://zhengpeng7.github.io/about/)
  • [Dehong Gao](https://teacher.nwpu.edu.cn/dehonggao)
  • [Deng-Ping Fan](https://dengpingfan.github.io/)
  • [Li Liu](https://scholar.google.com/citations?user=9cMQrVsAAAAJ)
  • others
  • [Jorma Laaksonen](https://scholar.google.com/citations?user=qQP6WXIAAAAJ)
  • [Wanli Ouyang](https://wlouyang.github.io/)
  • [Nicu Sebe](https://disi.unitn.it/~sebe/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.26599/AIR.2024.9150038)](https://doi.org/10.26599/AIR.2024.9150038) [![](https://img.shields.io/github/stars/ZhengPeng7/BiRefNet?style=social)](https://github.com/ZhengPeng7/BiRefNet)

  • [arxiv](https://arxiv.org/abs/2401.03407), [arxiv](https://arxiv.org/abs/2302.14485)

  • [discord](https://discord.gg/d9NN5sgFrq)

  • [git](https://github.com/Kazuhito00/BiRefNet-ONNX-Sample), [git](https://github.com/ZHO-ZHO-ZHO/ComfyUI-BiRefNet-ZHO), [git](https://github.com/viperyl/ComfyUI-BiRefNet)

  • [hf](https://huggingface.co/spaces/ZhengPeng7/BiRefNet_demo), [hf](https://huggingface.co/ZhengPeng7/BiRefNet)

  • [project](https://www.birefnet.top/)

  • [pwc](https://paperswithcode.com/sota/dichotomous-image-segmentation-on-dis-te1?p=bilateral-reference-for-high-resolution), [pwc](https://paperswithcode.com/sota/camouflaged-object-segmentation-on-cod?p=bilateral-reference-for-high-resolution), [pwc](https://paperswithcode.com/sota/rgb-salient-object-detection-on-davis-s?p=bilateral-reference-for-high-resolution)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1B6aKZ3ekcvKMkSBn0N5mCASLUYMp0whK) | 23.08.2024 |
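
The Hugging Face model card linked above exposes BiRefNet through transformers; a sketch along those lines (the normalization follows the card, and the blank image is a placeholder for a real photo):

```python
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModelForImageSegmentation

# trust_remote_code pulls the BiRefNet implementation from the hub
birefnet = AutoModelForImageSegmentation.from_pretrained(
    "ZhengPeng7/BiRefNet", trust_remote_code=True
).eval()

preprocess = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
img = Image.new("RGB", (1024, 1024))  # stand-in for a real photo
with torch.no_grad():
    mask = birefnet(preprocess(img).unsqueeze(0))[-1].sigmoid()[0, 0]  # HxW foreground map
```
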
| SPIN | Learning to Reconstruct 3D Human Pose and Shape via Model-fitting in the Loop |

  • [Nikos Kolotouros](https://www.nikoskolot.com/)
  • [Georgios Pavlakos](https://geopavlakos.github.io/)
  • [Michael Black](https://ps.is.mpg.de/~black)
  • [Kostas Daniilidis](https://www.cis.upenn.edu/~kostas/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV.2019.00234)](https://doi.org/10.1109/ICCV.2019.00234) [![](https://img.shields.io/github/stars/nkolot/SPIN?style=social)](https://github.com/nkolot/SPIN)

  • [arxiv](https://arxiv.org/abs/1909.12828)

  • [docker](https://hub.docker.com/r/chaneyk/spin)

  • [git](https://github.com/vchoutas/smplify-x), [git](https://github.com/CMU-Perceptual-Computing-Lab/openpose)

  • [project](https://www.nikoskolot.com/projects/spin/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1uH2JtavOtDrFl6RsipyIncCSr19GWW4x) | 21.08.2024 |
| YOLOv10 | Aims to further advance the performance-efficiency boundary of YOLOs from both the post-processing and model architecture perspectives |

  • [Ao Wang](https://github.com/jameslahm)
  • [Hui Chen](https://huichen24.github.io/)
  • [Kai Chen](https://scholar.google.com/citations?user=bZQX708AAAAJ)
  • [Zijia Lin](https://sites.google.com/site/linzijia72)
  • others
  • [Jungong Han](https://jungonghan.github.io/)
  • [Guiguang Ding](https://scholar.google.com/citations?user=B7F3yt4AAAAJ)

| [![](https://img.shields.io/github/stars/THU-MIG/yolov10?style=social)](https://github.com/THU-MIG/yolov10)

  • [arxiv](https://arxiv.org/abs/2405.14458)

  • [blog post](https://learnopencv.com/yolov10/)

  • [demo](https://openbayes.com/console/public/tutorials/im29uYrnIoz)

  • [git](https://github.com/rlggyp/YOLOv10-OpenVINO-CPP-Inference), [git](https://github.com/Seeed-Projects/jetson-examples/blob/main/reComputer/scripts/yolov10/README.md), [git](https://github.com/kaylorchen/rk3588-yolo-demo), [git](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/yolov10-optimization/yolov10-optimization.ipynb), [git](https://github.com/sujanshresstha/YOLOv10_DeepSORT), [git](https://github.com/CVHub520/X-AnyLabeling), [git](https://github.com/DanielSarmiento04/yolov10cpp), [git](https://github.com/lyuwenyu/RT-DETR)

  • [hf](https://huggingface.co/collections/jameslahm/yolov10-665b0d90b0b5bb85129460c2), [hf](https://huggingface.co/spaces/jameslahm/YOLOv10), [hf](https://huggingface.co/spaces/kadirnar/Yolov10), [hf](https://huggingface.co/spaces/Xenova/yolov10-web)

  • [medium](https://medium.com/@batuhansenerr/yolov10-custom-object-detection-bd7298ddbfd3), [medium](https://medium.com/@sunidhi.ashtekar/yolov10-revolutionizing-real-time-object-detection-72ef04ad441a)

  • [reddit](https://www.reddit.com/r/GPTFutureScience/comments/1d34rj1/yolov10_the_future_of_realtime_object_detection/)

  • [yt](https://youtu.be/29tnSxhB3CY), [yt](https://youtu.be/2ZFJbeJXXDM), [yt](https://youtu.be/wM6nO75keOQ)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/train-yolov10-object-detection-on-custom-dataset.ipynb) | 20.08.2024 |
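
YOLOv10 checkpoints run through the ultralytics package that appears elsewhere in this list; a minimal inference sketch (the `yolov10n.pt` weights name is an assumption, see the repo's releases):

```python
from ultralytics import YOLO

model = YOLO("yolov10n.pt")  # nano variant; weight name assumed
results = model("https://ultralytics.com/images/bus.jpg")  # run inference on a sample image
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())  # class id, confidence, corners
```
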
| SpecVQGAN | Taming the visually guided sound generation by shrinking a training dataset to a set of representative vectors |

  • [Vladimir Iashin](https://iashin.ai/)
  • [Esa Rahtu](https://esa.rahtu.fi/)

| [![](https://img.shields.io/github/stars/v-iashin/SpecVQGAN?style=social)](https://github.com/v-iashin/SpecVQGAN)

  • [arxiv](http://arxiv.org/abs/2110.08791), [arxiv](https://arxiv.org/abs/2012.09841), [arxiv](https://arxiv.org/abs/1711.00937), [arxiv](https://arxiv.org/abs/2008.00820), [arxiv](https://arxiv.org/abs/1712.01393), [arxiv](https://arxiv.org/abs/1512.08512)

  • [git](https://github.com/PeihaoChen/regnet), [git](https://github.com/toshas/torch-fidelity), [git](https://github.com/descriptinc/melgan-neurips), [git](https://github.com/google/lyra)

  • [project](https://iashin.ai/SpecVQGAN)

  • [wiki](https://en.wikipedia.org/wiki/Foley_%28filmmaking%29), [wiki](https://en.wikipedia.org/wiki/Row-_and_column-major_order), [wiki](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence)

  • [yt](https://www.youtube.com/watch?v=Bucb3nAa398)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1pxTIMweAKApJZ3ZFqyBee3HtMqFpnwQ0) | 12.07.2024 |
| LivePortrait | Video-driven portrait animation framework with a focus on better generalization, controllability, and efficiency for practical usage |

  • [Jianzhu Guo](https://guojianzhu.com/)
  • [Dingyun Zhang](https://github.com/DingyunZhang)
  • [Xiaoqiang Liu](https://github.com/Liu-lxq)
  • [Zhizhou Zhong](https://scholar.google.com/citations?user=t88nyvsAAAAJ)
  • others
  • [Yuan Zhang](https://scholar.google.com/citations?user=_8k1ubAAAAAJ)
  • [Pengfei Wan](https://scholar.google.com/citations?user=P6MraaYAAAAJ)
  • [Di Zhang](https://openreview.net/profile?id=~Di_ZHANG3)

| [![](https://img.shields.io/github/stars/KwaiVGI/LivePortrait?style=social)](https://github.com/KwaiVGI/LivePortrait)

  • [arxiv](https://arxiv.org/abs/2407.03168)

  • [git](https://github.com/kijai/ComfyUI-LivePortraitKJ), [git](https://github.com/shadowcz007/comfyui-liveportrait), [git](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis), [git](https://github.com/NVlabs/SPADE), [git](https://github.com/deepinsight/insightface)

  • [hf](https://huggingface.co/spaces/KwaiVGI/LivePortrait)

  • [project](https://liveportrait.github.io/)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/1dvepjx/liveportrait_efficient_portrait_animation_with/)

  • [yt](https://youtu.be/uyjSTAOY7yI), [yt](https://youtu.be/8-IcDDmiUMM), [yt](https://youtu.be/aFcS31OWMjE), [yt](https://youtu.be/bRHf2oQwgG4), [yt](https://youtu.be/FPtpNrmuwXk), [yt](https://youtu.be/wG7oPp01COg)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/LivePortrait-jupyter/blob/main/LivePortrait_jupyter.ipynb) | 10.07.2024 |
| Wav2Lip | A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild |

  • [Prajwal Renukanand](https://github.com/prajwalkr)
  • [Rudrabha Mukhopadhyay](https://rudrabha.github.io/)
  • [Vinay Namboodiri](https://vinaypn.github.io/)
  • [C. V. Jawahar](https://faculty.iiit.ac.in/~jawahar/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3394171.3413532)](https://doi.org/10.1145/3394171.3413532) [![](https://img.shields.io/github/stars/Rudrabha/Wav2Lip?style=social)](https://github.com/Rudrabha/Wav2Lip)

  • [arxiv](https://arxiv.org/abs/2008.10010)

  • [data](https://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs2.html)

  • [demo](http://bhaasha.iiit.ac.in/lipsync/)

  • [project](http://cvit.iiit.ac.in/research/projects/cvit-projects/a-lip-sync-expert-is-all-you-need-for-speech-to-lip-generation-in-the-wild/)

  • [yt](https://www.youtube.com/watch?v=0fXaDCZNOJc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/eyaler/avatars4all/blob/master/melaflefon.ipynb) | 27.06.2024 |
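
Wav2Lip is driven from the command line; a Colab-style sketch of the README's documented flags (the GAN checkpoint must be downloaded separately into the repo's default `checkpoints/` folder, and the input files are placeholders):

```python
# Colab cells: clone, install, then lip-sync a face video to an audio track
!git clone https://github.com/Rudrabha/Wav2Lip
%cd Wav2Lip
!pip install -q -r requirements.txt
!python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth \
                     --face my_video.mp4 --audio my_speech.wav
```
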
| DeepLabCut | Efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data |

  • [Alexander Mathis](https://github.com/AlexEMG)
  • [Pranav Mamidanna](https://pranavm19.github.io/)
  • [Kevin Cury](https://kevincury.com/)
  • [Taiga Abe](https://cellistigs.github.io/)
  • others
  • [Venkatesh Murthy](https://github.com/venkateshnmurthy)
  • [Mackenzie Mathis](https://github.com/MMathisLab)
  • [Matthias Bethge](https://bethgelab.org/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1038/s41593-018-0209-y)](https://doi.org/10.1038/s41593-018-0209-y) [![](https://img.shields.io/github/stars/DeepLabCut/DeepLabCut?style=social)](https://github.com/DeepLabCut/DeepLabCut)

  • [arxiv](https://arxiv.org/abs/1605.03170), [arxiv](https://arxiv.org/abs/1804.03142), [arxiv](https://arxiv.org/abs/1909.11229), [arxiv](https://arxiv.org/abs/2009.00564), [arxiv](https://arxiv.org/abs/1909.13868)

  • [docker](https://hub.docker.com/r/deeplabcut/deeplabcut)

  • [forum](https://forum.image.sc/tag/deeplabcut)

  • [git](https://github.com/DeepLabCut/DLCutils), [git](https://github.com/DeepLabCut/DeepLabCut-Workshop-Materials)

  • [medium](https://medium.com/@cziscience/how-open-source-software-contributors-are-accelerating-biomedicine-1a5f50f6846a)

  • [twitter](https://twitter.com/DeepLabCut)

  • [website](https://www.deeplabcut.org/)

  • [yt](https://www.youtube.com/@deeplabcut7702), [yt](https://youtu.be/uWZu3rnj-kQ), [yt](https://youtu.be/Teb5r2TNAYs)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/DeepLabCut/DeepLabCut/blob/master/examples/COLAB/COLAB_maDLC_TrainNetwork_VideoAnalysis.ipynb) | 05.06.2024 |
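
DeepLabCut is organized around a per-project config file; a sketch of the documented workflow (project name and video paths are placeholders):

```python
import deeplabcut

# Create a project; returns the path to its config.yaml
config = deeplabcut.create_new_project(
    "reach-task", "researcher", ["videos/session1.mp4"], copy_videos=True
)
deeplabcut.extract_frames(config)           # choose frames to annotate
deeplabcut.label_frames(config)             # opens the labeling GUI
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.analyze_videos(config, ["videos/session2.mp4"])  # pose-estimate new footage
```
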
| PoolFormer | MetaFormer Is Actually What You Need for Vision |

  • [Weihao Yu](https://whyu.me/)
  • [Mi Luo](https://luomi97.github.io/)
  • [Pan Zhou](https://panzhous.github.io/)
  • [Chenyang Si](https://github.com/ChenyangSi)
  • others
  • [Yichen Zhou](https://dblp.org/pid/55/10422.html)
  • [Xinchao Wang](https://sites.google.com/site/sitexinchaowang/)
  • [Jiashi Feng](https://sites.google.com/site/jshfeng/)
  • [Shuicheng Yan](https://yanshuicheng.ai/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.01055)](https://doi.org/10.1109/CVPR52688.2022.01055) [![](https://img.shields.io/github/stars/sail-sg/poolformer?style=social)](https://github.com/sail-sg/poolformer)

  • [arxiv](https://arxiv.org/abs/2111.11418)

  • [git](https://github.com/rwightman/pytorch-image-models), [git](https://github.com/facebookresearch/fvcore), [git](https://github.com/NVIDIA/apex)

  • [hf](https://huggingface.co/spaces/akhaliq/poolformer)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/sail-sg/poolformer/blob/main/misc/poolformer_demo.ipynb) | 01.06.2024 |
| StoryDiffusion | Method of self-attention calculation, termed Consistent Self-Attention, that significantly boosts the consistency between generated images and augments prevalent pretrained diffusion-based text-to-image models in a zero-shot manner |

  • [Yupeng Zhou](https://mmcheng.net/zyp/)
  • [Daquan Zhou](https://github.com/zhoudaquan)
  • [Ming-Ming Cheng](https://mmcheng.net/cmm/)
  • [Jiashi Feng](https://sites.google.com/site/jshfeng/?pli=1)
  • [Qibin Hou](https://houqb.github.io/)

| [![](https://img.shields.io/github/stars/HVision-NKU/StoryDiffusion?style=social)](https://github.com/HVision-NKU/StoryDiffusion)

  • [arxiv](https://arxiv.org/abs/2405.01434)


  • [project](https://storydiffusion.github.io/)

  • [reddit](https://www.reddit.com/r/StoryDiffusion/)

  • [yt](https://youtu.be/jZWRENqCl6I), [yt](https://youtu.be/GeNyP4VY9rE)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/HVision-NKU/StoryDiffusion/blob/main/Comic_Generation.ipynb) | 04.05.2024 |
| FILM | A frame interpolation algorithm that synthesizes multiple intermediate frames from two input images with large in-between motion |

  • [Fitsum Reda](https://fitsumreda.github.io/)
  • [Janne Kontkanen](https://scholar.google.com/citations?user=MnXc4JQAAAAJ)
  • [Eric Tabellion](http://www.tabellion.org/et/)
  • [Deqing Sun](https://deqings.github.io/)
  • others
  • [Caroline Pantofaru](https://scholar.google.com/citations?user=vKAKE1gAAAAJ)
  • [Brian Curless](https://homes.cs.washington.edu/~curless/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-031-20071-7_15)](https://doi.org/10.1007/978-3-031-20071-7_15) [![](https://img.shields.io/github/stars/google-research/frame-interpolation?style=social)](https://github.com/google-research/frame-interpolation)

  • [arxiv](https://arxiv.org/abs/2202.04901)

  • [data](http://data.csail.mit.edu/tofu/testset/vimeo_interp_test.zip), [data](https://vision.middlebury.edu/flow/data), [data](https://people.cs.umass.edu/~hzjiang/projects/superslomo/UCF101_results.zip)

  • [git](https://github.com/sniklaus/softmax-splatting/blob/master/benchmark.py)

  • [project](https://film-net.github.io/)

  • [tf](https://www.tensorflow.org/tutorials/load_data/tfrecord), [tf](https://www.tensorflow.org/api_docs/python/tf/train/Example), [tf](https://www.tensorflow.org/guide/saved_model)

  • [yt](https://youtu.be/OAD-BieIjH4)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1sK0uc-GJxmdnaxHhYqD2afRknakpdTNZ) | 03.05.2024 |
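
FILM also has a TensorFlow Hub export that makes mid-frame synthesis a one-liner; a sketch assuming the `google/film/1` hub handle (frames are float arrays in [0, 1]):

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

model = hub.load("https://tfhub.dev/google/film/1")  # handle assumed; see the project page

frame0 = np.zeros((1, 256, 256, 3), np.float32)  # stand-ins for two consecutive frames
frame1 = np.ones((1, 256, 256, 3), np.float32)
out = model({"x0": frame0, "x1": frame1, "time": tf.constant([[0.5]])})
mid_frame = out["image"][0]  # synthesized frame halfway between the inputs
```
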
| VoiceCraft | Token-infilling neural codec language model that achieves state-of-the-art performance on both speech editing and zero-shot text-to-speech on audiobooks, internet videos, and podcasts |

  • [Puyuan Peng](https://jasonppy.github.io/)
  • [Po-Yao Huang](https://berniebear.github.io/)
  • [Shang-Wen Li](https://swdanielli.github.io/)
  • [Abdelrahman Mohamed](https://www.cs.toronto.edu/~asamir/)
  • [David Harwath](https://www.cs.utexas.edu/~harwath/)

| [![](https://img.shields.io/github/stars/jasonppy/VoiceCraft?style=social)](https://github.com/jasonppy/VoiceCraft)

  • [arxiv](https://arxiv.org/abs/2403.16973)

  • [git](https://github.com/lifeiteng/vall-e)

  • [hf](https://huggingface.co/pyp1/VoiceCraft)

  • [project](https://jasonppy.github.io/VoiceCraft_web/)

  • [reddit](https://www.reddit.com/r/LocalLLaMA/comments/1bmxfk3/voicecraft_zeroshot_speech_editing_and/)

  • [yt](https://youtu.be/eikybOi8iwU), [yt](https://youtu.be/PJ2qSjycLcw), [yt](https://youtu.be/JxRrHpq-hys)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/jasonppy/VoiceCraft/blob/master/voicecraft-gradio-colab.ipynb) | 21.04.2024 |
| ZeST | Method for zero-shot material transfer to an object in the input image given a material exemplar image |

  • [Ta-Ying Cheng](https://ttchengab.github.io/)
  • [Prafull Sharma](https://prafullsharma.net/)
  • [Andrew Markham](https://www.cs.ox.ac.uk/people/andrew.markham/)
  • [Niki Trigoni](https://www.cs.ox.ac.uk/people/niki.trigoni/)
  • [Varun Jampani](https://varunjampani.github.io/)

| [![](https://img.shields.io/github/stars/ttchengab/zest_code?style=social)](https://github.com/ttchengab/zest_code)

  • [arxiv](https://arxiv.org/abs/2404.06425)

  • [git](https://github.com/kealiu/ComfyUI-ZeroShot-MTrans)

  • [hf](https://huggingface.co/h94/IP-Adapter), [git](https://github.com/intel-isl/DPT/releases/download/1_0/dpt_hybrid-midas-501f0c75.pt)

  • [medium](https://xthemadgenius.medium.com/zest-unlocks-material-magic-in-single-image-transfers-05f7ff7ee483)

  • [project](https://ttchengab.github.io/zest/)

  • [reddit](https://www.reddit.com/r/learnmachinelearning/comments/1c0wpjd/zest_zeroshot_material_transfer_from_a_single/)

  • [yt](https://youtu.be/atG1VvgeG_g)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/zest-jupyter/blob/main/zest_jupyter.ipynb) | 16.04.2024 |
| InstantMesh | Feed-forward framework for instant 3D mesh generation from a single image, featuring state-of-the-art generation quality and significant training scalability |

  • [Jiale Xu](https://github.com/bluestyle97)
  • [Weihao Cheng](https://www.cheng.website/)
  • [Yiming Gao](https://scholar.google.com/citations?user=uRCc-McAAAAJ)
  • [Xintao Wang](https://xinntao.github.io/)
  • others
  • [Shenghua Gao](https://scholar.google.com/citations?user=fe-1v0MAAAAJ)
  • [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ)

| [![](https://img.shields.io/github/stars/TencentARC/InstantMesh?style=social)](https://github.com/TencentARC/InstantMesh)

  • [arxiv](https://arxiv.org/abs/2404.07191)

  • [git](https://github.com/danielgatis/rembg), [git](https://github.com/3DTopia/OpenLRM), [git](https://github.com/nv-tlabs/FlexiCubes)

  • [hf](https://huggingface.co/TencentARC/InstantMesh)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/1c5hs3e/instantmesh_efficient_3d_mesh_generation_from_a/)

  • [yt](https://youtu.be/BvngSJOStvQ)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/InstantMesh-jupyter/blob/main/InstantMesh_jupyter.ipynb) | 16.04.2024 |
| AlphaFold | Highly accurate protein structure prediction |

  • [John Jumper](https://scholar.google.com/citations?user=a5goOh8AAAAJ)
  • [Richard Evans](http://www.doc.ic.ac.uk/~re14/)
  • [Alexander Pritzel](https://scholar.google.com/citations?user=GPgAyU0AAAAJ)
  • [Tim Green](http://tfgg.me/)
  • others
  • [Michael Figurnov](https://figurnov.ru/)
  • [Olaf Ronneberger](https://lmb.informatik.uni-freiburg.de/people/ronneber/)
  • [Kathryn Tunyasuvunakool](https://scholar.google.com/citations?user=eEqNGagAAAAJ)
  • [Russ Bates](https://scholar.google.com/citations?user=Koes5ewAAAAJ)
  • [Augustin Žídek](https://augustin.zidek.eu/)
  • [Anna Potapenko](http://apotapenko.com/)
  • [Alex Bridgland](https://scholar.google.com/citations?user=VWmXKPMAAAAJ)
  • [Clemens Meyer](https://scholar.google.com/citations?user=EWLZiM8AAAAJ)
  • [Simon Kohl](https://www.simonkohl.com/)
  • [Andrew Ballard](https://scholar.google.com/citations?user=syjQhAMAAAAJ)
  • [Bernardino Romera-Paredes](https://sites.google.com/site/romeraparedes/)
  • [Stanislav Nikolov](https://scholar.google.co.uk/citations?user=O-b7pBEAAAAJ)
  • [Rishub Jain](http://rishub.me/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1038/s41586-021-03819-2)](https://doi.org/10.1038/s41586-021-03819-2) [![](https://img.shields.io/github/stars/deepmind/alphafold?style=social)](https://github.com/deepmind/alphafold/)

  • [blog post](https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology), [blog post](https://deepmind.com/blog/article/putting-the-power-of-alphafold-into-the-worlds-hands)

  • [git](https://github.com/deepmind/tree), [git](https://github.com/deepmind/chex)

  • [paper](https://www.nature.com/articles/s41586-021-03828-1)

  • [pwc](https://paperswithcode.com/method/alphafold)

  • [wiki](https://en.wikipedia.org/wiki/AlphaFold)

  • [yt](https://www.youtube.com/watch?v=gg7WjuFs8F4), [yt](https://www.youtube.com/watch?v=B9PL__gVxLI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/alphafold/blob/master/notebooks/AlphaFold.ipynb) | 15.04.2024 |
| Würstchen | Architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models |

  • [Pablo Pernias](https://github.com/pabloppp)
  • [Dominic Rampas](https://github.com/dome272)
  • [Mats Richter](https://scholar.google.com/citations?user=xtlV5SAAAAAJ)
  • [Christopher Pal](https://www.polymtl.ca/expertises/pal-christopher-j)
  • [Marc Aubreville](https://lme.tf.fau.de/person/aubreville/)

| [![](https://img.shields.io/github/stars/dome272/wuerstchen?style=social)](https://github.com/dome272/wuerstchen)

  • [arxiv](https://arxiv.org/abs/2306.00637)

  • [hf](https://huggingface.co/blog/wuerstchen)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/16hsklt/w%C3%BCrstchen_is_here_a_game_changing_fastest/)

  • [yt](https://youtu.be/ogJsCPqgFMk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/dome272/Wuerstchen/blob/main/w%C3%BCrstchen-stage-C.ipynb) | 06.04.2024 |
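
The linked Hugging Face blog post wires Würstchen into diffusers; a sketch along those lines (hub id and argument names follow that post and may have shifted between releases):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "warp-ai/wuerstchen", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "an astronaut riding a horse, detailed oil painting",
    height=1024, width=1024,
    prior_guidance_scale=4.0,  # guidance for the compressed-latent prior stage
).images[0]
```
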
| AudioSep | Foundation model for open-domain audio source separation with natural language queries |

  • [Xubo Liu](https://liuxubo717.github.io/)
  • [Qiuqiang Kong](https://qiuqiangkong.github.io/)
  • [Yan Zhao](https://cliffzhao.github.io/)
  • [Haohe Liu](https://haoheliu.github.io/)
  • others
  • [Yi Yuan](https://www.surrey.ac.uk/people/yi-yuan)
  • [Yuzhuo Liu](https://github.com/redrabbit94)
  • [Rui Xia](https://scholar.google.co.uk/citations?user=26oErxwAAAAJ)
  • [Yuxuan Wang](https://scholar.google.com/citations?user=3RaOfJkAAAAJ)
  • [Mark Plumbley](https://www.surrey.ac.uk/people/mark-plumbley)
  • [Wenwu Wang](http://personal.ee.surrey.ac.uk/Personal/W.Wang/)

| [![](https://img.shields.io/github/stars/Audio-AGI/AudioSep?style=social)](https://github.com/Audio-AGI/AudioSep)

  • [arxiv](https://arxiv.org/abs/2308.05037)

  • [project](https://audio-agi.github.io/Separate-Anything-You-Describe/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Audio-AGI/AudioSep/blob/main/AudioSep_Colab.ipynb) | 15.03.2024 |
| AQLM | Extreme Compression of Large Language Models via Additive Quantization |

  • [Vage Egiazarian](https://github.com/Vahe1994)
  • [Andrei Panferov](https://blog.panferov.org/)
  • [Denis Kuznedelev](https://github.com/Godofnothing)
  • [Elias Frantar](https://efrantar.github.io/)
  • others
  • [Artem Babenko](https://scholar.google.com/citations?user=2Kv3JP0AAAAJ)
  • [Dan Alistarh](https://github.com/dalistarh)

| [![](https://img.shields.io/github/stars/Vahe1994/AQLM?style=social)](https://github.com/Vahe1994/AQLM)

  • [arxiv](https://arxiv.org/abs/2401.06118)

  • [hf](https://huggingface.co/docs/datasets/main/en/cache#cache-directory), [hf](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample), [hf](https://huggingface.co/datasets/Vahe1994/AQLM)

  • [reddit](https://www.reddit.com/r/LearningMachines/comments/1atvrnl/240106118_extreme_compression_of_large_language/)

  • [yt](https://youtu.be/Qx8PNk4OkUA), [yt](https://youtu.be/hAHBKAXO-88)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Vahe1994/AQLM/blob/main/notebooks/colab_example.ipynb) | 08.03.2024 |
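
Prequantized AQLM checkpoints load through stock transformers once the `aqlm` pip package is installed; a sketch with a placeholder model id (pick a real one from the linked Hugging Face pages):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf"  # placeholder id, assumed for illustration
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tok("Extreme compression of LLMs means", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```
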
| YOLOv9 | Learning What You Want to Learn Using Programmable Gradient Information |

  • [Chien-Yao Wang](https://scholar.google.com/citations?user=DkQh4M4AAAAJ)
  • [I-Hau Yeh](https://ieeexplore.ieee.org/author/37088448531)
  • [Hong-Yuan Mark Liao](https://homepage.iis.sinica.edu.tw/pages/liao/index_zh.html)

| [![](https://img.shields.io/github/stars/WongKinYiu/yolov9?style=social)](https://github.com/WongKinYiu/yolov9)

  • [arxiv](https://arxiv.org/abs/2402.13616), [arxiv](https://arxiv.org/abs/2309.16921)

  • [blog post](https://learnopencv.com/yolov9-advancing-the-yolo-legacy/)

  • [git](https://github.com/WongKinYiu/yolor), [git](https://github.com/VDIGPKU/DynamicDet), [git](https://github.com/DingXiaoH/RepVGG)

  • [hf](https://huggingface.co/spaces/kadirnar/Yolov9), [hf](https://huggingface.co/merve/yolov9)

  • [medium](https://medium.com/@Mert.A/how-to-use-yolov9-for-object-detection-93598ad88d7d)

  • [yt](https://youtu.be/XHT2c8jT3Bc), [yt](https://youtu.be/3iLJ6YWPg28), [yt](https://youtu.be/dccf_sJF0Gg)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/train-yolov9-object-detection-on-custom-dataset.ipynb) | 05.03.2024 |
| Multi-LoRA Composition | LoRA Switch and LoRA Composite, approaches that aim to surpass traditional techniques in terms of accuracy and image quality, especially in complex compositions |

  • [Ming Zhong](https://maszhongming.github.io/)
  • [Yelong Shen](https://scholar.google.com/citations?user=S6OFEFEAAAAJ)
  • [Shuohang Wang](https://www.microsoft.com/en-us/research/people/shuowa/)
  • [Yadong Lu](https://adamlu123.github.io/)
  • others
  • [Yizhu Jiao](https://yzjiao.github.io/)
  • [Siru Ouyang](https://ozyyshr.github.io/)
  • [Donghan Yu](https://plusross.github.io/)
  • [Jiawei Han](https://hanj.cs.illinois.edu/)
  • [Weizhu Chen](https://www.microsoft.com/en-us/research/people/wzchen/)

| [![](https://img.shields.io/github/stars/maszhongming/Multi-LoRA-Composition?style=social)](https://github.com/maszhongming/Multi-LoRA-Composition)

  • [arxiv](https://arxiv.org/abs/2402.16843)

  • [medium](https://medium.com/@letscodeai/multi-lora-composition-for-image-generation-f2706528c590)

  • [reddit](https://www.reddit.com/r/ninjasaid13/comments/1b13q8s/multilora_composition_for_image_generation/)

  • [twitter](https://x.com/MingZhong_/status/1762347881812443575?s=20)

  • [website](https://maszhongming.github.io/Multi-LoRA-Composition/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1eSTj6qGOtSY5NaazwwN3meXOzEZxgaZq) | 03.03.2024 |
| AMARETTO | Multiscale and multimodal inference of regulatory networks to identify cell circuits and their drivers shared and distinct within and across biological systems of human disease |

  • [Nathalie Pochet](http://portals.broadinstitute.org/pochetlab/)
  • [Olivier Gevaert](https://profiles.stanford.edu/olivier-gevaert)
  • [Mohsen Nabian](https://github.com/monabiyan)
  • [Jayendra Shinde](https://jayendrashinde91.github.io/)
  • others
  • [Celine Everaert](http://www.crig.ugent.be/en/node/510)
  • [Thorin Tabor](http://thorin.tabcreations.com/)

| [![](https://img.shields.io/github/stars/gevaertlab/AMARETTO?style=social)](https://github.com/gevaertlab/AMARETTO)

  • [bioconductor](https://bioconductor.org/packages/release/bioc/html/AMARETTO.html)

  • [project](http://portals.broadinstitute.org/pochetlab/amaretto.html)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1JfnRoNgTVX_7VEGAAmjGjwP_yX2tdDxs) | 28.02.2024 |
| LIDA | Tool for generating grammar-agnostic visualizations and infographics | [Victor Dibia](https://victordibia.com/) | [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.18653/v1/2023.acl-demo.11)](https://doi.org/10.18653/v1/2023.acl-demo.11) [![](https://img.shields.io/github/stars/microsoft/lida?style=social)](https://github.com/microsoft/lida)

  • [arxiv](https://arxiv.org/abs/2303.02927)

  • [git](https://github.com/victordibia/llmx), [git](https://github.com/lida-project/lida-streamlit)

  • [medium](https://medium.com/@c17hawke/lida-automatically-generate-visualization-and-with-llms-the-future-of-data-visualization-6bc556876b46)

  • [project](https://microsoft.github.io/lida/)

  • [yt](https://youtu.be/exYi9W-dhME), [yt](https://youtu.be/U9K1Cu45nMQ), [yt](https://youtu.be/6xcCwlDx6f8)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/microsoft/lida/blob/main/notebooks/tutorial.ipynb) | 06.02.2024 |
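
LIDA's pipeline is summarize → goals → visualize; a minimal sketch (assumes an OpenAI key in the environment; the CSV file name is a placeholder):

```python
from lida import Manager

lida = Manager()                      # default text generator; expects OPENAI_API_KEY to be set
summary = lida.summarize("cars.csv")  # profile the dataset
goals = lida.goals(summary, n=2)      # propose visualization goals
charts = lida.visualize(summary=summary, goal=goals[0])  # chart code + rendered image
```
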
| ViT | Vision Transformer and MLP-Mixer Architectures |

  • [Alexey Dosovitskiy](https://scholar.google.com/citations?user=FXNJRDoAAAAJ)
  • [Lucas Beyer](http://lucasb.eyer.be)
  • [Alexander Kolesnikov](https://github.com/akolesnikoff)
  • [Dirk Weissenborn](https://github.com/dirkweissenborn)
  • others
  • [Xiaohua Zhai](https://github.com/xiaohuazhai)
  • [Thomas Unterthiner](https://github.com/untom)
  • [Mostafa Dehghani](https://www.mostafadehghani.com/)
  • [Matthias Minderer](https://matthias.minderer.net/)
  • [Georg Heigold](https://scholar.google.com/citations?user=WwqlChAAAAAJ)
  • [Sylvain Gelly](https://scholar.google.com/citations?user=m7LvuTkAAAAJ)
  • [Jakob Uszkoreit](https://scholar.google.com/citations?user=mOG0bwsAAAAJ)
  • [Neil Houlsby](https://neilhoulsby.github.io/)

| [![](https://img.shields.io/github/stars/google-research/vision_transformer?style=social)](https://github.com/google-research/vision_transformer)

  • [arxiv](https://arxiv.org/abs/2010.11929), [arxiv](https://arxiv.org/abs/2105.01601), [arxiv](https://arxiv.org/abs/2106.10270), [arxiv](https://arxiv.org/abs/2106.01548), [arxiv](https://arxiv.org/abs/2111.07991), [arxiv](https://arxiv.org/abs/2203.08065)

  • [blog post](https://blog.research.google/2022/04/locked-image-tuning-adding-language.html)

  • [git](https://github.com/huggingface/pytorch-image-models), [git](https://github.com/google/flaxformer)

  • [kaggle](https://www.kaggle.com/models)

  • [medium](https://medium.com/@weiwen21/an-image-is-worth-16x16-words-transformers-for-image-recognition-at-scale-957f88e53726)

  • [yt](https://youtu.be/TrdevFK_am4), [yt](https://youtu.be/HZ4j_U3FC94), [yt](https://youtu.be/7K4Z8RqjWIk), [yt](https://youtu.be/oDtcobGQ7xU?si=C2EgZTESzhTXFSq6), [yt](https://youtu.be/v6xj_DG-UEo)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google-research/vision_transformer/blob/main/vit_jax.ipynb) | 06.02.2024 |
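The entry links the timm port (huggingface/pytorch-image-models); below is a minimal inference sketch with a pretrained ViT via timm rather than the JAX reference code, assuming `pip install timm` and a local example.jpg.

```python
import timm
import torch
from PIL import Image

model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
config = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**config)  # matching resize/crop/normalize

img = Image.open("example.jpg").convert("RGB")    # placeholder image path
with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))   # (1, 1000) ImageNet-1k logits
print(int(logits.argmax(dim=-1)))
```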
| 3D Ken Burns | A reference implementation in PyTorch of the 3D Ken Burns effect from a single image: given a single input image, it animates the still image with a virtual camera scan and zoom subject to motion parallax | [Manuel Romero](https://mrm8488.github.io/) | [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3355089.3356528)](https://doi.org/10.1145/3355089.3356528) [![](https://img.shields.io/github/stars/sniklaus/3d-ken-burns?style=social)](https://github.com/sniklaus/3d-ken-burns)

  • [arxiv](https://arxiv.org/abs/1909.05483)

  • [yt](https://www.youtube.com/watch?v=WrajxHHfRBA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/3D_Ken_Burns.ipynb) | 24.01.2024 |
| VALL-E X | Cross-lingual neural codec language model for speech synthesis across languages (a minimal generation sketch follows this entry) |

  • [Ziqiang Zhang](https://github.com/onisac-K)
  • [Long Zhou](https://long-zhou.github.io/)
  • [Chengyi Wang](https://cywang97.github.io/)
  • [Sanyuan Chen](https://sanyuan-chen.github.io/)
  • others
  • [Yu Wu](https://www.microsoft.com/en-us/research/people/yuwu1/)
  • [Shujie Liu](https://www.microsoft.com/en-us/research/people/shujliu/)
  • [Zhuo Chen](https://www.microsoft.com/en-us/research/people/zhuc/)
  • [Yanqing Liu](https://scholar.google.com/citations?user=dIJFz4UAAAAJ)
  • [Huaming Wang](https://scholar.google.com/citations?user=aJDLg5IAAAAJ)
  • [Jinyu Li](https://www.microsoft.com/en-us/research/people/jinyli/)
  • [Lei He](https://scholar.google.com/citations?user=EKl9yY8AAAAJ)
  • [Sheng Zhao](https://scholar.google.com/citations?user=689bIIwAAAAJ)
  • [Furu Wei](https://www.microsoft.com/en-us/research/people/fuwei/)

| [![](https://img.shields.io/github/stars/Plachtaa/VALL-E-X?style=social)](https://github.com/Plachtaa/VALL-E-X)

  • [arxiv](https://arxiv.org/abs/2303.03926), [arxiv](https://arxiv.org/abs/2301.02111), [arxiv](https://arxiv.org/abs/2209.03143)

  • [demo](https://plachtaa.github.io/)

  • [discord](https://discord.gg/qCBRmAnTxg)

  • [git](https://github.com/lifeiteng/vall-e)

  • [hf](https://huggingface.co/Plachta/VALL-E-X)

  • [medium](https://medium.com/syncedreview/speak-a-foreign-language-in-your-own-voice-1dafa42f78d9)

  • [project](https://www.microsoft.com/en-us/research/project/vall-e-x)

  • [yt](https://youtu.be/7qgfoVFQmvk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1yyD_sz531QntLKowMHo-XxorsFBCfKul) | 19.01.2024 |
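A minimal generation sketch run from a clone of the Plachtaa/VALL-E-X repo, following its README as best reconstructed here; treat the `utils.generation` module and its function names as assumptions to verify against the repo.

```python
# run from a clone of https://github.com/Plachtaa/VALL-E-X (module path is an assumption)
from scipy.io.wavfile import write as write_wav
from utils.generation import SAMPLE_RATE, preload_models, generate_audio

preload_models()  # downloads checkpoints on first use
audio_array = generate_audio("Hello! This is a cross-lingual synthesis test.")
write_wav("vallex_out.wav", SAMPLE_RATE, audio_array)
```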
| PhotoMaker | Efficient personalized text-to-image generation method, which mainly encodes an arbitrary number of input ID images into a stacked ID embedding for preserving ID information |

  • [Zhen Li](https://paper99.github.io/)
  • [Mingdeng Cao](https://github.com/ljzycmd)
  • [Xintao Wang](https://xinntao.github.io/)
  • [Zhongang Qi](https://scholar.google.com/citations?user=zJvrrusAAAAJ)
  • others
  • [Ming-Ming Cheng](https://mmcheng.net/cmm/)
  • [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ)

| [![](https://img.shields.io/github/stars/TencentARC/PhotoMaker?style=social)](https://github.com/TencentARC/PhotoMaker)

  • [arxiv](https://arxiv.org/abs/2312.04461)

  • [git](https://github.com/bmaltais/PhotoMaker), [git](https://github.com/sdbds/PhotoMaker-for-windows), [git](https://github.com/ZHO-ZHO-ZHO/ComfyUI-PhotoMaker), [git](https://github.com/mit-han-lab/fastcomposer), [git](https://github.com/TencentARC/T2I-Adapter), [git](https://github.com/tencent-ailab/IP-Adapter)

  • [hf](https://huggingface.co/TencentARC/PhotoMaker)

  • [medium](https://medium.com/@christopheverdier/photomaker-the-art-of-ai-consistent-characters-generation-cf2cd037bc3e)

  • [project](https://photo-maker.github.io/)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/197bfj9/tencentarc_releases_photomaker/)

  • [yt](https://youtu.be/NWIdzTEk5O4), [yt](https://youtu.be/ZTck128jfFY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/TencentARC/PhotoMaker/blob/main/photomaker_demo.ipynb) | 18.01.2024 |
| DDColor | End-to-end method with dual decoders for image colorization |

  • [Xiaoyang Kang](https://piddnad.github.io/xiaoyangkang)
  • [Tao Yang](https://cg.cs.tsinghua.edu.cn/people/~tyang/)
  • [Wenqi Ouyang](https://vicky0522.github.io/Wenqi-Ouyang/)
  • [Peiran Ren](https://scholar.google.com/citations?user=x5dEuxsAAAAJ)
  • others
  • [Lingzhi Li](https://lingzhili.com/)
  • [Xuansong Xie](https://github.com/xungie)

| [![](https://img.shields.io/github/stars/piddnad/DDColor?style=social)](https://github.com/piddnad/DDColor)

  • [arxiv](https://arxiv.org/abs/2212.11613)

  • [git](https://github.com/jixiaozhong/ColorFormer), [git](https://github.com/KIMGEONUNG/BigColor)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/DDColor-colab/blob/main/DDColor_colab.ipynb) | 15.01.2024 |
| PASD | Pixel-aware stable diffusion network for robust realistic image super-resolution (Real-ISR) as well as personalized stylization |

  • [Tao Yang](https://cg.cs.tsinghua.edu.cn/people/~tyang)
  • [Peiran Ren](http://renpr.org/)
  • [Xuansong Xie](https://github.com/xungie)
  • [Lei Zhang](https://www4.comp.polyu.edu.hk/~cslzhang)

| [![](https://img.shields.io/github/stars/yangxy/PASD?style=social)](https://github.com/yangxy/PASD)

  • [arxiv](https://arxiv.org/abs/2308.14469)

  • [git](https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111)

  • [hf](https://huggingface.co/runwayml/stable-diffusion-v1-5), [hf](https://huggingface.co/nitrosocke/mo-di-diffusion)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/18qxe5q/pixelaware_stable_diffusion_for_realistic_image/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1lZ_-rSGcmreLCiRniVT973x6JLjFiC-b) | 12.01.2024 |
| HandRefiner | Refining Malformed Hands in Generated Images by Diffusion-based Conditional Inpainting |

  • [Wenquan Lu](https://github.com/wenquanlu)
  • [Yufei Xu](https://scholar.google.com/citations?user=hlYWxX8AAAAJ)
  • [Jing Zhang](https://scholar.google.com/citations?user=9jH5v74AAAAJ)
  • [Chaoyue Wang](https://wang-chaoyue.github.io/)
  • [Dacheng Tao](https://scholar.google.com/citations?user=RwlJNLcAAAAJ)

| [![](https://img.shields.io/github/stars/wenquanlu/HandRefiner?style=social)](https://github.com/wenquanlu/HandRefiner)

  • [arxiv](https://arxiv.org/abs/2311.17957)

  • [git](https://github.com/Fannovel16/comfyui_controlnet_aux), [git](https://github.com/Mikubill/sd-webui-controlnet), [git](https://github.com/microsoft/MeshGraphormer)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/1881z4v/handrefiner_refining_malformed_hands_in_generated/)

  • [yt](https://youtu.be/Tt-Fyn1RA6c)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/HandRefiner-colab/blob/main/HandRefiner_colab.ipynb) | 08.01.2024 |
| ESM | Evolutionary Scale Modeling: pretrained language models for proteins (a minimal embedding-extraction sketch follows this entry) |

  • [Zeming Lin](https://research.facebook.com/people/lin-zeming/)
  • [Roshan Rao](https://rmrao.github.io/)
  • [Brian Hie](https://brianhie.com/)
  • [Zhongkai Zhu](https://www.linkedin.com/in/zhongkai-zhu-03a27424)
  • others
  • [Allan dos Santos Costa](https://scholar.google.com/citations?user=Zb4RsFsAAAAJ)
  • [Maryam Fazel-Zarandi](https://www.maryamfazel.com/)
  • [Tom Sercu](https://tom.sercu.me/)
  • [Salvatore Candido](https://scholar.google.com/citations?user=BDgbhmEAAAAJ)
  • [Alexander Rives](https://scholar.google.com/citations?user=vqb78-gAAAAJ)
  • [Joshua Meier](https://scholar.google.com/citations?user=2M0OltAAAAAJ)
  • [Robert Verkuil](https://dblp.org/pid/296/8930.html)
  • [Jason Liu](https://www.linkedin.com/in/liujiayi/)
  • [Chloe Hsu](https://chloe-hsu.com/)
  • [Adam Lerer](https://scholar.google.com/citations?user=Ad6O4-0AAAAJ)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1101/622803)](https://doi.org/10.1101/622803) [![](https://img.shields.io/github/stars/facebookresearch/esm?style=social)](https://github.com/facebookresearch/esm)

  • [ESM Atlas](https://esmatlas.com/)

  • [FSDP](https://fairscale.readthedocs.io/en/stable/api/nn/fsdp.html)

  • [ICML](https://proceedings.mlr.press/v139/rao21a.html)

  • [data](https://ftp.uniprot.org/pub/databases/uniprot/previous_releases/release-2018_03/uniref/)

  • [git](https://github.com/sokrypton/ColabFold)

  • [hf](https://huggingface.co/docs/transformers/model_doc/esm)

  • [paper](https://doi.org/10.1101/2022.07.20.500902), [paper](https://doi.org/10.1101/2021.07.09.450648), [paper](https://doi.org/10.1101/2022.04.10.487779), [paper](https://doi.org/10.1101/2022.12.21.521521)

  • [pubmed](https://pubmed.ncbi.nlm.nih.gov/33876751/)

  • [yt](https://youtu.be/N-eisTvUYrk), [yt](https://youtu.be/GHoE4VkDehY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/ESMFold.ipynb) | 28.12.2023 |
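A minimal embedding-extraction sketch with ESM-2 via torch.hub, as documented in the facebookresearch/esm README; the protein sequence is a placeholder.

```python
import torch

model, alphabet = torch.hub.load("facebookresearch/esm:main", "esm2_t33_650M_UR50D")
batch_converter = alphabet.get_batch_converter()
model.eval()

data = [("protein1", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")]  # placeholder sequence
labels, strs, tokens = batch_converter(data)
with torch.no_grad():
    out = model(tokens, repr_layers=[33])
per_residue = out["representations"][33]          # (1, L+2, 1280) token embeddings
seq_embedding = per_residue[0, 1:-1].mean(dim=0)  # mean-pool, dropping BOS/EOS tokens
```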
| LLaVA | Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and an LLM for general-purpose visual and language understanding (a minimal inference sketch follows this entry) |

  • [Haotian Liu](https://hliu.cc/)
  • [Chunyuan Li](https://chunyuan.li/)
  • [Qingyang Wu](https://qywu.github.io/)
  • [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/)
  • [Yuheng Li](https://yuheng-li.github.io/)

| [![](https://img.shields.io/github/stars/haotian-liu/LLaVA?style=social)](https://github.com/haotian-liu/LLaVA)

  • [arxiv](https://arxiv.org/abs/2304.08485), [arxiv](https://arxiv.org/abs/2310.03744), [arxiv](https://arxiv.org/abs/2306.00890), [arxiv](https://arxiv.org/abs/2309.09958), [arxiv](https://arxiv.org/abs/2306.14895)

  • [demo](https://llava.hliu.cc/)

  • [git](https://github.com/ggerganov/llama.cpp/pull/3436), [git](https://github.com/microsoft/LLaVA-Med), [git](https://github.com/lm-sys/FastChat), [git](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once), [git](https://github.com/Luodian/Otter), [git](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)

  • [hf](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain), [hf](https://huggingface.co/liuhaotian/LLaVA-Pretrained-Projectors)

  • [medium](https://xthemadgenius.medium.com/how-to-use-llava-large-language-and-vision-assistant-732c666b5ed0)

  • [project](https://llava-vl.github.io/)

  • [yt](https://youtu.be/mkI7EPD1vp8), [yt](https://youtu.be/kx1VpI6JzsY), [yt](https://youtu.be/RxBSmbdJ1I8), [yt](https://youtu.be/mdYycY4lsuE), [yt](https://youtu.be/t7I46dxfmWs), [yt](https://youtu.be/KRAQkJC-XJU)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/LLaVA-colab/blob/main/LLaVA_13b_4bit_vanilla_colab.ipynb) | 22.12.2023 |
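A minimal inference sketch using the community llava-hf Transformers port rather than the repository's own CLI; the model id, prompt template, and image path are assumptions to check against the port's model card.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # community port of LLaVA-1.5 (assumption)
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "USER: <image>\nWhat is shown in this picture? ASSISTANT:"
inputs = processor(text=prompt, images=Image.open("example.jpg"), return_tensors="pt").to(model.device)
inputs["pixel_values"] = inputs["pixel_values"].to(model.dtype)  # match the fp16 weights
out = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(out[0], skip_special_tokens=True))
```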
| Background Matting V2 | Real-time, high-resolution background replacement technique which operates at 30fps in 4K resolution, and 60fps for HD on a modern GPU |

  • [Shanchuan Lin](https://github.com/PeterL1n)
  • [Andrey Ryabtsev](https://github.com/andreyryabtsev)
  • [Soumyadip Sengupta](https://github.com/senguptaumd)
  • [Brian Curless](https://homes.cs.washington.edu/~curless/)
  • others
  • [Steve Seitz](https://www.smseitz.com/)
  • [Ira Kemelmacher-Shlizerman](https://www.irakemelmacher.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.00865)](https://doi.org/10.1109/CVPR46437.2021.00865) [![](https://img.shields.io/github/stars/PeterL1n/BackgroundMattingV2?style=social)](https://github.com/PeterL1n/BackgroundMattingV2)

  • [arxiv](https://arxiv.org/abs/2012.07810)

  • [git](https://github.com/senguptaumd/Background-Matting), [git](https://github.com/andreyryabtsev/BGMv2-webcam-plugin-linux)

  • [project](https://grail.cs.washington.edu/projects/background-matting-v2/)

  • [yt](https://youtu.be/oMfPTeYDF9g), [yt](https://youtu.be/b7ps21MVyTA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1cTxFq1YuoJ5QPqaTcnskwlHDolnjBkB9) | 22.12.2023 |
| Gaussian Splatting | State-of-the-art visual quality while maintaining competitive training times and, importantly, allowing high-quality real-time (≥ 100 fps) novel-view synthesis at 1080p resolution |

  • [Bernhard Kerbl](https://www.cg.tuwien.ac.at/staff/BernhardKerbl)
  • [Georgios Kopanas](https://grgkopanas.github.io/)
  • [Thomas Leimkühler](https://people.mpi-inf.mpg.de/~tleimkue/)
  • [George Drettakis](http://www-sop.inria.fr/members/George.Drettakis/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3592433)](https://doi.org/10.1145/3592433) [![](https://img.shields.io/github/stars/graphdeco-inria/gaussian-splatting?style=social)](https://github.com/graphdeco-inria/gaussian-splatting)

  • [arxiv](https://arxiv.org/abs/2308.04079)

  • [hf](https://huggingface.co/camenduru/gaussian-splatting)

  • [medium](https://medium.com/axinc-ai/3d-gaussian-splatting-real-time-rendering-of-photorealistic-scenes-f7f1a47f060)

  • [project](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/)

  • [reddit](https://www.reddit.com/r/singularity/comments/163jeqa/3d_gaussian_splatting_for_realtime_radiance_field/)

  • [yt](https://youtu.be/T_kXY43VZnk), [yt](https://youtu.be/UXtuigy_wYc), [yt](https://youtu.be/HVv_IQKlafQ), [yt](https://youtu.be/w43KV79LsFw), [yt](https://youtu.be/TLK3TDDcJFU), [yt](https://youtu.be/kShNYOuDnlI), [yt](https://youtu.be/juRMRej2d5c)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/gaussian-splatting-colab/blob/main/gaussian_splatting_colab.ipynb) | 19.12.2023 |
| SMPLer-X | Scaling up expressive human pose and shape estimation (EHPS) towards the first generalist foundation model, with up to ViT-Huge as the backbone and training with up to 4.5M instances from diverse data sources |

  • [Zhongang Cai](https://caizhongang.github.io/)
  • [Wanqi Yin](https://scholar.google.com/citations?user=zlIJwBEAAAAJ)
  • [Ailing Zeng](https://ailingzeng.site/)
  • [Chen Wei](https://github.com/Wei-Chen-hub)
  • others
  • [Qingping Sun](https://github.com/ttxskk)
  • [Yanjun Wang](https://github.com/WYJSJTU)
  • [Hui En Pang](https://pangyyyyy.github.io/)
  • [Haiyi Mei](https://haiyi-mei.com/)
  • [Mingyuan Zhang](https://mingyuan-zhang.github.io/)
  • [Lei Zhang](https://www.leizhang.org/)
  • [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/)
  • [Lei Yang](https://scholar.google.com/citations?user=jZH2IPYAAAAJ)
  • [Ziwei Liu](https://liuziwei7.github.io/)

| [![](https://img.shields.io/github/stars/caizhongang/SMPLer-X?style=social)](https://github.com/caizhongang/SMPLer-X)

  • [arxiv](https://arxiv.org/abs/2309.17448)

  • [git](https://github.com/open-mmlab/mmhuman3d/blob/main/docs/human_data.md), [git](https://github.com/mks0601/Hand4Whole_RELEASE), [git](https://github.com/IDEA-Research/OSX)

  • [neurips](https://neurips.cc/virtual/2023/poster/73473)

  • [project](https://caizhongang.com/projects/SMPLer-X/)

  • [reddit](https://www.reddit.com/r/machinelearningnews/comments/176c5z7/this_ai_research_proposes_smplerx_a_generalist/)

  • [yt](https://youtu.be/DepTqbPpVzY), [yt](https://youtu.be/aFTGFInUnM4)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/SMPLer-X-colab/blob/main/SMPLer_X_colab.ipynb) | 18.12.2023 |
| DeepCache | Training-free paradigm that accelerates diffusion models from the perspective of model architecture (a minimal usage sketch follows this entry) |

  • [Xinyin Ma](https://horseee.github.io/)
  • [Gongfan Fang](https://fangggf.github.io/)
  • [Xinchao Wang](https://sites.google.com/site/sitexinchaowang/)

| [![](https://img.shields.io/github/stars/horseee/DeepCache?style=social)](https://github.com/horseee/DeepCache)

  • [arxiv](https://arxiv.org/abs/2312.00858)

  • [hf](https://huggingface.co/docs/diffusers/v0.24.0/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline)

  • [project](https://horseee.github.io/Diffusion_DeepCache/)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/18b40hh/deepcache_accelerating_diffusion_models_for_free/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/DeepCache-colab/blob/main/DeepCache_colab.ipynb) | 18.12.2023 |
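A minimal usage sketch following the DeepCache README: wrap an existing diffusers pipeline so that deep U-Net features are cached and reused across neighboring denoising steps. The base checkpoint is a placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline
from DeepCache import DeepCacheSDHelper

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # placeholder base model
).to("cuda")

helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(cache_interval=3, cache_branch_id=0)  # recompute deep features every 3 steps
helper.enable()
image = pipe("a photo of an astronaut riding a horse").images[0]
helper.disable()  # restore the original, uncached pipeline
```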
| MagicAnimate | Diffusion-based framework that aims at enhancing temporal consistency, faithfully preserving the reference image, and improving animation fidelity |

  • [Zhongcong Xu](https://scholar.google.com/citations?user=-4iADzMAAAAJ)
  • [Jianfeng Zhang](http://jeff95.me/)
  • [Jun Hao Liew](https://scholar.google.com/citations?user=8gm-CYYAAAAJ)
  • [Hanshu Yan](https://hanshuyan.github.io/)
  • others
  • [Jiawei Liu](https://jia-wei-liu.github.io/)
  • [Chenxu Zhang](https://zhangchenxu528.github.io/)
  • [Jiashi Feng](https://sites.google.com/site/jshfeng/home)
  • [Mike Shou](https://sites.google.com/view/showlab)

| [![](https://img.shields.io/github/stars/magic-research/magic-animate?style=social)](https://github.com/magic-research/magic-animate)

  • [arxiv](https://arxiv.org/abs/2311.16498)

  • [hf](https://huggingface.co/zcxu-eric/MagicAnimate), [hf](https://huggingface.co/runwayml/stable-diffusion-v1-5), [hf](https://huggingface.co/stabilityai/sd-vae-ft-mse)

  • [medium](https://medium.com/@AIWorldBlog/revolutionizing-image-animation-with-magicanimate-technology-78cc94151915)

  • [project](https://showlab.github.io/magicanimate/)

  • [website](https://www.magicanimate.org/)

  • [yt](https://youtu.be/td27SyA9M80), [yt](https://youtu.be/1pATjLFvNtY), [yt](https://youtu.be/HeXknItbMM8)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/MagicAnimate-colab/blob/main/MagicAnimate_colab.ipynb) | 18.12.2023 |
| DiffBIR | Towards Blind Image Restoration with Generative Diffusion Prior |

  • [Xinqi Lin](https://github.com/0x3f3f3f3fun)
  • [Jingwen He](https://github.com/hejingwenhejingwen)
  • [Ziyan Chen](https://github.com/ziyannchen)
  • [Zhaoyang Lyu](https://zhaoyanglyu.github.io/)
  • others
  • [Ben Fei](https://scholar.google.com/citations?user=skQROj8AAAAJ)
  • [Bo Dai](http://daibo.info/)
  • [Wanli Ouyang](https://wlouyang.github.io/)
  • [Yu Qiao](https://mmlab.siat.ac.cn/yuqiao)
  • [Chao Dong](http://xpixel.group/2010/01/20/chaodong.html)

| [![](https://img.shields.io/github/stars/XPixelGroup/DiffBIR?style=social)](https://github.com/XPixelGroup/DiffBIR)

  • [arxiv](https://arxiv.org/abs/2308.15070)

  • [git](https://github.com/albarji/mixture-of-diffusers)

  • [hf](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)

  • [project](https://0x3f3f3f3fun.github.io/projects/diffbir/)

  • [yt](https://youtu.be/rGnrpxWjBOg), [yt](https://youtu.be/MIRiJGuGqsg)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/DiffBIR-colab/blob/main/DiffBIR_colab.ipynb) | 18.12.2023 |
| AudioLDM | Text-to-audio system built on a latent space that learns continuous audio representations from contrastive language-audio pretraining (CLAP) latents (a minimal usage sketch follows this entry) |

  • [Haohe Liu](https://haoheliu.github.io/)
  • [Zehua Chen](https://github.com/zehuachenImperial)
  • [Yi Yuan](https://www.surrey.ac.uk/people/yi-yuan)
  • [Xinhao Mei](https://xinhaomei.github.io/)
  • others
  • [Xubo Liu](https://liuxubo717.github.io/)
  • [Danilo Mandic](https://www.imperial.ac.uk/people/d.mandic)
  • [Wenwu Wang](http://personal.ee.surrey.ac.uk/Personal/W.Wang/)
  • [Mark Plumbley](https://www.surrey.ac.uk/people/mark-plumbley)

| [![](https://img.shields.io/github/stars/haoheliu/AudioLDM?style=social)](https://github.com/haoheliu/AudioLDM)

  • [arxiv](https://arxiv.org/abs/2301.12503)

  • [git](https://github.com/LAION-AI/CLAP), [git](https://github.com/CompVis/stable-diffusion), [git](https://github.com/toshas/torch-fidelity)

  • [project](https://audioldm.github.io/)

  • [yt](https://youtu.be/_0VTltNYhao)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/olaviinha/NeuralTextToAudio/blob/main/AudioLDM_pub.ipynb) | 02.12.2023 |
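A minimal sketch via the diffusers port (AudioLDMPipeline) rather than the original repo; the diffusers-hosted checkpoint name is an assumption to verify on the Hub.

```python
import torch
import scipy.io.wavfile
from diffusers import AudioLDMPipeline

pipe = AudioLDMPipeline.from_pretrained(
    "cvssp/audioldm-s-full-v2", torch_dtype=torch.float16  # diffusers-hosted checkpoint
).to("cuda")
audio = pipe(
    "relaxing piano with soft rain in the background",
    num_inference_steps=50,
    audio_length_in_s=5.0,
).audios[0]
scipy.io.wavfile.write("audioldm_out.wav", rate=16000, data=audio)  # AudioLDM outputs 16 kHz
```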
| TabPFN | Pre-trained transformer that performs tabular classification for small datasets in a single forward pass, with no hyperparameter tuning (a minimal usage sketch follows this entry) |

  • [Noah Hollmann](https://github.com/noahho)
  • [Samuel Müller](https://scholar.google.com/citations?user=pevYEjAAAAAJ)
  • [Katharina Eggensperger](https://github.com/KEggensperger)
  • [Frank Hutter](https://ml.informatik.uni-freiburg.de/profile/hutter/)

| [![](https://img.shields.io/github/stars/automl/TabPFN?style=social)](https://github.com/automl/TabPFN)

  • [arxiv](https://arxiv.org/abs/2207.01848), [arxiv](https://arxiv.org/abs/2106.11189), [arxiv](https://arxiv.org/abs/2106.01342), [arxiv](https://arxiv.org/abs/2106.03253), [arxiv](https://arxiv.org/abs/2112.10510)

  • [blog post](https://www.automl.org/tabpfn-a-transformer-that-solves-small-tabular-classification-problems-in-a-second/)

  • [twitter](https://twitter.com/tunguz/status/1578730907711655937)

  • [yt](https://youtu.be/BGTO5N5-ack)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/194mCs6SEPEW6C0rcP7xWzcEtt1RBc8jJ) | 29.11.2023 |
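A minimal sketch of the scikit-learn-style interface (the original v1 API, which targets small datasets: roughly ≤1000 samples, ≤100 features, ≤10 classes).

```python
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = TabPFNClassifier(device="cpu")  # prediction happens in a single forward pass
clf.fit(X_tr, y_tr)                   # "fit" mostly just stores the training data
print(accuracy_score(y_te, clf.predict(X_te)))
```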
| Concept Sliders | Plug-and-play low-rank adaptors applied on top of pretrained models |

  • [Rohit Gandikota](https://rohitgandikota.github.io/)
  • [Joanna Materzyńska](https://joaanna.github.io/)
  • [Tingrui Zhou](https://www.p1at.dev/)
  • [Antonio Torralba](https://groups.csail.mit.edu/vision/torralbalab/)
  • [David Bau](https://baulab.info/)

| [![](https://img.shields.io/github/stars/rohitgandikota/sliders?style=social)](https://github.com/rohitgandikota/sliders)

  • [arxiv](https://arxiv.org/abs/2311.12092), [arxiv](https://arxiv.org/abs/2207.12598)

  • [medium](https://medium.com/@furkangozukara/concept-sliders-lora-adaptors-for-precise-control-in-diffusion-models-b7f6b36fabee)

  • [neurips](https://proceedings.neurips.cc/paper/2020/hash/49856ed476ad01fcff881d57e161d73f-Abstract.html)

  • [project](https://sliders.baulab.info/)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/180zon7/concept_sliders_lora_adaptors_for_precise_control/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/rohitgandikota/sliders/blob/main/demo_concept_sliders.ipynb) | 26.11.2023 |
| Qwen-VL | Set of large-scale vision-language models designed to perceive and understand both text and images (a minimal chat sketch follows this entry) |

  • [Jinze Bai](https://github.com/jinze1994)
  • [Shuai Bai](https://github.com/ShuaiBai623)
  • [Shusheng Yang](https://shushengyang.com/)
  • [Shijie Wang](https://scholar.google.com/citations?user=DuAqyTwAAAAJ)
  • others
  • [Sinan Tan](https://www.tinytangent.com/)
  • [Peng Wang](https://scholar.google.com/citations?user=7fjqA0YAAAAJ)
  • [Junyang Lin](https://justinlin610.github.io/)
  • [Chang Zhou](https://scholar.google.com/citations?user=QeSoG3sAAAAJ)
  • [Jingren Zhou](http://www.cs.columbia.edu/~jrzhou/)

| [![](https://img.shields.io/github/stars/QwenLM/Qwen-VL?style=social)](https://github.com/QwenLM/Qwen-VL)

  • [arxiv](https://arxiv.org/abs/2308.12966), [arxiv](https://arxiv.org/abs/2106.09685), [arxiv](https://arxiv.org/abs/2305.14314)

  • [demo](https://modelscope.cn/studios/qwen/Qwen-VL-Chat-Demo/summary)

  • [discord](https://discord.gg/z3GAxXZ9Ce)

  • [git](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation), [git](https://github.com/OFA-Sys/TouchStone), [git](https://github.com/PanQiWei/AutoGPTQ)

  • [hf](https://huggingface.co/spaces/AILab-CVC/SEED-Bench_Leaderboard), [hf](https://huggingface.co/Qwen/Qwen-VL)

  • [yt](https://youtu.be/ElrSJDg23Po), [yt](https://youtu.be/E3MS8GfGWj4), [yt](https://youtu.be/ju09YaO7BGA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/Qwen-VL-Chat-colab/blob/main/Qwen_VL_Chat_colab.ipynb) | 24.11.2023 |
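A minimal chat sketch following the Qwen-VL README; it requires `trust_remote_code` because the chat helpers live in the model repo, and the demo image URL is the one that README uses.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True
).eval()

# from_list_format and chat are custom helpers shipped with the model repo
query = tokenizer.from_list_format([
    {"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"},
    {"text": "What is in this image?"},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```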
| AnimeGANv3 | Double-tail generative adversarial network for fast photo animation |

  • [Gang Liu](https://github.com/lg0061408)
  • [Xin Chen](https://github.com/TachibanaYoshino)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1587/transinf.2023EDP7061)](http://doi.org/10.1587/transinf.2023EDP7061) [![](https://img.shields.io/github/stars/TachibanaYoshino/AnimeGANv3?style=social)](https://github.com/TachibanaYoshino/AnimeGANv3)

  • [project](https://tachibanayoshino.github.io/AnimeGANv3/)

  • [yt](https://youtu.be/EosubeJmAnE), [yt](https://youtu.be/5qLUflWb45E), [yt](https://youtu.be/iFjiaPlhVm4), [yt](https://youtu.be/vJqQQMRYKh0), [yt](https://youtu.be/0KaScDxgyBw), [yt](https://youtu.be/6WXhjXb5a-o)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1XYNWwM8Xq-U7KaTOqNap6A-Yq1f-V-FB) | 23.11.2023 |
| Ithaca | First deep neural network for the textual restoration and geographical and chronological attribution of ancient Greek inscriptions |

  • [Yannis Assael](https://www.assael.gr/)
  • [Thea Sommerschield](https://theasommerschield.it/)
  • [Brendan Shillingford](https://github.com/bshillingford)
  • [Mahyar Bordbar](https://scholar.google.com/citations?user=KB3ldSQAAAAJ)
  • others
  • [John Pavlopoulos](https://ipavlopoulos.github.io/)
  • [Marita Chatzipanagiotou](https://gr.linkedin.com/in/marita-chatzipanagiotou-b0611a1a2)
  • [Ion Androutsopoulos](https://pages.aueb.gr/users/ion/)
  • [Jonathan Prag](https://www.classics.ox.ac.uk/people/dr-jonathan-prag)
  • [Nando de Freitas](https://www.cs.ox.ac.uk/people/nando.defreitas/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1038/s41586-022-04448-z)](https://doi.org/10.1038/s41586-022-04448-z) [![](https://img.shields.io/github/stars/google-deepmind/ithaca?style=social)](https://github.com/google-deepmind/ithaca)

  • [arxiv](https://arxiv.org/abs/1910.06262)

  • [git](https://github.com/sommerschield/iphi)

  • [medium](https://odsc.medium.com/deep-neural-networks-could-be-key-to-ancient-text-restoration-and-attribution-research-shows-81a2d89d9413), [medium](https://medium.com/syncedreview/ithaca-paper-published-in-nature-the-first-dnn-designed-for-textual-restoration-and-geographical-ef395d56697e)

  • [project](https://ithaca.deepmind.com/)

  • [reddit](https://www.reddit.com/r/MachineLearning/comments/tgeo0q/r_restoring_and_attributing_ancient_texts_using/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/ithaca/blob/master/colabs/ithaca_inference.ipynb) | 21.11.2023 |
| PixArt-Σ | Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation |

  • [Junsong Chen](https://lawrence-cj.github.io/)
  • [Chongjian Ge](https://chongjiange.github.io/)
  • [Enze Xie](https://xieenze.github.io/)
  • [Yue Wu](https://yuewuhkust.github.io/)
  • others
  • [Lewei Yao](https://scholar.google.com/citations?user=hqDyTg8AAAAJ)
  • [Xiaozhe Ren](https://scholar.google.com/citations?user=3t2j87YAAAAJ)
  • [Zhongdao Wang](https://zhongdao.github.io/)
  • [Ping Luo](http://luoping.me/)
  • [Huchuan Lu](https://scholar.google.com/citations?user=D3nE0agAAAAJ)
  • [Zhenguo Li](https://scholar.google.com/citations?user=XboZC1AAAAAJ)

| [![](https://img.shields.io/github/stars/PixArt-alpha/PixArt-sigma?style=social)](https://github.com/PixArt-alpha/PixArt-sigma)

  • [arxiv](https://arxiv.org/abs/2403.04692), [arxiv](https://arxiv.org/abs/2310.00426), [arxiv](https://arxiv.org/abs/2401.05252)

  • [discord](https://discord.gg/rde6eaE5Ta)

  • [hf](https://huggingface.co/spaces/PixArt-alpha/PixArt-alpha), [hf](https://huggingface.co/spaces/PixArt-alpha/PixArt-LCM)

  • [project](https://pixart-alpha.github.io/PixArt-sigma-project/)

  • [reddit](https://www.reddit.com/r/PixArtSigma/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1jZ5UZXk7tcpTfVwnX33dDuefNMcnW9ME) | 07.11.2023 |
| Zero123++ | Image-conditioned diffusion model for generating 3D-consistent multi-view images from a single input view (a minimal usage sketch follows this entry) |

  • [Ruoxi Shi](https://rshi.top/)
  • [Hansheng Chen](https://lakonik.github.io/)
  • [Zhuoyang Zhang](https://github.com/zhuoyang20)
  • [Minghua Liu](https://cseweb.ucsd.edu/~mil070/)
  • others
  • [Chao Xu](https://chaoxu.xyz/)
  • [Xinyue Wei](https://sarahweiii.github.io/)
  • [Linghao Chen](https://ootts.github.io/)
  • [Chong Zeng](https://www.chong-zeng.com/)
  • [Hao Su](https://cseweb.ucsd.edu/~haosu/)

| [![](https://img.shields.io/github/stars/SUDO-AI-3D/zero123plus?style=social)](https://github.com/SUDO-AI-3D/zero123plus)

  • [arxiv](https://arxiv.org/abs/2310.15110)

  • [git](https://github.com/One-2-3-45/One-2-3-45), [git](https://github.com/cvlab-columbia/zero123)

  • [hf](https://huggingface.co/spaces/sudo-ai/zero123plus-demo-space), [hf](https://huggingface.co/spaces/ysharma/Zero123PlusDemo)

  • [medium](https://xthemadgenius.medium.com/zero123-your-guide-to-single-view-to-multi-view-3d-image-transformation-b4346b0e6615)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/17f4c6p/zero123_a_single_image_to_consistent_multiview/)

  • [yt](https://youtu.be/V9AR-81pAgk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1_5ECnTOosRuAsm2tUp0zvBG0DppL-F3V) | 26.10.2023 |
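A minimal sketch following the repo README, which ships the model as a diffusers custom pipeline; input.png is a placeholder single-view image and the step count is illustrative.

```python
import torch
from PIL import Image
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "sudo-ai/zero123plus-v1.1",
    custom_pipeline="sudo-ai/zero123plus-pipeline",  # pipeline code hosted with the weights
    torch_dtype=torch.float16,
).to("cuda")

cond = Image.open("input.png")                       # placeholder input view
grid = pipe(cond, num_inference_steps=36).images[0]  # 3x2 grid of novel views
grid.save("multiview.png")
```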
| UniFormerV2 | Unified Transformer for Efficient Spatiotemporal Representation Learning |

  • [Kunchang Li](https://github.com/Andy1621)
  • [Yali Wang](https://scholar.google.com/citations?user=hD948dkAAAAJ)
  • [Yinan He](https://github.com/yinanhe)
  • [Yizhuo Li](http://liyizhuo.com/)
  • others
  • [Yi Wang](https://scholar.google.com/citations?user=Xm2M8UwAAAAJ)
  • [Limin Wang](http://wanglimin.github.io/)
  • [Yu Qiao](http://mmlab.siat.ac.cn/yuqiao/index.html)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV51070.2023.00157)](https://doi.org/10.1109/ICCV51070.2023.00157) [![](https://img.shields.io/github/stars/OpenGVLab/UniFormerV2?style=social)](https://github.com/OpenGVLab/UniFormerV2)

  • [arxiv](https://arxiv.org/abs/2211.09552)

  • [git](https://github.com/innat/UniFormerV2), [git](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/vision_transformer.py), [git](https://github.com/facebookresearch/SlowFast)

  • [hf](https://huggingface.co/spaces/Andy1621/uniformerv2_demo)

  • [pwc](https://paperswithcode.com/sota/action-classification-on-activitynet?p=uniformerv2-spatiotemporal-learning-by-arming), [pwc](https://paperswithcode.com/sota/action-recognition-on-hacs?p=uniformerv2-spatiotemporal-learning-by-arming), [pwc](https://paperswithcode.com/sota/action-classification-on-moments-in-time?p=uniformerv2-spatiotemporal-learning-by-arming), [pwc](https://paperswithcode.com/sota/action-recognition-in-videos-on-something-1?p=uniformerv2-spatiotemporal-learning-by-arming), [pwc](https://paperswithcode.com/sota/action-classification-on-kinetics-700?p=uniformerv2-spatiotemporal-learning-by-arming)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1Z6RzLcno_eLGD_E96mlWoyGieGbKxPQr) | 20.10.2023 |
| Show-1 | Hybrid model, dubbed Show-1, that marries pixel-based and latent-based video diffusion models (VDMs) for text-to-video generation |

  • [David Junhao Zhang](https://junhaozhang98.github.io/)
  • [Jay Zhangjie Wu](https://zhangjiewu.github.io/)
  • [Jiawei Liu](https://jia-wei-liu.github.io/)
  • [Rui Zhao](https://ruizhaocv.github.io/)
  • others
  • [Lingmin Ran](https://siacorplab.nus.edu.sg/people/ran-lingmin/)
  • [Yuchao Gu](https://ycgu.site/)
  • [Difei Gao](https://scholar.google.com/citations?user=No9OsocAAAAJ)
  • [Mike Zheng Shou](https://sites.google.com/view/showlab/home)

| [![](https://img.shields.io/github/stars/showlab/Show-1?style=social)](https://github.com/showlab/Show-1)

  • [arxiv](https://arxiv.org/abs/2309.15818)

  • [hf](https://huggingface.co/showlab/show-1-base), [hf](https://huggingface.co/showlab/show-1-interpolation), [hf](https://huggingface.co/showlab/show-1-sr1), [hf](https://huggingface.co/showlab/show-1-sr2), [hf](https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis), [hf](https://huggingface.co/cerspense/zeroscope_v2_576w)

  • [project](https://showlab.github.io/Show-1/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/Show-1-colab/blob/main/Show_1_steps_colab.ipynb) | 15.10.2023 |
| DA-CLIP | Degradation-aware vision-language model that transfers pretrained vision-language models to low-level vision tasks, serving as a universal framework for image restoration |

  • [Ziwei Luo](https://algolzw.github.io/)
  • [Fredrik Gustafsson](http://www.fregu856.com/)
  • [Zheng Zhao](https://zz.zabemon.com/)
  • [Jens Sjölund](https://github.com/jsjol)
  • [Thomas Schön](https://user.it.uu.se/~thosc112/index.html)

| [![](https://img.shields.io/github/stars/Algolzw/daclip-uir?style=social)](https://github.com/Algolzw/daclip-uir)

  • [arxiv](https://arxiv.org/abs/2310.01018)

  • [git](https://github.com/Algolzw/image-restoration-sde)

  • [hf](https://huggingface.co/weblzw/daclip-uir-ViT-B-32-irsde)

  • [project](https://algolzw.github.io/daclip-uir/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/daclip-uir-colab/blob/main/daclip_uir_gradio_colab.ipynb) | 11.10.2023 |
| SadTalker | Generates 3D motion coefficients of the 3DMM from audio and implicitly modulates a novel 3D-aware face renderer for talking-head generation |

  • [Wenxuan Zhang](https://github.com/Winfredy)
  • [Xiaodong Cun](https://vinthony.github.io/academic/)
  • [Xuan Wang](https://xuanwangvc.github.io/)
  • [Yong Zhang](https://yzhang2016.github.io/)
  • others
  • [Xi Shen](https://xishen0220.github.io/)
  • [Yu Guo](https://yuguo-xjtu.github.io/)
  • [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ)
  • [Fei Wang](http://gr.xjtu.edu.cn/zh/web/feynmanw)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52729.2023.00836)](https://doi.org/10.1109/CVPR52729.2023.00836) [![](https://img.shields.io/github/stars/OpenTalker/SadTalker?style=social)](https://github.com/OpenTalker/SadTalker)

  • [arxiv](https://arxiv.org/abs/2211.12194)

  • [discord](https://discord.gg/rrayYqZ4tf)

  • [git](https://github.com/OpenTalker/DPE), [git](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis), [git](https://github.com/RenYurui/PIRender), [git](https://github.com/microsoft/Deep3DFaceReconstruction), [git](https://github.com/xinntao/facexlib), [git](https://github.com/Zz-ww/SadTalker-Video-Lip-Sync), [git](https://github.com/FeiiYin/SPI)

  • [project](https://sadtalker.github.io/)

  • [yt](https://youtu.be/AoIzJWnQw1M), [yt](https://youtu.be/fDgQcDL-qOc), [yt](https://youtu.be/BkSnM9cxkcM), [yt](https://youtu.be/7u0FYVPQ5rc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/OpenTalker/SadTalker/blob/main/quick_demo.ipynb) | 10.10.2023 |
| Musika | Music generation system that can be trained on hundreds of hours of music using a single consumer GPU, and that allows for much faster than real-time generation of music of arbitrary length on a consumer CPU |

  • [Marco Pasini](https://github.com/marcoppasini)
  • [Jan Schlüter](https://www.ofai.at/~jan.schlueter/)

| [![](https://img.shields.io/github/stars/marcoppasini/musika?style=social)](https://github.com/marcoppasini/musika)

  • [arxiv](https://arxiv.org/abs/2208.08706), [arxiv](https://arxiv.org/abs/2005.08526)

  • [data](https://magenta.tensorflow.org/datasets/maestro)

  • [git](https://github.com/hendriks73/tempo-cnn), [git](https://github.com/CPJKU/madmom)

  • [hf](https://huggingface.co/spaces/marcop/musika)

  • [project](https://marcoppasini.github.io/musika)

  • [yt](https://youtu.be/QBl8y2Z_i7Y), [yt](https://youtu.be/0l7OSM-bFvc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1PowSw3doBURwLE-OTCiWkO8HVbS5paRb) | 09.10.2023 |
| YOLOv6 | Single-stage object detection framework dedicated to industrial applications |

  • [Kaiheng Weng](https://github.com/khwengXU)
  • [Meng Cheng](https://github.com/MTChengMeng)
  • [Yiduo Li](https://github.com/yili123123)
  • [Xiangxiang Chu](https://scholar.google.com/citations?&user=jn21pUsAAAAJ)
  • [Xiaolin Wei](https://scholar.google.com/citations?user=s5b7lU4AAAAJ)

| [![](https://img.shields.io/github/stars/meituan/YOLOv6?style=social)](https://github.com/meituan/YOLOv6)

  • [arxiv](https://arxiv.org/abs/2209.02976), [arxiv](https://arxiv.org/abs/2301.05586)

  • [blog post](https://learnopencv.com/yolov6-object-detection/)

  • [data](https://cocodataset.org/#download)

  • [docs](https://yolov6-docs.readthedocs.io/zh_CN/latest/)

  • [git](https://github.com/FeiGeChuanShu/ncnn-android-yolov6), [git](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/ort/cv/yolov6.cpp), [git](https://github.com/Linaom1214/TensorRT-For-YOLO-Series), [git](https://github.com/zhiqwang/yolov5-rt-stack/tree/main/deployment/tensorrt-yolov6)

  • [yt](https://youtu.be/3OpwcGU7VvE), [yt](https://youtu.be/GJ0lVOE3a7c), [yt](https://youtu.be/3hqkbqJ5ag8), [yt](https://youtu.be/fFCWrMFH2UY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/meituan/YOLOv6/blob/master/turtorial.ipynb) | 08.10.2023 |
| DreamGaussian | Algorithm to convert 3D Gaussians into textured meshes and apply a fine-tuning stage to refine the details |

  • [Jiaxiang Tang](https://me.kiui.moe/)
  • [Jiawei Ren](https://jiawei-ren.github.io/)
  • [Hang Zhou](https://hangz-nju-cuhk.github.io/)
  • [Ziwei Liu](https://liuziwei7.github.io/)
  • [Gang Zeng](http://www.cis.pku.edu.cn/info/1177/1378.htm)

| [![](https://img.shields.io/github/stars/dreamgaussian/dreamgaussian?style=social)](https://github.com/dreamgaussian/dreamgaussian)

  • [arxiv](https://arxiv.org/abs/2309.16653)

  • [git](https://github.com/graphdeco-inria/diff-gaussian-rasterization), [git](https://github.com/NVlabs/nvdiffrast), [git](https://github.com/hoffstadt/DearPyGui)

  • [project](https://dreamgaussian.github.io/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1sLpYmmLS209-e5eHgcuqdryFRRO6ZhFS) | 04.10.2023 |
| ICON | Given a set of images, the method estimates a detailed 3D surface from each image and then combines these into an animatable avatar |

  • [Yuliang Xiu](https://xiuyuliang.cn/)
  • [Jinlong Yang](https://is.mpg.de/~jyang)
  • [Dimitrios Tzionas](https://ps.is.mpg.de/~dtzionas)
  • [Michael Black](https://ps.is.mpg.de/~black)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.01294)](https://doi.org/10.1109/CVPR52688.2022.01294) [![](https://img.shields.io/github/stars/yuliangxiu/icon?style=social)](https://github.com/yuliangxiu/icon)

  • [arxiv](https://arxiv.org/abs/2112.09127)

  • [git](https://github.com/facebookresearch/KeypointNeRF), [git](https://github.com/YadiraF/PIXIE), [git](https://github.com/YuliangXiu/bvh-distance-queries), [git](https://github.com/Project-Splinter/MonoPortDataset), [git](https://github.com/ZhengZerong/PaMIR), [git](https://github.com/Project-Splinter/MonoPort), [git](https://github.com/shunsukesaito/SCANimate), [git](https://github.com/google/aistplusplus_api)

  • [hf](https://huggingface.co/spaces/Yuliang/ICON)

  • [project](https://icon.is.tue.mpg.de/)

  • [yt](https://youtu.be/hZd6AYin2DE)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1-AWeWhPvCTBX0KfMtgtMk10uPU05ihoA) | 31.08.2023 |
| DINOv2 | Produces high-performance visual features that can be directly employed with classifiers as simple as linear layers on a variety of computer vision tasks; these visual features are robust and perform well across domains without any requirement for fine-tuning (a minimal feature-extraction sketch follows this entry) |

  • [Maxime Oquab](https://scholar.google.com/citations?user=5vteYV8AAAAJ)
  • [Timothée Darcet](https://github.com/TimDarcet)
  • [Théo Moutakanni](https://github.com/TheoMoutakanni)
  • [Huy Vo](https://huyvvo.github.io/)
  • others
  • [Marc Szafraniec](https://github.com/MarcSzafraniec/)
  • [Vasil Khalidov](https://scholar.google.com/citations?user=tjazz3AAAAAJ)
  • [Pierre Fernandez](https://pierrefdz.github.io/)
  • [Daniel Haziza](https://scholar.google.com/citations?user=2eSKdFMAAAAJ)
  • [Francisco Massa](https://github.com/fmassa)
  • [Alaaeldin El-Nouby](https://aelnouby.github.io/)
  • [Mahmoud Assran](http://www.midoassran.ca/)
  • [Nicolas Ballas](https://scholar.google.com/citations?user=euUV4iUAAAAJ)
  • [Wojciech Galuba](https://scholar.google.com/citations?user=jyaTX64AAAAJ)
  • [Russell Howes](http://www.russellhowes.net/)
  • [Po-Yao Huang](https://berniebear.github.io/)
  • [Shang-Wen Li](https://swdanielli.github.io/)
  • [Ishan Misra](http://imisra.github.io/)
  • [Michael Rabbat](https://scholar.google.com/citations?user=cMPKe9UAAAAJ)
  • [Vasu Sharma](https://vasusharma.github.io/)
  • [Gabriel Synnaeve](https://syhw.github.io/)
  • [Hu Xu](https://howardhsu.github.io/)
  • [Hervé Jegou](https://github.com/jegou)
  • [Julien Mairal](http://thoth.inrialpes.fr/people/mairal/)
  • [Patrick Labatut](https://github.com/patricklabatut)
  • [Armand Joulin](https://scholar.google.com/citations?user=kRJkDakAAAAJ)
  • [Piotr Bojanowski](https://github.com/piotr-bojanowski)

| [![](https://img.shields.io/github/stars/facebookresearch/dinov2?style=social)](https://github.com/facebookresearch/dinov2)

  • [arxiv](https://arxiv.org/abs/2304.07193)

  • [blog post](https://ai.facebook.com/blog/dino-v2-computer-vision-self-supervised-learning/)

  • [demo](https://dinov2.metademolab.com/)

  • [hf](https://huggingface.co/docs/transformers/main/model_doc/dinov2)

  • [medium](https://purnasaigudikandula.medium.com/dinov2-image-classification-visualization-and-paper-review-745bee52c826), [medium](https://towardsdatascience.com/meta-ais-another-revolutionary-large-scale-model-dinov2-for-image-feature-extraction-1114b287eadd)

  • [yt](https://youtu.be/csEgtSh7jV4), [yt](https://www.youtube.com/live/KSZiJ4k28b4), [yt](https://youtu.be/RZEkdOc3szU)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/dinov2/blob/main/notebooks/semantic_segmentation.ipynb) | 31.08.2023 |
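A minimal feature-extraction sketch via torch.hub; input sides must be multiples of the 14-pixel patch size, and the image path is a placeholder.

```python
import torch
import torchvision.transforms as T
from PIL import Image

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()

transform = T.Compose([
    T.Resize(256), T.CenterCrop(224),  # 224 is divisible by the 14px patch size
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
img = transform(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    features = model(img)  # (1, 384) image-level features for ViT-S/14
```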
| OWL-ViT | Simple Open-Vocabulary Object Detection with Vision Transformers (a minimal detection sketch follows this entry) |

  • [Matthias Minderer](http://matthias.minderer.net/)
  • [Alexey Gritsenko](https://github.com/AlexeyG)
  • [Austin Stone](https://github.com/AustinCStone)
  • [Maxim Neumann](https://github.com/maximneumann)
  • others
  • [Dirk Weissenborn](https://github.com/dirkweissenborn)
  • [Alexey Dosovitskiy](https://scholar.google.com/citations?user=FXNJRDoAAAAJ)
  • [Aravindh Mahendran](https://github.com/aravindhm)
  • [Anurag Arnab](https://github.com/anuragarnab)
  • [Mostafa Dehghani](https://mostafadehghani.com/)
  • [Zhuoran Shen](https://cmsflash.github.io/)
  • [Xiao Wang](https://scholar.google.com/citations?user=ukyXqzMAAAAJ)
  • [Xiaohua Zhai](https://github.com/xiaohuazhai)
  • [Thomas Kipf](https://tkipf.github.io/)
  • [Neil Houlsby](https://neilhoulsby.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-031-20080-9_42)](https://doi.org/10.1007/978-3-031-20080-9_42)

  • [arxiv](https://arxiv.org/abs/2205.06230)

  • [hf](https://huggingface.co/docs/transformers/model_doc/owlvit)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb) | 21.08.2023 |
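A minimal zero-shot detection sketch with the Transformers implementation linked above; the text queries and image path are placeholders.

```python
import torch
from PIL import Image
from transformers import OwlViTForObjectDetection, OwlViTProcessor

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("example.jpg")                    # placeholder image path
texts = [["a photo of a cat", "a photo of a dog"]]   # free-text queries
inputs = processor(text=texts, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])      # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.1, target_sizes=target_sizes
)[0]
for box, score, label in zip(results["boxes"], results["scores"], results["labels"]):
    print(texts[0][label], round(score.item(), 2), [round(v, 1) for v in box.tolist()])
```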
| StyleGAN3 | Alias-Free Generative Adversarial Networks |

  • [Tero Karras](https://research.nvidia.com/person/tero-karras)
  • [Miika Aittala](https://research.nvidia.com/person/Miika-Aittala)
  • [Samuli Laine](https://research.nvidia.com/person/Samuli-Laine)
  • [Erik Härkönen](https://github.com/harskish)
  • others
  • [Janne Hellsten](https://research.nvidia.com/person/Janne-Hellsten)
  • [Jaakko Lehtinen](https://users.aalto.fi/~lehtinj7/)
  • [Timo Aila](https://research.nvidia.com/person/timo-aila)

| [![](https://img.shields.io/github/stars/NVlabs/stylegan3?style=social)](https://github.com/NVlabs/stylegan3)

  • [arxiv](https://arxiv.org/abs/2106.12423), [arxiv](https://arxiv.org/abs/1706.08500), [arxiv](https://arxiv.org/abs/1801.01401), [arxiv](https://arxiv.org/abs/1904.06991), [arxiv](https://arxiv.org/abs/1812.04948), [arxiv](https://arxiv.org/abs/1606.03498)

  • [git](https://github.com/NVlabs/stylegan3-detector), [git](https://github.com/NVlabs/ffhq-dataset), [git](https://github.com/NVlabs/metfaces-dataset), [git](https://github.com/NVlabs/stylegan2-ada-pytorch), [git](https://github.com/NVlabs/stylegan2-ada)

  • [neurips](https://proceedings.neurips.cc/paper/2021/hash/076ccd93ad68be51f23707988e934906-Abstract.html)

  • [project](https://nvlabs.github.io/stylegan3)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1BXNHZBai-pXtP-ncliouXo_kUiG1Pq7M) | 13.08.2023 |
| FateZero | Zero-shot text-based editing method on real-world videos without per-prompt training or user-specific mask |

  • [Chenyang Qi](https://chenyangqiqi.github.io/)
  • [Xiaodong Cun](https://vinthony.github.io/academic/)
  • [Yong Zhang](https://yzhang2016.github.io/)
  • [Chenyang Lei](https://chenyanglei.github.io/)
  • others
  • [Xintao Wang](https://xinntao.github.io/)
  • [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ)
  • [Qifeng Chen](https://cqf.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV51070.2023.01460)](https://doi.org/10.1109/ICCV51070.2023.01460) [![](https://img.shields.io/github/stars/ChenyangQiQi/FateZero?style=social)](https://github.com/ChenyangQiQi/FateZero)

  • [arxiv](https://arxiv.org/abs/2303.09535)

  • [git](https://github.com/bryandlee/Tune-A-Video), [git](https://github.com/google/prompt-to-prompt)

  • [hf](https://huggingface.co/spaces/chenyangqi/FateZero), [hf](https://huggingface.co/chenyangqi/jeep_tuned_200)

  • [project](https://fate-zero-edit.github.io/)

  • [reddit](https://www.reddit.com/r/MachineLearning/comments/11uzioo/r_fatezero_fusing_attentions_for_zeroshot/)

  • [video](https://hkustconnect-my.sharepoint.com/personal/cqiaa_connect_ust_hk/_layouts/15/stream.aspx?id=%2Fpersonal%2Fcqiaa%5Fconnect%5Fust%5Fhk%2FDocuments%2Fdiffusion%2Fweb%5Fvideo%2Emp4&ga=1&referrer=StreamWebApp%2EWeb&referrerScenario=AddressBarCopied%2Eview%2E9b85614a%2D5af9%2D4485%2Dbcb1%2Db39f90e8d381)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ChenyangQiQi/FateZero/blob/main/colab_fatezero.ipynb) | 13.08.2023 |
| Big GAN | Large Scale GAN Training for High Fidelity Natural Image Synthesis |

  • [Andrew Brock](https://github.com/ajbrock)
  • [Jeff Donahue](https://jeffdonahue.com/)
  • [Karen Simonyan](https://scholar.google.com/citations?user=L7lMQkQAAAAJ)

|
  • [arxiv](https://arxiv.org/abs/1809.11096)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/biggan_generation_with_tf_hub.ipynb) | 03.08.2023 |
| LaMa | Resolution-robust Large Mask Inpainting with Fourier Convolutions |

  • [Roman Suvorov](https://github.com/windj007)
  • [Elizaveta Logacheva](https://github.com/elimohl)
  • [Anton Mashikhin](https://www.linkedin.com/in/heyt0ny/)
  • [Anastasia Remizova](https://github.com/feathernox)
  • others
  • [Arsenii Ashukha](https://ashukha.com/)
  • [Aleksei Silvestrov](https://www.linkedin.com/in/%D0%B0%D0%BB%D0%B5%D0%BA%D1%81%D0%B5%D0%B9-%D1%81%D0%B8%D0%BB%D1%8C%D0%B2%D0%B5%D1%81%D1%82%D1%80%D0%BE%D0%B2-141b99b6/)
  • [Naejin Kong](https://github.com/naejin-kong)
  • [Harshith Goka](https://github.com/h9399-goka)
  • [Kiwoong Park](https://github.com/kyoong-park)
  • [Victor Lempitsky](http://sites.skoltech.ru/compvision/members/vilem/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/WACV51458.2022.00323)](https://doi.org/10.1109/WACV51458.2022.00323) [![](https://img.shields.io/github/stars/saic-mdal/lama?style=social)](https://github.com/saic-mdal/lama)

  • [arxiv](https://arxiv.org/abs/2109.07161)

  • [git](https://github.com/andy971022/auto-lama), [git](https://github.com/richzhang/PerceptualSimilarity), [git](https://github.com/Po-Hsun-Su/pytorch-ssim), [git](https://github.com/mseitzer/pytorch-fid)

  • [project](https://saic-mdal.github.io/lama-project/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/saic-mdal/lama/blob/master/colab/LaMa_inpainting.ipynb) | 02.08.2023 |
| MakeItTalk | A method that generates expressive talking-head videos from a single facial image with audio as the only input |

  • [Yang Zhou](https://people.umass.edu/~yangzhou/)
  • [Xintong Han](http://users.umiacs.umd.edu/~xintong/)
  • [Eli Shechtman](https://research.adobe.com/person/eli-shechtman/)
  • [Jose Echevarria](http://www.jiechevarria.com/)
  • others
  • [Evangelos Kalogerakis](https://people.cs.umass.edu/~kalo/)
  • [Dingzeyu Li](https://dingzeyu.li/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3414685.3417774)](https://doi.org/10.1145/3414685.3417774) [![](https://img.shields.io/github/stars/yzhou359/MakeItTalk?style=social)](https://github.com/yzhou359/MakeItTalk)

  • [arxiv](https://arxiv.org/abs/2004.12992)

  • [data](https://drive.google.com/drive/folders/1EwuAy3j1b9Zc1MsidUfxG_pJGc_cV60O)

  • [project](https://people.umass.edu/~yangzhou/MakeItTalk/)

  • [yt](https://www.youtube.com/watch?v=vUMGKASgbf8)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/iboyles/makeittalknow/blob/main/working_quick_demo_of_makeittalk_07_2023.ipynb) | 27.07.2023 |
| HiDT | A generative image-to-image model and a new upsampling scheme that make it possible to apply image translation at high resolution |

  • [Denis Korzhenkov](https://github.com/denkorzh)
  • [Gleb Sterkin](https://github.com/belkakari)
  • [Sergey Nikolenko](https://logic.pdmi.ras.ru/~sergey/)
  • [Victor Lempitsky](http://sites.skoltech.ru/compvision/members/vilem/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00751)](https://doi.org/10.1109/CVPR42600.2020.00751) [![](https://img.shields.io/github/stars/saic-mdal/HiDT?style=social)](https://github.com/saic-mdal/HiDT)

  • [arxiv](https://arxiv.org/abs/2003.08791)

  • [project](https://saic-mdal.github.io/HiDT/)

  • [yt](https://www.youtube.com/playlist?list=PLuvGzlEQXT1KQuKrfBBEWh2f3PToxyeM5), [yt](https://www.youtube.com/watch?v=EWKAgwgqXB4)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/saic-mdal/hidt/blob/master/notebooks/HighResolutionDaytimeTranslation.ipynb) | 24.07.2023 |
| CutLER | Simple approach for training unsupervised object detection and segmentation models |

  • [Xudong Wang](https://people.eecs.berkeley.edu/~xdwang/)
  • [Rohit Girdhar](https://rohitgirdhar.github.io/)
  • [Stella Yu](https://www1.icsi.berkeley.edu/~stellayu/)
  • [Ishan Misra](https://imisra.github.io/)

| [![](https://img.shields.io/github/stars/facebookresearch/CutLER?style=social)](https://github.com/facebookresearch/CutLER)

  • [arxiv](https://arxiv.org/abs/2301.11320), [arxiv](https://arxiv.org/abs/1706.02677)

  • [docs](https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html)

  • [project](http://people.eecs.berkeley.edu/~xdwang/projects/CutLER/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1NgEyFHvOfuA2MZZnfNPWg1w5gSr3HOBb) | 24.07.2023 |
| Recognize Anything & Tag2Text | Vision-language pre-training framework that introduces image tagging into vision-language models to guide the learning of visual-linguistic features |

  • [Xinyu Huang](https://xinyu1205.github.io/)
  • [Youcai Zhang](https://github.com/Coler1994)
  • [Jinyu Ma](https://github.com/majinyu666)
  • [Zhaoyang Li](https://github.com/ZhaoyangLi-nju)
  • others
  • [Yanchun Xie](https://scholar.google.com/citations?user=T0xk9-wAAAAJ)
  • [Yuzhuo Qin](https://scholar.google.com/citations?user=5ZG65AkAAAAJ)
  • [Tong Luo](https://ieeexplore.ieee.org/author/37089387319)
  • [Yaqian Li](https://openreview.net/profile?id=~Yaqian_Li1)
  • [Yandong Guo](http://www.lsl.zone/)
  • [Lei Zhang](https://www.leizhang.org/)

| [![](https://img.shields.io/github/stars/xinyu1205/recognize-anything?style=social)](https://github.com/xinyu1205/recognize-anything)

  • [arxiv](https://arxiv.org/abs/2306.03514), [arxiv](https://arxiv.org/abs/2303.05657)

  • [git](https://github.com/OpenGVLab/Ask-Anything), [git](https://github.com/positive666/Prompt-Can-Anything)

  • [medium](https://artgor.medium.com/paper-review-recognize-anything-a-strong-image-tagging-model-9e5e1c6dd0af)

  • [project](https://recognize-anything.github.io/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/mhd-medfa/recognize-anything/blob/main/recognize_anything_demo.ipynb) | 09.07.2023 |
| Thin-Plate Spline Motion Model | End-to-end unsupervised motion transfer framework |

  • [Jian Zhao](https://scholar.google.com/citations?user=OKm5CQYAAAAJ)
  • [Hui Zhang](https://scholar.google.com/citations?user=w3mzCiwAAAAJ)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.00364)](https://doi.org/10.1109/CVPR52688.2022.00364) [![](https://img.shields.io/github/stars/yoyo-nb/Thin-Plate-Spline-Motion-Model?style=social)](https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model)

  • [arxiv](https://arxiv.org/abs/2203.14367)

  • [git](https://github.com/AliaksandrSiarohin/monkey-net), [git](https://github.com/AliaksandrSiarohin/video-preprocessing), [git](https://github.com/AliaksandrSiarohin/pose-evaluation), [git](https://github.com/TalkUHulk/Image-Animation-Turbo-Boost)

  • [hf](https://huggingface.co/spaces/CVPR/Image-Animation-using-Thin-Plate-Spline-Motion-Model)

  • [supp](https://cloud.tsinghua.edu.cn/f/f7b8573bb5b04583949f/?dl=1)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1DREfdpnaBhqISg0fuQlAAIwyGVn1loH_) | 07.07.2023 |
| MobileSAM | Towards Lightweight SAM for Mobile Applications (a minimal point-prompt sketch follows this entry) |

  • [Chaoning Zhang](https://github.com/ChaoningZhang)
  • [Dongshen Han](https://github.com/dongshenhan)
  • [Yu Qiao](https://github.com/qiaoyu1002)
  • [Jung Uk Kim](https://visualai.khu.ac.kr/)
  • others
  • [Sung-Ho Bae](https://scholar.google.com/citations?user=EULut5oAAAAJ)
  • [Seungkyu Lee](https://scholar.google.com/citations?user=3Pf6C6cAAAAJ)
  • [Choong Seon Hong](https://scholar.google.com/citations?user=oKANWloAAAAJ)

| [![](https://img.shields.io/github/stars/ChaoningZhang/MobileSAM?style=social)](https://github.com/ChaoningZhang/MobileSAM)

  • [arxiv](https://arxiv.org/abs/2306.14289)

  • [git](https://github.com/jolibrain/joliGEN), [git](https://github.com/akbartus/MobileSAM-in-the-Browser), [git](https://github.com/qiaoyu1002/Inpaint-Anything), [git](https://github.com/qiaoyu1002/Personalize-SAM), [git](https://github.com/Jumpat/SegmentAnythingin3D), [git](https://github.com/vietanhdev/anylabeling), [git](https://github.com/wangsssky/SonarSAM), [git](https://github.com/continue-revolution/sd-webui-segment-anything)

  • [twitter](https://twitter.com/_akhaliq/status/1674410573075718145)

  • [yt](https://youtu.be/eTEfq_kWabQ)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ChaoningZhang/MobileSAM/blob/master/notebooks/predictor_example.ipynb) | 30.06.2023 |
| Grounding DINO | Marrying DINO with Grounded Pre-Training for Open-Set Object Detection (usage sketch below) |

  • [Shilong Liu](https://github.com/SlongLiu)
  • [Zhaoyang Zeng](https://scholar.google.com/citations?user=U_cvvUwAAAAJ)
  • [Tianhe Ren](https://rentainhe.github.io/)
  • [Feng Li](https://scholar.google.com/citations?user=ybRe9GcAAAAJ)
  • others
  • [Hao Zhang](https://scholar.google.com/citations?user=B8hPxMQAAAAJ)
  • [Jie Yang](https://yangjie-cv.github.io/)
  • [Chunyuan Li](https://scholar.google.com/citations?user=Zd7WmXUAAAAJ)
  • [Jianwei Yang](https://jwyang.github.io/)
  • [Hang Su](https://www.suhangss.me/)
  • [Jun Zhu](https://scholar.google.com/citations?user=axsP38wAAAAJ)
  • [Lei Zhang](https://www.leizhang.org/)

| [![](https://img.shields.io/github/stars/IDEA-Research/GroundingDINO?style=social)](https://github.com/IDEA-Research/GroundingDINO)

  • [arxiv](https://arxiv.org/abs/2303.05499)

  • [git](https://github.com/IDEA-Research/DINO), [git](https://github.com/UX-Decoder/Semantic-SAM), [git](https://github.com/OptimalScale/DetGPT), [git](https://github.com/IDEA-Research/OpenSeeD), [git](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once), [git](https://github.com/microsoft/X-Decoder/tree/xgpt), [git](https://github.com/IDEA-Research/detrex)

  • [pwc](https://paperswithcode.com/sota/zero-shot-object-detection-on-mscoco?p=grounding-dino-marrying-dino-with-grounded), [pwc](https://paperswithcode.com/sota/zero-shot-object-detection-on-odinw?p=grounding-dino-marrying-dino-with-grounded), [pwc](https://paperswithcode.com/sota/object-detection-on-coco-minival?p=grounding-dino-marrying-dino-with-grounded), [pwc](https://paperswithcode.com/sota/object-detection-on-coco?p=grounding-dino-marrying-dino-with-grounded)

  • [yt](https://youtu.be/wxWDt5UiwY8), [yt](https://youtu.be/cMa77r3YrDk), [yt](https://youtu.be/C4NqaRBz_Kw), [yt](https://youtu.be/oEQYStnF2l8)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) | 28.06.2023 |
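A minimal zero-shot detection sketch following the repo's documented inference helpers; the config, checkpoint, and image paths are placeholders to adapt, and the text prompt separates categories with " . ":

```python
import cv2
from groundingdino.util.inference import load_model, load_image, predict, annotate

# Placeholder paths: the config ships with the repo, the checkpoint comes
# from its releases page.
model = load_model("groundingdino/config/GroundingDINO_SwinT_OGC.py",
                   "weights/groundingdino_swint_ogc.pth")
image_source, image = load_image("dog.jpg")

boxes, logits, phrases = predict(model=model, image=image,
                                 caption="dog . chair .",
                                 box_threshold=0.35, text_threshold=0.25)
annotated = annotate(image_source=image_source, boxes=boxes,
                     logits=logits, phrases=phrases)
cv2.imwrite("annotated.jpg", annotated)
```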
| T5X | Modular, composable, research-friendly framework for high-performance, configurable, self-service training, evaluation, and inference of sequence models at many scales |

  • [Adam Roberts](https://github.com/adarob)
  • [Hyung Won Chung](https://github.com/hwchung27)
  • [Anselm Levskaya](https://anselmlevskaya.com/)
  • [Gaurav Mishra](https://research.google/people/GauravMishra/)
  • others
  • [James Bradbury](https://github.com/jekbradbury)
  • [Daniel Andor](https://github.com/andorardo)
  • [Sharan Narang](https://github.com/sharannarang)
  • [Brian Lester](https://blester125.com/)
  • [Colin Gaffney](https://github.com/cpgaffney1)
  • [Afroz Mohiuddin](https://github.com/afrozenator)
  • [Curtis Hawthorne](https://github.com/cghawthorne)
  • [Aitor Lewkowycz](https://scholar.google.com/citations?user=Yum1ah0AAAAJ)
  • [Alex Salcianu](https://scholar.google.com/citations?user=HSrT1wsAAAAJ)
  • [Marc van Zee](https://github.com/marcvanzee)
  • [Jacob Austin](https://jacobaustin123.github.io/)
  • [Sebastian Goodman](https://github.com/0x0539)
  • [Livio Baldini Soares](https://liviosoares.github.io/)
  • [Haitang Hu](https://hthu.github.io/)
  • [Sasha Tsvyashchenko](https://endl.ch/)
  • [Aakanksha Chowdhery](http://www.achowdhery.com/)
  • [Jasmijn Bastings](https://jasmijn.ninja/)
  • [Jannis Bulian](http://bulian.org/)
  • [Xavier Garcia](https://scholar.google.com/citations?user=Y2Hio6MAAAAJ)
  • [Jianmo Ni](https://nijianmo.github.io/)
  • [Kathleen Kenealy](https://scholar.google.com/citations?&user=HgRBC5gAAAAJ)
  • [Jonathan Clark](http://www.cs.cmu.edu/~jhclark/)
  • [Dan Garrette](http://www.dhgarrette.com/)
  • [James Lee-Thorp](https://scholar.google.com/citations?user=qsPv098AAAAJ)
  • [Colin Raffel](https://colinraffel.com/)
  • [Noam Shazeer](https://scholar.google.com/citations?user=wsGvgA8AAAAJ)
  • [Marvin Ritter](https://scholar.google.com/citations?user=arcf5FgAAAAJ)
  • [Maarten Bosma](https://scholar.google.com/citations?user=wkeFQPgAAAAJ)
  • [Alexandre Passos](https://www.ic.unicamp.br/~tachard/)
  • [Jeremy Maitin-Shepard](https://research.google/people/JeremyMaitinShepard/)
  • [Noah Fiedel](https://scholar.google.com/citations?user=XWpV9DsAAAAJ)
  • [Brennan Saeta](https://github.com/saeta)
  • [Ryan Sepassi](https://ryansepassi.com/)
  • [Alexander Spiridonov](https://research.google/people/AlexanderSpiridonov/)
  • [Joshua Newlan](https://github.com/joshnewlan)
  • [Andrea Gesmundo](https://github.com/agesmundo)

| [![](https://img.shields.io/github/stars/google-research/t5x?style=social)](https://github.com/google-research/t5x)

  • [arxiv](https://arxiv.org/abs/2203.17189), [arxiv](https://arxiv.org/abs/1910.10683)

  • [docs](https://t5x.readthedocs.io/en/latest/)

  • [git](https://github.com/tensorflow/mesh), [git](https://github.com/tensorflow/serving)

  • [tf](https://www.tensorflow.org/datasets/catalog/wmt_t2t_translate), [tf](https://www.tensorflow.org/guide/data), [tf](https://www.tensorflow.org/tensorboard)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google-research/t5x/blob/main/t5x/notebooks/introduction.ipynb) | 27.06.2023 |
| CodeTalker | Cast speech-driven facial animation as a code query task in a finite proxy space of the learned codebook, which effectively promotes the vividness of the generated motions by reducing the cross-modal mapping uncertainty |

  • [Jinbo Xing](https://doubiiu.github.io/)
  • [Menghan Xia](https://menghanxia.github.io/)
  • [Yuechen Zhang](https://julianjuaner.github.io/)
  • [Xiaodong Cun](https://vinthony.github.io/academic/)
  • others
  • [Jue Wang](https://juewang725.github.io/)
  • [Tien-Tsin Wong](https://ttwong12.github.io/myself.html)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52729.2023.01229)](https://doi.org/10.1109/CVPR52729.2023.01229) [![](https://img.shields.io/github/stars/Doubiiu/CodeTalker?style=social)](https://github.com/Doubiiu/CodeTalker)

  • [arxiv](https://arxiv.org/abs/2301.02379), [arxiv](https://arxiv.org/abs/2303.09797)

  • [git](https://github.com/MPI-IS/mesh), [git](https://github.com/TimoBolkart/voca/tree/master/template), [git](https://github.com/EvelynFan/FaceFormer), [git](https://github.com/RenYurui/PIRender), [git](https://github.com/OpenTalker/StyleHEAT), [git](https://github.com/Meta-Portrait/MetaPortrait)

  • [project](https://doubiiu.github.io/projects/codetalker/)

  • [yt](https://youtu.be/J2RngmuYrG4)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Doubiiu/CodeTalker/blob/main/demo.ipynb) | 16.06.2023 |
| First Order Motion Model for Image Animation | Transferring facial movements from video to image | [Aliaksandr Siarohin](https://aliaksandrsiarohin.github.io/aliaksandr-siarohin-website/) | [![](https://img.shields.io/github/stars/AliaksandrSiarohin/first-order-model?style=social)](https://github.com/AliaksandrSiarohin/first-order-model)

  • [neurips](https://papers.nips.cc/paper/2019/hash/31c0b36aef265d9221af80872ceb62f9-Abstract.html)

  • [project](https://aliaksandrsiarohin.github.io/first-order-model-website/)

  • [yt](https://www.youtube.com/watch?v=u-0cQ-grXBQ)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/AliaksandrSiarohin/first-order-model/blob/master/demo.ipynb) | 04.06.2023 |
| Parallel WaveGAN | State-of-the-art non-autoregressive models to build your own great vocoder | [Tomoki Hayashi](https://kan-bayashi.github.io/) | [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICASSP40776.2020.9053795)](https://doi.org/10.1109/ICASSP40776.2020.9053795) [![](https://img.shields.io/github/stars/kan-bayashi/ParallelWaveGAN?style=social)](https://github.com/kan-bayashi/ParallelWaveGAN)

  • [arxiv](https://arxiv.org/abs/1910.11480), [arxiv](https://arxiv.org/abs/1910.06711), [arxiv](https://arxiv.org/abs/2005.05106)

  • [demo](https://kan-bayashi.github.io/ParallelWaveGAN/)

  • [git](https://github.com/NVIDIA/tacotron2), [git](https://github.com/espnet/espnet)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/espnet/notebook/blob/master/espnet2_tts_realtime_demo.ipynb) | 01.06.2023 |
| ECON | Designed for "human digitization from a color image", ECON combines the best properties of implicit and explicit representations to infer high-fidelity 3D clothed humans from in-the-wild images, even with loose clothing or in challenging poses |

  • [Yuliang Xiu](https://xiuyuliang.cn/)
  • [Jinlong Yang](https://is.mpg.de/~jyang)
  • [Xu Cao](https://xucao-42.github.io/homepage/)
  • [Dimitrios Tzionas](https://ps.is.mpg.de/~dtzionas)
  • [Michael Black](https://ps.is.mpg.de/~black)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52729.2023.00057)](https://doi.org/10.1109/CVPR52729.2023.00057) [![](https://img.shields.io/github/stars/YuliangXiu/ECON?style=social)](https://github.com/YuliangXiu/ECON)

  • [arxiv](https://arxiv.org/abs/2212.07422)

  • [discord](https://discord.gg/Vqa7KBGRyk)

  • [docker](https://github.com/YuliangXiu/ECON/blob/master/docs/installation-docker.md)

  • [git](https://github.com/kwan3854/CEB_ECON), [git](https://github.com/xucao-42/bilateral_normal_integration), [git](https://github.com/Project-Splinter/MonoPortDataset), [git](https://github.com/huangyangyi/TeCH), [git](https://github.com/vchoutas/smplx), [git](https://github.com/yfeng95/PIXIE)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/1451sjr/econ_explicit_clothed_humans_optimized_via_normal/)

  • [twitter](https://twitter.com/yuliangxiu)

  • [yt](https://youtu.be/sbWZbTf6ZYk), [yt](https://youtu.be/SDVfCeaI4AY), [yt](https://youtu.be/5PEd_p90kS0), [yt](https://youtu.be/MDFvV7y5Qgk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1YRgwoRCZIrSB2e7auEWFyG10Xzjbrbno) | 31.05.2023 |
| MMS | The Massively Multilingual Speech project expands speech technology from about 100 to over 1,000 languages: a single multilingual speech recognition model covering more than 1,100 languages, language identification for over 4,000 languages, pretrained models supporting over 1,400 languages, and text-to-speech for over 1,100 languages (usage sketch below) |

  • [Vineel Pratap](https://github.com/vineelpratap)
  • [Andros Tjandra](https://github.com/androstj)
  • [Bowen Shi](https://scholar.google.com/citations?user=xqyoorYAAAAJ)
  • [Paden Tomasello](https://scholar.google.com/citations?user=sBtWMGYAAAAJ)
  • others
  • [Arun Babu](https://scholar.google.com/citations?user=oJfoTakAAAAJ)
  • [Sayani Kundu](https://www.linkedin.com/in/sayani-kundu)
  • [Ali Elkahky](https://scholar.google.com/citations?user=KB3S8RoAAAAJ)
  • [Zhaoheng Ni](https://scholar.google.com/citations?user=SYFMSNsAAAAJ)
  • [Apoorv Vyas](https://apoorv2904.github.io/)
  • [Maryam Fazel-Zarandi](https://www.maryamfazel.com/)
  • [Alexei Baevski](https://github.com/alexeib)
  • [Yossi Adi](https://www.cs.huji.ac.il/~adiyoss/)
  • [Xiaohui Zhang](https://github.com/xiaohui-zhang)
  • [Wei-Ning Hsu](https://wnhsu.github.io/)
  • [Alexis Conneau](https://github.com/aconneau)
  • [Michael Auli](https://github.com/michaelauli)

| [![](https://img.shields.io/github/stars/facebookresearch/fairseq?style=social)](https://github.com/facebookresearch/fairseq/tree/main/examples/mms)

  • [arxiv](https://arxiv.org/abs/2305.13516)

  • [hf](https://huggingface.co/docs/transformers/main/en/model_doc/mms), [hf](https://huggingface.co/facebook/mms-cclms/), [hf](https://huggingface.co/blog/mms_adapters)

  • [meta](https://ai.facebook.com/blog/multilingual-model-speech-recognition/)

  • [yt](https://youtu.be/GEzxHxWys2s), [yt](https://youtu.be/g06agCmxS7I)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/fairseq/blob/main/examples/mms/asr/tutorial/MMS_ASR_Inference_Colab.ipynb) | 26.05.2023 |
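A sketch of MMS speech recognition through the Hugging Face Transformers integration linked above; the adapter-switching calls follow those docs, and the one-second silent waveform is a placeholder for real 16 kHz mono audio:

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, AutoProcessor

model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Swap in the French adapter (MMS loads small per-language adapter weights).
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

waveform = np.zeros(16_000, dtype=np.float32)  # placeholder: 1 s at 16 kHz
inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(ids))
```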
| FAB | Flow AIS Bootstrap uses AIS to generate samples in regions where the flow is a poor approximation of the target, facilitating the discovery of new modes |

  • [Laurence Midgley](https://lollcat.github.io/laurence-midgley/)
  • [Vincent Stimper](https://is.mpg.de/person/vstimper)
  • [Gregor N. C. Simm](https://www.gncs.me/)
  • [Bernhard Schölkopf](https://scholar.google.com/citations?user=DZ-fHPgAAAAJ)
  • [José Miguel Hernández-Lobato](https://jmhl.org/)

| [![](https://img.shields.io/github/stars/lollcat/fab-torch?style=social)](https://github.com/lollcat/fab-torch)

  • [arxiv](https://arxiv.org/abs/2208.01893)

  • [git](https://github.com/lollcat/fab-jax-old), [git](https://github.com/deepmind/annealed_flow_transport)

  • [yt](https://youtu.be/xQQXvOWu9nE)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/lollcat/fab-torch/blob/master/experiments/gmm/fab_gmm.ipynb) | 29.04.2023 |
| CodeFormer | Transformer-based prediction network that models the global composition and context of low-quality faces for code prediction, enabling the discovery of natural faces that closely approximate the targets even when the inputs are severely degraded |

  • [Shangchen Zhou](https://shangchenzhou.com/)
  • [Kelvin Chan](https://ckkelvinchan.github.io/)
  • [Chongyi Li](https://li-chongyi.github.io/)
  • [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/)

| [![](https://img.shields.io/github/stars/sczhou/CodeFormer?style=social)](https://github.com/sczhou/CodeFormer)

  • [arxiv](https://arxiv.org/abs/2206.11253)

  • [git](https://github.com/samb-t/unleashing-transformers), [git](https://github.com/deepcam-cn/yolov5-face), [git](https://github.com/xinntao/facexlib)

  • [neurips](https://proceedings.neurips.cc/paper_files/paper/2022/hash/c573258c38d0a3919d8c1364053c45df-Abstract-Conference.html)

  • [project](https://shangchenzhou.com/projects/CodeFormer/)

  • [yt](https://youtu.be/d3VDpkXlueI), [yt](https://youtu.be/PtwWu-FugbA), [yt](https://youtu.be/ORtYP8NW4T0), [yt](https://youtu.be/xc5lKOKBCcg)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1m52PNveE4PBhYrecj34cnpEeiHcC5LTb) | 21.04.2023 |
| Text2Video-Zero | Text-to-Image Diffusion Models are Zero-Shot Video Generators (usage sketch below) |

  • [Levon Khachatryan](https://github.com/lev1khachatryan)
  • [Andranik Movsisyan](https://github.com/19and99)
  • [Vahram Tadevosyan](https://www.linkedin.com/in/vtadevosian)
  • [Roberto Henschel](https://github.com/rob-hen)
  • others
  • [Zhangyang Wang](https://www.ece.utexas.edu/people/faculty/atlas-wang)
  • [Shant Navasardyan](https://scholar.google.com/citations?user=VJSh59sAAAAJ)
  • [Humphrey Shi](https://www.humphreyshi.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV51070.2023.01462)](https://doi.org/10.1109/ICCV51070.2023.01462) [![](https://img.shields.io/github/stars/Picsart-AI-Research/Text2Video-Zero?style=social)](https://github.com/Picsart-AI-Research/Text2Video-Zero)

  • [arxiv](https://arxiv.org/abs/2303.13439), [arxiv](https://arxiv.org/abs/1907.01341), [arxiv](https://arxiv.org/abs/2303.17604)

  • [git](https://github.com/dbolya/tomesd), [git](https://github.com/JiauZhang/Text2Video-Zero), [git](https://github.com/camenduru/text2video-zero-colab), [git](https://github.com/SHI-Labs/Text2Video-Zero-sd-webui)

  • [hf](https://huggingface.co/docs/diffusers/api/pipelines/text_to_video_zero)

  • [project](https://text2video-zero.github.io/)

  • [video](https://www.dropbox.com/s/uv90mi2z598olsq/Text2Video-Zero.MP4)

  • [yt](https://youtu.be/beeDJJz-Q0A), [yt](https://youtu.be/97-1GYPtz0M)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/text2video-zero-colab/blob/main/text2video_all.ipynb) | 11.04.2023 |
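A minimal sketch via the diffusers pipeline linked above (hf); the Stable Diffusion checkpoint id and prompt follow that documentation, and a CUDA device is assumed:

```python
import imageio
import torch
from diffusers import TextToVideoZeroPipeline

# Plain Stable Diffusion weights act as the zero-shot video generator.
pipe = TextToVideoZeroPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
result = pipe(prompt="A panda is playing guitar on times square").images
frames = [(frame * 255).astype("uint8") for frame in result]
imageio.mimsave("video.mp4", frames, fps=4)
```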
| Segment Anything | The Segment Anything Model produces high-quality object masks from input prompts such as points or boxes, and can generate masks for all objects in an image (usage sketch below) |

  • [Alexander Kirillov](https://alexander-kirillov.github.io/)
  • [Eric Mintun](https://ericmintun.github.io/)
  • [Nikhila Ravi](https://nikhilaravi.com/)
  • [Hanzi Mao](https://hanzimao.me/)
  • others
  • [Chloé Rolland](https://scholar.google.com/citations?user=n-SnMhoAAAAJ)
  • [Laura Gustafson](https://scholar.google.com/citations?user=c8IpF9gAAAAJ)
  • [Tete Xiao](https://tetexiao.com/)
  • [Spencer Whitehead](https://www.spencerwhitehead.com/)
  • [Alex Berg](http://acberg.com/)
  • [Wan-Yen Lo](https://github.com/wanyenlo)
  • [Piotr Dollár](https://pdollar.github.io/)
  • [Ross Girshick](https://www.rossgirshick.info/)

| [![](https://img.shields.io/github/stars/facebookresearch/segment-anything?style=social)](https://github.com/facebookresearch/segment-anything)

  • [arxiv](https://arxiv.org/abs/2304.02643)

  • [data](https://ai.facebook.com/datasets/segment-anything/)

  • [meta](https://ai.facebook.com/research/publications/segment-anything/), [meta](https://ai.facebook.com/blog/segment-anything-foundation-model-image-segmentation/)

  • [website](https://segment-anything.com/)

  • [yt](https://youtu.be/2O_vecl28OA), [yt](https://youtu.be/fVeW9a6wItM), [yt](https://youtu.be/FjYE0tKWOiY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/segment-anything/blob/main/notebooks/predictor_example.ipynb) | 10.04.2023 |
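A minimal promptable-mask sketch following the repo's documented predictor usage; the checkpoint filename is the released ViT-H weight, and the click coordinates are placeholders:

```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Download the checkpoint from the repo's model zoo first.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("image.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),  # one foreground click (x, y)
    point_labels=np.array([1]),           # 1 = foreground, 0 = background
    multimask_output=True,                # return 3 candidate masks
)
```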
| FollowYourPose | Two-stage training scheme that can utilize image-pose pairs and pose-free video datasets together with the pre-trained text-to-image model to obtain pose-controllable character videos |

  • [Yue Ma](https://mayuelala.github.io/)
  • [Yingqing He](https://yingqinghe.github.io/)
  • [Xiaodong Cun](https://vinthony.github.io/academic/)
  • [Xintao Wang](https://xinntao.github.io/)
  • others
  • [Siran Chen](https://github.com/Sranc3)
  • [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ)
  • [Xiu Li](https://scholar.google.com/citations?user=Xrh1OIUAAAAJ)
  • [Qifeng Chen](https://cqf.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1609/aaai.v38i5.28206)](https://doi.org/10.1609/aaai.v38i5.28206) [![](https://img.shields.io/github/stars/mayuelala/FollowYourPose?style=social)](https://github.com/mayuelala/FollowYourPose)

  • [arxiv](https://arxiv.org/abs/2304.01186), [arxiv](https://arxiv.org/abs/2112.10752)

  • [git](https://github.com/bryandlee/Tune-A-Video)

  • [hf](https://huggingface.co/YueMafighting/FollowYourPose_v1/tree/main), [hf](https://huggingface.co/CompVis/stable-diffusion-v1-4)

  • [project](https://follow-your-pose.github.io/)

  • [git](https://github.com/mayuelala)

  • [video](https://underline.io/lecture/91712-follow-your-pose-pose-guided-text-to-video-generation-using-pose-free-videos)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/mayuelala/FollowYourPose/blob/main/quick_demo.ipynb) | 07.04.2023 |
| EVA3D | High-quality unconditional 3D human generative model that only requires 2D image collections for training |

  • [Fangzhou Hong](https://hongfz16.github.io/)
  • [Zhaoxi Chen](https://frozenburning.github.io/)
  • [Yushi Lan](https://github.com/NIRVANALAN)
  • [Liang Pan](https://github.com/paul007pl)
  • [Ziwei Liu](https://liuziwei7.github.io/)

| [![](https://img.shields.io/github/stars/hongfz16/EVA3D?style=social)](https://github.com/hongfz16/EVA3D)

  • [arxiv](https://arxiv.org/abs/2210.04888)

  • [project](https://hongfz16.github.io/projects/EVA3D.html)

  • [yt](https://youtu.be/JNV0FJ0aDWM), [yt](https://youtu.be/M-kyvzTQrBI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/hongfz16/EVA3D/blob/main/notebook/EVA3D_Demo.ipynb) | 06.04.2023 |
| Stable Dreamfusion | Using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis (toy gradient sketch below) |

  • [Jiaxiang Tang](https://me.kiui.moe/)
  • [Ben Poole](https://cs.stanford.edu/~poole/)
  • [Ajay Jain](https://ajayj.com/)
  • [Jon Barron](https://jonbarron.info/)
  • [Ben Mildenhall](https://bmild.github.io/)

| [![](https://img.shields.io/github/stars/ashawkey/stable-dreamfusion?style=social)](https://github.com/ashawkey/stable-dreamfusion)

  • [arxiv](https://arxiv.org/abs/2209.14988)

  • [git](https://github.com/ashawkey/torch-ngp), [git](https://github.com/hoffstadt/DearPyGui)

  • [hf](https://huggingface.co/runwayml/stable-diffusion-v1-5)

  • [project](https://dreamfusion3d.github.io/)

  • [pt](https://pytorch.org/docs/stable/cpp_extension.html#torch.utils.cpp_extension.load)

  • [yt](https://youtu.be/uM5NPodZZ1U?t=219), [yt](https://youtu.be/zWD5ZR5GtJM), [yt](https://youtu.be/L3G0dx1Q0R8), [yt](https://youtu.be/dIgDbBTztUM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1MXT3yfOFvO0ooKEfiUUvTKwUkrrlCHpF) | 04.04.2023 |
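The load-bearing idea is score distillation: treat the frozen diffusion model's noise-prediction residual as a pixel-space gradient and push it back through the differentiable renderer. A toy, self-contained sketch of that gradient; `denoiser` is a zero stub standing in for the real text-conditioned UNet, and the timestep range and weighting are illustrative:

```python
import torch

def sds_pixel_grad(denoiser, rendered, alphas_cumprod):
    """Score-distillation gradient w.r.t. a rendered image (toy version)."""
    t = torch.randint(20, 980, (1,))
    a = alphas_cumprod[t].view(1, 1, 1, 1)
    eps = torch.randn_like(rendered)
    noisy = a.sqrt() * rendered + (1 - a).sqrt() * eps
    with torch.no_grad():
        eps_pred = denoiser(noisy, t)      # frozen denoiser, no backprop through it
    return (1 - a) * (eps_pred - eps)      # w(t) * residual

# Stub denoiser and a "rendered" image standing in for a NeRF rendering whose
# parameters we actually want to update.
denoiser = lambda x, t: torch.zeros_like(x)
alphas_cumprod = torch.linspace(0.9999, 0.01, 1000)
rendered = torch.rand(1, 3, 64, 64, requires_grad=True)
rendered.backward(gradient=sds_pixel_grad(denoiser, rendered, alphas_cumprod))
```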
| PIFuHD | Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization |

  • [Shunsuke Saito](https://shunsukesaito.github.io/)
  • [Tomas Simon](http://www.cs.cmu.edu/~tsimon/)
  • [Jason Saragih](https://scholar.google.com/citations?user=ss-IvjMAAAAJ)
  • [Hanbyul Joo](https://jhugestar.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00016)](https://doi.org/10.1109/CVPR42600.2020.00016) [![](https://img.shields.io/github/stars/facebookresearch/pifuhd?style=social)](https://github.com/facebookresearch/pifuhd)

  • [arxiv](https://arxiv.org/abs/2004.00452)

  • [yt](https://youtu.be/uEDqCxvF5yc), [yt](https://www.youtube.com/watch?v=8qnwbbDS8xk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/11z58bl3meSzo6kFqkahMa35G5jmh2Wgt) | 26.03.2023 |
| VideoReTalking | System to edit the faces of a real-world talking head video according to input audio, producing a high-quality and lip-syncing output video even with a different emotion |

  • [Kun Cheng](https://github.com/kunncheng)
  • [Xiaodong Cun](https://vinthony.github.io/)
  • [Yong Zhang](https://yzhang2016.github.io/)
  • [Menghan Xia](https://menghanxia.github.io/)
  • others
  • [Fei Yin](https://feiiyin.github.io/)
  • [Mingrui Zhu](https://web.xidian.edu.cn/mrzhu/en/index.html)
  • [Xuan Wang](https://xuanwangvc.github.io/)
  • [Jue Wang](https://juewang725.github.io/)
  • [Nannan Wang](https://web.xidian.edu.cn/nnwang/en/index.html)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3550469.3555399)](https://doi.org/10.1145/3550469.3555399) [![](https://img.shields.io/github/stars/OpenTalker/video-retalking?style=social)](https://github.com/OpenTalker/video-retalking)

  • [arxiv](https://arxiv.org/abs/2211.14758)

  • [git](https://github.com/donydchen/ganimation_replicate), [git](https://github.com/RenYurui/PIRender), [git](https://github.com/OpenTalker/StyleHEAT), [git](https://github.com/FeiiYin/SPI)

  • [medium](https://xthemadgenius.medium.com/making-videos-talk-right-syncing-lips-with-sound-using-videoretalking-611428084bbc)

  • [project](https://opentalker.github.io/video-retalking/)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/178krha/videoretalking/)

  • [yt](https://youtu.be/pttsTrQ-fko), [yt](https://youtu.be/2Lkw8AmmRn0), [yt](https://youtu.be/RJ8YK_K4Ne0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/vinthony/video-retalking/blob/main/quick_demo.ipynb) | 19.03.2023 |
| Visual ChatGPT | Connects ChatGPT and a series of Visual Foundation Models to enable sending and receiving images during chatting |

  • [Chenfei Wu](https://github.com/chenfei-wu)
  • [Shengming Yin](https://github.com/shengming-yin)
  • [Weizhen Qi](https://github.com/WeizhenQ)
  • [Xiaodong Wang](https://wang-xiaodong1899.github.io/)
  • others
  • [Zecheng Tang](https://github.com/CODINNLG)
  • [Nan Duan](https://nanduan.github.io/)

| [![](https://img.shields.io/github/stars/microsoft/visual-chatgpt?style=social)](https://github.com/microsoft/visual-chatgpt)

  • [arxiv](https://arxiv.org/abs/2303.04671)

  • [git](https://github.com/hwchase17/langchain), [git](https://github.com/lllyasviel/ControlNet), [git](https://github.com/timothybrooks/instruct-pix2pix), [git](https://github.com/timojl/clipseg)

  • [yt](https://youtu.be/0UfXlFUwLms), [yt](https://youtu.be/7YEiEyfPF5U)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/11BtP3h-w0dZjA-X8JsS9_eo8OeGYvxXB) | 15.03.2023 |
| Tune-A-Video | One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation |

  • [Jay Zhangjie Wu](https://zhangjiewu.github.io/)
  • [Yixiao Ge](https://geyixiao.com/)
  • [Xintao Wang](https://xinntao.github.io/)
  • [Stan Weixian Lei](https://github.com/StanLei52)
  • others
  • [Yuchao Gu](https://ycgu.site/)
  • [Yufei Shi](https://scholar.google.com/citations?user=rpnlkwEAAAAJ)
  • [Wynne Hsu](https://www.comp.nus.edu.sg/~whsu/)
  • [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ)
  • [Xiaohu Qie](https://scholar.google.com/citations?user=mk-F69UAAAAJ)
  • [Mike Zheng Shou](https://sites.google.com/view/showlab)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV51070.2023.00701)](https://doi.org/10.1109/ICCV51070.2023.00701) [![](https://img.shields.io/github/stars/showlab/Tune-A-Video?style=social)](https://github.com/showlab/Tune-A-Video)

  • [arxiv](https://arxiv.org/abs/2212.11565), [arxiv](https://arxiv.org/abs/2112.10752)

  • [hf](https://huggingface.co/Tune-A-Video-library), [hf](https://huggingface.co/stabilityai/stable-diffusion-2-1), [hf](https://huggingface.co/sd-dreambooth-library)

  • [project](https://tuneavideo.github.io/)

  • [yt](https://youtu.be/uzF6CTtjn-g), [yt](https://youtu.be/uUlp1_ExsGQ)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/showlab/Tune-A-Video/blob/main/notebooks/Tune-A-Video.ipynb) | 23.02.2023 |
| GPEN | GAN Prior Embedded Network for Blind Face Restoration in the Wild |

  • [Tao Yang](https://cg.cs.tsinghua.edu.cn/people/~tyang/)
  • [Peiran Ren](https://scholar.google.com/citations?&user=x5dEuxsAAAAJ)
  • [Xuansong Xie](https://scholar.google.com/citations?user=M0Ei1zkAAAAJ)
  • [Lei Zhang](http://www4.comp.polyu.edu.hk/~cslzhang/)

| [![](https://img.shields.io/github/stars/yangxy/GPEN?style=social)](https://github.com/yangxy/GPEN)

  • [arxiv](https://arxiv.org/abs/2105.06070)

  • [demo](https://vision.aliyun.com/experience/detail?spm=a211p3.14020179.J_7524944390.17.66cd4850wVDkUQ&tagName=facebody&children=EnhanceFace)

  • [git](https://github.com/biubug6/Pytorch_Retinaface), [git](https://github.com/rosinality/stylegan2-pytorch)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/yangxy/GPEN/blob/main/GPEN.ipynb) | 15.02.2023 |
| PyMAF-X | Regression-based approach to recovering parametric full-body models from monocular images |

  • [Hongwen Zhang](https://hongwenzhang.github.io/)
  • [Yating Tian](https://github.com/tinatiansjz)
  • [Yuxiang Zhang](https://zhangyux15.github.io/)
  • [Mengcheng Li](https://github.com/Dw1010)
  • others
  • [Liang An](https://anl13.github.io/)
  • [Zhenan Sun](http://www.cbsr.ia.ac.cn/users/znsun/)
  • [Yebin Liu](https://www.liuyebin.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/TPAMI.2023.3271691)](https://doi.org/10.1109/TPAMI.2023.3271691) [![](https://img.shields.io/github/stars/HongwenZhang/PyMAF-X?style=social)](https://github.com/HongwenZhang/PyMAF-X)

  • [arxiv](https://arxiv.org/abs/2207.06400)

  • [git](https://github.com/HongwenZhang/DaNet-DensePose2SMPL), [git](https://github.com/facebookresearch/DensePose), [git](https://github.com/Microsoft/human-pose-estimation.pytorch), [git](https://github.com/microsoft/MeshGraphormer), [git](https://github.com/leoxiaobin/deep-high-resolution-net.pytorch)

  • [project](https://www.liuyebin.com/pymaf-x/)

  • [yt](https://youtu.be/ylOB0wCeV34)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/13Iytx1Hb0ZryEwbJdpXBW9ggDxs2Y-tL) | 14.02.2023 |
| Disco Diffusion | A frankensteinian amalgamation of notebooks, models and techniques for the generation of AI Art and Animations |

  • [Max Ingham](https://github.com/somnai-dreams)
  • [Adam Letts](https://linktr.ee/gandamu)
  • [Daniel Russell](https://github.com/russelldc)
  • [Chigozie Nri](https://github.com/chigozienri)

| [![](https://img.shields.io/github/stars/alembics/disco-diffusion?style=social)](https://github.com/alembics/disco-diffusion)

  • [git](https://github.com/openai/guided-diffusion)

  • [yt](https://youtu.be/_DtWfh9oS54), [yt](https://youtu.be/gWxmtdZL8FE), [yt](https://youtu.be/yVJB6oD0_gM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/alembics/disco-diffusion/blob/main/Disco_Diffusion.ipynb) | 11.02.2023 |
| GrooVAE | Some applications of machine learning for generating and manipulating beats and drum performances |

  • [Jon Gillick](https://www.jongillick.com/)
  • [Adam Roberts](https://github.com/adarob)
  • [Jesse Engel](https://github.com/jesseengel)

| [![](https://img.shields.io/github/stars/magenta/magenta?style=social)](https://github.com/magenta/magenta/tree/main/magenta/models/music_vae)

  • [arxiv](https://arxiv.org/abs/1905.06118)

  • [blog post](https://g.co/magenta/groovae)

  • [data](https://g.co/magenta/groove-datasets)

  • [web app](https://groove-drums.glitch.me/)

  • [yt](https://www.youtube.com/watch?v=x2YLmXzovDo)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/magenta-demos/blob/master/colab-notebooks/GrooVAE.ipynb) | 02.02.2023 |
| Multitrack MusicVAE | The models in this notebook are capable of encoding and decoding single measures of up to 8 tracks, optionally conditioned on an underlying chord |

  • [Ian Simon](https://github.com/iansimon)
  • [Adam Roberts](https://github.com/adarob)
  • [Colin Raffel](https://colinraffel.com/)
  • [Jesse Engel](https://github.com/jesseengel)
  • others
  • [Curtis Hawthorne](https://github.com/cghawthorne)
  • [Douglas Eck](https://github.com/douglaseck)

|

  • [arxiv](https://arxiv.org/abs/1806.00195)

  • [blog post](http://g.co/magenta/multitrack)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/magenta/magenta-demos/blob/master/colab-notebooks/Multitrack_MusicVAE.ipynb) | 02.02.2023 |
| MusicVAE | A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music (sampling sketch below) |

  • [Adam Roberts](https://github.com/adarob)
  • [Jesse Engel](https://github.com/jesseengel)
  • [Colin Raffel](https://colinraffel.com/)
  • [Curtis Hawthorne](https://github.com/cghawthorne)
  • [Douglas Eck](https://github.com/douglaseck)

|

  • [arxiv](https://arxiv.org/abs/1803.05428)

  • [blog post](https://g.co/magenta/music-vae)

  • [project](https://magenta.tensorflow.org/music-vae)

  • [yt](https://www.youtube.com/playlist?list=PLBUMAYA6kvGU8Cgqh709o5SUvo-zHGTxr)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/magenta/magenta-demos/blob/master/colab-notebooks/MusicVAE.ipynb) | 02.02.2023 |
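A hedged sampling sketch against the magenta package; the `cat-mel_2bar_big` config name comes from the released pretrained models, while the checkpoint path, sample count, and length are placeholders:

```python
import note_seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

# 'cat-mel_2bar_big' is one of the released 2-bar melody configs; point
# checkpoint_dir_or_path at the downloaded weights.
config = configs.CONFIG_MAP['cat-mel_2bar_big']
model = TrainedModel(config, batch_size=4,
                     checkpoint_dir_or_path='cat-mel_2bar_big.ckpt')

samples = model.sample(n=2, length=32, temperature=1.0)  # NoteSequences
note_seq.sequence_proto_to_midi_file(samples[0], 'sample.mid')
```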
| Learning to Paint | Learning to Paint With Model-based Deep Reinforcement Learning | [Manuel Romero](https://mrm8488.github.io/) | [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV.2019.00880)](https://doi.org/10.1109/ICCV.2019.00880)

  • [arxiv](https://arxiv.org/abs/1903.04411)

  • [reddit](https://www.reddit.com/r/reinforcementlearning/comments/b5lpfl/learning_to_paint_with_modelbased_deep/)

  • [yt](https://www.youtube.com/watch?v=YmOgKZ5oipk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/custom_learningtopaint.ipynb) | 01.02.2023 |
| Instant-NGP | Instant Neural Graphics Primitives with a Multiresolution Hash Encoding |

  • [Thomas Müller](https://tom94.net/)
  • [Alex Evans](https://research.nvidia.com/person/alex-evans)
  • [Christoph Schied](https://research.nvidia.com/person/christoph-schied)
  • [Alexander Keller](https://research.nvidia.com/person/alex-keller)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3528223.3530127)](https://doi.org/10.1145/3528223.3530127) [![](https://img.shields.io/github/stars/NVlabs/instant-ngp?style=social)](https://github.com/NVlabs/instant-ngp)

  • [arxiv](https://arxiv.org/abs/2201.05989)

  • [blog post](https://developer.nvidia.com/blog/getting-started-with-nvidia-instant-nerfs/)

  • [git](https://github.com/NVlabs/tiny-cuda-nn), [git](https://github.com/IDLabMedia/large-lightfields-dataset), [git](https://github.com/nickponline/dd-nerf-dataset), [git](https://github.com/ocornut/imgui), [git](https://github.com/nothings/stb)

  • [project](https://nvlabs.github.io/instant-ngp/)

  • [tutorial](https://www.nvidia.com/en-us/on-demand/session/siggraph2022-sigg22-s-16/)

  • [yt](https://youtu.be/j8tMk-GE8hY), [yt](https://youtu.be/8GbENSmdVeE), [yt](https://youtu.be/DJ2hcC1orc4), [yt](https://youtu.be/z3-fjYzd0BA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/NVlabs/instant-ngp/blob/master/notebooks/instant_ngp.ipynb) | 18.01.2023 |
| Fourier Feature Networks | Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains (minimal sketch below) |

  • [Matthew Tancik](https://www.matthewtancik.com/)
  • [Pratul Srinivasan](https://pratulsrinivasan.github.io/)
  • [Ben Mildenhall](https://bmild.github.io/)
  • [Sara Fridovich-Keil](https://people.eecs.berkeley.edu/~sfk/)
  • others
  • [Nithin Raghavan](https://cseweb.ucsd.edu//~n2raghavan/)
  • [Utkarsh Singhal](https://scholar.google.com/citations?user=lvA86MYAAAAJ)
  • [Ravi Ramamoorthi](https://cseweb.ucsd.edu//~ravir/)
  • [Jon Barron](https://jonbarron.info/)
  • [Ren Ng](https://www2.eecs.berkeley.edu/Faculty/Homepages/yirenng.html)

| [![](https://img.shields.io/github/stars/tancik/fourier-feature-networks?style=social)](https://github.com/tancik/fourier-feature-networks)

  • [arxiv](https://arxiv.org/abs/1806.07572)

  • [neurips](https://proceedings.neurips.cc/paper/2020/hash/55053683268957697aa39fba6f231c68-Abstract.html), [neurips](https://papers.nips.cc/paper/2007/hash/013a006f03dbc5392effeb8f18fda755-Abstract.html)

  • [project](https://bmild.github.io/fourfeat/)

  • [yt](https://youtu.be/nVA6K6Sn2S4)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tancik/fourier-feature-networks/blob/master/Demo.ipynb) | 17.01.2023 |
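The core mapping is small enough to inline: project low-dimensional coordinates through fixed random Gaussian frequencies before the MLP so it can fit high-frequency signal. A self-contained PyTorch sketch (the 2-D input, 256 frequencies, and sigma = 10 are illustrative choices, not the paper's exact configs):

```python
import math
import torch

def fourier_features(v, B):
    """v: (N, d) coordinates; B: (m, d) fixed random frequency matrix."""
    proj = 2 * math.pi * v @ B.T                                  # (N, m)
    return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)  # (N, 2m)

B = 10.0 * torch.randn(256, 2)        # sigma = 10 sets the frequency bandwidth
coords = torch.rand(1024, 2)          # e.g. normalized pixel coordinates
mlp = torch.nn.Sequential(
    torch.nn.Linear(512, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 3))          # predict RGB per coordinate
rgb = mlp(fourier_features(coords, B))
```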
| AlphaPose | Whole-Body Regional Multi-Person Pose Estimation and Tracking in Real-Time |

  • [Hao-Shu Fang](https://fang-haoshu.github.io/)
  • [Jiefeng Li](https://jeffli.site/)
  • [Hongyang Tang](https://github.com/tang-hy)
  • [Chao Xu](https://www.isdas.cn/)
  • others
  • [Haoyi Zhu](https://www.haoyizhu.site/)
  • [Yuliang Xiu](https://xiuyuliang.cn/)
  • [Yong-Lu Li](https://dirtyharrylyl.github.io/)
  • [Cewu Lu](https://scholar.google.com/citations?user=QZVQEWAAAAAJ)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/TPAMI.2022.3222784)](https://doi.org/10.1109/TPAMI.2022.3222784) [![](https://img.shields.io/github/stars/MVIG-SJTU/AlphaPose?style=social)](https://github.com/MVIG-SJTU/AlphaPose)

  • [arxiv](https://arxiv.org/abs/2211.03375)

  • [git](https://github.com/tycoer/AlphaPose_jittor), [git](https://github.com/Fang-Haoshu/Halpe-FullBody)

  • [project](https://www.mvig.org/research/alphapose.html)

  • [yt](https://youtu.be/uze6chg-YeU), [yt](https://youtu.be/Z2WPd59pRi8), [yt](https://youtu.be/qW4lb9tnA3I), [yt](https://youtu.be/_qtNzylm1XI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1_3Wxi4H3QGVC28snL3rHIoeMAwI2otMR) | 07.01.2023 |
| HybrIK | Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation |

  • [Jiefeng Li](https://jeffli.site/)
  • [Chao Xu](https://www.isdas.cn/)
  • [Zhicun Chen](https://github.com/chenzhicun)
  • [Siyuan Bian](https://github.com/biansy000)
  • others
  • [Lixin Yang](https://lixiny.github.io/)
  • [Cewu Lu](https://www.mvig.org/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.00339)](https://doi.org/10.1109/CVPR46437.2021.00339) [![](https://img.shields.io/github/stars/Jeff-sjtu/HybrIK?style=social)](https://github.com/Jeff-sjtu/HybrIK)

  • [arxiv](https://arxiv.org/abs/2011.14672)

  • [git](https://github.com/mks0601/3DMPPE_POSENET_RELEASE)

  • [project](https://jeffli.site/HybrIK/)

  • [pwc](https://paperswithcode.com/sota/3d-human-pose-estimation-on-3dpw?p=hybrik-a-hybrid-analytical-neural-inverse)

  • [supp](https://openaccess.thecvf.com/content/CVPR2021/supplemental/Li_HybrIK_A_Hybrid_CVPR_2021_supplemental.zip)

  • [yt](https://youtu.be/tvwnXXH7xIw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1n41l7I2NxWseuruVQEU8he2XqzSXhu2f) | 01.01.2023 |
| Score Jacobian Chaining | Applies the chain rule to the learned gradients, back-propagating the score of a diffusion model through the Jacobian of a differentiable renderer, instantiated here as a voxel radiance field |

  • [Haochen Wang](https://whc.is/)
  • [Xiaodan Du](https://xiaodan.io/)
  • [Jiahao Li](https://jiahao.ai/)
  • [Raymond Yeh](https://raymond-yeh.com/)
  • [Greg Shakhnarovich](https://home.ttic.edu/~gregory/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52729.2023.01214)](https://doi.org/10.1109/CVPR52729.2023.01214) [![](https://img.shields.io/github/stars/pals-ttic/sjc?style=social)](https://github.com/pals-ttic/sjc)

  • [arxiv](https://arxiv.org/abs/2212.00774), [arxiv](https://arxiv.org/abs/2206.00364)

  • [hf](https://huggingface.co/spaces/MirageML/sjc)

  • [project](https://pals.ttic.edu/p/score-jacobian-chaining)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/zac8z4/score_jacobian_chaining_lifting_pretrained_2d/)

  • [yt](https://youtu.be/MmDSLc6CjoI), [yt](https://youtu.be/1oeruRLKoiU)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1zixo66UYGl70VOPy053o7IV_YkQt5lCZ) | 05.12.2022 |
| Demucs | Hybrid Spectrogram and Waveform Source Separation (usage sketch below) | [Alexandre Défossez](https://ai.honu.io/) | [![](https://img.shields.io/github/stars/facebookresearch/demucs?style=social)](https://github.com/facebookresearch/demucs)

  • [arxiv](https://arxiv.org/abs/2111.03600), [arxiv](https://arxiv.org/abs/2010.01733), [arxiv](https://arxiv.org/abs/2109.05418), [arxiv](https://arxiv.org/abs/1805.02410)

  • [git](https://github.com/adefossez/mdx21_demucs), [git](https://github.com/CarlGao4/Demucs-Gui), [git](https://github.com/kuielab/mdx-net-submission), [git](https://github.com/f90/Wave-U-Net)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1dC9nVxk3V_VPjUADsnFu8EiT-xnU1tGH) | 21.11.2022 |
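A sketch of programmatic separation, assuming the pip-installed `demucs` package and a stereo 44.1 kHz input file; `htdemucs` names one of the released hybrid models, and the file paths are placeholders:

```python
import torch
import torchaudio
from demucs.apply import apply_model
from demucs.pretrained import get_model

model = get_model("htdemucs")            # hybrid transformer Demucs
wav, sr = torchaudio.load("song.wav")    # assumed stereo, 44.1 kHz
mix = wav[None]                          # (batch, channels, time)
with torch.no_grad():
    sources = apply_model(model, mix, device="cpu")[0]  # (sources, ch, time)
for name, src in zip(model.sources, sources):  # drums / bass / other / vocals
    torchaudio.save(f"{name}.wav", src, sr)
```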
| StyleCLIP | Text-Driven Manipulation of StyleGAN Imagery |

  • [Or Patashnik](https://orpatashnik.github.io/)
  • [Zongze Wu](https://github.com/betterze)
  • [Eli Shechtman](https://research.adobe.com/person/eli-shechtman/)
  • [Daniel Cohen-Or](https://danielcohenor.com/)
  • [Dani Lischinski](https://pages.cs.huji.ac.il/danix/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV48922.2021.00209)](https://doi.org/10.1109/ICCV48922.2021.00209) [![](https://img.shields.io/github/stars/orpatashnik/StyleCLIP?style=social)](https://github.com/orpatashnik/StyleCLIP)

  • [arxiv](https://arxiv.org/abs/2103.17249), [arxiv](https://arxiv.org/abs/2011.12799)

  • [git](https://github.com/rosinality/stylegan2-pytorch/)

  • [yt](https://youtu.be/5icI0NgALnQ), [yt](https://youtu.be/PhR1gpXDu0w), [yt](https://youtu.be/d1OET63Ulwc), [yt](https://youtu.be/RAXrwPskNso)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/orpatashnik/StyleCLIP/blob/main/notebooks/StyleCLIP_global_torch.ipynb) | 30.10.2022 |
| MotionDiffuse | The first diffusion model-based text-driven motion generation framework, which demonstrates several desired properties over existing methods |

  • [Mingyuan Zhang](https://mingyuan-zhang.github.io/)
  • [Zhongang Cai](https://caizhongang.github.io/)
  • [Liang Pan](https://github.com/paul007pl)
  • [Fangzhou Hong](https://hongfz16.github.io/)
  • others
  • [Xinying Guo](https://gxyes.github.io/)
  • [Lei Yang](https://scholar.google.com/citations?user=jZH2IPYAAAAJ)
  • [Ziwei Liu](https://liuziwei7.github.io/)

| [![](https://img.shields.io/github/stars/mingyuan-zhang/MotionDiffuse?style=social)](https://github.com/mingyuan-zhang/MotionDiffuse)

  • [arxiv](https://arxiv.org/abs/2208.15001)

  • [hf](https://huggingface.co/spaces/mingyuan/MotionDiffuse)

  • [project](https://mingyuan-zhang.github.io/projects/MotionDiffuse.html)

  • [yt](https://youtu.be/U5PTnw490SA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1Dp6VsZp2ozKuu9ccMmsDjyij_vXfCYb3) | 13.10.2022 |
| VToonify | Leverages the mid- and high-resolution layers of StyleGAN to render high-quality artistic portraits based on the multi-scale content features extracted by an encoder to better preserve the frame details |

  • [Shuai Yang](https://williamyang1991.github.io/)
  • [Liming Jiang](https://liming-jiang.com/)
  • [Ziwei Liu](https://liuziwei7.github.io/)
  • [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3550454.3555437)](https://doi.org/10.1145/3550454.3555437) [![](https://img.shields.io/github/stars/williamyang1991/VToonify?style=social)](https://github.com/williamyang1991/VToonify)

  • [arxiv](https://arxiv.org/abs/2209.11224), [arxiv](https://arxiv.org/abs/2001.02890)

  • [git](https://github.com/rosinality/stylegan2-pytorch), [git](https://github.com/zllrunning/face-parsing.PyTorch), [git](https://github.com/zhujiapeng/LowRankGAN)

  • [hf](https://huggingface.co/spaces/PKUWilliamYang/VToonify), [hf](https://huggingface.co/PKUWilliamYang/VToonify/tree/main/models)

  • [project](https://www.mmlab-ntu.com/project/vtoonify/)

  • [yt](https://youtu.be/0_OmVhDgYuY)

| [![Open In Colab](images/colab.svg)](http://colab.research.google.com/github/williamyang1991/VToonify/blob/master/notebooks/inference_playground.ipynb) | 07.10.2022 |
| PyMAF | Pyramidal Mesh Alignment Feedback loop in a regression network for well-aligned body mesh recovery, extended to the recovery of expressive full-body models |

  • [Hongwen Zhang](https://hongwenzhang.github.io/)
  • [Yating Tian](https://github.com/tinatiansjz)
  • [Yuxiang Zhang](https://zhangyux15.github.io/)
  • [Mengcheng Li](https://github.com/Dw1010)
  • others
  • [Liang An](https://anl13.github.io/)
  • [Zhenan Sun](http://www.cbsr.ia.ac.cn/users/znsun/)
  • [Yebin Liu](https://www.liuyebin.com/)

| [![](https://img.shields.io/github/stars/HongwenZhang/PyMAF?style=social)](https://github.com/HongwenZhang/PyMAF)

  • [arxiv](https://arxiv.org/abs/2207.06400), [arxiv](https://arxiv.org/abs/2103.16507)

  • [git](https://github.com/facebookresearch/eft), [git](https://github.com/HongwenZhang/DaNet-DensePose2SMPL), [git](https://github.com/facebookresearch/DensePose), [git](https://github.com/Microsoft/human-pose-estimation.pytorch)

  • [project](https://www.liuyebin.com/pymaf-x/)

  • [yt](https://youtu.be/yqEmznSKjYI), [yt](https://youtu.be/ylOB0wCeV34)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/11RXLsH9BdoSCwY6G-IX7KgqDxVoImu6K) | 06.10.2022 |
| AlphaTensor | Discovering faster matrix multiplication algorithms with reinforcement learning |

  • [Alhussein Fawzi](http://www.alhusseinfawzi.info/)
  • [Matej Balog](http://matejbalog.eu/)
  • [Aja Huang](https://en.wikipedia.org/wiki/Aja_Huang)
  • [Thomas Hubert](https://scholar.google.com/citations?user=WXG0QfMAAAAJ)
  • others
  • [Bernardino Romera-Paredes](https://sites.google.com/site/romeraparedes/)
  • [Mohammadamin Barekatain](http://barekatain.me/)
  • [Alexander Novikov](https://scholar.google.com/citations?user=jMUkLqwAAAAJ)
  • [Francisco Ruiz](https://franrruiz.github.io/)
  • [Julian Schrittwieser](https://www.furidamu.org/)
  • [Grzegorz Swirszcz](https://sites.google.com/site/grzegorzswirszcz/home)
  • [David Silver](https://www.davidsilver.uk/)
  • [Demis Hassabis](https://en.wikipedia.org/wiki/Demis_Hassabis)
  • [Pushmeet Kohli](https://sites.google.com/site/pushmeet/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1038/s41586-022-05172-4)](https://doi.org/10.1038/s41586-022-05172-4) [![](https://img.shields.io/github/stars/google-deepmind/alphatensor?style=social)](https://github.com/google-deepmind/alphatensor)

  • [deepmind](https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor)

  • [yt](https://youtu.be/3N3Bl5AA5QU), [yt](https://youtu.be/gpYnDls4PdQ), [yt](https://youtu.be/IYgZS2EvnLI), [yt](https://youtu.be/8ILk4Wjo5rc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/alphatensor/blob/master/nonequivalence/inspect_factorizations_notebook.ipynb) | 04.10.2022 |
| Swin2SR | Novel Swin Transformer V2 used to improve SwinIR for image super-resolution, in particular in the compressed-input scenario (usage sketch below) |

  • [Marcos Conde](https://mv-lab.github.io/)
  • [Ui-Jin Choi](https://github.com/Choiuijin1125)
  • [Maxime Burchi](https://scholar.google.com/citations?user=7S_l2eAAAAAJ)
  • [Radu Timofte](https://www.informatik.uni-wuerzburg.de/computervision/home/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-031-25063-7_42)](https://doi.org/10.1007/978-3-031-25063-7_42) [![](https://img.shields.io/github/stars/mv-lab/swin2sr?style=social)](https://github.com/mv-lab/swin2sr)

  • [arxiv](https://arxiv.org/abs/2209.11345), [arxiv](https://arxiv.org/abs/2108.10257), [arxiv](https://arxiv.org/abs/2208.11184), [arxiv](https://arxiv.org/abs/2111.09883)

  • [git](https://github.com/cszn/KAIR/), [git](https://github.com/mv-lab/AISP), [git](https://github.com/microsoft/Swin-Transformer)

  • [hf](https://huggingface.co/spaces/jjourney1125/swin2sr)

  • [kaggle](https://www.kaggle.com/code/jesucristo/super-resolution-demo-swin2sr-official/), [kaggle](https://www.kaggle.com/datasets/jesucristo/super-resolution-benchmarks), [kaggle](https://www.kaggle.com/jinssaa/official-swin2sr-demo-results/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1paPrt62ydwLv2U2eZqfcFsePI4X4WRR1) | 03.10.2022 |
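A short sketch using the Hugging Face Transformers port rather than the repo's own scripts; the `caidas/swin2SR-classical-sr-x2-64` checkpoint id (a 2x classical-SR model) and the file paths are assumptions to adapt:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution

ckpt = "caidas/swin2SR-classical-sr-x2-64"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = Swin2SRForImageSuperResolution.from_pretrained(ckpt)

inputs = processor(Image.open("input.png").convert("RGB"), return_tensors="pt")
with torch.no_grad():
    out = model(**inputs).reconstruction           # (1, 3, 2H, 2W) in [0, 1]
upscaled = (out.squeeze().clamp(0, 1).permute(1, 2, 0).numpy() * 255)
Image.fromarray(upscaled.astype("uint8")).save("output.png")
```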
| Functa | From data to functa: Your data point is a function and you can treat it like one |

  • [Emilien Dupont](https://emiliendupont.github.io/)
  • [Hyunjik Kim](https://hyunjik11.github.io/)
  • [Ali Eslami](http://arkitus.com/)
  • [Danilo Rezende](https://danilorezende.com/about/)
  • [Dan Rosenbaum](https://danrsm.github.io/)

| [![](https://img.shields.io/github/stars/deepmind/functa?style=social)](https://github.com/deepmind/functa)

  • [arxiv](https://arxiv.org/abs/2201.12204)

  • [git](https://github.com/sxyu/pixel-nerf), [git](https://github.com/deepmind/jaxline)

  • [tf](https://www.tensorflow.org/datasets/catalog/celeb_a_hq)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/functa/blob/main/modulation_visualization_colab.ipynb) | 24.09.2022 |
| Whisper | Automatic speech recognition system trained on 680,000 hours of multilingual and multitask supervised data collected from the web (usage sketch below) |

  • [Alec Radford](http://newmu.github.io/)
  • [Jong Wook Kim](https://jongwook.kim/)
  • [Tao Xu](https://github.com/bayesian)
  • [Greg Brockman](https://gregbrockman.com/)
  • others
  • [Christine McLeavey](http://christinemcleavey.com/)
  • [Ilya Sutskever](http://www.cs.toronto.edu/~ilya/)

| [![](https://img.shields.io/github/stars/openai/whisper?style=social)](https://github.com/openai/whisper)

  • [arxiv](https://arxiv.org/abs/2212.04356)

  • [blog post](https://openai.com/research/whisper)

  • [git](https://github.com/kkroening/ffmpeg-python)

  • [yt](https://youtu.be/OCBZtgQGt1I), [yt](https://youtu.be/8SQV-B83tPU), [yt](https://youtu.be/nE5iVtwKerA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/openai/whisper/blob/master/notebooks/LibriSpeech.ipynb) | 21.09.2022 |
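Transcription is a few lines via the package's documented API; the model size and audio path are placeholders:

```python
import whisper

model = whisper.load_model("base")      # one of tiny/base/small/medium/large
result = model.transcribe("audio.mp3")  # audio decoding handled via ffmpeg
print(result["text"])
# task="translate" transcribes any supported language into English:
# model.transcribe("audio.mp3", task="translate")
```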
| DeOldify (video) | Colorize your own videos! | [Jason Antic](https://github.com/jantic) | [![](https://img.shields.io/github/stars/jantic/DeOldify?style=social)](https://github.com/jantic/DeOldify)

  • [arxiv](https://arxiv.org/abs/1805.08318), [arxiv](https://arxiv.org/abs/1706.08500)

  • [medium](https://medium.com/element-ai-research-lab/stabilizing-neural-style-transfer-for-video-62675e203e42)

  • [model](https://data.deepai.org/deoldify/ColorizeVideo_gen.pth)

  • [reddit](https://www.reddit.com/r/Nickelodeons/), [reddit](https://www.reddit.com/r/silentmoviegifs/)

  • [twitter](https://twitter.com/DeOldify)

  • [website](https://deoldify.ai/)

  • [yt](http://www.youtube.com/watch?v=l3UXXid04Ys), [yt](http://www.youtube.com/watch?v=EXn-n2iqEjI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/jantic/DeOldify/blob/master/VideoColorizerColab.ipynb) | 19.09.2022 |
| DeOldify (photo) | Colorize your own photos! |

  • [Jason Antic](https://github.com/jantic)
  • [Matt Robinson](https://github.com/mc-robinson)
  • [María Benavente](https://github.com/mariabg)

| [![](https://img.shields.io/github/stars/jantic/DeOldify?style=social)](https://github.com/jantic/DeOldify)

  • [arxiv](https://arxiv.org/abs/1805.08318), [arxiv](https://arxiv.org/abs/1706.08500)

  • [model](https://data.deepai.org/deoldify/ColorizeArtistic_gen.pth)

  • [reddit](https://www.reddit.com/r/TheWayWeWere/)

  • [twitter](https://twitter.com/DeOldify)

  • [website](https://deoldify.ai/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/jantic/DeOldify/blob/master/ImageColorizerColab.ipynb) | 19.09.2022 |
| Real-ESRGAN | Extend the powerful ESRGAN to a practical restoration application, which is trained with pure synthetic data |

  • [Xintao Wang](https://xinntao.github.io/)
  • [Liangbin Xie](https://liangbinxie.github.io/)
  • [Chao Dong](https://scholar.google.com/citations?user=OSDCB0UAAAAJ)
  • [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCVW54120.2021.00217)](https://doi.org/10.1109/ICCVW54120.2021.00217) [![](https://img.shields.io/github/stars/xinntao/Real-ESRGAN?style=social)](https://github.com/xinntao/Real-ESRGAN)

  • [arxiv](https://arxiv.org/abs/2107.10833)

  • [git](https://github.com/xinntao/ESRGAN), [git](https://github.com/xinntao/facexlib), [git](https://github.com/xinntao/HandyView), [git](https://github.com/Tencent/ncnn), [git](https://github.com/nihui/waifu2x-ncnn-vulkan)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo) | 18.09.2022 |
| IDE-3D | Interactive Disentangled Editing for High-Resolution 3D-aware Portrait Synthesis |

  • [Jingxiang Sun](https://mrtornado24.github.io/)
  • [Xuan Wang](https://xuanwangvc.github.io/)
  • [Yichun Shi](https://seasonsh.github.io/)
  • [Lizhen Wang](https://lizhenwangt.github.io/)
  • others
  • [Jue Wang](https://juewang725.github.io/)
  • [Yebin Liu](http://www.liuyebin.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3550454.3555506)](https://doi.org/10.1145/3550454.3555506) [![](https://img.shields.io/github/stars/MrTornado24/IDE-3D?style=social)](https://github.com/MrTornado24/IDE-3D)

  • [arxiv](https://arxiv.org/abs/2205.15517)

  • [git](https://github.com/NVlabs/eg3d), [git](https://github.com/NVlabs/ffhq-dataset), [git](https://github.com/NVlabs/stylegan3)

  • [yt](https://youtu.be/Kj5XY_J2Alk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/MrTornado24/IDE-3D/blob/main/inversion/notebooks/inference_playground.ipynb) | 08.09.2022 |
| Decision Transformers | An architecture that casts the problem of RL as conditional sequence modeling (inference sketch below) |

  • [Lili Chen](http://www.lilichen.me/)
  • [Kevin Lu](https://kzl.github.io/)
  • [Aravind Rajeswaran](https://aravindr93.github.io/)
  • [Kimin Lee](https://sites.google.com/view/kiminlee)
  • others
  • [Aditya Grover](https://aditya-grover.github.io/)
  • [Michael Laskin](https://www.mishalaskin.com/)
  • [Pieter Abbeel](http://people.eecs.berkeley.edu/~pabbeel/)
  • [Aravind Srinivas](https://github.com/aravindsrinivas)
  • [Igor Mordatch](https://scholar.google.com/citations?user=Vzr1RukAAAAJ)

| [![](https://img.shields.io/github/stars/kzl/decision-transformer?style=social)](https://github.com/kzl/decision-transformer)

  • [arxiv](https://arxiv.org/abs/2106.01345)

  • [hf](https://huggingface.co/models?other=gym-continous-control), [hf](https://huggingface.co/edbeeching/decision-transformer-gym-hopper-expert), [hf](https://huggingface.co/docs/transformers/model_doc/decision_transformer)

  • [project](https://sites.google.com/berkeley.edu/decision-transformer)

  • [wiki](https://en.wikipedia.org/wiki/Autoregressive_model)

  • [yt](https://youtu.be/k08N5a0gG0A), [yt](https://youtu.be/-buULmf7dec), [yt](https://youtu.be/83QN9S-0I84), [yt](https://youtu.be/w4Bw8WYL8Ps)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF) | 06.09.2022 |
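A hedged inference sketch against the Hugging Face port linked above; the Hopper checkpoint name comes from those links (state_dim=11, act_dim=3), while the context length and the target return of 3600 are illustrative:

```python
import torch
from transformers import DecisionTransformerModel

model = DecisionTransformerModel.from_pretrained(
    "edbeeching/decision-transformer-gym-hopper-expert")

seq = 20
states = torch.randn(1, seq, 11)                 # placeholder rollout context
actions = torch.zeros(1, seq, 3)
rewards = torch.zeros(1, seq, 1)
returns_to_go = torch.full((1, seq, 1), 3600.0)  # condition on a target return
timesteps = torch.arange(seq).unsqueeze(0)
mask = torch.ones(1, seq, dtype=torch.long)

with torch.no_grad():
    _, action_preds, _ = model(states=states, actions=actions, rewards=rewards,
                               returns_to_go=returns_to_go, timesteps=timesteps,
                               attention_mask=mask, return_dict=False)
next_action = action_preds[0, -1]                # act on the last predicted token
```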
| textual-inversion | An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion (embedding-setup sketch below) |

  • [Rinon Gal](https://rinongal.github.io/)
  • [Yuval Alaluf](https://yuval-alaluf.github.io/)
  • [Yuval Atzmon](https://research.nvidia.com/person/yuval-atzmon)
  • [Or Patashnik](https://orpatashnik.github.io/)
  • others
  • [Amit Bermano](https://www.cs.tau.ac.il/~amberman/)
  • [Gal Chechik](https://research.nvidia.com/person/gal-chechik)
  • [Daniel Cohen-Or](https://danielcohenor.com/)

| [![](https://img.shields.io/github/stars/rinongal/textual_inversion?style=social)](https://github.com/rinongal/textual_inversion)

  • [arxiv](https://arxiv.org/abs/2208.01618)

  • [project](https://textual-inversion.github.io/)

  • [yt](https://youtu.be/f3oXa7_SYek), [yt](https://youtu.be/opD_H9bED9Y)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/rinongal/textual_inversion/blob/master/scripts/latent_imagenet_diffusion.ipynb) | 21.08.2022 |
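A minimal sketch of the embedding setup behind textual inversion, using the Hugging Face CLIP classes as stand-ins for the repo's own training code; the pseudo-word, the initializer word, and the learning rate are illustrative, and the diffusion denoising loss itself is omitted:

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

# Register a pseudo-word for the new concept and grow the embedding table.
tokenizer.add_tokens(["<my-concept>"])
text_encoder.resize_token_embeddings(len(tokenizer))
new_id = tokenizer.convert_tokens_to_ids("<my-concept>")

# Initialize the new row from a coarse descriptor of the concept.
emb = text_encoder.get_input_embeddings().weight
init_id = tokenizer.convert_tokens_to_ids("sculpture")
with torch.no_grad():
    emb[new_id] = emb[init_id].clone()

# Freeze everything, then re-enable gradients only on the embedding table;
# training would zero the gradient of every row except new_id each step while
# minimizing the frozen diffusion model's denoising loss on concept images.
text_encoder.requires_grad_(False)
emb.requires_grad_(True)
optimizer = torch.optim.AdamW([emb], lr=5e-3)
```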
| StyleGAN-Human | A Data-Centric Odyssey of Human Generation |

  • [Jianglin Fu](https://github.com/arleneF)
  • [Shikai Li](https://github.com/leeskyed)
  • [Yuming Jiang](https://yumingj.github.io/)
  • [Kwan-Yee Lin](https://kwanyeelin.github.io/)
  • others
  • [Chen Qian](https://scholar.google.com/citations?user=AerkT0YAAAAJ)
  • [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/)
  • [Wayne Wu](https://wywu.github.io/)
  • [Ziwei Liu](https://liuziwei7.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-031-19787-1_1)](https://doi.org/10.1007/978-3-031-19787-1_1) [![](https://img.shields.io/github/stars/stylegan-human/stylegan-human?style=social)](https://github.com/stylegan-human/stylegan-human)

  • [arxiv](https://arxiv.org/abs/2204.11823)

  • [git](https://github.com/NVlabs/stylegan), [git](https://github.com/NVlabs/stylegan2-ada-pytorch), [git](https://github.com/NVlabs/stylegan3)

  • [project](https://stylegan-human.github.io/)

  • [pwc](https://paperswithcode.com/dataset/market-1501)

  • [yt](https://youtu.be/nIrb9hwsdcI), [yt](https://youtu.be/86b49sCz0Gg), [yt](https://youtu.be/g3nmM6MdxwY), [yt](https://youtu.be/p2uwqh_SFL8)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1sgxoDM55iM07FS54vz9ALg1XckiYA2On) | 19.08.2022 |
| Make-A-Scene | Scene-Based Text-to-Image Generation with Human Priors |

  • [Oran Gafni](https://github.com/ogafni)
  • [Adam Polyak](https://scholar.google.com/citations?user=CP62OTMAAAAJ)
  • [Oron Ashual](https://scholar.google.com/citations?user=CUA9JCkAAAAJ)
  • [Shelly Sheynin](https://github.com/shellysheynin)
  • others
  • [Devi Parikh](https://faculty.cc.gatech.edu/~parikh/)
  • [Yaniv Taigman](https://ytaigman.github.io/)

| [![](https://img.shields.io/github/stars/CasualGANPapers/Make-A-Scene?style=social)](https://github.com/CasualGANPapers/Make-A-Scene)

  • [arxiv](https://arxiv.org/abs/2203.13131)

  • [yt](https://youtu.be/ZM06MjPdoxw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1SPyQ-epTsAOAu8BEohUokN4-b5RM_TnE) | 12.08.2022 |
| StyleGAN-NADA | Zero-shot non-adversarial domain adaptation of pre-trained generators |

  • [Rinon Gal](https://rinongal.github.io/)
  • [Or Patashnik](https://orpatashnik.github.io/)
  • [Haggai Maron](https://haggaim.github.io/)
  • [Gal Chechik](https://research.nvidia.com/person/gal-chechik)
  • [Daniel Cohen-Or](https://danielcohenor.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3528223.3530164)](https://doi.org/10.1145/3528223.3530164) [![](https://img.shields.io/github/stars/rinongal/StyleGAN-nada?style=social)](https://github.com/rinongal/StyleGAN-nada)

  • [arxiv](https://arxiv.org/abs/2108.00946), [arxiv](https://arxiv.org/abs/2103.17249), [arxiv](https://arxiv.org/abs/2104.02699)

  • [git](https://github.com/rosinality/stylegan2-pytorch/), [git](https://github.com/NVlabs/stylegan2-ada)

  • [project](https://stylegan-nada.github.io/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/rinongal/stylegan-nada/blob/main/stylegan_nada.ipynb) | 09.08.2022 |
| YOLOv7 | Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors |

  • [Chien-Yao Wang](https://scholar.google.com/citations?user=DkQh4M4AAAAJ)
  • [Alexey Bochkovskiy](http://www.alexeyab.com/)
  • [Mark Liao](https://www.iis.sinica.edu.tw/pages/liao/)

| [![](https://img.shields.io/github/stars/WongKinYiu/yolov7?style=social)](https://github.com/WongKinYiu/yolov7)

  • [arxiv](https://arxiv.org/abs/2207.02696)

  • [data](http://images.cocodataset.org/annotations/annotations_trainval2017.zip), [data](http://images.cocodataset.org/zips/train2017.zip), [data](http://images.cocodataset.org/zips/val2017.zip), [data](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/coco2017labels-segments.zip)

  • [git](https://github.com/WongKinYiu/yolor), [git](https://github.com/WongKinYiu/PyTorch_YOLOv4), [git](https://github.com/WongKinYiu/ScaledYOLOv4), [git](https://github.com/Megvii-BaseDetection/YOLOX), [git](https://github.com/DingXiaoH/RepVGG), [git](https://github.com/JUGGHM/OREPA_CVPR2022), [git](https://github.com/TexasInstruments/edgeai-yolov5/tree/yolo-pose)

  • [pwc](https://paperswithcode.com/sota/real-time-object-detection-on-coco?p=yolov7-trainable-bag-of-freebies-sets-new)

  • [yt](https://www.youtube.com/playlist?list=PL_Nji0JOuXg2QMohGK7wfzgJ-MavzXRHW), [yt](https://youtu.be/-QWxJ0j9EY8)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/WongKinYiu/yolov7/blob/main/tools/compare_YOLOv7_vs_YOLOv5m6_half.ipynb) | 09.08.2022 |
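A minimal sketch of running the YOLOv7 entry above from Python, assuming the repo is cloned locally and the released `yolov7.pt` checkpoint has been downloaded; the flags follow the repo README's detection example.

```python
import subprocess

# Invoke the repo's detection script on a sample image.
subprocess.run(
    [
        "python", "detect.py",
        "--weights", "yolov7.pt",      # released checkpoint, downloaded separately
        "--conf", "0.25",
        "--img-size", "640",
        "--source", "inference/images/horses.jpg",
    ],
    cwd="yolov7",                      # path to the cloned WongKinYiu/yolov7 checkout
    check=True,
)
```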
| GLIP | Grounded language-image pre-training model for learning object-level, language-aware, and semantic-rich visual representations |

  • [Liunian Harold Li](https://liunian-harold-li.github.io/)
  • [Pengchuan Zhang](https://pzzhang.github.io/pzzhang/)
  • [Haotian Zhang](https://haotian-zhang.github.io/)
  • [Jianwei Yang](https://jwyang.github.io/)
  • others
  • [Chunyuan Li](https://chunyuan.li/)
  • [Yiwu Zhong](https://pages.cs.wisc.edu/~yiwuzhong/)
  • [Lijuan Wang](https://github.com/LijuanWang)
  • [Lu Yuan](https://scholar.google.com/citations?user=k9TsUVsAAAAJ)
  • [Lei Zhang](https://www.leizhang.org/)
  • [Jenq-Neng Hwang](https://people.ece.uw.edu/hwang/)
  • [Kai-Wei Chang](http://web.cs.ucla.edu/~kwchang/)
  • [Jianfeng Gao](https://www.microsoft.com/en-us/research/people/jfgao/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.01069)](https://doi.org/10.1109/CVPR52688.2022.01069) [![](https://img.shields.io/github/stars/microsoft/GLIP?style=social)](https://github.com/microsoft/GLIP)

  • [arxiv](https://arxiv.org/abs/2112.03857), [arxiv](https://arxiv.org/abs/2206.05836), [arxiv](https://arxiv.org/abs/2102.01066), [arxiv](https://arxiv.org/abs/2204.08790)

  • [blog post](https://www.microsoft.com/en-us/research/project/project-florence-vl/articles/object-detection-in-the-wild-via-grounded-language-image-pre-training/)

  • [git](https://github.com/gligen/GLIGEN)

  • [hf](https://huggingface.co/harold/GLIP)

  • [medium](https://sh-tsang.medium.com/glip-grounded-language-image-pre-training-2be2483295b3), [medium](https://towardsdatascience.com/glip-introducing-language-image-pre-training-to-object-detection-5ddb601873aa)

  • [yt](https://youtu.be/zu1BGQBI4dU)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/12x7v-_miN7-SRiziK3Cx4ffJzstBJNqb) | 30.07.2022 |
| Anycost GAN | Interactive natural image editing |

  • [Ji Lin](http://linji.me/)
  • [Richard Zhang](https://richzhang.github.io/)
  • [Frieder Ganz](https://scholar.google.com/citations?user=u9ySZkUAAAAJ)
  • [Song Han](https://songhan.mit.edu/)
  • [Jun-Yan Zhu](https://www.cs.cmu.edu/~junyanz/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.01474)](https://doi.org/10.1109/CVPR46437.2021.01474) [![](https://img.shields.io/github/stars/mit-han-lab/anycost-gan?style=social)](https://github.com/mit-han-lab/anycost-gan)

  • [arxiv](https://arxiv.org/abs/2103.03243)

  • [git](https://github.com/NVlabs/stylegan2), [git](https://github.com/rosinality/stylegan2-pytorch), [git](https://github.com/NVlabs/ffhq-dataset), [git](https://github.com/switchablenorms/CelebAMask-HQ), [git](https://github.com/fyu/lsun)

  • [project](https://hanlab.mit.edu/projects/anycost-gan/)

  • [yt](https://www.youtube.com/watch?v=_yEziPl9AkM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/mit-han-lab/anycost-gan/blob/master/notebooks/intro_colab.ipynb) | 20.07.2022 |
| GFPGAN | Towards Real-World Blind Face Restoration with Generative Facial Prior |

  • [Xintao Wang](https://xinntao.github.io/)
  • [Yu Li](https://yu-li.github.io/)
  • [Honglun Zhang](https://scholar.google.com/citations?user=KjQLROoAAAAJ)
  • [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.00905)](https://doi.org/10.1109/CVPR46437.2021.00905) [![](https://img.shields.io/github/stars/TencentARC/GFPGAN?style=social)](https://github.com/TencentARC/GFPGAN)

  • [arxiv](https://arxiv.org/abs/2101.04061)

  • [git](https://github.com/xinntao/facexlib), [git](https://github.com/xinntao/HandyView), [git](https://github.com/NVlabs/ffhq-dataset)

  • [project](https://xinntao.github.io/projects/gfpgan)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1sVsoBd9AjckIXThgtZhGrHRfFI6UUYOo) | 13.07.2022 |
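A minimal restoration sketch for the GFPGAN entry above using the `gfpgan` package; the checkpoint filename is an assumption (download it from the repo's releases page).

```python
import cv2
from gfpgan import GFPGANer

restorer = GFPGANer(
    model_path="GFPGANv1.3.pth",  # assumed local path to a released checkpoint
    upscale=2,
    arch="clean",
    channel_multiplier=2,
)

img = cv2.imread("old_photo.jpg", cv2.IMREAD_COLOR)
# enhance() returns (cropped_faces, restored_faces, restored_image)
_, _, restored = restorer.enhance(img, has_aligned=False, paste_back=True)
cv2.imwrite("restored.jpg", restored)
```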
| EPro-PnP | Generalized End-to-End Probabilistic Perspective-n-Points for Monocular Object Pose Estimation |

  • [Hansheng Chen](https://lakonik.github.io/)
  • [Pichao Wang](https://wangpichao.github.io/)
  • [Fan Wang](https://scholar.google.com/citations?user=WCRGTHsAAAAJ)
  • [Wei Tian](https://scholar.google.com/citations?user=aYKQn88AAAAJ)
  • others
  • [Lu Xiong](https://ieeexplore.ieee.org/author/37401835800)
  • [Hao Li](https://scholar.google.com/citations?user=pHN-QIwAAAAJ)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/TPAMI.2024.3354997)](https://doi.org/10.1109/TPAMI.2024.3354997) [![](https://img.shields.io/github/stars/tjiiv-cprg/EPro-PnP?style=social)](https://github.com/tjiiv-cprg/EPro-PnP)

  • [arxiv](https://arxiv.org/abs/2203.13254)

  • [git](https://github.com/megvii-research/petr), [git](https://github.com/HuangJunJie2017/BEVDet), [git](https://github.com/fudan-zvg/PolarFormer), [git](https://github.com/zhiqi-li/BEVFormer), [git](https://github.com/open-mmlab/mmdetection3d)

  • [nuScenes](https://www.nuscenes.org/object-detection?externalData=no&mapData=no&modalities=Camera)

  • [yt](https://youtu.be/TonBodQ6EUU)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tjiiv-cprg/EPro-PnP/blob/main/demo/fit_identity.ipynb) | 12.07.2022 |
| Text2Human | Text-driven controllable framework for a high-quality and diverse human generation |

  • [Yuming Jiang](https://yumingj.github.io/)
  • [Shuai Yang](https://williamyang1991.github.io/)
  • [Haonan Qiu](http://haonanqiu.com/)
  • [Wayne Wu](https://wywu.github.io/)
  • others
  • [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/)
  • [Ziwei Liu](https://liuziwei7.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3528223.3530104)](https://doi.org/10.1145/3528223.3530104) [![](https://img.shields.io/github/stars/yumingj/Text2Human?style=social)](https://github.com/yumingj/Text2Human)

  • [arxiv](https://arxiv.org/abs/2205.15996)

  • [git](https://github.com/yumingj/DeepFashion-MultiModal), [git](https://github.com/samb-t/unleashing-transformers)

  • [hf](https://huggingface.co/spaces/hysts/Text2Human), [hf](https://huggingface.co/spaces/CVPR/drawings-to-human)

  • [project](https://yumingj.github.io/projects/Text2Human.html)

  • [yt](https://youtu.be/yKh4VORA_E0), [yt](https://youtu.be/RV-g5BlH3Zg)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1AVwbqLwMp_Gz3KTCgBTtnGVtXIlCZDPk) | 04.07.2022 |
| VQ-Diffusion | Based on a VQ-VAE whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model |

  • [Shuyang Gu](https://github.com/cientgu)
  • [Dong Chen](http://www.dongchen.pro/)
  • [Jianmin Bao](https://jianminbao.github.io/)
  • [Fang Wen](https://www.microsoft.com/en-us/research/people/fangwen/)
  • others
  • [Bo Zhang](https://bo-zhang.me/)
  • [Dongdong Chen](http://www.dongdongchen.bid/)
  • [Lu Yuan](https://scholar.google.com/citations?&user=k9TsUVsAAAAJ)
  • [Baining Guo](https://scholar.google.com/citations?user=h4kYmRYAAAAJ)
  • [Zhicong Tang](https://github.com/zzctan)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.01043)](https://doi.org/10.1109/CVPR52688.2022.01043) [![](https://img.shields.io/github/stars/microsoft/VQ-Diffusion?style=social)](https://github.com/microsoft/VQ-Diffusion)

  • [arxiv](https://arxiv.org/abs/2111.14822), [arxiv](https://arxiv.org/abs/2205.16007)

  • [git](https://github.com/ehoogeboom/multinomial_diffusion), [git](https://github.com/openai/improved-diffusion)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1Ws0_wK2cnsWEnfB7HtmPT4bjCPElb40C) | 30.06.2022 |
| OPT | Open Pre-trained Transformers is a family of NLP models trained on billions of tokens of text obtained from the internet |

  • [Susan Zhang](https://github.com/suchenzang)
  • [Stephen Roller](https://stephenroller.com/)
  • [Naman Goyal](https://github.com/ngoyal2707)
  • [Mikel Artetxe](https://github.com/artetxem)
  • others
  • [Moya Chen](https://moyachen.com/)
  • [Christopher Dewan](https://github.com/m3rlin45)
  • [Mona Diab](https://scholar.google.com/citations?user=-y6SIhQAAAAJ)
  • [Xi Victoria Lin](http://victorialin.net/)
  • [Todor Mihaylov](https://github.com/tbmihailov)
  • [Myle Ott](https://myleott.com/)
  • [Sam Shleifer](https://github.com/sshleifer)
  • [Kurt Shuster](https://github.com/klshuster)
  • [Daniel Simig](https://scholar.google.com/citations?user=TtWU9fsAAAAJ)
  • [Punit Singh Koura](https://github.com/punitkoura)
  • [Anjali Sridhar](https://www.linkedin.com/in/anjalisridhar/)
  • [Tianlu Wang](https://tianlu-wang.github.io/)
  • [Luke Zettlemoyer](https://www.cs.washington.edu/people/faculty/lsz/)

| [![](https://img.shields.io/github/stars/facebookresearch/metaseq?style=social)](https://github.com/facebookresearch/metaseq/tree/main/projects/OPT)

  • [arxiv](https://arxiv.org/abs/2205.01068), [arxiv](https://arxiv.org/abs/1906.02243), [arxiv](https://arxiv.org/abs/2104.10350), [arxiv](https://arxiv.org/abs/2201.11990)

  • [blog post](https://ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/)

  • [git](https://github.com/NVIDIA/Megatron-LM)

  • [yt](https://youtu.be/Ejg0OunCi9U)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/14wnxMvD9zsiBQo2FtTpxn6w2cpXCcb-7) | 29.06.2022 |
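A minimal prompting sketch for the OPT entry above via Hugging Face `transformers`; the 1.3B checkpoint is chosen here only because it fits on a single GPU (the 175B model itself is request-gated).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

inputs = tok("The capital of France is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9)
print(tok.decode(out[0], skip_special_tokens=True))
```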
| Customizing a Transformer Encoder | We will learn how to customize the encoder to employ new network architectures | [Chen Chen](https://github.com/chenGitHuber) | [![](https://img.shields.io/github/stars/tensorflow/models?style=social)](https://github.com/tensorflow/models/tree/master/official/nlp/modeling)

  • [arxiv](https://arxiv.org/abs/1706.03762)

  • [git](https://github.com/tensorflow/models/blob/master/official/nlp/modeling/networks/encoder_scaffold.py)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/models/blob/master/official/colab/nlp/customize_encoder.ipynb) | 22.06.2022 |
| MTTR | End-to-End Referring Video Object Segmentation with Multimodal Transformers |

  • [Adam Botach](https://www.linkedin.com/in/adam-botach)
  • [Evgenii Zheltonozhskii](https://evgeniizh.com/)
  • [Chaim Baskin](https://github.com/chaimbaskin)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.00493)](https://doi.org/10.1109/CVPR52688.2022.00493) [![](https://img.shields.io/github/stars/mttr2021/MTTR?style=social)](https://github.com/mttr2021/MTTR)

  • [arxiv](https://arxiv.org/abs/2111.14821), [arxiv](https://arxiv.org/abs/1907.11692), [arxiv](https://arxiv.org/abs/2106.13230)

  • [git](https://github.com/SwinTransformer/Video-Swin-Transformer)

  • [hf](https://huggingface.co/spaces/MTTR/MTTR-Referring-Video-Object-Segmentation)

  • [yt](https://youtu.be/YqlhXgq6hcs)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/12p0jpSx3pJNfZk-y_L44yeHZlhsKVra-) | 20.06.2022 |
| SwinIR | Image Restoration Using Swin Transformer |

  • [Jingyun Liang](https://jingyunliang.github.io/)
  • [Jiezhang Cao](https://github.com/caojiezhang)
  • [Guolei Sun](https://github.com/GuoleiSun)
  • [Kai Zhang](https://cszn.github.io/)
  • others
  • [Luc Van Gool](https://scholar.google.com/citations?user=TwMib_QAAAAJ)
  • [Radu Timofte](https://www.informatik.uni-wuerzburg.de/computervision/home/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCVW54120.2021.00210)](https://doi.org/10.1109/ICCVW54120.2021.00210) [![](https://img.shields.io/github/stars/JingyunLiang/SwinIR?style=social)](https://github.com/JingyunLiang/SwinIR)

  • [arxiv](https://arxiv.org/abs/2108.10257), [arxiv](https://arxiv.org/abs/2107.10833)

  • [git](https://github.com/cszn/BSRGAN), [git](https://github.com/microsoft/Swin-Transformer), [git](https://github.com/cszn/KAIR)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/gist/JingyunLiang/a5e3e54bc9ef8d7bf594f6fee8208533/swinir-demo-on-real-world-image-sr.ipynb) | 17.06.2022 |
| VRT | A Video Restoration Transformer |

  • [Jingyun Liang](https://jingyunliang.github.io/)
  • [Jiezhang Cao](https://github.com/caojiezhang)
  • [Yuchen Fan](https://ychfan.github.io/)
  • [Kai Zhang](https://cszn.github.io/)
  • others
  • [Yawei Li](https://ofsoundof.github.io/)
  • [Radu Timofte](https://www.informatik.uni-wuerzburg.de/computervision/home/)
  • [Luc Van Gool](https://scholar.google.com/citations?user=TwMib_QAAAAJ)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/TIP.2024.3372454)](https://doi.org/10.1109/TIP.2024.3372454) [![](https://img.shields.io/github/stars/JingyunLiang/VRT?style=social)](https://github.com/JingyunLiang/VRT)

  • [arxiv](https://arxiv.org/abs/2201.12288)

  • [git](https://github.com/cszn/KAIR), [git](https://github.com/SwinTransformer/Video-Swin-Transformer), [git](https://github.com/open-mmlab/mmediting)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/gist/JingyunLiang/deb335792768ad9eb73854a8efca4fe0/vrt-demo-on-video-restoration.ipynb) | 15.06.2022 |
| Omnivore | A single model which excels at classifying images, videos, and single-view 3D data using exactly the same model parameters |

  • [Rohit Girdhar](http://rohitgirdhar.github.io/)
  • [Mannat Singh](https://scholar.google.com/citations?user=QOO8OCcAAAAJ)
  • [Nikhila Ravi](https://nikhilaravi.com/)
  • [Laurens Maaten](https://lvdmaaten.github.io/)
  • others
  • [Armand Joulin](https://ai.facebook.com/people/armand-joulin/)
  • [Ishan Misra](https://imisra.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.01563)](https://doi.org/10.1109/CVPR52688.2022.01563) [![](https://img.shields.io/github/stars/facebookresearch/omnivore?style=social)](https://github.com/facebookresearch/omnivore)

  • [arxiv](https://arxiv.org/abs/2201.08377), [arxiv](https://arxiv.org/abs/2206.08356)

  • [hf](https://huggingface.co/spaces/akhaliq/omnivore)

  • [project](https://facebookresearch.github.io/omnivore/)

  • [pwc](https://paperswithcode.com/dataset/epic-kitchens-100)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/omnivore/blob/main/inference_tutorial.ipynb) | 14.06.2022 |
| Dream Fields | Zero-Shot Text-Guided Object Generation |

  • [Ajay Jain](https://ajayj.com/)
  • [Ben Mildenhall](https://bmild.github.io/)
  • [Jon Barron](https://jonbarron.info/)
  • [Pieter Abbeel](https://people.eecs.berkeley.edu/~pabbeel/)
  • [Ben Poole](https://cs.stanford.edu/~poole/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.00094)](https://doi.org/10.1109/CVPR52688.2022.00094) [![](https://img.shields.io/github/stars/google-research/google-research?style=social)](https://github.com/google-research/google-research/tree/master/dreamfields)

  • [arxiv](https://arxiv.org/abs/2112.01455), [arxiv](https://arxiv.org/abs/2104.00677), [arxiv](https://arxiv.org/abs/2103.13415)

  • [git](https://github.com/ajayjain/DietNeRF), [git](https://github.com/google/mipnerf)

  • [project](https://ajayj.com/dreamfields)

  • [yt](https://youtu.be/1Fke6w46tv4)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/17GtPqdUCbG5CsmTnQFecPpoq_zpNKX7A) | 10.06.2022 |
| Detic | Detecting Twenty-thousand Classes using Image-level Supervision |

  • [Xingyi Zhou](https://www.cs.utexas.edu/~zhouxy/)
  • [Rohit Girdhar](https://rohitgirdhar.github.io/)
  • [Armand Joulin](https://ai.facebook.com/people/armand-joulin/)
  • [Philipp Krähenbühl](https://github.com/philkr)
  • [Ishan Misra](https://imisra.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-031-20077-9_21)](https://doi.org/10.1007/978-3-031-20077-9_21) [![](https://img.shields.io/github/stars/facebookresearch/Detic?style=social)](https://github.com/facebookresearch/Detic)

  • [arxiv](https://arxiv.org/abs/2201.02605)

  • [git](https://github.com/lvis-dataset/lvis-api)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1QtTW9-ukX2HKZGvt0QvVGqjuqEykoZKI) | 07.06.2022 |
| T0 | Multitask Prompted Training Enables Zero-Shot Task Generalization |

  • [Victor Sanh](https://github.com/VictorSanh)
  • [Albert Webson](https://representation.ai/)
  • [Colin Raffel](https://colinraffel.com//)
  • [Stephen Bach](http://cs.brown.edu/people/sbach/)
  • others
  • [Lintang Sutawika](https://github.com/lintangsutawika)
  • [Zaid Alyafeai](https://github.com/zaidalyafeai)
  • [Antoine Chaffin](https://antoine.chaffin.fr/)
  • [Arnaud Stiegler](https://github.com/arnaudstiegler)
  • [Teven Scao](https://scholar.google.com/citations?user=ik0_vxsAAAAJ)
  • [Arun Raja](https://www.arunraja.dev/)
  • [Manan Dey](https://github.com/manandey)
  • [M Saiful Bari](https://sbmaruf.github.io/)
  • [Canwen Xu](https://www.canwenxu.net/)
  • [Urmish Thakker](https://github.com/Urmish)
  • [Shanya Sharma](https://shanyas10.github.io/)
  • [Eliza Szczechla](https://elsanns.github.io/)
  • [Taewoon Kim](https://tae898.github.io/)
  • [Gunjan Chhablani](https://gchhablani.github.io/)
  • [Nihal Nayak](https://nihalnayak.github.io/)
  • [Debajyoti Datta](http://debajyotidatta.github.io/)
  • [Jonathan Chang](https://github.com/cccntu/)
  • [Mike Tian-Jian Jiang](https://github.com/tianjianjiang)
  • [Matteo Manica](https://github.com/drugilsberg)
  • [Sheng Shen](https://sincerass.github.io/)
  • [Zheng Xin Yong](https://yongzx.github.io/)
  • [Harshit Pandey](https://scholar.google.com/citations?user=BPIs78gAAAAJ)
  • [Rachel Bawden](https://rbawden.github.io/)
  • [Trishala Neeraj](https://github.com/trishalaneeraj)
  • [Jos Rozen](https://scholar.google.com/citations?user=OxEDKogAAAAJ)
  • [Abheesht Sharma](https://github.com/abheesht-sharma)
  • [Andrea Santilli](https://teelinsan.github.io/)
  • [Thibault Fevry](http://thibaultfevry.com/)
  • [Jason Alan Fries](https://web.stanford.edu/~jfries/)
  • [Ryan Teehan](https://github.com/rteehas)
  • [Stella Biderman](https://www.stellabiderman.com/)
  • [Leo Gao](https://github.com/leogao2)
  • [Tali Bers](https://github.com/tbers-coursera)
  • [Thomas Wolf](https://thomwolf.io/)
  • [Alexander M. Rush](https://scholar.google.com/citations?user=LIjnUGgAAAAJ)

| [![](https://img.shields.io/github/stars/bigscience-workshop/promptsource?style=social)](https://github.com/bigscience-workshop/promptsource)

  • [arxiv](https://arxiv.org/abs/2110.08207)

  • [yt](https://youtu.be/iJ0IVZgGjTM), [yt](https://youtu.be/YToXXfrIu6w)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1xx7SgdLaAu23YFBirXmaQViDr8caowX_) | 29.05.2022 |
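A minimal zero-shot sketch for the T0 entry above, using the public `bigscience/T0_3B` checkpoint via `transformers`; the prompt phrasing is an arbitrary example of the natural-language task formats the model was trained on.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/T0_3B")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")

prompt = "Is this review positive or negative? Review: the film was a delight."
inputs = tok(prompt, return_tensors="pt")
print(tok.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```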
| AvatarCLIP | A zero-shot text-driven framework for 3D avatar generation and animation |

  • [Fangzhou Hong](https://hongfz16.github.io/)
  • [Mingyuan Zhang](https://scholar.google.com/citations?user=2QLD4fAAAAAJ)
  • [Liang Pan](https://scholar.google.com/citations?user=lSDISOcAAAAJ)
  • [Zhongang Cai](https://caizhongang.github.io/)
  • others
  • [Lei Yang](https://scholar.google.com/citations?user=jZH2IPYAAAAJ)
  • [Ziwei Liu](https://liuziwei7.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3528223.3530094)](https://doi.org/10.1145/3528223.3530094) [![](https://img.shields.io/github/stars/hongfz16/AvatarCLIP?style=social)](https://github.com/hongfz16/AvatarCLIP)

  • [arxiv](https://arxiv.org/abs/2205.08535), [arxiv](https://arxiv.org/abs/2112.01455), [arxiv](https://arxiv.org/abs/2112.03221), [arxiv](https://arxiv.org/abs/2112.05139), [arxiv](https://arxiv.org/abs/2203.13333)

  • [data](https://www.di.ens.fr/willow/research/surreal/data/)

  • [git](https://github.com/daniilidis-group/neural_renderer), [git](https://github.com/GuyTevet/MotionCLIP), [git](https://github.com/Totoro97/NeuS), [git](https://github.com/vchoutas/smplx), [git](https://github.com/nghorbani/human_body_prior)

  • [project](https://hongfz16.github.io/projects/AvatarCLIP.html)

  • [yt](https://youtu.be/-l2ZMeoASGY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1dfaecX7xF3nP6fyXc8XBljV5QY1lc1TR) | 15.05.2022 |
| Text2Mesh | Text-Driven Neural Stylization for Meshes |

  • [Oscar Michel](https://ojmichel.github.io/)
  • [Roi Bar-On](https://github.com/roibaron)
  • [Richard Liu](https://github.com/factoryofthesun)
  • [Sagie Benaim](https://sagiebenaim.github.io/)
  • [Rana Hanocka](http://people.cs.uchicago.edu/~ranahanocka/)

| [![](https://img.shields.io/github/stars/threedle/text2mesh?style=social)](https://github.com/threedle/text2mesh)

  • [CLIP](https://openai.com/blog/clip/)

  • [arxiv](https://arxiv.org/abs/2112.03221)

  • [kaggle](https://www.kaggle.com/code/neverix/text2mesh/notebook)

  • [project](https://threedle.github.io/text2mesh/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/threedle/text2mesh/blob/master/colab_demo.ipynb) | 14.05.2022 |
| T5 | Text-To-Text Transfer Transformer |

  • [Colin Raffel](https://colinraffel.com/)
  • [Noam Shazeer](https://scholar.google.com/citations?user=wsGvgA8AAAAJ)
  • [Adam Roberts](https://github.com/adarob)
  • [Katherine Lee](https://github.com/katelee168)
  • others
  • [Sharan Narang](https://github.com/sharannarang)
  • [Michael Matena](https://scholar.google.com/citations?user=rN_9vroAAAAJ)
  • [Yanqi Zhou](https://zhouyanqi.github.io)
  • [Wei Li](https://research.google/people/106528/)
  • [Peter J. Liu](https://scholar.google.com/citations?user=1EPxhywAAAAJ)

| [![](https://img.shields.io/github/stars/google-research/text-to-text-transfer-transformer?style=social)](https://github.com/google-research/text-to-text-transfer-transformer)

  • [arxiv](https://arxiv.org/abs/1910.10683)

  • [git](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow/transformer)

  • [tf](https://www.tensorflow.org/datasets)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) | 11.05.2022 |
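A minimal text-to-text sketch for the T5 entry above via `transformers`; the task prefix follows the paper's convention of casting every task as text in, text out.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tok("translate English to German: The house is wonderful.",
             return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```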
| XLS-R | Self-supervised Cross-lingual Speech Representation Learning at Scale |

  • [Arun Babu](https://github.com/arbabu123)
  • [Changhan Wang](https://www.changhan.me/)
  • [Andros Tjandra](https://github.com/androstj)
  • [Kushal Lakhotia](https://about.me/hikushalhere)
  • others
  • [Qiantong Xu](https://github.com/xuqiantong)
  • [Naman Goyal](https://github.com/ngoyal2707)
  • [Kritika Singh](https://scholar.google.com/citations?user=Ltk3SykAAAAJ)
  • [Patrick von Platen](https://github.com/patrickvonplaten)
  • [Yatharth Saraf](https://scholar.google.com/citations?user=KJTtNJwAAAAJ)
  • [Juan Pino](https://scholar.google.com/citations?user=weU_-4IAAAAJ)
  • [Alexei Baevski](https://github.com/alexeib)
  • [Alexis Conneau](https://github.com/aconneau)
  • [Michael Auli](https://github.com/michaelauli)

| [![](https://img.shields.io/github/stars/facebookresearch/fairseq?style=social)](https://github.com/facebookresearch/fairseq/blob/main/examples/wav2vec/xlsr/README.md)

  • [arxiv](https://arxiv.org/abs/2111.09296)

  • [blog post](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)

  • [git](https://github.com/facebookresearch/fairscale)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLS_R_on_Common_Voice.ipynb) | 10.05.2022 |
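A minimal setup sketch for the XLS-R entry above, mirroring the linked fine-tuning blog post: the pretrained encoder gets a fresh CTC head sized to a character vocabulary. The `pad_token_id` and `vocab_size` values below are placeholders; in practice they come from a tokenizer built on the target-language transcripts.

```python
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    ctc_loss_reduction="mean",
    pad_token_id=0,   # placeholder; use your tokenizer's pad id
    vocab_size=32,    # placeholder; use len(processor.tokenizer)
)
model.freeze_feature_encoder()  # the convolutional front-end stays frozen
```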
| MAGIC | Training-free framework, iMAge-Guided text generatIon with CLIP, for plugging in visual controls in the generation process and enabling LMs to perform multimodal tasks in a zero-shot manner |

  • [Yixuan Su](https://yxuansu.github.io/)
  • [Tian Lan](https://github.com/gmftbyGMFTBY)
  • [Yahui Liu](https://yhlleo.github.io/)
  • [Fangyu Liu](https://fangyuliu.me/about)
  • others
  • [Dani Yogatama](https://dyogatama.github.io/)
  • [Yan Wang](https://libertywing.github.io/yanwang.github.io/)
  • [Lingpeng Kong](https://www.cs.cmu.edu/~lingpenk/)
  • [Nigel Collier](https://sites.google.com/site/nhcollier/)

| [![](https://img.shields.io/github/stars/yxuansu/magic?style=social)](https://github.com/yxuansu/magic)

  • [arxiv](https://arxiv.org/abs/2205.02655)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1NDVkKpanbsaUwecHoRp_2kIpMztOFW25) | 02.05.2022 |
| DiffCSE | Unsupervised contrastive learning framework for learning sentence embeddings |

  • [Yung-Sung Chuang](https://people.csail.mit.edu/yungsung/)
  • [Rumen Dangovski](http://super-ms.mit.edu/rumen.html)
  • [Hongyin Luo](https://luohongyin.github.io/)
  • [Yang Zhang](https://mitibmwatsonailab.mit.edu/people/yang-zhang/)
  • others
  • [Shiyu Chang](https://code-terminator.github.io/)
  • [Marin Soljačić](http://www.mit.edu/~soljacic/marin.html)
  • [Shang-Wen Li](https://swdanielli.github.io/)
  • [Scott Wen-tau Yih](https://scottyih.org/)
  • [Yoon Kim](https://people.csail.mit.edu/yoonkim/)
  • [James Glass](http://groups.csail.mit.edu/sls/people/glass.shtml)

| [![](https://img.shields.io/github/stars/voidism/diffcse?style=social)](https://github.com/voidism/diffcse)

  • [arxiv](https://arxiv.org/abs/2204.10298), [arxiv](https://arxiv.org/abs/2104.08821), [arxiv](https://arxiv.org/abs/2111.00899)

  • [git](https://github.com/princeton-nlp/SimCSE)

  • [hf](https://huggingface.co/voidism)

  • [twitter](https://twitter.com/YungSungChuang/status/1517518077902000129)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb) | 24.04.2022 |
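A minimal embedding sketch for the DiffCSE entry above; the checkpoint id is taken from the authors' Hugging Face page linked in the entry, and pooling via the `[CLS]` token is an assumption that mirrors SimCSE-style usage.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "voidism/diffcse-bert-base-uncased-sts"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

sents = ["A man is playing a guitar.", "Someone plays an instrument."]
batch = tok(sents, padding=True, return_tensors="pt")
with torch.no_grad():
    emb = model(**batch).last_hidden_state[:, 0]   # [CLS] sentence embeddings
print(float(torch.cosine_similarity(emb[0], emb[1], dim=0)))
```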
| ViDT+ | An Extendable, Efficient and Effective Transformer-based Object Detector |

  • [Hwanjun Song](https://songhwanjun.github.io/)
  • [Deqing Sun](https://deqings.github.io/)
  • [Sanghyuk Chun](https://sanghyukchun.github.io/home/)
  • [Varun Jampani](https://varunjampani.github.io/)
  • others
  • [Dongyoon Han](https://sites.google.com/site/dyhan0920/)
  • [Byeongho Heo](https://sites.google.com/view/byeongho-heo/home)
  • [Wonjae Kim](https://wonjae.kim/)
  • [Ming-Hsuan Yang](http://faculty.ucmerced.edu/mhyang/)

| [![](https://img.shields.io/github/stars/naver-ai/vidt?style=social)](https://github.com/naver-ai/vidt/tree/vidt-plus)

  • [arxiv](https://arxiv.org/abs/2204.07962), [arxiv](https://arxiv.org/abs/2110.03921)

  • [git](https://github.com/fundamentalvision/Deformable-DETR), [git](https://github.com/EherSenaw/ViDT_colab)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/EherSenaw/ViDT_colab/blob/main/vidt_colab.ipynb) | 20.04.2022 |
| BasicVSR++ | Redesigns BasicVSR by proposing second-order grid propagation and flow-guided deformable alignment |

  • [Kelvin Chan](https://ckkelvinchan.github.io/)
  • [Shangchen Zhou](https://shangchenzhou.com/)
  • [Xiangyu Xu](https://xuxy09.github.io/)
  • [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.00588)](https://doi.org/10.1109/CVPR52688.2022.00588) [![](https://img.shields.io/github/stars/ckkelvinchan/BasicVSR_PlusPlus?style=social)](https://github.com/ckkelvinchan/BasicVSR_PlusPlus)

  • [arxiv](https://arxiv.org/abs/2104.13371)

  • [git](https://github.com/ckkelvinchan/BasicVSR-IconVSR), [git](https://github.com/ckkelvinchan/offset-fidelity-loss)

  • [project](https://ckkelvinchan.github.io/projects/BasicVSR++/)

  • [yt](https://youtu.be/iIDml09CUc4)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1I0kZMM0DQyb4ueHZw5si8fMnRCJ_eUX3) | 18.04.2022 |
| NAFNet | Nonlinear Activation Free Network for Image Restoration |

  • [Liangyu Chen](https://github.com/mayorx)
  • [Xiaojie Chu](https://github.com/chuxiaojie)
  • [Xiangyu Zhang](https://scholar.google.com/citations?user=yuB-cfoAAAAJ)
  • [Jian Sun](http://www.jiansun.org/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-031-20071-7_2)](https://doi.org/10.1007/978-3-031-20071-7_2) [![](https://img.shields.io/github/stars/megvii-research/NAFNet?style=social)](https://github.com/megvii-research/NAFNet)

  • [arxiv](https://arxiv.org/abs/2204.04676), [arxiv](https://arxiv.org/abs/2204.08714)

  • [pwc](https://paperswithcode.com/sota/image-deblurring-on-gopro?p=simple-baselines-for-image-restoration), [pwc](https://paperswithcode.com/sota/image-denoising-on-sidd?p=simple-baselines-for-image-restoration)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1dkO5AyktmBoWwxBwoKFUurIDn0m4qDXT) | 15.04.2022 |
| Panini-Net | GAN Prior based Degradation-Aware Feature Interpolation for Face Restoration |

  • [Yinhuai Wang](https://github.com/wyhuai)
  • [Yujie Hu](https://villa.jianzhang.tech/people/yujie-hu/)
  • [Jian Zhang](http://jianzhang.tech/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1609/aaai.v36i3.20159)](https://doi.org/10.1609/aaai.v36i3.20159) [![](https://img.shields.io/github/stars/jianzhangcs/panini?style=social)](https://github.com/jianzhangcs/panini)

  • [arxiv](https://arxiv.org/abs/2203.08444)

  • [git](https://github.com/NVlabs/ffhq-dataset), [git](https://github.com/tkarras/progressive_growing_of_gans)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/GeeveGeorge/Panini-Net-Colab/blob/main/PaniniNet_Working.ipynb) | 13.04.2022 |
| E2FGVI | An end-to-end framework for flow-guided video inpainting with three elaborately designed trainable modules: flow completion, feature propagation, and content hallucination |

  • [Zhen Li](https://paper99.github.io/)
  • [Cheng-Ze Lu](https://github.com/LGYoung)
  • [Jianhua Qin](https://scholar.google.com/citations?&user=TAr7TU4AAAAJ)
  • [Chun-Le Guo](https://scholar.google.com/citations?user=RZLYwR0AAAAJ)
  • [Ming-Ming Cheng](https://mmcheng.net/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.01704)](https://doi.org/10.1109/CVPR52688.2022.01704) [![](https://img.shields.io/github/stars/MCG-NKU/E2FGVI?style=social)](https://github.com/MCG-NKU/E2FGVI)

  • [arxiv](https://arxiv.org/abs/2204.02663)

  • [data](https://competitions.codalab.org/competitions/19544#participate-get-data), [data](https://data.vision.ee.ethz.ch/csergi/share/davis/DAVIS-2017-trainval-480p.zip)

  • [git](https://github.com/researchmm/STTN), [git](https://github.com/microsoft/Focal-Transformer), [git](https://github.com/ruiliu-ai/FuseFormer), [git](https://github.com/phoenix104104/fast_blind_video_consistency#evaluation)

  • [medium](https://medium.com/mlearning-ai/end-to-end-framework-for-flow-guided-video-inpainting-c5e2d8b61d20)

  • [yt](https://youtu.be/N--qC3T2wc4), [yt](https://youtu.be/3eH3Fm6gOFk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/12rwY2gtG8jVWlNx9pjmmM8uGmh5ue18G) | 06.04.2022 |
| LDM | High-Resolution Image Synthesis with Latent Diffusion Models |

  • [Robin Rombach](https://github.com/rromb)
  • [Andreas Blattmann](https://github.com/ablattmann)
  • [Dominik Lorenz](https://github.com/qp-qp)
  • [Patrick Esser](https://github.com/pesser)
  • [Björn Ommer](https://ommer-lab.com/people/ommer/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.01042)](https://doi.org/10.1109/CVPR52688.2022.01042) [![](https://img.shields.io/github/stars/CompVis/latent-diffusion?style=social)](https://github.com/CompVis/latent-diffusion)

  • [arxiv](https://arxiv.org/abs/2112.10752), [arxiv](https://arxiv.org/abs/2202.09778), [arxiv](https://arxiv.org/abs/2111.02114)

  • [git](https://github.com/fyu/lsun), [git](https://github.com/openai/guided-diffusion), [git](https://github.com/lucidrains/denoising-diffusion-pytorch), [git](https://github.com/lucidrains/x-transformers)

  • [hf](https://huggingface.co/spaces/multimodalart/latentdiffusion)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/CompVis/latent-diffusion/blob/master/scripts/latent_imagenet_diffusion.ipynb) | 04.04.2022 |
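A minimal text-to-image sketch for the LDM entry above via the `diffusers` port; the `CompVis/ldm-text2im-large-256` checkpoint id and the sampling settings are assumptions based on the model card, not the repo's own scripts.

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
image = pipe("a painting of a squirrel eating a burger",
             num_inference_steps=50, guidance_scale=6.0).images[0]
image.save("squirrel.png")
```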
| GP-UNIT | Novel framework, Generative Prior-guided UNsupervised Image-to-image Translation, to improve the overall quality and applicability of the translation algorithm |

  • [Shuai Yang](https://williamyang1991.github.io/)
  • [Liming Jiang](https://liming-jiang.com/)
  • [Ziwei Liu](https://liuziwei7.github.io/)
  • [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/)

| [![](https://img.shields.io/github/stars/williamyang1991/GP-UNIT?style=social)](https://github.com/williamyang1991/GP-UNIT)

  • [ImageNet](https://image-net.org/download.php)

  • [arxiv](https://arxiv.org/abs/2204.03641)

  • [git](https://github.com/clovaai/stargan-v2#datasets-and-pre-trained-networks), [git](https://github.com/switchablenorms/CelebAMask-HQ), [git](https://github.com/NVlabs/metfaces-dataset), [git](https://github.com/TreB1eN/InsightFace_Pytorch), [git](https://github.com/NVlabs/SPADE), [git](https://github.com/nvlabs/imaginaire), [git](https://doi.org/10.1109/CVPR52688.2022.01779)

  • [project](https://www.mmlab-ntu.com/project/gpunit/)

  • [yt](https://youtu.be/dDApWs_oDrM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/williamyang1991/GP-UNIT/blob/main/notebooks/inference_playground.ipynb) | 02.04.2022 |
| DualStyleGAN | Exemplar-based high-resolution portrait style transfer via a novel DualStyleGAN with flexible control of dual styles of the original face domain and the extended artistic portrait domain |

  • [Shuai Yang](https://williamyang1991.github.io/)
  • [Liming Jiang](https://liming-jiang.com/)
  • [Ziwei Liu](https://liuziwei7.github.io/)
  • [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.00754)](https://doi.org/10.1109/CVPR52688.2022.00754) [![](https://img.shields.io/github/stars/williamyang1991/DualStyleGAN?style=social)](https://github.com/williamyang1991/DualStyleGAN)

  • [arxiv](https://arxiv.org/abs/2203.13248)

  • [data](https://cs.nju.edu.cn/rl/WebCaricature.htm), [data](https://www.gwern.net/Crops#danbooru2019-portraits)

  • [git](https://github.com/lowfuel/progrock-stable), [git](https://github.com/rosinality/stylegan2-pytorch), [git](https://github.com/TreB1eN/InsightFace_Pytorch)

  • [hf](https://huggingface.co/spaces/Gradio-Blocks/DualStyleGAN), [hf](https://huggingface.co/spaces/hysts/DualStyleGAN)

  • [project](https://www.mmlab-ntu.com/project/dualstylegan/)

  • [yt](https://youtu.be/scZTu77jixI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/williamyang1991/DualStyleGAN/blob/master/notebooks/inference_playground.ipynb) | 24.03.2022 |
| CLIPasso | Semantically-Aware Object Sketching |

  • [Yael Vinker](https://yaelvi116.wixsite.com/mysite)
  • [Ehsan Pajouheshgar](https://pajouheshgar.github.io/)
  • [Jessica Y. Bo](https://jessica-bo.github.io/)
  • [Roman Bachmann](https://roman-bachmann.github.io/)
  • others
  • [Amit Bermano](https://www.cs.tau.ac.il/~amberman/)
  • [Daniel Cohen-Or](https://danielcohenor.com/)
  • [Amir Zamir](https://vilab.epfl.ch/zamir/)
  • [Ariel Shamir](https://faculty.runi.ac.il/arik/site/index.asp)

| [![](https://img.shields.io/github/stars/yael-vinker/CLIPasso?style=social)](https://github.com/yael-vinker/CLIPasso)

  • [arxiv](https://arxiv.org/abs/2202.05822), [arxiv](https://arxiv.org/abs/2106.14843)

  • [demo](https://replicate.com/yael-vinker/clipasso)

  • [git](https://github.com/BachiLi/diffvg)

  • [project](https://clipasso.github.io/clipasso/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/yael-vinker/CLIPasso/blob/main/CLIPasso.ipynb) | 21.03.2022 |
| StyleSDF | A high resolution, 3D-consistent image and shape generation technique |

  • [Roy Or-El](https://homes.cs.washington.edu/~royorel/)
  • [Xuan Luo](https://roxanneluo.github.io/)
  • [Mengyi Shan](https://shanmy.github.io/)
  • [Eli Shechtman](https://research.adobe.com/person/eli-shechtman/)
  • others
  • [Jeong Joon Park](https://jjparkcv.github.io/)
  • [Ira Kemelmacher-Shlizerman](https://www.irakemelmacher.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.01314)](https://doi.org/10.1109/CVPR52688.2022.01314) [![](https://img.shields.io/github/stars/royorel/StyleSDF?style=social)](https://github.com/royorel/StyleSDF)

  • [arxiv](https://arxiv.org/abs/2112.11427)

  • [git](https://github.com/rosinality/stylegan2-pytorch), [git](https://github.com/yenchenlin/nerf-pytorch)

  • [hf](https://huggingface.co/spaces/SerdarHelli/StyleSDF-3D)

  • [project](https://stylesdf.github.io/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/royorel/StyleSDF/blob/main/StyleSDF_demo.ipynb) | 05.03.2022 |
| Disentangled Lifespan Face Synthesis | An LFS model is proposed to disentangle key face characteristics, including shape, texture, and identity, so that the shape and texture transformations unique to each age can be modeled effectively |

  • [Sen He](https://senhe.github.io/)
  • [Wentong Liao](https://www.tnt.uni-hannover.de/en/staff/liao/)
  • [Michael Yang](https://sites.google.com/site/michaelyingyang/)
  • [Yi-Zhe Song](http://personal.ee.surrey.ac.uk/Personal/Y.Song/)
  • others
  • [Bodo Rosenhahn](https://scholar.google.com/citations?user=qq3TxtcAAAAJ)
  • [Tao Xiang](http://personal.ee.surrey.ac.uk/Personal/T.Xiang/index.html)

| [![](https://img.shields.io/github/stars/SenHe/DLFS?style=social)](https://github.com/SenHe/DLFS)

  • [arxiv](https://arxiv.org/abs/2108.02874)

  • [project](https://senhe.github.io/projects/iccv_2021_lifespan_face/)

  • [yt](https://www.youtube.com/watch?v=uklX03ns0m0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1fgVAoxCSaqPkj0rUK4RmBh7GTQRqLNpE) | 22.02.2022 |
| ClipCap | CLIP Prefix for Image Captioning |

  • [Ron Mokady](https://rmokady.github.io/)
  • [Amir Hertz](https://github.com/amirhertz)
  • [Amit Bermano](https://www.cs.tau.ac.il/~amberman/)

| [![](https://img.shields.io/github/stars/rmokady/CLIP_prefix_caption?style=social)](https://github.com/rmokady/CLIP_prefix_caption)

  • [arxiv](https://arxiv.org/abs/2111.09734)

  • [data](https://cocodataset.org/)

  • [hf](https://huggingface.co/spaces/akhaliq/CLIP_prefix_captioning)

  • [medium](https://medium.com/@uppalamukesh/clipcap-clip-prefix-for-image-captioning-3970c73573bc)

  • [yt](https://youtu.be/VQDrmuccWDo)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/rmokady/CLIP_prefix_caption/blob/main/notebooks/clip_prefix_captioning_inference.ipynb#scrollTo=glBzYsgIwhwF) | 15.02.2022 |
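A conceptual sketch of the ClipCap idea above (not the authors' code): a CLIP image embedding is mapped by a small network to k "prefix" token embeddings that condition a frozen GPT-2. The linear `mapper` below is an untrained stand-in, so its outputs are meaningless until it is fitted as in the paper.

```python
import torch
import torch.nn as nn
from PIL import Image
from transformers import CLIPModel, CLIPProcessor, GPT2LMHeadModel

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")

k, d = 10, gpt2.config.n_embd
mapper = nn.Linear(512, k * d)   # untrained stand-in for the mapping network

pixels = proc(images=Image.open("photo.jpg"), return_tensors="pt").pixel_values
with torch.no_grad():
    img_emb = clip.get_image_features(pixel_values=pixels)  # (1, 512)
    prefix = mapper(img_emb).view(1, k, d)                  # k prefix embeddings
    logits = gpt2(inputs_embeds=prefix).logits              # condition GPT-2
print(logits.shape)  # (1, k, vocab) -- caption tokens are decoded from here
```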
| ROMP | Monocular, One-stage, Regression of Multiple 3D People |

  • [Yu Sun](https://www.yusun.work/)
  • [Qian Bao](https://github.com/for-code0216)
  • [Wu Liu](https://faculty.ustc.edu.cn/liuwu)
  • [Yili Fu](https://ieeexplore.ieee.org/author/37286601800)
  • others
  • [Michael Black](https://ps.is.mpg.de/~black)
  • [Tao Mei](https://taomei.me/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV48922.2021.01099)](https://doi.org/10.1109/ICCV48922.2021.01099) [![](https://img.shields.io/github/stars/Arthur151/ROMP?style=social)](https://github.com/Arthur151/ROMP)

  • [arxiv](https://arxiv.org/abs/2008.12272), [arxiv](https://arxiv.org/abs/2112.08274), [arxiv](http://arxiv.org/abs/2306.02850)

  • [git](https://github.com/Arthur151/Relative_Human), [git](https://github.com/Arthur151/DynaCam), [git](https://github.com/yanchxx/MoPA)

  • [yt](https://youtu.be/hunBPJxnyBU), [yt](https://youtu.be/Q62fj_6AxRI), [yt](https://youtu.be/l8aLHDXWQRw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1oz9E6uIbj4udOPZvA1Zi9pFx0SWH_UXg) | 11.02.2022 |
| Mask2Former | Masked-attention Mask Transformer for Universal Image Segmentation |

  • [Bowen Cheng](https://bowenc0221.github.io/)
  • [Ishan Misra](https://imisra.github.io/)
  • [Alexander Schwing](https://alexander-schwing.de/)
  • [Alexander Kirillov](https://alexander-kirillov.github.io/)
  • [Rohit Girdhar](https://rohitgirdhar.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.00135)](https://doi.org/10.1109/CVPR52688.2022.00135) [![](https://img.shields.io/github/stars/facebookresearch/Mask2Former?style=social)](https://github.com/facebookresearch/Mask2Former)

  • [arxiv](https://arxiv.org/abs/2112.01527), [arxiv](https://arxiv.org/abs/2112.10764)

  • [demo](https://replicate.com/facebookresearch/mask2former)

  • [git](https://github.com/facebookresearch/MaskFormer)

  • [hf](https://huggingface.co/spaces/akhaliq/Mask2Former)

  • [project](https://bowenc0221.github.io/mask2former/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1uIWE5KbGFSjrxey2aRd5pWkKNY1_SaNq) | 09.02.2022 |
| JoJoGAN | One Shot Face Stylization |

  • [Min Jin Chong](https://mchong6.github.io/)
  • [David Forsyth](http://luthuli.cs.uiuc.edu/~daf/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-031-19787-1_8)](https://doi.org/10.1007/978-3-031-19787-1_8) [![](https://img.shields.io/github/stars/mchong6/JoJoGAN?style=social)](https://github.com/mchong6/JoJoGAN)

  • [arxiv](https://arxiv.org/abs/2112.11641)

  • [git](https://github.com/rosinality/stylegan2-pytorch), [git](https://github.com/replicate/cog)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/mchong6/JoJoGAN/blob/master/stylize.ipynb) | 02.02.2022 |
| Pose with Style | Detail-Preserving Pose-Guided Image Synthesis with Conditional StyleGAN |

  • [Badour AlBahar](https://badouralbahar.github.io/)
  • [Jingwan Lu](https://research.adobe.com/person/jingwan-lu/)
  • [Jimei Yang](https://github.com/jimeiyang)
  • [Zhixin Shu](https://zhixinshu.github.io/)
  • others
  • [Eli Shechtman](https://research.adobe.com/person/eli-shechtman/)
  • [Jia-Bin Huang](https://jbhuang0604.github.io/)

| [![](https://img.shields.io/github/stars/BadourAlBahar/pose-with-style?style=social)](https://github.com/BadourAlBahar/pose-with-style)

  • [arxiv](https://arxiv.org/abs/2109.06166)

  • [git](https://github.com/rosinality/stylegan2-pytorch)

  • [project](https://pose-with-style.github.io/)

  • [yt](https://youtu.be/d_ETeAVLilw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tg-bomze/collection-of-notebooks/blob/master/HomeStylist.ipynb) | 19.01.2022 |
| ConvNeXt | A pure ConvNet model constructed entirely from standard ConvNet modules |

  • [Zhuang Liu](https://liuzhuang13.github.io/)
  • [Hanzi Mao](https://hanzimao.me/)
  • [Chao-Yuan Wu](https://chaoyuan.org/)
  • [Christoph Feichtenhofer](https://feichtenhofer.github.io/)
  • others
  • [Trevor Darrell](https://people.eecs.berkeley.edu/~trevor/)
  • [Saining Xie](https://www.sainingxie.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.01167)](https://doi.org/10.1109/CVPR52688.2022.01167) [![](https://img.shields.io/github/stars/facebookresearch/ConvNeXt?style=social)](https://github.com/facebookresearch/ConvNeXt)

  • [arxiv](https://arxiv.org/abs/2201.03545)

  • [git](https://github.com/rwightman/pytorch-image-models), [git](https://github.com/facebookresearch/deit), [git](https://github.com/microsoft/unilm/tree/master/beit)

  • [hf](https://huggingface.co/spaces/akhaliq/convnext)

  • [yt](https://youtu.be/QzCjXqFnWPE), [yt](https://youtu.be/idiIllIQOfU), [yt](https://youtu.be/QqejV0LNDHA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1CBYTIZ4tBMsVL5cqu9N_-Q3TBprqsfEO) | 19.01.2022 |
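A minimal classification sketch for the ConvNeXt entry above, assuming the pretrained weights distributed through `timm` (linked in the entry); the random tensor stands in for a normalized image batch.

```python
import timm
import torch

model = timm.create_model("convnext_tiny", pretrained=True).eval()
x = torch.randn(1, 3, 224, 224)        # stand-in for a normalized image batch
with torch.no_grad():
    probs = model(x).softmax(dim=-1)
print(probs.topk(5).indices)           # top-5 ImageNet class ids
```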
| diffsort | Differentiable Sorting Networks |

  • [Felix Petersen](http://petersen.ai/)
  • [Christian Borgelt](https://borgelt.net/)
  • [Hilde Kuehne](https://hildekuehne.github.io/)
  • [Oliver Deussen](https://www.cgmi.uni-konstanz.de/personen/prof-dr-oliver-deussen/)

| [![](https://img.shields.io/github/stars/Felix-Petersen/diffsort?style=social)](https://github.com/Felix-Petersen/diffsort)

  • [arxiv](https://arxiv.org/abs/2105.04019), [arxiv](https://arxiv.org/abs/2203.09630)

  • [yt](https://youtu.be/Rl-sFaE1z4M)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1q0TZFFYB9FlOJYWKt0_7ZaXQT190anhm) | 17.01.2022 |
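An illustrative sketch of the core idea behind the diffsort entry above (not the authors' library): an odd-even transposition sorting network whose comparators are relaxed with a sigmoid, so gradients flow through the sorting operation.

```python
import torch

def soft_sort(x, steepness=10.0):
    """Odd-even transposition network with sigmoid-relaxed comparators."""
    n = x.shape[-1]
    for rnd in range(n):                 # n alternating odd/even passes
        cols = list(x.unbind(dim=-1))    # functional update, autograd-safe
        for i in range(rnd % 2, n - 1, 2):
            a, b = cols[i], cols[i + 1]
            swap = torch.sigmoid((a - b) * steepness)  # ~1 when a > b
            cols[i] = swap * b + (1.0 - swap) * a      # soft minimum
            cols[i + 1] = swap * a + (1.0 - swap) * b  # soft maximum
        x = torch.stack(cols, dim=-1)
    return x

v = torch.tensor([3.0, 1.0, 2.0], requires_grad=True)
out = soft_sort(v)
out.sum().backward()          # gradients flow through the sort
print(out, v.grad)
```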
| Taming Transformers for High-Resolution Image Synthesis | We combine the efficiency of convolutional approaches with the expressivity of transformers by introducing a convolutional VQGAN, which learns a codebook of context-rich visual parts, whose composition is modeled with an autoregressive transformer |

  • [Patrick Esser](https://github.com/pesser)
  • [Robin Rombach](https://github.com/rromb)
  • [Björn Ommer](https://ommer-lab.com/people/ommer/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.01268)](https://doi.org/10.1109/CVPR46437.2021.01268) [![](https://img.shields.io/github/stars/CompVis/taming-transformers?style=social)](https://github.com/CompVis/taming-transformers)

  • [arxiv](https://arxiv.org/abs/2012.09841)

  • [project](https://compvis.github.io/taming-transformers/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/taming-transformers.ipynb) | 13.01.2022 |
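An illustrative sketch of the vector-quantization step at the heart of the VQGAN entry above (simplified; the real model adds the encoder/decoder, perceptual and adversarial losses, and a transformer over the code indices). The codebook size and dimensions are arbitrary.

```python
import torch
import torch.nn.functional as F

codebook = torch.nn.Embedding(1024, 256)        # 1024 codes of dimension 256

def quantize(z):
    """z: (B, N, 256) encoder features -> nearest codebook vectors."""
    dist = (z.pow(2).sum(-1, keepdim=True)
            - 2 * z @ codebook.weight.T
            + codebook.weight.pow(2).sum(-1))   # squared L2 to every code
    idx = dist.argmin(dim=-1)                   # discrete code indices
    z_q = codebook(idx)
    commit = F.mse_loss(z, z_q.detach())        # commitment loss term
    z_q = z + (z_q - z).detach()                # straight-through estimator
    return z_q, idx, commit

z = torch.randn(2, 16 * 16, 256, requires_grad=True)
z_q, idx, commit = quantize(z)
print(z_q.shape, idx.shape, float(commit))
```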
| RealBasicVSR | Investigating Tradeoffs in Real-World Video Super-Resolution |

  • [Kelvin Chan](https://ckkelvinchan.github.io/)
  • [Shangchen Zhou](https://shangchenzhou.com/)
  • [Xiangyu Xu](https://xuxy09.github.io/)
  • [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.00587)](https://doi.org/10.1109/CVPR52688.2022.00587) [![](https://img.shields.io/github/stars/ckkelvinchan/RealBasicVSR?style=social)](https://github.com/ckkelvinchan/RealBasicVSR)

  • [arxiv](https://arxiv.org/abs/2111.12704)

  • [hf](https://huggingface.co/spaces/akhaliq/RealBasicVSR)

  • [reddit](https://www.reddit.com/r/MachineLearning/comments/tc8p70/rp_investigating_tradeoffs_in_realworld_video/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1JzWRUR34hpKvtCHm84IGx6nv35LCv20J) | 25.12.2021 |
| GLIDE | Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models |

  • [Alex Nichol](https://aqnichol.com/)
  • [Prafulla Dhariwal](https://github.com/prafullasd)
  • [Aditya Ramesh](http://adityaramesh.com/)
  • [Pranav Shyam](https://github.com/pranv)
  • others
  • [Pamela Mishkin](https://manlikemishap.github.io/)
  • [Bob McGrew](https://github.com/bmcgrew)
  • [Ilya Sutskever](http://www.cs.utoronto.ca/~ilya/)
  • [Mark Chen](https://scholar.google.com/citations?user=5fU-QMwAAAAJ)

| [![](https://img.shields.io/github/stars/openai/glide-text2im?style=social)](https://github.com/openai/glide-text2im)

  • [arxiv](https://arxiv.org/abs/2112.10741)

  • [yt](https://youtu.be/ItKi3h7IY2o)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/openai/glide-text2im/blob/master/notebooks/inpaint.ipynb) | 22.12.2021 |
| Nerfies | First method capable of photorealistically reconstructing deformable scenes using photos/videos captured casually from mobile phones |

  • [Keunhong Park](https://keunhong.com/)
  • [Utkarsh Sinha](https://utkarshsinha.com/)
  • [Jon Barron](https://jonbarron.info/)
  • [Sofien Bouaziz](http://sofienbouaziz.com/)
  • others
  • [Dan Goldman](https://www.danbgoldman.com/home/)
  • [Steve Seitz](https://www.smseitz.com/)
  • [Ricardo Martin-Brualla](https://ricardomartinbrualla.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV48922.2021.00581)](https://doi.org/10.1109/ICCV48922.2021.00581) [![](https://img.shields.io/github/stars/google/nerfies?style=social)](https://github.com/google/nerfies)

  • [arxiv](https://arxiv.org/abs/2011.12948)

  • [git](https://github.com/google-research/google-research/tree/master/jaxnerf)

  • [project](https://nerfies.github.io/)

  • [reddit](https://www.reddit.com/r/photogrammetry/comments/k1i0ct/deformable_neural_radiance_fields_nerfies/)

  • [yt](https://youtu.be/MrKrnHhk8IA), [yt](https://youtu.be/IDMiMKWucaI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/nerfies/blob/main/notebooks/Nerfies_Capture_Processing.ipynb) | 06.12.2021 |
| HyperStyle | A hypernetwork that learns to modulate StyleGAN's weights to faithfully express a given image in editable regions of the latent space |

  • [Yuval Alaluf](https://yuval-alaluf.github.io/)
  • [Omer Tov](https://github.com/omertov)
  • [Ron Mokady](https://rmokady.github.io/)
  • [Rinon Gal](https://rinongal.github.io/)
  • [Amit Bermano](https://www.cs.tau.ac.il/~amberman/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.01796)](https://doi.org/10.1109/CVPR52688.2022.01796) [![](https://img.shields.io/github/stars/yuval-alaluf/hyperstyle?style=social)](https://github.com/yuval-alaluf/hyperstyle)

  • [arxiv](https://arxiv.org/abs/2111.15666), [arxiv](https://arxiv.org/abs/1904.03189), [arxiv](https://arxiv.org/abs/2012.09036), [arxiv](https://arxiv.org/abs/2005.07727)

  • [data](https://ai.stanford.edu/~jkrause/cars/car_dataset.html)

  • [git](https://github.com/NVlabs/ffhq-dataset), [git](https://github.com/clovaai/stargan-v2), [git](https://github.com/rosinality/stylegan2-pytorch), [git](https://github.com/TreB1eN/InsightFace_Pytorch), [git](https://github.com/HuangYG123/CurricularFace), [git](https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer), [git](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py), [git](https://github.com/dvschultz/stylegan2-ada-pytorch)

  • [project](https://yuval-alaluf.github.io/hyperstyle/)

  • [yt](https://youtu.be/_sbXmLY2jMw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/yuval-alaluf/hyperstyle/blob/master/notebooks/inference_playground.ipynb) | 03.12.2021 |
| encoder4editing | Designing an Encoder for StyleGAN Image Manipulation |

  • [Omer Tov](https://github.com/omertov)
  • [Yuval Alaluf](https://yuval-alaluf.github.io/)
  • [Yotam Nitzan](https://yotamnitzan.github.io/)
  • [Or Patashnik](https://orpatashnik.github.io/)
  • [Daniel Cohen-Or](https://danielcohenor.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3450626.3459838)](https://doi.org/10.1145/3450626.3459838) [![](https://img.shields.io/github/stars/omertov/encoder4editing?style=social)](https://github.com/omertov/encoder4editing)

  • [arxiv](https://arxiv.org/abs/2102.02766)

  • [git](https://github.com/eladrich/pixel2style2pixel)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/omertov/encoder4editing/blob/master/notebooks/inference_playground.ipynb) | 02.12.2021 |
| StyleCariGAN | Caricature Generation via StyleGAN Feature Map Modulation |

  • [Wonjong Jang](https://wonjongg.github.io/)
  • [Gwangjin Ju](https://github.com/jugwangjin)
  • [Yucheol Jung](https://ycjung.info/)
  • [Jiaolong Yang](https://jlyang.org/)
  • others
  • [Xin Tong](https://www.microsoft.com/en-us/research/people/xtong/)
  • [Seungyong Lee](https://scholar.google.com/citations?user=yGPH-nAAAAAJ)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3450626.3459860)](https://doi.org/10.1145/3450626.3459860) [![](https://img.shields.io/github/stars/wonjongg/StyleCariGAN?style=social)](https://github.com/wonjongg/StyleCariGAN)

  • [arxiv](https://arxiv.org/abs/2107.04331)

  • [git](https://github.com/NVlabs/stylegan2), [git](https://github.com/rosinality/stylegan2-pytorch)

  • [project](https://wonjongg.github.io/StyleCariGAN/)

  • [yt](https://www.youtube.com/watch?v=kpHbGOlI-BU)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1HDRQGm7pvC9mAb6Lktoft_SmY9sCq_Qg) | 30.11.2021 |
| CartoonGAN | The implementation of the cartoon GAN model with PyTorch | [Tobias Sunderdiek](https://github.com/TobiasSunderdiek) | [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR.2018.00986)](https://doi.org/10.1109/CVPR.2018.00986)

  • [kaggle](https://www.kaggle.com/alamson/safebooru)

  • [project](https://tobiassunderdiek.github.io/cartoon-gan/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/TobiasSunderdiek/cartoon-gan/blob/master/CartoonGAN.ipynb) | 24.11.2021 |
| SimSwap | An efficient framework, called Simple Swap, aiming for generalized and high-fidelity face swapping |

  • [Xuanhong Chen](https://github.com/neuralchen)
  • [Bingbing Ni](https://scholar.google.com/citations?user=eUbmKwYAAAAJ)
  • [Yanhao Ge](https://scholar.google.com/citations?user=h6tuBAcAAAAJ)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3394171.3413630)](https://doi.org/10.1145/3394171.3413630) [![](https://img.shields.io/github/stars/neuralchen/SimSwap?style=social)](https://github.com/neuralchen/SimSwap)

  • [arxiv](https://arxiv.org/abs/2106.06340)

  • [git](https://github.com/deepinsight/insightface)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/neuralchen/SimSwap/blob/master/SimSwap%20colab.ipynb) | 24.11.2021 |
| RVM | Robust High-Resolution Video Matting with Temporal Guidance: a robust, real-time, high-resolution human video matting method that achieves new state-of-the-art performance; a torch.hub sketch follows this entry |

  • [Shanchuan Lin](https://github.com/PeterL1n)
  • [Linjie Yang](https://sites.google.com/site/linjieyang89/)
  • [Imran Saleemi](http://www.cs.ucf.edu/~imran/)
  • [Soumyadip Sengupta](https://homes.cs.washington.edu/~soumya91/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/WACV51458.2022.00319)](https://doi.org/10.1109/WACV51458.2022.00319) [![](https://img.shields.io/github/stars/PeterL1n/RobustVideoMatting?style=social)](https://github.com/PeterL1n/RobustVideoMatting)

  • [arxiv](https://arxiv.org/abs/2108.11515)

  • [git](https://github.com/NVIDIA/VideoProcessingFramework), [git](https://github.com/FeiGeChuanShu/ncnn_Android_RobustVideoMatting)

  • [project](https://peterl1n.github.io/RobustVideoMatting)

  • [reddit](https://www.reddit.com/r/MachineLearning/comments/pdbpmg/r_robust_highresolution_video_matting_with/)

  • [yt](https://youtu.be/Jvzltozpbpk), [yt](https://youtu.be/Ay-mGCEYEzM), [yt](https://youtu.be/VL-0K6HjhvQ), [yt](https://youtu.be/Jhuf6M_VrBI), [yt](https://youtu.be/_oN9yyRi3HY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/10z-pNKRnVNsp0Lq9tH1J_XPZ7CBC_uHm) | 24.11.2021 |
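A minimal sketch of loading RVM through torch.hub, following the entry-point names documented in the RobustVideoMatting README; the file names are placeholders and the converter arguments shown are assumptions, so check the repo for the full signature.

```python
import torch

# Matting network and the convenience video converter, loaded via torch.hub
# (entry-point names as documented in the RobustVideoMatting README).
model = torch.hub.load("PeterL1n/RobustVideoMatting", "mobilenetv3")
convert_video = torch.hub.load("PeterL1n/RobustVideoMatting", "converter")

convert_video(
    model,
    input_source="input.mp4",         # placeholder input clip
    output_composition="output.mp4",  # matted result composited on a background
    downsample_ratio=None,            # None = auto; reduce for 4K inputs
)
```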
| AnimeGANv2 | An improved version of AnimeGAN: it prevents the generation of high-frequency artifacts by simply changing the normalization of features in the network; see the torch.hub sketch after this entry |

  • [Xin Chen](https://github.com/TachibanaYoshino)
  • [Gang Liu](https://github.com/lg0061408)
  • [bryandlee](https://github.com/bryandlee)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-981-15-5577-0_18)](https://doi.org/10.1007/978-981-15-5577-0_18) [![](https://img.shields.io/github/stars/bryandlee/animegan2-pytorch?style=social)](https://github.com/bryandlee/animegan2-pytorch)

  • [git](https://github.com/TachibanaYoshino/AnimeGANv2), [git](https://github.com/TachibanaYoshino/AnimeGAN)

  • [hf](https://huggingface.co/spaces/akhaliq/AnimeGANv2)

  • [project](https://tachibanayoshino.github.io/AnimeGANv2/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/bryandlee/animegan2-pytorch/blob/master/colab_demo.ipynb) | 17.11.2021 |
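A minimal sketch of the torch.hub interface documented in the animegan2-pytorch README; the weight name `face_paint_512_v2` comes from that README, and the input file is a placeholder.

```python
import torch
from PIL import Image

# Generator weights and helper, loaded via the repo's torch.hub entry points.
model = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator",
                       pretrained="face_paint_512_v2")
face2paint = torch.hub.load("bryandlee/animegan2-pytorch:main", "face2paint",
                            size=512)

img = Image.open("portrait.jpg").convert("RGB")  # placeholder input
face2paint(model, img).save("portrait_anime.png")
```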
| SOAT | StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN |

  • [Min Jin Chong](https://mchong6.github.io/)
  • [Hsin-Ying Lee](http://hsinyinglee.com/)
  • [David Forsyth](http://luthuli.cs.uiuc.edu/~daf/)

| [![](https://img.shields.io/github/stars/mchong6/SOAT?style=social)](https://github.com/mchong6/SOAT)

  • [arxiv](https://arxiv.org/abs/2111.01619)

  • [git](https://github.com/justinpinkney/toonify), [git](https://github.com/rosinality/stylegan2-pytorch)

  • [hf](https://huggingface.co/spaces/akhaliq/SOAT)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/mchong6/SOAT/blob/master/infinity.ipynb) | 13.11.2021 |
| Arnheim | Generative Art Using Neural Visual Grammars and Dual Encoders |

  • [Chrisantha Fernando](https://www.chrisantha.co.uk/)
  • [Ali Eslami](http://arkitus.com/)
  • [Jean-Baptiste Alayrac](https://www.jbalayrac.com/)
  • [Piotr Mirowski](https://piotrmirowski.com/)
  • others
  • [Dylan Banarse](https://www.2ne1.com/)
  • [Simon Osindero](https://scholar.google.com/citations?user=Jq8ZS5kAAAAJ)

| [![](https://img.shields.io/github/stars/deepmind/arnheim?style=social)](https://github.com/deepmind/arnheim)

  • [arxiv](https://arxiv.org/abs/2105.00162), [arxiv](https://arxiv.org/abs/2106.14843), [arxiv](https://arxiv.org/abs/1801.07729), [arxiv](https://arxiv.org/abs/1606.02580), [arxiv](https://arxiv.org/abs/1609.09106)

  • [git](https://github.com/openai/dall-e)

  • [wiki](https://en.wikipedia.org/wiki/Compositional_pattern-producing_network)

  • [yt](https://www.youtube.com/watch?v=U7guaMdeF4g), [yt](https://www.youtube.com/watch?v=zh0goLbS-l0), [yt](https://www.youtube.com/watch?v=SYJGNt7yu6M), [yt](https://www.youtube.com/watch?v=MxkYKa0x5AU)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/arnheim/blob/master/arnheim_2.ipynb) | 11.11.2021 |
| StyleGAN 2 | Generation of faces, cars, etc. | [Mikael Christensen](https://github.com/Syntopia) | [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00813)](https://doi.org/10.1109/CVPR42600.2020.00813) [![](https://img.shields.io/github/stars/NVlabs/stylegan2?style=social)](https://github.com/NVlabs/stylegan2)

  • [arxiv](http://arxiv.org/abs/1912.04958)

  • [git](https://github.com/NVlabs/ffhq-dataset)

  • [yt](https://youtu.be/c-NJtV9Jvp0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1ShgW6wohEFQtqs_znMna3dzrcVoABKIH) | 05.11.2021 |
| ByteTrack | Multi-Object Tracking by Associating Every Detection Box |

  • [Yifu Zhang](https://github.com/ifzhang)
  • [Peize Sun](https://peizesun.github.io/)
  • [Yi Jiang](https://github.com/iFighting)
  • [Dongdong Yu](https://miracle-fmh.github.io/)
  • others
  • [Ping Luo](http://luoping.me/)
  • [Xinggang Wang](https://xinggangw.info/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-031-20047-2_1)](https://doi.org/10.1007/978-3-031-20047-2_1) [![](https://img.shields.io/github/stars/ifzhang/ByteTrack?style=social)](https://github.com/ifzhang/ByteTrack)

  • [arxiv](https://arxiv.org/abs/2110.06864)

  • [data](https://motchallenge.net/), [data](https://www.crowdhuman.org/)

  • [git](https://github.com/Megvii-BaseDetection/YOLOX), [git](https://github.com/ifzhang/FairMOT), [git](https://github.com/PeizeSun/TransTrack), [git](https://github.com/samylee/Towards-Realtime-MOT-Cpp)

  • [pwc](https://paperswithcode.com/task/multi-object-tracking)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1bDilg4cmXFa8HCKHbsZ_p16p0vrhLyu0) | 30.10.2021 |
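The core idea of BYTE is a two-stage association: tracks are matched against high-score detections first, and leftover tracks get a second chance against low-score boxes. A toy sketch of that association step, not the authors' implementation (which also uses a Kalman-filter motion model):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def byte_associate(tracks, dets, scores, score_thresh=0.6, iou_gate=0.2):
    """Match tracks to high-score detections first, then give the leftover
    tracks a second chance on the low-score detections."""
    matches, unmatched = [], list(range(len(tracks)))
    for det_ids in (np.flatnonzero(scores >= score_thresh),
                    np.flatnonzero(scores < score_thresh)):
        if not unmatched or det_ids.size == 0:
            continue
        cost = np.array([[1.0 - iou(tracks[t], dets[d]) for d in det_ids]
                         for t in unmatched])
        rows, cols = linear_sum_assignment(cost)
        assigned = dict(zip(rows.tolist(), cols.tolist()))
        still = []
        for i, t in enumerate(unmatched):
            j = assigned.get(i)
            if j is not None and 1.0 - cost[i, j] >= iou_gate:
                matches.append((t, int(det_ids[j])))
            else:
                still.append(t)
        unmatched = still
    return matches, unmatched
```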
| GPT-2 | Retrain an advanced text-generating neural network on any text dataset using gpt-2-simple; a minimal usage sketch follows this entry | [Max Woolf](https://minimaxir.com/) | [![](https://img.shields.io/github/stars/openai/gpt-2?style=social)](https://github.com/openai/gpt-2)

  • [blog post](https://minimaxir.com/2019/09/howto-gpt2/), [blog post](https://openai.com/research/better-language-models)

  • [git](https://github.com/minimaxir/gpt-2-simple)

  • [reddit](https://www.reddit.com/r/MachineLearning/comments/aqlzde/r_openai_better_language_models_and_their/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1VLG8e7YSEwypxU-noRNhsv5dW4NfTGce) | 18.10.2021 |
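A minimal gpt-2-simple sketch using the calls shown in the blog posts linked above; `corpus.txt` is a placeholder dataset.

```python
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")   # fetch the small GPT-2 checkpoint

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="corpus.txt",     # placeholder plain-text dataset
              model_name="124M",
              steps=1000)

gpt2.generate(sess, length=200, temperature=0.7, prefix="Once upon a time")
```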
| ConvMixer | An extremely simple model that is similar in spirit to the ViT and the even-more-basic MLP-Mixer in that it operates directly on patches as input, separates the mixing of spatial and channel dimensions, and maintains equal size and resolution throughout the network |

  • [Asher Trockman](http://ashertrockman.com/)
  • [Zico Kolter](http://zicokolter.com/)

| [![](https://img.shields.io/github/stars/locuslab/convmixer?style=social)](https://github.com/locuslab/convmixer)

  • [arxiv](https://arxiv.org/abs/2201.09792)

  • [git](https://github.com/locuslab/convmixer-cifar10), [git](https://github.com/rwightman/pytorch-image-models)

  • [medium](https://medium.com/codex/an-overview-on-convmixer-patches-are-all-you-need-8502a8d87011)

  • [yt](https://youtu.be/Gl0s0GDqN3c?t=990)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/locuslab/convmixer/blob/main/pytorch-image-models/notebooks/EffResNetComparison.ipynb) | 06.10.2021 |
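The whole ConvMixer model fits in a few lines of PyTorch; this sketch follows the definition in the paper, with illustrative hyperparameters.

```python
import torch.nn as nn

class Residual(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

def conv_mixer(dim=256, depth=8, kernel_size=9, patch_size=7, n_classes=1000):
    """Patch embedding followed by `depth` blocks of depthwise (spatial)
    and pointwise (channel) mixing, at constant size and resolution."""
    return nn.Sequential(
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
        nn.GELU(), nn.BatchNorm2d(dim),
        *[nn.Sequential(
            Residual(nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
                nn.GELU(), nn.BatchNorm2d(dim))),
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.GELU(), nn.BatchNorm2d(dim))
          for _ in range(depth)],
        nn.AdaptiveAvgPool2d((1, 1)), nn.Flatten(), nn.Linear(dim, n_classes))
```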
| IC-GAN | Instance-Conditioned GAN |

  • [Arantxa Casanova](https://github.com/ArantxaCasanova)
  • [Marlène Careil](https://www.linkedin.com/in/marl%C3%A8ne-careil-901804155)
  • [Jakob Verbeek](http://thoth.inrialpes.fr/~verbeek/)
  • [Michał Drożdżal](https://scholar.google.com/citations?user=XK_ktwQAAAAJ)
  • [Adriana Romero-Soriano](https://sites.google.com/site/adriromsor)

| [![](https://img.shields.io/github/stars/facebookresearch/ic_gan?style=social)](https://github.com/facebookresearch/ic_gan)

  • [arxiv](https://arxiv.org/abs/2109.05070)

  • [blog post](https://ai.facebook.com/blog/instance-conditioned-gans/)

  • [git](https://github.com/facebookresearch/faiss), [git](https://github.com/ajbrock/BigGAN-PyTorch), [git](https://github.com/NVlabs/stylegan2-ada-pytorch), [git](https://github.com/bioinf-jku/TTUR), [git](https://github.com/mit-han-lab/data-efficient-gans)

  • [neurips](https://proceedings.neurips.cc/paper/2021/hash/e7ac288b0f2d41445904d071ba37aaff-Abstract.html)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/ic_gan/blob/master/inference/icgan_colab.ipynb) | 01.10.2021 |
| Skillful Precipitation Nowcasting Using Deep Generative Models of Radar | Open-sourced dataset and model snapshot for precipitation nowcasting |

  • [Suman Ravuri](https://www.linkedin.com/in/suman-ravuri-81928082)
  • [Karel Lenc](https://www.robots.ox.ac.uk/~karel/)
  • [Matthew Willson](https://www.linkedin.com/in/matthew-willson-6a1b422)
  • [Dmitry Kangin](https://scholar.google.com/citations?user=vv-leaMAAAAJ)
  • others
  • [Rémi Lam](https://github.com/remilam)
  • [Piotr Mirowski](https://piotrmirowski.com/)
  • [Maria Athanassiadou](https://scholar.google.com/citations?user=VtkgHP0AAAAJ)
  • [Sheleem Kashem](https://www.linkedin.com/in/sheleemkashem/)
  • [Rachel Prudden](https://computerscience.exeter.ac.uk/staff/rep218)
  • [Amol Mandhane](https://github.com/amol-mandhane)
  • [Aidan Clark](https://scholar.google.com/citations?user=_19DrfIAAAAJ)
  • [Andrew Brock](https://github.com/ajbrock)
  • [Karen Simonyan](https://scholar.google.com/citations?user=L7lMQkQAAAAJ)
  • [Raia Hadsell](https://github.com/raiah)
  • [Niall Robinson](https://github.com/niallrobinson)
  • [Ellen Clancy](https://www.linkedin.com/in/ellen-clancy-815967124)
  • [Shakir Mohamed](https://www.shakirm.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1038/s41586-021-03854-z)](https://doi.org/10.1038/s41586-021-03854-z) [![](https://img.shields.io/github/stars/deepmind/deepmind-research?style=social)](https://github.com/deepmind/deepmind-research/tree/master/nowcasting)

  • [arxiv](https://arxiv.org/abs/2104.00954)

  • [blog post](https://deepmind.com/blog/article/nowcasting)

  • [local kernel](https://research.google.com/colaboratory/local-runtimes.html)

  • [tf](https://www.tensorflow.org/hub)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/deepmind-research/blob/master/nowcasting/Open_sourced_dataset_and_model_snapshot_for_precipitation_nowcasting.ipynb) | 29.09.2021 |
| Live Speech Portraits | Real-Time Photorealistic Talking-Head Animation |

  • [Yuanxun Lu](https://github.com/YuanxunLu)
  • [Jinxiang Chai](https://scholar.google.com/citations?user=OcN1_gwAAAAJ)
  • [Xun Cao](https://cite.nju.edu.cn/People/Faculty/20190621/i5054.html)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3478513.3480484)](https://doi.org/10.1145/3478513.3480484) [![](https://img.shields.io/github/stars/YuanxunLu/LiveSpeechPortraits?style=social)](https://github.com/YuanxunLu/LiveSpeechPortraits)

  • [arxiv](https://arxiv.org/abs/2109.10595)

  • [git](https://github.com/lelechen63/ATVGnet), [git](https://github.com/lelechen63/Talking-head-Generation-with-Rhythmic-Head-Motion), [git](https://github.com/DinoMan/speech-driven-animation), [git](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix)

  • [project](https://yuanxunlu.github.io/projects/LiveSpeechPortraits/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1tKvi-9kY3GkEK8lgtfTSM70rMFo_TY50) | 26.09.2021 |
| StylEx | Training a GAN to explain a classifier in StyleSpace |

  • [Oran Lang](https://research.google/people/105975/)
  • [Yossi Gandelsman](https://yossigandelsman.github.io/)
  • [Michal Yarom](https://scholar.google.com/citations?user=GMVxiYgAAAAJ)
  • [Yoav Wald](https://scholar.google.com/citations?user=hh5nOn4AAAAJ)
  • others
  • [Gal Elidan](https://research.google/people/105719/)
  • [Avinatan Hassidim](https://research.google/people/105831/)
  • [William Freeman](https://billf.mit.edu/)
  • [Phillip Isola](http://web.mit.edu/phillipi/)
  • [Amir Globerson](https://cs3801.wixsite.com/amirgloberson)
  • [Michal Irani](http://www.weizmann.ac.il/math/irani/)
  • [Inbar Mosseri](https://research.google/people/InbarMosseri/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV48922.2021.00073)](https://doi.org/10.1109/ICCV48922.2021.00073) [![](https://img.shields.io/github/stars/google/explaining-in-style?style=social)](https://github.com/google/explaining-in-style)

  • [arxiv](https://arxiv.org/abs/2104.13369), [arxiv](https://arxiv.org/abs/1906.10112), [arxiv](https://arxiv.org/abs/2011.12799), [arxiv](https://arxiv.org/abs/1912.04958), [arxiv](https://arxiv.org/abs/1710.01711)

  • [blog post](https://ai.googleblog.com/2022/01/introducing-stylex-new-approach-for.html)

  • [project](https://explaining-in-style.github.io/)

  • [supplementary](https://explaining-in-style.github.io/supmat.html)

  • [yt](https://youtu.be/wLk2eBdXH4M)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/explaining-in-style/blob/main/Explaining_in_Style_AttFind.ipynb) | 25.08.2021 |
| VITS | Parallel end-to-end TTS method that generates more natural-sounding audio than current two-stage models |

  • [Jaehyeon Kim](https://jaywalnut310.github.io/)
  • [Jungil Kong](https://github.com/jik876)
  • [Juhee Son](https://juheeuu.github.io/)

| [![](https://img.shields.io/github/stars/jaywalnut310/vits?style=social)](https://github.com/jaywalnut310/vits)

  • [arxiv](https://arxiv.org/abs/2106.06103)

  • [demo](https://jaywalnut310.github.io/vits-demo/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1CO61pZizDj7en71NQG_aqqKdGaA_SaBf) | 23.08.2021 |
| Bringing Old Photo Back to Life | Restoring old photos that suffer from severe degradation through a deep learning approach |

  • [Ziyu Wan](http://raywzy.com/)
  • [Bo Zhang](https://bo-zhang.me/)
  • [Dongdong Chen](http://www.dongdongchen.bid/)
  • [Pan Zhang](https://panzhang0212.github.io/)
  • others
  • [Dong Chen](http://www.dongchen.pro/)
  • [Jing Liao](https://liaojing.github.io/html/)
  • [Fang Wen](https://www.microsoft.com/en-us/research/people/fangwen/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00282)](https://doi.org/10.1109/CVPR42600.2020.00282) [![](https://img.shields.io/github/stars/microsoft/Bringing-Old-Photos-Back-to-Life?style=social)](https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life)

  • [arxiv](https://arxiv.org/abs/2004.09484)

  • [demo](https://replicate.com/microsoft/bringing-old-photos-back-to-life)

  • [project](http://raywzy.com/Old_Photo/)

  • [yt](https://youtu.be/Q5bhszQq9eA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1NEm6AsybIiC5TwTU_4DqDkQO0nFRB-uA) | 13.07.2021 |
| PTI | Pivotal Tuning Inversion enables employing off-the-shelf latent-based semantic editing techniques on real images using StyleGAN |

  • [Daniel Roich](https://github.com/danielroich)
  • [Ron Mokady](https://rmokady.github.io/)
  • [Amit Bermano](https://www.cs.tau.ac.il/~amberman/)
  • [Daniel Cohen-Or](https://danielcohenor.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3544777)](https://doi.org/10.1145/3544777) [![](https://img.shields.io/github/stars/danielroich/PTI?style=social)](https://github.com/danielroich/PTI)

  • [arxiv](https://arxiv.org/abs/2106.05744)

  • [git](https://github.com/NVlabs/stylegan2-ada-pytorch), [git](https://github.com/richzhang/PerceptualSimilarity)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/danielroich/PTI/blob/main/notebooks/inference_playground.ipynb) | 01.07.2021 |
| TediGAN | Framework for multi-modal image generation and manipulation with textual descriptions |

  • [Weihao Xia](https://github.com/weihaox)
  • [Yujiu Yang](http://www.fiesta.tsinghua.edu.cn/pi/3/24)
  • [Jing-Hao Xue](http://www.homepages.ucl.ac.uk/~ucakjxu/)
  • [Baoyuan Wu](https://sites.google.com/site/baoyuanwu2015/home)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.00229)](https://doi.org/10.1109/CVPR46437.2021.00229) [![](https://img.shields.io/github/stars/IIGROUP/TediGAN?style=social)](https://github.com/IIGROUP/TediGAN)

  • [arxiv](https://arxiv.org/abs/2012.03308), [arxiv](https://arxiv.org/abs/2104.08910)

  • [git](https://github.com/weihaox/Multi-Modal-CelebA-HQ), [git](https://github.com/NVlabs/ffhq-dataset), [git](https://github.com/rosinality/stylegan2-pytorch/), [git](https://github.com/fyu/lsun)

  • [yt](https://youtu.be/L8Na2f5viAM)

| [![Open In Colab](images/colab.svg)](http://colab.research.google.com/github/weihaox/TediGAN/blob/master/playground.ipynb) | 30.06.2021 |
| SCALE | Modeling Clothed Humans with a Surface Codec of Articulated Local Elements |

  • [Qianli Ma](https://qianlim.github.io/)
  • [Shunsuke Saito](https://shunsukesaito.github.io/)
  • [Jinlong Yang](https://is.mpg.de/~jyang)
  • [Siyu Tang](https://scholar.google.com/citations?user=BUDh_4wAAAAJ)
  • [Michael Black](https://ps.is.mpg.de/~black)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.01582)](https://doi.org/10.1109/CVPR46437.2021.01582) [![](https://img.shields.io/github/stars/qianlim/SCALE?style=social)](https://github.com/qianlim/SCALE)

  • [arxiv](https://arxiv.org/abs/2104.07660)

  • [data](https://cape.is.tue.mpg.de/)

  • [git](https://github.com/krrish94/chamferdist), [git](https://github.com/shunsukesaito/SCANimate)

  • [poster](https://ps.is.tuebingen.mpg.de/uploads_file/attachment/attachment/650/SCALE_poster_CVPR_final_compressed.pdf)

  • [project](https://qianlim.github.io/SCALE.html)

  • [yt](https://youtu.be/-EvWqFCUb7U), [yt](https://youtu.be/v4rWCxJJzhc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1lp6r-A-s1kBorIvg6rLD4Ja3o6JOvu3G) | 26.06.2021 |
| CogView | Mastering Text-to-Image Generation via Transformers |

  • [Ming Ding](https://scholar.google.com/citations?user=Va50YzkAAAAJ)
  • [Zhuoyi Yang](https://scholar.google.com/citations?user=tgAt-gEAAAAJ)
  • [Wenyi Hong](https://github.com/wenyihong)
  • [Wendi Zheng](https://github.com/minkowski0125)
  • others
  • [Chang Zhou](https://scholar.google.com/citations?user=QeSoG3sAAAAJ)
  • [Junyang Lin](https://justinlin610.github.io/)
  • [Xu Zou](http://xuzou.cn/)
  • [Zhou Shao](https://www.researchgate.net/profile/Shao_Zhou4)
  • [Hongxia Yang](https://sites.google.com/site/hystatistics/home)
  • [Jie Tang](https://keg.cs.tsinghua.edu.cn/jietang/)

| [![](https://img.shields.io/github/stars/THUDM/CogView?style=social)](https://github.com/THUDM/CogView)

  • [arxiv](https://arxiv.org/abs/2105.13290)

  • [demo](https://thudm.github.io/CogView/index.html)

  • [git](https://github.com/NVIDIA/apex), [git](https://github.com/Sleepychord/cogdata)

  • [medium](https://towardsdatascience.com/cogview-image-generation-and-language-modelling-at-scale-8d358a0686d2)

  • [neurips](https://proceedings.neurips.cc/paper/2021/hash/a4d92e2cd541fca87e4620aba658316d-Abstract.html)

  • [reddit](https://www.reddit.com/r/MachineLearning/comments/nmxsd8/r_cogview_mastering_texttoimage_generation_via/)

  • [yt](https://youtu.be/Cw1r8ACIj8U)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1Bi2TnSUp2vNiSUhamsNuC4HqkZ2J4WwZ) | 21.06.2021 |
| GANs N' Roses | Stable, Controllable, Diverse Image to Image Translation |

  • [Min Jin Chong](https://mchong6.github.io/)
  • [David Forsyth](http://luthuli.cs.uiuc.edu/~daf/)

| [![](https://img.shields.io/github/stars/mchong6/GANsNRoses?style=social)](https://github.com/mchong6/GANsNRoses)

  • [arxiv](https://arxiv.org/abs/2106.06561), [arxiv](https://arxiv.org/abs/2007.06600)

  • [git](https://github.com/rosinality/stylegan2-pytorch), [git](https://github.com/znxlwm/UGATIT-pytorch)

  • [yt](https://youtu.be/VNg0NyCGl_4)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/mchong6/GANsNRoses/blob/master/inference_colab.ipynb) | 19.06.2021 |
| Rethinking Style Transfer: From Pixels to Parameterized Brushstrokes | A method to stylize images by optimizing parameterized brushstrokes instead of pixels |

  • [Dmytro Kotovenko](https://scholar.google.de/citations?user=T_U8yxwAAAAJ)
  • [Matthias Wright](https://matthias-wright.github.io/)
  • [Arthur Heimbrecht](https://github.com/arwehei)
  • [Björn Ommer](https://ommer-lab.com/people/ommer/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.01202)](https://doi.org/10.1109/CVPR46437.2021.01202) [![](https://img.shields.io/github/stars/CompVis/brushstroke-parameterized-style-transfer?style=social)](https://github.com/CompVis/brushstroke-parameterized-style-transfer)

  • [arxiv](https://arxiv.org/abs/2103.17185)

  • [project](https://compvis.github.io/brushstroke-parameterized-style-transfer/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/CompVis/brushstroke-parameterized-style-transfer/blob/tensorflow_v2/notebooks/BrushstrokeStyleTransfer_TF2.ipynb) | 02.06.2021 |
| Pixel2Style2Pixel | Encoding in Style: A StyleGAN Encoder for Image-to-Image Translation |

  • [Elad Richardson](https://github.com/eladrich)
  • [Yuval Alaluf](https://yuval-alaluf.github.io/)
  • [Yotam Nitzan](https://yotamnitzan.github.io/)
  • [Daniel Cohen-Or](https://danielcohenor.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.00232)](https://doi.org/10.1109/CVPR46437.2021.00232) [![](https://img.shields.io/github/stars/eladrich/pixel2style2pixel?style=social)](https://github.com/eladrich/pixel2style2pixel)

  • [arxiv](https://arxiv.org/abs/2008.00951)

  • [git](https://github.com/rosinality/stylegan2-pytorch), [git](https://github.com/HuangYG123/CurricularFace)

  • [project](https://eladrich.github.io/pixel2style2pixel/)

  • [yt](https://youtu.be/bfvSwhqsTgM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/eladrich/pixel2style2pixel/blob/master/notebooks/inference_playground.ipynb) | 01.06.2021 |
| Fine-tuning BERT | Work through fine-tuning a BERT model using the tensorflow-models pip package; a minimal sketch follows this entry |

  • [Chen Chen](https://github.com/chenGitHuber)
  • [Claire Yao](https://github.com/claireyao-fen)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.18653/v1/N19-1423)](https://doi.org/10.18653/v1/N19-1423)

  • [arxiv](https://arxiv.org/abs/1810.04805)

  • [tf](https://tensorflow.org/hub)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/models/blob/master/official/colab/fine_tuning_bert.ipynb) | 25.05.2021 |
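A minimal classification fine-tuning sketch; note it uses BERT from TF Hub rather than the tensorflow-models package the notebook itself works through, and the hub handles and hyperparameters here are illustrative.

```python
import tensorflow as tf
import tensorflow_hub as hub

# TF Hub handles for an uncased BERT-Base encoder and its matching preprocessor.
preprocess = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4",
    trainable=True)

inputs = tf.keras.layers.Input(shape=(), dtype=tf.string)
pooled = encoder(preprocess(inputs))["pooled_output"]
logits = tf.keras.layers.Dense(2)(tf.keras.layers.Dropout(0.1)(pooled))
model = tf.keras.Model(inputs, logits)
model.compile(
    optimizer=tf.keras.optimizers.Adam(2e-5),  # small LR, standard for fine-tuning
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
```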
| ReStyle | A Residual-Based StyleGAN Encoder via Iterative Refinement |

  • [Yuval Alaluf](https://yuval-alaluf.github.io/)
  • [Or Patashnik](https://orpatashnik.github.io/)
  • [Daniel Cohen-Or](https://danielcohenor.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV48922.2021.00664)](https://doi.org/10.1109/ICCV48922.2021.00664) [![](https://img.shields.io/github/stars/yuval-alaluf/restyle-encoder?style=social)](https://github.com/yuval-alaluf/restyle-encoder)

  • [arxiv](https://arxiv.org/abs/2104.02699), [arxiv](https://arxiv.org/abs/2008.00951), [arxiv](https://arxiv.org/abs/2102.02766)

  • [git](https://github.com/rosinality/stylegan2-pytorch), [git](https://github.com/TreB1eN/InsightFace_Pytorch)

  • [project](https://yuval-alaluf.github.io/restyle-encoder/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/yuval-alaluf/restyle-encoder/blob/master/notebooks/inference_playground.ipynb) | 21.05.2021 |
| Motion Representations for Articulated Animation | Novel motion representations for animating articulated objects consisting of distinct parts |

  • [Aliaksandr Siarohin](https://aliaksandrsiarohin.github.io/aliaksandr-siarohin-website/)
  • [Oliver Woodford](https://ojwoodford.github.io/)
  • [Jian Ren](https://alanspike.github.io/)
  • [Menglei Chai](https://mlchai.com/)
  • [Sergey Tulyakov](http://www.stulyakov.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.01344)](https://doi.org/10.1109/CVPR46437.2021.01344) [![](https://img.shields.io/github/stars/snap-research/articulated-animation?style=social)](https://github.com/snap-research/articulated-animation)

  • [arxiv](https://arxiv.org/abs/2104.11280)

  • [project](https://snap-research.github.io/articulated-animation/)

  • [yt](https://www.youtube.com/watch?v=gpBYN8t8_yY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/AliaksandrSiarohin/articulated-animation/blob/master/demo.ipynb) | 29.04.2021 |
| SAM | Age Transformation Using a Style-Based Regression Model |

  • [Yuval Alaluf](https://yuval-alaluf.github.io/)
  • [Or Patashnik](https://orpatashnik.github.io/)
  • [Daniel Cohen-Or](https://danielcohenor.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3450626.3459805)](https://doi.org/10.1145/3450626.3459805) [![](https://img.shields.io/github/stars/yuval-alaluf/SAM?style=social)](https://github.com/yuval-alaluf/SAM)

  • [arxiv](https://arxiv.org/abs/2102.02754)

  • [git](https://github.com/eladrich/pixel2style2pixel), [git](https://github.com/rosinality/stylegan2-pytorch)

  • [project](https://yuval-alaluf.github.io/SAM/)

  • [yt](https://youtu.be/X_pYC_LtBFw)

| [![Open In Colab](images/colab.svg)](http://colab.research.google.com/github/yuval-alaluf/SAM/blob/master/notebooks/animation_inference_playground.ipynb) | 26.04.2021 |
| Geometry-Free View Synthesis | Is a geometric model required to synthesize novel views from a single image? |

  • [Robin Rombach](https://github.com/rromb)
  • [Patrick Esser](https://github.com/pesser)
  • [Björn Ommer](https://ommer-lab.com/people/ommer/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV48922.2021.01409)](https://doi.org/10.1109/ICCV48922.2021.01409) [![](https://img.shields.io/github/stars/CompVis/geometry-free-view-synthesis?style=social)](https://github.com/CompVis/geometry-free-view-synthesis)

  • [arxiv](https://arxiv.org/abs/2104.07652)

  • [data](https://google.github.io/realestate10k/)

  • [git](https://github.com/colmap/colmap)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/CompVis/geometry-free-view-synthesis/blob/master/scripts/braindance.ipynb) | 22.04.2021 |
| NeRViS | An algorithm for full-frame video stabilization that first estimates dense warp fields, then synthesizes stabilized frames by fusing warped content from neighboring frames |

  • [Yu-Lun Liu](http://www.cmlab.csie.ntu.edu.tw/~yulunliu/)
  • [Wei-Sheng Lai](https://www.wslai.net/)
  • [Ming-Hsuan Yang](https://faculty.ucmerced.edu/mhyang/)
  • [Yung-Yu Chuang](https://www.csie.ntu.edu.tw/~cyy/)
  • [Jia-Bin Huang](https://jbhuang0604.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV48922.2021.00230)](https://doi.org/10.1109/ICCV48922.2021.00230) [![](https://img.shields.io/github/stars/alex04072000/NeRViS?style=social)](https://github.com/alex04072000/NeRViS)

  • [arxiv](https://arxiv.org/abs/2102.06205)

  • [data](http://liushuaicheng.org/SIGGRAPH2013/database.html)

  • [git](https://github.com/cxjyxxme/deep-online-video-stabilization), [git](https://github.com/jinsc37/DIFRINT)

  • [project](https://alex04072000.github.io/NeRViS/)

  • [yt](https://youtu.be/KO3sULs4hso)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1l-fUzyM38KJMZyKMBWw_vu7ZUyDwgdYH) | 11.04.2021 |
| NeX | View synthesis based on enhancements of multiplane image that can reproduce NeXt-level view-dependent effects in real time |

  • [Suttisak Wizadwongsa](https://www.linkedin.com/in/suttisak-wizadwongsa-763a931a5/)
  • [Pakkapon Phongthawee](http://pureexe.github.io/)
  • [Jiraphon Yenphraphai](https://www.linkedin.com/in/jiraphon-yenphraphai-990ba6175/)
  • [Supasorn Suwajanakorn](https://www.supasorn.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.00843)](https://doi.org/10.1109/CVPR46437.2021.00843) [![](https://img.shields.io/github/stars/nex-mpi/nex-code?style=social)](https://github.com/nex-mpi/nex-code)

  • [arxiv](https://arxiv.org/abs/2103.05606)

  • [data](https://vistec-my.sharepoint.com/personal/pakkapon_p_s19_vistec_ac_th/_layouts/15/onedrive.aspx?id=%2Fpersonal%2Fpakkapon%5Fp%5Fs19%5Fvistec%5Fac%5Fth%2FDocuments%2Fpublic%2FVLL%2FNeX%2Fshiny%5Fdatasets&originalPath=aHR0cHM6Ly92aXN0ZWMtbXkuc2hhcmVwb2ludC5jb20vOmY6L2cvcGVyc29uYWwvcGFra2Fwb25fcF9zMTlfdmlzdGVjX2FjX3RoL0VuSVVoc1JWSk9kTnNaXzRzbWRoeWUwQjh6MFZseHFPUjM1SVIzYnAwdUd1cFE%5FcnRpbWU9WXRVQTQtQTcyVWc), [data](https://vistec-my.sharepoint.com/personal/pakkapon_p_s19_vistec_ac_th/_layouts/15/onedrive.aspx?originalPath=aHR0cHM6Ly92aXN0ZWMtbXkuc2hhcmVwb2ludC5jb20vOmY6L2cvcGVyc29uYWwvcGFra2Fwb25fcF9zMTlfdmlzdGVjX2FjX3RoL0VyalBSUkw5Sm5GSXA4TU42ZDFqRXVvQjNYVm94SmtmZlBqZm9QeWhIa2owZGc%5FcnRpbWU9bC0yYWctRTcyVWc&id=%2Fpersonal%2Fpakkapon%5Fp%5Fs19%5Fvistec%5Fac%5Fth%2FDocuments%2Fpublic%2FVLL%2FNeX%2Fmodified%5Fdataset)

  • [git](https://github.com/Fyusion/LLFF)

  • [project](https://nex-mpi.github.io/)

  • [vistec](https://vistec.ist/)

  • [yt](https://www.youtube.com/watch?v=HyfkF7Z-ddA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1hXVvYdAwLA0EFg2zrafJUE0bFgB_F7PU) | 25.03.2021 |
| Score SDE | Score-Based Generative Modeling through Stochastic Differential Equations |

  • [Yang Song](https://yang-song.net/)
  • [Jascha Sohl-Dickstein](http://www.sohldickstein.com/)
  • [Diederik Kingma](http://dpkingma.com/)
  • [Abhishek Kumar](https://abhishek.umiacs.io/)
  • others
  • [Stefano Ermon](https://cs.stanford.edu/~ermon/)
  • [Ben Poole](https://cs.stanford.edu/~poole/)

| [![](https://img.shields.io/github/stars/yang-song/score_sde?style=social)](https://github.com/yang-song/score_sde)

  • [arxiv](https://arxiv.org/abs/2011.13456), [arxiv](https://arxiv.org/abs/1907.05600), [arxiv](https://arxiv.org/abs/2006.09011), [arxiv](https://arxiv.org/abs/2006.11239)

  • [git](https://github.com/yang-song/score_sde_pytorch), [git](https://github.com/google/ml_collections)

  • [yt](https://youtu.be/L9ZegT87QK8)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/yang-song/score_sde/blob/main/Score_SDE_demo.ipynb) | 18.03.2021 |
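A toy Euler-Maruyama sampler for the reverse-time VE SDE from the paper; `score_fn` stands in for a trained score network, and the noise-schedule constants are illustrative.

```python
import math
import torch

def reverse_sde_sample(score_fn, shape, sigma_min=0.01, sigma_max=50.0, n_steps=500):
    """Integrate dx = -g(t)^2 * score(x, t) dt + g(t) dw from t=1 down to ~0,
    with the VE schedule sigma(t) = sigma_min * (sigma_max / sigma_min)^t."""
    x = torch.randn(shape) * sigma_max           # sample from the terminal prior
    dt = -1.0 / n_steps                          # negative: backwards in time
    log_ratio = math.log(sigma_max / sigma_min)
    for i in range(n_steps):
        t = 1.0 - i / n_steps
        sigma = sigma_min * (sigma_max / sigma_min) ** t
        g2 = 2.0 * log_ratio * sigma ** 2        # g(t)^2 = d sigma^2 / dt
        x = (x + (-g2 * score_fn(x, t)) * dt
             + math.sqrt(g2 * -dt) * torch.randn_like(x))
    return x
```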
| Talking Head Anime from a Single Image | The network takes as input an image of an anime character's face and a desired pose, and it outputs another image of the same character in the given pose | [Pramook Khungurn](https://pkhungurn.github.io/) | [![](https://img.shields.io/github/stars/pkhungurn/talking-head-anime-demo?style=social)](https://github.com/pkhungurn/talking-head-anime-demo)

  • [git](https://github.com/lincolnhard/head-pose-estimation)

  • [project](https://pkhungurn.github.io/talking-head-anime/)

  • [wiki](https://en.wikipedia.org/wiki/Virtual_YouTuber), [wiki](https://en.wikipedia.org/wiki/MikuMikuDance)

  • [yt](https://youtu.be/kMQCERkTdO0), [yt](https://youtu.be/T1Gp-RxFZwU), [yt](https://youtu.be/FioRJ6x_RbI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/pkhungurn/talking-head-anime-demo/blob/master/tha_colab.ipynb) | 23.02.2021 |
| NFNet | Adaptive gradient clipping and a significantly improved class of Normalizer-Free ResNets |

  • [Andrew Brock](https://github.com/ajbrock)
  • [Soham De](https://sohamde.github.io/)
  • [Samuel L. Smith](https://scholar.google.co.uk/citations?user=fyEqU5oAAAAJ)
  • [Karen Simonyan](https://scholar.google.com/citations?user=L7lMQkQAAAAJ)

| [![](https://img.shields.io/github/stars/deepmind/deepmind-research?style=social)](https://github.com/deepmind/deepmind-research/tree/master/nfnets)

  • [arxiv](https://arxiv.org/abs/2102.06171), [arxiv](https://arxiv.org/abs/2101.08692)

  • [git](https://github.com/deepmind/jaxline)

  • [yt](https://youtu.be/rNkHjZtH0RQ), [yt](https://www.youtube.com/live/qyy2WhRRSI4?feature=share)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/deepmind-research/blob/master/nfnets/nfnet_demo_colab.ipynb) | 17.02.2021 |
| RITM | Simple feedforward model for click-based interactive segmentation that employs the segmentation masks from previous steps |

  • [Konstantin Sofiiuk](https://github.com/ksofiyuk)
  • [Ilia Petrov](https://virtualhumans.mpi-inf.mpg.de/people/Petrov.html)
  • [Anton Konushin](https://scholar.google.com/citations?user=ZT_k-wMAAAAJ)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICIP46576.2022.9897365)](https://doi.org/10.1109/ICIP46576.2022.9897365) [![](https://img.shields.io/github/stars/SamsungLabs/ritm_interactive_segmentation?style=social)](https://github.com/SamsungLabs/ritm_interactive_segmentation)

  • [arxiv](https://arxiv.org/abs/2102.06583)

  • [git](https://github.com/HRNet/HRNet-Image-Classification)

  • [pwc](https://paperswithcode.com/sota/interactive-segmentation-on-grabcut?p=reviving-iterative-training-with-mask), [pwc](https://paperswithcode.com/sota/interactive-segmentation-on-berkeley?p=reviving-iterative-training-with-mask)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/SamsungLabs/ritm_interactive_segmentation/blob/master/notebooks/colab_test_any_model.ipynb) | 13.02.2021 |
| CLIP | A neural network which efficiently learns visual concepts from natural language supervision |

  • [Jong Wook Kim](https://jongwook.kim/)
  • [Alec Radford](http://newmu.github.io/)
  • [Ilya Sutskever](http://www.cs.utoronto.ca/~ilya/)

| [![](https://img.shields.io/github/stars/openai/CLIP?style=social)](https://github.com/openai/CLIP)

  • [arxiv](https://arxiv.org/abs/2103.00020)

  • [data](https://www.cs.toronto.edu/~kriz/cifar.html)

  • [paper](https://cdn.openai.com/papers/Learning_Transferable_Visual_Models_From_Natural_Language_Supervision.pdf)

  • [project](https://openai.com/blog/clip/)

  • [slides](https://icml.cc/media/icml-2021/Slides/9193.pdf)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/openai/clip/blob/master/Interacting_with_CLIP.ipynb) | 29.01.2021 |
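A minimal zero-shot classification sketch using the API from the CLIP README; `cat.png` is a placeholder image.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("cat.png")).unsqueeze(0).to(device)  # placeholder
text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)  # zero-shot class probabilities
```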
| Adversarial Patch | A method to create universal, robust, targeted adversarial image patches in the real world | [Tom Brown](https://github.com/nottombrown) |
  • [arxiv](https://arxiv.org/abs/1712.09665)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/cleverhans-lab/cleverhans/blob/master/examples/adversarial_patch/AdversarialPatch.ipynb) | 27.01.2021 |
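A toy sketch of the idea: optimize a patch so that, pasted at random positions, it drives a classifier toward a target class. The real method also randomizes scale and rotation (an expectation over transformations); everything here, including the hyperparameters, is illustrative.

```python
import torch
import torch.nn.functional as F

def train_patch(model, images, target_class, patch_size=64, steps=200):
    """Toy adversarial-patch training: images in [0, 1], model in eval mode."""
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=0.01)
    for _ in range(steps):
        x = images.clone()
        # paste the patch at a random location (broadcast over the batch)
        i = torch.randint(0, x.shape[-2] - patch_size, (1,)).item()
        j = torch.randint(0, x.shape[-1] - patch_size, (1,)).item()
        x[:, :, i:i + patch_size, j:j + patch_size] = patch.clamp(0, 1)
        # maximize the target-class probability
        loss = -F.log_softmax(model(x), dim=1)[:, target_class].mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```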
| MSG-Net | Multi-style Generative Network with a novel Inspiration Layer, which retains the functionality of optimization-based approaches and has the fast speed of feed-forward networks |

  • [Hang Zhang](https://hangzhang.org/)
  • [Kristin Dana](https://www.ece.rutgers.edu/~kdana/dana.html)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-030-11018-5_32)](https://doi.org/10.1007/978-3-030-11018-5_32)

  • [arxiv](https://arxiv.org/abs/1703.06953)

  • [project](http://computervisionrutgers.github.io/MSG-Net/)

  • [yt](https://www.youtube.com/watch?v=oy6pWNWBt4Y)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/zhanghang1989/PyTorch-Multi-Style-Transfer/blob/master/msgnet.ipynb) | 25.01.2021 |
| f-BRS | Feature backpropagating refinement scheme that solves an optimization problem with respect to auxiliary variables instead of the network inputs, and requires running forward and backward passes for only a small part of the network |

  • [Konstantin Sofiiuk](https://github.com/ksofiyuk)
  • [Ilia Petrov](https://virtualhumans.mpi-inf.mpg.de/people/Petrov.html)
  • [Olga Barinova](https://github.com/OlgaBarinova)
  • [Anton Konushin](https://scholar.google.com/citations?user=ZT_k-wMAAAAJ)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00865)](https://doi.org/10.1109/CVPR42600.2020.00865) [![](https://img.shields.io/github/stars/SamsungLabs/fbrs_interactive_segmentation?style=social)](https://github.com/SamsungLabs/fbrs_interactive_segmentation)

  • [arxiv](https://arxiv.org/abs/2001.10331)

  • [git](https://github.com/HRNet/HRNet-Image-Classification)

  • [yt](https://youtu.be/ArcZ5xtyMCk), [yt](https://youtu.be/xg-5J9gLuXA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/SamsungLabs/fbrs_interactive_segmentation/blob/master/notebooks/colab_test_any_model.ipynb) | 25.01.2021 |
| Neural Style Transfer | Implementation of Neural Style Transfer in Keras 2.0+ | [Somshubra Majumdar](http://titu1994.github.io/) | [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1167/16.12.326)](https://doi.org/10.1167/16.12.326) [![](https://img.shields.io/github/stars/titu1994/Neural-Style-Transfer?style=social)](https://github.com/titu1994/Neural-Style-Transfer)
  • [arxiv](http://arxiv.org/abs/1508.06576), [arxiv](http://arxiv.org/abs/1605.04603), [arxiv](https://arxiv.org/abs/1606.05897)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/titu1994/Neural-Style-Transfer/blob/master/NeuralStyleTransfer.ipynb) | 22.01.2021 |
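At the heart of Gatys-style transfer is matching Gram matrices of VGG activations; a minimal sketch of the style loss in TensorFlow (layer choice and loss weighting omitted):

```python
import tensorflow as tf

def gram_matrix(features):
    """features: (batch, height, width, channels) activations from a VGG layer."""
    gram = tf.linalg.einsum("bijc,bijd->bcd", features, features)
    n = tf.cast(tf.shape(features)[1] * tf.shape(features)[2], tf.float32)
    return gram / n

def style_loss(style_features, generated_features):
    """Mean squared distance between style and generated Gram matrices."""
    return tf.reduce_mean(
        tf.square(gram_matrix(style_features) - gram_matrix(generated_features)))
```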
| SkyAR | A vision-based method for video sky replacement and harmonization, which can automatically generate realistic and dramatic sky backgrounds in videos with controllable styles | [Zhengxia Zou](http://www-personal.umich.edu/~zzhengxi/) | [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/TIP.2022.3192717)](https://doi.org/10.1109/TIP.2022.3192717) [![](https://img.shields.io/github/stars/jiupinjia/SkyAR?style=social)](https://github.com/jiupinjia/SkyAR)

  • [arxiv](https://arxiv.org/abs/2010.11800)

  • [project](https://jiupinjia.github.io/skyar/)

  • [yt](https://www.youtube.com/watch?v=zal9Ues0aOQ)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/jiupinjia/SkyAR/blob/master/colab_demo.ipynb) | 18.01.2021 |
| MusicXML Documentation | Explore the MusicXML document structure using one of the Magenta music libraries |

  • [Prakruti Joshi](https://github.com/prakruti-joshi)
  • [Falak Shah](https://falaktheoptimist.github.io/)
  • [Twisha Naik](https://github.com/twisha96)

|

  • [magenta](https://magenta.tensorflow.org/)

  • [music theory](http://musictheoryblog.blogspot.com/2008/02/learn-music-theory.html)

  • [musicXML](https://www.musicxml.com/for-developers/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/magenta/magenta-demos/blob/master/colab-notebooks/MusicXML_Document_Structure_Documentation.ipynb) | 08.01.2021 |
| SVG VAE | A colab demo for the SVG VAE model | [Raphael Gontijo Lopes](https://raphagl.com/) | [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV.2019.00802)](https://doi.org/10.1109/ICCV.2019.00802)

  • [arxiv](https://arxiv.org/abs/1904.02632)

  • [blog post](https://magenta.tensorflow.org/svg-vae)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/magenta/magenta-demos/blob/master/colab-notebooks/vae_svg_decoding.ipynb) | 08.01.2021 |
| Neural Magic Eye | Learning to See and Understand the Scene Behind an Autostereogram |

  • [Zhengxia Zou](http://www-personal.umich.edu/~zzhengxi/)
  • [Tianyang Shi](https://www.shitianyang.tech/)
  • [Yi Yuan](https://yiyuan1991.github.io/)
  • [Zhenwei Shi](http://levir.buaa.edu.cn/)

| [![](https://img.shields.io/github/stars/jiupinjia/neural-magic-eye?style=social)](https://github.com/jiupinjia/neural-magic-eye)

  • [arxiv](https://arxiv.org/abs/2012.15692)

  • [project](https://jiupinjia.github.io/neuralmagiceye/)

  • [yt](https://www.youtube.com/watch?v=Fkh7DEblqJ8)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1f59dFLJ748i2TleE54RkbUZSMo9Hyx7l) | 01.01.2021 |
| FGVC | A method that first extracts and completes motion edges, then uses them to guide piecewise-smooth flow completion with sharp edges |

  • [Chen Gao](http://chengao.vision/)
  • [Ayush Saraf](https://github.com/ayush29feb)
  • [Johannes Kopf](https://johanneskopf.de/)
  • [Jia-Bin Huang](https://jbhuang0604.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-030-58610-2_42)](https://doi.org/10.1007/978-3-030-58610-2_42) [![](https://img.shields.io/github/stars/vt-vl-lab/FGVC?style=social)](https://github.com/vt-vl-lab/FGVC)

  • [arxiv](https://arxiv.org/abs/2009.01835)

  • [project](http://chengao.vision/FGVC/)

  • [yt](https://www.youtube.com/watch?v=CHHVPxHT7rc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1pb6FjWdwq_q445rG2NP0dubw7LKNUkqc) | 30.12.2020 |
| VIBE | Video Inference for Body Pose and Shape Estimation, which makes use of an existing large-scale motion capture dataset together with unpaired, in-the-wild, 2D keypoint annotations |

  • [Muhammed Kocabas](https://ps.is.mpg.de/person/mkocabas)
  • [Nikos Athanasiou](https://github.com/athn-nik)
  • [Michael Black](https://ps.is.mpg.de/person/black)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00530)](https://doi.org/10.1109/CVPR42600.2020.00530) [![](https://img.shields.io/github/stars/mkocabas/VIBE?style=social)](https://github.com/mkocabas/VIBE)

  • [arxiv](https://arxiv.org/abs/1912.05656)

  • [git](https://github.com/carlosedubarreto/vibe_win_install), [git](https://github.com/vchoutas/smplx), [git](https://github.com/akanazawa/human_dynamics), [git](https://github.com/MandyMo/pytorch_HMR), [git](https://github.com/soulslicer/STAF/tree/staf)

  • [pwc](https://paperswithcode.com/sota/3d-human-pose-estimation-on-3dpw?p=vibe-video-inference-for-human-body-pose-and)

  • [yt](https://youtu.be/3qhs5IRJ1LI), [yt](https://youtu.be/w1biKeiQThY), [yt](https://youtu.be/rIr-nX63dUA), [yt](https://youtu.be/fW0sIZfQcIs), [yt](https://youtu.be/8Qt0wA16kTo), [yt](https://youtu.be/xyo5gl5GLEI), [yt](https://youtu.be/XNzgUhxKC38), [yt](https://youtu.be/hErK0MamTY4), [yt](https://youtu.be/Gfmm8uMfMq0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1dFfwxZ52MN86FA6uFNypMEdFShd2euQA) | 23.12.2020 |
| SeFa | A closed-form approach for unsupervised latent semantic factorization in GANs |

  • [Yujun Shen](https://shenyujun.github.io/)
  • [Bolei Zhou](https://boleizhou.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.00158)](https://doi.org/10.1109/CVPR46437.2021.00158) [![](https://img.shields.io/github/stars/genforce/sefa?style=social)](https://github.com/genforce/sefa)

  • [arxiv](https://arxiv.org/abs/2007.06600)

  • [project](https://genforce.github.io/sefa/)

  • [yt](https://www.youtube.com/watch?v=OFHW2WbXXIQ)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/genforce/sefa/blob/master/docs/SeFa.ipynb) | 06.12.2020 |
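SeFa's closed-form step is just an eigendecomposition of the latent-projection weights; a minimal sketch, leaving out the per-layer weight concatenation and normalization the official code performs:

```python
import torch

def sefa_directions(weight, k=5):
    """weight: the (out_dim, latent_dim) weight that projects the latent code.
    Returns the top-k eigenvectors of W^T W as unsupervised edit directions."""
    eigvals, eigvecs = torch.linalg.eigh(weight.T @ weight)
    return eigvecs[:, -k:].flip(-1).T   # rows = directions, strongest first

# editing: z_edit = z + alpha * direction
```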
| Stylized Neural Painting | An image-to-painting translation method that generates vivid and realistic painting artworks with controllable styles |

  • [Zhengxia Zou](http://www-personal.umich.edu/~zzhengxi/)
  • [Tianyang Shi](https://www.shitianyang.tech/)
  • [Yi Yuan](https://yiyuan1991.github.io/)
  • [Zhenwei Shi](http://levir.buaa.edu.cn/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.01543)](https://doi.org/10.1109/CVPR46437.2021.01543) [![](https://img.shields.io/github/stars/jiupinjia/stylized-neural-painting?style=social)](https://github.com/jiupinjia/stylized-neural-painting)

  • [arxiv](https://arxiv.org/abs/2011.08114)

  • [project](https://jiupinjia.github.io/neuralpainter/)

  • [yt](https://www.youtube.com/watch?v=oerb-nwrXhk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1ch_41GtcQNQT1NLOA21vQJ_rQOjjv9D8) | 01.12.2020 |
| BiT | Big Transfer: General Visual Representation Learning |

  • [Alexander Kolesnikov](https://github.com/akolesnikoff)
  • [Lucas Beyer](http://lucasb.eyer.be)
  • [Xiaohua Zhai](https://github.com/xiaohuazhai)
  • [Joan Puigcerver](https://www.jpuigcerver.net/)
  • others
  • [Jessica Yung](https://github.com/jessicayung)
  • [Sylvain Gelly](https://scholar.google.com/citations?user=m7LvuTkAAAAJ)
  • [Neil Houlsby](https://neilhoulsby.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-030-58558-7_29)](https://doi.org/10.1007/978-3-030-58558-7_29) [![](https://img.shields.io/github/stars/google-research/big_transfer?style=social)](https://github.com/google-research/big_transfer)

  • [arxiv](https://arxiv.org/abs/1912.11370), [arxiv](https://arxiv.org/abs/2106.05237)

  • [hf](https://huggingface.co/google/bit-50)

  • [medium](https://sh-tsang.medium.com/review-big-transfer-bit-general-visual-representation-learning-cb4bf8ed9732)

  • [yt](https://youtu.be/k1GOF2jmX7c), [yt](https://youtu.be/0iTgt5-SOsU), [yt](https://youtu.be/X5Rhm__OxvA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google-research/big_transfer/blob/master/colabs/big_transfer_tf2.ipynb) | 12.11.2020 |
| LaSAFT | Latent Source Attentive Frequency Transformation for Conditioned Source Separation | [Woosung Choi](https://ws-choi.github.io/) | [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICASSP39728.2021.9413896)](https://doi.org/10.1109/ICASSP39728.2021.9413896) [![](https://img.shields.io/github/stars/ws-choi/Conditioned-Source-Separation-LaSAFT?style=social)](https://github.com/ws-choi/Conditioned-Source-Separation-LaSAFT)

  • [arxiv](https://arxiv.org/abs/2010.11631)

  • [data](https://sigsep.github.io/datasets/musdb.html)

  • [project](https://lasaft.github.io/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ws-choi/Conditioned-Source-Separation-LaSAFT/blob/master/colab_demo/LaSAFT_with_GPoCM_Stella_Jang_Example.ipynb) | 01.11.2020 |
| Lifespan Age Transformation Synthesis | Multi-domain image-to-image generative adversarial network architecture, whose learned latent space models a continuous bi-directional aging process |

  • [Roy Or-El](https://homes.cs.washington.edu/~royorel/)
  • [Soumyadip Sengupta](https://homes.cs.washington.edu/~soumya91/)
  • [Ohad Fried](https://www.ohadf.com/)
  • [Eli Shechtman](https://research.adobe.com/person/eli-shechtman/)
  • [Ira Kemelmacher-Shlizerman](https://www.irakemelmacher.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-030-58539-6_44)](https://doi.org/10.1007/978-3-030-58539-6_44) [![](https://img.shields.io/github/stars/royorel/Lifespan_Age_Transformation_Synthesis?style=social)](https://github.com/royorel/Lifespan_Age_Transformation_Synthesis)

  • [arxiv](https://arxiv.org/abs/2003.09764)

  • [git](https://github.com/royorel/FFHQ-Aging-Dataset), [git](https://github.com/NVIDIA/pix2pixHD), [git](https://github.com/rosinality/style-based-gan-pytorch)

  • [project](https://grail.cs.washington.edu/projects/lifespan_age_transformation_synthesis/)

  • [yt](https://youtu.be/_jTFcjN2hBk), [yt](https://youtu.be/9fulnt2_q_Y)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/royorel/Lifespan_Age_Transformation_Synthesis/blob/master/LATS_demo.ipynb) | 31.10.2020 |
| HiGAN | Semantic Hierarchy Emerges in Deep Generative Representations for Scene Synthesis |

  • [Ceyuan Yang](https://ceyuan.me/)
  • [Yujun Shen](https://shenyujun.github.io/)
  • [Bolei Zhou](https://boleizhou.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/s11263-020-01429-5)](https://doi.org/10.1007/s11263-020-01429-5) [![](https://img.shields.io/github/stars/genforce/higan?style=social)](https://github.com/genforce/higan)

  • [arxiv](https://arxiv.org/abs/1911.09267), [arxiv](https://arxiv.org/abs/1412.6856), [arxiv](https://arxiv.org/abs/1906.10112)

  • [project](https://genforce.github.io/higan/)

  • [yt](https://www.youtube.com/watch?v=X5yWu2Jwjpg)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/genforce/higan/blob/master/docs/HiGAN_Bedroom.ipynb) | 14.10.2020 |
| InterFaceGAN | Interpreting the Latent Space of GANs for Semantic Face Editing |

  • [Yujun Shen](https://shenyujun.github.io/)
  • [Jinjin Gu](https://www.jasongt.com/)
  • [Xiaoou Tang](https://www.ie.cuhk.edu.hk/people/xotang.shtml)
  • [Bolei Zhou](https://boleizhou.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00926)](https://doi.org/10.1109/CVPR42600.2020.00926) [![](https://img.shields.io/github/stars/genforce/interfacegan?style=social)](https://github.com/genforce/interfacegan)

  • [arxiv](https://arxiv.org/abs/1907.10786), [arxiv](https://arxiv.org/abs/2005.09635), [arxiv](https://arxiv.org/abs/1710.10196)

  • [git](https://github.com/tkarras/progressive_growing_of_gans), [git](https://github.com/NVlabs/stylegan)

  • [project](https://genforce.github.io/interfacegan/)

  • [yt](https://www.youtube.com/watch?v=uoftpl3Bj6w)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/genforce/interfacegan/blob/master/docs/InterFaceGAN.ipynb) | 13.10.2020 |
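InterFaceGAN finds a linear hyperplane separating latent codes by a binary attribute and edits along its normal; a minimal sketch, assuming you already have sampled latents and attribute labels:

```python
import numpy as np
from sklearn import svm

def attribute_direction(latents, labels):
    """Fit a linear SVM separating latent codes by a binary attribute and
    return the unit normal of the decision hyperplane."""
    clf = svm.LinearSVC()
    clf.fit(latents, labels)   # latents: (N, 512), labels: (N,) in {0, 1}
    n = clf.coef_.reshape(-1)
    return n / np.linalg.norm(n)

# editing: z_edited = z + alpha * n  (alpha > 0 adds the attribute, < 0 removes it)
```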
| Instance-aware Image Colorization | Novel deep learning framework to achieve instance-aware colorization | [Jheng-Wei Su](https://github.com/ericsujw) | [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00799)](https://doi.org/10.1109/CVPR42600.2020.00799) [![](https://img.shields.io/github/stars/ericsujw/InstColorization?style=social)](https://github.com/ericsujw/InstColorization)

  • [arxiv](https://arxiv.org/abs/2005.10825)

  • [project](https://ericsujw.github.io/InstColorization/)

  • [yt](https://www.youtube.com/watch?v=Zj1N4uE1ehk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ericsujw/InstColorization/blob/master/InstColorization.ipynb) | 30.08.2020 |
| MoCo | Momentum Contrast for unsupervised visual representation learning |

  • [Kaiming He](https://kaiminghe.github.io/)
  • [Haoqi Fan](https://haoqifan.github.io/)
  • [Yuxin Wu](https://ppwwyyxx.com/)
  • [Saining Xie](http://sainingxie.com/)
  • [Ross Girshick](https://www.rossgirshick.info/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00975)](https://doi.org/10.1109/CVPR42600.2020.00975) [![](https://img.shields.io/github/stars/facebookresearch/moco?style=social)](https://github.com/facebookresearch/moco)

  • [arxiv](https://arxiv.org/abs/1911.05722), [arxiv](https://arxiv.org/abs/2003.04297), [arxiv](https://arxiv.org/abs/1706.02677)

  • [git](https://github.com/ppwwyyxx/moco.tensorflow)

  • [yt](https://youtu.be/LvHwBQF14zs), [yt](https://youtu.be/4VVGtYPM8JE), [yt](https://youtu.be/o5Qh61dLDf0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/moco/blob/colab-notebook/colab/moco_cifar10_demo.ipynb) | 20.08.2020 |
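A sketch of one MoCo step following the paper's pseudocode: momentum-update the key encoder, then compute the InfoNCE loss against a queue of negatives; the encoders, augmented batches, and queue are assumed given.

```python
import torch
import torch.nn.functional as F

def moco_step(q_enc, k_enc, queue, x_q, x_k, m=0.999, T=0.07):
    """One MoCo update (CPU sketch). q_enc/k_enc: query/key encoders mapping
    images to features; queue: (K, dim) tensor of negative keys."""
    with torch.no_grad():
        # momentum update of the key encoder
        for p_q, p_k in zip(q_enc.parameters(), k_enc.parameters()):
            p_k.data = p_k.data * m + p_q.data * (1.0 - m)
        k = F.normalize(k_enc(x_k), dim=1)

    q = F.normalize(q_enc(x_q), dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)            # (N, 1) positive logits
    l_neg = q @ queue.t()                               # (N, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / T
    labels = torch.zeros(q.shape[0], dtype=torch.long)  # positives at index 0
    loss = F.cross_entropy(logits, labels)
    return loss, k  # enqueue k (and dequeue the oldest keys) after the step
```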
| CAPE | Learning to Dress 3D People in Generative Clothing |

  • [Qianli Ma](https://qianlim.github.io/)
  • [Jinlong Yang](https://scholar.google.com/citations?user=HGt39SUAAAAJ)
  • [Anurag Ranjan](https://anuragranj.github.io/)
  • [Sergi Pujades](https://github.com/pujades)
  • others
  • [Gerard Pons-Moll](https://virtualhumans.mpi-inf.mpg.de/)
  • [Siyu Tang](https://scholar.google.com/citations?user=BUDh_4wAAAAJ)
  • [Michael Black](https://ps.is.mpg.de/~black)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00650)](https://doi.org/10.1109/CVPR42600.2020.00650) [![](https://img.shields.io/github/stars/qianlim/CAPE?style=social)](https://github.com/qianlim/CAPE)

  • [arxiv](https://arxiv.org/abs/1907.13615), [arxiv](https://arxiv.org/abs/1807.10267), [arxiv](https://arxiv.org/abs/2004.02658)

  • [data](https://cape.is.tue.mpg.de/dataset)

  • [git](https://github.com/MPI-IS/mesh), [git](https://github.com/vchoutas/smplx), [git](https://github.com/anuragranj/coma)

  • [medium](https://medium.com/@mahyarfardinfar/learning-to-dress-3d-people-in-generative-clothing-486eb90136ff)

  • [project](https://cape.is.tue.mpg.de/)

  • [yt](https://youtu.be/e4W-hPFNwDE), [yt](https://youtu.be/NOEA-Rtq6vM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1DCNo2OyyTNi1xDG-7j32FZQ9sBA6i9Ys) | 05.08.2020 |
| Rewriting a Deep Generative Model | Asks whether a deep network can be reprogrammed to follow different rules by letting a user directly change its weights instead of training on a dataset |

  • [David Bau](https://people.csail.mit.edu/davidbau/home/)
  • [Steven Liu](http://people.csail.mit.edu/stevenliu/)
  • [Tongzhou Wang](https://ssnl.github.io/)
  • [Jun-Yan Zhu](https://www.cs.cmu.edu/~junyanz/)
  • [Antonio Torralba](https://groups.csail.mit.edu/vision/torralbalab/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-030-58452-8_21)](https://doi.org/10.1007/978-3-030-58452-8_21) [![](https://img.shields.io/github/stars/davidbau/rewriting?style=social)](https://github.com/davidbau/rewriting)

  • [arxiv](https://arxiv.org/abs/2007.15646), [arxiv](https://arxiv.org/abs/1912.04958)

  • [git](https://github.com/NVlabs/stylegan2), [git](https://github.com/rosinality/stylegan2-pytorch)

  • [project](https://rewriting.csail.mit.edu/)

  • [yt](https://www.youtube.com/watch?v=i2_-zNqtEPk), [yt](https://rewriting.csail.mit.edu/video/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/davidbau/rewriting/blob/master/notebooks/rewriting-interface.ipynb) | 01.08.2020 |
| SIREN | Implicit Neural Representations with Periodic Activation Functions |

  • [Vincent Sitzmann](https://vsitzmann.github.io/)
  • [Julien Martel](http://web.stanford.edu/~jnmartel/)

| [![](https://img.shields.io/github/stars/vsitzmann/siren?style=social)](https://github.com/vsitzmann/siren)

  • [arxiv](https://arxiv.org/abs/2006.09661)

  • [data](https://drive.google.com/drive/folders/1_iq__37-hw7FJOEUK1tX7mdp8SKB368K)

  • [neurips](https://proceedings.neurips.cc/paper/2020/hash/53c04118df112c13a8c34b38343b9c10-Abstract.html)

  • [project](https://vsitzmann.github.io/siren/)

  • [yt](https://www.youtube.com/watch?v=Q2fLWGBeaiI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/vsitzmann/siren/blob/master/explore_siren.ipynb) | 25.06.2020 |
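A minimal SIREN layer with the paper's sine activation and initialization scheme (omega_0 = 30 as in the paper):

```python
import numpy as np
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """One SIREN layer: sin(omega_0 * (Wx + b)), initialized as in the paper."""
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            # first layer: U(-1/n, 1/n); later layers: U(-sqrt(6/n)/w0, sqrt(6/n)/w0)
            bound = (1.0 / in_features if is_first
                     else np.sqrt(6.0 / in_features) / omega_0)
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))
```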
| 3D Photo Inpainting | Method for converting a single RGB-D input image into a 3D photo, i.e., a multi-layer representation for novel view synthesis that contains hallucinated color and depth structures in regions occluded in the original view |

  • [Meng-Li Shih](https://shihmengli.github.io/)
  • [Shih-Yang Su](https://lemonatsu.github.io/)
  • [Johannes Kopf](https://johanneskopf.de/)
  • [Jia-Bin Huang](https://jbhuang0604.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00805)](https://doi.org/10.1109/CVPR42600.2020.00805) [![](https://img.shields.io/github/stars/vt-vl-lab/3d-photo-inpainting?style=social)](https://github.com/vt-vl-lab/3d-photo-inpainting)

  • [arxiv](https://arxiv.org/abs/2004.04727)

  • [project](https://shihmengli.github.io/3D-Photo-Inpainting/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1706ToQrkIZshRSJSHvZ1RuCiM__YX3Bz) | 04.05.2020 |
| Motion Supervised co-part Segmentation | A self-supervised deep learning method for co-part segmentation |

  • [Aliaksandr Siarohin](https://aliaksandrsiarohin.github.io/aliaksandr-siarohin-website/)
  • [Subhankar Roy](https://github.com/roysubhankar)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICPR48806.2021.9412520)](https://doi.org/10.1109/ICPR48806.2021.9412520) [![](https://img.shields.io/github/stars/AliaksandrSiarohin/motion-cosegmentation?style=social)](https://github.com/AliaksandrSiarohin/motion-cosegmentation)

  • [arxiv](http://arxiv.org/abs/2004.03234)

  • [git](https://github.com/AliaksandrSiarohin/video-preprocessing)

  • [yt](https://www.youtube.com/watch?v=RJ4Nj1wV5iA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/AliaksandrSiarohin/motion-cosegmentation/blob/master/part_swap.ipynb) | 07.04.2020 |
| Onsets and Frames | Onsets and Frames is an automatic music transcription framework with piano and drums models |

  • [Curtis Hawthorne](https://github.com/cghawthorne)
  • [Erich Elsen](https://github.com/ekelsen)

| [![](https://img.shields.io/github/stars/magenta/magenta?style=social)](https://github.com/magenta/magenta/tree/main/magenta/models/onsets_frames_transcription)

  • [arxiv](https://arxiv.org/abs/1710.11153), [arxiv](https://arxiv.org/abs/1810.12247), [arxiv](https://arxiv.org/abs/2004.00188)

  • [blog post](http://g.co/magenta/onsets-frames)

  • [data](https://g.co/magenta/maestro-wave2midi2wave), [data](https://magenta.tensorflow.org/datasets/e-gmd)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/notebooks/magenta/onsets_frames_transcription/onsets_frames_transcription.ipynb) | 02.04.2020 |
| FBA Matting | Low-cost modification to alpha matting networks to also predict the foreground and background colours |

  • [Marco Forte](https://github.com/MarcoForte)
  • [François Pitié](https://francois.pitie.net/)

| [![](https://img.shields.io/github/stars/MarcoForte/FBA_Matting?style=social)](https://github.com/MarcoForte/FBA_Matting)

  • [arxiv](https://arxiv.org/abs/2003.07711)

  • [git](https://github.com/MarcoForte/closed-form-matting)

  • [hf](https://huggingface.co/spaces/leonelhs/FBA-Matting)

  • [pwc](https://paperswithcode.com/sota/image-matting-on-composition-1k?p=f-b-alpha-matting)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1Ut2szLBTxPejGHt_GYUkua21yUVWseOE) | 19.03.2020 |
| BERT score | An automatic evaluation metric for text generation | [Tianyi Zhang](https://tiiiger.github.io/) | [![](https://img.shields.io/github/stars/Tiiiger/bert_score?style=social)](https://github.com/Tiiiger/bert_score)
  • [arxiv](https://arxiv.org/abs/1904.09675)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Tiiiger/bert_score/blob/master/example/Demo.ipynb) | 05.03.2020 |
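A short usage sketch of the published bert-score API; the candidate and reference sentences are made-up examples.

```python
# pip install bert-score
from bert_score import score

cands = ["The cat sat on the mat."]
refs = ["A cat was sitting on the mat."]

# P, R, F1 are tensors with one entry per candidate/reference pair.
P, R, F1 = score(cands, refs, lang="en", verbose=True)
print(f"BERTScore F1: {F1.mean():.4f}")
```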
| Generating Piano Music with Transformer | This Colab notebook lets you play with pretrained Transformer models for piano music generation, based on the Music Transformer |

  • [Ian Simon](https://github.com/iansimon)
  • [Anna Huang](https://github.com/czhuang)
  • [Jesse Engel](https://github.com/jesseengel)
  • [Curtis Hawthorne](https://github.com/cghawthorne)

|

  • [arxiv](https://arxiv.org/abs/1706.03762), [arxiv](https://arxiv.org/abs/1809.04281)

  • [blog post](http://g.co/magenta/music-transformer)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/notebooks/magenta/piano_transformer/piano_transformer.ipynb) | 16.09.2019 |
| HMR | End-to-end framework for reconstructing a full 3D mesh of a human body from a single RGB image |

  • [Angjoo Kanazawa](https://people.eecs.berkeley.edu/~kanazawa/)
  • [Michael Black](https://ps.is.mpg.de/person/black)
  • [David Jacobs](https://www.cs.umd.edu/~djacobs/)
  • [Jitendra Malik](https://people.eecs.berkeley.edu/~malik/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR.2018.00744)](https://doi.org/10.1109/CVPR.2018.00744) [![](https://img.shields.io/github/stars/akanazawa/hmr?style=social)](https://github.com/akanazawa/hmr)

  • [arxiv](https://arxiv.org/abs/1712.06584)

  • [docker](https://hub.docker.com/r/dawars/hmr/)

  • [git](https://github.com/mattloper/chumpy), [git](https://github.com/CMU-Perceptual-Computing-Lab/openpose), [git](https://github.com/MandyMo/pytorch_HMR), [git](https://github.com/layumi/hmr), [git](https://github.com/russoale/hmr2.0)

  • [project](https://akanazawa.github.io/hmr/)

  • [yt](https://youtu.be/bmMV9aJKa-c)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Dene33/video_to_bvh/blob/master/video_to_bvh.ipynb) | 15.03.2019 |
| GANSynth | This notebook is a demo of GANSynth, which generates audio with Generative Adversarial Networks | [Jesse Engel](https://github.com/jesseengel) | [![](https://img.shields.io/github/stars/magenta/magenta?style=social)](https://github.com/magenta/magenta/tree/main/magenta/models/gansynth)

  • [arxiv](https://arxiv.org/abs/1902.08710), [arxiv](https://arxiv.org/abs/1809.11096)

  • [project](https://storage.googleapis.com/magentadata/papers/gansynth/index.html)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/notebooks/magenta/gansynth/gansynth_demo.ipynb) | 25.02.2019 |
| Latent Constraints | Conditional Generation from Unconditional Generative Models |

  • [Jesse Engel](https://github.com/jesseengel)
  • [Matthew Hoffman](http://matthewdhoffman.com/)
  • [Adam Roberts](https://github.com/adarob)

|

  • [arxiv](https://arxiv.org/abs/1711.05772)

  • [data](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/notebooks/latent_constraints/latentconstraints.ipynb) | 27.11.2017 |
| Performance RNN | This notebook shows you how to generate new performed compositions from a trained model |

  • [Ian Simon](https://github.com/iansimon)
  • [Sageev Oore](https://github.com/osageev)
  • [Curtis Hawthorne](https://github.com/cghawthorne)

| [![](https://img.shields.io/github/stars/magenta/magenta?style=social)](https://github.com/magenta/magenta/tree/master/magenta/models/performance_rnn)

  • [blog post](https://magenta.tensorflow.org/performance-rnn)

  • [data](http://www.piano-e-competition.com/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/notebooks/magenta/performance_rnn/performance_rnn.ipynb) | 11.07.2017 |
| NSynth | This colab notebook has everything you need to upload your own sounds and use NSynth models to reconstruct and interpolate between them |

  • [Jesse Engel](https://github.com/jesseengel)
  • [Cinjon Resnick](https://github.com/cinjon)
  • [Adam Roberts](https://github.com/adarob)
  • [Sander Dieleman](https://benanne.github.io/)
  • others
  • [Karen Simonyan](https://scholar.google.com/citations?user=L7lMQkQAAAAJ)
  • [Mohammad Norouzi](https://norouzi.github.io/)
  • [Douglas Eck](https://github.com/douglaseck)

| [![](https://img.shields.io/github/stars/tensorflow/magenta?style=social)](https://github.com/tensorflow/magenta/tree/master/magenta/models/nsynth)

  • [arxiv](https://arxiv.org/abs/1704.01279)

  • [blog post](https://magenta.tensorflow.org/nsynth)

  • [data](https://magenta.tensorflow.org/datasets/nsynth)

  • [tutorial](https://magenta.tensorflow.org/nsynth-fastgen)

  • [yt](https://www.youtube.com/watch?v=AaALLWQmCdI), [yt](https://www.youtube.com/watch?v=BOoSy-Pg8is)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/notebooks/magenta/nsynth/nsynth.ipynb) | 06.04.2017 |
## Tutorials
| name | description | authors | links | colaboratory | update |
|------|-------------|:--------|:------|:------------:|:------:|
| Kornia | Library composed of a subset of packages containing operators that can be inserted within neural networks to train models to perform image transformations, epipolar geometry, depth estimation, and low-level image processing such as filtering and edge detection that operate directly on tensors |

  • [Edgar Riba](https://github.com/edgarriba)
  • [Dmytro Mishkin](https://dmytro.ai/)
  • [Daniel Ponsa](https://github.com/DanielPonsa)
  • [Ethan Rublee](https://github.com/ethanrublee)
  • [Gary Bradski](https://github.com/garybradski)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/WACV45572.2020.9093363)](https://doi.org/10.1109/WACV45572.2020.9093363) [![](https://img.shields.io/github/stars/kornia/kornia?style=social)](https://github.com/kornia/kornia)

  • [arxiv](https://arxiv.org/abs/1910.02190)

  • [blog post](https://opencv.org/kornia-an-open-source-differentiable-computer-vision-library-for-pytorch/)

  • [docs](https://kornia.readthedocs.io/en/latest/)

  • [slack](https://join.slack.com/t/kornia/shared_invite/zt-csobk21g-2AQRi~X9Uu6PLMuUZdvfjA)

  • [twitter](https://twitter.com/kornia_foss)

  • [website](https://kornia.github.io/)

  • [yt](https://www.youtube.com/channel/UCI1SE1Ij2Fast5BSKxoa7Ag), [yt](https://youtu.be/3RmCYFhwclE), [yt](https://youtu.be/AAZa-mXjYF0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/kornia/kornia/blob/master/examples/augmentation/kornia_augmentation.ipynb) | 04.12.2024 |
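A small sketch of the tensor-in, tensor-out style of Kornia operators described above; the random tensor stands in for a real image batch.

```python
# pip install kornia
import torch
import kornia

img = torch.rand(1, 3, 224, 224)  # a batch of RGB image tensors in [0, 1]

gray = kornia.color.rgb_to_grayscale(img)                        # (1, 1, 224, 224)
blurred = kornia.filters.gaussian_blur2d(img, (5, 5), (1.5, 1.5))
edges = kornia.filters.sobel(gray)                               # first-order image gradients
```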
| AutoGen | Framework that enables development of LLM applications using multiple agents that can converse with each other to solve tasks | [microsoft](https://github.com/microsoft) | [![](https://img.shields.io/github/stars/microsoft/autogen?style=social)](https://github.com/microsoft/autogen)

  • [blog post](https://www.microsoft.com/en-us/research/blog/autogen-enabling-next-generation-large-language-model-applications/)

  • [discord](https://discord.gg/pAbnFJrkgZ)

  • [medium](https://medium.com/@multiplatform.ai/microsoft-autogen-transforming-ai-frameworks-for-enhanced-problem-solving-video-ac2655e7cdf)

  • [project](https://microsoft.github.io/autogen/)

  • [yt](https://youtu.be/zdcCD--IieY), [yt](https://youtu.be/dCCr52uT0W8), [yt](https://youtu.be/JMpgsx74XDI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/microsoft/autogen/blob/main/python/packages/autogen-core/docs/src/user-guide/core-user-guide/quickstart.ipynb) | 04.12.2024 |
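The linked notebook targets the newer autogen-core quickstart; the sketch below instead uses the classic pyautogen two-agent pattern, with a placeholder model config.

```python
# pip install pyautogen  -- model name and API key below are placeholders.
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4o-mini", "api_key": "YOUR_KEY"}]}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",       # fully automated conversation
    code_execution_config=False,    # do not execute proposed code in this sketch
)

# The user proxy relays the task; the assistant replies (and may propose code).
user_proxy.initiate_chat(assistant, message="Plot sin(x) from 0 to 2*pi.")
```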
| dm_control | DeepMind Infrastructure for Physics-Based Simulation |

  • [Saran Tunyasuvunakool](https://github.com/saran-t)
  • [Alistair Muldal](https://github.com/alimuldal)
  • [Yotam Doron](http://www.yotamdoron.com/)
  • [Siqi Liu](http://siqi.fr/)
  • others
  • [Steven Bohez](https://github.com/sbohez)
  • [Josh Merel](https://sites.google.com/site/jsmerel/)
  • [Tom Erez](https://github.com/erez-tom)
  • [Timothy Lillicrap](https://contrastiveconvergence.net/~timothylillicrap/index.php)
  • [Nicolas Heess](https://scholar.google.com/citations?user=79k7bGEAAAAJ)
  • [Yuval Tassa](https://github.com/yuvaltassa)

| [![](https://img.shields.io/github/stars/deepmind/dm_control?style=social)](https://github.com/deepmind/dm_control)

  • [arxiv](https://arxiv.org/abs/2006.12983), [arxiv](https://arxiv.org/abs/1801.00690), [arxiv](https://arxiv.org/abs/1902.07151), [arxiv](https://arxiv.org/abs/1707.02286), [arxiv](https://arxiv.org/abs/1802.09564), [arxiv](https://arxiv.org/abs/1802.10567)

  • [blog post](https://www.deepmind.com/publications/dm-control-software-and-tasks-for-continuous-control)

  • [wiki](https://en.wikipedia.org/wiki/Tippe_top)

  • [yt](https://youtu.be/CMjoiU482Jk), [yt](https://youtu.be/rAai4QzcYbs), [yt](https://youtu.be/WhaRsrlaXLk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/dm_control/blob/master/tutorial.ipynb) | 03.12.2024 |
| MuJoCo | A general purpose physics engine that aims to facilitate research and development in robotics, biomechanics, graphics and animation, machine learning, and other areas which demand fast and accurate simulation of articulated structures interacting with their environment |

  • [Emo Todorov](https://homes.cs.washington.edu/~todorov/)
  • [Tom Erez](https://github.com/erez-tom)
  • [Yuval Tassa](https://github.com/yuvaltassa)

| [![](https://img.shields.io/github/stars/deepmind/mujoco?style=social)](https://github.com/deepmind/mujoco)

  • [arxiv](https://arxiv.org/abs/2006.12983)

  • [deepmind](https://www.deepmind.com/blog/opening-up-a-physics-simulator-for-robotics), [deepmind](https://www.deepmind.com/blog/open-sourcing-mujoco)

  • [docs](https://mujoco.readthedocs.io/en/latest/overview.html)

  • [website](https://mujoco.org/)

  • [wiki](https://en.wikipedia.org/wiki/Tippe_top), [wiki](https://en.wikipedia.org/wiki/Chaos_theory), [wiki](https://en.wikipedia.org/wiki/3D_projection#Mathematical_formula)

  • [yt](https://youtu.be/0ORsj_E17B0), [yt](https://youtu.be/yHZVVfsJ8mc), [yt](https://youtu.be/eyzzsGJ1iic)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/dm_control/blob/master/dm_control/mujoco/tutorial.ipynb) | 03.12.2024 |
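A minimal headless simulation sketch with the official mujoco Python bindings; the one-body XML model is illustrative.

```python
# pip install mujoco
import mujoco

XML = """
<mujoco>
  <worldbody>
    <body>
      <freejoint/>
      <geom type="sphere" size="0.1" mass="1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)
for _ in range(1000):          # advance the physics at the model's timestep
    mujoco.mj_step(model, data)
print(data.qpos)               # the free-floating sphere has fallen under gravity
```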
| YOLOv8 | State-of-the-art model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility | [Glenn Jocher](https://github.com/glenn-jocher) | [![](https://img.shields.io/github/stars/ultralytics/ultralytics?style=social)](https://github.com/ultralytics/ultralytics)

  • [COCO](http://cocodataset.org/)

  • [ImageNet](https://www.image-net.org/)

  • [blog post](https://habr.com/ru/articles/710016/)

  • [discord](https://ultralytics.com/discord)

  • [docker](https://hub.docker.com/r/ultralytics/ultralytics)

  • [docs](https://docs.ultralytics.com/)

  • [kaggle](https://www.kaggle.com/ultralytics/yolov8)

  • [twitter](https://twitter.com/ultralytics)

  • [yt](https://youtube.com/ultralytics), [yt](https://youtu.be/m9fH9OWn8YM), [yt](https://youtu.be/wuZtUMEiKWY), [yt](https://youtu.be/gRAyOPjQ9_s), [yt](https://youtu.be/fhzCwJkDONE), [yt](https://youtu.be/IHbJcOex6dk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb) | 03.12.2024 |
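A minimal ultralytics sketch: load a pretrained checkpoint and run inference; the sample image URL is the one used in the Ultralytics docs.

```python
# pip install ultralytics
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                    # pretrained nano detection model
results = model("https://ultralytics.com/images/bus.jpg")
for r in results:
    print(r.boxes.xyxy, r.boxes.cls)          # box coordinates and class ids
```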
| SAE Lens | Training Sparse Autoencoders on Language Models |

  • [Joseph Bloom](https://github.com/jbloomAus)
  • [Curt Tigges](https://curttigges.com/)
  • [David Chanin](https://chanind.github.io/)

| [![](https://img.shields.io/github/stars/jbloomAus/SAELens?style=social)](https://github.com/jbloomAus/SAELens)

  • [docs](https://jbloomaus.github.io/SAELens/)

  • [pypi](https://pypi.org/project/sae-lens/)

  • [slack](https://join.slack.com/t/opensourcemechanistic/shared_invite/zt-2k0id7mv8-CsIgPLmmHd03RPJmLUcapw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/jbloomAus/SAELens/blob/main/tutorials/tutorial_2_0.ipynb) | 03.12.2024 |
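A hedged loading sketch; the release and SAE id strings follow examples from the SAELens docs, and the three-value return signature may differ between library versions.

```python
# pip install sae-lens  -- release/sae_id names are assumptions from the docs.
from sae_lens import SAE

sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gpt2-small-res-jb",          # pretrained SAEs for GPT-2 small's residual stream
    sae_id="blocks.8.hook_resid_pre",     # the hook point the SAE was trained on
)
print(sae.cfg.d_in, sae.cfg.d_sae)        # input width vs. number of learned features
```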
| moondream | Tiny vision language model that kicks ass and runs anywhere | [Vik Korrapati](https://github.com/vikhyat) | [![](https://img.shields.io/github/stars/vikhyat/moondream?style=social)](https://github.com/vikhyat/moondream)

  • [discord](https://discord.com/invite/tRUdpjDQfH)

  • [git](https://github.com/kijai/ComfyUI-moondream)

  • [hf](https://huggingface.co/vikhyatk/moondream2), [hf](https://huggingface.co/datasets/google/docci), [hf](https://huggingface.co/vikhyatk/moondream1)

  • [medium](https://medium.com/@indradumnabanerjee/getting-started-with-vision-language-model-moondream-783c264a02b9)

  • [website](https://moondream.ai/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/vikhyat/moondream/blob/main/notebooks/Finetuning.ipynb) | 30.11.2024 |
| LangGraph | Library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows | [LangChain](https://www.langchain.com/) | [![](https://img.shields.io/github/stars/langchain-ai/langgraph?style=social)](https://github.com/langchain-ai/langgraph)

  • [blog post](https://www.langchain.com/langgraph)

  • [docs](https://langchain-ai.github.io/langgraph/)

  • [git](https://github.com/langchain-ai/langgraphjs)

  • [medium](https://towardsdatascience.com/from-basics-to-advanced-exploring-langgraph-e8c1cf4db787?gi=eb24d42206bf), [medium](https://medium.com/cyberark-engineering/building-production-ready-ai-agents-with-langgraph-a-real-life-use-case-7bda34c7f4e4)

  • [pypi](https://pypi.org/project/langgraph/)

  • [website](https://www.langchain.com/langgraph)

  • [yt](https://www.youtube.com/playlist?list=PLfaIDFEXuae16n2TWUkKq5PgJ0w6Pkwtg), [yt](https://youtu.be/1bUy-1hGZpI), [yt](https://youtu.be/PqS1kib7RTw), [yt](https://youtu.be/PNr3f7QyQU4), [yt](https://youtu.be/qaWOwbFw3cs)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/langchain-ai/langgraph/blob/main/docs/docs/tutorials/introduction.ipynb) | 28.11.2024 |
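A minimal StateGraph sketch with two plain-Python nodes and no LLM, assuming a recent langgraph release.

```python
# pip install langgraph
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    text: str

def shout(state: State) -> State:
    return {"text": state["text"].upper()}

def punctuate(state: State) -> State:
    return {"text": state["text"] + "!"}

builder = StateGraph(State)
builder.add_node("shout", shout)
builder.add_node("punctuate", punctuate)
builder.add_edge(START, "shout")
builder.add_edge("shout", "punctuate")
builder.add_edge("punctuate", END)

graph = builder.compile()
print(graph.invoke({"text": "hello"}))   # {'text': 'HELLO!'}
```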
| LangChain | Framework for developing applications powered by large language models | [LangChain](https://www.langchain.com/) | [![](https://img.shields.io/github/stars/langchain-ai/langchain?style=social)](https://github.com/langchain-ai/langchain)

  • [docs](https://python.langchain.com/docs/introduction/)

  • [git](https://github.com/langchain-ai/langchainjs), [git](https://github.com/langchain-ai/langchain-extract), [git](https://github.com/langchain-ai/chat-langchain), [git](https://github.com/langchain-ai/weblangchain)

  • [medium](https://medium.com/@neelmakvana168/what-is-lang-chain-in-llm-e55e021da2b3), [medium](https://medium.com/@bijit211987/llm-powered-applications-building-with-langchain-cad4032d733c), [medium](https://medium.com/munchy-bytes/exploring-langchain-ff13fff63340)

  • [pypi](https://pypi.org/project/langchain/)

  • [twitter](https://twitter.com/langchainai)

  • [wiki](https://en.wikipedia.org/wiki/LangChain)

  • [yt](https://www.youtube.com/playlist?list=PLqZXAkvF1bPNQER9mLmDbntNfSpzdDIU5), [yt](https://youtu.be/1bUy-1hGZpI), [yt](https://youtu.be/9AXP7tCI9PI), [yt](https://youtu.be/aywZrzNaKjs), [yt](https://youtu.be/dXxQ0LR-3Hg), [yt](https://youtu.be/sVcwVQRHIc8), [yt](https://youtu.be/MlK6SIjcjE8), [yt](https://youtu.be/TLf90ipMzfE), [yt](https://www.youtube.com/playlist?list=PLZoTAELRMXVORE4VF7WQ_fAl0L1Gljtar)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/tutorials/qa_chat_history.ipynb) | 26.11.2024 |
| ARENA | Provides talented individuals with the skills, tools, and environment necessary for upskilling in ML engineering, for the purpose of contributing directly to AI alignment in technical roles | [Callum McDougall](https://www.perfectlynormal.co.uk/) | [![](https://img.shields.io/github/stars/callummcdougall/ARENA_3.0?style=social)](https://github.com/callummcdougall/ARENA_3.0)

  • [arxiv](https://arxiv.org/abs/2211.00593)

  • [slack](https://join.slack.com/t/arena-uk/shared_invite/zt-2noug8mpy-TRYbCnc3pzj7ITNrZIjKww)

  • [website](https://arena-resources.notion.site/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1vuQOB2Gd7OcfzH2y9djXm9OdZA_DcxYz) | 26.11.2024 |
| Feast | An open source feature store for machine learning |

  • [Willem Pienaar](https://github.com/woop)
  • [Danny Chiao](https://github.com/adchia)
  • [Achal Shah](http://achals.com/)
  • [Terence Lim](https://terryyylim.github.io/portfolio/)
  • others
  • [Ches Martin](https://github.com/ches)
  • [Judah Rand](https://github.com/judahrand)
  • [Matt Delacour](https://github.com/MattDelac)
  • [Miguel Trejo Marrufo](https://github.com/TremaMiguel)
  • [Francisco Javier Arceo](https://franciscojavierarceo.github.io/)

| [![](https://img.shields.io/github/stars/feast-dev/feast?style=social)](https://github.com/feast-dev/feast)

  • [docs](https://docs.feast.dev/)

  • [git](https://github.com/baineng/feast-hive), [git](https://github.com/Shopify/feast-trino), [git](https://github.com/Azure/feast-azure), [git](https://github.com/amundsen-io/amundsen/blob/main/databuilder/databuilder/extractor/feast_extractor.py)

  • [website](https://feast.dev/)

  • [yt](https://youtu.be/DaNv-Wf1MBA), [yt](https://youtu.be/p2cuq4eJ2BY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/feast-dev/feast/blob/master/examples/quickstart/quickstart.ipynb) | 22.11.2024 |
| VC | Client software for performing real-time voice conversion using various voice conversion AI models | [w-okada](https://github.com/w-okada) | [![](https://img.shields.io/github/stars/w-okada/voice-changer?style=social)](https://github.com/w-okada/voice-changer)

  • [git](https://github.com/yxlllc/DDSP-SVC)

  • [hf](https://huggingface.co/wok000/vcclient000)

  • [yt](https://youtu.be/POo_Cg0eFMU), [yt](https://youtu.be/fba9Zhsukqw), [yt](https://youtu.be/s_GirFEGvaA), [yt](https://youtu.be/Q7bbEC4aeKM), [yt](https://youtu.be/_JXbvSTGPoo), [yt](https://youtu.be/pHhjg2JwdPI), [yt](https://youtu.be/We5oYpCR3WQ), [yt](https://youtu.be/aVfoC1EHlVs), [yt](https://youtu.be/YF1lBaqeyt8)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/hinabl/voice-changer-colab/blob/master/Hina_Modified_Realtime_Voice_Changer_on_Colab.ipynb) | 20.11.2024 |
| CatBoost | High-performance open source library for gradient boosting on decision trees |

  • [Anna Veronika Dorogush](https://github.com/annaveronika)
  • [Vasily Ershov](https://linkedin.com/in/vasily-ershov-04768199)
  • [Andrey Gulin](https://www.linkedin.com/in/andreygulin)
  • [Liudmila Prokhorenkova](https://github.com/ostroumova-la)
  • others
  • [Gleb Gusev](https://scholar.google.com/citations?user=RWX4sYcAAAAJ)
  • [Aleksandr Vorobev](https://scholar.google.com/citations?user=WiCXGGIAAAAJ)

| [![](https://img.shields.io/github/stars/catboost/catboost?style=social)](https://github.com/catboost/catboost)

  • [arxiv](https://arxiv.org/abs/1810.11363), [arxiv](https://arxiv.org/abs/1706.09516)

  • [docs](https://catboost.ai/en/docs/)

  • [medium](https://medium.com/@mohan-gupta/catboost-algorithm-2156129d740d)

  • [neurips](https://papers.nips.cc/paper_files/paper/2018/hash/14491b756b3a51daac41c24863285549-Abstract.html)

  • [pypi](https://pypi.org/project/catboost/)

  • [twitter](https://twitter.com/CatBoostML)

  • [website](https://catboost.ai/)

  • [wiki](https://en.wikipedia.org/wiki/CatBoost)

  • [yt](https://youtu.be/8o0e-r0B5xQ), [yt](https://youtu.be/usdEWSDisS0), [yt](https://youtu.be/KXOTSkPL2X4), [yt](https://youtu.be/UYDwhuyWYSo), [yt](https://youtu.be/xl1fwCza9C8), [yt](https://youtu.be/Q_xa4RvnDcY), [yt](https://youtu.be/ySla2kczbeM), [yt](https://youtu.be/47-mAVms-b8), [yt](https://youtu.be/nrGt5VKZpzc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/catboost/tutorials/blob/master/python_tutorial.ipynb) | 18.11.2024 |
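A small sketch of CatBoost's native handling of categorical features; the toy data is made up.

```python
# pip install catboost
from catboost import CatBoostClassifier, Pool

# Categorical columns are passed as-is; CatBoost does the encoding internally.
X = [["sunny", 20], ["rainy", 12], ["sunny", 25], ["cloudy", 18]]
y = [1, 0, 1, 0]

train = Pool(X, y, cat_features=[0])        # column 0 is categorical
model = CatBoostClassifier(iterations=50, verbose=False)
model.fit(train)
print(model.predict([["rainy", 15]]))
```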
| Gemma 2 | New addition to the Gemma family of lightweight, state-of-the-art open models, ranging in scale from 2 billion to 27 billion parameters | [unsloth](https://unsloth.ai/) | [![](https://img.shields.io/github/stars/unslothai/unsloth?style=social)](https://github.com/unslothai/unsloth)

  • [arxiv](https://arxiv.org/abs/2408.00118)

  • [blog post](https://blog.google/technology/developers/google-gemma-2/)

  • [discord](https://discord.gg/unsloth)

  • [hf](https://huggingface.co/google/gemma-2-2b)

  • [kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-gemma-7b-unsloth-notebook/)

  • [pypi](https://pypi.org/project/unsloth/)

  • [yt](https://youtu.be/t3js5iy1pcE), [yt](https://youtu.be/xxCkuxQuT_g), [yt](https://youtu.be/4N38V4h9S0A), [yt](https://youtu.be/qFULISWcjQc), [yt](https://youtu.be/MARG5S1uNbc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4) | 17.11.2024 |
| Llama 3.1 | First openly available model that rivals the top AI models when it comes to state-of-the-art capabilities in general knowledge, steerability, math, tool use, and multilingual translation | [unsloth](https://unsloth.ai/) | [![](https://img.shields.io/github/stars/unslothai/unsloth?style=social)](https://github.com/unslothai/unsloth)

  • [blog post](https://unsloth.ai/blog/llama3-1)

  • [discord](https://discord.gg/unsloth)

  • [hf](https://huggingface.co/meta-llama)

  • [kaggle](https://www.kaggle.com/danielhanchen/kaggle-llama-3-1-8b-unsloth-notebook)

  • [meta](https://ai.meta.com/blog/meta-llama-3-1/), [meta](https://llama.meta.com/), [meta](https://about.fb.com/news/2024/07/open-source-ai-is-the-path-forward/)

  • [pypi](https://pypi.org/project/unsloth/)

  • [twitter](https://twitter.com/unslothai)

  • [yt](https://youtu.be/QyRWqJehK7I), [yt](https://youtu.be/1xdneyn6zjw), [yt](https://youtu.be/p5O-_AiKD_Q), [yt](https://youtu.be/4rk9fHIOGTU)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp) | 17.11.2024 |
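A hedged fine-tuning setup sketch following the unsloth quickstart pattern; the model name and LoRA hyperparameters are illustrative, not the notebook's exact values.

```python
# pip install unsloth  -- requires a CUDA GPU.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,
    load_in_4bit=True,        # QLoRA-style 4-bit base weights
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                     # LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# `model` can now be passed to a standard trainer for fine-tuning.
```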
| Mistral Small | Enterprise-grade small model | [unsloth](https://unsloth.ai/) | [![](https://img.shields.io/github/stars/unslothai/unsloth?style=social)](https://github.com/unslothai/unsloth)

  • [discord](https://discord.gg/unsloth)

  • [pypi](https://pypi.org/project/unsloth/)

  • [reddit](https://www.reddit.com/r/LocalLLaMA/comments/1fj4unz/mistralaimistralsmallinstruct2409_new_22b_from/)

  • [website](https://mistral.ai/)

  • [yt](https://youtu.be/damcEQdlpqY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1oCEHcED15DzL8xXGU1VTx5ZfOJM8WY01) | 17.11.2024 |
| ORPO | Monolithic preference optimization without a reference model, which folds supervised fine-tuning and preference alignment into a single training step |

  • [Jiwoo Hong](https://jiwooya1000.github.io/)
  • [Noah Lee](https://nlee-208.github.io/)
  • [James Thorne](https://jamesthorne.com/)

| [![](https://img.shields.io/github/stars/xfactlab/orpo?style=social)](https://github.com/xfactlab/orpo)

  • [arxiv](https://arxiv.org/abs/2403.07691)

  • [discord](https://discord.gg/unsloth)

  • [git](https://github.com/unslothai/unsloth)

  • [hf](https://huggingface.co/datasets/reciperesearch/dolphin-sft-v0.1-preference), [hf](https://huggingface.co/docs/trl/main/en/orpo_trainer)

  • [medium](https://medium.com/@AriaLeeNotAriel/numbynum-orpo-monolithic-optimization-without-reference-model-hong-et-al-2024-reviewed-262d0778e08c), [medium](https://medium.com/@zergtant/optimizing-language-model-preferences-without-a-reference-model-introducing-the-orpo-method-1144b3e7aec3)

  • [pypi](https://pypi.org/project/unsloth/)

  • [reddit](https://www.reddit.com/r/LLMResearch/comments/1bh8iq5/orpo_monolithic_preference_optimization_without/)

  • [yt](https://youtu.be/52kMBrAI_IM), [yt](https://youtu.be/6kkJGkPZP88), [yt](https://youtu.be/8MEPCPdKUH8)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/11t4njE3c4Lxl-07OD8lJSMKkfyJml3Tn) | 17.11.2024 |
| Phi-3.5 | 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5, despite being small enough to be deployed on a phone | [unsloth](https://unsloth.ai/) | [![](https://img.shields.io/github/stars/unslothai/unsloth?style=social)](https://github.com/unslothai/unsloth)

  • [arxiv](https://arxiv.org/abs/2404.14219)

  • [blog post](https://azure.microsoft.com/en-us/blog/introducing-phi-3-redefining-whats-possible-with-slms/)

  • [discord](https://discord.gg/unsloth)

  • [hf](https://huggingface.co/collections/microsoft/phi-3-6626e15e9585a200d2d761e3)

  • [medium](https://medium.com/@mysocial81/phi-3-5-microsofts-efficient-multilingual-and-secure-open-source-slms-5ed7d36738aa)

  • [pypi](https://pypi.org/project/unsloth/)

  • [reddit](https://www.reddit.com/r/mlscaling/comments/1cberec/phi3_technical_report_a_highly_capable_language/), [reddit](https://www.reddit.com/r/LocalLLaMA/comments/1ey5i22/phi35_is_very_safe_microsoft_really_outdid/)

  • [twitter](https://twitter.com/unslothai)

  • [website](https://azure.microsoft.com/en-us/products/phi-3)

  • [yt](https://youtu.be/Enp70Kkjb8k)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4) | 17.11.2024 |
| Simple audio recognition | This tutorial will show you how to build a basic speech recognition network that recognizes ten different words | [Google](https://www.tensorflow.org/) |

  • [coursera](https://www.coursera.org/lecture/audio-signal-processing/stft-2-tjEQe)

  • [pwc](https://paperswithcode.com/task/speech-recognition)

  • [tf](https://www.tensorflow.org/datasets/catalog/speech_commands), [tf](https://www.tensorflow.org/tutorials/audio/simple_audio)

  • [tf.js](https://codelabs.developers.google.com/codelabs/tensorflowjs-audio-codelab/index.html)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/audio/simple_audio.ipynb) | 15.11.2024 |
| xFormers | Toolbox to Accelerate Research on Transformers |

  • [Benjamin Lefaudeux](https://github.com/blefaudeux)
  • [Francisco Massa](https://github.com/fmassa)
  • [Diana Liskovich](https://www.linkedin.com/in/dianaliskovich)
  • [Wenhan Xiong](https://xwhan.github.io/)
  • others
  • [Vittorio Caggiano](https://vittorio-caggiano.github.io/)
  • [Sean Naren](https://github.com/SeanNaren)
  • [Min Xu](https://github.com/min-xu-ai)
  • [Jieru Hu](https://github.com/jieru-hu)
  • [Marta Tintore](https://github.com/MartaTintore)
  • [Susan Zhang](https://suchenzang.github.io/)
  • [Patrick Labatut](https://github.com/patricklabatut)
  • [Daniel Haziza](https://scholar.google.com/citations?user=2eSKdFMAAAAJ)

| [![](https://img.shields.io/github/stars/facebookresearch/xformers?style=social)](https://github.com/facebookresearch/xformers)

  • [docs](https://facebookresearch.github.io/xformers/)

  • [git](https://github.com/google-research/sputnik), [git](https://github.com/hgyhungry/ge-spmm), [git](https://github.com/openai/triton), [git](https://github.com/RobinBruegger/RevTorch), [git](https://github.com/mlpen/Nystromformer), [git](https://github.com/facebookresearch/fairscale), [git](https://github.com/huggingface/pytorch-image-models), [git](https://github.com/Dao-AILab/flash-attention)

  • [yt](https://youtu.be/NJyZCdxnGe4)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/xformers/blob/main/docs/source/xformers_mingpt.ipynb) | 13.11.2024 |
| Building Your Own Federated Learning Algorithm | We discuss how to implement federated learning algorithms without deferring to the tff.learning API | [Zachary Charles](https://zachcharles.com/) |

  • [arxiv](https://arxiv.org/abs/1907.08610)

  • [blog post](https://ai.googleblog.com/2020/05/federated-analytics-collaborative-data.html)

  • [pwc](https://paperswithcode.com/task/federated-learning)

  • [tf](https://www.tensorflow.org/federated/api_docs/python/tff/learning/Model)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/federated/blob/master/docs/tutorials/building_your_own_federated_learning_algorithm.ipynb) | 01.11.2024 |
| Federated Learning for Image Classification | We use the classic MNIST training example to introduce the Federated Learning API layer of TFF, tff.learning - a set of higher-level interfaces that can be used to perform common types of federated learning tasks, such as federated training, against user-supplied models implemented in TensorFlow | [Krzysztof Ostrowski](https://github.com/krzys-ostrowski) |

  • [arxiv](https://arxiv.org/abs/1602.05629)

  • [data](https://www.nist.gov/srd/nist-special-database-19)

  • [medium](https://medium.com/tensorflow/standardizing-on-keras-guidance-on-high-level-apis-in-tensorflow-2-0-bad2b04c819a)

  • [pwc](https://paperswithcode.com/task/federated-learning), [pwc](https://paperswithcode.com/task/image-classification)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/federated/blob/master/docs/tutorials/federated_learning_for_image_classification.ipynb) | 01.11.2024 |
| Federated Learning for Text Generation | We start with a RNN that generates ASCII characters, and refine it via federated learning | [Krzysztof Ostrowski](https://github.com/krzys-ostrowski) |

  • [arxiv](https://arxiv.org/abs/1812.01097), [arxiv](https://arxiv.org/abs/1602.05629)

  • [data](http://www.ibiblio.org/pub/docs/books/gutenberg/9/98/98.txt), [data](http://www.ibiblio.org/pub/docs/books/gutenberg/4/46/46.txt)

  • [tf](https://www.tensorflow.org/hub)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/federated/blob/master/docs/tutorials/federated_learning_for_text_generation.ipynb) | 01.11.2024 |
| Custom Federated Algorithms, Part 1: Introduction to the Federated Core | This tutorial is the first part of a two-part series that demonstrates how to implement custom types of federated algorithms in TensorFlow Federated using the Federated Core - a set of lower-level interfaces that serve as a foundation upon which we have implemented the Federated Learning layer | [Krzysztof Ostrowski](https://github.com/krzys-ostrowski) |

  • [arxiv](https://arxiv.org/abs/1602.05629)

  • [pwc](https://paperswithcode.com/task/federated-learning)

  • [tf](https://www.tensorflow.org/federated/federated_core), [tf](https://www.tensorflow.org/federated/federated_learning)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/federated/blob/master/docs/tutorials/custom_federated_algorithms_1.ipynb) | 01.11.2024 |
| Custom Federated Algorithms, Part 2: Implementing Federated Averaging | This tutorial is the second part of a two-part series that demonstrates how to implement custom types of federated algorithms in TFF using the Federated Core, which serves as a foundation for the Federated Learning layer | [Krzysztof Ostrowski](https://github.com/krzys-ostrowski) | [![](https://img.shields.io/github/stars/tensorflow/federated?style=social)](https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/learning/federated_averaging.py)

  • [pwc](https://paperswithcode.com/task/federated-learning)

  • [tf](https://www.tensorflow.org/federated/federated_core), [tf](https://www.tensorflow.org/federated/federated_learning)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/federated/blob/master/docs/tutorials/custom_federated_algorithms_2.ipynb) | 01.11.2024 |
| High-performance simulations with TFF | This tutorial will describe how to set up high-performance simulations with TFF in a variety of common scenarios | [Krzysztof Ostrowski](https://github.com/krzys-ostrowski) |
  • [pwc](https://paperswithcode.com/task/federated-learning)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/federated/blob/master/docs/tutorials/simulations.ipynb) | 01.11.2024 |
| Autodistill | Uses big, slower foundation models to train small, faster supervised models | [autodistill](https://github.com/autodistill) | [![](https://img.shields.io/github/stars/autodistill/autodistill?style=social)](https://github.com/autodistill/autodistill)

  • [blog post](https://blog.roboflow.com/autodistill/)

  • [docs](https://docs.autodistill.com/)

  • [git](https://github.com/autodistill/autodistill-grounded-sam), [git](https://github.com/autodistill/autodistill-yolov8), [git](https://github.com/autodistill/autodistill-yolonas), [git](https://github.com/autodistill/autodistill-yolov5), [git](https://github.com/autodistill/autodistill-detr), [git](https://github.com/autodistill/autodistill-detic), [git](https://github.com/autodistill/autodistill-grounding-dino), [git](https://github.com/autodistill/autodistill-owl-vit), [git](https://github.com/autodistill/autodistill-sam-clip), [git](https://github.com/autodistill/autodistill-llava), [git](https://github.com/autodistill/autodistill-kosmos-2), [git](https://github.com/autodistill/autodistill-owlv2), [git](https://github.com/autodistill/autodistill-roboflow-universe), [git](https://github.com/autodistill/autodistill-azure-vision), [git](https://github.com/autodistill/autodistill-rekognition), [git](https://github.com/autodistill/autodistill-gcp-vision), [git](https://github.com/roboflow/inference)

  • [yt](https://youtu.be/gKTYMfwPo4M), [yt](https://youtu.be/M_QZ_Q0zT0k), [yt](https://youtube.com/roboflow)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-auto-train-yolov8-model-with-autodistill.ipynb) | 01.11.2024 |
| LightAutoML | Allows you to create machine learning models using just a few lines of code, or build your own custom pipeline using ready blocks |

  • [Alexander Ryzhkov](https://github.com/alexmryzhkov)
  • [Anton Vakhrushev](https://www.kaggle.com/btbpanda)
  • [Dmitry Simakov](https://github.com/DESimakov)

| [![](https://img.shields.io/github/stars/sb-ai-lab/LightAutoML?style=social)](https://github.com/sb-ai-lab/LightAutoML)

  • [arxiv](https://arxiv.org/abs/2109.01528)

  • [docs](https://lightautoml.readthedocs.io/en/latest/)

  • [git](https://github.com/Rishat-skoltech/LightAutoML_GPU), [git](https://github.com/sb-ai-lab/SLAMA)

  • [kaggle](https://www.kaggle.com/alexryzhkov/n3-tps-april-21-lightautoml-starter), [kaggle](https://www.kaggle.com/alexryzhkov/lightautoml-titanic-love), [kaggle](https://www.kaggle.com/alexryzhkov/lightautoml-extreme-short-titanic-solution), [kaggle](https://www.kaggle.com/alexryzhkov/lightautoml-houseprices-love), [kaggle](https://www.kaggle.com/simakov/lama-whitebox-preset-example), [kaggle](https://www.kaggle.com/simakov/lama-custom-automl-pipeline-example), [kaggle](https://www.kaggle.com/code/mikhailkuz/lightautoml-nn-happiness)

  • [medium](https://alexmryzhkov.medium.com/lightautoml-preset-usage-tutorial-2cce7da6f936)

  • [pypi](https://pypi.org/project/lightautoml)

  • [website](https://developers.sber.ru/portal/products/lightautoml)

  • [yt](https://www.youtube.com/live/4pbO673B9Oo), [yt](https://youtu.be/ci8uqgWFJGg), [yt](https://youtu.be/TYu1UG-E9e8), [yt](https://www.youtube.com/playlist?list=PLJU_M19giWaEXcQtWWhpOKJf_luMc12B2), [yt](https://youtu.be/hr8GbPOHaEE)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/AILab-MLTools/LightAutoML/blob/master/examples/tutorials/Tutorial_1_basics.ipynb) | 31.10.2024 |
| Crawl4AI | LLM Friendly Web Crawler & Scraper | [UncleCode](https://github.com/unclecode) | [![](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai)

  • [docs](https://crawl4ai.com/mkdocs/)

  • [medium](https://medium.com/@pankaj_pandey/crawl4ai-your-ultimate-asynchronous-web-crawling-companion-%EF%B8%8F-66a21cf57c0a)

  • [pypi](https://pypi.org/project/Crawl4AI/)

  • [twitter](https://twitter.com/unclecode)

  • [yt](https://youtu.be/Ex3EpKxlMO0), [yt](https://youtu.be/KAvuVUh0XU8), [yt](https://youtu.be/lpOb1bQO7aM), [yt](https://youtu.be/81KIBvg0bsQ)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/unclecode/crawl4ai/blob/main/docs/examples/quickstart.ipynb) | 30.10.2024 |
| NotebookLlama | Open Source version of NotebookLM | [Meta](https://www.llama.com/) | [![](https://img.shields.io/github/stars/meta-llama/llama-recipes?style=social)](https://github.com/meta-llama/llama-recipes/tree/main/recipes/quickstart/NotebookLlama)

  • [medium](https://medium.com/ai-disruption/meta-launches-open-source-version-notebookllama-rivals-googles-popular-notebooklm-9a41edd99c24)

  • [medium](https://medium.com/ai-artistry/notebook-llama-an-open-source-guide-to-building-a-pdf-to-podcast-workflow-e8fceec888a9)

  • [reddit](https://www.reddit.com/r/OpenSourceeAI/comments/1gdsmax/meta_ai_silently_releases_notebookllama_an_open/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/quickstart/NotebookLlama/Step-1%20PDF-Pre-Processing-Logic.ipynb) | 29.10.2024 |
| XGBoost | Optimized distributed gradient boosting library designed to be highly efficient, flexible and portable |

  • [Tianqi Chen](https://tqchen.com/)
  • [Carlos Guestrin](https://guestrin.su.domains/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/2939672.2939785)](https://doi.org/10.1145/2939672.2939785) [![](https://img.shields.io/github/stars/dmlc/xgboost?style=social)](https://github.com/dmlc/xgboost)

  • [docs](https://xgboost.readthedocs.org/)

  • [pypi](https://pypi.python.org/pypi/xgboost/)

  • [twitter](https://twitter.com/XGBoostProject)

  • [wiki](https://en.wikipedia.org/wiki/Gradient_boosting), [wiki](https://en.wikipedia.org/wiki/XGBoost)

  • [yt](https://www.youtube.com/playlist?list=PLblh5JKOoLULU0irPgs1SnKO6wqVjKUsQ), [yt](https://youtu.be/vV12dGe_Fho), [yt](https://youtu.be/gPciUPwWJQQ), [yt](https://youtu.be/TyvYZ26alZs), [yt](https://youtu.be/kho6oANGu_A), [yt](https://youtu.be/0Xc9LIb_HTw), [yt](https://youtu.be/OQKQHNCVf5k)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/comet-ml/comet-examples/blob/master/integrations/model-training/xgboost/notebooks/how_to_use_comet_with_xgboost_tutorial.ipynb) | 22.10.2024 |
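A minimal sketch of the scikit-learn-style XGBClassifier interface on synthetic data.

```python
# pip install xgboost
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # a simple separable target

model = XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
model.fit(X, y)
print(model.predict_proba(X[:3]))
```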
| YOLOv5 | You Only Look Once | [Glenn Jocher](https://github.com/glenn-jocher) | [![](https://img.shields.io/github/stars/ultralytics/yolov5?style=social)](https://github.com/ultralytics/yolov5)

  • [data](http://cocodataset.org/#upload)

  • [kaggle](https://www.kaggle.com/ultralytics/yolov5), [kaggle](https://www.kaggle.com/ultralytics/coco128)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb) | 19.10.2024 |
| YOLOv3 | You Only Look Once | [Glenn Jocher](https://github.com/glenn-jocher) | [![](https://img.shields.io/github/stars/ultralytics/yolov3?style=social)](https://github.com/ultralytics/yolov3)

  • [data](http://cocodataset.org/#upload)

  • [kaggle](https://www.kaggle.com/ultralytics/yolov3), [kaggle](https://www.kaggle.com/ultralytics/coco128)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ultralytics/yolov3/blob/master/tutorial.ipynb) | 19.10.2024 |
| Swarm | Educational framework exploring ergonomic, lightweight multi-agent orchestration |

  • [Ilan Bigio](https://ilanbigio.com/)
  • [James Hills](https://github.com/jhills20)
  • [Shyamal Anadkat](https://shyamal.me/)
  • [Charu Jaiswal](https://github.com/charuj)
  • others
  • [Colin Jarvis](https://github.com/colin-openai)
  • [Katia Guzman](https://github.com/katia-openai)

| [![](https://img.shields.io/github/stars/openai/swarm?style=social)](https://github.com/openai/swarm)

  • [medium](https://medium.com/@michael_79773/exploring-openais-swarm-an-experimental-framework-for-multi-agent-systems-5ba09964ca18), [medium](https://ai.plainenglish.io/openai-releases-swarm-what-is-it-b61ecb88d67e)

  • [reddit](https://www.reddit.com/r/LocalLLaMA/comments/1g56itb/openai_swarm_the_agentic_framework_should_you_care/)

  • [yt](https://youtu.be/Cw0ME8OZ0xI), [yt](https://youtu.be/q7_5eCmu0MY), [yt](https://youtu.be/LBih635lzps), [yt](https://youtu.be/npAljHBeKPc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/NirDiamant/GenAI_Agents/blob/main/all_agents_tutorials/blog_writer_swarm.ipynb) | 15.10.2024 |
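A sketch of Swarm's two primitives, Agent and client.run, following the repository README; it assumes an OPENAI_API_KEY in the environment.

```python
# pip install git+https://github.com/openai/swarm.git
from swarm import Swarm, Agent

client = Swarm()
agent = Agent(name="Writer", instructions="You write one-sentence blog intros.")

response = client.run(
    agent=agent,
    messages=[{"role": "user", "content": "Intro for a post about multi-agent systems."}],
)
print(response.messages[-1]["content"])
```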
| LM Evaluation Harness | Framework for few-shot evaluation of language models | [EleutherAI](https://www.eleuther.ai/) | [![](https://img.shields.io/github/stars/EleutherAI/lm-evaluation-harness?style=social)](https://github.com/EleutherAI/lm-evaluation-harness)

  • [arxiv](https://arxiv.org/abs/2005.14165)

  • [discord](https://discord.gg/eleutherai)

  • [git](https://github.com/AutoGPTQ/AutoGPTQ), [git](https://github.com/EleutherAI/gpt-neox), [git](https://github.com/microsoft/Megatron-DeepSpeed), [git](https://github.com/vllm-project/vllm)

  • [project](https://www.eleuther.ai/projects/large-language-model-evaluation)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/EleutherAI/lm-evaluation-harness/blob/main/examples/lm-eval-overview.ipynb) | 04.10.2024 |
| Multimodal Maestro | Gives you more control over large multimodal models to get the outputs you want | [Roboflow](https://roboflow.com/about) | [![](https://img.shields.io/github/stars/roboflow/multimodal-maestro?style=social)](https://github.com/roboflow/multimodal-maestro)

  • [arxiv](https://arxiv.org/abs/2310.11441), [arxiv](https://arxiv.org/abs/2309.17421)

  • [blog post](https://blog.roboflow.com/multimodal-maestro-advanced-lmm-prompting/)

  • [reddit](https://www.reddit.com/r/computervision/comments/186o2b2/multimodal_maestro_prompt_tools_for_use_with_lmms/)

  • [website](https://maestro.roboflow.com/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/roboflow/multimodal-maestro/blob/develop/cookbooks/multimodal_maestro_gpt_4_vision.ipynb) | 26.09.2024 |
| TRL | Set of tools to train transformer language models with Reinforcement Learning, from the Supervised Fine-tuning step, Reward Modeling step to the Proximal Policy Optimization step |

  • [Leandro von Werra](https://github.com/lvwerra)
  • [Younes Belkada](https://github.com/younesbelkada)
  • [Lewis Tunstall](https://lewtun.github.io/blog/)
  • [Edward Beeching](https://edbeeching.github.io/)
  • others
  • [Tristan Thrush](http://www.tristanthrush.com/)
  • [Nathan Lambert](https://www.natolambert.com/)

| [![](https://img.shields.io/github/stars/huggingface/trl?style=social)](https://github.com/huggingface/trl)

  • [arxiv](https://arxiv.org/abs/1909.08593)

  • [docs](http://hf.co/docs/trl)

  • [git](https://github.com/openai/lm-human-preferences)

  • [yt](https://youtu.be/xQ5nc1CF7iQ), [yt](https://youtu.be/67SO20dszNA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/huggingface/trl/blob/master/examples/notebooks/best_of_n.ipynb) | 24.09.2024 |
| The Autodiff Cookbook | You'll go through a whole bunch of neat autodiff ideas that you can cherry pick for your own work, starting with the basics |

  • [Alex Wiltschko](https://github.com/alexbw)
  • [Matthew Johnson](http://people.csail.mit.edu/mattjj/)

| [![](https://img.shields.io/github/stars/google/jax?style=social)](https://github.com/google/jax/issues/446#issuecomment-467105048)

  • [arxiv](https://arxiv.org/abs/1406.2572), [arxiv](https://arxiv.org/abs/1706.04454), [arxiv](https://arxiv.org/abs/1802.03451), [arxiv](https://arxiv.org/abs/1811.07062)

  • [book](https://mitpress.mit.edu/sites/default/files/titles/content/sicm_edition_2/book.html), [book](https://mitpress.mit.edu/books/functional-differential-geometry)

  • [git](https://github.com/google/jax#auto-vectorization-with-vmap), [git](https://github.com/hips/autograd)

  • [tutorial](http://videolectures.net/deeplearning2017_johnson_automatic_differentiation/)

  • [wiki](https://en.wikipedia.org/wiki/Truncated_Newton_method), [wiki](https://en.wikipedia.org/wiki/Pullback_(differential_geometry)), [wiki](https://en.wikipedia.org/wiki/Holomorphic_function), [wiki](https://en.wikipedia.org/wiki/Cauchy%E2%80%93Riemann_equations)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/jax/blob/main/docs/notebooks/autodiff_cookbook.ipynb) | 20.09.2024 |
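A few of the cookbook's core primitives in runnable form: grad for reverse-mode gradients, jvp and vjp for forward- and reverse-mode products.

```python
import jax
import jax.numpy as jnp

# Reverse-mode gradient of a scalar function.
grad_tanh = jax.grad(jnp.tanh)
print(grad_tanh(1.0))                            # equals 1 - tanh(1)^2

# Forward- and reverse-mode building blocks.
y, jvp_out = jax.jvp(jnp.tanh, (1.0,), (1.0,))   # Jacobian-vector product
y, vjp_fn = jax.vjp(jnp.tanh, 1.0)
(vjp_out,) = vjp_fn(1.0)                         # vector-Jacobian product
```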
| Supervision | Reusable computer vision tools | [Roboflow](https://roboflow.com/about) | [![](https://img.shields.io/github/stars/roboflow/supervision?style=social)](https://github.com/roboflow/supervision)

  • [discord](https://discord.gg/GbfgXGJ8Bk)

  • [docs](https://github.com/roboflow/inference), [docs](https://docs.roboflow.com/)

  • [git](https://github.com/roboflow/notebooks)

  • [hf](https://huggingface.co/spaces/Roboflow/Annotators)

  • [kaggle](https://www.kaggle.com/code/leoroboflow/inferring-on-a-dataset-with-a-roboflow-model)

  • [website](https://supervision.roboflow.com/)

  • [yt](https://youtu.be/uWP6UjDeZvY), [yt](https://youtu.be/4Q3ut7vqD5o), [yt](https://youtube.com/roboflow)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/roboflow/supervision/blob/main/demo.ipynb) | 19.09.2024 |
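A hedged detection-plus-annotation sketch combining supervision with an ultralytics model; the image file name is a placeholder.

```python
# pip install supervision ultralytics opencv-python
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
image = cv2.imread("image.jpg")                  # any local test image

results = model(image)[0]
detections = sv.Detections.from_ultralytics(results)

annotator = sv.BoxAnnotator()
annotated = annotator.annotate(scene=image.copy(), detections=detections)
print(len(detections), "objects detected")
```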
| PEFT | Parameter-Efficient Fine-Tuning methods enable efficient adaptation of pre-trained language models to various downstream applications without fine-tuning all the model's parameters |

  • [Sourab Mangrulkar](https://github.com/pacman100)
  • [Sylvain Gugger](https://github.com/sgugger)
  • [Lysandre Debut](http://lysand.re/)
  • [Younes Belkada](https://github.com/younesbelkada)
  • [Sayak Paul](https://sayak.dev/)

| [![](https://img.shields.io/github/stars/huggingface/peft?style=social)](https://github.com/huggingface/peft)

  • [blog post](https://www.philschmid.de/fine-tune-flan-t5-peft)

  • [docs](https://huggingface.co/docs/peft)

  • [git](https://github.com/microsoft/DeepSpeed/issues/3002)

  • [hf](https://huggingface.co/datasets/ought/raft/viewer/twitter_complaints), [hf](https://huggingface.co/bigscience/T0_3B), [hf](https://huggingface.co/bigscience/mt0-xxl), [hf](https://huggingface.co/facebook/opt-6.7b), [hf](https://huggingface.co/roberta-large), [hf](https://huggingface.co/datasets/glue/viewer/mrpc)

  • [yt](https://youtu.be/YVU5wAA6Txo), [yt](https://youtu.be/Us5ZFp16PaU), [yt](https://youtu.be/YKCtbIJC3kQ)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/huggingface/peft/blob/master/examples/int8_training/Finetune_flan_t5_large_bnb_peft.ipynb) | 13.09.2024 |
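A minimal LoRA sketch with peft's LoraConfig/get_peft_model API; gpt2 and the ranks shown are illustrative choices.

```python
# pip install peft transformers
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                    # low-rank update dimension
    lora_alpha=32,
    lora_dropout=0.05,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()   # only a small fraction is trainable
```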
| SAA+ | Framework, Segment Any Anomaly +, for zero-shot anomaly segmentation with hybrid prompt regularization to improve the adaptability of modern foundation models |

  • [Yunkang Cao](https://caoyunkang.github.io/)
  • [Xiaohao Xu](https://scholar.google.com/citations?user=3Ifn2DoAAAAJ)
  • [Chen Sun](https://www.researchgate.net/profile/Chen-Sun-58)
  • [Yuqi Cheng](https://scholar.google.com/citations?user=02BC-WgAAAAJ)
  • others
  • [Zongwei Du](https://github.com/duzongwei)
  • [Liang Gao](https://scholar.google.com/citations?user=NqIi8_8AAAAJ)
  • [Weiming Shen](https://scholar.google.com/citations?user=FuSHsx4AAAAJ)

| [![](https://img.shields.io/github/stars/caoyunkang/Segment-Any-Anomaly?style=social)](https://github.com/caoyunkang/Segment-Any-Anomaly)

  • [arxiv](https://arxiv.org/abs/2305.10724)

  • [git](https://github.com/abin24/Magnetic-tile-defect-datasets.), [git](https://github.com/caoyunkang/WinClip)

  • [hf](https://huggingface.co/spaces/Caoyunkang/Segment-Any-Anomaly)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/12Sh0j92YYmTa0oIuSEWWpPBCpIwCSVhz) | 13.09.2024 |
| TensorRT | SDK for high-performance deep learning inference that includes a deep learning inference optimizer and runtime delivering low latency and high throughput for inference applications | [nvidia](https://developer.nvidia.com/) | [![](https://img.shields.io/github/stars/NVIDIA/TensorRT?style=social)](https://github.com/NVIDIA/TensorRT)

  • [blog post](https://developer.nvidia.com/blog/speeding-up-deep-learning-inference-using-tensorrt-updated/)

  • [docs](https://docs.nvidia.com/deeplearning/tensorrt/)

  • [forum](https://forums.developer.nvidia.com/c/ai-data-science/deep-learning/tensorrt)

  • [website](https://developer.nvidia.com/tensorrt)

  • [yt](https://youtu.be/TU5BMU6iYZ0), [yt](https://youtu.be/6rZNLaS775w), [yt](https://youtu.be/G_KhUFCUSsY), [yt](https://youtu.be/7kJ-jph9gCw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/NVIDIA/TensorRT/blob/main/quickstart/IntroNotebooks/0.%20Running%20This%20Guide.ipynb) | 12.09.2024 |
| DataChain | AI-dataframe to enrich, transform and analyze data from cloud storages for ML training and LLM apps | [Iterative](https://iterative.ai/) | [![](https://img.shields.io/github/stars/iterative/datachain?style=social)](https://github.com/iterative/datachain)

  • [discord](https://dvc.org/chat)

  • [docs](https://datachain.dvc.ai/)

  • [pypi](https://pypi.org/project/datachain/)

  • [twitter](https://twitter.com/DVCorg)

  • [yt](https://youtu.be/qoqhllB3gN8), [yt](https://www.youtube.com/live/JT5AwGz5QMI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/iterative/datachain-examples/blob/main/multimodal/clip_fine_tuning.ipynb) | 09.09.2024 |
| TFF for Federated Learning Research: Model and Update Compression | We use the EMNIST dataset to demonstrate how to enable lossy compression algorithms to reduce communication cost in the Federated Averaging algorithm | [Weikang Song](https://github.com/swkpku) |

  • [arxiv](https://arxiv.org/abs/1602.05629)

  • [pwc](https://paperswithcode.com/task/federated-learning)

  • [tensor encoding](http://jakubkonecny.com/files/tensor_encoding.pdf)

  • [tf](https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/emnist), [tf](https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_averaging_process)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/federated/blob/master/docs/tutorials/tff_for_federated_learning_research_compression.ipynb) | 05.09.2024 |
| LlamaIndex | Data framework for your LLM application | [Jerry Liu](https://github.com/jerryjliu) | [![](https://img.shields.io/github/stars/run-llama/llama_index?style=social)](https://github.com/run-llama/llama_index)

  • [discord](https://discord.gg/dGcwcsnxhU)

  • [docs](https://docs.llamaindex.ai/en/stable/)

  • [git](https://github.com/run-llama/LlamaIndexTS), [git](https://github.com/run-llama/llama-lab)

  • [meta](https://llama.meta.com/docs/integration-guides/llamaindex/)

  • [pypi](https://pypi.org/project/llama-index/)

  • [twitter](https://twitter.com/llama_index)

  • [website](https://www.llamaindex.ai/)

  • [yt](https://www.youtube.com/@LlamaIndex), [yt](https://youtu.be/TRjq7t2Ms5I), [yt](https://youtu.be/pApPGFwbigI), [yt](https://youtu.be/zeAyuLc_f3Q), [yt](https://youtu.be/hH4WkgILUD4), [yt](https://youtu.be/v6g8eo86T8A), [yt](https://youtu.be/FQBou-YgxyE), [yt](https://youtu.be/bQw92baScME), [yt](https://youtu.be/cNMYeW2mpBs)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/cookbooks/oreilly_course_cookbooks/Module-2/Components_Of_LlamaIndex.ipynb) | 05.09.2024 |
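A minimal RAG-loop sketch assuming the current llama_index.core package layout, an OPENAI_API_KEY in the environment, and a local ./data folder of documents.

```python
# pip install llama-index
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)   # embed and store chunks

query_engine = index.as_query_engine()
print(query_engine.query("What do these documents say about the main topic?"))
```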
| Deforum Stable Diffusion | Open source project designed to be free to use and easy to modify for custom needs and pipelines |

  • [EnzymeZoo](https://linktr.ee/enzymezoo)
  • [Артем Храпов](https://github.com/kabachuha)
  • [Forest Star Walz](https://github.com/reallybigname)
  • [pharmapsychotic](https://github.com/pharmapsychotic)

| [![](https://img.shields.io/github/stars/deforum-art/deforum-stable-diffusion?style=social)](https://github.com/deforum-art/deforum-stable-diffusion)

  • [discord](https://discord.gg/deforum)

  • [docs](https://docs.google.com/document/d/1RrQv7FntzOuLg4ohjRZPVL7iptIyBhwwbcEYEW2OfcI)

  • [project](https://deforum.github.io/)

  • [yt](https://youtu.be/w_sxuDMt_V0), [yt](https://youtu.be/bicPayZDI60), [yt](https://youtu.be/dqkQo2alZvU)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deforum-art/deforum-stable-diffusion/blob/main/Deforum_Stable_Diffusion.ipynb) | 30.08.2024 |
| ComfyUI | Powerful and modular stable diffusion GUI and backend | [comfyanonymous](https://github.com/comfyanonymous) | [![](https://img.shields.io/github/stars/comfyanonymous/ComfyUI?style=social)](https://github.com/comfyanonymous/ComfyUI)

  • [examples](https://comfyanonymous.github.io/ComfyUI_examples/)

  • [git](https://github.com/madebyollin/taesd)

  • [pytorch](https://developer.apple.com/metal/pytorch/)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/10lzgze/i_figured_out_a_way_to_apply_different_prompts_to/)

  • [yt](https://youtu.be/vUTV85D51yk), [yt](https://youtu.be/gySLXbe7WZQ), [yt](https://youtu.be/ovjeVGmy6ZM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/comfyanonymous/ComfyUI/blob/master/notebooks/comfyui_colab.ipynb) | 30.08.2024 |
| Machine Learning Simplified | A Gentle Introduction to Supervised Learning | [Andrew Wolf](https://5x12.ai/) | [![](https://img.shields.io/github/stars/5x12/themlsbook?style=social)](https://github.com/5x12/themlsbook)

  • [medium](https://medium.com/geekculture/i-found-a-great-machine-learning-book-deed11db2688)

  • [reddit](https://www.reddit.com/r/Python/comments/t8st9l/i_wrote_a_book_on_machine_learning_w_python_code/), [reddit](https://www.reddit.com/r/learnmachinelearning/comments/snxlly/machine_learning_simplified_book/)

  • [website](https://www.themlsbook.com/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/5x12/themlsbook/blob/master/chapter2/knn.ipynb) | 29.08.2024 |
| Anomalib | Deep learning library that aims to collect state-of-the-art anomaly detection algorithms for benchmarking on both public and private datasets |

  • [Samet Akcay](https://github.com/samet-akcay)
  • [Dick Ameln](https://github.com/djdameln)
  • [Ashwin Vaidya](https://ashwinvaidya.com/)
  • [Barath Lakshmanan](https://github.com/blakshma)
  • others
  • [Nilesh Ahuja](https://github.com/nahuja-intel)
  • [Utku Genc](https://github.com/ugenc-intel)

| [![](https://img.shields.io/github/stars/openvinotoolkit/anomalib?style=social)](https://github.com/openvinotoolkit/anomalib)

  • [arxiv](https://arxiv.org/abs/2011.08785)

  • [data](https://www.mvtec.com/company/research/datasets/mvtec-ad)

  • [docs](https://openvinotoolkit.github.io/anomalib/)

  • [git](https://github.com/rwightman/pytorch-image-models), [git](https://github.com/vnk8071/anomaly-detection-in-industry-manufacturing/tree/master/anomalib_contribute)

  • [medium](https://towardsdatascience.com/getting-started-with-pytorch-image-models-timm-a-practitioners-guide-4e77b4bf9055)

  • [pwc](https://paperswithcode.com/lib/timm)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/openvinotoolkit/anomalib/blob/main/notebooks/000_getting_started/001_getting_started.ipynb) | 29.08.2024 |
| Anthropic courses | Anthropic's educational courses | [Anthropic](https://www.anthropic.com/) | [![](https://img.shields.io/github/stars/anthropics/courses?style=social)](https://github.com/anthropics/courses)

  • [docs](https://docs.anthropic.com/en/docs/resources/courses)

  • [reddit](https://www.reddit.com/r/ClaudeAI/comments/1f7czsx/anthropics_official_educational_courses_on_prompt/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/anthropics/courses/blob/master/anthropic_api_fundamentals/01_getting_started.ipynb) | 22.08.2024 |
| Nerfstudio | API that allows for a simplified end-to-end process of creating, training, and testing NeRFs |

  • [Matthew Tancik](https://github.com/tancik)
  • [Ethan Weber](https://ethanweber.me/)
  • [Evonne Ng](http://people.eecs.berkeley.edu/~evonne_ng/)
  • [Ruilong Li](http://www.liruilong.cn/)
  • others
  • [Brent Yi](https://github.com/brentyi)
  • [Justin Kerr](https://kerrj.github.io/)
  • [Terrance Wang](https://github.com/terrancewang)
  • [Alexander Kristoffersen](https://akristoffersen.com/)
  • [Jake Austin](https://github.com/jake-austin)
  • [Kamyar Salahi](https://github.com/TheQuantumFractal)
  • [Abhik Ahuja](https://abhikahuja.com/)
  • [David McAllister](https://github.com/mcallisterdavid)
  • [Angjoo Kanazawa](https://github.com/akanazawa)

| [![](https://img.shields.io/github/stars/nerfstudio-project/nerfstudio?style=social)](https://github.com/nerfstudio-project/nerfstudio)

  • [Viewer](https://viewer.nerf.studio/)

  • [arxiv](https://arxiv.org/abs/2302.04264)

  • [discord](https://discord.gg/uMbNqcraFc)

  • [docs](https://docs.nerf.studio/en/latest/)

  • [git](https://github.com/NVlabs/tiny-cuda-nn)

  • [twitter](https://twitter.com/nerfstudioteam)

  • [yt](https://youtu.be/XwKq7qDQCQk), [yt](https://youtu.be/nSFsugarWzk), [yt](https://youtu.be/h5EWiRRxYEQ), [yt](https://youtu.be/8cv9G7izdPY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/nerfstudio-project/nerfstudio/blob/main/colab/demo.ipynb) | 19.08.2024 |
| mlcourse.ai | Open Machine Learning Course | [Yury Kashnitsky](https://yorko.github.io/) | [![](https://img.shields.io/github/stars/Yorko/mlcourse.ai?style=social)](https://github.com/Yorko/mlcourse.ai)

  • [blog post](https://habr.com/company/ods/blog/344044/)

  • [kaggle](https://www.kaggle.com/kashnitsky/mlcourse)

  • [medium](https://medium.com/open-machine-learning-course)

  • [project](https://mlcourse.ai/book/index.html)

  • [slack](https://opendatascience.slack.com/archives/C91N8TL83/p1567408586359500)

  • [yt](https://www.youtube.com/playlist?list=PLVlY_7IJCMJeRfZ68eVfEcu-UcN9BbwiX)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Yorko/mlcourse.ai/blob/main/jupyter_english/topic01_pandas_data_analysis/topic1_pandas_data_analysis.ipynb) | 19.08.2024 |
| PyTerrier | A Python framework for performing information retrieval experiments |

  • [Craig Macdonald](https://www.dcs.gla.ac.uk/~craigm/)
  • [Nicola Tonellotto](https://github.com/tonellotto)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3459637.3482013)](https://doi.org/10.1145/3459637.3482013) [![](https://img.shields.io/github/stars/terrier-org/pyterrier?style=social)](https://github.com/terrier-org/pyterrier)

  • [arxiv](https://arxiv.org/abs/2007.14271)

  • [docs](https://pyterrier.readthedocs.io)

  • [git](https://github.com/terrier-org/ecir2021tutorial), [git](https://github.com/terrierteam/pyterrier_ance), [git](https://github.com/terrierteam/pyterrier_colbert), [git](https://github.com/terrierteam/pyterrier_pisa), [git](https://github.com/terrierteam/pyterrier_t5), [git](https://github.com/terrierteam/pyterrier_doc2query), [git](https://github.com/terrierteam/pyterrier_deepct)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/terrier-org/pyterrier/blob/master/examples/notebooks/non_en_retrieval.ipynb) | 16.08.2024 |
| highway-env | A collection of environments for autonomous driving and tactical decision-making tasks (usage sketch below) | [Edouard Leurent](https://edouardleurent.com/) | [![](https://img.shields.io/github/stars/eleurent/highway-env?style=social)](https://github.com/eleurent/highway-env)

  • [arxiv](https://arxiv.org/abs/2102.03483), [arxiv](https://arxiv.org/abs/2105.05701), [arxiv](https://arxiv.org/abs/2101.07140)

  • [docs](https://highway-env.readthedocs.io/en/latest/)

  • [git](https://github.com/eleurent/rl-agents), [git](https://github.com/eleurent/finite-mdp), [git](https://github.com/openai/baselines/tree/master/baselines/her)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/eleurent/highway-env/blob/master/scripts/parking_model_based.ipynb) | 09.08.2024 |
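
A minimal usage sketch for the highway-env entry above, assuming `gymnasium` and `highway-env` are installed; the random policy is only a placeholder for a trained agent:

```python
# Roll out one episode of highway-v0 with a random policy (illustrative;
# assumes `pip install highway-env gymnasium`).
import gymnasium as gym
import highway_env  # noqa: F401  # importing registers the highway-v0 envs

env = gym.make("highway-v0", render_mode="rgb_array")
obs, info = env.reset(seed=0)
done = False
while not done:
    action = env.action_space.sample()  # replace with a trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```
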
| GNN | Production-tested library for building GNNs at large scale |

  • [Oleksandr Ferludin](https://github.com/aferludin)
  • [Arno Eigenwillig](https://github.com/arnoegw)
  • [Martin Blais](https://github.com/blais)
  • [Dustin Zelle](https://github.com/dzelle)
  • others
  • [Jan Pfeifer](https://github.com/janpfeifer)
  • [Alvaro Sanchez-Gonzalez](https://github.com/alvarosg)
  • [Wai Lok Sibon Li](https://scholar.google.com/citations?user=qX9aUx8AAAAJ)
  • [Sami Abu-El-Haija](https://samihaija.github.io/)
  • [Peter Battaglia](https://scholar.google.com/citations?user=nQ7Ij30AAAAJ)
  • [Neslihan Bulut](https://scholar.google.com/citations?user=k_cadGsAAAAJ)
  • [Jonathan Halcrow](https://scholar.google.com/citations?user=2zZucy4AAAAJ)
  • [Filipe Miguel Gonçalves de Almeida](https://github.com/fmgda)
  • [Pedro Gonnet](https://research.google/people/pedro-gonnet/)
  • [Liangze Jiang](https://liangzejiang.github.io/)
  • [Parth Kothari](https://thedebugger811.github.io/)
  • [Silvio Lattanzi](https://sites.google.com/site/silviolattanzi/)
  • [André Linhares](https://scholar.google.com/citations?user=YYRnhTkAAAAJ)
  • [Brandon Mayer](https://github.com/brandonmayer-zz)
  • [Vahab Mirrokni](https://people.csail.mit.edu/mirrokni/Welcome.html)
  • [John Palowitch](http://ml.johnpalowitch.com/)
  • [Mihir Paradkar](https://www.linkedin.com/in/mihir-paradkar-22b88579)
  • [Jennifer She](https://scholar.google.com/citations?user=Gjf_sd0AAAAJ)
  • [Anton Tsitsulin](https://tsitsul.in/)
  • [Kevin Villela](https://www.linkedin.com/in/kevin-villela-612a6443)
  • [Lisa Wang](https://scholar.google.com/citations?user=5KmYPkIAAAAJ)
  • [Bryan Perozzi](http://www.perozzi.net/)

| [![](https://img.shields.io/github/stars/tensorflow/gnn?style=social)](https://github.com/tensorflow/gnn)

  • [arxiv](https://arxiv.org/abs/2207.03522)

  • [kaggle](https://www.kaggle.com/code/fidels/introduction-to-tf-gnn)

  • [medium](https://medium.com/@techtes.com/getting-started-with-tf-gnn-with-python-26d8e341db05)

  • [tf](https://blog.tensorflow.org/2024/02/graph-neural-networks-in-tensorflow.html), [tf](https://blog.tensorflow.org/2021/11/introducing-tensorflow-gnn.html)

  • [yt](https://www.youtube.com/playlist?list=PL2PZTwLd0HMJC1fU_NkwwpRkcjoGqAECX), [yt](https://youtu.be/JqWROPYeqjA), [yt](https://youtu.be/YdGN-J322y4), [yt](https://youtu.be/VDzrvhgyxsU), [yt](https://www.youtube.com/live/e6WHg1l7AMs), [yt](https://youtu.be/a75Q6dtg1_s)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/gnn/blob/master/examples/notebooks/graph_network_shortest_path.ipynb) | 09.08.2024 |
| Pix2Pix | This notebook demonstrates image-to-image translation using conditional GANs (loss sketch below) |

  • [Phillip Isola](https://web.mit.edu/phillipi/)
  • [Jun-Yan Zhu](https://www.cs.cmu.edu/~junyanz/)
  • [Tinghui Zhou](https://tinghuiz.github.io/)
  • [Alexei Efros](https://people.eecs.berkeley.edu/~efros/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR.2017.632)](https://doi.org/10.1109/CVPR.2017.632)

  • [arxiv](https://arxiv.org/abs/1611.07004)

  • [data](https://people.eecs.berkeley.edu/~tinghuiz/projects/pix2pix/datasets/)

  • [medium](https://medium.com/the-ai-team/image-to-image-translation-using-conditional-dcgans-7edc9e78c476)

  • [tf](https://www.tensorflow.org/tutorials/generative/pix2pix)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/pix2pix.ipynb) | 24.07.2024 |
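
The notebook's generator objective combines an adversarial term with a weighted L1 reconstruction term; a minimal TensorFlow sketch of that loss (λ = 100, as in the tutorial):

```python
# Pix2Pix generator loss: L_G = BCE(D(G(x)), 1) + LAMBDA * |y - G(x)|_1
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
LAMBDA = 100  # weight of the L1 reconstruction term

def generator_loss(disc_fake_output, gen_output, target):
    # Adversarial term: the generator wants the discriminator to say "real"
    gan_loss = bce(tf.ones_like(disc_fake_output), disc_fake_output)
    # L1 term: stay close to the ground-truth target image
    l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
    return gan_loss + LAMBDA * l1_loss
```
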
| Image classification | This tutorial shows how to classify images of flowers (pipeline sketch below) | [Billy Lamberta](https://github.com/lamberta) |

  • [pwc](https://paperswithcode.com/task/image-classification)

  • [tf](https://www.tensorflow.org/tutorials/images/classification), [tf](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential), [tf](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/classification.ipynb) | 24.07.2024 |
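
A condensed sketch of the tutorial's pipeline: download the flowers dataset, build a small CNN, and compile it (the layer sizes are illustrative):

```python
# Flowers dataset + small CNN classifier (a sketch of the tutorial).
import tensorflow as tf

data_dir = tf.keras.utils.get_file(
    "flower_photos",
    "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz",
    untar=True)

train_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir, validation_split=0.2, subset="training", seed=123,
    image_size=(180, 180), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(5),  # five flower classes
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
# model.fit(train_ds, epochs=3)
```
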
| TransformerLens | Library for doing mechanistic interpretability of GPT-2-style language models (usage sketch below) |

  • [Neel Nanda](https://www.neelnanda.io/about)
  • [Joseph Bloom](https://github.com/jbloomAus)

| [![](https://img.shields.io/github/stars/TransformerLensOrg/TransformerLens?style=social)](https://github.com/TransformerLensOrg/TransformerLens)

  • [arxiv](https://arxiv.org/abs/2302.03025), [arxiv](https://arxiv.org/abs/2303.08112)

  • [docs](https://transformerlensorg.github.io/TransformerLens/)

  • [git](https://github.com/jbloomAus/DecisionTransformerInterpretability)

  • [medium](https://medium.com/@fgkffbvkhg/transformerlens-understanding-the-model-e339be551299)

  • [pypi](https://pypi.org/project/transformer-lens/)

  • [slack](https://join.slack.com/t/opensourcemechanistic/shared_invite/zt-1qosyh8g3-9bF3gamhLNJiqCL_QqLFrA)

  • [yt](https://www.youtube.com/channel/UCBMJ0D-omcRay8dh4QT0doQ), [yt](https://youtu.be/oL67e-uEgWI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/TransformerLensOrg/TransformerLens/blob/main/demos/Main_Demo.ipynb) | 23.07.2024 |
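
A minimal sketch of the library's central pattern, caching every intermediate activation of a pretrained model (assumes `pip install transformer-lens`; the prompt is arbitrary):

```python
# Load GPT-2 small and cache all intermediate activations for one prompt.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
logits, cache = model.run_with_cache("TransformerLens makes hooks easy")

# Activations are keyed by hook name, e.g. the residual stream after
# layer 0's attention+MLP:
resid = cache["blocks.0.hook_resid_post"]
print(resid.shape)  # [batch, seq_len, d_model]
```
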
| Kor | Half-baked prototype that "helps" you extract structured data from text using LLMs | [Eugene Yurtsev](https://eyurtsev.github.io/) | [![](https://img.shields.io/github/stars/eyurtsev/kor?style=social)](https://github.com/eyurtsev/kor)

  • [discord](https://discord.com/channels/1038097195422978059/1170024642245832774)

  • [docs](https://eyurtsev.github.io/kor/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/eyurtsev/kor/blob/main/docs/source/guidelines.ipynb) | 20.07.2024 |
| Mistral Inference | Minimal code to run Mistral models | [mistral](https://mistral.ai/) | [![](https://img.shields.io/github/stars/mistralai/mistral-inference?style=social)](https://github.com/mistralai/mistral-inference)

  • [blog post](https://mistral.ai/news/announcing-mistral-7b/)

  • [discord](https://discord.com/invite/mistralai)

  • [docs](https://docs.mistral.ai/)

  • [medium](https://medium.com/@parikshitsaikia1619/mistral-mastery-fine-tuning-fast-inference-guide-62e163198b06)

  • [pypi](https://pypi.org/project/mistral-inference/)

  • [yt](https://youtu.be/mYRqvB1_gRk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/mistralai/mistral-inference/blob/main/tutorials/getting_started.ipynb) | 16.07.2024 |
| PyTorch3D | Library for deep learning with 3D data |

  • [Nikhila Ravi](https://nikhilaravi.com/)
  • [Jeremy Reizenstein](https://github.com/bottler)
  • [David Novotny](https://d-novotny.github.io/)
  • [Taylor Gordon](https://scholar.google.com/citations?user=CNOoeQ0AAAAJ)
  • others
  • [Wan-Yen Lo](https://github.com/wanyenlo)
  • [Justin Johnson](https://web.eecs.umich.edu/~justincj/)
  • [Georgia Gkioxari](https://gkioxari.github.io/)

| [![](https://img.shields.io/github/stars/facebookresearch/pytorch3d?style=social)](https://github.com/facebookresearch/pytorch3d)

  • [arxiv](https://arxiv.org/abs/2007.08501), [arxiv](https://arxiv.org/abs/1906.02739)

  • [blog post](https://ai.meta.com/blog/implicitron-a-new-modular-extensible-framework-for-neural-implicit-representations-in-pytorch3d/), [blog post](https://ai.meta.com/blog/-introducing-pytorch3d-an-open-source-library-for-3d-deep-learning/)

  • [docs](https://pytorch3d.readthedocs.org/)

  • [kaggle](https://www.kaggle.com/code/sohonjit/rendering-with-pytorch3d)

  • [medium](https://towardsdatascience.com/glimpse-into-pytorch3d-an-open-source-3d-deep-learning-library-291a4beba30f), [medium](https://medium.com/@phamtdong0406/crafting-realistic-renderings-with-pytorch3d-947a38194f0a), [medium](https://towardsdatascience.com/how-to-render-3d-files-using-pytorch3d-ef9de72483f8)

  • [website](https://pytorch3d.org/)

  • [yt](https://youtu.be/0JEb7knenps), [yt](https://youtu.be/Pph1r-x9nyY), [yt](https://youtu.be/eCDBA_SbxCE), [yt](https://youtu.be/MOBAJb5nJRI), [yt](https://youtu.be/g50RiDnfIfY), [yt](https://youtu.be/hgBk9WlF-XA), [yt](https://youtu.be/Sb9gCCnSAUg), [yt](https://youtu.be/ZLqJ33Ey-MU)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/pytorch3d/blob/master/docs/tutorials/implicitron_config_system.ipynb) | 11.07.2024 |
| Stable Diffusion Videos | Create videos with Stable Diffusion by exploring the latent space and morphing between text prompts | [Nathan Raw](https://github.com/nateraw) | [![](https://img.shields.io/github/stars/nateraw/stable-diffusion-videos?style=social)](https://github.com/nateraw/stable-diffusion-videos)

  • [git](https://gist.github.com/karpathy/00103b0037c5aaea32fe1da1af553355), [git](https://gist.github.com/nateraw/c989468b74c616ebbc6474aa8cdd9e53)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/nateraw/stable-diffusion-videos/blob/main/stable_diffusion_videos.ipynb) | 11.07.2024 |
| Transfer learning and fine-tuning | You will learn how to classify images of cats and dogs by using transfer learning from a pre-trained network (recipe sketch below) | [François Chollet](https://fchollet.com/) |

  • [pwc](https://paperswithcode.com/task/transfer-learning)

  • [tf](https://www.tensorflow.org/tutorials/images/transfer_learning)

  • [wiki](https://en.wikipedia.org/wiki/Transfer_learning)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning.ipynb) | 26.06.2024 |
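
A condensed sketch of the tutorial's recipe: freeze an ImageNet-pretrained MobileNetV2 backbone and train only a small new head (data pipeline omitted):

```python
# Transfer learning: frozen backbone + trainable classification head.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),  # single cats-vs-dogs logit
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```
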
| MARS5 | Speech model for insane prosody | [CAMB.AI](https://www.camb.ai/) | [![](https://img.shields.io/github/stars/Camb-ai/MARS5-TTS?style=social)](https://github.com/Camb-ai/MARS5-TTS)

  • [demo](https://6b1a3a8e53ae.ngrok.app/)

  • [discord](https://discord.gg/FFQNCSKSXX)

  • [docker](https://hub.docker.com/r/cambai/mars5ttsimage)

  • [docs](https://docs.camb.ai/)

  • [git](https://github.com/RF5/transfusion-asr), [git](https://github.com/ehoogeboom/multinomial_diffusion), [git](https://github.com/karpathy/minbpe)

  • [hf](https://huggingface.co/CAMB-AI/MARS5-TTS)

  • [yt](https://youtu.be/bmJSLPYrKtE)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Camb-ai/mars5-tts/blob/master/mars5_demo.ipynb) | 25.06.2024 |
| Deep RL Course | The Hugging Face Deep Reinforcement Learning Course (PPO sketch below) |

  • [Thomas Simonini](https://www.simoninithomas.com/)
  • [Omar Sanseviero](https://osanseviero.github.io/hackerllama/)
  • [Sayak Paul](https://sayak.dev/)

| [![](https://img.shields.io/github/stars/huggingface/deep-rl-class?style=social)](https://github.com/huggingface/deep-rl-class)

  • [git](https://github.com/alex-petrenko/sample-factory)

  • [hf](https://huggingface.co/deep-rl-course/unit0/introduction), [hf](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard)

  • [pt](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html)

  • [syllabus](https://simoninithomas.github.io/deep-rl-course)

  • [yt](https://youtu.be/2GwBez0D20A), [yt](https://youtu.be/CsuIANBnSq8), [yt](https://youtu.be/AQKAOXJa6qg)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/notebooks/unit1/unit1.ipynb) | 24.06.2024 |
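
A minimal sketch in the spirit of the course's first unit, training PPO with Stable-Baselines3 (assumes `pip install stable-baselines3 gymnasium[box2d]`; the environment id may be `LunarLander-v3` on newer Gymnasium releases):

```python
# Train PPO on LunarLander with Stable-Baselines3 (illustrative).
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("LunarLander-v2")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
model.save("ppo-LunarLander")
```
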
| ToonCrafter | Can interpolate two cartoon images by leveraging pre-trained image-to-video diffusion priors |

  • [Jinbo Xing](https://doubiiu.github.io/)
  • [Hanyuan Liu](https://github.com/hyliu)
  • [Menghan Xia](https://menghanxia.github.io/)
  • [Yong Zhang](https://yzhang2016.github.io/)
  • others
  • [Xintao Wang](https://xinntao.github.io/)
  • [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ)
  • [Tien-Tsin Wong](https://ttwong12.github.io/myself.html)

| [![](https://img.shields.io/github/stars/ToonCrafter/ToonCrafter?style=social)](https://github.com/ToonCrafter/ToonCrafter)

  • [arxiv](https://arxiv.org/abs/2405.17933v1)

  • [project](https://doubiiu.github.io/projects/ToonCrafter/)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/1d470rv/tooncrafter_generative_cartoon_interpolation/)

  • [yt](https://youtu.be/u3F35do93_8), [yt](https://youtu.be/E89R5_hQ5bQ), [yt](https://youtu.be/kK-A9jOaO1U), [yt](https://youtu.be/ricylysRayw), [yt](https://youtu.be/hc5nF6rGa68), [yt](https://youtu.be/mEn3CYU7s_A)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/gist/0smboy/baef995b8f5974f19ac114ec20ac37d5/tooncrafter.ipynb) | 20.06.2024 |
| Brax | A differentiable physics engine that simulates environments made up of rigid bodies, joints, and actuators |

  • [Daniel Freeman](https://github.com/cdfreeman-google)
  • [Erik Frey](https://fawx.com/)
  • [Anton Raichuk](https://scholar.google.com/citations?user=fquIpvgAAAAJ)
  • [Sertan Girgin](https://sites.google.com/site/girgint/home)
  • others
  • [Igor Mordatch](https://scholar.google.com/citations?user=Vzr1RukAAAAJ)
  • [Olivier Bachem](http://olivierbachem.ch/)

| [![](https://img.shields.io/github/stars/google/brax?style=social)](https://github.com/google/brax)

  • [arxiv](https://arxiv.org/abs/2106.13281)

  • [neurips](https://neurips.cc/Conferences/2021/CallForDatasetsBenchmarks)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/brax/blob/main/notebooks/basics.ipynb) | 07.06.2024 |
| DiffSynth | Restructured architectures including Text Encoder, UNet, VAE, among others, maintaining compatibility with models from the open-source community while enhancing computational performance | [Artiprocher](https://github.com/Artiprocher) | [![](https://img.shields.io/github/stars/Artiprocher/DiffSynth-Studio?style=social)](https://github.com/Artiprocher/DiffSynth-Studio)

  • [arxiv](https://arxiv.org/abs/2401.16224)

  • [hf](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh), [hf](https://huggingface.co/alibaba-pai/pai-bloom-1b1-text2prompt-sd)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Artiprocher/DiffSynth-Studio/blob/main/examples/Diffutoon.ipynb) | 06.06.2024 |
| Transformer | This tutorial trains a Transformer model to translate Portuguese to English (attention sketch below) |

  • [Ashish Vaswani](https://en.wikipedia.org/wiki/Ashish_Vaswani)
  • [Noam Shazeer](https://en.wikipedia.org/wiki/Noam_Shazeer)
  • [Niki Parmar](https://scholar.google.com/citations?user=q2YXPSgAAAAJ)
  • [Jakob Uszkoreit](http://jakob.uszkoreit.net/)
  • others
  • [Llion Jones](https://scholar.google.com/citations?user=_3_P5VwAAAAJ)
  • [Aidan Gomez](https://aidangomez.ca/)
  • [Łukasz Kaiser](https://scholar.google.com/citations?user=JWmiQR0AAAAJ)
  • [Illia Polosukhin](https://scholar.google.com/citations?user=3SyxFIAAAAAJ)

| [![](https://img.shields.io/github/stars/neulab/word-embeddings-for-nmt?style=social)](https://github.com/neulab/word-embeddings-for-nmt)

  • [arxiv](https://arxiv.org/abs/1706.03762), [arxiv](https://arxiv.org/abs/1903.03878)

  • [link](https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii)

  • [neurips](https://papers.nips.cc/paper/7181-attention-is-all-you-need)

  • [tf](https://www.tensorflow.org/text/tutorials/transformer)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/transformer.ipynb) | 31.05.2024 |
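
At the model's core is scaled dot-product attention, softmax(QKᵀ/√d_k)V; a self-contained NumPy sketch of just that formula, independent of the tutorial's TensorFlow code:

```python
# Scaled dot-product attention, the Transformer's building block.
import numpy as np

def attention(q, k, v):
    d_k = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)     # (batch, Lq, Lk)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ v                                   # (batch, Lq, d_v)

q = np.random.randn(1, 4, 8)
k = np.random.randn(1, 6, 8)
v = np.random.randn(1, 6, 8)
print(attention(q, k, v).shape)  # (1, 4, 8)
```
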
| NeMo | A conversational AI toolkit built for researchers working on automatic speech recognition, natural language processing, and text-to-speech synthesis |

  • [Oleksii Kuchaiev](http://kuchaev.com/)
  • [Jason Li](https://scholar.google.com/citations?user=V28bxDwAAAAJ)
  • [Chip Huyen](https://huyenchip.com/)
  • [Oleksii Hrinchuk](https://github.com/AlexGrinch)
  • others
  • [Ryan Leary](https://github.com/ryanleary)
  • [Boris Ginsburg](https://github.com/borisgin)
  • [Samuel Kriman](https://github.com/sam1373)
  • [Stanislav Beliaev](https://github.com/stasbel)
  • [Vitaly Lavrukhin](https://github.com/vsl9)
  • [Jack Cook](https://jackcook.com/)

| [![](https://img.shields.io/github/stars/NVIDIA/NeMo?style=social)](https://github.com/NVIDIA/NeMo)

  • [project](https://docs.nvidia.com/deeplearning/nemo/)

  • [yt](https://youtu.be/wBgpMf_KQVw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/NVIDIA/NeMo/blob/master/tutorials/00_NeMo_Primer.ipynb) | 25.05.2024 |
| SentencePiece | An unsupervised text tokenizer and detokenizer mainly for neural-network-based text generation systems where the vocabulary size is predetermined prior to model training (usage sketch below) |

  • [Taku Kudo](http://chasen.org/~taku/)
  • [John Richardson](https://scholar.google.com/citations?user=PEvmYfgAAAAJ)

| [![](https://img.shields.io/github/stars/google/sentencepiece?style=social)](https://github.com/google/sentencepiece)

  • [arxiv](https://arxiv.org/abs/1808.06226), [arxiv](https://arxiv.org/abs/1508.07909), [arxiv](https://arxiv.org/abs/1804.10959), [arxiv](https://arxiv.org/abs/1910.13267), [arxiv](https://arxiv.org/abs/1609.08144)

  • [git](https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl), [git](https://github.com/rsennrich/subword-nmt), [git](https://github.com/gperftools/gperftools), [git](https://github.com/Microsoft/vcpkg)

  • [medium](https://jacky2wong.medium.com/understanding-sentencepiece-under-standing-sentence-piece-ac8da59f6b08)

  • [yt](https://youtu.be/U51ranzJBpY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/sentencepiece/blob/master/python/sentencepiece_python_module_example.ipynb) | 21.05.2024 |
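
A minimal train-and-round-trip sketch; `corpus.txt` is a placeholder path to any reasonably sized raw-text file:

```python
# Train a SentencePiece model on raw text, then encode/decode a sentence.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt", model_prefix="m", vocab_size=8000)

sp = spm.SentencePieceProcessor(model_file="m.model")
pieces = sp.encode("This is a test", out_type=str)
print(pieces)             # subword pieces
print(sp.decode(pieces))  # round-trips back to "This is a test"
```
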
| Llama3 from scratch | Llama3 from scratch, one tensor and matrix multiplication at a time | [Nishant Aklecha](https://www.naklecha.com/) | [![](https://img.shields.io/github/stars/naklecha/llama3-from-scratch?style=social)](https://github.com/naklecha/llama3-from-scratch)

  • [git](https://github.com/karpathy/minbpe)

  • [twitter](https://twitter.com/naklecha), [twitter](https://twitter.com/aaaaaaaaaaorg)

  • [yt](https://youtu.be/o29P0Kpobz0?t=530)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/naklecha/llama3-from-scratch/blob/main/llama3-from-scratch.ipynb) | 19.05.2024 |
| Hello, many worlds | This tutorial shows how a classical neural network can learn to correct qubit calibration errors | [Michael Broughton](https://github.com/MichaelBroughton) |

  • [tf](https://www.tensorflow.org/quantum/api_docs/python/tfq/layers), [tf](https://www.tensorflow.org/quantum/api_docs/python/tfq/get_expectation_op), [tf](https://www.tensorflow.org/guide/keras/functional)

  • [wiki](https://en.wikipedia.org/wiki/Pauli_matrices)

  • [yt](https://youtu.be/-o9AhIz1uvo)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/hello_many_worlds.ipynb) | 17.05.2024 |
| IC-Light | Manipulate the illumination of images |

  • [Lvmin Zhang](https://github.com/lllyasviel)
  • [Anyi Rao](https://anyirao.com/)
  • [Maneesh Agrawala](https://graphics.stanford.edu/~maneesh/)

| [![](https://img.shields.io/github/stars/lllyasviel/IC-Light?style=social)](https://github.com/lllyasviel/IC-Light)

  • [arxiv](https://arxiv.org/abs/2312.06886), [arxiv](https://arxiv.org/abs/2402.18848)

  • [yt](https://youtu.be/U_ZIkFb9P8w), [yt](https://youtu.be/3EsJrdXGnpo), [yt](https://youtu.be/BuSsw8Nv1N4)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/IC-Light-jupyter/blob/main/IC_Light_jupyter.ipynb) | 09.05.2024 |
| Neural style transfer | This tutorial uses deep learning to compose one image in the style of another image |

  • [Leon Gatys](https://scholar.google.com/citations?user=ADMVEmsAAAAJ)
  • [Alexander Ecker](https://eckerlab.org/)
  • [Matthias Bethge](https://bethgelab.org/)

|

  • [arxiv](https://arxiv.org/abs/1508.06576)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/style_transfer.ipynb) | 06.05.2024 |
| TorchGeo | PyTorch domain library that provides datasets, transforms, samplers, and pre-trained models specific to geospatial data |

  • [Adam Stewart](https://github.com/adamjstewart)
  • [Caleb Robinson](https://calebrob.com/)
  • [Isaac Corley](https://github.com/isaaccorley)
  • [Anthony Ortiz](https://github.com/anthonymlortiz)
  • others
  • [Juan Lavista Ferres](https://www.microsoft.com/en-us/research/people/jlavista/)
  • [Arindam Banerjee](https://arindam.cs.illinois.edu/)

| [![](https://img.shields.io/github/stars/microsoft/torchgeo?style=social)](https://github.com/microsoft/torchgeo)

  • [NDBI](https://www.linkedin.com/pulse/ndvi-ndbi-ndwi-calculation-using-landsat-7-8-tek-bahadur-kshetri/)

  • [NDVI](https://gisgeography.com/ndvi-normalized-difference-vegetation-index/)

  • [NDWI](https://custom-scripts.sentinel-hub.com/custom-scripts/sentinel-2/ndwi/)

  • [arxiv](https://arxiv.org/abs/2111.08872)

  • [data](https://docs.sentinel-hub.com/api/latest/data/sentinel-2-l2a/), [data](https://www.cogeo.org/)

  • [git](https://github.com/davemlz/awesome-spectral-indices)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/microsoft/torchgeo/blob/main/docs/tutorials/indices.ipynb) | 03.05.2024 |
| Autoencoders | This tutorial introduces autoencoders with three examples: the basics, image denoising, and anomaly detection (basic sketch below) | [Billy Lamberta](https://github.com/lamberta) |

  • [blog post](https://blog.keras.io/building-autoencoders-in-keras.html)

  • [book](https://www.deeplearningbook.org/contents/autoencoders.html)

  • [data](http://www.timeseriesclassification.com/description.php?Dataset=ECG5000)

  • [examples](https://anomagram.fastforwardlabs.com/#/)

  • [pwc](https://paperswithcode.com/method/autoencoder)

  • [tf](https://www.tensorflow.org/tutorials/generative/autoencoder)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/autoencoder.ipynb) | 15.04.2024 |
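
A minimal sketch of the tutorial's first example, a fully connected autoencoder trained to reconstruct Fashion-MNIST images:

```python
# Basic autoencoder: compress 28x28 images to a 64-d latent and back.
import tensorflow as tf

latent_dim = 64
encoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(latent_dim, activation="relu"),
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(784, activation="sigmoid"),
    tf.keras.layers.Reshape((28, 28)),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

(x_train, _), (x_test, _) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
autoencoder.fit(x_train, x_train, epochs=1,
                validation_data=(x_test, x_test))
```
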
| MagicTime | Metamorphic time-lapse video generation model, which learns real-world physics knowledge from time-lapse videos and implements metamorphic generation |

  • [Shenghai Yuan](https://shyuanbest.github.io/)
  • [Jinfa Huang](https://infaaa.github.io/)
  • [Yujun Shi](https://yujun-shi.github.io/)
  • [Yongqi Xu](https://cheliosoops.github.io/YongqiXu.io/)
  • others
  • [Ruijie Zhu](https://ruijie-zhu.github.io/)
  • [Bin Lin](https://github.com/LinB203)
  • [Xinhua Cheng](https://cxh0519.github.io/)
  • [Li Yuan](https://yuanli2333.github.io/)
  • [Jiebo Luo](https://www.cs.rochester.edu/u/jluo/)

| [![](https://img.shields.io/github/stars/PKU-YuanGroup/MagicTime?style=social)](https://github.com/PKU-YuanGroup/MagicTime)

  • [arxiv](https://arxiv.org/abs/2404.05014), [arxiv](https://arxiv.org/abs/2406.18522)

  • [git](https://github.com/PKU-YuanGroup/ChronoMagic-Bench), [git](https://github.com/kijai/ComfyUI-MagicTimeWrapper), [git](https://github.com/xuduo35/MakeLongVideo), [git](https://github.com/Vchitect/LaVie), [git](https://github.com/Vchitect/Latte)

  • [hf](https://huggingface.co/spaces/BestWishYsh/MagicTime?logs=build), [hf](https://huggingface.co/datasets/BestWishYsh/ChronoMagic), [hf](https://huggingface.co/cerspense/zeroscope_v2_576w)

  • [project](https://pku-yuangroup.github.io/MagicTime/)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/1c1rv7q/magictime_demo_timelapse_video_generation_models/)

  • [twitter](https://x.com/_akhaliq/status/1777538468043792473), [twitter](https://twitter.com/vhjf36495872/status/1777525817087553827?s=61&t=r2HzCsU2AnJKbR8yKSprKw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/MagicTime-jupyter/blob/main/MagicTime_jupyter.ipynb) | 14.04.2024 |
| SAGE | Methodology for generative spelling correction, tested on English and Russian and potentially extensible to any language with minor changes |

  • [Nikita Martynov](https://github.com/meduzick)
  • [Mark Baushenko](https://github.com/e0xextazy)
  • [Anastasia Kozlova](https://github.com/anastasia-kozlova)
  • [Katerina Kolomeytseva](https://www.linkedin.com/in/katerina-kolomeytseva-394a7a21a)
  • others
  • [Aleksandr Abramov](https://github.com/Ab1992ao)
  • [Alena Fenogenova](https://github.com/Alenush)

| [![](https://img.shields.io/github/stars/ai-forever/sage?style=social)](https://github.com/ai-forever/sage)

  • [arxiv](https://arxiv.org/abs/2308.09435)

  • [git](https://github.com/ai-forever/augmentex)

  • [hf](https://huggingface.co/ai-forever/RuM2M100-1.2B), [hf](https://huggingface.co/ai-forever/FRED-T5-large-spell), [hf](https://huggingface.co/ai-forever/RuM2M100-418M), [hf](https://huggingface.co/ai-forever/T5-large-spell), [hf](https://huggingface.co/datasets/ai-forever/spellcheck_benchmark)

  • [wiki](https://en.wikipedia.org/wiki/Levenshtein_distance)

  • [yt](https://youtu.be/yFfkV0Qjuu0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ai-forever/sage/blob/main/notebooks/text_correction_demo.ipynb) | 11.04.2024 |
| Image segmentation | This tutorial focuses on the task of image segmentation, using a modified U-Net |

  • [Olaf Ronneberger](https://lmb.informatik.uni-freiburg.de/people/ronneber/)
  • [Philipp Fischer](https://scholar.google.com/citations?user=M2j8KYMAAAAJ)
  • [Thomas Brox](https://lmb.informatik.uni-freiburg.de/people/brox/index.en.html)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-319-24574-4_28)](http://doi.org/10.1007/978-3-319-24574-4_28)

  • [arxiv](https://arxiv.org/abs/1505.04597)

  • [data](https://www.robots.ox.ac.uk/~vgg/data/pets/)

  • [kaggle](https://www.kaggle.com/c/carvana-image-masking-challenge/overview)

  • [tf](https://www.tensorflow.org/tutorials/images/segmentation)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/segmentation.ipynb) | 09.04.2024 |
| Open-Sora Plan | Simple and efficient design along with remarkable performance in text-to-video generation | [YUAN Lab at PKU](https://github.com/PKU-YuanGroup) | [![](https://img.shields.io/github/stars/PKU-YuanGroup/Open-Sora-Plan?style=social)](https://github.com/PKU-YuanGroup/Open-Sora-Plan)

  • [arxiv](https://arxiv.org/abs/2306.15595)

  • [discord](https://discord.gg/YtsBNg7n)

  • [git](https://github.com/PKU-YuanGroup/Open-Sora-Dataset), [git](https://github.com/Vchitect/Latte), [git](https://github.com/whlzy/FiT)

  • [hf](https://huggingface.co/spaces/LanguageBind/Open-Sora-Plan-v1.1.0), [hf](https://huggingface.co/datasets/LanguageBind/Open-Sora-Plan-v1.0.0)

  • [yt](https://youtu.be/cRUz3c7hRs4), [yt](https://youtu.be/mYnRwR0RyvE)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/Open-Sora-Plan-jupyter/blob/main/Open_Sora_Plan_jupyter.ipynb) | 07.04.2024 |
| Gorilla | Finetuned LLaMA-based model that surpasses the performance of GPT-4 on writing API calls |

  • [Shishir Patil](https://shishirpatil.github.io/)
  • [Tianjun Zhang](https://github.com/tianjunz)
  • [Xin Wang](https://xinw.ai/)
  • [Joseph Gonzalez](https://people.eecs.berkeley.edu/~jegonzal/)

| [![](https://img.shields.io/github/stars/ShishirPatil/gorilla?style=social)](https://github.com/ShishirPatil/gorilla)

  • [arxiv](https://arxiv.org/abs/2305.15334)

  • [discord](https://discord.gg/SwTyuTAxX3)

  • [git](https://github.com/gorilla-llm/gorilla-cli)

  • [medium](https://medium.com/latinxinai/try-gorilla-a-large-language-model-connected-with-massive-apis-442f3b554ffb)

  • [project](http://gorilla.cs.berkeley.edu/)

  • [yt](https://youtu.be/4EdyWkcddPc), [yt](https://youtu.be/RMgM3tPTpXI), [yt](https://youtu.be/CX1Kzijq2TI), [yt](https://youtu.be/8AqQBPI4CFI), [yt](https://youtu.be/iQwYoii4YiI), [yt](https://youtu.be/alDArqcxSvw), [yt](https://youtu.be/EypdTAlmoo4), [yt](https://youtu.be/LkV5DTRNxAg)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1DEBPsccVLF_aUnmD0FwPeHFrtdC0QIUP) | 06.04.2024 |
| Cleanlab | Helps you clean data and labels by automatically detecting issues in an ML dataset (usage sketch below) |

  • [Curtis Northcutt](https://www.curtisnorthcutt.com/)
  • [Lu Jiang](http://www.lujiang.info/)
  • [Isaac Chuang](http://feynman.mit.edu/ike/homepage/index.html)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1613/jair.1.12125)](https://doi.org/10.1613/jair.1.12125) [![](https://img.shields.io/github/stars/cleanlab/cleanlab?style=social)](https://github.com/cleanlab/cleanlab)

  • [arxiv](https://arxiv.org/abs/1911.00068)

  • [blog post](https://l7.curtisnorthcutt.com/confident-learning)

  • [docs](https://docs.cleanlab.ai/)

  • [medium](https://medium.com/@sujathamudadla1213/cleanlab-python-library-34e0a37720ef)

  • [slack](https://cleanlab.ai/slack)

  • [twitter](https://twitter.com/CleanlabAI)

  • [yt](https://youtu.be/BnOTv0f9Msk), [yt](https://youtu.be/nGye-lrsLRc), [yt](https://youtu.be/QHaT_AiUljw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/cleanlab/cleanlab/blob/master/docs/source/tutorials/image.ipynb) | 30.03.2024 |
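
A minimal sketch of the core API: given out-of-sample predicted probabilities from any classifier, rank examples by how likely their labels are wrong (the toy arrays are placeholders; in practice `pred_probs` should come from cross-validation):

```python
# Flag likely label errors from held-out predicted probabilities.
import numpy as np
from cleanlab.filter import find_label_issues

labels = np.array([0, 0, 1, 1, 1])          # given (possibly noisy) labels
pred_probs = np.array([                      # e.g. from cross_val_predict
    [0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8], [0.95, 0.05]])

issues = find_label_issues(labels=labels, pred_probs=pred_probs,
                           return_indices_ranked_by="self_confidence")
print(issues)  # indices of the examples most likely to be mislabeled
```
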
| AniPortrait | Framework for generating high-quality animation driven by audio and a reference portrait image |

  • [Zejun Yang](https://github.com/Zejun-Yang)
  • [Zhisheng Wang](https://scholar.google.com/citations?user=XrK2HNcAAAAJ)

| [![](https://img.shields.io/github/stars/Zejun-Yang/AniPortrait?style=social)](https://github.com/Zejun-Yang/AniPortrait)

  • [arxiv](https://arxiv.org/abs/2403.17694)

  • [git](https://github.com/CelebV-HQ/CelebV-HQ), [git](https://github.com/HumanAIGC/EMO), [git](https://github.com/MooreThreads/Moore-AnimateAnyone), [git](https://github.com/magic-research/magic-animate), [git](https://github.com/guoqincode/Open-AnimateAnyone)

  • [hf](https://huggingface.co/ZJYang/AniPortrait), [hf](https://huggingface.co/runwayml/stable-diffusion-v1-5), [hf](https://huggingface.co/stabilityai/sd-vae-ft-mse), [hf](https://huggingface.co/lambdalabs/sd-image-variations-diffusers/tree/main/image_encoder), [hf](https://huggingface.co/facebook/wav2vec2-base-960h)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/1bp7rnj/aniportrait_audiodriven_synthesis_of/)

  • [yt](https://youtu.be/wdRhYLQFQH8), [yt](https://youtu.be/T-B6xJRG6fQ)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/AniPortrait-jupyter/blob/main/AniPortrait_vid2vid_jupyter.ipynb) | 27.03.2024 |
| OpenVINO | Open-source toolkit for optimizing and deploying AI inference | [intel](https://www.intel.com/content/www/us/en/developer/topic-technology/open/overview.html) | [![](https://img.shields.io/github/stars/openvinotoolkit/openvino?style=social)](https://github.com/openvinotoolkit/openvino)

  • [blog post](https://blog.openvino.ai/)

  • [discord](https://discord.gg/7pVRxUwdWG)

  • [docs](https://docs.openvino.ai/)

  • [forum](https://software.intel.com/en-us/forums/computer-vision)

  • [git](https://github.com/openvinotoolkit/open_model_zoo), [git](https://github.com/Tencent/TNN), [git](https://github.com/openvinotoolkit/openvino_contrib), [git](https://github.com/openvinotoolkit/training_extensions), [git](https://github.com/openvinotoolkit/model_server), [git](https://github.com/opencv/cvat), [git](https://github.com/openvinotoolkit/datumaro)

  • [hf](https://huggingface.co/OpenVINO)

  • [medium](https://medium.com/@openvino), [medium](https://medium.com/openvino-toolkit)

  • [wiki](https://en.wikipedia.org/wiki/OpenVINO)

  • [yt](https://www.youtube.com/playlist?list=PLg-UKERBljNxdIQir1wrirZJ50yTp4eHv), [yt](https://youtu.be/Je8n8M0OwxQ), [yt](https://youtu.be/Ru51DELfc-Q), [yt](https://youtu.be/5X0RmlH6JI4), [yt](https://youtu.be/hhVRSLbpI5Q), [yt](https://youtu.be/JH8fsEAIaXo), [yt](https://www.youtube.com/playlist?list=PLWw98q-Xe7iH06qxEW5a22SBsSNsGnYjZ)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/openvinotoolkit/openvino_notebooks/blob/main/notebooks/001-hello-world/001-hello-world.ipynb) | 25.03.2024 |
| Gazelle | Joint Speech Language Model | [Tincans](https://tincans.ai) | [![](https://img.shields.io/github/stars/tincans-ai/gazelle?style=social)](https://github.com/tincans-ai/gazelle)

  • [blog post](https://tincans.ai/slm)

  • [demo](https://demo.tincans.ai/)

  • [discord](https://discord.gg/qyC5h3FSzU)

  • [reddit](https://www.reddit.com/r/LocalLLaMA/comments/1cr84gb/joint_speechlanguage_model_respond_directly_to/)

  • [hf](https://huggingface.co/tincans-ai/gazelle-v0.1), [hf](https://huggingface.co/tincans-ai/gazelle-v0.2), [hf](https://huggingface.co/tincans-ai/gazelle-v0.2-dpo), [hf](https://huggingface.co/facebook/wav2vec2-base-960h), [hf](https://huggingface.co/meta-llama/Llama-2-7b-chat)

  • [wiki](https://en.wikipedia.org/wiki/Spike_%28software_development%29)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tincans-ai/gazelle/blob/master/examples/infer_quantized.ipynb) | 20.03.2024 |
| Intel® Extension for Transformers | Transformer-based Toolkit to Accelerate GenAI/LLM Everywhere | [intel](https://www.intel.com/content/www/us/en/developer/topic-technology/open/overview.html) | [![](https://img.shields.io/github/stars/intel/intel-extension-for-transformers?style=social)](https://github.com/intel/intel-extension-for-transformers)

  • [arxiv](https://arxiv.org/abs/2309.17453), [arxiv](https://arxiv.org/abs/2311.00502), [arxiv](https://arxiv.org/abs/2211.07715), [arxiv](https://arxiv.org/abs/2210.17114), [arxiv](https://arxiv.org/abs/2111.05754)

  • [discord](https://discord.gg/Wxk3J3ZJkU)

  • [docs](https://intel.github.io/intel-extension-for-transformers/latest/docs/Welcome.html)

  • [git](https://github.com/ggerganov/ggml), [git](https://github.com/ggerganov/llama.cpp), [git](https://github.com/TimDettmers/bitsandbytes), [git](https://github.com/lm-sys/FastChat), [git](https://github.com/IntelLabs/fastRAG), [git](https://github.com/IST-DASLab/gptq), [git](https://github.com/mit-han-lab/streaming-llm)

  • [hf](https://huggingface.co/blog/assisted-generation), [hf](https://huggingface.co/Intel/neural-chat-7b-v3-1), [hf](https://huggingface.co/blog/Andyrasika/neural-chat-intel)

  • [medium](https://medium.com/@NeuralCompressor/creating-your-own-llms-on-your-laptop-a08cc4f7c91b), [medium](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3), [medium](https://medium.com/@NeuralCompressor/llm-performance-of-intel-extension-for-transformers-f7d061556176), [medium](https://medium.com/@NeuralCompressor/high-performance-low-bit-layer-wise-weight-only-quantization-on-a-laptop-712580899396), [medium](https://medium.com/intel-analytics-software/reduce-large-language-model-carbon-footprint-with-intel-neural-compressor-and-intel-extension-for-dfadec3af76a)

  • [yt](https://youtu.be/bWhZ1u_1rlc), [yt](https://www.youtube.com/watch?v=RbKRELWP9y8&t=2954s), [yt](https://youtu.be/7_urstS-noU), [yt](https://youtu.be/KWT6yKfu4n0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/intel/intel-extension-for-transformers/blob/main/docs/tutorials/pytorch/text-classification/SetFit_model_compression_AGNews.ipynb) | 19.03.2024 |
| Datasets | A Community Library for Natural Language Processing (usage sketch below) |

  • [Quentin Lhoest](https://github.com/lhoestq)
  • [Albert Villanova](https://albertvillanova.github.io/)
  • [Yacine Jernite](https://yjernite.github.io/)
  • [Abhishek Thakur](https://github.com/abhishekkrthakur)
  • others
  • [Patrick von Platen](https://github.com/patrickvonplaten)
  • [Suraj Patil](https://github.com/patil-suraj)
  • [Julien Chaumond](https://github.com/julien-c)
  • [Mariama Dramé](https://scholar.google.com/citations?user=0pwfXH0AAAAJ)
  • [Julien Plu](https://jplu.github.io/)
  • [Lewis Tunstall](https://lewtun.github.io/blog/)
  • [Joe Davison](https://joeddav.github.io/)
  • [Mario Šaško](https://github.com/mariosasko)
  • [Gunjan Chhablani](https://gchhablani.github.io/)
  • [Bhavitvya Malik](https://github.com/bhavitvyamalik)
  • [Simon Brandeis](https://github.com/SBrandeis)
  • [Teven Le Scao](https://github.com/TevenLeScao)
  • [Victor Sanh](https://github.com/VictorSanh)
  • [Canwen Xu](https://www.canwenxu.net/)
  • [Nicolas Patry](https://github.com/Narsil)
  • [Angelina McMillan-Major](https://github.com/mcmillanmajora)
  • [Philipp Schmid](https://www.philschmid.de/)
  • [Sylvain Gugger](https://github.com/sgugger)
  • [Clément Delangue](https://scholar.google.com/citations?user=bRMboT8AAAAJ)
  • [Théo Matussière](https://theo.matussie.re/)
  • [Lysandre Debut](http://lysand.re/)
  • [Stas Bekman](https://stasosphere.com/machine-learning/)
  • [Pierric Cistac](https://github.com/Pierrci)
  • [Thibault Goehringer](https://github.com/beurkinger)
  • [Victor Mustar](https://github.com/gary149)
  • [François Lagunas](https://github.com/madlag)
  • [Alexander Rush](https://rush-nlp.com/)
  • [Thomas Wolf](https://thomwolf.io/)

| [![](https://img.shields.io/github/stars/huggingface/datasets?style=social)](https://github.com/huggingface/datasets)

  • [arxiv](https://arxiv.org/abs/2109.02846)

  • [docs](https://huggingface.co/docs/datasets)

  • [hf](https://huggingface.co/datasets)

  • [kaggle](https://www.kaggle.com/code/nbroad/intro-to-hugging-face-datasets)

  • [yt](https://youtu.be/uaIJ96syPnM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/datasets_doc/en/quickstart.ipynb) | 18.03.2024 |
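
A minimal usage sketch, loading a public Hub dataset and applying a batched `map` (assumes `pip install datasets`):

```python
# Load, inspect, and transform a dataset from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
print(ds[0]["text"][:80], ds[0]["label"])

# Datasets are memory-mapped Arrow tables; map() processes them in
# batches without loading everything into RAM.
ds = ds.map(lambda batch: {"n_chars": [len(t) for t in batch["text"]]},
            batched=True)
```
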
| Evidently | An open-source framework to evaluate, test and monitor ML models in production |

  • [Elena Samuylova](https://github.com/elenasamuylova)
  • [Emeli Dral](https://github.com/emeli-dral)
  • [Olga Filippova](https://github.com/0lgaF)

| [![](https://img.shields.io/github/stars/evidentlyai/evidently?style=social)](https://github.com/evidentlyai/evidently)

  • [docs](https://docs.evidentlyai.com/)

  • [git](https://github.com/0lgaF/my_tab_with_evidently)

  • [website](https://evidentlyai.com/)

  • [yt](https://www.youtube.com/c/EvidentlyAI), [yt](https://youtu.be/L4Pv6ExBQPM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/evidentlyai/evidently/blob/main/examples/sample_notebooks/getting_started_tutorial.ipynb) | 15.03.2024 |
| Instructor | Library that makes it a breeze to work with structured outputs from large language models (extraction sketch below) | [Jason Liu](https://jxnl.co/) | [![](https://img.shields.io/github/stars/jxnl/instructor?style=social)](https://github.com/jxnl/instructor)

  • [discord](https://discord.gg/CV8sPM5k5Y)

  • [docs](https://python.useinstructor.com/)

  • [twitter](https://twitter.com/jxnlco)

  • [yt](https://youtu.be/rDP44EVpHTA), [yt](https://youtu.be/dq1Sjb8IGow), [yt](https://youtu.be/higlHgYDc5E)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1iBkrEh2G5U8yh8RmI8EkWxjLq6zIIuVm) | 13.03.2024 |
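
A minimal extraction sketch, assuming `pip install instructor openai`, an `OPENAI_API_KEY` in the environment, and a placeholder model id:

```python
# Extract a validated Pydantic object from an LLM response.
import instructor
from openai import OpenAI
from pydantic import BaseModel

class UserInfo(BaseModel):
    name: str
    age: int

client = instructor.from_openai(OpenAI())
user = client.chat.completions.create(
    model="gpt-4o-mini",       # placeholder model id
    response_model=UserInfo,   # instructor validates (and retries) against this
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)
print(user.name, user.age)
```
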
| FiftyOne | Open-source tool for building high-quality datasets and computer vision models |

  • [Brian Moore](https://github.com/brimoor)
  • [Jason Corso](https://web.eecs.umich.edu/~jjcorso/)

| [![](https://img.shields.io/github/stars/voxel51/fiftyone?style=social)](https://github.com/voxel51/fiftyone)

  • [blog post](https://voxel51.com/blog/)

  • [docs](https://docs.voxel51.com/)

  • [git](https://github.com/voxel51/fiftyone-examples)

  • [medium](https://medium.com/voxel51), [medium](https://towardsdatascience.com/open-source-tools-for-fast-computer-vision-model-building-b39755aab490)

  • [slack](https://slack.voxel51.com/)

  • [twitter](https://twitter.com/voxel51)

  • [website](https://voxel51.com/fiftyone/)

  • [yt](https://www.youtube.com/playlist?list=PLuREAXoPgT0SJLKsgFzKxffMApbXp90Gi)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/voxel51/fiftyone-examples/blob/master/examples/quickstart.ipynb) | 27.02.2024 |
| MetaVoice | 1.2B parameter base model trained on 100K hours of speech for TTS | [MetaVoice](https://themetavoice.xyz/) | [![](https://img.shields.io/github/stars/metavoiceio/metavoice-src?style=social)](https://github.com/metavoiceio/metavoice-src)

  • [demo](https://ttsdemo.themetavoice.xyz/)

  • [hf](https://huggingface.co/metavoiceio)

  • [twitter](https://twitter.com/MetaVoiceAI)

  • [yt](https://youtu.be/Y_k3bHPcPTo), [yt](https://youtu.be/gVKbf31hrYs)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1UmjE1mzfG4td0rCjJEaAWGQXpn_GuwwY) | 26.02.2024 |
| Generative AI for Beginners - A Course | A 12-lesson course teaching everything you need to know to start building Generative AI applications | [microsoft](https://www.microsoft.com/) | [![](https://img.shields.io/github/stars/microsoft/generative-ai-for-beginners?style=social)](https://github.com/microsoft/generative-ai-for-beginners)

  • [discord](https://aka.ms/genai-discord)

  • [git](https://github.com/microsoft/Web-Dev-For-Beginners)

  • [project](https://microsoft.github.io/generative-ai-for-beginners/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/microsoft/generative-ai-for-beginners/blob/main/06-text-generation-apps/notebook-azure-openai.ipynb) | 22.02.2024 |
| OmegaConf | Hierarchical configuration system, with support for merging configurations from multiple sources, providing a consistent API regardless of how the configuration was created (merge sketch below) | [Omry Yadan](https://github.com/omry) | [![](https://img.shields.io/github/stars/omry/omegaconf?style=social)](https://github.com/omry/omegaconf)

  • [docs](https://omegaconf.readthedocs.io/)

  • [medium](https://majianglin2003.medium.com/python-omegaconf-a33be1b748ab)

  • [slides](https://docs.google.com/presentation/d/e/2PACX-1vT_UIV7hCnquIbLUm4NnkUpXvPEh33IKiUEvPRF850WKA8opOlZOszjKdZ3tPmf8u7hGNP6HpqS-NT5/pub)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/omry/omegaconf/blob/master/docs/notebook/Tutorial.ipynb) | 15.02.2024 |
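
A minimal merge sketch: a base config and dotlist-style overrides combine into one object with uniform access:

```python
# Merge a base config with CLI-style overrides; later sources win.
from omegaconf import OmegaConf

base = OmegaConf.create({"model": {"lr": 1e-3, "layers": 4}})
cli = OmegaConf.from_dotlist(["model.lr=0.01"])
cfg = OmegaConf.merge(base, cli)

print(cfg.model.lr)           # 0.01 (overridden)
print(cfg.model.layers)       # 4 (from the base config)
print(OmegaConf.to_yaml(cfg))
```
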
| Optuna | An automatic hyperparameter optimization software framework, particularly designed for machine learning (quickstart sketch below) |

  • [Takuya Akiba](https://iwiwi.github.io/)
  • [Shotaro Sano](https://github.com/g-votte)
  • [Toshihiko Yanase](https://github.com/toshihikoyanase)
  • [Takeru Ohta](https://github.com/sile)
  • [Masanori Koyama](https://scholar.google.com/citations?user=oY1gA10AAAAJ)

| [![](https://img.shields.io/github/stars/optuna/optuna?style=social)](https://github.com/optuna/optuna)

  • [arxiv](https://arxiv.org/abs/1907.10902)

  • [docker](https://hub.docker.com/r/optuna/optuna)

  • [docs](https://optuna.readthedocs.io/en/stable/)

  • [git](https://github.com/optuna/optuna-dashboard)

  • [website](https://optuna.org/)

  • [yt](https://youtu.be/J_aymk4YXhg), [yt](https://youtu.be/tcrcLRopTX0), [yt](https://youtu.be/-UeC4MR3PHM), [yt](https://youtu.be/oC8zFYcfYXU)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/optuna/optuna-examples/blob/main/quickstart.ipynb) | 15.02.2024 |
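
The quickstart pattern in a few lines: a define-by-run objective and a study that minimizes it:

```python
# Minimize a toy objective with Optuna's define-by-run search space.
import optuna

def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)
print(study.best_params)  # should be close to {"x": 2}
```
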
| Data augmentation | This tutorial demonstrates data augmentation: a technique to increase the diversity of your training set by applying random transformations such as image rotation (layers sketch below) | [Billy Lamberta](https://github.com/lamberta) |

  • [pwc](https://paperswithcode.com/task/data-augmentation)

  • [tf](https://www.tensorflow.org/datasets/catalog/tf_flowers), [tf](https://www.tensorflow.org/tutorials/images/data_augmentation)

  • [wiki](https://en.wikipedia.org/wiki/Data_augmentation)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/data_augmentation.ipynb) | 14.02.2024 |
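
A minimal sketch of the preprocessing-layers approach the tutorial uses; the augmentation block can be dropped straight into a model:

```python
# Random flips/rotations/zooms as Keras layers, active only in training.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

images = tf.random.uniform((8, 160, 160, 3))
augmented = augment(images, training=True)  # no-op at inference time
print(augmented.shape)
```
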
| Stable Cascade | Text-to-image model that introduces an interesting three-stage approach, setting new benchmarks for quality, flexibility, fine-tuning, and efficiency with a focus on further eliminating hardware barriers | [Stability AI](https://stability.ai/research) | [![](https://img.shields.io/github/stars/Stability-AI/StableCascade?style=social)](https://github.com/Stability-AI/StableCascade)

  • [arxiv](https://arxiv.org/abs/2306.00637)

  • [blog post](https://stability.ai/news/introducing-stable-cascade)

  • [discord](https://discord.gg/stablediffusion)

  • [hf](https://huggingface.co/stabilityai/stable-cascade), [hf](https://huggingface.co/datasets/nateraw/parti-prompts)

  • [medium](https://medium.com/intelligent-art/stable-cascade-a-super-easy-local-installation-guide-ce0cbd06d800), [medium](https://medium.com/@yushantripleseven/stable-cascade-training-inference-a52e12ecc5fa)

  • [twitter](https://twitter.com/stabilityai)

  • [yt](https://youtu.be/Ybu6qTbEsewc), [yt](https://youtu.be/JuX-uukwdkI), [yt](https://youtu.be/YMxXtaiVHks), [yt](https://youtu.be/UgM-z2q3Xe0), [yt](https://youtu.be/W6YLIyA3Kco), [yt](https://youtu.be/X1rLWFRagIw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/mkshing/notebooks/blob/main/stable_cascade.ipynb) | 14.02.2024 |
| CleanVision | Automatically detects potential issues in image datasets, such as images that are blurry, under/over-exposed, or (near) duplicates (usage sketch below) | [cleanlab](https://cleanlab.ai/about/) | [![](https://img.shields.io/github/stars/cleanlab/cleanvision?style=social)](https://github.com/cleanlab/cleanvision)

  • [blog post](https://cleanlab.ai/blog/cleanvision/)

  • [docs](https://cleanvision.readthedocs.io/)

  • [git](https://github.com/cleanlab/cleanvision-examples)

  • [slack](https://cleanlab.ai/slack)

  • [twitter](https://twitter.com/CleanlabAI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/cleanlab/cleanvision/blob/main/docs/source/tutorials/tutorial.ipynb) | 13.02.2024 |
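
A minimal audit sketch, assuming `pip install cleanvision` and a placeholder image folder (import path as in the project's README):

```python
# Audit a folder of images for common quality issues.
from cleanvision import Imagelab

imagelab = Imagelab(data_path="path/to/images")  # placeholder path
imagelab.find_issues()  # blur, over/under-exposure, duplicates, ...
imagelab.report()       # summary of detected issues per issue type
```
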
| DynamiCrafter | Animating Open-domain Images with Video Diffusion Priors |

  • [Jinbo Xing](https://doubiiu.github.io/)
  • [Menghan Xia](https://menghanxia.github.io/)
  • [Yong Zhang](https://yzhang2016.github.io/)
  • [Haoxin Chen](https://scutpaul.github.io/)
  • others
  • [Wangbo Yu](https://github.com/GooDrYu)
  • [Hanyuan Liu](https://github.com/hyliu)
  • [Xintao Wang](https://xinntao.github.io/)
  • [Tien-Tsin Wong](https://ttwong12.github.io/myself.html)
  • [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ)

| [![](https://img.shields.io/github/stars/Doubiiu/DynamiCrafter?style=social)](https://github.com/Doubiiu/DynamiCrafter)

  • [arxiv](https://arxiv.org/abs/2310.12190)

  • [git](https://github.com/chaojie/ComfyUI-DynamiCrafter), [git](https://github.com/AILab-CVC/VideoCrafter), [git](https://github.com/YingqingHe/ScaleCrafter), [git](https://github.com/AILab-CVC/TaleCrafter), [git](https://github.com/AILab-CVC/FreeNoise)

  • [hf](https://huggingface.co/Doubiiu/DynamiCrafter_1024)

  • [project](https://doubiiu.github.io/projects/DynamiCrafter/)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/1aj7gcw/dynamicrafter_gets_updated/)

  • [twitter](https://x.com/noguchis/status/1754488826016432341?s=20)

  • [yt](https://youtu.be/0NfmIsNAg-g), [yt](https://youtu.be/PtW7hjCawbo)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/DynamiCrafter-colab/blob/main/DynamiCrafter_colab_576_1024.ipynb) | 12.02.2024 |
| Ollama | Get up and running with large language models (chat sketch below) | [Michael Yang](https://github.com/mxyng) | [![](https://img.shields.io/github/stars/ollama/ollama?style=social)](https://github.com/ollama/ollama)

  • [docker](https://hub.docker.com/r/ollama/ollama)

  • [git](https://github.com/ollama/ollama-python), [git](https://github.com/ollama/ollama-js), [git](https://github.com/ggerganov/llama.cpp)

  • [pypi](https://pypi.org/project/ollama/)

  • [twitter](https://x.com/ollama)

  • [website](https://ollama.com/)

  • [yt](https://youtu.be/rIRkxZSn-A8), [yt](https://youtu.be/1xdneyn6zjw), [yt](https://youtu.be/cTxENLLX1ho), [yt](https://youtu.be/ztBJqzBU5kc), [yt](https://youtu.be/Ox8hhpgrUi0), [yt](https://youtu.be/lhQ8ixnYO2Y), [yt](https://youtu.be/pxhkDaKzBaY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ollama/ollama/blob/master/examples/jupyter-notebook/ollama.ipynb) | 10.02.2024 |
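
A minimal chat sketch via the Python client, assuming a running Ollama server and a previously pulled model (`llama3` here is a placeholder):

```python
# Chat with a locally served model (assumes `ollama pull llama3` was run).
import ollama

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```
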
| XLA | Accelerated Linear Algebra is an open-source machine learning compiler for GPUs, CPUs, and ML accelerators (jit_compile sketch below) | [OpenXLA](https://openxla.org) | [![](https://img.shields.io/github/stars/openxla/xla?style=social)](https://github.com/openxla/xla)

  • [medium](https://medium.com/@muhammedashraf2661/demystifying-xla-unlocking-the-power-of-accelerated-linear-algebra-9b62f8180dbd), [medium](https://runaker.medium.com/one-code-to-rule-them-all-simplifying-ai-development-with-hardware-agnostic-abstraction-layers-a61448bb6d22)

  • [pt](https://pytorch.org/xla)

  • [tf](https://www.tensorflow.org/xla)

  • [wiki](https://en.wikipedia.org/wiki/Accelerated_Linear_Algebra)

  • [yt](https://www.youtube.com/playlist?list=PLlFotmaRrOzs23kqlSF-r8v1dJHz5GxZs), [yt](https://www.youtube.com/playlist?list=PLlFotmaRrOzu8TQsTahDo_Cn7QdntFlUL), [yt](https://www.youtube.com/playlist?list=PLlFotmaRrOzt8xOwckcXL7vObZmr8PK1y), [yt](https://youtu.be/QNSxFXJ-xMM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/openxla/xla/blob/main/docs/tf2xla/tutorials/jit_compile.ipynb) | 02.02.2024 |
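
In TensorFlow, opting a function into XLA compilation is a single flag; a minimal sketch:

```python
# Compile a whole TensorFlow function with XLA via jit_compile=True.
import tensorflow as tf

@tf.function(jit_compile=True)  # XLA fuses matmul + bias + relu
def dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal((128, 256))
w = tf.random.normal((256, 64))
b = tf.zeros((64,))
print(dense_relu(x, w, b).shape)
```
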
| Composer | PyTorch library that enables you to train neural networks faster, at lower cost, and to higher accuracy | [The Mosaic ML Team](https://www.mosaicml.com/team) | [![](https://img.shields.io/github/stars/mosaicml/composer?style=social)](https://github.com/mosaicml/composer)

  • [app](https://app.mosaicml.com/)

  • [arxiv](https://arxiv.org/abs/2202.05924), [arxiv](https://arxiv.org/abs/2002.04688)

  • [blog post](https://www.mosaicml.com/blog/5-best-practices-for-efficient-model-training)

  • [docs](http://docs.mosaicml.com/)

  • [slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-w0tiddn9-WGTlRpfjcO9J5jyrMub1dg)

  • [twitter](https://twitter.com/mosaicml)

  • [website](https://www.mosaicml.com/composer)

  • [wiki](https://en.wikipedia.org/wiki/Amdahl's_law)

  • [yt](https://www.youtube.com/@mosaicml6047/videos), [yt](https://youtu.be/n-1WV5QdIDc), [yt](https://youtu.be/Xi_5wq2MpOw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/mosaicml/composer/blob/dev/examples/getting_started.ipynb) | 01.02.2024 |
| CycleGAN | This notebook demonstrates unpaired image-to-image translation using conditional GANs (cycle-loss sketch below) |

  • [Jun-Yan Zhu](https://www.cs.cmu.edu/~junyanz/)
  • [Taesung Park](https://taesung.me/)
  • [Phillip Isola](https://web.mit.edu/phillipi/)
  • [Alexei Efros](https://people.eecs.berkeley.edu/~efros/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV.2017.244)](https://doi.org/10.1109/ICCV.2017.244)

  • [arxiv](https://arxiv.org/abs/1703.10593)

  • [tf](https://www.tensorflow.org/datasets/catalog/cycle_gan), [tf](https://www.tensorflow.org/tutorials/generative/cyclegan)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/cyclegan.ipynb) | 17.01.2024 |
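
The notebook's key addition over Pix2Pix is the cycle-consistency term: translating X→Y→X should recover the input image. A minimal TensorFlow sketch (λ = 10, as in the tutorial):

```python
# Cycle-consistency loss: L1 penalty between an image and its
# round-trip translation, weighted by LAMBDA.
import tensorflow as tf

LAMBDA = 10

def cycle_loss(real_image, cycled_image):
    return LAMBDA * tf.reduce_mean(tf.abs(real_image - cycled_image))
```
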
| Integrated gradients | This tutorial demonstrates how to implement Integrated Gradients, an Explainable AI technique (implementation sketch below) |

  • [Mukund Sundararajan](https://scholar.google.com/citations?user=q39nzokAAAAJ)
  • [Ankur Taly](https://theory.stanford.edu/~ataly/)
  • [Qiqi Yan](https://scholar.google.com/citations?user=Wn8xr_gAAAAJ)

| [![](https://img.shields.io/github/stars/ankurtaly/Integrated-Gradients?style=social)](https://github.com/ankurtaly/Integrated-Gradients)

  • [arxiv](https://arxiv.org/abs/1703.01365)

  • [git](https://github.com/GoogleCloudPlatform/training-data-analyst/tree/master/blogs/integrated_gradients)

  • [medium](https://medium.com/codex/explainable-ai-integrated-gradients-for-deep-neural-network-predictions-eb4f96248afb), [medium](https://towardsdatascience.com/understanding-deep-learning-models-with-integrated-gradients-24ddce643dbf)

  • [tf](https://www.tensorflow.org/tutorials/interpretability/integrated_gradients)

  • [visualizing](https://distill.pub/2020/attribution-baselines/)

  • [wiki](https://en.wikipedia.org/wiki/Explainable_artificial_intelligence), [wiki](https://en.wikipedia.org/wiki/Linear_interpolation), [wiki](https://en.wikipedia.org/wiki/Riemann_sum)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/interpretability/integrated_gradients.ipynb) | 17.01.2024 |
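A compact sketch of the Integrated Gradients technique itself (not the tutorial's exact code): gradients are accumulated along a straight-line path from a baseline to the input and combined with a trapezoidal Riemann sum.

```python
import tensorflow as tf

def integrated_gradients(model, baseline, image, target_class, steps=50):
    """Approximate IG for one image with a trapezoidal Riemann sum."""
    # Linear interpolation between the baseline and the input.
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps + 1), (-1, 1, 1, 1))
    path = baseline[None] + alphas * (image - baseline)[None]
    with tf.GradientTape() as tape:
        tape.watch(path)
        scores = model(path)[:, target_class]
    grads = tape.gradient(scores, path)
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    # Scale by the input-baseline difference, as the method prescribes.
    return (image - baseline) * avg_grads
```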
| MAGNeT | Masked generative sequence modeling method that operates directly over several streams of audio tokens |

  • [Alon Ziv](https://www.cs.huji.ac.il/w~alonzi/)
  • [Itai Gat](https://itaigat.com/)
  • [Gaël Le Lan](https://github.com/gl3lan)
  • [Tal Remez](https://talremez.github.io/)
  • others
  • [Felix Kreuk](https://felixkreuk.github.io/)
  • [Alexandre Défossez](https://ai.honu.io/)
  • [Jade Copet](https://scholar.google.com/citations?&user=GRMLwjAAAAAJ)
  • [Gabriel Synnaeve](https://syhw.github.io/)
  • [Yossi Adi](https://www.cs.huji.ac.il/~adiyoss/)

| [![](https://img.shields.io/github/stars/facebookresearch/audiocraft?style=social)](https://github.com/facebookresearch/audiocraft/blob/main/docs/MAGNET.md)

  • [arxiv](https://arxiv.org/abs/2401.04577), [arxiv](https://arxiv.org/abs/2305.09636), [arxiv](https://arxiv.org/abs/2307.04686)

  • [git](https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/Tutorials/AI-Music-Generation-Audiocraft-Tutorial.md#more-info-about-top-k-top-p-temperature-and-classifier-free-guidance-from-chatgpt)

  • [hf](https://huggingface.co/facebook/magnet-medium-10secs), [hf](https://huggingface.co/facebook/magnet-medium-30secs), [hf](https://huggingface.co/facebook/audio-magnet-medium)

  • [medium](https://generativeai.pub/metas-ai-magnet-the-next-big-thing-in-text-to-audio-technology-7d524d9459ef)

  • [project](https://pages.cs.huji.ac.il/adiyoss-lab/MAGNeT/)

  • [reddit](https://www.reddit.com/r/ArtificialInteligence/comments/19808gf/magnet_masked_audio_generation_using_a_single/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/MAGNeT-colab/blob/main/MAGNET_colab.ipynb) | 16.01.2024 |
| AutoFaiss | Automatically create Faiss knn indices with the most optimal similarity search parameters | [Criteo](https://github.com/criteo) | [![](https://img.shields.io/github/stars/criteo/autofaiss?style=social)](https://github.com/criteo/autofaiss)

  • [docs](https://criteo.github.io/autofaiss/)

  • [git](https://github.com/facebookresearch/faiss)

  • [medium](https://medium.com/criteo-engineering/introducing-autofaiss-an-automatic-k-nearest-neighbor-indexing-library-at-scale-c90842005a11)

  • [pypi](https://pypi.python.org/pypi/autofaiss)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/criteo/autofaiss/blob/master/docs/notebooks/autofaiss_getting_started.ipynb) | 12.01.2024 |
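A minimal AutoFaiss sketch, assuming the `build_index` parameter names from the project docs; the memory budgets are illustrative.

```python
import numpy as np
from autofaiss import build_index

embeddings = np.float32(np.random.rand(10_000, 512))

# AutoFaiss picks the index type and tuning parameters automatically.
index, index_infos = build_index(
    embeddings,
    save_on_disk=False,            # keep the index in memory for this demo
    max_index_memory_usage="2G",   # assumed parameter names
    current_memory_available="4G",
)
distances, ids = index.search(embeddings[:5], 10)  # 10 nearest neighbors
```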
| Retrieval based Voice Conversion WebUI | An easy-to-use Voice Conversion framework based on VITS | [RVC-Project](https://github.com/RVC-Project) | [![](https://img.shields.io/github/stars/RVC-Project/Retrieval-based-Voice-Conversion-WebUI?style=social)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)

  • [discord](https://discord.gg/HcsmBBGyVk)

  • [git](https://github.com/auspicious3000/contentvec), [git](https://github.com/jik876/hifi-gan), [git](https://github.com/FFmpeg/FFmpeg), [git](https://github.com/Anjok07/ultimatevocalremovergui), [git](https://github.com/openvpi/audio-slicer), [git](https://github.com/Dream-High/RMVPE)

  • [hf](https://huggingface.co/lj1995/VoiceConversionWebUI)

  • [medium](https://medium.com/@ja.harr91/decoding-the-sound-of-virality-a-deep-dive-into-adversarial-ai-for-voice-conversion-tasks-on-m1-d60d32cfb2d4)

  • [yt](https://youtu.be/-JcvdDErkAU), [yt](https://youtu.be/9TroP5mR3CM), [yt](https://youtu.be/Y8IxVVQBEpc), [yt](https://youtu.be/qZ12-Vm2ryc), [yt](https://youtu.be/5i_Pyw0gH-M)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb) | 11.01.2024 |
| Flax | Neural network library and ecosystem for JAX designed for flexibility |

  • [Jonathan Heek](https://github.com/jheek)
  • [Anselm Levskaya](https://anselmlevskaya.com/)
  • [Avital Oliver](https://github.com/avital)
  • [Marvin Ritter](https://github.com/Marvin182)
  • others
  • [Bertrand Rondepierre](https://github.com/BertrandRdp)
  • [Andreas Steiner](https://github.com/andsteing)
  • [Marc van Zee](https://research.google/people/marc-van-zee/)

| [![](https://img.shields.io/github/stars/google/flax?style=social)](https://github.com/google/flax)

  • [docs](https://flax.readthedocs.io/)

  • [hf](https://github.com/huggingface/transformers/tree/main/examples/flax)

  • [medium](https://medium.com/syncedreview/google-introduces-flax-a-neural-network-library-for-jax-84bdc6f8f160)

  • [reddit](https://www.reddit.com/r/MachineLearning/comments/erpdf7/p_flax_a_neural_network_library_for_jax_designed/)

  • [yt](https://youtu.be/e8StU6WQCqw), [yt](https://youtu.be/HOlQzrn84A4), [yt](https://youtu.be/5eUSmJvK8WA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/flax/blob/main/docs/quick_start.ipynb) | 10.01.2024 |
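A minimal Flax Linen sketch: modules are dataclasses, and parameters live in an explicit pytree returned by `init` and passed to `apply`.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    features: int

    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(self.features)(x))
        return nn.Dense(1)(x)

model = MLP(features=64)
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 8)))  # build parameters
y = model.apply(params, jnp.ones((4, 8)))                     # pure forward pass
```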
| Big Vision | This codebase is designed for training large-scale vision models using Cloud TPU VMs or GPU machines |

  • [Lucas Beyer](http://lucasb.eyer.be/)
  • [Xiaohua Zhai](https://github.com/xiaohuazhai)
  • [Alexander Kolesnikov](https://github.com/akolesnikoff)

| [![](https://img.shields.io/github/stars/google-research/big_vision?style=social)](https://github.com/google-research/big_vision)

  • [arxiv](https://arxiv.org/abs/2010.11929), [arxiv](https://arxiv.org/abs/2106.04560), [arxiv](https://arxiv.org/abs/2105.01601), [arxiv](https://arxiv.org/abs/2205.01580), [arxiv](https://arxiv.org/abs/2212.08013), [arxiv](https://arxiv.org/abs/2305.13035), [arxiv](https://arxiv.org/abs/2303.17376), [arxiv](https://arxiv.org/abs/2306.07915), [arxiv](https://arxiv.org/abs/2305.16999), [arxiv](https://arxiv.org/abs/2302.08242), [arxiv](https://arxiv.org/abs/2006.07159)

  • [tf](https://www.tensorflow.org/guide/data), [tf](https://www.tensorflow.org/datasets)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/lit.ipynb) | 03.01.2024 |
| Open Interpreter | An open-source, locally running implementation of OpenAI's Code Interpreter | [Killian Lucas](https://github.com/KillianLucas) | [![](https://img.shields.io/github/stars/KillianLucas/open-interpreter?style=social)](https://github.com/KillianLucas/open-interpreter)

  • [discord](https://discord.gg/6p3fD6rBVm)

  • [docs](https://docs.openinterpreter.com/)

  • [website](https://openinterpreter.com/)

  • [yt](https://youtu.be/SqnXUHwIa3c), [yt](https://youtu.be/s-f4lCETxu0), [yt](https://youtu.be/J-H2un1Adr0), [yt](https://youtu.be/jaijpff58vw), [yt](https://youtu.be/7KFbG_3dKKs), [yt](https://youtu.be/4OhuFjPyZNQ), [yt](https://youtu.be/01tQLn_RRcE), [yt](https://youtu.be/uyfoHQVgeY0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb) | 03.01.2024 |
| Seamless Communication | Family of AI models that enable more natural and authentic communication across languages |

  • [Loïc Barrault](https://loicbarrault.github.io/)
  • [Yu-An Chung](https://iamyuanchung.github.io/)
  • [Mariano Coria](https://www.linkedin.com/in/marianocoria)
  • [David Dale](https://daviddale.ru/)
  • others
  • [Ning Dong](https://scholar.google.com/citations?user=gg1hvjoAAAAJ)
  • [Mark Duppenthaler](https://github.com/mduppes)
  • [Paul-Ambroise Duquenne](https://scholar.google.com/citations?user=Uah8IcAAAAAJ)
  • [Hady Elsahar](https://www.hadyelsahar.io/)
  • [Min-Jae Hwang](https://mjhwang93.github.io/)
  • [Hirofumi Inaguma](https://hirofumi0810.github.io/)
  • [Ilia Kulikov](https://github.com/uralik)
  • [Pengwei Li](https://scholar.google.com/citations?user=hQB3YsYAAAAJ)
  • [Daniel Licht](https://github.com/Lichtphyz)
  • [Jean Maillard](https://scholar.google.com/citations?user=_ewOoK0AAAAJ)
  • [Ruslan Mavlyutov](https://github.com/mavlyutovr)
  • [Kaushik Ram Sadagopan](https://github.com/kauterry)
  • [Abinesh Ramakrishnan](https://github.com/ibanesh)
  • [Tuan Tran](https://antoine-tran.github.io/)
  • [Guillaume Wenzek](https://github.com/gwenzek)
  • [Yilin Yang](https://yilinyang7.github.io/)
  • [Ethan Ye](https://github.com/yeyinthtoon)
  • [Ivan Evtimov](https://ivanevtimov.eu/)
  • [Pierre Fernandez](https://pierrefdz.github.io/)
  • [Robin San Roman](https://scholar.google.com/citations?user=AJ3ir84AAAAJ)
  • [Bokai Yu](https://scholar.google.com/citations?user=7jNmPwUAAAAJ)
  • [Pierre Andrews](https://github.com/Mortimerp9)
  • [Can Balioglu](http://canbalioglu.com/)
  • [Peng-Jen Chen](https://scholar.google.com/citations?user=rOXs9VMAAAAJ)
  • [Marta Costa-jussà](https://costa-jussa.com/)
  • [Maha Elbayad](http://elbayadm.github.io/)
  • [Hongyu Gong](https://github.com/hygong-fb)
  • [Francisco Guzmán](https://guzmanhe.github.io/)
  • [Kevin Heffernan](https://github.com/heffernankevin)
  • [Somya Jain](https://scholar.google.com/citations?user=AmBxU3kAAAAJ)
  • [Justine Kao](https://scholar.google.com/citations?user=Y9BLeTAAAAAJ)
  • [Ann Lee](https://www.stat.cmu.edu/~annlee/)
  • [Xutai Ma](https://github.com/xutaima)
  • [Benjamin Peloquin](https://scholar.google.com/citations?user=5GNAjB8AAAAJ)
  • [Juan Pino](https://scholar.google.com/citations?user=weU_-4IAAAAJ)
  • [Sravya Popuri](https://scholar.google.com/citations?user=MtmqG3UAAAAJ)
  • [Holger Schwenk](https://github.com/hoschwenk)
  • [Anna Sun](https://github.com/annasun28)
  • [Paden Tomasello](https://scholar.google.com/citations?user=sBtWMGYAAAAJ)
  • [Changhan Wang](https://www.changhan.me/)
  • [Skyler Wang](https://www.skylerwang.com/)
  • [Mary Williamson](https://scholar.google.com/citations?user=Ys4xB-QAAAAJ)

| [![](https://img.shields.io/github/stars/facebookresearch/seamless_communication?style=social)](https://github.com/facebookresearch/seamless_communication)

  • [arxiv](https://arxiv.org/abs/2312.05187)

  • [blog post](https://ai.meta.com/research/seamless-communication/)

  • [git](https://github.com/libsndfile/libsndfile), [git](https://github.com/facebookresearch/fairseq2), [git](https://github.com/facebookresearch/SimulEval), [git](https://github.com/facebookresearch/stopes), [git](https://github.com/facebookresearch/SONAR)

  • [hf](https://huggingface.co/facebook/seamless-m4t-v2-large), [hf](https://huggingface.co/facebook/seamless-expressive), [hf](https://huggingface.co/facebook/seamless-streaming)

  • [medium](https://ngwaifoong92.medium.com/beginners-guide-to-seamlessm4t-81efad6e8ca6)

  • [yt](https://www.youtube.com/watch?v=0padjtkHXTE), [yt](https://youtu.be/rNN7qsoCKBo), [yt](https://youtu.be/RKEFZ44YOcc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/seamless_communication/blob/main/Seamless_Tutorial.ipynb) | 14.12.2023 |
| colab2pdf | Convert your Colab notebook to a PDF | [Drengskapur](https://github.com/drengskapur) | [![](https://img.shields.io/github/stars/drengskapur/colab2pdf?style=social)](https://github.com/drengskapur/colab2pdf) | [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1zqrIYC0iQ_CZkRqGXgZggrwjtt_4BmpL) | 11.12.2023 |
| Sentence Transformers | Multilingual Sentence, Paragraph, and Image Embeddings using BERT & Co |

  • [Nils Reimers](https://www.nils-reimers.de/)
  • [Iryna Gurevych](https://www.informatik.tu-darmstadt.de/ukp/ukp_home/head_ukp/index.en.jsp)

| [![](https://img.shields.io/github/stars/UKPLab/sentence-transformers?style=social)](https://github.com/UKPLab/sentence-transformers)

  • [arxiv](https://arxiv.org/abs/1908.10084), [arxiv](https://arxiv.org/abs/2004.09813), [arxiv](https://arxiv.org/abs/2010.08240)

  • [docs](https://www.sbert.net/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/UKPLab/sentence-transformers/blob/master/examples/applications/retrieve_rerank/retrieve_rerank_simple_wikipedia.ipynb) | 07.12.2023 |
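A minimal Sentence Transformers sketch; the checkpoint is one of the library's standard models.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(
    ["A man is eating food.", "Someone is having a meal."],
    convert_to_tensor=True,
)
# Cosine similarity of the two sentence embeddings.
print(util.cos_sim(emb[0], emb[1]))
```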
| CleanRL | Deep Reinforcement Learning library that provides high-quality single-file implementation with research-friendly features |

  • [Shengyi Huang](https://costa.sh/)
  • [Rousslan Dossa](https://dosssman.github.io/)
  • [Chang Ye](https://github.com/yooceii)
  • [Jeff Braga](https://github.com/bragajj)
  • others
  • [Dipam Chakraborty](https://github.com/dipamc)
  • [Kinal Mehta](https://kinalmehta.github.io/)
  • [João Araújo](https://github.com/joaogui1)

| [![](https://img.shields.io/github/stars/vwxyzjn/cleanrl?style=social)](https://github.com/vwxyzjn/cleanrl)

  • [arxiv](https://arxiv.org/abs/1707.06347), [arxiv](https://arxiv.org/abs/1707.06887), [arxiv](https://arxiv.org/abs/1812.05905), [arxiv](https://arxiv.org/abs/1509.02971), [arxiv](https://arxiv.org/abs/1802.09477), [arxiv](https://arxiv.org/abs/2009.04416), [arxiv](https://arxiv.org/abs/1810.12894)

  • [docs](https://docs.cleanrl.dev/)

  • [git](https://github.com/tinkoff-ai/CORL), [git](https://github.com/Farama-Foundation/Gymnasium), [git](https://github.com/openai/baselines), [git](https://github.com/ikostrikov/jaxrl)

  • [hf](https://huggingface.co/cleanrl)

  • [paper](https://www.jmlr.org/papers/v23/21-1342.html)

  • [yt](https://www.youtube.com/channel/UCDdC6BIFRI0jvcwuhi3aI6w), [yt](https://youtu.be/dm4HdGujpPs)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/vwxyzjn/cleanrl/blob/master/docs/get-started/CleanRL_Huggingface_Integration_Demo.ipynb) | 28.11.2023 |
| Vocos | Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis | [Hubert Siuzdak](https://github.com/hubertsiuzdak) | [![](https://img.shields.io/github/stars/gemelo-ai/vocos?style=social)](https://github.com/gemelo-ai/vocos)

  • [arxiv](https://arxiv.org/abs/2306.00814)

  • [project](https://gemelo-ai.github.io/vocos/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/charactr-platform/vocos/blob/main/notebooks/Bark%2BVocos.ipynb) | 21.11.2023 |
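A minimal Vocos decode sketch; the checkpoint name and the `(batch, n_mels, frames)` input shape are assumed from the project README for the 24 kHz model.

```python
import torch
from vocos import Vocos

# Checkpoint name assumed from the project's Hugging Face hub.
vocos = Vocos.from_pretrained("charactr/vocos-mel-24khz")
mel = torch.randn(1, 100, 256)  # random mel just to exercise the decoder
audio = vocos.decode(mel)       # waveform at 24 kHz
```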
| X—LLM | Easy LLM Finetuning using the most advanced methods | [Boris Zubarev](https://github.com/BobaZooba) | [![](https://img.shields.io/github/stars/BobaZooba/xllm?style=social)](https://github.com/BobaZooba/xllm)

  • [arxiv](https://arxiv.org/abs/2305.18290)

  • [discord](https://discord.gg/5znbxBgwZP)

  • [git](https://github.com/BobaZooba/xllm-demo), [git](https://github.com/BobaZooba/wgpt), [git](https://github.com/BobaZooba/shurale)

  • [hf](https://huggingface.co/TachyHealth), [hf](https://huggingface.co/BobaZooba/Shurale7b-v1)

  • [pypi](https://pypi.org/project/xllm/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1zsNmJFns1PKZy5VE5p5nsQL-mZF7SwHf?usp=sharing) | 15.11.2023 |
| Distil-Whisper | Maintains the robustness of the Whisper model to difficult acoustic conditions, while being less prone to hallucination errors on long-form audio |

  • [Sanchit Gandhi](https://github.com/sanchit-gandhi)
  • [Patrick von Platen](https://github.com/patrickvonplaten)
  • [Alexander Rush](https://scholar.google.com/citations?&user=LIjnUGgAAAAJ)

| [![](https://img.shields.io/github/stars/huggingface/distil-whisper?style=social)](https://github.com/huggingface/distil-whisper)

  • [arxiv](https://arxiv.org/abs/2311.00430), [arxiv](https://arxiv.org/abs/2211.17192)

  • [git](https://github.com/huggingface/safetensors), [git](https://github.com/Dao-AILab/flash-attention)

  • [hf](https://huggingface.co/collections/distil-whisper/training-datasets-6538d05c69721489d1db1e49), [hf](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForSpeechSeq2Seq), [hf](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoProcessor), [hf](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline), [hf](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/whisper#transformers.WhisperForConditionalGeneration.forward.example), [hf](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationMixin.generate.assistant_model), [hf](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2), [hf](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#bettertransformer)

  • [medium](https://medium.com/prompt-engineering/transcribing-audio-with-python-and-distil-whisper-9b4fec3d53bf)

  • [reddit](https://www.reddit.com/r/MachineLearning/comments/17vqtcb/p_distilwhisper_a_distilled_variant_of_whisper/)

  • [yt](https://youtu.be/46Q6fbdUCbg), [yt](https://youtu.be/SZtHEKyvuug), [yt](https://www.youtube.com/live/kI1pA1CADxM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/Distil_Whisper_Benchmark.ipynb) | 08.11.2023 |
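The Distil-Whisper checkpoints load through the standard transformers ASR pipeline; a minimal sketch, with the audio file path as a placeholder.

```python
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v2",
    torch_dtype=torch.float16,
    device="cuda:0",
)
# Chunked long-form transcription of a local file (path is a placeholder).
print(asr("sample.wav", chunk_length_s=15)["text"])
```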
| AnimateDiff | Practical framework to animate most of the existing personalized text-to-image models once and for all, saving efforts in model-specific tuning |

  • [Yuwei Guo](https://github.com/guoyww)
  • [Ceyuan Yang](https://github.com/limbo0000)
  • [Anyi Rao](https://anyirao.com/)
  • [Yaohui Wang](https://wyhsirius.github.io/)
  • others
  • [Yu Qiao](https://mmlab.siat.ac.cn/yuqiao/)
  • [Dahua Lin](http://dahua.site/)
  • [Bo Dai](https://daibo.info/)

| [![](https://img.shields.io/github/stars/guoyww/animatediff?style=social)](https://github.com/guoyww/animatediff/)

  • [arxiv](https://arxiv.org/abs/2307.04725)

  • [git](https://github.com/continue-revolution/sd-webui-animatediff), [git](https://github.com/talesofai/AnimateDiff)

  • [project](https://animatediff.github.io/)

  • [yt](https://youtu.be/rdnOhM8L8nE), [yt](https://youtu.be/LcHAZaJjA5k), [yt](https://www.youtube.com/live/66JgpI3a650?feature=share), [yt](https://youtu.be/-wki7IrQ_sU)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/AnimateDiff-colab/blob/main/AnimateDiff_colab.ipynb) | 30.10.2023 |
| Intel® Neural Compressor | Aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as TensorFlow, PyTorch, ONNX Runtime, and MXNet, as well as Intel extensions such as Intel Extension for TensorFlow and Intel Extension for PyTorch | [intel](https://www.intel.com/content/www/us/en/developer/topic-technology/open/overview.html) | [![](https://img.shields.io/github/stars/intel/neural-compressor?style=social)](https://github.com/intel/neural-compressor)

  • [arxiv](https://arxiv.org/abs/2309.14592), [arxiv](https://arxiv.org/abs/2309.05516), [arxiv](https://arxiv.org/abs/2211.07715)

  • [discord](https://discord.com/invite/Wxk3J3ZJkU)

  • [docs](https://github.com/intel/neural-compressor)

  • [git](https://github.com/intel/intel-extension-for-tensorflow), [git](https://github.com/intel/intel-extension-for-pytorch), [git](https://github.com/Lightning-AI/pytorch-lightning/blob/master/docs/source-pytorch/advanced/post_training_quantization.rst)

  • [medium](https://medium.com/pytorch/pytorch-inference-acceleration-with-intel-neural-compressor-842ef4210d7d), [medium](https://medium.com/intel-analytics-software/efficient-text-classification-with-intel-neural-compressor-4853296deeac)

  • [neurips](https://neurips.cc/virtual/2022/59433)

  • [pt](https://pytorch.org/tutorials/recipes/intel_neural_compressor_for_pytorch.html)

  • [yt](https://youtu.be/SswQbIHUrvQ), [yt](https://youtu.be/5xHKe4wWLes), [yt](https://youtu.be/H7Gg-EmGpAI), [yt](https://youtu.be/ie3w_j0Ntsk), [yt](https://youtu.be/m2LokuUdeVg), [yt](https://youtu.be/38wrDHEQZuM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/intel/neural-compressor/blob/master/examples/notebook/onnxruntime/Quick_Started_Notebook_of_INC_for_ONNXRuntime.ipynb) | 27.10.2023 |
| Bark | Transformer-based text-to-audio model | [suno](https://www.suno.ai/) | [![](https://img.shields.io/github/stars/suno-ai/bark?style=social)](https://github.com/suno-ai/bark)

  • [arxiv](https://arxiv.org/abs/2209.03143), [arxiv](https://arxiv.org/abs/2301.02111)

  • [discord](https://discord.gg/J2B2vsjKuE)

  • [examples](https://suno-ai.notion.site/Bark-Examples-5edae8b02a604b54a42244ba45ebc2e2)

  • [git](https://github.com/facebookresearch/encodec), [git](https://github.com/karpathy/nanoGPT)

  • [hf](https://huggingface.co/docs/huggingface_hub/package_reference/environment_variables#hfhome)

  • [twitter](https://twitter.com/OnusFM)

  • [yt](https://youtu.be/84LzaXAo6vE), [yt](https://youtu.be/rU5Do9yHbwM), [yt](https://youtu.be/w41-MUfxIWo), [yt](https://youtu.be/_m-MxEpHUQY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1dWWkZzvu7L9Bunq9zvD-W02RFUXoW-Pd) | 25.10.2023 |
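A minimal Bark sketch following the repo README:

```python
from scipy.io.wavfile import write as write_wav
from bark import SAMPLE_RATE, generate_audio, preload_models

preload_models()  # download and cache the text/coarse/fine/codec models
audio_array = generate_audio("Hello, my name is Suno. [laughs] And I like pizza.")
write_wav("bark_out.wav", SAMPLE_RATE, audio_array)
```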
| Mistral Transformer | The most powerful language model for its size to date |

  • [Albert Jiang](https://albertqjiang.github.io/)
  • [Alexandre Sablayrolles](https://github.com/alexandresablayrolles)
  • [Arthur Mensch](https://github.com/arthurmensch)
  • [Chris Bamford](https://griddly.ai/)
  • others
  • [Devendra Chaplot](https://devendrachaplot.github.io/)
  • [Diego Casas](https://github.com/diegolascasas)
  • [Florian Bressand](https://www.linkedin.com/in/florianbressand)
  • [Gianna Lengyel](https://www.linkedin.com/in/gianna-maria-lengyel)
  • [Guillaume Lample](https://github.com/glample)
  • [Lucile Saulnier](https://scholar.google.com/citations?user=Baj_9IsAAAAJ)
  • [Lélio Renard Lavaud](https://github.com/lerela)
  • [Marie-Anne Lachaux](https://scholar.google.com/citations?user=dSEMIJ8AAAAJ)
  • [Pierre Stock](https://github.com/pierrestock)
  • [Teven Scao](https://scholar.google.com/citations?user=ik0_vxsAAAAJ)
  • [Thibaut Lavril](https://scholar.google.com/citations?user=9nPunCEAAAAJ)
  • [Thomas Wang](https://github.com/thomasw21)
  • [Timothée Lacroix](https://scholar.google.com/citations?&user=tZGS6dIAAAAJ)
  • [William Sayed](https://www.linkedin.com/in/william-el-sayed-48672312a)

| [![](https://img.shields.io/github/stars/mistralai/mistral-src?style=social)](https://github.com/mistralai/mistral-src)

  • [arxiv](https://arxiv.org/abs/2310.06825), [arxiv](https://arxiv.org/abs/1904.10509), [arxiv](https://arxiv.org/abs/2004.05150), [arxiv](https://arxiv.org/abs/2306.05685)

  • [blog post](https://mistral.ai/news/announcing-mistral-7b/)

  • [discord](https://discord.com/invite/mistralai)

  • [docs](https://docs.mistral.ai/)

  • [git](https://github.com/vllm-project/vllm), [git](https://github.com/lm-sys/FastChat), [git](https://github.com/ggerganov/ggml), [git](https://github.com/Dao-AILab/flash-attention), [git](https://github.com/skypilot-org/skypilot)

  • [hf](https://huggingface.co/mistralai)

  • [medium](https://towardsdatascience.com/mistral-7b-recipes-for-fine-tuning-and-quantization-on-your-computer-631401583f77)

  • [yt](https://youtu.be/g7kVVBlCGo0), [yt](https://youtu.be/ASpageg8nPw), [yt](https://youtu.be/OMIuP6lQXe4), [yt](https://youtu.be/jnPZApwtE4I), [yt](https://youtu.be/3SdopNwQJ-c)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/camenduru/Mistral-colab/blob/main/Mistral_colab.ipynb) | 09.10.2023 |
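A minimal generation sketch via Hugging Face transformers rather than the reference mistral-src code; it assumes enough GPU memory and the `accelerate` package for `device_map="auto"`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

inputs = tokenizer("Sliding-window attention lets the model",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```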
| Fooocus | Image generating software | [Lvmin Zhang](https://lllyasviel.github.io/Style2PaintsResearch/lvmin) | [![](https://img.shields.io/github/stars/lllyasviel/Fooocus?style=social)](https://github.com/lllyasviel/Fooocus)

  • [arxiv](https://arxiv.org/abs/2210.00939)

  • [yt](https://youtu.be/8krykSwOz3E), [yt](https://youtu.be/558W8rfnP-Q), [yt](https://youtu.be/TJkrzuPdmvE), [yt](https://youtu.be/NfNwmKM3sxc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/lllyasviel/Fooocus/blob/main/colab.ipynb) | 03.10.2023 |
| Actor-Critic | This tutorial demonstrates how to implement the Actor-Critic method using TensorFlow to train an agent on the OpenAI Gym CartPole-v0 environment |

  • [Vijay Konda](https://scholar.google.com/citations?user=bi-WXQIAAAAJ)
  • [John Tsitsiklis](https://web.mit.edu/jnt/www/home.html)

|

  • [gym](https://gym.openai.com/)

  • [neurips](https://papers.nips.cc/paper/1786-actor-critic-algorithms), [neurips](https://papers.nips.cc/paper/1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation.pdf)

  • [tf](https://www.tensorflow.org/tutorials/reinforcement_learning/actor_critic)

  • [wiki](https://en.wikipedia.org/wiki/Temporal_difference_learning)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/reinforcement_learning/actor_critic.ipynb) | 28.09.2023 |
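The tutorial's network reduces to a shared trunk with two heads; a minimal sketch:

```python
import tensorflow as tf
from tensorflow.keras import layers

class ActorCritic(tf.keras.Model):
    """Shared trunk with a policy (actor) head and a value (critic) head."""

    def __init__(self, num_actions: int, num_hidden: int = 128):
        super().__init__()
        self.common = layers.Dense(num_hidden, activation="relu")
        self.actor = layers.Dense(num_actions)  # action logits
        self.critic = layers.Dense(1)           # state-value estimate

    def call(self, state):
        x = self.common(state)
        return self.actor(x), self.critic(x)

logits, value = ActorCritic(num_actions=2)(tf.zeros((1, 4)))  # CartPole shapes
```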
| MMagic | AIGC toolbox for professional AI researchers and machine learning engineers to explore image and video processing, editing and generation | [OpenMMLab](https://openmmlab.com/) | [![](https://img.shields.io/github/stars/open-mmlab/mmagic?style=social)](https://github.com/open-mmlab/mmagic)

  • [discord](https://discord.gg/raweFPmdzG)

  • [docs](https://mmagic.readthedocs.io/en/latest/)

  • [git](https://github.com/open-mmlab/mmgeneration), [git](https://github.com/open-mmlab/mmengine/blob/main/mmengine/model/wrappers/seperate_distributed.py), [git](https://github.com/open-mmlab/mmcv), [git](https://github.com/open-mmlab/mim)

  • [medium](https://openmmlab.medium.com/)

  • [twitter](https://twitter.com/OpenMMLab)

  • [yt](https://www.youtube.com/openmmlab)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/open-mmlab/mmagic/blob/main/demo/mmagic_inference_tutorial.ipynb) | 11.09.2023 |
| SeqIO | Library for processing sequential data to be fed into downstream sequence models |

  • [Adam Roberts](https://github.com/adarob)
  • [Hyung Won Chung](https://github.com/hwchung27)
  • [Anselm Levskaya](https://anselmlevskaya.com/)
  • [Gaurav Mishra](https://github.com/gauravmishra)
  • others
  • [James Bradbury](https://github.com/jekbradbury)
  • [Daniel Andor](https://github.com/andorardo)
  • [Sharan Narang](https://github.com/sharannarang)
  • [Brian Lester](https://blester125.com/)
  • [Colin Gaffney](https://github.com/cpgaffney1)
  • [Afroz Mohiuddin](https://github.com/afrozenator)
  • [Curtis Hawthorne](https://github.com/cghawthorne)
  • [Aitor Lewkowycz](https://scholar.google.com/citations?user=Yum1ah0AAAAJ)
  • [Alex Salcianu](https://scholar.google.com/citations?user=HSrT1wsAAAAJ)
  • [Marc van Zee](https://github.com/marcvanzee)
  • [Jacob Austin](https://jacobaustin123.github.io/)
  • [Sebastian Goodman](https://github.com/0x0539)
  • [Livio Baldini Soares](https://liviosoares.github.io/)
  • [Haitang Hu](https://hthu.github.io/)
  • [Sasha Tsvyashchenko](https://endl.ch/)
  • [Aakanksha Chowdhery](http://www.achowdhery.com/)
  • [Jasmijn Bastings](https://jasmijn.ninja/)
  • [Jannis Bulian](http://bulian.org/)
  • [Xavier Garcia](https://scholar.google.com/citations?user=Y2Hio6MAAAAJ)
  • [Jianmo Ni](https://nijianmo.github.io/)
  • [Andrew Chen](https://github.com/andrewluchen)
  • [Kathleen Kenealy](https://github.com/kkenealy)
  • [Jonathan Clark](http://www.cs.cmu.edu/~jhclark/)
  • [Stephan Lee](https://github.com/stephanwlee)
  • [Dan Garrette](https://www.dhgarrette.com/)
  • [James Lee-Thorp](https://scholar.google.com/citations?user=qsPv098AAAAJ)
  • [Colin Raffel](https://www.colinraffel.com/)
  • [Noam Shazeer](https://github.com/nshazeer)
  • [Marvin Ritter](https://github.com/Marvin182)
  • [Maarten Bosma](https://scholar.google.com/citations?user=wkeFQPgAAAAJ)
  • [Alexandre Passos](https://www.ic.unicamp.br/~tachard/)
  • [Jeremy Maitin-Shepard](https://research.google/people/JeremyMaitinShepard/)
  • [Noah Fiedel](https://scholar.google.com/citations?user=XWpV9DsAAAAJ)
  • [Mark Omernick](https://github.com/markomernick)
  • [Brennan Saeta](https://github.com/saeta)
  • [Ryan Sepassi](https://ryansepassi.com/)
  • [Alexander Spiridonov](https://research.google/people/AlexanderSpiridonov/)
  • [Joshua Newlan](https://github.com/joshnewlan)
  • [Andrea Gesmundo](https://github.com/agesmundo)

| [![](https://img.shields.io/github/stars/google/seqio?style=social)](https://github.com/google/seqio)

  • [arxiv](https://arxiv.org/abs/2203.17189), [arxiv](https://arxiv.org/abs/1910.10683), [arxiv](https://arxiv.org/abs/1810.04805), [arxiv](https://arxiv.org/abs/2002.08910)

  • [docs](https://seqio.readthedocs.io/en/latest/)

  • [tf](https://www.tensorflow.org/api_docs/python/tf/data/Dataset), [tf](https://www.tensorflow.org/datasets), [tf](https://www.tensorflow.org/tutorials/load_data/tfrecord), [tf](https://www.tensorflow.org/guide/function#autograph_transformations), [tf](https://www.tensorflow.org/api_docs/python/tf/py_function), [tf](https://www.tensorflow.org/guide/random_numbers#stateless_rngs)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/seqio/blob/main/seqio/notebooks/Evaluation.ipynb) | 08.09.2023 |
| MMAction2 | An open-source toolbox for video understanding based on PyTorch | [MMAction2 Contributors](https://openmmlab.com/aboutus) | [![](https://img.shields.io/github/stars/open-mmlab/mmaction2?style=social)](https://github.com/open-mmlab/mmaction2)

  • [arxiv](https://arxiv.org/abs/2106.13230), [arxiv](https://arxiv.org/abs/2107.10161), [arxiv](https://arxiv.org/abs/2103.17263), [arxiv](https://arxiv.org/abs/2104.13586), [arxiv](https://arxiv.org/abs/2102.05095), [arxiv](https://arxiv.org/abs/2003.13042)

  • [data](https://sdolivia.github.io/FineGym/), [data](http://www.svcl.ucsd.edu/projects/resound/dataset.html), [data](https://research.google.com/ava/index.html), [data](https://www.deepmind.com/open-source/kinetics)

  • [docs](https://mmaction2.readthedocs.io/)

  • [git](https://github.com/open-mmlab/mmcv), [git](https://github.com/SwinTransformer/Video-Swin-Transformer), [git](https://github.com/Cogito2012/DEAR), [git](https://github.com/xvjiarui/VFS), [git](https://github.com/holistic-video-understanding/HVU-Dataset)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/open-mmlab/mmaction2/blob/master/demo/mmaction2_tutorial.ipynb) | 06.09.2023 |
| Ray | Unified framework for scaling AI and Python applications |

  • [Philipp Moritz](https://github.com/pcmoritz)
  • [Robert Nishihara](https://github.com/robertnishihara)
  • [Stephanie Wang](https://stephanie-wang.github.io/)
  • [Alexey Tumanov](https://faculty.cc.gatech.edu/~atumanov/)
  • others
  • [Richard Liaw](https://github.com/richardliaw)
  • [Eric Liang](https://github.com/ericl)
  • [Melih Elibol](https://research.nvidia.com/person/melih-elibol)
  • [Zongheng Yang](https://zongheng.me/)
  • [William Paul](https://github.com/Wapaul1)
  • [Michael Jordan](https://people.eecs.berkeley.edu/~jordan/)
  • [Ion Stoica](https://people.eecs.berkeley.edu/~istoica/)

| [![](https://img.shields.io/github/stars/ray-project/ray?style=social)](https://github.com/ray-project/ray)

  • [arxiv](https://arxiv.org/abs/1712.05889), [arxiv](https://arxiv.org/abs/2203.05072), [arxiv](https://arxiv.org/abs/1712.09381), [arxiv](https://arxiv.org/abs/1807.05118), [arxiv](https://arxiv.org/abs/1703.03924)

  • [docs](https://docs.ray.io/en/latest/index.html)

  • [website](https://www.ray.io/)

  • [yt](https://youtu.be/LmROEotKhJA), [yt](https://youtu.be/uzt-CwohQC8), [yt](https://youtu.be/XME90SGL6Vs)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ray-project/ray/blob/master/doc/source/tune/examples/optuna_example.ipynb) | 06.09.2023 |
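A minimal Ray Tune sketch using the `Tuner` API; a function trainable may return its final metrics as a dict.

```python
from ray import tune

def objective(config):
    # Toy quadratic: the best score is at x = 3.
    return {"score": (config["x"] - 3) ** 2}

tuner = tune.Tuner(
    objective,
    param_space={"x": tune.uniform(-10, 10)},
    tune_config=tune.TuneConfig(metric="score", mode="min", num_samples=20),
)
results = tuner.fit()
print(results.get_best_result().config)
```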
| Home Robot | Low-level API for controlling various home robots | [Chris Paxton](https://cpaxton.github.io/) | [![](https://img.shields.io/github/stars/facebookresearch/home-robot?style=social)](https://github.com/facebookresearch/home-robot)
  • [git](https://github.com/cpaxton/contact_graspnet/tree/cpaxton/devel), [git](https://github.com/facebookresearch/fairo), [git](https://github.com/hello-robot/stretch_body), [git](https://github.com/hello-robot/stretch_firmware), [git](https://github.com/hello-robot/stretch_ros), [git](https://github.com/hello-robot/stretch_ros2), [git](https://github.com/hello-robot/stretch_web_interface), [git](https://github.com/RoboStack/ros-noetic), [git](https://github.com/codekansas/stretch-robot)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/home-robot/blob/master/src/home_robot_sim/notebooks/velocity_control_sim.ipynb) | 30.08.2023 |
| Neural Tangents | Library designed to enable research into infinite-width neural networks |

  • [Roman Novak](https://github.com/romanngg)
  • [Lechao Xiao](https://sites.google.com/site/lechaoxiao/)
  • [Jiri Hron](https://sites.google.com/view/jirihron)
  • [Jaehoon Lee](https://jaehlee.github.io/)
  • others
  • [Alexander Alemi](https://www.alexalemi.com/)
  • [Jascha Sohl-Dickstein](https://sohldickstein.com/)
  • [Samuel Schoenholz](https://scholar.google.com/citations?user=mk-zQBsAAAAJ)

| [![](https://img.shields.io/github/stars/google/neural-tangents?style=social)](https://github.com/google/neural-tangents)

  • [ICLR](https://iclr.cc/virtual_2020/poster_SklD9yrFPS.html)

  • [arxiv](https://arxiv.org/abs/1912.02803), [arxiv](https://arxiv.org/abs/1605.07146), [arxiv](https://arxiv.org/abs/1902.06720), [arxiv](https://arxiv.org/abs/1806.07572), [arxiv](https://arxiv.org/abs/2001.07301), [arxiv](https://arxiv.org/abs/2003.02237)

  • [docs](https://neural-tangents.readthedocs.io/en/latest/)

  • [medium](https://towardsdatascience.com/infinitely-wide-neural-networks-neural-tangents-explained-d6c6d896fcbf)

  • [pypi](https://pypi.org/project/neural-tangents/)

  • [wiki](https://en.wikipedia.org/wiki/Neural_network_Gaussian_process), [wiki](https://en.wikipedia.org/wiki/Neural_tangent_kernel)

  • [yt](https://youtu.be/VUX2bsrYag8)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/neural-tangents/blob/main/notebooks/neural_tangents_cookbook.ipynb) | 29.08.2023 |
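A minimal Neural Tangents sketch: `stax` layers return a `kernel_fn` that evaluates the closed-form NNGP and NTK kernels of the infinite-width network.

```python
import jax.numpy as jnp
from neural_tangents import stax

init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(), stax.Dense(1)
)

x1, x2 = jnp.ones((3, 8)), jnp.ones((4, 8))
kernel = kernel_fn(x1, x2, ("nngp", "ntk"))  # infinite-width kernels
print(kernel.nngp.shape, kernel.ntk.shape)   # (3, 4) (3, 4)
```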
| Stable Diffusion 2 | New stable diffusion model at 768x768 resolution. Same number of parameters in the U-Net as 1.5, but uses OpenCLIP-ViT/H as the text encoder and is trained from scratch |

  • [Robin Rombach](https://github.com/rromb)
  • [Andreas Blattmann](https://github.com/ablattmann)
  • [Dominik Lorenz](https://github.com/qp-qp)
  • [Patrick Esser](https://github.com/pesser)
  • others
  • [Björn Ommer](https://ommer-lab.com/people/ommer/)
  • [qunash](https://github.com/qunash)

| [![](https://img.shields.io/github/stars/Stability-AI/stablediffusion?style=social)](https://github.com/Stability-AI/stablediffusion)

  • [arxiv](https://arxiv.org/abs/2112.10752), [arxiv](https://arxiv.org/abs/2202.00512), [arxiv](https://arxiv.org/abs/2010.02502), [arxiv](https://arxiv.org/abs/2108.01073), [arxiv](https://arxiv.org/abs/2202.09778), [arxiv](https://arxiv.org/abs/2206.00927)

  • [git](https://github.com/qunash/stable-diffusion-2-gui), [git](https://github.com/isl-org/MiDaS), [git](https://github.com/lucidrains/denoising-diffusion-pytorch), [git](https://github.com/runwayml/stable-diffusion/blob/main/scripts/inpaint_st.py), [git](https://github.com/crowsonkb/k-diffusion)

  • [hf](https://huggingface.co/stabilityai/stable-diffusion-2-1), [hf](https://huggingface.co/stabilityai/stable-diffusion-2-1-base), [hf](https://huggingface.co/stabilityai/stable-diffusion-2-depth), [hf](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting)

  • [yt](https://youtu.be/HytucGhwTRs)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/qunash/stable-diffusion-2-gui/blob/main/stable_diffusion_2_0.ipynb) | 26.08.2023 |
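A minimal Stable Diffusion 2 sketch via the diffusers pipeline rather than the Stability reference scripts:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```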
| DALL·E Mini | Generate images from a text prompt |

  • [Boris Dayma](https://github.com/borisdayma)
  • [Suraj Patil](https://github.com/patil-suraj)
  • [Pedro Cuenca](https://github.com/pcuenca)
  • [Khalid Saifullah](https://khalidsaifullaah.github.io/)
  • others
  • [Tanishq Abraham](https://github.com/tmabraham)
  • [Phúc H. Lê Khắc](https://lkhphuc.com/)
  • [Luke Melas](https://lukemelas.github.io/)
  • [Ritobrata Ghosh](https://ghosh-r.github.io/)

| [![](https://img.shields.io/github/stars/borisdayma/dalle-mini?style=social)](https://github.com/borisdayma/dalle-mini)

  • [arxiv](https://arxiv.org/abs/2102.08981), [arxiv](https://arxiv.org/abs/2012.09841), [arxiv](https://arxiv.org/abs/1910.13461), [arxiv](https://arxiv.org/abs/2103.00020), [arxiv](https://arxiv.org/abs/1807.04015)

  • [blog post](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini--Vmlldzo4NjIxODA)

  • [data](https://aclanthology.org/P18-1238/)

  • [git](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects), [git](https://github.com/openai/CLIP/blob/main/data/yfcc100m.md)

  • [hf](https://huggingface.co/spaces/flax-community/dalle-mini)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/borisdayma/dalle-mini/blob/main/tools/inference/inference_pipeline.ipynb) | 22.08.2023 |
| Classify text with BERT | This tutorial contains complete code to fine-tune BERT to perform sentiment analysis on a dataset of plain-text IMDB movie reviews | [Anirudh Dubey](https://github.com/anirudh161) | [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.18653/v1/N19-1423)](https://doi.org/10.18653/v1/N19-1423)

  • [arxiv](https://arxiv.org/abs/1810.04805), [arxiv](https://arxiv.org/abs/1711.05101)

  • [data](https://ai.stanford.edu/~amaas/data/sentiment/)

  • [pwc](https://paperswithcode.com/task/text-classification)

  • [tf](https://tfhub.dev/google/collections/bert/1), [tf](https://www.tensorflow.org/text/tutorials/classify_text_with_bert)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/classify_text_with_bert.ipynb) | 08.08.2023 |
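A minimal sketch of the tutorial's model shape; the TF Hub handles are assumed from the tutorial and may since have newer versions.

```python
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401 -- registers ops the preprocessor needs

preprocess = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1")

text_input = tf.keras.layers.Input(shape=(), dtype=tf.string)
net = encoder(preprocess(text_input))["pooled_output"]
net = tf.keras.layers.Dropout(0.1)(net)
output = tf.keras.layers.Dense(1)(net)  # single logit for IMDB sentiment
model = tf.keras.Model(text_input, output)
```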
| Kandinsky 2.1 | Uses CLIP as its text and image encoder, with a diffusion image prior mapping between the latent spaces of the CLIP modalities |

  • [Arseniy Shakhmatov](https://github.com/cene555)
  • [Anton Razzhigaev](https://github.com/razzant)
  • [Aleksandr Nikolich](https://github.com/AlexWortega)
  • [Vladimir Arkhipkin](https://github.com/oriBetelgeuse)
  • others
  • [Igor Pavlov](https://github.com/boomb0om)
  • [Andrey Kuznetsov](https://github.com/kuznetsoffandrey)
  • [Denis Dimitrov](https://github.com/denndimitrov)

| [![](https://img.shields.io/github/stars/ai-forever/Kandinsky-2?style=social)](https://github.com/ai-forever/Kandinsky-2)

  • [blog post](https://habr.com/ru/companies/sberbank/articles/725282/)

  • [demo](https://editor.fusionbrain.ai/)

  • [hf](https://huggingface.co/sberbank-ai/Kandinsky_2.1)

  • [yt](https://youtu.be/LZvp4SWcCao), [yt](https://youtu.be/IoPhRE37XSU), [yt](https://youtu.be/dYt9xJ7dnpU), [yt](https://youtu.be/rN2J5TL2RZ0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1xSbu-b-EwYd6GdaFPRVgvXBX_mciZ41e) | 07.08.2023 |
| SoftVC VITS | Singing Voice Conversion | [svc develop team](https://github.com/svc-develop-team) | [![](https://img.shields.io/github/stars/svc-develop-team/so-vits-svc?style=social)](https://github.com/svc-develop-team/so-vits-svc)

  • [git](https://github.com/NaruseMioShirakana/MoeVoiceStudio), [git](https://github.com/openvpi/DiffSinger/tree/refactor/modules/nsf_hifigan), [git](https://github.com/auspicious3000/contentvec), [git](https://github.com/yxlllc/DDSP-SVC), [git](https://github.com/flutydeer/audio-slicer), [git](https://github.com/openvpi/audio-slicer)

  • [hf](https://huggingface.co/NaruseMioShirakana/MoeSS-SUBModel/tree/main)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/svc-develop-team/so-vits-svc/blob/4.1-Stable/sovits4_for_colab.ipynb) | 31.07.2023 |
| threestudio | Unified framework for 3D content creation from text prompts, single images, and few-shot images, by lifting 2D text-to-image generation models |

  • [Yuan-Chen Guo](https://github.com/bennyguo)
  • [Ying-Tian Liu](https://github.com/thuliu-yt16)
  • [Ruizhi Shao](https://github.com/DSaurus)
  • [Christian Laforte](https://github.com/claforte)
  • others
  • [Vikram Voleti](https://github.com/voletiv)
  • [Guan Luo](https://github.com/logan0601)
  • [Chia-Hao Chen](https://scholar.google.com/citations?user=X0zirvMAAAAJ)
  • [Zi-Xin Zou](https://github.com/zouzx)
  • [Chen Wang](https://cwchenwang.github.io/)
  • [Yanpei Cao](https://yanpei.me/)
  • [Song-Hai Zhang](https://scholar.google.com/citations?user=AWtV-EQAAAAJ)

| [![](https://img.shields.io/github/stars/threestudio-project/threestudio?style=social)](https://github.com/threestudio-project/threestudio)

  • [arxiv](https://arxiv.org/abs/2303.15413), [arxiv](https://arxiv.org/abs/2305.16213), [arxiv](https://arxiv.org/abs/2211.10440)

  • [discord](https://discord.gg/ejer2MAB8N)

  • [git](https://github.com/DSaurus/Tensor4D), [git](https://github.com/eladrich/latent-nerf), [git](https://github.com/Gorilla-Lab-SCUT/Fantasia3D), [git](https://github.com/cvlab-columbia/zero123), [git](https://github.com/guochengqian/Magic123), [git](https://github.com/ayaanzhaque/instruct-nerf2nerf), [git](https://github.com/KAIR-BAIR/nerfacc), [git](https://github.com/Lightning-AI/lightning), [git](https://github.com/ashawkey/fantasia3d.unofficial)

  • [hf](https://huggingface.co/DeepFloyd/IF-I-XL-v1.0), [hf](https://huggingface.co/docs/huggingface_hub/v0.14.1/guides/download#download-an-entire-repository)

  • [reddit](https://www.reddit.com/r/StableDiffusion/comments/1635cb0/threestudio_a_unified_framework_for_3d_content/)

  • [yt](https://youtu.be/gT8Xvx5b6IE)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/threestudio-project/threestudio/blob/main/threestudio.ipynb) | 28.07.2023 |
| Image captioning | Given an image, the goal is to generate a caption |

  • [Kelvin Xu](https://kelvinxu.github.io/)
  • [Jimmy Ba](https://jimmylba.github.io/)
  • [Ryan Kiros](https://github.com/ryankiros)
  • [Kyunghyun Cho](https://kyunghyuncho.me/)
  • others
  • [Aaron Courville](https://mila.quebec/en/directory/aaron-courville)
  • [Ruslan Salakhutdinov](https://www.cs.cmu.edu/~rsalakhu/)
  • [Richard Zemel](https://www.cs.columbia.edu/~zemel/)
  • [Yoshua Bengio](https://yoshuabengio.org/)

|

  • [arxiv](https://arxiv.org/abs/1502.03044)

  • [data](https://cocodataset.org/#home)

  • [medium](https://medium.com/@labbikarmacharya/paper-review-show-attend-and-tell-neural-image-caption-generation-with-visual-attention-03928d8fe17b)

  • [tf](https://www.tensorflow.org/text/tutorials/image_captioning)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/text/image_captioning.ipynb) | 25.07.2023 |
| Word2Vec | Word2Vec is not a single algorithm; rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets | [Google](https://www.tensorflow.org/) |

  • [arxiv](https://arxiv.org/abs/1301.3781)

  • [link](https://web.stanford.edu/class/cs224n/readings/cs224n-2019-notes01-wordvecs1.pdf)

  • [neurips](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf)

  • [projector](http://projector.tensorflow.org/)

  • [pwc](https://paperswithcode.com/method/cbow-word2vec), [pwc](https://paperswithcode.com/method/skip-gram-word2vec)

  • [wiki](https://en.wikipedia.org/wiki/Zipf%27s_law)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/text/word2vec.ipynb) | 25.07.2023 |
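The skip-gram half of the family in one sketch: positive (target, context) pairs drawn from a sliding window, as in the tutorial.

```python
import tensorflow as tf

sentence = "the wide road shimmered in the hot sun"
tokens = sentence.split()
vocab = {w: i + 1 for i, w in enumerate(sorted(set(tokens)))}  # 0 is padding
sequence = [vocab[w] for w in tokens]

# Positive (target, context) pairs within a +/-2 word window.
pairs, labels = tf.keras.preprocessing.sequence.skipgrams(
    sequence, vocabulary_size=len(vocab) + 1, window_size=2, negative_samples=0
)
print(pairs[:3])
```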
| Word embeddings | This tutorial contains an introduction to word embeddings | [Billy Lamberta](https://github.com/lamberta) |

  • [data](http://ai.stanford.edu/~amaas/data/sentiment/)

  • [projector](http://projector.tensorflow.org/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/text/blob/master/docs/guide/word_embeddings.ipynb) | 25.07.2023 |
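The core object of the embeddings tutorial in one sketch: an `Embedding` layer is a trainable lookup table from integer token ids to dense vectors.

```python
import tensorflow as tf

embedding = tf.keras.layers.Embedding(input_dim=1000, output_dim=5)
vectors = embedding(tf.constant([[0, 1, 2], [3, 4, 5]]))
print(vectors.shape)  # (2, 3, 5): batch, sequence length, embedding dim
```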
| Contextualized Topic Models | Family of topic models that use pre-trained representations of language to support topic modeling |

  • [Federico Bianchi](https://federicobianchi.io/)
  • [Silvia Terragni](https://silviatti.github.io/)
  • [Dirk Hovy](http://dirkhovy.com/)
  • [Debora Nozza](https://www.deboranozza.com/)
  • [Elisabetta Fersini](https://www.unimib.it/elisabetta-fersini)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.18653/v1/2021.eacl-main.143)](https://doi.org/10.18653/v1/2021.eacl-main.143) [![](https://img.shields.io/github/stars/MilaNLProc/contextualized-topic-models?style=social)](https://github.com/MilaNLProc/contextualized-topic-models)

  • [arxiv](https://arxiv.org/abs/2004.03974)

  • [docs](https://contextualized-topic-models.readthedocs.io/en/latest/)

  • [git](https://github.com/estebandito22/PyTorchAVITM), [git](https://github.com/dlukes/rbo)

  • [medium](https://medium.com/towards-data-science/contextualized-topic-modeling-with-python-eacl2021-eacf6dfa576)

  • [pypi](https://pypi.python.org/pypi/contextualized_topic_models)

  • [yt](https://youtu.be/n1_G8K07KoM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1fXJjr_rwqvpp1IdNQ4dxqN4Dp88cxO97) | 22.07.2023 |
| Tortoise | A multi-voice TTS system trained with an emphasis on quality | [James Betker](https://nonint.com/) | [![](https://img.shields.io/github/stars/neonbjb/tortoise-tts?style=social)](https://github.com/neonbjb/tortoise-tts)

  • [arxiv](https://arxiv.org/abs/2102.12092), [arxiv](https://arxiv.org/abs/2102.09672), [arxiv](https://arxiv.org/abs/2106.07889)

  • [examples](https://nonint.com/static/tortoise_v2_examples.html)

  • [git](https://github.com/neonbjb/DL-Art-School)

  • [hf](https://huggingface.co/patrickvonplaten), [hf](https://huggingface.co/spaces/osanseviero/tortoisse-tts)

  • [yt](https://youtu.be/J3-jfS29RF4)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/neonbjb/tortoise-tts/blob/main/tortoise_tts.ipynb) | 15.07.2023 |
| Petals | Run 100B+ language models at home, BitTorrent-style | [BigScience](https://bigscience.huggingface.co/) | [![](https://img.shields.io/github/stars/bigscience-workshop/petals?style=social)](https://github.com/bigscience-workshop/petals)

  • [arxiv](https://arxiv.org/abs/2209.01188), [arxiv](https://arxiv.org/abs/2108.07258)

  • [git](https://github.com/borzunov/chat.petals.ml), [git](https://github.com/timDettmers/bitsandbytes)

  • [hf](https://huggingface.co/bigscience/bloom)

  • [project](https://petals.ml/)

  • [wiki](https://en.wikipedia.org/wiki/BitTorrent)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1Ervk6HPNS6AYVr3xVdQnY5a-TjjmLCdQ) | 05.07.2023 |
| Epistemic Neural Networks | A library for neural networks that know what they don't know |

  • [Ian Osband](http://iosband.github.io/)
  • [Zheng Wen](http://zheng-wen.com/)
  • [Seyed Mohammad Asghari](https://github.com/mohammadasghari)
  • [Vikranth Dwaracherla](https://github.com/dvikranth)
  • others
  • [Morteza Ibrahimi](https://github.com/mibrahimi)
  • [Xiuyuan Lu](https://scholar.google.com/citations?user=SPL_2lIAAAAJ)
  • [Benjamin Van Roy](https://web.stanford.edu/~bvr/)

| [![](https://img.shields.io/github/stars/deepmind/enn?style=social)](https://github.com/deepmind/enn)

  • [arxiv](https://arxiv.org/abs/2107.08924)

  • [medium](https://medium.com/syncedreview/deepminds-epistemic-neural-networks-open-new-avenues-for-uncertainty-modelling-in-large-and-fa83ab00aba3)

  • [yt](https://youtu.be/j8an0dKcX4A)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/enn/blob/master/enn/colabs/enn_demo.ipynb) | 27.06.2023 |
| DeepFloyd IF | State-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding |

  • [Alex Shonenkov](https://linktr.ee/shonenkovAI)
  • [Misha Konstantinov](https://github.com/zeroshot-ai)
  • [Daria Bakshandaeva](https://github.com/Gugutse)
  • [Christoph Schuhmann](http://christoph-schuhmann.de/)
  • others
  • [Ksenia Ivanova](https://github.com/ivksu)
  • [Nadiia Klokova](https://github.com/vauimpuls)

| [![](https://img.shields.io/github/stars/deep-floyd/IF?style=social)](https://github.com/deep-floyd/IF)

  • [arxiv](https://arxiv.org/abs/2205.11487)

  • [discord](https://discord.gg/umz62Mgr)

  • [hf](https://huggingface.co/DeepFloyd), [hf](https://huggingface.co/docs/diffusers/optimization/fp16#model-offloading-for-fast-inference-and-memory-savings), [hf](https://huggingface.co/docs/diffusers/api/pipelines/if#optimizing-for-speed), [hf](https://huggingface.co/docs/diffusers/api/pipelines/if#optimizing-for-memory), [hf](https://huggingface.co/blog/if), [hf](https://huggingface.co/docs/diffusers/main/en/api/pipelines/if)

  • [kaggle](https://www.kaggle.com/code/shonenkov/deepfloyd-if-4-3b-generator-of-pictures)

  • [twitter](https://twitter.com/deepfloydai)

  • [website](https://deepfloyd.ai/deepfloyd-if)

  • [yt](https://youtu.be/4Zkipll5Rjc), [yt](https://youtu.be/tq5ZXZWwTPA), [yt](https://youtu.be/rLtfd1TvYJk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/deepfloyd_if_free_tier_google_colab.ipynb) | 26.06.2023 |
| normflows | PyTorch implementation of discrete normalizing flows |

  • [Vincent Stimper](https://is.mpg.de/person/vstimper)
  • [David Liu](https://davindicode.github.io/)
  • [Andrew Campbell](https://github.com/andrew-cr)
  • [Vincent Berenz](http://vincentberenz.is.tuebingen.mpg.de/)
  • others
  • [Lukas Ryll](https://github.com/lukasryll)
  • [Bernhard Schölkopf](https://scholar.google.com/citations?user=DZ-fHPgAAAAJ)
  • [José Miguel Hernández-Lobato](https://jmhl.org/)

| [![](https://img.shields.io/github/stars/VincentStimper/normalizing-flows?style=social)](https://github.com/VincentStimper/normalizing-flows)

  • [arxiv](https://arxiv.org/abs/2302.12014)

  • [docs](https://vincentstimper.github.io/normalizing-flows/)

  • [git](https://github.com/VincentStimper/resampled-base-flows), [git](https://github.com/VincentStimper/hmc-hyperparameter-tuning)

  • [wiki](https://en.wikipedia.org/wiki/Von_Mises_distribution)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/VincentStimper/normalizing-flows/blob/master/examples/paper_example_nsf_colab.ipynb) | 26.06.2023 |
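A minimal normflows construction sketch, with constructor shapes assumed from the project README:

```python
import torch
import normflows as nf

# 2-D Gaussian base wrapped in a short stack of planar flows.
base = nf.distributions.base.DiagGaussian(2)
flows = [nf.flows.Planar((2,)) for _ in range(4)]
model = nf.NormalizingFlow(base, flows)

x = torch.randn(64, 2)
loss = model.forward_kld(x)  # maximum-likelihood training objective
loss.backward()
```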
| MMPose | Toolbox for pose estimation based on PyTorch | [OpenMMLab](https://openmmlab.com/) | [![](https://img.shields.io/github/stars/open-mmlab/mmpose?style=social)](https://github.com/open-mmlab/mmpose)

  • [discord](https://discord.com/channels/1037617289144569886/1072798105428299817)

  • [docs](https://mmpose.readthedocs.io/en/latest/)

  • [medium](https://openmmlab.medium.com/)

  • [pypi](https://pypi.org/project/mmpose/)

  • [twitter](https://twitter.com/OpenMMLab)

  • [yt](https://www.youtube.com/openmmlab), [yt](https://youtu.be/nFcZ2H1Ix3w)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/open-mmlab/mmpose/blob/master/demo/MMPose_Tutorial.ipynb) | 19.06.2023 |
| MyoSuite | A collection of musculoskeletal environments and tasks simulated with the MuJoCo physics engine and wrapped in the OpenAI Gym API to enable the application of machine learning to biomechanical control problems |

  • [Vittorio Caggiano](https://github.com/Vittorio-Caggiano)
  • [Huawei Wang](https://huaweiwang.github.io/)
  • [Guillaume Durandau](https://people.utwente.nl/g.v.durandau)
  • [Massimo Sartori](https://people.utwente.nl/m.sartori)
  • [Vikash Kumar](https://vikashplus.github.io/)

| [![](https://img.shields.io/github/stars/facebookresearch/myosuite?style=social)](https://github.com/facebookresearch/myosuite)

  • [arxiv](https://arxiv.org/abs/2205.13600)

  • [docs](https://myosuite.readthedocs.io/en/latest/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1U6vo6Q_rPhDaq6oUMV7EAZRm6s0fD1wn) | 16.06.2023 |
| Audiocraft | PyTorch library for deep learning research on audio generation |

  • [Jade Copet](https://scholar.google.com/citations?&user=GRMLwjAAAAAJ)
  • [Felix Kreuk](https://felixkreuk.github.io/)
  • [Itai Gat](https://itaigat.com/)
  • [Tal Remez](https://talremez.github.io/)
  • others
  • [David Kant](https://www.linkedin.com/in/david-kant-339a3b1b7)
  • [Gabriel Synnaeve](https://syhw.github.io/)
  • [Yossi Adi](https://www.cs.huji.ac.il/~adiyoss/)
  • [Alexandre Défossez](https://ai.honu.io/)

| [![](https://img.shields.io/github/stars/facebookresearch/audiocraft?style=social)](https://github.com/facebookresearch/audiocraft)

  • [arxiv](https://arxiv.org/abs/2306.05284), [arxiv](https://arxiv.org/abs/2301.11325)

  • [git](https://github.com/facebookresearch/encodec), [git](https://github.com/camenduru/MusicGen-colab)

  • [hf](https://huggingface.co/facebook/musicgen-large)

  • [project](https://ai.honu.io/papers/musicgen/)

  • [yt](https://youtu.be/v-YpvPkhdO4), [yt](https://www.youtube.com/watch?v=EGfxuTy9Eeo), [yt](https://youtu.be/la2fGS0dW98)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-) | 11.06.2023 |
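A minimal MusicGen sketch following the Audiocraft README; the checkpoint name is assumed.

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # seconds of audio
wav = model.generate(["lo-fi hip hop beat with warm piano"])

# Writes sample.wav with loudness normalization.
audio_write("sample", wav[0].cpu(), model.sample_rate, strategy="loudness")
```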
| Detectron2 | FAIR's next-generation platform for object detection and segmentation | [Yuxin Wu](http://ppwwyyxx.com/) | [![](https://img.shields.io/github/stars/facebookresearch/detectron2?style=social)](https://github.com/facebookresearch/detectron2)

  • [blog post](https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-/)

  • [docs](https://detectron2.readthedocs.io/en/latest/)

  • [git](https://github.com/matterport/Mask_RCNN/tree/master/samples/balloon)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5) | 26.05.2023 |
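A minimal Detectron2 inference sketch with the model zoo and `DefaultPredictor`; the input image path is a placeholder.

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input.jpg"))
print(outputs["instances"].pred_classes)
```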
| Reverb | Efficient and easy-to-use data storage and transport system designed for machine learning research |

  • [Albin Cassirer](https://github.com/acassirer)
  • [Gabriel Barth-Maron](https://github.com/fastturtle)
  • [Eugene Brevdo](https://ebrevdo.github.io/)
  • [Sabela Ramos](https://github.com/sabelaraga)
  • others
  • [Toby Boyd](https://github.com/tfboyd)
  • [Thibault Sottiaux](https://github.com/thso)

| [![](https://img.shields.io/github/stars/google-deepmind/reverb?style=social)](https://github.com/google-deepmind/reverb)

  • [arxiv](https://arxiv.org/abs/2102.04736), [arxiv](https://arxiv.org/abs/1801.01290), [arxiv](https://arxiv.org/abs/1509.02971), [arxiv](https://arxiv.org/abs/1707.01495), [arxiv](https://arxiv.org/abs/1511.05952), [arxiv](https://arxiv.org/abs/1804.08617), [arxiv](https://arxiv.org/abs/1802.01561), [arxiv](https://arxiv.org/abs/1707.06347)

  • [pypi](https://pypi.org/project/dm-reverb/)

  • [reddit](https://www.reddit.com/r/reinforcementlearning/comments/lhnrkd/reverb_a_framework_for_experience_replay/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/reverb/blob/master/examples/demo.ipynb) | 23.05.2023 |
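A minimal Reverb server/client sketch following the repo README; the table name and sizes are illustrative.

```python
import reverb

server = reverb.Server(tables=[
    reverb.Table(
        name="replay_buffer",
        sampler=reverb.selectors.Uniform(),
        remover=reverb.selectors.Fifo(),
        max_size=100_000,
        rate_limiter=reverb.rate_limiters.MinSize(100),
    )
])

client = reverb.Client(f"localhost:{server.port}")
client.insert([0.0, 1.0], priorities={"replay_buffer": 1.0})
```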
| MMDetection | Open source object detection toolbox based on PyTorch | [OpenMMLab](https://openmmlab.com/) | [![](https://img.shields.io/github/stars/open-mmlab/mmdetection?style=social)](https://github.com/open-mmlab/mmdetection)

  • [arxiv](https://arxiv.org/abs/1906.07155), [arxiv](https://arxiv.org/abs/2401.02361), [arxiv](https://arxiv.org/abs/2212.07784)

  • [discord](https://discord.com/channels/1037617289144569886/1046608014234370059)

  • [docs](https://mmdetection.readthedocs.io/en/latest/)

  • [git](https://github.com/tusen-ai/simpledet), [git](https://github.com/open-mmlab/mmcv), [git](https://github.com/open-mmlab/mmengine)

  • [medium](https://openmmlab.medium.com/)

  • [pwc](https://paperswithcode.com/sota/real-time-instance-segmentation-on-mscoco?p=rtmdet-an-empirical-study-of-designing-real), [pwc](https://paperswithcode.com/sota/object-detection-in-aerial-images-on-dota-1?p=rtmdet-an-empirical-study-of-designing-real), [pwc](https://paperswithcode.com/sota/object-detection-in-aerial-images-on-hrsc2016?p=rtmdet-an-empirical-study-of-designing-real)

  • [pypi](https://pypi.org/project/mmdet)

  • [twitter](https://twitter.com/OpenMMLab)

  • [yt](https://www.youtube.com/openmmlab), [yt](https://youtu.be/5kgWyo6Sg4E), [yt](https://youtu.be/4SuwN4xSM3Q), [yt](https://www.youtube.com/live/SWB2pTY3UDM), [yt](https://youtu.be/AEIDB6Dd6bM), [yt](https://youtu.be/7c2JKPMVPm0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/open-mmlab/mmdetection/blob/main/demo/MMDet_Tutorial.ipynb) | 17.05.2023 |
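
A minimal inference sketch with MMDetection's high-level API; the config/checkpoint pair below is a placeholder for any entry downloaded from the model zoo:

```python
from mmdet.apis import init_detector, inference_detector

# Placeholder config/checkpoint paths; grab a matching pair from the model zoo.
model = init_detector("rtmdet_tiny_8xb32-300e_coco.py",
                      "rtmdet_tiny_8xb32-300e_coco.pth", device="cuda:0")
result = inference_detector(model, "demo.jpg")
```
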
| ChatRWKV | Like ChatGPT but powered by the RWKV (100% RNN) language model, the only RNN that matches transformers in quality and scaling while being faster and saving VRAM |

  • [Bo Peng](https://github.com/BlinkDL)
  • [Eric Alcaide](https://hypnopump.github.io/)
  • [Quentin Anthony](https://quentin-anthony.github.io/)
  • [Alon Albalak](https://alon-albalak.github.io/)
  • [Samuel Arcadinho](https://github.com/SSamDav)
  • [Matteo Grella](http://www.matteogrella.com/)
  • [Kranthi Kiran](https://kranthigv.github.io/)
  • [Haowen Hou](https://github.com/howard-hou)
  • [Przemyslaw Kazienko](https://kazienko.eu/en)
  • [Jan Kocon](https://github.com/KoconJan)
  • [Bartlomiej Koptyra](https://github.com/bkoptyra)
  • [Ipsit Mantri](https://ipsitmantri.github.io/)
  • [Ferdinand Mom](https://3outeille.github.io/)
  • [Xiangru Tang](https://github.com/tangxiangru)
  • [Johan Wind](https://johanwind.github.io/)
  • [Stanisław Woźniak](https://www.researchgate.net/profile/Stanislaw-Wozniak-3)
  • [Qihang Zhao](https://www.researchgate.net/profile/Qihang-Zhao-2)
  • [Peng Zhou](https://pengzhou.sites.ucsc.edu/)
  • [Jian Zhu](https://lingjzhu.github.io/)
  • [Rui-Jie Zhu](https://scholar.google.com/citations?user=08ITzJsAAAAJ)

| [![](https://img.shields.io/github/stars/BlinkDL/ChatRWKV?style=social)](https://github.com/BlinkDL/ChatRWKV)

  • [arxiv](https://arxiv.org/abs/2305.13048)

  • [discord](https://discord.gg/bDSBUMeFpc)

  • [git](https://github.com/saharNooby/rwkv.cpp), [git](https://github.com/harrisonvanderbyl/rwkv-cpp-cuda), [git](https://github.com/Blealtan/RWKV-LM-LoRA), [git](https://github.com/josStorer/RWKV-Runner)

  • [hf](https://huggingface.co/BlinkDL)

  • [reddit](https://www.reddit.com/r/MachineLearning/comments/1135aew/r_rwkv4_14b_release_and_chatrwkv_a_surprisingly/)

  • [twitter](https://twitter.com/BlinkDL_AI)

  • [website](https://www.rwkv.com/)

  • [yt](https://youtu.be/UeAD1qWNb1U)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/resloved/RWKV-notebooks/blob/master/RWKV_ChatRWKV.ipynb) | 08.05.2023 |
| Python Data Science Handbook | Jupyter notebook version of the Python Data Science Handbook by Jake VanderPlas | [Jake Vanderplas](http://vanderplas.com/) | [![](https://img.shields.io/github/stars/jakevdp/PythonDataScienceHandbook?style=social)](https://github.com/jakevdp/PythonDataScienceHandbook)
  • [project](https://jakevdp.github.io/PythonDataScienceHandbook/)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/Index.ipynb) | 06.05.2023 |
| PGMax | General factor graphs for discrete probabilistic graphical models, and hardware-accelerated differentiable loopy belief propagation in JAX |

  • [Guangyao Zhou](https://stanniszhou.github.io/)
  • [Nishanth Kumar](http://nishanthjkumar.com/)
  • [Antoine Dedieu](https://github.com/antoine-dedieu)
  • [Miguel Lázaro-Gredilla](https://www.tsc.uc3m.es/~miguel/)
  • [Shrinu Kushagra](https://cs.uwaterloo.ca/~skushagr/)
  • [Dileep George](https://dileeplearning.github.io/)

| [![](https://img.shields.io/github/stars/deepmind/PGMax?style=social)](https://github.com/deepmind/PGMax)

  • [arxiv](https://arxiv.org/abs/2202.04110)

  • [wiki](https://en.wikipedia.org/wiki/Belief_propagation)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/PGMax/blob/main/examples/rcn.ipynb) | 05.05.2023 |
| StableLM | Stability AI Language Models | [Stability AI](https://stability.ai/research) | [![](https://img.shields.io/github/stars/Stability-AI/StableLM?style=social)](https://github.com/Stability-AI/StableLM)

  • [blog post](https://stability.ai/blog/stability-ai-launches-the-first-of-its-stablelm-suite-of-language-models)

  • [git](https://github.com/facebookresearch/llama), [git](https://github.com/tatsu-lab/stanford_alpaca), [git](https://github.com/nomic-ai/gpt4all), [git](https://github.com/databrickslabs/dolly), [git](https://github.com/anthropics/hh-rlhf), [git](https://github.com/ggerganov/llama.cpp)

  • [hf](https://huggingface.co/lmsys/vicuna-13b-delta-v0), [hf](https://huggingface.co/datasets/RyokoAI/ShareGPT52K), [hf](https://huggingface.co/stabilityai)

  • [yt](https://youtu.be/dypPSs4t77g), [yt](https://youtu.be/nWf1StvtoRw), [yt](https://youtu.be/Hg-s2RTaTFE), [yt](https://youtu.be/qXtJjoEfTnA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Stability-AI/StableLM/blob/main/notebooks/stablelm-alpha.ipynb) | 27.04.2023 |
| TTS | A library for advanced Text-to-Speech generation, built on the latest research and designed to achieve the best trade-off among ease of training, speed, and quality |

  • [Eren Gölge](https://github.com/erogol)
  • [Aya-AlJafari](https://github.com/Aya-AlJafari)
  • [Edresson Casanova](https://github.com/Edresson)
  • [Josh Meyer](http://jrmeyer.github.io/)
  • [Kelly Davis](https://github.com/kdavis-coqui)
  • [Reuben Morais](https://github.com/reuben)

| [![](https://img.shields.io/github/stars/coqui-ai/TTS?style=social)](https://github.com/coqui-ai/TTS)

  • [blog post](https://coqui.ai/blog/tts/solving-attention-problems-of-tts-models-with-double-decoder-consistency)

  • [docs](https://tts.readthedocs.io/en/latest/)

  • [git](https://github.com/coqui-ai/TTS-papers)

  • [samples](https://erogol.github.io/ddc-samples/)

  • [website](https://coqui.ai/)

  • [yt](https://youtu.be/ADnBCz0Wd1U), [yt](https://youtu.be/Yglxf2WbkLU), [yt](https://youtu.be/alpI-DnVlO0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/coqui-ai/TTS/blob/dev/notebooks/Tutorial_2_train_your_first_TTS_model.ipynb) | 26.04.2023 |
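
A minimal synthesis sketch with Coqui TTS's Python API; the model name is one of the entries reported by `TTS.list_models()` and the output path is illustrative:

```python
from TTS.api import TTS

# An English single-speaker model from the released model zoo.
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(text="Hello from Coqui TTS!", file_path="hello.wav")
```
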
| OpenCLIP | An open source implementation of CLIP |

  • [Ross Wightman](https://rwightman.com/)
  • [Cade Gordon](https://cadegordon.io/)
  • [Vaishaal Shankar](http://vaishaal.com/)

| [![](https://img.shields.io/github/stars/mlfoundations/open_clip?style=social)](https://github.com/mlfoundations/open_clip)

  • [arxiv](https://arxiv.org/abs/2109.01903), [arxiv](https://arxiv.org/abs/2103.00020), [arxiv](https://arxiv.org/abs/2111.02114), [arxiv](https://arxiv.org/abs/2107.04649), [arxiv](https://arxiv.org/abs/1902.10811)

  • [data](https://ai.google.com/research/ConceptualCaptions/download), [data](https://laion.ai/blog/laion-5b/), [data](https://laion.ai/blog/laion-400-open-dataset/)

  • [git](https://github.com/mlfoundations/wise-ft), [git](https://github.com/webdataset/webdataset), [git](https://github.com/webdataset/tarp), [git](https://github.com/google-research-datasets/conceptual-12m)

  • [hf](https://huggingface.co/datasets/laion/laion2B-en), [hf](https://huggingface.co/laion/CLIP-ViT-B-32-laion2B-s34B-b79K), [hf](https://huggingface.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K), [hf](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K), [hf](https://huggingface.co/laion/CLIP-ViT-g-14-laion2B-s12B-b42K)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/mlfoundations/open_clip/blob/master/docs/Interacting_with_open_clip.ipynb) | 16.04.2023 |
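
A zero-shot classification sketch with OpenCLIP, using one of the LAION-2B checkpoints linked above; `cat.jpg` and the candidate labels are illustrative:

```python
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
text = tokenizer(["a cat", "a dog", "a car"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # cosine similarities after L2 normalisation
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)  # highest probability should land on "a cat"
```
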
| Stable Baselines3 | Set of reliable implementations of reinforcement learning algorithms in PyTorch |

  • [Antonin Raffin](https://araffin.github.io/)
  • [Ashley Hill](https://hill-a.me/)
  • [Adam Gleave](https://www.gleave.me/)
  • [Anssi Kanervisto](https://github.com/Miffyli)
  • [Maximilian Ernestus](https://github.com/ernestum)
  • [Noah Dormann](https://github.com/ndormann)

| [![](https://img.shields.io/github/stars/DLR-RM/stable-baselines3?style=social)](https://github.com/DLR-RM/stable-baselines3)

  • [docs](https://stable-baselines3.readthedocs.io/en/master/)

  • [git](https://github.com/Stable-Baselines-Team/stable-baselines3-contrib), [git](https://github.com/hill-a/stable-baselines), [git](https://github.com/openai/gym/wiki/Environments)

  • [paper](https://jmlr.org/papers/v22/20-1364.html)

  • [reddit](https://www.reddit.com/r/reinforcementlearning/)

  • [yt](https://www.youtube.com/playlist?list=PLQVvvaa0QuDf0O2DWwLZBfJeYY-JOeZB1)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Stable-Baselines-Team/rl-colab-notebooks/blob/sb3/stable_baselines_getting_started.ipynb) | 14.04.2023 |
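
The canonical Stable Baselines3 training loop in three lines; CartPole and the timestep budget are illustrative:

```python
from stable_baselines3 import PPO

model = PPO("MlpPolicy", "CartPole-v1", verbose=1)  # env is built from its id
model.learn(total_timesteps=10_000)
model.save("ppo_cartpole")
```
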
| RL Baselines3 Zoo | Training Framework for Stable Baselines3 Reinforcement Learning Agents | [Antonin Raffin](https://araffin.github.io/) | [![](https://img.shields.io/github/stars/DLR-RM/rl-baselines3-zoo?style=social)](https://github.com/DLR-RM/rl-baselines3-zoo)

  • [arxiv](https://arxiv.org/abs/2005.05719)

  • [docs](https://stable-baselines3.readthedocs.io/en/master/)

  • [git](https://github.com/DLR-RM/rl-baselines3-zoo), [git](https://github.com/openai/roboschool), [git](https://github.com/Farama-Foundation/Minigrid)

  • [hf](https://huggingface.co/sb3)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Stable-Baselines-Team/rl-colab-notebooks/blob/sb3/rl-baselines-zoo.ipynb) | 14.04.2023 |
| Grounded-SAM | Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything | [IDEA-Research](https://www.idea.edu.cn/) | [![](https://img.shields.io/github/stars/IDEA-Research/Grounded-Segment-Anything?style=social)](https://github.com/IDEA-Research/Grounded-Segment-Anything)

  • [arxiv](https://arxiv.org/abs/2304.02643), [arxiv](https://arxiv.org/abs/2303.05499)

  • [git](https://github.com/MasterBin-IIAU/UNINEXT), [git](https://github.com/IDEA-Research/OSX), [git](https://github.com/dvlab-research/VoxelNeXt), [git](https://github.com/UX-Decoder/Semantic-SAM), [git](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once), [git](https://github.com/IDEA-Research/OpenSeeD), [git](https://github.com/Computer-Vision-in-the-Wild/CVinW_Readings), [git](https://github.com/sail-sg/EditAnything), [git](https://github.com/feizc/IEA), [git](https://github.com/Li-Qingyun/sam-mmrotate), [git](https://github.com/VainF/Awesome-Anything), [git](https://github.com/RockeyCoss/Prompt-Segment-Anything)

  • [yt](https://youtu.be/oEQYStnF2l8), [yt](https://youtu.be/gKTYMfwPo4M), [yt](https://youtu.be/0Fpb8TBH0nM), [yt](https://youtu.be/GuEDDBWrN24)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/betogaona7/Grounded-Segment-Anything/blob/main/grounded_sam_colab_demo.ipynb) | 12.04.2023 |
| TFDS | Collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks | [Google](https://www.tensorflow.org/) | [![](https://img.shields.io/github/stars/tensorflow/datasets?style=social)](https://github.com/tensorflow/datasets)

  • [medium](https://towardsdatascience.com/youre-importing-data-wrong-c171f52eea00)

  • [tf](https://www.tensorflow.org/datasets)

  • [yt](https://youtu.be/YrMy-BAqk8k), [yt](https://youtu.be/6th3rahsw9Y), [yt](https://youtu.be/3HYy0SPd7TE), [yt](https://youtu.be/MvcK-MaXbHk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/datasets/blob/master/docs/overview.ipynb) | 11.04.2023 |
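
A minimal TFDS pipeline; `mnist` stands in for any dataset in the catalog:

```python
import tensorflow_datasets as tfds

ds = tfds.load("mnist", split="train", as_supervised=True, shuffle_files=True)
ds = ds.shuffle(10_000).batch(32).prefetch(1)  # standard tf.data staging
for images, labels in ds.take(1):
    print(images.shape, labels.shape)  # (32, 28, 28, 1) (32,)
```
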
| Optimum | Extension of Transformers and Diffusers, providing a set of optimization tools enabling maximum efficiency to train and run models on targeted hardware, while keeping things easy to use | [Hugging Face](https://huggingface.co/) | [![](https://img.shields.io/github/stars/huggingface/optimum?style=social)](https://github.com/huggingface/optimum)

  • [git](https://github.com/openvinotoolkit/nncf)

  • [hf](https://huggingface.co/docs/optimum/index), [hf](https://huggingface.co/docs/transformers/main_classes/trainer)

  • [yt](https://youtu.be/UJnfePM0Ur8), [yt](https://www.youtube.com/live/b1Gk9q9empA), [yt](https://youtu.be/_AKFDOnrZz8)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering_ort.ipynb) | 06.04.2023 |
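
A sketch of running a Transformers checkpoint through ONNX Runtime via Optimum; the model id is illustrative, and `export=True` was spelled `from_transformers=True` in older releases:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(clf("Exporting to ONNX was painless"))
```
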
| MMOCR | Open source toolkit based on PyTorch and MMDetection, supporting numerous OCR-related models, including text detection, text recognition, and key information extraction |

  • [Zhanghui Kuang](https://jeffreykuang.github.io)
  • [Hongbin Sun](https://github.com/cuhk-hbsun)
  • [Zhizhong Li](https://zhizhong.li/)
  • [Xiaoyu Yue](https://yuexy.github.io/#/)
  • [Tsui Hin Lin](https://dl.acm.org/profile/99659894554)
  • [Jianyong Chen](https://github.com/HolyCrap96)
  • [Huaqiang Wei](https://github.com/weihuaqiang)
  • [Yiqin Zhu](https://scholar.google.com/citations?user=ZH9cp50AAAAJ)
  • [Tong Gao](https://github.com/gaotongxiao)
  • [Wenwei Zhang](https://zhangwenwei.cn/)
  • [Kai Chen](https://chenkai.site/)
  • [Wayne Zhang](https://www.statfe.com/)
  • [Dahua Lin](http://dahua.site/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3474085.3478328)](https://doi.org/10.1145/3474085.3478328) [![](https://img.shields.io/github/stars/open-mmlab/mmocr?style=social)](https://github.com/open-mmlab/mmocr)

  • [arxiv](https://arxiv.org/abs/2108.06543)

  • [discord](https://discord.gg/raweFPmdzG)

  • [docs](https://mmocr.readthedocs.io/en/latest/)

  • [git](https://github.com/open-mmlab/mmengine), [git](https://github.com/open-mmlab/mmcv)

  • [medium](https://openmmlab.medium.com/mmocr-a-comprehensive-toolbox-for-text-detection-recognition-and-understanding-795befa726b8)

  • [pypi](https://pypi.org/project/mmocr/)

  • [twitter](https://twitter.com/OpenMMLab)

  • [yt](https://youtu.be/U7VYfHeE0KQ), [yt](https://youtu.be/Snyu-o8ZdDk), [yt](https://youtu.be/g7qfSYkkpUA), [yt](https://www.youtube.com/openmmlab)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/open-mmlab/mmocr/blob/master/demo/tutorial.ipynb) | 06.04.2023 |
| MMSegmentation | Open source semantic segmentation toolbox based on PyTorch | [OpenMMLab](https://openmmlab.com/) | [![](https://img.shields.io/github/stars/open-mmlab/mmsegmentation?style=social)](https://github.com/open-mmlab/mmsegmentation)

  • [discord](https://discord.gg/raweFPmdzG)

  • [docs](https://mmsegmentation.readthedocs.io/en/main/)

  • [medium](https://openmmlab.medium.com/), [medium](https://mducducd33.medium.com/sematic-segmentation-using-mmsegmentation-bcf58fb22e42)

  • [pypi](https://pypi.org/project/mmsegmentation)

  • [twitter](https://twitter.com/OpenMMLab)

  • [yt](https://www.youtube.com/openmmlab)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/open-mmlab/mmsegmentation/blob/main/demo/MMSegmentation_Tutorial.ipynb) | 31.03.2023 |
| LAVIS | Python deep learning library for LAnguage-and-VISion intelligence research and applications |

  • [Dongxu Li](https://github.com/dxli94)
  • [Junnan Li](https://github.com/LiJunnan1992)
  • [Hung Le](https://sites.google.com/view/henryle2018/home)
  • [Guangsen Wang](https://github.com/guangsen-wang)
  • [Silvio Savarese](https://scholar.google.com/citations?user=ImpbxLsAAAAJ)
  • [Steven Hoi](https://sites.google.com/view/stevenhoi)

| [![](https://img.shields.io/github/stars/salesforce/LAVIS?style=social)](https://github.com/salesforce/LAVIS)

  • [arxiv](https://arxiv.org/abs/2209.09019), [arxiv](https://arxiv.org/abs/2305.06500), [arxiv](https://arxiv.org/abs/2301.12597), [arxiv](https://arxiv.org/abs/2212.10846), [arxiv](https://arxiv.org/abs/2210.08773)

  • [blog post](https://blog.salesforceairesearch.com/lavis-language-vision-library/)

  • [docs](https://opensource.salesforce.com/LAVIS//latest/index.html)

  • [wiki](https://en.wikipedia.org/wiki/Merlion)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/salesforce/LAVIS/blob/main/projects/img2llm-vqa/img2llm_vqa.ipynb) | 24.03.2023 |
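
A captioning sketch with LAVIS, assuming the BLIP entry from its model zoo; the image path is illustrative:

```python
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = "cuda" if torch.cuda.is_available() else "cpu"
model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco", is_eval=True, device=device)

raw_image = Image.open("merlion.png").convert("RGB")
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
print(model.generate({"image": image}))  # a list with one caption string
```
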
| AudioLM | Framework for high-quality audio generation with long-term consistency |

  • [Phil Wang](https://lucidrains.github.io/)
  • [Zalán Borsos](https://zalanborsos.com/)
  • [Raphaël Marinier](https://github.com/RaphaelMarinier)
  • [Damien Vincent](https://www.linkedin.com/in/damien-vincent-1958381)
  • [Eugene Kharitonov](https://eugene-kharitonov.github.io/)
  • [Olivier Pietquin](https://research.google/people/105812)
  • [Matt Sharifi](https://scholar.google.com/citations?user=GeQNBz0AAAAJ)
  • [Olivier Teboul](https://scholar.google.com/citations?user=ep0OfyAAAAAJ)
  • [David Grangier](http://david.grangier.info/)
  • [Marco Tagliasacchi](https://scholar.google.com/citations?user=zwH1rZQAAAAJ)
  • [Neil Zeghidour](https://github.com/lienz)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/TASLP.2023.3288409)](https://doi.org/10.1109/TASLP.2023.3288409) [![](https://img.shields.io/github/stars/lucidrains/audiolm-pytorch?style=social)](https://github.com/lucidrains/audiolm-pytorch)

  • [arxiv](https://arxiv.org/abs/2209.03143), [arxiv](https://arxiv.org/abs/2107.03312), [arxiv](https://arxiv.org/abs/2305.02765), [arxiv](https://arxiv.org/abs/2305.19466), [arxiv](https://arxiv.org/abs/2002.05202), [arxiv](https://arxiv.org/abs/1911.02150), [arxiv](https://arxiv.org/abs/2207.12598), [arxiv](https://arxiv.org/abs/2105.13290), [arxiv](https://arxiv.org/abs/2210.13432), [arxiv](https://arxiv.org/abs/2111.09883), [arxiv](https://arxiv.org/abs/2104.05707), [arxiv](https://arxiv.org/abs/2210.13438)

  • [blog post](https://blog.research.google/2022/10/audiolm-language-modeling-approach-to.html)

  • [discord](https://discord.gg/xBPBXfcFHd)

  • [git](https://github.com/facebookresearch/encodec), [git](https://github.com/lucidrains/musiclm-pytorch)

  • [project](https://google-research.github.io/seanet/audiolm/examples/)

  • [yt](https://youtu.be/Vucewi_kPEU), [yt](https://youtu.be/behUbh0koZk), [yt](https://youtu.be/olNvmUCmY8o)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/lucidrains/audiolm-pytorch/blob/main/audiolm_pytorch_demo.ipynb) | 23.03.2023 |
| pymdp | Package for simulating Active Inference agents in Markov Decision Process environments |

  • [Conor Heins](https://github.com/conorheins)
  • [Alec Tschantz](https://github.com/alec-tschantz)
  • [Beren Millidge](https://www.beren.io/)
  • [Brennan Klein](https://github.com/jkbren)
  • [Arun Niranjan](https://github.com/Arun-Niranjan)
  • [Daphne Demekas](https://github.com/daphnedemekas)

| [![](https://img.shields.io/github/stars/infer-actively/pymdp?style=social)](https://github.com/infer-actively/pymdp)

  • [arxiv](https://arxiv.org/abs/2201.03904)

  • [docs](https://pymdp-rtd.readthedocs.io/en/stable/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/infer-actively/pymdp/blob/master/docs/notebooks/active_inference_from_scratch.ipynb) | 19.03.2023 |
| Tzer | Coverage-Guided Tensor Compiler Fuzzing with Joint IR-Pass Mutation |

  • [Jiawei Liu](https://jiawei-site.github.io/)
  • [Yuxiang Wei](https://yuxiang.cs.illinois.edu/)
  • [Sen Yang](https://github.com/syang-ng)
  • [Yinlin Deng](https://dengyinlin.github.io/)
  • [Lingming Zhang](http://lingming.cs.illinois.edu/)

| [![](https://img.shields.io/github/stars/ise-uiuc/tzer?style=social)](https://github.com/ise-uiuc/tzer)

  • [arxiv](https://arxiv.org/abs/2202.09947)

  • [docker](https://hub.docker.com/repository/docker/tzerbot/oopsla)

  • [docs](https://tzer.readthedocs.io/en/latest/index.html)

  • [git](https://github.com/ganler/memcov)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ise-uiuc/tzer/blob/main/bug-report.ipynb) | 09.03.2023 |
| ArtLine | A Deep Learning based project for creating line art portraits | [Vijish Madhavan](https://github.com/vijishmadhavan) | [![](https://img.shields.io/github/stars/vijishmadhavan/ArtLine?style=social)](https://github.com/vijishmadhavan/ArtLine)

  • [arxiv](https://arxiv.org/abs/1805.08318), [arxiv](https://arxiv.org/abs/1710.10196), [arxiv](https://arxiv.org/abs/1707.02921), [arxiv](https://arxiv.org/abs/1603.08155)

  • [data](https://cg.cs.tsinghua.edu.cn/people/~Yongjin/APDrawingDB.zip)

  • [git](https://github.com/yiranran/APDrawingGAN), [git](https://github.com/jantic/DeOldify)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/vijishmadhavan/ArtLine/blob/main/ControlNet_%2BArtLine_.ipynb) | 03.03.2023 |
| Haiku | A library built on top of JAX designed to provide simple, composable abstractions for machine learning research |

  • [Tom Hennigan](https://github.com/tomhennigan)
  • [Trevor Cai](https://github.com/trevorcai)
  • [Tamara Norman](https://github.com/tamaranorman)
  • [Igor Babuschkin](https://www.babushk.in/)

| [![](https://img.shields.io/github/stars/deepmind/dm-haiku?style=social)](https://github.com/deepmind/dm-haiku)

  • [docs](https://dm-haiku.readthedocs.io/en/latest/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/dm-haiku/blob/main/examples/haiku_lstms.ipynb) | 02.03.2023 |
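
Haiku's core pattern in a few lines: write a function built from `hk` modules, then `hk.transform` it into a pure `(init, apply)` pair; the layer sizes are illustrative:

```python
import haiku as hk
import jax
import jax.numpy as jnp

def forward(x):
    mlp = hk.Sequential([hk.Linear(128), jax.nn.relu, hk.Linear(10)])
    return mlp(x)

net = hk.transform(forward)
x = jnp.ones([1, 784])
params = net.init(jax.random.PRNGKey(42), x)  # explicit parameter pytree
logits = net.apply(params, None, x)           # rng=None: forward is deterministic
```
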
| SAHI | A lightweight vision library for performing large scale object detection & instance segmentation |

  • [Fatih Cagatay Akyon](https://github.com/fcakyon)
  • [Sinan Onur ALTINUÇ](https://github.com/sinanonur)
  • [Alptekin Temizel](https://blog.metu.edu.tr/atemizel/)
  • [Cemil Cengiz](https://scholar.google.com/citations?user=1Ull07EAAAAJ)
  • [Devrim Çavuşoğlu](https://github.com/devrimcavusoglu)
  • [Kadir Şahin](https://github.com/ssahinnkadir)
  • [Oğulcan Eryüksel](https://github.com/oulcan)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICIP46576.2022.9897990)](https://doi.org/10.1109/ICIP46576.2022.9897990) [![](https://img.shields.io/github/stars/obss/sahi?style=social)](https://github.com/obss/sahi)

  • [arxiv](https://arxiv.org/abs/2202.06934)

  • [git](https://github.com/fcakyon/small-object-detection-benchmark)

  • [hf](https://huggingface.co/models?pipeline_tag=object-detection&sort=downloads)

  • [kaggle](https://www.kaggle.com/remekkinas/sahi-slicing-aided-hyper-inference-yv5-and-yx)

  • [medium](https://medium.com/codable/sahi-a-vision-library-for-performing-sliced-inference-on-large-images-small-objects-c8b086af3b80), [medium](https://medium.com/codable/convert-any-dataset-to-coco-object-detection-format-with-sahi-95349e1fe2b7)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_yolov5.ipynb) | 23.02.2023 |
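
A sliced-inference sketch with SAHI over a YOLOv5 model; the weights path, slice sizes, and input image are illustrative:

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov5", model_path="yolov5s.pt",
    confidence_threshold=0.4, device="cpu")

result = get_sliced_prediction(
    "large_image.jpg", detection_model,
    slice_height=512, slice_width=512,
    overlap_height_ratio=0.2, overlap_width_ratio=0.2)
print(len(result.object_prediction_list), "detections")
```
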
| AmpliGraph | A suite of neural machine learning models for relational learning, a branch of machine learning that deals with supervised learning on knowledge graphs |

  • [Luca Costabello](https://luca.costabello.info/)
  • [Adrianna Janik](https://github.com/adrijanik)
  • [Chan Le Van](https://github.com/chanlevan)
  • [Nicholas McCarthy](https://github.com/NicholasMcCarthy)
  • [Rory McGrath](http://www.rorymcgrath.ie/)
  • [Sumit Pai](https://github.com/sumitpai)

| [![](https://img.shields.io/github/stars/Accenture/AmpliGraph?style=social)](https://github.com/Accenture/AmpliGraph)

  • [arxiv](http://arxiv.org/abs/1702.05563), [arxiv](http://arxiv.org/abs/1705.10744), [arxiv](https://arxiv.org/abs/2105.08683), [arxiv](http://arxiv.org/abs/1612.03975), [arxiv](https://arxiv.org/abs/1912.10000), [arxiv](https://arxiv.org/abs/1412.6575)

  • [docs](https://docs.ampligraph.org)

  • [neurips](https://papers.nips.cc/paper/2013/hash/1cecc7a77928ca8133fa24680a88d2f9-Abstract.html), [neurips](https://papers.nips.cc/paper/2013/hash/b337e84de8752b27eda3a12363109e80-Abstract.html)

  • [yt](https://youtu.be/gX_KHaU8ChI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Accenture/AmpliGraph/blob/main/docs/tutorials/AmpliGraphBasicsTutorial.ipynb) | 23.02.2023 |
| NMT with attention | This notebook trains a seq2seq model for Spanish to English translation |

  • [Minh-Thang Luong](https://nlp.stanford.edu/~lmthang/)
  • [Hieu Pham](https://huyhieupham.github.io/)
  • [Christopher Manning](https://nlp.stanford.edu/~manning/)

|

  • [arxiv](https://arxiv.org/abs/1508.04025), [arxiv](https://arxiv.org/abs/1409.0473)

  • [data](http://www.manythings.org/anki/)

  • [tf](https://www.tensorflow.org/text/tutorials/nmt_with_attention)

  • [wiki](https://en.wikipedia.org/wiki/Neural_machine_translation)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/nmt_with_attention.ipynb) | 15.02.2023 |
| GLUE using BERT on TPU | This tutorial contains complete end-to-end code to train models on a TPU | [Anirudh Dubey](https://github.com/anirudh161) |

  • [GLUE](https://gluebenchmark.com/)

  • [arxiv](https://arxiv.org/abs/1810.04805)

  • [tf](https://www.tensorflow.org/guide/tpu), [tf](https://www.tensorflow.org/text/tutorials/bert_glue)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/bert_glue.ipynb) | 15.02.2023 |
| TensorBoard | Suite of web applications for inspecting and understanding your TensorFlow runs and graphs | [Yuan Tang](https://terrytangyuan.github.io/) | [![](https://img.shields.io/github/stars/tensorflow/tensorboard?style=social)](https://github.com/tensorflow/tensorboard)

  • [git](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/supervisor.py)

  • [tf](https://www.tensorflow.org/tensorboard/get_started), [tf](https://www.tensorflow.org/api_docs/python/tf/summary), [tf](https://www.tensorflow.org/api_docs/python/tf/linalg/matmul), [tf](https://www.tensorflow.org/api_docs/python/tf/nn/relu), [tf](https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/summary_iterator)

  • [website](https://tensorboard.dev/)

  • [wiki](https://en.wikipedia.org/wiki/Reservoir_sampling)

  • [yt](https://youtu.be/eBbEDRsCmv4), [yt](https://youtu.be/BqgTU7_cBnk), [yt](https://youtu.be/qEQ-_EId-D0), [yt](https://youtu.be/3bownM3L5zM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/scalars_and_keras.ipynb) | 10.02.2023 |
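
The usual Keras-side hookup: log training scalars with the TensorBoard callback, then point `tensorboard --logdir logs` at the directory; the tiny model is illustrative:

```python
import tensorflow as tf

(x, y), _ = tf.keras.datasets.mnist.load_data()
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

tb = tf.keras.callbacks.TensorBoard(log_dir="logs")  # writes event files
model.fit(x / 255.0, y, epochs=2, callbacks=[tb])
```
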
| High-performance Simulation with Kubernetes | This tutorial describes how to set up high-performance simulation using a TFF runtime running on Kubernetes | [Jason Roselander](https://github.com/roselander) |

  • [GKE](https://cloud.google.com/kubernetes-engine/)

  • [pwc](https://paperswithcode.com/task/federated-learning)

  • [shell](https://cloud.google.com/shell/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/federated/blob/master/docs/tutorials/high_performance_simulation_with_kubernetes.ipynb) | 31.01.2023 |
| Compel | Text prompt weighting and blending library for transformers-type text embedding systems | [Damian Stewart](http://damianstewart.com/) | [![](https://img.shields.io/github/stars/damian0815/compel?style=social)](https://github.com/damian0815/compel)

  • [git](https://github.com/invoke-ai/InvokeAI/issues/2832)

  • [hf](https://huggingface.co/cactusfriend/nightmare-invokeai-prompts)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/damian0815/compel/blob/main/compel-demo.ipynb) | 26.01.2023 |
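
A prompt-weighting sketch with compel on top of a diffusers pipeline, assuming compel's `++` up-weighting syntax; the checkpoint id is illustrative:

```python
from compel import Compel
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)

# "ball++" up-weights that concept relative to the rest of the prompt.
conditioning = compel.build_conditioning_tensor(
    "a cat playing with a ball++ in the forest")
image = pipe(prompt_embeds=conditioning, num_inference_steps=20).images[0]
image.save("cat.png")
```
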
| DALL·E Flow | An interactive workflow for generating high-definition images from a text prompt |

  • [Han Xiao](https://hanxiao.io/)
  • [Delgermurun Purevkhuu](https://delgermurun.com/)
  • [Alex Cureton-Griffiths](http://blog.alexcg.net/)

| [![](https://img.shields.io/github/stars/jina-ai/dalle-flow?style=social)](https://github.com/jina-ai/dalle-flow)

  • [git](https://github.com/Jack000/glid-3-xl), [git](https://github.com/jina-ai/docarray)

  • [hf](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)

  • [yt](https://www.youtube.com/playlist?list=PL3UBBWOUVhFYRUa_gpYYKBqEAkO4sxmne), [yt](https://www.youtube.com/c/jina-ai)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/jina-ai/dalle-flow/blob/main/client.ipynb) | 26.01.2023 |
| Diffusers | Provides pretrained diffusion models across multiple modalities, such as vision and audio, and serves as a modular toolbox for inference and training of diffusion models | [Hugging Face](https://huggingface.co/) | [![](https://img.shields.io/github/stars/huggingface/diffusers?style=social)](https://github.com/huggingface/diffusers)

  • [arxiv](https://arxiv.org/abs/2006.11239), [arxiv](https://arxiv.org/abs/2010.02502), [arxiv](https://arxiv.org/abs/2202.09778), [arxiv](https://arxiv.org/abs/2204.13902)

  • [git](https://github.com/hojonathanho/diffusion), [git](https://github.com/pesser/pytorch_diffusion), [git](https://github.com/ermongroup/ddim), [git](https://github.com/heejkoo/Awesome-Diffusion-Models)

  • [hf](https://huggingface.co/spaces/CompVis/text2img-latent-diffusion), [hf](https://huggingface.co/spaces/CompVis/celeba-latent-diffusion), [hf](https://huggingface.co/spaces/fusing/celeba-diffusion), [hf](https://huggingface.co/spaces/huggingface/diffuse-the-rest), [hf](https://huggingface.co/spaces/Shuang59/Composable-Diffusion)

  • [medium](https://towardsdatascience.com/hugging-face-just-released-the-diffusers-library-846f32845e65)

  • [yt](https://youtu.be/UzkdOg7wWmI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) | 17.01.2023 |
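
The shortest useful Diffusers example: load a text-to-image pipeline and sample; the checkpoint id is illustrative and fp16 assumes a CUDA GPU:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
image = pipe("an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```
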
| Sample Factory | One of the fastest RL libraries focused on very efficient synchronous and asynchronous implementations of policy gradients |

  • [Aleksei Petrenko](https://alex-petrenko.github.io/)
  • [Zhehui Huang](https://zhehui-huang.github.io/)
  • [Tushar Kumar](https://github.com/tushartk)
  • [Gaurav Sukhatme](http://robotics.usc.edu/~gaurav/)
  • [Vladlen Koltun](http://vladlen.info/)

| [![](https://img.shields.io/github/stars/alex-petrenko/sample-factory?style=social)](https://github.com/alex-petrenko/sample-factory)

  • [ICML](http://proceedings.mlr.press/v119/petrenko20a.html)

  • [arxiv](https://arxiv.org/abs/2006.11751)

  • [docs](https://www.samplefactory.dev/)

  • [git](https://github.com/alex-petrenko/faster-fifo)

  • [yt](https://youtu.be/lLG17LKKSZc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/alex-petrenko/sample-factory/blob/master/sf_examples/notebooks/samplefactory_hub_example.ipynb) | 17.01.2023 |
| Open-Assistant | Chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so |

  • [Andreas Köpf](https://github.com/andreaskoepf)
  • [Yannic Kilcher](https://github.com/yk)
  • [Huu Nguyen](https://github.com/ontocord)
  • [Christoph Schuhmann](http://christoph-schuhmann.de/)
  • [Keith Stevens](https://fozziethebeat.github.io/)
  • [Abdullah Barhoum](https://github.com/AbdBarho)
  • [Nguyen Minh Duc](https://github.com/notmd)
  • [Oliver Stanley](https://olliestanley.github.io/)
  • [James Melvin Ebenezer](https://github.com/melvinebenezer)

| [![](https://img.shields.io/github/stars/LAION-AI/Open-Assistant?style=social)](https://github.com/LAION-AI/Open-Assistant)

  • [arxiv](https://arxiv.org/abs/2203.02155)

  • [docs](https://projects.laion.ai/Open-Assistant/)

  • [hf](https://huggingface.co/OpenAssistant)

  • [medium](https://generativeai.pub/open-assistant-a-free-and-open-source-alternative-to-chatgpt-67d15229813)

  • [website](https://open-assistant.io/)

  • [yt](https://youtu.be/64Izfm24FKA), [yt](https://youtu.be/ddG2fM9i4Kk), [yt](https://youtu.be/FQIHLFLrTw0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/LAION-AI/Open-Assistant/blob/main/notebooks/data-augmentation/stackexchange-builder/stackexchange-builder.ipynb) | 14.01.2023 |
| panda-gym | Set of robotic environments based on the PyBullet physics engine and Gymnasium |

  • [Quentin Gallouédec](https://gallouedec.com/)
  • [Nicolas Cazin](https://github.com/NicolasCAZIN)
  • [Emmanuel Dellandréa](http://perso.ec-lyon.fr/emmanuel.dellandrea/)
  • [Liming Chen](https://sites.google.com/view/limingchen/accueil)

| [![](https://img.shields.io/github/stars/qgallouedec/panda-gym?style=social)](https://github.com/qgallouedec/panda-gym)

  • [arxiv](https://arxiv.org/abs/2106.13687)

  • [docs](https://panda-gym.readthedocs.io/en/latest/)

  • [pypi](https://pypi.org/project/panda-gym/)

  • [yt](https://youtu.be/BgvpoSP45hA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/qgallouedec/panda-gym/blob/master/examples/PickAndPlace.ipynb) | 02.01.2023 |
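
A random-agent sketch for panda-gym; the `-v3` env id suffix matches panda-gym 3.x and may differ in older releases:

```python
import gymnasium as gym
import panda_gym  # noqa: F401  (importing registers the Panda envs)

env = gym.make("PandaReach-v3")
observation, info = env.reset()
for _ in range(100):
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()
env.close()
```
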
| BANMo | Given multiple casual videos capturing a deformable object, BANMo reconstructs an animatable 3D model, including an implicit canonical 3D shape, appearance, skinning weights, and time-varying articulations, without pre-defined shape templates or registered cameras |

  • [Gengshan Yang](https://gengshan-y.github.io/)
  • [Minh Vo](https://minhpvo.github.io/)
  • [Natalia Neverova](https://nneverova.github.io/)
  • [Deva Ramanan](http://www.cs.cmu.edu/~deva/)
  • [Andrea Vedaldi](https://www.robots.ox.ac.uk/~vedaldi/)
  • [Hanbyul Joo](https://jhugestar.github.io/)

| [![](https://img.shields.io/github/stars/facebookresearch/banmo?style=social)](https://github.com/facebookresearch/banmo)

  • [arxiv](https://arxiv.org/abs/2112.12761)

  • [git](https://github.com/kwea123/nerf_pl), [git](https://github.com/gengshan-y/rigidmask), [git](https://github.com/ShichenLiu/SoftRas), [git](https://github.com/ThibaultGROUEIX/ChamferDistancePytorch)

  • [project](https://banmo-www.github.io/)

  • [yt](https://youtu.be/1NUa-yvFGA0), [yt](https://youtu.be/jDTy-liFoCQ)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1dQJn1vsuz0DkyRZbOA1SulkVQ0V1kMUP) | 30.12.2022 |
| tensor_parallel | Run large PyTorch models on multiple GPUs in one line of code with potentially linear speedup | [Andrei Panferov](https://blog.panferov.org/) | [![](https://img.shields.io/github/stars/BlackSamorez/tensor_parallel?style=social)](https://github.com/BlackSamorez/tensor_parallel)

  • [git](https://github.com/microsoft/DeepSpeed), [git](https://github.com/facebookresearch/fairscale), [git](https://github.com/NVIDIA/Megatron-LM), [git](https://github.com/tunib-ai/parallelformers), [git](https://github.com/alpa-projects/alpa)

  • [hf](https://huggingface.co/docs/transformers/model_doc/gpt2)

  • [kaggle](https://www.kaggle.com/code/blacksamorez/tensor-parallel-int4-llm/), [kaggle](https://www.kaggle.com/code/muellerzr/multi-gpu-and-accelerate)

  • [pypi](https://pypi.org/project/tensor-parallel/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/BlackSamorez/tensor_parallel/blob/master/examples/training_flan-t5-xl.ipynb) | 29.12.2022 |
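
The one-liner promised in the description, mirrored from the project README; the model id and two-GPU device list are illustrative:

```python
import tensor_parallel as tp
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = tp.tensor_parallel(model, ["cuda:0", "cuda:1"])  # shard across 2 GPUs

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"].to("cuda:0")
print(tokenizer.decode(model.generate(inputs, max_new_tokens=8)[0]))
```
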
| TPU | Reference models and tools for Cloud TPUs | [Google](https://cloud.google.com/) | [![](https://img.shields.io/github/stars/tensorflow/tpu?style=social)](https://github.com/tensorflow/tpu)

  • [website](https://cloud.google.com/tpu/)

  • [wiki](https://en.wikipedia.org/wiki/Tensor_Processing_Unit)

  • [yt](https://youtu.be/W7A-9MYvPwI), [yt](https://youtu.be/MXxN4fv01c8), [yt](https://youtu.be/FsxthdQ_sL4), [yt](https://youtu.be/zEOtG-ChmZE), [yt](https://youtu.be/kBjYK3K3P6M), [yt](https://youtu.be/8j1MWZGNoXM), [yt](https://youtu.be/hszd5UqnfLk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/keras_mnist_tpu.ipynb) | 20.12.2022 |
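
The standard TPU bring-up from the TensorFlow guide, as used in Colab; everything defined inside the strategy scope is replicated across the TPU cores:

```python
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():  # variables below are placed on the TPU cores
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```
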
| rliable | Library for reliable evaluation, even with a handful of runs, on reinforcement learning and machine learning benchmarks |

  • [Rishabh Agarwal](https://agarwl.github.io/)
  • [Max Schwarzer](https://scholar.google.com/citations?user=YmWRSvgAAAAJ)
  • [Pablo Castro](https://psc-g.github.io/)
  • [Aaron Courville](https://mila.quebec/en/directory/aaron-courville)
  • [Marc Bellemare](http://www.marcgbellemare.info/)

| [![](https://img.shields.io/github/stars/google-research/rliable?style=social)](https://github.com/google-research/rliable)

  • [arxiv](https://arxiv.org/abs/2108.13264)

  • [blog post](https://research.google/blog/rliable-towards-reliable-evaluation-reporting-in-reinforcement-learning/), [blog post](https://araffin.github.io/post/rliable/)

  • [neurips](https://proceedings.neurips.cc/paper/2021/hash/f514cec81cb148559cf475e7426eed5e-Abstract.html)

  • [podcast](https://podcasts.apple.com/dk/podcast/deep-reinforcement-learning-at-the-edge-of/id1116303051?i=1000551066163)

  • [poster](https://agarwl.github.io/rliable/pdfs/Precipice_poster.pdf)

  • [project](https://agarwl.github.io/rliable/)

  • [slides](https://agarwl.github.io/rliable/assets/slides_mlc.pdf)

  • [twitter](https://x.com/agarwl_/status/1432800830621687817)

  • [yt](https://youtu.be/XSY9JwqD-bw), [yt](https://youtu.be/gO33pSls-jI), [yt](https://youtu.be/HDyK3oNN2i0), [yt](https://youtu.be/mqcnHYwWzD8), [yt](https://youtu.be/E00gxHrHzZ4), [yt](https://youtu.be/M3OzJDAjz3o)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1a0pSD-1tWhMmeJeeoyZM1A-HCW3yf1xR) | 19.12.2022 |
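
An aggregate-metric sketch following rliable's README; the random (runs x tasks) score matrix is a stand-in for real evaluation results:

```python
import numpy as np
from rliable import library as rly
from rliable import metrics

scores = {"MyAlgo": np.random.rand(10, 26)}  # 10 runs x 26 tasks (illustrative)
aggregate_fn = lambda x: np.array([
    metrics.aggregate_iqm(x),      # interquartile mean
    metrics.aggregate_median(x),
])
point_estimates, interval_estimates = rly.get_interval_estimates(
    scores, aggregate_fn, reps=2000)  # stratified bootstrap CIs
print(point_estimates["MyAlgo"], interval_estimates["MyAlgo"])
```
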
| TF-Agents | A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning |

  • [Sergio Guadarrama](https://github.com/sguada)
  • [Anoop Korattikara](https://github.com/kbanoop)
  • [Oscar Ramirez](https://github.com/oars)
  • [Pablo Castro](https://psc-g.github.io/)
  • [Ethan Holly](https://github.com/eholly-g)
  • [Sam Fishman](http://sam.fish/)
  • [Ke Wang](https://scholar.google.com/citations?user=QRYX59sAAAAJ)
  • [Ekaterina Gonina](https://github.com/egonina)
  • [Neal Wu](https://twitter.com/WuNeal)
  • [Efi Kokiopoulou](https://github.com/efiko)
  • [Luciano Sbaiz](https://scholar.google.com/citations?user=fKBmhcUAAAAJ)
  • [Jamie Smith](https://scholar.google.com/citations?user=jk17mo8AAAAJ)
  • [Gábor Bartók](https://github.com/bartokg)
  • [Jesse Berent](https://www.linkedin.com/in/jesse-berent-a1b6875)
  • [Chris Harris](https://www.linkedin.com/in/charris)
  • [Vincent Vanhoucke](https://vincent.vanhoucke.com/)
  • [Eugene Brevdo](https://ebrevdo.github.io/)

| [![](https://img.shields.io/github/stars/tensorflow/agents?style=social)](https://github.com/tensorflow/agents)

  • [docs](https://www.tensorflow.org/agents/api_docs/python/tf_agents)

  • [medium](https://towardsdatascience.com/introduction-to-tf-agents-a-library-for-reinforcement-learning-in-tensorflow-68ab9add6ad6), [medium](https://medium.com/analytics-vidhya/tf-agents-a-flexible-reinforcement-learning-library-for-tensorflow-5f125420f64b)

  • [tf](https://www.tensorflow.org/agents)

  • [yt](https://youtu.be/2nKD6zFQ8xI), [yt](https://youtu.be/-TTziY7EmUA), [yt](https://youtu.be/52DTXidSVWc), [yt](https://youtu.be/U7g7-Jzj9qo), [yt](https://youtu.be/tAOApRQAgpc), [yt](https://youtu.be/X4eruXqNbDc), [yt](https://youtu.be/g0yDlAbi6Pc), [yt](https://youtu.be/VmZI_YkfPBM), [yt](https://youtu.be/7QFSziiAnxI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/0_intro_rl.ipynb) | 15.12.2022 |
| PyG | Library built upon PyTorch to easily write and train Graph Neural Networks for a wide range of applications related to structured data |

  • [Matthias Fey](https://rusty1s.github.io/#/)
  • [Jan Eric Lenssen](https://github.com/janericlenssen)

| [![](https://img.shields.io/github/stars/pyg-team/pytorch_geometric?style=social)](https://github.com/pyg-team/pytorch_geometric)

  • [arxiv](https://arxiv.org/abs/1903.02428), [arxiv](https://arxiv.org/abs/1801.07829), [arxiv](https://arxiv.org/abs/1609.02907), [arxiv](https://arxiv.org/abs/2003.03123), [arxiv](https://arxiv.org/abs/1905.05178), [arxiv](https://arxiv.org/abs/1706.08566), [arxiv](https://arxiv.org/abs/1907.10903), [arxiv](https://arxiv.org/abs/1905.07953)

  • [docs](https://pytorch-geometric.readthedocs.io/en/latest/)

  • [git](https://github.com/snap-stanford/ogb/tree/master/examples), [git](https://github.com/pyg-team/pyg-lib), [git](https://github.com/rusty1s/pytorch_scatter), [git](https://github.com/rusty1s/pytorch_sparse), [git](https://github.com/rusty1s/pytorch_cluster), [git](https://github.com/AntonioLonga/PytorchGeometricTutorial)

  • [neurips](https://papers.nips.cc/paper/2018/hash/e77dbaf6759253c7c6d0efc5690369c7-Abstract.html), [neurips](https://papers.nips.cc/paper/2017/hash/5dd9db5e033da9c6fb5ba83c7a7ebea9-Abstract.html), [neurips](https://nips.cc/virtual/2020/public/poster_3fe230348e9a12c13120749e3f9fa4cd.html)

  • [pt](https://pytorch.org/tutorials/beginner/basics/optimization_tutorial.html#full-implementation)

  • [yt](https://www.youtube.com/playlist?list=PLGMXrbDNfqTzqxB1IGgimuhtfAhGd8lHF), [yt](https://www.youtube.com/playlist?list=PLGMXrbDNfqTwPxitLVHEbT9Pd6-oR_cud), [yt](https://youtu.be/-UjytpbqX4A)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1h3-vJGRVloF5zStxL5I0rSy4ZUPNsjy8) | 08.12.2022 |
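
A toy PyG graph and a single graph-convolution layer, showing the `Data` container and COO `edge_index` conventions; sizes are illustrative:

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# 3 nodes, 4 directed edges stored as (source; target) rows in COO format.
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
x = torch.randn(3, 16)  # 16 features per node
data = Data(x=x, edge_index=edge_index)

conv = GCNConv(in_channels=16, out_channels=32)
out = conv(data.x, data.edge_index)
print(out.shape)  # torch.Size([3, 32])
```
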
| ruGPT3 | Example of inference with RuGPT3XL | [Anton Emelyanov](https://github.com/king-menin) | [![](https://img.shields.io/github/stars/ai-forever/ru-gpts?style=social)](https://github.com/ai-forever/ru-gpts)

  • [Christofari](https://sbercloud.ru/ru/christofari)

  • [git](https://github.com/microsoft/DeepSpeedExamples/tree/master/Megatron-LM)

  • [hf](https://huggingface.co/transformers/main_classes/model.html#transformers.generation_utils.GenerationMixin.generate)

  • [sparse attention](https://www.deepspeed.ai/tutorials/sparse-attention/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ai-forever/ru-gpts/blob/master/examples/ruGPT3XL_generation.ipynb) | 07.12.2022 |
| DSP theory | Theory of digital signal processing: signals, filtering (IIR, FIR, CIC, MAF), transforms (FFT, DFT, Hilbert, Z-transform), etc. |

  • [Alexander Kapitanov](https://github.com/hukenovs)
  • [Vladimir Fadeev](https://github.com/kirlf)
  • [Karina Kvanchiani](https://github.com/karinakvanchiani)
  • [Elizaveta Petrova](https://github.com/kleinsbotle)
  • [Andrei Makhliarchuk](https://github.com/anotherhelloworld)

| [![](https://img.shields.io/github/stars/hukenovs/dsp-theory?style=social)](https://github.com/hukenovs/dsp-theory)
  • [blog post](https://habr.com/ru/articles/460445/)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/hukenovs/dsp-theory/blob/master/src/dsp_theory_1_signals.ipynb) | 18.10.2022 |
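
In the spirit of the course, a tiny NumPy example: the FFT of a 50 Hz sine sampled at 1 kHz peaks at the 50 Hz bin:

```python
import numpy as np

fs = 1000                       # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)     # one second of samples
x = np.sin(2 * np.pi * 50 * t)  # 50 Hz sine

spectrum = np.abs(np.fft.rfft(x)) / len(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
print(freqs[spectrum.argmax()])  # 50.0
```
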
| Mubert | Prompt-based music generation via Mubert API | [Ilya Belikov](https://github.com/ferluht) | [![](https://img.shields.io/github/stars/MubertAI/Mubert-Text-to-Music?style=social)](https://github.com/MubertAI/Mubert-Text-to-Music)

  • [docs](https://mubert2.docs.apiary.io/)

  • [project](https://mubert.com/)

  • [yt](https://youtu.be/YJu0iXn-T_U), [yt](https://youtu.be/5UsaxJsFvAI), [yt](https://youtu.be/B0kkIpWifG4)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ferluht/Mubert-Text-to-Music/blob/main/Mubert_Text_to_Music.ipynb) | 18.10.2022 |
| RuDOLPH | A fast and light text-image-text transformer designed for quick and easy fine-tuning on a variety of tasks: from text-to-image generation and image classification to visual question answering and more |

  • [Alex Shonenkov](https://github.com/shonenkov)
  • [Misha Konstantinov](https://github.com/zeroshot-ai)

| [![](https://img.shields.io/github/stars/ai-forever/ru-dolph?style=social)](https://github.com/ai-forever/ru-dolph)

  • [arxiv](https://arxiv.org/abs/2005.14165), [arxiv](https://arxiv.org/abs/2102.12092), [arxiv](https://arxiv.org/abs/2103.00020)

  • [pypi](https://pypi.org/project/rudolph/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ai-forever/ru-dolph/blob/master/jupyters/RUDOLPH_tune_i2t_pl.ipynb) | 06.10.2022 |
| Batch RL | Offline RL using the DQN replay dataset comprising the entire replay experience of a DQN agent on 60 Atari 2600 games |

  • [Rishabh Agarwal](https://agarwl.github.io/)
  • [Dale Schuurmans](https://webdocs.cs.ualberta.ca/~dale/)
  • [Mohammad Norouzi](https://norouzi.github.io/)

| [![](https://img.shields.io/github/stars/google-research/batch_rl?style=social)](https://github.com/google-research/batch_rl)

  • [DQN](https://www.nature.com/articles/nature14236?wm=book_wap_0005)

  • [arxiv](https://arxiv.org/abs/1907.04543), [arxiv](https://arxiv.org/abs/1709.06009)

  • [blog post](https://ai.googleblog.com/2020/04/an-optimistic-perspective-on-offline.html)

  • [data](https://console.cloud.google.com/storage/browser/atari-replay-datasets), [data](https://research.google/resources/datasets/dqn-replay/)

  • [git](https://github.com/openai/atari-py/tree/0.2.5/atari_py/atari_roms), [git](https://github.com/mgbellemare/Arcade-Learning-Environment), [git](https://github.com/mila-iqia/SGI/blob/master/src/offline_dataset.py), [git](https://github.com/kzl/decision-transformer/tree/master/atari)

  • [project](https://offline-rl.github.io/)

  • [slides](https://docs.google.com/presentation/d/1ROltXr6FIeYKrnGl0tKHGWI0pL4Zo8CnvAK2-cdpQyY)

  • [talk](https://slideslive.com/38928373/an-optimistic-perspective-on-offline-deep-reinforcement-learning)

  • [tf](https://www.tensorflow.org/install/install_linux)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1ktlNni_vwFpFtCgUez-RHW0OdGc2U_Wv) | 04.10.2022 |
| EfficientDet | New family of object detectors, called EfficientDet, which consistently achieve much better efficiency than prior art across a wide spectrum of resource constraints |

  • [Mingxing Tan](https://scholar.google.com/citations?user=6POeyBoAAAAJ)
  • [Ruoming Pang](https://scholar.google.com/citations?user=1fsmwB8AAAAJ)
  • [Quoc Le](https://cs.stanford.edu/~quocle/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.01079)](https://doi.org/10.1109/CVPR42600.2020.01079) [![](https://img.shields.io/github/stars/google/automl?style=social)](https://github.com/google/automl/tree/master/efficientdet)

  • [arxiv](https://arxiv.org/abs/1911.09070), [arxiv](https://arxiv.org/abs/2103.13886), [arxiv](https://arxiv.org/abs/1905.11946), [arxiv](https://arxiv.org/abs/1804.02767)

  • [blog post](https://ai.googleblog.com/2020/04/efficientdet-towards-scalable-and.html)

  • [medium](https://medium.com/tensorflow/fitting-larger-networks-into-memory-583e3c758ff9)

  • [tf](https://tfhub.dev/s?network-architecture=efficientdet)

  • [tutorial](https://cloud.google.com/tpu/docs/tutorials/efficientnet)

  • [yt](https://youtu.be/yJg1FX2goCo), [yt](https://youtu.be/OsA3zH5NKYc), [yt](https://youtu.be/qZobxWXlJ0g)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/automl/blob/master/efficientdet/tf2/tutorial.ipynb) | 27.09.2022 |
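
An inference sketch using the TF Hub EfficientDet collection linked above; the zero tensor is a placeholder for a real uint8 image batch:

```python
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/efficientdet/d0/1")
image = tf.zeros([1, 512, 512, 3], dtype=tf.uint8)  # placeholder input batch
outputs = detector(image)  # dict of detection tensors
print(outputs["detection_boxes"].shape)  # [1, num_detections, 4]
```
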
| RL Games | High-performance RL library |

  • [Denys Makoviichuk](https://github.com/Denys88)
  • [Viktor Makoviychuk](https://github.com/ViktorM)

| [![](https://img.shields.io/github/stars/Denys88/rl_games?style=social)](https://github.com/Denys88/rl_games)

  • [discord](https://discord.gg/hnYRq7DsQh)

  • [git](https://github.com/isaac-sim/IsaacGymEnvs), [git](https://github.com/NVlabs/cule), [git](https://github.com/NVlabs/tiny-cuda-nn)

  • [pypi](https://pypi.org/project/rl-games/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Denys88/rl_games/blob/master/notebooks/brax_training.ipynb) | 27.09.2022 |
| ACME | A library of reinforcement learning components and agents |

  • [Matt Hoffman](https://www.mwhoffman.com/)
  • [Bobak Shahriari](https://github.com/bshahr)
  • [John Aslanides](https://www.aslanides.io/)
  • [Gabriel Barth-Maron](https://github.com/fastturtle)
  • [Feryal Behbahani](https://feryal.github.io/)
  • [Tamara Norman](https://github.com/tamaranorman)
  • [Abbas Abdolmaleki](https://scholar.google.com/citations?user=cCYTVWQAAAAJ)
  • [Albin Cassirer](https://github.com/acassirer)
  • [Fan Yang](https://github.com/ddmbr)
  • [Kate Baumli](https://github.com/katebaumli)
  • [Sarah Henderson](https://www.linkedin.com/in/sarah-henderson-agilecoach/)
  • [Alex Novikov](https://scholar.google.ru/citations?user=jMUkLqwAAAAJ)
  • [Sergio Gómez Colmenarejo](https://scholar.google.ru/citations?user=0Dkf68EAAAAJ)
  • [Serkan Cabi](https://scholar.google.ru/citations?&user=l-HhJaUAAAAJ)
  • [Caglar Gulcehre](https://www.caglarg.com/)
  • [Tom Le Paine](http://tomlepaine.github.io/)
  • [Andrew Cowie](https://scholar.google.ru/citations?&user=aTvi5mUAAAAJ)
  • [Ziyu Wang](https://ziyuw.github.io/)
  • [Bilal Piot](https://scholar.google.ru/citations?&user=fqxNUREAAAAJ)
  • [Nando de Freitas](https://github.com/nandodf)

| [![](https://img.shields.io/github/stars/deepmind/acme?style=social)](https://github.com/deepmind/acme)

  • [arxiv](https://arxiv.org/abs/2006.00979)

  • [blog post](https://www.deepmind.com/publications/acme-a-new-framework-for-distributed-reinforcement-learning)

  • [docs](https://dm-acme.readthedocs.io/en/latest/)

  • [git](https://github.com/deepmind/dm_env)

  • [yt](https://youtu.be/NUwDr42bPOw), [yt](https://youtu.be/J1XCWjuyRaI), [yt](https://youtu.be/pFMuQWpHI5k)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/acme/blob/master/examples/tutorial.ipynb) | 26.09.2022 |
| RWKV | Reinventing RNNs for the Transformer Era |

  • [Bo Peng](https://github.com/BlinkDL)
  • [Eric Alcaide](https://hypnopump.github.io/)
  • [Quentin Anthony](https://quentin-anthony.github.io/)
  • [Alon Albalak](https://alon-albalak.github.io/)
  • [Samuel Arcadinho](https://github.com/SSamDav)
  • [Matteo Grella](http://www.matteogrella.com/)
  • [Kranthi Kiran](https://kranthigv.github.io/)
  • [Haowen Hou](https://github.com/howard-hou)
  • [Przemyslaw Kazienko](https://kazienko.eu/en)
  • [Jan Kocon](https://github.com/KoconJan)
  • [Bartlomiej Koptyra](https://github.com/bkoptyra)
  • [Ipsit Mantri](https://ipsitmantri.github.io/)
  • [Ferdinand Mom](https://3outeille.github.io/)
  • [Xiangru Tang](https://github.com/tangxiangru)
  • [Johan Wind](https://johanwind.github.io/)
  • [Stanisław Woźniak](https://www.researchgate.net/profile/Stanislaw-Wozniak-3)
  • [Qihang Zhao](https://www.researchgate.net/profile/Qihang-Zhao-2)
  • [Peng Zhou](https://pengzhou.sites.ucsc.edu/)
  • [Jian Zhu](https://lingjzhu.github.io/)
  • [Rui-Jie Zhu](https://scholar.google.com/citations?user=08ITzJsAAAAJ)

| [![](https://img.shields.io/github/stars/BlinkDL/RWKV-LM?style=social)](https://github.com/BlinkDL/RWKV-LM)

  • [arxiv](https://arxiv.org/abs/2305.13048), [arxiv](https://arxiv.org/abs/2105.14103), [arxiv](https://arxiv.org/abs/2002.05202)

  • [data](https://dldata-public.s3.us-east-2.amazonaws.com/simplebooks.zip)

  • [demo](https://josephrocca.github.io/rwkv-v4-web/demo/)

  • [discord](https://discord.gg/bDSBUMeFpc)

  • [git](https://github.com/saharNooby/rwkv.cpp), [git](https://github.com/cgisky1980/ai00_rwkv_server), [git](https://github.com/harrisonvanderbyl/rwkv-cpp-cuda), [git](https://github.com/Blealtan/RWKV-LM-LoRA), [git](https://github.com/TheRamU/Fay/blob/main/README_EN.md), [git](https://github.com/ridgerchu/SpikeGPT), [git](https://github.com/BlinkDL/RWKV-v2-RNN-Pile/tree/main/RWKV-v3), [git](https://github.com/BlinkDL/SmallInitEmb), [git](https://github.com/BlinkDL/RWKV-CUDA), [git](https://github.com/BlinkDL/minGPT-tuned)

  • [hf](https://huggingface.co/BlinkDL), [hf](https://huggingface.co/BlinkDL/clip-guided-binary-autoencoder)

  • [reddit](https://www.reddit.com/r/MachineLearning/comments/umq908/r_rwkvv2rnn_a_parallelizable_rnn_with/)

  • [twitter](https://twitter.com/BlinkDL_AI), [twitter](https://twitter.com/HochreiterSepp/status/1524270961314484227)

  • [website](https://www.rwkv.com/)

  • [yt](https://youtu.be/x8pW19wKfXQ), [yt](https://youtu.be/B3Qa2rRsaXo), [yt](https://youtu.be/w-xydM6C6Qc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1F7tZoPZaWJf1fsCmZ5tjw6sYHiFOYVWM) | 21.09.2022 |
| NetKet | Open-source project delivering cutting-edge methods for the study of many-body quantum systems with artificial neural networks and machine learning techniques |

  • [Filippo Vicentini](https://filippovicentini.com/)
  • [Damian Hofmann](https://github.com/femtobit)
  • [Attila Szabó](https://github.com/attila-i-szabo)
  • [Dian Wu](https://github.com/wdphy16)
  • [Christopher Roth](https://github.com/chrisrothUT)
  • [Clemens Giuliani](https://github.com/inailuig)
  • [Gabriel Pescia](https://github.com/gpescia)
  • [Jannes Nys](https://github.com/jwnys)
  • [Vladimir Vargas-Calderón](https://github.com/VolodyaCO)
  • [Nikita Astrakhantsev](https://github.com/nikita-astronaut)
  • [Giuseppe Carleo](https://github.com/gcarleo)
  • [Kenny Choo](https://github.com/kchoo1118)
  • [James Smith](https://jamesetsmith.github.io/)
  • [Tom Westerhout](https://github.com/twesterhout)
  • [Fabien Alet](https://github.com/fabienalet)
  • [Emily Davis](https://github.com/emilyjd)
  • [Stavros Efthymiou](https://github.com/stavros11)
  • [Ivan Glasser](https://www.researchgate.net/profile/Ivan-Glasser)
  • [Sheng-Hsuan Lin](https://shhslin.github.io/)
  • [Marta Mauri](https://github.com/martamau)
  • [Mazzola Guglielmo](https://www.ics.uzh.ch/en/research/research-groups/Guglielmo-Mazzola0.html)
  • [Christian Mendl](http://christian.mendl.net/)
  • [Evert Nieuwenburg](https://evert.info/)
  • [Ossian O'Reilly](https://github.com/ooreilly)
  • [Hugo Théveniaut](https://github.com/theveniaut)
  • [Giacomo Torlai](https://github.com/GTorlai)
  • [Alexander Wietek](https://awietek.github.io/)

| [![](https://img.shields.io/github/stars/netket/netket?style=social)](https://github.com/netket/netket)

  • [arxiv](https://arxiv.org/abs/2112.10526)

  • [docs](https://netket.readthedocs.io/en/latest/index.html)

  • [git](https://github.com/mpi4jax/mpi4jax), [git](https://github.com/cloudhan/jax-windows-builder)

  • [website](https://www.netket.org/)

  • [yt](https://youtu.be/Ryz-o71tuy8)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/PhilipVinc/Lectures/blob/main/2202_NetKet/01_intro.ipynb) | 15.09.2022 |
| Stable Diffusion | A latent text-to-image diffusion model |

  • [Robin Rombach](https://github.com/rromb)
  • [Andreas Blattmann](https://github.com/ablattmann)
  • [Dominik Lorenz](https://github.com/qp-qp)
  • [Patrick Esser](https://github.com/pesser)
  • [Björn Ommer](https://ommer-lab.com/people/ommer/)

| [![](https://img.shields.io/github/stars/CompVis/stable-diffusion?style=social)](https://github.com/CompVis/stable-diffusion)

  • [arxiv](https://arxiv.org/abs/2112.10752), [arxiv](https://arxiv.org/abs/2205.11487), [arxiv](https://arxiv.org/abs/2207.12598), [arxiv](https://arxiv.org/abs/2202.09778), [arxiv](https://arxiv.org/abs/2108.01073)

  • [git](https://github.com/christophschuhmann/improved-aesthetic-predictor), [git](https://github.com/ShieldMnt/invisible-watermark), [git](https://github.com/openai/guided-diffusion), [git](https://github.com/lucidrains/denoising-diffusion-pytorch), [git](https://github.com/lucidrains/x-transformers)

  • [hf](https://huggingface.co/CompVis), [hf](https://huggingface.co/datasets/laion/laion2B-en), [hf](https://huggingface.co/datasets/laion/laion-high-resolution)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/CompVis/stable-diffusion/blob/main/scripts/latent_imagenet_diffusion.ipynb) | 10.08.2022 |
| Deep-MAC | Demo of novel class segmentation with Deep-MAC | [Vighnesh Birodkar](http://vighneshbirodkar.github.io/) |

  • [arxiv](https://arxiv.org/abs/2104.00613)

  • [pwc](https://paperswithcode.com/method/deep-mac)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/models/blob/master/research/object_detection/colab_tutorials/deepmac_colab.ipynb) | 10.08.2022 |
| NL-Augmenter | A collaborative effort intended to add transformations of datasets dealing with natural language |

  • [Aadesh Gupta](https://github.com/aadesh11)
  • [Timothy Sum Hon Mun](https://github.com/timothy22000)
  • [Aditya Srivatsa](https://github.com/kvadityasrivatsa)
  • [Xudong Shen](https://github.com/XudongOliverShen)
  • others
  • [Juan Diego Rodriguez](https://github.com/juand-r)
  • [Ashish Shrivastava](https://github.com/ashish3586)
  • [Nagender Aneja](https://researchid.co/naneja)
  • [Zijie Wang](https://zijie.wang/)
  • [Yiwen Shi](https://github.com/Yiwen-Shi)
  • [Afnan Mir](https://github.com/afnanmmir)
  • [William Soto](https://github.com/sotwi)
  • [Chandan Singh](https://csinva.io/)
  • [Claude Roux](https://github.com/ClaudeRoux)
  • [Abinaya Mahendiran](https://github.com/AbinayaM02)
  • [Anna Shvets](https://github.com/asnota)
  • [Kaustubh Dhole](https://github.com/kaustubhdhole)
  • [Bryan Wilie](https://github.com/bryanwilie)
  • [Jamie Simon](https://james-simon.github.io/)
  • [Mukund Varma](https://github.com/MukundVarmaT)
  • [Sang Han](https://github.com/jjangsangy)
  • [Denis Kleyko](https://github.com/denkle)
  • [Samuel Cahyawijaya](https://github.com/SamuelCahyawijaya)
  • [Filip Cornell](https://github.com/Filco306)
  • [Tanay Dixit](https://tanay2001.github.io/)
  • [Connor Boyle](https://github.com/boyleconnor)
  • [Genta Indra Winata](https://gentawinata.com/)
  • [Seungjae Ryan Lee](https://github.com/seungjaeryanlee)
  • [Marcin Namysl](https://github.com/mnamysl)
  • [Roman Sitelew](https://github.com/RomanPlusPlus)
  • [Zhenhao Li](https://zhenhaoli.net/)
  • [Fiona Tan](https://tanfiona.github.io/)

| [![](https://img.shields.io/github/stars/GEM-benchmark/NL-Augmenter?style=social)](https://github.com/GEM-benchmark/NL-Augmenter)

  • [arxiv](https://arxiv.org/abs/2112.02721)

  • [website](https://gem-benchmark.com/nl_augmenter)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/GEM-benchmark/NL-Augmenter/blob/main/notebooks/Write_a_sample_transformation.ipynb) | 06.08.2022 |
| XManager | Framework for managing machine learning experiments | [Andrew Chen](https://github.com/andrewluchen) | [![](https://img.shields.io/github/stars/google-deepmind/xmanager?style=social)](https://github.com/google-deepmind/xmanager)

  • [pypi](https://pypi.org/project/xmanager/)

  • [slides](https://storage.googleapis.com/gresearch/xmanager/deepmind_xmanager_slides.pdf)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google-deepmind/xmanager/blob/main/colab_codelab.ipynb) | 29.07.2022 |
| Accelerate | A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision | [Hugging Face](https://huggingface.co/) | [![](https://img.shields.io/github/stars/huggingface/accelerate?style=social)](https://github.com/huggingface/accelerate)
  • [docs](https://huggingface.co/docs/accelerate/index)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/accelerate/simple_nlp_example.ipynb) | 27.07.2022 |
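
For the Accelerate entry above, a minimal training-loop sketch on a toy regression problem: `prepare()` wraps the model, optimizer and dataloader for whatever device setup is detected, and `accelerator.backward()` stands in for `loss.backward()`:

```python
# Minimal Accelerate sketch on synthetic data.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # detects CPU/GPU/TPU and precision settings

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randn(64, 1)),
                    batch_size=8)

# prepare() moves everything to the detected device / distributed setup
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```
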
| YOLOv5 on Custom Objects | This notebook shows training on your own custom objects | [Jacob Solawetz](https://blog.roboflow.com/author/jacob/) |

  • [blog post](https://blog.roboflow.com/how-to-train-yolov5-on-a-custom-dataset/)

  • [data](https://public.roboflow.ai/object-detection/bccd)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1gDZ2xcTOgR39tGGs-EZ6i3RTs16wmzZQ) | 20.07.2022 |
| MindsEye | Graphical user interface built to run multimodal AI art models for free from Google Colab, without needing to edit a single line of code or know any programming |

  • [multimodal.art](https://multimodal.art/)
  • [João Paulo Apolinário Passos](http://www.apolinariopassos.com.br/portfolio/)

| [![](https://img.shields.io/github/stars/multimodalart/mindseye?style=social)](https://github.com/multimodalart/mindseye)

  • [git](https://github.com/openai/guided-diffusion)

  • [project](https://multimodal.art/mindseye)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1cg0LZ5OfN9LAIB37Xq49as0fSJxcKtC5) | 06.07.2022 |
| py-irt | Fitting Item Response Theory models using variational inference |

  • [John Lalor](https://jplalor.github.io/)
  • [Hong Yu](https://scholar.google.com/citations?user=TyXe64wAAAAJ)
  • [Pedro Rodriguez](https://www.pedro.ai/)
  • [Joe Barrow](https://jbarrow.ai/)
  • others
  • [Alexander Hoyle](https://alexanderhoyle.com/)
  • [Robin Jia](https://robinjia.github.io/)
  • [Jordan Boyd-Graber](https://github.com/ezubaric)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.18653/v1/2021.acl-long.346)](https://doi.org/10.18653/v1/2021.acl-long.346) [![](https://img.shields.io/github/stars/nd-ball/py-irt?style=social)](https://github.com/nd-ball/py-irt)

  • [arxiv](https://arxiv.org/abs/1908.11421)

  • [paper](https://www.frontiersin.org/articles/10.3389/fpsyg.2016.01422/full)

  • [yt](https://youtu.be/akUxtt21Mlc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/nd-ball/py-irt/blob/master/examples/py-irt_example.ipynb) | 30.06.2022 |
| BIG-bench | A collaborative benchmark intended to probe large language models and extrapolate their future capabilities |

  • [Jaehoon Lee](https://jaehlee.github.io/)
  • [Jascha Sohl-Dickstein](http://www.sohldickstein.com/)
  • [Vinay Ramasesh](https://ramasesh.github.io/)
  • [Sajant Anand](https://github.com/sajantanand)
  • others
  • [Alicia Parrish](https://aliciaparrish.com/)
  • [Ethan Dyer](https://github.com/ethansdyer)
  • [Liam Dugan](http://liamdugan.com/)
  • [Dieuwke Hupkes](https://github.com/dieuwkehupkes)
  • [Daniel Freeman](https://github.com/cdfreeman-google)
  • [Guy Gur-Ari](https://github.com/guygurari)
  • [Aitor Lewkowycz](https://github.com/lewkowycz)

| [![](https://img.shields.io/github/stars/google/BIG-bench?style=social)](https://github.com/google/BIG-bench)

  • [API](https://google.github.io/BIG-bench/docs/html/bigbench/index.html)

  • [arxiv](https://arxiv.org/abs/2206.04615)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/BIG-bench/blob/master/notebooks/colab_examples.ipynb) | 28.06.2022 |
| HuggingArtists | Choose your favorite artist and train a language model to write new lyrics based on their unique voice | [Aleksey Korshuk](https://github.com/AlekseyKorshuk) | [![](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
  • [hf](https://huggingface.co/spaces/AlekseyKorshuk/huggingartists), [hf](https://huggingface.co/huggingartists)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb) | 25.06.2022 |
| Introduction to the TensorFlow Models NLP library | You will learn how to build transformer-based models for common NLP tasks including pretraining, span labelling and classification, using the building blocks from the NLP modeling library | [Chen Chen](https://github.com/chenGitHuber) | [![](https://img.shields.io/github/stars/tensorflow/models?style=social)](https://github.com/tensorflow/models/tree/master/official/nlp/modeling)
  • [arxiv](https://arxiv.org/abs/1810.04805)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/models/blob/master/official/colab/nlp/nlp_modeling_library_intro.ipynb) | 22.06.2022 |
| Cirq | A python framework for creating, editing, and invoking Noisy Intermediate Scale Quantum circuits |

  • [Balint Pato](https://refactorium.com/)
  • [Matthew Harrigan](https://mpharrigan.com/)
  • [Animesh Sinha](https://github.com/AnimeshSinha1309)
  • [Matthew Neeley](https://github.com/maffoo)
  • others
  • [Dave Bacon](https://dabacon.org/)
  • [Matteo Pompili](https://github.com/matpompili)
  • [Michael Broughton](https://github.com/MichaelBroughton)

| [![](https://img.shields.io/github/stars/quantumlib/Cirq?style=social)](https://github.com/quantumlib/Cirq)

  • [wiki](https://en.wikipedia.org/wiki/Quantum_logic_gate#Hadamard_gate)

  • [yt](https://youtu.be/16ZfkPRVf2w)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/basics.ipynb) | 21.06.2022 |
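
For the Cirq entry above, a minimal sketch in the spirit of the basics tutorial: a one-qubit circuit with a Hadamard gate, sampled on the built-in simulator:

```python
# Minimal Cirq sketch: Hadamard + measurement on one qubit.
import cirq

q = cirq.GridQubit(0, 0)
circuit = cirq.Circuit([cirq.H(q), cirq.measure(q, key="m")])
result = cirq.Simulator().run(circuit, repetitions=100)
print(result.histogram(key="m"))  # roughly 50/50 counts of 0 and 1
```
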
| CLIP-as-service | A low-latency, high-scalability service for embedding images and text | [Han Xiao](https://hanxiao.io/) | [![](https://img.shields.io/github/stars/jina-ai/clip-as-service?style=social)](https://github.com/jina-ai/clip-as-service)

  • [data](https://sites.google.com/view/totally-looks-like-dataset)

  • [git](https://github.com/jina-ai/docarray)

  • [website](https://clip-as-service.jina.ai/)

  • [yt](https://www.youtube.com/playlist?list=PL3UBBWOUVhFYRUa_gpYYKBqEAkO4sxmne), [yt](https://www.youtube.com/c/jina-ai)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/jina-ai/clip-as-service/blob/main/docs/hosting/cas-on-colab.ipynb) | 19.06.2022 |
| Jina | MLOps framework that empowers anyone to build cross-modal and multi-modal applications on the cloud | [Han Xiao](https://hanxiao.io/) | [![](https://img.shields.io/github/stars/jina-ai/jina?style=social)](https://github.com/jina-ai/jina)

  • [data](https://sites.google.com/view/totally-looks-like-dataset)

  • [docs](https://docs.jina.ai/)

  • [git](https://github.com/jina-ai/example-grafana-prometheus/blob/main/grafana-dashboards/flow.json)

  • [hub](https://hub.jina.ai/)

  • [yt](https://www.youtube.com/playlist?list=PL3UBBWOUVhFYRUa_gpYYKBqEAkO4sxmne), [yt](https://www.youtube.com/c/jina-ai)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/jina-ai/jina/blob/master/docs/Using_Jina_on_Colab.ipynb) | 11.06.2022 |
| MMRotate | Toolbox for rotated object detection based on PyTorch |

  • [Yue Zhou](https://zytx121.github.io/)
  • [Xue Yang](https://yangxue0827.github.io/)
  • [Gefan Zhang](https://github.com/zhanggefan)
  • [Jiabao Wang](https://jbwang1997.github.io/)
  • others
  • [Yanyi Liu](https://github.com/liuyanyi)
  • [Liping Hou](https://scholar.google.com/citations?user=XoEzZukAAAAJ)
  • [Xue Jiang](https://dl.acm.org/profile/99659833933)
  • [Xingzhao Liu](https://dl.acm.org/profile/81430639972)
  • [Junchi Yan](https://thinklab.sjtu.edu.cn/)
  • [Chengqi Lyu](https://scholar.google.com/citations?user=kV3WvXcAAAAJ)
  • [Wenwei Zhang](https://zhangwenwei.cn/)
  • [Kai Chen](https://chenkai.site/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3503161.3548541)](https://doi.org/10.1145/3503161.3548541) [![](https://img.shields.io/github/stars/open-mmlab/mmrotate?style=social)](https://github.com/open-mmlab/mmrotate)

  • [arxiv](https://arxiv.org/abs/2204.13317)

  • [docs](https://mmrotate.readthedocs.io/en/latest/)

  • [git](https://github.com/open-mmlab/mmcv)

  • [pwc](https://paperswithcode.com/sota/real-time-instance-segmentation-on-mscoco?p=rtmdet-an-empirical-study-of-designing-real), [pwc](https://paperswithcode.com/sota/object-detection-in-aerial-images-on-hrsc2016?p=rtmdet-an-empirical-study-of-designing-real), [pwc](https://paperswithcode.com/sota/object-detection-in-aerial-images-on-dota-1?p=rtmdet-an-empirical-study-of-designing-real)

  • [pypi](https://pypi.org/project/mmrotate)

  • [website](https://openmmlab.com/)

  • [yt](https://youtu.be/hKZUV0AySNk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/open-mmlab/mmrotate/blob/main/demo/MMRotate_Tutorial.ipynb) | 10.06.2022 |
| Aesthetics Predictor | A linear estimator on top of CLIP to predict the aesthetic quality of pictures | [LAION AI](https://laion.ai/) | [![](https://img.shields.io/github/stars/LAION-AI/aesthetic-predictor?style=social)](https://github.com/LAION-AI/aesthetic-predictor)

  • [blog post](https://laion.ai/blog/laion-aesthetics/)

  • [git](https://github.com/rom1504/embedding-reader/blob/main/examples/aesthetic_inference.py)

  • [kaggle](https://www.kaggle.com/discussions/general/464229)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/LAION-AI/aesthetic-predictor/blob/main/asthetics_predictor.ipynb) | 04.06.2022 |
| Flashlight | Fast, flexible machine learning library written entirely in C++ |

  • [Jacob Kahn](https://jacobkahn.me/)
  • [Vineel Pratap](https://github.com/vineelpratap)
  • [Tatiana Likhomanenko](https://github.com/tlikhomanenko)
  • [Qiantong Xu](https://github.com/xuqiantong)
  • others
  • [Awni Hannun](https://awnihannun.com/)
  • [Jeff Cai](https://ieeexplore.ieee.org/author/37086866180)
  • [Paden Tomasello](https://github.com/padentomasello)
  • [Ann Lee](https://scholar.google.com/citations?user=Am6PakYAAAAJ)
  • [Edouard Grave](https://github.com/EdouardGrave)
  • [Gilad Avidov](https://github.com/avidov)
  • [Benoit Steiner](http://bsteiner.info/)
  • [Vitaliy Liptchinsky](https://scholar.google.com/citations?user=zl4dA-gAAAAJ)
  • [Gabriel Synnaeve](https://syhw.github.io/)
  • [Ronan Collobert](https://ronan.collobert.com/)

| [![](https://img.shields.io/github/stars/flashlight/flashlight?style=social)](https://github.com/flashlight/flashlight)

  • [arxiv](https://arxiv.org/abs/2201.12465)

  • [docker](https://hub.docker.com/r/flml/flashlight/tags?page=1&ordering=last_updated&name=cuda-latest)

  • [docs](https://fl.readthedocs.io/en/latest/)

  • [git](https://github.com/arrayfire/arrayfire), [git](https://github.com/microsoft/vcpkg), [git](https://github.com/arrayfire/arrayfire-ml/), [git](https://github.com/nvidia/cub), [git](https://github.com/USCiLab/cereal), [git](https://github.com/nothings/stb), [git](https://github.com/facebookincubator/gloo), [git](https://github.com/oneapi-src/oneDNN), [git](https://github.com/google/glog), [git](https://github.com/gflags/gflags), [git](https://github.com/flashlight/text)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/flashlight/flashlight/blob/master/flashlight/app/asr/tutorial/notebooks/FinetuneCTC.ipynb) | 01.06.2022 |
| RL Unplugged | Suite of benchmarks for offline reinforcement learning |

  • [Caglar Gulcehre](https://www.caglarg.com/)
  • [Ziyu Wang](https://ziyuw.github.io/)
  • [Alexander Novikov](https://scholar.google.com/citations?user=jMUkLqwAAAAJ)
  • [Tom Le Paine](http://tomlepaine.github.io/)
  • others
  • [Sergio Gómez Colmenarejo](https://scholar.google.com/citations?user=0Dkf68EAAAAJ)
  • [Konrad Żołna](https://github.com/kondiz)
  • [Rishabh Agarwal](https://agarwl.github.io/)
  • [Josh Merel](https://sites.google.com/site/jsmerel/)
  • [Daniel Mankowitz](https://danielmankowitz.wixsite.com/danielm)
  • [Cosmin Paduraru](https://scholar.google.com/citations?user=oz4Ca9AAAAAJ)
  • [Gabriel Dulac-Arnold](http://gabe.squirrelsoup.net/)
  • [Jerry Li](https://github.com/jerryli27)
  • [Mohammad Norouzi](https://norouzi.github.io/)
  • [Matt Hoffman](https://www.mwhoffman.com/)
  • [Ofir Nachum](https://scholar.google.com/citations?user=C-ZlBWMAAAAJ)
  • [George Tucker](https://sites.google.com/view/gjt)
  • [Nicolas Heess](https://scholar.google.com/citations?user=79k7bGEAAAAJ)
  • [Nando de Freitas](https://github.com/nandodf)

| [![](https://img.shields.io/github/stars/deepmind/deepmind-research?style=social)](https://github.com/deepmind/deepmind-research/tree/master/rl_unplugged)

  • [arxiv](https://arxiv.org/abs/2006.13888), [arxiv](https://arxiv.org/abs/1907.04543), [arxiv](https://arxiv.org/abs/1709.06009), [arxiv](https://arxiv.org/abs/1811.09656), [arxiv](https://arxiv.org/abs/1811.11711), [arxiv](https://arxiv.org/abs/1909.12238), [arxiv](https://arxiv.org/abs/1911.09451), [arxiv](https://arxiv.org/abs/1801.00690), [arxiv](https://arxiv.org/abs/2003.11881), [arxiv](https://arxiv.org/abs/2103.09575)

  • [data](https://console.cloud.google.com/storage/browser/rl_unplugged)

  • [git](https://github.com/deepmind/lab), [git](https://github.com/google-research/realworldrl_suite#installation)

  • [yt](https://youtu.be/n8yNYzbUMJ0)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/deepmind-research/blob/master/rl_unplugged/dmlab_r2d2.ipynb) | 26.05.2022 |
| Scenic | Codebase with a focus on research around attention-based models for computer vision |

  • [Mostafa Dehghani](https://www.mostafadehghani.com/)
  • [Alexey Gritsenko](https://github.com/AlexeyG)
  • [Anurag Arnab](https://github.com/anuragarnab)
  • [Matthias Minderer](https://matthias.minderer.net/)
  • [Yi Tay](https://vanzytay.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.02070)](https://doi.org/10.1109/CVPR52688.2022.02070) [![](https://img.shields.io/github/stars/google-research/scenic?style=social)](https://github.com/google-research/scenic)

  • [arxiv](https://arxiv.org/abs/2110.11403)

  • [medium](https://medium.com/syncedreview/google-open-sources-scenic-a-jax-library-for-rapid-computer-vision-model-prototyping-and-894dbdeddbae)

  • [reddit](https://www.reddit.com/r/deeplearning/comments/qgyjck/r_google_opensources_scenic_a_jax_library_for/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google-research/scenic/blob/main/scenic/common_lib/colabs/scenic_playground.ipynb) | 04.05.2022 |
| Text generation with RNN | This tutorial demonstrates how to generate text using a character-based RNN | [Anirudh Dubey](https://github.com/anirudh161) |

  • [link](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)

  • [pwc](https://paperswithcode.com/task/text-generation)

  • [tf](https://www.tensorflow.org/text/tutorials/text_generation)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/text_generation.ipynb) | 03.05.2022 |
| CLIPDraw | Synthesize drawings to match a text prompt |

  • [Kevin Frans](https://www.kvfrans.com/)
  • [Lisa Soros](https://scholar.google.com/citations?user=iUkpvMUAAAAJ)
  • [Olaf Witkowski](https://olafwitkowski.com/)

| [![](https://img.shields.io/github/stars/kvfrans/clipdraw?style=social)](https://github.com/kvfrans/clipdraw)

  • [arxiv](https://arxiv.org/abs/2106.14843), [arxiv](https://arxiv.org/abs/1508.06576), [arxiv](https://arxiv.org/abs/2105.00162)

  • [blog post](https://kvfrans.com/clipdraw-exploring-text-to-drawing-synthesis/)

  • [git](https://github.com/BachiLi/diffvg/blob/master/apps/painterly_rendering.py)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/kvfrans/clipdraw/blob/main/clipdraw.ipynb) | 29.04.2022 |
| CodeGen | Family of open-source models for program synthesis |

  • [Erik Nijkamp](https://eriknijkamp.com/)
  • [Bo Pang](https://scholar.google.com/citations?user=s9fNEVEAAAAJ)
  • [Hiroaki Hayashi](https://hiroakih.me/)
  • [Lifu Tu](https://lifu-tu.github.io/)
  • others
  • [Huan Wang](https://huan-december.github.io/)
  • [Yingbo Zhou](https://scholar.google.com/citations?user=H_6RQ7oAAAAJ)
  • [Silvio Savarese](https://cvgl.stanford.edu/silvio/)
  • [Caiming Xiong](http://cmxiong.com/)

| [![](https://img.shields.io/github/stars/salesforce/CodeGen?style=social)](https://github.com/salesforce/CodeGen)

  • [arxiv](https://arxiv.org/abs/2203.13474), [arxiv](https://arxiv.org/abs/2305.02309)

  • [git](https://github.com/salesforce/jaxformer)

  • [hf](https://huggingface.co/models?search=salesforce+codegen)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1fQI8OgzMAR0bquCrvhlAtXSw6iMFbVgI) | 23.04.2022 |
| Jraph | Library for graph neural networks in JAX |

  • [Jonathan Godwin](https://github.com/jg8610)
  • [Thomas Keck](https://github.com/thomaskeck)
  • [Peter Battaglia](https://scholar.google.com/citations?user=nQ7Ij30AAAAJ)
  • [Victor Bapst](https://linkedin.com/in/victor-bapst-73430a89)
  • others
  • [Thomas Kipf](https://tkipf.github.io/)
  • [Yujia Li](https://yujiali.github.io/)
  • [Kimberly Stachenfeld](https://neurokim.com/)
  • [Petar Veličković](https://petar-v.com/)
  • [Alvaro Sanchez-Gonzalez](https://github.com/alvarosg)

| [![](https://img.shields.io/github/stars/google-deepmind/jraph?style=social)](https://github.com/google-deepmind/jraph)

  • [arxiv](https://arxiv.org/abs/1806.01261)

  • [docs](https://jraph.readthedocs.io/en/latest/)

  • [yt](https://youtu.be/S3sRy4oqvCM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google-deepmind/educational/blob/master/colabs/summer_schools/intro_to_graph_nets_tutorial_with_jraph.ipynb) | 15.04.2022 |
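
For the Jraph entry above, a minimal sketch of `GraphsTuple`, the core data structure the library's graph networks operate on:

```python
# Minimal Jraph sketch: one directed graph with 3 nodes and 2 edges.
import jax.numpy as jnp
import jraph

graph = jraph.GraphsTuple(
    nodes=jnp.ones((3, 4)),       # 3 nodes, 4 features each
    edges=jnp.ones((2, 5)),       # 2 edges, 5 features each
    senders=jnp.array([0, 1]),    # edge sources
    receivers=jnp.array([1, 2]),  # edge targets
    globals=jnp.ones((1, 6)),     # per-graph features
    n_node=jnp.array([3]),
    n_edge=jnp.array([2]),
)
print(graph.nodes.shape)  # (3, 4)
```
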
| deep-significance | Easy-to-use package containing different significance tests and utility functions specifically tailored towards research needs and usability |

  • [Dennis Ulmer](http://dennisulmer.eu/)
  • [Christian Hardmeier](https://christianhardmeier.rax.ch/)
  • [Jes Frellsen](https://frellsen.org/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.18653/v1/p19-1266)](https://doi.org/10.18653/v1/p19-1266) [![](https://img.shields.io/github/stars/Kaleidophon/deep-significance?style=social)](https://github.com/Kaleidophon/deep-significance)

  • [arxiv](https://arxiv.org/abs/2204.06815)

  • [blog post](https://machinelearningmastery.com/statistical-hypothesis-tests/)

  • [docs](https://deep-significance.readthedocs.io/en/latest/)

  • [git](https://github.com/rtmdrr/replicability-analysis-NLP), [git](https://github.com/rtmdrr/testSignificanceNLP), [git](https://github.com/rtmdrr/DeepComparison)

  • [wiki](https://en.wikipedia.org/wiki/Multiple_comparisons_problem)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Kaleidophon/deep-significance/blob/main/paper/deep-significance%20demo.ipynb) | 12.04.2022 |
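
For the deep-significance entry above, a minimal sketch of the Almost Stochastic Order (ASO) test, assuming the `aso` function from the package README; the score arrays are made-up stand-ins for per-seed model results:

```python
# Minimal deep-significance sketch; `aso` values close to 0 favor A over B.
import numpy as np
from deepsig import aso

rng = np.random.default_rng(0)
scores_a = rng.normal(loc=0.90, scale=0.02, size=5)  # e.g. 5 random seeds
scores_b = rng.normal(loc=0.85, scale=0.02, size=5)
print(aso(scores_a, scores_b))
```
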
| Text classification with RNN | This text classification tutorial trains a recurrent neural network on the IMDB large movie review dataset for sentiment analysis | [Anirudh Dubey](https://github.com/anirudh161) |

  • [data](http://ai.stanford.edu/~amaas/data/sentiment/)

  • [link](https://developers.google.com/machine-learning/glossary/#recurrent_neural_network)

  • [pwc](https://paperswithcode.com/task/text-classification)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/text_classification_rnn.ipynb) | 17.03.2022 |
| TriMap | Dimensionality reduction technique based on triplet constraints, which preserves the global structure of the data better than other commonly used methods such as t-SNE, LargeVis, and UMAP |

  • [Ehsan Amid](https://sites.google.com/view/eamid/)
  • [Manfred Warmuth](https://mwarmuth.bitbucket.io/)

| [![](https://img.shields.io/github/stars/eamid/trimap?style=social)](https://github.com/eamid/trimap)

  • [arxiv](https://arxiv.org/abs/1910.00204)

  • [data](https://www.cs.columbia.edu/CAVE/software/softlib/coil-100.php)

  • [git](https://github.com/google-research/google-research/tree/master/trimap), [git](https://github.com/spotify/annoy), [git](https://github.com/zalandoresearch/fashion-mnist)

  • [wiki](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg), [wiki](https://en.wikipedia.org/wiki/MNIST_database)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/eamid/examples/blob/master/TriMap.ipynb#scrollTo=nSyGA-ymB-jN) | 17.03.2022 |
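
For the TriMap entry above, a minimal sketch using the sklearn-style API from the project README:

```python
# Minimal TriMap sketch: embed the sklearn digits dataset into 2-D.
import trimap
from sklearn.datasets import load_digits

X, _ = load_digits(return_X_y=True)
embedding = trimap.TRIMAP().fit_transform(X)
print(embedding.shape)  # (1797, 2)
```
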
| RLDS | Reinforcement Learning Datasets: an ecosystem of tools to store, retrieve and manipulate episodic data in the context of sequential decision making, including RL, learning from demonstrations, offline RL and imitation learning |

  • [Sabela Ramos](https://github.com/sabelaraga)
  • [Sertan Girgin](https://sites.google.com/site/girgint/home)
  • [Léonard Hussenot](https://leonardhussenot.github.io/)
  • [Damien Vincent](https://www.linkedin.com/in/damien-vincent-1958381)
  • others
  • [Hanna Yakubovich](https://github.com/yakubanna)
  • [Daniel Toyama](https://github.com/kenjitoyama)
  • [Anita Gergely](https://www.linkedin.com/in/anita-g-318064b2/)
  • [Piotr Stanczyk](https://scholar.google.com/citations?user=fKVK0dYAAAAJ)
  • [Raphaël Marinier](https://github.com/RaphaelMarinier)
  • [Jeremiah Harmsen](https://github.com/jharmsen)
  • [Olivier Pietquin](https://research.google/people/105812/)
  • [Nikola Momchev](https://scholar.google.com/citations?user=PbWgaswAAAAJ)

| [![](https://img.shields.io/github/stars/google-research/rlds?style=social)](https://github.com/google-research/rlds)

  • [arxiv](https://arxiv.org/abs/2111.02767)

  • [blog post](https://ai.googleblog.com/2021/12/rlds-ecosystem-to-generate-share-and.html)

  • [git](https://github.com/deepmind/envlogger), [git](https://github.com/google-research/rlds-creator), [git](https://github.com/Farama-Foundation/D4RL), [git](https://github.com/deepmind/dm_env/blob/master/docs/index.md)

  • [tf](http://www.tensorflow.org/datasets/catalog/overview), [tf](https://www.tensorflow.org/datasets/catalog/robosuite_panda_pick_place_can), [tf](https://www.tensorflow.org/datasets/catalog/locomotion), [tf](https://www.tensorflow.org/datasets/catalog/mt_opt), [tf](https://www.tensorflow.org/datasets/external_tfrecord?hl=en#load_dataset_with_tfds), [tf](https://www.tensorflow.org/api_docs/python/tf/data), [tf](https://www.tensorflow.org/guide/data_performance#optimize_performance), [tf](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle), [tf](https://www.tensorflow.org/datasets/splits), [tf](https://www.tensorflow.org/datasets/api_docs/python/tfds/load)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google-research/rlds/blob/main/rlds/examples/rlds_tutorial.ipynb) | 16.03.2022 |
| Real-Time Voice Cloning | SV2TTS with a vocoder that works in real-time |

  • [Corentin Jemine](https://github.com/CorentinJ)
  • [Erdene-Ochir Tuguldur](https://github.com/tugstugi)

| [![](https://img.shields.io/github/stars/CorentinJ/Real-Time-Voice-Cloning?style=social)](https://github.com/CorentinJ/Real-Time-Voice-Cloning)

  • [arxiv](https://arxiv.org/abs/1806.04558), [arxiv](https://arxiv.org/abs/1802.08435), [arxiv](https://arxiv.org/abs/1703.10135), [arxiv](https://arxiv.org/abs/1710.10467)

  • [git](https://github.com/fatchord/WaveRNN), [git](https://github.com/coqui-ai/tts), [git](https://github.com/resemble-ai/Resemblyzer)

  • [yt](https://youtu.be/-O_hYhToKoA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tugstugi/dl-colab-notebooks/blob/master/notebooks/RealTimeVoiceCloning.ipynb) | 08.03.2022 |
| BLIP | VLP framework which transfers flexibly to both vision-language understanding and generation tasks |

  • [Junnan Li](https://github.com/LiJunnan1992)
  • [Dongxu Li](https://sites.google.com/view/dongxu-li/home)
  • [Caiming Xiong](http://cmxiong.com/)
  • [Steven Hoi](https://sites.google.com/view/stevenhoi)

| [![](https://img.shields.io/github/stars/salesforce/BLIP?style=social)](https://github.com/salesforce/BLIP)

  • [arxiv](https://arxiv.org/abs/2201.12086)

  • [blog post](https://blog.salesforceairesearch.com/blip-bootstrapping-language-image-pretraining/)

  • [git](https://github.com/facebookresearch/fairscale), [git](https://github.com/salesforce/ALPRO), [git](https://github.com/dmlc/decord), [git](https://github.com/salesforce/ALBEF), [git](https://github.com/rwightman/pytorch-image-models/tree/main/timm)

  • [yt](https://youtu.be/X2k7n4FuI7c)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/salesforce/BLIP/blob/main/demo.ipynb) | 03.03.2022 |
| VideoGPT | A conceptually simple architecture for scaling likelihood-based generative modeling to natural videos |

  • [Wilson Yan](https://wilson1yan.github.io/)
  • [Yunzhi Zhang](https://zzyunzhi.github.io/)
  • [Pieter Abbeel](https://people.eecs.berkeley.edu/~pabbeel/)
  • [Aravind Srinivas](https://people.eecs.berkeley.edu/~aravind/)

| [![](https://img.shields.io/github/stars/wilson1yan/VideoGPT?style=social)](https://github.com/wilson1yan/VideoGPT)

  • [arxiv](https://arxiv.org/abs/2104.10157), [arxiv](https://arxiv.org/abs/1904.10509)

  • [data](https://www.crcv.ucf.edu/data/UCF101.php)

  • [hf](https://huggingface.co/spaces/akhaliq/VideoGPT)

  • [project](https://wilson1yan.github.io/videogpt/index.html)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/wilson1yan/VideoGPT/blob/master/notebooks/Using_VideoGPT.ipynb) | 02.03.2022 |
| Silero Models | Pre-trained speech-to-text, text-to-speech and text-enhancement models made embarrassingly simple | [Silero team](https://www.silero.ai/about/) | [![](https://img.shields.io/github/stars/snakers4/silero-models?style=social)](https://github.com/snakers4/silero-models)

  • [STT](https://thegradient.pub/towards-an-imagenet-moment-for-speech-to-text/), [STT](https://thegradient.pub/a-speech-to-text-practitioners-criticisms-of-industry-and-academia/), [STT](https://habr.com/ru/post/519562/)

  • [TTS](https://habr.com/ru/post/660571/), [TTS](https://habr.com/ru/post/549482/)

  • [Text Enhancement](https://habr.com/ru/post/581960/)

  • [VAD](https://thegradient.pub/one-voice-detector-to-rule-them-all/), [VAD](https://habr.com/ru/post/537276/)

  • [website](https://www.silero.ai/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/snakers4/silero-models/blob/master/examples.ipynb) | 27.02.2022 |
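
For the Silero Models entry above, a minimal speech-to-text sketch via `torch.hub`, following the repo README; the audio path is a placeholder:

```python
# Minimal Silero STT sketch; downloads the English model on first call.
import torch

device = torch.device("cpu")
model, decoder, utils = torch.hub.load(
    repo_or_dir="snakers4/silero-models",
    model="silero_stt",
    language="en",
    device=device,
)
read_batch, split_into_batches, read_audio, prepare_model_input = utils

batch = read_batch(["speech.wav"])  # placeholder WAV file
output = model(prepare_model_input(batch, device=device))
print(decoder(output[0].cpu()))
```
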
| Real-CUGAN | AI super-resolution model for anime images, trained on a million-scale anime dataset, using the same architecture as Waifu2x-CUNet | [bilibili](https://github.com/bilibili) | [![](https://img.shields.io/github/stars/bilibili/ailab?style=social)](https://github.com/bilibili/ailab/tree/main/Real-CUGAN)

  • [git](https://github.com/nihui/realcugan-ncnn-vulkan), [git](https://github.com/nagadomi/nunif), [git](https://github.com/Justin62628/Squirrel-RIFE)

  • [hf](https://huggingface.co/spaces/mayhug/Real-CUGAN)

  • [yt](https://youtu.be/IVo19n4zFsc)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/bilibili/ailab/blob/main/Real-CUGAN/colab-demo.ipynb) | 27.02.2022 |
| ArcaneGAN | Process video in the style of the Arcane animated series | [Alexander Spirin](https://github.com/Sxela) | [![](https://img.shields.io/github/stars/Sxela/ArcaneGAN?style=social)](https://github.com/Sxela/ArcaneGAN)

  • [git](https://github.com/Sxela/stylegan3_blending)

  • [yt](https://youtu.be/Fi199uFW6jE), [yt](https://youtu.be/AJG4X7IokG8)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1r1hhciakk5wHaUn1eJk7TP58fV9mjy_W) | 17.02.2022 |
| textlesslib | A library aimed to facilitate research in Textless NLP |

  • [Eugene Kharitonov](https://eugene-kharitonov.github.io/)
  • [Jade Copet](https://scholar.google.com/citations?user=GRMLwjAAAAAJ)
  • [Kushal Lakhotia](https://about.me/hikushalhere)
  • [Nguyễn Tú Anh](https://tuanh208.github.io/)
  • others
  • [Paden Tomasello](https://scholar.google.com/citations?user=sBtWMGYAAAAJ)
  • [Ann Lee](https://ai.facebook.com/people/ann-lee)
  • [Ali Elkahky](https://scholar.google.com/citations?user=KB3S8RoAAAAJ)
  • [Wei-Ning Hsu](https://wnhsu.github.io/)
  • [Abdelrahman Mohamed](https://ai.facebook.com/people/abdelrahman-mohamed/)
  • [Emmanuel Dupoux](http://www.lscp.net/persons/dupoux/)
  • [Yossi Adi](https://www.cs.huji.ac.il/~adiyoss/)

| [![](https://img.shields.io/github/stars/facebookresearch/textlesslib?style=social)](https://github.com/facebookresearch/textlesslib)

  • [arxiv](https://arxiv.org/abs/2202.07359)

  • [git](https://github.com/NVIDIA/waveglow), [git](https://github.com/keithito/tacotron), [git](https://github.com/NVIDIA/tacotron2), [git](https://github.com/pseeth/torch-stft)

  • [pwc](https://paperswithcode.com/dataset/librispeech)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/textlesslib/blob/main/examples/resynthesis_and_continuation.ipynb) | 15.02.2022 |
| AV-HuBERT | Self-supervised representation learning framework for audio-visual speech |

  • [Bowen Shi](https://home.ttic.edu/~bshi/)
  • [Wei-Ning Hsu](http://people.csail.mit.edu/wnhsu/)
  • [Kushal Lakhotia](https://about.me/hikushalhere)
  • [Abdelrahman Mohamed](http://www.cs.toronto.edu/~asamir/)

| [![](https://img.shields.io/github/stars/facebookresearch/av_hubert?style=social)](https://github.com/facebookresearch/av_hubert)

  • [arxiv](https://arxiv.org/abs/2201.02184), [arxiv](https://arxiv.org/abs/2201.01763), [arxiv](https://arxiv.org/abs/1810.04805), [arxiv](https://arxiv.org/abs/1911.04890)

  • [blog post](https://ai.facebook.com/blog/ai-that-understands-speech-by-looking-as-well-as-hearing/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1bNXkfpHiVHzXQH8WjGhzQ-fsDxolpUjD) | 12.02.2022 |
| Lingvo | Framework for building neural networks in Tensorflow, particularly sequence models |

  • [Jonathan Shen](https://github.com/jonathanasdf)
  • [Patrick Nguyen](https://scholar.google.com/citations?user=38fqeIYAAAAJ)
  • [Yonghui Wu](https://scholar.google.com/citations?user=55FnA9wAAAAJ)
  • [Zhifeng Chen](https://github.com/zffchen78)

| [![](https://img.shields.io/github/stars/tensorflow/lingvo?style=social)](https://github.com/tensorflow/lingvo)

  • [arxiv](https://arxiv.org/abs/1902.08295), [arxiv](https://arxiv.org/abs/1508.01211), [arxiv](https://arxiv.org/abs/1412.1602), [arxiv](https://arxiv.org/abs/1602.02410), [arxiv](https://arxiv.org/abs/2006.16668), [arxiv](https://arxiv.org/abs/2106.04060)

  • [docker](https://github.com/tensorflow/lingvo/blob/master/docker/dev.Dockerfile), [docker](https://github.com/tensorflow/lingvo/blob/master/docker/lib.dockerfile)

  • [docs](https://tensorflow.github.io/lingvo/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/lingvo/blob/master/codelabs/introduction.ipynb) | 28.01.2022 |
| DeepDream | This tutorial contains a minimal implementation of DeepDream: an experiment that visualizes the patterns learned by a neural network |

  • [Alexander Mordvintsev](https://znah.net/)
  • [Billy Lamberta](https://github.com/lamberta)

|

  • [arxiv](https://arxiv.org/abs/1409.4842)

  • [blog post](https://research.google/blog/inceptionism-going-deeper-into-neural-networks/)

  • [medium](https://medium.com/@nik.nagarajan2/deepdream-a-psychedelic-ai-experience-ab482dd5228b), [medium](https://towardsdatascience.com/dreaming-over-text-f6745c829cee)

  • [tf](https://www.tensorflow.org/tutorials/generative/deepdream)

  • [wiki](https://en.wikipedia.org/wiki/Inception), [wiki](https://en.wikipedia.org/wiki/DeepDream)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/deepdream.ipynb) | 13.01.2022 |
| FuseDream | Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimization |

  • [Xingchao Liu](https://scholar.google.com/citations?user=VOTVE0UAAAAJ)
  • [Chengyue Gong](https://github.com/ChengyueGongR)
  • [Lemeng Wu](https://github.com/klightz)
  • [Hao Su](https://cseweb.ucsd.edu//~haosu/)
  • [Qiang Liu](https://www.cs.utexas.edu/~lqiang/)

|
  • [arxiv](https://arxiv.org/abs/2112.01573)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/190tKQf0aFj-Hi8STUrLc2m4DeOviv7NO) | 02.01.2022 |
| MLP | One of the most basic neural network architectures: the multilayer perceptron, also known as a feedforward network | [Ben Trevett](https://bentrevett.com/) |

  • [NN and DL](http://neuralnetworksanddeeplearning.com/)

  • [arxiv](https://arxiv.org/abs/1702.03118), [arxiv](https://arxiv.org/abs/2108.12943), [arxiv](https://arxiv.org/abs/2111.04020)

  • [optimization](https://ruder.io/optimizing-gradient-descent/)

  • [pt](https://pytorch.org/vision/stable/transforms.html#transforms-on-pil-image-only), [pt](https://pytorch.org/vision/stable/transforms.html#transforms-on-torch-tensor-only)

  • [wiki](https://en.wikipedia.org/wiki/Multilayer_perceptron)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/bentrevett/pytorch-image-classification/blob/master/1_mlp.ipynb) | 26.12.2021 |
| AlexNet | A neural network model that uses convolutional neural network layers and was designed for the ImageNet challenge | [Ben Trevett](https://bentrevett.com/) | [![](https://img.shields.io/github/stars/davidtvs/pytorch-lr-finder?style=social)](https://github.com/davidtvs/pytorch-lr-finder)

  • [ILSVRC](https://image-net.org/challenges/LSVRC/)

  • [LR](https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html)

  • [PMLR](https://proceedings.mlr.press/v9/glorot10a.html)

  • [arxiv](https://arxiv.org/abs/1409.0575)

  • [cifar-10](https://www.cs.toronto.edu/~kriz/cifar.html)

  • [dropout](https://sebastianraschka.com/faq/docs/dropout-activation.html)

  • [neurips](https://papers.nips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html)

  • [pt](https://pytorch.org/vision/stable/models.html)

  • [pwc](https://paperswithcode.com/method/alexnet)

  • [wiki](https://en.wikipedia.org/wiki/Regularization_(mathematics)), [wiki](https://en.wikipedia.org/wiki/AlexNet)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/bentrevett/pytorch-image-classification/blob/master/3_alexnet.ipynb) | 26.12.2021 |
| VGG | Very Deep Convolutional Networks for Large-Scale Image Recognition | [Ben Trevett](https://bentrevett.com/) | [![](https://img.shields.io/github/stars/pytorch/vision?style=social)](https://github.com/pytorch/vision/blob/main/torchvision/models/vgg.py#L47)

  • [ILSVRC](https://image-net.org/challenges/LSVRC/)

  • [arxiv](https://arxiv.org/abs/1409.1556), [arxiv](https://arxiv.org/abs/1506.01186), [arxiv](https://arxiv.org/abs/1801.06146), [arxiv](https://arxiv.org/abs/1502.03167), [arxiv](https://arxiv.org/abs/1805.11604)

  • [cifar-10](https://www.cs.toronto.edu/~kriz/cifar.html)

  • [pt](https://pytorch.org/vision/stable/models.html)

  • [pwc](https://paperswithcode.com/method/vgg)

  • [yt](https://youtu.be/HR0lt1hlR6U?t=5900), [yt](https://youtu.be/j1jIoHN3m0s), [yt](https://youtu.be/RNnKtNrsrmg)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/bentrevett/pytorch-image-classification/blob/master/4_vgg.ipynb) | 26.12.2021 |
| LeNet | A neural network model that uses convolutional neural network layers and was designed for classifying handwritten characters | [Ben Trevett](https://bentrevett.com/) |

  • [CNN](https://cs231n.github.io/convolutional-networks/)

  • [LeNet-5](http://yann.lecun.com/exdb/lenet/)

  • [guide](https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/)

  • [paper](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf)

  • [pwc](https://paperswithcode.com/method/lenet)

  • [wiki](https://en.wikipedia.org/wiki/Convolution), [wiki](https://en.wikipedia.org/wiki/Sobel_operator), [wiki](https://en.wikipedia.org/wiki/Gaussian_blur)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/bentrevett/pytorch-image-classification/blob/master/2_lenet.ipynb) | 26.12.2021 |
| Music Composer | Synthesizing symbolic music in MIDI format using the Music Transformer model | [bazanovvanya](https://github.com/bazanovvanya) | [![](https://img.shields.io/github/stars/ai-forever/music-composer?style=social)](https://github.com/ai-forever/music-composer)

  • [arxiv](https://arxiv.org/abs/1909.05858)

  • [blog post](https://habr.com/ru/company/sberbank/blog/583592/)

  • [data](https://magenta.tensorflow.org/datasets/maestro), [data](https://colinraffel.com//projects/lmd/)

  • [git](https://github.com/gwinndr/MusicTransformer-Pytorch), [git](https://github.com/bytedance/GiantMIDI-Piano), [git](https://github.com/mdeff/fma)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ai-forever/music-composer/blob/master/src/Music_Composer_Demo_Colab_en.ipynb) | 20.12.2021 |
| FLAML | Lightweight Python library that finds accurate machine learning models automatically, efficiently and economically |

  • [Chi Wang](https://github.com/sonichi)
  • [Qingyun Wu](https://qingyun-wu.github.io/)

| [![](https://img.shields.io/github/stars/microsoft/FLAML?style=social)](https://github.com/microsoft/FLAML)

  • [arxiv](https://arxiv.org/abs/2106.04815), [arxiv](https://arxiv.org/abs/2005.01571)

  • [docs](https://microsoft.github.io/FLAML/)

  • [paper](https://www.microsoft.com/en-us/research/publication/flaml-a-fast-and-lightweight-automl-library/)

  • [yt](https://www.youtube.com/channel/UCfU0zfFXHXdAd5x-WvFBk5A), [yt](https://youtu.be/euXpDYGgkGM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/microsoft/FLAML/blob/master/notebook/flaml_automl.ipynb) | 17.12.2021 |
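
For the FLAML entry above, a minimal AutoML sketch on a toy sklearn dataset; the task and time budget are illustrative values:

```python
# Minimal FLAML sketch: let AutoML pick a model under a 30 s budget.
from flaml import AutoML
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
automl = AutoML()
automl.fit(X, y, task="classification", time_budget=30)  # seconds
print(automl.best_estimator, automl.best_config)
```
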
| CompilerGym | A reinforcement learning toolkit for compiler optimizations |

  • [Chris Cummins](https://chriscummins.cc/)
  • [Bram Wasti](https://github.com/bwasti)
  • [Jiadong Guo](https://jd-eth.github.io/)
  • [Brandon Cui](https://www.linkedin.com/in/bcui19/)
  • others
  • [Jason Ansel](https://jasonansel.com/)
  • [Sahir Gomez](https://github.com/sahirgomez1)
  • [Olivier Teytaud](https://github.com/teytaud)
  • [Benoit Steiner](http://bsteiner.info/)
  • [Yuandong Tian](http://yuandong-tian.com/)
  • [Hugh Leather](https://github.com/hughleat)

| [![](https://img.shields.io/github/stars/facebookresearch/CompilerGym?style=social)](https://github.com/facebookresearch/CompilerGym)

  • [arxiv](https://arxiv.org/abs/2109.08267)

  • [docs](https://facebookresearch.github.io/CompilerGym/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/CompilerGym/blob/development/examples/getting-started.ipynb) | 16.11.2021 |
| Reformer | Performs on par with Transformer models while being much more memory-efficient and much faster on long sequences |

  • [Phil Wang](https://lucidrains.github.io/)
  • [Nikita Kitaev](https://kitaev.com/)
  • [Łukasz Kaiser](https://scholar.google.com/citations?user=JWmiQR0AAAAJ)
  • [Anselm Levskaya](https://anselmlevskaya.com/)

| [![](https://img.shields.io/github/stars/lucidrains/reformer-pytorch?style=social)](https://github.com/lucidrains/reformer-pytorch)

  • [arxiv](https://arxiv.org/abs/2001.04451), [arxiv](https://arxiv.org/abs/1907.01470), [arxiv](https://arxiv.org/abs/1910.05895), [arxiv](https://arxiv.org/abs/1909.11556), [arxiv](https://arxiv.org/abs/1911.02150), [arxiv](https://arxiv.org/abs/2002.05202), [arxiv](https://arxiv.org/abs/2003.05997), [arxiv](https://arxiv.org/abs/2003.04887), [arxiv](https://arxiv.org/abs/2002.07028), [arxiv](https://arxiv.org/abs/2103.03404), [arxiv](https://arxiv.org/abs/2104.09864)

  • [blog post](https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html)

  • [git](https://github.com/lucidrains/routing-transformer), [git](https://github.com/lucidrains/sinkhorn-transformer), [git](https://github.com/lucidrains/performer-pytorch), [git](https://github.com/lucidrains/linear-attention-transformer/), [git](https://github.com/lucidrains/compressive-transformer-pytorch)

  • [neurips](https://proceedings.neurips.cc/paper/2019/hash/9d8df73a3cfbf3c5b47bc9b50f214aff-Abstract.html), [neurips](https://proceedings.neurips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html)

  • [pypi](https://pypi.org/project/reformer-pytorch/)

  • [yt](https://youtu.be/i4H0kjxrias), [yt](https://youtu.be/Kf3x3lqf9cQ), [yt](https://youtu.be/0eTULzrOztQ)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1awNgXYtjvUeXl1gS-v1iyDXTJJ-fyJIK) | 07.11.2021 |
| ruDALL·E | Generate images from texts in Russian | [Alex Shonenkov](https://github.com/shonenkov) | [![](https://img.shields.io/github/stars/ai-forever/ru-dalle?style=social)](https://github.com/ai-forever/ru-dalle)

  • [git](https://github.com/bes-dev/vqvae_dwt_distiller.pytorch), [git](https://github.com/boomb0om/Real-ESRGAN-colab)

  • [hf](https://huggingface.co/spaces/multimodalart/rudalle)

  • [project](https://rudalle.ru/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/ai-forever/ru-dalle/blob/master/jupyters/ruDALLE-example-generation-A100.ipynb) | 03.11.2021 |
| DeepStyle | The Neural Style algorithm synthesizes a pastiche by separating and combining the content of one image with the style of another image using convolutional neural networks |

  • [Cameron Smith](https://github.com/cysmith)
  • [Alexander Spirin](https://github.com/Sxela)

| [![](https://img.shields.io/github/stars/cysmith/neural-style-tf?style=social)](https://github.com/cysmith/neural-style-tf)

  • [arxiv](https://arxiv.org/abs/1604.08610), [arxiv](https://arxiv.org/abs/1606.05897), [arxiv](https://arxiv.org/abs/1508.06576)

  • [cvpr](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf)

  • [wiki](https://en.wikipedia.org/wiki/Pastiche), [wiki](https://en.wikipedia.org/wiki/The_Starry_Night), [wiki](https://en.wikipedia.org/wiki/YUV), [wiki](https://en.wikipedia.org/wiki/Lab_color_space), [wiki](https://en.wikipedia.org/wiki/YCbCr), [wiki](https://en.wikipedia.org/wiki/CIELUV), [wiki](https://en.wikipedia.org/wiki/Pareidolia)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/14aJ7HQPbcP0sNRIY-FRO4u6lxtlyyxI_) | 01.10.2021 |
| Text2Animation | Generate images from text phrases with VQGAN and CLIP with animation and keyframes |

  • [Katherine Crowson](https://kath.io/)
  • [Ryan Murdock](https://twitter.com/advadnoun)
  • [Chigozie Nri](https://github.com/chigozienri)
  • [Denis Malimonov](https://github.com/tg-bomze)

| [![](https://img.shields.io/github/stars/chigozienri/VQGAN-CLIP-animations?style=social)](https://github.com/chigozienri/VQGAN-CLIP-animations)

  • [arxiv](https://arxiv.org/abs/2012.09841), [arxiv](https://arxiv.org/abs/2103.00020)

  • [yt](https://www.youtube.com/channel/UCToztRy9FSTIhEen_1x4FAw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tg-bomze/collection-of-notebooks/blob/master/Text2Animation.ipynb) | 29.09.2021 |
| EfficientNetV2 | A family of image classification models, which achieve better parameter efficiency and faster training speed than prior arts |

  • [Mingxing Tan](https://scholar.google.com/citations?user=6POeyBoAAAAJ)
  • [Quoc Le](https://cs.stanford.edu/~quocle/)

| [![](https://img.shields.io/github/stars/google/automl?style=social)](https://github.com/google/automl/tree/master/efficientnetv2)

  • [arxiv](https://arxiv.org/abs/2104.00298), [arxiv](https://arxiv.org/abs/1905.11946)

  • [git](https://github.com/NVIDIA/TensorRT/tree/master/samples/python/efficientnet)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/automl/blob/master/efficientnetv2/tutorial.ipynb) | 24.09.2021 |
| Clip retrieval | Easily compute CLIP embeddings and build a CLIP retrieval system with them | [Romain Beaumont](https://github.com/rom1504) | [![](https://img.shields.io/github/stars/rom1504/clip-retrieval?style=social)](https://github.com/rom1504/clip-retrieval)

  • [discord](https://discord.gg/eq3cAMZtCC)

  • [git](https://github.com/LAION-AI/CLIP_benchmark), [git](https://github.com/rom1504/laion-prepro), [git](https://github.com/dzryk/antarctic-captions), [git](https://github.com/LAION-AI/CLIP-based-NSFW-Detector), [git](https://github.com/ml-research/OffImgDetectionCLIP)

  • [medium](https://rom1504.medium.com/semantic-search-with-embeddings-index-anything-8fb18556443c)

  • [project](https://rom1504.github.io/clip-retrieval)

  • [pypi](https://pypi.python.org/pypi/clip-retrieval)

  • [wiki](https://en.wikipedia.org/wiki/Locality_of_reference)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/rom1504/clip-retrieval/blob/master/notebook/clip-retrieval-getting-started.ipynb) | 21.09.2021 |
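
For the Clip retrieval entry above, a minimal client-side sketch; the endpoint and index name follow the project README but should be treated as illustrative and may change:

```python
# Minimal clip-retrieval sketch: query a hosted knn index over LAION.
from clip_retrieval.clip_client import ClipClient

client = ClipClient(
    url="https://knn.laion.ai/knn-service",  # hosted backend (illustrative)
    indice_name="laion5B-L-14",              # index name (illustrative)
)
results = client.query(text="an orange cat")
print(results[0])  # dict with fields such as url, caption, similarity
```
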
| img2dataset | Easily turn large sets of image URLs into an image dataset | [Romain Beaumont](https://github.com/rom1504) | [![](https://img.shields.io/github/stars/rom1504/img2dataset?style=social)](https://github.com/rom1504/img2dataset)

  • [discord](https://discord.gg/eq3cAMZtCC)

  • [git](https://github.com/uber/petastorm), [git](https://github.com/fsspec/filesystem_spec/blob/6233f315548b512ec379323f762b70764efeb92c/fsspec/registry.py#L87), [git](https://github.com/fsspec/sshfs), [git](https://github.com/rom1504/cah-prepro)

  • [hf](https://huggingface.co/docs/hub/datasets-viewer), [hf](https://huggingface.co/docs/huggingface_hub/guides/hf_file_system)

  • [medium](https://rom1504.medium.com/semantic-search-at-billions-scale-95f21695689a)

  • [pypi](https://pypi.python.org/pypi/img2dataset)

  • [tf](https://www.tensorflow.org/guide/data)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/rom1504/img2dataset/blob/master/notebook/img2dataset_getting_started.ipynb) | 17.09.2021 |
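
For the img2dataset entry above, a minimal sketch following the project README; `urls.txt` is a placeholder file with one image URL per line:

```python
# Minimal img2dataset sketch: download and resize a list of image URLs.
from img2dataset import download

download(
    url_list="urls.txt",      # placeholder: one image URL per line
    output_folder="images",
    thread_count=64,
    image_size=256,
)
```
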
| Droidlet | A modular embodied agent architecture and platform for building embodied agents |

  • [Anurag Pratik](https://github.com/anuragprat1k)
  • [Soumith Chintala](https://soumith.ch/)
  • [Kavya Srinet](https://github.com/kavyasrinet)
  • [Dhiraj Gandhi](https://dhiraj100892.github.io/)
  • others
  • [Rebecca Qian](https://github.com/Rebecca-Qian)
  • [Yuxuan Sun](https://github.com/snyxan)
  • [Ryan Drew](https://rdrew.dev/)
  • [Sara Elkafrawy](https://github.com/saraEbrahim)
  • [Anoushka Tiwari](https://www.linkedin.com/in/anoushka-tiwari)
  • [Tucker Hart](https://www.linkedin.com/in/tucker-hart-05a638133)
  • [Mary Williamson](https://scholar.google.com/citations?user=Ys4xB-QAAAAJ)
  • [Abhinav Gupta](http://www.cs.cmu.edu/~abhinavg/)
  • [Arthur Szlam](https://scholar.google.com/citations?user=u3-FxUgAAAAJ)

| [![](https://img.shields.io/github/stars/facebookresearch/droidlet?style=social)](https://github.com/facebookresearch/droidlet)

  • [arxiv](https://arxiv.org/abs/2101.10384), [arxiv](https://arxiv.org/abs/1907.08584)

  • [docs](https://facebookresearch.github.io/droidlet/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/droidlet/blob/master/examples_and_tutorials/tutorials/droidlet_for_physical_robots.ipynb) | 15.09.2021 |
| GPT-J-6B | A 6 billion parameter, autoregressive text generation model trained on The Pile |

  • [Ben Wang](https://benwang.dev/)
  • [Aran Komatsuzaki](https://arankomatsuzaki.wordpress.com/about-me/)
  • [Janko Prester](https://www.jankoprester.com/)

| [![](https://img.shields.io/github/stars/kingoflolz/mesh-transformer-jax?style=social)](https://github.com/kingoflolz/mesh-transformer-jax)

  • [The Pile](https://pile.eleuther.ai/)

  • [blog post](https://arankomatsuzaki.wordpress.com/2021/06/04/gpt-j/)

  • [git](https://github.com/EleutherAI/gpt-neox), [git](https://github.com/microsoft/DeepSpeed)

  • [web demo](https://6b.eleuther.ai/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) | 15.09.2021 |
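
For the GPT-J-6B entry above, a minimal generation sketch. It uses the Hugging Face `transformers` port rather than the mesh-transformer-jax code in the listed repo, and needs substantial RAM or VRAM:

```python
# Minimal GPT-J sketch via the transformers port (an assumption; the repo
# itself targets TPUs with JAX).
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
ids = tok("The meaning of life is", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=20, do_sample=True)
print(tok.decode(out[0], skip_special_tokens=True))
```
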
| Machine learning course | This course is broad and shallow, but the author provides additional links so that you can deepen your understanding of the ML methods you need | [Тимчишин Віталій](https://github.com/fbeilstein) | [![](https://img.shields.io/github/stars/fbeilstein/machine_learning?style=social)](https://github.com/fbeilstein/machine_learning)

  • [blog post](https://vas3k.com/blog/machine_learning/)

  • [yt](https://www.youtube.com/playlist?list=PLkDeTjsoxDVgnb2lIYo9-1l4XYhrIyS6A), [yt](https://youtu.be/-RdOwhmqP5s), [yt](https://youtu.be/R13BD8qKeTg), [yt](https://youtu.be/ZkjP5RJLQF4), [yt](https://youtu.be/J4Wdy0Wc_xQ), [yt](https://youtu.be/mBcLRGuAFUk), [yt](https://youtu.be/YIGtalP1mv0), [yt](https://youtu.be/Yz5pySyEtsU), [yt](https://youtu.be/x5zLaWT5KPs), [yt](https://youtu.be/yBwpo-L80Mc), [yt](https://www.youtube.com/playlist?list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/fbeilstein/machine_learning/blob/master/lecture_01_introduction.ipynb) | 02.09.2021 |
| Lucid Sonic Dreams | Syncs GAN-generated visuals to music | [Mikael Alafriz](https://github.com/mikaelalafriz) | [![](https://img.shields.io/github/stars/mikaelalafriz/lucid-sonic-dreams?style=social)](https://github.com/mikaelalafriz/lucid-sonic-dreams)

  • [git](https://github.com/NVlabs/stylegan2), [git](https://github.com/justinpinkney/awesome-pretrained-stylegan2)

  • [medium](https://towardsdatascience.com/introducing-lucid-sonic-dreams-sync-gan-art-to-music-with-a-few-lines-of-python-code-b04f88722de1)

  • [yt](https://youtu.be/l-nGC-ve7sI)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1Y5i50xSFIuN3V4Md8TB30_GOAtts7RQD) | 24.08.2021 |
| textgenrnn | Generate text using a pretrained neural network with a few lines of code, or easily train your own text-generating neural network of any size and complexity | [Max Woolf](https://minimaxir.com/) | [![](https://img.shields.io/github/stars/minimaxir/textgenrnn?style=social)](https://github.com/minimaxir/textgenrnn)

  • [blog post](http://minimaxir.com/2018/05/text-neural-networks/)

  • [yt](https://www.youtube.com/watch?v=RW7mP6BfZuY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1mMKGnVxirJnqDViH7BDJxFqWrsXlPSoK) | 13.07.2021 |
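
For the textgenrnn entry above, a minimal sketch from the package README; the corpus path is a placeholder:

```python
# Minimal textgenrnn sketch: generate with the bundled pretrained weights,
# then fine-tune on your own text file.
from textgenrnn import textgenrnn

textgen = textgenrnn()   # loads the default pretrained model
textgen.generate(3)      # prints 3 generated texts
textgen.train_from_file("my_corpus.txt", num_epochs=1)  # placeholder path
```
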
| BasicSR | Open Source Image and Video Restoration Toolbox for Super-Resolution, Denoising, Deblurring, etc. |

  • [Xintao Wang](https://xinntao.github.io/)
  • [Liangbin Xie](https://liangbinxie.github.io/)
  • [Ke Yu](https://github.com/yuke93)
  • [Kelvin Chan](https://ckkelvinchan.github.io/)
  • others
  • [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/)
  • [Chao Dong](https://scholar.google.com/citations?user=OSDCB0UAAAAJ)

| [![](https://img.shields.io/github/stars/XPixelGroup/BasicSR?style=social)](https://github.com/XPixelGroup/BasicSR)

  • [arxiv](https://arxiv.org/abs/2012.02181)

  • [docs](https://basicsr.readthedocs.io/en/latest/)

  • [git](https://github.com/xinntao/ESRGAN), [git](https://github.com/xindongzhang/ECBSR), [git](https://github.com/Lotayou/Face-Renovation), [git](https://github.com/csxmli2016/DFDNet), [git](https://github.com/rosinality/stylegan2-pytorch), [git](https://github.com/xinntao/facexlib), [git](https://github.com/xinntao/HandyView), [git](https://github.com/xinntao/HandyFigure), [git](https://github.com/xinntao/SFTGAN), [git](https://github.com/xinntao/DNI), [git](https://github.com/xinntao/HandyCrawler), [git](https://github.com/xinntao/HandyWriting)

  • [yt](https://youtu.be/KaMYsxWkmww)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1JQScYICvEC3VqaabLu-lxvq9h7kSV1ML) | 07.06.2021 |
| TensorFlowTTS | Real-time state-of-the-art speech synthesis architectures such as Tacotron-2, MelGAN, Multiband-MelGAN, FastSpeech and FastSpeech2, based on TensorFlow 2 |

  • [Minh Nguyen Quan Anh](https://github.com/dathudeptrai)
  • [Eren Gölge](https://github.com/erogol)
  • [Kuan Chen](https://github.com/azraelkuan)
  • [Takuya Ebata](https://github.com/MokkeMeguru)

| [![](https://img.shields.io/github/stars/TensorSpeech/TensorFlowTTS?style=social)](https://github.com/TensorSpeech/TensorFlowTTS)

  • [git](https://github.com/thorstenMueller/Thorsten-Voice)

  • [hf](https://huggingface.co/spaces/akhaliq/TensorFlowTTS), [hf](https://huggingface.co/tensorspeech)

  • [kaggle](https://www.kaggle.com/datasets/bryanpark/korean-single-speaker-speech-dataset)

  • [project](https://tensorspeech.github.io/TensorFlowTTS/)

  • [pypi](https://pypi.org/project/TensorFlowTTS/)

  • [tf](https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide), [tf](https://www.tensorflow.org/model_optimization/guide/pruning/pruning_with_keras)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/TensorSpeech/TensorFlowTTS/blob/master/notebooks/TensorFlowTTS_FastSpeech_with_TFLite.ipynb) | 01.06.2021 |
| Hyperopt | Python library for serial and parallel optimization over awkward search spaces, which may include real-valued, discrete, and conditional dimensions |

  • [James Bergstra](https://github.com/jaberg)
  • [Dan Yamins](https://github.com/yamins81)
  • [David Cox](https://scholar.google.com/citations?user=6S-WgLkAAAAJ)

| [![](https://img.shields.io/github/stars/hyperopt/hyperopt?style=social)](https://github.com/hyperopt/hyperopt)

  • [ICML](https://proceedings.mlr.press/v28/bergstra13.html)

  • [docs](http://hyperopt.github.io/hyperopt/)

  • [git](https://github.com/hyperopt/hyperopt-sklearn), [git](https://github.com/hyperopt/hyperopt-nnet), [git](https://github.com/hyperopt/hyperopt-convnet), [git](https://github.com/hyperopt/hyperopt-gpsmbo)

  • [neurips](https://papers.nips.cc/paper/2011/hash/86e8f7ab32cfd12577bc2619bc635690-Abstract.html)

  • [yt](https://youtu.be/Mp1xnPfE4PY), [yt](https://youtu.be/tdwgR1AqQ8Y), [yt](https://youtu.be/tteE_Vtmrv4)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/hyperopt/hyperopt/blob/master/tutorial/01.BasicTutorial.ipynb) | 01.06.2021 |
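
A minimal Hyperopt sketch, assuming only the library itself: minimize a toy quadratic over one real-valued dimension with the TPE algorithm (the objective, bounds, and budget here are illustrative).

```python
from hyperopt import fmin, tpe, hp

# Minimize a toy objective over one real-valued search dimension.
best = fmin(
    fn=lambda x: (x - 1) ** 2,      # objective to minimize
    space=hp.uniform("x", -2, 2),   # search space: x ~ Uniform(-2, 2)
    algo=tpe.suggest,               # Tree-structured Parzen Estimator
    max_evals=100,                  # optimization budget
)
print(best)  # e.g. {'x': 0.99...}
```
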
| CNN | This tutorial demonstrates training a simple Convolutional Neural Network to classify CIFAR images (model sketch below) | [Billy Lamberta](https://github.com/lamberta) |

  • [cifar](https://www.cs.toronto.edu/~kriz/cifar.html)

  • [link](https://developers.google.com/machine-learning/glossary/#convolutional_neural_network)

  • [tf](https://www.tensorflow.org/tutorials/images/cnn)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/cnn.ipynb) | 22.05.2021 |
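
The tutorial's model in outline: a small Conv2D/MaxPooling2D stack followed by dense layers on CIFAR-10. A condensed sketch (layer sizes follow the tutorial; the epoch count is illustrative):

```python
import tensorflow as tf
from tensorflow.keras import datasets, layers, models

# Load CIFAR-10 and scale pixel values to [0, 1].
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10),  # logits for the 10 CIFAR classes
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(train_images, train_labels, epochs=10,
          validation_data=(test_images, test_labels))
```
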
| Custom GPT-2 + Tokenizer | Train a custom GPT-2 model for free on a GPU using aitextgen! | [Max Woolf](https://minimaxir.com/) | [![](https://img.shields.io/github/stars/minimaxir/aitextgen?style=social)](https://github.com/minimaxir/aitextgen)

  • [data](https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt)

  • [docs](https://docs.aitextgen.io/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/144MdX5aLqrQ3-YW-po81CQMrD6kpgpYh) | 17.05.2021 |
| Train a GPT-2 Text-Generating Model | Retrain an advanced text-generating neural network on any text dataset for free on a GPU using Colaboratory and aitextgen (fine-tuning sketch below) | [Max Woolf](https://minimaxir.com/) | [![](https://img.shields.io/github/stars/minimaxir/aitextgen?style=social)](https://github.com/minimaxir/aitextgen)

  • [data](https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt)

  • [docs](https://docs.aitextgen.io/)

  • [pwc](https://paperswithcode.com/task/text-generation)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/15qBZx5y9rdaQSyWpsreMDnTiZ5IlN0zD) | 17.05.2021 |
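
A rough fine-tuning sketch, assuming the linked tiny-shakespeare file has been saved locally as `input.txt`; argument values are illustrative and should be checked against the aitextgen docs:

```python
from aitextgen import aitextgen

# Load OpenAI's 124M-parameter GPT-2 and fine-tune it on a local text file.
ai = aitextgen(tf_gpt2="124M", to_gpu=True)
ai.train("input.txt", num_steps=3000)

# Sample from the fine-tuned model with a prompt.
ai.generate(n=1, prompt="ROMEO:", max_length=100)
```
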
| EasyNMT | Easy-to-use, state-of-the-art machine translation for more than 100 languages (usage sketch below) | [Nils Reimers](https://www.nils-reimers.de/) | [![](https://img.shields.io/github/stars/UKPLab/EasyNMT?style=social)](https://github.com/UKPLab/EasyNMT)

  • [arxiv](https://arxiv.org/abs/2008.00401), [arxiv](https://arxiv.org/abs/2010.11125)

  • [demo](http://easynmt.net/demo/)

  • [git](https://github.com/Helsinki-NLP/Opus-MT), [git](https://github.com/facebookresearch/fairseq/tree/main/examples/multilingual)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1X47vgSiOphpxS5w_LPtjQgJmiSTNfRNC) | 26.04.2021 |
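
Translation is essentially a two-liner; a minimal sketch with the Opus-MT backend (the input sentence is illustrative, and the source language is auto-detected):

```python
from easynmt import EasyNMT

# Load the Opus-MT backend; 'mbart50_m2m' and 'm2m_100_418M' are alternatives.
model = EasyNMT("opus-mt")

# Only the target language is required; the source language is detected.
print(model.translate("Dies ist ein Satz in Deutsch.", target_lang="en"))
```
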
| SkinDeep | Remove Body Tattoo Using Deep Learning | [Vijish Madhavan](https://github.com/vijishmadhavan) | [![](https://img.shields.io/github/stars/vijishmadhavan/SkinDeep?style=social)](https://github.com/vijishmadhavan/SkinDeep)

  • [arxiv](https://arxiv.org/abs/1805.08318), [arxiv](https://arxiv.org/abs/1710.10196), [arxiv](https://arxiv.org/abs/1707.02921), [arxiv](https://arxiv.org/abs/1603.08155)

  • [git](https://github.com/jantic/DeOldify)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/vijishmadhavan/SkinDeep/blob/master/SkinDeep_good.ipynb) | 24.04.2021 |
| PaddleHub | Pre-trained models toolkit based on PaddlePaddle: 400+ models including Image, Text, Audio, Video and Cross-Modal with Easy Inference & Serving |

  • [Zeyu Chen](https://github.com/ZeyuChen)
  • [Zewu Wu](https://github.com/nepeplwu)
  • [Bin Long](https://github.com/sjtubinlong)
  • [Xuefei Zhang](https://github.com/Steffy-zxf)
  • others
  • [Jinxuan Qiu](https://github.com/kinghuin)
  • [Yuhan Shen](https://github.com/ShenYuhan)
  • [Yuying Hao](https://github.com/haoyuying)
  • [Xiaojie Chen](https://github.com/KPatr1ck)

| [![](https://img.shields.io/github/stars/PaddlePaddle/PaddleHub?style=social)](https://github.com/PaddlePaddle/PaddleHub)

  • [docs](https://paddlehub.readthedocs.io/en)

  • [git](https://github.com/PaddlePaddle/PaddleOCR), [git](https://github.com/PaddlePaddle/PaddleDetection), [git](https://github.com/PaddlePaddle/PaddleGAN), [git](https://github.com/CMU-Perceptual-Computing-Lab/openpose), [git](https://github.com/PaddlePaddle/PaddleSeg), [git](https://github.com/PaddlePaddle/PaddleClas), [git](https://github.com/PaddlePaddle/ERNIE), [git](https://github.com/baidu/LAC), [git](https://github.com/baidu/DDParser), [git](https://github.com/PaddlePaddle/PaddleSpeech)

  • [hf](https://huggingface.co/PaddlePaddle)

  • [medium](https://medium.com/analytics-vidhya/paddlehub-fdd1ec75a07b)

  • [website](https://www.paddlepaddle.org.cn/en)

  • [yt](https://youtu.be/9adXuF_lTSg)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/PaddlePaddle/PaddleHub/blob/develop/demo/serving/bentoml/cloud-native-model-serving-with-bentoml.ipynb) | 20.04.2021 |
| OCTIS | Framework for training, analyzing, and comparing Topic Models, whose optimal hyper-parameters are estimated using a Bayesian Optimization approach (usage sketch below) |

  • [Silvia Terragni](https://silviatti.github.io/)
  • [Elisabetta Fersini](https://www.unimib.it/elisabetta-fersini)
  • [Antonio Candelieri](https://www.unimib.it/antonio-candelieri)
  • [Pietro Tropeano](https://github.com/pietrotrope)
  • others
  • [Bruno Galuzzi](https://github.com/brunoG89)
  • [Lorenzo Famiglini](https://github.com/lorenzofamiglini)
  • [Davide Pietrasanta](https://github.com/davidepietrasanta)

| [![](https://img.shields.io/github/stars/mind-Lab/octis?style=social)](https://github.com/mind-Lab/octis)

  • [arxiv](https://arxiv.org/abs/1703.01488)

  • [data](https://www.dbpedia.org/resources/ontology/), [data](https://www.statmt.org/europarl/)

  • [git](https://github.com/estebandito22/PyTorchAVITM)

  • [medium](https://towardsdatascience.com/a-beginners-guide-to-octis-optimizing-and-comparing-topic-models-is-simple-590554ec9ba6), [medium](https://towardsdatascience.com/a-beginners-guide-to-octis-vol-2-optimizing-topic-models-1214e58be1e5)

  • [neurips](https://papers.nips.cc/paper/2000/hash/f9d1152547c0bde01830b7e8bd60024c-Abstract.html)

  • [paper](https://aclanthology.org/2021.eacl-demos.31/)

  • [pwc](https://paperswithcode.com/dataset/20-newsgroups)

  • [yt](https://youtu.be/nPmiWBFFJ8E)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/MIND-Lab/OCTIS/blob/master/examples/OCTIS_Optimizing_CTM.ipynb) | 19.04.2021 |
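
A rough end-to-end sketch in the style of the OCTIS README — fetch a bundled corpus, train a baseline topic model, and score it; class and dataset names should be checked against the docs:

```python
from octis.dataset.dataset import Dataset
from octis.models.LDA import LDA
from octis.evaluation_metrics.diversity_metrics import TopicDiversity

# Fetch one of the preprocessed corpora that ship with OCTIS.
dataset = Dataset()
dataset.fetch_dataset("20NewsGroup")

# Train a baseline topic model.
model = LDA(num_topics=20)
output = model.train_model(dataset)

# Score the topics with a diversity metric over the top-10 words.
metric = TopicDiversity(topk=10)
print(metric.score(output))
```
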
| PyTorchVideo | Deep learning library with a focus on video understanding work |

  • [Haoqi Fan](https://haoqifan.github.io/)
  • [Tullie Murrell](https://github.com/tullie)
  • [Heng Wang](https://hengcv.github.io/)
  • [Kalyan Vasudev Alwala](https://github.com/kalyanvasudev)
  • others
  • [Yanghao Li](https://github.com/lyttonhao)
  • [Yilei Li](https://liyilui.github.io/personal_page/)
  • [Bo Xiong](https://github.com/bxiong1202)
  • [Nikhila Ravi](https://nikhilaravi.com/)
  • [Meng Li](https://mengli.me/)
  • [Haichuan Yang](https://hyang1990.github.io/)
  • [Jitendra Malik](https://scholar.google.com/citations?user=oY9R5YQAAAAJ)
  • [Ross Girshick](https://github.com/rbgirshick)
  • [Matt Feiszli](https://scholar.google.com/citations?user=A-wA73gAAAAJ)
  • [Aaron Adcock](https://scholar.google.com/citations?&user=oa78zHUAAAAJ)
  • [Wan-Yen Lo](https://github.com/wanyenlo)
  • [Christoph Feichtenhofer](http://feichtenhofer.github.io/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3474085.3478329)](https://doi.org/10.1145/3474085.3478329) [![](https://img.shields.io/github/stars/facebookresearch/pytorchvideo?style=social)](https://github.com/facebookresearch/pytorchvideo)

  • [arxiv](https://arxiv.org/abs/2111.09887), [arxiv](https://arxiv.org/abs/2104.11227)

  • [blog post](https://ai.facebook.com/blog/pytorchvideo-a-deep-learning-library-for-video-understanding/)

  • [docs](https://pytorchvideo.readthedocs.io/en/latest/index.html)

  • [website](https://github.com/facebookresearch/pytorchvideo)

  • [yt](https://youtu.be/b7-gnpqz9Qg)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/facebookresearch/pytorchvideo/blob/main/tutorials/accelerator/Build_your_model_with_PytorchVideo_Accelerator.ipynb) | 13.04.2021 |
| NeuSpell | Open-source toolkit for spelling correction in English (usage sketch below) |

  • [Sai Muralidhar Jayanthi](https://github.com/murali1996)
  • [Danish Pruthi](https://danishpruthi.com/)
  • [Graham Neubig](https://phontron.com/index.php)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.18653/v1/2020.emnlp-demos.21)](https://doi.org/10.18653/v1/2020.emnlp-demos.21) [![](https://img.shields.io/github/stars/neuspell/neuspell?style=social)](https://github.com/neuspell/neuspell)

  • [arxiv](https://arxiv.org/abs/2010.11085), [arxiv](https://arxiv.org/abs/1312.3005)

  • [hf](https://huggingface.co/transformers/bertology.html)

  • [medium](https://medium.com/@kunalgkjoshi/implementing-spell-correction-a-journey-with-xfspell-and-neuspell-4bc33e3bcde7)

  • [project](https://neuspell.github.io/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/neuspell/neuspell/blob/master/scripts/english_baselines/2_jamspell.ipynb) | 03.04.2021 |
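
A minimal sketch following the NeuSpell README: load a pretrained BERT-based checker and correct a misspelled sentence (the example sentence is illustrative):

```python
from neuspell import BertChecker

# Load a BERT-based checker with its pretrained weights.
checker = BertChecker()
checker.from_pretrained()

print(checker.correct("I luk forward to receving your reply"))
```
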
| GPT Neo | An implementation of model- & data-parallel GPT-2- and GPT-3-like models, with the ability to scale up to full GPT-3 sizes (and possibly more!), using the mesh-tensorflow library (generation sketch below) | [EleutherAI](https://www.eleuther.ai/) | [![](https://img.shields.io/github/stars/EleutherAI/gpt-neo?style=social)](https://github.com/EleutherAI/gpt-neo)

  • [GPT-2](https://openai.com/blog/better-language-models/)

  • [arxiv](https://arxiv.org/abs/2005.14165), [arxiv](https://arxiv.org/abs/2004.05150), [arxiv](https://arxiv.org/abs/1701.06538)

  • [git](https://github.com/tensorflow/mesh), [git](https://github.com/EleutherAI/gpt-neox/)

  • [pretrained](https://the-eye.eu/public/AI/gptneo-release/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/EleutherAI/GPTNeo/blob/master/GPTNeo_example_notebook.ipynb) | 28.03.2021 |
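
The released checkpoints are easiest to sample from through their Hugging Face ports rather than the mesh-tensorflow training code; a minimal generation sketch (prompt and sampling settings are illustrative):

```python
from transformers import pipeline

# EleutherAI publishes GPT-Neo checkpoints on the Hugging Face Hub.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

out = generator("The meaning of life is", max_length=50, do_sample=True)
print(out[0]["generated_text"])
```
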
| CVAE | This notebook demonstrates how to train a Variational Autoencoder on the MNIST dataset (reparameterization sketch below) |

  • [Diederik Kingma](http://www.dpkingma.com/)
  • [Max Welling](https://staff.fnwi.uva.nl/m.welling/)
  • [Danilo Rezende](https://danilorezende.com/about/)
  • [Shakir Mohamed](https://shakirm.com/)
  • [Daan Wierstra](https://scholar.google.com/citations?user=aDbsf28AAAAJ)

|

  • [arxiv](https://arxiv.org/abs/1312.6114), [arxiv](https://arxiv.org/abs/1401.4082)

  • [tf](https://www.tensorflow.org/tutorials/generative/cvae)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/cvae.ipynb) | 22.03.2021 |
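
The step that makes the VAE trainable is the reparameterization trick: sampling z = mean + sigma * eps keeps the stochastic node differentiable with respect to the encoder outputs. A sketch of that step:

```python
import tensorflow as tf

def reparameterize(mean, logvar):
    # Sample z ~ N(mean, exp(logvar)) as z = mean + sigma * eps with
    # eps ~ N(0, I); gradients flow through mean and logvar, not eps.
    eps = tf.random.normal(shape=tf.shape(mean))
    return mean + tf.exp(logvar * 0.5) * eps
```
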
| Big Sleep | Text-to-image generation, using OpenAI's CLIP and a BigGAN (usage sketch below) | [Phil Wang](https://lucidrains.github.io/) | [![](https://img.shields.io/github/stars/lucidrains/big-sleep?style=social)](https://github.com/lucidrains/big-sleep)

  • [arxiv](https://arxiv.org/abs/2103.00020), [arxiv](https://arxiv.org/abs/1809.11096)

  • [pypi](https://pypi.org/project/big-sleep/)

  • [reddit](https://www.reddit.com/r/bigsleep/comments/lxawb4/how_to_use_some_of_the_newer_features_of/), [reddit](https://www.reddit.com/r/bigsleep/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1MEWKbm-driRNF8PrU7ogS5o3se-ePyPb) | 17.03.2021 |
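
A minimal sketch following the big-sleep README: `Imagine` optimizes BigGAN latents against a CLIP-scored text prompt and periodically saves images (the prompt and settings are illustrative):

```python
from big_sleep import Imagine

dream = Imagine(
    text="a pyramid made of ice",  # prompt scored by CLIP
    lr=0.07,                       # latent-optimization learning rate
    save_every=25,                 # write an image every 25 steps
    save_progress=True,            # keep intermediate images
)
dream()
```
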
| Deep Daze | Text-to-image generation using OpenAI's CLIP and Siren | [Phil Wang](https://lucidrains.github.io/) | [![](https://img.shields.io/github/stars/lucidrains/deep-daze?style=social)](https://github.com/lucidrains/deep-daze)

  • [arxiv](https://arxiv.org/abs/2103.00020), [arxiv](https://arxiv.org/abs/2006.09661)

  • [pypi](https://pypi.org/project/deep-daze/)

  • [reddit](https://www.reddit.com/r/deepdaze/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1_YOHdORb0Fg1Q7vWZ_KlrtFe9Ur3pmVj) | 17.03.2021 |
| DCGAN | This tutorial demonstrates how to generate images of handwritten digits using a Deep Convolutional Generative Adversarial Network (generator sketch below) |

  • [Alec Radford](https://scholar.google.com/citations?user=dOad5HoAAAAJ)
  • [Luke Metz](https://lukemetz.com/)
  • [Soumith Chintala](https://soumith.ch/)

|

  • [arxiv](https://arxiv.org/abs/1511.06434), [arxiv](https://arxiv.org/abs/1701.00160)

  • [kaggle](https://www.kaggle.com/jessicali9530/celeba-dataset)

  • [medium](https://medium.com/@vedantjagtap2002/artificial-intelligence-approach-to-reduce-energy-used-for-cooling-data-centres-d2d78d92c107)

  • [tf](https://www.tensorflow.org/tutorials/generative/dcgan)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/dcgan.ipynb) | 12.03.2021 |
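
The heart of the tutorial is the generator: a dense projection reshaped to 7×7×256, upsampled by transposed convolutions to a 28×28×1 image. A condensed sketch of that model:

```python
import tensorflow as tf
from tensorflow.keras import layers

def make_generator_model():
    # Maps a 100-d noise vector to a 28x28x1 image, as in the tutorial.
    return tf.keras.Sequential([
        layers.Dense(7 * 7 * 256, use_bias=False, input_shape=(100,)),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Reshape((7, 7, 256)),
        layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        # tanh output keeps pixel values in [-1, 1].
        layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding="same",
                               use_bias=False, activation="tanh"),
    ])
```
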
| Adversarial FGSM | This tutorial creates an adversarial example using the Fast Gradient Sign Method, one of the first and most popular attacks to fool a neural network (perturbation sketch below) |

  • [Ian Goodfellow](https://www.iangoodfellow.com/)
  • [Jonathon Shlens](https://shlens.github.io/)
  • [Christian Szegedy](https://scholar.google.com/citations?user=bnQMuzgAAAAJ)

|

  • [arxiv](https://arxiv.org/abs/1412.6572)

  • [imagenet](http://www.image-net.org/)

  • [medium](https://medium.com/@zachariaharungeorge/a-deep-dive-into-the-fast-gradient-sign-method-611826e34865)

  • [tf](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/applications/MobileNetV2), [tf](https://www.tensorflow.org/tutorials/generative/adversarial_fgsm)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/adversarial_fgsm.ipynb) | 12.03.2021 |
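
FGSM in one line: x_adv = x + epsilon * sign(grad_x loss(f(x), y)). A sketch of that computation with a gradient tape (the loss and epsilon are illustrative; any differentiable classifier works):

```python
import tensorflow as tf

def fgsm_example(model, image, label, epsilon=0.01):
    # Adversarial example: x_adv = x + epsilon * sign(grad_x loss(model(x), y)).
    loss_object = tf.keras.losses.CategoricalCrossentropy()
    with tf.GradientTape() as tape:
        tape.watch(image)                 # track gradients w.r.t. the input
        prediction = model(image)
        loss = loss_object(label, prediction)
    gradient = tape.gradient(loss, image)
    return image + epsilon * tf.sign(gradient)
```
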
| GAN steerability | Navigating the GAN latent space to simulate various camera transformations |

  • [Ali Jahanian](http://people.csail.mit.edu/jahanian/)
  • [Lucy Chai](http://people.csail.mit.edu/lrchai/)
  • [Phillip Isola](http://web.mit.edu/phillipi/)

| [![](https://img.shields.io/github/stars/ali-design/gan_steerability?style=social)](https://github.com/ali-design/gan_steerability)

  • [arxiv](https://arxiv.org/abs/1907.07171), [arxiv](https://arxiv.org/abs/1809.11096)

  • [project](https://ali-design.github.io/gan_steerability/)

  • [yt](https://youtu.be/nS0V64sF7Cw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1kn6yG8PqD1U2bUcy32V1iAVjzlcQWcG3) | 04.03.2021 |
| Trax | End-to-end library for deep learning that focuses on clear code and speed (model sketch below) | [Google](https://research.google/teams/brain/) | [![](https://img.shields.io/github/stars/google/trax?style=social)](https://github.com/google/trax)

  • [arxiv](https://arxiv.org/abs/1910.00177)

  • [discuss](https://groups.google.com/u/1/g/trax-discuss)

  • [docs](https://trax-ml.readthedocs.io/en/latest/)

  • [kaggle](https://www.kaggle.com/abhinavwalia95/entity-annotated-corpus), [kaggle](https://www.kaggle.com/code/dschettler8845/exploration-of-trax-framework)

  • [medium](https://towardsdatascience.com/get-started-with-google-trax-for-nlp-ff8dcd3119cf), [medium](https://medium.com/analytics-vidhya/brief-view-of-googles-trax-library-b78eae008cb6)

  • [tf](https://www.tensorflow.org/datasets/catalog/overview), [tf](https://tensorflow.org/guide/tf_numpy)

  • [yt](https://youtu.be/qlTsaHAtJBY)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/trax/blob/master/trax/intro.ipynb) | 18.02.2021 |
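
Models are compositions of layers; a minimal classifier sketch in the style of the intro notebook (the vocabulary size and widths are illustrative, and the data pipeline is assumed to exist elsewhere):

```python
from trax import layers as tl

# Embed tokens, average over the sequence, and classify into 2 classes.
model = tl.Serial(
    tl.Embedding(vocab_size=8192, d_feature=256),
    tl.Mean(axis=1),   # average embeddings over the sequence dimension
    tl.Dense(2),
    tl.LogSoftmax(),
)
print(model)  # prints the composed layer structure
```
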
| bsuite | A collection of carefully-designed experiments that investigate core capabilities of an RL agent with two main objectives |

  • [Ian Osband](http://iosband.github.io/)
  • [Yotam Doron](http://www.yotamdoron.com/)
  • [Matteo Hessel](https://github.com/mtthss)
  • [John Aslanides](https://www.aslanides.io/)
  • others
  • [Eren Sezener](http://erensezener.com/)
  • [Andre Saraiva](https://andresnds.wordpress.com/)
  • [Katrina McKinney](https://medium.com/@katrinamckinney)
  • [Tor Lattimore](http://tor-lattimore.com/)
  • [Csaba Szepesvari](https://sites.ualberta.ca/~szepesva/)
  • [Satinder Singh](http://web.eecs.umich.edu/~baveja/)
  • [Benjamin Van Roy](https://web.stanford.edu/~bvr/)
  • [Richard Sutton](http://www.incompleteideas.net/)
  • [David Silver](https://www.davidsilver.uk/)
  • [Hado Van Hasselt](https://hadovanhasselt.com/)

| [![](https://img.shields.io/github/stars/deepmind/bsuite?style=social)](https://github.com/deepmind/bsuite)

  • [git](https://github.com/openai/gym)

  • [paper](https://openreview.net/forum?id=rygf-kSYwH)

  • [yt](https://youtu.be/Wcv4eU_qtZU)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1rU20zJ281sZuMD1DHbsODFr1DbASL0RH) | 13.02.2021 |
| TF-Ranking | End-to-end walkthrough of training a TensorFlow Ranking neural network model which incorporates sparse textual features | [Rama Kumar](https://github.com/ramakumar1729) | [![](https://img.shields.io/github/stars/tensorflow/ranking?style=social)](https://github.com/tensorflow/ranking)

  • [arxiv](https://arxiv.org/abs/1910.09676), [arxiv](https://arxiv.org/abs/1812.00073), [arxiv](https://arxiv.org/abs/1905.08957), [arxiv](https://arxiv.org/abs/1811.04415)

  • [data](http://hamedz.ir/resources/)

  • [git](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/input.proto#L72)

  • [wiki](https://en.wikipedia.org/wiki/Mean_reciprocal_rank), [wiki](https://en.wikipedia.org/wiki/Discounted_cumulative_gain)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/ranking/blob/master/tensorflow_ranking/examples/handling_sparse_features.ipynb) | 04.02.2021 |
| Toon-Me | A fun project to toon portrait images | [Vijish Madhavan](https://github.com/vijishmadhavan) | [![](https://img.shields.io/github/stars/vijishmadhavan/Toon-Me?style=social)](https://github.com/vijishmadhavan/Toon-Me)
  • [arxiv](https://arxiv.org/abs/1710.10196), [arxiv](https://arxiv.org/abs/1707.02921), [arxiv](https://arxiv.org/abs/1603.08155)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/vijishmadhavan/Light-Up/blob/master/Toon_Me_(Try_it_on_Colab).ipynb) | 22.01.2021 |
| TensorNetwork | A library for easy and efficient manipulation of tensor networks (contraction sketch below) | [Chase Roberts](http://thenerdstation.github.io/) | [![](https://img.shields.io/github/stars/google/TensorNetwork?style=social)](https://github.com/google/TensorNetwork)

  • [arxiv](https://arxiv.org/abs/1708.00006), [arxiv](https://arxiv.org/abs/1306.2164)

  • [docs](https://tensornetwork.readthedocs.io/)

  • [yt](https://www.youtube.com/watch?v=YN2YBB0viKo)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/TensorNetwork/blob/master/colabs/Tensor_Networks_in_Neural_Networks.ipynb) | 21.01.2021 |
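
The core objects are nodes and edges: connecting two axes with `^` and contracting the resulting edge performs the tensor contraction. A sketch of an inner product as a two-node network:

```python
import numpy as np
import tensornetwork as tn

# An inner product of two vectors as a two-node tensor network.
a = tn.Node(np.ones(10))
b = tn.Node(np.ones(10))
edge = a[0] ^ b[0]          # connect axis 0 of a to axis 0 of b
result = tn.contract(edge)  # contract the shared edge
print(result.tensor)        # 10.0
```
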
| Spleeter | Deezer source separation library including pretrained models (usage sketch below) |

  • [Romain Hennequin](http://romain-hennequin.fr/)
  • [Anis Khlif](https://github.com/alreadytaikeune)
  • [Félix Voituret](https://github.com/Faylixe)
  • [Manuel Moussallam](https://mmoussallam.github.io/)

| [![](https://img.shields.io/github/stars/deezer/spleeter?style=social)](https://github.com/deezer/spleeter)

  • [blog post](https://deezer.io/releasing-spleeter-deezer-r-d-source-separation-engine-2b88985e797e)

  • [data](https://sigsep.github.io/datasets/musdb.html)

  • [project](https://research.deezer.com/projects/spleeter.html)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deezer/spleeter/blob/master/spleeter.ipynb) | 10.01.2021 |
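
Separation is a single call against a pretrained configuration; a sketch with the 2-stems (vocals/accompaniment) model and illustrative file paths:

```python
from spleeter.separator import Separator

# Pretrained configurations: 'spleeter:2stems', 'spleeter:4stems', 'spleeter:5stems'.
separator = Separator("spleeter:2stems")

# Writes vocals and accompaniment stems under output/.
separator.separate_to_file("audio_example.mp3", "output/")
```
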
| Bullet Physics SDK | Real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, machine learning, etc. (PyBullet sketch below) |

  • [Erwin Coumans](https://github.com/erwincoumans)
  • [Yunfei Bai](https://github.com/YunfeiBai)

| [![](https://img.shields.io/github/stars/bulletphysics/bullet3?style=social)](https://github.com/bulletphysics/bullet3)

  • [docs](https://docs.google.com/document/d/10sXEhzFRSnvFcl3XxNGhnD4N2SedqwdAvK3dsihxVUA/edit#heading=h.2ye70wns7io3)

  • [git](https://github.com/Microsoft/vcpkg)

  • [website](https://pybullet.org)

  • [yt](https://www.youtube.com/playlist?list=PLinBNdD-7nkNCfoEKap4z3qadLVj8QB4a), [yt](https://youtu.be/9p0O941opGc), [yt](https://youtu.be/kZxPaGdoSJY), [yt](https://www.youtube.com/playlist?list=PL9LUFPiB6N3YrS0O7XM_1sBVWRnSRB643)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/bulletphysics/bullet3/blob/master/examples/pybullet/notebooks/HelloPyBullet.ipynb) | 14.10.2020 |
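
The HelloPyBullet notebook boils down to connecting to a physics server, loading URDFs, and stepping the simulation; a minimal headless sketch:

```python
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # headless physics server; use p.GUI locally
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -10)

p.loadURDF("plane.urdf")
robot = p.loadURDF("r2d2.urdf", [0, 0, 1])

for _ in range(240):  # simulate one second at the default 240 Hz
    p.stepSimulation()

print(p.getBasePositionAndOrientation(robot))
p.disconnect()
```
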
| Person Remover | Project that combines Pix2Pix and YOLO architectures in order to remove people or other objects from photos |

  • [Javier Gamazo](https://www.javiergamazo.com/)
  • [Daryl Autar](https://github.com/Daryl149)

| [![](https://img.shields.io/github/stars/javirk/Person_remover?style=social)](https://github.com/javirk/Person_remover)

  • [git](https://github.com/javirk/Person-remover-partial-convolutions), [git](https://github.com/zzh8829/yolov3-tf2)

  • [yt](https://www.youtube.com/watch?v=_dRjY9gMcxE)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1JDpH8MAjaKoekQ_H9ZaxYJ9_axiDtDGm) | 22.08.2020 |
| Semantic Segmentation | Pytorch implementation for Semantic Segmentation/Scene Parsing on MIT ADE20K dataset |

  • [Bolei Zhou](https://boleizhou.github.io/)
  • [Hang Zhao](https://hangzhaomit.github.io/)
  • [Xavier Puig](https://people.csail.mit.edu/xavierpuig/)
  • [Sanja Fidler](http://www.cs.toronto.edu/~fidler/index.html)
  • [Antonio Torralba](https://groups.csail.mit.edu/vision/torralbalab/)

| [![](https://img.shields.io/github/stars/CSAILVision/semantic-segmentation-pytorch?style=social)](https://github.com/CSAILVision/semantic-segmentation-pytorch)

  • [arxiv](https://arxiv.org/abs/1608.05442), [arxiv](https://arxiv.org/abs/1612.01105), [arxiv](https://arxiv.org/abs/1807.10221), [arxiv](https://arxiv.org/abs/1904.04514)

  • [git](https://github.com/CSAILVision/sceneparsing), [git](https://github.com/vacancy/Synchronized-BatchNorm-PyTorch), [git](https://github.com/hszhao/semseg)

  • [project](http://sceneparsing.csail.mit.edu/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/CSAILVision/semantic-segmentation-pytorch/blob/master/notebooks/DemoSegmenter.ipynb) | 21.08.2020 |
| Gin Config | Lightweight configuration framework for Python, based on dependency injection (usage sketch below) |

  • [Dan Holtmann-Rice](https://github.com/dhr)
  • [Sergio Guadarrama](https://github.com/sguada)
  • [Nathan Silberman](http://nsilberman.com/)

| [![](https://img.shields.io/github/stars/google/gin-config?style=social)](https://github.com/google/gin-config)
  • [medium](https://towardsdatascience.com/stop-worrying-about-configs-with-gin-218562dd5c91)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/gin-config/blob/master/gin/gin_intro.ipynb) | 13.08.2020 |
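
Any function marked `@gin.configurable` can have its defaults rebound from a config string or `.gin` file; a minimal sketch with an illustrative function:

```python
import gin

@gin.configurable
def build_model(num_layers=2, hidden_units=64):
    # Defaults can be overridden from a .gin file without changing call sites.
    return [hidden_units] * num_layers

gin.parse_config("""
build_model.num_layers = 4
build_model.hidden_units = 128
""")

print(build_model())  # [128, 128, 128, 128]
```
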
| Dopamine | Research framework for fast prototyping of reinforcement learning algorithms |

  • [Pablo Castro](https://psc-g.github.io/)
  • [Subhodeep Moitra](http://www.deepmoitra.com/)
  • [Carles Gelada](https://github.com/cgel)
  • [Saurabh Kumar](https://scholar.google.com/citations?user=Rkr2uT8AAAAJ)
  • [Marc Bellemare](http://www.marcgbellemare.info/)

| [![](https://img.shields.io/github/stars/google/dopamine?style=social)](https://github.com/google/dopamine)

  • [arxiv](https://arxiv.org/abs/1812.06110), [arxiv](https://arxiv.org/abs/1511.05952), [arxiv](https://arxiv.org/abs/1812.05905), [arxiv](https://arxiv.org/abs/1806.06923)

  • [baselines](https://google.github.io/dopamine/baselines/)

  • [blog post](https://opensource.googleblog.com/2019/02/dopamine-2.0.html)

  • [docker](https://google.github.io/dopamine/docker/)

  • [docs](https://google.github.io/dopamine/docs/)

  • [git](https://github.com/openai/atari-py#roms), [git](https://github.com/openai/mujoco-py#install-mujoco)

  • [medium](https://medium.com/the-21st-century/google-dopamine-new-rl-framework-f84a35b7fb3f)

  • [yt](https://www.youtube.com/live/FWFoyFjeAaM?feature=share), [yt](https://youtu.be/bd4CsDp00RA)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/dopamine/blob/master/dopamine/colab/jax_agent_visualizer.ipynb) | 04.08.2020 |
| Analyzing Tennis Serve | We'll use the Video Intelligence API to analyze a tennis serve, including the angle of the arms and legs during the serve | [Dale Markowitz](https://daleonai.com/) | [![](https://img.shields.io/github/stars/google/making_with_ml?style=social)](https://github.com/google/making_with_ml/tree/master/sports_ai)

  • [blog post](https://daleonai.com/machine-learning-for-sports)

  • [medium](https://manivannan-ai.medium.com/find-the-angle-between-three-points-from-2d-using-python-348c513e2cd)

  • [yt](https://www.youtube.com/watch?v=yLrOy2Xedgk)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/google/making_with_ml/blob/master/sports_ai/Sports_AI_Analysis.ipynb) | 14.07.2020 |
| YOLOv4 | This tutorial will help you build YOLOv4 easily in the cloud with GPU enabled so that you can run object detections in milliseconds! | [Alexey Bochkovskiy](http://www.alexeyab.com/) | [![](https://img.shields.io/github/stars/AlexeyAB/darknet?style=social)](https://github.com/AlexeyAB/darknet)

  • [arxiv](https://arxiv.org/abs/2004.10934), [arxiv](https://arxiv.org/abs/2011.08036)

  • [medium](https://alexeyab84.medium.com/yolov4-the-most-accurate-real-time-neural-network-on-ms-coco-dataset-73adfd3602fe), [medium](https://alexeyab84.medium.com/scaled-yolo-v4-is-the-best-neural-network-for-object-detection-on-ms-coco-dataset-39dfa22fa982)

  • [project](https://pjreddie.com/darknet/)

  • [reddit](https://www.reddit.com/r/MachineLearning/comments/gydxzd/p_yolov4_the_most_accurate_realtime_neural/)

  • [yt](https://youtu.be/1_SiUOYUoOI), [yt](https://youtu.be/YDFf-TqJOFE)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/1_GdoqCJWXsChrOiY8sZMr_zbr_fH-0Fg) | 25.06.2020 |
| TensorFlow Graphics | Differentiable computer graphics in tensorflow |

  • [Julien Valentin](https://github.com/julienvalentin)
  • [Cem Keskin](https://github.com/cem-keskin)
  • [Pavel Pidlypenskyi](https://github.com/podlipensky)
  • [Ameesh Makadia](https://github.com/amakadia)
  • others
  • [Avneesh Sud](https://github.com/avneesh-g)
  • [Sofien Bouaziz](http://sofienbouaziz.com/)

| [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3450508.3464595)](https://doi.org/10.1145/3450508.3464595) [![](https://img.shields.io/github/stars/tensorflow/graphics?style=social)](https://github.com/tensorflow/graphics)

  • [medium](https://medium.com/syncedreview/computer-graphics-computer-vision-tensorflow-graphics-110e955e26bb)

  • [tf](https://www.tensorflow.org/graphics)

  • [twitter](https://twitter.com/_TFGraphics_)

  • [yt](https://youtu.be/Un0JDL3i5Hg)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/6dof_alignment.ipynb) | 20.05.2020 |
| GAN Dissection | Visualizing and Understanding Generative Adversarial Networks |

  • [David Bau](https://people.csail.mit.edu/davidbau/home/)
  • [Jun-Yan Zhu](https://www.cs.cmu.edu/~junyanz/)
  • [Hendrik Strobelt](http://hendrik.strobelt.com/)
  • [Bolei Zhou](https://boleizhou.github.io/)
  • others
  • [Joshua Tenenbaum](https://mitibmwatsonailab.mit.edu/people/joshua-tenenbaum/)
  • [William Freeman](https://billf.mit.edu/)
  • [Antonio Torralba](https://groups.csail.mit.edu/vision/torralbalab/)

| [![](https://img.shields.io/github/stars/CSAILVision/GANDissect?style=social)](https://github.com/CSAILVision/GANDissect)

  • [arxiv](https://arxiv.org/abs/1811.10597), [arxiv](https://arxiv.org/abs/1901.09887), [arxiv](https://arxiv.org/abs/1807.10221)

  • [demo](http://gandissect.res.ibm.com/ganpaint.html)

  • [git](https://github.com/CSAILVision/NetDissect), [git](https://github.com/junyanz/iGAN)

  • [project](https://gandissect.csail.mit.edu/)

  • [yt](https://www.youtube.com/watch?v=yVCgUYe4JTM)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/SIDN-IAP/global-model-repr/blob/master/notebooks/gandissect_solutions.ipynb) | 04.05.2020 |
| Sonnet | Library built on top of TensorFlow 2 designed to provide simple, composable abstractions for machine learning research |

  • [Malcolm Reynolds](https://github.com/malcolmreynolds)
  • [Jack Rae](https://github.com/dm-jrae)
  • [Andreas Fidjeland](https://github.com/akfidjeland)
  • [Fabio Viola](https://github.com/fabioviola)
  • others
  • [Adrià Puigdomènech](https://github.com/adria-p)
  • [Frederic Besse](https://github.com/fbesse)
  • [Tim Green](http://tfgg.me/)
  • [Sébastien Racanière](https://scholar.google.com/citations?user=o-h0vrQAAAAJ)
  • [Gabriel Barth-Maron](https://github.com/fastturtle)
  • [Diego Casas](https://github.com/diegolascasas)

| [![](https://img.shields.io/github/stars/deepmind/sonnet?style=social)](https://github.com/deepmind/sonnet)

  • [deepmind](https://www.deepmind.com/blog/open-sourcing-sonnet-a-new-library-for-constructing-neural-networks)

  • [docs](https://sonnet.readthedocs.io/en/latest/index.html)

  • [neurips](https://papers.nips.cc/paper/2016/hash/fb87582825f9d28a8d42c5e5e5e8b23d-Abstract.html)

  • [tf](https://www.tensorflow.org/guide/checkpoint), [tf](https://www.tensorflow.org/guide/saved_model)

  • [yt](https://youtu.be/rlpQjnUvoKw)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/deepmind/sonnet/blob/v2/examples/little_gan_on_mnist.ipynb) | 17.04.2020 |
| Classification of chest vs. abdominal X-rays | The goal of this tutorial is to build a deep learning classifier to accurately differentiate between chest and abdominal X-rays | [tmoneyx01](https://github.com/tmoneyx01) | [![](https://img.shields.io/github/stars/mdai/mdai-client-py?style=social)](https://github.com/mdai/mdai-client-py)

  • [annotator](https://public.md.ai/annotator/project/PVq9raBJ)

  • [docs](https://docs.md.ai/)

  • [pypi](https://pypi.org/project/mdai/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/mdai/ml-lessons/blob/master/lesson1-xray-images-classification.ipynb) | 07.03.2020 |
| Earth Engine Python API and Folium Interactive Mapping | This notebook demonstrates how to setup the Earth Engine and provides several examples for visualizing Earth Engine processed data interactively using the folium library | [Qiusheng Wu](https://wetlands.io/) | [![](https://img.shields.io/github/stars/python-visualization/folium?style=social)](https://github.com/python-visualization/folium)
  • [api](https://developers.google.com/earth-engine/python_install)
| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/giswqs/qgis-earthengine-examples/blob/master/Folium/ee-api-folium-setup.ipynb) | 20.01.2020 |
| Tensor2Tensor | Library for deep learning models that is well-suited for neural machine translation and includes the reference implementation of the state-of-the-art Transformer model |

  • [Ashish Vaswani](https://scholar.google.com/citations?user=oR9sCGYAAAAJ)
  • [Samy Bengio](https://scholar.google.com/citations?user=Vs-MdPcAAAAJ)
  • [Eugene Brevdo](https://ebrevdo.github.io/)
  • [François Chollet](https://fchollet.com/)
  • others
  • [Aidan Gomez](https://gom.ai/)
  • [Stephan Gouws](https://scholar.google.com/citations?user=lLTdYUYAAAAJ)
  • [Llion Jones](https://www.linkedin.com/in/llion-jones-9ab3064b)
  • [Łukasz Kaiser](https://scholar.google.com/citations?user=JWmiQR0AAAAJ)
  • [Nal Kalchbrenner](https://www.nal.ai/)
  • [Niki Parmar](https://github.com/nikiparmar)
  • [Ryan Sepassi](https://ryansepassi.com/)
  • [Noam Shazeer](https://github.com/nshazeer)
  • [Jakob Uszkoreit](https://scholar.google.com/citations?user=mOG0bwsAAAAJ)

| [![](https://img.shields.io/github/stars/tensorflow/tensor2tensor?style=social)](https://github.com/tensorflow/tensor2tensor)

  • [arxiv](https://arxiv.org/abs/1803.07416), [arxiv](https://arxiv.org/abs/1812.02825), [arxiv](https://arxiv.org/abs/1706.03762), [arxiv](https://arxiv.org/abs/1706.03059), [arxiv](https://arxiv.org/abs/1706.05137), [arxiv](https://arxiv.org/abs/1801.09797)

  • [blog post](https://ai.googleblog.com/2017/06/accelerating-deep-learning-research.html)

  • [data](https://research.fb.com/downloads/babi/)

  • [medium](https://towardsdatascience.com/tensor2tensor-and-one-model-to-learn-them-all-7ef3f9b61ba4)

  • [tf](https://tensorflow.github.io/tensor2tensor/cloud_mlengine.html), [tf](https://tensorflow.github.io/tensor2tensor/cloud_tpu.html)

  • [yt](https://youtu.be/O2UvKxaOH7c), [yt](https://youtu.be/VYQ8n3Besrw), [yt](https://youtu.be/cS2UZKHq4i4)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/tensorflow/tensor2tensor/blob/master/tensor2tensor/notebooks/Transformer_translate.ipynb) | 14.01.2020 |
| Traffic counting | Making Road Traffic Counting App based on Computer Vision and OpenCV | [Andrey Nikishaev](https://github.com/creotiv) | [![](https://img.shields.io/github/stars/creotiv/object_detection_projects?style=social)](https://github.com/creotiv/object_detection_projects/tree/master/opencv_traffic_counting)

  • [medium](https://medium.com/machine-learning-world/tutorial-making-road-traffic-counting-app-based-on-computer-vision-and-opencv-166937911660)

  • [yt](https://www.youtube.com/watch?v=_o5iLbRHKao)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/drive/12N4m_RYKqrpozRzh9qe7nQE_sIqQH9U8) | 10.01.2020 |
| NYU-DLSP20 | This course concerns the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition |

  • [Yann LeCun](https://yann.lecun.com/)
  • [Alfredo Canziani](https://atcold.github.io/)

| [![](https://img.shields.io/github/stars/Atcold/NYU-DLSP20?style=social)](https://github.com/Atcold/NYU-DLSP20)

  • [discord](https://discord.gg/CthuqsX8Pb)

  • [git](https://github.com/Atcold/NYU-DLSP21), [git](https://github.com/Atcold/NYU-DLFL22)

  • [reddit](https://www.reddit.com/r/NYU_DeepLearning/)

  • [website](https://atcold.github.io/NYU-DLSP20/)

  • [yt](https://www.youtube.com/playlist?list=PLLHTzKZzVU9eaEyErdV26ikyolxOsz6mq)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/Atcold/NYU-DLSP20/blob/master/00-logic_neuron_programming.ipynb) | 30.10.2019 |
| Imagededup | This package provides hashing algorithms that are particularly good at finding exact duplicates, as well as convolutional neural networks adept at finding near duplicates (usage sketch below) |

  • [Tanuj Jain](https://github.com/tanujjain)
  • [Christopher Lennan](https://github.com/clennan)
  • [Dat Tran](https://dat-tran.com/)

| [![](https://img.shields.io/github/stars/idealo/imagededup?style=social)](https://github.com/idealo/imagededup)

  • [arxiv](https://arxiv.org/abs/1704.04861)

  • [medium](https://fullstackml.com/wavelet-image-hash-in-python-3504fdd282b5)

  • [project](https://idealo.github.io/imagededup/)

| [![Open In Colab](images/colab.svg)](https://colab.research.google.com/github/idealo/imagededup/blob/master/examples/CIFAR10_duplicates.ipynb) | 03.10.2019 |
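
Deduplication is hash-then-match; a sketch using perceptual hashing (the image directory is illustrative; the CNN-based methods expose the same interface):

```python
from imagededup.methods import PHash

phasher = PHash()

# Hash every image in a directory, then match hashes to find duplicates.
encodings = phasher.encode_images(image_dir="path/to/images")
duplicates = phasher.find_duplicates(encoding_map=encodings)

print(duplicates)  # {'img1.jpg': ['img1_copy.jpg'], ...}
```
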
# Best of the best
| authors | repositories | papers | packages |
|---|---|---|---|
|

  • [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/)
  • [Ziwei Liu](https://liuziwei7.github.io/)
  • [Xintao Wang](https://xinntao.github.io/)
  • [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ)
  • [Daniel Cohen-Or](https://danielcohenor.com/)
  • [Adam Roberts](https://github.com/adarob)
  • [Curtis Hawthorne](https://github.com/cghawthorne)
  • [Jesse Engel](https://github.com/jesseengel)
  • [Eli Shechtman](https://research.adobe.com/person/eli-shechtman/)
  • [Björn Ommer](https://ommer-lab.com/people/ommer/)
  • [Yuval Alaluf](https://yuval-alaluf.github.io/)
  • [Or Patashnik](https://orpatashnik.github.io/)
  • [Michael Black](https://ps.is.mpg.de/~black)
  • [Yong Zhang](https://yzhang2016.github.io/)
  • [Billy Lamberta](https://github.com/lamberta)
  • [Nikhila Ravi](https://nikhilaravi.com/)
  • [Patrick Esser](https://github.com/pesser)
  • [Robin Rombach](https://github.com/rromb)
  • [Amit Bermano](https://www.cs.tau.ac.il/~amberman/)
  • [Jun-Yan Zhu](https://www.cs.cmu.edu/~junyanz/)
  • [Bolei Zhou](https://boleizhou.github.io/)
  • [Xiaodong Cun](https://vinthony.github.io/academic/)
  • [Krzysztof Ostrowski](https://github.com/krzys-ostrowski)

|

  • ollama [![](https://img.shields.io/github/stars/ollama/ollama?style=social)](https://github.com/ollama/ollama)
  • langchain [![](https://img.shields.io/github/stars/langchain-ai/langchain?style=social)](https://github.com/langchain-ai/langchain)
  • models [![](https://img.shields.io/github/stars/tensorflow/models?style=social)](https://github.com/tensorflow/models)
  • whisper [![](https://img.shields.io/github/stars/openai/whisper?style=social)](https://github.com/openai/whisper)
  • stable-diffusion [![](https://img.shields.io/github/stars/CompVis/stable-diffusion?style=social)](https://github.com/CompVis/stable-diffusion)
  • ComfyUI [![](https://img.shields.io/github/stars/comfyanonymous/ComfyUI?style=social)](https://github.com/comfyanonymous/ComfyUI)
  • open-interpreter [![](https://img.shields.io/github/stars/KillianLucas/open-interpreter?style=social)](https://github.com/KillianLucas/open-interpreter)
  • Real-Time-Voice-Cloning [![](https://img.shields.io/github/stars/CorentinJ/Real-Time-Voice-Cloning?style=social)](https://github.com/CorentinJ/Real-Time-Voice-Cloning)
  • yolov5 [![](https://img.shields.io/github/stars/ultralytics/yolov5?style=social)](https://github.com/ultralytics/yolov5)
  • segment-anything [![](https://img.shields.io/github/stars/facebookresearch/segment-anything?style=social)](https://github.com/facebookresearch/segment-anything)
  • PythonDataScienceHandbook [![](https://img.shields.io/github/stars/jakevdp/PythonDataScienceHandbook?style=social)](https://github.com/jakevdp/PythonDataScienceHandbook)
  • Fooocus [![](https://img.shields.io/github/stars/lllyasviel/Fooocus?style=social)](https://github.com/lllyasviel/Fooocus)
  • stablediffusion [![](https://img.shields.io/github/stars/Stability-AI/stablediffusion?style=social)](https://github.com/Stability-AI/stablediffusion)
  • llama_index [![](https://img.shields.io/github/stars/run-llama/llama_index?style=social)](https://github.com/run-llama/llama_index)
  • Open-Assistant [![](https://img.shields.io/github/stars/LAION-AI/Open-Assistant?style=social)](https://github.com/LAION-AI/Open-Assistant)
  • bark [![](https://img.shields.io/github/stars/suno-ai/bark?style=social)](https://github.com/suno-ai/bark)
  • GFPGAN [![](https://img.shields.io/github/stars/TencentARC/GFPGAN?style=social)](https://github.com/TencentARC/GFPGAN)
  • TTS [![](https://img.shields.io/github/stars/coqui-ai/TTS?style=social)](https://github.com/coqui-ai/TTS)
  • autogen [![](https://img.shields.io/github/stars/microsoft/autogen?style=social)](https://github.com/microsoft/autogen)
  • visual-chatgpt [![](https://img.shields.io/github/stars/microsoft/visual-chatgpt?style=social)](https://github.com/microsoft/visual-chatgpt)
  • google-research [![](https://img.shields.io/github/stars/google-research/google-research?style=social)](https://github.com/google-research/google-research)
  • ray [![](https://img.shields.io/github/stars/ray-project/ray?style=social)](https://github.com/ray-project/ray)
  • ultralytics [![](https://img.shields.io/github/stars/ultralytics/ultralytics?style=social)](https://github.com/ultralytics/ultralytics)

|

  • Image segmentation [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-319-24574-4_28)](https://doi.org/10.1007/978-3-319-24574-4_28)
  • AlphaFold [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1038/s41586-021-03819-2)](https://doi.org/10.1038/s41586-021-03819-2)
  • XGBoost [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/2939672.2939785)](https://doi.org/10.1145/2939672.2939785)
  • CycleGAN [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV.2017.244)](https://doi.org/10.1109/ICCV.2017.244)
  • Pix2Pix [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR.2017.632)](https://doi.org/10.1109/CVPR.2017.632)
  • MoCo [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00975)](https://doi.org/10.1109/CVPR42600.2020.00975)
  • LDM [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.01042)](https://doi.org/10.1109/CVPR52688.2022.01042)
  • EfficientDet [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.01079)](https://doi.org/10.1109/CVPR42600.2020.01079)
  • DeepLabCut [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1038/s41593-018-0209-y)](https://doi.org/10.1038/s41593-018-0209-y)
  • StyleGAN 2 [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR42600.2020.00813)](https://doi.org/10.1109/CVPR42600.2020.00813)
  • ConvNeXt [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.01167)](https://doi.org/10.1109/CVPR52688.2022.01167)
  • Classify text with BERT [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.18653/v1/N19-1423)](https://doi.org/10.18653/v1/N19-1423)
  • SwinIR [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCVW54120.2021.00210)](https://doi.org/10.1109/ICCVW54120.2021.00210)
  • Instant-NGP [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1145/3528223.3530127)](https://doi.org/10.1145/3528223.3530127)
  • HMR [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR.2018.00744)](https://doi.org/10.1109/CVPR.2018.00744)
  • Mask2Former [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR52688.2022.00135)](https://doi.org/10.1109/CVPR52688.2022.00135)
  • Taming Transformers for High-Resolution Image Synthesis [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.01268)](https://doi.org/10.1109/CVPR46437.2021.01268)
  • PIFu [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV.2019.00239)](https://doi.org/10.1109/ICCV.2019.00239)
  • Neural Style Transfer [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1167/16.12.326)](https://doi.org/10.1167/16.12.326)
  • ByteTrack [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1007/978-3-031-20047-2_1)](https://doi.org/10.1007/978-3-031-20047-2_1)
  • SPIN [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCV.2019.00234)](https://doi.org/10.1109/ICCV.2019.00234)
  • Pixel2Style2Pixel [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/CVPR46437.2021.00232)](https://doi.org/10.1109/CVPR46437.2021.00232)
  • Real-ESRGAN [![](https://api.juleskreuer.eu/citation-badge.php?doi=10.1109/ICCVW54120.2021.00217)](https://doi.org/10.1109/ICCVW54120.2021.00217)

|

  • xgboost [![](https://img.shields.io/pypi/dm/xgboost?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/xgboost/)
  • langchain [![](https://img.shields.io/pypi/dm/langchain?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/langchain/)
  • catboost [![](https://img.shields.io/pypi/dm/catboost?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/catboost/)
  • llama-index [![](https://img.shields.io/pypi/dm/llama-index?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/llama-index/)
  • langgraph [![](https://img.shields.io/pypi/dm/langgraph?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/langgraph/)
  • ollama [![](https://img.shields.io/pypi/dm/ollama?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/ollama/)
  • autofaiss [![](https://img.shields.io/pypi/dm/autofaiss?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/autofaiss/)
  • mmdet [![](https://img.shields.io/pypi/dm/mmdet?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/mmdet/)
  • unsloth [![](https://img.shields.io/pypi/dm/unsloth?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/unsloth/)
  • mmsegmentation [![](https://img.shields.io/pypi/dm/mmsegmentation?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/mmsegmentation/)
  • transformer-lens [![](https://img.shields.io/pypi/dm/transformer-lens?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/transformer-lens/)
  • mmpose [![](https://img.shields.io/pypi/dm/mmpose?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/mmpose/)
  • img2dataset [![](https://img.shields.io/pypi/dm/img2dataset?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/img2dataset/)
  • datachain [![](https://img.shields.io/pypi/dm/datachain?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/datachain/)
  • Crawl4AI [![](https://img.shields.io/pypi/dm/Crawl4AI?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/Crawl4AI/)
  • sae-lens [![](https://img.shields.io/pypi/dm/sae-lens?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/sae-lens/)
  • mistral-inference [![](https://img.shields.io/pypi/dm/mistral-inference?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/mistral-inference/)
  • reformer-pytorch [![](https://img.shields.io/pypi/dm/reformer-pytorch?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/reformer-pytorch/)
  • dm-reverb [![](https://img.shields.io/pypi/dm/dm-reverb?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/dm-reverb/)
  • clip-retrieval [![](https://img.shields.io/pypi/dm/clip-retrieval?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/clip-retrieval/)
  • rl-games [![](https://img.shields.io/pypi/dm/rl-games?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/rl-games/)
  • tensor-parallel [![](https://img.shields.io/pypi/dm/tensor-parallel?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/tensor-parallel/)
  • mmocr [![](https://img.shields.io/pypi/dm/mmocr?style=flat&logo=pypi&label=%E2%80%8D&labelColor=f7f7f4&color=006dad)](https://pypi.org/mmocr/)

|

[![Stargazers over time](https://starchart.cc/amrzv/awesome-colab-notebooks.svg?variant=adaptive)](https://starchart.cc/amrzv/awesome-colab-notebooks)

(generated by [generate_markdown.py](generate_markdown.py) based on [research.json](data/research.json) and [tutorials.json](data/tutorials.json))