Awesome-Image-Colorization
:books: A collection of deep-learning-based image colorization and video colorization papers.
https://github.com/MarkMoHR/Awesome-Image-Colorization
1. Automatic Image Colorization
1.1 Software / Demo
1.2 Papers
- Learning Large-Scale Automatic Image Colorization
- Deep Colorization
- Learning Representations for Automatic Colorization
- Colorful Image Colorization
- Let there be Color!: Joint End-to-end Learning of Global and Local Image Priors for Automatic Image Colorization with Simultaneous Classification
- Learning Diverse Image Colorization
- Structural Consistency and Controllability for Diverse Colorization
- Coloring With Limited Data: Few-Shot Colorization via Memory Augmented Networks
- ChromaGAN: Adversarial Picture Colorization with Semantic Class Distribution
- Instance-aware Image Colorization
- Pixelated Semantic Colorization
- Colorization Transformer
- Focusing on Persons: Colorizing Old Images Learning from Modern Historical Movies
- Towards Vivid and Diverse Image Colorization with Generative Color Prior
- ColorFormer: Image Colorization via Color Memory assisted Hybrid-attention Transformer
- CT2: Colorization Transformer via Color Tokens
- Disentangled Image Colorization via Global Anchors
- Improved Diffusion-based Image Colorization via Piggybacked Models
- DDColor: Towards Photo-Realistic and Semantic-Aware Image Colorization via Dual Decoders
- Region Assisted Sketch Colorization
- Automatic Controllable Colorization via Imagination [[project]](https://xy-cong.github.io/imagine-colorization/)
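Nearly all of the automatic-colorization papers above (e.g. Colorful Image Colorization) cast the task as predicting the two ab chroma channels of CIELAB from the grayscale L lightness channel. A minimal NumPy sketch of that problem setup — the sRGB-to-Lab conversion and the toy image are illustrative assumptions, not any specific paper's pipeline:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1], shape (..., 3), to CIELAB (D65 white)."""
    # Undo the sRGB gamma to get linear RGB.
    linear = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # Linear RGB -> XYZ using the standard sRGB/D65 matrix.
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = linear @ m.T
    # Normalize by the D65 reference white.
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    # XYZ -> Lab nonlinearity (cube root with a linear toe).
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

# A toy 2x2 "image": red, green, blue, and a neutral mid gray.
img = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                [[0.0, 0.0, 1.0], [0.5, 0.5, 0.5]]])
lab = srgb_to_lab(img)
# The split the papers rely on: L is the model input, ab is the target.
L, ab = lab[..., :1], lab[..., 1:]
# A neutral gray has (near) zero chroma, so its ab target is ~(0, 0).
print(np.round(ab[1, 1], 3))
```

The point of the decomposition is that L already contains the grayscale input exactly, so the network only has to predict the two missing chroma channels rather than all three RGB values.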
2. User Guided Image Colorization
2.1 Based on scribble
- Petalica Paint (online service)
- Manga colorization
- LazyBrush: Flexible Painting Tool for Hand-drawn Cartoons
- Outline Colorization through Tandem Adversarial Networks
- Real-Time User-Guided Image Colorization with Learned Deep Priors [[code]](https://github.com/richzhang/colorization-pytorch)
- Scribbler: Controlling Deep Image Synthesis with Sketch and Color
- Auto-painter: Cartoon Image Generation from Sketch by Using Conditional Generative Adversarial Networks
- User-Guided Deep Anime Line Art Colorization with Conditional Adversarial Networks
- Two-stage Sketch Colorization
- User-Guided Line Art Flat Filling with Split Filling Mechanism
- Dual Color Space Guided Sketch Colorization
- iColoriT: Towards Propagating Local Hint to the Right Region in Interactive Colorization by Leveraging Vision Transformer
2.2 Based on reference image
- Comicolorization: Semi-Automatic Manga Colorization
- TextureGAN: Controlling Deep Image Synthesis with Texture Patches
- Deep Exemplar-based Colorization
- A Superpixel-based Variational Model for Image Colorization
- Automatic Example-based Image Colourisation using Location-Aware Cross-Scale Matching
- Adversarial Colorization Of Icons Based On Structure And Color Conditions
- Reference-Based Sketch Image Colorization using Augmented-Self Reference and Dense Semantic Correspondence
- Stylization-Based Architecture for Fast Deep Exemplar Colorization
- Manga Filling Style Conversion with Screentone Variational Autoencoder
- Colorization of Line Drawings with Empty Pupils
- Active Colorization for Cartoon Line Drawings
- Line Art Correlation Matching Feature Transfer Network for Automatic Animation Colorization
- Globally and Locally Semantic Colorization via Exemplar-Based Broad-GAN
- Style-Structure Disentangled Features and Normalizing Flows for Diverse Icon Colorization
- Eliminating Gradient Conflict in Reference-based Line-Art Colorization
- Semantic-Sparse Colorization Network for Deep Exemplar-based Colorization
- AnimeDiffusion: Anime Face Line Drawing Colorization via Diffusion Models
- Self-driven Dual-path Learning for Reference-based Line Art Colorization under Limited Data
- Unsupervised Deep Exemplar Colorization via Pyramid Dual Non-local Attention
- Lightweight Deep Exemplar Colorization via Semantic Attention-Guided Laplacian Pyramid
- SCSNet: An Efficient Paradigm for Learning Simultaneously Image Colorization and Super-Resolution
2.3 Based on palette
- Palette-based Photo Recoloring
- Example-Based Colourization Via Dense Encoding Pyramids
- SketchDeco: Decorating B&W Sketches with Colour [[webpage]](https://chaitron.github.io/SketchDeco/)

2.4 Based on language or text
- L-CoDer: Language-based Colorization with Color-object Decoupling Transformer
- Language-Based Image Editing with Recurrent Attentive Models
- Learning to Color from Language
- Tag2Pix: Line Art Colorization Using Text Tag With SECat and Changing Loss
- Language-based Colorization of Scene Sketches
- Line Art Colorization Based on Explicit Region Segmentation
- L-CoDe: Language-based Colorization using Color-object Decoupled Conditions
- L-CoIns: Language-based Colorization with Instance Awareness
- L-CAD: Language-based Colorization with Any-level Descriptions
- Adding Conditional Control to Text-to-Image Diffusion Models
- Diffusing Colors: Image Colorization with Text Guided Diffusion
- Coloring with Words: Guiding Image Colorization Through Text-based Palette Generation

2.5 Multi-modal
- Interactive Deep Colorization Using Simultaneous Global and Local Inputs
- Two-Step Training: Adjustable Sketch Colourization via Reference Image and Text Tag
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [[project]](https://zhexinliang.github.io/Control_Color/)
- UniColor: A Unified Framework for Multi-Modal Colorization with Transformer

2.6 Interactive Colorization
- Deep Edge-Aware Interactive Colorization against Color-Bleeding Effects [[code(metric)]](https://github.com/niceDuckgu/CDR)
- Line Art Colorization Based on Explicit Region Segmentation

3. Techniques of Improving Image Colorization

4. Video Colorization
4.1 Automatically
- Fully Automatic Video Colorization with Self-Regularization and Diversity
4.2 Based on reference
- Switchable Temporal Propagation Network
- Tracking Emerges by Colorizing Videos
- Deep Exemplar-based Video Colorization
- DeepRemaster: Temporal Source-Reference Attention Networks for Comprehensive Video Enhancement
- Reference-Based Video Colorization with Spatiotemporal Correspondence
- The Animation Transformer: Visual Correspondence via Segment Matching
- Line Art Correlation Matching Feature Transfer Network for Automatic Animation Colorization
- Reference-Based Deep Line Art Video Colorization
- BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature Fusion for Deep Exemplar-based Video Colorization
- Exemplar-based Video Colorization with Long-term Spatiotemporal Dependency
- ColorMNet: A Memory-based Deep Spatial-Temporal Feature Propagation Network for Video Colorization
- ToonCrafter: Generative Cartoon Interpolation

4.3 Based on scribble

4.4 Based on text

4.0 Survey