Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-multi-task-learning
2024 up-to-date list of DATASETS, CODEBASES and PAPERS on Multi-Task Learning (MTL), from a Machine Learning perspective.
https://github.com/thuml/awesome-multi-task-learning
Last synced: 1 day ago
Survey
- Multi-Task Learning for Dense Prediction Tasks: A Survey
- Multi-Task Learning with Deep Neural Networks: A Survey
- Multi-task learning for natural language processing in the 2020s: Where are we going?
- A Comparison of Loss Weighting Strategies for Multi task Learning in Deep Neural Networks
- Empirical Evaluation of Multi-task Learning in Deep Neural Networks for Natural Language Processing
- An Overview of Multi-Task Learning in Deep Neural Networks
- A Survey on Multi-Task Learning
- Unleashing the Power of Multi-Task Learning: A Comprehensive Survey Spanning Traditional, Deep, and Pretrained Foundation Model Eras
Benchmark & Dataset
Computer Vision
NLP
RL & Robotics
Graph
Recommendation
Codebase
- Multi-Task-Learning-PyTorch - Multi-task learning architectures
- mtan - "End-to-End Multi-Task Learning with Attention"
- MTReclib - Multi-task recommendation models and common datasets.
- LibMTL - A PyTorch library for Multi-Task Learning
- MALSAR - Multi-task learning via Structural Regularization (⚠️ Non-deep Learning)
- auto-lambda - "Auto-Lambda: Disentangling Dynamic Task Relationships"
- astmt - Attentive Single-tasking of Multiple Tasks
- mt-dnn - Multi-Task Deep Neural Networks for Natural Language Understanding
- mtrl
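Most of the codebases above wrap the same basic pattern: a shared encoder, per-task heads, and a weighted sum of task losses. Below is a minimal, self-contained PyTorch sketch of that pattern; every module, task name, and loss weight is illustrative, not the API of any listed library (which typically replace the static weights with adaptive schemes).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical two-task setup: one shared encoder, one head per task.
encoder = nn.Linear(16, 32)
heads = nn.ModuleDict({"cls": nn.Linear(32, 3), "reg": nn.Linear(32, 1)})
criteria = {"cls": nn.CrossEntropyLoss(), "reg": nn.MSELoss()}
weights = {"cls": 1.0, "reg": 0.5}  # static weights, purely illustrative

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(heads.parameters()), lr=1e-3)

x = torch.randn(8, 16)
targets = {"cls": torch.randint(0, 3, (8,)), "reg": torch.randn(8, 1)}

# One optimization step on the weighted sum of task losses.
z = torch.relu(encoder(x))
total = sum(weights[t] * criteria[t](heads[t](z), targets[t]) for t in heads)
opt.zero_grad()
total.backward()
opt.step()
```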
Architecture
Hard Parameter Sharing
- UniT: Multimodal Multitask Learning with a Unified Transformer
- Multi-Task Deep Neural Networks for Natural Language Understanding
- UberNet: Training a Universal Convolutional Neural Network for Low-, Mid-, and High-Level Vision Using Diverse Datasets and Limited Memory
- MultiNet: Real-time Joint Semantic Reasoning for Autonomous Driving
- Multitask Learning
- MultiTask-CenterNet (MCN): Efficient and Diverse Multitask Learning using an Anchor Free Approach
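The papers above share one trunk across tasks and branch into small task-specific heads. A minimal sketch of that architecture, with all dimensions and task names invented for illustration:

```python
import torch
import torch.nn as nn

class HardSharedMTL(nn.Module):
    """One shared trunk, one lightweight head per task."""

    def __init__(self, in_dim: int, hidden: int, task_out_dims: dict):
        super().__init__()
        self.trunk = nn.Sequential(          # parameters shared by all tasks
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleDict({         # task-specific parameters
            name: nn.Linear(hidden, out_dim)
            for name, out_dim in task_out_dims.items()
        })

    def forward(self, x: torch.Tensor) -> dict:
        z = self.trunk(x)
        return {name: head(z) for name, head in self.heads.items()}

model = HardSharedMTL(in_dim=64, hidden=128,
                      task_out_dims={"seg": 10, "depth": 1})
outputs = model(torch.randn(8, 64))  # {"seg": (8, 10), "depth": (8, 1)}
```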
Soft Parameter Sharing
- Latent Multi-task Architecture Learning
- NDDR-CNN: Layerwise Feature Fusing in Multi-Task CNNs by Neural Discriminative Dimensionality Reduction
- Learning Multiple Tasks with Multilinear Relationship Networks
- Cross-Stitch Networks for Multi-task Learning
- Progressive Neural Networks
- Deep Multi-task Representation Learning: A Tensor Factorisation Approach
- Trace Norm Regularised Deep Multi-Task Learning
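Instead of one shared trunk, soft sharing keeps one tower per task and couples them with learned connections. A sketch in the spirit of the cross-stitch unit above; for brevity it learns a single 2x2 mixing matrix, whereas the paper learns per-channel coefficients:

```python
import torch
import torch.nn as nn

class CrossStitch(nn.Module):
    """Learned mixing of two task towers' features (near-identity init)."""

    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor([[0.9, 0.1],
                                                [0.1, 0.9]]))

    def forward(self, xa: torch.Tensor, xb: torch.Tensor):
        ya = self.alpha[0, 0] * xa + self.alpha[0, 1] * xb
        yb = self.alpha[1, 0] * xa + self.alpha[1, 1] * xb
        return ya, yb

unit = CrossStitch()
fa, fb = torch.randn(8, 32), torch.randn(8, 32)  # features from two towers
ga, gb = unit(fa, fb)  # each tower now sees a learned blend of both
```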
Decoder-focused Model
- TaskPrompter: Spatial-Channel Multi-Task Prompting for Dense Scene Understanding
- Inverted Pyramid Multi-task Transformer for Dense Scene Understanding
- Exploring Relational Context for Multi-Task Dense Prediction
- MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning
- Pattern-Affinitive Propagation Across Depth, Surface Normal and Semantic Segmentation
- PAD-Net: Multi-tasks Guided Prediction-and-Distillation Network for Simultaneous Depth Estimation and Scene Parsing
Modulation & Adapters
- Learning to Modulate pre-trained Models in RL
- Lossless Adaptation of Pretrained Vision Models For Robotic Manipulation
- Towards a Unified View of Parameter-Efficient Transfer Learning
- Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
- Rethinking Hard-Parameter Sharing in Multi-Domain Learning
- Learning to Prompt for Continual Learning
- The Power of Scale for Parameter-Efficient Prompt Tuning
- Prefix-Tuning: Optimizing Continuous Prompts for Generation
- Counter-Interference Adapter for Multilingual Machine Translation
- LoRA: Low-Rank Adaptation of Large Language Models
- AdapterFusion: Non-Destructive Task Composition for Transfer Learning
- Reparameterizing Convolutions for Incremental Multi-Task Learning without Task Interference
- A Study of Residual Adapters for Multi-Domain Neural Machine Translation
- AdapterHub: A Framework for Adapting Transformers
- MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer
- Masking as an Efficient Alternative to Finetuning for Pretrained Language Models
- Many Task Learning With Task Routing
- Parameter-Efficient Transfer Learning for NLP
- BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning
- Efficient Parametrization of Multi-domain Deep Neural Networks
- Learning multiple visual domains with residual adapters
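Adapter-style methods freeze the pretrained backbone and train only small task-specific modules. A sketch of a LoRA-style low-rank update (hand-rolled here, not the official LoRA package API; the rank and scaling values are illustrative):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, with B initialized to zero so training
    starts from the pretrained behavior."""

    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():     # freeze pretrained weights
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(64, 64))
y = layer(torch.randn(2, 64))  # only A and B receive gradients
```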
Modularity, MoE, Routing & NAS
- Mod-Squad: Designing Mixture of Experts As Modular Multi-Task Learners
- M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design
- AutoMTL: A Programming Framework for Automated Multi-Task Learning
- An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems
- SkillNet-NLU: A Sparsely Activated Model for General-Purpose Natural Language Understanding
- Combining Modular Skills in Multitask Learning
- Multi-Task Reinforcement Learning with Soft Modularization
- AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning
- MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning
- PLE - Progressive Layered Extraction (PLE): A Novel Multi-Task Learning (MTL) Model for Personalized Recommendations
- Stochastic Filter Groups for Multi-Task CNNs: Learning Specialist and Generalist Convolution Kernels
- Deep Elastic Networks with Model Selection for Multi-Task Learning
- SNR: Sub-Network Routing for Flexible Parameter Sharing in Multi-Task Learning
- Flexible Multi-task Networks by Learning Parameter Allocation
- Feature Partitioning for Efficient Multi-Task Architectures
- MMoE - Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts
- Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning
- Beyond Shared Hierarchies: Deep Multitask Learning through Soft Layer Ordering
- Evolutionary architecture search for deep multitask networks
- NestedNet: Learning Nested Sparse Structures in Deep Neural Networks
- Modular Multitask Reinforcement Learning with Policy Sketches
- Learning Modular Neural Network Policies for Multi-Task and Multi-Robot Transfer
- PathNet: Evolution Channels Gradient Descent in Super Neural Networks
- DSelect-k: Differentiable Selection in the Mixture of Experts with Applications to Multi-Task Learning
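As a concrete instance of the routing idea, here is a minimal sketch of multi-gate mixture-of-experts (MMoE): shared experts, one softmax gate and one head per task. All sizes are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MMoE(nn.Module):
    """Shared experts; each task mixes them with its own softmax gate."""

    def __init__(self, in_dim=16, expert_dim=32, n_experts=4, n_tasks=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(in_dim, expert_dim) for _ in range(n_experts)])
        self.gates = nn.ModuleList(
            [nn.Linear(in_dim, n_experts) for _ in range(n_tasks)])
        self.heads = nn.ModuleList(
            [nn.Linear(expert_dim, 1) for _ in range(n_tasks)])

    def forward(self, x):
        e = torch.stack([F.relu(exp(x)) for exp in self.experts], dim=1)
        outs = []
        for gate, head in zip(self.gates, self.heads):
            w = F.softmax(gate(x), dim=-1).unsqueeze(-1)  # (batch, experts, 1)
            outs.append(head((w * e).sum(dim=1)))         # task-specific mix
        return outs

y_task0, y_task1 = MMoE()(torch.randn(8, 16))
```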
Task Representation
Others
- Learning Sparse Sharing Architectures for Multiple Tasks
- Deep Asymmetric Multi-task Feature Learning
- Learning to Multitask
- Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights
- PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning
- Asymmetric Multi-task Learning based on Task Relatedness and Confidence
Optimization
Loss & Gradient Strategy
- Population-Based Training
- Aligned-MTL
- MoCo
- FAMO
- AuxiNash
- Do Current Multi-Task Optimization Methods in Deep Learning Even Help?
- Unitary Scalarization
- Rotograd
- RLW / RGW
- PINNsNTK
- Inverse-Dirichlet PINNs
- CAGrad
- Gradient Vaccine
- IMTL
- GradientPathologiesPINNs
- IT-MTL
- GradDrop
- PCGrad
- Online Learning for Auxiliary losses (OL-AUX)
- PopArt
- Learning values across many orders of magnitude
- Geometric Loss Strategy (GLS)
- Orthogonal
- LBTW
- Gradient Cosine Similarity
- Revised Uncertainty
- GradNorm
- Dynamic Task Prioritization
- Uncertainty
- AdaLoss
- Task-wise Early Stopping
- Dynamic Weight Average (DWA)
- FairGrad
- SDMGrad
- MGDA
- Auto-λ
- Nash-MTL
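Many of the entries above reduce to choosing per-task loss weights or editing per-task gradients. As one concrete example, a sketch of homoscedastic-uncertainty weighting (the "Uncertainty" entry), in its commonly used simplified form: total = Σᵢ exp(-sᵢ) Lᵢ + sᵢ with learned sᵢ = log σᵢ²:

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Learned loss weighting via per-task log-variances s_i."""

    def __init__(self, n_tasks: int):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            # exp(-s_i) down-weights noisy tasks; the +s_i term penalizes
            # large s_i so the weights cannot all collapse to zero
            total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
        return total

weigher = UncertaintyWeighting(n_tasks=2)
loss = weigher([torch.tensor(0.7, requires_grad=True),
                torch.tensor(1.3, requires_grad=True)])
loss.backward()  # gradients flow to both the losses and the log-variances
```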
Task Interference
- On Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment
- Ray Interference: A Source of Plateaus in Deep Reinforcement Learning
- ForkMerge: Overcoming Negative Transfer in Multi-Task Learning
- A Modulation Module for Multi-task Learning with Applications in Image Retrieval
Task Sampling
Adversarial Training
Pareto
Distillation
- Multi-Task Self-Training for Learning General Representations
- Knowledge Distillation for Multi-task Learning (ECCV Workshop, 2020)
- Distral: Robust Multitask Reinforcement Learning
- Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning
- Policy Distillation
- Factorizing Knowledge in Neural Networks
- Universal Representations: A Unified Look at Multiple Task and Domain Learning
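A common recipe in these papers: distill frozen single-task teachers into one shared multi-task student. A toy, self-contained sketch (all networks and the temperature are invented for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Two frozen single-task teachers and one shared student (toy MLPs).
teachers = {t: nn.Linear(16, 4).eval() for t in ("a", "b")}
student_trunk = nn.Linear(16, 32)
student_heads = nn.ModuleDict({t: nn.Linear(32, 4) for t in ("a", "b")})
opt = torch.optim.Adam(list(student_trunk.parameters()) +
                       list(student_heads.parameters()), lr=1e-3)

x = torch.randn(8, 16)
T = 2.0  # softmax temperature

# Match each student head to its teacher's softened predictions.
z = torch.relu(student_trunk(x))
kd = 0.0
for t, teacher in teachers.items():
    with torch.no_grad():
        p_teacher = F.softmax(teacher(x) / T, dim=-1)
    log_p_student = F.log_softmax(student_heads[t](z) / T, dim=-1)
    kd = kd + F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

opt.zero_grad()
kd.backward()
opt.step()
```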
Consistency
Task Relationship Learning: Grouping, Tree (Hierarchy) & Cascading
- Planning-oriented autonomous driving
- Editing Models with Task Arithmetic
- Efficient and Effective Multi-Task Grouping via Meta Learning on Task Combinations
- A Tree-Structured Multi-Task Model Recommender (AutoML-Conf, 2022)
- Efficiently Identifying Task Groupings for Multi-Task Learning
- Branched Multi-Task Networks: Deciding What Layers To Share
- Which Tasks Should Be Learned Together in Multi-task Learning?
- Learning to Branch for Multi-Task Learning
- Task2Vec: Task Embedding for Meta-Learning
- Representation Similarity Analysis for Efficient Task Taxonomy & Transfer Learning
- AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning
- A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks
- Taskonomy: Disentangling Task Transfer Learning
- SplitNet: Learning to Semantically Split Deep Networks for Parameter Reduction and Model Parallelization
- When is multitask learning effective? Semantic sequence prediction under varying data conditions
- Identifying beneficial task relations for multi-task learning in deep neural networks
- Fully-adaptive Feature Sharing in Multi-Task Networks with Applications in Person Attribute Classification
- A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks
- Deep multi-task learning with low level tasks supervised at lower layers
- Learning Task Grouping and Overlap in Multi-task Learning
- Learning with Whom to Share in Multi-task Feature Learning
- A Convex Formulation for Learning Task Relationships in Multi-Task Learning
- Automated Search for Resource-Efficient Branched Multi-Task Networks
- Attributes for Improved Attributes: A Multi-Task Network Utilizing Implicit and Explicit Relationships for Facial Attribute Classification
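Several of the grouping methods above start from an estimate of inter-task affinity. One simple heuristic (a sketch, not the exact algorithm of any single paper) is the cosine similarity of per-task gradients on the shared parameters; tasks with consistently aligned gradients are candidates for joint training:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy setup: shared layer, three hypothetical tasks with private heads.
shared = nn.Linear(16, 8)
heads = {t: nn.Linear(8, 1) for t in ("a", "b", "c")}
x, y = torch.randn(32, 16), torch.randn(32, 1)

def shared_grad(task: str) -> torch.Tensor:
    """Gradient of one task's loss w.r.t. the shared parameters only."""
    shared.zero_grad()
    loss = F.mse_loss(heads[task](torch.relu(shared(x))), y)
    loss.backward()
    return torch.cat([p.grad.flatten().clone() for p in shared.parameters()])

grads = {t: shared_grad(t) for t in heads}
for a in heads:
    for b in heads:
        if a < b:
            sim = F.cosine_similarity(grads[a], grads[b], dim=0).item()
            print(f"affinity({a},{b}) = {sim:.3f}")  # higher -> group together
```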
Theory
- Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation
- Deciphering and Optimizing Multi-Task Learning: A Random Matrix Approach
- On the Theory of Transfer Learning: The Importance of Task Diversity
- Understanding and Improving Information Transfer in Multi-Task Learning
Misc
- MultiMAE: Multi-modal Multi-task Masked Autoencoders
- What Does Rotation Prediction Tell Us about Classifier Accuracy under Varying Testing Environments?
- Multitask Learning Strengthens Adversarial Robustness
- Multi-Task Adversarial Attack
- BAM! Born-Again Multi-Task Networks for Natural Language Understanding
- OmniNet: A unified architecture for multi-modal multi-task learning
- Tasks Without Borders: A New Approach to Online Multi-Task Learning
- Modular Universal Reparameterization: Deep Multi-task Learning Across Diverse Domains
- Pseudo-task Augmentation: From Deep Multitask Learning to Intratask Sharing---and Back
- Unifying and Merging Well-trained Deep Neural Networks for Inference Stage (IJCAI-ECAI, 2018)
- Multi-task Self-Supervised Visual Learning
- Federated Multi-Task Learning
- One Model To Learn Them All
- Unifying Multi-Domain Multi-Task Learning: Tensor and Neural Network Perspectives
- A Unified Perspective on Multi-Domain and Multi-Task Learning
- 12-in-1: Multi-Task Vision and Language Representation Learning
Categories
Sub Categories
- Hard Parameter Sharing (63)
- Consistency (47)
- Graph (38)
- Loss & Gradient Strategy (37)
- Modularity, MoE, Routing & NAS (27)
- Modulation & Adapters (21)
- Recommendation (14)
- Computer Vision (11)
- Distillation (8)
- Soft Parameter Sharing (7)
- Others (6)
- Decoder-focused Model (6)
- Task Interference (4)
- NLP (4)
- Pareto (3)
- Adversarial Training (3)
- Task Sampling (3)
- RL & Robotics (2)
- Task Representation (1)
Keywords
- multi-task-learning (6)
- pytorch (4)
- deep-learning (3)
- multitask-learning (3)
- nlp (2)
- meta-learning (2)
- multitask-recommendation (1)
- ctr-prediction (1)
- advertising (1)
- text-classification (1)
- sentiment-analysis (1)
- scaling (1)
- preprocessings (1)
- natural-language-inference (1)
- multi-task-learning-scaling (1)
- instruction-tuning (1)
- huggingface (1)
- glue (1)
- extreme-multi-task-learning (1)
- extreme-mtl (1)
- discriminative (1)
- dataset-collection (1)
- curated-datasets (1)
- crossfit (1)
- bigbench (1)
- benchmark (1)
- natural-language-processing (1)
- ranking (1)
- natural-language-understanding (1)
- named-entity-recognition (1)
- microsoft (1)
- machine-reading-comprehension (1)
- bert (1)
- auxiliary-learning (1)
- attention-model (1)
- segmentation (1)
- scene-understanding (1)
- pascal (1)
- nyud (1)
- eccv2020 (1)
- computer-vision (1)
- multi-task-clustering (1)
- matlab (1)
- python (1)
- ple (1)
- multiobjective-optimization (1)
- multi-objective-optimization (1)
- multi-domain-learning (1)
- mtl (1)
- mmoe (1)
1