awesome-representation-learning
A curated reading list for representation learning
https://github.com/Mehooz/awesome-representation-learning
Core Areas
Object-based Representation
- Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions
- Entity Abstraction in Visual Model-Based Reinforcement Learning
- Reasoning About Physical Interactions with Object-Oriented Prediction and Planning
- Object-oriented state editing for HRL
- MONet: Unsupervised Scene Decomposition and Representation
- Multi-Object Representation Learning with Iterative Variational Inference
- GENESIS: Generative Scene Inference and Sampling with Object-Centric Latent Representations
- Generative Modeling of Infinite Occluded Objects for Compositional Scene Representation
- SPACE: Unsupervised Object-Oriented Scene Representation via Spatial Attention and Decomposition
- COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and Curiosity-Driven Exploration
- Object-Oriented Dynamics Predictor
- Unsupervised Video Object Segmentation for Deep Reinforcement Learning
- Object-Oriented Dynamics Learning through Multi-Level Abstraction
- Language as an Abstraction for Hierarchical Deep Reinforcement Learning
- Interaction Networks for Learning about Objects, Relations and Physics
- Learning Compositional Koopman Operators for Model-Based Control
- Unmasking the Inductive Biases of Unsupervised Object Representations for Video Sequences
- Graph Representation Learning
- Workshop on Representation Learning for NLP - 2020
- Berkeley CS 294-158, Deep Unsupervised Learning
- Contrastive Learning of Structured World Models
Generative Model
- MADE: Masked Autoencoder for Distribution Estimation
- WaveNet: A Generative Model for Raw Audio
- Pixel Recurrent Neural Networks
- PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications
- PixelSNAIL: An Improved Autoregressive Generative Model
- Parallel Multiscale Autoregressive Density Estimation
- Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design
- Improved Variational Inference with Inverse Autoregressive Flow
- Glow: Generative Flow with Invertible 1×1 Convolutions
- Masked Autoregressive Flow for Density Estimation
- Neural Discrete Representation Learning
Non-Generative Model
- Unsupervised Visual Representation Learning by Context Prediction
- Representation Learning with Contrastive Predictive Coding
- Contrastive Multiview Coding
- Momentum Contrast for Unsupervised Visual Representation Learning
- A Simple Framework for Contrastive Learning of Visual Representations
- Contrastive Representation Distillation
- Neural Predictive Belief Representations
- World Discovery Models
- Deep Variational Information Bottleneck
- Learning deep representations by mutual information estimation and maximization
- Putting An End to End-to-End: Gradient-Isolated Learning of Representations
- What Makes for Good Views for Contrastive Learning?
- Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
- Mitigating Embedding and Class Assignment Mismatch in Unsupervised Image Classification
- Improving Unsupervised Image Clustering With Robust Learning
Representation Learning in Reinforcement Learning
- InfoBot: Transfer and Exploration via the Information Bottleneck
- Reinforcement Learning with Unsupervised Auxiliary Tasks
- World Models
- Learning Latent Dynamics for Planning from Pixels
- Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images
- DARLA: Improving Zero-Shot Transfer in Reinforcement Learning
- Count-Based Exploration with Neural Density Models
- Learning Actionable Representations with Goal-Conditioned Policies
- Automatic Goal Generation for Reinforcement Learning Agents
- VIME: Variational Information Maximizing Exploration
- Unsupervised State Representation Learning in Atari
- Learning Invariant Representations for Reinforcement Learning without Reconstruction
- CURL: Contrastive Unsupervised Representations for Reinforcement Learning
- DeepMDP: Learning Continuous Latent Space Models for Representation Learning
Disentangled Representation
- beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
- Isolating Sources of Disentanglement in Variational Autoencoders
- Disentangling by Factorising
- InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
- Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs
- Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
Survey