https://github.com/oelin/density-estimation
Literature on density estimation.
- Host: GitHub
- URL: https://github.com/oelin/density-estimation
- Owner: oelin
- License: MIT
- Created: 2024-02-20T19:01:48.000Z
- Default Branch: main
- Last Pushed: 2024-04-07T06:52:12.000Z
- Last Synced: 2024-04-07T15:31:48.825Z
- Topics: density-estimation, literature
- Homepage:
- Size: 132 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Paradigms for Generative Modeling
## 1. Autoregressive Models
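For orientation before the list: every model in this section factorizes the joint density with the chain rule, log p(x) = sum_t log p(x_t | x_<t), and differs mainly in how the conditional is parameterized. A minimal sketch, assuming a toy bigram table in place of a learned network (illustrative only, not any listed paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5                                     # toy vocabulary size (assumed)
logits = rng.normal(size=(V, V))          # stand-in for a learned network
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

def log_likelihood(tokens):
    """log p(x) = sum_t log p(x_t | x_{t-1}) for this first-order model."""
    return sum(np.log(probs[prev, cur]) for prev, cur in zip(tokens, tokens[1:]))

print(log_likelihood([0, 2, 1, 4]))
```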
- [A Neural Probabilistic Language Model](https://dl.acm.org/doi/pdf/10.5555/944919.944966) (Bengio et al., 2003)
- [Attention is All You Need](https://arxiv.org/abs/1706.03762) (Vaswani et al., 2017)
- [Efficiently Modeling Long Sequences with Structured State Spaces](https://arxiv.org/abs/2111.00396) (Gu et al., 2021)
- [MaskGIT: Masked Generative Image Transformer](https://arxiv.org/abs/2202.04200) (Chang et al., 2022)
- [MAGVIT: Masked Generative Video Transformer](https://arxiv.org/abs/2212.05199) (Yu et al., 2022)
- [SoundStorm: Efficient Parallel Audio Generation](https://arxiv.org/abs/2305.09636) (Borsos et al., 2023)
- [GIVT: Generative Infinite-vocabulary Transformers](https://arxiv.org/abs/2312.02116) (Tschannen et al., 2023)
- [Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction](https://arxiv.org/abs/2404.02905) (Tian et al., 2024)
- [Alternators For Sequence Modeling](https://arxiv.org/abs/2405.11848) (Rezaei et al., 2024)

## 2. Diffusion Models
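For orientation: most papers in this section refine the DDPM-style training step of Ho et al. (2020): corrupt a sample with scheduled Gaussian noise, then regress the noise. A minimal sketch; the linear beta schedule follows Ho et al., while `model` is a dummy placeholder, not any paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # linear schedule from Ho et al. (2020)
alpha_bars = np.cumprod(1.0 - betas)

def training_loss(x0, model):
    t = rng.integers(T)                   # sample a timestep uniformly
    eps = rng.normal(size=x0.shape)       # the noise the model must predict
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return np.mean((model(xt, t) - eps) ** 2)

x0 = rng.normal(size=(8,))
print(training_loss(x0, model=lambda xt, t: np.zeros_like(xt)))  # dummy denoiser
```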
- [Deep Unsupervised Learning using Nonequilibrium Thermodynamics](https://arxiv.org/abs/1503.03585) (Sohl-Dickstein et al., 2015)
- [Generative Modeling by Estimating Gradients of the Data Distribution](https://arxiv.org/abs/1907.05600) (Song et al., 2019)
- [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) (Ho et al., 2020)
- [Score-based Generative Modeling through Stochastic Differential Equations](https://arxiv.org/abs/2011.13456) (Song et al., 2020)
- [Adversarial Score Matching and Improved Sampling for Image Generation](https://arxiv.org/abs/2009.05475) (Jolicoeur-Martineau et al., 2020)
- [Score-based Generative Modeling with Critically-Damped Langevin Diffusion](https://arxiv.org/abs/2112.07068) (Dockhorn et al., 2021)
- [Gotta Go Fast When Generating Data with Score-based Models](https://arxiv.org/abs/2105.14080) (Jolicoeur-Martineau et al., 2021)
- [Come-closer-diffuse-faster: Accelerating Conditional Diffusion Models for Inverse Problems through Stochastic Contraction](https://arxiv.org/abs/2112.05146) (Chung et al., 2021)
- [Learning to Efficiently Sample from Diffusion Probabilistic Models](https://arxiv.org/abs/2106.03802) (Watson et al., 2021)
- [Diffusion Priors in Variational Autoencoders](https://arxiv.org/abs/2106.15671) (Wehenkel et al., 2021)
- [A Variational Perspective on Diffusion-based Generative Models and Score Matching](https://arxiv.org/abs/2106.02808) (Huang et al., 2021)
- [Variational Diffusion Models](https://arxiv.org/abs/2107.00630) (Kingma et al., 2021)
- [Improved Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2102.09672) (Nichol et al., 2021)
- [Structured Denoising Diffusion Models in Discrete State-Spaces](https://arxiv.org/abs/2107.03006) (Austin et al., 2021)
- [Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions](https://arxiv.org/abs/2102.05379) (Hoogeboom et al., 2021)
- [Autoregressive Diffusion Models](https://arxiv.org/abs/2110.02037) (Hoogeboom et al., 2022)
- [Learning Fast Samplers for Diffusion Models by Differentiating through Sample Quality](https://arxiv.org/abs/2202.05830) (Watson et al., 2022)
- [Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502) (Song et al., 2020)
- [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778) (Liu et al., 2022)
- [Elucidating the Design Space of Diffusion-based Generative Models](https://arxiv.org/abs/2206.00364) (Karras et al., 2022)
- [GENIE: Higher-Order Denoising Diffusion Solvers](https://arxiv.org/abs/2210.05475) (Dockhorn et al., 2022)
- [gDDIM: Generalizing Denoising Diffusion Implicit Models](https://arxiv.org/abs/2206.05564) (Zhang et al., 2022)
- [Fast Sampling of Diffusion Models with Exponential Integrator](https://arxiv.org/abs/2204.13902) (Zhang et al., 2022)
- [Analytic-DPM: An Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models](https://arxiv.org/abs/2201.06503) (Bao et al., 2022)
- [Progressive Distillation for Fast Sampling of Diffusion Models](https://arxiv.org/abs/2202.00512) (Salimans et al., 2022)
- [On Distillation of Guided Diffusion Models](https://arxiv.org/abs/2210.03142) (Meng et al., 2022)
- [Concrete Score Matching: Generalized Score Matching for Discrete Data](https://arxiv.org/abs/2211.00802) (Meng et al., 2022)
- [Generative Modelling with Inverse Heat Dissipation](https://arxiv.org/abs/2206.13397) (Rissanen et al., 2022)
- [Blurring Diffusion Models](https://arxiv.org/abs/2209.05557) (Hoogeboom et al., 2022)
- [Flow Matching for Generative Modeling](https://arxiv.org/abs/2210.02747) (Lipman et al., 2022)
- [Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow](https://arxiv.org/abs/2209.03003) (Liu et al., 2022)
- [Diffusion Autoencoders: Toward a Meaningful and Decodable Representation](https://openaccess.thecvf.com/content/CVPR2022/papers/Preechakul_Diffusion_Autoencoders_Toward_a_Meaningful_and_Decodable_Representation_CVPR_2022_paper.pdf) (Preechakul et al., 2022)
- [Consistency Models](https://arxiv.org/abs/2303.01469) (Song et al., 2023)
- [BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping](https://arxiv.org/abs/2306.05544) (Gu et al., 2023)
- [Learning Diffusion Bridges on Constrained Domains](https://openreview.net/pdf?id=WH1yCa0TbB) (Liu et al., 2023)
- [GenPhys: From Physical Processes to Generative Models](https://arxiv.org/abs/2304.02637) (Liu et al., 2023)
- [Understanding Diffusion Objectives as the ELBO with Simple Data Augmentation](https://arxiv.org/abs/2303.00848) (Kingma et al., 2023)
- [Rolling Diffusion Models](https://arxiv.org/abs/2402.09470) (Ruhe et al., 2024)
- [Diffusion Models: A Comprehensive Survey of Methods and Applications (v12)](https://arxiv.org/abs/2209.00796) (Yang et al., 2024)
- [FiT: Flexible Vision Transformer for Diffusion Model](https://arxiv.org/abs/2402.12376v1) (Lu et al., 2024)
- [Structure Preserving Diffusion Models](https://arxiv.org/abs/2402.19369) (Lu et al., 2024)
- [Trajectory Consistency Distillation](https://arxiv.org/abs/2402.19159) (Zheng et al., 2024)
- [Scaling Rectified Flow Transformers for High-Resolution Image Synthesis](https://arxiv.org/abs/2403.03206) (Esser et al., 2024)
- [Align Your Steps: Optimizing Sampling Schedules in Diffusion Models](https://arxiv.org/abs/2404.14507) (Sabour et al., 2024)
- [Variational Schrödinger Diffusion Models](https://arxiv.org/abs/2405.04795) (Deng et al., 2024)
- [Imagine Flash: Accelerating Emu Diffusion Models with Backward Distillation](https://arxiv.org/abs/2405.05224) (Kohler et al., 2024)
- [Characteristic Learning for Provable One Step Generation](https://arxiv.org/abs/2405.05512v1) (Ding et al., 2024)
- [Discriminator-Guided Cooperative Diffusion for Joint Audio and Video Generation](https://arxiv.org/abs/2405.17842) (Hayakawa et al., 2024)
- [Masked Diffusion Models Are Fast Distribution Learners](https://arxiv.org/abs/2306.11363) (Lei et al., 2024)
- [Phased Consistency Models](https://arxiv.org/abs/2405.18407) (Wang et al., 2024)
- [UDPM: Upsampling Diffusion Probabilistic Models](https://arxiv.org/abs/2305.16269) (Abu-Hussein et al., 2024)
- [Fast Samplers for Inverse Problems in Iterative Refinement Models](https://arxiv.org/abs/2405.17673) (Wang et al., 2024)

## 3. Energy-based Models
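For orientation: an energy-based model defines an unnormalized density p(x) ∝ exp(−E(x)) and is typically sampled with Langevin dynamics, as in Du et al. (2019). A sketch assuming a toy quadratic energy, so the "model" here is just a standard Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    return 0.5 * np.sum(x ** 2)           # toy quadratic energy (assumed)

def grad_energy(x):
    return x                              # analytic gradient of the toy energy

def langevin_sample(steps=100, step_size=0.1, dim=2):
    """Unadjusted Langevin dynamics: gradient descent on E plus injected noise."""
    x = rng.normal(size=dim)
    for _ in range(steps):
        noise = np.sqrt(step_size) * rng.normal(size=dim)
        x = x - 0.5 * step_size * grad_energy(x) + noise
    return x

x = langevin_sample()
print(x, energy(x))
```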
- [Self-regularizing Restricted Boltzmann Machines](https://arxiv.org/abs/1912.05634v1) (Loukas, 2019)
- [Implicit Generation and Generalization in Energy-based Models](https://arxiv.org/abs/1903.08689) (Du et al., 2019)
- [How to Train Your Energy-based Models](https://arxiv.org/abs/2101.03288) (Song et al., 2021)
- [Learning Latent Space Hierarchical EBM Diffusion Models](https://arxiv.org/abs/2405.13910) (Cui et al., 2024)

## 4. Generative Adversarial Networks
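For orientation: the original minimax value function of Goodfellow et al. (2014) is max_D E[log D(x)] + E[log(1 − D(G(z)))], with G trained to minimize the second term. A sketch that only evaluates the objective; `D` and `G` are placeholder functions standing in for trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def D(x):                                 # placeholder discriminator (assumed)
    return sigmoid(x.sum(axis=-1))

def G(z):                                 # placeholder generator (assumed)
    return 2.0 * z + 1.0

x_real = rng.normal(size=(64, 2))         # a batch of "data"
z = rng.normal(size=(64, 2))              # a batch of latents
value = np.mean(np.log(D(x_real))) + np.mean(np.log(1.0 - D(G(z))))
print(value)                              # D ascends this; G descends the second term
```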
- [Generative Adversarial Networks](https://arxiv.org/abs/1406.2661) (Goodfellow et al., 2014)
- [Conditional Generative Adversarial Nets](https://arxiv.org/abs/1411.1784) (Mirza et al., 2014)
- [Conditional Image Synthesis with Auxiliary Classifier GANs](https://arxiv.org/abs/1610.09585) (Odena et al., 2016)
- [InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets](https://arxiv.org/abs/1606.03657) (Chen et al., 2016)
- [Image-to-image Translation with Conditional Adversarial Networks](https://arxiv.org/abs/1611.07004) (Isola et al., 2016)
- [Unpaired Image-to-image Translation using Cycle-consistent Adversarial Networks](https://arxiv.org/abs/1703.10593) (Zhu et al., 2017)
- [Wasserstein GAN](https://arxiv.org/abs/1701.07875) (Arjovsky et al., 2017)
- [Improved Training of Wasserstein GANs](https://arxiv.org/abs/1704.00028) (Gulrajani et al., 2017)
- [DualGAN: Unsupervised Dual Learning for Image-to-Image Translation](https://arxiv.org/abs/1704.02510) (Yi et al., 2017)
- [Learning to Discover Cross-domain Relations with Generative Adversarial Networks](https://arxiv.org/abs/1703.05192) (Kim et al., 2017)
- [Progressive Growing of GANs for Improved Quality, Stability, and Variation](https://arxiv.org/abs/1710.10196) (Karras et al., 2017)
- [A Style-based Generator Architecture for Generative Adversarial Networks](https://arxiv.org/abs/1812.04948) (Karras et al., 2018)
- [Self-attention Generative Adversarial Networks](https://arxiv.org/abs/1805.08318) (Zhang et al., 2018)
- [Dynamically Grown Generative Adversarial Networks](https://arxiv.org/abs/2106.08505) (Liu et al., 2021)
- [VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance](https://arxiv.org/abs/2204.08583) (Crowson et al., 2022)
- [Ten Years of GANs: A Survey of the State-of-the-art](https://arxiv.org/abs/2308.16316) (Chakraborty et al., 2023)
- [A Survey on GANs for Computer Vision: Recent Research, Analysis and Taxonomy](https://arxiv.org/abs/2203.11242) (Iglesias et al., 2024)

## 5. Neural Processes
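For orientation: a conditional neural process (Garnelo et al., 2018) embeds each (x, y) context pair, aggregates the embeddings with a permutation-invariant mean, and decodes a predictive distribution at target inputs. A sketch with random weights standing in for trained encoder/decoder networks; all shapes here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(2, 8))           # hypothetical encoder weights
W_dec = rng.normal(size=(9, 2))           # hypothetical decoder weights

def predict(x_ctx, y_ctx, x_tgt):
    """Encode context pairs, mean-pool to r, decode a Gaussian at targets."""
    r = np.tanh(np.stack([x_ctx, y_ctx], axis=-1) @ W_enc).mean(axis=0)
    h = np.concatenate([np.broadcast_to(r, (len(x_tgt), 8)),
                        x_tgt[:, None]], axis=1) @ W_dec
    mu, log_sigma = h[:, 0], h[:, 1]
    return mu, np.exp(log_sigma)          # predictive mean and std

mu, sigma = predict(np.array([0.0, 1.0]), np.array([0.1, 0.9]), np.array([0.5]))
print(mu, sigma)
```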
- [Neural Processes](https://arxiv.org/abs/1807.01622) (Garnelo et al., 2018)
- [Conditional Neural Processes](https://arxiv.org/abs/1807.01613) (Garnelo et al., 2018)
- [Attentive Neural Processes](https://arxiv.org/abs/1901.05761) (Kim et al., 2019)
- [Neural Diffusion Processes](https://arxiv.org/abs/2206.03992) (Dutordoir et al., 2022)
- [The Neural Process Family: Survey, Applications and Perspectives](https://arxiv.org/abs/2209.00517) (Jha et al., 2022)
- [Spectral Convolutional Conditional Neural Processes](https://arxiv.org/abs/2404.13182v1) (Mohseni et al., 2024)

## 6. Normalizing Flows
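For orientation: a normalizing flow computes exact densities through the change-of-variables rule, log p_x(x) = log p_z(f(x)) + log |det ∂f/∂x|, where f is an invertible map to a simple base density. A sketch with a single 1-D affine bijection (parameters chosen arbitrarily for illustration):

```python
import numpy as np

def log_standard_normal(z):
    return -0.5 * (z ** 2 + np.log(2.0 * np.pi))

scale, shift = 2.0, 0.5                   # arbitrary affine flow parameters

def log_density(x):
    z = scale * x + shift                 # forward map f(x)
    return log_standard_normal(z) + np.log(abs(scale))  # + log|det df/dx|

print(log_density(np.array([0.0, 1.0])))
```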
- [Density Estimation by Dual Ascent of the Log-likelihood](https://math.nyu.edu/~tabak/publications/CMSV8-1-10.pdf) (Tabak et al., 2010)
- [A Family of Non-parametric Density Estimation Algorithms](https://math.nyu.edu/~tabak/publications/Tabak-Turner.pdf) (Tabak et al., 2013)
- [Variational Inference with Normalizing Flows](https://arxiv.org/abs/1505.05770) (Rezende et al., 2015)
- [Density Modeling of Images using a Generalized Normalization Transformation](https://arxiv.org/abs/1511.06281) (Ballé et al., 2016)
- [Density Estimation Using Real NVP](https://arxiv.org/abs/1605.08803) (Dinh et al., 2016)
- [Glow: Generative Flow with Invertible 1x1 Convolutions](https://arxiv.org/abs/1807.03039) (Kingma et al., 2018)
- [Neural Ordinary Differential Equations](https://arxiv.org/abs/1806.07366) (Chen et al., 2018)
- [FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models](https://arxiv.org/abs/1810.01367) (Grathwohl et al., 2018)
- [A RAD Approach to Deep Mixture Models](https://arxiv.org/abs/1903.07714) (Dinh et al., 2019)
- [Normalizing Flows for Probabilistic Modeling and Inference](https://jmlr.csail.mit.edu/papers/volume22/19-1028/19-1028.pdf) (Papamakarios et al., 2019)
- [Normalizing Flows: An Introduction and Review of Current Methods](https://arxiv.org/abs/1908.09257) (Kobyzev et al., 2019)
- [Latent Normalizing Flows for Discrete Sequences](https://arxiv.org/abs/1901.10548) (Ziegler et al., 2019)
- [Discrete Flows: Invertible Generative Models of Discrete Data](https://arxiv.org/abs/1905.10347) (Tran et al., 2019)
- [Temporal Normalizing Flows](https://arxiv.org/abs/1912.09092) (Both et al., 2019)
- [Stochastic Normalizing Flows](https://arxiv.org/abs/2002.06707) (Wu et al., 2020)
- [Self Normalizing Flows](https://arxiv.org/abs/2011.07248) (Keller et al., 2020)
- [Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows](https://arxiv.org/abs/2002.10516) (Deng et al., 2020)
- [Gradient Boosted Normalizing Flows](https://arxiv.org/abs/2002.11896) (Giaquinto et al., 2020)
- [Principled Interpolation of Normalizing Flows](https://arxiv.org/abs/2010.12059) (Fadel et al., 2020)
- [Lossy Image Compression with Normalizing Flows](https://arxiv.org/abs/2008.10486) (Helminger et al., 2020)
- [Multi-resolution Normalizing Flows](https://arxiv.org/abs/2106.08462) (Voleti et al., 2021)
- [Diffusion Normalizing Flow](https://arxiv.org/abs/2110.07579) (Zhang et al., 2021)
- [Implicit Normalizing Flows](https://arxiv.org/abs/2103.09527) (Lu et al., 2021)
- [Neural Flows: Efficient Alternative to Neural ODEs](https://arxiv.org/abs/2110.13040) (Bilos et al., 2021)

## 7. Variational Autoencoders
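For orientation: a VAE maximizes the ELBO, E_q[log p(x|z)] − KL(q(z|x) ‖ p(z)), using the reparameterization trick z = μ + σ·ε from Kingma & Welling (2013). A sketch where fixed numbers stand in for encoder outputs and a toy Gaussian decoder replaces a network:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, log_var = np.array([0.3]), np.array([-1.0])   # pretend encoder outputs

eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * log_var) * eps              # reparameterization trick

recon_log_prob = -0.5 * np.sum((z - 0.3) ** 2)    # toy Gaussian decoder term
kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)  # KL(q || N(0, I))
print(recon_log_prob - kl)                        # single-sample ELBO estimate
```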
- [The Helmholtz Machine](https://www.gatsby.ucl.ac.uk/~dayan/papers/hm95.pdf) (Dayan et al., 1995)
- [Neural Variational Inference and Learning in Belief Networks](https://arxiv.org/abs/1402.0030) (Mnih et al., 2014)
- [Auto-Encoding Variational Bayes](https://arxiv.org/abs/1312.6114) (Kingma et al., 2013)
- [Hierarchical Variational Models](https://arxiv.org/abs/1511.02386) (Ranganath et al., 2015)
- [Importance Weighted Autoencoders](https://arxiv.org/abs/1509.00519) (Burda et al., 2015)
- [Ladder Variational Autoencoders](https://arxiv.org/abs/1602.02282) (Sønderby et al., 2016)
- [Discrete Variational Autoencoders](https://arxiv.org/abs/1609.02200) (Rolfe, 2016)
- [The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables](https://arxiv.org/abs/1611.00712) (Maddison et al., 2016)
- [Categorical Reparameterization with Gumbel-softmax](https://arxiv.org/abs/1611.01144) (Jang et al., 2016)
- [Conditional Image Generation with Gated PixelCNN Decoders](https://arxiv.org/abs/1606.05328) (van den Oord et al., 2016)
- [Neural Discrete Representation Learning](https://arxiv.org/abs/1711.00937) (van den Oord et al., 2017)
- [VAE With a VampPrior](https://arxiv.org/abs/1705.07120) (Tomczak et al., 2017)
- [DVAE#: Discrete Variational Autoencoders with Relaxed Boltzmann Priors](https://arxiv.org/abs/1805.07445) (Vahdat et al., 2018)
- [An Introduction to Variational Autoencoders](https://arxiv.org/abs/1906.02691) (Kingma et al., 2019)
- [Generating Diverse High-fidelity Images with VQ-VAE-2](https://arxiv.org/abs/1906.00446) (Razavi et al., 2019)
- [Preventing Posterior Collapse with Delta-VAEs](https://arxiv.org/abs/1901.03416) (Razavi et al., 2019)
- [PixelVAE++: Improved PixelVAE with Discrete Prior](https://arxiv.org/abs/1908.09948) (Sadeghi et al., 2019)
- [BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling](https://arxiv.org/abs/1902.02102) (Maaloe et al., 2019)
- [Taming Transformers for High-resolution Image Synthesis](https://arxiv.org/abs/2012.09841) (Esser et al., 2020)
- [DVAE++: Discrete Variational Autoencoder with Overlapping Transformations](https://arxiv.org/abs/1802.04920) (Vahdat et al., 2018)
- [NVAE: A Deep Hierarchical Variational Autoencoder](https://arxiv.org/abs/2007.03898) (Vahdat et al., 2020)
- [Dynamical Variational Autoencoders: A Comprehensive Survey](https://arxiv.org/abs/2008.12595) (Girin et al., 2020)
- [Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images](https://arxiv.org/abs/2011.10650) (Child, 2020)
- [Variational Hyper-encoding Networks](https://arxiv.org/abs/2005.08482) (Nguyen et al., 2020)
- [TimeVAE: A Variational Auto-encoder for Multivariate Time Series Generation](https://arxiv.org/abs/2111.08095) (Desai et al., 2021)
- [AdaVAE: Exploring Adaptive GPT-2s in Variational Auto-encoders for Language Modeling](https://arxiv.org/abs/2205.05862) (Tu et al., 2022)
- [Disentangling Variational Autoencoders](https://arxiv.org/abs/2211.07700) (Pastrana, 2022)
- [Latent Variable Modelling using Variational Autoencoders: A Survey](https://arxiv.org/abs/2206.09891) (Kalingeri, 2022)
- [Efficient VDVAE: Less is More](https://arxiv.org/abs/2203.13751) (Hazami et al., 2022)