# Feature Learning in Deep Learning Theory Reading Group

## Introduction

Welcome to the GitHub repository of our Feature Learning in Deep Learning Theory Reading Group! This group is dedicated to the study, discussion, and understanding of feature learning concepts and techniques in the field of Deep Learning.
Our objective is to bring together researchers, professionals, students, and anyone interested in feature learning to learn from each other, discuss recent advances and challenges, and contribute to the knowledge pool of Deep Learning Theory.

## Participation

We warmly invite anyone interested to join us. To participate:

1. **Follow this Repository**: Keep up to date with the reading materials we will be discussing.
2. **Discussion**: Participate in discussions on the `Issues` tab. Each paper will have a dedicated issue where the discussion will take place.

## Table of Contents

1. [Classification](#classification)
2. [Regression](#regression)
3. [Transformers](#transformers)

## Classification

- Towards Understanding **Ensemble**, **Knowledge Distillation** and **Self-Distillation** in Deep Learning, *ICLR 2023*. [(link)](https://arxiv.org/abs/2012.09816)

- Benign Overfitting in Two-layer **Convolutional Neural Networks**, *NeurIPS 2022*. [(link)](https://arxiv.org/abs/2202.06526) [(video)](https://www.youtube.com/watch?v=n_F17KVDQHI)

- **Graph Neural Networks** Provably Benefit from Structural Information: A Feature Learning Perspective. *ICML 2023 Workshop, Contributed Talk*. [(link)](https://arxiv.org/abs/2306.13926)

- Feature purification: How **adversarial training** performs robust deep learning, *FOCS 2021*. [(link)](https://arxiv.org/abs/2005.10190)

- Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss, *COLT 2020* [(link)](https://arxiv.org/abs/2002.04486)

- Toward understanding the feature learning process of self-supervised **contrastive learning**, *ICML 2021*. [(link)](https://arxiv.org/abs/2105.15134)

- Towards Understanding **Mixture of Experts** in Deep Learning, *NeurIPS 2022*. [(link)](https://arxiv.org/abs/2208.02813)

- Understanding the Generalization of **Adam** in Learning Neural Networks with Proper Regularization, *ICLR 2023* [(link)](https://arxiv.org/abs/2108.11371)

- Towards Understanding Feature Learning in **Out-of-Distribution** Generalization, *NeurIPS 2023* [(link)](https://arxiv.org/abs/2304.11327)

- Benign Overfitting for Two-layer **ReLU Networks**, *ICML 2023*. [(link)](https://arxiv.org/pdf/2303.04145.pdf)

- **Data Augmentation** as Feature Manipulation, *ICML 2022*. [(link)](https://arxiv.org/abs/2203.01572)

- Towards understanding how **momentum** improves generalization in deep learning, *ICML 2022*. [(link)](https://arxiv.org/abs/2207.05931)

- The Benefits of **Mixup** for Feature Learning. [(link)](https://arxiv.org/abs/2303.08433)

- **Pruning Before Training** May Improve Generalization, Provably. [(link)](https://arxiv.org/abs/2301.00335)

- Provably Learning Diverse Features in Multi-View Data with Midpoint **Mixup**, *ICML 2023*. [(link)](https://proceedings.mlr.press/v202/chidambaram23a/chidambaram23a.pdf)

- How Does **Semi-supervised** Learning with Pseudo-labelers Work? A Case Study, *ICLR 2023*. [(link)](https://openreview.net/forum?id=Dzmd-Cc8OI)

- **Local Signal Adaptivity**: Provable Feature Learning in Neural Networks Beyond Kernels, *NeurIPS 2021*. [(link)](https://proceedings.neurips.cc/paper/2021/hash/d064bf1ad039ff366564f352226e7640-Abstract.html)

- Provable Guarantees for Neural Networks via **Gradient Feature Learning**, *NeurIPS 2023*. [(link)](https://arxiv.org/abs/2310.12408)

- Robust Learning with Progressive Data Expansion Against **Spurious Correlation**, *NeurIPS 2023*. [(link)](https://arxiv.org/abs/2306.04949)

- Understanding Transferable Representation Learning and Zero-shot Transfer in **CLIP**, [(link)](https://arxiv.org/pdf/2310.00927.pdf)

- Why Does **Sharpness-Aware Minimization** Generalize Better Than SGD? *NeurIPS 2023*, [(link)](https://nips.cc/virtual/2023/poster/72901)

- Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for **Noisy Linear Data**, *COLT 2022*, [(link)](https://proceedings.mlr.press/v178/frei22a/frei22a.pdf)

- **Random Feature Amplification**: Feature Learning and Generalization in Neural Networks, *JMLR 2023*, [(link)](https://arxiv.org/abs/2202.07626)

- Benign Overfitting in **Adversarially Robust** Linear Classification, *UAI 2023*, [(link)](https://proceedings.mlr.press/v216/chen23b.html)

- Benign Oscillation of Stochastic Gradient Descent with **Large Learning Rates**, *ICLR 2024*, [(link)](https://arxiv.org/abs/2310.17074)

- Understanding Convergence and Generalization in **Federated Learning** through Feature Learning Theory, *ICLR 2024*, [(link)](https://openreview.net/pdf?id=EcetCr4trp)

- Benign Overfitting and Grokking in ReLU Networks for **XOR** Cluster Data, *ICLR 2024*, [(link)](https://arxiv.org/abs/2310.02541)

- Benign Overfitting in Two-Layer ReLU Convolutional Neural Networks for **XOR** Data. [(link)](https://arxiv.org/abs/2310.01975)

- SGD Finds then Tunes Features in Two-Layer Neural Networks with near-Optimal Sample Complexity: A Case Study in the **XOR** problem, *ICLR 2024*, [(link)](https://arxiv.org/abs/2309.15111)

- Feature learning via mean-field Langevin dynamics: classifying **sparse parities** and beyond, *NeurIPS 2023*, [(link)](https://openreview.net/forum?id=tj86aGVNb3)

- Improved statistical and computational complexity of the mean-field Langevin dynamics under **structured data**, *ICLR 2024*, [(link)](https://openreview.net/forum?id=Of2nEDc4s7)

- What Improves the Generalization of **Graph Transformer**? A Theoretical Dive into Self-attention and Positional Encoding, [(link)](https://openreview.net/forum?id=aJl5aK9n7e)

- Joint Edge-Model Sparse Learning is Provably Efficient for **Graph Neural Networks**, *ICLR 2023*, [(link)](https://openreview.net/pdf?id=4UldFtZ_CVF)

- Provably **Neural Active Learning** Succeeds via Prioritizing Perplexing Samples, *ICML 2024*. [(link)](https://arxiv.org/abs/2406.03944)

- Provable Benefits of Local Steps in Heterogeneous **Federated Learning** for Neural Networks: A Feature Learning Perspective, *ICML 2024*. [(link)](https://proceedings.mlr.press/v235/bao24a.html)

## Regression

- Feature Learning in Infinite-Width Neural Networks, *ICML 2021*. [(link)](https://arxiv.org/abs/2011.14522)

- High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation, *NeurIPS 2022*. [(link)](https://arxiv.org/abs/2205.01445)

- Self-consistent dynamical field theory of kernel evolution in wide neural networks, *NeurIPS 2022*. [(link)](https://arxiv.org/abs/2205.09653)

- Feature Learning in L2-regularized DNNs: Attraction/Repulsion and Sparsity, *NeurIPS 2022* [(link)](https://arxiv.org/pdf/2205.15809)

- Gradient-Based Feature Learning under Structured Data, *NeurIPS 2023*. [(link)](https://arxiv.org/abs/2309.03843)

- Neural Networks can Learn Representations with Gradient Descent, *COLT 2022*. [(link)](https://arxiv.org/abs/2206.15144)

- Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks. [(link)](https://arxiv.org/abs/2305.06986)

- The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks, *COLT 2022*. [(link)](https://arxiv.org/abs/2202.08658)

- Neural Networks Efficiently Learn Low-Dimensional Representations with SGD, *ICLR 2023*. [(link)](https://arxiv.org/abs/2209.14863)

- SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics, *COLT 2023* [(link)](https://proceedings.mlr.press/v195/abbe23a/abbe23a.pdf)

- Learning Two-Layer Neural Networks, One (Giant) Step at a Time [(link)](https://arxiv.org/abs/2305.18270)

- Optimal criterion for feature learning of two-layer linear neural network in high dimensional interpolation regime, *ICLR 2024*, [(link)](https://openreview.net/forum?id=Jc0FssXh2R)

- Three Mechanisms of Feature Learning in the Exact Solution of a Latent Variable Model, [(link)](https://arxiv.org/abs/2401.07085)

- Mixed Dynamics In Linear Networks: Unifying the Lazy and Active Regimes, *NeurIPS 2024* [(link)](https://arxiv.org/abs/2405.17580)

## Transformers

- Vision Transformers provably learn spatial structure, *NeurIPS 2022*. [(link)](https://arxiv.org/abs/2210.09221)

- A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity, *ICLR 2023*. [(link)](https://arxiv.org/abs/2302.06015)

- Scan and Snap: Understanding Training Dynamics and Token Composition in 1-layer Transformer, *NeurIPS 2023* [(link)](https://arxiv.org/abs/2305.16380)

- JoMA: Demystifying Multilayer Transformers via JOint Dynamics of MLP and Attention

- On the Role of Attention in Prompt-tuning, *ICML 2023* [(link)](https://arxiv.org/pdf/2306.03435.pdf)

- In-Context Convergence of Transformers, *NeurIPS 2023 workshop* [(link)](https://arxiv.org/abs/2310.05249)

- Training Nonlinear Transformers for Efficient In-Context Learning: A Theoretical Learning and Generalization Analysis, [(link)](https://arxiv.org/abs/2402.15607)

- Unveil Benign Overfitting for Transformer in Vision: Training Dynamics, Convergence, and Generalization, *NeurIPS 2024*, [(link)](https://arxiv.org/abs/2409.19345)

- Benign or Not-Benign Overfitting in Token Selection of Attention Mechanism, [(link)](https://arxiv.org/abs/2409.17625)

- Trained Transformer Classifiers Generalize and Exhibit Benign Overfitting In-Context, [(link)](https://arxiv.org/abs/2410.01774)


## Contact

For any queries, please open an issue or feel free to reach out to us via email at [email protected].

## Code of Conduct

We aim to maintain a respectful and inclusive environment for everyone, and we expect all participants to uphold this standard.

We look forward to your active participation and happy reading!