Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Awesome-Dataset-Distillation
A curated list of awesome papers on dataset distillation and related applications.
https://github.com/Guang000/Awesome-Dataset-Distillation
Main
- Dataset Distillation [:octocat:](https://github.com/SsnL/dataset-distillation) [:book:](./citations/wang2018datasetdistillation.txt)

Early Work

Gradient/Trajectory Matching Surrogate Objective
- Dataset Condensation with Gradient Matching [:octocat:](https://github.com/VICO-UoE/DatasetCondensation) [:book:](./citations/zhao2021datasetcondensation.txt)
- Dataset Condensation with Differentiable Siamese Augmentation [:octocat:](https://github.com/VICO-UoE/DatasetCondensation) [:book:](./citations/zhao2021differentiatble.txt)
- Dataset Distillation by Matching Training Trajectories [:globe_with_meridians:](https://georgecazenavette.github.io/mtt-distillation/) [:octocat:](https://github.com/georgecazenavette/mtt-distillation) [:book:](./citations/cazenavette2022dataset.txt)
- Dataset Condensation with Contrastive Signals [:octocat:](https://github.com/Saehyung-Lee/DCC) [:book:](./citations/lee2022dataset.txt)
- Delving into Effective Gradient Matching for Dataset Condensation
- Loss-Curvature Matching for Dataset Selection and Condensation [:octocat:](https://github.com/SJShin-AI/LCMat) [:book:](./citations/shin2023lcmat.txt)
- Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation [:octocat:](https://github.com/AngusDujw/FTD-distillation) [:book:](./citations/du2023minimizing.txt)
- Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory
- Sequential Subset Matching for Dataset Distillation
- Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching
- Neural Spectral Decomposition for Dataset Distillation
- SelMatch: Effectively Scaling Up Dataset Distillation via Selection-Based Initialization and Partial Updates by Trajectory Matching
- Prioritize Alignment in Dataset Distillation [:octocat:](https://github.com/NUS-HPC-AI-Lab/PAD) [:book:](./citations/li2024pad.txt)
- Emphasizing Discriminative Features for Dataset Distillation in Complex Scenarios [:octocat:](https://github.com/NUS-HPC-AI-Lab/EDF) [:book:](./citations/wang2024edf.txt)
- Dataset Distillation by Automatic Training Trajectories

Distribution/Feature Matching Surrogate Objective
- CAFE: Learning to Condense Dataset by Aligning Features
- Dataset Condensation with Distribution Matching [:octocat:](https://github.com/VICO-UoE/DatasetCondensation) [:book:](./citations/zhao2023distribution.txt)
- Improved Distribution Matching for Dataset Condensation
- DataDAM: Efficient Dataset Distillation with Attention Matching
- M3D: Dataset Condensation by Minimizing Maximum Mean Discrepancy [:octocat:](https://github.com/Hansong-Zhang/M3D) [:book:](./citations/zhang2024m3d.txt)
- Dataset Condensation with Latent Quantile Matching
- Dataset Distillation via the Wasserstein Metric
- Exploiting Inter-sample and Inter-feature Relations in Dataset Distillation
- Diversified Semantic Distribution Matching for Dataset Distillation
- DANCE: Dual-View Distribution Alignment for Dataset Condensation [:octocat:](https://github.com/Hansong-Zhang/DANCE) [:book:](./citations/zhang2024dance.txt)

Kernel-Based Distillation
- Dataset Meta-Learning from Kernel Ridge-Regression [:octocat:](https://github.com/google/neural-tangents) [:book:](./citations/nguyen2021kip.txt)
- Dataset Distillation with Infinitely Wide Convolutional Networks [:octocat:](https://github.com/google/neural-tangents) [:book:](./citations/nguyen2021kipimprovedresults.txt)
- Dataset Distillation using Neural Feature Regression
- Efficient Dataset Distillation using Random Feature Approximation
- Dataset Distillation with Convexified Implicit Gradients

Better Optimization
- Accelerating Dataset Distillation via Model Augmentation [:octocat:](https://github.com/ncsu-dk-lab/Acc-DD) [:book:](./citations/zhang2023accelerating.txt)
- DREAM: Efficient Dataset Distillation by Representative Matching
- MIM4DD: Mutual Information Maximization for Dataset Distillation
- Can Pre-Trained Models Assist in Dataset Distillation? [:octocat:](https://github.com/yaolu-zjut/DDInterpreter) [:book:](./citations/lu2023pre.txt)
- DREAM+: Efficient Dataset Distillation by Bidirectional Representative Matching
- Embarrassingly Simple Dataset Distillation
- Dataset Distillation in Latent Space
- Distill Gold from Massive Ores: Bi-level Data Pruning towards Efficient Dataset Distillation
- BACON: Bayesian Optimal Condensation Framework for Dataset Distillation
- You Only Condense Once: Two Rules for Pruning Condensed Datasets [:octocat:](https://github.com/he-y/you-only-condense-once) [:book:](./citations/he2023yoco.txt)
- Large Scale Dataset Distillation with Domain Shift
- Towards Model-Agnostic Dataset Condensation by Heterogeneous Models (Moon et al., ECCV 2024) [:octocat:](https://github.com/khu-agi/hmdc) [:book:](./citations/moon2024hmdc.txt)
- Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching
- Multisize Dataset Condensation [:octocat:](https://github.com/he-y/Multisize-Dataset-Condensation) [:book:](./citations/he2024mdc.txt)
- Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality [:octocat:](https://github.com/VITA-Group/ProgressiveDD) [:book:](./citations/chen2024vodka.txt)

Better Understanding
- On the Size and Approximation Error of Distilled Sets
- Dataset Distillation from First Principles: Integrating Core Information Extraction and Purposeful Learning
- Mitigating Bias in Dataset Distillation
- What is Dataset Distillation Learning? [:octocat:](https://github.com/princetonvisualai/What-is-Dataset-Distillation-Learning) [:book:](./citations/yang2024learning.txt)
- A Theoretical Study of Dataset Distillation
- Optimizing Millions of Hyperparameters by Implicit Differentiation [:book:](./citations/lorraine2020optimizing.txt)
- On Implicit Bias in Overparameterized Bilevel Optimization
- Not All Samples Should Be Utilized Equally: Towards Understanding and Improving Dataset Distillation

Decoupled Distillation
- Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective [:octocat:](https://github.com/VILA-Lab/SRe2L/tree/main/SRe2L) [:book:](./citations/yin2023sre2l.txt)
- Diversity-Driven Synthesis: Enhancing Dataset Distillation through Directed Weight Adjustment
- On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm [:octocat:](https://github.com/LINs-lab/RDED) [:book:](./citations/sun2024rded.txt)
- Elucidating the Design Space of Dataset Condensation
- Dataset Distillation in Large Data Era [:octocat:](https://github.com/VILA-Lab/SRe2L/tree/main/CDA) [:book:](./citations/yin2023cda.txt)
- Information Compensation: A Fix for Any-scale Dataset Distillation
- Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching
- Curriculum Dataset Distillation

Distilled Dataset Parametrization
- Dataset Condensation via Efficient Synthetic-Data Parameterization (Jang-Hyun Kim et al., ICML 2022) [:octocat:](https://github.com/snu-mllab/efficient-dataset-condensation) [:book:](./citations/kim2022dataset.txt)
- Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks [:octocat:](https://github.com/princetonvisualai/RememberThePast-DatasetDistillation) [:book:](./citations/deng2022remember.txt)
- On Divergence Measures for Bayesian Pseudocoresets [:octocat:](https://github.com/balhaekim/BPC-Divergences) [:book:](./citations/kim2022divergence.txt)
- Dataset Distillation via Factorization
- PRANC: Pseudo RAndom Networks for Compacting Deep Models
- Dataset Condensation with Latent Space Knowledge Factorization and Sharing
- Slimmable Dataset Condensation
- Few-Shot Dataset Distillation via Translative Pre-Training
- Sparse Parameterization for Epitomic Dataset Distillation [:octocat:](https://github.com/MIV-XJTU/SPEED) [:book:](./citations/wei2023sparse.txt)
- Frequency Domain-based Dataset Distillation
- MGDD: A Meta Generator for Fast Dataset Distillation
- Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation
- FYI: Flip Your Images for Dataset Distillation [:octocat:](https://github.com/cvlab-yonsei/FYI) [:book:](./citations/son2024fyi.txt)

Label Distillation
- Flexible Dataset Distillation: Learn Labels Instead of Images [:octocat:](https://github.com/ondrejbohdal/label-distillation) [:book:](./citations/bohdal2020flexible.txt)
- Soft-Label Dataset Distillation and Text Dataset Distillation [:octocat:](https://github.com/ilia10000/dataset-distillation) [:book:](./citations/sucholutsky2021soft.txt)
- Label-Augmented Dataset Distillation
- A Label is Worth a Thousand Images in Dataset Distillation [:octocat:](https://github.com/sunnytqin/no-distillation) [:book:](./citations/qin2024label.txt)
- DRUPI: Dataset Reduction Using Privileged Information
- Are Large-scale Soft Labels Necessary for Large-scale Dataset Distillation? [:octocat:](https://github.com/he-y/soft-label-pruning-for-dataset-distillation) [:book:](./citations/xiao2024soft.txt)

Dataset Quantization
- Dataset Quantization [:octocat:](https://github.com/magic-research/Dataset_Quantization) [:book:](./citations/zhou2023dataset.txt)
- Dataset Quantization with Active Learning based Adaptive Sampling

Multimodal Distillation

Self-Supervised Distillation
- Self-Supervised Dataset Distillation for Transfer Learning [:octocat:](https://github.com/db-Lee/selfsup_dd) [:book:](./citations/lee2024self.txt)
- Efficiency for Free: Ideal Data Are Transportable Representations [:octocat:](https://github.com/LINs-lab/ReLA) [:book:](./citations/sun2024rela.txt)
- Self-supervised Dataset Distillation: A Good Compression Is All You Need [:octocat:](https://github.com/VILA-Lab/SRe2L/tree/main/SCDD/) [:book:](./citations/zhou2024self.txt)

Benchmark
- DC-BENCH: Dataset Condensation Benchmark [:globe_with_meridians:](https://dc-bench.github.io/) [:octocat:](https://github.com/justincui03/dc_benchmark) [:book:](./citations/cui2022dc.txt)
- A Comprehensive Study on Dataset Distillation: Performance, Privacy, Robustness and Fairness
- DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation

Survey
- Data Distillation: A Survey
- A Survey on Dataset Distillation: Approaches, Applications and Future Directions [:book:](./citations/geng2023survey.txt)
- A Comprehensive Survey to Dataset Distillation [:book:](./citations/lei2023survey.txt)
- Dataset Distillation: A Comprehensive Review [:book:](./citations/yu2023review.txt)

Ph.D. Thesis

Generative Prior

Workshop

Generative Distillation
- Dataset Condensation via Generative Model
- Hierarchical Features Matter: A Deep Exploration of GAN Priors for Improved Dataset Distillation [:book:](./citations/zhong2024hglad.txt)
- Generative Dataset Distillation Based on Diffusion Model [:book:](./citations/su2024diffusion.txt)
- Generative Dataset Distillation: Balancing Global Structure and Local Details
- Efficient Dataset Distillation via Minimax Diffusion [:octocat:](https://github.com/vimar-gu/MinimaxDiffusion) [:book:](./citations/gu2024efficient.txt)
- Latent Dataset Distillation with Diffusion Models
- Synthesizing Informative Training Samples with GAN [:octocat:](https://github.com/VICO-UoE/IT-GAN) [:book:](./citations/zhao2022synthesizing.txt)
- Generalizing Dataset Distillation via Deep Generative Prior
- DiM: Distilling Dataset into Generative Model [:octocat:](https://github.com/vimar-gu/DiM) [:book:](./citations/wang2023dim.txt)
- D4M: Dataset Distillation via Disentangled Diffusion Model

Challenge
- The First Dataset Distillation Challenge

Applications

Medical
- Communication-Efficient Federated Skin Lesion Classification with Generalizable Dataset Distillation
- Soft-Label Anonymous Gastric X-ray Image Distillation [:book:](./citations/li2020soft.txt)
- Compressed Gastric Image Generation Based on Soft-Label Dataset Distillation for Medical Data Sharing [:book:](./citations/li2022compressed.txt)
- Dataset Distillation for Medical Dataset Sharing [:book:](./citations/li2023sharing.txt)
- Image Distillation for Safe Data Sharing in Histopathology
- Progressive Trajectory Matching for Medical Dataset Distillation
- Dataset Distillation in Medical Imaging: A Feasibility Study
- Dataset Distillation for Histopathology Image Classification
- Importance-Aware Adaptive Dataset Distillation

Continual Learning
- Reducing Catastrophic Forgetting with Learning on Synthetic Data
- Condensed Composite Memory Continual Learning
- Distilled Replay: Overcoming Forgetting through Synthetic Samples
- Sample Condensation in Online Continual Learning
- Summarizing Stream Data for Memory-Restricted Online Continual Learning [:octocat:](https://github.com/vimar-gu/SSD) [:book:](./citations/gu2024ssd.txt)
- An Efficient Dataset Condensation Plugin and Its Application to Continual Learning [:book:](./citations/yang2023efficient.txt)

Privacy
- SecDD: Efficient and Secure Method for Remotely Training Neural Networks
- Privacy for Free: How does Dataset Condensation Help Privacy?
- Private Set Generation with Discriminative Information [:octocat:](https://github.com/DingfanChen/Private-Set) [:book:](./citations/chen2022privacy.txt)
- Backdoor Attacks Against Dataset Distillation
- Differentially Private Kernel Inducing Points (DP-KIP) for Privacy-preserving Data Distillation
- No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy"
- Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation
- Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective (Chung et al., ICLR 2024) [:book:](./citations/chung2024backdoor.txt)
- Adaptive Backdoor Attacks Against Dataset Distillation for Federated Learning
- Differentially Private Dataset Condensation

Federated Learning
- Federated Learning via Synthetic Data
- Distilled One-Shot Federated Learning
- DENSE: Data-Free One-Shot Federated Learning [:octocat:](https://github.com/zj-jayzhang/DENSE) [:book:](./citations/zhang2022dense.txt)
- FedSynth: Gradient Compression via Synthetic Data in Federated Learning
- DYNAFED: Tackling Client Data Heterogeneity with Global Dynamics
- Meta Knowledge Condensation for Federated Learning
- Federated Learning via Decentralized Dataset Distillation in Resource-Constrained Edge Environments
- Federated Virtual Learning on Heterogeneous Data with Local-global Distillation (Chun-Yin Huang et al., 2023) [:book:](./citations/huang2023federated.txt)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity
- DCFL: Non-IID Awareness Dataset Condensation Aided Federated Learning
- Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents [:book:](./citations/jia2024feddg.txt)
- Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors (Chun-Yin Huang et al., ICML 2024) [:octocat:](https://github.com/ubc-tea/DESA) [:book:](./citations/huang2024desa.txt)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [:book:](./citations/xiong2023feddm.txt)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations (Hui-Po Wang et al., 2023) [:octocat:](https://github.com/a514514772/fedlap-dp) [:book:](./citations/wang2023fed.txt)

Graph Neural Network
- Graph Condensation for Graph Neural Networks
- Condensing Graphs via One-Step Gradient Matching [:octocat:](https://github.com/amazon-research/DosCond) [:book:](./citations/jin2022condensing.txt)
- Graph Condensation via Receptive Field Distribution Matching
- CaT: Balanced Continual Graph Learning with Graph Condensation [:octocat:](https://github.com/superallen13/CaT-CGL) [:book:](./citations/liu2023cat.txt)
- Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data [:octocat:](https://github.com/Amanda-Zheng/SFGC) [:book:](./citations/zheng2023sfgc.txt)
- Does Graph Distillation See Like Vision Dataset Counterpart?
- Fair Graph Distillation
- Mirage: Model-Agnostic Graph Distillation for Graph Classification
- Graph Distillation with Eigenbasis Matching [:octocat:](https://github.com/liuyang-tian/GDEM) [:book:](./citations/liu2024gdem.txt)
- Graph Data Condensation via Self-expressive Graph Structure Reconstruction
- Two Trades is not Baffled: Condensing Graph via Crafting Rational Gradient Matching [:octocat:](https://github.com/NUS-HPC-AI-Lab/CTRL) [:book:](./citations/zhang2024ctrl.txt)
- Navigating Complexity: Toward Lossless Graph Condensation via Expanding Window Matching [:octocat:](https://github.com/NUS-HPC-AI-Lab/GEOM) [:book:](./citations/zhang2024geom.txt)
- GC-Bench: A Benchmark Framework for Graph Condensation with New Insights [:octocat:](https://github.com/Emory-Melody/GraphSlim) [:book:](./citations/gong2024graphslim.txt)
- GC-Bench: An Open and Unified Benchmark for Graph Condensation [:book:](./citations/sun2024gcbench.txt)
- A Comprehensive Survey on Graph Reduction: Sparsification, Coarsening, and Condensation [:octocat:](https://github.com/Emory-Melody/awesome-graph-reduction) [:book:](./citations/hashemi2024awesome.txt)
- Graph Condensation: A Survey [:book:](./citations/gao2024graph.txt)
- A Survey on Graph Condensation [:book:](./citations/xu2024survey.txt)
- GCondenser: Benchmarking Graph Condensation
- Kernel Ridge Regression-Based Graph Dataset Distillation

Neural Architecture Search
- Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data [:octocat:](https://github.com/uber-research/GTN) [:book:](./citations/such2020generative.txt)
- Learning to Generate Synthetic Training Data using Gradient Matching and Implicit Differentiation [:octocat:](https://github.com/dm-medvedev/efficientdistillation) [:book:](./citations/medvedev2021tabular.txt)
- Calibrated Dataset Condensation for Faster Hyperparameter Search

Fashion, Art, and Design
- Wearable ImageNet: Synthesizing Tileable Textures via Dataset Distillation [:globe_with_meridians:](https://georgecazenavette.github.io/mtt-distillation/) [:octocat:](https://github.com/georgecazenavette/mtt-distillation) [:book:](./citations/cazenavette2022textures.txt)
- Learning from Designers: Fashion Compatibility Analysis Via Dataset Distillation
- Galaxy Dataset Distillation with Self-Adaptive Trajectory Matching [:book:](./citations/guan2023galaxy.txt)

Knowledge Distillation

Recommender Systems

Blackbox Optimization
- Bidirectional Learning for Offline Infinite-width Model-based Optimization
- Bidirectional Learning for Offline Model-based Biological Sequence Design [:octocat:](https://github.com/GGchen1997/BIB-ICML2023-Submission) [:book:](./citations/chen2023bidirectional.txt)

Trustworthy
- Rethinking Data Distillation: Do Not Overlook Calibration
- Towards Trustworthy Dataset Distillation
- Group Distributionally Robust Dataset Distillation with Risk Minimization
- Can We Achieve Robustness from Data Alone?
- Towards Robust Dataset Learning
- Towards Adversarially Robust Dataset Distillation by Curvature Regularization

Retrieval

Text
- Data Distillation for Text Classification
- Dataset Distillation with Attention Labels for Fine-tuning BERT [:octocat:](https://github.com/arumaekawa/dataset-distillation-with-attention-labels) [:book:](./citations/maekawa2023text.txt)
- DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation

Tabular
- New Properties of the Data Distillation Method When Working With Tabular Data [:octocat:](https://github.com/dm-medvedev/dataset-distillation) [:book:](./citations/medvedev2020tabular.txt)

Reinforcement Learning

Time Series

Machine Unlearning

Domain Adaptation

Long-Tail

Video

Super Resolution

Speech
- Dataset-Distillation Generative Model for Speech Emotion Recognition (Ritter-Gutierrez et al., Interspeech 2024) [:book:](./citations/fabian2024speech.txt)

Media Coverage

Acknowledgments
Thanks to Nikolaos Tsilivis, [Dmitry Medvedev](https://github.com/dm-medvedev), [Seungjae Shin](https://github.com/SJShin-AI), [Jiawei Du](https://github.com/AngusDujw), [Yidi Jiang](https://github.com/Jiang-Yidi), [Xindi Wu](https://github.com/XindiWu), [Guangyi Liu](https://github.com/lgy0404), [Yilun Liu](https://github.com/superallen13), [Kai Wang](https://github.com/kaiwang960112), [Yue Xu](https://github.com/silicx), [Anjia Cao](https://github.com/CAOANJIA), [Jianyang Gu](https://github.com/vimar-gu), [Yuanzhen Feng](https://github.com/fengyzpku), [Peng Sun](https://github.com/sp12138), [Ahmad Sajedi](https://github.com/AhmadSajedii), Zhihao Sui, [Eduardo Montesuma](https://github.com/eddardd), [Shengbo Gong](https://github.com/rockcor), [Zheng Zhou](https://github.com/zhouzhengqd), [Zhenghao Zhao](https://github.com/ichbill), [Duo Su](https://github.com/suduo94), [Tianhang Zheng](https://github.com/tianzheng4), [Shijie Ma](https://github.com/mashijie1028), [Wei Wei](https://github.com/WeiWeic6222848), [Yantai Yang](https://github.com/Hiter-Q), [Shaobo Wang](https://github.com/gszfwsb), [Xinhao Zhong](https://github.com/ndhg1213), [Zhiqiang Shen](https://github.com/szq0214), [Cong Cong](https://github.com/thomascong121), [Chun-Yin Huang](https://github.com/chunyinhuang), [Dai Liu](https://github.com/NiaLiu), and [Ruonan Yu](https://github.com/Lexie-YU) for their valuable suggestions and contributions.
Star History

- [![Star History Chart](https://api.star-history.com/svg?repos=Guang000/Awesome-Dataset-Distillation&type=Date)](https://star-history.com/#Guang000/Awesome-Dataset-Distillation&Date)