# awesome-neural-reprogramming-acoustic-prompting ![Awesome](https://awesome.re/badge.svg)

A curated list of awesome adversarial reprogramming and input prompting methods for neural networks since 2022.

**News** - Dr. Pin-Yu Chen and Huck will give a tutorial on adversarial reprogramming at ICASSP 2022. [Video](https://www.youtube.com/watch?v=-iirkbYkyXI)

How to **empower frozen large-scale pre-trained models** with reprogramming and prompting for different applications is the next big challenge of deep learning. Contributions via commits and pull requests are welcome!
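
The core idea shared by the reprogramming papers below can be sketched in a few lines: the pretrained model stays frozen, and only a small input transformation plus an output label mapping are learned. The following is a minimal NumPy sketch; the "frozen model", all shapes, and the class split are hypothetical stand-ins, not any specific paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "pretrained model": a fixed linear classifier over
# 16-dim source inputs with 10 source classes. Its weights are never updated.
W_frozen = rng.normal(size=(16, 10))

def frozen_model(x):
    return x @ W_frozen

# Reprogramming: embed a small 4-dim target-domain input into the 16-dim
# source input space via a trainable transform theta (the only learned part).
theta = np.zeros((4, 16))

def reprogram(x_target):
    return x_target @ theta

# Many-to-one label mapping: source classes {0..4} -> target class 0,
# source classes {5..9} -> target class 1.
def map_labels(source_logits):
    return np.stack([source_logits[..., :5].sum(-1),
                     source_logits[..., 5:].sum(-1)], axis=-1)

x = rng.normal(size=(3, 4))  # a batch of 3 target-domain inputs
target_logits = map_labels(frozen_model(reprogram(x)))
print(target_logits.shape)   # (3, 2)
```

In practice theta would be trained by gradient descent on the target task while `W_frozen` is untouched, which is what makes reprogramming parameter-efficient.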

## Neural Adapters in Decoder, Encoder, and Inputs

| Title | Authors | Code | Venue |
| ----- | ------- | -------- | ---- |
|[Differentially Private Adapters for Parameter Efficient Acoustic Modeling](https://arxiv.org/abs/2305.11360)|C.-W. Ho et al.|[code](https://github.com/Chun-wei-Ho/)|Interspeech 2023|
|[Parameter-Efficient Learning for Text-to-Speech Accent Adaptation](https://arxiv.org/abs/2305.11320)|L.-J. Yang et al.|[code](https://tts-research.github.io/)|Interspeech 2023|
|[A Parameter-Efficient Learning Approach to Arabic Dialect Identification with Pre-Trained General-Purpose Speech Model](https://arxiv.org/pdf/2305.11244)|S. Radhakrishnan et al.|[code](https://github.com/Srijith-rkr/KAUST-Whisper-Adapter)|Interspeech 2023|
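The adapter approach used in the papers above inserts a small trainable bottleneck with a residual connection into an otherwise frozen network. A minimal NumPy sketch of one such layer (all dimensions and initializations are illustrative assumptions, not any paper's exact recipe):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_bottleneck = 32, 4

# Frozen pretrained layer: never updated during adaptation.
W_frozen = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)

# Adapter: trainable down-projection, nonlinearity, up-projection, added
# back with a residual connection. Only these ~2 * d_model * d_bottleneck
# parameters are updated for the downstream task.
W_down = rng.normal(size=(d_model, d_bottleneck)) * 0.01
W_up = np.zeros((d_bottleneck, d_model))  # zero init => adapter starts as identity

def adapter(h):
    return h + np.maximum(h @ W_down, 0.0) @ W_up

def layer_with_adapter(x):
    return adapter(x @ W_frozen)

x = rng.normal(size=(2, d_model))
out = layer_with_adapter(x)
print(out.shape)  # (2, 32)
```

Zero-initializing the up-projection is a common choice: the adapted layer starts out exactly equal to the frozen layer, so adaptation begins from the pretrained behavior.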

## Neural Reprogramming, Adversarial Reprogramming, and Offsite-Tuning

| Title | Authors | Code | Venue |
| ----- | ------- | -------- | ---- |
|[Reprogramming Self-supervised Learning-based Speech Representations for Speaker Anonymization](https://arxiv.org/abs/2311.10664)|X. Chen et al.|-|ACM MM Asia 2023|
|[Offsite-Tuning: Transfer Learning without Full Model](https://arxiv.org/pdf/2302.04870.pdf)|G. Xiao et al.|[code](https://github.com/mit-han-lab/offsite-tuning)|arXiv 2023|
|[Music Instrument Classification Reprogrammed](https://arxiv.org/pdf/2211.08379.pdf)|H.-H. Chen et al.|-|MMM 2023|
|[From English to More Languages: Parameter-Efficient Model Reprogramming for Cross-Lingual Speech Recognition](https://arxiv.org/pdf/2301.07851.pdf)|C.-H. H. Yang et al.|-|ICASSP 2023|
|[Reprogrammable-FL: Improving Utility-Privacy Tradeoff in Federated Learning via Model Reprogramming](https://openreview.net/pdf?id=00EiAK1LHs)|H. Arif et al.|[code](https://github.com/IBM/reprogrammble-FL)|SaTML 2023|
|[Fairness Reprogramming](https://openreview.net/pdf?id=Nay_rOB-dZv)|G. Zhang et al.|[code](https://github.com/ucsb-nlp-chang/fairness-reprogramming)|NeurIPS 2022|
|[Low-Resource Music Genre Classification with Advanced Neural Model Reprogramming](https://arxiv.org/pdf/2211.01317.pdf)|Y.-N. Hung et al.|[code](https://github.com/biboamy/music-repro)|ICASSP 2023|
|[Reprogramming Large Pretrained Language Models for Antibody Sequence Infilling](https://arxiv.org/pdf/2210.07144.pdf)|I. Melnyk et al.|-|arXiv 2022|
|[Adversarial Reprogramming Revisited](https://arxiv.org/abs/2206.03466)|M. Englert et al.|-|NeurIPS 2022|
|[Rep-Net: Efficient On-Device Learning via Feature Reprogramming](https://openaccess.thecvf.com/content/CVPR2022/papers/Yang_Rep-Net_Efficient_On-Device_Learning_via_Feature_Reprogramming_CVPR_2022_paper.pdf)|L. Yang et al.|[code](https://github.com/ASU-ESIC-FAN-Lab/RepNet)|CVPR 2022|
|[Improved Input Reprogramming for GAN Conditioning](https://arxiv.org/pdf/2201.02692.pdf)|T. Dinh et al.|-|arXiv 2022|
|[Cross-modal Adversarial Reprogramming](https://openaccess.thecvf.com/content/WACV2022/papers/Neekhara_Cross-Modal_Adversarial_Reprogramming_WACV_2022_paper.pdf)| P. Neekhara et al. |[code](https://github.com/paarthneekhara/multimodal_rerprogramming)|WACV 2022|
|[A Study of Low-Resource Speech Commands Recognition based on Adversarial Reprogramming](https://arxiv.org/pdf/2110.03894.pdf)|H. Yen et al.|[code](https://github.com/dodohow1011/SpeechAdvReprogram)|Interspeech 2023|
|[WARP: Word-level Adversarial ReProgramming](https://aclanthology.org/2021.acl-long.381.pdf)|K. Hambardzumyan et al.|[code](https://github.com/YerevaNN/WARP)|ACL 2021|
|[Voice2Series: Reprogramming Acoustic Models for Time Series Classification](https://arxiv.org/pdf/2106.09296.pdf)|C.-H. H. Yang et al.|[code](https://github.com/huckiyang/Voice2Series-Reprogramming)|ICML 2021|
|[Transfer Learning without Knowing: Reprogramming Black-Box Machine Learning Models with Scarce Data and Limited Resources](http://proceedings.mlr.press/v119/tsai20a/tsai20a.pdf)|Y.-Y. Tsai et al.|[code](https://github.com/yunyuntsai/Black-box-Adversarial-Reprogramming)|ICML 2020|
|[Reprogramming GANs via Input Noise Design](http://csuh.kaist.ac.kr/Suh_Reprogramming_GAN.pdf)|K. Lee et al.|-|ECML 2019|
|[Adversarial Reprogramming of Text Classification Neural Networks](https://arxiv.org/abs/1809.01829)| P. Neekhara et al. |[code](https://github.com/paarthneekhara/rnn_adversarial_reprogramming)|EMNLP 2019|
|[Adversarial Reprogramming of Neural Networks](https://arxiv.org/pdf/1806.11146.pdf)|G. F. Elsayed et al.|[code](https://github.com/Prinsphield/Adversarial_Reprogramming)|ICLR 2019|

## Input-Level Neural Model Prompting for Vision and Speech

| Title | Authors | Code | Venue |
| ----- | ------- | -------- | ---- |
|[SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks](https://arxiv.org/pdf/2303.00733.pdf)| K.-W. Chang et al. |[code](https://github.com/ga642381/SpeechPrompt)|arXiv 2023|
|[Understanding and Improving Visual Prompting: A Label-Mapping Perspective](https://arxiv.org/abs/2211.11635)|A. Chen et al.|-|arXiv 2022|
|[AudioLM: a Language Modeling Approach to Audio Generation](https://arxiv.org/abs/2209.03143)|Z. Borsos et al.|-|arXiv 2022|
|[WAVPROMPT: Towards Few-Shot Spoken Language Understanding with Frozen Language Models](https://arxiv.org/pdf/2203.15863.pdf)|H. Gao et al.|-|arXiv 2022|
|[An Exploration of Prompt Tuning on Generative Spoken Language Model for Speech Processing Tasks](https://arxiv.org/pdf/2203.16773.pdf)| K.-W. Chang et al. |[code](https://github.com/ga642381/SpeechPrompt)|Interspeech 2022|
|[Visual Prompting: Modifying Pixel Space to Adapt Pre-trained Models](https://arxiv.org/pdf/2203.17274.pdf)|H. Bahng et al.|-|arXiv 2022|
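
Input-level visual prompting, as studied in several of the works above, learns a perturbation in pixel space (often a border around the image) while the vision model stays frozen. A minimal NumPy sketch of the input pipeline; the 28x28-to-32x32 shapes and the border mask are illustrative assumptions:

```python
import numpy as np

# A trainable pixel-space "prompt" restricted to a 2-pixel border; the
# pretrained model that consumes the result is never updated.
pad = 2
prompt = np.zeros((1, 32, 32))     # trainable in practice; zeros for illustration
mask = np.ones((1, 32, 32))
mask[:, pad:-pad, pad:-pad] = 0.0  # prompt only lives in the border region

def add_visual_prompt(images_28):
    """Place 28x28 target images on a 32x32 canvas and add the border prompt."""
    n = images_28.shape[0]
    canvas = np.zeros((n, 1, 32, 32))
    canvas[:, :, pad:-pad, pad:-pad] = images_28[:, None]
    return canvas + mask * prompt  # this is what the frozen model sees

batch = np.random.default_rng(0).normal(size=(4, 28, 28))
prompted = add_visual_prompt(batch)
print(prompted.shape)  # (4, 1, 32, 32)
```

During training, gradients flow through the frozen model back to `prompt`, so only the border pixels are optimized for the downstream task.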

## Theory

| Title | Authors | Code | Venue |
| ----- | ------- | -------- | ---- |
|[Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution](https://arxiv.org/abs/2202.10054)|A. Kumar et al.|-|ICLR 2022|