# Instruction-Tuning-Papers

![](https://img.shields.io/badge/PRs-welcome-brightgreen) ![](https://img.shields.io/github/stars/sinclaircoder/Instruction-Tuning-Papers?style=social)

A trend starting from `Natural-Instructions` (ACL 2022), `FLAN` (ICLR 2022), and `T0` (ICLR 2022).

What is instruction tuning? It aims to teach language models to follow natural language instructions (including a prompt, positive or negative examples, constraints, etc.) so that they both perform better at multi-task learning on training tasks and generalize to unseen tasks.
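
To make this concrete, below is a minimal, hypothetical sketch (in Python) of how one supervised task instance might be serialized into an instruction-tuning training pair. The `build_example` helper and the exact field layout are illustrative assumptions, loosely following the instruction/positive-example/negative-example schema popularized by `Natural-Instructions`, not any specific paper's format.

```python
# Illustrative sketch: reformat a task instance into an instruction-tuning
# (prompt, target) pair. The instruction, optional demonstrations, and the
# task input are concatenated into one prompt; the model is then fine-tuned
# (with a standard supervised objective) to produce the target output.

def build_example(instruction, task_input, target,
                  positive_demos=(), negative_demos=(), constraints=None):
    """Serialize one task instance into a (prompt, target) training pair."""
    parts = [f"Instruction: {instruction}"]
    if constraints:
        parts.append(f"Constraints: {constraints}")
    for x, y in positive_demos:
        parts.append(f"Positive example:\nInput: {x}\nOutput: {y}")
    for x, y in negative_demos:  # negative examples show outputs to avoid
        parts.append(f"Negative example:\nInput: {x}\nOutput: {y}")
    parts.append(f"Input: {task_input}\nOutput:")
    return "\n\n".join(parts), target

prompt, target = build_example(
    instruction="Classify the sentiment of the review as positive or negative.",
    task_input="The plot was thin, but the acting saved the film.",
    target="positive",
    positive_demos=[("I loved every minute.", "positive")],
    negative_demos=[("Great movie!", "negative")],  # deliberately wrong label
)
print(prompt)

# Pairs like this, pooled across many tasks, form the fine-tuning corpus;
# held-out tasks are then used to measure zero-shot generalization.
```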

## Papers

1. **Cross-task generalization via natural language crowdsourcing instructions**

*Swaroop Mishra, Daniel Khashabi, Chitta Baral, Hannaneh Hajishirzi* [[paper]](https://aclanthology.org/2022.acl-long.244/) 2021.4

1. **Finetuned language models are zero-shot learners**

*Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le* [[paper]](https://arxiv.org/abs/2109.01652) 2021.9

1. **Multitask Prompted Training Enables Zero-Shot Task Generalization**

*Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Tali Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush* [[paper]](https://arxiv.org/abs/2110.08207) 2021.10

1. **ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization**

*Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, Zhilin Yang* [[paper]](https://arxiv.org/abs/2201.06910) 2022.1

1. **UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models**

*Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu* [[paper]](https://arxiv.org/abs/2201.05966) 2022.1

1. **Training language models to follow instructions with human feedback**

*Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe* [[paper]](https://arxiv.org/abs/2203.02155) 2022.3

1. **Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks**

*Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddhartha Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi, Daniel Khashabi* [[paper]](https://arxiv.org/abs/2204.07705) 2022.4

1. **In-BoXBART: Get Instructions into Biomedical Multi-Task Learning**

*Mihir Parmar, Swaroop Mishra, Mirali Purohit, Man Luo, M. Hassan Murad, Chitta Baral* [[paper]](https://arxiv.org/abs/2204.07600) 2022.4

1. **Unsupervised Cross-Task Generalization via Retrieval Augmentation**

*Bill Yuchen Lin, Kangmin Tan, Chris Miller, Beiwen Tian, Xiang Ren* [[paper]](https://arxiv.org/abs/2204.07937) 2022.4

1. **Prompt Consistency for Zero-Shot Task Generalization**

*Chunting Zhou, Junxian He, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig* [[paper]](https://arxiv.org/abs/2205.00049) 2022.5

1. **Instruction Induction: From Few Examples to Natural Language Task Descriptions**

*Or Honovich, Uri Shaham, Samuel R. Bowman, Omer Levy* [[paper]](https://arxiv.org/abs/2205.10782) 2022.5

1. **InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning**

*Prakhar Gupta, Cathy Jiao, Yi-Ting Yeh, Shikib Mehri, Maxine Eskenazi, Jeffrey P. Bigham* [[paper]](https://arxiv.org/abs/2205.12673) 2022.5

1. **reStructured Pre-training**

*Weizhe Yuan, Pengfei Liu* [[paper]](https://arxiv.org/abs/2206.11147) 2022.6

1. **Improving Task Generalization via Unified Schema Prompt**

*Wanjun Zhong, Yifan Gao, Ning Ding, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan* [[paper]](https://arxiv.org/abs/2208.03229) 2022.8

1. **Scaling Instruction-Finetuned Language Models**

*Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei* [[paper]](https://arxiv.org/abs/2210.11416) 2022.10

1. **Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners**

*Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo* [[paper]](https://arxiv.org/abs/2210.02969) 2022.10

1. **Retrieval of Soft Prompt Enhances Zero-Shot Task Generalization**

*Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo* [[paper]](https://arxiv.org/abs/2210.03029) 2022.10

1. **Zemi: Learning Zero-Shot Semi-Parametric Language Models from Multiple Tasks**

*Zhenhailong Wang, Xiaoman Pan, Dian Yu, Dong Yu, Jianshu Chen, Heng Ji* [[paper]](https://arxiv.org/abs/2210.00185) 2022.10

1. **Learning Instructions with Unlabeled Data for Zero-Shot Cross-Task Generalization**

*Yuxian Gu, Pei Ke, Xiaoyan Zhu, Minlie Huang* [[paper]](https://arxiv.org/abs/2210.09175) 2022.10

1. **Crosslingual Generalization through Multitask Finetuning**

*Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, Xiangru Tang, Dragomir Radev, Alham Fikri Aji, Khalid Almubarak, Samuel Albanie, Zaid Alyafeai, Albert Webson, Edward Raff, Colin Raffel* [[paper]](https://arxiv.org/abs/2211.01786) 2022.11

1. **Task-aware Retrieval with Instructions**

*Akari Asai, Timo Schick, Patrick Lewis, Xilun Chen, Gautier Izacard, Sebastian Riedel, Hannaneh Hajishirzi, Wen-tau Yih* [[paper]](https://arxiv.org/abs/2211.09260) 2022.11

1. **UnifiedABSA: A Unified ABSA Framework Based on Multi-task Instruction Tuning**

*Zengzhi Wang, Rui Xia, Jianfei Yu* [[paper]](https://arxiv.org/abs/2211.10986) 2022.11

1. **Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor**

*Or Honovich, Thomas Scialom, Omer Levy, Timo Schick* [[paper]](https://arxiv.org/abs/2212.09689) 2022.12

1. **Improving Cross-task Generalization of Unified Table-to-text Models with Compositional Task Configurations**

*Jifan Chen, Yuhao Zhang, Lan Liu, Rui Dong, Xinchi Chen, Patrick Ng, William Yang Wang, Zhiheng Huang* [[paper]](https://arxiv.org/abs/2212.08780) 2022.12

1. **Self-Instruct: Aligning Language Models with Self-Generated Instructions**

*Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi* [[paper]](https://arxiv.org/abs/2212.10560) 2022.12

1. **One Embedder, Any Task: Instruction-Finetuned Text Embeddings**

*Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, Tao Yu* [[paper]](https://arxiv.org/abs/2212.09741) 2022.12

1. **HINT: Hypernetwork Instruction Tuning for Efficient Zero-Shot Generalisation**

*Hamish Ivison, Akshita Bhagia, Yizhong Wang, Hannaneh Hajishirzi, Matthew Peters* [[paper]](https://arxiv.org/abs/2212.10315) 2022.12

1. **MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning**

*Zhiyang Xu, Ying Shen, Lifu Huang* [[paper]](https://arxiv.org/abs/2212.10773) 2022.12

1. **OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization**

*Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, Ves Stoyanov* [[paper]](https://arxiv.org/abs/2212.12017) 2022.12

1. **Data-Efficient Finetuning Using Cross-Task Nearest Neighbors**

*Hamish Ivison, Noah A. Smith, Hannaneh Hajishirzi, Pradeep Dasigi* [[paper]](https://arxiv.org/abs/2212.00196) 2022.12

1. **The Flan Collection: Designing Data and Methods for Effective Instruction Tuning**

*Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, Adam Roberts* [[paper]](https://arxiv.org/abs/2301.13688) 2023.1

1. **Exploring the Benefits of Training Expert Language Models over Instruction Tuning**

*Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo* [[paper]](https://arxiv.org/abs/2302.03202) 2023.2

1. **GPTScore: Evaluate as You Desire**

*Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, Pengfei Liu* [[paper]](https://arxiv.org/abs/2302.04166) 2023.2

1. **Adding Instructions during Pretraining: Effective Way of Controlling Toxicity in Language Models**

*Shrimai Prabhumoye, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro* [[paper]](https://arxiv.org/abs/2302.07388) 2023.2

1. **The Wisdom of Hindsight Makes Language Models Better Instruction Followers**

*Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, Joseph E. Gonzalez* [[paper]](https://arxiv.org/abs/2302.05206) 2023.2

1. **In-Context Instruction Learning**

*Seonghyeon Ye, Hyeonbin Hwang, Sohee Yang, Hyeongu Yun, Yireun Kim, Minjoon Seo* [[paper]](https://arxiv.org/abs/2302.14691) 2023.2

1. **Exploring the Impact of Instruction Data Scaling on Large Language Models: An Empirical Study on Real-World Use Cases**

*Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Lei Zhang, Baochang Ma, Xiangang Li* [[paper]](https://arxiv.org/abs/2303.14742) 2023.3

1. **Unified Text Structuralization with Instruction-tuned Language Models**

*Xuanfan Ni, Piji Li, Huayang Li* [[paper]](https://arxiv.org/abs/2303.14956) 2023.3

1. **Instruction Tuning with GPT-4**

*Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao* [[paper]](https://arxiv.org/abs/2304.03277) 2023.4

1. **ChatPLUG: Open-Domain Generative Dialogue System with Internet-Augmented Instruction Tuning for Digital Human**

*Junfeng Tian, Hehong Chen, Guohai Xu, Ming Yan, Xing Gao, Jianhai Zhang, Chenliang Li, Jiayi Liu, Wenshen Xu, Haiyang Xu, Qi Qian, Wei Wang, Qinghao Ye, Jiejing Zhang, Ji Zhang, Fei Huang, Jingren Zhou* [[paper]](https://arxiv.org/abs/2304.07849) 2023.4

1. **Towards Better Instruction Following Language Models for Chinese: Investigating the Impact of Training Data and Evaluation**

*Yunjie Ji, Yan Gong, Yong Deng, Yiping Peng, Qiang Niu, Baochang Ma, Xiangang Li* [[paper]](https://arxiv.org/abs/2304.07854) 2023.4

1. **Chinese Open Instruction Generalist: A Preliminary Release**

*Ge Zhang, Yemin Shi, Ruibo Liu, Ruibin Yuan, Yizhi Li, Siwei Dong, Yu Shu, Zhaoqun Li, Zekun Wang, Chenghua Lin, Wenhao Huang, Jie Fu* [[paper]](https://arxiv.org/abs/2304.07987) 2023.4

1. **From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning**

*Qian Liu, Fan Zhou, Zhengbao Jiang, Longxu Dou, Min Lin* [[paper]](https://arxiv.org/abs/2304.07995) 2023.4

1. **InstructUIE: Multi-task Instruction Tuning for Unified Information Extraction**

*Xiao Wang, Weikang Zhou, Can Zu, Han Xia, Tianze Chen, Yuansen Zhang, Rui Zheng, Junjie Ye, Qi Zhang, Tao Gui, Jihua Kang, Jingsheng Yang, Siyuan Li, Chunsai Du* [[paper]](https://arxiv.org/abs/2304.08085) 2023.4

1. **A Comparative Study between Full-Parameter and LoRA-based Fine-Tuning on Chinese Instruction Data for Instruction Following Large Language Model**

*Xianghui Sun, Yunjie Ji, Baochang Ma, Xiangang Li* [[paper]](https://arxiv.org/abs/2304.08109) 2023.4

1. **LongForm: Optimizing Instruction Tuning for Long Text Generation with Corpus Extraction**

*Abdullatif Köksal, Timo Schick, Anna Korhonen, Hinrich Schütze* [[paper]](https://arxiv.org/abs/2304.08460) 2023.4

1. **WizardLM: Empowering Large Language Models to Follow Complex Instructions**

*Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Daxin Jiang* [[paper]](https://arxiv.org/abs/2304.12244) 2023.4

1. **AMR Parsing with Instruction Fine-tuned Pre-trained Language Models**

*Young-Suk Lee, Ramón Fernandez Astudillo, Radu Florian, Tahira Naseem, Salim Roukos* [[paper]](https://arxiv.org/abs/2304.12272) 2023.4

1. **Controlled Text Generation with Natural Language Instructions**

*Wangchunshu Zhou, Yuchen Eleanor Jiang, Ethan Wilcox, Ryan Cotterell, Mrinmaya Sachan* [[paper]](https://arxiv.org/abs/2304.14293) 2023.4

1. **LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions**

*Minghao Wu, Abdul Waheed, Chiyu Zhang, Muhammad Abdul-Mageed, Alham Fikri Aji* [[paper]](https://arxiv.org/abs/2304.14402) 2023.4

1. **Visual Instruction Tuning**

*Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee* [[paper]](https://arxiv.org/abs/2304.08485) 2023.4

1. **TABLET: Learning From Instructions For Tabular Data**

*Dylan Slack, Sameer Singh* [[paper]](https://arxiv.org/abs/2304.13188) 2023.4

1. **LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model**

*Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, Yu Qiao* [[paper]](https://arxiv.org/abs/2304.15010) 2023.4

1. **LINGO: Visually Debiasing Natural Language Instructions to Support Task Diversity**

*Anjana Arunkumar, Shubham Sharma, Rakhi Agrawal, Sriram Chandrasekaran, Chris Bryan* [[paper]](https://arxiv.org/abs/2304.06184) 2023.4

1. **Text-to-Audio Generation using Instruction-Tuned LLM and Latent Diffusion Model**

*Deepanway Ghosal, Navonil Majumder, Ambuj Mehrish, Soujanya Poria* [[paper]](https://arxiv.org/abs/2304.13731) 2023.4

1. **Resources and Few-shot Learners for In-context Learning in Slavic Languages**

*Michal Štefánik, Marek Kadlčík, Piotr Gramacki, Petr Sojka* [[paper]](https://arxiv.org/abs/2304.01922) 2023.4

1. **Generation-driven Contrastive Self-training for Zero-shot Text Classification with Instruction-tuned GPT**

*Ruohong Zhang, Yau-Shian Wang, Yiming Yang* [[paper]](https://arxiv.org/abs/2304.11872) 2023.4

1. **Poisoning Language Models During Instruction Tuning**

*Alexander Wan, Eric Wallace, Sheng Shen, Dan Klein* [[paper]](https://arxiv.org/abs/2305.00944) 2023.5

1. **Panda LLM: Training Data and Evaluation for Open-Sourced Chinese Instruction-Following Large Language Models**

*Fangkai Jiao, Bosheng Ding, Tianze Luo, Zhanfeng Mo* [[paper]](https://arxiv.org/abs/2305.03025) 2023.5

1. **Improving Cross-Task Generalization with Step-by-Step Instructions**

*Yang Wu, Yanyan Zhao, Zhongyang Li, Bing Qin, Kai Xiong* [[paper]](https://arxiv.org/abs/2305.04429) 2023.5

1. **Towards Building the Federated GPT: Federated Instruction Tuning**

*Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, Guoyin Wang, Yiran Chen* [[paper]](https://arxiv.org/abs/2305.05644) 2023.5

1. **STORYWARS: A Dataset and Instruction Tuning Baselines for Collaborative Story Understanding and Generation**

*Yulun Du, Lydia Chilton* [[paper]](https://arxiv.org/abs/2305.08152) 2023.5

1. **CoEdIT: Text Editing by Task-Specific Instruction Tuning**

*Vipul Raheja, Dhruv Kumar, Ryan Koo, Dongyeop Kang* [[paper]](https://arxiv.org/abs/2305.09857) 2023.5

1. **Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors**

*Kai Zhang, Bernal Jiménez Gutiérrez, Yu Su* [[paper]](https://arxiv.org/abs/2305.11159) 2023.5

1. **Otter: A Multi-Modal Model with In-Context Instruction Tuning**

*Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, Ziwei Liu* [[paper]](https://arxiv.org/abs/2305.03726) 2023.5

1. **Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach**

*Junjie Zhang, Ruobing Xie, Yupeng Hou, Wayne Xin Zhao, Leyu Lin, Ji-Rong Wen* [[paper]](https://arxiv.org/abs/2305.07001) 2023.5

1. **Maybe Only 0.5% Data is Needed: A Preliminary Exploration of Low Training Data Instruction Tuning**

*Hao Chen, Yiming Zhang, Qi Zhang, Hantao Yang, Xiaomeng Hu, Xuetao Ma, Yifan Yanggong, Junbo Zhao* [[paper]](https://arxiv.org/abs/2305.09246) 2023.5

1. **Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation**

*Da Yin, Xiao Liu, Fan Yin, Ming Zhong, Hritik Bansal, Jiawei Han, Kai-Wei Chang* [[paper]](https://arxiv.org/abs/2305.14327) 2023.5

1. **The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning**

*Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo* [[paper]](https://arxiv.org/abs/2305.14045) 2023.5

1. **LLM-Blender: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion**

*Dongfu Jiang, Xiang Ren, Bill Yuchen Lin* [[paper]](https://arxiv.org/abs/2306.02561) 2023.6

1. **InstructZero: Efficient Instruction Optimization for Black-Box Large Language Models**

*Lichang Chen, Jiuhai Chen, Tom Goldstein, Heng Huang, Tianyi Zhou* [[paper]](https://arxiv.org/abs/2306.03082) 2023.6

1. **M3IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning**

*Lei Li, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang, Jingjing Xu, Xu Sun, Lingpeng Kong, Qi Liu* [[paper]](https://arxiv.org/abs/2306.04387) 2023.6

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=SinclairCoder/Instruction-Tuning-Papers&type=Date)](https://star-history.com/#SinclairCoder/Instruction-Tuning-Papers&Date)