Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-llm-if
An Awesome List for LLM Instruction Following
https://github.com/thinkwee/awesome-llm-if
Last synced: 5 days ago
Uncategorized
- Do LLMs “Know” Internally When They Follow Instructions?
- Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
- llm_judge
- Chain-of-Instructions: Compositional Instruction Tuning on Large Language Models
- Instruction Pre-Training: Language Models are Supervised Multitask Learners
- LMOps
- Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators
- alpaca_eval
- INFOBENCH: Evaluating Instruction Following Ability in Large Language Models
- InfoBench
- STRUC-BENCH: Are Large Language Models Good at Generating Complex Structured Tabular Data?
- Struc-Bench
- FOFO: A Benchmark to Evaluate LLMs’ Format-Following Capability
- FoFo
- AlignBench: Benchmarking Chinese Alignment of Large Language Models
- AlignBench
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition
- ComplexBench
- Evaluating Large Language Models at Evaluating Instruction Following
- LLMBar
- Instruction-Following Evaluation for Large Language Models
- instruction_following_eval
- FollowEval: A Multi-Dimensional Benchmark for Assessing the Instruction-Following Capability of Large Language Models
- Can Large Language Models Understand Real-World Complex Instructions?
- CELLO
- FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models
- FollowBench
- Evaluating Large Language Models on Controlled Generation Tasks
- llm-controlgen
- Self-Play with Execution Feedback: Improving Instruction-Following Capabilities of Large Language Models
- AutoIF
- LESS: Selecting Influential Data for Targeted Instruction Tuning
- LESS
- WizardLM: Empowering Large Language Models to Follow Complex Instructions
- WizardLM
Keywords
llm (7), large-language-models (6), nlp (4), instruction-following (4), llama (3), chatgpt (2), chatglm (2), rlhf (2), leaderboard (2), foundation-models (2), evaluation (2), deep-learning (2), x-prompt (2), promptist (2), prompt (2), pretraining (2), lmops (2), lm (2), language-model (2), gpt (2), multi-level (2), constraints (2), benchmark (2), qlora (2), open-source-models (2), open-source (2), open-models (2), lora (2), instruction-set (2), instruct-gpt (2), generative-ai (2), finetune (2), chinese-nlp (2), chinese-llm (2), agi (2), data (1), data-selection (1), influence (1), instruction-tuning (1), mistral (1)