# KORani
- KORani: Large Language Models for 🇰🇷 Korean and 🇺🇸 English using LLaMA 13B and Polyglot 12.8B.
- We tested which LLM is effective for 🇰🇷 Korean tasks after finetuning.
- 🤗 You can download the weights from the [Link](https://huggingface.co/KRAFTON).

## Release
This repository contains inference code for KORani models that are based on [LLaMA 13B](https://arxiv.org/abs/2302.13971v1) and [Polyglot 12.8B](https://huggingface.co/EleutherAI/polyglot-ko-12.8b).
KORani models are finetuned using the [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/tree/main) & [KoVicuna](https://huggingface.co/datasets/junelee/sharegpt_deepl_ko) datasets. This work is heavily influenced by the [Vicuna](https://github.com/lm-sys/FastChat) project.

### Models
We offer three types of models, as follows.

| Model | Base | Train dataset | Huggingface Link |
| --- | ---: | ---: | ---: |
| 1️⃣ KORani-v1-13B | Polyglot 12.8B | KoVicuna dataset | [Link 1](https://huggingface.co/KRAFTON/KORani-v1-13B) |
| 2️⃣ KORani-v2-13B | LLaMA 13B | KoVicuna dataset | [Link 2](https://huggingface.co/KRAFTON/KORani-v2-13B) |
| 3️⃣ KORani-v3-13B | LLaMA 13B | ShareGPT & KoVicuna dataset | [Link 3](https://huggingface.co/KRAFTON/KORani-v3-13B) |

### Notes
* We used LLaMA 13B from [here](https://huggingface.co/decapoda-research/llama-13b-hf).
* From [KoVicuna](https://huggingface.co/datasets/junelee/sharegpt_deepl_ko) we extracted only the first two turns of each conversation, the 'human' and 'GPT' parts (see the sketch after this list).
* The model finetuning was conducted on eight A100 40GB GPUs. The training code is based on [FastChat](https://github.com/lm-sys/FastChat).
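A minimal sketch of that first-two-turns filtering, assuming the common ShareGPT JSON schema (a `conversations` list with `from`/`value` fields; the file name is a placeholder):

```python
# Keep only the first human/GPT exchange from each ShareGPT-style sample.
# The schema ("conversations", "from", "value") is an assumption, not confirmed.
import json

def first_exchange(sample: dict) -> dict:
    kept = [t for t in sample["conversations"][:2] if t["from"] in ("human", "gpt", "GPT")]
    return {**sample, "conversations": kept}

with open("sharegpt_deepl_ko.json") as f:  # placeholder file name
    data = json.load(f)

filtered = [first_exchange(s) for s in data]
```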
## Local Setup

1. Install dependencies
```bash
pip install -r requirements.txt
```

## How to use
1. Prepare your prompt at `prompts/{task_name}.txt`
2. Run `inference.py`
```bash
python inference.py --model_path MODEL_NAME --task TASK_NAME
```
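If you prefer to call a model directly instead of going through `inference.py`, a minimal sketch using the Hugging Face `transformers` API looks like this (generation parameters are illustrative assumptions, not the repo's defaults; it assumes the prompt files use the `{context}`/`{question}` placeholders shown below):

```python
# Minimal sketch: load a KORani checkpoint and generate from a prompt file.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "KRAFTON/KORani-v3-13B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# Fill the task template from prompts/ with your own inputs.
prompt = open("prompts/QA.txt").read().format(context="...", question="...")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```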
### Command
`--model_path` (str): the model path for evaluation (e.g. `KRAFTON/KORani-v3-13B`).
`--task` (str): the task to evaluate; only [QA, summarization, translation] are available in this repo.
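Internally, these flags presumably map to an argument parser along these lines (a sketch; the actual `inference.py` may differ):

```python
# Sketch of the CLI surface described above; the real inference.py may differ.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--model_path", type=str, required=True,
                    help="Model path for evaluation, e.g. KRAFTON/KORani-v3-13B")
parser.add_argument("--task", type=str, required=True,
                    choices=["QA", "summarization", "translation"],
                    help="Which task prompt under prompts/ to use")
args = parser.parse_args()
```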
## Examples

The evaluation scores shown in the tables below were obtained with GPT-4; see the [AutoEvalGPT](https://github.com/krafton-ai/AutoEvalGPT) repository for how they are computed.

### 1. Question Answering (QA)
```bash
python inference.py --model_path "KRAFTON/KORani-v3-13B" --task "QA"
```

This is the prompt for the QA task; in English it reads roughly: "We have the following information. ... According to the given information, please answer the question." You can modify it in [QA.txt](prompts/QA.txt).
```python
PROMPT = """μ°λ¦¬λ μλμ κ°μ μ 보λ₯Ό κ°κ³ μμ΅λλ€.
---------------------
{context}
---------------------
### μ£Όμ΄μ§ μ 보μ λ°λΌ, μ§λ¬Έμ λ΅ν΄μ£ΌμΈμ.: '{question}'
### Assistant:"""
```

For example,
```python
# Korean Wikipedia passage about Henry Gustav Molaison, the amnesia patient H.M.
context = "헨리 구스타프 몰래슨(영어: Henry Gustav Molaison, 1926년 2월 26일 ~ 2008년 12월 2일)은 뇌전증을 치료하기 위해서 수술적으로 해마를 포함한 내측측두엽이 제거된 미국의 기억장애 환자 H.M으로 전부터 이미 알려져 있었다. 그는 1957년 말부터 그가 죽을 때까지 그는 광범위하게 연구가 되었다. 그의 경우는 뇌 기능과 기억 사이의 링크를 설명하는 이론 발달과 뇌인지심리학의 발달, 뇌의 구조와 기능이 특정 심리적 과정과 관한 이해를 목표로 하는 심리학의 한 지점에 매우 중요한 역할을 하였다. 그가 죽기 전에, 그는 코네티컷 주 윈저 록스에 위치한 관리기관에 살면서 지속적으로 조사가 되었다. 그의 뇌는 지금 2009년 12월 4일에 조직학적으로 슬라이스되어 미국 샌디에이고에 있다."
# "Where is Henry Gustav's brain currently located?"
question = "헨리 구스타프의 뇌는 현재 어디에 위치해 있는가?"
```
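The string sent to the model is simply the template with these two variables substituted:

```python
# Substitute the passage and question into the QA template defined above.
final_prompt = PROMPT.format(context=context, question=question)
print(final_prompt)  # ends with "### Assistant:", which the model completes
```

The table below compares each model's answer for this example.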
| Model | Score (averaged over 5 examples) | Output Example |
| --- | :---: | ---: |
| GPT-4 | - | 헨리 구스타프의 뇌는 현재 미국 샌디에이고에 위치해 있다. |
| GPT-3.5-turbo | 7.1 | 헨리 구스타프의 뇌는 지금 2009년 12월 4일에 조직학적으로 슬라이스되어 미국 샌디에이고에 있다. |
| Vicuna-13B | 7.2 | 헨리 구스타프의 뇌는 2009년 12월 4일에 조직학적으로 슬라이스되어 미국 샌디에이고에 있습니다. |
| KoAlpaca-13B | 2.6 | Henry Gustav Molaison의 뇌는 2008년에 사망했습니다. 그의 뇌는 코네티컷 주 윈저 록스에 위치한 기관에 보관되어 있으며, 현재도 계속 조사중입니다. |
| KORani-v1 | 6.6 | 헨리 구스타프 몰레이슨의 뇌는 현재 미국 샌디에이고에 있습니다. 그의 뇌는 1957년 수술로 제거된 그의 내측 측두엽의 자리를 확인하기 위해 조직학적으로 절단되었습니다. 이 연구는 뇌의 구조와 기능이 특정 심리적 과정과 관련된 이해를 목표로 하는 심리학의 한 분야인 인지 신경과학에 대한 그의 공헌을 기리기 위한 것입니다. |
| KORani-v2 | 8.0 | 헨리 구스타프의 뇌는 현재 캘리포니아 샌디에이고에 있는 샌디에이고 대학교 웨스트 캠퍼스의 신경과학 실험실에 보관되어 있습니다. |
| KORani-v3 | 6.4 | 헨리 구스타프의 뇌는 현재 코네티컷 주 윈저 록스에 있는 관리 기관에 있습니다. |

### 2. Translation
```bash
python inference.py --model_path "KRAFTON/KORani-v3-13B" --task "translation"
```

This is the prompt for translation. You can modify it in [translation.txt](prompts/translation.txt).
```python
PROMPT = """### Instruction: Translate English sentence into Korean. You may leave specific names as they are.
English: μΌλ§λ λ§μ μΈμ΄λ₯Ό λ§ν μ μλ?
Korean: How many languages can you speak?#
English: μΌ λ€ λλ¬μ΄?
Korean: Did you finish your work?#
English: {source_sentence}
Korean:"""
```
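Note that each few-shot pair in the template ends with `#`. When decoding the model's continuation, you would therefore cut at the first `#` to keep only the translation; a minimal sketch (`generated` is a hypothetical raw model output):

```python
# Hypothetical raw continuation the model might append after "Korean:".
generated = "안녕하세요! 이 요청을 살펴봐 주실 수 있나요?#\nEnglish: ..."
# Few-shot pairs are delimited by '#', so keep only the text before it.
translation = generated.split("#")[0].strip()
print(translation)
```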
For example,
```python
source_sentence = "Hey! I have some Kissflow Legal Review requests that are blocked by Tax and Accounting, can this be looked at?"
```

| Model | Score (averaged over 5 examples) | Output Example |
| --- | :---: | ---: |
| GPT-4 | - | 헤이! 제가 Tax and Accounting에 의해 차단된 몇 가지 Kissflow 법률 검토 요청이 있는데, 이것을 확인해 주실 수 있나요? |
| DeepL | 9.4 | 안녕하세요! 세무 및 회계에서 차단된 Kissflow 법률 검토 요청이 몇 개 있는데요, 이 요청을 살펴볼 수 있나요? |
| GPT-3.5-turbo | 8.6 | 안녕하세요! 세무 및 회계 부서에서 차단된 몇 가지 Kissflow Legal Review 요청이 있습니다. 확인해 주실 수 있나요? |
| Vicuna-13B | 3.8 | 안녕하세요! 세금계산과 회계부서가 차단해 있는 Kissflow Legal Review 요청이 몇 개가 있습니까? 이것을 살펴보시겠습니까? |
| KoAlpaca-13B | 5.6 | 야! 세금과 회계에 막힌 키싱플로우 법적 검토 요청이 있는데, 이거 검토해줄 수 있어? |
| KORani-v1 | 7.5 | 안녕하세요! 세금과 회계로 인해 막혀 있는 키스플로우 법률 검토 요청이 몇 개 있는데, 검토해 주실 수 있나요? |
| KORani-v2 | 5.4 | 안녕하세요! 제가 Kissflow Legal Review 요청을 목격했는데, 세무 및 회계 부서에서 차단하고 있는데 이 문제를 조사해 주시겠어요? |
| KORani-v3 | 7.1 | 안녕하세요! 저는 Kissflow Legal Review 요청이 세금과 회계에 의해 차단되고 있는데, 이 문제가 살펴볼 수 있을까요? |

### 3. Summarization
```bash
python inference.py --model_path "KRAFTON/KORani-v3-13B" --task "summarization"
```

This is the prompt for summarization. You can modify it in [summarization.txt](prompts/summarization.txt). Keep in mind that the filled-in prompt must not exceed the maximum length of 2048 tokens (a length-check sketch follows the prompt below).
```python
PROMPT = """# Meeting note
{target_document}# Summarize the meeting note into 3 Korean sentences.
### Output: 1)"""
```
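Because documents can be long, it is worth checking the token count of the filled-in prompt before running inference. A minimal sketch, assuming the model's own tokenizer and the 2048-token limit noted above:

```python
# Sketch: guard against exceeding the 2048-token context limit.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KRAFTON/KORani-v3-13B")

def check_prompt_length(prompt: str, max_tokens: int = 2048) -> None:
    n_tokens = len(tokenizer(prompt)["input_ids"])
    if n_tokens > max_tokens:
        raise ValueError(f"Prompt is {n_tokens} tokens; the maximum is {max_tokens}.")
```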
For example,
```python
target_document = """# Document
μ λ λ λλΉ 79λͺ λμ΄ 1019λͺ , νμ μμ λμ² κ΄μμμ 곡무μ μ μμ΄ ν¬κ² λμ΄λ νμ μλΉμ€ ν₯μμ΄ κΈ°λλλ€.
μλ νμ μμ λΆμμ λ°νν 2018λ λ μμΉλ¨μ²΄ κΈ°μ€μΈκ±΄λΉμμ κ΄μμμ μΌλ°μ§ μ μμ΄ μ§λν΄λ³΄λ€ 79λͺ μ΄ λμ΄λ 1019λͺ μΌλ‘ μ°μ λλ€κ³ λ°νλ€.
μ§λ 1995λ λλν΅ν© λΉμ 991λͺ μ΄μλ κ΄μμ 곡무μ μ μμ IMFμμ μ νΉλ ν ꡬ쑰쑰μ μ κ±°μΉλ©΄μ 2002λ μλ 788λͺ μΌλ‘ 200μ¬λͺ μ΄ κ°κΉμ΄ μ€μ΄λ€μμΌλ μ΄λ² μ μ νλ³΄λ‘ κ³΅λ¬΄μ μ μ 1000λͺ μλλ₯Ό λ§κ² λλ€.
κ·Έλμ κ΄μμλ ν¬μ€μ½λ₯Ό μ€μ¬μΌλ‘ ν μ°μ λ¨μ§μ 컨ν μ΄λλΆλ, κ²½μ μμ ꡬμ, νμ§ κ°λ°, λ€μν λ³΅μ§ μ μ± λ± μλ―Όμ μΆμ μ§ ν₯μμ μν νμ μμκ° λ λ‘ μ¦ννλ λ°μ λΉν΄ νμ λ 곡무μ μ μμΌλ‘ λ§μ μ΄λ €μμ κ²ͺμ΄ μμλ€.
μμ μ΄λ² μ μ μΆ©μμ μ§κΈκΉμ§ 격무μ μλ¬λ €μ¨ 곡무μλ€μκ² λμ± μ΄μ¬ν μΌν μ μλλ‘ νλ ₯μ λΆμ΄λ£μΌλ©΄μ μ§μκ³Ό λλΌ λ°μ μ λ·λ°μΉ¨ νλ κ²½μ λμλ‘μμ μμμ λλμ΄λ μλμ§ ν¨κ³Όλ‘ μ΄μ΄μ§ κ²μΌλ‘ κΈ°λλλ€.
ννΈ, κ΄μμλ νμ기ꡬμΈβκΈ°μ μ μΉμΆμ§λ¨βμ΄ 2017λ μ°λ§λ‘ νμ§λ¨μ λ°λΌ μ λΌλ¨λλ‘λΆν° 4κΈ μμ€κΈ°κ΅¬μΈβμ°λ¨λ Ήμ§κ΄λ¦¬μΌν°βλ₯Ό μΉμΈ λ°μ μ¬ν΄ 1μλΆν° μ΄μνκ² λ¨μΌλ‘μ¨ λμ κ°λ°μ νμΈ΅ νλ ₯μ λ°μ κ²μΌλ‘ μ λ§νκ³ μλ€.
μ νκΈ° μ΄λ¬΄κ³Όμ₯μβμ°ΈμΌλ‘ μ΄λ ΅κ² κ°μ Έμ¨ 곡무μ μ μκ³Ό 4κΈ μμ€κΈ°κ΅¬λ 격무μ μλ¬λ €μ¨ μ§μλ€μ μ¨ν΅μ νΈμ΄κ² ν΄ 15λ§ μλ―Όμ μν νμ μλΉμ€ μ§μ ν₯μμν¬ κ²μΌλ‘ κΈ°λλλ€βλ©°βμ΄λ¬ μμ μ μ μ‘°λ‘λ₯Ό κ°μ νκ³ , μ§λ¬΄λΆμμ ν΅ν΄ λΆμλ³ μ μμ μ‘°μ ν κ³νμ΄λ€βκ³ λ§νλ€.νκ΅μ΄ 3λ¬Έμ₯μΌλ‘ μμ Documentλ₯Ό μμ½ν΄μ€.
### Assistant: 1)"""
```

| Model | Score (averaged over 5 examples) | Output Example |
| --- | :---: | ---: |
| GPT-4 | - | 1) 광양시의 공무원 정원이 전년도 대비 79명 늘어난 1019명으로 증가했다. 2) 이로 인해 행정서비스 향상이 기대되며, 지역과 나라 발전을 뒷받침하는 경제도시로서의 위상이 상승할 것으로 전망된다. 3) 한편, 광양시는 전라남도로부터 4급 상설기구인 '산단녹지관리센터'를 승인받아 도시 개발에 탄력을 받을 것으로 예상된다. |
| GPT-3.5-turbo | 8.5 | 1) 광양시 공무원 정원이 79명 늘어 1019명으로 늘어나 행정서비스 향상이 기대된다고 밝혔다. 2) 광양시는 이번 정원 충원으로 지역과 나라 발전을 뒷받침 하는 경제도시 위상을 드높일 것으로 기대된다. 3) 정원 조례를 개정하고, 직무분석을 통해 부서별 정원을 조정할 계획이다. |
| Vicuna-13B | 7.4 | 1) 2018년도 광양시의 공무원 정원 1019명, 지난해 79명 증가. 2) 이번 정원 충원은 지금까지 격무에 시달려온 공무원들에게 더욱 열심히 일할 수 있도록 활력을 불어넣으면서 지역과 나라 발전을 뒷받침 하는 경제도시로서의 위상을 드높이는 시너지 효과로 이어질 것으로 기대된다. 3) 광양시는 전라남도로부터 4급 상설기구인 '산단녹지관리센터'를 승인 받아 운영하게 됨으로써 도시 개발에 한층 탄력을 받을 것으로 전망하고 있다. |
| KoAlpaca-13B | 5.7 | 1) 광양시는 예년보다 79명 늘어난 1019명의 공무원을 둘 수 있게 되었습니다. 2) 이는 행정안전부의 지침에 따른 것입니다. 3) 이로써 광양시는 한정된 예산으로 인해 발생하는 공무원들의 초과 근무 수당 지급 등의 문제를 해결할 수 있게 되었습니다. |
| KORani-v1 | 7.0 | 1) 광양시는 예년보다 79명이 늘어난 1019명의 일반직 공무원 정원을 행자부로부터 승인받았습니다. 2) 이는 광양시의 행정수요 증가를 수용하기 위한 것입니다. 3) 이 정원 증가는 광양시 공무원들에게 더 많은 기회를 제공할 것입니다. |
| KORani-v2 | 6.2 | 1) 광양시의 공무원 정원은 이번 정원 확보로 1000명 주당 공무원 정원을 달성했습니다. 2) 이러한 충원은 지금까지 격무에 시달려온 공무원들에게 더욱 열심히 일할 수 있도록 활력을 불어넣으면서 지역사회와 국가의 발전을 지원하는 시너지 효과로 이어질 것으로 기대됩니다. 3) 광양시는 '기업유치추진단'을 폐지하고 '산단녹지관리센터'를 승인했으며, 이는 도시 개발에 한층 탄력을 받을 것으로 기대됩니다. |
| KORani-v3 | 6.9 | 1) 광양시의 공무원 정원은 지난해보다 79명이 늘어난 1019명으로 산정되었습니다. 2) 이러한 충원은 지역사회와 국가 발전을 지원하는 행정서비스의 향상으로 이어질 것으로 기대됩니다. 3) 시는 '이달 내에 정원 조례를 개정하고 직무분석을 통해 부서별로 정원을 조정할 계획'이며 행정서비스 향상을 위해 노력할 것이라고 밝혔습니다. |

## Evaluation
We tested model performance using GPT-4; the code and results of the test can be found in the [AutoEvalGPT](https://github.com/krafton-ai/AutoEvalGPT) repository.
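For intuition, GPT-4-as-judge scoring can be sketched roughly as follows; this is a generic illustration, not AutoEvalGPT's actual interface, and the rubric prompt is a placeholder:

```python
# Generic sketch of GPT-4-as-judge scoring; not AutoEvalGPT's real API.
from openai import OpenAI

client = OpenAI()

def judge(question: str, reference: str, candidate: str) -> str:
    rubric = (
        "Score the candidate answer from 1 to 10 against the reference answer.\n"
        f"Question: {question}\nReference: {reference}\nCandidate: {candidate}\nScore:"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": rubric}],
    )
    return response.choices[0].message.content
```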
## Limitations

The Korean performance of our models is not as good as the English performance of [Vicuna](https://github.com/lm-sys/FastChat). We believe this is due to the lower quality of the available foundation models on Korean tasks (compared to LLaMA on English tasks) and of the dataset, which is primarily translated text. We will continue to release new versions of the KORani models as we achieve better results.

## License
Our GitHub repo and models are intended for research and non-commercial use only, subject to the model license of LLaMA, the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and the [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violations.
The code is released under the Apache License 2.0.