Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/jackfsuia/cvx-coder
An LLM good at answering questions about and writing code for Matlab CVX (a large model fine-tuned to write CVX code and answer CVX questions).
cvx llm matlab
Last synced: 3 days ago
- Host: GitHub
- URL: https://github.com/jackfsuia/cvx-coder
- Owner: jackfsuia
- License: mit
- Created: 2024-07-05T00:35:01.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2024-09-15T14:26:50.000Z (2 months ago)
- Last Synced: 2024-09-16T15:14:54.905Z (2 months ago)
- Topics: cvx, llm, matlab
- Homepage:
- Size: 13.7 KB
- Stars: 0
- Watchers: 1
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# cvx-coder
[Hugging Face](https://huggingface.co/tim1900/cvx-coder) | [ModelScope](https://www.modelscope.cn/models/tommy1235/cvx-coder)

cvx-coder aims to improve the [Matlab CVX](https://cvxr.com/cvx) coding and question-answering abilities of LLMs. It is a [phi-3 model](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) fine-tuned on a dataset of CVX documentation, code, and [forum conversations](https://ask.cvxr.com/).
## Quick Start
For a quick test, run the following:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

m_path = "tim1900/cvx-coder"

model = AutoModelForCausalLM.from_pretrained(
    m_path,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(m_path)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Greedy decoding: with do_sample=False the output is deterministic.
generation_args = {
    "max_new_tokens": 2000,
    "return_full_text": False,
    "temperature": 0,
    "do_sample": False,
}

content = '''my problem is not convex, can i use cvx? if not, what should i do, be specific.'''
messages = [
    {"role": "user", "content": content},
]
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
For **chat mode** in the browser, run the following:
```python
import gradio as gr
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

m_path = "tim1900/cvx-coder"

model = AutoModelForCausalLM.from_pretrained(
    m_path,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(m_path)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 2000,
    "return_full_text": False,
    "temperature": 0,
    "do_sample": False,
}

def assistant_talk(message, history):
    # Rebuild the conversation in the chat-message format the pipeline expects.
    messages = []
    for user_turn, assistant_turn in history:
        messages += [
            {"role": "user", "content": user_turn},
            {"role": "assistant", "content": assistant_turn},
        ]
    messages.append({"role": "user", "content": message})
    output = pipe(messages, **generation_args)
    return output[0]['generated_text']

gr.ChatInterface(assistant_talk).launch()
```
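The history-to-messages conversion inside `assistant_talk` can be factored into a small standalone helper and sanity-checked without loading the model. This is a sketch; the helper name `history_to_messages` is ours, and it assumes gradio's tuple-style history of `(user, assistant)` pairs:

```python
def history_to_messages(history, message):
    # Convert gradio tuple-style history [(user, assistant), ...] into the
    # chat-format list consumed by the text-generation pipeline, then
    # append the newest user turn at the end.
    msgs = []
    for user_turn, assistant_turn in history:
        msgs.append({"role": "user", "content": user_turn})
        msgs.append({"role": "assistant", "content": assistant_turn})
    msgs.append({"role": "user", "content": message})
    return msgs
```

Keeping this logic separate from the gradio callback makes the message layout easy to verify before spending time on model inference.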