Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/biobootloader/wolverine
Last synced: 21 days ago
- Host: GitHub
- URL: https://github.com/biobootloader/wolverine
- Owner: biobootloader
- License: mit
- Created: 2023-03-18T22:13:19.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-03-08T07:33:01.000Z (8 months ago)
- Last Synced: 2024-10-01T14:02:22.769Z (about 1 month ago)
- Language: Python
- Size: 57.6 KB
- Stars: 5,211
- Watchers: 88
- Forks: 743
- Open Issues: 28
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- Awesome-ChatGPT - wolverine🔥 - GPT-4 edits your scripts when they crash and explains where the errors are; even if you have many errors, it reruns repeatedly until they are fixed. (Curated open-source project collection / GPT tools)
- awesome-open-gpt - wolverine - GPT-4 edits your scripts when they crash and explains where the errors are; even if you have many errors, it reruns repeatedly until they are fixed. (Curated open-source project collection / GPT tools)
- awesome-gpt-security - wolverine - Auto fix the bugs in your Python Script/Code (Tools / Fix)
- my-awesome-starred - biobootloader/wolverine - (Python)
- allinchatgpt - wolverine - A script that can automatically run and fix code (Uncategorized / Uncategorized)
- awesome-ChatGPT-repositories - wolverine - @denfromufa @openai nice tool! but i meant a tool to chat with gpt-4 in my terminal, i.e., something like (Openai)
- awesome-gpt4 - Wolverine - Run Python scripts, and when they crash, GPT-4 edits them and explains what went wrong. (Tools / Open-source projects)
- stars - biobootloader/wolverine - (Python)
- awesome-gpt4-zh-CN - Wolverine - Run Python scripts, and when they crash, GPT-4 edits them and explains what went wrong. (Prompts - [Meeting scheduling](https://github.com/stephanj/Scheduling-using-GPT4) using GPT-4. / Open-source projects)
- my-awesome - biobootloader/wolverine - star:5.2k fork:0.7k (Python)
- AiTreasureBox - biobootloader/wolverine - Automatically repair Python scripts through GPT-4 to give them regenerative abilities. (Repos)
- jimsghstars - biobootloader/wolverine - (Python)
README
# DEPRECATED: Try Mentat instead! https://github.com/AbanteAI/mentat
# Wolverine
## About
Give your python scripts regenerative healing abilities!
Run your scripts with Wolverine, and when they crash, GPT-4 edits them and explains what went wrong. Even if you have many bugs, it will repeatedly rerun the script until it's fixed.
For a quick demonstration see my [demo video on twitter](https://twitter.com/bio_bootloader/status/1636880208304431104).
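At a high level the loop is: run the target script, capture the traceback when it crashes, send the source plus traceback to GPT-4, apply the suggested changes, and run again. The snippet below is only a minimal illustrative sketch of that loop, not the actual Wolverine source; it assumes the `openai` Python package (v1+ client) and, unlike Wolverine's structured edit format, naively asks the model to return a complete corrected file.

```python
# Minimal sketch of a "run, crash, ask GPT-4, patch, retry" loop.
# NOT the real Wolverine implementation: assumes openai>=1.0 and asks the
# model for the entire corrected file instead of structured edits.
import subprocess
import sys
from pathlib import Path

from openai import OpenAI  # reads OPENAI_API_KEY from the environment

client = OpenAI()

def run_script(path, args):
    """Run the script and return (exit code, captured stderr)."""
    proc = subprocess.run([sys.executable, path, *args],
                          capture_output=True, text=True)
    return proc.returncode, proc.stderr

def heal(path, args, model="gpt-4", max_attempts=10):
    for attempt in range(1, max_attempts + 1):
        code, stderr = run_script(path, args)
        if code == 0:
            print(f"Succeeded after {attempt} attempt(s).")
            return
        print(f"Attempt {attempt} crashed:\n{stderr}")
        prompt = (
            "This Python script crashed. Return the full corrected file "
            "and nothing else.\n\n"
            f"### Script\n{Path(path).read_text()}\n\n### Traceback\n{stderr}"
        )
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        # Overwrite the script with the model's suggested fix and retry.
        Path(path).write_text(reply.choices[0].message.content)

# Example: heal("examples/buggy_script.py", ["subtract", "20", "3"])
```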
## Setup
```
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
cp .env.sample .env
```
Add your OpenAI API key to `.env`.
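For example, `.env` might contain (the key below is a placeholder, not a real key):

```
OPENAI_API_KEY=sk-your-key-here
```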
_Warning!_ By default Wolverine uses GPT-4 and may make many repeated calls to the API.
## Example Usage
To run with gpt-4 (the default, tested option):
```
python -m wolverine examples/buggy_script.py "subtract" 20 3
```
You can also run with other models, but be warned they may not adhere to the edit format as well:
```
python -m wolverine --model=gpt-3.5-turbo examples/buggy_script.py "subtract" 20 3
```
If you want to use GPT-3.5 by default instead of GPT-4, uncomment the default model line in `.env`:
```
DEFAULT_MODEL=gpt-3.5-turbo
```
You can also pass the `--confirm=True` flag, which asks you for a yes/no confirmation before changes are written to the file; without the flag, changes are applied automatically:
```
python -m wolverine examples/buggy_script.py "subtract" 20 3 --confirm=True
```
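The contents of `examples/buggy_script.py` are not reproduced here; a hypothetical script in the same spirit, taking an operation name and two numbers and containing a planted bug for GPT-4 to repair, might look like:

```python
# Hypothetical stand-in for examples/buggy_script.py (the real file may differ).
import sys

def subtract(a, b):
    return a - c  # planted bug: NameError, 'c' is not defined

if __name__ == "__main__":
    op, x, y = sys.argv[1], int(sys.argv[2]), int(sys.argv[3])
    if op == "subtract":
        print(subtract(x, y))
```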
## Environment variables
| env name | description | default value |
| ------------------- | ----------------------------------------------------------------- | ------------- |
| OPENAI_API_KEY | OpenAI API key | None |
| DEFAULT_MODEL | GPT model to use | "gpt-4" |
| VALIDATE_JSON_RETRY | Number of retries when requesting the OpenAI API (-1 means unlimited) | -1 |
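As a concrete (hypothetical) example, a `.env` that switches the default model and caps retries might look like:

```
OPENAI_API_KEY=sk-your-key-here
DEFAULT_MODEL=gpt-3.5-turbo
VALIDATE_JSON_RETRY=3
```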
## Future Plans

This is just a quick prototype I threw together in a few hours. There are many possible extensions and contributions are welcome:
- add flags to customize usage, such as asking for user confirmation before running changed code
- further iterations on the edit format that GPT responds in. Currently it struggles a bit with indentation, but I'm sure that can be improved
- a suite of example buggy files that we can test prompts on to ensure reliability and measure improvement
- multiple files / codebases: send GPT everything that appears in the stacktrace
- graceful handling of large files - should we just send GPT relevant classes / functions?
- extension to languages other than Python

## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=biobootloader/wolverine&type=Date)](https://star-history.com/#biobootloader/wolverine)