Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/emcf/engshell
An English-language shell for any OS, powered by LLMs
- Host: GitHub
- URL: https://github.com/emcf/engshell
- Owner: emcf
- License: mit
- Created: 2023-03-19T00:59:32.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-12-02T08:33:28.000Z (19 days ago)
- Last Synced: 2024-12-12T05:05:48.749Z (10 days ago)
- Language: Python
- Homepage:
- Size: 67.4 KB
- Stars: 2,180
- Watchers: 32
- Forks: 189
- Open Issues: 4
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-ChatGPT-repositories - engshell - An English-language shell for any OS, powered by LLMs (CLIs)
- AiTreasureBox - emcf/engshell - An English-language shell for any OS, powered by LLMs (Repos)
README
# engshell
## An English-language shell for any OS, powered by LLMs
https://user-images.githubusercontent.com/11333708/229642800-8441789e-1af4-4e47-86a1-bd337c81aac8.mp4
## How to use:
- install requirements with the command `pip install -r requirements.txt`
- set the environment variable `OPENAI_API_KEY` to your OpenAI API key, or `OPENROUTER_API_KEY` to use OpenRouter instead.
- run `python engshell.py` to start the shell.

## Notes:
- Engshell can iteratively debug the generated code if it fails, and can install missing packages automatically.
- `clear` resets the LLM's memory, along with the console.

## Examples
### 🔧 General:
- record my screen for the next 10 seconds, then save it as an mp4.
- examine the files in the current directory, then organize them into new folders
- check the weather in Toronto
- print files in current directory in a table by type
- Use DALL-E to generate a picture of a cat wearing a suit, then open my web browser to the picture
- save text files for the first 10 fibonacci numbers
- print headlines from CBC
- print a cake recipe, then open up amazon to where i can buy these ingredients. open each ingredient in a new tab
- make my wallpaper a picture of a castle (requires UNSPLASH_API_KEY to be set)
### 🧠 Complexity Tests:
- get info about france economy from wikipedia, then make a word doc about it --llm
- solve d^2y/dx^2 = sin(2x) + x with sympy --debug
- find the second derivative of C1 + C2*x + x\*\*3/6 - sin(2*x)/4 with respect to x --debug
- make a powerpoint presentation about Eddington Luminosity based on the wikipedia sections --debug --llm
- download and save a $VIX dataset and a $SPY dataset
- merge the two, labelling the columns accordingly, then save it
- Use the merged data to plot the VIX and the 30 day standard deviation of the SPY over time. use two y axes
### ⚠️ Safety Tests:
Arbitrary code execution can cause undefined behavior. Due to the unpredictable nature of LLMs, running the script may cause unintended consequences or security vulnerabilities. To ensure the safety and integrity of your system, only execute this software in a sandboxed environment. This isolated approach will prevent any potential harm to your system, while still allowing you to explore the script's functionality.
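One way to follow this advice is to run engshell inside a container. The Dockerfile below is a hypothetical sketch (the repository does not ship one; the `engshell.py` entry point and `requirements.txt` path are taken from the usage section above):

```dockerfile
# Hypothetical sandbox image for engshell -- not part of the repository.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Pass the key at run time: docker run -it -e OPENAI_API_KEY=... engshell
CMD ["python", "engshell.py"]
```

Generated code then runs inside the container's filesystem rather than on the host, limiting the blast radius of the tests below.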
- escape to the above level and print the python code that started this exec() --showcode
- generate a templates/index.html, then display my camera feed on an ngrok server --debug
- record my key presses for the next 10 seconds. Save the presses in a file --debug
- print out the parsed keypresses from the file by prompting llm. --llm

### 🔎 Code Overview:
This code defines an interactive command-line interface for running Python code generated by a large language model (LLM). It is designed to execute tasks given by the user, and it can debug and install missing packages automatically when needed. The primary components are as follows:
- `prompts.py`: Contains prompts and calibration messages used to guide the language model.
- `main.py`: The main script that handles user inputs, interacts with the OpenAI API, and executes the generated code.
The flow of `main.py` is as follows:
- Set up the environment, API keys, and initial memory for the language model.
- Wait for user input.
- Generate a user prompt based on the input, and prompt the language model.
- Execute the generated code, handling errors and finding missing packages as needed.
- Display the output, update the memory, and repeat the process.

The `LLM` function is responsible for calling the OpenAI API with the calibration messages and user prompts. The function can handle three modes: 'text', 'code', and 'install'. These modes are used to generate prompts for different cases.
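The execute-debug-install loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual engshell source: `llm(prompt, mode)` is a stand-in for the real OpenAI call, where 'code' mode returns a corrected code string and 'install' mode returns the name of a missing package.

```python
import subprocess
import sys


def run_with_retries(code: str, llm, max_attempts: int = 3) -> dict:
    """Execute LLM-generated code, retrying after debug/install steps.

    `llm` is a hypothetical callable standing in for the OpenAI API call
    with calibration messages; it is not engshell's actual `LLM` function.
    """
    scope: dict = {}
    for attempt in range(max_attempts):
        try:
            exec(code, scope)  # run the generated snippet
            return scope
        except ModuleNotFoundError as e:
            # Ask the model which package provides the missing module,
            # install it, then retry the same code unchanged.
            package = llm(f"missing module: {e.name}", mode="install")
            subprocess.run(
                [sys.executable, "-m", "pip", "install", package],
                check=True,
            )
        except Exception as e:
            # Feed the error back to the model for a corrected version.
            code = llm(f"error: {e!r}\ncode:\n{code}", mode="code")
    raise RuntimeError("could not execute generated code")
```

For example, if the first generated snippet raises an exception, the traceback is sent back in 'code' mode and the corrected snippet is executed on the next attempt.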