# shellChatGPT
Shell wrapper for OpenAI's ChatGPT, DALL-E, Whisper, and TTS. Features LocalAI, Ollama, Gemini, Mistral, Groq, and GitHub Models integration.

![Showing off Chat Completions](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/chat_cpls.gif)
Chat completions with streaming enabled by default.
![Chat with Markdown rendering](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/chat_cpls_md.gif)
Markdown rendering of chat response (_optional_).
![Plain Text Completions](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/text_cpls.gif)
In pure text completions, start by typing some text to be completed, such as news, stories, or poems.
![Insert Text Completions](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/text_insert.gif)
Add the insert tag `[insert]` where the text should be completed.
Mistral `code` models work well with the insert / fill-in-the-middle (FIM) mode!
If no suffix is provided, it works as plain text completions.

## Index
- 1. [Index](#index)
- 2. [Features](#-features)
- 3. [Getting Started](#-getting-started)
- 3.1 [Required Packages](#-required-packages)
- 3.2 [Optional Packages](#optional-packages)
- 3.3 [Installation](#-installation)
- 3.4 [Usage Examples](#-usage-examples-)
- 3.5 [Native Chat Completions](#-native-chat-completions)
- 3.5.1 [Vision and Multimodal Models](#vision-and-multimodal-models)
- 3.5.2 [Text, PDF, Doc, and URL Dumps](#text-pdf-doc-and-url-dumps)
- 3.5.3 [File Picker and Shell Dump](#file-picker-and-shell-dump)
- 3.5.4 [Voice In and Out + Chat Completions](#voice-in-and-out-chat-completions)
- 3.6 [Chat Mode of Text Completions](#chat-mode-of-text-completions)
- 3.7 [Text Completions](#-text-completions)
- 3.7.1 [Insert Mode of Text Completions](#insert-mode-of-text-completions)
- 4. [Markdown](#markdown)
- 5. [Prompts](#-prompts)
- 5.1 [Custom Prompts](#-custom-prompts)
- 5.2 [Awesome Prompts](#-awesome-prompts)
- 6. [Shell Completion](#shell-completion)
- 6.1 [Bash](#bash)
- 6.2 [Zsh](#zsh)
- 6.3 [Troubleshoot](#troubleshoot-shell)
- 7. [Notes and Tips](#-notes-and-tips)
- 8. [More Script Modes](#more-script-modes)
- 8.1 [Image Generations](#-image-generations)
- 8.2 [Image Variations](#image-variations)
- 8.3 [Image Edits](#image-edits)
- 8.3.1 [Outpaint - Canvas Extension](#outpaint---canvas-extension)
- 8.3.2 [Inpaint - Fill in the Gaps](#inpaint---fill-in-the-gaps)
- 8.4 [Audio Transcriptions / Translations](#-audio-transcriptions--translations)
- 9. [Service Providers](#service-providers)
- 9.1 [LocalAI](#localai)
- 9.1.1 [LocalAI Server](#localai-server)
- 9.1.2 [Tips](#tips)
- 9.1.3 [Running the shell wrapper](#running-the-shell-wrapper)
- 9.1.4 [Installing Models](#installing-models)
- 9.1.5 [Host API Configuration](#host-api-configuration)
- 9.2 [Ollama](#ollama)
- 9.3 [Google AI](#google-ai)
- 9.4 [Mistral AI](#mistral-ai)
- 9.5 [Groq](#groq)
- 9.6 [Anthropic](#anthropic)
- 9.7 [GitHub](#github)
- 10. [Arch Linux Users](#arch-linux-users)
- 11. [Termux Users](#termux-users)
- 11.1 [Dependencies](#dependencies-termux)
- 11.2 [TTS Chat - Removal of Markdown](#tts-chat---removal-of-markdown)
- 11.3 [Tiktoken](#tiktoken)
- 11.4 [Troubleshoot](#troubleshoot-termux)
- 12. [Project Objectives](#-project-objectives)
- 13. [Limitations](#%EF%B8%8F-limitations)
- 14. [Bug report](#bug-report)
- 15. [Help Pages](#-help-pages)
- 16. [Contributors](#-contributors)
- 17. [Acknowledgements](#acknowledgements)

## 🚀 Features
- Text and chat completions, support for [vision](#vision-and-multimodal-models) and **reasoning models**
- **Text editor interface**, _Bash readline_, and _multiline/cat_ modes
- [**Markdown rendering**](#markdown) support in response
- **Preview** and [**regenerate responses**](#-notes-and-tips)
- **Manage sessions**, _print out_ previous sessions
- [Instruction prompt manager](#%EF%B8%8F--custom-prompts),
easily create and set the initial system prompt
- **Voice in** (Whisper) plus **voice out** (TTS) [_chat mode_](#voice-in-and-out--chat-completions) (`options -cczw`)
- Integration with [LocalAI](#localai), [Ollama](#ollama),
[Google AI](#google-ai), [Mistral AI](#mistral-ai), [Groq](#groq), [Anthropic](#anthropic), and [GitHub Models](#github)
- Support for [awesome-chatgpt-prompts](#-awesome-prompts) and
[Chinese awesome-chatgpt-prompts-zh](https://github.com/PlexPt/awesome-chatgpt-prompts-zh)
- [Command line completion](#shell-completion) and [file picker](#file-picker-and-shell-dump) dialogs for a smoother experience 💻
- Colour scheme personalisation 🎨 and a configuration file
- Stdin and text file input support
- Should™ work on Linux, FreeBSD, MacOS, and [Termux](#termux-users)
- **Fast** shell code for a responsive experience! ⚡️
## ✨ Getting Started
### ✔️ Required Packages
- `Bash`
- `cURL` and `JQ`

### Optional Packages
Packages required for specific features.
- `Base64` - Image endpoint, multimodal models
- `Python` - Modules tiktoken, markdown, bs4
- `ImageMagick`/`fbida` - Image edits and variations
- `SoX`/`Arecord`/`FFmpeg` - Record input (Whisper)
- `mpv`/`SoX`/`Vlc`/`FFplay`/`afplay` - Play TTS output
- `xdg-open`/`open`/`xsel`/`xclip`/`pbcopy` - Open images, set clipboard
- `W3M`/`Lynx`/`ELinks`/`Links` - Dump URL text
- `bat`/`Pygmentize`/`Glow`/`mdcat`/`mdless` - Markdown support
- `termux-api`/`termux-tools`/`play-audio` - Termux system
- `poppler`/`gs`/`abiword`/`ebook-convert`/`LibreOffice` - Dump PDF or Doc as text
- `dialog`/`kdialog`/`zenity`/`osascript`/`termux-dialog` - File picker

### 💾 Installation
**A.** Download the stand-alone
[`chatgpt.sh` script](https://gitlab.com/fenixdragao/shellchatgpt/-/raw/main/chatgpt.sh)
and make it executable:

wget https://gitlab.com/fenixdragao/shellchatgpt/-/raw/main/chatgpt.sh
chmod +x ./chatgpt.sh
**B.** Or clone this repo:
git clone https://gitlab.com/fenixdragao/shellchatgpt.git
**C.** Optionally, download and set the configuration file
[`~/.chatgpt.conf`](https://gitlab.com/fenixdragao/shellchatgpt/-/raw/main/.chatgpt.conf):

#save configuration template:
chatgpt.sh -FF >> ~/.chatgpt.conf

#edit:
chatgpt.sh -F
# Or
vim ~/.chatgpt.conf

### 🔥 Usage Examples 🔥
![Chat cmpls with prompt confirmation](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/chat_cpls_verb.gif)
### 💬 Native Chat Completions
With command line `options -cc`, some properties are set automatically to create a chat bot.
Start a new session in chat mode, and set a different temperature (*gpt-3.5 and gpt-4+ models*):

chatgpt.sh -cc -t0.7
Create **Marv, the sarcastic bot** manually:
chatgpt.sh -60 -cc --frequency-penalty=0.5 --temp=0.5 --top_p=0.3 --restart-seq='\nYou: ' --start-seq='\nMarv:' --stop='You:' --stop='Marv:' -S'Marv is a factual chatbot that reluctantly answers questions with sarcastic responses.'
Load the *unix instruction* file ("unix.pr") for a new session.
The command line syntaxes below are all aliases:

chatgpt.sh -cc .unix
chatgpt.sh -cc.unix
chatgpt.sh -cc -.unix
chatgpt.sh -cc -S .unix

To change only the history file in which the session will be recorded,
set the first positional argument with the operator forward slash "`/`"
and the name of the history file (this defaults to the `/session` command).
chatgpt.sh -cc /test
chatgpt.sh -cc /stest
chatgpt.sh -cc "/session test"
Load an older session from the current (default) history file:
chatgpt.sh -cc /sub
chatgpt.sh -cc /.
chatgpt.sh -cc /fork.
chatgpt.sh -cc "/fork current"
In chat mode, simply run `!sub` or the equivalent command `!fork current`.
To load an older session from a history file other than the default,
there are some options. Change to it with the command `!session [name]`.
To copy a previous session, run `/sub` or `/grep [regex]` to load that
session and resume from it.

Print out the last session, optionally setting the history name:
chatgpt.sh -P
chatgpt.sh -P /test
#### Vision and Multimodal Models
To send an `image` / `url` to vision models, start the script and then
set the image with the `!img` chat command, with one or more filepaths / URLs:

chatgpt.sh -cc -m gpt-4-vision-preview '!img path/to/image.jpg'
Alternatively, set the image paths / URLs at the end of the prompt:
chatgpt.sh -cc -m gpt-4-vision-preview
[...]
Q: In this first user prompt, what can you see? https://i.imgur.com/wpXKyRo.jpeg

**TIP:** Run chat command `!info` to check model configuration!
**DEBUG:** Set `option -V` to see the raw JSON request body.
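Vision-style chat APIs commonly receive local images as base64-encoded `data:` URLs. As an illustration of what happens under the hood (a hypothetical helper sketch, not a function of `chatgpt.sh` itself), a local file can be converted like this:

```shell
# Hypothetical helper: encode a local image as a data URL, the form in which
# vision APIs commonly receive local files (not part of chatgpt.sh itself).
img_to_data_url() {
  local file=$1 mime
  case $file in
    *.png)        mime=image/png ;;
    *.jpg|*.jpeg) mime=image/jpeg ;;
    *)            mime=application/octet-stream ;;
  esac
  printf 'data:%s;base64,%s' "$mime" "$(base64 < "$file" | tr -d '\n')"
}
```

Remote images, by contrast, are usually passed through to the API as plain URLs.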
#### Text, PDF, Doc, and URL Dumps
For an easy workflow, the user may add a filepath or URL to the end
of the prompt. The file is then read and its text content appended to the user prompt.
This is a basic text feature that works with any model.

chatgpt.sh -cc
[...]
Q: What is this page: https://example.com

Q: Help me study this paper. ~/Downloads/Prigogine\ Perspective\ on\ Nature.pdf
In the **second example**, the _PDF_ will be dumped as text.
For PDF text dump support, `poppler/abiword` is required.
For _doc_ and _odt_ files, `LibreOffice` is required.
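When scripting prompts, an escaped path can be produced with bash's `printf '%q'` instead of escaping it by hand:

```shell
# printf %q (bash builtin) emits a backslash-escaped form of its argument,
# safe to append to a prompt that is later split on whitespace.
path='My Papers/Prigogine Perspective on Nature.pdf'
printf '%q\n' "$path"
# -> My\ Papers/Prigogine\ Perspective\ on\ Nature.pdf
```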
See the [Optional Packages](#optional-packages) section.

Also note that file paths containing white spaces must be
**backslash-escaped**.

#### File Picker and Shell Dump
The `/pick` command opens a file picker (usually a command-line
file manager). The selected file's path will be appended to the
current prompt in editing mode.

The `/pick` and `/sh` commands may be run when typed at the end of
the current prompt, such as `[PROMPT] /sh`, which opens a new
shell instance to execute commands interactively. The output of these
commands is appended to the current prompt.

When the `/pick` command is run at the end of the prompt, the selected
file path is appended instead.

_File paths_ that contain white spaces need backslash-escaping
in some functions.

#### Voice In and Out + Chat Completions
🗣️ Chat completion with audio in and out (Whisper plus TTS):
chatgpt.sh -ccwz
Chat in Portuguese with Whisper and set _onyx_ as the TTS voice:
chatgpt.sh -ccwz -- pt -- onyx
**Chat mode** provides a conversational experience,
prompting the user to confirm each step.

For a more automated execution, set `option -v`,
or `-vv` for a hands-free experience (_live chat_ with silence detection):

chatgpt.sh -cc -w -z -v
chatgpt.sh -cc -w -z -vv
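The clustered flags above behave like ordinary short options; a minimal `getopts` sketch (illustration only — the script's real parser is more elaborate) shows `-ccwz` expanding to the same flags as `-cc -w -z`:

```shell
# Illustration: clustered short options parse one letter at a time.
parse_flags() {
  local OPTIND=1 opt c=0 w=0 z=0 v=0
  while getopts 'cwzv' opt "$@"; do
    case $opt in
      c) c=$((c + 1)) ;;  # -cc counts twice: chat completions mode
      w) w=1 ;;           # voice in (Whisper)
      z) z=1 ;;           # voice out (TTS)
      v) v=$((v + 1)) ;;  # verbosity / hands-free level
    esac
  done
  echo "c=$c w=$w z=$z v=$v"
}
parse_flags -ccwz      # -> c=2 w=1 z=1 v=0
parse_flags -cc -w -z  # -> c=2 w=1 z=1 v=0
```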
### Chat Mode of Text Completions
When the text completions endpoint is set for chatting with `option -c`,
some properties are configured automatically to instruct the bot.

chatgpt.sh -c "Hello there! What is your name?"
### 📜 Text Completions
This is the pure text completions endpoint. It is typically used to
complete input text, such as part of an essay.

One-shot text completion, setting the max completion tokens to 128 and the text completion model name:
chatgpt.sh -128 -m gpt-3.5-turbo-instruct "Hello there! Your name is"
**NOTE:** For multiturn mode with history support, set `option -d`.
A strong instruction prompt may be needed for the language model to do what is required.
Set an instruction prompt for better results:
chatgpt.sh -d -S 'The following is a newspaper article.' "It all starts when FBI agents arrived at the governor house and"

chatgpt.sh -d -S'You are an AI assistant.' "The list below contain the 10 biggest cities in the w"
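Since `option -128` caps the completion at 128 tokens, it can help to estimate how much budget a prompt uses. A common rule of thumb (an approximation only, not the script's `tiktoken` count) is roughly four characters per token for English text:

```shell
# Crude token estimate: ~4 characters per token for English text.
# This is a heuristic; tiktoken gives exact counts when installed.
estimate_tokens() {
  local text=$1
  echo $(( (${#text} + 3) / 4 ))  # round up
}
estimate_tokens 'Hello there! Your name is'  # -> 7
```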
#### Insert Mode of Text Completions
Set `option -q` (or `-qq` for multiturn) to enable insert mode and add the
string `[insert]` where the model should insert text:

chatgpt.sh -q 'It was raining when [insert] tomorrow.'
**NOTE:** This example works with _no instruction_ prompt!
An instruction prompt in this mode may interfere with insert completions.

**NOTE:** [Insert mode](https://openai.com/blog/gpt-3-edit-insert)
works with `instruct` models.

Mistral AI has a nice FIM (fill-in-the-middle) endpoint that works
with `code` models and is really good!

## Markdown
To enable markdown rendering of responses, set command line `option --markdown`,
or run `/md` in chat mode. To render last response in markdown once,
run `//md`.

The markdown option uses `bat`, as it has line buffering on by default;
however other software is supported.
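A fallback chain over those renderers might look like the following sketch (an assumption about the selection logic, not the script's actual probing order):

```shell
# Pick the first available markdown renderer, falling back to plain cat.
pick_renderer() {
  local r
  for r in bat pygmentize glow mdcat mdless cat; do
    if command -v "$r" >/dev/null 2>&1; then
      echo "$r"
      return 0
    fi
  done
  return 1
}
pick_renderer  # prints the first renderer found on this system
```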
Set it such as `--markdown=glow` or `/md mdless` in chat mode.

Any of the following markdown renderers can be given as the option argument:
`bat`, `pygmentize`, `glow`, `mdcat`, or `mdless`.

## ✍️ Prompts
Unless the chat `option -c` or `-cc` are set, _no instruction_ is
given to the language model. On chat mode, if no instruction is set,
minimal instruction is given, and some options set, such as increasing
temperature and presence penalty, in order to un-lobotomise the bot.

Prompt engineering is an art in itself. Study carefully how to
craft the best prompts to get the most out of text, code, and
chat completion models. Steering a model takes prompt engineering;
without it, the model may not even know it should answer the questions.

### ⌨️ Custom Prompts
Set a one-shot instruction prompt with `option -S`:
chatgpt.sh -cc -S 'You are a PhD psychology student.'
chatgpt.sh -ccS'You are a professional software programmer.'
To create or load a prompt template file, set the first positional argument
as `.prompt_name` or `,prompt_name`.
In the second case, load the prompt and single-shot edit it.

chatgpt.sh -cc .psychologist
chatgpt.sh -cc ,software_programmer
Alternatively, set `option -S` with the operator and the name of
the prompt as an argument:

chatgpt.sh -cc -S .psychologist
chatgpt.sh -cc -S,software_programmer
This will load the custom prompt or create it if it does not yet exist.
In the second example, single-shot editing will be available after
loading prompt _software_programmer_.

Please note and make sure to back up your important custom prompts!
They are located at "`~/.cache/chatgptsh/`" with the extension "_.pr_".

### 🌟 Awesome Prompts
Set a prompt from [awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts)
or [awesome-chatgpt-prompts-zh](https://github.com/PlexPt/awesome-chatgpt-prompts-zh),
(use with davinci and gpt-3.5+ models):

chatgpt.sh -cc -S /linux_terminal
chatgpt.sh -cc -S /Relationship_Coach
chatgpt.sh -cc -S '%担任雅思写作考官'
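The upstream list is a CSV of `"act","prompt"` pairs; a minimal lookup over a local copy could be sketched like this (sample rows inlined for illustration — the real file lives in the awesome-chatgpt-prompts repository, and the script's own matching logic may differ):

```shell
# Sample rows in the upstream "act","prompt" CSV format.
csv='"act","prompt"
"Linux Terminal","I want you to act as a linux terminal."
"Relationship Coach","I want you to act as a relationship coach."'

lookup_prompt() {       # case-insensitive match on the act column
  local act=${1//_/ }   # allow underscores, as in /linux_terminal
  printf '%s\n' "$csv" | awk -F'","' -v act="$act" \
    'tolower($1) ~ tolower(act) { gsub(/^"|"$/, "", $2); print $2 }'
}
lookup_prompt linux_terminal  # -> I want you to act as a linux terminal.
```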
## Shell Completion
This project includes shell completions to enhance the user command-line experience.
### Bash
**Install** following one of the methods below.
**System-wide**
```
sudo cp comp/bash/chatgpt.sh /usr/share/bash-completion/completions/
```

**User-specific**
```
mkdir -p ~/.local/share/bash-completion/completions/
cp comp/bash/chatgpt.sh ~/.local/share/bash-completion/completions/
```

Visit the [bash-completion repository](https://github.com/scop/bash-completion).
### Zsh
**Install** at the **system location**
```
sudo cp comp/zsh/_chatgpt.sh /usr/share/zsh/site-functions/
```

**User-specific** location
To set **user-specific** completion, make sure to place the completion
script under a directory in the `$fpath` array.

The user may create the `~/.zfunc/` directory, for example, and
add the following lines to `~/.zshrc`:

```
[[ -d ~/.zfunc ]] && fpath=(~/.zfunc $fpath)
autoload -Uz compinit
compinit
```

Make sure `compinit` is run **after setting `$fpath`**!
Visit the [zsh-completion repository](https://github.com/zsh-users/zsh-completions).
### Troubleshoot Shell
Bash and Zsh completions should be active in new terminal sessions.
If not, ensure your `~/.bashrc` and `~/.zshrc` source
the completion files correctly.

## 💡 Notes and Tips
- Run chat commands with either _operator_ `!` or `/`.
- Edit live history entries with command `!hist`, for context injection.
- Add operator forward slash `/` to the end of prompt to trigger **preview mode**.
- One can regenerate a response by typing a single slash `/` as the new prompt,
or `//` to have the last prompt edited before the new request.

## More Script Modes
### 🖼️ Image Generations
Generate image according to prompt:
chatgpt.sh -i "Dark tower in the middle of a field of red roses."
chatgpt.sh -i "512x512" "A tower."
### Image Variations
Generate image variation:
chatgpt.sh -i path/to/image.png
### Image Edits
chatgpt.sh -i path/to/image.png path/to/mask.png "A pink flamingo."
#### Outpaint - Canvas Extension
![Displaying Image Edits - Extending the Canvas](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/img_edits.gif)
In this example, a mask is made from the white colour.
#### Inpaint - Fill in the Gaps
![Showing off Image Edits - Inpaint](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/img_edits2.gif)
Adding a bat in the night sky.
### 🔊 Audio Transcriptions / Translations
Generate a transcription from an audio file. A prompt to guide the model's style is optional.
The prompt should match the audio language:

chatgpt.sh -w path/to/audio.mp3
chatgpt.sh -w path/to/audio.mp3 "en" "This is a poem about X."
**1.** Generate a transcription from a voice recording, setting Portuguese as the language:
chatgpt.sh -w pt
This also works to transcribe from one language to another.
**2.** Transcribe audio input in any language **to Japanese** (the _prompt_ should preferably be in
the same language as the input audio):

chatgpt.sh -w ja "A job interview is currently being done."
**3.1** Translate English audio input to Japanese, and generate audio output from text.
chatgpt.sh -wz ja "Getting directions to famous places in the city."
**3.2** Done conversely as well, this enables (manual) conversation
turns between two speakers of different languages. Below,
a Japanese speaker can translate her voice and generate audio in the target language.

chatgpt.sh -wz en "Providing directions to famous places in the city."
**4.** Translate audio file or voice recording from any language to English:
chatgpt.sh -W [audio_file]
chatgpt.sh -W
To retry with the last microphone recording saved in the cache, set
_audio_file_ as `last` or `retry`.

**NOTE:** Generate **phrasal-level timestamps** by doubling the option: `-ww` or `-WW`.
For **word-level timestamps**, set option `-www` or `-WWW`.

![Transcribe audio with timestamps](https://gitlab.com/mountaineerbr/etc/-/raw/main/gfx/chat_trans.png)
## Service Providers
### LocalAI
#### LocalAI Server
Make sure you have [mudler's LocalAI](https://github.com/mudler/LocalAI)
server set up and running.

The server can be run as a docker container, or a
[binary can be downloaded](https://github.com/mudler/LocalAI/releases).
Check LocalAI tutorials
[Container Images](https://localai.io/basics/getting_started/#container-images),
and [Run Models Manually](https://localai.io/docs/getting-started/manual)
for an idea of how to install, download a model, and set it up.

```
┌─────────────────────────────────────────────────────┐
│                    Fiber v2.50.0                    │
│                http://127.0.0.1:8080                │
│        (bound on host 0.0.0.0 and port 8080)        │
│                                                     │
│ Handlers ............. 1  Processes ........... 1   │
│ Prefork ....... Disabled  PID .................. 1  │
└─────────────────────────────────────────────────────┘
```

#### Tips
*1.* Download a binary of `localai` for your system from [Mudler's release GitHub repo](https://github.com/mudler/LocalAI/releases).
*2.* Run `localai run --help` to check command line options and environment variables.
*3.* Set up `$GALLERIES` before starting up the server:
export GALLERIES='[{"name":"localai", "url":"github:mudler/localai/gallery/index.yaml"}]' #defaults
export GALLERIES='[{"name":"model-gallery", "url":"github:go-skynet/model-gallery/index.yaml"}]'
export GALLERIES='[{"name":"huggingface", "url": "github:go-skynet/model-gallery/huggingface.yaml"}]'
*4.* Install the model named `phi-2-chat` from a `yaml` file manually, while the server is running:
curl -L http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{ "config_url": "https://raw.githubusercontent.com/mudler/LocalAI/master/embedded/models/phi-2-chat.yaml" }'
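The JSON body for such calls can also be generated with `jq` (already a required package) instead of quoting it by hand, which avoids escaping mistakes — a sketch, with the endpoint and payload as shown above:

```shell
# Build the models/apply payload with jq rather than hand-quoted JSON.
url='https://raw.githubusercontent.com/mudler/LocalAI/master/embedded/models/phi-2-chat.yaml'
body=$(jq -n --arg u "$url" '{config_url: $u}')
printf '%s\n' "$body"
# then: curl -L http://localhost:8080/models/apply \
#         -H "Content-Type: application/json" -d "$body"
```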
#### Running the shell wrapper
Finally, when running `chatgpt.sh`, set the model name:
chatgpt.sh --localai -cc -m luna-ai-llama2
Setting some stop sequences may be needed to prevent the
model from generating text past its context:

chatgpt.sh --localai -cc -m luna-ai-llama2 -s'### User:' -s'### Response:'
Optionally set restart and start sequences for the text completions
endpoint (`option -c`), such as `-s'\n### User: ' -s'\n### Response:'`
(do mind setting newlines *\n and whitespaces* correctly).

And that's it!
#### Installing Models
Model names may be printed with `chatgpt.sh -l`. A model name may be
supplied as an argument, so that only that model's details are shown.

**NOTE:** Model management (downloading and setting up) must follow
the LocalAI and Ollama projects' guidelines and methods.

For image generation, install Stable Diffusion from the URL
`github:go-skynet/model-gallery/stablediffusion.yaml`,
and for audio transcription, download Whisper from the URL
`github:go-skynet/model-gallery/whisper-base.yaml`.

#### Host API Configuration
If the host address is different from the default, we need to edit
the script configuration file `.chatgpt.conf`:

vim ~/.chatgpt.conf
# Or
chatgpt.sh -F
Set the following variable:
# ~/.chatgpt.conf
OPENAI_API_HOST="http://127.0.0.1:8080"

_Alternatively_, set `$OPENAI_API_HOST` on invocation:
OPENAI_API_HOST="http://127.0.0.1:8080" chatgpt.sh -c -m luna-ai-llama2
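The override works because an assignment placed before a command applies to that invocation only. A self-contained sketch of the precedence (variable name as above; the fallback value here is illustrative, not the script's actual default):

```shell
# A per-invocation assignment shadows the configured default for that call only.
unset OPENAI_API_HOST
api_host() { echo "${OPENAI_API_HOST:-https://api.openai.com}"; }

api_host                                        # -> https://api.openai.com
OPENAI_API_HOST=http://127.0.0.1:8080 api_host  # -> http://127.0.0.1:8080
api_host                                        # back to the default
```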
### Ollama
Visit [Ollama repository](https://github.com/ollama/ollama/),
and follow the instructions to install, download models, and set up
the server.

After having the Ollama server running, set `option -O` (`--ollama`)
and the name of the model in `chatgpt.sh`:

chatgpt.sh -cc -O -m llama2
If the Ollama server URL is not the default `http://localhost:11434`,
edit the `chatgpt.sh` configuration file and set the following variable:

# ~/.chatgpt.conf
OLLAMA_API_HOST="http://192.168.0.3:11434"

### Google AI
Get a free [API key for Google](https://gemini.google.com/) to be able to
use Gemini and vision models. Users have a free bandwidth of 60 requests
per minute, and the script offers a basic implementation of the API.

Set the environment variable `$GOOGLE_API_KEY` and run the script
with `option --google`, such as:

chatgpt.sh --google -cc -m gemini-pro-vision
*OBS*: Google Gemini vision models _are not_ enabled for multiturn at the API side, so we hack it.
To list all available models, run `chatgpt.sh --google -l`.
### Mistral AI
Set up a [Mistral AI account](https://mistral.ai/),
declare the environment variable `$MISTRAL_API_KEY`,
and run the script with `option --mistral` for complete integration.

### Groq
Sign in to [Groq](https://console.groq.com/playground).
Create a new API key or use an existing one to set
the environmental variable `$GROQ_API_KEY`.
Run the script with `option --groq`.

Currently, **llama 3.1** models are available at lightning speeds!
### Anthropic
Sign in to [Anthropic AI](https://docs.anthropic.com/).
Create a new API key or use an existing one to set
the environmental variable `$ANTHROPIC_API_KEY`.
Run the script with `option --anthropic` or `--ant`.

Check the **Claude-3** models! Run the script as:
```
chatgpt.sh --anthropic -cc -m claude-3-5-sonnet-20240620
```

The script also works on **text completions** with models such as
`claude-2.1`, although the API documentation flags it as deprecated. Try:
```
chatgpt.sh --ant -c -m claude-2.1
```

### GitHub

GitHub has partnered with Azure to use its infrastructure.
As a GitHub user, join the [waitlist](https://github.com/marketplace/models/waitlist/join)
and then generate a [personal token](https://github.com/settings/tokens).
Set the environmental variable `$GITHUB_TOKEN` and run the
script with `option --github` or `--git`.

Check the [on-line model list](https://github.com/marketplace/models)
or list the available models and their original names with `chatgpt.sh --github -l`.

```
chatgpt.sh --github -m Phi-3-small-8k-instruct
```

See also the [GitHub Model Catalog - Getting Started](https://techcommunity.microsoft.com/t5/educator-developer-blog/github-model-catalog-getting-started/ba-p/4212711) page.
## Arch Linux Users
This project PKGBUILD is available at the
[Arch Linux User Repository (*AUR*)](https://aur.archlinux.org/packages/chatgpt.sh)
to install the software in Arch Linux and derivative distros.

To install the programme from the AUR, you can use an *AUR helper*
like `yay` or `paru`. For example, with `yay`:

yay -S chatgpt.sh
## Termux Users
### Dependencies Termux
Install the `Termux` and `Termux:API` apps from the *F-Droid store*.
Give all permissions to `Termux:API` in your phone app settings.
We recommend also installing `sox`, `ffmpeg`, `pulseaudio`, `imagemagick`, and `vim` (or `nano`).
Remember to execute `termux-setup-storage` to set up access to the phone storage.
In Termux proper, install the `termux-api` and `termux-tools` packages (`pkg install termux-api termux-tools`).
When recording audio (Whisper, `option -w`),
if `pulseaudio` is configured correctly,
the script uses `sox`, `ffmpeg`, or other competent software;
otherwise it defaults to `termux-microphone-record`.

Likewise, when playing audio (TTS, `option -z`),
depending on the `pulseaudio` configuration, the script uses `sox`, `mpv`, or
falls back to the Termux wrapper playback (`play-audio` is optional).

To set the clipboard, `termux-clipboard-set` from the `termux-api` package is required.
### TTS Chat - Removal of Markdown
*Markdown in TTS input* may stutter the model's audio generation a little.
If the `python` modules `markdown` and `bs4` are available, TTS input will
be converted to plain text. As a fallback, `pandoc` is used if present
(chat mode only).

### Tiktoken
Under Termux, make sure to have your system updated and the
`python`, `rust`, and `rustc-dev` packages installed for building `tiktoken`:

pkg update
pkg upgrade
pkg install python rust rustc-dev
pip install tiktoken

### Troubleshoot Termux
In order to give Termux access to recording the microphone and playing audio
(with `sox` and `ffmpeg`), follow the instructions below.

**A.** To set `pulseaudio` one time only, execute:

pulseaudio -k
pulseaudio -L "module-sles-source" -D

**B.** To set a permanent configuration:
1. Kill the process with `pulseaudio -k`.
2. Add `load-module module-sles-source` to _one of the files_:
```
~/.config/pulse/default.pa
/data/data/com.termux/files/usr/etc/pulse/default.pa
```
3. Restart the server with `pulseaudio -D`.

**C.** To create a new user `~/.config/pulse/default.pa`, you may start with the following template:

#!/usr/bin/pulseaudio -nF
.include /data/data/com.termux/files/usr/etc/pulse/default.pa
load-module module-sles-source

### Access Files
To access your Termux files using Android's file manager, install a decent file manager such as `FX File Explorer` from the Play Store and configure it, or run the following command in your Termux terminal:

am start -a android.intent.action.VIEW -d "content://com.android.externalstorage.documents/root/primary"
## 🎯 Project Objectives
- Implement nice features from `OpenAI API version 1`.
- Provide the closest API defaults.
- Let the user customise defaults (as homework).
- Première of `chatgpt.sh version 1.0` should occur when
OpenAI launches its next major API version update.

## ⚠️ Limitations
- OpenAI **API version 1** is the focus of the present project implementation.
Not all features of the API will be covered.

- This project _doesn't_ support "Function Calling" or "Structured Outputs".
- Probably, we will _not_ support "Real-Time" chatting.
- We _aren't_ very much keen on implementing video capabilities.
- Bash shell truncates input on `\000` (null).
- Bash's "read" builtin may not correctly display input buffers larger than
the TTY screen size during editing. The input buffer itself remains
unaffected. Use the text editor interface for editing big prompts.
- The script logic resembles a bowl of spaghetti code after a cat fight.
- Garbage in, garbage out. An idiot savant.
- See _BUGS AND LIMITS_ section in the [man page](man/README.md#bugs).
## Bug report
Please leave bug reports at the
[GitHub issues page](https://github.com/mountaineerbr/shellChatGPT/issues).

## 📖 Help Pages
Read the online [**man page here**](man/README.md).
Alternatively, a help page snippet can be printed with `chatgpt.sh -h`.
## 💪 Contributors
***Many Thanks*** to everyone who contributed to this project.
- [edshamis](https://www.github.com/edshamis)
- [johnd0e](https://github.com/johnd0e)

## Acknowledgements
The following projects are worth remarking on.
They were studied during development of this script and used as reference code sources.

1. [TheR1D's shell_gpt](https://github.com/TheR1D/shell_gpt/)
2. [xenodium's chatgpt-shell](https://github.com/xenodium/chatgpt-shell)
3. [llm-workflow-engine](https://github.com/llm-workflow-engine/llm-workflow-engine)
4. [0xacx's chatGPT-shell-cli](https://github.com/0xacx/chatGPT-shell-cli)
5. [mudler's LocalAI](https://github.com/mudler/LocalAI)
6. [Ollama](https://github.com/ollama/ollama/)
7. [Google Gemini](https://gemini.google.com/)
8. [Groq](https://console.groq.com/docs/api-reference)
9. [Anthropic AI](https://docs.anthropic.com/)
10. [f's awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts)
11. [PlexPt's awesome-chatgpt-prompts-zh](https://github.com/PlexPt/awesome-chatgpt-prompts-zh)
Everyone is [welcome to submit issues, PRs, and new ideas](https://github.com/mountaineerbr/shellChatGPT/discussions/1)!
---
**[The project home is at GitLab](https://gitlab.com/fenixdragao/shellchatgpt)**
_Mirror_