{"id":13581174,"url":"https://github.com/kardolus/chatgpt-cli","last_synced_at":"2025-10-05T17:39:12.914Z","repository":{"id":160148389,"uuid":"634938527","full_name":"kardolus/chatgpt-cli","owner":"kardolus","description":"ChatGPT CLI is a versatile tool for interacting with LLM models through OpenAI and Azure, as well as models from Perplexity AI and Llama. It supports prompts and history tracking for seamless, context-aware interactions. With extensive configuration options, it’s designed for both users and developers to create a customized GPT experience.","archived":false,"fork":false,"pushed_at":"2025-03-27T15:03:18.000Z","size":10642,"stargazers_count":662,"open_issues_count":1,"forks_count":47,"subscribers_count":13,"default_branch":"main","last_synced_at":"2025-03-29T18:03:41.655Z","etag":null,"topics":["azure","chatgpt","cli","go","golang","gpt","language-model","llama","openai","perplexity"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/kardolus.png","metadata":{"files":{"readme":"README.md","changelog":"history/history.go","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-05-01T15:40:34.000Z","updated_at":"2025-03-29T03:14:35.000Z","dependencies_parsed_at":"2023-09-26T18:08:02.962Z","dependency_job_id":"f678e136-019b-4d26-ab27-bf42d96dae1d","html_url":"https://github.com/kardolus/chatgpt-cli","commit_stats":null,"previous_names":[],"tags_count":51,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kardolus%2Fchatgpt-cli","tags_url":"https://repos.ecosyste.
ms/api/v1/hosts/GitHub/repositories/kardolus%2Fchatgpt-cli/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kardolus%2Fchatgpt-cli/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kardolus%2Fchatgpt-cli/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/kardolus","download_url":"https://codeload.github.com/kardolus/chatgpt-cli/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247445661,"owners_count":20939953,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["azure","chatgpt","cli","go","golang","gpt","language-model","llama","openai","perplexity"],"created_at":"2024-08-01T15:01:58.829Z","updated_at":"2025-10-05T17:39:12.908Z","avatar_url":"https://github.com/kardolus.png","language":"Go","readme":"# ChatGPT CLI\n\n![Test Workflow](https://github.com/kardolus/chatgpt-cli/actions/workflows/test.yml/badge.svg?branch=main) [![Public Backlog](https://img.shields.io/badge/public%20backlog-808080)](https://github.com/users/kardolus/projects/2)\n\n**Tested and Compatible with OpenAI ChatGPT, Azure OpenAI Service, Perplexity AI, Llama and 302.AI!**\n\nChatGPT CLI provides a powerful command-line interface for seamless interaction with ChatGPT models via OpenAI and\nAzure, featuring streaming capabilities and extensive configuration options.\n\n![a screenshot](cmd/chatgpt/resources/vhs.gif)\n\n## Table of Contents\n\n- [Features](#features)\n    - [Prompt Support](#prompt-support)\n        - [Using the prompt 
flag](#using-the---prompt-flag)\n        - [Example](#example)\n        - [Explore More Prompts](#explore-more-prompts)\n    - [MCP Support](#mcp-support)\n        - [Overview](#overview)\n        - [Examples](#examples)\n        - [Default Version Behavior](#default-version-behavior)\n        - [Handling MCP Replies](#handling-mcp-replies)\n        - [Config](#config)\n- [Installation](#installation)\n    - [Using Homebrew (macOS)](#using-homebrew-macos)\n    - [Direct Download](#direct-download)\n        - [Apple Silicon](#apple-silicon)\n        - [macOS Intel chips](#macos-intel-chips)\n        - [Linux (amd64)](#linux-amd64)\n        - [Linux (arm64)](#linux-arm64)\n        - [Linux (386)](#linux-386)\n        - [FreeBSD (amd64)](#freebsd-amd64)\n        - [FreeBSD (arm64)](#freebsd-arm64)\n        - [Windows (amd64)](#windows-amd64)\n- [Getting Started](#getting-started)\n- [Configuration](#configuration)\n    - [General Configuration](#general-configuration)\n    - [LLM Specific Configuration](#llm-specific-configuration)\n    - [Custom Config and Data Directory](#custom-config-and-data-directory)\n        - [Example for Custom Directories](#example-for-custom-directories)\n        - [Variables for interactive mode](#variables-for-interactive-mode)\n    - [Switching Between Configurations with --target](#switching-between-configurations-with---target)\n    - [Azure Configuration](#azure-configuration)\n    - [Perplexity Configuration](#perplexity-configuration)\n    - [302 AI Configuration](#302ai-configuration)\n    - [Command-Line Autocompletion](#command-line-autocompletion)\n        - [Enabling Autocompletion](#enabling-autocompletion)\n        - [Persistent Autocompletion](#persistent-autocompletion)\n- [Markdown Rendering](#markdown-rendering)\n- [Development](#development)\n    - [Using the Makefile](#using-the-makefile)\n    - [Testing the CLI](#testing-the-cli)\n- [Reporting Issues and Contributing](#reporting-issues-and-contributing)\n- 
[Uninstallation](#uninstallation)\n- [Useful Links](#useful-links)\n- [Additional Resources](#additional-resources)\n\n## Features\n\n* **Streaming mode**: Real-time interaction with the GPT model.\n* **Query mode**: Single input-output interactions with the GPT model.\n* **Interactive mode**: The interactive mode allows for a more conversational experience with the model. Prints the\n  token usage when combined with query mode.\n* **Thread-based context management**: Enjoy seamless conversations with the GPT model with individualized context for\n  each thread, much like your experience on the OpenAI website. Each unique thread has its own history, ensuring\n  relevant and coherent responses across different chat instances.\n* **Sliding window history**: To stay within token limits, the chat history automatically trims while still preserving\n  the necessary context. The size of this window can be adjusted through the `context-window` setting.\n* **Custom context from any source**: You can provide the GPT model with a custom context during conversation. This\n  context can be piped in from any source, such as local files, standard input, or even another program. This\n  flexibility allows the model to adapt to a wide range of conversational scenarios.\n* **Support for images**: Upload an image or provide an image URL using the `--image` flag. Note that image support may\n  not be available for all models. You can also pipe an image directly: `pngpaste - | chatgpt \"What is this photo?\"`\n* **Generate images**: Use the `--draw` and `--output` flags to generate an image from a prompt (requires image-capable\n  models like `gpt-image-1`).\n* **Edit images**: Use the `--draw` flag with `--image` and `--output` to modify an existing image using a prompt (\n  e.g., \"add sunglasses to the cat\"). 
Supported formats: PNG, JPEG, and WebP.\n* **Audio support**: You can upload audio files using the `--audio` flag to ask questions about spoken content.\n  This feature is compatible only with audio-capable models like gpt-4o-audio-preview. Currently, only `.mp3` and `.wav`\n  formats are supported.\n* **Transcription support**: You can also use the `--transcribe` flag to generate a transcript of the uploaded audio.\n  This uses OpenAI’s transcription endpoint (compatible with models like gpt-4o-transcribe) and supports a wider range\n  of formats, including `.mp3`, `.mp4`, `.mpeg`, `.mpga`, `.m4a`, `.wav`, and `.webm`.\n* **Text-to-speech support**: Use the `--speak` and `--output` flags to convert text to speech (works with models like\n  `gpt-4o-mini-tts`).\n  If you have `afplay` installed (macOS), you can even chain playback like this:\n    ```shell\n    chatgpt --speak \"convert this to audio\" --output test.mp3 \u0026\u0026 afplay test.mp3\n    ```\n* **Model listing**: Access a list of available models using the `-l` or `--list-models` flag.\n* **Advanced configuration options**: The CLI supports a layered configuration system where settings can be specified\n  through default values, a `config.yaml` file, and environment variables. For quick adjustments,\n  various `--set-\u003cvalue\u003e` flags are provided. To verify your current settings, use the `--config` or `-c` flag.\n\n### Prompt Support\n\nWe’re excited to introduce support for prompt files with the `--prompt` flag in **version 1.7.1**! This feature\nallows you to provide a rich and detailed context for your conversations directly from a file.\n\n#### Using the `--prompt` Flag\n\nThe `--prompt` flag lets you specify a file containing the initial context or instructions for your ChatGPT\nconversation. 
This is especially useful when you have detailed instructions or context that you want to reuse across\ndifferent conversations.\n\nTo use the `--prompt` flag, pass the path of your prompt file like this:\n\n```shell\nchatgpt --prompt path/to/your/prompt.md \"Use a pipe or provide a query here\"\n```\n\nThe contents of `prompt.md` will be read and used as the initial context for the conversation, while the query you\nprovide directly will serve as the specific question or task you want to address.\n\n#### Example\n\nHere’s a fun example where you can use the output of a `git diff` command as a prompt:\n\n```shell\ngit diff | chatgpt --prompt ../prompts/write_pull-request.md\n```\n\nIn this example, the content from the `write_pull-request.md` prompt file is used to guide the model's response based on\nthe diff data from `git diff`.\n\n#### Explore More Prompts\n\nFor a variety of ready-to-use prompts, check out this [awesome prompts repository](https://github.com/kardolus/prompts).\nThese can serve as great starting points or inspiration for your own custom prompts!\n\n### MCP Support\n\nWe’re excited to introduce Model Context Protocol (MCP) support in version 1.8.3+, allowing you to enrich your chat\nsessions with structured, live data. For now, this feature is limited to Apify integrations.\n\n#### Overview\n\nMCP enables the CLI to call external plugins — like Apify actors — and inject their responses into the chat context\nbefore your actual query is sent. 
This is useful for fetching weather, scraping Google Maps, or summarizing PDFs.\n\nYou can use either `--param` (for individual key=value pairs) or `--params` (for raw JSON).\n\n#### Examples\n\nUsing `--param` flags:\n\n```shell\nchatgpt --mcp apify/epctex~weather-scraper \\\n    --param locations='[\"Brooklyn\"]' \\\n    --param language=en \\\n    --param forecasts=true \\\n    \"what should I wear today\"\n```\n\nUsing a single `--params` flag:\n\n```shell\nchatgpt --mcp apify/epctex~weather-scraper \\\n    --params '{\"locations\": [\"Brooklyn\"], \"language\": \"en\", \"forecasts\": true}' \\\n    \"what should I wear today\"\n```\n\n#### Default Version Behavior\n\nIf no version is specified, `@latest` is assumed:\n\n```shell\nchatgpt --mcp apify/user~weather\n```\n\nis equivalent to:\n\n```shell\nchatgpt --mcp apify/user~weather@latest\n```\n\n#### Handling MCP Replies\n\nResponses from MCP plugins are automatically injected into the conversation thread as context. You can use MCP in two\ndifferent modes:\n\n1. MCP-only mode (Context Injection Only)\n\n    ```shell\n    chatgpt --mcp apify/epctex~weather-scraper --param location=Brooklyn\n    ```\n\n    * Fetches live data\n    * Injects it into the current thread\n    * Does not trigger a GPT completion\n    * CLI prints a confirmation\n\n2. 
MCP + Query mode (Context + Completion)\n\n    ```shell\n    chatgpt --mcp apify/epctex~weather-scraper --param location=Brooklyn \"What should I wear today?\"\n    ```\n\n    * Fetches and injects MCP data\n    * Immediately sends your query to GPT\n    * Returns the assistant’s response\n\n#### Config\n\nYou’ll need to set the `APIFY_API_KEY` as an environment variable or config value\n\nExample:\n\n```shell\nexport APIFY_API_KEY=your-api-key\n```\n\n## Installation\n\n### Using Homebrew (macOS)\n\nYou can install chatgpt-cli using Homebrew:\n\n```shell\nbrew tap kardolus/chatgpt-cli \u0026\u0026 brew install chatgpt-cli\n```\n\n### Direct Download\n\nFor a quick and easy installation without compiling, you can directly download the pre-built binary for your operating\nsystem and architecture:\n\n#### Apple Silicon\n\n```shell\ncurl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-darwin-arm64 \u0026\u0026 chmod +x chatgpt \u0026\u0026 sudo mv chatgpt /usr/local/bin/\n```\n\n#### macOS Intel chips\n\n```shell\ncurl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-darwin-amd64 \u0026\u0026 chmod +x chatgpt \u0026\u0026 sudo mv chatgpt /usr/local/bin/\n```\n\n#### Linux (amd64)\n\n```shell\ncurl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-linux-amd64 \u0026\u0026 chmod +x chatgpt \u0026\u0026 sudo mv chatgpt /usr/local/bin/\n```\n\n#### Linux (arm64)\n\n```shell\ncurl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-linux-arm64 \u0026\u0026 chmod +x chatgpt \u0026\u0026 sudo mv chatgpt /usr/local/bin/\n```\n\n#### Linux (386)\n\n```shell\ncurl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-linux-386 \u0026\u0026 chmod +x chatgpt \u0026\u0026 sudo mv chatgpt /usr/local/bin/\n```\n\n#### FreeBSD (amd64)\n\n```shell\ncurl -L -o chatgpt 
https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-freebsd-amd64 \u0026\u0026 chmod +x chatgpt \u0026\u0026 sudo mv chatgpt /usr/local/bin/\n```\n\n#### FreeBSD (arm64)\n\n```shell\ncurl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-freebsd-arm64 \u0026\u0026 chmod +x chatgpt \u0026\u0026 sudo mv chatgpt /usr/local/bin/\n```\n\n#### Windows (amd64)\n\nDownload the binary\nfrom [this link](https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-windows-amd64.exe) and add it\nto your PATH.\n\nChoose the appropriate command for your system, which will download the binary, make it executable, and move it to your\n/usr/local/bin directory (or %PATH% on Windows) for easy access.\n\n## Getting Started\n\n1. Set the `OPENAI_API_KEY` environment variable to\n   your [ChatGPT secret key](https://platform.openai.com/account/api-keys). To set the environment variable, you can add\n   the following line to your shell profile (e.g., ~/.bashrc, ~/.zshrc, or ~/.bash_profile), replacing your_api_key with\n   your actual key:\n\n    ```shell\n    export OPENAI_API_KEY=\"your_api_key\"\n    ```\n\n2. To enable history tracking across CLI calls, create a ~/.chatgpt-cli directory using the command:\n\n    ```shell\n    mkdir -p ~/.chatgpt-cli\n    ```\n\n   Once this directory is in place, the CLI automatically manages the message history for each \"thread\" you converse\n   with. The history operates like a sliding window, maintaining context up to a configurable token maximum. This\n   ensures a balance between maintaining conversation context and achieving optimal performance.\n\n   By default, if a specific thread is not provided by the user, the CLI uses the default thread and stores the history\n   at `~/.chatgpt-cli/history/default.json`. You can find more details about how to configure the `thread` parameter in\n   the\n   [Configuration](#configuration) section of this document.\n\n3. 
Try it out:\n\n    ```shell\n    chatgpt what is the capital of the Netherlands\n    ```\n\n4. To start interactive mode, use the `-i` or `--interactive` flag:\n\n    ```shell\n    chatgpt --interactive\n    ```\n\n   If you want the CLI to automatically create a new thread for each session, ensure that the `auto_create_new_thread`\n   configuration variable is set to `true`. This will create a unique thread identifier for each interactive session.\n\n5. To use the pipe feature, create a text file containing some context. For example, create a file named context.txt\n   with the following content:\n\n    ```shell\n    Kya is a playful dog who loves swimming and playing fetch.\n    ```\n\n   Then, use the pipe feature to provide this context to ChatGPT:\n\n    ```shell\n    cat context.txt | chatgpt \"What kind of toy would Kya enjoy?\"\n    ```\n\n6. To list all available models, use the -l or --list-models flag:\n\n    ```shell\n    chatgpt --list-models\n    ```\n\n7. For more options, see:\n\n   ```shell\n   chatgpt --help\n   ```\n\n## Configuration\n\nThe ChatGPT CLI adopts a four-tier configuration strategy, with different levels of precedence assigned to flags,\nenvironment variables, a config.yaml file, and default values, in that respective order:\n\n1. Flags: Command-line flags have the highest precedence. Any value provided through a flag will override other\n   configurations.\n2. Environment Variables: If a setting is not specified by a flag, the corresponding environment variable (prefixed with\n   the name field from the config) will be checked.\n3. Config file (config.yaml): If neither a flag nor an environment variable is set, the value from the config.yaml file\n   will be used.\n4. 
Default Values: If no value is specified through flags, config.yaml, or environment variables, the CLI will fall back\n   to its built-in default values.\n\n### General Configuration\n\n| Variable                 | Description                                                                                                                                                                                           | Default                   |\n|--------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------|\n| `name`                   | The prefix for environment variable overrides.                                                                                                                                                        | 'openai'                  |\n| `thread`                 | The name of the current chat thread. Each unique thread name has its own context.                                                                                                                     | 'default'                 |\n| `target`                 | Load configuration from config._target_.yaml                                                                                                                                                          | ''                        |\n| `omit_history`           | If true, the chat history will not be used to provide context for the GPT model.                                                                                                                      | false                     |\n| `command_prompt`         | The command prompt in interactive mode. Should be single-quoted.                                                                                                                                      
| '[%datetime] [Q%counter]' |\n| `output_prompt`          | The output prompt in interactive mode. Should be single-quoted.                                                                                                                                       | ''                        |\n| `command_prompt_color`   | The color of the command_prompt in interactive mode. Supported colors: \"red\", \"green\", \"blue\", \"yellow\", \"magenta\".                                                                                   | ''                        |\n| `output_prompt_color`    | The color of the output_prompt in interactive mode. Supported colors: \"red\", \"green\", \"blue\", \"yellow\", \"magenta\".                                                                                    | ''                        |\n| `auto_create_new_thread` | If set to `true`, a new thread with a unique identifier (e.g., `int_a1b2`) will be created for each interactive session. If `false`, the CLI will use the thread specified by the `thread` parameter. | `false`                   |\n| `track_token_usage`      | If set to true, displays the total token usage after each query in --query mode, helping you monitor API usage.                                                                                       | `false`                   |\n| `debug`                  | If set to true, prints the raw request and response data during API calls, useful for debugging.                                                                                                      | `false`                   |\n| `custom_headers`         | Add a map of custom headers to each http request                                                                                                                                                      | {}                        |\n| `skip_tls_verify`        | If set to true, skips TLS certificate verification, allowing insecure HTTPS requests.                             
                                                                                     | `false`                   |\n| `multiline`              | If set to true, enables multiline input mode in interactive sessions.                                                                                                                                 | `false`                   |\n| `role_file`              | Path to a file that overrides the system role (role).                                                                                                                                                 | ''                        |\n| `prompt`                 | Path to a file that provides additional context before the query.                                                                                                                                     | ''                        |\n| `image`                  | Local path or URL to an image used in the query.                                                                                                                                                      | ''                        |\n| `audio`                  | Path to an audio file (MP3/WAV) used as part of the query.                                                                                                                                            | ''                        |\n| `output`                 | Path where synthesized audio is saved when using `--speak`.                                                                                                                                           | ''                        |\n| `transcribe`             | Enables transcription mode. This flag takes the path of an audio file.                                                                                                                                
| `false`                   |\n| `speak`                  | If true, enables text-to-speech synthesis for the input query.                                                                                                                                        | `false`                   |\n| `draw`                   | If true, generates an image from a prompt and saves it to the path specified by `output`. Requires image-capable models.                                                                              | `false`                   |\n\n### LLM-Specific Configuration\n\n| Variable                 | Description                                                                                                                                            | Default                        |\n|--------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------|\n| `api_key`                | Your API key.                                                                                                                                          | (none for security)            |\n| `auth_header`            | The header used for authorization in API requests.                                                                                                     | 'Authorization'                |\n| `auth_token_prefix`      | The prefix to be added before the token in the `auth_header`.                                                                                          | 'Bearer '                      |\n| `completions_path`       | The API endpoint for completions.                                                                                                                      | '/v1/chat/completions'         |\n| `context_window`         | The memory limit for how much of the conversation can be remembered at one time.            
                                                           | 8192                           |\n| `effort`                 | Sets the reasoning effort. Used by o1-pro models.                                                                                                      | 'low'                          |\n| `frequency_penalty`      | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far.                                 | 0.0                            |\n| `image_edits_path`       | The API endpoint for image editing.                                                                                                                    | '/v1/images/edits'             |\n| `image_generations_path` | The API endpoint for image generation.                                                                                                                 | '/v1/images/generations'       |\n| `max_tokens`             | The maximum number of tokens that can be used in a single API call.                                                                                    | 4096                           |\n| `model`                  | The GPT model used by the application.                                                                                                                 | 'gpt-4o'                       |\n| `models_path`            | The API endpoint for accessing model information.                                                                                                      | '/v1/models'                   |\n| `presence_penalty`       | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far.                                      | 0.0                            |\n| `responses_path`         | The API endpoint for responses. Used by o1-pro models.                                                                                              
   | '/v1/responses'                |\n| `role`                   | The system role.                                                                                                                                       | 'You are a helpful assistant.' |\n| `seed`                   | Sets the seed for deterministic sampling (Beta). Repeated requests with the same seed and parameters aim to return the same result.                    | 0                              |\n| `speech_path`            | The API endpoint for text-to-speech synthesis.                                                                                                         | '/v1/audio/speech'             |\n| `temperature`            | What sampling temperature to use, between 0 and 2. Higher values make the output more random; lower values make it more focused and deterministic.     | 1.0                            |\n| `top_p`                  | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. | 1.0                            |\n| `transcriptions_path`    | The API endpoint for audio transcription requests.                                                                                                     | '/v1/audio/transcriptions'     |\n| `url`                    | The base URL for the OpenAI API.                                                                                                                       | 'https://api.openai.com'       |\n| `user_agent`             | The header used for the user agent in API requests.                                                                                                    | 'chatgpt-cli'                  |\n| `voice`                  | The voice to use when generating audio with TTS models like gpt-4o-mini-tts.                                                                           
| 'nova'                         |\n\n### Custom Config and Data Directory\n\nBy default, ChatGPT CLI stores configuration and history files in the `~/.chatgpt-cli` directory. However, you can\neasily override these locations by setting environment variables, allowing you to store configuration and history in\ncustom directories.\n\n| Environment Variable | Description                                  | Default Location         |\n|----------------------|----------------------------------------------|--------------------------|\n| `OPENAI_CONFIG_HOME` | Overrides the default config directory path. | `~/.chatgpt-cli`         |\n| `OPENAI_DATA_HOME`   | Overrides the default data directory path.   | `~/.chatgpt-cli/history` |\n\n#### Example for Custom Directories\n\nTo change the default configuration or data directories, set the appropriate environment variables:\n\n```shell\nexport OPENAI_CONFIG_HOME=\"/custom/config/path\"\nexport OPENAI_DATA_HOME=\"/custom/data/path\"\n```\n\nIf these environment variables are not set, the application defaults to `~/.chatgpt-cli` for configuration files and\n`~/.chatgpt-cli/history` for history.\n\n### Switching Between Configurations with --target\n\nYou can maintain multiple configuration files side by side and switch between them using the `--target` flag. This is\nespecially useful if you use multiple LLM providers (like OpenAI, Perplexity, Azure, etc.) 
or have different contexts or\nworkflows that require distinct settings.\n\n**How it Works**\n\nWhen you use the `--target` flag, the CLI loads a config file named:\n\n```shell\nconfig.\u003ctarget\u003e.yaml\n```\n\nFor example:\n\n```shell\nchatgpt --target perplexity --config\n```\n\nThis will load:\n\n```shell\n~/.chatgpt-cli/config.perplexity.yaml\n```\n\nIf the `--target` flag is not provided, the CLI falls back to:\n\n```shell\n~/.chatgpt-cli/config.yaml\n```\n\n**Example Setup**\n\nYou can maintain the following structure:\n\n```shell\n~/.chatgpt-cli/\n├── config.yaml # Default (e.g., OpenAI)\n├── config.perplexity.yaml # Perplexity setup\n├── config.azure.yaml # Azure-specific config\n└── config.llama.yaml # LLaMA setup\n```\n\nThen switch between them like so:\n\n```shell\nchatgpt --target azure \"Explain Azure's GPT model differences\"\nchatgpt --target perplexity \"What are some good restaurants in the Red Hook area\"\n```\n\nOr just use the default:\n\n```shell\nchatgpt \"What's the capital of Sweden?\"\n```\n\n**CLI and Environment Interaction**\n\n* The value of `--target` is never persisted — it must be explicitly passed for each run.\n* The config file corresponding to the target is loaded before any environment variable overrides are applied.\n* Environment variables still follow the `name` field inside the loaded config, so `name: perplexity` enables\n  `PERPLEXITY_API_KEY`.\n\n#### Variables for interactive mode\n\n- `%date`: The current date in the format `YYYY-MM-DD`.\n- `%time`: The current time in the format `HH:MM:SS`.\n- `%datetime`: The current date and time in the format `YYYY-MM-DD HH:MM:SS`.\n- `%counter`: The total number of queries in the current session.\n- `%usage`: The usage in total tokens used (only works in query mode).\n\nThe defaults can be overridden by providing your own values in the user configuration file. The structure of this file\nmirrors that of the default configuration. 
For instance, to override the `model` and `max_tokens` parameters, your file might look like this:

```yaml
model: gpt-3.5-turbo-16k
max_tokens: 4096
```

This alters the `model` to `gpt-3.5-turbo-16k` and adjusts `max_tokens` to `4096`. All other options, such as `url`, `completions_path`, and `models_path`, can be modified in the same way.

You can also add custom HTTP headers to all API requests. This is useful when working with proxies, API gateways, or services that require additional headers:

```yaml
custom_headers:
  X-Custom-Header: "custom-value"
  X-API-Version: "v2"
  X-Client-ID: "my-client-id"
```

If the user configuration file is missing or cannot be accessed, the application resorts to the default configuration.

Another way to adjust values without manually editing the configuration file is to use environment variables. The `name` attribute forms the prefix for these variables. For example, the `model` can be modified using the `OPENAI_MODEL` environment variable. Similarly, to disable history during the execution of a command, use:

```shell
OPENAI_OMIT_HISTORY=true chatgpt what is the capital of Denmark?
```

This approach is especially useful for temporary changes or for testing different configurations.

Moreover, you can use the `--config` or `-c` flag to view the present configuration. This handy feature allows users to swiftly verify their current settings without manually inspecting the configuration files.

```shell
chatgpt --config
```

Executing this command displays the active configuration, including any overrides applied by environment variables or the user configuration file.

To facilitate convenient adjustments, the ChatGPT CLI provides flags for swiftly modifying the `model`, `thread`, `context-window`, and `max_tokens` parameters in your user-configured `config.yaml`.
These flags are `--set-model`, `--set-thread`, `--set-context-window`, and `--set-max-tokens`.

For instance, to update the model, use the following command:

```shell
chatgpt --set-model gpt-3.5-turbo-16k
```

This feature allows for rapid changes to key configuration parameters, optimizing your experience with the ChatGPT CLI.

### Azure Configuration

For Azure, you need to configure these, or similar, values:

```yaml
name: azure
api_key: <your azure api key>
url: https://<your_resource>.openai.azure.com
completions_path: /openai/deployments/<your_deployment>/chat/completions?api-version=<your_api>
auth_header: api-key
auth_token_prefix: " "
```

You can set the API key either in the `config.yaml` file as shown above or export it as an environment variable:

```shell
export AZURE_API_KEY=<your_key>
```

### Perplexity Configuration

For Perplexity, you will need something equivalent to the following values:

```yaml
name: perplexity
api_key: <your perplexity api key>
model: sonar
url: https://api.perplexity.ai
```

You can set the API key either in the `config.yaml` file as shown above or export it as an environment variable:

```shell
export PERPLEXITY_API_KEY=<your_key>
```

### 302.AI Configuration

I successfully tested 302.AI with the following values:

```yaml
name: ai302 # environment variables cannot start with numbers
api_key: <your 302.AI api key>
url: https://api.302.ai
```

You can set the API key either in the `config.yaml` file as shown above or export it as an environment variable:

```shell
export AI302_API_KEY=<your_key>
```

### Command-Line Autocompletion

Enhance your CLI experience with our autocompletion feature for
command flags!

#### Enabling Autocompletion

Autocompletion is currently supported for the following shells: Bash, Zsh, Fish, and PowerShell. To activate flag completion in your current shell session, execute the appropriate command for your shell:

- **Bash**
    ```bash
    . <(chatgpt --set-completions bash)
    ```
- **Zsh**
    ```zsh
    . <(chatgpt --set-completions zsh)
    ```
- **Fish**
    ```fish
    chatgpt --set-completions fish | source
    ```
- **PowerShell**
    ```powershell
    chatgpt --set-completions powershell | Out-String | Invoke-Expression
    ```

#### Persistent Autocompletion

For added convenience, you can make autocompletion persist across all new shell sessions by adding the appropriate sourcing command to your shell's startup file. Here are the files typically used for each shell:

- **Bash**: Add to `.bashrc` or `.bash_profile`
- **Zsh**: Add to `.zshrc`
- **Fish**: Add to `config.fish`
- **PowerShell**: Add to your PowerShell profile script

For example, for Bash, you would add the following line to your `.bashrc` file:

```bash
. <(chatgpt --set-completions bash)
```

This ensures that command flag autocompletion is enabled automatically every time you open a new terminal window.

## Markdown Rendering

You can render markdown in real time using the `mdrender.sh` script, located [here](scripts/mdrender.sh).
You'll first need to install [glow](https://github.com/charmbracelet/glow).

Example:

```shell
chatgpt write a hello world program in Java | ./scripts/mdrender.sh
```

## Development

To start developing, set the `OPENAI_API_KEY` environment variable to your [ChatGPT secret key](https://platform.openai.com/account/api-keys).

### Using the Makefile

The Makefile simplifies development tasks by providing several targets for testing, building, and deployment.

* **all-tests**: Run all tests, including linting, formatting, and go mod tidy.
  ```shell
  make all-tests
  ```
* **binaries**: Build binaries for multiple platforms.
  ```shell
  make binaries
  ```
* **shipit**: Run the release process, create binaries, and generate release notes.
  ```shell
  make shipit
  ```
* **updatedeps**: Update dependencies and commit any changes.
  ```shell
  make updatedeps
  ```

For more available commands, use:

```shell
make help
```

#### Windows build script

```ps1
.\scripts\install.ps1
```

### Testing the CLI

1. After a successful build, test the application with the following command:

    ```shell
    ./bin/chatgpt what type of dog is a Jack Russell?
    ```

2. As mentioned previously, the ChatGPT CLI supports tracking conversation history across CLI calls. This feature creates a seamless, conversational experience with the GPT model, as the history is used as context in subsequent interactions.

   To enable this feature, create a `~/.chatgpt-cli` directory using the command:

    ```shell
    mkdir -p ~/.chatgpt-cli
    ```

## Reporting Issues and Contributing

If you encounter any issues or have suggestions for improvements, please [submit an issue](https://github.com/kardolus/chatgpt-cli/issues/new) on GitHub.
We appreciate your feedback and contributions to help make this project better.

## Uninstallation

If for any reason you wish to uninstall the ChatGPT CLI application from your system, you can do so by following these steps:

### Using Homebrew (macOS)

If you installed the CLI using Homebrew, you can run:

```shell
brew uninstall chatgpt-cli
```

And to remove the tap:

```shell
brew untap kardolus/chatgpt-cli
```

### macOS / Linux

If you installed the binary directly, follow these steps:

1. Remove the binary:

    ```shell
    sudo rm /usr/local/bin/chatgpt
    ```

2. Optionally, if you wish to remove the history tracking directory, you can also delete the `~/.chatgpt-cli` directory:

    ```shell
    rm -rf ~/.chatgpt-cli
    ```

### Windows

1. Navigate to the location of the `chatgpt` binary on your system, which should be in your PATH.

2. Delete the `chatgpt` binary.

3. Optionally, if you wish to remove the history tracking, navigate to the `~/.chatgpt-cli` directory (where `~` refers to your user's home directory) and delete it.

Please note that the history tracking directory `~/.chatgpt-cli` only contains conversation history and no personal data.
If you have any concerns about this, please feel free to delete this directory during uninstallation.

## Useful Links

* [Amazing Prompts](https://github.com/kardolus/prompts)
* [OpenAI API Reference](https://platform.openai.com/docs/api-reference/chat/create)
* [OpenAI Key Usage Dashboard](https://platform.openai.com/account/usage)
* [OpenAI Pricing Page](https://openai.com/pricing)
* [Perplexity API Reference](https://docs.perplexity.ai/reference/post_chat_completions)
* [Perplexity Key Usage Dashboard](https://www.perplexity.ai/settings/api)
* [Perplexity Models](https://docs.perplexity.ai/docs/model-cards)
* [302.AI API Reference](https://302ai-en.apifox.cn/api-207705102)

## Additional Resources

* ["Summarize any text instantly with a single shortcut"](https://medium.com/@kardolus/summarize-any-text-instantly-with-a-single-shortcut-582551bcc6e2) on Medium: Dive deep into the capabilities of this CLI tool with this detailed walkthrough.
* [Join the conversation](https://www.reddit.com/r/ChatGPT/comments/14ip6pm/summarize_any_text_instantly_with_a_single/) on Reddit: Discuss the tool, ask questions, and share your experiences with our growing community.

Thank you for using ChatGPT CLI!

<div align="center" style="text-align: center; display: flex; justify-content: center; align-items: center;">
    <a href="#top">
        <img src="https://img.shields.io/badge/Back%20to%20Top-000000?style=for-the-badge&logo=github&logoColor=white" alt="Back to Top">
    </a>
</div>