{"id":26208483,"url":"https://github.com/mwmdev/neuma","last_synced_at":"2025-12-25T02:52:54.360Z","repository":{"id":153512759,"uuid":"620479733","full_name":"mwmdev/neuma","owner":"mwmdev","description":"neuma is a ChatGPT CLI written in python. It has conversation management, profiles, custom modes, voice and image generation support.","archived":false,"fork":false,"pushed_at":"2024-05-15T14:23:48.000Z","size":2599,"stargazers_count":2,"open_issues_count":0,"forks_count":0,"subscribers_count":2,"default_branch":"main","last_synced_at":"2024-05-16T03:40:13.720Z","etag":null,"topics":["chat","chatgpt","cli","cli-app","gruvbox","gtts","openai","python","rich","stt","tts","whisper"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/mwmdev.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-03-28T19:06:33.000Z","updated_at":"2024-05-29T23:04:59.767Z","dependencies_parsed_at":"2024-04-11T19:43:02.162Z","dependency_job_id":"f74d4483-fe24-467d-b4f4-af454b06665a","html_url":"https://github.com/mwmdev/neuma","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mwmdev%2Fneuma","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mwmdev%2Fneuma/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mwmdev%2Fneuma/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mwmdev%2Fneuma/manifest
s","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/mwmdev","download_url":"https://codeload.github.com/mwmdev/neuma/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243170574,"owners_count":20247788,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["chat","chatgpt","cli","cli-app","gruvbox","gtts","openai","python","rich","stt","tts","whisper"],"created_at":"2025-03-12T06:28:51.356Z","updated_at":"2025-12-25T02:52:54.311Z","avatar_url":"https://github.com/mwmdev.png","language":"Python","readme":"\u003cimg align=\"right\" style=\"width:20%; min-width:150px; max-width:250px; margin:40px 20px;\" alt = \"terminal based chatgpt\" title = \"Ah0 M374KI45E\" src = \"public/neuma.png\"/\u003e\n\n`neuma` is a minimalistic ChatGPT interface for the command line.\n\n![render1682022113695](https://user-images.githubusercontent.com/31964517/233479690-81521ceb-2443-4a0e-ab1f-0b0b100a75db.gif)\n\n## Table of contents\n- [Features](#features)\n- [Installation](#installation)\n  - [Dependencies](#dependencies)\n  - [One line install script](#one-line-install-script)\n  - [Manual install](#manual-install)\n  - [Alias shortcut](#alias-shortcut)\n- [Usage](#usage)\n  - [Conversations](#conversations)\n  - [Modes](#modes)\n    - [Table display](#table-display)\n    - [Code generator](#code-generator)\n    - [Translator](#translator)\n    - [Character impersonator](#character-impersonator)\n    - [CSV generator](#csv-generator)\n    - [Image generator](#image-generator)\n    - [Terminal commands 
generator](#terminal-commands-generator)\n  - [Personae](#personae)\n  - [Speech support](#speech-support)\n    - [Voice output](#voice-output)\n    - [Voice input](#voice-input)\n  - [Embeddings](#embeddings)\n  - [Special placeholders](#special-placeholders)\n  - [GPT models](#gpt-models)\n  - [Other commands](#other-commands)\n  - [Command line arguments](#command-line-arguments)\n  - [Color theme](#color-theme)\n- [Troubleshooting](#troubleshooting)\n- [What's in a name?](#whats-in-a-name)\n\n## Features\n\n- **Conversations** management (create, save, copy, delete)\n- **Modes** (normal, table, code, translate, impersonate, summarize, csv, image, terminal)\n- **Personae** profiles with custom starting prompt\n- **Embeddings** management (embed documents, create vector dbs)\n- **Voice input / output**\n- and a few other things...\n\n## Installation\n\nThese instructions are for Linux; they may vary on other systems.\n\n### Dependencies\n\nMake sure recent versions of the following packages are installed on your system:\n\n`git python python-pip python-virtualenv portaudio19-dev`\n\n### One line install script\n\nYou can launch the install script with the following command:\n\n```shell\nbash \u003c(wget -qO- https://raw.githubusercontent.com/mwmdev/neuma/main/install.sh)\n```\nDuring the installation process you will be prompted for a [ChatGPT API key](https://platform.openai.com/account/api-keys).\n\n### Manual install\n\nIf you prefer a manual install, do the following:\n\nClone this repository to your local machine using:\n\n```shell\ngit clone https://github.com/mwmdev/neuma.git\n```\n\nNavigate to the directory where the repository was cloned:\n\n```shell\ncd neuma\n```\n\nCreate a virtual environment with:\n\n```shell\npython -m venv env\n```\n\nActivate the virtual environment with:\n\n```shell\nsource env/bin/activate\n```\n\nInstall the required dependencies by running:\n\n```shell\npip install -r requirements.txt\n```\n\nRename the `.env_example` to 
`.env` with:\n\n```shell\nmv .env_example .env\n```\n\nEdit `.env` and add your [ChatGPT API key](https://platform.openai.com/account/api-keys).\n\nMove all config files to your `.config/neuma/` folder with:\n\n```shell\nmkdir ~/.config/neuma \u0026\u0026 mv .env config.toml persona.toml ~/.config/neuma/\n```\n\nFinally, run the script with:\n\n```shell\npython neuma.py\n```\n\n### Alias shortcut\n\nTo make it easier to run `neuma`, you can create an alias in your `.bashrc` or `.zshrc` file by adding the following line:\n\n```\nalias n='source /path/to/neuma/env/bin/activate \u0026\u0026 python /path/to/neuma.py'\n```\n\n## Usage\n\nUse `neuma` as an interactive chat: write your prompt and press `Enter`. Wait for the answer, then continue the discussion.\n\nPress `h` followed by `Enter` to list all the commands.\n\n```\n\u003e h\n┌───────────────────┬─────────────────────────────────────────────────┐\n│ Command           │ Description                                     │\n├───────────────────┼─────────────────────────────────────────────────┤\n│ h                 │ Display this help section                       │\n│ r                 │ Restart                                         │\n│ c                 │ List saved conversations                        │\n│ c [conversation]  │ Open conversation [conversation]                │\n│ cc                │ Create a new conversation                       │\n│ cs [conversation] │ Save the current conversation as [conversation] │\n│ ct [conversation] │ Trash conversation [conversation]               │\n│ cy                │ Copy current conversation to clipboard          │\n│ m                 │ List available modes                            │\n│ m [mode]          │ Switch to mode [mode]                           │\n│ p                 │ List available personae                         │\n│ p [persona]       │ Switch to persona [persona]                     │\n│ vi                │ Switch to voice input                           │\n│ vo                │ Switch on voice output                          │\n│ d                 │ List available vector dbs                       │\n│ d [db]            │ Create or switch to vector db [db]              │\n│ dt [db]           │ Trash vector db [db]                            │\n│ e [/path/to/folder] │ Embed all files in [/path/to/folder] into current db  │\n│ y                 │ Copy last answer to clipboard                   │\n│ t                 │ Get the current temperature                     │\n│ t [temp]          │ Set the temperature to [temp]                   │\n│ mt                │ Get the current max_tokens value                │\n│ mt [max_tokens]   │ Set the max_tokens to [max_tokens]              │\n│ g                 │ List available GPT models                       │\n│ g [model]         │ Set GPT model to [model]                        │\n│ lm                │ List available microphones                      │\n│ cls               │ Clear the screen                                │\n│ q                 │ Quit                                            │\n└───────────────────┴─────────────────────────────────────────────────┘\n```\n\n### Conversations\n\nA conversation is a series of prompts and answers. Conversations are stored as `.neu` text files in the data folder defined in `config.toml`.\n\n`c` : List all saved conversations\n\n`c [conversation]` : Open conversation [conversation]\n\n`cc` : Create a new conversation\n\n`cs [conversation]` : Save the current conversation as [conversation]\n\n`ct [conversation]` : Trash the conversation [conversation]\n\n`cy` : Copy the current conversation to the clipboard\n\n### Modes\n\nModes define specific expected output behaviors. 
Custom modes are added by editing the `[modes]` section in the `config.toml` file.\n\n`m` : List available modes\n\n`m [mode]` : Switch to mode [mode]\n\nHere are some of the built-in modes :\n\n#### Table display\n\n`m table`\n\nDisplays the response in a table. Works best when column headers are defined explicitly in the prompt and `temperature` is set to 0.\n\nExample:\n\n```\n\u003e Five Hugo prize winners by : Name, Book, Year\n```\n\nOutput:\n\n```\n  ┏━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━┓\n  ┃ Name               ┃ Book                                  ┃ Year ┃\n  ┡━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━┩\n  │ Isaac Asimov       │ Foundation’s Edge                     │ 1983 │\n  ├────────────────────┼───────────────────────────────────────┼──────┤\n  │ Orson Scott Card   │ Ender’s Game                          │ 1986 │\n  ├────────────────────┼───────────────────────────────────────┼──────┤\n  │ Ursula K. Le Guin  │ The Dispossessed: An Ambiguous Utopia │ 1975 │\n  ├────────────────────┼───────────────────────────────────────┼──────┤\n  │ Arthur C. Clarke   │ Rendezvous with Rama                  │ 1974 │\n  ├────────────────────┼───────────────────────────────────────┼──────┤\n  │ Robert A. Heinlein │ Double Star                           │ 1956 │\n  └────────────────────┴───────────────────────────────────────┴──────┘\n```\n\n#### Code generator\n\n`m code`\n\nDisplays syntax highlighted code. 
Works best when `temperature` is set to 0.\n\nStart with `#` followed by the name of the language and the prompt.\n\nExample:\n\n```\n\u003e #html simple login form\n```\n\nOutput:\n```html\n  \u003c!DOCTYPE html\u003e\n  \u003chtml\u003e\n    \u003chead\u003e\n      \u003ctitle\u003eLogin Form\u003c/title\u003e\n    \u003c/head\u003e\n    \u003cbody\u003e\n      \u003c!-- Login form starts here --\u003e\n      \u003cform action=\"#\" method=\"post\"\u003e\n        \u003ch2\u003eLogin\u003c/h2\u003e\n        \u003clabel for=\"username\"\u003eUsername:\u003c/label\u003e\u003cbr\u003e\n        \u003cinput type=\"text\" id=\"username\" name=\"username\"\u003e\u003cbr\u003e\u003cbr\u003e\n        \u003clabel for=\"password\"\u003ePassword:\u003c/label\u003e\u003cbr\u003e\n        \u003cinput type=\"password\" id=\"password\" name=\"password\"\u003e\u003cbr\u003e\u003cbr\u003e\n        \u003cinput type=\"submit\" value=\"Submit\"\u003e\n      \u003c/form\u003e\n      \u003c!-- Login form ends here --\u003e\n    \u003c/body\u003e\n  \u003c/html\u003e\n```\n\n#### Translator\n\n`m trans`\n\nTranslates text into another language. Works best when `temperature` is set to 0.\n\nStart with `#` followed by the name of the language to translate into and the word or phrase to translate.\n\nExample:\n```\n\u003e #german What's the carbon footprint of nuclear energy ?\n```\n\nOutput:\n```\n  Wie groß ist der CO2-Fußabdruck von Kernenergie?\n```\n\n#### Character impersonator\n\n`m char`\n\nImpersonates a character.\n\nStart with `#` followed by the name of the character you want to be impersonated and your prompt.\n\nExample:\n\n```\n\u003e #Bob_Marley Write the chorus to a new song.\n```\n\nOutput:\n```\n  \"Rise up and stand tall,\n  Embrace the love that's all,\n  Let your heart blaze and brawl,\n  As we rock to the beat of this call.\"\n```\n\n#### CSV generator\n\n`m csv`\n\nGenerates a CSV table. 
Works best when `temperature` is set to 0.\n\nStart with `#` followed by the separator you want to use and your prompt.\n\nExample:\n```\n\u003e #; Five economics nobel prize winners by name, year, country and school of thought\n```\n\nOutput:\n\n```\n  1; Milton Friedman; 1976; USA; Monetarism;\n  2; Amartya Sen; 1998; India; Welfare economics;\n  3; Joseph Stiglitz; 2001; USA; Information economics;\n  4; Paul Krugman; 2008; USA; New trade theory;\n  5; Esther Duflo; 2019; France; Development economics\n```\n\n#### Image generator\n\n`m img`\n\nGenerate images with `dall-e`.\n\nExample:\n\n```\n\u003e a peaceful lake scenery\n```\n\nOutput:\n\n```\n Image generated and saved to : ./img/a-peaceful-lake-scenery-20240328175639.png\n```\n\n\u003cimg style=\"width: 50%;margin:40px auto;\" alt = \"\" title = \"\" src = \"public/a-peaceful-lake-scenery-20240328175639.png\"/\u003e\n\nImage settings are available in the `config.toml` file :\n\n```toml\n[images]\nmodel = \"dall-e-2\" # either \"dall-e-2\" or \"dall-e-3\"\nsize = \"1024x1024\" # for available sizes see https://platform.openai.com/docs/api-reference/images/create \nquality = \"standard\" # either \"standard\" or \"hd\" (only for \"dall-e-3\")\npath = \"./img/\" # path to save the generated images\nopen = false # open the generated image automatically \nopen_command = \"feh\" # the command to open the image\n```\n\n#### Terminal commands generator\n\n`m term`\n\nGenerates terminal commands. Works best when `temperature` is set to 0.\n\nDescribe what you want to achieve and it will return a corresponding terminal command.\n\nExample:\n```\n\u003e find all files in this directory modified in the last 7 days\n```\nOutput:\n\n```\nfind . 
-type f -mtime -7\n```\nYou can then copy-paste the command into your terminal and run it (use with caution!).\n\n### Personae\n\nPersonae are profiles defined by a specific starting prompt and temperature; they are configured in the `personae.toml` file.\n\n`p` : List available personae\n\n`p [persona]` : Switch to persona [persona]\n\nThe default persona has this starting prompt :\n\n```toml\n[[persona]]\nname = \"default\"\ntemp = 0.5\n[[persona.messages]]\nrole = \"system\"\ncontent = \"You are a helpful assistant.\"\n[[persona.messages]]\nrole = \"user\"\ncontent = \"What is the capital of Mexico?\"\n[[persona.messages]]\nrole = \"assistant\"\ncontent = \"The capital of Mexico is Mexico City\"\n```\n\nTo add new personae, copy-paste the default persona and give it a new name, then edit the system prompt.\n\nThe user and assistant messages are optional, but they help with accuracy. You can add as many user/assistant messages as you like (each increases the token count).\n\nHere are some examples of personae :\n\n```toml\n[[persona]]\nname = \"teacher\"\ntemp = 0.5\n[[persona.messages]]\nrole = \"system\"\ncontent = \"Teach me how # works by asking questions about my level of understanding of necessary concepts. With each response, fill in gaps in my understanding, then recursively ask me more questions to check my understanding.\"\n```\n\n```toml\n[[persona]]\nname = \"handyman\"\ntemp = 0.65\n[[persona.messages]]\nrole = \"system\"\ncontent = \"You are a helpful handyman and a DIY expert. You will teach me to complete simple home improvement and maintenance projects using lists of necessary tools and simple step-by-step instructions.\"\n[[persona.messages]]\nrole = \"user\"\ncontent = \"My lightbulb is broken.\"\n[[persona.messages]]\nrole = \"assistant\"\ncontent = \"I can help you replace your lightbulb. You will need : a ladder, a new lightbulb, and a screwdriver. 1. First, turn off the light switch. For more security you can also turn off the electricity at the circuit breaker. 2. 
Then, climb the ladder and unscrew the lightbulb. 3. Finally, screw in the new lightbulb and turn the light switch back on.\"\n```\n\n### Speech support\n\n#### Voice output\n\nThe voice is defined in `config.toml`; here's a [list of supported voices](https://platform.openai.com/docs/guides/text-to-speech/voice-options).\n\n`vo` : Toggle voice output\n\n#### Voice input\n\nVoice input can be used to transcribe voice to text.\n\n`vi` : Switch to voice input\n\nSaying \"_Disable voice input_\" will switch back to text input mode.\n\nYou can list available microphones with `lm` and set the one you want to use in the `audio` section of the config file.\n\n```toml\n[audio]\ninput_device = 4 # the device for voice input (list devices with \"lm\")\ninput_timeout = 5 # the number of seconds after which listening stops and transcription starts\ninput_limit = 20 # the maximum number of seconds that can be listened to in one go\n```\n\n### Embeddings\n\nEmbeddings allow you to embed documents into the discussion to serve as context for the answers.\n\n`d` : List all available vector dbs\n\n`d [db]` : Create or switch to [db] vector db\n\n`dt [db]` : Trash [db] vector db (will delete all files and folders related to this vector db)\n\n`e [/path/to/files]` : Embed all files in `/path/to/files/` and store them in the current vector db\n\nSo, to chat with documents you can do the following :\n\n- Create a persona with a profile that restricts answers to the context; here's an example:\n```toml\n[[persona]]\nname = \"docs\"\ntemp = 0.2\n[[persona.messages]]\nrole = \"system\"\ncontent = \"Answer the question based only on the following context: \\n\\n {context} \\n\\n---\\n\\n Answer the question based on the above context: \"\n```\n- Switch to that persona with `p docs`\n- Create a vector db with `d mydb`\n- Embed the documents with `e /path/to/files`\n- Ask a question\n\n### Special placeholders\n\nYou can also reference documents directly (without embedding), using the `~{f:` 
`}~` notation.\n\n```\n\u003e Refactor the following code : ~{f:example.py}~\n```\n\nUse the `~{w:` `}~` notation to insert the content of a URL into the prompt.\n\n```\n\u003e Summarize the following article : ~{w:https://www.freethink.com/health/lsd-mindmed-phase-2}~\n```\n\n__Note__: This can greatly increase the number of tokens; use with caution. For large content, use embeddings instead.\n\n### GPT models\n\nYou can switch between different GPT models. The default model is defined in the `config.toml` file.\n\n`g` : List available GPT models\n\n```\n\u003e g\n  GPT Models\n  gpt-3.5-turbo-0125\n  gpt-4-turbo-preview\n  gpt-4-0125-preview\n  gpt-3.5-turbo-1106\n  gpt-4-1106-preview\n  gpt-4-vision-preview\n  gpt-3.5-turbo-instruct-0914\n  gpt-3.5-turbo-instruct\n  gpt-4\n  gpt-4-0613\n  gpt-3.5-turbo-0613\n  gpt-3.5-turbo-16k-0613\n  gpt-3.5-turbo-16k\n  gpt-3.5-turbo-0301\n  gpt-3.5-turbo \u003c\n```\n\n`g [model]` : Set GPT model to [model]\n\n```\n\u003e g gpt-3.5-turbo\n  Model set to gpt-3.5-turbo.
\n\u003e when is your knowledge cutoff\n  My training data includes information up until September 2021.\n  \n\u003e g gpt-4-turbo-preview\n  Model set to gpt-4-turbo-preview.\n\u003e when is your knowledge cutoff\n  My knowledge is up to date until April 2023.\n```\n\n### Other commands\n\n`y` : Copy the last answer to the clipboard\n\n`t [temperature]` : Set the ChatGPT model's [temperature](https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature).\n\n`tp [top_p]` : Set the ChatGPT model's [top_p](https://platform.openai.com/docs/api-reference/completions/create#completions/create-top_p).\n\n`mt [max_tokens]` : Set the ChatGPT model's [max_tokens](https://platform.openai.com/docs/api-reference/completions/create#completions/create-max_tokens).\n\n`cls` : Clear the screen\n\n`r` : Restart the application\n\n`q` : Quit\n\n### Command line arguments\n\nBy default `neuma` starts in interactive mode, but you can also use command line arguments to return an answer right away, which can be useful for output redirection or piping.\n\n```\n\u003e python neuma.py -h\n\nusage: neuma.py [-h] [-i INPUT] [-p PERSONA] [-m MODE] [-t TEMP]\n\nneuma is a minimalistic ChatGPT interface for the command line.\n\noptions:\n  -h, --help                          Show this help message and exit\n  -i INPUT, --input INPUT             Input prompt\n  -p PERSONA, --persona PERSONA       Set persona\n  -m MODE, --mode MODE                Set mode\n  -t TEMP, --temp TEMP                Set temperature\n  -vo, --voice-output                 Enable voice output\n```\n\nExamples :\n\n```shell\n\u003e python neuma.py -t 1.2 -i \"Write a haiku about the moon\"\n  \n  Silver orb casts light,\n  Guiding night journeys below\n  Moon’s tranquil, bright glow.\n```\n\n```shell\n\u003e python 
neuma.py -t 0 -m \"table\" -i \"Five US National parks by : name, size, climate\"\n\n  ┏━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━┓\n  ┃  ┃  National Park     ┃  Size (acres)  ┃  Climate                  ┃  ┃\n  ┡━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━┩\n  │  │  Yellowstone       │  2,219,791     │  Continental              │  │\n  ├──┼────────────────────┼────────────────┼───────────────────────────┼──┤\n  │  │  Yosemite          │  761,747       │  Mediterranean            │  │\n  ├──┼────────────────────┼────────────────┼───────────────────────────┼──┤\n  │  │  Grand Canyon      │  1,217,262     │  Arid                     │  │\n  ├──┼────────────────────┼────────────────┼───────────────────────────┼──┤\n  │  │  Glacier           │  1,013,125     │  Continental              │  │\n  ├──┼────────────────────┼────────────────┼───────────────────────────┼──┤\n  │  │  Rocky Mountain    │  265,807       │  Alpine                   │  │\n  └──┴────────────────────┴────────────────┴───────────────────────────┴──┘\n```\n\n```shell\n\u003e python neuma.py -m img -i \"Escher's lost masterpiece\"\nImage generated and saved to : ./img/escher-s-lost-masterpiece-20240411203242.png\n```\n\n```shell\npython neuma.py -m term -i \"join all PDFs in this directory ordered by name into presentation.pdf\"\npdfunite $(ls -1v *.pdf) presentation.pdf\n```\n\n## Color theme\n\nThe colors of each type of text (prompt, answer, info msg, etc.) 
are defined in the `config.toml` file (default is [gruvbox](https://github.com/morhetz/gruvbox) dark).\n\n```toml\n[theme]\nsection = \"#d3869b\" # pink\ninfo = \"#8ec07c\"    # aqua\nsuccess = \"#b8bb26\" # green\nwarning = \"#fabd2f\" # yellow\nerror = \"#fb4934\"   # red\nprompt = \"#928374\"  # grey\nanswer = \"#83a598\"  # blue\n```\n\n## Troubleshooting\n\nIf you get an `ImportError: GLIBCXX_3.4.30 not found` error during install, run the following command:\n\n```shell\nconda install -c conda-forge gcc=12.1.0\n```\n\n## What's in a name?\n\n`neuma` is derived from the Greek `πνεῦμα`, meaning _breath_ or _spirit_.","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmwmdev%2Fneuma","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmwmdev%2Fneuma","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmwmdev%2Fneuma/lists"}