{"id":21058264,"url":"https://github.com/kuvaus/gptj-chat","last_synced_at":"2025-04-30T09:49:55.912Z","repository":{"id":154434394,"uuid":"631016312","full_name":"kuvaus/gptj-chat","owner":"kuvaus","description":"Simple chat program for GPT-J models","archived":false,"fork":false,"pushed_at":"2023-04-26T15:22:41.000Z","size":30,"stargazers_count":7,"open_issues_count":0,"forks_count":3,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-01-20T19:38:45.465Z","etag":null,"topics":["ai","cpp","gpt","gpt4all","gptj"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/kuvaus.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-04-21T17:50:53.000Z","updated_at":"2023-12-09T16:15:08.000Z","dependencies_parsed_at":null,"dependency_job_id":"b8763494-fd2e-4003-ae37-22bd407d59c0","html_url":"https://github.com/kuvaus/gptj-chat","commit_stats":null,"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kuvaus%2Fgptj-chat","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kuvaus%2Fgptj-chat/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kuvaus%2Fgptj-chat/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kuvaus%2Fgptj-chat/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/kuvaus","download_url":"https://codeload.github.com/kuvaus/gptj-chat/tar.gz/refs/heads/mai
n","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243502919,"owners_count":20301146,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","cpp","gpt","gpt4all","gptj"],"created_at":"2024-11-19T17:07:19.739Z","updated_at":"2025-03-14T00:28:32.135Z","avatar_url":"https://github.com/kuvaus.png","language":"C++","readme":"[![CMake](https://github.com/kuvaus/gptj-chat/actions/workflows/cmake.yml/badge.svg)](https://github.com/kuvaus/gptj-chat/actions/workflows/cmake.yml)\n# GPTJ-Chat\nA simple command-line chat program for [GPT-J](https://en.wikipedia.org/wiki/GPT-J) models, written in C++. Based on [ggml](https://github.com/ggerganov/ggml/) and [gptj.cpp](https://github.com/marella/gptj.cpp/).\n\n\u003cimg alt=\"GPTJ-Chat demo\" src=\"https://user-images.githubusercontent.com/22169537/234323778-64365dc9-8bd9-4a48-b7de-ec0280a5fb4e.gif\" width=\"600\" /\u003e\n\n# Table of contents\n\u003c!-- TOC --\u003e\n* [GPT-J model](#gpt-j-model)\n* [Installation](#installation)\n* [Usage](#usage)\n* [Detailed command list](#detailed-command-list)\n* [License](#license)\n\u003c!-- TOC --\u003e\n\n## GPT-J model\nYou need to download a GPT-J model first. Here are direct links to the models:\n\n\u003e- The default version is **v1.0**: [ggml-gpt4all-j.bin](https://gpt4all.io/models/ggml-gpt4all-j.bin)\n\u003e- At the time of writing, the newest is **1.3-groovy**: [ggml-gpt4all-j-v1.3-groovy.bin](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin)\n\nThey're around 3.8 GB each. 
The chat program loads the model into RAM at runtime, so you need enough free memory to run it. You can get more details on GPT-J models from [gpt4all.io](https://gpt4all.io/) or the [nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all) GitHub repository.\n\n## Installation\n### Download\n\n```sh\ngit clone --recurse-submodules https://github.com/kuvaus/gptj-chat\ncd gptj-chat\n```\n\n### Build\n\n```sh\nmkdir build\ncd build\ncmake ..\ncmake --build . --parallel\n```\n\n## Usage\n\nAfter compiling, the binary is located at:\n\n```sh\nbuild/bin/chat\n```\nBut you're free to move it anywhere. A simple command to get started with 4 threads:\n```sh\n./chat -m \"/path/to/modelfile/ggml-gpt4all-j.bin\" -t 4\n```\n\nHappy chatting!\n\n## Detailed command list\nYou can view the help and the full parameter list with `./chat -h`:\n\n```sh\nusage: ./bin/chat [options]\n\nA simple chat program for GPT-J based models.\nYou can set specific initial prompt with the -p flag.\nRuns default in interactive and continuous mode.\nType 'quit', 'exit' or, 'Ctrl+C' to quit.\n\noptions:\n  -h, --help            show this help message and exit\n  --run-once            disable continuous mode\n  --no-interactive      disable interactive mode altogether (uses given prompt only)\n  -s SEED, --seed SEED  RNG seed (default: -1)\n  -t N, --threads N     number of threads to use during computation (default: 4)\n  -p PROMPT, --prompt PROMPT\n                        prompt to start generation with (default: empty)\n  --random-prompt       start with a randomized prompt.\n  -n N, --n_predict N   number of tokens to predict (default: 200)\n  --top_k N             top-k sampling (default: 40)\n  --top_p N             top-p sampling (default: 0.9)\n  --temp N              temperature (default: 0.9)\n  -b N, --batch_size N  batch size for prompt processing (default: 8)\n  -r N, --remember N    number of chars to remember from start of previous answer (default: 200)\n  -j,   --load_json FNAME\n                        load 
options instead from json at FNAME (default: empty/no)\n  -m FNAME, --model FNAME\n                        model path (current: models/ggml-gpt4all-j.bin)\n```\n\nYou can also fetch parameters from a JSON file with the `--load_json \"/path/to/file.json\"` flag. The JSON file has to be in the following format:\n\n```json\n{\"top_p\": 0.9, \"top_k\": 40, \"temp\": 0.9, \"n_batch\": 8}\n```\nThis is useful when you want to store different temperature and sampling settings.\n\n## License\n\nThis project is licensed under the MIT [License](https://github.com/kuvaus/gptj-chat/blob/main/LICENSE).\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fkuvaus%2Fgptj-chat","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fkuvaus%2Fgptj-chat","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fkuvaus%2Fgptj-chat/lists"}