Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/alexrozanski/llamachat
Chat with your favourite LLaMA models in a native macOS app
- Host: GitHub
- URL: https://github.com/alexrozanski/llamachat
- Owner: alexrozanski
- License: MIT
- Created: 2023-03-26T00:10:36.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-06-09T00:13:34.000Z (over 1 year ago)
- Last Synced: 2024-10-10T18:04:11.833Z (about 1 month ago)
- Topics: ai, llama, llamacpp, machine-learning, macos, swift, swiftui
- Language: Swift
- Homepage: https://llamachat.app
- Size: 14.6 MB
- Stars: 1,453
- Watchers: 15
- Forks: 56
- Open Issues: 24
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
README
![LlamaChat banner](./Resources/banner-a5248619.png)
Chat with your favourite LLaMA models, right on your Mac
**LlamaChat** is a macOS app that allows you to chat with [LLaMA](http://github.com/facebookresearch/llama), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [GPT4All](https://github.com/nomic-ai/gpt4all) models all running locally on your Mac.
## 🚀 Getting Started
LlamaChat requires macOS 13 Ventura, and either an Intel or Apple Silicon processor.
### Direct Download
Download a `.dmg` containing the latest version [👉 here 👈](https://llamachat.app/api/download).
### Building from Source
```bash
git clone https://github.com/alexrozanski/LlamaChat.git
cd LlamaChat
open LlamaChat.xcodeproj
```

**NOTE:** LlamaChat includes [Sparkle](https://github.com/sparkle-project/Sparkle) for autoupdates, which will fail to load if LlamaChat is not signed. Ensure that you use a valid signing certificate when building and running LlamaChat.
**NOTE:** Model inference runs very slowly in Debug builds, so if you are building from source, make sure that the `Build Configuration` in `LlamaChat > Edit Scheme... > Run` is set to `Release`.
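For unattended builds, the same steps can be driven with `xcodebuild` instead of opening the project in Xcode. A sketch, assuming the shared scheme is named `LlamaChat` and a valid signing identity is configured on your machine:

```shell
# Clone the repository and build a Release configuration from the command line.
git clone https://github.com/alexrozanski/LlamaChat.git
cd LlamaChat

# Release avoids the slow Debug-build inference noted above; code signing
# must still succeed for Sparkle to load at runtime.
xcodebuild -project LlamaChat.xcodeproj \
           -scheme LlamaChat \
           -configuration Release \
           build
```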
## ✨ Features
- **Supported Models:** LlamaChat supports LLaMA, Alpaca and GPT4All models out of the box. Support for other models including [Vicuna](https://vicuna.lmsys.org/) and [Koala](https://bair.berkeley.edu/blog/2023/04/03/koala/) is coming soon. We are also looking for Chinese and French speakers to add support for [Chinese LLaMA/Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca) and [Vigogne](https://github.com/bofenghuang/vigogne).
- **Flexible Model Formats:** LlamaChat is built on top of [llama.cpp](https://github.com/ggerganov/llama.cpp) and [llama.swift](https://github.com/alexrozanski/llama.swift). The app supports adding LLaMA models in either their raw `.pth` PyTorch checkpoint form or the `.ggml` format.
- **Model Conversion:** If raw PyTorch checkpoints are added, these can be converted to `.ggml` files compatible with LlamaChat and llama.cpp from within the app.
- **Chat History:** Chat history is persisted within the app. Both chat history and model context can be cleared at any time.
- **Funky Avatars:** LlamaChat ships with [7 funky avatars](https://github.com/alexrozanski/LlamaChat/tree/main/LlamaChat/Assets.xcassets/avatars) that can be used with your chat sources.
- **Advanced Source Naming:** LlamaChat uses Special Magic™ to generate playful names for your chat sources.
- **Context Debugging:** For the keen ML enthusiasts, the current model context can be viewed for a chat in the info popover.

## 🔮 Models
**NOTE:** LlamaChat doesn't ship with any model files; you must obtain these from their respective sources in accordance with the applicable terms and conditions.
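Before importing `.pth` models, it can help to sanity-check that a directory matches the layout described in the bullets below. A minimal shell sketch; the `check_llama_dir` helper is hypothetical and not part of LlamaChat:

```shell
# Sanity-check a LLaMA model directory before importing it into LlamaChat.
# Hypothetical helper -- not part of LlamaChat itself.
check_llama_dir() {
  dir="$1"                      # e.g. path/to/LLaMA/13B
  parent=$(dirname "$dir")
  ok=0
  # The shared tokenizer lives in the parent directory.
  [ -f "$parent/tokenizer.model" ] || { echo "missing: $parent/tokenizer.model"; ok=1; }
  # Each size directory needs params.json plus at least one checkpoint shard.
  [ -f "$dir/params.json" ] || { echo "missing: $dir/params.json"; ok=1; }
  ls "$dir"/consolidated.*.pth >/dev/null 2>&1 || { echo "missing: $dir/consolidated.NN.pth"; ok=1; }
  return $ok
}
```

Run it as `check_llama_dir path/to/LLaMA/13B`; a zero exit status means the directory looks importable.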
- **Model formats:** LlamaChat allows you to use the LLaMA family of models in either their raw PyTorch checkpoint form (`.pth`) or pre-converted `.ggml` files (the format used by [llama.cpp](https://github.com/ggerganov/llama.cpp), which powers LlamaChat).
- **Using LLaMA models:** When importing LLaMA models in the `.pth` format:
- You should select the appropriate parameter size directory (e.g. `7B`, `13B` etc) in the conversion flow, which includes the `consolidated.NN.pth` and `params.json` files.
- As per the LLaMA model release, the parent directory should contain `tokenizer.model`. E.g. to use the LLaMA-13B model, your model directory should look something like the below, and you should select the `13B` directory:

```bash
.
│ ...
├── 13B
│   ├── checklist.chk.txt
│   ├── consolidated.00.pth
│   ├── consolidated.01.pth
│   └── params.json
│ ...
└── tokenizer.model
```

- **Troubleshooting:** If using `.ggml` files, make sure these are up-to-date. If you run into problems, you may need to use the conversion scripts from [llama.cpp](https://github.com/ggerganov/llama.cpp):
- For the GPT4All model, you may need to use [convert-gpt4all-to-ggml.py](https://github.com/ggerganov/llama.cpp/blob/master/convert-gpt4all-to-ggml.py)
- For the Alpaca model, you may need to use [convert-unversioned-ggml-to-ggml.py](https://github.com/ggerganov/llama.cpp/blob/master/convert-unversioned-ggml-to-ggml.py)
- You may also need to use [migrate-ggml-2023-03-30-pr613.py](https://github.com/ggerganov/llama.cpp/blob/master/migrate-ggml-2023-03-30-pr613.py). For more information check out the [llama.cpp](https://github.com/ggerganov/llama.cpp) repo.

## 👩‍💻 Contributing
Pull Requests and Issues are welcome and much appreciated. Please make sure to adhere to the [Code of Conduct](CODE_OF_CONDUCT.md) at all times.
LlamaChat is fully built using Swift and SwiftUI, and makes use of [llama.swift](https://github.com/alexrozanski/llama.swift) under the hood to run inference and perform model operations.
The project is mostly built using MVVM and makes heavy use of Combine and Swift Concurrency.
## ⚖️ License
LlamaChat is licensed under the [MIT license](LICENSE).