Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.

https://github.com/janhq/cortex.cpp
Run and customize Local LLMs.
gguf llamacpp onnx onnxruntime tensorrt-llm
- Host: GitHub
- URL: https://github.com/janhq/cortex.cpp
- Owner: janhq
- License: apache-2.0
- Created: 2023-09-11T03:56:09.000Z (about 1 year ago)
- Default Branch: dev
- Last Pushed: 2024-10-29T09:33:52.000Z (11 days ago)
- Last Synced: 2024-10-29T09:52:07.505Z (11 days ago)
- Topics: gguf, llamacpp, onnx, onnxruntime, tensorrt-llm
- Language: C++
- Homepage: https://cortex.so
- Size: 133 MB
- Stars: 1,975
- Watchers: 16
- Forks: 111
- Open Issues: 114
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
Awesome Lists containing this project
- awesome-local-ai - Cortex - Multi-engine inference engine embeddable in your apps; uses llama.cpp and more. (Inference Engine)
- awesome-local-llms - cortex.cpp
README
# Cortex.cpp
Documentation - API Reference - Changelog - Bug reports - Discord

> ⚠️ **Cortex.cpp is currently in active development. This README outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.**
## Overview
Cortex.cpp is a Local AI engine that is used to run and customize LLMs. Cortex can be deployed as a standalone server, or integrated into apps like [Jan.ai](https://jan.ai/).
Cortex.cpp is multi-engine: it uses `llama.cpp` as the default engine but also supports the following:
- [`llamacpp`](https://github.com/janhq/cortex.llamacpp)
- [`onnx`](https://github.com/janhq/cortex.onnx)
- [`tensorrt-llm`](https://github.com/janhq/cortex.tensorrt-llm)
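Engines can be listed and added from the CLI; a minimal sketch, assuming a `cortex engines` subcommand and illustrative engine names (verify the exact syntax in the CLI documentation):

```bash
# List the engines Cortex currently knows about (subcommand assumed)
cortex engines list

# Install an additional engine, e.g. the ONNX engine (engine name assumed)
cortex engines install onnx
```

## Installation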
The Local Installer packages all required dependencies, so you don't need an internet connection during installation.
Alternatively, Cortex is available with a [Network Installer](#network-installer), which downloads the necessary dependencies from the internet during installation.
### Stable

### Windows:
Download and run `cortex-local-installer.exe` (see the installer tables under [Advanced Installation](#advanced-installation)).

### MacOS:
Download and run `cortex-local-installer.pkg`.

### Linux:
Download the installer and run the following command in the terminal:
```bash
sudo apt install ./cortex-local-installer.deb
# or
sudo apt install ./cortex-network-installer.deb
```

The binary will be installed in the `/usr/bin/` directory.
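To confirm the install location once the package manager finishes:

```bash
# Verify the binary landed where the package puts it
which cortex   # expected: /usr/bin/cortex
```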
## Usage
After installation, you can run Cortex.cpp from the command line by typing `cortex --help`. For the Beta preview, run `cortex-beta --help`.
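As a quick start, you can launch a model straight from the command line; a minimal session using a model tag from the library table below:

```bash
# Show all available commands
cortex --help

# Download (if needed) and chat with a model from the Cortex Hub
cortex run llama3.1:gguf
```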
## Built-in Model Library
Cortex.cpp supports various models available on the [Cortex Hub](https://huggingface.co/cortexso). Once downloaded, all model source files are stored in `~/cortexcpp/models`.
Example models:
| Model          | llama.cpp<br>`:gguf` | TensorRT<br>`:tensorrt` | ONNXRuntime<br>`:onnx` | Command                       |
| -------------- | -------------------- | ----------------------- | ---------------------- | ----------------------------- |
| llama3.1       | ✅                   |                         | ✅                     | cortex run llama3.1:gguf      |
| llama3         | ✅                   | ✅                      | ✅                     | cortex run llama3             |
| mistral        | ✅                   | ✅                      | ✅                     | cortex run mistral            |
| qwen2          | ✅                   |                         |                        | cortex run qwen2:7b-gguf      |
| codestral      | ✅                   |                         |                        | cortex run codestral:22b-gguf |
| command-r      | ✅                   |                         |                        | cortex run command-r:35b-gguf |
| gemma          | ✅                   |                         | ✅                     | cortex run gemma              |
| mixtral        | ✅                   |                         |                        | cortex run mixtral:7x8b-gguf  |
| openhermes-2.5 | ✅                   | ✅                      | ✅                     | cortex run openhermes-2.5     |
| phi3 (medium)  | ✅                   |                         | ✅                     | cortex run phi3:medium        |
| phi3 (mini)    | ✅                   |                         | ✅                     | cortex run phi3:mini          |
| tinyllama      | ✅                   |                         |                        | cortex run tinyllama:1b-gguf  |

> **Note**:
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.

## Cortex.cpp CLI Commands
For complete details on CLI commands, please refer to our [CLI documentation](https://cortex.so/docs/cli).
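For orientation, a couple of commonly needed invocations; the subcommands below are assumptions modeled on typical CLI layouts and should be checked against the CLI documentation:

```bash
# List models pulled to the local machine (subcommand assumed)
cortex models list

# Show which models are currently running (subcommand assumed)
cortex ps
```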
## REST API
Cortex.cpp includes a REST API accessible at `localhost:39281`. For a complete list of endpoints and their usage, visit our [API documentation](https://cortex.so/api-reference).
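As an illustration, here is a chat request against the local server; this sketch assumes the OpenAI-compatible `/v1/chat/completions` route described in the API documentation and a model that has already been downloaded:

```bash
# Send a chat completion request to the local Cortex server
# (route and payload shape assumed from the API reference)
curl http://localhost:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.1:gguf",
        "messages": [{"role": "user", "content": "Hello, Cortex!"}]
      }'
```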
## Advanced Installation
### Local Installer: Beta & Nightly Versions
Beta is an early preview for new versions of Cortex. It is for users who want to try new features early - we appreciate your feedback.
Nightly is our development version of Cortex. It is released every night and may contain bugs and experimental features.
| Version Type                 | Windows                    | MacOS                      | Linux                      |
| ---------------------------- | -------------------------- | -------------------------- | -------------------------- |
| Stable (Recommended)         | cortex-local-installer.exe | cortex-local-installer.pkg | cortex-local-installer.deb |
| Beta (Preview)               | cortex-local-installer.exe | cortex-local-installer.pkg | cortex-local-installer.deb |
| Nightly Build (Experimental) | cortex-local-installer.exe | cortex-local-installer.pkg | cortex-local-installer.deb |
### Network Installer
Cortex.cpp is also available with a Network Installer, which is smaller but requires an internet connection during installation to download the necessary dependencies.
| Version Type                 | Windows                      | MacOS                        | Linux                        |
| ---------------------------- | ---------------------------- | ---------------------------- | ---------------------------- |
| Stable (Recommended)         | cortex-network-installer.exe | cortex-network-installer.pkg | cortex-network-installer.deb |
| Beta (Preview)               | cortex-network-installer.exe | cortex-network-installer.pkg | cortex-network-installer.deb |
| Nightly Build (Experimental) | cortex-network-installer.exe | cortex-network-installer.pkg | cortex-network-installer.deb |
### Build from Source
#### Windows
1. Clone the Cortex.cpp repository [here](https://github.com/janhq/cortex.cpp).
2. Navigate to the `engine > vcpkg` folder.
3. Configure vcpkg:
```bash
cd vcpkg
./bootstrap-vcpkg.bat
vcpkg install
```
4. Build Cortex.cpp inside the `build` folder:
```bash
mkdir build
cd build
cmake .. -DBUILD_SHARED_LIBS=OFF -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static
```
5. Use Visual Studio with the C++ development kit to build the project using the files generated in the `build` folder.
6. Verify that Cortex.cpp is installed correctly by getting the help information:
```sh
# Get the help information
cortex -h
```

#### MacOS
1. Clone the Cortex.cpp repository [here](https://github.com/janhq/cortex.cpp).
2. Navigate to the `engine > vcpkg` folder.
3. Configure vcpkg:
```bash
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```
4. Build Cortex.cpp inside the `build` folder:
```bash
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```
5. Once `make` completes, the build artifacts, including the `cortex` binary, are generated in the `build` folder.
6. Verify that Cortex.cpp is installed correctly by getting the help information:
```sh
# Get the help information
cortex -h
```

#### Linux
1. Clone the Cortex.cpp repository [here](https://github.com/janhq/cortex.cpp).
2. Navigate to the `engine > vcpkg` folder.
3. Configure vcpkg:
```bash
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```
4. Build Cortex.cpp inside the `build` folder:
```bash
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```
5. Once `make` completes, the build artifacts, including the `cortex` binary, are generated in the `build` folder.
6. Verify that Cortex.cpp is installed correctly by getting the help information:
```sh
# Get help
cortex
```

## Uninstallation
### Windows
1. Open the Windows Control Panel.
2. Navigate to `Add or Remove Programs`.
3. Search for `cortexcpp` and double-click to uninstall. (For beta and nightly builds, search for `cortexcpp-beta` and `cortexcpp-nightly` respectively.)

### MacOS
Run the uninstaller script:
```bash
sudo sh cortex-uninstall.sh
```

On MacOS, an uninstaller script ships with the binary and is added to the `/usr/local/bin/` directory. The script is named `cortex-uninstall.sh` for stable builds, `cortex-beta-uninstall.sh` for beta builds, and `cortex-nightly-uninstall.sh` for nightly builds.
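For example, to remove a beta or nightly build (script names taken from the paragraph above):

```bash
# Uninstall a beta build
sudo sh cortex-beta-uninstall.sh

# Uninstall a nightly build
sudo sh cortex-nightly-uninstall.sh
```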
### Linux
```bash
# For stable builds
sudo apt remove cortexcpp
```

## Contact Support
- For support, please file a [GitHub ticket](https://github.com/janhq/cortex.cpp/issues/new/choose).
- For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH).
- For long-form inquiries, please email [[email protected]](mailto:[email protected]).