Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/Mobile-Artificial-Intelligence/maid_llm
maid_llm is a Dart implementation of llama.cpp bindings, used by Maid (Mobile Artificial Intelligence Distribution)
facebook flutter-ai gemma ggml gguf llama llama2 llamacpp llm llm-inference local-ai meta mistral mixtral mobile-ai
Last synced: 01 Jul 2024
![](https://github.com/Mobile-Artificial-Intelligence.png)
https://github.com/withcatai/node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level
ai bindings catai cmake cmake-js cuda gguf grammar json-schema llama llama-cpp llm metal nodejs prebuilt-binaries self-hosted
Last synced: 11 Jun 2024
![](https://github.com/withcatai.png)
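The "JSON schema on the generation level" feature above is grammar-constrained decoding: at each sampling step, tokens that would break the target format are masked out before one is picked. Below is a minimal Python sketch of that idea only; the toy vocabulary, preference scores, and target strings are invented for illustration and are not node-llama-cpp's actual API, which works on real llama.cpp logits and a grammar compiled from the schema.

```python
import json

# Toy stand-ins for a model's vocabulary and logits. Higher PREFS score
# means the "model" prefers that token.
VOCAB = ['{', '}', '"answer"', ':', ' ', '4', '7', 'hello']
PREFS = {'hello': 8, '7': 7, '4': 6, ' ': 5, ':': 4, '"answer"': 3, '}': 2, '{': 1}

# Strings the toy "schema" admits; the constraint is prefix-validity.
TARGETS = ['{"answer": 4}', '{"answer": 7}']

def valid_prefix(s):
    return any(t.startswith(s) for t in TARGETS)

def generate():
    out = ''
    while out not in TARGETS:
        ranked = sorted(VOCAB, key=PREFS.get, reverse=True)       # "model" preference
        allowed = [t for t in ranked if valid_prefix(out + t)]    # grammar mask
        if not allowed:
            break
        out += allowed[0]  # greedy pick among tokens the grammar allows
    return out

print(generate())  # the unconstrained favourite 'hello' never survives the mask
```

Note that the model's top choice (`'hello'`) is simply never eligible; constrained decoding guarantees valid output without retries or post-hoc parsing.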
https://github.com/BrutalCoding/shady.ai
Making offline AI models accessible to all types of edge devices.
android cross-platform dart fastlane flutter gguf ios linux linux-desktop llama-cpp llama-dart llvm macos material-design rwkv serverpod shady-ai web whisper-cpp windows
Last synced: 04 May 2024
![](https://github.com/BrutalCoding.png)
https://github.com/antirez/gguf-tools
GGUF implementation in C, provided as a library and a CLI tool
Last synced: 04 May 2024
![](https://github.com/antirez.png)
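A GGUF file like the ones gguf-tools manipulates begins with a fixed header: the magic bytes `GGUF`, a uint32 version, then two uint64 counts (tensors, then metadata key/value pairs), all little-endian. A minimal Python sketch of writing and reading just that header (no tensor data or metadata values are handled):

```python
import struct

GGUF_MAGIC = b"GGUF"

def write_header(version=3, tensor_count=0, metadata_kv_count=0):
    # magic, uint32 version, uint64 tensor count, uint64 metadata
    # key/value count -- all little-endian, per the GGUF spec.
    return GGUF_MAGIC + struct.pack("<IQQ", version, tensor_count, metadata_kv_count)

def read_header(buf):
    if buf[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", buf, 4)
    return {"version": version, "tensor_count": n_tensors, "metadata_kv_count": n_kv}

hdr = read_header(write_header(version=3, tensor_count=2, metadata_kv_count=5))
```

The header is 24 bytes (4 + 4 + 8 + 8); everything after it is metadata key/value pairs followed by tensor info and tensor data.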
https://github.com/latestissue/AltaeraAI
A set of Bash scripts that automate deployment of GGML/GGUF models (default: RWKV) with KoboldCpp on Android via Termux
ai android ggml gguf koboldai koboldcpp llama-2 llama2 llamacpp mistral mistral-7b phi phi-2 phi2 rwkv rwkv4 rwkvcpp termux vicuna
Last synced: 04 May 2024
![](https://github.com/latestissue.png)
https://github.com/janhq/cortex
Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM). Powers 👋 Jan
accelerated ai cuda gguf inference-engine llama llama2 llamacpp llm llms openai-api stable-diffusion tensorrt-llm
Last synced: 30 Apr 2024
![](https://github.com/janhq.png)
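"Drop-in alternative to the OpenAI stack" means the server accepts the standard OpenAI-style `/v1/chat/completions` request shape, so existing clients only need a new base URL. A minimal Python sketch of building such a payload (the model name `"llama2"` is a placeholder assumption, and no request is actually sent here):

```python
import json

def chat_request(model, user_message, stream=False):
    # Standard OpenAI-style chat completion body; OpenAI-compatible
    # local servers accept this same shape. The model name is whatever
    # the local server has loaded.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": stream,
    }

payload = chat_request("llama2", "Summarize GGUF in one sentence.")
body = json.dumps(payload)  # ready to POST to the local endpoint
```

Because the wire format matches, official and third-party OpenAI client libraries typically work against such servers by overriding only the base URL (and, if required, a dummy API key).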
https://github.com/withcatai/catai
UI for 🦙 models. Run an AI assistant locally ✨
ai ai-assistant catai chatbot chatgpt chatui dalai ggmlv3 gguf llama-cpp llm local-llm localai node-llama-cpp openai vicuna vicuna-installation-guide wizardlm
Last synced: 09 Apr 2024
![](https://github.com/withcatai.png)
https://github.com/Mobile-Artificial-Intelligence/maid
Maid is a cross-platform Flutter app for interfacing with GGUF / llama.cpp models locally, and with Ollama and OpenAI models remotely.
android android-ai chatbot chatgpt facebook ffigen flutter gguf large-language-models llama llama-cpp llama2 llamacpp local-ai mistral mobile-ai mobile-artificial-intelligence ollama openai openorca
Last synced: 31 Mar 2024
![](https://github.com/Mobile-Artificial-Intelligence.png)
https://github.com/albertstarfield/alpaca-electron-zephyrine
Project Zephyrine: a plug-and-play local graphical user interface for LLMs with GPU acceleration
chatgpt cuda electron falcon gemma ggml gguf gpt-3 gui llama llama-2 llm metal opencl
Last synced: 16 Mar 2024
![](https://github.com/albertstarfield.png)