Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/wambugu71/offlinegpt-
Local GPT using llama.cpp models, with a chat interface
- Host: GitHub
- URL: https://github.com/wambugu71/offlinegpt-
- Owner: wambugu71
- License: mit
- Created: 2024-06-10T19:07:11.000Z (7 months ago)
- Default Branch: main
- Last Pushed: 2024-06-10T20:08:19.000Z (7 months ago)
- Last Synced: 2024-12-22T06:40:11.243Z (12 days ago)
- Topics: llama-cpp-python, llamacpp, openai, python, python3, streamlit
- Language: Python
- Homepage:
- Size: 11.7 KB
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# LocalGPT
A Streamlit app for chatting with GPT models locally via llama.cpp (through the `llama_cpp` Python bindings).
Download ready-made GGUF weights, or convert `safetensors` language models from Hugging Face to GGUF format.
Tested with the [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) model on an 8th-gen Intel i5 CPU, producing about 12 tokens/s.
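For example, ready-made GGUF weights can be fetched with the `huggingface_hub` package; a minimal sketch using the Phi-3 GGUF repository linked above (the exact quantization filename is an assumption and may differ):
```python
from huggingface_hub import hf_hub_download

# Download one quantization of the Phi-3 GGUF weights; the filename
# below is an assumption -- check the repo for the variant you want.
model_path = hf_hub_download(
    repo_id="microsoft/Phi-3-mini-4k-instruct-gguf",
    filename="Phi-3-mini-4k-instruct-q4.gguf",
)
print(model_path)  # pass this path to llama_cpp.server via --model
```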
## Table of Contents
- [Overview](#overview)
- [Installation](#installation)
- [Usage](#usage)
- [Customization](#customization)
- [Contributing](#contributing)
- [License](#license)

## Overview
LocalGPT is a Streamlit app that lets you interact with llama.cpp GPT models locally, with no internet connection or remote server required. It provides a user-friendly interface for streaming text generation and for exploring a model's capabilities.

## Installation
To install the LocalGPT Streamlit app, follow these steps:
1. Clone the repository:
```bash
git clone https://github.com/wambugu71/OfflineGPT-
```
2. Navigate to the project directory:
```bash
cd OfflineGPT-
```
3. Install the required packages:
```bash
pip install -r requirements.txt
```
4. (Optional) If you plan to use GPU acceleration (NVIDIA, Intel, or AMD), install the GPU-specific build of `llama-cpp-python`; for example, CUDA builds are typically installed with `CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python`.
- See [llama_cpp_python installation GPU](https://llama-cpp-python.readthedocs.io/en/latest/) for details.
5. Run your model as a server from the terminal:
```bash
python -m llama_cpp.server --model <path-to-your-model>.gguf
```
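The server exposes an OpenAI-compatible HTTP API, by default at `http://localhost:8000`. As a quick sanity check before launching the UI, you can query it directly; a minimal sketch using `requests`, assuming the default host and port:
```python
import requests

# Send a chat completion request to the local llama_cpp server
# (OpenAI-compatible endpoint; assumes the default host and port).
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Hello! Who are you?"}],
        "max_tokens": 64,
    },
    timeout=120,
)
print(response.json()["choices"][0]["message"]["content"])
```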
## Usage
To run the LocalGPT Streamlit app, execute the following commands in the project directory:
```bash
cd src
streamlit run app.py
```
This will launch the app in your default web browser. You can then interact with the app (after starting the model server) by entering input text and generating responses.

## Customization
LocalGPT can be customized to suit your needs: modify the `app.py` file to adjust the generation settings or add functionality. Refer to the [Streamlit documentation](https://docs.streamlit.io/) for more information on customizing Streamlit apps.
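As an illustration, here is a hypothetical tweak to `app.py` that exposes generation settings in the Streamlit sidebar (the widget names and default values are illustrative, not the app's actual code):
```python
import streamlit as st

# Illustrative sidebar controls for generation settings; wire these
# values into the request that app.py sends to the llama_cpp server.
temperature = st.sidebar.slider("Temperature", min_value=0.0, max_value=2.0, value=0.7, step=0.05)
max_tokens = st.sidebar.number_input("Max tokens", min_value=16, max_value=4096, value=256)
```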
## Contributing
Contributions are welcome! If you have suggestions, improvements, or bug fixes, feel free to open a pull request or an issue.
## License
LocalGPT is licensed under the MIT License. See the [LICENSE](LICENSE) file for more information.