Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/wendellast/gui
👽 GUI is a personal AI developed in Python3. It is a great tool that can be used to improve communication and understanding.
- Host: GitHub
- URL: https://github.com/wendellast/gui
- Owner: wendellast
- License: apache-2.0
- Created: 2024-01-01T02:56:15.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-11-12T20:56:51.000Z (2 months ago)
- Last Synced: 2024-11-12T21:35:40.301Z (2 months ago)
- Topics: assistive-technology, chatbot, gpt, ia, llm, personal-assistant, python3, streamlit, textual
- Language: Python
- Homepage: https://gui-ia.streamlit.app/
- Size: 219 KB
- Stars: 9
- Watchers: 3
- Forks: 3
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# GUI IA
Welcome! GUI is a virtual assistant designed to answer your questions in a friendly, fun, and interactive way. Created to be your digital companion, it provides engaging responses to make your experience more enjoyable.
- **Model Used**: By default, we use the **meta-llama/Llama-3.2-3B-Instruct** model, but you can easily change it to any other model of your choice.
## Recommended
- Python 3.11.0 or higher
- A Hugging Face account to obtain your API token

## Installation
1. **Clone the repository**:
```bash
git clone https://github.com/wendellast/Gui.git
cd Gui
```

2. **Install the dependencies**:
```bash
pip install -r requirements.txt
```

3. **Configure the `.env` file**:
- Rename the `.env-example` file to `.env`:
```bash
mv .env-example .env
```
- Edit the `.env` file and add your Hugging Face token:
```
HUGGINGFACEHUB_API_TOKEN=your_token_here
```
> **Note**: You can get your Hugging Face API token at [Hugging Face Tokens](https://huggingface.co/settings/tokens).

> **Note**: If you're using the graphical interface with `python gui.py`, you don't need the Hugging Face token; it works without it.
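If you want to confirm the token is being picked up, here is a minimal sketch of reading the `.env` file with only the standard library. This is illustrative only; the repo's own `load_token` helper in `util.token_access` may be implemented differently:

```python
import os


def read_env_token(path=".env", key="HUGGINGFACEHUB_API_TOKEN"):
    """Parse a simple KEY=value .env file and return the requested key.

    Falls back to the process environment if the file is missing.
    """
    if os.path.exists(path):
        with open(path) as f:
            for line in f:
                line = line.strip()
                # Skip blank lines and comments
                if line and not line.startswith("#") and "=" in line:
                    k, _, v = line.partition("=")
                    if k.strip() == key:
                        return v.strip()
    return os.environ.get(key)
```

Calling `read_env_token()` from the project root should return your token string, or `None` if it is not configured anywhere.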
## How to Run
### Option 1: Running via Server
To start the server, run:

```bash
python server.py
```

Open your browser and go to `http://localhost:7860` to interact with the AI chatbot.
### Option 2: Running via Graphical Interface
To start the server and open the graphical interface, simply run the following command:
```bash
python gui.py
```

This will launch the application with the virtual assistant interface, where you can interact using voice or buttons.
---
## Speech Configuration
The virtual assistant uses speech synthesis to respond to the user. We recommend using the **Letícia** voice, a high-quality Brazilian Portuguese voice, for the best experience.
### 1. Using the **Letícia** Voice
We recommend using the **Letícia** voice. To set it up, follow these steps:
- Visit the [Louderpages - Letícia](https://louderpages.org/leticia) website.
- GitHub: [RHVoice](https://github.com/RHVoice/RHVoice)
- Follow the instructions to configure the **Letícia** voice.

### 2. Other Alternatives
If you prefer, you can also use other speech synthesis options:
- **eSpeak**: An open-source alternative.
- **SAPI5 (Windows)**: The native speech synthesis API for Windows.

## API
### Example Usage via API:
```python
from gradio_client import Client

# ========== TEST API ==========
def response_gui(input_text):
    client = Client("wendellast/GUI")
    result = client.predict(
        message=input_text,
        max_tokens=512,
        temperature=0.7,
        top_p=0.95,
        api_name="/chat",
    )
    return result

# Example call:
input_text = "Hello, how are you?"
response = response_gui(input_text)
print("AI Response:", response)
```

### Usage via LangChain Model
You can also use the model directly via LangChain:
- **Define the model** you want to use, such as `meta-llama/Llama-3.2-3B-Instruct`.
- Configure your access token in the `.env` file.
- Instantiate the wrapper for the model using the `GuiChat` class.

### Supported Parameters:
- `temperature`: Controls the randomness of the response.
- `top_p`: Controls the diversity of the responses.
- `repetition_penalty`: Penalizes repetitions for more varied answers.
- `max_new_tokens`: Maximum number of tokens generated in the response.

**Example usage**:
```python
from util.token_access import load_token
from your_package import GuiChat

token = load_token()
chatbot = GuiChat(auth_token=token)

while True:
    question = input("Ask here: ")
    answer = chatbot._call(question)
    print(f"Response: {answer}")
```
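The generation parameters listed above have conventional ranges on Hugging Face text-generation endpoints. As a rough illustration, a small hypothetical helper (not part of this repo) for sanity-checking them before a call might look like:

```python
def check_generation_params(temperature=0.7, top_p=0.95,
                            repetition_penalty=1.0, max_new_tokens=512):
    """Validate common ranges for text-generation parameters.

    Returns the parameters as a dict, or raises ValueError.
    These bounds are general conventions, not limits enforced by GUI itself.
    """
    if not 0.0 < temperature <= 2.0:
        raise ValueError("temperature is usually in (0, 2]")
    if not 0.0 < top_p <= 1.0:
        raise ValueError("top_p must be in (0, 1]")
    if repetition_penalty < 1.0:
        raise ValueError("repetition_penalty below 1.0 encourages repetition")
    if max_new_tokens < 1:
        raise ValueError("max_new_tokens must be positive")
    return {
        "temperature": temperature,
        "top_p": top_p,
        "repetition_penalty": repetition_penalty,
        "max_new_tokens": max_new_tokens,
    }
```

A helper like this makes bad values fail fast locally instead of surfacing as an opaque error from the remote endpoint.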