# ziptie
A web interface for llama.cpp cli written in js, jQuery and php.



 



ziptiebot: an i5-2400 with 8 GB of RAM running 7B models, and the machine ziptie was developed on.

I wrote this interface because the version of llama.cpp used by the oobabooga web UI (at the time; this may have changed) doesn't compile correctly for older processors without AVX2 support. The current mainline llama.cpp (which is command line only) does compile and run correctly on older processors, but I didn't want to use the CLI to interact with the program.

This web UI is only for one-shot prompts and does not use interactive mode: it takes a single prompt and generates text until it runs out of tokens.

Supports ggml models; I have not tried gptq models.

Install instructions/commands (clean install of Ubuntu server or WSL):

Note for WSL users: you can access WSL Linux files from `\\wsl.localhost` in Windows Explorer, so you may not need to install the vsftpd package.

`sudo apt update`

`sudo apt install apache2 php libapache2-mod-php git build-essential vsftpd`

`sudo ufw allow "Apache Full"`

`sudo nano /etc/vsftpd.conf` - uncomment `write_enable=YES` and save.
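If you prefer a non-interactive edit, the same change can be made with `sed`. The snippet below demonstrates the substitution on a scratch file so it can be dry-run safely; it assumes the stock Ubuntu `vsftpd.conf`, where the line ships commented out as `#write_enable=YES`:

```shell
# Demonstrate the edit on a scratch copy; on the real system, run the
# same sed command with sudo against /etc/vsftpd.conf instead.
printf '#write_enable=YES\n' > /tmp/vsftpd.conf.demo
sed -i 's/^#write_enable=YES/write_enable=YES/' /tmp/vsftpd.conf.demo
grep '^write_enable' /tmp/vsftpd.conf.demo
```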

`cd /var/www/html`

`sudo git clone https://github.com/jeddyhhh/ziptie`

`cd ziptie`

`sudo service vsftpd restart`

`sudo chown -R ["yourusername"]:www-data /var/www/html/ziptie`

`sudo chmod -R 775 /var/www/html/ziptie`
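To confirm the ownership and permission changes took effect, you can inspect the directory with `stat`. The snippet below demonstrates the check on a scratch directory; run the same `stat` command against `/var/www/html/ziptie` after the `chown`/`chmod` steps above:

```shell
# Demo of the permission check on a scratch directory. stat's %a prints
# the octal mode; on the real install you'd expect 775 and an owner of
# ["yourusername"]:www-data (check those with %U and %G).
mkdir -p /tmp/ziptie-perm-demo
chmod 775 /tmp/ziptie-perm-demo
stat -c '%a' /tmp/ziptie-perm-demo
```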

`./installLlama.sh`

Transfer model files via FTP to `/var/www/html/ziptie/llama.cpp/models/["model-name"]/["model-name"].bin`

Example: `/var/www/html/ziptie/llama.cpp/models/vicuna-7b/ggml-model-q4_0.bin`

WSL users: you can browse to `\\wsl.localhost\["distro-name"]\var\www\html\ziptie\llama.cpp\models` and drag and drop model folders there.

`["distro-name"]` is usually `Ubuntu`.
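The directory layout ziptie expects can be sketched as follows. It is shown under `/tmp` so it can be dry-run without root; the real base directory is `/var/www/html/ziptie`, and the model and file names are examples:

```shell
# Mirror of the expected layout: one directory per model under
# llama.cpp/models, with the model's .bin file inside it.
BASE=/tmp/ziptie-layout-demo
mkdir -p "$BASE/llama.cpp/models/vicuna-7b"
touch "$BASE/llama.cpp/models/vicuna-7b/ggml-model-q4_0.bin"
find "$BASE/llama.cpp/models" -name '*.bin'
```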

`sudo service apache2 restart`

Go to http://localhost/ziptie (or http://["server-ip-address"]/ziptie) to use ziptie.

You can change site settings in `adminSettings.txt`; there are options to lock certain setting fields and to set a default settings file to be loaded on startup.

Quick Start:

1. On the very first load, hit the "Reload/Rescan All Settings/Models and Prompts" button (click it a few times for maximum reliability); this scans the models and prompts you have transferred and puts them into a list for the website to read.

2. Edit any parameters in settings and hit "Save". (Selecting "Save as Default" will change the default settings file to be loaded on startup)

3. You can now hit "Submit Prompt" and it will start generating text.

You can use the "Alt. Output file name" option to save the llama output to a separate file; this could be anything (.html, .php, .js).

After a restart of WSL (not needed on Ubuntu Server):

WSL doesn't auto-start services, so run these commands after a restart of WSL and/or Windows:

`wsl sudo service apache2 start`

`wsl sudo service vsftpd start`

WSL is now running the web and FTP servers in the background; you can go to http://localhost/ziptie

.bat files for these commands are in `includes/wsl-scripts` in this repository.
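As an alternative to running the commands by hand, recent WSL builds support a `[boot]` section in `/etc/wsl.conf` that runs a command once when the distro starts. Whether your WSL version supports this is an assumption on my part, so treat the fragment below as optional and keep the manual commands as a fallback:

```ini
[boot]
command = service apache2 start && service vsftpd start
```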

To update ziptie:

run `./updateZiptie.sh`

To update llama.cpp:

run `./updateLlama.sh`