https://github.com/intelligencedev/eternal
Eternal is an experimental platform for machine learning models and workflows.
- Host: GitHub
- URL: https://github.com/intelligencedev/eternal
- Owner: intelligencedev
- License: other
- Created: 2023-12-25T16:37:32.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2024-04-14T03:43:32.000Z (almost 2 years ago)
- Last Synced: 2024-04-14T09:53:08.207Z (almost 2 years ago)
- Topics: ai, go, htmx, inference-api, llamacpp, ml, stable-diffusion
- Language: Go
- Homepage: https://intelligence.dev
- Size: 22.7 MB
- Stars: 10
- Watchers: 2
- Forks: 0
- Open Issues: 6
Metadata Files:
- Readme: README.md
- License: LICENSE
# Eternal
Eternal is an experimental platform for machine learning workflows.
NOTE: This app is a work in progress and not yet stable. Please treat this repo as a reference. We
welcome contributors and constructive feedback, and you are welcome to use it as a reference for your own projects.
Eternal integrates various projects such as `llama.cpp`, `stable-diffusion.cpp`, and `codapi`, among many other projects whose
developers were kind enough to share them with the world. All credit belongs to the respective contributors of the dependencies this
repo relies on. Thank you for sharing your work.
The Eternal frontend is rendered with the legendary `HTMX` framework.
IMPORTANT:
Configure the quant level of the models in your `config.yml` appropriately for your system specs. If a local model fails to run, investigate the reason by viewing the generated `main.log` file. The most common causes are insufficient RAM or an incorrect prompt template. We will implement more robust error handling and logging in a future commit.
## Features
- Language model catalog for easy download and configuration.
- Text generation using local language models, OpenAI GPT-4, and Google Gemini 1.5 Pro.
- Web retrieval that fetches URL content for the LLM to reference.
- Web search that automatically retrieves the top results for a user's prompt for the LLM to reference. _Requires Chrome browser installation._
- Image generation using Stable Diffusion backend.
## Getting Started
Basic [documentation](https://github.com/intelligencedev/eternal/blob/main/docs/general_instructions.md) is provided in the `docs` folder in this repository.
## Showcase
### Web Retrieval - Search
Web tools can be enabled via the `config.yml` file.
- `webget`: Attempts to fetch a URL passed in as part of the prompt.
- `websearch`: Searches the public web for pages related to your prompt.
_Requires Chrome browser installation._
### Code Execution
Execute and edit LLM-generated code in the chat view in a secure sandbox. For now, JavaScript is implemented via WASM. More languages coming soon!
### Image Generation
Embedded Stable Diffusion for easy, high-quality image generation.
## Configuration
Rename the provided `.config.yml` file to `config.yml` and place it in the same path as the application binary. Modify the contents for your environment and use case.
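The authoritative key names live in the shipped `.config.yml`; the fragment below is purely hypothetical (none of these keys are guaranteed to match the real file) and only illustrates the kind of per-model settings, such as the quant level mentioned in the IMPORTANT note above, that you would tune for your hardware:

```yaml
# HYPOTHETICAL sketch -- copy the actual keys from the shipped .config.yml.
# Lower quant levels use less RAM at some cost in output quality.
models:
  - name: example-local-model   # invented name for illustration
    quant: Q4_K_M               # pick a quant your RAM/VRAM can hold
```

If a model fails to load after editing the file, check `main.log` as described above.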
## Build
Eternal currently supports building on Linux or Windows WSL using CUDA (NVIDIA GPU required) or macOS/Metal (M-series Max required).
To build the application:
```
$ git clone https://github.com/intelligencedev/eternal.git
$ cd eternal
$ git submodule update --init --recursive
$ make all
```
Please open an issue if you run into problems with the build process.
## Disclaimer
This README is a high-level overview of the Eternal application. For detailed setup instructions and a complete list of features, dependencies, and configuration options, consult the application documentation.