https://github.com/iyaja/llama-fs
A self-organizing file system with llama 3
- Host: GitHub
- URL: https://github.com/iyaja/llama-fs
- Owner: iyaja
- License: mit
- Created: 2024-05-11T19:23:19.000Z (12 months ago)
- Default Branch: main
- Last Pushed: 2025-02-18T01:58:14.000Z (2 months ago)
- Last Synced: 2025-03-20T08:48:15.960Z (about 1 month ago)
- Language: TypeScript
- Homepage:
- Size: 14.7 MB
- Stars: 5,201
- Watchers: 39
- Forks: 327
- Open Issues: 50
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-ccamel - iyaja/llama-fs - A self-organizing file system with llama 3 (TypeScript)
- project-awesome - iyaja/llama-fs - A self-organizing file system with llama 3 (Jupyter Notebook)
- awesome-repositories - iyaja/llama-fs - A self-organizing file system with llama 3 (Jupyter Notebook)
- jimsghstars - iyaja/llama-fs - A self-organizing file system with llama 3 (Jupyter Notebook)
- StarryDivineSky - iyaja/llama-fs - llama-fs is a self-organizing file system built on Llama 3. It organizes files automatically based on their content, with no manual management required. Its distinguishing features are intelligent file classification and retrieval: Llama 3 interprets each file's semantics and categorizes it accordingly. It works by reading file contents, running semantic analysis with Llama 3, and placing each file in an appropriate location based on the result. This simplifies file management and speeds up file lookup. It aims to provide a smarter, more convenient way to organize files, free of the constraints of traditional file systems, helping users manage large numbers of files and quickly find the information they need. (A01_Text Generation_Text Dialogue / Large language dialogue models and data)
README
# LlamaFS
## Inspiration
[Watch the explainer video](https://x.com/AlexReibman/status/1789895425828204553)
Open your `~/Downloads` directory. Or your Desktop. It's probably a mess...
> There are only two hard things in Computer Science: cache invalidation and **naming things**.
## What it does
LlamaFS is a self-organizing file manager. It automatically renames and organizes your files based on their content and well-known conventions (e.g., time). It supports many kinds of files, including images (through Moondream) and audio (through Whisper).
LlamaFS runs in two "modes" - as a batch job (batch mode), and an interactive daemon (watch mode).
In batch mode, you can send a directory to LlamaFS, and it will return a suggested file structure and organize your files.
In watch mode, LlamaFS starts a daemon that watches your directory. It intercepts all filesystem operations and uses your most recent edits to proactively learn how you rename files. For example, if you create a folder for your 2023 tax documents and start moving 1-3 files into it, LlamaFS will automatically create and move the files for you!
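To illustrate the watch-mode idea, here is a toy heuristic that infers a destination for a new file from moves the user recently performed. This is only a sketch; LlamaFS uses an LLM for this inference, and the function below is not taken from the repo.

```python
from pathlib import PurePosixPath

def suggest_destination(observed_moves, new_file):
    """Suggest a folder for new_file by matching its extension against
    (src, dst) moves the user recently performed. A toy stand-in for
    the LLM-based inference LlamaFS actually performs."""
    by_ext = {}
    for src, dst in observed_moves:
        # Remember where files with this extension were last moved.
        by_ext[PurePosixPath(src).suffix] = PurePosixPath(dst).parent
    folder = by_ext.get(PurePosixPath(new_file).suffix)
    return str(folder / PurePosixPath(new_file).name) if folder else None
```

For example, after observing `w2.pdf` move into `taxes/2023/`, the heuristic would suggest `taxes/2023/1099.pdf` for a new `1099.pdf`.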
Uh... Sending all my personal files to an API provider?! No thank you!
It also has a toggle for "incognito mode," allowing you to route every request through Ollama instead of Groq. Since they use the same Llama 3 model, they perform identically.
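Conceptually, the toggle just swaps the inference endpoint, since both Groq and Ollama expose OpenAI-compatible APIs. A minimal sketch (the endpoint URLs and model names here are assumptions for illustration, not taken from the repo):

```python
def resolve_backend(incognito: bool) -> dict:
    """Return the inference endpoint for the current privacy setting.
    URLs and model names are illustrative assumptions."""
    if incognito:
        # Local Ollama server; no file content leaves the machine.
        return {"base_url": "http://localhost:11434/v1", "model": "llama3"}
    # Groq's hosted OpenAI-compatible endpoint.
    return {"base_url": "https://api.groq.com/openai/v1", "model": "llama3-70b-8192"}
```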
## How we built it
We built LlamaFS on a Python backend, leveraging the Llama3 model through Groq for file content summarization and tree structuring. For local processing, we integrated Ollama running the same model to ensure privacy in incognito mode. The frontend is crafted with Electron, providing a sleek, user-friendly interface that allows users to interact with the suggested file structures before finalizing changes.
- **It's extremely fast!** (by LLM standards)! Most file operations are processed in <500ms in watch mode (benchmarked by [AgentOps](https://agentops.ai/?utm_source=llama-fs)). This is because of our smart caching that selectively rewrites sections of the index based on the minimum necessary filesystem diff. And of course, Groq's super fast inference API. 😉
- **It's immediately useful** - It's very low friction to use and addresses a problem almost everyone has. We started using it ourselves on this project (very Meta).
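The caching idea above can be sketched as content-hash bookkeeping: only files whose bytes changed since the last run need a fresh LLM summary, so the rest of the index is reused. A simplified sketch, not the project's actual cache:

```python
import hashlib

def files_needing_resummary(cache, files):
    """Return the paths whose content changed since the last run.
    `cache` maps path -> sha256 hex digest; `files` maps path -> bytes.
    Unchanged files keep their cached summaries, which is what keeps
    per-operation latency low in watch mode."""
    changed = []
    for path, content in files.items():
        digest = hashlib.sha256(content).hexdigest()
        if cache.get(path) != digest:
            changed.append(path)
            cache[path] = digest
    return changed
```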
## What's next for LlamaFS
- Find and remove old/unused files
- We have some really cool ideas for - filesystem diffs are hard...
## Installation
### Prerequisites
Before installing, ensure you have the following requirements:
- Python 3.10 or higher
- pip (Python package installer)
### Installing
To install the project, follow these steps:
1. Clone the repository:
```bash
git clone https://github.com/iyaja/llama-fs.git
```
2. Navigate to the project directory:
```bash
cd llama-fs
```
3. Install requirements:
```bash
pip install -r requirements.txt
```
4. Update your `.env`
Copy `.env.example` into a new file called `.env`. Then, provide the following API keys:
* Groq: You can obtain one from [here](https://console.groq.com/keys).
* AgentOps: You can obtain one from [here](https://app.agentops.ai/settings/projects).
Groq is used for fast cloud inference but can be replaced with Ollama in the code directly (TODO).
AgentOps is used for logging and monitoring and will report the latency, cost per session, and give you a full session replay of each LlamaFS call.
5. (Optional) Install moondream if you want to use the incognito mode
```bash
ollama pull moondream
```
## Usage
To serve the application locally using FastAPI, run the following command
```bash
fastapi dev server.py
```
This will run the server by default on port 8000. The API can be queried using a `curl` command, passing in the file path as the argument. For example, on the Downloads folder:
```bash
curl -X POST http://127.0.0.1:8000/batch \
-H "Content-Type: application/json" \
-d '{"path": "/Users//Downloads/", "instruction": "string", "incognito": false}'
```
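The same request can be issued from Python using only the standard library; the payload mirrors the `curl` example above:

```python
import json
import urllib.request

def build_batch_request(path, instruction="string", incognito=False):
    """Build the POST request for the /batch endpoint, mirroring the
    curl example above."""
    payload = json.dumps(
        {"path": path, "instruction": instruction, "incognito": incognito}
    ).encode()
    return urllib.request.Request(
        "http://127.0.0.1:8000/batch",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the server running:
# with urllib.request.urlopen(build_batch_request("/path/to/Downloads")) as resp:
#     print(resp.read().decode())
```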