Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/jjleng/sensei
Yet another open source Perplexity
- Host: GitHub
- URL: https://github.com/jjleng/sensei
- Owner: jjleng
- License: apache-2.0
- Created: 2024-06-17T18:07:10.000Z (7 months ago)
- Default Branch: main
- Last Pushed: 2024-07-22T17:59:50.000Z (6 months ago)
- Last Synced: 2024-08-01T22:51:52.283Z (6 months ago)
- Language: TypeScript
- Homepage: https://www.heysensei.app
- Size: 1.7 MB
- Stars: 174
- Watchers: 2
- Forks: 14
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- StarryDivineSky - jjleng/sensei - R, Qwen-2-72b-instruct, WizardLM-2 8x22B, Claude Haiku, GPT-3.5-turbo. Search: SearxNG, Bing. Memory: Redis. Deployment: AWS, Paka (A01_Text Generation_Text Dialogue / large language dialogue models and data)
README
# Sensei Search
Sensei Search is an AI-powered answer engine.
## 📸 Screenshots
### Light Mode
### Dark Mode
## 💡 Insights from Utilizing Open Source LLMs
Key takeaways from working with open source large language models while building Sensei are summarized in a Reddit discussion:
- [Building an Open Source Perplexity AI with Open Source LLMs - Reddit Post](https://www.reddit.com/r/LocalLLaMA/comments/1dj7mkq/building_an_open_source_perplexity_ai_with_open/)
## 🛠️ Tech Stack
Sensei Search is built using the following technologies:
- Frontend: Next.js, Tailwind CSS
- Backend: FastAPI, OpenAI client
- LLMs: Command-R, Qwen-2-72b-instruct, WizardLM-2 8x22B, Claude Haiku, GPT-3.5-turbo
- Search: SearxNG, Bing
- Memory: Redis
- Deployment: AWS, [Paka](https://github.com/jjleng/paka)

## 🏃‍♂️ How to Run Sensei Search
You can run Sensei Search either locally on your machine or in the cloud.
### Running Locally
Follow these steps to run Sensei Search locally:
1. Prepare the backend environment:
```bash
cd sensei_root_folder/backend/
mv .env.development.example .env.development
```
Edit `.env.development` as needed. The example environment assumes you run models through Ollama. Make sure you have reasonably good GPUs to run the command-r, Qwen-2-72b-instruct, or WizardLM-2 8x22B models.
2. No changes are needed for the frontend.
3. Run the app with the following command:
```bash
cd sensei_root_folder/
docker compose up
```
4. Open your browser and go to [http://localhost:3000](http://localhost:3000)
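The `docker compose up` step implies a compose file that wires the stack together. Based on the tech stack listed above, such a file might look roughly like the sketch below; the service names, build paths, ports, and image tags are illustrative assumptions, not the project's actual configuration:

```yaml
# Hypothetical sketch of a compose file for the stack described above.
# Service names, paths, ports, and images are assumptions for illustration.
services:
  frontend:
    build: ./frontend        # Next.js + Tailwind CSS app
    ports:
      - "3000:3000"
  backend:
    build: ./backend         # FastAPI app using the OpenAI client
    env_file:
      - ./backend/.env.development
    depends_on:
      - redis
      - searxng
  redis:
    image: redis:7           # conversation memory
  searxng:
    image: searxng/searxng   # metasearch engine used alongside Bing
```

The actual file shipped in the repo is authoritative; this sketch only shows how the pieces named in the tech stack relate at runtime.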
### Running in the Cloud
We deploy the app to AWS using [paka](https://github.com/jjleng/paka). Please note that the models require GPU instances to run.
Before you start, make sure you have:
- An AWS account
- Requested GPU quota in your AWS account

The configuration for the cluster is located in the `cluster.yaml` file. You'll need to replace the `HF_TOKEN` value in `cluster.yaml` with your own Hugging Face token. This is necessary because the `mistral-7b` and `command-r` models require your account to have accepted their terms and conditions.
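One way to drop your token into `cluster.yaml` without editing it by hand is a `sed` substitution. This sketch assumes the file contains a line of the form `HF_TOKEN: <value>` (the real schema may differ); the demo below runs against a toy stand-in file so it is self-contained:

```bash
# Demo: replace the HF_TOKEN value in a cluster.yaml-style file.
# The file contents here are a toy stand-in; in the repo you would run
# only the sed line, against the real cluster.yaml.
printf 'HF_TOKEN: replace-me\n' > cluster.yaml       # toy file for the demo
export MY_HF_TOKEN="hf_xxxxxxxxxxxxxxxx"             # paste your real token
sed -i "s|^\( *HF_TOKEN:\).*|\1 ${MY_HF_TOKEN}|" cluster.yaml
grep '^HF_TOKEN:' cluster.yaml                       # shows the updated line
```

Note that `sed -i` without a suffix argument is GNU sed syntax; on macOS use `sed -i ''`.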
Follow these steps to run Sensei Search in the cloud:
1. Install paka:
```bash
pip install paka
```
2. Provision the cluster in AWS:
```bash
make provision-prod
```
3. Deploy the backend:
```bash
make deploy-backend
```
4. Deploy the frontend:
```bash
make deploy-frontend
```
5. Get the URL of the frontend:
```bash
paka function list
```
6. Open the URL in your browser.