Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/manekinekko/llama-index-azure-search-javascript
This is a RAG sample app built with LlamaIndex and uses Azure AI Search Vector Store.
- Host: GitHub
- URL: https://github.com/manekinekko/llama-index-azure-search-javascript
- Owner: manekinekko
- Created: 2024-11-18T20:52:54.000Z (about 1 month ago)
- Default Branch: main
- Last Pushed: 2024-11-18T20:58:44.000Z (about 1 month ago)
- Last Synced: 2024-11-18T21:55:43.198Z (about 1 month ago)
- Topics: azure, azure-openai, azureai, llamaindex, nextjs, openai, rag, vector-search
- Language: TypeScript
- Homepage:
- Size: 785 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
This is a RAG sample app built with [LlamaIndex](https://www.llamaindex.ai/) that uses the Azure AI Search vector store.
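At a high level, the retrieval step of a RAG app like this one embeds the user's query and ranks stored document embeddings by similarity. The following is a minimal illustrative sketch of that idea (hypothetical helper functions, not this repo's actual code); in this app the ranking is performed server-side by Azure AI Search:

```typescript
// Hypothetical sketch of RAG retrieval: rank stored document embeddings
// by cosine similarity to a query embedding and return the top-k texts
// that would be placed into the LLM prompt.
type EmbeddedDoc = { text: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function retrieveTopK(query: number[], docs: EmbeddedDoc[], k: number): EmbeddedDoc[] {
  // Sort a copy so the caller's array is left untouched.
  return [...docs]
    .sort((x, y) => cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k);
}
```

A production vector store such as Azure AI Search runs this kind of similarity ranking at scale with an index, rather than a linear scan.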
## Prerequisites
- Create an Azure AI Search instance (Basic SKU)
- Create an Azure OpenAI instance
- Create an `.env` file with the following variables:

```
AZURE_AI_SEARCH_ENDPOINT=https://<your-search-service>.search.windows.net
AZURE_OPENAI_ENDPOINT=https://<your-openai-resource>.openai.azure.com/
AZURE_AI_SEARCH_KEY=
OPENAI_API_KEY=
AZURE_OPENAI_EMBEDDING_DEPLOYMENT=text-embedding-ada-002
AZURE_OPENAI_DEPLOYMENT=gpt-4
AZURE_API_VERSION=2024-09-01-preview
AZURE_SEARCH_INDEX_NAME=llamaindex-vector-search
```

## Getting Started
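Before running the app, it can help to verify that all of the variables listed in the prerequisites are actually set. A minimal sketch (a hypothetical helper, not part of this repo) that checks `process.env`:

```typescript
// Names of the variables expected in the `.env` file above.
const REQUIRED_VARS = [
  "AZURE_AI_SEARCH_ENDPOINT",
  "AZURE_OPENAI_ENDPOINT",
  "AZURE_AI_SEARCH_KEY",
  "OPENAI_API_KEY",
  "AZURE_OPENAI_EMBEDDING_DEPLOYMENT",
  "AZURE_OPENAI_DEPLOYMENT",
  "AZURE_API_VERSION",
  "AZURE_SEARCH_INDEX_NAME",
];

// Return the names of any required variables that are unset or empty.
function missingEnvVars(env: NodeJS.ProcessEnv = process.env): string[] {
  return REQUIRED_VARS.filter((name) => !env[name]?.trim());
}

const missing = missingEnvVars();
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(", ")}`);
}
```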
First, install the dependencies:
```
npm install
```

Next, generate the embeddings of the documents in the `./data` directory (if this folder exists; otherwise, skip this step):
```
npm run generate
```

Third, run the development server:
```
npm run dev
```

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
This project uses [`next/font`](https://nextjs.org/docs/basic-features/font-optimization) to automatically optimize and load Inter, a custom Google Font.
## Using Docker
1. Build an image for the Next.js app (replace `<image-name>` with a tag of your choice):
```
docker build -t <image-name> .
```
2. Generate embeddings:
Parse the data and generate the vector embeddings if the `./data` folder exists - otherwise, skip this step:
```
# Use your .env, config, and data from the host file system, and store the
# vector database in ./cache. Replace <image-name> with the tag you used
# in `docker build -t`.
docker run \
  --rm \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/config:/app/config \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/cache:/app/cache \
  <image-name> \
  npm run generate
```

3. Start the app:
```
# Use your .env and config from the host file system, and read the vector
# database from ./cache. Replace <image-name> with the tag you used in
# `docker build -t`.
docker run \
  --rm \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/config:/app/config \
  -v $(pwd)/cache:/app/cache \
  -p 3000:3000 \
  <image-name>
```

## Learn More
To learn more about LlamaIndex, take a look at the following resources:
- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex (Python features).
- [LlamaIndexTS Documentation](https://ts.llamaindex.ai) - learn about LlamaIndex (TypeScript features).

You can check out [the LlamaIndexTS GitHub repository](https://github.com/run-llama/LlamaIndexTS) - your feedback and contributions are welcome!