https://github.com/peterpme/react-gpt
✨ An experimental chat-gpt experience focused on React using LangChain & OpenAI
- Host: GitHub
- URL: https://github.com/peterpme/react-gpt
- Owner: peterpme
- Created: 2023-02-20T03:27:36.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-06-05T02:23:13.000Z (almost 2 years ago)
- Last Synced: 2025-03-18T03:21:54.567Z (3 months ago)
- Topics: langchain, openai
- Language: TypeScript
- Homepage: https://react-gpt.fly.dev/
- Size: 4.71 MB
- Stars: 69
- Watchers: 2
- Forks: 11
- Open Issues: 3
Metadata Files:
- Readme: README.md
README
# react-gpt
This is an experiment in building a context-driven chatbot using LangChain, OpenAI, Next.js, and Fly.
## Getting Started
First, create a new `.env` file from `.env.example` and add your OpenAI API key found [here](https://platform.openai.com/account/api-keys).
```bash
cp .env.example .env
```
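The key is consumed at runtime via the environment. LangChain's OpenAI integration conventionally reads `OPENAI_API_KEY` (an assumption here; `.env.example` shows the exact variable name this repo expects). A quick, illustrative sanity check you could drop in before starting the server:

```typescript
// Illustrative only (not part of the repo): fail fast if the key never loaded.
// The variable name OPENAI_API_KEY is assumed; see .env.example for the real one.
if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is not set; did you create the .env file?");
}
```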
Next, we'll need to load our data source.

### Data Ingestion
Data ingestion happens in two steps.
First, run:
```bash
pip install -r ingest/requirements.txt
sh ingest/download.sh
```

This will download our data source (in this case the LangChain docs) and parse it.
Next, install dependencies and run the ingestion script:
```bash
yarn && yarn ingest
```

This will split the text, create embeddings, store them in a vectorstore, and save that vectorstore to the `data/` directory. We save it to a directory because we only want to run the (expensive) data ingestion process once.
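For orientation, here is a minimal sketch of what an ingestion script of this shape can look like with LangChain.js. The file path, splitter settings, and choice of `HNSWLib` are assumptions; the repo's actual `yarn ingest` script is the source of truth.

```typescript
// Hypothetical ingestion sketch; import paths vary across langchain versions.
import * as fs from "fs/promises";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { HNSWLib } from "langchain/vectorstores/hnswlib";

async function ingest() {
  // 1. Read the raw text produced by the download/parse step (path assumed).
  const text = await fs.readFile("ingest/docs.txt", "utf8");

  // 2. Split into overlapping chunks small enough to embed.
  const splitter = new RecursiveCharacterTextSplitter({
    chunkSize: 1000,
    chunkOverlap: 200,
  });
  const docs = await splitter.createDocuments([text]);

  // 3. Embed each chunk with OpenAI and index the vectors in HNSWLib.
  const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

  // 4. Persist the index to data/ so the expensive step runs only once.
  await vectorStore.save("data");
}

ingest().catch(console.error);
```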
The Next.js server relies on the presence of the `data/` directory. Please
make sure to run this before moving on to the next step.

### Running the Server
Now run the development server:
```bash
yarn dev
```

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
### Deploying the Server
The production version of this repo is hosted on
[Fly](https://react-gpt.fly.dev/). To deploy your own server on Fly, you can
use the provided `fly.toml` and `Dockerfile` as a starting point.

**Note:** Since this is a Next.js app, Vercel seems like a natural place to
host this site. Unfortunately, there are
[limitations](https://github.com/websockets/ws/issues/1786#issuecomment-678315435)
to using secure websockets via `ws` with Next.js: it requires a custom server,
which cannot be hosted on Vercel. Even with server-sent events, Vercel's
serverless functions appear to prohibit streaming responses (e.g. see
[here](https://github.com/vercel/next.js/issues/9965#issuecomment-820156947)).
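For context, the custom-server pattern the note refers to looks roughly like this. This is a generic sketch, not this repo's actual server; the `/chat` path and the echo handler are placeholders.

```typescript
// Generic sketch of a custom Next.js server with `ws` (assumed, not this repo's code).
import { createServer } from "http";
import { parse } from "url";
import next from "next";
import { WebSocketServer } from "ws";

const app = next({ dev: process.env.NODE_ENV !== "production" });
const handle = app.getRequestHandler();

app.prepare().then(() => {
  const server = createServer((req, res) => {
    // Let Next.js handle all normal HTTP traffic.
    handle(req, res, parse(req.url ?? "/", true));
  });

  // Upgrade websocket connections on a dedicated path (path is hypothetical).
  const wss = new WebSocketServer({ server, path: "/chat" });
  wss.on("connection", (socket) => {
    socket.on("message", (question) => {
      // In the real app, tokens would be streamed back as the model generates them.
      socket.send(`received: ${question.toString()}`);
    });
  });

  // Owning the HTTP server like this is exactly what Vercel's hosting disallows.
  server.listen(3000);
});
```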
## Inspirations

I basically copied stuff from all of these great people:
- [ChatLangChain](https://github.com/hwchase17/chat-langchain) - for the backend and data ingestion logic
- [LangChain Chat NextJS](https://github.com/zahidkhawaja/langchain-chat-nextjs) - for the frontend
- [Chat Langchain-js](https://github.com/zahidkhawaja/chat-langchainjs) - for everything