Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/luismda/upload-ai
React app to upload a video and generate suggested titles and descriptions with AI.
ai fastify nodejs openai prisma reactjs shadcn-ui tailwindcss typescript
Last synced: about 1 month ago
- Host: GitHub
- URL: https://github.com/luismda/upload-ai
- Owner: luismda
- Created: 2023-09-17T20:26:57.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-09-24T18:40:59.000Z (over 1 year ago)
- Last Synced: 2024-04-23T02:51:12.808Z (9 months ago)
- Topics: ai, fastify, nodejs, openai, prisma, reactjs, shadcn-ui, tailwindcss, typescript
- Language: TypeScript
- Homepage:
- Size: 9.71 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# upload.ai 🤖
This project is an application with a **React.js** front-end and a **Node.js** back-end that uses the **OpenAI API** together with the **Vercel AI SDK** to automate generating suggested titles and descriptions for videos, such as **YouTube** uploads.
This project was developed during [**Rocketseat**](https://github.com/rocketseat-education/nlw-ai-mastery)'s NLW AI event.
## About
In this application, the user uploads a local video and enters keywords that are spoken in it. On submission, the video is converted to audio using the **ffmpeg** tool compiled to **WebAssembly**, which lets this task run directly in the user's browser, with no processing on the back-end.
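The project's exact ffmpeg invocation is not shown here, but as an illustration, a small helper could assemble the argument list handed to ffmpeg.wasm for extracting a video's audio track as a compact MP3. The flag values below are assumptions for the sketch, not the project's confirmed settings:

```typescript
// Builds the argument list that would be passed to ffmpeg.wasm
// to extract the audio track from a video as a low-bitrate MP3.
// Flag values are illustrative assumptions, not the project's exact settings.
function buildAudioExtractionArgs(inputFile: string, outputFile: string): string[] {
  return [
    '-i', inputFile,          // input video, e.g. 'input.mp4'
    '-map', '0:a',            // select only the audio stream
    '-b:a', '20k',            // low audio bitrate keeps the payload small
    '-acodec', 'libmp3lame',  // encode to MP3
    outputFile,               // e.g. 'output.mp3'
  ];
}
```

Keeping the bitrate low matters here because the resulting audio is uploaded to the back-end and then sent to a transcription API.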
With the video converted to audio, the transcription step turns the audio into text using the **OpenAI Whisper** model; the video's keywords are sent along to improve transcription accuracy. Both the audio and the transcription are then saved to a **PostgreSQL** database by the back-end.
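On the back-end, the transcription request carries the audio file plus the user's keywords as a prompt. A minimal sketch of assembling the non-file parameters (the field names follow the public OpenAI audio transcription API; the helper itself is hypothetical):

```typescript
interface TranscriptionParams {
  model: string;
  prompt: string;          // keywords spoken in the video, to bias the transcription
  response_format: string;
  temperature: number;
}

// Assembles the non-file parameters for an OpenAI Whisper transcription request.
// 'whisper-1' is the public Whisper model name in the OpenAI API.
function buildTranscriptionParams(keywords: string): TranscriptionParams {
  return {
    model: 'whisper-1',
    prompt: keywords,
    response_format: 'json',
    temperature: 0,        // deterministic transcription output
  };
}
```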
Finally, the user can select previously registered prompts (instructions telling the AI what to do), customize them if desired, and adjust the temperature (which controls how creative the AI's response is) to obtain suggestions for eye-catching titles or a catchy description for the uploaded video.
Response generation also uses OpenAI, this time with the **GPT 3.5 Turbo 16K** model, which supports requests and responses of up to 16 thousand tokens. The **Vercel AI SDK** is used to send the request to the back-end and to receive the AI-generated response as a stream.
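A common pattern in this kind of app, and an assumption about this project's internals, is that stored prompt templates contain a placeholder that is replaced by the video's transcription before the completion request is sent, and that the user-chosen temperature is clamped to the range the UI exposes. A sketch of both helpers (placeholder name and range are assumptions):

```typescript
// Replaces a '{transcription}' placeholder in a stored prompt template
// with the actual transcription text. The placeholder name is an assumption.
function applyTranscription(template: string, transcription: string): string {
  return template.replace('{transcription}', transcription);
}

// Clamps the user-chosen temperature to an assumed valid range of 0 to 1.
function clampTemperature(value: number): number {
  return Math.min(Math.max(value, 0), 1);
}
```

For example, `applyTranscription('Suggest 3 titles for: {transcription}', text)` yields the final prompt string sent to the model.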
## Technologies
This project was developed with a **monorepo** architecture using **pnpm workspaces**. The following tools were used:
### AI Tools
- OpenAI
- Whisper
- GPT 3.5 Turbo 16K

### Front-end (web)
- Vercel AI SDK
- TypeScript
- React.js
- Vite
- TailwindCSS
- Shadcn UI
- ffmpeg

### Back-end
- Vercel AI SDK
- TypeScript
- Node.js
- Fastify
- Prisma ORM
- PostgreSQL
- Docker

## Instructions
To run this project locally on your machine, you can follow these steps:
Environment:
- Node.js LTS
- pnpm

1. Clone this repository:
```sh
git clone https://github.com/luismda/upload-ai.git
```

2. Install the dependencies:
```sh
pnpm i
```

### Back-end
Access the `apps/server` directory and proceed with the next steps.
1. Start the Docker container:
```sh
docker compose up -d
```

2. Create a `.env` file in the root with the same format as `.env.example`. You will need to enter the PostgreSQL connection URL and your OpenAI `API_KEY`, which can be found in the [OpenAI API panel](https://platform.openai.com/account/api-keys).
3. Run the migrations to create the tables in the database with Prisma:
```sh
pnpm prisma db push
```

4. Run this command to register some prompts in the database:
```sh
pnpm prisma db seed
```

5. Start the server:
```sh
pnpm run dev
```

You can look up the API routes in the `routes.http` file, which is designed for the [VSCode REST Client](https://github.com/Huachao/vscode-restclient) extension.
### Front-end
Access the `apps/web` directory and proceed with the next steps.
1. Create a `.env` file in the root with the same format as `.env.example`. You will need to provide the base URL for the local API.
2. Run the project:
```sh
pnpm run dev
```

Open the application in the browser and test the features.
## Created by
Luís Miguel | [**LinkedIn**](https://www.linkedin.com/in/luis-miguel-dutra-alves/)
##
**#NeverStopLearning 🚀**