https://github.com/pgflow-dev/ai-web-scraper
# AI Web Scraper – Quick-start Cheat Sheet
[→ View the full tutorial here](https://pgflow.dev/tutorials/ai-web-scraper/)
---
## 0. One-time setup
```bash
# Clone & enter the repo
git clone https://github.com/pgflow-dev/ai-web-scraper.git
cd ai-web-scraper
# Copy the environment file example and add your OpenAI key (required by the tasks)
cp supabase/functions/.env.example supabase/functions/.env
# Edit the .env file and add your OpenAI API key
# OPENAI_API_KEY=sk-...
```
---
## 1. Boot the local Supabase stack
```bash
npx supabase@2.22.12 start
```
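Before moving on, you can confirm the stack came up cleanly. `supabase status` prints the local service URLs and keys (API URL, DB URL, anon key) that the later steps rely on. The version variable below just mirrors the CLI version this cheat sheet pins.

```bash
# Pinned CLI version used throughout this cheat sheet
SUPABASE_VERSION="2.22.12"

# Print local service URLs and keys; errors here mean the stack isn't running
npx supabase@"$SUPABASE_VERSION" status
```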
---
## 2. Run all database migrations (table + flow)
```bash
npx supabase@2.22.12 migrations up --local
```
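To sanity-check the migration, you can query pgflow's own tables. This assumes the flow definition is registered in `pgflow.flows` (the table pgflow's `create_flow` writes to in recent versions; adjust if your schema differs):

```sql
-- Should list the flow created by the migration ('analyzeWebsite')
-- if everything applied cleanly
select flow_slug from pgflow.flows;
```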
---
## 3. Serve the Edge Functions (keep this terminal open)
```bash
npx supabase@2.22.12 functions serve
```
---
## 4. Start the worker (new terminal)
```bash
curl -X POST http://127.0.0.1:54321/functions/v1/analyze_website_worker
```
A single `curl` request boots the worker; once started, it stays alive and polls for jobs, so you only need to send it once.
---
## 5. Trigger a job (SQL editor or psql)
```sql
select * from pgflow.start_flow(
  flow_slug => 'analyzeWebsite',
  input     => '{"url":"https://supabase.com"}'
);
```
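If you prefer the shell over the SQL editor, the same call works through `psql`. The connection string below is the local Supabase default (port 54322, `postgres`/`postgres`); adjust it if you changed ports or credentials in `supabase/config.toml`.

```bash
# Local Supabase Postgres connection string (default ports/credentials;
# change this if you customized your local config)
DB_URL="postgresql://postgres:postgres@127.0.0.1:54322/postgres"

# Kick off a run of the analyzeWebsite flow
psql "$DB_URL" -c "select * from pgflow.start_flow(
  flow_slug => 'analyzeWebsite',
  input     => '{\"url\":\"https://supabase.com\"}'
);"
```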
---
## 6. Check results
```sql
select * from websites; -- scraped data
select * from pgflow.runs; -- run history
```
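While a run is in flight, you can poll its state. The column names below (`status`, `started_at`) are assumptions based on pgflow's documented schema; fall back to `select * from pgflow.runs;` if your version names them differently:

```sql
-- Most recent run first; re-run until status reports completion
select run_id, flow_slug, status, started_at
from pgflow.runs
order by started_at desc
limit 1;
```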
That’s it – scrape, summarize, tag, store!