Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/gaurav031/scrape
Last synced: 25 days ago
JSON representation
- Host: GitHub
- URL: https://github.com/gaurav031/scrape
- Owner: gaurav031
- Created: 2024-06-28T18:04:38.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2024-06-28T18:59:42.000Z (6 months ago)
- Last Synced: 2024-06-28T19:47:37.628Z (6 months ago)
- Language: TypeScript
- Size: 2.77 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
## Table of Contents

1. [Introduction](#introduction)
2. [Tech Stack](#tech-stack)
3. [Features](#features)
4. [Quick Start](#quick-start)
5. [Snippets](#snippets)
6. [Links](#links)
7. [More](#more)

## Tutorial
Developed using Next.js and Bright Data's Web Unlocker, this e-commerce product scraping site is designed to help users make informed decisions. It notifies users when a product drops in price and helps competitors by alerting them when a product is out of stock, with all checks managed through cron jobs.
## Tech Stack

- Next.js
- Bright Data
- Cheerio
- Nodemailer
- MongoDB
- Headless UI
- Tailwind CSS

## Features

- **Header with Carousel**: Visually appealing header with a carousel showcasing key features and benefits.
- **Product Scraping**: A search bar allowing users to input Amazon product links for scraping (a scraping sketch follows this list).
- **Scraped Products**: Displays the details of products scraped so far, offering insights into tracked items.
- **Scraped Product Details**: Showcases the product image, title, pricing, details, and other relevant information scraped from the original website.
- **Track Option**: Modal for users to provide email addresses and opt in to tracking.
- **Email Notifications**: Sends product alert emails for various scenarios, e.g., back-in-stock alerts or lowest-price notifications.
- **Automated Cron Jobs**: Utilizes cron jobs to automate periodic scraping, ensuring data is up to date.
and many more, including code architecture and reusability
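As a rough illustration of the product-scraping flow described above, the sketch below fetches an Amazon product page through a Bright Data proxy and parses it with Cheerio. This is a minimal sketch, not the repository's actual code: the proxy host/port, the CSS selectors, and the `scrapeAmazonProduct` name are assumptions for illustration.

```typescript
import axios from "axios";
import * as cheerio from "cheerio";

// Hypothetical helper: fetch an Amazon product page via the Bright Data
// proxy (credentials from .env) and extract a few fields with Cheerio.
export async function scrapeAmazonProduct(url: string) {
  const username = process.env.BRIGHT_DATA_USERNAME;
  const password = process.env.BRIGHT_DATA_PASSWORD;
  if (!username || !password) throw new Error("Missing Bright Data credentials");

  // Proxy host/port are common Bright Data defaults, used here as an assumption.
  const response = await axios.get(url, {
    proxy: {
      host: "brd.superproxy.io",
      port: 22225,
      auth: { username, password },
    },
  });

  const $ = cheerio.load(response.data);

  // Selectors are illustrative; real Amazon markup varies by page.
  return {
    title: $("#productTitle").text().trim(),
    currentPrice: $(".a-price .a-offscreen").first().text().trim(),
    outOfStock: $("#availability span")
      .text()
      .trim()
      .toLowerCase()
      .includes("unavailable"),
  };
}
```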
## Quick Start

Follow these steps to set up the project locally on your machine.
**Prerequisites**
Make sure you have the following installed on your machine:
- [Git](https://git-scm.com/)
- [Node.js](https://nodejs.org/en)
- [npm](https://www.npmjs.com/) (Node Package Manager)

**Cloning the Repository**
```bash
git clone https://github.com/gaurav031/Scrape.git
cd Scrape
```

**Installation**
Install the project dependencies using npm:
```bash
npm install
```

**Set Up Environment Variables**
Create a new file named `.env` in the root of your project and add the following content:
```env
#SCRAPER
BRIGHT_DATA_USERNAME=
BRIGHT_DATA_PASSWORD=

#DB
MONGODB_URI=

#OUTLOOK
EMAIL_USER=
EMAIL_PASS=
```

Replace the placeholder values with your actual credentials. You can obtain them by signing up on the respective websites: [Bright Data](https://brightdata.com/), [MongoDB](https://www.mongodb.com/), and [Nodemailer](https://nodemailer.com/).
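To show how the `EMAIL_USER` and `EMAIL_PASS` values might be consumed, here is a minimal Nodemailer sketch assuming an Outlook account (matching the `#OUTLOOK` comment above). The transport options and the `sendStockAlert` helper are illustrative assumptions, not the project's exact code.

```typescript
import nodemailer from "nodemailer";

// Hypothetical transport built from the Outlook credentials in .env.
const transporter = nodemailer.createTransport({
  host: "smtp-mail.outlook.com", // assumed SMTP host for an Outlook account
  port: 587,
  secure: false, // STARTTLS is negotiated on port 587
  auth: {
    user: process.env.EMAIL_USER,
    pass: process.env.EMAIL_PASS,
  },
});

// Example alert: notify a subscriber that a tracked product is back in stock.
export async function sendStockAlert(to: string, productTitle: string, productUrl: string) {
  await transporter.sendMail({
    from: process.env.EMAIL_USER,
    to,
    subject: `${productTitle} is back in stock`,
    html: `<p>The product you are tracking is available again: <a href="${productUrl}">${productTitle}</a></p>`,
  });
}
```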
**Running the Project**
```bash
npm run dev
```

Open [http://localhost:3000](http://localhost:3000) in your browser to view the project.
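The automated cron jobs mentioned in the features list are not detailed in this README. One common pattern in a Next.js App Router project is to expose an API route that a scheduler (e.g., Vercel Cron) calls periodically; the sketch below assumes that pattern, a Mongoose `Product` model, and the hypothetical `scrapeAmazonProduct` helper from earlier. It is an assumption, not the repository's actual route.

```typescript
// app/api/cron/route.ts (assumed location, Next.js App Router)
import { NextResponse } from "next/server";

// Hypothetical imports: a Mongoose Product model and the scraper sketched above.
// Assumes a MongoDB connection has already been established elsewhere.
import Product from "@/lib/models/product.model";
import { scrapeAmazonProduct } from "@/lib/scraper";

export async function GET() {
  // Re-scrape every tracked product and persist the latest price/stock data.
  const products = await Product.find({});

  for (const product of products) {
    const scraped = await scrapeAmazonProduct(product.url);
    await Product.findByIdAndUpdate(product._id, {
      currentPrice: scraped.currentPrice,
      outOfStock: scraped.outOfStock,
    });
    // Email notifications (back in stock, lowest price) would be triggered here.
  }

  return NextResponse.json({ updated: products.length });
}
```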