Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/jeanrauwers/followers-scraper-serverless
Now you can keep track of your followers from YouTube, Instagram and Twitter accounts - Followers scraper API on AWS serverless
JSON representation
- Host: GitHub
- URL: https://github.com/jeanrauwers/followers-scraper-serverless
- Owner: jeanrauwers
- License: mit
- Created: 2019-05-05T17:01:42.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2023-01-23T23:54:56.000Z (about 2 years ago)
- Last Synced: 2023-03-03T00:20:14.881Z (almost 2 years ago)
- Topics: aws, aws-lambda, aws-serverless, followers-scraper, instagram, instagram-scraper, instagramscraper, lambda, nodejs-lambda, scraper, twitter, twitter-scraper, twittersc, typescript, webscraper, webscraper-api, webscraping, website-scraper, youtube
- Language: TypeScript
- Homepage:
- Size: 2.51 MB
- Stars: 13
- Watchers: 2
- Forks: 1
- Open Issues: 22
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
![image003](https://user-images.githubusercontent.com/10606291/57485195-f3ad4c80-72a2-11e9-98cc-46be69d53de2.png)
## Followers Scraper API on AWS Serverless Lambda
#### Supports YouTube, Twitter and Instagram Followers
#### Usage
To use this repo locally, you need to have the [Serverless framework](https://serverless.com) installed.
``` bash
$ npm install serverless -g
```
Clone this repo and install the NPM packages.
``` bash
$ git clone https://github.com/jeanrauwers/followers-scraper-serverless
$ npm install
```
Run a single API locally.
``` bash
$ npm run dev
```
You need to have the [aws-cli](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) configured with your credentials, and then you can run `npm run deploy` to deploy the Lambda to AWS. You can create your credentials in the IAM panel; remember to give your user the right permissions so it can write to S3 buckets.
The scraper Lambda runs on a CloudWatch schedule that you can define in your AWS console as a cron job.
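The repo's own handler isn't shown here, but as a rough sketch of how a scheduled scraper handler might append a day's counts to an S3 object (the bucket name, object key, and `scrapeCounts` helper below are hypothetical placeholders, not the repo's actual code):
``` typescript
// Hypothetical sketch of a scheduled scraper handler; not the repo's actual code.
import { S3 } from 'aws-sdk';
import { ScheduledHandler } from 'aws-lambda';

const s3 = new S3();
const BUCKET = process.env.FOLLOWERS_BUCKET ?? 'my-followers-bucket'; // hypothetical bucket name
const KEY = 'followers.json'; // hypothetical object key

// Placeholder for the real scraping logic (YouTube, Twitter, Instagram).
async function scrapeCounts(): Promise<{ twitter: string; instagram: string; youtube: string }> {
  return { twitter: '0', instagram: '0', youtube: '0' };
}

export const handler: ScheduledHandler = async () => {
  // Load the existing history from S3, if the object already exists.
  let history: Array<Record<string, string>> = [];
  try {
    const existing = await s3.getObject({ Bucket: BUCKET, Key: KEY }).promise();
    history = JSON.parse(existing.Body!.toString('utf-8')).data;
  } catch {
    // No object yet: start with an empty history.
  }

  // Append today's counts (DD/MM/YYYY, matching the format returned by /api/likes).
  const counts = await scrapeCounts();
  history.push({ ...counts, date: new Date().toLocaleDateString('en-GB') });

  // Write the updated history back to S3.
  await s3
    .putObject({
      Bucket: BUCKET,
      Key: KEY,
      Body: JSON.stringify({ data: history }),
      ContentType: 'application/json',
    })
    .promise();
};
```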
### Data is returned from the /api/likes endpoint in the following format:
``` json
{
"data": [
{
"twitter": "307",
"instagram": "3895",
"youtube": "389",
"date": "11/02/2020"
},
{
"twitter": "308",
"instagram": "3896",
"youtube": "389",
"date": "11/02/2020"
},
{
"twitter": "308",
"instagram": "3896",
"youtube": "389",
"date": "10/02/2020"
}
]
}
```
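For consumers of the API, the response shape above could be described and fetched with a small TypeScript sketch like the following (the interface names and the base URL are illustrative placeholders, not something the repo exports; the /api/likes/update endpoint described in the next section is called the same way):
``` typescript
// Illustrative types for the /api/likes response shown above (not exported by the repo).
interface FollowerSnapshot {
  twitter: string;   // counts are returned as strings
  instagram: string;
  youtube: string;
  date: string;      // DD/MM/YYYY
}

interface LikesResponse {
  data: FollowerSnapshot[];
}

// Placeholder base URL: replace with your own API Gateway stage URL.
const BASE_URL = 'https://your-api-id.execute-api.eu-west-1.amazonaws.com/dev';

// Fetch the follower history (Node 18+ or any browser provides the global fetch).
async function getFollowers(): Promise<FollowerSnapshot[]> {
  const res = await fetch(`${BASE_URL}/api/likes`);
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  const body: LikesResponse = await res.json();
  return body.data;
}

getFollowers().then((snapshots) => console.log(snapshots[0]));
```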
### If you need to force an update before the cron job runs, hit /api/likes/update with a GET request; it will trigger a new scrape and return the latest data.
Enjoy and have fun!