Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/gill-singh-a/github-analytics-tool
A program written in Python that uses the requests module to fetch and analyze publicly available information about a GitHub account
beautifulsoup beautifulsoup4 git github html-parser python requests scrapping scrapping-python
Last synced: 16 days ago
- Host: GitHub
- URL: https://github.com/gill-singh-a/github-analytics-tool
- Owner: Gill-Singh-A
- Created: 2023-06-17T16:46:16.000Z (over 1 year ago)
- Default Branch: master
- Last Pushed: 2023-11-14T18:40:35.000Z (about 1 year ago)
- Last Synced: 2024-11-09T13:23:37.452Z (2 months ago)
- Topics: beautifulsoup, beautifulsoup4, git, github, html-parser, python, requests, scrapping, scrapping-python
- Language: Python
- Homepage:
- Size: 36.1 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# Github Analytics Tool
A program written in Python that uses the requests module to fetch and analyze publicly available information about a GitHub account.

## Requirements
Language used: Python 3
Modules/Packages used:
* requests
* os
* pathlib
* subprocess
* datetime
* bs4
* optparse
* pickle
* colorama
* time

Install the dependencies:
```bash
pip install -r requirements.txt
```

### main.py
It parses the arguments from the command used to run the program. It accepts the following options:
* '-u', "--users" : IDs of the users to get details for (separated by ',')
* '-l', "--load" : File from which to load the users
* '-w', "--write" : Name of the file to dump the extracted data to
* '-r', "--read" : Read a dump file
* '-c', "--clone-repositories" : Clone all repositories of a user (True/False)

### Files
It makes **data** and **users** folders:
* data: stores the dumped data of the users (dumped using *pickle*)
* users: stores the cloned repositories of the users (with the user's name as the parent folder of the cloned repositories)

### Note
The reason for using *requests* instead of the *GitHub API* is that this approach gives proof that scraping other websites that don't have an API would be easy; an API's documentation is published online and can easily be used to get all the required data.
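A minimal sketch of how the options listed under main.py might be declared with the stdlib optparse module the README mentions. The option strings come from the README; the `dest` names, defaults, and help text are assumptions for illustration, not the repository's actual code:

```python
# Sketch of the option parsing described under main.py, using optparse.
# Option strings match the README; dest names, defaults, and help text
# are assumptions, not the repository's actual code.
from optparse import OptionParser

def build_parser():
    parser = OptionParser()
    parser.add_option('-u', '--users', dest='users',
                      help="IDs of the users to get details for (separated by ',')")
    parser.add_option('-l', '--load', dest='load',
                      help="file from which to load the users")
    parser.add_option('-w', '--write', dest='write',
                      help="name of the file to dump the extracted data to")
    parser.add_option('-r', '--read', dest='read',
                      help="read a dump file")
    parser.add_option('-c', '--clone-repositories', dest='clone',
                      default='False',
                      help="clone all repositories of a user (True/False)")
    return parser

options, _ = build_parser().parse_args(['-u', 'alice,bob', '-c', 'True'])
print(options.users.split(','))   # ['alice', 'bob']
print(options.clone == 'True')    # True
```

Splitting `--users` on ',' matches the "separated by ','" convention in the option list above.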
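The dump-and-read cycle described under Files can be sketched with the stdlib pickle and pathlib modules. The **data** folder name comes from the README; the helper names and the dictionary layout are assumptions:

```python
# Sketch of dumping scraped user data into the "data" folder with pickle,
# as the Files section describes. The folder name comes from the README;
# the helper names and the data layout are assumptions for illustration.
import pickle
from pathlib import Path

def dump_users(data, name, folder='data'):
    # Create the data folder if needed, then pickle the dictionary into it.
    path = Path(folder)
    path.mkdir(exist_ok=True)
    with open(path / name, 'wb') as f:
        pickle.dump(data, f)

def load_users(name, folder='data'):
    # Read a dump file back, as the '-r'/'--read' option would.
    with open(Path(folder) / name, 'rb') as f:
        return pickle.load(f)

users = {'alice': {'repositories': 5, 'followers': 10}}
dump_users(users, 'users.pkl')
print(load_users('users.pkl') == users)  # True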
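Cloning into the **users** folder, with the user's name as the parent directory, could be driven through the stdlib subprocess module the README lists. This helper only builds the `git clone` command; the function name and URL pattern are assumptions, not the repository's actual code:

```python
# Sketch of how the '-c'/'--clone-repositories' option might shell out to
# git via subprocess. The "users" folder layout follows the Files section;
# the helper name and URL pattern are assumptions for illustration.
import subprocess
from pathlib import Path

def clone_command(user, repo):
    # Build the git command; the user's name is the parent folder of clones.
    target = Path('users') / user / repo
    return ['git', 'clone', f'https://github.com/{user}/{repo}.git', str(target)]

# subprocess.run(clone_command('alice', 'example-repo'))  # would invoke git
print(clone_command('alice', 'example-repo')[1])  # clone
```

Returning the argument list instead of calling `subprocess.run` directly keeps the command easy to inspect and test without touching the network.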