https://github.com/quantumudit/analyzing-quotes
This project focuses on scraping all the quotes and their related data from the "Quotes To Scrape" website; performing necessary transformations on the scraped data and then analyzing & visualizing it using Jupyter Notebook and Power BI.
- Host: GitHub
- URL: https://github.com/quantumudit/analyzing-quotes
- Owner: quantumudit
- License: other
- Created: 2021-11-10T16:25:47.000Z (almost 4 years ago)
- Default Branch: master
- Last Pushed: 2021-12-11T12:43:36.000Z (almost 4 years ago)
- Last Synced: 2025-02-17T10:49:42.912Z (8 months ago)
- Topics: data-analysis, data-science, data-transformation, data-visualization, etl, jupyter-notebook, power-bi, python, webscraping
- Language: Python
- Homepage:
- Size: 800 KB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: readme.md
- License: LICENSE
README
![Project Logo][project_logo]
---
Scraping & Analyzing Quotes from the Quotes to Scrape website with Python and Power BI
Overview • Prerequisites • Architecture • Demo • Support • License

## Overview
This project focuses on scraping quotes and their associated attributes from [Quotes to Scrape][website_link], performing exploratory data analysis to generate insights, and visualizing them with Power BI.
The repository directory structure is as follows:
Analyzing-Quotes
├─ 01_WEBSCRAPING
├─ 02_ETL
├─ 03_DATA
├─ 04_ANALYSIS
├─ 05_DASHBOARD
├─ 06_RESOURCES

The type of content present in the directories is as follows:
**01_WEBSCRAPING**
This directory contains the Python script that scrapes data from the website, along with a flat file that holds the scraped data.
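For illustration, below is a minimal sketch of what such a scraping script can look like using Requests and BeautifulSoup. The selectors match the public structure of the Quotes to Scrape site, but the field names, function layout, and output file are assumptions rather than the repository's actual code:

```python
import csv

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://quotes.toscrape.com"


def scrape_quotes():
    """Walk the paginated site and yield one dict per quote."""
    url = "/page/1/"
    while url:
        soup = BeautifulSoup(requests.get(BASE_URL + url, timeout=10).text, "html.parser")
        for quote in soup.select("div.quote"):
            yield {
                "quote": quote.select_one("span.text").get_text(strip=True),
                "author": quote.select_one("small.author").get_text(strip=True),
                "tags": ", ".join(t.get_text(strip=True) for t in quote.select("a.tag")),
            }
        # Follow the "Next" link until the last page is reached
        next_link = soup.select_one("li.next > a")
        url = next_link["href"] if next_link else None


if __name__ == "__main__":
    rows = list(scrape_quotes())
    with open("quotes_raw.csv", "w", newline="", encoding="utf-8") as f:  # assumed output file
        writer = csv.DictWriter(f, fieldnames=["quote", "author", "tags"])
        writer.writeheader()
        writer.writerows(rows)
```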
**02_ETL**
This directory contains the ETL script that takes the scraped dataset as input, transforms it and exports an analysis-ready dataset into the _03_DATA_ directory.
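As a rough sketch, the ETL step could be a small Pandas routine like the one below; the column names, derived fields, and file paths here are illustrative assumptions, not the repository's actual schema:

```python
import pandas as pd


def transform(raw_path: str = "quotes_raw.csv",
              out_path: str = "03_DATA/quotes_clean.csv") -> pd.DataFrame:
    """Clean the scraped flat file and export an analysis-ready dataset."""
    df = pd.read_csv(raw_path)

    # Basic cleaning: trim whitespace, remove stray quotation marks, drop duplicates
    df["quote"] = df["quote"].str.strip().str.strip('“”"')
    df["author"] = df["author"].str.strip()
    df = df.drop_duplicates()

    # Derive simple analysis-friendly columns (assumed, for illustration)
    df["quote_length"] = df["quote"].str.len()
    df["tag_count"] = (
        df["tags"].fillna("").str.split(",").apply(lambda tags: sum(1 for t in tags if t.strip()))
    )

    df.to_csv(out_path, index=False)
    return df


if __name__ == "__main__":
    transform()
```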
**03_DATA**
This directory contains the data that can be directly used for exploratory data analysis and data visualization purposes.
**04_ANALYSIS**
This directory contains the Python notebooks that analyze the cleaned dataset to generate insights.
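A couple of illustrative EDA cells are shown below; the actual notebook analyses may differ, and the file name and derived columns are assumptions carried over from the ETL sketch above:

```python
import pandas as pd

df = pd.read_csv("03_DATA/quotes_clean.csv")  # assumed file name

# Which authors are quoted most often?
print(df["author"].value_counts().head(10))

# Which authors tend to write the longest quotes?
print(df.groupby("author")["quote_length"].mean().sort_values(ascending=False).head(10))
```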
**05_DASHBOARD**
This directory contains the Python notebook with an embedded Power BI report that visualizes the data. The Power BI dashboard offers slicers, cross-filtering, and other advanced capabilities that end users can interact with to explore a specific facet of the data or surface additional insights.
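For context, Microsoft's `powerbiclient` package is one way to embed a published Power BI report inside a Jupyter notebook; the snippet below is a hedged sketch with placeholder IDs, and the notebook in this repository may use a different approach:

```python
from powerbiclient import Report
from powerbiclient.authentication import DeviceCodeLoginAuthentication

# Interactive device-code sign-in to the Power BI service
auth = DeviceCodeLoginAuthentication()

report = Report(
    group_id="<workspace-id>",  # placeholder: the Power BI workspace (group) ID
    report_id="<report-id>",    # placeholder: the published report's ID
    auth=auth,
)

report  # displaying the object renders the interactive embedded report in the notebook
```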
**06_RESOURCES**
This directory contains images, icons, layouts, and other assets used in this project.
## Prerequisites
The major skills that are required as prerequisite to fully understand this project are as follows:
- Basics of Python
- Basics of HTML
- Python libraries: Requests, BeautifulSoup, Pandas, DateTime, Time
- Basics of Python Notebooks
- Basics of Power BI

In order to complete the project, I've used the following applications and libraries:
- Python
- Python libraries mentioned in requirements.txt file
- Jupyter Notebook
- Visual Studio Code
- Microsoft Power BI

> The choice of applications & their installation might vary based on individual preferences & system settings.
## Architecture
The project architecture is quite straightforward and can be explained through the image below:
![Process Architecture][process_workflow]
As the workflow above suggests, we first scrape the data from the website with a Python script and collect it in a flat file, which is then processed and cleaned by a separate ETL-specific Python script.
Finally, we leverage the clean, analysis-ready dataset for exploratory data analysis (EDA) in Jupyter Notebook and create an insightful report with Power BI.
## Demo
The below graphic shows scraping of data from the website:
![Scraping Graphic][scraping_graphic]
From this graphic we can observe a significant reduction in scraping time due to the use of multithreading.
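A minimal sketch of how multithreading can speed up the page fetches with `concurrent.futures` is shown below; the real scraper may be organised differently, and the hard-coded page count is an assumption for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

import requests

PAGE_URL = "https://quotes.toscrape.com/page/{}/"


def fetch(page: int) -> str:
    """Download the HTML of a single page."""
    return requests.get(PAGE_URL.format(page), timeout=10).text


# Fetch the pages concurrently instead of one at a time (10 pages assumed)
with ThreadPoolExecutor(max_workers=10) as pool:
    pages_html = list(pool.map(fetch, range(1, 11)))

print(f"Fetched {len(pages_html)} pages")
```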
## Support
If you have any doubts, queries, or suggestions, please connect with me on any of the following platforms:
[![Linkedin Badge][linkedinbadge]][linkedin] [![Twitter Badge][twitterbadge]][twitter]
If you like my work, you can support me on Patreon.
## License
This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms.
[project_logo]: 06_RESOURCES/project_cover_image.png
[process_workflow]: 06_RESOURCES/process_architecture.png
[scraping_graphic]: 06_RESOURCES/scraping_graphic.gif
[website_link]: https://quotes.toscrape.com/
[linkedin]: https://www.linkedin.com/in/uditkumarchatterjee/
[twitter]: https://twitter.com/quantumudit
[linkedinbadge]: https://img.shields.io/badge/-uditkumarchatterjee-0e76a8?style=flat&labelColor=0e76a8&logo=linkedin&logoColor=white
[twitterbadge]: https://img.shields.io/badge/-@quantumudit-1ca0f1?style=flat&labelColor=1ca0f1&logo=twitter&logoColor=white&link=https://twitter.com/quantumudit