Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/deolae/data-engineering-project-pipeline
This project was built in a Jupyter Notebook. It ingests data from a CSV file, a PDF file, and web scraping; transforms and cleans the data; and finally stores it in MongoDB.
- Host: GitHub
- URL: https://github.com/deolae/data-engineering-project-pipeline
- Owner: Deolae
- Created: 2023-07-04T23:33:06.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-07-04T23:41:01.000Z (over 1 year ago)
- Last Synced: 2024-11-17T22:14:06.563Z (1 day ago)
- Topics: data-pipeline, jupyter-notebook, python
- Language: Jupyter Notebook
- Homepage:
- Size: 6.23 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Data-Engineering-project
Project Steps:

**Data Source Identification:**
Identify the relevant sources of data, such as CSV files, websites, PDF documents, and text files. Understand the structure and format of the data in each source, and design a data schema that accommodates the different data types and relationships.
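As an illustration, here is a minimal sketch of what one unified record schema could look like in Python. Every field name below is an assumption made for this example; the repository does not document its actual schema.

```python
# Hypothetical unified record shape for data coming from any of the three
# sources (CSV, web scrape, PDF). All field names are illustrative
# assumptions, not taken from the repository.
unified_record = {
    "source": "csv",            # which branch produced it: "csv", "web", or "pdf"
    "ingested_at": "2023-07-04T00:00:00Z",  # ISO-8601 ingestion timestamp
    "payload": {                # the cleaned fields from the source row or page
        "name": "example item",
        "price": 9.99,
    },
}
```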
**Data Ingestion:**
Develop a data ingestion module capable of fetching data from CSV files, from web APIs, or via web scraping. For CSV files, implement a mechanism to read and parse the data, handling any inconsistencies or missing values. For web sources, utilize libraries or tools to retrieve data from APIs or scrape data from web pages.
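A minimal ingestion sketch, assuming pandas for CSV parsing and pdfplumber for PDF text extraction; the file names and the na_values list are placeholders, and the repository does not state which PDF library it uses.

```python
import pandas as pd
import pdfplumber  # assumption: any PDF text-extraction library would do


def ingest_csv(path: str) -> pd.DataFrame:
    # Map common placeholder strings to NaN up front so the cleaning
    # step sees one consistent missing-value marker.
    return pd.read_csv(
        path,
        na_values=["", "NA", "N/A", "-"],
        skipinitialspace=True,
    )


def ingest_pdf(path: str) -> str:
    # Concatenate the extracted text of every page into one string.
    with pdfplumber.open(path) as pdf:
        return "\n".join(page.extract_text() or "" for page in pdf.pages)


csv_df = ingest_csv("data.csv")      # hypothetical file name
pdf_text = ingest_pdf("report.pdf")  # hypothetical file name
```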
**Data Transformation and Cleaning:**
Design and implement data transformation routines to convert the raw data into a consistent format. This may involve cleaning, standardizing, and normalizing the data across sources. Handle any data quality issues, such as missing values, outliers, or data inconsistencies.
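A hedged sketch of such a cleaning routine using pandas; the column names ("name", "price") and the outlier thresholds are illustrative assumptions.

```python
import pandas as pd


def clean(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates()
    # Standardize a text column: strip whitespace, normalize case.
    df["name"] = df["name"].str.strip().str.lower()
    # Coerce a numeric column; unparseable values become NaN instead of raising.
    df["price"] = pd.to_numeric(df["price"], errors="coerce")
    # Fill missing numeric values with the column median.
    df["price"] = df["price"].fillna(df["price"].median())
    # Clip extreme outliers to the 1st/99th percentiles.
    low, high = df["price"].quantile([0.01, 0.99])
    df["price"] = df["price"].clip(low, high)
    return df
```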
**Web Scraping and Data Extraction:**
For web sources, implement web scraping techniques to extract structured data from HTML pages. Utilize libraries such as BeautifulSoup or Selenium to navigate web pages, extract relevant data elements, and convert them into a usable format for further processing.
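A minimal requests + BeautifulSoup sketch of this step; the URL and the CSS selectors are placeholders, since the pages the notebook actually scrapes are not documented on this page.

```python
import requests
from bs4 import BeautifulSoup


def scrape_table(url: str) -> list[dict]:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    rows = []
    # "table.listing tr" is a hypothetical selector for the target table.
    for tr in soup.select("table.listing tr"):
        cells = [td.get_text(strip=True) for td in tr.select("td")]
        if cells:  # skip header rows, which contain <th> instead of <td>
            rows.append({"columns": cells})
    return rows


records = scrape_table("https://example.com/listing")  # placeholder URL
```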
**Data Storage and Persistence:**
Choose a suitable data storage solution, such as a relational database, a document store, or a data lake, to store the transformed and integrated data. Implement the necessary mechanisms to load the processed data into the chosen storage system, ensuring data integrity and efficient retrieval. In this project, MongoDB serves as the storage layer.
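Because the project description says the cleaned data was stored in MongoDB, here is a hedged pymongo sketch of the load step; the connection URI and the database/collection names are assumptions.

```python
import pandas as pd
from pymongo import MongoClient


def store(df: pd.DataFrame, uri: str = "mongodb://localhost:27017") -> int:
    # Each DataFrame row becomes one MongoDB document.
    client = MongoClient(uri)
    collection = client["pipeline_db"]["records"]  # hypothetical names
    docs = df.to_dict(orient="records")
    if docs:
        collection.insert_many(docs)
    return len(docs)
```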
**Documentation and Monitoring:**
Document the pipeline architecture, data workflows, and dependencies. Create clear instructions for maintaining and updating the pipeline.

By completing this project, you will have a versatile data engineering pipeline capable of integrating, transforming, and persisting data from diverse sources such as CSV files, web APIs, and PDF documents. This will enable efficient data analysis, reporting, and decision-making processes.