{"id":21232519,"url":"https://github.com/essien1990/webscraping_using_python","last_synced_at":"2025-03-15T02:41:36.951Z","repository":{"id":107275867,"uuid":"417357351","full_name":"essien1990/WebScraping_using_Python","owner":"essien1990","description":"Web Scraping using Python to scrape Tonaton.com and Weather.gov websites to extract specific content, transform the data, and load or store in PostgreSQL database.","archived":false,"fork":false,"pushed_at":"2021-10-19T17:22:11.000Z","size":283,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-01-21T18:31:10.491Z","etag":null,"topics":["jupyter-lab","jupyter-notebook","pandas-dataframe","pgadmin4","postgresql-database","python3","regex","requests","scrapy","scrapy-crawler","scrapy-sp","webscraping"],"latest_commit_sha":null,"homepage":"","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/essien1990.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-10-15T03:29:05.000Z","updated_at":"2022-05-23T10:48:55.000Z","dependencies_parsed_at":null,"dependency_job_id":"823e27e1-c98a-4dac-ad28-9dc5b54128f3","html_url":"https://github.com/essien1990/WebScraping_using_Python","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/essien1990%2FWebScraping_using_Python","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/essien1990%2FWebScraping_using_Python/ta
gs","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/essien1990%2FWebScraping_using_Python/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/essien1990%2FWebScraping_using_Python/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/essien1990","download_url":"https://codeload.github.com/essien1990/WebScraping_using_Python/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243676705,"owners_count":20329431,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["jupyter-lab","jupyter-notebook","pandas-dataframe","pgadmin4","postgresql-database","python3","regex","requests","scrapy","scrapy-crawler","scrapy-sp","webscraping"],"created_at":"2024-11-20T23:52:47.909Z","updated_at":"2025-03-15T02:41:36.922Z","avatar_url":"https://github.com/essien1990.png","language":"Jupyter Notebook","readme":"# WebScraping using Python (BeautifulSoup \u0026 Scrapy)\n- BeautifulSoup, with the lxml parser, was used to scrape the content of the website\n- Requests was used to fetch the website URL\n- Some transformation was done, i.e. 
creating new columns that store the specific weather information needed\n- Pandas was used to store the scraped data in a DataFrame\n- The transformed data was loaded into JSON and CSV file formats\n- SQLAlchemy was used to create a PostgreSQL engine and connection to load the data into the PostgreSQL database\n- [DataFrame stored in Database Weather](https://user-images.githubusercontent.com/5301791/137428662-06a7fbad-047e-436a-86f7-abca0dbdc8ed.png)\n- [DataFrame stored in Database Weather in Schema forecasts](https://user-images.githubusercontent.com/5301791/137428668-0fe365f7-9c22-4fd1-8e0e-03f94f68d1b5.png)\n- A regex pattern was used to extract specific data from the Tonaton web page\n- The Scrapy framework was used to scrape a shop website, extracting the name, price, and link of each item and storing them in CSV and JSON.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fessien1990%2Fwebscraping_using_python","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fessien1990%2Fwebscraping_using_python","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fessien1990%2Fwebscraping_using_python/lists"}