{"id":15036214,"url":"https://github.com/yhangf/pythoncrawler","last_synced_at":"2025-05-15T22:05:25.507Z","repository":{"id":41243279,"uuid":"63214339","full_name":"yhangf/PythonCrawler","owner":"yhangf","description":" :heartpulse:用python编写的爬虫项目集合","archived":false,"fork":false,"pushed_at":"2025-04-17T06:14:18.000Z","size":84,"stargazers_count":1630,"open_issues_count":0,"forks_count":493,"subscribers_count":63,"default_branch":"master","last_synced_at":"2025-05-15T22:04:41.360Z","etag":null,"topics":["python3","scripts","spiders"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/yhangf.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2016-07-13T04:29:41.000Z","updated_at":"2025-05-14T02:17:43.000Z","dependencies_parsed_at":"2024-01-15T23:26:53.233Z","dependency_job_id":"7714e11a-42e9-4df4-9cad-60a242560888","html_url":"https://github.com/yhangf/PythonCrawler","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/yhangf%2FPythonCrawler","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/yhangf%2FPythonCrawler/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/yhangf%2FPythonCrawler/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/yhangf%2FPythonCrawler/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/yhangf","download_url":"http
s://codeload.github.com/yhangf/PythonCrawler/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254430327,"owners_count":22069908,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["python3","scripts","spiders"],"created_at":"2024-09-24T20:30:31.860Z","updated_at":"2025-05-15T22:05:25.500Z","avatar_url":"https://github.com/yhangf.png","language":"Python","readme":"```shell\r\n        (                                                                        \r\n       )\\ )          )    )               (                       (             \r\n      (()/( (     ( /( ( /(               )\\   (       )  (  (    )\\   (   (    \r\n      /(_)))\\ )  )\\()))\\())  (    (    (((_)  )(   ( /(  )\\))(  ((_) ))\\  )(   \r\n      (_)) (()/( (_))/((_)\\   )\\   )\\ ) )\\___ (()\\  )(_))((_)()\\  _  /((_)(()\\  \r\n      | _ \\ )(_))| |_ | |(_) ((_) _(_/(((/ __| ((_)((_)_ _(()((_)| |(_))   ((_)\r\n      |  _/| || ||  _|| ' \\ / _ \\| ' \\))| (__ | '_|/ _` |\\ V  V /| |/ -_) | '_|\r\n      |_|   \\_, | \\__||_||_|\\___/|_||_|  \\___||_|  \\__,_| \\_/\\_/ |_|\\___| |_|   \r\n      |__/  \r\n                                                  —————— by yanghangfeng\r\n```\r\n# \u003cp align=\"center\"\u003ePythonCrawler: a collection of crawler projects written in Python :bug: (the code in this repository is for learning crawling techniques only; users must comply with the laws of the People's Republic of China!)\u003c/p\u003e\r\n\r\n\u003cp align=\"center\"\u003e\r\n    \u003ca href=\"https://github.com/yhangf/PythonCrawler/blob/master/LICENSE\"\u003e\r\n        \u003cimg src=\"https://img.shields.io/cocoapods/l/EFQRCode.svg?style=flat\"\u003e\r\n        
\u003c/a\u003e\r\n    \u003ca href=\"\"\u003e\r\n        \u003cimg src=\"https://img.shields.io/badge/未完-间断性更新-orange.svg\"\u003e\r\n        \u003c/a\u003e\r\n    \u003ca href=\"https://github.com/python/cpython\"\u003e\r\n        \u003cimg src=\"https://img.shields.io/badge/language-python-ff69b4.svg\"\u003e\r\n        \u003c/a\u003e\r\n    \u003ca href=\"https://github.com/yhangf/PythonCrawler\"\u003e\r\n    \u003cimg src=\"https://img.shields.io/github/stars/yhangf/PythonCrawler.svg?style=social\u0026label=Star\"\u003e\r\n        \u003c/a\u003e\r\n    \u003ca href=\"https://github.com/yhangf/PythonCrawler\"\u003e\r\n    \u003cimg src=\"https://img.shields.io/github/forks/yhangf/PythonCrawler.svg?style=social\u0026label=Fork\"\u003e\r\n        \u003c/a\u003e\r\n\u003c/p\u003e\r\n\r\n# spiderFile module overview\r\n\r\n1.    [baidu_sy_img.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/baidu_sy_img.py): **Scrapes Baidu's `HD photography` images.**\r\n2.    [baidu_wm_img.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/baidu_wm_img.py): **Scrapes the `aesthetic mood` section of Baidu Images.**\r\n3.    [get_photos.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/get_photos.py): **Scrapes all images under a given Baidu Tieba topic.**\r\n4.    [get_web_all_img.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/get_web_all_img.py): **Scrapes all images from an entire website.**\r\n5.    [lagou_position_spider.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/lagou_position_spider.py): **Enter any keyword to scrape related job postings in one step and save them to a local file.**\r\n6.    [student_img.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/student_img.py): **Automatically retrieves your own student-record ID photo.**\r\n7.    
[JD_spider.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/JD_spider.py): **Scrapes JD.com product IDs and tags in bulk.**\r\n8.    [ECUT_pos_html.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/ECUT_pos_html.py): **Scrapes all campus-recruitment posts from the school's official site and saves them as HTML, with the images embedded.**\r\n9.    [ECUT_get_grade.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/ECUT_get_grade.py): **Logs in to the school's official site, scrapes grades, and computes the grade-point average.**\r\n10.    [github_hot.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/github_hot.py): **Scrapes trending GitHub projects for popular languages and saves each project's description and homepage URL to a local file.**\r\n11.    [xz_picture_spider.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/xz_picture_spider.py): **At the request of a Zhihu user, scrapes all portrait photos from a certain website.**\r\n12.    [one_img.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/one_img.py): **Scrapes images from the ONE literary site.**\r\n13.    [get_baike.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/get_baike.py): **Enter any keyword to scrape its Baidu Baike introduction.**\r\n14.    [kantuSpider.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/kantuSpider.py): **Scrapes all images from the Kantu picture site.**\r\n15.    [fuckCTF.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/fuckCTF.py): **Uses selenium to log in to the Hetian (合天) site and automatically change the initial password.**\r\n16.    [one_update.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/one_update.py): **Updated ONE-site crawler that also scrapes the daily aphorism.**\r\n17.    [get_history_weather.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/get_history_weather.py): **Scrapes Guangzhou weather data for the first quarter of 2019.**\r\n18.    [search_useful_camera_ip_address.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/search_useful_camera_ip_address.py): **A security primer on weak webcam passwords.**\r\n19.    [get_top_sec_com.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/get_top_sec_com.py): **Uses asynchronous programming to fetch the market-cap ranking of cybersecurity companies on the A-share market and saves it as an image.**\r\n20.    
[get_tj_accident_info.py](https://github.com/yhangf/PythonCrawler/blob/master/spiderFile/get_tj_accident_info.py): **Combines synchronous and asynchronous programming to fetch all accident reports from the Tianjin Emergency Management Bureau.**\r\n---\r\n# spiderAPI module overview\r\n\r\n#### This module provides crawler API wrappers for a few websites. Their coverage is incomplete, so there is plenty of room for improvement if you are interested.\r\n\r\n##### 1. Dianping\r\n\r\n```python\r\nfrom spiderAPI.dianping import *\r\n\r\n'''\r\ncitys = {\r\n    '北京': '2', '上海': '1', '广州': '4', '深圳': '7', '成都': '8', '重庆': '9', '杭州': '3', '南京': '5', '沈阳': '18', '苏州': '6', '天津': '10','武汉': '16', '西安': '17', '长沙': '344', '大连': '19', '济南': '22', '宁波': '11', '青岛': '21', '无锡': '13', '厦门': '15', '郑州': '160'\r\n}\r\n\r\nranktype = {\r\n    '最佳餐厅': 'score', '人气餐厅': 'popscore', '口味最佳': 'score1', '环境最佳': 'score2', '服务最佳': 'score3'\r\n}\r\n'''\r\n\r\nresult = bestRestaurant(cityId=1, rankType='popscore')  # top restaurants by popularity\r\n\r\nshoplist = dpindex(cityId=1, page=1)  # merchant leaderboard\r\n\r\nrestaurantlist = restaurantList('http://www.dianping.com/search/category/2/10/p2')  # restaurant listing\r\n\r\n```\r\n\r\n##### 2. Fetching proxy IPs\r\nScrapes usable [proxy IPs](http://proxy.ipcn.org):\r\n```python\r\nfrom spiderAPI.proxyip import get_enableips\r\n\r\nenableips = get_enableips()\r\n\r\n```\r\n\r\n##### 3. Baidu Maps\r\n\r\nThe official Baidu Maps API imposes some query limits, so this wraps the endpoint used by the web interface instead.\r\n```python\r\nfrom spiderAPI.baidumap import *\r\n\r\ncity_list = citys()  # fetch the city list (renamed so the result does not shadow the citys() function)\r\nresult = search(keyword=\"美食\", citycode=\"257\", page=1)  # fetch search results\r\n\r\n```\r\n\r\n##### 4. Simulated GitHub login\r\n```python\r\nfrom spiderAPI.github import GitHub\r\n\r\ngithub = GitHub()\r\ngithub.login()  # prompts for your username and password\r\ngithub.show_timeline()  # fetch the GitHub home timeline\r\n# more features are left for you to discover\r\n```\r\n\r\n##### 5. Lagou\r\n```python\r\nfrom spiderAPI.lagou import *\r\n\r\nlagou_spider(key='数据挖掘', page=1)  # job postings for the keyword '数据挖掘' (data mining)\r\n```\r\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fyhangf%2Fpythoncrawler","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fyhangf%2Fpythoncrawler","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fyhangf%2Fpythoncrawler/lists"}