{"id":48065513,"url":"https://github.com/wsijp/scrapepath","last_synced_at":"2026-04-04T14:35:57.892Z","repository":{"id":43367391,"uuid":"157008691","full_name":"wsijp/scrapepath","owner":"wsijp","description":"Web scraping syntax","archived":false,"fork":false,"pushed_at":"2022-07-06T19:55:48.000Z","size":24,"stargazers_count":0,"open_issues_count":1,"forks_count":0,"subscribers_count":1,"default_branch":"master","last_synced_at":"2025-09-29T13:27:13.999Z","etag":null,"topics":["easy","python","scraping-python","syntax","templates"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-2-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/wsijp.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-11-10T18:18:41.000Z","updated_at":"2019-06-13T06:23:27.000Z","dependencies_parsed_at":"2022-07-08T00:20:29.895Z","dependency_job_id":null,"html_url":"https://github.com/wsijp/scrapepath","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/wsijp/scrapepath","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wsijp%2Fscrapepath","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wsijp%2Fscrapepath/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wsijp%2Fscrapepath/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wsijp%2Fscrapepath/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/wsijp","download_url":"https://codeload.github.com/wsijp/scrapepath/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/host
s/GitHub/repositories/wsijp%2Fscrapepath/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31402987,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-04T10:20:44.708Z","status":"ssl_error","status_checked_at":"2026-04-04T10:20:06.846Z","response_time":60,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["easy","python","scraping-python","syntax","templates"],"created_at":"2026-04-04T14:35:57.674Z","updated_at":"2026-04-04T14:35:57.861Z","avatar_url":"https://github.com/wsijp.png","language":"Python","readme":"\nScrapepath\n----------\n\n[Scrapepath](https://github.com/wsijp/scrapepath) is a templated web scraping syntax. 
[Scrapepath is pip installable](https://pypi.org/project/scrapepath/) via `pip install scrapepath`.\n\n\nRequirements\n------------\n\nInstall the required Python dependencies using the provided requirements.txt file:\n\n```bash\npip install -r requirements.txt\n```\n\n\nUsage\n-----\n\nTo run an example, execute on the command line without arguments:\n\n```bash\n./parser.py\n```\n\nTo use within Python:\n\n```python\nimport json\n\nfrom parser import NodeParser\n\nnp = NodeParser(soup_template, soup, live_url)\nnp.hop_template()\nprint(json.dumps(np.result_dict, indent=2, default=str))\n```\n\nHere, `soup_template` is a `BeautifulSoup` object of the template file, `soup` is a `BeautifulSoup` object of the scraped page, and `live_url` is the URL of the scraped page.\n\nTemplates\n---------\n\nHTML pages are scraped using HTML templates, which combine the most important tags of the target page with scraping statements.\n\nTemplates are HTML files whose nested tags form a path leading to the element of interest.\n\nThe parser is based on `BeautifulSoup`.\n\nExample 1: Scraping data\n-----------------------\n\nThe following example uses the template `examples/example1a.html` and the scraped page `examples/scraped1.html`. Run it using:\n\n`./parser.py examples/example1a.html examples/scraped1.html`\n\nThis scrapes the target page `scraped1.html` using the template `example1a.html`. The text item \"Tea\" is scraped from the target page using the `record` attribute in the template page. A path to the target text (\"Tea\") is specified in the template using tags that correspond to the target page. 
So, to scrape from:\n\n```html\n\u003cul class = \"my_list\"\u003e\n  \u003cli class = \"my_item\"\u003eCoffee\u003c/li\u003e\n  \u003cli class = \"my_item\"\u003e\u003cspan class = \"cuppa\"\u003eTea\u003c/span\u003e\u003c/li\u003e\n  \u003cli class = \"my_item\"\u003eMilk\u003c/li\u003e\n\u003c/ul\u003e\n```\n\nUse template:\n\n```html\n\u003cul class = \"my_list\"\u003e\n  \u003cspan class = \"cuppa\" record = \"text as favorite\"\u003e\u003c/span\u003e\n\u003c/ul\u003e\n```\n\nThis yields a dictionary containing the scraped data under the key \"favorite\", as specified in the `record` attribute:\n\n```json\n{\n  \"favorite\": \"Tea\"\n}\n```\n\nThe `text` statement within the `record` attribute corresponds to a function that obtains the text from inside the HTML tag, and `favorite` is the key to record the data against. The `text` function can be replaced with custom Python functions.\n\nStarting from the outer node, `\u003cul\u003e`, in the template, the parser looks for the first node in the scraped page that matches the template node in type and attributes. In this case, it matches a `ul` with a `ul`, and class `my_list` with class `my_list`. Then the same search takes place using the template node's children, now confined to the children of the matched scraped node. So nested template nodes represent paths. The `\u003cli\u003e` node is not included in the template, as it would point the search to the first element of the list.\n\nIn this case, nesting the template nodes is needlessly specific. There are no other nodes of class \"cuppa\", so we can omit the `\u003cul\u003e` and `\u003cli\u003e` items, and the following template will record the same data:\n\n```html\n\u003cspan class = \"cuppa\" record = \"text as favorite\"\u003e\u003c/span\u003e\n```\n\nSo paths along many nested nodes in the scraped page can be summarized by only a few template nodes that define a unique path to the scraped data.\n\n\nLoops:\n\nA `for` loop scrapes all items in the list. 
In this simple example, we record only one variable (`item_text`) per item:\n\nTemplate:\n\n```html\n    \u003cul class = \"my_list\"\u003e\n      \u003cfor items = \"items\" condition = \"i \u003c 5\"\u003e\n        \u003cli class =\"my_item\" record = \"text as item_text\"\u003e\n        \u003c/li\u003e\n      \u003c/for\u003e\n    \u003c/ul\u003e\n```\n\nThis results in the output:\n\n```json\n{\n  \"items\": [\n    {\n      \"item_text\": \"Coffee\"\n    },\n    {\n      \"item_text\": \"Tea\"\n    },\n    {\n      \"item_text\": \"Milk\"\n    },\n    {\n      \"item_text\": \"Biscuits\"\n    },\n    {\n      \"item_text\": \"Chocolate\"\n    }\n  ]\n}\n```\n\nHere, the parser matches all the children of the `\u003cfor\u003e` template node against the children of the `\u003cul\u003e` node in the scraped page `scraped1.html`. Run the example using: `./parser.py examples/example1b.html examples/scraped1.html`. The `condition` attribute indicates that only the first 5 items should be recorded, where `i` is the loop counter variable.\n\nExample 2: for loops on mixed nodes\n----------------------------------\n\nIn the following HTML, a `\u003cfor\u003e` template loop node needs to enclose two template nodes, one for each tag (`div` and `p`) and class (`my_item` and `milk_class`):\n\nTo scrape from:\n\n```html\n\u003cdiv class = \"my_list\"\u003e\n  \u003cdiv class = \"my_item\"\u003eCoffee\u003c/div\u003e\n  \u003cdiv class = \"my_item\"\u003e\u003cspan class = \"cuppa\"\u003eTea\u003c/span\u003e\u003c/div\u003e\n  \u003cp class = \"milk_class\"\u003eMilk\u003c/p\u003e\n  \u003cdiv class = \"my_item\"\u003eBiscuits\u003c/div\u003e\n  Chocolate\n\u003c/div\u003e\n```\n\nUse template:\n\n```html\n\u003cdiv class = \"my_list\"\u003e\n  \u003cfor items = \"items\" \u003e\n    \u003cdiv class =\"my_item\" record = \"text as item_text\"\u003e\u003c/div\u003e\n    \u003cp class =\"milk_class\" record = \"text as item_text\"\u003e\u003c/p\u003e\n  
\u003c/for\u003e\n\u003c/div\u003e\n```\n\nHowever, the `\u003cfor\u003e` template loop node is unable to record the text element \"Chocolate\", as the `\u003cfor\u003e` only looks for proper nodes among the children of the `\u003cdiv class = \"my_list\"\u003e` node. To record it, a `\u003cforchild\u003e` template loop node is needed, along with a `\u003cstr\u003e` template node to record the `NavigableString` element \"Chocolate\":\n\nTemplate:\n\n```html\n\u003cdiv class = \"my_list\"\u003e\n  \u003cforchild items = \"items_with_string\" \u003e\n    \u003cdiv class =\"my_item\" record = \"text as item_text\"\u003e\u003c/div\u003e\n    \u003cp class =\"milk_class\" record = \"text as item_text\"\u003e\u003c/p\u003e\n    \u003cstr record = \"text as item_text\"\u003e\u003c/str\u003e\n  \u003c/forchild\u003e\n\u003c/div\u003e\n```\n\nIn this case, the parser looks for the first match to the first template node (the first child of the `\u003cforchild\u003e` node), and loops over its siblings, probing with all template nodes (the children of the `\u003cforchild\u003e` node). 
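\nFor instance, the sibling-probing loop described above can be sketched as follows (illustrative pseudocode only; the matching strategy shown is an assumption about the approach, not the library's actual internals):\n\n```\nfirst = first child of the scraped parent that matches any template child\nfor node in [first] + following_siblings(first):\n    for template_node in children_of(loop_node):\n        if matches(node, template_node):  # tag and attributes, or a \u003cstr\u003e node for NavigableStrings\n            apply the record statement and append the result to the items list\n```\n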
Run this example using `examples/example1b.html` and `examples/scraped1.html`.\n\nExample 3: Jumping to linked pages\n---------------------------------\n\nFollow links on pages using the `\u003cjump\u003e` template node:\n\nTo scrape from:\n\n```html\n\u003ca href=\"example_linked.html\"\u003e\u003c/a\u003e\n```\n\nUse template:\n\n```html\n    \u003ca record = \"href as my_link\"\u003e\n      \u003cjump on = \"my_link\"\u003e\n        \u003cibody\u003e\n          \u003cdiv class = \"message\" record = \"text as msg_from_link\"\u003e\u003c/div\u003e\n        \u003c/ibody\u003e\n      \u003c/jump\u003e\n    \u003c/a\u003e\n```\n\nHere, the nodes within the `\u003cjump\u003e` node act on the linked page.\n\nThis example is invoked with:\n\n```bash\n./parser.py examples/example3a.html examples/scraped3.html\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwsijp%2Fscrapepath","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fwsijp%2Fscrapepath","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwsijp%2Fscrapepath/lists"}