{"id":13394092,"url":"https://github.com/propublica/upton","last_synced_at":"2026-01-27T02:05:40.970Z","repository":{"id":8699462,"uuid":"10364296","full_name":"propublica/upton","owner":"propublica","description":"A batteries-included framework for easy web-scraping. Just add CSS! (Or do more.)","archived":false,"fork":false,"pushed_at":"2018-12-26T02:48:38.000Z","size":3038,"stargazers_count":1601,"open_issues_count":11,"forks_count":110,"subscribers_count":77,"default_branch":"master","last_synced_at":"2026-01-14T09:28:05.087Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"HTML","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/propublica.png","metadata":{"files":{"readme":"README.md","changelog":"changelog","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2013-05-29T16:41:10.000Z","updated_at":"2025-12-06T14:39:13.000Z","dependencies_parsed_at":"2022-09-19T06:51:32.088Z","dependency_job_id":null,"html_url":"https://github.com/propublica/upton","commit_stats":null,"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/propublica/upton","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/propublica%2Fupton","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/propublica%2Fupton/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/propublica%2Fupton/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/propublica%2Fupton/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/propublica","download_url":"https://codeload.github.com/propublica/upton/tar.gz/refs/heads/master","sbom_url":"https://rep
os.ecosyste.ms/api/v1/hosts/GitHub/repositories/propublica%2Fupton/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28796962,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-27T01:07:07.743Z","status":"online","status_checked_at":"2026-01-27T02:00:07.755Z","response_time":168,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-30T17:01:08.680Z","updated_at":"2026-01-27T02:05:40.950Z","avatar_url":"https://github.com/propublica.png","language":"HTML","readme":"# Upton\n\nUpton is a framework for easy web-scraping with a useful debug mode that doesn't hammer your target's servers. It does the repetitive parts of writing scrapers, so you only have to write the unique parts for each site.\n\n## Installation\n\nAdd the gem to your `Gemfile` and run the bundle command:\n\n```ruby\ngem 'upton'\n```\n\n## Documentation\n\nWith Upton, you can scrape complex sites to a CSV in just a few lines of code:\n\n```ruby\nscraper = Upton::Scraper.new(\"http://www.propublica.org\", \"section#river h1 a\")\nscraper.scrape_to_csv \"output.csv\" do |html|\n  Nokogiri::HTML(html).search(\"#comments h2.title-link\").map \u0026:text\nend\n```\n\nJust specify the URL of a page of links (or simply a list of links), an XPath expression or CSS selector that matches those links, and a block describing what to do with the content of each page you've scraped. 
Upton comes with some pre-written blocks (Procs, technically) for scraping simple lists and tables, like the `list` block used in the Examples below.\n\nUpton operates on the theory that, for most scraping projects, you need to scrape two types of pages:\n\n1. Instance pages, which are the goal of your scraping, e.g. job listings or news articles.\n1. Index pages, which list instance pages. For example, a job search site's search page or a newspaper's homepage.\n\nFor more complex use cases, subclass `Upton::Scraper` and override the relevant methods. If you're scraping links from an API, you would override `get_index`; if you need to log in before scraping a site or do something special with the scraped instance page, you would override `get_instance`.\n\nThe `get_instance` and `get_index` methods use a protected method `get_page(url)` which, well, gets a page. That's not very special. The more interesting part is that `get_page(url, stash)` transparently stashes the response of each request if the second parameter, `stash`, is true. Whenever you repeat a request (with `true` as the second parameter), the stashed HTML is returned without going to the server. This is helpful in the development stages of a project when you're testing some aspect of the code and don't want to hit a server each time. If you are using `get_instance` and `get_index`, this can be enabled or disabled per instance of `Upton::Scraper` or its subclasses with the `@debug` option. Set the `stash` parameter of `get_page` directly only if you've overridden `get_instance` or `get_index` in a subclass.\n\nUpton also sleeps (by default) 30 seconds between non-stashed requests, to reduce load on the server you're scraping. This is configurable with the `@sleep_time_between_requests` option.\n\nUpton can handle pagination too. Scraping paginated index pages that use a query string parameter to track the current page (e.g. `/search?q=test\u0026page=2`) is possible by setting `@paginated` to true. 
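\n\nAs a rough sketch of how the options described so far might be combined (the URL and selector here are placeholders, and this assumes these settings are exposed as accessors, like `paginated` in the Examples below):\n\n```ruby\nscraper = Upton::Scraper.new(\"http://example.com/articles\", \"ul.articles a\")\nscraper.debug = true                      # stash responses during development\nscraper.sleep_time_between_requests = 5   # seconds between non-stashed requests (default: 30)\nscraper.paginated = true                  # follow ?page=2, ?page=3, ...\n```\n\n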
Use `@pagination_param` to set the query string parameter used to specify the current page (the default value is `page`). Use `@pagination_max_pages` to specify the number of pages to scrape (the default is two pages). You can also set `@pagination_interval` if you want to increment pages by a number other than 1 (e.g. if the first page is 1 and lists instances 1 through 20, the second page is 21 and lists instances 21 through 40, and so on). See the Examples section below.\n\nTo handle non-standard pagination, you can override the `next_index_page_url` and `next_instance_page_url` methods; Upton will request the URL returned by each of these methods and process its contents.\n\n\u003cb\u003eFor more complete documentation\u003c/b\u003e, see [the RDoc](http://rubydoc.info/gems/upton/frames/index).\n\n\u003cb\u003eImportant Note:\u003c/b\u003e Upton is alpha software. The API may change at any time. \n\n#### How is this different from Nokogiri?\nUpton is, in essence, sugar around RestClient and Nokogiri. If you just used those tools by themselves to write scrapers, you'd be responsible for writing code to fetch, save (maybe), debug and sew together all the pieces in a slightly different way for each scraper. Upton does most of that work for you, so you can skip the boilerplate.\n\n#### Upton doesn't quite fit your needs?\nHere are some similar libraries to check out for inspiration. 
No promises, since I've never used them, but they seem similar and were [recommended by various HN commenters](https://news.ycombinator.com/item?id=6086031): \n\n- [Pismo](https://github.com/peterc/pismo)\n- [Spidey](https://github.com/joeyAghion/spidey)\n- [Anemone](http://anemone.rubyforge.org/)\n- [Pupa.rb](https://github.com/opennorth/pupa-ruby) / [Pupa](https://github.com/opencivicdata/pupa)\n\nAnd these are some libraries that do related things:\n\n- [SelectorGadget](http://selectorgadget.com/)\n- [HayStax](https://github.com/danhillreports/haystax)\n\n\n## Examples\n\nIf you want to scrape ProPublica's website with Upton, this is how you'd do it. (Scraping our [RSS feed](http://feeds.propublica.org/propublica/main) would be smarter, but not every site has a full-text RSS feed...)\n\n```ruby\nscraper = Upton::Scraper.new(\"http://www.propublica.org\", \"section#river section h1 a\")\nscraper.scrape do |article_html_string|\n  puts \"here is the full html content of the ProPublica article listed on the homepage: \"\n  puts \"#{article_html_string}\"\n  #or, do other stuff here.\nend\n```\n\nSimple sites can be scraped with the pre-written `list` block in `Upton::Utils`, as below:\n\n```ruby\nscraper = Upton::Scraper.new(\"http://nytimes.com\", \"ul.headlinesOnly a\")\nscraper.scrape_to_csv(\"output.csv\", \u0026Upton::Utils.list(\"h6.byline\"))\n```\n\nA `table` block also exists in `Upton::Utils` to scrape tables to an array of arrays, as below:\n\n```ruby\n\u003e scraper = Upton::Scraper.new([\"http://website.com/story.html\"])\n\u003e scraper.scrape(\u0026Upton::Utils.table(\"//table[2]\"))\n[[\"Jeremy\", \"$8.00\"], [\"John Doe\", \"$15.00\"]]\n```\n\nThis example shows how to scrape the first three pages of ProPublica's search results for the term `tools`:\n\n```ruby\nscraper = Upton::Scraper.new(\"http://www.propublica.org/search/search.php?q=tools\",\n                             \".compact-list a.title-link\")\nscraper.paginated = 
true\nscraper.pagination_param = 'p'    # default is 'page'\nscraper.pagination_max_pages = 3  # default is 2\nscraper.scrape_to_csv(\"output.csv\", \u0026Upton::Utils.list(\"h2\"))\n```\n\n\n## Contributing\n\nI'd love to hear from you if you're using Upton. I also appreciate your suggestions/complaints/bug reports/pull requests. If you're interested, check out the issues tab or [drop me a note](http://github.com/jeremybmerrill).\n\nIn particular, if you have a common, *abstract* use case, please add it to [lib/utils.rb](https://github.com/propublica/upton/blob/master/lib/utils.rb). Check out the `table_to_csv` and `list_to_csv` methods for examples.\n\n(The pull request process is pretty easy. Fork the project on GitHub (or via the `git` CLI), make your changes, then submit a pull request on GitHub.) \n\n## Why \"Upton\"\n\nUpton Sinclair was a pioneering, muckraking journalist who is most famous for _The Jungle_, a novel portraying the reality of immigrant labor struggles in Chicago meatpacking plants at the start of the 1900s. Upton, the gem, sprang out of a ProPublica project pertaining to labor issues.\n\n## Notes\n\nTest data is copyrighted by either ProPublica or various Wikipedia contributors.\nIn either case, it's reproduced here under a Creative Commons license. In ProPublica's case, it's BY-NC-ND; in Wikipedia's it's BY-SA.\n","funding_links":[],"categories":["HTML","All","Web Crawling","Ruby"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpropublica%2Fupton","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fpropublica%2Fupton","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpropublica%2Fupton/lists"}