# snscrape

snscrape is a scraper for social networking services (SNS). It scrapes things like user profiles, hashtags, or searches and returns the discovered items, e.g. the relevant posts.

The following services are currently supported:

* Facebook: user profiles, groups, and communities (aka visitor posts)
* Instagram: user profiles, hashtags, and locations
* Mastodon: user profiles and toots (single or thread)
* Reddit: users, subreddits, and searches (via Pushshift)
* Telegram: channels
* Twitter: users, user profiles, hashtags, searches (live tweets, top tweets, and users), tweets (single or surrounding thread), list posts, communities, and trends
* VKontakte: user profiles
* Weibo (Sina Weibo): user profiles

## Requirements
snscrape requires Python 3.8 or higher.
The Python package dependencies are installed automatically when you install snscrape.

Note that one of the dependencies, lxml, also requires libxml2 and libxslt to be installed.

## Installation
    pip3 install snscrape

If you want to use the development version:

    pip3 install git+https://github.com/JustAnotherArchivist/snscrape.git

## Usage
### CLI
The generic syntax of snscrape's CLI is:

    snscrape [GLOBAL-OPTIONS] SCRAPER-NAME [SCRAPER-OPTIONS] [SCRAPER-ARGUMENTS...]

`snscrape --help` and `snscrape SCRAPER-NAME --help` provide details on the options and arguments. `snscrape --help` also lists all available scrapers.

The default output of the CLI is the URL of each result.

Some noteworthy global options are:

* `--jsonl` to get output as JSONL. This includes all information extracted by snscrape (e.g. message content, datetime, images; details vary by scraper).
* `--max-results NUMBER` to only return the first `NUMBER` results.
* `--with-entity` to get an item on the entity being scraped, e.g. the user or channel. This is not supported on all scrapers. (You can use this together with `--max-results 0` to only fetch the entity info.)

#### Examples
Collect all tweets by Jason Scott (@textfiles):

    snscrape twitter-user textfiles

It's usually useful to redirect the output to a file for further processing, e.g. in bash using the filename `twitter-@textfiles`:

```bash
snscrape twitter-user textfiles >twitter-@textfiles
```

To get the latest 100 tweets with the hashtag #archiveteam:

    snscrape --max-results 100 twitter-hashtag archiveteam

### Library
It is also possible to use snscrape as a library in Python, but this is currently undocumented.

## Issue reporting
If you discover an issue with snscrape, please report it at <https://github.com/JustAnotherArchivist/snscrape/issues>. If you use the CLI, please run snscrape with `-vv` and include the log output in the issue.
If you use snscrape as a module, please enable debug-level logging using `import logging; logging.basicConfig(level = logging.DEBUG)` (before using snscrape at all) and include the log output in the issue.

### Dump files
In some cases, debugging may require more information than is available in the log. The CLI has a `--dump-locals` option that enables dumping all local variables within snscrape based on important log messages (rather than, by default, only on crashes). Note that the dump files may contain sensitive information in some cases and could potentially be used to identify you (e.g. if the service includes your IP address in its response). If you prefer to arrange a file transfer privately, just mention that in the issue.

## License
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
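Since the library interface mentioned under Usage is undocumented, the following is only a minimal sketch of module-level use. The module path `snscrape.modules.twitter`, the `TwitterUserScraper` class, and its `get_items()` method match recent versions of snscrape, but they are assumptions here, not a stable documented API, and may change between releases.

```python
# Hypothetical sketch of snscrape's undocumented library interface.
# The names below (snscrape.modules.twitter, TwitterUserScraper,
# get_items) reflect recent versions and are assumptions, not a stable API.
try:
    import snscrape.modules.twitter as sntwitter
except ImportError:  # snscrape not installed; the sketch still parses
    sntwitter = None


def collect_items(username, limit=10):
    """Return up to `limit` items from a user's timeline as a list."""
    if sntwitter is None:
        raise RuntimeError("snscrape is not installed")
    scraper = sntwitter.TwitterUserScraper(username)
    items = []
    for i, item in enumerate(scraper.get_items()):
        if i >= limit:
            break
        items.append(item)
    return items
```

Each yielded item is a dataclass-like object whose fields roughly correspond to the keys seen in the CLI's `--jsonl` output.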
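The `--jsonl` output described under Usage is straightforward to post-process in Python with only the standard library. The sketch below parses JSONL text line by line; the sample records and field names (`url`, `date`, `content`) are illustrative assumptions, since the exact fields vary by scraper.

```python
import json


def parse_jsonl(text):
    """Parse JSONL (one JSON object per line) into a list of dicts."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]


# Hypothetical sample of what `snscrape --jsonl ...` might emit;
# the real field names depend on the scraper used.
sample = "\n".join([
    '{"url": "https://example.com/post/1", "date": "2023-01-01T00:00:00+00:00", "content": "first"}',
    '{"url": "https://example.com/post/2", "date": "2023-01-02T00:00:00+00:00", "content": "second"}',
])

items = parse_jsonl(sample)
urls = [item["url"] for item in items]
```

In practice you would read the file produced by a redirect (e.g. `snscrape --jsonl twitter-user textfiles > out.jsonl`) instead of an inline string.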