{"id":14978793,"url":"https://github.com/altimis/scweet","last_synced_at":"2026-04-01T19:41:56.673Z","repository":{"id":37211654,"uuid":"321375645","full_name":"Altimis/Scweet","owner":"Altimis","description":"A simple and unlimited twitter scraper : scrape tweets, likes, retweets, following, followers, user info, images...","archived":false,"fork":false,"pushed_at":"2025-01-14T16:57:24.000Z","size":28903,"stargazers_count":1122,"open_issues_count":93,"forks_count":232,"subscribers_count":17,"default_branch":"master","last_synced_at":"2025-04-03T09:47:27.856Z","etag":null,"topics":["dowload-images","followers","following","python","save-image","scrape","scrape-followers","scrape-following","scrape-images","scrape-likes","scrape-tweets","scraper","scraping","selenium-webdriver","tweets","twitter","twitter-scraper"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Altimis.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null},"funding":{"custom":["https://www.paypal.me/scweet"]}},"created_at":"2020-12-14T14:36:34.000Z","updated_at":"2025-04-01T08:37:49.000Z","dependencies_parsed_at":"2025-03-27T09:04:04.199Z","dependency_job_id":"71615cc9-f2f9-4b69-9f4a-f1a4c9f61d84","html_url":"https://github.com/Altimis/Scweet","commit_stats":{"total_commits":194,"total_committers":17,"mean_commits":"11.411764705882353","dds":"0.23711340206185572","last_synced_commit":"76e7086a725980dbd5cf8d46bfc27bd4c1d6816f"},"previous_names":[],"tags_count":13,"template":false,"template_full_name":null,"re
pository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Altimis%2FScweet","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Altimis%2FScweet/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Altimis%2FScweet/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Altimis%2FScweet/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Altimis","download_url":"https://codeload.github.com/Altimis/Scweet/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248261916,"owners_count":21074225,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["dowload-images","followers","following","python","save-image","scrape","scrape-followers","scrape-following","scrape-images","scrape-likes","scrape-tweets","scraper","scraping","selenium-webdriver","tweets","twitter","twitter-scraper"],"created_at":"2024-09-24T13:58:24.987Z","updated_at":"2026-04-01T19:41:56.656Z","avatar_url":"https://github.com/Altimis.png","language":"Python","readme":"[![Tests](https://github.com/Altimis/Scweet/actions/workflows/tests.yml/badge.svg)](https://github.com/Altimis/Scweet/actions/workflows/tests.yml)\n[![PyPI Version](https://img.shields.io/pypi/v/scweet.svg)](https://pypi.org/project/scweet/)\n[![PyPI 
Downloads](https://static.pepy.tech/badge/scweet/month)](https://pepy.tech/projects/scweet)\n[![Stars](https://img.shields.io/github/stars/Altimis/Scweet)](https://github.com/Altimis/Scweet/stargazers)\n[![License](https://img.shields.io/github/license/Altimis/scweet)](https://github.com/Altimis/scweet/blob/main/LICENSE)\n[![Scweet Actor Status](https://apify.com/actor-badge?actor=altimis/scweet)](https://apify.com/altimis/scweet)\n\n# Scweet — Twitter / X Scraper\n\nScrape tweets, profiles, followers and more from Twitter/X. **No official API key needed** — uses X's own web GraphQL API, authenticated with your browser cookies.\n\n*Last verified working: March 2026*\n\n**What you can scrape:**\n- **Tweets** — by keyword, hashtag, user, date range, engagement filters, language, location\n- **Profile timelines** — a user's full tweet history\n- **Followers / Following** — full account lists at scale\n- **User profiles** — bio, follower count, verification status, and more\n\n---\n\n## Get started\n\n### Hosted — no setup needed\n\nThe quickest way to get Twitter/X data: run on Apify with no code, no cookies, and no account management. Free tier included.\n\n[![Run on Apify](https://apify.com/static/run-on-apify-button.svg)](https://apify.com/altimis/scweet?fpr=a40q9)\n\n|  | Free | Paid |\n|---|---|---|\n| Tweets / day | 1,000 | Unlimited |\n| Speed | Standard | Up to 1,000 / min |\n| Price | Free | $0.30 / 1,000 tweets |\n| Export | JSON · CSV · XLSX | JSON · CSV · XLSX |\n\n---\n\n### Python library\n\n**1. Install**\n\n```bash\npip install -U Scweet\n```\n\n**2. 
Get your `auth_token`**\n\n**From your own account:** Log into [x.com](https://x.com) → DevTools `F12` → **Application** → **Cookies** → `https://x.com` → copy the `auth_token` value.\n\n**Need dedicated accounts?** You can buy ready-to-use X accounts from an account provider and use them directly with Scweet. Paste the `auth_token` alone and Scweet auto-bootstraps the `ct0` CSRF token — or use the `cookies.json` format below for multiple accounts at once.\n\n**3. Scrape**\n\n```python\nfrom Scweet import Scweet\n\n# First run: credentials are stored in scweet_state.db automatically\n# Use a proxy to avoid rate limits and bans\n# All methods have async variants: asearch(), aget_profile_tweets(), aget_followers(), ...\ns = Scweet(auth_token=\"YOUR_AUTH_TOKEN\", proxy=\"http://user:pass@host:port\")\n\n# Search and save to CSV  (save_format=\"json\" or \"both\" also works; use save_dir= and save_name= to control the output path)\ntweets = s.search(\"bitcoin\", since=\"2025-01-01\", limit=500, save=True)\n\n# Profile timeline\ntweets = s.get_profile_tweets([\"elonmusk\"], limit=200)\n\n# Followers\nusers = s.get_followers([\"elonmusk\"], limit=1000)\n\n# Next run: reuse provisioned accounts — no credentials needed again\ns = Scweet(db_path=\"scweet_state.db\")\ntweets = s.search(\"ethereum\", limit=500, save=True)\n```\n\n**Multiple accounts with per-account proxies** — for higher throughput and reduced ban risk:\n\n```json\n[\n  { \"username\": \"acct1\", \"cookies\": { \"auth_token\": \"...\" }, \"proxy\": \"http://user1:pass1@host1:port1\" },\n  { \"username\": \"acct2\", \"cookies\": { \"auth_token\": \"...\" }, \"proxy\": \"http://user2:pass2@host2:port2\" }\n]\n```\n\n```python\ns = Scweet(cookies_file=\"cookies.json\")  # proxies are read from the file, one per account\n```\n\n\u003e Always set `limit` — without it, scraping continues until your account's daily 
cap is hit.\n\n\u003e For the full list of supported search operators, see [twitter-advanced-search](https://github.com/igorbrigadir/twitter-advanced-search).\n\n**From the CLI — no Python code needed:**\n\n```bash\n# Search with proxy, save to CSV\nscweet --auth-token YOUR_AUTH_TOKEN --proxy http://user:pass@host:port search \"bitcoin\" --since 2025-01-01 --limit 500 --save\n\n# Followers, saved as JSON\nscweet --auth-token YOUR_AUTH_TOKEN followers elonmusk --limit 1000 --save --save-format json\n```\n\nFor structured search filters, async patterns, resume, multiple accounts, and the full API reference — see [**Full Documentation**](DOCUMENTATION.md).\n\n---\n\n## Why Scweet?\n\n| | twint | snscrape | twscrape | **Scweet** |\n|---|---|---|---|---|\n| **Works in 2026** | ❌ unmaintained | ❌ broken | ✅ | ✅ |\n| Cookie / token auth | ❌ | ❌ | ✅ | ✅ |\n| Multi-account pooling | ❌ | ❌ | ✅ | ✅ |\n| Proxy support | ❌ | ❌ | ✅ | ✅ |\n| Resume interrupted scrapes | ❌ | ❌ | ❌ | ✅ |\n| Built-in CSV / JSON output | ✅ | ✅ | ❌ | ✅ |\n| Sync + async API | ❌ | ❌ | Async only | ✅ both |\n| Hosted, no-setup option | ❌ | ❌ | ❌ | ✅ Apify |\n| Active maintenance | ❌ | ❌ | ⚠️ | ✅ |\n\n[twint](https://github.com/twintproject/twint) has been unmaintained since 2023. [snscrape](https://github.com/JustAnotherArchivist/snscrape) broke after X's backend changes. [twscrape](https://github.com/vladkens/twscrape) is the closest active alternative — worth knowing, but async-only, no built-in file output, and no resume support.\n\n---\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eFAQ\u003c/strong\u003e\u003c/summary\u003e\n\n\u003cbr\u003e\n\n**Does it work without an official Twitter API key?**\nYes. Scweet calls X's internal GraphQL API — the same one the web app uses. No developer account or API key required.\n\n**Is it a replacement for twint or snscrape?**\nYes. Both are broken as of 2024–2025. 
Scweet uses a different, currently working approach: cookies + GraphQL instead of legacy unauthenticated endpoints.\n\n**How many tweets can I scrape?**\nA single account typically handles hundreds to a few thousand tweets per day before hitting rate limits. Multi-account pooling scales this proportionally. The hosted Apify actor manages accounts and rate limits automatically.\n\n**Will my account get banned?**\nNever use your personal account — use dedicated accounts only. To further reduce risk: use **multiple accounts** (distributes the load across them) and pair each with a **proxy** (prevents all requests coming from a single IP). The Apify actor handles both automatically — managed accounts and proxies are included.\n\n**Does it work for private accounts?**\nNo. Only publicly visible content is accessible.\n\n**Does it still work in 2025 / 2026?**\nYes — last verified working in March 2026 against X's current GraphQL API.\n\n\u003c/details\u003e\n\n---\n\n## Documentation\n\nFull API reference, all config options, structured search filters, async patterns, resume, proxies, and troubleshooting:\n\n→ [**DOCUMENTATION.md**](DOCUMENTATION.md)\n\n---\n\n## Community\n\nHave a question or want to share what you built with Scweet?\nOpen a thread in [**GitHub Discussions**](https://github.com/Altimis/Scweet/discussions).\n\n**Found it useful? [Star the repo ⭐](https://github.com/Altimis/Scweet/stargazers)** — it helps others find Scweet.\n\n---\n\n## Contributing\n\nBug reports, feature suggestions, and PRs are welcome. See [CONTRIBUTING.md](CONTRIBUTING.md).\n\n---\n\n*MIT License*\n","funding_links":["https://www.paypal.me/scweet"],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Faltimis%2Fscweet","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Faltimis%2Fscweet","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Faltimis%2Fscweet/lists"}