{"id":15011368,"url":"https://github.com/the-convocation/twitter-scraper","last_synced_at":"2026-02-18T06:03:10.606Z","repository":{"id":57684305,"uuid":"493117802","full_name":"the-convocation/twitter-scraper","owner":"the-convocation","description":"A port of n0madic/twitter-scraper to Node.js.","archived":false,"fork":false,"pushed_at":"2025-12-31T05:30:41.000Z","size":4034,"stargazers_count":563,"open_issues_count":37,"forks_count":85,"subscribers_count":12,"default_branch":"main","last_synced_at":"2026-01-04T06:52:15.222Z","etag":null,"topics":["node-js","scraper","twitter"],"latest_commit_sha":null,"homepage":"https://the-convocation.github.io/twitter-scraper/","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/the-convocation.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2022-05-17T06:09:47.000Z","updated_at":"2026-01-03T15:37:44.000Z","dependencies_parsed_at":"2025-11-14T04:04:33.293Z","dependency_job_id":null,"html_url":"https://github.com/the-convocation/twitter-scraper","commit_stats":{"total_commits":208,"total_committers":11,"mean_commits":18.90909090909091,"dds":"0.19711538461538458","last_synced_commit":"910b03847024fb820ab0206198adb93b6670b8f1"},"previous_names":[],"tags_count":69,"template":false,"template_full_name":null,"purl":"pkg:github/the-convocation/twitter-scraper","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/the-convocation%2Ftwitter-scraper","tags_url":
"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/the-convocation%2Ftwitter-scraper/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/the-convocation%2Ftwitter-scraper/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/the-convocation%2Ftwitter-scraper/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/the-convocation","download_url":"https://codeload.github.com/the-convocation/twitter-scraper/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/the-convocation%2Ftwitter-scraper/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29569996,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-18T04:18:28.490Z","status":"ssl_error","status_checked_at":"2026-02-18T04:13:49.018Z","response_time":162,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["node-js","scraper","twitter"],"created_at":"2024-09-24T19:40:52.663Z","updated_at":"2026-02-18T06:03:10.600Z","avatar_url":"https://github.com/the-convocation.png","language":"TypeScript","readme":"# twitter-scraper\n\n[![Documentation badge](https://img.shields.io/badge/docs-here-informational)](https://the-convocation.github.io/twitter-scraper/)\n\nA port of the now-archived 
[n0madic/twitter-scraper](https://github.com/n0madic/twitter-scraper) to Node.js.\n\n\u003e Twitter's API is annoying to work with, and has lots of limitations — luckily\n\u003e their frontend (JavaScript) has its own API, which I reverse-engineered. No\n\u003e API rate limits. No tokens needed. No restrictions. Extremely fast.\n\u003e\n\u003e You can use this library to get the text of any user's Tweets trivially.\n\nMany things have changed since X (the company formerly known as Twitter) was acquired in 2022:\n\n- Several operations require logging in with a real user account via\n  `scraper.login()`. **While we are not aware of confirmed cases caused\n  by this library, any account you log into with this library is subject\n  to being banned at any time. You have been warned.**\n- Twitter's frontend API does in fact have rate limits\n  ([#11](https://github.com/the-convocation/twitter-scraper/issues/11)).\n  The rate limits are dynamic and sometimes change, so we don't know\n  exactly what they are at all times. Refer to [rate limiting](#rate-limiting)\n  for more information.\n- Twitter's authentication requirements and frontend API endpoints\n  change frequently, breaking this library. Fixes for these issues\n  typically take at least a few days to go out.\n\n## Installation\n\nThis package requires Node.js v16.0.0 or greater.\n\nNPM:\n\n```sh\nnpm install @the-convocation/twitter-scraper\n```\n\nYarn:\n\n```sh\nyarn add @the-convocation/twitter-scraper\n```\n\nTypeScript types are bundled with the distribution.\n\n## Usage\n\nMost use cases are exactly the same as in\n[n0madic/twitter-scraper](https://github.com/n0madic/twitter-scraper). Channel\niterators have been translated into\n[AsyncGenerator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/AsyncGenerator)\ninstances, and can be consumed with the corresponding\n`for await (const x of y) { ... 
}` syntax.\n\n### Browser usage\n\nThis package directly invokes the Twitter API, which does not have permissive\nCORS headers. With the default settings, requests will fail unless you disable\nCORS checks, which is not advised. Instead, applications must provide a CORS\nproxy and configure it in the `Scraper` options.\n\nProxies (and other request mutations) can be configured with the request\ninterceptor transform:\n\n```ts\nconst scraper = new Scraper({\n  transform: {\n    request(input: RequestInfo | URL, init?: RequestInit) {\n      // The arguments here are the same as the parameters to fetch(), and\n      // are kept as-is for flexibility of both the library and applications.\n      if (input instanceof URL) {\n        const proxy =\n          'https://corsproxy.io/?' + encodeURIComponent(input.toString());\n        return [proxy, init];\n      } else if (typeof input === 'string') {\n        const proxy = 'https://corsproxy.io/?' + encodeURIComponent(input);\n        return [proxy, init];\n      } else {\n        // Omitting handling for example\n        throw new Error('Unexpected request input type');\n      }\n    },\n  },\n});\n```\n\n[corsproxy.io](https://corsproxy.io) is a public CORS proxy that works correctly\nwith this package.\n\nThe public CORS proxy [corsproxy.org](https://corsproxy.org) _does not work_ at\nthe time of writing (at least not using their recommended integration on the\nfront page).\n\n#### Next.js 13.x example:\n\n```tsx\n'use client';\n\nimport { Scraper, Tweet } from '@the-convocation/twitter-scraper';\nimport { useEffect, useMemo, useState } from 'react';\n\nexport default function Home() {\n  const scraper = useMemo(\n    () =\u003e\n      new Scraper({\n        transform: {\n          request(input: RequestInfo | URL, init?: RequestInit) {\n            if (input instanceof URL) {\n              const proxy =\n                'https://corsproxy.io/?' 
+ encodeURIComponent(input.toString());\n              return [proxy, init];\n            } else if (typeof input === 'string') {\n              const proxy =\n                'https://corsproxy.io/?' + encodeURIComponent(input);\n              return [proxy, init];\n            } else {\n              throw new Error('Unexpected request input type');\n            }\n          },\n        },\n      }),\n    [],\n  );\n  const [tweet, setTweet] = useState\u003cTweet | null\u003e(null);\n\n  useEffect(() =\u003e {\n    async function getTweet() {\n      const latestTweet = await scraper.getLatestTweet('twitter');\n      if (latestTweet) {\n        setTweet(latestTweet);\n      }\n    }\n\n    getTweet();\n  }, [scraper]);\n\n  return (\n    \u003cmain className=\"flex min-h-screen flex-col items-center justify-between p-24\"\u003e\n      {tweet?.text}\n    \u003c/main\u003e\n  );\n}\n```\n\n### Edge runtimes\n\nThis package currently uses\n[`cross-fetch`](https://www.npmjs.com/package/cross-fetch) as a portable\n`fetch`. Edge runtimes such as CloudFlare Workers sometimes have `fetch`\nfunctions that behave differently from the web standard, so you may need to\noverride the `fetch` function the scraper uses. If so, a custom `fetch` can be\nprovided in the options:\n\n```ts\nconst scraper = new Scraper({\n  fetch: fetch,\n});\n```\n\nNote that this does not change the arguments passed to the function, or the\nexpected return type. 
If the custom `fetch` function produces runtime errors\nrelated to incorrect types, be sure to wrap it in a shim (not currently\nsupported directly by interceptors):\n\n```ts\nconst scraper = new Scraper({\n  fetch: (input, init) =\u003e {\n    // Transform input and init into your function's expected types...\n    return fetch(input, init).then((res) =\u003e {\n      // Transform res into a web-compliant response...\n      return res;\n    });\n  },\n});\n```\n\n### Bypassing Cloudflare bot detection\n\nIn some cases, Twitter's authentication endpoints may be protected by Cloudflare's advanced bot detection, resulting in `403 Forbidden` errors during login. This typically happens because standard Node.js TLS fingerprints are detected as non-browser clients.\n\nTo bypass this protection, you can use the optional CycleTLS `fetch` integration to mimic Chrome browser TLS fingerprints:\n\n**Installation:**\n\n```sh\nnpm install cycletls\n# or\nyarn add cycletls\n```\n\n**Usage:**\n\n```ts\nimport { Scraper } from '@the-convocation/twitter-scraper';\nimport {\n  cycleTLSFetch,\n  cycleTLSExit,\n} from '@the-convocation/twitter-scraper/cycletls';\n\nconst scraper = new Scraper({\n  fetch: cycleTLSFetch,\n});\n\n// Use the scraper normally\nawait scraper.login(username, password, email);\n\n// Important: clean up CycleTLS resources when done\ncycleTLSExit();\n```\n\n**Note:** The `/cycletls` entrypoint is Node.js only and will not work in browser environments. It's provided as a separate optional entrypoint to avoid bundling binaries in environments where they cannot run.\n\nSee the [cycletls example](./examples/cycletls/) for a complete working example.\n\n### Cookie-based authentication\n\nIf you're encountering `error 399` (\"Incorrect. Please try again\") or Twitter's suspicious activity detection during login, you can use cookies exported from an already-authenticated browser session instead. 
This approach:\n\n- Avoids Twitter's anti-bot protection that blocks automated logins\n- Removes the need to store or handle passwords in code\n- Uses your established browser session\n- Bypasses rate limiting on authentication endpoints\n\n**Step 1: Export cookies from your browser**\n\nUsing Chrome/Edge:\n\n1. Log in to X.com in your browser\n2. Open DevTools (F12) → Application tab → Cookies\n3. Click the filter field that says \"Filter cookies\" and press Ctrl+A to select all cookies\n4. Copy all cookies (they'll be in the format: `name1=value1; name2=value2; ...`)\n\nUsing Firefox:\n\n1. Log in to X.com in your browser\n2. Open DevTools (F12) → Storage tab → Cookies → `https://x.com`\n3. Find the `ct0` cookie and copy its value\n4. Find the `auth_token` cookie and copy its value\n5. Construct the cookie string: `ct0=\u003cvalue\u003e; auth_token=\u003cvalue\u003e`\n\n\u003e **Tip:** You can use the [Cookie-Editor](https://addons.mozilla.org/en-US/firefox/addon/cookie-editor/) extension to export cookies in a convenient format.\n\n**Step 2: Use cookies in your code**\n\n```ts\nimport { Cookie } from 'tough-cookie';\nimport { Scraper } from '@the-convocation/twitter-scraper';\n\n// Your cookie string from browser (name=value; name2=value2; ...)\nconst cookieString = 'ct0=abc123; auth_token=xyz789; lang=en; ...';\n\n// Parse the cookie string\nconst cookies = cookieString\n  .split(';')\n  .map((c) =\u003e Cookie.parse(c))\n  .filter(Boolean);\n\n// Create scraper and set cookies\nconst scraper = new Scraper();\nawait scraper.setCookies(cookies);\n\n// Verify authentication works\nconst isLoggedIn = await scraper.isLoggedIn();\nif (isLoggedIn) {\n  console.log('✓ Successfully authenticated with cookies!');\n  // Now you can use authenticated features\n  const profile = await scraper.getProfile('username');\n}\n```\n\nCookies expire over time. 
If authentication fails, you may need to export fresh cookies from your browser.\n\n### Rate limiting\n\nThe Twitter API heavily rate-limits clients, so the scraper needs its own\nrate-limit handling to behave predictably when rate limiting occurs. By default, the\nscraper uses a rate-limiting strategy that waits for the current rate-limiting period\nto expire before resuming requests.\n\n**This has been known to take a very long time, in some cases up to 13 minutes.**\n\nYou may want to change how rate-limiting events are handled, potentially by pooling\nscrapers logged in to different accounts (refer to [#116](https://github.com/the-convocation/twitter-scraper/pull/116) for how to do this yourself). The rate-limit handling strategy can be configured by passing a custom\nimplementation to the `rateLimitStrategy` option in the scraper constructor:\n\n```ts\nimport {\n  Scraper,\n  RateLimitStrategy,\n  RateLimitEvent,\n} from '@the-convocation/twitter-scraper';\n\nclass CustomRateLimitStrategy implements RateLimitStrategy {\n  async onRateLimit(event: RateLimitEvent): Promise\u003cvoid\u003e {\n    // your own logic...\n  }\n}\n\nconst scraper = new Scraper({\n  rateLimitStrategy: new CustomRateLimitStrategy(),\n});\n```\n\nMore information on this interface can be found on the [`RateLimitStrategy`](https://the-convocation.github.io/twitter-scraper/interfaces/RateLimitStrategy.html)\npage in the documentation. 
The library provides two pre-written implementations to choose from:\n\n- `WaitingRateLimitStrategy`: The default, which waits for the limit to expire.\n- `ErrorRateLimitStrategy`: A strategy that throws if any rate-limit event occurs.\n\n## Contributing\n\n### Setup\n\nThis project currently requires Node 18.x for development and uses Yarn for\npackage management.\n[Corepack](https://nodejs.org/dist/latest-v18.x/docs/api/corepack.html) is\nconfigured for this project, so you don't need to install a particular package\nmanager version manually.\n\n\u003e The project supports Node 16.x at runtime, but requires Node 18.x to run its\n\u003e build tools.\n\nJust run `corepack enable` to turn on the shims, then run `yarn` to install the\ndependencies.\n\n#### Basic scripts\n\n- `yarn build`: Builds the project into the `dist` folder\n- `yarn test`: Runs the package tests (see [Testing](#testing) first)\n\nRun `yarn help` for general `yarn` usage information.\n\n### Testing\n\nThis package includes unit tests for all major functionality. Given the speed at\nwhich Twitter's private API changes, failing tests are to be expected.\n\n```sh\nyarn test\n```\n\nBefore running tests, you should configure environment variables for\nauthentication.\n\n```\nTWITTER_USERNAME=    # Account username\nTWITTER_PASSWORD=    # Account password\nTWITTER_EMAIL=       # Account email\nTWITTER_COOKIES=     # JSON-serialized array of cookies of an authenticated session\nPROXY_URL=           # HTTP(s) proxy for requests (optional)\n```\n\n### Commit message format\n\nWe use [Conventional Commits](https://www.conventionalcommits.org), and enforce\nthis with precommit checks. 
Please refer to the Git history for real examples of\nthe commit message format.\n","funding_links":[],"categories":["TypeScript"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fthe-convocation%2Ftwitter-scraper","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fthe-convocation%2Ftwitter-scraper","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fthe-convocation%2Ftwitter-scraper/lists"}