{"id":48891936,"url":"https://github.com/brightdata/cli","last_synced_at":"2026-04-16T08:38:39.615Z","repository":{"id":345300916,"uuid":"1175168650","full_name":"brightdata/cli","owner":"brightdata","description":"Official Bright Data CLI - scrape, search, and extract structured web data directly from your terminal.","archived":false,"fork":false,"pushed_at":"2026-04-05T08:18:49.000Z","size":407,"stargazers_count":144,"open_issues_count":0,"forks_count":12,"subscribers_count":1,"default_branch":"main","last_synced_at":"2026-04-05T10:12:51.091Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/brightdata.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-03-07T10:25:57.000Z","updated_at":"2026-04-05T08:16:44.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/brightdata/cli","commit_stats":null,"previous_names":["brightdata/cli"],"tags_count":8,"template":false,"template_full_name":null,"purl":"pkg:github/brightdata/cli","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/brightdata%2Fcli","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/brightdata%2Fcli/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/brightdata%2Fcli/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/brightdata%2Fcli/manifests"
,"owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/brightdata","download_url":"https://codeload.github.com/brightdata/cli/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/brightdata%2Fcli/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31878592,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-16T07:36:03.521Z","status":"ssl_error","status_checked_at":"2026-04-16T07:35:53.576Z","response_time":69,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2026-04-16T08:38:38.756Z","updated_at":"2026-04-16T08:38:39.589Z","avatar_url":"https://github.com/brightdata.png","language":"TypeScript","readme":"\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/brightdata/cli/main/assets/banner.gif\" alt=\"Bright Data CLI\" width=\"800\" /\u003e\n\u003c/p\u003e\n\n\u003ch1 align=\"center\"\u003eBright Data CLI\u003c/h1\u003e\n\n\u003cp align=\"center\"\u003e\n  Scrape, search, and extract structured web data — directly from your terminal.\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://www.npmjs.com/package/%40brightdata%2Fcli\"\u003e\u003cimg src=\"https://img.shields.io/npm/v/%40brightdata%2Fcli?color=black\u0026label=npm\" alt=\"npm version\" 
/\u003e\u003c/a\u003e\n  \u003cimg src=\"https://img.shields.io/badge/node-%3E%3D20-black\" alt=\"node requirement\" /\u003e\n  \u003cimg src=\"https://img.shields.io/badge/license-MIT-black\" alt=\"license\" /\u003e\n\u003c/p\u003e\n\n---\n\n## Overview\n\n`@brightdata/cli` is the official npm package for the [Bright Data](https://brightdata.com) CLI. It installs the `brightdata` command (with `bdata` as a shorthand alias) for access to the full Bright Data API surface:\n\n| Command | What it does |\n|---|---|\n| `brightdata scrape` | Scrape any URL — bypasses CAPTCHAs, JS rendering, anti-bot protections |\n| `brightdata search` | Google / Bing / Yandex search with structured JSON output |\n| `brightdata discover` | AI-powered web discovery - find and rank results by intent with optional full-page content |\n| `brightdata pipelines` | Extract structured data from 40+ platforms (Amazon, LinkedIn, TikTok…) |\n| `brightdata browser` | Control a real browser via Bright Data's Scraping Browser — navigate, snapshot, click, type, and more |\n| `brightdata zones` | List and inspect your Bright Data proxy zones |\n| `brightdata budget` | View account balance and per-zone cost \u0026 bandwidth |\n| `brightdata skill` | Install Bright Data AI agent skills into your coding agent |\n| `brightdata add mcp` | Add the Bright Data MCP server to Claude Code, Cursor, or Codex |\n| `brightdata config` | Manage CLI configuration |\n| `brightdata init` | Interactive setup wizard |\n\n---\n\n## Table of Contents\n\n- [Installation](#installation)\n- [Quick Start](#quick-start)\n- [Authentication](#authentication)\n- [Commands](#commands)\n  - [init](#init)\n  - [scrape](#scrape)\n  - [search](#search)\n  - [discover](#discover)\n  - [pipelines](#pipelines)\n  - [browser](#browser)\n  - [status](#status)\n  - [zones](#zones)\n  - [budget](#budget)\n  - [skill](#skill)\n  - [add mcp](#add-mcp)\n  - [config](#config)\n  - [login / logout](#login--logout)\n- 
[Configuration](#configuration)\n- [Environment Variables](#environment-variables)\n- [Output Modes](#output-modes)\n- [Pipe-Friendly Usage](#pipe-friendly-usage)\n- [Dataset Types Reference](#dataset-types-reference)\n- [Troubleshooting](#troubleshooting)\n\n---\n\n## Installation\n\n\u003e **Requires [Node.js](https://nodejs.org/) ≥ 20**\n\n### macOS / Linux\n\n```bash\ncurl -fsSL https://cli.brightdata.com/install.sh | sh\n```\n\n### Windows\n\n```powershell\nnpm install -g @brightdata/cli\n```\n\n### Or install manually on any platform\n\n```bash\nnpm install -g @brightdata/cli\n```\n\nYou can also run without installing:\n\n```bash\nnpx --yes --package @brightdata/cli brightdata \u003ccommand\u003e\n```\n\n---\n\n## Quick Start\n\n```bash\n# 1. Run the interactive setup wizard\nbrightdata init\n\n# 2. Scrape a page as markdown\nbrightdata scrape https://example.com\n\n# 3. Search Google\nbrightdata search \"web scraping best practices\"\n\n# 4. Extract a LinkedIn profile\nbrightdata pipelines linkedin_person_profile \"https://linkedin.com/in/username\"\n\n# 5. Check your account balance\nbrightdata budget\n\n# 6. Install the Bright Data MCP server into your coding agent\nbrightdata add mcp\n```\n\n---\n\n## Authentication\n\nGet your API key from [brightdata.com/cp/setting/users](https://brightdata.com/cp/setting/users).\n\n```bash\n# Interactive — opens browser, saves key automatically\nbrightdata login\n\n# Non-interactive — pass key directly\nbrightdata login --api-key \u003cyour-api-key\u003e\n\n# Environment variable — no login required\nexport BRIGHTDATA_API_KEY=your-api-key\n```\n\nOn first login the CLI checks for required zones (`cli_unlocker`, `cli_browser`) and creates them automatically if missing.\n\n```bash\n# Clear saved credentials\nbrightdata logout\n```\n\n`brightdata add mcp` uses the API key stored by `brightdata login`. 
It does not currently read `BRIGHTDATA_API_KEY` or the global `--api-key` flag, so log in first before using it.\n\n---\n\n## Commands\n\n### `init`\n\nInteractive setup wizard. The recommended way to get started.\n\n```bash\nbrightdata init\n```\n\nWalks through: API key detection → zone selection → default output format → quick-start examples.\n\n| Flag | Description |\n|---|---|\n| `--skip-auth` | Skip the authentication step |\n| `-k, --api-key \u003ckey\u003e` | Provide API key directly |\n\n---\n\n### `scrape`\n\nScrape any URL using Bright Data's Web Unlocker. Handles CAPTCHAs, JavaScript rendering, and anti-bot protections automatically.\n\n```bash\nbrightdata scrape \u003curl\u003e [options]\n```\n\n| Flag | Description |\n|---|---|\n| `-f, --format \u003cfmt\u003e` | `markdown` · `html` · `screenshot` · `json` (default: `markdown`) |\n| `--country \u003ccode\u003e` | Geo-target by ISO country code (e.g. `us`, `de`, `jp`) |\n| `--zone \u003cname\u003e` | Web Unlocker zone name |\n| `--mobile` | Use a mobile user agent |\n| `--async` | Submit async job, return a snapshot ID |\n| `-o, --output \u003cpath\u003e` | Write output to file |\n| `--json` / `--pretty` | JSON output (raw / indented) |\n| `-k, --api-key \u003ckey\u003e` | Override API key |\n\n**Examples**\n\n```bash\n# Scrape as markdown (default)\nbrightdata scrape https://news.ycombinator.com\n\n# Scrape as raw HTML\nbrightdata scrape https://example.com -f html\n\n# US geo-targeting, save to file\nbrightdata scrape https://amazon.com -f json --country us -o product.json\n\n# Pipe to a markdown viewer\nbrightdata scrape https://docs.github.com | glow -\n\n# Async — returns a snapshot ID you can poll with `status`\nbrightdata scrape https://example.com --async\n```\n\n---\n\n### `search`\n\nSearch Google, Bing, or Yandex via Bright Data's SERP API. 
Google results include structured data (organic results, ads, people-also-ask, related searches).\n\n```bash\nbrightdata search \u003cquery\u003e [options]\n```\n\n| Flag | Description |\n|---|---|\n| `--engine \u003cname\u003e` | `google` · `bing` · `yandex` (default: `google`) |\n| `--country \u003ccode\u003e` | Localized results (e.g. `us`, `de`) |\n| `--language \u003ccode\u003e` | Language code (e.g. `en`, `fr`) |\n| `--page \u003cn\u003e` | Page number, 0-indexed (default: `0`) |\n| `--type \u003ctype\u003e` | `web` · `news` · `images` · `shopping` (default: `web`) |\n| `--device \u003ctype\u003e` | `desktop` · `mobile` |\n| `--zone \u003cname\u003e` | SERP zone name |\n| `-o, --output \u003cpath\u003e` | Write output to file |\n| `--json` / `--pretty` | JSON output (raw / indented) |\n| `-k, --api-key \u003ckey\u003e` | Override API key |\n\n**Examples**\n\n```bash\n# Formatted table output (default)\nbrightdata search \"typescript best practices\"\n\n# German localized results\nbrightdata search \"restaurants berlin\" --country de --language de\n\n# News search\nbrightdata search \"AI regulation\" --type news\n\n# Page 2 of results\nbrightdata search \"web scraping\" --page 1\n\n# Extract just the URLs\nbrightdata search \"open source scraping\" --json | jq -r '.organic[].link'\n\n# Search Bing\nbrightdata search \"bright data pricing\" --engine bing\n```\n\n---\n\n### `discover`\n\nAI-powered web discovery. Submit a query with optional intent, and Bright Data finds, ranks, and optionally extracts full-page content for each result.\n\n```bash\nbrightdata discover \u003cquery\u003e [options]\n```\n\n| Flag | Description |\n|---|---|\n| `--intent \u003ctext\u003e` | AI intent to evaluate and rank result relevance |\n| `--country \u003ccode\u003e` | ISO country code (default: `US`) |\n| `--city \u003cname\u003e` | City for localized results (e.g. 
`\"New York\"`) |\n| `--language \u003ccode\u003e` | Language code (default: `en`) |\n| `--num-results \u003cn\u003e` | Number of results to return |\n| `--filter-keywords \u003cwords\u003e` | Comma-separated keywords that must appear in results |\n| `--include-content` | Include full page content in each result |\n| `--no-remove-duplicates` | Keep duplicate results |\n| `--start-date \u003cdate\u003e` | Only content updated from date (`YYYY-MM-DD`) |\n| `--end-date \u003cdate\u003e` | Only content updated until date (`YYYY-MM-DD`) |\n| `--timeout \u003cseconds\u003e` | Polling timeout (default: `600`) |\n| `-o, --output \u003cpath\u003e` | Write output to file |\n| `--json` / `--pretty` | JSON output (raw / indented) |\n| `-k, --api-key \u003ckey\u003e` | Override API key |\n\n**Examples**\n\n```bash\n# Basic discovery — table output\nbrightdata discover \"AI trends\"\n\n# With AI intent for relevance ranking\nbrightdata discover \"AI trends\" \\\n  --intent \"Prioritize institutional reports for VC research\"\n\n# Include full page content as markdown\nbrightdata discover \"AI trends\" --include-content --num-results 5\n\n# Geo-targeted with date range\nbrightdata discover \"best restaurants\" --country US --city \"New York\" \\\n  --start-date 2025-01-01 --end-date 2025-12-31\n\n# Filter results by keywords\nbrightdata discover \"generative AI SaaS\" --filter-keywords \"revenue,SaaS\"\n\n# JSON output to file\nbrightdata discover \"AI trends\" --num-results 10 --pretty -o results.json\n\n# Pipe-friendly — redirected stdout outputs JSON automatically\nbrightdata discover \"AI trends\" --include-content --num-results 3 \u003e results.json\n```\n\n---\n\n### `pipelines`\n\nExtract structured data from 40+ platforms using Bright Data's Web Scraper API. Triggers an async collection job, polls until ready, and returns results.\n\n```bash\nbrightdata pipelines \u003ctype\u003e [params...] 
[options]\n```\n\n| Flag | Description |\n|---|---|\n| `--format \u003cfmt\u003e` | `json` · `csv` · `ndjson` · `jsonl` (default: `json`) |\n| `--timeout \u003cseconds\u003e` | Polling timeout (default: `600`) |\n| `-o, --output \u003cpath\u003e` | Write output to file |\n| `--json` / `--pretty` | JSON output (raw / indented) |\n| `-k, --api-key \u003ckey\u003e` | Override API key |\n\n```bash\n# List all available dataset types\nbrightdata pipelines list\n```\n\n**Examples**\n\n```bash\n# LinkedIn profile\nbrightdata pipelines linkedin_person_profile \"https://linkedin.com/in/username\"\n\n# Amazon product → CSV\nbrightdata pipelines amazon_product \"https://amazon.com/dp/B09V3KXJPB\" \\\n  --format csv -o product.csv\n\n# Instagram profile\nbrightdata pipelines instagram_profiles \"https://instagram.com/username\"\n\n# Amazon search by keyword\nbrightdata pipelines amazon_product_search \"laptop\" \"https://amazon.com\"\n\n# Google Maps reviews\nbrightdata pipelines google_maps_reviews \"https://maps.google.com/...\" 7\n\n# YouTube comments (top 50)\nbrightdata pipelines youtube_comments \"https://youtube.com/watch?v=...\" 50\n```\n\nSee [Dataset Types Reference](#dataset-types-reference) for the full list.\n\n---\n\n### `browser`\n\nControl a real browser session powered by [Bright Data's Scraping Browser](https://brightdata.com/products/scraping-browser). 
A lightweight local daemon holds the browser connection open between commands, giving you persistent state without reconnecting on every call.\n\n```bash\nbrightdata browser open \u003curl\u003e              # Start a session and navigate\nbrightdata browser snapshot                # Get an accessibility tree of the page\nbrightdata browser screenshot [path]       # Take a PNG screenshot\nbrightdata browser click \u003cref\u003e             # Click an element\nbrightdata browser type \u003cref\u003e \u003ctext\u003e       # Type into an element\nbrightdata browser fill \u003cref\u003e \u003cvalue\u003e      # Fill a form field\nbrightdata browser select \u003cref\u003e \u003cvalue\u003e    # Select a dropdown option\nbrightdata browser check \u003cref\u003e             # Check a checkbox / radio\nbrightdata browser uncheck \u003cref\u003e           # Uncheck a checkbox\nbrightdata browser hover \u003cref\u003e             # Hover over an element\nbrightdata browser scroll                  # Scroll the page\nbrightdata browser get text [selector]     # Get text content\nbrightdata browser get html [selector]     # Get HTML content\nbrightdata browser back                    # Navigate back\nbrightdata browser forward                 # Navigate forward\nbrightdata browser reload                  # Reload the page\nbrightdata browser network                 # Show captured network requests\nbrightdata browser cookies                 # Show cookies\nbrightdata browser status                  # Show session state\nbrightdata browser sessions                # List all active sessions\nbrightdata browser close                   # Close session and stop daemon\n```\n\n**Global flags** (work with every subcommand)\n\n| Flag | Description |\n|---|---|\n| `--session \u003cname\u003e` | Session name — run multiple isolated sessions in parallel (default: `default`) |\n| `--country \u003ccode\u003e` | Geo-target by ISO country code (e.g. `us`, `de`). 
On `open`, changing country reconnects the browser |\n| `--zone \u003cname\u003e` | Scraping Browser zone (default: `cli_browser`) |\n| `--timeout \u003cms\u003e` | IPC command timeout in milliseconds (default: `30000`) |\n| `--idle-timeout \u003cms\u003e` | Daemon auto-shutdown after idle (default: `600000` = 10 min) |\n| `--json` / `--pretty` | JSON output |\n| `-o, --output \u003cpath\u003e` | Write output to file |\n| `-k, --api-key \u003ckey\u003e` | Override API key |\n\n---\n\n#### `browser open \u003curl\u003e`\n\nNavigate to a URL. Starts the daemon and browser session automatically if not already running.\n\n```bash\nbrightdata browser open https://example.com\nbrightdata browser open https://amazon.com --country us --session shop\n```\n\n| Flag | Description |\n|---|---|\n| `--country \u003ccode\u003e` | Geo-targeting. Reconnects the browser if the country changes on an existing session |\n| `--zone \u003cname\u003e` | Browser zone name |\n| `--idle-timeout \u003cms\u003e` | Daemon idle timeout for this session |\n\n---\n\n#### `browser snapshot`\n\nCapture the page as a text accessibility tree. This is the primary way AI agents read page content — far more token-efficient than raw HTML.\n\n```bash\nbrightdata browser snapshot\nbrightdata browser snapshot --compact          # Interactive elements + ancestors only\nbrightdata browser snapshot --interactive      # Interactive elements as a flat list\nbrightdata browser snapshot --depth 3          # Limit tree depth\nbrightdata browser snapshot --selector \"main\"  # Scope to a CSS subtree\nbrightdata browser snapshot --wrap             # Wrap output in AI-safe content boundaries\n```\n\n**Output format:**\n```\nPage: Example Domain\nURL: https://example.com\n\n- heading \"Example Domain\" [level=1]\n- paragraph \"This domain is for use in illustrative examples.\"\n- link \"More information...\" [ref=e1]\n```\n\nEach interactive element gets a `ref` (e.g. 
`e1`, `e2`) that you pass to `click`, `type`, `fill`, etc.\n\n| Flag | Description |\n|---|---|\n| `--compact` | Only interactive elements and their ancestors (70–90% fewer tokens) |\n| `--interactive` | Only interactive elements, as a flat list |\n| `--depth \u003cn\u003e` | Limit tree depth to a non-negative integer |\n| `--selector \u003csel\u003e` | Scope snapshot to elements matching a CSS selector |\n| `--wrap` | Wrap output in `--- BRIGHTDATA_BROWSER_CONTENT ... ---` boundaries (useful for AI agent prompt injection safety) |\n\n---\n\n#### `browser screenshot [path]`\n\nCapture a PNG screenshot of the current viewport.\n\n```bash\nbrightdata browser screenshot\nbrightdata browser screenshot ./result.png\nbrightdata browser screenshot --full-page -o page.png\nbrightdata browser screenshot --base64\n```\n\n| Flag | Description |\n|---|---|\n| `[path]` | Where to save the PNG (default: temp directory) |\n| `--full-page` | Capture the full scrollable page, not just the viewport |\n| `--base64` | Output base64-encoded PNG data instead of saving to a file |\n\n---\n\n#### `browser click \u003cref\u003e`\n\nClick an element by its snapshot ref.\n\n```bash\nbrightdata browser click e3\nbrightdata browser click e3 --session shop\n```\n\n---\n\n#### `browser type \u003cref\u003e \u003ctext\u003e`\n\nType text into an element. Clears the field first by default.\n\n```bash\nbrightdata browser type e5 \"search query\"\nbrightdata browser type e5 \" more text\" --append   # Append to existing value\nbrightdata browser type e5 \"search query\" --submit  # Press Enter after typing\n```\n\n| Flag | Description |\n|---|---|\n| `--append` | Append to existing value using key-by-key simulation |\n| `--submit` | Press Enter after typing |\n\n---\n\n#### `browser fill \u003cref\u003e \u003cvalue\u003e`\n\nFill a form field directly (no keyboard simulation). 
Use `type` if you need to trigger `keydown`/`keyup` events.\n\n```bash\nbrightdata browser fill e2 \"user@example.com\"\n```\n\n---\n\n#### `browser select \u003cref\u003e \u003cvalue\u003e`\n\nSelect a dropdown option by its visible label.\n\n```bash\nbrightdata browser select e4 \"United States\"\n```\n\n---\n\n#### `browser check \u003cref\u003e` / `browser uncheck \u003cref\u003e`\n\nCheck or uncheck a checkbox or radio button.\n\n```bash\nbrightdata browser check e7\nbrightdata browser uncheck e7\n```\n\n---\n\n#### `browser hover \u003cref\u003e`\n\nHover the mouse over an element (triggers hover states, tooltips, dropdowns).\n\n```bash\nbrightdata browser hover e2\n```\n\n---\n\n#### `browser scroll`\n\nScroll the viewport or scroll an element into view.\n\n```bash\nbrightdata browser scroll                        # Scroll down 300px (default)\nbrightdata browser scroll --direction up\nbrightdata browser scroll --direction down --distance 600\nbrightdata browser scroll --ref e10              # Scroll element e10 into view\n```\n\n| Flag | Description |\n|---|---|\n| `--direction \u003cdir\u003e` | `up`, `down`, `left`, `right` (default: `down`) |\n| `--distance \u003cpx\u003e` | Pixels to scroll (default: `300`) |\n| `--ref \u003cref\u003e` | Scroll this element into view instead of the viewport |\n\n---\n\n#### `browser get text [selector]`\n\nGet the text content of the page or a scoped element.\n\n```bash\nbrightdata browser get text           # Full page text\nbrightdata browser get text \"h1\"      # Text of the first h1\nbrightdata browser get text \"#price\"  # Text inside #price\n```\n\n---\n\n#### `browser get html [selector]`\n\nGet the HTML of the page or a scoped element.\n\n```bash\nbrightdata browser get html              # Full page outer HTML\nbrightdata browser get html \".product\"   # innerHTML of .product\nbrightdata browser get html --pretty     # JSON output with selector field\n```\n\n---\n\n#### `browser network`\n\nShow HTTP requests 
captured since the last navigation.\n\n```bash\nbrightdata browser network\nbrightdata browser network --json\n```\n\n**Example output:**\n```\nNetwork Requests (5 total):\n[GET] https://example.com/ =\u003e [200]\n[GET] https://example.com/style.css =\u003e [200]\n[POST] https://api.example.com/track =\u003e [204]\n```\n\n---\n\n#### `browser cookies`\n\nShow cookies for the active session.\n\n```bash\nbrightdata browser cookies\nbrightdata browser cookies --pretty\n```\n\n---\n\n#### `browser status`\n\nShow the current state of a browser session.\n\n```bash\nbrightdata browser status\nbrightdata browser status --session shop --pretty\n```\n\n---\n\n#### `browser sessions`\n\nList all active browser daemon sessions.\n\n```bash\nbrightdata browser sessions\nbrightdata browser sessions --pretty\n```\n\n---\n\n#### `browser close`\n\nClose a session and stop its daemon.\n\n```bash\nbrightdata browser close                   # Close the default session\nbrightdata browser close --session shop    # Close a named session\nbrightdata browser close --all             # Close all active sessions\n```\n\n---\n\n**Example: AI agent workflow**\n\n```bash\n# Open a US-targeted session\nbrightdata browser open https://example.com --country us\n\n# Read the page structure (compact for token efficiency)\nbrightdata browser snapshot --compact\n\n# Interact using refs from the snapshot\nbrightdata browser click e3\nbrightdata browser type e5 \"search query\" --submit\n\n# Get updated snapshot after interaction\nbrightdata browser snapshot --compact\n\n# Save a screenshot for visual verification\nbrightdata browser screenshot ./result.png\n\n# Done\nbrightdata browser close\n```\n\n**Example: multi-session comparison**\n\n```bash\nbrightdata browser open https://amazon.com --session us --country us\nbrightdata browser open https://amazon.com --session de --country de\n\nbrightdata browser snapshot --session us --json \u003e us.json\nbrightdata browser snapshot --session de --json 
\u003e de.json\n\nbrightdata browser close --all\n```\n\n---\n\n### `status`\n\nCheck the status of an async snapshot job (returned by `--async` or `pipelines`).\n\n```bash\nbrightdata status \u003cjob-id\u003e [options]\n```\n\n| Flag | Description |\n|---|---|\n| `--wait` | Poll until the job completes |\n| `--timeout \u003cseconds\u003e` | Polling timeout (default: `600`) |\n| `-o, --output \u003cpath\u003e` | Write output to file |\n| `--json` / `--pretty` | JSON output (raw / indented) |\n| `-k, --api-key \u003ckey\u003e` | Override API key |\n\n```bash\n# Check current status\nbrightdata status s_abc123xyz\n\n# Block until complete\nbrightdata status s_abc123xyz --wait --pretty\n\n# Custom timeout (5 minutes)\nbrightdata status s_abc123xyz --wait --timeout 300\n```\n\n---\n\n### `zones`\n\nList and inspect your Bright Data proxy zones.\n\n```bash\nbrightdata zones               # List all active zones\nbrightdata zones info \u003cname\u003e   # Show full details for a zone\n```\n\n```bash\n# Export all zones as JSON\nbrightdata zones --json -o zones.json\n\n# Inspect a specific zone\nbrightdata zones info my_unlocker_zone --pretty\n```\n\n---\n\n### `budget`\n\nView your account balance and per-zone cost and bandwidth usage. Read-only — no writes to the API.\n\n```bash\nbrightdata budget                     # Show account balance (quick view)\nbrightdata budget balance             # Account balance + pending charges\nbrightdata budget zones               # Cost \u0026 bandwidth table for all zones\nbrightdata budget zone \u003cname\u003e         # Detailed cost \u0026 bandwidth for one zone\n```\n\n| Flag | Description |\n|---|---|\n| `--from \u003cdatetime\u003e` | Start of date range (e.g. 
`2024-01-01T00:00:00`) |\n| `--to \u003cdatetime\u003e` | End of date range |\n| `--json` / `--pretty` | JSON output (raw / indented) |\n| `-k, --api-key \u003ckey\u003e` | Override API key |\n\n```bash\n# Current account balance\nbrightdata budget\n\n# Zone costs for January 2024\nbrightdata budget zones --from 2024-01-01T00:00:00 --to 2024-02-01T00:00:00\n\n# Detailed view of a specific zone\nbrightdata budget zone my_unlocker_zone\n```\n\n---\n\n### `skill`\n\nInstall Bright Data AI agent skills into your coding agent (Claude Code, Cursor, Copilot, etc.). Skills provide your agent with context and instructions for using Bright Data APIs effectively.\n\n```bash\nbrightdata skill add              # Interactive picker — choose skill + agent\nbrightdata skill add \u003cname\u003e       # Install a specific skill directly\nbrightdata skill list             # List all available Bright Data skills\n```\n\n**Available skills**\n\n| Skill | Description |\n|---|---|\n| `search` | Search Google and get structured JSON results |\n| `scrape` | Scrape any webpage as clean markdown with bot bypass |\n| `data-feeds` | Extract structured data from 40+ websites |\n| `bright-data-mcp` | Orchestrate 60+ Bright Data MCP tools |\n| `bright-data-best-practices` | Reference knowledge base for writing Bright Data code |\n\n```bash\n# Interactive — select skills and choose which agents to install to\nbrightdata skill add\n\n# Install the scrape skill directly\nbrightdata skill add scrape\n\n# See what's available\nbrightdata skill list\n```\n\n---\n\n### `add mcp`\n\nWrite a Bright Data MCP server entry into Claude Code, Cursor, or Codex config files using the API key already stored by `brightdata login`.\n\n```bash\nbrightdata add mcp                               # Interactive agent + scope prompts\nbrightdata add mcp --agent claude-code --global\nbrightdata add mcp --agent claude-code,cursor --project\nbrightdata add mcp --agent codex --global\n```\n\n| Flag | Description 
|\n|---|---|\n| `--agent \u003cagents\u003e` | Comma-separated targets: `claude-code,cursor,codex` |\n| `--global` | Install to the agent's global config file |\n| `--project` | Install to the current project's config file |\n\n**Config targets**\n\n| Agent | Global path | Project path |\n|---|---|---|\n| Claude Code | `~/.claude.json` | `.claude/settings.json` |\n| Cursor | `~/.cursor/mcp.json` | `.cursor/mcp.json` |\n| Codex | `$CODEX_HOME/mcp.json` or `~/.codex/mcp.json` | Not supported |\n\nThe command writes the MCP server under `mcpServers[\"bright-data\"]`:\n\n```json\n{\n  \"mcpServers\": {\n    \"bright-data\": {\n      \"command\": \"npx\",\n      \"args\": [\"@brightdata/mcp\"],\n      \"env\": {\n        \"API_TOKEN\": \"\u003cstored-api-key\u003e\"\n      }\n    }\n  }\n}\n```\n\nBehavior notes:\n- Existing config is preserved; only `mcpServers[\"bright-data\"]` is added or replaced.\n- If the target config contains invalid JSON, the CLI warns and offers to overwrite it in interactive mode.\n- In non-interactive mode, pass both `--agent` and the appropriate scope flag to skip prompts.\n\n---\n\n### `config`\n\nView and manage CLI configuration.\n\n```bash\nbrightdata config                              # Show all config\nbrightdata config get \u003ckey\u003e                    # Get a single value\nbrightdata config set \u003ckey\u003e \u003cvalue\u003e            # Set a value\n```\n\n| Key | Description |\n|---|---|\n| `default_zone_unlocker` | Default zone for `scrape` and `search` |\n| `default_zone_serp` | Default zone for `search` (overrides unlocker zone) |\n| `default_format` | Default output format: `markdown` or `json` |\n| `api_url` | Override the Bright Data API base URL |\n\n```bash\nbrightdata config set default_zone_unlocker my_zone\nbrightdata config set default_format json\n```\n\n---\n\n### `login` / `logout`\n\n```bash\nbrightdata login                      # Interactive login\nbrightdata login --api-key \u003ckey\u003e      # 
Non-interactive\nbrightdata logout                     # Clear saved credentials\n```\n\n---\n\n## Configuration\n\nConfig is stored in an OS-appropriate location:\n\n| OS | Path |\n|---|---|\n| macOS | `~/Library/Application Support/brightdata-cli/` |\n| Linux | `~/.config/brightdata-cli/` |\n| Windows | `%APPDATA%\\brightdata-cli\\` |\n\nTwo files are stored:\n- `credentials.json` — API key\n- `config.json` — zones, output format, preferences\n\n**Priority order** (highest → lowest):\n\n```\nCLI flags  →  Environment variables  →  config.json  →  Defaults\n```\n\n---\n\n## Environment Variables\n\n| Variable | Description |\n|---|---|\n| `BRIGHTDATA_API_KEY` | API key (overrides stored credentials) |\n| `BRIGHTDATA_UNLOCKER_ZONE` | Default Web Unlocker zone |\n| `BRIGHTDATA_SERP_ZONE` | Default SERP zone |\n| `BRIGHTDATA_POLLING_TIMEOUT` | Default polling timeout in seconds |\n| `BRIGHTDATA_BROWSER_ZONE` | Default Scraping Browser zone (default: `cli_browser`) |\n| `BRIGHTDATA_DAEMON_DIR` | Override the directory used for browser daemon socket and PID files |\n\n```bash\nBRIGHTDATA_API_KEY=xxx BRIGHTDATA_UNLOCKER_ZONE=my_zone \\\n  brightdata scrape https://example.com\n```\n\n---\n\n## Output Modes\n\nEvery command supports:\n\n| Mode | Flag | Behavior |\n|---|---|---|\n| Human-readable | *(default)* | Formatted table or markdown, with colors |\n| JSON | `--json` | Compact JSON to stdout |\n| Pretty JSON | `--pretty` | Indented JSON to stdout |\n| File | `-o \u003cpath\u003e` | Write to file; format inferred from extension |\n\n**Auto-detected file formats:**\n\n| Extension | Format |\n|---|---|\n| `.json` | JSON |\n| `.md` | Markdown |\n| `.html` | HTML |\n| `.csv` | CSV |\n\n---\n\n## Pipe-Friendly Usage\n\nWhen stdout is not a TTY, colors and spinners are automatically disabled. 
Errors go to `stderr`, data to `stdout`.

```bash
# Extract URLs from search results
brightdata search "nodejs tutorials" --json | jq -r '.organic[].link'

# Scrape and view with a markdown reader
brightdata scrape https://docs.github.com | glow -

# Save scraped content to a file
brightdata scrape https://example.com -f markdown > page.md

# Amazon product data as CSV
brightdata pipelines amazon_product "https://amazon.com/dp/xxx" --format csv > product.csv

# Chain search → scrape
brightdata search "top open source projects" --json \
  | jq -r '.organic[0].link' \
  | xargs brightdata scrape
```

---

## Dataset Types Reference

```bash
brightdata pipelines list   # See all types in your terminal
```

### E-Commerce

| Type | Platform |
|---|---|
| `amazon_product` | Amazon product page |
| `amazon_product_reviews` | Amazon reviews |
| `amazon_product_search` | Amazon search results |
| `walmart_product` | Walmart product page |
| `walmart_seller` | Walmart seller profile |
| `ebay_product` | eBay listing |
| `bestbuy_products` | Best Buy |
| `etsy_products` | Etsy |
| `homedepot_products` | Home Depot |
| `zara_products` | Zara |
| `google_shopping` | Google Shopping |

### Professional Networks

| Type | Platform |
|---|---|
| `linkedin_person_profile` | LinkedIn person |
| `linkedin_company_profile` | LinkedIn company |
| `linkedin_job_listings` | LinkedIn jobs |
| `linkedin_posts` | LinkedIn posts |
| `linkedin_people_search` | LinkedIn people search |
| `crunchbase_company` | Crunchbase |
| `zoominfo_company_profile` | ZoomInfo |

### Social Media

| Type | Platform |
|---|---|
| `instagram_profiles` | Instagram profiles |
| `instagram_posts` | Instagram posts |
| `instagram_reels` | Instagram reels |
| `instagram_comments` | Instagram comments |
| `facebook_posts` | Facebook posts |
| `facebook_marketplace_listings` | Facebook Marketplace |
| `facebook_company_reviews` | Facebook reviews |
| `facebook_events` | Facebook events |
| `tiktok_profiles` | TikTok profiles |
| `tiktok_posts` | TikTok posts |
| `tiktok_shop` | TikTok shop |
| `tiktok_comments` | TikTok comments |
| `x_posts` | X (Twitter) posts |
| `youtube_profiles` | YouTube channels |
| `youtube_videos` | YouTube videos |
| `youtube_comments` | YouTube comments |
| `reddit_posts` | Reddit posts |

### Other

| Type | Platform |
|---|---|
| `google_maps_reviews` | Google Maps reviews |
| `google_play_store` | Google Play |
| `apple_app_store` | Apple App Store |
| `reuter_news` | Reuters news |
| `github_repository_file` | GitHub repository files |
| `yahoo_finance_business` | Yahoo Finance |
| `zillow_properties_listing` | Zillow |
| `booking_hotel_listings` | Booking.com |

---

## Troubleshooting

**`Error: No Web Unlocker zone specified`**
```bash
brightdata config set default_zone_unlocker <your-zone-name>
# or
export BRIGHTDATA_UNLOCKER_ZONE=<your-zone-name>
```

**`Error: Invalid or expired API key`**
```bash
brightdata login
```

**`Error: Access denied`**

Check zone permissions in the [Bright Data control panel](https://brightdata.com/cp).

**`Error: Rate limit exceeded`**

Wait a moment and retry. Use `--async` for large jobs to avoid timeouts.

**Async job is slow or times out**
```bash
brightdata pipelines amazon_product <url> --timeout 1200
# or
export BRIGHTDATA_POLLING_TIMEOUT=1200
```

**`No active browser session "default"`**
```bash
# Start a session first
brightdata browser open https://example.com
```

**Browser daemon won't start**
```bash
# Check if a stale socket file exists and clear it
brightdata browser close
# Then retry
brightdata browser open https://example.com
```

**Element ref not found after interaction**

Refs are re-assigned on every `snapshot` call.
If you navigate or click (anything that may change the page), take a fresh snapshot before reusing refs:
```bash
brightdata browser click e3
brightdata browser snapshot --compact   # refresh refs
brightdata browser type e5 "text"
```

**Garbled output in non-interactive terminal**

Colors and spinners are disabled automatically when not in a TTY. If you still see ANSI codes, add `| cat` at the end of your command.

---

## Links

- [Bright Data Website](https://brightdata.com)
- [Control Panel](https://brightdata.com/cp)
- [API Key Settings](https://brightdata.com/cp/setting/users)
- [API Reference](https://docs.brightdata.com/api-reference)
- [Report an Issue](https://github.com/brightdata/cli/issues)

---

<p align="center">
  <sub>© Bright Data · ISC License</sub>
</p>