{"id":39792911,"url":"https://github.com/datacoon/undatum","last_synced_at":"2026-01-18T12:18:35.317Z","repository":{"id":45905035,"uuid":"256185430","full_name":"datacoon/undatum","owner":"datacoon","description":"undatum: a command-line tool for data processing. Brings CSV simplicity to NDJSON, BSON, XML and other data files","archived":false,"fork":false,"pushed_at":"2025-12-12T12:41:48.000Z","size":5655,"stargazers_count":50,"open_issues_count":33,"forks_count":6,"subscribers_count":2,"default_branch":"master","last_synced_at":"2025-12-14T03:06:48.181Z","etag":null,"topics":["bson","cli","command-line","csv","data","dataset","json","jsonl","jsonlines","parquet"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/datacoon.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2020-04-16T10:43:22.000Z","updated_at":"2025-12-13T15:29:45.000Z","dependencies_parsed_at":"2023-10-11T14:28:50.161Z","dependency_job_id":"20e28321-2493-4ec9-b535-a3e61e27b85c","html_url":"https://github.com/datacoon/undatum","commit_stats":{"total_commits":53,"total_committers":3,"mean_commits":"17.666666666666668","dds":"0.39622641509433965","last_synced_commit":"db81de8dcb30835d29f7fb2e641f848631f5fa2f"},"previous_names":[],"tags_count":7,"template":false,"template_full_name":null,"purl":"pkg:github/datacoon/undatum","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/datacoon%2Fundatum","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/datacoon%2Fundatum/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/datacoon%2Fundatum/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/datacoon%2Fundatum/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/datacoon","download_url":"https://codeload.github.com/datacoon/undatum/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/datacoon%2Fundatum/sbom","scorecard":{"id":324334,"data":{"date":"2025-08-11","repo":{"name":"github.com/datacoon/undatum","commit":"711a80aff240cd9b52bf941c1dfda7585f22790e"},"scorecard":{"version":"v5.2.1-40-gf6ed084d","commit":"f6ed084d17c9236477efd66e5b258b9d4cc7b389"},"score":1.8,"checks":[{"name":"Code-Review","score":0,"reason":"Found 0/30 approved changesets -- score normalized to 0","details":null,"documentation":{"short":"Determines if the project requires human code review before pull requests (aka merge requests) are merged.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#code-review"}},{"name":"Maintained","score":2,"reason":"3 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 2","details":null,"documentation":{"short":"Determines if the project is \"actively 
maintained\".","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#maintained"}},{"name":"Token-Permissions","score":-1,"reason":"No tokens found","details":null,"documentation":{"short":"Determines if the project's workflows follow the principle of least privilege.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#token-permissions"}},{"name":"Dangerous-Workflow","score":-1,"reason":"no workflows found","details":null,"documentation":{"short":"Determines if the project's GitHub Action workflows avoid dangerous patterns.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#dangerous-workflow"}},{"name":"SAST","score":0,"reason":"no SAST tool detected","details":["Warn: no pull requests merged into dev branch"],"documentation":{"short":"Determines if the project uses static code analysis.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#sast"}},{"name":"Packaging","score":-1,"reason":"packaging workflow not detected","details":["Warn: no GitHub/GitLab publishing workflow detected."],"documentation":{"short":"Determines if the project is published as a package that others can easily download, install, easily update, and uninstall.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#packaging"}},{"name":"Binary-Artifacts","score":10,"reason":"no binaries found in the repo","details":null,"documentation":{"short":"Determines if the project has generated executable (binary) artifacts in the source repository.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#binary-artifacts"}},{"name":"CII-Best-Practices","score":0,"reason":"no effort to earn an OpenSSF best practices badge detected","details":null,"documentation":{"short":"Determines if the project has an OpenSSF (formerly CII) Best Practices Badge.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#cii-best-practices"}},{"name":"Pinned-Dependencies","score":-1,"reason":"no dependencies found","details":null,"documentation":{"short":"Determines if the project has declared and pinned the dependencies of its build process.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#pinned-dependencies"}},{"name":"Security-Policy","score":0,"reason":"security policy file not detected","details":["Warn: no security policy file detected","Warn: no security file to analyze","Warn: no security file to analyze","Warn: no security file to analyze"],"documentation":{"short":"Determines if the project has published a security policy.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#security-policy"}},{"name":"Fuzzing","score":0,"reason":"project is not fuzzed","details":["Warn: no fuzzer integrations found"],"documentation":{"short":"Determines if the project uses fuzzing.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#fuzzing"}},{"name":"License","score":10,"reason":"license file detected","details":["Info: project has a license file: LICENSE:0","Info: FSF or OSI recognized license: MIT License: LICENSE:0"],"documentation":{"short":"Determines if the project has defined a 
license.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#license"}},{"name":"Signed-Releases","score":0,"reason":"Project has not signed or included provenance with any releases.","details":["Warn: release artifact release_14 not signed: https://api.github.com/repos/datacoon/undatum/releases/72694148","Warn: release artifact release_12 not signed: https://api.github.com/repos/datacoon/undatum/releases/58380355","Warn: release artifact Releases not signed: https://api.github.com/repos/datacoon/undatum/releases/58257131","Warn: release artifact release_14 does not have provenance: https://api.github.com/repos/datacoon/undatum/releases/72694148","Warn: release artifact release_12 does not have provenance: https://api.github.com/repos/datacoon/undatum/releases/58380355","Warn: release artifact Releases does not have provenance: https://api.github.com/repos/datacoon/undatum/releases/58257131"],"documentation":{"short":"Determines if the project cryptographically signs release artifacts.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#signed-releases"}},{"name":"Branch-Protection","score":0,"reason":"branch protection not enabled on development/release branches","details":["Warn: branch protection not enabled for branch 'master'"],"documentation":{"short":"Determines if the default and release branches are protected with GitHub's branch protection settings.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#branch-protection"}},{"name":"Vulnerabilities","score":0,"reason":"11 existing vulnerabilities detected","details":["Warn: Project is vulnerable to: PYSEC-2023-188","Warn: Project is vulnerable to: PYSEC-2024-203","Warn: Project is vulnerable to: PYSEC-2024-25","Warn: Project is vulnerable to: PYSEC-2024-40 / GHSA-pwr2-4v36-6qpr","Warn: Project is vulnerable to: PYSEC-2021-47 / GHSA-5jqp-qgf6-3pvh","Warn: Project is vulnerable to: GHSA-mr82-8j83-vxmv","Warn: Project is vulnerable to: GHSA-m87m-mmvp-v9qm","Warn: Project is vulnerable to: PYSEC-2025-49 / GHSA-5rjg-fvgr-3xxf","Warn: Project is vulnerable to: GHSA-cx63-2mw6-8hw5","Warn: Project is vulnerable to: PYSEC-2022-43012 / GHSA-r9hx-vwmv-q579","Warn: Project is vulnerable to: PYSEC-2017-74"],"documentation":{"short":"Determines if the project has open, known unfixed vulnerabilities.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#vulnerabilities"}}]},"last_synced_at":"2025-08-18T02:06:41.835Z","repository_id":45905035,"created_at":"2025-08-18T02:06:41.836Z","updated_at":"2025-08-18T02:06:41.836Z"},"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28535634,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-18T10:13:46.436Z","status":"ssl_error","status_checked_at":"2026-01-18T10:13:11.045Z","response_time":98,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["bson","cli","command-line","csv","data","dataset","json","jsonl","jsonlines","parquet"],"created_at":"2026-01-18T12:18:33.542Z","updated_at":"2026-01-18T12:18:34.986Z","avatar_url":"https://github.com/datacoon.png","language":"Python","readme":"# undatum\r\n\r\n\u003e A powerful command-line tool for data processing and analysis\r\n\r\n**undatum** (pronounced *un-da-tum*) is a modern CLI tool designed to make working with large datasets as simple and efficient as possible. It provides a unified interface for converting, analyzing, validating, and transforming data across multiple formats.\r\n\r\n## Features\r\n\r\n- **Multi-format support**: CSV, JSON Lines, BSON, XML, XLS, XLSX, Parquet, AVRO, ORC\r\n- **Compression support**: ZIP, XZ, GZ, BZ2, ZSTD\r\n- **Low memory footprint**: Streams data for efficient processing of large files\r\n- **Automatic detection**: Encoding, delimiters, and file types\r\n- **Data validation**: Built-in rules for emails, URLs, and custom validators\r\n- **Advanced statistics**: Field analysis, frequency calculations, and date detection\r\n- **Flexible filtering**: Query and filter data using expressions\r\n- **Schema generation**: Automatic schema detection and generation\r\n- **Database ingestion**: Ingest data to MongoDB, PostgreSQL, DuckDB, MySQL, SQLite, and Elasticsearch with retry logic and error handling\r\n- **AI-powered documentation**: Automatic field and dataset descriptions using multiple LLM providers (OpenAI, OpenRouter, Ollama, LM Studio, Perplexity) with structured JSON output\r\n\r\n## Installation\r\n\r\n### Using pip (Recommended)\r\n\r\n```bash\r\npip install --upgrade pip setuptools\r\npip install undatum\r\n```\r\n\r\nDependencies are declared in `pyproject.toml` and will be installed automatically by modern versions of `pip` (23+). If you see missing-module errors after installation, upgrade `pip` and retry.\r\n\r\n### Requirements\r\n\r\n- Python 3.9 or greater\r\n\r\n### Install from source\r\n\r\n```bash\r\npython -m pip install --upgrade pip setuptools wheel\r\npython -m pip install .\r\n# or build distributables\r\npython setup.py sdist bdist_wheel\r\n```\r\n\r\n## Quick Start\r\n\r\n```bash\r\n# Get file headers\r\nundatum headers data.jsonl\r\n\r\n# Analyze file structure\r\nundatum analyze data.jsonl\r\n\r\n# Get statistics\r\nundatum stats data.csv\r\n\r\n# Convert XML to JSON Lines\r\nundatum convert --tagname item data.xml data.jsonl\r\n\r\n# Get unique values\r\nundatum uniq --fields category data.jsonl\r\n\r\n# Calculate frequency\r\nundatum frequency --fields status data.csv\r\n\r\n# Count rows\r\nundatum count data.csv\r\n\r\n# View first 10 rows\r\nundatum head data.jsonl\r\n\r\n# View last 10 rows\r\nundatum tail data.csv\r\n\r\n# Display formatted table\r\nundatum table data.csv --limit 20\r\n```\r\n\r\n## Commands\r\n\r\n### `analyze`\r\n\r\nAnalyzes data files and provides human-readable insights about structure, encoding, fields, and data types. 
## Commands

### `analyze`

Analyzes data files and provides human-readable insights about structure, encoding, fields, and data types. With `--autodoc`, it also generates field descriptions and a dataset summary using AI.

```bash
# Basic analysis
undatum analyze data.jsonl

# With AI-powered documentation
undatum analyze data.jsonl --autodoc

# Using specific AI provider
undatum analyze data.jsonl --autodoc --ai-provider openai --ai-model gpt-4o-mini

# Output to file
undatum analyze data.jsonl --output report.yaml --autodoc
```

**Output includes:**
- File type, encoding, compression
- Number of records and fields
- Field types and structure
- Table detection for nested data (JSON/XML)
- AI-generated field descriptions (with `--autodoc`)
- AI-generated dataset summary (with `--autodoc`)

**AI Provider Options:**
- `--ai-provider`: Choose provider (openai, openrouter, ollama, lmstudio, perplexity)
- `--ai-model`: Specify model name (provider-specific)
- `--ai-base-url`: Custom API endpoint URL

**Supported AI Providers:**

1. **OpenAI** (default if `OPENAI_API_KEY` is set)
   ```bash
   export OPENAI_API_KEY=sk-...
   undatum analyze data.csv --autodoc --ai-provider openai --ai-model gpt-4o-mini
   ```

2. **OpenRouter** (supports multiple models via a unified API)
   ```bash
   export OPENROUTER_API_KEY=sk-or-...
   undatum analyze data.csv --autodoc --ai-provider openrouter --ai-model openai/gpt-4o-mini
   ```

3. **Ollama** (local models, no API key required)
   ```bash
   # Start Ollama and pull a model first: ollama pull llama3.2
   undatum analyze data.csv --autodoc --ai-provider ollama --ai-model llama3.2
   # Or set custom URL: export OLLAMA_BASE_URL=http://localhost:11434
   ```

4. **LM Studio** (local models, OpenAI-compatible API)
   ```bash
   # Start LM Studio and load a model
   undatum analyze data.csv --autodoc --ai-provider lmstudio --ai-model local-model
   # Or set custom URL: export LMSTUDIO_BASE_URL=http://localhost:1234/v1
   ```

5. **Perplexity** (backward compatible, uses `PERPLEXITY_API_KEY`)
   ```bash
   export PERPLEXITY_API_KEY=pplx-...
   undatum analyze data.csv --autodoc --ai-provider perplexity
   ```

**Configuration Methods:**

The AI provider can be configured via three layers (a resolution sketch follows this list):

1. **Environment variables** (lowest precedence):
   ```bash
   export UNDATUM_AI_PROVIDER=openai
   export OPENAI_API_KEY=sk-...
   ```

2. **Config file** (medium precedence):
   Create `undatum.yaml` in your project root or `~/.undatum/config.yaml`:
   ```yaml
   ai:
     provider: openai
     api_key: ${OPENAI_API_KEY}  # Can reference env vars
     model: gpt-4o-mini
     timeout: 30
   ```

3. **CLI arguments** (highest precedence):
   ```bash
   undatum analyze data.csv --autodoc --ai-provider openai --ai-model gpt-4o-mini
   ```
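The precedence rules amount to a three-layer merge where later layers override earlier ones. A minimal sketch of that resolution logic, with a hypothetical helper name (this is not undatum's internal API):

```python
# Illustrative only: merging the three configuration layers described
# above, where CLI arguments beat the config file, which beats env vars.
import os

def resolve_ai_config(cli_args: dict, file_config: dict) -> dict:
    merged = {
        "provider": os.environ.get("UNDATUM_AI_PROVIDER"),
        "api_key": os.environ.get("OPENAI_API_KEY"),
    }
    merged.update({k: v for k, v in file_config.items() if v is not None})
    merged.update({k: v for k, v in cli_args.items() if v is not None})
    return merged

# A CLI flag wins over both the config file and the environment:
print(resolve_ai_config({"provider": "openai"}, {"provider": "ollama"}))
# -> provider resolves to 'openai'
```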
### `convert`

Converts data between different formats. Supports CSV, JSON Lines, BSON, XML, XLS, XLSX, Parquet, AVRO, and ORC.

```bash
# XML to JSON Lines
undatum convert --tagname item data.xml data.jsonl

# CSV to Parquet
undatum convert data.csv data.parquet

# JSON Lines to CSV
undatum convert data.jsonl data.csv
```

**Supported conversions:**

| From / To | CSV | JSONL | BSON | JSON | XLS | XLSX | XML | Parquet | ORC | AVRO |
|-----------|-----|-------|------|------|-----|------|-----|---------|-----|------|
| CSV       | -   | ✓     | ✓    | -    | -   | -    | -   | ✓       | ✓   | ✓    |
| JSONL     | ✓   | -     | -    | -    | -   | -    | -   | ✓       | ✓   | -    |
| BSON      | -   | ✓     | -    | -    | -   | -    | -   | -       | -   | -    |
| JSON      | -   | ✓     | -    | -    | -   | -    | -   | -       | -   | -    |
| XLS       | -   | ✓     | ✓    | -    | -   | -    | -   | -       | -   | -    |
| XLSX      | -   | ✓     | ✓    | -    | -   | -    | -   | -       | -   | -    |
| XML       | -   | ✓     | -    | -    | -   | -    | -   | -       | -   | -    |

### `count`

Counts the number of rows in a data file. With the DuckDB engine, counting is effectively instant for supported formats.

```bash
# Count rows in CSV file
undatum count data.csv

# Count rows in JSONL file
undatum count data.jsonl

# Use DuckDB engine for faster counting
undatum count data.parquet --engine duckdb
```

### `head`

Extracts the first N rows from a data file. Useful for quick data inspection.

```bash
# Extract first 10 rows (default)
undatum head data.csv

# Extract first 20 rows
undatum head data.jsonl --n 20

# Save to file
undatum head data.csv --n 5 output.csv
```

### `tail`

Extracts the last N rows from a data file. Uses efficient buffering for large files (sketched below).

```bash
# Extract last 10 rows (default)
undatum tail data.csv

# Extract last 50 rows
undatum tail data.jsonl --n 50

# Save to file
undatum tail data.csv --n 20 output.csv
```
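Buffering the last N rows does not require loading the whole file: a bounded double-ended queue holds at most N lines while the file streams past. A minimal sketch of that technique (illustrative, not undatum's code):

```python
# Illustrative only: constant-memory "last N rows" via a bounded deque.
from collections import deque

def tail_lines(path: str, n: int = 10) -> list[str]:
    buf: deque[str] = deque(maxlen=n)  # oldest lines evicted automatically
    with open(path, encoding="utf-8") as f:
        for line in f:
            buf.append(line.rstrip("\n"))
    return list(buf)

print("\n".join(tail_lines("data.jsonl", 10)))
```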
### `enum`

Adds row numbers, UUIDs, or constant values to records. Useful for adding unique identifiers or sequential numbers.

```bash
# Add row numbers (default field: row_id, starts at 1)
undatum enum data.csv output.csv

# Add UUIDs
undatum enum data.jsonl --field id --type uuid output.jsonl

# Add constant value
undatum enum data.csv --field status --type constant --value "active" output.csv

# Custom starting number
undatum enum data.jsonl --field sequence --start 100 output.jsonl
```

### `reverse`

Reverses the order of rows in a data file.

```bash
# Reverse rows
undatum reverse data.csv output.csv

# Reverse JSONL file
undatum reverse data.jsonl output.jsonl
```

### `table`

Displays data in a formatted, aligned table for inspection. Uses the rich library for beautiful terminal output.

```bash
# Display first 20 rows (default)
undatum table data.csv

# Display with custom limit
undatum table data.jsonl --limit 50

# Display only specific fields
undatum table data.csv --fields name,email,status
```

### `fixlengths`

Ensures all rows have the same number of fields by padding shorter rows or truncating longer rows. Useful for data cleaning workflows.

```bash
# Pad rows with empty string (default)
undatum fixlengths data.csv --strategy pad output.csv

# Pad with custom value
undatum fixlengths data.jsonl --strategy pad --value "N/A" output.jsonl

# Truncate longer rows
undatum fixlengths data.csv --strategy truncate output.csv
```

### `headers`

Extracts field names from data files. Works with CSV, JSON Lines, BSON, and XML files.

```bash
undatum headers data.jsonl
undatum headers data.csv --limit 50000
```

### `stats`

Generates detailed statistics about your dataset, including field types, uniqueness, lengths, and more. With the DuckDB engine, statistics generation is 10-100x faster for supported formats (CSV, JSONL, JSON, Parquet).

```bash
undatum stats data.jsonl
undatum stats data.csv --checkdates
undatum stats data.parquet --engine duckdb
```

**Statistics include:**
- Field types and array flags
- Unique value counts and percentages
- Min/max/average lengths
- Date field detection

**Performance:** the DuckDB engine is selected automatically for supported formats, providing columnar processing and SQL-based aggregations for faster statistics.

### `frequency`

Calculates frequency distribution for specified fields.

```bash
undatum frequency --fields category data.jsonl
undatum frequency --fields status,region data.csv
```

### `uniq`

Extracts all unique values from specified field(s).

```bash
# Single field
undatum uniq --fields category data.jsonl

# Multiple fields (unique combinations)
undatum uniq --fields status,region data.jsonl
```

### `sort`

Sorts rows by one or more columns. Supports multiple sort keys, ascending/descending order, and numeric sorting.

```bash
# Sort by single column ascending
undatum sort data.csv --by name output.csv

# Sort by multiple columns
undatum sort data.jsonl --by name,age output.jsonl

# Sort descending
undatum sort data.csv --by date --desc output.csv

# Numeric sort
undatum sort data.csv --by price --numeric output.csv
```

### `sample`

Randomly selects rows from a data file using the reservoir sampling algorithm (sketched below).

```bash
# Sample fixed number of rows
undatum sample data.csv --n 1000 output.csv

# Sample by percentage
undatum sample data.jsonl --percent 10 output.jsonl
```
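Reservoir sampling keeps a uniform sample of size n from a stream without knowing the total row count in advance. A minimal sketch of the classic Algorithm R (illustrative, not undatum's implementation):

```python
# Illustrative only: Algorithm R. After one pass, every row has had an
# equal chance of ending up in the n-row sample.
import random

def reservoir_sample(rows, n: int) -> list:
    sample = []
    for i, row in enumerate(rows):
        if i < n:
            sample.append(row)        # fill the reservoir first
        else:
            j = random.randint(0, i)  # inclusive upper bound
            if j < n:                 # replace with probability n/(i+1)
                sample[j] = row
    return sample

with open("data.jsonl", encoding="utf-8") as f:
    sampled = reservoir_sample(f, 1000)
```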
### `search`

Filters rows using regex patterns. Searches across specified fields or all fields.

```bash
# Search across all fields
undatum search data.csv --pattern "error|warning"

# Search in specific fields
undatum search data.jsonl --pattern "^[0-9]+$" --fields id,code

# Case-insensitive search
undatum search data.csv --pattern "ERROR" --ignore-case
```

### `dedup`

Removes duplicate rows. Can deduplicate by all fields or by specified key fields.

```bash
# Deduplicate by all fields
undatum dedup data.csv output.csv

# Deduplicate by key fields
undatum dedup data.jsonl --key-fields email output.jsonl

# Keep last duplicate
undatum dedup data.csv --key-fields id --keep last output.csv
```

### `fill`

Fills empty or null values with specified values or strategies (forward-fill, backward-fill).

```bash
# Fill with constant value
undatum fill data.csv --fields name,email --value "N/A" output.csv

# Forward fill (use previous value)
undatum fill data.jsonl --fields status --strategy forward output.jsonl

# Backward fill (use next value)
undatum fill data.csv --fields category --strategy backward output.csv
```

### `rename`

Renames fields by exact mapping or regex patterns.

```bash
# Rename by exact mapping
undatum rename data.csv --map "old_name:new_name,old2:new2" output.csv

# Rename using regex
undatum rename data.jsonl --pattern "^prefix_" --replacement "" output.jsonl
```

### `explode`

Splits a column by separator into multiple rows. Creates one row per value, duplicating other fields.

```bash
# Explode comma-separated values
undatum explode data.csv --field tags --separator "," output.csv

# Explode pipe-separated values
undatum explode data.jsonl --field categories --separator "|" output.jsonl
```

### `replace`

Performs string replacement in specified fields. Supports simple string replacement and regex-based replacement.

```bash
# Simple string replacement
undatum replace data.csv --field name --pattern "Mr." --replacement "Mr" output.csv

# Regex replacement
undatum replace data.jsonl --field email --pattern "@old.com" --replacement "@new.com" --regex output.jsonl

# Global replacement (all occurrences)
undatum replace data.csv --field text --pattern "old" --replacement "new" --global output.csv
```

### `cat`

Concatenates files by rows or columns.

```bash
# Concatenate files by rows (vertical)
undatum cat file1.csv file2.csv --mode rows output.csv

# Concatenate files by columns (horizontal)
undatum cat file1.csv file2.csv --mode columns output.csv
```

### `join`

Performs relational joins between two files. Supports inner, left, right, and full outer joins.

```bash
# Inner join by key field
undatum join data1.csv data2.csv --on email --type inner output.csv

# Left join (keep all rows from first file)
undatum join data1.jsonl data2.jsonl --on id --type left output.jsonl

# Right join (keep all rows from second file)
undatum join data1.csv data2.csv --on id --type right output.csv

# Full outer join (keep all rows from both files)
undatum join data1.jsonl data2.jsonl --on id --type full output.jsonl
```

### `diff`

Compares two files and shows differences (added, removed, and changed rows).

```bash
# Compare files by key
undatum diff file1.csv file2.csv --key id

# Output differences to file
undatum diff file1.jsonl file2.jsonl --key email --output changes.jsonl

# Show unified diff format
undatum diff file1.csv file2.csv --key id --format unified
```

### `exclude`

Removes rows from the input file whose keys match the exclusion file. Uses a hash-based lookup for performance (sketched below).

```bash
# Exclude rows by key
undatum exclude data.csv blacklist.csv --on email output.csv

# Exclude with multiple key fields
undatum exclude data.jsonl exclude.jsonl --on id,email output.jsonl
```
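Hash-based exclusion means the blacklist is read once into a set, after which each input row costs a single O(1) membership test. A minimal sketch (illustrative; CSV handling simplified, not undatum's code):

```python
# Illustrative only: set-based exclusion by key field.
import csv

def exclude_rows(data_path: str, blacklist_path: str, key: str):
    with open(blacklist_path, newline="", encoding="utf-8") as f:
        excluded = {row[key] for row in csv.DictReader(f)}  # hashed once
    with open(data_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row[key] not in excluded:  # constant-time lookup per row
                yield row

kept = list(exclude_rows("data.csv", "blacklist.csv", "email"))
```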
### `transpose`

Swaps rows and columns, handling headers appropriately.

```bash
# Transpose CSV file
undatum transpose data.csv output.csv

# Transpose JSONL file
undatum transpose data.jsonl output.jsonl
```

### `sniff`

Detects file properties including delimiter, encoding, field types, and record count.

```bash
# Detect file properties (text output)
undatum sniff data.csv

# Output sniff results as JSON
undatum sniff data.jsonl --format json

# Output as YAML
undatum sniff data.csv --format yaml
```

### `slice`

Extracts specific rows by range or index list. Supports efficient DuckDB-based slicing for supported formats.

```bash
# Slice by range
undatum slice data.csv --start 100 --end 200 output.csv

# Slice by specific indices
undatum slice data.jsonl --indices 1,5,10,20 output.jsonl
```
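DuckDB can express a range slice as a single SQL scan over the file, which is why it is fast for supported formats. A plausible equivalent of the range example above (illustrative; this is a plain DuckDB query, not undatum's internals):

```python
# Illustrative only: a 100-row slice starting at row 100 of a CSV,
# using DuckDB's built-in CSV reader with LIMIT/OFFSET.
import duckdb

rows = duckdb.sql(
    "SELECT * FROM read_csv_auto('data.csv') LIMIT 100 OFFSET 100"
).fetchall()
print(len(rows))
```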
### `fmt`

Reformats CSV data with specific formatting options (delimiter, quote style, escape character, line endings).

```bash
# Change delimiter
undatum fmt data.csv --delimiter ";" output.csv

# Change quote style
undatum fmt data.csv --quote always output.csv

# Change escape character
undatum fmt data.csv --escape backslash output.csv

# Change line endings
undatum fmt data.csv --line-ending crlf output.csv
```

### `select`

Selects and reorders columns from files. Supports filtering.

```bash
undatum select --fields name,email,status data.jsonl
undatum select --fields name,email --filter "\`status\` == 'active'" data.jsonl
```

### `split`

Splits datasets into multiple files based on chunk size or field values.

```bash
# Split by chunk size
undatum split --chunksize 10000 data.jsonl

# Split by field value
undatum split --fields category data.jsonl
```

### `validate`

Validates data against built-in or custom validation rules.

```bash
# Validate email addresses
undatum validate --rule common.email --fields email data.jsonl

# Validate Russian INN
undatum validate --rule ru.org.inn --fields VendorINN data.jsonl --mode stats

# Output invalid records
undatum validate --rule ru.org.inn --fields VendorINN data.jsonl --mode invalid
```

**Available validation rules:**
- `common.email` - Email address validation
- `common.url` - URL validation
- `ru.org.inn` - Russian organization INN identifier
- `ru.org.ogrn` - Russian organization OGRN identifier

### `schema`

Generates data schemas from files. Supports multiple output formats including YAML, JSON, Cerberus, JSON Schema, Avro, and Parquet.

```bash
# Generate schema in default YAML format
undatum schema data.jsonl

# Generate schema in JSON Schema format
undatum schema data.jsonl --format jsonschema

# Generate schema in Avro format
undatum schema data.jsonl --format avro

# Generate schema in Parquet format
undatum schema data.jsonl --format parquet

# Generate Cerberus schema (for backward compatibility with the deprecated `scheme` command)
undatum schema data.jsonl --format cerberus

# Save to file
undatum schema data.jsonl --output schema.yaml

# Generate schema with AI-powered field documentation
undatum schema data.jsonl --autodoc --output schema.yaml
```

**Supported schema formats:**
- `yaml` (default) - YAML format with full schema details
- `json` - JSON format with full schema details
- `cerberus` - Cerberus validation schema format (for backward compatibility with the deprecated `scheme` command)
- `jsonschema` - JSON Schema (W3C/IETF standard) - use for API validation, OpenAPI specs, and tool integration
- `avro` - Apache Avro schema format - use for Kafka message schemas and Hadoop data pipelines
- `parquet` - Parquet schema format - use for data lake schemas and Parquet file metadata

**Use cases:**
- **JSON Schema**: API documentation, data validation in web applications, OpenAPI specifications
- **Avro**: Kafka message schemas, Hadoop ecosystem integration, schema registry compatibility
- **Parquet**: Data lake schemas, Parquet file metadata, analytics pipeline definitions
- **Cerberus**: Python data validation (legacy; use `schema --format cerberus`, which replaces the deprecated `scheme` command)

**Examples:**

```bash
# Generate JSON Schema for API documentation
undatum schema api_data.jsonl --format jsonschema --output api_schema.json

# Generate Avro schema for Kafka
undatum schema events.jsonl --format avro --output events.avsc

# Generate Parquet schema for data lake
undatum schema data.csv --format parquet --output schema.json

# Generate Cerberus schema (replaces the deprecated `scheme` command)
undatum schema data.jsonl --format cerberus --output validation_schema.json
```

**Note:** The `scheme` command is deprecated. Use `undatum schema --format cerberus` instead. The `scheme` command shows a deprecation warning but continues to work for backward compatibility.

### `query`

Query data using the MistQL query language (experimental).

```bash
undatum query data.jsonl '@ | filter status == "active"'
```

### `flatten`

Flattens nested data structures into key-value pairs.

```bash
undatum flatten data.jsonl
```

### `apply`

Applies a transformation script to each record in the file.

```bash
undatum apply --script transform.py data.jsonl output.jsonl
```
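The README does not spell out the script interface, so the shape below is a guess for illustration only: a script exposing a function that receives one record as a dict and returns the transformed record. Treat the file layout, function name, and signature as hypothetical.

```python
# transform.py -- hypothetical example script for `undatum apply`;
# the interface undatum actually expects is not documented here.

def transform(record: dict) -> dict:
    """Normalize the email field and add a derived flag (example logic)."""
    email = (record.get("email") or "").strip().lower()
    record["email"] = email
    record["has_email"] = bool(email)
    return record
```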
### `ingest`

Ingests data from files into databases. Supports MongoDB, PostgreSQL, DuckDB, MySQL, SQLite, and Elasticsearch with robust error handling, retry logic, and progress tracking.

```bash
# Ingest to MongoDB
undatum ingest data.jsonl mongodb://localhost:27017 mydb mycollection

# Ingest to PostgreSQL (append mode)
undatum ingest data.csv postgresql://user:pass@localhost:5432/mydb mytable --dbtype postgresql

# Ingest to PostgreSQL with auto-create table
undatum ingest data.jsonl postgresql://user:pass@localhost:5432/mydb mytable \
  --dbtype postgresql \
  --create-table

# Ingest to PostgreSQL with upsert (update on conflict)
undatum ingest data.jsonl postgresql://user:pass@localhost:5432/mydb mytable \
  --dbtype postgresql \
  --mode upsert \
  --upsert-key id

# Ingest to PostgreSQL (replace mode - truncates table first)
undatum ingest data.csv postgresql://user:pass@localhost:5432/mydb mytable \
  --dbtype postgresql \
  --mode replace

# Ingest to DuckDB (file database)
undatum ingest data.csv duckdb:///path/to/database.db mytable --dbtype duckdb

# Ingest to DuckDB (in-memory database)
undatum ingest data.jsonl duckdb:///:memory: mytable --dbtype duckdb

# Ingest to DuckDB with auto-create table
undatum ingest data.jsonl duckdb:///path/to/database.db mytable \
  --dbtype duckdb \
  --create-table

# Ingest to DuckDB with upsert
undatum ingest data.jsonl duckdb:///path/to/database.db mytable \
  --dbtype duckdb \
  --mode upsert \
  --upsert-key id

# Ingest to DuckDB with Appender API (streaming)
undatum ingest data.jsonl duckdb:///path/to/database.db mytable \
  --dbtype duckdb \
  --use-appender

# Ingest to MySQL
undatum ingest data.csv mysql://user:pass@localhost:3306/mydb mytable --dbtype mysql

# Ingest to MySQL with auto-create table
undatum ingest data.jsonl mysql://user:pass@localhost:3306/mydb mytable \
  --dbtype mysql \
  --create-table

# Ingest to MySQL with upsert
undatum ingest data.jsonl mysql://user:pass@localhost:3306/mydb mytable \
  --dbtype mysql \
  --mode upsert \
  --upsert-key id

# Ingest to SQLite (file database)
undatum ingest data.csv sqlite:///path/to/database.db mytable --dbtype sqlite

# Ingest to SQLite (in-memory database)
undatum ingest data.jsonl sqlite:///:memory: mytable --dbtype sqlite

# Ingest to SQLite with auto-create table
undatum ingest data.jsonl sqlite:///path/to/database.db mytable \
  --dbtype sqlite \
  --create-table

# Ingest to SQLite with upsert
undatum ingest data.jsonl sqlite:///path/to/database.db mytable \
  --dbtype sqlite \
  --mode upsert \
  --upsert-key id

# Ingest to Elasticsearch
undatum ingest data.jsonl https://elasticsearch:9200 myindex myindex --dbtype elasticsearch --api-key YOUR_API_KEY --doc-id id

# Ingest with options
undatum ingest data.csv mongodb://localhost:27017 mydb mycollection \
  --batch 5000 \
  --drop \
  --totals \
  --timeout 30 \
  --skip 100

# Ingest multiple files
undatum ingest "data/*.jsonl" mongodb://localhost:27017 mydb mycollection
```

**Key Features:**
- **Automatic retry**: Retries failed operations with exponential backoff (3 attempts; see the sketch after this list)
- **Connection pooling**: Efficient connection management for all databases
- **Progress tracking**: Real-time progress bar with throughput (rows/second)
- **Error handling**: Continues processing after batch failures, logging detailed errors
- **Summary statistics**: Displays total rows, successful rows, failed rows, and throughput at completion
- **Connection validation**: Tests the database connection before starting ingestion
- **PostgreSQL optimizations**: Uses COPY FROM for maximum performance (10-100x faster than INSERT)
- **Schema management**: Auto-create tables from the data schema or validate existing schemas
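The retry pattern named above is standard exponential backoff: wait 1s, 2s, 4s between attempts and re-raise after the last one. A minimal sketch (illustrative, not undatum's code):

```python
# Illustrative only: retry a flaky operation with exponential backoff.
import time

def with_retries(operation, attempts: int = 3, base_delay: float = 1.0):
    for attempt in range(attempts):
        try:
            return operation()
        except (ConnectionError, TimeoutError):
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# e.g. with_retries(lambda: collection.insert_many(batch))
```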
**Options:**
- `--batch`: Batch size for ingestion (default: 1000; recommended: 10000 for PostgreSQL and MySQL, 50000 for DuckDB, 5000 for SQLite)
- `--dbtype`: Database type: `mongodb` (default), `postgresql`, `postgres`, `duckdb`, `mysql`, `sqlite`, `elasticsearch`, or `elastic`
- `--drop`: Drop existing collection/table before ingestion (MongoDB, Elasticsearch)
- `--mode`: Ingestion mode for PostgreSQL/DuckDB/MySQL/SQLite: `append` (default), `replace`, or `upsert`
- `--create-table`: Auto-create table from data schema (PostgreSQL/DuckDB/MySQL/SQLite)
- `--upsert-key`: Field name(s) for conflict resolution in upsert mode (PostgreSQL/DuckDB/MySQL/SQLite; comma-separated for multiple keys)
- `--use-appender`: Use the Appender API for DuckDB (streaming insertion; default: False)
- `--totals`: Show total record counts during ingestion (uses DuckDB for counting)
- `--timeout`: Connection timeout in seconds (positive values; by default the database's own default applies)
- `--skip`: Number of records to skip at the beginning
- `--api-key`: API key for database authentication (Elasticsearch)
- `--doc-id`: Field name to use as document ID (Elasticsearch; default: `id`)
- `--verbose`: Enable verbose logging output

**PostgreSQL-Specific Features:**
- **COPY FROM**: Fastest bulk loading method (100,000+ rows/second)
- **Upsert support**: `INSERT ... ON CONFLICT` for idempotent ingestion
- **Schema auto-creation**: Automatically creates tables with inferred types
- **Connection pooling**: Efficient connection reuse
- **Transaction management**: Atomic batch operations

**DuckDB-Specific Features:**
- **Fast batch inserts**: Optimized executemany for high throughput (200,000+ rows/second)
- **Appender API**: Streaming insertion for real-time data ingestion
- **Upsert support**: `INSERT ... ON CONFLICT` for idempotent ingestion
- **Schema auto-creation**: Automatically creates tables with inferred types
- **File and in-memory**: Supports both file-based and in-memory databases
- **No server required**: Embedded database, no separate server needed
- **Analytical database**: Optimized for analytical workloads and OLAP queries

**MySQL-Specific Features:**
- **Multi-row INSERT**: Efficient batch operations (10,000+ rows/second)
- **Upsert support**: `INSERT ... ON DUPLICATE KEY UPDATE` for idempotent ingestion
- **Schema auto-creation**: Automatically creates tables with inferred types
- **Connection management**: Efficient connection handling
- **Transaction support**: Atomic batch operations

**SQLite-Specific Features:**
- **PRAGMA optimizations**: Automatic performance tuning (synchronous=OFF, journal_mode=WAL; see the sketch after this list)
- **Fast batch inserts**: Optimized executemany (10,000+ rows/second)
- **Upsert support**: `INSERT ... ON CONFLICT` for idempotent ingestion (SQLite 3.24+)
- **Schema auto-creation**: Automatically creates tables with inferred types
- **File and in-memory**: Supports both file-based and in-memory databases
- **No server required**: Embedded database, no separate server needed
- **Built-in**: Uses Python's built-in sqlite3 module, no extra dependencies required
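The two PRAGMAs above trade durability for bulk-load speed: WAL journaling lets readers proceed during writes, and synchronous=OFF skips per-transaction fsyncs. A minimal sketch using Python's built-in sqlite3 module (illustrative, not undatum's code):

```python
# Illustrative only: PRAGMA-tuned bulk insert with the stdlib sqlite3 module.
import sqlite3

conn = sqlite3.connect("database.db")
conn.execute("PRAGMA journal_mode=WAL")  # readers don't block the writer
conn.execute("PRAGMA synchronous=OFF")   # skip fsync: faster, less durable
conn.execute("CREATE TABLE IF NOT EXISTS mytable (id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO mytable (id, name) VALUES (?, ?)",
    [(1, "Alice"), (2, "Bob")],  # stand-in for a real batch
)
conn.commit()
conn.close()
```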
**Error Handling:**
- Transient failures (connection timeouts, network errors) are automatically retried
- Partial batch failures are logged but don't stop ingestion
- Failed records are tracked and reported in the summary
- Detailed error messages help identify problematic data

**Performance:**
- Batch processing for efficient ingestion
- Connection pooling reduces overhead
- Progress tracking shows real-time throughput
- Optimized for large files with streaming support

**Example Output:**
```
Ingesting data.jsonl to mongodb://localhost:27017 with db mydb table mycollection
Ingesting to mongodb: 100%|████████████| 10000/10000 [00:05<00:00, 2000 rows/s]

Ingestion Summary:
  Total rows processed: 10000
  Successful rows: 10000
  Failed rows: 0
  Batches processed: 10
  Time elapsed: 5.00 seconds
  Average throughput: 2000 rows/second
```

## Advanced Usage

### Working with Compressed Files

undatum can process files inside compressed containers (ZIP, GZ, BZ2, XZ, ZSTD) with minimal memory usage.

```bash
# Process file inside ZIP archive
undatum headers --format-in jsonl data.zip

# Process XZ compressed file
undatum uniq --fields country --format-in jsonl data.jsonl.xz
```

### Filtering Data

Most commands support filtering using expressions:

```bash
# Filter by field value
undatum select --fields name,email --filter "\`status\` == 'active'" data.jsonl

# Complex filters
undatum frequency --fields category --filter "\`price\` > 100" data.jsonl
```

**Filter syntax:**
- Field names: `` `fieldname` ``
- String values: `'value'`
- Operators: `==`, `!=`, `>`, `<`, `>=`, `<=`, `and`, `or`

### Date Detection

Automatic date/datetime field detection:

```bash
undatum stats --checkdates data.jsonl
```

This uses the `qddate` library to automatically identify and parse date fields.

### Custom Encoding and Delimiters

Override automatic detection:

```bash
undatum headers --encoding cp1251 --delimiter ";" data.csv
undatum convert --encoding utf-8 --delimiter "," data.csv data.jsonl
```

## Data Formats

### JSON Lines (JSONL)

JSON Lines is a text format where each line is a valid JSON object. It combines JSON flexibility with line-by-line processing, making it ideal for large datasets.

```jsonl
{"name": "Alice", "age": 30}
{"name": "Bob", "age": 25}
{"name": "Charlie", "age": 35}
```
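Because every line is an independent JSON document, a JSONL file can be processed with flat memory use no matter its size. A minimal sketch of that streaming pattern (illustrative; this is what streaming by default amounts to, not undatum's actual code):

```python
# Illustrative only: one-record-at-a-time JSONL streaming.
import json

def stream_jsonl(path: str):
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():            # tolerate blank lines
                yield json.loads(line)  # parse a single record

for record in stream_jsonl("data.jsonl"):
    print(record["name"])
```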
### CSV

Standard comma-separated values format. undatum automatically detects delimiters (comma, semicolon, tab) and encoding.

### BSON

Binary JSON format used by MongoDB. Efficient for binary data storage.

### XML

XML files can be converted to JSON Lines by specifying the tag name containing records.

## AI Provider Troubleshooting

### Common Issues

**Provider not found:**
```bash
# Error: No AI provider specified
# Solution: Set environment variable or use --ai-provider
export UNDATUM_AI_PROVIDER=openai
# or
undatum analyze data.csv --autodoc --ai-provider openai
```

**API key not found:**
```bash
# Error: API key is required
# Solution: Set provider-specific API key
export OPENAI_API_KEY=sk-...
export OPENROUTER_API_KEY=sk-or-...
export PERPLEXITY_API_KEY=pplx-...
```

**Ollama connection failed:**
```bash
# Error: Connection refused
# Solution: Ensure Ollama is running and the model is pulled
ollama serve
ollama pull llama3.2
# Or specify custom URL
export OLLAMA_BASE_URL=http://localhost:11434
```

**LM Studio connection failed:**
```bash
# Error: Connection refused
# Solution: Start the LM Studio server and load a model
# In LM Studio: Start Server, then:
export LMSTUDIO_BASE_URL=http://localhost:1234/v1
```

**Structured output errors:**
- All providers now use JSON Schema for reliable parsing
- If a provider doesn't support structured output, it falls back gracefully
- Check provider documentation for model compatibility

### Provider-Specific Notes

- **OpenAI**: Requires an API key; supports `gpt-4o-mini`, `gpt-4o`, `gpt-3.5-turbo`, etc.
- **OpenRouter**: Unified API for multiple providers; supports models from OpenAI, Anthropic, Google, etc.
- **Ollama**: Local models, no API key needed, but requires Ollama to be installed and running
- **LM Studio**: Local models, OpenAI-compatible API, requires LM Studio to be running
- **Perplexity**: Requires an API key; uses the `sonar` model by default

## Performance Tips

1. **Use appropriate formats**: Parquet/ORC for analytics, JSONL for streaming
2. **Compression**: Use ZSTD or GZIP for better compression ratios
3. **Chunking**: Split large files for parallel processing
4. **Filtering**: Apply filters early to reduce data volume
5. **Streaming**: undatum streams data by default for low memory usage
6. **AI documentation**: Use local providers (Ollama/LM Studio) for faster, free documentation generation
7. **Batch processing**: AI descriptions are generated per table; consider splitting large datasets
## AI-Powered Documentation

The `analyze` command can automatically generate field descriptions and dataset summaries using AI when `--autodoc` is enabled. This feature supports multiple LLM providers and uses structured JSON output for reliable parsing (see the sketch at the end of this section).

### Quick Examples

```bash
# Basic AI documentation (auto-detects provider from environment)
undatum analyze data.csv --autodoc

# Use OpenAI with specific model
undatum analyze data.csv --autodoc --ai-provider openai --ai-model gpt-4o-mini

# Use local Ollama model
undatum analyze data.csv --autodoc --ai-provider ollama --ai-model llama3.2

# Use OpenRouter to access various models
undatum analyze data.csv --autodoc --ai-provider openrouter --ai-model anthropic/claude-3-haiku

# Output to YAML with AI descriptions
undatum analyze data.csv --autodoc --output schema.yaml --outtype yaml
```

### Configuration File Example

Create `undatum.yaml` in your project:

```yaml
ai:
  provider: openai
  model: gpt-4o-mini
  timeout: 30
```

Or use `~/.undatum/config.yaml` for global settings:

```yaml
ai:
  provider: ollama
  model: llama3.2
  ollama_base_url: http://localhost:11434
```

### Language Support

Generate descriptions in different languages:

```bash
# English (default)
undatum analyze data.csv --autodoc --lang English

# Russian
undatum analyze data.csv --autodoc --lang Russian

# Spanish
undatum analyze data.csv --autodoc --lang Spanish
```

### What Gets Generated

With `--autodoc` enabled, the analyzer will:

1. **Field descriptions**: Generate clear, concise descriptions for each field explaining what it represents
2. **Dataset summary**: Provide an overall description of the dataset based on sample data

Example output:

```yaml
tables:
  - id: data.csv
    fields:
      - name: customer_id
        ftype: VARCHAR
        description: "Unique identifier for each customer"
      - name: purchase_date
        ftype: DATE
        description: "Date when the purchase was made"
    description: "Customer purchase records containing transaction details"
```
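"Structured JSON output" means the model's reply is constrained by a JSON Schema rather than returned as free text. A sketch of what such a request can look like against an OpenAI-compatible API (illustrative; the prompt and schema are invented for the example, not undatum's actual ones):

```python
# Illustrative only: schema-constrained completion via the openai client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Describe the field 'customer_id'."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "field_description",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"description": {"type": "string"}},
                "required": ["description"],
                "additionalProperties": False,
            },
        },
    },
)
print(resp.choices[0].message.content)  # valid JSON matching the schema
```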
## Examples

### Data Pipeline Example

```bash
# 1. Analyze source data
undatum analyze source.xml

# 2. Convert to JSON Lines
undatum convert --tagname item source.xml data.jsonl

# 3. Validate data
undatum validate --rule common.email --fields email data.jsonl --mode invalid > invalid.jsonl

# 4. Get statistics
undatum stats data.jsonl > stats.json

# 5. Extract unique categories
undatum uniq --fields category data.jsonl > categories.txt

# 6. Convert to Parquet for analytics
undatum convert data.jsonl data.parquet
```

### Data Quality Check

```bash
# Check for duplicate emails
undatum frequency --fields email data.jsonl | grep -v "1$"

# Validate all required fields
undatum validate --rule common.email --fields email data.jsonl
undatum validate --rule common.url --fields website data.jsonl

# Generate schema with AI documentation
undatum schema data.jsonl --output schema.yaml --autodoc
```

### AI Documentation Workflow

```bash
# 1. Analyze dataset with AI-generated descriptions
undatum analyze sales_data.csv --autodoc --ai-provider openai --output analysis.yaml

# 2. Review generated field descriptions
cat analysis.yaml

# 3. Use descriptions in schema generation
undatum schema sales_data.csv --autodoc --output documented_schema.yaml

# 4. Bulk schema extraction with AI documentation
undatum schema_bulk ./data_dir --autodoc --output ./schemas --mode distinct
```

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

MIT License - see the LICENSE file for details.

## Links

- [GitHub Repository](https://github.com/datacoon/undatum)
- [Issue Tracker](https://github.com/datacoon/undatum/issues)

## Support

For questions, issues, or feature requests, please open an issue on GitHub.