https://github.com/run-llama/llama-cloud-ts
Typescript SDK for OCR and document parsing in the cloud with LlamaParse
- Host: GitHub
- URL: https://github.com/run-llama/llama-cloud-ts
- Owner: run-llama
- License: MIT
- Created: 2025-11-06T21:59:46.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2026-04-02T21:03:26.000Z (9 days ago)
- Last Synced: 2026-04-03T06:49:03.573Z (8 days ago)
- Topics: agent, agents, document-agent, document-processing, information-extraction, llamaparse, ocr, parser
- Language: TypeScript
- Homepage: https://cloud.llamaindex.ai/
- Size: 2.28 MB
- Stars: 7
- Watchers: 0
- Forks: 3
- Open Issues: 3
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Security: SECURITY.md
# Llama Cloud TypeScript SDK
[NPM package](https://npmjs.org/package/@llamaindex/llama-cloud)
The official TypeScript SDK for [LlamaParse](https://cloud.llamaindex.ai) - the enterprise platform for agentic OCR and document processing.
With this SDK, you can build workflows across the platform's core features:
- **Parse** - Agentic OCR and parsing for 130+ formats
- **Extract** - Structured data extraction with custom schemas
- **Classify** - Document categorization with natural-language rules
- **Agents** - Deploy document agents as APIs
- **Index** - Document ingestion and embedding for RAG
## Documentation
- [Get an API Key](https://cloud.llamaindex.ai)
- [Getting Started Guide](https://developers.llamaindex.ai/typescript/cloud/)
- [Full API Reference](https://developers.api.llamaindex.ai/api/typescript)
## Installation
```sh
npm install @llamaindex/llama-cloud
```
## Quick Start
```ts
import LlamaCloud from '@llamaindex/llama-cloud';
const client = new LlamaCloud({
  apiKey: process.env['LLAMA_CLOUD_API_KEY'], // This is the default and can be omitted
});

// Parse a document
const job = await client.parsing.create({
  tier: 'agentic',
  version: 'latest',
  file_id: 'your-file-id',
});
console.log(job.id);
```
## File Uploads
```ts
import fs from 'fs';
import LlamaCloud from '@llamaindex/llama-cloud';
const client = new LlamaCloud();

// Upload using a file stream
await client.files.create({
  file: fs.createReadStream('/path/to/document.pdf'),
  purpose: 'purpose',
});

// Or using a File object
await client.files.create({
  file: new File(['content'], 'document.txt'),
  purpose: 'purpose',
});
```
## MCP Server
Use the Llama Cloud MCP Server to enable AI assistants to interact with the API:
[Install in Cursor](https://cursor.com/en-US/install-mcp?name=%40llamaindex%2Fllama-cloud-mcp&config=eyJuYW1lIjoiQGxsYW1haW5kZXgvbGxhbWEtY2xvdWQtbWNwIiwidHJhbnNwb3J0IjoiaHR0cCIsInVybCI6Imh0dHBzOi8vbGxhbWFjbG91ZC1wcm9kLnN0bG1jcC5jb20iLCJoZWFkZXJzIjp7IngtbGxhbWEtY2xvdWQtYXBpLWtleSI6Ik15IEFQSSBLZXkifX0)
[Install in VS Code](https://vscode.stainless.com/mcp/%7B%22name%22%3A%22%40llamaindex%2Fllama-cloud-mcp%22%2C%22type%22%3A%22http%22%2C%22url%22%3A%22https%3A%2F%2Fllamacloud-prod.stlmcp.com%22%2C%22headers%22%3A%7B%22x-llama-cloud-api-key%22%3A%22My%20API%20Key%22%7D%7D)
## Error Handling
When the API returns a non-success status code, an `APIError` subclass is thrown:
```ts
await client.pipelines.list({ project_id: 'my-project-id' }).catch((err) => {
  if (err instanceof LlamaCloud.APIError) {
    console.log(err.status); // 400
    console.log(err.name); // BadRequestError
  }
});
```
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
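Since the table above maps each status code to exactly one error class, dispatch logic can branch on `err.status`. A minimal sketch of that mapping as a plain function (the `errorNameForStatus` helper is illustrative, not part of the SDK):

```typescript
// Map an HTTP status code to the SDK error class name from the table above.
// `undefined` means no response was received (connection failure).
function errorNameForStatus(status: number | undefined): string {
  if (status === undefined) return 'APIConnectionError';
  if (status >= 500) return 'InternalServerError';
  const names: Record<number, string> = {
    400: 'BadRequestError',
    401: 'AuthenticationError',
    403: 'PermissionDeniedError',
    404: 'NotFoundError',
    422: 'UnprocessableEntityError',
    429: 'RateLimitError',
  };
  // Any other non-5xx status falls back to the base class.
  return names[status] ?? 'APIError';
}

console.log(errorNameForStatus(404)); // NotFoundError
```

In practice, `instanceof` checks against the exported error classes (as in the example above) are the idiomatic way to branch; this helper just makes the table's mapping explicit.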
## Retries and Timeouts
The SDK automatically retries requests 2 times on connection errors, timeouts, rate limits, and 5xx errors. Requests timeout after 1 minute by default. Functions that combine multiple API calls (e.g. `client.parsing.parse()`) will have larger timeouts by default to account for the multiple requests and polling.
```ts
const client = new LlamaCloud({
  maxRetries: 0, // Disable retries (default: 2)
  timeout: 30 * 1000, // 30 second timeout (default: 1 minute)
});
```
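Both settings can likely also be overridden for a single call. This is a sketch, not confirmed from this repository's documentation: it assumes the SDK follows the common Stainless pattern of a second per-request options argument, and the `maxRetries`/`timeout` option names are assumptions to verify against the API reference.

```typescript
// Sketch: per-request overrides, assuming the standard Stainless-style
// second options argument. Verify the exact shape in the API reference.
const job = await client.parsing.create(
  { tier: 'agentic', version: 'latest', file_id: 'your-file-id' },
  { maxRetries: 5, timeout: 5 * 60 * 1000 }, // retry harder, allow up to 5 minutes
);
```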
## Pagination
List methods support auto-pagination with `for await...of`:
```ts
async function fetchAllExtractV2Jobs(params) {
  const allExtractV2Jobs = [];
  // Automatically fetches more pages as needed.
  for await (const extractV2Job of client.extract.list(params)) {
    allExtractV2Jobs.push(extractV2Job);
  }
  return allExtractV2Jobs;
}

const jobs = await fetchAllExtractV2Jobs({ page_size: 20 });
```
Or fetch one page at a time:
```ts
let page = await client.extract.list({ page_size: 20 });
for (const extractV2Job of page.items) {
  console.log(extractV2Job);
}
while (page.hasNextPage()) {
  page = await page.getNextPage();
  for (const extractV2Job of page.items) {
    console.log(extractV2Job);
  }
}
## Logging
Configure logging via the `LLAMA_CLOUD_LOG` environment variable or the `logLevel` option:
```ts
const client = new LlamaCloud({
  logLevel: 'debug', // 'debug' | 'info' | 'warn' | 'error' | 'off'
});
```
## Requirements
- TypeScript >= 4.9
- Node.js 20+, Deno 1.28+, Bun 1.0+, or modern browsers
## Contributing
See [CONTRIBUTING.md](./CONTRIBUTING.md).