{"id":16566869,"url":"https://github.com/instructor-ai/instructor","last_synced_at":"2025-04-28T10:18:26.117Z","repository":{"id":175256690,"uuid":"653589102","full_name":"instructor-ai/instructor","owner":"instructor-ai","description":"structured outputs for llms ","archived":false,"fork":false,"pushed_at":"2025-04-19T23:09:56.000Z","size":134269,"stargazers_count":10204,"open_issues_count":24,"forks_count":773,"subscribers_count":56,"default_branch":"main","last_synced_at":"2025-04-21T09:19:23.428Z","etag":null,"topics":["openai","openai-function-calli","openai-functions","pydantic-v2","python","validation"],"latest_commit_sha":null,"homepage":"https://python.useinstructor.com/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/instructor-ai.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null},"funding":{"github":"jxnl"}},"created_at":"2023-06-14T10:42:23.000Z","updated_at":"2025-04-21T08:22:32.000Z","dependencies_parsed_at":null,"dependency_job_id":"d9e6f296-3c4a-45dc-8ae3-de26f56fe1c9","html_url":"https://github.com/instructor-ai/instructor","commit_stats":null,"previous_names":["jxnl/openai_function_call","instructor-ai/instructor"],"tags_count":88,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/instructor-ai%2Finstructor","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/instructor-ai%2Finstructor/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/instructor-ai%2F
instructor/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/instructor-ai%2Finstructor/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/instructor-ai","download_url":"https://codeload.github.com/instructor-ai/instructor/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":250028611,"owners_count":21363162,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["openai","openai-function-calli","openai-functions","pydantic-v2","python","validation"],"created_at":"2024-10-11T21:01:51.669Z","updated_at":"2025-04-21T09:19:33.530Z","avatar_url":"https://github.com/instructor-ai.png","language":"Python","readme":"# Instructor, The Most Popular Library for Simple Structured Outputs\n\nInstructor is the most popular Python library for working with structured outputs from large language models (LLMs), boasting over 1 million monthly downloads. Built on top of Pydantic, it provides a simple, transparent, and user-friendly API to manage validation, retries, and streaming responses. 
Get ready to supercharge your LLM workflows with the community's top choice!\n\n[![Twitter Follow](https://img.shields.io/twitter/follow/jxnlco?style=social)](https://twitter.com/jxnlco)\n[![Discord](https://img.shields.io/discord/1192334452110659664?label=discord)](https://discord.gg/bD9YE9JArw)\n[![Downloads](https://img.shields.io/pypi/dm/instructor.svg)](https://pypi.python.org/pypi/instructor)\n\n## Want your logo on our website?\n\nIf your company uses Instructor a lot, we'd love to have your logo on our website! Please fill out [this form](https://q7gjsgfstrp.typeform.com/to/wluQlVVQ)\n\n## Key Features\n\n- **Response Models**: Specify Pydantic models to define the structure of your LLM outputs\n- **Retry Management**: Easily configure the number of retry attempts for your requests\n- **Validation**: Ensure LLM responses conform to your expectations with Pydantic validation\n- **Streaming Support**: Work with Lists and Partial responses effortlessly\n- **Flexible Backends**: Seamlessly integrate with various LLM providers beyond OpenAI\n- **Support in many Languages**: We support many languages including [Python](https://python.useinstructor.com), [TypeScript](https://js.useinstructor.com), [Ruby](https://ruby.useinstructor.com), [Go](https://go.useinstructor.com), and [Elixir](https://hex.pm/packages/instructor)\n\n## Get Started in Minutes\n\nInstall Instructor with a single command:\n\n```bash\npip install -U instructor\n```\n\nNow, let's see Instructor in action with a simple example:\n\n```python\nimport instructor\nfrom pydantic import BaseModel\nfrom openai import OpenAI\n\n\n# Define your desired output structure\nclass UserInfo(BaseModel):\n    name: str\n    age: int\n\n\n# Patch the OpenAI client\nclient = instructor.from_openai(OpenAI())\n\n# Extract structured data from natural language\nuser_info = client.chat.completions.create(\n    model=\"gpt-4o-mini\",\n    response_model=UserInfo,\n    messages=[{\"role\": \"user\", \"content\": \"John 
Doe is 30 years old.\"}],\n)\n\nprint(user_info.name)\n#\u003e John Doe\nprint(user_info.age)\n#\u003e 30\n```\n\n### Using Hooks\n\nInstructor provides a powerful hooks system that allows you to intercept and log various stages of the LLM interaction process. Here's a simple example demonstrating how to use hooks:\n\n```python\nimport instructor\nfrom openai import OpenAI\nfrom pydantic import BaseModel\n\n\nclass UserInfo(BaseModel):\n    name: str\n    age: int\n\n\n# Initialize the OpenAI client with Instructor\nclient = instructor.from_openai(OpenAI())\n\n\n# Define hook functions\ndef log_kwargs(**kwargs):\n    print(f\"Function called with kwargs: {kwargs}\")\n\n\ndef log_exception(exception: Exception):\n    print(f\"An exception occurred: {str(exception)}\")\n\n\nclient.on(\"completion:kwargs\", log_kwargs)\nclient.on(\"completion:error\", log_exception)\n\nuser_info = client.chat.completions.create(\n    model=\"gpt-4o-mini\",\n    response_model=UserInfo,\n    messages=[\n        {\"role\": \"user\", \"content\": \"Extract the user name: 'John is 20 years old'\"}\n    ],\n)\n\n\"\"\"\n{\n        'args': (),\n        'kwargs': {\n            'messages': [\n                {\n                    'role': 'user',\n                    'content': \"Extract the user name: 'John is 20 years old'\",\n                }\n            ],\n            'model': 'gpt-4o-mini',\n            'tools': [\n                {\n                    'type': 'function',\n                    'function': {\n                        'name': 'UserInfo',\n                        'description': 'Correctly extracted `UserInfo` with all the required parameters with correct types',\n                        'parameters': {\n                            'properties': {\n                                'name': {'title': 'Name', 'type': 'string'},\n                                'age': {'title': 'Age', 'type': 'integer'},\n                            },\n                            'required': 
['age', 'name'],\n                            'type': 'object',\n                        },\n                    },\n                }\n            ],\n            'tool_choice': {'type': 'function', 'function': {'name': 'UserInfo'}},\n        },\n    }\n\"\"\"\n\nprint(f\"Name: {user_info.name}, Age: {user_info.age}\")\n#\u003e Name: John, Age: 20\n```\n\nThis example demonstrates:\n\n1. A pre-execution hook that logs all kwargs passed to the function.\n2. An exception hook that logs any exceptions that occur during execution.\n\nThe hooks provide valuable insights into the function's inputs and any errors,\nenhancing debugging and monitoring capabilities.\n\n### Using Anthropic Models\n\n```python\nimport instructor\nfrom anthropic import Anthropic\nfrom pydantic import BaseModel\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n\n\nclient = instructor.from_anthropic(Anthropic())\n\n# note that client.chat.completions.create will also work\nresp = client.messages.create(\n    model=\"claude-3-opus-20240229\",\n    max_tokens=1024,\n    system=\"You are a world class AI that excels at extracting user data from a sentence\",\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"Extract Jason is 25 years old.\",\n        }\n    ],\n    response_model=User,\n)\n\nassert isinstance(resp, User)\nassert resp.name == \"Jason\"\nassert resp.age == 25\n```\n\n### Using Cohere Models\n\nMake sure to install `cohere` and set your system environment variable with `export CO_API_KEY=\u003cYOUR_COHERE_API_KEY\u003e`.\n\n```\npip install cohere\n```\n\n```python\nimport instructor\nimport cohere\nfrom pydantic import BaseModel\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n\n\nclient = instructor.from_cohere(cohere.Client())\n\n# note that client.chat.completions.create will also work\nresp = client.chat.completions.create(\n    model=\"command-r-plus\",\n    max_tokens=1024,\n    messages=[\n        {\n            \"role\": 
\"user\",\n            \"content\": \"Extract Jason is 25 years old.\",\n        }\n    ],\n    response_model=User,\n)\n\nassert isinstance(resp, User)\nassert resp.name == \"Jason\"\nassert resp.age == 25\n```\n\n### Using Gemini Models\n\nMake sure you [install](https://ai.google.dev/api/python/google/generativeai#setup) the Google AI Python SDK. You should set a `GOOGLE_API_KEY` environment variable with your API key.\nGemini tool calling also requires `jsonref` to be installed.\n\n```\npip install google-generativeai jsonref\n```\n\n```python\nimport instructor\nimport google.generativeai as genai\nfrom pydantic import BaseModel\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n\n\n# genai.configure(api_key=os.environ[\"API_KEY\"]) # alternative API key configuration\nclient = instructor.from_gemini(\n    client=genai.GenerativeModel(\n        model_name=\"models/gemini-1.5-flash-latest\",  # model defaults to \"gemini-pro\"\n    ),\n    mode=instructor.Mode.GEMINI_JSON,\n)\n```\n\nAlternatively, you can [call Gemini from the OpenAI client](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/call-gemini-using-openai-library#python). 
You'll have to set up [`gcloud`](https://cloud.google.com/docs/authentication/provide-credentials-adc#local-dev), get set up on Vertex AI, and install the Google Auth library.\n\n```sh\npip install google-auth\n```\n\n```python\nimport google.auth\nimport google.auth.transport.requests\nimport instructor\nfrom openai import OpenAI\nfrom pydantic import BaseModel\n\ncreds, project = google.auth.default()\nauth_req = google.auth.transport.requests.Request()\ncreds.refresh(auth_req)\n\n# Pass the Vertex endpoint and authentication to the OpenAI SDK\nPROJECT = 'PROJECT_ID'\nLOCATION = (\n    'LOCATION'  # https://cloud.google.com/vertex-ai/generative-ai/docs/learn/locations\n)\nbase_url = f'https://{LOCATION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT}/locations/{LOCATION}/endpoints/openapi'\n\nclient = instructor.from_openai(\n    OpenAI(base_url=base_url, api_key=creds.token), mode=instructor.Mode.JSON\n)\n\n\n# JSON mode is required\nclass User(BaseModel):\n    name: str\n    age: int\n\n\nresp = client.chat.completions.create(\n    model=\"google/gemini-1.5-flash-001\",\n    max_tokens=1024,\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"Extract Jason is 25 years old.\",\n        }\n    ],\n    response_model=User,\n)\n\nassert isinstance(resp, User)\nassert resp.name == \"Jason\"\nassert resp.age == 25\n```\n\n### Using Perplexity Sonar Models\n\n```python\nimport instructor\nfrom openai import OpenAI\nfrom pydantic import BaseModel\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n\n\nclient = instructor.from_perplexity(OpenAI(base_url=\"https://api.perplexity.ai\"))\n\nresp = client.chat.completions.create(\n    model=\"sonar\",\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"Extract Jason is 25 years old.\",\n        }\n    ],\n    response_model=User,\n)\n\nassert isinstance(resp, User)\nassert resp.name == \"Jason\"\nassert resp.age == 25\n```\n\n### Using 
Litellm\n\n```python\nimport instructor\nfrom litellm import completion\nfrom pydantic import BaseModel\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n\n\nclient = instructor.from_litellm(completion)\n\nresp = client.chat.completions.create(\n    model=\"claude-3-opus-20240229\",\n    max_tokens=1024,\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"Extract Jason is 25 years old.\",\n        }\n    ],\n    response_model=User,\n)\n\nassert isinstance(resp, User)\nassert resp.name == \"Jason\"\nassert resp.age == 25\n```\n\n## Types are inferred correctly\n\nThis was the dream of Instructor but due to the patching of OpenAI, it wasn't possible for me to get typing to work well. Now, with the new client, we can get typing to work well! We've also added a few `create_*` methods to make it easier to create iterables and partials, and to access the original completion.\n\n### Calling `create`\n\n```python\nimport openai\nimport instructor\nfrom pydantic import BaseModel\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n\n\nclient = instructor.from_openai(openai.OpenAI())\n\nuser = client.chat.completions.create(\n    model=\"gpt-4-turbo-preview\",\n    messages=[\n        {\"role\": \"user\", \"content\": \"Create a user\"},\n    ],\n    response_model=User,\n)\n```\n\nNow if you use an IDE, you can see the type is correctly inferred.\n\n![type](./docs/blog/posts/img/type.png)\n\n### Handling async: `await create`\n\nThis will also work correctly with asynchronous clients.\n\n```python\nimport openai\nimport instructor\nfrom pydantic import BaseModel\n\n\nclient = instructor.from_openai(openai.AsyncOpenAI())\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n\n\nasync def extract():\n    return await client.chat.completions.create(\n        model=\"gpt-4-turbo-preview\",\n        messages=[\n            {\"role\": \"user\", \"content\": \"Create a user\"},\n        ],\n        response_model=User,\n    
)\n```\n\nNotice that because we simply return the result of the `create` call, the `extract()` function is correctly typed to return a `User`.\n\n![async](./docs/blog/posts/img/async_type.png)\n\n### Returning the original completion: `create_with_completion`\n\nYou can also retrieve the original completion object alongside the parsed response:\n\n```python\nimport openai\nimport instructor\nfrom pydantic import BaseModel\n\n\nclient = instructor.from_openai(openai.OpenAI())\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n\n\nuser, completion = client.chat.completions.create_with_completion(\n    model=\"gpt-4-turbo-preview\",\n    messages=[\n        {\"role\": \"user\", \"content\": \"Create a user\"},\n    ],\n    response_model=User,\n)\n```\n\n![with_completion](./docs/blog/posts/img/with_completion.png)\n\n### Streaming Partial Objects: `create_partial`\n\nIn order to handle streams, we still support `Iterable[T]` and `Partial[T]`, but to simplify the type inference, we've added `create_iterable` and `create_partial` methods as well!\n\n```python\nimport openai\nimport instructor\nfrom pydantic import BaseModel\n\n\nclient = instructor.from_openai(openai.OpenAI())\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n\n\nuser_stream = client.chat.completions.create_partial(\n    model=\"gpt-4-turbo-preview\",\n    messages=[\n        {\"role\": \"user\", \"content\": \"Create a user\"},\n    ],\n    response_model=User,\n)\n\n# The partial object fills in as tokens stream in; repeated\n# identical chunks are omitted below for brevity\nfor user in user_stream:\n    print(user)\n    #\u003e name=None age=None\n    #\u003e name='John Doe' age=None\n    #\u003e name='John Doe' age=30\n```\n\nNotice now 
that the type inferred is `Generator[User, None]`\n\n![generator](./docs/blog/posts/img/generator.png)\n\n### Streaming Iterables: `create_iterable`\n\nWhen we want to extract multiple objects, `create_iterable` gives us an iterable of them.\n\n```python\nimport openai\nimport instructor\nfrom pydantic import BaseModel\n\n\nclient = instructor.from_openai(openai.OpenAI())\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n\n\nusers = client.chat.completions.create_iterable(\n    model=\"gpt-4-turbo-preview\",\n    messages=[\n        {\"role\": \"user\", \"content\": \"Create 2 users\"},\n    ],\n    response_model=User,\n)\n\nfor user in users:\n    print(user)\n    #\u003e name='John Doe' age=30\n    #\u003e name='Jane Doe' age=28\n```\n\n![iterable](./docs/blog/posts/img/iterable.png)\n\n## [Evals](https://github.com/jxnl/instructor/tree/main/tests/llm/test_openai/evals#how-to-contribute-writing-and-running-evaluation-tests)\n\nWe invite you to contribute evals in `pytest` as a way to monitor the quality of the OpenAI models and the `instructor` library. To get started, check out the evals for [Anthropic](https://github.com/jxnl/instructor/blob/main/tests/llm/test_anthropic/evals/test_simple.py) and [OpenAI](https://github.com/jxnl/instructor/tree/main/tests/llm/test_openai/evals#how-to-contribute-writing-and-running-evaluation-tests) and contribute your own evals in the form of pytest tests. These evals will be run once a week and the results will be posted.\n\n## Contributing\n\nWe welcome contributions to Instructor! Whether you're fixing bugs, adding features, improving documentation, or writing blog posts, your help is appreciated.\n\n### Getting Started\n\nIf you're new to the project, check out issues marked as [`good-first-issue`](https://github.com/jxnl/instructor/labels/good%20first%20issue) or [`help-wanted`](https://github.com/jxnl/instructor/labels/help%20wanted). 
These could be anything from code improvements, a guest blog post, or a new cookbook.\n\n### Setting Up the Development Environment\n\n1. **Fork and clone the repository**\n   ```bash\n   git clone https://github.com/YOUR-USERNAME/instructor.git\n   cd instructor\n   ```\n\n2. **Set up the development environment**\n   \n   We use `uv` to manage dependencies, which provides faster package installation and dependency resolution than traditional tools. If you don't have `uv` installed, [install it first](https://github.com/astral-sh/uv).\n   \n   ```bash\n   # Create and activate a virtual environment\n   uv venv .venv\n   source .venv/bin/activate  # On Windows: .venv\\Scripts\\activate\n   \n   # Install dependencies with all extras \n   # You can specify specific groups if needed\n   uv sync --all-extras --group dev\n   \n   # Or for a specific integration\n   # uv sync --all-extras --group dev,anthropic\n   ```\n\n3. **Install pre-commit hooks**\n   \n   We use pre-commit hooks to ensure code quality:\n   \n   ```bash\n   uv pip install pre-commit\n   pre-commit install\n   ```\n   \n   This will automatically run Ruff formatters and linting checks before each commit, ensuring your code meets our style guidelines.\n\n### Running Tests\n\nTests help ensure that your contributions don't break existing functionality:\n\n```bash\n# Run all tests\nuv run pytest\n\n# Run specific tests\nuv run pytest tests/path/to/test_file.py\n\n# Run tests with coverage reporting\nuv run pytest --cov=instructor\n```\n\nWhen submitting a PR, make sure to write tests for any new functionality and verify that all tests pass locally.\n\n### Code Style and Quality Requirements\n\nWe maintain high code quality standards to keep the codebase maintainable and consistent:\n\n- **Formatting and Linting**: We use `ruff` for code formatting and linting, and `pyright` for type checking.\n  ```bash\n  # Check code formatting\n  uv run ruff format --check\n  \n  # Apply formatting\n  uv run ruff 
format\n  \n  # Run linter\n  uv run ruff check\n  \n  # Fix auto-fixable linting issues\n  uv run ruff check --fix\n  ```\n\n- **Type Hints**: All new code should include proper type hints.\n\n- **Documentation**: Code should be well-documented with docstrings and comments where appropriate.\n\nMake sure these checks pass when you submit a PR:\n- Linting: `uv run ruff check`\n- Formatting: `uv run ruff format`\n- Type checking: `uv run pyright`\n\n### Development Workflow\n\n1. **Create a branch for your changes**\n   ```bash\n   git checkout -b feature/your-feature-name\n   ```\n\n2. **Make your changes and commit them**\n   ```bash\n   git add .\n   git commit -m \"Your descriptive commit message\"\n   ```\n\n3. **Keep your branch updated with the main repository**\n   ```bash\n   git remote add upstream https://github.com/instructor-ai/instructor.git\n   git fetch upstream\n   git rebase upstream/main\n   ```\n\n4. **Push your changes**\n   ```bash\n   git push origin feature/your-feature-name\n   ```\n\n### Pull Request Process\n\n1. **Create a Pull Request** from your fork to the main repository.\n\n2. **Fill out the PR template** with a description of your changes, relevant issue numbers, and any other information that would help reviewers understand your contribution.\n\n3. **Address review feedback** and make any requested changes.\n\n4. **Wait for CI checks** to pass. The PR will be reviewed by maintainers once all checks are green.\n\n5. **Merge**: Once approved, a maintainer will merge your PR.\n\n### Contributing to Evals\n\nWe encourage contributions to our evaluation tests. See the [Evals documentation](https://github.com/jxnl/instructor/tree/main/tests/llm/test_openai/evals#how-to-contribute-writing-and-running-evaluation-tests) for details on writing and running evaluation tests.\n\n### Pre-commit Hooks\n\nWe use pre-commit hooks to ensure code quality. To set up pre-commit hooks:\n\n1. Install pre-commit: `pip install pre-commit`\n2. 
Set up the hooks: `pre-commit install`\n\nThis will automatically run Ruff formatters and linting checks before each commit, ensuring your code meets our style guidelines.\n\n## CLI\n\nWe also provide some CLI functionality for convenience:\n\n- `instructor jobs` : This helps with the creation of fine-tuning jobs with OpenAI. Simply run `instructor jobs create-from-file --help` to get started creating your first fine-tuned GPT-3.5 model\n\n- `instructor files` : Manage your uploaded files with ease. You'll be able to create, delete and upload files all from the command line\n\n- `instructor usage` : Instead of heading to the OpenAI site each time, you can monitor your usage from the CLI and filter by date and time period. Note that usage often takes ~5-10 minutes to update from OpenAI's side\n\n## License\n\nThis project is licensed under the terms of the MIT License.\n\n## Citation\n\nIf you use Instructor in your research, please cite it using the following BibTeX:\n\n```bibtex\n@software{liu2024instructor,\n  author = {Jason Liu and Contributors},\n  title = {Instructor: A library for structured outputs from large language models},\n  url = {https://github.com/instructor-ai/instructor},\n  year = {2024},\n  month = {3}\n}\n```\n\n# Contributors\n\n\u003c!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section --\u003e\n\u003c!-- prettier-ignore-start --\u003e\n\u003c!-- markdownlint-disable --\u003e\n\n\u003c!-- markdownlint-restore --\u003e\n\u003c!-- prettier-ignore-end --\u003e\n\n\u003c!-- ALL-CONTRIBUTORS-LIST:END --\u003e\n\n\u003ca href=\"https://github.com/instructor-ai/instructor/graphs/contributors\"\u003e\n  \u003cimg src=\"https://contrib.rocks/image?repo=instructor-ai/instructor\" /\u003e\n\u003c/a\u003e\n","funding_links":["https://github.com/sponsors/jxnl"],"categories":["Python","NLP","Frameworks","\u003ca id=\"tools\"\u003e\u003c/a\u003e🛠️ Tools","Data Pipeline","Inference","Agent Integration \u0026 Deployment 
Tools"],"sub_categories":["Bleeding Edge ⚗️","Output","AI Agent Development"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Finstructor-ai%2Finstructor","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Finstructor-ai%2Finstructor","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Finstructor-ai%2Finstructor/lists"}