{"id":15169643,"url":"https://github.com/leftmove/cria","last_synced_at":"2026-03-10T11:04:55.088Z","repository":{"id":234809736,"uuid":"789542959","full_name":"leftmove/cria","owner":"leftmove","description":"Run LLMs locally with as little friction as possible.","archived":false,"fork":false,"pushed_at":"2025-05-03T02:09:09.000Z","size":85,"stargazers_count":121,"open_issues_count":2,"forks_count":5,"subscribers_count":3,"default_branch":"main","last_synced_at":"2026-01-07T04:44:11.468Z","etag":null,"topics":["collaborate","github","github-codespaces","github-copilot","llama","llm","ollama","python"],"latest_commit_sha":null,"homepage":"https://pypi.org/project/cria/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/leftmove.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":".github/FUNDING.yml","license":"LICENSE.md","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null},"funding":{"github":null,"patreon":null,"open_collective":null,"ko_fi":"anonyon","tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"lfx_crowdfunding":null,"polar":null,"custom":null}},"created_at":"2024-04-20T21:02:08.000Z","updated_at":"2025-11-25T23:17:37.000Z","dependencies_parsed_at":"2024-08-03T07:49:57.987Z","dependency_job_id":"0829bd13-775d-4fd8-ad79-cb81d2eb76d0","html_url":"https://github.com/leftmove/cria","commit_stats":{"total_commits":46,"total_committers":3,"mean_commits":"15.333333333333334","dds":0.08695652173913049,"last_synced_commit":"e1a979a1716a0f023f9ed0a8fffac8f875d0dcb5"},"previous_names":["leftmove/cria"],"tags_count":9,"template":false,"template_
full_name":null,"purl":"pkg:github/leftmove/cria","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leftmove%2Fcria","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leftmove%2Fcria/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leftmove%2Fcria/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leftmove%2Fcria/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/leftmove","download_url":"https://codeload.github.com/leftmove/cria/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leftmove%2Fcria/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30331650,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-10T05:25:20.737Z","status":"ssl_error","status_checked_at":"2026-03-10T05:25:17.430Z","response_time":106,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["collaborate","github","github-codespaces","github-copilot","llama","llm","ollama","python"],"created_at":"2024-09-27T07:04:14.756Z","updated_at":"2026-03-10T11:04:55.076Z","avatar_url":"https://github.com/leftmove.png","language":"Python","readme":"\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://github.com/leftmove/cria\"\u003e\u003cimg src=\"https://i.imgur.com/vjGJOLQ.png\" 
alt=\"cria\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\u003cp align=\"center\"\u003e\n    \u003cem\u003eCria, use Python to run LLMs with as little friction as possible.\u003c/em\u003e\n\u003c/p\u003e\n\nCria is a library for programmatically running Large Language Models through Python. Cria is built so you need as little configuration as possible — even with more advanced features.\n\n- **Easy**: No configuration is required out of the box. Getting started takes just five lines of code.\n- **Concise**: Write less code to save time and avoid duplication.\n- **Local**: Free and unobstructed by rate limits; running LLMs requires no internet connection.\n- **Efficient**: Use advanced features with your own `ollama` instance, or a subprocess.\n\n\u003c!-- \u003cp align=\"center\"\u003e\n  \u003cem\u003e\n    Cria uses \u003ca href=\"https://ollama.com/\"\u003eollama\u003c/a\u003e.\n  \u003c/em\u003e\n\u003c/p\u003e --\u003e\n\n## Guide\n\n- [Quick Start](#quickstart)\n- [Installation](#installation)\n  - [Windows](#windows)\n  - [Mac](#mac)\n  - [Linux](#linux)\n- [Advanced Usage](#advanced-usage)\n  - [Custom Models](#custom-models)\n  - [Streams](#streams)\n  - [Closing](#closing)\n  - [Message History](#message-history)\n    - [Follow-Up](#follow-up)\n    - [Clear Message History](#clear-message-history)\n    - [Passing In Custom Context](#passing-in-custom-context)\n  - [Interrupting](#interrupting)\n    - [With Message History](#with-message-history)\n    - [Without Message History](#without-message-history)\n  - [Multiple Models and Parallel Conversations](#multiple-models-and-parallel-conversations)\n    - [Models](#models)\n    - [With](#with-model)\n    - [Standalone](#standalone-model)\n  - [Generate](#generate)\n  - [Running Standalone](#running-standalone)\n  - [Formatting](#formatting)\n- [Contributing](#contributing)\n- [License](#license)\n\n## Quickstart\n\nRunning Cria is easy. 
After installation, you need just five lines of code — no configurations, no manual downloads, no API keys, and no servers to worry about.\n\n```python\nimport cria\n\nai = cria.Cria()\n\nprompt = \"Who is the CEO of OpenAI?\"\nfor chunk in ai.chat(prompt):\n    print(chunk, end=\"\")\n```\n\n```\n\u003e\u003e\u003e The CEO of OpenAI is Sam Altman!\n```\n\nOr, you can run this more configurable example.\n\n```python\nimport cria\n\nwith cria.Model(\"llama3.1:8b\") as ai:\n  prompt = \"Who is the CEO of OpenAI?\"\n  response = ai.chat(prompt, stream=False)\n  print(response)\n\nwith cria.Model(\"llama3:8b\") as ai:\n  prompt = \"Who is the CEO of OpenAI?\"\n  response = ai.chat(prompt, stream=False)\n  print(response)\n```\n\n```\n\u003e\u003e\u003e The CEO of OpenAI is Sam Altman.\n\u003e\u003e\u003e The CEO of OpenAI is Sam Altman!\n```\n\n\u003e [!WARNING]\n\u003e If no model is configured, Cria automatically installs and runs the default model: `llama3.1:8b` (4.7GB).\n\n## Installation\n\n1. Cria uses [`ollama`](https://ollama.com/). To install it, follow the instructions for your platform.\n\n   ### Windows\n\n   [Download](https://ollama.com/download/windows)\n\n   ### Mac\n\n   [Download](https://ollama.com/download/mac)\n\n   ### Linux\n\n   ```\n   curl -fsSL https://ollama.com/install.sh | sh\n   ```\n\n2. Install Cria with `pip`.\n\n   ```\n   pip install cria\n   ```\n\n## Advanced Usage\n\n### Custom Models\n\nTo run other LLMs, pass them into your `ai` variable.\n\n```python\nimport cria\n\nai = cria.Cria(\"llama2\")\n\nprompt = \"Who is the CEO of OpenAI?\"\nfor chunk in ai.chat(prompt):\n    print(chunk, end=\"\") # The CEO of OpenAI is Sam Altman. 
He co-founded OpenAI in 2015 with...\n```\n\nYou can find available models [here](https://ollama.com/library).\n\n### Streams\n\nStreams are used by default in Cria, but you can turn them off by passing in a boolean for the `stream` parameter.\n\n```python\nprompt = \"Who is the CEO of OpenAI?\"\nresponse = ai.chat(prompt, stream=False)\nprint(response) # The CEO of OpenAI is Sam Altman!\n```\n\n### Closing\n\nBy default, models are closed when you exit the Python program, but closing them manually is a best practice.\n\n```python\nai.close()\n```\n\nYou can also use [`with`](#with-model) statements to close models automatically (recommended).\n\n### Message History\n\n#### Follow-Up\n\nMessage history is automatically saved in Cria, so asking follow-up questions is easy.\n\n```python\nprompt = \"Who is the CEO of OpenAI?\"\nresponse = ai.chat(prompt, stream=False)\nprint(response) # The CEO of OpenAI is Sam Altman.\n\nprompt = \"Tell me more about him.\"\nresponse = ai.chat(prompt, stream=False)\nprint(response) # Sam Altman is an American entrepreneur and technologist who serves as the CEO of OpenAI...\n```\n\n#### Clear Message History\n\nYou can reset message history by running the `clear` method.\n\n```python\nprompt = \"Who is the CEO of OpenAI?\"\nresponse = ai.chat(prompt, stream=False)\nprint(response) # Sam Altman is an American entrepreneur and technologist who serves as the CEO of OpenAI...\n\nai.clear()\n\nprompt = \"Tell me more about him.\"\nresponse = ai.chat(prompt, stream=False)\nprint(response) # I apologize, but I don't have any information about \"him\" because the conversation just started...\n```\n\n#### Passing In Custom Context\n\nYou can also create a custom message history, and pass in your own context.\n\n```python\ncontext = \"Our AI system employed a hybrid approach combining reinforcement learning and generative adversarial networks (GANs) to optimize the decision-making...\"\nmessages = [\n    {\"role\": \"system\", \"content\": 
\"You are a technical documentation writer\"},\n    {\"role\": \"user\", \"content\": context},\n]\n\nprompt = \"Write some documentation using the text I gave you.\"\nfor chunk in ai.chat(messages=messages, prompt=prompt):\n    print(chunk, end=\"\") # AI System Optimization: Hybrid Approach Combining Reinforcement Learning and...\n```\n\nIn the example, instructions are given to the LLM as the `system`. Then, extra context is given as the `user`. Finally, the prompt is entered (as a `user`). You can use any mixture of roles to steer the LLM to your liking.\n\nThe available roles for messages are:\n\n- `user` - Pass prompts as the user.\n- `system` - Give instructions as the system.\n- `assistant` - Act as the AI assistant yourself, and give the LLM lines.\n\nThe `prompt` parameter will always be appended to `messages` under the `user` role; to override this, pass in nothing for `prompt`.\n\n### Interrupting\n\n#### With Message History\n\nIf you are streaming messages with Cria, you can interrupt the response mid-way.\n\n```python\nresponse = \"\"\nmax_token_length = 5\n\nprompt = \"Who is the CEO of OpenAI?\"\nfor i, chunk in enumerate(ai.chat(prompt)):\n  if i \u003e= max_token_length:\n    ai.stop()\n  response += chunk\n\nprint(response) # The CEO of OpenAI is\n```\n\nIn the example, after the AI generates five tokens (the units of text that LLMs read and write), text generation is stopped via the `stop` method. After `stop` is called, you can safely `break` out of the `for` loop.\n\n#### Without Message History\n\nBy default, Cria automatically saves responses in message history, even if the stream is interrupted. 
To prevent this behaviour, pass in `allow_interruption=False`.\n\n```python\nai = cria.Cria(allow_interruption=False)\n\nresponse = \"\"\nmax_token_length = 5\n\nprompt = \"Who is the CEO of OpenAI?\"\nfor i, chunk in enumerate(ai.chat(prompt)):\n\n  if i \u003e= max_token_length:\n    ai.stop()\n    break\n\n  print(chunk, end=\"\") # The CEO of OpenAI is\n\nprompt = \"Tell me more about him.\"\nfor chunk in ai.chat(prompt):\n  print(chunk, end=\"\") # I apologize, but I don't have any information about \"him\" because the conversation just started...\n```\n\n### Multiple Models and Parallel Conversations\n\n#### Models\n\nIf you are running multiple models or parallel conversations, the `Model` class is also available. This is recommended for most use cases.\n\n```python\nimport cria\n\nai = cria.Model()\n\nprompt = \"Who is the CEO of OpenAI?\"\nresponse = ai.chat(prompt, stream=False)\nprint(response) # The CEO of OpenAI is Sam Altman.\n```\n\n_All methods that apply to the `Cria` class also apply to `Model`._\n\n#### With Model\n\nMultiple models can be run through a `with` statement. 
This automatically closes them after use.\n\n```python\nimport cria\n\nprompt = \"Who is the CEO of OpenAI?\"\n\nwith cria.Model(\"llama3\") as ai:\n  response = ai.chat(prompt, stream=False)\n  print(response) # OpenAI's CEO is Sam Altman, who also...\n\nwith cria.Model(\"llama2\") as ai:\n  response = ai.chat(prompt, stream=False)\n  print(response) # The CEO of OpenAI is Sam Altman.\n```\n\n#### Standalone Model\n\nOr, models can be run traditionally.\n\n```python\nimport cria\n\nprompt = \"Who is the CEO of OpenAI?\"\n\nllama3 = cria.Model(\"llama3\")\nresponse = llama3.chat(prompt, stream=False)\nprint(response) # OpenAI's CEO is Sam Altman, who also...\n\nllama2 = cria.Model(\"llama2\")\nresponse = llama2.chat(prompt, stream=False)\nprint(response) # The CEO of OpenAI is Sam Altman.\n\n# Not required, but best practice.\nllama3.close()\nllama2.close()\n```\n\n### Generate\n\nCria also has a `generate` method.\n\n```python\nprompt = \"Who is the CEO of OpenAI?\"\nfor chunk in ai.generate(prompt):\n    print(chunk, end=\"\") # The CEO of OpenAI (Open-source Artificial Intelligence) is Sam Altman.\n\nprompt = \"Tell me more about him.\"\nresponse = ai.generate(prompt, stream=False)\nprint(response) # I apologize, but I think there may have been some confusion earlier. As this...\n```\n\n### Running Standalone\n\nWhen you run `cria.Cria()`, an `ollama` instance will start up if one is not already running. 
When the program exits, this instance will terminate.\n\nHowever, if you want to save resources by not exiting `ollama`, either run your own `ollama` instance in another terminal, or run a managed subprocess.\n\n#### Running Your Own Ollama Instance\n\n```bash\nollama serve\n```\n\n```python\nprompt = \"Who is the CEO of OpenAI?\"\nwith cria.Model() as ai:\n    response = ai.generate(prompt, stream=False)\n    print(response)\n```\n\n#### Running A Managed Subprocess (Recommended)\n\n```python\n# If it is the first time you start the program, ollama will start automatically\n# If it is the second time (or subsequent times) you run the program, ollama will already be running\n\nai = cria.Cria(standalone=True, close_on_exit=False)\nprompt = \"Who is the CEO of OpenAI?\"\n\nwith cria.Model(\"llama2\") as llama2:\n    response = llama2.generate(prompt, stream=False)\n    print(response)\n\nwith cria.Model(\"llama3\") as llama3:\n    response = llama3.generate(prompt, stream=False)\n    print(response)\n\nquit()\n# Despite exiting, ollama will keep running, and be used the next time this program starts.\n```\n\n### Formatting\n\nTo format the output of the LLM, pass in the `format` keyword.\n\n```python\nai = cria.Cria()\n\nprompt = \"Return a JSON array of AI companies.\"\nresponse = ai.chat(prompt, stream=False, format=\"json\")\nprint(response) # [\"OpenAI\", \"Anthropic\", \"Meta\", \"Google\", \"Cohere\", ...].\n```\n\nThe currently supported formats are:\n\n* JSON\n\n## Contributing\n\nIf you have a feature request, feel free to make an issue!\n\nContributions are highly appreciated.\n\n## 
License\n\n[MIT](./LICENSE.md)\n","funding_links":["https://ko-fi.com/anonyon"],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fleftmove%2Fcria","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fleftmove%2Fcria","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fleftmove%2Fcria/lists"}