{"id":13593786,"url":"https://github.com/Forethought-Technologies/AutoChain","last_synced_at":"2025-04-09T05:32:17.059Z","repository":{"id":179753317,"uuid":"642942708","full_name":"Forethought-Technologies/AutoChain","owner":"Forethought-Technologies","description":"AutoChain: Build lightweight, extensible, and testable LLM Agents","archived":false,"fork":false,"pushed_at":"2024-05-23T18:19:29.000Z","size":1032,"stargazers_count":1846,"open_issues_count":24,"forks_count":101,"subscribers_count":11,"default_branch":"main","last_synced_at":"2025-04-07T16:16:26.539Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://autochain.forethought.ai","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Forethought-Technologies.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-05-19T17:49:27.000Z","updated_at":"2025-04-04T09:52:40.000Z","dependencies_parsed_at":"2024-01-06T21:44:20.956Z","dependency_job_id":"dec68219-8c04-45ec-a846-12fdfcebceea","html_url":"https://github.com/Forethought-Technologies/AutoChain","commit_stats":null,"previous_names":["forethought-technologies/autochain"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Forethought-Technologies%2FAutoChain","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Forethought-Technologies%2FAutoChain/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Forethought-Technologies%2FAutoChain/releases","manifests_url":
"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Forethought-Technologies%2FAutoChain/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Forethought-Technologies","download_url":"https://codeload.github.com/Forethought-Technologies/AutoChain/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247986922,"owners_count":21028890,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T16:01:24.515Z","updated_at":"2025-04-09T05:32:16.489Z","avatar_url":"https://github.com/Forethought-Technologies.png","language":"Python","readme":"# AutoChain\n\nLarge language models (LLMs) have shown huge success in different text generation tasks and\nenable developers to build generative agents based on objectives expressed in natural language.\n\nHowever, most generative agents require heavy customization for specific purposes, and\nsupporting different use cases can sometimes be overwhelming using existing tools\nand frameworks. 
As a result, it is still very challenging to build a custom generative agent.\n\nIn addition, evaluating such generative agents, which is usually done by manually trying different\nscenarios, is a repetitive and expensive task.\n\nAutoChain takes inspiration from LangChain and AutoGPT and aims to solve\nboth problems by providing a lightweight and extensible framework\nfor developers to build their own agents using LLMs with custom tools and\n[automatically evaluating](#workflow-evaluation) different user scenarios with simulated\nconversations. Experienced users of LangChain will find AutoChain easy to navigate, since\nthe two share similar but simpler concepts.\n\nThe goal is to enable rapid iteration on generative agents, both by simplifying agent customization\nand evaluation.\n\nIf you have any questions, please feel free to reach out to Yi Lu \u003cyi.lu@forethought.ai\u003e\n\n## Features\n\n- 🚀 lightweight and extensible generative agent pipeline\n- 🔗 agent that can use different custom tools and\n  support OpenAI [function calling](https://platform.openai.com/docs/guides/gpt/function-calling)\n- 💾 simple memory tracking for conversation history and tools' outputs\n- 🤖 automated agent multi-turn conversation evaluation with simulated conversations\n\n## Setup\n\nQuick install\n\n```shell\npip install autochain\n```\n\nOr install from source after cloning this repository\n\n```shell\ncd autochain\npyenv virtualenv 3.10.11 venv\npyenv local venv\n\npip install .\n```\n\nSet `PYTHONPATH` and `OPENAI_API_KEY`\n\n```shell\nexport OPENAI_API_KEY=\nexport PYTHONPATH=`pwd`\n```\n\nRun your first conversation with the agent interactively\n\n```shell\npython autochain/workflows_evaluation/conversational_agent_eval/generate_ads_test.py -i\n```\n\n## How does AutoChain simplify building agents?\n\nAutoChain aims to provide a lightweight framework and simplifies the agent-building process in a few\nways compared to existing frameworks:\n\n1. 
Easy prompt update  \n   Engineering and iterating over prompts is a crucial part of building generative\n   agents. AutoChain makes it easy to update prompts and visualize prompt\n   outputs. Run with the `-v` flag to print verbose prompts and outputs to the console.\n2. Up to 2 layers of abstraction  \n   As part of enabling rapid iteration, AutoChain chooses to remove most of the\n   abstraction layers found in alternative frameworks.\n3. Automated multi-turn evaluation  \n   Evaluation is the most painful and ill-defined part of building generative agents. Updating the\n   agent to perform better in one scenario often causes regressions in other use cases. AutoChain\n   provides a testing framework to automatically evaluate the agent's ability under different\n   user scenarios.\n\n## Example usage\n\nIf you have experience with LangChain, you already know 80% of the AutoChain interfaces.\n\nAutoChain aims to make building custom generative agents as straightforward as possible, with as\nlittle abstraction as possible.\n\nThe most basic example uses the default chain and `ConversationalAgent`:\n\n```python\nfrom autochain.chain.chain import Chain\nfrom autochain.memory.buffer_memory import BufferMemory\nfrom autochain.models.chat_openai import ChatOpenAI\nfrom autochain.agent.conversational_agent.conversational_agent import ConversationalAgent\n\nllm = ChatOpenAI(temperature=0)\nmemory = BufferMemory()\nagent = ConversationalAgent.from_llm_and_tools(llm=llm)\nchain = Chain(agent=agent, memory=memory)\n\nprint(chain.run(\"Write me a poem about AI\")['message'])\n```\n\nJust like in LangChain, you can add a list of tools to the agent.\n\n```python\nfrom autochain.tools.base import Tool\n\ntools = [\n    Tool(\n        name=\"Get weather\",\n        func=lambda *args, **kwargs: \"Today is a sunny day\",\n        description=\"\"\"This function returns the weather information\"\"\"\n    )\n]\n\nmemory = BufferMemory()\nagent = ConversationalAgent.from_llm_and_tools(llm=llm, tools=tools)\nchain = Chain(agent=agent, 
memory=memory)\nprint(chain.run(\"What is the weather today\")['message'])\n```\n\nAutoChain also adds support\nfor [function calling](https://platform.openai.com/docs/guides/gpt/function-calling)\nin OpenAI models. Behind the scenes, it turns the function spec into the OpenAI format without explicit\ninstruction, so you can keep following the same `Tool` interface you are familiar with.\n\n```python\nfrom autochain.agent.openai_functions_agent.openai_functions_agent import OpenAIFunctionsAgent\n\nllm = ChatOpenAI(temperature=0)\nagent = OpenAIFunctionsAgent.from_llm_and_tools(llm=llm, tools=tools)\n```\n\nSee [more examples](./docs/examples.md) under `autochain/examples` and [workflow\nevaluation](./docs/workflow-evaluation.md) test cases, which can also be run interactively.\n\nRead more in the detailed [components overview](./docs/components_overview.md).\n\n## Workflow Evaluation\n\nIt is notoriously hard to evaluate generative agents built with LangChain or AutoGPT. An agent's behavior\nis nondeterministic and susceptible to small changes to the prompt or model. As such, it is\nhard to know what effects an update to the agent will have on all relevant use cases.\n\nThe usual path for\nevaluation is running the agent through a large number of preset queries and evaluating the\ngenerated responses. However, that approach is limited to single-turn conversations, is generic rather than\ntask-specific, and is expensive to verify.\n\nTo facilitate agent evaluation, AutoChain introduces the workflow evaluation framework. This\nframework runs conversations between a generative agent and LLM-simulated test users. The test\nusers incorporate various user contexts and desired conversation outcomes, which enables easy\naddition of test cases for new user scenarios and fast evaluation. 
The framework leverages LLMs to\nevaluate whether a given multi-turn conversation has achieved the intended outcome.\n\nRead more about our [evaluation strategy](./docs/workflow-evaluation.md).\n\n### How to run workflow evaluations\n\nYou can either run your tests in interactive mode or run the full suite of test cases at once.\n`autochain/workflows_evaluation/conversational_agent_eval/generate_ads_test.py` contains a few\nexample test cases.\n\nTo run all the cases defined in a test file:\n\n```shell\npython autochain/workflows_evaluation/conversational_agent_eval/generate_ads_test.py\n```\n\nTo run your tests interactively, add the `-i` flag:\n\n```shell\npython autochain/workflows_evaluation/conversational_agent_eval/generate_ads_test.py -i\n```\n\nLooking for more details on how AutoChain works? See\nour [components overview](./docs/components_overview.md).\n","funding_links":[],"categories":["Python","Frameworks","Summary","其他LLM框架","Other LLM Frameworks","Agent Integration \u0026 Deployment Tools","Agent Categories","Agents 开发平台"],"sub_categories":["Advanced Components","文章","Videos Playlists","AI Agent Deployment","\u003ca name=\"Unclassified\"\u003e\u003c/a\u003eUnclassified"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FForethought-Technologies%2FAutoChain","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FForethought-Technologies%2FAutoChain","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FForethought-Technologies%2FAutoChain/lists"}