Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/mpaepper/llm_agents
Build agents which are controlled by LLMs
deep-learning langchain llms machine-learning
- Host: GitHub
- URL: https://github.com/mpaepper/llm_agents
- Owner: mpaepper
- License: mit
- Created: 2023-04-04T21:25:35.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2025-01-03T10:33:32.000Z (23 days ago)
- Last Synced: 2025-01-18T21:37:42.654Z (7 days ago)
- Topics: deep-learning, langchain, llms, machine-learning
- Language: Python
- Homepage: https://www.paepper.com/blog/posts/intelligent-agents-guided-by-llms/
- Size: 16.6 KB
- Stars: 951
- Watchers: 10
- Forks: 70
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-langchain-zh - LLM Agents
- awesome-langchain - LLM Agents
- awesome-agents - LLM Agents
README
## LLM Agents
A small library to build agents controlled by large language models (LLMs), heavily inspired by langchain.
The goal was to get a better grasp of how such an agent works and understand it all in very few lines of code.
Langchain is great, but it already has a few more files and abstraction layers, so I thought it would be nice to build the most important parts of a simple agent from scratch.
More details are in this Hacker News discussion from April 5th, 2023 and the related blog post.
### How it works
The agent works like this:
* It gets instructed by a prompt which tells it the basic way to solve a task using tools
* Tools are custom-built components which the agent can use
* So far, I've implemented the ability to execute Python code in a REPL, to use Google search, and to search on Hacker News
* The agent runs in a loop of Thought, Action, Observation, Thought, ...
* The Thought and Action (with the Action Input to the action) are the parts which are generated by an LLM
* The Observation is generated by using a tool (for example the print outputs of Python or the text result of a Google search)
* The LLM gets the new information appended to the prompt in each loop cycle and thus can act on that information
* Once the agent has enough information, it provides the final answer

For more details on how it works, check out this blog post; a rough sketch of the loop is shown below.
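To make the Thought/Action/Observation loop concrete, here is a minimal, self-contained sketch. It is not the library's actual implementation; `call_llm`, `run_agent`, and the `tools` dict are stand-ins for illustration only:

```python
import re

def call_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call; here it always answers directly.
    return "Thought: I can answer directly.\nFinal Answer: 42"

def run_agent(question: str, tools: dict, max_steps: int = 5) -> str:
    prompt = f"Answer the question, using these tools if needed: {list(tools)}.\nQuestion: {question}\n"
    for _ in range(max_steps):
        generation = call_llm(prompt)  # the LLM produces the Thought and Action
        if "Final Answer:" in generation:
            return generation.split("Final Answer:")[-1].strip()
        match = re.search(r"Action: (.*)\nAction Input: (.*)", generation)
        if match is None:
            prompt += generation + "\n"
            continue
        tool_name, tool_input = match.group(1).strip(), match.group(2).strip()
        observation = tools[tool_name](tool_input)  # the Observation comes from running a tool
        prompt += f"{generation}\nObservation: {observation}\n"  # append so the next Thought can use it
    return "No final answer within the step limit"

print(run_agent("What is 6 * 7?", tools={}))
```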
### How to use it
You can install this library locally by running:
```
pip install -r requirements.txt
pip install -e .
```
inside its directory after cloning it.
You also need to provide the following env variables:
* `OPENAI_API_KEY` to use the OpenAI API (obtainable at: https://platform.openai.com/account/api-keys)
* `SERPAPI_API_KEY` to use the Google search in case you use that tool (obtainable at: https://serpapi.com/)

You can simply export them in bash like: `export OPENAI_API_KEY='sk-lsdf....'`
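If you want to verify the keys are set before starting, a small optional check like the following works (this is not part of the library, just a convenience sketch):

```python
import os

# Illustrative check: fail early if a required key is missing from the environment.
for var in ("OPENAI_API_KEY", "SERPAPI_API_KEY"):
    if not os.environ.get(var):
        raise SystemExit(f"Please export {var} before running the agent.")
```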
Then you can run the script `python run_agent.py` and ask your question.
To construct your own agent, do it like this:
```python
from llm_agents import Agent, ChatLLM, PythonREPLTool, HackerNewsSearchTool, SerpAPITool

agent = Agent(llm=ChatLLM(), tools=[PythonREPLTool(), SerpAPITool(), HackerNewsSearchTool()])
result = agent.run("Your question to the agent")
print(f"Final answer is {result}")
```
Of course, you can also build your own custom tools or omit tools, for example if you don't want to create a SerpAPI key.
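If you do want to add a custom tool, a minimal hypothetical sketch is below. The base class and method names used by llm_agents are not shown in this README, so treat the shape here as an assumption rather than the library's API:

```python
# Hypothetical custom tool; the exact interface expected by llm_agents is an assumption here.
class WordCountTool:
    name = "Word Counter"
    description = "Counts the words in the given input text."

    def use(self, input_text: str) -> str:
        # Whatever string is returned here would become the agent's Observation.
        return f"The input contains {len(input_text.split())} words."

print(WordCountTool().use("how many words are in this sentence"))
```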