https://github.com/JoongWonSeo/agentools

Essentials for LLM-based assistants and agents using OpenAI and function tools

README
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# [AgenTools](https://github.com/JoongWonSeo/agentools) - Async Generator Tools for LLMs\n",
"\n",
"A simple set of modules, wrappers and utils that are essential for LLM-based assistants and agents using the OpenAI API and function tools. It is useful for:\n",
"\n",
"- **OpenAI API:** Simple wrapper for the OpenAI API to provide mocked endpoints for easy testing without costing money, accumulating the delta chunks from streamed responses into partial responses, and easier token counting/tracking.\n",
"- **Function Tools:** Easily convert any (async) python function into a function tool that the LLM model can call, with automatic validation and retrying with error messages.\n",
"- **Structured Data:** Easily define a Pydantic model that can be generated by the LLM model, also with validation and retries.\n",
"- **Assistants:** Event-based architecture with async generators that yield events that you can iterate through and handle only the events you care about, such as whether you want to stream the response or not, cancel the generation prematurely, or wait for user input (human-in-the-loop) before continuing, etc.\n",
"- **Copilots:** Integrate right into an editor with stateful system messages to allow the copilot to see the latest state of the editor and function tools to interact with the editor.\n",
"\n",
"**Yet to come:**\n",
"\n",
"- **Agents:** Autoprompting, self-prompting, chain-of-thought, sketchpads, memory management, planning, and more.\n",
"- **Multi-Agents**: Communication channels, organization structuring, and more.\n",
"\n",
"## Quick Start\n",
"\n",
"### Installation\n",
"\n",
"```bash\n",
"pip install agentools\n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Assistant and ChatGPT\n",
"\n",
"A high-level interface to use ChatGPT or other LLM-based assistants! The default implementation of ChatGPT has:\n",
"\n",
"- a message history to remember the conversation so far (including the system prompt)\n",
"- ability to use tools\n",
"- efficient async streaming support\n",
"- simple way to customize/extend/override the default behavior\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from agentools import *\n",
"\n",
"# empty chat history and default model (gpt-3.5)\n",
"model = ChatGPT()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can then simply call the model as if it was a function, with a prompt:\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hello! How can I assist you today?'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await model(\"Hey!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the model is async and it simply returns the resonse as a string.\n",
"\n",
"Both your prompt and the response are stored in the history, so you can keep calling the model with new prompts and it will remember the conversation so far.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Of course! You said, \"Hey!\"'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await model(\"Can you repeat my last message please?\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'role': 'user', 'content': 'Hey!'},\n",
" {'content': 'Hello! How can I assist you today?', 'role': 'assistant'},\n",
" {'role': 'user', 'content': 'Can you repeat my last message please?'},\n",
" {'content': 'Of course! You said, \"Hey!\"', 'role': 'assistant'}]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.messages.history"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### System prompt and more on `MessageHistory`\n",
"\n",
"Notice that our model has no system prompt in the beginning. `ChatGPT`'s constructor by default creates an empty chat history, but you can explicitly create a `MessageHistory` object and pass it to the constructor:\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"I love cats!\n",
"I like both cats and dogs!\n"
]
}
],
"source": [
"translate = ChatGPT(\n",
" messages=SimpleHistory.system(\"Translate the user message to English\")\n",
")\n",
"# SimpleHistory.system(s) is just shorthand for SimpleHistory([msg(system=s)])\n",
"\n",
"print(await translate(\"Ich liebe Katzen!\"))\n",
"print(await translate(\"고양이랑 강아지 둘다 좋아!\"))"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'role': 'system', 'content': 'Translate the user message to English'},\n",
" {'role': 'user', 'content': 'Ich liebe Katzen!'},\n",
" {'content': 'I love cats!', 'role': 'assistant'},\n",
" {'role': 'user', 'content': '고양이랑 강아지 둘다 좋아!'},\n",
" {'content': 'I like both cats and dogs!', 'role': 'assistant'}]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"translate.messages.history"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that here, we're wasting tokens by remembering the chat history, since it's not really a conversation. There's a simple `GPT` class, which simply resets the message history after each prompt:\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'role': 'system', 'content': 'Translate the user message to English'}]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"translate = GPT(messages=SimpleHistory.system(\"Translate the user message to English\"))\n",
"\n",
"await translate(\"Ich liebe Katzen!\")\n",
"await translate(\"고양이랑 강아지 둘다 좋아!\")\n",
"\n",
"translate.messages.history"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### OpenAI API: changing the model and mocked API\n",
"\n",
"You can set the default model in the constructor, or override it for each prompt:\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hello, world!'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# default model is now gpt-4 💸\n",
"model = ChatGPT(model=\"gpt-4\")\n",
"\n",
"# but you can override it for each prompt anyways\n",
"await model(\"Heyo!\", model=\"mocked\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you see, our wrapper provides a simple mocked \"model\", which will simply return `\"Hello, world!\"` for any prompt, with some simulated latency. This will also work with streaming responses, and in either cases, you won't be able to tell the difference between the real API and the mocked one.\n",
"\n",
"There are more mocked models for your convinience:\n",
"\n",
"- `mocked`: always returns `\"Hello, world!\"`\n",
"- `mocked:TEST123`: returns the string after the colon, e.g. `\"TEST123\"`\n",
"- `echo`: returns the user prompt itself\n",
"\n",
"Let's print all events to the console to take a peek at the event-based generator:\n"
]
},
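{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, a quick sanity check of the mocked models might look like this (a sketch, not executed here; it assumes exactly the behavior described in the list above):\n",
"\n",
"```python\n",
"# `mocked:...` should return whatever follows the colon\n",
"assert await model(\"anything\", model=\"mocked:TEST123\") == \"TEST123\"\n",
"# `echo` should return the prompt itself\n",
"assert await model(\"ping\", model=\"echo\") == \"ping\"\n",
"```\n",
"\n",
"Let's print all events to the console to take a peek at the event-based generator:\n"
]
},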
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[ResponseStartEvent]: prompt=Heya!, tools=None, model=echo, max_function_calls=100, openai_kwargs={}\n",
"[CompletionStartEvent]: call_index=0\n",
"[CompletionEvent]: completion=ChatCompletion(id='mock', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Heya!', role='assistant', function_call=None, tool_calls=None))], created=1721161834, model='mock', object='chat.completion', service_tier=None, system_fingerprint=None, usage=None), call_index=0\n",
"[FullMessageEvent]: message=ChatCompletionMessage(content='Heya!', role='assistant', function_call=None, tool_calls=None), choice_index=0\n",
"[TextMessageEvent]: content=Heya!\n",
"[ResponseEndEvent]: content=Heya!\n"
]
},
{
"data": {
"text/plain": [
"'Heya!'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await model(\"Heya!\", model=\"echo\", event_logger=print)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Wow, quite a lot going on for a simple prompt! While it might seem like too many events, this offers a lot of flexibility and customizability.\n",
"\n",
"You can easily handle only the events you are interested in, useful when e.g:\n",
"\n",
"- updating the frontend when streaming the responses,\n",
"- cancelling the generation early,\n",
"- or implementing human-in-the-loop for function calls.\n",
"\n",
"For instance, the `GPT` class from above is as simple as:\n",
"\n",
"```python\n",
"async for event in self.response_events(prompt, **openai_kwargs):\n",
" match event:\n",
" case self.ResponseEndEvent():\n",
" await self.messages.reset()\n",
" return event.content\n",
"```\n",
"\n",
"This generator-based architecture is a good balance between flexibility and simplicity!\n",
"\n",
"While we won't go deeper into the low-level API in this quickstart, you can look at the `advanced.ipynb` notebook for more details.\n"
]
},
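{
"cell_type": "markdown",
"metadata": {},
"source": [
"As another illustration, a custom subclass that only reacts to text messages might look like the following sketch (it assumes `TextMessageEvent` is exposed on the class like `ResponseEndEvent` in the snippet above, and that both carry a `content` attribute, as the event log suggests):\n",
"\n",
"```python\n",
"class PrintingGPT(ChatGPT):\n",
"    async def __call__(self, prompt: str, **openai_kwargs):\n",
"        async for event in self.response_events(prompt, **openai_kwargs):\n",
"            match event:\n",
"                case self.TextMessageEvent():\n",
"                    print(f\"[message] {event.content}\")  # react only to full text messages\n",
"                case self.ResponseEndEvent():\n",
"                    return event.content  # final response as a string\n",
"```\n"
]
},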
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Tools: `@function_tool`\n",
"\n",
"You can turn any function into a tool usable by the model by decorating it with `@function_tool`:\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello from python!\n"
]
},
{
"data": {
"text/plain": [
"'success'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"@function_tool\n",
"def print_to_console(text: str) -> str:\n",
" \"\"\"\n",
" Print text to console\n",
"\n",
" Args:\n",
" text: text to print\n",
" \"\"\"\n",
" print(text)\n",
" return \"success\" # the model will see the return value\n",
"\n",
"\n",
"# normal call\n",
"print_to_console(\"Hello from python!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can use the tool from python as you normally would, and the model will also be able to use it simply by passing it to the `tools` parameter during init (as default) or prompting it (as a one-off).\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"hello from GPT\n"
]
},
{
"data": {
"text/plain": [
"'The message \"hello from GPT\" has been successfully printed to the console.'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model = ChatGPT(tools=print_to_console)\n",
"await model(\"Say 'hello from GPT' to console!\")"
]
},
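{
"cell_type": "markdown",
"metadata": {},
"source": [
"The one-off form passes the tool at prompt time instead of at init; as a sketch (assuming `tools` is also accepted per prompt, as the `ResponseStartEvent` log above suggests):\n",
"\n",
"```python\n",
"model = ChatGPT()  # no default tools\n",
"await model(\"Say 'hello from GPT' to console!\", tools=print_to_console)\n",
"```\n"
]
},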
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the function a `@function_tool`, you must do the following:\n",
"\n",
"- The parameters must be type annotated, and all parameters must be JSON-serializable (e.g. `str`, `int`, `float`, `bool`, `list`, `dict`, `None`, etc).\n",
"- The return type should be a `str` or something that can be converted to a `str`.\n",
"- It must be documented with a `'''docstring'''`, including each parameter (most [formats supported](https://github.com/rr-/docstring_parser), e.g. [Google-style](https://gist.github.com/redlotus/3bc387c2591e3e908c9b63b97b11d24e#file-docstrings-py-L67), [NumPy-style](https://gist.github.com/eikonomega/910512d92769b0cc382a09ae4de41771), sphinx-style, etc, see [this overview](https://gist.github.com/nipunsadvilkar/fec9d2a40f9c83ea7fd97be59261c400))\n"
]
},
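{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, the same kind of tool documented NumPy-style instead of Google-style would look like this sketch (the `repeat` tool is made up for illustration):\n",
"\n",
"```python\n",
"@function_tool\n",
"def repeat(text: str, times: int) -> str:\n",
"    \"\"\"\n",
"    Repeat text multiple times.\n",
"\n",
"    Parameters\n",
"    ----------\n",
"    text : str\n",
"        The text to repeat.\n",
"    times : int\n",
"        How many times to repeat it.\n",
"    \"\"\"\n",
"    return text * times\n",
"```\n"
]
},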
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Showing off some more goodies:\n",
"\n",
"- Even async functions should seamlessly work, just don't forget to `await` them.\n",
"- `@fail_with_message(err)` is a decorator that will catch any exceptions thrown by the function and instead return the error message. This is useful for when you want to handle errors in a more graceful way than just crashing the model. It also takes an optional logger, which by default takes the `print` function, but any callable that takes a string will work, such as `logger.error` from the `logging` module.\n",
"- Usually, the `@function_tool` decorator will throw an assertion error if you forget to provide the description for any of the function or their parameters. If you really don't want to provide descriptions for some (or all), maybe because it's so self-explanatory or you need to save tokens, then you can explicitly turn off the docstring parsing by passing `@function_tool(check_description=False)`. This is not recommended, but it's there if you need it.\n",
"\n",
"Note that by returning descriptive error strings, the model can read the error message and retry, increasing the robustness!\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"ERROR:root:Tool call fib(-10) failed: n must be >= 0\n"
]
},
{
"data": {
"text/plain": [
"'Error: n must be >= 0'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import asyncio\n",
"import logging\n",
"\n",
"\n",
"@function_tool(name=\"Fibonacci\", require_doc=False)\n",
"@fail_with_message(\"Error\", logger=logging.error)\n",
"async def fib(n: int):\n",
" if n < 0:\n",
" raise ValueError(\"n must be >= 0\")\n",
" if n < 2:\n",
" return n\n",
"\n",
" await asyncio.sleep(0.1)\n",
" return sum(await asyncio.gather(fib(n - 1), fib(n - 2)))\n",
"\n",
"\n",
"await fib(-10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Toolkits: `class Toolkit`\n",
"\n",
"Toolkits are a collection of related function tools, esp. useful when they share a state. Also good for keeping the state bound to a single instance of the toolkit, rather than a global state.\n",
"To create a toolkit, simply subclass `Toolkit` and decorate its methods with `@function_tool`.\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Shhh... here's a secret: 42\""
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"class Notepad(Toolkit):\n",
" def __init__(self):\n",
" super().__init__()\n",
" self.content = \"\"\n",
"\n",
" @function_tool\n",
" def write(self, text: str):\n",
" \"\"\"\n",
" Write text to the notepad\n",
"\n",
" Args:\n",
" text: The text to write\n",
" \"\"\"\n",
" self.content = text\n",
"\n",
" @function_tool(require_doc=False)\n",
" def read(self):\n",
" return self.content\n",
"\n",
"\n",
"notes = Notepad()\n",
"notes.write(\"Shhh... here's a secret: 42\")\n",
"notes.read()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As before, simply pass the toolkit to the model. To use multiple tools and toolkits, simply put them in a list:\n"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'On your notepad, it says: \"Shhh... here\\'s a secret: 42\"'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model = ChatGPT(\n",
" tools=[notes, print_to_console, fib],\n",
")\n",
"\n",
"await model(\"What's on my notepad?\")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[ToolCallsEvent]: tool_calls=[ChatCompletionMessageToolCall(id='call_wxaisBbFMYRa0XNcTnP9MH1b', function=Function(arguments='{\"n\":8}', name='Fibonacci'), type='function')]\n",
"[ToolResultEvent]: result=21, tool_call=ChatCompletionMessageToolCall(id='call_wxaisBbFMYRa0XNcTnP9MH1b', function=Function(arguments='{\"n\":8}', name='Fibonacci'), type='function'), index=0\n",
"[ToolCallsEvent]: tool_calls=[ChatCompletionMessageToolCall(id='call_gt5ZnA5v2VJL5R2gyPeHRN0a', function=Function(arguments='{\"text\":\"The sum of the 8th Fibonacci number (21) and the number on your notepad (42) is 63\"}', name='write'), type='function')]\n",
"[ToolResultEvent]: result=None, tool_call=ChatCompletionMessageToolCall(id='call_gt5ZnA5v2VJL5R2gyPeHRN0a', function=Function(arguments='{\"text\":\"The sum of the 8th Fibonacci number (21) and the number on your notepad (42) is 63\"}', name='write'), type='function'), index=0\n",
"[ToolCallsEvent]: tool_calls=[ChatCompletionMessageToolCall(id='call_ErYx6g7gpVTnLsqg59oxHI9C', function=Function(arguments='{\"text\":\"The sum of the 8th Fibonacci number (21) and the number on your notepad (42) is 63\"}', name='print_to_console'), type='function')]\n",
"The sum of the 8th Fibonacci number (21) and the number on your notepad (42) is 63\n",
"[ToolResultEvent]: result=success, tool_call=ChatCompletionMessageToolCall(id='call_ErYx6g7gpVTnLsqg59oxHI9C', function=Function(arguments='{\"text\":\"The sum of the 8th Fibonacci number (21) and the number on your notepad (42) is 63\"}', name='print_to_console'), type='function'), index=0\n"
]
},
{
"data": {
"text/plain": [
"'I have written on your notepad. The sum of the 8th Fibonacci number (21) and the number on your notepad (42) is 63. I have also printed it to the console.'"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await model(\n",
" \"Can you calculate the 8th fibonacci number, add it to the number in my notes, and write it? also print it to console as well.\",\n",
" event_logger=lambda x: print(x) if x.startswith(\"[Tool\") else None,\n",
" parallel_tool_calls=False,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The sum of the 8th Fibonacci number (21) and the number on your notepad (42) is 63'"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"notes.read()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice how since our `write` function doesn't return anything, it defaults to `None` and our model gets confused! So don't forget to return an encouraging success message to make our model happy :)\n"
]
},
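{
"cell_type": "markdown",
"metadata": {},
"source": [
"A friendlier version of `write` might therefore look like this sketch:\n",
"\n",
"```python\n",
"class Notepad(Toolkit):\n",
"    ...\n",
"\n",
"    @function_tool\n",
"    def write(self, text: str):\n",
"        \"\"\"\n",
"        Write text to the notepad\n",
"\n",
"        Args:\n",
"            text: The text to write\n",
"        \"\"\"\n",
"        self.content = text\n",
"        return \"Successfully wrote to the notepad!\"  # an explicit result keeps the model on track\n",
"```\n"
]
},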
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Tool Previews\n",
"\n",
"When using streaming, and you're using function tools with a long input, you might want to preview the tool's output before it's fully processed. With the help of the `json_autocomplete` package, the JSON argument generated by the model can be parsed before it's fully complete, and the preview can be shown to the user.\n"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"@function_tool(require_doc=False)\n",
"async def create_slogan(title: str, content: str):\n",
" print(f\"\\n\\n[Final Slogan] {title}: {content}\")\n",
" return \"Slogan created and shown to user! Simply tell the user that it was created.\"\n",
"\n",
"\n",
"@create_slogan.preview\n",
"async def preview(title: str = \"\", content: str = \"\"):\n",
" assert isinstance(title, str) and isinstance(content, str)\n",
" print(f\"[Preview] {title}: {content}\", flush=True)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[Preview] : \n",
"[Preview] D: \n",
"[Preview] Ducks: \n",
"[Preview] Ducks and: \n",
"[Preview] Ducks and Debug: \n",
"[Preview] Ducks and Debugging: \n",
"[Preview] Ducks and Debugging: \n",
"[Preview] Ducks and Debugging: Qu\n",
"[Preview] Ducks and Debugging: Quack\n",
"[Preview] Ducks and Debugging: Quack your\n",
"[Preview] Ducks and Debugging: Quack your code\n",
"[Preview] Ducks and Debugging: Quack your code bugs\n",
"[Preview] Ducks and Debugging: Quack your code bugs away\n",
"[Preview] Ducks and Debugging: Quack your code bugs away with\n",
"[Preview] Ducks and Debugging: Quack your code bugs away with the\n",
"[Preview] Ducks and Debugging: Quack your code bugs away with the help\n",
"[Preview] Ducks and Debugging: Quack your code bugs away with the help of\n",
"[Preview] Ducks and Debugging: Quack your code bugs away with the help of a\n",
"[Preview] Ducks and Debugging: Quack your code bugs away with the help of a debugging\n",
"[Preview] Ducks and Debugging: Quack your code bugs away with the help of a debugging duck\n",
"[Preview] Ducks and Debugging: Quack your code bugs away with the help of a debugging duck by\n",
"[Preview] Ducks and Debugging: Quack your code bugs away with the help of a debugging duck by your\n",
"[Preview] Ducks and Debugging: Quack your code bugs away with the help of a debugging duck by your side\n",
"[Preview] Ducks and Debugging: Quack your code bugs away with the help of a debugging duck by your side.\n",
"[Preview] Ducks and Debugging: Quack your code bugs away with the help of a debugging duck by your side.\n",
"\n",
"\n",
"[Final Slogan] Ducks and Debugging: Quack your code bugs away with the help of a debugging duck by your side.\n"
]
},
{
"data": {
"text/plain": [
"'I have created a slogan about how ducks can help with debugging!'"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model = ChatGPT(tools=create_slogan)\n",
"await model(\n",
" \"Create a 1-sentence slogan about how ducks can help with debugging.\", stream=True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you need a more coherent logic shared between the `@preview` and the final `@function_tool`, e.g. do something at the start of the function call, share some data between previews, etc... It gets messy very fast!\n",
"\n",
"Instead, you can use the `@streaming_function_tool()` decorator, which receives a single `arg_stream` parameter, which is an async generator that yields the partial arguments, as streamed from the model. Therefore, you simply need to iterate through it, and perform the actual function call at the end of the iteration. The following is the equivalent of the previous example:\n",
"\n",
"> _Note that currently, you must pass the parameter as a `schema` (either JSON Schema or Pydantic BaseModel)._\n"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class Slogan(BaseModel):\n",
" \"\"\"A slogan for a product\"\"\"\n",
"\n",
" title: str = Field(description=\"MUST BE EXACTLY 3 WORDS!\")\n",
" content: str = Field(description=\"less than 10 words\")\n",
"\n",
"\n",
"@streaming_function_tool(schema=Slogan)\n",
"async def create_slogan(arg_stream):\n",
" print(\"Starting slogan creation...\")\n",
"\n",
" async for args in arg_stream:\n",
" title, content = args.get(\"title\", \"\"), args.get(\"content\", \"\")\n",
" print(f'{args} -> \"{title}\", \"{content}\"', flush=True)\n",
"\n",
" print(f\"\\n\\n[Final Slogan] {title}: {content}\")\n",
" return \"Slogan created and shown to user! Simply tell the user that it was created.\""
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Starting slogan creation...\n",
"{'': None} -> \"\", \"\"\n",
"{'title': None} -> \"None\", \"\"\n",
"{'title': ''} -> \"\", \"\"\n",
"{'title': 'Debug'} -> \"Debug\", \"\"\n",
"{'title': 'Debugging'} -> \"Debugging\", \"\"\n",
"{'title': 'Debugging Ducks'} -> \"Debugging Ducks\", \"\"\n",
"{'title': 'Debugging Ducks', '': None} -> \"Debugging Ducks\", \"\"\n",
"{'title': 'Debugging Ducks', 'content': None} -> \"Debugging Ducks\", \"None\"\n",
"{'title': 'Debugging Ducks', 'content': ''} -> \"Debugging Ducks\", \"\"\n",
"{'title': 'Debugging Ducks', 'content': 'Qu'} -> \"Debugging Ducks\", \"Qu\"\n",
"{'title': 'Debugging Ducks', 'content': 'Quack'} -> \"Debugging Ducks\", \"Quack\"\n",
"{'title': 'Debugging Ducks', 'content': 'Quack through'} -> \"Debugging Ducks\", \"Quack through\"\n",
"{'title': 'Debugging Ducks', 'content': 'Quack through errors'} -> \"Debugging Ducks\", \"Quack through errors\"\n",
"{'title': 'Debugging Ducks', 'content': 'Quack through errors effortlessly'} -> \"Debugging Ducks\", \"Quack through errors effortlessly\"\n",
"{'title': 'Debugging Ducks', 'content': 'Quack through errors effortlessly.'} -> \"Debugging Ducks\", \"Quack through errors effortlessly.\"\n",
"{'title': 'Debugging Ducks', 'content': 'Quack through errors effortlessly.'} -> \"Debugging Ducks\", \"Quack through errors effortlessly.\"\n",
"{'title': 'Debugging Ducks', 'content': 'Quack through errors effortlessly.'} -> \"Debugging Ducks\", \"Quack through errors effortlessly.\"\n",
"\n",
"\n",
"[Final Slogan] Debugging Ducks: Quack through errors effortlessly.\n"
]
},
{
"data": {
"text/plain": [
"'I have created a slogan: \"Debugging Ducks - Quack through errors effortlessly.\"'"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model = ChatGPT(tools=create_slogan)\n",
"await model(\n",
" \"Create a 1-sentence slogan about how ducks can help with debugging.\", stream=True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Structured Data\n",
"\n",
"We can very easily define a Pydantic model that can be generated by the LLM model, with validation and retries:\n"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Song(title='Hello', genres=['pop'], duration=3.5, language=, has_lyrics=True)"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from enum import StrEnum\n",
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class Language(StrEnum):\n",
" EN = \"en\"\n",
" DE = \"de\"\n",
" KO = \"ko\"\n",
"\n",
"\n",
"class Song(BaseModel):\n",
" title: str\n",
" genres: list[str] = Field(description=\"AT LEAST 3 genres!\")\n",
" duration: float\n",
" language: Language\n",
" has_lyrics: bool\n",
"\n",
"\n",
"# normal use\n",
"Song(title=\"Hello\", genres=[\"pop\"], duration=3.5, language=Language.EN, has_lyrics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a `StructGPT` object with your pydantic model, and prompting it will always return a valid instance of the model, or raise an exception if it fails to generate a valid instance after the maximum number of retries. Your docstring and field descriptions will also be visible to the model, so make sure to write good descriptions!\n"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Song(title='Eternal Sunshine', genres=['Hip-hop', 'R&B', 'K-pop'], duration=240.0, language=, has_lyrics=True)"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_song = StructGPT(Song)\n",
"\n",
"await generate_song(\"Come up with an all-time best K-hiphop song\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Misc.\n",
"\n",
"Streaming can be enabled as usual by passing `stream=True` when prompting, and handle the partial events as they come in. Check the `Assistant` class for a list of events including the ones for streaming.\n",
"\n",
"There are some other useful utilities in the `utils` module, such as:\n",
"\n",
"- `tokens`: for token counting\n",
"- `trackers`: for transparent token tracking and prompt/response logging\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "gpt",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}