Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Strongly-typed LLM Function Calling examples, run on OpenAI, Ollama, Mistral and others.
- Host: GitHub
- URL: https://github.com/yomorun/llm-function-calling-examples
- Owner: yomorun
- License: mit
- Created: 2024-07-30T09:22:31.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2024-09-09T03:21:20.000Z (24 days ago)
- Last Synced: 2024-09-09T10:26:58.238Z (24 days ago)
- Topics: function-calling, llm, ollama, openai, yomo
- Language: Go
- Homepage: https://yomo.run
- Size: 45.9 KB
- Stars: 2
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# LLM Function Calling Examples
This repository contains examples of how to build LLM (Large Language Model) Function Calling serverless functions with the [YoMo framework](https://github.com/yomorun/yomo).
## Write Once, Run on any Model
YoMo supports multiple LLM providers, such as Ollama, Mistral, Llama, Azure OpenAI, and Cloudflare AI Gateway. You can choose whichever you want to use; details can be found in [Doc: LLM Providers](https://yomo.run/docs/llm-providers) and [Doc: Configuration](https://yomo.run/docs/zipper-configuration).
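To illustrate the "write once" idea: the provider is selected in the zipper configuration, not in the function code, so switching models requires no code changes. The fragment below is a hypothetical sketch only; key names and layout may differ by YoMo version, so verify against the linked configuration doc.

```yaml
# config.yaml — illustrative sketch; verify keys against the YoMo docs.
name: llm-zipper
host: 0.0.0.0
port: 9000
bridge:
  ai:
    server:
      addr: 0.0.0.0:8000
      provider: ollama        # swap to openai, mistral, ... without touching function code
    providers:
      ollama:
        api_endpoint: http://localhost:11434
      openai:
        api_key: <OPENAI_API_KEY>
        model: gpt-4o
```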
## Examples List
- [tool-get-utc-time](./tool-get-utc-time): Get the UTC time by city name.
- [tool-currency-converter](./tool-currency-converter): Convert currencies via a 3rd-party API.
- [tool-get-weather](./tool-get-weather): Get weather information by city name via a 3rd-party API.
- [tool-timezone-calculator](./tool-timezone-calculator): Calculate the timezone for a specific time.
- [tool-get-ip-and-latency](./tool-get-ip-and-latency): Get the IP address and latency for a given website name, such as "Nike" or "Amazon", using the `ping` command.

## Self Hosting
Check [Docs: Self Hosting](https://yomo.run/docs/self-hosting) for details on how to deploy the YoMo LLM Bridge and Function Calling Serverless on your own infrastructure. Furthermore, if your AI agents become popular with users around the world, consider deploying to multiple regions to improve LLM response speed. Check [Docs: Geo-distributed System](https://yomo.run/docs/glossary) for instructions on making your AI applications more reliable and faster.