https://github.com/tjarkvandemerwe/tidyprompt
Create LLM prompting pipelines
- Host: GitHub
- URL: https://github.com/tjarkvandemerwe/tidyprompt
- Owner: tjarkvandemerwe
- License: other
- Created: 2024-10-07T07:20:36.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2024-11-14T14:33:11.000Z (5 months ago)
- Last Synced: 2024-11-14T14:39:04.482Z (5 months ago)
- Language: R
- Homepage: https://tjarkvandemerwe.github.io/tidyprompt/
- Size: 2.03 MB
- Stars: 3
- Watchers: 2
- Forks: 1
- Open Issues: 2
Metadata Files:
- Readme: README.Rmd
- License: LICENSE
Awesome Lists containing this project
- jimsghstars - tjarkvandemerwe/tidyprompt - Create LLM prompting pipelines (R)
README
---
output: github_document
---

```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.path = "man/figures/README-",
out.width = "100%"
)
```

# tidyprompt
[![R-CMD-check](https://github.com/tjarkvandemerwe/tidyprompt/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/tjarkvandemerwe/tidyprompt/actions/workflows/R-CMD-check.yaml)
`tidyprompt` is an R package to prompt and empower your large language models (LLMs),
the tidy way.

Key features of `tidyprompt` are:
* **tidy prompting**: Quickly and elegantly construct prompts for LLMs, using piping syntax (inspired by the `tidyverse`). Wrap a base prompt in prompt wraps to influence how the LLM handles the prompt. A library of pre-built prompt wraps is included, but you can also write your own.
* **structured output**: Extract structured output from the LLM's response and validate it. If the output is not as expected, the LLM is automatically retried with feedback.
* **reasoning modes**: Make your LLM answer in a specific mode, such as chain-of-thought or ReAct (Reasoning and Acting).
* **function calling**: Give your LLM the ability to autonomously call R functions ('tools'). With this, the LLM can retrieve information or take other actions. `tidyprompt` also supports R code generation and evaluation, allowing LLMs to run R code.
* **compatible with all LLM providers**: Usable with any LLM provider that supports chat completion. Use an included LLM provider such as Ollama (on your local PC or your own server), OpenAI, OpenRouter (offering various providers), Mistral, Groq, XAI (Grok), or Google Gemini, or easily write your own hook for any other LLM provider.
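To give a flavour of the piping style described above, here is a minimal sketch. It assumes a running local Ollama server; the function names (`answer_as_integer()`, `send_prompt()`, `llm_provider_ollama()`) are taken from the package's documented API, but consult the reference manual for exact signatures and options.

```r
# Illustrative sketch only; requires the tidyprompt package and a
# running Ollama server (swap in another provider hook as needed)
library(tidyprompt)

"What is 5 + 5?" |>
  answer_as_integer() |>              # prompt wrap: ask for, extract, and
                                      # validate an integer answer (with
                                      # automatic retries on failure)
  send_prompt(llm_provider_ollama())  # evaluate the prompt against Ollama
```

The same base prompt can be piped through additional wraps (for example a reasoning mode or a tool definition) before `send_prompt()` is called, which is what makes composition of prompt behaviour feel tidyverse-like.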
## Installation
You can install the development version of tidyprompt from [GitHub](https://github.com/tjarkvandemerwe/tidyprompt) with:
``` r
# install.packages("remotes")
remotes::install_github("tjarkvandemerwe/tidyprompt")
```

## Example usage
```{r child = 'vignettes/example_usage.Rmd'}
```

## More information and contributing
`tidyprompt` is under active development by Luka Koning ([email protected]) and
Tjark van de Merwe ([email protected]). Note that in this stage,
the package is not yet fully stable and its architecture is subject to change.

If you encounter issues, please open an issue in the GitHub repository. You are
welcome to contribute to the package by opening a pull request. If you have any
questions or suggestions, you can also reach us via e-mail.