https://github.com/parmsam/promptimus

R package to support LLM prompt generation
- Host: GitHub
- URL: https://github.com/parmsam/promptimus
- Owner: parmsam
- License: other
- Created: 2024-08-28T02:48:37.000Z (8 months ago)
- Default Branch: main
- Last Pushed: 2024-08-28T04:58:46.000Z (8 months ago)
- Last Synced: 2024-08-29T04:56:23.479Z (8 months ago)
- Language: R
- Homepage:
- Size: 3.5 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.Rmd
Awesome Lists containing this project
- jimsghstars - parmsam/promptimus - R package to support LLM prompt generation (R)
README
---
output: github_document
---

```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-",
  out.width = "100%"
)
```

# promptimus
The goal of promptimus is to provide a set of R functions to support LLM prompt generation. The package includes simple helpers for popular prompting frameworks such as RTF (role, task, format), chain of thought, few-shot prompting, and prompt chaining.
## Installation
You can install the development version of promptimus from [GitHub](https://github.com/) with:
``` r
# install.packages("devtools")
devtools::install_github("parmsam/promptimus")
```

## Example
The examples below show how to generate prompts with each supported framework.
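Each helper returns the generated prompt as a plain character string, so you can hand it to whatever LLM client you already use. As a minimal sketch with the {openai} package (the same client used in the prompt chaining example below; it assumes an OpenAI API key is already configured), the workflow might look like this:

``` r
library(promptimus)

# Sketch only: build a prompt with rtf() and send it to a chat model via the
# {openai} package (the role/task/format values here are illustrative)
prompt <- rtf(
  role = "career coach",
  task = "outline a 30-day plan to learn R",
  format = "numbered list"
)

openai::create_chat_completion(
  model = "gpt-3.5-turbo",
  messages = list(list("role" = "user", "content" = prompt))
)
```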
### Role-Task-Format
```{r example-rtf}
library(promptimus)

# Use the RTF (role, task, format) framework to generate a prompt
x <- rtf(
  role = "life coach with 30 years of experience in mentoring",
  task = "plan to improve my work-life balance",
  format = "table"
)
strwrap(x, width = 80)
```

### Chain of thought
```{r example-cot}
# Use Chain of Thought framework to generate a prompt
x <- chain_of_thought(
  instructions = "How do I improve my sales calls? I've only got a 15% close rate right now, and I think it's because I'm not selling the dream enough."
)
strwrap(x, width = 80)
```

### Fewshot
```{r example-fewshot}
tweets <- tibble::tribble(
  ~text, ~case, ~label,
  "Thank you Supreme Court! #SCOTUS", "masterpiece", "Positive",
  "Court rules in favor of Colorado baker!", "masterpiece", "Positive",
  "Religion used to discriminate. #SCOTUS #MasterpieceCakeshop", "masterpiece", "Negative",
  "Can't believe this cake case went to #SCOTUS.", "masterpiece", "Neutral",
  "Court supports baker refusing gay couple's cake.", "masterpiece", "Neutral",
  "#SCOTUS legitimizes religious convictions over #humanrights. #LGBTQRights", "masterpiece", "Negative",
  "#ClarenceThomas is a waste on #scotus", "mazars", "Negative",
  "Justice Ginsburg hospitalized, says #SCOTUS spokesperson.", "mazars", "Neutral",
  "Trump will gloat if Court disappoints us tomorrow.", "mazars", "Negative",
  "SCOTUS: Manhattan DA can get Trump's tax returns.", "mazars", "Positive",
  "Supreme Court says Trump is not above the law.", "mazars", "Positive",
  "SCOTUS rulings send Trump financial records back to lower courts.", "mazars", "Neutral"
)
x <- fewshot(
  text = "I am disappointed with this ruling.",
  instructions = "Decide if the sentiment of this statement is Positive or Negative.",
  examples = tweets,
  template = "Statement: {text}\nSentiment: {label}",
  copy_to_clipboard = FALSE
)
x
```

## Prompt chaining
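`chain_prompts()` runs a list of prompts through a model function in sequence, so later prompts can build on earlier responses. The chunk below wires it to the OpenAI API through the {openai} package; as a minimal offline sketch (assuming only that the model function takes a prompt string and returns a response string), a stub also works:

``` r
# Offline stub for illustration: echoes each prompt instead of calling an API.
# Assumes chain_prompts() only needs a function that maps a prompt string to a
# response string.
stub_model <- function(prompt, ...) {
  paste("You asked:", prompt)
}

chain_prompts(
  list("What is the sum of 15 and 27?", "Multiply the result by 2."),
  stub_model
)
```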
```{r prompt-chaining, cache=TRUE}
# Define the model interaction function: send a prompt to the OpenAI API and
# pause between calls to space out requests
model_function <- function(prompt, seconds_delay = 20) {
  x <- openai::create_chat_completion(
    model = "gpt-3.5-turbo",
    messages = list(
      list(
        "role" = "user",
        "content" = prompt
      )
    )
  )
  Sys.sleep(seconds_delay)
  return(x$choices$message.content)
}

# Define the sequence of prompts, each building on the previous result
prompts <- list(
  "What is the sum of 15 and 27?",
  "Take the result from the previous step and multiply it by 2.",
  "Subtract 10 from the result obtained in the previous step.",
  "Divide the result from the previous step by 2."
)
responses <- chain_prompts(prompts, model_function)
responses
```

# Credit
- [fewshot()](R/fewshot.R) is taken directly from [{promptr}](https://github.com/joeornstein/promptr) originally written by [Joe Ornstein](https://github.com/joeornstein).