https://github.com/moodymudskipper/ask
ask R anything
- Host: GitHub
- URL: https://github.com/moodymudskipper/ask
- Owner: moodymudskipper
- License: other
- Created: 2024-08-08T19:30:02.000Z (8 months ago)
- Default Branch: main
- Last Pushed: 2025-02-28T12:27:26.000Z (about 2 months ago)
- Last Synced: 2025-04-02T03:46:27.744Z (13 days ago)
- Language: R
- Size: 535 KB
- Stars: 80
- Watchers: 2
- Forks: 2
- Open Issues: 19
Metadata Files:
- Readme: README.Rmd
- License: LICENSE
Awesome Lists containing this project
- jimsghstars - moodymudskipper/ask - ask R anything (R)
README
---
output: github_document
---

```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.path = "man/figures/README-",
out.width = "100%"
)
```

# ask
{ask} answers natural language queries through diverse actions (not just a text
answer, as simpler interfaces to LLMs provide), using optional contextual information.

The function you use decides the type of action; it takes the query as its first argument:
* `ask()`: Text output through a UI in the viewer pane
* `ask_in_place()`: Script edits
* `ask_clipboard()`: Text output copied to the clipboard
* `ask_terminal()`: Copy a command to the terminal
* `ask_tibble()`, `ask_numeric()`, `ask_boolean()`: Actual R objects
* `ask_open()`: Open a script
* `ask_smart()`: Anything; it runs a first API call to guess which function and context you need, and a second to actually answer the query.

The context is provided as a second argument, and we have a collection of functions
to make this easy:

* `context_repo()`: Consider your active repo
* `context_package()`: Consider any package
* `context_script()`: Consider the active script, or any given script
* `context_clipboard()`: Use clipboard content as context
* `context_url()`: Use the HTML content of a URL as context
* ... (many more)

It is built on top of ChatGPT (default) or Llama for now.
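For instance, a query function is combined with a context helper like this (an illustrative sketch, not run here: it assumes the setup described in the Installation section below, and the queries are only examples):

```{r, eval = FALSE}
library(ask)

# text answer displayed in the viewer pane, using the active repo as context
ask("summarise what this repo does in 2 sentences", context_repo())

# return an actual R object rather than text
ask_numeric("how many exported functions does this package have?", context_package("ask"))
```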
## Installation
Install with:
``` r
pak::pak("moodymudskipper/ask")
```

You'll need either:
* An OpenAI API key linked to a credit card: https://openai.com/index/openai-api/
* An installation of Ollama : https://ollama.com/download
* Download and install the app
* In the terminal, call for instance `ollama pull llama3.2` or `ollama pull llama3.2:1b` to pull
a specific model (check what models are available on https://ollama.com/library)

You may provide a `model` argument to any `ask*()` call. The default model at the moment is "gpt-4o", but this can be changed by setting, for instance, `options(ask.model = "llama3.2")`.
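For instance (illustrative only; it assumes the llama3.2 model was pulled as above):

```{r, eval = FALSE}
# change the default model for the whole session
options(ask.model = "llama3.2")

# or override the model for a single call
ask("where is the Eiffel tower? 1 sentence", model = "gpt-4o")
```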
A quick comparison of *gpt-4o* (OpenAI) and *llama3.1* (Ollama) according to just me.
| | gpt-4o | llama3.1 |
|----------------------|-------------------------------|--------|
| Result relevance | `r strrep(emo::ji("star"),4)` | `r strrep(emo::ji("star"),3)` |
| Change code in place | `r strrep(emo::ji("star"),3)` | not practical |
| Speed | `r strrep(emo::ji("star"),3)` | `r strrep(emo::ji("star"),2)` |
| Data protection | `r strrep(emo::ji("star"),2)` | `r strrep(emo::ji("star"),4)` (local!)|
| Price                | `r strrep(emo::ji("moneybag"),1)` by token | `r strrep(emo::ji("free"),1)` |

## ask, follow up and build a conversation
```{r, include=FALSE}
library(ask)
```

`ask()` a question to start a conversation.
```{r ask1, results='asis', cache = TRUE, eval = FALSE}
library(ask)
ask("where is the Eiffel tower? 1 sentence")
```
To follow up on a query there is no need to store the previous output:
we keep it in a global variable (accessible with `last_conversation()`), so you might simply call `follow_up()`.

```{r ask2, results='asis', cache = TRUE, eval = FALSE}
follow_up("is it tall? 1 sentence")
```
If you're not happy with an answer you might ask again. This will replace the
latest follow up rather than add an extra message as would be the case if we
reran the last command.

```{r ask3, results='asis', cache = TRUE, eval = FALSE}
again()
```
To access previous conversations you might call `conversation_manager()`:
it will open a shiny app in the viewer where you can browse your history, continue
a conversation, or pick one to print and continue with `follow_up()` or `again()`.

## Examples
```{r ex-S3, results='asis', error = TRUE, cache = TRUE, eval = FALSE}
ask(
"What S3 methods does this register? just a bullet list",
context_script(system.file("NAMESPACE", package = "ask"))
)
```
```{r ex-withr, results='asis', error = TRUE, cache = TRUE, eval = FALSE}
# this works because the package is small enough!
ask(
"how to set environment variables locally using this package? 1 sentence",
context_package("withr")
)
```
Some contexts are too big; this happens often with `context_package()` or `context_repo()`.
```{r ex-rlang1, error = TRUE, cache = TRUE}
ask(
"what version is installed and what features were added for this version? 1 sentence",
context_package("rlang")
)
```

In these cases we can sometimes manage the context window.
In the case of `context_package()` the code is not included by default,
but the help files are, and they sometimes take too many tokens for an LLM query.

```{r ex-rlang2, results='asis', error = TRUE, cache = TRUE, eval = FALSE}
ask(
"what version is installed and what features were added for this version? 1 sentence",
context_package("rlang", help_files = FALSE)
)
```
```{r ex-tibble, error = TRUE, cache = TRUE}
ask_tibble(
"extract the key dates and events",
context_url("https://en.wikipedia.org/wiki/R_(programming_language)")
)
```

```{r ex-clip, error = TRUE, cache = TRUE}
# simulate clipboard use
clipr::write_clip(
"Grocery list: 2 apples, 1 banana, 7 carrots",
allow_non_interactive = TRUE
)
ask_numeric("How many fruits will I buy?", context_clipboard())
```
```{r, eval = FALSE}
# this was run manually (not reproducible because of context and output)
ask_terminal("what git command can I run to check who made the latest changes to the `ask()` function?", context_repo())
```
## Update the code base in place

Even more interesting is using these features to change your code in place; for this
we use the `ask_in_place()` function. Try things like:
```{r, eval = FALSE}
ask_in_place("write roxygen2 documentation for functions in the active script", context_script())
ask_in_place("write tests for my_function()", context_script())
ask_in_place("move my_function() to its own script", context_script())
ask_in_place("write a README for this package", context_repo())
```

## Caching system
Caching is useful to spare tokens and because LLMs don't always answer the same thing when asked twice.
There's a `cache` argument to most functions but the recommended use would be to
set `options(ask.cache = "my_cache_folder")`. Then `ask()` will return the same
thing when called with the same prompt and other parameters; calling `again()`
on a conversation triggers a new answer and replaces the cache.

"ram" is a special value to store the cache in memory rather than on disk.
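A minimal sketch (the folder name is just an example):

```{r, eval = FALSE}
# persist answers on disk, reused across sessions
options(ask.cache = "my_cache_folder")

# the second identical call returns the cached answer without a new API call
ask("where is the Eiffel tower? 1 sentence")
ask("where is the Eiffel tower? 1 sentence")

# or keep the cache in memory for the current session only
options(ask.cache = "ram")
```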
## Speech to text
If no input is provided we use speech to text; to end, either click on "Done" or say "stop listening".
At the moment this only works on macOS and only if Chrome is installed.

```{r, eval = FALSE}
ask() # ask a question
```