---
output: github_document
---

```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.path = "man/figures/README-",
out.width = "100%"
)
```

# `rollama`

[![R-CMD-check](https://github.com/JBGruber/rollama/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/JBGruber/rollama/actions/workflows/R-CMD-check.yaml)
[![Codecov test coverage](https://codecov.io/gh/JBGruber/rollama/branch/main/graph/badge.svg)](https://app.codecov.io/gh/JBGruber/rollama?branch=main)
[![CRAN status](https://www.r-pkg.org/badges/version/rollama)](https://CRAN.R-project.org/package=rollama)
[![CRAN_Download_Badge](https://cranlogs.r-pkg.org/badges/grand-total/rollama)](https://cran.r-project.org/package=rollama)
[![arXiv:10.48550/arXiv.2404.07654](https://img.shields.io/badge/DOI-arXiv.2404.07654-B31B1B?logo=arxiv)](https://doi.org/10.48550/arXiv.2404.07654)
[![say-thanks](https://img.shields.io/badge/Say%20Thanks-!-1EAEDB.svg)](https://saythanks.io/to/JBGruber)

The goal of `rollama` is to wrap the Ollama API, which allows you to run different LLMs locally and create an experience similar to ChatGPT/OpenAI's API.
Ollama is very easy to deploy and handles a huge number of models.
Check out the project here: <https://github.com/ollama/ollama>.

## Installation

You can install this package from CRAN:

``` r
install.packages("rollama")
```

Or you can install the development version of `rollama` from [GitHub](https://github.com/JBGruber/rollama) with:

``` r
# install.packages("remotes")
remotes::install_github("JBGruber/rollama")
```

However, `rollama` is just the client package.
The models are run by `Ollama`, which you need to install on your machine, on a remote server, or via [Docker](https://docs.docker.com/desktop/).
The easiest way is to simply download and install the Ollama application from [their website](https://ollama.com/).
Once `Ollama` is running, you can see if you can access it with:

```{r}
rollama::ping_ollama()
```

### Installation of Ollama through Docker

The advantage of running things through Docker is that the application is isolated from the rest of your system, behaves the same on different systems, and is easy to download and update.
You can also get a nice web interface.
After making sure [Docker](https://docs.docker.com/desktop/) is installed, you can simply use the Docker Compose file from [this gist](https://gist.github.com/JBGruber/73f9f49f833c6171b8607b976abc0ddc).

If you don’t know how to use Docker Compose, you can follow this video to use the compose file and start Ollama and Open WebUI:

[![Install Docker on macOS, Windows and Linux](https://img.youtube.com/vi/iMyCdd5nP5U/0.jpg)](https://www.youtube.com/watch?v=iMyCdd5nP5U)

## Example

The first thing you should do after installation is to pull one of the models from the [Ollama library](https://ollama.com/library).
By calling `pull_model()` without arguments, you pull the (current) default model, "llama3.1 8B":

```{r lib}
library(rollama)
```
```{r eval=FALSE}
pull_model()
```
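
`pull_model()` also accepts a model name if you want something other than the default; a minimal sketch (the model name is only an example), together with a look at what is already installed locally:

``` r
# not run: pull a specific model by name and list locally available models
pull_model("mistral")
list_models()
```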

There are two ways to communicate with the Ollama API.
You can make single requests, which do not store any history and treat each query as the beginning of a new chat:

```{r query}
# ask a single question
query("why is the sky blue?")
```
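
Single requests can also target a specific model for just that call, without changing any defaults; a minimal sketch, assuming the model has already been pulled (the model name is only an example):

``` r
# not run: answer the same question with another (already pulled) model
query("why is the sky blue?", model = "mistral")
```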

Or you can use the `chat()` function, which treats all messages sent during an R session as part of the same conversation:

```{r chat}
# hold a conversation
chat("why is the sky blue?")
chat("and how do you know that?")
```
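
If you want to inspect the running conversation, e.g., to check what context the model has seen so far, there is a helper for that; a sketch, assuming your installed version exports `chat_history()`:

``` r
# not run: show the messages exchanged in the current conversation
chat_history()
```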

If you are done with a conversation and want to start a new one, you can do that like so:

```{r new}
new_chat()
```

## Model parameters

You can set a number of model parameters, either by creating a new model with a [modelfile](https://jbgruber.github.io/rollama/reference/create_model.html) (sketched further below) or by including the parameters in the prompt:

```{r}
query("why is the sky blue?", model_params = list(
seed = 42,
temperature = 0,
num_gpu = 0
))
```
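
The modelfile route mentioned above persists such settings in a new, named model instead of repeating them with every prompt. A minimal sketch, assuming `create_model()` takes the new model's name and the modelfile text (see the linked reference for the exact interface); name, base model, and settings are illustrative:

``` r
# not run: create a reusable model with fixed parameters and a system prompt
mf <- "FROM llama3.1\nPARAMETER temperature 0\nSYSTEM You answer as briefly as possible."
create_model("brief-llama", modelfile = mf)
query("why is the sky blue?", model = "brief-llama")
```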

The chunk below pulls the list of valid parameters and their meaning straight from the Ollama modelfile documentation, so the overview stays current whenever the README is re-knit:

```{r echo=FALSE, results='asis'}
# print the "Valid Parameters and Values" section of Ollama's modelfile docs
l <- readLines("https://raw.githubusercontent.com/ollama/ollama/main/docs/modelfile.md")
s <- grep("#### Valid Parameters and Values", l, fixed = TRUE)
e <- grep("### TEMPLATE", l, fixed = TRUE)
cat(l[s:(e - 1)], sep = "\n")
```

## Configuration

You can configure the server address, the system prompt and the model used for a query or chat.
If not configured otherwise, `rollama` assumes you are using the default port (11434) of a local instance ("localhost").
Let's make this explicit by setting the option:

```{r server}
options(rollama_server = "http://localhost:11434")
```
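
The same option points `rollama` at a remote or containerised instance, for example Ollama running via Docker on another machine; a sketch with a placeholder address:

``` r
# not run: use a remote Ollama instance (address is a placeholder)
options(rollama_server = "http://192.168.1.10:11434")
ping_ollama()
```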

You can change how a model answers by setting a configuration or system message in plain English (or another language supported by the model):

```{r config}
options(rollama_config = "You make answers understandable to a 5 year old")
query("why is the sky blue?")
```
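
To drop the system message again and return to the model's default behaviour, unset the option (assuming `rollama` falls back to no configuration when the option is `NULL`):

``` r
# not run: remove the custom system message again
options(rollama_config = NULL)
```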

By default, the package uses the "llama3.1 8B" model. Supported models can be found in the [Ollama library](https://ollama.com/library).
To download a specific version of a model, combine the model name with a tag listed under "Tags" on its library page, e.g. `"mixtral:8x7b"`.
Change the default model via the `rollama_model` option:

```{r model}
options(rollama_model = "mixtral")
# if you don't have the model yet: pull_model("mixtral")
query("why is the sky blue?")
```
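
Tags work the same way as plain model names; a minimal sketch (the tag is illustrative, check the model's "Tags" page for what is actually available):

``` r
# not run: pull and use a specific tagged version of a model
pull_model("mixtral:8x7b")
options(rollama_model = "mixtral:8x7b")
query("why is the sky blue?")
```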