webmiddens
==========

```{r echo=FALSE}
knitr::opts_chunk$set(
comment = "#>",
collapse = TRUE,
warning = FALSE
)
```

[![Project Status: Active – The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)
[![R-CMD-check](https://github.com/sckott/webmiddens/workflows/R-CMD-check/badge.svg)](https://github.com/sckott/webmiddens/actions?query=workflow%3AR-CMD-check)
[![codecov](https://codecov.io/gh/sckott/webmiddens/branch/master/graph/badge.svg)](https://codecov.io/gh/sckott/webmiddens)

simple caching of HTTP requests/responses, hooking into webmockr (https://github.com/ropensci/webmockr)
for HTTP request matching

A midden is a debris pile constructed by a woodrat/pack rat (https://en.wikipedia.org/wiki/Pack_rat#Midden)

### the need

- `vcr` is really meant for testing or scripted use; I don't think it fits
well into a use case where another package wants to cache responses
- `memoise` is close-ish but doesn't fit the needs here, e.g., no expiry, not specific
to HTTP requests, etc.
- we need something specific to HTTP requests that handles expiration, offers a few
different caching location options, works across HTTP clients, etc.
- caching just the HTTP responses means the rest of the code in the function can
change and the response can still be cached
- the downside, vs. `memoise`, is that we're only caching the HTTP response, so if
a lot of time is still spent processing the response, the function will still be
quite slow - BUT, if the HTTP response processing code is within its own function,
you could memoise that function (see the sketch after this list)
- `memoise` is great, but since it caches the whole function call, you don't benefit
from individually caching each HTTP request, which we do here. if you cache each
HTTP request, then any time you make that same request, its response is already
cached
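
A minimal sketch of that last idea, combining webmiddens-style caching of the raw HTTP response with `memoise` for the processing step. The `parse_body()` helper below is hypothetical, just for illustration (it is not part of webmiddens), and `http_request()` is the example function defined later in this README:

```{r eval=FALSE}
# hypothetical helper: parse a JSON response body into an R object.
# memoising it caches the (possibly slow) processing step, while
# webmiddens caches the raw HTTP response itself
parse_body <- memoise::memoise(function(txt) jsonlite::fromJSON(txt))

some_fxn <- function(...) {
  res <- http_request(...)         # HTTP response cached by webmiddens
  parse_body(res$parse("UTF-8"))   # parsed result cached by memoise
}
```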

### brainstorming

- use `webmockr` to match requests (works with `crul`; soon `httr`)
- possibly match on, and expire based on headers: Cache-Control, Age, Last-Modified,
ETag, Expires (see Ruby's faraday-http-cache (https://github.com/plataformatec/faraday-http-cache#what-gets-cached))
- caching backends: probably all binary to save disk space since most likely
we don't need users to be able to look at plain text of caches
- expiration: set a time to expire (see the sketch after this list). if set to
`2019-03-08 00:00:00` and it's `2019-03-07 23:00:00`, then the cache will expire
1 hr from now, and a new real HTTP request will need to be made (i.e., the cache
will be deleted whenever the next HTTP request is made)
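
A minimal sketch of the intended expiry behaviour, using the `midden` class and its `expire()` method shown later in this README (the path name here is just an example):

```{r eval=FALSE}
x <- midden$new()
x$init(path = "acorns")  # example cache location
x$expire(3600)           # entries older than 1 hour count as expired
# after expiry, the next matching request is a real HTTP request again,
# and its fresh response replaces the stale cached one
```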

### http libraries

right now we only support `crul`, but `httr` support should arrive soon

### installation

```{r eval=FALSE}
remotes::install_github("sckott/webmiddens")
```

### use_midden()

```{r}
library(webmiddens)
library(crul)
```

Let's say you have some function `http_request()` that makes an HTTP request and that
you re-use in various parts of your project or package

```{r}
http_request <- function(...) {
x <- crul::HttpClient$new("https://httpbin.org", opts = list(...))
x$get("get")
}
```

And you have a function `some_fxn()` that uses `http_request()` to do the HTTP
request, then processes the results into a data.frame or list, etc. This is a super
common pattern in a project or R package that deals with web resources.

```{r}
some_fxn <- function(...) {
res <- http_request(...)
jsonlite::fromJSON(res$parse("UTF-8"))
}
```

Without `webmiddens` the HTTP request happens as usual and all is good

```{r}
some_fxn()
```

Now, with `webmiddens`

run `wm_configuration()` first to set the path where HTTP responses will be cached

```{r}
wm_configuration("foo1")
```

```{r echo=FALSE}
bb <- midden_current()
bb$destroy()
```

first request is a real HTTP request

```{r}
res1 <- use_midden(some_fxn())
res1
```

second request uses the cached response from the first request

```{r}
res2 <- use_midden(some_fxn())
res2
```

### the midden class

```{r}
x <- midden$new()
x # no path
# Run $init() to set the path
x$init(path = "forest")
x
```

The `cache` slot holds a `hoardr` object, which you can use to fiddle with the
cached files; see `?hoardr::hoard`

```{r}
x$cache
```
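
A couple of things you might do with that `hoardr` object; a sketch assuming the usual `hoardr` client methods (check `?hoardr::hoard` for the exact API):

```{r eval=FALSE}
x$cache$list()        # list cached files
x$cache$delete_all()  # delete all cached files
```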

Use `expire()` to set the expiration time (in seconds). You can set it by passing a
value to `expire()` or through the environment variable `WEBMIDDENS_EXPIRY_SEC`

```{r}
x$expire()
x$expire(5)
x$expire()
x$expire(reset = TRUE)
x$expire()
Sys.setenv(WEBMIDDENS_EXPIRY_SEC = 35)
x$expire()
x$expire(reset = TRUE)
x$expire()
```

FIXME: the below is not working right now - figure out why

```{r eval=FALSE}
wm_enable()
con <- crul::HttpClient$new("https://httpbin.org")
# first request is a real HTTP request
x$r(con$get("get", query = list(stuff = "bananas")))
# following requests use the cached response
x$r(con$get("get", query = list(stuff = "bananas")))
```

verbose output

```{r eval=FALSE}
x <- midden$new(verbose = TRUE)
x$init(path = "rainforest")
x$r(con$get("get", query = list(stuff = "bananas")))
```

set expiration time

```{r eval=FALSE}
x <- midden$new()
x$init(path = "grass")
x$expire(3)
x
```

Delete all the files in your "midden" (the folder with cached files)

```{r eval=FALSE}
x$cleanup()
```

Delete the "midden" (the folder with cached files)

```{r eval=FALSE}
x$destroy()
```

## Meta

* Please [report any issues or bugs](https://github.com/sckott/webmiddens/issues).
* License: MIT
* Get citation information for `webmiddens` in R by running `citation(package = 'webmiddens')`
* Please note that this project is released with a [Contributor Code of Conduct][coc].
By participating in this project you agree to abide by its terms.

[coc]: https://github.com/sckott/webmiddens/blob/master/CODE_OF_CONDUCT.md