Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
torch from R!
https://github.com/dfalbel/torch
- Host: GitHub
- URL: https://github.com/dfalbel/torch
- Owner: dfalbel
- License: other
- Archived: true
- Created: 2018-10-06T17:27:42.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2020-08-07T22:21:44.000Z (over 4 years ago)
- Last Synced: 2024-07-31T19:16:34.745Z (4 months ago)
- Language: C++
- Homepage: http://dfalbel.github.io/torch
- Size: 1.15 MB
- Stars: 50
- Watchers: 7
- Forks: 5
- Open Issues: 7
Metadata Files:
- Readme: README.Rmd
- License: LICENSE
README
---
output: github_document
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-",
  out.width = "100%"
)
```

# torch
[![lifecycle](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://www.tidyverse.org/lifecycle/#experimental)
[![Travis build status](https://travis-ci.org/dfalbel/torch.svg?branch=master)](https://travis-ci.org/dfalbel/torch)
[![Coverage status](https://codecov.io/gh/dfalbel/torch/branch/master/graph/badge.svg)](https://codecov.io/github/dfalbel/torch?branch=master)

torch from R!
> A proof of concept for calling libtorch functions from R. The API will
> change! Use at your own risk. Most of libtorch's functionality is not
> yet implemented here.

## Installation
Installation is very simple:
### CPU
```r
Sys.setenv(TORCH_HOME="/libtorch")
devtools::install_github("dfalbel/torch")
```

The code above will check whether `libtorch` is installed in the `TORCH_HOME` directory. If not, it will automatically download the `libtorch` binaries from [`pytorch.org`](https://pytorch.org/), unpack them to `TORCH_HOME`, and then install the `torch` R package. If you don't set the `TORCH_HOME` environment variable, `/libtorch` is used as the default.
Alternatively, you can provide a URL from which to download the binaries by setting the `TORCH_BINARIES` environment variable.
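For example (a sketch only; the URL below is a placeholder, not a real download location):

```r
# Point the installer at a specific libtorch build (placeholder URL).
Sys.setenv(TORCH_BINARIES = "https://example.com/libtorch/libtorch-cpu.zip")
devtools::install_github("dfalbel/torch")
```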
**Note**: The package will throw `std::bad_alloc` errors (which crash the R session) if compiled with recent versions of `g++` (e.g. the default on Ubuntu Xenial, 5.4.0). It's recommended to compile the package with `g++-4.9`. To do so:
```bash
sudo apt-get install g++-4.9
```

And add the following to your `~/.R/Makevars` (`usethis::edit_r_makevars()`):
```
CXX=g++-4.9
CXX11=g++-4.9
```

You may need to reinstall the `Rcpp` package.
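A minimal sketch, assuming you want `Rcpp` rebuilt from source so it picks up the new compiler settings:

```r
# Force a source install so Rcpp is recompiled with g++-4.9.
install.packages("Rcpp", type = "source")
```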
### GPU
On Linux you can also install `torch` with **CUDA 9.0** support (still at a very early stage).
**Install CUDA 9.0**
- [follow these instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html) and add the necessary repositories
- install **cuda-9.0** - `sudo apt-get install cuda-9-0`
- install **cuDNN > 7** - follow the instructions [here](https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html).

**Install libtorch and torch R package**
```r
Sys.setenv(TORCH_BACKEND = "CUDA")
devtools::install_github("dfalbel/torch")
```

## Example
Currently this package is only a proof of concept: you can create a torch
Tensor from an R object, and then convert it back from a torch Tensor to an R object.

```{r}
library(torch)
x <- array(runif(8), dim = c(2, 2, 2))
y <- tensor(x)
y
identical(x, as.array(y))
```

### Simple Autograd Example
In the following snippet we let torch calculate the derivatives using its autograd feature:
```{r}
x <- tensor(1, requires_grad = TRUE)
w <- tensor(2, requires_grad = TRUE)
b <- tensor(3, requires_grad = TRUE)

y <- w * x + b
y$backward()

x$grad
w$grad
b$grad
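# Expected values: for y = w * x + b,
# x$grad = w = 2, w$grad = x = 1, and b$grad = 1.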
```

### Linear Regression
In the following example we are going to fit a linear regression from scratch
using torch's Autograd.

**Note**: all methods that end with `_` (e.g. `sub_`) modify the tensors in place, as illustrated below.
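A quick illustration of the in-place semantics (a small sketch using only `tensor()` and `sub_()`, which appear elsewhere in this README):

```r
a <- tensor(5)
a$sub_(tensor(2))  # modifies `a` in place; it now holds 3
a
```

With that in mind, here is the full regression loop: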

```{r, eval=TRUE}
x <- matrix(runif(100), ncol = 2)
y <- matrix(0.1 + 0.5 * x[,1] - 0.7 * x[,2], ncol = 1)

x_t <- tensor(x)
y_t <- tensor(y)

w <- tensor(matrix(rnorm(2), nrow = 2), requires_grad = TRUE)
b <- tensor(0, requires_grad = TRUE)

lr <- 0.5
for (i in 1:100) {
  # forward pass
  y_hat <- tch_mm(x_t, w) + b
  loss <- tch_mean((y_t - y_hat)^2)

  # backpropagate to compute w$grad and b$grad
  loss$backward()

  # gradient-descent step, without tracking gradients
  with_no_grad({
    w$sub_(w$grad * lr)
    b$sub_(b$grad * lr)
  })

  # reset gradients before the next iteration
  w$grad$zero_()
  b$grad$zero_()
}

print(w)
print(b)
```
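As a quick sanity check (a sketch, not part of the original example), the learned parameters can be compared with an ordinary least-squares fit; `as.array()` converts tensors back to R objects, as shown earlier:

```r
# True coefficients are 0.5 and -0.7, with intercept 0.1.
as.array(w)
as.array(b)

# Reference fit with base R's lm().
coef(lm(y ~ x))
```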