Using Tensorflow as a back end
As of 2017-09-12, Nimble has experimental support for Tensorflow as a back-end.
Tensorflow is a compute framework for medium-sized tensor operations. It is usually used for deep neural networks, but Nimble uses it only to accelerate tensor computations; in this sense, Nimble uses Tensorflow roughly as a GPU-compatible replacement for Eigen.
To use the Tensorflow backend for Nimble, you'll need to install RStudio's R wrapper for Tensorflow, and then use the R wrapper to install Tensorflow itself:

```r
install.packages('tensorflow')    # Installs RStudio's R wrapper library.
library('tensorflow')
tensorflow::install_tensorflow()  # Installs Google's Tensorflow library.
```

The above lines install the latest stable CPU-only version of Tensorflow in a Python virtualenv named r-tensorflow. You can replace this version with a GPU-compatible version by manually installing the Python package:
```shell
$ workon r-tensorflow  # You may need to install virtualenv before doing this.
$ pip install --ignore-installed tensorflow-gpu
```

To run on GPUs, you'll also need to install various CUDA libraries on your system. See the Installing Tensorflow docs for details.
Linux has first-class support; OS X and Windows may have more difficulty.
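After installation, a quick sanity check can confirm which Tensorflow build the R wrapper found and that tensor operations actually run. This sketch uses `tensorflow::tf_config()` from RStudio's tensorflow package and the Tensorflow 1.x session API that was current at the time of writing:

```r
library(tensorflow)

# Report the Python binary and Tensorflow version that the wrapper found.
tensorflow::tf_config()

# Evaluate a trivial tensor operation to confirm the back end works
# (Tensorflow 1.x-style session API).
sess <- tf$Session()
sess$run(tf$constant(1) + tf$constant(2))  # Should evaluate to 3.
```

If this fails, the problem is in the Tensorflow installation itself rather than in Nimble.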
Nimble can use Tensorflow to accelerate nimbleFunctions.
Each nimbleFunction can currently use either Tensorflow or Eigen for vectorizable math, and this decision is made on a per-function basis.
To enable Tensorflow, set the experimentalUseTensorflow option when compiling that function:
```r
nimbleOptions(experimentalUseTensorflow = TRUE)
tf_fun <- compileNimble(fun)
```

It is generally safer to set this global option temporarily using withNimbleOptions:
```r
tf_fun <- withNimbleOptions(list(experimentalUseTensorflow = TRUE),
                            compileNimble(fun))
```
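Putting the pieces together, here is a minimal end-to-end sketch. The function name and body are illustrative; `nimbleFunction`, `compileNimble`, and `withNimbleOptions` are the nimble APIs used above:

```r
library(nimble)

# A small nimbleFunction whose body is vectorizable math,
# the kind of computation the Tensorflow back end can accelerate.
vecSquareSum <- nimbleFunction(
  run = function(x = double(1)) {
    returnType(double(0))
    return(sum(x * x))
  }
)

# Enable the Tensorflow back end only for this compilation.
cVecSquareSum <- withNimbleOptions(
  list(experimentalUseTensorflow = TRUE),
  compileNimble(vecSquareSum)
)

cVecSquareSum(c(1, 2, 3))  # 1 + 4 + 9 = 14
```

Because the option is scoped to the `withNimbleOptions` call, other nimbleFunctions compiled in the same session continue to use Eigen.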