(If you want the GPU build, skip to the next section)
These instructions use virtualenv. However, you can simply use the --user flag with pip if you do not want to use virtualenv.
virtualenv tf_cpu
source tf_cpu/bin/activate
export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.12.1-cp27-none-linux_x86_64.whl
pip install --upgrade $TF_BINARY_URL
$ python
>>> import tensorflow as tf
>>> hello = tf.constant("Hello, Tensorflow!")
>>> sess = tf.Session()
>>> print (sess.run(hello))
Hello, Tensorflow!
First, make sure cuDNN (version 5.x) is installed:
(IFS=:; for p in ${LD_LIBRARY_PATH}; do if [ -e "${p}/libcudnn.so.5" ]; then echo "Found CUDNN5 at ${p}"; break; fi; done)
My output looks something like:
Found CUDNN5 at /home/ws15gkumar/.local/cudnn/lib64
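The shell loop above can also be sketched in Python (a minimal sketch; libcudnn.so.5 is the cuDNN 5 soname being checked for):

```python
# Sketch: search each directory on a colon-separated path (such as
# LD_LIBRARY_PATH) for the cuDNN 5 shared library.
import os

def find_cudnn(search_path, soname="libcudnn.so.5"):
    """Return the first directory on search_path containing soname, else None."""
    for p in search_path.split(":"):
        if p and os.path.exists(os.path.join(p, soname)):
            return p
    return None

if __name__ == "__main__":
    hit = find_cudnn(os.environ.get("LD_LIBRARY_PATH", ""))
    print("Found CUDNN5 at %s" % hit if hit else "CUDNN5 not found")
```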
Now, add these lines to your ~/.bashrc so that TensorFlow can locate CUDA:
export CUDA_HOME=/opt/NVIDIA/cuda-8.0
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH
export CPLUS_INCLUDE_PATH=$CUDA_HOME/include:$CPLUS_INCLUDE_PATH
export C_INCLUDE_PATH=$CUDA_HOME/include:$C_INCLUDE_PATH
export CPATH=$CUDA_HOME/include:$CPATH
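After sourcing your ~/.bashrc, the setup can be sanity-checked with a short Python sketch (a hypothetical helper, not part of TensorFlow; it only verifies that the variables point somewhere sensible):

```python
# Sketch: confirm that CUDA_HOME exists and that its lib64 directory is
# on LD_LIBRARY_PATH; returns a list of problems (empty means OK).
import os

def check_cuda_env(env=os.environ):
    problems = []
    cuda_home = env.get("CUDA_HOME", "")
    if not os.path.isdir(cuda_home):
        problems.append("CUDA_HOME is not a directory: %r" % cuda_home)
    lib64 = os.path.join(cuda_home, "lib64")
    if lib64 not in env.get("LD_LIBRARY_PATH", "").split(":"):
        problems.append("%s is not on LD_LIBRARY_PATH" % lib64)
    return problems

if __name__ == "__main__":
    for problem in check_cuda_env():
        print(problem)
```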
To make sure that you have the correct versions of the Python dependencies, run:
(This is a sanity check and may be skipped; pip will install the correct versions as TensorFlow dependencies.)
python -c "import pip; import numpy; print pip.__version__; print numpy.__version__"
My output looks something like:
7.1.2
1.10.1
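If you want to compare versions programmatically, a small helper can do it numerically rather than by string comparison (a sketch; the minimum versions below are illustrative, not official TensorFlow requirements):

```python
# Sketch: compare dotted version strings numerically rather than
# lexicographically, so "1.10.1" correctly exceeds "1.8.2".
def version_tuple(v):
    return tuple(int(x) for x in v.split(".") if x.isdigit())

def meets_minimum(installed, minimum):
    return version_tuple(installed) >= version_tuple(minimum)

print(meets_minimum("1.10.1", "1.8.2"))  # prints True
```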
First, upgrade pip
pip install --user --upgrade pip
For the CPU build, set
export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.12.1-cp27-none-linux_x86_64.whl
and for the GPU build
export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-0.12.1-cp27-none-linux_x86_64.whl
Now, install TensorFlow:
CUDA_VISIBLE_DEVICES=`/home/gkumar/scripts/free-gpu` pip install --user --upgrade $TF_BINARY_URL
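The free-gpu script above is site-specific; it prints the index of an idle GPU. A minimal sketch of the same idea, parsing the output of `nvidia-smi --query-gpu=index,memory.used --format=csv,noheader,nounits` (the parsing function and sample data below are illustrative):

```python
# Sketch: pick the GPU with the least memory in use from nvidia-smi's
# CSV output ("index, memory.used" per line, units stripped).
def pick_free_gpu(smi_csv):
    best_index, best_used = None, None
    for line in smi_csv.strip().splitlines():
        index, used = [field.strip() for field in line.split(",")]
        if best_used is None or int(used) < best_used:
            best_index, best_used = index, int(used)
    return best_index

sample = "0, 3970\n1, 10\n2, 2048"
print(pick_free_gpu(sample))  # prints 1
```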
Start a Python process:
CUDA_VISIBLE_DEVICES=`/home/gkumar/scripts/free-gpu` python
Run the following; you should see something like this:
--------
$ CUDA_VISIBLE_DEVICES=`/home/gkumar/scripts/free-gpu` python
Python 2.7.9 (default, Jun 29 2016, 13:08:31)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
>>> hello = tf.constant("Hello, Tensorflow!")
>>> sess = tf.Session()
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: Tesla K20m
major: 3 minor: 5 memoryClockRate (GHz) 0.7055
pciBusID 0000:03:00.0
Total memory: 4.63GiB
Free memory: 4.57GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K20m, pci bus id: 0000:03:00.0)
>>> print (sess.run(hello))
Hello, Tensorflow!
>>> quit()
-----------
Check cnn/examples for examples of common neural MT models.