Theano – how to get the GPU to work

I have been working with Theano and it has been a bit of a journey getting the GPU to work. Here are a few notes to remind myself how to do so…

Start Python and check if Theano recognizes the GPU

$ python
Python 2.7.8 |Anaconda 2.1.0 (64-bit)| (default, Aug 21 2014, 18:22:21)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2

>>> import theano
Using gpu device 0: GeForce GTX 760 Ti OEM

You should see something like the above line showing that Theano finds your GPU.

If you do not see something like the above, then Theano is probably not configured to work with your GPU. But let’s check some more just to be sure.

[update March 13, 2016]
I think Theano used to be included by default in the Anaconda distribution, but it no longer seems to be. So if you get the following error message:

>>> import theano
ImportError: No module named theano

then you need to install the Theano package. From the command line (not within Python), run:

$ conda install theano

and follow the prompts. After Theano installs, go back into Python and run:

$ python
Python 2.7.11 |Anaconda 2.5.0 (64-bit)| (default, Dec  6 2015, 18:08:32) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2

>>> import theano
WARNING (theano.configdefaults): g++ not detected ! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string.

I got the above warning, so I exited Python and, from the command prompt, ran:

$ sudo apt-get install g++
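Before retrying the import, you can confirm the compiler is now visible on your PATH. This is a plain-Python check of my own, not part of Theano:

```python
# Theano needs g++ on the PATH to build its optimized C implementations.
# shutil.which returns the full path of an executable, or None if absent.
import shutil

print(shutil.which("g++"))  # e.g. /usr/bin/g++, or None if still missing
```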

Then, go back into Python and check again,

$ python
>>> import theano

And no warnings or errors! Alright. Now continue with the rest of the instructions to make sure the GPU is really set up correctly.
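If you want a quick programmatic way to tell whether a package is importable before trying to use it, plain Python can do it (nothing Theano-specific here):

```python
# Check whether a module is importable without actually importing it.
import importlib.util

def is_installed(name):
    return importlib.util.find_spec(name) is not None

print(is_installed("json"))    # True: part of the standard library
print(is_installed("theano"))  # False until `conda install theano` succeeds
```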
[/end update March 13, 2016]

Copy and paste the code under the heading “Testing Theano with GPU” found here.

Save the code to a file, then run it to see whether you are using your CPU or GPU.

$ python

Using gpu device 0: GeForce GTX 760 Ti OEM
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 0.404606103897 seconds
Result is [ 1.23178029  1.61879349  1.52278066 ...,  2.20771813  2.29967761
Used the gpu
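How does the script decide which message to print? As I understand it, it inspects the ops in the compiled graph (the bracketed list in the output above): a plain Elemwise op means the work stayed on the CPU, while a GpuElemwise plus a HostFromGpu transfer means it ran on the GPU. Here is my own re-illustration of that logic, not the actual Theano code:

```python
# Sketch of the tutorial script's CPU/GPU check: if any op in the compiled
# graph is a plain (CPU) Elemwise, the computation ran on the CPU.
def used_gpu(op_names):
    """op_names: printable names of the ops in the compiled graph."""
    return not any(name.startswith("Elemwise") for name in op_names)

# GPU graph, matching the output above:
print(used_gpu(["GpuElemwise{exp,no_inplace}", "HostFromGpu"]))  # True
# CPU graph:
print(used_gpu(["Elemwise{exp,no_inplace}"]))                    # False
```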

If we get something like the above, ending with a “Used the gpu” message, we are done and using the GPU! If instead we get “Used the cpu”, then we try the following.

Copy and save this text:

[blas]
ldflags =

[global]
floatX = float32
device = gpu

# By default the compiled files were being written to my local network drive.
# Since I have limited space on this drive (on a school's network),
# we can change the path to compile the files on the local machine.
# You will have to create the directories and modify according to where you
# want to install the files.
# Uncomment if you want to change the default path to your own.
# base_compiledir = /local-scratch/jer/theano/

[nvcc]
fastmath = True

[gcc]
cxxflags = -ID:\MinGW\include

[cuda]
# Set to where the cuda drivers are installed.
# You might have to change this depending on where/which version of your cuda
# driver is installed, for example:
# root = /usr/local/cuda

to a file called “.theanorc” (note the “.” at the start of the name) and save it in your home directory (where your other settings files are stored).

So you should see the hidden .theanorc file in your home directory among others.

$ cd ~
$ ls -a

Open a new terminal and try,

$ python

and hopefully you see the satisfying message of “Used the gpu“!
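As an aside, if you cannot edit files in your home directory (or just want to test a setting once), Theano also reads the THEANO_FLAGS environment variable, which overrides .theanorc for that single run. The script name below is only a placeholder:

```shell
# Per-run configuration; overrides .theanorc for this invocation only.
THEANO_FLAGS='device=gpu,floatX=float32' python your_script.py
```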

There are probably more steps and other things that I am forgetting. But hopefully, this steers you in the right direction!