Scenario: You have multiple GPUs on a single machine running Linux, but you want to use just one. By default, TensorFlow (and therefore Keras running on the TensorFlow backend) allocates memory on all available GPUs unless you specify otherwise. You use a Jupyter Notebook to run Keras with the TensorFlow backend.
Here’s how to use a single GPU in Keras with TensorFlow
Run this bit of code in a cell right at the start of your notebook (before importing tensorflow or keras).
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"

# The GPU id to use, usually either "0" or "1"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Do other imports now...
import keras
And that’s it!
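If you want to confirm that the notebook really does see only one GPU, here is a quick check; this is a sketch assuming the TensorFlow 1.x API that was current when this post was written:

# List the devices TensorFlow can see; with CUDA_VISIBLE_DEVICES="0"
# you should get the CPU plus a single /device:GPU:0 entry.
from tensorflow.python.client import device_lib

print(device_lib.list_local_devices())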
If you’re not sure what your GPU id is, run this command in the terminal on the machine with the GPUs on it:
nvidia-smi
And you should see something like this:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.90                 Driver Version: 384.90                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN X (Pascal)    Off  | 00000000:01:00.0 Off |                  N/A |
| 23%   39C    P8    18W / 250W |   2464MiB / 12188MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX TIT...  Off  | 00000000:07:00.0 Off |                  N/A |
| 22%   34C    P8    13W / 250W |     11MiB / 12207MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
Here we have two GPUs; the leftmost column shows their GPU ids, 0 and 1 (and, judging by the memory usage, GPU 0 is currently in use).
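For example, given the output above where GPU 0 is already busy, you could point your notebook at GPU 1 instead. It's the same snippet as before with a different id:

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # use the second GPU; a comma-separated list like "0,1" would expose both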
nvidia-smi doesn’t work on Windows 10, or maybe I am making a mistake?
Not sure. I’ve been working primarily on Ubuntu. Sorry! (I’ll update the post to reflect that.)