Issue
I want to pin my process to a specific GPU, and I set it up as follows:
import tensorflow as tf

with tf.device('/gpu:0'):
    a = tf.constant(3.0)

with tf.Session() as sess:
    while True:
        print(sess.run(a))
However, it still allocates memory on both of my GPUs:
| 0 7479 C python 5437MiB
| 1 7479 C python 5437MiB
Solution
I believe you need to set CUDA_VISIBLE_DEVICES=1 (or whichever GPU you want to use). If you make only one GPU visible, you will refer to it as /gpu:0 in TensorFlow regardless of what you set the environment variable to.
More info on that environment variable: https://devblogs.nvidia.com/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/
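A minimal sketch of the approach, assuming the environment variable is set from within the script itself (it must happen before TensorFlow initializes CUDA, i.e. before the first TensorFlow import or GPU operation in the process):

```python
import os

# Expose only physical GPU 1 to this process. This must run before
# TensorFlow is imported, otherwise CUDA has already seen both GPUs.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# import tensorflow as tf      # imported afterwards; it now sees one GPU
# with tf.device('/gpu:0'):    # the single visible GPU is always /gpu:0
#     a = tf.constant(3.0)
```

Equivalently, you can set it on the command line without touching the code, e.g. CUDA_VISIBLE_DEVICES=1 python your_script.py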
Answered By - Russell