Limiting GPU usage

Hi,

I am trying to train a model on the GPU. I can create simple tensors and run operations on them with CUDA. However, when I try to build a more complex model, it raises a “CUDNN_STATUS_NOT_INITIALIZED” exception:

```
    raise CuDNNError(status)
CuDNNError: 1: b'CUDNN_STATUS_NOT_INITIALIZED'
```

I did some research, and a similar problem was reported in TensorFlow discussions. A few people reported that it is a memory issue: if you limit TF to use only a fraction of GPU memory, it solves the problem. See the link: TF Discussion. I actually got a memory-related exception once, but I can’t reproduce it.

How do we do that in PyTorch?
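For reference, here is a minimal sketch of what I mean, assuming a PyTorch version that provides `torch.cuda.set_per_process_memory_fraction` (it exists in recent releases; the 0.5 fraction and device index 0 are just placeholders):

```python
import torch

if torch.cuda.is_available():
    # Cap this process at roughly 50% of device 0's total memory.
    # Allocations beyond the cap raise an out-of-memory error
    # instead of taking memory away from other processes.
    torch.cuda.set_per_process_memory_fraction(0.5, device=0)

    # Optionally release cached, unused blocks back to the driver.
    torch.cuda.empty_cache()
```

Note that unlike TF 1.x, which reserves its memory fraction up front, PyTorch's caching allocator only grabs GPU memory on demand, so a cap like this may not even be necessary.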

Thank you
