Hi,
When I train the network, GPU memory grows continually until it runs out of memory. Is this a memory leak?
The code is
https://paste.ubuntu.com/26269199/
This is a simple fully connected network for a regression problem. I suspect the key to the problem is that I call backward() once to get the gradient of the output, and then call backward() again to optimize the weights.
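To make that pattern concrete, here is a rough, simplified sketch of what I mean; the names, shapes, and loss are made up for illustration and are not the exact code from the paste:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real model and data (not the code in the paste).
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = nn.Sequential(nn.Linear(10, 64), nn.Tanh(), nn.Linear(64, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 10, device=device, requires_grad=True)
target_grad = torch.randn(32, 10, device=device)

for step in range(1000):
    optimizer.zero_grad()
    y = model(x)
    # First backward pass: gradient of the output w.r.t. the input,
    # built with create_graph=True so it can itself appear in the loss.
    dydx, = torch.autograd.grad(y.sum(), x, create_graph=True)
    loss = ((dydx - target_grad) ** 2).mean()
    # Second backward pass: gradients w.r.t. the weights.
    loss.backward()
    optimizer.step()
```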
I have tried several suggestions (applied roughly as in the snippet after this list):
- deleting intermediate variables
- adding torch.backends.cudnn.enabled = False
- adding gc.collect()
… but the problem is still unresolved.
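Concretely, I applied those suggestions at the end of every training iteration, roughly like this (again a simplified sketch reusing the placeholder names from the sketch above, not the exact code from the paste):

```python
import gc
import torch
import torch.nn as nn

torch.backends.cudnn.enabled = False  # tried: disable cuDNN (set once, before training)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = nn.Linear(10, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
x = torch.randn(32, 10, device=device, requires_grad=True)

for step in range(100):
    optimizer.zero_grad()
    y = model(x)
    dydx, = torch.autograd.grad(y.sum(), x, create_graph=True)
    loss = (dydx ** 2).mean()
    loss.backward()
    optimizer.step()

    # tried: explicitly delete the per-iteration tensors
    del y, dydx, loss
    # tried: force Python garbage collection
    gc.collect()
```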
Any hints or suggestions would be appreciated. Thanks!
—————
Versions
PyTorch 0.4.0a0+1b608ee
Python 3.5.3
Ubuntu 16.04