Issue description
It seems that PyTorch has a memory leak when slicing tensors that require gradients. In the code below I sample some data, multiply the sample by a single parameter, and then keep only the largest values (as in k-beam search). Autograd does not drop the references to the discarded values, and the loop eventually runs out of memory.
Code example
This code runs out of memory on a K80 in GCP
import torch as tr
import os

device = tr.device("cuda:0" if tr.cuda.is_available() else "cpu")
print('device:', device)

# parameter
pars = tr.tensor([0.5], requires_grad=True, device=device)

# data
data = tr.arange(1e9, device=device)
print(data)

# draw a random sample of sample_size elements from x
def sample(x, sample_size=int(1e5)):
    sample_idx = tr.randint(high=x.size()[0], size=(sample_size,), dtype=tr.long)
    return x[sample_idx]

# keep only the k largest values
def leaveTopK(x, k):
    _, idx = tr.sort(x, descending=True)
    x = x[idx]
    return x[:k]

# memory leak loop: out is trimmed back to k elements every iteration
out = tr.tensor([], device=device)
for i in range(int(1e4)):
    out = tr.cat([out, pars * sample(data)])
    out = leaveTopK(out, int(1e5))
    if i % 1e3 == 0:
        os.system('nvidia-smi -q --display=MEMORY')
The loop should not run out of memory, since out is trimmed back to at most 1e5 elements on every iteration, but autograd appears to keep references to the sliced-out parts of out (i.e. the computation graphs of all previous iterations). With requires_grad=False the same loop does not run out of memory.
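A workaround that avoids the growth (assuming the gradients can be consumed inside the loop, which may not fit every use case) is to call backward() on some objective each iteration and rebuild out from a detached tensor, so the graph cannot chain across iterations. A rough sketch, where the sum() objective and the gradient handling are placeholders only for illustration:

# sketch of a possible workaround: cut the autograd history every iteration
out = tr.tensor([], device=device)
for i in range(int(1e4)):
    out = tr.cat([out, pars * sample(data)])
    out = leaveTopK(out, int(1e5))
    loss = out.sum()       # placeholder objective, just for illustration
    loss.backward()        # consumes (and frees) this iteration's graph
    pars.grad.zero_()      # a real use would step an optimizer instead
    out = out.detach()     # drop the history so the next cat starts a fresh graph

This keeps the live graph bounded, but it only sidesteps the issue; the original snippet arguably should not grow without bound either, since out itself never exceeds 1e5 elements.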
System Info
Collecting environment information…
PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Debian GNU/Linux 9.5 (stretch)
GCC version: (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.1.85
GPU models and configuration: GPU 0: Tesla K80
Nvidia driver version: 390.46
cuDNN version: Probably one of the following:
/usr/local/cuda-9.1/lib64/libcudnn.so.7.1.3
/usr/local/cuda-9.1/lib64/libcudnn_static.a
Versions of relevant libraries:
[pip3] numpy (1.12.1)
[pip3] torch (0.4.0)
[pip3] torchvision (0.2.1)
[conda] pytorch 0.4.1 py37_cuda9.0.176_cudnn7.1.2_1 pytorch
I installed PyTorch through Anaconda.