Memory leak (CPU and CUDA) when assigning to torch.autograd.Variable

I've noticed a memory leak when doing repeated indexed assignments to the same torch.autograd.Variable. Assigning to plain Tensors this way doesn't leak (on either CPU or GPU), but assigning to a Variable does (on both CPU and GPU).

Here is a test case:

```python
import torch
import torch.autograd
import random

# Test memory leak when doing repeated modifications of a Variable.

batch = 1000
steps = 5000
all_data = 50000
nodes = 784
src_data = torch.randn(all_data, nodes)
dims = batch, nodes
x = torch.FloatTensor(*dims)
x_var = torch.autograd.Variable(torch.FloatTensor(*dims), requires_grad=False)
x_cuda = torch.cuda.FloatTensor(*dims)
x_var_cuda = torch.autograd.Variable(torch.cuda.FloatTensor(*dims), requires_grad=False)

print('Repeated assignment to Tensor (CPU)')
# This does not cause a leak.
for i in range(steps):
    for j in range(batch):
        k = random.randint(0, all_data - 1)
        x[j] = src_data[k]
del x

print('Repeated assignment to Variable (CPU)')
# This causes a leak.
for i in range(steps):
    for j in range(batch):
        k = random.randint(0, all_data - 1)
        x_var[j] = src_data[k]
del x_var

print('Repeated assignment to Tensor (GPU)')
# This does not cause any memory leak.
for i in range(steps):
    for j in range(batch):
        k = random.randint(0, all_data - 1)
        x_cuda[j] = src_data[k]
del x_cuda

print('Repeated assignment to Variable (GPU)')
# This causes a leak.
for i in range(steps):
    for j in range(batch):
        k = random.randint(0, all_data - 1)
        x_var_cuda[j] = src_data[k]
del x_var_cuda
```

I'm using torch version 0.2.0_3 on Ubuntu 16.04 with Python 3.5.

I did look through a few related posts, but they seem to be concerned with higher-level behavior. Since this is a lower-level issue, I thought it would be worth posting. Does anyone have insight into this? Is it already known?
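In case it helps narrow things down: if the growth comes from autograd recording each indexed assignment on the Variable, then writing into the Variable's underlying tensor via `.data` might avoid it, since the plain-Tensor cases above don't leak. This is only a sketch (reusing the setup from the test case above), not something I've profiled:

```python
# Possible workaround sketch: bypass autograd by assigning into the
# Variable's underlying tensor rather than the Variable itself.
for i in range(steps):
    for j in range(batch):
        k = random.randint(0, all_data - 1)
        # x_var[j] = src_data[k]      # assigning through the Variable leaks
        x_var.data[j] = src_data[k]   # assign into the raw tensor instead
```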

Thanks,

Henry
