MaxUnpool3d keeps increasing GPU memory

I recently updated PyTorch to 0.4.0 and found something weird.

When I forward a tensor through MaxUnpool3d, the GPU memory keeps increasing, like this:

import torch

x = torch.rand(1, 1, 144, 144, 144).to('cuda:0')
mp = torch.nn.MaxPool3d(2, 2, return_indices=True)
mup = torch.nn.MaxUnpool3d(2, 2)
x, i = mp(x)
# 579 MiB
a = mup(x, i)
# 579 MiB
a = mup(x, i)
# 591 MiB
a = mup(x, i)
# 601 MiB
...

Is there something I'm missing, or is this a bug?


Same here. I tried adding calls to torch.cuda.empty_cache() and gc.collect(), but it did not help.

import torch
from torch import nn
import gc

x = torch.rand(1, 1, 144, 144, 144).to('cuda:0')
mp = torch.nn.MaxPool3d(2, 2, return_indices=True)
mup = torch.nn.MaxUnpool3d(2, 2)
while True:
    import pdb; pdb.set_trace()  # pause each iteration to check nvidia-smi
    y, i = mp(x)
    a = mup(y, i)
    torch.cuda.empty_cache()
    gc.collect()

nvidia-smi output shows GPU usage increasing by about 5-10 MiB per iteration of the while loop.
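For what it's worth, nvidia-smi also counts memory held by PyTorch's caching allocator, so torch.cuda.memory_allocated() may give a cleaner signal, since it reports only the bytes occupied by live tensors. A minimal sketch of that check (assuming PyTorch 0.4.0, where this counter is available):

import torch

x = torch.rand(1, 1, 144, 144, 144, device='cuda:0')
mp = torch.nn.MaxPool3d(2, 2, return_indices=True)
mup = torch.nn.MaxUnpool3d(2, 2)
y, i = mp(x)

for step in range(10):
    a = mup(y, i)
    torch.cuda.empty_cache()
    # bytes occupied by live tensors, excluding the allocator's cache
    print(step, torch.cuda.memory_allocated() / 1024 ** 2, 'MiB')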

Similar issue here with MaxPool3d and MaxUnpool3d.


Thanks for the report. MaxUnpool3d indeed has a memory leak; it will be fixed once this PR is merged: https://github.com/pytorch/pytorch/pull/7270
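In the meantime, if you need a stopgap, one option (just an untested sketch, and assuming the leak is specific to the CUDA path) is to run the unpooling step on CPU and move the result back, trading the leak for transfer overhead:

import torch

mp = torch.nn.MaxPool3d(2, 2, return_indices=True)
mup = torch.nn.MaxUnpool3d(2, 2)

x = torch.rand(1, 1, 144, 144, 144, device='cuda:0')
y, i = mp(x)

# unpool on CPU to sidestep the GPU-side growth, then move back to the GPU
a = mup(y.cpu(), i.cpu()).to('cuda:0')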
