GPU memory leak in torch.qr

Hi,

Thanks for the awesome framework. Recently, I’ve been using PyTorch for linear algebra and found a weird memory leak in torch.qr.

I’m using torch.__version__ 0.2.0_4 with Anaconda Python 3.6 and CUDA 8.0, installed via the standard conda package.

import torch
B = torch.rand(4000, 1000).cuda()   # nvidia-smi output 298MiB
_ = torch.qr(B)   # 388MiB
for i in range(100): torch.qr(B)  # memory steadily increases, reaching 1941MiB

compared to

import torch
B = torch.rand(4000, 1000).cuda()   # nvidia-smi output 298MiB
_ = torch.svd(B)   # 399MiB
for i in range(100): torch.svd(B)  # fluctuates between 414MiB and 432MiB, returning to 414MiB
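As an aside, here is a rough sketch of tracking the growth from Python instead of polling nvidia-smi. It assumes a newer PyTorch release (torch.cuda.memory_allocated() did not exist in 0.2.0), and since nvidia-smi also counts the caching allocator's reserved pool and any buffers allocated outside it, the absolute numbers won't match the ones above.

import torch

def report(tag):
    # memory_allocated() counts bytes held by live tensors on the current device;
    # it excludes the caching allocator's reserved-but-free pool.
    allocated_mib = torch.cuda.memory_allocated() / 1024 ** 2
    print(f"{tag}: {allocated_mib:.1f} MiB allocated")

B = torch.rand(4000, 1000).cuda()
report("after upload")

for i in range(100):
    torch.qr(B)   # the reported leak would show up here as steady growth
report("after 100 torch.qr calls")

for i in range(100):
    torch.svd(B)  # for comparison, this stays roughly flat
report("after 100 torch.svd calls")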

Aside from that, is there a reason PyTorch uses MAGMA instead of cuBLAS for these routines?

Thanks!

Thanks a lot for reporting this. We fixed the leak in master; the fix will be part of the next release.