Hi all,
I am trying to figure out why torch allocates so much memory for a tensor that, at least to me, doesn't look like it should need this much:
weights = torch.tensor([0.3242]).cuda()
This tensor allocates more than 737 MB on my GPU, and I have absolutely no idea why that would happen.
I am using torch 1.1, but I also tried torch 1.3, which results in more than 790 MB being allocated.
Neither
weights = weights.cpu()
torch.cuda.empty_cache()
nor
weights = weights.detach().cpu()
torch.cuda.empty_cache()
nor
del weights
torch.cuda.empty_cache()
has any effect: the memory stays allocated.
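In case it helps, here is roughly how I tried to separate the tensor's own allocation from whatever else torch holds on the device (just a sketch; as far as I understand, torch.cuda.memory_allocated() only counts tensor memory, so the difference to the total would be the CUDA context and caches, but I may be wrong about that):

```python
import torch

if torch.cuda.is_available():
    # Tensor memory tracked by the caching allocator, before and after.
    before = torch.cuda.memory_allocated()
    weights = torch.tensor([0.3242]).cuda()
    after = torch.cuda.memory_allocated()
    # A single float32 gets rounded up to the allocator's block size,
    # so the tensor itself should only account for a few hundred bytes.
    print(f"tensor allocation: {after - before} bytes")
else:
    print("no CUDA device available")
```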
Does anyone know what to do in this case?
Thanks a lot.
Christian