Hi PyTorch Experts,
I have two questions regarding memory usage in PyTorch model training and evaluation. It would be very helpful to know more about this.
- Does the `.unsqueeze` operation increase memory? For example, `c = a.unsqueeze(dim=1) + b.unsqueeze(dim=0)` increases memory consumption during evaluation. Can `with torch.no_grad():` be helpful to reduce the memory footprint here? (See the measurement sketch after this list.)
- Does `a: torch.Tensor = torch.sigmoid(c)` create new memory for the variable `a` every time this line is called?
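For reference, here is a minimal sketch of how I could check both points. The tensor shapes are made up for illustration, and `mem_mib` is just a hypothetical helper for printing storage sizes:

```python
import torch

def mem_mib(t: torch.Tensor) -> float:
    # Rough size of a tensor's storage in MiB (hypothetical helper).
    return t.element_size() * t.nelement() / 2**20

# Made-up shapes, just for illustration.
a = torch.randn(512, 64)
b = torch.randn(256, 64)

# Question 1: unsqueeze only creates a view (no copy); the allocation comes
# from materialising the broadcasted sum of shape (512, 256, 64).
with torch.no_grad():                        # no intermediates kept for backward
    c = a.unsqueeze(dim=1) + b.unsqueeze(dim=0)
print(f"c: {tuple(c.shape)}, ~{mem_mib(c):.1f} MiB")

# Question 2: does sigmoid allocate a fresh output tensor on each call?
out1 = torch.sigmoid(c)
out2 = torch.sigmoid(c)
print(out1.data_ptr() == out2.data_ptr())    # False would mean separate buffers

# Possible ways to avoid repeated allocation (assuming c can be reused):
buf = torch.empty_like(c)
torch.sigmoid(c, out=buf)    # writes into a preallocated buffer
c.sigmoid_()                 # or apply sigmoid in place on c
```

I'm mainly trying to confirm whether these allocations are expected behaviour, and whether `no_grad` or preallocated buffers are the right way to keep the evaluation footprint down.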