How and where does PyTorch decide how much memory should be allocated on the GPU?
Hi,
Only a Tensor's content is allocated on the GPU. So when you call .cuda()
on a Tensor, it allocates an array of t.nelement() * t.element_size()
bytes.
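A small sketch of that size computation (assuming PyTorch is installed): nelement() gives the number of elements and element_size() gives the bytes per element for the tensor's dtype.

```python
import torch

# A 1024 x 1024 tensor; the default dtype is float32 (4 bytes per element).
t = torch.zeros(1024, 1024)

# This is the byte count the allocator needs for the tensor's data,
# whether on CPU or when moved to the GPU with .cuda().
bytes_needed = t.nelement() * t.element_size()
print(bytes_needed)  # 1024 * 1024 * 4 = 4194304 bytes (4 MiB)
```

Note that this covers only the tensor data itself; the CUDA context and PyTorch's caching allocator reserve additional memory on top of it.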