Does freezing part of the network save GPU memory?

feat = Net1(input)
output = Net2(feat)

If I freeze Net1 by calling Net1.eval(), does this save GPU memory?

Gradients don't need to be computed for frozen layers, so they should use less memory. But I don't see much change in the allocated memory between frozen layers and trainable layers. Is the memory difference just very small?
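One way to compare is to measure peak allocated memory for a single forward/backward pass with and without freezing. A minimal sketch, assuming a CUDA device and toy nn.Linear stacks standing in for Net1/Net2 (the sizes and layer counts here are arbitrary):

import torch
import torch.nn as nn

def peak_mem_mib(freeze_net1: bool) -> float:
    """Run one forward/backward pass and report peak GPU memory in MiB."""
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    net1 = nn.Sequential(*[nn.Linear(2048, 2048) for _ in range(4)]).cuda()
    net2 = nn.Linear(2048, 10).cuda()
    if freeze_net1:
        for p in net1.parameters():
            p.requires_grad_(False)  # no grads computed or stored for net1
    x = torch.randn(256, 2048, device="cuda")
    net2(net1(x)).sum().backward()
    return torch.cuda.max_memory_allocated() / 1024**2

print(f"trainable Net1: {peak_mem_mib(False):.1f} MiB")
print(f"frozen Net1:    {peak_mem_mib(True):.1f} MiB")

Note that eval() alone would show no difference here, since it does not stop gradient computation.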

Try:

with torch.no_grad():
    feat = Net1(input)
output = Net2(feat)

This should work.
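Under torch.no_grad(), the intermediate activations of Net1 are not stored for the backward pass, which is usually a much larger saving than the .grad buffers alone. Also note that feat comes out detached, so no gradients will flow into Net1, which is exactly what you want for a frozen feature extractor.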

See this topic for an explanation.

Thanks @dazzle-me, but my question was: do frozen layers save GPU memory?

If you freeze modules by setting the requires_grad attribute of their parameters to False, then you would save the memory the gradients would otherwise occupy (assuming the .grad attributes are set to None). Note that calling eval() does not freeze anything; it only changes the behavior of layers such as dropout and batchnorm.
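A minimal sketch of this, assuming a CUDA device and hypothetical nn.Linear modules in place of Net1/Net2:

import torch
import torch.nn as nn

net1 = nn.Linear(1024, 1024).cuda()  # stand-in for Net1
net2 = nn.Linear(1024, 1024).cuda()  # stand-in for Net2

# Freeze net1: autograd will not compute or store gradients for its parameters.
for p in net1.parameters():
    p.requires_grad_(False)
    p.grad = None  # free any .grad buffers left over from earlier steps

x = torch.randn(64, 1024, device="cuda")
net2(net1(x)).sum().backward()

print(all(p.grad is None for p in net1.parameters()))      # True: no grad memory for net1
print(all(p.grad is not None for p in net2.parameters()))  # True: net2 still trains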