requires_grad=False makes Volatile GPU-Util go up. Why?

I am building a language model with an LSTM, and I noticed that freezing the embedding weights makes Volatile GPU-Util go up (30% -> 80%). I don’t understand why this happens. Could you give me some advice on this issue?

# Load pretrained embedding weights, copy them into the model, and freeze them
weight_vec = torch.load('./data/pretrained_embedding.pt')
model.emb.weight.data.copy_(weight_vec)
model.emb.weight.requires_grad = False
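
As an aside, a self-contained sketch of the same freezing pattern (with a randomly initialized tensor standing in for the loaded `pretrained_embedding.pt`, and using `torch.no_grad()` instead of the older `.data` idiom) might look like:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(5, 3)
pretrained = torch.randn(5, 3)  # stand-in for torch.load('./data/pretrained_embedding.pt')

# Copy the pretrained weights in without recording the op in the autograd graph
with torch.no_grad():
    emb.weight.copy_(pretrained)

# Freeze the embedding so no gradients are computed or stored for it
emb.weight.requires_grad = False
```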

Hi,

This might be because autograd is not running for that parameter, so the CPU has less work to do per step and can send work to the GPU faster?

I see. I asked just out of curiosity :slight_smile:
So the CPU still does autograd work even when the model runs on the GPU (CUDA)? I didn’t know that. Many thanks.
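
To make the effect concrete, here is a minimal sketch (model names are made up for illustration) showing that when `requires_grad` is `False`, the backward pass simply skips that parameter, which is the work being saved:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(10, 4)
emb.weight.requires_grad = False  # frozen, as in the question
lin = nn.Linear(4, 2)

# Forward and backward through both layers
out = lin(emb(torch.tensor([1, 2, 3]))).sum()
out.backward()

print(emb.weight.grad)          # None: autograd skipped the frozen embedding
print(lin.weight.grad is None)  # False: the linear layer still got gradients
```

The Python-side autograd bookkeeping (building the graph, launching backward kernels) happens on the CPU even when the tensors themselves live on the GPU, so skipping a large parameter can noticeably reduce CPU overhead per iteration.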