Should I add .cuda() to all the modules?

If I want to use the GPU for computation, I need to call model = model.cuda().
But it seems that .cuda() can be called on many things, such as Variables and losses.
I know that calling .cuda() means the computation runs on the GPU, but do I have to call .cuda() on everything?
And which objects support .cuda()? The documentation doesn't seem explicit about this.

Writing model.cuda() is enough for the model itself, while for the input you need variable = variable.cuda().

model.cuda() converts all parameters in the model (such as weights) from Parameter(torch.Tensor) to Parameter(torch.cuda.Tensor). So if you call model.cuda(), you also need to pass in a Variable that has been moved with .cuda(). The model's output is then already a CUDA tensor, so you don't need to call .cuda() on the loss.
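For example, here is a minimal sketch of that pattern (the Linear model, shapes, and variable names are just placeholders, not from your code):

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

# Hypothetical small model and loss for illustration
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()

model.cuda()  # moves all Parameters (weights, biases) to the GPU

# Inputs and targets must be moved to the GPU explicitly
inputs = Variable(torch.randn(4, 10).cuda())
targets = Variable(torch.LongTensor([0, 1, 0, 1]).cuda())

# Since the model and inputs live on the GPU, the output is already
# a CUDA tensor, so the loss needs no extra .cuda()
output = model(inputs)
loss = criterion(output, targets)
```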


Thank you very much!