I want to move all the tensors to the GPU at once

Recently I got a deep learning project from GitHub about a polar code decoder (a polar code is a channel code). When I tried to train the model, I found that all the tensors were being computed on the CPU, which is very slow, so I want to make use of my GPU. I tried `x = x.cuda()`, but I get a warning that all the tensors should be on the same device, which means there are some tensors I haven't paid attention to. Is there a command that can move all tensors to the GPU at once? I have already tried this:
```python
dd = locals()
d_tmp = [value.cuda() for key, value in dd.items() if type(value) == torch.Tensor]
```
Thank you very much!

You could try to change the default tensor type to a device tensor via e.g. torch.set_default_tensor_type (or torch.set_default_device in newer releases), but I would generally not recommend trying to push all tensors to the device; use the explicit approach instead.
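A minimal sketch of the default-device approach, assuming a CUDA device is available and a PyTorch version that provides torch.set_default_device (2.0+); older versions would use torch.set_default_tensor_type instead:

```python
import torch

# Make newly created tensors default to the GPU (PyTorch >= 2.0).
torch.set_default_device("cuda")

x = torch.randn(4, 8)   # created directly on the GPU
print(x.device)         # cuda:0

# On older versions, the rough equivalent is:
# torch.set_default_tensor_type(torch.cuda.FloatTensor)
```

Note that this only affects tensors created after the call; tensors loaded from checkpoints or created explicitly on the CPU still need to be moved manually.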
The error should show the line of code which is failing and which tensors are involved, so check the .device attribute of these tensors and make sure they are on the GPU.
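A minimal sketch of the explicit approach (the model here is a placeholder standing in for the polar decoder network; the names are just for illustration):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model; .to(device) moves all parameters and buffers at once.
model = nn.Linear(16, 8).to(device)

x = torch.randn(32, 16)   # e.g. an input batch created on the CPU
x = x.to(device)          # move the input to the same device as the model

out = model(x)
print(model.weight.device, x.device, out.device)  # all should report cuda:0
```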
