Why can't I transform a torch.Tensor to a torch.cuda.Tensor?

I ran into a problem when running PyTorch code:
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #4 'mat1'.

I read the error as meaning the net needs a torch.cuda.Tensor but I gave it a torch.Tensor. However, I already copy the data to GPU memory before sending it into the net, and the error still occurs. Can anyone give a suggestion?
The code that produces the error is below:

net = net.cuda()
net = DataParallel(net)
for i, (data, target) in enumerate(dataloader):
    if torch.cuda.is_available():
        data = data.cuda()
        target = target.cuda()
    output = net(data)

I have already set the environment variable CUDA_VISIBLE_DEVICES, and my PyTorch version is 0.4.0.
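
As a quick diagnostic, one can print the types right before the forward pass (just a sketch using the variables from the snippet above):

    # Both lines should print torch.cuda.FloatTensor. If the input is on
    # the GPU but a parameter is not, the mismatch comes from inside the
    # model rather than from this training loop.
    print(data.type())                    # type of the input batch
    print(next(net.parameters()).type())  # type of the first model parameter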

Note that, according to the error, the expected type is actually torch.FloatTensor, not torch.cuda.FloatTensor.

Do you use manually created tensors in your network, for example in the .forward() method? If you use a bare tensor without wrapping it in a torch.nn.Parameter, it won't be registered as "part" of the network and won't be moved to the GPU when you call .cuda().
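
For example, a module like this (a minimal, made-up sketch) reproduces that kind of type-mismatch error, because the bare tensor is never registered and therefore stays on the CPU:

    import torch
    import torch.nn as nn

    class BrokenNet(nn.Module):
        def __init__(self):
            super(BrokenNet, self).__init__()
            self.fc = nn.Linear(5, 10)   # registered: moved by .cuda()
            self.scale = torch.ones(10)  # bare tensor: NOT registered

        def forward(self, x):
            # self.fc is on the GPU after net.cuda(), but self.scale is
            # still a CPU torch.FloatTensor, so this line raises the
            # type-mismatch RuntimeError.
            return self.fc(x) * self.scale

    net = BrokenNet().cuda()
    out = net(torch.randn(2, 5).cuda())  # raises the RuntimeError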

I think the error message means the input should be a torch.cuda.FloatTensor instead of a torch.FloatTensor.


Do you mean data.cuda() doesn't always move the data to the GPU? I don't understand; I am new to PyTorch.

Can you please provide the code of your network?

Actually, the error means that the matrix mat1 in your model is of type torch.FloatTensor (CPU), while the input you provide to the model is of type torch.cuda.FloatTensor (GPU).
The most likely scenario is that you have nn.Parameter or other modules such as nn.Conv2d defined in the __init__() method of your model, and additional weights or layers defined in the forward() method of your model.
In this case, the layers defined in the forward() method are not modules of the model, and they won't be moved to the GPU when you call cuda(); see this answer for the explanation.
As mentioned in the linked topic, you also need to explicitly add those parameters to your optimizer if you want them to be updated with gradient descent. A sketch of the correct pattern is below.
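
A minimal sketch of the fix (names made up): define every layer in __init__() and wrap bare tensors in nn.Parameter, so cuda() moves them and parameters() exposes them to the optimizer:

    import torch
    import torch.nn as nn

    class FixedNet(nn.Module):
        def __init__(self):
            super(FixedNet, self).__init__()
            self.fc = nn.Linear(5, 10)  # registered submodule
            # nn.Parameter registers the tensor: .cuda() moves it, and it
            # shows up in net.parameters() for the optimizer.
            self.scale = nn.Parameter(torch.ones(10))

        def forward(self, x):
            # Everything used here was defined in __init__(), so after
            # net.cuda() all weights are torch.cuda.FloatTensor, matching
            # the input.
            return self.fc(x) * self.scale

    net = FixedNet().cuda()
    optimizer = torch.optim.SGD(net.parameters(), lr=0.01)  # sees fc and scale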

Thank you, you are right. I found that the error was exactly what you described.