Why can't I transform a torch.Tensor to a torch.cuda.Tensor?

Actually, the error means that the matrix mat1 in your model is of type torch.FloatTensor (CPU), while the input you provide to the model is of type torch.cuda.FloatTensor (GPU).
The most likely scenario is that you have nn.Parameter objects or modules such as nn.Conv2d defined in the __init__() method of your model, and additional weights or layers created in the forward() method.
In this case, the layers created in the forward() method are not registered as submodules of the model, so they won't be moved to the GPU when you call cuda(); see this answer for the explanation.
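Here is a minimal sketch of the issue, using hypothetical BrokenModel and FixedModel classes (the layer shapes are just placeholders): the nn.Linear created inside forward() stays on the CPU even after .cuda(), which produces exactly this kind of device/type mismatch on mat1.

```python
import torch
import torch.nn as nn

# Anti-pattern (hypothetical): the layer created in forward() is not
# registered as a submodule, so model.cuda() never moves its weights.
class BrokenModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3)       # registered -> moved by .cuda()

    def forward(self, x):
        x = self.conv(x)
        fc = nn.Linear(16 * 30 * 30, 10)      # created here -> stays on the CPU
        return fc(x.flatten(1))               # CPU weight vs. CUDA input -> error

# Fix (hypothetical): define every layer in __init__() so .cuda() sees it.
class FixedModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3)
        self.fc = nn.Linear(16 * 30 * 30, 10)

    def forward(self, x):
        x = self.conv(x)
        return self.fc(x.flatten(1))

model = FixedModel().cuda()
out = model(torch.randn(1, 3, 32, 32, device='cuda'))  # works: all weights on the GPU
```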
As mentioned in the linked topic, you also need to explicitly add these parameters to your optimizer if you want them to be updated with gradient descent.
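For example, a minimal sketch assuming a hypothetical extra_weight parameter created outside the model: since it is not returned by model.parameters(), it has to be passed to the optimizer explicitly.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2).cuda()

# Hypothetical extra parameter created outside the model: it is not part of
# model.parameters(), so the optimizer has to be told about it explicitly.
extra_weight = nn.Parameter(torch.randn(2, device='cuda'))

optimizer = torch.optim.SGD(
    list(model.parameters()) + [extra_weight],  # add the extra parameter here
    lr=0.1,
)
```

Alternatively, assigning it as an nn.Parameter attribute of the model in __init__() registers it, so model.parameters() (and .cuda()) will pick it up automatically.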