Hello! I have this line of code: error_threshold = rmse_loss(torch.log(model(factors)), product)
where model is trained to predict exp(product) given factors. model, factors and product are all on CUDA. However, when I run the code I get this error: RuntimeError: Expected object of type torch.cuda.FloatTensor but found type torch.FloatTensor for argument #3 'other'
Can someone tell me what this means? What is argument #3? The definition of rmse_loss is this:
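(Editor's note: the original rmse_loss definition did not survive in this copy of the thread. A hypothetical reconstruction, based on the denom line quoted later in the thread, might look like this; treat the exact form as an assumption:)

```python
import torch

# Hypothetical reconstruction -- the original definition was not shown.
# An RMSE loss normalized by the RMS of the target, consistent with the
# `denom = torch.sqrt(torch.mean(targ**2))` line discussed below:
def rmse_loss(pred, targ):
    denom = torch.sqrt(torch.mean(targ ** 2))          # RMS of the target
    return torch.sqrt(torch.mean((pred - targ) ** 2)) / denom
```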
Based on the error message, I don't think the error is coming from rmse_loss itself; something in the forward function of model is likely causing it. The reason is that the error message mentions argument #3, but this call has only two arguments.
Also, you can check the device attribute of both factors and product to make sure that they are in fact on the CUDA device.
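A minimal sketch of that check (run on CPU here, with stand-in tensors; the same calls work unchanged on CUDA):

```python
import torch

factors = torch.randn(8, 4)   # stand-ins for the real tensors
product = torch.randn(8)

# Every tensor carries a .device attribute:
print("factors:", factors.device)   # would show cuda:0 in the real setup
print("product:", product.device)
# For the model, inspect one of its parameters:
# print(next(model.parameters()).device)
```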
I think it’s this line that causes the issue. Can you create a tensor from len(denom) and move it to CUDA? Something like this: ... / torch.tensor(len(denom)).to(device)
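For what it's worth, dividing by a plain Python number never causes a device mismatch; an explicit tensor divisor is where the device matters. A sketch of the suggestion (falls back to CPU when no GPU is available):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(5, device=device)
denom = torch.randn(5, device=device)

# A plain Python int broadcasts with no device issue:
a = x / len(denom)
# An explicit tensor divisor must be created on (or moved to) the same device:
b = x / torch.tensor(float(len(denom)), device=device)
assert torch.allclose(a, b)
```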
I actually just noticed that if I do this instead: error_threshold = rmse_loss(1/model(factors), product)
so replacing torch.log with 1/, the code works just fine.
It’s weird that it worked. I don’t think the problem is from torch.log, because if the input to torch.log is on CUDA, the output will also be on CUDA:
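A quick check of that claim (on CPU here, but the same property holds on CUDA):

```python
import torch

x = torch.rand(4) + 0.1    # strictly positive, so log is finite
y = torch.log(x)
# torch.log, like other elementwise ops, returns a tensor on the
# same device as its input:
assert y.device == x.device
```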
I changed that to this: denom = torch.sqrt(torch.mean(targ**2))
but I am still getting the error. It seems like any transformation that doesn’t use torch. works fine (for example squaring), but when I use a torch. function (I tried log, exp, cos and sin) I get the error.
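That pattern is surprising, because all of these functions preserve the input's device, just like squaring does. A small sketch verifying this on CPU (the same holds on CUDA):

```python
import torch

x = torch.rand(4) + 0.1
for fn in (torch.log, torch.exp, torch.cos, torch.sin):
    assert fn(x).device == x.device   # none of these change the device
assert (x ** 2).device == x.device    # neither does squaring
```

So if the error only shows up with these calls, the mismatched tensor most likely comes from somewhere else in the computation.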
I see. It doesn’t make sense, though. If we create two random tensors with the same shapes as the model output and the target, and pass them to rmse_loss, there is no problem.
I think you can debug it line by line: use print statements to display the device of each tensor after each line of computation. Then maybe we can understand where this problem is happening.
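One way to do that is to instrument the loss itself. This is a hypothetical debug version (the loss body is an assumption, since the original definition wasn't shown) that prints each intermediate tensor's device so a stray CPU tensor stands out:

```python
import torch

# Hypothetical instrumented loss -- print each intermediate tensor's
# device to find where a CPU tensor sneaks into the computation:
def rmse_loss_debug(pred, targ):
    diff = pred - targ
    print("diff:", diff.device)
    mse = torch.mean(diff ** 2)
    print("mse:", mse.device)
    denom = torch.sqrt(torch.mean(targ ** 2))
    print("denom:", denom.device)
    return torch.sqrt(mse) / denom

out = rmse_loss_debug(torch.rand(4) + 1.0, torch.rand(4) + 1.0)
```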