(float beta, float alpha, torch.FloatTensor mat1, torch.FloatTensor mat2)
   didn't match because some of the arguments have invalid types: (int, int, torch.FloatTensor, !torch.cuda.FloatTensor!)
(float beta, float alpha, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
   didn't match because some of the arguments have invalid types: (int, int, !torch.FloatTensor!, !torch.cuda.FloatTensor!)
It seems like the input is not on the GPU, but when I print the variable, its datatype is torch.cuda.FloatTensor.
If you implement your own nn.Module that contains parameters, you should declare them as nn.Parameter so that nn.Module.cuda() transfers them into GPU memory.
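For example (a minimal sketch in current PyTorch; the module name and shapes are made up for illustration):

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        # Registered as an nn.Parameter, so module.cuda() moves it to GPU memory.
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        # A plain tensor attribute would NOT be moved by .cuda():
        # self.weight = torch.randn(out_features, in_features)

    def forward(self, x):
        return x @ self.weight.t()

model = MyModule(4, 2).cuda()          # weight becomes a torch.cuda.FloatTensor
out = model(torch.randn(3, 4).cuda())  # input and weight live on the same device
```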
Hi, there is not enough information about Sequence(), but I think the hidden weight is a FloatTensor rather than a cuda.FloatTensor. That might be the cause of the problem.
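If it helps, a quick way to check which tensors actually live on the GPU (model and hidden are placeholders for your own objects):

```python
# Print the type of every registered parameter:
for name, p in model.named_parameters():
    print(name, p.type())  # torch.cuda.FloatTensor vs. torch.FloatTensor

# Check a single tensor, e.g. the hidden state:
print(hidden.is_cuda)      # True only if it was moved to the GPU
```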
The problem is that the Sequence() module includes some variables that are not loaded onto the GPU: the cell states and hidden states of the LSTM. I tried to declare these variables in the Module, but it seems a Variable cannot be registered as a parameter. So what I've done is call the .cuda() method on those variables in forward(). Is there a more efficient solution?
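One alternative to calling .cuda() explicitly (a sketch in current PyTorch, assuming a Sequence module roughly along these lines; the sizes are illustrative, not from the original post) is to allocate the states from the input tensor, so they are created on the same device as the input automatically:

```python
import torch
import torch.nn as nn

class Sequence(nn.Module):
    def __init__(self, input_size=1, hidden_size=51):
        super().__init__()
        self.hidden_size = hidden_size
        self.lstm = nn.LSTMCell(input_size, hidden_size)

    def forward(self, x):
        # new_zeros creates the states on x's device (CPU or GPU),
        # so no explicit .cuda() call is needed here.
        h = x.new_zeros(x.size(0), self.hidden_size)
        c = x.new_zeros(x.size(0), self.hidden_size)
        outputs = []
        for t in range(x.size(1)):
            h, c = self.lstm(x[:, t].unsqueeze(1), (h, c))
            outputs.append(h)
        return torch.stack(outputs, dim=1)
```

This way the module works unchanged on CPU and GPU, since the states always follow the device of the input.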