Failed to compute dot product of torch.cuda.FloatTensor

I used a GPU to compute the dot product of the output of a neural network and a torch.cuda.FloatTensor (both of them are stored on the GPU), but got an error saying: TypeError: dot received an invalid combination of arguments - got (torch.cuda.FloatTensor) but expected (torch.FloatTensor tensor).

The code is like:

p = torch.exp(

Here vector is a torch.FloatTensor and ht is the output of the neural network.

I’ve struggled with this for days but still have no idea. Thanks in advance for any possible solution!

It is saying that either vector or ht is not a torch.cuda.FloatTensor but a plain torch.FloatTensor. Find which one it is and call .cuda() on it.
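As a minimal sketch of that fix (the names vector and ht mirror the question; the shapes are illustrative assumptions), the point is that both operands of torch.dot must live on the same device:

```python
import torch

# Hypothetical stand-ins for the tensors in the question:
# 'ht' plays the role of the network output, 'vector' the plain FloatTensor.
vector = torch.ones(4)  # CPU FloatTensor
ht = torch.ones(4)      # pretend this came out of the model

if torch.cuda.is_available():
    ht = ht.cuda()       # network output already on the GPU in the question
    vector = vector.cuda()  # the fix: move the CPU tensor to the same device

# With both tensors on one device, the dot product works.
p = torch.exp(torch.dot(vector, ht))
print(float(p))
```

Mixing a CPU tensor with a CUDA tensor in torch.dot raises the type error from the question; moving the stray tensor with .cuda() (or tensor.to(other.device)) resolves it.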


Yes that’s it. Thanks!

Hello, I’m having a similar issue. I’m trying to run on the GPU with the following code:

but I’m obtaining the error:

RuntimeError: _th_dot is not implemented for type torch.cuda.LongTensor

Does anyone know what I should modify to make it work?


Try to cast both inputs to float32:

embedding_output = embedding_output.float()
embedding_target = embedding_target.float()
cos =
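A short sketch of that cast (the embedding values and the cosine-similarity call are illustrative assumptions, since the original snippet is truncated):

```python
import torch
import torch.nn.functional as F

# Hypothetical integer embeddings; dot-product-style ops were not implemented
# for LongTensor in older PyTorch versions, hence the _th_dot error.
embedding_output = torch.tensor([1, 2, 3]).float()   # cast to float32
embedding_target = torch.tensor([1, 2, 3]).float()   # cast to float32

# Cosine similarity of two identical vectors is 1.0.
cos = F.cosine_similarity(embedding_output, embedding_target, dim=0)
print(float(cos))  # -> 1.0
```

The same casts work on the GPU: call .float() on the CUDA tensors before the dot product or cosine similarity.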

Thank you! It worked!