What's the meaning of this kind of error?

TypeError: torch.baddbmm received an invalid combination of arguments - got (int, torch.FloatTensor, int, torch.FloatTensor, torch.cuda.FloatTensor, out=torch.FloatTensor), but expected one of:
 * (torch.FloatTensor source, torch.FloatTensor batch1, torch.FloatTensor batch2, *, torch.FloatTensor out)
 * (float beta, torch.FloatTensor source, torch.FloatTensor batch1, torch.FloatTensor batch2, *, torch.FloatTensor out)
 * (torch.FloatTensor source, float alpha, torch.FloatTensor batch1, torch.FloatTensor batch2, *, torch.FloatTensor out)
 * (float beta, torch.FloatTensor source, float alpha, torch.FloatTensor batch1, torch.FloatTensor batch2, *, torch.FloatTensor out)
      didn't match because some of the arguments have invalid types: (int, torch.FloatTensor, int, torch.FloatTensor, torch.cuda.FloatTensor, out=torch.FloatTensor)

The docs don’t say whether the argument should be a torch.cuda.FloatTensor or a torch.FloatTensor.

How can I know about it?

And how can I test it?

I’m very confused.

You’re clearly trying to use one of the variants listed in the error message.

The error comes from the fact that one of your tensors is on the GPU, while the rest are on the CPU:

invalid combination of arguments - got (int, torch.FloatTensor, int, torch.FloatTensor, torch.cuda.FloatTensor, out=torch.FloatTensor)

You have to call cuda() on either all of your tensors or none of them. If you post the code that leads to this error, someone may be able to find the bug.
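To illustrate, here is a minimal sketch of the fix. The tensor names and shapes are made up for the example; the point is that every tensor passed to the batched matmul must live on the same device:

```python
import torch

# Hypothetical stand-ins for the attention weights and encoder outputs;
# the shapes are illustrative only.
attn_weights = torch.randn(4, 1, 10)       # starts on the CPU
encoder_outputs = torch.randn(4, 10, 256)  # starts on the CPU

# Mixing a torch.cuda.FloatTensor with torch.FloatTensors raises the
# TypeError above. Move everything to the same device before the call:
if torch.cuda.is_available():
    attn_weights = attn_weights.cuda()
    encoder_outputs = encoder_outputs.cuda()

# Now both operands are on the same device, so bmm succeeds.
context = torch.bmm(attn_weights, encoder_outputs)
print(context.shape)  # torch.Size([4, 1, 256])
```

Calling `.cpu()` on the one GPU tensor instead would work just as well; what matters is consistency, not which device you pick.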


But in fact, I’m using torch.bmm.
The parameter names are attn_weight and encoder_outputs.
However, the error message mentions neither of these two parameters.
I’m confused by this.

bmm uses baddbmm under the hood: https://github.com/pytorch/pytorch/blob/master/torch/lib/ATen/Declarations.cwrap#L3189-L3212. Moreover, variable names are just names: once you pass the tensors to another function, that information is lost, so trying to match your variable names against the error message is not really useful.
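Since the error message can’t tell you which of your tensors is on the wrong device, it can help to check before the call. Here is a small hypothetical helper (not part of PyTorch) that inspects each tensor’s device via the `is_cuda` attribute:

```python
import torch

def check_same_device(**tensors):
    """Hypothetical debugging helper: report each tensor's device so a
    mismatch is obvious before calling bmm/baddbmm."""
    devices = {name: ("cuda" if t.is_cuda else "cpu")
               for name, t in tensors.items()}
    if len(set(devices.values())) > 1:
        raise ValueError("Device mismatch: %s" % devices)
    return devices

# Illustrative shapes only.
a = torch.randn(2, 3, 4)
b = torch.randn(2, 4, 5)
check_same_device(attn_weights=a, encoder_outputs=b)  # both on the CPU, OK
out = torch.bmm(a, b)
print(out.shape)  # torch.Size([2, 3, 5])
```

Because the check uses keyword arguments, the report is phrased in terms of your own variable names, which the low-level TypeError cannot do.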

@jytug’s answer is correct.