Concatenating Variables

Hi. I’d like to concatenate two Variables, which are each an output of a nn module.
Say I have Variables v1 and v2.
I can use torch.cat([v1, v2]) in the Python interactive mode, but when I put the same code in a script and run it, it raises this error:

TypeError: cat received an invalid combination of arguments - got (tuple, int), but expected one of:

  • (sequence[torch.cuda.FloatTensor] tensors)
  • (sequence[torch.cuda.FloatTensor] tensors, int dim)
    didn’t match because some of the arguments have invalid types: (tuple, int)

How should I concatenate two Variables?
(I’d like to concat and feed it to another fully connected layer)
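For anyone reading this later: since PyTorch 0.4, Variables and Tensors are merged, so plain tensors track gradients directly. A minimal sketch of concatenating two outputs and feeding the result to a fully connected layer (the batch size, feature sizes, and layer dimensions here are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Two hypothetical module outputs with matching batch dimension.
v1 = torch.randn(4, 8)   # batch of 4, 8 features
v2 = torch.randn(4, 8)   # batch of 4, 8 features

# Concatenate along the feature dimension (dim=1) -> shape (4, 16).
combined = torch.cat([v1, v2], dim=1)

# Feed the concatenated features to a fully connected layer.
fc = nn.Linear(16, 3)
out = fc(combined)
print(out.shape)  # torch.Size([4, 3])
```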


I guess v1 and v2 have different types (e.g. torch.cuda.FloatTensor and torch.FloatTensor). That’s the problem: they need to have the same type.
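As a hedged illustration of the fix (using a CPU float/double mismatch in place of the CPU/CUDA one, so it runs without a GPU): cast both inputs to a common type before calling torch.cat.

```python
import torch

a = torch.randn(2, 3)            # torch.FloatTensor
b = torch.randn(2, 3).double()   # torch.DoubleTensor

# Cast to a common dtype before concatenating.
c = torch.cat([a, b.float()], dim=0)
print(c.dtype)   # torch.float32
print(c.shape)   # torch.Size([4, 3])
```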


Yeap. You were right.
Thanks for answering my dumb question :wink:

Helped me too.

It seems like the error message is misleading. My problem was that I tried to cat a FloatTensor and a LongTensor, but the error complained about a tuple type.

I have got the same problem. However, printing type(v1) and type(v2) results in <class 'torch.autograd.variable.Variable'> for both cases. Is it not possible to concatenate Variables?

@McLawrence

Don’t use type(v1); use v1.type() instead.
The former calls Python’s built-in function and only tells you the class, i.e. torch.autograd.variable.Variable, as you posted. The answers here refer to the data type of the Variable, which is what the latter call returns. Variables can have the same data types as Tensors in PyTorch, and if you try to concatenate (or apply most other operations to) two Variables of different types, you get the error above.
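On current PyTorch, where Variable has been merged into Tensor, the same distinction looks like this (the old Variable API would print the Variable class for type(v) instead):

```python
import torch

v = torch.randn(2)

print(type(v))            # <class 'torch.Tensor'>  -- only the Python class
print(v.type())           # torch.FloatTensor       -- the actual data type
print(v.double().type())  # torch.DoubleTensor      -- changes with the dtype
```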


v1.type() will not work on Variables. One has to use v1.data.type() there.
Also, it is recommended to use type(v1).
However, my problem was that one tensor was on the GPU while the other was on the CPU.
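A sketch of the device-mismatch fix on the modern API (guarded by torch.cuda.is_available() so it degrades to pure CPU when no GPU is present): move both tensors to the same device before concatenating.

```python
import torch

a = torch.randn(2, 3)  # lives on the CPU

device = "cuda" if torch.cuda.is_available() else "cpu"
b = torch.randn(2, 3).to(device)  # lives on the GPU when one is available

# Move both tensors to the same device before torch.cat.
c = torch.cat([a.to(device), b], dim=0)
print(c.shape)  # torch.Size([4, 3])
```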

In my case, both of the tensors being concatenated were of type torch.DoubleTensor.
Converting them to FloatTensor with float() worked.
Something about this error message feels off to me.

I also encountered this problem, but both variables are of type cuda.FloatTensor.
Does anyone know why? Thanks!