Why does torch.cat() turn a CUDA Variable into a non-CUDA Variable?

I am trying to run some code and I am getting an error in the following snippet.

combined_representation = torch.cat([self.encoder_hidden_states1[last_time_step_sent1][0],
                                     self.encoder_hidden_states2[last_time_step_sent2][0]], 1)
if self.config.cuda:
    combined_representation = combined_representation.cuda()

print(combined_representation.size()) # prints torch.Size([16, 600])
scores = self.linear(combined_representation)

If I don't convert combined_representation to a CUDA Variable, I get the following error.

RuntimeError: Assertion `THCTensor_(checkGPU)(state, 4, r_, t, m1, m2)' failed. at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.9_1487344852722/work/torch/lib/THC/generic/THCTensorMathBlas.cu:230

Please note that self.encoder_hidden_states1 and self.encoder_hidden_states2 are CUDA Variables, so why am I getting a non-CUDA Variable after the torch.cat() operation?

torch.cat shouldn't change a CUDA Variable into a non-CUDA Variable, so I suspect you are doing something slightly unexpected here.
If you can give a small snippet of ~10 lines that reproduces the issue, I can investigate.
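
In the meantime, a quick thing to check: torch.cat keeps its result on the same device as its inputs, so one of the pieces you are concatenating is probably still on the CPU. Here is a minimal sketch of the kind of check I mean (it assumes a GPU is available; the shapes are just placeholders standing in for your encoder states):

import torch
from torch.autograd import Variable

# Two CUDA Variables standing in for the encoder states (shapes are made up).
a = Variable(torch.randn(16, 300).cuda())
b = Variable(torch.randn(16, 300).cuda())

c = torch.cat([a, b], 1)
print(c.size())        # torch.Size([16, 600])
print(c.data.is_cuda)  # True -- cat did not move the result off the GPU

# The same check on each input in your code, e.g.:
# print(self.encoder_hidden_states1[last_time_step_sent1][0].data.is_cuda)
# print(self.encoder_hidden_states2[last_time_step_sent2][0].data.is_cuda)

If either of those prints False, that input is on the CPU, the result of the cat will be too, and that would explain the assertion failure once it reaches self.linear.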