Argument mismatch when using GPU

I was following the Generating Names with a Character-Level RNN tutorial [1]. Training runs fine on the CPU, but when I switch to the GPU I get the error below.

All I added was a single line, rnn.cuda(), right after instantiating the model as rnn = RNN().

Traceback (most recent call last):
  File "char_rnn_generation_tutorial.py", line 323, in <module>
    output, loss = train(*randomTrainingSet())
  File "char_rnn_generation_tutorial.py", line 276, in train
    output, hidden = rnn(category_tensor, input_line_tensor[i], hidden)
  File "/home/paarulakan/environments/python/pytorch-py35/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "char_rnn_generation_tutorial.py", line 157, in forward
    hidden = self.i2h(input_combined)
  File "/home/paarulakan/environments/python/pytorch-py35/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/paarulakan/environments/python/pytorch-py35/lib/python3.5/site-packages/torch/nn/modules/linear.py", line 54, in forward
    return self._backend.Linear()(input, self.weight, self.bias)
  File "/home/paarulakan/environments/python/pytorch-py35/lib/python3.5/site-packages/torch/nn/_functions/linear.py", line 10, in forward
    output.addmm_(0, 1, input, weight.t())
TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.FloatTensor, torch.cuda.FloatTensor), but expected one of:
 * (torch.FloatTensor mat1, torch.FloatTensor mat2)
 * (torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
 * (float beta, torch.FloatTensor mat1, torch.FloatTensor mat2)
 * (float alpha, torch.FloatTensor mat1, torch.FloatTensor mat2)
 * (float beta, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
 * (float alpha, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
 * (float beta, float alpha, torch.FloatTensor mat1, torch.FloatTensor mat2)
 * (float beta, float alpha, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)

[1] http://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html#Creating-the-Network

This will give you a hint for solving your issue:

got (int, int, torch.FloatTensor, torch.cuda.FloatTensor), but expected one of:
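
In other words, the weights of your Linear layers are now torch.cuda.FloatTensor (because of rnn.cuda()), but the input you pass into the network is still a plain torch.FloatTensor on the CPU. Everything that goes into the model has to be moved to the GPU as well. A minimal sketch, reusing the names from the tutorial's training code:

# move the training tensors onto the GPU, not just the model
category_tensor, input_line_tensor, target_line_tensor = randomTrainingSet()
category_tensor = category_tensor.cuda()
input_line_tensor = input_line_tensor.cuda()
target_line_tensor = target_line_tensor.cuda()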

Hi smth,

I tried tweaking the dimensions and converting the data/target pairs into CUDA variables, but now I get a new kind of error.
Can you please point out where to look? I'm including the model here for completeness, along with a sketch of my modified train() after it.

import torch
import torch.nn as nn
from torch.autograd import Variable

# n_categories is defined earlier in the tutorial script (number of language categories)
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size

        self.i2h = nn.Linear(n_categories + input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(n_categories + input_size + hidden_size, output_size)
        self.o2o = nn.Linear(hidden_size + output_size, output_size)
        self.dropout = nn.Dropout(0.1)
        self.softmax = nn.LogSoftmax()

    def forward(self, category, input, hidden):
        # category, input and hidden are concatenated along the feature dimension
        input_combined = torch.cat((category, input, hidden), 1)
        hidden = self.i2h(input_combined)
        output = self.i2o(input_combined)
        output_combined = torch.cat((hidden, output), 1)
        output = self.o2o(output_combined)
        output = self.dropout(output)
        output = self.softmax(output)
        return output, hidden

    def initHidden(self):
        # initial hidden state (a CPU tensor wrapped in a Variable)
        return Variable(torch.zeros(1, self.hidden_size))
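
And this is roughly what my modified train() looks like now (just a sketch of my attempt; criterion, learning_rate and the overall structure are taken from the tutorial):

def train(category_tensor, input_line_tensor, target_line_tensor):
    # my change: move the training tensors onto the GPU before using them
    category_tensor = category_tensor.cuda()
    input_line_tensor = input_line_tensor.cuda()
    target_line_tensor = target_line_tensor.cuda()

    hidden = rnn.initHidden()   # fresh hidden state from the model above
    rnn.zero_grad()
    loss = 0

    for i in range(input_line_tensor.size()[0]):
        output, hidden = rnn(category_tensor, input_line_tensor[i], hidden)
        loss += criterion(output, target_line_tensor[i])

    loss.backward()
    for p in rnn.parameters():
        p.data.add_(-learning_rate, p.grad.data)   # plain SGD step, as in the tutorial

    return output, loss.data[0] / input_line_tensor.size()[0]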