Porting the Seq2Seq tutorial from spro/practical-pytorch from CPU to GPU

Following the tutorial at https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation.ipynb, I’m having some trouble porting the CPU Seq2Seq code to the GPU.

I’ve modified the code a little and tried putting .cuda() in various places, but it still throws a TypeError.

Details on https://stackoverflow.com/questions/46704352/porting-pytorch-code-from-cpu-to-gpu

Any help on how to resolve this? And when and how should CUDA types/variables be used?

decoder_output, decoder_context, decoder_hidden, decoder_attn = decoder_test(word_inputs[0], decoder_context, decoder_hidden, encoder_outputs)

Are all the arguments you’re sending to decoder_test on the GPU? (You can check by printing them out in an interpreter, or with some_tensor.is_cuda().)

Unfortunately there isn’t a global flag you can set to make everything go from CPU to GPU automatically.
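
A minimal sketch of the usual pattern (the model and tensor names here are just stand-ins, not from your code): every module and every input tensor has to be moved explicitly.

import torch
import torch.nn as nn
from torch.autograd import Variable

USE_CUDA = torch.cuda.is_available()

model = nn.Linear(10, 10)              # stand-in for the encoder/decoder modules
inputs = Variable(torch.randn(1, 10))  # stand-in for word_inputs etc.

if USE_CUDA:
    model = model.cuda()    # Module.cuda() moves all parameters to the GPU
    inputs = inputs.cuda()  # Variable.cuda() returns a copy on the GPU

output = model(inputs)  # mixing CPU and GPU arguments here raises a TypeError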


Thanks @richard for the .is_cuda() tip!

Strangely, when I try is_cuda() on a torch.cuda.LongTensor Variable, I get a TypeError:

>>> word_inputs[0]
Variable containing:
 1
[torch.cuda.LongTensor of size 1 (GPU 0)]

>>> type(word_inputs[0])
torch.autograd.variable.Variable

>>> word_inputs[0].is_cuda()
TypeError      Traceback (most recent call last)
---> word_inputs[0].is_cuda()
TypeError: 'bool' object is not callable

I get the same TypeError for the other inputs too:

>>> encoder_outputs.is_cuda()
TypeError      Traceback (most recent call last)
---> encoder_outputs.is_cuda()
TypeError: 'bool' object is not callable

>>> decoder_hidden.is_cuda()
TypeError      Traceback (most recent call last)
---> decoder_hidden.is_cuda()
TypeError: 'bool' object is not callable


>>> decoder_context.is_cuda()
TypeError      Traceback (most recent call last)
---> decoder_context.is_cuda()
TypeError: 'bool' object is not callable

Any clues?

The network works now after moving the inputs to decoder_test onto the GPU with .cuda():

encoder_test = EncoderRNN(10, 10, 2)                 # input_size, hidden_size, n_layers
decoder_test = AttnDecoderRNN('general', 10, 10, 2)  # attn_model, hidden_size, output_size, n_layers


if USE_CUDA:
    encoder_hidden = encoder_test.init_hidden().cuda()
    word_inputs = Variable(torch.LongTensor([1, 2, 3]).cuda())
else:
    encoder_hidden = encoder_test.init_hidden()
    word_inputs = Variable(torch.LongTensor([1, 2, 3]))
encoder_outputs, encoder_hidden = encoder_test(word_inputs, encoder_hidden)
decoder_attns = torch.zeros(1, 3, 3)
decoder_hidden = encoder_hidden

if USE_CUDA:
    decoder_context = Variable(torch.zeros(1, decoder_test.hidden_size)).cuda()
else:
    decoder_context = Variable(torch.zeros(1, decoder_test.hidden_size))

decoder_output, decoder_context, decoder_hidden, decoder_attn = decoder_test(word_inputs[0], decoder_context, decoder_hidden, encoder_outputs)
print(decoder_output)
print(decoder_hidden)
print(decoder_attn)
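
For completeness: the modules themselves also need their parameters on the GPU. That step isn’t shown in the snippet above, so presumably something like this ran earlier:

if USE_CUDA:
    encoder_test.cuda()  # Module.cuda() moves all parameters in place
    decoder_test.cuda()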

But it’s still good to know why the TypeError appears when I call is_cuda() on PyTorch Variable objects.

Sorry, my mistake: it’s decoder_context.is_cuda (without the parentheses).
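
is_cuda is a plain attribute holding a bool, which is exactly why calling it reproduces the error above (assuming decoder_context is on the GPU, attribute access prints True):

>>> decoder_context.is_cuda    # attribute access, returns a bool
True
>>> decoder_context.is_cuda()  # the bool itself is not callable
TypeError: 'bool' object is not callable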
