CUDA runtime error when using nn.Embedding

This GIST (inspired by https://gist.github.com/williamFalcon/f27c7b90e34b4ba88ced042d9ef33edd, but aiming to be complete, working, and a bit simpler than the original; it additionally uses torch Datasets) is a running example that trains a toy model with PyTorch, an LSTM, mini-batches, and Datasets (CPU-only so far).
But when I want to run it on the GPU with CUDA I get the error message:

```
Traceback (most recent call last):
  File "./main.py", line 174, in <module>
    x = model(x, l)
  File "/home/matthias/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "./main.py", line 126, in forward
    x = self.embedding(x)
  File "/home/matthias/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/matthias/.local/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 117, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/home/matthias/.local/lib/python3.7/site-packages/torch/nn/functional.py", line 1506, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'
```

I already transferred the ignore_index variable to CUDA, but it did not help.
Does anyone have an idea?

Hi,

The .to() operator on Tensors is not in-place.
You need to do x = x.to(device) to get the GPU Tensor.
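
For illustration, here is a minimal sketch of the fix, assuming a model and training loop roughly like the one in the gist (the ToyModel class, its dimensions, and the variable names are hypothetical):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-in for the gist's model: Embedding -> LSTM.
class ToyModel(nn.Module):
    def __init__(self, vocab_size=100, embed_dim=8, hidden_dim=16):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, x):
        # x must already live on the same device as self.embedding.weight,
        # otherwise torch.embedding raises the "backend CPU for argument #3" error.
        x = self.embedding(x)
        out, _ = self.lstm(x)
        return out

# .to() on a Module moves its parameters in place and returns the module itself.
model = ToyModel().to(device)

# Mini-batch of token indices; Tensors are created on the CPU by default.
x = torch.randint(1, 100, (4, 10))

# .to() on a Tensor is NOT in-place: you must reassign the result.
x = x.to(device)

out = model(x)  # forward pass now runs with all Tensors on the same device
```

Note the asymmetry: calling .to(device) on a Module mutates it, while calling it on a Tensor returns a new Tensor and leaves the original where it was, which is why x = x.to(device) (and not just x.to(device)) is required inside the training loop.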
