The code below works fine when using a CPU or a single GPU. However, when I use more than one GPU, it raises the following error:
AttributeError: module 'torch' has no attribute 'long'
The code that caused the error:
def prepare_sequence(seq, to_ix):
    idxs = [to_ix[w] for w in seq]
    return torch.tensor(idxs, dtype=torch.long)
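For reference, here is a minimal standalone reproduction of the function (the toy vocabulary is made up for illustration); on CPU it runs without any error:

```python
import torch

def prepare_sequence(seq, to_ix):
    # Map each token to its integer index and build a LongTensor
    idxs = [to_ix[w] for w in seq]
    return torch.tensor(idxs, dtype=torch.long)

# Toy vocabulary, for illustration only
to_ix = {"the": 0, "dog": 1, "barked": 2}

print(prepare_sequence(["the", "dog", "barked"], to_ix))
# tensor([0, 1, 2])
```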
Why doesn't it work for multiple GPUs? In this example the batch size is 1, so I don't think batching is the issue.