Good evening, everybody.
I am trying to load my model, a Transformer, onto my GPU, but PyTorch raises the following error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
The example I am trying to run came 100% from this site:
https://www.datacamp.com/tutorial/building-a-transformer-with-py-torch
Before the training cell, I tried adding:
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f"Using device: {device}")
transformer.to(device)
src_data.to(device)
tgt_data.to(device)
However, the error continues. When I run nvidia-smi in the terminal, it looks like my model was loaded onto the GPU, since a lot of memory becomes occupied; the same happens with the data tensors. Still, the error persists.
This error is driving me crazy, and I would be very grateful for any help.
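In case it is relevant, I suspect the problem may be that `Tensor.to()` is not an in-place operation for tensors (unlike `Module.to()`, which moves a model's parameters in place), so calling `src_data.to(device)` without reassignment would leave `src_data` on the CPU. A small sketch of what I mean (the variable names here are just for illustration):

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'

x = torch.zeros(3)
x.to(device)      # returns a moved copy; x itself is NOT changed
print(x.device)   # still cpu

x = x.to(device)  # reassignment keeps the returned tensor
print(x.device)   # now on the chosen device
```

So perhaps `src_data = src_data.to(device)` and `tgt_data = tgt_data.to(device)` are needed, but I am not sure if that is the whole story.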
Best regards.