Unable to load my model to GPU

Good evening, everyone.

I am trying to load my model, a Transformer, onto my GPU, but PyTorch raises the following error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

The example I am trying to run comes entirely from this site:

https://www.datacamp.com/tutorial/building-a-transformer-with-py-torch

I tried to use, before the training cell:
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f"Using device: {device}")

transformer.to(device)
src_data = src_data.to(device)
tgt_data = tgt_data.to(device)

However, the error continues. When I run nvidia-smi in the terminal, it looks like my model was loaded onto the GPU, since a lot of memory becomes occupied; the same happens with the variables. Still, the error persists.
This error is driving me crazy, and I would be very grateful for any help.

Best regards.

From what I can tell, it seems the example code does not move the masks to the device.

src_mask and tgt_mask are tensors that have to be moved to the same device as the model and the training data. Maybe you can try

def generate_mask(self, src, tgt, DEVICE):
    ...
    return src_mask.to(DEVICE), tgt_mask.to(DEVICE)

or something like that for a quick test.
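To make the device handling concrete, here is a minimal, self-contained sketch of the pattern. The toy model and sizes below are assumptions for illustration only (the tutorial's real model is larger); the two points it demonstrates are that `nn.Module.to()` works in place, while `Tensor.to()` does not, so tensors and masks must be reassigned to the returned value.

```python
import torch
import torch.nn as nn

# Pick the device the same way the question does.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical toy stand-in for the tutorial's transformer.
transformer = nn.Transformer(
    d_model=16, nhead=2, num_encoder_layers=1,
    num_decoder_layers=1, dim_feedforward=32, batch_first=True,
)
transformer.to(device)  # in place for nn.Module: no reassignment needed

# Dummy batches: (batch, seq_len, d_model) because batch_first=True.
src_data = torch.randn(4, 10, 16)
tgt_data = torch.randn(4, 10, 16)

# Tensor.to() is NOT in place: it returns a copy, so reassign.
src_data = src_data.to(device)
tgt_data = tgt_data.to(device)

# Any masks must live on the same device as the model and data.
tgt_mask = nn.Transformer.generate_square_subsequent_mask(10).to(device)

out = transformer(src_data, tgt_data, tgt_mask=tgt_mask)
print(out.shape)  # torch.Size([4, 10, 16])
```

Writing `src_data.to(device)` without the assignment, as in the original post, silently does nothing for tensors, which matches the symptom that memory is allocated on the GPU (the copy) while the error persists (the original CPU tensor is still used).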

Thank you. I will try it soon.