Weight tensor stored on CPU despite model being stored on GPU

Hi,

I'm running into the following error:

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

when I try to run my model on Colab (with GPU runtime enabled).
During my training loop I send my batches to the GPU using:
X = X.to(device)
y = y.to(device)

and the model is also on the GPU, using:
model.to(device)

Sources online indicate that this error arises because the network isn’t on the GPU (see the Stack Overflow and PyTorch forum threads on this error). However, I’m almost certain that mine is, having verified with next(model.parameters()).is_cuda.
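In case it helps others debugging the same error: next(model.parameters()) only inspects the first registered parameter, so a minimal sketch like this (report_devices is a hypothetical helper, not a PyTorch API) can list the device of every parameter the module has actually registered:

```python
import torch
import torch.nn as nn

def report_devices(model):
    """Map each registered parameter name to the device it lives on."""
    return {name: str(p.device) for name, p in model.named_parameters()}

# Toy model on CPU; on a GPU runtime you'd expect 'cuda:0' everywhere
# after calling model.to(device)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
print(report_devices(model))
```

Note that this only covers parameters the module has registered; a tensor that was never bound to the module won't appear here at all.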

Any advice would be appreciated.

Update: the problem was that one of the submodules was created inside a local function and never assigned as an attribute of the main nn.Module, so it was never registered. model.to(device) therefore never moved its weights to the GPU, and because its parameters were also missing from model.parameters(), the is_cuda check above couldn't catch it.
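A minimal sketch of that failure mode, with hypothetical layer names (not the actual model from the question): the broken version builds layers inside a helper and returns a closure, so the layers stay local variables and are invisible to both .parameters() and .to(device); the fixed version binds the submodule to self.

```python
import torch
import torch.nn as nn

class BrokenNet(nn.Module):
    """Bug: layers built in a helper stay local and are never registered."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(4, 8)
        self.head = self._make_head()  # a plain closure, not a submodule

    def _make_head(self):
        hidden = nn.Linear(8, 8)  # local variable: never bound to self
        out = nn.Linear(8, 2)
        def head(x):
            return out(torch.relu(hidden(x)))
        return head

    def forward(self, x):
        return self.head(self.embed(x))

class FixedNet(nn.Module):
    """Fix: bind the submodule to self so .to(device) moves it too."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(4, 8)
        self.head = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))

    def forward(self, x):
        return self.head(self.embed(x))

# Only registered layers show up in .parameters():
print(len(list(BrokenNet().parameters())))  # 2: embed's weight and bias only
print(len(list(FixedNet().parameters())))   # 6: embed + both head linears
```

On a GPU, BrokenNet.to(device) moves only self.embed, so the hidden layers keep CPU weights and the forward pass raises exactly the input/weight type mismatch above, while next(model.parameters()).is_cuda still returns True.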