Cannot move weights to GPU

Hello all,

This is my first question and I am a PyTorch newbie, so feel free to educate me if I am doing anything wrong with this question or topic.

I basically want to run my existing torch code on GPU.

I do the following:

    if torch.cuda.is_available():
        dev = "cuda:0"
    else:
        dev = "cpu"
    device = torch.device(dev)

    model = ...        # model definition elided; I move it with model.to(device)

    net_input = ...    # input tensor construction elided; moved with .to(device)

    net_output = model(net_input)

Yet I am getting the following error:

     return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

I thought moving the model and the tensors to the GPU device should be enough. What am I missing here? Where should I start debugging this issue?
Any tips appreciated.
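(For anyone hitting the same error: a quick way to narrow this down is to print where each parameter actually lives after calling `model.to(device)`. The model below is a hypothetical stand-in, not the code from the question.)

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for the real one.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 1))

# List every parameter and its device; after model.to(device),
# any parameter still reporting "cpu" belongs to a submodule that
# was never registered with the parent module.
for name, p in model.named_parameters():
    print(name, p.device)
```

Parameters that stay on `cpu` after the move are exactly the weights that trigger the `Input type ... and weight type ... should be the same` error.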


Alright, I found the cause: there were some vanilla list() usages instead of nn.ModuleList(), which were causing certain weights to stay on the CPU. I wish this were stated more explicitly in Quickstart — PyTorch Tutorials 1.12.1+cu102 documentation or in some other part of the documentation.
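A minimal sketch of the pitfall (the class names here are made up for illustration): layers stored in a plain Python list are invisible to the parent `nn.Module`, so `.to(device)`, `.parameters()`, and `state_dict()` all skip them, while `nn.ModuleList` registers each layer properly.

```python
import torch.nn as nn

class Broken(nn.Module):
    def __init__(self):
        super().__init__()
        # Plain list: these layers are NOT registered as submodules,
        # so model.to(device) never moves their weights.
        self.layers = [nn.Linear(4, 4) for _ in range(2)]

class Fixed(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleList registers each layer with the parent module,
        # so .to(device) and .parameters() see all of them.
        self.layers = nn.ModuleList(nn.Linear(4, 4) for _ in range(2))

print(len(list(Broken().parameters())))  # 0 -- the layers are invisible
print(len(list(Fixed().parameters())))   # 4 -- weight + bias for each of 2 layers
```

This is why the error only appeared at the `F.conv2d` call: the registered parts of the model moved to the GPU, while the list-held weights silently stayed on the CPU.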