RuntimeError: Tensor for 'out' is on CPU, Tensor for argument #1 'self' is on CPU, but expected them to be on GPU (while checking arguments for addmm)

Hi, I found the solution to this problem. I had forgotten to use ModuleList in the class defining the Residual Block, so the hidden layers were never registered as submodules and stayed on the CPU when the model was moved to the GPU. When I added it, the code ran perfectly. Here’s the modified code:

import torch

# Residual Block
class DenseResidual(torch.nn.Module):
    def __init__(self, inp_dim, neurons, layers, **kwargs):
        super(DenseResidual, self).__init__(**kwargs)
        self.h1 = torch.nn.Linear(inp_dim, neurons)
        hidden = [torch.nn.Linear(neurons, neurons)
                  for _ in range(layers - 1)]
        # Wrapping the plain Python list in ModuleList registers the layers
        # as submodules, so they are moved to CUDA together with the model
        self.hidden = torch.nn.ModuleList(hidden)
        
    def forward(self, inputs):
        h = torch.tanh(self.h1(inputs))
        x = h
        for layer in self.hidden:
            x = torch.tanh(layer(x))
            
        # Defining Residual Connection and returning
        return x + h
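
For anyone hitting the same error: a quick way to confirm the fix is to move the model to the GPU and run a forward pass, as in the minimal sketch below (the dimensions and batch size are just example values):

# Usage sketch: ModuleList layers now follow the model to the device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = DenseResidual(inp_dim=16, neurons=32, layers=3).to(device)
x = torch.randn(8, 16, device=device)  # batch of 8 inputs on the same device

out = model(x)        # no device-mismatch error, since all layers are on `device`
print(out.shape)      # torch.Size([8, 32])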