How to make a custom method in nn.Module work with GPUs

I’m trying to implement a simple ResNet like the one below, and it works on CPU.

import torch.nn as nn
import torch.nn.functional as F

class ResNet(nn.Module):
    def __init__(self):
        super(ResNet, self).__init__()
        self.conv_1 = nn.Conv2d(3, 64, 4, stride=2)
        self.bn_1 = nn.BatchNorm2d(64)
        self.res_1 = self.__res_block(64, [32, 32, 128], True)
        ...

    def forward(self, x):
        x = self.conv_1(x)
        x = F.relu(self.bn_1(x))
        x = F.max_pool2d(x, 2, 2)
        x = self.res_1(x)
        ...

    def __res_block(self, in_channels, nb_filters, right=False):
        def __res_base(_in_channels, out_channels, kernel_size=1, padding=0):
            def g(x):
                # new Conv2d/BatchNorm2d modules are instantiated on every call
                x = nn.Conv2d(in_channels=_in_channels,
                              out_channels=out_channels,
                              kernel_size=kernel_size,
                              padding=padding)(x)
                x = nn.BatchNorm2d(num_features=out_channels)(x)
                return x
            return g

        def f(x):
            y = F.relu(__res_base(in_channels, nb_filters[0])(x))
            y = F.relu(__res_base(nb_filters[0], nb_filters[1],
                                  kernel_size=3, padding=1)(y))
            y = F.relu(__res_base(nb_filters[1], nb_filters[2])(y))
            if right:
                x = __res_base(in_channels, nb_filters[2])(x)
            return F.relu(x + y)
        return f

but it doesn’t work with GPU and throws TypeError,

TypeError: _cudnn_convolution_full_forward received an invalid combination of arguments 
- got (torch.cuda.FloatTensor, torch.FloatTensor, torch.FloatTensor, torch.cuda.FloatTensor, tuple, tuple, int, bool), 
but expected (torch.cuda.RealTensor input, torch.cuda.RealTensor weight, torch.cuda.RealTensor bias, 
torch.cuda.RealTensor output, std::vector<int> pad, std::vector<int> stride, int groups, bool benchmark)

I think this is because the parameters created in __res_block aren’t moved to the GPU, but how can I move them there? Thank you.
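One way to check this hypothesis (a small diagnostic sketch, assuming the model above):

model = ResNet()
# Only the layers stored on self in __init__ (conv_1, bn_1, ...) show up here.
# The Conv2d/BatchNorm2d modules created inside __res_block's closures are
# never registered, so model.cuda() cannot move their weights to the GPU.
print(len(list(model.parameters())))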


Hi,

From the error message, it looks like you are giving it a torch.FloatTensor as input.
If your model is on the GPU, you should give it a torch.cuda.FloatTensor (EDIT: sorry, I originally wrote the wrong name for CUDA tensors).
You can convert your inputs to CUDA by doing input = input.cuda() before forwarding them through the network.
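For example, a minimal sketch (the input shape here is made up; the Variable wrapper matches the autograd API of that era):

import torch
from torch.autograd import Variable

model = ResNet()
model.cuda()  # moves all *registered* parameters and buffers to the GPU
input = Variable(torch.randn(1, 3, 224, 224).cuda())  # inputs must be moved explicitly
output = model(input)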

I uploaded the code here.

Except for the module above, the code is the same as in my other examples, and those work with GPUs. So I think the cause is in __res_block.

Also, is torch.CudaFloatTensor equal to torch.cuda.RealTensor?

In your __res_block you create new objects that are not composed of torch.CudaFloatTensors.

Did you try, in def g(x):, returning x.cuda() instead of x?

Oh, I see.

I think you want both your __res_block and __res_base functions to return instances of Module.
Otherwise their parameters won’t be recognized as being part of the main network.
You want both of them to be classes that subclass nn.Module.
In each one, the __init__ should initialize the operations and store them in self, as done here, and the forward method should just use them, as done here.

There’s no such thing as torch.CudaFloatTensor, only torch.cuda.FloatTensor.

Also, as @albanD said, you can’t return a closure from __res_block and expect the modules inside it to be recognized as part of the model. Just create a new nn.Module subclass. You can see how ResNets are implemented in torchvision.
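For reference, a minimal sketch of that advice (the class name ResBlock and the layer names are illustrative, not taken from torchvision):

import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    # Layers are created once in __init__ and stored on self, so they are
    # registered submodules: .parameters() sees them and .cuda() moves them.
    def __init__(self, in_channels, nb_filters, right=False):
        super(ResBlock, self).__init__()
        self.conv_1 = nn.Conv2d(in_channels, nb_filters[0], kernel_size=1)
        self.bn_1 = nn.BatchNorm2d(nb_filters[0])
        self.conv_2 = nn.Conv2d(nb_filters[0], nb_filters[1],
                                kernel_size=3, padding=1)
        self.bn_2 = nn.BatchNorm2d(nb_filters[1])
        self.conv_3 = nn.Conv2d(nb_filters[1], nb_filters[2], kernel_size=1)
        self.bn_3 = nn.BatchNorm2d(nb_filters[2])
        self.right = right
        if right:
            # projection on the shortcut branch, mirroring the original code
            self.conv_skip = nn.Conv2d(in_channels, nb_filters[2], kernel_size=1)
            self.bn_skip = nn.BatchNorm2d(nb_filters[2])

    def forward(self, x):
        y = F.relu(self.bn_1(self.conv_1(x)))
        y = F.relu(self.bn_2(self.conv_2(y)))
        y = F.relu(self.bn_3(self.conv_3(y)))
        if self.right:
            x = self.bn_skip(self.conv_skip(x))
        return F.relu(x + y)

In ResNet.__init__ this becomes self.res_1 = ResBlock(64, [32, 32, 128], True); since the block is stored on self, model.cuda() now moves its weights as well.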

@alexis-jacq
Thank you for your advice, but that didn’t work, because the input to g(x) is already a torch.cuda.*Tensor; the problem is that the weights etc. are not torch.cuda.*Tensors.

@albanD @apaszke
Thank you both. OK, I’ll create the blocks as nn.Module subclasses like the torchvision implementation.

@apaszke
Is the torch.cuda.RealTensor in the TypeError message equivalent to torch.cuda.FloatTensor? I couldn’t find RealTensor in the docs.

@moskomule torch.cuda.RealTensor refers to any torch.cuda.*Tensor; in this context, it is probably a torch.cuda.FloatTensor.


Thank you all, I updated the code and it works well.

So, briefly: only instances of nn.Module subclasses can be backpropagated through?

No, nn.Module subclasses only add support for some convenience methods like .parameters(), .cuda() and some others. It’s possible to implement neural networks in autograd without modules, and some people actually prefer this style. But this means that you’ll need some additional helpers and custom data structures for handling parameters.
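As an illustration, a minimal sketch without modules, using the Variable-based autograd API of the time (the parameter names here are made up):

import torch
import torch.nn.functional as F
from torch.autograd import Variable

# Parameters are plain Variables that we create and track ourselves.
weight = Variable(torch.randn(64, 3, 4, 4) * 0.01, requires_grad=True)
bias = Variable(torch.zeros(64), requires_grad=True)

def forward(x):
    # Functional ops take the weights explicitly, so no Module is involved.
    return F.relu(F.conv2d(x, weight, bias, stride=2))

x = Variable(torch.randn(1, 3, 32, 32))
out = forward(x)
out.sum().backward()  # gradients end up in weight.grad and bias.grad

# The cost: the .parameters() / .cuda() conveniences are gone, so every
# parameter has to be collected and moved to the GPU by hand.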


All the links are dead.
Could you post the solution here?