Help with a memory leak

I have the following model that takes in 4 input images, and each image has its own conv layer. The way I wrote the forward function is, I think, causing an out-of-memory exception:

import torch
import torch.nn.functional as F

class FeatureLearningLevel(torch.nn.Module):
    def __init__(self, in_channels=1, out_channels=64, stride=1) -> None:
        super().__init__()
        self.convs = {}
        for i in range(1, 5):
            conv_name = f'conv{i}'
            conv = torch.nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                                   kernel_size=3, stride=stride)
            # register the module so its parameters are tracked by the parent
            self.add_module(conv_name, conv)
            self.convs[conv_name] = conv

    def forward(self, x1, x2, x3, x4):
        inputs = [x1, x2, x3, x4]
        # run each input through its own conv -> relu -> max-pool branch
        for idx, ((conv_name, conv), x) in enumerate(zip(self.convs.items(), inputs)):
            x = conv(x)
            x = F.relu_(x)
            x = F.max_pool2d(x, kernel_size=3, stride=1)
            inputs[idx] = x
        x1, x2, x3, x4 = inputs
        del inputs
        return x1, x2, x3, x4

Can anyone help me write this model in a better way?

You could just write a simple class for the model like:

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureLearningLevel(nn.Module):
    def __init__(self, in_channels=1, out_channels=64, stride=1):
        super(FeatureLearningLevel, self).__init__()
        self.conv1 = torch.nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=3, stride=stride)
        self.conv2 = torch.nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=3, stride=stride)
        self.conv3 = torch.nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=3, stride=stride)
        self.conv4 = torch.nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=3, stride=stride)

    def forward(self, x1, x2, x3, x4):
        # each input goes through its own conv branch
        x1 = F.max_pool2d(F.relu(self.conv1(x1)), kernel_size=3, stride=1)
        x2 = F.max_pool2d(F.relu(self.conv2(x2)), kernel_size=3, stride=1)
        x3 = F.max_pool2d(F.relu(self.conv3(x3)), kernel_size=3, stride=1)
        x4 = F.max_pool2d(F.relu(self.conv4(x4)), kernel_size=3, stride=1)
        return x1, x2, x3, x4

This model would work in the same way as you want it to.
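To sanity-check it, you can run a quick forward pass. A minimal sketch (the batch size and the 64x64 image size below are arbitrary choices, just for illustration):

# four dummy single-channel images
model = FeatureLearningLevel(in_channels=1, out_channels=64, stride=1)
imgs = [torch.randn(2, 1, 64, 64) for _ in range(4)]

with torch.no_grad():  # no autograd graph needed for a shape check
    y1, y2, y3, y4 = model(*imgs)

print(y1.shape)  # torch.Size([2, 64, 60, 60]): 64 -> 62 after conv (k=3), -> 60 after pool (k=3, s=1)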

Thanks for the answer, I started with this solution but soon I realized that I need my model to be flexible and I can easily tune the number of convs that’s why I went with the code I posted.

Can you confirm if it’s an out of memory error or a memory leak? They’re not the same.
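A quick way to tell them apart is to log memory usage across iterations: with a leak, usage keeps growing every step; with a plain out-of-memory error, a single pass simply needs more memory than you have. A minimal sketch, assuming a CUDA device (model and the x1..x4 inputs come from your own training loop):

import torch

def log_cuda_memory(tag):
    # allocated = tensors currently alive; reserved = cached by the allocator
    alloc = torch.cuda.memory_allocated() / 1024**2
    reserved = torch.cuda.memory_reserved() / 1024**2
    print(f'{tag}: allocated={alloc:.1f} MiB, reserved={reserved:.1f} MiB')

for step in range(10):
    out = model(x1, x2, x3, x4)
    log_cuda_memory(f'step {step}')
    # if the allocated number climbs every step, something is holding on to
    # old tensors (a leak); if it stays flat but one pass already exceeds
    # the device memory, it is an out-of-memory problem, not a leak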

Also, you probably haven't declared the parent class correctly: instead of super().__init__(), it should be super(FeatureLearningLevel, self).__init__().

I can't confirm whether it's an out-of-memory error or a memory leak, but as soon as I run the forward pass, all 12 GB of RAM are consumed. I didn't show the whole model; this part is only one of 4 sub-modules. I only suspected it because it looks sketchy.

For the flexibility, you can use Sequential layers in the model, and each of these layers can then be modified as you wish, as in the sketch below.
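A minimal sketch of that idea (the num_branches parameter and the "branches" naming are my own, not from your code):

import torch
import torch.nn as nn

class FeatureLearningLevel(nn.Module):
    def __init__(self, num_branches=4, in_channels=1, out_channels=64, stride=1):
        super().__init__()
        # one Sequential branch per input image; ModuleList registers them all
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=3, stride=1),
            )
            for _ in range(num_branches)
        )

    def forward(self, *inputs):
        # apply each branch to its corresponding input
        return tuple(branch(x) for branch, x in zip(self.branches, inputs))

Tuning the number of convs then only means changing num_branches, or appending extra layers to each Sequential.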