I have the following model that takes 4 input images, where each image gets its own conv layer. The way I wrote the forward function is, I think, causing an out-of-memory exception:
import torch
import torch.nn.functional as F

class FeatureLearningLevel(torch.nn.Module):
    def __init__(self, in_channels=1, out_channels=64, stride=1) -> None:
        super().__init__()
        self.convs = {}
        # one independent conv branch per input image
        for i in range(1, 5):
            conv_name = f'conv{i}'
            conv = torch.nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                                   kernel_size=3, stride=stride)
            self.add_module(conv_name, conv)  # register so parameters are tracked
            self.convs[conv_name] = conv

    def forward(self, x1, x2, x3, x4):
        inputs = [x1, x2, x3, x4]
        for idx, ((conv_name, conv), x) in enumerate(zip(self.convs.items(), inputs)):
            x = conv(x)
            x = F.relu_(x)
            x = F.max_pool2d(x, kernel_size=3, stride=1)
            inputs[idx] = x
        x1, x2, x3, x4 = inputs
        del inputs
        return x1, x2, x3, x4
Thanks for the answer. I started with this solution, but I soon realized that I need the model to be flexible so I can easily tune the number of convs; that's why I went with the code I posted.
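For that kind of flexibility, a common pattern is `torch.nn.ModuleList`, which registers the submodules automatically and makes the branch count a constructor argument. A minimal sketch of the same conv/relu/pool branches (the `num_branches` parameter is my addition, not from the original post):

import torch
import torch.nn.functional as F

class FeatureLearningLevel(torch.nn.Module):
    def __init__(self, num_branches=4, in_channels=1, out_channels=64, stride=1) -> None:
        super().__init__()
        # ModuleList registers each conv, so no add_module bookkeeping is needed
        self.convs = torch.nn.ModuleList(
            torch.nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride)
            for _ in range(num_branches)
        )

    def forward(self, *inputs):
        # one conv -> relu -> max-pool branch per input image
        return tuple(
            F.max_pool2d(F.relu(conv(x)), kernel_size=3, stride=1)
            for conv, x in zip(self.convs, inputs)
        )

Changing the number of branches is then just `FeatureLearningLevel(num_branches=8)`.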
I can't confirm whether it's an out-of-memory error or a memory leak, but as soon as I run the forward pass the 12 GB of RAM are consumed. I didn't show the whole model; this part is only one of four sub-modules. I only suspected it because it looks sketchy.
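One thing worth ruling out: if that forward pass runs without `torch.no_grad()`, autograd keeps every intermediate activation alive for a potential backward pass, which can easily dominate memory with large images. A minimal sketch of the difference, using a single stand-in conv branch (not your full model):

import torch
import torch.nn.functional as F

conv = torch.nn.Conv2d(1, 64, kernel_size=3)

def branch(x):
    # stand-in for one conv -> relu -> max-pool branch
    return F.max_pool2d(F.relu(conv(x)), kernel_size=3, stride=1)

x = torch.randn(1, 1, 64, 64)

# Training-style forward: the output carries an autograd graph, and all
# intermediate activations stay in memory until backward (or deletion).
y_train = branch(x)

# Inference-style forward: no graph is recorded, so intermediates can be
# freed immediately and peak memory is much lower.
with torch.no_grad():
    y_eval = branch(x)

If you do need gradients, the peak usage during forward is expected; in that case reducing the spatial resolution or batch size is usually the lever.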