Model print and loss implementation for a CNN to remove atmospheric turbulence

  1. I created a CNN model to implement a paper on removing atmospheric turbulence. The model code is as follows:
import torch
from torch.nn import Conv2d, ReLU, BatchNorm2d

class Net(torch.nn.Module):
    _kernel_size = 5
    _stride = 1
    _padding = 2
    _outChannels = 64
    _inChannels = 64
    def __init__(self):
        super(Net, self).__init__()

        self.cnn_Inputlayer = Conv2d(3, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.relu = ReLU(inplace=True)
        self.cnn_layers =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)

        self.batch = BatchNorm2d(64)
        self.cnn_OutputLayer = Conv2d(self._inChannels, 3, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
    # Defining the forward pass    
    def forward(self, x):
        x = self.relu(self.cnn_Inputlayer(x))
        
        # for i in range(1,15):
        x = self.relu(self.batch(self.cnn_layers(x)))
        x = self.relu(self.batch(self.cnn_layers(x)))
        x = self.relu(self.batch(self.cnn_layers(x)))
        x = self.relu(self.batch(self.cnn_layers(x)))
        x = self.relu(self.batch(self.cnn_layers(x)))
        x = self.relu(self.batch(self.cnn_layers(x)))
        x = self.relu(self.batch(self.cnn_layers(x)))
        x = self.relu(self.batch(self.cnn_layers(x)))
        x = self.relu(self.batch(self.cnn_layers(x)))
        x = self.relu(self.batch(self.cnn_layers(x)))
        x = self.relu(self.batch(self.cnn_layers(x)))
        x = self.relu(self.batch(self.cnn_layers(x)))
        x = self.relu(self.batch(self.cnn_layers(x)))
        x = self.relu(self.batch(self.cnn_layers(x)))
        x = self.relu(self.batch(self.cnn_layers(x)))

        x = self.cnn_OutputLayer(x)
        return x

When I print the model, I get:

Net(
  (cnn_Inputlayer): Conv2d(3, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (relu): ReLU(inplace=True)
  (cnn_layers): Conv2d(64, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (batch): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (cnn_OutputLayer): Conv2d(64, 3, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
)

Why do I not get the full model with 17 layers? I can see only three layers: input, CNN, and output. There should be 15 CNN layers.

  2. I also need to implement the loss function shown in the image below. Can this be calculated with a simple MSE, or do I have to implement a custom loss? If I need a custom loss, how can I write it?

[image: loss function from the paper]

The paper I am implementing is here.

For point 1:
In the __init__ function, you create a single self.cnn_layers convolution and a single self.batch = BatchNorm2d(64).
In the forward function, you call these same modules 15 times. Since you are reusing the declared modules rather than creating 15 separate ones, the print of the model only shows each of them once.
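
A minimal illustration of that behavior (a sketch, not code from this thread): a module created once and reused appears only once in print(model) and shares a single set of weights across all calls, while separately created modules are each listed and each have their own weights.

import torch.nn as nn

class Reused(nn.Module):
    # One Linear module is created and reused three times:
    # print() lists it once, and all three calls share the same weights.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

    def forward(self, x):
        return self.fc(self.fc(self.fc(x)))

class Separate(nn.Module):
    # Three Linear modules are created:
    # print() lists all three, each with its own weights.
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 4)
        self.fc2 = nn.Linear(4, 4)
        self.fc3 = nn.Linear(4, 4)

    def forward(self, x):
        return self.fc3(self.fc2(self.fc1(x)))

print(Reused())    # shows a single (fc) entry
print(Separate())  # shows (fc1), (fc2), (fc3)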

For point 2, the PDF states, "The loss function is the sum square value of each pixel between ground truth images and recovered images." Can you try MSE in PyTorch?
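
A minimal usage sketch with dummy tensors (the tensor names are placeholders, not from the paper): nn.MSELoss averages the squared per-pixel error by default; reduction="sum" would match a literal sum of squares.

import torch
import torch.nn as nn

# Dummy tensors standing in for a recovered image and its ground truth.
recovered = torch.rand(1, 3, 64, 64, requires_grad=True)
ground_truth = torch.rand(1, 3, 64, 64)

criterion = nn.MSELoss()                    # mean of the squared per-pixel error
# criterion = nn.MSELoss(reduction="sum")   # literal sum of squared errors

loss = criterion(recovered, ground_truth)
loss.backward()
print(loss.item())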

@KarthikR Thanks for your comments and suggestions.

Actually, my model consists of an input layer (1), CNN layers with batch normalization (15), and an output layer, so it is a 17-layer network in total. Instead of writing 15 sequential CNN layers in __init__, I call the same CNN layer with batch normalization 15 times in forward. I think the model should print all the layers, since it consists of 17 layers.

For the second point, right now I am using the MSE loss, but I think the loss formula in the paper is the square of (output - target + input). How can I write a customized loss function for this?

The network printing issue is resolved. I changed the code as follows and it works:

import torch
from torch.nn import Conv2d, ReLU, BatchNorm2d

class Net(torch.nn.Module):
    _kernel_size = 5
    _stride = 1
    _padding = 2
    _outChannels = 64
    _inChannels = 64
    def __init__(self):
        super(Net, self).__init__()

        self.cnn_Inputlayer = Conv2d(3, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.relu = ReLU(inplace=True)
        
        self.cnn_layers1 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.cnn_layers2 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.cnn_layers3 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.cnn_layers4 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.cnn_layers5 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.cnn_layers6 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.cnn_layers7 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.cnn_layers8 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.cnn_layers9 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.cnn_layers10 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.cnn_layers11 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.cnn_layers12 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.cnn_layers13 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.cnn_layers14 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
        self.cnn_layers15 =  Conv2d(self._inChannels, self._outChannels, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)

        self.batch = BatchNorm2d(64)
        self.cnn_OutputLayer = Conv2d(self._inChannels, 3, kernel_size=self._kernel_size, stride=self._stride, padding=self._padding)
    # Defining the forward pass    
    def forward(self, x):
        x = self.relu(self.cnn_Inputlayer(x))
        
        # for i in range(1,15):
        x = self.relu(self.batch(self.cnn_layers1(x)))
        x = self.relu(self.batch(self.cnn_layers2(x)))
        x = self.relu(self.batch(self.cnn_layers3(x)))
        x = self.relu(self.batch(self.cnn_layers4(x)))
        x = self.relu(self.batch(self.cnn_layers5(x)))
        x = self.relu(self.batch(self.cnn_layers6(x)))
        x = self.relu(self.batch(self.cnn_layers7(x)))
        x = self.relu(self.batch(self.cnn_layers8(x)))
        x = self.relu(self.batch(self.cnn_layers9(x)))
        x = self.relu(self.batch(self.cnn_layers10(x)))
        x = self.relu(self.batch(self.cnn_layers11(x)))
        x = self.relu(self.batch(self.cnn_layers12(x)))
        x = self.relu(self.batch(self.cnn_layers13(x)))
        x = self.relu(self.batch(self.cnn_layers14(x)))
        x = self.relu(self.batch(self.cnn_layers15(x)))

        x = self.cnn_OutputLayer(x)
        return x
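
For reference, a more compact sketch (an alternative layout, not the code from the paper) that registers every block, so print(model) lists all of them. Note that the version above still shares one BatchNorm2d instance across all 15 convolutions; the sketch below gives each block its own BatchNorm2d, which is an assumption about what the paper intends.

import torch.nn as nn

class Net(nn.Module):
    def __init__(self, channels=64, n_mid_layers=15):
        super().__init__()
        self.cnn_Inputlayer = nn.Conv2d(3, channels, kernel_size=5, stride=1, padding=2)
        self.relu = nn.ReLU(inplace=True)
        # Each middle block has its own Conv2d and BatchNorm2d; nn.Sequential
        # registers every one of them, so they all appear when the model is printed.
        self.cnn_layers = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=5, stride=1, padding=2),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for _ in range(n_mid_layers)
        ])
        self.cnn_OutputLayer = nn.Conv2d(channels, 3, kernel_size=5, stride=1, padding=2)

    def forward(self, x):
        x = self.relu(self.cnn_Inputlayer(x))
        x = self.cnn_layers(x)
        return self.cnn_OutputLayer(x)

print(Net())  # lists the input layer, all 15 conv/BN/ReLU blocks, and the output layer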

The main problem is with the loss. I tried to train with the MSE loss function, but the loss is static. I want to implement the custom loss function asked about in the question (also shown in the figure). Any help? @ptrblck, your comment and expert opinion on this?
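
If the formula really is the square of (output - target + input), that is the same as the squared difference between (input + output) and the ground truth, i.e. the network predicts a residual and the recovered image is input + output. Under that assumption, a custom loss can be written as a small nn.Module; ResidualSquaredLoss is a hypothetical name, not something from the paper or from PyTorch.

import torch
import torch.nn as nn

class ResidualSquaredLoss(nn.Module):
    # Assumes the network predicts a residual: the recovered image is
    # input + output, and the loss is the squared per-pixel difference
    # between (input + output) and the ground truth.
    def __init__(self, reduction="mean"):
        super().__init__()
        self.reduction = reduction

    def forward(self, output, inp, target):
        diff = output + inp - target            # same as output - target + input
        loss = diff.pow(2)
        return loss.sum() if self.reduction == "sum" else loss.mean()

# Usage with dummy tensors (names are placeholders):
criterion = ResidualSquaredLoss()
inp = torch.rand(1, 3, 64, 64)
target = torch.rand(1, 3, 64, 64)
output = torch.rand(1, 3, 64, 64, requires_grad=True)
loss = criterion(output, inp, target)
loss.backward()

If the network is instead meant to output the recovered image directly, plain nn.MSELoss on (output, target) already matches a sum-of-squares formula, and a static loss would then more likely be a training issue (learning rate, data normalization) than a loss-function issue.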