Defining loss function, freezing learnable parameters

Hello,
I’m new to PyTorch, and I’m trying to write my own custom loss function. I am creating a model that performs image translation (mapping an image from a domain A to a domain B), and one component of the loss should enforce that the low-frequency content of the image stays the same, i.e. blur(input) = blur(output).

Based on existing posts, I tried to write an nn.Module that performs a simple Gaussian blur (I don’t know if there is a built-in function for that), and I am planning to apply it before computing the MSE, e.g. MSE(GaussianBlur(input), GaussianBlur(output)).
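
In case it helps, here is roughly what I mean in code (a minimal sketch of the idea; the kernel size, sigma, and channel count are just placeholder values):

import torch
import torch.nn as nn

class SimpleGaussian(nn.Module):
    def __init__(self, channels=3, kernel_size=5, sigma=1.0):
        super().__init__()
        # Build a 1D Gaussian and take the outer product to get the 2D kernel
        coords = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
        g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
        g = g / g.sum()
        kernel2d = g[:, None] * g[None, :]
        # groups=channels makes it depthwise, so each channel is blurred independently
        self.conv1 = nn.Conv2d(channels, channels, kernel_size,
                               padding=kernel_size // 2, groups=channels, bias=False)
        self.conv1.weight.data.copy_(kernel2d.expand_as(self.conv1.weight))

    def forward(self, x):
        return self.conv1(x)

The loss term would then be nn.functional.mse_loss(blur(input), blur(output)) with blur = SimpleGaussian().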

But I fear that during training this GaussianBlur might learn and update its intrinsic parameters, so I need to keep those parameters from updating. From what I’ve read, I believe I should use requires_grad=False, but I really don’t know where to put it. I am attaching an ipynb of my module:

Set requires_grad to False for whichever parameter you want to freeze.

Like this:

# In SimpleGaussian.__init__(), after self.conv1 is created
self.conv1.weight.requires_grad = False
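
Alternatively, if you don’t want the kernel to appear in model.parameters() at all, you can store it as a buffer instead of a conv weight. A rough sketch, assuming the same kernel2d tensor as in your module above:

# In SimpleGaussian.__init__()
self.register_buffer("kernel", kernel2d.expand(channels, 1, kernel_size, kernel_size).clone())

# In forward(), use the functional conv with the fixed kernel
return nn.functional.conv2d(x, self.kernel,
                            padding=self.kernel.shape[-1] // 2,
                            groups=self.kernel.shape[0])

A buffer moves with the module across devices and is saved in the state_dict, but it is never returned by parameters(), so no optimizer will ever update it.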

Also, I recommend inheriting from the Conv2d class for your GaussianBlur class, because all it does is a convolution.

Like this:

class SimpleGaussian(nn.Conv2d):

In this case, you don’t have to redefine the forward function, and it will make your code much simpler.
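
For example (a rough sketch; the kernel size, sigma, and channel count are assumptions to fill in):

import torch
import torch.nn as nn

class SimpleGaussian(nn.Conv2d):
    def __init__(self, channels=3, kernel_size=5, sigma=1.0):
        # Depthwise conv (groups=channels) so each channel is blurred independently
        super().__init__(channels, channels, kernel_size,
                         padding=kernel_size // 2, groups=channels, bias=False)
        coords = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
        g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
        g = g / g.sum()
        kernel2d = g[:, None] * g[None, :]
        self.weight.data.copy_(kernel2d.expand_as(self.weight))
        # Freeze the kernel so it is never updated during training
        self.weight.requires_grad = False

Since nn.Conv2d already defines forward, you can use it directly:

blur = SimpleGaussian(channels=3)
loss = nn.functional.mse_loss(blur(output), blur(input))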
