I’m new to PyTorch, and I’m trying to write my own custom loss function. I am creating a model that performs image translation (mapping an image from a domain A to a domain B), and one of the components of the loss function is that I want to enforce the low-frequency content of the image to remain the same ( blur(input) = blur(output) ).
Based on existing posts, I tried to write an nn.Module that performs a simple Gaussian blur (I don’t know if there is a built-in function that does this), and I plan to apply it before computing the MSE, e.g.,
MSE( GaussianBlur(input) , GaussianBlur(output) ).
But I fear that during training this GaussianBlur might learn and update its kernel weights, so I need to freeze those parameters so they are not updated. From what I’ve read, I believe I should use
requires_grad=False, but I really don’t know where I should put it. I am sending an ipynb of my module:
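For reference, here is one minimal sketch of what a non-trainable Gaussian blur could look like (the class name, parameters, and defaults below are illustrative, not taken from the notebook). The idea is to store the fixed Gaussian kernel with `register_buffer` instead of as an `nn.Parameter`: a buffer moves with `.to(device)` and is saved in the `state_dict`, but the optimizer never sees it, so nothing can be learned:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianBlur(nn.Module):
    """Depthwise Gaussian blur with a fixed (non-trainable) kernel."""

    def __init__(self, channels=3, kernel_size=5, sigma=1.0):
        super().__init__()
        # Build a 1D Gaussian, then take its outer product for the 2D kernel.
        coords = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
        g = torch.exp(-coords**2 / (2 * sigma**2))
        g = g / g.sum()
        kernel_2d = torch.outer(g, g)
        # One copy of the kernel per channel (depthwise conv), shape (C, 1, k, k).
        weight = kernel_2d.expand(channels, 1, kernel_size, kernel_size).clone()
        # A buffer is not a Parameter: it is never returned by .parameters(),
        # so an optimizer built from model.parameters() cannot update it.
        self.register_buffer("weight", weight)
        self.groups = channels
        self.padding = kernel_size // 2

    def forward(self, x):
        return F.conv2d(x, self.weight, padding=self.padding, groups=self.groups)
```

Gradients still flow *through* the blur back to the input image (which is what the loss needs); only the kernel itself is frozen. If your module instead wraps an `nn.Conv2d`, the alternative is to copy the kernel into `conv.weight.data` and then set `conv.weight.requires_grad = False`, and make sure those parameters are not passed to the optimizer.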