Problem training gamma correction parameters

Hi everybody,
I'm trying to learn parameters for forward and inverse gamma correction:

input -> gamma_correction -> network -> another_gamma_correction -> loss

where gamma_correction(x, gamma_parameter) = x ** gamma_parameter.

What I do is simply define one nn.Parameter() for the initial correction and another nn.Parameter() for the final correction.
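In code, roughly something like this (a simplified sketch; the real network is stood in for by a single Linear layer, and the names here are just for illustration):

import torch
import torch.nn as nn

class GammaNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.gamma_in = nn.Parameter(torch.tensor(1.0))   # initial gamma correction
        self.gamma_out = nn.Parameter(torch.tensor(1.0))  # final gamma correction
        self.net = nn.Linear(8, 8)                        # placeholder for the real network

    def forward(self, x):
        x = x ** self.gamma_in      # forward gamma correction
        x = self.net(x)
        return x ** self.gamma_out  # inverse gamma correction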

When I only try to learn the initial gamma correction there's no problem, but when I try to learn both the initial and final gamma corrections I get NaN values for the parameters after the first backpropagation.

Any thoughts?
Thanks :slight_smile:

I had a similar problem. You might be getting NaNs because some of the inputs to the second gamma correction layer are very close to 0, and you raise them to a power, which leads to problems with gradients. If I am not mistaken about how it works in PyTorch (roughly), the derivative of x^a is evaluated both with respect to a and with respect to x, and either one can cause problems: d/dx x^a = a*x^(a-1) blows up near x = 0 when a < 1, and d/da x^a = x^a * ln(x) diverges at x = 0 and is undefined for negative x.
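As a quick illustration, here is a minimal repro I'd expect to produce a NaN gradient (the exact zero in the input is what triggers it):

import torch

gamma = torch.tensor(0.5, requires_grad=True)
x = torch.tensor([0.0, 0.5, 1.0])  # note the exact zero

(x ** gamma).sum().backward()
# d/dgamma of x**gamma is x**gamma * log(x); at x = 0 this is 0 * (-inf) = nan
print(gamma.grad)  # tensor(nan)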

Now, I hope this small bit of info helps you, because in my case I was unable to resolve my issue.

Actually, it appears there is a problem even in the forward direction: a negative input raised to a non-integer power already comes out as NaN.

Try the following Gamma layer:

import torch
import torch.nn as nn

class Gamma(nn.Module):
    def __init__(self, eps=1e-8):
        super().__init__()
        self.eps = eps
        # learnable exponent, initialized to 1 (identity correction)
        self.gamma = nn.Parameter(torch.tensor(1.))

    def forward(self, input):
        # odd-symmetric power: sign(x) * (|x| + eps)**gamma
        # eps keeps the base away from 0 so the gradients stay finite
        return input.sign() * (input.abs() + self.eps)**self.gamma

There are two tricks here. The first handles negative inputs by odd symmetry: f(-x) is computed as -f(x), so the base of the power is never negative. The second handles zero or near-zero inputs by adding a small 'epsilon' to the absolute value, which keeps both the forward value and the gradients numerically stable.

I tested this layer and was able to get it to train, whereas without these tricks the self.gamma parameter would end up NaN.
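For example, a quick sanity check along these lines (using the Gamma class above, with made-up toy data where the true exponent is 2.2):

import torch
import torch.nn as nn

layer = Gamma()  # the class defined above
opt = torch.optim.SGD(layer.parameters(), lr=0.1)

x = torch.linspace(-1, 1, 101)       # includes zero and negative inputs
target = x.sign() * x.abs() ** 2.2   # pretend the true gamma is 2.2

for _ in range(1000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(layer(x), target)
    loss.backward()
    opt.step()

print(layer.gamma)  # moves from 1.0 toward ~2.2 with no NaNs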
