Input normalization inside the model

I tried to normalize the input during the forward pass of the model like this:

import torch
import torch.nn as nn


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        # ImageNet channel statistics, shaped to broadcast over [batch, channel, height, width]
        mean = torch.as_tensor([0.485, 0.456, 0.406])[None, :, None, None]
        std = torch.as_tensor([0.229, 0.224, 0.225])[None, :, None, None]
        self.register_buffer('mean', mean)
        self.register_buffer('std', std)
        ...

    def forward(self, inputs):
        # Input shape: [batch, channel, height, width]
        # Normalize inside the model (out of place, so the caller's tensor is left untouched)
        inputs = inputs.sub(self.mean).div(self.std)
        ...
        return output

During training everything is fine and working, but when I switch to eval() mode, the model starts to give random outputs. Disabling eval() brings back meaningful outputs during validation, but I need eval() mode since I use dropout and batch norm in the model. Any idea what causes this weird behavior?
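
For reference, the discrepancy can be reproduced with something like this sketch (continuing from the code above; the batch shape is just a placeholder and the Model class is assumed to be filled in). The same input gives very different outputs depending on the mode:

model = Model()
x = torch.rand(8, 3, 224, 224)  # dummy batch of images already scaled to [0, 1]

model.train()
with torch.no_grad():
    out_train = model(x)

model.eval()
with torch.no_grad():
    out_eval = model(x)

# A large difference here comes from the layers whose behavior changes in eval()
# (dropout is disabled, batch norm switches to its running statistics).
print((out_train - out_eval).abs().max())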

Do your training and validation images have the same distribution, or are you processing them differently?
Do you also see the bad results after calling eval() when you feed in your training images?
If so, the problem might be unrelated to the processing inside the model and might instead come from a small batch size, and thus skewed running estimates, in the batch norm layers.
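
You could inspect the running estimates directly; something along these lines (assuming standard nn.BatchNorm2d layers) would show whether they are badly skewed:

# Inspect the batch norm running statistics after training; values far from the
# activations seen in train() mode would explain the drop in eval() mode.
for name, module in model.named_modules():
    if isinstance(module, torch.nn.BatchNorm2d):
        print(name, module.running_mean.mean().item(), module.running_var.mean().item())

If they look off, a larger batch size or a smaller batch norm momentum usually gives smoother estimates.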

They have the same distribution, and the processing is identical for both sets.

When I disable normalization inside the model and perform it with the torchvision transforms instead, things go back to normal. I suppose the weird performance drop comes purely from the normalization inside the model.
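
For completeness, the external preprocessing I switched to is roughly this (a sketch; the transform list is trimmed to the relevant part):

from torchvision import transforms

# Normalize as part of the data pipeline instead of inside forward();
# mean/std are the usual ImageNet statistics.
preprocess = transforms.Compose([
    transforms.ToTensor(),  # PIL image / HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])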