How to efficiently normalize a batch of tensors to [0, 1]

Indeed, the batch size actually changes during validation, and the change is significant. What would be a better approach to tackle this?

You could use the transforms.Normalize transformation, which subtracts a per-channel mean from each input and divides by a per-channel stddev, both calculated once from the training set. Since the statistics are fixed, the result does not depend on the batch size.
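
For example, a minimal sketch of this approach; the mean/std values below are the commonly used ImageNet statistics, but in practice you would compute them once over your own training set:

```python
import torch
from torchvision import transforms

# Per-channel stats computed from the training set.
# The values here are the widely used ImageNet statistics (assumption:
# replace them with stats from your own data).
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

transform = transforms.Compose([
    transforms.ToTensor(),  # converts a PIL image to a [0, 1] float tensor
    normalize,              # per-channel (x - mean) / std
])
```

Because the statistics come from the training set rather than the current batch, training and validation batches of any size are normalized consistently.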


Min-max scaling fails when the input range is zero before scaling, e.g. for an all-zero or otherwise constant tensor. In that case max - min = 0, so the division makes the output infinite (or NaN). What should we do in that case?
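
One common safeguard (a sketch, not an official API; `min_max_scale` and `eps` are hypothetical names) is to clamp the denominator with a small epsilon so a constant input maps to zeros instead of inf/NaN:

```python
import torch

def min_max_scale(x: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Scale each sample in the batch to [0, 1], guarding against a zero range."""
    # Flatten everything except the batch dimension to get per-sample min/max.
    flat = x.view(x.size(0), -1)
    mins = flat.min(dim=1, keepdim=True).values
    maxs = flat.max(dim=1, keepdim=True).values
    # Clamp the range so constant inputs (max == min) don't divide by zero.
    ranges = (maxs - mins).clamp_min(eps)
    scaled = (flat - mins) / ranges
    return scaled.view_as(x)

batch = torch.zeros(4, 3, 8, 8)  # constant (all-zero) input
out = min_max_scale(batch)       # finite output: all zeros, no inf/NaN
```

With the clamp in place, a constant sample yields 0 / eps = 0 everywhere, which is usually an acceptable convention; alternatively you could detect the zero-range case explicitly and skip scaling for those samples.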