Data/image normalization

Hi all,
When I normalize grayscale images with a mean and standard deviation computed over all 9000 images, a few images end up with values outside the range -1…+1. What should I do with them?
Normalization with min/max works reliably, but it is sensitive to outliers. Mean/stddev is stable, since it is computed over all images and not just over one batch, but now outliers fall outside -1…+1. Should I just clamp them to -1 or +1?
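
Roughly what I am doing looks like this (a simplified sketch, not my exact code; the mean/stddev values are placeholders, and the clamp at the end is the option I am asking about):

#include <torch/torch.h>

// Global statistics, computed once over all 9000 training images
// (the numbers here are placeholders).
const double kGlobalMean = 97.5;
const double kGlobalStd  = 8.0;

torch::Tensor normalize(const torch::Tensor& img /* grayscale, H x W */) {
    auto x = img.to(torch::kFloat32);
    x = (x - kGlobalMean) / kGlobalStd;
    // Option in question: clamp outliers so every value stays in [-1, +1]
    return torch::clamp(x, -1.0, 1.0);
}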

Many thanks for your help.

Why would values outside of [-1, 1] be problematic?

The per-image mean ranges from 77…118 and the standard deviation from 3.0…13. The images look cloudy, like wool, with a slight texture; all are grayscale. I am trying to build an autoencoder trained only on defect-free images, to use it for defect detection.

// encoder
conv1(torch::nn::Conv2dOptions(1, CINV1_MASK_COUNT, /*kernel_size=*/3).padding(1)),
    bn1(torch::nn::BatchNorm2d(CINV1_MASK_COUNT)),
    conv2(torch::nn::Conv2dOptions(CINV1_MASK_COUNT, CINV2_MASK_COUNT, /*kernel_size=*/3).padding(1)),
    bn2(torch::nn::BatchNorm2d(CINV2_MASK_COUNT)),
    conv3(torch::nn::Conv2dOptions(CINV2_MASK_COUNT, CINV3_MASK_COUNT, /*kernel_size=*/3).padding(1)),
    bn3(torch::nn::BatchNorm2d(CINV3_MASK_COUNT)),

    // fully connected bottleneck
    fc1(32 * 32 * CINV3_MASK_COUNT, 4096),
    fc2(4096, 32 * 32 * CINV3_MASK_COUNT),

    // decoder
    convT1(torch::nn::ConvTranspose2dOptions(CINV3_MASK_COUNT, CINV2_MASK_COUNT, 3).padding(1)),
    bnt1(torch::nn::BatchNorm2d(CINV2_MASK_COUNT)),

    convT2(torch::nn::ConvTranspose2dOptions(CINV2_MASK_COUNT, CINV1_MASK_COUNT, 3).padding(1)),
    bnt2(torch::nn::BatchNorm2d(CINV1_MASK_COUNT)),

    convT3(torch::nn::ConvTranspose2dOptions(CINV1_MASK_COUNT, 1, 3).padding(1))

Three convolutional layers in the encoder and in the decoder.

x = conv1->forward(x);          // 15 x 256x256
    x = torch::max_pool2d(x, 2);    // 15 x 128x128
    x = torch::leaky_relu(x);             // 15 x 128x128
    //x = torch::tanh(x);             // 15 x 128x128
    x = torch::batch_norm(bn1->forward(x), bn1W, bnBias1W, bnmean1W, bnvar1W, true, 0.9, 0.001, true);
    

    x = conv2->forward(x);          // 30 x 128x128
    x = torch::max_pool2d(x, 2);    // 30 x 64x64
    x = torch::leaky_relu(x);             // 30 x 64x64
    //x = torch::tanh(x);             // 30 x 64x64
    x = torch::batch_norm(bn2->forward(x), bn2W, bnBias2W, bnmean2W, bnvar2W, true, 0.9, 0.001, true);

    x = conv3->forward(x);          // 30 x 64x64
    x = torch::max_pool2d(x, 2);    // 30 x 32x32
    x = torch::leaky_relu(x);             // 30 x 32x32
    //x = torch::tanh(x);             // 30 x 32x32
    x = torch::batch_norm(bn3->forward(x), bn3W, bnBias3W, bnmean3W, bnvar3W, true, 0.9, 0.001, true);

    // linear
    x = x.view({-1, 32 * 32 * CINV3_MASK_COUNT});

    x = fc1->forward(x);
    x = torch::leaky_relu(x);
    x = fc2->forward(x);
    x = torch::leaky_relu(x);
    
    // back to image-shaped feature maps
    x = x.view({ -1, CINV3_MASK_COUNT, 32, 32 });

    x = convT1->forward(x);
    x = torch::upsample_nearest2d(x, c10::IntArrayRef{ 64,64 });// 2);
    x = torch::leaky_relu(x);
    //x = torch::tanh(x);             // 30 x 64x64
    x = torch::batch_norm(bnt1->forward(x), bnt1W, bntBias1W, bntmean1W, bntvar1W, true, 0.9, 0.001, true);
    
    x = convT2->forward(x);
    x = torch::upsample_nearest2d(x, c10::IntArrayRef{ 128,128 });// 2);
    x = torch::leaky_relu(x);
    //x = torch::tanh(x);             // 15 x 128x128
    x = torch::batch_norm(bnt2->forward(x), bnt2W, bntBias2W, bntmean2W, bntvar2W, true, 0.9, 0.001, true);
    
    x = convT3->forward(x);
    x = torch::upsample_nearest2d(x, c10::IntArrayRef{ 256,256 });// 2);
    //x = torch::sigmoid(x);
    //x = torch::tanh(x);

I have tried tanh and ReLU activations, with and without batch norm, and MSE or L1 loss. Everything converges to a point, but the reconstruction is not very good. When I run a few defect images through the network and subtract the reconstruction from the input, some show the defect areas and some do not.
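
For context, the defect check itself is roughly this (a simplified sketch; model, normalize and the threshold value are placeholders for my actual code):

torch::NoGradGuard no_grad;
auto input = normalize(image).view({1, 1, 256, 256});    // one grayscale image as N x C x H x W
auto reconstruction = model->forward(input);
auto residual = (input - reconstruction).abs();          // large where the reconstruction fails
auto defect_mask = residual > 0.2;                       // placeholder threshold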

With min/max normalization, or simply dividing by 255, the network converges. With mean/stddev normalization it does not converge.

It’s an interesting observation and maybe your use case would indeed benefit from another normalization approach.

I would also recommend double-checking the batchnorm usage:

x = torch::batch_norm(bnt1->forward(x), bnt1W, bntBias1W, bntmean1W, bntvar1W, true, 0.9, 0.001, true);

as it seems you are applying it twice: first via bnt1->forward(x) and then again via the functional torch::batch_norm call.
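
Use either the module call on its own or the functional call on its own, not both (a minimal sketch; bnt1W etc. are your existing parameter and buffer tensors):

// Either the module form, which already uses its own parameters and running stats ...
x = bnt1->forward(x);

// ... or the functional form with explicitly managed tensors, but not both:
// x = torch::batch_norm(x, bnt1W, bntBias1W, bntmean1W, bntvar1W,
//                       /*training=*/true, /*momentum=*/0.9, /*eps=*/0.001, /*cudnn_enabled=*/true);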

Thank you for your advice; I have corrected the batch_norm call. Perhaps ZCA normalization will help.
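
If I try that, it would be roughly along these lines (a rough sketch, assuming the whitening is applied to flattened patches rather than to full 256x256 images, since the whitening matrix is D x D; the patch size and epsilon are placeholders):

#include <torch/torch.h>
#include <tuple>

// ZCA whitening sketch for an N x D matrix of flattened grayscale patches.
torch::Tensor zca_whiten(const torch::Tensor& patches /* N x D, float */) {
    auto mean = patches.mean(/*dim=*/{0}, /*keepdim=*/true);
    auto centered = patches - mean;

    // SVD of the centered data: the covariance eigenvectors are the columns of V,
    // the eigenvalues are S^2 / (N - 1).
    torch::Tensor U, S, V;
    std::tie(U, S, V) = torch::svd(centered);

    auto n = static_cast<double>(patches.size(0));
    auto eigvals = S.pow(2) / (n - 1.0);
    auto scale = torch::diag(torch::rsqrt(eigvals + 1e-5));   // epsilon is a placeholder
    auto zca = V.matmul(scale).matmul(V.t());                 // D x D whitening matrix
    return centered.matmul(zca);
}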