Difference between nn.BatchNorm2d() and transforms.Normalize((0.1307,), (0.3081,))

Hi all,
Here I am trying to improve the accuracy on the MNIST dataset to 99.14%. So I came across nn.BatchNorm2d(), which normalizes activations and can act as a regularizer to improve accuracy. But in the code below, while downloading the MNIST dataset, I applied a torchvision function named transforms.Normalize((0.1307,), (0.3081,)):

import torch
from torchvision import datasets, transforms

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=64, shuffle=True)  # illustrative loader settings

Here, in the last line, I am normalizing each tensor.
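(As far as I understand, Normalize just computes (input - mean) / std per channel; 0.1307 and 0.3081 are the mean and std of the MNIST training pixels after ToTensor scales them to [0, 1]. A minimal sketch with a toy tensor, just to check my understanding:)

import torch
from torchvision import transforms

# Normalize computes (input - mean) / std for each channel.
normalize = transforms.Normalize((0.1307,), (0.3081,))
x = torch.tensor([[[0.0, 1.0]]])    # toy 1x1x2 "image" holding the extreme pixel values
print(normalize(x))                 # tensor([[[-0.4242,  2.8215]]])
print((x - 0.1307) / 0.3081)        # the same result, computed by hand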

  1. So what is the difference between the two?
  2. Should I use nn.BatchNorm2d(), or is it equivalent to transforms.Normalize()?
  3. Even if I use it, will it make my images look much darker or not?
  1. transforms.Normalize is used while preprocessing the input tensors, not inside the model. BatchNorm layers keep running estimates to normalize the activations of the previous layer and, in the default setup, also have trainable affine parameters (see the sketch after this list).

  2. Both approaches normalize tensors, but they are used in different places, and BatchNorm layers additionally estimate the running statistics of the activations.

  3. The brightness of your images might just be a visualization issue. E.g., if you normalize your input images to have zero mean and unit variance, the library you are using to plot them might map these values onto its colormap in an unexpected way.
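
To make the contrast concrete, here is a minimal sketch (the channel count and the random input are illustrative assumptions, not anything from your code): transforms.Normalize applies fixed, precomputed statistics in the input pipeline, while nn.BatchNorm2d sits inside the model, updates its running estimates during training, and learns affine parameters.

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=16)        # one pair of statistics per channel
print(bn.running_mean.shape)                # torch.Size([16]), running estimate
print(bn.running_var.shape)                 # torch.Size([16]), running estimate
print(bn.weight.shape, bn.bias.shape)       # trainable affine parameters

bn.train()
x = torch.randn(8, 16, 28, 28)              # fake batch of activations
out = bn(x)                                 # normalizes with the batch stats and
                                            # updates running_mean / running_var
print(out.mean().item(), out.std().item())  # roughly 0 and 1

Regarding the darker images: if you want to plot a normalized MNIST tensor, you can undo the normalization first, e.g. img * 0.3081 + 0.1307, so the pixel values are back in [0, 1] before passing them to your plotting library.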