Confusion regarding normalization and loading a saved model

Question 1: In many research papers, researchers suggest normalizing images to the range [-1, 1]. How can we achieve this in code?

Question 2: What is the meaning of the following line of code?
transforms.Normalize(0.5, 0.5)

Question 3: Let's say I trained my model with a learning rate of 0.0001 and then saved its state. Now I want to change the learning rate to 0.00002. Should I start training from scratch, or can I load the previous model?

  1. Something like this should work:
import torch

# shift the minimum to 0, rescale to [0, 1], then map to [-1, 1]
x = torch.randn(10, 10, 10)
y = x - x.min()
y = y / y.max() * 2 - 1
print(y.min(), y.max())
> tensor(-1.) tensor(1.)
  2. Normalize subtracts the mean and divides by the std to create normalized/standardized outputs, also known as z-scores. With Normalize(0.5, 0.5) and inputs in [0, 1] (e.g. after ToTensor), this computes (x - 0.5) / 0.5, which yields exactly the [-1, 1] range from question 1.
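
A minimal sketch of that mapping, assuming a recent torchvision where Normalize accepts scalar mean/std and using a random single-channel "image" as a stand-in for real data:
import torch
from torchvision import transforms

# fake single-channel image already scaled to [0, 1], as ToTensor would produce
x = torch.rand(1, 8, 8)

# Normalize(0.5, 0.5) computes (x - 0.5) / 0.5 per channel, mapping [0, 1] to [-1, 1]
y = transforms.Normalize(0.5, 0.5)(x)
print(y.min(), y.max())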

  3. It depends on your use case, but yes, you can load the already pretrained model and continue training with the lower learning rate.
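
A minimal sketch of that workflow, where the small Sequential model, the Adam optimizer, and the path 'checkpoint.pth' are placeholders for your own setup:
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

# first run: train with lr=1e-4, then save the weights
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# ... training loop ...
torch.save(model.state_dict(), 'checkpoint.pth')

# later run: restore the weights and keep training with the lower lr=2e-5
model.load_state_dict(torch.load('checkpoint.pth'))
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
# ... continue training ...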
