BatchNorm for Regression

Hi everybody,

I am currently trying to train a regressor that accepts 5-dimensional features and outputs a single value. The network accepts batches of input data, and I use BatchNorm in the first layer.
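
Roughly, the setup looks like this (a minimal sketch; the hidden size and activation are placeholders, only the 5-dimensional input, the single output, and the BatchNorm in the first layer match my actual network):

```python
import torch
import torch.nn as nn

# Sketch of the architecture: BatchNorm on the 5-dimensional input,
# followed by a small fully connected regressor with a single output.
model = nn.Sequential(
    nn.BatchNorm1d(5),   # BatchNorm in the first layer
    nn.Linear(5, 16),    # hidden size 16 is an arbitrary placeholder
    nn.ReLU(),
    nn.Linear(16, 1),    # single regression output
)

x = torch.randn(32, 5)   # a batch of 32 samples with 5 features each
y_pred = model(x)        # shape: [32, 1]
```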

But because of the nature of BatchNorm, the network generates normalized predictions. For the evaluation stage, the ground truth values should be normalized as well in order to make a comparison. To do that, I used a single standalone BatchNorm layer that only generates normalized values, fed with batches of the same size as the ones the network receives.
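
To make this concrete, the comparison step looks roughly like this (affine=False is my reading of a layer that only normalizes, without a learnable scale and shift):

```python
import torch
import torch.nn as nn

# Standalone BatchNorm layer used only to normalize the ground truth,
# fed with batches of the same size the network receives.
target_norm = nn.BatchNorm1d(1, affine=False)

y_pred = torch.randn(32, 1)              # stand-in for the network's predictions
y_true = torch.randn(32, 1)              # ground-truth targets for the same batch
y_true_normalized = target_norm(y_true)  # normalized with the current batch statistics

error = nn.functional.mse_loss(y_pred, y_true_normalized)
```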

My question is: is this approach valid or not?

Thanks.
Can.

BatchNorm doesn’t necessarily generate normalized features if affine=True.
The additional weight and bias can scale and shift the input again, but that’s only a side note.

During evaluation you should use the running statistics and not calculate the current batch statistics anymore.
You can do this by setting your model to evaluation mode: model.eval().
The calculation therefore won’t depend on the batch size anymore.
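
As a quick sanity check (just a sketch):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(5)

# train(): the output is computed from the current batch statistics,
# and the running mean/var buffers are updated on every forward pass.
bn.train()
out_train = bn(torch.randn(32, 5))

# eval(): the stored running statistics are used instead, so the result
# no longer depends on the current batch or its size.
bn.eval()
out_batch = bn(torch.randn(32, 5))
out_single = bn(torch.randn(1, 5))  # even a single sample works in eval mode
```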


Thanks for your reply. I have already used the model.train() and model.eval() features of PyTorch, but I still get normalized outputs. I think that in order to evaluate the model in a valid manner, I need the same normalization scheme for the ground truth data of the regression problem.
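
Concretely, my evaluation step currently looks like this (a sketch; target_norm is the standalone BatchNorm layer from my first post, and the model and validation tensors are placeholders):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.BatchNorm1d(5), nn.Linear(5, 1))  # stand-in for my network
target_norm = nn.BatchNorm1d(1, affine=False)              # the extra layer for the targets

model.eval()
target_norm.eval()  # now uses its running statistics, independent of batch size

x_val = torch.randn(64, 5)  # placeholder validation features
y_val = torch.randn(64, 1)  # placeholder ground-truth values

with torch.no_grad():
    y_pred = model(x_val)
    y_val_normalized = target_norm(y_val)
    mse = nn.functional.mse_loss(y_pred, y_val_normalized)
```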