Batchnorm fine-tuning

Hi,
tl;dr: Does batchnorm use its “learned” (running) mean/variance to normalize data when in eval() mode? If so, how do I get at that mean/variance (not the affine transformation applied after normalization)?

Longer version:
I have a trained network. I have some new unlabeled data with the same categories as before but a slightly different domain. For simplicity let’s say I trained a “dog type” classifier on images taken during the day and now I want to fine-tune on new images that were taken at night.

I want to update the normalization factors in batch norm to reflect the new data's statistics. To the best of my understanding, batch norm during inference = 1) normalization with the running mean/std accumulated during training + 2) a learned affine transform.
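To make that concrete, here is a minimal sketch (using `nn.BatchNorm1d` as a stand-in) of what I believe the eval()-mode forward pass computes:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(8)
bn.eval()

x = torch.randn(4, 8)
with torch.no_grad():
    # eval-mode forward: normalize with the *running* statistics,
    # then apply the learned affine transform (weight/bias)
    manual = (x - bn.running_mean) / torch.sqrt(bn.running_var + bn.eps)
    manual = manual * bn.weight + bn.bias
    assert torch.allclose(bn(x), manual, atol=1e-6)
```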

I only see the parameters of the affine transform. Is there a way to get at the mean/std and change them?
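From what I can tell, these live in the module's buffers rather than its parameters, which would explain why they don't show up in `model.parameters()`. A small sketch (the `Sequential` model here is a hypothetical stand-in for the trained network):

```python
import torch
import torch.nn as nn

# hypothetical stand-in for the trained network
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())

# running statistics are registered as buffers, not parameters
for name, buf in model.named_buffers():
    print(name, tuple(buf.shape))  # e.g. "1.running_mean", "1.running_var", ...

# buffers can be read and overwritten in place on any BatchNorm module
bn = model[1]
bn.running_mean.copy_(torch.zeros_like(bn.running_mean))
bn.running_var.copy_(torch.ones_like(bn.running_var))
```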

I tried to bypass this by training the network with a constant loss of zero (since the mean/var are not dependent on the loss). This did not do anything…
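From what I can tell, the running statistics should be updated by the forward pass itself whenever the module is in train() mode; no loss or backward pass is involved, so perhaps the model was in eval() mode during those runs. A minimal sketch of the mechanism I'd expect:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(8)
before = bn.running_mean.clone()

bn.train()                        # buffers only update in train() mode
with torch.no_grad():             # forward alone is enough; no loss/backward
    bn(torch.randn(32, 8) + 5.0)  # data with a shifted mean

print(before, bn.running_mean)    # running_mean has drifted toward the new mean
```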

Thanks,
Dan

According to this, there should be some tensors such as running_mean and running_var.
How do I get to them, and can I change them? Preferably by running a bunch of unlabeled input examples so that the running mean is slowly adjusted toward the new data's mean.
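In case it helps, here is a hedged sketch of what I have in mind: put only the BatchNorm layers in train() mode and stream unlabeled batches through, so each batch nudges running_mean/running_var toward the new statistics at the rate set by momentum (`model` and `unlabeled_loader` are placeholder stand-ins for the trained network and the new-domain data):

```python
import torch
import torch.nn as nn

# placeholder stand-ins for the trained network and the unlabeled data
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
unlabeled_loader = [torch.randn(8, 3, 32, 32) for _ in range(10)]

model.eval()                      # keep dropout etc. in inference mode
for m in model.modules():
    if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
        m.train()                 # only BN layers update their buffers
        m.momentum = 0.01         # optional: smaller momentum = slower drift

with torch.no_grad():             # no loss/backward; only the buffers change
    for images in unlabeled_loader:
        model(images)

model.eval()                      # back to pure inference
```

Setting `m.momentum = None` instead should make the layer keep a cumulative (equal-weight) average over all batches seen, and calling `m.reset_running_stats()` first would discard the old day-time statistics entirely rather than drifting away from them.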