Is batch norm running mean/variance reset at any point?

A standard deep learning pipeline runs a training epoch followed by a testing epoch. Assume I would like to compute some statistics during every iteration of training. Assume also that my model has batch norm, so I have to switch to eval mode every iteration. I was wondering whether this would result in the batch norm statistics being reset every iteration. Also, is there anything inherently wrong with switching from train to eval and back in each iteration of an epoch?

Thanks!

As far as I understand, you would like to switch between train and eval in every iteration of your training loop?
So your code would look similar to this:

for epoch in range(10):
    for batch_idx, (data, target) in enumerate(train_loader):
        model.train()
        # your training code

        # Compute your stats in each iteration
        model.eval()
        with torch.no_grad():  # no gradients needed for the stats pass
            stats = model(...)

If that’s the case, the code should run. I’m not sure whether switching the modes in each iteration will introduce much overhead, though.
The running stats shouldn’t be reset. They are updated in each forward pass while your model is in training mode, and just applied when it’s in eval mode.
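If you want to double check this, here is a minimal sketch (the BatchNorm2d layer and the random input are just placeholders for illustration) that prints the running mean before and after switching modes:

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)            # running_mean starts at zeros, running_var at ones
x = torch.randn(8, 3, 16, 16)

bn.train()
bn(x)                             # forward pass in train mode updates the running stats
print(bn.running_mean)            # no longer all zeros

bn.eval()
with torch.no_grad():
    bn(x)                         # forward pass in eval mode only applies the stats
print(bn.running_mean)            # unchanged by the eval pass

bn.train()
print(bn.running_mean)            # switching back to train mode doesn't reset them either

The stats only go back to their initial values if you explicitly call bn.reset_running_stats() or re-create the layer.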