torch.no_grad() and the updating of batch normalization statistics

Does no_grad() still allow the batch normalization statistics to be updated?
I have the following code:

import torch

def train_add_stat(model, trn_loader, optimizer, ib_max):
    # Train mode so that BatchNorm layers update their running statistics
    # during the forward pass.
    model.train()
    with torch.no_grad():
        for inputs, targets in trn_loader:
            inputs = inputs.cuda()  # Variable is deprecated; plain tensors suffice
            output = model(inputs)
    del inputs, targets, output
    torch.cuda.empty_cache()
    return

to update the batch norm statistics, and I am wondering whether this is correct, or whether no_grad() prevents the statistics from being updated in the same way it prevents gradient computation.
Thanks.

Yes, it does still update. no_grad() only disables gradient tracking; batch norm running statistics are updated during the forward pass whenever the module is in train() mode.
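
You can verify this with a minimal sketch, assuming a standalone BatchNorm1d layer rather than a full model: running_mean still changes after a forward pass under no_grad(), as long as the module is in train() mode.

import torch
import torch.nn as nn

bn = nn.BatchNorm1d(4)  # hypothetical standalone layer for the check
bn.train()              # train mode: running stats are updated in forward

before = bn.running_mean.clone()
with torch.no_grad():
    bn(torch.randn(8, 4))  # forward pass updates running_mean/running_var
after = bn.running_mean.clone()

print(torch.equal(before, after))  # False: the statistics were updated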
