Difference between NVIDIA's pytorch-quantization and PyTorch's QAT process

Hi, after doing some searching and reading, I've noticed that NVIDIA's QAT process is different from PyTorch's.
NVIDIA seems to first calibrate the model offline, then train (QAT) the calibrated model.
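To make sure I'm describing it right, here is roughly the calibrate-then-finetune flow I mean, as a minimal sketch based on the pytorch-quantization docs (`build_model()` and `calib_loader` are placeholders, and the max calibrator is assumed; a histogram calibrator would need arguments to `load_calib_amax`):

```python
import torch
from pytorch_quantization import quant_modules
from pytorch_quantization import nn as quant_nn

quant_modules.initialize()       # monkey-patch nn layers with quantized versions
model = build_model().cuda()     # build_model() is a placeholder

# 1) Offline calibration: collect statistics with fake quantization disabled
for module in model.modules():
    if isinstance(module, quant_nn.TensorQuantizer):
        if module._calibrator is not None:
            module.disable_quant()
            module.enable_calib()
        else:
            module.disable()

with torch.no_grad():
    for images, _ in calib_loader:   # calib_loader is a placeholder
        model(images.cuda())

# 2) Load the collected amax values and switch fake quantization back on
for module in model.modules():
    if isinstance(module, quant_nn.TensorQuantizer):
        if module._calibrator is not None:
            module.load_calib_amax()
            module.enable_quant()
            module.disable_calib()
        else:
            module.enable()

# 3) Only now fine-tune (QAT), with fake quantization active from the start
```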
Whereas in PyTorch (eager mode), we fuse modules, call prepare_qat, enable the observers and fake quant, train (QAT), then disable the observers and freeze batch norm stats after a few epochs.
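That is, something like this sketch adapted from the PyTorch QAT tutorial (the module names `["conv", "bn", "relu"]`, the epoch thresholds, and `build_model()` / `train_one_epoch()` / `num_epochs` are placeholders; newer releases use `fuse_modules_qat` and the `torch.ao.quantization` namespace instead):

```python
import torch
import torch.quantization as tq  # torch.ao.quantization on newer releases

model = build_model()            # build_model() is a placeholder
model.train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")

# module names are just an example; newer versions: fuse_modules_qat
tq.fuse_modules(model, [["conv", "bn", "relu"]], inplace=True)
tq.prepare_qat(model, inplace=True)   # inserts observers and fake-quant modules

for epoch in range(num_epochs):       # num_epochs is a placeholder
    train_one_epoch(model)            # placeholder training step
    if epoch > 2:
        # freeze batch norm mean/variance estimates
        model.apply(torch.nn.intrinsic.qat.freeze_bn_stats)
    if epoch > 3:
        # stop updating quantization ranges (observers off, fake quant stays on)
        model.apply(tq.disable_observer)

model.eval()
model_int8 = tq.convert(model)        # final int8 model
```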
Is the enabling and then disabling of the observers equivalent to NVIDIA's offline calibration, just performed during training in PyTorch's case?