Inference fails on CPU with sync bn enabled after multi-GPU training

After training with multiple GPUs with sync_bn enabled, I would like to run inference on the CPU. However, this raises the error:
ValueError: SyncBatchNorm expected input tensor to be on GPU

Is there any way to solve this issue? Thanks

I don’t know how you are adding SyncBatchNorm to the model, but if you are using convert_sync_batchnorm, remove that call and keep the standard batchnorm layers.
If you are manually adding SyncBatchNorm layers to your model, you might need to replace them with their non-sync equivalents and copy their internal states over.

As far as I know, the states I need to move include: weight, bias, running_mean, and running_var. Is there anything else I need to transfer over?
You mentioned replacing the SyncBatchNorm layers; could you provide a simple example of how to do this? Maybe something like:

for name, module in model.named_modules():
    if isinstance(module, nn.SyncBatchNorm):
        # do something
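A sketch along those lines is below. Besides weight, bias, running_mean, and running_var, the other buffer worth copying is num_batches_tracked. Note that replacing modules in place via named_modules() is awkward, so this sketch instead walks the module tree recursively and rebuilds it; it also assumes 4D (NCHW) inputs, so it swaps in nn.BatchNorm2d — use BatchNorm1d/BatchNorm3d for other input shapes. The function name revert_sync_batchnorm is my own; as far as I know there is no built-in reverse of convert_sync_batchnorm.

```python
import torch
import torch.nn as nn

def revert_sync_batchnorm(module):
    """Recursively replace nn.SyncBatchNorm with nn.BatchNorm2d,
    copying learned parameters and running statistics.
    Assumes 4D inputs; swap in BatchNorm1d/3d for other shapes."""
    module_out = module
    if isinstance(module, nn.SyncBatchNorm):
        module_out = nn.BatchNorm2d(
            module.num_features,
            eps=module.eps,
            momentum=module.momentum,
            affine=module.affine,
            track_running_stats=module.track_running_stats,
        )
        if module.affine:
            # Reuse the learned affine parameters directly
            module_out.weight = module.weight
            module_out.bias = module.bias
        if module.track_running_stats:
            # Copy the running statistics buffers as well
            module_out.running_mean = module.running_mean
            module_out.running_var = module.running_var
            module_out.num_batches_tracked = module.num_batches_tracked
    # Recurse into children and re-register the converted versions
    for name, child in module.named_children():
        module_out.add_module(name, revert_sync_batchnorm(child))
    return module_out
```

Usage would then be `model = revert_sync_batchnorm(model)` before calling `model.cpu()` and running inference.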