Cast BatchNorm2d to int32

I have several BatchNorm2d layers without any additional params. As far as I know, the WebGL backend of onnxruntime-web doesn't support int64, but the tracked statistics of BatchNorm2d are stored as int64 (presumably to guard against int32 overflow). I tried multiple ways to convert these params to int32, or even float32 (I know that's bad, but I tried it anyway), and they stayed the same. Maybe the original BatchNorm implementation just doesn't allow changing the type.
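For reference, the int64 state lives in the num_batches_tracked buffers. You can list them like this (a minimal sketch with a toy model standing in for the real generator):

```python
import torch
import torch.nn as nn

# toy model standing in for the real generator
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16))

# num_batches_tracked is a buffer (not a parameter) and is stored as int64;
# running_mean / running_var are ordinary float32 buffers
int64_buffers = [name for name, buf in model.named_buffers()
                 if buf.dtype == torch.int64]
print(int64_buffers)  # ['1.num_batches_tracked']
```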

So, as mentioned here (pytorch/pytorch#14807), there are multiple ways to solve this.

What I have tried so far:
1. The method from ultralytics/yolov5#250, using setattr().
2. Adding .to(...) directly to the BatchNorm2d modules:
   2.1 .to(torch.int32) - this raised "nn.Module.to only accepts floating point or complex dtypes, but got desired dtype=torch.int32", which is understandable.
   2.2 .to(torch.float32) - this doesn't change anything; the params stayed int64.

Example of the second approach:

```python
self.up3 = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='bilinear'),
    nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=1),
    nn.BatchNorm2d(256).to(torch.float32),
    activation,
)
```

3. The onnx-typecast script (search for it on GitHub) - no luck.

Params still in int64:
['first_layer.2.num_batches_tracked', 'down0.1.num_batches_tracked', 'down1.1.num_batches_tracked', 'down2.1.num_batches_tracked', 'down3.1.num_batches_tracked', 'up3.2.num_batches_tracked', 'up2.2.num_batches_tracked', 'up1.2.num_batches_tracked', 'up0.2.num_batches_tracked']
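One thing worth noting: nn.Module.to(dtype) only converts floating-point parameters and buffers, which is why .to(torch.float32) silently skips the int64 num_batches_tracked buffers. Re-registering those buffers by hand does change their dtype. A sketch on a toy model (whether the resulting export then satisfies the WebGL backend is a separate question):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16))

# .to(torch.float32) skips non-floating-point buffers, so cast the int64
# counters explicitly; assigning a tensor to a registered buffer name
# replaces the stored buffer.
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d) and m.num_batches_tracked is not None:
        m.num_batches_tracked = m.num_batches_tracked.to(torch.int32)

dtypes = {buf.dtype for name, buf in model.named_buffers()
          if name.endswith('num_batches_tracked')}
print(dtypes)  # {torch.int32}
```

If you try this, do it after load_state_dict and before torch.onnx.export; num_batches_tracked only affects training (when momentum is None), so an int32 counter shouldn't change eval-mode behavior.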

Also, I'm loading a pretrained model (not training from scratch).

@shabashaash This looks like a late-night, endless-coffee kind of problem. Do you have the onnx file/model that we can play with?

Sure. I've already been trying to solve it for 3 days straight. The model is SimSwap (GitHub - neuralchen/SimSwap: An arbitrary face-swapping framework on images and videos with one single trained model!). I'm using the pretrained 512HQ model. (The actual model is Generator_Adain_Upsample in fs_networks_512.)
Here is link to folder on google drive with 2 models:
true_visual_512_opset13_nc_float32cast.onnx - just added .to('float32')
true_visual_512_opset13_nc_float32cast_int32.onnx - added .to('float32') and also used the onnx-typecast script.

Also, when I try to run the model converted with the onnx-typecast script on onnxruntime (Python), I get this error:
"[ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from /content/onnx_models/true_visual_512_opset13_nc_float32cast_int32.onnx failed:This is an invalid model. Type Error: Type 'tensor(int32)' of input parameter (219) of operator (Unsqueeze) in node (Unsqueeze_76) is invalid."

UPD:
And with the same model on onnxruntime-web (Node.js) with the webgl backend (it works fine on wasm, i.e. CPU, but takes too long):
"failed to inference ONNX model: Error: unrecognized input '' for node: Resize_1242.". I suppose it's because some of the params were not converted to int32 properly.
The Resize block (there are 4 of them):