I have several BatchNorm2d layers without any additional params. As far as I know, the WebGL runtime for onnxruntime-web doesn't support int64, but the tracked statistics for BatchNorm2d (`num_batches_tracked`) are stored as int64 (presumably to avoid int32 overflow). I tried multiple ways to convert these parameters to int32, or even float32 (I know that's bad, but I tried it anyway), and they stayed the same. Maybe the original BatchNorm implementation simply doesn't allow the type to be changed.
As mentioned in pytorch/pytorch#14807, there are several ways to approach this.
What I have tried so far:
1. A `setattr()`-based method like the one in ultralytics/yolov5#250.
2.1. Adding `.to(torch.int32)`, which raised "nn.Module.to only accepts floating point or complex dtypes, but got desired dtype=torch.int32", which is understandable.
2.2. Adding `.to(torch.float32)`, which changed nothing: the params stayed int64.
Example of the second approach:

```python
self.up3 = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='bilinear'),
    nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=1),
    nn.BatchNorm2d(256).to(torch.float32),
    activation,
)
```
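The 2.2 result can be reproduced in isolation: `nn.Module.to(dtype)` converts only floating-point (and complex) parameters and buffers, so the integer `num_batches_tracked` buffer is skipped by design. A minimal sketch:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(256)
bn.to(torch.float32)  # converts only floating-point tensors

# The running statistics are floating point, so they end up float32...
print(bn.running_mean.dtype)         # torch.float32
# ...but the integer counter keeps its dtype:
print(bn.num_batches_tracked.dtype)  # torch.int64
```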
3. I tried the onnx-typecast script (search on GitHub): no luck.
Params in int64:

```
['first_layer.2.num_batches_tracked', 'down0.1.num_batches_tracked', 'down1.1.num_batches_tracked', 'down2.1.num_batches_tracked', 'down3.1.num_batches_tracked', 'up3.2.num_batches_tracked', 'up2.2.num_batches_tracked', 'up1.2.num_batches_tracked', 'up0.2.num_batches_tracked']
```
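For reference, a list like the one above can be produced with a small scan over the state dict (sketch with a throwaway model; your model's keys will differ):

```python
import torch
import torch.nn as nn

def int64_entries(model: nn.Module):
    """Return the state-dict keys whose tensors are stored as int64."""
    return [name for name, t in model.state_dict().items()
            if t.dtype == torch.int64]

# Throwaway example: only the BatchNorm counter shows up.
net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16))
print(int64_entries(net))  # ['1.num_batches_tracked']
```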
Also, I'm loading a pretrained model (not training from scratch).
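Since the counter is only used to update running statistics during training, one workaround I'd consider (a sketch, not verified against the exporter or the WebGL backend) is to clear the buffer after loading the pretrained weights and before exporting in eval mode. Assigning `None` to a registered buffer drops it from the state dict, and BatchNorm's forward guards the counter with an `is not None` check:

```python
import torch
import torch.nn as nn

def strip_batch_counters(model: nn.Module) -> nn.Module:
    """Drop the int64 `num_batches_tracked` buffer from every BatchNorm
    layer. The counter only feeds the running-stat update during
    training, so eval-mode inference does not need it."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            # Setting a registered buffer to None removes it from the
            # state dict; forward() checks `is not None` before using it.
            m.num_batches_tracked = None
    return model

# Usage sketch: load weights first, then strip, then export in eval mode.
# model.load_state_dict(torch.load("weights.pth"))  # hypothetical path
# strip_batch_counters(model).eval()
```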