Faster-RCNN state_dict size compared to EfficientDet

I built a detection model using Faster-RCNN from torchvision, with a backbone from the timm library.
Using mobilenetv2_110d, for example, I end up with a file of 1.2 GB when saving only the state_dict (torch.save(model.state_dict(), "model.pth")).
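Roughly how I build and save it (a simplified sketch, not my exact code; the adapter class, anchor sizes, and num_classes here are illustrative):

```python
import timm
import torch
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# timm backbone returning a single (last) feature map
body = timm.create_model("mobilenetv2_110d", pretrained=True,
                         features_only=True, out_indices=(4,))

class Backbone(torch.nn.Module):
    """Adapter so FasterRCNN sees a module with an out_channels attribute."""
    def __init__(self, body):
        super().__init__()
        self.body = body
        self.out_channels = body.feature_info.channels()[-1]

    def forward(self, x):
        # features_only returns a list of feature maps; keep the last one
        return self.body(x)[-1]

anchors = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                          aspect_ratios=((0.5, 1.0, 2.0),))
model = FasterRCNN(Backbone(body), num_classes=2,
                   rpn_anchor_generator=anchors)

torch.save(model.state_dict(), "model.pth")
```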

However, using EfficientDet with the same backbone (again via Ross Wightman's efficientdet-pytorch) I end up with a model of 47 MB.

I understand there is a big difference between those two models in terms of parameters, but here the saved model is 25x bigger.
Is there a better way to save the weights of Faster-RCNN?

If the file size is a concern to you, you could try packing the fp32 tensors into bfloat16 or similar to spend fewer bytes per parameter.
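A minimal sketch of what I mean (assuming a model is already built; only floating-point tensors are cast, and you'd cast back to fp32 on load):

```python
import torch

sd = model.state_dict()
# cast only floating-point tensors; integer buffers (e.g. num_batches_tracked) stay as-is
sd_small = {k: v.to(torch.bfloat16) if v.is_floating_point() else v
            for k, v in sd.items()}
torch.save(sd_small, "model_bf16.pth")

# on load, cast back to fp32 before load_state_dict
loaded = torch.load("model_bf16.pth")
model.load_state_dict({k: v.float() if v.is_floating_point() else v
                       for k, v in loaded.items()})
```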

This is not really a concern, as I don't have a hard limit. However, it is odd that Faster-RCNN is 25x bigger, and I'm wondering if I did something wrong or if there is a better way to do it.

But thank you for the tip, I hadn't thought about it!

OK, so something might be funny here:
When I look at my model cache, I have a ResNet50 at 98 MB and a MaskRCNN with ResNet50 backbone at 170 MB, so I'd expect the detection parts to add, say, 100 MB or maybe 200 MB at most.
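To see where the bytes actually go, you could dump the largest entries in the saved state_dict (a quick sketch, assuming the model.pth file from above):

```python
import torch

sd = torch.load("model.pth", map_location="cpu")
# rank tensors by their size in bytes (elements x bytes per element)
sizes = sorted(((v.numel() * v.element_size(), k) for k, v in sd.items()),
               reverse=True)
for nbytes, key in sizes[:10]:
    print(f"{nbytes / 1e6:8.1f} MB  {key}")
print(f"total: {sum(n for n, _ in sizes) / 1e6:.1f} MB")
```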