PyTorch model size in MBs

Hi, I am working with different quantized implementations of the same model, the main difference being the precision of the weights, biases, and activations. I'd like to know how to find the difference in size, in MB, between a version of the model stored in, say, 32-bit floating point and one stored in int8. I have the models saved in .pth format; any suggestions would be great. Thanks in advance.

The function torch.finfo (or torch.iinfo for integer dtypes) would be useful:

import torch

model = torch.nn.Linear(2, 3)
# force the bias to int8 to demonstrate mixed precision
model.bias.data = torch.tensor([1, 2, 3], dtype=torch.int8)
size_model = 0
for param in model.parameters():
    # torch.finfo describes floating-point dtypes, torch.iinfo integer dtypes
    if param.data.is_floating_point():
        size_model += param.numel() * torch.finfo(param.data.dtype).bits
    else:
        size_model += param.numel() * torch.iinfo(param.data.dtype).bits
print(f"model size: {size_model} bits | {size_model / 8e6:.2f} MB")

Have a look at this Type Info — PyTorch 1.11.0 documentation

Thank you for replying. I had done almost exactly the same thing. However, I'm using a special quantization library that applies the quantization and saves the model as a .pt file. Can you tell me how to do the same thing with a .pt file, since the object loaded from a .pt doesn't have a parameters attribute?

Use torch.load to load the parameters into a dict, then do it the same way:

import torch

# checkpoint should be an OrderedDict mapping parameter names to tensors,
# like {'weight': Tensor(...)}
# map_location="cpu" loads the tensors without needing the original device
checkpoint = torch.load("yourfile.pth", map_location="cpu")
size_model = 0
for param in checkpoint.values():
    if param.is_floating_point():
        size_model += param.numel() * torch.finfo(param.dtype).bits
    else:
        size_model += param.numel() * torch.iinfo(param.dtype).bits
print(f"model size: {size_model} bits | {size_model / 8e6:.2f} MB")

This is what I get after running the script…
AttributeError: 'collections.OrderedDict' object has no attribute 'is_floating_point'

My example only works if you save the .pth file from the object returned by model.state_dict(). In practice, this depends on how you saved your model's .pth file.
You could have a look at the loaded checkpoint object. Whatever its structure, just extract the parameter tensors and count their sizes.
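
If torch.load returns a nested structure (which the AttributeError above suggests, e.g. a dict like {'state_dict': ..., 'optimizer': ...}), a recursive sketch along these lines can sum every tensor it finds, whatever the layout:

import torch

def tensor_bits(obj):
    # recursively sum the bit size of every tensor in a nested container
    if isinstance(obj, torch.Tensor):
        info = torch.finfo if obj.is_floating_point() else torch.iinfo
        return obj.numel() * info(obj.dtype).bits
    if isinstance(obj, dict):  # also covers OrderedDict
        return sum(tensor_bits(v) for v in obj.values())
    if isinstance(obj, (list, tuple)):
        return sum(tensor_bits(v) for v in obj)
    return 0  # ignore non-tensor metadata such as epoch counters

checkpoint = torch.load("yourfile.pth", map_location="cpu")
print(f"model size: {tensor_bits(checkpoint) / 8e6:.2f} MB")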