Visualising the compression of a weight-quantized deep learning model


I have performed low-bit weight quantization on several DL models (from 8-bit down to 2-bit). As you know, the main motivation behind quantization is to compress the model, mainly its parameters, by representing them in lower bit precision. I would like to know whether there is a way to measure how much the weights have been compressed. I tried the `summary` method in PyTorch, but it didn't help me. If anyone has done something similar, please help me out.
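One way to get at this, sketched below under the assumption that your quantized values may still be *stored* in a wider container dtype (e.g. int8 holding 2-bit values): compare the actual in-memory parameter size against the theoretical size at the target bit width. The helper names (`param_size_bytes`, `theoretical_quantized_bytes`) are hypothetical, not part of any library.

```python
import torch.nn as nn

def param_size_bytes(model: nn.Module) -> int:
    # Actual bytes occupied by all parameters as currently stored
    # (nelement = count of values, element_size = bytes per value).
    return sum(p.nelement() * p.element_size() for p in model.parameters())

def theoretical_quantized_bytes(model: nn.Module, bits: int) -> float:
    # Bytes the parameters would occupy if every value were packed
    # at `bits` bits -- the compression quantization aims for.
    return sum(p.nelement() for p in model.parameters()) * bits / 8

# Toy example: a single linear layer (256*128 weights + 128 biases).
model = nn.Linear(256, 128)
fp32 = param_size_bytes(model)          # stored as float32, 4 bytes/value
q2 = theoretical_quantized_bytes(model, bits=2)
print(f"fp32 size:   {fp32} bytes")
print(f"2-bit size:  {q2:.0f} bytes")
print(f"compression: {fp32 / q2:.1f}x")
```

If your quantized weights are actually packed on disk, a simpler check is to `torch.save(model.state_dict(), path)` before and after quantization and compare file sizes; but if 2-bit values are stored one-per-int8, the file only shrinks 4x, and the theoretical calculation above shows the remaining gap.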