Calculating FLOPs of sparse CNN models

Hi, I’ve tried to use the code below to determine the number of floating-point operations required for a forward pass of a CNN model. For a similar model that has been made very sparse (90% zeros) through quantization, I would expect the number of FLOPs to be much lower, but I get the same count as for the original model. How do I get the FLOPs for a sparse model, or is there a reason why the value stays the same? Thanks

import torch
import torch.nn as nn

def count_flops(model, input_image_size):
    # FLOP count collected from each layer
    counts = []
    hooks = []

    # loop over all model parts
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            def hook(module, input):
                # 2 * in_channels * out_channels * kernel area,
                # applied once per output position; dividing by the
                # stride approximates the output size (padding ignored)
                factor = 2 * module.in_channels * module.out_channels
                factor *= module.kernel_size[0] * module.kernel_size[1]
                factor //= module.stride[0] * module.stride[1]
                counts.append(
                    factor * input[0].shape[2] * input[0].shape[3]
                )
            hooks.append(m.register_forward_pre_hook(hook))
        elif isinstance(m, nn.Linear):
            counts.append(2 * m.in_features * m.out_features)

    # run one forward pass on a random image to trigger the hooks
    device = next(model.parameters()).device
    noise_image = torch.rand(
        1, 3, input_image_size, input_image_size, device=device
    )
    with torch.no_grad():
        _ = model(noise_image)

    # remove the hooks so repeated calls don't double-count
    for h in hooks:
        h.remove()
    return sum(counts)
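For reference, a minimal usage sketch (the torchvision ResNet-18 and the 224-pixel input size here are just example choices, not something the function requires):

import torchvision

model = torchvision.models.resnet18().eval()
print(count_flops(model, 224))  # rough forward-pass FLOPs at 224x224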

Hi, any suggestions?

Well, if you compute the count from the shapes of the parameters and inputs, the zeros count just as much as any other numbers.
To turn the sparsity into a reduced FLOP count, you would either have to identify weight parts that can be eliminated entirely (e.g. channels which are all zero) or move to a sparse representation of the parameters and inputs.
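To illustrate the first option: a crude way to account for unstructured sparsity is to scale each layer's count by the fraction of nonzero weights. This only gives a theoretical "effective FLOPs" number (dense kernels still execute all the multiplications), and the weight_density helper below is my own sketch, not an existing PyTorch function:

def weight_density(module):
    # fraction of nonzero entries in the layer's weight tensor
    w = module.weight.data
    return (w != 0).float().mean().item()

# e.g. inside count_flops, a Linear layer's contribution becomes:
#     counts.append(int(2 * m.in_features * m.out_features * weight_density(m)))

For the second option, tensors have a to_sparse() method (e.g. m.weight.data.to_sparse()), but the standard nn.Conv2d and nn.Linear layers won't consume sparse weight tensors directly, so converting the weights alone doesn't change what this counting code measures.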

Thanks for your reply. Is there a way to convert my already trained model to a sparse representation in PyTorch?