PyTorch pruning not working

Hi all,

I am trying to prune my PyTorch model based on the tutorial [here](https://pytorch.org/tutorials/intermediate/pruning_tutorial.html).

However, the size of the model doesn’t reduce (even with 40% pruning).

Original size:
Size (MB): 6.623636

Pruned model size:
Size (MB): 6.623636

The link to my code can be found here -

Moreover, when I prune the bias as well, I get the following error:

    import torch
    import torch.nn.utils.prune as prune

    for name, module in model.named_modules():
        # prune 40% of connections in all 2D conv layers
        if isinstance(module, torch.nn.Conv2d):
            prune.l1_unstructured(module, name='weight', amount=0.4)
            prune.l1_unstructured(module, name='bias', amount=0.3)
            prune.remove(module, 'weight')
            prune.remove(module, 'bias')
        # prune 40% of connections in all linear layers
        elif isinstance(module, torch.nn.Linear):
            prune.l1_unstructured(module, name='weight', amount=0.4)
            prune.l1_unstructured(module, name='bias', amount=0.4)
            prune.remove(module, 'weight')
            prune.remove(module, 'bias')

Error:


  File "/Users/raghavgurbaxani/Desktop/DI PA /EAST-Pytorch/try_pruning.py", line 143, in main
    prune.l1_unstructured(module, name='bias', amount=0.3)

  File "/opt/anaconda3/lib/python3.7/site-packages/torch/nn/utils/prune.py", line 886, in l1_unstructured
    L1Unstructured.apply(module, name, amount)

  File "/opt/anaconda3/lib/python3.7/site-packages/torch/nn/utils/prune.py", line 536, in apply
    return super(L1Unstructured, cls).apply(module, name, amount=amount)

  File "/opt/anaconda3/lib/python3.7/site-packages/torch/nn/utils/prune.py", line 167, in apply
    default_mask = torch.ones_like(orig)  # temp

TypeError: ones_like(): argument 'input' (position 1) must be Tensor, not NoneType

@Michela could you help out on what’s going wrong here? :thinking: :thinking:

The model size is not expected to change unless you convert your tensors to a sparse coordinate representation using to_sparse(). This will only give you an advantage at higher levels of sparsity, though. Otherwise, all you’re doing is keeping the same tensors as before, now with a bunch of values set to zero instead of whatever numbers they used to be, so the size will stay the same.
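To make the trade-off concrete, here is a small sketch (my own illustration, not from the thread) that serializes a mostly-zero dense tensor and its `to_sparse()` version with `torch.save` and compares byte counts. At 40% sparsity the COO format is actually larger (it stores int64 indices per nonzero); only at high sparsity does it win:

```python
import io
import torch

def saved_size(t):
    # Serialize a tensor the same way torch.save(model.state_dict(), ...) would
    # and report the resulting byte count.
    buf = io.BytesIO()
    torch.save(t, buf)
    return buf.tell()

w = torch.randn(512, 512)

for sparsity in (0.4, 0.9):
    # Zero out roughly `sparsity` of the entries, as pruning would.
    pruned = w * (torch.rand_like(w) > sparsity)
    dense_bytes = saved_size(pruned)
    sparse_bytes = saved_size(pruned.to_sparse())
    print(f"sparsity {sparsity}: dense {dense_bytes} B, sparse (COO) {sparse_bytes} B")
```

The dense file size is identical regardless of how many zeros the tensor contains, which matches the behaviour reported above.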

I’ll look into your error now.

Do you have a minimal, runnable example that reproduces this error?

Hi @Michela

you can try running my script https://github.com/raghavgurbaxani/Quantization_Experiments/blob/master/try_pruning.py

based on this repository -

Pruning only the weights works, but as soon as I prune the bias as well, it throws the error above.
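For what it’s worth, the `ones_like(): argument 'input' ... must be Tensor, not NoneType` message suggests some layer in the model was built with `bias=False`, so `module.bias` is `None` when the pruner tries to build a mask for it. A minimal sketch of a guard (an assumption about the cause, using a toy model rather than the one in the repo):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A conv layer created with bias=False has module.bias = None, which
# reproduces the ones_like(...NoneType) error if we try to prune its bias.
model = nn.Sequential(nn.Conv2d(3, 8, 3, bias=False), nn.Conv2d(8, 8, 3))

for name, module in model.named_modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name='weight', amount=0.4)
        prune.remove(module, 'weight')
        if module.bias is not None:  # skip layers built without a bias
            prune.l1_unstructured(module, name='bias', amount=0.3)
            prune.remove(module, 'bias')
```

With the `is not None` check, the loop runs without the TypeError even on bias-free layers.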

What is `utils` here?

Here’s the script for `utils` -

Also, can you suggest the best pruning method for the optimal accuracy vs. inference speed trade-off? As the model size reduces, can I at least expect faster inference?

also @Michela

sorry the code is based on this repo

my bad

What’s `preprossing`? I’m sorry, but I can’t debug your entire repository. If you have a simple script that reproduces the error, I can look into it.

There is no single optimal method for accuracy and/or speed. Also, you cannot expect faster inference using PyTorch pruning at the moment. The feature is experimental and not yet powered by a fast sparse linear algebra library.
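A small sketch of why (my own illustration): before `prune.remove()` is called, pruning just reparametrizes the module with a `weight_orig` parameter and a `weight_mask` buffer, and the forward pass still multiplies an ordinary dense tensor full of zeros, so there is no compute saving:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(100, 100)
prune.l1_unstructured(layer, name='weight', amount=0.4)

# The layer is reparametrized: the original weights and a binary mask are
# stored, and `weight` is recomputed as their elementwise product.
print(sorted(name for name, _ in layer.named_parameters()))  # ['bias', 'weight_orig']
print(sorted(name for name, _ in layer.named_buffers()))     # ['weight_mask']
print(layer.weight.is_sparse)  # False: still a dense tensor, just with zeros
```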

@Michela thank you for your support; sorry, I pasted the wrong repository (corrected it now).

If you just run my code within that repo, it should reproduce the error. I appreciate your help on this.

Are there any plans for more mature pruning releases in the future?

@Michela, any update on this?