Heya, I'm trying to replicate the pruning experiments from the paper The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (whose reference implementation is in TensorFlow) in PyTorch. I've managed to create the mask and apply it, but I can't figure out how to freeze the pruned weights. As far as I can tell, it's only possible to freeze an entire layer or parameter at once (e.g., by setting `requires_grad=False`), not individual weights within it.
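For context, here's a minimal sketch of how I'm creating and applying the mask (the layer size and pruning fraction are just placeholders):

```python
import torch
import torch.nn as nn

def make_mask(weight: torch.Tensor, prune_frac: float = 0.2) -> torch.Tensor:
    # Binary mask: 0 for the prune_frac smallest-magnitude weights, 1 elsewhere
    k = int(prune_frac * weight.numel())
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

layer = nn.Linear(784, 300)
mask = make_mask(layer.weight.data)
layer.weight.data.mul_(mask)  # pruned weights are now zero, but not frozen
```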
I tried looking at the Rethinking Network Pruning code, since it has an implementation, but couldn't find how they freeze the pruned weights. Another repository, Lottery Ticket in PyTorch, explicitly zeroes the gradients of the pruned weights after every backward pass, which seems cumbersome and wastes computation, since autograd still computes those gradients before they're discarded. Is there a workaround?
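For reference, this is roughly what that repo does as I understand it (the model and mask here are dummies for illustration, not their actual code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(10, 2)
mask = (torch.rand_like(model.weight) > 0.2).float()  # dummy mask for illustration
model.weight.data.mul_(mask)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))

loss = F.cross_entropy(model(x), y)
loss.backward()
model.weight.grad.mul_(mask)  # autograd already computed these; we just throw them away
optimizer.step()              # pruned weights stay at zero
```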
Also, if there's no other way, could backward hooks at least make the process a bit easier? I'm trying to use this with the fastai package. Any help is really appreciated.
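Concretely, this is the kind of thing I had in mind with hooks (an untested sketch, same dummy mask as above): register a hook on the parameter so its gradient is masked inside `backward()`, instead of looping over parameters after every step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(10, 2)
mask = (torch.rand_like(model.weight) > 0.2).float()  # placeholder mask
model.weight.data.mul_(mask)

# Tensor.register_hook: the hook receives the gradient, and the returned
# tensor replaces it, so pruned positions never get a nonzero gradient.
model.weight.register_hook(lambda grad: grad * mask)

x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
loss = F.cross_entropy(model(x), y)
loss.backward()
assert torch.all(model.weight.grad[mask == 0] == 0)
```

Though even with this, I'm not sure the pruned weights are truly frozen, since an optimizer with momentum or weight decay could still move them despite a zero gradient.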