Training pruned networks not implemented in nn.utils.prune?

Hello,

I have been working with nn.utils.prune, and it seems to me that you cannot currently train these networks after pruning. Is that right? Pruning sets forward_pre_hooks, which means the forward pass is properly pruned, but I do not see anywhere how it is ensured that backprop works. Is it true that this is not implemented? Would it not work to simply set backward_hooks as well? I read the tutorial and looked for other forum posts, but I found no mention of retraining pruned networks anywhere.

Kind regards,

Richard

Hi,

IIRC there is no need for a backward hook: the pruning is applied in a differentiable manner during the forward pass, so autograd will compute the right gradients.
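
To illustrate, here is a minimal sketch (my own example, not from the tutorial; the layer and tensor shapes are arbitrary) showing that gradients flow after pruning with no extra hooks:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(4, 2)
prune.l1_unstructured(layer, name="weight", amount=0.5)

# Pruning reparametrizes the layer: `weight` is now recomputed as
# weight_orig * weight_mask by a forward pre-hook on every call.
out = layer(torch.randn(3, 4)).sum()
out.backward()

# Gradients land on the leaf tensor `weight_orig`; autograd differentiates
# through the elementwise multiplication by the mask automatically.
print(layer.weight_orig.grad)
```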

Hi albanD,

Thank you for your answer, and sorry for bringing this topic back. If I prune a model and then retrain it with backprop, aren't we computing the backward gradients with respect to the original weights, updating those original weights, and only then masking them? I observed that grad is only available on weight_orig, and the pruned weights are still being updated in weight_orig. Is there a way to retrain the model as if the pruned weights did not exist at all (i.e. participating in neither the forward inference nor the backward update)?

Best regards

Yes, you are still computing the gradients and updating the original weights. The only difference is that they are masked before being used.
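As a quick sketch of what this means in practice (my example, arbitrary shapes): since weight = weight_orig * weight_mask, the chain rule multiplies the gradient by the mask, so pruned positions receive a gradient of exactly zero and a plain SGD step leaves them untouched, even though weight_orig is the tensor being updated.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(4, 2)
prune.random_unstructured(layer, name="weight", amount=0.5)

layer(torch.randn(8, 4)).sum().backward()

# weight = weight_orig * weight_mask, so d(loss)/d(weight_orig) is the
# gradient w.r.t. `weight` multiplied by the mask: pruned entries get 0.
print(layer.weight_mask)
print(layer.weight_orig.grad)
assert torch.all(layer.weight_orig.grad[layer.weight_mask == 0] == 0)
```

One caveat: optimizer terms that do not come from the current gradient (weight decay, stale momentum) can still move the pruned entries of weight_orig, but the mask zeroes them again in the next forward pass, so the pruned weights never affect the output.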
