Pruning + Adam

Hi,

I am pruning the last (linear) layer of my model. However, since I am using Adam, the pruned parameters still get updated because of momentum. After pruning the weight matrix of my nn.Linear() layer, a percentage of its entries are zero. I tried setting those zero weights to “None” to prevent updates, but then I get the error: can't assign a NoneType to a torch.cuda.FloatTensor.

How can I avoid this behavior?

Thanks

Hi Sofia!

As I understand it, you have pruned some elements of your weight
tensor, which is to say, you have set those elements to zero. Now
you want to “freeze” those elements at zero, even as you continue
to train.

The safest way to freeze just certain elements of a tensor is to restore
them to their desired values after performing the optimization step. In
your case, you could multiply the weight in place by a mask tensor:

# perform pruning
mask = linear_layer.weight != 0   # pruned elements are flagged with False

# run training forward pass and compute loss
opt.zero_grad()
loss.backward()
opt.step()   # updates all elements of linear_layer.weight, pruned ones included
with torch.no_grad():
    linear_layer.weight.mul_(mask)   # restores pruned elements to zero
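
For completeness, here is a self-contained version of the same loop. This is a minimal sketch, not your actual setup: the layer size, learning rate, pruning fraction, seed, and random data are all placeholder choices.

import torch

torch.manual_seed(0)

linear_layer = torch.nn.Linear(8, 4)
opt = torch.optim.Adam(linear_layer.parameters(), lr=1e-2)

# prune: zero out the 20% smallest-magnitude weights (fraction is arbitrary)
with torch.no_grad():
    threshold = linear_layer.weight.abs().quantile(0.2)
    linear_layer.weight[linear_layer.weight.abs() < threshold] = 0.0
mask = linear_layer.weight != 0   # pruned elements are flagged with False

for _ in range(100):
    x = torch.randn(16, 8)
    loss = linear_layer(x).pow(2).mean()   # dummy loss for illustration
    opt.zero_grad()
    loss.backward()
    opt.step()   # updates all elements, pruned ones included
    with torch.no_grad():
        linear_layer.weight.mul_(mask)   # restore pruned elements to zero

# the pruned elements remain exactly zero after training
assert bool((linear_layer.weight[~mask] == 0).all())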

Best.

K. Frank

Dear KFrank,

Thanks for your reply. That is exactly the solution I came up with; I was just hoping there was a better one, because I am unsure how this option behaves with Adam.

Thank you!
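
A note for anyone with the same concern about Adam: multiplying by the mask after opt.step() does pin the pruned weights at zero, but Adam's first- and second-moment buffers for those entries keep accumulating from their (nonzero) gradients. If you also want to keep that gradient signal out of the optimizer state, you can zero the pruned entries' gradients with a tensor hook before the step. A minimal sketch, reusing the linear_layer, opt, loss, and mask names from the snippet above:

# register once, before training: the hook multiplies the incoming
# gradient by the mask, so Adam's moment estimates stop accumulating
# new signal for the pruned entries
linear_layer.weight.register_hook(lambda grad: grad * mask)

opt.zero_grad()
loss.backward()   # gradient of pruned entries is zeroed by the hook
opt.step()
with torch.no_grad():
    linear_layer.weight.mul_(mask)   # still needed: momentum accumulated
                                     # before pruning can move the zeros

Even with the hook, the in-place masking after the step remains necessary, since moments accumulated before pruning will keep nudging those entries until they decay.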