Iterative Pruning: LeNet-300-100

I am trying to implement an iterative pruning algorithm: train a model, prune the p% smallest-magnitude weights in each layer, re-train the pruned model, and repeat. For the experiments, I am using a LeNet-300-100 network on MNIST.
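For clarity, the overall procedure looks roughly like this (a pseudocode-level sketch; train_model and num_rounds are placeholders for my actual training loop and round count):

model = LeNet300()
for pruning_round in range(num_rounds):
    # 1. Train (or re-train) to convergence, keeping the best checkpoint
    best_model = train_model(model)
    # 2. Prune the smallest-magnitude weights per layer
    pruned_d = prune_lenet(model=best_model, pruning_params_fc=15, pruning_params_op=10)
    # 3. Load the pruned weights into a fresh model and repeat
    model = LeNet300()
    model.load_state_dict(pruned_d)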

The full code can be accessed here.

Within the function "train_with_grad_freezing(model, epoch)", I freeze the pruned weights by setting their computed gradients to 0:

import numpy as np
import torch

for layer_name, param in model.named_parameters():
    if 'weight' in layer_name:
        # Wherever a weight has been pruned (set to 0), force its
        # gradient to 0 so the optimizer cannot revive it
        tensor = param.data.cpu().numpy()
        grad_tensor = param.grad.data.cpu().numpy()
        grad_tensor = np.where(tensor == 0, 0, grad_tensor)
        param.grad.data = torch.from_numpy(grad_tensor).to(device)
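As an aside, I believe the same masking can be done in pure PyTorch without the NumPy round trip; a minimal sketch of that idea:

with torch.no_grad():
    for layer_name, param in model.named_parameters():
        if 'weight' in layer_name and param.grad is not None:
            # Multiply each gradient by a 0/1 mask of surviving weights
            param.grad.mul_((param != 0).float())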

The first training run works fine, after which I prune the layers using:

# Prune 15% of the smallest-magnitude weights in the FC layers and 10% in the output layer
pruned_d = prune_lenet(model=best_model, pruning_params_fc=15, pruning_params_op=10)

# Initialize a new model and load the pruned state dict into it
pruned_model = LeNet300()
pruned_model.load_state_dict(pruned_d)
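For context, the pruning step itself amounts to per-layer magnitude pruning. A simplified, self-contained sketch of the idea (the real prune_lenet lives in the linked code; prune_by_magnitude and the layer-name check below are illustrative only):

import torch

def prune_by_magnitude(model, pct_fc, pct_op):
    # Zero the smallest pct% of weights in each layer and
    # return the modified state dict
    d = {k: v.clone() for k, v in model.state_dict().items()}
    for name, w in d.items():
        if 'weight' not in name:
            continue
        pct = pct_op if 'out' in name else pct_fc        # which rate applies (illustrative)
        thresh = torch.quantile(w.abs().flatten(), pct / 100.0)
        w[w.abs() < thresh] = 0.0                        # zero smallest-magnitude weights
    return d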

However, on re-training this pruned model, the training metrics are stuck at the following values:

training loss = 0.0285, training accuracy = 99.04%, val loss = 0.0910, val accuracy = 97.68%

What’s going wrong?