Accuracy of pruned MobileNet V2 after finetuning is low

Hi everyone, I am using torch.nn.utils.prune.global_unstructured to prune models. I noticed that if I prune a MobileNet V2 model trained from scratch, the accuracy usually does not drop significantly (with a prune ratio of 0.4, the drop is within 2%). However, if I first finetune a pretrained model and then prune it, the accuracy drops a lot.
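
For reference, the pruning call looks roughly like this (a minimal sketch; the Conv2d/Linear layer selection is illustrative, not my exact parameter list):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import mobilenet_v2

model = mobilenet_v2()

# Collect (module, parameter_name) pairs; pruning every Conv2d and Linear
# weight is an assumption here, shown for illustration.
parameters_to_prune = [
    (m, "weight")
    for m in model.modules()
    if isinstance(m, (nn.Conv2d, nn.Linear))
]

# Remove the 40% lowest-magnitude weights across the whole model at once.
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.4,
)
```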

To be more specific, here are two ways to get pruned MobileNet V2 models:

  1. First create a model and train it from scratch on CIFAR10, CIFAR100, and SVHN for several epochs, achieving 92%, 72.5%, and 96% accuracy, respectively. Then use the global pruning method with a 0.4 prune ratio. The accuracy of the pruned models is 91.9%, 71%, and 96%, respectively.

  2. First fetch a pretrained MobileNet V2 model. Then finetune it on CIFAR10, CIFAR100, and SVHN in two steps: in the first step only the last layer is updated, and in the second step all layers are updated (a sketch of this schedule is shown below). The accuracy of these models is 92.5%, 72.8%, and 95%, respectively. Finally, prune the finetuned models with a 0.4 prune ratio, resulting in 46.9%, 26.9%, and 82.9% accuracy, respectively.
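
The two-step finetuning looks roughly like this (a sketch; the optimizers, learning rates, and CIFAR10 head replacement are placeholders, not my exact settings):

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

# Fetch the pretrained model and replace the classifier head
# (10 classes shown for CIFAR10; placeholder, not my exact code).
model = mobilenet_v2(weights=MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.last_channel, 10)

# Step 1: freeze everything except the new last layer.
for p in model.parameters():
    p.requires_grad = False
for p in model.classifier[1].parameters():
    p.requires_grad = True
head_optimizer = torch.optim.SGD(model.classifier[1].parameters(), lr=1e-2)
# ... train for a few epochs updating only the head ...

# Step 2: unfreeze all layers and continue training.
for p in model.parameters():
    p.requires_grad = True
full_optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
# ... train for a few more epochs updating everything ...
```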

I don’t understand how this can happen. Is there something I am missing? Or is this a normal situation? Any help or suggestions would be appreciated.

More info about the results:

  1. I also tried ResNet18; its accuracy does not drop significantly when pruning a finetuned model.
  2. I also tried different pruning ratios, ranging from 0.2 to 0.6; the results are similar.
  3. The MobileNet V2 model I used is from torchvision.models.
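
For anyone reproducing this, per-layer sparsity after global pruning can be checked with a small helper like the one below (hypothetical code; it relies on the weight_mask buffers that torch.nn.utils.prune registers on pruned modules):

```python
def report_sparsity(model):
    # torch.nn.utils.prune stores a weight_mask buffer on each pruned module;
    # the fraction of zeros in it is that layer's effective sparsity.
    for name, module in model.named_modules():
        if hasattr(module, "weight_mask"):
            sparsity = 1.0 - module.weight_mask.float().mean().item()
            print(f"{name}: {sparsity:.2%} of weights pruned")
```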