Fine-tuning a pruned model makes it 10x slower at inference

I don't know whether I have encountered a bug or I'm doing something wrong.
I pruned a resnet18 model and then trained it. At each checkpoint, I save the model as a JIT (TorchScript) model so I don't need the model definition to run the pruned model.
Everything seems fine: the model learns and performs as expected. What is not normal, however, is the 10x slow-down at inference.
Below are the forward times I recorded for 10 iterations of the pruned model, before and after training:

Before training: 7,405 ms  
After training:  75,480 ms
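For context, here is a minimal sketch of the workflow I'm describing (prune, make the pruning permanent, save with TorchScript, then time the loaded model). The small `nn.Sequential` stand-in and the timing loop are illustrative only, not my actual resnet18 code:

```python
import time
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Small stand-in model (my real model is a resnet18).
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

# Prune 50% of the weights in every Linear layer.
for m in model.modules():
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.5)
        prune.remove(m, "weight")  # bake the mask in before scripting

# Save as a TorchScript model so no model definition is needed at load time.
x = torch.randn(1, 64)
traced = torch.jit.trace(model, x)
torch.jit.save(traced, "pruned_model.pt")

# Time 10 forward iterations, as in the stats above.
loaded = torch.jit.load("pruned_model.pt")
loaded.eval()
with torch.no_grad():
    start = time.perf_counter()
    for _ in range(10):
        out = loaded(x)
    elapsed_ms = (time.perf_counter() - start) * 1000
print(f"10 iterations: {elapsed_ms:.3f} ms")
```

(Between the "before" and "after" measurements, the saved model is additionally fine-tuned with an ordinary training loop, which I've omitted here.)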

You can see and test the models yourself here:

What's wrong here?
Thanks a lot in advance