Why does evaluation time increase for the pruned model compared to the original model?

I trained my model (a detectron2 model), saved it, then loaded it and evaluated it.
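For context, the evaluation time is measured roughly like this (a minimal sketch; `cfg`, the dataset name, and the output directory are placeholders for my actual setup):

    import time

    from detectron2.data import build_detection_test_loader
    from detectron2.evaluation import COCOEvaluator, inference_on_dataset

    # Placeholder setup: cfg and "my_dataset_val" stand in for the real config/dataset
    val_loader = build_detection_test_loader(cfg, "my_dataset_val")
    evaluator = COCOEvaluator("my_dataset_val", output_dir="./output")

    # Time one full evaluation pass over the validation set
    start = time.perf_counter()
    inference_on_dataset(torch_model, val_loader, evaluator)
    print(f"Evaluation time: {time.perf_counter() - start:.2f}")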
Suppose the evaluation time is 15.34. I then prune the model as shown below:

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_model_l1_unstructured(model, layer_type, proportion):
    for module in model.modules():
        if isinstance(module, layer_type):
            # Zero out the given fraction of weights with the smallest L1 magnitude
            prune.l1_unstructured(module, 'weight', amount=proportion)
            # Make the pruning permanent (remove the mask reparametrization)
            prune.remove(module, 'weight')
    return model

model = prune_model_l1_unstructured(torch_model, nn.Conv2d, 0.5)
torch.save(model.state_dict(), "state_dict_prune_model.pth")
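
To confirm the pruning actually took effect (L1 unstructured pruning only sets values to zero; the weight tensors keep their original dense shape), the sparsity can be checked like this:

    import torch
    import torch.nn as nn

    # Count zero-valued weights across all Conv2d layers to verify ~50% sparsity
    total = zeros = 0
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            total += module.weight.numel()
            zeros += (module.weight == 0).sum().item()
    print(f"Conv2d weight sparsity: {zeros / total:.2%}")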

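Before the second evaluation, the pruned weights are loaded back into the same architecture (sketch, assuming the same `torch_model` instance as above):

    import torch

    # Load the pruned state dict into the same model definition, then switch to eval mode
    torch_model.load_state_dict(torch.load("state_dict_prune_model.pth"))
    torch_model.eval()
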
When I then evaluate with the pruned model (state_dict_prune_model.pth), the time is 26.54!

Does anyone know the cause? Logically, the evaluation time should decrease after pruning, not increase.