Can PyTorch get a higher classification accuracy than other frameworks?

Recently I read a paper in which the author implements a CIFAR-10 classification task with Caffe and gets 89.9% accuracy.
Using PyTorch with the same network structure as the author's, I can get nearly 94% accuracy. We both use a pretrained AlexNet.
I have two questions:

  1. Can a pretrained AlexNet (trained on ImageNet) make the classification accuracy higher? (I can only get nearly 91% accuracy without the pretrained model, but I want to know whether this is generally true.)
  2. Can such a difference arise when everything else is the same except the framework? And why?
  1. Using a model pretrained on a large dataset such as ImageNet has been shown many times to be very effective on new classification tasks, often outperforming the same model trained from random initialization.
  2. Many things can differ when training a deep network. To name a few: pseudorandom number generation, numerical precision, bug fixes between framework versions, different data augmentation, and different data processing at test time.