How to reduce run-to-run variance in a model's final classification accuracy?

Hello,

I am training a ResNet18 classifier on a dataset of 27K images with 7 classes. I initialize the network with ImageNet-pretrained weights and do not fix a random seed. My training setup (sketched below) is:

- Batch size: 256 for both train and test
- Optimizer: SGD with momentum 0.9 and weight decay 0.0005
- Learning rate: 0.01, decayed by a factor of 10 every 20 epochs
- Total epochs: 60
- Loss: standard cross-entropy

My code follows the template of the standard `imagenet.py` from the PyTorch repo. However, the final classification accuracy varies noticeably between otherwise identical runs, e.g. 80.70%, 80.37%, 80.31%, 81.81%, and 81.58%. These numbers have a mean of 80.954 and a variance ($\sigma^2$) of about 0.39, i.e. a standard deviation ($\sigma$) of roughly 0.62. I intend to publish the results in a research paper, but the numbers vary too much from run to run.
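For reference, the setup above corresponds roughly to the following (a sketch, not my exact code; the `weights` enum assumes torchvision >= 0.13, and the dataset/loader parts are omitted):

```python
import torch
import torch.nn as nn
import torchvision.models as models

# ResNet18 initialized from ImageNet-pretrained weights,
# with the final layer replaced for 7 classes
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 7)

criterion = nn.CrossEntropyLoss()  # standard cross-entropy
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=0.0005)
# decay the learning rate by a factor of 10 every 20 epochs (60 epochs total)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)
```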

Are there any other configurations/measures that I can apply to decrease this variance?
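For example, I considered pinning all the random seeds along these lines (a minimal sketch; `set_seed` is just an illustrative helper, not something from my current code):

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 42):
    # seed Python, NumPy, and PyTorch (CPU + all GPUs)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # make cuDNN deterministic, at some cost in speed
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

Would that be enough on its own, or do other sources of randomness (DataLoader worker shuffling, nondeterministic CUDA kernels) still need to be handled separately, e.g. via `torch.use_deterministic_algorithms(True)`?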