What could be the cause of big fluctuations in accuracy between epochs?

I was wondering what it means when there's a >10% difference in accuracy between two epochs. Is that due to the learning rate, or something else?

There could be various reasons for this. Please provide more information.

To diagnose the problem, more details about your training are needed, such as the full loss curves (both train and validation), the size of the dataset, and the learning-rate drop policy.
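For example, a minimal sketch of logging both curves per epoch could look like the following; the toy data and model here are just stand-ins so the snippet runs on its own, and you would swap in your own setup:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
import matplotlib.pyplot as plt

# Toy data and model purely for illustration; replace with your own.
X = torch.randn(1000, 20)
y = (X.sum(dim=1) > 0).long()
train_loader = DataLoader(TensorDataset(X[:800], y[:800]), batch_size=32, shuffle=True)
val_loader = DataLoader(TensorDataset(X[800:], y[800:]), batch_size=32)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

train_losses, val_losses = [], []
for epoch in range(30):
    # Record average training loss for this epoch.
    model.train()
    running = 0.0
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
        running += loss.item()
    train_losses.append(running / len(train_loader))

    # Record average validation loss for this epoch.
    model.eval()
    running = 0.0
    with torch.no_grad():
        for xb, yb in val_loader:
            running += criterion(model(xb), yb).item()
    val_losses.append(running / len(val_loader))

# Plot both curves so the epoch-to-epoch fluctuations are visible.
plt.plot(train_losses, label="train")
plt.plot(val_losses, label="validation")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```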

Moreover, some deep learning tasks simply have this kind of characteristic, such as GANs and NAS, where stopping at the right epoch gives better performance than training for more epochs.

It could be due to a large learning rate. Those big swings usually mean that your model is overshooting its target. Try reducing it by a factor of 10 or so. Checking PyTorch's learning-rate schedulers might also help.
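A rough sketch of both suggestions, assuming the original run used something like lr=1e-2 (the model and optimizer here are placeholders):

```python
import torch

# Placeholder model; use your own.
model = torch.nn.Linear(10, 2)

# If the run used lr=1e-2, try dropping it by roughly a factor of 10.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# ReduceLROnPlateau lowers the LR further whenever the validation metric
# stops improving, which often smooths out large epoch-to-epoch swings.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.1, patience=3
)

# Inside the training loop, after computing validation accuracy:
# scheduler.step(val_accuracy)
```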