Question about accumulating gradients over multiple steps before one update

I am building a model with PyTorch, and the input images are very large, so I accumulate gradients over several forward/backward passes and then update the parameters once. I use SGD with momentum. Should I change the learning rate to imitate the one-forward-one-update case as closely as possible?
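
For reference, here is a minimal sketch of the accumulation loop I have in mind (the model, data, and names like `accum_steps` are placeholders for illustration, not my actual code):

```python
import torch
import torch.nn as nn

# Placeholder setup: a tiny model and random "large image" batches for illustration.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

accum_steps = 4  # number of mini-batches to accumulate before one update

optimizer.zero_grad()
for step in range(100):
    images = torch.randn(2, 3, 224, 224)    # stand-in for a large-image mini-batch
    targets = torch.randint(0, 10, (2,))

    loss = criterion(model(images), targets)
    # Scale the loss so the summed gradients average over the accumulated
    # mini-batches, approximating one backward pass over the full batch.
    (loss / accum_steps).backward()

    if (step + 1) % accum_steps == 0:
        optimizer.step()        # one parameter update per accum_steps mini-batches
        optimizer.zero_grad()   # clear accumulated gradients for the next cycle
```

With the `loss / accum_steps` scaling, the accumulated gradient should match the average gradient over the effective batch, but I am unsure how momentum interacts with the less frequent updates and whether the learning rate still needs adjusting.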