LSTM+Dense gives different training and validation outputs with different evaluation intervals

Hello! I am developing an LSTM network with a Dense layer on top for stock prediction. I want to compute the validation loss, so I call model.eval() and wrap the evaluation in "with torch.no_grad()". The evaluation can run every epoch, every 2 epochs, etc. When this interval changes, the training and validation outputs (accuracy and loss) change for the same epoch.
I have also tried setting param.requires_grad = False for every parameter before evaluation and setting it back to True before training, but it didn't work.
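Roughly, my code looks like this (just a minimal sketch with made-up shapes, names, and dummy data, not my real pipeline):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class LSTMRegressor(nn.Module):
    def __init__(self, n_features=5, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.dense = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, seq, hidden)
        return self.dense(out[:, -1])  # predict from the last time step

model = LSTMRegressor()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# dummy tensors standing in for the stock sequences
train_loader = DataLoader(TensorDataset(torch.randn(128, 20, 5), torch.randn(128, 1)),
                          batch_size=16, shuffle=True)
val_loader = DataLoader(TensorDataset(torch.randn(32, 20, 5), torch.randn(32, 1)),
                        batch_size=16)

eval_interval = 2  # changing this to 1 changes the losses for the same epoch

for epoch in range(10):
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    if (epoch + 1) % eval_interval == 0:
        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(x), y).item() for x, y in val_loader) / len(val_loader)
        print(epoch, loss.item(), val_loss)
```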
Any explanation?
Thanks

Do you mean the training losses change in the following epoch if you've run the validation loop before it?
If so, your validation loop might be calling into the pseudo-random number generator (e.g. through a shuffling DataLoader, random transforms, or sampled noise), as explained here, which would change the random state used by the next training epoch. If your training losses change by a large margin, your training might not be very stable across different seeds.
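One way to check this (just a sketch, assuming a standard DataLoader-based validation loop; the function and variable names are placeholders) is to snapshot the global RNG state before validation and restore it afterwards, so the validation run cannot shift the random stream seen by the next training epoch:

```python
import torch

def validate(model, val_loader, criterion):
    # Snapshot the global RNG state(s) before any validation-side randomness runs.
    cpu_rng = torch.get_rng_state()
    cuda_rng = torch.cuda.get_rng_state_all() if torch.cuda.is_available() else None

    model.eval()
    total_loss = 0.0
    with torch.no_grad():
        for x, y in val_loader:
            total_loss += criterion(model(x), y).item()
    model.train()

    # Restore the RNG state so training continues as if validation had never run.
    torch.set_rng_state(cpu_rng)
    if cuda_rng is not None:
        torch.cuda.set_rng_state_all(cuda_rng)

    return total_loss / len(val_loader)
```

Alternatively, you could pass a dedicated torch.Generator to the validation DataLoader via its generator argument, so any randomness there draws from a separate stream. If the training curves still differ after that, the gap most likely comes from general seed sensitivity rather than from the evaluation interval itself.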