PyTorch upgrade from 0.4.1 to 1.0.1: inference giving random results

Hello all, I am quite new to PyTorch. I have a model that was trained, saved, and loaded in PyTorch 0.4.1, and it still works fine there.
But after upgrading to PyTorch 1.0.1, the model runs with no errors or warnings, yet gives me different output values each time I run it on the same input.
I have no idea where I could be going wrong, or what has changed so much.
Any help will be much appreciated :slight_smile:

How large are the differences?
If you need deterministic results, have a look at the Reproducibility docs.
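As a minimal sketch of what those docs recommend (the seed value 0 is arbitrary, and the cuDNN flags only matter on GPU):

```python
import random
import numpy as np
import torch

# Seed every RNG the model might touch.
torch.manual_seed(0)
random.seed(0)
np.random.seed(0)

# On CUDA, determinism additionally requires fixing the cuDNN algorithms.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# With the same seed, the same random values are produced.
torch.manual_seed(0)
a = torch.randn(3)
torch.manual_seed(0)
b = torch.randn(3)
```

Note that seeding makes runs repeatable, but it does not fix a layer (such as dropout) that is still active at inference time.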

Hello,
I figured out the problem. In my model I was applying dropout inside forward, but it was not defined in the model's __init__.
Strangely, model.eval() in PyTorch 0.4.1 was disabling that dropout, but in PyTorch 1.0.1 it was not. Because of that dropout, I was always getting different results.
So I defined the dropout layer in __init__ and called it in forward, and now model.eval() disables it as expected.
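A minimal sketch of the fix (the model here is hypothetical, just to illustrate the two patterns): calling `F.dropout` directly in forward defaults to `training=True`, so `model.eval()` has no effect on it, whereas an `nn.Dropout` module defined in `__init__` is registered as a submodule and is switched off by `model.eval()`.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BrokenModel(nn.Module):
    # Bug: F.dropout in forward defaults to training=True,
    # so it stays active even after model.eval().
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 8)

    def forward(self, x):
        return F.dropout(self.fc(x), p=0.5)

class FixedModel(nn.Module):
    # Fix: nn.Dropout defined in __init__ is a registered submodule,
    # so model.eval() puts it into evaluation mode and disables it.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 8)
        self.drop = nn.Dropout(p=0.5)

    def forward(self, x):
        return self.drop(self.fc(x))

model = FixedModel()
model.eval()
x = torch.randn(1, 8)
with torch.no_grad():
    out1 = model(x)
    out2 = model(x)
# In eval mode, repeated runs on the same input now match.
```

An alternative, if you prefer the functional form, is to pass the module's mode explicitly: `F.dropout(x, p=0.5, training=self.training)`.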