Make values consistent across versions

I have a model that was trained on version 0.4, and I wanted to use the weights directly in the latest version (1.6). From what I read, this should not be an issue, since stored model weights are backward compatible. However, I see a difference in the values across the versions, with the older version giving slightly higher values. I think this is due to the nn.Upsample() layer used by the model. Is there some way I can make the values consistent across the two versions?

How did you isolate the difference to Upsample, and could you post an example tensor that would recreate the different outputs between these versions?
Note that you might want to have a look at the align_corners or the mode argument.
I don’t know if the default values have changed, but it should be a quick check.
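A minimal sketch of such a quick check, assuming a deterministic ramp input so both environments see identical data (the shapes are arbitrary):

```python
import torch
import torch.nn as nn

# Deterministic input (avoids RNG differences between versions);
# a 5D tensor is required for trilinear mode
x = torch.arange(64, dtype=torch.float32).view(1, 1, 4, 4, 4)

# Pass mode and align_corners explicitly so changed defaults cannot matter
up = nn.Upsample(scale_factor=2, mode='trilinear', align_corners=True)
out = up(x)

# Run this in both environments and compare the printed values
print(torch.__version__, out.sum().item(), out.mean().item())
```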

@ptrblck thank you for the reply. From the link and looking at my model, I realized that, at least in the forward pass, just the nn.Upsample() layer seems to be the reason why values would be different across the two versions. I explicitly used mode='trilinear', align_corners=True while training the model in PyTorch 0.4. Are there other elements I should check, since the value difference is a bit strange?

I don’t have any specific layers in mind, as 0.4 is quite old by now and I don’t remember exactly what has changed since then. However, it seems that you’ve already isolated it to nn.Upsample, or are you not sure where the difference is coming from?

@ptrblck I am not sure where the difference is coming from. I just took a guess based on the attached link.

@ptrblck it would be really great if you could suggest items that might be causing this issue, so I can take a look at them.

I think a full comparison of the forward pass would be useful.
To do so, you could use forward hooks (as explained here) to save all intermediate output tensors.
Once the hooks are working, use a constant tensor (e.g. all ones) for both models, store all activations, and compare them.
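A minimal sketch of that comparison, assuming a placeholder MyModel and checkpoint path (the layer filter, input shape, and file names are illustrative):

```python
import torch
import torch.nn as nn

def attach_hooks(model, store):
    # Register a forward hook on every leaf module to record its output
    for name, module in model.named_modules():
        if len(list(module.children())) == 0:  # leaf modules only
            def hook(mod, inp, out, name=name):
                store[name] = out.detach().cpu().clone()
            module.register_forward_hook(hook)

model = MyModel()  # placeholder: build the same architecture in both versions
model.load_state_dict(torch.load('checkpoint.pth'))
model.eval()

activations = {}
attach_hooks(model, activations)

# Constant input so both runs see identical data
x = torch.ones(1, 1, 16, 16, 16)
with torch.no_grad():
    model(x)

# Save per version, e.g. activations_04.pth and activations_16.pth
torch.save(activations, 'activations_16.pth')

# Afterwards, load both files in one environment and find the first mismatch:
# acts_a = torch.load('activations_04.pth')
# acts_b = torch.load('activations_16.pth')
# for name in acts_a:
#     if not torch.allclose(acts_a[name], acts_b[name], atol=1e-6):
#         print('first mismatch at', name)
#         break
```

The first layer whose outputs diverge should tell you whether nn.Upsample is indeed the culprit or whether an earlier layer already produces different values.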