Models saved on Windows load differently on Linux

I found that when I load and evaluate a model on a different operating system than the one it was trained and saved on, the results are inconsistent.

Phenomenon:
I trained a model on Windows 10, evaluated it on the test set before saving, and got accuracy = 63.91%. Then I saved it as “model_epoN”.

  1. When I load and evaluate it on the same Windows 10 machine, the results are consistent, i.e., accuracy = 63.91% again.
  2. But when I load and evaluate it on Ubuntu, the results are inconsistent: accuracy = 0.1%, as if the model were guessing randomly.

I’m aware of the difference between CRLF and LF line endings and have already adjusted the datasets accordingly, so I’m fairly sure the problem lies elsewhere.

Version information:
The PyTorch version on Windows 10 (Python 3.6) is 0.3.0b0+591e73e (peterjc123 build).
The PyTorch version on Ubuntu 16.04 (Python 3.5) is 0.3.0.post4 (official).

Save function:

def save_checkpoint(parser, epoch):
    torch.save({'state_dict': parser.state_dict()},
               'model_epo' + str(epoch + 1))
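One thing worth ruling out first (a hypothetical debugging helper, not part of the code above): if the checkpoint file is copied between machines by a tool that performs line-ending conversion (e.g. FTP in ASCII mode, or a Git checkout with autocrlf enabled), the binary data gets silently corrupted. Comparing a checksum of the saved file on both machines would confirm the bytes arrived intact:

```python
import hashlib

def file_md5(path):
    """Return the MD5 hex digest of a file, read in binary mode."""
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

# Run this on both machines; if the digests differ, the transfer
# (not torch.load) corrupted the checkpoint.
# print(file_md5('model_epo1'))
```

If the digests match, the file itself is fine and the discrepancy must come from how the checkpoint is deserialized.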

Load function:

def load_checkpoint(filename, parser):
    checkpoint = torch.load(filename)
    parser.load_state_dict(checkpoint['state_dict'])
    return parser
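As a possible workaround (a sketch, not from the original post, assuming every value in `state_dict()` can be converted to a NumPy array, e.g. via `tensor.cpu().numpy()`): exporting the weights to NumPy's `.npz` format sidesteps `torch.save` entirely, and the resulting file can be re-imported on the other OS and copied back into the model before calling `load_state_dict`.

```python
import numpy as np

def export_state_dict(state_dict, path):
    # Each value is assumed convertible to a NumPy array
    # (for torch tensors: v.cpu().numpy()).
    np.savez(path, **{k: np.asarray(v) for k, v in state_dict.items()})

def import_state_dict(path):
    # Returns a plain dict of NumPy arrays; wrap them back into
    # tensors (e.g. torch.from_numpy) before load_state_dict.
    with np.load(path) as data:
        return {k: data[k] for k in data.files}
```

This keeps the serialization format fully OS-independent at the cost of an extra conversion step on load.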

No error or warning was raised. My guess is that the format used to save the state_dict differs between operating systems?
Currently, I’m not able to write a minimal working example because the model is complicated and builds dynamic nets in each iteration. I haven’t yet tried training the model on Linux and testing it on Windows, but I will; I just need some time.
I hope the problem can be solved in PyTorch v0.4.0.

Did you find a workaround? I have the same problem and also need to load my model on Linux to dockerize it.