Different results when loading model

Hello

Yet another question about loading models with a fixed seed.

I have followed every post on the subject and am trying to achieve deterministic results with the commonly proposed solution:

import os
import random

import numpy as np
import torch

def seed_torch(seed=1029):
    random.seed(seed)                          # Python RNG
    os.environ['PYTHONHASHSEED'] = str(seed)   # hash seed (only takes effect if set before the interpreter starts)
    np.random.seed(seed)                       # NumPy RNG
    torch.manual_seed(seed)                    # PyTorch CPU RNG
    torch.cuda.manual_seed(seed)               # current GPU RNG
    torch.cuda.manual_seed_all(seed)           # all GPUs, if you are using multi-GPU
    torch.backends.cudnn.benchmark = False     # disable the cuDNN autotuner
    torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels

seed_torch()

But without success: each time I evaluate the same single data point I get a different result.

My model was trained on another computer, but should that matter?
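
For context, this is roughly how I load and evaluate (Net, model.pt and the input shape are placeholders, not my exact code):

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = Net()                                                # placeholder model class
model.load_state_dict(torch.load('model.pt', map_location=device))
model.to(device)
model.eval()                                                 # disable dropout, use running BatchNorm stats

with torch.no_grad():
    x = torch.randn(1, 8, 128, device=device)               # single data point (placeholder shape)
    out = model(x)
print(out)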

I have checked the weights of each layer and they are identical on every load; it is only the output that differs.
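
This is roughly the check I used to confirm the weights match between loads (again with placeholder names):

sd_a = torch.load('model.pt', map_location='cpu')            # first load of the checkpoint
sd_b = torch.load('model.pt', map_location='cpu')            # second load of the same file
for name in sd_a:
    assert torch.equal(sd_a[name], sd_b[name]), f'mismatch in {name}'
print('all parameters identical across loads')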

Any ideas?

Could it be because I'm using nn.MaxPool1d, which might fall under this note in Reproducibility — PyTorch 2.1 documentation?

“… and many forms of pooling, padding, and sampling. There currently is no simple way of avoiding non-determinism in these functions.”
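
If it helps, I could also try torch.use_deterministic_algorithms(True) from that same Reproducibility page; as far as I understand it raises a RuntimeError as soon as a non-deterministic kernel is actually used, which should tell me whether the pooling (or anything else) is really the culprit:

torch.use_deterministic_algorithms(True)   # error out if any non-deterministic op runs

with torch.no_grad():
    out = model(x)                         # should fail loudly here if a non-deterministic kernel is hit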