Using the same starting parameters on every run

I am not sure how to make the learning process reproducible, i.e. how to start training from the same parameters each time.

I used this:

import numpy as np
import torch

torch.manual_seed(0)                                          # seed PyTorch's CPU RNG
if torch.cuda.is_available(): torch.cuda.manual_seed_all(0)   # seed the RNG on every GPU
torch.backends.cudnn.benchmark = False                        # disable cuDNN autotuning
np.random.seed(0)                                             # seed NumPy's RNG

But this is not enough. Any ideas?

I am checking the parameters via:
print(list(m.parameters()))
This outputs something like:

[Parameter containing:
tensor([[-2.5058e-01, -4.3388e-01,  8.4871e-01,  ..., -7.4255e-01, ...

But it is different each time I run it.

I tried saving the parameters to a file like this:
torch.save({'state_dict': m.state_dict()}, 'm1.p') # first time
torch.save({'state_dict': m.state_dict()}, 'm2.p') # next time

Listing and comparing the two files shows that they differ:

-rw-r--r-- 1 root root 387361 Mar  8 14:05 m1.p
-rw-r--r-- 1 root root 387361 Mar  8 13:44 m2.p
Binary files m1.p and m2.p differ
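
Instead of diffing the files byte-for-byte, you could also load the two checkpoints and compare them tensor by tensor. A minimal sketch, reusing the 'state_dict' key and the m1.p / m2.p file names from above:

import torch

sd1 = torch.load('m1.p')['state_dict']   # checkpoint from the first run
sd2 = torch.load('m2.p')['state_dict']   # checkpoint from the second run

# Report which parameter tensors differ between the two runs
for name in sd1:
    same = torch.equal(sd1[name], sd2[name])
    print(f"{name}: {'identical' if same else 'DIFFERENT'}")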

I think you would also need to set the deterministic flag to True:

torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

From the docs: https://pytorch.org/docs/stable/notes/randomness.html
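
Putting the seeds and the cuDNN flags together, a complete setup could look roughly like this. This is only a sketch; seed_everything is a hypothetical helper name, and the flags follow the randomness notes linked above:

import random
import numpy as np
import torch

def seed_everything(seed=0):
    random.seed(seed)                      # Python's built-in RNG
    np.random.seed(seed)                   # NumPy's RNG
    torch.manual_seed(seed)                # PyTorch CPU RNG
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)   # PyTorch GPU RNGs
    torch.backends.cudnn.deterministic = True   # force deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False      # disable cuDNN autotuning

seed_everything(0)

Depending on your PyTorch version, the same notes also describe torch.use_deterministic_algorithms(True) for stricter guarantees.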

I used IPython (Jupyter) cells, so you can re-run any cell whenever you like.
The trick was that you need torch.manual_seed(0) right in front of the command:

torch.manual_seed(0)
m = MyModel(vocab_size, n_fac).to("cuda")

Just having torch.manual_seed(0) at the very start of the notebook is not enough, because every time you re-run the cell with m = MyModel(vocab_size, n_fac).to("cuda") you will get different parameters at initialization.
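
As a sanity check, you can construct the model twice in the same session, reseeding right before each construction, and confirm that the initial parameters really are identical. A sketch, using the same MyModel(vocab_size, n_fac) call as above:

torch.manual_seed(0)
m1 = MyModel(vocab_size, n_fac).to("cuda")

torch.manual_seed(0)
m2 = MyModel(vocab_size, n_fac).to("cuda")

# Every parameter tensor should be bit-for-bit identical
for p1, p2 in zip(m1.parameters(), m2.parameters()):
    assert torch.equal(p1, p2)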