I was seeding my code with:
import os
import torch

seed = args.seed  # args comes from argparse
if seed is None:
    # seed was not set on the command line, so draw a random one
    # (note: ord(os.urandom(1)) can only produce seeds in 0-255)
    seed = ord(os.urandom(1))
print(f'seed: {seed}')
torch.manual_seed(seed)
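As an aside, `ord(os.urandom(1))` only gives 256 possible seeds. A sketch of drawing a wider seed from the same entropy source (`random_seed` is just an illustrative helper name, not part of my code):

```python
import os

def random_seed() -> int:
    # Read four random bytes instead of one, covering the full
    # 32-bit range rather than just 0-255.
    return int.from_bytes(os.urandom(4), "big")

seed = random_seed()
print(f"seed: {seed}")
```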
Then I trained my model twice, but the final errors are not exactly the same (they agree to about six significant figures):
one run:
[1, 196], (train_loss: 2.080742538583522, train error: 0.7419323979591836) , (test loss: 1.9844910830259324, test error: 0.70859375)
another:
[1, 196], (train_loss: 2.0807455449688192, train error: 0.7419523278061224) , (test loss: 1.9844940572977066, test error: 0.70859375)
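For reference, the only RNG I seed is torch's CPU generator. My understanding is that full run-to-run reproducibility may also require seeding Python's `random`, NumPy, and the CUDA generators, plus the cuDNN determinism flags; a sketch of what I mean (the cuDNN flags only matter on GPU):

```python
import random

import numpy as np
import torch

def seed_everything(seed: int) -> None:
    # Seed every RNG that training code commonly touches.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # CPU generator
    torch.cuda.manual_seed_all(seed)  # all GPU generators (no-op without CUDA)
    # Ask cuDNN for deterministic kernels (GPU only; may be slower).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

# Sanity check: re-seeding reproduces the same draws on CPU.
seed_everything(0)
a = torch.rand(3)
seed_everything(0)
b = torch.rand(3)
print(torch.equal(a, b))
```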