In your first approach you are resetting the seeds before calling train_eval_loop, which makes sure that the random operations inside train_eval_loop sample the same random numbers on every call and thus yield the same results.
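Here is a minimal sketch of that behavior, assuming a hypothetical seed_everything helper and a toy train_eval_loop that just draws a random tensor in place of your real loop:

```python
import random
import numpy as np
import torch

def seed_everything(seed=0):
    # reset all relevant RNGs to a fixed state
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

def train_eval_loop():
    # stand-in for the real training/evaluation loop; it just draws a random tensor
    return torch.randn(3)

for _ in range(2):
    seed_everything(0)        # first approach: reset the seeds before every call
    print(train_eval_loop())  # both iterations print the identical tensor
```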
In the second approach, you are seeding the code only once, so that each subsequent call into train_eval_loop uses the pseudorandom number generator in its current state and can yield different results. However, rerunning the second approach in separate sessions should again reproduce the same results, since the generator always starts from the same seed at the top of the script.
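A sketch of the second approach, using the same toy train_eval_loop placeholder as above:

```python
import torch

torch.manual_seed(0)          # second approach: seed a single time per session

def train_eval_loop():
    # stand-in for the real training/evaluation loop; it just draws a random tensor
    return torch.randn(3)

for _ in range(2):
    print(train_eval_loop())  # the two calls print different tensors

# Rerunning this whole script prints the same two tensors again, because the
# generator starts from the same seed at the beginning of every session.
```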
Usually you would stick to the second approach, since you don’t want to sample the same “random” numbers in each training iteration (they would not be random anymore).