Problem using torch.manual_seed

Hello,

I’m trying to implement neuroevolution with PyTorch, and I’ve run into a problem when I try to recover the perturbations generated by Gaussian noise.
The principle is:

  • I start from a base individual
  • I create a number of offspring. For each offspring I:
  1. Select an integer seed, using numpy

  2. Use torch.manual_seed(numpy_seed)

  3. For each tensor in state_dict().values() I create a normal perturbation using:

     perturbation = torch.ones_like(v).normal_()
    
  4. Set the new tensor with v.copy_(v + perturbation*std)

  • I record only the seed.
  • I get fitnesses for all offspring.
  • Then, I want to move the base individual in the direction indicated by the rewards.
  • The problem arises when I try to recover the perturbation:
    For each offspring I:
  1. Get the corresponding seed, and call torch.manual_seed(this_seed)
  2. Regenerate the perturbation, which is not the same!
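
To make the steps above concrete, here is a stripped-down sketch of what I’m doing (simplified names, with a plain nn.Linear standing in for my actual individual class):

```python
import numpy as np
import torch
import torch.nn as nn

def add_offspring(base, std=0.1):
    # Draw an integer seed with numpy; only the seed is recorded
    seed = int(np.random.randint(0, 2**31 - 1))
    torch.manual_seed(seed)
    with torch.no_grad():
        for v in base.state_dict().values():
            perturbation = torch.ones_like(v).normal_()
            v.copy_(v + perturbation * std)
    return seed

def regenerate(base, seed, std=0.1):
    # Later: re-seed and regenerate what should be the same perturbations
    torch.manual_seed(seed)
    return [torch.ones_like(v).normal_() * std
            for v in base.state_dict().values()]
```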

I can’t figure out why. Could someone help?
In case my explanation wasn’t clear, here’s a link to the code:


Thanks a lot!

I’ve tried to locate the error in your code, but couldn’t find the right place.
Could you explain a bit where the error occurs?

The seeding and generation of perturbation is deterministic:

import torch

s = 2809            # any fixed integer seed
p = torch.empty(5)  # stand-in for a parameter tensor

torch.manual_seed(s)
perturbation = torch.ones_like(p).normal_()
print(perturbation)

This results in exactly the same values on each run.

Hey, thanks for trying it out.

The problem occurs in either improve or improve_bis. When I look at the perturbation regenerated using the seed, it is never the same as the one I created in the add_pop method.

In improve and improve_bis you are getting the seeds from normalized_fitness which is returned by normalize_dico, while in add_pop you get a new random seed from a np.random.randint.

It looks like the results should be different. Am I missing something?

Well, in add_pop I actually use the int from numpy as the seed in torch.manual_seed, and I keep it as a memory in fitness. Afterwards, when I try to improve the base individual, I regenerate the perturbation using this same seed, so I expected it to be the same perturbation as before. No?

Ok, I see. Thanks for the info!
I see a difference between improve and improve_bis in the order of setting the seed and getting the parameters, but this does not explain the difference between add_pop and improve_bis.
I’ll try to debug it later.

Yep, I tried both orders to see if I could figure out the problem, but so far in vain.

Thanks a lot, that’s very kind!

Sorry for the late reply.
It seems the get_clone function is messing up the random values, and I don’t know exactly what happens.
However, just set your seed in add_pop right before the for loop and you should get the same results.

I’ll try to debug this issue a bit further.

OK, I found the issue.
In get_clone you create a new instance of Ind and copy the weights into this copy.
While instantiating this class, the Linear layers are initialized with random weights, so the PRNG has already been used by the time we get to the perturbation code.
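
You can verify this directly by comparing the PRNG state before and after instantiating a layer (a quick standalone check, not from your code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
before = torch.get_rng_state()
_ = nn.Linear(2, 2)  # weight and bias are sampled here
after = torch.get_rng_state()
print(torch.equal(before, after))  # False: instantiating the layer advanced the PRNG
```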

Hey! Thanks a lot for sticking with the problem! That’s very nice of you :slight_smile:

I’m not sure I got it. Even if the layers are initialized with random weights, shouldn’t this be fixed when I copy the weights?

The weights will be the same if you copy them, but you are initializing Ind again in get_clone.
This means that the PRNG will be called to sample the new weights.
Calling .normal_() after this will yield different random numbers.
Have a look at the following example:

import torch
import torch.nn as nn

seed = 2809

torch.manual_seed(seed)
print(torch.empty(5).normal_())

torch.manual_seed(seed)
print(torch.empty(5).normal_()) # Same numbers as before

torch.manual_seed(seed)
# Init a model before sampling
model = nn.Linear(10, 10)
print(torch.empty(5).normal_())
# Numbers are different, since the PRNG was called in nn.Linear

Oh! I see!
Then I suppose I should call manual_seed after initializing the model, no?

Exactly. Just add torch.manual_seed right before the for loop.
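
In a stripped-down sketch (a plain nn.Linear standing in for your Ind class), the fixed order looks like this:

```python
import torch
import torch.nn as nn

def perturb_clone(base, seed, std=0.1):
    # Clone first: this consumes the PRNG to initialize the new layers...
    clone = nn.Linear(2, 2)
    clone.load_state_dict(base.state_dict())
    # ...then seed, so every draw below is reproducible from `seed` alone
    torch.manual_seed(seed)
    with torch.no_grad():
        for v in clone.state_dict().values():
            v.add_(torch.ones_like(v).normal_() * std)
    return clone

base = nn.Linear(2, 2)
a = perturb_clone(base, seed=20595)
b = perturb_clone(base, seed=20595)
# Both clones now carry identical perturbations
print(all(torch.equal(x, y) for x, y in zip(a.state_dict().values(),
                                            b.state_dict().values())))  # True
```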

Haha! You were right! It does work!
Here’s the test output:

Adding nb 0
seed: 20595

tensor([[ 1.0241, -0.2424],
    [ 1.8578, -1.2400],
    [ 1.1870, -1.7786],
    [ 2.1059,  0.0410],
    [-0.6069, -0.7865]])

tensor([-0.3170, -0.1979,  0.3467, -0.1865,  1.0867])

tensor([[-0.4042,  1.0822,  0.0599, -0.2872,  0.0582]])

tensor([-0.2743])

Adding nb 1

seed: 155213

tensor([[-0.8802, -0.0765],
    [-2.2938, -0.9288],
    [-1.2693, -0.8255],
    [ 0.0502, -0.4021],
    [-0.6032,  1.7732]])

tensor([ 0.0450, -0.4972,  1.1490,  1.3801,  2.4020])

tensor([[-0.7401,  1.1411,  0.3848, -0.8668, -0.6465]])

tensor([-0.7383])

seed: 20595

Character 0

tensor([[ 1.0241, -0.2424],
    [ 1.8578, -1.2400],
    [ 1.1870, -1.7786],
    [ 2.1059,  0.0410],
    [-0.6069, -0.7865]])

tensor([-0.3170, -0.1979,  0.3467, -0.1865,  1.0867])

tensor([[-0.4042,  1.0822,  0.0599, -0.2872,  0.0582]])

tensor([-0.2743])

seed: 155213

Character 1

tensor([[-0.8802, -0.0765],
    [-2.2938, -0.9288],
    [-1.2693, -0.8255],
    [ 0.0502, -0.4021],
    [-0.6032,  1.7732]])

tensor([ 0.0450, -0.4972,  1.1490,  1.3801,  2.4020])

tensor([[-0.7401,  1.1411,  0.3848, -0.8668, -0.6465]])

tensor([-0.7383])

Well, at least this part works; the rest is still far from it, but that’s another story.
Thanks a lot for debugging this!
