Modifying parameters directly

Hi. I have been working on an implementation of Evolution Strategies (https://arxiv.org/abs/1703.03864) in PyTorch. This is a gradient-free optimisation approach to RL problems in which the neural net parameters are perturbed and an update is computed from the rewards obtained at these perturbations.
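For context, the update I am computing is roughly the one in the paper. A simplified single-worker sketch, where theta is a flat tensor of parameters and evaluate, n, sigma and alpha are just illustrative names rather than my actual code:

import torch

def es_step(theta, evaluate, n=50, sigma=0.1, alpha=0.01):
    # sample Gaussian perturbations, score each one, and move the
    # parameters in the reward-weighted direction of the noise
    eps = [torch.randn_like(theta) for _ in range(n)]
    rewards = torch.tensor([evaluate(theta + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalise rewards
    grad_estimate = sum(r * e for r, e in zip(rewards, eps)) / (n * sigma)
    return theta + alpha * grad_estimate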

For the hacky example I have got working, I have resorted to doing perturbations like:

layer.weight = torch.nn.Parameter(current_weight + eps_weight)
layer.bias = torch.nn.Parameter(current_bias + eps_bias)

where layer is, for example, an nn.Linear layer.

My problem is that for a complex net I can’t see a way to perturb all the parameters cleanly. Ideally I would iterate through model.parameters() and perturb each one in turn, but it seems model.parameters() does not return the actual modules in the model, just a copy of the parameters, so as far as I can tell I cannot change the weights this way.
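For concreteness, this is roughly the loop I tried (just a sketch; sigma is my perturbation scale):

for p in model.parameters():
    # this only rebinds the local name p, so the model's weights are untouched
    p = p + sigma * torch.randn_like(p)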

Is there a clean way to update neural net parameters directly?

Thanks,
Tom

Hi, were you ever able to resolve this issue?

Pretty much. The way I ended up doing it was to get the parameters into a state_dict, update them in the dict, and then use load_state_dict to apply the update.
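Roughly like this sketch (model and sigma are placeholder names; I only perturb floating-point tensors so any integer buffers are left alone):

import torch

sigma = 0.05
state = model.state_dict()
perturbed = {
    k: v + sigma * torch.randn_like(v) if v.dtype.is_floating_point else v
    for k, v in state.items()
}
model.load_state_dict(perturbed)  # pushes the perturbed values back into the net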

So I am assuming your eps_weight is getting updated by backpropagation, right? Can you please share a snippet of your code that enables this?

Right now I was thinking of adding eps_weight to x.data; however, I guess that won’t let me backpropagate to eps_weight.
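Something like this sketch is what I mean (illustrative names only). Since .data bypasses autograd, the in-place addition is not tracked, so no gradient would flow back to eps_weight:

layer.weight.data.add_(eps_weight)  # not recorded by autograd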