Hi. I have been implementing Evolution Strategies (https://arxiv.org/abs/1703.03864) in PyTorch. This is a gradient-free optimisation approach to RL problems, in which the neural net's parameters are perturbed with noise and an update is computed from the rewards obtained at those perturbations.
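For context, here is a minimal toy sketch of the ES update from the paper, θ ← θ + α/(nσ) Σᵢ Fᵢ εᵢ, with a made-up scalar reward function standing in for an RL rollout (all names and hyperparameters here are illustrative):

```python
import torch

torch.manual_seed(0)


def f(theta):
    # Toy "reward": negative squared distance to a target point.
    return -((theta - torch.tensor([1.0, -2.0])) ** 2).sum()


theta = torch.zeros(2)
n, sigma, alpha = 50, 0.1, 0.05

initial_reward = f(theta)
for _ in range(200):
    eps = torch.randn(n, theta.numel())           # n noise directions
    rewards = torch.stack([f(theta + sigma * e) for e in eps])
    # ES update: step along the reward-weighted average of the noise.
    theta = theta + alpha / (n * sigma) * (rewards[:, None] * eps).sum(0)
```

The question below is only about how to apply such perturbations to a full PyTorch model, not about the update rule itself.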
For the hacky example I have got working, I have resorted to doing perturbations like:

```python
layer.weight = torch.nn.Parameter(current_weight + eps_weight)
layer.bias = torch.nn.Parameter(current_bias + eps_bias)
```

where `layer` is, for example, a single layer of the net.
My problem is that for a complex net I can't see a way to perturb all the parameters in a nice way. Ideally I would iterate through `model.parameters()` and perturb each one in turn, but as far as I can tell `model.parameters()` does not return the actual modules of the model, just the parameter tensors, so I cannot reassign the weights this way.
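For concreteness, the kind of loop I have in mind is something like the following (a sketch, perturbing each parameter tensor in place under `torch.no_grad()`; the toy architecture is just for illustration, and I'm not sure whether mutating parameters like this is the sanctioned approach):

```python
import torch

# Toy model standing in for a more complex net.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.Tanh(),
    torch.nn.Linear(8, 2),
)

sigma = 0.1
before = [p.detach().clone() for p in model.parameters()]

# Perturb every parameter tensor in place, outside of autograd.
with torch.no_grad():
    for p in model.parameters():
        p.add_(sigma * torch.randn_like(p))
```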
Is there a clean way to update neural net parameters directly?