Manually update model parameters from a vector

Hi everyone, I’m trying to manually update the parameters of my model with a simple update rule like: new_param = old_param + lr * vector. I don’t know how to do it properly: online I only found how to manually update from gradients or how to multiply the parameters by a constant, but in my case I need to update each single parameter differently, and my vector has a one-dimensional shape while the model parameters are not organized like this. It’s my first experience with PyTorch, I hope you can help me.
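For reference, this is the kind of update I found online (the model here is just a made-up example); it works because every entry gets the same constant, which is exactly what I *don’t* want:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# The pattern I found online: scale every parameter by the same constant.
with torch.no_grad():
    for param in model.parameters():
        param.mul_(0.99)  # but I need a different value for each entry
```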

Hi, you need to provide more info. What are you updating the parameters for? Is this for a single layer or all layers? If it’s multiple layers, is it a different vector for each layer?

Is this vector static or is it trained along with the network?

The vector is a one-dimensional array that covers all the parameters of all the layers of the network. It is a natural gradient update for an RL algorithm, and it changes at each iteration of the algorithm, but within a single iteration it can be considered static. My problem is that in the usual loop, for param in model.parameters(), each param has its own particular shape. Maybe I should cut and reshape my vector to make it compatible with each param’s shape.
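Something like this (untested, with a random vector standing in for my natural-gradient vector) is what I have in mind:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
lr = 0.01

# Stand-in for the natural-gradient vector: one scalar per model parameter,
# in the order model.parameters() yields them.
num_params = sum(p.numel() for p in model.parameters())
v = torch.randn(num_params)

with torch.no_grad():
    offset = 0
    for param in model.parameters():
        n = param.numel()
        # cut this parameter's slice out of v and reshape it to match
        update = v[offset:offset + n].view_as(param)
        param.add_(update, alpha=lr)  # new_param = old_param + lr * slice
        offset += n
```

I also saw that torch.nn.utils has parameters_to_vector / vector_to_parameters, which seem to do this flattening and unflattening, but I’m not sure if that’s the right approach here.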

I’m not sure how you can have a single vector shared by all the layers since the different layers will (presumably) have different shapes.

So it sounds like this is a trainable parameter per layer? If so, the easiest option might be for you to just add it as a parameter per layer. You can then adjust the learning rate for this parameter as needed (maybe per iteration, as you say). That way, PyTorch will automatically train this vector for you without you having to adjust the sizes or anything.
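Roughly what I mean is something like this (a minimal, made-up sketch; LayerWithShift and shift are just illustrative names, not anything from your code):

```python
import torch
import torch.nn as nn

class LayerWithShift(nn.Module):
    # Hypothetical example: a linear layer plus a trainable per-layer vector.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Registered as a Parameter, so model.parameters() picks it up
        # and the optimizer updates it automatically.
        self.shift = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        return self.linear(x) + self.shift

model = LayerWithShift(4, 2)
# Parameter groups let you give the extra vector its own learning rate.
optimizer = torch.optim.SGD([
    {"params": model.linear.parameters()},
    {"params": [model.shift], "lr": 1e-2},
], lr=1e-1)
```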

If this isn’t what you want, could you share a small code snippet to show what you have in mind? I don’t think I fully understand what you are trying to do yet.