Question about multiple optimizers for different parts of model

Hi, I have a situation in my neural network that I don't know how to handle. Can someone help me sort it out?

I'm implementing a new graph pooling layer that will be inserted between graph conv layers. However, the learnable components in the pooling layer use a completely different loss that is calculated at the end of each epoch. I think I need to define a separate optimizer for the pooling layer, but how do I set up the other optimizer so that it doesn't try to change the weights of the pooling layer and only updates the weights in the conv layers? Thanks.

You can pass a subset of a model's parameters to an optimizer when creating it, which ensures that the optimizer only updates those parameters.
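For example, a minimal sketch of this setup (the model and layer names here are illustrative stand-ins, not from your actual code): one optimizer receives only the conv layers' parameters, and a second one receives only the pooling layer's parameters. Stepping the first will then never modify the pooling layer's weights, even though `backward()` computes gradients for everything.

```python
import torch
import torch.nn as nn

# Hypothetical model: two "conv" layers with a learnable pooling layer
# in between (nn.Linear used as a simple stand-in for real graph layers).
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Linear(16, 32)
        self.pool = nn.Linear(32, 32)   # learnable pooling layer
        self.conv2 = nn.Linear(32, 8)

    def forward(self, x):
        return self.conv2(self.pool(self.conv1(x)))

model = Net()

# Optimizer for the conv layers only -- it never sees pool's parameters.
conv_params = list(model.conv1.parameters()) + list(model.conv2.parameters())
opt_conv = torch.optim.Adam(conv_params, lr=1e-3)

# Separate optimizer for the pooling layer, stepped with its own loss
# (e.g. once at the end of each epoch).
opt_pool = torch.optim.Adam(model.pool.parameters(), lr=1e-3)
```

In the training loop you would call `opt_conv.step()` after the main loss's backward pass, and `opt_pool.step()` after the pooling loss's backward pass. Remember to `zero_grad()` on both optimizers (or on the model) so gradients from one loss don't leak into the other update.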
