Hi, I have a situation in my neural network that I don’t know how to handle, can someone help me get this straight?
I’m implementing a new graph pooling layer that will be inserted between graph conv layers. However, the learnable components in the pooling layer use a completely different loss, which is only computed at the end of each epoch. I think I need a separate optimizer for the pooling layer, but how do I set up the main optimizer so that it doesn’t touch the pooling layer’s weights and only updates the weights in the conv layers (something like the sketch below)? Thanks.
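To make the setup concrete, here is a minimal sketch of what I have in mind, assuming PyTorch; `Net`, `conv1`, `pool`, `conv2`, and the use of `nn.Linear` as stand-ins for the real graph layers are all just placeholders:

```python
import torch
import torch.nn as nn

# Hypothetical model: two conv-style layers with the pooling layer in between.
class Net(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.conv1 = nn.Linear(dim, dim)  # stand-in for a graph conv layer
        self.pool = nn.Linear(dim, dim)   # stand-in for the learnable pooling layer
        self.conv2 = nn.Linear(dim, dim)  # stand-in for another graph conv layer

    def forward(self, x):
        x = self.conv1(x)
        x = self.pool(x)
        return self.conv2(x)

model = Net(16)

# Optimizer for everything *except* the pooling layer:
conv_params = [p for name, p in model.named_parameters()
               if not name.startswith("pool")]
opt_conv = torch.optim.Adam(conv_params, lr=1e-3)

# Separate optimizer that only sees the pooling layer's parameters:
opt_pool = torch.optim.Adam(model.pool.parameters(), lr=1e-3)

# Intended usage:
#   per batch  -> main task loss, backward, opt_conv.step()
#   per epoch  -> pooling loss,   backward, opt_pool.step()
```

The idea is that each optimizer only ever receives the parameters it should update, so stepping `opt_conv` never moves the pooling weights. Is this the right way to do it, or is there a cleaner approach?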