Suppose I have a network that looks like `layer_1 -> layer_2 -> layer_3`, with all weights pretrained. Say `layer_1`'s output is `output_1`, and `layer_2`'s output is `output_2`.
Now I want to train another layer, `my_layer_2`, taking the original `output_1` as input and `output_2` as the target, so that if I swap the trained `my_layer_2` in for `layer_2` in the original network, the difference in behavior will be small.
What is a good way to do this? Is there an existing mechanism for it (e.g. in a standard framework), or do I need to hack it together myself?
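For concreteness, here is a minimal sketch of the setup I have in mind. It frames the problem as layer-wise regression (sometimes called per-layer distillation): freeze the pretrained layers, feed data through `layer_1` to get `output_1`, use the frozen `layer_2` to produce the target `output_2`, and train `my_layer_2` with an MSE loss. I'm assuming PyTorch here, and all layer shapes and hyperparameters are illustrative placeholders, not part of my actual network:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-ins for the pretrained, frozen layers (illustrative sizes).
layer_1 = nn.Linear(16, 32)
layer_2 = nn.Linear(32, 32)
for p in list(layer_1.parameters()) + list(layer_2.parameters()):
    p.requires_grad_(False)

# The replacement layer to be trained from scratch.
my_layer_2 = nn.Linear(32, 32)
opt = torch.optim.Adam(my_layer_2.parameters(), lr=1e-2)

for step in range(300):
    # Batches should come from the real input distribution; random here.
    x = torch.randn(64, 16)
    with torch.no_grad():
        output_1 = layer_1(x)         # input to both layer_2 and my_layer_2
        output_2 = layer_2(output_1)  # regression target
    loss = nn.functional.mse_loss(my_layer_2(output_1), output_2)
    opt.zero_grad()
    loss.backward()
    opt.step()

final_loss = loss.item()
print(final_loss)
```

Is this roughly the right approach, or is there a more principled mechanism (e.g. training with the end-to-end network loss while only `my_layer_2` is unfrozen)?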