Train an individual layer

Suppose I have a network that looks like layer_1 -> layer_2 -> layer_3, with all the weights pretrained. Say layer_1's output is output_1 and layer_2's output is output_2.

Now I want to train another layer, my_layer_2, using the original output_1 as its input and output_2 as its target, so that if I replace layer_2 in the original network with the trained my_layer_2, the overall difference won't be large.
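For concreteness, here is a minimal sketch of the setup being described, assuming PyTorch; the layer types, sizes, and names are made up for illustration and not from any existing code:

```python
import torch
import torch.nn as nn

# Hypothetical pretrained network: layer_1 -> layer_2 -> layer_3.
# In practice the weights would be loaded, e.g. via load_state_dict.
layer_1 = nn.Linear(32, 64)
layer_2 = nn.Linear(64, 64)
layer_3 = nn.Linear(64, 10)
pretrained = nn.Sequential(layer_1, layer_2, layer_3)

# Replacement layer, to be trained so that
# my_layer_2(output_1) ≈ output_2 = layer_2(output_1).
my_layer_2 = nn.Linear(64, 64)
```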

What is a good way to do this? Is there an existing mechanism, or do I need to hack it together myself?

I don't know of an existing mechanism for this. One way to hack it together is to generate output_1 and output_2 yourself (maybe by randomly generating inputs to layer_1?) and then train my_layer_2 on those (output_1, output_2) pairs.
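A rough sketch of that hack, continuing the made-up layer definitions above (the loss, optimizer, and batch sizes are assumptions, not a prescribed recipe): feed random inputs through the frozen layer_1 and layer_2 to get (output_1, output_2) pairs, then regress my_layer_2 onto them.

```python
optimizer = torch.optim.Adam(my_layer_2.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):
    # Random inputs stand in for real data here; sampling from the
    # actual training distribution would match the goal better.
    x = torch.randn(128, 32)

    # Generate targets with the frozen pretrained layers.
    with torch.no_grad():
        output_1 = layer_1(x)
        output_2 = layer_2(output_1)

    # Train my_layer_2 to reproduce layer_2's mapping.
    pred = my_layer_2(output_1)
    loss = loss_fn(pred, output_2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After training, you could build nn.Sequential(layer_1, my_layer_2, layer_3) and compare its outputs against the original network to check how large the end-to-end difference actually is.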