I have an idea in my head and I’ve done a few experiments but it never works out. I was wondering if someone more experienced here would be able to chime in.
Let’s say I use a pre-trained ResNet to denoise an image. I’m using the ResNet as the encoder of an autoencoder and it works fine. Then let’s say I want to add a new network after this autoencoder by simply adding more convolutions and expanding the tensor again. We go from:
- Noisy_image -> autoencoder -> output_clean -> loss

to:

- Noisy_image -> autoencoder -> small_network2 -> output_clean -> loss
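To make the two pipelines concrete, here is a minimal PyTorch sketch of what I mean. The module names (`autoencoder`, `small_net2`) and the tiny conv stacks are placeholders standing in for the pretrained-ResNet encoder/decoder and the extra network; the real models are bigger, but the wiring is the same:

```python
import torch
import torch.nn as nn

# Placeholder for the (pretrained-ResNet-encoder) autoencoder.
autoencoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),            # encoder: downsample
    nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),   # decoder: upsample
)

# Placeholder for small_network2: extra convolutions appended after the AE.
small_net2 = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)

noisy = torch.randn(1, 3, 64, 64)

out1 = autoencoder(noisy)               # pipeline 1: noisy -> AE -> loss
out2 = small_net2(autoencoder(noisy))   # pipeline 2: noisy -> AE -> net2 -> loss

print(out1.shape, out2.shape)  # both torch.Size([1, 3, 64, 64])
```

In both cases the loss is computed against the same clean target; the only difference is whether `small_net2` sits between the autoencoder output and the loss.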
In every experiment I’ve run, the performance is worse with this added network. What happens more often than not is that the autoencoder does all the work and the following network just learns to stay out of the way.
I was wondering whether the fact that the ResNet is pretrained reduces the chance for the other parts of the network to “learn”. If so, I think I would benefit from training from scratch, but that would be time-consuming.