Hi all,
I have a certain NN architecture, and given the activations of a certain layer, I want to run back through the network and recover the original input. I should be able to do this with some precision simply by creating a new network with inverted layers (deconv instead of conv, a linear layer with the transposed weight matrix, etc.), but I am wondering whether there is an easier way, programmatically speaking.
It would be nice to have a general solution, but in my specific case the architecture only contains conv, deconv, and leaky_relu layers.
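For reference, here is a rough sketch of the mirrored-network idea for exactly those three layer types, assuming PyTorch (inferred from the layer names) and an `nn.Sequential` model. Note the caveats: leaky_relu inverts exactly (slope `a` becomes `1/a`), but a transposed conv sharing the forward conv's weights is the adjoint, not the true inverse, so input recovery is only approximate; biases and stride > 1 (which needs `output_padding`) are ignored here:

```python
import torch
import torch.nn as nn

def invert(model: nn.Sequential) -> nn.Sequential:
    """Build a mirrored network that runs the layers in reverse order."""
    inverse_layers = []
    for layer in reversed(list(model)):
        if isinstance(layer, nn.Conv2d):
            # Mirror conv with a deconv sharing the same weight tensor.
            # Weight shapes line up: Conv2d(C_in, C_out) and
            # ConvTranspose2d(C_out, C_in) both store (C_out, C_in, kH, kW).
            inv = nn.ConvTranspose2d(layer.out_channels, layer.in_channels,
                                     layer.kernel_size, stride=layer.stride,
                                     padding=layer.padding, bias=False)
            inv.weight = layer.weight
        elif isinstance(layer, nn.ConvTranspose2d):
            # Mirror deconv with a conv, again sharing the weight tensor.
            inv = nn.Conv2d(layer.out_channels, layer.in_channels,
                            layer.kernel_size, stride=layer.stride,
                            padding=layer.padding, bias=False)
            inv.weight = layer.weight
        elif isinstance(layer, nn.LeakyReLU):
            # Exact inverse: y -> y if y > 0 else y / a.
            inv = nn.LeakyReLU(1.0 / layer.negative_slope)
        else:
            raise NotImplementedError(f"cannot invert {type(layer).__name__}")
        inverse_layers.append(inv)
    return nn.Sequential(*inverse_layers)

# Toy example (stride 1, no biases, so shapes round-trip cleanly).
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=1, padding=1, bias=False),
    nn.LeakyReLU(0.2),
)
x = torch.randn(1, 3, 16, 16)
acts = net(x)
x_rec = invert(net)(acts)   # same shape as x, approximate values
```

This stays entirely programmatic (one loop over the layers), but as far as I know there is no built-in "invert this module" utility, so some per-layer-type mapping like the above seems unavoidable.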
Thanks,