This is more of a discussion than a question. Suppose I have two directories: one with images [1 … N] and a second with modified versions of those same images. Image1, when modified, becomes modified_image_1, and ImageN becomes modified_image_N. I want to train a network on these directories and produce a model. An unseen ImageX, when passed through this model, should give me modified_image_X as output.
So, I would appreciate it if anyone could point me to how to do this, or to a similar example network.
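One common framing of this is supervised image-to-image translation: train an encoder–decoder on the (image, modified_image) pairs with a pixelwise loss. Here is a minimal sketch of that idea, with random tensors and an invented "inversion" transform standing in for your two directories (you would replace these with a real `Dataset` that loads your paired files); the tiny network and hyperparameters are illustrative, not a recommendation:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "pairs": random images and a made-up modification (here: inversion),
# standing in for your (image_i, modified_image_i) directories.
x = torch.rand(16, 3, 32, 32)
y = 1.0 - x  # hypothetical modification the network should learn

# A tiny fully-convolutional encoder-decoder; real tasks usually need
# something much deeper (e.g. a U-Net).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # pixelwise criterion; this *defines* what "modified" means

init_loss = loss_fn(model(x), y).item()
for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(init_loss, loss.item())  # loss should drop as the mapping is learned
```

Note that the choice of `L1Loss` bakes in a pixelwise notion of similarity, which connects to the point raised in the replies below about defining what "different" means.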
The problem is that you have no criterion for what “different” means. You could define the modification in terms of colors, shapes, semantics, texture… You have to define from which space to which other space you want to map/compare.
Yes, exactly. So should I just accumulate all the losses/differences between Output1 and Output2 into a Variable and train on that to learn a relation? But I highly doubt that would be of much use.
You have to understand that neural networks are trained to perform a task. With an untrained network you cannot measure disparity.
If you had, for example, a classifier, then you could compute a loss which measures the difference between two images in terms of how likely they are to belong to the same class (or whatever loss you use).
NNs recognize patterns, difficult patterns. But you need to “teach” them which samples are closer and which are farther according to your criteria.
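To make the classifier idea concrete, here is a small sketch of comparing two images through a classifier's output distribution. The linear "classifier" is untrained and purely illustrative (a real setup would use a trained model); KL divergence between the two predicted class distributions serves as one possible disparity measure:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical 10-class classifier; in practice this would be a trained model.
clf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

img1 = torch.rand(1, 3, 32, 32)
img2 = torch.rand(1, 3, 32, 32)

# Class distributions predicted for each image.
log_p1 = F.log_softmax(clf(img1), dim=1)
p2 = F.softmax(clf(img2), dim=1)

# KL divergence as a "how likely are they the same class" style disparity.
disparity = F.kl_div(log_p1, p2, reduction="batchmean")
print(disparity.item())
```

The point is that the disparity only means something relative to the task the network was trained on.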
Anyway, the closest thing to what you are looking for is metric learning: siamese networks with a contrastive or hinge loss, in which the NN learns to place pairs of samples closer or farther apart, and then you can compare latent-space representations to measure disparity. Still, you will have to define what “closer” means for you.
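A minimal sketch of that contrastive-loss setup, with hypothetical shapes, margin, and random similar/dissimilar labels (a real siamese pipeline would feed both images through the same embedding network and draw labels from your pairing criterion):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Shared embedding network (the "siamese" part: both inputs go through it).
embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))

def contrastive_loss(z1, z2, same, margin=1.0):
    # same == 1 pulls a pair together; same == 0 pushes it beyond the margin.
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

# Toy pairs with random "similar / dissimilar" labels.
a = torch.rand(8, 3, 32, 32)
b = torch.rand(8, 3, 32, 32)
same = torch.randint(0, 2, (8,)).float()

loss = contrastive_loss(embed(a), embed(b), same)
print(loss.item())
```

After training with such a loss, the distance between embeddings `embed(img1)` and `embed(img2)` becomes the disparity measure, under whatever notion of “closer” the pair labels encoded.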