I have a model that takes an input image x and an integer factor f; the factor is broadcast as a constant-valued channel and concatenated to the input image.
So if image x was a greyscale image [[1, 2], [3, 4]], and the factor was 2,
I'd feed the following tensor to the model:
[[1, 2], [3, 4]],
[[2, 2], [2, 2]]
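Building that input in code looks roughly like this (a minimal sketch; `add_factor_channel` is just a name I made up for the operation, and I'm assuming NCHW-shaped tensors):

```python
import torch

def add_factor_channel(x, f):
    """Concatenate a constant channel filled with factor f to image x.

    x: tensor of shape (B, C, H, W); f: a scalar factor.
    """
    factor_channel = torch.full_like(x[:, :1], float(f))  # (B, 1, H, W) filled with f
    return torch.cat([x, factor_channel], dim=1)          # (B, C+1, H, W)

x = torch.tensor([[1., 2.], [3., 4.]]).reshape(1, 1, 2, 2)
inp = add_factor_channel(x, 2)  # greyscale image plus a constant channel of 2s
```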
Now I want a loss function that is computed on the combined outputs from a fixed set of factors, say [0, 1, 2, 3, 4], for the same input image, after which the optimizer takes a single step.
Say the outputs are [O0, O1, O2, O3, O4]; I want to compute the loss as f([O0, O1, O2, O3, O4]).
Can I do this in PyTorch somehow, or will I need to train a separate network for each factor?