Consider the graph G -> p -> L, where p = G(.) is a 2D vector output by a neural network G, and L is the loss, a function of p: L(p) = I1[p] - I2[p]. Here I1, I2 are images and the loss is a color-constancy constraint.
The problem I am facing is that, in order to obtain I1[p] and I2[p] by indexing, p cannot be a Variable. In that case the gradient cannot be backpropagated to G via p (as p won’t be part of the graph).
I think a similar concept is used in torch.nn.functional.grid_sample().
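As a sketch of that idea: grid_sample performs bilinear interpolation rather than hard indexing, and the interpolation weights depend smoothly on the coordinates, so gradients can flow back through the sampling point. The tensors below (small toy images and a single point p standing in for G's output) are hypothetical placeholders, not part of the original setup:

```python
import torch
import torch.nn.functional as F

# Toy 1x1x4x4 images standing in for I1 and I2 (hypothetical data).
I1 = torch.arange(16.0).reshape(1, 1, 4, 4)
I2 = torch.ones(1, 1, 4, 4)

# p stands in for the network output G(.); a leaf tensor here so we can
# check that a gradient reaches it. grid_sample expects coordinates
# normalized to [-1, 1].
p = torch.tensor([0.25, -0.5], requires_grad=True)

# grid has shape (N, H_out, W_out, 2) for a 4D input.
grid = p.view(1, 1, 1, 2)
v1 = F.grid_sample(I1, grid, align_corners=True)
v2 = F.grid_sample(I2, grid, align_corners=True)

loss = (v1 - v2).abs().sum()
loss.backward()
print(p.grad)  # gradient reached p through the differentiable sampling
```

Because sampling is soft (bilinear), p stays in the autograd graph and the color-constancy loss can update G, which is exactly what hard integer indexing I1[p] would break.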