I’d like to implement something similar to KeypointNet’s multi-view consistency and rotation losses. The gist is as follows:
Let M be our model, I an image, and T a transformation matrix. For the multi-view consistency loss, given an affine transformation matrix T, you’d like to impose equivariance on the model with respect to T: M(T*I) == T*M(I).
Similarly, for the rotation loss, given M(T*I) and M(I), you’d like to estimate T with T_ and enforce T == T_.
The pseudocode I’ve implemented looks as follows:
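(Simplified sketch only; transform and pose_estimator are placeholders, and the coordinate conventions need care in a real implementation, e.g. affine_grid expects the inverse mapping.)

import torch.nn.functional as F

def transform(I, T):
    # Warp a batch of images I (N, C, H, W) with 2x3 affine matrices T (N, 2, 3)
    # using torch's differentiable warping ops.
    grid = F.affine_grid(T, I.size(), align_corners=False)
    return F.grid_sample(I, grid, align_corners=False)

def keypoint_losses(model, pose_estimator, I, T):
    y = model(I)                  # M(I), keypoints of shape (N, K, 2)
    y_t = model(transform(I, T))  # M(T*I)
    # Multi-view consistency: M(T*I) == T*M(I), with T applied to the predicted keypoints.
    y_warped = y @ T[:, :, :2].transpose(1, 2) + T[:, :, 2].unsqueeze(1)
    consistency = F.mse_loss(y_t, y_warped)
    # Rotation loss: estimate T from the two predictions and enforce T_ == T.
    T_hat = pose_estimator(y, y_t)
    rotation = F.mse_loss(T_hat, T)
    return consistency + rotation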
But what you want to do is not difficult.
You just have to create an additional nn.Module wrapper class, something like:
import torch.nn as nn

model = your_model  # your nn.Module

class DoubleForward(nn.Module):
    def __init__(self, single):
        super().__init__()
        self.single = single

    def forward(self, I, T):
        y_1 = self.single(I)      # M(I)
        y_2 = self.single(T * I)  # M(T*I), however the transform is applied
        # ...and so on
        return y_1, y_2
In the end you can have as many outputs as you want and feed them to your custom loss. You can run the module as many times as you want (but define it only once), paying attention to the order so you don’t build a wrong graph.
I don’t really know how performing operations on the loss outside an autograd Function or nn.Module class would affect things. Just remember that any operation you do must be done with autograd-compatible functions (that is, native differentiable torch functions).
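For instance, a minimal usage sketch, with the loss built only from differentiable torch ops (model, loader and the way T acts on the outputs are placeholders here):

import torch

wrapped = DoubleForward(model)
optimizer = torch.optim.Adam(wrapped.parameters(), lr=1e-4)

for I, T in loader:
    y_1, y_2 = wrapped(I, T)
    # Consistency term M(T*I) == T*M(I), written only with native differentiable torch ops.
    loss = torch.mean((y_2 - T * y_1) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()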
I can confirm that wrapping in a custom class works for me. However, it does not seem to behave well with BatchNorm. Is one forced to use InstanceNorm in this case?
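(i.e. something along these lines to swap the layers in place; just a sketch, the helper name is made up:)

import torch.nn as nn

def batchnorm_to_instancenorm(module):
    # Recursively replace every BatchNorm2d with an InstanceNorm2d of the same width.
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.InstanceNorm2d(child.num_features, affine=True))
        else:
            batchnorm_to_instancenorm(child)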
I noticed something off today with a similar problem. Perhaps the root cause is the same as in your case: calling the forward function multiple times is not well-behaved.