I would like to train two CNNs end to end, so that backpropagation updates the weights of both networks. For a better understanding, please look at the architecture in the attached file.
I feed depth features into the first CNN and get a disparity map. I then warp the input images with this disparity map, using SciPy for the interpolation (`map_coordinates`). The warped images go into the second CNN as a feature tensor, and I compute the mean squared error between the predicted image and the ground-truth image.
Now the problem: I moved the output tensor of the first CNN to the CPU (tensor-to-NumPy conversion), did the interpolation there during the forward pass, and converted the warped image back to a tensor (NumPy-to-tensor). The forward pass works perfectly, but `loss.backward()` does not compute gradients with respect to the weights, because the NumPy round-trip detaches the result from the autograd graph.
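A minimal sketch of why this happens (the tensor names here are hypothetical, just to illustrate the NumPy round-trip):

```python
import torch

# `w` plays the role of a weight in the first CNN.
w = torch.randn(3, requires_grad=True)
x = w * 2.0                       # still tracked by autograd

# Tensor -> NumPy -> tensor, as in the scipy interpolation step.
x_np = x.detach().numpy()         # leaves the autograd graph
y = torch.from_numpy(x_np)        # a fresh leaf tensor, no grad_fn

# Nothing connects `y` back to `w`, so backward() from any loss
# built on `y` can never reach w.grad.
print(y.requires_grad)            # False
print(y.grad_fn)                  # None
```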
What could be the solution for this problem? Thanks in advance.
Yes, I understand.
I would like to do the operation below using torch tensors directly, instead of converting to NumPy: SciPy's `map_coordinates`.
To my knowledge, both TensorFlow and PyTorch only offer an image resize operation; I have never seen bicubic interpolation with arbitrary coordinates like `map_coordinates`. If it is possible, then we can treat this problem as end-to-end learning… Do you have any ideas?
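One differentiable option is `torch.nn.functional.grid_sample`, which samples an image at arbitrary (normalized) coordinates and backpropagates through both the image and the sampling grid. The sketch below assumes a purely horizontal disparity warp and uses bilinear interpolation (recent PyTorch versions also accept `mode="bicubic"`); the function name and the disparity convention are my own assumptions, not from the original post.

```python
import torch
import torch.nn.functional as F

def warp_with_disparity(img, disp):
    """Differentiable warp, a sketch of replacing scipy's map_coordinates.

    img:  (N, C, H, W) input image tensor
    disp: (N, 1, H, W) disparity, assumed to be a horizontal shift in pixels
    """
    n, _, h, w = img.shape
    # Base pixel grid, same dtype/device as the image.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=img.dtype, device=img.device),
        torch.arange(w, dtype=img.dtype, device=img.device),
        indexing="ij",
    )
    xs = xs.unsqueeze(0) + disp.squeeze(1)   # shift x-coordinates by disparity
    ys = ys.unsqueeze(0).expand_as(xs)
    # grid_sample expects coordinates normalized to [-1, 1].
    xn = 2.0 * xs / (w - 1) - 1.0
    yn = 2.0 * ys / (h - 1) - 1.0
    grid = torch.stack((xn, yn), dim=-1)     # (N, H, W, 2)
    return F.grid_sample(img, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)
```

Because the grid is built from the disparity tensor with ordinary torch ops, `loss.backward()` should flow through the warp into the first CNN's weights, making the whole pipeline trainable end to end.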