Hi,
I am training a deep network regressor that takes a rectangular image as input and predicts a pixel location on that image. The initial convolution layers have rectangular kernels. After one inception layer, the feature map is square and the subsequent layers have square kernels.
I had trained a feed-forward CNN in PyTorch 0.2.0_3. The system was then restarted and the numpy and pytorch versions were updated. The rows and columns are interchanged in my new numpy version: an image is now (h, w, c) in numpy, while previously it was (w, h, c). I do not remember the previous numpy version.
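To make the axis change concrete, here is a minimal sketch of the transpose I applied to move from the old (w, h, c) layout to the new (h, w, c) layout. The 640x480 image size is hypothetical, just for illustration:

```python
import numpy as np

# Hypothetical image: 480 rows (h) x 640 cols (w), 3 channels,
# stored in the old (w, h, c) layout.
img_whc = np.zeros((640, 480, 3), dtype=np.float32)

# Swap the first two axes to get the (h, w, c) layout
# that the updated stack expects.
img_hwc = np.transpose(img_whc, (1, 0, 2))
print(img_hwc.shape)  # (480, 640, 3)
```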
I changed my input data format to the network accordingly, but the predictions from my previously trained network are now completely wrong: it predicts random points in and around a central region of the image.
When I try retraining the model with the same exact parameters, it overfits and does not generalize to the test dataset. I tried reinstalling the older version of pytorch (0.2.0_3) and also the newer version (1.0.0). In both cases, a new training run overfits, and the previously trained model predicts random points in a central region of the image.
Has anyone experienced such issues with a rectangular CNN? Is there some internal data processing for rectangular kernels that could have changed between versions? The input to my pytorch network is (b, c, h, w).
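For reference, this is the conversion I assume from a numpy (h, w, c) image to the network's (b, c, h, w) input, shown here in plain numpy with a hypothetical 480x640 image; `torch.from_numpy` on the result would give the actual input tensor:

```python
import numpy as np

# Hypothetical (h, w, c) image, e.g. a 480x640 RGB frame.
img_hwc = np.random.rand(480, 640, 3).astype(np.float32)

# Move channels first, then add a batch axis: (h, w, c) -> (1, c, h, w).
batch_bchw = np.transpose(img_hwc, (2, 0, 1))[None, ...]
print(batch_bchw.shape)  # (1, 3, 480, 640)
# torch.from_numpy(batch_bchw) would then yield the (b, c, h, w) tensor.
```

If the (w, h) axes were silently swapped anywhere in this pipeline, the network would see transposed images, which could explain predictions collapsing toward the center.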