Training loss is decreasing but testing loss is not

I am training a network that performs stereo depth estimation from a stereo pair. The training loss of my network is decreasing as expected, but my testing loss and errors are increasing.

Is this considered overfitting? I have included random cropping and normalization of my input images. I also use a weight decay of 0.01 and a learning rate of 0.0005. I am running out of ideas about what could possibly be wrong. My network has a total of 4,120,322 parameters, and all layers are convolutional with batch norm and ReLU except the last one.


A really good way to check whether your network is overfitting: swap in your training dataset in place of your validation dataset during evaluation. An overfit network should obtain almost perfect accuracy on the data it was trained on.
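A minimal sketch of that check, with a toy stand-in model and loss (the names `mean_loss`, `train_set`, and `val_set` are hypothetical, not from the original post):

```python
def mean_loss(model, dataset, loss_fn):
    """Average loss of `model` over `dataset`, a list of (input, target) pairs."""
    total = 0.0
    for x, y in dataset:
        total += loss_fn(model(x), y)
    return total / len(dataset)

# Toy stand-ins: the "model" doubles its input, the loss is squared error.
model = lambda x: x * 2.0
loss_fn = lambda pred, y: (pred - y) ** 2

train_set = [(1.0, 2.0), (2.0, 4.0)]   # the model fits these perfectly
val_set = [(3.0, 7.0), (4.0, 9.0)]     # the model is off by 1 on these

# Evaluating on the TRAINING set instead of the validation set:
# near-zero loss here plus high validation loss suggests the network
# has memorized the training data, i.e. it is overfitting.
print(mean_loss(model, train_set, loss_fn))  # 0.0
print(mean_loss(model, val_set, loss_fn))    # 1.0
```

The same idea applies to a real network: run your normal evaluation loop, but feed it the training split and compare the two numbers.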

Another experiment you could try is replacing the validation dataset with a random dataset and checking whether anything changes. Maybe something is wrong with the generation of the validation images (one must make sure the pre-processing and creation of the train and val images are the same).
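That experiment could be sketched like this (the helper `randomized_inputs` is hypothetical, and scalar inputs stand in for images): swap every validation input for noise while keeping the targets, then re-run evaluation. If the loss barely changes, the real validation images were carrying little usable signal, which points at the validation data pipeline rather than the network.

```python
import random

def randomized_inputs(dataset, seed=0):
    """Replace each input with uniform noise in [-1, 1], keeping the targets.

    Used as a sanity check on the validation pipeline: if the model's
    validation loss on this noise set is about the same as on the real
    validation set, the generated validation images are likely broken.
    """
    rng = random.Random(seed)
    return [(rng.uniform(-1.0, 1.0), target) for _, target in dataset]

val_set = [(3.0, 7.0), (4.0, 9.0)]
noisy_val = randomized_inputs(val_set)
print(noisy_val)  # same targets as val_set, random inputs
```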

I hope this helps.

Regarding the images, does the image dimension matter? I am using [3, 256, 512] for training and [3, 540, 960] for testing/validation.

You should preprocess images in the same way (except for data-augmentation operations), e.g. the same resize and normalization. Not only will the dimensions of the input data affect the results, but so will the color channels' order (such as RGB vs. BGR, though this may not matter for stereo depth estimation).
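A small illustration of both points, assuming plain Python lists as stand-ins for image tensors (the names `normalize` and `rgb_to_bgr` are hypothetical, not from any library in the thread):

```python
def normalize(pixels, mean, std):
    """Scale pixel values; the SAME mean/std must be used for train and val."""
    return [(p - mean) / std for p in pixels]

def rgb_to_bgr(image):
    """Reverse the channel axis of a [C][H][W]-style image with C == 3."""
    return image[::-1]

# One shared normalize() keeps the two pipelines consistent: if train and
# val use the same statistics, identical raw pixels map to identical values.
train_pixels = [0.0, 127.5, 255.0]
val_pixels = [0.0, 127.5, 255.0]
assert normalize(train_pixels, 127.5, 127.5) == normalize(val_pixels, 127.5, 127.5)

# Channel-order mismatch: a model trained on RGB but fed BGR at test time
# sees the red and blue channels swapped.
rgb = [["R"], ["G"], ["B"]]
print(rgb_to_bgr(rgb))  # [['B'], ['G'], ['R']]
```

In practice the safest pattern is to build one preprocessing function (resize, channel order, normalization) and call it from both the training and validation loaders, adding the random augmentations only on the training side.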