Calculating loss with only selected values in the output causes zeroing of the unselected region

I’m observing something that I’m not sure what to make of: the output is an HxW matrix, and the loss is calculated from a selected subset of values within this matrix. I found that as training went on, the unselected region would just go to zero. My network contains mostly convolutional layers, so I’d assume the loss from the selected pixels would propagate to enough locations to cover the rest.

Right now I’m assuming I might not be propagating the loss through the pixel locations that are not used in the loss calculation. Is my assumption correct? Is there a way in PyTorch to backprop through all values in the matrix and not just the selected ones?
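
For reference, the setup is roughly like the sketch below; the layer sizes and the mask are just placeholders, not my exact code:

```python
import torch
import torch.nn as nn

# A minimal sketch: a fully convolutional net whose HxW output is compared
# to the target only at a handful of selected locations.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

x = torch.randn(1, 1, 64, 64)           # input image
target = torch.randn(1, 1, 64, 64)      # full-size target
mask = torch.zeros(1, 1, 64, 64, dtype=torch.bool)
mask[..., 20:30, 20:30] = True          # "selected" region used in the loss

out = model(x)
loss = nn.functional.mse_loss(out[mask], target[mask])  # loss over selected pixels only
loss.backward()
```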

Here’s my intuition:
Based on the fundamental theory of neural networks and backpropagation, your network is trying to meet the objective defined by the loss function, which is to generate an output for the selected region that is as close as possible to the target. The weights are therefore updated to meet that and only that, with no incentive to pay attention to the surrounding region.
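
You can check this directly: the gradient of the loss with respect to the output is exactly zero at every unselected location, so those pixels only change as a side effect of the shared convolution weights. A minimal sketch (the shapes and the mask are arbitrary):

```python
import torch

out = torch.randn(8, 8, requires_grad=True)   # stand-in for the network output
target = torch.randn(8, 8)
mask = torch.zeros(8, 8, dtype=torch.bool)
mask[2:4, 2:4] = True                          # selected region

loss = torch.nn.functional.mse_loss(out[mask], target[mask])
loss.backward()

print(out.grad[mask].abs().sum())    # non-zero: selected pixels receive gradient
print(out.grad[~mask].abs().sum())   # exactly 0: unselected pixels receive none
```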

Right, but wouldn’t the values be kept randomized? And isn’t the loss being propagated uniformly, which means that the unselected region would at least get random values instead of zero?

The values being zero is indeed what is confusing. My wild guess is that you’re doing some kind of batch normalization or L2 loss (weight decay) that might keep the values of the other outputs close to zero. This is just a guess, I’m not completely sure what’s happening. What you could do is debug a sample from one layer to the next and see what’s actually going on.
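
One way to do that in PyTorch is with forward hooks, which let you capture each layer's output as a sample passes through. A rough sketch, with a stand-in model in place of your actual network:

```python
import torch
import torch.nn as nn

# Stand-in model; replace with the actual network being debugged.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
x = torch.randn(1, 1, 64, 64)   # one sample

# Register a forward hook on every leaf module to record its output.
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

handles = [m.register_forward_hook(make_hook(name))
           for name, m in model.named_modules()
           if len(list(m.children())) == 0]

with torch.no_grad():
    model(x)

# Print per-layer statistics to see where values start collapsing.
for name, act in activations.items():
    print(f"{name}: mean={act.mean():.4f} std={act.std():.4f} "
          f"min={act.min():.4f} max={act.max():.4f}")

for h in handles:
    h.remove()
```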


That’s true, that’s probably the best place to start.