Loss function with condition

Would a loss function like the one below work?

def my_loss_func(y_hat, y):
    s = 0    # running sum of (target - prediction) over the non-missing entries
    cnt = 0
    for idx, val in enumerate(y):
        if val != -1:    # skip targets that were filled in as "missing"
            s = s + val - y_hat[idx]
            cnt = cnt + 1
    return s / cnt
 

Basically I want to take into account the losses only for those values where the real answer is not equal to -1 (the value I fill the missing values with).
If not, any ideas on the correct way to approach this?
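
For reference, I was also thinking of a vectorized version along these lines (just a sketch; the squared-error penalty and the missing_value argument are placeholders):

import torch

def masked_loss(y_hat, y, missing_value=-1.0):
    # Select only the entries where the target is not the "missing" marker.
    mask = y != missing_value
    # Squared error averaged over the valid entries (placeholder penalty).
    return ((y[mask] - y_hat[mask]) ** 2).mean()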

You can do that, but it probably won't work as you expect. If you choose certain values of your output, backpropagation will only affect those “paths” which generate those outputs.


Can you explain a little more about “if you choose certain values of your output, backpropagation will only affect those paths which generate those outputs”? On another thread, I read that you were allowed to use operations as long as they did not break the computation graph. Any help on how to recognise whether a loss function is valid or is actually breaking the computation graph?

Simple example: you compute 2 samples and use torch.max between them. The network weights will be updated only through the features corresponding to the max value. Your example is the same. Backpropagation will flow through those y_hat you are picking, while those y_hat which you discard will never backpropagate. Therefore your network learns only from the y_hat you choose. Despite what you might think, trying to constrain the output this way may not work.
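
Something like this (a rough sketch) shows the idea with torch.max:

import torch

# Two outputs that both depend on the same weights.
w = torch.tensor([1.0, 2.0], requires_grad=True)
out = w * 3.0                 # out = [3.0, 6.0]

loss = torch.max(out)         # only the second element is selected
loss.backward()

print(w.grad)                 # tensor([0., 3.]) -- gradient only flows
                              # through the path that produced the max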

You are slicing the output, choosing some values. Data flows through them, and thus backpropagation does too.
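
For example, something along these lines (just an illustrative sketch) shows that the masked-out entries simply get zero gradient:

import torch

y_hat = torch.tensor([0.5, 0.2, 0.9], requires_grad=True)
y = torch.tensor([1.0, -1.0, 0.0])      # -1 marks a missing target

mask = y != -1
loss = ((y[mask] - y_hat[mask]) ** 2).mean()
loss.backward()

print(y_hat.grad)   # the entry for the masked-out target stays at zero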

About the computational graph: as long as you always use torch functions to operate on your tensors, it's ok. If you cannot define a tensor based on a previous tensor using torch functions, you will probably break the computational graph. Casting from one data type to another also breaks it, and manually assigning values into a matrix breaks it as well.
Sorry, writing from my phone.
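
To make it concrete, a rough sketch of what does and doesn't keep the graph intact (the specific operations are just illustrative):

import torch

x = torch.randn(3, requires_grad=True)

# OK: s is built from x with torch operations, so it stays in the graph.
s = (x * 2).sum()
print(s.grad_fn is not None)      # True

# Breaks the graph: leaving torch (numpy / python floats) detaches the value.
v = x.detach().numpy()            # no gradient will ever flow back through v

# Breaks the graph: manually assigning raw values into a fresh tensor.
t = torch.zeros(3)
t[0] = float(x[0])                # t has no grad_fn, so x gets no gradient from it
print(t.requires_grad)            # False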
