Hey.

It would be great if we could have a mask parameter for loss functions. My use case is with SoftMarginLoss, where I don't want all elements of the 2D input matrix to contribute to the error. Setting the unwanted input and target elements to the same value won't work, because the function is non-linear and the corresponding error terms will still be non-zero, given how the loss is computed (from the docs): `loss(x, y) = sum_i (log(1 + exp(-y[i]*x[i]))) / x.nelement()`
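To make the non-linearity point concrete, here is a quick check: even when an input element and its target are set to the same value `v`, the per-element term `log(1 + exp(-v*v))` is strictly positive, so those "unwanted" entries still contribute to the averaged loss.

```python
import math

# Per-element soft margin term when x[i] == y[i] == v.
# exp(-v*v) > 0 for any finite v, so log(1 + exp(-v*v)) > 0:
# the element contributes to the loss no matter what value we pick.
v = 1.0
term = math.log(1 + math.exp(-v * v))
print(term)  # ~0.3133, not 0
```

The term only approaches zero as `v*v` grows large, and `x.nelement()` in the denominator still counts the unwanted elements either way, so there is no value assignment that truly excludes them.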

I had to write my own SoftMarginLoss for this case, but I thought a proper implementation would be better.
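For reference, a minimal sketch of what such a masked variant could look like (this is my own hypothetical helper, not an existing PyTorch API): compute the per-element soft margin terms, then average only over the elements selected by a boolean mask.

```python
import torch

def masked_soft_margin_loss(input, target, mask):
    """Soft margin loss averaged only over elements where mask is True.

    input, target: float tensors of the same shape.
    mask: boolean tensor of the same shape selecting the elements
          that should contribute to the loss.
    """
    # Per-element term: log(1 + exp(-y[i]*x[i])), via log1p for stability.
    elementwise = torch.log1p(torch.exp(-target * input))
    # Keep only the masked elements and divide by their count,
    # instead of x.nelement() as the built-in loss does.
    return elementwise[mask].mean()
```

Usage would be e.g. `masked_soft_margin_loss(x, y, mask)` with `mask = torch.ones_like(x, dtype=torch.bool)` reproducing the unmasked mean.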

Also, is there a way to access the core implementation of SoftMarginLoss? I got as far as `_functions.thnn.SoftMarginLoss.apply` in `torch.nn.functional`, but I couldn't go any further.