Is there a way to apply different ops to one variable, controlled by a condition?

For example, to implement a SmoothL1Loss that outputs a per-instance loss, I need to check whether each element lies in [-1, 1]. So far I only know that I can use torch.index_select and then cat the two parts together, but that way I can't restore them to one tensor in the same order as the input (see the sketch below). Is there any way to do this, or must I implement it using cffi?
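For reference, here is a rough sketch of the index_select-plus-cat approach I mean (the variable names are just illustrative); the result comes back grouped by condition, not in the input order:

import torch

diff = torch.randn(6)
small_idx = (diff.abs() < 1).nonzero().squeeze(1)   # indices of elements inside [-1, 1]
large_idx = (diff.abs() >= 1).nonzero().squeeze(1)  # indices of elements outside [-1, 1]
small = torch.index_select(diff, 0, small_idx) ** 2
large = torch.index_select(diff, 0, large_idx)
out = torch.cat([small, large])  # grouped by condition, so the input order is lost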


You don’t need to implement it in C/CUDA (although it would definitely be more efficient if you did).
Here is one (untested) implementation of SmoothL1Loss using the functional interface:

def smooth_l1_loss(input, target):
    diff = (input - target).abs()
    mask = diff < 1
    diff[mask] = 0.5 * diff[mask] ** 2   # quadratic part for |diff| < 1
    diff[~mask] = diff[~mask] - 0.5      # linear part elsewhere
    return diff
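For instance (the example values are mine), on plain tensors this returns the per-element loss:

import torch

input = torch.tensor([0.2, 1.5, -0.8, -3.0])
target = torch.zeros(4)
print(smooth_l1_loss(input, target))
# tensor([0.0200, 1.0000, 0.3200, 2.5000])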

Yes, that works on tensors, but if they are Variables, the masked assignment diff[mask] = 0.5 * diff[mask] ** 2 is an in-place operation, and on the backward pass it raises RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.

Found a way: I just need to create a new Variable and assign the two parts to it. Thanks!
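In case it helps others, a minimal sketch of that idea (assuming the same smooth_l1_loss signature as above): build the result out of place by combining both branches with the mask, instead of writing into diff:

import torch

def smooth_l1_loss(input, target):
    diff = (input - target).abs()
    mask = (diff < 1).float()
    # Combine both branches into a fresh tensor; no in-place writes,
    # so autograd can differentiate through it:
    return mask * 0.5 * diff ** 2 + (1 - mask) * (diff - 0.5)

On newer PyTorch versions, torch.where(diff < 1, 0.5 * diff ** 2, diff - 0.5) does the same thing in one call.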