I want to define a loss function. The output/y shape is [batch_size, 1, 12, 1, 1]. I compute the absolute error between output and y, and if the error exceeds a threshold I apply a different loss function, similar to nn.SmoothL1Loss.

Below is my custom loss function:

```
def myLoss(output, y):
    # Element-wise absolute error; abs_() operates in-place on the
    # new tensor produced by the subtraction.
    loss = (output - y).abs_()
    # Per sample: if the mean error exceeds the threshold, switch to
    # a log-scaled penalty (threshold is defined elsewhere).
    for j in range(loss.size(0)):
        if loss[j].mean() > threshold:
            loss[j] = loss[j].add_(1).log_().mul_(50)
    return loss.mean()
```

This loss function runs, but it slows training down dramatically (about 10 times slower than nn.L1Loss).

My question: is it possible to apply such a loss in a more efficient way?
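A vectorized sketch of the same per-sample rule, assuming `threshold` is a defined scalar (0.5 here purely for illustration), would replace the Python loop with a per-sample mask and `torch.where`:

```
import torch

threshold = 0.5  # hypothetical value for illustration

def myLossVectorized(output, y):
    loss = (output - y).abs()
    # Mean error per sample, reducing over all non-batch dims -> shape [batch_size]
    per_sample_mean = loss.mean(dim=tuple(range(1, loss.dim())))
    # Reshape the boolean mask so it broadcasts back over the sample dims
    mask = (per_sample_mean > threshold).view(-1, *([1] * (loss.dim() - 1)))
    # Select the log-scaled penalty where the mask is True, plain L1 elsewhere.
    # Note: no in-place ops here, since torch.where evaluates both branches.
    loss = torch.where(mask, loss.add(1).log().mul(50), loss)
    return loss.mean()
```

This keeps everything as one batched tensor operation, so the per-sample branching runs on the GPU instead of in a Python loop.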