Suppose I have a neural network model that outputs a single positive scalar value.

Every training instance in my data has two ordered input tensors (x1, x2). The outputs from the model are as follows:

```
y1 = model(x1)
y2 = model(x2)
```

How can I design a loss function that ensures y1 - y2 > 0? Additionally, I would like to achieve this without pushing the value of y2 towards 0. For my particular use case, I don't care how large (y1 - y2) is; it only has to be positive. What would be a suitable loss function to enforce this constraint on the outputs?
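For context, one standard candidate for exactly this kind of pairwise ordering constraint is a margin ranking loss, which depends only on the difference y1 - y2 and therefore exerts no pull on the absolute value of y2. Below is a minimal NumPy sketch (the function name and the choice of margin are my own, added for illustration; PyTorch also ships a built-in `torch.nn.MarginRankingLoss`):

```python
import numpy as np

def margin_ranking_loss(y1, y2, margin=1.0):
    """Pairwise ranking loss: penalize a pair only when y1 does not
    exceed y2 by at least `margin`.

    Once y1 - y2 >= margin, the loss (and its gradient) is exactly zero,
    so neither output is driven towards any absolute value -- in
    particular, y2 is not pushed towards 0.
    """
    return np.maximum(0.0, margin - (y1 - y2)).mean()

# Pair already correctly ordered by a comfortable gap: zero loss.
print(margin_ranking_loss(np.array([2.0]), np.array([0.5])))  # → 0.0

# Tied pair: loss equals the full margin, pushing y1 above y2.
print(margin_ranking_loss(np.array([1.0]), np.array([1.0])))  # → 1.0
```

A small positive margin (rather than 0) is what turns the soft ordering objective into the strict inequality y1 - y2 > 0 at the minimum; the loss plateaus at zero past the margin, matching the "we don't care how large the gap is" requirement.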