How to restrict the output of a neural network so that one dimension is greater than another

For example: my neural network takes x as input and outputs a two-dimensional vector y = model(x), where y = (y1, y2), and I want to constrain the output so that 0 < y1 < y2 < 1. Restricting to the range (0, 1) is easy, since I can just apply sigmoid() as the last activation function, but how do I also satisfy the ordering constraint in a way that still allows backpropagation?

I’m not sure whether sorting the output, y = sorted(y), as a last layer allows backpropagation at all.

Sorting is not differentiable everywhere, so its gradients can be problematic. You can use the following trick instead:

y1, y2 = model(x)
y2 = torch.logsumexp(torch.stack([y1, y2]), dim=0)

Since LogSumExp is a smooth maximum, the new y2 is guaranteed to be greater than y1: log(exp(y1) + exp(y2)) > log(exp(y1)) = y1 (see https://en.wikipedia.org/wiki/Smooth_maximum).

Then apply sigmoid, or any other strictly increasing function that maps into (0, 1); because it preserves ordering, the full constraint 0 < y1 < y2 < 1 holds.
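A minimal sketch of the whole pipeline, using a placeholder `nn.Linear` model and made-up input sizes just for illustration:

```python
import torch
import torch.nn as nn

# Placeholder model for illustration: any module producing two raw outputs works.
model = nn.Linear(4, 2)

x = torch.randn(8, 4)
z = model(x)                    # raw outputs, shape (8, 2)
z1, z2 = z[:, 0], z[:, 1]

# Smooth maximum: logsumexp(z1, z2) = log(exp(z1) + exp(z2)) > z1,
# so the ordering is enforced while gradients flow through both inputs.
z2 = torch.logsumexp(torch.stack([z1, z2], dim=-1), dim=-1)

# Sigmoid is strictly increasing, so it preserves the ordering
# and squashes both values into (0, 1).
y1, y2 = torch.sigmoid(z1), torch.sigmoid(z2)

assert torch.all((0 < y1) & (y1 < y2) & (y2 < 1))
```

The constraint holds for any model weights, since it is built into the forward pass rather than learned.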