I am trying to implement a loss function `max(f_1(x), f_2(x), ...)`.

I know that

```
max(a,b)=max(a-b,0)+b
```

so this can be implemented with O(log(n)) layers of ReLU. Is there a function in PyTorch that already implements this?
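As a quick sanity check of that identity (the tensors here are made up for illustration), it can be verified elementwise:

```python
import torch
import torch.nn.functional as F

# max(a, b) = relu(a - b) + b, checked elementwise on random tensors
a = torch.randn(100)
b = torch.randn(100)
assert torch.allclose(F.relu(a - b) + b, torch.maximum(a, b))
```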

I haven’t verified it, but it should look something like this:

```
import torch.nn.functional as F

def hardmax(ls):
    # Reduce the list pairwise: max(a, b) = relu(a - b) + b
    half = []
    for i in range(0, len(ls), 2):
        if i + 1 == len(ls):
            # Odd element out: carry it to the next round unchanged
            half.append(ls[i])
        else:
            relu = F.relu(ls[i] - ls[i + 1]) + ls[i + 1]
            half.append(relu)
    if len(half) == 1:
        return half[0]
    else:
        return hardmax(half)
```
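For a quick sanity check (with made-up inputs), this recursion can be compared against `torch.stack(...).max(dim=0)`; the function is included again so the snippet runs on its own:

```python
import torch
import torch.nn.functional as F

def hardmax(ls):
    # Pairwise reduction using max(a, b) = relu(a - b) + b
    half = []
    for i in range(0, len(ls), 2):
        if i + 1 == len(ls):
            half.append(ls[i])  # odd element carries over unchanged
        else:
            half.append(F.relu(ls[i] - ls[i + 1]) + ls[i + 1])
    return half[0] if len(half) == 1 else hardmax(half)

fs = [torch.randn(4) for _ in range(5)]
assert torch.allclose(hardmax(fs), torch.stack(fs).max(dim=0).values)
```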

Hi,

If I understand correctly, you want to get the maximum of two (or more) values.

Let's say we have something like this:

```
import torch

ab = torch.randint(-5, 5, (5, 2))
# Output:
# tensor([[-3, -5],
#         [ 1, -4],
#         [-3, -2],
#         [ 1, -3],
#         [ 4, -4]])
```

Here the left column would be `a` and the right one would be `b` in your example. You can then use the `max` function to get the maximum value:

```
val, ind = ab.max(dim=1)
print(val)
# Output:
# tensor([-3, 1, -2, 1, 4])
```

By setting `dim=1`, it looks for the maximum value in each row. (You also get the indices of the column where each maximum value is located, if you want/need them.)
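As a side note, if `a` and `b` live in two separate tensors rather than in the columns of one, `torch.maximum` computes the elementwise maximum directly (same example values as above):

```python
import torch

a = torch.tensor([-3, 1, -3, 1, 4])
b = torch.tensor([-5, -4, -2, -3, -4])
print(torch.maximum(a, b))  # elementwise max: [-3, 1, -2, 1, 4]
```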

Please let me know if this is not the answer you were looking for.

Hope this helps