# Element-Wise Max Between Two Tensors?

Is there a way to take the element-wise max between two tensors, as in `tf.maximum`? My current workaround is

```python
def max(t1, t2):
    # Stack the two tensors along a new dim, then reduce over it
    combined = torch.cat((t1.unsqueeze(2), t2.unsqueeze(2)), dim=2)
    return torch.max(combined, dim=2)[0]
```

but it’s a bit clunky.


http://pytorch.org/docs/torch.html#torch.max

The third version of `torch.max` is exactly what you want.

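With that overload, the workaround above collapses to a one-liner. A minimal sketch (the tensors here are just illustrative):

```python
import torch

t1 = torch.tensor([0.5, -1.0, 2.0])
t2 = torch.tensor([1.0, -2.0, 0.0])

# The two-tensor overload of torch.max computes the element-wise maximum
result = torch.max(t1, t2)
```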

Thanks! I guess I just hadn’t scrolled down far enough.

It doesn’t support usage like `torch.min(tensor1, 0.0)`, though.


`torch.min(tensor1, torch.zeros_like(tensor1))`

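For the specific case of comparing against a constant, `torch.clamp` accepts a plain scalar and gives the same result as the `zeros_like` trick. A small sketch (example values are illustrative):

```python
import torch

tensor1 = torch.tensor([0.5, -1.0, 2.0, -3.0])

# Element-wise min with zero via a broadcast zeros tensor...
via_min = torch.min(tensor1, torch.zeros_like(tensor1))

# ...or equivalently via clamp, which takes a scalar bound directly
via_clamp = torch.clamp(tensor1, max=0.0)
```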

Now we can use `torch.max`:

```python
>>> a = torch.randn(4)
>>> a
tensor([ 0.2942, -0.7416,  0.2653, -0.1584])
>>> b = torch.randn(4)
>>> b
tensor([ 0.8722, -1.7421, -0.4141, -0.5055])
>>> torch.max(a, b)
tensor([ 0.8722, -0.7416,  0.2653, -0.1584])
```

How do I take the element-wise max across multiple filters? I get the following error for 3 tensors:

```python
a = torch.ones(1, 4, 1, 1) * 2
b = torch.ones(1, 4, 1, 1) * 3
c = torch.ones(1, 4, 1, 1) * 4
# d = torch.ones(1, 4, 1, 1) * 5

max_ = torch.max(a, b, c)
print(max_)
```
```
TypeError: max() received an invalid combination of arguments - got (Tensor, Tensor, Tensor), but expected one of:
 * (Tensor input)
 * (Tensor input, name dim, bool keepdim, *, tuple of Tensors out)
 * (Tensor input, Tensor other, *, Tensor out)
 * (Tensor input, int dim, bool keepdim, *, tuple of Tensors out)
```

Here is a simple method:

```python
import functools

# 1. fold torch.max over the list with reduce
max_tensor = functools.reduce(torch.max, [a, b, c])

# 2. or a plain loop
tensor_list = [a, b, c]
max_tensor = tensor_list[0]
for tensor in tensor_list[1:]:
    max_tensor = torch.max(max_tensor, tensor)
```
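Another common pattern (not from the posts above, just an alternative worth noting): stack the tensors along a new dimension and reduce over it, which handles any number of same-shaped tensors in one call.

```python
import torch

a = torch.ones(1, 4, 1, 1) * 2
b = torch.ones(1, 4, 1, 1) * 3
c = torch.ones(1, 4, 1, 1) * 4

# Stack along a new leading dim, then take the max over that dim;
# stacked has shape (3, 1, 4, 1, 1), max_tensor has shape (1, 4, 1, 1)
stacked = torch.stack([a, b, c], dim=0)
max_tensor = stacked.max(dim=0).values
```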

Thanks for your solution @I-Love-U. But how should one extend this to an arbitrary number of arrays? The first method does not work for 4 arrays.

https://pytorch.org/docs/stable/generated/torch.maximum.html#torch.maximum

Does this help here?
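For what it's worth, `torch.maximum` is element-wise over exactly two tensors, so it drops into the same `reduce` pattern for any number of them. A minimal sketch (the list here is illustrative):

```python
import functools
import torch

tensor_list = [torch.full((2, 2), float(v)) for v in (2, 3, 4, 5)]

# Fold the two-tensor element-wise maximum across the whole list
max_tensor = functools.reduce(torch.maximum, tensor_list)
```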

An overload of `torch.max` behaves the same as `torch.maximum`: torch.max — PyTorch 1.8.1 documentation

So we can focus on `torch.max`.

Here are some examples showing that both approaches work well:

```python
tensor_list = [torch.randn(2, 3) for _ in range(5)]
tensor_list
Out[9]:
[tensor([[-0.6082, -0.9290, -0.4921],
         [ 0.3344, -0.9338, -0.8563]]),
 tensor([[-0.3530, -0.5673,  2.6954],
         [ 1.5262,  2.3859,  0.3481]]),
 tensor([[ 0.5392,  0.9646, -1.5962],
         [-2.2931,  0.6707, -0.4896]]),
 tensor([[-1.3532, -0.5953,  1.6039],
         [ 0.2937,  0.3643,  1.3153]]),
 tensor([[ 1.1544,  0.7681, -1.0410],
         [-0.1305, -0.8855, -0.3516]])]

import functools
functools.reduce(torch.max, tensor_list)
Out[14]:
tensor([[1.1544, 0.9646, 2.6954],
        [1.5262, 2.3859, 1.3153]])

max_tensor = tensor_list[0]
for tensor in tensor_list[1:]:
    max_tensor = torch.max(max_tensor, tensor)

max_tensor
Out[19]:
tensor([[1.1544, 0.9646, 2.6954],
        [1.5262, 2.3859, 1.3153]])
```