F.relu(torch.cat()) vs. torch.cat()

Hi,

Assume a and b are two tensors. What does F.relu(torch.cat((a, b), dim=1)) do compared with torch.cat((a, b), dim=1)?

Thanks!

Hi,
torch.cat() concatenates tensors along a specified dimension (here dim=1, often the channel dimension).
F.relu() is an activation function, ReLU(x) = max(0, x), which you can apply elementwise to any tensor.

Here is an example:


import torch
import torch.nn as nn

a = torch.randn(3, 2, 1, 1)
b = torch.randn(3, 2, 1, 1)
b = -1 * b  # negate b so the concatenated tensor is likely to contain negative values
ab = torch.cat((a, b), dim=1)  # concatenate along dim 1: shape becomes (3, 4, 1, 1)
print(ab)

tensor([[[[ 1.2391]],
         [[ 0.0619]],
         [[ 0.6406]],
         [[ 0.1950]]],
        [[[-0.5557]],
         [[-0.9740]],
         [[ 0.3454]],
         [[-1.1767]]],
        [[[-0.1938]],
         [[ 0.8195]],
         [[-1.3665]],
         [[ 1.7226]]]])

ab_relu = nn.functional.relu(ab, inplace=False)  # set all negative entries to zero
print(ab_relu)

tensor([[[[1.2391]],
         [[0.0619]],
         [[0.6406]],
         [[0.1950]]],
        [[[0.0000]],
         [[0.0000]],
         [[0.3454]],
         [[0.0000]]],
        [[[0.0000]],
         [[0.8195]],
         [[0.0000]],
         [[1.7226]]]])

As you can see, all negative values are zero after applying ReLU.

This follows directly from the definition of ReLU in the docs:
https://pytorch.org/docs/stable/nn.html#relu
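
Since ReLU is applied elementwise, it also commutes with concatenation: applying it before or after torch.cat gives the same result. Here is a minimal sketch of that equivalence, reusing a and b from above (the names left and right are just for illustration):

left = nn.functional.relu(torch.cat((a, b), dim=1))
right = torch.cat((nn.functional.relu(a), nn.functional.relu(b)), dim=1)
print(torch.equal(left, right))  # prints True: ReLU commutes with cat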

Thanks! It’s completely clear to me now!