Why can a 1*n and an n*1 tensor be added together?

I’m a beginner with PyTorch, and when I run the following code:

import torch

x = torch.tensor([[5, 3], [5, 3]])
z1 = x.view(1, 4)
z2 = x.view(4, 1)
print(z1 + z2)

I got:

tensor([[10,  8, 10,  8],
        [ 8,  6,  8,  6],
        [10,  8, 10,  8],
        [ 8,  6,  8,  6]])

Why doesn’t PyTorch throw an error? Why can two tensors with different shapes be added together?

Thank you!

This behavior is called broadcasting, and PyTorch tries to stay as close as possible to NumPy’s implementation, which is described in the NumPy broadcasting documentation.
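Roughly: when the trailing dimensions of two tensors either match or are 1, the size-1 dimensions are stretched to match the other tensor before the element-wise operation. A minimal sketch of what happens in your example (the expand calls below are only to make the implicit expansion visible; broadcasting does this without materializing the larger tensors):

import torch

x = torch.tensor([[5, 3], [5, 3]])
z1 = x.view(1, 4)  # shape (1, 4): [[5, 3, 5, 3]]
z2 = x.view(4, 1)  # shape (4, 1): [[5], [3], [5], [3]]

# Both operands are (virtually) expanded to the common shape (4, 4):
e1 = z1.expand(4, 4)  # every row is [5, 3, 5, 3]
e2 = z2.expand(4, 4)  # every column is [5, 3, 5, 3]

print(torch.equal(z1 + z2, e1 + e2))  # True
print((z1 + z2).shape)                # torch.Size([4, 4])

So z1 + z2 is a (4, 4) tensor where entry (i, j) equals z2[i, 0] + z1[0, j], which is exactly the output you got.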

Got it.
Thank you for your help!