Why do we use .pow(2) and not **2?

I would have thought that PyTorch overrode such syntax, so that **2 means .pow(2) automatically. Is that not the case? Can I use **2 and have things work correctly (gradients etc.) in PyTorch?


I always use **2 and have not run into problems, and I would expect roughly the same performance, too (for operators, Python just needs to figure out whose special method to call). I suspect that people coming from other languages might not be used to the convenience of a power operator.
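A quick sanity check of the gradient claim (my own sketch, not from the original post): autograd flows through ** just as it does through .pow():

```python
import torch

# d/dx of sum(x^2) is 2x, so the gradient should be [2, 4, 6]
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()
y.backward()
print(x.grad)  # tensor([2., 4., 6.])
```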

Best regards

Thomas


I assume it does the element-wise square?

Yes, it’s element-wise. In PyTorch, x.pow(2) and x ** 2 are equivalent:

The ** operator (__pow__) just calls pow().
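A minimal demonstration of that equivalence (my own sketch):

```python
import torch

x = torch.tensor([[1.0, -2.0], [3.0, 4.0]])

# __pow__ and .pow() compute the same element-wise result
print(torch.equal(x.pow(2), x ** 2))  # True
```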


In some cases, the member function (x.pow(2)) is more elegant than the operator expression (x ** 2). Given a tensor a defined as:

a = torch.tensor([1.0, 2.0, 3.0])

Calling a member function like

a.pow(2).mean()

is arguably cleaner than

(a**2).mean()

Calling a member function is object-oriented, while wrapping the expression in parentheses turns it into a more functional style of programming.
The following case might be rare, but it shows the advantage more clearly:

((a**2).add(1)**2).mean()

vs

a.pow(2).add(1).pow(2).mean()

As the expression gets more complicated, the member-function style shows its advantage more clearly.
