If p = 2, 4, or 6, the result of

```
p = 2
my_tensor.pow(p).pow(1/p)
```

is ok.

But if p = 3, 5, and so on, the result contains nan values where my_tensor has negative entries. Why?


Hi,

This is because the power `1/p` is only defined for positive numbers. But a negative number raised to the power 3, 5, etc. gives a negative number.

So you get nan.
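To see the same behavior without torch, here is a minimal sketch using Python's standard `math` module (chosen for illustration; the original question used `Tensor.pow`):

```
import math

x = -8.0
p = 3

# An odd power keeps the sign: (-8)**3 == -512.
cubed = math.pow(x, p)

# Taking the 1/p root of a negative number is undefined over the
# reals; math.pow raises where torch would return nan.
try:
    math.pow(cubed, 1 / p)
except ValueError as e:
    print('domain error:', e)

# With an even power the intermediate is positive, so the root is fine.
print(math.pow(math.pow(x, 2), 1 / 2))   # prints 8.0
```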

PS: You should use backticks (`) and not " to format code in a nicer way. I edited your post to make it look better.

Hello Mikhail (and Alban)!

To give a little more context to Alban’s answer:

Yes, pytorch doesn’t want you to take a root of a negative number (e.g. `sqrt(-1)`, which is `(-1)**(1/2)`, a fractional power), so it gives you a `nan`.

(I am using `x**y` to mean "raise `x` to the power `y`".)

When you raise a negative number to an *even* power you get a positive number, so there is no problem taking the root. But when you raise a negative number to an *odd* power, you get a *negative* number, so pytorch gives you `nan` for the root.

However, if you use *complex* numbers (numbers that have a real and a so-called *imaginary* part), roots of negative numbers make perfect sense. But, for example, there is no square root of -1 that is purely real (that is, that has no imaginary part). So, if you stick to real numbers, the best you can do is `nan`.

Nonetheless, you *could* argue that the cube root of -1 (that is, `(-1)**(1/3)`) is perfectly well defined as the purely real number -1. Indeed it is (because `(-1)**3 == -1`). So why don’t we do this?

Now the fun begins:

A number (including a negative number) has k k-th roots. That is, `x**(1/k)` has k legitimate values, in that there are k distinct numbers, `z` (most or all of which will be complex numbers), for which `z**k == x`.

To make the discussion a little simpler, let’s just look at `-1`.

If we want to use just one value for `(-1)**(1/k)` (rather than k different values), which value should we use? Mathematicians (for reasons that aren’t chiselled in stone but do make sense) prefer to use, of the k values, the value of `(-1)**(1/k)` that is closest (in the complex plane) to 1. This particular choice will always be complex (i.e., not purely real), so, if you want to stick to real numbers, the best you can do is `nan`.
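The k roots, and the complex value that python’s `**` picks, can be checked with the standard `cmath` module. The helper `kth_roots` below is written just for this illustration:

```
import cmath

def kth_roots(x, k):
    # The k k-th roots of a real number x: points spaced evenly
    # around a circle of radius |x|**(1/k) in the complex plane.
    r = abs(x) ** (1 / k)
    theta = cmath.phase(complex(x, 0))  # pi for negative x
    return [r * cmath.exp(1j * (theta + 2 * cmath.pi * n) / k)
            for n in range(k)]

for z in kth_roots(-1, 3):
    print(z, z ** 3)   # each z satisfies z**3 == -1 (up to rounding)

# python's ** picks the principal root (argument pi/3 here), which is
# complex -- not the purely real root -1:
print((-1) ** (1 / 3))
```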

(You could have chosen `-1` – that would be perfectly legitimate, if less standard, but that’s not what pytorch does*.)

Just for fun, let’s compare pytorch’s `Tensor.pow()`, python’s `**` operator, and python’s `math.pow()`:

```
import math
import torch

print (torch.__version__)

t = torch.Tensor ([1, -1])
for p in [2, 3, 4, 5]:
    print ('p = ', p)
    print ('   pytorch-powpow = ', t.pow (p).pow (1 / p))

for p in [2, 3, 4, 5]:
    print ('p = ', p)
    try:
        print ('   math-powpow = ', math.pow (math.pow (-1, p), 1 / p))
    except Exception as e:
        print ('   Exception = ', e)

for p in [2, 3, 4, 5]:
    print ('p = ', p)
    print ('   **-powpow = ', ((-1)**p)**(1 / p))
```

Here is the output of running this script:

```
0.3.0b0+591e73e
p = 2
pytorch-powpow =
1
1
[torch.FloatTensor of size 2]
p = 3
pytorch-powpow =
1
nan
[torch.FloatTensor of size 2]
p = 4
pytorch-powpow =
1
1
[torch.FloatTensor of size 2]
p = 5
pytorch-powpow =
1
nan
[torch.FloatTensor of size 2]
p = 2
math-powpow = 1.0
p = 3
Exception = math domain error
p = 4
math-powpow = 1.0
p = 5
Exception = math domain error
p = 2
**-powpow = 1.0
p = 3
**-powpow = (0.5000000000000001+0.8660254037844386j)
p = 4
**-powpow = 1.0
p = 5
**-powpow = (0.8090169943749475+0.5877852522924731j)
```

For an odd root of a negative number, pytorch and python’s `math.pow()` basically agree. (Pytorch gives `nan`, and python throws an exception.)

But python’s `**` operator gives you a complex number (and, indeed, of the k roots, the complex number that is closest to 1), and (for the case of `(-1)**(1/k)`) chooses *not* to give you `-1`.

*) In order for pytorch to be smart enough to give you `-1` for, say, `torch.Tensor ([-1]).pow (1/3)`, it would have to know that the argument to `pow()` is, in fact, one-third. But all pytorch sees for this argument is a floating-point number that is very close to, but not exactly, one-third. So figuring out that you meant *precisely* one-third would be quite tricky.
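You can see that the float `1/3` is not exactly one-third using the standard `fractions` module:

```
from fractions import Fraction

p = 1 / 3

# The exact binary rational actually stored in the float -- it is
# very close to, but not equal to, 1/3:
print(Fraction(p))
print(Fraction(p) == Fraction(1, 3))   # prints False
```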

Have fun!

K. Frank


Thanks a lot, now I understand why it happened.

I decided to do as in the LPPool1d class:

```
p = 3
torch.sign(my_tensor) * torch.abs(my_tensor.pow(p)).pow(1. / p)
```

Maybe it will be useful for someone.
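The same sign-restoring trick, sketched in plain Python with `math.copysign` so it can be checked without torch:

```
import math

def signed_root(x, p):
    # Take the root of |x**p| and restore the original sign,
    # mirroring torch.sign(t) * torch.abs(t.pow(p)).pow(1/p).
    return math.copysign(abs(math.pow(x, p)) ** (1 / p), x)

print(signed_root(-8.0, 3))   # approximately -8.0
print(signed_root(8.0, 3))    # approximately 8.0
```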