Computational Error with PyTorch

I noticed an issue with how PyTorch computes exponents that are written as a product of two numbers.
For example, say we want to compute $$x^{2 \cdot 0.8}$$ for a torch tensor. This returns NaN for negative numbers, even though its value is always positive, and Torch seems to fail this way every time. However, if I rewrite it as $$(x^{2})^{0.8}$$, it works as expected. The same operation does not fail on a NumPy array or with plain Python floats; it seems unique to torch tensors. A minimal reproduction is below.
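
A minimal sketch of the comparison, assuming a recent PyTorch release; the sample values are illustrative and not tied to a specific version:

```python
import torch

# Illustrative values; any tensor containing negatives shows the same pattern.
x = torch.tensor([-2.0, -0.5, 3.0])

# Exponent written as a product: Python evaluates 2 * 0.8 to the float 1.6
# before the tensor op runs.
print(x ** (2 * 0.8))    # negative entries come back as NaN

# Same exponent, but squaring first keeps the base non-negative.
print((x ** 2) ** 0.8)   # finite, positive values for every entry
```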