Torch.rfft vs scipy.fftpack.fft

Hi,
I was wondering why torch.rfft doesn’t match scipy’s fft:

import torch
import numpy as np

from scipy.fftpack import fft


@torch.no_grad()
def _fix_shape(x, n, axis):
    """Zero-pad x along `axis` to length n (PyTorch port of scipy's internal
    helper for _raw_fft / _raw_fftnd)."""
    s = list(x.shape)

    index = [slice(None)] * len(s)
    index[axis] = slice(0, s[axis])
    s[axis] = n
    z = torch.zeros(s, dtype=x.dtype, device=x.device)
    z[tuple(index)] = x
    return z


N_FFT = 512
cuda = torch.rand(1, 44000, device='cuda:0')
tensor = cuda.cpu()
numpy = tensor.clone().numpy()

# Full (onesided=False) FFT; keep the real part of the first signal.
# .cpu().numpy() so the CUDA result can be compared with the others below.
cuda_fft = torch.rfft(cuda, signal_ndim=1, onesided=False)[0, :, 0].cpu().numpy()
tensor_fft = torch.rfft(tensor, signal_ndim=1, onesided=False)[0, :, 0].numpy()
numpy_fft = np.real(fft(numpy, axis=1, n=None))[0]

print(f'Tensor type:{tensor_fft.dtype}\n'
      f'Numpy array type: {numpy_fft.dtype}')
print(f'CPU-CUDA: {np.abs(tensor_fft - cuda_fft).sum()}')
print(f'CPU-NP: {np.abs(tensor_fft - numpy_fft).sum()}')
print(f'CUDA-NP: {np.abs(cuda_fft - numpy_fft).sum()}')

The error is not huge, but it is large enough that the two cannot be used interchangeably.

Tensor type:float32
Numpy array type: float32
0.32703477144241333

In fact, not even the CPU and GPU versions match:

Tensor type:float32
Numpy array type: float32
CPU-CUDA: 0.5086007714271545
CPU-NP: 0.3293250799179077
CUDA-NP: 0.37688499689102173

Yeah, but if you use mean instead of sum, you’ll get ~1e-5 for float32 and ~1e-14 for float64, so these are different implementations that match up to numerical accuracy.
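
As an illustration (a sketch, not from the original post), here is the same comparison with mean absolute error in both precisions. It uses torch.fft.fft from newer PyTorch versions, which is equivalent to the old torch.rfft(..., onesided=False) used above:

import torch
import numpy as np
from scipy.fftpack import fft

# Compare torch and scipy FFTs with mean absolute error instead of sum,
# in single and double precision (assumes a PyTorch version with torch.fft).
for torch_dtype in (torch.float32, torch.float64):
    x = torch.rand(1, 44000, dtype=torch_dtype)
    torch_fft = torch.fft.fft(x, dim=1).real[0].numpy()
    scipy_fft = np.real(fft(x.numpy(), axis=1))[0]
    diff = np.abs(torch_fft - scipy_fft)
    # Expect ~1e-5 for float32 and ~1e-14 for float64 with the mean,
    # while the sum over 44000 bins looks much larger.
    print(torch_dtype, 'mean:', diff.mean(), 'sum:', diff.sum())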

Hmmm, so the thing is I was porting the mir_eval library to PyTorch, and I don’t really know whether a tolerance of 1e-5 is good enough, as the final result diverges.
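
One hypothetical way to check whether the divergence is just accumulated float32 error (the names `ported` and `reference` below are placeholders, not mir_eval API): compare the port against the reference with a dtype-appropriate tolerance, in both float32 and float64.

import torch

def close_enough(ported, reference, dtype=torch.float32):
    # ~1e-5 relative tolerance is about the best float32 can promise,
    # while a correct port should agree to ~1e-12 or better in float64.
    ported = torch.as_tensor(ported, dtype=dtype)
    reference = torch.as_tensor(reference, dtype=dtype)
    rtol = 1e-5 if dtype == torch.float32 else 1e-12
    return torch.allclose(ported, reference, rtol=rtol, atol=rtol)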

Shouldn’t it be way smaller, like 1e-10 or 1e-15?

No! fp32 has a 23-bit significand, so the lowest bit is about 2^(-23) ≈ 10^(-7) in magnitude. This is roughly the relative precision of the representation. Once you compute with that, you lose some accuracy, and ~1e-5 is quite reasonable. It also matches my experience in general.
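
For reference, that representation gap can be read off directly (assumes NumPy):

import numpy as np

# Machine epsilon: spacing between 1.0 and the next representable float.
print(np.finfo(np.float32).eps)  # 2**-23 ~ 1.19e-07
print(np.finfo(np.float64).eps)  # 2**-52 ~ 2.22e-16
# After an FFT over 44000 samples, a mean error a couple of orders of
# magnitude above eps (~1e-5 in float32) is therefore expected.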

Thank you @tom. It’s always nice to read your answers.
So I’ll keep looking for bugs ^^

Same! I was about to add a PS that it’s good to see you here but then didn’t want to make it sound as if you were gone. It’s an honor that you might have had a question that my modest knowledge of matters helps answer. :bowing_man:

Hahaha, don’t worry, I’ll be here for a couple of years at least. It’s just that the questions are too complicated for me. I will have to read your blog post in depth to keep up.
