Comparison of torch.stft with librosa


I was evaluating the stft function in PyTorch vs librosa. I found that the results agree only up to about 2 decimal places. I am wondering if that can be improved to at least 4-5 decimal places.
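Before attributing the gap to precision, it is worth ruling out mismatched defaults (librosa windows with Hann by default, torch.stft does not; both center-pad, but any difference there shows up as large errors). Below is a minimal sketch, assuming a recent PyTorch (with `return_complex`) and NumPy >= 1.20 (for `sliding_window_view`), that checks torch.stft against a hand-rolled framed FFT with all parameters lined up, so any remaining disagreement is purely float32 arithmetic:

```python
import numpy as np
import torch

# Compare torch.stft to a manual framed FFT in NumPy with matched
# parameters: periodic Hann window, no centering/padding. Any remaining
# gap then reflects float32 precision, not API defaults.
n_fft, hop = 512, 128
rng = np.random.default_rng(0)
x = rng.standard_normal(4096).astype(np.float32)

# torch.stft (return_complex=True yields a complex tensor directly)
win = torch.hann_window(n_fft)  # periodic Hann, matching the reference below
S_torch = torch.stft(torch.from_numpy(x), n_fft=n_fft, hop_length=hop,
                     window=win, center=False, return_complex=True).numpy()

# Manual reference: frame the signal, window, FFT in float64.
# np.hanning(n_fft + 1)[:-1] is the periodic Hann matching torch.hann_window.
frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
win64 = np.hanning(n_fft + 1)[:-1]
S_ref = np.fft.rfft(frames.astype(np.float64) * win64, axis=-1).T

err = np.max(np.abs(S_torch - S_ref)) / np.max(np.abs(S_ref))
print(f"max relative error: {err:.2e}")
```

With parameters matched like this, the relative error should sit at the float32 level (roughly 1e-6); if it is much larger, something other than dtype is off.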

Also, regarding the speed of execution:

- librosa is ~2x faster than PyTorch on CPU (I didn't expect this; I thought CPU times would be similar).
- PyTorch is ~4x faster than librosa on GPU (expected).
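One thing to double-check in the GPU numbers: CUDA kernel launches are asynchronous, so timings taken without synchronization undercount GPU time, and the first call pays one-off costs (cuFFT plan creation, allocator warm-up). A hedged timing sketch, assuming a recent torch.stft API:

```python
import time
import torch

def time_stft(x, n_fft=512, hop=128, n_runs=10):
    # Time torch.stft with a warm-up pass and explicit synchronization,
    # so GPU timings measure kernel execution rather than launch latency.
    win = torch.hann_window(n_fft, device=x.device)
    torch.stft(x, n_fft, hop_length=hop, window=win,
               return_complex=True)  # warm-up (plan creation, allocator)
    if x.is_cuda:
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(n_runs):
        torch.stft(x, n_fft, hop_length=hop, window=win, return_complex=True)
    if x.is_cuda:
        torch.cuda.synchronize()  # wait for all queued kernels to finish
    return (time.perf_counter() - t0) / n_runs

x = torch.randn(1 << 16)
print(f"CPU: {time_stft(x) * 1e3:.3f} ms per call")
if torch.cuda.is_available():
    print(f"GPU: {time_stft(x.cuda()) * 1e3:.3f} ms per call")
```

If the original GPU timing skipped the synchronize calls, the 4x figure could be an over- or underestimate.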

Rig: Titan X (Pascal), 4-core CPU, PyTorch 0.5.0a0+8fbab83

Do these numbers look OK? Or is there a possibility I am doing something wrong?



I found an answer on Stack Overflow that said: "The difference comes from their default dtypes. NumPy's float is 64-bit by default; PyTorch's float is 32-bit by default."
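That explanation is easy to sanity-check without either STFT library: merely quantizing the input signal to float32 already limits agreement with a float64 reference to roughly 7 significant digits, and running the whole transform in float32 (PyTorch's default) costs a bit more. A small NumPy-only sketch:

```python
import numpy as np

# Measure the precision lost by casting the signal to float32 before an
# FFT, against a float64 reference on the identical signal. NumPy's FFT
# computes in double precision, so the only error here is input
# quantization -- a lower bound on what a full float32 pipeline loses.
rng = np.random.default_rng(0)
x64 = rng.standard_normal(4096)            # double-precision signal
x32 = x64.astype(np.float32)               # same signal, cast to float32

S64 = np.fft.rfft(x64)                     # float64 reference
S32 = np.fft.rfft(x32.astype(np.float64))  # float32-quantized input

rel_err = np.max(np.abs(S64 - S32)) / np.max(np.abs(S64))
print(f"relative error from float32 input alone: {rel_err:.1e}")

# Hedged remedy on the PyTorch side: cast before the transform, e.g.
#   torch.stft(x.double(), n_fft, ...)
# so the FFT itself runs in float64 and can match librosa more closely.
```

On spectrogram values that are themselves of order 1e1-1e2, float32-level relative error lands around the 2nd-4th decimal place in absolute terms, which is consistent with what you observed.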