Unable to correctly use the new torch.fft module

Hello,

I’m working with PyTorch 1.7 inside Docker (based on the image nvcr.io/nvidia/pytorch:20.10-py3, in case it matters), on Ubuntu 18.04 LTS with CUDA 11.1.

>>> torch.__version__
'1.7.0a0+7036e91'

I can use the legacy fft functions in PyTorch, but I want to use the new torch.fft module, as advised in the documentation.

The problem is that I can’t reproduce the examples given in the documentation:

Using torch.fft.fft according to the doc:

>>> import torch.fft
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])
>>> torch.fft.fft(t)
tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])

What I get when I try to reproduce it:

>>> import torch.fft
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])
>>> torch.fft.fft(t)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Expected a complex tensor.
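
The error message suggests that this particular build only accepts complex input, so a possible workaround (just a sketch on my side, assuming the complex cast is supported here) is to convert the tensor to a complex dtype first:

import torch
import torch.fft

t = torch.arange(4)

# Cast to a complex dtype before the transform; on a build that only accepts
# complex input this should give the documented result:
# tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])
print(torch.fft.fft(t.to(torch.complex64)))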

Using torch.fft.rfft according to the doc:

>>> import torch.fft
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])
>>> torch.fft.rfft(t)
tensor([ 6.+0.j, -2.+2.j, -2.+0.j])

What I get when I try to reproduce it:

>>> import torch.fft
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])
>>> torch.fft.rfft(t)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'torch.fft' has no attribute 'rfft'
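
As an interim workaround I’m considering the deprecated legacy function torch.rfft, which takes a real float input and returns the complex coefficients as a real tensor with a trailing dimension of 2 (a sketch only, assuming the legacy function still behaves this way in this build):

import torch

t = torch.arange(4)

# Legacy one-sided real FFT (deprecated in 1.7); output has shape (3, 2),
# with real and imaginary parts stored in the last dimension.
legacy = torch.rfft(t.float(), 1)

# Reinterpret the trailing size-2 dimension as complex numbers; this should
# match the documented result: tensor([ 6.+0.j, -2.+2.j, -2.+0.j])
spectrum = torch.view_as_complex(legacy)
print(spectrum)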

So I’ve got two questions:
1) Am I doing something wrong, or is there a problem with my PyTorch installation (I’ve had no problems so far)? How can I successfully reproduce the documentation examples?

2) Is it possible to use the half data type with the torch.fft module? (It’s discouraged for the legacy torch.fft functions.) A rough sketch of the kind of fallback I have in mind is below.
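
Concretely, the fallback I mean (just a sketch, assuming half precision is not directly supported) would be to compute the transform in float32 and cast back afterwards:

import torch
import torch.fft

x = torch.randn(8).half()                                # half-precision signal
spectrum = torch.fft.rfft(x.float())                     # transform in float32
x_back = torch.fft.irfft(spectrum, n=x.numel()).half()   # cast back to half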

Thanks in advance for any help :slight_smile:
Thomas

I still have the same issue.
It would be really great if someone with PyTorch 1.7 could try the above examples, at least to know whether the torch.fft module is actually working. I’ve searched on Google and couldn’t find any example of this module in use yet; as the module is quite recent, maybe the documentation is wrong?

Best regards,
Thomas

Hi,

On my machine with the latest stable build of PyTorch (1.7.0), I can reproduce all the examples you mentioned. Maybe you should make sure you are using the correct build of PyTorch. Unfortunately, I cannot test the Docker edition.

Bests

Hi,
Thanks for testing. I will investigate, maybe test another Docker image or simply go for a workaround. If I find something interesting, I’ll leave a message here.

As @Nikronic said, update to the latest container, as 1.7.0a0+7036e91 is a pre-1.7.0 release and might thus be missing the methods.
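
As a quick sanity check (nothing official, just a snippet to verify the build you are running), you can confirm both the version string and that the new-style functions are exposed:

import torch
import torch.fft

# A stable build reports '1.7.0' rather than '1.7.0a0+<commit>' and exposes
# the new-style functions on torch.fft.
print(torch.__version__)
print(hasattr(torch.fft, "fft"), hasattr(torch.fft, "rfft"))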

Ok, thanks for the confirmation.

I started using Docker quite recently, and I find the way it can create and manage separate environments interesting; it is also a good tool for sharing an app together with its whole environment. So I searched for a Docker image of PyTorch and found the NVIDIA deep learning frameworks catalog (free to access, you only need to register your email). The catalog seems quite useful, and every release is described there. The only sad thing is that all the images seem to be built with pre-release versions of PyTorch (for example, the PyTorch v1.7 images you can find are based on 1.7.0a0+7036e91, 1.7.0a0+8deb4fe, or 1.7.0a0+6392713). They did the same with the pre-release of v1.8 (1.8.0a0+17f8c32).

Maybe the simplest solution would be to switch to Anaconda?

Anyway, thanks @ptrblck and @Nikronic for your answers. I will mark this thread as solved since you pointed to the root cause of the problem, but don’t hesitate to leave a message if you have any further remarks.

You can always install the latest stable or nightly binaries from conda or pip.
We are building the NGC container with the latest PyTorch master for this release. Since the containers go through our QA, the commit is delayed by approx. a month.
