CUDA NumPy operations in custom Modules

Hi,

Is there a way to use CUDA through NumPy arrays inside the forward() and backward() functions of custom Modules? For example, if I want to compute fft2 on the GPU in the example at https://github.com/pytorch/tutorials/blob/master/Creating%20extensions%20using%20numpy%20and%20scipy.ipynb, how should I go about that?

Just converting the model and the input to CUDA results in a “RuntimeError: numpy conversion for FloatTensor is not supported” error at the line result = abs(rfft2(numpy_input)).

import torch
from torch.autograd import Function
from numpy.fft import rfft2, irfft2

class BadFFTFunction(Function):

    def forward(self, input):
        # input is a CPU tensor here; .numpy() fails on a CUDA tensor
        numpy_input = input.numpy()
        result = abs(rfft2(numpy_input))
        return torch.FloatTensor(result)

    def backward(self, grad_output):
        # the tutorial's (intentionally "bad") backward just applies irfft2
        # to the incoming gradient
        numpy_go = grad_output.numpy()
        result = irfft2(numpy_go)
        return torch.FloatTensor(result)
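
For reference, a minimal usage sketch that reproduces the error (the 8x8 shape is arbitrary and the wrapper call follows the legacy, pre-0.4 Function API used in the tutorial, so this is an assumption about the setup, not the tutorial's exact code):

from torch.autograd import Variable

input = Variable(torch.randn(8, 8), requires_grad=True)
result = BadFFTFunction()(input)         # CPU tensor: .numpy() works
result = BadFFTFunction()(input.cuda())  # CUDA tensor: raises the RuntimeError quoted above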

NumPy doesn’t support GPU arrays, so there’s no way to do this. If you know CUDA, you can see how to use your own kernel in this gist.
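
If you just need the Module to accept CUDA tensors (with the FFT itself still computed on the CPU), a common workaround is to copy the tensor to host memory before calling NumPy and copy the result back. A minimal sketch, assuming the legacy Function API from the tutorial; the class name and device bookkeeping here are mine, not the tutorial's:

import torch
from torch.autograd import Function
from numpy.fft import rfft2, irfft2

class RoundTripFFTFunction(Function):
    # Sketch only: moves CUDA tensors to the CPU for NumPy, then moves the result back.

    def forward(self, input):
        self.was_cuda = input.is_cuda
        numpy_input = input.cpu().numpy()       # host copy if input lives on the GPU
        result = torch.FloatTensor(abs(rfft2(numpy_input)))
        return result.cuda() if self.was_cuda else result

    def backward(self, grad_output):
        numpy_go = grad_output.cpu().numpy()
        result = torch.FloatTensor(irfft2(numpy_go))
        return result.cuda() if self.was_cuda else result

Note that the FFT itself still runs on the CPU with this approach; only the rest of the model stays on the GPU, which is why a true GPU implementation needs a custom CUDA kernel as suggested above.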

Okay, thanks! I will check.