Implementing Fourier integrals

Hello everyone,

Hope everyone is staying safe. I have this monstrous function:

f(x,p,t) = \int_{-\infty}^{\infty} \operatorname{sech}^{2}(x + y/2)\,\operatorname{sech}^{2}(x - y/2)\,\big[2\sinh(x + y/2)\sinh(x - y/2) + \sqrt{2}\,\sinh(x - y/2)\,e^{i3t/2} + \sqrt{2}\,\sinh(x + y/2)\,e^{-i3t/2} + 1\big]\,e^{-ipy}\,dy
This is essentially a Fourier transform, but with a shift involved. In any case, I know scipy.integrate can handle this integral. But my goal is to plug tensors into this function W so that I can use the autograd module to compute partial derivatives. Is there some way in pytorch I can approximate this integral? I can write out a Simpson's rule formula, but I am wondering if there is a better approximation in pytorch before I write my own substandard one. A sketch of what I mean is below.
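For concreteness, here is roughly the kind of hand-rolled quadrature I have in mind, with torch.trapz standing in for the Simpson's rule I mentioned (just a sketch: the truncation bounds and grid size are placeholders I made up, and it assumes a pytorch version with complex tensor support):

import math
import torch

def integrand(x, p, t, y):
    # Integrand of f(x, p, t) above; all arguments are broadcastable tensors.
    a, b = x + y / 2, x - y / 2
    sech2_a = 1.0 / torch.cosh(a) ** 2
    sech2_b = 1.0 / torch.cosh(b) ** 2
    bracket = (2 * torch.sinh(a) * torch.sinh(b)
               + math.sqrt(2) * torch.sinh(b) * torch.exp(1j * 1.5 * t)
               + math.sqrt(2) * torch.sinh(a) * torch.exp(-1j * 1.5 * t)
               + 1)
    return sech2_a * sech2_b * bracket * torch.exp(-1j * p * y)

# Truncate (-inf, inf) to [-L, L]; sech^2 decays fast, so the tail is negligible.
L, n = 20.0, 4001  # placeholder bounds and grid size
y = torch.linspace(-L, L, n)
x = torch.tensor(0.5, requires_grad=True)
p = torch.tensor(1.0, requires_grad=True)
t = torch.tensor(0.0, requires_grad=True)

f_approx = torch.trapz(integrand(x, p, t, y), y)  # complex; backprop via f_approx.real etc.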

Thank you very much for your help.

Best.

Arijit.

Hi,

I’m afraid we don’t have integration functionality in pytorch, so no autograd support for it either.
From a quick look at scipy.integrate, it seems it would be quite significant work to port it as pytorch ops.

That being said, if the derivative formula can be written down, writing a custom autograd.Function to support it would be easy. You would then be able to use scipy.integrate for the forward and just need to write the backward by hand (potentially using scipy.integrate as well). I would be happy to help you write that if you want to go down that path.

Otherwise, manually implementing an approximation method would work. But autodiffing the whole approximation might not be the most efficient thing :confused: There is a small sketch of that route below.
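To make that second suggestion concrete, here is a minimal sketch of autodiffing through a fixed-grid trapezoid rule (the integrand g below is made up just to show the mechanics, not your f):

import torch

y = torch.linspace(-10.0, 10.0, 2001)      # fixed quadrature grid
p = torch.tensor(1.0, requires_grad=True)

g = torch.exp(-y ** 2) * torch.cos(p * y)  # made-up integrand
integral = torch.trapz(g, y)               # trapezoid rule, stays in the autograd graph
dI_dp, = torch.autograd.grad(integral, p)  # gradient flows through every grid point

This works, but the graph holds one node per grid point, which is exactly what can make it inefficient.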

Thank you so much. The equation I am trying to implement is the Moyal-Wigner equation:

\frac{\partial W(x,p,t)}{\partial t} = -p\,\frac{\partial W(x,p,t)}{\partial x} + \sum_{s=0}^{k} \frac{(-\hbar^{2})^{s}}{(2s+1)!}\left(\frac{1}{2}\right)^{2s} \frac{\partial^{2s+1} U(x)}{\partial x^{2s+1}}\,\frac{\partial^{2s+1} W(x,p,t)}{\partial p^{2s+1}}

The partial derivatives with respect to p and t are easy, but x is messy. I can do the derivatives by hand, though, and U(x) is a known function. For starters, k = 0, 1. But my goal is to train a NN to approximate U, so the above equation becomes part of the loss function and I need to be able to backprop through it. That's really where I am stuck.
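In code terms, what I am after for the k = 0 truncation is a residual like this that I can put in the loss (a rough sketch; W_net and U_net are hypothetical networks taking (x, p, t) and x respectively):

import torch

def moyal_residual_k0(W_net, U_net, x, p, t):
    # Residual of dW/dt + p * dW/dx - dU/dx * dW/dp = 0 (the k = 0 term only).
    x = x.detach().requires_grad_(True)
    p = p.detach().requires_grad_(True)
    t = t.detach().requires_grad_(True)
    W = W_net(x, p, t)
    dWdt, = torch.autograd.grad(W.sum(), t, create_graph=True)
    dWdx, = torch.autograd.grad(W.sum(), x, create_graph=True)
    dWdp, = torch.autograd.grad(W.sum(), p, create_graph=True)
    U = U_net(x)
    dUdx, = torch.autograd.grad(U.sum(), x, create_graph=True)
    # create_graph=True keeps the residual differentiable w.r.t. U_net's weights
    return dWdt + p * dWdx - dUdx * dWdp

The squared residual over sampled points, e.g. residual.pow(2).mean(), would then be the PDE term of the loss.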

Thank you so much for your answer.

If I understand you correctly, is it something like this?

class dWdx(Function):
    @staticmethod
    def forward(ctx, input):
        numpy_input = input.detach().numpy()
        eval = ...  # derivative of the inside function
        result = scipy_integrate(lambda y: eval)
        return input.new(result)

    @staticmethod
    def backward(ctx, grad_output):
        numpy_go = grad_output.numpy()
        result = ...  # scipy_integrate again?
        return grad_output.new(result)

I edited your message with ``` to make a code block and make it easier to read :wink:

It would be slightly different:

import torch
from torch.autograd import Function
from torch.autograd.function import once_differentiable

class dWdx(Function):
    @staticmethod
    def forward(ctx, x, p, t):
        # The detach should not be needed here; it won't be in a future version of pytorch
        np_x = x.detach().numpy()
        np_p = p.detach().numpy()
        np_t = t.detach().numpy()
        result = scipy_integrate(f, np_x, np_p, np_t) # I don't know what the args are
        result = torch.from_numpy(result)
        ctx.save_for_backward(x, p, t, result)
        return result

    @staticmethod
    @once_differentiable # This is just because our backward cannot be auto-diffed
    def backward(ctx, grad_output):
        x, p, t, result = ctx.saved_tensors
        grad_x = XXX # Here you can use whatever you want because it is once_differentiable.
        grad_p = XXX
        grad_t = XXX
        return grad_x, grad_p, grad_t # Matching the inputs of forward
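You would then call it through .apply, which is how custom Functions are hooked into autograd:

out = dWdx.apply(x, p, t)  # forward above runs the scipy integration
out.sum().backward()       # invokes the hand-written backward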

Thank you so so much.