Custom Bessel autograd function

Hi, I was able to implement a custom autograd class for Jv that supports derivatives of arbitrary order, as shown below:

import numpy as np
from scipy.special import jv
import torch


class besselJv(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, v):
        ctx.save_for_backward(x, v)
        return jv(v, x)

    @staticmethod
    def backward(ctx, grad_out):
        x, v = ctx.saved_tensors
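        # d/dx J_v(x) = (J_{v-1}(x) - J_{v+1}(x)) / 2; no gradient is returned for v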
        
        return grad_out * 0.5 * (besselJv.apply(x, v - 1) - besselJv.apply(x, v + 1)), None


x = torch.tensor([2.0], dtype=torch.double, requires_grad=True)
v = torch.tensor(1)
y = besselJv.apply(x, v)

dy_dx   = torch.autograd.grad(y, x, create_graph=True)
dy2_dx2 = torch.autograd.grad(dy_dx, x)

print(dy_dx[0].item())
print(dy2_dx2[0])

However, when I try implementing this with v treated as a constant rather than as a tensor, following this post:

import numpy as np
from scipy.special import jv
import torch


class besselJv(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, v):
        ctx.save_for_backward(x)
        ctx._v = v
        return torch.from_numpy(jv(v, x.detach().numpy()))

    @staticmethod
    def backward(ctx, grad_out):
        x = ctx.saved_tensors
        v = ctx._v
        return grad_out*0.5*(besselJv.apply(x, v-1) -  besselJv.apply(x, v+1)), None


x = torch.tensor([2.0], dtype=torch.double, requires_grad=True)
v = torch.tensor(1)
y = besselJv(x, v)

dy_dx   = torch.autograd.grad(y, x, create_graph=True)
dy2_dx2 = torch.autograd.grad(dy_dx, x)

print(dy_dx[0].item())
print(dy2_dx2[0])

I get an error with this line:

What is the issue here? Is this code outdated?

Thank you,
Alex

I don’t know which error you are seeing, as it seems to be missing from your post, but executing your code yields:

TypeError: 'besselJv' object is not iterable

for me.
Checking the output y also shows:

y
<__main__.besselJv at 0x7fbfac055b80>

which is an object and not a tensor as expected, so I guess you want to use:

y = besselJv.apply(x, v)

instead which works fine for me.

Hi, thank you for the prompt reply. Yes, the first code snippet works, but I would like to have v as a constant rather than a tensor, like in the second code snippet, as was done in the referenced post.

I’m not sure what exactly you are looking for, but I assume you want to save v in ctx._v?
If so, then your second code snippet contains a few slight issues (ctx.saved_tensors returns a tuple, so x needs to be unpacked from it, and the tensor v is converted to a numpy value before being passed to jv), and this should work:

class besselJv(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, v):
        ctx.save_for_backward(x)
        ctx._v = v
        return torch.from_numpy(jv(v.numpy(), x.detach().numpy()))

    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors
        v = ctx._v
        return grad_out * 0.5 * (besselJv.apply(x, v - 1) - besselJv.apply(x, v + 1)), None


x = torch.tensor([2.0], dtype=torch.double, requires_grad=True)
v = torch.tensor(1)
y = besselJv.apply(x, v)

dy_dx   = torch.autograd.grad(y, x, create_graph=True)
dy2_dx2 = torch.autograd.grad(dy_dx, x)

print(dy_dx[0].item())
print(dy2_dx2[0])
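
As a side note, if you want to double-check that the backward (and the double backward) are correct, you could compare them against finite differences via gradcheck / gradgradcheck, e.g. along these lines (just a sketch; only x is treated as a differentiable input):

from torch.autograd import gradcheck, gradgradcheck

x = torch.tensor([2.0], dtype=torch.double, requires_grad=True)
v = torch.tensor(1)

func = lambda x: besselJv.apply(x, v)  # keep v fixed, differentiate w.r.t. x only
print(gradcheck(func, (x,)))      # compares the custom backward against finite differences
print(gradgradcheck(func, (x,)))  # also checks the second derivative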

Great, that works! You can simplify it further by using v = 1 instead of v = torch.tensor(1), and replacing v.numpy() with v in the return statement of the forward function.
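
For completeness, this is roughly what the simplified class looks like after those two changes:

class besselJv(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, v):
        ctx.save_for_backward(x)
        ctx._v = v  # v is a plain Python number now, so it can be stored directly
        return torch.from_numpy(jv(v, x.detach().numpy()))

    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors
        v = ctx._v
        # d/dx J_v(x) = (J_{v-1}(x) - J_{v+1}(x)) / 2
        return grad_out * 0.5 * (besselJv.apply(x, v - 1) - besselJv.apply(x, v + 1)), None


x = torch.tensor([2.0], dtype=torch.double, requires_grad=True)
v = 1  # constant instead of torch.tensor(1)
y = besselJv.apply(x, v)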