PyTorch updating only one element from model.parameters()

The following code is used to optimize the parameters i, x0, y0, r, a and b of the build_airy_disk function:

class Model(nn.Module):
    def __init__(self, guess_prms):
        super().__init__()
        # initialize weights with guess_prms
        weights = torch.from_numpy(guess_prms)
        # make weights torch parameters
        self.weights = nn.Parameter(weights, requires_grad=True)

    def forward(self, x_data):
        x = torch.from_numpy(x_data[0, :])
        y = torch.from_numpy(x_data[1, :])
        arr = torch.zeros(x.shape)
        # each of the 6 columns holds one parameter for the 4 disks
        I, X0, Y0, R, A, B = (self.weights[:, 0], self.weights[:, 1],
                              self.weights[:, 2], self.weights[:, 3],
                              self.weights[:, 4], self.weights[:, 5])
        for i, x0, y0, r, a, b in zip(I, X0, Y0, R, A, B):
            arr = arr + build_airy_disk(x, y, i, x0, y0, r, a, b)
        return arr

The model.parameters() array is a 4-by-6 tensor; however, only the first value is updating.
It is worth noting that the model's output is the sum of 4 Airy disks, which is taken as the final output.
Thank you in advance.

Nothing looks wrong just from reading this; perhaps some part of your graph is disconnected inside the build_airy_disk function.
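One quick way to check is to run a single backward pass and inspect the gradient of the weight tensor; the names model, x_data, target and loss_fn below are placeholders for your own objects:

out = model(x_data)
loss = loss_fn(out, target)  # e.g. nn.MSELoss()
loss.backward()
# a 4-by-6 gradient tensor; columns that are all zero are
# disconnected from the loss somewhere in the forward pass
print(model.weights.grad)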

Thank you for your reply.
I rewrote the forward method without any sub-function:

import torch
import torch.nn as nn
from scipy.special import jn


class Model(nn.Module):
    """Custom PyTorch model for gradient optimization."""

    def __init__(self, guess_prms):
        super().__init__()
        # initialize weights with initial values
        weights = torch.from_numpy(guess_prms)
        self.weights = nn.Parameter(weights, requires_grad=True)

    def forward(self, x_data):
        x = torch.from_numpy(x_data[0, :])
        y = torch.from_numpy(x_data[1, :])
        arr = torch.zeros(x.shape)
        I, X0, Y0, R, A, B = (self.weights[:, 0], self.weights[:, 1],
                              self.weights[:, 2], self.weights[:, 3],
                              self.weights[:, 4], self.weights[:, 5])
        Rz = 1.2196698912665045
        # jinc(x) = 2*J1(x)/x, computed through scipy on detached numpy arrays
        jinc = lambda x: torch.tensor(2 * jn(1, x.detach().numpy()) / x.detach().numpy())
        for i, x0, y0, r, a, b in zip(I, X0, Y0, R, A, B):
            r0 = torch.sqrt(a * (x - x0) ** 2 + b * (y - y0) ** 2)
            r0 = r0 * torch.pi * Rz / r
            arr = arr + i * jinc(r0) ** 2
        return arr

Still, the process only optimizes the first column (the i parameters, not x0, y0, r, a and b) of the 4-by-6 parameter array.
Thank you.

The detach operation on the x variable broke the graph, even though x is not a model parameter.
The problem here is that there isn't any equivalent to jn(1, x) from the scipy.special library in PyTorch…

I’m not super familiar with these APIs, but a few relevant functions should’ve been added in 1.13. See torch.special — PyTorch 1.13 documentation.
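For example, torch.special.bessel_j1 gives a differentiable first-order Bessel function, so the jinc can be written without leaving the autograd graph. A minimal sketch, assuming torch >= 1.13:

import torch

def jinc(x):
    # 2*J1(x)/x, fully differentiable (torch.special.bessel_j1 was added in 1.13)
    return 2 * torch.special.bessel_j1(x) / x

r0 = torch.linspace(0.1, 10.0, 50, requires_grad=True)
(jinc(r0) ** 2).sum().backward()
print(r0.grad)  # non-None: gradients flow through the Bessel function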

Have you been able to upgrade your torch version to 1.13?

Yes, 1.13 has been released, and pip installing on Colab seems to work fine.
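For anyone following along, the upgrade in a Colab cell looks roughly like this (version pin shown for illustration):

!pip install -U torch==1.13.0

import torch
print(torch.__version__)                    # should report 1.13.x
print(hasattr(torch.special, "bessel_j1"))  # True on 1.13+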