How to compute the gradient of a function without an explicit expression?

I am trying to compute the gradient of the Mittag-Leffler function with respect to its first two parameters. When I run the code below:

import torch
import mittag_leffler as ml
import numpy as np
from torch.autograd import Variable


# Create a tensor
x = torch.tensor([0.,1.,2.,3.,4.,5.,6.,7.,8.,9.], requires_grad=True)
b = torch.tensor([.1,.2,.3,.4,.5,.6,.7,.8,.9,0.9], requires_grad=True)

# Define a function

y = torch.sum(torch.tensor(ml.ml(-x,b)))

# Compute gradients
y.backward()

# Access gradients
print(x.grad)
print(b.grad)

it shows the error: RuntimeError: Can’t call numpy() on Variable that requires grad. Use var.detach().numpy() instead.
When I add detach() as below:

import torch
import mittag_leffler as ml
import numpy as np
from torch.autograd import Variable


# Create a tensor
x = torch.tensor([0.,1.,2.,3.,4.,5.,6.,7.,8.,9.], requires_grad=True)
b = torch.tensor([.1,.2,.3,.4,.5,.6,.7,.8,.9,0.9], requires_grad=True)

# Define a function

x = x.detach()
b = b.detach()
y = torch.sum(torch.tensor(ml.ml(-x,b)))

# Compute gradients
y.backward()

# Access gradients
print(x.grad)
print(b.grad)

it shows the error: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn.
So, what should I do next? Any help or comments are highly appreciated.
The Mittag-Leffler code I am using is here:
ml

Autograd won’t track NumPy operations, so these errors are expected.
You could try to rewrite the ml function in PyTorch. If that’s not possible, you would need to implement a custom autograd.Function including the backward pass, as described here.
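For the first option, here is a rough, untested sketch of what "rewrite ml in PyTorch" could look like, using the truncated power series E_{alpha,beta}(z) = sum_k z**k / Gamma(alpha*k + beta) built only from torch ops, so autograd can differentiate it with respect to both z and the order. The plain series is only accurate for moderate |z| and not-too-small orders, and the helper name ml_torch is made up for illustration, so treat it as a sketch rather than a drop-in replacement for the package:

import torch

def ml_torch(z, alpha, beta=1.0, n_terms=100):
    # Truncated series E_{alpha,beta}(z) = sum_k z**k / Gamma(alpha*k + beta).
    # Only torch ops are used, so autograd can differentiate w.r.t. z and alpha.
    k = torch.arange(n_terms, dtype=z.dtype).unsqueeze(-1)   # shape (n_terms, 1)
    terms = z.unsqueeze(0) ** k / torch.exp(torch.lgamma(alpha * k + beta))
    return terms.sum(dim=0)

# Double precision keeps the large powers and Gamma values finite.
x = torch.tensor([0.5, 1.0, 2.0, 3.0], dtype=torch.float64, requires_grad=True)
b = torch.tensor([0.5, 0.6, 0.7, 0.8], dtype=torch.float64, requires_grad=True)
y = ml_torch(-x, b).sum()
y.backward()
print(x.grad)
print(b.grad)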

I want to "implement a custom autograd.Function including the backward pass" and tried the code below:

import torch
import math
import mittag_leffler as ml

class CustomML(torch.autograd.Function):

    @staticmethod
    def forward(ctx, x, beta):
        ctx.save_for_backward(x,beta)
        return ml.ml(-x,beta)

    @staticmethod
    def backward(ctx, grad_output):
        gbeta = ((ml.ml(x,beta+0.00001)-ml.ml(x,beta))/0.00001 + (ml.ml(x,beta-0.00001)-ml.ml(x,beta))/(-0.00001))/2
        input, = ctx.saved_tensors
        return grad_output * gbeta

dtype = torch.float
device = torch.device("cpu")

x = torch.linspace(0, 3, 2000, device=device, dtype=dtype)
beta_o = torch.linspace(0.8, 1.0, 2000, device=device, dtype=dtype)
y = ml.ml(-x, beta_o)

beta = torch.full((), 0.7, device=device, dtype=dtype, requires_grad=True)

learning_rate = 5e-6
for t in range(2000):
    # To apply our Function, we use the Function.apply method. We alias this as 'CML'.
    CML = CustomML.apply

    y_pred = CML(-x,beta)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass.
    loss.backward()

    # Update weights using gradient descent
    with torch.no_grad():
        
        beta -= learning_rate * beta.grad
        
        # Manually zero the gradients after updating weights
        beta.grad = None

print(f'beta = {beta}')

but the error is the same: RuntimeError: Can’t call numpy() on Variable that requires grad. Use var.detach().numpy() instead.
Could you help me improve the code above?

Assuming the error is raised in ml.ml(-x, beta), call x.detach().numpy() before passing it to ml, as the error message suggests.
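Putting that together with the custom Function, a minimal sketch could look like the following. It assumes ml.ml accepts a NumPy array for the first argument and a scalar (0-d array) for the order, and it only returns a gradient for beta, using the same central finite difference your backward attempted, while treating x as fixed data:

import torch
import mittag_leffler as ml

class CustomML(torch.autograd.Function):

    @staticmethod
    def forward(ctx, z, beta):
        ctx.save_for_backward(z, beta)
        # Detach before handing the data to the NumPy-based ml, as suggested above.
        out = ml.ml(z.detach().numpy(), beta.detach().numpy())
        return torch.as_tensor(out, dtype=z.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        z, beta = ctx.saved_tensors
        z_np = z.detach().numpy()
        b_np = beta.detach().numpy()
        eps = 1e-5
        # Central finite difference of ml(z, beta) with respect to beta.
        gbeta = (ml.ml(z_np, b_np + eps) - ml.ml(z_np, b_np - eps)) / (2 * eps)
        gbeta = torch.as_tensor(gbeta, dtype=beta.dtype)
        # One return value per forward input: no gradient for z (treated as data),
        # and a scalar gradient for the scalar beta.
        return None, (grad_output * gbeta).sum()

# The Function now wraps ml.ml directly, so in the training loop it is called
# like the data-generation line:
# y_pred = CustomML.apply(-x, beta)

Detaching inside forward (rather than detaching x and beta at the top level, as in your second snippet) keeps requires_grad on the leaves, so loss.backward() still reaches beta through the custom backward.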
