I am trying to compute the gradient of the Mittag-Leffler function with respect to its first two arguments. When I run the code below:
import torch
import mittag_leffler as ml
import numpy as np
from torch.autograd import Variable
# Create a tensor
x = torch.tensor([0.,1.,2.,3.,4.,5.,6.,7.,8.,9.], requires_grad=True)
b = torch.tensor([.1,.2,.3,.4,.5,.6,.7,.8,.9,0.9], requires_grad=True)
# Define a function
y = torch.sum(torch.tensor(ml.ml(-x,b)))
# Compute gradients
y.backward()
# Access gradients
print(x.grad)
print(b.grad)
it fails with: RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.
When I add detach() as the message suggests:
import torch
import mittag_leffler as ml
import numpy as np
from torch.autograd import Variable
# Create a tensor
x = torch.tensor([0.,1.,2.,3.,4.,5.,6.,7.,8.,9.], requires_grad=True)
b = torch.tensor([.1,.2,.3,.4,.5,.6,.7,.8,.9,0.9], requires_grad=True)
# Define a function
x = x.detach()
b = b.detach()
y = torch.sum(torch.tensor(ml.ml(-x,b)))
# Compute gradients
y.backward()
# Access gradients
print(x.grad)
print(b.grad)
it fails with a different error: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn.
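As I understand it, this second error is expected: detach() returns a tensor that is cut out of the autograd graph, so nothing computed from it carries a grad_fn, and backward() has nothing to differentiate. A minimal illustration:

```python
import torch

a = torch.tensor([1.0], requires_grad=True)
c = a.detach()           # c shares data with a but is detached from the graph
print(c.requires_grad)   # False: no grad_fn will be recorded downstream
y = (c * 2).sum()
# y.backward() here would raise the same "does not require grad" RuntimeError
```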
So, what should I do next? Any help or comments would be highly appreciated.
The Mittag-Leffler implementation I am using is here: ml