I need higher-order gradients of my loss function w.r.t. all the parameters of my model, so, as suggested, I’m trying to make use of autograd with functionals. So far, I’ve created a class whose forward function takes both data and weights as input, passes the data through functional layers parameterized by the weights, and returns the model’s output. That output is then used along with a target value to compute the loss. Each parameter has requires_grad set during initialization of a class object.
The class looks like the following.
```python
class M():
    def __init__(self):
        # initialize parameters
        ...

    def forward(self, x, weights=None):
        # pass x through functional layers parameterized by weights
        return output
```
I try to use this class in the following way:
```python
m = M()
loss = loss_fn(m.forward(data, weights), target)
grad = autograd.grad(loss, weights, create_graph=True)
```
However, I get an error:
```
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
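For completeness, here is a minimal self-contained sketch that reproduces the same error on my end (a single `F.linear` layer and `F.mse_loss` stand in for my actual layers and loss function; the shapes are arbitrary):

```python
import torch
from torch import autograd
import torch.nn.functional as F

class M():
    def __init__(self):
        # Single weight tensor standing in for the model's parameters.
        # Note: created without requires_grad=True here, which is enough
        # to trigger the error below.
        self.weights = [torch.randn(2, 4)]

    def forward(self, x, weights=None):
        weights = weights if weights is not None else self.weights
        # One functional layer parameterized by the weights
        return F.linear(x, weights[0])

m = M()
data, target = torch.randn(3, 4), torch.randn(3, 2)
loss = F.mse_loss(m.forward(data, m.weights), target)

try:
    grad = autograd.grad(loss, m.weights, create_graph=True)
except RuntimeError as e:
    print(e)  # element 0 of tensors does not require grad ...
```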
I think I must be doing something obviously wrong, so I’d appreciate any help.