As @swap said, your error comes from the fact that dx is None, and None ** 2 is undefined. Variable() is deprecated and shouldn't be used. Also, when you define self.x as torch.tensor(x, requires_grad=True).double(), the .double() call creates a new non-leaf tensor, which breaks your computation graph. Simply define it as,
self.x = x.double()
self.x.requires_grad = True
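Here's a minimal sketch of the difference (the array values are made up for illustration; note the float32 dtype, since a no-op .double() on an already-double tensor wouldn't create a new tensor):

```python
import torch
import numpy as np

x = np.array([1.0, 2.0, 3.0], dtype=np.float32)

# Original approach: .double() creates a NEW tensor downstream of the
# requires_grad leaf, so the result is a non-leaf and its .grad stays None.
bad = torch.tensor(x, requires_grad=True).double()
print(bad.is_leaf)   # False

# Fixed approach: set requires_grad on the final tensor itself.
good = torch.from_numpy(x).double()
good.requires_grad = True
print(good.is_leaf)  # True

(good ** 2).sum().backward()
print(good.grad)     # tensor([2., 4., 6.], dtype=torch.float64)
```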
If you're trying to get the gradient of the output w.r.t. the inputs, you can just do dx = torch.autograd.grad(output, self.x, torch.ones_like(self.x), ...) and redefine the loss as torch.sum((100 * dx)**2). You can then use torch.einsum to rearrange the output into the shape you want.
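As a minimal sketch of that grad call (the x ** 3 function and shapes here are made up stand-ins for your model, not your actual code):

```python
import torch

# Hypothetical input and output standing in for your model
x = torch.linspace(0.0, 1.0, 5, dtype=torch.double, requires_grad=True)
output = x ** 3  # elementwise, so d(output)/dx = 3 * x ** 2

# grad_outputs=torch.ones_like(output) handles the vector-valued output;
# create_graph=True lets us backprop through dx in the loss below
dx, = torch.autograd.grad(output, x, torch.ones_like(output), create_graph=True)

loss = torch.sum((100 * dx) ** 2)
loss.backward()  # populates x.grad
```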
Also, your cost_function doesn't return the loss, so that will be another error down the line. Add return loss, and be careful with retain_graph=True, as it can cause a memory leak if used improperly.
Finally, do you need to have everything within a class? Because it might be significantly easier to just write it as a script!
x is a NumPy array and needs to be converted to a tensor.
So I tried this: self.x = torch.from_numpy(x).double() followed by self.x.requires_grad = True
Again, grad returns None. Can you explain why I am getting None as the output of grad, or suggest a way around this problem? As far as I know, the output is a function of the input, so we should be able to find the grad of the output w.r.t. the input. Also, is reshaping an in-place operation?