class model(torch.nn.Module):
    def __init__(self, D_in, H):
        super(model, self).__init__()
        self.transfer = torch.nn.Sigmoid()
        self.lin1 = torch.nn.Linear(D_in, H)
        self.lin2 = torch.nn.Linear(H, 1)

    def forward(self, x1, x2):
        x1 = self.lin1(x1)
        x1 = self.transfer(x1)
        x1 = self.lin2(x1)
        x2 = self.lin1(x2)
        x2 = self.transfer(x2)
        x2 = self.lin2(x2)
        x1 = Variable(torch.Tensor([[1e0, x1, x2], [x1, 4e0, 0e0], [x2, 0e0, 2e0]]))
        x1, eigvec = torch.eig(x1, eigenvectors=False)
        x1 = torch.min(x1[:, 0])  # Get the lowest eigenvalue of the matrix, just the real part.
        return x1

The forward pass works fine, but when I try to do loss.backward() I get:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

I guess there are several problems here, from the way I build the matrix to diagonalize to the fact that I may not be able to use torch.eig in this context. So my question is: is there any way to train a model like this in PyTorch?
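One possible sketch of a fix (this is an assumption about the intent, not the accepted answer from the linked topic): `torch.Tensor([[...]])` copies the raw values of `x1` and `x2` into a brand-new leaf tensor, which detaches them from the autograd graph and produces exactly the "does not require grad" error above. Building the matrix with `torch.stack` keeps the graph intact, and since the matrix is symmetric, the differentiable `torch.linalg.eigvalsh` (available in recent PyTorch; `torch.eig` is deprecated) can replace `torch.eig`. The class name `EigModel` and the input shapes are illustrative choices.

```python
import torch

class EigModel(torch.nn.Module):
    def __init__(self, D_in, H):
        super().__init__()
        self.transfer = torch.nn.Sigmoid()
        self.lin1 = torch.nn.Linear(D_in, H)
        self.lin2 = torch.nn.Linear(H, 1)

    def forward(self, x1, x2):
        a = self.lin2(self.transfer(self.lin1(x1))).squeeze()
        b = self.lin2(self.transfer(self.lin1(x2))).squeeze()
        c = lambda v: torch.tensor(v, dtype=a.dtype)  # fixed entries, no grad needed
        # torch.stack keeps a and b connected to the autograd graph,
        # unlike torch.Tensor([...]) which only copies their values.
        M = torch.stack([
            torch.stack([c(1.0), a, b]),
            torch.stack([a, c(4.0), c(0.0)]),
            torch.stack([b, c(0.0), c(2.0)]),
        ])
        # The matrix is symmetric, so eigvalsh applies; it is differentiable
        # and returns real eigenvalues in ascending order.
        return torch.linalg.eigvalsh(M)[0]  # smallest eigenvalue

model = EigModel(D_in=4, H=8)
loss = model(torch.randn(4), torch.randn(4))
loss.backward()  # gradients now reach lin1 and lin2
```

Note that the eigenvalue gradient is only well defined when the smallest eigenvalue is simple (non-degenerate), which is generically the case here.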


Answered here.
Please don’t cross-post the same issue, as multiple users might work on a solution that has already been found elsewhere. Let’s stick to the original topic.