RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Hey everyone.
I’m trying to train this:

import torch
from torch.autograd import Variable

class model(torch.nn.Module):
    def __init__(self, D_in, H):
        super(model, self).__init__()
        self.transfer = torch.nn.Sigmoid()
        self.lin1 = torch.nn.Linear(D_in, H)
        self.lin2 = torch.nn.Linear(H, 1)

    def forward(self, x1, x2):
        # Run both inputs through the same two-layer network
        x1 = self.lin1(x1)
        x1 = self.transfer(x1)
        x1 = self.lin2(x1)

        x2 = self.lin1(x2)
        x2 = self.transfer(x2)
        x2 = self.lin2(x2)

        # Build the 3x3 matrix to diagonalize from the two network outputs
        x1 = Variable(torch.Tensor([[1e0, x1, x2], [x1, 4e0, 0e0], [x2, 0e0, 2e0]]))
        x1, eigvec = torch.eig(x1, eigenvectors=False)
        x1 = torch.min(x1[:, 0])  # Get the lowest eigenvalue of the matrix, just the real part.

        return x1

The forward pass works fine, but when I try to do loss.backward() I get:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

I suspect I made several mistakes here, from the way I built the matrix to diagonalize to the fact that torch.eig may not be usable in this context. So my question is: is there any way to train a model like this in PyTorch?
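For what it's worth, here is my guess at a minimal reproduction of the problem (just a sketch): rebuilding a tensor from the network outputs as plain numbers drops their grad_fn, so autograd has nothing to differentiate.

import torch

x = torch.rand(1, requires_grad=True)
y = x * 2
print(y.requires_grad, y.grad_fn)   # True, <MulBackward...>: y is on the graph

# Rebuilding a tensor from plain numbers detaches it from the graph
z = torch.Tensor([[1.0, float(y), 0.0]])
print(z.requires_grad, z.grad_fn)   # False, None -> backward() raises the error above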

Thank you very much!

This works for me.

import torch

class model(torch.nn.Module):
    def __init__(self, D_in, H):
        super(model, self).__init__()
        self.transfer = torch.nn.Sigmoid()
        self.lin1 = torch.nn.Linear(D_in, H)
        self.lin2 = torch.nn.Linear(H, 1)

    def forward(self, x1, x2):
        # Run both inputs through the same two-layer network
        x1 = self.lin1(x1)
        x1 = self.transfer(x1)
        x1 = self.lin2(x1)
        print(x1)

        x2 = self.lin1(x2)
        x2 = self.transfer(x2)
        x2 = self.lin2(x2)
        print(x2)

        # Start from the constant part of the matrix and add the network
        # outputs in-place: autograd tracks the indexed additions, so the
        # graph back to x1/x2 is preserved (unlike rebuilding the tensor
        # from scratch, which detaches it).
        matrix = torch.Tensor([[1e0, 0e0, 0e0], [0e0, 4e0, 0e0], [0e0, 0e0, 2e0]])
        matrix = matrix.view(3, 3, 1)  # trailing dim matches the shape of x1.squeeze(0)
        matrix[0, 1] += x1.squeeze(0)
        matrix[1, 0] += x1.squeeze(0)
        matrix[0, 2] += x2.squeeze(0)
        matrix[2, 0] += x2.squeeze(0)

        # symeig needs eigenvectors=True for the backward pass; symmetric
        # matrices have real eigenvalues, so no real part is needed.
        eigval, eigvec = torch.symeig(matrix.squeeze(), eigenvectors=True)
        mineigval = torch.min(eigval)  # lowest eigenvalue
        return mineigval


m = model(5, 3)
d = torch.rand(1, 5, requires_grad=True)
o = m(d, d + 1)
o.backward()
print(d.grad)  # gradients flow all the way back to the input
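As a side note, you could also build the matrix without in-place indexing, using torch.cat / torch.stack, which likewise keep the graph to x1 and x2 intact. A minimal sketch, with a hypothetical build_matrix helper and assuming x1 and x2 are the (1, 1) network outputs:

def build_matrix(x1, x2):
    x1 = x1.view(1)  # flatten the (1, 1) outputs to shape (1,)
    x2 = x2.view(1)
    one = torch.ones(1)
    zero = torch.zeros(1)
    rows = [torch.cat([one, x1, x2]),
            torch.cat([x1, 4 * one, zero]),
            torch.cat([x2, zero, 2 * one])]
    return torch.stack(rows)  # (3, 3), differentiable w.r.t. x1 and x2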

I’m getting the following error when I run your code:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-1-429aeef1b570> in <module>()
     37 d = torch.rand(1,5, requires_grad=True)
     38 o = m(d, d+1)
---> 39 o.backward()
     40 print(d.grad)

/usr/local/lib/python2.7/dist-packages/torch/tensor.pyc in backward(self, gradient, retain_graph, create_graph)
     91                 products. Defaults to ``False``.
     92         """
---> 93         torch.autograd.backward(self, gradient, retain_graph, create_graph)
     94 
     95     def register_hook(self, hook):

/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.pyc in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     88     Variable._execution_engine.run_backward(
     89         tensors, grad_tensors, retain_graph, create_graph,
---> 90         allow_unreachable=True)  # allow_unreachable flag
     91 
     92 

RuntimeError: the derivative for 'symeig' is not implemented

I thought it might be a problem with my PyTorch version, but I have just upgraded it. My current version is 0.4.1.

The derivative for symeig is implemented in the master branch (~version 0.5).
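If you want to check whether your build already has it, a quick sketch: build a small symmetric matrix and try a backward through symeig.

import torch
print(torch.__version__)  # needs a build with the symeig derivative (master / ~0.5)

a = torch.rand(3, 3, requires_grad=True)
sym = 0.5 * (a + a.t())                      # symeig expects a symmetric input
e, v = torch.symeig(sym, eigenvectors=True)  # eigenvectors=True is required for backward
e.sum().backward()                           # raises on 0.4.1, works on master
print(a.grad)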

Now it works. Once I updated to 0.5, everything was fine.
Do you know how long it will take for this version to become a stable release?

Thank you very much for the help!! 🙂

I’m not sure when PyTorch’s next stable version is going to be released.

@ptrblck, @tom: any idea?

Actually, I have no idea.
To speculate: There has been an announcement for the PyTorch DevCon on October 2nd, so I would expect at least a release candidate by then.
Also, some bugs have been labeled “blocker”, but I haven’t yet seen the “collect all release blockers” bug that preceded the previous releases.

Best regards

Thomas

Answered here.
Please don’t cross-post the same issue, as multiple users might work on a solution even though it might already be solved elsewhere. Let’s stick to the created topic.