Model's not getting updated

I am trying to train a graph-based model (a GCN), but the weights aren't getting updated: the output stays the same even after several iterations and the loss doesn't change. Below is my implementation of the model. If someone can spot the problem, it would be of great help.

```python
import torch
import torch.nn as nn
from torch.nn.parameter import Parameter

class GCN_Layer(nn.Module):

    def __init__(self, in_features, out_features, bias=True):
        super(GCN_Layer, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        # NOTE: torch.cuda.FloatTensor allocates uninitialized memory,
        # so these parameters hold arbitrary values until initialized.
        self.weight = Parameter(torch.cuda.FloatTensor(out_features), requires_grad=True)
        if bias:
            self.bias = Parameter(torch.cuda.FloatTensor(out_features), requires_grad=True)
        else:
            self.register_parameter('bias', None)

    def forward(self, input, adj_til, D_til_inv):
        # Normalized propagation: D^(-1/2) A D^(-1/2) X
        result = D_til_inv.float() @ adj_til.float() @ D_til_inv.float() @ input.float()
        return torch.matmul(result, self.weight.reshape(self.weight.shape[0], 1)) + self.bias
```

```python
import scipy.linalg

class GCN(nn.Module):

    def __init__(self, n_feat, n_hidden, n_classes):
        super(GCN, self).__init__()
        self.gcn1 = GCN_Layer(n_feat, n_hidden)
        self.gcn2 = GCN_Layer(n_hidden, 1024)
        self.linear = nn.Sequential(nn.Linear(1024, 5, bias=True),
                                    nn.Softmax())
        self.relu = nn.ReLU()

    def forward(self, x, adj, features):
        # Degree matrix
        D_til = torch.zeros(adj.shape[0], adj.shape[1], dtype=float)
        for i in range(D_til.shape[0]):
            D_til[i, i] = torch.sum(adj[i, :], dtype=float)

        # Degree matrix to the power -1/2
        D_til_inv = torch.from_numpy(scipy.linalg.fractional_matrix_power(D_til, (-1/2))).cuda()

        out = self.relu(self.gcn1(x, adj, D_til_inv))
        out = self.relu(self.gcn2(out, adj, D_til_inv))

        result = torch.matmul(features.float(), out)
        out = self.linear(result)
        return out
```

Thank you…

Using third-party libraries such as numpy or scipy will detach these operations from the computation graph, which would be the case in:

```python
D_til_inv = torch.from_numpy(scipy.linalg.fractional_matrix_power(D_til, (-1/2))).cuda()
```
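For illustration, here is a minimal sketch (not from the original post) showing the effect: a tensor built from a scipy result carries no `grad_fn`, so `backward()` cannot propagate through that step.

```python
import torch
import scipy.linalg

x = torch.eye(3, dtype=torch.double, requires_grad=True)

# A native PyTorch op records its history in the autograd graph.
y = x @ x
print(y.grad_fn)  # <MmBackward0 object ...>

# A scipy round-trip produces a fresh leaf tensor with no history.
z = torch.from_numpy(scipy.linalg.fractional_matrix_power(x.detach().numpy(), -0.5))
print(z.grad_fn)  # None
```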

If you don’t need or want to calculate the gradients for preceding operations, your code should work fine.
Otherwise you would need to either use PyTorch methods or write a custom autograd.Function as described here.
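In this specific case the scipy call can be avoided entirely: since `D_til` is diagonal, its -1/2 power is just an elementwise power of the node degrees. A minimal sketch using only PyTorch ops (the adjacency matrix below is a made-up example):

```python
import torch

# Hypothetical dense adjacency matrix with self-loops already added.
adj = torch.tensor([[1., 1., 0.],
                    [1., 1., 1.],
                    [0., 1., 1.]])

deg = adj.sum(dim=1)                   # node degrees (row sums)
D_til_inv = torch.diag(deg.pow(-0.5))  # D^(-1/2), built from PyTorch ops only
```

This keeps the whole computation on the GPU and, if `adj` ever requires gradients, inside the computation graph.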

Thank you… It worked out pretty well.