Backpropagation on random networks

When we define a very random graph, e.g. the input going directly as input to the 5th node of layer 4, and we have many nodes and edges like that, will loss.backward() still work?

It should.
Please post any code that you think might produce an error.

I can't share the code. But what happens in the code is that we convert an adjacency matrix into the network. The problem is: if I use the code like that, concatenate the outputs of previous nodes, and send the result into the next node, does autograd know where that input is coming from? I believe that because of this the weights don't change, as in there is no optimization occurring.

Concatenation should not produce any error during the backward pass, so you should be fine using it. About converting "an adjacency matrix into the network", I am not quite clear what you mean. Could you please post some code to elaborate?
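As a quick sanity check, here is a minimal sketch (hypothetical tensors and shapes, not your network) showing that torch.cat is tracked by autograd, so each concatenated input receives its slice of the gradient:

```python
import torch

# Two hypothetical node outputs that require gradients.
a = torch.randn(3, 2, requires_grad=True)
b = torch.randn(3, 4, requires_grad=True)

# Concatenate along the feature dimension, as when feeding a downstream node.
out = torch.cat([a, b], dim=1)       # shape (3, 6)
out.sum().backward()

# Every tensor that contributed to the concatenation gets a gradient.
print(a.grad.shape, b.grad.shape)    # torch.Size([3, 2]) torch.Size([3, 4])
```

Because the gradient of a sum is 1 everywhere, `a.grad` and `b.grad` come back as all-ones tensors of the original shapes, confirming the graph is intact through the concatenation.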

def forward(self, x):
    # Start with the raw inputs as the first `input_size` node outputs.
    outputs = [x[:, i:i + 1] for i in range(x.shape[1])]
    for i in range(self.input_size, len(self.inputs_count)):
        # Gather the outputs of every predecessor j with an edge j -> i,
        # concatenate them, and feed them through node i
        # (`self.nodes[i]` is a placeholder for the per-node module).
        incoming = [outputs[j] for j in range(self.n) if self.adjMat[j][i] != 0]
        outputs.append(self.nodes[i](torch.cat(incoming, 1)))
    out = torch.cat(outputs[-self.outputs_count:], 1)
    return self.output_activation(out) if self.activation is not None else out

As you may see in this forward pass, I split the input and add it to a list of outputs. That list contains the outputs from all nodes. The input to a node comes only from a select few nodes, so I concatenate those outputs and feed them as input to the next node.

While training this network the weights change very slowly. Training is slow even for a simple XOR example. So my question is: what could be the reason for this? Is the fact that the node does not properly know where its inputs are coming from causing the problem? If so, how should I fix it?
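One way to narrow this down is to check whether gradients actually reach the parameters after `backward()`. This is a generic diagnostic on a standard model (not the network above); the same loop works on any `nn.Module`:

```python
import torch
import torch.nn as nn

# Stand-in model for illustration; substitute your own module here.
model = nn.Sequential(nn.Linear(2, 4), nn.Tanh(), nn.Linear(4, 1))

x = torch.randn(8, 2)
loss = model(x).pow(2).mean()
loss.backward()

# If any parameter's grad is None, that parameter is disconnected from the
# loss (e.g. its output was never used, or was detached along the way).
for name, p in model.named_parameters():
    print(name, "grad is None" if p.grad is None else f"grad norm {p.grad.norm():.4f}")
```

If every gradient is present and non-zero but training is still slow, the graph is fine and the issue is more likely the learning rate, initialization, or loss scale rather than gradient flow.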

My bad, I was making an error in the network. Thanks for your help.