Graph disconnected

Hello,
I have created a model to do binary classification. However, the model returns a vector, and based on this vector I return either torch.FloatTensor([0]) or torch.FloatTensor([1]).
The forward function is as follows:

def forward(self, x):
    x = self.model(x)           # output is [n, 1]
    x = self.function_check(x)  # output is a new vector [m, 1]
    if x.sum() == 10:
        return torch.FloatTensor([0]).requires_grad_()
    else:
        return torch.FloatTensor([1]).requires_grad_()

I am afraid that with this technique the output is not connected to my model's graph, and therefore the gradients will not be backpropagated through the model. Am I right?
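A quick way to see the problem (a minimal sketch, independent of the actual model): a freshly created tensor has no grad_fn, so nothing ties it back to the model's parameters.

import torch

out = torch.FloatTensor([0]).requires_grad_()
print(out.grad_fn)  # None: a leaf tensor with no history, so it cannot backpropagate into the model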

Edit: I displayed the graph using torchviz, and I was right: the graph is disconnected when I return the new labels. So my question now is: how can I attach my output to the graph?
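For reference, a minimal torchviz call looks like this (a sketch; `model` and the input shape below are placeholders for the actual module and data):

import torch
from torchviz import make_dot

inp = torch.randn(1, 10)  # placeholder input; adjust to the model's expected shape
out = model(inp)
make_dot(out, params=dict(model.named_parameters())).render("graph", format="png")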

My suggestion is

x = torch.ones(9, dtype=torch.float32, requires_grad=True).view(-1, 1)  # stand-in for the model output
y = x.sum()                        # 9.0 here
z = torch.sign(torch.abs(y - 10))  # 0 if the sum is exactly 10, else 1; stays in the graph
z

Output: tensor(1., grad_fn=<SignBackward0>)
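Applied inside forward, the workaround would look roughly like this (a sketch, assuming function_check is itself built from differentiable operations):

def forward(self, x):
    x = self.model(x)           # output is [n, 1]
    x = self.function_check(x)  # output is [m, 1]
    # 0 if the sum is exactly 10, else 1; computed from x, so it keeps a grad_fn
    return torch.sign(torch.abs(x.sum() - 10))

One caveat: even though the output is now attached to the graph, torch.sign is flat almost everywhere, so the gradient that flows back through SignBackward0 will be zero.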