I have created a model to do binary classification. However, the model returns a vector, and based on this vector I return either one freshly constructed torch.FloatTensor() or another.
The forward function is as follows:
def forward(self, x):
    x = self.model(x)           # output is [n, 1]
    x = self.function_check(x)  # output is a new vector [m, 1]
    if x.sum() == 10:
        return torch.FloatTensor().requires_grad_()
    else:
        return torch.FloatTensor().requires_grad_()
I am afraid that with this technique the output is not connected to the model's computation graph, and therefore gradients will not backpropagate through the model. Am I right?
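To illustrate the suspicion: a minimal sketch (using a hypothetical Linear model as a stand-in) showing that a tensor constructed with torch.FloatTensor(...) inside forward is a new leaf tensor with no grad_fn, so it carries no history back to the model, whereas the model's own output does:

```python
import torch

model = torch.nn.Linear(4, 1)   # stand-in for self.model
x = torch.randn(3, 4)

out = model(x)                  # produced by the model: part of the graph
fresh = torch.FloatTensor([1.0]).requires_grad_()  # brand-new leaf tensor

print(out.grad_fn)    # an autograd node: connected to model's parameters
print(fresh.grad_fn)  # None: a leaf with no history, detached from the model
```

Calling backward() on `fresh` would therefore never populate `model.weight.grad`.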
Edit: I displayed the graph using torchviz, and I was right: the graph is disconnected when I return the new labels. So my question now is: how can I attach my output to the graph?
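For comparison, one direction I am considering (not necessarily the right fix) is to compute the returned value directly from the model's output instead of constructing a fresh tensor, e.g. replacing the hard `x.sum() == 10` branch with a differentiable surrogate such as a sigmoid around the threshold; the threshold of 10 is taken from my code above, and the Linear model is again a stand-in:

```python
import torch

model = torch.nn.Linear(4, 1)   # stand-in for self.model
x = torch.randn(3, 4)

out = model(x)                           # [3, 1], connected to the graph
# soft version of the `sum == 10` check: stays a function of `out`,
# so the result keeps the autograd history instead of being a new leaf
score = torch.sigmoid(out.sum() - 10)

score.backward()
print(model.weight.grad)  # populated: gradients reach the model
```

Because `score` is derived from `out` by differentiable ops, backward() propagates into the model's parameters, which the fresh-FloatTensor version cannot do.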