Loss function without expected output

hi there,
I have a question about the mechanism of PyTorch autograd.
For the usual loss functions, there are two inputs, yhat (the model's predicted output) and y (the expected/target output), as shown in the following two functions.

import torch

def mse_loss(yhat, y):
    # mean squared error between prediction yhat and target y
    loss = torch.mean((y - yhat) ** 2)
    return loss
def cross_entropy_loss(yhat, y):
    # binary cross-entropy for column vectors yhat (predictions) and y (targets)
    L = len(yhat)
    loss = -(1 / L) * (torch.mm(y.T, torch.log(yhat))
                       + torch.mm((1 - y).T, torch.log(1 - yhat)))
    return loss

However, I want to use a neural network to minimize my own loss function, which doesn't have a y (expected output). Will loss.backward() still give me the correct gradients?

def my_loss(yhat):
    # entropy-style loss that depends only on the prediction yhat
    L = len(yhat)
    loss = (1 / L) * torch.mm(yhat.T, torch.log(yhat))
    return loss

Yes. To the autograd machinery, any scalar function will do as a loss; it simply walks back over the computational graph, regardless of how you arrived at the result (within the restrictions that the operations involved must be differentiable and that it only follows nodes with requires_grad=True, which, e.g., the true labels usually do not have).
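For instance, here is a minimal sketch using your my_loss (the toy yhat comes from a softmax over random logits, purely so the values stay in (0, 1) and the log is well-defined; the names and shapes are just illustrative, not from your model):

import torch

logits = torch.randn(5, 1, requires_grad=True)  # stand-in for a network's raw output
yhat = torch.softmax(logits, dim=0)             # keeps values in (0, 1) so log() is safe

def my_loss(yhat):
    L = len(yhat)
    loss = (1 / L) * torch.mm(yhat.T, torch.log(yhat))
    return loss

loss = my_loss(yhat)   # single-element tensor, no target y involved
loss.backward()        # autograd traverses the graph back to logits
print(logits.grad)     # gradients are populated as usual

The same pattern applies when yhat comes from an actual model; the optimizer step afterwards is unchanged.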

Best regards

Thomas

Hi Tom,
Thanks for your answer. I have already tried this function in my model and it works.