Multiple Outputs of a NN

Dear all,

Currently I am building a neural net to estimate the uncertainty in a regression, which is performed by the neural net itself. I will use a custom loss to update the weights of the neurons. The custom loss consists of two values, which are the outputs of the neural net. For now I am using nn.MSELoss() to perform the regression. I'm quite unsure how exactly I can implement two outputs. I would be thankful for any help regarding this issue. My code is displayed below.

import torch
import torch.nn as nn

class NN(nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(NN, self).__init__()
        self.hidden = nn.Linear(n_feature, n_hidden)
        self.predict = nn.Linear(n_hidden, n_output)
        #probably here i can define the outputs

    def forward(self, x):
        x = torch.sigmoid(self.hidden(x))     
        x = self.predict(x)     
        # define the activation func for the outputs
        return x

net = NN(n_feature=1, n_hidden=10, n_output=1)

#create the optimizer
optimizer = torch.optim.SGD(net.parameters(), lr=0.2)

def MyLoss():

    # takes the outputs of the neural net to compute the loss

    return None
    

n_iter = 200

#training loop
for i in range(n_iter):
    prediction = net(x)     # input x and predict based on x

    loss = MyLoss()

    optimizer.zero_grad()   # zero the gradient buffers
    loss.backward()         # backpropagation, compute gradients
    optimizer.step()        # apply gradients   
...

You can simply return two tensors in the forward method of your model.
E.g. using your model:

class NN(nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(NN, self).__init__()
        self.hidden = nn.Linear(n_feature, n_hidden)
        self.predict = nn.Linear(n_hidden, n_output)

    def forward(self, x):
        x1 = torch.sigmoid(self.hidden(x))
        x2 = self.predict(x1)

        return x1, x2

This would now return the intermediate activation as well as the final output.
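
If the two outputs you have in mind are the regression value itself plus an uncertainty estimate, another option is to give the model two separate output heads and feed both into your custom loss. Below is a minimal sketch assuming a heteroscedastic regression setup, where the net predicts a mean and a log-variance and the loss is the Gaussian negative log-likelihood; the class name, head names, loss, and toy data are my assumptions, not something from your post:

import torch
import torch.nn as nn

class UncertaintyNN(nn.Module):
    def __init__(self, n_feature, n_hidden):
        super().__init__()
        self.hidden = nn.Linear(n_feature, n_hidden)
        # two heads: one for the regression mean, one for the
        # log-variance (the log keeps the variance positive)
        self.mean_head = nn.Linear(n_hidden, 1)
        self.logvar_head = nn.Linear(n_hidden, 1)

    def forward(self, x):
        h = torch.sigmoid(self.hidden(x))
        return self.mean_head(h), self.logvar_head(h)

def my_loss(mean, log_var, target):
    # Gaussian negative log-likelihood (up to a constant):
    # 0.5 * (log sigma^2 + (y - mu)^2 / sigma^2)
    return (0.5 * (log_var + (target - mean) ** 2 / log_var.exp())).mean()

net = UncertaintyNN(n_feature=1, n_hidden=10)
optimizer = torch.optim.SGD(net.parameters(), lr=0.2)

# toy data, only to make the snippet self-contained
x = torch.rand(100, 1)
y = torch.sin(5 * x) + 0.1 * torch.randn(100, 1)

for i in range(200):
    mean, log_var = net(x)            # forward pass returns both outputs
    loss = my_loss(mean, log_var, y)  # custom loss consumes both
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

If that matches your use case, recent PyTorch versions also ship nn.GaussianNLLLoss, which takes the predicted mean, the target, and the predicted variance (not the log-variance) directly.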