How to make a model have the output of regression and classification?

The input is an RGB-D image with a corresponding class label and regression targets. How can I make a model that outputs both regression and classification results?

This is my program concept:

#### program concept ####
# 4 classes, 3 regression values

import torch
import torch.nn.functional as F

class Net(torch.nn.Module):

    def __init__(self, n_feature, n_hidden):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)   # hidden layer
        self.out = torch.nn.Linear(n_hidden, 4)              # classification head (4 classes)
        self.out2 = torch.nn.Linear(n_hidden, 3)             # regression head (3 values)

    def forward(self, x):
        x = F.relu(self.hidden(x))   # activation function for hidden layer
        x_out = self.out(x)          # class scores
        x_out2 = self.out2(x)        # regression values
        return x_out, x_out2

net = Net(n_feature=4, n_hidden=1024)   # define the network
optimizer = torch.optim.Adam(net.parameters(), lr=0.02)
loss_func = torch.nn.CrossEntropyLoss()   # the target label is NOT one-hot encoded
loss_func2 = torch.nn.MSELoss()           # mean squared loss for regression

for rgbd, y, y2 in data_loader_image_rgbd:

    optimizer.zero_grad()   # clear gradients

    pre_class, pre_regression = net(rgbd)   # input rgbd and predict based on it

    loss = loss_func(pre_class, y)            # must be (1. nn output, 2. target)
    loss2 = loss_func2(pre_regression, y2)    # regression target is y2, not y
    loss_total = loss + loss2
    loss_total.backward()   # backpropagation, compute gradients

    optimizer.step()   # apply gradients

#### program concept ####
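A quick shape check of the two heads (a sketch using a dummy batch that matches n_feature=4 above, not the actual RGB-D data):

x = torch.randn(8, 4)   # dummy batch: 8 samples, 4 features
logits, reg = net(x)
print(logits.shape)     # torch.Size([8, 4]) -> class scores
print(reg.shape)        # torch.Size([8, 3]) -> regression values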

Will this update my network parameters correctly? If not, how should I change it?

Sorry, my English is very poor; I hope you understand.

Thanks!

trainer.py in SSD might be useful as an example.


You already did what you wanted. Just sum the different losses and then call backward on the result. Autograd will take care of everything.
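If one loss turns out to dominate the other, a weighted sum is a common variant. A minimal sketch, meant as a drop-in for the training loop above; the weight lambda_reg is a hypothetical hyperparameter, not part of the original suggestion:

# weight the regression loss before summing; lambda_reg would be
# tuned on validation data
lambda_reg = 0.1
loss_total = loss + lambda_reg * loss2   # still a single scalar
loss_total.backward()                    # one backward pass updates both heads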

Hello,

Can the above program achieve the goal?

If the regression targets are values like (104, 180, 60) and the classification labels are (1, 2, 3, 4, …, 10), do the losses need any extra processing (e.g., normalization)? If so, what should be done?

    loss = loss_func(pre_class, y)            # must be (1. nn output, 2. target)
    loss2 = loss_func2(pre_regression, y2)    # regression target is y2
    loss_total = loss + loss2

And does PyTorch automatically update all the parameters of the network?

def __init__(self, n_feature, n_hidden):
    super(Net, self).__init__()
    self.hidden = torch.nn.Linear(n_feature, n_hidden)   # hidden layer
    self.out = torch.nn.Linear(n_hidden, 4)              # classification head
    self.out2 = torch.nn.Linear(n_hidden, 3)             # regression head

Thanks ~ ¯U¯

  1. You should normalize the regression targets, otherwise I think it will be very hard to train this network. For regression, you could apply something like a log transform or, if you know the bounds, just normalize the targets to the range 0 to 1 (see the sketch after this list).
  2. Yes, because net.parameters() returns all the parameters of the network, so all of them will be optimized.
  3. It's good to pick a topic for this thread : )
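For point 1, a minimal sketch of min-max normalizing the regression targets, assuming the per-dimension bounds are known (the bound values below are placeholders, not from the original post):

import torch

# assumed known per-dimension bounds for the 3 regression targets
y2_min = torch.tensor([0.0, 0.0, 0.0])
y2_max = torch.tensor([255.0, 255.0, 255.0])

def normalize(y2):
    # scale targets into [0, 1] so MSELoss operates on comparable magnitudes
    return (y2 - y2_min) / (y2_max - y2_min)

def denormalize(y2_norm):
    # map predictions back to the original range at inference time
    return y2_norm * (y2_max - y2_min) + y2_min

# in the training loop:
# loss2 = loss_func2(pre_regression, normalize(y2))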

If I want to use a more complex network (e.g., ResNet), is it just a matter of replacing the last fully connected layer with two fully connected layers that output the regression and classification results?

Yes, just replace the hidden layers with a ResNet backbone.
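A minimal sketch of that idea, assuming a torchvision resnet18 backbone; the 4-channel first conv is an assumption to accommodate the RGB-D input, and the head sizes match the concept above:

import torch
import torchvision

class TwoHeadResNet(torch.nn.Module):
    def __init__(self, n_classes=4, n_regression=3):
        super(TwoHeadResNet, self).__init__()
        backbone = torchvision.models.resnet18(pretrained=False)
        # RGB-D input has 4 channels, so replace the stock 3-channel first conv
        backbone.conv1 = torch.nn.Conv2d(4, 64, kernel_size=7, stride=2,
                                         padding=3, bias=False)
        n_hidden = backbone.fc.in_features            # 512 for resnet18
        backbone.fc = torch.nn.Identity()             # drop the original classifier
        self.backbone = backbone
        self.out = torch.nn.Linear(n_hidden, n_classes)       # classification head
        self.out2 = torch.nn.Linear(n_hidden, n_regression)   # regression head

    def forward(self, x):
        feat = self.backbone(x)                       # shared features
        return self.out(feat), self.out2(feat)

The training loop stays the same as in the program concept: sum the two losses and call backward once.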