How can a loss of dimension 1 work with an output of n dimensions during backward?

inp = torch.tensor([[1, 1, 1]], dtype=torch.float)
i = torch.tensor([[1]], dtype=torch.long).view(-1, 1)

q_eval = self.eval_net(inp).gather(1, i)  # keep only the output at index i

self.optimizer.zero_grad()
loss = F.smooth_l1_loss(q_eval, 0 * q_eval)
loss.backward()
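For context on the shapes: eval_net maps the [1, 3] input to a [1, 2] output, and gather(1, i) reduces that to [1, 1] by keeping only the column named in i. A minimal sketch with a plain tensor (toy values, no network involved):

import torch

q_all = torch.tensor([[0.3, 0.7]])         # stand-in for eval_net(inp), shape [1, 2]
i = torch.tensor([[1]], dtype=torch.long)  # pick column 1 in each row
q_eval = q_all.gather(1, i)                # tensor([[0.7]]), shape [1, 1]
print(q_eval)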

So how does backward work if eval_net has multiple outputs?

import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc = nn.Linear(3, 1)
        self.a_head = nn.Linear(1, 2)  # in this case two outputs

    def forward(self, x):
        # forward wiring assumed from the layer shapes
        return self.a_head(self.fc(x))


Come on, people, I can't believe that nobody is helping me! :worried:
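In case it still helps: gather's backward scatters the incoming gradient back to the picked indices and writes zeros everywhere else, so the unselected outputs of eval_net simply receive a zero gradient and contribute nothing to the parameter gradients. Here is a runnable sketch of that, assuming the Net class above (with a forward that chains fc into a_head) and made-up toy inputs:

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)  # only to make the printed numbers reproducible

net = Net()  # the two-output net defined above
inp = torch.tensor([[1.0, 1.0, 1.0]])
i = torch.tensor([[1]], dtype=torch.long)

q_all = net(inp)             # shape [1, 2]: both outputs of a_head
q_eval = q_all.gather(1, i)  # shape [1, 1]: only output 1 survives

# same as 0 * q_eval in the original, just detached from the graph
loss = F.smooth_l1_loss(q_eval, torch.zeros_like(q_eval))
loss.backward()

# The weight row of a_head that produces output 0 keeps a zero gradient,
# because gather routed the loss gradient only to output 1.
print(net.a_head.weight.grad)

So the loss being a single number is not a problem: backward starts from that scalar, and each op (smooth_l1_loss, gather, the linear layers) distributes the gradient to whatever inputs actually produced it.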

It seems I have a similar question.

I’ve found this useful.

Best regards!