Error while returning multiple variables from the forward function

I am a beginner in PyTorch and want to create something like:

class net(nn.Module):
    ...

    def forward(self, x):
        out = self.layerA(x)
        out = self.layerB(out)
        pLoss = calcPLoss(out)
        out = self.layerC(out)
        return out, pLoss

def train():
    ...
    out, pLoss = net(input)
    loss = criterion(out, input)
    loss = loss + pLoss
    loss.backward()

Here, I have two questions:

  1. I am not able to return multiple variables from the forward function. Can somebody show me the correct way to do this?
  2. If I am able to do this, is PyTorch able to manage the gradients accordingly? My pLoss is calculated at an intermediate layer and I am directly summing it with the final loss, but I want the gradients from pLoss to backpropagate only through the layers before that intermediate point.

Thanks!

Hi,

This is the right way to do it.
The gradients will be propagated properly.
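If it helps, here is a tiny self-contained sketch of that pattern. The layer sizes, the MSE criterion, and the penalty term are just placeholders (not your actual calcPLoss): it shows that forward can return a tuple and that the auxiliary loss only sends gradients through the layers computed before it.

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layerA = nn.Linear(8, 8)
        self.layerB = nn.Linear(8, 8)
        self.layerC = nn.Linear(8, 8)

    def forward(self, x):
        out = self.layerA(x)
        out = self.layerB(out)
        pLoss = out.pow(2).mean()   # placeholder for calcPLoss
        out = self.layerC(out)
        return out, pLoss

net = Net()
criterion = nn.MSELoss()
inp = torch.randn(4, 8)

out, pLoss = net(inp)                 # the returned tuple is unpacked here
loss = criterion(out, inp) + pLoss
loss.backward()

# layerC gets gradient only from the criterion term; layerA and layerB
# get gradient from both terms, which is the behaviour you described.
print(net.layerA.weight.grad.norm(), net.layerC.weight.grad.norm())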

Thank you for answering my second question. But I am getting an error while doing this, so what about my first question?

Thanks for such a quick response.

What is the error? I don’t think it is related.

I am getting an error like "if input is not None and input.dim() != 4:
AttributeError: 'tuple' object has no attribute 'dim'" when I try to return those two values.

When I don't return pLoss and just return the calculated value, it doesn't throw any error.

Are you sure you didn't make a typo and pass the tuple of the two outputs to the criterion instead of only out?
You may want to print the returned values to make sure they are what you think.
There is no problem with returning multiple elements.
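For example, reusing the placeholder names from your snippet (net, input, criterion), a quick print makes this kind of mistake obvious:

result = net(input)
print(type(result))                     # tuple, not a Tensor
out, pLoss = result                     # unpack it first
loss = criterion(out, input) + pLoss    # only the tensor goes to the criterion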

I just realized that the flow of the code is not sequential as I thought. My actual code structure looks like:

class net1(nn.Module):
    ...

    def forward(self, x):
        print('2')
        out = self.layerA(x)
        print('3')
        out = self.layerB(out)
        pLoss = calcPLoss(out)
        out = self.layerC(out)
        return out, pLoss

class net2(nn.Module):
    ...
    # layerBB is built from net1
    layerBB = make_layer_from_net1

    def forward(self, x):
        out = self.layerAA(x)
        print('1')
        out, pLoss = self.layerBB(out)
        print('4')
        out = self.layerCC(out)
        return out, pLoss

def train():
    ...
    out, pLoss = net(input)
    loss = criterion(out, input)
    loss = loss + pLoss
    loss.backward()

When I execute this, I get:
1
2
3
2

I am surprised to see this. And I am getting the error at the statement between print('2') and print('3'): 'tuple' object has no attribute 'dim'!
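In case it helps, here is a stripped-down sketch of what I suspect is happening. I am only guessing that make_layer_from_net1 stacks more than one net1 block in an nn.Sequential, and the layer sizes and penalty term are placeholders; if the guess is right, the second block receives the (out, pLoss) tuple from the first one instead of a Tensor, which would explain both the print order and where the error appears:

import torch
import torch.nn as nn

class net1(nn.Module):
    def __init__(self):
        super().__init__()
        self.layerA = nn.Linear(8, 8)
        self.layerB = nn.Linear(8, 8)
        self.layerC = nn.Linear(8, 8)

    def forward(self, x):
        print('2')
        out = self.layerA(x)
        print('3')
        out = self.layerB(out)
        pLoss = out.pow(2).mean()       # placeholder for calcPLoss
        out = self.layerC(out)
        return out, pLoss

class net2(nn.Module):
    def __init__(self):
        super().__init__()
        self.layerAA = nn.Linear(8, 8)
        # guess: make_layer_from_net1 builds something like this
        self.layerBB = nn.Sequential(net1(), net1())
        self.layerCC = nn.Linear(8, 8)

    def forward(self, x):
        out = self.layerAA(x)
        print('1')
        out, pLoss = self.layerBB(out)
        print('4')
        out = self.layerCC(out)
        return out, pLoss

net = net2()
net(torch.randn(4, 8))
# Prints 1, 2, 3, 2 and then fails: the second net1 inside the Sequential
# is handed the (out, pLoss) tuple from the first one, so its layerA
# chokes on a tuple before print('3') is ever reached.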

I am not sure if all of this makes sense to you. Regardless, thank you so much for trying to help.