I would like to apply math operations dynamically between two loss functions (nn.Modules), but I am not sure how to implement it. Any help is really appreciated.
For example, in the snippet below I would like to add two loss functions:
nn.L1Loss() + nn.CosineEmbeddingLoss()
If I do this, it gives me an error:
----> 1 nn.L1Loss() + nn.CosineEmbeddingLoss()
TypeError: unsupported operand type(s) for +: 'L1Loss' and 'CosineEmbeddingLoss'
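The error happens because the loss *modules* don't define `+`; the tensors their forward calls return do. One workaround (a sketch, not the only way) is to call each loss first and then add the resulting scalar tensors. The shapes and the `label` tensor below are my own example inputs: `CosineEmbeddingLoss` also needs a +1/−1 label per pair.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()
cos = nn.CosineEmbeddingLoss()

pred = torch.randn(4, 8)
target = torch.randn(4, 8)
# CosineEmbeddingLoss expects a +1/-1 label for each row pair.
label = torch.ones(4)

# Call each loss to get a 0-dim tensor, then add the tensors.
total = l1(pred, target) + cos(pred, target, label)
print(total)
```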
I also tried creating a wrapper module with a forward function that uses torch operations, as below, but it doesn't work either.
class Execute_Op(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, y, op):
        if op == 'add':
            return torch.add(x, y)
        elif op == 'subtract':
            return torch.subtract(x, y)
exec_op = Execute_Op()
exec_op(nn.L1Loss(), nn.CosineEmbeddingLoss(), 'add')
It gives an error like the one below:
Execute_Op.forward(self, x, y, op)
5 def forward(self, x, y, op):
6 if op == 'add':
----> 7 return torch.add(x, y)
8 elif op == 'subtract':
9 return x - y
TypeError: add(): argument 'input' (position 1) must be Tensor, not L1Loss
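This last error is the same root cause: `torch.add` wants tensors, but the wrapper receives the loss modules themselves. A sketch of a fix (the class name, argument-tuple convention, and example inputs here are my own choices) is to hold the loss modules inside the wrapper, call them to get tensors, and only then apply the op:

```python
import torch
import torch.nn as nn

class CombineLosses(nn.Module):
    """Hypothetical wrapper: stores two loss modules and combines
    their *outputs* (tensors), not the modules themselves."""
    def __init__(self, loss_a, loss_b, op='add'):
        super().__init__()
        self.loss_a = loss_a
        self.loss_b = loss_b
        self.op = op

    def forward(self, a_args, b_args):
        # Evaluate each loss to a 0-dim tensor, then combine.
        va = self.loss_a(*a_args)
        vb = self.loss_b(*b_args)
        if self.op == 'add':
            return va + vb
        elif self.op == 'subtract':
            return va - vb
        raise ValueError(f'unknown op: {self.op}')

pred = torch.randn(4, 8, requires_grad=True)
target = torch.randn(4, 8)
label = torch.ones(4)  # +1/-1 labels for CosineEmbeddingLoss

combined = CombineLosses(nn.L1Loss(), nn.CosineEmbeddingLoss())
loss = combined((pred, target), (pred, target, label))
loss.backward()  # gradients flow through both loss terms
```

Since the combined result is an ordinary tensor, it plugs straight into a training loop: call `loss.backward()` and step the optimizer as usual.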