Hello
I wanted to ask whether it makes sense (or is even possible) to create a custom loss that has its own weight/bias nn.Parameters. How will autograd handle this? Will those weights be learned in conjunction with the model?
This is a dummy example, not the actual loss, but it illustrates what I want to do:
class MyCustomLoss(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()  # needed so nn.Module registers the layer's parameters
        self.custom_layer = CustomLayer(in_features, out_features)

    def forward(self, inputs, targets):
        return F.cross_entropy(self.custom_layer(inputs, targets), targets)
Again, this is just a dummy example. My actual layer has weights but also needs the target values in its forward pass, which is why I want to place it in the loss function rather than in the model. I have successfully implemented it by changing my model to also pass the target parameter, but that seems ugly and not modular, since I would have to manually change the forward pass of every model I want to use with this loss function.
I would prefer a model that just outputs the features, with the custom layer and the loss handled inside the custom loss. In the end, I simply want a MyCustomLoss that has a layer with learnable weights/parameters inside of it and then calls cross_entropy on the output of that layer.
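To make the idea concrete, here is a minimal, runnable sketch of the setup I have in mind. `nn.Linear` is just a stand-in for my real layer (the real one would also consume `targets` in its forward), and the model is a hypothetical feature extractor. The point is that the loss module's parameters are registered like any other `nn.Module` parameters, so they can be passed to the optimizer alongside the model's:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyCustomLoss(nn.Module):
    def __init__(self, in_features, num_classes):
        super().__init__()
        # stand-in for my real learnable layer
        self.custom_layer = nn.Linear(in_features, num_classes)

    def forward(self, features, targets):
        return F.cross_entropy(self.custom_layer(features), targets)

model = nn.Linear(8, 4)          # stand-in model that outputs 4-dim features
criterion = MyCustomLoss(4, 3)   # 3 classes

# both parameter sets go to the optimizer, so the loss's layer is trained too
optimizer = torch.optim.SGD(
    list(model.parameters()) + list(criterion.parameters()), lr=0.1
)

x = torch.randn(2, 8)
y = torch.tensor([0, 2])

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

After `backward()`, `criterion.custom_layer.weight.grad` is populated, so autograd does flow through the loss module's parameters and `optimizer.step()` updates them together with the model.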