I want to report my regression error in a particular way: RMSE per attribute of each data sample. I can go about this in two ways:
- Creating a custom loss function that computes this metric directly, so that the training loss itself is already the value I want to report:

```python
class CustomLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()

    def forward(self, output, target, attribute):
        output_per_attribute = torch.div(output, attribute)
        target_per_attribute = torch.div(target, attribute)
        return torch.sqrt(self.mse(output_per_attribute, target_per_attribute))
```
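To make sure I understand my own first option, here is a minimal self-contained sanity check of that custom loss (the class is repeated so the snippet runs on its own; the tensor values are just illustrative):

```python
import torch
import torch.nn as nn

class CustomLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()

    def forward(self, output, target, attribute):
        # RMSE of the attribute-normalized values
        return torch.sqrt(self.mse(output / attribute, target / attribute))

criterion = CustomLoss()
output = torch.tensor([2.0, 4.0], requires_grad=True)
target = torch.tensor([1.0, 2.0])
attribute = torch.tensor([1.0, 2.0])

loss = criterion(output, target, attribute)
# (2-1)/1 = 1 and (4-2)/2 = 1, so MSE = 1 and RMSE = 1
print(loss.item())  # 1.0
loss.backward()     # gradients flow, so this loss can drive optimizer.step()
```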
- Using an ordinary `nn.MSELoss` for training, and then adding lines that compute the per-attribute metric separately for reporting:

```python
criterion = nn.MSELoss()

optimizer.zero_grad()
output = model(input)
loss = criterion(output, target)
with torch.no_grad():
    # reuse the output from above; a second forward pass is unnecessary
    output_per_attribute = torch.div(output, attributes)
    target_per_attribute = torch.div(target, attributes)
    desired_loss = torch.sqrt(criterion(output_per_attribute, target_per_attribute))
    print(desired_loss.item())
loss.backward()
optimizer.step()
```
My question is: is there a preferred way to approach this? I tried both and noticed that training with the first method seemed slower, possibly because the loss values are smaller, hence the gradients are smaller, leading to smaller update steps. Thanks!
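To check the scaling part of my hypothesis, here is a small standalone experiment (the linear model and the constant per-sample attribute of 10 are made up for illustration). Dividing both output and target by a constant attribute `a` shrinks the MSE by `a²`, and the gradients shrink by the same factor:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
w = torch.randn(5, requires_grad=True)   # toy linear "model"
x = torch.randn(8, 5)
target = torch.randn(8)
attribute = torch.full((8,), 10.0)       # hypothetical constant attribute

# Plain MSE and its gradient norm
loss = F.mse_loss(x @ w, target)
loss.backward()
grad_plain = w.grad.norm().item()
w.grad = None

# MSE after dividing both sides by the attribute
loss_scaled = F.mse_loss((x @ w) / attribute, target / attribute)
loss_scaled.backward()
grad_scaled = w.grad.norm().item()

ratio = grad_plain / grad_scaled
print(ratio)  # ≈ attribute² = 100 for MSE
```

So with a fixed learning rate the normalized loss does take smaller steps, which is consistent with the slower training I observed (the `sqrt` in the RMSE version changes the exact factor, but the direction is the same).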