Hi,
I’m trying to implement a custom loss in a public knowledge-distillation repository. The link to the repository is the following:
The main issue is how to add the new loss: simply adding it to the existing loss (in /helper/loops.py) doesn’t produce any change in the output (I’ve already opened an issue in the git repo).
In simpler terms, without looking into the repository, do you have any advice on how this should be done in PyTorch? The reasoning is the following:
```python
criterion = nn.MSELoss()
loss_custom = criterion(
    torch.tensor(map1, requires_grad=True),
    torch.tensor(map2, requires_grad=True),
).cuda()
```

where `map1` and `map2` are `numpy.ndarray`s, and then

```python
total_loss = weight_1 * main_loss + weight_2 * loss_custom
```
If I set `weight_2` to zero or to a very high number, the final result is the same.
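For context, here is a minimal self-contained repro of the setup above, with a toy model, random data, and placeholder names standing in for the real ones (`.cuda()` dropped so it runs on CPU). Note the code comments: this is a sketch of what I'm doing, not the actual repo code.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model and data, standing in for the distillation setup.
model = nn.Linear(4, 4)
x = torch.randn(2, 4)
target = torch.randn(2, 4)

main_loss = nn.MSELoss()(model(x), target)

# map1 / map2 are numpy arrays, as in the original setup.
map1 = model(x).detach().numpy()
map2 = np.random.rand(2, 4).astype(np.float32)

criterion = nn.MSELoss()
# torch.tensor(ndarray, requires_grad=True) creates fresh *leaf* tensors,
# i.e. tensors with no grad_fn connecting them back to the model's graph.
loss_custom = criterion(
    torch.tensor(map1, requires_grad=True),
    torch.tensor(map2, requires_grad=True),
)

weight_1, weight_2 = 1.0, 1.0
total_loss = weight_1 * main_loss + weight_2 * loss_custom
total_loss.backward()

# Because loss_custom is built from detached leaves, no gradient from it
# reaches model.parameters() -- only main_loss contributes to the update.
print(model.weight.grad is not None)
```

This reproduces the symptom: the gradients on the model parameters are identical whatever `weight_2` is.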
Should the implementation use something in particular (e.g., `nn.Parameter`, a specific grad function, or something like that)?
Thanks in advance