Add a custom loss in a public GitHub repository (distillation)

Hi,

I’m trying to implement a custom loss in a public knowledge distillation repository. The link to the repository is the following:

" GitHub - DefangChen/SimKD: [CVPR-2022] Official implementation for "Knowledge Distillation with the Reused Teacher Classifier". "

The main issue is how to add the new loss: simply adding it to the existing loss (in /helper/loops.py) doesn’t produce any change in the output (I’ve already opened an issue in the GitHub repo).

In simpler terms, without looking into the repository, do you have any advice on how this should be done in PyTorch? The reasoning is the following:

criterion = nn.MSELoss()

loss_custom = criterion(torch.tensor(map1, requires_grad=True), torch.tensor(map2, requires_grad=True)).cuda()

map1 and map2 are numpy.ndarray objects

and then,

total_loss = weight_1 * main_loss + weight_2 * loss_custom

If I set weight_2 to zero or to a very high number, the final result is the same.
Should the implementation use something in particular (e.g., nn.Parameter, a specific grad function, or something like that)?

Thanks in advance

This is expected, since map1 and map2 are numpy arrays and are thus not tracked by Autograd. Creating new tensors via torch.tensor(map1, requires_grad=True) creates new leaf variables without any gradient history.
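
A minimal sketch of the difference (the feature-map shapes here are made up for illustration):

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()

# Keep the feature maps as tensors attached to the graph and the loss is differentiable.
student_map = torch.randn(8, 64, 7, 7, requires_grad=True)  # stand-in for a conv output
teacher_map = torch.randn(8, 64, 7, 7)                      # teacher features, no grad needed

loss_custom = criterion(student_map, teacher_map)
loss_custom.backward()
print(student_map.grad.shape)  # gradients reach the student features

# Rebuilding tensors from numpy arrays cuts the graph instead:
detached = torch.tensor(student_map.detach().numpy(), requires_grad=True)
# 'detached' is a new leaf without any history, so its gradients never reach the model,
# which is why changing weight_2 has no effect on training.
```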

You would need to compute map1 and map2 via differentiable operations in PyTorch. If you need to use a third-party library such as numpy, you would need to implement a custom autograd.Function, including the backward pass, as described here.
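
If the numpy computation really can’t be avoided, a rough sketch of such a Function could look like this (the MSE-in-numpy example and the name NumpyMSE are just for illustration):

```python
import numpy as np
import torch

class NumpyMSE(torch.autograd.Function):
    """Toy example: forward pass computed in numpy, backward written by hand."""

    @staticmethod
    def forward(ctx, input, target):
        ctx.save_for_backward(input, target)
        inp = input.detach().cpu().numpy()
        tgt = target.detach().cpu().numpy()
        loss = np.mean((inp - tgt) ** 2)
        return input.new_tensor(loss)

    @staticmethod
    def backward(ctx, grad_output):
        input, target = ctx.saved_tensors
        # d/d(input) of mean((input - target)^2)
        grad_input = grad_output * 2.0 * (input - target) / input.numel()
        return grad_input, None  # no gradient for the target

x = torch.randn(4, 10, requires_grad=True)
y = torch.randn(4, 10)
loss = NumpyMSE.apply(x, y)
loss.backward()
print(x.grad.shape)  # torch.Size([4, 10])
```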
