How to make a manually changed loss work in backpropagation

Hi,

I’m working on a project where I train a model on noisy data examples. During training, I manually increase or decrease the loss value produced by the forward pass, inside a torch.no_grad() context, based on the noise in the input, as in the simplified example below:

import torch
import torch.nn as nn

noise_weight = 0.1  # example value; in my real code this depends on the noise in the batch
batch_size = 3      # example value

logits = torch.randn(3, 5, requires_grad=True)  # raw model outputs
targets = torch.tensor([1, 0, 4])               # class indices
criterion = nn.CrossEntropyLoss(reduction='mean')
loss = criterion(logits, targets)

# we don't want the manual change to the loss value to be tracked by autograd
with torch.no_grad():
    # adjust the loss value manually
    loss_tmp = loss.data + noise_weight * batch_size
    # write the manually changed value back into loss
    loss.data = loss_tmp

loss.backward()
# the grad does not change even though loss.data has been changed
grad = logits.grad.data

However, I find that the grad never changes whether or not I write loss_tmp back into loss.data, even though I can verify that loss.data itself has been changed.
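
To make the comparison concrete, this is roughly the check I mean (the +10.0 offset is just an arbitrary stand-in for my noise-based adjustment):

import torch
import torch.nn as nn

logits = torch.randn(3, 5, requires_grad=True)
targets = torch.tensor([1, 0, 4])
criterion = nn.CrossEntropyLoss(reduction='mean')

# gradient from the unmodified loss
loss_plain = criterion(logits, targets)
loss_plain.backward()
grad_plain = logits.grad.clone()
logits.grad = None

# gradient after overwriting loss.data inside no_grad()
loss_mod = criterion(logits, targets)
with torch.no_grad():
    loss_mod.data = loss_mod.data + 10.0  # arbitrary manual change
loss_mod.backward()
grad_mod = logits.grad.clone()

print(torch.allclose(grad_plain, grad_mod))  # prints True: the gradients are identical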

Could someone explain why this happens and how I can make it work?

Many thanks!

After some reading and debugging, I realized that the official backward implementation of cross entropy may not use the loss value at all. Can I write a custom loss function whose forward pass is identical to the official cross entropy, but whose backward pass computes the gradient based on the (adjusted) loss value?
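
To be concrete, below is a rough sketch of what I have in mind. The class name, the noise_offset argument, and the scaling rule in backward are only my guesses at how this could look, not anything taken from the PyTorch source:

import torch
import torch.nn.functional as F

class NoiseAdjustedCrossEntropy(torch.autograd.Function):
    """Sketch: forward matches nn.CrossEntropyLoss with mean reduction,
    backward scales the standard gradient by the manual adjustment."""

    @staticmethod
    def forward(ctx, logits, targets, noise_offset):
        log_probs = F.log_softmax(logits, dim=1)
        loss = F.nll_loss(log_probs, targets, reduction='mean')
        adjusted = loss + noise_offset  # the manual change, now part of forward
        ctx.save_for_backward(logits, targets)
        # assumed rule: scale the usual gradient by adjusted / original loss
        ctx.scale = (adjusted / loss.clamp_min(1e-12)).item()
        return adjusted

    @staticmethod
    def backward(ctx, grad_output):
        logits, targets = ctx.saved_tensors
        n, c = logits.shape
        probs = F.softmax(logits, dim=1)
        one_hot = F.one_hot(targets, num_classes=c).to(probs.dtype)
        # standard mean cross-entropy gradient w.r.t. logits: (softmax - one_hot) / n
        grad_logits = ctx.scale * (probs - one_hot) / n
        return grad_output * grad_logits, None, None

# usage with the example tensors from above
logits.grad = None  # clear any previously accumulated gradient
loss = NoiseAdjustedCrossEntropy.apply(logits, targets, noise_weight * batch_size)
loss.backward()

Is something along these lines a reasonable way to make the adjusted loss actually influence the gradients, or is there a better approach?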

Many thanks!