Have specific examples not backpropagate

Apologies if this is simple, but I can’t find the answer anywhere.
I’m running a two-class NER classification task with a transformer, and I would like specific words in a segment to not backpropagate. For example, take the sentence “the dog ran”, and say I don’t want “ran” to contribute to backpropagation. If I could use one-hot encoded labels, I could simply set the label vector corresponding to “ran” to [0, 0]. But to my knowledge, PyTorch does not accept one-hot encoded targets, and I need to pass an integer class index from 0 to C-1. So is there another way to achieve this?

You could create an additional class index, which can be ignored in the loss calculation by specifying ignore_index in the criterion, and set the desired words to this index.
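Here is a minimal sketch of that idea. Note that the ignored target value does not even have to be a valid class: `nn.CrossEntropyLoss` defaults `ignore_index` to -100, so you can label the words you want to skip with -100 (or any index you pass explicitly as `ignore_index`), and those positions contribute nothing to the loss or the gradients. The shapes below assume token-level logits of shape `(num_tokens, num_classes)`:

```python
import torch
import torch.nn as nn

# Three tokens ("the", "dog", "ran"), two classes.
IGNORE_INDEX = -100  # the default ignore_index of nn.CrossEntropyLoss

logits = torch.randn(3, 2, requires_grad=True)   # (num_tokens, num_classes)
targets = torch.tensor([0, 1, IGNORE_INDEX])     # "ran" is ignored

criterion = nn.CrossEntropyLoss(ignore_index=IGNORE_INDEX)
loss = criterion(logits, targets)
loss.backward()

# The gradient row for the ignored token is all zeros:
print(logits.grad)
```

Because the ignored position is masked out of the loss, no gradient flows back through the logits for “ran”, which is exactly the behavior you described wanting from a [0, 0] one-hot label.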