To calculate my final loss, I use a trained layer to pre-process the ground-truth labels so they can later be compared against the model's output. I want that layer to learn only from the first input (the actual input data), not from the second input (the ground truth), but I need the same operation applied to both so the subsequent steps remain comparable. Is there a way to temporarily disable "learning" for one of the passes?
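Here is a minimal sketch of what I mean (the layer and shapes are made up, and `torch.no_grad()` is just one guess at how to do it):

```python
import torch
import torch.nn as nn

layer = nn.Linear(16, 16)          # hypothetical trained pre-processing layer
x = torch.randn(8, 16)             # actual input data
target = torch.randn(8, 16)        # ground-truth labels

out = layer(x)                     # this pass should contribute gradients to `layer`
with torch.no_grad():              # disable autograd for the ground-truth pass
    target_proc = layer(target)    # same operation, but no learning from it
```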
I’m not sure I understand your question completely, but as long as you don’t call .backward() on a loss, no gradients will be calculated for it, so that loss won’t have any influence on the training.
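A tiny illustration of that point (names are made up): only the loss you call .backward() on writes gradients into the parameters.

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 4)
x = torch.randn(2, 4)
y = torch.randn(2, 4)

used_loss = ((layer(x) - y) ** 2).mean()
unused_loss = ((layer(y) - y) ** 2).mean()   # computed, but never backpropagated

used_loss.backward()   # layer.weight.grad now reflects only used_loss
# unused_loss has no effect unless unused_loss.backward() is also called
```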
I’m training a transformer and I want to apply positional encoding to the ground-truth labels (the future values of the sequence) before comparing them to the transformer outputs. I’m not sure how this is commonly done; comparing them directly, with only one side encoded, seems wrong. So effectively both encoded tensors become part of the loss that produces the gradients, but I thought maybe only the encoding of the actual inputs should contribute to learning.
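To make the setup concrete, here is a sketch of the loss I have in mind (module names, shapes, and the learned positional embedding are all assumptions, not my actual code): the same encoding is applied to the model output and to the future targets, but only the output branch stays in the autograd graph.

```python
import torch
import torch.nn as nn

pos_enc = nn.Embedding(512, 64)    # hypothetical learned positional encoding
criterion = nn.MSELoss()

def encoded_loss(model_out, future_targets):
    # model_out, future_targets: (batch, seq_len, 64)
    positions = torch.arange(model_out.size(1), device=model_out.device)
    pred = model_out + pos_enc(positions)              # in the graph, trains pos_enc
    with torch.no_grad():
        ref = future_targets + pos_enc(positions)      # same encoding, no learning from it
    # `ref` carries no gradient, so backward() only flows through `pred`
    return criterion(pred, ref)
```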