Masked Linear Layer which only updates a subset of the Input vector

Hi all,

I have the following setting. Imagine an input tensor A of shape [n, m]. I want to pass the entire tensor through a linear layer, but only one of the rows should be transformed while the rest remain unchanged. So after NN(A), I want only the i-th row to be updated and all other rows to be identical to the input.

Do any of you know a way to achieve this without copying or cloning tensors? I don’t want to break differentiability.

TL;DR: How can I define a layer that applies the identity function to every row except row i, and a linear function to row i?
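
In other words, the desired behavior is the naive version below, just without the clone and the in-place write (a sketch; `linear`, `A`, and `i` stand in for my actual layer, input, and row index):

```python
out = A.clone()        # the copy I'd like to avoid
out[i] = linear(A[i])  # row i is transformed; every other row stays as-is
```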

Thanks in advance

Question: Why do you want to pass the entire input through the linear layer if only one row needs to be transformed? Isn’t it a waste of computation?

The problem is that my input comes from a pre-trained model that I want to fine-tune, but at the same time I want to update parts of the input.

I understand that this is a bit ill-defined and not nice conceptually. I’d be happy to hear any thoughts you have.

Split the input. Feed the parts through different layers: one applies the linear transform, the other the identity. Then merge the outputs of the two layers. Slicing and torch.cat are both differentiable, so this keeps the autograd graph intact; see the sketch below.
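
A minimal sketch of that split/transform/merge idea (the module name `RowSelectiveLinear` and its interface are my own, not from any library):

```python
import torch
import torch.nn as nn

class RowSelectiveLinear(nn.Module):
    """Applies a linear layer to row `row` of a [n, m] input and
    the identity to every other row."""

    def __init__(self, features: int, row: int):
        super().__init__()
        # In and out features must match so the identity rows and
        # the transformed row can be concatenated back together.
        self.linear = nn.Linear(features, features)
        self.row = row

    def forward(self, A: torch.Tensor) -> torch.Tensor:
        # Split: rows before i, row i (kept 2-D), rows after i.
        before = A[: self.row]
        target = A[self.row : self.row + 1]
        after = A[self.row + 1 :]
        # Merge: slicing and cat are differentiable, so gradients
        # reach the linear layer only through row i, while all
        # other rows pass straight through.
        return torch.cat([before, self.linear(target), after], dim=0)
```

Usage:

```python
layer = RowSelectiveLinear(features=8, row=2)
A = torch.randn(5, 8, requires_grad=True)
out = layer(A)        # rows other than 2 equal A's rows exactly
out.sum().backward()  # backprop works end to end
```

torch.cat allocates one new output tensor, but there is no explicit clone and no in-place write, so autograd handles it cleanly.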