I have the following setting. Take an input tensor A of shape [n, m]. I want to pass the entire tensor through a linear layer, but only one row should be transformed while the rest stay unchanged. So after NN(A), only the i-th row should be updated; every other row should be identical to the input.
Does anyone know of a way to achieve this without copying or cloning tensors? I don’t want to break differentiability.
TL;DR: How can I define a layer that applies the identity function to every row except row i, and a linear function to row i?
Split the input and feed the parts through separate layers: one applies the linear transform, the other the identity (i.e., is left untouched).
Then merge the outputs of the two layers back together with a concatenation, which is differentiable. Note that the linear layer must map m features back to m features so the pieces can be concatenated into the original shape.
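A minimal sketch of this split/transform/merge approach in PyTorch (the module name `RowwiseLinear` is my own; slicing and `torch.cat` are both differentiable, so gradients flow to the linear weights and to the input):

```python
import torch
import torch.nn as nn

class RowwiseLinear(nn.Module):
    """Applies a linear layer to row `row_index` only; all other rows pass through."""
    def __init__(self, features: int, row_index: int):
        super().__init__()
        # in_features == out_features so the transformed row re-merges cleanly
        self.linear = nn.Linear(features, features)
        self.row_index = row_index

    def forward(self, A: torch.Tensor) -> torch.Tensor:
        i = self.row_index
        # Split: rows before i, row i itself (kept 2-D via i:i+1), rows after i.
        transformed = self.linear(A[i:i + 1])
        # Merge: concatenation instead of in-place writes keeps autograd happy.
        return torch.cat([A[:i], transformed, A[i + 1:]], dim=0)

# Usage: only row 2 is transformed, the rest are passed through unchanged.
layer = RowwiseLinear(features=4, row_index=2)
A = torch.randn(5, 4, requires_grad=True)
out = layer(A)
out.sum().backward()  # gradients reach both A and layer.linear
```

An in-place write like `A[i] = self.linear(A[i])` would instead trigger autograd errors on a leaf tensor that requires grad, which is why the merge is done with `torch.cat`.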