Overwriting/detaching part of a tensor

Hi, the situation is as follows. Suppose I have, say, 3 models:

  1. two “guest” models, which take batch inputs x1 and x2 and output z1 and z2, respectively
  2. one “host” model, which takes the concatenation z = cat(z1, z2) and produces predictions y_hat

Each model has its own optimizer. In the training loop, I first obtain z1, then z2, concatenate them into z, and pass z through the host model. I compute the loss with some criterion on y_hat and y, and call loss.backward().
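For concreteness, here is a stripped-down sketch of what I mean (the models, shapes, and optimizers are just placeholders for my real ones):

```python
import torch
import torch.nn as nn

# Placeholder models and shapes, not my real ones
guest1 = nn.Linear(10, 4)
guest2 = nn.Linear(10, 4)
host = nn.Linear(8, 1)
criterion = nn.MSELoss()

opt1 = torch.optim.SGD(guest1.parameters(), lr=0.1)
opt2 = torch.optim.SGD(guest2.parameters(), lr=0.1)
opt_host = torch.optim.SGD(host.parameters(), lr=0.1)

x1, x2, y = torch.randn(32, 10), torch.randn(32, 10), torch.randn(32, 1)

z1 = guest1(x1)                  # guest outputs
z2 = guest2(x2)
z = torch.cat([z1, z2], dim=1)   # concatenate along the feature dimension
y_hat = host(z)
loss = criterion(y_hat, y)

opt1.zero_grad(); opt2.zero_grad(); opt_host.zero_grad()
loss.backward()                  # gradients flow back through z into both guests
opt1.step(); opt2.step(); opt_host.step()
```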

I want to externally modify some elements of tensor z, say by setting some (predetermined) indices’ values to random values.

I suppose it isn’t possible to BP through this random node, so those elements need to be detached from the computational graph. But I want to update parameters that were not affected by this randomization. Is it possible to achieve this, or something to this effect?
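Roughly, this is the kind of thing I have in mind (the indices are just placeholders); what I'm unsure about is whether cloning and overwriting like this really leaves the untouched elements attached to the graph, so that the corresponding parameters still receive gradients:

```python
# Continuing the toy example above, on a fresh forward pass
z1 = guest1(x1)
z2 = guest2(x2)
z = torch.cat([z1, z2], dim=1)

idx = torch.tensor([1, 5])                            # predetermined column indices (placeholder)
z_mod = z.clone()                                     # untouched entries stay attached to the graph
z_mod[:, idx] = torch.randn(z.size(0), idx.numel())   # overwritten entries have no grad history

y_hat = host(z_mod)
loss = criterion(y_hat, y)

opt1.zero_grad(); opt2.zero_grad(); opt_host.zero_grad()
loss.backward()    # do only the parameters unaffected by the random entries get meaningful gradients here?
opt1.step(); opt2.step(); opt_host.step()
```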

Thanks!