x2[:, 0:1] += x1 does modify x2 inplace: because you are indexing
into x2, the += writes into x2's existing storage rather than
creating a new tensor.
This is not necessarily a problem, but it can break the computation
graph – as appears to be happening in your use case – depending
upon what else is going on in your forward pass.
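You can verify the in-place behavior directly. A minimal sketch (the shapes here are made up for illustration; your actual tensors may differ) checks that the underlying storage is unchanged after the indexed +=:

```python
import torch

# Hypothetical shapes for illustration: x2 has 3 channels, x1 has 1.
x2 = torch.zeros(4, 3)
x1 = torch.ones(4, 1)

ptr_before = x2.data_ptr()
x2[:, 0:1] += x1                      # indexed += writes into x2's storage
assert x2.data_ptr() == ptr_before    # same underlying memory: in-place
```

The data_ptr() check confirms no new tensor was allocated; the first channel of the original x2 now holds the added values.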
The solution is to not modify the tensor in question inplace.
Note, x2 = x2[:, 0:1] + x1 does not modify the original tensor to
which the Python variable x2 referred. Instead, it creates a new tensor
and then sets the Python variable x2 to refer to this new tensor. (If
autograd’s computation graph needs the original x2 tensor for the
backward pass, autograd will keep a separate reference to it so that
it won’t get garbage-collected when x2 no longer refers to it.)
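To illustrate the rebinding (again with made-up shapes), keep a second Python reference to the original tensor and check that it is untouched after the out-of-place version:

```python
import torch

x1 = torch.ones(4, 1)
x2 = torch.zeros(4, 3, requires_grad=True)

original = x2                 # second reference to the original tensor
x2 = x2[:, 0:1] + x1          # new tensor; the name x2 is rebound to it

assert x2 is not original               # x2 now names a different tensor
assert bool((original == 0).all())      # the original tensor is unchanged
```

Note that x2[:, 0:1] + x1 has the shape of the slice, not of the full original tensor, so in a real forward pass you would typically assemble the full result (e.g., with torch.cat) rather than keep only the slice.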
Sometimes autograd needs to reuse tensors from the forward pass in
the backward pass. If you modify such a tensor inplace, you will break
the computation graph and get the error message you quoted, “one of
the variables needed for gradient computation has been modified by an
inplace operation.”
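The error is easy to reproduce in isolation. In this sketch (shapes and the base tensor are hypothetical), x2 * x2 saves x2 for the backward pass, and the subsequent indexed += invalidates it:

```python
import torch

x1 = torch.ones(4, 1)
base = torch.zeros(4, 3, requires_grad=True)
x2 = base * 2.0               # non-leaf tensor inside the graph

y = (x2 * x2).sum()           # multiply saves x2 for the backward pass
x2[:, 0:1] += x1              # in-place modification of the saved tensor

err = None
try:
    y.backward()
except RuntimeError as e:
    err = e
print(err)                    # "... modified by an inplace operation"
```

Autograd tracks each tensor's version counter; the += bumps x2's version, and backward() notices the saved tensor no longer matches and raises the RuntimeError quoted above.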
You have a choice: You can “update a single channel value inside the
tensor x2” and break the computation graph, or you can “create a new
tensor” and have the backward pass work.