I swap tensors in and out of a running network during training, but only in between training passes.
Replacing weights and biases manually could be done like this:
```python
with torch.no_grad():
    linear1.weight[...] = torch.tensor([[-0.1], [0.2]])
```
But what I am looking for is an efficient way to replace “one neuron” and keep the replaced tensor in some storage for later use.
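To make “one neuron” concrete, here is a minimal sketch of what I mean, assuming one neuron corresponds to one row of the weight matrix plus one bias entry (the layer sizes and the new values are just placeholders):

```python
import torch
import torch.nn as nn

linear1 = nn.Linear(4, 3)  # 3 neurons: each is one weight row plus one bias entry

with torch.no_grad():
    # stash the current parameters of neuron 1 for later use
    stored_weight = linear1.weight[1].clone()
    stored_bias = linear1.bias[1].clone()
    # overwrite neuron 1 with new values (placeholders)
    linear1.weight[1] = torch.randn(4)
    linear1.bias[1] = 0.0
```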
Is there a way to keep such a storage and have the nn.Linear simply point to the tensors I currently want to use? If that is not possible, what is an efficient way to copy tensors back and forth between the storage and the active layer? A sketch of the copy-based approach I have in mind follows below.
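This is roughly what I mean by copying back and forth; the dict-based storage and the stash_neuron/restore_neuron helpers are just names I made up for illustration:

```python
import torch
import torch.nn as nn

linear1 = nn.Linear(4, 3)
storage = {}  # maps a key to a (weight_row, bias_value) pair

def stash_neuron(layer, idx, key):
    """Copy neuron idx out of the layer into storage."""
    with torch.no_grad():
        storage[key] = (layer.weight[idx].clone(), layer.bias[idx].clone())

def restore_neuron(layer, idx, key):
    """Copy a stored neuron back into the layer in place."""
    w, b = storage[key]
    with torch.no_grad():
        layer.weight[idx].copy_(w)
        layer.bias[idx].copy_(b)

stash_neuron(linear1, 1, "neuron1_v0")
# ... train, mutate neuron 1, etc. ...
restore_neuron(linear1, 1, "neuron1_v0")
```

One thing I like about the copy_ variant is that the parameter objects themselves stay intact, which should matter if an optimizer already holds references to them.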