Efficiently swap tensors out in nn.Linear

I want to swap tensors in and out of a running network during training, but only in between training passes.

Doing it manually on the weights and biases could look like this:

with torch.no_grad():
    linear1.weight.data[...] = torch.Tensor([[-0.1], [0.2]])

But what I am looking for is an efficient way to replace “one neuron” and to store the replaced tensor somewhere for later use.

Is there a way to keep a storage and have the nn.Linear just point to the tensors I currently want to use? If that is not possible, what is an efficient way to copy tensors back and forth between storage and the active parameters?

You could use something along the lines of copy_():

weight = linear1.weight.detach().clone()  # work on a copy outside the graph
new_weight = change_neuron(weight)
with torch.no_grad():  # required for in-place ops on a leaf parameter
    linear1.weight.copy_(new_weight)

The change_neuron function could simply use indexing/slicing on the tensor, e.g.,

weight[0, 0] = 4  # overwrite a single weight value with 4

P.S. Don’t use the .data attribute; it’s deprecated and can lead to unexpected behavior.
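
Putting the pieces together, here is a minimal sketch of what the “storage” idea could look like, assuming you want to swap a single output neuron (one row of weight plus its bias entry) of an nn.Linear. The names stash_neuron, restore_neuron, and the storage dict are just made up for illustration:

import torch
import torch.nn as nn

# Hypothetical helpers: stash one output neuron's parameters
# (its weight row and bias entry) in a dict and restore them later.
storage = {}

def stash_neuron(layer, idx, new_weight, new_bias):
    # Save neuron `idx` of `layer` to `storage`, then overwrite it in place.
    with torch.no_grad():
        storage[idx] = (layer.weight[idx].clone(), layer.bias[idx].clone())
        layer.weight[idx].copy_(new_weight)
        layer.bias[idx].copy_(new_bias)

def restore_neuron(layer, idx):
    # Copy the previously stashed parameters back into neuron `idx`.
    with torch.no_grad():
        old_weight, old_bias = storage.pop(idx)
        layer.weight[idx].copy_(old_weight)
        layer.bias[idx].copy_(old_bias)

linear1 = nn.Linear(1, 2)
stash_neuron(linear1, 0, torch.tensor([-0.1]), torch.tensor(0.2))
# ... run some training passes ...
restore_neuron(linear1, 0)

Note that “one neuron” here means one row of weight along the output dimension; if you index along a different dimension, adjust the slicing accordingly.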


Thx, kind of straightforward then. I was hoping for something similar to memory pointers or references, but haven’t found anything in the docs.