I'm trying to define a set of new parameters B in a PyTorch model, and I would like to initialize the new params with the current weights W of the model.
Question: I want the params B to be differentiable, but autograd should not track their history back to W; B should have its own memory with no reference to W. Is B = nn.Parameter(W.detach().clone()) the correct function to use?
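For concreteness, here is a minimal sketch of what I mean; the nn.Linear layer and the variable names are just placeholders for my actual model:

```python
import torch.nn as nn

# Illustrative stand-in for the real model's weight W
layer = nn.Linear(4, 4)
W = layer.weight

# New parameter B: initialized from W's current values, stored in its own
# memory, and (hopefully) with no autograd connection back to W
B = nn.Parameter(W.detach().clone())

print(B.requires_grad)               # True  -> B itself is trainable
print(B.data_ptr() == W.data_ptr())  # False -> B has its own storage
```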
I understand that B = W.clone() will result in autograd tracking the history of W while differentiating. I also understand that B = W.detach().clone() on its own will not be differentiable.
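This is how I am checking my understanding of the two alternatives (same placeholder layer as above):

```python
import torch.nn as nn

layer = nn.Linear(4, 4)   # illustrative stand-in for the real model
W = layer.weight

B1 = W.clone()            # still part of W's autograd graph
print(B1.grad_fn)         # <CloneBackward0 ...> -> history back to W is tracked

B2 = W.detach().clone()   # detached copy with its own memory
print(B2.requires_grad)   # False -> no gradients flow into B2 as-is
```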