Setting a learner's parameters from a metalearner doesn't backprop to the metalearner

I’d like to set the parameters() of a model to the output of another model without breaking the autograd graph. A naive approach like

```python
for p in model.parameters():
    p = nn.Parameter(t)
```

doesn’t work. How do I go about this?
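
For reference, here is a runnable version of that naive loop (with a random tensor standing in for the other model's output); it leaves the model completely untouched, because only the local name `p` is rebound:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)

# Rebinding the loop variable only changes what the local name `p` points to;
# the parameters stored inside `model` are never replaced, and no graph is built.
for p in model.parameters():
    p = nn.Parameter(torch.randn_like(p))

print(model.weight)  # unchanged
```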

First, set the parameters before training. You can set them by accessing the parameter's .data attribute (which is a plain tensor), for example a.data.copy_(b). You can also set them inside a with torch.no_grad(): block.
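
Something like this, with a toy linear layer standing in for your model:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
new_weight = torch.randn(2, 4)

# Option 1: write through .data (a plain tensor view, not tracked by autograd)
model.weight.data.copy_(new_weight)

# Option 2: the same copy, but inside a no_grad block
with torch.no_grad():
    model.weight.copy_(new_weight)
```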

Thanks @G.M, but that is the opposite of what I’m asking. I want to repeatedly reset the parameters of a ‘learner’ model on each training iteration with the output of a ‘metalearner’. Using .data or no_grad() doesn’t allow gradients to pass back to the metalearner’s parameters, which is exactly what I need.
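
To make it concrete, here is a toy version of the problem (shapes and module names are just placeholders):

```python
import torch
import torch.nn as nn

learner = nn.Linear(4, 1)
metalearner = nn.Linear(8, 5)              # emits 4 weights + 1 bias for the learner

flat = metalearner(torch.randn(8))         # generated parameters, still in the graph
w, b = flat[:4].view(1, 4), flat[4:]

# Copying through .data (or under no_grad) detaches the values from the graph
learner.weight.data.copy_(w)
learner.bias.data.copy_(b)

loss = learner(torch.randn(4)).sum()
loss.backward()

print(metalearner.weight.grad)             # None -- the metalearner never receives a gradient
```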

So are you trying to let the gradients of the “learner” pass back to the “metalearner”?

Yes. Do you know how to set parameter values in a way that autograd can still trace?

I guess parameters work the same way as tensors. You can try a.copy_(b) without disabling autograd.
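
For plain tensors at least, copy_ keeps the gradient connection to the source, e.g. (quick sketch, I haven’t checked how it behaves on an actual nn.Parameter):

```python
import torch

src = torch.randn(3, requires_grad=True)   # e.g. values produced by the metalearner
dst = torch.zeros(3)

dst.copy_(src)              # copy_ is differentiable w.r.t. src
dst.sum().backward()
print(src.grad)             # tensor([1., 1., 1.])
```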