Suppose a model has two linear layers. First, the input x is passed through the first layer. Then the output of the first layer is used to modify the weights of the second layer. Finally, the same input x is passed through the second layer. I would like to ask whether both layers will be updated normally in this situation.
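In case a concrete sketch helps, the setup described above could look like the following. All names and shapes here are my own assumptions, not from the thread; the key point is that the modulated weight is built out-of-place so it stays in the autograd graph:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: the first layer's output modulates the second
# layer's weight, then the same input x goes through the second layer.
class TwoLayerModulated(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)

    def forward(self, x):
        scale = self.fc1(x).mean()            # output of the first layer
        w = self.fc2.weight * scale           # modulate out-of-place, not in-place
        return F.linear(x, w, self.fc2.bias)  # apply the modulated second layer to x

model = TwoLayerModulated(4)
out = model(torch.randn(2, 4))
out.sum().backward()
# Both layers receive gradients:
print(model.fc1.weight.grad is not None, model.fc2.weight.grad is not None)
```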
Are you creating a custom nn.Module?
def forward(self, x):
    return torch.cat((x, x)) * self.w * x + self.b
If you do something like this, it should be OK. Reusing nn.Linear, on the other hand, and modifying its weight in-place is not recommended.
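To illustrate the first approach, here is a minimal check (toy shapes of my own choosing, and I dropped the extra `* x` factor from the snippet above to keep the shapes consistent) that parameters used directly in a custom forward receive gradients normally:

```python
import torch
import torch.nn as nn

# Minimal sketch of the custom-module approach: w and b are plain
# Parameters used directly in forward, so autograd tracks them.
class Custom(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.w = nn.Parameter(torch.randn(2 * n))
        self.b = nn.Parameter(torch.randn(2 * n))

    def forward(self, x):
        # torch.cat((x, x)) doubles the length so it matches w and b
        return torch.cat((x, x)) * self.w + self.b

m = Custom(3)
y = m(torch.randn(3))
y.sum().backward()
print(m.w.grad is not None, m.b.grad is not None)  # both True
```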
Thanks for your kind reply!
Originally I tried the second approach, but I'll try the first one instead.
Hi, did you solve this problem? I'm also trying to manipulate the weights of a second model with the outputs of a first model. The following is the forward function of the second model:
def forward(self, x, kernel, bias):
    self.conv.weight.data = self.conv.weight.data * kernel
    self.conv.bias.data = self.conv.bias.data + torch.squeeze(bias)
    out = self.conv(x)
where kernel and bias are the outputs of the first model. But I got the error message
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Do you have any idea how to solve this problem?
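One likely cause, offered here as an assumption based on the snippet: assigning to `.data` bypasses autograd, so `kernel` and `bias` never enter the computation graph and nothing upstream of them has a `grad_fn`. A sketch that keeps them differentiable end to end uses the functional conv instead of mutating the module's weights (shapes below are illustrative stand-ins for the first model's outputs):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SecondModel(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x, kernel, bias):
        # Build modulated weights out-of-place so they stay in the graph,
        # instead of assigning to self.conv.weight.data
        w = self.conv.weight * kernel
        b = self.conv.bias + torch.squeeze(bias)
        return F.conv2d(x, w, b, padding=1)

model = SecondModel(1, 2)
# Stand-ins for the first model's outputs (requires_grad mimics graph outputs)
kernel = torch.randn(2, 1, 3, 3, requires_grad=True)
bias = torch.randn(2, 1, requires_grad=True)
out = model(torch.randn(1, 1, 5, 5), kernel, bias)
out.sum().backward()
print(kernel.grad is not None, bias.grad is not None)  # both True
```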