I wonder whether it is possible to multiply the weight of one module by the output of another module.

Assume the model has two linear layers:

First, the input x is passed through the first layer.

Then, the output of the first layer is used to modify the weight of the second layer.

Finally, the same input x is passed through the second layer.
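
In code, the setup would look roughly like this (just a sketch; the layer sizes and the exact way the output modifies the weight are placeholders):

import torch
import torch.nn as nn

layer1 = nn.Linear(4, 4)
layer2 = nn.Linear(4, 4)

x = torch.randn(2, 4)
h = layer1(x)                         # step 1: x through the first layer
new_w = layer2.weight * h.mean()      # step 2: first layer's output modifies the second layer's weight
out = x @ new_w.t() + layer2.bias     # step 3: the same x through the (modified) second layer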

I would like to ask whether both layers will be updated normally in this situation.

Thank you.

Are you creating a custom nn module?

def forward(self, x):
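    # any differentiable expression of x and this module's own parameters works here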
    return torch.cat((x, x)) * self.w * x + self.b

If you do something like this, it should be OK. Reusing nn.Linear and modifying its weight in-place, on the other hand, is not recommended.
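
For instance, here is a minimal sketch of a custom module along these lines (the names and the scalar scaling by the first layer's output are assumptions, not the only way to combine them):

import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightModulatedNet(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.fc1 = nn.Linear(in_features, out_features)
        # parameters of the "second layer", applied functionally in forward
        self.w = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.b = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        h = self.fc1(x)                    # output of the first layer
        w_eff = self.w * h.mean()          # a new tensor; self.w is not modified in-place
        return F.linear(x, w_eff, self.b)  # "second layer" applied to the same x

Since everything stays in the autograd graph, both self.fc1 and self.w / self.b receive gradients:

net = WeightModulatedNet(4, 4)
net(torch.randn(2, 4)).sum().backward()
print(net.fc1.weight.grad is not None, net.w.grad is not None)  # True True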

Thanks for your kind reply!!

Originally, I tried to approach it the second way, but I’ll try the first way.

Thank you.

Hi, did you solve this problem? I’m also trying to manipulate the weights of the second model with the outputs of the first model. The following is the forward function in the second model:
def forward(self, x, kernel, bias):
    self.conv[0].weight.data = self.conv[0].weight.data * kernel
    self.conv[0].bias.data = self.conv[0].bias.data + torch.squeeze(bias)
    out = self.conv(x)
    return out
where kernel and bias are the outputs of the first model. However, I got the following error:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Do you have any idea how to solve this problem?
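
Consistent with the advice earlier in the thread, one way to keep kernel and bias in the autograd graph is to compute the effective weight and bias out-of-place instead of overwriting .data, for example with torch.nn.functional.conv2d. This is only a sketch, assuming self.conv[0] is an nn.Conv2d whose stride and padding can be reused:

import torch
import torch.nn.functional as F

def forward(self, x, kernel, bias):
    # build new tensors instead of overwriting .data, so the graph
    # back to the first model (kernel, bias) is preserved
    w_eff = self.conv[0].weight * kernel
    b_eff = self.conv[0].bias + torch.squeeze(bias)
    out = F.conv2d(x, w_eff, b_eff,
                   stride=self.conv[0].stride,
                   padding=self.conv[0].padding)
    # any remaining layers in self.conv would still need to be applied to out
    return out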