I want to create a neural network that generates the parameters of another neural network: network ‘A’ generates the parameters of network ‘B’.

But there is a problem: the output of ‘A’ is a plain tensor (with requires_grad = True), while the parameters of ‘B’ are of type ‘Parameter’. Since I want to backpropagate the gradient all the way to the parameters of ‘A’, I am looking for a tensor -> Parameter conversion that keeps the computational graph intact.

A naive conversion ‘w = Parameter(y)’, where w is a generated parameter of ‘B’ and y is the output of ‘A’, does not work: the computational graph is cut between w and y (the gradients of the parameters of ‘A’ are None, while the gradient of w, which should be a non-leaf tensor, is not None).
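Here is a minimal sketch of what I mean (the networks are placeholders; any small module shows the same behavior):

```python
import torch
from torch import nn
from torch.nn import Parameter

torch.manual_seed(0)

A = nn.Linear(4, 3)            # stand-in for the generator network 'A'
y = A(torch.randn(4))          # output of 'A'; y.requires_grad is True

w = Parameter(y)               # naive conversion: w becomes a NEW leaf

loss = (w * 2).sum()           # use w as if it were a parameter of 'B'
loss.backward()

print(w.grad)                  # populated: w is a leaf with requires_grad=True
print(A.weight.grad)           # None: the graph was cut at Parameter(y)
```

So w behaves like a freshly created leaf parameter, and nothing flows back into ‘A’.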

‘B’ is a network with fully connected layers and convolutional layers, whose parameters are of type ‘Parameter’.
So I could replace the usual layers with calls to the torch.nn.functional functions… but then I would need to rewrite every layer I want to use. Is this the only solution?
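For concreteness, this is roughly the functional rewrite I have in mind (shapes, slicing, and the architecture of ‘A’ are illustrative, assuming ‘B’ has one conv layer and one fully connected layer):

```python
import torch
import torch.nn.functional as F
from torch import nn

torch.manual_seed(0)

N_PARAMS = 18 + 2 + 16 + 2     # conv weight/bias + fc weight/bias of 'B'

# 'A' emits all of B's parameters as one flat tensor
A = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, N_PARAMS))

def forward_B(x, theta):
    """Functional 'B': slices its conv and fc parameters out of theta,
    so they stay ordinary tensors on A's computational graph."""
    conv_w = theta[:18].view(2, 1, 3, 3)
    conv_b = theta[18:20]
    fc_w   = theta[20:36].view(2, 8)
    fc_b   = theta[36:38]
    h = F.relu(F.conv2d(x, conv_w, conv_b))   # (1,1,4,4) -> (1,2,2,2)
    return F.linear(h.flatten(1), fc_w, fc_b)

z = torch.randn(5)             # input of 'A'
x = torch.randn(1, 1, 4, 4)    # input of 'B'
theta = A(z)
loss = forward_B(x, theta).sum()
loss.backward()

print(A[0].weight.grad is not None)   # gradients reach 'A'
```

This works, but it means re-implementing the forward pass of every layer type ‘B’ uses by hand.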