How to mix model parameters?

Hello, all.
I am a newbie to PyTorch.
I have a simple idea, but I do not know how to implement it in PyTorch.
The question is how to mix model parameters on the fly, i.e. how to compute a weighted sum of network parameters.

I have multiple nn.Module instances A, B, C, D that share the same network structure but have different parameter values.

I also have a simple multilayer perceptron (MLP), implemented as an nn.Module, that outputs a tensor W of size (4, 1).

The tensor W works as a weight vector to be learned. The instances A, B, C, D are pre-trained, and their parameters are frozen; they serve as the components to be weighted.

I do not know how to correctly implement the forward function of the following class.

import torch
import torch.nn as nn

import utils  # my own helper; utils.mlp(input_dim, hidden_dim, output_dim, hidden_depth) builds an MLP


class Mixture(nn.Module):

    def __init__(self, input_dim, hidden_dim, hidden_depth, base_modules=None):
        super().__init__()
        base_modules = base_modules or []

        # MLP that maps the input to one weight per base module, normalized with a softmax
        self.weight_network = nn.Sequential(
            utils.mlp(input_dim, hidden_dim, len(base_modules), hidden_depth),
            nn.Softmax(dim=-1),
        )

        # pre-trained components A, B, C, D; their parameters are frozen
        self.base_modules = nn.ModuleList(base_modules)
        for m in self.base_modules:
            for param in m.parameters():
                param.requires_grad = False

        self.outputs = dict()

    def forward(self, x):
        weights = self.weight_network(x)
        ######################### Here I need help #################
        #  Here I want to get a network:
        #      f = w0 * param_A + w1 * param_B + w2 * param_C + w3 * param_D
        #############################################################
        f = ?
        y = f(x)
        self.outputs['y'] = y

        return y

Thanks for any instructions and help.

Help wanted! I really need this.

Hi,

If the latency doesn’t matter, I think the snippet below will work.

weights = self.weight_network(x)
f = sum(w * m(x) for w, m in zip(weights, self.base_modules))
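
One caveat: this assumes weights is a 1-D tensor with one scalar per base module. If x has a batch dimension, self.weight_network(x) will come out shaped (batch, 4) and zip would iterate over the batch instead of over the modules. A minimal sketch of a batched variant, still inside Mixture.forward and assuming each base module returns a (batch, out_dim) tensor:

weights = self.weight_network(x)                                    # (batch, num_modules)
outputs = torch.stack([m(x) for m in self.base_modules], dim=-1)    # (batch, out_dim, num_modules)
y = (outputs * weights.unsqueeze(1)).sum(dim=-1)                    # weighted sum over the modules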

I tried the two-line snippet in a Colab notebook here: https://colab.research.google.com/drive/163Id8tsSofW4c_Mm6_7X_XThJadTpn6e?usp=sharing
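
One more note: the snippet above mixes the outputs of A, B, C, D rather than their parameters. If you specifically need a single network whose parameters are w0*param_A + w1*param_B + w2*param_C + w3*param_D, one option on recent PyTorch (2.x, where torch.func.functional_call is available) is to blend the per-module parameter tensors and run a template module with the blended state. Below is a minimal, self-contained sketch; the Base class and the sizes are made up for illustration and stand in for your shared architecture:

import torch
import torch.nn as nn
from torch.func import functional_call

# Hypothetical base architecture standing in for the shared structure of A, B, C, D.
class Base(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 1)

    def forward(self, x):
        return self.fc(x)

base_modules = [Base() for _ in range(4)]   # pretend these are the pre-trained A, B, C, D
for m in base_modules:
    for p in m.parameters():
        p.requires_grad = False

template = Base()  # only its structure is used; its own parameter values are ignored

def mixed_forward(weights, x):
    # weights: tensor of shape (4,), e.g. a softmax output, one weight per base module
    mixed = {}
    for name, _ in template.named_parameters():
        stacked = torch.stack([dict(m.named_parameters())[name] for m in base_modules])
        # broadcast the weights over the parameter dimensions and sum over the modules
        w = weights.view(-1, *([1] * (stacked.dim() - 1)))
        mixed[name] = (w * stacked).sum(dim=0)
    # run the template's forward pass with the mixed parameters
    return functional_call(template, mixed, (x,))

x = torch.randn(2, 8)
w = torch.softmax(torch.randn(4), dim=0)
y = mixed_forward(w, x)   # shape (2, 1)

Since the frozen parameters only enter through the multiplication with w, gradients flow back into whatever produced w (e.g. the weight_network in your Mixture class). If the weight vector is supposed to depend on each sample in the batch, you would either loop over the batch or combine this with something like torch.vmap.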