Hi everyone!
I am currently working on a problem where I have 3 trained regression models, each of which outputs a sequence of shape (1, 300). I need to combine the outputs of the models in a certain ratio, a*x + b*y + c*z = final output (1, 300), and I want to train the ratios a, b, c. I don't want the gradient to backpropagate into the trained models; only the ratios should be updated. How can I achieve this?
If you want to keep the models frozen and only train the scaling factors, initialize them as nn.Parameters, pass them to an optimizer, and update them in each iteration.
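Roughly like this minimal sketch (the names and the choice of Adam are just illustrative):

import torch
import torch.nn as nn

# Learnable ratios, initialized to 1.0
a = nn.Parameter(torch.tensor(1.0))
b = nn.Parameter(torch.tensor(1.0))
c = nn.Parameter(torch.tensor(1.0))

# Pass only the ratios to the optimizer, so the trained models stay untouched
optimizer = torch.optim.Adam([a, b, c], lr=1e-2)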
Did you mean like this?
import torch
import torch.nn as nn

class combined_model(nn.Module):
    def __init__(self):
        super(combined_model, self).__init__()
        # Learnable mixing coefficients, initialized to 1.0
        self.a = nn.Parameter(torch.tensor(1.0))
        self.b = nn.Parameter(torch.tensor(1.0))
        self.c = nn.Parameter(torch.tensor(1.0))

    def forward(self, x1, x2, x3):
        # Weighted sum of the three (1, 300) model outputs
        return self.a * x1 + self.b * x2 + self.c * x3
Yes, your approach looks valid.
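One thing to watch: to guarantee nothing backpropagates into the trained models, pass only the combiner's parameters to the optimizer and freeze the base models (or run them under no_grad). A rough sketch of the training loop, where model1, model2, model3, and dataloader are placeholder names for your setup:

import torch
import torch.nn as nn

combiner = combined_model()

# Freeze the pretrained regressors so they accumulate no gradients
for m in (model1, model2, model3):
    m.eval()
    for p in m.parameters():
        p.requires_grad_(False)

# The optimizer only updates the three mixing coefficients
optimizer = torch.optim.Adam(combiner.parameters(), lr=1e-2)
criterion = nn.MSELoss()

for x, target in dataloader:
    # no_grad avoids building an autograd graph through the frozen models
    with torch.no_grad():
        y1, y2, y3 = model1(x), model2(x), model3(x)

    pred = combiner(y1, y2, y3)   # shape (1, 300)
    loss = criterion(pred, target)

    optimizer.zero_grad()
    loss.backward()               # gradients reach only a, b, c
    optimizer.step()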