I have a model defined as:
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MyModel(nn.Module):
        def __init__(self):
            super(MyModel, self).__init__()
            self.cl1 = nn.Linear(25, 60)
            self.cl2 = nn.Linear(60, 84)
            self.fc1 = nn.Linear(84, 10)
            self.other_params()

        def other_params(self):
            self.other_params_list = nn.ParameterList([
                nn.Parameter(torch.randn(60, 25)),
                nn.Parameter(torch.randn(84, 60)),
                nn.Parameter(torch.randn(84, 10))
            ])

        def forward(self, x):
            x = F.relu(self.cl1(x))
            x = F.relu(self.cl2(x))
            return self.fc1(x)
I’d like to use a meta-learning approach: learn the weights of the network's linear layers in the inner loop, and learn the parameters in other_params_list in an outer optimization loop. How can I define a new list of parameters that excludes other_params_list from model.parameters(), to pass to the inner-loop optimizer?
EDIT: My solution is to define:

    self.model_params_list = nn.ParameterList([
        self.cl1.weight,
        self.cl1.bias,
        self.cl2.weight,
        self.cl2.bias,
        self.fc1.weight,
        self.fc1.bias
    ])
Is there anything more elegant than this, like subtracting one parameter list from another without adding every parameter by hand, or perhaps a loop over the model's parameters? Note that for the latter, I can't call model.parameters() inside the MyModel class itself.
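One way that avoids listing every tensor by hand is to filter model.parameters() by name using named_parameters(), from outside the class; parameters registered under other_params_list all have names starting with "other_params_list". A minimal sketch under that assumption:

    import torch
    import torch.nn as nn

    class MyModel(nn.Module):
        def __init__(self):
            super(MyModel, self).__init__()
            self.cl1 = nn.Linear(25, 60)
            self.cl2 = nn.Linear(60, 84)
            self.fc1 = nn.Linear(84, 10)
            self.other_params_list = nn.ParameterList([
                nn.Parameter(torch.randn(60, 25)),
                nn.Parameter(torch.randn(84, 60)),
                nn.Parameter(torch.randn(84, 10))
            ])

    model = MyModel()

    # Inner-loop params: everything NOT registered under other_params_list.
    inner_params = [p for name, p in model.named_parameters()
                    if not name.startswith("other_params_list")]

    # Outer-loop params: just the ParameterList.
    outer_params = list(model.other_params_list.parameters())

Each list can then be handed to its own optimizer, e.g. torch.optim.SGD(inner_params, lr=...) for the inner loop.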