How does one have the parameters of a model NOT BE LEAVES?

Thanks for that!

I’ve been playing around with the library and was wondering: is it possible to have a trainable step size with it?

I tried:

import itertools
from collections import OrderedDict

import torch
import torch.nn as nn

child_model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(in_channels=3, out_channels=2, kernel_size=5)),
    ('relu1', nn.ReLU()),
    ('Flatten', nn.Flatten()),  # built-in replacement for my custom Flatten
    ('fc', nn.Linear(in_features=28*28*2, out_features=10))
]))
# Step-size "network": maps a scalar to a value in (0, 1)
eta = nn.Sequential(OrderedDict([
    ('fc', nn.Linear(1, 1)),
    ('sigmoid', nn.Sigmoid())
]))
# This is the line that fails: lr is expected to be a float, not a module
inner_opt = torch.optim.Adam(child_model.parameters(), lr=eta)
meta_params = itertools.chain(child_model.parameters(), eta.parameters())
meta_opt = torch.optim.Adam(meta_params, lr=1e-3)

but it failed with this error:

Exception has occurred: TypeError
'<=' not supported between instances of 'float' and 'Sequential'
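
If I'm reading the traceback right, it comes from Adam's internal `0.0 <= lr` sanity check, i.e. `lr` has to be a plain float, so a module can't be passed there directly. For reference, here is a minimal sketch of the kind of thing I'm after, using a hand-rolled differentiable SGD step instead of torch.optim for the inner loop (the toy data and the name `raw_eta` are just mine for illustration):

import torch

# Toy data, just for the sketch
x = torch.randn(32, 10)
y = torch.randn(32, 1)

# Child-model parameters kept as explicit leaf tensors so the inner
# update below can produce non-leaf versions of them.
w = torch.randn(10, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

# Trainable step size: a raw scalar squashed into (0, 1) by a sigmoid.
raw_eta = torch.zeros(1, requires_grad=True)

meta_opt = torch.optim.Adam([w, b, raw_eta], lr=1e-3)

for meta_step in range(100):
    meta_opt.zero_grad()
    eta = torch.sigmoid(raw_eta)

    # One differentiable inner SGD step; create_graph=True keeps the
    # dependence of the updated weights on eta in the autograd graph.
    inner_loss = ((x @ w + b - y) ** 2).mean()
    gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
    w_new = w - eta * gw  # non-leaf: its history includes eta
    b_new = b - eta * gb

    # Meta loss evaluated with the updated (non-leaf) parameters, so
    # its gradient flows back into raw_eta through the inner step.
    meta_loss = ((x @ w_new + b_new - y) ** 2).mean()
    meta_loss.backward()
    meta_opt.step()

The updated w_new/b_new here are exactly the non-leaf parameters my title is asking about; what I can't see is how to get a stock optimizer like Adam to produce them with a trainable eta.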