Model.named_parameters() is missing parameters from some layers

Thanks for the quick reply. My complete code looks like this:

import torch

param_frozen_list = torch.nn.ParameterList()  # was a plain list; changed to ParameterList as suggested
param_active_list = torch.nn.ParameterList()

for name, param in model.named_parameters():
    if name == 'frozen_condition':    # placeholder: test whether this layer should stay frozen
        param_frozen_list.append(param)
    elif name == 'active_condition':  # placeholder: test whether this layer should train
        param_active_list.append(param)

optimizer = torch.optim.SGD(
    [{'params': param_frozen_list, 'lr': 0.0},
     {'params': param_active_list, 'lr': args.learning_rate}],
    lr=args.learning_rate,
    momentum=args.momentum,
    weight_decay=args.weight_decay)
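
(For completeness, I know there is also the other common way to freeze layers: turning off gradients instead of using a zero learning rate. A minimal sketch, using the same placeholder condition as above:)

# Alternative sketch: freeze layers by disabling gradients rather than lr=0.0.
for name, param in model.named_parameters():
    if name == 'frozen_condition':  # placeholder test for frozen layers
        param.requires_grad = False

# Then pass only the trainable parameters to the optimizer.
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()),
    lr=args.learning_rate,
    momentum=args.momentum,
    weight_decay=args.weight_decay)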

The key problem is that some layers are missing when I loop over model.named_parameters().
I used model.modules() to check, and those layers do exist in the full module sequence. So is there something wrong with my code, or is this a PyTorch bug?
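
For reference, this is roughly the check I ran (a minimal sketch; model is the same network as above):

# Every submodule shows up here, including the layers in question.
for module in model.modules():
    print(type(module).__name__)

# But the weights of some of those layers never appear here.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))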