Mixing the weights of each layer of two networks with a learnable parameter

import copy
import collections

net2 = copy.deepcopy(net1)

# snapshot the current weights of net1
old_dict = collections.OrderedDict()
for key, value in net1.module.state_dict().items():
    old_dict[key] = value

# the network contains `bn` layers, whose integer
# `num_batches_tracked` counters must not be averaged
for key, value in net1.module.state_dict().items():
    if 'num_batches_tracked' in key:
        continue
    value_old = old_dict[key]
    # moving-average update of each parameter
    old_dict[key] = alpha * value_old + (1 - alpha) * value
net2.module.load_state_dict(old_dict, strict=True)

In the original version, alpha is fixed and shared across all layers. I now want to make alpha a learnable parameter, with a separate value for each layer.
How can I do this?
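One possible direction, sketched below under assumptions: a `load_state_dict` call breaks the autograd graph, so a learnable alpha has to blend the weights functionally at forward time instead. This sketch (the names `BlendedNet`, `raw_alpha`, and `blended_state_dict` are hypothetical, not from the original code) keeps one scalar `nn.Parameter` per parameter tensor, squashes it into (0, 1) with a sigmoid, and runs the network on the blended weights via `torch.func.functional_call` (requires PyTorch >= 2.0), so gradients flow back into each alpha:

```python
import torch
import torch.nn as nn

class BlendedNet(nn.Module):
    """Hypothetical wrapper: per-layer learnable interpolation of two nets."""

    def __init__(self, net_a, net_b):
        super().__init__()
        self.net_a = net_a
        self.net_b = net_b
        # one unconstrained scalar per parameter tensor of net_a;
        # ParameterDict keys may not contain '.', so replace it
        self.raw_alpha = nn.ParameterDict({
            name.replace('.', '_'): nn.Parameter(torch.zeros(()))
            for name, _ in net_a.named_parameters()
        })

    def blended_state_dict(self):
        # alpha * w_a + (1 - alpha) * w_b, with alpha = sigmoid(raw_alpha)
        # so the interpolation stays convex
        pa = dict(self.net_a.named_parameters())
        pb = dict(self.net_b.named_parameters())
        out = {}
        for name in pa:
            a = torch.sigmoid(self.raw_alpha[name.replace('.', '_')])
            out[name] = a * pa[name] + (1 - a) * pb[name]
        return out

    def forward(self, x):
        # run net_a's forward with the blended tensors substituted in;
        # buffers (e.g. bn running stats) not in the dict fall back to net_a's
        return torch.func.functional_call(
            self.net_a, self.blended_state_dict(), (x,))
```

Training the alphas is then just a matter of putting `blended.raw_alpha.parameters()` into an optimizer; the two base networks can be frozen with `requires_grad_(False)` if only the mixing coefficients should move.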