Questions about creating a shared neural network among different models or layers

Hi, I intend to create a network with different paths, and I believe I can select among them to determine which part of the network I want to train. Here is my current design:

```python
import torch
import torch.nn as nn
from torch_geometric.nn import TransformerConv

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

class GCNEncoder_Multiinput(torch.nn.Module):
    def __init__(self, out_channels, graph_list, label_list):
        super(GCNEncoder_Multiinput, self).__init__()
        self.activ = nn.ReLU()

        # One input layer per graph, keyed by its show_index
        conv_dict = {}
        for i in graph_list:
            conv_dict[i.show_index] = TransformerConv(i.x.shape[1], out_channels, heads=2).to(device)
        self.convl1 = conv_dict

        # Tissue-specific second and third layers, shared across graphs
        # whose show_index has the same prefix before '__'
        conv_dict_l2 = {}
        conv_dict_l3 = {}
        tissue_specific_list = list(set(label_list))

        for i in tissue_specific_list:
            conv_dict_l2[i] = TransformerConv(out_channels * 2, out_channels).to(device)
            conv_dict_l3[i] = TransformerConv(out_channels, out_channels).to(device)
        self.convl2 = conv_dict_l2
        self.convl3 = conv_dict_l3

    def forward(self, x, edge_index, show_index):
        # Select the path through the network based on show_index
        x = self.convl1[show_index](x, edge_index)
        x = self.activ(x)
        x = self.convl2[show_index.split('__')[0]](x, edge_index)
        x = self.activ(x)
        return self.convl3[show_index.split('__')[0]](x, edge_index)
```

However, when I try to pass the model's parameters to the optimizer, I receive this error:

ValueError: optimizer got an empty parameter list

How can I address this problem? Thanks a lot.
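The optimizer is constructed along these lines (a minimal sketch; the optimizer type, out_channels, and learning rate here are just placeholders):

```python
model = GCNEncoder_Multiinput(out_channels=64, graph_list=graph_list, label_list=label_list)

# raises: ValueError: optimizer got an empty parameter list
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
```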

Hi,
As far as I can see, convl1 through convl3 aren't getting registered as trainable parameters, since they are plain Python dictionaries rather than registered containers.

Check out ParameterDict, which behaves like a regular Python dictionary but ensures that the parameters it contains are properly registered.
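For illustration, here is a minimal sketch (toy tensors, not your model) showing that a plain dict hides its contents from .parameters(), while nn.ParameterDict registers them:

```python
import torch
import torch.nn as nn

class PlainDictModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Plain dict: the parameter is invisible to .parameters()
        self.weights = {'a': nn.Parameter(torch.randn(4, 4))}

class ParameterDictModel(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ParameterDict registers its entries with the module
        self.weights = nn.ParameterDict({'a': nn.Parameter(torch.randn(4, 4))})

print(len(list(PlainDictModel().parameters())))      # 0 -> "empty parameter list"
print(len(list(ParameterDictModel().parameters())))  # 1 -> registered
```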


Thanks a lot, I will try it and see how it works!

Btw, I think I should use ModuleDict here, since my entries are layers (modules) rather than raw parameters. Something like the sketch below for the constructor.
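A sketch of the constructor body rewritten with nn.ModuleDict (assuming the keys are already strings, which ModuleDict requires; non-string indices would need a str() cast):

```python
# Same layers as before, but held in nn.ModuleDict so they are registered
self.convl1 = nn.ModuleDict({
    i.show_index: TransformerConv(i.x.shape[1], out_channels, heads=2)
    for i in graph_list
})

tissue_specific_list = list(set(label_list))
self.convl2 = nn.ModuleDict({
    i: TransformerConv(out_channels * 2, out_channels)
    for i in tissue_specific_list
})
self.convl3 = nn.ModuleDict({
    i: TransformerConv(out_channels, out_channels)
    for i in tissue_specific_list
})
```

With the layers registered this way, a single model.to(device) call moves all of them at once (so the per-layer .to(device) calls are no longer needed), and model.parameters() is no longer empty.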