How can I implement a dynamic parameter in a Linear layer using PyTorch?

import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearText(nn.Module):
    def __init__(self):
        super(LinearText, self).__init__()  # was: super(CNNText, self)
        self.encoder_tit = nn.Embedding(3281, 64)
        self.encoder_con = nn.Embedding(496037, 512)

        self.title_fc = nn.Linear(64, 32)   # 1 * len(tit) * 32
        self.con_fc = nn.Linear(512, 256)   # 1 * len(con) * 256
        # t.cat((content_out_3, content_out_4, content_out_5), dim=1)
        # input_size = len(tit) + len(con)  -- varies with each input!
        self.fc = nn.Linear(input_size, 9)  # <-- input_size is not known here

    def forward(self, title, content):
        # ... encode title/content and concatenate into `features` ...
        logits = self.fc(features)          # features: (1, input_size)
        return F.log_softmax(logits, dim=-1)

As shown above, the input_size of the final Linear layer changes with each input (input_size = len(tit) + len(con)). Therefore input_size is dynamic: every input has a different input_size.
How can I implement this in PyTorch?
Can anyone help me? I am very confused. Thanks.

Look at using the functional interface and registering your learnable parameters as nn.Parameter.

You can do:

F.linear(x, self.weight)

See an example of this pattern in this repo: https://github.com/szagoruyko/diracnets/blob/master/diracconv.py#L38-L39
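A minimal sketch of that pattern (the class name FunctionalLinear and the initialization scheme are my own illustrative choices, not from the linked repo):

import torch
import torch.nn as nn
import torch.nn.functional as F

class FunctionalLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super(FunctionalLinear, self).__init__()
        # Registering the tensors as nn.Parameter makes them learnable:
        # they show up in .parameters() and receive gradients.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Same computation as nn.Linear, but the weight is fully under
        # our control, so forward can decide how it is used.
        return F.linear(x, self.weight, self.bias)

For a fixed size this behaves just like nn.Linear(in_features, out_features); the point is that forward is now free to reshape or slice the weight itself.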


There is a problem that is still not solved.

input_size is dynamic, therefore the weight will be dynamic too, because the size of the weight is input_size * feature_numbers. Is this feasible in PyTorch?

With self.weight = nn.Parameter(...), the size of the weight is fixed. How can it be dynamic?

Even after self.weight = nn.Parameter(...), you can change the size of self.weight manually if you want.
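For example (a sketch only; the name DynamicLinear and the bound max_in_features are hypothetical, and it assumes you can put an upper bound on input_size): rather than resizing the parameter in place, you can allocate it at the maximum size and slice the columns you need per input, which keeps autograd straightforward:

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicLinear(nn.Module):
    def __init__(self, max_in_features, out_features):
        super(DynamicLinear, self).__init__()
        # One over-sized weight; only a slice of it is used per input.
        self.weight = nn.Parameter(torch.randn(out_features, max_in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # x: (batch, in_features) with in_features <= max_in_features.
        in_features = x.size(-1)
        # Slicing a Parameter keeps gradient tracking, so only the
        # columns that were actually used receive gradients.
        return F.linear(x, self.weight[:, :in_features], self.bias)

layer = DynamicLinear(max_in_features=1024, out_features=9)
print(layer(torch.randn(1, 300)).shape)  # torch.Size([1, 9])
print(layer(torch.randn(1, 700)).shape)  # torch.Size([1, 9])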

But it’s still not clear to me how you are changing the size of self.weight. How is self.weight actually computed? (Not just the shape, but also the values of the elements?)

I implemented your suggestion, but the output shape does not change.
