Changing LSTM model structure as it's learning

I know I can specify input_size in the __init__() method, but the problem is that once the model is instantiated this stays fixed. Is there a way to change input_size during training?

Here is my situation: I am training on data from multiple CSV files, and the tables in these files have varying widths, i.e. different numbers of columns (and I am passing this data through an LSTM). Can I change the input size as the loader passes a new batch with a new tensor width?
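To make the situation concrete, here is a minimal sketch with made-up shapes of what the loader produces: batches whose last dimension differs from file to file, which a Linear layer built for one width cannot handle for the other.

```python
import torch

# Hypothetical batches read from two different CSV files:
# shape is (batch, seq_len, num_columns) -- the last dimension varies per file.
batch_a = torch.randn(4, 10, 7)   # first CSV has 7 columns
batch_b = torch.randn(4, 10, 12)  # second CSV has 12 columns

# A linear layer built for one width only accepts that width:
layer = torch.nn.Linear(7, 12)
out_a = layer(batch_a)            # works: last dim matches in_features
print(out_a.shape)                # torch.Size([4, 10, 12])
# layer(batch_b) raises a RuntimeError, since 12 != 7
```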

I am thinking of something along the lines of:

import torch.nn as nn

class Net(nn.Module):
    def __init__(self, input_size):
        super(Net, self).__init__()

        self.input_size = input_size
        self.encoding_layer = nn.Linear(self.input_size, 12)

    def forward(self, x):
        self.input_size = x.size(-1)  # width of the incoming batch
        x = self.encoding_layer(x)
        return x

or even setting the attribute from the training loop:

for epoch in range(no_epochs):
    # load data
    model.input_size = data.size(-1)
    output = model(data)
    # loss and the rest
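For what it's worth, a quick experiment (with a made-up width of 7) suggests that just reassigning the attribute leaves the already-built layer untouched, which is part of why I'm asking:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, input_size):
        super(Net, self).__init__()
        self.input_size = input_size
        self.encoding_layer = nn.Linear(self.input_size, 12)

    def forward(self, x):
        return self.encoding_layer(x)

model = Net(input_size=7)
model.input_size = 12                    # reassigning the attribute...
print(model.encoding_layer.in_features)  # ...leaves the layer's weight at width 7
```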

Would anything like this actually work?