Editing ScriptModule parameters

Hi.

I know this seems like an anti-pattern question to ask, but I was wondering if there’s a way to edit a ScriptModule’s parameters and drop the references to the old tensors at the same time?

Currently it is not allowed to delete entries from ScriptModule._parameters, which I assume is because they are managed at the lower C++ level of the code.

Are there any thoughts on supporting this in the future?

For now it is at least possible to replace a value in ScriptModule._parameters, as long as the new tensor is smaller than the current one.

Example of something that will still execute (after the model is traced):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, num_emb):
        super(Net, self).__init__()
        self.emb = nn.Embedding(num_emb, 100)

    def forward(self, x):
        return self.emb(x)

m = Net(10000)
traced = torch.jit.trace(m, torch.randint(0, 10000, (1, 5)))  # trace with a dummy batch of indices

# replace the embedding weight with a smaller tensor in-place
subset = torch.randn_like(m.emb.weight[:80])
m.emb.weight.data = subset

Are there any huge drawbacks to doing this, other than it being an anti-pattern?
My use case is training a model, shipping it to production, and then trimming it down to a subset of our most recent items before accepting queries.
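
For what it’s worth, here is a minimal sketch of that trimming step on a loaded traced model; keep_ids is a hypothetical tensor with the indices of the items to keep, and queries afterwards have to use the remapped 0..len(keep_ids)-1 indices:

import torch

lm = torch.jit.load('m.pt')                          # load the traced model

# hypothetical: indices of the most recent items to keep
keep_ids = torch.arange(80)

# gather only those rows of the embedding table and shrink the parameter
subset = lm.emb.weight.data.index_select(0, keep_ids)
lm.emb.weight.data = subset

# incoming queries now have to use the remapped indices 0..len(keep_ids)-1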

Bonus question: Are there any good ways to convert a traced model back to a “normal” model after doing torch.jit.load()?

Modifying a traced model and then calling copy() at least forces a “proper” re-allocation. :slight_smile:
Any assistance with some of the other questions would still be appreciated! But my immediate needs are at least met.

import torch

lm = torch.jit.load('m.pt')                     # load the traced model
subset = torch.randn_like(lm.emb.weight[:80])   # smaller replacement tensor
lm.emb.weight.data = subset                     # swap the parameter data in-place
tmp = lm.copy()                                 # copy() re-allocates the (now smaller) parameters

Hi NegatioN,
I know it’s been a long time, but I was wondering whether you ever found an answer to your question? I have been struggling with this for a while and would be very grateful if someone could help me. How can we modify a traced model? How can we fine-tune it? And what’s the point if we cannot fine-tune these models?

Hi @ptrblck,
Sorry to also mention you in this post. I would appreciate it if you could help me.

I don’t think you can modify a scripted model, so you would need to create the model architecture before scripting it. Afterwards you can fine-tune it as a plain eager model.

Thanks so much, @ptrblck, for your prompt answer. I am not familiar with eager mode. Do you mean I should apply pytorch_to_keras for fine-tuning?
I work with big data, and it takes more than a week to train the model! So if I just want to change the size of a Linear layer, e.g. from self.fc = nn.Linear(64, 1) to self.fc = nn.Linear(4096, 1), is that not possible, and would I have to modify the model and train it from scratch?!

No, by “eager mode” I meant the “normal” PyTorch model without scripting.

You can directly change the layer in the model before scripting it. E.g. something like this would work:

model = MyModel()
model.fc = nn.Linear(...)
model_scripted = torch.jit.script(model) # script afterwards

# manipulating the scripted model might fail
model_scripted.fc = nn.Linear(...)

Would this work for your use case or do you need to load an already scripted model (and somehow cannot manipulate it beforehand)?

Yes @ptrblck, I need to load an already scripted model and then modify it.
I actually didn’t know that a scripted model cannot be fine-tuned or modified after loading! So I guess I have to train my model from scratch and save it as a normal model? Isn’t there any way to convert an already saved scripted model (after loading it) back to a normal model?

Small correction: the scripted model can be fine-tuned (i.e. trained), but I don’t believe it can be modified (JIT experts might correct me here).
I’m not sure which restrictions you are working with and why loading a scripted model is necessary, i.e. I guess you might not have access to the model definition?
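
Just to illustrate the fine-tuning part, something along these lines should work on a loaded scripted model; the input/target shapes and the loss here are placeholders for whatever your model actually expects:

import torch
import torch.nn.functional as F

lm = torch.jit.load('model.pt')                      # loaded ScriptModule
lm.train()

optimizer = torch.optim.Adam(lm.parameters(), lr=1e-4)

# placeholder batch, purely for illustration
inputs = torch.randn(8, 3, 224, 224)
targets = torch.rand(8, 1)

optimizer.zero_grad()
outputs = lm(inputs)                                 # forward pass through the scripted graph
loss = F.binary_cross_entropy(outputs, targets)      # replace with your actual criterion
loss.backward()                                      # gradients flow into the ScriptModule's parameters
optimizer.step()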

By fine-tuning I mean: if we want to change the size of the last layer (e.g. the number of classes), is that still possible?

Let’s say my model is something like this:

import torch.nn as nn
from torchvision import models

class Mymodel(nn.Module):
    def __init__(self, num_classes):
        super(Mymodel, self).__init__()
        self.model1 = models.resnet50(pretrained=True)
        self._dropout = nn.Dropout(.4)
        self.sig = nn.Sigmoid()
        self.fc = nn.Linear(2048, num_classes)

then what I did is:

  • train the model
  • evaluate the model
  • save the model using torch.jit.save(net_trace, 'model.pt')

Yes, your workflow is possible assuming you are not trying to manipulate the model architecture of the scripted model.
The potential issue I mentioned before would arise if you were using this workflow (a possible workaround is sketched after the list):

  • save the model using torch.jit.save(net_trace, 'model.pt')
  • load the model
  • change the model architecture #!!!
  • train the model

Thanks @ptrblck for your time and your help :blush: