Saving customized model architecture

Hi,

I have a simple model with a given architecture. After training, I applied quantization and added a custom quantization layer after each convolutional layer.

Now I need to save this architecture and access it from a different directory. My current approach is to save it via torch.save and load it later via torch.load. The problem is that I have to keep the exact directory structure, as described in https://pytorch.org/docs/stable/notes/serialization.html#, i.e., the file that defines the custom layer has to be carried everywhere torch.load gets executed.

Is there another way to save the network architecture in PyTorch? I'm aware of saving the weights via the state_dict, but I couldn't find anything about saving the architecture itself.


Save:

checkpoint = {'model': Classifier(),  # fresh instance; pickles the architecture
              'state_dict': model.state_dict(),
              'optimizer': optimizer.state_dict()}

torch.save(checkpoint, 'checkpoint.pth')

Load:

def load_checkpoint(filepath):
    checkpoint = torch.load(filepath)
    model = checkpoint['model']
    model.load_state_dict(checkpoint['state_dict'])
    for parameter in model.parameters():
        parameter.requires_grad = False

    model.eval()
    return model

model = load_checkpoint('checkpoint.pth')
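Note that pickling the module object this way still requires the class definition to be importable at load time. A workaround (not from the original thread, just a sketch) is TorchScript: torch.jit.script compiles the architecture and weights into one file, so the saved model can be loaded without the original class file. TinyNet below is a hypothetical stand-in for the quantized model in the question.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Hypothetical stand-in for the custom quantized model."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, 3)

    def forward(self, x):
        return torch.relu(self.conv(x))

model = TinyNet().eval()

# Compile architecture + weights into a self-contained ScriptModule.
scripted = torch.jit.script(model)
scripted.save("tiny_net.pt")

# In another directory/process, no TinyNet class definition is needed:
restored = torch.jit.load("tiny_net.pt")

x = torch.randn(1, 1, 8, 8)
assert torch.allclose(model(x), restored(x))
```

Whether this works out of the box depends on the custom quantization layer being scriptable (or traceable via torch.jit.trace).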

I think you'd like a static graph definition, e.g. a Caffe prototxt. However, AFAIK PyTorch has not implemented this feature.

Hi there,
This would solve the issue if one is importing the model with the same folder structure and environment. However, as of now, PyTorch doesn't support saving the architecture by itself.