Saving a customized model architecture


I have a simple model with a given architecture. After training, I applied quantization and added a custom quantization layer after each convolution layer.

Now I need to save this architecture and access it from a different directory. My current approach is to save the model via torch.save and access it later via torch.load. The problem is that I have to keep the exact directory structure, i.e. the file which defines the custom layer has to be carried everywhere torch.load gets executed.

Is there another way to save the network architecture in PyTorch? I'm aware of saving the weights via state_dict, but I couldn't find anything about saving the architecture.
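For reference, the state_dict route mentioned above looks like the sketch below (Net is a stand-in for the actual model class). It illustrates the limitation: only tensors are saved, so the class definition still has to be importable wherever you load.

```python
import torch
import torch.nn as nn

# Stand-in for the real model class; with the state_dict approach,
# this class definition must be importable at load time as well.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, kernel_size=3)

    def forward(self, x):
        return self.conv(x)

model = Net()
torch.save(model.state_dict(), 'weights.pth')  # saves weights only, no architecture

# Loading requires re-instantiating the architecture in code first
restored = Net()
restored.load_state_dict(torch.load('weights.pth'))
```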



checkpoint = {'model': Classifier(),
              'state_dict': model.state_dict(),
              'optimizer': optimizer.state_dict()}

torch.save(checkpoint, 'checkpoint.pth')


def load_checkpoint(filepath):
    checkpoint = torch.load(filepath)
    model = checkpoint['model']
    # Restore the trained weights; checkpoint['model'] is a freshly
    # initialized instance, so without this you get untrained parameters.
    model.load_state_dict(checkpoint['state_dict'])
    for parameter in model.parameters():
        parameter.requires_grad = False

    return model

model = load_checkpoint('checkpoint.pth')
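Putting the two snippets together, a minimal end-to-end sketch looks like this. Classifier here is a hypothetical stand-in for the real model class; note that its definition must still be importable when torch.load unpickles the 'model' entry, which is exactly the directory-structure constraint described in the question.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the Classifier used above
class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc(x)

model = Classifier()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

checkpoint = {'model': Classifier(),
              'state_dict': model.state_dict(),
              'optimizer': optimizer.state_dict()}
torch.save(checkpoint, 'checkpoint.pth')

def load_checkpoint(filepath):
    # weights_only=False is required on recent PyTorch versions
    # to unpickle the full module object, not just tensors
    checkpoint = torch.load(filepath, weights_only=False)
    model = checkpoint['model']
    model.load_state_dict(checkpoint['state_dict'])
    for parameter in model.parameters():
        parameter.requires_grad = False
    return model

restored = load_checkpoint('checkpoint.pth')
out = restored(torch.randn(1, 8))  # forward pass works on the restored model
```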

I think you'd like to use a static graph, e.g. the prototxt of Caffe. However, AFAIK PyTorch has not implemented this feature.

Hi there,
This would solve the issue if one is importing the model with the same folder structure and environment. However, as of now, PyTorch doesn't support saving the architecture on its own.