I have a simple model with a given architecture. After training I applied quantization, adding a custom quantization layer after each convolution layer.
Now I need to save this architecture and access it from a different directory. My current approach is to save it via torch.save and load it later via torch.load. The problem is that I have to keep the exact directory structure, as described in https://pytorch.org/docs/stable/notes/serialization.html#, i.e. the file that defines the custom layer has to be carried everywhere torch.load gets executed.
Is there another way to save the network architecture in PyTorch? I’m aware of saving the weights via state_dict, but I couldn’t find anything about saving the architecture itself.
def load_checkpoint(filepath):
    # Un-pickling the full module here requires the custom layer's
    # defining file to be importable at load time.
    checkpoint = torch.load(filepath)
    model = checkpoint['model']
    model.load_state_dict(checkpoint['state_dict'])
    # Freeze all parameters since the model is only used for inference.
    for parameter in model.parameters():
        parameter.requires_grad = False
    model.eval()
    return model

model = load_checkpoint('checkpoint.pth')
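For context, a minimal sketch of the save side that produces a checkpoint.pth in the format load_checkpoint expects. SmallNet is a hypothetical stand-in for the actual quantized model; even with this approach, un-pickling checkpoint['model'] later still requires the class definition to be importable.

```python
import torch
import torch.nn as nn

# Hypothetical model for illustration; the real model would include
# the custom quantization layers described above.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)

    def forward(self, x):
        return self.conv(x)

model = SmallNet()
# Save both the pickled module and its weights, matching the keys
# that load_checkpoint reads ('model' and 'state_dict').
checkpoint = {'model': model, 'state_dict': model.state_dict()}
torch.save(checkpoint, 'checkpoint.pth')
```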
Hi there,
This would solve the issue if one imports the model in the same folder structure and environment. However, as of now, PyTorch doesn’t support saving the architecture on its own.
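The usual workaround is to keep the model class in a module that both sides import, and save only the state_dict; the architecture is then rebuilt from code rather than deserialized. A hedged sketch, with SmallNet as a hypothetical stand-in for the quantized model:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the quantized model; in practice this class
# would live in a shared module imported by both the saving and loading side.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)

    def forward(self, x):
        return self.conv(x)

# Saving side: persist only the weights.
model = SmallNet()
torch.save(model.state_dict(), 'weights.pth')

# Loading side: rebuild the architecture from code, then load the weights.
restored = SmallNet()
restored.load_state_dict(torch.load('weights.pth'))
restored.eval()
```

This avoids the pickling problem entirely, since only tensors are serialized, but it does mean the class definition has to ship with the code that loads the weights.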