I have a simple model with a given architecture. After training I applied quantization, adding a custom quantization layer after each convolution layer.
Now I need to save this architecture and access it from a different directory. My current approach is to save the whole model via torch.save and restore it later via torch.load. The problem is that I have to keep the exact directory structure, as described in https://pytorch.org/docs/stable/notes/serialization.html#, i.e. the file that defines the custom layer has to be carried along everywhere torch.load gets executed.
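A minimal sketch of that workflow, to make the problem concrete. FakeQuant here is a hypothetical stand-in for my custom quantization layer, and I save to an in-memory buffer instead of a file for brevity:

```python
import io
import torch
import torch.nn as nn

# Hypothetical stand-in for the custom post-convolution quantization layer.
class FakeQuant(nn.Module):
    def forward(self, x):
        return torch.round(x * 16) / 16

model = nn.Sequential(nn.Conv2d(1, 4, 3), FakeQuant())

# torch.save on the whole module pickles it; pickle stores the class
# by qualified name only, not the class code itself.
buf = io.BytesIO()
torch.save(model, buf)
buf.seek(0)

# Loading works here because FakeQuant is defined in this same file.
# In another directory where the defining module is not importable,
# this raises an error instead.
# (weights_only=False is needed on recent PyTorch to unpickle modules.)
restored = torch.load(buf, weights_only=False)

x = torch.randn(1, 1, 8, 8)
assert torch.equal(model(x), restored(x))
```

So the pickle only records a reference to the class, which is exactly why the defining file has to travel with the checkpoint.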
Is there another way to save the network architecture in PyTorch? I'm aware of saving the weights via state_dict, but I couldn't find anything about saving the architecture itself.
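For reference, this is how I understand the state_dict route: it serializes only the tensors, so the architecture still has to be reconstructed in code at load time (FakeQuant and build_model are hypothetical names for illustration):

```python
import io
import torch
import torch.nn as nn

# Hypothetical stand-in for the custom quantization layer.
class FakeQuant(nn.Module):
    def forward(self, x):
        return torch.round(x * 16) / 16

def build_model():
    # The architecture lives in code: to rebuild the model elsewhere,
    # this function (and FakeQuant) must be importable there as well.
    return nn.Sequential(nn.Conv2d(1, 4, 3), FakeQuant())

model = build_model()

# state_dict contains only named tensors, no layer definitions.
buf = io.BytesIO()
torch.save(model.state_dict(), buf)
buf.seek(0)

# Elsewhere: re-create the architecture first, then load the tensors.
restored = build_model()
restored.load_state_dict(torch.load(buf))
```

In other words, state_dict sidesteps pickling the classes but still leaves me carrying the defining code around, which is the part I'd like to avoid.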