I have a trained NTS-NET whose saved state_dict takes 108 MB on disk. My server does not have enough space for it; I'm short by only a few MB. So I compress the state_dict with tar.gz, which brings it down to 100 MB. That is just enough. To load the model, I use the following function:
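For context, this is roughly how I produce the archive (a minimal sketch of the save-and-compress step; the placeholder model and the paths are just examples):

import tarfile

import torch
import torch.nn as nn

# Placeholder for the trained NTS-NET; in my real code this is the full model.
model = nn.Linear(4, 2)

# Save the state_dict to a regular .pt file first ...
torch.save(model.state_dict(), "../models/nts_net_state.pt")

# ... then compress it into a .tar.gz archive to save the last few MB.
with tarfile.open("../models/nts_net_state.tar.gz", "w:gz") as tar:
    tar.add("../models/nts_net_state.pt", arcname="nts_net_state.pt")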
import pickle
import tarfile

from torch.serialization import _load, _open_zipfile_reader


def torch_load_targz(file_path):
    # Open the compressed archive and take its single member,
    # which is the serialized state_dict.
    with tarfile.open(file_path, "r:gz") as tar:
        member = tar.getmembers()[0]
        # Read the member as a file-like object and hand it to
        # torch's internal zipfile reader, decompressing on the fly.
        with tar.extractfile(member) as untar:
            with _open_zipfile_reader(untar) as zipfile:
                torch_loaded = _load(zipfile, None, pickle)
    return torch_loaded


if __name__ == '__main__':
    torch_load_targz("../models/nts_net_state.tar.gz")
    # equivalent of torch.load("../models/nts_net_state.pt") for .tar.gz
So in the end I read the torch model directly from the tar.gz. But this way, predictions are too slow.
Is there a better solution to this problem?
(I'm using torch 1.4.0 and Python 3.6.)