I am working on a remote server that hosts a large collection of pre-trained word embedding models as zip archives in a shared location.
My personal storage quota is limited, so I can keep at most one extracted model in my home directory at a time.
My problem is as follows:
I would like to access the zip archives in the shared space on the server and load them into torchtext without having to unzip them into my personal storage.
I currently do:
import zipfile
import torchtext

target = "/path/to/shared/storage/model.zip"
with zipfile.ZipFile(target, "r") as archive:
    archive.extract("model.txt")
vectors = torchtext.vocab.Vectors("model.txt")
I would like to do something along the lines of:
import zipfile
import torchtext

target = "/path/to/shared/storage/model.zip"
with zipfile.ZipFile(target, "r") as archive:
    vectors = torchtext.vocab.Vectors(archive.open("model.txt"))
(This example does not work, of course, since torchtext.vocab.Vectors expects a path to a file on disk, not a file-like object.)
Is there a way to achieve this in torchtext?
I am grateful for any suggestions you may have.