RAM leak in TorchScript

Hi! I have several hundred very simple binary classifiers that I need to combine into one model. To do this I create a class that I later script; it is essentially a torch.nn.ModuleDict() in which each classifier can be looked up by its key.
When I create an instance of this class there are no problems: I script it and work with it. But when I save the scripted model (it is about 1.5 MB on disk) and then load it back, a memory leak starts at that point, and the process consumes about 50 GB of RAM during loading. The problem occurs only on Linux (Ubuntu); on macOS there is no such problem, the model loads quickly and uses almost no RAM, as it should.
Can anyone tell me what the problem could be?
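
For reference, here is roughly how the peak RSS can be measured (a minimal sketch using only the standard resource module; path2 is the saved model path from the snippets below, and ru_maxrss is kilobytes on Linux, bytes on macOS):

import resource

import torch

before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
model = torch.jit.load(path2)
after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
# On Linux ru_maxrss is in kilobytes, so this prints gigabytes.
print(f'peak RSS grew by ~{(after - before) / 1024 / 1024:.1f} GB during load')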

import json
import os

import torch


class ClassifierModel(torch.nn.Module):

    def __init__(self, inp: int, out: int, threshold: float):
        super().__init__()
        # A single linear layer plus sigmoid: the simplest binary classifier.
        self.linear = torch.nn.Linear(inp, out)
        # Decision threshold, stored as a plain float attribute.
        self.threshold = threshold

    def forward(self, x: torch.Tensor):
        outputs = torch.sigmoid(self.linear(x))
        return outputs

class Class(torch.nn.Module):
    
    def __init__(self, path_to_json_classifiers: str):
        super().__init__()
        with open(path_to_json_classifiers, 'r', encoding='utf-8') as file:
            json_classifiers = json.load(file)

        input_size = json_classifiers['input_size']
        output_size = json_classifiers['output_size']
        path_to_state_dict_dir = json_classifiers['path_to_state_dict_dir']

        # One scripted classifier per key; at inference time the key selects the model.
        self.models_dict = torch.nn.ModuleDict()
        for classifier, json_classifier in json_classifiers['classifiers'].items():
            if 'path_to_state_dict' in json_classifier:
                model = ClassifierModel(input_size, output_size, float(json_classifier['threshold']))
                model.load_state_dict(torch.load(
                    os.path.join(path_to_state_dict_dir, json_classifier['path_to_state_dict'])))
                model = torch.jit.script(model)
                self.models_dict[classifier] = model

    def forward(self, hash: str, embedding: torch.Tensor):
        # Linear scan over the ModuleDict; TorchScript supports iterating .items().
        for model_hash, model in self.models_dict.items():
            if model_hash == hash:
                score = model(embedding)
                # Binarize against the per-classifier threshold.
                return 1 if bool(score > model.threshold) else 0
        # Unknown hash.
        return -1
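
For context, the JSON config the constructor reads is assumed to look like this (all names and values here are illustrative):

{
    "input_size": 768,
    "output_size": 1,
    "path_to_state_dict_dir": "/data/classifiers",
    "classifiers": {
        "a1b2c3": {"threshold": "0.5", "path_to_state_dict": "a1b2c3.pt"},
        "d4e5f6": {"threshold": "0.7", "path_to_state_dict": "d4e5f6.pt"}
    }
}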


model = Class(path)
model = torch.jit.script(model)
torch.jit.save(model, path2)
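
At this point a quick sanity check on the in-memory scripted model works as expected (the hash and embedding size here are illustrative):

emb = torch.randn(1, 768)  # 768 stands in for the real input_size
print(model('a1b2c3', emb))  # 0 or 1, or -1 for an unknown hash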

I create the instance and everything is fine up to this point, but then I load the freshly saved model:

model = torch.jit.load(path2)

And a long loading process begins that consumes at least 30 GB of RAM.
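
One thing I am not sure about (an assumption on my side, not a confirmed fix): is the inner torch.jit.script call in Class.__init__ even needed? As far as I know, scripting the outer module compiles its submodules recursively, so the loop body could be simplified like this:

model = ClassifierModel(input_size, output_size, float(json_classifier['threshold']))
model.load_state_dict(torch.load(
    os.path.join(path_to_state_dict_dir, json_classifier['path_to_state_dict'])))
# model = torch.jit.script(model)  # removed: the outer torch.jit.script recurses into submodules
self.models_dict[classifier] = model

But even if that shrinks the serialized model, the question remains why loading behaves so differently on Linux and macOS.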

NAME="Ubuntu" VERSION="18.04.6 LTS (Bionic Beaver)", Python 3.9.10 (64-bit), torch 1.13.1