(libtorch) Saving multiple modules and optimizers in a single file

The multiple-modules part is easy to solve: create one class that contains every module and registers them all, as sketched below. How can I put multiple optimizers in the same file?
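
For reference, a minimal sketch of that container approach; the two torch::nn::Linear layers are just placeholders for whatever modules you actually have:

#include <torch/torch.h>

struct BundleImpl : torch::nn::Module {
    BundleImpl()
        : encoder(register_module("encoder", torch::nn::Linear(784, 128))),
          decoder(register_module("decoder", torch::nn::Linear(128, 784))) {}

    // Both submodules are registered, so a single save/load covers them.
    torch::nn::Linear encoder, decoder;
};
TORCH_MODULE(Bundle);

// Bundle bundle;
// torch::save(bundle, "modules.pt");  // all registered parameters, one file
// torch::load(bundle, "modules.pt");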

A solution for static optimizers may work something like this:

#include <torch/torch.h>

#include <istream>
#include <ostream>

std::ostream &operator<<(std::ostream &os, const ModelImpl &model) {
    for (const torch::Tensor &param : model.parameters()) {
        // Move to CPU and force contiguous memory so data_ptr() walks the
        // elements in order.
        torch::Tensor cpu = param.to(torch::kCPU).contiguous();

        os.write(static_cast<const char *>(cpu.data_ptr()),
                 cpu.numel() * cpu.element_size());
    }
    return os;
}

std::istream &operator>>(std::istream &is, ModelImpl &model) {
    torch::NoGradGuard g;
    for (const torch::Tensor &param : model.parameters()) {
        // Read into a contiguous CPU buffer with the parameter's shape and
        // dtype, then copy it into the (possibly GPU-resident) parameter.
        torch::Tensor cpu =
            torch::empty(param.sizes(), param.options().device(torch::kCPU));

        is.read(static_cast<char *>(cpu.data_ptr()),
                cpu.numel() * cpu.element_size());
        param.copy_(cpu);
    }
    return is;
}

Here each parameter's raw bytes are just written straight to the stream. This ignores endianness, stores no shape or dtype metadata (the model definition on the loading side must match exactly), and doesn't capture the optimizers' state at all. It is far from optimal for those reasons, and also because it means the optimizer must still be initialized on every value.
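
For the optimizer half of the question, one route that avoids raw streams entirely might be nesting libtorch's serialize archives under distinct keys. The following is only a sketch; save_all/load_all and the opt1/opt2 names are made up for illustration:

#include <torch/torch.h>

#include <string>

void save_all(torch::nn::Module &model,
              torch::optim::Optimizer &opt1,
              torch::optim::Optimizer &opt2,
              const std::string &path) {
    torch::serialize::OutputArchive archive;
    torch::serialize::OutputArchive model_ar, opt1_ar, opt2_ar;

    model.save(model_ar);  // modules and optimizers both serialize
    opt1.save(opt1_ar);    // into an OutputArchive...
    opt2.save(opt2_ar);

    archive.write("model", model_ar);  // ...and archives nest under keys,
    archive.write("opt1", opt1_ar);    // so one file can hold all of them
    archive.write("opt2", opt2_ar);
    archive.save_to(path);
}

void load_all(torch::nn::Module &model,
              torch::optim::Optimizer &opt1,
              torch::optim::Optimizer &opt2,
              const std::string &path) {
    torch::serialize::InputArchive archive;
    archive.load_from(path);

    torch::serialize::InputArchive model_ar, opt1_ar, opt2_ar;
    archive.read("model", model_ar);
    archive.read("opt1", opt1_ar);
    archive.read("opt2", opt2_ar);

    model.load(model_ar);
    opt1.load(opt1_ar);
    opt2.load(opt2_ar);
}

Since Optimizer::save and Optimizer::load round-trip the per-parameter state (momentum buffers and so on), this should also remove the need to rebuild the optimizer's state after loading.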