Quantizing a torch::jit::Module or updating a quantized model's weights

Hi everyone,
I have a PyTorch model in C++ which is a torch::jit::Module. The model has been quantized to int8 and traced in PyTorch. I can load it via torch::jit::load. I would like to programmatically update the weights, but haven’t found a way to do so yet. I have the following code which finds the modules that contain _packed_params, but I’m not sure how to go about updating the weights:

void copy_weights_quantized(torch::jit::Module& src_module, torch::jit::Module& dst_module) {
    // Source submodules by name, to match against the destination later.
    std::map<std::string, torch::jit::Module> src_param_map;
    for (const auto& pair : src_module.named_modules()) {
        src_param_map.emplace(pair.name, pair.value);
    }
    // Find the packed quantized parameters in the destination.
    for (const auto& pair : dst_module.named_modules()) {
        for (const auto& attr : pair.value.named_attributes()) {
            if (attr.name == "_packed_params._packed_params") {
                std::cout << "    Attribute: " << attr.name << " mod: " << attr.value.isModule() << " capsule: " << attr.value.isCapsule() << std::endl;
                // The packed weight lives in a capsule in slot 0.
                auto packed_weight_holder = attr.value.toObjectRef().getSlot(0).toCapsule();
                auto packed_weight = c10::static_intrusive_pointer_cast<LinearPackedParamsBase>(packed_weight_holder);
                // ...this is where I'm stuck: how do I write new weights back into packed_weight?
            }
        }
    }
}

Alternatively, if there were a way to quantize a torch::jit::Module to int8 directly in C++, that would also work for me; I haven't found a C++ equivalent of quantize_dynamic yet. Any pointers are appreciated.
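For reference, this is the Python-side flow I'm trying to reproduce in C++. It's a minimal sketch with a toy single-layer model (the model, shapes, and random weights are just placeholders): quantize dynamically, unpack the packed params, and repack a new weight via set_weight_bias.

```python
import torch
import torch.nn as nn

# Toy float model with a single Linear layer (placeholder for the real model).
model_fp32 = nn.Sequential(nn.Linear(8, 4))

# Dynamic int8 quantization: weights are quantized ahead of time,
# activations are quantized on the fly at inference.
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8)

qlinear = model_int8[0]

# Unpack the current packed params, build a new quantized weight with the
# same scale/zero_point, and write it back (which repacks internally).
old_w, old_b = qlinear._weight_bias()
new_w = torch.quantize_per_tensor(
    torch.randn(4, 8), old_w.q_scale(), old_w.q_zero_point(), torch.qint8)
qlinear.set_weight_bias(new_w, old_b)
```

This is essentially the round trip I'd like to do from C++ on the packed params object.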