Problem parsing TorchScript data in C++ for a network inference frontend

I am working on a TorchScript parser as the frontend for our network inference framework. I have found a way to parse float TorchScript models; taking convolution as an example:

// input: const torch::jit::Node* node (the convolution node)
const auto& inputs = node->inputs();
const auto stride  = getValue<std::vector<int64_t>>(inputs[3]);
const auto padding = getValue<std::vector<int64_t>>(inputs[4]);

template <typename T>
static inline T getValue(const torch::jit::Value* value) {
    // Try to fold the value into a compile-time constant.
    auto optional_ivalue = toIValue(value);
    T res{}; // value-initialize so the fallback return is well-defined
    if (!optional_ivalue) {
        MNN_ERROR("getValue: value must come from a Constant node.");
        return res;
    }
    c10::IValue& val = optional_ivalue.value();
    auto optional_res = val.toOptional<T>();
    if (!optional_res) {
        // The value is None; fall back to the default-constructed result.
        return res;
    }
    return optional_res.value();
}

However, when I try to parse and unpack a QAT (quantized) TorchScript model, I find that in PyTorch 1.10 a quantized convolution node has only 4 inputs: input, __packed_params, scale, and zero_point. My problem is unpacking __packed_params in C++.
Is there an easy API, similar to toIValue, that I can use to unpack __packed_params? Thanks.
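
For context, __packed_params appears to be an instance of the ConvPackedParamsBase<2> custom class from ATen's quantized backend. Below is a minimal sketch of what I am attempting, assuming the PyTorch 1.10 internal headers (the header path has moved between releases, and unpackConv2dParams is just my own helper name):

// Assumption: ConvPackedParamsBase lives in this header in PyTorch 1.10;
// the path may differ in other versions.
#include <ATen/native/quantized/cpu/conv_packed_params.h>

static void unpackConv2dParams(const torch::jit::Value* packed_params) {
    auto optional_ivalue = toIValue(packed_params);
    if (!optional_ivalue) {
        return; // __packed_params is not a constant
    }
    // __packed_params holds the prepacked weight/bias as a custom class.
    auto conv_params =
        optional_ivalue->toCustomClass<ConvPackedParamsBase<2>>();
    at::Tensor weight;
    c10::optional<at::Tensor> bias;
    std::tie(weight, bias) = conv_params->unpack();
    const auto stride   = conv_params->stride();   // torch::List<int64_t>
    const auto padding  = conv_params->padding();
    const auto dilation = conv_params->dilation();
    const int64_t groups = conv_params->groups();
    // weight is a quantized tensor; for per-tensor quantization its
    // parameters come from weight.q_scale() and weight.q_zero_point().
}

The remaining scale and zero_point inputs (the output quantization parameters) seem readable with the getValue helper above, e.g. getValue<double>(inputs[2]) and getValue<int64_t>(inputs[3]).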