Torch::jit::forward Thread 1: EXC_BAD_ACCESS?

I am getting a Thread 1: EXC_BAD_ACCESS error in the torch::jit::Module::forward call here:

  IValue forward(std::vector<IValue> inputs) {
    // get_method("forward") returns, but invoking the returned method
    // with (std::move(inputs)) results in Thread 1: EXC_BAD_ACCESS
    return get_method("forward")(std::move(inputs));
  }

The code that calls this function is shown below; it closely follows the object detection example in D2Go's GitHub repo:

        at::Tensor tensor = torch::from_blob(imageBuffer, { 3, input_width, input_height }, at::kFloat);
        torch::autograd::AutoGradMode guard(false);
        at::AutoNonVariableTypeMode non_var_type_mode(true);
        std::vector<torch::Tensor> v;
        v.push_back(tensor);
        std::vector<at::IValue> b = {at::TensorList(v)};
        auto outputTuple = _impl.forward(b).toTuple(); // error

Might anyone have any intuition as to why this happens? For context, I am calling this function from SwiftUI on the main thread.

Based on this post, it seems you are hitting a memory violation. Could you .clone() the tensor before passing it to the function?
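For reference, the distinction .clone() makes can be sketched in plain C++ without libtorch. This is a minimal stand-in model, not the real API: BlobView and cloneView below are hypothetical names, mimicking how torch::from_blob aliases the caller's memory without copying it, while Tensor::clone() produces an owning deep copy.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal stand-in for torch::from_blob: a non-owning view over memory
// owned by someone else. It dangles if that buffer is freed or reused.
struct BlobView {
  const float* data;
  std::size_t size;
};

// Minimal stand-in for Tensor::clone(): an owning deep copy of the view,
// safe to use even after the original buffer is gone.
inline std::vector<float> cloneView(const BlobView& v) {
  return std::vector<float>(v.data, v.data + v.size);
}
```

If the buffer behind the view is freed or overwritten before inference runs, the view reads invalid memory; the clone keeps its own copy and is unaffected.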

Thanks @ptrblck for the reply. I noticed that this exception occurs when the Python model is not quantised. The exception happened for MobileNetV3 Faster R-CNN and RetinaNet. However, when I built a model with quantisation layers, I did not get an exception.

I will try your suggestion and see how it goes.

I did try cloning the tensor but still hit a memory exception on another thread. Maybe iOS does not allow large models to run, since the detect function runs without errors for quantised models.

I realised the problem: the pointer to imageBuffer was being deallocated by the time the model inference executed. I was passing a pointer to imageBuffer from Swift to C++, and by the time the inference code ran, the buffer backing that pointer had been freed. I fixed it by defining a @State property for the imageBuffer variable in Swift, so SwiftUI keeps the buffer alive across view updates.
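An alternative defence on the C++ side of the bridge, sketched below under the assumption that the Swift caller may free its buffer at any point after the call returns: copy the pixel data into owned storage immediately. copyImageBuffer is a hypothetical helper name, not part of the original post or of libtorch.

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical bridge helper: copy the caller's pixel data into owned
// storage the moment it crosses the Swift -> C++ boundary, so a later
// torch::from_blob + forward() cannot read freed memory even if the
// Swift side deallocates its buffer.
inline std::vector<float> copyImageBuffer(const float* imageBuffer,
                                          std::size_t count) {
  std::vector<float> owned(count);
  std::memcpy(owned.data(), imageBuffer, count * sizeof(float));
  return owned;
}
```

With this pattern, torch::from_blob can safely point at owned.data() for as long as the vector is in scope, regardless of what Swift does with the original buffer.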