Batch as input to module->forward({inputs}).toTensor()

Hello,
I am trying to work in batches while reading frames from a video with OpenCV, like so. I first read 8 frames, collect them in a std::vector<torch::jit::IValue>, and then run them through module->forward({inputs}).toTensor();. However, if the batch size is larger than 1, it crashes. I guess some reshaping is needed to add the batch dimension; I am just not sure how to do it in C++.

    int batch_size = 1;
    int b_counter = 0;

    std::vector<torch::jit::IValue> inputs; // Accumulates the batch of frames

    for (num_frames = 0; num_frames < nb_frames; num_frames++) {
        video_reader >> frame;
        cv::cvtColor(frame, frame, CV_BGR2RGB);
        frame.convertTo(frame, CV_32FC3, 1.0f / 255.0f);
        auto input_tensor = torch::from_blob(frame.data, {1, frame_h, frame_w, kCHANNELS});

        input_tensor = input_tensor.permute({0, 3, 1, 2});
        inputs.emplace_back(input_tensor.to(torch::kCUDA)); // Add the frame to the vector

        // OK, we now have enough inputs
        if (batch_size == inputs.size()) {
            b_counter++;
            std::cout << "Batch: " << b_counter << std::endl;
            // std::cout << "inputs.size() loop: " << inputs.size() << std::endl;
            torch::Tensor out_tensor = module->forward({inputs}).toTensor(); // This line fails
        }
    }

Thanks,


I have tried re-tracing the model with jit.trace at batch size m and then calling

module->forward({inputs})

with data of that batch size, but it still failed.

I also do not want to fall back to a for(;;) loop over the batch.

You could just create a single tensor of size BxCxHxW instead of pushing multiple tensors of size 1xCxHxW into your std::vector<torch::jit::IValue>. For example:

auto module = torch::jit::load("../model.pt");

// That's how to forward a whole batch.
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({10, 3, 224, 224})); // BxCxHxW
module->forward(inputs);

// You only need to push back multiple IValues if your forward takes more than one argument, say two; then:
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({1, 3, 224, 224})); // 1xCxHxW
inputs.push_back(torch::ones({1, 3, 224, 224})); // 1xCxHxW
module->forward(inputs);
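
As a side note, if your per-frame tensors were CxHxW (no leading batch dimension), torch::stack could create the batch dimension for you. A quick sketch, using dummy tensors rather than your actual frames:

// Sketch: build a BxCxHxW batch from CxHxW frames with torch::stack.
std::vector<torch::Tensor> frames;
frames.push_back(torch::ones({3, 224, 224})); // CxHxW
frames.push_back(torch::ones({3, 224, 224})); // CxHxW
torch::Tensor batch = torch::stack(frames, /*dim=*/0); // 2x3x224x224
std::vector<torch::jit::IValue> batch_input;
batch_input.push_back(batch);
module->forward(batch_input);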

I just tried your code:

std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({1, 3, 224, 224}));
inputs.push_back(torch::ones({1, 3, 224, 224}));
module->forward(inputs);

but it failed.

The code below works:

std::vector<at::Tensor> inputs_vec;
inputs_vec.push_back(torch::ones({1, 3, 224, 224})); // 1xCxHxW
inputs_vec.push_back(torch::ones({1, 3, 224, 224})); // 1xCxHxW
at::Tensor input_ = torch::cat(inputs_vec); // 2xCxHxW
std::vector<torch::jit::IValue> inputs;
inputs.push_back(input_);
module->forward(inputs);

That is all. Thanks again, and sorry about the code formatting.
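
For completeness, folded back into my original frame-reading loop the whole thing looks roughly like this (a sketch; frame, video_reader, nb_frames, frame_h, frame_w and kCHANNELS are as in my first post):

const size_t batch_size = 8;
std::vector<torch::Tensor> frame_tensors; // accumulates per-frame 1xCxHxW tensors

for (int num_frames = 0; num_frames < nb_frames; num_frames++) {
    video_reader >> frame;
    cv::cvtColor(frame, frame, CV_BGR2RGB);
    frame.convertTo(frame, CV_32FC3, 1.0f / 255.0f);

    auto input_tensor = torch::from_blob(frame.data, {1, frame_h, frame_w, kCHANNELS});
    input_tensor = input_tensor.permute({0, 3, 1, 2});
    // to(kCUDA) copies the data, so the stored tensor no longer aliases the cv::Mat buffer
    frame_tensors.push_back(input_tensor.to(torch::kCUDA));

    if (frame_tensors.size() == batch_size) {
        torch::Tensor batch = torch::cat(frame_tensors); // BxCxHxW
        std::vector<torch::jit::IValue> inputs;
        inputs.push_back(batch);
        torch::Tensor out_tensor = module->forward(inputs).toTensor();
        frame_tensors.clear(); // start accumulating the next batch
    }
}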

Yes, as explained, that code only works if your forward function expects two arguments, which is not the case here; it was just an example. Please take some time to read the annotations and it will become clear.
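
To make the distinction concrete, here is a hypothetical two-input model next to the single-input case (the forward signature in the comment is an assumption for illustration, not your model):

// Only a model scripted/traced with two inputs, e.g. def forward(self, x, y),
// takes one IValue per argument:
std::vector<torch::jit::IValue> two_args;
two_args.push_back(torch::ones({1, 3, 224, 224})); // argument x
two_args.push_back(torch::ones({1, 3, 224, 224})); // argument y
module->forward(two_args);

// A single-input model always gets exactly one IValue, whatever the batch size:
std::vector<torch::jit::IValue> one_arg;
one_arg.push_back(torch::ones({8, 3, 224, 224})); // one BxCxHxW tensor
module->forward(one_arg);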

Thanks. However, how come my loop does not work the same way as your:

inputs.push_back(torch::ones({10, 3, 224, 224}));
?

I don't understand the question; can you explain it further?

May I ask: is this a forward pass with batch size 2?