Why is my trained model's output the same for every random input?

I trained my model in Python. After training, I got the same output for every random input. I fixed this by deactivating the BatchNorm layers with model.eval(). But when I load the trained model in C++ with the PyTorch C++ API, the problem shows up again, and this time model.eval() does not help: I get the same output for every random input.
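
For context, the export on the Python side looks roughly like this. The network here is only a small stand-in with one BatchNorm layer, and the file names are placeholders, but the important part is that I call model.eval() before tracing and that the input shape matches the one I use in C++:

import torch
import torch.nn as nn

# Small stand-in for my real network (placeholder); the real one also uses BatchNorm
model = nn.Sequential(
    nn.Conv2d(2, 8, kernel_size=3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)

model.eval()  # deactivate BatchNorm, as I do at test time in Python

# After eval(), different random inputs give different outputs in Python
for _ in range(3):
    print(model(torch.rand(1, 2, 64, 172)))

example = torch.rand(1, 2, 64, 172)       # same shape as the input in the C++ test
traced = torch.jit.trace(model, example)  # export to TorchScript by tracing
traced.save("model.pt")                   # placeholder name; MODEL_ADDRESS points at this file

That saved file is what the C++ function below loads.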

This is my C++ model loading function:

#include <torch/script.h>

std::vector<torch::jit::script::Module> module_loader(std::string file_addr) {
    std::vector<torch::jit::script::Module> modul;
    // Load the TorchScript module exported from Python and switch it to eval mode
    torch::jit::script::Module model = torch::jit::load(file_addr);
    model.eval();
    modul.push_back(model);
    return modul;
}

And this is my testing function:

void test(std::vector<torch::jit::script::Module> &model) {
    std::vector<torch::jit::IValue> inputs;
    // A fresh random input on every call
    inputs.push_back(torch::rand({1, 2, 64, 172}));
    torch::Tensor output = model[0].forward(inputs).toTensor();
    std::cout << output << std::endl;
}

Finally, I put it all together in main() like this:

int main() {
    auto modul = module_loader(MODEL_ADDRESS);
    test(modul);
}

MODEL_ADDRESS is a macro holding the path to the trained model file on my local disk.

The program prints the same output on every run:

0.3231 [ CPUFloatType{1,1} ]