Docker Environment
- Ubuntu 20.04
- libtorch : libtorch-cxx11-abi-shared-with-deps-1.11.0+cpu.zip
- cpu inference
I trained an RNN transducer model in PyTorch, quantized it, and obtained its JIT-traced module from Python.
I am trying to run the JIT module in C++ using the libtorch library.
I build the Docker image on my local machine and run the container there with docker run.
I measured the cost of the forward pass of the transcriber module.
Ironically, the second run is slower than the first run for the same input. I would expect the second run to be faster than the first because of "warm up" or caching.
I tried adding torch::jit::setGraphExecutorOptimize(false), torch::jit::getProfilingMode() = false, or both.
Whether or not these options were added, the result was always the same.
I don't know why this happens. I want to keep the speed of the first run. How can I solve this?
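For reference, this is roughly how the options were set and how the timings below were measured (a simplified sketch; configure_jit and run_and_time are illustrative wrappers, not the exact code):

#include <torch/script.h>
#include <chrono>
#include <iostream>

// Set once before inference; I tried each option alone and both together.
void configure_jit() {
    torch::jit::setGraphExecutorOptimize(false);
    torch::jit::getProfilingMode() = false;
}

// One timed forward pass; this prints the "Transcriber out takes" lines below.
void run_and_time(torch::jit::Module &transcriber,
                  std::vector<torch::jit::IValue> inputs) {
    auto start = std::chrono::steady_clock::now();
    auto outputs = transcriber.forward(inputs);
    auto end = std::chrono::steady_clock::now();
    std::chrono::duration<double> elapsed = end - start;
    std::cout << "Transcriber out takes : " << elapsed.count() << std::endl;
}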
# First run
Transcriber out takes : 0.0299713
Transcriber out takes : 0.0131214
Transcriber out takes : 0.0170015
Transcriber out takes : 0.00426234
Transcriber out takes : 0.00432165
Transcriber out takes : 0.00328154
Transcriber out takes : 0.00338658
Transcriber out takes : 0.00475054
Transcriber out takes : 0.00407795
Transcriber out takes : 0.0623491
Transcriber out takes : 0.00280292
Transcriber out takes : 0.0462229
Transcriber out takes : 0.00375231
Transcriber out takes : 0.00448459
Transcriber out takes : 0.00417234
Transcriber out takes : 0.00383614
Transcriber out takes : 0.00443198
Transcriber out takes : 0.00493904
Transcriber out takes : 0.00365757
Transcriber out takes : 0.00531058
Transcriber out takes : 0.00445643
Transcriber out takes : 0.00429056
Transcriber out takes : 0.00430563
Transcriber out takes : 0.00582294
Transcriber out takes : 0.00475776
# Second run
Transcriber out takes : 0.0202359
Transcriber out takes : 0.020516
Transcriber out takes : 0.0175968
Transcriber out takes : 0.0202013
Transcriber out takes : 0.0185216
Transcriber out takes : 0.0201316
Transcriber out takes : 0.0200546
Transcriber out takes : 0.0194746
Transcriber out takes : 0.0183476
Transcriber out takes : 0.0186506
Transcriber out takes : 0.0187843
Transcriber out takes : 0.0176204
Transcriber out takes : 0.0158483
Transcriber out takes : 0.016436
Transcriber out takes : 0.026535
Transcriber out takes : 0.0237587
Transcriber out takes : 0.0175559
Transcriber out takes : 0.0175385
Transcriber out takes : 0.0229276
Transcriber out takes : 0.0212121
Transcriber out takes : 0.0196309
Transcriber out takes : 0.0198682
Transcriber out takes : 0.0204592
Transcriber out takes : 0.0205975
Transcriber out takes : 0.0195115
Interestingly, my local machine always shows the same speed for the first and second runs (about 0.004 s on average).
Local Environment
- MacOs
- libtorch-macos-1.11.0.zip
- cpu inference
Additional Information
torch::jit::Module RnntTranscribe::load_torch_model() {
    // Load the quantized, JIT-traced transcriber and put it in eval mode on CPU.
    torch::jit::Module module = torch::jit::load("transcriber_jit_traced_quantized.pt");
    module.to(torch::kCPU);
    module.eval();
    return module;
}
std::tuple<at::Tensor, torch::jit::IValue> RnntTranscribe::get_transcriber_out(at::Tensor &embed,
                                                                               TranscriberHiddens &transcriber_hiddens) {
    std::vector<torch::jit::IValue> inputs;
    inputs.emplace_back(embed.unsqueeze(0));  // add a leading (batch) dimension
    inputs.emplace_back(transcriber_hiddens.transcriber_lstm_hiddens);
    auto outputs = transcriber.forward(inputs).toTuple();
    auto transcriber_out = outputs->elements()[0].toTensor();
    auto transcriber_lstm_hiddens = outputs->elements()[1].toTuple();
    return std::make_tuple(transcriber_out, transcriber_lstm_hiddens);
}
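For context, the transcriber member holds the module returned by load_torch_model(), and get_transcriber_out() is called once per decoding step with the previous hidden state fed back in. Roughly (a simplified sketch; the embeds loop and the hidden-state bookkeeping are placeholders for my actual decoding loop):

// Load once at start-up.
transcriber = load_torch_model();

// Per-step usage inside the decoding loop (sketch).
for (auto &embed : embeds) {
    at::Tensor transcriber_out;
    torch::jit::IValue new_hiddens;
    std::tie(transcriber_out, new_hiddens) = get_transcriber_out(embed, transcriber_hiddens);
    transcriber_hiddens.transcriber_lstm_hiddens = new_hiddens;  // carry hiddens to the next step
}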