LibTorch/C++ and Unreal Engine 4 runtime error: Variable is optimized away and not available

I am trying to run my BERT model in Unreal Engine with LibTorch. However, at runtime the forward function fails with the following error: Variable is optimized away and not available.

I use LibTorch 1.10, Visual Studio C++ 2019, and Unreal Engine 4.26.2. Note: if I use only LibTorch 1.10 and Visual Studio C++ 2019 in Release mode, my program runs fine.

Please, could someone give me a suggestion on how to overcome this runtime error?

This is the function where the error occurs:

string predict(string question, string paragraph)
{
	vector<torch::jit::IValue> inputs;   // template arguments restored; the forum stripped the angle brackets
	vector<int> input_ids;
	vector<int> segment_ids;
	vector<int> attention_mask;
	vector<string> tokens;
	int start_index;
	int end_index;
	string answer;

	preprocess(question, paragraph, input_ids, segment_ids, attention_mask, tokens);

	int size = input_ids.size() > max_seq_length ? max_seq_length : input_ids.size();
	// from_blob needs the source dtype (kInt); passing only torch::kCPU would default to float
	torch::Tensor tensor_input_ids = torch::from_blob(input_ids.data(), c10::IntArrayRef{ 1, size }, torch::kInt).clone().to(torch::kInt64);
	torch::Tensor tensor_attention_mask = torch::from_blob(attention_mask.data(), c10::IntArrayRef{ 1, size }, torch::kInt).clone().to(torch::kInt64);
	torch::Tensor tensor_segment_ids = torch::from_blob(segment_ids.data(), c10::IntArrayRef{ 1, size }, torch::kInt).clone().to(torch::kInt64);

	inputs.push_back(tensor_input_ids);
	inputs.push_back(tensor_attention_mask);
	inputs.push_back(tensor_segment_ids);

	// run forward once and reuse the result instead of calling it three times
	auto outputs = bert.forward(inputs).toTuple();
	torch::Tensor start_scores = outputs->elements()[0].toTensor();
	torch::Tensor end_scores = outputs->elements()[1].toTensor();

	start_index = torch::argmax(start_scores).item().toInt();
	end_index = torch::argmax(end_scores).item().toInt();

	for (int i = start_index; i < end_index + 1; i++) {
		if (tokens[i].substr(0, 2) == "##") {
			answer = answer + tokens[i].substr(2);
		}
		else {
			answer = answer + " " + tokens[i];
		}
	}

	if (answer.empty() || answer.substr(1, 5) == "[CLS]") {  // "[CLS]" is five characters
#if LANGUAGE
		answer = "Unable to find the answer to your question.";
#else
		answer = "Lo siento, no puedo responder a tu pregunta.";
#endif
	}

	return answer;
}
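The token-joining loop can be exercised on its own; a minimal standalone sketch (the function name join_tokens is mine, not from the project):

```cpp
#include <string>
#include <vector>

// Standalone version of the WordPiece-merging loop in predict():
// subword tokens that begin with "##" are glued onto the previous token,
// everything else is appended with a leading space.
std::string join_tokens(const std::vector<std::string>& tokens,
                        int start_index, int end_index) {
    std::string answer;
    for (int i = start_index; i <= end_index; ++i) {
        if (tokens[i].rfind("##", 0) == 0) {   // token starts with "##"
            answer += tokens[i].substr(2);
        } else {
            answer += " " + tokens[i];
        }
    }
    return answer;
}
```

Note that the result starts with a space unless the first token is a subword, which is why the [CLS] check in predict() skips the first character.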

I can’t decipher the image, so I can’t tell whether the error message is raised there.
However, could you explain the error a bit more? Are you seeing this error only while trying to debug the application or while running the application?
I would guess that some Release optimizations are used and the compiler removed unused variables or code snippets, which would yield undefined behavior.
E.g. I’ve been debugging an issue that was hit by violating the strict-aliasing rule, and the compiler was happy to optimize the function call away when -fstrict-aliasing was used.
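For illustration, a minimal sketch of that pitfall (names are mine): reinterpreting a float through an int pointer breaks strict aliasing, while std::memcpy keeps the bit-level access well defined:

```cpp
#include <cstdint>
#include <cstring>

// Dereferencing reinterpret_cast<std::int32_t*>(&f) would violate strict
// aliasing, and under -fstrict-aliasing the compiler may assume the pointers
// never alias and optimize the access away. std::memcpy over the object
// representation is the defined alternative.
std::int32_t float_bits(float f) {
    std::int32_t bits;
    std::memcpy(&bits, &f, sizeof bits);  // well defined, typically compiles to a move
    return bits;
}
```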

Thanks ptrblck,

  1. I run my project with the Local Windows Debugger
  2. I get a runtime error in the forward function
  3. I press the button continue and I get another message
  4. I press the button continue again and my program runs normally

Message of step 3:

In case you are concerned about the inability to display local variables: rebuild your application with debug symbols, as not all local variables are available during debugging.
To disable compiler optimizations during the build and add debug symbols, rebuild PyTorch with DEBUG=1 python setup.py install.
I don’t know what the error message means.

I overcame the error by placing #pragma optimize("", off) at the top of the file.
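Note that #pragma optimize is MSVC-specific. If turning optimizations off for the whole file is too broad, the pragma can be scoped to just the affected function; a sketch (function name is illustrative):

```cpp
// MSVC: turn optimizations off for the functions defined below this point...
#pragma optimize("", off)
int troublesome_function(int x) {
    return x * 2;  // locals in here stay visible to the debugger
}
// ...and restore the command-line optimization settings afterwards.
#pragma optimize("", on)
```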

However, errors continue. Using the Debug binaries of LibTorch and the DebugGame configuration of Unreal Engine, I realized that the error starts from the loading of the model.

First, I ran the test by serializing a sample Libtorch model.

import torch
import torchvision

# An instance of your model.
model = torchvision.models.resnet18()

# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)

# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)

traced_script_module.save("modelG.pt")

Afterwards, I load the model in the constructor of my application in Unreal Engine, and when I try to load it I get an error from fopen.cpp:

string name_model = "C:/Users/Jorge/PycharmProjects/CreateTestModels/modelG.pt";
torch::jit::script::Module module = torch::jit::load(name_model.c_str());
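As a sanity check before calling torch::jit::load, one can verify that the path is readable from the process, so a missing or locked file is reported in your own code instead of deep inside fopen.cpp (the helper name is mine):

```cpp
#include <fstream>
#include <string>

// Returns true if the model file can be opened for reading by this process.
// Catches wrong paths, missing files, and permission/antivirus locks early.
bool model_file_readable(const std::string& path) {
    std::ifstream f(path, std::ios::binary);
    return f.good();
}

// Usage (sketch):
// if (!model_file_readable(name_model)) { /* log and bail out */ }
// torch::jit::script::Module module = torch::jit::load(name_model);
```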

Please, could someone give me a suggestion on how to overcome this new runtime error?

Expression: file_name != nullptr points to an error during file loading, so you should check that the file name is valid and that the file exists at the specified location.

Could that be similar to Model cannot be loaded with torch::jit::load? The model exists but cannot be loaded at the moment in my test as well.

Yes, it is similar. I have verified that the file exists and disabled the antivirus in that folder and the problem continues. I will keep trying.

That solved my issue. Check the build configuration: if you use LibTorch in Release, build the project in Release as well.