LIBTORCH/C++ and Unreal Engine 4 runtime error: Variable is optimized away and not available

I am trying to run my BERT model in Unreal Engine with LibTorch. However, at runtime I get the following error when I run the forward function: Variable is optimized away and not available.

I use: LibTorch 1.10, Visual Studio C++ 2019, and Unreal Engine 4 version 4.26.2. Note: if I use only LibTorch 1.10 and Visual Studio C++ 2019 in Release mode, my program runs fine.

Please, could someone give me a suggestion on how to overcome this runtime error?

This is my function where the error occurs:

string predict(string question, string paragraph) {
	vector<torch::jit::IValue> inputs;
	vector<int> input_ids;
	vector<int> segment_ids;
	vector<int> attention_mask;
	vector<string> tokens;
	int start_index;
	int end_index;
	string answer;

	preprocess(question, paragraph, input_ids, segment_ids, attention_mask, tokens);

	int size = input_ids.size() > max_seq_length ? max_seq_length : input_ids.size();
	// from_blob does not take ownership of the buffer, so clone() the tensors
	// before the backing vectors can go out of scope.
	torch::Tensor tensor_input_ids = torch::from_blob(input_ids.data(), c10::IntArrayRef{ 1, size }, torch::kInt).clone().to(torch::kInt64);
	torch::Tensor tensor_attention_mask = torch::from_blob(attention_mask.data(), c10::IntArrayRef{ 1, size }, torch::kInt).clone().to(torch::kInt64);
	torch::Tensor tensor_segment_ids = torch::from_blob(segment_ids.data(), c10::IntArrayRef{ 1, size }, torch::kInt).clone().to(torch::kInt64);

	inputs.push_back(tensor_input_ids);
	inputs.push_back(tensor_attention_mask);
	inputs.push_back(tensor_segment_ids);

	// Call forward once and reuse the result instead of running it three times.
	auto outputs = bert.forward(inputs).toTuple();
	torch::Tensor start_scores = outputs->elements()[0].toTensor();
	torch::Tensor end_scores = outputs->elements()[1].toTensor();

	start_index = torch::argmax(start_scores).item().toInt();
	end_index = torch::argmax(end_scores).item().toInt();

	for (int i = start_index; i < end_index + 1; i++) {
		if (tokens[i].substr(0, 2) == "##") {
			answer = answer + tokens[i].substr(2, tokens[i].length() - 2);
		}
		else {
			answer = answer + " " + tokens[i];
		}
	}

	if (answer.empty() || answer.substr(1, 6) == "[CLS]") {
		answer = "Unable to find the answer to your question.";
		// answer = "Sorry, I cannot answer your question.";
	}

	return answer;
}


I can’t decipher the image or tell whether the error message is raised there.
However, could you explain the error a bit more? Are you seeing this error only while trying to debug the application or while running the application?
I would guess that some Release optimizations are used and the compiler removed unused variables or code snippets, which would yield undefined behavior.
E.g. I’ve been debugging this issue which was hit by violating the strict aliasing rule and the compiler was happy to optimize the function call away if -fstrict-aliasing was used.

Thanks ptrblck,

  1. I run my project with the Local Windows Debugger
  2. I get a runtime error in the forward function
  3. I press the button continue and I get another message
  4. I press the button continue again and my program runs normally

Message of step 3:

In case you are concerned about the inability to display local variables: rebuild your application with debug symbols, as not all local variables are available during debugging.
To disable compiler optimizations during the build and add debug symbols, rebuild PyTorch with DEBUG=1 python setup.py install.
I don’t know what the error message means.

I overcame the error by placing this at the top of the file: #pragma optimize ("", off)

However, errors continue. Using the Debug files of LibTorch and the DebugGame configuration of Unreal Engine, I realized that the error starts from the loading of the model.

First, I ran a test by serializing a sample LibTorch model.

import torch
import torchvision

# An instance of your model.
model = torchvision.models.resnet18()

# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)

# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("traced_resnet_model.pt")

Afterwards, I load the model in the constructor of my application in Unreal Engine, and I get an error in fopen.cpp when I try to load the model:

string name_model = "C:/Users/Jorge/PycharmProjects/CreateTestModels/";
torch::jit::script::Module module = torch::jit::load(name_model.c_str());

Please, could someone give me a suggestion on how to overcome this new runtime error?

Expression: file_name != nullptr points to an error during file loading, so you would have to check that the file name is valid and that the file exists at the specified location.

Could that be similar to Model cannot be loaded with torch::jit::load? The model exists but cannot be loaded at the moment in my test as well.

Yes, it is similar. I have verified that the file exists and disabled the antivirus in that folder and the problem continues. I will keep trying.

Solved my issue. Check the build configuration: if you use LibTorch in Release, build the project in Release as well.

Hi, I’m facing the same issue and it’s not exactly clear to me how you solved it. I’m using the LibTorch release build and building the project in the Development configuration from VS. Everything works fine except when trying to call the forward method. How have you solved it?

Hi, sadly LibTorch and UE4 have several integration problems. Strictly speaking, I have not been able to resolve the integration, but I can share my progress with you in case it is helpful.

  1. I first tried to directly integrate LibTorch and UE4. I get the memory error in the forward function; however, if I ignore the exception I can continue normally, which may be useful if you want to do a demo or proof of concept. For example, my demo is this: Question Answering: BERT(Bidirectional Encoder Representation with Transformers) with LibTorch & UE4 - YouTube. The link where I describe the steps is here: How to properly integrate Libtorch (Pytorch) and Unreal Engine 4? - AI (Artificial Intelliegence) - Unreal Engine Forums

  2. Later, I tried to do it through DLLs, but it didn’t work. However, with DLLs it is possible to catch the exception and learn a little more about the problem. The link where I describe the problem is here: Integrate Libtorch into Unreal Engine 4: _ivalue_INTERNAL ASSERT FAILED · Issue #69425 · pytorch/pytorch · GitHub

  3. The next step is a more elaborate solution: integrating LibTorch as a plugin. For example: GitHub - NeuralVFX/basic-unreal-libtorch-plugin: A "Hello World" for running LibTorch inside Unreal Engine or UE4 Plugin to execute trained PyTorch modules | BestOfCpp.

In case of success, I will gladly share it here :).


Thank you! Sad to know this is an unresolved issue. I’m trying to integrate LibTorch as a plugin, yet I’m having the same problem and can’t figure out how it is solved in the two repos you mentioned. The issue is also present when trying to perform a forward over an nn::Module, so I guess it’s related to how either torch or UE deals with memory…