Libtorch on Jupyter using Cling C++

Hello,
Out of curiosity, and as far as I know no one has done this before, I tried using Libtorch directly in Jupyter via Cling (https://github.com/QuantScientist/DarkTorch/blob/master/cpp_pytorch.ipynb).
I was partially successful and was able to load a CNN model; however, this line:

auto input_tensor = torch::from_blob(m.data, {1, 224, 224, kCHANNELS});

Raises the following exception, which to the best of my understanding is unrelated to Libtorch:

IncrementalExecutor::executeFunction: symbol '__emutls_v._ZSt11__once_call' unresolved while linking [cling interface function]!
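
For context, the surrounding notebook cell does roughly the following (a minimal sketch rather than the exact notebook code; the image path is a placeholder and kCHANNELS is 3):

#include <opencv2/opencv.hpp>
#include <torch/torch.h>

constexpr int kCHANNELS = 3;

// Load and preprocess the image with OpenCV (file name is a placeholder).
cv::Mat m = cv::imread("input.jpg");
cv::resize(m, m, cv::Size(224, 224));
m.convertTo(m, CV_32FC3, 1.0f / 255.0f);  // float32, scaled to [0, 1]

// Wrap the cv::Mat memory in an NHWC tensor -- this is the call that fails under Cling.
auto input_tensor = torch::from_blob(m.data, {1, 224, 224, kCHANNELS});
input_tensor = input_tensor.permute({0, 3, 1, 2});  // NHWC -> NCHW for the model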

Any help would be appreciated.

I think @krshrimali is working on something similar and might have an idea.

Thanks for the mention, @ptrblck.

@dambo - you can check out my latest blog post on using Xeus-Cling here: https://krshrimali.github.io/Setting-Up-Xeus-Cling-Libtorch-OpenCV/. I’ve shown how to solve this issue in the later part of the post.
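
In short, the setup there loads Libtorch into the interpreter with Cling pragmas before including any torch headers. A sketch (the paths are placeholders for wherever libtorch is extracted, and the exact library names can differ between Libtorch releases):

#pragma cling add_include_path("/path/to/libtorch/include")
#pragma cling add_include_path("/path/to/libtorch/include/torch/csrc/api/include")
#pragma cling add_library_path("/path/to/libtorch/lib")
#pragma cling load("libtorch")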

Let me know if the problem still persists.

Thanks!

@ptrblck thanks!
@krshrimali, what an amazing repository you have there! I noticed that you are using Mac OS X, so the imports are quite different; by inspecting them, I couldn’t find any reason why I get this error in my Jupyter notebook. Did you take a look at my code?

Thanks,

Hi @dambo

Thanks! Glad you liked it.

Can you please tell me whether you’re using a conda virtual environment for Xeus-Cling? If not, could you install xeus-cling in a fresh virtual environment and double-check? I suspect the error might be caused by conflicts with existing libraries.

Also, if the above doesn’t work, I suggest changing set(TORCH_CXX_FLAGS "-D_GLIBCXX_USE_CXX11_ABI=") to set(TORCH_CXX_FLAGS "-D_GLIBCXX_USE_CXX11_ABI=1") and reloading the libraries and header files. This, I suspect, might solve a C++ ABI compatibility issue, if there is one.

Thanks!

Hello,
I am using conda; see my Docker setup here:

Everything runs perfectly when I don’t use Cling (see https://github.com/QuantScientist/DarkTorch/tree/master/001_verify) on the same Docker image; I have already resolved everything related to set(TORCH_CXX_FLAGS "-D_GLIBCXX_USE_CXX11_ABI=1").

This happens only when I use Cling. Is there a way to force Cling to be ABI compatible, or do I have to compile from scratch?
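
For what it’s worth, I can at least check which ABI the interpreter itself uses, since libstdc++ exposes it as a macro; a quick cell like this prints it (a sketch):

#include <iostream>
// 1 = new C++11 ABI, 0 = old pre-C++11 ABI (defined by the libstdc++ headers)
std::cout << _GLIBCXX_USE_CXX11_ABI << std::endl;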

Also, would loading the model.pt even work if there were an ABI compatibility problem?
Thanks,

Hi @dambo

I checked the Dockerfile, and I suggest creating a conda environment and then installing the required libraries into it. I had the same problem, and installing into a fresh virtual environment worked.

I have verified this on the Dockerfile as well. There’s already a Dockerfile created, which I’ll release in a couple of days after some testing.

Please let me know if using a conda environment doesn’t work. I have also tested this on Linux systems, and it works well. The last resort would be to compile from scratch and see if that works (as also mentioned in some of the GitHub issues).

Thanks

Hi @dambo

I just had a quick look and was able to reproduce the problem. I could solve it by building PyTorch from source. I also had to change the _GLIBCXX_USE_CXX11_ABI flag from 1 to 0 (the leading D in -D_GLIBCXX_USE_CXX11_ABI is just the compiler’s define prefix).

System: Ubuntu 16.04
CUDA: None (the build was done without CUDA)
Instructions to build from source (the guide I followed): https://medium.com/repro-repo/build-pytorch-from-source-on-ubuntu-18-04-1c5556ca8fbf
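
After rebuilding, a quick smoke test for the failing path is to call from_blob on a plain array and print the result (a sketch):

#include <torch/torch.h>
#include <iostream>

float data[6] = {0, 1, 2, 3, 4, 5};
// Exercises the same symbol that was unresolved before; this should now
// print a 2x3 tensor instead of raising the linker error.
auto t = torch::from_blob(data, {2, 3});
std::cout << t << std::endl;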

Let me know if this doesn’t solve your problem.
Thanks!

Hi @krshrimali, sorry for coming late to this discussion. I haven’t been able to solve this problem. Just to clarify, I have to do something like this (I use Ubuntu 18.04):
export D_GLIBCXX_USE_CXX11_ABI=1
before compiling the PyTorch source code, right?
Many thanks!

Exact same issue here: using xeus-cling in a conda environment with a manually built PyTorch. It mostly seems to work, except for the from_blob function, which fails with the exact same error:

IncrementalExecutor::executeFunction: symbol '__emutls_v._ZSt11__once_call' unresolved while linking [cling interface function]!


Just in case anyone is reading this: I updated my Libtorch CPU version to 1.7, and it worked smoothly with Cling C++.