Segmentation fault when loading custom operator

I followed the Extending TorchScript tutorial and managed to build the warp_perspective operator. I tried the following code:

import torch
torch.ops.load_library("/path/to/libwarp_perspective.so")
print(torch.ops.my_ops.warp_perspective)

but I get a segmentation fault. I have installed PyTorch 1.2.0 for Python 3.6 with CUDA 10.0, and I have also downloaded the latest nightly build of libtorch. Any idea where the problem is coming from?

Just as a first check: did you change /path/to/libwarp_perspective.so to the actual path of libwarp_perspective.so?
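
For example, a quick sanity check along these lines (the path below is just a placeholder for the actual build output) would rule out a wrong path before calling into load_library:

import os
import torch

# Placeholder path: replace with the .so produced by your build
lib_path = "/path/to/libwarp_perspective.so"
assert os.path.exists(lib_path), "shared library not found: " + lib_path

torch.ops.load_library(lib_path)
print(torch.ops.my_ops.warp_perspective)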

Yeah, I have changed the path to point to my own build directory.

I have been having the same problem since migrating to PyTorch 1.2. Even exporting a single function from a single .cpp file that does nothing at all triggers the segfault.

This is what GDB says:

#0  0x00007ffff2c3119f in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&) ()
   from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#1  0x00007fff30f4a683 in c10::RegisterOperators::inferSchemaFromKernels_(std::string const&, c10::RegisterOperators::Options const&) ()
   from /home/zh217/.pyenv/versions/3.7.0/lib/python3.7/site-packages/torch/lib/libtorch.so
#2  0x00007fff30f4b0ca in c10::RegisterOperators::checkSchemaAndRegisterOp_(std::string const&, c10::RegisterOperators::Options&&) ()
   from /home/zh217/.pyenv/versions/3.7.0/lib/python3.7/site-packages/torch/lib/libtorch.so
#3  0x00007fffe409919c in c10::RegisterOperators::op(std::string const&, c10::RegisterOperators::Options&&) && (this=0x7fffffffce60, schemaOrName="infictc::make_lattice",
    options=...) at /home/zh217/libtorch-1.2-cuda10/libtorch/include/ATen/core/op_registration/op_registration.h:366
#4  0x00007fffe409da0f in c10::RegisterOperators::op<std::vector<at::Tensor, std::allocator<at::Tensor> > (at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, double)>(std::string const&, std::vector<at::Tensor, std::allocator<at::Tensor> > (*)(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, double), c10::RegisterOperators::Options&&) && (this=0x7fffffffce60, schemaOrName="infictc::make_lattice", func=
    0x7fffe4071560 <infictc::make_lattice(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, double)>, options=...)
    at /home/zh217/libtorch-1.2-cuda10/libtorch/include/ATen/core/op_registration/op_registration.h:420
#5  0x00007fffe409b5fc in c10::RegisterOperators::RegisterOperators<std::vector<at::Tensor, std::allocator<at::Tensor> > (*)(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, double)> (this=0x7fffffffce60, schemaOrName="infictc::make_lattice",
    func=@0x7fffffffce30: 0x7fffe4071560 <infictc::make_lattice(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, double)>, options=...)
    at /home/zh217/libtorch-1.2-cuda10/libtorch/include/ATen/core/op_registration/op_registration.h:379
#6  0x00007fffe4095fc2 in __static_initialization_and_destruction_0 (__initialize_p=1, __priority=65535) at /tmp/tmp.EOjruwuXRl/infictc/pytorch_export.cpp:8
#7  0x00007fffe4096471 in _GLOBAL__sub_I_pytorch_export.cpp(void) () at /tmp/tmp.EOjruwuXRl/infictc/pytorch_export.cpp:13
#8  0x00007ffff7de5733 in call_init (env=0x555555bac1f0, argv=0x7fffffffe168, argc=3, l=<optimized out>) at dl-init.c:72
#9  _dl_init (main_map=main_map@entry=0x555555b052e0, argc=3, argv=0x7fffffffe168, env=0x555555bac1f0) at dl-init.c:119
#10 0x00007ffff7dea1ff in dl_open_worker (a=a@entry=0x7fffffffd210) at dl-open.c:522
#11 0x00007ffff71872df in __GI__dl_catch_exception (exception=0x7fffffffd1f0, operate=0x7ffff7de9dc0 <dl_open_worker>, args=0x7fffffffd210) at dl-error-skeleton.c:196
#12 0x00007ffff7de97ca in _dl_open (file=0x7ffff6aa8b90 "/usr/local/lib/libinfictc.so.0.0.1", mode=-2147483646, caller_dlopen=0x7fff7d284698 <py_dl_open+136>,
    nsid=<optimized out>, argc=3, argv=<optimized out>, env=0x555555bac1f0) at dl-open.c:605
#13 0x00007ffff79b2f96 in dlopen_doit (a=a@entry=0x7fffffffd440) at dlopen.c:66
#14 0x00007ffff71872df in __GI__dl_catch_exception (exception=exception@entry=0x7fffffffd3e0, operate=0x7ffff79b2f40 <dlopen_doit>, args=0x7fffffffd440)
    at dl-error-skeleton.c:196
#15 0x00007ffff718736f in __GI__dl_catch_error (objname=0x555555b5bd80, errstring=0x555555b5bd88, mallocedp=0x555555b5bd78, operate=<optimized out>, args=<optimized out>)
    at dl-error-skeleton.c:215
#16 0x00007ffff79b3735 in _dlerror_run (operate=operate@entry=0x7ffff79b2f40 <dlopen_doit>, args=args@entry=0x7fffffffd440) at dlerror.c:162
#17 0x00007ffff79b3051 in __dlopen (file=<optimized out>, mode=<optimized out>) at dlopen.c:87
#18 0x00007fff7d284698 in py_dl_open (self=self@entry=0x7fff7d4bc368, args=args@entry=0x7ffff6de8448)
    at /tmp/python-build.20181217100434.36047/Python-3.7.0/Modules/_ctypes/callproc.c:1336
#19 0x00005555555ca1c6 in _PyMethodDef_RawFastCallKeywords (kwnames=0x0, nargs=0, args=0x9, self=0x7fff7d4bc368, method=<optimized out>) at Objects/call.c:694
#20 _PyCFunction_FastCallKeywords (func=0x7fff7d4b40d8, args=args@entry=0x7fff7dadc3f0, nargs=nargs@entry=2, kwnames=kwnames@entry=0x0) at Objects/call.c:730
#21 0x00005555555b832e in call_function (kwnames=0x0, oparg=2, pp_stack=<synthetic pointer>) at Python/ceval.c:4547
#22 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3117
#23 0x00005555556884ea in PyEval_EvalFrameEx (throwflag=0, f=0x7fff7dadc238) at Python/ceval.c:547
#24 _PyEval_EvalCodeWithName (_co=_co@entry=0x7fff7d49b780, globals=globals@entry=0x7fff7daa2a68, locals=locals@entry=0x0, args=args@entry=0x7fffffffd7e0, argcount=2,
    kwnames=kwnames@entry=0x0, kwargs=0x0, kwcount=0, kwstep=2, defs=0x7fff7d4c5dd0, defcount=4, kwdefs=0x0, closure=0x0, name=0x7ffff6e70270, qualname=0x7fff7d4adc30)
    at Python/ceval.c:3923
#25 0x00005555555c97ef in _PyFunction_FastCallDict (func=0x7fff7d4c3510, args=0x7fffffffd7e0, nargs=<optimized out>, kwargs=0x0) at Objects/call.c:376
#26 0x00005555555cc911 in _PyObject_FastCallDict (kwargs=0x0, nargs=2, args=0x7fffffffd7e0, callable=0x7fff7d4c3510) at Objects/call.c:98
#27 _PyObject_Call_Prepend (callable=callable@entry=0x7fff7d4c3510, obj=obj@entry=0x7ffff6a8aa20, args=args@entry=0x7ffff6e54198, kwargs=kwargs@entry=0x0)
    at Objects/call.c:904

@Iman Could you share the backtrace of the segfault you saw?

@zh217 Just to debug further: does the example at https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html, starting from "The second flavor of JIT compilation allows you to pass the source code for your custom TorchScript operator as a string. For this, use torch.utils.cpp_extension.load_inline", work for you?

Also, the following minimal Python script should work with PyTorch 1.2.0:

import torch
import torch.utils.cpp_extension

op_source = """
#include <torch/script.h>

torch::Tensor warp_perspective() {
  torch::Tensor output = torch::randn({3, 4});
  return output.clone();
}

static auto registry =
  torch::jit::RegisterOperators("my_ops::warp_perspective", &warp_perspective);
"""

torch.utils.cpp_extension.load_inline(
    name="warp_perspective",
    cpp_sources=op_source,
    is_python_module=False,
    verbose=True,
)

print(torch.ops.my_ops.warp_perspective())

This could be a C++ ABI issue. To ensure libtorch ABI compatibility, I recommend first trying the libtorch libraries that ship inside the PyTorch package installation folder. For example, if you installed PyTorch 1.2.0 with conda, try running something like cmake -DCMAKE_PREFIX_PATH=/data/miniconda3/envs/v1.2.0/lib/python3.7/site-packages/torch .. (change the package path as appropriate) for the cmake step.
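
If it helps, a small snippet like this (just a sketch; the exact layout may vary by install) prints the installed package directory you can pass as CMAKE_PREFIX_PATH:

import os
import torch

# Directory of the installed torch Python package. The bundled libtorch
# libraries live under <this directory>/lib (as seen in the backtrace above),
# so pointing cmake's CMAKE_PREFIX_PATH here keeps the custom op
# ABI-compatible with the Python build.
print(os.path.dirname(torch.__file__))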

I can confirm that it is an ABI issue. Using the libraries included with the Python installation makes loading in Python work fine.

So does that mean that if one wants to work with both C++ and Python, one should always use the libraries included with the Python installation? I haven't checked, and I am not sure whether the Python package includes everything the standalone libtorch distribution offers.

I can also confirm that using -DCMAKE_PREFIX_PATH=/virtualenv/lib/python3.7/site-packages/torch … instead of -DCMAKE_PREFIX_PATH=/path/to/nightly/libtorch … prevents the segmentation fault.
