PyTorch and Torch7 C++ interoperability

Greetings,
I have a C++ service on Linux that uses the Torch7 libraries and .t7 models.
I now need to add PyTorch support with .pt models, without dropping support for the legacy Torch7 models.
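For context, the new .pt path is intended to go through the standard libtorch TorchScript loader, roughly like this (the model path and input shape below are placeholders):

```cpp
#include <torch/script.h>  // libtorch TorchScript API (PyTorch 1.0.x)
#include <vector>

int main() {
    // Load the serialized TorchScript model ("model.pt" is a placeholder name).
    std::shared_ptr<torch::jit::script::Module> module =
        torch::jit::load("model.pt");

    // Run one forward pass with a dummy input; the shape is a placeholder.
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 3, 224, 224}));
    at::Tensor output = module->forward(inputs).toTensor();

    return 0;
}
```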
Unfortunately, I have run into memory errors caused by .so incompatibility: the same symbols, such as THCudaInit(), are exported by both libTHC.so (the Torch7 library) and libcaffe2_gpu.so (the PyTorch library). Presumably the dynamic linker binds every reference to whichever definition was loaded first, so one library's code ends up calling into the other's copy, which produces errors like
free(): invalid next size (normal): 0x000000000349d920
or
fatal: owning_ptr == NullType::singleton() || owning_ptr->refcount_.load() > 0 ASSERT FAILED at /pytorch/c10/util/intrusive_ptr.h:350, please report a bug to PyTorch. intrusive_ptr: Can only intrusive_ptr::reclaim() owning pointers that were created using intrusive_ptr::release(). (reclaim at /pytorch/c10/util/intrusive_ptr.h:350)
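For reference, the symbol overlap can be verified with a small check along these lines (the library paths are placeholders for the actual install locations; `nm -D` on each .so shows the same thing):

```cpp
#include <dlfcn.h>   // dlopen/dlsym; link with -ldl
#include <cstdio>

int main() {
    // Placeholder paths; the dependencies of each .so must be resolvable
    // via rpath or LD_LIBRARY_PATH for dlopen to succeed.
    const char* libs[] = {
        "/opt/torch/install/lib/libTHC.so",   // Torch7
        "/opt/libtorch/lib/libcaffe2_gpu.so"  // precompiled PyTorch 1.0.1
    };
    for (const char* path : libs) {
        void* handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
        if (!handle) {
            std::printf("%s: %s\n", path, dlerror());
            continue;
        }
        void* sym = dlsym(handle, "THCudaInit");
        std::printf("%s: THCudaInit %s\n", path, sym ? "exported" : "not found");
        dlclose(handle);
    }
    return 0;
}
```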

I see several possible solutions:

  1. It is probably possible to convert all .t7 models to .pt, but the models are trained by another department, so I would like to keep this option as a last resort.
  2. Rebuild Torch7 or PyTorch so that both link against the same copy of the common code that causes the conflicts.
    I use the precompiled PyTorch 1.0.1 (https://download.pytorch.org/libtorch/cu90/libtorch-shared-with-deps-latest.zip) and Torch7 from https://github.com/torch/distro. Maybe there is a specific commit of the Torch7 code that the PyTorch libraries are built from?

Is there any additional information or option I might have missed, or any advice for solving this problem?
Thanks in advance.
