RuntimeError: attribute 'name' has the wrong type in PyTorch v2.1 with libc++

Hi,
I built the latest PyTorch (v2.1.0) with libc++. Now, whenever I run the torch._C._jit_pass_onnx_function_substitution(graph) pass, I get this error:

RuntimeError: required keyword attribute 'name' has the wrong type

Here is example code that reproduces it:

import torch

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3, padding=1)
    def forward(self, x):
        return self.conv(x)
    
model = MyModel()
inputs = torch.randn(1,3,224,224)

traced_model = torch.jit.trace(model, inputs)

graph = traced_model.graph
torch._C._jit_pass_onnx_function_substitution(graph)

I traced the error with a debugger, and it took me to line 133 of the functionCallSubstitution function in https://github.com/pytorch/pytorch/blob/3fcc5ff0d670747a267ad0645f4b2b64ce29234d/torch/csrc/jit/passes/onnx/function_substitution.cpp#L133

const std::string& name = cur->s(attr::name);

which in turn took me to

CREATE_ACCESSOR(String, s)

and from there to this getAttr function:

template <typename T>
typename T::ValueType& getAttr(Symbol name) const {
  AT_ASSERT(name.is_attr());
  auto it = findAttr(name, true);
  auto* child = dynamic_cast<T*>(it->get());
  if (child == nullptr) {
    // The reported "has the wrong type" error is raised here when the
    // dynamic_cast fails.
    throw IRAttributeError(name, true);
  }
  return child->value();
}

In the debugger, child ends up as a null pointer here: the dynamic_cast fails, which is what triggers the IRAttributeError.
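
To make sense of that, here is a minimal sketch (my own illustration, not PyTorch or libc++ source) of how libc++abi normally decides type equality during a dynamic_cast: it compares the addresses of the std::type_info objects, so if the RTTI symbol for the attribute type is emitted separately in two shared objects and never unified at load time, the comparison fails and the cast returns nullptr even though the types match.

// Illustration only: mimics libc++abi's strict type comparison; none of this
// is PyTorch code.
#include <iostream>
#include <typeinfo>

static bool same_type_strict(const std::type_info& x, const std::type_info& y) {
  // Strict mode: two type_info objects denote the same type only if they are
  // the same object. Duplicate RTTI symbols across shared libraries defeat
  // this check, and dynamic_cast then returns nullptr.
  return &x == &y;
}

int main() {
  // Within a single binary the RTTI addresses are unified, so this prints 1.
  std::cout << same_type_strict(typeid(int), typeid(int)) << '\n';
  return 0;
}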

The error only occurs when I build torch with libc++. This is the command I used to build PyTorch with libc++:

CC=clang CXX=clang++ CXXFLAGS="-stdlib=libc++"  CMAKE_PREFIX_PATH=/local/mnt/workspace/fh/Software/pytorch-install/lib CMAKE_CXX_FLAGS=-stdlib=libc++ CMAKE_EXE_LINKER_FLAGS=-stdlib=libc++ python3 setup.py develop

I’d really appreciate it if anyone could provide some insight into this issue.

Thank you

I don’t know if libc++ is supported or if e.g. libstdc++ would be needed, but you might want to create a GitHub issue so that the code owners can also take a look at it and chime in.
Also CC @malfet in case you are aware of the current toolchain limitations.

Thanks @ptrblck. I need to install torch with libc++ because the libstdc++ version throws a pybind11 exception when used in conjunction with TVM, but I could not find a definitive answer on whether PyTorch is supposed to work with libc++… I’ll open a GitHub issue for this.

You need to enable dynamic_fallback in libcxxabi. You can also rebuild the clang tools after defining it in libcxxabi/src/private_typeinfo.cpp (#define _LIBCXX_DYNAMIC_FALLBACK).
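
In case it helps others reading this, here is a rough sketch of what that #define changes (my approximation of the relevant logic in libcxxabi/src/private_typeinfo.cpp, so check your own LLVM checkout): with the fallback enabled, the type comparison used during dynamic_cast can fall back to comparing mangled names instead of relying only on type_info pointer identity.

// Sketch of the effect of _LIBCXX_DYNAMIC_FALLBACK; the real code in
// libcxxabi/src/private_typeinfo.cpp is elided and approximated here.
#define _LIBCXX_DYNAMIC_FALLBACK

#include <cstring>
#include <typeinfo>

static bool is_equal(const std::type_info* x, const std::type_info* y, bool use_strcmp) {
#ifndef _LIBCXX_DYNAMIC_FALLBACK
  (void)use_strcmp;
  return x == y;  // strict: pointer identity only
#else
  if (!use_strcmp)
    return x == y;
  // Fallback path: duplicate RTTI symbols still compare equal by mangled name.
  return std::strcmp(x->name(), y->name()) == 0;
#endif
}

int main() {
  // Trivial check within one binary; the fallback matters across shared objects.
  return is_equal(&typeid(int), &typeid(int), true) ? 0 : 1;
}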

Thank you very much @blrm. You are right, I did not build libcxxabi myself. So, would these steps do the job instead of the #define approach you mentioned (I’m not sure I fully understand it)?

  1. Rebuild my LLVM with clang by adding these flags:

-DLLVM_ENABLE_RUNTIMES='libcxx;libcxxabi;libunwind'
-DLIBCXXABI_USE_LLVM_UNWINDER=ON

  2. Pass

-DLIBCXXABI_USE_LLVM_UNWINDER=ON

to my PyTorch build command?

You need to do both. See “Vague linkage when using libcxxabi built torch in torch_glow” (pytorch/pytorch issue #65397 on GitHub) for more details.
