RPC cannot run on Jetson Orin because of Orin's specific UUID format

When running the RPC demo on Jetson Orin, the following UUID error appears:

tensorpipe/channel/cuda_ipc/context_impl.cc:65 "uuidStr.substr(0, 4) != "GPU-"Couldn’t obtain valid UUID for GPU #0 from CUDA driver.

The UUID of the Jetson does not begin with the characters “GPU-” the way RTX-series UUIDs do, so the failure message appears immediately.

I think TensorPipe does not support Jetson because of this specific “GPU-” prefix check, and I do not know how to run RPC on a Jetson. How can I solve this? Thanks.

What does the UUID for this device show?
The TensorPipe repository was already archived in the past and while it received a few fixes unblocking compilation with recent CUDA toolkits and clang, it’s unclear if it will be archived again or if the repo will receive more fixes. Could you explain your use case as well?

Thanks for your quick reply. Here is more information about the issue.

  1. The UUID of an RTX-series GPU looks like:

     GPU 0: NVIDIA GeForce RTX 3060 (UUID: GPU-ceea231c-4257-7af7-6726-efcb8f)

  2. The UUID of the Orin looks like:

     GPU 0: Orin (nvgpu) (UUID: 36baf986-26a8-5222-9d8b-823b8d)

It is clear that the RTX-series UUID begins with the characters “GPU-” while the Orin's does not. Lines 65-67 of tensorpipe/channel/cuda_ipc/context_impl.cc read:

    TP_THROW_ASSERT_IF(uuidStr.substr(0, 4) != "GPU-")
        << "Couldn't obtain valid UUID for GPU #" << devIdx
        << " from CUDA driver. Got: " << uuidStr;

Because the Orin's UUID lacks the “GPU-” prefix, the demo crashes immediately on Orin.

That is all I know about the issue for now. Thanks.
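For reference, the failing condition is easy to reproduce outside TensorPipe. This is a minimal Python sketch (not TensorPipe code) of the same prefix test, applied to the two UUID strings shown above:

```python
# Minimal sketch (not TensorPipe code): the same "GPU-" prefix check that
# tensorpipe/channel/cuda_ipc/context_impl.cc applies to the UUID string
# reported by the CUDA driver.

def uuid_passes_tensorpipe_check(uuid_str: str) -> bool:
    # context_impl.cc throws if uuidStr.substr(0, 4) != "GPU-"
    return uuid_str[:4] == "GPU-"

rtx_uuid = "GPU-ceea231c-4257-7af7-6726-efcb8f"   # RTX 3060, as shown above
orin_uuid = "36baf986-26a8-5222-9d8b-823b8d"      # Orin (nvgpu), as shown above

print(uuid_passes_tensorpipe_check(rtx_uuid))   # True: RTX-style UUID passes
print(uuid_passes_tensorpipe_check(orin_uuid))  # False: Orin-style UUID trips the assert
```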

So my conclusion is that TensorPipe does not support the Orin use case right now.

My use case is distributed LLM inference across an RTX server and an Orin. That may not matter for the TensorPipe issue itself, because the demo crashed even when I ran a simple demo on the Orin alone.

@ptrblck Should I provide more information, or whom should I consult?