Extracting Individual Tensors from a PyTorch .pt File for Custom CUDA/C++ Inference

Hello,

I am training a model in PyTorch and would like to run it with a custom CUDA/C++ library I’ve written for fast inference. I am aware of solutions that use libtorch to load and run pre-trained PyTorch models in C++ (e.g., this discussion). However, for my use case I want to bypass libtorch entirely and extract the individual tensors directly from the .pt file so I can load them into my own library.
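
To clarify what I mean by "extracting the individual tensors": one workaround would be to re-export everything from Python into raw binary files plus a small manifest, and read those from C++. A minimal sketch is below (the file names are placeholders, and it assumes the .pt file holds a state_dict rather than a full pickled module). What I would prefer, though, is to read the tensors straight out of the .pt file itself.

```python
import torch

# Illustrative workaround only: dump every tensor in the checkpoint to a raw
# binary file, plus a manifest recording name, dtype, and shape so a C++
# loader knows how to interpret each blob.
# "model.pt" is a placeholder path; the checkpoint is assumed to be a state_dict.
state_dict = torch.load("model.pt", map_location="cpu")

with open("manifest.txt", "w") as manifest:
    for name, tensor in state_dict.items():
        t = tensor.contiguous()
        t.numpy().tofile(f"{name}.bin")  # raw bytes, no header
        manifest.write(f"{name} {t.dtype} {tuple(t.shape)}\n")
```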

Is there a library or utility that can parse a .pt file and extract its tensor data, or would I need to implement this myself? Any guidance or references would be greatly appreciated.
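
For what it's worth, the .pt file produced by torch.save appears to be an ordinary zip archive, so presumably the tensor storages are sitting in there as raw blobs next to a pickled index. For example (assuming the default zip-based serialization used by recent PyTorch versions; "model.pt" is again a placeholder path):

```python
import zipfile

# List the entries inside the checkpoint archive. With the default zip-based
# format there is typically a pickled index (data.pkl) plus raw storage blobs
# under a data/ subdirectory.
with zipfile.ZipFile("model.pt") as archive:
    for entry in archive.namelist():
        print(entry)
```

If an existing parser for this layout (the pickle plus the storage blobs) is available, that is exactly what I am looking for.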

Thank you!