Hi everyone,
I’m developing a C++ application using LibTorch 2.6.0+cu124 on Windows. I currently have 7 TorchScript models (2 YOLO detection + 5 segmentation) running on a shared NVIDIA GPU with 4 GB of VRAM, but I’m experiencing memory spikes (from a ~300 MB baseline up to ~1300 MB) that cause OOM errors.
My goal is to offload some models to the Intel iGPU to reduce NVIDIA GPU memory pressure.
My questions:
- Does LibTorch C++ support the Intel iGPU on Windows?
- Is there a device type like torch::Device(torch::kXPU), or something similar?
- Are there any other recommended backends for this?
- Can LibTorch drive the NVIDIA dGPU and the Intel iGPU simultaneously on Windows?
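To make the question concrete, here is a sketch of what I’m hoping is possible. I haven’t verified that torch::kXPU is exposed (or functional) in the Windows LibTorch build, so treat this as hypothetical:

```cpp
#include <torch/script.h>

// Hypothetical placement: heavy detection models stay on the NVIDIA dGPU,
// lighter segmentation models move to the Intel iGPU via the XPU device type.
// I'm unsure whether kXPU works in LibTorch 2.6.0 on Windows at all.
torch::Device dgpu(torch::kCUDA, 0);
torch::Device igpu(torch::kXPU, 0);   // does this exist / work here?

torch::jit::script::Module det = torch::jit::load("yolo_det.pt", dgpu);
torch::jit::script::Module seg = torch::jit::load("seg_model.pt");
seg.to(igpu);                         // offload to the iGPU, if supported
```

If kXPU isn’t the right route, pointers to whatever the supported mechanism is (Intel extension, OpenVINO backend, etc.) would also help.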
Any guidance, documentation links, or working examples would be greatly appreciated!
Best regards,
Dragan