Get device id of DLA

When running PyTorch inference with a ResNet model on the Jetson Xavier GPU, in my Python script I use -

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

so that I can later do something like inputs = inputs.to(device) to copy the inputs from host to device (CPU to GPU) before running the inference.
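Putting it together, a minimal sketch of my current GPU setup (with a torchvision ResNet-18 standing in for my actual model) looks like this -

import torch
import torchvision.models as models

# Pick the GPU if CUDA is available, otherwise fall back to the CPU
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Stand-in model; my actual ResNet is loaded and moved to the device the same way
model = models.resnet18(pretrained=True).eval().to(device)

# Dummy batch copied from host (CPU) to device (GPU) before inference
inputs = torch.randn(1, 3, 224, 224).to(device)

with torch.no_grad():
    outputs = model(inputs)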

Similarly, when I want to use the two DLAs on the Jetson for my inference, how should I find the device ID?

When I try -

print('Available devices ', torch.cuda.device_count())
print('Current cuda device ', torch.cuda.current_device())
print('GPU Device name:', torch.cuda.get_device_name(torch.cuda.current_device()))

I get

Available devices  1
Current cuda device  0
GPU Device name: Xavier

Trying something like

device = torch.device('dla:0' if torch.cuda.is_available() else 'cpu')

gives

RuntimeError: Expected one of cpu, cuda, xpu, mkldnn, opengl, opencl, ideep, hip, ve, ort, mlc, xla, lazy, vulkan, meta, hpu device type at start of device string: dla

I can verify that the DLA is present, as I can run TensorRT inference on it. Could somebody please guide me on how to do it via PyTorch?

The DLA is not a built-in device in PyTorch, so you could use it e.g. via TensorRT.
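As a rough sketch of that route (using the TensorRT Python API rather than PyTorch, and assuming you have exported your ResNet to a hypothetical resnet.onnx file; the exact builder calls depend on your TensorRT version), building an engine that targets DLA core 0 could look roughly like this. Note that the DLA only runs FP16/INT8 and unsupported layers need GPU fallback:

import tensorrt as trt

# Hypothetical ONNX export of the ResNet model
onnx_path = 'resnet.onnx'

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the ONNX model into a TensorRT network
with open(onnx_path, 'rb') as f:
    if not parser.parse(f.read()):
        raise RuntimeError('Failed to parse the ONNX file')

config = builder.create_builder_config()
# The DLA only supports FP16/INT8 precision, so enable FP16
config.set_flag(trt.BuilderFlag.FP16)
# Route layers to DLA core 0 (the second core would be DLA core 1)
config.default_device_type = trt.DeviceType.DLA
config.DLA_core = 0
# Allow layers the DLA cannot run to fall back to the GPU
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)

# Build and save the serialized engine for later inference
serialized_engine = builder.build_serialized_network(network, config)
with open('resnet_dla.engine', 'wb') as f:
    f.write(serialized_engine)

You would then deserialize and run the saved engine with the TensorRT runtime as usual; the DLA targeting happens in TensorRT, not through a torch.device.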


So, via PyTorch alone, there is no way to access the DLA.
So far, TensorRT seems to be the only framework through which I can access the DLA. Do you know of any other frameworks I could use? Also, is there a standard model you know of that will run on the DLA directly, without GPU fallback?