CPU/GPU/TPU and the beloved PyTorch

Is there anything besides CPU/GPU/TPU that PyTorch can run on?
A simple and clear question, but I cannot find the answer so far.

It can work on microcontrollers, and maybe on FPGAs (with some hacks).


Aha, is there a link describing where PyTorch can run?

For microcontrollers I have mostly seen ONNX-exported models or TorchScript models. You can check deepC. It consumes ONNX-exported models, which is not PyTorch directly, but you can use PyTorch's ONNX export to get there.

I was referring to examples more on the deployment side, where you can use C++ to deploy models.


@Kushaj it can work on ARM (see Does Pytorch support ARM processor(aarch64)?), which is the Raspberry Pi architecture. @smth confirmed that.

I am also interested in whether it can work on AMD.
I found the PyTorch usability on AMD hardware? link, but they don’t say whether it works or not.

I am also interested in whether it can work on DSPs. When I find documents like this one https://on-demand.gputechconf.com/gtc-cn/2019/pdf/CN9624/presentation.pdf I am completely lost. There is a mention but no follow-up.

I am also interested to know whether PyTorch can work on an FPGA. I found the Pytorch and FPGA link, but I am still very confused.

I am also interested in anything else it can run on.
Thanks

So what you are saying is that it can work on other devices only via ONNX?

I’d take a look at TVM. It can target FPGAs, and you can use PyTorch-JITed models as the basis.
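A rough sketch of that flow, hedged: it assumes TVM is installed and uses its `relay.frontend.from_pytorch` importer; the try/except just keeps the snippet runnable when TVM is absent, and the input name `"input0"` is an arbitrary choice:

```python
import torch

# TVM's PyTorch frontend consumes JITed (traced or scripted) modules.
model = torch.nn.Sequential(torch.nn.Linear(4, 2)).eval()
example = torch.randn(1, 4)
traced = torch.jit.trace(model, example)

try:
    from tvm import relay

    # Input names and shapes must be given explicitly to the importer.
    mod, params = relay.frontend.from_pytorch(traced, [("input0", (1, 4))])
    # mod can then be built for targets such as "llvm", "cuda", or a VTA board.
except ImportError:
    mod = None  # TVM not installed; the traced module is still usable on its own.
```

The key point is that TVM never sees your Python code, only the JITed graph.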


@tom, thanks. TVM is not a buzzword yet, at least for me; I think I had seen it before but had not realized the importance of the project.

From what I can see, one of its main components is VTA.

Certainly CUDA is an option; on the other hand, it is unclear to me how this plugs into LLVM, but OK. Metal must be the Apple GPU backend, like HIP is for AMD, which is missing from this diagram.

So you run graph optimization in VTA, and you just feed it the scripted export from PyTorch, right? So the whole idea with TVM is to export models from PyTorch via the JIT, right?
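To make "scripted export" concrete, here is a minimal sketch of what the JIT produces (a made-up one-op function, just to show the mechanics):

```python
import torch

@torch.jit.script
def clamp_relu(x: torch.Tensor) -> torch.Tensor:
    # TorchScript compiles this function into an intermediate
    # representation that downstream compilers can consume.
    return torch.clamp(x, min=0.0)

# .code shows the TorchScript source recovered from that IR.
print(clamp_relu.code)
```

That IR, rather than the Python source, is what gets handed off to a compiler stack.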

In some fast.ai lectures I heard the JIT being downplayed, and that biased my understanding of things.

In other words, what you are saying is that the JIT is the future, right?

Thanks

You can use PyTorch on AMD CPUs. It requires quite some work (if you do not want to use the official conda build). I will be getting my new PC with an AMD CPU in 7 days, and I will document the entire process of setting up PyTorch on it.

PyTorch can work on ARM. I did this a long time ago: I took a Raspberry Pi and installed PyTorch on it, and it worked. I installed from a wheel I found online (you can also compile from source).
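If you want to check which architecture you are on before hunting for wheels, a quick sketch (pure stdlib, so it runs anywhere):

```python
import platform

# 'aarch64' on 64-bit ARM boards such as a Raspberry Pi running a 64-bit
# OS, 'armv7l' on 32-bit ones, 'x86_64' on typical desktops.
arch = platform.machine()
print(arch)
```

The wheel you pick must match this string, otherwise pip will refuse to install it.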


Very proactive; I am glad you will do that and share the experience. Should we also expect some testing of AMD HIP for AMD GPUs?

Note that not all AMD GPUs work with HIP/ROCm: Vega and Polaris do; notably, Navi does not.
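One way to tell which GPU backend your PyTorch build was compiled against (assuming a reasonably recent build; `torch.version.hip` is `None` on CUDA-only and CPU-only builds):

```python
import torch

def gpu_backend() -> str:
    # ROCm builds carry a HIP version string; CUDA builds carry a CUDA one.
    if torch.version.hip is not None:
        return "rocm"
    if torch.version.cuda is not None:
        return "cuda"
    return "cpu-only"

print(gpu_backend())
```

Note this reports what the build supports, not whether a matching GPU is actually present; `torch.cuda.is_available()` answers the latter (it covers ROCm devices too).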

Best regards

Thomas


For the GPU, I personally think Nvidia has a monopoly (or you can say it is THE ONLY OPTION) if you want to keep your life easy.

For the CPU, I think Intel is the much easier option (you just install from conda and you don't have to worry about anything). While researching an AMD setup on Ubuntu, I have already seen some links that make me feel it would be a long day.

I think for the CPU it is a money vs. convenience thing.