PyTorch and FPGA

Hi Guys,

it’s time to start supporting PyTorch on FPGAs.

As is well known, Torch 7 has some OpenCL support, and in particular this could be useful as a route to FPGAs.

At the moment Intel seems to be the only vendor pushing deep learning on FPGAs, and the lack of framework support makes research in this area weak.

From the point of view of possible applications (medical, IoT, but above all self-driving), FPGAs appear to be strategic.

There is already an open discussion:

However, my intent is not to push for a full OpenCL port of PyTorch, but rather a simpler solution for using it on any FPGA device.

I would like to start with AlexNet: it turns out to be not very demanding computationally, with about 68% of its latency coming from DRAM loads.

In fact, what we try to speed up with an FPGA accelerator (as with a GPU) is the 2D convolution of each input feature map with its kernel weights. Max-pooling, which serves as a downsampling operation, mainly reduces the amount of downstream computation while providing some translation invariance, and does not itself require much compute power.
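To make that concrete, here is a minimal sketch in plain Python (the layer shapes are the usual illustrative AlexNet numbers, so treat them as assumptions) comparing the multiply-accumulate count of the first convolution with the comparison count of the max-pool that follows it:

```python
# Rough operation counts for AlexNet-style layers (illustrative shapes only).

def conv_macs(out_h, out_w, out_c, in_c, k):
    """Multiply-accumulates for a 2D convolution layer with k x k kernels."""
    return out_h * out_w * out_c * in_c * k * k

def pool_compares(out_h, out_w, c, k):
    """Comparisons for a k x k max-pooling layer (no multiplies at all)."""
    return out_h * out_w * c * (k * k - 1)

# AlexNet conv1: 3 -> 96 channels, 11x11 kernel, 55x55 output (assumed shapes)
print("conv1 MACs:     %d" % conv_macs(55, 55, 96, 3, 11))   # ~105 million
# The 3x3 max-pool after it, producing 27x27x96 (assumed shapes)
print("pool1 compares: %d" % pool_compares(27, 27, 96, 3))   # ~0.56 million
```

The convolution dominates by a couple of orders of magnitude, which is why an accelerator that only offloads the convolution layers is already worthwhile.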

A quick web search shows that Xilinx already provides some solutions for Caffe and MXNet.

I think the easy solution is to build a layer of code that converts a serialized PyTorch model into a binary that an OpenCL compiler can fit onto an FPGA device.
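As one possible starting point for that conversion layer (a sketch, not a finished design): export the trained model to an exchange format with `torch.onnx.export`, and then write or reuse a backend that lowers that graph to OpenCL kernels for the FPGA toolchain. The file name and input shape below are placeholders I picked for illustration:

```python
import torch
import torchvision

# Load a pretrained AlexNet and switch to inference mode
model = torchvision.models.alexnet(pretrained=True).eval()

# A dummy input fixes the shapes that the exported graph (and the FPGA design) will assume
dummy_input = torch.randn(1, 3, 224, 224)

# Export the network as a static graph; an OpenCL/FPGA backend would consume this file
torch.onnx.export(
    model,
    dummy_input,
    "alexnet.onnx",          # placeholder output path
    input_names=["input"],
    output_names=["output"],
)
```

The FPGA-specific work (generating and compiling the OpenCL kernels, scheduling the layers) would then stay outside PyTorch itself, which is the point of the "easy solution" above.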

Maybe it is not a good idea, but I think converting PyTorch itself to OpenCL would be more difficult.

I really want to have PyTorch running on FPGAs.

I want to start this project because I love PyTorch.
What are the simplest solutions we could use for FPGAs?

Best,

Nico
