I just wanted to share something with you all because I thought it was pretty cool.
There’s a team here in Colorado with a new PyTorch add-on tool that I was asked to try. It lets anything that can run PyTorch connect to and use remote GPUs with no code changes.
For example, I ran PyTorch on a Raspberry Pi, and with this tool all the Llama model loading and GPU processing actually happened on an NVIDIA V100, but from my side it looked like everything was running on the Pi. It was pretty awesome.
The best part was that it took zero code changes to make this happen. Just a simple pip install of an add-on package.
These guys are just in alpha mode, but it might be worth checking out.
Hi everyone - I’m one of the guys on the MyTorch team.
MyTorch currently selects from a pool of GPUs on the backend. For now it’s our pool, but it could be a private or other public pool. As a client system makes a GPU request via standard PyTorch, that request is translated, transferred to the GPU, and executed, and the result of the call is sent back to the client.
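To make that round trip concrete, here is a toy sketch of the general pattern (serialize a request, execute it on a "remote" worker, ship the result back). All names here are hypothetical illustrations of the idea, not MyTorch’s actual protocol or API, and the "backend" is just a local function standing in for a networked GPU worker:

```python
import pickle

# Toy operation table the "GPU worker" knows how to run.
# A real backend would dispatch to actual GPU kernels.
OPS = {
    "matmul": lambda a, b: [
        [sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
        for row in a
    ],
}

def remote_execute(request_bytes):
    """Pretend remote worker: decode the request, run it, encode the result.
    In a real system these bytes would arrive over the network."""
    request = pickle.loads(request_bytes)
    result = OPS[request["op"]](*request["args"])
    return pickle.dumps(result)

def client_call(op, *args):
    """What an intercepted framework call might reduce to on the client:
    package the op and arguments, send, and unpack the reply."""
    request = pickle.dumps({"op": op, "args": args})
    return pickle.loads(remote_execute(request))

result = client_call("matmul", [[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(result)  # [[19, 22], [43, 50]]
```

The point of the design is that the client never needs to know where the op actually ran; it just sees the result come back as if the computation were local.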
Remember that this is in Alpha at the moment and has limited flexibility. Beta will let you set optional env vars or values to specify a particular GPU type. Additionally, we are only supporting a few Llama models for Alpha, though that is moving fast: models themselves are not a limitation, since the code does not care which model you want to use.
The code is largely GPU agnostic. The Alpha pool will have MI100s or V100s, we’re just about to enable Neuron devices from AWS, and soon after that TPUs or GPUs from Google. The higher-end devices will eventually move to a pay-per-time or pay-per-token model, but at a discounted price, as we are working with several of these cloud vendors to get discounts. For now, everything is totally free on the lower-end devices. I think there will always be a free option, but I can’t say that for sure.
We are currently taking on a few dozen Alpha users just to hear what folks think. Totally free, no credit card or anything like that. Just provide an email address and we send you a token. There’s a short video on the site; it’s a super simple install, just a pip package.
Where is this going? It’s an edge-device ML option, or “Internet of Intelligent Things” (IoIT). Further down the road there will be some very special sauce, but we are not there yet. For now we just want to hear what others think.