First, I am not a PyTorch user myself; I only build the wheels for others.
At the moment I build separate CPU and GPU wheels and suffix the GPU one with
_gpu. The drawback is that some users have difficulties installing the GPU wheel, since many other wheels depend on
torch rather than torch_gpu.
I wonder: if I build torch with combined GPU+CPU support, will users without GPUs encounter issues?
(Errors such as "no CUDA-capable device found" are acceptable when there is no GPU device and the code is not targeted at the CPU.)
I am aware that many GPU operations are encapsulated in
torch.cuda and shielded with torch.cuda.is_available() checks.
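As a sketch of the guard pattern I am relying on (the function name pick_device is my own illustration, not PyTorch API): torch.cuda.is_available() returns False on a machine without a usable CUDA driver, so code guarded this way should fall back to the CPU even with a combined CPU+GPU wheel installed.

```python
def pick_device() -> str:
    """Return 'cuda' when a usable GPU is present, else 'cpu'.

    Hypothetical helper illustrating the torch.cuda.is_available() guard;
    the try/except only makes this sketch runnable where torch is absent.
    """
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        # torch not installed at all -- treat as CPU-only for this sketch
        pass
    return "cpu"

print(pick_device())
```

Code written like this never touches the CUDA runtime on a CPU-only machine; only code that unconditionally calls into torch.cuda would raise the "no CUDA-capable device" error mentioned above.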
I ran the PyTorch test suite with a
torch_gpu wheel on a machine without a GPU, and most of the tests passed.
What are your thoughts on this? Do you see any caveats/issues?