PyTorch support for Intel GPUs on Mac

This thread is for carrying on any discussion from:

It seems that Apple is choosing to leave Intel GPUs out of the PyTorch backend, when they could theoretically support them. For reference, on the other thread, I pointed out that Apple did the same thing with their TensorFlow backend. When it was released, I only owned an Intel Mac mini and could not run GPU-accelerated TF. Other people may feel the same way, even though M1 is more common now.

Their earliest (now archived) TF 2.4 backend with MLCompute crashed at runtime on the Mac mini because it allocated 40 GB of virtual memory. The second backend officially dropped support for Intel GPUs, which are still a large part of their consumer base.

Hi,

Sorry for the inaccurate answer on the previous post.

After some more digging, you are absolutely right: this is supported in theory.
The reason we disable it is that, in our experiments, we observed that these GPUs are not very powerful, and most users are better off using the CPU, which is actually faster.
So while many users do have these processors, most of them should not use them for ML workloads.

If you want to try this on your machine, you should be able to re-enable it relatively easily when building from source by simply making this if statement true: pytorch/MPSDevice.mm at 8571007017b61d793c406142bad6baeda331d00d · pytorch/pytorch · GitHub
Since we support only one device, you might want to make sure this does not shadow a more powerful AMD GPU (if you have two GPUs on that machine).
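
If you do flip that check and rebuild, a quick sanity check along these lines (just a sketch, not a benchmark) should tell you whether the backend comes up and whether results on the iGPU match the CPU:

```python
import torch

# Confirm that the custom build exposes the MPS backend and that it initializes.
print("MPS built:", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

if torch.backends.mps.is_available():
    # Run a small matmul on the GPU and cross-check the result against the CPU.
    a = torch.randn(512, 512)
    b = torch.randn(512, 512)
    gpu = (a.to("mps") @ b.to("mps")).cpu()
    cpu = a @ b
    print("max abs difference vs CPU:", (gpu - cpu).abs().max().item())
```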

I think the plan is to keep this disabled for now and only enable it if there is strong signal that people need this.

Curious to hear if that works for you!

I don't plan on compiling PyTorch myself, as that isn't my primary ML project, but I will inject my opinion here. I think it's a bad idea to prevent the user from accessing something. Most people won't have the patience or experience to compile PyTorch from source and use the compiled build products ergonomically. As someone who makes software for users, I think it should be up to the user to decide. That is especially true if someone happens to run a CPU-intensive process alongside their ML process, where the GPU would be the part of the chip that's free for computation. This would also make your PyTorch backend stand out from the TF backend.

I think it would be best to enable support from the start, then disable it if there's a strong signal from people to do so. I recommend that you put a warning in the PyTorch docs saying "this may be slow on Intel GPUs". Or, at the very least, put a large notice telling Intel Mac users how to compile PyTorch from source if they want to test an Intel Mac GPU.

Edit: It would also be weird if you have a script on macOS that tries to profile the GPU or use the GPU in some way, only to have the framework disable acceleration when you switch between your Apple and Intel Mac. Maybe you could provide a hidden or documented option to re-enable execution on the Intel device through the Python API. It should be extremely simple to add that feature to PyTorch: just a conditional statement surrounding the Objective-C code you cited. Although I'm not going to make a PR to do so myself.
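
To sketch what that could look like from the Python side (to be clear, the environment variable below is hypothetical and does not exist in PyTorch; the point is only that user scripts would stay identical across Apple and Intel Macs):

```python
import os
import torch

# HYPOTHETICAL opt-in flag -- not a real PyTorch option, shown only to
# illustrate how an Intel-iGPU override could be exposed to users:
#   PYTORCH_MPS_ALLOW_INTEL_GPU=1 python train.py
print("Intel iGPU opt-in requested:",
      os.environ.get("PYTORCH_MPS_ALLOW_INTEL_GPU", "0") == "1")

# The user script itself would not change between machines.
device = "mps" if torch.backends.mps.is_available() else "cpu"
x = torch.randn(1024, 1024, device=device)
print(device, (x @ x.T).mean().item())
```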

I concur with @philipturner. This should be built into the library itself. PyTorch isn't an end-user product. It should allow its developers to do what they want, especially when this situation could easily be controlled by a simple boolean check. Recompiling the library seems like overkill for this purpose.

One big reason why I'm dead set on using Intel GPUs is my personal project, the revival of Swift for TensorFlow (S4TF). This is another ML framework like PyTorch, but different in that it could theoretically run on iOS and could take drastically less time to compile. There are going to be two compile options. One is the old version, which uses the TensorFlow code base as a backend and is CPU-only on macOS. The other option uses a small custom code base, is GPU-only, and runs on iOS and macOS, among other platforms. The code base can be small because system libraries (MPS and MPSGraph) contain the kernels and graph compiler. Or, in the case of OpenCL, the kernel library is DLPrimitives, which is tiny.

For something that's GPU-only, it will be mandatory to use the Intel GPU on certain Macs. The upper limit of ALU utilization for matrix multiplications is around 90% on Intel GPUs. This means ~350 GFLOPS of usable compute for the Intel UHD 630. Compare that to the CPU, which is on the order of tens of GFLOPS. In theory, if all other bottlenecks are eliminated, most models would run faster on the Intel GPU than on the CPU.
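
For the UHD 630, the back-of-the-envelope arithmetic goes roughly as follows (assuming 24 EUs, 16 FP32 FLOPS per EU per clock, and about 1.0 GHz sustained under load; exact clocks vary by SKU):

```python
# Rough FP32 throughput estimate for an Intel UHD 630 (Gen9.5 GT2).
# Assumptions: 24 EUs, 16 FP32 FLOPS per EU per clock (two SIMD-4 FMA pipes),
# ~1.0 GHz sustained clock, ~90% ALU utilization for large matmuls.
eus = 24
flops_per_eu_per_clock = 16
sustained_clock_ghz = 1.0
matmul_utilization = 0.90

peak_gflops = eus * flops_per_eu_per_clock * sustained_clock_ghz
usable_gflops = peak_gflops * matmul_utilization
print(f"peak ~{peak_gflops:.0f} GFLOPS, usable ~{usable_gflops:.0f} GFLOPS")
# -> peak ~384 GFLOPS, usable ~346 GFLOPS, in line with the ~350 figure above
```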

The big "if" is whether bottlenecks are eliminated. I hypothesize that CPU overhead, or model configurations that underutilize the GPU, are why it runs slowly on PyTorch. For S4TF, I have quite extensive plans to reduce CPU overhead, leaving the only remaining problem being models that underutilize the GPU: for example, oddly shaped matrix multiplications or convolutions that can't use Winograd. Potentially, the entire Intel GPU architecture is terrible at ML, even the 10-TFLOPS Arc Alchemist. But that conclusion contradicts the fact that Intel invested money and time into writing XMX kernels for Intel GPUs in oneDNN.

We will have to wait and see why Intel GPUs are so slow for training, whether because of PyTorch's design or some other fundamental problem that can't be solved in an S4TF backend. Even if it is slower, I will definitely give the user the choice of using the CPU or GPU on Macs with only an Intel GPU.

@albanD I'm curious about how bad the Intel GPU was during internal benchmarks. Before getting into this, I have a few questions:

  1. Did you test only the 400-GFLOPS UHD 630, or also the 800-GFLOPS Iris Plus? The second processor has 35% of the FLOPS of a 7-core M1, with relatively similar ALU utilization during matrix multiplications. It should also have identical main memory bandwidth.
  2. Did you try using shared memory on Intel iGPUs, which would bring performance closer to Apple iGPUs?
  3. Did you say the Intel iGPU was slower than single-core CPU or multi-core CPU?

Let's say that someone can only use operators available to MPS. They can't process double-precision numbers either. They run every single operation on the GPU. Based on your benchmarks, what is the performance delta of ____ compared to single-core CPU?

  1. Apple integrated GPU
  2. Intel integrated GPU

Intel Macs don't have AMX, so CPU matrix multiplications are considerably slower. If you could provide both average and worst-case metrics, that would be just what I'm looking for.
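
For concreteness, this is roughly the kind of comparison I have in mind; a sketch only, assuming a build where MPS sees the iGPU and a PyTorch version recent enough to have torch.mps.synchronize():

```python
import time
import torch

torch.set_num_threads(1)  # pin the CPU baseline to a single core

def matmul_gflops(device, n=2048, iters=20):
    # Time square FP32 matmuls and convert to achieved GFLOPS.
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    for _ in range(3):  # warm-up
        a @ b
    if device == "mps":
        torch.mps.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    if device == "mps":
        torch.mps.synchronize()
    elapsed = time.perf_counter() - start
    return 2 * n**3 * iters / elapsed / 1e9

print("single-core CPU:", matmul_gflops("cpu"))
if torch.backends.mps.is_available():
    print("MPS GPU:", matmul_gflops("mps"))
```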

I'm asking this because a GPU backend I'm developing for machine learning is GPU-only. Removing CPU operations makes my code base smaller and more maintainable. In an era where exponential growth in processing power comes from greater parallelization, single-core CPU performance is becoming increasingly obsolete. That is why I'm pursuing the intense latency optimizations described in Sequential throughput of GPU execution. I have to make ML operators run as fast as possible on an Intel iGPU, because I cannot run them on the CPU.


I would argue that this is problematic because PyTorch is an end-user product. Most clients don't have the knowledge or experience with Git and the command line to compile PyTorch. They might not even know that Objective-C exists; Python may be their first programming language. Are we telling them that, because of their lack of experience, they don't have the right to test their iGPU for machine learning? Even if it is slower, they lack access to the appropriate tools for proving that it is slower and reproducing that proof themselves. These are concepts we take for granted in science, where reproducibility is mandatory.

This is something Apple benefits from, because the only other options are either (1) upgrade to an M1 Mac or (2) switch to PC and get a cheaper Nvidia GPU with tensor cores. Now what if they are a teenager who can't muster up the hundreds to over a thousand dollars to upgrade their hardware, because their parents aren't giving them that stuff for free? I have been in this exact position before. I had a powerful Apple GPU and made a whole research paper centered on it. But the M1-family GPU was on my iPhone, not my Mac.

We strongly demand that PyTorch support Intel GPUs on Mac !!!
We strongly urge that PyTorch support Intel GPUs on Mac !!!
We strongly request that PyTorch support Intel GPUs on Mac !!!

Hi,

Have you tried removing the if statement at pytorch/MPSDevice.mm at 0fe5367058a1d67134aee510ed81691cf9e61e33 · pytorch/pytorch · GitHub and running with that to see how well it performs?

I have built one and it works!!!
https://www.reddit.com/r/pytorch/comments/13np8ws/introducing_pytorch_with_intel_integrated/