Hello,
I’m trying to run a FastAI model on a Mac M1 Max using PyTorch with the MPS backend. However, I’m encountering an error with an adaptive pooling layer: “Adaptive pool MPS: input sizes must be divisible by output sizes.”
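For reference, here is a minimal standalone reproduction of the error (not my actual fastai model, just a hypothetical illustration of the behaviour I'm seeing):

```python
import torch
import torch.nn as nn

# Adaptive pooling where the output size does not evenly divide the
# input size; on my setup this fails on the MPS device with
# "Adaptive pool MPS: input sizes must be divisible by output sizes."
pool = nn.AdaptiveAvgPool2d((7, 7))
x = torch.randn(1, 3, 100, 100, device="mps")  # 100 is not divisible by 7
out = pool(x)
```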
I understand this is due to a limitation of the MPS backend, but I’m wondering if there’s a workaround or solution that would allow me to use adaptive pooling without this restriction, while still utilizing the GPU on the M1 Max.
I’ve tried setting the PYTORCH_ENABLE_MPS_FALLBACK=1 environment variable to enable CPU fallback for unsupported operations, but I’m still encountering the error.
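For context, this is roughly how I’m enabling the fallback (a sketch of my setup, not my exact script; as far as I understand the variable needs to be set before torch is imported):

```python
import os

# Must be set before importing torch, otherwise the fallback is not picked up.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # noqa: E402
```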
Any advice or suggestions would be greatly appreciated :)
Kind regards,
Patrik