Apple Silicon support buggy?

Since M1 GPU support is now available (Introducing Accelerated PyTorch Training on Mac | PyTorch), I ran some experiments with different models. While everything seems to work on simple examples (MNIST feed-forward, CNN, …), I am running into problems with a more complex model, SwinIR (GitHub - JingyunLiang/SwinIR: SwinIR: Image Restoration Using Swin Transformer (official repository)). Running the code pulled from GitHub with `device='mps'` and the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` set (to fall back to the CPU for ops that MPS does not support yet) produces incorrect results:
When upscaling an image of Lincoln (which works correctly on CPU), it produces this instead:

Does anyone know what is going on, or what I could try to fix this?
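For reference, this is roughly how I select the device and enable the fallback (a minimal sketch; I set the environment variable before importing torch to be safe, since I am not sure at what point the backend reads it):

```python
import os

# Enable CPU fallback for ops not yet implemented on MPS.
# Set before importing torch to be safe.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

# Use the MPS device when available, otherwise fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Dummy input with an image-like shape, just to check the device works.
x = torch.randn(1, 3, 64, 64, device=device)
print(device, x.shape)
```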

Thanks for trying out the new backend! We have seen issues arising from it (e.g., [MPS] AdaptiveAvgPool2D doesn't accept input spatial size smaller than output shape · Issue #80732 · pytorch/pytorch · GitHub), so it would be helpful if you could isolate the discrepancy (e.g., vs. CPU) to a particular layer or input shape in order to file a bug.
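One way to do that isolation is to run the same seeded model on both devices and record every module's output with forward hooks. Here is a rough sketch (my own, not an official tool); `model_fn` is assumed to build a fresh model each call, so both copies get identical seeded weights:

```python
import torch
import torch.nn as nn


def compare_devices(model_fn, x, device="mps", atol=1e-4):
    """Run model_fn()'s model on CPU and on `device`, record each
    module's output via forward hooks, and report the first module
    whose outputs diverge beyond `atol`."""

    def record(name, store):
        def hook(mod, inp, out):
            if isinstance(out, torch.Tensor):
                store[name] = out.detach().to("cpu")
        return hook

    results = {}
    for dev in ("cpu", device):
        torch.manual_seed(0)  # identical init on both devices
        model = model_fn().to(dev).eval()
        store = {}
        handles = [m.register_forward_hook(record(n, store))
                   for n, m in model.named_modules() if n]
        with torch.no_grad():
            model(x.to(dev))
        for h in handles:
            h.remove()
        results[dev] = store

    # Walk the modules in execution order; stop at the first mismatch.
    for name in results["cpu"]:
        diff = (results["cpu"][name] - results[device][name]).abs().max().item()
        if diff > atol:
            print(f"first divergence at {name}: max abs diff {diff:.3e}")
            return name, diff
    print("no divergence above tolerance")
    return None, 0.0
```

To apply it to SwinIR you would pass a function that constructs the model the same way the repo's test script does, something like `compare_devices(lambda: build_swinir(args), x, device="mps")` (`build_swinir` is a hypothetical stand-in for however you instantiate the model). Halving the input size or swapping in submodules can then narrow it down to a specific op and shape to report.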