Could not run 'aten::prelu' with arguments from the 'Metal' backend

I tried to run my model on iOS using the Metal backend. However, the PReLU module does not seem to be supported yet. I get:

Could not run 'aten::prelu' with arguments from the 'Metal' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. 'aten::prelu' is only available for these backends: [CPU, BackendSelect, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC].

It runs fine on CPU but no luck with Metal. Am I making some kind of stupid mistake, is there a workaround, or do I just have to wait for Metal support of aten::prelu?
Here is my code example (just a slight modification of the HelloWorldMetal example, which runs fine without the PReLU layer):

import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

class myModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.network = nn.Sequential(
            nn.MaxPool2d(8, 8),
            nn.PReLU(),
            nn.Flatten(),
            nn.Linear(2352, 10))
        
    def forward(self, xb):
        return self.network(xb)

model = myModel()
model.eval()
example = torch.rand(1, 3, 224, 224)  # dummy input for tracing
traced_script_module = torch.jit.trace(model, example)
# Rewrite the graph for the Metal GPU backend.
torchscript_model_optimized = optimize_for_mobile(traced_script_module, backend='metal')
torchscript_model_optimized._save_for_lite_interpreter("HelloWorld/HelloWorld/model/model2.pt")
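One workaround I'm considering is to express PReLU through more basic ops, since prelu(x) = relu(x) - weight * relu(-x). A minimal sketch of a drop-in replacement, assuming relu, neg, mul, and sub are all available on the Metal backend (relu commonly is; the others are an assumption on my part):

import torch
import torch.nn as nn

class MetalPReLU(nn.Module):
    # Drop-in nn.PReLU replacement that avoids aten::prelu:
    # prelu(x) = relu(x) - weight * relu(-x)
    def __init__(self, num_parameters=1, init=0.25):
        super().__init__()
        self.weight = nn.Parameter(torch.full((num_parameters,), init))

    def forward(self, x):
        # Broadcast a per-channel weight over (N, C, H, W) inputs.
        w = self.weight if self.weight.numel() == 1 else self.weight.view(1, -1, 1, 1)
        return torch.relu(x) - w * torch.relu(-x)

Swapping nn.PReLU() for MetalPReLU() in the Sequential above traces to the same math without ever emitting aten::prelu.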

Then I use the model in my app with:

#import "TorchModule.h"
#import <LibTorch-Lite-Nightly/LibTorch-Lite.h>

@implementation TorchModule {
 @protected
  torch::jit::mobile::Module _impl;
}

- (nullable instancetype)initWithFileAtPath:(NSString*)filePath {
  self = [super init];
  if (self) {
    try {
      _impl = torch::jit::_load_for_mobile(filePath.UTF8String);
    } catch (const std::exception& exception) {
      NSLog(@"%s", exception.what());
      return nil;
    }
  }
  return self;
}

- (NSArray<NSNumber*>*)predictImage:(void*)imageBuffer {
  try {
    c10::InferenceMode mode;
    // Copy the input into a Metal (GPU) tensor; the result must be moved
    // back to the CPU before reading it out.
    at::Tensor tensor = torch::from_blob(imageBuffer, {1, 3, 224, 224}, at::kFloat).metal();
    auto outputTensor = _impl.forward({tensor}).toTensor().cpu();
    float* floatBuffer = outputTensor.data_ptr<float>();
    if (!floatBuffer) {
      return nil;
    }
    NSMutableArray* results = [[NSMutableArray alloc] init];
    // The final Linear layer emits 10 values; loop over the actual element
    // count instead of the hard-coded 1000 from the classification example.
    for (int64_t i = 0; i < outputTensor.numel(); i++) {
      [results addObject:@(floatBuffer[i])];
    }
    return [results copy];
  } catch (const std::exception& exception) {
    NSLog(@"%s", exception.what());
  }
  return nil;
}

@end
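For reference, the output size can be sanity-checked on the CPU side (reusing the myModel class from the export script above) so the loop bound in predictImage: matches:

import torch

model = myModel().eval()
out = model(torch.rand(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 10]) -> read 10 floats, not 1000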

Thank you guys so much in advance!

Any update on this?

Is there an exhaustive list of operators that are supported by the Metal backend?
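I haven't found an official list; the closest thing seems to be the kernels under aten/src/ATen/native/metal/ops/ in the PyTorch source tree. You can also dump the operators your exported model actually calls and cross-check them by hand. A minimal sketch using the private torch.jit.mobile helpers (present in recent releases, but not a stable API):

import torch
from torch.jit.mobile import _load_for_lite_interpreter, _export_operator_list

# Load the exported lite model and list the operators it calls; any op
# without a Metal kernel will fail at runtime on device.
module = _load_for_lite_interpreter("HelloWorld/HelloWorld/model/model2.pt")
print(_export_operator_list(module))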