Exporting torchvision FCN model to ONNX and running in onnxjs - opset confusion

I’m trying to export an FCN from torchvision using the following code:

import torch
from torchvision import models

model = models.segmentation.fcn_resnet101(pretrained=True, progress=True,
                                          num_classes=21, aux_loss=None)
model.eval()
x = torch.randn(1, 3, 512, 512)
torch_out = model(x)
torch.onnx.export(model, x, "seg_rn.onnx", opset_version=11)

When exporting the model, I need at least opset 11 to support the way PyTorch's interpolation works, and the exported model's output confirms this when I run it in the Python ONNX runtime.

Running the model in the Python ONNX runtime works fine, but when I load it in a browser using onnxjs, like this:

var session = new InferenceSession();
const modelURL = "./models/seg_rn.onnx";
await session.loadModel(modelURL);

I get Uncaught (in promise) TypeError: cannot resolve operator 'Shape' with opsets: ai.onnx v11

If I make my own copies of the relevant parts of torchvision.models.segmentation, I can get rid of the Shape error (by specifying a static input shape and giving the interpolation a fixed resize factor), but then I get another error: Uncaught (in promise) TypeError: cannot resolve operator 'MaxPool' with opsets: ai.onnx v11. Ignoring the tests and exporting at opset v10 gives a model that loads, but produces incorrect output.
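The static-shape workaround amounts to replacing the size= argument computed from input.shape (which is what traces a Shape op into the graph) with a constant scale_factor. A sketch, assuming a fixed 512x512 input so the dilated ResNet-101 backbone (output stride 8) yields a 64x64 feature map; the tensor here is a fake classifier output:

```python
import torch
import torch.nn.functional as F

# Fake classifier output for a fixed 512x512 input: with an output
# stride of 8, the feature map is 64x64, so the final upsample is a
# constant x8 rather than a size read from the input tensor's shape.
feat = torch.randn(1, 21, 64, 64)

# Instead of F.interpolate(feat, size=input_shape[-2:], ...), which
# emits a Shape op on export, use a fixed scale factor:
out = F.interpolate(feat, scale_factor=8.0, mode="bilinear",
                    align_corners=False)
print(tuple(out.shape))  # (1, 21, 512, 512)
```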

What is going on? Is there any way forward, or am I basically stuck?

I think opset v13 might have all the ops I need (gleaned from recent onnxjs GitHub commits). How big a job is it to implement export for a new opset in PyTorch? I'm not afraid to get stuck in, I just don't really know where to start.