Dynamic Input for ONNX.js using a PyTorch-trained model

So I’ve got an autoencoder that I’ve trained, and now I want to deploy it to a website.
I came across ONNX.js and liked it because it runs natively in the browser (saving me my much-needed student cloud credits :slight_smile:).

However, I noticed that torch.onnx.export requires a dummy input so that it can trace the graph, and that dummy input fixes the input size:

dummy = torch.randn(1, 3, 1920, 1080)
torch.onnx.export(transformer, dummy, save_path, opset_version=11)

I’m almost certain that on the ONNX.js side I need the input to be the same shape.
Is there any way around this? I need dynamic input shapes.
Or should I just use Flask and host the model instead?

Thanks!

I’m by no means an expert, but I think you can use the optional dynamic_axes argument to torch.onnx.export.

In the tutorial here (about a quarter of the way down) the example uses the dynamic_axes argument to have a dynamic batch size:

                  dynamic_axes={'input' : {0 : 'batch_size'},    # variable length axes
                                'output' : {0 : 'batch_size'}})

I assume the same can be done for the other axes, but I haven’t tried it; my use case has images that are guaranteed to be the same size, so I don’t need it.
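For what it’s worth, here’s a rough, untested sketch of what that might look like for your export call, marking the spatial axes as dynamic as well (transformer and save_path are just the names from your snippet, and 'input'/'output' are whatever names you pass via input_names/output_names):

import torch

# Untested sketch: dummy still has a concrete size for tracing,
# but the listed axes are exported as dynamic dimensions.
dummy = torch.randn(1, 3, 1920, 1080)
torch.onnx.export(transformer, dummy, save_path,
                  opset_version=11,
                  input_names=['input'],
                  output_names=['output'],
                  # mark the batch and both spatial dimensions as dynamic
                  dynamic_axes={'input':  {0: 'batch_size', 2: 'height', 3: 'width'},
                                'output': {0: 'batch_size', 2: 'height', 3: 'width'}})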

That said, I have seen some discussion suggesting this doesn’t work for the non-batch axes, so maybe someone with more experience can weigh in?