So I’ve got an autoencoder that I’ve trained, and now I want to deploy it to a website.
I came across ONNX.js and liked it because it runs natively in the browser (saving me my much-needed student cloud credits).
However, I noticed that ONNX export requires a dummy input so that it can trace the graph, and this bakes a fixed input size into the model:
```python
dummy = torch.randn(1, 3, 1920, 1080)
torch.onnx.export(transformer, dummy, save_path, opset_version=11)
```
I’m almost certain that on the ONNX.js side the input has to match that exact shape.
Is there any way around this? I need dynamic input shapes.
Or should I just use Flask and host the model server-side?