I’m currently trying to export a model from PyTorch to ONNX. The model contains some layers that are not supported by the ONNX exporter, and I don’t want them inside the ONNX graph anyway, so I’m passing
keep_initializers_as_inputs=True to fool the tracer into treating my variables as actual inputs. Inside my custom nn.Module implementation’s
forward, I’ve added this:
```python
if torch.onnx.is_in_onnx_export():
    # During export, swap the unsupported parameters for detached placeholder
    # tensors. (The original snippet calls int(x.shape) without an index, which
    # raises a TypeError; x.shape[0]/x.shape[1] are assumed here.)
    gamma = torch.zeros([int(x.shape[0]), int(x.shape[1])]).clone().detach().requires_grad_(True)
    beta = torch.zeros([int(x.shape[0]), int(x.shape[1])]).clone().detach().requires_grad_(True)
else:
    ...  # normal training/eval path
```
but those initializers aren’t getting traced as graph inputs; instead they are traced as constants and baked into the graph. I’ve tried various approaches, but all of them failed. Can somebody hint at how to convert intermediate variables into graph inputs?