Traceable model with Gaussian fit in preprocessing


We are trying to export a model for inference on an iPad. To do this we are using the torch.jit.trace() function to create a TorchScript module that can later be loaded in C++.
The model itself is rather simple, just a sequence of fully connected layers, and it works fine on its own.
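For context, the export path looks roughly like this (the layer sizes here are made up for illustration, not the real model's):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for our model: a small stack of fully connected layers
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
model.eval()

# torch.jit.trace records the operations executed on an example input
example_input = torch.randn(1, 8)
traced = torch.jit.trace(model, example_input)
traced.save("model.pt")  # can later be loaded from C++ with torch::jit::load
```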

The problem is that we need a preprocessing step that transforms a raw signal into a sum of Gaussian functions, from which we extract the values that serve as inputs to the layers of the model.
This Gaussian fit is easy to do with the SciPy library, for instance, but for full compatibility with TorchScript we can only use PyTorch.
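For reference, the kind of SciPy fit we would like to replicate is something like the following (a single-component sketch on made-up data; names and values are ours, not from the actual pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, mean, sigma):
    # One Gaussian component evaluated over the x axis
    return amplitude * np.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 200)
y = gaussian(x, 2.0, 0.5, 1.2) + 0.05 * rng.standard_normal(x.size)

# curve_fit returns the optimal parameters and their covariance matrix
params, _ = curve_fit(gaussian, x, y, p0=[1.0, 0.0, 1.0])
```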
As an alternative, we used the idea posted here:
Using PyTorch optimizers for nonlinear least squares curve fitting,
where an optimizer is used to find the best-fitting curve. This gave us satisfactory results, but when we trace the model, the use of a closure function and the call to backward() prevent a traced model from being returned. We have tried optimizers that do not require a closure, such as Adam, but we still need to perform the backward() pass.

```python
for _ in range(iterations):
    _fit.optimizer.zero_grad()
    # returns the Gaussian evaluated on this x axis with the current set of params
    output = _fit(_fit.x, _fit.init_params)
    loss = F.mse_loss(output, _fit.y)
    loss.backward()
    _fit.optimizer.step()
```
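In isolation (eager mode) the fitting loop itself does work; a self-contained version with hypothetical names and synthetic data:

```python
import torch
import torch.nn.functional as F

def gaussian(x, params):
    # params holds (amplitude, mean, sigma)
    amplitude, mean, sigma = params[0], params[1], params[2]
    return amplitude * torch.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

x = torch.linspace(-5.0, 5.0, 200)
y = gaussian(x, torch.tensor([2.0, 0.5, 1.2]))  # synthetic target signal

# Fit starting from a rough initial guess
params = torch.tensor([1.0, 0.0, 1.0], requires_grad=True)
optimizer = torch.optim.Adam([params], lr=0.05)

for _ in range(1000):
    optimizer.zero_grad()
    loss = F.mse_loss(gaussian(x, params), y)
    loss.backward()
    optimizer.step()
```

It is exactly this backward()/step() pair that torch.jit.trace rejects when the loop runs inside the model's forward pass.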

The error we get on the loss.backward() line is the following:

```
Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient
```

Do you have any suggestions on how to perform this Gaussian fit in a model that can be traced and exported to production?

Thank you very much,
Kind regards.
