Gradient descent extracting data from simulations (returning numpy)

Hello everybody!

I am working on a project where I need to estimate parameters by comparing the data in a table with data extracted from a simulation program. This is an early stage of the project.
The problem has the form Y = mobility(E, rho, eta).

Every time I calculate Y, I pass different values of E, rho and eta expecting to get closer and closer to the target signal.

In a first approach I did it analytically (with mathematical formulas) and it works beautifully. The model converges and the results are good. The problem appears when I try to do it with data obtained from the simulation program.

I have checked that the problem is the interface between torch and numpy. In the analytical version, all the operations are done with torch variables: E, rho and eta are tensors and the output Y is also a tensor, so the gradient path works fine.
For the simulation program, I pass E, rho and eta as strings, ask the simulation to run, and receive the output as a numpy array. This is where the gradient path gets lost, and the model does not fit or converge.
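
To illustrate, here is a minimal sketch of what I believe is happening (the squaring is just a stand-in for the COMSOL call):

import numpy as np
import torch

E = torch.tensor(2.0, requires_grad=True)

# Round trip through the external program:
# tensor -> float/string -> simulation -> numpy -> brand-new tensor
y_np = np.array([E.item() ** 2])            # stand-in for the COMSOL call
y = torch.tensor(y_np, requires_grad=True)  # new leaf tensor with no history

y.sum().backward()
print(E.grad)  # None: the gradient never reaches E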

Do you know what I can do to avoid losing the gradient track when I run the simulations?

Here is the code:

import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints

posteriorValues = []

def model(x, y_obs):
    # Density definition
    rho_mean = pyro.param("rho_mean", torch.tensor(0.))
    rho_std = pyro.param("rho_std", torch.tensor(.1), constraint=constraints.positive)
    rho = pyro.sample("rho", dist.Normal(rho_mean, rho_std))
    # Damping loss factor definition
    eta_mean = pyro.param("eta_mean", torch.tensor(0.))
    eta_std = pyro.param("eta_std", torch.tensor(.1), constraint=constraints.positive)
    eta = pyro.sample("eta", dist.Normal(eta_mean, eta_std))
    # Young's modulus definition
    E_mean = pyro.param("E_mean", torch.tensor(0.))
    E_std = pyro.param("E_std", torch.tensor(.1), constraint=constraints.positive)
    E = pyro.sample("E", dist.Normal(E_mean, E_std))
    # Run the FEM simulation and compare against the observations
    comsol = solveComsol(E, rho, eta)
    error = comsol - y_obs

    with pyro.plate("data", len(y_obs)):
        y = pyro.sample("y", dist.Normal(error, 0.001), obs=torch.zeros(len(y_obs)))

    posteriorValues.append(comsol)
    return y
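
For context, the model is fit with a standard Pyro SVI loop, roughly along these lines (a sketch; the guide and optimiser choices here are illustrative, and since the model declares its own pyro.param sites, an AutoDelta guide over the sample sites is one simple choice):

from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoDelta
from pyro.optim import Adam

guide = AutoDelta(model)
svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())

for step in range(1000):
    loss = svi.step(x, y_obs)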

def solveComsol(E, rho, eta):
    # Update parameters (tensor -> string: autograd cannot see through this)
    modelComsol.parameter('youngs', str(E.item()) + ' [Pa]')
    modelComsol.parameter('density', str(rho.item()) + ' [kg/m^3]')
    modelComsol.parameter('eta', str(eta.item()))

    # Solve the COMSOL FEM study and read the probe back as a numpy array
    modelComsol.solve("Study 2")
    comsolResults = abs(modelComsol.evaluate('comp1.point2', dataset="Probe Solution 2"))
    # Wrapping in a fresh tensor does not reconnect the graph to E, rho, eta
    comsolResults = torch.tensor(comsolResults, requires_grad=True)
    return comsolResults

This is expected, since Autograd won't be able to track third-party operations. You would either need to stick to PyTorch tensors, or you could implement a custom autograd.Function using e.g. numpy and implement the backward pass manually.
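
A rough sketch of the second option, treating the solver as a black box and approximating its Jacobian with finite differences (run_comsol here is a hypothetical wrapper around your existing COMSOL calls; the step size eps is illustrative, and each backward pass costs three extra solves):

import numpy as np
import torch

def run_comsol(E, rho, eta):
    # Hypothetical plain-Python wrapper around your existing COMSOL calls;
    # takes floats, returns a numpy array.
    modelComsol.parameter('youngs', str(E) + ' [Pa]')
    modelComsol.parameter('density', str(rho) + ' [kg/m^3]')
    modelComsol.parameter('eta', str(eta))
    modelComsol.solve("Study 2")
    return np.abs(modelComsol.evaluate('comp1.point2', dataset="Probe Solution 2"))

class ComsolSolve(torch.autograd.Function):
    @staticmethod
    def forward(ctx, E, rho, eta):
        # Run the solver outside of autograd and remember the inputs/output
        y = run_comsol(E.item(), rho.item(), eta.item())
        ctx.save_for_backward(E, rho, eta)
        ctx.y = y
        return torch.from_numpy(y).to(E.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        E, rho, eta = ctx.saved_tensors
        eps = 1e-4  # finite-difference step; needs tuning per parameter scale
        grads = []
        for i in range(3):
            vals = [E.item(), rho.item(), eta.item()]
            vals[i] += eps
            dy = (run_comsol(*vals) - ctx.y) / eps  # forward difference dY/dp_i
            dy = torch.from_numpy(dy).to(E.dtype)
            grads.append((grad_output * dy).sum())  # chain rule: dL/dp_i
        return tuple(grads)

# In the model, replace the direct call with:
# comsol = ComsolSolve.apply(E, rho, eta)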