Hello there! I believe I'm taking an unconventional approach to training a PyTorch NN model, and everything works except that the model weights never update.

I am working with college football data. Each week, games are played, and at the end of the week there are rankings to determine the best 25 teams. I would like to pass an NN model as an input to this simulation, and the output is just the total loss value that the simulation produced.

The idea is: pass in an NN model, get the loss value of the simulation, update the NN model based on that loss value, and keep looping so the NN adjusts to lower the total loss of the simulation.

So there is no predefined X, unlike almost every tutorial out there. Here is my example:

```
import torch
import torch.nn as nn

from Simulation import Simulation


class MLP(nn.Module):
    '''
    Multilayer Perceptron for regression.
    '''
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(14, 7),
            nn.ReLU(),
            nn.Linear(7, 2)
        )

    def forward(self, x):
        '''
        Forward pass
        '''
        return self.layers(x)


nn_model = MLP()


def loss_fn(simulation):
    """
    Takes in a simulation and calculates the total loss value
    """
    simulation.run()
    loss_vals = torch.tensor(simulation.loss_values, dtype=torch.float32, requires_grad=True)
    return loss_vals.sum()


# Initialize optimizer
learning_rate = 0.01
optim = torch.optim.SGD(nn_model.parameters(), lr=learning_rate)

for epoch in range(100):
    # Clear optimizer gradients
    optim.zero_grad()
    simulation = Simulation(nn_model)
    simulation_loss = loss_fn(simulation)
    # Simulation loss backpropagation
    simulation_loss.backward()
    # Update model parameters
    optim.step()
```

Currently, when I run this, nn_model does not get adjusted at all and the loss value stays constant. I have a feeling I am misusing gradients, since I recreate the loss tensor each time I run the simulation. I don't see any other examples like this online, but I think it can work?
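To illustrate what I mean about recreating the loss values: here is a minimal standalone sketch (using a plain `nn.Linear` instead of my actual Simulation class) showing that wrapping already-computed numbers in a fresh `torch.tensor(..., requires_grad=True)` creates a new leaf tensor with no history, so `backward()` on it never reaches the model parameters, while backpropagating through the model's original output does:

```python
import torch
import torch.nn as nn

# Stand-in for the model inside the simulation (hypothetical, not my real setup)
model = nn.Linear(2, 1)
out = model(torch.ones(1, 2)).sum()

# Rebuilding the value as a new tensor detaches it from the graph:
detached = torch.tensor([out.item()], requires_grad=True)
detached.sum().backward()
print(model.weight.grad)  # the model never receives a gradient this way

# Backpropagating through the original output does populate .grad:
out.backward()
print(model.weight.grad)  # now a real gradient tensor
```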

Can anyone help?