Hello, I have the following net to perform a binary classification of some trajectories:

#### Architecture

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class DCR(nn.Module):
    def __init__(self, kemb_size, nvar, points, device):
        super().__init__()
        self.phis = load_phis_dataset()
        self.kemb = get_kernel_embedding(self.phis, nvar, samples=kemb_size).to(device)  # (concepts, kemb_size)
        _ = self.kemb.requires_grad_()
        self.fc1 = nn.Linear(kemb_size + (nvar * points), 64)
        self.fc2 = nn.Linear(64, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # concept truth degrees
        rhos = get_robustness(x, self.phis, time=False)  # (trajectories, concepts)
        _ = rhos.requires_grad_()
        # embed trajectories in kernel space
        traj_emb = torch.matmul(rhos, self.kemb)  # (trajectories, kemb_size)
        _ = traj_emb.requires_grad_()
        # combine info from traj_emb and x to predict the class
        x_new = x.view(x.size(0), -1)  # flatten x to (trajectories, nvar*points)
        combined_features = torch.cat((traj_emb, x_new), dim=1)  # (trajectories, kemb_size + nvar*points)
        output = self.fc1(combined_features)
        output = F.relu(output)
        output = self.fc2(output)
        output = self.sigmoid(output)
        return output.squeeze(1)
```
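To make the dimensions concrete, here is a toy version of the embedding-and-concatenation step, with random tensors standing in for the outputs of `get_robustness` and `get_kernel_embedding` (all sizes below are made up for illustration):

```python
import torch

trajectories, concepts, kemb_size = 5, 7, 16
nvar, points = 2, 10

rhos = torch.randn(trajectories, concepts)   # stand-in robustness matrix
kemb = torch.randn(concepts, kemb_size)      # stand-in kernel embedding
x = torch.randn(trajectories, nvar, points)  # stand-in trajectory batch

traj_emb = rhos @ kemb                                            # (trajectories, kemb_size)
combined = torch.cat((traj_emb, x.view(trajectories, -1)), dim=1)
print(combined.shape)  # torch.Size([5, 36]) = (trajectories, kemb_size + nvar*points)
```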

#### Training

```
import torch.nn as nn
import torch.optim as optim

model = DCR(kemb_size, nvar, points, device).to(device)
criterion = nn.BCELoss().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01)

model.train()
for epoch in range(10):
    epoch_loss = 0.0
    for batch, labels in train_loader:
        batch, labels = batch.to(device), labels.to(device)
        y_preds = model(batch)
        loss = criterion(y_preds, labels.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        epoch_loss += y_preds.shape[0] * loss.item()
    print(f'Epoch: {epoch}, Loss: {epoch_loss / len(train_loader.dataset):.5f}')
```

However, the training loss remains perfectly constant at every epoch, and the weights are not updated. What could I be doing wrong?
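To confirm the weights really are frozen, I compare a snapshot of the parameters before and after an optimizer step. A minimal, self-contained version of that check (using a stand-in `nn.Linear` instead of my DCR) looks like this:

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in model: a single linear layer instead of the full DCR.
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

before = copy.deepcopy(model.state_dict())  # snapshot of the parameters

x = torch.randn(8, 4)
loss = torch.sigmoid(model(x)).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Did any parameter actually change after the step?
changed = any(
    not torch.equal(before[name], param.detach())
    for name, param in model.named_parameters()
)
print(changed)  # True for this toy model; in my DCR setup it is False
```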

I tried adding some `.requires_grad_()` calls (visible in the code above), but that didn't help.

#### Some explanations

- `phis` = list of STL formulae
- `kemb` = kernel embedding of said STL formulae
- `rhos` = robustness of the STL formulae on the input trajectories

All tensor shapes are given as comments in the code.
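For context, "robustness" here refers to the quantitative semantics of STL: e.g., the robustness of "globally, x > 0" on a finite trace is the minimum signed margin over time. A toy illustration of what my `get_robustness` computes per formula (the trace values are made up):

```python
import torch

# Robustness of the STL formula "globally, x > 0" on a 1-D trace:
# quantitative semantics takes the minimum over time of the margin (x_t - 0).
trace = torch.tensor([0.5, 1.25, -0.25, 0.75])
rho = torch.min(trace - 0.0)
print(rho.item())  # -0.25 (negative => the formula is violated on this trace)
```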