How can I include a loss on derivatives?

Hi!

I’m working on a project on molecule energies. The loss function consists of two parts: an energy term and a force term, where the forces are the derivatives of the energy with respect to the input coordinates. This is part of my code:

import torch
import ignite.engine

def energies_forces_trainer(container, optimizer, loss_fn):
    def train_and_store_loss(engine, batch):
        model.train()
        optimizer.zero_grad()
        inputs, targets = batch
        final = {'energies': [], 'forces': []}
        for s, c in inputs:
            results = model((s, c)) # c stands for coordinates
            _, energies = energy_shifter(results) # Some transformation
            forces = -torch.autograd.grad(energies.sum(), c)[0] # I want the derivatives of the energies wrt the input coordinates
            final['energies'].append(results[1])
            final['forces'].append(forces)
        final['energies'] = torch.cat(final['energies'])
        final['forces'] = torch.cat(final['forces'])
        loss = loss_fn('energies')(targets, final) + loss_fn('forces')(targets, final) # Energy loss and force loss
        loss.backward() # Error occurs here
        optimizer.step()
        return loss.item()
    return ignite.engine.Engine(train_and_store_loss)

This code produces a ‘Trying to backward through the graph a second time’ error. Does torch.autograd.grad accumulate grads, or does it free the graph? How can I include a loss on the derivatives?

Thank you!

Try:

forces = -torch.autograd.grad(energies.sum(), c, retain_graph=True, create_graph=True)[0]
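
For context, create_graph=True builds a graph for the derivative itself (and keeps the original graph, like retain_graph=True), so the force term stays differentiable when loss.backward() runs. Here is a minimal self-contained sketch of the pattern, with a toy torch.nn.Linear standing in for the original model and energy_shifter, and with the coordinates explicitly requiring grad:

import torch

# Toy stand-in for the model / energy_shifter (hypothetical, not the original ANI setup)
model = torch.nn.Linear(3, 1)

coords = torch.randn(8, 3, requires_grad=True)  # coordinates must require grad
target_energies = torch.randn(8)
target_forces = torch.randn(8, 3)

energies = model(coords).squeeze(-1)

# create_graph=True keeps the energy graph alive and makes the forces themselves differentiable
forces = -torch.autograd.grad(energies.sum(), coords, create_graph=True)[0]

mse = torch.nn.functional.mse_loss
loss = mse(energies, target_energies) + mse(forces, target_forces)
loss.backward()  # no 'backward through the graph a second time' error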
