NotImplementedError when calling .backward() on BCEWithLogitsLoss result

Hi all,

I’m relatively new to PyTorch and have been having an issue with performing back-propagation that I just don’t know how to solve.

First of all, my code for calculating the loss and then performing back-propagation:

    def discriminator_loss(self, real_samples, fake_samples):
        criterion = BCEWithLogitsLoss()

        real_loss = criterion(self.discriminator(real_samples), ones((1, 1)).to(self.gpu_id))
        fake_loss = criterion(self.discriminator(fake_samples), zeros((1, 1)).to(self.gpu_id))

        return (real_loss + fake_loss) / 2


    def optimise_discriminator(self, input, targets):
        # Load input and targets into GPU memory.
        input = input.cuda()
        targets = [target.cuda() for target in targets]

        # Pass input through the generator.
        outputs = self.generator.forward(input)

        # Get the loss.
        loss = self.discriminator_loss(targets, outputs)

        # Optimise the discriminator.
        loss.backward()

        # Return the loss.
        return loss.item()

The problem occurs when I call the backward method on the output of the discriminator_loss function. I get the following error:

Traceback (most recent call last):
  File "", line 84, in <module>
  File "", line 74, in main
    generator_loss = model.optimise_discriminator(input, targets)
  File "/nobackup/sccmho/projects/DBT2FFDM/models/", line 101, in optimise_discriminator
  File "/nobackup/sccmho/.conda/envs/pix2pix/lib/python3.7/site-packages/torch/", line 166, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/nobackup/sccmho/.conda/envs/pix2pix/lib/python3.7/site-packages/torch/autograd/", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
  File "/nobackup/sccmho/.conda/envs/pix2pix/lib/python3.7/site-packages/torch/autograd/", line 77, in apply
    return self._forward_cls.backward(self, *args)
  File "/nobackup/sccmho/.conda/envs/pix2pix/lib/python3.7/site-packages/torch/autograd/", line 181, in backward
    raise NotImplementedError

This also occurs when returning only real_loss or fake_loss from the discriminator_loss function. I’m at a loss (:wink:) as to what is causing this, as I’m not using a custom loss function. Any guidance would be appreciated.


Are you using a custom autograd.Function? This error is raised when you use a custom autograd Function without implementing its backward method.
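For anyone hitting this later, here is a minimal sketch of what a correctly implemented custom autograd.Function looks like (the `Square` op below is just an illustration, not from the original code). If you delete the `backward` staticmethod, calling `.backward()` on anything downstream of `Square.apply` raises exactly this `NotImplementedError`:

```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Save the input so backward can use it.
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        # Without this method, autograd raises NotImplementedError.
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x  # d(x^2)/dx = 2x

x = torch.tensor([3.0], requires_grad=True)
y = Square.apply(x).sum()
y.backward()
print(x.grad)  # tensor([6.])
```

Note that custom Functions are invoked via `.apply()`, not by instantiating them.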


Hi Alban,

Turns out that was the issue, thanks a lot for your prompt help! :grinning:
