loss.backward() raises an error with make_private()

I am following the tutorial: https://opacus.ai/tutorials/building_image_classifier. The only differences are that I have a smaller network than ResNet, and I want to use the make_private function instead of make_private_with_epsilon, since I do not know epsilon and delta beforehand but I do know the noise multiplier.

```python
optimizer = optim.SGD(netS.parameters(), lr=opt.lr)

privacy_engine = PrivacyEngine()

model, optimizer, train_loader = privacy_engine.make_private(
    module=netS,
    optimizer=optimizer,
    data_loader=trainloader,
    noise_multiplier=NOISE_MULTIPLIER,
    max_grad_norm=MAX_GRAD_NORM,
)
```

When I call loss.backward() in the train function, I get the following error:

```
RuntimeError: grad can be implicitly created only for scalar outputs
```

In non-DP training, it makes sense to get this error when the loss is not reduced to a scalar (e.g. with loss.mean()). But I was under the impression that Opacus would handle this in the per-sample gradient setting. Is there anything I am missing? Thanks in advance.
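For context, here is a minimal reproduction of the error outside of Opacus (the model and loss function are illustrative stand-ins, not my actual netS), which shows that backward() on a non-scalar tensor fails regardless of the privacy engine:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a small classifier like netS.
model = nn.Linear(10, 2)

# reduction="none" yields one loss value per sample: shape (batch_size,).
criterion = nn.CrossEntropyLoss(reduction="none")

x = torch.randn(4, 10)
y = torch.randint(0, 2, (4,))
loss = criterion(model(x), y)  # shape (4,), not a scalar

# loss.backward()  # raises: grad can be implicitly created only for scalar outputs

# Reducing to a scalar first works; Opacus computes per-sample gradients
# via module hooks, so it still expects a scalar loss here.
loss.mean().backward()
```

So my understanding is that the per-sample loss vector itself should still be reduced before calling backward(), but please correct me if Opacus expects something different with make_private.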