Backpropagation after applying a function to the output

I am training a UNet and my model output has shape [B, C, D, H, W]. In my forward pass I apply a complex mathematical formula to the output, and I want to compute the loss between the result of this formula and my ground truth. The formula could be something like:
output_from_channel0 * (-torch.exp(-10 / output_from_channel1)) * torch.exp(-5 / output_from_channel2)
i.e.:

outputs = model(input)
outputs2 = a_complex_mathematical_formula_on_tensors(outputs)
my_loss = loss(ground_truth, outputs2)
my_loss.backward()
optimizer.step()
But apparently this does not work.
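
For concreteness, here is a minimal, self-contained version of what I am trying; the random tensors and the MSE loss are only stand-ins for my real model output, ground truth, and loss function:

import torch

B, C, D, H, W = 2, 3, 8, 16, 16

# Stand-ins for model(input) and the ground truth; kept positive so the
# exp(-10/x) terms stay finite in this toy example.
outputs = (torch.rand(B, C, D, H, W) + 0.1).requires_grad_()
ground_truth = torch.rand(B, 1, D, H, W)

def a_complex_mathematical_formula_on_tensors(x):
    # Combine three channels of the prediction elementwise.
    c0, c1, c2 = x[:, 0:1], x[:, 1:2], x[:, 2:3]
    return c0 * (-torch.exp(-10.0 / c1)) * torch.exp(-5.0 / c2)

outputs2 = a_complex_mathematical_formula_on_tensors(outputs)
my_loss = torch.nn.functional.mse_loss(outputs2, ground_truth)
my_loss.backward()
print(outputs.grad.shape)  # gradient w.r.t. the raw prediction
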
I have seen it mentioned here that the backward pass needs to be implemented manually. How should this be done? The autograd documentation seems high-level and quite complicated.

There is an example here:
https://pytorch.org/tutorials/intermediate/custom_function_conv_bn_tutorial.html
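
For the specific formula you posted, a custom autograd Function could look roughly like the sketch below. It is only an illustration: c0, c1, c2 stand for the three channel tensors, and the backward implements the hand-derived partial derivatives of that exact expression, so you would need to adapt it to your real formula.

import torch

class ComplexFormula(torch.autograd.Function):
    """out = c0 * (-exp(-10 / c1)) * exp(-5 / c2), with a hand-written backward."""

    @staticmethod
    def forward(ctx, c0, c1, c2):
        e1 = torch.exp(-10.0 / c1)
        e2 = torch.exp(-5.0 / c2)
        ctx.save_for_backward(c0, c1, c2, e1, e2)
        return -c0 * e1 * e2

    @staticmethod
    def backward(ctx, grad_out):
        c0, c1, c2, e1, e2 = ctx.saved_tensors
        # d(out)/d(c0) = -e1 * e2
        grad_c0 = grad_out * (-e1 * e2)
        # d(out)/d(c1) = -c0 * e1 * e2 * 10 / c1**2  (chain rule through exp(-10/c1))
        grad_c1 = grad_out * (-c0 * e1 * e2 * 10.0 / c1.pow(2))
        # d(out)/d(c2) = -c0 * e1 * e2 * 5 / c2**2   (chain rule through exp(-5/c2))
        grad_c2 = grad_out * (-c0 * e1 * e2 * 5.0 / c2.pow(2))
        return grad_c0, grad_c1, grad_c2

# Applied to the [B, C, D, H, W] prediction:
# outputs2 = ComplexFormula.apply(outputs[:, 0], outputs[:, 1], outputs[:, 2])

You can verify a hand-written backward against finite differences with torch.autograd.gradcheck on small double-precision inputs. Note also that as long as the formula is built only from differentiable torch operations, autograd can usually differentiate through it automatically; a custom Function is mainly needed when the computation leaves torch (e.g. goes through numpy) or for memory/speed reasons.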