custom calculation

Hi everyone,
I have a question about a custom calculation performed before passing the result to the loss function. Suppose we have a network that generates a feature map: I want to do some logic calculations on it, without any learnable parameters, and pass the result to the loss function. Would this calculation cause a problem for the backward method?
Thank you so much and best regards.

You shouldn’t see any issues if you are using differentiable operations on the model output.
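For example, a minimal sanity check along these lines (the toy model and the squaring/averaging step are just placeholders, not your actual setup) could look like this:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # hypothetical toy model standing in for the feature-map generator
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

    x = torch.randn(2, 8)
    target = torch.randn(2, 4)

    out = model(x)
    # differentiable post-processing applied outside the model
    post = (out ** 2).mean(dim=1, keepdim=True)
    post_gt = (target ** 2).mean(dim=1, keepdim=True)

    loss = F.mse_loss(post, post_gt)
    loss.backward()

    # gradients reach the model parameters even though the extra math happened outside forward()
    print(all(p.grad is not None for p in model.parameters()))  # True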

Thank you for your response.
Even if the operations are not part of the feed-forward pass, will they still be differentiable? I have included the pseudo code here:

    output, _ = generator(input)  # trainable model
    custom_output = logic_calculation(output)
    custom_output_gt = logic_calculation(GT)

    loss = torch.nn.functional.mse_loss(custom_output, custom_output_gt)

Yes; generally, PyTorch/Autograd does not care whether an operation is inside a model’s forward function or in a global context. The computation graph will be created from the tracked operations, so they should be differentiable if you want to backpropagate through them.
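To illustrate this point, here is a small sketch (the linear layer and the scaling op are made up for the example) comparing the two placements:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)
    lin = nn.Linear(4, 4)
    x = torch.randn(3, 4)
    gt = torch.randn(3, 4)

    # variant A: the extra computation happens outside the module, in "global" code
    loss_a = F.mse_loss(lin(x) * 2.0 + 1.0, gt * 2.0 + 1.0)
    grad_a = torch.autograd.grad(loss_a, lin.weight)[0]

    # variant B: the same computation is wrapped inside a forward() method
    class Wrapped(nn.Module):
        def __init__(self, lin):
            super().__init__()
            self.lin = lin

        def forward(self, x):
            return self.lin(x) * 2.0 + 1.0

    loss_b = F.mse_loss(Wrapped(lin)(x), gt * 2.0 + 1.0)
    grad_b = torch.autograd.grad(loss_b, lin.weight)[0]

    # the gradients are identical: autograd only sees the tracked operations,
    # not where they were executed
    print(torch.allclose(grad_a, grad_b))  # True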

I really appreciate your response.
I have two other questions:
1 - If I want to use unfold and reshape, is that possible? If so, how does backpropagation work for these functions?
2 - Can I ensure the gradient is calculated with the following?

    for name, param in model.named_parameters():
        if param.grad is not None:
            print(f"model - Parameter: {name}, Gradient Norm: {param.grad.norm()}")

Best Regards.

  • unfold and reshape are differentiable, so autograd should work (see the sketch after this list)
  • not sure what you mean by your second question, do you want to compute second order gradients?
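In case it helps, here is a quick self-contained check that gradients flow back through unfold and reshape; the conv layer, shapes, and patch size are just assumptions for the example, and it uses torch.nn.functional.unfold (Tensor.unfold is likewise differentiable):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    conv = nn.Conv2d(1, 3, kernel_size=3, padding=1)
    x = torch.randn(1, 1, 8, 8)
    gt = torch.randn(1, 3, 8, 8)

    feat = conv(x)
    # unfold both tensors into non-overlapping 4x4 patches, then reshape; both ops are differentiable
    patches = F.unfold(feat, kernel_size=4, stride=4).reshape(1, -1)
    patches_gt = F.unfold(gt, kernel_size=4, stride=4).reshape(1, -1)

    loss = F.mse_loss(patches, patches_gt)
    loss.backward()

    print(conv.weight.grad is not None)  # True: gradients flowed back through unfold/reshape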

Thank you for your response.
Actually, I want to be sure that the gradient for each parameter is calculated, so can I use that code to print the gradients of the model parameters?