Error: grad can be implicitly created only for scalar outputs

Hi,
I am trying to compute the gradients of my network output (a batch of scalar values, one per sample) with respect to the model's trainable parameters.

I assumed outputs.backward() would do the trick, but I am getting the same error as stated in this thread. Also, does backward() compute the gradients w.r.t. the model's trainable parameters? Additionally, how can I access the computed gradients, since I need to perform some operations on them?

Please share your thoughts on how I can accomplish this.
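For reference, here is a minimal sketch of what I have tried so far (the nn.Linear stand-in model, batch size, and input size are placeholders, not my actual network). Since backward() requires a scalar, reducing the batch first (e.g. with sum()) avoids the error; the gradients then land in each parameter's .grad attribute:

```python
import torch
import torch.nn as nn

# Placeholder model producing one number per sample
# (the real architecture is assumed, not shown here).
model = nn.Linear(4, 1)
inputs = torch.randn(8, 4)   # batch of 8 samples
outputs = model(inputs)      # shape (8, 1): one number per sample

# backward() needs a scalar output, so reduce over the batch first,
# or equivalently pass an explicit gradient of ones:
outputs.sum().backward()
# outputs.backward(torch.ones_like(outputs))  # same effect

# Gradients w.r.t. every trainable parameter are now stored in .grad,
# where they can be inspected or modified before an optimizer step.
for name, p in model.named_parameters():
    print(name, p.grad.shape)
```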