Non-scalar backward and custom mini-batch implementation

I appreciate your reply.

In this case:

torch.autograd.backward([output], [grad])

is the gradient of each sample in the mini-batch accumulated, as below?
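To illustrate what I mean, here is a minimal sketch (the tensors `w`, `x`, and `output` are my own example, not from the earlier posts): a shared weight is used by every sample in the mini-batch, so calling `torch.autograd.backward` on the non-scalar output with a grad of ones sums the per-sample contributions into `w.grad`.

```python
import torch

# Shared parameter used by every sample in the mini-batch.
w = torch.tensor([1.0, 2.0], requires_grad=True)

# Mini-batch of 3 samples.
x = torch.tensor([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])

output = x @ w  # non-scalar output, one value per sample, shape (3,)

# A grad of ones weights each sample equally; backward sums the
# per-sample gradients d(output_i)/dw = x_i into w.grad.
torch.autograd.backward([output], [torch.ones_like(output)])

print(w.grad)  # sum of the rows of x -> tensor([2., 2.])
```

So here each sample's gradient with respect to `w` is its input row `x_i`, and `w.grad` ends up being the sum over the batch, which is the accumulation behavior I am asking about.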

Thanks