Backpropagate the loss of a single batch sample rather than the mean loss

For every image in my dataset, I subdivide it into patches and run the patches through the model as a "batch". Then I compute the loss for each patch using nn.CrossEntropyLoss(reduction='none'), which returns one loss per patch in the batch. Next, I select only one of the returned losses and backpropagate it, by indexing the tensor and then calling loss.backward(), as the example below shows. When I do this, the grad_fn attribute changes from <NllLossBackward> to <SelectBackward>. Does this affect my ability to backpropagate just the selected loss?

>>> loss_all = criterion(m(patches), target)
>>> loss_all
tensor([1.0222, 0.8956, 0.4532], grad_fn=<NllLossBackward>)
>>> loss = loss_all[0]
>>> loss
tensor(1.0222, grad_fn=<SelectBackward>)
>>> loss.backward()
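
For reference, here is a minimal, self-contained sketch of the same pattern. The model (a plain nn.Linear), the tensor shapes, and the labels are invented purely for illustration; only the reduction='none' + indexing + backward() sequence mirrors the question:

import torch
import torch.nn as nn

# Hypothetical stand-in for the real patch classifier (shapes are illustrative).
m = nn.Linear(8, 4)                        # maps an 8-dim "patch" to 4 class logits
patches = torch.randn(3, 8)                # a "batch" of 3 patches
target = torch.tensor([0, 2, 1])           # one class label per patch

criterion = nn.CrossEntropyLoss(reduction='none')
loss_all = criterion(m(patches), target)   # shape (3,): one loss per patch

loss = loss_all[0]                         # indexing is itself a graph op -> SelectBackward
loss.backward()                            # backpropagates only patch 0's loss

print(m.weight.grad)                       # gradients are populated, driven only by patch 0

Indexing does not detach the tensor from the autograd graph; SelectBackward simply records the indexing step, so backward() still flows through the selected element back into the model parameters.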