Backpropping on a subset of the output

Is there a way in PyTorch to backpropagate on just a subset of the output variable, with the aim of saving computation?
Let’s say I have batches of 100 samples and I only want to keep 3 of them. If I backprop, for example, on loss(output[:3], target[:3]), it takes the same time as backpropping on loss(output, target).
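A minimal sketch of the experiment being described (the model, sizes, and loss function here are hypothetical, just to make the comparison concrete):

```python
import time
import torch
import torch.nn as nn

# Hypothetical setup: a small MLP and a batch of 100 samples.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
x = torch.randn(100, 64)
target = torch.randn(100, 10)
loss_fn = nn.MSELoss()

def time_backward(n):
    """Forward on the full batch, backward on a loss over the first n samples."""
    model.zero_grad()
    output = model(x)
    loss = loss_fn(output[:n], target[:n])
    t0 = time.perf_counter()
    loss.backward()
    return time.perf_counter() - t0

t_subset = time_backward(3)    # loss over 3 samples
t_full = time_backward(100)    # loss over the whole batch
# On typical hardware the two backward times come out close,
# which is the observation in the question.
```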

It makes sense for it to take the same amount of time. The autograd graph was recorded for the full batch, so even though the sliced loss only depends on 3 samples, each layer's backward still runs at the full batch size (the gradient rows for the other 97 samples are simply zero). Unless the slices don't interact anywhere in the network, backward has to run at the full size. If they really are independent, you can instead run the forward pass on just those samples, e.g. model(x[:3]), and save time that way.
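This can be observed directly with a gradient hook on the output tensor: even when the loss uses only output[:3], the gradient that flows back into the layer has the full batch dimension, with zeros in the rows outside the slice (a small sketch, using an arbitrary linear layer):

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 4)
x = torch.randn(100, 8)
target = torch.randn(100, 4)

output = model(x)
grads_seen = []
# Hook fires with the gradient arriving at `output` during backward.
output.register_hook(lambda g: grads_seen.append(g))

loss = nn.functional.mse_loss(output[:3], target[:3])
loss.backward()

g = grads_seen[0]
print(g.shape)                   # torch.Size([100, 4]): full batch size
print(g[3:].abs().sum().item())  # 0.0: rows outside the slice are zero
```

So the layer's backward matmul still runs over all 100 rows; slicing the input before the forward pass (model(x[:3])) is what actually shrinks the computation.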