Computing loss along the batch dimension

I have tensors of shape B x C x H x W, where B is the batch dimension. Are the following two code snippets equivalent?

l1_loss = torch.nn.L1Loss()
loss = l1_loss(pred, gt)

and

l1_loss = torch.nn.L1Loss()

for idx in range(B):
  if idx == 0:
    loss = l1_loss(pred[idx, :, :, :], gt[idx, :, :, :])
  else:
    loss = l1_loss(pred[idx, :, :, :], gt[idx, :, :, :]) + loss

loss = loss / B
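
(As an aside, here is a sketch of a loop-free version that I believe computes the same per-sample average, using the functional API with reduction='none' to keep the per-element losses:)

import torch.nn.functional as F

per_elem = F.l1_loss(pred, gt, reduction='none')  # per-element losses, B x C x H x W
per_sample = per_elem.mean(dim=(1, 2, 3))         # one mean per sample
loss = per_sample.mean()                          # average over the batch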

Yes. With the default reduction='mean', L1Loss averages over every element of its inputs, and each per-sample call averages over the same number of elements (C*H*W), so the mean of the B per-sample means equals the mean over the whole batch. A quick check with random inputs:

import torch

pred = torch.randn(10, 10, 10, 10)  # B x C x H x W
gt = torch.randn(10, 10, 10, 10)

l1_loss = torch.nn.L1Loss()  # default reduction='mean'
loss1 = l1_loss(pred, gt)    # mean over all B*C*H*W elements

# Average of the per-sample means
for idx in range(pred.size(0)):
  if idx == 0:
    loss2 = l1_loss(pred[idx, :, :, :], gt[idx, :, :, :])
  else:
    loss2 = l1_loss(pred[idx, :, :, :], gt[idx, :, :, :]) + loss2

loss2 = loss2 / pred.size(0)

print(torch.allclose(loss1, loss2))
> True
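
Note that this only holds for reduction='mean'. With reduction='sum', the whole-batch call returns the total sum while the loop-plus-divide version returns the total sum divided by B, so the two no longer match. A minimal sketch of the mismatch, reusing pred and gt from above:

l1_sum = torch.nn.L1Loss(reduction='sum')

loss1 = l1_sum(pred, gt)  # sum over all B*C*H*W elements

loss2 = sum(l1_sum(pred[idx], gt[idx]) for idx in range(pred.size(0)))
loss2 = loss2 / pred.size(0)  # equals loss1 / B, not loss1

print(torch.allclose(loss1, loss2))  # prints False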