How to define a loss function on batch sums for each iteration

Imagine I have
labels = torch.tensor([1, 2, 3., 5])
and a batch size of 4, so my network output has shape [4, 1]:
model_output = torch.rand(4, 1) * 5
model_output.requires_grad = True

Now I want to train my network so that, in each iteration, the sum of the outputs in that batch equals 1.
How can I write a loss function for that? Say I want to take the L1 distance between the sum of the batch outputs and 1.
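A minimal sketch of what this could look like, assuming the goal is exactly the L1 distance between the batch sum and 1 (here `model_output` is a leaf tensor standing in for the network's output, so gradients accumulate on it directly):

```python
import torch
import torch.nn.functional as F

# Stand-in for the network output: batch of 4 scalar predictions
model_output = torch.rand(4, 1, requires_grad=True)

# L1 distance between the sum over the batch and the target value 1
loss = F.l1_loss(model_output.sum(), torch.tensor(1.0))

# Gradients flow back through the sum to every element of the batch
loss.backward()
print(model_output.grad.shape)  # torch.Size([4, 1])
```

In a real training loop this constraint term would typically be added to the main task loss, e.g. `total_loss = task_loss + lam * F.l1_loss(output.sum(), torch.tensor(1.0))`, where `lam` is a weighting hyperparameter you would have to tune.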