How to manage batches: in the main file or inside forward?

To understand how PyTorch graphs, autograd, etc. work, let me ask:

What's the difference between doing this:

model = MyModel()              # define the model
batch_output = []
for new_sample in dataloader:  # dataloader yields one sample at a time (batch_size=1)
    output = model(new_sample)
    batch_output.append(output)        # append the output, not the input sample
batch_output = torch.stack(batch_output)
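To make this first approach runnable end-to-end, here is a self-contained sketch (MyModel, the linear layer, and the random samples are hypothetical stand-ins for whatever model and data you actually have):

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):  # hypothetical stand-in model
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        return self.linear(x)

model = MyModel()
samples = [torch.randn(4) for _ in range(8)]  # stand-in for a dataloader with batch_size=1

batch_output = []
for new_sample in samples:       # one forward call per sample, in the main script
    output = model(new_sample)
    batch_output.append(output)  # append the model's output, not the input
batch_output = torch.stack(batch_output)  # shape: (8, 2)
```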

This applies the model several times in the main script, calling model(input) as many times as the batch requires,
versus processing the batch inside the forward function:

def forward(self, batch):
    batch_output = []
    for i in range(batch.shape[0]):  # iterate over the samples in the batch
        sample_i = batch[i]
        ....
        ....
        batch_output.append(sample_i)  # sample_i after processing
    return torch.stack(batch_output)
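For reference, here is the loop-inside-forward version as a complete module (again with a hypothetical linear layer standing in for the real per-sample computation). Note that a single vectorized call over the batch computes the same numbers while recording far fewer autograd operations:

```python
import torch
import torch.nn as nn

class LoopModel(nn.Module):  # hypothetical module that loops inside forward
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, batch):
        batch_output = []
        for i in range(batch.shape[0]):  # one set of autograd-recorded ops per sample
            sample_i = batch[i]
            batch_output.append(self.linear(sample_i))
        return torch.stack(batch_output)

model = LoopModel()
batch = torch.randn(8, 4)
looped = model(batch)             # loops over 8 samples
vectorized = model.linear(batch)  # one batched op, same result
assert torch.allclose(looped, vectorized, atol=1e-6)
```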

Is there any real difference between these two approaches?
I've seen that doing the iteration inside the forward function is hugely memory-consuming. I imagine it duplicates the graph or something like that, but I don't understand how PyTorch works well enough to avoid these mistakes.
I'm not using pre-implemented nn.Modules; I need to create my own, and I don't know the proper way to use PyTorch.
Can anyone explain what exactly happens when I use a for loop inside the forward function?
What happens when I run model(input)?
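For context, my current understanding of that last question (please correct me if I'm wrong): model(input) goes through nn.Module.__call__, which runs any registered hooks and then calls forward. A minimal check with a hypothetical Tiny module:

```python
import torch
import torch.nn as nn

class Tiny(nn.Module):  # hypothetical minimal module
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(3, 1)

    def forward(self, x):
        return self.linear(x)

m = Tiny()
x = torch.randn(3)

out_call = m(x)             # dispatches through nn.Module.__call__ (runs hooks, then forward)
out_forward = m.forward(x)  # same computation, but bypasses hooks; m(x) is preferred
assert torch.equal(out_call, out_forward)
```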

Any good guide to understanding all this?

Thank you!