Hello,
First off, sorry if this question is more about Python itself than about PyTorch.
I am unclear whether transform pipelines built with torchvision.transforms.Compose are run in parallel over the batch.
Say I have a batch of 2 samples (fetched with a DataLoader, num_workers=0),
and a Compose with two transformations, A and B.
What actually happens?
(A & B over sample 1) then (A & B over sample 2),
or
(A over sample 1) then (A over sample 2 & B over sample 1) then (B over sample 2)?
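For what it's worth, Compose's `__call__` is essentially just a sequential loop over the transforms for one sample at a time, and the DataLoader invokes the pipeline once per sample. Here is a minimal sketch of that behavior in plain Python (the class and transform names are illustrative, not the real torchvision source):

```python
# Illustrative sketch of what transforms.Compose does internally:
# a plain sequential loop over the transforms, applied to ONE sample.
class MyCompose:
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, sample):
        for t in self.transforms:   # A first, then B, in order
            sample = t(sample)
        return sample

log = []

def A(x):
    log.append(("A", x))
    return x + 1

def B(x):
    log.append(("B", x))
    return x * 10

pipeline = MyCompose([A, B])

# The DataLoader calls the pipeline once per sample, so with two samples:
out = [pipeline(s) for s in (1, 2)]
print(log)  # [('A', 1), ('B', 2), ('A', 2), ('B', 3)]
print(out)  # [20, 30]
```

The log shows the first ordering: A and B both run on sample 1 before sample 2 is touched at all.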
I have the same question about the forward pass of any sequential model.
Are samples passed one by one through the entire network, or are they sent through in a queue, layer by layer?
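My current understanding (sketched below with NumPy rather than torch, purely as an illustration) is that neither option involves a Python loop over samples: the whole batch is one tensor, and each layer's operation is vectorized over the batch dimension, so the full batch flows through layer 1, then the result through layer 2, and so on. The layer functions here are made-up stand-ins for a linear layer and a ReLU:

```python
import numpy as np

# Hypothetical stand-ins for two layers of a sequential model.
def layer1(batch):
    # e.g. a linear layer: acts on the whole (batch, features) array at once
    return batch @ np.array([[1.0, 0.0], [0.0, 2.0]])

def layer2(batch):
    # e.g. a ReLU activation, applied elementwise across the batch
    return np.maximum(batch, 0.0)

batch = np.array([[1.0, -1.0],   # sample 1
                  [2.0,  3.0]])  # sample 2

out = batch
for layer in (layer1, layer2):   # analogous to nn.Sequential's forward loop
    out = layer(out)             # the FULL batch moves layer by layer

print(out)  # [[1. 0.]
            #  [2. 6.]]
```

So the "queue" picture is closer, except there is no per-sample queueing at all: both samples are processed together inside each layer's vectorized op.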