I am trying to evaluate a function in parallel on different sections of my data inside the forward() method of my model. I am not even sure whether this is possible.
I saw that there is a torch.multiprocessing.Pool that I can use to map a function over a list of tensors, but when I use it inside my forward() method it complains, apparently because pool objects cannot be used inside a class:
```
NotImplementedError: pool objects cannot be passed between processes or pickled.
```
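For context, here is a minimal sketch of the kind of setup that triggers this error for me (`Net` and `score` are placeholder names, not my real model):

```python
import torch
import torch.multiprocessing as mp

class Net:
    def __init__(self):
        # storing the pool on the instance is what causes trouble below
        self.pool = mp.Pool(2)

    def score(self, t):
        return t + 1

    def forward(self, xs):
        # pool.map has to pickle the bound method self.score; pickling
        # self also tries to pickle self.pool, which is not allowed
        return self.pool.map(self.score, xs)

net = Net()
try:
    net.forward([torch.ones(2)])
except NotImplementedError as e:
    print(e)  # pool objects cannot be passed between processes or pickled
net.pool.close()
```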
Here is, more or less, what I would like to do:
```python
def forward(self, x):
    x = nf.unfold(x)  # unfold the input, e.g. an image, to get patches
    x = evaluate_function_in_parallel(x)  # parallelize this evaluation, e.g. x = pool.map(function, x)
    x = torch.cat(x)
    return x
```
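A toy variant that does avoid the pickling error, at least in this simplified form, is to make the worker a module-level function (so it pickles cleanly) and to create the pool outside the model rather than storing it on the module (`evaluate_patch` and `PatchModel` are placeholder names):

```python
import torch
import torch.multiprocessing as mp

def evaluate_patch(patch):
    # placeholder for the real per-patch function
    return patch * 2.0

class PatchModel(torch.nn.Module):
    def forward(self, x, pool):
        # the pool lives outside the module, so nothing here tries to pickle it;
        # evaluate_patch is module-level, so the workers can unpickle it
        out = pool.map(evaluate_patch, list(x))
        return torch.stack(out)

model = PatchModel()
x = torch.ones(4, 3, 3)  # pretend these are 4 unfolded patches
with mp.Pool(2) as pool:
    y = model(x, pool)
print(y.shape)  # torch.Size([4, 3, 3])
```

Note that this only parallelizes the computation of values; as far as I can tell, autograd does not track the operations performed inside the worker processes, which is part of why I am unsure this approach makes sense in forward() at all.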
I have only seen examples of distributed training with torch.multiprocessing and torch.distributed, but no examples of distributing work inside the forward() function. Is it even possible? If so, are there any examples available?
Any comment on this would be really helpful. Thanks.