How to speed up a for loop of calls

In the forward method of my nn.Module, I have these two lines:

views = [z] + [transformation(z) for transformation in self.model.transformations]
representations = torch.stack([self.model.encoder(view) for view in views])

where self.model.transformations is a list of nn.Modules (Convolutions actually).
By profiling my code, I saw that this formulation as a list comprehension might be problematic in terms of speed.

Is there a PyTorch function/module that would take a list of modules, apply them to my input z in a fast way, and then stack the results?
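One possible sketch, assuming all transformations share the same architecture (e.g. identical Conv2d layers): PyTorch's `torch.func` ensembling utilities (`stack_module_state`, `functional_call`, `torch.vmap`) can stack the per-module parameters and run a single vmapped call instead of a Python loop. The layer shapes below are made up for illustration.

```python
import copy
import torch
import torch.nn as nn
from torch.func import stack_module_state, functional_call

# Hypothetical stand-ins for self.model.transformations:
# several convolutions with identical architecture.
convs = [nn.Conv2d(3, 8, kernel_size=3, padding=1) for _ in range(4)]
z = torch.randn(2, 3, 16, 16)

# Stack the parameters/buffers of all modules along a new leading dim.
params, buffers = stack_module_state(convs)

# A "stateless" template module; 'meta' device avoids duplicating weights.
base = copy.deepcopy(convs[0]).to("meta")

def apply_one(p, b, x):
    # Run the template with one slice of the stacked parameters.
    return functional_call(base, (p, b), (x,))

# vmap over the stacked parameters; z is shared across models (in_dims=None).
views = torch.vmap(apply_one, in_dims=(0, 0, None))(params, buffers, z)
# views has shape (num_transformations, batch, 8, 16, 16)
```

Whether this beats the plain list comprehension depends on module size and device; for a handful of reasonably sized convolutions the loop overhead is usually negligible, so it is worth benchmarking both.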


Very unlikely, unless you have thousands of transformations on tiny tensors. In normal circumstances, your hotspots should be in native code (C++ or CUDA); this may be invisible to your profiler, or you may be profiling without CUDA synchronization.