Evaluate all submodules for GPU use

Hello everyone. I have a question, and I’ll be grateful for any help.
I have a network:

import torch
import torch.nn as nn

class Net(nn.Module):

	def __init__(self):
		super(Net, self).__init__()
		# net is a module I defined before
		self.Charts = nn.ModuleList([net() for i in range(10)])

	def forward(self, x):
		# evaluate each submodule on x and concatenate the outputs along dim 1
		x = torch.cat([self.Charts[i](x) for i in range(10)], 1)
		return x

But I want to evaluate all the modules at the same time instead of going through the loop [self.Charts[i](x) for i in range(10)]. Otherwise, running on the GPU doesn’t improve performance.

Anyone know how to do that?
(Also, I know 10 submodules is not enough to see an improvement from running on the GPU, but this is only an example; I have way more submodules in my actual code.)
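
In case it helps clarify what I mean by "evaluating all the modules at the same time", here is a rough sketch of the kind of batched call I'm imagining, based on the torch.func model-ensembling pattern (stack_module_state + vmap). The net class below is just a hypothetical stand-in so the snippet runs on its own; my real submodules are different, and I haven't verified that this is correct for them or that it's actually faster.

	import copy
	import torch
	import torch.nn as nn
	from torch.func import stack_module_state, functional_call

	# Hypothetical stand-in for my real "net" submodule, only so this sketch runs;
	# the key assumption is that every copy has exactly the same architecture.
	class net(nn.Module):
		def __init__(self):
			super().__init__()
			self.fc = nn.Linear(8, 4)

		def forward(self, x):
			return self.fc(x)

	device = 'cuda' if torch.cuda.is_available() else 'cpu'
	models = [net().to(device) for _ in range(10)]

	# Stack the parameters/buffers of the 10 identical submodules along a new leading dim.
	params, buffers = stack_module_state(models)

	# A "template" module on the meta device, used only by functional_call.
	base = copy.deepcopy(models[0]).to('meta')

	def call_one(p, b, x):
		return functional_call(base, (p, b), (x,))

	x = torch.randn(32, 8, device=device)

	# vmap over the stacked parameters: all 10 submodules are evaluated in one
	# batched call on the same input, instead of a Python loop of 10 separate calls.
	out = torch.vmap(call_one, in_dims=(0, 0, None))(params, buffers, x)  # (10, 32, 4)

	# Rearrange to match torch.cat([...], 1) from my forward(): (32, 10 * 4)
	out = out.permute(1, 0, 2).reshape(x.shape[0], -1)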