Will a loop decrease the utilization of the GPU?

I have a loop in my deep learning pipeline's forward pass to normalize the intermediate result.

Does it run on the CPU and decrease the utilization of the GPU?

A snippet follows:

def forward(self):
    # normalize each sample in the batch individually
    for b in range(self.points.size(0)):
        self.points[b] = self.unit_cube(self.points[b])

The Python loop itself runs on the CPU.
Whether the GPU is used depends on which device `self.points` is stored on and which operations `self.unit_cube` performs. If they are CUDA tensors, a separate kernel will be launched in each iteration, and the per-iteration launch overhead can indeed lower GPU utilization, especially for small per-sample workloads. If the normalization can be expressed with batched tensor operations, a single call over the whole batch avoids the Python loop entirely.
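As a sketch of that batched rewrite: assuming `unit_cube` scales each sample's points to fit inside a unit cube (a hypothetical implementation, since the original function body isn't shown), the per-sample loop can be replaced by one vectorized call over the batch dimension. NumPy is used here for illustration; the same `keepdims` broadcasting pattern works with torch tensors (`dim=`/`keepdim=` instead of `axis=`/`keepdims=`), where it would launch a handful of kernels for the whole batch instead of several per iteration.

```python
import numpy as np

def unit_cube(points):
    # hypothetical per-sample version: shift to the origin and scale
    # by the largest extent so the cloud fits in the unit cube
    mins = points.min(axis=0)                      # (3,)
    maxs = points.max(axis=0)                      # (3,)
    return (points - mins) / (maxs - mins).max()

def unit_cube_batched(points):
    # vectorized over the batch: reductions keep the batch axis so the
    # result broadcasts back against the (B, N, 3) input
    mins = points.min(axis=1, keepdims=True)       # (B, 1, 3)
    maxs = points.max(axis=1, keepdims=True)       # (B, 1, 3)
    scale = (maxs - mins).max(axis=2, keepdims=True)  # (B, 1, 1)
    return (points - mins) / scale

pts = np.random.rand(4, 100, 3)
looped = np.stack([unit_cube(pts[b]) for b in range(pts.shape[0])])
batched = unit_cube_batched(pts)
assert np.allclose(looped, batched)
```

The batched version produces the same result as the loop, but the work per Python bytecode step grows with the batch size, which is what keeps the GPU (or, here, the vectorized CPU backend) busy.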