How to run a conventional for loop on a GPU?

Hi,

I would like to know how to run a conventional for loop on GPUs. I have 2 GPUs. Below is a sample code for your consideration. It takes a few minutes to run on the CPU, but I need it to finish within a few seconds. Could you please help me out?

import datetime

maxiter = 10000
for iteration in range(maxiter):
    print(str(iteration) + '--' + str(datetime.datetime.now()))
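As written, the loop body only prints a timestamp, so there is no numeric work for a GPU to accelerate. If each iteration instead performed independent arithmetic, the usual approach is to replace the Python loop with a single batched tensor operation. A minimal sketch, assuming PyTorch is installed and using a made-up per-iteration computation (`i * i + 1`) purely for illustration:

```python
import torch

# Use a GPU if one is available; this sketch falls back to CPU otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"

maxiter = 10000

# Loop version: one scalar computation per iteration (slow Python-side loop).
loop_result = [i * i + 1 for i in range(maxiter)]

# Vectorized version: all 10000 "iterations" as one tensor operation on `device`.
iterations = torch.arange(maxiter, device=device)
gpu_result = iterations * iterations + 1

# Both produce the same values; the tensor version runs in parallel on the device.
assert gpu_result.cpu().tolist() == loop_result
```

The key point is that the GPU does not speed up the loop itself; it speeds up work that can be expressed as operations over whole tensors, with the loop eliminated.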

This is not really relevant to PyTorch: the loop body contains no tensor operations, so there is nothing to move to the GPU.

Have a look at