PyTorch TPU support

Any news on PyTorch TPU support?

Last I heard…


Was wondering the same…

There was an announcement in the PyTorch Developer Conference this week, where a ResNet50 was successfully trained on a TPU.
For more info, check the announcement.


That’s great to read, thank you very much for sharing.

From the announcement: “we’re pleased to announce that engineers on Google’s TPU team are actively collaborating with core PyTorch developers to connect PyTorch to Cloud TPUs.”

There is already some Keras code that shows how to use a TPU in a very concrete and practical way:

# model, x_train, y_train, x_test, y_test are defined earlier in the notebook
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
    model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(
            tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])))

tpu_model.compile(optimizer=tf.train.AdamOptimizer(learning_rate=1e-3),
                  loss=tf.keras.losses.sparse_categorical_crossentropy)

def train_gen(batch_size):
  while True:
    offset = np.random.randint(0, x_train.shape[0] - batch_size)
    yield x_train[offset:offset + batch_size], y_train[offset:offset + batch_size]

tpu_model.fit_generator(train_gen(1024), epochs=10, steps_per_epoch=100,
                        validation_data=(x_test, y_test))

I am hoping to see some PyTorch demo code that enables TPU usage.
I guess it is coming soon. Can’t wait!!!

Anyone heard anything new on PyTorch on TPU?

I believe it’s already usable but a little rough around the edges, although I haven’t tried it myself.


Any developments on that in the meantime? XLA is a nice temporary solution, but it would be good to know whether we can expect an official solution soon.
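For anyone who wants to try the XLA route today, it is exposed through the `torch_xla` package: you request an XLA device and use `xm.optimizer_step` in an otherwise ordinary training loop. A minimal sketch, assuming `torch_xla` is installed on a TPU host (the toy `Linear` model and data here are just for illustration; without `torch_xla` it falls back to CPU so you can see the loop itself is plain PyTorch):

```python
import torch

# Use an XLA (TPU) device when torch_xla is available, otherwise plain CPU.
try:
    import torch_xla.core.xla_model as xm
    device = xm.xla_device()
except ImportError:
    xm = None
    device = torch.device("cpu")

# Toy regression model; the training loop is ordinary PyTorch.
model = torch.nn.Linear(4, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(8, 4, device=device)
y = torch.randn(8, 2, device=device)

for _ in range(5):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    if xm is not None:
        xm.optimizer_step(optimizer)  # steps the optimizer and syncs the TPU
    else:
        optimizer.step()

print(loss.item())
```

The only TPU-specific lines are the device lookup and the `xm.optimizer_step` call, which is what makes porting an existing loop fairly painless.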


Thanks to the awesome work by the PyTorch and XLA team, we were able to get TPU support fully working out of the box with PyTorch Lightning.

Check out this nifty guide on going from PyTorch to PyTorch Lightning

Does PyTorch on TPU also support float16 precision?

Yes. Moreover, it switches to 16-bit by default.
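Worth noting that the 16-bit format TPUs favor natively is bfloat16 rather than IEEE float16: it keeps float32’s 8-bit exponent but truncates the mantissa to 7 bits, so range is preserved while precision drops. A small stdlib-only sketch of that truncation (the `to_bfloat16` helper is purely illustrative, not a torch_xla API):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Round a float32 value to the nearest bfloat16 (round-to-nearest-even),
    returned as a regular float for easy inspection."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    # Add a rounding bias, then drop the low 16 bits (the lost mantissa).
    rounded = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFF0000
    return struct.unpack('<f', struct.pack('<I', rounded))[0]

print(to_bfloat16(1.0 + 2**-10))  # 1.0 -- only ~3 decimal digits of precision
print(to_bfloat16(1e38))          # still finite: same exponent range as float32
```

So large activations and gradients that would overflow float16 (which tops out around 65504) stay representable, which is why bfloat16 training usually needs no loss scaling.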