| Topic | Replies | Views | Date |
|---|---|---|---|
| Please redirect all XLA questions to pytorch/xla GitHub issues | 0 | 1198 | May 18, 2020 |
| Script runs on v5p-8 but gets stuck on xmp.spawn on v5p-32 | 0 | 7 | October 13, 2024 |
| Simple tutorial for TPU usage with PyTorch | 2 | 699 | September 25, 2024 |
| Export module (via torch_dynamo) with arbitrary tensors marked as sharded | 0 | 159 | April 2, 2024 |
| Export module to StableHLO with communication collectives | 9 | 549 | April 1, 2024 |
| Error in xm.optimizer_step() | 0 | 272 | March 14, 2024 |
| Kaggle TPU RoBERTa fine-tuning | 0 | 428 | December 4, 2023 |
| PyTorch/XLA SPMD on Cloud TPU Pod | 2 | 544 | November 28, 2023 |
| Error when attempting to access XLA tensor.shape | 3 | 1090 | August 15, 2023 |
| Loss backward took forever until memory leak on TPU v3 | 1 | 1285 | June 7, 2023 |
| Enable multiprocessing on PyTorch XLA for TPU VM | 6 | 1572 | May 30, 2023 |
| Cannot import torch_xla on Google Colab without TPU | 1 | 2738 | April 25, 2023 |
| XLA debug flags | 1 | 925 | April 25, 2023 |
| Why USE_CUDA must be 0 when XLA_CUDA=1 | 3 | 958 | February 24, 2023 |
| XLA AOT with PyTorch | 7 | 1287 | January 26, 2023 |
| Can I get help with this error: xmp.spawn() (import torch_xla.distributed.xla_multiprocessing as xmp) | 4 | 1114 | January 17, 2023 |
| XLA-TPU: extremely slow | 2 | 1910 | November 15, 2022 |
| TPU training: how to fix "RuntimeError: Numpy is not available" | 1 | 1217 | November 9, 2022 |
| Error with PyTorch XLA | 0 | 1110 | January 12, 2022 |
| TPU: resource exhausted, although there seems to be enough memory | 0 | 1329 | November 10, 2021 |
| How to concatenate all the predicted labels in XLA? | 0 | 898 | August 19, 2021 |
| Cannot replicate if number of devices (1) is different from 8 - TPU | 1 | 1957 | July 20, 2021 |
| PyTorch built from source, no CUDA devices found | 4 | 1404 | July 20, 2021 |
| Understanding use of xm.mark_step() in torch_xla | 1 | 2929 | July 20, 2021 |
| Input tensor is not an XLA tensor: torch.FloatTensor | 1 | 2240 | May 29, 2021 |
| im2col / im2col_backward ops lowering not supported in XLA; any temporary workaround to replace nn.Unfold without triggering the unsupported ops? | 2 | 1115 | May 6, 2021 |
| PyTorch XLA is not working at all on Colab or Kaggle kernels | 0 | 1174 | February 2, 2021 |
| Problem with training model on Google Colab TPU | 0 | 1837 | December 2, 2020 |
| Unexpected behaviour with torch_xla | 1 | 969 | October 24, 2020 |
| What does xmp.MpSerialExecutor() really do? | 0 | 1351 | September 13, 2020 |