I try to ensure reproducibility in my code by setting the random seeds and using only deterministic algorithms.
However, the following minimal example raises an error:
```python
import torch

torch.use_deterministic_algorithms(True, warn_only=True)

device = torch.device("cuda")
tensor = torch.ones(size=(10, 2), device=device, dtype=torch.long)
indices = torch.arange(4, 8, device=device)
value = 5.
tensor[indices] = int(value)
```
The error I obtain is:
```
linearIndex.numel()*sliceSize*nElemBefore == expandedValue.numel() INTERNAL ASSERT FAILED at "…/aten/src/ATen/native/cuda/Indexing.cu":389, please report a bug to PyTorch. number of flattened indices did not match number of elements in the value tensor: 8 vs 4

  File "debug_determinism.py", line 14, in <module>
    tensor[indices] = int(value)
```
The same code runs fine on the CPU, but fails with the error above when executed on the GPU.
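For what it's worth, a workaround that seems to avoid the assert for me is to expand the scalar into a tensor whose shape matches the indexed slice before assigning, so no scalar broadcast happens inside the CUDA `index_put_` kernel. This is just a sketch; the CPU fallback is only there so the snippet runs on machines without a GPU and is not part of the original report:

```python
import torch

torch.use_deterministic_algorithms(True, warn_only=True)

# Fall back to CPU when CUDA is unavailable (my addition for illustration).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tensor = torch.ones(size=(10, 2), device=device, dtype=torch.long)
indices = torch.arange(4, 8, device=device)
value = 5.

# Materialize a value tensor of the indexed shape (4 rows x 2 columns)
# instead of assigning the Python scalar directly.
tensor[indices] = torch.full(
    (indices.numel(), tensor.size(1)), int(value),
    device=device, dtype=tensor.dtype,
)

print(tensor[4:8].tolist())  # rows 4..7 are now filled with 5
```

With the explicit `torch.full` right-hand side, the number of flattened indices and the number of value elements match, so the deterministic indexing path has nothing to broadcast.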
Is this a bug in the current release?