Deterministic Algorithms yield an error

Hi,
I am trying to ensure reproducibility in my code by setting the random seeds and using only deterministic algorithms.
However, the following simple snippet raises an error in my setup:

import torch
torch.use_deterministic_algorithms(True, warn_only=True)
device = torch.device("cuda")
tensor = torch.ones(size=(10, 2), device=device, dtype=torch.long)
indices = torch.arange(4, 8, device=device)
value = 5.

tensor[indices] = int(value)

The error I obtain is:
linearIndex.numel()*sliceSize*nElemBefore == expandedValue.numel() INTERNAL ASSERT FAILED at "…/aten/src/ATen/native/cuda/Indexing.cu":389, please report a bug to PyTorch. number of flattened indices did not match number of elements in the value tensor: 8 vs 4
File "debug_determinism.py", line 14, in <module>
    tensor[indices] = int(value)

The same code runs fine on the CPU but raises the error above when executed on the GPU.
Is this a bug in the current release?

The error seems to be related to this issue.
CC @eqy as I’m unsure what the current status is.

In the referenced post, index_put is used, which expects a tensor of the same shape. However, it should also be possible to assign the same scalar value to all indexed entries (as in the example above), so I am not sure whether the two problems are related.
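For what it is worth, on the CPU the scalar broadcast works through both code paths, which suggests the plain indexing assignment and index_put_ should behave the same (a minimal CPU-only check; the tensor names a and b are just for illustration):

```python
import torch

# CPU-only check: scalar broadcast works both via plain advanced
# indexing and via index_put_ with a 0-dim value tensor.
a = torch.ones(10, 2, dtype=torch.long)
b = torch.ones(10, 2, dtype=torch.long)
indices = torch.arange(4, 8)

a[indices] = 5                              # plain advanced indexing
b.index_put_((indices,), torch.tensor(5))   # equivalent index_put_ call
```

Both calls leave rows 4..7 filled with 5 and the rest untouched, so on the CPU the two paths agree.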

Strangely, the assignment works with a boolean value but not with a numerical one.
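In the meantime, a possible workaround (a sketch only; I have not verified it against every deterministic kernel) is to expand the scalar into a value tensor whose shape matches the indexed slice, which sidesteps the scalar broadcast path that triggers the assert:

```python
import torch

tensor = torch.ones(size=(10, 2), dtype=torch.long)  # device="cuda" in the real case
indices = torch.arange(4, 8)
value = 5.

# Build a value tensor with the same shape as tensor[indices]
# instead of relying on scalar broadcasting.
fill = torch.full((indices.numel(), tensor.size(1)), int(value), dtype=tensor.dtype)
tensor[indices] = fill
```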