Indices should be either on cpu or on the same device as the indexed tensor

Using device: mps
torch: 1.13.0.dev20220614
torchvision: 0.14.0.dev20220614
Traceback (most recent call last):
  File "Disco_Diffusion_v5_2_m1.py", line 2340, in <module>
    do_run()
  File "Disco_Diffusion_v5_2_m1.py", line 983, in do_run
    txt = clip_model.encode_text(clip.tokenize(prompt).to(device)).float()
  File "/Users/aiden/notebook/CLIP/clip/model.py", line 355, in encode_text
    x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

Can you make sure that all the tensors are on the same device?
In particular, you might want to use torch.arange(size, device=x.device) so that the index tensor matches the indexed tensor.
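
For reference, applying that suggestion to the failing line in clip/model.py would look like this sketch (only a device= argument is added to the original CLIP line):

# clip/model.py, encode_text: create the index tensor on the same device as x
x = x[torch.arange(x.shape[0], device=x.device), text.argmax(dim=-1)] @ self.text_projection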

        print("+++++++++++++++")
        print(x.device)
        print(text.device)
        print(self.text_projection.device)
        print("+++++++++++++++")

They are all on device mps:0
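
To narrow it down, here is a minimal reproduction of fancy indexing with tensor indices on MPS: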

import torch

device = torch.device("mps")

# 6x3x4 float tensor on the MPS device
x = torch.arange(72, dtype=torch.float32, device=device).reshape(6, 3, 4)
print(x)
print(x.device)

# index tensors, also on the MPS device
index1 = torch.arange(1, device=device)
index2 = torch.arange(1, device=device)
index2[0] = 5
print(index1)
print(index2)

# fancy indexing with tensor indices dispatches to aten::index.Tensor_out
y = x[index1, index2]
print(y)

NotImplementedError: The operator 'aten::index.Tensor_out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.

So 'aten::index.Tensor_out' is the operator that triggers the fallback to the CPU when PYTORCH_ENABLE_MPS_FALLBACK=1 is set.
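
As the error message suggests, a temporary workaround is to enable the CPU fallback; a minimal sketch, assuming the environment variable takes effect when set before torch is imported:

import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"  # must be set before torch is imported

import torch

device = torch.device("mps")
x = torch.arange(72, dtype=torch.float32, device=device).reshape(6, 3, 4)
index1 = torch.arange(1, device=device)
index2 = torch.tensor([5], device=device)
y = x[index1, index2]  # aten::index.Tensor_out now falls back to the CPU instead of raising
print(y)

The same can be done from the shell, e.g. PYTORCH_ENABLE_MPS_FALLBACK=1 python Disco_Diffusion_v5_2_m1.py. As the warning notes, ops that fall back will run slower than native MPS.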