Is GPU tensor computing possible for general calculations?

Can this NumPy computation be converted to tensors and calculated on the GPU?

If it's possible, how should I approach this problem?

Yes, you can basically just replace all NumPy methods with their PyTorch equivalents:

import torch

device = 'cuda:0'
query_vecs = torch.randn(123, 32, device=device)      # 123 query vectors of dim 32
reference_vecs = torch.randn(456, 32, device=device)  # 456 reference vectors of dim 32

# Euclidean distance from each query to all references;
# torch.cat flattens the per-query results into a 1-D tensor of shape (123 * 456,)
dist = torch.cat([torch.sqrt(torch.sum((q - reference_vecs)**2, dim=1)) for q in query_vecs])
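As a side note, the Python loop can also be avoided entirely. A minimal sketch using torch.cdist, assuming plain Euclidean distance is what you want and keeping the result as a distance matrix rather than a flattened vector:

# Pairwise Euclidean distances, shape (123, 456):
# dist_matrix[i, j] is the distance between query_vecs[i] and reference_vecs[j]
dist_matrix = torch.cdist(query_vecs, reference_vecs, p=2.0)

# Flatten if you need the same 1-D layout as the loop version above
dist = dist_matrix.flatten()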

So when a NumPy array is converted to a tensor with a device specified, the computation automatically runs on the GPU. Is this correct?

I’m not sure what you mean by “converted to tensor”, but all operations will be executed on the GPU as long as the tensors involved are stored on that device.
If you would like to transform NumPy arrays into PyTorch tensors, you can use torch.from_numpy(array). Note that this creates a CPU tensor (sharing memory with the array), so you still need to move it to the GPU afterwards.
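For example, a minimal sketch (the array shape here is just an illustration):

import numpy as np
import torch

array = np.random.randn(123, 32).astype(np.float32)

# from_numpy creates a CPU tensor that shares memory with the NumPy array;
# .to('cuda:0') then copies it to the GPU
tensor = torch.from_numpy(array).to('cuda:0')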

Thanks for the explanation!! Now I understand 🙂