Hello,
I’m trying to calculate the cosine similarity between two tensors of different shapes:
tensor1: torch.Size([15, 24, 4, 120])
tensor2: torch.Size([5608, 4, 120])
To make their shapes match, I used unsqueeze to expand both to
torch.Size([15, 24, 5608, 4, 120])
However, this new tensor makes me run out of memory: it has ~969 million elements and takes about 3.9 GB in float32.
Is there a better way to calculate cosine similarity without creating this giant tensor?
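A scaled-down sketch of the broadcasting described above (the small sizes here are stand-ins, not the real ones, and it assumes the similarity is taken over the last, 120-wide dimension, which the post does not state explicitly):

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in shapes so the broadcasted tensor stays small.
t1 = torch.rand(3, 2, 4, 8)       # stands in for [15, 24, 4, 120]
t2 = torch.rand(5, 4, 8)          # stands in for [5608, 4, 120]

# unsqueeze so both inputs broadcast to a common [3, 2, 5, 4, 8] shape
a = t1.unsqueeze(2)               # [3, 2, 1, 4, 8]
b = t2.unsqueeze(0).unsqueeze(0)  # [1, 1, 5, 4, 8]
sim = F.cosine_similarity(a, b, dim=-1)
print(sim.shape)                  # torch.Size([3, 2, 5, 4])

# At the real sizes the broadcasted intermediate would hold
# 15 * 24 * 5608 * 4 * 120 ≈ 9.69e8 elements ≈ 3.9 GB in float32.
```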
@Yunchao_Liu
you can try something like

import torch
import torch.nn as nn

tensor1 = torch.rand(15, 24, 4, 120)
tensor2 = torch.rand(5608, 4, 120)

cos = nn.CosineSimilarity(dim=0)

def custom_cos(cur_tensor):
    # cur_tensor: [4, 120], broadcasts against tensor2: [5608, 4, 120]
    return cos(cur_tensor, tensor2)

# iterate over the 15 * 24 = 360 slices one at a time
results = map(custom_cos, tensor1.view(-1, 4, 120))
results = torch.cat(list(results)).view(15, 24, 4, 120)
results.size()
If you are on a recent PyTorch version, you can also use vmap:
https://pytorch.org/tutorials/prototype/vmap_recipe.html
Thanks @anantguptadbl ! I will give it a try.
Hi @anantguptadbl. I tried the method, but it is too slow. I need to compute this tensor in every iteration of my forward function, so speed is also an issue.
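One broadcast-free alternative (not from the thread; a sketch assuming the similarity is over the last, 120-wide dimension): L2-normalize both tensors along that dimension, after which cosine similarity is just a dot product and can be written as a single einsum, so the [15, 24, 5608, 4, 120] intermediate is never materialized:

```python
import torch
import torch.nn.functional as F

t1 = torch.rand(15, 24, 4, 120)
t2 = torch.rand(5608, 4, 120)

# After normalization, cosine similarity reduces to a dot product
t1n = F.normalize(t1, dim=-1)
t2n = F.normalize(t2, dim=-1)

# Contract only over the 120-wide feature dim; the result is
# [15, 24, 5608, 4] (~8M elements, ~32 MB), far below 3.9 GB.
sim = torch.einsum('abcd,ecd->abec', t1n, t2n)
```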