hey there, I have a model that requires computing the average of a user's item embeddings as the user's interest at a particular time.

The problem is that the number of items interacted with varies across time periods, e.g. 5 for this month and 10 for another. Therefore, I can't just pack them into one tensor and call torch.mean.
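A quick illustration of the issue, using a toy ragged list (the numbers are made up, not from my actual data):

```python
import torch

# A ragged list of lists can't be packed into one rectangular tensor,
# so a single torch.mean over a batch dimension isn't directly possible.
try:
    torch.LongTensor([[1, 2, 3], [4, 5]])
    packed_ok = True
except (ValueError, TypeError):
    packed_ok = False
print(packed_ok)  # False
```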

Here is my code to tackle this problem by concatenating the means row by row into a new tensor.

```
import torch
import torch.nn as nn

items_emb = nn.Embedding(6000, 10)
items = [[3245, 3976, 4136, 2757, 2524],
         [5415, 2324, 3246, 1418, 298, 1052],
         [903, 3428, 106, 1130, 5717, 745, 5576, 4757, 4867, 2897],
         [5387, 1911, 5866, 824, 2462]]

def calc_mean(inputs):
    # Average over the item axis -> one embedding per interaction list
    return torch.mean(inputs, 0)

def get_mean_tensor(items_emb, items):
    first = items_emb(torch.LongTensor(items[0]))
    second = items_emb(torch.LongTensor(items[1]))
    embs = torch.cat([calc_mean(first), calc_mean(second)], 0)
    for i in range(2, len(items)):
        cur = items_emb(torch.LongTensor(items[i]))
        embs = torch.cat([embs, calc_mean(cur)], 0)
    # The concatenation produced one flat 1-D tensor; reshape to (len(items), 10)
    embs = embs.view(-1, 10)
    return embs

print(get_mean_tensor(items_emb, items))
```

The output is a tensor of shape len(items) x embedding dimension.

```
tensor([[ 0.0158, -0.2520, 0.2216, -0.2114, -0.6670, 0.4971, 0.5369, 0.7796,
0.4892, -0.3147],
[ 0.0904, 0.1451, 0.2905, -0.5891, -0.1611, -0.1272, -0.1093, -0.1912,
-0.1019, 0.0333],
[ 0.2133, -0.4288, -0.5960, -0.1288, -0.0509, -0.1023, -0.0398, -0.2128,
-0.2405, 0.0800],
[ 0.1568, -0.3687, 0.0273, 0.2744, -0.7340, -0.0794, 0.2946, -0.7839,
-0.0467, -0.0430]], grad_fn=<ViewBackward>)
```
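For what it's worth, I think the same row-by-row result could be built without the manual concatenation loop by stacking the per-list means (a minimal sketch with a shortened toy items list, not my full pipeline):

```python
import torch
import torch.nn as nn

items_emb = nn.Embedding(6000, 10)
items = [[3245, 3976, 4136, 2757, 2524],
         [5415, 2324, 3246, 1418, 298, 1052]]

# Embed each variable-length list, average over its items, then stack once.
means = torch.stack([items_emb(torch.LongTensor(ids)).mean(0) for ids in items])
print(means.shape)  # torch.Size([2, 10])
```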

Tell me if there’s a better way to do this, thanks!
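P.S. I also came across nn.EmbeddingBag, which with mode='mean' appears to compute exactly this kind of per-bag average over ragged index lists in one call (a sketch under that assumption; the offsets mark where each list starts in the flattened indices):

```python
import torch
import torch.nn as nn

bag = nn.EmbeddingBag(6000, 10, mode='mean')
items = [[3245, 3976, 4136, 2757, 2524],
         [5415, 2324, 3246, 1418, 298, 1052]]

# Flatten the ragged lists and record each list's starting position.
flat = torch.LongTensor([idx for ids in items for idx in ids])
offsets = torch.LongTensor([0] + [len(ids) for ids in items[:-1]]).cumsum(0)

out = bag(flat, offsets)  # one mean embedding per list
print(out.shape)  # torch.Size([2, 10])
```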