Attention-weighted aggregation

Let the tensor shown below represent two sentences (batch_size = 2), each composed of 3 words (max_length = 3), with each word represented by a vector of dimension 5 (hidden_size = 5), obtained as the output of a neural network:

net_output
# tensor([[[0.7718, 0.3856, 0.2545, 0.7502, 0.5844],
#          [0.4400, 0.3753, 0.4840, 0.2483, 0.4751],
#          [0.4927, 0.7380, 0.1502, 0.5222, 0.0093]],

#         [[0.5859, 0.0010, 0.2261, 0.6318, 0.5636],
#          [0.0996, 0.2178, 0.9003, 0.4708, 0.7501],
#          [0.4244, 0.7947, 0.5711, 0.0720, 0.1106]]])

Also consider the following attention scores:

att_scores
# tensor([[0.2425, 0.5279, 0.2295],
#         [0.2461, 0.4789, 0.2751]])

What is an efficient way to aggregate the vectors in net_output, weighted by att_scores, into a tensor of shape (2, 5)?
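One common way to do this (a sketch, not necessarily the only efficient option) is a batched matrix multiplication: treat each row of att_scores as a (1, 3) matrix and multiply it against the corresponding (3, 5) block of net_output, which computes the weighted sum over the word dimension in a single call.

```python
import torch

# Tensors from the question (batch_size=2, max_length=3, hidden_size=5).
net_output = torch.tensor([[[0.7718, 0.3856, 0.2545, 0.7502, 0.5844],
                            [0.4400, 0.3753, 0.4840, 0.2483, 0.4751],
                            [0.4927, 0.7380, 0.1502, 0.5222, 0.0093]],

                           [[0.5859, 0.0010, 0.2261, 0.6318, 0.5636],
                            [0.0996, 0.2178, 0.9003, 0.4708, 0.7501],
                            [0.4244, 0.7947, 0.5711, 0.0720, 0.1106]]])

att_scores = torch.tensor([[0.2425, 0.5279, 0.2295],
                           [0.2461, 0.4789, 0.2751]])

# (2, 1, 3) @ (2, 3, 5) -> (2, 1, 5), then drop the singleton dimension.
weighted = torch.bmm(att_scores.unsqueeze(1), net_output).squeeze(1)
print(weighted.shape)  # torch.Size([2, 5])
```

An equivalent one-liner is `torch.einsum('bl,blh->bh', att_scores, net_output)`; both avoid an explicit Python loop over the batch.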