Is it mandatory to flatten the embeddings?

I’m trying to replicate the results of a paper. The authors used TensorFlow, and they mention that it is crucial to flatten the embeddings.

Now, given this simple PyTorch implementation, does it make sense to flatten anything? If so, why?

import torch
import torch.nn as nn

class GMF(nn.Module):
    def __init__(self, config):
        super().__init__()
        # One embedding table per entity type; each row is a latent factor vector
        self.user_embedding = nn.Embedding(config['user_pool'], config['latent_dim'])
        self.item_embedding = nn.Embedding(config['item_pool'], config['latent_dim'])
        self.out = nn.Linear(config['latent_dim'], 1)

    def forward(self, users, items):
        users, items = self.user_embedding(users), self.item_embedding(items)
        # Element-wise (Hadamard) product of user and item latent factors
        hadamard = torch.mul(users, items)
        return self.out(hadamard)
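
For what it’s worth, here is a minimal shape check I put together (the `config` sizes and index tensors below are just illustrative). Assuming the inputs are 1-D LongTensors of indices, the embedding lookups already come out 2-D, so I don’t see what flattening would change:

config = {'user_pool': 100, 'item_pool': 200, 'latent_dim': 8}  # illustrative sizes
model = GMF(config)

users = torch.tensor([0, 1, 2])  # 1-D batch of user indices
items = torch.tensor([5, 6, 7])  # 1-D batch of item indices

print(model.user_embedding(users).shape)  # torch.Size([3, 8])
print(model(users, items).shape)          # torch.Size([3, 1])

My guess is the TF remark only applies when the indices arrive with shape (batch, 1), so the embedding output is (batch, 1, latent_dim) and needs flattening to (batch, latent_dim), but I’d like confirmation.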