What is the recommended way to calculate multi-head attention scores?

Which of the following is correct?

import torch.nn as nn

embeddings = nn.Embedding(vocab, dim_size)
# batch_first=True so inputs are (batch, seq, dim)
mha = nn.MultiheadAttention(dim_size, num_heads, batch_first=True)

embeds = embeddings(inp)
# Feed the raw embeddings directly as query, key, and value
mha_out, attn_weights = mha(embeds, embeds, embeds)

or

embeddings = nn.Embedding(vocab, dim_size)
# Separate projections applied before the attention layer
query_linear = nn.Linear(dim_size, dim_size)
key_linear = nn.Linear(dim_size, dim_size)
value_linear = nn.Linear(dim_size, dim_size)
mha = nn.MultiheadAttention(dim_size, num_heads, batch_first=True)

embeds = embeddings(inp)
q = query_linear(embeds)
k = key_linear(embeds)
v = value_linear(embeds)
mha_out, attn_weights = mha(q, k, v)

I've been looking around the Internet and the original Transformer paper, but I keep getting conflicting answers. I'd appreciate any help.
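
For context, here is a minimal self-contained version of what I'm testing; vocab, dim_size, num_heads, and the input batch are placeholder values I made up:

import torch
import torch.nn as nn

vocab, dim_size, num_heads = 1000, 64, 4

embeddings = nn.Embedding(vocab, dim_size)
mha = nn.MultiheadAttention(dim_size, num_heads, batch_first=True)

# Dummy batch of 2 sequences, each 10 tokens long
inp = torch.randint(0, vocab, (2, 10))
embeds = embeddings(inp)

# Variant 1: raw embeddings as query, key, and value
out1, _ = mha(embeds, embeds, embeds)

# Variant 2: project first, then attend
query_linear = nn.Linear(dim_size, dim_size)
key_linear = nn.Linear(dim_size, dim_size)
value_linear = nn.Linear(dim_size, dim_size)
out2, _ = mha(query_linear(embeds), key_linear(embeds), value_linear(embeds))

print(out1.shape, out2.shape)  # both: torch.Size([2, 10, 64])

Both variants run and produce the same output shape, so the shapes alone don't tell me which one is right.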