How can I average subword embeddings?

How can I average subword embedding vectors to generate an approximate vector for the original word? I get the embedding matrix using this function:

def get_bert_embed_matrix(bert):
    # The first child of a Hugging Face BertModel is its embeddings module
    bert_embeddings = list(bert.children())[0]
    # The first child of the embeddings module is the word (subword) embedding layer
    bert_word_embeddings = list(bert_embeddings.children())[0]
    # Embedding weights as a NumPy array of shape (vocab_size, hidden_size)
    mat = bert_word_embeddings.weight.data.numpy()
    return mat

and I tokenized the text with the BERT tokenizer.
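To make the question concrete, here is a sketch of what I am trying to do. It uses a small random NumPy matrix as a stand-in for the matrix returned by `get_bert_embed_matrix` (a real BERT matrix would be roughly 30522 x 768), and the `subword_ids` list stands in for the ids the tokenizer would produce for one word, e.g. `tokenizer.encode(word, add_special_tokens=False)`:

```python
import numpy as np

# Toy stand-in for the matrix returned by get_bert_embed_matrix:
# 10 "subword" rows with hidden size 4 (real BERT: ~30522 x 768).
rng = np.random.default_rng(0)
mat = rng.standard_normal((10, 4))

# Suppose the tokenizer split one word into subwords with these row ids,
# as tokenizer.encode(word, add_special_tokens=False) would return.
subword_ids = [2, 5, 7]

# Average the subword embedding rows to approximate a single word vector.
word_vec = mat[subword_ids].mean(axis=0)  # shape: (hidden_size,)
```

Is averaging the static embedding rows like this a reasonable way to approximate a vector for the original word?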