Skipping the embedding layers of the pre-trained GPT2 model from Hugging Face

This is the architecture of the GPT2 language model:
GPT2LMHeadModel(
  (transformer): GPT2Model(
    (wte): Embedding(50257, 768)
    (wpe): Embedding(1024, 768)
    (drop): Dropout(p=0.1, inplace=False)
    (h): ModuleList(
      (0): Block(
        (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (attn): Attention(
          (c_attn): Conv1D()
          (c_proj): Conv1D()
          (attn_dropout): Dropout(p=0.1, inplace=False)
          (resid_dropout): Dropout(p=0.1, inplace=False)
        )
        (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (mlp): MLP(
          (c_fc): Conv1D()
          (c_proj): Conv1D()
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      …more blocks are there
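
(For reference, this printout is just the result of loading the pre-trained model and printing it:)

from transformers import GPT2LMHeadModel

# load the pre-trained GPT2 weights and print the module structure shown above
model = GPT2LMHeadModel.from_pretrained("gpt2")
print(model)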

I am using the pre-trained GPT2 model from Hugging Face.
I want to use GATs (graph attention networks) to produce embeddings and combine them with the embeddings produced by the pre-trained GPT2 model.
So my question is: can we feed input directly to block 0, skipping the embedding layer part of this model?
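
To make the question concrete, here is a minimal sketch of what I have in mind, assuming the GAT output has already been projected to the model's hidden size of 768. The names gat_embeds and combined_embeds are just placeholders (here the GAT output is faked with random numbers), and the exact Block forward signature may differ between transformers versions:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model.eval()  # disable dropout for this sketch

input_ids = tokenizer("hello world", return_tensors="pt")["input_ids"]

# placeholder for the GAT output; in my real code this would come from a
# graph attention network, here it is random and only has the right shape
gat_embeds = torch.randn(input_ids.shape[0], input_ids.shape[1], 768)

# GPT2's own token embeddings from the wte layer
wte_embeds = model.transformer.wte(input_ids)

# combine the two embeddings (here simply by addition)
combined_embeds = wte_embeds + gat_embeds

# Option 1: pass inputs_embeds, which skips the wte lookup
# (the positional embeddings from wpe are still added inside the model)
out1 = model(inputs_embeds=combined_embeds)

# Option 2: feed the combined embeddings straight into the blocks,
# skipping both wte and wpe (so no positional information is added)
hidden_states = model.transformer.drop(combined_embeds)
for block in model.transformer.h:
    hidden_states = block(hidden_states)[0]
hidden_states = model.transformer.ln_f(hidden_states)
logits = model.lm_head(hidden_states)

Option 1 relies on the documented inputs_embeds argument, so the positional embeddings (wpe) are still added inside the model; Option 2 bypasses both wte and wpe, which I suspect loses positional information. Is something like Option 2 a valid way to give input directly to block 0?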