generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B') # Second line
OSError: Unable to load weights from pytorch checkpoint file for 'EleutherAI/gpt-neo-2.7B' at 'C:\Users\USER7/.cache\huggingface\transformers\0839a11efa893f2a554f8f540f904b0db0e5320a2b1612eb02c3fd25471c189a.a144c17634fa6a7823e398888396dd623e204dce9e33c3175afabfbf24bd8f56'
If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
I ran each line in a Jupyter Notebook.
Based on the error message, it seems that Hugging Face uses the from_tf argument (as mentioned e.g. here), so you could try passing this argument through to the from_pretrained method, if that's what's used under the hood.
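As a sketch of what that would look like: pipeline accepts a model_kwargs dict that is forwarded to from_pretrained, so from_tf can be passed that way. Note this is only a sketch of the mechanism, not a fix — EleutherAI/gpt-neo-2.7B actually ships PyTorch weights, so from_tf=True is expected to fail with a different error (as the next reply shows).

```python
from transformers import pipeline

def build_generator():
    # Forward from_tf=True to from_pretrained via model_kwargs.
    # Caveat: this flag only helps when the checkpoint on disk really is
    # a TF 2.0 one (tf_model.h5); for a PyTorch-only repo like this model
    # it will raise a new OSError instead.
    return pipeline(
        "text-generation",
        model="EleutherAI/gpt-neo-2.7B",
        model_kwargs={"from_tf": True},
    )
```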
You're not the only one; I ran into this problem as well. Setting from_tf to True only yields:
OSError: Can't load weights for 'EleutherAI/gpt-neo-2.7B'. Make sure that:
- 'EleutherAI/gpt-neo-2.7B' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'EleutherAI/gpt-neo-2.7B' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
I pre-trained a model on a custom corpus from the roberta-base checkpoint and saved it to Drive using save_pretrained(dir_path).
Now I am trying to load this pre-trained model using from_pretrained(dir_path).
It throws the same error. If anyone has found a solution, please point me in the right direction.
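For the directory case, the error message itself lists what from_pretrained expects to find: config.json plus one weight file. A quick hypothetical check (this helper is not part of transformers) is to list what is actually in dir_path — this catches the usual culprits, such as an interrupted save_pretrained or a Drive path that isn't mounted where you think it is:

```python
import os

# File names from_pretrained looks for, per the error message above.
WEIGHT_FILES = ("pytorch_model.bin", "tf_model.h5", "model.ckpt.index")

def inspect_checkpoint_dir(dir_path):
    # Return (file listing, config present?, weights present?) so you can
    # see at a glance which piece save_pretrained did not produce.
    files = sorted(os.listdir(dir_path))
    has_config = "config.json" in files
    has_weights = any(w in files for w in WEIGHT_FILES)
    return files, has_config, has_weights
```

If has_weights comes back False, from_pretrained will raise exactly the "Make sure that ... a file named one of pytorch_model.bin, tf_model.h5, model.ckpt" error quoted above, regardless of whether the model identifier is correct.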