Help: where do I set from_tf=True?

Hello,
Not sure this is a PyTorch issue. I'm very new to this and was trying something from a YouTube video called:

AI Text and Code Generation with GPT Neo and Python | GPT3 Clone
[GitHub - nicknochnack/GPTNeo]

Using Jupyter Notebook

This line was run:

generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')  # Second line
OSError: Unable to load weights from pytorch checkpoint file for 'EleutherAI/gpt-neo-2.7B' at 'C:\Users\USER7/.cache\huggingface\transformers\0839a11efa893f2a554f8f540f904b0db0e5320a2b1612eb02c3fd25471c189a.a144c17634fa6a7823e398888396dd623e204dce9e33c3175afabfbf24bd8f56'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.

I used Jupyter Notebook to execute each line.

What I do not know is where to set from_tf=True.

This was the first line:
!pip3 install torch==1.8.1+cpu torchvision==0.9.1+cpu torchaudio===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html

which executed OK.

Thanks,

V

Based on the error message, it seems that Hugging Face uses the from_tf argument, as mentioned e.g. here, so you could try to pass this argument to the from_pretrained method, if it's used.
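To make that concrete, here is a sketch of what I mean. As far as I know, pipeline() itself does not take a from_tf argument, so the flag has to go to from_pretrained() when loading the model, and the loaded model and tokenizer are then handed to pipeline(). The helper name build_generator is my own, not part of transformers:

```python
# Hedged sketch: pass from_tf=True to from_pretrained(), not to pipeline().
# build_generator is a hypothetical helper name, not a transformers API.

def build_generator(model_name="EleutherAI/gpt-neo-2.7B", from_tf=False):
    # Import inside the function so the helper can be defined (and read)
    # even on a machine without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # from_tf=True tells from_pretrained to convert a TF 2.0 checkpoint
    # (tf_model.h5) instead of expecting pytorch_model.bin.
    model = AutoModelForCausalLM.from_pretrained(model_name, from_tf=from_tf)
    return pipeline("text-generation", model=model, tokenizer=tokenizer)

# Usage (downloads a very large model, ~10 GB):
# generator = build_generator(from_tf=True)
# generator("def hello_world():", max_length=30)
```

Whether you actually need from_tf=True depends on which weight file the checkpoint ships with; for a plain PyTorch checkpoint the default from_tf=False is correct.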

Thank you for responding.

Unfortunately, this is still beyond me.
This example had 4 lines, and I guess it works for someone with the right setup, which I don't have.

Not sure if I can get away with this:

generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B', from_tf=True)

Guess I will blow up my system and try again!

I really appreciate your patience!

Thank you!


You’re not the only one. I also experienced this problem. Setting from_tf to True only yields

OSError: Can't load weights for 'EleutherAI/gpt-neo-2.7B'. Make sure that:

- 'EleutherAI/gpt-neo-2.7B' is a correct model identifier listed on 'https://huggingface.co/models'

- or 'EleutherAI/gpt-neo-2.7B' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.

Thanks for pointing this out!!!
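For what it's worth, this OSError can also appear when the cached download is corrupted or incomplete, so before fighting with from_tf it may be worth checking the cache file itself. A small sketch (my own helper, assuming the checkpoint is in PyTorch's zip format, which torch.save has used since version 1.6):

```python
import os
import zipfile

def is_valid_checkpoint(path):
    """Return True if the file exists and looks like a PyTorch
    zip-format checkpoint. Pre-1.6 pickle checkpoints are not zip
    files, so False is a strong hint, not proof, of corruption."""
    return os.path.isfile(path) and zipfile.is_zipfile(path)
```

If this returns False for the file named in the traceback, deleting that file from the Hugging Face cache directory and re-running the pipeline line forces a fresh download.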

I pre-trained the model on a custom corpus from the roberta-base checkpoint and saved it in Drive using save_pretrained(dir_path).
Now I am trying to load this pre-trained model using from_pretrained(dir_path).
It throws the same error. If anyone has found the solution, please guide me in the right direction.
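Not a fix, but a way to narrow it down: from_pretrained(dir_path) needs the directory to contain config.json plus one of the weight files named in the error message, so listing what save_pretrained() actually wrote usually shows what is missing. A sketch with a hypothetical helper (check_checkpoint_dir is my own name):

```python
import os

# Weight file names listed in the error message above.
EXPECTED_WEIGHTS = ("pytorch_model.bin", "tf_model.h5", "model.ckpt")

def check_checkpoint_dir(dir_path):
    """Return (list of weight files found, whether config.json exists)."""
    files = set(os.listdir(dir_path))
    found = [name for name in EXPECTED_WEIGHTS if name in files]
    return found, "config.json" in files
```

If the list comes back empty, the save step did not write the weights where you think it did, for example because the Drive path was not mounted when save_pretrained ran.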