I am using ReformerForQuestionAnswering to train on a QA task.
Here is a minimal snippet that reproduces the error:
import torch
from transformers import ReformerTokenizer, ReformerForQuestionAnswering
tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
loss.backward()
The backward() call raises the following error:
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
Thank you!