Loss.backward() on Huggingface Reformer model gives error

I am using a ReformerForQuestionAnswering for training on a QA task.

Here’s a snippet of the code that can reproduce the error.

import torch
from transformers import ReformerTokenizer, ReformerForQuestionAnswering

tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
loss.backward()

The backward() call throws the following error:

TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

Thank you!

I don’t know which operation raises this error, so could you please post the complete stack trace?
Also, you might get faster and better answers in the Hugging Face forum :wink:

I hadn’t put the model in train() mode.

After calling model.train(), it worked.