Hello, I'm using a pretrained BERT model for text embedding. I've applied a tokenizer to my data, but when I call the model with a tensor of dimension 2000x64, the process dies with an "end of process" error. It works when I use less data (1000x64):
with torch.no_grad():
    outputs = bertweet(inputs, attention_mask=attention_mask)
How can I run the model on a bigger tensor?
PS: I'm working locally on my computer, on Ubuntu 20.
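(An "end of process" kill with a larger input but not a smaller one usually means the machine ran out of memory. A common workaround is to feed the model in smaller batches and concatenate the results. Below is a minimal sketch of that loop; `DummyEncoder` is a made-up stand-in so the example runs on its own, and with the real model you would call `bertweet(batch, attention_mask=mask)` inside the same loop. `batch_size` is a free parameter to tune to your memory.)

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the pretrained model, so the sketch is
# self-contained: it just embeds token ids into a hidden dimension,
# playing the role of the model's last hidden state.
class DummyEncoder(nn.Module):
    def __init__(self, vocab_size=1000, hidden=16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)

    def forward(self, input_ids, attention_mask=None):
        return self.emb(input_ids)

model = DummyEncoder()
inputs = torch.randint(0, 1000, (2000, 64))    # same shape as in the question
attention_mask = torch.ones_like(inputs)

batch_size = 100                               # assumption: tune to fit memory
chunks = []
with torch.no_grad():
    for i in range(0, inputs.size(0), batch_size):
        # Process one slice at a time so only a small batch is in memory.
        out = model(inputs[i:i + batch_size],
                    attention_mask=attention_mask[i:i + batch_size])
        chunks.append(out)
embeddings = torch.cat(chunks, dim=0)
print(embeddings.shape)  # torch.Size([2000, 64, 16])
```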