Prevent crash when using a large tensor

Hello, I’m using a pretrained BERT model for text embedding. I’ve applied a tokenizer to my data, and when I call the model with a tensor of shape 2000x64, the process is killed with no Python traceback (just “end of process”). It works when I use less data (1000x64):

with torch.no_grad():
    outputs = bertweet(inputs, attention_mask=attention_mask)

How can I use my model with a bigger tensor?

PS: I’m working locally on my computer, on Ubuntu 20.

Are you using the CPU or a GPU?
Could you check whether you are running out of system RAM, which would cause your script to simply crash? (If your GPU were running out of memory, you should see a CUDA out-of-memory error instead.)
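
If it is indeed an out-of-memory issue, a common workaround is to split the input into smaller batches, run the model on each slice under `torch.no_grad()`, and concatenate the results. A minimal sketch, assuming the model returns an object with a `last_hidden_state` attribute (as Hugging Face models do) — the function name and batch size here are just illustrative:

```python
import torch

def embed_in_batches(model, inputs, attention_mask, batch_size=250):
    # Slice the big input tensor along dim 0 so only `batch_size` rows
    # are in memory at once, then stitch the per-batch outputs together.
    chunks = []
    with torch.no_grad():
        for start in range(0, inputs.size(0), batch_size):
            end = start + batch_size
            out = model(inputs[start:end],
                        attention_mask=attention_mask[start:end])
            chunks.append(out.last_hidden_state)
    return torch.cat(chunks, dim=0)
```

Peak memory then scales with `batch_size` rather than with the full 2000x64 input, so you can tune the batch size down until it fits in your RAM (or GPU memory).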