GPU out of memory error

I'm getting a “CUDA out of memory” error with the code below. The test data is an iterator. How can I fix this? Can I split the test data into batches and run the predictions batch by batch?

Here is the code where the error occurs:

# get predictions for test data
with torch.no_grad():
    preds = model(test_seq.to(device), test_mask.to(device))
    preds = preds.detach().cpu().numpy()

Your batch size might be too large, so you could try to lower it during the test run.
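Since your test data can be iterated, you can also run inference in smaller chunks and move each batch's predictions back to the CPU before the next one, so only a single batch sits on the GPU at a time. Below is a minimal sketch, assuming a `torch.utils.data.DataLoader` built from your `test_seq` and `test_mask` tensors and the `model` / `device` from your snippet; the loader name and batch size are just placeholders, so adapt them to your setup.

```python
import numpy as np
import torch

# hypothetical loader over your test tensors; tune batch_size to fit your GPU memory
test_loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(test_seq, test_mask),
    batch_size=32,
)

model.eval()
all_preds = []

with torch.no_grad():
    for seq_batch, mask_batch in test_loader:
        # only this batch is moved to the GPU
        out = model(seq_batch.to(device), mask_batch.to(device))
        # bring the results back to the CPU right away so GPU memory is freed for the next batch
        all_preds.append(out.detach().cpu().numpy())

# stack the per-batch predictions into one array
preds = np.concatenate(all_preds, axis=0)
```

If even the concatenated predictions are large, you could write each batch's output to disk instead of keeping them in a list.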

PS: you can post code snippets by wrapping them into three backticks ``` :wink: