Getting an error when loading the Donut transformers model

self.model = self.model.to(self.device)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1145, in to
return self._apply(convert)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 820, in _apply
param_applied = fn(param)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: device-side assert triggered
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

My code is:

import torch
from transformers import DonutProcessor, VisionEncoderDecoderModel

class Donut:
    def __init__(self):
        # Load the DocVQA-finetuned Donut processor and model, then move the model to GPU if available
        self.processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
        self.model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.model.to(self.device)

torch version: torch==2.0.1+cu118
torchaudio==2.0.2+cu118
torchvision==0.15.2+cu118
NVIDIA driver: 550.144.03, CUDA version: 12.4

Device-side asserts are often triggered by failing indexing operations, so check whether e.g. an embedding layer received an invalid input or any other indexing operation is failing.
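
In case it helps with debugging: a minimal sketch (not from the original post; the prompt string below is only an illustration) of how to get a readable error and check the decoder inputs against the embedding tables. Setting CUDA_LAUNCH_BLOCKING=1, or simply reproducing on CPU, usually turns the vague device-side assert into a Python error that points at the actual indexing failure:

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # report the failing CUDA kernel at its call site

import torch
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

# Hypothetical DocVQA prompt; replace with the prompt actually being tokenized
prompt = "<s_docvqa><s_question>What is the date?</s_question><s_answer>"
decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids

# Every token id must be a valid row of the decoder's token embedding
vocab_size = model.config.decoder.vocab_size
assert decoder_input_ids.max().item() < vocab_size, "token id outside the embedding range"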

Thanks for the reply. It's resolved now; the cause was a max_position_embeddings mismatch.
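
For anyone landing here with the same fix, a short sketch of checking the checkpoint's decoder length limit up front (assumptions: same checkpoint as above, and the prompt string is just an example), so a mismatch shows up as a plain assertion rather than a device-side assert:

from transformers import DonutProcessor, VisionEncoderDecoderConfig

config = VisionEncoderDecoderConfig.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

# The decoder's position-embedding table bounds the longest sequence it can index
max_pos = config.decoder.max_position_embeddings
print("decoder max_position_embeddings:", max_pos)

# Hypothetical prompt; everything fed to the decoder (prompt plus generated answer) must stay within max_pos
prompt = "<s_docvqa><s_question>What is the invoice date?</s_question><s_answer>"
ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids
assert ids.shape[1] <= max_pos, f"sequence length {ids.shape[1]} exceeds max_position_embeddings {max_pos}"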