Why do I see this error when I load a GeoSeg model?

I followed the structure from this GitHub repository:
WangLibo1995/GeoSeg: UNetFormer: A UNet-like transformer for efficient semantic segmentation of remote sensing urban scene imagery, ISPRS. Also including other vision transformers and CNNs for satellite, aerial image and UAV image segmentation. (github.com)
I trained the model, and when I test it to see the results with this command:
python GeoSeg/loveda_test.py -c GeoSeg/config/loveda/dcswin.py -o fig_results/loveda/dcswin_test -t 'd4'
I get the errors below. (Note that I did not change any parameters except the batch size, which I set to 1 because I was getting a CUDA out-of-memory error.)
model = Supervision_Train.load_from_checkpoint(os.path.join(config.weights_path, config.test_weights_name+'.ckpt'), config=config)
model.cuda()
model.eval()

    model = Supervision_Train.load_from_checkpoint(os.path.join(config.weights_path, config.test_weights_name+'.ckpt'), config=config)
  File "/home/deed/anaconda3/envs/airs/lib/python3.8/site-packages/pytorch_lightning/utilities/model_helpers.py", line 125, in wrapper
    return self.method(cls, *args, **kwargs)
  File "/home/deed/anaconda3/envs/airs/lib/python3.8/site-packages/pytorch_lightning/core/module.py", line 1586, in load_from_checkpoint
    loaded = _load_from_checkpoint(
  File "/home/deed/anaconda3/envs/airs/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 63, in _load_from_checkpoint
    checkpoint = pl_load(checkpoint_path, map_location=map_location)
  File "/home/deed/anaconda3/envs/airs/lib/python3.8/site-packages/lightning_fabric/utilities/cloud_io.py", line 56, in _load
    with fs.open(path_or_url, "rb") as f:
  File "/home/deed/anaconda3/envs/airs/lib/python3.8/site-packages/fsspec/spec.py", line 1293, in open
    f = self._open(
  File "/home/deed/anaconda3/envs/airs/lib/python3.8/site-packages/fsspec/implementations/local.py", line 184, in _open
    return LocalFileOpener(path, mode, fs=self, **kwargs)
  File "/home/deed/anaconda3/envs/airs/lib/python3.8/site-packages/fsspec/implementations/local.py", line 306, in __init__
    self._open()
  File "/home/deed/anaconda3/envs/airs/lib/python3.8/site-packages/fsspec/implementations/local.py", line 311, in _open
    self.f = open(self.path, mode=self.mode)
FileNotFoundError: [Errno 2] No such file or directory: '/media/deed/Programs/Miss_Yousefi/treesegmentation/model_weights/loveda/dcswin-small-512crop-ms-epoch30/dcswin-small-512crop-ms-epoch30.ckpt'
(airs) deed@deedasia-980GTX:/media/deed/Programs/Miss_Yousefi/treesegmentation$ python GeoSeg/loveda_test.py -c GeoSeg/config/loveda/dcswin.py -o fig_results/loveda/dcswin_test -t 'd4'
INFO:albumentations.check_version:A new version of Albumentations is available: 1.4.11 (you have 1.4.10). Upgrade using: pip install --upgrade albumentations
/home/deed/anaconda3/envs/airs/lib/python3.8/site-packages/torch/functional.py:512: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at …/aten/src/ATen/native/TensorShape.cpp:3587.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Traceback (most recent call last):
  File "GeoSeg/loveda_test.py", line 138, in <module>
    main()
  File "GeoSeg/loveda_test.py", line 60, in main
    model = Supervision_Train.load_from_checkpoint(os.path.join(config.weights_path, config.test_weights_name+'.ckpt'), config=config)
  File "/home/deed/anaconda3/envs/airs/lib/python3.8/site-packages/pytorch_lightning/utilities/model_helpers.py", line 125, in wrapper
    return self.method(cls, *args, **kwargs)
  File "/home/deed/anaconda3/envs/airs/lib/python3.8/site-packages/pytorch_lightning/core/module.py", line 1586, in load_from_checkpoint
    loaded = _load_from_checkpoint(
  File "/home/deed/anaconda3/envs/airs/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 91, in _load_from_checkpoint
    model = _load_state(cls, checkpoint, strict=strict, **kwargs)
  File "/home/deed/anaconda3/envs/airs/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 187, in _load_state
    keys = obj.load_state_dict(checkpoint["state_dict"], strict=strict)
  File "/home/deed/anaconda3/envs/airs/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2189, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Supervision_Train:
size mismatch for net.decoder.segmentation_head.0.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 96, 3, 3]).
size mismatch for net.decoder.segmentation_head.0.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for net.decoder.segmentation_head.0.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for net.decoder.segmentation_head.0.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for net.decoder.segmentation_head.0.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([96]).
Can anyone help me solve it?

There seems to be a mismatch between the model structure you have instantiated and the model structure whose weights are stored in the checkpoint. The size-mismatch messages show that the checkpoint's segmentation head has 64 channels while the model built from your current config has 96, so the config is most likely constructing a different DCSwin variant (or decoder width) than the one that produced the checkpoint.
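A quick way to confirm this is to load the checkpoint directly and compare its tensor shapes against the model your config builds. Below is a minimal diagnostic sketch; the py2cfg loader, the train_supervision import path, and the checkpoint location are assumptions based on your command and traceback, so adjust them to your setup.

```python
# Diagnostic sketch: list every parameter whose shape differs between the
# checkpoint and the model built from the current config.
# NOTE: import paths and the checkpoint path below are assumptions, not
# guaranteed to match your repo layout -- adjust them before running.
import torch
from tools.cfg import py2cfg                      # GeoSeg's config loader (assumed path)
from train_supervision import Supervision_Train   # assumed path

config = py2cfg("GeoSeg/config/loveda/dcswin.py")
model = Supervision_Train(config)                 # model as defined by the current config

ckpt = torch.load(
    "model_weights/loveda/dcswin-small-512crop-ms-epoch30/dcswin-small-512crop-ms-epoch30.ckpt",
    map_location="cpu",
)
model_state = model.state_dict()

for name, tensor in ckpt["state_dict"].items():
    if name not in model_state:
        print(f"{name}: present in checkpoint, missing in model")
    elif model_state[name].shape != tensor.shape:
        print(f"{name}: checkpoint {tuple(tensor.shape)} vs model {tuple(model_state[name].shape)}")
```

If the mismatching keys are all in the decoder/segmentation head, as in your log, double-check that the net defined in GeoSeg/config/loveda/dcswin.py is the same DCSwin variant you actually trained, and that weights_path and test_weights_name point at the checkpoint produced by your own training run rather than the default dcswin-small name shown in the earlier FileNotFoundError.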