Reduce capacity of a model

I am trying to run a deep learning model written in PyTorch. Since I am running it on my laptop, I am only able to process about 20 frames. Can anyone please tell me how I can input 20 frames of a video dataset instead of the whole thing?
The model also includes 2 .pth files. Do these need to be modified? Here is the code

Is this due to a memory limitation? Would you like to process 20 frames in each batch?

How are you loading your data so far?
If you are using a custom Dataset, limiting the length should be easy, as you would only have to return the reduced number from __len__(self).
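A minimal sketch of that idea, assuming your data is already loaded as a sequence of frames (the class name and attributes here are hypothetical, not from SiamMask):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class LimitedFrameDataset(Dataset):
    """Hypothetical Dataset exposing only the first `max_frames` samples."""
    def __init__(self, frames, max_frames=20):
        self.frames = frames          # e.g. a tensor or list of video frames
        self.max_frames = max_frames

    def __len__(self):
        # Reporting the reduced length makes the DataLoader stop early
        return min(len(self.frames), self.max_frames)

    def __getitem__(self, idx):
        return self.frames[idx]

# Usage: 100 dummy "frames", but only 20 are ever iterated
frames = torch.randn(100, 3, 64, 64)
dataset = LimitedFrameDataset(frames, max_frames=20)
loader = DataLoader(dataset, batch_size=4)
print(len(dataset))  # 20
```

Since `__getitem__` is never called with an index >= `__len__()` by the default sampler, the remaining frames are simply never touched.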

What do these files contain?

You would need to provide some more information about the use case, what you have done so far, and where you are stuck currently.

I just followed the steps written for execution and have not added anything to the code. Yes, I want to input a video with 20 frames. Yes, this is due to a memory limitation, and I am using the DAVIS and VOT datasets. These .pth files were included with the code.

When I execute this command
python …/…/tools/ --resume SiamMask_DAVIS.pth --config config_davis.json

only an image is output, without the mask and bounding box, and the following is shown. Please help me fix this:

[2019-11-30 18:24:53,66] Current training 0 layers:
[2019-11-30 18:24:53,66] Current training 1 layers:
[2019-11-30 18:24:53,31] load pretrained model from SiamMask_DAVIS.pth
[2019-11-30 18:24:57,25] remove prefix 'module.'
[2019-11-30 18:24:57,18] used keys:356

I am trying this code

Sorry, I'm not familiar with the codebase and think you might get a better answer if you ask in the repository directly.

I have done that. Before I get a solution from there, could you please tell me what .pth files actually contain? What could be the reason behind the "load pretrained model from SiamMask_DAVIS.pth", "remove prefix 'module.'", and "used keys:356" messages? Are they errors?
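For context, a .pth file is usually a serialized checkpoint saved with torch.save, most commonly a state_dict: a mapping from parameter names to tensors. Those log lines look informational rather than errors: checkpoints saved from an nn.DataParallel model carry a "module." prefix on every key, and the loader apparently strips it before loading, then reports how many keys it matched. A small sketch of that mechanism (this is generic PyTorch behaviour, not the SiamMask loader itself):

```python
import torch
import torch.nn as nn

# A plain model and the same model wrapped in DataParallel
model = nn.Linear(4, 2)
wrapped = nn.DataParallel(model)

# The wrapped state_dict prefixes every key with "module."
state_dict = wrapped.state_dict()
print(list(state_dict.keys()))  # ['module.weight', 'module.bias']

# Strip the prefix so the checkpoint fits a plain (unwrapped) model,
# which is presumably what "remove prefix 'module.'" refers to
clean = {k.replace('module.', '', 1): v for k, v in state_dict.items()}
model.load_state_dict(clean)

# "used keys:356" would then be the count of matched parameter keys
print(len(clean))  # 2 for this tiny model
```

So a message like "used keys:356" just means 356 parameter tensors from the checkpoint were loaded into the model; the missing mask/bounding box in your output is likely a separate issue.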