Force_reload but still uses cache?

Hello,
My team and I are currently facing a problem. We are fine-tuning the last layers of a SlowFast network to recognize actions in videos.
After some time we start getting some “nan” outputs, and we are looking for the cause and a fix. My question is not about that, though, but about the fact that even after running

slowfast = torch.hub.load('facebookresearch/pytorchvideo', 'slowfast_r50', pretrained=True, force_reload=True)

I get nan again. It seems that even with force_reload=True, PyTorch uses the cached model, which appears to have been modified :confused:

What can I do to force it to use a fresh model?
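
For example, would manually deleting the cached checkpoint before loading be the right approach? A minimal sketch of what we are considering (assuming torch.hub.get_dir() points at the cache directory shown in the log below):

import os
import torch

# Hypothetical workaround: remove the cached checkpoint so the next load
# has to re-download the weights. Filename taken from the download log below.
ckpt = os.path.join(torch.hub.get_dir(), "checkpoints", "SLOWFAST_8x8_R50.pyth")
if os.path.exists(ckpt):
    os.remove(ckpt)

slowfast = torch.hub.load('facebookresearch/pytorchvideo', 'slowfast_r50', pretrained=True, force_reload=True)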

What also makes me think the cache is being used is that the first time the container is launched, these two lines appear:

Downloading: "https://github.com/facebookresearch/pytorchvideo/zipball/main" to /root/.cache/torch/hub/main.zip
Downloading: "https://dl.fbaipublicfiles.com/pytorchvideo/model_zoo/kinetics/SLOWFAST_8x8_R50.pyth" to /root/.cache/torch/hub/checkpoints/SLOWFAST_8x8_R50.pyth

After that, only the first line appears …
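
To check whether the cached checkpoint really changes between runs, I was thinking of hashing it on each launch and comparing the digests (just a sketch, using the cache path from the log above):

import hashlib
import os
import torch

# Print a digest of the cached checkpoint so it can be compared across container launches.
ckpt = os.path.join(torch.hub.get_dir(), "checkpoints", "SLOWFAST_8x8_R50.pyth")
with open(ckpt, "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())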
Thank you in advance