Hello PyTorch community,
I hope this message finds you well. I have been trying to use the swin_v2_b model pre-trained on ImageNet-22k (https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_base_patch4_window12_192_22k.pth), but I have run into some difficulties. Despite my efforts, I have been unable to load the checkpoint via the `load_state_dict_from_url` function and apply it to the swin_v2_b model instantiated from `torchvision.models`, and I am unsure how to proceed.
Could someone kindly guide me on how to properly load the pre-trained swin_v2_b model with the ImageNet-22k weights, access its input features, modify the architecture if needed, and integrate it into my project?
Any help, suggestions, or code examples would be greatly appreciated. Thank you in advance for your time and support.