D_L
(D L)
June 22, 2021, 9:42pm
1
Working through the DeepInsight example here: https://github.com/alok-ai-lab/DeepInsight/blob/master/examples/pytorch_squeezenet.ipynb .
However, when trying to load this I get “HTTPError: rate limit exceeded”
I did some searching and couldn’t find a solution. There were some mentions of something similar being due to your location here.
Any thoughts?
D_L
(D L)
June 23, 2021, 7:21pm
2
FYI - the workaround is downloading the repo to disk from GitHub and then:
torch.hub.load(PATH, 'squeezenet1_1', source='local', pretrained=False, verbose=False)
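A minimal sketch of that workaround, assuming torch is installed and the hub repo has already been cloned to a hypothetical local folder (the folder name `vision_repo` here is just an example):

```python
# Sketch of the "download to disk, then load locally" workaround.
# With source='local', torch.hub reads straight from the given path and
# never contacts the GitHub API, so the rate limit is never hit.
import os

repo_path = os.path.abspath("vision_repo")  # hypothetical local clone

def load_local(path, model_name):
    """Load a hub model from a local checkout; no GitHub API call is made."""
    import torch  # imported lazily so the sketch is readable without torch
    return torch.hub.load(path, model_name, source="local",
                          pretrained=False, verbose=False)

# model = load_local(repo_path, "squeezenet1_1")  # uncomment once cloned
print(repo_path.endswith("vision_repo"))
```

The key point is `source='local'`: the first argument is then treated as a filesystem path instead of an `owner/repo` GitHub spec.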
cloudhan
(Cloud Han)
June 24, 2021, 3:51am
3
You can comment out the `_validate_not_a_forked_repo(...)` call below (it is the line that queries the GitHub API) to work around this issue temporarily.
# To check if cached repo exists, we need to normalize folder names.
repo_dir = os.path.join(hub_dir, '_'.join([repo_owner, repo_name, normalized_br]))

use_cache = (not force_reload) and os.path.exists(repo_dir)

if use_cache:
    if verbose:
        sys.stderr.write('Using cache found in {}\n'.format(repo_dir))
else:
    # Validate the tag/branch is from the original repo instead of a forked repo
    _validate_not_a_forked_repo(repo_owner, repo_name, branch)

    cached_file = os.path.join(hub_dir, normalized_br + '.zip')
    _remove_if_exists(cached_file)

    url = _git_archive_link(repo_owner, repo_name, branch)
    sys.stderr.write('Downloading: \"{}\" to {}\n'.format(url, cached_file))
    download_url_to_file(url, cached_file, progress=False)

    with zipfile.ZipFile(cached_file) as cached_zipfile:
        extraced_repo_name = cached_zipfile.infolist()[0].filename
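Note what the snippet implies: if the cached folder already exists and `force_reload` is False, the whole `else` branch (including the forked-repo validation that hits the GitHub API) is skipped. So pre-seeding the cache also avoids the rate limit. A stdlib-only sketch of how that cache path is built, assuming the default hub directory is `~/.cache/torch/hub` (it is configurable via `torch.hub.set_dir()`):

```python
# Sketch: reproduce torch.hub's cache-folder naming from the snippet above.
# If this folder exists and force_reload is False, torch.hub uses it and
# never calls the GitHub API, so no rate limit applies.
import os

hub_dir = os.path.expanduser("~/.cache/torch/hub")  # assumed default location
repo_owner, repo_name, branch = "pytorch", "vision", "main"  # example repo

# Matches repo_dir = os.path.join(hub_dir, '_'.join([...])) in hub.py
repo_dir = os.path.join(hub_dir, "_".join([repo_owner, repo_name, branch]))
print(repo_dir)  # place the repo's contents here to pre-seed the cache
```

Cloning or unzipping the hub repo into that exact folder before calling `torch.hub.load(..., force_reload=False)` should make the API call unnecessary.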
Hi D_L,
Did you get a solution to this problem?
D_L
(D L)
July 6, 2021, 10:13pm
5
I worked around it using the solution I posted above. Thanks for following up!
Verner
July 6, 2021, 11:28pm
6
Hi again,
Thanks for answering. So you just downloaded the code and ran it from your local machine? Is that all?
Thanks a lot.
Verner
July 7, 2021, 11:55pm
7
Hi D_L,
Thanks for the answer.
After reading it, I managed to run the model by copying the code from GitHub into Colab.
It’s not a definitive solution, but it lets me continue my research.
Thanks a lot.
PyTorch released a bugfix for 1.9 (i.e. 1.9.1), so updating to 1.9.1 should rectify this issue for good.