Pretrained model load error

Hi all,
I am a beginner with PyTorch and am trying to learn the library.
I ran into a problem loading a pretrained model with the following code:

import torchvision.models as models
resnet18 = models.resnet18(pretrained=True)

The error output is as follows:

Downloading: "" to /home/xxx/.cache/torch/checkpoints/resnet18-5c106cde.pth

TqdmKeyError                              Traceback (most recent call last)
<ipython-input-1-ca5a1cdda949> in <module>
      1 import torchvision.models as models
----> 2 resnet18 = models.resnet18(pretrained=True)
      3 #resnet18 = models.resnet18()

~/.conda/envs/wzh/lib/python3.6/site-packages/torchvision/models/ in resnet18(pretrained, progress, **kwargs)
    229     """
    230     return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress,
--> 231                    **kwargs)

~/.conda/envs/wzh/lib/python3.6/site-packages/torchvision/models/ in _resnet(arch, block, layers, pretrained, progress, **kwargs)
    215     if pretrained:
    216         state_dict = load_state_dict_from_url(model_urls[arch],
--> 217                                               progress=progress)
    218         model.load_state_dict(state_dict)
    219     return model

~/.conda/envs/wzh/lib/python3.6/site-packages/torch/ in load_state_dict_from_url(url, model_dir, map_location, progress, check_hash)
    483         sys.stderr.write('Downloading: "{}" to {}\n'.format(url, cached_file))
    484         hash_prefix = if check_hash else None
--> 485         download_url_to_file(url, cached_file, hash_prefix, progress=progress)
    487     # Note: extractall() defaults to overwrite file if exists. No need to clean up beforehand.

~/.conda/envs/wzh/lib/python3.6/site-packages/torch/ in download_url_to_file(url, dst, hash_prefix, progress)
    402             sha256 = hashlib.sha256()
    403         with tqdm(total=file_size, disable=not progress,
--> 404                   unit='B', unit_scale=True, unit_divisor=1024) as pbar:
    405             while True:
    406                 buffer =

~/.local/lib/python3.6/site-packages/tqdm/ in __init__(self, iterable, desc, total, leave, file, ncols, mininterval, maxinterval, miniters, ascii, disable, unit, unit_scale, dynamic_ncols, smoothing, bar_format, initial, position, postfix, gui, **kwargs)
    660 """, fp_write=getattr(file, 'write', sys.stderr.write))
    661                 if "nested" in kwargs else
--> 662                 TqdmKeyError("Unknown argument(s): " + str(kwargs)))
    664         # Preprocess the arguments

TqdmKeyError: "Unknown argument(s): {'unit_divisor': 1024}"

If I remove “pretrained=True”, the code works normally. I tried to google the error message, but found nothing useful. Could you please help me with this error? Thanks!


It seems tqdm doesn’t recognize the unit_divisor argument.
Did you install another version (or an older one) in your environment?
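One way to check this directly (a sketch, not from the original thread): `unit_divisor` is a named parameter of `tqdm`'s constructor on recent releases, so you can inspect the signature of whatever copy your environment actually imports. If the parameter is missing, the release predates `unit_divisor` and will raise `TqdmKeyError` exactly as in the traceback above.

```python
# Sketch: check whether the installed tqdm accepts unit_divisor,
# the keyword argument that triggers TqdmKeyError on old releases.
import inspect

from tqdm import tqdm

params = inspect.signature(tqdm.__init__).parameters
print('unit_divisor' in params)  # True on recent tqdm releases
```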

No, I just installed:

PyTorch 1.3.1,
torchvision 0.4.2,
Python 3.6.9,
cudatoolkit 10.1.243.

But I will try an older version such as PyTorch 1.2 and see.

PyTorch 1.2.0 causes this error too.
But PyTorch 1.1.0 with torchvision 0.3.0 does not trigger it.

Could you print the tqdm version in your current environment, please?
Downgrading PyTorch to 1.1.0 is not the right solution to this problem, and I would like to figure out what’s going on.
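A quick way to check both the version and, just as importantly, *which copy* gets imported. Note that in the traceback above, torch loads from `~/.conda/envs/wzh/...` while tqdm loads from `~/.local/lib/...` (the user site-packages), so an old user-site tqdm may be shadowing the conda environment's copy:

```python
# Print the tqdm version and the path it was imported from --
# an old copy in ~/.local (user site) can shadow the conda env's copy.
import tqdm

print(tqdm.__version__)
print(tqdm.__file__)
```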

My current tqdm version is 4.11.2
Hope it helps.

Could you update tqdm to 4.36.1 or a later version? (The latest seems to be 4.41.1).

I’ve got the same issue with tqdm 4.36.1. The weird thing is that I was able to download pretrained models as recently as 8 days ago. Not sure what’s changed since then…

I met the same error. A simple reinstall and upgrade of tqdm solved it!
pip uninstall tqdm
pip install tqdm

If that still doesn’t work, use pip install tqdm==4.46 (or any version newer than the one you had).
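If the plain reinstall keeps resolving to an old copy, it may live in the user site (`~/.local`, as in the traceback earlier in this thread) rather than in the active environment. A sketch of upgrading and then verifying which copy Python actually imports:

```shell
# Upgrade tqdm to a release that supports unit_divisor
# (4.36.1 or later, per the suggestion above).
pip install --upgrade "tqdm>=4.36.1"

# Verify the version and the path it resolves to; if this still shows
# an old copy under ~/.local, upgrade that one with: pip install --user --upgrade tqdm
python -c "import tqdm; print(tqdm.__version__, tqdm.__file__)"
```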
