Pretrained VGG-Face model

I have searched for a pretrained vgg-face model in PyTorch but couldn't find one. Is there a GitHub repo with a pretrained vgg-face model for PyTorch?

Hi! I hope it’s not too late.
I found a page with details on the vgg-face model along with its weights, linked below. Scroll down to the vgg-face section and download what you need.

http://www.robots.ox.ac.uk/~albanie/pytorch-models.html

Hope this helps.

Thank you @shashankvkt.


Hi, can these weights be loaded into torchvision's VGG16 model?

I don't think it's available as a torchvision model. You still have to load the pretrained weights manually.
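For anyone else stuck here, a minimal sketch of the manual loading (file names hypothetical). The models exported on that page ship as a .py network definition plus a matching .pth state dict, and the generated definition exposes a constructor that takes a weights path:

# Assumption: vgg_face_dag.py and vgg_face_dag.pth were downloaded from the
# page above and sit next to this script.
from vgg_face_dag import vgg_face_dag

model = vgg_face_dag(weights_path='vgg_face_dag.pth')
model.eval()

# Loading the .pth straight into torchvision's vgg16 fails on mismatched keys:
# the parameter names follow the MatConvNet export, not torchvision's naming.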

I managed to load them manually, thanks for your response.

from torchvision import transforms


def compose_transforms(meta, resize=256, center_crop=True,
                       override_meta_imsize=False):
    """
    Compose preprocessing transforms for model
    The imported models use a range of different preprocessing options,
    depending on how they were originally trained. Models trained in MatConvNet
    typically require input images that have been scaled to [0,255], rather
    than the [0,1] range favoured by PyTorch.
    Args:
        meta (dict): model preprocessing requirements
        resize (int) [256]: resize the input image to this size
        center_crop (bool) [True]: whether to center crop the image
        override_meta_imsize (bool) [False]: if true, use the value of `resize`
           to select the image input size, rather than the properties contained
           in meta (this option only applies when center cropping is not used).
    Return:
        (transforms.Compose): Composition of preprocessing transforms
    """
    normalize = transforms.Normalize(mean=meta['mean'], std=meta['std'])
    im_size = meta['imageSize']
    assert im_size[0] == im_size[1], 'expected square image size'
    if center_crop:
        transform_list = [transforms.Resize(resize),
                          transforms.CenterCrop(size=(im_size[0], im_size[1]))]
    else:
        if override_meta_imsize:
            im_size = (resize, resize)
        transform_list = [transforms.Resize(size=(im_size[0], im_size[1]))]
    transform_list += [transforms.ToTensor()]
    if meta['std'] == [1, 1, 1]:  # common amongst mcn models
        transform_list += [lambda x: x * 255.0]
    transform_list.append(normalize)
    return transforms.Compose(transform_list)

The model uses the preprocessing function above. Which transforms are actually applied here?

Also, why is the following transform used?

if meta['std'] == [1, 1, 1]:  # common amongst mcn models
    transform_list += [lambda x: x * 255.0]
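
As the docstring above notes, models exported from MatConvNet were trained on inputs in the [0,255] range, whereas ToTensor() rescales images to [0,1]; the x * 255.0 lambda simply undoes that rescaling before normalization (and with std [1, 1, 1], Normalize then only subtracts the per-channel mean). For concreteness, a sketch of the resulting pipeline with placeholder meta values (the real mean/std/imageSize ship with the downloaded model):

from torchvision import transforms

# Placeholder values in the format compose_transforms expects.
meta = {'mean': [129.2, 104.8, 93.6],  # per-channel means on a 0-255 scale
        'std': [1, 1, 1],              # no std scaling, common for mcn exports
        'imageSize': [224, 224, 3]}

preproc = compose_transforms(meta)
# Equivalent to:
#   Resize(256) -> CenterCrop(224) -> ToTensor() -> (x * 255.0)
#   -> Normalize(mean, std)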