I’m trying to use the pretrained vgg16 model provided by torchvision.

But it seems the expected input is not a single 3×224×224 image, as I got the following error:

RuntimeError: Expected 4-dimensional input for 4-dimensional weight 64 3 3 3, but got 3-dimensional input of size [3, 224, 224] instead

My code:

import torch
import torch.nn as nn
from torchvision import models
original_model = models.vgg16(pretrained=True)
ran = torch.rand((3, 224, 224))
original_model.features(ran)