The size of my images is larger than the input size expected by the LeNet5 architecture. How can I reduce the image size so that the images can be fed into the network?

Hey, I got an error in my project, please help me.

Here is the code snippet

Here is the entire notebook

I think you could use torchvision transforms (https://pytorch.org/docs/stable/torchvision/transforms.html) and apply them to all of your datasets.
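For example, a minimal sketch of wiring a transform into a dataset, assuming an ImageFolder-style directory layout (the path here is a placeholder):

import torchvision.transforms as tt
from torchvision.datasets import ImageFolder

tfms = tt.Compose([
    tt.Resize(256),       # shrink the shorter side to 256
    tt.CenterCrop(224),   # crop to a fixed 224x224
    tt.ToTensor(),
])
# hypothetical path; replace with your own data directory
train_ds = ImageFolder("data/train", transform=tfms)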

I already did that:

train_tfms = tt.Compose([
    tt.Resize(256),
    tt.CenterCrop(224),
    tt.ToTensor(),
    tt.Normalize(mean=[0.1817], std=[0.1797])
])
valid_tfms = tt.Compose([
    tt.Resize(256),
    tt.CenterCrop(224),
    tt.ToTensor(),
    tt.Normalize(mean=[0.1817], std=[0.1797])
])

Maybe you shouldn't wrap the inputs and labels in torch.autograd.Variable, because autograd is handled by default and Variable is deprecated now. The inputs are already converted to tensors by the transforms, so you don't need to worry about that.
You have this error: RuntimeError: Given groups=1, weight of size [6, 1, 5, 5], expected input[100, 3, 224, 224] to have 1 channels, but got 3 channels instead
That means your first conv layer expects single-channel (grayscale) input, but you are passing 3-channel (RGB) images of size 224x224.
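One way to fix it, assuming you want to keep LeNet5's single input channel, is to convert the images to grayscale inside the transform pipeline:

train_tfms = tt.Compose([
    tt.Grayscale(num_output_channels=1),  # RGB -> 1 channel, matching the weight [6, 1, 5, 5]
    tt.Resize(256),
    tt.CenterCrop(224),
    tt.ToTensor(),
    tt.Normalize(mean=[0.1817], std=[0.1797])
])

Alternatively, change conv1 to torch.nn.Conv2d(3, 6, kernel_size=5, ...) so the network accepts RGB input.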

Hey, then I got another error:
RuntimeError: mat1 dim 1 must match mat2 dim 0

I think your problem is in the transition from conv2 to the linear layer. In the notebook you have used
self.fc1 = torch.nn.Linear(16*5*5, 120), which becomes incompatible when given 1x224x224 or 1x256x256 images (16*5*5 assumes the classic 32x32 LeNet5 input).

For an input of [1, 1, 224, 224], you should use 16*54*54, which worked.
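To see where 16*54*54 comes from, here is a quick shape walk-through (a sketch; the padding=2 on conv1 is a guess that reproduces the 54x54 output, so check your notebook for the actual values):

import torch

conv1 = torch.nn.Conv2d(1, 6, kernel_size=5, padding=2)  # guessed padding=2: 224 -> 224
pool = torch.nn.MaxPool2d(2)                             # halves the spatial size
conv2 = torch.nn.Conv2d(6, 16, kernel_size=5)            # no padding: 112 -> 108

x = torch.randn(1, 1, 224, 224)
x = pool(conv1(x))          # [1, 6, 112, 112]
x = pool(conv2(x))          # [1, 16, 54, 54]
print(x.flatten(1).shape)   # [1, 46656], and 16*54*54 == 46656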

In your forward method you could print the output shape after the last conv2/max-pool layer and initialize the dimension with that:

    def forward(self, x):
        x = torch.nn.functional.relu(self.conv1(x))
        x = self.max_pool_1(x)
        x = torch.nn.functional.relu(self.conv2(x))
        x = self.max_pool_2(x)
        print(x.shape)  # debug: this shape tells you the flatten size fc1 needs
        x = x.view(-1, 16*54*54)
        x = torch.nn.functional.relu(self.fc1(x))
        x = torch.nn.functional.relu(self.fc2(x))
        x = self.fc3(x)
        return x

You could always use a dummy tensor to test it, like below, and see where the error is coming from.

rand = torch.randn([1, 1, 224, 224])
net(rand)  # equivalent to net.forward(rand), but goes through hooks
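If you'd rather not hard-code 16*54*54 at all, you can measure the flatten size once with a dummy tensor (a sketch, reusing the layer names from your forward method above):

with torch.no_grad():
    dummy = torch.randn(1, 1, 224, 224)
    feat = net.max_pool_1(torch.nn.functional.relu(net.conv1(dummy)))
    feat = net.max_pool_2(torch.nn.functional.relu(net.conv2(feat)))
n_features = feat.flatten(1).shape[1]
print(n_features)  # 46656 == 16*54*54 for a 224x224 input; use this as fc1's in_features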

Hope this works.