Traceback (most recent call last):
File "main.py", line 151, in <module>
train(models, method, criterion, optimizers, schedulers, dataloaders, args.no_of_epochs, EPOCHL)
File "/po1/kanza.ali/workspace/OtherDataSets/ImageNet/train_test.py", line 112, in train
loss = train_epoch(models, method, criterion, optimizers, dataloaders, epoch, epoch_loss)
File "/po1/kanza.ali/workspace/OtherDataSets/ImageNet/train_test.py", line 78, in train_epoch
scores, _, features = models(inputs)
File "/home/kanza.ali/anaconda3/envs/optuna/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/kanza.ali/anaconda3/envs/optuna/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 166, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/kanza.ali/anaconda3/envs/optuna/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/po1/kanza.ali/workspace/OtherDataSets/ImageNet/models/resnet.py", line 113, in forward
out2 = self.layer2(out1)
File "/home/kanza.ali/anaconda3/envs/optuna/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/kanza.ali/anaconda3/envs/optuna/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/home/kanza.ali/anaconda3/envs/optuna/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/po1/kanza.ali/workspace/OtherDataSets/ImageNet/models/resnet.py", line 31, in forward
out += self.shortcut(x)
File "/home/kanza.ali/anaconda3/envs/optuna/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/kanza.ali/anaconda3/envs/optuna/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/home/kanza.ali/anaconda3/envs/optuna/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/kanza.ali/anaconda3/envs/optuna/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 446, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/kanza.ali/anaconda3/envs/optuna/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 442, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA out of memory. Tried to allocate 392.00 MiB (GPU 0; 10.76 GiB total capacity; 9.27 GiB already allocated; 129.44 MiB free; ...)
Image sizes vary across the ImageNet dataset, so I applied data augmentation and cropped the images to (224, 224). When I then use a batch size of 32, I get the following error instead:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x25088 and 512x1000)
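The 32x25088 shape suggests the model is a CIFAR-style ResNet (like the `models/resnet.py` in the traceback, which typically uses a stride-1 first conv and a fixed 4x4 average pool) being fed 224x224 crops instead of 32x32 images. A minimal arithmetic sketch of where 25088 comes from, assuming the usual CIFAR ResNet-18 layout (stride-1 conv1, three stride-2 stages, then `avg_pool2d(out, 4)`):

```python
# Hypothetical walk-through of the CIFAR-style ResNet-18 feature shape
# when it receives a 224x224 crop (all layer strides assumed, not taken
# from the actual resnet.py in the traceback).
batch = 32
channels = 512            # channels after layer4 in ResNet-18

# conv1 has stride 1; layer2/3/4 each halve the spatial size -> /8
after_layers = 224 // (2 ** 3)      # 28x28 feature map

# the CIFAR variant then applies a fixed 4x4 average pool
after_pool = after_layers // 4      # 7x7 instead of the expected 1x1

features = channels * after_pool * after_pool
print(batch, features)              # 32 25088

# The final Linear(512, 1000) expects 512 inputs, so the flattened
# (32, 25088) tensor cannot be multiplied by its (512, 1000) weight.
# One common fix (an assumption, not from the post) is a global pool:
#   out = torch.nn.functional.adaptive_avg_pool2d(out, (1, 1))
# which yields 512 features regardless of input resolution.
```

With a global adaptive pool (or by resizing inputs to 32x32 for this model), the flattened feature count matches the 512-input linear layer; the CUDA OOM at batch 32 is a separate symptom of running a 32x32-oriented architecture at 224x224, where intermediate activations are roughly 49x larger.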