Thanks for your reply. My model is VGG16 with the last three FC layers removed. After the fifth conv block of VGG16, I upsample the output and resize it to (1, 480, 640).
Since I am computing the error between two images, I used MSELoss.
As you suggested, after adding .convert('RGB') for the input, the network trained and gave a finite loss, though testing did not go well. But after I changed my batch_size, it gives the following error:
RuntimeError Traceback (most recent call last)
in
15 masks_train = masks_train.to(device)
16 optimizer.zero_grad()
---> 17 _,logps = model(images_train)
18 loss = criterion(logps.float(), masks_train.float())
19 loss.backward()
~/yes/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
in forward(self, x)
11 self.decoder = nn.Conv2d(512,1,1,padding=0,bias=False)
12 def forward(self,x):
---> 13 e_x = self.encoder(x)
14 d_x = self.decoder(e_x)
15 #e_x = nn.functional.interpolate(e_x,size=(480,640),mode='bilinear',align_corners=False)
~/yes/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/yes/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
98 def forward(self, input):
99 for module in self:
--> 100 input = module(input)
101 return input
102
~/yes/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/yes/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
343
344 def forward(self, input):
--> 345 return self.conv2d_forward(input, self.weight)
346
347 class Conv3d(_ConvNd):
~/yes/lib/python3.7/site-packages/torch/nn/modules/conv.py in conv2d_forward(self, input, weight)
340 _pair(0), self.dilation, self.groups)
341 return F.conv2d(input, weight, self.bias, self.stride,
--> 342 self.padding, self.dilation, self.groups)
343
344 def forward(self, input):
RuntimeError: CUDA out of memory. Tried to allocate 2.34 GiB (GPU 0; 10.73 GiB total capacity; 9.14 GiB already allocated; 299.25 MiB free; 9.33 GiB reserved in total by PyTorch).
Please let me know if there is a solution.
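In case it is relevant: the only workaround I have found so far is to shrink the per-step batch and accumulate gradients over several micro-batches, which keeps the effective batch size while reducing peak GPU memory. A minimal sketch (the model, criterion, and data here are placeholders, not my actual network):

```python
import torch

# Sketch of gradient accumulation: 4 micro-batches of 2 samples produce the
# same update as one batch of 8, but only 2 samples are resident at a time.
model = torch.nn.Linear(10, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
accum_steps = 4

w0 = model.weight.detach().clone()  # initial weights, to confirm an update happens
optimizer.zero_grad()
for step in range(accum_steps):
    x = torch.randn(2, 10)
    y = torch.randn(2, 1)
    loss = criterion(model(x), y) / accum_steps  # scale so grads average, not sum
    loss.backward()                              # gradients accumulate in .grad
optimizer.step()      # one optimizer update per accum_steps micro-batches
optimizer.zero_grad()
```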