Multi GPU problems

```
Traceback (most recent call last):
  File "main.py", line 129, in <module>
    model.train()
  File "/home/xiangtai/project/PytorchCV/methods/seg/fcn_segmentor.py", line 227, in train
    self.__train()
  File "/home/xiangtai/project/PytorchCV/methods/seg/fcn_segmentor.py", line 85, in __train
    outputs = self.seg_net(inputs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/data_parallel.py", line 73, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/data_parallel.py", line 83, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/parallel_apply.py", line 67, in parallel_apply
    raise output
ValueError: Expected more than 1 value per channel when training, got input size [1L, 256L, 1L, 1L]
```
I was training with 8 GPUs and batch size 8 on an image segmentation task (PyTorch 0.3) when I hit this problem. **However, when I change the batch size to 16, which means more than 1 example per GPU, the problem disappears.**
Help!

Hi, part of the issue is PyTorch's error reporting: I have often noticed that this message conflates the channel specification of the final layers with the batch size. The likely underlying cause is BatchNorm: in training mode it computes mean and variance over the batch, and with 8 GPUs and a batch size of 8, `DataParallel` gives each replica a single sample. If a layer (e.g. after global pooling) produces a 1x1 feature map, BatchNorm then sees only one value per channel and raises this error.
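The behaviour can be reproduced in isolation, without `DataParallel` or a full segmentation network. A minimal sketch (assuming the failing layer is a standard `nn.BatchNorm2d`, as the `[1, 256, 1, 1]` shape in the traceback suggests):

```python
import torch
import torch.nn as nn

# BatchNorm needs more than one value per channel to compute
# batch statistics in training mode.
bn = nn.BatchNorm2d(256)
bn.train()

x = torch.randn(1, 256, 1, 1)  # one sample, 1x1 spatial map -> 1 value per channel
try:
    bn(x)
except ValueError as e:
    print("train mode failed:", e)

# With more than one value per channel (batch size 2 here) it works:
out = bn(torch.randn(2, 256, 1, 1))
print("batch of 2 ok:", tuple(out.shape))

# In eval mode the running statistics are used instead, so even a
# single sample passes:
bn.eval()
out = bn(x)
print("eval mode ok:", tuple(out.shape))
```

This is why raising the total batch size to 16 (2 samples per replica) makes the error disappear; alternatively, drop the last incomplete batch in the `DataLoader` so no replica ever receives a single sample.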