Loss.backward() causes RuntimeError: invalid argument (no error in version 0.1.12)

Hi all, I'm observing the same issue.

It occurs in loss.backward() when the batch size of the final batch produced by the DataLoader drops to 1:

loss torch.Size([1]) torch.Size([64, 3, 256, 256]) torch.Size([64, 1])
loss torch.Size([1]) torch.Size([64, 3, 256, 256]) torch.Size([64, 1])
… (the same line repeated for the remaining full batches of 64) …
loss torch.Size([1]) torch.Size([1, 3, 256, 256]) torch.Size([1, 1])
Traceback (most recent call last):

… (traceback messages omitted) … note: the batch size is now 1

in train_model3
loss.backward()
File "/home/binder/entwurf6/virtuals/pytorch_py3/lib/python3.5/site-packages/torch/autograd/variable.py", line 167, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/home/binder/entwurf6/virtuals/pytorch_py3/lib/python3.5/site-packages/torch/autograd/__init__.py", line 99, in backward
variables, grad_variables, retain_graph)
RuntimeError: invalid argument 6: expected 3D tensor at /pytorch/torch/lib/THC/generic/THCTensorMathBlas.cu:442
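As a workaround, you can keep the incomplete final batch from ever reaching the model. This is only a sketch under assumptions (the dataset below is a hypothetical stand-in for the real data, and it assumes your DataLoader version supports the drop_last argument); it sidesteps the size-1 batch rather than fixing the underlying kernel error:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in dataset: 129 samples, so with batch_size=64 the
# final batch would otherwise contain exactly 1 sample, as in the log above.
images = torch.randn(129, 3, 256, 256)
targets = torch.randn(129, 1)
dataset = TensorDataset(images, targets)

# drop_last=True discards the incomplete final batch, so the model never
# sees a batch of size 1 and loss.backward() no longer hits this path.
loader = DataLoader(dataset, batch_size=64, shuffle=True, drop_last=True)
```

If dropping samples is not acceptable, another thing worth checking (an assumption on my part, the traceback does not confirm it) is an unconditional .squeeze() inside the model or loss code: with batch size 1 it also removes the batch dimension, which can turn the 3D tensor that a batched matmul expects into a 2D one. Calling .squeeze(dim) with an explicit dimension preserves the batch axis.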