RuntimeError: The expanded size of the tensor (1) must match the existing size (0) at non-singleton dimension 0

I was working on a CNN model and got this error with very little information in the call stack. All I know is that it happened during backprop. Is there a common reason for this?
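
For context, the message itself can be reproduced in isolation by expanding a tensor with a size-0 dimension, since expand() can only grow dimensions of size 1. This is just a hypothetical minimal sketch, not my training code:

import torch

# A tensor whose leading dimension has size 0, e.g. an empty batch.
empty = torch.empty(0)

# expand() only broadcasts size-1 dimensions, so asking it to grow a
# size-0 dimension to 1 raises the same RuntimeError quoted above.
empty.expand(1)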

Call stack:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-89> in <module>()
     29                 optimizer.zero_grad()
     30                 pred = model(data)
---> 31                 criterion(pred, label).backward()
     32                 optimizer.step()
     33 

~/anaconda3/lib/python3.6/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
     94                 products. Defaults to ``False``.
     95         """
---> 96         torch.autograd.backward(self, gradient, retain_graph, create_graph)
     97 
     98     def register_hook(self, hook):

~/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     88     Variable._execution_engine.run_backward(
     89         tensors, grad_tensors, retain_graph, create_graph,
---> 90         allow_unreachable=True)  # allow_unreachable flag
     91 
     92

Sounds like a bug. Could you post some code that reproduces this?

It seems it was a ulimit issue. I searched online and found previous discussions of this. I tried the cleanup method from those posts (run with su) and that solved the issue. Not sure why it happened, though.
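
In case it helps anyone later, the per-process limits can be inspected and (up to the hard limit) raised from Python with the standard resource module. I'm assuming here that the open-file limit (RLIMIT_NOFILE) was the one involved; the posts I followed didn't say exactly:

import resource

# Current soft and hard limits on open file descriptors for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")

# Raise the soft limit up to the hard limit. Raising the hard limit itself
# requires root privileges, which may be why su was needed above.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))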

I got the same problem during backprop. Would you mind sharing how you solved it and where you found the solution?