Request for help with error: chunk expects `chunks` to be greater than 0, got: 0

I am running a few training methods on a single dataset.

All of the methods except one train the given models fine, but that one method fails with the error message below about chunks.

I do not know much about how the backward pass works, so I cannot figure out how to fix this error. Even after searching a few times, I could not find a similar error report.

Can anyone help me?

Traceback (most recent call last):
  File "run.py", line 138, in <module>
    list_tr,list_val,list_ts = Learning_unit(methods[i],model,tr,val,test,flags)
  File "/home/song/exp/training.py", line 282, in Learning_unit
    tr_loss.backward(retain_graph=True)
  File "/home/song/.local/lib/python3.6/site-packages/torch/tensor.py", line 185, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/song/.local/lib/python3.6/site-packages/torch/autograd/__init__.py", line 127, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: chunk expects `chunks` to be greater than 0, got: 0
Exception raised from chunk at /pytorch/aten/src/ATen/native/TensorShape.cpp:496 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7fa759c2d1e2 in /home/song/.local/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: at::native::chunk(at::Tensor const&, long, long) + 0x2af (0x7fa795b2c0ef in /home/song/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x1287fe9 (0x7fa795f1dfe9 in /home/song/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x2e741c9 (0x7fa797b0a1c9 in /home/song/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0x12c4a43 (0x7fa795f5aa43 in /home/song/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #5: at::Tensor::chunk(long, long) const + 0xe0 (0x7fa795fcd0e0 in /home/song/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #6: torch::autograd::generated::RepeatBackward::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x319 (0x7fa7979ca029 in /home/song/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0x3375bb7 (0x7fa79800bbb7 in /home/song/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #8: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x1400 (0x7fa798007400 in /home/song/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #9: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7fa798007fa1 in /home/song/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #10: torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x89 (0x7fa798000119 in /home/song/.local/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
frame #11: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x4a (0x7fa7a57a04ba in /home/song/.local/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #12: <unknown function> + 0xbd6df (0x7fa7a68fc6df in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #13: <unknown function> + 0x76db (0x7fa7a937c6db in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #14: clone + 0x3f (0x7fa7a96b588f in /lib/x86_64-linux-gnu/libc.so.6)

Could you try to create a code snippet that reproduces this issue, so that we can debug it?
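
From the traceback it looks like the failure happens in the backward of a .repeat() call (RepeatBackward calling chunk), so my guess, and it is only a guess, is that one of the repeat factors ends up being 0 at runtime. A minimal hypothetical sketch of that situation (not my actual training code) would be something like:

import torch

# Hypothetical sketch, not the real model: suppose a tensor is repeated
# with a factor that is computed at runtime and can become 0.
x = torch.randn(4, requires_grad=True)
y = x.repeat(0)     # forward works and just returns an empty tensor
loss = y.sum()
loss.backward()     # backward of repeat() calls grad.chunk(repeat, dim),
                    # which (at least on older PyTorch versions) fails
                    # when repeat == 0 with the same chunk error as above

If something like that is the cause, running the training step with torch.autograd.set_detect_anomaly(True) should also print the forward-pass traceback of the repeat() call that produced the failing backward node, which might help locate it in the actual code.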