Hi there, I have a call in my training loop that crashes. Here is the relevant part of the traceback:
/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in one_batch(self, i, b)
161 self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
162 if not self.training: return
--> 163 self.loss.backward(); self('after_backward')
164 self.opt.step(); self('after_step')
165 self.opt.zero_grad()
/usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
182 products. Defaults to ``False``.
183 """
--> 184 torch.autograd.backward(self, gradient, retain_graph, create_graph)
185
186 def register_hook(self, hook):
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
121 Variable._execution_engine.run_backward(
122 tensors, grad_tensors, retain_graph, create_graph,
--> 123 allow_unreachable=True) # allow_unreachable flag
124
125
which raises:
RuntimeError: vector::_M_range_check: __n (which is 1) >= this->size() (which is 1)
So it seems the backward pass on the loss trips some kind of internal assertion. I would love to know how to debug this type of error. By the way, it had computed the training and validation passes correctly, AFAIK, so I guess the tensors are, at least in some ways, behaving correctly.
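A minimal sketch of what I plan to try first: PyTorch's built-in anomaly detection, which records the forward operation that created each backward node, so the failing op shows up in the error message (this assumes a stock PyTorch install and my existing Learner, which I'm calling `learn` here):

import torch

# With anomaly detection on, autograd keeps the forward stack trace for every
# graph node, so when backward() fails it also prints where the offending
# forward op was created, not just the generic engine frame.
torch.autograd.set_detect_anomaly(True)

learn.fit_one_cycle(1)  # re-run a short training pass to trigger the error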
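If that doesn't narrow it down, a second sketch: reproducing the failing step by hand, outside the callback machinery, so the shapes and dtypes can be inspected directly. This assumes fastai2's `dls.one_batch()` and a single input/target pair, which matches my setup:

xb, yb = learn.dls.one_batch()      # pull one training batch
pred = learn.model(xb)              # forward pass, as one_batch() does internally
loss = learn.loss_func(pred, yb)    # the loss that later fails to backprop
loss.backward()                     # should hit the same RuntimeError directly

If the error reproduces here, I can then print `pred.shape`, `yb.shape`, and the dtypes right before the backward call to see what the autograd engine is choking on.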