Norm cannot backpropagate for p<1

Hi, I use tensor.norm somewhere in my code. It works fine for all values of p (including fractional ones, e.g. p=1.5), but runs into problems with p<1. With anomaly detection enabled, I get this error:


RuntimeError Traceback (most recent call last)
&lt;ipython-input-…&gt; in &lt;module&gt;
11 # ===================backward====================
12 optimizer.zero_grad()
---> 13 loss.backward()
14 optimizer.step()
15 # ===================log========================

c:\users\windows\.conda\envs\normnet\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph)
183 products. Defaults to False.
184 """
--> 185 torch.autograd.backward(self, gradient, retain_graph, create_graph)
186
187 def register_hook(self, hook):

c:\users\windows\.conda\envs\normnet\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
125 Variable._execution_engine.run_backward(
126 tensors, grad_tensors, retain_graph, create_graph,
--> 127 allow_unreachable=True) # allow_unreachable flag
128
129

RuntimeError: Function 'NormBackward1' returned nan values in its 0th output.

Any idea how to fix this?
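For reference, here is a minimal sketch of what I believe triggers the same failure (assumed setup, not my full model; the input just needs to contain exact zeros):

```python
import torch

# Assumed minimal repro: an input with exact zeros, pushed through norm with p < 1.
x = torch.zeros(3, requires_grad=True)

loss = x.norm(p=0.5)   # forward pass is fine and returns 0
loss.backward()        # without anomaly detection this silently produces nan grads;
                       # with it enabled, it raises the NormBackward1 error above

print(x.grad)          # e.g. tensor([nan, nan, nan])
```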

Could you check if the input to torch.norm might be a tensor containing all zeros?
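The gradient of the p-norm contains a factor |x_i|**(p-1), which blows up at zero when p < 1, so any exact zero in the input produces nan in the backward pass. A minimal sketch of one possible workaround, shifting the magnitudes away from zero before taking the norm (the eps value below is hypothetical, not from this thread):

```python
import torch

eps = 1e-8  # hypothetical constant; pick a value that suits the scale of your data

x = torch.zeros(3, requires_grad=True)

# Adding eps keeps |x_i| ** (p - 1) finite, so backward no longer produces nan for p < 1.
loss = (x.abs() + eps).norm(p=0.5)
loss.backward()

print(x.grad)   # finite gradients (here all zeros) instead of nan
```

Note that this slightly changes the value of the norm, so treat it as an approximation rather than an exact fix.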