How to solve: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Traceback (most recent call last):
  File "train_search.py", line 265, in <module>
    main()
  File "train_search.py", line 179, in main
    train_acc, train_obj = train(train_queue, valid_queue, model, architect, criterion, optimizer, lr)
  File "train_search.py", line 218, in train
    architect.step(input, target, input_search, target_search, lr, optimizer, unrolled=args.unrolled)
  File "/raid/shubhangi/Akshay/NAS/darts-depth/cnn/architect.py", line 36, in step
    self._backward_step(input_valid, target_valid)
  File "/raid/shubhangi/Akshay/NAS/darts-depth/cnn/architect.py", line 44, in _backward_step
    loss.backward()
  File "/home/cvprlab/.conda/envs/praful/lib/python3.5/site-packages/torch/tensor.py", line 107, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/cvprlab/.conda/envs/praful/lib/python3.5/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

I guess you are detaching the output or the loss from the computation graph inside architect.step.
If you are only using PyTorch operations, Autograd will record all of them and compute the gradients in the backward pass using the created computation graph.
However, if you route tensors through other libraries, e.g. NumPy, you detach them from the computation graph, and Autograd won't be able to compute the backward pass automatically.
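As a minimal, self-contained sketch (not your architect.py, just made-up tensors to illustrate the failure mode): a detour through NumPy produces a loss tensor without a grad_fn, which raises exactly this RuntimeError, while staying in PyTorch keeps the graph intact:

```python
import torch

# Hypothetical stand-in for a model output; none of these names come from your code.
w = torch.randn(3, requires_grad=True)
x = torch.randn(3)
out = (w * x).sum()

# Detour through NumPy: the re-wrapped tensor has requires_grad=False and no grad_fn.
loss = torch.tensor(out.detach().numpy())
try:
    loss.backward()
except RuntimeError as e:
    print(e)  # element 0 of tensors does not require grad and does not have a grad_fn

# Staying in PyTorch keeps the computation graph, so backward() works.
out.backward()
print(w.grad)  # gradients are populated
```

If the loss in `_backward_step` is created like the first variant (e.g. via `.detach()`, `.numpy()`, or re-wrapping an intermediate result with `torch.tensor(...)`), that would explain the traceback.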

Let us know if this might be the case, or post some code snippets so that we can have a look.

PS: you can add code snippets by wrapping them in three backticks ``` :wink: