Hello everyone. According to this link, if we want to implement a loss function that works with autograd, we should not unpack variables. Something like this is forbidden in autograd-based backprop:
var.data[0,:]
I would like to know: is var[0,:] the same kind of unpacking as var.data[0,:]? (var is a Variable.)
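To make the question concrete, here are the two accesses side by side (var is just a made-up Variable for illustration):

import torch
from torch.autograd import Variable

var = Variable(torch.randn(3, 4), requires_grad=True)

a = var[0, :]        # indexing the Variable itself
b = var.data[0, :]   # indexing the underlying tensor through .data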
Thanks for your response. Following what you mentioned, I wrote the following code:
output = net(images)  # batches * 95 * S * S
for ind in range(B):
    output[:, 2+(1+coords)*ind, :, :] = torch.sqrt(output[:, 2+(1+coords)*ind, :, :])
    output[:, 3+(1+coords)*ind, :, :] = torch.sqrt(output[:, 3+(1+coords)*ind, :, :])
But the following error occurred:
Traceback (most recent call last):
  File "Main_v3.py", line 200, in <module>
    train(epoch)
  File "Main_v3.py", line 193, in train
    cost.backward()
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/autograd/variable.py", line 146, in backward
    self._execution_engine.run_backward((self,), (gradient,), retain_variables)
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/autograd/_functions/pointwise.py", line 130, in backward
    i, = self.saved_tensors
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
Just to confirm, what is your PyTorch version? (Run torch.__version__ in the Python interpreter.)
One thing: indexed tensor assignment is an in-place operation, so that might indicate where the problem is.
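To illustrate (a minimal sketch with a made-up tensor, not your actual model): torch.sqrt saves its result to compute its gradient later, and a subsequent indexed assignment overwrites that saved result in place, which produces exactly this RuntimeError:

import torch
from torch.autograd import Variable

x = Variable(torch.rand(2, 3) + 1, requires_grad=True)
out = torch.sqrt(x)   # sqrt saves its output for the backward pass
out[0, :] = 0         # indexed assignment modifies that saved output in place
out.sum().backward()  # RuntimeError: ... modified by an inplace operation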
So is there any way to manipulate some parts of my output variable while avoiding this error? As you can see in my loss function, I need to take the square root of some parts of the output variable!
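One out-of-place alternative (just a sketch under the channel layout assumed above; sqrt_channels is a hypothetical helper, not an existing API): instead of writing the square roots back into output, rebuild the channel dimension with torch.cat, so output itself is never modified in place:

import torch

def sqrt_channels(output, B, coords):
    # Channels that should be replaced by their square roots.
    targets = set()
    for ind in range(B):
        targets.add(2 + (1 + coords) * ind)
        targets.add(3 + (1 + coords) * ind)
    # Build each channel slice out-of-place and concatenate; no in-place writes.
    channels = [output[:, c:c+1, :, :].sqrt() if c in targets
                else output[:, c:c+1, :, :]
                for c in range(output.size(1))]
    return torch.cat(channels, dim=1)

output = sqrt_channels(net(images), B, coords)  # new variable, graph stays intact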