How to compute the gradient of an image in PyTorch?
I need to compute the gradient (dx, dy) of an image. How can I do that in PyTorch?
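By gradient I mean something like finite differences along x and y. A rough sketch of what I am after (I am not sure this is the idiomatic PyTorch way; img is just a placeholder tensor I made up):

import torch

# img is a made-up example image of shape [H, W]
img = torch.rand(5, 5)
dx = img[:, 1:] - img[:, :-1]   # horizontal differences, shape [H, W-1]
dy = img[1:, :] - img[:-1, :]   # vertical differences, shape [H-1, W]
print(dx.shape, dy.shape)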
Here is some reference code (I am not sure whether it can be used for computing the gradient of an image):
import torch
from torch.autograd import Variable

w1 = Variable(torch.Tensor([1.0, 2.0, 3.0]), requires_grad=True)
w2 = Variable(torch.Tensor([1.0, 2.0, 3.0]), requires_grad=True)
print(w1.grad)   # None, since no backward pass has run yet
print(w2.grad)   # None
d = torch.mean(w1)
d.backward()
print(w1.grad)   # here it is 0.3333 0.3333 0.3333
d.backward()
print(w1.grad)   # here it is 0.6667 0.6667 0.6667
Why did the grad change? What does the backward function do?
Maybe this question is a little stupid; any help is appreciated!
Let me explain why the gradient changed: if you don't clear the gradient, each backward() call adds the new gradient onto the one already stored. 0.6667 = 2/3 = 0.3333 * 2. Try this:
import torch
from torch.autograd import Variable

w1 = Variable(torch.Tensor([1.0, 2.0, 3.0]), requires_grad=True)
w2 = Variable(torch.Tensor([1.0, 2.0, 3.0]), requires_grad=True)
d = torch.mean(w1)
d.backward()
print(w1.grad.data)

# clear the gradient manually
w1.grad = None
d = torch.mean(w1)
d.backward()
print(w1.grad.data)
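A small variation on the same idea: instead of setting w1.grad to None, you can zero the existing gradient buffer in place with zero_(). A minimal sketch using the same toy tensor as above:

import torch
from torch.autograd import Variable

w1 = Variable(torch.Tensor([1.0, 2.0, 3.0]), requires_grad=True)
d = torch.mean(w1)
d.backward()
# zero the stored gradient in place instead of replacing it with None
w1.grad.data.zero_()
d = torch.mean(w1)
d.backward()
print(w1.grad.data)   # 0.3333 0.3333 0.3333, not accumulated

This is essentially what optimizer.zero_grad() does for all parameters in a training loop.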
Thanks for the reply. What is torch.mean(w1) for? Is the backward function the implementation of BP (backpropagation)?
"What is torch.mean(w1) for?"
torch.mean(input) computes the mean value of the input tensor. See the documentation here: http://pytorch.org/docs/0.3.0/torch.html?highlight=torch%20mean#torch.mean
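For example (a quick illustration with made-up values):

import torch
x = torch.Tensor([1.0, 2.0, 3.0])
print(torch.mean(x))   # 2.0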
Yes. backward() does the BP work automatically, thanks to PyTorch's autograd mechanism.
For example, for the mean operation we have:
y = mean(x) = 1/N * \sum_i x_i
So dy/dx_i = 1/N, where N is the number of elements of x. This is why you got 0.333… in the grad.
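You can verify this with a quick throwaway example (N = 4 here, so each gradient entry should be 0.25):

import torch
from torch.autograd import Variable

x = Variable(torch.Tensor([1.0, 2.0, 3.0, 4.0]), requires_grad=True)
y = torch.mean(x)
y.backward()
print(x.grad)   # each entry is 1/4 = 0.25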
Thank you! (padding so the post reaches the 20-character minimum)