nn.functional.conv2d doesn't autograd

Hi, I've run into a problem:
I'm trying to implement nn.Conv2d myself.
Init method:
self.filters = Variable(
    torch.randn(
        self.n_filters,
        self.input_depth,
        self.kernel_size,
        self.kernel_size
    )
)

self.bias = torch.autograd.Variable(torch.from_numpy(np.zeros(n_filters, dtype='float32')))
self.params = [self.filters, self.bias]

Forward method:
self.output = torch.nn.functional.conv2d(
    x,
    self.filters,
    bias=self.bias,
    stride=self.stride,
    padding=self.padding
)

But in the optimizer, when I access parameter.grad, it is None.
P.S. x is not a single image; it is a batch of several input images in one tensor.
Thanks in advance for your help!

Use nn.Parameter instead of Variable if you want proper nn.Modules. If you want gradients for a plain Variable, use requires_grad=True.
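For example, a minimal sketch along those lines (the class name MyConv2d and its arguments are just illustrative, not your exact code):

import torch
import torch.nn as nn
import torch.nn.functional as F

class MyConv2d(nn.Module):
    # Hand-rolled conv layer (illustrative sketch only)
    def __init__(self, input_depth, n_filters, kernel_size, stride=1, padding=0):
        super(MyConv2d, self).__init__()
        self.stride = stride
        self.padding = padding
        # nn.Parameter is registered by the Module and requires grad by default
        self.filters = nn.Parameter(
            torch.randn(n_filters, input_depth, kernel_size, kernel_size))
        self.bias = nn.Parameter(torch.zeros(n_filters))

    def forward(self, x):
        return F.conv2d(x, self.filters, bias=self.bias,
                        stride=self.stride, padding=self.padding)

Then passing MyConv2d(...).parameters() to the optimizer includes both filters and bias, and their .grad is populated after backward().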

Best regards

Thomas

@tom Thanks, it worked. But doesn’t it require grad by default?

Variable is used for all sorts of values: model weights, RNN hidden states, input data, and so on. Most of these do not need gradients.

Declaring a value with nn.Parameter tells PyTorch two things: (1) that gradients are needed for it, and (2) that it is registered among the module's parameters, so the optimiser will update it. A quick illustration follows below.
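As a quick illustration (hypothetical module, just for demonstration): an nn.Parameter assigned as a module attribute shows up in parameters(), while a plain Variable attribute is not registered and does not require grad:

import torch
import torch.nn as nn
from torch.autograd import Variable

class Demo(nn.Module):
    def __init__(self):
        super(Demo, self).__init__()
        self.w = nn.Parameter(torch.randn(3))  # registered; requires_grad=True by default
        self.v = Variable(torch.randn(3))      # plain attribute; not registered, no grad

m = Demo()
print([name for name, _ in m.named_parameters()])  # ['w'] -- only the Parameter is listed
print(m.w.requires_grad, m.v.requires_grad)        # True False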

@jpeg729 Thanks, it was helpful

Hi, I have the same issue with nn.functional.conv2d and nn.functional.conv_transpose2d. I use nn.Parameter for the variables but still get a None gradient. Do you have any solution? Thanks! (I am using the newest version, PyTorch 0.3.)
Here is my code:

import torch
import torch.nn as nn
import torch.nn.functional as F

filters = nn.Parameter(torch.randn(8, 4, 3, 3), requires_grad=True)
inputs = nn.Parameter(torch.randn(1, 4, 5, 5), requires_grad=True)
tmp = F.conv2d(inputs, filters, padding=1).sum()
tmp.backward()
tmp.grad

Then I got nothing for tmp.grad

tmp.grad would be the gradient of tmp with respect to tmp itself, which is of little use; gradients are only accumulated in .grad for leaf Variables anyway, and tmp is an intermediate result.

You probably want filters.grad.

Generally the input should be a Variable, not a Parameter, and most of the time you don't need or want the input to require gradients.
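Putting that together, a corrected version of your snippet might look like this (a sketch: the input is a plain Variable and the gradient is read from filters.grad):

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

filters = nn.Parameter(torch.randn(8, 4, 3, 3))  # requires_grad=True by default
inputs = Variable(torch.randn(1, 4, 5, 5))       # input; no gradients needed
tmp = F.conv2d(inputs, filters, padding=1).sum()
tmp.backward()
print(filters.grad.size())  # gradients accumulate on the leaf Parameter
print(tmp.grad)             # None -- tmp is an intermediate result, not a leaf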

Thanks a lot for your answer!