Custom Layer Grad is NoneType

I’ve read through a lot of other posts over the last week trying to solve this and I’m still not making any headway, so here goes. Let me know if there’s any other info I can give you.

Basically I have the following situation,

  • Built a custom NN layer (below)
  • Then included it as part of a neural network (also see below).
  • This can successfully train and test, however, as soon as I try the following,
            zero_gradients(x)
            out = model(x)

            y.data = out.data.max(1)[1]
            _loss = loss(out, y)
            _loss.backward()
            normed_grad = step_alpha * torch.sign(x.grad.data)

I get the following error for “normed_grad = step_alpha * torch.sign(x.grad.data)”,

AttributeError: 'NoneType' object has no attribute 'data'

It’s x.grad that is None, and I can’t seem to figure out why. I’ve tried this with non-custom neural networks and it works fine. I do model saving/loading, and I’ve also tried running this right after training instead of after saving and loading the model.

Custom Neural Network Layer (partial)

class CustomLayer(nn.Module):

    def __init__(self, input_features, output_features, num_vectors=64, bias=True):
        super(CustomLayer, self).__init__()
        self.input_features = input_features
        self.output_features = output_features
        self.vector_count = num_vectors

        self.weight = nn.Parameter(torch.Tensor(output_features, input_features))
        if bias:
            self.bias = nn.Parameter(torch.Tensor(output_features))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self):
        stdv = 1. / math.sqrt(self.weight.size(1))
        self.weight.data.uniform_(-stdv, stdv)
        if self.bias is not None:
            self.bias.data.uniform_(-stdv, stdv)

    def forward(self, x):
        generated_vectors = []
        for rx in x:
            # stuff gets appended to generated_vectors

        x = numpy.array(generated_vectors)
        x = torch.from_numpy(x).float()
        x = x.view(-1, len(rx) * self.vector_count)
        x = Variable(x, requires_grad=True)

        return F.linear(x, self.weight, self.bias)

    def __repr__(self):
        return self.__class__.__name__ + '(' \
            + 'in_features=' + str(self.input_features) \
            + ', out_features=' + str(self.output_features) \
            + ', bias=' + str(self.bias is not None) + ')'

Then I used the custom layer as part of the following neural network.

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.fc1b = nn.Linear(640, 50)
        self.fc2b = nn.Linear(50, 10)
        self.custom = custom_layer.CustomLayer(640, 640)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = self.custom(x)
        x = self.fc1b(x)
        x = self.fc2b(x)

        return F.log_softmax(x)

Only Variables that require grad will receive a .grad during backward, so you will want to make sure that your x has requires_grad=True.

Also a better way to calculate y is y = out.max(1)[1].detach().
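For example, a quick toy check (a standalone Variable and a random weight, not your network) showing the difference:

    import torch
    from torch.autograd import Variable

    # Only leaf Variables created with requires_grad=True end up with a
    # populated .grad after backward().
    w = Variable(torch.randn(3, 3), requires_grad=True)

    x_plain = Variable(torch.randn(1, 3))                     # requires_grad defaults to False
    x_leaf = Variable(torch.randn(1, 3), requires_grad=True)

    x_plain.mm(w).sum().backward()
    print(x_plain.grad)   # None -- no gradient is ever accumulated into this input

    x_leaf.mm(w).sum().backward()
    print(x_leaf.grad)    # 1x3 tensor: gradient of the loss w.r.t. the input

    # And detaching the target keeps it out of the graph entirely:
    y = x_leaf.mm(w).max(1)[1].detach()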

Yeah, my x has requires_grad=True. Thanks for the tip about “y”!

Here you seem to be working outside the Variable interface. If generated_vectors is built from raw tensors and/or numpy arrays, then no history is tracked, so PyTorch cannot backpropagate through it.
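For instance, a toy sketch (standalone Variables, not your actual layer) of the difference between detouring through numpy and staying inside torch ops:

    import torch
    from torch.autograd import Variable

    x = Variable(torch.randn(4, 8), requires_grad=True)

    # Breaks the graph: detouring through numpy and re-wrapping creates a
    # brand-new leaf Variable, so nothing upstream of `broken` ever sees a gradient.
    broken = Variable(torch.from_numpy(x.data.numpy() * 2).float(), requires_grad=True)

    # Keeps the graph: build the per-row results with torch ops and stack them,
    # so autograd can trace the history all the way back to x.
    rows = [row * 2 for row in x]   # iterating a Variable yields Variables
    kept = torch.stack(rows)

    kept.sum().backward()
    print(x.grad)    # populated, because the history was never broken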

Darn… Yeah that’s a problem. Okay thank you!

Hey Simon, so I rewrote my custom layer, but I still seem to be “corrupting” the gradient somewhere. Is there a particular way you would suggest looking for this mistake? I’m trying something like the following code to track it down now :confused:


    def forward(self, x):
        # Regular layer
        x = F.relu(F.max_pool2d(self.conv1(x), 2))

        # Me trying to get a gradient
        z = torch.add(x, 1)
        s = torch.mul(z, z)
        out = s.mean()
        out.backward()
        print(x.grad)
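One thing that can make print(x.grad) come out as None here even when nothing is broken: that x is the output of conv1, so it is not a leaf Variable, and .grad is only retained for leaf Variables. A small sketch (toy tensors, hypothetical names, not the real network) of using a hook to look at an intermediate gradient:

    import torch
    from torch.autograd import Variable

    inp = Variable(torch.randn(1, 4), requires_grad=True)
    x = inp * 3                      # non-leaf, like the activation after conv1 above

    grads = {}
    x.register_hook(lambda g: grads.update(x=g))   # runs during backward()

    z = torch.add(x, 1)
    s = torch.mul(z, z)
    s.mean().backward()

    print(x.grad)      # None: gradients of non-leaf Variables are not retained
    print(grads['x'])  # the gradient that actually flowed into x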

To anyone looking at this in the future: see this answer and this page.

