About autograd: if I add new user-defined layers, how should I make their parameters update?

Hello, everyone!

My task is an optical-flow-generation problem. I have two raw images and an optical flow as ground truth. My algorithm generates an optical flow from the raw images, and the Euclidean distance between the generated flow and the ground truth can be used as the loss, so backpropagation can update the parameters.
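For example, the loss I have in mind is roughly like this (the tensor shapes and names here are just placeholders for my network output and the ground-truth flow):

import torch

# placeholder tensors standing in for the predicted flow and the ground-truth flow
pred_flow = torch.randn(2, 256, 256, requires_grad=True)
gt_flow = torch.randn(2, 256, 256)

# Euclidean (L2) distance used as the loss
loss = torch.sqrt(torch.sum((pred_flow - gt_flow) ** 2))
loss.backward()  # gradients flow back to pred_flow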

I treat it as a regression problem, and I have two ideas now:

  1. I can create every parameter with requires_grad=True, compute a loss, and call loss.backward() to obtain the gradients, but I don’t know how to add these parameters to an optimizer so that they get updated.

  2. I can write my algorithm as a model. If I design a “custom” model, I can initialize layers such as nn.Conv2d() and nn.Linear() in __init__() and update their parameters with torch.optim.Adam(model.parameters()), but if I define new layers by myself, how should I add that layer’s parameters to the collection of parameters being updated?

This problem has confused me for several days. Are there any good methods to update user-defined parameters? I would be very grateful for any advice!

When you define a new custom layer as a class, you can inherit from nn.Module and define the learnable parameters as nn.Parameter(). These parameters are then registered with the module, autograd computes their gradients, and an optimizer can update them. I have written a small example of a custom-defined layer:

import torch
import torch.nn as nn
import torch.optim as optim

class CustomLayer(nn.Module):
    def __init__(self):
        super(CustomLayer, self).__init__()
        weight = torch.randn(10, 2)
        bias = torch.zeros(2)
        print(weight.requires_grad)       # False: a plain tensor does not require grad
        self.weight = nn.Parameter(weight)
        print(self.weight.requires_grad)  # True: nn.Parameter registers it and enables grad
        self.bias = nn.Parameter(bias)

    def forward(self, x):
        return torch.matmul(x, self.weight) + self.bias

net = CustomLayer()
optimizer = optim.Adam(net.parameters(), lr=0.1)

So, since I have defined self.weight and self.bias as nn.Parameter, they will be included in net.parameters(). For example, let’s print the first parameter (in this case, the weights).

print(next(net.parameters()))
Parameter containing:
tensor([[-0.8969,  0.0836],
        [ 2.7248, -0.2516],
        [-0.8740,  0.8217],
        [-0.5867, -0.8351],
        [-0.3588, -0.0523],
        [ 0.2368,  1.6558],
        [ 0.8367,  2.5776],
        [ 1.5905,  0.1696],
        [-0.3271,  0.3540],
        [ 0.5066,  0.2650]], requires_grad=True)

So these are the initial values of self.weight. Now we define a loss, call backward(), and update the parameters:

x = torch.randn(4, 10)  # dummy input batch
y = torch.tensor([[1, 0], [0, 1], [1, 1], [0, 1]], dtype=torch.float)  # dummy targets
h = net(x)

loss = torch.sum(torch.pow(h - y, 2))  # sum of squared errors
loss.backward()   # compute gradients for all nn.Parameters
optimizer.step()  # update the parameters using those gradients
print(next(net.parameters()))
Parameter containing:
tensor([[-0.7969, -0.0164],
        [ 2.6248, -0.1516],
        [-0.7740,  0.7217],
        [-0.6867, -0.7351],
        [-0.2588,  0.0477],
        [ 0.3368,  1.5558],
        [ 0.9367,  2.4776],
        [ 1.4905,  0.2696],
        [-0.4271,  0.2540],
        [ 0.4066,  0.1650]], requires_grad=True)

As you can see, the parameters of this custom layer are updated by optimizer.step().
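If you run this inside a training loop, remember to clear the old gradients with optimizer.zero_grad() before each backward(), otherwise gradients accumulate across iterations. A minimal sketch, continuing from the example above (still using the dummy x and y):

for step in range(100):
    optimizer.zero_grad()                  # clear gradients from the previous step
    h = net(x)                             # forward pass through the custom layer
    loss = torch.sum(torch.pow(h - y, 2))  # sum of squared errors
    loss.backward()                        # autograd fills .grad for every nn.Parameter
    optimizer.step()                       # the optimizer updates the parameters

Regarding your first idea: as far as I know, the optimizers also accept a plain iterable of tensors that require grad, e.g. optim.Adam([net.weight, net.bias], lr=0.1), so that works too; wrapping everything in an nn.Module with nn.Parameter is just the more convenient and idiomatic way.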

Thank you! It worked.