Does a self-defined layer combined with torch operations need its forward and backward code rewritten?

Like this:
```python
import torch

def subtract_layer(x, device):
    # x: (batch_size, channels); assumes batch_size is even
    y = torch.zeros(x.shape[0], x.shape[1])
    y = y.to(device)
    for i in range(x.shape[0]):
        if i % 2 == 0:
            # compare each even-indexed sample with its neighbor
            y[i, :] = x[i, :] - x[i + 1, :]
            y[i + 1, :] = x[i + 1, :] - x[i, :]
    return y
```
In this layer, x's shape is (batch_size, channels), and I need the feature comparison result for each pair of neighboring samples.
Is the code correct, or should I rewrite this layer with explicit forward and backward code?

As a plain function built from differentiable torch operations, you don't need to care about backpropagation; autograd derives the backward pass for you.
Additionally, it is more idiomatic to write a subclass of nn.Module if the layer holds member variables or more complex logic.
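For illustration, here is a minimal vectorized sketch of the same neighbor subtraction (the name `subtract_neighbor_pairs` is hypothetical, and it assumes the batch size is even). Because it uses only differentiable torch operations, autograd handles the backward pass automatically, and no device argument is needed since the output inherits x's device:

```python
import torch

def subtract_neighbor_pairs(x):
    # x: (batch_size, channels); batch_size assumed even
    pairs = x.reshape(-1, 2, x.shape[1])    # (batch_size // 2, 2, channels)
    diff = pairs[:, 0, :] - pairs[:, 1, :]  # difference within each pair
    # row 2k gets x[2k] - x[2k+1]; row 2k+1 gets the negation
    return torch.stack((diff, -diff), dim=1).reshape_as(x)
```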

I rewrote it as follows. Is this the right code format?

```python
import torch
import torch.nn as nn

class subtract_neighbor_sample(nn.Module):
    def __init__(self, device):
        super(subtract_neighbor_sample, self).__init__()
        self.device = device

    def forward(self, x):
        # x: (batch_size, channels); assumes batch_size is even
        output = torch.zeros(x.shape[0], x.shape[1], device=self.device)
        for i in range(x.shape[0]):
            if i % 2 == 0:
                output[i, :] = x[i, :] - x[i + 1, :]
                output[i + 1, :] = x[i + 1, :] - x[i, :]
        return output
```

I think it can achieve your intention.
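As a quick sanity check (a minimal sketch; the tensor sizes and loss are arbitrary), you can confirm that autograd propagates gradients through the layer without any custom backward:

```python
import torch

layer = subtract_neighbor_sample(device=torch.device("cpu"))
x = torch.randn(4, 3, requires_grad=True)

out = layer(x)
loss = (out ** 2).sum()
loss.backward()

print(x.grad)  # populated by autograd, so no hand-written backward is needed
```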
