Writing clean code with the dimension orderings of Linear, Conv1d, LSTM, etc.

I’m aware of several threads where dimension ordering has been discussed (cf. https://github.com/pytorch/pytorch/issues/1220), and I’m fairly agnostic about which order is the default (NCHW, CNHW, or whatever else).

However, I’d like to make sure I’m using the library in the spirit of its design and I also need some clarification regarding batches.

So, say I have a stack of layers that I’m piping data through:

input = Variable(torch.randn(120, 1, 6))  # NBC
lstm = nn.LSTM(6, 512, 2)  # default NBC (batch_first=True gives BNC)
intermediate, ctx = lstm(input)
ll = nn.Linear(512, 256)  # default NC (with the .view() trick to flatten and reshape the cube when B > 1)
intermediate = ll(intermediate.squeeze(1)).unsqueeze(1)
conv = nn.Conv1d(256, 32, 5, padding=2)  # default CNL
output = conv(intermediate.permute(0, 2, 1)).permute(0, 2, 1)

The questions are:

  • What’s the proper way to handle batches? RNNs, LSTMs, and GRUs have native support for them, but I think for all other layers one has to manually flatten the cube. Is this correct? (A short sketch of what I mean follows this list.) This seems particularly tricky when I want to send batches to conv layers, since it’s ripe for programmer errors in the padding department.

  • Is there a preferred method for passing outputs to inputs between modules? For instance, I am making use of nn.ModuleList and nn.Sequential(OrderedDict([...])) to clean up my code. Ideally, I’d like to be able to write the following:

    intermediate = input
    for layer in self.module_list:
        intermediate = layer(intermediate)
    output = intermediate
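
On the first question, here is the kind of flatten/reshape (the .view() trick) I mean for applying nn.Linear to a batched 3D tensor. The sizes are arbitrary, just for illustration; I believe recent PyTorch versions also apply nn.Linear directly over the last dimension of an N-D input, but the explicit view makes the intent clear:

    import torch
    import torch.nn as nn

    x = torch.randn(120, 4, 512)         # (seq, batch, features), batch > 1
    ll = nn.Linear(512, 256)

    seq, batch, feat = x.size()
    y = ll(x.view(seq * batch, feat))    # flatten the leading dims so Linear sees a 2D tensor
    y = y.view(seq, batch, -1)           # reshape back to (120, 4, 256)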

As it stands right now, to wire the layers together I’m doing a bunch of:

if layer.__class__ == nn.Conv1d:
    # do permutations

This really isn’t an ideal way to program.
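
Spelled out, the dispatch I end up with looks roughly like this (a sketch only; the Pipeline name is just for illustration, and the layer sizes match the snippet above):

    import torch
    import torch.nn as nn

    class Pipeline(nn.Module):
        def __init__(self):
            super().__init__()
            self.module_list = nn.ModuleList([
                nn.LSTM(6, 512, 2),
                nn.Linear(512, 256),
                nn.Conv1d(256, 32, 5, padding=2),
            ])

        def forward(self, x):                  # x: (seq, batch, features)
            intermediate = x
            for layer in self.module_list:
                if isinstance(layer, nn.Conv1d):
                    intermediate = intermediate.permute(1, 2, 0)   # -> (batch, channels, seq) so the conv runs over the sequence
                    intermediate = layer(intermediate)
                    intermediate = intermediate.permute(2, 0, 1)   # -> back to (seq, batch, channels)
                elif isinstance(layer, nn.LSTM):
                    intermediate, _ = layer(intermediate)          # discard the (h, c) state
                else:
                    intermediate = layer(intermediate)             # e.g. nn.Linear over the last dim (older versions need the .view() trick above)
            return intermediate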

Thanks.

1D convolutions are NCL (batch, channels, length).
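
For example (arbitrary sizes, just to illustrate the layout):

    import torch
    import torch.nn as nn

    x = torch.randn(8, 100, 256)                # (N, L, C): batch, length, channels
    conv = nn.Conv1d(256, 32, 5, padding=2)     # in_channels, out_channels, kernel_size
    y = conv(x.transpose(1, 2))                 # Conv1d expects (N, C, L)
    y = y.transpose(1, 2)                       # back to (N, L, C): (8, 100, 32)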

To my knowledge, there is no way around using permute and transpose operations. If you only swap two dimensions, though, transpose is more readable. I also suggest writing one such operation per line, as in my code:

    def forward(self, x):                       # x: (batch, seq) indices, assuming self.embed is an nn.Embedding
        embedded = self.embed(x)                # (batch, seq, embed_dim)
        embedded = embedded.transpose(0, 1)     # (seq, batch, embed_dim) -- the LSTM's default layout
        outputs, (h, c) = self.lstm(embedded)   # h: (num_layers * num_directions, batch, hidden)
        h = h.transpose(0, 1).contiguous()      # (batch, num_layers * num_directions, hidden)
        h = h.view(h.size(0), -1)               # (batch, num_layers * num_directions * hidden)
        res = self.fc(h)
        return res
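
For context, that forward assumes a module along these lines (the embed/lstm/fc attribute names come from the snippet; the class name and sizes here are just placeholders):

    import torch.nn as nn

    class Classifier(nn.Module):
        def __init__(self, vocab_size=10000, embed_dim=128, hidden=256, num_layers=2, num_classes=5):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden, num_layers)      # default (seq, batch, features) layout
            self.fc = nn.Linear(num_layers * hidden, num_classes)   # matches the h.view(batch, -1) flattening above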

See more: Inconsistent dimension ordering for 1D networks - NCL vs NLC vs LNC